Hi, I have been looking through the various examples in the LLM prompt optimisation. I am trying to figure out how to optimise my own prompt by testing with the tutorial code. I notice that in
openevolve/examples/llm_prompt_optimization/config_qwen3_evolution.yaml
Line 66 in ba1ca44
`llm_feedback` is set to `true`, which supposedly improves performance. However, unlike the circle packing with artifacts example, it seems the `evaluator.py` in the LLM prompt optimisation example does not have the artifact code option available:
openevolve/examples/circle_packing_with_artifacts/evaluator.py
Line 192 in ba1ca44
So if it is not implemented, this option can't be used? If I understand correctly, the feedback provides incorrect examples to the LLM optimiser based on the current prompt.
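For context, here is a minimal sketch of the pattern I mean: an `evaluate()` that returns artifacts (e.g. failing examples) alongside its metrics, so the optimiser can see *why* a prompt scored low. The return shape is modelled with a plain dict, and the field names and failure payload are my assumptions for illustration, not the actual OpenEvolve API:

```python
# Sketch (assumed structure, not OpenEvolve's real API): an evaluator
# that reports metrics plus an "artifacts" side-channel of failure cases
# that the LLM optimiser could feed back into the next prompt mutation.

def evaluate(program_path):
    # Hypothetical test results: pretend two of four cases failed.
    results = [
        {"input": "2+2", "expected": "4", "got": "4"},
        {"input": "3*3", "expected": "9", "got": "6"},   # wrong
        {"input": "10-7", "expected": "3", "got": "3"},
        {"input": "5/2", "expected": "2.5", "got": "2"},  # wrong
    ]
    failures = [r for r in results if r["got"] != r["expected"]]
    accuracy = 1.0 - len(failures) / len(results)

    return {
        "metrics": {"accuracy": accuracy},
        # Artifacts: concrete incorrect examples for the optimiser to see.
        "artifacts": {
            "failed_examples": "\n".join(
                f"input={f['input']} expected={f['expected']} got={f['got']}"
                for f in failures
            )
        },
    }
```

If something like this is the intended mechanism, I would expect the tutorial evaluator to need a similar return value before `llm_feedback` has any failing examples to surface.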