First of all, thanks to the author for sharing this self-evolving agent project.
I used the API provided by DeepSeek's official website. When I configured `model: deepseek-reasoner` in the YAML file and ran the example, the following error was reported:
```
LoongFlow/src/evolux/react/components/default_reasoner.py", line 47, in reason
    raise Exception(
Exception: Error code: litellm_error, error: litellm.BadRequestError: DeepseekException - {"error":{"message":"Missing reasoning_content field in the assistant message at message index 2. For more information, please refer to https://api-docs.deepseek.com/guides/thinking_mode#tool-calls","type":"invalid_request_error","param":null,"code":"invalid_request_error"}}
```
This problem remains unsolved.
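For reference, here is a minimal sketch of one possible workaround I considered. The linked DeepSeek docs indicate that, in thinking mode with tool calls, assistant messages sent back in the history must keep their `reasoning_content` field. The helper below is purely illustrative and is not part of LoongFlow's API; whether an empty string is accepted as a placeholder (rather than the actual reasoning text returned by the API) is an assumption.

```python
# Hypothetical sketch of a fix for the "Missing reasoning_content" 400 error.
# Ideally, the reasoning_content returned with each assistant turn should be
# stored and echoed back; this helper only guarantees the field is present.

def preserve_reasoning_content(messages):
    """Return a copy of the history where every assistant message carries a
    reasoning_content key, as deepseek-reasoner tool-call turns require."""
    fixed = []
    for msg in messages:
        msg = dict(msg)  # shallow copy so the caller's history is untouched
        if msg.get("role") == "assistant" and "reasoning_content" not in msg:
            # Assumption: an empty placeholder satisfies the field check.
            msg["reasoning_content"] = ""
        fixed.append(msg)
    return fixed

history = [
    {"role": "user", "content": "What's the weather?"},
    {"role": "assistant", "content": "", "tool_calls": [{"id": "call_1"}]},
    {"role": "tool", "tool_call_id": "call_1", "content": "sunny"},
]
fixed_history = preserve_reasoning_content(history)
print("reasoning_content" in fixed_history[1])  # → True
```

If the framework already stores the model's reasoning text per turn, passing that through instead of an empty string would be the safer choice.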
Secondly, when I configured `model: deepseek-chat`, the error `{"message":"Invalid max_tokens value, the valid range of max_tokens is [1, 8192]"}` was reported. I solved this myself by adding `max_tokens` to the request:
```python
llm_request = CompletionRequest(messages=[user_message])
resp_generator = self.model.generate(llm_request)
```
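Concretely, the fix amounts to clamping the token budget into DeepSeek-chat's valid range before building the request. The sketch below uses a stand-in `CompletionRequest` dataclass for illustration; whether LoongFlow's real `CompletionRequest` accepts a `max_tokens` keyword directly is an assumption on my part.

```python
# Sketch of the max_tokens workaround. CompletionRequest here is a stub;
# the real class lives in the evolux package.
from dataclasses import dataclass
from typing import Optional

DEEPSEEK_CHAT_MAX_TOKENS = 8192  # valid range per the error message: [1, 8192]

@dataclass
class CompletionRequest:
    messages: list
    max_tokens: Optional[int] = None

def clamp_max_tokens(requested: Optional[int]) -> int:
    """Clamp a requested token budget into DeepSeek-chat's valid range."""
    if requested is None:
        return DEEPSEEK_CHAT_MAX_TOKENS
    return max(1, min(requested, DEEPSEEK_CHAT_MAX_TOKENS))

user_message = {"role": "user", "content": "hello"}
llm_request = CompletionRequest(
    messages=[user_message],
    max_tokens=clamp_max_tokens(100_000),  # would otherwise trigger the 400 error
)
print(llm_request.max_tokens)  # → 8192
```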
So I'd like to ask: is there a problem with my configuration?