Enhance Prompt logic for Model update to 5-series #1309
iceljc merged 1 commit into SciSharp:master from
Conversation
Added a small piece to enhance the prompt logic for stable tool calling results when shifting to the 5-series model; also verified to have no influence on previous 4-series model usage.
Review Summary by Qodo
Add JSON format instruction to agent prompt template
Walkthrough
Description
• Add explicit JSON format instruction to prompt template
• Clarify function calling output requirement for model
• Improve prompt consistency for 5-series model compatibility
Diagram
flowchart LR
  A["Agent Instruction Template"] -- "Add JSON format directive" --> B["Enhanced Prompt Logic"]
  B -- "Improves 5-series model compatibility" --> C["Stable Function Calling Results"]
File Changes
1. src/Infrastructure/BotSharp.Core/data/agents/01fcc3e5-9af7-49e6-ad7a-a760bd12dc4a/instructions/instruction.liquid
Code Review by Qodo
1. Toolcall suppressed by JSON
[CONVERSATION]
{{ conversation }}

+ Determine which function to call to proceed. Your output must be in JSON format.
1. Toolcall suppressed by JSON 🐞 Bug ✓ Correctness
The routing agent instruction now mandates JSON output, which can cause tool-capable providers to return a normal assistant message instead of a tool/function call. When the routing agent is invoked via InvokeAgent, non-function responses are overwritten with a generic apology, breaking routing.
Agent Prompt
### Issue description
The router instruction now forces “output must be in JSON”, which can suppress native tool calls on providers like OpenAI. When a routing agent is invoked via `RoutingService.InvokeAgent`, any non-tool-call output is replaced with a generic apology, so routing fails.
### Issue Context
OpenAI provider supplies agent functions as `options.Tools`, and only returns `AgentRole.Function` when the model emits a tool call. `InvokeAgent` requires `AgentRole.Function` for routing agents; otherwise it overwrites with an apology.
### Fix Focus Areas
- src/Infrastructure/BotSharp.Core/data/agents/01fcc3e5-9af7-49e6-ad7a-a760bd12dc4a/instructions/instruction.liquid[39-42]
- src/Infrastructure/BotSharp.Core/Routing/RoutingService.InvokeAgent.cs[50-81]
- src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.cs[369-406]
### Suggested changes
1. Update the instruction to prefer tool calling explicitly (e.g., “Call one of the available functions; do not respond with a JSON message body”), OR scope the JSON requirement to reasoner templates instead of the global router instruction.
2. Optionally add a fallback in `InvokeAgent` for routing agents: if `response.Role != Function` but `response.Content` contains a valid `FunctionCallFromLlm` JSON, parse it and execute the corresponding function rather than overwriting with apology.
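The suggested fallback in point 2 can be sketched as follows. This is an illustrative sketch in Python, not the BotSharp C# implementation; the field names (`function`, `arguments`) loosely mirror what a `FunctionCallFromLlm`-style payload might contain and are assumptions for the example:

```python
import json

def parse_function_call_fallback(role: str, content: str):
    """If the model replied with a plain assistant message whose body is a
    valid function-call JSON, recover the call instead of discarding it.
    Returns None when no fallback applies (native tool call, non-JSON body,
    or JSON missing the required fields)."""
    if role == "function":
        return None  # native tool call already handled; no fallback needed
    try:
        data = json.loads(content)
    except (json.JSONDecodeError, TypeError):
        return None  # not JSON: fall through to the existing apology path
    # Require the fields a routing instruction needs before trusting it
    if isinstance(data, dict) and data.get("function") and "arguments" in data:
        return {"function": data["function"], "arguments": data["arguments"]}
    return None

# A JSON-bodied assistant message is recovered as a function call:
call = parse_function_call_fallback(
    "assistant",
    '{"function": "route_to_agent", "arguments": {"reason": "user asked for pizza"}}',
)
```

The key design point is that the fallback only fires when the body parses as JSON and carries the required routing fields; anything else still reaches the existing overwrite path, so behavior for genuinely malformed responses is unchanged.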
2. Tool args ignored by reasoner
2. Tool args ignored by reasoner 🐞 Bug ⛯ Reliability
Even with the new JSON requirement, tool-capable providers may still return tool calls with arguments in `FunctionArgs` and empty `Content`. `OneStepForwardReasoner` parses only `response.Content`, so it can deserialize `{}` and produce a routing instruction missing required routing data, breaking routing execution.
Agent Prompt
### Issue description
`OneStepForwardReasoner` (and similarly `HFReasoner`) parses only `response.Content` as JSON. With tool-calling models/providers, routing instructions may be returned as a tool call where the JSON is in `response.FunctionArgs` and `response.Content` is empty, causing deserialization of `{}` and malformed routing instructions.
### Issue Context
OpenAI provider returns `AgentRole.Function` with `FunctionArgs` populated on tool calls. `NaiveReasoner` already handles this by parsing `(response.FunctionArgs ?? response.Content)`.
### Fix Focus Areas
- src/Infrastructure/BotSharp.Core/Routing/Reasoning/OneStepForwardReasoner.cs[64-71]
- src/Infrastructure/BotSharp.Core/Routing/Reasoning/HFReasoner.cs[53-60]
- src/Infrastructure/BotSharp.Core/Routing/Reasoning/NaiveReasoner.cs[57-62]
### Suggested changes
1. Change OneStepForwardReasoner to parse `(response.FunctionArgs ?? response.Content).JsonContent<FunctionCallFromLlm>()`.
2. Apply the same change to HFReasoner.
3. After code is robust, consider removing/scoping the new “output must be in JSON” directive from the global router instruction to avoid suppressing tool calls.
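The coalescing pattern from `NaiveReasoner` that points 1 and 2 propose reusing can be sketched as follows (in Python for illustration; the real code is C# and deserializes into `FunctionCallFromLlm`):

```python
import json

def parse_routing_instruction(function_args, content):
    """Prefer the tool-call arguments over the message body, mirroring the
    C# `(response.FunctionArgs ?? response.Content)` null-coalescing pattern.
    Falls back to an empty object when both are absent."""
    # Like C# `??`, only a null/None FunctionArgs falls through to Content;
    # an empty-string Content with populated FunctionArgs still uses FunctionArgs.
    raw = function_args if function_args is not None else content
    return json.loads(raw or "{}")

# Tool-call path: the JSON lives in FunctionArgs, Content is empty
inst_tool = parse_routing_instruction('{"function": "route_to_agent"}', "")
# Plain-message path: the JSON lives in Content
inst_text = parse_routing_instruction(None, '{"function": "route_to_agent"}')
```

With this shape, both reasoners would read the routing instruction from whichever field the provider populated, instead of silently deserializing `{}` when the model chose a native tool call.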