fix: #3310 reject empty chat tool outputs #3312
Conversation
seratch left a comment
One concern: do we actually need to reject here?
I agree the current `content: []` conversion is lossy and misleading for non-text-only tool outputs, but changing it to raise `UserError` is still a behavior change. Existing Chat Completions users who return `ToolOutputImage` / `ToolOutputFileContent` currently get a successful run, even if the model receives an empty tool message. With this PR, the same workflow starts failing.
Could we consider a less breaking fallback instead? For example, preserve the existing "run continues" behavior but avoid an empty tool message by converting the non-text-only result to a textual placeholder/serialized representation, or gate the hard error behind an explicit strict mode.
If we do keep the hard error, I think the PR should call out that this is an intentional behavior change and explain why fail-fast is preferable to preserving compatibility here.
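To make that fallback concrete, here is a minimal sketch of the placeholder-or-strict idea as a standalone helper (the function name, part shapes, and error type are illustrative assumptions, not the SDK's actual converter API):

```python
from typing import Any


def tool_output_to_chat_content(
    parts: list[dict[str, Any]], *, strict: bool = False
) -> str:
    """Collect text parts; otherwise fail fast (strict) or emit a placeholder.

    A hypothetical helper, not the SDK's converter: instead of sending an empty
    tool message, it serializes a short note about the parts that were dropped.
    """
    texts = [p["text"] for p in parts if p.get("type") == "text"]
    if texts:
        return "\n".join(texts)
    if strict:
        raise ValueError("Tool output has no text parts usable by Chat Completions.")
    kinds = sorted({str(p.get("type", "unknown")) for p in parts})
    return f"[non-text tool output omitted: {', '.join(kinds)}]"


# Example: an image-only tool result no longer becomes an empty message.
print(tool_output_to_chat_content([{"type": "image", "image_url": "https://example.com/a.png"}]))
# -> [non-text tool output omitted: image]
```

The `strict=True` branch corresponds to the hard error proposed in this PR; the placeholder branch is the non-breaking alternative described above.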
Thanks, that compatibility concern makes sense. We probably should not make this the unconditional default.

From a compatibility and consistency perspective, we could consider gating the hard error behind an explicit opt-in (the strict mode you mentioned).

My main concern with a default textual placeholder / serialized fallback is that it may make the model think it received a meaningful representation of image/file/audio output, even though the default Chat Completions path still cannot really consume that non-text content.

If we did keep the default hard error, I agree the PR should call that out as an intentional behavior change. The fail-fast rationale would be that the existing default conversion is not just lossy, but silently misleading: a non-text-only tool output becomes an empty `content: []` tool message, and the run continues as if nothing was dropped.

Which direction would you prefer we take here: strict-mode gating, a default placeholder/serialized fallback, or keeping the default hard error while explicitly documenting it as an intentional behavior change?
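For context, this is roughly what the Chat Completions request carries today when a tool returns only non-text parts (the shape mirrors a chat tool message; the id is illustrative):

```python
# What the model receives today when a tool returns only non-text parts
# (e.g. ToolOutputImage): a tool message with nothing in it. The run succeeds,
# but the model gets no signal that any tool output existed at all.
tool_message = {
    "role": "tool",
    "tool_call_id": "call_abc123",  # illustrative id
    "content": [],                  # the lossy conversion discussed above
}
```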
The branch was force-pushed from c0032ed to 7e89518.
Summary
- Reject non-text-only function tool outputs on the Chat Completions path instead of silently converting them to an empty `content: []`.
- Add a `preserve_tool_output_all_content=True` opt-in for compatible providers that can consume non-text tool result parts.
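A rough sketch of the contrast between the two paths (the message shapes and the exact surface of the flag are illustrative assumptions; see the diff for the real wiring):

```python
# Illustrative Chat Completions-style image part.
image_part = {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}}

# Default path: Chat Completions cannot represent the image part in a tool
# message, so the conversion now rejects the output instead of sending content: [].
#
# Opt-in path (preserve_tool_output_all_content=True), for providers whose chat
# endpoint accepts rich tool result parts: the part is forwarded as-is.
preserved_tool_message = {
    "role": "tool",
    "tool_call_id": "call_abc123",  # illustrative id
    "content": [image_part],
}
```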
Test plan
- `uv run pytest tests/models/test_openai_chatcompletions_converter.py -k "non_text_function_output or function_output_item"`
- `bash .agents/skills/code-change-verification/scripts/run.sh`

Issue number
Closes #3310
Checks
- `make lint` and `make format`