fix: #3310 reject empty chat tool outputs#3312

Draft
Aphroq wants to merge 1 commit into openai:main from Aphroq:fix/chatcmpl-nontext-tool-output-empty
Conversation

Contributor

@Aphroq Aphroq commented May 9, 2026

Summary

  • Make the default Chat Completions converter raise when a tool output contains only non-text content and would otherwise become content: [].
  • Keep string, text-only, and mixed text/non-text outputs compatible with the existing default filtering behavior.
  • Preserve the existing preserve_tool_output_all_content=True opt-in for compatible providers that can consume non-text tool result parts.
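
The rule described above can be sketched roughly as follows. This is a hypothetical illustration, not the SDK's actual converter code: the names `UserError`, `convert_tool_output`, and the dict-shaped parts are assumptions for the sketch.

```python
class UserError(Exception):
    """Stand-in for the SDK's user-facing error type (assumed name)."""


def convert_tool_output(parts, preserve_tool_output_all_content=False):
    """Convert tool-result parts to a Chat Completions `content` value."""
    if isinstance(parts, str):
        return parts  # plain string outputs pass through unchanged
    if preserve_tool_output_all_content:
        return list(parts)  # opt-in: forward all parts to compatible providers
    # Default path: keep only text parts, as the existing filtering does.
    text_parts = [p for p in parts if p.get("type") == "text"]
    if not text_parts:
        # Previously this silently became `content: []`; the PR makes it fail fast.
        raise UserError(
            "Tool output contains only non-text content, which the default "
            "Chat Completions converter cannot represent."
        )
    return text_parts
```

With this shape, string and mixed outputs behave as before, and only the non-text-only case changes from a silent empty message to an error.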

Test plan

  • uv run pytest tests/models/test_openai_chatcompletions_converter.py -k "non_text_function_output or function_output_item"
  • bash .agents/skills/code-change-verification/scripts/run.sh

Issue number

Closes #3310

Checks

  • I've added new tests (if relevant)
  • I've added/updated the relevant documentation
  • I've run make lint and make format
  • I've made sure tests pass

@github-actions github-actions Bot added bug Something isn't working feature:chat-completions labels May 9, 2026
@Aphroq Aphroq changed the title fix(models): reject empty chat tool outputs fix: #3310 reject empty chat tool outputs May 9, 2026
Member

@seratch seratch left a comment

One concern: do we actually need to reject here?

I agree the current content: [] conversion is lossy and misleading for non-text-only tool outputs, but changing it to raise UserError is still a behavior change. Existing Chat Completions users who return ToolOutputImage / ToolOutputFileContent currently get a successful run, even if the model receives an empty tool message. With this PR, the same workflow starts failing.

Could we consider a less breaking fallback instead? For example, preserve the existing "run continues" behavior but avoid an empty tool message by converting the non-text-only result to a textual placeholder/serialized representation, or gate the hard error behind an explicit strict mode.

If we do keep the hard error, I think the PR should call out that this is an intentional behavior change and explain why fail-fast is preferable to preserving compatibility here.
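
The placeholder fallback suggested above could look something like this sketch (a hypothetical helper, not existing SDK code; the function name and part shapes are assumptions):

```python
def placeholder_for_non_text(parts):
    """Summarize non-text tool-result parts as a short text marker so the
    run continues without sending an empty tool message to the model."""
    kinds = sorted({p.get("type", "unknown") for p in parts})
    return f"[non-text tool output omitted: {', '.join(kinds)}]"
```

The model then at least sees that a tool result existed and what kind it was, instead of an empty `content` array.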

@seratch seratch marked this pull request as draft May 9, 2026 23:57
Contributor Author

Aphroq commented May 10, 2026

Thanks, that compatibility concern makes sense. We probably should not make this a default UserError, especially because existing users can currently continue the run even if the model receives content: [].

From a compatibility and consistency perspective, we could consider gating the hard error behind strict_feature_validation=True: keep the default path compatible, and raise explicitly in strict mode instead of silently producing an empty tool message. This also matches the recent custom tool call handling.

My main concern with a default textual placeholder / serialized fallback is that it may make the model think it received a meaningful representation of image/file/audio output, even though the default Chat Completions path still cannot really consume that non-text content.

If we did keep the default hard error, I agree the PR should call that out as an intentional behavior change. The fail-fast rationale would be that the existing default conversion is not just lossy, but silently misleading: a non-text-only tool output becomes content: [], so the run appears to succeed even though the model receives no usable representation of the tool result. That can hide integration bugs and lead to follow-up model behavior based on an empty tool message.

Which direction would you prefer we take here: strict-mode gating, a default placeholder/serialized fallback, or keeping the default hard error while explicitly documenting it as an intentional behavior change?
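
For concreteness, the strict-mode gating option could be sketched like this (hypothetical; `strict_feature_validation` is the assumed flag name, and the error type here is a stand-in):

```python
def convert_tool_output_gated(parts, strict_feature_validation=False):
    """Default-compatible conversion: keep the lossy `content: []` behavior
    unless strict mode is enabled, in which case fail fast."""
    text_parts = [p for p in parts if p.get("type") == "text"]
    if not text_parts:
        if strict_feature_validation:
            raise ValueError(
                "non-text-only tool output cannot be represented by the "
                "default Chat Completions converter (strict mode)"
            )
        return []  # unchanged default: run continues with an empty tool message
    return text_parts
```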

@Aphroq Aphroq force-pushed the fix/chatcmpl-nontext-tool-output-empty branch from c0032ed to 7e89518 Compare May 11, 2026 05:00
Labels

bug Something isn't working feature:chat-completions

Development

Successfully merging this pull request may close these issues.

Chat Completions converter can send empty tool output for non-text results

2 participants