
fix: Wrap non-JSON tool responses for Gemini compatibility#1392

Closed
azanux wants to merge 2 commits into embabel:main from azanux:fix/matryoshka-tool-json-response-gemini

Conversation


@azanux azanux commented Feb 8, 2026

#1391

Summary

Google GenAI (Gemini) requires tool responses (FunctionResponse.response) to be valid JSON objects. When a tool returns a plain text string, Spring AI's GoogleGenAiChatModel.parseJsonToMap() either:

  1. Crashes with a JsonParseException - for MatryoshkaTool responses that return plain text like "Enabled 4 tools: ..."
  2. Silently drops the content - for RAG/search tools returning plain text results, causing the LLM to hallucinate

The same tools work correctly with OpenAI, which accepts plain text in tool response content fields.

Root cause

All tool results flow through messageConverters.kt before being sent to the LLM. The textContent was passed as-is to ToolResponseMessage.ToolResponse.responseData without ensuring it's valid JSON.

Most tools worked "by luck" because MethodTool.convertResult() serializes non-String return types to JSON via Jackson; only tools returning a raw String (or implementing Tool directly, like MatryoshkaTool) were affected.
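The pass-through behavior described above can be illustrated with a simplified, hypothetical version of MethodTool.convertResult() (the real Spring AI code differs; this only shows why String returns slip through unserialized):

```kotlin
import com.fasterxml.jackson.databind.ObjectMapper

// Hypothetical simplification of MethodTool.convertResult(): raw Strings
// are returned verbatim (and so may reach the provider as non-JSON text),
// while every other return type is serialized to JSON via Jackson.
fun convertResult(result: Any?): String =
    when (result) {
        is String -> result // passed through as-is: may not be valid JSON
        else -> ObjectMapper().writeValueAsString(result)
    }
```

So a tool returning `mapOf("count" to 4)` reaches Gemini as valid JSON, while one returning "Enabled 4 tools: ..." does not.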

Fix

  • messageConverters.kt: Added an ensureJson() extension function that wraps non-JSON tool response text in {"result": "..."} before it is sent to the LLM. JSON objects and arrays are preserved as-is, so this acts as a safety net at the single conversion point all tools pass through.
  • MatryoshkaTool.kt: Changed SimpleMatryoshkaTool.call() and SelectableMatryoshkaTool.call() to return structured JSON ({"enabled_tools_count": N, "enabled_tools": [...]}) instead of plain text, via a shared enabledToolsJson() helper.
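A minimal sketch of what these two helpers might look like (the actual implementations in messageConverters.kt and MatryoshkaTool.kt may differ in detail):

```kotlin
import com.fasterxml.jackson.databind.ObjectMapper

private val mapper = ObjectMapper()

// Wrap non-JSON tool response text in a {"result": "..."} object;
// text that already parses as a JSON object or array passes through
// untouched (after trimming surrounding whitespace).
fun String.ensureJson(): String {
    val trimmed = trim()
    if (trimmed.startsWith("{") || trimmed.startsWith("[")) {
        runCatching { mapper.readTree(trimmed) }.onSuccess { return trimmed }
    }
    return mapper.writeValueAsString(mapOf("result" to this))
}

// Structured JSON response for the Matryoshka tools instead of plain text,
// in the {"enabled_tools_count": N, "enabled_tools": [...]} shape above.
fun enabledToolsJson(toolNames: List<String>): String =
    mapper.writeValueAsString(
        mapOf(
            "enabled_tools_count" to toolNames.size,
            "enabled_tools" to toolNames,
        )
    )
```

With this in place, "Enabled 4 tools: ..." becomes {"result": "Enabled 4 tools: ..."}, which Gemini's FunctionResponse parsing accepts.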

Test plan

  • MessageConversionTest: new tests covering plain text wrapping, JSON object/array preservation, and whitespace handling
  • MatryoshkaToolTest: All assertions updated to verify JSON format responses
  • Manual verification with https://github.com/embabel/ragbot + Google GenAI to confirm no more JsonParseException and no hallucination

@alexheifetz
Contributor

@azanux — thanks for identifying this and the original fix! I took a slightly different approach: introduced a pluggable ToolResponseContentAdapter strategy and wired it via Gemini auto-configuration, so the JSON wrapping is localized to Gemini only. Other providers keep getting plain text unchanged.
This also covers all tools that return plain strings (RAG, context tools, etc.), not just Matryoshka/UnfoldingTool.
Take a look at #1468 — would love your feedback since you dug into the root cause.
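A rough sketch of such a pluggable strategy, based only on the names mentioned in this thread (the interface shape and the Gemini adapter body are assumptions, not the code in #1468):

```kotlin
import com.fasterxml.jackson.databind.ObjectMapper

// Hypothetical per-provider strategy for adapting tool response content.
fun interface ToolResponseContentAdapter {
    fun adapt(content: String): String

    companion object {
        // Default: leave tool output untouched (OpenAI and friends).
        val PASSTHROUGH = ToolResponseContentAdapter { it }
    }
}

// Gemini-specific adapter, wired in via Gemini auto-configuration:
// wraps plain text so FunctionResponse.response is always valid JSON.
val geminiJsonWrapping = ToolResponseContentAdapter { content ->
    val trimmed = content.trim()
    if (trimmed.startsWith("{") || trimmed.startsWith("[")) trimmed
    else ObjectMapper().writeValueAsString(mapOf("result" to content))
}
```

Localizing the wrapping to one adapter keeps every other provider on PASSTHROUGH, which matches the behavior described in the comment above.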


azanux commented Mar 4, 2026

@alexheifetz

Thanks for the fix! I'll test all of this by the weekend and confirm if all goes well.


azanux commented Mar 8, 2026

@alexheifetz

The ToolResponseContentAdapter strategy is really elegant - localizing the JSON wrapping to Gemini is much cleaner.

One thing though: in ChatClientLlmOperations.createMessageSender() line 183, when llmRequestEvent != null (observability enabled), the SpringAiLlmMessageSender is created without the adapter, so it falls back to PASSTHROUGH instead of the Gemini JSON wrapper, reintroducing the silent failure and hallucination:

// current
return SpringAiLlmMessageSender(instrumentedModel, chatOptions)
// should be
return SpringAiLlmMessageSender(instrumentedModel, chatOptions, springAiLlm.toolResponseContentAdapter)

Thanks again for the fix!

@azanux azanux closed this Mar 10, 2026
@azanux azanux deleted the fix/matryoshka-tool-json-response-gemini branch April 16, 2026 17:54