
fix: stabilize function call IDs across streaming events #4732

Open

giulio-leone wants to merge 2 commits into google:main from giulio-leone:fix/streaming-function-call-id-mismatch

Conversation

@giulio-leone

When streaming function calls, the function call ID can change between chunks, causing the tool result to reference a stale ID. This leads to mismatched function call/result pairs.

This PR stabilizes the function call ID by preserving the first-seen ID throughout the streaming session.

Supersedes #4653 (closed due to CLA issue, now resolved).

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request resolves an issue where function call IDs would inconsistently change between streaming chunks, leading to mismatches between function calls and their corresponding results. The solution introduces a caching mechanism to ensure that function call IDs remain stable throughout a streaming session, thereby preventing data integrity problems and improving the reliability of streaming function call processing.

Highlights

  • Function Call ID Stabilization: Introduced a function_call_id_cache parameter to key functions involved in processing LLM responses to ensure stable IDs during streaming.
  • Caching Logic Implementation: Modified the populate_client_function_call_id function to utilize the new cache, ensuring that once a function call receives an ID, that ID is reused for subsequent streaming chunks of the same call.
  • Cache Integration: Implemented the initialization and passing of the function_call_id_cache within the _run_one_step_async and _postprocess_async methods to maintain ID stability across the entire streaming process.
  • New Unit Tests: Added a new comprehensive test file to validate the correct behavior of function call ID stabilization during streaming, covering various scenarios including multiple calls and partial/final events.
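The caching behavior the highlights describe can be sketched as follows. The function name `populate_client_function_call_id` and the `(name, index)` cache key come from the PR itself; the `FunctionCall` stand-in, the `adk-` ID prefix, and the surrounding code are illustrative assumptions, not the actual ADK implementation:

```python
import uuid
from dataclasses import dataclass
from typing import Optional

@dataclass
class FunctionCall:
    """Simplified stand-in for the real function-call type."""
    name: str
    id: Optional[str] = None

def populate_client_function_call_id(calls, function_call_id_cache=None):
    """Assign client-side IDs, reusing cached ones across streaming chunks."""
    for index, call in enumerate(calls):
        if call.id:
            continue  # the model already supplied an ID; keep it
        key = (call.name, index)  # index disambiguates repeated calls to one function
        if function_call_id_cache is not None and key in function_call_id_cache:
            call.id = function_call_id_cache[key]  # reuse the first-seen ID
        else:
            call.id = f"adk-{uuid.uuid4()}"
            if function_call_id_cache is not None:
                function_call_id_cache[key] = call.id

# Partial and final chunks of the same logical call now share one ID:
cache = {}
partial = [FunctionCall("get_weather")]
final = [FunctionCall("get_weather")]
populate_client_function_call_id(partial, cache)
populate_client_function_call_id(final, cache)
assert partial[0].id == final[0].id
```

Keeping the cache parameter optional, as the changelog describes, means non-streaming callers are unaffected: with no cache passed, each call simply gets a fresh ID as before.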


Changelog
  • src/google/adk/flows/llm_flows/base_llm_flow.py
    • Added function_call_id_cache parameter to _finalize_model_response_event and _postprocess_async methods.
    • Initialized function_call_id_cache as an empty dictionary in _run_one_step_async.
    • Passed the function_call_id_cache to calls to _finalize_model_response_event.
  • src/google/adk/flows/llm_flows/functions.py
    • Modified populate_client_function_call_id to accept an optional function_call_id_cache.
    • Implemented logic to store and retrieve function call IDs from the cache using a (name, index) key for stability across streaming events.
  • tests/unittests/flows/llm_flows/test_streaming_function_call_ids.py
    • Added a new test file to verify function call ID stabilization.
    • Included tests for populate_client_function_call_id with caching, covering ID generation, reuse, and handling multiple calls.
    • Added tests for _finalize_model_response_event to confirm ID stability across partial and final streaming events.
Activity

@adk-bot
Collaborator

adk-bot commented Mar 6, 2026

Response from ADK Triaging Agent

Hello @giulio-leone, thank you for your contribution!

To help reviewers understand the scope of your testing, could you please add a Testing Plan section to your PR description? This section should describe how you've tested the changes.

You can find more details in our contribution guidelines. Thanks!

adk-bot added the live label ([Component] This issue is related to live, voice and video chat) on Mar 6, 2026
@gemini-code-assist bot left a comment (Contributor)


Code Review

This pull request addresses an issue with unstable function call IDs during streaming by introducing a cache to preserve the first-seen ID. The implementation for the non-live streaming path is correct and is accompanied by a comprehensive set of new unit tests. However, the fix appears to be missing for the live streaming path (run_live), which could leave it vulnerable to the same bug. I've also suggested a small refactoring to improve the readability of the caching logic.

giulio-leone force-pushed the fix/streaming-function-call-id-mismatch branch from c2ed499 to 344a742 on March 6, 2026 at 16:13
giulio-leone and others added 2 commits on March 6, 2026 at 18:14
When models don't provide function call IDs, ADK generates client-side
IDs via populate_client_function_call_id(). In streaming mode, partial
and final events for the same logical function call each get a fresh
uuid4, causing an ID mismatch that breaks HITL (human-in-the-loop)
workflows and SSE consumers that correlate function calls across chunks.

Root cause: _finalize_model_response_event creates a new Event object
for each llm_response chunk, and populate_client_function_call_id
generates a brand-new ID every time without knowledge of prior IDs.

Fix: Add an optional function_call_id_cache dict that maps
(name, index) keys to previously generated IDs. The streaming loop in
_run_async creates the cache before iteration and threads it through
_postprocess_async → _finalize_model_response_event →
populate_client_function_call_id, ensuring the same logical function
call gets a stable ID across all streaming events.

The cache is keyed by (name, index) to correctly handle multiple calls
to the same function within a single response.

Fixes google#4609
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
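As a runnable toy model of the flow the commit message describes — one cache created per streaming session and threaded into each chunk's post-processing — the sketch below uses simplified stand-ins for the ADK internals. Method names mirror the commit message; the chunk format, `_assign_id` helper, and ID prefix are invented for illustration:

```python
import asyncio
import uuid

def _assign_id(name, index, function_call_id_cache):
    """Return the cached ID for (name, index), minting one on first sight."""
    key = (name, index)
    if key not in function_call_id_cache:
        function_call_id_cache[key] = f"adk-{uuid.uuid4()}"
    return function_call_id_cache[key]

async def _call_llm_async():
    # Two chunks for the same logical call: a partial and a final event.
    yield {"partial": True, "calls": ["get_weather"]}
    yield {"partial": False, "calls": ["get_weather"]}

async def _run_one_step_async():
    function_call_id_cache = {}  # fresh cache per streaming session
    async for llm_response in _call_llm_async():
        # Post-processing reuses the cache, so every chunk of the same
        # logical call resolves to the same ID.
        yield [
            _assign_id(name, i, function_call_id_cache)
            for i, name in enumerate(llm_response["calls"])
        ]

async def main():
    ids = [chunk_ids async for chunk_ids in _run_one_step_async()]
    assert ids[0] == ids[1]  # stable across partial and final events
    return ids

ids = asyncio.run(main())
```

Without the shared cache, the second chunk would mint a second uuid4 and the tool result correlated by SSE consumers would reference a stale ID — the mismatch this PR fixes.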
giulio-leone force-pushed the fix/streaming-function-call-id-mismatch branch from 344a742 to c71d31b on March 6, 2026 at 17:14
