LongRunningFunctionTool: no way to skip LLM summarization when client sends back function_response #4700

@Axibord

Description

Is your feature request related to a specific problem?

Yes. When using LongRunningFunctionTool for Human-in-the-Loop (HITL) flows, the client eventually sends a function_response back to the agent (e.g., "user approved" / "user denied"). ADK forwards this function_response to the LLM, which then generates an unnecessary text summary (in my case "Understood." or "Done.", but it could be anything).

There is no clean way to prevent this text generation:

  • tool_context.actions.skip_summarization only fires during the initial tool execution (after_tool_callback), not when the client's function_response arrives in a subsequent invocation.
  • before_model_callback can intercept the LLM call and return an empty LlmResponse, but LlmResponse has no actions field, so skip_summarization cannot be set there. Worse, the empty LlmResponse(content=Content(role="model", parts=[Part(text="")])) is persisted to session history, which prompts the LLM to generate acknowledgment text ("Understood.", "Done.") on the next user message.

This is especially problematic on Agent Engine (VertexAiSessionService), where the built-in HITL tools (adk_request_confirmation, adk_request_input) are explicitly not supported, making LongRunningFunctionTool the only available mechanism for HITL flows.

Describe the Solution You'd Like

I think any of the following would solve the problem:

1. A skip_summarization_on_response flag on LongRunningFunctionTool

send_email_tool = LongRunningFunctionTool(
    func=send_campaign_email,
    skip_summarization_on_response=True,  # Skip LLM call when client responds
)

When the client sends back a function_response for this tool, ADK skips the LLM summarization call entirely and does not persist an empty model turn to session history.
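The decision logic this flag implies can be sketched in plain Python (all names here are hypothetical stand-ins, not ADK APIs; ADK's actual internal flow is not shown):

```python
from dataclasses import dataclass


@dataclass
class HypotheticalLongRunningTool:
    """Stand-in for LongRunningFunctionTool with the proposed flag."""
    name: str
    skip_summarization_on_response: bool = False


def should_call_llm(tool_registry, response_tool_name):
    """Skip the summarization LLM call when the responding tool opted out."""
    tool = tool_registry.get(response_tool_name)
    return not (tool and tool.skip_summarization_on_response)


tools = {
    "send_campaign_email": HypotheticalLongRunningTool(
        name="send_campaign_email", skip_summarization_on_response=True
    )
}
```

With this check in place, a function_response for send_campaign_email would bypass the LLM entirely, and no empty model turn would need to be written to the session.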

2. Allow before_model_callback to return Event (not just LlmResponse)

This would let the callback set actions.skip_summarization = True, giving the developer full control:

def my_callback(callback_context, llm_request):
    # ... detect HITL function_response ...
    return Event(
        actions=EventActions(skip_summarization=True),
        content=Content(role="model", parts=[]),
    )

3. An on_function_response_callback that fires when a client sends back a function_response for a LongRunningFunctionTool, giving access to ToolContext with actions.skip_summarization.
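For illustration, a hedged sketch of what such a callback might look like from the developer's side (the hook, its name, and the parameter shapes are all hypothetical; ToolContext is mocked with SimpleNamespace since this hook does not exist in ADK today):

```python
from types import SimpleNamespace


def on_function_response(tool, function_response, tool_context):
    # Hypothetical hook: suppress summarization for HITL approval/denial responses.
    if function_response.get("status") in ("approved", "denied"):
        tool_context.actions.skip_summarization = True


# Mocked ToolContext carrying an actions object, as a real one would.
ctx = SimpleNamespace(actions=SimpleNamespace(skip_summarization=False))
on_function_response(None, {"status": "approved"}, ctx)
```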

I think option 1 is the simplest and most targeted.

Impact on your work

This affects anyone deploying custom HITL flows on Agent Engine. Since VertexAiSessionService doesn't support the built-in confirmation tools, LongRunningFunctionTool is the only option, but it triggers unwanted LLM text after every approval or denial. The current approach (returning an empty LlmResponse via before_model_callback) works, but it is a workaround rather than a proper solution.

Willingness to contribute

Yes, I would be very happy to submit a PR if the team agrees on an approach.


Describe Alternatives You've Considered

Current workaround: A before_model_callback that returns an empty LlmResponse when it detects a function_response from a HITL tool, skipping the LLM call entirely:

# Assumed import paths; adjust to your ADK version.
from google.adk.models import LlmResponse
from google.genai import types

_HITL_TOOL_NAMES = {"send_campaign_email"}

def _skip_hitl_summarization(callback_context, llm_request):
    contents = llm_request.contents or []
    if not contents:
        return None

    for part in contents[-1].parts or []:
        if part.function_response and part.function_response.name in _HITL_TOOL_NAMES:
            return LlmResponse(
                content=types.Content(role="model", parts=[types.Part(text="")])
            )

    return None

This prevents the LLM from generating text on the current turn, but the empty model response still gets persisted to session history. A first-class skip_summarization mechanism would avoid this entirely.
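The detection step of the workaround can also be factored into a plain predicate and unit-tested without ADK; a sketch with the request contents mocked via SimpleNamespace:

```python
from types import SimpleNamespace

_HITL_TOOL_NAMES = {"send_campaign_email"}


def is_hitl_function_response(contents):
    """True when the last content entry carries a function_response from a HITL tool."""
    if not contents:
        return False
    for part in contents[-1].parts or []:
        fr = getattr(part, "function_response", None)
        if fr is not None and fr.name in _HITL_TOOL_NAMES:
            return True
    return False


# Mocked llm_request.contents: one turn holding a function_response part.
hitl_part = SimpleNamespace(
    function_response=SimpleNamespace(name="send_campaign_email")
)
turn = SimpleNamespace(parts=[hitl_part])
```

Keeping the predicate separate from the callback makes it easy to verify the HITL detection against recorded request contents before wiring it into before_model_callback.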

Additional Context

  • ADK version: latest (pip install google-adk)
  • Deployment: Vertex AI Agent Engine (VertexAiSessionService)
  • The built-in HITL tools (adk_request_confirmation, adk_request_input) explicitly list VertexAiSessionService as unsupported in the ADK docs, making LongRunningFunctionTool the only viable HITL mechanism on Agent Engine.
  • Related issues: #760, #173, #302, #3986
