
feat(litellm): add cost tracking to agent results#1911

Open
stefanoamorelli wants to merge 4 commits into strands-agents:main from stefanoamorelli:feat/litellm-cost-tracking

Conversation

@stefanoamorelli
Contributor

Description

Tip

Follows Conventional Commits. Best reviewed commit-by-commit.

Exposes LiteLLM's cost_per_token() data (in USD) through the Strands streaming pipeline so that accumulated cost is available in EventLoopMetrics when using LiteLLMModel.

The cost flows through: LiteLLMModel.format_chunk() → MetadataEvent → process_stream() → ModelStopReason → event loop → EventLoopMetrics.accumulated_cost.

Cost calculation is wrapped in try/except so models not yet mapped in LiteLLM's pricing database degrade gracefully (cost is simply omitted). The ModelStopReason tuple is extended from 4 to 5 elements; existing consumers use *_ unpacking for forward-compatibility.

Related Issues

Closes #1216

Documentation PR

N/A

Type of Change

  • New feature

Testing

  • I ran hatch run prepare
  • 2383 tests passing, 0 regressions from the cost tracking changes
  • 9 new tests covering cost calculation, accumulation, graceful failure, and cache token forwarding
  • Verified that the changes do not break functionality or introduce warnings in consuming repositories: agents-docs, agents-tools, agents-cli

Checklist

  • I have read the CONTRIBUTING document
  • I have added any necessary tests that prove my fix is effective or my feature works
  • I have updated the documentation accordingly
  • I have added an appropriate example to the documentation to outline the feature, or no new docs are needed
  • My changes generate no new warnings
  • Any dependent changes have been merged and published

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

I'm adding a 5th element to the ModelStopReason tuple to carry
per-invocation cost data (in USD [1]) through the streaming pipeline.
The field defaults to None so all existing model providers continue
to work without changes.

The MetadataEvent TypedDict also gets an optional cost field, which
is where model providers will inject their cost before it reaches
the stop event.

Existing consumers of the stop tuple (anthropic, bedrock, and the
summarizing conversation manager) now use *_ unpacking so they're
forward-compatible with the new element.

[1]: https://docs.litellm.ai/docs/completion/token_usage#critical-cost_per_token
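The two type changes above can be sketched as follows. This is an illustrative simplification, not the actual Strands definitions: the class name, the other tuple elements, and their shapes are assumptions; only the ideas (an optional cost key and *_ unpacking) come from the commit.

```python
from typing import TypedDict

class MetadataEventSketch(TypedDict, total=False):
    # Illustrative only: the real MetadataEvent carries usage/metrics keys
    # too. The new optional field holds per-invocation cost in USD.
    cost: float

# Hypothetical 5-element stop tuple; element contents are made up here.
stop = ("end_turn", {"role": "assistant"}, {"inputTokens": 10}, {"latencyMs": 42}, 0.00035)

# Forward-compatible unpacking, as used by existing consumers: bind the
# leading elements you need and ignore any trailing additions with *_.
stop_reason, message, *_ = stop
assert stop_reason == "end_turn"
```

Because *_ absorbs any number of trailing elements, a consumer written against the 4-tuple keeps working unchanged when the 5th (cost) element is added.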
EventLoopMetrics now has an accumulated_cost field (defaults to 0.0)
and an update_cost() method that the event loop will call after each
model invocation. The cost in USD [1] is included in get_summary()
and displayed in the metrics summary output when it's greater than
zero.

This is the accumulation layer that sits between the streaming
pipeline (which provides per-invocation cost) and the user-facing
AgentResult.

[1]: https://docs.litellm.ai/docs/completion/token_usage#critical-cost_per_token
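A minimal sketch of the accumulation layer described above, assuming a pared-down EventLoopMetrics with only the cost-related members (the real class tracks far more, and its get_summary() output differs):

```python
from dataclasses import dataclass

@dataclass
class EventLoopMetricsSketch:
    # Hypothetical simplification: only the new cost-tracking surface.
    accumulated_cost: float = 0.0

    def update_cost(self, cost: float) -> None:
        # Called by the event loop after each model invocation.
        self.accumulated_cost += cost

    def get_summary(self) -> dict:
        # Include the cost only when it is greater than zero, mirroring
        # the "displayed when it's greater than zero" behavior.
        summary: dict = {}
        if self.accumulated_cost > 0:
            summary["accumulated_cost"] = self.accumulated_cost
        return summary

metrics = EventLoopMetricsSketch()
metrics.update_cost(0.0002)  # first model invocation
metrics.update_cost(0.0003)  # second model invocation
```

After two invocations the summary reports their combined USD cost; a fresh instance reports nothing, so providers without cost data add no noise.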
process_stream() now extracts the cost field from MetadataEvent
(if present) and passes it as the 5th element of the ModelStopReason
tuple. On the event loop side, I unpack the cost and call
EventLoopMetrics.update_cost() when a value is available.

This connects the model layer (which calculates cost) to the metrics
layer (which accumulates it), completing the data flow for any model
provider that populates MetadataEvent.cost.
This is the actual cost calculation that makes use of the pipeline
built in the previous commits. In format_chunk(), after extracting
usage data, I call litellm.cost_per_token() to get prompt and
completion costs and attach the total to MetadataEvent.

The values returned by cost_per_token() are in USD [1][2], which is
what we store in accumulated_cost.

The calculation is wrapped in try/except because litellm's pricing
database doesn't cover every model. When a model isn't mapped, the
cost field is simply omitted and the rest of the pipeline continues
as if cost tracking isn't available. I chose cost_per_token() over
completion_cost() because it doesn't require constructing a fake
ModelResponse object.

Cache tokens (both read and creation) are forwarded to the cost
function so pricing accounts for cached token discounts on providers
like Anthropic.

Closes strands-agents#1216

[1]: strands-agents#1216
[2]: https://docs.litellm.ai/docs/completion/token_usage#critical-cost_per_token
    "Returns: A tuple containing the cost in USD dollars for prompt
     tokens and completion tokens, respectively."
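The try/except pattern described above can be sketched as follows. Here cost_per_token is a local stub standing in for litellm.cost_per_token(), which returns a (prompt_cost_usd, completion_cost_usd) tuple and raises when a model is not in litellm's pricing database; the pricing numbers and usage-key names are made up, and the cache-token forwarding is omitted for brevity:

```python
def cost_per_token(model: str, prompt_tokens: int = 0, completion_tokens: int = 0):
    # Stub with invented per-token USD rates; litellm.cost_per_token()
    # consults its real pricing database instead.
    PRICES = {"example-model": (0.15e-6, 0.60e-6)}
    if model not in PRICES:
        raise Exception(f"model not mapped: {model}")
    prompt_rate, completion_rate = PRICES[model]
    return prompt_tokens * prompt_rate, completion_tokens * completion_rate

def attach_cost(metadata: dict, model: str, usage: dict) -> dict:
    # Mirrors the format_chunk() step: compute cost after extracting usage,
    # and degrade gracefully when the model is unmapped.
    try:
        prompt_cost, completion_cost = cost_per_token(
            model,
            prompt_tokens=usage.get("inputTokens", 0),
            completion_tokens=usage.get("outputTokens", 0),
        )
        metadata["cost"] = prompt_cost + completion_cost
    except Exception:
        # Unmapped model: omit the field; the rest of the pipeline
        # proceeds as if cost tracking isn't available.
        pass
    return metadata

mapped = attach_cost({}, "example-model", {"inputTokens": 1000, "outputTokens": 500})
unmapped = attach_cost({}, "some-unmapped-model", {"inputTokens": 1000})
```

Catching the exception at the call site, rather than letting it propagate, is what makes the feature additive: an unmapped model behaves exactly as before this PR.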
