
fix: Update cache token tracking for GenerateText GenerateObject calls#12783

Closed
nelsonauner wants to merge 1 commit into vercel:main from nelsonauner:fix-nelson-generateText-cache-tokens

Conversation


@nelsonauner nelsonauner commented Feb 23, 2026

Background

The streamText and streamObject paths emit the newer telemetry span attributes (ai.usage.inputTokens, ai.usage.outputTokens, ai.usage.totalTokens, ai.usage.reasoningTokens, ai.usage.cachedInputTokens) on both their doGenerate and step/root spans. However, generateText and generateObject only emit the legacy ai.usage.promptTokens and ai.usage.completionTokens attributes, missing the cache and reasoning token breakdown entirely. The TODO comments in the code (// TODO rename telemetry attributes to inputTokens and outputTokens) suggest this was planned but not completed for the non-streaming paths.

Summary

Added the 5 missing ai.usage.* telemetry span attributes to generateText and generateObject, matching what streamText and streamObject already emit:

  • ai.usage.inputTokens
  • ai.usage.outputTokens
  • ai.usage.totalTokens
  • ai.usage.reasoningTokens
  • ai.usage.cachedInputTokens

These are added at both the doGenerate span (per-model-call) and the root span levels. The legacy ai.usage.promptTokens and ai.usage.completionTokens attributes are preserved for backward compatibility.
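The attribute-setting pattern described above can be sketched as follows. This is a minimal illustration, not the SDK's actual code: the `SpanLike` interface and `setUsageAttributes` helper are hypothetical, standing in for the SDK's OpenTelemetry span handling. The key behaviors it shows are emitting both the new and legacy `ai.usage.*` attributes and skipping attributes whose value is `undefined`:

```typescript
// Minimal span shape (mirrors OpenTelemetry's Span.setAttribute signature).
interface SpanLike {
  setAttribute(key: string, value: number): void;
}

// Hypothetical usage shape; the SDK's real usage type may differ.
interface Usage {
  inputTokens?: number;
  outputTokens?: number;
  totalTokens?: number;
  reasoningTokens?: number;
  cachedInputTokens?: number;
}

// Set an attribute only when the provider actually reported a value, so
// spans stay unchanged when cache/reasoning data is absent.
function setUsageAttributes(span: SpanLike, usage: Usage): void {
  const attrs: Record<string, number | undefined> = {
    // New attributes (matching streamText / streamObject)
    "ai.usage.inputTokens": usage.inputTokens,
    "ai.usage.outputTokens": usage.outputTokens,
    "ai.usage.totalTokens": usage.totalTokens,
    "ai.usage.reasoningTokens": usage.reasoningTokens,
    "ai.usage.cachedInputTokens": usage.cachedInputTokens,
    // Legacy attributes, preserved for backward compatibility
    "ai.usage.promptTokens": usage.inputTokens,
    "ai.usage.completionTokens": usage.outputTokens,
  };
  for (const [key, value] of Object.entries(attrs)) {
    if (value !== undefined) {
      span.setAttribute(key, value);
    }
  }
}
```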

Manual Verification

Compared the span attributes set in all four code paths (generateText, streamText, generateObject, streamObject) to confirm they now emit a consistent set of ai.usage.* attributes. The new attributes are undefined (and thus omitted from spans) when the provider doesn't return cache/reasoning data, so this is backward-compatible.

Checklist

  • Tests have been added / updated (for bug fixes / features)
  • Documentation has been added / updated (for bug fixes / features)
  • A patch changeset for relevant packages has been added (for bug fixes / features - run pnpm changeset in the project root)
  • I have reviewed this pull request (self-review)

Future Work

The legacy ai.usage.promptTokens / ai.usage.completionTokens attributes are preserved for backward compatibility and could be removed in a future major version.

Related Issues

tigent (bot) added the labels ai/core (core functions like generateText, streamText, etc. Provider utils, and provider spec.), ai/telemetry, and bug (Something isn't working as documented) on Feb 23, 2026
nelsonauner changed the title from "fix: add cache token tracking" to "fix: Update cache token tracking for GenerateText GenerateObject calls" on Feb 24, 2026
@nelsonauner
Author

@aayush-kapoor I know you're really busy, but I'd love some quick feedback on whether this is worth fixing, whether it's a red herring, or whether it will be obsoleted by your changes in #12784

@aayush-kapoor
Collaborator

@nelsonauner thanks for raising this! don't think it makes sense to add it here rn. i'll make a note of it to ensure all tokens (cache read, write) are accounted for in the new telemetry setup.

and if you still think a gap exists, you can raise a pr against the new setup. marking this as closed for now

