feat: simplify LLMClient with auto-injected model for generateText/generateObject/streamText/streamObject #2125
ruguoba wants to merge 1 commit into
Conversation
Make it easier to use the LLMClient's AI SDK wrappers by automatically
injecting the client's language model when calling generateText(),
generateObject(), streamText(), and streamObject().
Previously, users had to manually pass the model parameter:
await stagehand.llmClient.generateText({ model: someModel, prompt: '...' })
Now the model is resolved from getLanguageModel() automatically:
await stagehand.llmClient.generateText({ prompt: '...' })
This makes the API feel like the Vercel AI SDK, while still allowing
model overrides when needed.
Closes browserbase#666
This PR is from an external contributor and must be approved by a stagehand team member with write access before CI can run.
No issues found across 1 file
Confidence score: 5/5
- Automated review surfaced no issues in the provided summaries.
- No files require special attention.
Architecture diagram
```mermaid
sequenceDiagram
    participant Caller as Stagehand User Code
    participant LLMClient as LLMClient
    participant Resolve as resolveModel()
    participant VercelSDK as Vercel AI SDK

    Note over Caller,VercelSDK: NEW: Auto-injected model for generateText/generateObject/streamText/streamObject
    Caller->>LLMClient: generateText({ prompt: "..." })
    LLMClient->>Resolve: resolveModel(this, undefined)
    alt Explicit model provided
        Resolve->>Resolve: Use passed model
    else getLanguageModel() available
        Resolve->>LLMClient: getLanguageModel()
        LLMClient-->>Resolve: LanguageModelV2
        Resolve->>Resolve: Use returned model
    else Neither available
        Resolve->>Resolve: Throw Error with migration guidance
        Resolve-->>Caller: Error: "No language model available..."
    end
    Resolve-->>LLMClient: LanguageModelV2
    LLMClient->>VercelSDK: generateText({ model, prompt })
    VercelSDK-->>LLMClient: { text, ... }
    LLMClient-->>Caller: { text, ... }

    Note over Caller,VercelSDK: Same pattern for generateObject, streamText, streamObject
    Caller->>LLMClient: generateObject({ schema, prompt })
    LLMClient->>Resolve: resolveModel(this, params.model)
    Resolve-->>LLMClient: LanguageModelV2
    LLMClient->>VercelSDK: generateObject({ model, schema, prompt })
    VercelSDK-->>LLMClient: { object, ... }
    LLMClient-->>Caller: { object, ... }

    Note over Caller,VercelSDK: Unchanged methods (no auto-injection)
    Caller->>LLMClient: generateImage(...)
    LLMClient->>VercelSDK: experimental_generateImage(...)
    VercelSDK-->>LLMClient: result
    LLMClient-->>Caller: result
    Caller->>LLMClient: embed(...)
    LLMClient->>VercelSDK: embed(...)
    VercelSDK-->>LLMClient: result
    LLMClient-->>Caller: result

    Note over Caller,VercelSDK: Explicit model override still works
    Caller->>LLMClient: generateText({ model: customModel, prompt })
    LLMClient->>Resolve: resolveModel(this, customModel)
    Resolve->>Resolve: Use explicit customModel
    Resolve-->>LLMClient: customModel
    LLMClient->>VercelSDK: generateText({ model: customModel, prompt })
    VercelSDK-->>LLMClient: { text, ... }
    LLMClient-->>Caller: { text, ... }
```
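The resolution order the diagram traces (explicit model first, then `getLanguageModel()`, otherwise a descriptive error) can be sketched in TypeScript. The types and the error wording here are illustrative stand-ins, not the actual `LLMClient` source:

```typescript
// Minimal sketch of the model-resolution precedence, assuming a
// simplified stand-in for the AI SDK's LanguageModelV2 type.
type LanguageModelV2 = { modelId: string };

interface LLMClientLike {
  // Legacy non-AI-SDK clients may not implement this, hence optional.
  getLanguageModel?(): LanguageModelV2;
}

function resolveModel(
  client: LLMClientLike,
  explicit?: LanguageModelV2,
): LanguageModelV2 {
  // 1. An explicitly passed model always wins.
  if (explicit) return explicit;
  // 2. Otherwise fall back to the client's own language model.
  if (client.getLanguageModel) return client.getLanguageModel();
  // 3. Neither available: fail loudly with migration guidance.
  throw new Error(
    "No language model available. Pass `model` explicitly or use an " +
      "AI-SDK-backed client that implements getLanguageModel().",
  );
}

const client: LLMClientLike = {
  getLanguageModel: () => ({ modelId: "client-default" }),
};

console.log(resolveModel(client).modelId); // client-default
console.log(resolveModel(client, { modelId: "override" }).modelId); // override
```

The three branches map one-to-one onto the `alt`/`else` arms in the diagram above.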
Summary

Makes `stagehand.llmClient.generateText()`, `.generateObject()`, `.streamText()`, and `.streamObject()` automatically use the client's language model — no need to pass `model` manually.

Closes #666
Problem

Currently, the LLMClient exposes `generateText`, `generateObject`, etc. as bare re-exports of the Vercel AI SDK functions. Users must still provide the `model` parameter:

await stagehand.llmClient.generateText({ model: someModel, prompt: '...' })

This is awkward because the LLMClient already knows its model.
Solution

Convert the four AI SDK convenience methods from direct function references into wrapper methods that auto-resolve the model via `getLanguageModel()`.

If `getLanguageModel()` is not available (legacy non-AI-SDK clients), a clear error is thrown with guidance on how to fix it. Explicit `model` overrides still work for advanced use cases.

Changes

`packages/core/lib/v3/llm/LLMClient.ts`:
- Added a `resolveModel()` helper that prefers an explicit `model` param, falls back to `getLanguageModel()`, and throws a descriptive error if neither is available
- Replaced `public generateText = generateText;` (and similar) with proper wrapper methods that call `resolveModel()` before delegating to the AI SDK function
- `generateImage`, `embed`, `embedMany`, `transcribe`, and `generateSpeech` remain as direct references (they use different model types)
- Added `@example` blocks on each new method

Summary by cubic
Auto-injects the client's language model into `generateText`, `generateObject`, `streamText`, and `streamObject` so you don't need to pass `model`. Overrides still work, and legacy clients get a clear error with guidance.

- Added `resolveModel()` to prefer an explicit `model`, else use `getLanguageModel()`, otherwise throw a helpful error.
- `generateImage`, `embed`, `embedMany`, `transcribe`, and `generateSpeech` unchanged.

Written for commit 0852de7. Summary will update on new commits.