
feat: simplify LLMClient with auto-injected model for generateText/generateObject/streamText/streamObject #2125

Open

ruguoba wants to merge 1 commit into browserbase:main from ruguoba:main

Conversation


@ruguoba ruguoba commented May 15, 2026

Summary

Makes stagehand.llmClient.generateText(), .generateObject(), .streamText(), and .streamObject() automatically use the client's language model — no need to pass model manually.

Closes #666

Problem

Currently, the LLMClient exposes generateText, generateObject, etc. as bare re-exports of the Vercel AI SDK functions. Users must still provide the model parameter:

// Before: verbose — you must find and pass the model yourself
const { text } = await stagehand.llmClient.generateText({
  model: someLanguageModel,
  prompt: 'What should I type next?'
});

This is awkward because the LLMClient already knows its model.
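
Concretely, the workaround today is to pull the model off the client first (a sketch assuming an AI-SDK-backed client that implements getLanguageModel()):

// Today's workaround: fetch the client's own model, then hand it right back
const model = stagehand.llmClient.getLanguageModel();
const { text } = await stagehand.llmClient.generateText({
  model,
  prompt: 'What should I type next?'
});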

Solution

Convert the four AI SDK convenience methods from direct function references into wrapper methods that auto-resolve the model via getLanguageModel():

// After: just works — model is auto-injected from the client
const { text } = await stagehand.llmClient.generateText({
  prompt: 'What should I type next?'
});

// Structured output too
const { object } = await stagehand.llmClient.generateObject({
  schema: myZodSchema,
  prompt: 'Extract the product details'
});

// Streaming
const { textStream } = await stagehand.llmClient.streamText({
  prompt: 'Write a long story'
});

If getLanguageModel() is not available (legacy non-AI-SDK clients), a clear error is thrown with guidance on how to fix it.

Explicit model overrides still work for advanced use cases.
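
For example (the @ai-sdk/openai provider and model id below are illustrative, not part of this PR):

// Per-call override: an explicit model takes precedence over the client's own
import { openai } from '@ai-sdk/openai';

const { text } = await stagehand.llmClient.generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'Summarize the visible page content'
});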

Changes

packages/core/lib/v3/llm/LLMClient.ts:

  • Add resolveModel() helper that prefers an explicit model param, falls back to getLanguageModel(), and throws a descriptive error if neither is available (see the sketch after this list)
  • Replace public generateText = generateText; (and similar) with proper wrapper methods that call resolveModel() before delegating to the AI SDK function
  • generateImage, embed, embedMany, transcribe, generateSpeech remain as direct references (they use different model types)
  • Full JSDoc with @example blocks on each new method
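
In a minimal TypeScript sketch (the resolveModel() and getLanguageModel() names come from this PR; the types, class shape, and error text are illustrative, and params are narrowed to prompt-only calls for brevity):

import { generateText, type LanguageModel } from 'ai';

// Anything that may expose an AI SDK language model.
interface HasLanguageModel {
  getLanguageModel?(): LanguageModel;
}

// Prefer an explicit model, fall back to the client's own, throw otherwise.
function resolveModel(
  client: HasLanguageModel,
  explicit?: LanguageModel
): LanguageModel {
  if (explicit) return explicit;
  if (client.getLanguageModel) return client.getLanguageModel();
  throw new Error(
    'No language model available: pass `model` explicitly or use an ' +
      'AI-SDK-backed client that implements getLanguageModel().'
  );
}

// Wrapper method shape: resolve the model, then delegate to the AI SDK.
class ClientSketch implements HasLanguageModel {
  constructor(private model: LanguageModel) {}

  getLanguageModel(): LanguageModel {
    return this.model;
  }

  async generateText(params: { prompt: string; model?: LanguageModel }) {
    const model = resolveModel(this, params.model);
    return generateText({ ...params, model });
  }
}

generateObject, streamText, and streamObject follow the same shape: spread the caller's params and inject the resolved model before delegating.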

Summary by cubic

Auto-injects the client’s language model into generateText, generateObject, streamText, and streamObject so you don’t need to pass model. Overrides still work, and legacy clients get a clear error with guidance.

  • New Features
    • Added resolveModel() to prefer an explicit model, else use getLanguageModel(), otherwise throw a helpful error.
    • Wrapped the four methods to inject the model before calling the Vercel AI SDK.
    • Left generateImage, embed, embedMany, transcribe, and generateSpeech unchanged.

Written for commit 0852de7. Summary will update on new commits.

feat: simplify LLMClient with auto-injected model for generateText/generateObject/streamText/streamObject

Make it easier to use the LLMClient's AI SDK wrappers by automatically
injecting the client's language model when calling generateText(),
generateObject(), streamText(), and streamObject().

Previously, users had to manually pass the model parameter:
  await stagehand.llmClient.generateText({ model: someModel, prompt: '...' })

Now the model is resolved from getLanguageModel() automatically:
  await stagehand.llmClient.generateText({ prompt: '...' })

This makes the API feel like the Vercel AI SDK, while still allowing
model overrides when needed.

Closes browserbase#666

changeset-bot Bot commented May 15, 2026

⚠️ No Changeset found

Latest commit: 0852de7

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types

Click here to learn what changesets are, and how to add one.

Click here if you're a maintainer who wants to add a changeset to this PR

@github-actions
Contributor

This PR is from an external contributor and must be approved by a stagehand team member with write access before CI can run.
Approving the latest commit mirrors it into an internal PR owned by the approver.
If new commits are pushed later, the internal PR stays open but is marked stale until someone approves the latest external commit and refreshes it.

@github-actions github-actions Bot added the external-contributor and external-contributor:awaiting-approval labels May 15, 2026
Contributor

@cubic-dev-ai cubic-dev-ai Bot left a comment


No issues found across 1 file

Confidence score: 5/5

  • Automated review surfaced no issues in the provided summaries.
  • No files require special attention.
Architecture diagram
sequenceDiagram
    participant Caller as Stagehand User Code
    participant LLMClient as LLMClient
    participant Resolve as resolveModel()
    participant VercelSDK as Vercel AI SDK

    Note over Caller,VercelSDK: NEW: Auto-injected model for generateText/generateObject/streamText/streamObject

    Caller->>LLMClient: generateText({ prompt: "..." })
    LLMClient->>Resolve: resolveModel(this, undefined)
    alt Explicit model provided
        Resolve->>Resolve: Use passed model
    else getLanguageModel() available
        Resolve->>LLMClient: getLanguageModel()
        LLMClient-->>Resolve: LanguageModelV2
        Resolve->>Resolve: Use returned model
    else Neither available
        Resolve->>Resolve: Throw Error with migration guidance
        Resolve-->>Caller: Error: "No language model available..."
    end
    Resolve-->>LLMClient: LanguageModelV2
    LLMClient->>VercelSDK: generateText({ model, prompt })
    VercelSDK-->>LLMClient: { text, ... }
    LLMClient-->>Caller: { text, ... }

    Note over Caller,VercelSDK: Same pattern for generateObject, streamText, streamObject

    Caller->>LLMClient: generateObject({ schema, prompt })
    LLMClient->>Resolve: resolveModel(this, params.model)
    Resolve-->>LLMClient: LanguageModelV2
    LLMClient->>VercelSDK: generateObject({ model, schema, prompt })
    VercelSDK-->>LLMClient: { object, ... }
    LLMClient-->>Caller: { object, ... }

    Note over Caller,VercelSDK: Unchanged methods (no auto-injection)

    Caller->>LLMClient: generateImage(...)
    LLMClient->>VercelSDK: experimental_generateImage(...)
    VercelSDK-->>LLMClient: result
    LLMClient-->>Caller: result
    
    Caller->>LLMClient: embed(...)
    LLMClient->>VercelSDK: embed(...)
    VercelSDK-->>LLMClient: result
    LLMClient-->>Caller: result

    Note over Caller,VercelSDK: Explicit model override still works

    Caller->>LLMClient: generateText({ model: customModel, prompt })
    LLMClient->>Resolve: resolveModel(this, customModel)
    Resolve->>Resolve: Use explicit customModel
    Resolve-->>LLMClient: customModel
    LLMClient->>VercelSDK: generateText({ model: customModel, prompt })
    VercelSDK-->>LLMClient: { text, ... }
    LLMClient-->>Caller: { text, ... }

