3 issues found across 24 files
Confidence score: 3/5
- There is moderate merge risk because `packages/server-v4/src/schemas/internal/llm.ts` has a concrete contract mismatch: the strict `InternalLLMConfigSchema` rejects the required `source` field, which can cause internal config validation failures. `packages/server-v4/src/routes/v4/stubState.ts` may create orphaned default LLM rows when override ID validation fails after persistence, introducing cleanup and data-consistency risk in failed session flows. `packages/server-v4/src/routes/v4/llms/_id/index.ts` appears under-documented for error responses (404/401/500 with `LLMErrorResponseSchema`), which is lower severity but can mislead API consumers about real behavior.
- Pay close attention to `packages/server-v4/src/schemas/internal/llm.ts` and `packages/server-v4/src/routes/v4/stubState.ts`: validation mismatch and persistence order can cause runtime failures or leaked records.
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="packages/server-v4/src/routes/v4/stubState.ts">
<violation number="1" location="packages/server-v4/src/routes/v4/stubState.ts:172">
P2: `createBrowserSession` persists a default LLM before validating override LLM IDs, so failed session creation can leak orphan default LLM records.</violation>
</file>
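A minimal sketch of one way to address the ordering issue, using hypothetical names (`SessionInput`, the `createDefaultLLM` signature) rather than the real stub-state code: validate every referenced override LLM ID before anything is persisted, so a failed lookup cannot strand a system-default row.

```typescript
// Hypothetical fix sketch: validate all override LLM IDs *before* persisting
// the default LLM, so failed session creation cannot leak an orphan row.
interface SessionInput {
  llmId?: string;
  actLlmId?: string;
  observeLlmId?: string;
  extractLlmId?: string;
}

function createBrowserSession(
  input: SessionInput,
  llms: Map<string, unknown>, // stand-in for the stub LLM store
  createDefaultLLM: () => { id: string },
): { llmId: string } {
  // 1. Validate every referenced ID first (no writes yet).
  const refs = [input.llmId, input.actLlmId, input.observeLlmId, input.extractLlmId];
  for (const id of refs) {
    if (id !== undefined && !llms.has(id)) {
      // Fails before any persistence, so nothing needs cleanup.
      throw new Error(`LLM ${id} not found`);
    }
  }
  // 2. Only now persist a default LLM when none was supplied.
  const llmId = input.llmId ?? createDefaultLLM().id;
  return { llmId };
}
```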
<file name="packages/server-v4/src/routes/v4/llms/_id/index.ts">
<violation number="1" location="packages/server-v4/src/routes/v4/llms/_id/index.ts:36">
P2: Document the error responses for this route (e.g., 404/401/500 with LLMErrorResponseSchema). The handler can throw and the plugin error handler returns LLMErrorResponse, so the current schema under-specifies the actual responses.</violation>
</file>
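A hedged sketch of what fuller response documentation could look like. `LLMErrorResponseSchema` comes from the PR but is stubbed here as a plain descriptor so the example stands alone; the `response`-keyed-by-status-code shape mirrors the usual Fastify route schema convention.

```typescript
// Stub for the PR's LLMErrorResponseSchema (illustrative, not the real schema).
const LLMErrorResponseSchema = {
  type: "object",
  properties: { error: { type: "string" } },
};

// Sketch of a route schema that documents every response the handler can
// actually produce, not just the happy path.
const getLlmRouteSchema = {
  response: {
    200: { description: "LLM config found" /* LLMResultSchema in the real route */ },
    401: { description: "Missing or invalid auth", ...LLMErrorResponseSchema },
    404: { description: "No LLM with this id", ...LLMErrorResponseSchema },
    500: { description: "Unexpected server error", ...LLMErrorResponseSchema },
  },
};
```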
<file name="packages/server-v4/src/schemas/internal/llm.ts">
<violation number="1" location="packages/server-v4/src/schemas/internal/llm.ts:253">
P2: InternalLLMConfigSchema omits the `source` field even though the public LLM contract requires it. Because this schema is strict, internal config objects that include `source` (e.g., system-default vs user) will fail validation or lose that metadata. Add a source field (and schema) to the internal config shape so it can round-trip the public LLM resource.</violation>
</file>
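An illustrative sketch (deliberately not the real zod schema) of why a strict schema breaks here: strict parsing rejects unknown keys, so any internal config carrying the public `source` field fails validation until the internal shape declares it.

```typescript
// A toy strict parser: rejects any key not explicitly declared, which is
// the behavior a zod .strict() schema exhibits for unknown fields.
type LLMSource = "user" | "system-default";

function strictParse(allowedKeys: string[], value: Record<string, unknown>): void {
  for (const key of Object.keys(value)) {
    if (!allowedKeys.includes(key)) throw new Error(`unrecognized key: ${key}`);
  }
}

const config = { modelName: "gpt-4o", source: "system-default" as LLMSource };

// Current shape: `source` is not declared, so strict validation throws:
//   strictParse(["modelName"], config)  ->  Error: unrecognized key: source

// Fixed shape: declaring `source` lets the config round-trip the public LLM resource.
strictParse(["modelName", "source"], config);
```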
Architecture diagram
```mermaid
sequenceDiagram
    participant Client
    participant API as API Route (/v4)
    participant State as Stub State (In-Memory)

    Note over Client,State: LLM Configuration Setup
    opt Register Custom LLM
        Client->>API: POST /v4/llms
        API->>State: NEW: createLLM(config)
        State-->>API: LLM Object (source: "user")
        API-->>Client: 201 Created (LLMResult)
    end

    Note over Client,State: Browser Session Creation
    Client->>API: POST /v4/browsersession
    API->>State: createBrowserSession(input)
    alt NEW: llmId provided
        State->>State: Validate LLM exists
    else NEW: llmId omitted
        State->>State: NEW: createDefaultLLM()<br/>(source: "system-default")
    end
    State->>State: NEW: Initialize LLMChat<br/>(Internal mapping)
    State-->>API: BrowserSession
    API-->>Client: 200 OK (BrowserSessionResult)

    Note over Client,State: Operation-Specific LLM Overrides
    Client->>API: PATCH /v4/browsersession/:id
    API->>State: NEW: updateBrowserSession(overrides)
    Note right of State: CHANGED: Primary llmId is used for any<br/>unset overrides (act, observe, extract)
    alt NEW: Invalid LLM ID referenced
        State-->>API: 404 Not Found
        API-->>Client: 404 Error
    else Success
        State->>State: Update session LLM references
        State-->>API: Updated BrowserSession
        API-->>Client: 200 OK
    end

    Note over State: Internal Data Flow (Future DB)
    Note right of State: BrowserSession (1) -> (N) LLMChat<br/>LLMChat (1) -> (N) UIMessage
```
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
1 issue found across 2 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="packages/server-v3/src/routes/v1/sessions/start.ts">
<violation number="1" location="packages/server-v3/src/routes/v1/sessions/start.ts:237">
P2: Serialize `cleanupErr` before logging; passing a raw `Error` object here can lose message/stack details with the current logger behavior.
(Based on your team's feedback about logger mishandling Error objects.) [FEEDBACK_USED]</violation>
</file>
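A minimal sketch of the suggested serialization. It relies only on the fact that `Error.message` and `Error.stack` are non-enumerable own properties, so naive JSON serialization of a raw `Error` yields `{}`; the logger call shown is illustrative, not the project's actual logger API.

```typescript
// Flatten an Error into plain enumerable fields before logging, so a logger
// that JSON-serializes its arguments cannot silently drop message/stack.
function serializeError(err: unknown): { message: string; stack?: string } {
  if (err instanceof Error) {
    return { message: err.message, stack: err.stack };
  }
  return { message: String(err) };
}

// Hypothetical usage at the cleanup site:
//   logger.error({ cleanupErr: serializeError(cleanupErr) }, "session cleanup failed");
```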
Adds the v4 `/llms` stubs, and the internal zod entities we will map to drizzle DB objects later.

Note: An internal `LLMSession` is not the full accumulated history of an entire browser session. It is the container for the LLM thread involved in a single logical Stagehand operation, which today is usually very small. For example, an `act()` call may create one `LLMSession` with one `LLMCall`; `extract()` may create one `LLMSession` with two `LLMCall` rows. It is not a giant transcript of every LLM call across the whole browser session.

Public:
- `/v4/llms` for reusable LLM configs
- `BrowserSession.llmId` as the primary/default LLM config reference, plus `actLlmId`, `observeLlmId`, and `extractLlmId` as public per-operation override references

Internal only (for now):
- `LLMSession`
- `LLMCall`
- `StagehandStep`

We are intentionally not exposing `LLMSession`/`LLMCall` history in this PR. That data can still be surfaced through logs/streaming work, and we can decide later whether we want a first-class public API for it.

Sequence Example
- When a browser session is created without an explicit `llmId`, the server materializes a real default LLM config with `source = "system-default"` and attaches it to the session.
- To reuse a config, call `POST /v4/llms` first and then create the browser session with `llmId`.
- Per-operation overrides are set via `actLlmId`, `observeLlmId`, and/or `extractLlmId` on the browser session.
- When an `act`/`observe`/`extract` call runs, Stagehand resolves the template config from the per-operation override or falls back to `llmId`, creates a dedicated `LLMSession` for that operation, and records one or more `LLMCall` rows under it.

```mermaid
classDiagram
  class BrowserSession {
    id: uuid
    projectId: uuid
    env: BrowserSessionEnv
    status: BrowserSessionStatus
    browserbaseSessionId: uuid?
    cdpUrl: string?
    llmId: uuid
    actLlmId: uuid?
    observeLlmId: uuid?
    extractLlmId: uuid?
    createdAt: datetime
    updatedAt: datetime
    endedAt: datetime?
  }
  class LLMSession {
    id: uuid
    copiedTemplateId: uuid?
    forkedSessionId: uuid?
    projectId: uuid
    browserSessionId: uuid
    createdAt: datetime
    updatedAt: datetime
    connectedAt: datetime?
    disconnectedAt: datetime?
    lastRequestAt: datetime?
    lastResponseAt: datetime?
    lastErrorAt: datetime?
    lastErrorMessage: string?
    status: LLMSessionStatus
    model: string
    baseUrl: string?
    options: json?
    extraHttpHeaders: json?
    systemPrompt: string?
    tokensInput: int
    tokensOutput: int
    tokensReasoning: int
    tokensCachedInput: int
    tokensTotal: int
  }
  class LLMCall {
    id: uuid
    llmSessionId: uuid
    sentAt: datetime
    receivedAt: datetime?
    prompt: string
    expectedResponseSchema: json?
    response: json?
    error: json?
    usage: json?
    model: string
  }
  class StagehandStep {
    id: uuid
    stagehandBrowserSessionId: uuid
    operation: StagehandStepOperation
    llmTemplateId: uuid
    llmSessionId: uuid?
    params: json
    result: json?
  }
  class LLMConfig {
    id: uuid
    projectId: uuid
    source: LLMSource
    displayName: string?
    modelName: string
    baseUrl: string?
    systemPrompt: string?
    providerOptions: LLMProviderOptions?
    createdAt: datetime
    updatedAt: datetime
  }
  BrowserSession "1" <-- "*" LLMSession : has many
  LLMSession "1" --> "*" LLMCall : has many
  BrowserSession "1" --> "*" StagehandStep : has many
  StagehandStep "1" --> "0..1" LLMSession : resolves to
  LLMConfig "1" <-- "*" BrowserSession : is referenced by many
```

Implementation notes
- Default LLM configs are created as `/llms` resources and appear in `GET /v4/llms`.
- `LLM` configs remain non-secret; request auth still stays out of the public config resource.
- `LLMSession` is the config-bearing per-operation thread and `LLMCall` is the append-friendly log of provider exchanges under that thread.
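The per-operation fallback described above can be sketched as a small resolver. The field names (`llmId`, `actLlmId`, `observeLlmId`, `extractLlmId`) follow the PR; the function itself is an illustrative assumption, not the real server code.

```typescript
// Each Stagehand operation resolves its LLM config from the matching
// per-operation override, falling back to the session's primary llmId.
interface BrowserSessionRefs {
  llmId: string;
  actLlmId?: string;
  observeLlmId?: string;
  extractLlmId?: string;
}

type Operation = "act" | "observe" | "extract";

function resolveLlmId(session: BrowserSessionRefs, op: Operation): string {
  const overrides: Record<Operation, string | undefined> = {
    act: session.actLlmId,
    observe: session.observeLlmId,
    extract: session.extractLlmId,
  };
  // Unset overrides fall back to the primary llmId.
  return overrides[op] ?? session.llmId;
}
```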