feat: pluggable capture transport system for shadow-agent #44
Conversation
Add multiple transport implementations and refactor the capture layer.

New modules (shadow-agent/src/capture/):
- capture-transport.ts: transport interface/base abstraction
- capture-transports.ts: transport factory and registry
- http-stream-transport.ts: HTTP streaming transport
- socket-transport.ts: TCP socket transport
- websocket-transport.ts: WebSocket transport
- renderer-surface-adapter.tsx: surface adapter for the renderer layer

Updated capture modules:
- session-manager.ts, transcript-watcher.ts: integrate the transport system
- event-buffer.ts, ipc-bridge.ts: transport-aware updates
- start-main-process.ts, preload.ts: Electron integration updates
- shadow-inference-engine.ts, direct-api.ts: inference layer updates
- App.tsx, app-state.ts, host.ts: renderer integration
- CanvasRenderer.tsx, ShadowPanel.tsx, TimelineScrubber.tsx: UI updates
- privacy.ts, schema.ts: shared type/policy updates

New tests:
- tests/capture/capture-transports.test.ts
- tests/capture/session-manager.test.ts
- tests/electron/start-main-process-privacy.test.ts
- tests/renderer/renderer-surface-adapter.test.ts
- Expanded existing capture, electron, privacy, app-state, and host tests

Docs: update architecture, domain-events, domain-inference, getting-started, and the shadow-agent README to reflect the transport system.
CodeAnt AI is reviewing your PR.
📝 Walkthrough

This PR introduces a transport-pluggable event capture architecture replacing file-only transcript watching, adds privacy settings persistence with environment-variable override precedence, refactors the session manager to be transport-driven, extends Electron IPC with dynamic privacy management, and exports renderer component types for composition via a surface adapter pattern.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Electron<br/>Renderer
    participant MainProc as Main Process
    participant PrivacyStore as Privacy<br/>Settings File
    participant PrivacyMod as Privacy<br/>Module
    participant SessionMgr as Session<br/>Manager
    participant Transport as Capture<br/>Transport

    Client->>MainProc: updatePrivacySettings(updates)
    MainProc->>PrivacyMod: saveTranscriptPrivacySettings(updates)
    PrivacyMod->>PrivacyStore: Write to privacy.json
    MainProc->>SessionMgr: getPrivacy() [re-evaluates]
    SessionMgr->>Transport: [Chunks processed with new privacy]
    MainProc->>Client: return PrivacyPolicy
```

```mermaid
sequenceDiagram
    participant App as shadow-agent<br/>Application
    participant TransportResolver as Transport<br/>Resolver
    participant FileTail as File-Tail<br/>Transport
    participant HTTPStream as HTTP Stream<br/>Transport
    participant Context as Capture<br/>Context
    participant SessionMgr as Session<br/>Manager
    participant Buffer as Event<br/>Buffer

    App->>TransportResolver: resolveCaptureTransportOptionsFromEnv()
    alt File-Tail Transport
        TransportResolver->>FileTail: createFileTailCaptureTransport(options)
        FileTail->>Context: onSessionStarted(session)
        FileTail->>Context: onChunk({session, chunk})
    else HTTP Streaming
        TransportResolver->>HTTPStream: createHttpStreamCaptureTransport(options)
        HTTPStream->>Context: onSessionStarted(session)
        HTTPStream->>Context: onChunk({session, chunk})
    end
    Context->>SessionMgr: [Incremental parser receives chunks]
    SessionMgr->>Buffer: pushEvent(normalizedEvent)
```
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
```ts
return resolveTranscriptPrivacySettings(overrides, {
  ...(storedSettings.allowRawTranscriptStorage !== undefined
    ? { [TRANSCRIPT_PRIVACY_ENV_KEYS.allowRawTranscriptStorage]: String(storedSettings.allowRawTranscriptStorage) }
    : {}),
  ...(storedSettings.allowOffHostInference !== undefined
    ? { [TRANSCRIPT_PRIVACY_ENV_KEYS.allowOffHostInference]: String(storedSettings.allowOffHostInference) }
    : {}),
  ...fileEnv,
  ...env
});
```
Suggestion: The source-merging logic lets an invalid higher-priority env value (for example SHADOW_ALLOW_RAW_TRANSCRIPT_STORAGE=maybe) overwrite a valid stored setting, because merge precedence is applied before boolean parsing. This can silently reset persisted consent to defaults instead of falling back to the previously valid stored value. Parse each source first (stored/file/process) and only override when a parsed boolean is defined. [logic error]
Severity Level: Major ⚠️
- ❌ Off-host inference disabled despite persisted opt-in in privacy.json.
- ⚠️ Electron main process misreads user consent on startup.
- ⚠️ Inference requests treat opted-in sessions as local-only.

Steps of Reproduction ✅
1. Persist an opted-in setting by calling `saveTranscriptPrivacySettings({
allowRawTranscriptStorage: false, allowOffHostInference: true })` from
`shadow-agent/src/shared/privacy.ts:117-128`, which writes `privacy.json` at the path
computed by `getTranscriptPrivacySettingsPath()` (`privacy.ts:48-52`).
2. Create a dotenv file at the default path used by `loadTranscriptPrivacySettings`
(`envPath = join(homedir(), '.shadow-agent', '.env')` in `privacy.ts:89-92`) containing an
invalid value, e.g. `SHADOW_ALLOW_OFF_HOST_INFERENCE=maybe`.
3. Call `loadTranscriptPrivacySettings()` with default arguments, either directly in code
or via the Electron startup path `startMainProcess` which invokes it at
`shadow-agent/src/electron/start-main-process.ts:12-21` (`privacySettings = await
loadTranscriptPrivacySettings();` on line 15). Inside `loadTranscriptPrivacySettings`, the
dotenv file is parsed into `fileEnv` (`privacy.ts:98-100`), and `storedSettings` is read
from `privacy.json` (`privacy.ts:95-96`).
4. Observe that the merged env object passed to `resolveTranscriptPrivacySettings` is
constructed with stored values first, then `fileEnv`, then `env` (`privacy.ts:105-114`),
so `SHADOW_ALLOW_OFF_HOST_INFERENCE='maybe'` from the dotenv file overwrites the persisted
`true`. In `resolveTranscriptPrivacySettings` (`privacy.ts:69-81`),
`parseBooleanSetting('maybe')` (`privacy.ts:32-45`) returns `undefined`, causing
`allowOffHostInference` to fall back to the default `false` instead of the previously
stored `true`, so off-host inference is treated as disabled despite persisted opt-in.
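The parse-first approach the suggestion describes can be sketched in isolation like this; the accepted boolean tokens and the precedence order (process env > dotenv file > stored setting > default) are assumptions inferred from the snippet and steps above, not the PR's actual helpers:

```typescript
// Sketch: parse every source to a boolean first; a higher-priority source
// only wins when it parsed to a defined value, so an invalid value such as
// "maybe" falls through instead of resetting consent to the default.
function parseBooleanSetting(raw: string | undefined): boolean | undefined {
  if (raw === undefined) return undefined;
  const value = raw.trim().toLowerCase();
  if (['1', 'true', 'yes', 'on'].includes(value)) return true;
  if (['0', 'false', 'no', 'off'].includes(value)) return false;
  return undefined; // invalid input: do not override lower-priority sources
}

function resolveSetting(
  stored: boolean | undefined,
  fileEnvValue: string | undefined,
  processEnvValue: string | undefined,
  fallback: boolean
): boolean {
  return (
    parseBooleanSetting(processEnvValue) ??
    parseBooleanSetting(fileEnvValue) ??
    stored ??
    fallback
  );
}
```

With this shape, a dotenv line like `SHADOW_ALLOW_OFF_HOST_INFERENCE=maybe` parses to `undefined`, and the persisted `true` from privacy.json survives.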
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** shadow-agent/src/shared/privacy.ts
**Line:** 105:114
**Comment:**
*Logic Error: The source-merging logic lets an invalid higher-priority env value (for example `SHADOW_ALLOW_RAW_TRANSCRIPT_STORAGE=maybe`) overwrite a valid stored setting, because merge precedence is applied before boolean parsing. This can silently reset persisted consent to defaults instead of falling back to the previously valid stored value. Parse each source first (stored/file/process) and only override when a parsed boolean is defined.
Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
Once fix is implemented, also check other comments on the same PR, and ask user if the user wants to fix the rest of the comments as well. if said yes, then fetch all the comments validate the correctness and implement a minimal fix

```ts
logger.info('capture', 'transport.websocket.connecting', { url: options.url });
const nextSocket = new WebSocket(options.url, options.protocols);
socket = nextSocket;
```
Suggestion: The WebSocket connection is started in fire-and-forget mode, but socket creation is not guarded. If new WebSocket(...) throws (for example from an invalid URL/protocol), the async connect call rejects without being handled, which can produce an unhandled promise rejection and leave the transport permanently disconnected. Wrap connection setup in try/catch and route failures through reconnect scheduling. [possible bug]
Severity Level: Critical 🚨
- ❌ WebSocket-based live capture never starts after constructor failure.
- ❌ SessionManager buffer stays empty; no live transcript events.
- ⚠️ Inference engine runs on empty state, reducing app usefulness.
- ⚠️ Unhandled promise rejection logged in Electron main process.Steps of Reproduction ✅
1. Configure the app to use the WebSocket capture transport by setting environment
variables so that `resolveCaptureTransportOptionsFromEnv()` in
`shadow-agent/src/capture/capture-transports.ts:24-52` returns a WebSocket config, e.g.
`SHADOW_CAPTURE_TRANSPORT=websocket` and `SHADOW_CAPTURE_WS_URL=not-a-valid-url` (a
syntactically invalid URL).
2. Start the Electron main process, which calls `startMainProcess()` in
`shadow-agent/src/electron/start-main-process.ts:7`. In the `app.whenReady().then(...)`
handler at lines 17-21, it constructs a `SessionManager` with `transport:
resolveCaptureTransportOptionsFromEnv()` and then, inside the `try` block at lines 53-59,
calls `void currentSessionManager.start();` to start live transcript capture in
fire-and-forget mode.
3. Inside `SessionManager.start()` in
`shadow-agent/src/capture/session-manager.ts:70-120`, the code builds `effectiveTransport`
and awaits `effectiveTransport.start({ ... })` at line 86. For WebSocket options,
`createCaptureTransport()` in `shadow-agent/src/capture/capture-transports.ts:67-75`
returns `createWebSocketCaptureTransport(options)`, so `CaptureTransport.start()` from
`websocket-transport.ts` is invoked.
4. In `createWebSocketCaptureTransport().start()` at
`shadow-agent/src/capture/websocket-transport.ts:50-118`, the transport defines `connect`
as an `async` function at lines 76-116 and calls it in fire-and-forget mode with `void
connect();` at line 118. When `connect()` executes, it logs at line 81 and then calls `new
WebSocket(options.url, options.protocols);` at line 82. With an invalid `options.url`
(from the misconfigured `SHADOW_CAPTURE_WS_URL`), the `WebSocket` constructor throws
synchronously. Because this happens inside an `async` function, the exception becomes a
rejected `Promise` from `connect()`, but the caller ignores that promise (`void
connect()`), so the rejection is unhandled. No `onopen`, `onclose`, or
`scheduleReconnect()` callbacks are ever wired, `socket` stays `null`, and no reconnect
attempts are scheduled. At runtime, Electron/Node will log an unhandled promise rejection
for `connect()` while the main process continues, but the WebSocket transport never
connects and never emits `onSessionStarted` or `onChunk` events, leaving live capture
permanently disabled for that session.
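The guarded connect the suggestion asks for can be sketched standalone; the socket factory, reconnect scheduler, and logger are injected as parameters because the transport's real members are only partially visible in the excerpt:

```typescript
// Sketch: wrap the synchronous socket construction in try/catch so a bad
// URL/protocol feeds the bounded reconnect loop instead of escaping as an
// unhandled rejection from the fire-and-forget connect() call.
function createGuardedConnect(
  makeSocket: () => unknown, // stands in for `new WebSocket(url, protocols)`
  scheduleReconnect: () => void,
  log: (event: string, detail: Record<string, unknown>) => void
): () => Promise<void> {
  return async () => {
    log('transport.websocket.connecting', {});
    try {
      makeSocket(); // may throw synchronously on an invalid URL/protocol
    } catch (error) {
      log('transport.websocket.connect_failed', { error: String(error) });
      scheduleReconnect(); // route the failure into the retry path
      return; // resolved promise: nothing left unhandled
    }
    // ...wire onopen/onmessage/onclose handlers here, as in the PR...
  };
}
```

A terminal `connect().catch(() => scheduleReconnect())` at the call site would add a second safety net for anything thrown after the guard.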
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** shadow-agent/src/capture/websocket-transport.ts
**Line:** 81:83
**Comment:**
*Possible Bug: The WebSocket connection is started in fire-and-forget mode, but socket creation is not guarded. If `new WebSocket(...)` throws (for example from an invalid URL/protocol), the async `connect` call rejects without being handled, which can produce an unhandled promise rejection and leave the transport permanently disconnected. Wrap connection setup in try/catch and route failures through reconnect scheduling.
Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
Once fix is implemented, also check other comments on the same PR, and ask user if the user wants to fix the rest of the comments as well. if said yes, then fetch all the comments validate the correctness and implement a minimal fix

```ts
if (checkpointBuffer) {
  void drainPendingEvents();
  return;
```
Suggestion: Pending-drain work is launched without error handling, so failures from asynchronous checkpoint reads/commits can surface as unhandled promise rejections from the subscription path. Attach a .catch(...) (or handle errors inside the drain function) so transient I/O failures do not destabilize the process. [possible bug]
Severity Level: Major ⚠️
- ❌ I/O errors cause unhandled rejection in Electron main.
- ⚠️ Shadow insights processing may stop after drain failure.

Steps of Reproduction ✅
1. Start the Electron app, which invokes `startMainProcess()` in
`shadow-agent/src/electron/start-main-process.ts:136-169`. This constructs a
`SessionManager` (`session-manager.ts:34-50`) whose `getBuffer()` returns an `EventBuffer`
from `shadow-agent/src/capture/event-buffer.ts`, and an inference engine via
`createInferenceEngine` at `start-main-process.ts:23-35`.
2. When `refreshInferenceEngine()` runs (`start-main-process.ts:35-39`), it calls
`inferenceEngine.start()`. Inside `createInferenceEngine().start()`
(`shadow-inference-engine.ts:135-167`), a `checkpointBuffer` is detected, a consumer is
registered (`registerConsumer` at line 157), and then a subscription is attached:
`unsubscribeBuffer = buffer.subscribe((events) => { ... })` at lines 160-166.
3. The underlying buffer's `push()` implementation in `event-buffer.ts:33-52` persists
events and then calls `notify(events, metrics)` at lines 82-93, which iterates
`subscribers` and invokes the inference engine's subscription callback. In checkpoint
mode, that callback executes:
- `if (checkpointBuffer) { void drainPendingEvents(); return; }` at
`shadow-inference-engine.ts:161-163`,
launching `drainPendingEvents()` as an async fire-and-forget task.
4. `drainPendingEvents()` (`shadow-inference-engine.ts:108-132`) performs `await
checkpointBuffer.readPending(...)` and `await checkpointBuffer.commitCheckpoint(...)`.
These, in turn, perform filesystem I/O via `readCheckpointFile`, `readSpillFile`, and
`writeCheckpointFile` in `event-buffer.ts:52-60, 96-98, 47-63`. If any of these operations
fail (e.g., permission error or disk issue in the `queuePersistenceRoot` path configured
in `session-manager.ts:46-50`), `readPending`/`commitCheckpoint` will cause
`drainPendingEvents()` to reject. Because the subscription path calls `void
drainPendingEvents();` with no `.catch` and no internal catch in `drainPendingEvents`,
that rejection has no attached handler and surfaces as a process-level unhandled promise
rejection in the Electron main process.
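Handling the rejection at the subscription boundary can be sketched like so; `drainPendingEvents` and the error hook are injected stand-ins for the engine's real members:

```typescript
// Sketch: keep the drain fire-and-forget but attach a .catch so checkpoint
// I/O failures are reported instead of surfacing as process-level
// unhandled rejections; the next buffer push simply retries the drain.
function subscribeWithSafeDrain(
  drainPendingEvents: () => Promise<void>,
  onDrainError: (error: unknown) => void
): () => void {
  // Returns the callback the event buffer would invoke from notify().
  return () => {
    drainPendingEvents().catch((error) => {
      onDrainError(error); // transient I/O failure: log and keep running
    });
  };
}
```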
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** shadow-agent/src/inference/shadow-inference-engine.ts
**Line:** 161:163
**Comment:**
*Possible Bug: Pending-drain work is launched without error handling, so failures from asynchronous checkpoint reads/commits can surface as unhandled promise rejections from the subscription path. Attach a `.catch(...)` (or handle errors inside the drain function) so transient I/O failures do not destabilize the process.
Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
Once fix is implemented, also check other comments on the same PR, and ask user if the user wants to fix the rest of the comments as well. if said yes, then fetch all the comments validate the correctness and implement a minimal fix
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: f6d02747fc
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```ts
if (!options.overridePath) {
  rediscoveryTimer = setInterval(() => {
    void runDiscovery();
  }, discoveryIntervalMs);
```
Keep override-path discovery polling active
With an overridePath, discovery now runs only once and the timer is skipped, so startup before the target file exists leaves capture permanently idle. createSessionManager.start(overridePath) still routes override paths into this transport, so the old behavior of eventually attaching when the file appears is lost for that mode.
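Restoring eventual attachment could be as simple as starting the poll unconditionally; this sketch injects `runDiscovery` and the interval because the transport's actual members are only partially shown:

```typescript
// Sketch: poll for the target file whether or not an overridePath is set,
// so capture attaches once the file appears; discovery is assumed to be a
// cheap no-op after attachment.
function startRediscovery(
  runDiscovery: () => Promise<void>,
  discoveryIntervalMs: number
): { stop: () => void } {
  const timer = setInterval(() => {
    void runDiscovery().catch(() => { /* retried on the next tick */ });
  }, discoveryIntervalMs);
  return { stop: () => clearInterval(timer) };
}
```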
```ts
return {
  kind: 'http-stream',
  url: requiredEnv(env.SHADOW_CAPTURE_HTTP_URL ?? env.SHADOW_CAPTURE_TARGET, 'SHADOW_CAPTURE_HTTP_URL'),
  reconnectDelayMs: env.SHADOW_CAPTURE_RECONNECT_MS ? Number.parseInt(env.SHADOW_CAPTURE_RECONNECT_MS, 10) : undefined,
```
Validate reconnect delay env parsing
This parse accepts non-numeric values as NaN and passes them through to transport options; each transport then computes Math.max(50, options.reconnectDelayMs ?? ...), which stays NaN, so reconnect timers fire with an effectively immediate delay during failures. A malformed SHADOW_CAPTURE_RECONNECT_MS can therefore create a tight reconnect loop (HTTP/WebSocket/Socket) instead of a bounded retry interval.
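A validating parse along these lines would close the gap; whether to throw or silently fall back to the default delay is a design choice the PR would have to make, and this sketch opts to throw:

```typescript
// Sketch: reject non-numeric or non-positive reconnect delays up front so
// NaN never reaches Math.max(50, ...) in the transports.
function parseReconnectDelayMs(raw: string | undefined): number | undefined {
  if (raw === undefined || raw.trim() === '') return undefined;
  const parsed = Number.parseInt(raw, 10);
  if (!Number.isFinite(parsed) || parsed <= 0) {
    throw new Error(`Invalid SHADOW_CAPTURE_RECONNECT_MS: ${JSON.stringify(raw)}`);
  }
  return parsed;
}
```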
Pull request overview
Re-introduces a pluggable capture transport system for shadow-agent (file-tail, HTTP stream, WebSocket, TCP socket) and adds end-to-end privacy policy persistence + UI controls for off-host inference and raw transcript storage/export.
Changes:
- Adds a capture transport abstraction and multiple transport implementations, refactoring session manager + transcript watcher to be transport-driven.
- Introduces persisted privacy settings (`~/.shadow-agent/privacy.json`), new IPC bridge methods, and renderer UI/state to manage privacy toggles.
- Updates inference startup to consume privacy settings and refresh inference when privacy changes; expands tests and docs accordingly.
Reviewed changes
Copilot reviewed 37 out of 37 changed files in this pull request and generated 5 comments.
Show a summary per file
| File | Description |
|---|---|
| shadow-agent/tests/renderer/renderer-surface-adapter.test.ts | Adds contract test for the renderer surface adapter boundary. |
| shadow-agent/tests/renderer/host.test.ts | Updates host capability expectations to include privacy management. |
| shadow-agent/tests/renderer/app-state.test.ts | Adds reducer coverage for privacy update actions. |
| shadow-agent/tests/privacy.test.ts | Adds coverage for persisted privacy settings load/save behavior. |
| shadow-agent/tests/electron/start-main-process.test.ts | Extends IPC handler tests for privacy policy + updates and export options. |
| shadow-agent/tests/electron/start-main-process-privacy.test.ts | Verifies privacy settings are forwarded into capture + inference startup. |
| shadow-agent/tests/electron/renderer-host.test.ts | Updates bridge host tests to include privacy methods. |
| shadow-agent/tests/capture/session-manager.test.ts | Adds transport-driven session manager integration coverage (reset + parsing). |
| shadow-agent/tests/capture/capture-transports.test.ts | Adds integration tests for file-tail/http/ws/socket transports. |
| shadow-agent/tests/capture.test.ts | Adds regression test ensuring spill-to-disk transcript sanitization. |
| shadow-agent/src/shared/schema.ts | Extends ShadowAgentBridge interface with privacy methods. |
| shadow-agent/src/shared/privacy.ts | Adds persisted privacy settings path/load/save and policy resolution helpers. |
| shadow-agent/src/renderer/renderer-surface-adapter.tsx | Introduces pluggable renderer surface adapter (stable component boundary). |
| shadow-agent/src/renderer/host.ts | Adds privacy methods + capability detection to renderer host abstraction. |
| shadow-agent/src/renderer/components/TimelineScrubber.tsx | Exports props type for adapter typing. |
| shadow-agent/src/renderer/components/ShadowPanel.tsx | Exports props type for adapter typing. |
| shadow-agent/src/renderer/canvas/CanvasRenderer.tsx | Exports props type for adapter typing. |
| shadow-agent/src/renderer/app-state.ts | Adds privacy busy state + privacy update actions/reducer handling. |
| shadow-agent/src/renderer/App.tsx | Adds privacy panel UI, raw export option, and renderer surface adapter wiring. |
| shadow-agent/src/inference/shadow-inference-engine.ts | Improves buffer capability detection via a type guard and uses it consistently. |
| shadow-agent/src/inference/direct-api.ts | Removes obsolete @ts-expect-error for Anthropic SDK import. |
| shadow-agent/src/electron/start-main-process.ts | Wires capture transport selection, persisted privacy updates, IPC privacy handlers, and inference refresh on privacy change. |
| shadow-agent/src/electron/preload.ts | Exposes privacy IPC methods on the renderer bridge. |
| shadow-agent/src/capture/websocket-transport.ts | Adds WebSocket capture transport with reconnect + message normalization. |
| shadow-agent/src/capture/transcript-watcher.ts | Refactors file tailer into a robust file-tail capture transport with rotation detection + discovery. |
| shadow-agent/src/capture/socket-transport.ts | Adds TCP socket capture transport with reconnect + basic backpressure pausing. |
| shadow-agent/src/capture/session-manager.ts | Refactors session manager to be transport-driven and handle session resets cleanly. |
| shadow-agent/src/capture/ipc-bridge.ts | Plumbs dynamic privacy into IPC event preparation. |
| shadow-agent/src/capture/http-stream-transport.ts | Adds streaming HTTP transport with reconnect and incremental chunk delivery. |
| shadow-agent/src/capture/event-buffer.ts | Sanitizes transcript content before spilling event envelopes to disk. |
| shadow-agent/src/capture/capture-transports.ts | Adds env-driven transport option resolution and transport factory. |
| shadow-agent/src/capture/capture-transport.ts | Introduces core transport interfaces/types (session/chunk/context/subscription). |
| shadow-agent/README.md | Documents persisted privacy toggles and env override behavior. |
| docs/getting-started.md | Adds privacy controls section describing persisted settings and overrides. |
| docs/domain-inference.md | Updates privacy model documentation to mention persisted privacy toggles. |
| docs/domain-events.md | Updates domain docs to describe supported capture transports. |
| docs/architecture.md | Updates architecture to reflect transport-pluggable capture and privacy consent model. |
```tsx
<p className="privacy-note">
  Transcript content is sanitized before render, export, persistence, and prompt delivery unless you explicitly
  allow a narrower exception below.
</p>
```
The Privacy panel text says transcript content is sanitized before “render, export, persistence” unless the user opts in, but the capture queue spill-to-disk path currently always sanitizes events (and the renderer snapshot is always sanitized too). Either adjust this copy to match the actual behavior, or wire allowRawTranscriptStorage into the persistence/export pipeline so the opt-in actually changes what gets persisted/exported.
```ts
const updatePrivacySettings = async (updates: Partial<TranscriptPrivacySettings>) => {
  privacySettings = await saveTranscriptPrivacySettings({
    ...privacySettings,
    ...updates
  });
```
updatePrivacySettings merges updates via object spread. If the IPC payload includes keys with undefined values, they will overwrite the existing settings and saveTranscriptPrivacySettings will normalize them to false, unintentionally disabling a setting. Consider filtering out undefined keys (or validating the updates object) before merging/saving.
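Filtering undefined-valued keys before the spread can be done generically; this helper is a sketch, not code from the PR:

```typescript
// Sketch: strip keys whose value is undefined from an updates payload so
// they cannot overwrite existing settings during an object spread.
function mergeDefined<T extends object>(current: T, updates: Partial<T>): T {
  const filtered = Object.fromEntries(
    Object.entries(updates).filter(([, value]) => value !== undefined)
  ) as Partial<T>;
  return { ...current, ...filtered };
}
```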
```ts
const sanitizedEvents = prepareEventsForStorage(envelopes.map((envelope) => envelope.event));
const payload = `${envelopes
  .map((envelope, index) =>
    JSON.stringify({
      ...envelope,
      event: sanitizedEvents[index] ?? envelope.event
    })
  )
  .join('\n')}\n`;
```
writeSpillFile sanitizes events using prepareEventsForStorage(...) without passing privacy settings, so it always uses DEFAULT_TRANSCRIPT_PRIVACY_SETTINGS (sanitized) and there’s no way for allowRawTranscriptStorage to ever affect spill-to-disk persistence. If raw persistence is meant to be covered by the opt-in, consider plumbing a getPrivacy()/settings parameter into the event buffer and calling prepareEventsForStorage(events, getPrivacy(), { storeRawTranscript: getPrivacy().allowRawTranscriptStorage }) (or equivalent).
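Plumbing privacy into the spill path could look like the following sketch; `prepareEventsForStorage` is passed in as a parameter because only its rough shape is implied by the comment, and the option names mirror the reviewer's suggestion rather than a confirmed API:

```typescript
// Sketch: the buffer asks for the current privacy policy at spill time and
// forwards the raw-storage opt-in to the sanitization step.
interface PrivacyPolicy {
  allowRawTranscriptStorage: boolean;
}

function buildSpillPayload<E>(
  events: E[],
  getPrivacy: () => PrivacyPolicy,
  prepare: (events: E[], privacy: PrivacyPolicy, opts: { storeRawTranscript: boolean }) => E[]
): string {
  const privacy = getPrivacy();
  const prepared = prepare(events, privacy, {
    storeRawTranscript: privacy.allowRawTranscriptStorage
  });
  return `${prepared.map((event) => JSON.stringify(event)).join('\n')}\n`;
}
```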
```ts
return {
  kind: 'http-stream',
  url: requiredEnv(env.SHADOW_CAPTURE_HTTP_URL ?? env.SHADOW_CAPTURE_TARGET, 'SHADOW_CAPTURE_HTTP_URL'),
  reconnectDelayMs: env.SHADOW_CAPTURE_RECONNECT_MS ? Number.parseInt(env.SHADOW_CAPTURE_RECONNECT_MS, 10) : undefined,
  sessionId: env.SHADOW_CAPTURE_SESSION_ID?.trim() || undefined,
  sessionLabel: env.SHADOW_CAPTURE_SESSION_LABEL?.trim() || undefined
} satisfies HttpStreamCaptureTransportOptions;
case 'ws':
case 'websocket':
  return {
    kind: 'websocket',
    url: requiredEnv(env.SHADOW_CAPTURE_WS_URL ?? env.SHADOW_CAPTURE_TARGET, 'SHADOW_CAPTURE_WS_URL'),
    reconnectDelayMs: env.SHADOW_CAPTURE_RECONNECT_MS ? Number.parseInt(env.SHADOW_CAPTURE_RECONNECT_MS, 10) : undefined,
    sessionId: env.SHADOW_CAPTURE_SESSION_ID?.trim() || undefined,
    sessionLabel: env.SHADOW_CAPTURE_SESSION_LABEL?.trim() || undefined
  } satisfies WebSocketCaptureTransportOptions;
```
requiredEnv’s error message always names SHADOW_CAPTURE_HTTP_URL / SHADOW_CAPTURE_WS_URL, but the code also accepts SHADOW_CAPTURE_TARGET as a fallback. If neither is set, the thrown message can be misleading—consider mentioning both accepted variables (or passing the variable name that was actually required for that transport).
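One way to make the error honest is to pass every accepted variable name; this two-argument `requiredEnv` is a hypothetical reshaping of the PR's helper:

```typescript
// Sketch: mention all accepted environment variables when none is set, so
// the SHADOW_CAPTURE_TARGET fallback is discoverable from the error alone.
function requiredEnv(value: string | undefined, names: string[]): string {
  if (value === undefined || value.trim() === '') {
    throw new Error(`Missing required environment variable: set one of ${names.join(' or ')}`);
  }
  return value;
}

// Call-site shape under this signature:
//   requiredEnv(env.SHADOW_CAPTURE_HTTP_URL ?? env.SHADOW_CAPTURE_TARGET,
//               ['SHADOW_CAPTURE_HTTP_URL', 'SHADOW_CAPTURE_TARGET'])
```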
```diff
 const exportReplay = async (options?: { storeRawTranscript?: boolean }) => {
   if (!snapshot || !host.exportReplayJsonl) {
     return;
   }

   dispatch({ type: 'EXPORT_START' });
   try {
-    const result = await host.exportReplayJsonl(snapshot.events, exportName);
+    const result = await host.exportReplayJsonl(snapshot.events, exportName, options);
     if (result.error) {
```
exportReplay passes snapshot.events to host.exportReplayJsonl even for the “Export raw replay” flow. However snapshot.events are always sanitized by buildRendererInput (via prepareEventsForStorage(..., {storeRawTranscript:false})), so exporting with { storeRawTranscript: true } cannot actually include raw transcript content. Consider sourcing raw events for export from the main process (e.g., the event buffer / loaded replay store), or having the snapshot include raw events when privacy.allowRawTranscriptStorage is enabled (while keeping UI rendering sanitized separately).
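Sourcing export events from the main process could take roughly this shape; `getReplayEventsForExport` is a hypothetical bridge method sketched to illustrate the boundary, not something the PR defines:

```typescript
// Sketch: the renderer keeps rendering from its sanitized snapshot, but the
// export path asks the host (main process) for events, letting the privacy
// policy decide there whether raw transcript content may be included.
interface ExportHost<E> {
  getReplayEventsForExport(opts: { storeRawTranscript: boolean }): Promise<E[]>;
  exportReplayJsonl(events: E[], name: string): Promise<{ error?: string }>;
}

async function exportReplay<E>(
  host: ExportHost<E>,
  exportName: string,
  options?: { storeRawTranscript?: boolean }
): Promise<{ error?: string }> {
  const events = await host.getReplayEventsForExport({
    storeRawTranscript: options?.storeRawTranscript ?? false
  });
  return host.exportReplayJsonl(events, exportName);
}
```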
```ts
async start(context: CaptureTransportContext): Promise<CaptureTransportSubscription> {
  const session = buildSession(options);
  const reconnectDelayMs = Math.max(50, options.reconnectDelayMs ?? DEFAULT_RECONNECT_DELAY_MS);
  const decoder = new TextDecoder();
```
Suggestion: TextDecoder is created once for the whole transport and reused across reconnect attempts. If a stream drops mid-multibyte character, decoder state can leak into the next connection and corrupt the first emitted chunk after reconnect. Create a fresh decoder per connection (inside connect) or explicitly reset it on reconnect/error boundaries. [logic error]
Severity Level: Major ⚠️
- ❌ HTTP-stream capture drops or corrupts events around reconnects.
- ⚠️ Incremental parser logs malformed_line and discards affected JSON.
- ⚠️ Shadow Agent UI may show gaps in live transcript.

Steps of Reproduction ✅
1. Start the Electron main process via `startMainProcess()` in
`shadow-agent/src/electron/start-main-process.ts:7-21`, with environment
`SHADOW_CAPTURE_TRANSPORT=http-stream` and `SHADOW_CAPTURE_HTTP_URL` pointing at a
long-lived HTTP NDJSON stream that emits UTF-8 text (including multibyte characters such
as emoji or non-ASCII letters).
2. In `start-main-process.ts:17-21`, `resolveCaptureTransportOptionsFromEnv()` from
`shadow-agent/src/capture/capture-transports.ts:24-43` builds `{ kind: 'http-stream', url,
reconnectDelayMs, ... }`, which `createSessionManager()` in
`shadow-agent/src/capture/session-manager.ts:44-59` passes as `options.transport`. Because
this object does not have a `start` method, `createSessionManager` calls
`createCaptureTransport()` (`capture-transports.ts:67-72`), which returns
`createHttpStreamCaptureTransport(options)`.
3. `SessionManager.start()` at `shadow-agent/src/capture/session-manager.ts:110-120` calls
`effectiveTransport.start(...)`, which invokes
`createHttpStreamCaptureTransport().start()` in
`shadow-agent/src/capture/http-stream-transport.ts:30-33`. This creates a single
`TextDecoder` instance (`decoder`) for the lifetime of the transport, then repeatedly
calls `connect()` (lines 57-76, 105-114) on errors/reconnects, using the same decoder
across all HTTP connections while streaming chunks to `context.onChunk` (lines 84-100).
`onChunk` forwards each decoded string to `activeParser.push(chunk)` in
`session-manager.ts:153-157`, which is implemented by `createIncrementalParser` in
`shadow-agent/src/capture/incremental-parser.ts:17-27`.
4. Induce a network drop mid-multibyte character on the HTTP stream (for example, the
server writes a JSON line containing a non-ASCII character, then the TCP connection is
interrupted so that the last UTF-8 code point is split across the final chunk of the first
connection and the first chunk of the next connection). The read loop in
`http-stream-transport.ts:84-101` is interrupted with an error, so it exits via the
`catch` block at lines 105-108 and `finally` block at 109-114 without ever hitting `if
(done) { const trailing = decoder.decode(); ... }` at lines 88-92, meaning the decoder's
internal partial multibyte state is never flushed or reset. On the subsequent reconnect
(`scheduleReconnect()` at lines 47-55 calling `connect()` again), the same `TextDecoder`
instance is reused; its buffered partial bytes from the previous connection are combined
with the first bytes from the new connection when `decoder.decode(value, { stream: true
})` is called (lines 95-99). This corrupts the first decoded character(s) of the new
stream and can mangle the surrounding NDJSON line so that `createIncrementalParser.push()`
(`incremental-parser.ts:21-31`) sees malformed JSON, logs `parser.malformed_line` (line
34), and drops the record. In practice, this manifests as missing or corrupted capture
events around reconnect boundaries when using HTTP streaming with non-ASCII content, even
though tests (`shadow-agent/tests/capture/capture-transports.test.ts:9-34`) only cover
ASCII payloads and do not exercise this failure mode.
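The failure and the fix can be demonstrated standalone with raw UTF-8 bytes; this sketch contains no transport code, only the per-connection decoder discipline the suggestion recommends:

```typescript
// Sketch: one fresh TextDecoder per connection. A connection that drops
// mid-multibyte character flushes to U+FFFD at its own boundary instead of
// leaking buffered bytes into the first chunk of the next connection.
function decodeAcrossConnections(chunksPerConnection: Uint8Array[][]): string[] {
  const outputs: string[] = [];
  for (const chunks of chunksPerConnection) {
    const decoder = new TextDecoder('utf-8'); // fresh per connection (the fix)
    let text = '';
    for (const chunk of chunks) {
      text += decoder.decode(chunk, { stream: true });
    }
    text += decoder.decode(); // flush any partial state at this boundary
    outputs.push(text);
  }
  return outputs;
}
```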
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** shadow-agent/src/capture/http-stream-transport.ts
**Line:** 33:33
**Comment:**
*Logic Error: `TextDecoder` is created once for the whole transport and reused across reconnect attempts. If a stream drops mid-multibyte character, decoder state can leak into the next connection and corrupt the first emitted chunk after reconnect. Create a fresh decoder per connection (inside `connect`) or explicitly reset it on reconnect/error boundaries.
Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
Once fix is implemented, also check other comments on the same PR, and ask user if the user wants to fix the rest of the comments as well. if said yes, then fetch all the comments validate the correctness and implement a minimal fix

```ts
return {
  kind: 'http-stream',
  url: requiredEnv(env.SHADOW_CAPTURE_HTTP_URL ?? env.SHADOW_CAPTURE_TARGET, 'SHADOW_CAPTURE_HTTP_URL'),
  reconnectDelayMs: env.SHADOW_CAPTURE_RECONNECT_MS ? Number.parseInt(env.SHADOW_CAPTURE_RECONNECT_MS, 10) : undefined,
```
Suggestion: SHADOW_CAPTURE_RECONNECT_MS is parsed with Number.parseInt but never validated, so non-numeric values become NaN and flow into transport reconnect delays. In the transports this results in Math.max(50, NaN) => NaN, which makes reconnect timers effectively immediate and can create a tight reconnect loop. Validate parsed reconnect values and reject invalid input before returning options. [logic error]
Severity Level: Major ⚠️
- ❌ Live capture transports reconnect almost instantly on misconfigured delay.
- ⚠️ Remote HTTP/WebSocket/socket capture may hammer endpoints after failures.
- ⚠️ Electron main process misconfigurations are hard to diagnose quickly.

Steps of Reproduction ✅
1. Configure the Electron main process to use an HTTP capture transport by setting
environment variables before launch (read in
`shadow-agent/src/capture/capture-transports.ts:24-40`):
- `SHADOW_CAPTURE_TRANSPORT=http`
- `SHADOW_CAPTURE_HTTP_URL=http://localhost:9999/stream`
- `SHADOW_CAPTURE_RECONNECT_MS=abc` (intentionally non-numeric).
2. Start the app so `startMainProcess()` in
`shadow-agent/src/electron/start-main-process.ts:7-21` runs. It calls
`createSessionManager(..., { transport: resolveCaptureTransportOptionsFromEnv() })`, so
`resolveCaptureTransportOptionsFromEnv` builds `{ kind: 'http-stream', url,
reconnectDelayMs: Number.parseInt('abc', 10) }` at `capture-transports.ts:35-43`. Because
`parseInt('abc', 10)` is `NaN`, `reconnectDelayMs` is set to `NaN` instead of falling back
to a sane default.
3. Inside `createSessionManager` at `shadow-agent/src/capture/session-manager.ts:17-21`,
the options are passed to `createCaptureTransport`, which for `kind: 'http-stream'`
returns the HTTP stream transport (`capture-transports.ts:67-72`). The resulting transport
instance is started in `SessionManager.start()` at `session-manager.ts:70-87`, wiring it
into the live capture pipeline.
4. In the HTTP stream transport implementation at
`shadow-agent/src/capture/http-stream-transport.ts:30-55`, the reconnect delay is computed
as `const reconnectDelayMs = Math.max(50, options.reconnectDelayMs ??
DEFAULT_RECONNECT_DELAY_MS);` (`line 32`). With `options.reconnectDelayMs = NaN`, the
nullish coalescing operator `??` does not fall back to `DEFAULT_RECONNECT_DELAY_MS`, so
the expression becomes `Math.max(50, NaN)` which evaluates to `NaN`. When the upstream
HTTP endpoint is unavailable or returns an error, `scheduleReconnect()` (`lines 47-55`)
calls `setTimeout(() => { reconnectTimer = null; void connect(); }, reconnectDelayMs);`
with `reconnectDelayMs = NaN`. In Node/JS this delay is coerced to `0`, causing immediate
reconnect attempts with effectively no backoff. The same pattern exists for WebSocket and
socket transports (`websocket-transport.ts:52, 66-73` and `socket-transport.ts:33,
47-54`), so any of those transports misconfigured with a non-numeric
`SHADOW_CAPTURE_RECONNECT_MS` will exhibit a tight reconnect loop.
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** shadow-agent/src/capture/capture-transports.ts
**Line:** 40:40
**Comment:**
*Logic Error: `SHADOW_CAPTURE_RECONNECT_MS` is parsed with `Number.parseInt` but never validated, so non-numeric values become `NaN` and flow into transport reconnect delays. In the transports this results in `Math.max(50, NaN)` => `NaN`, which makes reconnect timers effectively immediate and can create a tight reconnect loop. Validate parsed reconnect values and reject invalid input before returning options.
Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
Once fix is implemented, also check other comments on the same PR, and ask user if the user wants to fix the rest of the comments as well. if said yes, then fetch all the comments validate the correctness and implement a minimal fix

```diff
   activeSession = session;
-  sessionTitle = `Live: ${session.sessionId.slice(0, 12)}`;
+  sessionTitle = session.label;
   await buffer.setSession(session.sessionId);

   logger.info('capture', 'session_manager.start_session', {
     filePath: session.filePath,
     sessionId: session.sessionId,
   });

   // Catch-up: read the entire existing file first
   try {
     const raw = await readFile(session.filePath, 'utf8');
     const lines = raw.split(/\r?\n/).filter((l) => l.trim());
     const catchupEvents = lines.flatMap((line) => {
       try {
         const entry = JSON.parse(line) as Record<string, unknown>;
         return normalizeEntry(entry, session.sessionId);
       } catch {
         return [];
       }
     });
     if (catchupEvents.length > 0) {
       const result = await buffer.push(catchupEvents);
       logger.info('capture', 'session_manager.catchup', {
         count: catchupEvents.length,
         backpressureLevel: result.backpressure.level
       });
     }
   } catch (err) {
     logger.warn('capture', 'session_manager.catchup_failed', { error: err });
   }

   // Then watch for incremental updates
-  const parser = createIncrementalParser((entry) => {
+  activeParser = createIncrementalParser((entry) => {
```
Suggestion: Session state is marked active before buffer.setSession completes, but that call is async. If chunks arrive during this window (which can happen with socket/websocket transports that don't block message delivery on onSessionStarted completion), they are parsed by the old parser and can be lost or attributed to the wrong session. Move the activeSession switch and parser replacement to happen atomically after reset completes, or gate chunk processing on a session-initialization promise. [race condition]
Severity Level: Major ⚠️
- ❌ WebSocket/socket live capture can drop first transcript chunk.
- ⚠️ Inference engine state misses initial assistant/user messages.
- ⚠️ Renderer snapshot (`shadow:events`) may omit first events.

Steps of Reproduction ✅
1. Configure the app to use a WebSocket or socket transport by setting
`SHADOW_CAPTURE_TRANSPORT=websocket` or `socket` (transport options resolved in
`shadow-agent/src/capture/capture-transports.ts:24-64` and wired into
`createSessionManager` in `shadow-agent/src/electron/start-main-process.ts:11-15`).
2. Start the Electron main process; `startMainProcess()` in
`shadow-agent/src/electron/start-main-process.ts:1-16` creates a `SessionManager` and
calls `void currentSessionManager.start()` at line 52, which in turn calls the selected
transport's `start()` method (`CaptureTransport.start` interface in
`shadow-agent/src/capture/capture-transport.ts:30-34`).
3. When the WebSocket or TCP socket connects, the transport invokes
`context.onSessionStarted(session)` asynchronously: WebSocket in
`shadow-agent/src/capture/websocket-transport.ts:85-93` and socket in
`shadow-agent/src/capture/socket-transport.ts:74-82`. In `SessionManager.start()` this
calls `onSessionStarted` at `shadow-agent/src/capture/session-manager.ts:125-141`, which
calls `startSession(session)` (line 140). Inside `startSession`
(`shadow-agent/src/capture/session-manager.ts:85-100`), `activeSession` is set immediately
(line 86) but `await buffer.setSession(session.sessionId)` (line 88, implemented in
`shadow-agent/src/capture/event-buffer.ts:341-369`) runs asynchronously before
`activeParser` is replaced (line 89).
4. If the server sends the first message immediately after the connection opens (e.g., a
streaming token), the transport's message handler invokes `context.onChunk` while
`buffer.setSession` is still in flight: WebSocket in
`shadow-agent/src/capture/websocket-transport.ts:96-102` and socket in
`shadow-agent/src/capture/socket-transport.ts:85-97`. `SessionManager`'s `onChunk` handler
(`shadow-agent/src/capture/session-manager.ts:153-158`) sees `activeSession` already set
for that `session`, skips `startSession`, and calls `activeParser.push(chunk)` (line 157).
Because `activeParser` still refers to the old parser initialized at line 52 (which either
ignores entries or is tied to the previous session), this first chunk is discarded or
mis-attributed and never appears in snapshots returned by `getCurrentSnapshot()`
(`shadow-agent/src/capture/session-manager.ts:175-176`) or in the renderer via the IPC
bridge (`shadow-agent/src/capture/ipc-bridge.ts:35-83`).
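One way to close this window is to gate chunk handling on a session-initialization promise, so a chunk that races the async buffer reset waits for the new parser instead of hitting the old one. A minimal runnable sketch with hypothetical names (the real wiring lives in `session-manager.ts`):

```ts
// Hypothetical stand-ins for the session manager's mutable state.
const parsed: string[] = [];
let activeParser: (chunk: string) => void = () => { parsed.push('OLD-PARSER'); };
let initDone: Promise<void> = Promise.resolve();

function startSession(sessionId: string): void {
  // Publish the init promise synchronously, before any async work begins.
  initDone = (async () => {
    await new Promise((resolve) => setTimeout(resolve, 10)); // stands in for buffer.setSession()
    activeParser = (chunk) => { parsed.push(`${sessionId}:${chunk}`); }; // swap only after reset
  })();
}

async function onChunk(chunk: string): Promise<void> {
  await initDone; // chunks arriving mid-init wait instead of using the stale parser
  activeParser(chunk);
}

startSession('s1');
void onChunk('first').then(() => console.log(parsed)); // [ 's1:first' ]
```

The alternative fix from the comment — swapping `activeSession` and `activeParser` atomically only after `buffer.setSession` resolves — works too, but then early chunks must be queued rather than awaited.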
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** shadow-agent/src/capture/session-manager.ts
**Line:** 86:89
**Comment:**
*Race Condition: Session state is marked active before `buffer.setSession` completes, but that call is async. If chunks arrive during this window (which can happen with socket/websocket transports that don't block message delivery on `onSessionStarted` completion), they are parsed by the old parser and can be lost or attributed to the wrong session. Move the `activeSession` switch and parser replacement to happen atomically after reset completes, or gate chunk processing on a session-initialization promise.
Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
Once fix is implemented, also check other comments on the same PR, and ask user if the user wants to fix the rest of the comments as well. if said yes, then fetch all the comments validate the correctness and implement a minimal fix

CodeAnt AI finished reviewing your PR.
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
shadow-agent/tests/electron/renderer-host.test.ts (1)
43-64: 🧹 Nitpick | 🔵 Trivial

Add delegation assertions for the new privacy bridge methods.

This test now mocks `getPrivacyPolicy` and `updatePrivacySettings`, but it never verifies host-level forwarding for them. Adding those checks will lock the contract and prevent silent adapter drift.

✅ Suggested assertion extension

```diff
 it('adapts the preload bridge into the platform-agnostic renderer host', async () => {
 @@
   await host.openReplayFile?.();
+  await host.getPrivacyPolicy?.();
+  await host.updatePrivacySettings?.({ allowOffHostInference: true });
   await host.exportReplayJsonl?.([], 'session.jsonl');
 @@
   expect(bridge.openReplayFile).toHaveBeenCalledTimes(1);
+  expect(bridge.getPrivacyPolicy).toHaveBeenCalledTimes(1);
+  expect(bridge.updatePrivacySettings).toHaveBeenCalledWith({ allowOffHostInference: true });
   expect(bridge.exportReplayJsonl).toHaveBeenCalledWith([], 'session.jsonl');
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@shadow-agent/tests/electron/renderer-host.test.ts` around lines 43 - 64, The test currently mocks bridge.getPrivacyPolicy and bridge.updatePrivacySettings but doesn't assert that createElectronHost forwards calls; call host.getPrivacyPolicy and host.updatePrivacySettings (with a sample settings object), await their results, and assert bridge.getPrivacyPolicy was called once and bridge.updatePrivacySettings was called with the same settings; also assert host.getPrivacyPolicy !== bridge.getPrivacyPolicy (and similarly for updatePrivacySettings) to ensure the host adapts rather than re-exports the bridge functions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@shadow-agent/src/capture/capture-transports.ts`:
- Around line 16-20: The parsing functions currently use Number.parseInt which
accepts partial numeric prefixes (e.g., "8080abc"); update parsePort and the
similar parsers for SHADOW_CAPTURE_RECONNECT_MS to first validate the entire raw
string with a strict /^\d+$/ check (reject undefined or non-matching strings),
then convert with Number(raw) and apply the existing numeric range checks (port
1–65535, reconnect delay >= 0). Replace Number.parseInt(...) calls in parsePort
and the other parse/reconnect parsing locations with this regex validation +
Number(...) conversion and ensure error messages include the raw input and env
name.
In `@shadow-agent/src/capture/http-stream-transport.ts`:
- Line 33: Remove the module-level TextDecoder instance and instead instantiate
a fresh TextDecoder each time a new stream is started or a reconnect occurs:
replace the single shared "decoder" with local creation at the start of the
streaming logic (the code path that uses { stream: true }) and ensure any
reconnect/retry branch re-creates the decoder before resuming read() of the
response body so incomplete multi-byte sequences are not retained across
connections.
In `@shadow-agent/src/capture/socket-transport.ts`:
- Around line 74-98: Wrap the async IIFEs in both the 'connect' and 'data'
handlers with try/catch blocks so thrown errors are not swallowed: for connect
(inside the IIFE that calls context.onSessionStarted / context.onSessionReset)
catch errors and propagate them by emitting an 'error' on nextSocket
(nextSocket.emit('error', err)) or logging via the module logger; for data
(inside the IIFE that calls context.getBackpressure and context.onChunk) catch
errors, ensure the socket is resumed if it was paused (and stopped is false),
and emit/log the error (nextSocket.emit('error', err)). This ensures failures in
context.onSessionStarted, context.onSessionReset, or context.onChunk are
surfaced and the socket is not left paused.
In `@shadow-agent/src/capture/transcript-watcher.ts`:
- Around line 201-257: readNew() currently dereferences mutable activeSession
and activeHandle across awaits which allows activateSession() to swap/close them
mid-read; snapshot the current session and handle (e.g., const session =
activeSession; const handle = activeHandle) immediately after verifying
filePath/info/fingerprint and before any awaits that read chunks, then before
calling context.onChunk() confirm session === activeSession and handle ===
activeHandle (or abort/stop the loop if they differ) so bytes from an old file
aren't emitted under a new session; apply the same snapshot-and-validate pattern
to the similar later read loop (the block around where onChunk is called again).
- Around line 338-363: The start() call from
createFileTailCaptureTransport(...).start() can reject and currently has no
rejection handler, causing unhandled rejections and leaving stop() only chaining
.then(...); change the creation of subscriptionPromise so rejections are caught
and converted into a resolved "noop" subscription object (an object with a
stop() no-op and optional logged error) so the wrapper always returns a promise
that resolves to a subscription even on failure; specifically handle errors from
createFileTailCaptureTransport(...).start() (reference:
createFileTailCaptureTransport, start, subscriptionPromise) by appending
.catch(err => { processLogger?.error(...) or console.error(...); return { stop:
async () => {/*noop*/} }; }) and ensure stop() (which checks stopped and calls
subscriptionPromise.then(subscription => subscription.stop())) still works when
the promise resolved to the noop subscription.
In `@shadow-agent/src/capture/websocket-transport.ts`:
- Around line 124-128: The code checks readyState against WebSocket.OPEN causing
a ReferenceError in Node tests; instead remove the global WebSocket.OPEN
reference and the conditional check and always call socket?.close() (i.e.,
replace the readyState-based branch with a single socket?.close() call) so the
code no longer depends on a global WebSocket constant and safely closes the
socket via the existing socket.close() method.
In `@shadow-agent/src/electron/start-main-process.ts`:
- Around line 164-175: Concurrent calls to updatePrivacySettings can race and
stop the running inferenceEngine too early; serialize updates via a
single-flight promise chain and only replace the engine after the new instance
is started successfully. Modify updatePrivacySettings to enqueue work on a
module-scoped promise (e.g., pendingPrivacyUpdate) so each invocation awaits the
previous; inside the queued task, call saveTranscriptPrivacySettings with the
merged settings, create a new engine via createRuntimeInferenceEngine(), start
the new engine (await newEngine.start()), and only then stop and swap the old
inferenceEngine reference; on start failure, keep the existing engine running
and propagate the error. Ensure you reference updatePrivacySettings,
refreshInferenceEngine (or inline its logic), inferenceEngine,
createRuntimeInferenceEngine, saveTranscriptPrivacySettings, and
inferenceEngine.start/stop so the implementation updates the correct symbols.
In `@shadow-agent/src/inference/shadow-inference-engine.ts`:
- Around line 156-163: After calling checkpointBuffer.registerConsumer and
setting unsubscribeBuffer = buffer.subscribe(...), call drainPendingEvents once
immediately (e.g., await drainPendingEvents() or otherwise invoke it) right
after setting up the subscription so any events that arrived between
registerConsumer and the subscription callback are processed; keep the existing
drain inside the subscribe callback for ongoing events and reference the
existing symbols checkpointBuffer.registerConsumer, buffer.subscribe,
unsubscribeBuffer and drainPendingEvents when making the change.
In `@shadow-agent/src/renderer/App.tsx`:
- Around line 310-315: Replace the hardcoded privacyPolicy object with a call to
resolvePrivacyPolicy using the incoming snapshot privacy state: call
resolvePrivacyPolicy(snapshot?.privacy) (or resolvePrivacyPolicy with a default)
so processingMode is computed from allowOffHostInference correctly; locate the
assignment to privacyPolicy in App.tsx and replace the current manual object
construction with resolvePrivacyPolicy(...) to derive the final PrivacyPolicy.
In `@shadow-agent/tests/renderer/app-state.test.ts`:
- Around line 181-198: The test for PRIVACY_UPDATE_SUCCESS doesn't assert
allowOffHostInference was applied to the snapshot; update the test in
app-state.test.ts to assert that after calling appReducer(prior, { type:
'PRIVACY_UPDATE_SUCCESS', ... }), next.snapshot?.privacy.allowOffHostInference
is true so the appReducer mapping for PRIVACY_UPDATE_SUCCESS (and the snapshot
privacy fields) are fully validated alongside existing assertions for
processingMode and allowRawTranscriptStorage.
---
Outside diff comments:
In `@shadow-agent/tests/electron/renderer-host.test.ts`:
- Around line 43-64: The test currently mocks bridge.getPrivacyPolicy and
bridge.updatePrivacySettings but doesn't assert that createElectronHost forwards
calls; call host.getPrivacyPolicy and host.updatePrivacySettings (with a sample
settings object), await their results, and assert bridge.getPrivacyPolicy was
called once and bridge.updatePrivacySettings was called with the same settings;
also assert host.getPrivacyPolicy !== bridge.getPrivacyPolicy (and similarly for
updatePrivacySettings) to ensure the host adapts rather than re-exports the
bridge functions.
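The single-flight serialization suggested above for `updatePrivacySettings` can be sketched as a module-scoped promise chain that starts the replacement engine before stopping the old one. Types and names here are illustrative stand-ins, not the project's actual API:

```ts
// Illustrative engine shape; the real one comes from createRuntimeInferenceEngine().
type Engine = { id: number; start: () => Promise<void>; stop: () => Promise<void> };

let engine: Engine | null = null;
let pendingUpdate: Promise<void> = Promise.resolve(); // single-flight queue
let nextId = 1;

function createEngine(): Engine {
  const id = nextId++;
  return { id, start: async () => {}, stop: async () => {} };
}

function updatePrivacySettings(): Promise<void> {
  // Each call chains onto the previous one, so updates never interleave.
  pendingUpdate = pendingUpdate.then(async () => {
    const replacement = createEngine();
    await replacement.start(); // start the new engine first; on failure, keep the old one
    const previous = engine;
    engine = replacement;      // swap only after a successful start
    await previous?.stop();    // then retire the old engine
  });
  return pendingUpdate;
}

void Promise.all([updatePrivacySettings(), updatePrivacySettings()]).then(() => {
  console.log(engine?.id); // 2 — both updates applied, in order
});
```

One caveat with a bare chain like this: a rejected task leaves the chain rejected, so a production version should also reset `pendingUpdate` in a `catch`/`finally` step.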
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: ff89c498-82fd-4980-aea7-90b421ad7157
📒 Files selected for processing (37)
- docs/architecture.md
- docs/domain-events.md
- docs/domain-inference.md
- docs/getting-started.md
- shadow-agent/README.md
- shadow-agent/src/capture/capture-transport.ts
- shadow-agent/src/capture/capture-transports.ts
- shadow-agent/src/capture/event-buffer.ts
- shadow-agent/src/capture/http-stream-transport.ts
- shadow-agent/src/capture/ipc-bridge.ts
- shadow-agent/src/capture/session-manager.ts
- shadow-agent/src/capture/socket-transport.ts
- shadow-agent/src/capture/transcript-watcher.ts
- shadow-agent/src/capture/websocket-transport.ts
- shadow-agent/src/electron/preload.ts
- shadow-agent/src/electron/start-main-process.ts
- shadow-agent/src/inference/direct-api.ts
- shadow-agent/src/inference/shadow-inference-engine.ts
- shadow-agent/src/renderer/App.tsx
- shadow-agent/src/renderer/app-state.ts
- shadow-agent/src/renderer/canvas/CanvasRenderer.tsx
- shadow-agent/src/renderer/components/ShadowPanel.tsx
- shadow-agent/src/renderer/components/TimelineScrubber.tsx
- shadow-agent/src/renderer/host.ts
- shadow-agent/src/renderer/renderer-surface-adapter.tsx
- shadow-agent/src/shared/privacy.ts
- shadow-agent/src/shared/schema.ts
- shadow-agent/tests/capture.test.ts
- shadow-agent/tests/capture/capture-transports.test.ts
- shadow-agent/tests/capture/session-manager.test.ts
- shadow-agent/tests/electron/renderer-host.test.ts
- shadow-agent/tests/electron/start-main-process-privacy.test.ts
- shadow-agent/tests/electron/start-main-process.test.ts
- shadow-agent/tests/privacy.test.ts
- shadow-agent/tests/renderer/app-state.test.ts
- shadow-agent/tests/renderer/host.test.ts
- shadow-agent/tests/renderer/renderer-surface-adapter.test.ts
💤 Files with no reviewable changes (1)
- shadow-agent/src/inference/direct-api.ts
```ts
function parsePort(raw: string | undefined): number {
  const port = Number.parseInt(requiredEnv(raw, 'SHADOW_CAPTURE_SOCKET_PORT'), 10);
  if (!Number.isInteger(port) || port < 1 || port > 65_535) {
    throw new Error(`Invalid SHADOW_CAPTURE_SOCKET_PORT: ${raw ?? '<unset>'}`);
  }
```
🧩 Analysis chain
🏁 Script executed:
```shell
# Find and examine the capture-transports.ts file
find . -name "capture-transports.ts" -type f | head -5
```

Repository: Coldaine/AgentVisualCrazy
Length of output: 118
🏁 Script executed:
```shell
# Check if the file exists and view its content
cat -n shadow-agent/src/capture/capture-transports.ts | head -100
```

Repository: Coldaine/AgentVisualCrazy
Length of output: 4126
🏁 Script executed:
```shell
# Verify Number.parseInt behavior with partial numeric strings using Node.js
node << 'EOF'
console.log("Testing Number.parseInt() behavior:");
console.log("Number.parseInt('8080abc', 10):", Number.parseInt('8080abc', 10));
console.log("Number.parseInt('9000abc', 10):", Number.parseInt('9000abc', 10));
console.log("Number.parseInt('abc9000', 10):", Number.parseInt('abc9000', 10));
console.log("Number.parseInt('', 10):", Number.parseInt('', 10));
console.log("Number.parseInt(' 9000', 10):", Number.parseInt(' 9000', 10));
console.log("");
console.log("For comparison with Number():");
console.log("Number('8080abc'):", Number('8080abc'));
console.log("Number('9000'):", Number('9000'));
console.log("Number(''):", Number(''));
console.log("Number(' 9000 '):", Number(' 9000 '));
EOF
```

Repository: Coldaine/AgentVisualCrazy
Length of output: 391
Validate numeric environment variables completely; Number.parseInt() accepts partial prefixes.
Number.parseInt('8080abc', 10) returns 8080, so a malformed SHADOW_CAPTURE_SOCKET_PORT value like "9000xyz" is silently parsed as port 9000 and bypasses validation. The same issue exists at lines 40, 49, and 58 where SHADOW_CAPTURE_RECONNECT_MS is parsed without validation, allowing values like "-100abc" to become negative delays and collapse reconnect backoff into tight loops.
Validate the entire string is numeric before parsing. Use a regex check (e.g., /^\d+$/) and convert via Number() instead of Number.parseInt().
Suggested fix

```diff
 function parsePort(raw: string | undefined): number {
-  const port = Number.parseInt(requiredEnv(raw, 'SHADOW_CAPTURE_SOCKET_PORT'), 10);
-  if (!Number.isInteger(port) || port < 1 || port > 65_535) {
+  const value = requiredEnv(raw, 'SHADOW_CAPTURE_SOCKET_PORT');
+  if (!/^\d+$/.test(value)) {
+    throw new Error(`Invalid SHADOW_CAPTURE_SOCKET_PORT: ${raw ?? '<unset>'}`);
+  }
+  const port = Number(value);
+  if (!Number.isInteger(port) || port < 1 || port > 65_535) {
     throw new Error(`Invalid SHADOW_CAPTURE_SOCKET_PORT: ${raw ?? '<unset>'}`);
   }
   return port;
 }
+
+function parseReconnectDelay(raw: string | undefined): number | undefined {
+  if (!raw || !raw.trim()) {
+    return undefined;
+  }
+  const value = raw.trim();
+  if (!/^\d+$/.test(value)) {
+    throw new Error(`Invalid SHADOW_CAPTURE_RECONNECT_MS: ${raw}`);
+  }
+  const delay = Number(value);
+  if (!Number.isInteger(delay) || delay < 0) {
+    throw new Error(`Invalid SHADOW_CAPTURE_RECONNECT_MS: ${raw}`);
+  }
+  return delay;
+}
```

Also applies to lines 40, 49, 58.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-function parsePort(raw: string | undefined): number {
-  const port = Number.parseInt(requiredEnv(raw, 'SHADOW_CAPTURE_SOCKET_PORT'), 10);
-  if (!Number.isInteger(port) || port < 1 || port > 65_535) {
-    throw new Error(`Invalid SHADOW_CAPTURE_SOCKET_PORT: ${raw ?? '<unset>'}`);
-  }
+function parsePort(raw: string | undefined): number {
+  const value = requiredEnv(raw, 'SHADOW_CAPTURE_SOCKET_PORT');
+  if (!/^\d+$/.test(value)) {
+    throw new Error(`Invalid SHADOW_CAPTURE_SOCKET_PORT: ${raw ?? '<unset>'}`);
+  }
+  const port = Number(value);
+  if (!Number.isInteger(port) || port < 1 || port > 65_535) {
+    throw new Error(`Invalid SHADOW_CAPTURE_SOCKET_PORT: ${raw ?? '<unset>'}`);
+  }
+  return port;
+}
+
+function parseReconnectDelay(raw: string | undefined): number | undefined {
+  if (!raw || !raw.trim()) {
+    return undefined;
+  }
+  const value = raw.trim();
+  if (!/^\d+$/.test(value)) {
+    throw new Error(`Invalid SHADOW_CAPTURE_RECONNECT_MS: ${raw}`);
+  }
+  const delay = Number(value);
+  if (!Number.isInteger(delay) || delay < 0) {
+    throw new Error(`Invalid SHADOW_CAPTURE_RECONNECT_MS: ${raw}`);
+  }
+  return delay;
+}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@shadow-agent/src/capture/capture-transports.ts` around lines 16 - 20, The
parsing functions currently use Number.parseInt which accepts partial numeric
prefixes (e.g., "8080abc"); update parsePort and the similar parsers for
SHADOW_CAPTURE_RECONNECT_MS to first validate the entire raw string with a
strict /^\d+$/ check (reject undefined or non-matching strings), then convert
with Number(raw) and apply the existing numeric range checks (port 1–65535,
reconnect delay >= 0). Replace Number.parseInt(...) calls in parsePort and the
other parse/reconnect parsing locations with this regex validation + Number(...)
conversion and ensure error messages include the raw input and env name.
```ts
async start(context: CaptureTransportContext): Promise<CaptureTransportSubscription> {
  const session = buildSession(options);
  const reconnectDelayMs = Math.max(50, options.reconnectDelayMs ?? DEFAULT_RECONNECT_DELAY_MS);
  const decoder = new TextDecoder();
```
Reset TextDecoder on reconnect to avoid corrupted output.
The TextDecoder is created once at line 33 but reused across reconnects. When using { stream: true } mode (line 98), the decoder retains incomplete multi-byte sequences. If a disconnect occurs mid-character, the stale decoder state will corrupt the first chunk after reconnect.
🔧 Proposed fix to reset decoder on reconnect
```diff
 const connect = async (): Promise<void> => {
   if (stopped) {
     return;
   }
   abortController = new AbortController();
+  const decoder = new TextDecoder();
   try {
```

And remove line 33:

```diff
-const decoder = new TextDecoder();
```

Also applies to: 77-82
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@shadow-agent/src/capture/http-stream-transport.ts` at line 33, Remove the
module-level TextDecoder instance and instead instantiate a fresh TextDecoder
each time a new stream is started or a reconnect occurs: replace the single
shared "decoder" with local creation at the start of the streaming logic (the
code path that uses { stream: true }) and ensure any reconnect/retry branch
re-creates the decoder before resuming read() of the response body so incomplete
multi-byte sequences are not retained across connections.
```ts
nextSocket.on('connect', () => {
  void (async () => {
    if (!hasConnected) {
      await context.onSessionStarted(session);
      hasConnected = true;
    } else {
      await context.onSessionReset(session, 'reconnect');
    }
  })();
});

nextSocket.on('data', (chunk: string) => {
  void (async () => {
    const backpressure = context.getBackpressure();
    if (backpressure.shouldThrottle) {
      nextSocket.pause();
      setTimeout(() => {
        if (!stopped) {
          nextSocket.resume();
        }
      }, computeWatchDelay(50, backpressure));
    }
    await context.onChunk({ session, chunk });
  })();
});
```
Add error handling to async event handlers.
The async IIFEs in the connect and data event handlers silently swallow errors. If context.onSessionStarted, context.onSessionReset, or context.onChunk throws, the error is lost and the transport may enter an inconsistent state.
🛡️ Proposed fix to add error handling
```diff
 nextSocket.on('connect', () => {
   void (async () => {
+    try {
       if (!hasConnected) {
         await context.onSessionStarted(session);
         hasConnected = true;
       } else {
         await context.onSessionReset(session, 'reconnect');
       }
+    } catch (error) {
+      logger.error('capture', 'transport.socket.session_callback_error', { error });
+    }
   })();
 });

 nextSocket.on('data', (chunk: string) => {
   void (async () => {
+    try {
       const backpressure = context.getBackpressure();
       if (backpressure.shouldThrottle) {
         nextSocket.pause();
         setTimeout(() => {
           if (!stopped) {
             nextSocket.resume();
           }
         }, computeWatchDelay(50, backpressure));
       }
       await context.onChunk({ session, chunk });
+    } catch (error) {
+      logger.error('capture', 'transport.socket.chunk_callback_error', { error });
+    }
   })();
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@shadow-agent/src/capture/socket-transport.ts` around lines 74 - 98, Wrap the
async IIFEs in both the 'connect' and 'data' handlers with try/catch blocks so
thrown errors are not swallowed: for connect (inside the IIFE that calls
context.onSessionStarted / context.onSessionReset) catch errors and propagate
them by emitting an 'error' on nextSocket (nextSocket.emit('error', err)) or
logging via the module logger; for data (inside the IIFE that calls
context.getBackpressure and context.onChunk) catch errors, ensure the socket is
resumed if it was paused (and stopped is false), and emit/log the error
(nextSocket.emit('error', err)). This ensures failures in
context.onSessionStarted, context.onSessionReset, or context.onChunk are
surfaced and the socket is not left paused.
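The failure mode behind this comment — a rejection inside a `void (async () => { ... })()` handler vanishing silently — and the try/catch remedy can be shown with a bare `EventEmitter`; the in-memory `log` array stands in for the module logger:

```ts
import { EventEmitter } from 'node:events';

const emitter = new EventEmitter();
const log: string[] = [];

emitter.on('data', (chunk: string) => {
  void (async () => {
    try {
      if (chunk === 'bad') throw new Error('callback failed');
      log.push(chunk);
    } catch (error) {
      // Without this catch, the rejection would be dropped (or surface only as
      // an unhandled rejection), and transport state could be left
      // inconsistent — e.g. a paused socket that never resumes.
      log.push(`error:${(error as Error).message}`);
    }
  })();
});

emitter.emit('data', 'ok');
emitter.emit('data', 'bad');
console.log(log); // [ 'ok', 'error:callback failed' ]
```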
```diff
 const readNew = async () => {
   if (stopped || !activeSession) {
     return;
   }
   if (readInFlight) {
     pendingRead = true;
     return;
   }

   readInFlight = true;
   try {
     const filePath = activeSession.path;
     if (!filePath) {
       return;
     }

     const info = await stat(filePath).catch(() => null);
     if (!info) {
       return;
     }

     const fingerprint = await computeFileFingerprint(filePath, fingerprintBytes);

     if (info.size < offset) {
       logger.info('capture', 'watcher.truncated', { filePath, oldOffset: offset });
       await resetCursor('truncation');
     } else if (offset > 0 && hasFingerprintChanged(activeFingerprint, fingerprint)) {
       logger.info('capture', 'watcher.rotated', { filePath });
       await resetCursor('rotation');
     }

     if (info.size <= offset) {
       activeFingerprint = fingerprint;
       return;
     }

     if (!activeHandle) {
       activeHandle = await open(filePath, 'r');
     }

     while (!stopped) {
       const remaining = info.size - offset;
       if (remaining <= 0) {
         break;
       }

       const buffer = Buffer.allocUnsafe(Math.min(remaining, READ_CHUNK));
       const { bytesRead } = await activeHandle.read(buffer, 0, buffer.length, offset);
       if (bytesRead <= 0) {
         break;
       }

-      const toRead = info.size - offset;
-      const buf = Buffer.allocUnsafe(Math.min(toRead, READ_CHUNK));
-      const { bytesRead } = await handle.read(buf, 0, buf.length, offset);
-      if (bytesRead > 0) {
-        offset += bytesRead;
-        onChunk(buf.subarray(0, bytesRead).toString('utf8'));
-        // If there's more data, schedule another read immediately
-        if (bytesRead === READ_CHUNK) {
-          scheduleRead(0);
+      offset += bytesRead;
+      await context.onChunk({
+        session: activeSession,
+        chunk: buffer.subarray(0, bytesRead).toString('utf8')
+      });
```
Don't emit old-file bytes under a new session.
readNew() dereferences mutable activeSession/activeHandle after several awaits, while activateSession() can swap them on rediscovery. That can label bytes from the previous file as the new session, or continue reading through a handle that has just been closed. Snapshot the session/handle at the start of the read and abort if they no longer match before calling onChunk.
Suggested fix
const readNew = async () => {
- if (stopped || !activeSession) {
+ if (stopped || !activeSession) {
return;
}
+ const session = activeSession;
if (readInFlight) {
pendingRead = true;
return;
}
@@
- const filePath = activeSession.path;
+ const filePath = session.path;
if (!filePath) {
return;
}
@@
- if (!activeHandle) {
- activeHandle = await open(filePath, 'r');
+ if (!activeHandle) {
+ activeHandle = await open(filePath, 'r');
}
+ const handle = activeHandle;
while (!stopped) {
+ if (session !== activeSession || handle !== activeHandle) {
+ break;
+ }
const remaining = info.size - offset;
if (remaining <= 0) {
break;
}
const buffer = Buffer.allocUnsafe(Math.min(remaining, READ_CHUNK));
- const { bytesRead } = await activeHandle.read(buffer, 0, buffer.length, offset);
+ const { bytesRead } = await handle.read(buffer, 0, buffer.length, offset);
if (bytesRead <= 0) {
break;
}
offset += bytesRead;
await context.onChunk({
- session: activeSession,
+ session,
chunk: buffer.subarray(0, bytesRead).toString('utf8')
});
});

Also applies to: 276-291
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@shadow-agent/src/capture/transcript-watcher.ts` around lines 201 - 257,
readNew() currently dereferences mutable activeSession and activeHandle across
awaits which allows activateSession() to swap/close them mid-read; snapshot the
current session and handle (e.g., const session = activeSession; const handle =
activeHandle) immediately after verifying filePath/info/fingerprint and before
any awaits that read chunks, then before calling context.onChunk() confirm
session === activeSession and handle === activeHandle (or abort/stop the loop if
they differ) so bytes from an old file aren't emitted under a new session; apply
the same snapshot-and-validate pattern to the similar later read loop (the block
around where onChunk is called again).
```diff
 subscriptionPromise = createFileTailCaptureTransport({
   kind: 'file-tail',
   overridePath: filePath
 }).start({
   getBackpressure: () =>
     options.getBackpressure?.() ?? {
       level: 'normal',
       shouldThrottle: false,
       totalRatio: 0,
       pendingWrites: 0
     },
   onSessionStarted: () => undefined,
   onSessionReset: () => undefined,
   onChunk: ({ chunk }) => {
     onChunk(chunk);
   }
 });

 // Initial read — catch up on existing content
 void readNew();

 return {
   stop() {
-    stopped = true;
-    if (debounceTimer) clearTimeout(debounceTimer);
-    watcher.close();
-    if (handle) {
-      void handle.close().catch(() => undefined);
-      handle = null;
-    }
+    if (stopped) {
+      return;
+    }
+    stopped = true;
+    void subscriptionPromise?.then((subscription) => subscription.stop());
+    subscriptionPromise = null;
     logger.info('capture', 'watcher.stopped', { filePath });
   }
```
Contain start() failures in the legacy watcher wrapper.
The wrapper never attaches a rejection handler to createFileTailCaptureTransport(...).start(). If discovery, file open, or a context hook rejects, Electron gets an unhandled rejection, and stop() only chains .then(...). Convert failures into a logged noop subscription before returning the watcher.
Suggested fix
subscriptionPromise = createFileTailCaptureTransport({
kind: 'file-tail',
overridePath: filePath
}).start({
@@
onChunk: ({ chunk }) => {
onChunk(chunk);
}
- });
+ }).catch((error) => {
+ logger.error('capture', 'watcher.start_failed', { filePath, error });
+ return {
+ stop() {
+ return undefined;
+ }
+ };
+ });

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@shadow-agent/src/capture/transcript-watcher.ts` around lines 338 - 363, The
start() call from createFileTailCaptureTransport(...).start() can reject and
currently has no rejection handler, causing unhandled rejections and leaving
stop() only chaining .then(...); change the creation of subscriptionPromise so
rejections are caught and converted into a resolved "noop" subscription object
(an object with a stop() no-op and optional logged error) so the wrapper always
returns a promise that resolves to a subscription even on failure; specifically
handle errors from createFileTailCaptureTransport(...).start() (reference:
createFileTailCaptureTransport, start, subscriptionPromise) by appending
.catch(err => { processLogger?.error(...) or console.error(...); return { stop:
async () => {/*noop*/} }; }) and ensure stop() (which checks stopped and calls
subscriptionPromise.then(subscription => subscription.stop())) still works when
the promise resolved to the noop subscription.
```ts
if (socket && socket.readyState === WebSocket.OPEN) {
  socket.close();
} else {
  socket?.close();
}
```
Fix WebSocket.OPEN reference causing test failure.
The test failure shows ReferenceError: WebSocket is not defined because WebSocket.OPEN is accessed when the global WebSocket may not exist in Node.js environments. The mock provides MockWebSocket.OPEN, but the production code references the global WebSocket.OPEN constant.
🐛 Proposed fix using numeric constant
+const WS_READY_STATE_OPEN = 1;
+
stop() {
stopped = true;
clearReconnectTimer();
- if (socket && socket.readyState === WebSocket.OPEN) {
- socket.close();
- } else {
- socket?.close();
- }
+ socket?.close();
socket = null;
logger.info('capture', 'transport.websocket.stopped', { url: options.url });
+ }

The conditional check for readyState === WebSocket.OPEN before calling close() is unnecessary—close() is safe to call regardless of ready state and will no-op if already closed.
🧰 Tools
🪛 GitHub Actions: CI
[error] 124-124: Test failed due to ReferenceError: WebSocket is not defined in stop(). Condition checks socket.readyState === WebSocket.OPEN, but WebSocket is unavailable in the test environment.
🪛 GitHub Check: test
[failure] 124-124: tests/capture/capture-transports.test.ts > capture transports > websocket transport normalizes framed messages and reconnects after close
ReferenceError: WebSocket is not defined
❯ Object.stop src/capture/websocket-transport.ts:124:47
❯ tests/capture/capture-transports.test.ts:113:32
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@shadow-agent/src/capture/websocket-transport.ts` around lines 124 - 128, The
code checks readyState against WebSocket.OPEN causing a ReferenceError in Node
tests; instead remove the global WebSocket.OPEN reference and the conditional
check and always call socket?.close() (i.e., replace the readyState-based branch
with a single socket?.close() call) so the code no longer depends on a global
WebSocket constant and safely closes the socket via the existing socket.close()
method.
```ts
const refreshInferenceEngine = async () => {
  inferenceEngine?.stop();
  inferenceEngine = createRuntimeInferenceEngine();
  await inferenceEngine.start();
};
const updatePrivacySettings = async (updates: Partial<TranscriptPrivacySettings>) => {
  privacySettings = await saveTranscriptPrivacySettings({
    ...privacySettings,
    ...updates
  });
  await refreshInferenceEngine();
  return privacySettings;
```
Make privacy updates single-flight.
Concurrent shadow-agent:update-privacy-settings calls can read the same privacySettings snapshot, overwrite each other on save, and interleave refreshInferenceEngine(). Because refresh stops the current engine before the replacement has started, one failing/slow refresh can also leave inference offline. Queue updates through one promise chain and only swap engines after the new instance is healthy.
Suggested fix
- const refreshInferenceEngine = async () => {
- inferenceEngine?.stop();
- inferenceEngine = createRuntimeInferenceEngine();
- await inferenceEngine.start();
- };
- const updatePrivacySettings = async (updates: Partial<TranscriptPrivacySettings>) => {
- privacySettings = await saveTranscriptPrivacySettings({
- ...privacySettings,
- ...updates
- });
- await refreshInferenceEngine();
- return privacySettings;
- };
+ const refreshInferenceEngine = async () => {
+ const nextEngine = createRuntimeInferenceEngine();
+ await nextEngine.start();
+ const previousEngine = inferenceEngine;
+ inferenceEngine = nextEngine;
+ previousEngine?.stop();
+ };
+
+ let privacyUpdateChain = Promise.resolve<TranscriptPrivacySettings>(privacySettings);
+ const updatePrivacySettings = (updates: Partial<TranscriptPrivacySettings>) => {
+ privacyUpdateChain = privacyUpdateChain.then(async () => {
+ privacySettings = await saveTranscriptPrivacySettings({
+ ...privacySettings,
+ ...updates
+ });
+ await refreshInferenceEngine();
+ return privacySettings;
+ });
+ return privacyUpdateChain;
+ };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@shadow-agent/src/electron/start-main-process.ts` around lines 164 - 175,
Concurrent calls to updatePrivacySettings can race and stop the running
inferenceEngine too early; serialize updates via a single-flight promise chain
and only replace the engine after the new instance is started successfully.
Modify updatePrivacySettings to enqueue work on a module-scoped promise (e.g.,
pendingPrivacyUpdate) so each invocation awaits the previous; inside the queued
task, call saveTranscriptPrivacySettings with the merged settings, create a new
engine via createRuntimeInferenceEngine(), start the new engine (await
newEngine.start()), and only then stop and swap the old inferenceEngine
reference; on start failure, keep the existing engine running and propagate the
error. Ensure you reference updatePrivacySettings, refreshInferenceEngine (or
inline its logic), inferenceEngine, createRuntimeInferenceEngine,
saveTranscriptPrivacySettings, and inferenceEngine.start/stop so the
implementation updates the correct symbols.
```ts
if (checkpointBuffer) {
  await checkpointBuffer.registerConsumer(INFERENCE_CONSUMER_ID, { startAt: 'latest' });
}

unsubscribeBuffer = buffer.subscribe((events) => {
  if (supportsBufferedDrain) {
    if (checkpointBuffer) {
      void drainPendingEvents();
      return;
```
Drain once immediately after subscribing to avoid missing startup-era events.
If an event lands after registerConsumer(...) but before (or without) a later subscription callback, it can remain pending indefinitely. Please trigger an initial drain right after subscription setup (Line 160+ path).
Proposed fix
if (checkpointBuffer) {
await checkpointBuffer.registerConsumer(INFERENCE_CONSUMER_ID, { startAt: 'latest' });
}
unsubscribeBuffer = buffer.subscribe((events) => {
if (checkpointBuffer) {
void drainPendingEvents();
return;
}
trigger.onEvents(events);
});
+
+ if (checkpointBuffer) {
+ void drainPendingEvents();
+ }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@shadow-agent/src/inference/shadow-inference-engine.ts` around lines 156 -
163, After calling checkpointBuffer.registerConsumer and setting
unsubscribeBuffer = buffer.subscribe(...), call drainPendingEvents once
immediately (e.g., await drainPendingEvents() or otherwise invoke it) right
after setting up the subscription so any events that arrived between
registerConsumer and the subscription callback are processed; keep the existing
drain inside the subscribe callback for ongoing events and reference the
existing symbols checkpointBuffer.registerConsumer, buffer.subscribe,
unsubscribeBuffer and drainPendingEvents when making the change.
```ts
const privacyPolicy: PrivacyPolicy = snapshot?.privacy ?? {
  allowRawTranscriptStorage: false,
  allowOffHostInference: false,
  processingMode: 'local-only',
  transcriptHandling: 'sanitized-by-default'
};
```
Use resolvePrivacyPolicy() instead of hardcoding computed fields.
The privacyPolicy derivation bypasses the resolvePrivacyPolicy() function from privacy.ts, which computes processingMode based on allowOffHostInference. If snapshot?.privacy.allowOffHostInference is true, the hardcoded processingMode: 'local-only' will be incorrect—the UI will show "Local only" when it should show "Off-host opted in".
🔧 Proposed fix using resolvePrivacyPolicy
+import { resolvePrivacyPolicy } from '../shared/privacy';
+
- const privacyPolicy: PrivacyPolicy = snapshot?.privacy ?? {
- allowRawTranscriptStorage: false,
- allowOffHostInference: false,
- processingMode: 'local-only',
- transcriptHandling: 'sanitized-by-default'
- };
+ const privacyPolicy: PrivacyPolicy = resolvePrivacyPolicy(snapshot?.privacy);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@shadow-agent/src/renderer/App.tsx` around lines 310 - 315, Replace the
hardcoded privacyPolicy object with a call to resolvePrivacyPolicy using the
incoming snapshot privacy state: call resolvePrivacyPolicy(snapshot?.privacy)
(or resolvePrivacyPolicy with a default) so processingMode is computed from
allowOffHostInference correctly; locate the assignment to privacyPolicy in
App.tsx and replace the current manual object construction with
resolvePrivacyPolicy(...) to derive the final PrivacyPolicy.
```ts
it('PRIVACY_UPDATE_SUCCESS updates snapshot privacy and clears busy', () => {
  const snapshot = makeSnapshot();
  const prior: AppState = { busy: 'privacy', error: null, snapshot };
  const next = appReducer(prior, {
    type: 'PRIVACY_UPDATE_SUCCESS',
    privacy: {
      allowRawTranscriptStorage: true,
      allowOffHostInference: true,
      processingMode: 'off-host-opted-in',
      transcriptHandling: 'sanitized-by-default'
    }
  });

  expect(next.busy).toBeNull();
  expect(next.error).toBeNull();
  expect(next.snapshot?.privacy.processingMode).toBe('off-host-opted-in');
  expect(next.snapshot?.privacy.allowRawTranscriptStorage).toBe(true);
});
```
🧹 Nitpick | 🔵 Trivial
Expand assertions to fully validate the privacy payload mapping.
Line 188 sets allowOffHostInference: true, but the test does not assert it on the resulting snapshot. Add one assertion so this case catches partial field regressions.
✅ Suggested test addition
expect(next.snapshot?.privacy.processingMode).toBe('off-host-opted-in');
expect(next.snapshot?.privacy.allowRawTranscriptStorage).toBe(true);
+ expect(next.snapshot?.privacy.allowOffHostInference).toBe(true);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@shadow-agent/tests/renderer/app-state.test.ts` around lines 181 - 198, The
test for PRIVACY_UPDATE_SUCCESS doesn't assert allowOffHostInference was applied
to the snapshot; update the test in app-state.test.ts to assert that after
calling appReducer(prior, { type: 'PRIVACY_UPDATE_SUCCESS', ... }),
next.snapshot?.privacy.allowOffHostInference is true so the appReducer mapping
for PRIVACY_UPDATE_SUCCESS (and the snapshot privacy fields) are fully validated
alongside existing assertions for processingMode and allowRawTranscriptStorage.
```ts
async start(context: CaptureTransportContext): Promise<CaptureTransportSubscription> {
  const session = buildSession(options);
  const reconnectDelayMs = Math.max(50, options.reconnectDelayMs ?? DEFAULT_RECONNECT_DELAY_MS);
  const decoder = new TextDecoder();
```
CRITICAL: TextDecoder instance is shared across HTTP stream reconnections, potentially causing decoding errors if previous streams ended with incomplete UTF-8 sequences. Move const decoder = new TextDecoder(); inside the connect function after the stopped check.
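The streaming-state hazard behind this finding can be shown with the platform `TextDecoder` alone (nothing below is taken from the PR's transport code): `decode(..., { stream: true })` buffers the trailing bytes of an incomplete UTF-8 sequence, so a decoder reused across reconnections can splice stale bytes into the next stream.

```typescript
// '€' is encoded as E2 82 AC in UTF-8. Simulate a stream dropping mid-character.
const shared = new TextDecoder();

// First connection ends after only two of the three bytes:
const tail = shared.decode(new Uint8Array([0xe2, 0x82]), { stream: true });
// tail === '' — the partial sequence is held as internal decoder state.

// A reconnect delivers a fresh stream starting with ASCII 'A':
const head = shared.decode(new Uint8Array([0x41]), { stream: true });
// The stale E2 82 cannot complete, so U+FFFD is emitted before 'A':
// head === '\uFFFDA'

// A decoder created per connection carries no state across streams:
const fresh = new TextDecoder();
const clean = fresh.decode(new Uint8Array([0x41]), { stream: true });
// clean === 'A'
```

Moving the decoder construction inside `connect`, as the comment suggests, gives each stream exactly this per-connection behavior.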
Code Review Summary

Status: 1 Issue Found | Recommendation: Address before merge

Overview

Issue Details

CRITICAL

Other Observations (not in diff)

Issues found in unchanged code that cannot receive inline comments:
Files Reviewed (15 files)
CodeAnt AI is running the review.
Sequence Diagram

This PR refactors live capture so bytes can come from a pluggable transport (file tail, HTTP stream, WebSocket, or socket) while keeping a unified pipeline into the event buffer and renderer UI.

```mermaid
sequenceDiagram
    participant ElectronMain
    participant SessionManager
    participant CaptureTransport
    participant EventBuffer
    participant RendererApp
    ElectronMain->>SessionManager: Start with transport options from environment
    SessionManager->>CaptureTransport: start(getBackpressure, session and chunk callbacks)
    CaptureTransport-->>SessionManager: onSessionStarted(session)
    CaptureTransport-->>SessionManager: onChunk(raw transcript bytes)
    SessionManager->>EventBuffer: Parse, normalize, and push canonical events
    EventBuffer-->>ElectronMain: Notify subscribers of new events
    ElectronMain-->>RendererApp: Send sanitized events over IPC for live view
```

Generated by CodeAnt AI
```ts
const updatePrivacySettings = async (updates: Partial<TranscriptPrivacySettings>) => {
  if (!host.updatePrivacySettings) {
    return;
  }

  dispatch({ type: 'PRIVACY_UPDATE_START' });
  try {
    const nextPolicy = await host.updatePrivacySettings(updates);
    dispatch({ type: 'PRIVACY_UPDATE_SUCCESS', privacy: nextPolicy });
  } catch (err) {
    dispatch({
      type: 'PRIVACY_UPDATE_ERROR',
      message: err instanceof Error ? err.message : 'Unable to update privacy settings.'
    });
  }
};
```
Suggestion: Remove interactive privacy-setting update calls from the renderer and treat privacy policy as display-only state to avoid mutating persisted workspace/user settings. [custom_rule]
Severity Level: Minor
Why it matters? 🤔
The renderer directly calls host.updatePrivacySettings(...), which mutates privacy settings from the UI. If the intended rule is that the observer surface should be display-only and avoid mutating persisted workspace/user settings, this is a genuine violation.
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** shadow-agent/src/renderer/App.tsx
**Line:** 369:384
**Comment:**
*Custom Rule: Remove interactive privacy-setting update calls from the renderer and treat privacy policy as display-only state to avoid mutating persisted workspace/user settings.
Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
Once fix is implemented, also check other comments on the same PR, and ask user if the user wants to fix the rest of the comments as well. if said yes, then fetch all the comments validate the correctness and implement a minimal fix

```tsx
<button
  type="button"
  className="button button--ghost"
  onClick={() => void exportReplay({ storeRawTranscript: true })}
  disabled={!snapshot || busy !== null || !privacyPolicy.allowRawTranscriptStorage}
>
  Export raw replay
</button>
```
Suggestion: Remove the raw replay export action from the renderer so the observer cannot initiate file-writing of transcript data, and keep this surface strictly read-only. [custom_rule]
Severity Level: Minor
Why it matters? 🤔
The renderer currently exposes a button that triggers exportReplay({ storeRawTranscript: true }), which initiates file output from the UI. If the custom rule is to keep the observer read-only and prevent file-writing from the renderer, this is a real violation.
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** shadow-agent/src/renderer/App.tsx
**Line:** 434:441
**Comment:**
*Custom Rule: Remove the raw replay export action from the renderer so the observer cannot initiate file-writing of transcript data, and keep this surface strictly read-only.
Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
Once fix is implemented, also check other comments on the same PR, and ask user if the user wants to fix the rest of the comments as well. if said yes, then fetch all the comments validate the correctness and implement a minimal fix

```ts
snapshot: state.snapshot
  ? {
      ...state.snapshot,
      privacy: action.privacy
    }
  : state.snapshot
```
Suggestion: PRIVACY_UPDATE_SUCCESS only applies the new policy when snapshot is already present; if a privacy update completes during boot (when snapshot is still null), the success action is effectively dropped and the UI can later show stale privacy data when boot completes with an older snapshot. Persist privacy updates independently of snapshot availability (for example, by storing privacy separately in app state or queuing/merging it into the next loaded snapshot). [race condition]
Severity Level: Major ⚠️
- ❌ Privacy panel may display outdated policy after boot.
- ⚠️ UI privacy state can ignore successful early updates.
- ⚠️ User consent view may diverge from backend configuration.

Steps of Reproduction ✅
1. Mount the `App` component in `shadow-agent/src/renderer/App.tsx` which initializes
state via `useReducer(appReducer, undefined, initialAppState)` at lines 200–202.
`initialAppState()` in `shadow-agent/src/renderer/app-state.ts:49-51` sets `busy:
'booting'` and `snapshot: null`.
2. On first render, `useEffect` in `App.tsx:227-252` dispatches `BOOT_START` (handled at
`app-state.ts:55-56`) and asynchronously calls `loadPreferredSnapshot()`, leaving
`state.snapshot` as `null` while the initial snapshot loads.
3. While `snapshot` is still `null`, the UI renders `PrivacyPanel` in `App.tsx:226-233`
with `privacy={privacyPolicy}` where `privacyPolicy` is computed at `App.tsx:51-56` as
`snapshot?.privacy ?? { ...defaults }`. Because `snapshot` is `null`, the privacy UI shows
defaults, and remains interactive because `interactive={capabilities.canManagePrivacy}`
(App.tsx:228-230) only depends on host capabilities from `renderer/host.ts:28-35`, not on
`busy` or `snapshot`.
4. The user toggles a privacy checkbox before boot completes, triggering
`updatePrivacySettings` in `App.tsx:110-125`. This dispatches `PRIVACY_UPDATE_START`
(handled at `app-state.ts:88-89`, setting `busy: 'privacy'` but leaving `snapshot: null`),
then awaits `host.updatePrivacySettings`. When the promise resolves, `App.tsx:117-118`
dispatches `PRIVACY_UPDATE_SUCCESS` with the new `privacy` policy, but `appReducer`
handles it at `app-state.ts:91-101` with:
```ts
snapshot: state.snapshot
? { ...state.snapshot, privacy: action.privacy }
: state.snapshot
```
Since state.snapshot is still null during boot, this branch returns { busy: null, error: null, snapshot: null }, effectively discarding action.privacy. Later, when
BOOT_SUCCESS runs (App.tsx:240, reducer at app-state.ts:58-59), it overwrites
snapshot with the boot snapshot, and privacyPolicy in App.tsx:51-56 continues to
come solely from that snapshot. Any privacy update that completed while snapshot was
null never affects UI state, so the Privacy panel can show a stale policy after boot
if the boot snapshot does not already include the just-applied update.
</details>
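One possible shape for the suggested fix, sketched with deliberately simplified types (`pendingPrivacy` and the trimmed `Privacy`/`Snapshot` shapes are inventions for illustration, not the PR's actual `AppState`): queue the policy when no snapshot exists yet, and merge it into the snapshot delivered by `BOOT_SUCCESS`.

```typescript
// Hypothetical, simplified types — the real AppState carries more fields.
type Privacy = { allowRawTranscriptStorage: boolean; allowOffHostInference: boolean };
type Snapshot = { privacy: Privacy };
type State = { snapshot: Snapshot | null; pendingPrivacy: Privacy | null };

type Action =
  | { type: 'PRIVACY_UPDATE_SUCCESS'; privacy: Privacy }
  | { type: 'BOOT_SUCCESS'; snapshot: Snapshot };

function reducer(state: State, action: Action): State {
  switch (action.type) {
    case 'PRIVACY_UPDATE_SUCCESS':
      // With no snapshot yet, queue the policy instead of dropping it.
      return state.snapshot
        ? { ...state, snapshot: { ...state.snapshot, privacy: action.privacy } }
        : { ...state, pendingPrivacy: action.privacy };
    case 'BOOT_SUCCESS':
      // A queued update wins over the (possibly older) boot snapshot.
      return {
        snapshot: state.pendingPrivacy
          ? { ...action.snapshot, privacy: state.pendingPrivacy }
          : action.snapshot,
        pendingPrivacy: null
      };
  }
}
```

Storing the policy in its own field (rather than only inside `snapshot`) is what makes the early `PRIVACY_UPDATE_SUCCESS` survive until boot completes.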
<details>
<summary><b>Prompt for AI Agent 🤖 </b></summary>
```mdx
This is a comment left during a code review.
**Path:** shadow-agent/src/renderer/app-state.ts
**Line:** 95:100
**Comment:**
*Race Condition: `PRIVACY_UPDATE_SUCCESS` only applies the new policy when `snapshot` is already present; if a privacy update completes during boot (when `snapshot` is still `null`), the success action is effectively dropped and the UI can later show stale privacy data when boot completes with an older snapshot. Persist privacy updates independently of snapshot availability (for example, by storing privacy separately in app state or queuing/merging it into the next loaded snapshot).
Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
Once fix is implemented, also check other comments on the same PR, and ask user if the user wants to fix the rest of the comments as well. if said yes, then fetch all the comments validate the correctness and implement a minimal fix
```diff
 const exportReplay = async (options?: { storeRawTranscript?: boolean }) => {
   if (!snapshot || !host.exportReplayJsonl) {
     return;
   }

   dispatch({ type: 'EXPORT_START' });
   try {
-    const result = await host.exportReplayJsonl(snapshot.events, exportName);
+    const result = await host.exportReplayJsonl(snapshot.events, exportName, options);
     if (result.error) {
```
🟠 Architect Review — HIGH
Raw replay export sends snapshot.events (which are already sanitized before reaching the renderer) back to the main process, so even when storeRawTranscript: true is requested the export cannot contain raw transcript content.
Suggestion: Change the raw export path so the main process sources events from an unsanitized store (e.g. the capture buffer or replay store) when storeRawTranscript is true, rather than from renderer snapshot.events, keeping renderer snapshots sanitized-only.
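A sketch of the suggested direction, assuming a hypothetical main-process store of unsanitized events keyed by session id (`rawEventStore`, `buildReplayJsonl`, and `exportReplay` here are illustrative names, not modules in this PR): the raw export path ignores the renderer-supplied events entirely.

```typescript
// Sketch: hypothetical main-process raw store; all names are illustrative.
type CapturedEvent = { sessionId: string; payload: string };

const rawEventStore = new Map<string, CapturedEvent[]>(); // unsanitized, main-process only

function buildReplayJsonl(events: CapturedEvent[]): string {
  return events.map((event) => JSON.stringify(event)).join('\n');
}

function exportReplay(
  sessionId: string,
  sanitizedEvents: CapturedEvent[],
  options?: { storeRawTranscript?: boolean }
): string {
  // For a raw export, source events from the main-process store instead of
  // trusting the renderer's (already sanitized) snapshot.events.
  const source = options?.storeRawTranscript
    ? rawEventStore.get(sessionId) ?? []
    : sanitizedEvents;
  return buildReplayJsonl(source);
}
```

This keeps renderer snapshots sanitized-only while letting an opted-in raw export actually contain raw transcript content.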
```ts
offset += bytesRead;
await context.onChunk({
  session: activeSession,
  chunk: buffer.subarray(0, bytesRead).toString('utf8')
});
```
Suggestion: The chunk callback uses the mutable shared activeSession reference, which can change while an async read loop is in progress (for example during rediscovery/session switch). This can mis-tag chunks from the old file as belonging to the new session. Capture and use an immutable per-read session reference before awaiting. [race condition]
Severity Level: Major ⚠️
- ❌ Events from old transcript file tagged to new session.
- ⚠️ Timeline view mixes messages from different Claude projects.
- ⚠️ Inference engine derives insights from misattributed session data.

Steps of Reproduction ✅
1. Run the Shadow Agent Electron app in its default live-capture mode so it uses the
file-tail transport (no capture transport environment variables set). `startMainProcess`
calls `createSessionManager(..., { transport: resolveCaptureTransportOptionsFromEnv() })`
(`start-main-process.ts:141-150`), and `resolveCaptureTransportOptionsFromEnv` chooses
`kind: 'file-tail'` with no `overridePath` (`capture-transports.ts:24-35`), so
`createCaptureTransport` returns `createFileTailCaptureTransport(options)`
(`capture-transports.ts:48-51`).
2. When `startMainProcess` invokes `void currentSessionManager.start();`
(`start-main-process.ts:186-187`), `createSessionManager.start` calls
`effectiveTransport.start(...)` (`session-manager.ts:81-86`). Inside
`createFileTailCaptureTransport.start`, the initial `runDiscovery`
(`transcript-watcher.ts:294-302`) discovers the current Claude transcript JSONL file "A"
via `discoverActiveSession`'s non-override path scan (`session-discovery.ts:21-34`), then
`activateSession` sets `activeSession` to "A", resets offset, attaches file and directory
watchers, and schedules the first read via `scheduleRead(0)`
(`transcript-watcher.ts:276-292`).
3. Because transcript "A" can be large, the first `readNew` invocation
(`transcript-watcher.ts:201-274`) may spend noticeable time streaming the backlog: it
reads chunks in a loop and for each successful read executes `await context.onChunk({
session: activeSession, chunk: buffer.subarray(0, bytesRead).toString('utf8') })`
(`transcript-watcher.ts:241-257`). During this loop, `activeSession` is a shared variable
on the transport instance and may be mutated by other async code while these `await`
points yield back to the event loop.
4. After `DEFAULT_DISCOVERY_INTERVAL_MS` (30 seconds, `transcript-watcher.ts:43` and
`138`), the `rediscoveryTimer` set up when `!options.overridePath` is true
(`transcript-watcher.ts:304-307`) fires and calls `runDiscovery` again. If a newer
transcript "B" has been created (for example, the user opened a different Claude project),
`discoverActiveSession` now returns "B", and `activateSession` runs concurrently with the
in-flight `readNew` loop: it closes the old handle, clears watchers, and sets
`activeSession = nextSession` for "B" (`transcript-watcher.ts:276-287`). The still-running
`readNew` loop continues using its previously captured `filePath` and `info` for "A", but
subsequent iterations of `await context.onChunk({ session: activeSession, chunk: ... })`
now pass the updated `activeSession` for "B" along with data read from file "A". In
`createSessionManager`, the `onChunk` handler (`session-manager.ts:86-119`) uses the
provided `session` to decide whether to call `startSession(session)` and to tag events by
`session.sessionId`. This causes some backlog chunks from file "A" to be mis-attributed to
new session "B", mixing transcripts and potentially starting a new session with stale data
from the previous file.
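The suggested fix can be as small as snapshotting the session reference once before entering the read loop. A simplified sketch (the `readLoop`/`onChunk` signatures here are illustrative, not the actual transcript-watcher API):

```typescript
type Session = { sessionId: string };

// Sketch: capture the session once, up front, so a concurrent
// activateSession() that swaps the shared reference cannot re-tag
// chunks already being read from the old file.
async function readLoop(
  getActiveSession: () => Session,
  readChunk: () => Promise<string | null>,
  onChunk: (args: { session: Session; chunk: string }) => Promise<void>
): Promise<void> {
  const sessionForRead = getActiveSession(); // immutable per-read reference
  for (;;) {
    const chunk = await readChunk();
    if (chunk === null) break;
    // Tag with the snapshot taken before the first await, not the live reference.
    await onChunk({ session: sessionForRead, chunk });
  }
}
```

Chunks read from file "A" then stay attributed to session "A" even if rediscovery switches `activeSession` to "B" mid-loop.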
```ts
await runDiscovery();

if (!options.overridePath) {
  rediscoveryTimer = setInterval(() => {
    void runDiscovery();
  }, discoveryIntervalMs);
```
Suggestion: When overridePath is set and the file does not exist at startup, discovery runs only once and then never retries, so capture never starts if the file appears later. Add retry logic (timer or directory watch) for override mode too, otherwise startup timing causes permanent data loss. [possible bug]
Severity Level: Critical 🚨
- ❌ SHADOW_CAPTURE_FILE override never captures if file appears later.
- ❌ Live capture UI shows empty transcript for override sessions.
- ⚠️ Advanced file-tail workflows silently lose all later events.

Steps of Reproduction ✅
1. Configure the Electron main process (entrypoint `startMainProcess` in
`shadow-agent/src/electron/start-main-process.ts:136-224`) to use the file-tail transport
with an override path by setting environment variables before launch, for example:
- `SHADOW_CAPTURE_TRANSPORT=file-tail`
- `SHADOW_CAPTURE_FILE=/tmp/nonexistent-session.jsonl` (a path that does NOT exist
yet).
2. When `startMainProcess()` runs, it constructs the session manager with transport
options from the environment (`resolveCaptureTransportOptionsFromEnv` in
`shadow-agent/src/capture/capture-transports.ts:24-35`), which returns `{ kind:
'file-tail', overridePath: '/tmp/nonexistent-session.jsonl' }` because
`SHADOW_CAPTURE_FILE` is set (line 31-34). `createSessionManager` receives this as
`options.transport` (`session-manager.ts:17-21`).
3. `startMainProcess` then calls `void currentSessionManager.start();`
(`start-main-process.ts:186-187`), which in turn calls `effectiveTransport.start(...)` in
`createSessionManager.start` (`session-manager.ts:70-86`). For a file-tail transport with
overridePath, `createCaptureTransport` builds a `createFileTailCaptureTransport` and its
`start` method executes `runDiscovery` once (`transcript-watcher.ts:294-302`). Inside
`runDiscovery`, `discoverActiveSession(options.overridePath)`
(`transcript-watcher.ts:295`) delegates to `discoverActiveSession`'s override branch
(`session-discovery.ts:6-19`), which attempts `stat(overridePath)`; because the file does
not exist, it catches the error, logs `session_discovery.override_not_found`, and returns
`null` (lines 9-18), so `runDiscovery` returns without activating any session
(`transcript-watcher.ts:295-299`).
4. After `await runDiscovery();` completes (`transcript-watcher.ts:302`), the code checks
`if (!options.overridePath)` before scheduling periodic rediscovery
(`transcript-watcher.ts:304-307`). Because `options.overridePath` is truthy in this mode,
the `rediscoveryTimer` is never created, so `runDiscovery` is never called again. As a
result, even when another process later creates `/tmp/nonexistent-session.jsonl` and
appends transcript JSONL data to it, no new discovery occurs, `activateSession` is never
called, and no `onSessionStarted`/`onChunk` callbacks are fired into the `SessionManager`
(`session-manager.ts:86-119`). Live capture from that override file never starts for the
lifetime of the process, and all transcript events written after the file appears are
permanently missed until the user restarts the app with the file already present.
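A sketch of the retry-timer approach, with illustrative names (`runDiscovery` here resolves `true` once a session has activated, which differs from the real transcript-watcher API):

```typescript
// Sketch: keep retrying discovery in override mode until the file appears.
function startDiscoveryLoop(
  runDiscovery: () => Promise<boolean>, // true once a session is activated
  intervalMs: number
): { stop: () => void } {
  let timer: ReturnType<typeof setTimeout> | null = null;
  let stopped = false;
  const tick = async (): Promise<void> => {
    const activated = await runDiscovery();
    if (!activated && !stopped) {
      // Unlike the current code, override mode also gets a retry timer,
      // so a file created after startup is still picked up.
      timer = setTimeout(() => { void tick(); }, intervalMs);
    }
  };
  void tick();
  return {
    stop: () => {
      stopped = true;
      if (timer) clearTimeout(timer);
    }
  };
}
```

Stopping the retries once activation succeeds preserves the current behavior of override mode not rediscovering a different session.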
```ts
nextSocket.onmessage = (event) => {
  void (async () => {
    const chunk = await readMessageData(event.data);
    if (chunk) {
      await context.onChunk({ session, chunk });
    }
  })();
```
Suggestion: Message processing is launched in detached async tasks, so multiple incoming messages can execute onChunk concurrently and complete out of order. Since downstream parsing/session initialization is stateful, this can reorder or drop transcript data. Serialize message handling (queue/chain promises) so chunks are processed strictly in receive order. [race condition]
Severity Level: Major ⚠️
- ❌ Live websocket transcript events can arrive out of order.
- ⚠️ Inference engine reads misordered events, reducing insight quality.

Steps of Reproduction ✅
1. Start the Electron app via `startMainProcess()` in
`shadow-agent/src/electron/start-main-process.ts:136-153`, with environment configured for
WebSocket capture by setting `SHADOW_CAPTURE_TRANSPORT=websocket` and
`SHADOW_CAPTURE_WS_URL` so `resolveCaptureTransportOptionsFromEnv()` in
`shadow-agent/src/capture/capture-transports.ts:24-52` returns `{ kind: 'websocket', url:
... }`, which is passed into `createSessionManager` and then into `createCaptureTransport`
/ `createWebSocketCaptureTransport`.
2. When `currentSessionManager.start()` is invoked in `startMainProcess` at
`shadow-agent/src/electron/start-main-process.ts:51-52`, the session manager calls
`createCaptureTransport` and ultimately `createWebSocketCaptureTransport` in
`shadow-agent/src/capture/websocket-transport.ts:44-50`, which registers
`nextSocket.onmessage` at `shadow-agent/src/capture/websocket-transport.ts:96-103` and
wires the transport into the `CaptureTransportContext` whose `onChunk` implementation is
in `shadow-agent/src/capture/session-manager.ts:153-157`.
3. Feed two WebSocket messages back-to-back from the configured server such that
`event.data` is delivered as a `Blob` (a valid WebSocket usage in browsers/Electron); for
each message, `nextSocket.onmessage` spawns a detached async task that executes `const
chunk = await readMessageData(event.data);` in `websocket-transport.ts:97-99`, where
`readMessageData` uses `await data.text()` for `Blob` inputs in
`websocket-transport.ts:28-40`, causing each message's processing to be asynchronous and
independent.
4. Because each `onmessage` handler launches a separate async IIFE without any
serialization, the two `await readMessageData(event.data)` calls may resolve in a
different order than the messages were received, so the later message's `await
context.onChunk({ session, chunk });` at `websocket-transport.ts:100` can run before the
earlier one; this reaches `onChunk` in
`shadow-agent/src/capture/session-manager.ts:153-157`, where the stateful `activeSession`
and `activeParser.push(chunk)` logic assumes ordered chunks, resulting in transcript
chunks being pushed into the parser out of receive order and yielding misordered
transcript events in the live capture pipeline.
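Serialization can be done by chaining each message onto a tail promise; `createOrderedHandler` below is an illustrative helper, not part of the PR:

```typescript
// Sketch: serialize async message handling by chaining onto a tail promise.
function createOrderedHandler<T>(
  process: (message: T) => Promise<void>,
  onError: (error: unknown) => void = (e) => console.error(e)
): (message: T) => void {
  let tail: Promise<void> = Promise.resolve();
  return (message: T) => {
    // Each message waits for the previous one, so chunks reach onChunk
    // strictly in receive order even when per-message latency varies
    // (e.g. Blob payloads that need an extra await to read).
    tail = tail.then(() => process(message)).catch(onError);
  };
}
```

The transport would then register something like `nextSocket.onmessage = (event) => orderedHandler(event.data);`, keeping the handler itself synchronous.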
```ts
nextSocket.on('connect', () => {
  void (async () => {
    if (!hasConnected) {
      await context.onSessionStarted(session);
      hasConnected = true;
    } else {
      await context.onSessionReset(session, 'reconnect');
    }
  })();
});
```
Suggestion: The async event handlers are launched with void and no error handling, so if onSessionStarted, onSessionReset, or onChunk rejects, it becomes an unhandled promise rejection that can crash the process (or silently drop failures depending on runtime settings). Wrap these awaited calls in a try/catch inside the handler and log/report failures. [possible bug]
Severity Level: Major ⚠️
- ❌ Unhandled rejection in main capture pipeline on socket connect.
- ⚠️ Possible Electron main crash under filesystem or pipeline errors.
- ⚠️ Live socket capture may silently drop critical failure signals.

Steps of Reproduction ✅
1. Configure the app to use the socket capture transport by setting environment variables
so `resolveCaptureTransportOptionsFromEnv` returns kind `"socket"` (see
`shadow-agent/src/capture/capture-transports.ts:24-61`, where
`SHADOW_CAPTURE_TRANSPORT=socket`, `SHADOW_CAPTURE_SOCKET_HOST`, and
`SHADOW_CAPTURE_SOCKET_PORT` are read).
2. Start the Electron main process (`startMainProcess` in
`shadow-agent/src/electron/start-main-process.ts:136-150`), which constructs a
`SessionManager` via `createSessionManager(..., { queuePersistenceRoot:
path.join(app.getPath('userData'), 'capture-queue'), transport:
resolveCaptureTransportOptionsFromEnv() })` and then calls `void
currentSessionManager.start()` at `start-main-process.ts:57-59`.
3. Inside `SessionManager.start` (`shadow-agent/src/capture/session-manager.ts:110-125`),
the resolved socket transport's `start` method is invoked with a context whose
`onSessionStarted`, `onSessionReset`, and `onChunk` callbacks are async and can reject
(for example, `onSessionStarted` awaits `startSession`, which awaits `buffer.setSession`
in `event-buffer.ts:82-110`, performing filesystem operations that may fail).
4. When a TCP server accepts the connection and the socket emits `'connect'`,
`createSocketCaptureTransport`'s handler at `socket-transport.ts:74-82` runs `void (async
() => { await context.onSessionStarted(session) /* or onSessionReset */ })();`. If
`onSessionStarted` rejects (e.g., due to a disk error in `setSession`'s
`rm`/`writeCheckpointFile` calls in `event-buffer.ts:81-110`), that rejection is not
caught anywhere in the handler, producing an unhandled promise rejection in the Electron
main process (no global `unhandledRejection` handler exists—verified by searching the
repo). Depending on Node/Electron settings, this can crash the process or at least log an
unhandled rejection while silently dropping the failure, matching the suggested issue.
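A small fire-and-forget wrapper along the suggested lines (the helper name and logger signature are illustrative, not the PR's API):

```typescript
// Sketch: contain rejections from async event handlers instead of leaking
// them as unhandled promise rejections in the Electron main process.
function fireAndForget(
  task: () => Promise<void>,
  log: (message: string, error: unknown) => void = (m, e) => console.error(m, e)
): void {
  void task().catch((error) => {
    log('capture transport handler failed', error);
  });
}

// The connect handler would then read roughly:
// nextSocket.on('connect', () => {
//   fireAndForget(async () => {
//     if (!hasConnected) {
//       await context.onSessionStarted(session);
//       hasConnected = true;
//     } else {
//       await context.onSessionReset(session, 'reconnect');
//     }
//   });
// });
```

The same wrapper would cover `onChunk` in the `'data'` handler, so a disk error in `setSession` surfaces as a logged failure rather than a process-level rejection.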
CodeAnt AI finished running the review.
Sequence Diagram

This PR makes live capture source pluggable (file tail, HTTP, WebSocket, socket) and wires in persisted privacy controls that gate sanitization, inference, and raw export between the Electron main process and renderer.

sequenceDiagram
participant User
participant Renderer
participant Main
participant Transport
participant Buffer
participant PrivacyStore
Main->>PrivacyStore: Load privacy settings and transport config
Main->>Transport: Start configured capture transport
Transport-->>Main: Emit session and text chunks
Main->>Buffer: Parse and enqueue normalized events
Buffer-->>Renderer: Push sanitized live events over IPC
User->>Renderer: Toggle privacy settings in UI
Renderer->>Main: Request privacy update
Main->>PrivacyStore: Persist settings and refresh inference
PrivacyStore-->>Renderer: Return updated privacy policy and consent gates
Generated by CodeAnt AI
```diff
 const exportReplay = async (options?: { storeRawTranscript?: boolean }) => {
   if (!snapshot || !host.exportReplayJsonl) {
     return;
   }

   dispatch({ type: 'EXPORT_START' });
   try {
-    const result = await host.exportReplayJsonl(snapshot.events, exportName);
+    const result = await host.exportReplayJsonl(snapshot.events, exportName, options);
     if (result.error) {
```
🟠 Architect Review — HIGH
"Export raw replay" sends SnapshotPayload.events from the renderer to the main process, but snapshots are always built from events passed through prepareEventsForStorage (sanitized), so even with raw transcript opt-in the exported "raw" replay never contains truly unsanitized transcript content.
Suggestion: Change the raw export path to source events from a main-process view of the capture stream (or another unsanitized persistence layer) keyed by session, bypassing the sanitized SnapshotPayload.events, so that when storeRawTranscript is true and raw storage is allowed the exported file is built from genuinely raw events.
```diff
 export async function loadTranscriptPrivacySettings(
   overrides: Partial<TranscriptPrivacySettings> = {},
   envPath = join(homedir(), '.shadow-agent', '.env'),
-  env: NodeJS.ProcessEnv = process.env
+  env: NodeJS.ProcessEnv = process.env,
+  settingsPath = getTranscriptPrivacySettingsPath()
```
Suggestion: envPath and settingsPath default independently, so callers that pass a custom envPath but omit settingsPath will still read privacy.json from the real home directory. This creates cross-environment state leakage and surprising behavior in tests or alternate runtime roots; derive the default settings path from the chosen env root (or use an options object so both paths are configured together). [possible bug]
Severity Level: Major ⚠️
- ⚠️ Tests read host ~/.shadow-agent/privacy.json unexpectedly.
- ⚠️ Custom dotenv roots still use default privacy store.

Steps of Reproduction ✅
1. Create a persisted privacy settings file in the default store by calling
`saveTranscriptPrivacySettings` from `shadow-agent/src/shared/privacy.ts:117-128` without
a custom `settingsPath`, e.g. `await saveTranscriptPrivacySettings({
allowRawTranscriptStorage: true, allowOffHostInference: true });`
This writes `privacy.json` under the real home directory via
`getTranscriptPrivacySettingsPath` (`privacy.ts:48-51`).
2. In the test suite `shadow-agent/tests/privacy.test.ts`, `createTempEnvFile` at lines
`19-25` creates a temporary `.env` file in a fresh directory, returning `envPath` that
does *not* live under the real home directory.
For example, `falls back to defaults when the dotenv file is missing or invalid` at
`privacy.test.ts:73-80` calls `const envPath = await
createTempEnvFile('SHADOW_ALLOW_OFF_HOST_INFERENCE=maybe\n');` then deletes that file.
3. The same test calls `loadTranscriptPrivacySettings({}, envPath, {})` at
`privacy.test.ts:77-79`, passing:
- `overrides = {}`
- `envPath` pointing into the temporary directory
- `env = {}`
but *not* passing `settingsPath`, so `loadTranscriptPrivacySettings` uses the default
`settingsPath = getTranscriptPrivacySettingsPath()` (`privacy.ts:89-94`), which still
points at the real home directory.
4. At runtime, `readStoredPrivacySettings(settingsPath)` in `privacy.ts:54-66` reads the
previously persisted `privacy.json` from the real home directory, injecting those stored
booleans into the synthetic env object at `privacy.ts:95-114`.
Because `fileEnv` and `env` are empty in this test scenario (missing dotenv file and
empty process env), the stored values win, and `loadTranscriptPrivacySettings` returns
the persisted settings instead of the defaults that `privacy.test.ts:77-80` expects,
demonstrating cross-environment leakage from the real home directory into a custom
env-rooted test.
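One way to couple the two defaults, sketched as a standalone helper rather than the real `loadTranscriptPrivacySettings` signature (the `resolvePrivacyPaths` name and options shape are illustrative):

```typescript
import { dirname, join } from 'node:path';
import { homedir } from 'node:os';

// Sketch: configure both paths together via one options object.
interface PrivacyPathOptions {
  envPath?: string;
  settingsPath?: string;
}

function resolvePrivacyPaths(options: PrivacyPathOptions = {}): {
  envPath: string;
  settingsPath: string;
} {
  const envPath = options.envPath ?? join(homedir(), '.shadow-agent', '.env');
  // Default the settings file into the same root as the chosen env file, so a
  // custom envPath never silently reads privacy.json from the real home dir.
  const settingsPath = options.settingsPath ?? join(dirname(envPath), 'privacy.json');
  return { envPath, settingsPath };
}
```

Callers (including tests) that redirect `envPath` then get an isolated `privacy.json` for free, and both defaults still fall back to `~/.shadow-agent` when nothing is passed.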
```ts
const updatePrivacySettings = async (updates: Partial<TranscriptPrivacySettings>) => {
  privacySettings = await saveTranscriptPrivacySettings({
    ...privacySettings,
    ...updates
  });
  await refreshInferenceEngine();
  return privacySettings;
};
```
Suggestion: Privacy updates are implemented as read-modify-write on shared mutable state without synchronization, so concurrent IPC calls can overwrite each other and lose one of the toggles. Serialize updatePrivacySettings requests (e.g., with a mutex/queue) so each update applies to the latest persisted state. [race condition]
Severity Level: Major ⚠️
- ⚠️ Rapid privacy toggles can lose one of the changes.
- ⚠️ Persisted transcript privacy may not match user selections.
- ⚠️ Inference engine restarts with stale privacy configuration.

Steps of Reproduction ✅
1. Start the Electron app so `startMainProcess()` in
`shadow-agent/src/electron/start-main-process.ts:136` runs and initializes
`privacySettings` and `updatePrivacySettings` (lines 144-146 and 169-176).
2. `registerIpcHandlers()` is called at
`shadow-agent/src/electron/start-main-process.ts:177-180` with `{ getSettings,
updateSettings: updatePrivacySettings }`, wiring `updatePrivacySettings` into the IPC
handler `ipcMain.handle('shadow-agent:update-privacy-settings', ...)` defined at
`shadow-agent/src/electron/start-main-process.ts:125-133`.
3. In the renderer process, a UI component (e.g., the privacy controls React components
under `shadow-agent/src/renderer/ShadowPanel.tsx` / related files) issues multiple
concurrent `ipcRenderer.invoke('shadow-agent:update-privacy-settings', {...})` calls—for
instance, the user quickly toggles two different privacy switches so that two invocations
are in-flight at the same time.
4. In the main process, both IPC invocations enter `updatePrivacySettings` concurrently:
each call evaluates `saveTranscriptPrivacySettings({ ...privacySettings, ...updates })`
using the same pre-update `privacySettings` snapshot, then awaits. If the second call's
`await saveTranscriptPrivacySettings(...)` resolves first, it sets `privacySettings = S2`
(merged with the second toggle). When the first call later resolves, it overwrites
`privacySettings` with `S1` (merged with the first toggle but missing the second), causing
the last-applied setting to be lost. On the next app start,
`loadTranscriptPrivacySettings()` (line 144) reads this stale merged state, so one of the
user's toggles is not persisted.
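The serialization the comment suggests can be sketched by chaining each update onto the previous one, so every read-modify-write sees the latest persisted state. The `PrivacySettings` shape and the `saveTranscriptPrivacySettings` stub below are hypothetical stand-ins for the reviewed code, not the project's actual API:

```typescript
// Hypothetical mirror of the reviewed privacy state; field names are illustrative.
type PrivacySettings = { allowOffHostInference: boolean; allowRawTranscriptStorage: boolean };

let privacySettings: PrivacySettings = { allowOffHostInference: false, allowRawTranscriptStorage: false };

// Stand-in for the real async persistence call.
async function saveTranscriptPrivacySettings(next: PrivacySettings): Promise<PrivacySettings> {
  await new Promise((resolve) => setTimeout(resolve, 5)); // simulate persistence latency
  return next;
}

// Chain every update onto the previous one so concurrent IPC calls cannot
// both read the same pre-update snapshot and overwrite each other.
let updateChain: Promise<void> = Promise.resolve();

function updatePrivacySettings(updates: Partial<PrivacySettings>): Promise<PrivacySettings> {
  const result = updateChain.then(async () => {
    // The merge now happens strictly after the prior update has settled.
    privacySettings = await saveTranscriptPrivacySettings({ ...privacySettings, ...updates });
    return privacySettings;
  });
  // Keep the chain alive even if one update rejects.
  updateChain = result.then(() => undefined, () => undefined);
  return result;
}
```

With this shape, two rapid toggles of different settings both survive, because the second merge starts from the first merge's result rather than the shared stale snapshot.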
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** shadow-agent/src/electron/start-main-process.ts
**Line:** 169:176
**Comment:**
*Race Condition: Privacy updates are implemented as read-modify-write on shared mutable state without synchronization, so concurrent IPC calls can overwrite each other and lose one of the toggles. Serialize `updatePrivacySettings` requests (e.g., with a mutex/queue) so each update applies to the latest persisted state.
Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
Once fix is implemented, also check other comments on the same PR, and ask user if the user wants to fix the rest of the comments as well. if said yes, then fetch all the comments validate the correctness and implement a minimal fix

```ts
// Start live transcript capture (non-blocking; falls back gracefully if no session found)
void currentSessionManager.start();
```
Suggestion: start() is invoked with void and no rejection handler, so any async failure during capture transport startup will become an unhandled promise rejection and can destabilize the main process. Keep it non-blocking, but attach a .catch(...) (or wrap in a fire-and-forget helper that handles errors) so startup failures are logged and contained. [possible bug]
Severity Level: Major ⚠️
- ❌ Live capture startup failures can crash or destabilize main.
- ⚠️ Misconfigured capture transport hides root cause behind unhandled rejection.
- ⚠️ Live transcript capture silently fails when transport initialization breaks.

Steps of Reproduction ✅
1. Launch the Electron app so that `startMainProcess()` from
`shadow-agent/src/electron/start-main-process.ts:136` is executed as the main-process
entrypoint (verified via project bootstrap configuration and imports).
2. In `startMainProcess()`, when `app.whenReady()` resolves, the code at
`shadow-agent/src/electron/start-main-process.ts:146-150` creates `currentSessionManager =
createSessionManager(...)` with transport options from
`resolveCaptureTransportOptionsFromEnv()`.
3. Still inside the `.then(async () => { ... })` block, the try-block at
`shadow-agent/src/electron/start-main-process.ts:181-189` calls `void
currentSessionManager.start();` (line 187) without any `.catch` or surrounding `try/catch`
around the promise returned by `start()`.
4. Inspecting `shadow-agent/src/capture/session-manager.ts` shows `async start():
Promise<void>` calling into the configured transport (e.g., HTTP/WebSocket/socket) where
connection/initialization failures reject the promise (e.g., failing to connect to the URL
derived from env vars in `capture-transports.ts`). With a misconfigured or unreachable
capture endpoint, `start()` rejects; because it was invoked with `void` and no rejection
handler at line 187, this becomes an unhandled promise rejection in the Electron main
process, which can either terminate the process or at minimum log an `unhandledRejection`
warning and leave capture in an unknown state.
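One way to keep the call non-blocking while containing failures is a small fire-and-forget helper that always attaches a rejection handler. The helper name, the error sink, and the `sessionManager` stub below are illustrative assumptions, not the project's real objects:

```typescript
// Run an async task without awaiting it, but never let its rejection escape.
function fireAndForget(
  task: () => Promise<void>,
  onError: (error: unknown) => void = (e) => console.error('[shadow-agent] background task failed:', e),
): void {
  void task().catch(onError);
}

// Usage mirroring the reviewed call site, with a simulated startup failure.
const sessionManager = {
  async start(): Promise<void> {
    throw new Error('transport unreachable');
  },
};

const startupErrors: unknown[] = [];
fireAndForget(() => sessionManager.start(), (e) => startupErrors.push(e));
```

The rejection is logged (or collected) instead of surfacing as an unhandled promise rejection in the Electron main process.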
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** shadow-agent/src/electron/start-main-process.ts
**Line:** 186:187
**Comment:**
*Possible Bug: `start()` is invoked with `void` and no rejection handler, so any async failure during capture transport startup will become an unhandled promise rejection and can destabilize the main process. Keep it non-blocking, but attach a `.catch(...)` (or wrap in a fire-and-forget helper that handles errors) so startup failures are logged and contained.
Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
Once fix is implemented, also check other comments on the same PR, and ask user if the user wants to fix the rest of the comments as well. if said yes, then fetch all the comments validate the correctness and implement a minimal fix

```ts
if (supportsBufferedDrain) {
  await buffer.registerConsumer(INFERENCE_CONSUMER_ID, { startAt: 'latest' });
  if (checkpointBuffer) {
```
Suggestion: After registering the checkpoint consumer, the code waits for future buffer subscriptions to trigger draining, but it never performs an initial drain. Events that arrive between consumer registration and the next subscription callback can remain pending indefinitely if no later events are pushed. Trigger one immediate drainPendingEvents() after subscribing (or right after registering) to avoid missing that first pending batch. [logic error]
Severity Level: Major ⚠️
- ❌ Inference may skip final transcript events of sessions.
- ⚠️ Shadow insights missing for last captured events batch.

Steps of Reproduction ✅
1. Launch the Electron app so `startMainProcess()` runs
(shadow-agent/src/electron/start-main-process.ts:7, 2–11). This constructs a
`SessionManager` and a shared capture `buffer` via `createSessionManager()`
(start-main-process.ts:7–11, session-manager.ts:34–50).
2. `startMainProcess()` starts live capture and then the inference engine: it calls `void
currentSessionManager.start()` to begin capture and transport
(start-main-process.ts:47–48; session-manager.ts:31–80), and then awaits
`refreshInferenceEngine()` which creates an `InferenceEngine` and calls
`inferenceEngine.start()` (start-main-process.ts:13–29, 47–50).
3. During `InferenceEngine.start()`
(shadow-agent/src/inference/shadow-inference-engine.ts:135–167), the engine registers a
checkpoint consumer with `await checkpointBuffer.registerConsumer(INFERENCE_CONSUMER_ID, {
startAt: 'latest' })` (lines 156–158). While this `await` is in progress, the capture
pipeline can concurrently push events into the same `buffer` via `buffer.push(events)` in
`startSession`'s parser callback (session-manager.ts:6–21 and 74–79). Those events are
stored in the queue but no inference subscriber is attached yet.
4. After `registerConsumer` resolves, `InferenceEngine.start()` attaches a subscriber via
`unsubscribeBuffer = buffer.subscribe((events) => { ... })`
(shadow-inference-engine.ts:160–166), which only calls `drainPendingEvents()` when
*future* events arrive (line 161). If capture emits one or more final transcript events in
the small window after `registerConsumer()` completes but no further events are ever
pushed (e.g., the remote transcript stream ends immediately after that chunk), then the
subscriber callback is never invoked, `drainPendingEvents()` is never called, and the
pending events stored after the checkpoint remain unread. As a result,
`trigger.onEvents()` is never called for that last batch (shadow-inference-engine.ts:126,
160–166), so no inference run is triggered for those events. Adding an immediate `void
drainPendingEvents()` after registering the consumer (or right after subscribing) would
force an initial drain and ensure those pending events are processed even if no later
events arrive.
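The missing initial drain can be sketched with a tiny in-memory buffer: events pushed before the subscriber attaches stay pending unless the engine drains once immediately after subscribing. The buffer class and function names below are hypothetical stand-ins for the real event buffer:

```typescript
type Listener = (events: string[]) => void;

// Minimal in-memory stand-in for the capture event buffer.
class EventBufferStub {
  private pending: string[] = [];
  private listeners: Listener[] = [];
  push(events: string[]): void {
    this.pending.push(...events);
    for (const l of this.listeners) l(events); // notify current subscribers
  }
  subscribe(l: Listener): () => void {
    this.listeners.push(l);
    return () => { this.listeners = this.listeners.filter((x) => x !== l); };
  }
  drain(): string[] {
    const out = this.pending;
    this.pending = [];
    return out;
  }
}

const processed: string[] = [];

function startInference(buffer: EventBufferStub): void {
  const drainPendingEvents = () => { processed.push(...buffer.drain()); };
  buffer.subscribe(() => drainPendingEvents());
  // The fix: one immediate drain so events that arrived before the
  // subscription are processed even if nothing else is ever pushed.
  drainPendingEvents();
}
```

Without the final `drainPendingEvents()` call, an event pushed before `startInference` would sit in `pending` forever.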
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** shadow-agent/src/inference/shadow-inference-engine.ts
**Line:** 156:166
**Comment:**
*Logic Error: After registering the checkpoint consumer, the code waits for future buffer subscriptions to trigger draining, but it never performs an initial drain. Events that arrive between consumer registration and the next subscription callback can remain pending indefinitely if no later events are pushed. Trigger one immediate `drainPendingEvents()` after subscribing (or right after registering) to avoid missing that first pending batch.
Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
Once fix is implemented, also check other comments on the same PR, and ask user if the user wants to fix the rest of the comments as well. if said yes, then fetch all the comments validate the correctness and implement a minimal fix

```ts
nextSocket.on('data', (chunk: string) => {
  void (async () => {
    const backpressure = context.getBackpressure();
    if (backpressure.shouldThrottle) {
      nextSocket.pause();
      setTimeout(() => {
        if (!stopped) {
          nextSocket.resume();
        }
      }, computeWatchDelay(50, backpressure));
    }
    await context.onChunk({ session, chunk });
  })();
});
```
Suggestion: Incoming socket chunks are processed in detached async tasks, so multiple context.onChunk calls can overlap and race. The incremental parser in the session manager is stateful and expects ordered chunk delivery; concurrent chunk handling can interleave session setup and chunk parsing, dropping or reordering transcript data. Serialize chunk processing (queue/chain promises) so each chunk is applied strictly in arrival order. [race condition]
Severity Level: Critical 🚨
- ❌ Initial events from socket sessions can be silently discarded.
- ⚠️ Socket-based live transcripts may appear truncated in the UI.
- ⚠️ Inference engine operates on incomplete event history.

Steps of Reproduction ✅
1. Configure the application to use the socket transport by setting
`SHADOW_CAPTURE_TRANSPORT=socket` plus
`SHADOW_CAPTURE_SOCKET_HOST`/`SHADOW_CAPTURE_SOCKET_PORT`, so that
`resolveCaptureTransportOptionsFromEnv` in
`shadow-agent/src/capture/capture-transports.ts:24-61` returns `{ kind: 'socket', host,
port, ... }` and `createCaptureTransport` at lines 67–76 constructs
`createSocketCaptureTransport(options)`.
2. Start the Electron main process via `startMainProcess()` in
`shadow-agent/src/electron/start-main-process.ts:136-188`; during `app.whenReady()` it
creates a `SessionManager` with `createSessionManager(() => mainWindow?.webContents ??
null, { queuePersistenceRoot: path.join(app.getPath('userData'), 'capture-queue'),
transport: resolveCaptureTransportOptionsFromEnv() })` and kicks off live capture with
`void currentSessionManager.start();` at lines 51–52.
3. Run a TCP server that accepts connections from the client and immediately starts
streaming JSONL transcript lines; when the client connects, `createSocketCaptureTransport`
in `shadow-agent/src/capture/socket-transport.ts:25-133` calls `net.createConnection` at
lines 67–70, triggering the `nextSocket.on('connect', ...)` handler at lines 74–82, which
invokes `context.onSessionStarted(session)` using the `SessionManager`'s context (defined
in `session-manager.ts:125-141`) and ultimately `await startSession(session)` at line 140.
4. During this first connection, `startSession` in
`shadow-agent/src/capture/session-manager.ts:85-107` sets `activeSession = session`
synchronously, then awaits `buffer.setSession(session.sessionId);` (implemented in
`shadow-agent/src/capture/event-buffer.ts:81-111`), which performs asynchronous filesystem
operations; while this `await` is in progress, the server's first JSONL line arrives,
triggering the `nextSocket.on('data', ...)` handler at `socket-transport.ts:85-97`, which
starts a detached async IIFE and calls `await context.onChunk({ session, chunk });`. The
`SessionManager`'s `onChunk` implementation at `session-manager.ts:153-157` sees a
non-null `activeSession` matching the session and therefore skips `startSession`, then
calls `activeParser.push(chunk)`. Because `activeParser` is only replaced with a new
incremental parser after `buffer.setSession` completes (in `startSession` after its
`await`), this first chunk is processed by the initial stub parser created at
`session-manager.ts:52` with a no-op callback, so the initial transcript entries from
socket-based capture are effectively dropped before the proper session parser is
installed.
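Serializing chunk handling can be sketched with a promise chain: each chunk's processing is appended to the previous chunk's promise instead of running in a detached async task, so chunks apply strictly in arrival order even when later chunks would finish faster. The `onChunk` stand-in below simulates variable async work and is not the project's real `context.onChunk`:

```typescript
const applied: string[] = [];

// Stand-in for context.onChunk with deliberately skewed latency:
// without serialization, 'second' would complete before 'first'.
async function onChunk(chunk: string): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, chunk === 'first' ? 20 : 0));
  applied.push(chunk);
}

let chunkChain: Promise<void> = Promise.resolve();

function handleData(chunk: string): void {
  // Chain each chunk onto the previous one instead of detaching an async task.
  chunkChain = chunkChain
    .then(() => onChunk(chunk))
    .catch((error) => console.error('chunk processing failed:', error));
}
```

A stateful incremental parser fed through `handleData` would then always see chunks in the order the socket delivered them.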
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** shadow-agent/src/capture/socket-transport.ts
**Line:** 85:98
**Comment:**
*Race Condition: Incoming socket chunks are processed in detached async tasks, so multiple `context.onChunk` calls can overlap and race. The incremental parser in the session manager is stateful and expects ordered chunk delivery; concurrent chunk handling can interleave session setup and chunk parsing, dropping or reordering transcript data. Serialize chunk processing (queue/chain promises) so each chunk is applied strictly in arrival order.
Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
Once fix is implemented, also check other comments on the same PR, and ask user if the user wants to fix the rest of the comments as well. if said yes, then fetch all the comments validate the correctness and implement a minimal fix

```ts
nextSocket.onopen = () => {
  void (async () => {
    if (!hasOpened) {
      await context.onSessionStarted(session);
      hasOpened = true;
    } else {
      await context.onSessionReset(session, 'reconnect');
    }
  })();
```
Suggestion: The async onopen handler is also detached without rejection handling, so failures in context.onSessionStarted/context.onSessionReset become unhandled promise rejections. This can crash or destabilize the main process under session initialization failures. Handle errors explicitly in the handler or await through a guarded execution path. [possible bug]
Severity Level: Major ⚠️
- ⚠️ Unhandled rejections during WebSocket capture session initialization.
- ⚠️ Live capture may silently fail on filesystem errors.
- ⚠️ Electron main stability depends on Node unhandledRejection behavior.

Steps of Reproduction ✅
1. Launch the Electron main process via `startMainProcess()` in
`shadow-agent/src/electron/start-main-process.ts:136-189`; this wires up live capture by
creating a `SessionManager` and calling `void currentSessionManager.start()` at line 52.
2. Configure capture to use the WebSocket transport by setting
`SHADOW_CAPTURE_TRANSPORT=websocket` (or `ws`) and
`SHADOW_CAPTURE_WS_URL`/`SHADOW_CAPTURE_TARGET`, so
`resolveCaptureTransportOptionsFromEnv()` in
`shadow-agent/src/capture/capture-transports.ts:24-52` returns `kind: 'websocket'`, which
is passed into `createSessionManager()` and ultimately into
`createWebSocketCaptureTransport()` in
`shadow-agent/src/capture/websocket-transport.ts:44-50`.
3. When `SessionManager.start()` in `shadow-agent/src/capture/session-manager.ts:110-159`
awaits `effectiveTransport.start({...})` (line 125), the WebSocket transport registers
handlers; on successful connection, `nextSocket.onopen` at `websocket-transport.ts:85-94`
runs an async IIFE that calls `await context.onSessionStarted(session)`, which is the
`onSessionStarted` implementation from `session-manager.ts:127-141` that calls `await
startSession(session)` and `await buffer.setSession(session.sessionId)` in
`event-buffer.ts:82-110`.
4. If any of the async filesystem operations in `setSession()` (e.g., `rm()` or
`writeFile()` in `event-buffer.ts:106-108` and `writeCheckpointFile()` in
`event-buffer.ts:164-172`) rejects or if `startSession()` throws synchronously, the async
IIFE in `nextSocket.onopen` (line 86) rejects; because it is invoked via `void (async ()
=> { ... })();` without a `.catch()` or surrounding `try/catch`, this rejection becomes an
unhandled promise rejection in the Electron main process, with no local error handling in
`websocket-transport.ts` or `start-main-process.ts`.
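A guarded execution path for async event handlers can be sketched as a small wrapper that returns a synchronous handler and routes rejections to an explicit error sink. The wrapper name and the `context`/`session` shapes below are illustrative placeholders:

```typescript
type Session = { sessionId: string };

// Wrap an async handler so its rejection is always caught,
// never surfacing as an unhandled promise rejection.
function guardedHandler(run: () => Promise<void>, onError: (error: unknown) => void): () => void {
  return () => {
    void run().catch(onError);
  };
}

// Usage mirroring the reviewed onopen wiring, with a simulated failure.
const failures: unknown[] = [];
const session: Session = { sessionId: 'demo' };
const context = {
  async onSessionStarted(_s: Session): Promise<void> {
    throw new Error('setSession failed'); // simulate a filesystem error
  },
};

const onopen = guardedHandler(() => context.onSessionStarted(session), (e) => failures.push(e));
onopen(); // rejection is captured instead of destabilizing the process
```

The same wrapper could guard the reconnect branch (`onSessionReset`) so both paths share one error-handling policy.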
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** shadow-agent/src/capture/websocket-transport.ts
**Line:** 85:93
**Comment:**
*Possible Bug: The async `onopen` handler is also detached without rejection handling, so failures in `context.onSessionStarted`/`context.onSessionReset` become unhandled promise rejections. This can crash or destabilize the main process under session initialization failures. Handle errors explicitly in the handler or await through a guarded execution path.
Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
Once fix is implemented, also check other comments on the same PR, and ask user if the user wants to fix the rest of the comments as well. if said yes, then fetch all the comments validate the correctness and implement a minimal fix

CodeAnt AI finished running the review.

CodeAnt AI is running the review.
Sequence Diagram

This diagram shows how the Electron main process resolves a capture transport from environment settings, starts the session manager with that transport, and streams captured events through parsing and buffering to the renderer as sanitized live updates.

```mermaid
sequenceDiagram
    participant Main as Electron main
    participant TransportFactory as Capture transport resolver
    participant Transport as Capture transport
    participant Session as Session manager
    participant Buffer as Event buffer
    participant IPC as IPC bridge
    participant UI as Renderer
    Main->>TransportFactory: Resolve capture options from environment
    TransportFactory-->>Main: Transport kind and settings
    Main->>Session: Create session manager with transport and privacy
    Session->>Transport: Start transport with callbacks
    Transport-->>Session: Notify session started and deliver chunks
    Session->>Buffer: Parse and normalize events into queue
    Buffer-->>IPC: Notify about new queued events
    IPC->>UI: Send sanitized live events to renderer
```

Generated by CodeAnt AI
```ts
canManagePrivacy:
  typeof host.getPrivacyPolicy === 'function' &&
  typeof host.updatePrivacySettings === 'function',
```
Suggestion: canManagePrivacy is currently gated on both read and write privacy APIs, but the renderer only needs the update API to let users toggle settings from the current snapshot policy. If a host implements updatePrivacySettings but not getPrivacyPolicy, privacy controls are incorrectly disabled, creating a contract mismatch and silently breaking privacy updates. [logic error]
Severity Level: Major ⚠️
- ⚠️ Embedded hosts cannot expose interactive privacy toggles reliably.
- ⚠️ Privacy UI disabled despite functional `updatePrivacySettings` implementation.
- ⚠️ Public host contract and capability detection are inconsistent.

Steps of Reproduction ✅
1. Note the host capability computation in `shadow-agent/src/renderer/host.ts:28-35`,
where `getHostCapabilities(host)` sets `canManagePrivacy` only if **both**
`host.getPrivacyPolicy` and `host.updatePrivacySettings` are functions:
- line 31: `canManagePrivacy:`
- line 32: `typeof host.getPrivacyPolicy === 'function' &&`
- line 33: `typeof host.updatePrivacySettings === 'function',`.
2. Observe how the renderer uses this flag in `shadow-agent/src/renderer/App.tsx:71-75`
and `380-389`:
- line 74: `const capabilities = useMemo(() => getHostCapabilities(host), [host]);`
- lines 104-112: `<PrivacyPanel privacy={privacyPolicy}
interactive={capabilities.canManagePrivacy} ... onToggle={(updates) => void
updatePrivacySettings(updates)} />`.
The `interactive` prop controls whether the privacy toggles are disabled (see step 3)
but does **not** depend on `getPrivacyPolicy` anywhere else.
3. Inspect `PrivacyPanel` in `shadow-agent/src/renderer/App.tsx:4-61`:
- lines 30-35, 45-50: each `<input type="checkbox">` has `disabled={!interactive ||
busy}`.
- line 60: when `!interactive`, it renders the note "This host can report privacy
policy but cannot change it."
This means if `capabilities.canManagePrivacy` is false, the UI prevents toggling even
if `onToggle` is wired to a working `updatePrivacySettings`.
4. Consider a valid host implementation, per `ShadowAgentHost` in
`shadow-agent/src/renderer/host.ts:10-20`, that:
- implements `loadInitialSnapshot` returning a `SnapshotPayload` whose `.privacy` field
is already correct, and
- implements `updatePrivacySettings` but **omits** `getPrivacyPolicy` (allowed because
both are optional in the interface),
then passes this host into `<App host={host} />` (see `ShadowAgentAppProps` in
`App.tsx:65-72`).
In this setup:
- `getHostCapabilities(host)` computes `canManagePrivacy` as `false` (no
`getPrivacyPolicy`),
- `PrivacyPanel` renders with `interactive={false}`, disabling the checkboxes,
- but `updatePrivacySettings` in `App.tsx:90-99` would otherwise function correctly
(`host.updatePrivacySettings` is present).
Thus privacy controls are unnecessarily disabled for a host that fully supports updates
from the current snapshot privacy state, creating a contract mismatch between
`ShadowAgentHost` (where `getPrivacyPolicy` is optional) and `getHostCapabilities`.
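The proposed gating can be sketched by computing the capability from the write API alone. The `HostLike` interface below only mirrors (and is not) the real `ShadowAgentHost` contract:

```typescript
// Hypothetical mirror of the host interface; both privacy APIs are optional.
interface HostLike {
  getPrivacyPolicy?: () => unknown;
  updatePrivacySettings?: (updates: unknown) => Promise<unknown>;
}

function getHostCapabilities(host: HostLike): { canManagePrivacy: boolean } {
  return {
    // Only the write API is required: the initial snapshot already carries
    // the current policy, so getPrivacyPolicy is optional for interactivity.
    canManagePrivacy: typeof host.updatePrivacySettings === 'function',
  };
}
```

A host that implements only `updatePrivacySettings` then gets interactive privacy controls, while a host with neither API correctly reports `false`.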
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** shadow-agent/src/renderer/host.ts
**Line:** 31:33
**Comment:**
*Logic Error: `canManagePrivacy` is currently gated on both read and write privacy APIs, but the renderer only needs the update API to let users toggle settings from the current snapshot policy. If a host implements `updatePrivacySettings` but not `getPrivacyPolicy`, privacy controls are incorrectly disabled, creating a contract mismatch and silently breaking privacy updates.
Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
Once fix is implemented, also check other comments on the same PR, and ask user if the user wants to fix the rest of the comments as well. if said yes, then fetch all the comments validate the correctness and implement a minimal fix

```ts
reconnectDelayMs: env.SHADOW_CAPTURE_RECONNECT_MS ? Number.parseInt(env.SHADOW_CAPTURE_RECONNECT_MS, 10) : undefined,
sessionId: env.SHADOW_CAPTURE_SESSION_ID?.trim() || undefined,
sessionLabel: env.SHADOW_CAPTURE_SESSION_LABEL?.trim() || undefined
} satisfies HttpStreamCaptureTransportOptions;
case 'ws':
case 'websocket':
  return {
    kind: 'websocket',
    url: requiredEnv(env.SHADOW_CAPTURE_WS_URL ?? env.SHADOW_CAPTURE_TARGET, 'SHADOW_CAPTURE_WS_URL'),
    reconnectDelayMs: env.SHADOW_CAPTURE_RECONNECT_MS ? Number.parseInt(env.SHADOW_CAPTURE_RECONNECT_MS, 10) : undefined,
    sessionId: env.SHADOW_CAPTURE_SESSION_ID?.trim() || undefined,
    sessionLabel: env.SHADOW_CAPTURE_SESSION_LABEL?.trim() || undefined
  } satisfies WebSocketCaptureTransportOptions;
case 'socket':
  return {
    kind: 'socket',
    host: requiredEnv(env.SHADOW_CAPTURE_SOCKET_HOST, 'SHADOW_CAPTURE_SOCKET_HOST'),
    port: parsePort(env.SHADOW_CAPTURE_SOCKET_PORT),
    reconnectDelayMs: env.SHADOW_CAPTURE_RECONNECT_MS ? Number.parseInt(env.SHADOW_CAPTURE_RECONNECT_MS, 10) : undefined,
```
🟠 Architect Review — HIGH
SHADOW_CAPTURE_RECONNECT_MS is parsed with Number.parseInt but never validated; when set to a non-numeric value, reconnectDelayMs becomes NaN and Math.max(50, NaN) propagates NaN into setTimeout in the HTTP, WebSocket, and socket transports, collapsing reconnect delay to effectively 0 ms and causing tight reconnect loops under misconfiguration.
Suggestion: Add a shared validator for reconnectDelayMs that only accepts finite, positive integers and falls back to the transport's default (with a warning log) when the env value is invalid, so misconfiguration cannot degrade reconnect behavior into rapid retry loops.
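The suggested shared validator can be sketched as below; the function name and the fallback-with-warning behavior are assumptions about how the project might wire it, not the actual helper:

```typescript
// Parse an env-provided reconnect delay, rejecting NaN, zero, and negatives
// so setTimeout never receives NaN and reconnects cannot tight-loop.
function parseReconnectDelayMs(raw: string | undefined, fallbackMs: number): number {
  if (raw === undefined) return fallbackMs;
  const parsed = Number.parseInt(raw, 10);
  if (!Number.isFinite(parsed) || parsed <= 0) {
    console.warn(`Invalid SHADOW_CAPTURE_RECONNECT_MS value "${raw}"; using ${fallbackMs}ms`);
    return fallbackMs;
  }
  return parsed;
}
```

All three transports could then call this helper instead of inlining `Number.parseInt`, so a misconfigured value degrades to the transport default rather than to a 0 ms retry loop.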
Prompt for AI Agent 🤖
This is an **Architect / Logical Review** comment left during a code review. These reviews are first-class, important findings — not optional suggestions. Do NOT dismiss this as a 'big architectural change' just because the title says architect review; most of these can be resolved with a small, localized fix once the intent is understood.
**Path:** shadow-agent/src/capture/capture-transports.ts
**Line:** 40:58
**Comment:**
*HIGH: SHADOW_CAPTURE_RECONNECT_MS is parsed with Number.parseInt but never validated; when set to a non-numeric value, reconnectDelayMs becomes NaN and Math.max(50, NaN) propagates NaN into setTimeout in the HTTP, WebSocket, and socket transports, collapsing reconnect delay to effectively 0 ms and causing tight reconnect loops under misconfiguration.
Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
If a suggested approach is provided above, use it as the authoritative instruction. If no explicit code suggestion is given, you MUST still draft and apply your own minimal, localized fix — do not punt back with 'no suggestion provided, review manually'. Keep the change as small as possible: add a guard clause, gate on a loading state, reorder an await, wrap in a conditional, etc. Do not refactor surrounding code or expand scope beyond the finding.
Once fix is implemented, also check other comments on the same PR, and ask user if the user wants to fix the rest of the comments as well. if said yes, then fetch all the comments validate the correctness and implement a minimal fix

```tsx
<PrivacyPanel
  privacy={privacyPolicy}
  interactive={capabilities.canManagePrivacy}
  busy={busy === 'privacy'}
```
Suggestion: Privacy controls are only disabled when busy is 'privacy', so users can still toggle privacy while boot/load/export operations are in progress. That allows overlapping async state transitions and can produce stale or out-of-order UI state updates. Disable privacy toggles whenever any busy operation is active, not just privacy updates. [race condition]
Severity Level: Major ⚠️
- ⚠️ Snapshot privacy can desync from main-process privacy settings.
- ⚠️ Users can overlap boot/load/export with privacy updates.
- ⚠️ Busy indicator no longer represents all in-flight operations.

Steps of Reproduction ✅
1. Start the Electron app via `startMainProcess()` in
`shadow-agent/src/electron/start-main-process.ts:136-155`, which registers IPC handlers
including `shadow-agent:bootstrap`, `shadow-agent:get-privacy-policy`, and
`shadow-agent:update-privacy-settings` in `registerIpcHandlers()` at
`start-main-process.ts:59-133`.
2. The renderer mounts `App` (`shadow-agent/src/renderer/App.tsx:194-203`) using
`useReducer(appReducer, undefined, initialAppState)`; `initialAppState()` sets `busy:
'booting'` (`shadow-agent/src/renderer/app-state.ts:49-51`). The `useEffect` at
`App.tsx:227-257` dispatches `BOOT_START` and then awaits `loadPreferredSnapshot()`, which
calls `host.loadInitialSnapshot()` and triggers the `shadow-agent:bootstrap` IPC handler.
3. While `BOOT_START` is in flight (state `busy === 'booting'`), the UI already renders
the Privacy panel. The `PrivacyPanel` inputs are wired with `disabled={!interactive ||
busy}` (`App.tsx:159-179`), but the `busy` prop passed down is `busy={busy === 'privacy'}`
at PR hunk line `489` (`shadow-agent/src/renderer/App.tsx`), so with `busy === 'booting'`
the checkboxes remain enabled. Click the "Allow off-host shadow inference" or "Allow raw
transcript storage/export" checkbox in the Privacy panel; this calls `onToggle`, which
invokes `updatePrivacySettings()` (`App.tsx:369-383`), dispatching `PRIVACY_UPDATE_START`
and kicking off an async `host.updatePrivacySettings()` call over IPC (`preload.ts:13-15`
/ `start-main-process.ts:125-133`).
4. Let `PRIVACY_UPDATE_SUCCESS` resolve before `BOOT_SUCCESS`. In this ordering,
`appReducer` handles `PRIVACY_UPDATE_SUCCESS` while `state.snapshot` is still `null`, so
it returns `{ busy: null, error: null, snapshot: state.snapshot }`
(`app-state.ts:88-103`), effectively dropping the new `PrivacyPolicy` on the floor. When
the slower `BOOT_SUCCESS` then arrives, the reducer unconditionally sets `snapshot:
action.snapshot` (`app-state.ts:58-60`) with whatever privacy settings were captured at
bootstrap time. The main process privacy settings have already been updated
(`start-main-process.ts:30-37`), but the renderer snapshot's `privacy` field and the UI's
`privacyPolicy` (`App.tsx:310-315`) now reflect stale values. This inconsistent state only
occurs because the Privacy controls remained enabled during a non-privacy busy state
(`busy === 'booting'`); disabling them whenever `busy !== null` would prevent this
overlapping of boot and privacy update. Similar overlaps are possible with `busy ===
'loading'` or `'exporting'` since the Privacy toggles are still enabled in those states as
well.
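The fix reduces to deriving the disabled state from any in-flight operation rather than only privacy updates. The `BusyState` union below mirrors the states named in the steps above but is a hypothetical sketch, not the app's actual state type:

```typescript
// Illustrative mirror of the renderer's busy states.
type BusyState = 'booting' | 'loading' | 'exporting' | 'privacy' | null;

function privacyControlsDisabled(busy: BusyState, interactive: boolean): boolean {
  // Disable whenever any operation is in flight, not only privacy updates,
  // so boot/load/export cannot overlap with a privacy toggle.
  return !interactive || busy !== null;
}
```

In the panel this would replace `busy={busy === 'privacy'}` with a check against the full busy state, closing the window where a privacy update races a pending bootstrap.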
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** shadow-agent/src/renderer/App.tsx
**Line:** 489:489
**Comment:**
*Race Condition: Privacy controls are only disabled when `busy` is `'privacy'`, so users can still toggle privacy while boot/load/export operations are in progress. That allows overlapping async state transitions and can produce stale or out-of-order UI state updates. Disable privacy toggles whenever any busy operation is active, not just privacy updates.
Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
Once fix is implemented, also check other comments on the same PR, and ask user if the user wants to fix the rest of the comments as well. if said yes, then fetch all the comments validate the correctness and implement a minimal fix

CodeAnt AI finished running the review.

CodeAnt AI is running the review.
Sequence Diagram

This diagram shows how the main process selects a capture transport to feed live events into the app and how user privacy choices are propagated to inference and export behavior.

```mermaid
sequenceDiagram
    participant User
    participant Renderer
    participant ElectronMain
    participant CaptureTransport
    participant SessionManager
    participant InferenceEngine
    ElectronMain->>CaptureTransport: Start selected capture transport from environment
    CaptureTransport-->>SessionManager: Deliver session info and transcript chunks
    SessionManager->>SessionManager: Parse chunks into canonical events and buffer them
    SessionManager-->>Renderer: Send sanitized live events via IPC
    SessionManager-->>InferenceEngine: Expose buffered events for shadow inference
    User->>Renderer: Toggle privacy options in Privacy panel
    Renderer->>ElectronMain: Request privacy settings update
    ElectronMain->>InferenceEngine: Persist settings and restart with new privacy
    ElectronMain-->>Renderer: Return updated privacy policy for UI and exports
```

Generated by CodeAnt AI
```ts
activeSession = session;
sessionTitle = `Live: ${session.sessionId.slice(0, 12)}`;
sessionTitle = session.label;
await buffer.setSession(session.sessionId);
```
Suggestion: Remove the session-buffer reset call here and switch to a read-only session switch that does not trigger filesystem deletion or persistence writes. [custom_rule]
Severity Level: Minor
Why it matters? 🤔
The buffer implementation of setSession() deletes the previous session directory, clears the current session directory, and persists checkpoints. That is a real on-disk mutation, so this code does violate the read-only/persistence rule described by the suggestion.
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** shadow-agent/src/capture/session-manager.ts
**Line:** 88:88
**Comment:**
*Custom Rule: Remove the session-buffer reset call here and switch to a read-only session switch that does not trigger filesystem deletion or persistence writes.
Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and keep it concise.
Once the fix is implemented, also check the other comments on the same PR and ask the user whether they want the rest fixed as well. If yes, fetch all the comments, validate their correctness, and implement a minimal fix.

```tsx
{capabilities.canExportReplayJsonl ? (
  <button
    type="button"
    className="button button--ghost"
    onClick={() => void exportReplay({ storeRawTranscript: true })}
    disabled={!snapshot || busy !== null || !privacyPolicy.allowRawTranscriptStorage}
  >
    Export raw replay
  </button>
```
Suggestion: Remove the raw replay export action from the renderer UI so the shadow-agent stays read-only and does not initiate file-writing flows. [custom_rule]
Severity Level: Minor
Why it matters? 🤔
The renderer now exposes a dedicated "Export raw replay" action that calls exportReplay with storeRawTranscript: true, which is a file-writing/export path. If the rule is that the observer UI should stay read-only and avoid initiating raw export flows, this is a direct violation.
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** shadow-agent/src/renderer/App.tsx
**Line:** 433:441
**Comment:**
*Custom Rule: Remove the raw replay export action from the renderer UI so the shadow-agent stays read-only and does not initiate file-writing flows.
Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and keep it concise.
Once the fix is implemented, also check the other comments on the same PR and ask the user whether they want the rest fixed as well. If yes, fetch all the comments, validate their correctness, and implement a minimal fix.

```ts
        reconnectDelayMs: env.SHADOW_CAPTURE_RECONNECT_MS ? Number.parseInt(env.SHADOW_CAPTURE_RECONNECT_MS, 10) : undefined,
        sessionId: env.SHADOW_CAPTURE_SESSION_ID?.trim() || undefined,
        sessionLabel: env.SHADOW_CAPTURE_SESSION_LABEL?.trim() || undefined
      } satisfies HttpStreamCaptureTransportOptions;
    case 'ws':
    case 'websocket':
      return {
        kind: 'websocket',
        url: requiredEnv(env.SHADOW_CAPTURE_WS_URL ?? env.SHADOW_CAPTURE_TARGET, 'SHADOW_CAPTURE_WS_URL'),
        reconnectDelayMs: env.SHADOW_CAPTURE_RECONNECT_MS ? Number.parseInt(env.SHADOW_CAPTURE_RECONNECT_MS, 10) : undefined,
        sessionId: env.SHADOW_CAPTURE_SESSION_ID?.trim() || undefined,
        sessionLabel: env.SHADOW_CAPTURE_SESSION_LABEL?.trim() || undefined
      } satisfies WebSocketCaptureTransportOptions;
    case 'socket':
      return {
        kind: 'socket',
        host: requiredEnv(env.SHADOW_CAPTURE_SOCKET_HOST, 'SHADOW_CAPTURE_SOCKET_HOST'),
        port: parsePort(env.SHADOW_CAPTURE_SOCKET_PORT),
        reconnectDelayMs: env.SHADOW_CAPTURE_RECONNECT_MS ? Number.parseInt(env.SHADOW_CAPTURE_RECONNECT_MS, 10) : undefined,
        sessionId: env.SHADOW_CAPTURE_SESSION_ID?.trim() || undefined,
```
🟠 Architect Review — HIGH
SHADOW_CAPTURE_RECONNECT_MS is parsed with Number.parseInt but not validated; non-numeric values become NaN and propagate into reconnectDelayMs, causing Math.max(50, NaN) to yield NaN and setTimeout to schedule immediate reconnects, potentially creating tight retry loops under misconfiguration.
Suggestion: Validate SHADOW_CAPTURE_RECONNECT_MS as a finite, non-negative integer (e.g., via Number.isFinite and range checks) and fall back to the default reconnect delay with a warning log when invalid, rather than passing NaN through to the transports.
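A minimal sketch of the suggested validation follows; the helper name and the 1000 ms fallback are illustrative assumptions, not code from the PR.

```typescript
// Illustrative guard for SHADOW_CAPTURE_RECONNECT_MS; the fallback value is assumed.
function parseReconnectDelayMs(raw: string | undefined, fallbackMs = 1000): number {
  if (raw === undefined) return fallbackMs;
  const parsed = Number.parseInt(raw, 10);
  if (!Number.isFinite(parsed) || parsed < 0) {
    // NaN (non-numeric input) and negative values both fall back with a warning.
    console.warn(`Invalid SHADOW_CAPTURE_RECONNECT_MS "${raw}"; using ${fallbackMs}ms`);
    return fallbackMs;
  }
  return parsed;
}
```

Because `Number.isFinite(NaN)` is `false`, a misconfigured value can no longer reach `Math.max(50, NaN)` in the transports.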
Prompt for AI Agent 🤖
This is an **Architect / Logical Review** comment left during a code review. These reviews are first-class, important findings — not optional suggestions. Do NOT dismiss this as a 'big architectural change' just because the title says architect review; most of these can be resolved with a small, localized fix once the intent is understood.
**Path:** shadow-agent/src/capture/capture-transports.ts
**Line:** 40:59
**Comment:**
*HIGH: SHADOW_CAPTURE_RECONNECT_MS is parsed with Number.parseInt but not validated; non-numeric values become NaN and propagate into reconnectDelayMs, causing Math.max(50, NaN) to yield NaN and setTimeout to schedule immediate reconnects, potentially creating tight retry loops under misconfiguration.
Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and keep it concise.
If a suggested approach is provided above, use it as the authoritative instruction. If no explicit code suggestion is given, you MUST still draft and apply your own minimal, localized fix — do not punt back with 'no suggestion provided, review manually'. Keep the change as small as possible: add a guard clause, gate on a loading state, reorder an await, wrap in a conditional, etc. Do not refactor surrounding code or expand scope beyond the finding.
Once the fix is implemented, also check the other comments on the same PR and ask the user whether they want the rest fixed as well. If yes, fetch all the comments, validate their correctness, and implement a minimal fix.

```diff
  // Then watch for incremental updates
- const parser = createIncrementalParser((entry) => {
+ activeParser = createIncrementalParser((entry) => {
    const events = normalizeEntry(entry, session.sessionId);
```
Suggestion: Events are normalized without propagating the transport session source, and the normalizer defaults to transcript source, so socket/http/websocket captures are mislabeled as transcript events. Ensure normalized events are stamped with session.source (or pass source into the normalizer) so downstream record/source metadata remains correct. [logic error]
Severity Level: Major ⚠️
- ⚠️ Live HTTP/socket/WebSocket events stored with wrong EventSource.
- ⚠️ Replays from hook streams indistinguishable from transcript captures.

Steps of Reproduction ✅
1. Start the Electron main process via `startMainProcess()` in
`shadow-agent/src/electron/start-main-process.ts:17-33`, with environment variables
configuring a non-file-tail live transport (e.g. `SHADOW_CAPTURE_TRANSPORT=socket` and the
required `SHADOW_CAPTURE_SOCKET_HOST`/`SHADOW_CAPTURE_SOCKET_PORT` so
`resolveCaptureTransportOptionsFromEnv()` in
`shadow-agent/src/capture/capture-transports.ts:24-61` returns `kind: 'socket'`).
2. `startMainProcess()` constructs a `SessionManager` using `createSessionManager()` in
`shadow-agent/src/capture/session-manager.ts:34-43` and passes the resolved transport
options; `createCaptureTransport()` in
`shadow-agent/src/capture/capture-transports.ts:67-76` creates a concrete transport whose
`buildSession` functions set `session.source = 'claude-hook'` for HTTP/WebSocket/Socket
(`http-stream-transport.ts:13-21`, `websocket-transport.ts:13-21`,
`socket-transport.ts:15-22`).
3. When live data arrives, the transport's `start()` implementation invokes
`context.onChunk({ session, chunk })` (HTTP:
`shadow-agent/src/capture/http-stream-transport.ts:84-101`; WebSocket:
`shadow-agent/src/capture/websocket-transport.ts:96-102`; Socket:
`shadow-agent/src/capture/socket-transport.ts:85-97`), which flows into the
`SessionManager`'s `onChunk` handler registered in `session-manager.ts:125-159`.
`startSession()` in `session-manager.ts:85-100` wires the parser so that every parsed
entry is normalized via `normalizeEntry(entry, session.sessionId)` at line 90.
4. `normalizeEntry()` in `shadow-agent/src/capture/normalizer.ts:21-28` always sets `const
source = 'claude-transcript' as const;` and stamps this on every `CanonicalEvent`,
ignoring the `CaptureSession.source` provided by the transport. Downstream,
`buildRendererInput()` in `shadow-agent/src/shared/renderer-input-adapter.ts:60-66` calls
`buildSessionRecord(events, title)` (`shared/replay-store.ts:51-78`), which sets
`record.source = first?.source ?? 'replay'`. As a result, live captures coming from
HTTP/WebSocket/Socket transports are recorded and persisted with `source:
'claude-transcript'` rather than `'claude-hook'`, misrepresenting their origin in
snapshots and stored replays.
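The mislabeling described above can be avoided by threading the transport's source into normalization. This is a concise sketch only; the parameter list and event shape are simplified assumptions, not the real `normalizer.ts` signature.

```typescript
// Sketch: thread the transport session source through normalization.
type EventSource = 'claude-transcript' | 'claude-hook' | 'replay';

interface NormalizedEvent {
  sessionId: string;
  source: EventSource;
  payload: unknown;
}

function normalizeEntry(
  entry: unknown,
  sessionId: string,
  source: EventSource = 'claude-transcript' // default preserved for file-tail callers
): NormalizedEvent {
  return { sessionId, source, payload: entry };
}

// The session manager would then call:
//   normalizeEntry(entry, session.sessionId, session.source)
// so hook-based transports are stamped 'claude-hook' instead of the default.
```

Keeping the transcript default means existing file-tail call sites need no change; only the streaming path passes the extra argument.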
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** shadow-agent/src/capture/session-manager.ts
**Line:** 90:90
**Comment:**
*Logic Error: Events are normalized without propagating the transport session source, and the normalizer defaults to transcript source, so socket/http/websocket captures are mislabeled as transcript events. Ensure normalized events are stamped with `session.source` (or pass source into the normalizer) so downstream record/source metadata remains correct.
Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and keep it concise.
Once the fix is implemented, also check the other comments on the same PR and ask the user whether they want the rest fixed as well. If yes, fetch all the comments, validate their correctness, and implement a minimal fix.

```ts
onChunk: async ({ session, chunk }) => {
  if (!activeSession || activeSession.sessionId !== session.sessionId) {
    await startSession(session);
  }
  activeParser.push(chunk);
```
Suggestion: The chunk handler can call startSession whenever activeSession is null, but stop() explicitly sets activeSession to null first; any late chunk delivered after stop can silently restart a session and continue ingesting data. Add a stopped/closing guard in onChunk to ignore chunks after shutdown. [race condition]
Severity Level: Major ⚠️
- ⚠️ SessionManager.stop() may not fully stop ingestion.
- ⚠️ Late chunks can reopen sessions after shutdown.

Steps of Reproduction ✅
1. Create a `SessionManager` using `createSessionManager()` in
`shadow-agent/src/capture/session-manager.ts:34-43` with a streaming transport, e.g. by
configuring the Electron main process in
`shadow-agent/src/electron/start-main-process.ts:25-31` so that
`resolveCaptureTransportOptionsFromEnv()` in
`shadow-agent/src/capture/capture-transports.ts:24-61` selects `kind: 'http-stream'` or
`kind: 'socket'`.
2. The selected transport's `start()` implementation (HTTP stream:
`shadow-agent/src/capture/http-stream-transport.ts:30-101`; WebSocket:
`shadow-agent/src/capture/websocket-transport.ts:50-103`; Socket:
`shadow-agent/src/capture/socket-transport.ts:31-98`) establishes a long-lived connection,
calls `context.onSessionStarted(session)` once connected, and then repeatedly calls
`context.onChunk({ session, chunk })` as data arrives.
3. While the connection is still active and data is flowing, a caller invokes
`SessionManager.stop()` (`shadow-agent/src/capture/session-manager.ts:162-169`), which
calls `teardown()` at lines 79-83. `teardown()` immediately invokes
`transportSubscription.stop()` (setting each transport's local `stopped` flag) and sets
`activeSession = null`, but it does not prevent already in-flight or just-scheduled
`context.onChunk` calls from running (for example, the HTTP transport loop at
`http-stream-transport.ts:86-101` or the socket `'data'` handler at
`socket-transport.ts:85-97` may still invoke `context.onChunk` once more after `stop()`
has been called).
4. When one of these late `context.onChunk` invocations executes, the `onChunk` handler
registered in `createSessionManager.start()`
(`shadow-agent/src/capture/session-manager.ts:153-157`) observes `!activeSession` (because
`stop()` set it to `null`) and calls `await startSession(session);` again, reinitializing
the parser and buffer and then pushing the chunk into `activeParser`. This causes new
events to be ingested even though `SessionManager.stop()` has already been called,
violating expected shutdown semantics and allowing ingestion to silently restart after a
supposed stop.
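The suggested guard can be as small as a boolean checked before any chunk is processed. The sketch below assumes the manager's handlers are closures over shared state; the factory name and handler shape are illustrative, not the actual `SessionManager` internals.

```typescript
// Minimal stopped-guard sketch; SessionManager internals are assumed.
function createGuardedIngest(handleChunk: (chunk: string) => Promise<void>) {
  let stopped = false;
  return {
    onChunk: async (chunk: string): Promise<void> => {
      if (stopped) return; // ignore late chunks delivered after stop()
      await handleChunk(chunk);
    },
    stop(): void {
      stopped = true;
    }
  };
}
```

Because `stop()` flips the flag before any teardown awaits, a chunk that races in from a still-draining transport is dropped rather than silently restarting a session.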
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** shadow-agent/src/capture/session-manager.ts
**Line:** 153:157
**Comment:**
*Race Condition: The chunk handler can call `startSession` whenever `activeSession` is null, but `stop()` explicitly sets `activeSession` to null first; any late chunk delivered after stop can silently restart a session and continue ingesting data. Add a stopped/closing guard in `onChunk` to ignore chunks after shutdown.
Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and keep it concise.
Once the fix is implemented, also check the other comments on the same PR and ask the user whether they want the rest fixed as well. If yes, fetch all the comments, validate their correctness, and implement a minimal fix.
CodeAnt AI finished running the review.
Attempted auto-merge failed: this PR (feat/pluggable-capture-transport, breach resolution cherry-pick from Apr 28) conflicts with current main. Rebase needed before this can land. Priority: high — this is the formal re-routing of the direct-to-main breach commits.
User description
Re-introduces the pluggable capture transport system that was inadvertently pushed directly to main on commit 333830e. Main was reverted in da1bf2d to restore PR-based workflow; this PR re-applies the changes via cherry-pick (now f6d0274).
Original commit body
Adds multiple transport implementations and refactors the capture layer:
37 files changed, 2013 insertions(+), 245 deletions(-)
CodeAnt-AI Description
Add pluggable live capture transports and saved privacy controls
What Changed
Impact
- ✅ Fewer missed live events after transcript rotation
- ✅ Clearer privacy controls for transcript handling
- ✅ Safer local storage of queued transcript data

🔄 Retrigger CodeAnt AI Review
Details
💡 Usage Guide
Checking Your Pull Request
Every time you make a pull request, our system automatically looks through it. We check for security issues, mistakes in how you're setting up your infrastructure, and common code problems. We do this to make sure your changes are solid and won't cause any trouble later.
Talking to CodeAnt AI
Got a question or need a hand with something in your pull request? You can easily get in touch with CodeAnt AI right here. Just type the following in a comment on your pull request, and replace "Your question here" with whatever you want to ask:
This lets you have a chat with CodeAnt AI about your pull request, making it easier to understand and improve your code.
Example
Preserve Org Learnings with CodeAnt
You can record team preferences so CodeAnt AI applies them in future reviews. Reply directly to the specific CodeAnt AI suggestion (in the same thread) and replace "Your feedback here" with your input:
This helps CodeAnt AI learn and adapt to your team's coding style and standards.
Example
Retrigger review
Ask CodeAnt AI to review the PR again, by typing:
Check Your Repository Health
To analyze the health of your code repository, visit our dashboard at https://app.codeant.ai. This tool helps you identify potential issues and areas for improvement in your codebase, ensuring your repository maintains high standards of code health.