
🤖 fix: prevent transcript flashes and tearing during chat hydration#3152

Open
ammar-agent wants to merge 6 commits into main from fix/new-chat-streaming-flash

Conversation


@ammar-agent ammar-agent commented Apr 9, 2026

Summary

A newly created chat, or a just-switched existing chat in web mode, could briefly flash generic empty/loading placeholders during the handoff into transcript hydration. Even after those placeholder flashes were suppressed, switching between existing chats could still visibly tear the transcript or briefly show a stale deferred frame from the previous workspace. This change keeps brand-new chats in their explicit starting state, keeps cached workspace content visible during ordinary workspace switches, keeps the transcript viewport mounted during switches, and makes deferred transcript rendering workspace-aware so stale deferred rows cannot survive a chat handoff.

Background

Workspace creation navigates into the new workspace as soon as the workspace exists, but the first sendMessage and chat subscription replay can land a moment later. During that gap, the workspace shell treated the chat as empty or hydrating and could show "Catching up with the agent..." or "No Messages Yet" on brand-new chats.

Separately, switching back to an existing workspace in web mode marked the transcript as hydrating before replay completed. Even when that workspace already had cached transcript rows, WorkspaceShell still replaced the pane with the generic "Catching up with the agent..." splash, which produced a visible transcript flash on every switch.
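The cached-content guard this PR introduces (see Implementation below) can be sketched as a pure predicate; the field names are illustrative assumptions, not the real WorkspaceShell props:

```typescript
// Illustrative sketch of the hydration-splash decision. All names here are
// assumptions for exposition; the actual WorkspaceShell logic differs.
interface SplashInputs {
  isHydrating: boolean;       // web-mode replay still catching up
  cachedMessageCount: number; // transcript rows already cached for this workspace
  hasQueuedDraft: boolean;    // a draft queued to send in this workspace
}

// Only show the generic "Catching up with the agent..." splash when there is
// genuinely nothing cached to keep on screen during the replay handoff.
function shouldShowHydrationSplash(inputs: SplashInputs): boolean {
  return (
    inputs.isHydrating &&
    inputs.cachedMessageCount === 0 &&
    !inputs.hasQueuedDraft
  );
}
```

With this shape, switching between existing chats keeps the cached pane mounted, while a genuinely empty workspace still gets the splash.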

After suppressing that splash for cached transcript content, there were still two more switch-time artifacts:

  1. WorkspaceShell keyed ChatPane by workspaceId, which forced the entire transcript viewport to unmount/remount on every chat switch and showed up as a visible vertical tear before scroll/layout stabilization finished.
  2. ChatPane still deferred only the raw message array, so a workspace switch could briefly render a stale deferred transcript snapshot from the previous workspace before React committed the new live rows.

Implementation

  • move the optimistic new-chat startup state into StreamingMessageAggregator, alongside the existing pending-stream model/start-time state
  • mark that optimistic state from useCreationWorkspace only for auto-navigated creations so background-created workspaces do not later open in a stale starting state
  • let replay and terminal event handling clear that optimistic state in the same place as normal pending-stream cleanup:
    • empty catch-up cycles preserve the brand-new chat handoff
    • replayed first-user or assistant history, stream errors/aborts, reconnect idle confirmation, and background stream completion clear it
  • keep WorkspaceShell and ChatPane suppressing generic loading and empty placeholders while isStreamStarting is true
  • only show the web-only WorkspaceShell hydration splash when there is no cached transcript content or queued draft to keep visible, so switching between existing chats preserves the current pane while replay catches up
  • keep ChatPane mounted across workspace switches so the transcript viewport itself does not tear; instead of relying on a root remount to clear local UI state, reset the relevant per-workspace state inside ChatPane and re-arm transcript auto-scroll ownership on workspace changes
  • make deferred transcript rendering workspace-aware in ChatPane, and immediately bypass the deferred snapshot when it still belongs to the previous workspace so switch-time stale rows cannot flash into view
  • add focused aggregator/store regressions, the delayed-send UI regression, the cached-content WorkspaceShell hydration regression, the WorkspaceShell regression that asserts the chat pane DOM node survives workspace switches, and a messageUtils regression for cross-workspace deferred snapshot bypass
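The workspace-aware deferred-snapshot bypass described above can be sketched as a pure selector (types and names are illustrative, not the actual ChatPane code):

```typescript
// A transcript snapshot tagged with the workspace it belongs to. The tag is
// what makes deferred rendering workspace-aware.
interface TranscriptSnapshot {
  workspaceId: string;
  messages: readonly string[]; // stand-in for the real message rows
}

// When React's deferred value still carries rows from the previous workspace,
// fall back to the live snapshot immediately so stale rows never flash into
// view. (In ChatPane the deferred value would come from useDeferredValue.)
function selectRenderableSnapshot(
  live: TranscriptSnapshot,
  deferred: TranscriptSnapshot
): TranscriptSnapshot {
  return deferred.workspaceId === live.workspaceId ? deferred : live;
}
```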

Validation

  • bun test ./src/browser/utils/messages/StreamingMessageAggregator.test.ts
  • bun test ./src/browser/stores/WorkspaceStore.test.ts
  • bun test ./src/browser/features/ChatInput/useCreationWorkspace.test.tsx
  • bun test ./src/browser/utils/messages/messageUtils.test.ts
  • bun test ./src/browser/components/WorkspaceShell/WorkspaceShell.test.tsx
  • bun test ./tests/ui/chat/bottomLayoutShift.test.ts
  • bun test ./tests/ui/chat/newChatStreamingFlash.test.ts
  • make static-check

Risks

This changes both the startup/hydration state machine and the workspace-switch rendering path. The main regression risk is state leakage or over-eager immediate rendering during workspace switches, but the follow-up changes explicitly reset the local UI state that previously relied on remounting and now bypass stale deferred snapshots only when they still belong to another workspace. The validation set covers startup placeholders, transcript pinning, cached-hydration behavior, switch-time pane reuse, and the new cross-workspace deferred-snapshot case.

Pains

The original fix accumulated special-case store state as review comments exposed more replay edges. The rewrite folded that behavior back into the aggregator's existing pending-stream machinery, and the follow-up switch fixes exposed three separate UI paths: the generic hydration splash needed to respect cached transcript content, the keyed ChatPane remount needed to stop tearing the transcript viewport during workspace switches, and the deferred transcript snapshot needed to become workspace-aware so stale rows could not flash after the switch.


Generated with mux • Model: openai:gpt-5.4 • Thinking: xhigh • Cost: $117.90

@ammar-agent

@codex review


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 82248db82c

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

@ammar-agent ammar-agent force-pushed the fix/new-chat-streaming-flash branch from 82248db to b567850 Compare April 9, 2026 16:54
@ammar-agent

@codex review

Addressed the integration failure by making the new regression test compatible with the Jest-based integration runner as well as Bun.

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. Swish!


@ammar-agent ammar-agent force-pushed the fix/new-chat-streaming-flash branch from b567850 to 6234138 Compare April 9, 2026 17:00
@ammar-agent

@codex review

Restricted the optimistic pending-start flag to auto-navigated creations and added hook coverage for the background-creation path.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 6234138ce2


@ammar-agent ammar-agent force-pushed the fix/new-chat-streaming-flash branch from 6234138 to d21a216 Compare April 9, 2026 17:15
@ammar-agent

@codex review

Kept the optimistic pending-start flag through buffered first-turn replay until caught-up applies the history, and added a WorkspaceStore regression test for that timing.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: d21a21607b


@ammar-agent ammar-agent force-pushed the fix/new-chat-streaming-flash branch from d21a216 to f064605 Compare April 9, 2026 17:27
@ammar-agent

@codex review

Cleared stale optimistic start state when replay/activity confirms no active stream, and added WorkspaceStore coverage for both buffered replay and no-stream catch-up paths.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: f06460566e


@ammar-agent ammar-agent force-pushed the fix/new-chat-streaming-flash branch from f064605 to d9dcdef Compare April 9, 2026 17:35
@ammar-agent

@codex review

Stopped clearing the optimistic-start flag on recency-only non-streaming activity updates, and added WorkspaceStore coverage for that exact startup gap.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: d9dcdef0c9


@ammar-agent ammar-agent force-pushed the fix/new-chat-streaming-flash branch from d9dcdef to 6e026b5 Compare April 9, 2026 17:49
@ammar-agent

@codex review

Kept the optimistic-start flag across caught-up-without-first-turn, while still clearing it on buffered/live first-turn observation and on definitive background stop.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 6e026b5709


@ammar-agent ammar-agent force-pushed the fix/new-chat-streaming-flash branch from 6e026b5 to c7101d6 Compare April 9, 2026 18:01
@ammar-agent

@codex review

Preserved optimistic initial-send startup across full replay resets and added a direct WorkspaceStore regression test for that reset path.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: c7101d697a


@ammar-agent ammar-agent force-pushed the fix/new-chat-streaming-flash branch from c7101d6 to 63063c3 Compare April 9, 2026 18:11
@ammar-agent

@codex review

Cleared optimistic pending-start on pre-stream abort as well, with a targeted WorkspaceStore regression test for the interrupted-first-send path.

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. Delightful!


@ammar-agent

@codex review

Rewrote the new-chat startup fix so the optimistic pending-start state lives in StreamingMessageAggregator instead of a separate WorkspaceStore transient. Local validation is green (make static-check plus the focused startup regressions).

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. Can't wait for the next one!


@ammar-agent ammar-agent changed the title 🤖 fix: prevent new chat streaming flash 🤖 fix: prevent transcript flashes during chat hydration Apr 11, 2026
@ammar-agent

@codex review

Follow-up fix for the remaining workspace-switch transcript flash:

  • keep cached transcript content visible during web hydration when switching between existing chats
  • retain the new-chat optimistic-start barrier behavior from the earlier rewrite
  • added WorkspaceShell regression coverage and re-ran targeted tests plus make static-check

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. Breezy!


Keep a newly created workspace in an optimistic starting state until the first
real send reaches onChat, suppressing the transient catch-up and empty-state
placeholders that could flash during the handoff from project creation to the
workspace chat.

Add a focused regression test that delays the initial send and verifies the
starting barrier stays visible throughout the transition.

---

_Generated with `mux` • Model: `openai:gpt-5.4` • Thinking: `xhigh` • Cost: `$17.56`_

Move the optimistic new-chat startup state into StreamingMessageAggregator so
WorkspaceStore can derive starting directly from aggregator-owned pending stream
state. This removes the extra transient pendingInitialSend bookkeeping while
keeping the startup barrier alive through empty catch-up cycles and replay resets.

---

_Generated with `mux` • Model: `openai:gpt-5.4` • Thinking: `xhigh` • Cost: `$46.60`_

Stop the web-only WorkspaceShell catch-up splash from hiding cached transcript content when switching between existing chats. If the target workspace already has transcript rows or a queued draft, keep that content mounted during the onChat replay handoff instead of flashing it away behind the generic hydration placeholder.

Adds a focused WorkspaceShell regression test for the cached-content case and keeps the existing generic hydration/loading placeholder coverage.

---

_Generated with `mux` • Model: `openai:gpt-5.4` • Thinking: `xhigh` • Cost: `$69.76`_

@ammar-agent ammar-agent force-pushed the fix/new-chat-streaming-flash branch from df78d66 to 6f02a94 Compare April 12, 2026 16:23
@ammar-agent

@codex review

Rebased on main, resolved the two WorkspaceStore.ts conflicts, reran the targeted regression suite plus make static-check, and force-pushed the branch.

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. 🚀


Keep the ChatPane viewport mounted across workspace switches so the transcript doesn't visibly tear while the next workspace hydrates. ChatPane now resets the per-workspace local UI state that used to be reset by the root remount, and it re-arms tail ownership on workspace changes so the incoming transcript stays pinned instead of inheriting stale scroll state.

Adds a focused WorkspaceShell regression asserting that the chat pane DOM node survives workspace switches, alongside the existing UI regressions for transcript pinning and new-chat hydration.

---

_Generated with `mux` • Model: `openai:gpt-5.4` • Thinking: `xhigh` • Cost: `$73.46`_

@ammar-agent ammar-agent changed the title 🤖 fix: prevent transcript flashes during chat hydration 🤖 fix: prevent transcript flashes and tearing during chat hydration Apr 12, 2026
@ammar-agent

@codex review

Follow-up fix for the remaining workspace-switch vertical tear:

  • keep ChatPane mounted across workspace switches instead of remounting the whole transcript viewport
  • reset the per-workspace local UI state inside ChatPane that previously piggybacked on the root remount
  • re-arm transcript tail ownership on workspace changes so the incoming chat does not inherit stale scroll state
  • validated with WorkspaceShell, bottomLayoutShift, newChatStreamingFlash, and make static-check
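The reset-on-switch behavior in those bullets can be sketched as a pure state transition (the fields are assumptions standing in for ChatPane's real local UI state):

```typescript
// Local per-workspace UI state that previously relied on a root remount to
// clear. Field names are illustrative, not ChatPane's actual state shape.
interface PaneUiState {
  workspaceId: string;
  autoScrollPinned: boolean;       // transcript tail ownership
  editingMessageId: string | null; // any in-progress local edit
}

// When the workspace changes, derive a fresh state and re-arm auto-scroll so
// the incoming chat does not inherit stale scroll or edit state; when it does
// not change, keep the existing state object untouched.
function nextPaneState(prev: PaneUiState, workspaceId: string): PaneUiState {
  if (prev.workspaceId === workspaceId) return prev;
  return { workspaceId, autoScrollPinned: true, editingMessageId: null };
}
```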

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. Another round soon, please!


Make deferred transcript rendering workspace-aware so chat switches cannot briefly render a stale deferred snapshot from the previous workspace. ChatPane now defers a workspace-scoped transcript snapshot instead of just the message array, and the deferred-message guard immediately falls back to the live snapshot when the deferred rows still belong to another workspace.

Adds a regression unit test for the cross-workspace deferred snapshot case and reruns the switch-sensitive UI coverage.

---

_Generated with `mux` • Model: `openai:gpt-5.4` • Thinking: `xhigh` • Cost: `$117.90`_

@ammar-agent

@codex review

Follow-up fix for another likely workspace-switch artifact:

  • make deferred transcript rendering workspace-aware so a chat switch cannot briefly render a stale deferred snapshot from the previous workspace
  • added a messageUtils regression for cross-workspace deferred snapshot bypass
  • reran messageUtils, WorkspaceShell, bottomLayoutShift, newChatStreamingFlash, and make static-check

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. Already looking forward to the next diff.


Add a browser-mode repro script that boots an isolated dev-server, creates two live mock-chat workspaces, switches between them, and captures screenshot/scroll diagnostics. The script exits non-zero when the target transcript keeps shifting after it is already visible, matching the user-reported same-transcript tear.

---
_Generated with `mux` • Model: `openai:gpt-5.4` • Thinking: `xhigh` • Cost: `$121.96`_

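The failure condition that repro script checks can be sketched as a pure detector over per-frame samples (thresholds and field names are assumptions, not the real script's diagnostics):

```typescript
// One diagnostic sample per captured frame. Field names are illustrative.
interface FrameSample {
  visible: boolean;      // target transcript rendered and on screen
  transcriptTop: number; // top offset of the transcript, in px
}

// Flag a tear when the transcript keeps shifting after the frame in which it
// first became visible; sub-pixel jitter within the tolerance is allowed.
function detectPostVisibilityShift(
  samples: FrameSample[],
  tolerancePx = 1
): boolean {
  const firstVisible = samples.findIndex((s) => s.visible);
  if (firstVisible < 0) return false;
  const baseline = samples[firstVisible].transcriptTop;
  return samples
    .slice(firstVisible + 1)
    .some((s) => Math.abs(s.transcriptTop - baseline) > tolerancePx);
}
```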