🤖 fix: prevent transcript flashes and tearing during chat hydration #3152
ammar-agent wants to merge 6 commits into main
Conversation
@codex review
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 82248db82c
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Force-pushed 82248db to b567850
@codex review Addressed the integration failure by making the new regression test compatible with the Jest-based integration runner as well as Bun.
Codex Review: Didn't find any major issues. Swish!
Force-pushed b567850 to 6234138
@codex review Restricted the optimistic pending-start flag to auto-navigated creations and added hook coverage for the background-creation path.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 6234138ce2
Force-pushed 6234138 to d21a216
@codex review Kept the optimistic pending-start flag through buffered first-turn replay until
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: d21a21607b
Force-pushed d21a216 to f064605
@codex review Cleared stale optimistic start state when replay/activity confirms no active stream, and added WorkspaceStore coverage for both buffered replay and no-stream catch-up paths.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: f06460566e
Force-pushed f064605 to d9dcdef
@codex review Stopped clearing the optimistic-start flag on recency-only non-streaming activity updates, and added WorkspaceStore coverage for that exact startup gap.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: d9dcdef0c9
Force-pushed d9dcdef to 6e026b5
@codex review Kept the optimistic-start flag across caught-up-without-first-turn, while still clearing it on buffered/live first-turn observation and on definitive background stop.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 6e026b5709
Force-pushed 6e026b5 to c7101d6
@codex review Preserved optimistic initial-send startup across full replay resets and added a direct WorkspaceStore regression test for that reset path.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: c7101d697a
Force-pushed c7101d6 to 63063c3
@codex review Cleared optimistic pending-start on pre-stream abort as well, with a targeted WorkspaceStore regression test for the interrupted-first-send path.
Codex Review: Didn't find any major issues. Delightful!
@codex review Rewrote the new-chat startup fix so the optimistic pending-start state lives in
Codex Review: Didn't find any major issues. Can't wait for the next one!
@codex review Follow-up fix for the remaining workspace-switch transcript flash:
Codex Review: Didn't find any major issues. Breezy!
Keep a newly created workspace in an optimistic starting state until the first real send reaches `onChat`, suppressing the transient catch-up and empty-state placeholders that could flash during the handoff from project creation to the workspace chat. Add a focused regression test that delays the initial send and verifies the starting barrier stays visible throughout the transition.

_Generated with `mux` • Model: `openai:gpt-5.4` • Thinking: `xhigh` • Cost: `$17.56`_
Move the optimistic new-chat startup state into `StreamingMessageAggregator` so `WorkspaceStore` can derive starting directly from aggregator-owned pending stream state. This removes the extra transient `pendingInitialSend` bookkeeping while keeping the startup barrier alive through empty catch-up cycles and replay resets.

_Generated with `mux` • Model: `openai:gpt-5.4` • Thinking: `xhigh` • Cost: `$46.60`_
Stop the web-only `WorkspaceShell` catch-up splash from hiding cached transcript content when switching between existing chats. If the target workspace already has transcript rows or a queued draft, keep that content mounted during the `onChat` replay handoff instead of flashing it away behind the generic hydration placeholder. Adds a focused `WorkspaceShell` regression test for the cached-content case and keeps the existing generic hydration/loading placeholder coverage.

_Generated with `mux` • Model: `openai:gpt-5.4` • Thinking: `xhigh` • Cost: `$69.76`_
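The splash decision reduces to a small predicate. This is a sketch under assumed names (the real `WorkspaceShell` check may consult different fields): the splash is only warranted when hydration is in flight and there is nothing cached worth keeping on screen.

```typescript
// Hypothetical shape of the cached pane content the shell can inspect.
interface PaneContent {
  transcriptRowCount: number; // cached transcript rows already rendered
  hasQueuedDraft: boolean;    // a draft queued for this workspace
}

// Show the generic "Catching up with the agent..." splash only when
// there is no cached content to keep mounted during replay.
function shouldShowCatchUpSplash(isHydrating: boolean, content: PaneContent): boolean {
  if (!isHydrating) return false;
  const hasCachedContent = content.transcriptRowCount > 0 || content.hasQueuedDraft;
  return !hasCachedContent;
}
```

Switching to a warm workspace then keeps the existing rows visible while replay catches up, and only a truly cold workspace falls back to the placeholder.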
Force-pushed df78d66 to 6f02a94
@codex review Rebased on
Codex Review: Didn't find any major issues. 🚀
Keep the `ChatPane` viewport mounted across workspace switches so the transcript doesn't visibly tear while the next workspace hydrates. `ChatPane` now resets the per-workspace local UI state that used to be reset by the root remount, and it re-arms tail ownership on workspace changes so the incoming transcript stays pinned instead of inheriting stale scroll state. Adds a focused `WorkspaceShell` regression asserting that the chat pane DOM node survives workspace switches, alongside the existing UI regressions for transcript pinning and new-chat hydration.

_Generated with `mux` • Model: `openai:gpt-5.4` • Thinking: `xhigh` • Cost: `$73.46`_
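The replacement for the keyed remount can be sketched as a pure state transition (field names are assumptions, not the real `ChatPane` state): when the workspace id changes, return the per-workspace defaults that a fresh mount would have produced, including re-armed tail ownership; otherwise keep the existing state object untouched.

```typescript
// Hypothetical per-workspace local UI state that a remount used to clear.
interface LocalUiState {
  workspaceId: string;
  tailOwned: boolean;   // whether the transcript is pinned to the bottom
  scrollOffset: number; // last scroll position within the viewport
}

// Reset explicitly on workspace change instead of remounting the pane,
// so the viewport DOM node survives the switch without inheriting
// the previous workspace's scroll state.
function localUiStateFor(prev: LocalUiState, nextWorkspaceId: string): LocalUiState {
  if (prev.workspaceId === nextWorkspaceId) return prev;
  return { workspaceId: nextWorkspaceId, tailOwned: true, scrollOffset: 0 };
}
```

Returning the same object when the id is unchanged matters in a React setting: a referentially stable state value avoids spurious re-renders between switches.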
@codex review Follow-up fix for the remaining workspace-switch vertical tear:
Codex Review: Didn't find any major issues. Another round soon, please!
Make deferred transcript rendering workspace-aware so chat switches cannot briefly render a stale deferred snapshot from the previous workspace. `ChatPane` now defers a workspace-scoped transcript snapshot instead of just the message array, and the deferred-message guard immediately falls back to the live snapshot when the deferred rows still belong to another workspace. Adds a regression unit test for the cross-workspace deferred snapshot case and reruns the switch-sensitive UI coverage.

_Generated with `mux` • Model: `openai:gpt-5.4` • Thinking: `xhigh` • Cost: `$117.90`_
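The guard itself is tiny once the snapshot carries its workspace id. A minimal sketch, with assumed names (the real code defers the snapshot through React's `useDeferredValue`, which lags behind the live value during a transition):

```typescript
// Workspace-scoped snapshot: the rows travel with the id that owns them.
interface TranscriptSnapshot<M> {
  workspaceId: string;
  messages: M[];
}

// Prefer the deferred snapshot only while it still belongs to the
// current workspace; otherwise fall back to the live one so a switch
// can never flash stale rows from the previous chat.
function pickRenderSnapshot<M>(
  live: TranscriptSnapshot<M>,
  deferred: TranscriptSnapshot<M>
): TranscriptSnapshot<M> {
  return deferred.workspaceId === live.workspaceId ? deferred : live;
}
```

Deferring only the raw message array loses exactly this information: a stale array from workspace A is indistinguishable from a fresh one for workspace B, which is why the snapshot has to be workspace-scoped.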
@codex review Follow-up fix for another likely workspace-switch artifact:
Codex Review: Didn't find any major issues. Already looking forward to the next diff.
Add a browser-mode repro script that boots an isolated dev-server, creates two live mock-chat workspaces, switches between them, and captures screenshot/scroll diagnostics. The script exits non-zero when the target transcript keeps shifting after it is already visible, matching the user-reported same-transcript tear.

_Generated with `mux` • Model: `openai:gpt-5.4` • Thinking: `xhigh` • Cost: `$121.96`_
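The pass/fail core of such a script can be isolated from the browser automation. A hedged sketch — the sampling cadence and tolerance are assumptions, not values from the actual repro script: given scroll positions sampled after the transcript is visible, the run fails if consecutive samples keep moving.

```typescript
// Returns true when the transcript viewport kept shifting after it was
// already visible, i.e. the tear the repro script exits non-zero on.
// tolerancePx absorbs sub-pixel rounding noise (assumed value).
function transcriptKeepsShifting(scrollSamples: number[], tolerancePx = 1): boolean {
  for (let i = 1; i < scrollSamples.length; i++) {
    if (Math.abs(scrollSamples[i] - scrollSamples[i - 1]) > tolerancePx) {
      return true;
    }
  }
  return false;
}
```

The surrounding script then only has to sample `scrollTop` at a fixed interval after visibility and feed the series into this check.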
Summary
A newly created chat, or a just-switched existing chat in web mode, could briefly flash generic empty/loading placeholders during the handoff into transcript hydration. Even after those placeholder flashes were suppressed, switching between existing chats could still visibly tear the transcript or briefly show a stale deferred frame from the previous workspace. This change keeps brand-new chats in their explicit starting state, keeps cached workspace content visible during ordinary workspace switches, keeps the transcript viewport mounted during switches, and makes deferred transcript rendering workspace-aware so stale deferred rows cannot survive a chat handoff.
Background
Workspace creation navigates into the new workspace as soon as the workspace exists, but the first `sendMessage` and chat subscription replay can land a moment later. During that gap, the workspace shell treated the chat as empty or hydrating and could show "Catching up with the agent..." or "No Messages Yet" on brand-new chats.

Separately, switching back to an existing workspace in web mode marked the transcript as hydrating before replay completed. Even when that workspace already had cached transcript rows, `WorkspaceShell` still replaced the pane with the generic "Catching up with the agent..." splash, which produced a visible transcript flash on every switch.

After suppressing that splash for cached transcript content, there were still two more switch-time artifacts:

- `WorkspaceShell` keyed `ChatPane` by `workspaceId`, which forced the entire transcript viewport to unmount/remount on every chat switch and showed up as a visible vertical tear before scroll/layout stabilization finished.
- `ChatPane` still deferred only the raw message array, so a workspace switch could briefly render a stale deferred transcript snapshot from the previous workspace before React committed the new live rows.

Implementation
- Move the optimistic pending-start state into `StreamingMessageAggregator`, alongside the existing pending-stream model/start-time state
- Arm the flag via `useCreationWorkspace` only for auto-navigated creations so background-created workspaces do not later open in a stale starting state
- Have `WorkspaceShell` and `ChatPane` suppress generic loading and empty placeholders while `isStreamStarting` is true
- Show the `WorkspaceShell` hydration splash only when there is no cached transcript content or queued draft to keep visible, so switching between existing chats preserves the current pane while replay catches up
- Keep `ChatPane` mounted across workspace switches so the transcript viewport itself does not tear; instead of relying on a root remount to clear local UI state, reset the relevant per-workspace state inside `ChatPane` and re-arm transcript auto-scroll ownership on workspace changes
- Defer a workspace-scoped transcript snapshot in `ChatPane`, and immediately bypass the deferred snapshot when it still belongs to the previous workspace so switch-time stale rows cannot flash into view
- Add regression coverage: the `WorkspaceShell` hydration regression, the `WorkspaceShell` regression that asserts the chat pane DOM node survives workspace switches, and a `messageUtils` regression for cross-workspace deferred snapshot bypass

Validation
- `bun test ./src/browser/utils/messages/StreamingMessageAggregator.test.ts`
- `bun test ./src/browser/stores/WorkspaceStore.test.ts`
- `bun test ./src/browser/features/ChatInput/useCreationWorkspace.test.tsx`
- `bun test ./src/browser/utils/messages/messageUtils.test.ts`
- `bun test ./src/browser/components/WorkspaceShell/WorkspaceShell.test.tsx`
- `bun test ./tests/ui/chat/bottomLayoutShift.test.ts`
- `bun test ./tests/ui/chat/newChatStreamingFlash.test.ts`
- `make static-check`
This changes both the startup/hydration state machine and the workspace-switch rendering path. The main regression risk is state leakage or over-eager immediate rendering during workspace switches, but the follow-up changes explicitly reset the local UI state that previously relied on remounting and now bypass stale deferred snapshots only when they still belong to another workspace. The validation set covers startup placeholders, transcript pinning, cached-hydration behavior, switch-time pane reuse, and the new cross-workspace deferred-snapshot case.
Pains
The original fix accumulated special-case store state as review comments exposed more replay edges. The rewrite folded that behavior back into the aggregator's existing pending-stream machinery, and the follow-up switch fixes exposed three separate UI paths: the generic hydration splash needed to respect cached transcript content, the keyed `ChatPane` remount needed to stop tearing the transcript viewport during workspace switches, and the deferred transcript snapshot needed to become workspace-aware so stale rows could not flash after the switch.

_Generated with `mux` • Model: `openai:gpt-5.4` • Thinking: `xhigh` • Cost: `$117.90`_