
[pull] canary from vercel:canary#880

Merged
pull[bot] merged 4 commits into code:canary from vercel:canary
Mar 14, 2026

Conversation


@pull pull bot commented Mar 14, 2026

See Commits and Changes for more details.


Created by pull[bot] (v2.0.0-alpha.4)


mischnic and others added 4 commits March 14, 2026 11:17
#91347)

The JS analyze microbenchmark got 23% faster:

Only a tiny fraction of imports have annotations, be it `import ... with {...}` or `require(/*webpackIgnore: true*/...)`, so an `Option<Pointer<Struct>>` makes sense here.
```
canary:
references/jsonwebtoken.js/full
                        time:   [63.250 ms 63.918 ms 64.752 ms]
                        change: [-5.2380% -3.0074% -0.8753%] (p = 0.01 < 0.05)
                        Change within noise threshold.
Found 12 outliers among 100 measurements (12.00%)
  5 (5.00%) high mild
  7 (7.00%) high severe


now:
references/jsonwebtoken.js/full
                        time:   [49.070 ms 49.166 ms 49.273 ms]
                        change: [-24.097% -23.079% -22.237%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 6 outliers among 100 measurements (6.00%)
  3 (3.00%) high mild
  3 (3.00%) high severe
```

This came up in #91278 (comment), which added another (tiny) field to `ImportAttributes` and thereby caused a 6% regression in the microbenchmark.
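The size argument above can be seen in isolation with `std::mem::size_of`. This is a hedged sketch with made-up struct shapes (`Annotations`, `ImportInline`, `ImportBoxed` are illustrative, not the actual turbopack types): boxing a rarely-present annotation payload behind an `Option` keeps the common import representation small, since `Option<Box<T>>` uses the null-pointer niche and costs only one pointer.

```rust
use std::mem::size_of;

// Hypothetical annotation payload: large, but present on very few imports.
struct Annotations {
    webpack_ignore: bool,
    with_attrs: [u64; 8],
}

// Inline storage: every import pays for the full payload.
struct ImportInline {
    source: u32,
    annotations: Annotations,
}

// Boxed optional storage: the common case costs one (nullable) pointer.
struct ImportBoxed {
    source: u32,
    annotations: Option<Box<Annotations>>,
}

fn main() {
    // The boxed variant is much smaller for the annotation-free common case.
    assert!(size_of::<ImportBoxed>() < size_of::<ImportInline>());
    println!(
        "inline: {} bytes, boxed: {} bytes",
        size_of::<ImportInline>(),
        size_of::<ImportBoxed>()
    );
}
```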
1. Add the CJS version of `import ... with { "turbopack-chunking-type": "parallel" }`
2. Allow `shared` as a chunking type
3. Improve error message for invalid values:

<img width="718" height="455" alt="Screenshot of the improved error message for an invalid chunking-type value" src="https://github.com/user-attachments/assets/bc1efb43-4cc1-4885-b253-49c1e58b3add" />

Replaces the segment-path-matching scroll system with a simpler model
based on a shared mutable ScrollRef on CacheNode.

The old system accumulated segment paths during navigation and matched
them in layout-router to decide which segments should scroll. This was
necessary when CacheNodes were created lazily during render. Now that we
construct the entire CacheNode tree immediately upon navigation, we can
assign a shared ScrollRef directly to each new leaf node. When any
segment scrolls, it flips the ref to false, preventing other segments
from also scrolling. This removes all the segment path accumulation and
matching logic.
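The flip-once behavior described above can be sketched as a shared mutable flag. This is a language-agnostic illustration in Rust (the actual `ScrollRef` lives in Next.js's TypeScript router code; `take()` is a hypothetical name): all leaf nodes created by one navigation share the same ref, and the first segment to consume it wins.

```rust
use std::cell::Cell;
use std::rc::Rc;

/// Shared mutable scroll flag: clones point at the same underlying Cell.
#[derive(Clone)]
struct ScrollRef(Rc<Cell<bool>>);

impl ScrollRef {
    fn new() -> Self {
        ScrollRef(Rc::new(Cell::new(true)))
    }

    /// Returns true exactly once across all clones sharing this ref;
    /// the first caller flips the flag so sibling segments do nothing.
    fn take(&self) -> bool {
        let should_scroll = self.0.get();
        self.0.set(false);
        should_scroll
    }
}

fn main() {
    // One navigation assigns the same ref to every new leaf node.
    let shared = ScrollRef::new();
    let leaves = vec![shared.clone(), shared.clone(), shared.clone()];

    let scrolled: Vec<bool> = leaves.iter().map(|leaf| leaf.take()).collect();
    // Only the first segment that renders actually scrolls.
    assert_eq!(scrolled, vec![true, false, false]);
}
```

A refresh that creates no new CacheNodes simply never constructs such a ref, which is why nothing scrolls in that case.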

Fixes a regression where calling `refresh()` from a server action
scrolled the page to the top. The old system had a semantic gap between
`null` (no segments) and `[]` (scroll everything) — a server action
refresh with no new segments fell through to a path that scrolled
unconditionally. The new model avoids this: refresh creates no new
CacheNodes, so no ScrollRef is assigned, and nothing scrolls.

Repro: https://github.com/stipsan/nextjs-refresh-regression-repro

There is extensive existing test coverage for scroll restoration
behavior. This adds one additional test for the server action refresh
bug.
### What?

Switch `AsyncModulesInfo` to use `cell = "keyed"` and per-key access
(`is_async()`) instead of reading the full `FxHashSet` via `.await?`.

### Why?

`AsyncModulesInfo` is a large set that gets read from many call sites.
With the default cell, any change to any module's async status
invalidates all readers. The `cell = "keyed"` annotation enables
turbo-tasks to track reads at the key level, so a change to one module's
async status only invalidates tasks that queried that specific module.

This reduces unnecessary recomputation during incremental rebuilds where
only a small subset of modules change their async status.

### How?

**Core change** (`async_module_info.rs`):
- Added `cell = "keyed"` to the `#[turbo_tasks::value]` annotation on
`AsyncModulesInfo`
- Added `is_async()` method that uses `contains_key()` for keyed reads
instead of full-set reads
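The shape of the per-key accessor can be sketched in plain Rust, with the turbo-tasks machinery (`#[turbo_tasks::value]`, `Vc`, keyed invalidation tracking) omitted; the struct body and the `is_async` signature here are illustrative, not the exact ones in `async_module_info.rs`. The point is that callers ask a question about one key instead of receiving the whole set, which is what lets the runtime track dependencies per key.

```rust
use std::collections::HashSet;

// Simplified stand-in for the turbo-tasks value type.
struct AsyncModulesInfo {
    modules: HashSet<String>,
}

impl AsyncModulesInfo {
    /// Keyed read: the caller's dependency is (self, ident), not the full set,
    /// so changing another module's async status need not invalidate it.
    fn is_async(&self, ident: &str) -> bool {
        self.modules.contains(ident)
    }
}

fn main() {
    let info = AsyncModulesInfo {
        modules: ["a.js".to_string()].into_iter().collect(),
    };
    assert!(info.is_async("a.js"));
    assert!(!info.is_async("b.js"));
}
```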

**Call site migration** (5 files):
- All call sites changed from `.async_module_info().await?` (reads full
set) → `.async_module_info()` (returns `Vc`, defers reads)
- `attach_async_info_to_chunkable_module` and
`from_chunkable_module_or_batch` now take `Vc<AsyncModulesInfo>` instead
of `&ReadRef<AsyncModulesInfo>`
- Each call site uses `async_module_info.is_async(module).await?` for
per-key reads

**`referenced_async_modules`** (`mod.rs`):
- Collects neighbor candidates first, then filters via `is_async()` with
`try_flat_join` for concurrent keyed reads
- Simplified: uses in-place `.reverse()` instead of `.rev().collect()`
double allocation
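The allocation cleanup mentioned in the last bullet is easy to see in isolation. A minimal sketch (generic `Vec`, not the actual neighbor list): `.rev().collect()` materializes a second buffer just to flip the order, while in-place `.reverse()` reuses the one that already exists.

```rust
fn main() {
    let mut modules = vec!["a", "b", "c"];

    // Before: allocates a brand-new Vec holding the reversed order.
    let reversed: Vec<&str> = modules.iter().rev().copied().collect();
    assert_eq!(reversed, vec!["c", "b", "a"]);

    // After: reverses the existing buffer in place, no extra allocation.
    modules.reverse();
    assert_eq!(modules, vec!["c", "b", "a"]);
}
```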

**`compute_merged_modules`** (`merged_modules.rs`):
- Pre-fetches async status for all mergeable modules into a local
`FxHashSet` before the synchronous fixed-point traversal (which cannot
do async reads)
- Added tracing span `"pre-fetch async module status"` for observability
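The pre-fetch pattern above can be sketched with the async machinery replaced by a plain predicate (function name and signatures here are hypothetical, not the real `compute_merged_modules`): resolve every async-status query up front into a local `HashSet`, so the synchronous fixed-point traversal only does cheap set lookups and never needs to await.

```rust
use std::collections::HashSet;

/// Pre-fetch async status for all candidate modules, then run a
/// synchronous pass that can only do plain lookups.
fn compute_merged(modules: &[&str], is_async: impl Fn(&str) -> bool) -> Vec<String> {
    // "pre-fetch async module status": resolve all queries before the
    // synchronous traversal begins.
    let async_set: HashSet<&str> = modules
        .iter()
        .copied()
        .filter(|module| is_async(module))
        .collect();

    // Synchronous fixed-point traversal stand-in: only set lookups here,
    // no async reads possible or needed.
    modules
        .iter()
        .filter(|module| async_set.contains(**module))
        .map(|module| module.to_string())
        .collect()
}

fn main() {
    let merged = compute_merged(&["a", "b"], |module| module == "a");
    assert_eq!(merged, vec!["a".to_string()]);
}
```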

**Cleanup:**
- Consistent variable naming (`async_module_info`) across all call sites
- Doc comment on `attach_async_info_to_chunkable_module` explaining the
keyed access pattern

---------

Co-authored-by: Tobias Koppers <sokra@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
@pull pull bot locked and limited conversation to collaborators Mar 14, 2026
@pull pull bot added the ⤵️ pull label Mar 14, 2026
@pull pull bot merged commit 4cd8a98 into code:canary Mar 14, 2026