[pull] canary from vercel:canary#877

Merged
pull[bot] merged 16 commits into code:canary from vercel:canary on Mar 13, 2026
Conversation


@pull pull bot commented Mar 13, 2026

See Commits and Changes for more details.


Created by pull[bot] (v2.0.0-alpha.4)


benfavre and others added 16 commits March 13, 2026 15:01
## Summary

Replace `btoa(String.fromCodePoint(...chunk))` with
`Buffer.from().toString('base64')` for encoding binary Flight data
chunks in `writeFlightDataInstruction`.

The spread operator `...chunk` converts the entire Uint8Array into
individual arguments on the call stack. For a 64KB binary chunk, this
creates 65,536 arguments — causing:
- Significant call stack pressure (V8's argument limit is ~65K)
- Temporary JS string allocation from `String.fromCodePoint`
- The entire chunk must be converted to a JS string before base64
encoding

`Buffer.from().toString('base64')` performs base64 encoding natively in
C++ without any intermediate string allocation or argument spreading.

**Edge runtime compatibility**: Falls back to the original `btoa` path
where `Buffer` is unavailable.
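
As a hedged sketch of the strategy (the helper name `encodeChunk` is illustrative, not the actual code in `writeFlightDataInstruction`):

```typescript
// Illustrative sketch of the encoding strategy described above; the real
// implementation in writeFlightDataInstruction may differ in detail.
function encodeChunk(chunk: Uint8Array): string {
  if (typeof Buffer !== 'undefined') {
    // Node.js: Buffer.from copies the bytes and base64-encodes natively
    // in C++, with no intermediate JS string and no argument spreading.
    return Buffer.from(chunk).toString('base64')
  }
  // Edge runtime fallback: the original btoa path. The spread still pushes
  // every byte onto the call stack, so it is only safe for chunks well
  // under V8's ~65K argument limit.
  return btoa(String.fromCodePoint(...chunk))
}
```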

## Test plan

- [x] TypeScript compilation passes
- [x] Prettier and ESLint pass (pre-commit hooks)
- [x] Produces identical base64 output
- [x] Edge runtime falls back to original path (no `Buffer` available)
- [x] Node.js runtime uses `Buffer.from` for native encoding


🤖 Generated with [Claude Code](https://claude.com/claude-code)
When the Node.js inspector is active (e.g. via `next dev --inspect`),
dimming wraps console arguments in a format string which defeats
inspector affordances such as collapsible objects and
clickable/linkified stack traces.

This adds an early return in `convertToDimmedArgs` that skips dimming
entirely when `inspector.url()` is defined. Terminal output without
`--inspect` is unchanged.

Ideally we would only skip dimming when a debugger frontend is actually
attached, but Node.js does not expose a synchronous API for that —
detecting it would require async polling of the `/json/list` HTTP
endpoint.
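
The early return can be sketched like this (a simplified stand-in for the real `convertToDimmedArgs`; the terminal dimming format shown is illustrative):

```typescript
import { url as inspectorUrl } from 'node:inspector'

// Simplified stand-in for convertToDimmedArgs, showing the early return
// described above; the real dimming format may differ.
function convertToDimmedArgs(args: unknown[]): unknown[] {
  // Inspector active (e.g. `next dev --inspect`): pass arguments through
  // untouched so DevTools keeps collapsible objects and linkified traces.
  if (inspectorUrl() !== undefined) {
    return args
  }
  // Terminal path: collapse everything into one dimmed format string.
  return ['\x1b[2m%s\x1b[22m', args.map(String).join(' ')]
}
```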

Test plan:
- Unit tests in `console-dim.external.test.ts`
- Manual: dimming almost never triggers now since we switched to in-band
  validation (no separate render pass). To reproduce manually, use a
  client component that calls `console.error(new Error(...))` during SSR,
  e.g.:
  1. `__NEXT_CACHE_COMPONENTS=true pnpm next dev test/e2e/app-dir/server-source-maps/fixtures/default --inspect`
  2. Open `http://localhost:3000/` in Chrome
  3. Click the Node.js DevTools button
  4. Load `http://localhost:3000/ssr-error-log`
  5. Verify the first error in DevTools has source-mapped, expandable stack frames

Before:

<img width="1496" height="848" alt="before"
src="https://github.com/user-attachments/assets/8b4358e1-06ec-4078-b6cc-269986e295e7"
/>

After:

<img width="1496" height="842" alt="after"
src="https://github.com/user-attachments/assets/42f33c6f-f025-410f-b44c-2ccd88fe0cfe"
/>
When the instant navigation testing cookie is set, the debug static
shell path previously called `getFallbackRouteParams()` which treats all
dynamic segments as fallback params regardless of
`generateStaticParams`. This caused two issues:

- Root params (via `next/root-params`) errored with a
`NEXT_STATIC_GEN_BAILOUT` 500 response because the root layout's param
access returned a hanging promise, which was treated as uncached data
accessed outside of `<Suspense>`.
- Params defined in `generateStaticParams` were incorrectly excluded
from the instant shell, shown behind Suspense fallbacks instead of
resolving.

The fix uses `prerenderInfo?.fallbackRouteParams` from the prerender
manifest instead, which correctly distinguishes between statically known
params (defined in `generateStaticParams`) and unknown params. For
prerendered URLs, the outer guard (`!isPrerendered`) prevents the
fallback path from being entered, so no fallback params are set and all
params resolve normally.

To make this work in dev mode, the dev server now populates
`fallbackRouteParams` in the prerender manifest's dynamic route entries,
which were previously left as `undefined`.

Additionally, the `fallbackParams` request metadata is now overridden
when rendering a debug static shell. Without this override, the staged
rendering would use the smallest set of fallback params across all
prerendered routes (set by `base-server.ts` for dev validation), which
may omit fallback params for the current URL when a different value for
the same param is defined in `generateStaticParams`. The override
ensures the route-specific fallback params are used instead.
Part 1 of 2. This commit adds the server-side infrastructure for
size-based segment bundling but does not change any observable behavior.
The client-side changes that actually consume bundled responses are in
the next commit.

At build time, a measurement pass renders each segment's prefetch
response, measures its gzip size, and decides which segments should be
bundled together vs fetched separately. The decisions are persisted to a
manifest and embedded into the route tree prefetch response so the
client can act on them.

The decisions are computed once at build and remain fixed for the
lifetime of the deployment. They are not recomputed during
ISR/revalidation — if they could change, the client would need to
re-fetch the route tree after every revalidation, defeating the purpose
of caching it independently.

Refer to the next commit for a full description of the design and
motivation.

## Config

`experimental.prefetchInlining` accepts either a boolean or an object with
threshold overrides (`maxSize`, `maxBundleSize`). When `true`, the default
thresholds are used (2KB per-segment, 10KB total budget). The auto
behavior will eventually become the default. The config will remain
available for overriding thresholds.
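
Sketched as a `next.config.ts` fragment (option names from the description above; the exact units and types are assumptions):

```typescript
// next.config.ts sketch of the experimental option described above.
const nextConfig = {
  experimental: {
    // `true` would use the defaults (2KB per-segment, 10KB total budget);
    // an object overrides the thresholds explicitly:
    prefetchInlining: {
      maxSize: 2 * 1024, // per-segment size threshold (bytes, assumed)
      maxBundleSize: 10 * 1024, // total bundle budget (bytes, assumed)
    },
  },
}

export default nextConfig
```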
### What?

This PR extends `@next/routing` so `resolveRoutes()` returns the
concrete route resolution data callers need after rewrites and dynamic
matches:

- rename `matchedPathname` to `resolvedPathname`
- add `resolvedQuery`
- add `invocationTarget` for the concrete pathname/query that should be
invoked
- export the new query/invocation target types from the package
entrypoint

It also removes the leftover `query` alias so the result shape
consistently uses `resolvedQuery`.

### Why?

`matchedPathname` only described part of the result, and it was
ambiguous for dynamic routes because the resolved route template and the
concrete invocation target are not always the same thing.

For example, a dynamic route can resolve to `/blog/[slug]` while the
actual invocation target is `/blog/post-1`, and rewrites can merge query
params that callers need to preserve. Exposing these values directly
makes the package easier to consume from adapters without each caller
reconstructing them manually.
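
The distinction can be sketched with illustrative types (field names from this PR; the type shapes are assumptions, not the package's actual declarations):

```typescript
// Illustrative shapes only; see @next/routing for the real types.
type ResolvedQuery = Record<string, string | string[]>

interface RouteResolutionSketch {
  resolvedPathname: string // matched route template, e.g. '/blog/[slug]'
  resolvedQuery: ResolvedQuery // query after rewrite params are merged in
  invocationTarget: { pathname: string; query: ResolvedQuery } // concrete target
}

// A dynamic match where the template and the invocation target differ:
const example: RouteResolutionSketch = {
  resolvedPathname: '/blog/[slug]',
  resolvedQuery: { slug: 'post-1', ref: 'rewrite-added' },
  invocationTarget: {
    pathname: '/blog/post-1',
    query: { slug: 'post-1', ref: 'rewrite-added' },
  },
}
```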

### How?

- thread resolved query construction through the route resolution paths
- build `invocationTarget` alongside `resolvedPathname` wherever
rewrites, static matches, and dynamic matches resolve successfully
- preserve merged rewrite query params in `resolvedQuery`
- update the public types, README example, and existing tests to use
`resolvedPathname`
- add coverage for resolved query + invocation target behavior on
rewrite and dynamic route matches

Verified with:
- `pnpm --filter @next/routing test -- --runInBand`
- `pnpm --filter @next/routing build`
### What?

During compaction in `turbo-persistence`, when entries are dropped
(superseded by newer values or pruned by tombstones), blob files
referenced by those entries are now marked for deletion.

### Why?

Previously, compaction would merge SST files and correctly drop stale
entries, but blob files referenced by those dropped entries were leaked
on disk (marked with a TODO at the time). Over time this would cause
unbounded disk usage growth for databases that overwrite or delete
blob-sized values.

### How?

When the compaction merge loop skips an entry (because
`skip_remaining_for_this_key` is `true`), it now checks if the dropped
entry is a `LookupValue::Blob` and, if so, pushes its sequence number to
`blob_seq_numbers_to_delete`. The existing `commit()` infrastructure
already handles the rest — writing `.del` files and removing the actual
`.blob` files after the CURRENT pointer is updated.

The change is minimal (4 lines of logic in `db.rs`):
- Made `blob_seq_numbers_to_delete` mutable
- Added an `else` branch to collect blob sequence numbers from dropped
entries

This covers both cases:
- **SingleValue**: After the first (newest) entry for a key is written,
all older entries are skipped. Blob references in those older entries
are marked for deletion.
- **MultiValue**: After a tombstone is encountered, all older entries
for that key are skipped. Blob references in those older entries are
marked for deletion.

### Tests

Added 4 new tests:
- `compaction_deletes_superseded_blob` — blob overwritten by smaller
value → blob deleted after compaction
- `compaction_deletes_blob_on_tombstone` — blob deleted via tombstone →
blob deleted after compaction
- `compaction_deletes_blob_multi_value_tombstone` — MultiValue:
tombstone prunes blob → blob deleted
- `compaction_preserves_active_blob` — blob still referenced → blob
preserved after compaction

All existing compaction tests (23) and full turbo-persistence test suite
(60) continue to pass.

Co-authored-by: Tobias Koppers <sokra@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
…1236)

### What?

This PR wires custom `cacheHandler` / `cacheHandlers` through the
remaining edge entrypoints so edge app pages, edge app routes, edge
pages SSR, middleware, and edge API routes can all see the configured
handlers.

It also adds an e2e regression suite covering:
- pages-router ISR revalidation with a custom incremental cache handler
- app-router cache handlers with cache components enabled
- edge app page + edge app route wiring when cache components are
disabled

### Why?

We already support custom cache handlers in non-edge paths, but several
edge code paths were not forwarding that configuration into the
generated entrypoints/runtime wrappers. That meant custom cache handlers
could be skipped in edge rendering and edge route execution, and we did
not have regression coverage around those cases.

### How?

- pass `cacheHandler` / `cacheHandlers` through the JS build entry
plumbing for edge server entries
- inject cache handler imports/registration into the webpack edge
templates and loaders
- mirror the same wiring in the Turbopack/Rust entry generation for edge
app pages, app routes, pages SSR, and middleware
- update the edge route wrapper to initialize and register cache
handlers before invoking the route module
- extend `next-taskless` template expansion with raw injection support
so the generated edge templates can add imports and registration code
- add `test/e2e/cache-handlers-upstream-wiring` fixtures to cover
pages/app, edge/non-edge, and revalidation behavior
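
For context, a custom incremental cache handler is a module along these lines (method names follow the public `cacheHandler` docs; the entry shape and in-memory store here are simplified assumptions):

```typescript
// cache-handler.ts: minimal sketch of a custom incremental cache handler.
// The entry shape is simplified; real handlers persist to shared storage.
const store = new Map<string, { value: unknown; lastModified: number }>()

export default class CacheHandler {
  async get(key: string) {
    return store.get(key) ?? null
  }
  async set(key: string, value: unknown) {
    store.set(key, { value, lastModified: Date.now() })
  }
  async revalidateTag(_tag: string) {
    // Invalidate entries associated with the tag; omitted in this sketch.
  }
}
```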
…on Vc (#91223)

This is basically one-shot from Opus 4.6, with a minor tweak to the doc comment by hand.

Discussed this with @lukesandberg, and we realized that there's only one implementation of this trait, so it doesn't make a ton of sense as a trait.
Flag has been re-enabled and is attached to the test team.
### What?

- Add canonical edge entrypoint metadata to edge function definitions in
the middleware manifest.
- Surface that canonical entrypoint through `build-complete` as
`edgeRuntime` metadata in adapter outputs.
- Update middleware and adapter tests to assert the new metadata instead
of relying on bundler-specific emitted file names.

### Why?

Adapter consumers need a stable way to load and invoke edge outputs
without depending on `next-server`'s sandbox handling or guessing which
emitted file is the executable entrypoint. The existing checks were also
brittle across webpack and Turbopack because the emitted file lists can
differ.

### How?

- Add an optional `entrypoint` field to `EdgeFunctionDefinition` and
populate it in both the webpack middleware manifest generator and the
Turbopack manifest emitters for app, pages, and middleware edge
functions.
- In `packages/next/src/build/adapter/build-complete.ts`, use that
canonical entrypoint for `filePath` and expose `edgeRuntime.modulePath`,
`edgeRuntime.entryKey`, and `edgeRuntime.handlerExport` for edge
outputs.
- Adjust the middleware manifest tests to validate cross-bundler
invariants and add adapter-config coverage for the new `edgeRuntime`
fields.
…ngUsageInfo (#91306)

## What?

Apply the `cell = "keyed"` pattern to `used_exports` (`FxHashMap`) and
`export_circuit_breakers` (`FxHashSet`) in `BindingUsageInfo`, matching
the existing pattern already used for `unused_references`.

## Why?

Previously, `used_exports` and `export_circuit_breakers` were stored as
plain inline collections within `BindingUsageInfo`. Any change to any
module's export usage would invalidate **all** callers of
`used_exports()`, even those querying unrelated modules. This causes
unnecessary recomputation during incremental rebuilds.

With keyed cells, lookups like `self.used_exports.get(&module)` and
`self.export_circuit_breakers.contains_key(&module)` only invalidate
callers that queried the specific module whose export usage changed,
providing per-module invalidation granularity.

## How?

1. **New keyed transparent value types** — `UsedExportsMap` and
`ExportCircuitBreakers` wrappers with `#[turbo_tasks::value(transparent,
cell = "keyed")]`.

2. **Fields changed to `ResolvedVc`** — `used_exports` and
`export_circuit_breakers` fields are now `ResolvedVc<UsedExportsMap>`
and `ResolvedVc<ExportCircuitBreakers>` instead of inline
`FxHashMap`/`FxHashSet`.

3. **`used_exports()` becomes a `#[turbo_tasks::function]`** — Moved
into a `#[turbo_tasks::value_impl]` block so it's a tracked task
function. Callers (`BrowserChunkingContext`, `NodeJsChunkingContext`) no
longer need `.await?` — they call it directly and get a
`Vc<ModuleExportUsage>`.

4. **Per-key lookups** — `contains_key(&module).await?` and
`get(&module).await?` leverage the keyed cell pattern for fine-grained
invalidation.

### Files changed

-
`turbopack/crates/turbopack-core/src/module_graph/binding_usage_info.rs`
— Core changes: new keyed types, field type changes, `used_exports()` as
tracked function
- `turbopack/crates/turbopack-browser/src/chunking_context.rs` —
Simplified call site (no `.await?`)
- `turbopack/crates/turbopack-nodejs/src/chunking_context.rs` —
Simplified call site (no `.await?`)

---------

Co-authored-by: Tobias Koppers <sokra@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
…rs (#91229)

### What?

Refactors the `ModuleReference` trait to make `chunking_type()` and `binding_usage()` methods return direct values instead of `Vc<T>` wrapped values, removing the need for async task functions.

Also removes the `get_referenced_asset` task from `EsmAssetReference`, inlining its logic into the callers.

### Why?

This change simplifies the API by eliminating unnecessary async overhead for methods that typically return simple, computed values. The previous implementation required `#[turbo_tasks::function]` annotations and `Vc<T>` wrappers even when the methods didn't need to perform async operations or benefit from caching.

### Impact

| Metric | Base | Change | Delta |
|--------|------|--------|-------|
| Hits | 35,678,143 | 35,845,124 | **+166,981** |
| Misses | 9,418,378 | 7,910,986 | **-1,507,392** |
| Total | 45,096,521 | 43,756,110 | **-1,340,411** |
| Task types | 1,306 | 1,277 | **-29** |

29 task types were removed, eliminating **2.6M total task invocations** (1.1M hits + 1.5M misses):

- **`chunking_type`** — 21 task types removed across all `ModuleReference` implementors (~952k invocations)
- **`binding_usage`** — 6 task types removed (~527k invocations)
- **`BindingUsage::all`** — helper task removed (~36k invocations)
- **`EsmAssetReference::get_referenced_asset`** — removed and inlined (~1.08M invocations: 628k hits + 451k misses)

The removed `get_referenced_asset` hits reappear as +628k hits on `EsmAssetReference::resolve_reference` and `ReferencedAsset::from_resolve_result` (with zero increase in misses), confirming the work is now served from cache through the existing callers.

No tasks had increased misses — the removal is clean with no cache invalidation spillover.

I also ran some builds to measure latency

```
# This branch
$ hyperfine -p 'rm -rf .next' -w 2 -r 10  'pnpm next build --turbopack --experimental-build-mode=compile'
Benchmark 1: pnpm next build --turbopack --experimental-build-mode=compile
  Time (mean ± σ):     52.752 s ±  0.658 s    [User: 376.575 s, System: 106.375 s]
  Range (min … max):   51.913 s … 54.161 s    10 runs

# on canary
$ hyperfine -p 'rm -rf .next' -w 2 -r 10  'pnpm next build --turbopack --experimental-build-mode=compile'
Benchmark 1: pnpm next build --turbopack --experimental-build-mode=compile
  Time (mean ± σ):     54.675 s ±  1.394 s    [User: 389.273 s, System: 114.642 s]
  Range (min … max):   53.434 s … 58.189 s    10 runs
```

so a solid win of almost 2 seconds (~3.5%) of wall-clock build time

MaxRSS also went from 16,474,324,992 bytes to 16,359,309,312 bytes (from one measurement), a savings of roughly 110 MB of peak memory.


### How?

- Changed `chunking_type()` method signature from `Vc<ChunkingTypeOption>` to `Option<ChunkingType>`
- Changed `binding_usage()` method signature from `Vc<BindingUsage>` to `BindingUsage`
- Removed `ChunkingTypeOption` type alias as it's no longer needed
- Updated all implementations across the codebase to return direct values instead of wrapped ones
- Removed `#[turbo_tasks::function]` annotations from these methods
- Updated call sites to use `into_trait_ref().await?` pattern when accessing these methods from `Vc<dyn ModuleReference>`
- Removed `EsmAssetReference::get_referenced_asset`, inlining its logic into callers
- Added validation for `turbopack-chunking-type` annotation values in import analysis
- Fixed cache effectiveness analysis script
@pull pull bot merged commit 236a76d into code:canary Mar 13, 2026