
[pull] canary from vercel:canary #884

Merged
pull[bot] merged 6 commits into code:canary from vercel:canary
Mar 16, 2026

Conversation


@pull pull bot commented Mar 16, 2026

See Commits and Changes for more details.


Created by pull[bot] (v2.0.0-alpha.4)

Can you help keep this open source service alive? 💖 Please sponsor : )

mischnic and others added 6 commits March 16, 2026 09:49
1. Don't build twice for edge, patch source before building
2. Use the names for snapshots, as opposed to just a count that is impossible to debug.
…#91258)

## What?

Moves TurboPersistence database compaction out of the write batch commit
path and into the backend's idle loop, where it can run repeatedly (up
to 10 times) while the system is idle.

## Why?

Previously, compaction was kicked off as a background task after each
write batch commit, with the next write batch blocking until the
previous compaction finished. This had several drawbacks:

- Compaction was tightly coupled to the write path, adding latency to
write batches that had to wait for the previous compaction to complete
before starting.
- Only a single compaction pass ran per write batch, which may not be
sufficient to fully compact the database (e.g., after many small writes
create numerous segments).
- The compaction used `spawn` and a `JoinHandle` stored in a `Mutex`,
adding complexity to the write batch and shutdown logic.

By moving compaction to the idle loop, we:
- Decouple compaction from writes, so write batches are never blocked by
ongoing compaction.
- Allow multiple compaction passes (up to 10) while idle, enabling the
database to converge to a more compact state.
- Simplify the write batch code by removing the `compact_join_handle`
machinery (`Mutex`, `JoinHandle`, `spawn`).
- Respect idle-end signals — compaction stops immediately when new work
arrives.

## How?

1. **Added `compact()` method** to the `KeyValueDatabase` trait and
`BackingStorageSealed` trait, with a default no-op implementation
returning `Ok(false)`. Returns `Ok(true)` when compaction actually
merged files, `Ok(false)` when there was nothing to compact.

2. **Implemented `compact()` for `TurboKeyValueDatabase`** — calls the
existing `do_compact` helper synchronously, skipping compaction for
short sessions or empty databases. Changed `do_compact` return type from
`Result<()>` to `Result<bool>` to signal whether work was done.

3. **Wired `compact()` through `KvBackingStorage`** and the
`Either`-based `BackingStorageSealed` impl.

4. **Added compaction loop in the backend idle handler**
(`backend/mod.rs`) — after a successful snapshot commit, runs up to 10
compaction passes, checking for idle-end between each pass and stopping
early if no more compaction is needed or idle ends.

5. **Removed from `TurboWriteBatch`**: the `compact_join_handle` field,
the `Mutex<Option<JoinHandle>>`, the background `spawn` call in
`commit()`, and the blocking join in `write_batch()` and `shutdown()`.
The `shutdown()` path retains its own compaction call for final
compaction on exit.
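The trait method and idle loop from steps 1–4 can be sketched as follows. This is a simplified, illustrative stand-in: the `MockDb` type, the `String` error type, and the `idle_ended` closure are hypothetical, not Turbopack's real API, though the `compact()` signature and the pass limit of 10 follow the PR description.

```rust
// Simplified sketch of idle-loop compaction (names other than
// `KeyValueDatabase::compact` and the limit of 10 are illustrative).

const MAX_COMPACTION_PASSES: usize = 10;

trait KeyValueDatabase {
    /// Returns Ok(true) if a compaction pass merged files,
    /// Ok(false) if there was nothing left to compact.
    fn compact(&mut self) -> Result<bool, String> {
        Ok(false) // default no-op, as added to the trait in this PR
    }
}

/// Mock database that needs `pending` merging passes to be fully compacted.
struct MockDb {
    pending: usize,
    passes_run: usize,
}

impl KeyValueDatabase for MockDb {
    fn compact(&mut self) -> Result<bool, String> {
        self.passes_run += 1;
        if self.pending > 0 {
            self.pending -= 1;
            Ok(true)
        } else {
            Ok(false)
        }
    }
}

/// Idle handler: run up to 10 passes, stopping early when compaction is
/// done or when the idle period ends (new work arrived).
fn run_idle_compaction(
    db: &mut impl KeyValueDatabase,
    mut idle_ended: impl FnMut() -> bool,
) -> Result<usize, String> {
    let mut passes = 0;
    for _ in 0..MAX_COMPACTION_PASSES {
        if idle_ended() {
            break; // respect the idle-end signal immediately
        }
        if !db.compact()? {
            break; // nothing left to merge
        }
        passes += 1;
    }
    Ok(passes)
}

fn main() {
    // Three segments of pending work: three merging passes, then a final
    // pass that reports Ok(false) and ends the loop.
    let mut db = MockDb { pending: 3, passes_run: 0 };
    let passes = run_idle_compaction(&mut db, || false).unwrap();
    println!("merging passes: {passes}");
    assert_eq!(passes, 3);
    assert_eq!(db.passes_run, 4);
}
```

Because the loop is entirely outside the write path, a write batch that arrives mid-compaction only has to flip `idle_ended` rather than join a background task.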

---------

Co-authored-by: Tobias Koppers <sokra@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
Followup to #90978, which fixed the `ChunkGroup`s passed to `.chunk_group()` but forgot to update the module graph entry types, which didn't match up.

The `ChunkGroupEntry::Shared(ResolvedVc::upcast(*server_component))` one was more involved:
- We used to have `layout.js [app-rsc] (Next.js Server Component)` --ChunkingType::Shared--> `layout.js [app-rsc]` (the actual module)
- But this meant that the actual chunk group was `Shared(server_component.await?.module)` (the inner one)
- This wasn't possible to use for the module graph construction though, as the client reference transform needs to see the `layout.js [app-rsc] (Next.js Server Component)` module during the traversal, to infer the parent server component of the given subgraph.

So instead, move to
- `loader-tree in the template`
- --ChunkingType::Shared-->
- `layout.js [app-rsc] (Next.js Server Component)`
- --ChunkingType::Parallel-->
- `layout.js [app-rsc]`

The reason for that `ChunkingType::Shared` is so that we can chunk arbitrary modules that are loaded by `ChunkGroup::Entry`. This means we can do `chunk_group(Shared(root_layout)).concat(chunk_group(Shared(layout))).concat(chunk_group(Entry(page)))` and rely on turbo-tasks caching to deduplicate layout chunking across pages.
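The deduplication-via-caching idea can be sketched with a plain memoization table. Everything here is illustrative: `ChunkGroupEntry`, `chunk_group`, and the `HashMap` cache stand in for Turbopack's real types and its turbo-tasks memoization.

```rust
// Illustrative only: a HashMap models turbo-tasks caching of chunk_group.
use std::collections::HashMap;

#[derive(Clone, PartialEq, Eq, Hash, Debug)]
enum ChunkGroupEntry {
    /// Chunk an arbitrary module loaded from another chunk group.
    Shared(&'static str),
    /// A real entry point (e.g. a page).
    Entry(&'static str),
}

struct Chunker {
    cache: HashMap<ChunkGroupEntry, Vec<String>>,
    chunking_runs: usize,
}

impl Chunker {
    fn chunk_group(&mut self, entry: ChunkGroupEntry) -> Vec<String> {
        if let Some(chunks) = self.cache.get(&entry) {
            return chunks.clone(); // cache hit: shared layouts chunk once
        }
        self.chunking_runs += 1; // stands in for the expensive chunking work
        let chunks = match &entry {
            ChunkGroupEntry::Shared(m) | ChunkGroupEntry::Entry(m) => {
                vec![format!("{m}.chunk.js")]
            }
        };
        self.cache.insert(entry, chunks.clone());
        chunks
    }
}

fn main() {
    let mut c = Chunker { cache: HashMap::new(), chunking_runs: 0 };
    // Two pages both concat the shared root layout with their own entry:
    let page_a: Vec<String> = c
        .chunk_group(ChunkGroupEntry::Shared("root-layout"))
        .into_iter()
        .chain(c.chunk_group(ChunkGroupEntry::Entry("page-a")))
        .collect();
    let page_b: Vec<String> = c
        .chunk_group(ChunkGroupEntry::Shared("root-layout"))
        .into_iter()
        .chain(c.chunk_group(ChunkGroupEntry::Entry("page-b")))
        .collect();
    // root-layout was chunked once and reused, so 3 runs rather than 4.
    assert_eq!(c.chunking_runs, 3);
    assert_eq!(page_a, ["root-layout.chunk.js", "page-a.chunk.js"]);
    assert_eq!(page_b, ["root-layout.chunk.js", "page-b.chunk.js"]);
}
```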
### What?
Fixes the "Ready in" printout for the Next.js dev server after restarts.

### Why?
When the dev server restarts (e.g., due to `next.config` changes), the
"Ready in" time was incorrect, showing the total process uptime instead
of the actual startup duration for the newly restarted server instance.
This was because `NEXT_PRIVATE_START_TIME` was not reset.

### How?
`process.env.NEXT_PRIVATE_START_TIME` is now updated to
`Date.now().toString()` in `packages/next/src/cli/next-dev.ts` right
before `startServer()` is re-called when handling a `RESTART_EXIT_CODE`.
This ensures the "Ready in" time accurately reflects the startup
duration from the point of restart.

The tricky part was writing a test that both catches a regression and isn't flaky on CI. After some back and forth, I think the Vercel bot suggestion is probably the simplest.

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: Joseph Chamochumbi <joseph.chamochumbi@vercel.com>
Co-authored-by: Vercel <vercel[bot]@users.noreply.github.com>
Co-authored-by: icyJoseph <sephxd1234@gmail.com>
## What?

When a webpack/turbopack loader produces broken code, error messages now
display **both** the original source and the generated code with source
map information, making it much easier to debug loader issues.

## Why?

Previously, when loaders returned invalid code, error messages only
showed the original source file (after source-map remapping). Users had
no way to see what the loader actually generated, making it hard to
diagnose why the code failed to parse. Showing both sides gives full
context about what went wrong.

## How?

### Turbopack Core (`turbopack-core`)

- **`Source::description()`** — New method on the `Source` trait
providing human-readable descriptions of where code comes from.
Implemented across all source types (`FileSource`, `VirtualSource`,
`WebpackLoadersProcessedAsset`, `PostCssTransformedAsset`, etc.),
producing chains like `"loaders [sass-loader] transform of file content
of ./styles.scss"`.
- **`AdditionalIssueSource`** — New struct to hold a labeled source
location. The `Issue` trait gains an `additional_sources()` method so
issues can expose supplementary code frames.
- **`GeneratedCodeSource`** — A wrapper that strips `GenerateSourceMap`
support from a source, ensuring the *generated* code is displayed as-is
rather than being remapped back to the original.
- **`IssueSource::to_generated_code_source()`** — Helper that detects
sources implementing `GenerateSourceMap` and wraps them in
`GeneratedCodeSource` for display. Used by `AnalyzeIssue` and
`ParsingIssue` to automatically attach generated code frames.
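The description-chaining behavior can be sketched as below. This is a hypothetical simplification: the real `Source` trait in `turbopack-core` is async and `Vc`-based, and `FileSource`/`LoaderProcessedSource` here are reduced stand-ins for the actual source types.

```rust
// Simplified sketch of Source::description() chaining (real trait is
// async/Vc-based; these structs are illustrative stand-ins).
trait Source {
    fn description(&self) -> String;
}

/// A source read directly from disk.
struct FileSource {
    path: &'static str,
}

impl Source for FileSource {
    fn description(&self) -> String {
        format!("file content of {}", self.path)
    }
}

/// A source produced by running loaders over another source.
struct LoaderProcessedSource<S: Source> {
    loaders: &'static str,
    inner: S,
}

impl<S: Source> Source for LoaderProcessedSource<S> {
    fn description(&self) -> String {
        // Each wrapper prepends its own step, so nested transforms
        // produce a readable provenance chain.
        format!(
            "loaders [{}] transform of {}",
            self.loaders,
            self.inner.description()
        )
    }
}

fn main() {
    let source = LoaderProcessedSource {
        loaders: "sass-loader",
        inner: FileSource { path: "./styles.scss" },
    };
    assert_eq!(
        source.description(),
        "loaders [sass-loader] transform of file content of ./styles.scss"
    );
    println!("{}", source.description());
}
```

Each processing asset contributes one link in the chain, which is how error output like the example below can say exactly which loader produced the generated code.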

### Error Formatting

- **`turbopack-cli-utils`** — Renders additional sources in CLI issue
output.
- **`format-issue.ts`** — Renders additional sources in the browser
error overlay. Extracted `formatSourceCodeFrame()` helper to deduplicate
code-frame rendering between primary and additional sources.
- Long-line truncation (e.g. minified CSS from SCSS) is handled natively
by the Rust-based `codeFrameColumns` implementation.

### Type Definitions

- Added `SourcePosition`, `IssueSource`, and `AdditionalIssueSource`
interfaces to TypeScript types.
- Updated `PlainSource` (added `file_path`), `PlainIssue` (added
`additional_sources`), and NAPI bindings to pass the data through.

### Test Coverage

- **E2E tests** (`test/e2e/webpack-loader-parse-error/`) with custom broken JS and CSS loaders, covering all four modes:
  - **Development (Turbopack)** — verifies parse errors show both the original and generated code via the browser error overlay
  - **Development (Webpack)** — verifies the error overlay shows the parse error (webpack doesn't support additional sources)
  - **Production (Turbopack)** — verifies build failure output with full error extraction and inline snapshots
  - **Production (Webpack)** — verifies build failure output with inline snapshots
- Updated `test/development/sass-error/` snapshot to include the new
generated code frame for minified SCSS output.

### Example Output

When a loader produces broken code, users now see:
```
⨯ ./app/data.broken.js:3:1
Parsing ecmascript source code failed
  1 | // This file will be processed by broken-js-loader
  2 | // The loader will return invalid JavaScript with a source map
> 3 | export default function Data() {
    | ^
  4 |   return <div>original source content</div>
  5 | }
  6 |

Expected '</', got '{'

Generated code of loaders [./broken-js-loader.js] transform of file content of app/data.broken.js:
./app/data.broken.js:3:46
  1 | // Generated by broken-js-loader
  2 | export default function Page() {
> 3 |   return <div>this is intentionally broken {{{ invalid jsx
    |                                              ^
  4 | }
  5 |

Import trace:
  Server Component:
    ./app/data.broken.js
    ./app/page.js
```

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Tobias Koppers <sokra@users.noreply.github.com>
Co-authored-by: Luke Sandberg <lukeisandberg@gmail.com>
```
    // This is only used by Webpack to correctly output the manifest. Its value shouldn't be relied
    // upon externally. It's possible that the same action can be in different layers in a single
    // page, which cannot be modelled with this API anyway.
```
@pull pull bot locked and limited conversation to collaborators Mar 16, 2026
@pull pull bot added the ⤵️ pull label Mar 16, 2026
@pull pull bot merged commit 4678944 into code:canary Mar 16, 2026