fix(derivation): harden blob verification for PeerDAS sidecars #944
curryxbo merged 4 commits into feat/multi_batch from
Conversation
Switch blob authentication from the legacy single-blob VerifyBlobProof path to a commitment round-trip (BlobToCommitment + KZGToVersionedHash match), so derivation keeps working when beacon nodes return EIP-7594 cell proofs (PeerDAS / Osaka) instead of legacy kzg_proofs. The new chain is: blob bytes -> recomputed commitment -> versioned hash, which is then matched against the L1-signed blob hash carried in the type-3 tx. This gives the same soundness as VerifyBlobProof without depending on the beacon-supplied kzg_proof field, which is no longer guaranteed to be a legacy single-blob proof across forks/clients.

Also:

* Reject malformed beacon responses up front by asserting the decoded blob is exactly BlobSize, so a length mismatch surfaces clearly instead of cascading into a confusing "commitment mismatch" caused by copy()'s silent zero-pad / truncate.
* Drop the now-unused ComputeBlobProof call. ParseBatch only consumes Sidecar.Blobs, so computing a fresh proof per blob per batch was pure overhead. Documented how to re-introduce it if a future consumer needs Proofs.

Co-authored-by: Cursor <cursoragent@cursor.com>
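The chain above can be sketched as follows. This is a dependency-free illustration, not the PR's code: `kzgToVersionedHash` implements the EIP-4844 mapping (0x01 || sha256(commitment)[1:]), while the commitment computation is injected as a function parameter because the real BlobToCommitment needs a KZG library and trusted setup. The length check mirrors the BlobSize assertion described in the commit.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"errors"
	"fmt"
)

// BlobSize is the EIP-4844 blob length in bytes (4096 field elements * 32).
const BlobSize = 4096 * 32

// kzgToVersionedHash maps a KZG commitment to its versioned hash per
// EIP-4844: 0x01 || sha256(commitment)[1:].
func kzgToVersionedHash(commitment [48]byte) [32]byte {
	h := sha256.Sum256(commitment[:])
	h[0] = 0x01 // VERSIONED_HASH_VERSION_KZG
	return h
}

// verifyBlob sketches the commitment round-trip: recompute the commitment
// from the blob bytes, derive its versioned hash, and compare against the
// hash signed in the L1 type-3 tx. blobToCommitment stands in for a real
// KZG implementation (e.g. a BlobToCommitment call).
func verifyBlob(b []byte, expected [32]byte, blobToCommitment func([]byte) ([48]byte, error)) error {
	// Reject malformed beacon responses up front: a short or long blob would
	// otherwise be silently zero-padded/truncated by copy() and surface later
	// as a confusing commitment mismatch.
	if len(b) != BlobSize {
		return fmt.Errorf("expected blob of %d bytes, got %d", BlobSize, len(b))
	}
	commitment, err := blobToCommitment(b)
	if err != nil {
		return err
	}
	got := kzgToVersionedHash(commitment)
	if !bytes.Equal(got[:], expected[:]) {
		return errors.New("versioned hash mismatch: blob does not match L1-signed blob hash")
	}
	return nil
}

func main() {
	// Stub commitment function for demonstration only; real code performs
	// the KZG polynomial commitment over the blob.
	stub := func(b []byte) ([48]byte, error) {
		var c [48]byte
		copy(c[:], b) // NOT a real commitment
		return c, nil
	}
	blob := make([]byte, BlobSize)
	commitment, _ := stub(blob)
	expected := kzgToVersionedHash(commitment)
	fmt.Println(verifyBlob(blob, expected, stub) == nil)      // true: matching hash passes
	fmt.Println(verifyBlob(blob[:10], expected, stub) != nil) // true: wrong length rejected
}
```

Note that soundness comes entirely from recomputing the commitment locally: a malicious or buggy beacon response cannot pass unless its blob bytes actually hash to the L1-signed versioned hash.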
Move the inlined blob authentication logic out of fetchRollupDataByTxHash
into a small verifyBlob(blob, expectedHash) helper in beacon.go, mirroring
the structure introduced by ethereum-optimism/optimism PR #17725
("l1-beacon-client: verify blobs using commitment only").
The helper bundles the BlobToCommitment + KZGToVersionedHash + compare
chain in one place, so the caller's loop only needs to do:
if err := verifyBlob(&blob, expectedHash); err != nil { ... }
This is a pure refactor: same authentication path, same security
property (blob bytes -> commitment -> versioned hash matches the
L1-signed hash), no behavioral change. Replaces the previously dead
VerifyBlobProof wrapper, whose name was actively misleading now that
we no longer consume any beacon-supplied kzg_proof.
Co-authored-by: Cursor <cursoragent@cursor.com>
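The caller's loop described above might look like the following sketch. All type and variable names here are illustrative stand-ins (the real types live in the node's derivation package), and verifyBlob is stubbed so the loop shape compiles on its own; the point is the structure: iterate blob hashes in declared order, look up sidecars by versioned hash, and authenticate each blob before consuming it.

```go
package main

import (
	"errors"
	"fmt"
)

// Minimal stand-ins for the real types; names are illustrative only.
type hash [32]byte
type blob [131072]byte

// verifyBlob is assumed to implement the commitment round-trip
// (blob -> commitment -> versioned hash == expected). Stubbed here so the
// example is self-contained.
func verifyBlob(b *blob, expected hash) error {
	if b == nil {
		return errors.New("nil blob")
	}
	return nil // real code recomputes the commitment and compares hashes
}

// collectBlobData mirrors the caller's loop: iterate the tx's blob hashes
// in declared order, look up each sidecar blob by versioned hash,
// authenticate it, and concatenate the payload.
func collectBlobData(blobHashes []hash, byVersionedHash map[hash]*blob) ([]byte, error) {
	var data []byte
	for _, expected := range blobHashes {
		b, ok := byVersionedHash[expected]
		if !ok {
			return nil, fmt.Errorf("beacon response missing blob for hash %x", expected[:4])
		}
		if err := verifyBlob(b, expected); err != nil {
			return nil, fmt.Errorf("blob %x failed verification: %w", expected[:4], err)
		}
		data = append(data, b[:]...)
	}
	return data, nil
}

func main() {
	h := hash{0x01}
	var b blob
	data, err := collectBlobData([]hash{h}, map[hash]*blob{h: &b})
	fmt.Println(err == nil, len(data)) // true 131072
}
```

Bundling the three-step authentication behind one helper keeps the loop's error handling to a single `if err := verifyBlob(...)` per blob, which is the whole point of the refactor.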
The optimism PR pointer was situational context, not durable documentation. The doc comment now stands on its own.

Co-authored-by: Cursor <cursoragent@cursor.com>
…ode L1
The minimal-preset PeerDAS L1 used by morph devnet runs as a single
beacon node. Two spec-level defaults make this unworkable for any
"reset validator and re-derive from L1 genesis" workflow:
* CUSTODY_REQUIREMENT=4 / SAMPLES_PER_SLOT=8 — only 4 of the 128
data columns per slot are persisted, which is never enough to
reconstruct a blob (needs >= 64). With no peers to gossip the
other columns from, those blobs are effectively lost the moment
the proposal pipeline finishes.
* MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS=4096 — at the minimal
preset (8 slots/epoch, 3s/slot) this is ~27h, after which the
beacon legitimately prunes columns. A devnet that has been up
for >27h cannot serve any historical blob to a freshly reset
validator, which manifests as
BAD_REQUEST: Insufficient data columns to reconstruct blobs:
required 64, but only 0 were found.
Set custody/samples to the full 128 so the lone supernode actually
keeps every column it produces, and bump the retention window to
~30 days (110000 epochs * 24s) so derivation can backfill from
genesis throughout normal devnet lifetimes. Lighthouse's --supernode
flag is now redundant but aligned with the spec, rather than silently
fighting it.
Co-authored-by: Cursor <cursoragent@cursor.com>
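The overrides described above would land in the beacon chain config along these lines. The variable names are the EIP-7594 spec config names cited in the commit; the exact file layout is a sketch, not the devnet's actual config:

```yaml
# PeerDAS custody/retention overrides for a single-node (supernode) devnet.
# Spec defaults noted in comments.

# Default 4: only 4 of the 128 data columns per slot are persisted, below
# the >= 64 needed to reconstruct a blob, with no peers to fill the gap.
CUSTODY_REQUIREMENT: 128

# Default 8: raise sampling to the full column count as well.
SAMPLES_PER_SLOT: 128

# Default 4096 epochs: at 8 slots/epoch * 3s/slot = 24s/epoch that is
# 4096 * 24s ~= 27h. 110000 * 24s = 2,640,000s ~= 30.5 days.
MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS: 110000
```

With these values the lone node custodies every column it produces and retains them long enough for a freshly reset validator to re-derive from L1 genesis.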
Summary
Hardens the L1 blob verification path used by derivation so it stays sound and correct against modern beacon nodes:

* Replaces the legacy `VerifyBlobProof(blob, commitment, kzg_proof)` call with a commitment round-trip — `BlobToCommitment(blob)` is recomputed locally and matched against the beacon-supplied commitment, whose versioned hash must equal the blob hash signed in the L1 type-3 tx. This avoids depending on the beacon's `kzg_proof` field, which after EIP-7594 (PeerDAS / Osaka) is no longer guaranteed to be a legacy single-blob proof. Soundness is unchanged: blob bytes -> commitment -> versioned hash == L1-signed hash.
* Asserts the decoded blob is exactly `BlobSize` (131072 bytes). Without this, `copy(blob[:], b)` silently zero-pads (when shorter) or truncates (when longer), and the bug surfaces later as a confusing "commitment mismatch" instead of a clear length error.
* Drops the now-unused `ComputeBlobProof`: `ParseBatch` only consumes `Sidecar.Blobs`, so computing a fresh proof per blob per batch was pure CPU overhead. A comment explains how to re-introduce it if a future consumer needs `Proofs`.

No change to the multi-blob ordering / zstd-decompression behavior; we still iterate `tx.BlobHashes()` in declared order and look up sidecars by versioned hash.

Test plan

* `go build ./...` in node/
* `go test ./derivation/...` (existing unit tests)

Made with Cursor