
Conversation

@alpe
Contributor

@alpe commented Nov 27, 2025

Overview

Resolves #2609

The basic idea is to store an additional DAHeightHint field within the p2p store.

As SignedHeader and Data are used in other places too, I added a DAHeightHintContainer type to wrap the tuple for the store only.
The DA height hint is added by the da_submitter and read in the syncer to fetch the missing DA header/data for the most recent block as required.

Please note: this is a breaking change to the p2p network and store

@claude
Contributor

claude bot commented Nov 27, 2025

Claude finished @alpe's task in 2m 57s


Code Review Complete

I've performed a comprehensive review of the DA hints implementation. Below are my findings organized by category.

Todo List

  • Review architecture and design patterns
  • Examine core implementation files (p2p_envelope, async_da_retriever, sync_service)
  • Check syncer and da_submitter changes
  • Assess error handling and resilience
  • Review security implications
  • Evaluate performance and concurrency patterns
  • Check test coverage and quality
  • Review backwards compatibility strategy
  • Compile final feedback

Summary

This PR implements DA height hints in the P2P layer to help nodes discover which DA height contains data for a given block height. The implementation is well-architected with strong backwards compatibility measures. However, there are several areas that need attention before merging, particularly around error handling, testing, and production readiness.

Key Strengths:

  • Excellent backwards compatibility design using binary-compatible protobuf types
  • Clean separation of concerns with P2PEnvelope wrapper
  • Async DA retrieval with worker pool prevents blocking
  • Comprehensive test coverage for binary compatibility

Critical Issues:

  • Missing migration testing and documentation
  • Several error paths lack proper handling
  • Race conditions in concurrent access patterns
  • Incomplete test coverage (66.30% patch coverage)

1. Code Quality & Architecture

✅ Strengths

Excellent backwards compatibility design (types/p2p_envelope.go:86-153)
The P2PEnvelope type uses protobuf's optional fields to achieve binary compatibility with legacy SignedHeader and Data types. The implementation correctly mirrors fields and handles the optional da_height_hint:

// New types (P2PSignedHeader, P2PData) can unmarshal old types (SignedHeader, Data)
// and old types can unmarshal new types, ignoring the hint field

The test coverage in types/p2p_envelope_test.go:92-153 validates both directions of compatibility.

Clean separation of concerns (pkg/sync/sync_service.go:319-333)
The AppendDAHint method properly encapsulates DA hint management, separating it from core sync logic.

Async DA retrieval with proper concurrency (block/internal/syncing/async_da_retriever.go)
The async retriever uses:

  • Worker pool (5 workers) for concurrent DA fetching
  • Deduplication via inFlight map
  • Non-blocking request queuing with buffer (100)
  • Proper context cancellation

⚠️ Issues

1. Error handling in DA hint appending is too lenient (block/internal/submitting/da_submitter.go:213-216, 258-261)

if err := s.headerDAHintAppender.AppendDAHint(ctx, res.Height, hashes...); err != nil {
    s.logger.Error().Err(err).Msg("failed to append da height hint in header p2p store")
    // ignoring error here, since we don't want to block the block submission'
}

Issue: Silently ignoring DA hint append failures means hints may be missing in the P2P store, defeating the purpose of this feature. If hints are consistently failing to append, nodes won't benefit from the optimization.

Recommendation: Add metrics to track DA hint append failures and alert if failure rate exceeds a threshold. Consider making this a warning-level issue that triggers operational alerts.
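A minimal sketch of such a failure counter, assuming a plain process-wide atomic; the counter name and helper are hypothetical, and a real deployment would presumably export this through the project's existing metrics stack rather than a bare variable:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// hintAppendFailures is a hypothetical process-wide counter; an operator
// alert would fire when its rate of increase exceeds a threshold.
var hintAppendFailures atomic.Uint64

// recordHintAppendFailure increments the counter and returns the new total,
// so the caller can log it alongside the error.
func recordHintAppendFailure() uint64 {
	return hintAppendFailures.Add(1)
}

func main() {
	// Simulate three failed AppendDAHint calls.
	for i := 0; i < 3; i++ {
		recordHintAppendFailure()
	}
	fmt.Println(hintAppendFailures.Load())
}
```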


2. Missing nil checks in sync_service getters (pkg/sync/sync_service.go:92-136)

The getter functions in NewDataSyncService and NewHeaderSyncService don't validate that the returned state/data/header are non-nil before wrapping them in P2PEnvelope:

state, err := daStore.GetStateAtHeight(ctx, d.Height())
if err != nil {
    if !errors.Is(err, store.ErrNotFound) {
        return nil, err
    }
    return &types.P2PData{Message: d, DAHeightHint: 0}, nil
}
return &types.P2PData{Message: d, DAHeightHint: state.DAHeight}, nil

Issue: If GetStateAtHeight returns nil, nil (which shouldn't happen but isn't explicitly prevented), this would access state.DAHeight on a nil state.

Recommendation: Add explicit nil checks after retrieving state/data/header from store.
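A defensive-guard sketch, using a stand-in State type (the field name DAHeight mirrors the getter above; the error value and helper name are assumptions):

```go
package main

import (
	"errors"
	"fmt"
)

// State is a stand-in for the store's state type.
type State struct{ DAHeight uint64 }

var errNilState = errors.New("store returned nil state without error")

// daHintFromState guards against a nil state before DAHeight is dereferenced,
// turning a would-be panic into a handleable error.
func daHintFromState(state *State) (uint64, error) {
	if state == nil {
		return 0, errNilState
	}
	return state.DAHeight, nil
}

func main() {
	if _, err := daHintFromState(nil); err != nil {
		fmt.Println("guarded:", err)
	}
	hint, _ := daHintFromState(&State{DAHeight: 42})
	fmt.Println(hint)
}
```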

3. P2PHandler processes height without backpressure (block/internal/syncing/p2p_handler.go:77-128)

select {
case heightInCh <- event:
default:
    h.cache.SetPendingEvent(event.Header.Height(), &event)
}

Issue: The default case silently drops events into cache without logging or metrics, making it hard to detect when the system is under pressure.

Recommendation: Add debug logging and metrics when events are cached due to channel pressure.
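One way to surface the fallback path, shown here with a plain counter and fmt logging as stand-ins for the project's metrics and zerolog setup (function and parameter names are illustrative):

```go
package main

import "fmt"

// enqueueOrCache mirrors the select/default pattern in p2p_handler.go, but
// counts and logs every event that falls through to the cache so channel
// pressure is visible.
func enqueueOrCache(ch chan int, event int, cachedTotal *int) {
	select {
	case ch <- event:
	default:
		*cachedTotal++ // a real implementation would also emit a metric here
		fmt.Printf("channel full, cached event %d (total cached: %d)\n", event, *cachedTotal)
	}
}

func main() {
	ch := make(chan int, 1)
	cached := 0
	enqueueOrCache(ch, 1, &cached) // fits in the buffer
	enqueueOrCache(ch, 2, &cached) // buffer full, falls back to cache
	fmt.Println(cached)
}
```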


4. Inconsistent DA height hint extraction (block/internal/syncing/syncer.go:582-622)

The logic for extracting DA hints from P2P events is complex with multiple branches checking for zero values and cache existence:

switch {
case event.DaHeightHints == [2]uint64{0, 0}:
    // empty, nothing to do
case event.DaHeightHints[0] == 0:
    // check only data
case event.DaHeightHints[1] == 0:
    // check only header
default:
    // check both
}

Issue: This branching is fragile and hard to test. Note that Go compares arrays element-wise, so event.DaHeightHints == [2]uint64{0, 0} is valid; the real problem is that positional indices obscure which slot holds the header hint and which holds the data hint.

Recommendation: Extract this logic into a well-tested helper function with clear semantics. Consider using a struct with explicit fields instead of a 2-element array.
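A sketch of the suggested struct-based helper; the type and method names are assumptions, and zero is treated as "unknown" per the convention visible elsewhere in the PR:

```go
package main

import "fmt"

// daHints names the two hint slots explicitly instead of relying on
// positional indices in a [2]uint64.
type daHints struct {
	Header uint64 // 0 means unknown
	Data   uint64 // 0 means unknown
}

// pending reports which hints carry a known DA height and therefore
// warrant a targeted DA lookup.
func (h daHints) pending() (header, data bool) {
	return h.Header != 0, h.Data != 0
}

func main() {
	h := daHints{Header: 0, Data: 7}
	needHeader, needData := h.pending()
	fmt.Println(needHeader, needData)
}
```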



2. Error Handling & Resilience

✅ Strengths

Graceful degradation when DA hints unavailable (pkg/sync/sync_service.go:129-131)
When state is not found, the code defaults to a zero DA hint rather than failing:

if !errors.Is(err, store.ErrNotFound) {
    return nil, err
}
return &types.P2PData{Message: d, DAHeightHint: 0}, nil

Proper context cancellation in async retriever (block/internal/syncing/async_da_retriever.go:78-89)
Worker goroutines properly respect context cancellation.

⚠️ Issues

1. AsyncDARetriever worker pool doesn't close workCh (block/internal/syncing/async_da_retriever.go:50-57)

func (r *AsyncDARetriever) Stop() {
    if r.cancel != nil {
        r.cancel()
    }
    r.wg.Wait()
    r.logger.Info().Msg("AsyncDARetriever stopped")
}

Issue: The workCh channel is never closed, which could lead to goroutine leaks if code waits on it elsewhere. While the current implementation only sends to it, it's a best practice to close channels when done.

Recommendation: Close workCh in Stop() after canceling context but before waiting on wg.
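A shutdown-ordering sketch under the assumption that Stop is only called after all producers have ceased sending (closing a channel with live senders would panic); the struct is a pared-down stand-in, not the PR's actual type:

```go
package main

import (
	"context"
	"fmt"
	"sync"
)

// retriever is a minimal model of the worker-pool shutdown pattern:
// cancel the context, close the work channel, then wait for workers.
type retriever struct {
	cancel context.CancelFunc
	workCh chan uint64
	wg     sync.WaitGroup
}

func newRetriever() *retriever {
	ctx, cancel := context.WithCancel(context.Background())
	r := &retriever{cancel: cancel, workCh: make(chan uint64, 100)}
	r.wg.Add(1)
	go func() {
		defer r.wg.Done()
		for {
			select {
			case <-ctx.Done():
				return
			case h, ok := <-r.workCh:
				if !ok {
					return // channel closed: clean exit
				}
				_ = h // a real worker would fetch from DA here
			}
		}
	}()
	return r
}

// Stop cancels in-flight work, closes workCh so nothing can block on a
// receive from it, and waits for the worker to exit.
func (r *retriever) Stop() {
	r.cancel()
	close(r.workCh)
	r.wg.Wait()
	fmt.Println("stopped")
}

func main() {
	r := newRetriever()
	r.workCh <- 5
	r.Stop()
}
```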

2. RequestRetrieval drops requests silently when queue is full (block/internal/syncing/async_da_retriever.go:69-75)

select {
case r.workCh <- height:
    r.inFlight[height] = struct{}{}
    r.logger.Debug().Uint64("height", height).Msg("queued DA retrieval request")
default:
    r.logger.Debug().Uint64("height", height).Msg("DA retrieval worker pool full, dropping request")
}

Issue: Debug-level logging for dropped requests makes it hard to detect production issues. If requests are frequently dropped, nodes won't benefit from DA hints.

Recommendation: Use warning-level logging and add metrics for dropped requests.

3. No retry mechanism for failed DA hint appends

When AppendDAHint fails in da_submitter.go, there's no retry mechanism. If the failure is transient (e.g., temporary I/O error), hints are permanently lost.

Recommendation: Consider implementing a retry queue for failed hint appends with exponential backoff.


3. Security

✅ Strengths

Proper validation before using P2P data (block/internal/syncing/p2p_handler.go:90-108)
The P2P handler validates proposer address and data hash consistency before accepting events:

if err := h.assertExpectedProposer(header.ProposerAddress); err != nil {
    h.logger.Debug().Uint64("height", height).Err(err).Msg("invalid header from P2P")
    return err
}

No secret leakage in logs
DA heights and hashes are appropriately logged without exposing sensitive data.

⚠️ Issues

1. P2P DA hints are unauthenticated

DA hints received via P2P are not cryptographically signed. A malicious peer could send incorrect hints, causing nodes to query wrong DA heights.

Impact: While this doesn't compromise correctness (nodes still validate data from DA), it could degrade performance by causing unnecessary DA queries.

Mitigation: The current design is acceptable because:

  1. Hints are optimization-only; incorrect hints don't break correctness
  2. DA retrieval validates all data against block hashes
  3. Async retriever deduplicates requests

Recommendation: Document this security tradeoff in code comments and consider adding metrics to detect malicious hint behavior (e.g., hint success rate).

2. No rate limiting on async DA retrieval

The async retriever accepts unlimited requests (up to buffer size 100), which could be exploited by sending many invalid hints.

Recommendation: Add per-peer rate limiting or request throttling to prevent abuse.


4. Performance & Resource Efficiency

✅ Strengths

Worker pool prevents DA query storms (block/internal/syncing/async_da_retriever.go:42-46)
Fixed pool of 5 workers prevents overwhelming the DA layer with concurrent requests.

Deduplication prevents redundant queries (block/internal/syncing/async_da_retriever.go:62-76)
The inFlight map ensures each DA height is only queried once, even if multiple hints point to it.

Buffered channels reduce contention (block/internal/syncing/async_da_retriever.go:33)
Buffer size of 100 provides reasonable headroom for burst traffic.

⚠️ Issues

1. Unbounded inFlight map (block/internal/syncing/async_da_retriever.go:16)

inFlight  map[uint64]struct{}

Issue: While entries are deleted after processing (async_da_retriever.go:92-96), if DA retrieval consistently fails, the map could grow unbounded over time.

Recommendation: Add size limits or periodic cleanup of stale entries (e.g., entries older than 10 minutes).

2. No backpressure mechanism between DA worker and syncer

The heightInCh has a fixed buffer of 100 (syncing/syncer.go:180). If the syncer can't keep up with DA retrieval results, the async retriever will block.

Recommendation: Consider making the async retriever's result channel independently sized or add metrics to monitor channel fullness.

3. GetByHeight queries could be expensive (pkg/sync/sync_service.go:335-342)

func (s *SyncService[H]) GetByHeight(ctx context.Context, height uint64) (H, uint64, error) {
    c, err := s.store.GetByHeight(ctx, height)
    if err != nil {
        var zero H
        return zero, 0, err
    }
    return c, c.DAHint(), nil
}

This method is called frequently in P2P paths but doesn't implement caching. For frequently accessed heights, this could cause repeated database lookups.

Recommendation: Consider adding an LRU cache for recent height lookups if profiling shows this is a bottleneck.
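A minimal LRU sketch over container/list for height-to-hint lookups; the capacity and type names are illustrative, and a concurrent version would need a mutex:

```go
package main

import (
	"container/list"
	"fmt"
)

// lru caches height -> DA hint; front of the list is most recently used.
type lru struct {
	cap   int
	order *list.List
	items map[uint64]*list.Element // element values are [2]uint64{height, hint}
}

func newLRU(cap int) *lru {
	return &lru{cap: cap, order: list.New(), items: map[uint64]*list.Element{}}
}

func (c *lru) Get(height uint64) (uint64, bool) {
	el, ok := c.items[height]
	if !ok {
		return 0, false
	}
	c.order.MoveToFront(el)
	return el.Value.([2]uint64)[1], true
}

func (c *lru) Put(height, hint uint64) {
	if el, ok := c.items[height]; ok {
		el.Value = [2]uint64{height, hint}
		c.order.MoveToFront(el)
		return
	}
	if c.order.Len() >= c.cap { // evict the least recently used entry
		back := c.order.Back()
		delete(c.items, back.Value.([2]uint64)[0])
		c.order.Remove(back)
	}
	c.items[height] = c.order.PushFront([2]uint64{height, hint})
}

func main() {
	c := newLRU(2)
	c.Put(1, 100)
	c.Put(2, 200)
	c.Get(1)      // touch 1 so 2 becomes least recently used
	c.Put(3, 300) // evicts 2
	_, ok := c.Get(2)
	fmt.Println(ok)
}
```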


5. Testing & Reliability

✅ Strengths

Excellent binary compatibility tests (types/p2p_envelope_test.go:92-153)
Tests validate both directions: old types unmarshaling new types and vice versa.

Comprehensive async retriever tests (block/internal/syncing/async_da_retriever_test.go)
Tests cover:

  • Successful retrieval
  • Deduplication/idempotency
  • Worker pool limits
  • Proper blocking behavior

Integration test for DA hint flow (block/internal/submitting/da_submitter_integration_test.go)

⚠️ Issues

1. Missing test coverage - 66.30% patch coverage (93 lines uncovered)

Key files with low coverage:

  • pkg/sync/sync_service.go: 57.83% (21 missing, 14 partials) - lines 582-622
  • types/p2p_envelope.go: 53.42% (29 missing, 5 partials)
  • block/internal/syncing/syncer.go: 58.62% (12 missing)

Specific gaps:

  • Error paths in AppendDAHint not tested
  • P2P hint extraction logic (syncer.go:582-622) lacks edge case tests
  • Backwards compatibility edge cases (e.g., mixing old and new nodes)

Recommendation: Add tests for:

  • AppendDAHint failure scenarios
  • P2P hint extraction with various combinations of zero/non-zero hints
  • Network with mixed old/new node versions
  • DA hint retrieval triggering and success/failure paths


2. No migration/upgrade tests

While the code claims to be backwards compatible, there are no tests that:

  • Start an old node, upgrade it, and verify hints work
  • Test mixed networks with old and new nodes
  • Validate that existing P2P stores can be read after upgrade

Recommendation: Add integration tests that simulate upgrade scenarios.

3. No load testing for async retriever

The async retriever has a fixed pool size (5 workers) and buffer (100), but there's no test validating behavior under sustained high load.

Recommendation: Add a load test that sends 1000+ DA hint requests and validates:

  • No deadlocks
  • Reasonable latency distribution
  • Proper deduplication at scale

4. Missing tests for concurrent AppendDAHint calls

Multiple goroutines could call AppendDAHint concurrently (from header and data submission paths). Thread safety isn't explicitly tested.

Recommendation: Add concurrency tests using go test -race.


6. Documentation & Developer Experience

✅ Strengths

Clear purpose documented in PR description
The PR description explains the motivation and approach clearly.

Helpful code comments (pkg/sync/sync_service.go:40-44)

type EntityWithDAHint[H any] interface {
    header.Header[H]
    SetDAHint(daHeight uint64)
    DAHint() uint64
}

The interface clearly defines the contract.

⚠️ Issues

1. Missing migration documentation

The PR description mentions "breaking change to p2p network and store" but doesn't provide:

  • Step-by-step migration guide
  • Rollback procedures
  • Version compatibility matrix
  • Recommendations for network upgrades

Recommendation: Add a migration guide in docs/ or MIGRATION.md covering:

  • Pre-upgrade checklist
  • Upgrade procedure
  • How to handle mixed-version networks
  • Rollback instructions

2. Unclear DA hint semantics in edge cases

What should happen when:

  • DA hint is 0? (Is this "unknown" or "DA height 0"?)
  • DA hint points to non-existent DA height?
  • DA hint is in the future?

Recommendation: Add documentation explaining DA hint semantics and edge case behavior.

3. No performance impact documentation

The PR doesn't document:

  • Expected performance improvement (reduced DA query latency)
  • Resource overhead (memory for hints, CPU for async retrieval)
  • Impact on nodes with slow DA connections

Recommendation: Add benchmarks or performance analysis showing the improvement.

4. Code comment typo (block/internal/submitting/da_submitter.go:215)

// ignoring error here, since we don't want to block the block submission'
//                                                                        ^ extra apostrophe

7. Backwards Compatibility & Migration

✅ Strengths

Binary-compatible protobuf design (proto/evnode/v1/evnode.proto:110-123)

message P2PSignedHeader {
  Header header = 1;
  bytes signature = 2;
  Signer signer = 3;
  optional uint64 da_height_hint = 4;  // Optional field maintains compatibility
}

Using optional fields means:

  • Old nodes can unmarshal new messages (ignoring hint)
  • New nodes can unmarshal old messages (hint defaults to 0)

Test coverage for compatibility (types/p2p_envelope_test.go:92-117)

⚠️ Issues

1. No store migration for existing deployments

PR author notes: "existing p2p stores would need migration to work" (comment from alpe on 2025-12-02).

Issue: The code doesn't include migration logic for existing stores. When nodes upgrade, they'll have a mix of:

  • Old entries: SignedHeader/Data without hints
  • New entries: P2PSignedHeader/P2PData with hints

Recommendation:

  • Add migration code that reads old entries and wraps them in new types with hint=0
  • Provide a migration tool or automatic migration on first startup
  • Document the migration process

2. Network partitioning risk during upgrades

If not all nodes upgrade simultaneously, the network could partition into old and new nodes.

Recommendation:

  • Add feature flag to disable DA hint broadcasting until all nodes are upgraded
  • Implement gradual rollout strategy
  • Add version negotiation in P2P handshake

3. Testing gap: No verification on actual network

From PR comments (tac0turtle on 2026-01-12): "was this tested on an existing network? If not, please do that before merging"

Recommendation: Test on a staging network with:

  • Mix of old and new nodes
  • Existing P2P store data
  • Monitor for issues during upgrade

8. Additional Observations

CI Failures

The "Docs Deploy Preview" workflow failed. While this is not blocking for the DA hints feature, it should be addressed before merge.

Code Style

The code generally follows Go idioms and the repository's style guide. Minor style suggestions:

  1. Prefer early returns over nested conditionals in syncer.go:582-622
  2. Consider extracting magic numbers (buffer sizes, worker counts) to named constants

Architecture Alignment

The implementation aligns well with the project's modular architecture:

  • Clean separation between P2P layer and block sync logic
  • Proper use of interfaces for testability
  • Follows existing patterns for DA retrieval

Recommendations Summary

🔴 Critical (Must Fix Before Merge)

  1. Add migration testing and documentation for existing networks
  2. Fix error handling for DA hint append failures (add metrics/alerts)
  3. Increase test coverage to >80% (currently 66.30%)
  4. Test on actual network with mixed node versions
  5. Add nil checks in sync_service getters

🟡 Important (Should Fix)

  1. Add migration tool for existing P2P stores
  2. Implement retry mechanism for failed DA hint appends
  3. Add rate limiting for async DA retrieval
  4. Close workCh channel in AsyncDARetriever.Stop()
  5. Add comprehensive edge case tests
  6. Document DA hint semantics and edge cases
  7. Refactor DA hint extraction logic into helper function

🟢 Nice to Have

  1. Add LRU cache for frequent height lookups
  2. Add load testing for async retriever
  3. Document performance impact
  4. Add metrics for DA hint effectiveness
  5. Consider size limits for inFlight map
  6. Fix code comment typo

Conclusion

This is a well-designed feature with strong backwards compatibility measures. The async DA retrieval pattern is solid, and the binary-compatible protobuf approach is excellent. However, the PR should not be merged until:

  1. ✅ Migration strategy is documented and tested
  2. ✅ Test coverage increases above 80%
  3. ✅ Testing on actual network is completed
  4. ✅ Critical error handling issues are addressed

The implementation shows good engineering practices, but needs more focus on production readiness, particularly around migration, testing, and operational observability.


@github-actions
Contributor

github-actions bot commented Nov 27, 2025

The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).

Build: ✅ passed | Format: ⏩ skipped | Lint: ✅ passed | Breaking: ✅ passed | Updated (UTC): Jan 26, 2026, 2:15 PM

@alpe changed the title from "Alex/hints" to "feat: DA hints in p2p" on Nov 28, 2025
@codecov

codecov bot commented Nov 28, 2025

Codecov Report

❌ Patch coverage is 66.30435% with 93 lines in your changes missing coverage. Please review.
✅ Project coverage is 58.97%. Comparing base (9348732) to head (94fe911).

Files with missing lines | Patch % | Lines
pkg/sync/sync_service.go 57.83% 21 Missing and 14 partials ⚠️
types/p2p_envelope.go 53.42% 29 Missing and 5 partials ⚠️
block/internal/syncing/syncer.go 58.62% 12 Missing ⚠️
block/internal/syncing/async_da_retriever.go 86.53% 6 Missing and 1 partial ⚠️
block/internal/submitting/da_submitter.go 80.95% 2 Missing and 2 partials ⚠️
pkg/store/store.go 0.00% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2891      +/-   ##
==========================================
+ Coverage   58.63%   58.97%   +0.33%     
==========================================
  Files         111      113       +2     
  Lines       10405    10634     +229     
==========================================
+ Hits         6101     6271     +170     
- Misses       3659     3699      +40     
- Partials      645      664      +19     
Flag Coverage Δ
combined 58.97% <66.30%> (+0.33%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.


alpe added 3 commits November 28, 2025 17:20
* main:
  refactor: omit unnecessary reassignment (#2892)
  build(deps): Bump the all-go group across 5 directories with 6 updates (#2881)
  chore: fix inconsistent method name in retryWithBackoffOnPayloadStatus comment (#2889)
  fix: ensure consistent network ID usage in P2P subscriber (#2884)
cache.SetHeaderDAIncluded(headerHash.String(), res.Height, header.Height())
hashes[i] = headerHash
}
if err := s.headerDAHintAppender.AppendDAHint(ctx, res.Height, hashes...); err != nil {
@alpe (Contributor Author)

This is where the DA height is passed to the sync service to update the p2p store

Msg("P2P event with DA height hint, triggering targeted DA retrieval")

// Trigger targeted DA retrieval in background via worker pool
s.asyncDARetriever.RequestRetrieval(daHeightHint)
@alpe (Contributor Author)

This is where the "fetch from DA" is triggered for the current block event height

type SignedHeaderWithDAHint = DAHeightHintContainer[*types.SignedHeader]
type DataWithDAHint = DAHeightHintContainer[*types.Data]

type DAHeightHintContainer[H header.Header[H]] struct {
@alpe (Contributor Author) commented Dec 1, 2025

This is a data container to persist the DA hint together with the block header or data.
types.SignedHeader and types.Data are used in many other places, so I did not modify them but instead introduced this type for the p2p store and transfer only.

It may make sense to make this a proto type. WDYT?

return nil
}

func (s *SyncService[V]) AppendDAHint(ctx context.Context, daHeight uint64, hashes ...types.Hash) error {
@alpe (Contributor Author)

Stores the DA height hints

@alpe marked this pull request as ready for review December 1, 2025 09:32
@tac0turtle
Contributor

if da hint is not in the proto how do other nodes get knowledge of the hint?

also how would an existing network handle using this feature? its breaking so is it safe to upgrade?

"github.com/evstack/ev-node/block/internal/cache"
"github.com/evstack/ev-node/block/internal/common"
"github.com/evstack/ev-node/block/internal/da"
coreda "github.com/evstack/ev-node/core/da"
Member

nit: gci linter

@julienrbrt (Member) left a comment

Nice! It really makes sense.

I share the same concern as @tac0turtle however about the upgrade strategy given it is p2p breaking.

julienrbrt
julienrbrt previously approved these changes Dec 2, 2025
@alpe
Contributor Author

alpe commented Dec 2, 2025

if da hint is not in the proto how do other nodes get knowledge of the hint?

The sync_service wraps the header/data payload in a DAHeightHintContainer object that is passed upstream to the p2p layer. When the DA height is known, the store is updated.

also how would an existing network handle using this feature? its breaking so is it safe to upgrade?

It is a breaking change. Instead of signed header or data types, the p2p network exchanges DAHeightHintContainer. This would be incompatible. Also the existing p2p stores would need migration to work.

@julienrbrt
Member

julienrbrt commented Dec 4, 2025

Could we broadcast both until every network is updated? Then for the final version we can basically discard the previous one.

@alpe
Contributor Author

alpe commented Dec 5, 2025

fyi: This PR is missing a migration strategy for the p2p store (and ideally the network)

* main:
  refactor(sequencers): persist prepended batch (#2907)
  feat(evm): add force inclusion command (#2888)
  feat: DA client, remove interface part 1: copy subset of types needed for the client using blob rpc. (#2905)
  feat: forced inclusion (#2797)
  fix: fix and cleanup metrics (sequencers + block) (#2904)
  build(deps): Bump mdast-util-to-hast from 13.2.0 to 13.2.1 in /docs in the npm_and_yarn group across 1 directory (#2900)
  refactor(block): centralize timeout in client (#2903)
  build(deps): Bump the all-go group across 2 directories with 3 updates (#2898)
  chore: bump default timeout (#2902)
  fix: revert default db (#2897)
  refactor: remove obsolete // +build tag (#2899)
  fix:da visualiser namespace  (#2895)
alpe added 3 commits December 15, 2025 10:52
* main:
  chore: execute goimports to format the code (#2924)
  refactor(block)!: remove GetLastState from components (#2923)
  feat(syncing): add grace period for missing force txs inclusion (#2915)
  chore: minor improvement for docs (#2918)
  feat: DA Client remove interface part 2,  add client for celestia blob api   (#2909)
  chore: update rust deps (#2917)
  feat(sequencers/based): add based batch time (#2911)
  build(deps): Bump golangci/golangci-lint-action from 9.1.0 to 9.2.0 (#2914)
  refactor(sequencers): implement batch position persistance (#2908)
github-merge-queue bot pushed a commit that referenced this pull request Dec 15, 2025

## Overview

Temporary fix until #2891.
After #2891 the verification for p2p blocks will be done in the
background.

ref: #2906

@alpe
Contributor Author

alpe commented Dec 15, 2025

I have added 2 new types for the p2p store that are binary compatible to the types.Data and SignedHeader. With this, we should be able to roll this out without breaking the in-flight p2p data and store.

julienrbrt
julienrbrt previously approved these changes Dec 15, 2025
@julienrbrt (Member) left a comment

lgtm! I can see how useful the async retriever will be for force inclusion verification as well. We should have @auricom verify if p2p still works with Eden.

Member

This is going to be really useful for force inclusion checks as well.

* main:
  build(deps): Bump actions/cache from 4 to 5 (#2934)
  build(deps): Bump actions/download-artifact from 6 to 7 (#2933)
  build(deps): Bump actions/upload-artifact from 5 to 6 (#2932)
  feat: DA Client remove interface part 3, replace types with new code (#2910)
  DA Client remove interface: Part 2.5, create e2e test to validate that a blob is posted in DA layer. (#2920)
julienrbrt
julienrbrt previously approved these changes Dec 16, 2025
alpe added 3 commits December 19, 2025 17:00
* main:
  feat: use DA timestamp (#2939)
  chore: improve code comments clarity (#2943)
  build(deps): bump libp2p (#2937)
(cherry picked from commit ad3e21b)
julienrbrt
julienrbrt previously approved these changes Dec 19, 2025
* main:
  fix: make evm_execution more robust (#2942)
  fix(sequencers/single): deterministic queue (#2938)
  fix(block): fix init logic sequencer for da epoch fetching (#2926)
github-merge-queue bot pushed a commit that referenced this pull request Jan 2, 2026
Introduce envelope for headers on DA to fail fast on unauthorized
content.
Similar approach as in #2891 with a binary compatible sibling type that
carries the additional information.
 
* Add DAHeaderEnvelope type to wrap signed headers on DA
  * Binary compatible to `SignedHeader` proto type
  * Includes signature of the plain content
* DARetriever checks for valid signature early in the process
* Supports `SignedHeader` for legacy support until first signed envelope read
alpe added 2 commits January 8, 2026 10:06
* main:
  chore: fix some minor issues in the comments (#2955)
  feat: make reaper poll duration configurable (#2951)
  chore!: move sequencers to pkg (#2931)
  feat: Ensure Header integrity on DA (#2948)
  feat(testda): add header support with GetHeaderByHeight method (#2946)
  chore: improve code comments clarity (#2947)
  chore(sequencers): optimize store check (#2945)
@tac0turtle
Copy link
Contributor

ci seems to be having some issues, can these be fixed.

Also was this tested on an existing network? If not, please do that before merging

alpe added 10 commits January 19, 2026 09:46
* main:
  fix: inconsistent state detection and rollback (#2983)
  chore: improve graceful shutdown restarts (#2985)
  feat(submitting): add posting strategies (#2973)
  chore: adding syncing tracing (#2981)
  feat(tracing): adding block production tracing (#2980)
  feat(tracing): Add Store, P2P and Config tracing (#2972)
  chore: fix upgrade test (#2979)
  build(deps): Bump github.com/ethereum/go-ethereum from 1.16.7 to 1.16.8 in /execution/evm/test in the go_modules group across 1 directory (#2974)
  feat(tracing): adding tracing to DA client (#2968)
  chore: create onboarding skill  (#2971)
  test: add e2e tests for force inclusion (part 2) (#2970)
  feat(tracing): adding eth client tracing (#2960)
  test: add e2e tests for force inclusion (#2964)
  build(deps): Bump the all-go group across 4 directories with 10 updates (#2969)
  fix: Fail fast when executor ahead (#2966)
  feat(block): async epoch fetching (#2952)
  perf: tune badger defaults and add db bench (#2950)
  feat(tracing): add tracing to EngineClient (#2959)
  chore: inject W3C headers into engine client and eth client (#2958)
  feat: adding tracing for Executor and added initial configuration (#2957)
* main:
  feat(tracing): tracing part 9 sequencer (#2990)
  build(deps): use mainline go-header (#2988)
* main:
  chore: update calculator for strategies  (#2995)
  chore: adding tracing for da submitter (#2993)
  feat(tracing): part 10 da retriever tracing (#2991)
  chore: add da posting strategy to docs (#2992)
* main:
  build(deps): Bump the all-go group across 5 directories with 5 updates (#2999)
  feat(tracing): adding forced inclusion tracing (#2997)
* main:
  feat(tracing): add store tracing (#3001)
  feat: p2p exchange wrapper  (#2855)
* main:
  fix(docs): remove blog link from sidebar to fix 404 (#3014)
  build(deps): Bump github.com/cometbft/cometbft from 0.38.20 to 0.38.21 in /execution/evm/test in the go_modules group across 1 directory (#3011)
  refactor: use slices.Contains to simplify code (#3010)
  chore: Bump mermaid version and dependencies (#3009)
  chore: Bump github.com/consensys/gnark-crypto only (#3008)
  test: evm contract interaction (#3006)
  chore: remove redundant log (#3007)
  fix: return values correctly not nil (#3004)
  feat: expose execution client params to ev-node (#2982)

Development

Successfully merging this pull request may close these issues.

sync: P2P should provide da inclusion hints

4 participants