fix: process broadcast transactions via dispatch_local #626

xdustinface merged 1 commit into v0.42-dev from dispatch_local (#626)

Conversation
📝 Walkthrough

Broadcast now verifies sync state and returns NotSynced when unsynced; transactions are broadcast to peers and then injected into the local message pipeline; the network trait gained `dispatch_local()`.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client as SPV Client
    participant TxHandler as broadcast_transaction()
    participant NetworkMgr as NetworkManager
    participant MsgDispatcher as MessageDispatcher
    participant MempoolMgr as MempoolManager
    Client->>TxHandler: broadcast_transaction(tx)
    TxHandler->>TxHandler: sync_progress() check
    alt Not Synced
        TxHandler-->>Client: SpvError::Sync(NotSynced)
    else Synced
        TxHandler->>NetworkMgr: broadcast(NetworkMessage::Tx(tx))
        NetworkMgr-->>TxHandler: result
        TxHandler->>NetworkMgr: dispatch_local(NetworkMessage::Tx(tx))
        NetworkMgr->>MsgDispatcher: dispatch(Message from 0.0.0.0:0)
        MsgDispatcher->>MempoolMgr: handle_tx(tx, peer=0.0.0.0:0)
        MempoolMgr->>MempoolMgr: dedupe & record_send if local
        alt Tx exists
            MempoolMgr-->>MsgDispatcher: Ok(empty)
        else New Tx
            MempoolMgr-->>MsgDispatcher: Ok(TransactionReceived)
        end
        MsgDispatcher-->>Client: TransactionReceived event
    end
```
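The flow in the diagram can be sketched in simplified, synchronous Rust. The real `NetworkManager` is async and trait-based, and the real payload is a `dashcore::Transaction`; the struct bodies below are assumptions that only mirror the diagram's message flow, not the crate's actual API:

```rust
use std::net::SocketAddr;

// Simplified stand-ins for the real types touched by this PR.
#[derive(Clone, Debug, PartialEq)]
enum NetworkMessage {
    Tx(String), // the real variant carries a dashcore::Transaction
}

#[derive(Debug, PartialEq)]
enum SpvError {
    NotSynced,
}

struct NetworkManager {
    synced: bool,
    broadcasts: Vec<NetworkMessage>,               // messages sent to peers
    dispatched: Vec<(SocketAddr, NetworkMessage)>, // messages fed into the local pipeline
}

impl NetworkManager {
    fn broadcast(&mut self, msg: NetworkMessage) {
        self.broadcasts.push(msg);
    }

    // Inject a message into the local pipeline, tagged with the sentinel peer
    // address so downstream handlers can recognize a self-originated message.
    fn dispatch_local(&mut self, msg: NetworkMessage) {
        let sentinel: SocketAddr = "0.0.0.0:0".parse().unwrap();
        self.dispatched.push((sentinel, msg));
    }
}

// Mirrors broadcast_transaction(): guard on sync state, send to peers, inject locally.
fn broadcast_transaction(net: &mut NetworkManager, tx: &str) -> Result<(), SpvError> {
    if !net.synced {
        return Err(SpvError::NotSynced);
    }
    let message = NetworkMessage::Tx(tx.to_string());
    net.broadcast(message.clone());
    net.dispatch_local(message);
    Ok(())
}
```

The key design point the diagram captures: the locally injected copy travels the same dispatch path as a peer message, so `MempoolManager::handle_tx()` needs no separate entry point for self-originated transactions.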
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
dash-spv/src/test_utils/network.rs (1)
170-171: ⚠️ Potential issue | 🟡 Minor — Implement `MockNetworkManager::broadcast()` instead of panicking.

`DashSpvClient::broadcast_transaction()` now always goes through `NetworkManager::broadcast()`, so line 171 turns any focused client test that uses the mock into a panic. Returning `Ok(())` / `NetworkError::NotConnected` and recording the message is enough to keep the mock usable for unit coverage; putting `sent_messages` behind interior mutability would make that straightforward.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/test_utils/network.rs` around lines 170 - 171, Currently MockNetworkManager::broadcast panics; implement it to record the incoming NetworkMessage into the mock's sent_messages (use interior mutability like Mutex/RefCell/RefCell<Vec<NetworkMessage>> depending on thread needs) and return a NetworkResult: return Ok(()) when simulating success, or return Err(NetworkError::NotConnected) if you want to simulate disconnected behavior (make this configurable via a flag on MockNetworkManager such as connected: bool). Update the broadcast signature async fn broadcast(&self, message: NetworkMessage) -> NetworkResult<()> to push the message into sent_messages and return the appropriate result instead of panicking.
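A minimal sketch of the suggested fix, assuming a simplified non-async mock (the real trait method is `async fn broadcast(&self, message: NetworkMessage) -> NetworkResult<()>`; `NetworkMessage` and `NetworkError` below are stand-ins, and the `connected` flag is the configurable behavior the comment proposes):

```rust
use std::sync::Mutex;

#[derive(Clone, Debug, PartialEq)]
struct NetworkMessage(String); // stand-in for the real message enum

#[derive(Debug, PartialEq)]
enum NetworkError {
    NotConnected,
}

struct MockNetworkManager {
    connected: bool,                           // lets tests simulate disconnection
    sent_messages: Mutex<Vec<NetworkMessage>>, // interior mutability: broadcast takes &self
}

impl MockNetworkManager {
    fn new(connected: bool) -> Self {
        Self { connected, sent_messages: Mutex::new(Vec::new()) }
    }

    // Record the message and return a result instead of panicking.
    fn broadcast(&self, message: NetworkMessage) -> Result<(), NetworkError> {
        if !self.connected {
            return Err(NetworkError::NotConnected);
        }
        self.sent_messages.lock().unwrap().push(message);
        Ok(())
    }
}
```

`Mutex` rather than `RefCell` keeps the mock usable from multi-threaded async tests; a test can then inspect `sent_messages` to assert what the client tried to broadcast.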
🧹 Nitpick comments (4)
dash-spv/src/network/manager.rs (1)
1332-1335: Centralize the local-sentinel address.
`dispatch_local()` open-codes `0.0.0.0:0` here, and `dash-spv/src/test_utils/network.rs:174-178` duplicates the same literal while `dash-spv/src/sync/mempool/manager.rs` interprets it specially. That hidden contract is easy to break. Please move it behind a shared constructor/helper such as `Message::local(...)` so every implementation uses the same marker. As per coding guidelines, `**/*.rs`: Never hardcode network parameters, addresses, or keys in the codebase.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/network/manager.rs` around lines 1332-1335: The code hardcodes the local sentinel address (0.0.0.0:0) in dispatch_local and tests; introduce a shared constructor on Message (e.g., Message::local(...)) or a helper function that returns the sentinel Message so all call sites use the same semantic marker. Replace the current Message::new(local_addr, message) call in dispatch_local (and the duplicate literal in dash-spv/src/test_utils/network.rs and any special-case checks in dash-spv/src/sync/mempool/manager.rs) with Message::local(message) so the sentinel address is centralized and never hardcoded across the codebase.

dash-spv/tests/dashd_sync/tests_mempool.rs (1)
516-585: Add a companion test for the new `NotSynced` guard.

This validates the happy path, but the other behavioral change in `DashSpvClient::broadcast_transaction()` is rejecting broadcasts before sync completes. Please add a small case that calls `broadcast_transaction()` before `wait_for_sync_both()` and asserts `SpvError::Sync(SyncError::NotSynced)`; otherwise that branch can regress unnoticed. As per coding guidelines, `**/*.rs`: Write unit tests for new functionality, and `**/tests/**/*.rs`: Write integration tests for network operations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/tests/dashd_sync/tests_mempool.rs` around lines 516-585: Add an integration test that verifies DashSpvClient::broadcast_transaction rejects attempts made before the client is synced: create a new async test (similar to test_broadcast_transaction_local_detection) that obtains a TestContext::new, spawns a client via ctx.spawn_client(MempoolStrategy::FetchAll) into (mut fa, _), do NOT call wait_for_sync_both, construct or reuse a small signed transaction (or create one via ctx.dashd.node.create_signed_transaction as in the existing test), call fa.client.broadcast_transaction(&signed_tx).await and assert it returns Err(SpvError::Sync(SyncError::NotSynced)); ensure you reference spawn_client and broadcast_transaction so the test covers the NotSynced branch and fails if the behavior regresses.

dash-spv/src/test_utils/node.rs (1)
375-405: Extract the shared tx-construction/signing flow.

This method is now a near copy of `DashCoreNode::send_raw_from_wallet()` up to the final RPC call. A private helper returning the signed `dashcore::Transaction` would keep both paths aligned the next time inputs, outputs, or signing options change.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/test_utils/node.rs` around lines 375-405: Refactor the duplicated tx construction/signing logic in create_signed_transaction and DashCoreNode::send_raw_from_wallet by extracting a private helper (e.g., build_and_sign_transaction or sign_wallet_transaction) that takes the common inputs (wallet_name or RpcClient, inputs Vec<CreateRawTransactionInput>, outputs HashMap<String, Amount>, and optional locktime/replace flags) and returns a signed dashcore::Transaction; replace the duplicated code in create_signed_transaction and send_raw_from_wallet to call this helper, reuse rpc_client_for_wallet and sign_raw_transaction_with_wallet inside it, and keep the same error handling/assertions (expect and assert! on signed.complete) so both call sites remain behaviorally identical.
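The extraction this comment asks for could look like the sketch below, with the wallet RPC surface reduced to a trait so the example stays self-contained. `WalletRpc`, `FakeRpc`, `build_and_sign`, and the string payloads are all illustrative assumptions; only `create_signed_transaction` and `send_raw_from_wallet` come from the PR:

```rust
// Hypothetical stand-in for the wallet RPC surface used by both call sites.
trait WalletRpc {
    fn create_raw(&self, inputs: &[&str], outputs: &[(&str, u64)]) -> String;
    fn sign_raw(&self, raw: &str) -> (String, bool); // (hex, complete)
}

struct Node<R: WalletRpc> {
    rpc: R,
}

impl<R: WalletRpc> Node<R> {
    // Shared helper: build and sign, panicking on incomplete signatures,
    // mirroring the assert! on signed.complete in both original methods.
    fn build_and_sign(&self, inputs: &[&str], outputs: &[(&str, u64)]) -> String {
        let raw = self.rpc.create_raw(inputs, outputs);
        let (hex, complete) = self.rpc.sign_raw(&raw);
        assert!(complete, "wallet could not fully sign the transaction");
        hex
    }

    // create_signed_transaction() returns the signed tx without broadcasting...
    fn create_signed_transaction(&self, inputs: &[&str], outputs: &[(&str, u64)]) -> String {
        self.build_and_sign(inputs, outputs)
    }

    // ...while send_raw_from_wallet() would follow with the final send RPC.
    fn send_raw_from_wallet(&self, inputs: &[&str], outputs: &[(&str, u64)]) -> String {
        let hex = self.build_and_sign(inputs, outputs);
        // self.rpc.send_raw(&hex) in the real node; elided here.
        hex
    }
}

// Hypothetical in-memory RPC for demonstration.
struct FakeRpc;
impl WalletRpc for FakeRpc {
    fn create_raw(&self, _inputs: &[&str], _outputs: &[(&str, u64)]) -> String {
        "raw".to_string()
    }
    fn sign_raw(&self, raw: &str) -> (String, bool) {
        (format!("signed:{raw}"), true)
    }
}
```

Both public methods now diverge only after signing, which is the point of the refactor: a future change to inputs, outputs, or signing options lands in one place.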
300-301: Centralize the local-dispatch sentinel.

Lines 300-301 define an exact sentinel, but Line 349 matches any unspecified IP and Line 895 hardcodes the test literal again. A shared helper/constant would keep the contract exact and avoid production/test drift.
As per coding guidelines, "Never hardcode network parameters, addresses, or keys in the codebase".
Also applies to: 349-350, 895-896
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/sync/mempool/manager.rs` around lines 300 - 301, Define a single shared sentinel constant (e.g., LOCAL_DISPATCH_SENTINEL or LOCAL_SENTINEL) for the local-dispatch address instead of hardcoding "0.0.0.0:0" in multiple places; replace the literal used in the comment/logic that treats peer as self-originated (the code that records into recent_sends), the match at the place referenced around the current Line 349 match, and the test literal around Line 895 to reference this new constant (and update tests to import/use it) so all equality checks and pattern matches use the same canonical symbol and avoid duplicated hardcoded network parameters.
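A sketch of the shared constant using only std types (the constant and helper names are assumptions suggested by the review, not the crate's actual API):

```rust
use std::net::{IpAddr, Ipv4Addr, SocketAddr};

/// Canonical marker for messages injected via dispatch_local() rather than
/// received from a real peer. (Hypothetical name from the review suggestion.)
const LOCAL_DISPATCH_SENTINEL: SocketAddr =
    SocketAddr::new(IpAddr::V4(Ipv4Addr::UNSPECIFIED), 0);

/// Single place that decides whether a peer address means "self-originated".
/// An exact match avoids the drift the review flags, where one site compared
/// the full sentinel and another matched any unspecified IP.
fn is_local_dispatch(peer: SocketAddr) -> bool {
    peer == LOCAL_DISPATCH_SENTINEL
}
```

Production code, the mock network manager, and the mempool tests would all import these two items instead of repeating the `0.0.0.0:0` literal.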
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@dash-spv/src/sync/mempool/manager.rs`:
- Around line 310-314: The early-return path that checks
self.mempool_state.read().await.transactions.contains_key(&txid) currently skips
updating the local-send bookkeeping (e.g., recent_sends via record_send(txid)),
so local broadcasts that arrive after a peer-seen insert are not recorded;
change the branch to still call the send-recording code (call
self.record_send(txid) or otherwise update recent_sends) before returning, and
apply the same fix to the other analogous spot that checks contains_key around
the lines handling seen_txids (the second occurrence at the other branch).
Ensure you still insert into self.seen_txids and then call record_send(txid) (or
update recent_sends) prior to returning Ok(vec![]).
📒 Files selected for processing (9)
- dash-spv/src/client/transactions.rs
- dash-spv/src/error.rs
- dash-spv/src/network/manager.rs
- dash-spv/src/network/mod.rs
- dash-spv/src/sync/mempool/manager.rs
- dash-spv/src/sync/mempool/sync_manager.rs
- dash-spv/src/test_utils/network.rs
- dash-spv/src/test_utils/node.rs
- dash-spv/tests/dashd_sync/tests_mempool.rs
0330c21 to b416446 (Compare)
Codecov Report
❌ Patch coverage is

Additional details and impacted files

```
@@             Coverage Diff             @@
##           v0.42-dev    #626     +/-  ##
=============================================
+ Coverage      67.49%   67.73%   +0.23%
=============================================
  Files            318      318
  Lines          67932    67914      -18
=============================================
+ Hits           45851    46001     +150
+ Misses         22081    21913     -168
```
When the SPV client broadcasts a transaction, connected peers never relay it back because Dash Core marks the sender as already having the transaction. This meant broadcast transactions were invisible in the client's mempool state until a restart triggered a full mempool re-fetch.

Add `dispatch_local()` to `NetworkManager` to inject messages into the local message pipeline. `broadcast_transaction()` now checks sync state, broadcasts to peers, then injects the tx locally so `MempoolManager::handle_tx()` processes it through the existing path.

The sentinel peer address (`0.0.0.0:0`) signals a self-originated transaction, causing `handle_tx` to call `record_send` automatically. A dedup guard skips transactions already tracked in `mempool_state`.

Also adds `SyncError::NotSynced` for rejecting broadcasts before sync completes, and a `create_signed_transaction` test helper for building transactions without broadcasting via `dashd`.
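The `handle_tx()` dedup-and-record behavior described above can be sketched like this. Field and method names follow the description (`mempool_state` transactions, `recent_sends`, `record_send`); the real `MempoolManager` is async and far richer, so treat every body below as an assumption:

```rust
use std::collections::{HashMap, HashSet};
use std::net::SocketAddr;

#[derive(Debug, PartialEq)]
enum MempoolEvent {
    TransactionReceived(String),
}

#[derive(Default)]
struct MempoolManager {
    transactions: HashMap<String, ()>, // txid -> tracked entry (payload elided)
    recent_sends: HashSet<String>,     // txids this client broadcast itself
}

impl MempoolManager {
    fn handle_tx(&mut self, txid: &str, peer: SocketAddr) -> Vec<MempoolEvent> {
        // The sentinel peer (0.0.0.0:0) marks a self-originated broadcast.
        let is_local = peer.ip().is_unspecified() && peer.port() == 0;
        if is_local {
            // record_send: note the local send even if the tx is already tracked,
            // so a peer-seen insert cannot shadow the local-send bookkeeping.
            self.recent_sends.insert(txid.to_string());
        }
        // Dedup guard: skip transactions already tracked in mempool_state.
        if self.transactions.contains_key(txid) {
            return vec![];
        }
        self.transactions.insert(txid.to_string(), ());
        vec![MempoolEvent::TransactionReceived(txid.to_string())]
    }
}
```

Recording the send before the dedup early-return matches the fix the review's inline comment asks for: a local broadcast that arrives after the same tx was seen from a peer still updates `recent_sends`.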
b416446 to b707e28 (Compare)
🧹 Nitpick comments (1)
dash-spv/src/client/transactions.rs (1)
29-35: Reuse the message to avoid redundant clones.
`NetworkMessage` derives `Clone`, so you can clone the message once instead of cloning the transaction twice:

♻️ Suggested optimization

```diff
 let message = NetworkMessage::Tx(tx.clone());
-network_guard.broadcast(message).await?;
+network_guard.broadcast(message.clone()).await?;
 // Inject locally so the mempool manager picks it up through handle_tx.
-network_guard.dispatch_local(NetworkMessage::Tx(tx.clone())).await;
+network_guard.dispatch_local(message).await;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/client/transactions.rs` around lines 29 - 35, The code creates NetworkMessage::Tx twice by cloning tx twice; instead, create the message once (let message = NetworkMessage::Tx(tx.clone())) and pass that same message to both network_guard.broadcast(message.clone()).await? and network_guard.dispatch_local(message).await (or vice versa) to avoid redundant tx clones; update calls to use the single `message` variable and clone the message only where necessary (referencing NetworkMessage::Tx, `message`, `network_guard.broadcast`, and `network_guard.dispatch_local`).
✅ Files skipped from review due to trivial changes (1)
- dash-spv/src/network/manager.rs
🚧 Files skipped from review as they are similar to previous changes (5)
- dash-spv/src/error.rs
- dash-spv/src/sync/mempool/sync_manager.rs
- dash-spv/src/test_utils/network.rs
- dash-spv/src/test_utils/node.rs
- dash-spv/tests/dashd_sync/tests_mempool.rs
```rust
///
/// Returns the signed transaction for use with `broadcast_transaction()`.
pub fn create_signed_transaction(
    &self,
```
Don't we already have something that creates signed transactions?? Or the logic was written in the FFI method that exposed that functionality?? If so we should extract it or maybe I was working on that in one of my draft PRs but needed the manager in key wallet to implement my vision, idk, just making a note, this logic is for sure duplicated and key-wallet should have a method for it but I am not sure if it is only in the FFI or not
Maybe, I'm not sure, but this here is an integration test and it's using the dashd node, so it's not depending on key-wallet.
Summary by CodeRabbit
New Features
Bug Fixes
Tests