refactor(consensus): Delete Engine Task Queue #2538
Conversation
```rust
Self::QueueLengthReceiver(subscription) => {
    let (_, queue_length_recv) = tokio::sync::watch::channel(0);
    subscription
        .send(queue_length_recv)
        .map_err(|_| EngineQueriesError::OutputChannelClosed)
}
```
The sender (_) from watch::channel(0) is immediately dropped. When the dev RPC subscriber calls wait_for on this receiver (in dev.rs:80), it will get Err(RecvError) immediately because the sender is gone, causing the subscription loop to exit and log a "Subscription to engine queue size has been closed" warning on every subscription attempt.
If this is intentional stub behavior for a deprecated API, consider either:
- Documenting why the sender is intentionally dropped (e.g., a comment explaining the legacy stub), or
- Keeping the sender alive so the subscription hangs (returning `0` forever) rather than erroring out, which might be friendlier to existing callers expecting a live subscription.
Move sequencer build and get-payload requests to direct Engine methods. Delete the GetPayloadTask wrapper and remove queued Build/GetPayload task variants while keeping insert and consolidation on the existing queue. Co-authored-by: Codex <noreply@openai.com>
Forward direct build failures to callers, avoid inline temporary-error retry loops, and harden direct get-payload metrics and payload-version handling. Co-authored-by: Codex <noreply@openai.com>
Merge identical temporary-severity match arms in SealTaskError. Co-authored-by: Codex <noreply@openai.com>
Avoid unchanged direct payload state broadcasts and route direct get-payload failures through engine severity handling. Co-authored-by: Codex <noreply@openai.com>
Force-pushed from b24924e to e9568bc
Move unsafe payload insertion into direct Engine methods and delete the InsertTask wrapper. Keep consolidation, delegated forkchoice, finalization, and seal on the existing queue. Co-authored-by: Codex <noreply@openai.com>
Apply nightly rustfmt import grouping and collapse the insert acknowledgement send check for clippy. Co-authored-by: Codex <noreply@openai.com>
Propagate local no-ack insert errors through the existing engine task severity handler. Avoid redundant V3/V4 insert payload clones and clarify the newPayload duration log field. Co-authored-by: Codex <noreply@openai.com>
Reshape the unsafe-payload processor rstest table around a case struct so the generated test does not trip clippy's argument limit. Correct the service README's description of when the local insert acknowledgement is sent versus when the sequencer client waits on the unsafe-head watch channel. Co-authored-by: Codex <noreply@openai.com>
Move safe-head consolidation onto direct Engine methods and route ProcessSafeL2SignalRequest through them. Delete the ConsolidateTask wrapper and remove unused queued seal/consolidate variants so the remaining task queue only covers delegated forkchoice and finalize. Co-authored-by: Codex <noreply@openai.com>
Move delegated forkchoice and finalized-head updates onto direct Engine methods. Route the engine processor through those methods and remove the old task wrappers and queue variants. Merge duplicated finalize test match arms, preserve stale finalized-head no-op behavior in the direct path, and keep derived docs/manifest updates from the rebased base branch. Co-authored-by: Codex <noreply@openai.com>
Force-pushed from 0519c51 to 134b7c4
Delete the remaining engine priority queue, task enum, trait dispatch, enqueue, drain, and thin delegated/finalize wrappers. Make Engine a non-generic state owner with generic direct methods, and keep only a small severity helper plus processor-local operation error mapping for reset/flush/critical handling. Co-authored-by: Codex <noreply@openai.com>
Force-pushed from 2dadbb3 to 6d83e9f
```rust
.await;

self.state_sender.send_replace(self.state);
Metrics::engine_task_count(Metrics::INSERT_TASK_LABEL).increment(1);
```
engine_task_count is incremented unconditionally here (on both success and failure), but the metric is described as "Engine operations successfully executed" and retry_with_severity only increments it on success (line 294). This will inflate the success counter for failed local inserts.
```diff
-Metrics::engine_task_count(Metrics::INSERT_TASK_LABEL).increment(1);
+if result.is_ok() {
+    Metrics::engine_task_count(Metrics::INSERT_TASK_LABEL).increment(1);
+}
```
Move repeated EngineClient bounds into the existing where clauses so clippy no longer reports multiple bound locations. Co-authored-by: Codex <noreply@openai.com>
Remove the remaining SealTask wrapper and fold started-payload sealing into direct Engine methods. Rename the private task_queue module to operations and update stale queue-era docs and metrics. Co-authored-by: Codex <noreply@openai.com>
Review Summary

Clean refactor that removes the engine task queue abstraction in favor of direct operations on `Engine`.

Issues (2 findings, both covered by existing inline comments)
No new findings beyond the existing inline comments.
Force-pushed from 5794d4b to 4038c2c
Summary
Delete the remaining engine priority queue, task enum, trait dispatch, enqueue, drain, and delegated/finalize wrapper structs. The engine processor now publishes direct-operation state changes without a pre-drain step, and Engine is a non-generic state owner with generic methods for the client-specific calls. The legacy dev queue-length RPC now reports zero without carrying queue state through the engine actor.