fix: preserve per-session ordering in BufferedMessageHandler#166

Closed
Andy963 wants to merge 1 commit into bubbuild:main from Andy963:fix/buffered-handler-serialization

Conversation

Contributor

@Andy963 Andy963 commented Apr 13, 2026

Summary

This PR fixes per-session ordering in BufferedMessageHandler.

The previous implementation could run _handler concurrently for the same session because _in_processing was cleared before await self._handler(...) completed. It also mixed command handling and buffered message timing in a way that could break expected per-session ordering semantics.

This change replaces the shared timer/event batching flow with a head-driven serialized work queue:

  • command messages are queued as standalone items
  • consecutive non-command messages are grouped into batch items
  • the worker always processes the head of the queue
  • the downstream handler is never executed concurrently for the same session

This keeps strict arrival-order serialization while still allowing non-command messages to be batched.

What changed

  • refactored BufferedMessageHandler to use a per-session queued work model
  • removed the old shared _pending_messages / _event / _timer processing flow
  • preserved batching for consecutive non-command messages
  • ensured commands do not merge into surrounding batches
  • added regression coverage for ordering, batching, serialization, and error recovery
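One way to regression-test the serialization guarantee is an instrumented handler that records how many calls are in flight at once. This is an illustrative sketch of the idea, not the PR's actual test code; `NonOverlapProbe` is an assumed name:

```python
import asyncio

class NonOverlapProbe:
    """Records the maximum number of concurrent in-flight calls."""

    def __init__(self) -> None:
        self.active = 0
        self.max_active = 0

    async def __call__(self, batch: list) -> None:
        self.active += 1
        self.max_active = max(self.max_active, self.active)
        await asyncio.sleep(0)  # yield, so any overlap becomes observable
        self.active -= 1

async def demo() -> tuple[int, int]:
    # Buggy shape: calls for one session dispatched concurrently,
    # as could happen when _in_processing was cleared too early.
    buggy = NonOverlapProbe()
    await asyncio.gather(buggy(["a"]), buggy(["b"]))
    # Fixed shape: each call is awaited before the next starts.
    fixed = NonOverlapProbe()
    await fixed(["a"])
    await fixed(["b"])
    return buggy.max_active, fixed.max_active
```

Running `demo()` shows the concurrent dispatch overlapping (`max_active == 2`) while the serialized dispatch never does (`max_active == 1`).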

Tests

  • uv run ruff check .
  • uv run mypy src
  • uv run pytest -q tests/test_channels.py
  • uv run pytest -q
  • uv run prek run --all-files

Fixes #165

Collaborator

frostming commented Apr 13, 2026

Thank you for this vibe effort, but this PR requires a significant amount of review effort from me, and it feels like it's not quite right.

To ensure a single worker, a queue with a single consumer is sufficient; that architecture is even simpler than the original, but this change has made things significantly more complex.

Furthermore, for comma commands, jump-the-queue processing is reasonable, and maintaining order is not a requirement. In addition, executing _handler() actually just inserts the message into the queue; it does not wait for the model to finish running as you might imagine, so it should be a very lightweight operation. The messages in the queue will eventually be processed in sequence, but which one is responded to first still depends on the time cost on the model side. The tape is only updated when each model run finishes. With that said, no matter how you guarantee the processing order, A->B->C processing may still result in B->C->A in the tape.
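The single-consumer shape described above could look roughly like this. A minimal sketch only, assuming one asyncio.Queue per session; the `consume` function and the `None` shutdown sentinel are illustrative conventions, not the project's actual code:

```python
import asyncio

async def consume(queue: asyncio.Queue, handler) -> None:
    # One consumer per session: messages are handled strictly in
    # arrival order, with no extra bookkeeping for serialization.
    while True:
        msg = await queue.get()
        if msg is None:  # sentinel: shut the consumer down
            break
        await handler(msg)

async def run_session(messages: list) -> list:
    seen: list = []

    async def handler(msg) -> None:
        seen.append(msg)

    queue: asyncio.Queue = asyncio.Queue()
    worker = asyncio.create_task(consume(queue, handler))
    for msg in [*messages, None]:
        await queue.put(msg)
    await worker
    return seen
```

Because only one task ever calls `handler`, serialization falls out of the queue itself rather than from explicit state flags.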

@Andy963 Andy963 closed this Apr 14, 2026