Streaming with withAIBatch accumulates operations and may slow undo #4900

@hhhjin

Description

Problem

streamInsertChunk is currently wrapped with withAIBatch during AI streaming.

This appears to accumulate a large number of editor operations in history while the response streams. Even though those operations are merged into a single batch, undo after long AI generations becomes progressively slower.
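A rough illustration of the accumulation, assuming a Slate-style operation list (the helper below is hypothetical and not the actual Plate internals): each streamed chunk appends its own insert_text operation, so the batch grows linearly with the chunk count.

```typescript
// Hypothetical model of per-chunk operations during streaming.
// Not Plate's real implementation; just shows the growth pattern.
type InsertTextOp = {
  type: "insert_text";
  path: number[];
  offset: number;
  text: string;
};

function simulateStream(chunks: string[]): InsertTextOp[] {
  const ops: InsertTextOp[] = [];
  let offset = 0;
  for (const text of chunks) {
    // One operation per chunk, all targeting the same text node.
    ops.push({ type: "insert_text", path: [0, 0], offset, text });
    offset += text.length;
  }
  return ops;
}

// 1000 chunks -> 1000 operations in a single batch; undo must invert each one.
const ops = simulateStream(Array.from({ length: 1000 }, () => "tok "));
console.log(ops.length); // 1000
```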

Expected

Undo should remain responsive even after long AI streaming sessions.

Suspected Cause

withAIBatch merges streaming changes into a single history batch, but it does not reduce the number of underlying operations stored in that batch.

As a result, long streaming responses may still produce a very large undo entry.
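One possible direction, sketched here with hypothetical names (compactInsertTextOps is not an existing Plate API): after streaming finishes, fold contiguous insert_text operations at the same path into a single operation before the batch is committed, so undo inverts one operation instead of thousands.

```typescript
type InsertTextOp = {
  type: "insert_text";
  path: number[];
  offset: number;
  text: string;
};

// Hypothetical compaction pass: merge consecutive insert_text ops that
// continue at the same path and offset into one op, shrinking the undo entry.
function compactInsertTextOps(ops: InsertTextOp[]): InsertTextOp[] {
  const out: InsertTextOp[] = [];
  for (const op of ops) {
    const last = out[out.length - 1];
    if (
      last &&
      last.path.join(".") === op.path.join(".") &&
      last.offset + last.text.length === op.offset
    ) {
      last.text += op.text; // contiguous insert: fold into the previous op
    } else {
      out.push({ ...op });
    }
  }
  return out;
}
```

Under these assumptions, a stream of 1000 contiguous chunk inserts into one node would compact to a single operation, which is what the "compact the final history entry" option in the question below would amount to.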

Question

Should AI streaming avoid recording every intermediate operation in history, or otherwise compact the final history entry before accept/undo?

Reproduction URL

No response

Reproduction steps

1. Trigger AI insert mode
2. Stream a long multi-block response
3. Wait for streaming to finish
4. Run accept or tf.ai.undo()
5. Notice the slowdown grows with response length / chunk count

Plate version

52.3.4

Slate React version

0.123.0

Screenshots

Logs

Browsers

No response
