Support for large RPC messages using compression + data streams #1832
Force-pushed from 6128790 to bc5f66d
```typescript
constructor(
  engine: RTCEngine,
  log: StructuredLogger,
  outgoingDataStreamManager: OutgoingDataStreamManager,
  getRemoteParticipantClientProtocol: (identity: Participant['identity']) => number,
) {
  this.engine = engine;
  this.log = log;
  this.outgoingDataStreamManager = outgoingDataStreamManager;
  this.getRemoteParticipantClientProtocol = getRemoteParticipantClientProtocol;
}

setupEngine(engine: RTCEngine) {
  this.engine = engine;

  this.engine.on(EngineEvent.DataPacketReceived, this.handleDataPacket);
}
```
Note to self: in attempting to update the tests for this, I realized that tightly coupling both RpcClientManager / RpcServerManager to RTCEngine is a bad idea. It should work more like the data tracks managers work where there are incoming and outgoing events, and the glue to wire those events up to the engine happens at the room level.
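The decoupled shape suggested in the comment above could look roughly like this sketch. Only `RpcServerManager` and the engine wiring come from the thread; the event names and packet type here are illustrative assumptions, not the PR's actual API.

```typescript
import { EventEmitter } from 'node:events';

// Assumed packet shape, for illustration only.
type DataPacket = { kind: 'rpcRequest' | 'rpcResponse'; payload: Uint8Array };

class RpcServerManager extends EventEmitter {
  // No direct RTCEngine reference: the manager only consumes and emits events.
  handleIncomingPacket = (packet: DataPacket) => {
    if (packet.kind === 'rpcRequest') {
      // ... decode the request and invoke the registered handler ...
    }
  };

  sendResponse(payload: Uint8Array) {
    // Emit an outgoing event instead of calling the engine directly.
    this.emit('outgoingPacket', { kind: 'rpcResponse', payload });
  }
}

// The glue lives at the room level, so the manager stays engine-agnostic.
function wireUp(engine: EventEmitter, rpcServer: RpcServerManager) {
  engine.on('dataPacketReceived', rpcServer.handleIncomingPacket);
  rpcServer.on('outgoingPacket', (p: DataPacket) => engine.emit('sendDataPacket', p));
}
```

This mirrors how the data stream managers work: incoming and outgoing events on the manager, with the engine wiring done once at the room level, which also makes the managers testable without a real `RTCEngine`.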
This is so that this client protocol value can be used to know what version of RPC a remote client supports.
…rotocol "0" for now though
…ed in memory at once
…kas did in the original web example: register a data stream with an attribute, and listen for data streams with that attribute on the other end.
…r rpc calls that take a long time. If an RPC call takes a long time and a participant disconnects halfway through, just drop the return value.
After benchmarking, this proved to be not very useful in practice; compression performs basically the same.
… just for legacy cases now
…ne and back into the server manager
Force-pushed from fa9fde9 to b6c177e
(rebased on top of latest)
```typescript
/**
 * Compress a string payload using gzip.
 * @internal
 */
export async function gzipCompress(data: string): Promise<Uint8Array> {
```
For both compress helpers we need to handle unsupported browsers: https://caniuse.com/mdn-api_compressionstream. For regular payloads this should probably fall back to 'regular' in these cases.
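One way to handle that could be the following sketch: feature-detect `CompressionStream` and return a sentinel so the caller can fall back to the uncompressed ('regular') path. The function names mirror the PR's `gzipCompress`, but the fallback shape here is an assumption, not the PR's actual implementation.

```typescript
// Assumed helper: true when the runtime exposes the CompressionStream API
// (see https://caniuse.com/mdn-api_compressionstream for browser support).
export function supportsCompressionStream(): boolean {
  return typeof CompressionStream !== 'undefined';
}

/**
 * Compress a string payload using gzip, or return null when the
 * CompressionStream API is unavailable so the caller can send 'regular'.
 */
export async function gzipCompressWithFallback(data: string): Promise<Uint8Array | null> {
  if (!supportsCompressionStream()) {
    return null; // caller should use the uncompressed ('regular') payload path
  }
  const compressed = new Blob([data]).stream().pipeThrough(new CompressionStream('gzip'));
  return new Uint8Array(await new Response(compressed).arrayBuffer());
}
```

Returning `null` (rather than throwing) keeps the unsupported-browser case on the normal control path, so the send site can branch to the uncompressed encoding in one place.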
Currently this prototype does the below:

- Falls back to the legacy `RPCRequest` path when the advertised `clientProtocol` is less than 1.
- Otherwise sends the compressed payload in the `RPCRequest` inline (`compressedPayload`).

Also undertakes some significant refactoring to move RPC logic out into a `RpcClientManager` / `RpcServerManager`, which given the increase in complexity makes the code a little less fragmented / easier to reason about.

Todo

- `examples/rpc-benchmark`: this is what is leading to the large diff size. Before potentially merging this, think about whether it makes sense to keep this checked in or not. And if so, go through and update some of the docs to be a little more modern / correct (in particular, it still refers to the "legacy path < 1kb" stuff).
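The protocol-version dispatch the description outlines could be sketched like this. Only `clientProtocol` and the "less than 1 means legacy `RPCRequest` path" rule come from the PR text; `selectRpcPath` and the send-function shape are hypothetical names for illustration.

```typescript
// Assumed shape of a payload-sending function.
type SendFn = (payload: string) => Promise<void>;

// Pick the legacy path when the remote participant advertises
// clientProtocol < 1; otherwise use the inline compressedPayload path.
function selectRpcPath(
  remoteClientProtocol: number,
  legacy: SendFn,
  inlineCompressed: SendFn,
): SendFn {
  // Remotes that predate protocol 1 cannot decode compressedPayload.
  return remoteClientProtocol < 1 ? legacy : inlineCompressed;
}
```

Centralizing the branch like this keeps the version check in one place, which matters once the advertised protocol value starts gating more than one feature.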