/** Publish the track to the SFU. This must be done before calling {@link tryPush} for the first time. */
async publish(signal?: AbortSignal) {
  try {
I guess this should error early (or warn and return) if isPublished is true?
There is already an error that gets thrown within publishRequest if the track is already published, but I could probably make the error message a little nicer (right now it is one of those "panic" bare errors).
IMO an explicit throw makes sense in this case, because you really should not be publishing / unpublishing data tracks without some care. But looking at the existing tracks, it looks like they just warn and nothing more. So maybe it makes sense to convert this throw into a warning; I am not sure.
EDIT: more thoughts on this here!
From sync: giving some more thought to this, I would say publish should throw whereas unpublish should not. For publish, this is something the application might want to handle and respond to.
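To make the "publish should throw" side of this concrete, here is a minimal sketch of the behavior under discussion. All names here (`DataTrackPublishSketch`, `AlreadyPublishedError`) are illustrative stand-ins, not the actual SDK API:

```typescript
// Hypothetical sketch: publish() throws on a double publish with a
// descriptive error (instead of a bare "panic" error), so the
// application can catch and explicitly handle it.
class AlreadyPublishedError extends Error {
  constructor() {
    super('data track is already published; call unpublish() first');
  }
}

class DataTrackPublishSketch {
  private isPublished = false;

  async publish(): Promise<void> {
    if (this.isPublished) {
      // Error early rather than sending a request the SFU will reject.
      throw new AlreadyPublishedError();
    }
    this.isPublished = true; // stand-in for the real SFU publish request
  }
}

// Usage: the application can respond to the failure case explicitly.
async function demo(): Promise<string> {
  const track = new DataTrackPublishSketch();
  await track.publish();
  try {
    await track.publish();
    return 'no error';
  } catch (e) {
    return (e as Error).message.includes('already published') ? 'caught' : 'unexpected';
  }
}
```

The design point being made: a thrown error is something callers can `catch` and react to, whereas a logged warning is invisible to application code.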
 * */
async unpublish() {
  if (!this.handle) {
    throw DataTrackPushFrameError.trackUnpublished();
suggestion: might be enough to log a warning here and return early?
Yeah, I am not sure; see my previous comment about throw vs. warning. I like the hard error in that it can be caught and explicitly handled, since this is checking an invariant. The existing track publication logic for audio / video tracks logs a warning, though, so maybe it makes sense to update this to be consistent.
I guess one downside to the hard error is it opens this code up to a TOC-TOU issue where something like the below could happen:
if (track.isPublished) {
  // Imagine here the track is unpublished at the SFU level for some unrelated reason ...
  await track.unpublish(); // ... so this would then throw an error
}

I am definitely curious what others think here!
You make a good point; a data track unpublish can be SFU initiated (e.g., if publish data permission is revoked). Given that, I think it makes sense to treat this as a first-class error.
hm, from @1egoman's description I'd be more inclined to treat it similarly to how other unpublish requests are handled, which is warning only
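For comparison, here is a minimal sketch of the warn-and-return-early alternative being suggested. The class name, `handle` field, and return value are assumptions for illustration, not the real SDK internals:

```typescript
// Hypothetical sketch: unpublish() warns and returns instead of throwing,
// which sidesteps the TOC-TOU race described above (the SFU may have
// already unpublished the track, e.g. if publish-data permission was revoked).
class DataTrackUnpublishSketch {
  private handle: object | undefined = { id: 1 };

  async unpublish(): Promise<boolean> {
    if (!this.handle) {
      // Not fatal: the track may have been unpublished out from under us.
      console.warn('unpublish() called on a track that is not published');
      return false;
    }
    this.handle = undefined; // stand-in for the real unpublish request
    return true;
  }
}
```

With this shape, a double unpublish (or an SFU-initiated unpublish racing the application) is a no-op rather than an exception.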
Lukas has some reservations: #1820 (comment)
After doing this I was able to get it to work end to end!
An example of where this can be triggered is specifying an empty name when creating a new data track.
Temporarily replace the hexdump with a chart for a demo. I think this probably should be reverted though, the hexdump is more useful longer term.
…ks data channel Adds a "sendLossyBytes" method which hooks into the existing data channel logging / buffer status / etc infrastructure.
This allows multiple different versions of the app to run and all join the same room
…ions This is required so that a user can specify custom extension metadata like user timestamps. As part of this, DataTrackExtensions and friends are now part of the public api interface.
…g extensions" This reverts commit 66ba201.
…ternal interface a user passes into tryPush Doing it this way makes it much easier to specify a user timestamp value.
This is a pretty large thing to miss, yikes...
…dle the separate data tracks data channel Previously, channelKind was computed based on maxRetransmits === 0. But now there are multiple lossy data channels (the regular one and the data tracks one), so this heuristic no longer works.
…s are buffered when the dc is in low status This was encountered when implementing the e2e tests: a large data track frame can be split up into potentially n packets, and if one packet is dropped, the whole frame is considered dropped. So the behavior that is really wanted here is to buffer packets and send them once the data channel can accept more data, even though it is technically a lossy data channel.
…channels When operating in dual peer connection contexts, this is required.
… users that a failure is actually happening
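The channelKind change mentioned in the commits above (the maxRetransmits === 0 heuristic breaking once there are two lossy channels) can be sketched roughly like this. The `_data_tracks` label appears in this PR's discussion; the other labels, the `ChannelKind` type, and the function name are assumptions for illustration:

```typescript
// Hypothetical sketch: derive the channel kind from the channel label
// rather than from maxRetransmits. Both the regular lossy channel and the
// data tracks channel are unreliable (maxRetransmits === 0), so the old
// heuristic could no longer tell them apart.
type ChannelKind = 'reliable' | 'lossy' | 'dataTracks';

function channelKindFromLabel(label: string): ChannelKind {
  switch (label) {
    case '_reliable':
      return 'reliable';
    case '_lossy':
      return 'lossy';
    case '_data_tracks':
      return 'dataTracks';
    default:
      throw new Error(`unknown data channel label: ${label}`);
  }
}
```

Keying off an explicit label scales to any number of channels with identical reliability settings.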
lukasIO left a comment:
a couple of nits that would be nice to address.
More generally, and importantly: what happens currently when running this against older servers that don't have DataTrack support?
  return;
if (!this.isBufferStatusLow(kind)) {
  // Depending on the exact circumstance that data is being sent, either drop or wait for the
  // buffer status to not be low before continuing.
what's the intention behind this differentiation? Why not just drop them for lossy?
That's what I originally was doing but this didn't work for large data track payloads - here is some more context as to why from some notes I wrote up:
- If the _data_tracks rtc data channel buffer was full, the existing logic would drop packets (what the lossy data channel currently does).
- This doesn't work for data track packets, though: if the first n packets of a frame get enqueued but the last packet(s) get dropped because there isn't enough room in the rtc data channel buffer, the receiver only receives a partial set of packets, which will never fully decode into a data track frame.
- If packets aren't buffered like this, then it becomes impossible to send data track payloads > ~75k bytes (the e2e tests are checking this case).
- Solution (commit): buffer data track packets instead of dropping them if the rtc data channel buffer is full.
If there's some other way to handle this, I'm open to alternatives but what I did here was the most straightforward way I could think of.
Integrates the data track managers (both incoming and outgoing) into the existing sdk Room and RemoteParticipant interfaces. Once this is merged, data tracks is fully implemented!
Interface - Publishing
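A rough usage sketch of the publishing flow, assembled from the methods discussed in this PR (publish, tryPush, unpublish). The `LocalDataTrack` interface shape here, including tryPush's boolean return, is an assumption for illustration rather than the SDK's real signatures:

```typescript
// Hypothetical sketch of the publish -> push -> unpublish lifecycle.
interface LocalDataTrack {
  publish(signal?: AbortSignal): Promise<void>;
  tryPush(payload: Uint8Array): boolean;
  unpublish(): Promise<void>;
}

async function sendOneFrame(track: LocalDataTrack): Promise<void> {
  // publish() must complete before the first tryPush() call.
  await track.publish();
  const ok = track.tryPush(new TextEncoder().encode('hello'));
  if (!ok) {
    // Assumed failure signal for illustration, e.g. buffer pressure.
    console.warn('frame was not accepted');
  }
  await track.unpublish();
}
```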
Interface - Subscribing
Todo:
Resolves CLT-2582