
Improve progress events for onDownloadProgress and onUploadProgress#736

Merged
sholladay merged 7 commits into sindresorhus:main from sholladay:pull-streams
Sep 26, 2025

Conversation

@sholladay
Collaborator

@sholladay sholladay commented Sep 13, 2025

Fixes #694
Fixes #735

This PR improves the behavior of the onDownloadProgress and onUploadProgress event callbacks:

  • Ensures that they are only called when needed (no more extraneous events with empty chunks)
  • Events are reported only after a chunk is received by (as opposed to sent to) the stream destination, i.e. the server for uploads or the sink for downloads
  • More accurate progress data, particularly for how percent is calculated
  • Fixes a few inconsistencies between uploads and downloads, such as how percent was rounded for uploads. This rounding has been removed in favor of maximum precision.
  • In case more bytes are received after the expected totalBytes is reached, percent will now report 1 - Number.EPSILON instead of >= 1, to make it more obvious what is happening
  • The speed of progress events will now behave as expected, at network speed. A new pulling strategy is used for the internal stream implementation that facilitates progress tracking, which will be particularly noticeable for uploads. Previously, the upload stream was pushed as quickly as possible onto the network queue, which meant that progress events were reported faster than the network could actually send data. Now, a pull-based stream with proper backpressure is used to ensure data is transferred to the network just-in-time, when it is ready for more data. This also improves memory usage.

Note: In some cases, body size estimation for totalBytes may still be inaccurate. This will be fixed by #734.
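The percent semantics described above can be summarized in a small standalone function. This is an illustrative sketch, not ky's actual code; `computePercent` is a hypothetical name:

```typescript
// Hypothetical helper illustrating the new percent semantics.
const computePercent = (transferredBytes: number, totalBytes: number, done: boolean): number => {
	if (totalBytes === 0) {
		// Unknown size: percent is not estimated (the 204 no-body case is special-cased separately).
		return 0;
	}

	const percent = transferredBytes / totalBytes;

	// If more bytes arrive than expected, report just under 100% until the stream finishes.
	if (!done && percent >= 1) {
		return 1 - Number.EPSILON;
	}

	return Math.min(1, percent);
};

console.log(computePercent(50, 100, false)); // → 0.5
console.log(computePercent(150, 100, false)); // just under 1
console.log(computePercent(100, 100, true)); // → 1
```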

Previously, upload streams were pushed as quickly as possible onto the network queue, which meant that progress events were reported faster than the network could actually send data. Now, a pull-based stream with proper backpressure is used to ensure data is transferred to the network just-in-time, when it is ready for more data.

Additionally, progress reporting is now more accurate, with progress reported only after a chunk has actually been received by the server.
Similar to 7a06e91, but for download streams
@sholladay sholladay changed the title from "Pull streams" to "Improve progress events for onDownloadProgress and onUploadProgress" on Sep 13, 2025
@sindresorhus
Owner

Initial AI review (will do human review later when I have a chance):


Major issues (fix)

  1. Eager reader lock (download path)
    You create the reader before the 204 short-circuit. Not harmful for 204 (body is null), but it’s unnecessary risk and asymmetry with the new ReadableStream lifecycle. Move getReader() inside the stream and release on completion. (GitHub)

  2. Cancel doesn’t actually cancel
    cancel() should call reader.cancel(reason) (then release the lock). releaseLock() alone doesn’t propagate cancellation upstream. Same for both upload/download. (GitHub)

  3. Lock never released on normal completion
    When done is true, you close the controller but never release/cancel the reader. Add a finally after the loop step to clean up. (GitHub)

  4. Magic number 0.9999
    Replace with 1 - Number.EPSILON. Clearer intent, fewer future “why 0.9999?” questions, and avoids percentile rounding edge-cases. Same for upload and download. (GitHub)

Minor issues / polish

  • DRY: Upload and download paths are near duplicates. Factor the “previousChunk/pull/enqueue/emit” logic into a tiny helper to keep them in sync.

Tests to add (high-value)

  • 204 downloads: assert single event {percent:1, transferredBytes:0, totalBytes:0} with empty chunk. (You already special-case this.) (GitHub)
  • Unknown content-length download: ensure percent stays 0, totalBytes grows via Math.max, and events match chunking.
  • Zero-byte upload: assert no events (new behavior) or explicitly document if you want one terminal event.
  • Multi-chunk upload: assert monotonic percent, last event === 1, and emitted chunks correspond to previously enqueued data (not the to-be-enqueued chunk). (GitHub)
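The monotonicity and terminal-event checks in the last bullet could be expressed as a small helper over the collected events; a sketch with hypothetical names:

```typescript
type ProgressEvent = {percent: number; transferredBytes: number; totalBytes: number};

// Hypothetical assertion helper for progress events collected during a test.
const assertProgress = (events: ProgressEvent[]): void => {
	for (let index = 1; index < events.length; index++) {
		if (events[index].percent < events[index - 1].percent) {
			throw new Error(`percent regressed at event ${index}`);
		}
	}

	if (events.length > 0 && events.at(-1)!.percent !== 1) {
		throw new Error('terminal event must report percent === 1');
	}
};

// Example: a well-behaved multi-chunk sequence passes.
assertProgress([
	{percent: 0.25, transferredBytes: 25, totalBytes: 100},
	{percent: 0.75, transferredBytes: 75, totalBytes: 100},
	{percent: 1, transferredBytes: 100, totalBytes: 100},
]);
```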

Minimal patches

Download path

// source/utils/body.ts
export const streamResponse = (response: Response, onDownloadProgress: Options['onDownloadProgress']) => {
	const totalBytes = Number(response.headers.get('content-length')) || 0;
	let transferredBytes = 0;
	let previousChunk: Uint8Array | undefined;

	if (response.status === 204) {
		onDownloadProgress?.({percent: 1, totalBytes, transferredBytes}, new Uint8Array());
		return new Response(null, {headers: response.headers, status: response.status, statusText: response.statusText});
	}

	let reader: ReadableStreamDefaultReader<Uint8Array> | undefined;

	return new Response(
		new ReadableStream<Uint8Array>({
			async pull(controller) {
				// Lazily create and keep the reader in the closure.
				if (!reader) {
					reader = (response.body ?? new Response('').body!).getReader();
				}

				const {value, done} = await reader.read();

				if (previousChunk) {
					transferredBytes += previousChunk.byteLength;

					const almostOne = 1 - Number.EPSILON;
					let percent = totalBytes === 0 ? 0 : transferredBytes / totalBytes;
					if (!done && percent >= 1) {
						percent = almostOne;
					}

					onDownloadProgress?.(
						{
							percent: Math.min(1, percent),
							transferredBytes,
							totalBytes: Math.max(totalBytes, transferredBytes),
						},
						previousChunk,
					);
				}

				if (done) {
					try {
						controller.close();
					} finally {
						await reader?.cancel();
						reader?.releaseLock();
					}

					return;
				}

				controller.enqueue(value!);
				previousChunk = value!;
			},
			async cancel(reason) {
				try {
					await reader?.cancel(reason);
				} finally {
					reader?.releaseLock();
				}
			},
		}),
		{headers: response.headers, status: response.status, statusText: response.statusText},
	);
};

This fixes eager locking, real cancellation, and lock release. It also replaces 0.9999 with 1 - Number.EPSILON. (GitHub)

Upload path

export const streamRequest = (
	request: Request,
	onUploadProgress: Options['onUploadProgress'],
	originalBody?: BodyInit | null,
) => {
	const totalBytes = getBodySize(originalBody ?? request.body);
	let transferredBytes = 0;
	let previousChunk: Uint8Array | undefined;
	let reader: ReadableStreamDefaultReader<Uint8Array> | undefined;

	return new Request(request, {
		// @ts-expect-error - Types are outdated.
		duplex: 'half',
		body: new ReadableStream<Uint8Array>({
			async pull(controller) {
				if (!reader) {
					reader = (request.body ?? new Response('').body!).getReader();
				}

				const {value, done} = await reader.read();

				if (previousChunk) {
					transferredBytes += previousChunk.byteLength;

					const almostOne = 1 - Number.EPSILON;
					let percent = totalBytes === 0 ? 0 : transferredBytes / totalBytes;
					if (!done && percent >= 1) {
						percent = almostOne;
					}

					onUploadProgress?.(
						{
							percent: Math.min(1, percent),
							transferredBytes,
							totalBytes: Math.max(totalBytes, transferredBytes),
						},
						previousChunk,
					);
				}

				if (done) {
					try {
						controller.close();
					} finally {
						await reader?.cancel();
						reader?.releaseLock();
					}
					return;
				}

				controller.enqueue(value!);
				previousChunk = value!;
			},
			async cancel(reason) {
				try {
					await reader?.cancel(reason);
				} finally {
					reader?.releaseLock();
				}
			},
		}),
	});
};

Same fixes mirrored for upload. (GitHub)

Optional refactor (nice-to-have)

Extract a tiny helper to remove duplication:

function progressStream(
	upstream: ReadableStream<Uint8Array> | null,
	totalBytes: number,
	emit: (info: {percent: number; transferredBytes: number; totalBytes: number}, chunk: Uint8Array) => void,
): ReadableStream<Uint8Array> {
	let reader: ReadableStreamDefaultReader<Uint8Array> | undefined;
	let previousChunk: Uint8Array | undefined;
	let transferredBytes = 0;

	return new ReadableStream<Uint8Array>({
		async pull(controller) {
			if (!reader) {
				reader = (upstream ?? new Response('').body!).getReader();
			}

			const {value, done} = await reader.read();

			if (previousChunk) {
				transferredBytes += previousChunk.byteLength;

				const almostOne = 1 - Number.EPSILON;
				let percent = totalBytes === 0 ? 0 : transferredBytes / totalBytes;
				if (!done && percent >= 1) {
					percent = almostOne;
				}

				emit(
					{percent: Math.min(1, percent), transferredBytes, totalBytes: Math.max(totalBytes, transferredBytes)},
					previousChunk,
				);
			}

			if (done) {
				try {
					controller.close();
				} finally {
					await reader?.cancel();
					reader?.releaseLock();
				}
				return;
			}

			controller.enqueue(value!);
			previousChunk = value!;
		},
		async cancel(reason) {
			try {
				await reader?.cancel(reason);
			} finally {
				reader?.releaseLock();
			}
		},
	});
}

Final callouts

  • Keep the improved precision (drop .toFixed(2)), but pin tests to exact 1 on terminal events. Your current test only ensures <= 1 on the first update; add an explicit === 1 check on completion. (GitHub)

@sholladay
Collaborator Author

It brings up some good points. I probably do need to improve how cancellation is propagated. Will work on it.

This makes overages easier to see as visually distinct from 100% on a progress bar.
@sholladay
Collaborator Author

@sindresorhus I fixed most of those issues. I somewhat agree with the AI about the use of 0.999 (I reduced it from four to three 9s after the review) but I'm on the fence about it.

There are a few options for what to do when transferredBytes > totalBytes:

  1. Current behavior: Set percent to a simple magic number like 0.999. Easy to read and check. It's visually distinct from 100% on a typical progress bar.
  2. Suggested by the AI: Set percent to 1 - Number.EPSILON. It's not really any more accurate, since we have no way of knowing how far away we are from finishing, and it would render as 100% on most progress bars after rounding. But it does prevent an edge case where a very large upload/download with a high-resolution progress bar could appear to go from, say, 99.95% complete down to 99.9% complete. In practice that would probably be unnoticeable, due to the small difference and how quickly it happens, but it would be strange nonetheless, so solving it is nice.
  3. Allow percent to naturally go over 1. I like this because it's simple and solves the same edge case while providing useful information about overage. But it could complicate things for users. I'm not sure how common it is for progress bars and such to handle 105% progress, for example. Presumably, the good ones clamp to 100%. But then it would appear stuck at 100% for a while, which could be more confusing than being stuck at 99.9%, where it's at least intuitive that it's waiting for more data to be transferred.
  4. Set percent to undefined, and then optionally a boolean flag like exceededTotal: true and/or an overageBytes number. This seems the most technically correct to me, solves the edge case, and it would allow native <progress> elements to show their indeterminate state. But again, it might complicate usage in some cases.

I am open to any of these, though 2 is probably my least favorite. More feedback from the community would be useful.

@sindresorhus
Owner

Do 1.

3 and 4 break a lot of expectations.

Comment thread: source/utils/body.ts (Outdated)

let percent = totalBytes === 0 ? 0 : transferredBytes / totalBytes;
if (percent >= 1) {
percent = 0.999;
Owner


Needs a succinct code comment

@sindresorhus
Copy link
Copy Markdown
Owner

Actually, Number.EPSILON is better. We never want to regress in percentage (breaks user expectation). If you hit 99.95% and then discover it's exceeded 1, clamping to 0.999 causes a visible (API, not UI) drop (0.9995 → 0.999). 1 - Number.EPSILON ensures no drop.
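The regression described here is easy to check numerically. A sketch comparing the two clamping strategies (helper names hypothetical, values illustrative):

```typescript
// Two candidate clamps for when transferredBytes exceeds totalBytes.
const clampWithMagicNumber = (percent: number): number => (percent >= 1 ? 0.999 : percent);
const clampWithEpsilon = (percent: number): number => (percent >= 1 ? 1 - Number.EPSILON : percent);

const before = 0.9995; // Last reported percent while still under the estimate.
const after = 1.2; // transferredBytes has now exceeded totalBytes.

console.log(clampWithMagicNumber(after) < before); // → true: visible regression (0.9995 → 0.999)
console.log(clampWithEpsilon(after) < before); // → false: 1 - Number.EPSILON never drops below earlier values < 1
```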

When the stream exceeds the expected totalBytes size, we now report 1 - Number.EPSILON as the progress `percent`.
@sholladay
Collaborator Author

It's my least favorite approach, but I made the Number.EPSILON change.

Comment thread: source/utils/body.ts
let percent = totalBytes === 0 ? 0 : transferredBytes / totalBytes;
// Avoid reporting 100% progress before the stream is actually finished (in case totalBytes is inaccurate)
if (percent >= 1) {
percent = 1 - Number.EPSILON;
Owner


Suggested change
- percent = 1 - Number.EPSILON;
+ // Prevents percentage from going backwards when close to 100%.
+ percent = 1 - Number.EPSILON;

Collaborator Author


That is only true relative to the other approaches we could've taken here. I think this comment would be confusing when read by someone in the future.

Owner


Maybe, but I think it needs a clear comment on why we do it like that.

Collaborator Author


I wrote a comment above the if condition that I think explains the purpose pretty well. Did you see that? The choice of Number.EPSILON is just a way to achieve that goal. It's merely the largest number that isn't 100%.

Collaborator Author


I left a comment to explain epsilon usage. I had to compare it to using 0.99, which we're not doing, for the comment to make any sense.

Owner


👍

@sholladay sholladay merged commit 60958f9 into sindresorhus:main Sep 26, 2025
3 checks passed
@sholladay sholladay deleted the pull-streams branch September 26, 2025 21:07


Development

Successfully merging this pull request may close these issues.

  • onUploadProgress tracks to-be-uploaded size
  • Upload progress is super fast, but it's fake.

2 participants