What does this PR do?
Pulls in the changes from tornadoweb/tornado@119a195
What issues does this PR fix or reference?
Fixes #68854
What was changed
1. `ParseMultipartConfig` class

A plain configuration class holds three settings:

- `enabled` (default `True`) — allows multipart parsing to be disabled entirely for applications that don't need it
- `max_parts` (default 100) — caps the number of MIME parts per request
- `max_part_header_size` (default 10240 bytes / 10 KB) — caps the size of the headers for each individual part

Design decision — plain class instead of dataclass

The upstream 6.5.5 fix uses `@dataclasses.dataclass`. Dataclasses are available in Python 3.7+ and the branch supports Python 3.9+, so that wouldn't have been a technical problem. However, no other code in the 4.5.3 vendored file uses dataclasses, and the existing patches all follow the original coding style. A plain class with an `__init__` is functionally identical, requires no new import, and keeps the diff consistent with the style of the surrounding code.

2. `_DEFAULT_MULTIPART_CONFIG` global and `set_parse_body_config()`
A module-level default config instance is created once at import time. A `set_parse_body_config()` function is provided as a global escape hatch: if a Salt deployment has legitimate forms with more than 100 fields (each `<input>` element is a part), an operator can raise the limit at application startup without patching the library again. This mirrors the upstream API exactly.

3. Limits in `parse_multipart_form_data()`
The check `if len(parts) > config.max_parts` is placed immediately after the `data.split()` call, before any iteration. This means a request with 10,000 parts fails fast without processing any of them.

The check `if eoh > config.max_part_header_size` is placed inside the loop, right after `part.find(b"\r\n\r\n")`, before the header bytes are handed to `HTTPHeaders.parse()`. This prevents a large per-part header from reaching the more expensive header-parsing logic.

4. Strict content-type check in `parse_body_arguments()`
The function previously used `content_type.startswith("multipart/form-data")` to detect multipart bodies, then split on `;` to find the boundary. The upstream commit adds a check that `fields[0].strip() == "multipart/form-data"` exactly, which catches malformed content types like `multipart/form-dataxyz` that would have matched the `startswith` guard but aren't actually valid multipart bodies. This is defence in depth, shipped as part of the same upstream commit.

Merge requirements satisfied?
[NOTICE] Bug fixes or features added to Salt require tests.
Commits signed with GPG?
Yes