Patch tornado for BDSA-2026-3867 #68856

Open
twangboy wants to merge 1 commit into saltstack:3006.x from twangboy:issue/68854/3006.x
Conversation

@twangboy (Contributor) commented Mar 27, 2026

What does this PR do?

Pulls in the changes from tornadoweb/tornado@119a195

What issues does this PR fix or reference?

Fixes #68854

What was changed

1. ParseMultipartConfig class

A plain configuration class holds three settings:

  • enabled (default True) — allows multipart parsing to be disabled entirely for applications that don't need it
  • max_parts (default 100) — caps the number of MIME parts per request
  • max_part_header_size (default 10240 bytes / 10 KB) — caps the size of the headers for each individual part

Design decision — plain class instead of dataclass

The upstream 6.5.5 fix uses @dataclasses.dataclass. Dataclasses are available in Python 3.7+ and the branch supports Python 3.9+, so that wouldn't have been a technical problem. However, no other code in the 4.5.3 vendored file uses dataclasses, and the existing patches all follow the original coding style. A plain class with an __init__ is functionally identical, requires no new import, and keeps the diff consistent with the style of the surrounding code.
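As a sketch, the plain-class version described above might look like the following. The attribute names and defaults are taken from this PR description; the exact vendored code may differ in details.

```python
# Hypothetical sketch of the plain configuration class described above.
# Names and defaults come from the PR description, not the vendored file.
class ParseMultipartConfig:
    """Holds the multipart parsing limits. A plain class with __init__
    rather than a dataclass, matching the style of the vendored
    tornado 4.5.3 module (no new imports required)."""

    def __init__(self, enabled=True, max_parts=100, max_part_header_size=10240):
        self.enabled = enabled                            # allow disabling multipart parsing
        self.max_parts = max_parts                        # cap on MIME parts per request
        self.max_part_header_size = max_part_header_size  # per-part header byte cap
```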

2. _DEFAULT_MULTIPART_CONFIG global and set_parse_body_config()

A module-level default config instance is created once at import time. A set_parse_body_config() function is provided as a global escape hatch: if a Salt deployment has legitimate forms with more than 100 fields (each <input> element is a part), an operator can raise the limit at application startup without patching the library again. This mirrors the upstream API exactly.

3. Limits in parse_multipart_form_data()

The check if len(parts) > config.max_parts is placed immediately after the data.split() call, before any iteration. This means a request with 10,000 parts fails fast without processing any of them.

The check if eoh > config.max_part_header_size is placed inside the loop, right after part.find(b"\r\n\r\n"), before the header bytes are handed to HTTPHeaders.parse(). This prevents a large per-part header from reaching the more expensive header-parsing logic.
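The ordering of the two checks can be illustrated with a simplified, self-contained sketch. The real function uses tornado's boundary handling and hands the header bytes to HTTPHeaders.parse(); this only shows where the limits sit relative to the split and the loop.

```python
# Minimal config stand-in so the sketch runs on its own.
class ParseMultipartConfig:
    def __init__(self, max_parts=100, max_part_header_size=10240):
        self.max_parts = max_parts
        self.max_part_header_size = max_part_header_size

def parse_multipart_form_data(boundary, data, config=None):
    """Simplified sketch; the real tornado code differs in split logic
    and parses headers with HTTPHeaders.parse()."""
    config = config or ParseMultipartConfig()
    parts = data.split(b"--" + boundary)
    # Check 1: placed immediately after the split, before any iteration,
    # so a request with thousands of parts fails fast.
    if len(parts) > config.max_parts:
        raise ValueError("multipart/form-data has too many parts")
    headers = []
    for part in parts:
        eoh = part.find(b"\r\n\r\n")
        if eoh == -1:
            continue
        # Check 2: inside the loop, right after locating the end of the
        # part headers, before the more expensive header parsing runs.
        if eoh > config.max_part_header_size:
            raise ValueError("multipart/form-data part header too large")
        headers.append(part[:eoh])  # raw header bytes, not yet parsed
    return headers
```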

4. Strict content-type check in parse_body_arguments()

The function previously used content_type.startswith("multipart/form-data") to detect multipart bodies, then split on ; to find the boundary. The upstream commit adds a check that fields[0].strip() == "multipart/form-data" exactly, which catches malformed content types like multipart/form-dataxyz that would have matched the startswith guard but aren't actually valid multipart bodies. This is defence-in-depth and also part of the same upstream commit.
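The change can be illustrated with a small standalone predicate (hypothetical name; in the real code the check sits inline in parse_body_arguments):

```python
def is_multipart_content_type(content_type):
    """Return True only when the media type is exactly multipart/form-data."""
    fields = content_type.split(";")
    # Old guard: content_type.startswith("multipart/form-data"), which also
    # accepted malformed types such as "multipart/form-dataxyz".
    # New guard: the first ;-separated field must match exactly.
    return fields[0].strip() == "multipart/form-data"
```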

Merge requirements satisfied?

[NOTICE] Bug fixes or features added to Salt require tests.

Commits signed with GPG?

Yes

@twangboy twangboy requested a review from a team as a code owner March 27, 2026 16:26
@twangboy twangboy self-assigned this Mar 27, 2026
@twangboy twangboy added the test:full Run the full test suite label Mar 27, 2026
@twangboy twangboy added this to the Sulpher v3006.24 milestone Mar 27, 2026