Change default MAX VHD size set by SDK from 1GB to 32GB #40355
1wizkid wants to merge 3 commits into feature/wsl-for-apps from
Conversation
Pull request overview
Note: Copilot was unable to run its full agentic suite in this review.
Updates the SDK’s default maximum VHD storage size to align with larger, CLI-compatible defaults.
Changes:
- Increase `s_DefaultStorageSize` from ~1GB to ~1TB (decimal) in the Windows SDK implementation.
I feel like we should keep the default at a functional but small value (like the current 1GB). Developers should be making a conscious decision to set the limit rather than relying on a massive default. If the goal is for the default to be functionally no size limit, then we should set it to the absolute maximum value supported rather than an arbitrary 1TB.
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
I think our role here should be to set a sane default, and give developers the ability to customize if they want. What matters for choosing a sane default is balancing the tension between "How much extra disk space will that VHD take up due to its max size?" and "How much space will a user need by default?" I ran the numbers, and from the data I think a default size of 32GB is appropriate. It strikes the right balance between "most workflows just work out of the box without significant consideration" and "we aren't wasting disk space". Data is below if you'd like to make up your own minds on it.

[Table: VHD size while session is active vs. post-termination size (cold VHD); measurements not preserved in this export]
```cpp
// Default to 1 TB (decimal, 1,000,000,000,000 bytes)
constexpr UINT64 s_DefaultStorageSize = 1000ULL * 1000 * 1000 * 1000;
```
s_DefaultStorageSize is set to 1,000,000,000,000 bytes, but later this value is converted to MaximumStorageSizeMb via integer division by _1MB (1,048,576). Because 1e12 is not a multiple of _1MB, the conversion truncates and the effective max size becomes smaller than intended. Also, the repo’s size parsing (ParseMemorySize("1TB")) and other defaults treat “TB” as binary (1<<40), so using a decimal TB here won’t actually match CLI/config expectations.
Suggested change:
```diff
-// Default to 1 TB (decimal, 1,000,000,000,000 bytes)
-constexpr UINT64 s_DefaultStorageSize = 1000ULL * 1000 * 1000 * 1000;
+// Default to 1 TB (binary, 1 << 40 bytes)
+constexpr UINT64 s_DefaultStorageSize = 1ULL << 40;
```
```cpp
// Default to 1 TB (decimal, 1,000,000,000,000 bytes)
constexpr UINT64 s_DefaultStorageSize = 1000ULL * 1000 * 1000 * 1000;
```
This changes default VHD max size behavior but there doesn’t appear to be test coverage asserting what the SDK default MaximumStorageSizeMb resolves to when callers don’t set VHD requirements. Adding a WSLC SDK test that exercises the default path (no WslcSetSessionSettingsVhd call, or passing null to reset) would help prevent regressions and ensure the default matches the intended 1TB/1TiB value.
Adjusting to 32GB based on Craig's feedback. We can evaluate this value after private preview, as well as after public preview, to determine whether developers hit this limitation regularly. @craigloewen-msft, will we have telemetry on this from the runtime side?
I don't really understand what the numbers you presented mean, but I'm assuming there is no magic compression that turns 32GB of data into 52MB of data when the session is terminated. My concern (especially about massive percentages of today's common drives) is that logging, temp files, or general cruft from running the container session over time would eat up a huge amount of space. If the developer isn't forced into making a decision on the maximum size, they will just let that happen. Only when the VHD fills and causes errors will they see that they should have been performing cleanup along the way to keep storage under control. The user won't be able to go to Disk Cleanup and remove temporary files, since all of this is hidden from the host OS. And as I understand it, the VHD won't shrink when space is freed inside it, making resolution a potentially more complex process for the developer. I'm certainly happier with 32GB than 1TB, but "sane default" to me means that I can create a session, pull a reasonable image (the largest of the top N used images?), and execute a compute workload that produces some side-effect output. That should then hold true for a projected amount of time, at which point it can be reconsidered.
Summary of the Pull Request
A 1GB max size is too small. This changes the default max size to 1TB to match the CLI.
PR Checklist
Detailed Description of the Pull Request / Additional comments
Validation Steps Performed