Fix WGPU out-of-memory crash by caching overlay textures (closes #3611) #3614
base: master
Conversation
Thanks for the contribution. I was never able to reproduce the issue, but we should do this anyway. @afrdbaig7, if you feel comfortable taking this on, that would be great; otherwise I will take it over at some point.
@timon-schelling Thanks for the clarification. I'm comfortable taking this on. I'll refactor TargetTexture into a proper abstraction that owns the texture and view and handles resizing internally via an ensure_size method, and move the size/creation logic out of render_vello_scene_to_target_texture. I'll push an updated version once that refactor is done.
Force-pushed: 1beb06a to f17644e
desktop/src/render/state.rs (Outdated)
```rust
    self.bind_overlays_texture(texture);
}
if let Some(target_texture) = &self.overlays_target_texture {
    self.overlays_texture = Some(target_texture.texture().clone());
```
I think it is possible to remove self.overlays_texture entirely. You should be able to use overlay_target_texture.view() directly when updating the bind groups.
I tested this locally by removing self.overlays_texture and using overlay_target_texture.view() directly when updating the bind groups. Everything worked as expected in my tests, and I didn’t run into any lifetime, resizing, or rendering issues.
I’ll go ahead and push this refactor.
```rust
let texture = self.context.device.create_texture(&wgpu::TextureDescriptor {
    label: None,
    size: wgpu::Extent3d {
        width: size.x,
```
Why this change?
Thanks for calling this out. This change was to avoid output: Option<TargetTexture> being None at the moment we call ensure_size(). On the first call we create a minimal 1×1 texture so ensure_size() can run and reallocate it to the requested size.
I agree this is a bit wasteful, since the 1×1 texture is usually replaced immediately. I went with this approach to keep the change small and low-risk, without widening the scope of this PR.
If you’d prefer the cleaner design, I’m happy to follow up by either:
- making ensure_size() handle the None case directly, or
- adding TargetTexture::new(device, size) so we allocate the correct size on the first call.
Yes, add a new constructor for target texture creation.
```rust
pub async fn render_vello_scene_to_target_texture(&self, scene: &Scene, size: UVec2, context: &RenderContext, background: Color, output: &mut Option<TargetTexture>) -> Result<()> {
    // Initialize with a minimal texture if this is the first call
    if output.is_none() {
```
Keep using if let syntax; then you also don't need the unwrap.
Good catch! You're right, I'll refactor to use if let syntax to avoid the unwrap(). Will update shortly.
Fixes #3611
Problem
Graphite could crash with a WGPU out-of-memory error during normal usage.
The root cause was repeated allocation of overlay textures instead of reusing existing ones.
Each frame or update created new GPU textures, causing GPU memory usage to grow continuously until WGPU failed.
This resulted in:
GPU out-of-memory crashes
Increasing memory usage over time
Unstable rendering behavior during longer sessions
Solution
Overlay textures are now cached and reused instead of being recreated repeatedly.
By reusing existing GPU textures, unnecessary allocations are avoided and GPU memory usage remains stable.
Observable behavior and rendering logic remain unchanged; only texture lifecycle management is corrected.
Changes
Introduced caching for overlay textures
Prevented repeated GPU texture allocations
Ensured existing textures are reused where appropriate
Testing
Built the project locally
Reproduced the out-of-memory crash before the fix
Verified GPU memory usage remains stable after the change
Confirmed no regressions in rendering behavior