
Clean up primary-only snapshot DB records on volume expunge#12813

Open
Damans227 wants to merge 4 commits into apache:4.22 from
Damans227:fix/12002-cleanup-orphaned-ceph-snapshots

Conversation

@Damans227 (Collaborator) commented Mar 13, 2026

Description

When snapshot.backup.to.secondary=false (KVM + Ceph) and a VM is expunged, Ceph destroys the RBD snapshots along with the volume image, but the DB records (snapshots, snapshot_store_ref) are left behind as undeletable orphans.

Fix

In StorageManagerImpl.cleanupStorage(), clean up primary-only snapshot records before the volume is expunged from storage.

Fixes: #12002
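
The decision logic described above can be sketched as follows. This is a hedged, self-contained illustration, not the actual CloudStack code: all class and method names here (`PrimaryOnlySnapshotCleanup`, `findPrimaryOnly`, the `Snapshot`/`StoreRef` records) are hypothetical stand-ins for the real DAO entities touched in `StorageManagerImpl.cleanupStorage()`. It shows the three cases the fix must distinguish: primary-only snapshots (clean up), snapshots with a secondary copy (keep), and already-destroyed snapshots (skip).

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch (Java 16+): which snapshot DB records are safe to remove
// when their volume is expunged and no secondary copy exists.
public class PrimaryOnlySnapshotCleanup {

    enum Role { PRIMARY, SECONDARY }
    enum State { READY, DESTROYED }

    record Snapshot(long id, State state) {}
    record StoreRef(long snapshotId, Role role) {}

    // Returns ids of snapshots eligible for DB cleanup: not already destroyed,
    // and backed only by primary-storage refs (Ceph destroys the RBD snapshot
    // together with the volume image, so only the DB rows would remain).
    static List<Long> findPrimaryOnly(List<Snapshot> snapshots, List<StoreRef> refs) {
        List<Long> result = new ArrayList<>();
        for (Snapshot s : snapshots) {
            if (s.state() == State.DESTROYED) {
                continue; // already cleaned up; skip
            }
            boolean hasPrimary = false, hasSecondary = false;
            for (StoreRef r : refs) {
                if (r.snapshotId() != s.id()) continue;
                if (r.role() == Role.PRIMARY) hasPrimary = true;
                else hasSecondary = true;
            }
            if (hasPrimary && !hasSecondary) {
                result.add(s.id()); // primary-only: orphaned once the volume is expunged
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Snapshot> snaps = List.of(
                new Snapshot(1, State.READY),      // primary-only -> cleaned
                new Snapshot(2, State.READY),      // has secondary copy -> kept
                new Snapshot(3, State.DESTROYED)); // already destroyed -> skipped
        List<StoreRef> refs = List.of(
                new StoreRef(1, Role.PRIMARY),
                new StoreRef(2, Role.PRIMARY), new StoreRef(2, Role.SECONDARY),
                new StoreRef(3, Role.PRIMARY));
        System.out.println(findPrimaryOnly(snaps, refs)); // prints [1]
    }
}
```

The key invariant is that a snapshot with any secondary-storage ref is never touched, which keeps the NFS/backed-up path unaffected.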

Types of changes

  • Breaking change (fix or feature that would cause existing functionality to change)
  • New feature (non-breaking change which adds functionality)
  • Bug fix (non-breaking change which fixes an issue)
  • Enhancement (improves an existing feature and functionality)
  • Cleanup (Code refactoring and cleanup, that may add test cases)
  • Build/CI
  • Test (unit or integration test code)

Feature/Enhancement Scale or Bug Severity

Feature/Enhancement Scale

  • Major
  • Minor

Bug Severity

  • BLOCKER
  • Critical
  • Major
  • Minor
  • Trivial

Screenshots (if appropriate):

How Has This Been Tested?

Tested on KVM + Ceph (RBD) with snapshot.backup.to.secondary=false:

  1. Deployed a VM on Ceph, took a volume snapshot, then destroyed and expunged the VM
  2. Verified that the snapshot DB records were cleaned up after the storage scavenger cycle
  3. Confirmed NFS snapshots (with secondary copies) are unaffected
  4. Unit tests cover: primary-only cleanup, skip when secondary exists, skip destroyed snapshots

How did you try to break this feature and the system with this change?

Tested with snapshots having both primary and secondary refs, and with already-destroyed snapshots; both were correctly skipped.

codecov bot commented Mar 13, 2026

Codecov Report

❌ Patch coverage is 78.94737% with 4 lines in your changes missing coverage. Please review.
✅ Project coverage is 17.60%. Comparing base (7048944) to head (79c00ba).

Files with missing lines Patch % Lines
...ain/java/com/cloud/storage/StorageManagerImpl.java 78.94% 2 Missing and 2 partials ⚠️
Additional details and impacted files
@@             Coverage Diff              @@
##               4.22   #12813      +/-   ##
============================================
- Coverage     17.61%   17.60%   -0.01%     
- Complexity    15662    15665       +3     
============================================
  Files          5917     5917              
  Lines        531415   531434      +19     
  Branches      64973    64978       +5     
============================================
- Hits          93588    93580       -8     
- Misses       427271   427296      +25     
- Partials      10556    10558       +2     
Flag Coverage Δ
uitests 3.70% <ø> (ø)
unittests 18.68% <78.94%> (-0.01%) ⬇️



Projects

Status: In Progress

Development

Successfully merging this pull request may close these issues.

snapshot as Abnormal residue on rbd

2 participants