Use journal instead of CacheKV for giga snapshot/rollback #3295
Conversation
Hey @codchen, Code Input detected this PR has a merge conflict. Some of the conflicts of this PR can be resolved with a semantic merge driver. Code Input can do that automatically: https://codeinput.com/r/EESR7EMMZd3 Let me know if you need more help with this conflict or how Code Input works.
Force-pushed from 1608a35 to 48a18ad
The latest Buf updates on your PR. Results from workflow Buf / buf (pull_request).
Force-pushed from 48a18ad to 00974b3
Codecov Report

```
@@            Coverage Diff             @@
##             main    #3295      +/-   ##
==========================================
+ Coverage   59.07%   59.09%   +0.01%
==========================================
  Files        2099     2099
  Lines      172988   173050      +62
==========================================
+ Hits       102195   102258      +63
+ Misses      61922    61918       -4
- Partials     8871     8874       +3
==========================================
```

Flags with carried forward coverage won't be shown.
Force-pushed from 00974b3 to 9480ea7
```go
denom := s.k.GetBaseDenom(s.ctx)
if e.isAdd {
	// Was AddBalance: reverse by subtracting
	_ = s.k.BankKeeper().SubUnlockedCoins(ctx, e.seiAddr, sdk.NewCoins(sdk.NewCoin(denom, e.usei)), true)
```
do we want to be propagating the errors here?
I think it isn't expected to err here. Maybe we should just panic out?
I think panicking might be the most reasonable thing, honestly.
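A minimal, self-contained sketch of the panic-on-error option discussed above. All types here are toys; subUnlockedCoins is a hypothetical stand-in for the bank keeper call, not the actual sei-chain API.

```go
package main

import "fmt"

// balanceEntry is a toy journal entry for a balance change.
type balanceEntry struct {
	addr  string
	amt   int64
	isAdd bool
}

type bank struct{ balances map[string]int64 }

// subUnlockedCoins is a hypothetical stand-in for BankKeeper().SubUnlockedCoins.
func (b *bank) subUnlockedCoins(addr string, amt int64) error {
	if b.balances[addr] < amt {
		return fmt.Errorf("insufficient funds for %s", addr)
	}
	b.balances[addr] -= amt
	return nil
}

func (e balanceEntry) revert(b *bank) {
	if e.isAdd {
		// Was AddBalance: reverse by subtracting. A failure here means the
		// journal and the store disagree, so panic instead of ignoring it.
		if err := b.subUnlockedCoins(e.addr, e.amt); err != nil {
			panic(fmt.Sprintf("journal revert failed: %v", err))
		}
		return
	}
	b.balances[e.addr] += e.amt // was SubBalance: reverse by adding
}

func main() {
	b := &bank{balances: map[string]int64{"sei1abc": 10}}
	balanceEntry{addr: "sei1abc", amt: 10, isAdd: true}.revert(b)
	fmt.Println(b.balances["sei1abc"]) // prints 0
}
```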
```go
s.ctx.MultiStore().(sdk.CacheMultiStore).Write()
s.ctx.GigaMultiStore().WriteGiga()

// Emit all surviving events (from snapshots + current) to the committed ctx's EventManager.
```
do we have tests before/after (even existing ones) to ensure that the same events are emitted before & after?
```go
s.tempState = NewTemporaryState()
s.journal = []journalEntry{}
s.snapshottedCtxs = []sdk.Context{}
s.Snapshot()
```
we used to create a new Snapshot() after cleanup, do we want to enforce that invariant in the new journaled state?
if we don't want it, maybe we add a test enforcing that this isn't required after CleanupForTracer()
Force-pushed from 9480ea7 to 8185f50
…er CleanupForTracer, add event/cleanup tests Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Force-pushed from 8185f50 to bca2f2e
…op SSTOREs) Matches go-ethereum behaviour: if the slot already holds the target value, return early without a KV write or journal entry. Previously N identical SSTOREs to the same slot produced N journal entries. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
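The early-return described in this commit message can be sketched as follows, using a toy store and journal rather than the actual state implementation:

```go
package main

import "fmt"

// storageChange is a toy journal entry recording a slot's previous value.
type storageChange struct {
	key  string
	prev string
}

type dbImpl struct {
	store   map[string]string
	journal []storageChange
}

func newDBImpl() *dbImpl { return &dbImpl{store: map[string]string{}} }

func (d *dbImpl) GetState(key string) string { return d.store[key] }

// SetState short-circuits no-op SSTOREs: if the slot already holds the
// target value, return early with neither a KV write nor a journal entry.
func (d *dbImpl) SetState(key, val string) {
	if d.GetState(key) == val {
		return
	}
	d.journal = append(d.journal, storageChange{key: key, prev: d.GetState(key)})
	d.store[key] = val
}

func main() {
	d := newDBImpl()
	d.SetState("slot", "x")
	d.SetState("slot", "x") // no-op: no new journal entry
	d.SetState("slot", "x") // no-op again
	fmt.Println(len(d.journal)) // prints 1
}
```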
Problem
giga/deps/xevm/state implemented Snapshot()/RevertToSnapshot() by stacking CacheMultiStore layers: each Snapshot() call wrapped the current context in a new cache, and RevertToSnapshot() restored an older context (dropping the outer cache) to undo KV-store writes. This meant N nested EVM snapshots within a single transaction produced N nested CacheMultiStore layers, with all the associated allocation and copy-on-write overhead. The transient/in-memory state (access lists, logs, refunds, transient storage, account status) was already journal-driven; only the persisted KV state relied on the CacheMultiStore trick.
Solution
Adopt the go-ethereum journal pattern for all state—both KV-stored and in-memory:
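A minimal sketch of the journal pattern named above, assuming a toy stateDB with a single balance map; snapshot IDs are journal lengths, as in go-ethereum:

```go
package main

import "fmt"

// journalEntry knows how to undo one mutation.
type journalEntry interface {
	revert(s *stateDB)
}

// balanceChange records the previous balance so it can be restored.
type balanceChange struct {
	addr string
	prev int64
}

func (e balanceChange) revert(s *stateDB) { s.balances[e.addr] = e.prev }

type stateDB struct {
	balances map[string]int64
	journal  []journalEntry
}

func newStateDB() *stateDB { return &stateDB{balances: map[string]int64{}} }

// Snapshot is just the current journal length; no store layers are stacked.
func (s *stateDB) Snapshot() int { return len(s.journal) }

// RevertToSnapshot undoes entries in reverse order back to the snapshot.
func (s *stateDB) RevertToSnapshot(id int) {
	for i := len(s.journal) - 1; i >= id; i-- {
		s.journal[i].revert(s)
	}
	s.journal = s.journal[:id]
}

func (s *stateDB) SetBalance(addr string, amt int64) {
	s.journal = append(s.journal, balanceChange{addr: addr, prev: s.balances[addr]})
	s.balances[addr] = amt
}

func main() {
	s := newStateDB()
	s.SetBalance("alice", 100)
	id := s.Snapshot()
	s.SetBalance("alice", 42)
	s.RevertToSnapshot(id)
	fmt.Println(s.balances["alice"]) // prints 100
}
```

The key property is that N nested snapshots cost N integers, not N cache layers.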
Event-manager isolation
Snapshot() pushes the current EventManager onto a stack and attaches a fresh one to the context. RevertToSnapshot() pops back to the snapshot's EM, discarding any events emitted inside the reverted range. Finalize() drains all surviving EMs (in order) into committedCtx.EventManager().
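The push/pop/drain behaviour described above can be sketched with a toy eventManager in place of the Cosmos SDK type (all names hypothetical):

```go
package main

import "fmt"

type eventManager struct{ events []string }

type stateDB struct {
	current *eventManager
	stack   []*eventManager // one saved manager per live snapshot
}

func newStateDB() *stateDB { return &stateDB{current: &eventManager{}} }

func (s *stateDB) Emit(ev string) { s.current.events = append(s.current.events, ev) }

// Snapshot pushes the current manager and attaches a fresh one.
func (s *stateDB) Snapshot() int {
	s.stack = append(s.stack, s.current)
	s.current = &eventManager{}
	return len(s.stack) - 1
}

// RevertToSnapshot pops back to the manager saved at Snapshot time,
// discarding every event emitted inside the reverted range.
func (s *stateDB) RevertToSnapshot(id int) {
	s.current = s.stack[id]
	s.stack = s.stack[:id]
}

// Finalize drains all surviving managers, in order, plus the current one.
func (s *stateDB) Finalize() []string {
	var out []string
	for _, em := range s.stack {
		out = append(out, em.events...)
	}
	return append(out, s.current.events...)
}

func main() {
	s := newStateDB()
	s.Emit("a")
	id := s.Snapshot()
	s.Emit("b") // reverted below, so it never survives
	s.RevertToSnapshot(id)
	s.Emit("c")
	fmt.Println(s.Finalize()) // prints [a c]
}
```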
GetCommittedState
Previously read from snapshottedCtxs[0] (the context before the first stateDB-level CMS layer). Now reads from committedCtx, the original context saved at NewDBImpl time, which sits below the single stateDB CMS and therefore always reflects the pre-transaction committed state.
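A toy illustration of the distinction, assuming a flat key-value map for the committed context and a dirty overlay for in-transaction writes (names hypothetical, not the actual DBImpl API):

```go
package main

import "fmt"

type kvStore map[string]string

type dbImpl struct {
	committed kvStore // state captured at construction time, pre-transaction
	dirty     kvStore // writes made during the current transaction
}

func newDBImpl(committed kvStore) *dbImpl {
	return &dbImpl{committed: committed, dirty: kvStore{}}
}

func (d *dbImpl) SetState(k, v string) { d.dirty[k] = v }

// GetState reads through the dirty overlay first.
func (d *dbImpl) GetState(k string) string {
	if v, ok := d.dirty[k]; ok {
		return v
	}
	return d.committed[k]
}

// GetCommittedState ignores in-transaction writes entirely, so it always
// reflects the pre-transaction committed state.
func (d *dbImpl) GetCommittedState(k string) string { return d.committed[k] }

func main() {
	db := newDBImpl(kvStore{"slot": "old"})
	db.SetState("slot", "new")
	fmt.Println(db.GetState("slot"), db.GetCommittedState("slot")) // prints new old
}
```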
Other changes
Tests (snapshot_test.go, 20 new cases)