feat(toolkit): exclude historical balance DBs from lite snapshot #121
base: develop
```diff
@@ -57,3 +57,6 @@ Wallet
 /framework/propPath
 .cache
+
+#AI
+.claude
```
```diff
@@ -6,6 +6,7 @@
 import com.google.common.primitives.Bytes;
 import com.google.common.primitives.Ints;
 import com.google.common.primitives.Longs;
+import com.google.protobuf.ByteString;
 import java.io.File;
 import java.io.FileNotFoundException;
 import java.io.IOException;
@@ -30,6 +31,7 @@
 import org.tron.plugins.utils.db.DBIterator;
 import org.tron.plugins.utils.db.DbTool;
 import org.tron.protos.Protocol;
+import org.tron.protos.contract.BalanceContract;
 import picocli.CommandLine;

 @Slf4j(topic = "lite")
@@ -57,13 +59,17 @@ public class DbLite implements Callable<Integer> {
   private static final String TRANSACTION_HISTORY_DB_NAME = "transactionHistoryStore";
   private static final String PROPERTIES_DB_NAME = "properties";
   private static final String TRANS_CACHE_DB_NAME = "trans-cache";
+  private static final String BALANCE_TRACE_DB_NAME = "balance-trace";
+  private static final String ACCOUNT_TRACE_DB_NAME = "account-trace";

   private static final List<String> archiveDbs = Arrays.asList(
       BLOCK_DB_NAME,
       BLOCK_INDEX_DB_NAME,
       TRANS_DB_NAME,
       TRANSACTION_RET_DB_NAME,
-      TRANSACTION_HISTORY_DB_NAME);
+      TRANSACTION_HISTORY_DB_NAME,
+      BALANCE_TRACE_DB_NAME,
+      ACCOUNT_TRACE_DB_NAME);

   enum Operate { split, merge }

@@ -522,24 +528,42 @@ private void trimExtraHistory(String liteDir, BlockNumInfo blockNumInfo)
     DBInterface blockDb = DbTool.getDB(liteDir, BLOCK_DB_NAME);
     DBInterface transDb = DbTool.getDB(liteDir, TRANS_DB_NAME);
     DBInterface tranRetDb = DbTool.getDB(liteDir, TRANSACTION_RET_DB_NAME);
+    DBInterface balanceTraceDb = DbTool.getDB(liteDir, BALANCE_TRACE_DB_NAME);
+    DBInterface accountTraceDb = DbTool.getDB(liteDir, ACCOUNT_TRACE_DB_NAME);

     ProgressBar.wrap(LongStream.rangeClosed(start, end)
         .boxed()
-        .sorted((a, b) -> Long.compare(b, a)), "trimHistory").forEach(n -> {
+        .sorted((a, b) -> Long.compare(b, a)), "trimHistory")
+        .map(ByteArray::fromLong).forEach(n -> {
       try {
-        byte[] blockIdHash = blockIndexDb.get(ByteArray.fromLong(n));
+        byte[] blockIdHash = blockIndexDb.get(n);
         Protocol.Block block = Protocol.Block.parseFrom(blockDb.get(blockIdHash));
         // delete transactions
         for (Protocol.Transaction e : block.getTransactionsList()) {
           transDb.delete(DBUtils.getTransactionId(e).getBytes());
```
Comment on lines +531 to 543

Suggestion (Severity Level: Major): During history trimming, transaction IDs are removed from `trans` and `transactionRetStore`, but `transactionHistoryStore` entries for the same trimmed blocks are left behind. This leaves stale transaction info queryable by transaction ID after the corresponding blocks are deleted, so delete transaction-history entries for each removed block transaction as well. Suggested change:

```suggestion
    DBInterface transactionHistoryDb = DbTool.getDB(liteDir, TRANSACTION_HISTORY_DB_NAME);
    DBInterface balanceTraceDb = DbTool.getDB(liteDir, BALANCE_TRACE_DB_NAME);
    DBInterface accountTraceDb = DbTool.getDB(liteDir, ACCOUNT_TRACE_DB_NAME);

    ProgressBar.wrap(LongStream.rangeClosed(start, end)
        .boxed()
        .sorted((a, b) -> Long.compare(b, a)), "trimHistory")
        .map(ByteArray::fromLong).forEach(n -> {
      try {
        byte[] blockIdHash = blockIndexDb.get(n);
        Protocol.Block block = Protocol.Block.parseFrom(blockDb.get(blockIdHash));
        // delete transactions
        for (Protocol.Transaction e : block.getTransactionsList()) {
          byte[] txId = DBUtils.getTransactionId(e).getBytes();
          transDb.delete(txId);
          transactionHistoryDb.delete(txId);
```
Steps of Reproduction ✅
1. Enable transaction history persistence so `TransactionHistoryStore` is used (its `put`
method checks `CommonParameter.getInstance().getStorage().getTransactionHistorySwitch()`
in `chainbase/src/main/java/org/tron/core/store/TransactionHistoryStore.java:27-32`). Run
a full node so that transactions and their history are written into
`transactionHistoryStore`.
2. Use the DbLite toolkit to create a snapshot and then, after producing additional
blocks, create a history dataset so that `historyMaxNum > snapshotMaxNum`. This pattern is
exercised by `testTools(..., advanceSnapshot=false, historyBalanceLookup=false)` in
`plugins/src/test/java/org/tron/plugins/DbLiteTest.java:23-60`, which generates blocks,
creates a snapshot, generates more blocks, then creates history.
3. Merge history into the lite database by invoking the DbLite CLI with `-o merge` (wired
to `DbLite.call()` at `plugins/src/main/java/common/org/tron/plugins/DbLite.java:119-149`,
which dispatches to `completeHistoryData()` at `DbLite.java:225-260`). In
`completeHistoryData`, archive DBs including `"transactionHistoryStore"` are backed up and
then copied from `historyDir` into `liteDir` (`archiveDbs` list at `DbLite.java:65-72`,
`backupArchiveDbs` at `DbLite.java:497-505`, `copyHistory2Database` at
`DbLite.java:508-512`).
4. When `historyMaxNum > snapshotMaxNum`, `completeHistoryData` calls
`trimExtraHistory(liteDir, blockNumInfo)` at `DbLite.java:244`. Inside `trimExtraHistory`
(`DbLite.java:514-571`), for each block in `[start, end]` it deletes transactions from
`transDb` and per-block results from `TRANSACTION_RET_DB_NAME` via `tranRetDb.delete(n)`
at `DbLite.java:541-546` and removes the corresponding blocks and indices, but it never
touches `TRANSACTION_HISTORY_DB_NAME`. After this, `mergeBak2Database`
(`DbLite.java:573-609`) merges backed-up DBs; for `TRANSACTION_HISTORY_DB_NAME` it copies
all entries from the backup without filtering by block (see `archiveDbs.stream()` and
conditional seek at `DbLite.java:592-603`). As a result, `transactionHistoryStore` retains
entries for transactions whose blocks were trimmed. A client calling
`Wallet.getTransactionInfoById(transactionId)` in
`framework/src/main/java/org/tron/core/Wallet.java:1888-1894` will find no result in
`TransactionRetStore` (entry deleted) but a non-null `TransactionInfoCapsule` from
`TransactionHistoryStore.get(transactionId.toByteArray())`, even though the corresponding
block and trans mapping were removed, leaving transaction-by-id queries returning
information for blocks that no longer exist in the trimmed lite history.

Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** plugins/src/main/java/common/org/tron/plugins/DbLite.java
**Line:** 531:543
**Comment:**
*Logic Error: During history trimming, transaction IDs are removed from `trans` and `transactionRetStore`, but `transactionHistoryStore` entries for the same trimmed blocks are left behind. This leaves stale transaction info queryable by transaction ID after the corresponding blocks are deleted, so delete transaction-history entries for each removed block transaction as well.
Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.
Not applicable. transactionHistoryStore is deprecated and no longer written to, so there is no stale data to clean up.
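Whichever way this thread resolves, the symptom the reproduction steps describe is easy to illustrate. Below is a minimal sketch using plain maps as hypothetical stand-ins for `TransactionRetStore` and `TransactionHistoryStore`; the fallback order follows the `Wallet.getTransactionInfoById` description above, and none of this is the real tron code:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch only: plain maps stand in for the real key-value stores.
public class StaleHistoryLookup {
  static final Map<String, String> transactionRetStore = new HashMap<>();
  static final Map<String, String> transactionHistoryStore = new HashMap<>();

  // Mirrors the described fallback: try the ret store first,
  // then fall back to the history store.
  static String getTransactionInfoById(String txId) {
    String info = transactionRetStore.get(txId);
    return info != null ? info : transactionHistoryStore.get(txId);
  }

  // Simulates the reported trim behavior: the ret entry is deleted,
  // the history entry is not.
  static String lookupAfterTrim() {
    transactionRetStore.put("tx1", "info@block100");
    transactionHistoryStore.put("tx1", "info@block100");
    transactionRetStore.remove("tx1"); // trim deletes only here
    return getTransactionInfoById("tx1");
  }

  public static void main(String[] args) {
    // The trimmed transaction is still queryable via the history fallback.
    System.out.println(lookupAfterTrim()); // prints info@block100
  }
}
```

The sketch only shows why a by-id query can still answer for a block that no longer exists once one of the two stores is trimmed and the other is not.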
Restrict account-trace merge to post-history blocks
mergeBak2Database computes head = historyMaxNum + 1 to restore only backup data newer than the imported history, but this branch forces account-trace to seekToFirst() and copies the entire DB. When snapshotMaxNum > historyMaxNum (the merge-back path) and history balance lookup is enabled, this reintroduces snapshot-side account-trace entries for overlapping heights, so historical balance queries can read forked/stale values for affected addresses. Account-trace rows need height filtering (e.g., decode the block number suffix and skip <= historyMaxNum) instead of unconditional full replay.
This is not an issue. The merge path runs when snapshotMaxNum > historyMaxNum. The backup (snapshot) and liteDir (history) are from the same chain, so for blocks <= historyMaxNum, account-trace entries are identical — put is idempotent. For blocks > historyMaxNum, only the snapshot has data, which is correctly merged back. There is no fork scenario in normal DbLite usage.
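For reference, the height filter proposed in the comment above could look roughly like the following. The key layout (an address prefix followed by an 8-byte big-endian block number) is an assumption made for illustration, not the actual account-trace encoding:

```java
import java.nio.ByteBuffer;

// Sketch of a height filter for replaying backed-up account-trace rows.
// Assumed key layout: <address bytes><8-byte big-endian block number>.
public class AccountTraceFilter {

  // Decode the trailing 8 bytes of the key as the block number.
  static long blockNumOf(byte[] key) {
    return ByteBuffer.wrap(key, key.length - Long.BYTES, Long.BYTES).getLong();
  }

  // Replay a backup entry only if it is newer than the imported history.
  static boolean shouldMergeBack(byte[] key, long historyMaxNum) {
    return blockNumOf(key) > historyMaxNum;
  }

  // Build a key under the assumed layout (helper for the example).
  static byte[] key(byte[] address, long blockNum) {
    return ByteBuffer.allocate(address.length + Long.BYTES)
        .put(address).putLong(blockNum).array();
  }

  public static void main(String[] args) {
    byte[] addr = new byte[21]; // placeholder address bytes
    System.out.println(shouldMergeBack(key(addr, 90), 100));  // false: overlaps history, skip
    System.out.println(shouldMergeBack(key(addr, 101), 100)); // true: snapshot-only, merge
  }
}
```

If the maintainers are right that `put` is idempotent for overlapping heights, the filter is an optimization rather than a correctness fix, but it makes the merge's intent (`head = historyMaxNum + 1`) explicit for every store.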
```diff
@@ -89,7 +89,13 @@ public void clear() {
   public void testTools(String dbType, int checkpointVersion)
       throws InterruptedException, IOException {
     logger.info("dbType {}, checkpointVersion {}", dbType, checkpointVersion);
+    testTools(dbType, checkpointVersion, false, false);
+  }
+
+  public void testTools(String dbType, int checkpointVersion, boolean advanceSnapshot,
+      boolean historyBalanceLookup) throws InterruptedException, IOException {
+    logger.info("dbType {}, checkpointVersion {}, advanceSnapshot {}, historyBalanceLookup {}",
+        dbType, checkpointVersion, advanceSnapshot, historyBalanceLookup);
     dbPath = String.format("%s_%s_%d", dbPath, dbType, System.currentTimeMillis());
     init(dbType);
     final String[] argsForSnapshot =
@@ -104,6 +110,7 @@ public void testTools(String dbType, int checkpointVersion)
         new String[] {"-o", "merge", "--fn-data-path", dbPath + File.separator + databaseDir,
             "--dataset-path", dbPath + File.separator + "history"};
     Args.getInstance().getStorage().setCheckpointVersion(checkpointVersion);
+    Args.getInstance().setHistoryBalanceLookup(historyBalanceLookup);
     DbLite.setRecentBlks(3);
     // start fullNode
     startApp();
@@ -117,8 +124,7 @@ public void testTools(String dbType, int checkpointVersion)
     cli.execute(argsForSnapshot);
     // start fullNode
     startApp();
-    // produce transactions
-    generateSomeTransactions(checkpointVersion == 1 ? 6 : 18);
+    generateSomeTransactions(advanceSnapshot ? (checkpointVersion == 1 ? 6 : 18) : 18);
     // stop the node
     shutdown();
     // generate history
@@ -137,11 +143,11 @@ public void testTools(String dbType, int checkpointVersion)
           String.format("rename snapshot to %s failed",
               Paths.get(dbPath, databaseDir)));
     }
-    // start and validate the snapshot
-    startApp();
-    generateSomeTransactions(checkpointVersion == 1 ? 18 : 6);
-    // stop the node
-    shutdown();
+    if (advanceSnapshot) {
+      startApp();
+      generateSomeTransactions(checkpointVersion == 1 ? 18 : 6);
```
Comment on lines 127 to +148

Suggestion (Severity Level: Major): The block-generation durations for `advanceSnapshot=true` are inverted for checkpoint version 2, which makes history grow more than the snapshot and violates the method's own contract (`snapshotMaxNum > historyMaxNum`). This causes the test to exercise the wrong merge path and can miss regressions in the intended branch. Use the same "history shorter, snapshot longer" timing strategy regardless of checkpoint version. Suggested change:

```diff
-      generateSomeTransactions(advanceSnapshot ? 6 : 18);
+      generateSomeTransactions(18);
```
Steps of Reproduction ✅
1. Run the unit test `testToolsWithRocksDB` in
`plugins/src/test/java/org/tron/plugins/rocksdb/DbLiteRocksDbV2Test.java:9-12`, which
calls `testTools("ROCKSDB", 2, true, false)` on `DbLiteTest`.
2. Inside `DbLiteTest.testTools`
(`plugins/src/test/java/org/tron/plugins/DbLiteTest.java:102-169`), after the initial
18-second block generation and snapshot split, the line at 136 executes
`generateSomeTransactions(advanceSnapshot ? (checkpointVersion == 1 ? 6 : 18) : 18);`
which, for `advanceSnapshot=true` and `checkpointVersion=2`, generates 18 seconds of
additional blocks before the history split (making `historyMaxNum` significantly larger).
3. Later in the same method, after renaming `snapshot` to the live database directory, the
`if (advanceSnapshot)` block at lines 36–41 (diff line 158 in the hunk) runs
`generateSomeTransactions(checkpointVersion == 1 ? 18 : 6);`, which for
`checkpointVersion=2` generates only 6 seconds of blocks on the snapshot, so at merge time
`snapshotMaxNum < historyMaxNum` contrary to the method's Javadoc contract for
`advanceSnapshot=true`.
4. When `cli.execute(argsForMerge)` is called in `DbLiteTest.testTools`
(`plugins/src/test/java/common/org/tron/plugins/DbLite.java:460-519`),
`checkAndGetBlockNumInfo` computes a `BlockNumInfo` with `snapshotMaxNum < historyMaxNum`,
causing `trimExtraHistory` (`DbLite.java:55-52, lines 520-599`) to execute and
`mergeBak2Database` (`DbLite.java:54-64, lines 573-599`) to early-return with "Ignore
merging the bak data"; thus `DbLiteRocksDbV2Test` unintentionally exercises the trim path
again instead of the intended `mergeBak2Database` path for checkpoint version 2.

Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** plugins/src/test/java/org/tron/plugins/DbLiteTest.java
**Line:** 136:158
**Comment:**
*Logic Error: The block-generation durations for `advanceSnapshot=true` are inverted for checkpoint version 2, which makes history grow more than snapshot and violates the method's own contract (`snapshotMaxNum > historyMaxNum`). This causes the test to exercise the wrong merge path and can miss regressions in the intended branch. Use the same "history shorter, snapshot longer" timing strategy regardless of checkpoint version.
Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.
This is a pre-existing behavior, not introduced by this PR. The original code already had checkpointVersion == 1 ? 6 : 18 / checkpointVersion == 1 ? 18 : 6 timing. This PR only wrapped it in the advanceSnapshot conditional without changing the logic. Can be addressed separately if needed.
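The dispatch this thread keeps returning to boils down to a single comparison of the two head block numbers. A compact sketch (the enum and method names are illustrative, not the real DbLite API):

```java
// Sketch: which merge branch runs depends only on how the snapshot head
// compares to the history head, per the reproduction steps above.
public class MergePathChoice {
  enum Path { TRIM_EXTRA_HISTORY, MERGE_BAK_2_DATABASE }

  static Path choose(long snapshotMaxNum, long historyMaxNum) {
    // History ahead of the snapshot: trim the surplus history blocks.
    // Otherwise: merge the backed-up archive DBs back into the lite DB.
    return historyMaxNum > snapshotMaxNum
        ? Path.TRIM_EXTRA_HISTORY
        : Path.MERGE_BAK_2_DATABASE;
  }

  public static void main(String[] args) {
    // 18s of post-snapshot history vs 6s of post-history snapshot growth
    // lands in the trim branch, which is why the inverted timings matter.
    System.out.println(choose(106, 118)); // TRIM_EXTRA_HISTORY
    System.out.println(choose(118, 106)); // MERGE_BAK_2_DATABASE
  }
}
```

This is why the generation durations, not the flag name, decide which branch a test actually covers.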
```diff
@@ -0,0 +1,13 @@
+package org.tron.plugins.rocksdb;
+
+import java.io.IOException;
+import org.junit.Test;
+import org.tron.plugins.DbLiteTest;
+
+public class DbLiteWithHistoryRocksDbTest extends DbLiteTest {
+
+  @Test
+  public void testToolsWithTrimHistory() throws InterruptedException, IOException {
+    testTools("ROCKSDB", 1, true, true);
```
Suggestion (Severity Level: Major): This test is named to validate trim-history behavior, but passing `true` for the third argument drives the merge-back path (`snapshotMaxNum > historyMaxNum`) instead of the trim path (`trimExtraHistory`). As written, it does not cover the intended scenario and can miss regressions in history trimming with balance lookup enabled. Pass `false` so the test actually exercises trim-history logic. Suggested change:

```diff
-    testTools("ROCKSDB", 1, true, true);
+    testTools("ROCKSDB", 1, false, true);
```
Steps of Reproduction ✅
1. Open `plugins/src/test/java/org/tron/plugins/rocksdb/DbLiteWithHistoryRocksDbTest.java`
and see `testToolsWithTrimHistory()` at lines 9–12 calling `testTools("ROCKSDB", 1, true,
true);` (Read output confirms this exact call).
2. Open `plugins/src/test/java/org/tron/plugins/DbLiteTest.java` and inspect the Javadoc
for the 4‑arg overload at lines 96–101 (Grep output): it states `advanceSnapshot=true`
makes `snapshotMaxNum > historyMaxNum` and "exercises mergeBak2Database", while
`advanceSnapshot=false` makes `historyMaxNum > snapshotMaxNum` and "exercises
trimExtraHistory".
3. In the same file, locate the 4‑arg method `testTools(String dbType, int
checkpointVersion, boolean advanceSnapshot, boolean historyBalanceLookup)` at line 102
(Grep) and its log statement at line 104; when
`DbLiteWithHistoryRocksDbTest.testToolsWithTrimHistory()` runs, it always invokes this
overload with `advanceSnapshot=true, historyBalanceLookup=true`, so the executed scenario
is the merge path, not the trimExtraHistory path described in the Javadoc.
4. Check other callers via Grep: `DbLiteRocksDbTest` and the LevelDB tests only call the
2‑arg `testTools(dbType, checkpointVersion)` (lines 90–93), which hard‑codes
`advanceSnapshot=false, historyBalanceLookup=false`; thus no test ever invokes `testTools`
with `advanceSnapshot=false` and `historyBalanceLookup=true`. Therefore, despite its name
(`testToolsWithTrimHistory`), the new RocksDB test never exercises the "trim history with
balance lookup enabled" scenario, and history‑trimming logic under
`historyBalanceLookup=true` remains untested.

Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** plugins/src/test/java/org/tron/plugins/rocksdb/DbLiteWithHistoryRocksDbTest.java
**Line:** 11:11
**Comment:**
*Logic Error: This test is named to validate trim-history behavior, but passing `true` for the third argument drives the merge-back path (`snapshotMaxNum > historyMaxNum`) instead of the trim path (`trimExtraHistory`). As written, it does not cover the intended scenario and can miss regressions in history trimming with balance lookup enabled. Pass `false` so the test actually exercises trim-history logic.
Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.
Valid point. advanceSnapshot=true exercises the merge path, not trim — the test name is misleading. Changed to testTools("ROCKSDB", 1, false, true) so it actually covers the v1 trim + historyBalance scenario. Fixed in the next commit.
```diff
@@ -0,0 +1,13 @@
+package org.tron.plugins.rocksdb;
+
+import java.io.IOException;
+import org.junit.Test;
+import org.tron.plugins.DbLiteTest;
+
+public class DbLiteWithHistoryRocksDbV2Test extends DbLiteTest {
+
+  @Test
+  public void testToolsWithTrimHistory() throws InterruptedException, IOException {
+    testTools("ROCKSDB", 2, false, true);
+  }
+}
```
When merging with a history dataset that does not contain balance-trace/account-trace (e.g., datasets generated before this change), `trimExtraHistory` opens these DBs via `DbTool.getDB(liteDir, ...)`, which defaults to LevelDB if `engine.properties` is absent. On a RocksDB node this creates new trace DB directories tagged as LEVELDB, and subsequent node startup fails engine validation ("Cannot open LEVELDB database with ROCKSDB engine"). This is a regression introduced by eagerly opening these optional stores without inheriting the existing node engine.
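One way to avoid the regression described above is to resolve the engine before eagerly opening an optional store. A hedged sketch follows: the `engine.properties` file name comes from the comment, but the `ENGINE` key and the `resolveEngine` helper are assumptions for illustration, not the real DbTool API:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Sketch: inherit the node's engine when a DB directory has no
// engine.properties, instead of silently defaulting to LEVELDB.
public class EngineCheck {

  static String resolveEngine(Path dbDir, String nodeDefault) {
    Path props = dbDir.resolve("engine.properties");
    if (!Files.exists(props)) {
      return nodeDefault; // inherit rather than hardcode LEVELDB
    }
    Properties p = new Properties();
    try (InputStream in = Files.newInputStream(props)) {
      p.load(in);
    } catch (IOException e) {
      return nodeDefault; // unreadable file: fall back to the node engine
    }
    return p.getProperty("ENGINE", nodeDefault);
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("balance-trace");
    // No engine.properties yet: a RocksDB node stays on ROCKSDB.
    System.out.println(resolveEngine(dir, "ROCKSDB")); // ROCKSDB

    Files.write(dir.resolve("engine.properties"), "ENGINE=LEVELDB".getBytes());
    System.out.println(resolveEngine(dir, "ROCKSDB")); // LEVELDB (file wins)
  }
}
```

Under this scheme a freshly created trace DB on a RocksDB node would be opened (and tagged) as ROCKSDB, so the later engine validation at startup would pass.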