Summary
TRON nodes currently do not perform duplicate or length checks for certain lists during the reception of inventory requests, transaction lists, and blockchain synchronization messages. By adding duplicate detection and length constraints at the message reception stage, the node can manage processing more accurately and improve predictability of resource usage.
Root Cause
- FetchInvDataMessage may contain duplicate entries, but the receiving node does not deduplicate at the reception stage.
- TransactionMessage may include duplicate transactions, and the current parsing stage does not check for duplicates, relying instead on caching to skip reprocessing.
- The block summary list in SyncBlockChainMessage has no length limit, and the receiving node does not constrain the list size.
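The duplicate checks described above amount to a single pass over each list. A minimal sketch of such a check, not the java-tron implementation (inventory entries are represented as plain strings here purely for illustration):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DuplicateCheck {
    // Returns true if the list contains any duplicate entry.
    // HashSet.add returns false when the element is already present,
    // so one pass over the list suffices (O(n) time, O(n) space).
    static <T> boolean hasDuplicates(List<T> items) {
        Set<T> seen = new HashSet<>();
        for (T item : items) {
            if (!seen.add(item)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> invEntries = Arrays.asList("tx-a", "tx-b", "tx-a");
        System.out.println(hasDuplicates(invEntries));                      // true
        System.out.println(hasDuplicates(Arrays.asList("tx-a", "tx-b")));   // false
    }
}
```

Running this kind of check before parsing would let the node detect the duplicated FetchInvDataMessage and TransactionMessage entries at reception rather than relying on downstream caching.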
Reproduction
- Start a node and connect it to the P2P network.
- Construct messages containing duplicate inventory entries, duplicate transactions, or an excessively long block summary list.
- Send the messages to the node and observe the reception and processing behavior.
- Observe that the node continues processing without rejecting these messages at the reception stage.
Impact
- Message processing statistics are not entirely accurate: duplicates are still parsed on first reception.
- Node resource management is slightly affected: parsing and caching operations may incur extra overhead when handling many duplicates or long lists.
- Network data handling efficiency may fluctuate slightly, but overall node availability is not impacted.
Suggested Fix
- Add validation logic at the message reception stage:
- Perform duplicate entry checks for FetchInvDataMessage and TransactionMessage lists.
- Enforce a length limit on the block summary list in SyncBlockChainMessage.
- If validation fails, the node can terminate processing early and log the event, improving processing efficiency.
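The suggested reception-stage validation can be sketched as a single guard that rejects over-long or duplicate-bearing lists before any further parsing. This is an illustrative sketch, not the java-tron code: the limit constant and the string-based block ids are assumptions made for the example, and a real limit would come from the protocol's own sync batch constants.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ReceiveValidation {
    // Hypothetical cap on the block summary list length; the actual value
    // would be derived from the protocol's sync fetch batch size.
    static final int MAX_BLOCK_SUMMARY_SIZE = 2000;

    // Validates a SyncBlockChainMessage-style block summary list on reception:
    // rejects lists that exceed the length limit or contain duplicate ids.
    // Returns null when the list is valid, otherwise a reason string
    // suitable for logging before terminating processing early.
    static String validateBlockSummaries(List<String> blockIds) {
        if (blockIds.size() > MAX_BLOCK_SUMMARY_SIZE) {
            return "summary list too long: " + blockIds.size();
        }
        Set<String> seen = new HashSet<>();
        for (String id : blockIds) {
            if (!seen.add(id)) {
                return "duplicate block id: " + id;
            }
        }
        return null; // list is valid
    }

    public static void main(String[] args) {
        System.out.println(validateBlockSummaries(List.of("b1", "b2", "b1")));
        System.out.println(validateBlockSummaries(List.of("b1", "b2")));
    }
}
```

A caller would drop the message and log the returned reason whenever the result is non-null, so invalid messages never reach the parsing and caching layers.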