Conversation
/gemini review
Code Review
This pull request replaces an unbounded HashMap with a bounded moka cache to mitigate a potential memory exhaustion vulnerability. The implementation is largely correct, but includes a potential integer overflow on 32-bit systems when casting the cache entry count. My review includes a suggestion to address this.
Code Review
This pull request replaces an unbounded Mutex<HashMap> with a bounded moka::sync::Cache for the native price cache to address a potential memory exhaustion vulnerability. The implementation adapts the existing logic to moka's API. The review identified one high-severity issue: a potential panic on 32-bit systems due to an integer conversion in the len() method, which could lead to a denial of service.
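For context, the conversion in question comes from moka reporting its size as a u64 while a len() method typically returns usize. A minimal sketch of a panic-free variant, with placeholder key/value types rather than the PR's actual code:

```rust
use moka::sync::Cache;

// Placeholder key/value types; the real cache stores Address -> CachedResult.
fn len(cache: &Cache<[u8; 20], f64>) -> usize {
    // entry_count() returns u64. On a 32-bit target, `as usize` silently
    // truncates and `try_into().unwrap()` panics; saturating to usize::MAX
    // keeps len() total and panic-free.
    usize::try_from(cache.entry_count()).unwrap_or(usize::MAX)
}
```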
The PR and commit don't agree on the type of cache. This is also a bit confusing:
Separating the total would make the description much easier to understand.
It doesn't matter; the squashed commit will use the PR's title as its message. Renamed the PR a bit and updated the breakdown description.
#[derive(Clone)]
pub struct Cache(Arc<CacheInner>);

const MAX_CACHE_SIZE: u64 = 20_000;
You can add the napkin math you did here to help understand why 20k
Git blame should work better for that to avoid long comments.
Git blame helps, but this is what comments are ultimately for. The issue with git blame is that as the file gets moved around one will need to perform an archaeological excavation to find the origin; it's not impossible, just time-consuming.
Regardless, in this case it's not that important.
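For reference, a minimal sketch of how a constant like MAX_CACHE_SIZE is typically applied when building a moka cache, assuming the default weigher where each entry counts as 1 (key/value types are placeholders, not the PR's real ones):

```rust
use moka::sync::Cache;

const MAX_CACHE_SIZE: u64 = 20_000;

fn build_cache() -> Cache<[u8; 20], f64> {
    Cache::builder()
        // Caps the entry count; moka evicts entries once the cache goes
        // over this capacity.
        .max_capacity(MAX_CACHE_SIZE)
        .build()
}
```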
Description
The native price cache is backed by an unbounded Mutex<HashMap<Address, CachedResult>>. Entries are inserted on every price estimate but never removed, so with many unique tokens the cache grows without bound. The size limit is hardcoded at 20k, since on mainnet we have around that number of unique tokens:
Per cache entry breakdown:
At 20,000 entries: ~3.4 MB upper bound
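(For context, the stated upper bound works out to roughly 170 bytes per entry: 20,000 × ~170 B ≈ 3.4 MB.)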
Changes
- Replaced the Mutex<HashMap<...>> cache with moka::sync::Cache.
- No more requested_at to update, since moka tracks access internally.
- Removed the requested_at field from the CachedResult since it is not needed anymore and turned out to be dead code.
- Reworked the insert() function to use entry_by_ref().and_compute_with(), which gives atomic read-modify-write for accumulative_errors_count (see the sketch below).
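A minimal sketch of the read-modify-write pattern described in the last bullet, using placeholder types and field names rather than the PR's actual code:

```rust
use moka::{ops::compute::Op, sync::Cache};

// Illustrative stand-ins, not the PR's real definitions.
type Address = [u8; 20];

#[derive(Clone)]
struct CachedResult {
    price: Result<f64, String>, // placeholder for the real estimate result
    accumulative_errors_count: u32,
}

// The closure observes the current entry (if any) and returns the value to
// store, so accumulative_errors_count is carried over or reset atomically.
fn insert(cache: &Cache<Address, CachedResult>, token: Address, price: Result<f64, String>) {
    cache.entry_by_ref(&token).and_compute_with(|current| {
        let previous_errors = current
            .map(|entry| entry.value().accumulative_errors_count)
            .unwrap_or(0);
        let accumulative_errors_count = if price.is_err() {
            previous_errors + 1
        } else {
            0
        };
        Op::Put(CachedResult {
            price: price.clone(),
            accumulative_errors_count,
        })
    });
}
```

The closure runs while moka holds the entry, so reading the previous error count and writing the new value happen as a single atomic step rather than a separate lock-guarded read and write.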
How to test
Existing tests.
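If an explicit regression test for the size bound is ever wanted, a sketch along these lines (placeholder types and capacity) would exercise it; moka applies evictions lazily, so pending tasks must be flushed before asserting:

```rust
use moka::sync::Cache;

#[test]
fn cache_stays_bounded() {
    let cache: Cache<u64, u64> = Cache::builder().max_capacity(20_000).build();
    for i in 0..40_000u64 {
        cache.insert(i, i);
    }
    // Flush pending evictions before checking the count.
    cache.run_pending_tasks();
    assert!(cache.entry_count() <= 20_000);
}
```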