
Bound native price cache with moka #4154

Merged
squadgazzz merged 4 commits into main from boundry-native-price-cache
Feb 13, 2026

Conversation

@squadgazzz (Contributor) commented Feb 13, 2026

Description

The native price cache is backed by an unbounded Mutex<HashMap<Address, CachedResult>>. Entries are inserted on every price estimate but never removed, so the cache grows without bound as unique tokens accumulate.
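For illustration, a minimal sketch of the old unbounded layout (Address and the CachedResult body here are simplified stand-ins, not the repo's actual definitions):

```rust
use std::{collections::HashMap, sync::Mutex};

type Address = [u8; 20]; // stand-in for the real address type

#[derive(Clone)]
struct CachedResult(Option<f64>); // simplified: just the cached price

// The old shape: every estimate inserts an entry under the lock and
// nothing ever evicts, so memory grows with the number of distinct
// tokens ever seen.
struct CacheInner {
    data: Mutex<HashMap<Address, CachedResult>>,
}

fn main() {
    let inner = CacheInner {
        data: Mutex::new(HashMap::new()),
    };
    for i in 0..3u8 {
        let token = [i; 20];
        inner.data.lock().unwrap().insert(token, CachedResult(None));
    }
    // Three distinct tokens -> three entries, kept forever.
    assert_eq!(inner.data.lock().unwrap().len(), 3);
}
```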

The size limit is currently hardcoded at 20k, since on mainnet we have around that number of unique tokens:

SELECT COUNT(*) AS unique_token_count
FROM (SELECT sell_token AS token
      FROM orders
      UNION
      SELECT buy_token AS token
      FROM orders) t;

Returns -> 20109

Per cache entry breakdown:

  • CachedResult value: ~32 bytes
  • Address key: 20 bytes
  • Moka internal overhead: ~100-120 bytes
  • Total per entry: ~170 bytes

At 20,000 entries: ~3.4 MB upper bound
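A quick check of the arithmetic (the per-entry sizes are the estimates listed above, not measured values):

```rust
fn main() {
    let value_bytes: u64 = 32;     // ~CachedResult value
    let key_bytes: u64 = 20;       // Address key
    let overhead_bytes: u64 = 120; // upper estimate of moka's per-entry overhead
    let per_entry = value_bytes + key_bytes + overhead_bytes;
    let total = per_entry * 20_000;
    println!(
        "~{per_entry} bytes/entry, ~{:.1} MB at 20k entries",
        total as f64 / 1e6
    );
    // prints: ~172 bytes/entry, ~3.4 MB at 20k entries
}
```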

Changes

  • Replace the Mutex<HashMap<...>> cache with moka::sync::Cache.
  • Adapt get_cached_price():
    • No more MutexGuard parameter. Use data.get(&token), which returns an Option (moka clones the value). Check staleness and is_ready() on the returned value. There is no requested_at to update, since moka tracks access internally.
  • Remove the requested_at field from the CachedResult since it is not needed anymore and turned out to be dead code.
  • Adapt the insert() function to use entry_by_ref().and_compute_with(), which gives an atomic read-modify-write for accumulative_errors_count.

How to test

Existing tests.

@squadgazzz squadgazzz changed the title LRU native price cache LFU native price cache Feb 13, 2026
@squadgazzz (Contributor, Author)

/gemini review

@gemini-code-assist bot left a comment

Code Review

This pull request replaces an unbounded HashMap with a bounded moka cache to mitigate a potential memory exhaustion vulnerability. The implementation is largely correct, but it includes a potential integer overflow on 32-bit systems when casting the cache entry count. My review includes a suggestion to address this.

@squadgazzz squadgazzz marked this pull request as ready for review February 13, 2026 11:39
@squadgazzz squadgazzz requested a review from a team as a code owner February 13, 2026 11:39
@gemini-code-assist bot left a comment

Code Review

This pull request replaces an unbounded Mutex<HashMap> with a bounded moka::sync::Cache for the native price cache to address a potential memory exhaustion vulnerability. The implementation adapts the existing logic to moka's API. The review identified one high-severity issue: a potential panic on 32-bit systems due to an integer conversion in the len() method, which could lead to a denial of service.

@jmg-duarte (Contributor) commented Feb 13, 2026

The PR and commit don't agree on the type of cache

This is also a bit confusing:

> Per entry: CachedResult (~32 bytes value) + Address key (20 bytes) + moka overhead (~100-120 bytes for internal tracking) ≈ ~170 bytes. 20,000 × 170 ≈ 3.4 MB.

Separating the total would make the description much easier to understand

@squadgazzz squadgazzz changed the title LFU native price cache Bound native price cache with moka Feb 13, 2026
@squadgazzz (Contributor, Author)

> The PR and commit don't agree on the type of cache

It doesn't matter; the squashed commit will use the PR's title. I've renamed the PR a bit and updated the breakdown description.

@jmg-duarte (Contributor)

> It doesn't matter, the squashed commit will have the PR's title message.

In the end it doesn't, yes, but as a reviewer I found it confusing. Thanks for updating.

#[derive(Clone)]
pub struct Cache(Arc<CacheInner>);

const MAX_CACHE_SIZE: u64 = 20_000;
Contributor:

You can add the napkin math you did here to help understand why 20k

Contributor (Author):

Git blame should work better for that to avoid long comments.

Contributor:

Git blame helps but this is what comments are ultimately for. The issue with git blame is that as the file gets moved around one will need to perform an archeological excavation to find out the origin — it's not impossible, just time consuming

Regardless, in this case, it's not that important

@m-sz (Contributor) left a comment

LGTM, small nits.

@squadgazzz squadgazzz added this pull request to the merge queue Feb 13, 2026
Merged via the queue into main with commit 32b0408 Feb 13, 2026
19 checks passed
@squadgazzz squadgazzz deleted the boundry-native-price-cache branch February 13, 2026 18:04
@github-actions github-actions bot locked and limited conversation to collaborators Feb 13, 2026
