Conversation

@masaori335
Contributor

I found that if OpenDir has its own reader-writer lock, it doesn't need the StripeSM mutex. This means we can avoid the StripeSM mutex lock contention issue for reader-while-writer (RWW) cases.
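To make the intent concrete, here is a minimal, compilable sketch of the idea, not the actual ATS code: an open directory that guards its buckets with its own reader-writer lock, so lookups take only the shared side and never touch the stripe-level mutex. std::shared_mutex stands in for ts::bravo::shared_mutex, and Key/Entry are simplified placeholders for CryptoHash/OpenDirEntry.

// Sketch only: an OpenDir-style directory with its own reader-writer lock.
#include <array>
#include <mutex>
#include <shared_mutex>

struct Key {
  unsigned int v{};
  unsigned int slice32(int) const { return v; }
  bool         operator==(const Key &o) const { return v == o.v; }
};

struct Entry {
  Key    first_key{};
  Entry *next{nullptr};
};

class OpenDirSketch
{
public:
  // Reader path (RWW): shared lock only, no stripe mutex required.
  Entry *
  open_read(const Key &key) const
  {
    std::shared_lock lock(_mutex);
    for (Entry *e = _buckets[key.slice32(0) % BUCKETS]; e; e = e->next) {
      if (e->first_key == key) {
        return e;
      }
    }
    return nullptr;
  }

  // Writer path (open_write/close_write): exclusive lock on the same mutex.
  void
  open_write(Entry *e)
  {
    std::unique_lock lock(_mutex);
    auto &head = _buckets[e->first_key.slice32(0) % BUCKETS];
    e->next    = head;
    head       = e;
  }

private:
  static constexpr int         BUCKETS = 256;
  mutable std::shared_mutex    _mutex;
  std::array<Entry *, BUCKETS> _buckets{};
};

Because readers only share this lock with the occasional writer, the RWW fast path no longer serializes on the stripe-wide mutex.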

Benchmarking RWW is a bit tricky, but one benchmark shows max rps improved by 9.9% under the conditions below.

Conditions:

  • 10 urls
  • plaintext http
  • response body size: 256 bytes
  • origin returns Cache-Control: public, max-age=0 ///< some requests go down the RWW path
  • 63 exec_thread
  • 40 stripes (8 disks x 5 volumes)

Result:

  • vanilla: 58,220.9 req/s
  • patch: 63,999.7 req/s

part of #12788

Contributor

Copilot AI left a comment

Pull request overview

This PR optimizes cache read performance by introducing a dedicated reader-writer lock (ts::bravo::shared_mutex) in the OpenDir class, reducing mutex contention for reader-while-writer (RWW) scenarios. The change allows concurrent read operations on OpenDir without requiring the StripeSM mutex, resulting in a reported 9.9% improvement in maximum requests per second under specific test conditions.

Changes:

  • Added ts::bravo::shared_mutex to OpenDir for fine-grained locking instead of relying on StripeSM mutex
  • Refactored open_read to work without holding the stripe mutex, so the RWW path no longer contends on the stripe lock
  • Introduced new_CacheVC_for_read helper function to reduce code duplication in Cache.cc
  • Added CACHE_EVENT_OPEN_DIR_RETRY event type for handling retries in the new locking scheme (a generic sketch of the retry idiom follows this list)
  • Made OpenDir members private and reorganized API documentation in StripeSM.h
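To illustrate why a retry event helps, the following is a generic, hypothetical sketch of the idiom; the Scheduler and ReadStateMachine types are made up for illustration and are not the ATS API. The point is that an event handler uses try-to-lock semantics and, on contention, re-delivers an event to itself instead of blocking the event thread.

// Hypothetical retry-on-contention idiom; not the PR's actual state machine.
#include <functional>
#include <mutex>
#include <queue>

struct Scheduler {
  std::queue<std::function<void()>> q;
  void schedule(std::function<void()> f) { q.push(std::move(f)); }
  void
  run()
  {
    while (!q.empty()) {
      auto f = std::move(q.front());
      q.pop();
      f();
    }
  }
};

struct ReadStateMachine {
  std::mutex &dir_lock; // stands in for whatever lock the read path needs
  Scheduler  &sched;

  void
  handle_retry()
  {
    // Never block the event thread: try the lock instead of waiting on it.
    std::unique_lock lk(dir_lock, std::try_to_lock);
    if (lk.owns_lock()) {
      // ... proceed with the cache read ...
      return;
    }
    // Contended: schedule another retry event rather than spinning or blocking.
    sched.schedule([this] { handle_retry(); });
  }
};

int
main()
{
  std::mutex       m;
  Scheduler        s;
  ReadStateMachine sm{m, s};
  sm.handle_retry(); // the lock is free here, so the read proceeds immediately
  s.run();
}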

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 6 comments.

Summary per file:

  • src/iocore/cache/StripeSM.h: Added API grouping comments to clarify OpenDir and PreservationTable methods
  • src/iocore/cache/P_CacheDir.h: Changed OpenDir to class with private members, added _shared_mutex for concurrency control
  • src/iocore/cache/CacheDir.cc: Implemented shared/exclusive locking in open_write, close_write, and open_read; updated signal_readers for new locking model
  • src/iocore/cache/Cache.cc: Refactored open_read to check OpenDir without stripe mutex first, added helper function for CacheVC creation
  • include/iocore/cache/CacheDefs.h: Added CACHE_EVENT_OPEN_DIR_RETRY event type for retry handling
Comments suppressed due to low confidence (1)

src/iocore/cache/CacheDir.cc:193

  • Potential use-after-free race condition: The open_read() method returns a raw pointer to an OpenDirEntry while holding a shared lock, but the lock is released when the function returns (line 183 creates a scoped lock). The caller then uses this pointer without any lock protection. Meanwhile, close_write() can acquire the writer lock and free the OpenDirEntry (line 174: THREAD_FREE). This creates a window where the returned OpenDirEntry pointer could be freed while the caller is still using it, leading to a use-after-free. The old code avoided this because both operations held the stripe mutex. Consider using reference counting for OpenDirEntry or ensuring the caller holds some protection until done with the entry. A generic sketch of the reference-counting option follows the snippet below.
OpenDirEntry *
OpenDir::open_read(const CryptoHash *key) const
{
  ts::bravo::shared_lock lock(_shared_mutex);

  unsigned int h = key->slice32(0);
  int          b = h % OPEN_DIR_BUCKETS;
  for (OpenDirEntry *d = _bucket[b].head; d; d = d->link.next) {
    if (d->writers.head->first_key == *key) {
      return d;
    }
  }
  return nullptr;
}
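Here is a hedged sketch of the reference-counting option mentioned above; the names (pin_count, unpin, can_free) are hypothetical and are not the PR's API. The key point is that the entry is pinned while the shared lock is still held, so close_write() cannot free it between open_read() returning and the caller finishing with the entry.

// Illustration only: pin OpenDirEntry-like objects across the shared lock.
#include <atomic>
#include <mutex>
#include <shared_mutex>

struct OpenDirEntrySketch {
  std::atomic<int>    pin_count{0};
  OpenDirEntrySketch *next{nullptr};
  unsigned int        key{};
};

class PinnedOpenDirSketch
{
public:
  // Look up under the shared lock and pin before the lock is released.
  OpenDirEntrySketch *
  open_read(unsigned int key) const
  {
    std::shared_lock lock(_mutex);
    for (auto *e = _head; e; e = e->next) {
      if (e->key == key) {
        e->pin_count.fetch_add(1, std::memory_order_acq_rel);
        return e; // caller must call unpin() when done
      }
    }
    return nullptr;
  }

  static void
  unpin(OpenDirEntrySketch *e)
  {
    e->pin_count.fetch_sub(1, std::memory_order_acq_rel);
  }

  // close_write() path: only free an entry once no reader still pins it;
  // otherwise the free has to be deferred (or the writer retries later).
  bool
  can_free(OpenDirEntrySketch *e)
  {
    std::unique_lock lock(_mutex);
    return e->pin_count.load(std::memory_order_acquire) == 0;
  }

private:
  mutable std::shared_mutex  _mutex;
  OpenDirEntrySketch        *_head{nullptr};
};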

Contributor Author

I found that this callback to the CacheVC can deadlock when it tries to acquire the lock recursively.
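For context, a generic illustration of that hazard with a plain std::shared_mutex, not the ATS code: the thread already holds the shared side, a callback re-enters the same lock, and once a writer is queued between the two acquisitions the thread blocks on itself.

// Do not run this as-is: re-locking a shared_mutex the thread already holds
// is undefined behavior and deadlock-prone in practice.
#include <shared_mutex>

std::shared_mutex dir_mutex;

void
callback_into_cache_vc()
{
  // Second, recursive shared acquisition on the same thread.
  std::shared_lock inner(dir_mutex);
  // ...
}

void
open_read_path()
{
  std::shared_lock outer(dir_mutex); // first shared acquisition
  callback_into_cache_vc();          // re-enters and locks again -> hazard
}

A recursive shared mutex avoids this by letting the same thread re-acquire the shared side it already holds, which is presumably what the ts::bravo::recursive_shared_mutex mentioned in the next comment provides.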

Contributor Author

I introduced ts::bravo::recursive_shared_mutex with a new commit.

@masaori335
Contributor Author

A benchmark of the new commit shows almost the same result: 58,904 rps (vanilla) vs 65,339 rps (patched).
