Skip to content

*: fix reload tls certs in reconnect and fix region cache #537

Open

yongman wants to merge 4 commits into tikv:master from yongman:ym/fix-tls-and-cache

Conversation

@yongman
Contributor

@yongman yongman commented May 9, 2026

Deep Review

Problem Summary

  • The branch fixes two long-lived cache problems:
    • TLS materials were effectively pinned into cached TiKV gRPC clients, so cert/key rotation did not take effect until process restart or a full client rebuild.
    • Region-to-store mapping could get stuck on stale cache state, especially when the cached region pointed at a store entry that could no longer be resolved.
  • The implementation addresses these by reloading TLS files on connection creation, attaching a TLS-dependent cache key to cached TiKV clients, and retrying more store-mapping failures through the existing region backoff path.

Solution Walkthrough

  • src/common/security.rs replaces in-memory ca/cert bytes with stored file paths. SecurityManager::connect() now reloads the CA, cert, and key from disk every time it builds a new TLS Endpoint, which is what enables rotated files to affect future connections.
  • src/common/security.rs also adds connection_cache_key(), which hashes per-file metadata from file_signature() and returns None when TLS is disabled. The intended use is to tell whether a cached TiKV client was created from the same TLS inputs as the current filesystem state.
  • src/store/client.rs extends KvConnect with an optional connection_cache_key() hook. TikvConnect forwards that call to SecurityManager, so the TiKV transport layer can participate in TLS-aware cache invalidation without changing non-TLS connectors.
  • src/pd/client.rs changes kv_client_cache from HashMap<String, KvClient> to HashMap<String, CachedKvClient<_>>. PdRpcClient::kv_client() now reads the current TLS cache key, compares it with the cached entry for the target store, and reconnects when the key changes.
  • src/request/plan.rs broadens retry handling in RetryableMultiRegion::single_shard_handler(). Previously only LeaderNotFound during map_region_to_store() went through handle_other_error(). Now any failure from map_region_to_store() or plan.apply_store() invalidates region cache state, consumes region backoff, and retries the full shard mapping flow.
  • src/request/mod.rs and src/pd/client.rs add tests for the new retry path and cache-key-sensitive TiKV client reuse. src/common/security.rs adds a reload test that proves file rereads and cache-key changes when file contents change.
  • .cargo/config.toml adds resolver.incompatible-rust-versions = "fallback" so Cargo can resolve dependencies on older toolchains in CI.
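The cache-key-gated reuse described above can be sketched as follows. This is a hypothetical, synchronous stand-in: the real `PdRpcClient::kv_client()` is async and lives in src/pd/client.rs, and the field and method shapes below are assumptions, not the PR's exact code.

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the PR's types (names taken from the walkthrough,
// internals assumed).
#[derive(Clone)]
struct KvClient {
    address: String,
}

struct CachedKvClient {
    cache_key: Option<u64>, // None when TLS is disabled
    client: KvClient,
}

struct PdRpcClient {
    kv_client_cache: HashMap<String, CachedKvClient>,
    connections_made: usize,
}

impl PdRpcClient {
    // Reuse the cached client only when the current TLS cache key matches the
    // key recorded at connect time; otherwise (re)connect and re-cache.
    fn kv_client(&mut self, address: &str, current_key: Option<u64>) -> KvClient {
        if let Some(cached) = self.kv_client_cache.get(address) {
            if cached.cache_key == current_key {
                return cached.client.clone();
            }
        }
        self.connections_made += 1; // cache miss or TLS key change
        let client = KvClient { address: address.to_owned() };
        self.kv_client_cache.insert(
            address.to_owned(),
            CachedKvClient { cache_key: current_key, client: client.clone() },
        );
        client
    }
}

fn demo() -> usize {
    let mut pd = PdRpcClient { kv_client_cache: HashMap::new(), connections_made: 0 };
    pd.kv_client("tikv-0:20160", Some(1)); // first call: connect
    pd.kv_client("tikv-0:20160", Some(1)); // same key: cache hit, no new connection
    pd.kv_client("tikv-0:20160", Some(2)); // key changed: reconnect
    pd.connections_made
}

fn main() {
    println!("connections made: {}", demo()); // 2
}
```

The design point is that a TLS rotation invalidates the cached client lazily, on the next lookup, rather than eagerly tearing down existing channels.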

Findings (ordered by severity)

  • Medium: TLS rotation detection is based only on (file length, modified time). A renewed PEM with the same length and an unchanged or coarse-grained mtime will produce the same cache key, so the cached Channel keeps using the old TLS identity indefinitely. This undercuts the main goal of the patch, and the added test only covers size-changing rewrites, so the blind spot is untested. Refs: src/common/security.rs:83-92, src/common/security.rs:161-169, src/common/security.rs:205-241.
    It is not enough to use file length and mtime as the fingerprint; a hash of the file contents is needed to detect same-size rotations.
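The blind spot is easy to demonstrate with standard-library code only. This sketch (filenames and PEM contents are made up for illustration) shows that file length, half of the PR's `(len, mtime)` signature, cannot distinguish a same-size rotation, while a content hash can:

```rust
use std::collections::hash_map::DefaultHasher;
use std::fs;
use std::hash::{Hash, Hasher};

// Content-based fingerprint: changes whenever the bytes change.
fn content_fingerprint(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

fn main() {
    let path = std::env::temp_dir().join("client-rust-rotated-cert-demo.pem");

    // Two "rotations" with identical byte length, as a renewed cert can have.
    let old_pem = b"-----BEGIN CERT-----AAAA-----END CERT-----";
    let new_pem = b"-----BEGIN CERT-----BBBB-----END CERT-----";

    fs::write(&path, old_pem).unwrap();
    let len_old = fs::metadata(&path).unwrap().len();
    fs::write(&path, new_pem).unwrap();
    let len_new = fs::metadata(&path).unwrap().len();
    fs::remove_file(&path).ok();

    // Length cannot tell the rotations apart (and mtime granularity can be
    // coarse enough on some filesystems to collide too)...
    assert_eq!(len_old, len_new);
    // ...while a content hash does.
    assert_ne!(content_fingerprint(old_pem), content_fingerprint(new_pem));
    println!("length collides, content hash differs");
}
```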

Costs and Negative Impacts

  • Correctness: The region-cache retry fix in src/request/plan.rs:166-189 is directionally correct for stale store mapping, but the TLS cache-key design introduces new failure cases where cert-file metadata availability, rather than connection health, decides whether a request can proceed.
  • Security: Reloading TLS materials per new connection is the right security direction because rotated credentials can take effect without a process restart. The risk is that metadata-only change detection can silently keep using an old client identity after rotation.
  • Compatibility: RetryableMultiRegion now retries any map_region_to_store() / apply_store() error instead of only LeaderNotFound. For in-tree request types that mostly means more aggressive recovery, but it is still a behavioral change for any downstream Shardable implementation that used apply_store() errors as terminal failures. Refs: src/request/plan.rs:168-189.
  • Robustness: The new retry path will also retry local TLS/configuration failures after invalidating region cache state, which adds PD churn and delays surfacing deterministic local errors.
  • Cognitive Load: The TiKV client cache is now coupled to TLS-file metadata and request retry behavior. Future maintainers need to reason about connection reuse, secret rotation, and async hot-path filesystem access together.
  • CPU: Extra hashing work is minor, but it is now paid on every TiKV client lookup.
  • Memory: Negligible; cached entries now carry one additional Option<u64>.
  • Log Volume: No major new log volume, though repeated retry loops on local TLS-file errors will amplify existing retry/debug logs.

Engineering Rules Check

  • No repo-local AGENTS.md was found under /home/dev/workspace/client-rust, so there were no repository-specific engineering rules to validate beyond the code itself.

Questions and Assumptions

  • I assumed the review target is origin/HEAD...HEAD because no explicit diff or PR metadata was provided.
  • I assumed the intended behavior for TLS rotation is “use existing healthy channels until a refresh is required, then rebuild with new materials,” because that preserves availability while still honoring rotated credentials.
  • I did not find external PR text, so the problem statement above is inferred from the commit messages bd2c601, 5e975b0, and 29f4ff1.

Suggested Tests / Validation

  • Add a TLS reload test where file contents change but file length does not, and force the file timestamps to remain equal if the platform allows it. That covers the current metadata-signature blind spot.
  • Add a TiKV client cache test where connection_cache_key() temporarily returns an error after a client has already been cached; verify whether requests should reuse the cached client or fail closed.
  • Add a benchmark or targeted perf test for repeated kv_client() cache hits under TLS to measure the cost of the per-request metadata reads.
  • Validation run: cargo test --lib passed locally on May 12, 2026.
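The second suggested test (cache-key lookup failing after a client is cached) can be skeletoned like this. Everything here is hypothetical: `Connector` is a sync stand-in for the async `KvConnect` trait, and the availability-first fallback in `reuse_cached` is one possible policy, not the PR's actual behavior; pinning down which policy is wanted is the point of the test.

```rust
// Hypothetical sync stand-in for KvConnect's connection_cache_key() hook.
trait Connector {
    fn connection_cache_key(&self) -> Result<Option<u64>, String>;
}

// Test double whose key lookup can be made to fail, simulating cert-file
// metadata becoming unreadable after a client was already cached.
struct FlakyTls {
    fail: bool,
}

impl Connector for FlakyTls {
    fn connection_cache_key(&self) -> Result<Option<u64>, String> {
        if self.fail {
            Err("cannot stat cert file".to_owned())
        } else {
            Ok(Some(42))
        }
    }
}

// One possible policy: fall back to the cached client when the key lookup
// errors, rather than failing the request closed. (An assumption for this
// sketch; the alternative is to propagate the error.)
fn reuse_cached(conn: &dyn Connector, cached_key: Option<u64>) -> bool {
    match conn.connection_cache_key() {
        Ok(current) => current == cached_key,
        Err(_) => true, // availability-first fallback
    }
}

fn main() {
    assert!(reuse_cached(&FlakyTls { fail: false }, Some(42))); // keys match: reuse
    assert!(!reuse_cached(&FlakyTls { fail: false }, Some(7))); // key changed: reconnect
    assert!(reuse_cached(&FlakyTls { fail: true }, Some(7))); // lookup failed: fall back
    println!("ok");
}
```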

Summary by CodeRabbit

  • Refactor

    • Improved connection caching with intelligent invalidation mechanisms
    • Enhanced TLS certificate management for better resource efficiency
    • Refined error handling in region store mapping operations
  • Tests

    • Added comprehensive tests for connection cache behavior and region store mapping retries
  • Chores

    • Updated build configuration and dependencies

Review Change Stack

Signed-off-by: Ray Yan <yming0221@gmail.com>
@ti-chi-bot ti-chi-bot Bot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. dco-signoff: yes Indicates the PR's author has signed the dco. labels May 9, 2026
@ti-chi-bot

ti-chi-bot Bot commented May 9, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign marsishandsome for approval. For more information see the Code Review Process.
Please ensure that each of them provides their approval before proceeding.

The full list of commands accepted by this bot can be found here.

Details Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@coderabbitai

coderabbitai Bot commented May 9, 2026

📝 Walkthrough

Walkthrough

This PR implements connection caching keyed by TLS file signatures. SecurityManager now stores certificate file paths and computes cache keys asynchronously from filesystem metadata. The KvConnect trait exposes connection_cache_key(), enabling PdRpcClient to conditionally cache and reuse clients when cache keys remain stable. Request plan error handling is consolidated to support cache invalidation on region mapping changes.

Changes

Connection Cache Key Feature

Layer / File(s) — Summary

  • Build configuration and dependencies (.cargo/config.toml, Cargo.toml): Cargo resolver configured for incompatible Rust versions, and the tokio fs feature added to support file-based cache key derivation.
  • SecurityManager refactoring to use file paths and file-based cache keys (src/common/security.rs): SecurityManager replaces in-memory PEM bytes with optional PathBuf fields for CA, certificate, and private key. connection_cache_key() derives a hash from per-file filesystem signatures using file size and modified timestamp. load_tls_materials() loads PEM data on demand from stored paths. Tests verify material reloading and cache key changes after file modification.
  • KvConnect trait cache key method (src/store/client.rs): KvConnect adds an async connection_cache_key() method with a default returning Ok(None). TikvConnect implements it by delegating to security_mgr.connection_cache_key().await.
  • PdRpcClient conditional caching with cache key validation (src/pd/client.rs): PdRpcClient wraps cached KV clients in CachedKvClient to store both client and cache key. kv_client() consults the current cache key and reuses cached clients only when keys match; cache misses or key changes trigger new connections. Tests verify cache hits when cache keys are stable and connection increases when keys change.
  • Request plan error handling consolidation (src/request/plan.rs, src/request/mod.rs): RetryableMultiRegion::single_shard_handler precomputes region_ver_id and store_id once, then consolidates all region-to-store mapping error handling via handle_other_error with precomputed values. New test validates store mapping retry behavior with PD cache invalidations.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐰 A path-based key, not bytes in store,
TLS files watched, cache keys explore,
When signatures change, connections renew,
Smart caching through hashes keeps clients in queue!

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 14.29%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title check — ✅ Passed: the title directly addresses the main changes; TLS certificate reloading in reconnect and region cache fixes are the primary focus of this PR.
  • Linked Issues check — ✅ Passed: check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check — ✅ Passed: check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@ti-chi-bot ti-chi-bot Bot added contribution This PR is from a community contributor. size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels May 9, 2026
Signed-off-by: Ray Yan <yming0221@gmail.com>
@yongman yongman force-pushed the ym/fix-tls-and-cache branch from 9e8125c to ea87c9d Compare May 12, 2026 03:23
Signed-off-by: Ray Yan <yming0221@gmail.com>
@yongman yongman force-pushed the ym/fix-tls-and-cache branch from ea87c9d to b2c2777 Compare May 12, 2026 08:39
@ti-chi-bot ti-chi-bot Bot added size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. and removed size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels May 12, 2026
Signed-off-by: Ray Yan <yming0221@gmail.com>
@yongman yongman force-pushed the ym/fix-tls-and-cache branch from b2c2777 to 7c798eb Compare May 12, 2026 10:07
@ti-chi-bot ti-chi-bot Bot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels May 12, 2026
@yongman yongman marked this pull request as ready for review May 14, 2026 06:20
@ti-chi-bot ti-chi-bot Bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label May 14, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/pd/client.rs (1)

346-371: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Potential cache key race condition.

Between fetching the cache_key (line 347) and storing the newly connected client (lines 358-366), the underlying TLS certificates could change. This would cache the client with a stale key, causing the next call to incorrectly reuse it until the certificates change again.

Consider refetching the cache key immediately after connect() succeeds and before inserting into the cache:

match self.kv_connect.connect(address).await {
    Ok(client) => {
        // Refetch cache key to ensure consistency
        if let Ok(current_key) = self.kv_connect.connection_cache_key().await {
            self.kv_client_cache.write().await.insert(
                address.to_owned(),
                CachedKvClient {
                    cache_key: current_key,
                    client: client.clone(),
                },
            );
        }
        Ok(client)
    }
    Err(e) => Err(e),
}
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/pd/client.rs` around lines 346 - 371, The kv_client method has a race
where the cache_key is read before connect() and may become stale by the time
you insert a newly connected client; after a successful
self.kv_connect.connect(address).await, refetch the cache key by calling
self.kv_connect.connection_cache_key().await and only insert into
self.kv_client_cache (CachedKvClient { cache_key: current_key, client:
client.clone() }) when that refetch returns Ok(current_key); keep returning the
client either way but avoid caching with the original pre-connect key (update
code in kv_client around the connect() match and the cache insertion logic to
use the refetched key).
🧹 Nitpick comments (2)
.cargo/config.toml (1)

1-2: ⚡ Quick win

Provide context for the resolver configuration change.

The addition of incompatible-rust-versions = "fallback" is not explained in the PR description or commit message. Please document why this Cargo resolver setting is necessary—for example, if it addresses a specific dependency conflict or build failure scenario.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In @.cargo/config.toml around lines 1 - 2, The change adding the Cargo resolver
setting incompatible-rust-versions = "fallback" in .cargo/config.toml lacks
explanation; update the PR description or the commit message to document why
this resolver behavior is required (e.g., specific dependency version conflict,
build failure on older/newer rustc, or to allow fallback resolution for a
crate), mention which dependency or build scenario it addresses, and note any
testing you did to validate the change so reviewers can understand and reproduce
the rationale for using incompatible-rust-versions = "fallback".
src/common/security.rs (1)

211-248: ⚡ Quick win

Add test case for same-size certificate updates.

The current test verifies that cache keys change when file sizes differ. However, it doesn't validate the behavior when certificates of the same size are rotated (a common scenario with fixed-length key formats). Consider adding a test case that:

  1. Writes files with identical sizes but different content
  2. Optionally preserves mtime using filetime crate
  3. Verifies whether the cache key changes

This will surface the fingerprinting limitation flagged in the connection_cache_key() review and help validate any future fix.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/common/security.rs` around lines 211 - 248, Add a new async test
(alongside test_security_reload) that exercises SecurityManager::load and
connection_cache_key for rotated certs with identical file sizes: create temp
files for ca/cert/key with initial contents, load via SecurityManager::load and
capture first = load_tls_materials() and key1 = connection_cache_key(). Then
overwrite the three files with different content but exactly the same byte
lengths (use padding) and optionally set identical or different mtimes using the
filetime crate, reload to get second = load_tls_materials() and key2 =
connection_cache_key(), and assert that the material bytes differ while checking
whether key1 != key2 (to document current behavior); reference
test_security_reload, SecurityManager::load, load_tls_materials, and
connection_cache_key when locating insertion point.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 01c444fc-e0da-4389-a2b6-62b50fe48b8b

📥 Commits

Reviewing files that changed from the base of the PR and between 26d6c72 and 7c798eb.

📒 Files selected for processing (7)
  • .cargo/config.toml
  • Cargo.toml
  • src/common/security.rs
  • src/pd/client.rs
  • src/request/mod.rs
  • src/request/plan.rs
  • src/store/client.rs

Comment thread src/common/security.rs
Comment on lines +82 to +98
pub(crate) async fn connection_cache_key(&self) -> Result<Option<u64>> {
if !self.tls_configured() {
return Ok(None);
}

let mut hasher = DefaultHasher::new();
file_signature(self.ca_path.as_ref().expect("tls_configured checked"))
.await?
.hash(&mut hasher);
file_signature(self.cert_path.as_ref().expect("tls_configured checked"))
.await?
.hash(&mut hasher);
file_signature(self.key_path.as_ref().expect("tls_configured checked"))
.await?
.hash(&mut hasher);
Ok(Some(hasher.finish()))
}

⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

File signature is too weak for reliable cache invalidation.

The cache key derivation uses only file length and modification time, which creates several risks:

  1. False cache hits: Two different certificate files with the same size and mtime will produce identical cache keys, potentially reusing stale TLS connections.
  2. Missed rotations: In-place certificate updates that preserve file size (e.g., same key length, padding) and restoration of mtime will not invalidate the cache.
  3. Security: An attacker with filesystem access could manipulate mtime to prevent certificate rotation from taking effect.

Consider hashing file content instead of relying on metadata. Alternatively, if performance is critical, use a cryptographic hash of the first and last N bytes, or rely on inode + device ID on Unix systems.

🔐 Proposed approach using content hashing

Replace file_signature() with content-based hashing:

-async fn file_signature(path: &Path) -> Result<(u64, Option<u128>)> {
-    let metadata = tokio::fs::metadata(path)
-        .await
-        .map_err(|e| internal_err!("failed to stat {}: {:?}", path.display(), e))?;
-    let modified = metadata.modified().ok().and_then(|t: SystemTime| {
-        t.duration_since(SystemTime::UNIX_EPOCH)
-            .ok()
-            .map(|d| d.as_nanos())
-    });
-    Ok((metadata.len(), modified))
+async fn file_signature(path: &Path) -> Result<u64> {
+    let contents = tokio::fs::read(path)
+        .await
+        .map_err(|e| internal_err!("failed to read {}: {:?}", path.display(), e))?;
+    let mut hasher = DefaultHasher::new();
+    contents.hash(&mut hasher);
+    Ok(hasher.finish())
 }

Then update connection_cache_key():

     let mut hasher = DefaultHasher::new();
-    file_signature(self.ca_path.as_ref().expect("tls_configured checked"))
-        .await?
-        .hash(&mut hasher);
-    file_signature(self.cert_path.as_ref().expect("tls_configured checked"))
-        .await?
-        .hash(&mut hasher);
-    file_signature(self.key_path.as_ref().expect("tls_configured checked"))
-        .await?
-        .hash(&mut hasher);
+    hasher.write_u64(
+        file_signature(self.ca_path.as_ref().expect("tls_configured checked")).await?
+    );
+    hasher.write_u64(
+        file_signature(self.cert_path.as_ref().expect("tls_configured checked")).await?
+    );
+    hasher.write_u64(
+        file_signature(self.key_path.as_ref().expect("tls_configured checked")).await?
+    );
     Ok(Some(hasher.finish()))
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/common/security.rs` around lines 82 - 98, connection_cache_key currently
uses file_signature (mtime/size) which is too weak; replace that with a
content-based cryptographic hash to reliably detect certificate/key rotations.
In connection_cache_key(), instead of calling file_signature(...) for ca_path,
cert_path and key_path, open each file and compute a SHA-256 (or other crypto
hash) over its contents (or if performance is a concern, a SHA-256 of first+last
N bytes), feed those digest bytes into the hasher (or combine them directly) and
return hasher.finish(); ensure you still respect tls_configured() and the same
ca_path/cert_path/key_path Option unwrapping logic. Use function names
connection_cache_key, file_signature (remove/replace),
ca_path/cert_path/key_path to locate the changes.


Labels

contribution This PR is from a community contributor. dco-signoff: yes Indicates the PR's author has signed the dco. size/XL Denotes a PR that changes 500-999 lines, ignoring generated files.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant