
docs: Add Rush-FS performance comparison report #8

Open
CoderSerio wants to merge 4 commits into main from docs/benchmark-rush-fs

Conversation

@CoderSerio (Owner) commented Mar 17, 2026

📊 Performance Analysis

A detailed comparison of Node.js fs and Rush-FS performance, providing data to support the Rush-FS integration decision.

Key findings:

| Scenario | Node.js fs | Rush-FS | Verdict |
|---|---|---|---|
| readFile (100 files) | 2.22 ms | 0.53 ms | 4.23x faster |
| readdir (1550 files) | 4.68 ms | 5.65 ms | ⚠️ slightly slower at small scale |

Analysis:

  • NAPI bridging overhead makes Rush-FS slightly slower on small-scale readdir
  • Rush-FS's advantage shows at large scale (30k+ files; upstream reports 12x)
  • readFile is LiteWiki's core scenario, so a ~4x gain is significant

🔗 Related PR

📋 Reproduction

git checkout benchmark/node-fs-baseline && node benchmark/fs-benchmark.mjs
git checkout main && node benchmark/fs-benchmark.mjs

Summary by CodeRabbit

  • Documentation

    • Added a comprehensive benchmark report and two follow-up documents outlining findings, reproducible results, analysis, and suggested performance optimizations for recursive directory listing.
  • Tests

    • Added a benchmark script that runs multiple trials, captures timing and memory metrics, prints comparisons, and emits structured JSON results.
  • Chores

    • Included generated benchmark result artifacts for review and tracking.

- Create benchmark script (fs-benchmark.mjs)
- Performance analysis report (BENCHMARK_RESULTS.md)
- Test data (benchmark-*.json)

Test results:
- readdir recursive (1550 files): Node.js slightly faster (NAPI overhead at small scale)
- readFile batch (100 files): Rush-FS 4.23x faster ✅
- Memory overhead: Rush-FS +300-500 KB (acceptable)

Related PR: #9 (Rush-FS integration)
@coderabbitai bot commented Mar 17, 2026

📝 Walkthrough

Adds a benchmarking suite and results: a Node.js runner that optionally uses Rush‑FS to compare recursive readdir and batched readFile performance (timing and memory), generates human-readable summaries, writes timestamped JSON artifacts, and includes proposal/issue docs for performance degradation strategies.

Changes

| Cohort / File(s) | Summary |
|---|---|
| Benchmark script (benchmark/fs-benchmark.mjs) | New Node.js benchmark runner that optionally loads @rush-fs/core, creates recursive test data (~1550 files), runs repeated readdir (recursive) and batched readFile benchmarks, collects timing and memory deltas, prints comparison tables, and writes timestamped JSON results. |
| Benchmark results (benchmark/benchmark-1773743187125.json, benchmark/benchmark-1773747981760.json) | New recorded benchmark artifacts containing environment metadata and per-run stats (duration avg/min/max, memory deltas/peaks, runs) for readdir and readFile comparisons between Node and Rush. |
| Documentation & proposals (benchmark/BENCHMARK_RESULTS.md, benchmark/RUSH_FS_GITHUB_ISSUE.md, benchmark/RUSH_FS_SUGGESTION.md) | New human-readable benchmark report plus two design/proposal documents describing observed performance regressions for small trees, proposed "smart degradation" strategies (depth/size/configurable thresholds), pseudocode, testing guidance, and implementation notes. |

Sequence Diagram(s)

sequenceDiagram
  participant Script as Benchmark Script
  participant NodeFS as Node.js fs
  participant RushFS as Rush-FS (optional)
  participant FS as Filesystem
  participant Storage as JSON Artifact

  Script->>Script: detect Rush-FS availability
  alt Rush-FS available
    Script->>RushFS: run readdir recursive / batch readFile
    RushFS->>FS: native N-API I/O
    FS-->>RushFS: file entries / data
    RushFS-->>Script: timings & memory deltas
  else Rush-FS not available
    Script->>NodeFS: run readdir recursive / batch readFile
    NodeFS->>FS: libuv I/O
    FS-->>NodeFS: file entries / data
    NodeFS-->>Script: timings & memory deltas
  end
  Script->>Script: aggregate stats (avg/min/max, memory)
  Script->>Storage: write timestamped JSON results
  Script-->>Console: print comparison tables and summary

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐇📈
I hopped through folders, counted each bite,
Timed every read from morning to night,
I logged tiny peaks and big parade—
JSON carrots for trails I made. 🥕

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 69.23%, below the required 80.00% threshold. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped; CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title 'docs: 添加 Rush-FS 性能对比报告' (Add Rush-FS Performance Comparison Report) directly and specifically describes the main change: adding benchmark documentation and results comparing Node.js fs with Rush-FS performance. |



Issues flagged by the CodeRabbit review:
- Remove the writeFile touch operation from the Node.js readFile test
- Remove the per-file dynamic import() overhead; use a static import instead
- Report peak memory instead of delta (avoids misleading negative values after GC)
- Add divide-by-zero and negative-value guards to the ratio calculation

Corrected results:
- readFile speedup drops from 4.23x to 2.93x (more realistic)
- readdir result essentially unchanged (-0.86x)

Refs: #8
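The ratio guard mentioned above can be sketched as follows. This is an illustrative helper: the name `safeRatio` and its exact shape are assumptions for this sketch, not code from the repository.

```javascript
// Illustrative sketch of the divide-by-zero / negative-value guard for
// speedup ratios; safeRatio is a hypothetical name, not from the repo.
function safeRatio(baselineMs, candidateMs) {
  // Reject zero, negative, or non-finite durations before dividing,
  // so the report never shows Infinity, NaN, or a negative multiplier.
  if (
    !Number.isFinite(baselineMs) || baselineMs <= 0 ||
    !Number.isFinite(candidateMs) || candidateMs <= 0
  ) {
    return null; // "ratio not meaningful"
  }
  return baselineMs / candidateMs;
}
```

A `null` result can then be rendered as "n/a" in the comparison table instead of a misleading number.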
@coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (1)
benchmark/BENCHMARK_RESULTS.md (1)

5-8: Pin the compared revisions to immutable SHAs.

Using moving branch names like main and benchmark/node-fs-baseline makes this report drift over time. Please record the exact commit for each run next to the JSON artifact so the comparison stays reproducible.

Also applies to: 122-139

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@benchmark/BENCHMARK_RESULTS.md` around lines 5 - 8, Replace moving branch
names with immutable commit SHAs in the benchmark report: update the "Baseline:
`benchmark/node-fs-baseline`" and "Feature: `main`" entries to include the exact
commit SHA used for each run (e.g., `benchmark/node-fs-baseline@<SHA>` and
`main@<SHA>`), and annotate the JSON artifact entries with the corresponding
SHAs so the comparison is reproducible; also apply the same change to the other
occurrences referenced (around lines 122-139) so all comparisons reference
explicit commit identifiers rather than branch names.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@benchmark/BENCHMARK_RESULTS.md`:
- Around line 108-116: The claim "整体性能提升 **2-4x**" is overstated because
benchmark/fs-benchmark.mjs only runs benchmarkReaddirRecursive() and
benchmarkReadFile(); update the doc and/or tests to reflect that: either change
the phrasing in BENCHMARK_RESULTS.md to present "2-4x" as a hypothesis or
partial result (e.g., "observed 2-4x in readFile microbenchmarks; overall gain
unverified"), or extend benchmark/fs-benchmark.mjs to include mixed and
end-to-end scenarios (add a combined run that exercises
readdir+readFile+writeFile) and re-run before claiming an overall 2-4x
improvement; reference functions benchmarkReaddirRecursive and benchmarkReadFile
when making the change.

In `@benchmark/fs-benchmark.mjs`:
- Around line 29-35: formatBytes currently breaks when passed non-positive or
NaN values (caused by using memoryAfter.heapUsed for memoryPeak and computing
memoryDelta), so update the measurement and formatting: ensure memoryPeak and
memoryDelta are derived from a reliable metric (e.g., process.memoryUsage().rss
or a sampled max-rss) rather than only heapUsed (references: memoryPeak,
memoryDelta, memoryAfter.heapUsed), and make formatBytes robust by clamping
inputs to >=0 and returning "0 B" for NaN/<=0 values; this prevents Math.log()
from receiving invalid values and avoids producing "NaN undefined" outputs.
- Around line 162-193: The benchmark currently measures JS-side recursion in
nodeReaddirRecursive and rushReaddirRecursive instead of exercising Rush-FS's
optimized recursive path; update rushReaddirRecursive to call
rushFs.readdir(dir, { recursive: true, withFileTypes: true }) (or the wrapper in
src/utils/fs.ts) and use its returned entries directly so the implementation
uses the native recursive fast-path rather than performing manual recursion and
repeated JS↔native crossings.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: ef46a24a-9607-4620-8c36-370789b46c6c

📥 Commits

Reviewing files that changed from the base of the PR and between f277289 and bfeccdb.

📒 Files selected for processing (2)
  • benchmark/BENCHMARK_RESULTS.md
  • benchmark/fs-benchmark.mjs

Comment thread benchmark/fs-benchmark.mjs
Comment on lines +29 to +35
function formatBytes(bytes) {
  if (bytes === 0) return "0 B";
  const k = 1024;
  const sizes = ["B", "KB", "MB", "GB"];
  const i = Math.floor(Math.log(bytes) / Math.log(k));
  return Math.round((bytes / k ** i) * 100) / 100 + " " + sizes[i];
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n benchmark/fs-benchmark.mjs

Repository: CoderSerio/litewiki

Length of output: 13285


Measurement methodology fundamentally flawed for peak-memory claims

Line 74 confirms memoryPeak: memoryAfter.heapUsed, which captures only post-run heap snapshots. Since heapUsed excludes native/Rust allocations and can decrease after GC, memoryDelta (line 73) can become negative. This breaks formatBytes() (lines 35-41) which uses Math.log() on the input—passing negative values produces NaN, resulting in malformed output like "NaN undefined".

Additionally, this measurement approach cannot reliably support peak-memory metrics or overhead analysis claims, as heapUsed reflects only JavaScript heap state at measurement points, not actual peak memory usage during execution.
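The failure mode described above can be reproduced in isolation. The snippet below copies the original `formatBytes` and adds a hardened variant; `formatBytesSafe` is an illustrative name for this sketch, not a proposed repo API.

```javascript
// Original helper from the benchmark, reproduced verbatim.
function formatBytes(bytes) {
  if (bytes === 0) return "0 B";
  const k = 1024;
  const sizes = ["B", "KB", "MB", "GB"];
  const i = Math.floor(Math.log(bytes) / Math.log(k));
  return Math.round((bytes / k ** i) * 100) / 100 + " " + sizes[i];
}

// formatBytes(-2048): Math.log(-2048) is NaN, so i becomes NaN and
// sizes[NaN] is undefined, yielding the string "NaN undefined".

// Hardened variant: clamp invalid input and bound the unit index.
function formatBytesSafe(bytes) {
  if (!Number.isFinite(bytes) || bytes <= 0) return "0 B";
  const k = 1024;
  const sizes = ["B", "KB", "MB", "GB"];
  const i = Math.min(Math.floor(Math.log(bytes) / Math.log(k)), sizes.length - 1);
  return Math.round((bytes / k ** i) * 100) / 100 + " " + sizes[i];
}
```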


The earlier code called rushFs.readdir recursively from JS, causing 156 NAPI bridge crossings.
The correct usage is a single call with recursive: true, letting Rust parallelize internally.

Performance after the fix:
- readdir: from -0.86x up to **2.46x** ✅
- readFile: holds at ~2.75x ✅

Conclusion: the Rush-FS implementation is correct; the earlier result came from incorrect usage.
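The corrected usage boils down to one readdir call with `recursive: true` plus a files-only filter (so the result matches a manual walk that collects only files). A minimal sketch using Node's built-in fs, on the assumption that @rush-fs/core mirrors these readdir semantics; the sync variant is used here only for brevity, and it needs a Node version that supports the `recursive` readdir option.

```javascript
import { mkdtempSync, mkdirSync, writeFileSync, readdirSync, rmSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// One recursive readdir call (a single bridge crossing with Rush-FS)
// instead of per-directory recursion driven from JS.
function listFilesRecursive(dir) {
  const entries = readdirSync(dir, { recursive: true, withFileTypes: true });
  // Keep only files so the result matches a files-only manual walk.
  return entries.filter((e) => e.isFile()).map((e) => e.name);
}
```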
@coderabbitai bot left a comment
Actionable comments posted: 2

♻️ Duplicate comments (1)
benchmark/fs-benchmark.mjs (1)

35-40: ⚠️ Potential issue | 🟠 Major

memoryPeak is mislabeled and still not a real peak metric.

Line 74 records only post-run heapUsed, so reported memory.peak* is not “peak during run”. Also, Line 39 can receive invalid values if negative/NaN bytes are ever formatted.

Suggested fix
 function formatBytes(bytes) {
-  if (bytes === 0) return "0 B";
+  if (!Number.isFinite(bytes) || bytes <= 0) return "0 B";
   const k = 1024;
   const sizes = ["B", "KB", "MB", "GB"];
   const i = Math.floor(Math.log(bytes) / Math.log(k));
   return Math.round((bytes / k ** i) * 100) / 100 + " " + sizes[i];
 }
@@
   addRun(duration, memoryBefore, memoryAfter) {
+    const rssDelta = memoryAfter.rss - memoryBefore.rss;
     this.results.push({
       duration,
-      memoryDelta: memoryAfter.heapUsed - memoryBefore.heapUsed,
-      memoryPeak: memoryAfter.heapUsed,
+      memoryDelta: rssDelta,
+      memoryPeak: Math.max(memoryBefore.rss, memoryAfter.rss),
     });
   }

Also applies to: 70-75

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@benchmark/fs-benchmark.mjs` around lines 35 - 40, formatBytes currently can
produce NaN/negative index errors for invalid inputs and the recorded memoryPeak
is not the true peak (it only reads post-run heapUsed); fix formatBytes to
validate and clamp the bytes argument (treat non-number/negative as 0) and
ensure the computed index i is bounded within sizes.length-1 before using
sizes[i]; for peak memory, stop assigning memoryPeak from a single post-run
process.memoryUsage().heapUsed and instead introduce a maxMemory (or memoryPeak)
tracker that is initialized to 0 and updated during the benchmark (e.g., sample
process.memoryUsage().heapUsed at appropriate points or after each major
operation and set memoryPeak = Math.max(memoryPeak, current.heapUsed)) then
report that tracked peak.
🧹 Nitpick comments (1)
benchmark/fs-benchmark.mjs (1)

329-345: Use finally so test data is always cleaned up.

If a benchmark throws before Line 344, benchmark/test-data is left behind.

Suggested fix
   await setupTestData();

   const results = {
@@
     tests: {},
   };

-  results.tests.readdir = await benchmarkReaddirRecursive();
-  results.tests.readFile = await benchmarkReadFile();
-
-  // 清理
-  await rm(TEST_DIR, { recursive: true, force: true });
-  console.log("\n🧹 测试数据已清理");
+  try {
+    results.tests.readdir = await benchmarkReaddirRecursive();
+    results.tests.readFile = await benchmarkReadFile();
+  } finally {
+    await rm(TEST_DIR, { recursive: true, force: true });
+    console.log("\n🧹 测试数据已清理");
+  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@benchmark/fs-benchmark.mjs` around lines 329 - 345, Wrap the test run and
result collection (calls to setupTestData, benchmarkReaddirRecursive,
benchmarkReadFile) in a try block and move the cleanup call to rm(TEST_DIR, {
recursive: true, force: true }) into a finally block so TEST_DIR is always
removed even if a benchmark throws; ensure console.log("🧹 测试数据已清理") remains
after the rm call in the finally to indicate successful cleanup.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@benchmark/fs-benchmark.mjs`:
- Line 357: The current top-level call main().catch(console.error) only logs
failures but leaves the process exit code as zero; change the catch handler on
the main() invocation to log the error and call process.exit(1) so CI detects
failures (i.e., replace the current catch on main() with a handler that calls
console.error(err) and then process.exit(1)).
- Around line 168-191: rushReaddirRecursive returns both files and directories
but nodeReaddirRecursive returns only files, so filter rushFs.readdir results to
match; update rushReaddirRecursive (the call to rushFs.readdir and the
subsequent entries.map) to first filter entries by file type (e.g., e.isFile())
and then map to e.name so the benchmark compares only files.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 9a9e7703-0299-4e97-aabb-5b6d6d348f01

📥 Commits

Reviewing files that changed from the base of the PR and between bfeccdb and 096e42f.

📒 Files selected for processing (3)
  • benchmark/benchmark-1773743187125.json
  • benchmark/benchmark-1773747981760.json
  • benchmark/fs-benchmark.mjs
✅ Files skipped from review due to trivial changes (1)
  • benchmark/benchmark-1773743187125.json

Comment on lines +168 to +191
async function nodeReaddirRecursive(dir) {
  const entries = await nodeReaddir(dir, { withFileTypes: true });
  const results = [];

  for (const entry of entries) {
    const fullPath = join(dir, entry.name);
    if (entry.isDirectory()) {
      results.push(...(await nodeReaddirRecursive(fullPath)));
    } else {
      results.push(entry.name);
    }
  }

  return results;
}

// Rush-FS - use recursive mode (single NAPI call, internal parallelism)
async function rushReaddirRecursive(dir) {
  const entries = await rushFs.readdir(dir, {
    withFileTypes: true,
    recursive: true,
  });
  return entries.map((e) => e.name);
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

For @rush-fs/core, what exactly does readdir(path, { recursive: true, withFileTypes: true }) return? Does it include both files and directories, and in what shape?

💡 Result:

In @rush-fs/core, readdir(path, { recursive: true, withFileTypes: true }) follows Node’s fs/promises.readdir() semantics:

  • Return type: Promise<fs.Dirent[]> (an array of Dirent objects, not strings) when withFileTypes: true. [1]
  • What it includes (with recursive: true): it lists everything under path recursively — i.e. files and directories (and “sub files”). [1]
  • Shape of each entry: each array element is an fs.Dirent with:
    • name: string
    • parentPath: string (the path to the entry’s parent directory)
    • type-test methods like isFile(), isDirectory(), isSymbolicLink(), etc. [1][2]

So you typically build a full path as join(dirent.parentPath, dirent.name) (because name is just the basename). [2]

Sources:
[1] Node.js docs: fsPromises.readdir(path[, options]) (recursive + withFileTypes behavior) (nodejs.org)
[2] Node.js docs: fs.Dirent.parentPath (nodejs.org)



🏁 Script executed:

cd benchmark && cat -n fs-benchmark.mjs | sed -n '168,191p'

Repository: CoderSerio/litewiki

Length of output: 906


Correct benchmark comparison: filter out directories from Rush-FS results.

nodeReaddirRecursive collects only files (recursively walking directories), while rushReaddirRecursive returns all entries—files and directories—from its single recursive call. The comparison measures unequal workloads.

Apply the suggested filter to normalize the Rush-FS path:

Fix
   async function rushReaddirRecursive(dir) {
     const entries = await rushFs.readdir(dir, {
       withFileTypes: true,
       recursive: true,
     });
-    return entries.map((e) => e.name);
+    return entries
+      .filter((e) => !e.isDirectory?.())
+      .map((e) => e.name);
   }

console.log(`\n💾 结果已保存:${outputFile}`);
}

main().catch(console.error);

⚠️ Potential issue | 🟠 Major

Set non-zero exit code on failure.

Line 357 logs errors but can still exit successfully, which can hide benchmark failures in CI.

Suggested fix
-main().catch(console.error);
+main().catch((err) => {
+  console.error(err);
+  process.exitCode = 1;
+});

Performance-optimization suggestions prepared for the Rush-FS repository:
- Problem analysis: forced parallelism on small directory trees degrades performance
- Proposed approach: a smart degradation strategy based on depth/scale
- Expected benefit: small-scale workloads go from a net loss to a net gain

Files:
- RUSH_FS_SUGGESTION.md (detailed Chinese version)
- RUSH_FS_GITHUB_ISSUE.md (English version, ready to submit upstream)

Refs: #8
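The "smart degradation" idea can be sketched as a simple threshold check. The cutoff value and names below are hypothetical, chosen only to illustrate the strategy; the actual proposal is in RUSH_FS_SUGGESTION.md.

```javascript
// Hypothetical sketch of size-based degradation: below a cutoff,
// a sequential walk avoids parallelism/bridging setup overhead.
const PARALLEL_THRESHOLD = 1000; // illustrative cutoff, not Rush-FS's value

function chooseReaddirStrategy(estimatedEntryCount) {
  return estimatedEntryCount < PARALLEL_THRESHOLD
    ? "sequential" // small trees: skip thread-pool setup cost
    : "parallel";  // large trees: internal parallel traversal wins
}
```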
@coderabbitai bot left a comment
Actionable comments posted: 5

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@benchmark/RUSH_FS_GITHUB_ISSUE.md`:
- Around line 19-21: Update the benchmark table row containing "Rush-FS
(recursive calls in JS ❌) | 5.50 ms | **-0.86x** (slower!)" to use non-negative
ratio wording; replace the negative ratio with a positive one (e.g., "0.86x") or
an explicit percent slowdown (e.g., "14% slower than Node.js") so the cell reads
something like "5.50 ms | **0.86x** (slower!)" or "5.50 ms | **14% slower than
Node.js**".
- Around line 103-112: Add blank lines before and after each Markdown table to
satisfy markdownlint MD058: insert an empty line between the "### Small Scale (<
1000 files)" heading and its table and another empty line after that table, and
do the same for the "### Large Scale (> 1000 files)" heading and its table so
both table blocks are surrounded by blank lines.
- Around line 125-141: The tests are using the unstable #[bench] attribute
inside mod bench and functions
bench_readdir_small/bench_readdir_medium/bench_readdir_large which requires
nightly; replace them with Criterion-based benchmarks: add a benches/*.rs file
that uses Criterion's benchmarking API for the three scenarios, update
Cargo.toml to add a [[bench]] target with harness = false (or document nightly
if you choose to keep #[bench]), and remove or convert the existing mod bench
block so benchmarks run with cargo bench on stable using Criterion.

In `@benchmark/RUSH_FS_SUGGESTION.md`:
- Around line 22-23: Writing the ratio in the "Rush-FS (递归调用 ❌)" table row as
"-0.86x" is ambiguous; change that cell to a positive value indicating a slowdown
(e.g., "0.86x") or rewrite it as a percentage (e.g., "14% slower than Node.js"),
keeping the wording consistent with the "Rush-FS (recursive: true ✅)" row and
avoiding a misleading negative sign.
- Around line 156-172: The docs use the nightly-only #[bench] tests in the bench
module (functions bench_readdir_small, bench_readdir_medium,
bench_readdir_large); replace them with Criterion-based benchmarks or note
nightly requirement. Specifically, add Criterion as a dev-dependency in
Cargo.toml, convert each #[bench] fn into a fn receiving &mut Criterion (e.g.,
fn bench_readdir_small(c: &mut Criterion)) and register them with
criterion_group! and criterion_main!, using c.bench_function or
c.bench_with_input to run the same small/medium/large scenarios; alternatively,
if you prefer nightly keep the existing code but add an explicit note that
nightly Rust is required.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 38a27d99-7c8b-4a1f-93f6-75594912aed9

📥 Commits

Reviewing files that changed from the base of the PR and between 096e42f and 5337995.

📒 Files selected for processing (2)
  • benchmark/RUSH_FS_GITHUB_ISSUE.md
  • benchmark/RUSH_FS_SUGGESTION.md

Comment on lines +19 to +21
| Rush-FS (recursive calls in JS ❌) | 5.50 ms | **-0.86x** (slower!) |
| Rush-FS (`recursive: true` ✅) | 1.93 ms | **2.46x** |


⚠️ Potential issue | 🟡 Minor

Use non-negative ratio wording for slower case.

Line 19 uses -0.86x, which is easy to misread. Prefer 0.86x (or “14% slower than Node.js”) for clearer interpretation.


Comment on lines +103 to +112
### Small Scale (< 1000 files)
| Current | Optimized | Improvement |
|---------|-----------|-------------|
| 5.50 ms (slower than Node.js) | ~4.0 ms | **~1.2x faster than Node.js** |

### Large Scale (> 1000 files)
| Current | Optimized | Change |
|---------|-----------|--------|
| 1.93 ms | ~1.9 ms | No change (still fast) |


⚠️ Potential issue | 🟡 Minor

Add blank lines around tables to satisfy markdownlint.

Line 104 and Line 109 table blocks should be surrounded by blank lines to pass MD058 consistently.

🧰 Tools
🪛 markdownlint-cli2 (0.21.0)

[warning] 104-104: Tables should be surrounded by blank lines

(MD058, blanks-around-tables)


[warning] 109-109: Tables should be surrounded by blank lines

(MD058, blanks-around-tables)


Comment on lines +125 to +141
#[cfg(test)]
mod bench {
    #[bench]
    fn bench_readdir_small(b: &mut Bencher) {
        // 100 files, 10 directories
    }

    #[bench]
    fn bench_readdir_medium(b: &mut Bencher) {
        // 1000 files, 100 directories
    }

    #[bench]
    fn bench_readdir_large(b: &mut Bencher) {
        // 30000 files, 3000 directories
    }
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

Is Rust's #[bench] attribute stable on stable Rust in 2026, and what is the recommended stable benchmarking approach?

💡 Result:

No. #[bench] is not stable on stable Rust in 2026: Rust’s own release notes state it has been “fully de-stabilize[d]”, and the compiler docs note benchmarks (and #[bench]) are nightly-only. [1][2]

Recommended stable approach: use cargo bench with a custom benchmark harness (disable the built-in libtest harness via harness = false) and a stable benchmarking crate—Cargo’s docs explicitly call out Criterion as a common stable option. [3]

Typical setup (stable):

  • Add a bench target with harness = false in Cargo.toml
  • Implement benchmarks with Criterion (or another crate) in benches/*.rs
  • Run with cargo bench [3]

Sources

  1. Rust Release Notes (#[bench] fully de-stabilized) [1]
  2. rustc book (#[bench] benchmarks are unstable/nightly-only) [2]
  3. Cargo Book: cargo bench + harness = false, and Criterion mentioned for stable [3]

#[bench] is unstable; use Criterion with cargo bench for a stable benchmarking approach.

Rust's #[bench] attribute requires nightly and has been fully de-stabilized on stable Rust. Switch to Criterion—configure a bench target with harness = false in Cargo.toml, implement benchmarks using Criterion in benches/*.rs, and run with cargo bench. Alternatively, if nightly is acceptable, explicitly document that requirement.


Comment on lines +22 to +23
| Rush-FS (递归调用 ❌) | 5.50 ms | -0.86x |
| Rush-FS (recursive: true ✅) | 1.93 ms | **2.46x** |

⚠️ Potential issue | 🟡 Minor

Normalize the ratio wording to avoid the ambiguous "-0.86x".

Line 22 should read 0.86x (or "14% slower than Node.js") to avoid the misreading a negative multiplier invites.


Comment on lines +156 to +172
#[cfg(test)]
mod bench {
    #[bench]
    fn bench_readdir_small(b: &mut Bencher) {
        // 100 files, 10 directories
    }

    #[bench]
    fn bench_readdir_medium(b: &mut Bencher) {
        // 1000 files, 100 directories
    }

    #[bench]
    fn bench_readdir_large(b: &mut Bencher) {
        // 30000 files, 3000 directories
    }
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

What is the recommended stable Rust benchmarking library in 2026 instead of #[bench]?

💡 Result:

The commonly recommended stable Rust benchmarking library (as an alternative to nightly-only #[bench] / libtest benchmarks) is Criterion.rs (criterion crate). It’s explicitly called out as an option for running benchmarks on stable Rust, and it’s widely used/maintained as the de‑facto standard benchmarking harness on stable. [1], [2]

Answer: Use Criterion.rs (criterion crate) instead of #[bench]. [1], [2]

Sources:
[1] Rust Cargo Book (cargo bench) notes stable-channel helpers like Criterion. (doc.rust-lang.org)
[2] Criterion.rs repository (project overview/maintenance notes). (github.com)



Use Criterion.rs for stable Rust benchmarking compatibility.

The #[bench] attribute requires nightly Rust. Since this is documentation, the example should work on stable by default. Replace with Criterion.rs (the de facto standard for stable benchmarking) or explicitly document nightly as a prerequisite.

