Conversation
- Add benchmark script (fs-benchmark.mjs)
- Performance analysis report (BENCHMARK_RESULTS.md)
- Test data (benchmark-*.json)

Results:
- Recursive readdir (1550 files): Node.js slightly faster (N-API overhead at small scale)
- Batched readFile (100 files): Rush-FS 4.23x faster ✅
- Memory overhead: Rush-FS +300-500KB (acceptable)

Related PR: #9 (Rush-FS integration)
📝 Walkthrough

Adds a benchmarking suite and results: a Node.js runner that optionally uses Rush‑FS to compare recursive readdir and batched readFile performance (timing and memory), generates human-readable summaries, writes timestamped JSON artifacts, and includes proposal/issue docs for performance degradation strategies.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Script as Benchmark Script
    participant NodeFS as Node.js fs
    participant RushFS as Rush-FS (optional)
    participant FS as Filesystem
    participant Storage as JSON Artifact
    Script->>Script: detect Rush-FS availability
    alt Rush-FS available
        Script->>RushFS: run readdir recursive / batch readFile
        RushFS->>FS: native N-API I/O
        FS-->>RushFS: file entries / data
        RushFS-->>Script: timings & memory deltas
    else Rush-FS not available
        Script->>NodeFS: run readdir recursive / batch readFile
        NodeFS->>FS: libuv I/O
        FS-->>NodeFS: file entries / data
        NodeFS-->>Script: timings & memory deltas
    end
    Script->>Script: aggregate stats (avg/min/max, memory)
    Script->>Storage: write timestamped JSON results
    Script-->>Console: print comparison tables and summary
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
📝 Coding Plan
Issues flagged by the CodeRabbit review:
- Remove the writeFile touch operation from the Node.js readFile test
- Remove the per-file dynamic import() overhead; use a static import instead
- Report peak memory instead of delta (avoids misleading negative values after GC)
- Add divide-by-zero / negative-value guards to ratio calculations

Corrected results:
- readFile speedup drops from 4.23x to 2.93x (more realistic)
- readdir result essentially unchanged (-0.86x)

Refs: #8
Actionable comments posted: 3
🧹 Nitpick comments (1)
benchmark/BENCHMARK_RESULTS.md (1)
5-8: Pin the compared revisions to immutable SHAs. Using moving branch names like `main` and `benchmark/node-fs-baseline` makes this report drift over time. Please record the exact commit for each run next to the JSON artifact so the comparison stays reproducible.

Also applies to: 122-139
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@benchmark/BENCHMARK_RESULTS.md` around lines 5 - 8, Replace moving branch names with immutable commit SHAs in the benchmark report: update the "Baseline: `benchmark/node-fs-baseline`" and "Feature: `main`" entries to include the exact commit SHA used for each run (e.g., `benchmark/node-fs-baseline@<SHA>` and `main@<SHA>`), and annotate the JSON artifact entries with the corresponding SHAs so the comparison is reproducible; also apply the same change to the other occurrences referenced (around lines 122-139) so all comparisons reference explicit commit identifiers rather than branch names.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@benchmark/BENCHMARK_RESULTS.md`:
- Around line 108-116: The claim "整体性能提升 **2-4x**" is overstated because
benchmark/fs-benchmark.mjs only runs benchmarkReaddirRecursive() and
benchmarkReadFile(); update the doc and/or tests to reflect that: either change
the phrasing in BENCHMARK_RESULTS.md to present "2-4x" as a hypothesis or
partial result (e.g., "observed 2-4x in readFile microbenchmarks; overall gain
unverified"), or extend benchmark/fs-benchmark.mjs to include mixed and
end-to-end scenarios (add a combined run that exercises
readdir+readFile+writeFile) and re-run before claiming an overall 2-4x
improvement; reference functions benchmarkReaddirRecursive and benchmarkReadFile
when making the change.
In `@benchmark/fs-benchmark.mjs`:
- Around line 29-35: formatBytes currently breaks when passed non-positive or
NaN values (caused by using memoryAfter.heapUsed for memoryPeak and computing
memoryDelta), so update the measurement and formatting: ensure memoryPeak and
memoryDelta are derived from a reliable metric (e.g., process.memoryUsage().rss
or a sampled max-rss) rather than only heapUsed (references: memoryPeak,
memoryDelta, memoryAfter.heapUsed), and make formatBytes robust by clamping
inputs to >=0 and returning "0 B" for NaN/<=0 values; this prevents Math.log()
from receiving invalid values and avoids producing "NaN undefined" outputs.
- Around line 162-193: The benchmark currently measures JS-side recursion in
nodeReaddirRecursive and rushReaddirRecursive instead of exercising Rush-FS's
optimized recursive path; update rushReaddirRecursive to call
rushFs.readdir(dir, { recursive: true, withFileTypes: true }) (or the wrapper in
src/utils/fs.ts) and use its returned entries directly so the implementation
uses the native recursive fast-path rather than performing manual recursion and
repeated JS↔native crossings.
---
Nitpick comments:
In `@benchmark/BENCHMARK_RESULTS.md`:
- Around line 5-8: Replace moving branch names with immutable commit SHAs in the
benchmark report: update the "Baseline: `benchmark/node-fs-baseline`" and
"Feature: `main`" entries to include the exact commit SHA used for each run
(e.g., `benchmark/node-fs-baseline@<SHA>` and `main@<SHA>`), and annotate the
JSON artifact entries with the corresponding SHAs so the comparison is
reproducible; also apply the same change to the other occurrences referenced
(around lines 122-139) so all comparisons reference explicit commit identifiers
rather than branch names.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ef46a24a-9607-4620-8c36-370789b46c6c
📒 Files selected for processing (2)
- benchmark/BENCHMARK_RESULTS.md
- benchmark/fs-benchmark.mjs
```js
function formatBytes(bytes) {
  if (bytes === 0) return "0 B";
  const k = 1024;
  const sizes = ["B", "KB", "MB", "GB"];
  const i = Math.floor(Math.log(bytes) / Math.log(k));
  return Math.round((bytes / k ** i) * 100) / 100 + " " + sizes[i];
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
cat -n benchmark/fs-benchmark.mjs
```

Repository: CoderSerio/litewiki
Length of output: 13285
Measurement methodology fundamentally flawed for peak-memory claims
Line 74 confirms `memoryPeak: memoryAfter.heapUsed`, which captures only post-run heap snapshots. Since `heapUsed` excludes native/Rust allocations and can decrease after GC, `memoryDelta` (line 73) can become negative. This breaks `formatBytes()` (lines 35-41), which uses `Math.log()` on the input: passing negative values produces NaN, resulting in malformed output like "NaN undefined".
Additionally, this measurement approach cannot reliably support peak-memory metrics or overhead analysis claims, as heapUsed reflects only JavaScript heap state at measurement points, not actual peak memory usage during execution.
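As the review notes, a single post-run `heapUsed` snapshot cannot capture a true peak. One way to approximate one is to sample `process.memoryUsage().rss` on an interval while the workload runs; this is a sketch under stated assumptions, not the repository's implementation, and `measureWithPeakRss` plus its sampling interval are illustrative choices:

```javascript
// Sketch: approximate peak RSS by sampling while an async workload runs.
// RSS includes native allocations, unlike heapUsed, so it is a better
// basis for an overhead claim (still only as precise as the sample rate).
async function measureWithPeakRss(fn, sampleMs = 5) {
  let peak = process.memoryUsage().rss;
  const timer = setInterval(() => {
    peak = Math.max(peak, process.memoryUsage().rss);
  }, sampleMs);
  const start = process.hrtime.bigint();
  try {
    const result = await fn();
    const durationMs = Number(process.hrtime.bigint() - start) / 1e6;
    peak = Math.max(peak, process.memoryUsage().rss);
    return { result, durationMs, peakRss: peak };
  } finally {
    clearInterval(timer); // always stop sampling, even if fn throws
  }
}
```

The sampled peak is never negative, so it also sidesteps the NaN formatting problem described above.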
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@benchmark/fs-benchmark.mjs` around lines 29 - 35, formatBytes currently
breaks when passed non-positive or NaN values (caused by using
memoryAfter.heapUsed for memoryPeak and computing memoryDelta), so update the
measurement and formatting: ensure memoryPeak and memoryDelta are derived from a
reliable metric (e.g., process.memoryUsage().rss or a sampled max-rss) rather
than only heapUsed (references: memoryPeak, memoryDelta, memoryAfter.heapUsed),
and make formatBytes robust by clamping inputs to >=0 and returning "0 B" for
NaN/<=0 values; this prevents Math.log() from receiving invalid values and
avoids producing "NaN undefined" outputs.
Previously we mistakenly called rushFs.readdir recursively from JS, causing 156 N-API bridge crossings. The correct usage is a single call with recursive: true, with parallelism handled inside Rust. Corrected results:
- readdir: improved from -0.86x to **2.46x** ✅
- readFile: holds at ~2.75x ✅

Conclusion: the Rush-FS implementation is correct; the earlier result was a usage error.
Actionable comments posted: 2
♻️ Duplicate comments (1)
benchmark/fs-benchmark.mjs (1)
35-40: ⚠️ Potential issue | 🟠 Major

`memoryPeak` is mislabeled and still not a real peak metric. Line 74 records only post-run `heapUsed`, so the reported `memory.peak*` is not "peak during run". Also, Line 39 can receive invalid values if negative/NaN bytes are ever formatted.

Suggested fix

```diff
 function formatBytes(bytes) {
-  if (bytes === 0) return "0 B";
+  if (!Number.isFinite(bytes) || bytes <= 0) return "0 B";
   const k = 1024;
   const sizes = ["B", "KB", "MB", "GB"];
   const i = Math.floor(Math.log(bytes) / Math.log(k));
   return Math.round((bytes / k ** i) * 100) / 100 + " " + sizes[i];
 }
@@
 addRun(duration, memoryBefore, memoryAfter) {
+  const rssDelta = memoryAfter.rss - memoryBefore.rss;
   this.results.push({
     duration,
-    memoryDelta: memoryAfter.heapUsed - memoryBefore.heapUsed,
-    memoryPeak: memoryAfter.heapUsed,
+    memoryDelta: rssDelta,
+    memoryPeak: Math.max(memoryBefore.rss, memoryAfter.rss),
   });
 }
```

Also applies to: 70-75
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@benchmark/fs-benchmark.mjs` around lines 35 - 40, formatBytes currently can produce NaN/negative index errors for invalid inputs and the recorded memoryPeak is not the true peak (it only reads post-run heapUsed); fix formatBytes to validate and clamp the bytes argument (treat non-number/negative as 0) and ensure the computed index i is bounded within sizes.length-1 before using sizes[i]; for peak memory, stop assigning memoryPeak from a single post-run process.memoryUsage().heapUsed and instead introduce a maxMemory (or memoryPeak) tracker that is initialized to 0 and updated during the benchmark (e.g., sample process.memoryUsage().heapUsed at appropriate points or after each major operation and set memoryPeak = Math.max(memoryPeak, current.heapUsed)) then report that tracked peak.
🧹 Nitpick comments (1)
benchmark/fs-benchmark.mjs (1)
329-345: Use `finally` so test data is always cleaned up. If a benchmark throws before Line 344, `benchmark/test-data` is left behind.

Suggested fix

```diff
   await setupTestData();

   const results = {
@@
     tests: {},
   };

-  results.tests.readdir = await benchmarkReaddirRecursive();
-  results.tests.readFile = await benchmarkReadFile();
-
-  // 清理
-  await rm(TEST_DIR, { recursive: true, force: true });
-  console.log("\n🧹 测试数据已清理");
+  try {
+    results.tests.readdir = await benchmarkReaddirRecursive();
+    results.tests.readFile = await benchmarkReadFile();
+  } finally {
+    await rm(TEST_DIR, { recursive: true, force: true });
+    console.log("\n🧹 测试数据已清理");
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@benchmark/fs-benchmark.mjs` around lines 329 - 345, Wrap the test run and result collection (calls to setupTestData, benchmarkReaddirRecursive, benchmarkReadFile) in a try block and move the cleanup call to rm(TEST_DIR, { recursive: true, force: true }) into a finally block so TEST_DIR is always removed even if a benchmark throws; ensure console.log("🧹 测试数据已清理") remains after the rm call in the finally to indicate successful cleanup.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@benchmark/fs-benchmark.mjs`:
- Line 357: The current top-level call main().catch(console.error) only logs
failures but leaves the process exit code as zero; change the catch handler on
the main() invocation to log the error and call process.exit(1) so CI detects
failures (i.e., replace the current catch on main() with a handler that calls
console.error(err) and then process.exit(1)).
- Around line 168-191: rushReaddirRecursive returns both files and directories
but nodeReaddirRecursive returns only files, so filter rushFs.readdir results to
match; update rushReaddirRecursive (the call to rushFs.readdir and the
subsequent entries.map) to first filter entries by file type (e.g., e.isFile())
and then map to e.name so the benchmark compares only files.
---
Duplicate comments:
In `@benchmark/fs-benchmark.mjs`:
- Around line 35-40: formatBytes currently can produce NaN/negative index errors
for invalid inputs and the recorded memoryPeak is not the true peak (it only
reads post-run heapUsed); fix formatBytes to validate and clamp the bytes
argument (treat non-number/negative as 0) and ensure the computed index i is
bounded within sizes.length-1 before using sizes[i]; for peak memory, stop
assigning memoryPeak from a single post-run process.memoryUsage().heapUsed and
instead introduce a maxMemory (or memoryPeak) tracker that is initialized to 0
and updated during the benchmark (e.g., sample process.memoryUsage().heapUsed at
appropriate points or after each major operation and set memoryPeak =
Math.max(memoryPeak, current.heapUsed)) then report that tracked peak.
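The hardened helper the prompt above describes (clamp invalid input, bound the unit index) might look like this; a sketch whose thresholds and labels mirror the original `formatBytes`, not the repository's final code:

```javascript
// Robust variant of the benchmark's formatBytes helper:
// clamps NaN/negative/zero input to "0 B" and bounds the unit index so
// very large values map to the last known unit instead of undefined.
function formatBytes(bytes) {
  if (!Number.isFinite(bytes) || bytes <= 0) return "0 B";
  const k = 1024;
  const sizes = ["B", "KB", "MB", "GB"];
  const i = Math.min(
    Math.floor(Math.log(bytes) / Math.log(k)),
    sizes.length - 1
  );
  return Math.round((bytes / k ** i) * 100) / 100 + " " + sizes[i];
}
```

With this guard, a negative `memoryDelta` after GC renders as "0 B" rather than "NaN undefined".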
---
Nitpick comments:
In `@benchmark/fs-benchmark.mjs`:
- Around line 329-345: Wrap the test run and result collection (calls to
setupTestData, benchmarkReaddirRecursive, benchmarkReadFile) in a try block and
move the cleanup call to rm(TEST_DIR, { recursive: true, force: true }) into a
finally block so TEST_DIR is always removed even if a benchmark throws; ensure
console.log("🧹 测试数据已清理") remains after the rm call in the finally to indicate
successful cleanup.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9a9e7703-0299-4e97-aabb-5b6d6d348f01
📒 Files selected for processing (3)
- benchmark/benchmark-1773743187125.json
- benchmark/benchmark-1773747981760.json
- benchmark/fs-benchmark.mjs
✅ Files skipped from review due to trivial changes (1)
- benchmark/benchmark-1773743187125.json
```js
async function nodeReaddirRecursive(dir) {
  const entries = await nodeReaddir(dir, { withFileTypes: true });
  const results = [];

  for (const entry of entries) {
    const fullPath = join(dir, entry.name);
    if (entry.isDirectory()) {
      results.push(...(await nodeReaddirRecursive(fullPath)));
    } else {
      results.push(entry.name);
    }
  }

  return results;
}

// Rush-FS - use recursive mode (single NAPI call, internal parallelism)
async function rushReaddirRecursive(dir) {
  const entries = await rushFs.readdir(dir, {
    withFileTypes: true,
    recursive: true,
  });
  return entries.map((e) => e.name);
}
```
🧩 Analysis chain
🌐 Web query:
For @rush-fs/core, what exactly does readdir(path, { recursive: true, withFileTypes: true }) return? Does it include both files and directories, and in what shape?
💡 Result:
In @rush-fs/core, `readdir(path, { recursive: true, withFileTypes: true })` follows Node's `fs/promises.readdir()` semantics:

- Return type: `Promise<fs.Dirent[]>` (an array of Dirent objects, not strings) when `withFileTypes: true`. [1]
- What it includes (with `recursive: true`): it lists everything under `path` recursively, i.e. files and directories (and "sub files"). [1]
- Shape of each entry: each array element is an `fs.Dirent` with:
  - `name: string`
  - `parentPath: string` (the path to the entry's parent directory)
  - type-test methods like `isFile()`, `isDirectory()`, `isSymbolicLink()`, etc. [1][2]

So you typically build a full path as `join(dirent.parentPath, dirent.name)` (because `name` is just the basename). [2]
Sources:
[1] Node.js docs: fsPromises.readdir(path[, options]) (recursive + withFileTypes behavior) (nodejs.org)
[2] Node.js docs: fs.Dirent.parentPath (nodejs.org)
Citations:
🏁 Script executed:

```shell
cd benchmark && cat -n fs-benchmark.mjs | sed -n '168,191p'
```

Repository: CoderSerio/litewiki
Length of output: 906
Correct benchmark comparison: filter out directories from Rush-FS results.
nodeReaddirRecursive collects only files (recursively walking directories), while rushReaddirRecursive returns all entries—files and directories—from its single recursive call. The comparison measures unequal workloads.
Apply the suggested filter to normalize the Rush-FS path:
Fix

```diff
 async function rushReaddirRecursive(dir) {
   const entries = await rushFs.readdir(dir, {
     withFileTypes: true,
     recursive: true,
   });
-  return entries.map((e) => e.name);
+  return entries
+    .filter((e) => !e.isDirectory?.())
+    .map((e) => e.name);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@benchmark/fs-benchmark.mjs` around lines 168 - 191, rushReaddirRecursive
returns both files and directories but nodeReaddirRecursive returns only files,
so filter rushFs.readdir results to match; update rushReaddirRecursive (the call
to rushFs.readdir and the subsequent entries.map) to first filter entries by
file type (e.g., e.isFile()) and then map to e.name so the benchmark compares
only files.
```js
  console.log(`\n💾 结果已保存:${outputFile}`);
}

main().catch(console.error);
```
Set non-zero exit code on failure.
Line 357 logs errors but can still exit successfully, which can hide benchmark failures in CI.
Suggested fix

```diff
-main().catch(console.error);
+main().catch((err) => {
+  console.error(err);
+  process.exitCode = 1;
+});
```
+});🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@benchmark/fs-benchmark.mjs` at line 357, The current top-level call
main().catch(console.error) only logs failures but leaves the process exit code
as zero; change the catch handler on the main() invocation to log the error and
call process.exit(1) so CI detects failures (i.e., replace the current catch on
main() with a handler that calls console.error(err) and then process.exit(1)).
Performance-optimization suggestions prepared for the Rush-FS repository:
- Problem analysis: forced parallelism on small directories degrades performance
- Proposed approach: a smart degradation strategy based on depth/scale
- Expected benefit: small-scale workloads flip from a regression to a net win

Files:
- RUSH_FS_SUGGESTION.md (detailed Chinese version)
- RUSH_FS_GITHUB_ISSUE.md (English version, ready to file)

Refs: #8
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@benchmark/RUSH_FS_GITHUB_ISSUE.md`:
- Around line 19-21: Update the benchmark table row containing "Rush-FS
(recursive calls in JS ❌) | 5.50 ms | **-0.86x** (slower!)" to use non-negative
ratio wording; replace the negative ratio with a positive one (e.g., "0.86x") or
an explicit percent slowdown (e.g., "14% slower than Node.js") so the cell reads
something like "5.50 ms | **0.86x** (slower!)" or "5.50 ms | **14% slower than
Node.js**".
- Around line 103-112: Add blank lines before and after each Markdown table to
satisfy markdownlint MD058: insert an empty line between the "### Small Scale (<
1000 files)" heading and its table and another empty line after that table, and
do the same for the "### Large Scale (> 1000 files)" heading and its table so
both table blocks are surrounded by blank lines.
- Around line 125-141: The tests are using the unstable #[bench] attribute
inside mod bench and functions
bench_readdir_small/bench_readdir_medium/bench_readdir_large which requires
nightly; replace them with Criterion-based benchmarks: add a benches/*.rs file
that uses Criterion's benchmarking API for the three scenarios, update
Cargo.toml to add a [[bench]] target with harness = false (or document nightly
if you choose to keep #[bench]), and remove or convert the existing mod bench
block so benchmarks run with cargo bench on stable using Criterion.
In `@benchmark/RUSH_FS_SUGGESTION.md`:
- Around line 22-23: 表格中“Rush-FS (递归调用
❌)”那一行的倍率写为“-0.86x”会造成歧义,请将该单元格改为正值表示慢速(例如“0.86x”)或改写为百分比描述(例如“比 Node.js 慢
14%”),保持与另一行“Rush-FS (recursive: true ✅)”的倍率表达一致并避免负号误导。
- Around line 156-172: The docs use the nightly-only #[bench] tests in the bench
module (functions bench_readdir_small, bench_readdir_medium,
bench_readdir_large); replace them with Criterion-based benchmarks or note
nightly requirement. Specifically, add Criterion as a dev-dependency in
Cargo.toml, convert each #[bench] fn into a fn receiving &mut Criterion (e.g.,
fn bench_readdir_small(c: &mut Criterion)) and register them with
criterion_group! and criterion_main!, using c.bench_function or
c.bench_with_input to run the same small/medium/large scenarios; alternatively,
if you prefer nightly keep the existing code but add an explicit note that
nightly Rust is required.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 38a27d99-7c8b-4a1f-93f6-75594912aed9
📒 Files selected for processing (2)
benchmark/RUSH_FS_GITHUB_ISSUE.mdbenchmark/RUSH_FS_SUGGESTION.md
| Rush-FS (recursive calls in JS ❌) | 5.50 ms | **-0.86x** (slower!) |
| Rush-FS (`recursive: true` ✅) | 1.93 ms | **2.46x** |
Use non-negative ratio wording for slower case.
Line 19 uses -0.86x, which is easy to misread. Prefer 0.86x (or “14% slower than Node.js”) for clearer interpretation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@benchmark/RUSH_FS_GITHUB_ISSUE.md` around lines 19 - 21, Update the benchmark
table row containing "Rush-FS (recursive calls in JS ❌) | 5.50 ms | **-0.86x**
(slower!)" to use non-negative ratio wording; replace the negative ratio with a
positive one (e.g., "0.86x") or an explicit percent slowdown (e.g., "14% slower
than Node.js") so the cell reads something like "5.50 ms | **0.86x** (slower!)"
or "5.50 ms | **14% slower than Node.js**".
### Small Scale (< 1000 files)

| Current | Optimized | Improvement |
|---------|-----------|-------------|
| 5.50 ms (slower than Node.js) | ~4.0 ms | **~1.2x faster than Node.js** |

### Large Scale (> 1000 files)

| Current | Optimized | Change |
|---------|-----------|--------|
| 1.93 ms | ~1.9 ms | No change (still fast) |
Add blank lines around tables to satisfy markdownlint.
Line 104 and Line 109 table blocks should be surrounded by blank lines to pass MD058 consistently.
🧰 Tools
🪛 markdownlint-cli2 (0.21.0)
[warning] 104-104: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
[warning] 109-109: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@benchmark/RUSH_FS_GITHUB_ISSUE.md` around lines 103 - 112, Add blank lines
before and after each Markdown table to satisfy markdownlint MD058: insert an
empty line between the "### Small Scale (< 1000 files)" heading and its table
and another empty line after that table, and do the same for the "### Large
Scale (> 1000 files)" heading and its table so both table blocks are surrounded
by blank lines.
```rust
#[cfg(test)]
mod bench {
    #[bench]
    fn bench_readdir_small(b: &mut Bencher) {
        // 100 files, 10 directories
    }

    #[bench]
    fn bench_readdir_medium(b: &mut Bencher) {
        // 1000 files, 100 directories
    }

    #[bench]
    fn bench_readdir_large(b: &mut Bencher) {
        // 30000 files, 3000 directories
    }
}
```
🧩 Analysis chain
🌐 Web query:
Is Rust's #[bench] attribute stable on stable Rust in 2026, and what is the recommended stable benchmarking approach?
💡 Result:
No. #[bench] is not stable on stable Rust in 2026: Rust’s own release notes state it has been “fully de-stabilize[d]”, and the compiler docs note benchmarks (and #[bench]) are nightly-only. [1][2]
Recommended stable approach: use cargo bench with a custom benchmark harness (disable the built-in libtest harness via harness = false) and a stable benchmarking crate—Cargo’s docs explicitly call out Criterion as a common stable option. [3]
Typical setup (stable):

- Add a bench target with `harness = false` in `Cargo.toml`
- Implement benchmarks with Criterion (or another crate) in `benches/*.rs`
- Run with `cargo bench` [3]

Sources

- Rust Release Notes (`#[bench]` fully de-stabilized) [1]
- rustc book (`#[bench]` benchmarks are unstable/nightly-only) [2]
- Cargo Book: `cargo bench` + `harness = false`, and Criterion mentioned for stable [3]
#[bench] is unstable; use Criterion with cargo bench for a stable benchmarking approach.
Rust's #[bench] attribute requires nightly and has been fully de-stabilized on stable Rust. Switch to Criterion—configure a bench target with harness = false in Cargo.toml, implement benchmarks using Criterion in benches/*.rs, and run with cargo bench. Alternatively, if nightly is acceptable, explicitly document that requirement.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@benchmark/RUSH_FS_GITHUB_ISSUE.md` around lines 125 - 141, The tests are
using the unstable #[bench] attribute inside mod bench and functions
bench_readdir_small/bench_readdir_medium/bench_readdir_large which requires
nightly; replace them with Criterion-based benchmarks: add a benches/*.rs file
that uses Criterion's benchmarking API for the three scenarios, update
Cargo.toml to add a [[bench]] target with harness = false (or document nightly
if you choose to keep #[bench]), and remove or convert the existing mod bench
block so benchmarks run with cargo bench on stable using Criterion.
| Rush-FS (recursive calls ❌) | 5.50 ms | -0.86x |
| Rush-FS (recursive: true ✅) | 1.93 ms | **2.46x** |
Unify the ratio wording to avoid the ambiguous "-0.86x".
Line 22: suggest changing this to 0.86x, or "14% slower than Node.js", so a negative multiplier cannot be misread.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@benchmark/RUSH_FS_SUGGESTION.md` around lines 22 - 23, the multiplier in the "Rush-FS (recursive calls ❌)" table row is written as "-0.86x", which is ambiguous; change that cell to a positive value that indicates a slowdown (e.g., "0.86x") or rephrase it as a percentage (e.g., "14% slower than Node.js"), keeping the wording consistent with the "Rush-FS (recursive: true ✅)" row and avoiding a misleading negative sign.
```rust
#[cfg(test)]
mod bench {
    #[bench]
    fn bench_readdir_small(b: &mut Bencher) {
        // 100 files, 10 directories
    }

    #[bench]
    fn bench_readdir_medium(b: &mut Bencher) {
        // 1000 files, 100 directories
    }

    #[bench]
    fn bench_readdir_large(b: &mut Bencher) {
        // 30000 files, 3000 directories
    }
}
```
🧩 Analysis chain
🌐 Web query:
What is the recommended stable Rust benchmarking library in 2026 instead of #[bench]?
💡 Result:
The commonly recommended stable Rust benchmarking library (as an alternative to nightly-only #[bench] / libtest benchmarks) is Criterion.rs (criterion crate). It’s explicitly called out as an option for running benchmarks on stable Rust, and it’s widely used/maintained as the de‑facto standard benchmarking harness on stable. [1], [2]
Answer: Use Criterion.rs (criterion crate) instead of #[bench]. [1], [2]
Sources:
[1] Rust Cargo Book (cargo bench) notes stable-channel helpers like Criterion. (doc.rust-lang.org)
[2] Criterion.rs repository (project overview/maintenance notes). (github.com)
Citations:
- 1: https://doc.rust-lang.org/beta/cargo/commands/cargo-bench.html?utm_source=openai
- 2: https://github.com/bheisler/criterion.rs?utm_source=openai
Use Criterion.rs for stable Rust benchmarking compatibility.
The #[bench] attribute requires nightly Rust. Since this is documentation, the example should work on stable by default. Replace with Criterion.rs (the de facto standard for stable benchmarking) or explicitly document nightly as a prerequisite.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@benchmark/RUSH_FS_SUGGESTION.md` around lines 156 - 172, The docs use the
nightly-only #[bench] tests in the bench module (functions bench_readdir_small,
bench_readdir_medium, bench_readdir_large); replace them with Criterion-based
benchmarks or note nightly requirement. Specifically, add Criterion as a
dev-dependency in Cargo.toml, convert each #[bench] fn into a fn receiving &mut
Criterion (e.g., fn bench_readdir_small(c: &mut Criterion)) and register them
with criterion_group! and criterion_main!, using c.bench_function or
c.bench_with_input to run the same small/medium/large scenarios; alternatively,
if you prefer nightly keep the existing code but add an explicit note that
nightly Rust is required.
📊 Performance Analysis
A detailed comparison of Node.js fs and Rush-FS performance, providing data to support the Rush-FS integration decision.
Core Findings
Analysis:
🔗 Related PR
📋 Reproduction
Summary by CodeRabbit
Documentation
Tests
Chores