This document records the benchmark scenarios, assumptions, and committed result snapshots used for Rezi. The numbers are directional, version-specific, and host-specific.
The benchmark suite exists to answer practical engineering questions:
- does a change regress Rezi on representative workloads?
- how does Rezi behave on primitive, terminal, and structured-application scenarios?
- how do changes affect throughput, latency, and memory on the same host?
It is not intended to serve as a universal leaderboard.
The maintained suite currently compares Rezi against:
- OpenTUI (React)
- OpenTUI (Core)
- Bubble Tea
- terminal-kit
- blessed
- Ratatui
These systems differ in runtime, abstraction level, and language. Some are full UI frameworks, while others are lower-level terminal libraries. Absolute comparisons should be read with that context in mind.
Primitive scenarios:
- startup
- tree-construction
- rerender
- content-update
- layout-stress
- scroll-stress
- virtual-list
- tables
- memory-profile

Terminal scenarios:
- terminal-rerender
- terminal-frame-fill
- terminal-screen-transition
- terminal-fps-stream
- terminal-input-latency
- terminal-memory-soak
- terminal-virtual-list
- terminal-table

Structured-application scenarios:
- terminal-full-ui
- terminal-full-ui-navigation
- terminal-strict-ui
- terminal-strict-ui-navigation
Interpretation notes:
- Rezi is designed for structured terminal applications where layout, routing, focus, and composition are part of the rendering cost.
- Lower-level libraries may be faster on narrow output-only scenarios because they do less work per frame.
- Memory and latency should always be read per scenario, not as a single global ranking.
Build prerequisites:
```sh
npm ci
npm run build
npm run build:native
npx tsc -b packages/bench
```

Quick all-framework run:

```sh
node --expose-gc packages/bench/dist/run.js \
  --suite all --io pty --quick \
  --output-dir benchmarks/local-all
```

Rezi-only run:

```sh
node --expose-gc packages/bench/dist/run.js \
  --framework rezi-native --io pty --quick
```

Rigorous terminal run:

```sh
node --expose-gc packages/bench/dist/run.js \
  --suite terminal --io pty \
  --replicates 7 --discard-first-replicate \
  --shuffle-framework-order --shuffle-seed local-terminal-rigorous \
  --cpu-affinity 0-7 --env-check strict \
  --output-dir benchmarks/local-terminal
```

Committed artifacts live under benchmarks/. They are retained as snapshots of specific runs and should be interpreted alongside the host, runtime, and suite configuration used to produce them.
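The primary use of these snapshots is answering "does a change regress Rezi?" by comparing two runs of the same suite on the same host. The sketch below shows one way to do that comparison. The on-disk schema of the committed artifacts is not specified here, so `ScenarioResult`, `meanMs`, and `findRegressions` are illustrative names, not the actual benchmark runner's output format.

```typescript
// Hypothetical per-scenario shape; the real files under benchmarks/
// may use different field names.
interface ScenarioResult {
  scenario: string;
  meanMs: number;
}

// Flag scenarios whose mean time grew by more than thresholdPct
// relative to the baseline run.
function findRegressions(
  baseline: ScenarioResult[],
  candidate: ScenarioResult[],
  thresholdPct: number,
): string[] {
  const base = new Map(
    baseline.map((r): [string, number] => [r.scenario, r.meanMs]),
  );
  return candidate
    .filter((r) => {
      const prev = base.get(r.scenario);
      return prev !== undefined &&
        ((r.meanMs - prev) / prev) * 100 > thresholdPct;
    })
    .map((r) => r.scenario);
}

const before: ScenarioResult[] = [
  { scenario: "rerender", meanMs: 4.0 },
  { scenario: "virtual-list", meanMs: 10.0 },
];
const after: ScenarioResult[] = [
  { scenario: "rerender", meanMs: 4.1 },      // +2.5%: within threshold
  { scenario: "virtual-list", meanMs: 12.0 }, // +20%: flagged
];
console.log(findRegressions(before, after, 5)); // logs [ 'virtual-list' ]
```

A percentage threshold is used here because quick runs are noisy; for the rigorous replicated runs a statistical test over per-replicate samples would be a better fit.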
See BENCHMARK_VALIDITY.md for the current assumptions behind the benchmark runner and result interpretation.