From 7ba8f6e5c579f1eb2d42ecd8025097edde539453 Mon Sep 17 00:00:00 2001 From: James Ross Date: Fri, 27 Feb 2026 19:12:24 -0800 Subject: [PATCH 1/6] =?UTF-8?q?feat:=20M8=20spit-shine=20=E2=80=94=20crypt?= =?UTF-8?q?o=20port=20refactor,=20key=20validation,=20roadmap=20cleanup?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit - Refactor CryptoPort and all adapters (Node, Bun, Web) for key validation - Move completed task cards to COMPLETED_TASKS.md, superseded to GRAVEYARD.md - Add STATUS.md, ADR-001, and documentation links to README - Update ROADMAP.md to reflect current milestone state --- COMPLETED_TASKS.md | 270 ++++++ GRAVEYARD.md | 31 + README.md | 7 + ROADMAP.md | 766 +----------------- STATUS.md | 109 +++ docs/ADR-001-vault-in-facade.md | 34 + src/domain/services/CasService.js | 29 +- .../adapters/BunCryptoAdapter.js | 80 +- .../adapters/NodeCryptoAdapter.js | 74 +- .../adapters/WebCryptoAdapter.js | 100 +-- src/ports/CryptoPort.js | 93 ++- .../CasService.key-validation.test.js | 6 +- test/unit/ports/CryptoPort.test.js | 169 ++++ 13 files changed, 803 insertions(+), 965 deletions(-) create mode 100644 COMPLETED_TASKS.md create mode 100644 GRAVEYARD.md create mode 100644 STATUS.md create mode 100644 docs/ADR-001-vault-in-facade.md create mode 100644 test/unit/ports/CryptoPort.test.js diff --git a/COMPLETED_TASKS.md b/COMPLETED_TASKS.md new file mode 100644 index 00000000..f988a99f --- /dev/null +++ b/COMPLETED_TASKS.md @@ -0,0 +1,270 @@ +# Completed Tasks + +Task cards moved here from ROADMAP.md after completion. Organized by milestone. + +--- + +# M14 — Conduit (v4.0.0) ✅ CLOSED + +**Theme:** Replace `EventEmitter` inheritance with a proper `ObservabilityPort`, add streaming restore, and enable parallel chunk I/O. Major version bump: removes `extends EventEmitter` from `CasService`, adds `observability` as a required constructor port. + +**Completed:** v4.0.0 (2026-02-27) + +--- + +## Task 14.1: ObservabilityPort and adapters + +**User Story** +As a library consumer, I want structured observability (metrics, logs, spans) from CAS operations so I can monitor throughput, track errors, and integrate with my own tooling — without the domain layer depending on Node's EventEmitter. + +**Requirements** +- R1: Define `ObservabilityPort` interface with three methods: + - `metric(channel: string, data: object)` — emit a named metric (channels: `chunk`, `file`, `integrity`, `vault`). + - `log(level: string, message: string, meta?: object)` — structured log (`debug`, `info`, `warn`, `error`). + - `span(name: string) → { end(meta?: object): void }` — timed operation bracket. +- R2: Remove `extends EventEmitter` from `CasService`. All `this.emit()` calls replaced with `this.observability.metric()` or `this.observability.log()`. +- R3: `observability` becomes a required constructor parameter on `CasService` (like `persistence`, `codec`, `crypto`). +- R4: Implement `SilentObserver` adapter (no-op — all methods are empty). This is the default when no observability is needed. +- R5: Implement `EventEmitterObserver` adapter that translates `metric()` calls to `EventEmitter.emit()` calls for backward compatibility. Consumers who relied on `service.on('chunk:stored', ...)` can wrap with this adapter. +- R6: Implement `StatsCollector` adapter that accumulates metrics and exposes a summary object: `{ chunksProcessed, bytesTotal, elapsed, throughput, errors }`. 
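+
+  A minimal sketch of such an adapter, for illustration only: the `summary()` method name and the per-chunk `bytes` field are assumptions; only the R1 port methods and the summary fields above are specified.
+
+  ```js
+  // Illustrative sketch, not the shipped adapter.
+  export class StatsCollector {
+    constructor() {
+      this.stats = { chunksProcessed: 0, bytesTotal: 0, errors: 0 };
+      this.startedAt = Date.now();
+    }
+    metric(channel, data = {}) {
+      if (channel === 'chunk') {
+        this.stats.chunksProcessed += 1;
+        this.stats.bytesTotal += data.bytes ?? 0; // assumes a per-chunk `bytes` field
+      }
+    }
+    log(level) {
+      if (level === 'error') this.stats.errors += 1;
+    }
+    span(name) {
+      const startedAt = Date.now();
+      return { end: (meta) => ({ name, elapsed: Date.now() - startedAt, ...meta }) };
+    }
+    summary() {
+      const elapsed = Date.now() - this.startedAt;
+      const throughput = elapsed > 0 ? this.stats.bytesTotal / (elapsed / 1000) : 0;
+      return { ...this.stats, elapsed, throughput };
+    }
+  }
+  ```
+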
+- R7: Facade (`ContentAddressableStore`) creates a default `SilentObserver` if no observability adapter is provided, and passes it to `CasService`. +- R8: Update `.d.ts` declarations for new port and adapters. + +**Acceptance Criteria** +- AC1: `CasService` no longer extends `EventEmitter`. +- AC2: All existing event emission points emit metrics via `ObservabilityPort`. +- AC3: `EventEmitterObserver` adapter produces identical events to the old `extends EventEmitter` behavior. +- AC4: `StatsCollector` accumulates correct stats across a full store+restore cycle. +- AC5: `SilentObserver` introduces zero overhead (no-op methods). +- AC6: Span `end()` captures elapsed time in the metric. + +--- + +## Task 14.2: Streaming restore + +**User Story** +As a developer restoring large files, I want a streaming restore path so memory usage is O(chunkSize), not O(fileSize). + +**Requirements** +- R1: Add `CasService.restoreStream({ manifest, encryptionKey, passphrase })` returning `AsyncIterable`. +- R2: Each yielded buffer is one verified, decrypted, decompressed chunk — ready to write. +- R3: Integrity verified per-chunk before yield (not after full reassembly). +- R4: Decompression and decryption applied per-chunk in streaming fashion. +- R5: `restoreFile()` in the facade uses `restoreStream()` internally with `createWriteStream()` instead of `writeFileSync()`. +- R6: Existing `restore()` method reimplemented as: collect `restoreStream()` into buffer. Single code path, two interfaces. +- R7: Emit `observability.metric('chunk', ...)` per chunk and `observability.span('restore')` for the full operation. + +**Acceptance Criteria** +- AC1: `restoreStream()` yields chunks that, when concatenated, match the original file byte-for-byte. +- AC2: Memory usage during streaming restore is O(chunkSize), not O(fileSize). +- AC3: `restoreFile()` writes via `createWriteStream()` — no `writeFileSync()`. +- AC4: Encrypted + compressed files round-trip correctly via streaming restore. +- AC5: Existing `restore()` method returns identical results (backward compat). + +--- + +## Task 14.3: Parallel chunk I/O + +**User Story** +As a user storing or restoring files with many chunks, I want the system to read/write multiple chunks concurrently so operations complete faster. + +**Requirements** +- R1: Add `concurrency` option to `CasService` constructor (positive integer, default: 1). +- R2: Store path (`_chunkAndStore`): up to N chunks written to Git in parallel. Chunk ordering in the manifest is preserved regardless of write completion order. +- R3: Restore path (`restoreStream`): up to N chunks read from Git in parallel. Yield order matches manifest chunk order (read ahead, buffer up to N, yield in sequence). +- R4: Implement a simple `Semaphore` utility (internal, not exported) to gate concurrent persistence calls. +- R5: `concurrency: 1` produces identical behavior to current sequential code (no functional change). +- R6: Emit `observability.metric('chunk', ...)` per chunk regardless of parallelism. `observability.span('chunk:read')` / `observability.span('chunk:write')` wrap each individual I/O operation. +- R7: Expose `concurrency` option on `ContentAddressableStore` constructor, forwarded to `CasService`. + +**Acceptance Criteria** +- AC1: With `concurrency: 4`, a 20-chunk store completes measurably faster than sequential (benchmark, not unit test). +- AC2: With `concurrency: 4`, restore produces byte-identical output to sequential. +- AC3: With `concurrency: 1`, all existing tests pass unchanged. 
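+
+  For reference, the internal semaphore named in R4 can be as small as the sketch below (illustrative; the shipped utility is internal and unexported). With `max = 1` it degenerates to strictly sequential execution, which is the behavior this criterion locks in:
+
+  ```js
+  // Illustrative async counting semaphore (R4), not the shipped code.
+  class Semaphore {
+    constructor(max) {
+      if (!Number.isInteger(max) || max < 1) {
+        throw new TypeError('concurrency must be a positive integer'); // mirrors R1
+      }
+      this.max = max;
+      this.active = 0;
+      this.waiters = [];
+    }
+    async acquire() {
+      if (this.active < this.max) {
+        this.active += 1;
+        return;
+      }
+      await new Promise((resolve) => this.waiters.push(resolve));
+      // the permit was handed over in release(); `active` is unchanged
+    }
+    release() {
+      const next = this.waiters.shift();
+      if (next) next(); // hand the permit directly to the next waiter
+      else this.active -= 1;
+    }
+  }
+  ```
+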
+- AC4: Manifest chunk order is always preserved regardless of concurrency setting. +- AC5: Semaphore correctly limits concurrent persistence calls. + +--- + +## Task 14.4: Migrate CLI and TUI to ObservabilityPort + +**User Story** +As a CLI user, I want progress bars and stats to work with the new observability system so the terminal experience is unchanged after the v4 migration. + +**Requirements** +- R1: Refactor `bin/ui/progress.js` to subscribe to `ObservabilityPort` metrics instead of EventEmitter events. +- R2: Progress trackers use `observability.metric('chunk', ...)` events for progress updates. +- R3: CLI `store` and `restore` commands wire the observability adapter into CasService via the facade. +- R4: Dashboard and other TUI components continue to function (adapt to new metric format if needed). +- R5: `--quiet` flag still works (uses `SilentObserver`). +- R6: Stats summary printed after store/restore when not in quiet mode (throughput, total bytes, elapsed time). + +**Acceptance Criteria** +- AC1: `git cas store` shows progress bar identical to v3.1.0 behavior. +- AC2: `git cas restore` shows progress bar identical to v3.1.0 behavior. +- AC3: `--quiet` suppresses all output. +- AC4: Stats summary displayed after operation completes. +- AC5: Dashboard renders correctly with new observability wiring. + +--- + +# M13 — Bijou (v3.1.0) ✅ CLOSED + +**Theme:** Beautiful terminal UI powered by `@flyingrobots/bijou`. Replace silent CLI operations with animated progress, and add an interactive vault dashboard for exploring stored assets. + +**Completed:** v3.1.0 (2026-02-27) + +--- + +## Task 13.1: Animated store/restore progress + +**User Story** +As a CLI user, I want a smooth animated progress bar with chunk counts and throughput when storing or restoring files, so I can see that the operation is working and estimate time remaining. + +**Requirements** +- R1: Add `@flyingrobots/bijou` and `@flyingrobots/bijou-node` as dependencies. +- R2: Wire `CasService` events (`chunk:stored`, `chunk:restored`) to a bijou `createAnimatedProgressBar()` with spring physics (preset: `gentle`). +- R3: Display gradient progress bar (theme `CYAN_MAGENTA`) with chunk counter (`78/193 chunks`) and throughput (`19.2 MiB/s`). +- R4: Show last-processed chunk digest and blob OID below the progress bar. +- R5: Progress renders to stderr; stdout reserved for structured output. +- R6: Graceful degradation: static counter in CI, plain text in pipe mode, no output with `--quiet`. + +**Acceptance Criteria** +- AC1: `git cas store` shows animated progress bar in TTY mode. +- AC2: `git cas restore` shows animated progress bar in TTY mode. +- AC3: CI mode (`CI=true`) falls back to static line-by-line progress. +- AC4: Pipe mode shows no progress output. +- AC5: `--quiet` suppresses all progress. + +--- + +## Task 13.2: Vault dashboard — interactive TUI app + +**User Story** +As a developer managing multiple vault entries, I want an interactive terminal dashboard to browse entries, inspect manifests, and view encryption status without memorizing CLI flags. + +**Requirements** +- R1: Add `@flyingrobots/bijou-tui` as a dependency. +- R2: New subcommand: `git cas vault dashboard` (or `git cas vault ui`). +- R3: Full-screen TEA app with flexbox layout: entry list (left pane) + detail view (right pane). +- R4: Entry list shows slug, size (human-readable), chunk count, and badges for encryption/compression/merkle. 
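+
+  "Human-readable" assumes a formatter along these lines (hypothetical helper; binary units match the `MiB/s` convention used in Task 13.1):
+
+  ```js
+  // Hypothetical size formatter: illustrates "human-readable", not the shipped helper.
+  function formatBytes(n) {
+    const units = ['B', 'KiB', 'MiB', 'GiB', 'TiB'];
+    let value = n;
+    let i = 0;
+    while (value >= 1024 && i < units.length - 1) {
+      value /= 1024;
+      i += 1;
+    }
+    return `${i === 0 ? value : value.toFixed(1)} ${units[i]}`;
+  }
+  ```
+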
+- R5: Detail view shows manifest anatomy: metadata, encryption config, compression, sub-manifests, and paginated chunk list. +- R6: Keyboard navigation: `j/k` or arrows to move, `enter` to expand, `/` to filter, `q` to quit. +- R7: Vault-level header showing encryption status, asset count, and vault ref. +- R8: Graceful degradation: static table output in CI/pipe mode. + +**Acceptance Criteria** +- AC1: `git cas vault dashboard` launches interactive TUI in TTY mode. +- AC2: All vault entries listed with correct metadata. +- AC3: Selecting an entry shows full manifest detail. +- AC4: Filter narrows the list by slug substring. +- AC5: `q` or `ctrl-c` exits cleanly (restores terminal state). +- AC6: Non-TTY falls back to static vault list. + +--- + +## Task 13.3: Vault history timeline view + +**User Story** +As a developer, I want to see vault commit history as a visual timeline so I can understand how the vault has evolved over time. + +**Requirements** +- R1: New subcommand: `git cas vault history --pretty` (or integrate into dashboard as a tab). +- R2: Render vault commits using bijou `timeline()` component with status indicators. +- R3: Color-code by operation: green for `add`, yellow for `update`, red for `remove`, blue for `init`. +- R4: Show commit OID (short), operation, slug, and relative timestamp. +- R5: Paginate with bijou `paginator()` for long histories. +- R6: Static fallback: plain `git log --oneline` output (current behavior). + +**Acceptance Criteria** +- AC1: `git cas vault history --pretty` renders color-coded timeline in TTY mode. +- AC2: Operations correctly color-coded by parsing commit messages. +- AC3: Pagination works for vaults with >20 commits. +- AC4: Without `--pretty`, behavior unchanged (backward compatible). + +--- + +## Task 13.4: Manifest anatomy view + +**User Story** +As a developer debugging storage issues, I want a rich visual breakdown of a manifest showing its structure, encryption metadata, compression settings, and chunk layout. + +**Requirements** +- R1: New subcommand: `git cas inspect --slug ` (or `--oid `). +- R2: Render manifest using bijou `box()`, `accordion()`, and `tree()` components. +- R3: Sections: metadata (slug, filename, size, version), encryption (algorithm, KDF params), compression, sub-manifests (if Merkle), and paginated chunk list. +- R4: Chunks section uses `paginator()` — show 20 chunks per page with index, size, digest (truncated), and blob OID. +- R5: Badges for encryption status, compression, Merkle, manifest version. +- R6: Static fallback: JSON dump (current `readManifest` behavior). + +**Acceptance Criteria** +- AC1: `git cas inspect --slug ` renders structured manifest view. +- AC2: Accordion sections expand/collapse. +- AC3: Chunk pagination works. +- AC4: Encrypted manifests show full KDF parameter breakdown. +- AC5: Merkle manifests show sub-manifest tree. + +--- + +## Task 13.5: Chunk heatmap visualization + +**User Story** +As a developer, I want a visual block map of chunks in a stored file so I can quickly see the storage layout, Merkle boundaries, and progress during operations. + +**Requirements** +- R1: Render a grid of `█` / `░` blocks, one per chunk, sized to terminal width. +- R2: Color via bijou `gradientText()` from start to end of file. +- R3: Show Merkle sub-manifest boundaries with `│` separators in the grid. +- R4: Legend showing chunk count, sub-manifest count, chunk size. +- R5: Integrate into `git cas inspect` as an optional `--heatmap` flag. 
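+
+  The grid logic behind R1–R4 fits in a few lines; a sketch (parameter names are assumptions, and the `gradientText()` coloring from R2 is omitted):
+
+  ```js
+  // Illustrative grid renderer, not the shipped code.
+  function renderHeatmap({ chunkCount, processed = chunkCount, boundaries = [], width = 80 }) {
+    const cells = [];
+    for (let i = 0; i < chunkCount; i += 1) {
+      if (i > 0 && boundaries.includes(i)) cells.push('│'); // Merkle sub-manifest boundary (R3)
+      cells.push(i < processed ? '█' : '░'); // stored vs. pending chunk (R1, R6)
+    }
+    const rows = [];
+    for (let start = 0; start < cells.length; start += width) {
+      rows.push(cells.slice(start, start + width).join('')); // reflow to terminal width (AC4)
+    }
+    return rows.join('\n');
+  }
+  ```
+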
+- R6: During store/restore (Task 13.1), optionally show filling heatmap instead of progress bar via `--heatmap` flag. + +**Acceptance Criteria** +- AC1: `git cas inspect --slug --heatmap` renders chunk grid. +- AC2: Gradient coloring spans the full grid. +- AC3: Merkle boundaries visually distinct. +- AC4: Grid reflows to terminal width. + +--- + +## Task 13.6: Encryption info card + +**User Story** +As a security-conscious user, I want a clear visual summary of my vault's encryption configuration so I can verify the crypto parameters at a glance. + +**Requirements** +- R1: Render encryption details using bijou `box()` with labeled rows. +- R2: Show cipher, KDF algorithm, KDF parameters (iterations/cost/blockSize/parallelization), salt (truncated), and key length. +- R3: Status badge: `● locked` (red) when no key provided, `● unlocked` (green) when key resolved. +- R4: Integrate into vault dashboard header and `git cas inspect` encryption accordion. +- R5: Standalone via `git cas vault info --encryption`. + +**Acceptance Criteria** +- AC1: Encryption card renders all KDF parameters. +- AC2: Correct badge for locked/unlocked state. +- AC3: Works for both pbkdf2 and scrypt vaults. +- AC4: Non-encrypted vault → "No encryption configured" message. + +--- + +# Completed tasks from open milestones + +## Task 9.1: CLI progress feedback *(completed by M13 Bijou)* + +**User Story** +As a CLI user storing or restoring large files, I want visible progress so I know the operation is working and not hung. + +**Requirements** +- R1: Wire `CasService` events (`chunk:stored`, `chunk:restored`, `file:stored`, `file:restored`) to CLI output. +- R2: Display a progress counter during store/restore: `Storing chunk 5/12…` or similar. +- R3: Progress output goes to stderr (stdout reserved for structured output). +- R4: Progress suppressed when stdout is not a TTY (piped mode) or when `--quiet` is passed. +- R5: Add `--quiet` global flag to suppress progress output. + +**Acceptance Criteria** +- AC1: `git cas store` shows per-chunk progress on stderr in TTY mode. +- AC2: `git cas restore` shows per-chunk progress on stderr in TTY mode. +- AC3: Piped mode (`git cas store … | jq`) shows no progress. +- AC4: `--quiet` suppresses all progress output. + +**Note:** All requirements delivered by M13 Task 13.1 (animated progress bars with `--quiet` flag, TTY detection, and graceful degradation). Subsequently migrated to ObservabilityPort in M14 Task 14.4. diff --git a/GRAVEYARD.md b/GRAVEYARD.md new file mode 100644 index 00000000..c8aa19e8 --- /dev/null +++ b/GRAVEYARD.md @@ -0,0 +1,31 @@ +# Task Graveyard + +Tasks moved here from ROADMAP.md because they were superseded or duplicated by other work. + +--- + +## Task 8.1: Streaming restore *(superseded by Task 14.2)* + +**Originally in:** M8 — Spit Shine (v2.1.0) + +**Superseded by:** Task 14.2 (Streaming restore) in M14 — Conduit (v4.0.0) + +**Reason:** Task 14.2 implemented all of Task 8.1's requirements plus additional integration with ObservabilityPort metrics/spans and the `restoreStream()` → `restore()` unification. The M14 version is strictly a superset. + +**User Story** +As a developer restoring large files, I want a streaming restore path so I don't buffer the entire file in memory. + +**Requirements** +- R1: Add `CasService.restoreStream({ manifest, encryptionKey, passphrase })` returning `AsyncIterable`. +- R2: Each yielded buffer is one verified, decrypted, decompressed chunk — ready to write. 
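+
+  Intended consumption pattern (still accurate for the shipped Task 14.2 API; `cas`, `manifest`, and `passphrase` are assumed to be in scope):
+
+  ```js
+  // Illustrative consumer: each chunk arrives verified, decrypted, decompressed (R2).
+  import { createWriteStream } from 'node:fs';
+  import { once } from 'node:events';
+
+  const out = createWriteStream('./restored.bin');
+  for await (const chunk of cas.restoreStream({ manifest, passphrase })) {
+    if (!out.write(chunk)) await once(out, 'drain'); // honor backpressure
+  }
+  out.end();
+  ```
+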
+- R3: Integrity verified per-chunk before yield (not after full reassembly). +- R4: Decompression and decryption applied per-chunk in streaming fashion. +- R5: `restoreFile()` in the facade uses `restoreStream()` internally with `createWriteStream()` instead of `writeFileSync()`. +- R6: Existing `restore()` method remains unchanged (returns `{ buffer, bytesWritten }`) for backward compat. + +**Acceptance Criteria** +- AC1: `restoreStream()` yields chunks that, when concatenated, match the original file byte-for-byte. +- AC2: Memory usage during streaming restore is O(chunkSize), not O(fileSize). +- AC3: `restoreFile()` writes via stream and does not call `writeFileSync()`. +- AC4: Encrypted + compressed files round-trip correctly via streaming restore. +- AC5: Existing `restore()` method behavior unchanged. diff --git a/README.md b/README.md index f7402214..d1df8297 100644 --- a/README.md +++ b/README.md @@ -133,6 +133,13 @@ git cas store ./secret.bin --slug vault-entry --tree git cas restore --slug vault-entry --out ./decrypted.bin ``` +## Documentation + +- [Guide](./GUIDE.md) — progressive walkthrough +- [API Reference](./docs/API.md) — full method documentation +- [Architecture](./ARCHITECTURE.md) — hexagonal design overview +- [Security](./docs/SECURITY.md) — crypto design and threat model + ## When to use git-cas (and when not to) ### "I just want screenshots in my README" diff --git a/ROADMAP.md b/ROADMAP.md index c1e7991e..e8893a8c 100644 --- a/ROADMAP.md +++ b/ROADMAP.md @@ -9,7 +9,7 @@ This roadmap is structured as: 3. **Contracts** — Return/throw semantics for all public methods 4. **Version Plan** — Table mapping versions to milestones 5. **Milestone Dependency Graph** — ASCII diagram -6. **Milestones & Task Cards** — 5 milestones, 20 tasks (uniform task card template) +6. **Milestones & Task Cards** — 7 milestones (2 closed, 5 open), remaining task cards 7. **Feature Matrix** — Competitive landscape vs. Git LFS, git-annex, Restic, Age, DVC 8. **Competitive Analysis** — When to use git-cas and when not to, with concrete scenarios @@ -131,7 +131,7 @@ Return and throw semantics for every public method (current and planned). - **Exit 0:** Restore succeeded, prints bytes written to stdout. - **Exit 1:** Integrity error, missing manifest, or I/O error (message to stderr). -### `restoreStream({ manifest, encryptionKey?, passphrase? })` *(planned — Task 8.1)* +### `restoreStream({ manifest, encryptionKey?, passphrase? })` *(implemented — v4.0.0)* - **Returns:** `AsyncIterable` — verified, decrypted, decompressed chunks in index order. - **Throws:** `CasError('INTEGRITY_ERROR')` if any chunk fails verification (iteration stops). - **Throws:** `CasError('MISSING_KEY')` if encrypted and no key provided. @@ -185,7 +185,7 @@ Return and throw semantics for every public method (current and planned). 
| Version | Milestone | Codename | Theme | Status | |--------:|-----------|----------|-------|--------| -| v4.0.0 | M14 | Conduit | Streaming I/O, observability, parallel chunks | | +| v4.0.0 | M14 | Conduit | Streaming I/O, observability, parallel chunks | ✅ | | v2.1.0 | M8 | Spit Shine | Review fixups | | | v2.2.0 | M9 | Cockpit | CLI improvements | | | v3.0.0 | M10 | Hydra | Content-defined chunking | | @@ -204,17 +204,12 @@ M7 Horizon (v2.0.0) ✅ ────────────────── v v v v M8 Spit M9 Cockpit M10 Hydra M11 Locksmith Shine (v2.2.0) │ │ -(v2.1.0) │ │ v - │ v M12 Carousel - │ (CDC benchmarks) - │ - v - M13 Bijou (v3.1.0) ✅ - (TUI dashboard & progress) - │ - v - M14 Conduit (v4.0.0) ◀── NEXT - (Streaming I/O + Observability + Parallel chunks) +(v2.1.0) │ v + v M12 Carousel + (CDC benchmarks) + +M13 Bijou (v3.1.0) ✅ +M14 Conduit (v4.0.0) ✅ ``` --- @@ -223,274 +218,23 @@ Shine (v2.2.0) │ │ ### Milestones at a glance -| # | Codename | Theme | Version | Tasks | ~LoC | ~Hours | -|---:|--------------|----------------------------|:-------:|------:|-------:|------:| -| M14| Conduit | Streaming I/O, observability, parallel chunks | v4.0.0 | 4 | ~600 | ~18h | -| M8 | Spit Shine | Review fixups | v2.1.0 | 3 | ~290 | ~7h | -| M9 | Cockpit | CLI improvements | v2.2.0 | 5 | ~260 | ~7h | -| M10| Hydra | Content-defined chunking | v3.0.0 | 4 | ~690 | ~22h | -| M11| Locksmith | Multi-recipient encryption | v3.1.0 | 4 | ~580 | ~20h | -| M12| Carousel | Key rotation | v3.2.0 | 4 | ~400 | ~13h | -| M13| Bijou | TUI dashboard & progress | v3.1.0 | 6 | ~650 | ~20h | -| | **Total** | | | **30**| **~3,470** | **~107h** | - ---- - -# M14 — Conduit (v4.0.0) -**Theme:** Replace `EventEmitter` inheritance with a proper `ObservabilityPort`, add streaming restore, and enable parallel chunk I/O. Major version bump: removes `extends EventEmitter` from `CasService`, adds `observability` as a required constructor port. - ---- - -## Task 14.1: ObservabilityPort and adapters - -**User Story** -As a library consumer, I want structured observability (metrics, logs, spans) from CAS operations so I can monitor throughput, track errors, and integrate with my own tooling — without the domain layer depending on Node's EventEmitter. - -**Requirements** -- R1: Define `ObservabilityPort` interface with three methods: - - `metric(channel: string, data: object)` — emit a named metric (channels: `chunk`, `file`, `integrity`, `vault`). - - `log(level: string, message: string, meta?: object)` — structured log (`debug`, `info`, `warn`, `error`). - - `span(name: string) → { end(meta?: object): void }` — timed operation bracket. -- R2: Remove `extends EventEmitter` from `CasService`. All `this.emit()` calls replaced with `this.observability.metric()` or `this.observability.log()`. -- R3: `observability` becomes a required constructor parameter on `CasService` (like `persistence`, `codec`, `crypto`). -- R4: Implement `SilentObserver` adapter (no-op — all methods are empty). This is the default when no observability is needed. -- R5: Implement `EventEmitterObserver` adapter that translates `metric()` calls to `EventEmitter.emit()` calls for backward compatibility. Consumers who relied on `service.on('chunk:stored', ...)` can wrap with this adapter. -- R6: Implement `StatsCollector` adapter that accumulates metrics and exposes a summary object: `{ chunksProcessed, bytesTotal, elapsed, throughput, errors }`. 
-- R7: Facade (`ContentAddressableStore`) creates a default `SilentObserver` if no observability adapter is provided, and passes it to `CasService`. -- R8: Update `.d.ts` declarations for new port and adapters. - -**Acceptance Criteria** -- AC1: `CasService` no longer extends `EventEmitter`. -- AC2: All existing event emission points emit metrics via `ObservabilityPort`. -- AC3: `EventEmitterObserver` adapter produces identical events to the old `extends EventEmitter` behavior. -- AC4: `StatsCollector` accumulates correct stats across a full store+restore cycle. -- AC5: `SilentObserver` introduces zero overhead (no-op methods). -- AC6: Span `end()` captures elapsed time in the metric. - -**Scope** -- In scope: Port definition, 3 adapters, CasService refactor, facade wiring, TypeScript declarations. -- Out of scope: TUI adapter (M13 already has its own bijou integration — it can wrap `EventEmitterObserver` or adopt `ObservabilityPort` in a follow-up). Log levels beyond the 4 basics. Persistent metrics storage. - -**Est. Complexity (LoC)** -- Prod: ~180 (port ~30, 3 adapters ~90, CasService refactor ~40, facade ~20) -- Tests: ~120 -- Total: ~300 - -**Est. Human Working Hours** -- ~8h - -**Test Plan** -- Golden path: - - Store file with `StatsCollector` → verify `chunksProcessed`, `bytesTotal`, `throughput` are correct. - - Store + restore with `EventEmitterObserver` → assert same events as old EventEmitter behavior. - - `SilentObserver` → store + restore completes with no errors, no output. -- Failures: - - Missing `observability` param → constructor throws with descriptive error. - - Corrupted chunk → `observability.log('error', ...)` called before throw. -- Edges: - - 0-byte file → span starts and ends, no chunk metrics emitted. - - Span `end()` called twice → no error (idempotent). -- Fuzz/stress: - - All existing CasService tests must pass with `SilentObserver` injected. - -**Definition of Done** -- DoD1: `CasService` does not extend `EventEmitter`. -- DoD2: `ObservabilityPort` defined with metric/log/span. -- DoD3: 3 adapters implemented and tested. -- DoD4: All existing tests updated and green. -- DoD5: TypeScript declarations updated. - -**Blocking** -- Blocks: Task 14.2, 14.3, 14.4 - -**Blocked By** -- Blocked by: None - ---- - -## Task 14.2: Streaming restore - -**User Story** -As a developer restoring large files, I want a streaming restore path so memory usage is O(chunkSize), not O(fileSize). - -**Requirements** -- R1: Add `CasService.restoreStream({ manifest, encryptionKey, passphrase })` returning `AsyncIterable`. -- R2: Each yielded buffer is one verified, decrypted, decompressed chunk — ready to write. -- R3: Integrity verified per-chunk before yield (not after full reassembly). -- R4: Decompression and decryption applied per-chunk in streaming fashion. -- R5: `restoreFile()` in the facade uses `restoreStream()` internally with `createWriteStream()` instead of `writeFileSync()`. -- R6: Existing `restore()` method reimplemented as: collect `restoreStream()` into buffer. Single code path, two interfaces. -- R7: Emit `observability.metric('chunk', ...)` per chunk and `observability.span('restore')` for the full operation. - -**Acceptance Criteria** -- AC1: `restoreStream()` yields chunks that, when concatenated, match the original file byte-for-byte. -- AC2: Memory usage during streaming restore is O(chunkSize), not O(fileSize). -- AC3: `restoreFile()` writes via `createWriteStream()` — no `writeFileSync()`. 
-- AC4: Encrypted + compressed files round-trip correctly via streaming restore. -- AC5: Existing `restore()` method returns identical results (backward compat). - -**Scope** -- In scope: `restoreStream()` on CasService + facade, refactor `restoreFile()` and `restore()`. -- Out of scope: Parallel chunk reads (Task 14.3), resume/partial restore. - -**Est. Complexity (LoC)** -- Prod: ~80 -- Tests: ~100 -- Total: ~180 - -**Est. Human Working Hours** -- ~5h - -**Test Plan** -- Golden path: - - Store 10KB → restoreStream → collect → byte-compare original. - - Store encrypted + compressed → restoreStream → collect → compare. - - restoreFile writes correct file via streaming (spy confirms no writeFileSync). -- Failures: - - Corrupted chunk mid-stream → throws INTEGRITY_ERROR, iteration stops. - - Wrong key → throws INTEGRITY_ERROR on first encrypted chunk. -- Edges: - - 0-byte manifest yields empty iterable. - - Single-chunk file yields exactly 1 buffer. - - Exact multiple of chunkSize yields expected count. -- Fuzz/stress: - - 50 random file sizes (seeded) — streaming restore matches buffered restore byte-for-byte. - -**Definition of Done** -- DoD1: `restoreStream()` implemented on CasService and exposed via facade. -- DoD2: `restoreFile()` refactored to use streaming writes. -- DoD3: `restore()` reimplemented on top of `restoreStream()`. -- DoD4: All existing restore tests still pass. -- DoD5: New streaming tests added and green. - -**Blocking** -- Blocks: Task 14.3 - -**Blocked By** -- Blocked by: Task 14.1 (observability wiring) - ---- - -## Task 14.3: Parallel chunk I/O - -**User Story** -As a user storing or restoring files with many chunks, I want the system to read/write multiple chunks concurrently so operations complete faster. - -**Requirements** -- R1: Add `concurrency` option to `CasService` constructor (positive integer, default: 1). -- R2: Store path (`_chunkAndStore`): up to N chunks written to Git in parallel. Chunk ordering in the manifest is preserved regardless of write completion order. -- R3: Restore path (`restoreStream`): up to N chunks read from Git in parallel. Yield order matches manifest chunk order (read ahead, buffer up to N, yield in sequence). -- R4: Implement a simple `Semaphore` utility (internal, not exported) to gate concurrent persistence calls. -- R5: `concurrency: 1` produces identical behavior to current sequential code (no functional change). -- R6: Emit `observability.metric('chunk', ...)` per chunk regardless of parallelism. `observability.span('chunk:read')` / `observability.span('chunk:write')` wrap each individual I/O operation. -- R7: Expose `concurrency` option on `ContentAddressableStore` constructor, forwarded to `CasService`. - -**Acceptance Criteria** -- AC1: With `concurrency: 4`, a 20-chunk store completes measurably faster than sequential (benchmark, not unit test). -- AC2: With `concurrency: 4`, restore produces byte-identical output to sequential. -- AC3: With `concurrency: 1`, all existing tests pass unchanged. -- AC4: Manifest chunk order is always preserved regardless of concurrency setting. -- AC5: Semaphore correctly limits concurrent persistence calls. - -**Scope** -- In scope: Semaphore, parallel store loop, parallel restore with ordered yield, concurrency config. -- Out of scope: Adaptive concurrency (auto-tuning), per-operation concurrency overrides, connection pooling in GitPersistenceAdapter. - -**Est. 
Complexity (LoC)** -- Prod: ~100 (Semaphore ~25, store refactor ~30, restore refactor ~30, config ~15) -- Tests: ~80 -- Total: ~180 - -**Est. Human Working Hours** -- ~6h - -**Test Plan** -- Golden path: - - Store + restore with concurrency: 4, verify byte-for-byte match. - - Store + restore with concurrency: 1, verify identical to current behavior. - - Encrypted + compressed + concurrency: 4 → correct round-trip. -- Failures: - - concurrency: 0 → constructor throws. - - concurrency: -1 → constructor throws. - - One chunk write fails mid-batch → error propagated, partial writes are safe (unreachable blobs GC'd by Git). -- Edges: - - File with 1 chunk + concurrency: 4 → works (no deadlock). - - File with 3 chunks + concurrency: 10 → only 3 in flight. - - 0-byte file + any concurrency → no-op. -- Fuzz/stress: - - Benchmark: 100-chunk file, concurrency 1 vs 4 vs 8, measure wall-clock time. - -**Definition of Done** -- DoD1: Semaphore utility implemented. -- DoD2: Store and restore support configurable concurrency. -- DoD3: All tests pass at concurrency: 1. -- DoD4: Parallel tests added and green. -- DoD5: Benchmark script demonstrates speedup. - -**Blocking** -- Blocks: None +| # | Codename | Theme | Version | Tasks | ~LoC | ~Hours | Status | +|---:|--------------|----------------------------|:-------:|------:|-------:|------:|:------:| +| M14| Conduit | Streaming I/O, observability, parallel chunks | v4.0.0 | 4 | ~600 | ~18h | ✅ CLOSED | +| M13| Bijou | TUI dashboard & progress | v3.1.0 | 6 | ~650 | ~20h | ✅ CLOSED | +| M8 | Spit Shine | Review fixups | v2.1.0 | 2 | ~150 | ~3h | open | +| M9 | Cockpit | CLI improvements | v2.2.0 | 4 | ~190 | ~5h | open | +| M10| Hydra | Content-defined chunking | v3.0.0 | 4 | ~690 | ~22h | open | +| M11| Locksmith | Multi-recipient encryption | v3.1.0 | 4 | ~580 | ~20h | open | +| M12| Carousel | Key rotation | v3.2.0 | 4 | ~400 | ~13h | open | -**Blocked By** -- Blocked by: Task 14.2 (restoreStream) +Completed task cards are in [COMPLETED_TASKS.md](./COMPLETED_TASKS.md). Superseded tasks are in [GRAVEYARD.md](./GRAVEYARD.md). --- -## Task 14.4: Migrate CLI and TUI to ObservabilityPort - -**User Story** -As a CLI user, I want progress bars and stats to work with the new observability system so the terminal experience is unchanged after the v4 migration. - -**Requirements** -- R1: Refactor `bin/ui/progress.js` to subscribe to `ObservabilityPort` metrics instead of EventEmitter events. -- R2: Progress trackers use `observability.metric('chunk', ...)` events for progress updates. -- R3: CLI `store` and `restore` commands wire the observability adapter into CasService via the facade. -- R4: Dashboard and other TUI components continue to function (adapt to new metric format if needed). -- R5: `--quiet` flag still works (uses `SilentObserver`). -- R6: Stats summary printed after store/restore when not in quiet mode (throughput, total bytes, elapsed time). - -**Acceptance Criteria** -- AC1: `git cas store` shows progress bar identical to v3.1.0 behavior. -- AC2: `git cas restore` shows progress bar identical to v3.1.0 behavior. -- AC3: `--quiet` suppresses all output. -- AC4: Stats summary displayed after operation completes. -- AC5: Dashboard renders correctly with new observability wiring. - -**Scope** -- In scope: CLI progress migration, stats summary, dashboard adaptation. -- Out of scope: New TUI features, log file output, verbose debug mode. - -**Est. 
Complexity (LoC)** -- Prod: ~60 (progress refactor ~30, CLI wiring ~20, stats display ~10) -- Tests: ~20 -- Total: ~80 - -**Est. Human Working Hours** -- ~3h - -**Test Plan** -- Golden path: - - Store with progress → verify metric events drive progress display. - - Restore with progress → same. - - Stats summary printed with correct values. -- Failures: - - None expected (thin adapter layer). -- Edges: - - Quiet mode → SilentObserver, no output. - - Pipe mode → no progress, no stats. -- Fuzz/stress: - - None (display layer). - -**Definition of Done** -- DoD1: Progress bars work with ObservabilityPort. -- DoD2: Stats summary displays after operations. -- DoD3: All CLI tests pass. -- DoD4: Dashboard functional with new wiring. +# M14 — Conduit (v4.0.0) ✅ CLOSED -**Blocking** -- Blocks: None - -**Blocked By** -- Blocked by: Task 14.1 (ObservabilityPort) +All tasks completed (14.1–14.4). See [COMPLETED_TASKS.md](./COMPLETED_TASKS.md). --- @@ -499,68 +243,6 @@ As a CLI user, I want progress bars and stats to work with the new observability --- -## Task 8.1: Streaming restore *(superseded by Task 14.2)* - -**User Story** -As a developer restoring large files, I want a streaming restore path so I don't buffer the entire file in memory. - -**Requirements** -- R1: Add `CasService.restoreStream({ manifest, encryptionKey, passphrase })` returning `AsyncIterable`. -- R2: Each yielded buffer is one verified, decrypted, decompressed chunk — ready to write. -- R3: Integrity verified per-chunk before yield (not after full reassembly). -- R4: Decompression and decryption applied per-chunk in streaming fashion. -- R5: `restoreFile()` in the facade uses `restoreStream()` internally with `createWriteStream()` instead of `writeFileSync()`. -- R6: Existing `restore()` method remains unchanged (returns `{ buffer, bytesWritten }`) for backward compat. - -**Acceptance Criteria** -- AC1: `restoreStream()` yields chunks that, when concatenated, match the original file byte-for-byte. -- AC2: Memory usage during streaming restore is O(chunkSize), not O(fileSize). -- AC3: `restoreFile()` writes via stream and does not call `writeFileSync()`. -- AC4: Encrypted + compressed files round-trip correctly via streaming restore. -- AC5: Existing `restore()` method behavior unchanged. - -**Scope** -- In scope: `restoreStream()` on CasService + facade, refactor `restoreFile()` to use streaming writes. -- Out of scope: Parallel chunk reads, resume/partial restore, streaming decrypt rearchitecture. - -**Est. Complexity (LoC)** -- Prod: ~60 -- Tests: ~80 -- Total: ~140 - -**Est. Human Working Hours** -- ~4h - -**Test Plan** -- Golden path: - - Store 10KB → restoreStream → collect → byte-compare original. - - Store encrypted + compressed → restoreStream → collect → compare. - - restoreFile writes correct file via streaming (spy confirms no writeFileSync). -- Failures: - - Corrupted chunk mid-stream → throws INTEGRITY_ERROR, iteration stops. - - Wrong key → throws INTEGRITY_ERROR on first encrypted chunk. -- Edges: - - 0-byte manifest yields empty iterable. - - Single-chunk file yields exactly 1 buffer. - - Exact multiple of chunkSize yields expected count. -- Fuzz/stress: - - 50 random file sizes (seeded) — streaming restore matches buffered restore byte-for-byte. - - Memory profiling: restoreStream on 10MB file stays under 2× chunkSize peak. - -**Definition of Done** -- DoD1: `restoreStream()` implemented on CasService and exposed via facade. -- DoD2: `restoreFile()` refactored to use streaming writes. 
-- DoD3: All existing restore tests still pass. -- DoD4: New streaming tests added and green. - -**Blocking** -- Blocks: None - -**Blocked By** -- Blocked by: None - ---- - ## Task 8.2: Extract shared crypto helpers to CryptoPort base class **User Story** @@ -670,63 +352,6 @@ As a new user, I want the README to get me started quickly. As a contributor, I --- -## Task 9.1: CLI progress feedback - -**User Story** -As a CLI user storing or restoring large files, I want visible progress so I know the operation is working and not hung. - -**Requirements** -- R1: Wire `CasService` events (`chunk:stored`, `chunk:restored`, `file:stored`, `file:restored`) to CLI output. -- R2: Display a progress counter during store/restore: `Storing chunk 5/12…` or similar. -- R3: Progress output goes to stderr (stdout reserved for structured output). -- R4: Progress suppressed when stdout is not a TTY (piped mode) or when `--quiet` is passed. -- R5: Add `--quiet` global flag to suppress progress output. - -**Acceptance Criteria** -- AC1: `git cas store` shows per-chunk progress on stderr in TTY mode. -- AC2: `git cas restore` shows per-chunk progress on stderr in TTY mode. -- AC3: Piped mode (`git cas store … | jq`) shows no progress. -- AC4: `--quiet` suppresses all progress output. - -**Scope** -- In scope: Progress display for store and restore. -- Out of scope: Progress bars with ETA, spinners, color output, verbose debug logging. - -**Est. Complexity (LoC)** -- Prod: ~50 -- Tests: ~20 -- Total: ~70 - -**Est. Human Working Hours** -- ~2h - -**Test Plan** -- Golden path: - - Store 3-chunk file in TTY mode → stderr shows 3 progress messages. - - Restore → stderr shows 3 progress messages. -- Failures: - - None expected (progress is best-effort, non-blocking). -- Edges: - - 0-chunk file (empty) → no progress messages. - - 1-chunk file → exactly 1 progress message. - - Non-TTY mode → no progress on stderr. - - `--quiet` → no progress on stderr. -- Fuzz/stress: - - None (thin display layer). - -**Definition of Done** -- DoD1: Progress feedback visible in CLI during store and restore. -- DoD2: `--quiet` flag implemented and functional. -- DoD3: Non-TTY detection works correctly. - -**Blocking** -- Blocks: None - -**Blocked By** -- Blocked by: None - ---- - ## Task 9.2: CLI `verify` command **User Story** @@ -1694,7 +1319,7 @@ Competitive landscape for content-addressed storage, encrypted binary assets, an | Compression | ✅ gzip | — | ❌ | ⚠️ Via GPG (zlib/bzip2) | ✅ zstandard | ❌ | ❌ | Reduce storage size for compressible data | Compress-before-encrypt pipeline. Only git-cas and Restic offer explicit control | — | | Compression algorithm selection | ❌ gzip only | ❌ | ❌ | ⚠️ GPG's choice | ✅ zstd auto/max/off | ❌ | ❌ | Tune speed vs. ratio per workload | zstd is faster + better ratio than gzip. Would need CompressionPort | CompressionPort + zstd adapter. ~120 LoC, ~6h. Medium priority | | Streaming store (O(1) memory) | ✅ AsyncIterable | — | ⚠️ Transfer adapters | ✅ GPG pipeline | ✅ Pack streaming | ✅ 64 KiB chunks | ❌ | Store arbitrarily large files without OOM | git-cas chunks and encrypts in streaming fashion | — | -| Streaming restore (O(1) memory) | ❌ Buffers in memory | 🗓 M8 Spit Shine | ⚠️ | ✅ | ✅ | ✅ | ❌ | Restore large files without OOM | Current restore() buffers entire file. Asymmetry with store path | restoreStream() + restoreFile refactor. 
~140 LoC, ~4h (Task 8.1) | +| Streaming restore (O(1) memory) | ✅ restoreStream() | — | ⚠️ | ✅ | ✅ | ✅ | ❌ | Restore large files without OOM | Implemented in v4.0.0 (M14 Conduit) | — | | Partial restore / byte-range | ❌ | ❌ | ❌ | ⚠️ Per-chunk retrieval | ✅ FUSE mount | ❌ | ❌ | Extract byte ranges without restoring full file | Manifest has chunk offsets; byte-range index is feasible | Chunk offset index + range API. ~200 LoC, ~10h. Low priority | --- @@ -1732,8 +1357,8 @@ Competitive landscape for content-addressed storage, encrypted binary assets, an | CLI tool | ✅ `git cas` subcommand | — | ✅ `git lfs` | ✅ `git annex` | ✅ `restic` | ✅ `age` | ✅ `dvc` | Terminal-based workflows | All tools have CLIs. git-cas integrates as a Git subcommand | — | | Programmatic API / library | ✅ Node.js (ESM) | — | ⚠️ Go internal | ⚠️ Haskell | ⚠️ Go internal | ✅ Go, Rust, JS, Java, Python | ✅ Python | Integrate CAS into applications | git-cas and Age are the strongest library stories | — | | Multi-runtime support | ✅ Node, Bun, Deno | — | ❌ Go only | ❌ Haskell only | ❌ Go only | ✅ Go, Rust, JS, Java, Python | ❌ Python only | Same library works across JS runtimes | Only git-cas and Age support multiple runtimes | — | -| Progress events (structured) | ✅ EventEmitter (7 events) | — | ✅ Transfer protocol | ⚠️ Terminal bars | ✅ JSON Lines | ❌ | ⚠️ Terminal bars | Build progress bars, logging, monitoring | git-cas emits typed object payloads per chunk | — | -| CLI progress feedback | ❌ Silent | 🗓 M9 Cockpit | ✅ | ✅ | ✅ | ❌ | ✅ | Users know operations are working | Events exist but CLI doesn't display them | Wire events to stderr counter. ~70 LoC, ~2h (Task 9.1) | +| Progress events (structured) | ✅ ObservabilityPort (metric/log/span) | — | ✅ Transfer protocol | ⚠️ Terminal bars | ✅ JSON Lines | ❌ | ⚠️ Terminal bars | Build progress bars, logging, monitoring | git-cas emits typed metrics per chunk via ObservabilityPort (v4.0.0) | — | +| CLI progress feedback | ✅ Animated (bijou) | — | ✅ | ✅ | ✅ | ❌ | ✅ | Users know operations are working | Implemented in v3.1.0 (M13 Bijou) | — | | Structured output (--json) | ❌ | 🗓 M9 Cockpit | ❌ | ❌ | ✅ `--json` | ❌ | ✅ `--json` | CI/CD pipeline integration | Restic is the gold standard here (JSON Lines for all output) | Global `--json` flag. ~50 LoC, ~1.5h (Task 9.3) | | CLI `verify` command | ❌ API only | 🗓 M9 Cockpit | ✅ Implicit on checkout | ✅ `annex fsck` | ✅ `restic check` | ❌ | ✅ `dvc check-ignore` | Audit integrity without restoring | API exists (`verifyIntegrity`); CLI just needs to expose it | 25 LoC, ~1h (Task 9.2) | | Actionable error messages | ❌ Generic `err.message` | 🗓 M9 Cockpit | ⚠️ | ⚠️ | ✅ | ❌ | ✅ | Users know what went wrong and what to do next | Error codes exist but CLI doesn't show hints | Error handler + hint map. 
~45 LoC, ~1h (Task 9.4) | @@ -1759,7 +1384,7 @@ Competitive landscape for content-addressed storage, encrypted binary assets, an |---|---|---|---|---|---|---| | **Core identity** | Git-native CAS with encryption | Git large file offloading | Distributed file management | Encrypted backup with dedup | File encryption primitive | ML data version control | | **Strongest at** | Git ODB integration, pluggable codecs, Merkle manifests, vault | Simplicity, file locking, ecosystem adoption | Backend diversity, location tracking, metadata views | CDC dedup, retention policies, FUSE mount | Multi-recipient, HSM, multi-language, simplicity | ML pipelines, experiment tracking, Python ecosystem | -| **Weakest at** | No multi-backend, single-key encryption, gzip only | No encryption, no compression, requires server | Complexity, Haskell-only, no CDC | No Git integration, no library API | Not a storage system | No encryption, no chunking, no streaming | +| **Weakest at** | No multi-backend, single-key encryption, gzip only, no CDC | No encryption, no compression, requires server | Complexity, Haskell-only, no CDC | No Git integration, no library API | Not a storage system | No encryption, no chunking, no streaming | | **Server required** | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | | **Best use case** | Encrypted binary assets in Git repos | Large files in GitHub/GitLab repos | Distributed archive management | Encrypted backups of filesystems | Encrypting files for recipients | ML model/data versioning | @@ -1775,14 +1400,15 @@ Competitive landscape for content-addressed storage, encrypted binary assets, an ### Where git-cas trails (and what closes the gap) -1. **Multi-recipient encryption** → M11 Locksmith (v3.1.0). DEK/KEK envelope encryption. ~580 LoC, ~20h. -2. **Content-defined chunking** → M10 Hydra (v3.0.0). Buzhash CDC engine + ChunkingPort. ~690 LoC, ~22h. -3. **Key rotation** → M12 Carousel (v3.2.0). Re-wrap DEK without re-encrypting data. ~400 LoC, ~13h. -4. **Streaming restore** → M8 Spit Shine (v2.1.0). restoreStream() returning AsyncIterable. ~140 LoC, ~4h. -5. **CLI polish** → M9 Cockpit (v2.2.0). Progress, verify, --json, actionable errors. ~260 LoC, ~7h. -6. **Multi-backend storage** → Not planned. Git remotes serve as the transport layer by design. Adding S3/SFTP backends would dilute the "Git-native" identity. -7. **Compression algorithm selection** → Not on roadmap. CompressionPort + zstd adapter would cost ~120 LoC, ~6h. Medium priority. -8. **FUSE mount / partial restore** → Not planned. Niche for a CAS library. Would require ~500 LoC + platform-specific bindings. +1. **Multi-recipient encryption** → M11 Locksmith. DEK/KEK envelope encryption. ~580 LoC, ~20h. +2. **Content-defined chunking** → M10 Hydra. Buzhash CDC engine + ChunkingPort. ~690 LoC, ~22h. +3. **Key rotation** → M12 Carousel. Re-wrap DEK without re-encrypting data. ~400 LoC, ~13h. +4. ~~**Streaming restore**~~ → ✅ Delivered in v4.0.0 (M14 Conduit). `restoreStream()` returning AsyncIterable. +5. **CLI polish** → M9 Cockpit. Verify, --json, actionable errors. ~190 LoC, ~5h. +6. ~~**CLI progress feedback**~~ → ✅ Delivered in v3.1.0 (M13 Bijou). Animated progress bars with throughput. +7. **Multi-backend storage** → Not planned. Git remotes serve as the transport layer by design. Adding S3/SFTP backends would dilute the "Git-native" identity. +8. **Compression algorithm selection** → Not on roadmap. CompressionPort + zstd adapter would cost ~120 LoC, ~6h. Medium priority. +9. **FUSE mount / partial restore** → Not planned. 
Niche for a CAS library. Would require ~500 LoC + platform-specific bindings. --- @@ -1978,329 +1604,9 @@ If that's what you want, nothing else does it. If it's not, the right tool proba --- -# M13 — Bijou (v3.1.0) ✅ -**Theme:** Beautiful terminal UI powered by `@flyingrobots/bijou`. Replace silent CLI operations with animated progress, and add an interactive vault dashboard for exploring stored assets. Depends on M9 Cockpit for the `--quiet` flag and event wiring foundation. - ---- - -## Task 13.1: Animated store/restore progress - -**User Story** -As a CLI user, I want a smooth animated progress bar with chunk counts and throughput when storing or restoring files, so I can see that the operation is working and estimate time remaining. - -**Requirements** -- R1: Add `@flyingrobots/bijou` and `@flyingrobots/bijou-node` as dependencies. -- R2: Wire `CasService` events (`chunk:stored`, `chunk:restored`) to a bijou `createAnimatedProgressBar()` with spring physics (preset: `gentle`). -- R3: Display gradient progress bar (theme `CYAN_MAGENTA`) with chunk counter (`78/193 chunks`) and throughput (`19.2 MiB/s`). -- R4: Show last-processed chunk digest and blob OID below the progress bar. -- R5: Progress renders to stderr; stdout reserved for structured output. -- R6: Graceful degradation: static counter in CI, plain text in pipe mode, no output with `--quiet`. - -**Acceptance Criteria** -- AC1: `git cas store` shows animated progress bar in TTY mode. -- AC2: `git cas restore` shows animated progress bar in TTY mode. -- AC3: CI mode (`CI=true`) falls back to static line-by-line progress. -- AC4: Pipe mode shows no progress output. -- AC5: `--quiet` suppresses all progress. - -**Scope** -- In scope: Progress bar for store and restore commands. -- Out of scope: Full TUI app, interactive elements, vault commands. - -**Est. Complexity (LoC)** -- Prod: ~80 -- Tests: ~30 -- Total: ~110 - -**Est. Human Working Hours** -- ~3h - -**Test Plan** -- Golden path: - - Store 5-chunk file → progress bar advances 5 times, final state shows 100%. - - Restore 5-chunk file → same. -- Edges: - - 0-chunk file (empty) → no progress bar shown. - - 1-chunk file → bar jumps to 100%. - - Non-TTY → static fallback or silent. - -**Definition of Done** -- DoD1: Animated progress bar visible during store/restore in interactive terminals. -- DoD2: Graceful degradation works across all four output modes. -- DoD3: No visual artifacts or leftover ANSI codes in non-TTY environments. - -**Blocking** -- Blocks: Task 13.2 (vault dashboard uses same bijou dependency) - -**Blocked By** -- Blocked by: Task 9.1 (CLI progress feedback foundation, `--quiet` flag) - ---- - -## Task 13.2: Vault dashboard — interactive TUI app - -**User Story** -As a developer managing multiple vault entries, I want an interactive terminal dashboard to browse entries, inspect manifests, and view encryption status without memorizing CLI flags. - -**Requirements** -- R1: Add `@flyingrobots/bijou-tui` as a dependency. -- R2: New subcommand: `git cas vault dashboard` (or `git cas vault ui`). -- R3: Full-screen TEA app with flexbox layout: entry list (left pane) + detail view (right pane). -- R4: Entry list shows slug, size (human-readable), chunk count, and badges for encryption/compression/merkle. -- R5: Detail view shows manifest anatomy: metadata, encryption config, compression, sub-manifests, and paginated chunk list. -- R6: Keyboard navigation: `j/k` or arrows to move, `enter` to expand, `/` to filter, `q` to quit. 
-- R7: Vault-level header showing encryption status, asset count, and vault ref. -- R8: Graceful degradation: static table output in CI/pipe mode (reuse Task 9.5 table formatting). - -**Acceptance Criteria** -- AC1: `git cas vault dashboard` launches interactive TUI in TTY mode. -- AC2: All vault entries listed with correct metadata. -- AC3: Selecting an entry shows full manifest detail. -- AC4: Filter narrows the list by slug substring. -- AC5: `q` or `ctrl-c` exits cleanly (restores terminal state). -- AC6: Non-TTY falls back to static vault list. - -**Scope** -- In scope: Read-only dashboard for browsing vault state. -- Out of scope: Mutating operations (store/restore/remove) from the dashboard. - -**Est. Complexity (LoC)** -- Prod: ~200 -- Tests: ~60 -- Total: ~260 - -**Est. Human Working Hours** -- ~7h - -**Test Plan** -- Golden path: - - Launch with 3 vault entries → all listed with correct badges. - - Select entry → detail pane populates with manifest data. - - Filter by substring → list narrows correctly. -- Edges: - - Empty vault → shows "No entries" message. - - Entry with Merkle sub-manifests → sub-manifest section rendered. - - Very long slug names → truncated with ellipsis. -- Failures: - - Vault ref doesn't exist → shows initialization prompt. - -**Definition of Done** -- DoD1: Interactive dashboard launches and renders vault state. -- DoD2: Navigation, selection, and filtering work. -- DoD3: Clean exit restores terminal state. -- DoD4: Static fallback works in non-TTY. - -**Blocking** -- Blocks: Task 13.4, Task 13.5 - -**Blocked By** -- Blocked by: Task 13.1 (bijou dependency), Task 9.5 (vault table formatting) - ---- - -## Task 13.3: Vault history timeline view - -**User Story** -As a developer, I want to see vault commit history as a visual timeline so I can understand how the vault has evolved over time. - -**Requirements** -- R1: New subcommand: `git cas vault history --pretty` (or integrate into dashboard as a tab). -- R2: Render vault commits using bijou `timeline()` component with status indicators. -- R3: Color-code by operation: green for `add`, yellow for `update`, red for `remove`, blue for `init`. -- R4: Show commit OID (short), operation, slug, and relative timestamp. -- R5: Paginate with bijou `paginator()` for long histories. -- R6: Static fallback: plain `git log --oneline` output (current behavior). - -**Acceptance Criteria** -- AC1: `git cas vault history --pretty` renders color-coded timeline in TTY mode. -- AC2: Operations correctly color-coded by parsing commit messages. -- AC3: Pagination works for vaults with >20 commits. -- AC4: Without `--pretty`, behavior unchanged (backward compatible). - -**Scope** -- In scope: Timeline rendering of vault history. -- Out of scope: Interactive revert, diff between history points. - -**Est. Complexity (LoC)** -- Prod: ~60 -- Tests: ~25 -- Total: ~85 - -**Est. Human Working Hours** -- ~2h - -**Test Plan** -- Golden path: - - Vault with 5 commits → 5 timeline entries, correctly colored. -- Edges: - - Empty vault (no commits) → "No history" message. - - 100+ commits → paginated display. - -**Definition of Done** -- DoD1: Timeline renders with color-coded operations. -- DoD2: Pagination functional. -- DoD3: `--pretty` flag documented in `--help`. 
- -**Blocking** -- Blocks: None - -**Blocked By** -- Blocked by: Task 13.1 (bijou dependency) - ---- - -## Task 13.4: Manifest anatomy view - -**User Story** -As a developer debugging storage issues, I want a rich visual breakdown of a manifest showing its structure, encryption metadata, compression settings, and chunk layout. - -**Requirements** -- R1: New subcommand: `git cas inspect --slug ` (or `--oid `). -- R2: Render manifest using bijou `box()`, `accordion()`, and `tree()` components. -- R3: Sections: metadata (slug, filename, size, version), encryption (algorithm, KDF params), compression, sub-manifests (if Merkle), and chunks. -- R4: Chunks section uses `paginator()` — show 20 chunks per page with index, size, digest (truncated), and blob OID. -- R5: Badges for encryption status, compression, Merkle, manifest version. -- R6: Static fallback: JSON dump (current `readManifest` behavior). - -**Acceptance Criteria** -- AC1: `git cas inspect --slug ` renders structured manifest view. -- AC2: Accordion sections expand/collapse. -- AC3: Chunk pagination works. -- AC4: Encrypted manifests show full KDF parameter breakdown. -- AC5: Merkle manifests show sub-manifest tree. - -**Scope** -- In scope: Read-only manifest inspection. -- Out of scope: Editing manifests, verifying integrity (that's `git cas verify`). - -**Est. Complexity (LoC)** -- Prod: ~70 -- Tests: ~30 -- Total: ~100 - -**Est. Human Working Hours** -- ~3h - -**Test Plan** -- Golden path: - - Inspect unencrypted v1 manifest → metadata + chunks displayed. - - Inspect encrypted v2 Merkle manifest → all sections populated. -- Edges: - - Empty manifest (0 chunks) → shows "No chunks" in chunks section. - - Very large manifest (1000+ chunks) → pagination handles cleanly. - -**Definition of Done** -- DoD1: Manifest anatomy renders with all sections. -- DoD2: Accordion expand/collapse works. -- DoD3: Chunk pagination works. - -**Blocking** -- Blocks: None - -**Blocked By** -- Blocked by: Task 13.2 (shared component patterns) - ---- - -## Task 13.5: Chunk heatmap visualization +# M13 — Bijou (v3.1.0) ✅ CLOSED -**User Story** -As a developer, I want a visual block map of chunks in a stored file so I can quickly see the storage layout, Merkle boundaries, and progress during operations. - -**Requirements** -- R1: Render a grid of `█` / `░` blocks, one per chunk, sized to terminal width. -- R2: Color via bijou `gradientText()` from start to end of file. -- R3: Show Merkle sub-manifest boundaries with `│` separators in the grid. -- R4: Legend showing chunk count, sub-manifest count, chunk size. -- R5: Integrate into `git cas inspect` as an optional `--heatmap` flag. -- R6: During store/restore (Task 13.1), optionally show filling heatmap instead of progress bar via `--heatmap` flag. - -**Acceptance Criteria** -- AC1: `git cas inspect --slug --heatmap` renders chunk grid. -- AC2: Gradient coloring spans the full grid. -- AC3: Merkle boundaries visually distinct. -- AC4: Grid reflows to terminal width. - -**Scope** -- In scope: Static heatmap for stored files. -- Out of scope: Live-updating heatmap during store/restore (stretch goal for R6). - -**Est. Complexity (LoC)** -- Prod: ~40 -- Tests: ~15 -- Total: ~55 - -**Est. Human Working Hours** -- ~2h - -**Test Plan** -- Golden path: - - 40-chunk file, 80-col terminal → 2 rows of 40 blocks. - - 2500-chunk Merkle file → blocks with boundary markers. -- Edges: - - 1-chunk file → single block. - - Terminal narrower than chunk count → wraps correctly. 
- -**Definition of Done** -- DoD1: Heatmap renders correctly for v1 and v2 manifests. -- DoD2: Gradient coloring works. -- DoD3: Terminal width adaptation works. - -**Blocking** -- Blocks: None - -**Blocked By** -- Blocked by: Task 13.2 (shared component patterns) - ---- - -## Task 13.6: Encryption info card - -**User Story** -As a security-conscious user, I want a clear visual summary of my vault's encryption configuration so I can verify the crypto parameters at a glance. - -**Requirements** -- R1: Render encryption details using bijou `box()` with labeled rows. -- R2: Show cipher, KDF algorithm, KDF parameters (iterations/cost/blockSize/parallelization), salt (truncated), and key length. -- R3: Status badge: `● locked` (red) when no key provided, `● unlocked` (green) when key resolved. -- R4: Integrate into vault dashboard header and `git cas inspect` encryption accordion. -- R5: Standalone via `git cas vault info --encryption`. - -**Acceptance Criteria** -- AC1: Encryption card renders all KDF parameters. -- AC2: Correct badge for locked/unlocked state. -- AC3: Works for both pbkdf2 and scrypt vaults. -- AC4: Non-encrypted vault → "No encryption configured" message. - -**Scope** -- In scope: Display-only encryption summary. -- Out of scope: Key verification, passphrase prompting. - -**Est. Complexity (LoC)** -- Prod: ~30 -- Tests: ~10 -- Total: ~40 - -**Est. Human Working Hours** -- ~1h - -**Test Plan** -- Golden path: - - PBKDF2 vault → shows iterations, salt, key length. - - Scrypt vault → shows cost, blockSize, parallelization. -- Edges: - - Non-encrypted vault → "No encryption" message. - -**Definition of Done** -- DoD1: Encryption card renders with correct parameters. -- DoD2: Badge reflects locked/unlocked state. -- DoD3: Both KDF algorithms handled. - -**Blocking** -- Blocks: None - -**Blocked By** -- Blocked by: Task 13.1 (bijou dependency) +All tasks completed (13.1–13.6). See [COMPLETED_TASKS.md](./COMPLETED_TASKS.md). --- diff --git a/STATUS.md b/STATUS.md new file mode 100644 index 00000000..d4677670 --- /dev/null +++ b/STATUS.md @@ -0,0 +1,109 @@ +# @git-stunts/cas — Project Status + +**Current version:** v4.0.0 (Conduit) +**Last release:** 2026-02-27 +**Test suite:** 567 tests (vitest) +**Runtimes:** Node.js 22.x, Bun, Deno + +--- + +## What's shipped + +| Version | Codename | Highlights | +|---------|----------|------------| +| v4.0.0 | Conduit | ObservabilityPort, `restoreStream()`, parallel chunk I/O, `concurrency` option | +| v3.1.0 | Bijou | Interactive vault dashboard, animated progress bars, `git cas inspect`, chunk heatmap | +| v3.0.0 | Vault | GC-safe ref-based storage (`refs/cas/vault`), slug-based addressing, vault CLI | +| v2.0.0 | Horizon | Compression (gzip), KDF (pbkdf2/scrypt), Merkle manifests | +| v1.x | — | Core CAS, AES-256-GCM encryption, fixed chunking, Git ODB persistence | + +--- + +## What's next + +Five open milestones remain. M8/M9 are quick wins; M10–M12 are larger features. + +### M8 — Spit Shine (~3h) +Code review polish. No new features. + +- [ ] **8.2** Extract shared crypto helpers to CryptoPort base class +- [ ] **8.3** README polish and architectural decision record (ADR-001) + +### M9 — Cockpit (~5h) +CLI improvements for CI/CD and operator workflows. 
+ +- [ ] **9.2** CLI `verify` command (`git cas verify --slug `) +- [ ] **9.3** CLI `--json` output mode (structured JSON for all commands) +- [ ] **9.4** CLI error handler DRY cleanup + actionable error messages +- [ ] **9.5** Vault list filtering (`--filter`) and table formatting + +### M10 — Hydra (~22h) +Content-defined chunking for dramatically better dedup on versioned files. + +- [ ] **10.1** Buzhash rolling hash + CDC chunking engine +- [ ] **10.2** ChunkingPort abstraction (FixedChunker + CdcChunker adapters) +- [ ] **10.3** CDC manifest metadata + backward compatibility +- [ ] **10.4** CDC benchmarks + dedup efficiency comparison + +### M11 — Locksmith (~20h) +Multi-recipient encryption via DEK/KEK envelope model. + +- [ ] **11.1** Envelope encryption (DEK/KEK model) +- [ ] **11.2** Recipient management API (addRecipient / removeRecipient) +- [ ] **11.3** Manifest schema for multi-recipient metadata +- [ ] **11.4** CLI multi-recipient support + +### M12 — Carousel (~13h) *(blocked by M11)* +Key rotation without re-encrypting data. + +- [ ] **12.1** Key rotation workflow (`rotateKey()`) +- [ ] **12.2** Key version tracking in manifest +- [ ] **12.3** CLI key rotation commands +- [ ] **12.4** Vault-level key rotation + +--- + +## Dependency graph + +``` +M8 Spit Shine ──────── (independent) +M9 Cockpit ─────────── (independent) +M10 Hydra ──────────── (independent) +M11 Locksmith ──────── (independent) + └──► M12 Carousel ── (needs M11) +``` + +--- + +## Backlog (unscheduled ideas) + +- Named vaults (`refs/cas/vaults/`) +- Export vault to archive +- Publish to working tree / branch +- Duplicate detection on store +- Repo scan / dedup advisor + +## Visions (researched, not committed) + +- **V1** Snapshot trees — directory-level store (~410 LoC, ~19h) +- **V2** Portable bundles — air-gap transfer (~340 LoC, ~15h) +- **V3** Manifest diff engine (~180 LoC, ~8h) +- **V4** CompressionPort — zstd, brotli, lz4 (~180 LoC, ~8h) +- **V5** Watch mode — continuous sync (~220 LoC, ~10h) +- **V6** Interactive passphrase prompt (~90 LoC, ~4h) + +## Known concerns + +| # | Issue | Severity | Summary | +|---|-------|----------|---------| +| C1 | Memory amplification | High | Encrypted/compressed restore buffers entire file | +| C2 | Orphaned blobs | Medium | STREAM_ERROR leaves unreferenced blobs in ODB | +| C3 | No chunk size cap | Medium | No upper bound on configured chunk size | +| C4 | Web Crypto buffering | Medium | Deno adapter silently buffers entire file | +| C5 | Passphrase exposure | High | `--vault-passphrase` visible in shell history | +| C6 | KDF no rate limit | Low | No brute-force detection on failed decryption | +| C7 | GCM nonce collision | Low | 96-bit random nonce, safe to ~2^32 encryptions | + +--- + +*Full task cards: [ROADMAP.md](./ROADMAP.md) | Completed: [COMPLETED_TASKS.md](./COMPLETED_TASKS.md) | Superseded: [GRAVEYARD.md](./GRAVEYARD.md)* diff --git a/docs/ADR-001-vault-in-facade.md b/docs/ADR-001-vault-in-facade.md new file mode 100644 index 00000000..607ea7d3 --- /dev/null +++ b/docs/ADR-001-vault-in-facade.md @@ -0,0 +1,34 @@ +# ADR-001: Vault as a Separate Domain Service Composed by the Facade + +## Status + +Accepted + +## Context + +`CasService` handles single-asset I/O: chunking, encryption, tree creation, and restore. Vault handles multi-asset lifecycle: ref indexing, slug management, history, and GC safety. Both services require the same three ports (persistence, codec, crypto) plus observability. 
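+
+A minimal sketch of that shared-port shape (constructor signatures assumed for illustration, not verbatim):
+
+```js
+// Hypothetical wiring: both domain services consume the same port instances.
+const ports = { persistence, codec, crypto, observability };
+const cas = new CasService({ ...ports });
+const vault = new VaultService({ ...ports });
+```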
+ +The question is whether vault logic should live inside `CasService` or as a separate domain service. + +## Decision + +Vault logic lives in `VaultService`, a separate domain service. `ContentAddressableStore` (the facade) composes both `CasService` and `VaultService`, wiring them to shared port instances. The facade exposes a unified API and provides a `getVaultService()` accessor for advanced use cases. + +## Rationale + +- **Single Responsibility**: CasService owns content-addressed storage mechanics (chunking, crypto, trees). VaultService owns lifecycle management (ref indexing, slug resolution, GC safety). These are distinct domain concerns. +- **Independent testability**: Each service can be unit-tested in isolation with mocked ports. No need to set up vault infrastructure to test encryption, or vice versa. +- **Independent injectability**: Consumers who only need CAS operations (e.g., a build tool storing artifacts) can instantiate `CasService` directly without vault overhead. +- **Facade simplicity**: The facade provides the "batteries-included" developer experience. Users get one import, one constructor, and a flat method surface. + +## Alternatives Rejected + +1. **Merge vault into CasService** — Bloats the core service with ref management, slug indexing, and history tracking. Mixes content mechanics with lifecycle concerns. Makes CasService harder to test and reason about. + +2. **Separate facade per service** — Users would manage two objects (`cas` and `vault`) and wire ports manually. Doubles the setup boilerplate and creates coupling at the call site instead of inside the library. + +## Consequences + +- The facade has 7 vault pass-through methods (`vaultInit`, `vaultStore`, `vaultRestore`, `vaultList`, `vaultInfo`, `vaultRemove`, `vaultHistory`). This is acceptable given the flat API benefit. +- Users must go through the facade or explicitly create `VaultService` — there is no implicit vault available on a bare `CasService`. +- VaultService can evolve independently (e.g., named vaults, cross-repo sync) without touching CasService internals. diff --git a/src/domain/services/CasService.js b/src/domain/services/CasService.js index 6cc284f0..eb96e40c 100644 --- a/src/domain/services/CasService.js +++ b/src/domain/services/CasService.js @@ -146,29 +146,6 @@ export default class CasService { } } - /** - * Validates that an encryption key is a 32-byte Buffer or Uint8Array. - * @private - * @param {*} key - * @throws {CasError} INVALID_KEY_TYPE if key is not a Buffer - * @throws {CasError} INVALID_KEY_LENGTH if key is not 32 bytes - */ - _validateKey(key) { - if (!Buffer.isBuffer(key) && !(key instanceof Uint8Array)) { - throw new CasError( - 'Encryption key must be a Buffer or Uint8Array', - 'INVALID_KEY_TYPE', - ); - } - if (key.length !== 32) { - throw new CasError( - `Encryption key must be 32 bytes, got ${key.length}`, - 'INVALID_KEY_LENGTH', - { expected: 32, actual: key.length }, - ); - } - } - /** * Encrypts a buffer using AES-256-GCM. * @param {Object} options @@ -178,7 +155,7 @@ export default class CasService { * @throws {CasError} INVALID_KEY_TYPE | INVALID_KEY_LENGTH if the key is invalid. 
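   * Validation is delegated to the injected crypto adapter, so runtime-specific
   * key rules apply (see CryptoPort#_validateKey).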
*/ async encrypt({ buffer, key }) { - this._validateKey(key); + this.crypto._validateKey(key); return await this.crypto.encryptBuffer(buffer, key); } @@ -272,7 +249,7 @@ export default class CasService { } if (encryptionKey) { - this._validateKey(encryptionKey); + this.crypto._validateKey(encryptionKey); } const manifestData = { slug, filename, size: 0, chunks: [] }; @@ -446,7 +423,7 @@ export default class CasService { ); } if (encryptionKey) { - this._validateKey(encryptionKey); + this.crypto._validateKey(encryptionKey); } else if (manifest.encryption?.encrypted) { throw new CasError('Encryption key required to restore encrypted content', 'MISSING_KEY'); } diff --git a/src/infrastructure/adapters/BunCryptoAdapter.js b/src/infrastructure/adapters/BunCryptoAdapter.js index 29de6056..c5b8e743 100644 --- a/src/infrastructure/adapters/BunCryptoAdapter.js +++ b/src/infrastructure/adapters/BunCryptoAdapter.js @@ -27,20 +27,20 @@ export default class BunCryptoAdapter extends CryptoPort { /** @override */ async encryptBuffer(buffer, key) { - this.#validateKey(key); + this._validateKey(key); const nonce = this.randomBytes(12); const cipher = createCipheriv('aes-256-gcm', key, nonce); const enc = Buffer.concat([cipher.update(buffer), cipher.final()]); const tag = cipher.getAuthTag(); return { buf: enc, - meta: this.#buildMeta(nonce, tag), + meta: this._buildMeta(nonce.toString('base64'), tag.toString('base64')), }; } /** @override */ async decryptBuffer(buffer, key, meta) { - this.#validateKey(key); + this._validateKey(key); const nonce = Buffer.from(meta.nonce, 'base64'); const tag = Buffer.from(meta.tag, 'base64'); const decipher = createDecipheriv('aes-256-gcm', key, nonce); @@ -50,7 +50,7 @@ export default class BunCryptoAdapter extends CryptoPort { /** @override */ createEncryptionStream(key) { - this.#validateKey(key); + this._validateKey(key); const nonce = this.randomBytes(12); const cipher = createCipheriv('aes-256-gcm', key, nonce); let streamFinalized = false; @@ -77,77 +77,21 @@ export default class BunCryptoAdapter extends CryptoPort { ); } const tag = cipher.getAuthTag(); - return this.#buildMeta(nonce, tag); + return this._buildMeta(nonce.toString('base64'), tag.toString('base64')); }; return { encrypt, finalize }; } /** @override */ - async deriveKey({ - passphrase, - salt, - algorithm = 'pbkdf2', - iterations = 100_000, - cost = 16384, - blockSize = 8, - parallelization = 1, - keyLength = 32, - }) { - const saltBuf = salt || this.randomBytes(32); - let key; - const params = { algorithm, salt: Buffer.from(saltBuf).toString('base64'), keyLength }; - + async _doDeriveKey(passphrase, saltBuf, { algorithm, iterations, cost, blockSize, parallelization, keyLength }) { if (algorithm === 'pbkdf2') { - key = await promisify(pbkdf2)(passphrase, saltBuf, iterations, keyLength, 'sha512'); - params.iterations = iterations; - } else if (algorithm === 'scrypt') { - key = await promisify(scrypt)(passphrase, saltBuf, keyLength, { - N: cost, r: blockSize, p: parallelization, - }); - params.cost = cost; - params.blockSize = blockSize; - params.parallelization = parallelization; - } else { - throw new Error(`Unsupported KDF algorithm: ${algorithm}`); - } - - return { key, salt: Buffer.from(saltBuf), params }; - } - - /** - * Validates that a key is a 32-byte Buffer or Uint8Array. 
- * @param {Buffer|Uint8Array} key - * @throws {CasError} INVALID_KEY_TYPE | INVALID_KEY_LENGTH - */ - #validateKey(key) { - if (!Buffer.isBuffer(key) && !(key instanceof Uint8Array)) { - throw new CasError( - 'Encryption key must be a Buffer or Uint8Array', - 'INVALID_KEY_TYPE', - ); + return promisify(pbkdf2)(passphrase, saltBuf, iterations, keyLength, 'sha512'); } - if (key.length !== 32) { - throw new CasError( - `Encryption key must be 32 bytes, got ${key.length}`, - 'INVALID_KEY_LENGTH', - { expected: 32, actual: key.length }, - ); - } - } - - /** - * Builds the encryption metadata object. - * @param {Buffer|Uint8Array} nonce - 12-byte AES-GCM nonce. - * @param {Buffer} tag - 16-byte GCM authentication tag. - * @returns {{ algorithm: string, nonce: string, tag: string, encrypted: boolean }} - */ - #buildMeta(nonce, tag) { - return { - algorithm: 'aes-256-gcm', - nonce: Buffer.from(nonce).toString('base64'), - tag: tag.toString('base64'), - encrypted: true, - }; + return promisify(scrypt)(passphrase, saltBuf, keyLength, { + N: cost, + r: blockSize, + p: parallelization, + }); } } diff --git a/src/infrastructure/adapters/NodeCryptoAdapter.js b/src/infrastructure/adapters/NodeCryptoAdapter.js index c6383229..62365c9f 100644 --- a/src/infrastructure/adapters/NodeCryptoAdapter.js +++ b/src/infrastructure/adapters/NodeCryptoAdapter.js @@ -19,14 +19,14 @@ export default class NodeCryptoAdapter extends CryptoPort { /** @override */ encryptBuffer(buffer, key) { - this.#validateKey(key); + this._validateKey(key); const nonce = randomBytes(12); const cipher = createCipheriv('aes-256-gcm', key, nonce); const enc = Buffer.concat([cipher.update(buffer), cipher.final()]); const tag = cipher.getAuthTag(); return { buf: enc, - meta: this.#buildMeta(nonce, tag), + meta: this._buildMeta(nonce.toString('base64'), tag.toString('base64')), }; } @@ -41,7 +41,7 @@ export default class NodeCryptoAdapter extends CryptoPort { /** @override */ createEncryptionStream(key) { - this.#validateKey(key); + this._validateKey(key); const nonce = randomBytes(12); const cipher = createCipheriv('aes-256-gcm', key, nonce); @@ -60,56 +60,19 @@ export default class NodeCryptoAdapter extends CryptoPort { const finalize = () => { const tag = cipher.getAuthTag(); - return this.#buildMeta(nonce, tag); + return this._buildMeta(nonce.toString('base64'), tag.toString('base64')); }; return { encrypt, finalize }; } - /** @override */ - async deriveKey({ - passphrase, - salt, - algorithm = 'pbkdf2', - iterations = 100_000, - cost = 16384, - blockSize = 8, - parallelization = 1, - keyLength = 32, - }) { - const saltBuf = salt || randomBytes(32); - let key; - const params = { - algorithm, - salt: Buffer.from(saltBuf).toString('base64'), - keyLength, - }; - - if (algorithm === 'pbkdf2') { - key = await promisify(pbkdf2)(passphrase, saltBuf, iterations, keyLength, 'sha512'); - params.iterations = iterations; - } else if (algorithm === 'scrypt') { - key = await promisify(scrypt)(passphrase, saltBuf, keyLength, { - N: cost, - r: blockSize, - p: parallelization, - }); - params.cost = cost; - params.blockSize = blockSize; - params.parallelization = parallelization; - } else { - throw new Error(`Unsupported KDF algorithm: ${algorithm}`); - } - - return { key, salt: Buffer.from(saltBuf), params }; - } - /** - * Validates that a key is a 32-byte Buffer. + * Validates that a key is a 32-byte Buffer (strict Node.js check). 
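+   * Unlike the base-class check, a plain Uint8Array key is rejected here.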
+ * @override * @param {Buffer} key * @throws {CasError} INVALID_KEY_TYPE | INVALID_KEY_LENGTH */ - #validateKey(key) { + _validateKey(key) { if (!Buffer.isBuffer(key)) { throw new CasError( 'Encryption key must be a Buffer', @@ -125,18 +88,15 @@ export default class NodeCryptoAdapter extends CryptoPort { } } - /** - * Builds the encryption metadata object. - * @param {Buffer} nonce - 12-byte AES-GCM nonce. - * @param {Buffer} tag - 16-byte GCM authentication tag. - * @returns {{ algorithm: string, nonce: string, tag: string, encrypted: boolean }} - */ - #buildMeta(nonce, tag) { - return { - algorithm: 'aes-256-gcm', - nonce: nonce.toString('base64'), - tag: tag.toString('base64'), - encrypted: true, - }; + /** @override */ + async _doDeriveKey(passphrase, saltBuf, { algorithm, iterations, cost, blockSize, parallelization, keyLength }) { + if (algorithm === 'pbkdf2') { + return promisify(pbkdf2)(passphrase, saltBuf, iterations, keyLength, 'sha512'); + } + return promisify(scrypt)(passphrase, saltBuf, keyLength, { + N: cost, + r: blockSize, + p: parallelization, + }); } } diff --git a/src/infrastructure/adapters/WebCryptoAdapter.js b/src/infrastructure/adapters/WebCryptoAdapter.js index 02b338f2..ec517a8f 100644 --- a/src/infrastructure/adapters/WebCryptoAdapter.js +++ b/src/infrastructure/adapters/WebCryptoAdapter.js @@ -28,10 +28,10 @@ export default class WebCryptoAdapter extends CryptoPort { /** @override */ async encryptBuffer(buffer, key) { - this.#validateKey(key); + this._validateKey(key); const nonce = this.randomBytes(12); const cryptoKey = await this.#importKey(key); - + // AES-GCM in Web Crypto includes the tag at the end of the ciphertext const encrypted = await globalThis.crypto.subtle.encrypt( { name: 'AES-GCM', iv: nonce }, @@ -46,7 +46,7 @@ export default class WebCryptoAdapter extends CryptoPort { return { buf: Buffer.from(ciphertext), - meta: this.#buildMeta(nonce, tag), + meta: this._buildMeta(this.#toBase64(nonce), this.#toBase64(tag)), }; } @@ -75,13 +75,13 @@ export default class WebCryptoAdapter extends CryptoPort { /** @override */ createEncryptionStream(key) { - this.#validateKey(key); + this._validateKey(key); const nonce = this.randomBytes(12); const cryptoKeyPromise = this.#importKey(key); - + // Web Crypto doesn't have a native streaming AES-GCM API like Node // We have to buffer for the one-shot call because GCM tag is computed over the whole thing. - // NOTE: This limits the "stream" to memory capacity, matching the project's + // NOTE: This limits the "stream" to memory capacity, matching the project's // current CasService.restore limitation. const chunks = []; let finalTag = null; @@ -89,11 +89,11 @@ export default class WebCryptoAdapter extends CryptoPort { const encrypt = async function* (source) { for await (const chunk of source) { chunks.push(chunk); - // We can't yield partial encrypted chunks for GCM in Web Crypto - // without complex chunk-chaining which would break compatibility + // We can't yield partial encrypted chunks for GCM in Web Crypto + // without complex chunk-chaining which would break compatibility // with the Node adapter's single-stream GCM. 
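+        // Instead, the plaintext is accumulated here and emitted as a single
+        // ciphertext (plus tag) once the source is exhausted.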
} - + const buffer = Buffer.concat(chunks); const cryptoKey = await cryptoKeyPromise; const encrypted = await globalThis.crypto.subtle.encrypt( @@ -111,53 +111,33 @@ export default class WebCryptoAdapter extends CryptoPort { }; const finalize = () => { - return this.#buildMeta(nonce, finalTag); + return this._buildMeta(this.#toBase64(nonce), this.#toBase64(finalTag)); }; return { encrypt, finalize }; } /** @override */ - async deriveKey({ - passphrase, - salt, - algorithm = 'pbkdf2', - iterations = 100_000, - cost = 16384, - blockSize = 8, - parallelization = 1, - keyLength = 32, - }) { - const saltBuf = salt || this.randomBytes(32); - const params = { algorithm, salt: this.#toBase64(saltBuf), keyLength }; - - const opts = { passphrase, saltBuf, iterations, cost, blockSize, parallelization, keyLength, params }; - let key; + async _doDeriveKey(passphrase, saltBuf, { algorithm, iterations, cost, blockSize, parallelization, keyLength }) { if (algorithm === 'pbkdf2') { - key = await this.#derivePbkdf2(opts); - } else if (algorithm === 'scrypt') { - key = await this.#deriveScrypt(opts); - } else { - throw new Error(`Unsupported KDF algorithm: ${algorithm}`); + return this.#derivePbkdf2(passphrase, saltBuf, { iterations, keyLength }); } - - return { key: Buffer.from(key), salt: Buffer.from(saltBuf), params }; + return this.#deriveScrypt(passphrase, saltBuf, { cost, blockSize, parallelization, keyLength }); } - async #derivePbkdf2({ passphrase, saltBuf, iterations, keyLength, params }) { + async #derivePbkdf2(passphrase, saltBuf, params) { const enc = new globalThis.TextEncoder(); const baseKey = await globalThis.crypto.subtle.importKey( 'raw', enc.encode(passphrase), 'PBKDF2', false, ['deriveBits'], ); const bits = await globalThis.crypto.subtle.deriveBits( - { name: 'PBKDF2', salt: saltBuf, iterations, hash: 'SHA-512' }, - baseKey, keyLength * 8, + { name: 'PBKDF2', salt: saltBuf, iterations: params.iterations, hash: 'SHA-512' }, + baseKey, params.keyLength * 8, ); - params.iterations = iterations; return Buffer.from(bits); } - async #deriveScrypt({ passphrase, saltBuf, cost, blockSize, parallelization, keyLength, params }) { + async #deriveScrypt(passphrase, saltBuf, params) { let scryptCb; let promisifyFn; try { @@ -166,13 +146,9 @@ export default class WebCryptoAdapter extends CryptoPort { } catch { throw new Error('scrypt KDF requires a Node.js-compatible runtime (node:crypto unavailable)'); } - const key = await promisifyFn(scryptCb)(passphrase, saltBuf, keyLength, { - N: cost, r: blockSize, p: parallelization, + return promisifyFn(scryptCb)(passphrase, saltBuf, params.keyLength, { + N: params.cost, r: params.blockSize, p: params.parallelization, }); - params.cost = cost; - params.blockSize = blockSize; - params.parallelization = parallelization; - return key; } /** @@ -190,42 +166,6 @@ export default class WebCryptoAdapter extends CryptoPort { ); } - /** - * Validates that a key is a 32-byte Buffer or Uint8Array. - * @param {Buffer|Uint8Array} key - * @throws {CasError} INVALID_KEY_TYPE | INVALID_KEY_LENGTH - */ - #validateKey(key) { - if (!globalThis.Buffer?.isBuffer(key) && !(key instanceof Uint8Array)) { - throw new CasError( - 'Encryption key must be a Buffer or Uint8Array', - 'INVALID_KEY_TYPE', - ); - } - if (key.length !== 32) { - throw new CasError( - `Encryption key must be 32 bytes, got ${key.length}`, - 'INVALID_KEY_LENGTH', - { expected: 32, actual: key.length }, - ); - } - } - - /** - * Builds the encryption metadata object. 
- * @param {Uint8Array} nonce - 12-byte AES-GCM nonce. - * @param {Uint8Array} tag - 16-byte GCM authentication tag. - * @returns {{ algorithm: string, nonce: string, tag: string, encrypted: boolean }} - */ - #buildMeta(nonce, tag) { - return { - algorithm: 'aes-256-gcm', - nonce: this.#toBase64(nonce), - tag: this.#toBase64(tag), - encrypted: true, - }; - } - /** * Encodes binary data to base64, using Buffer when available. * @param {Uint8Array} buf @@ -249,4 +189,4 @@ export default class WebCryptoAdapter extends CryptoPort { } return Uint8Array.from(globalThis.atob(str), c => c.charCodeAt(0)); } -} \ No newline at end of file +} diff --git a/src/ports/CryptoPort.js b/src/ports/CryptoPort.js index e985b853..fcddef76 100644 --- a/src/ports/CryptoPort.js +++ b/src/ports/CryptoPort.js @@ -1,3 +1,5 @@ +import CasError from '../domain/errors/CasError.js'; + /** * Abstract port for cryptographic operations (hashing, random bytes, AES-256-GCM). * @abstract @@ -54,6 +56,10 @@ export default class CryptoPort { /** * Derives an encryption key from a passphrase using a KDF. + * + * Normalizes parameters (defaults, salt generation), then delegates to the + * adapter-specific `_doDeriveKey()` template method. + * * @param {Object} options * @param {string} options.passphrase - The passphrase to derive a key from. * @param {Buffer} [options.salt] - Salt for the KDF (random if omitted). @@ -65,7 +71,92 @@ export default class CryptoPort { * @param {number} [options.keyLength=32] - Derived key length in bytes. * @returns {Promise<{ key: Buffer, salt: Buffer, params: { algorithm: string, salt: string, iterations?: number, cost?: number, blockSize?: number, parallelization?: number, keyLength: number } }>} */ - deriveKey(_options) { + async deriveKey({ + passphrase, + salt, + algorithm = 'pbkdf2', + iterations = 100_000, + cost = 16384, + blockSize = 8, + parallelization = 1, + keyLength = 32, + }) { + const saltBuf = salt || this.randomBytes(32); + + const params = { + algorithm, + salt: Buffer.from(saltBuf).toString('base64'), + keyLength, + }; + + if (algorithm === 'pbkdf2') { + params.iterations = iterations; + } else if (algorithm === 'scrypt') { + params.cost = cost; + params.blockSize = blockSize; + params.parallelization = parallelization; + } else { + throw new Error(`Unsupported KDF algorithm: ${algorithm}`); + } + + const key = await this._doDeriveKey(passphrase, saltBuf, { + algorithm, + iterations, + cost, + blockSize, + parallelization, + keyLength, + }); + + return { key: Buffer.from(key), salt: Buffer.from(saltBuf), params }; + } + + /** + * Adapter-specific key derivation. Override in subclasses. + * @abstract + * @param {string} passphrase + * @param {Buffer|Uint8Array} saltBuf + * @param {Object} params - Normalized KDF parameters. + * @returns {Promise} Derived key bytes. + */ + async _doDeriveKey(_passphrase, _saltBuf, _params) { throw new Error('Not implemented'); } + + /** + * Validates that a key is a 32-byte Buffer or Uint8Array. 
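+   * Shared by all adapters; subclasses may override with a stricter check
+   * (NodeCryptoAdapter, for example, accepts only Buffer keys).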
+ * @param {Buffer|Uint8Array} key + * @throws {CasError} INVALID_KEY_TYPE if key is not a Buffer or Uint8Array + * @throws {CasError} INVALID_KEY_LENGTH if key is not 32 bytes + */ + _validateKey(key) { + if (!globalThis.Buffer?.isBuffer(key) && !(key instanceof Uint8Array)) { + throw new CasError( + 'Encryption key must be a Buffer or Uint8Array', + 'INVALID_KEY_TYPE', + ); + } + if (key.length !== 32) { + throw new CasError( + `Encryption key must be 32 bytes, got ${key.length}`, + 'INVALID_KEY_LENGTH', + { expected: 32, actual: key.length }, + ); + } + } + + /** + * Builds the encryption metadata object from base64-encoded nonce and tag. + * @param {string} nonce64 - Base64-encoded 12-byte AES-GCM nonce. + * @param {string} tag64 - Base64-encoded 16-byte GCM authentication tag. + * @returns {{ algorithm: string, nonce: string, tag: string, encrypted: boolean }} + */ + _buildMeta(nonce64, tag64) { + return { + algorithm: 'aes-256-gcm', + nonce: nonce64, + tag: tag64, + encrypted: true, + }; + } } diff --git a/test/unit/domain/services/CasService.key-validation.test.js b/test/unit/domain/services/CasService.key-validation.test.js index 7f622abf..dd828c9e 100644 --- a/test/unit/domain/services/CasService.key-validation.test.js +++ b/test/unit/domain/services/CasService.key-validation.test.js @@ -108,7 +108,7 @@ describe('CasService key validation – encrypt() invalid key type', () => { await service.encrypt({ buffer: plaintext, key }); } catch (err) { expect(err.code).toBe('INVALID_KEY_TYPE'); - expect(err.message).toContain('must be a Buffer or Uint8Array'); + expect(err.message).toContain('must be a Buffer'); } }); @@ -119,7 +119,7 @@ describe('CasService key validation – encrypt() invalid key type', () => { await service.encrypt({ buffer: plaintext, key }); } catch (err) { expect(err.code).toBe('INVALID_KEY_TYPE'); - expect(err.message).toContain('must be a Buffer or Uint8Array'); + expect(err.message).toContain('must be a Buffer'); } }); @@ -130,7 +130,7 @@ describe('CasService key validation – encrypt() invalid key type', () => { await service.encrypt({ buffer: plaintext, key }); } catch (err) { expect(err.code).toBe('INVALID_KEY_TYPE'); - expect(err.message).toContain('must be a Buffer or Uint8Array'); + expect(err.message).toContain('must be a Buffer'); } }); }); diff --git a/test/unit/ports/CryptoPort.test.js b/test/unit/ports/CryptoPort.test.js new file mode 100644 index 00000000..43058825 --- /dev/null +++ b/test/unit/ports/CryptoPort.test.js @@ -0,0 +1,169 @@ +import { describe, it, expect, vi } from 'vitest'; +import { randomBytes } from 'node:crypto'; +import CryptoPort from '../../../src/ports/CryptoPort.js'; + +describe('CryptoPort – abstract methods', () => { + const port = new CryptoPort(); + + it('sha256() throws Not implemented', () => { + expect(() => port.sha256(Buffer.alloc(0))).toThrow('Not implemented'); + }); + + it('randomBytes() throws Not implemented', () => { + expect(() => port.randomBytes(16)).toThrow('Not implemented'); + }); + + it('encryptBuffer() throws Not implemented', () => { + expect(() => port.encryptBuffer(Buffer.alloc(0), Buffer.alloc(32))).toThrow('Not implemented'); + }); + + it('decryptBuffer() throws Not implemented', () => { + expect(() => port.decryptBuffer(Buffer.alloc(0), Buffer.alloc(32), {})).toThrow('Not implemented'); + }); + + it('createEncryptionStream() throws Not implemented', () => { + expect(() => port.createEncryptionStream(Buffer.alloc(32))).toThrow('Not implemented'); + }); + + it('_doDeriveKey() throws Not implemented', async 
() => { + await expect(port._doDeriveKey('pass', Buffer.alloc(32), {})).rejects.toThrow('Not implemented'); + }); +}); + +describe('CryptoPort._validateKey()', () => { + const port = new CryptoPort(); + + it('accepts a 32-byte Buffer', () => { + expect(() => port._validateKey(randomBytes(32))).not.toThrow(); + }); + + it('accepts a 32-byte Uint8Array', () => { + expect(() => port._validateKey(new Uint8Array(32))).not.toThrow(); + }); + + it('throws INVALID_KEY_TYPE for a string', () => { + expect(() => port._validateKey('not-a-buffer')).toThrow('Buffer or Uint8Array'); + try { port._validateKey('not-a-buffer'); } catch (err) { + expect(err.code).toBe('INVALID_KEY_TYPE'); + } + }); + + it('throws INVALID_KEY_TYPE for a number', () => { + expect(() => port._validateKey(42)).toThrow('Buffer or Uint8Array'); + }); + + it('throws INVALID_KEY_LENGTH for wrong length Buffer', () => { + expect(() => port._validateKey(randomBytes(16))).toThrow('32 bytes'); + try { port._validateKey(randomBytes(16)); } catch (err) { + expect(err.code).toBe('INVALID_KEY_LENGTH'); + expect(err.meta).toEqual({ expected: 32, actual: 16 }); + } + }); + + it('throws INVALID_KEY_LENGTH for wrong length Uint8Array', () => { + expect(() => port._validateKey(new Uint8Array(64))).toThrow('32 bytes'); + }); +}); + +describe('CryptoPort._buildMeta()', () => { + const port = new CryptoPort(); + + it('returns correct shape with base64 strings', () => { + const nonce64 = Buffer.from('test-nonce!!').toString('base64'); + const tag64 = Buffer.from('test-tag-value!!').toString('base64'); + const meta = port._buildMeta(nonce64, tag64); + + expect(meta).toEqual({ + algorithm: 'aes-256-gcm', + nonce: nonce64, + tag: tag64, + encrypted: true, + }); + }); +}); + +describe('CryptoPort.deriveKey() – pbkdf2', () => { + it('normalizes params and calls _doDeriveKey', async () => { + const port = new CryptoPort(); + const salt = randomBytes(32); + const fakeKey = randomBytes(32); + port.randomBytes = vi.fn().mockReturnValue(salt); + port._doDeriveKey = vi.fn().mockResolvedValue(fakeKey); + + const result = await port.deriveKey({ passphrase: 'test' }); + + expect(port._doDeriveKey).toHaveBeenCalledWith('test', salt, { + algorithm: 'pbkdf2', + iterations: 100_000, + cost: 16384, + blockSize: 8, + parallelization: 1, + keyLength: 32, + }); + expect(result.key).toEqual(Buffer.from(fakeKey)); + expect(result.salt).toEqual(Buffer.from(salt)); + expect(result.params).toEqual({ + algorithm: 'pbkdf2', + salt: Buffer.from(salt).toString('base64'), + keyLength: 32, + iterations: 100_000, + }); + }); +}); + +describe('CryptoPort.deriveKey() – scrypt', () => { + it('normalizes params and calls _doDeriveKey', async () => { + const port = new CryptoPort(); + const salt = randomBytes(32); + const fakeKey = randomBytes(32); + port.randomBytes = vi.fn().mockReturnValue(salt); + port._doDeriveKey = vi.fn().mockResolvedValue(fakeKey); + + const result = await port.deriveKey({ + passphrase: 'test', + algorithm: 'scrypt', + cost: 8192, + }); + + expect(port._doDeriveKey).toHaveBeenCalledWith('test', salt, { + algorithm: 'scrypt', + iterations: 100_000, + cost: 8192, + blockSize: 8, + parallelization: 1, + keyLength: 32, + }); + expect(result.params).toEqual({ + algorithm: 'scrypt', + salt: Buffer.from(salt).toString('base64'), + keyLength: 32, + cost: 8192, + blockSize: 8, + parallelization: 1, + }); + }); +}); + +describe('CryptoPort.deriveKey() – edge cases', () => { + it('uses provided salt instead of generating one', async () => { + const port = new CryptoPort(); 
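+    // randomBytes is stubbed with no return value; the assertions below
+    // prove deriveKey never falls back to generating a salt.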
+ const salt = randomBytes(32); + const fakeKey = randomBytes(32); + port.randomBytes = vi.fn(); + port._doDeriveKey = vi.fn().mockResolvedValue(fakeKey); + + await port.deriveKey({ passphrase: 'test', salt }); + + expect(port.randomBytes).not.toHaveBeenCalled(); + expect(port._doDeriveKey).toHaveBeenCalledWith('test', salt, expect.any(Object)); + }); + + it('throws on unsupported algorithm', async () => { + const port = new CryptoPort(); + port.randomBytes = vi.fn().mockReturnValue(randomBytes(32)); + + await expect( + port.deriveKey({ passphrase: 'test', algorithm: 'argon2' }), + ).rejects.toThrow('Unsupported KDF algorithm: argon2'); + }); +}); From e1213fe2bb337b09fd850dd3fb9706a04226c663 Mon Sep 17 00:00:00 2001 From: James Ross Date: Fri, 27 Feb 2026 19:12:32 -0800 Subject: [PATCH 2/6] =?UTF-8?q?feat:=20M9=20Cockpit=20=E2=80=94=20runActio?= =?UTF-8?q?n,=20--json=20mode,=20verify=20command,=20vault=20list=20format?= =?UTF-8?q?ting?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit - Add bin/actions.js with runAction() error wrapper and actionable HINTS - Replace all 10 identical catch blocks with runAction() - Add global --json flag with structured JSON output for 8 commands - Add verify command for integrity checking (exit 0/1) - Add vault list --filter with glob matching and TTY-aware table formatting - Convert validateRestoreFlags and vault history validation to throw --- bin/actions.js | 54 +++++ bin/git-cas.js | 349 ++++++++++++++++--------------- bin/ui/vault-list.js | 60 ++++++ test/unit/cli/actions.test.js | 120 +++++++++++ test/unit/cli/vault-list.test.js | 94 +++++++++ 5 files changed, 512 insertions(+), 165 deletions(-) create mode 100644 bin/actions.js create mode 100644 bin/ui/vault-list.js create mode 100644 test/unit/cli/actions.test.js create mode 100644 test/unit/cli/vault-list.test.js diff --git a/bin/actions.js b/bin/actions.js new file mode 100644 index 00000000..a52a6804 --- /dev/null +++ b/bin/actions.js @@ -0,0 +1,54 @@ +/** + * CLI error handler — wraps command actions with structured error output. + */ + +const HINTS = { + MISSING_KEY: 'Provide --key-file or --vault-passphrase', + MANIFEST_NOT_FOUND: 'Verify the tree OID contains a manifest', + VAULT_ENTRY_NOT_FOUND: "Run 'git cas vault list' to see available entries", + VAULT_ENTRY_EXISTS: 'Use --force to overwrite', + INTEGRITY_ERROR: 'Check that the correct key or passphrase was used', +}; + +/** + * Format and write an error to stderr. + * + * @param {Error} err + * @param {boolean} json - Whether to output JSON. + */ +function writeError(err, json) { + if (json) { + const obj = { error: err.message }; + if (typeof err.code === 'string') { + obj.code = err.code; + } + process.stderr.write(`${JSON.stringify(obj)}\n`); + } else { + const prefix = typeof err.code === 'string' ? `error [${err.code}]: ` : 'error: '; + process.stderr.write(`${prefix}${err.message}\n`); + const hint = typeof err.code === 'string' ? HINTS[err.code] : undefined; + if (hint) { + process.stderr.write(`hint: ${hint}\n`); + } + } +} + +/** + * Wrap a command action with structured error handling. + * + * @param {Function} fn - The async action function. + * @param {Function} getJson - Lazy getter for --json flag value. + * @returns {Function} Wrapped action. 
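+ *
+ * @example
+ * // Illustrative only: `handle` stands in for any command body.
+ * command.action(runAction((file, opts) => handle(file, opts), () => program.opts().json));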
+ */ +export function runAction(fn, getJson) { + return async (...args) => { + try { + await fn(...args); + } catch (err) { + writeError(err, getJson()); + process.exit(1); + } + }; +} + +export { writeError, HINTS }; diff --git a/bin/git-cas.js b/bin/git-cas.js index f1385814..041f4497 100755 --- a/bin/git-cas.js +++ b/bin/git-cas.js @@ -10,12 +10,17 @@ import { renderEncryptionCard } from './ui/encryption-card.js'; import { renderHistoryTimeline } from './ui/history-timeline.js'; import { renderManifestView } from './ui/manifest-view.js'; import { renderHeatmap } from './ui/heatmap.js'; +import { runAction } from './actions.js'; +import { filterEntries, formatTable, formatTabSeparated } from './ui/vault-list.js'; + +const getJson = () => program.opts().json; program .name('git-cas') .description('Content Addressable Storage backed by Git') .version('4.0.0') - .option('-q, --quiet', 'Suppress progress output'); + .option('-q, --quiet', 'Suppress progress output') + .option('--json', 'Output results as JSON'); /** * Read a 32-byte raw encryption key from a file. @@ -81,12 +86,10 @@ async function resolveEncryptionKey(cas, opts) { */ function validateRestoreFlags(opts) { if (opts.slug && opts.oid) { - process.stderr.write('error: Provide --slug or --oid, not both\n'); - process.exit(1); + throw new Error('Provide --slug or --oid, not both'); } if (!opts.slug && !opts.oid) { - process.stderr.write('error: Provide --slug or --oid \n'); - process.exit(1); + throw new Error('Provide --slug or --oid '); } } @@ -102,39 +105,42 @@ program .option('--force', 'Overwrite existing vault entry') .option('--vault-passphrase ', 'Vault-level passphrase for encryption (prefer GIT_CAS_PASSPHRASE env var)') .option('--cwd ', 'Git working directory', '.') - .action(async (file, opts) => { - try { - const observer = new EventEmitterObserver(); - const cas = createCas(opts.cwd, { observability: observer }); - const encryptionKey = await resolveEncryptionKey(cas, opts); - const storeOpts = { filePath: file, slug: opts.slug }; - if (encryptionKey) { - storeOpts.encryptionKey = encryptionKey; - } + .action(runAction(async (file, opts) => { + const quiet = program.opts().quiet || program.opts().json; + const observer = new EventEmitterObserver(); + const cas = createCas(opts.cwd, { observability: observer }); + const encryptionKey = await resolveEncryptionKey(cas, opts); + const storeOpts = { filePath: file, slug: opts.slug }; + if (encryptionKey) { + storeOpts.encryptionKey = encryptionKey; + } - const progress = createStoreProgress({ - filePath: file, chunkSize: cas.chunkSize, quiet: program.opts().quiet, - }); - progress.attach(observer); - let manifest; - try { - manifest = await cas.storeFile(storeOpts); - } finally { - progress.detach(); - } + const progress = createStoreProgress({ + filePath: file, chunkSize: cas.chunkSize, quiet, + }); + progress.attach(observer); + let manifest; + try { + manifest = await cas.storeFile(storeOpts); + } finally { + progress.detach(); + } - if (opts.tree) { - const treeOid = await cas.createTree({ manifest }); - await cas.addToVault({ slug: opts.slug, treeOid, force: !!opts.force }); - process.stdout.write(`${treeOid}\n`); + const json = program.opts().json; + if (opts.tree) { + const treeOid = await cas.createTree({ manifest }); + await cas.addToVault({ slug: opts.slug, treeOid, force: !!opts.force }); + if (json) { + process.stdout.write(`${JSON.stringify({ treeOid })}\n`); } else { - process.stdout.write(`${JSON.stringify(manifest.toJSON(), null, 2)}\n`); + 
process.stdout.write(`${treeOid}\n`); } - } catch (err) { - process.stderr.write(`error: ${err.message}\n`); - process.exit(1); + } else if (json) { + process.stdout.write(`${JSON.stringify({ manifest: manifest.toJSON() })}\n`); + } else { + process.stdout.write(`${JSON.stringify(manifest.toJSON(), null, 2)}\n`); } - }); + }, getJson)); // --------------------------------------------------------------------------- // tree @@ -144,18 +150,18 @@ program .description('Create a Git tree from a manifest') .requiredOption('--manifest ', 'Path to manifest JSON file') .option('--cwd ', 'Git working directory', '.') - .action(async (opts) => { - try { - const cas = createCas(opts.cwd); - const raw = readFileSync(opts.manifest, 'utf8'); - const manifest = new Manifest(JSON.parse(raw)); - const treeOid = await cas.createTree({ manifest }); + .action(runAction(async (opts) => { + const cas = createCas(opts.cwd); + const raw = readFileSync(opts.manifest, 'utf8'); + const manifest = new Manifest(JSON.parse(raw)); + const treeOid = await cas.createTree({ manifest }); + const json = program.opts().json; + if (json) { + process.stdout.write(`${JSON.stringify({ treeOid })}\n`); + } else { process.stdout.write(`${treeOid}\n`); - } catch (err) { - process.stderr.write(`error: ${err.message}\n`); - process.exit(1); } - }); + }, getJson)); // --------------------------------------------------------------------------- // inspect @@ -167,25 +173,20 @@ program .option('--oid ', 'Direct tree OID') .option('--heatmap', 'Show chunk heatmap visualization') .option('--cwd ', 'Git working directory', '.') - .action(async (opts) => { - try { - validateRestoreFlags(opts); - const cas = createCas(opts.cwd); - const treeOid = opts.oid || await cas.resolveVaultEntry({ slug: opts.slug }); - const manifest = await cas.readManifest({ treeOid }); + .action(runAction(async (opts) => { + validateRestoreFlags(opts); + const cas = createCas(opts.cwd); + const treeOid = opts.oid || await cas.resolveVaultEntry({ slug: opts.slug }); + const manifest = await cas.readManifest({ treeOid }); - if (opts.heatmap) { - process.stdout.write(renderHeatmap({ manifest })); - } else if (process.stdout.isTTY) { - process.stdout.write(renderManifestView({ manifest })); - } else { - process.stdout.write(`${JSON.stringify(manifest.toJSON(), null, 2)}\n`); - } - } catch (err) { - process.stderr.write(`error: ${err.message}\n`); - process.exit(1); + if (opts.heatmap) { + process.stdout.write(renderHeatmap({ manifest })); + } else if (process.stdout.isTTY) { + process.stdout.write(renderManifestView({ manifest })); + } else { + process.stdout.write(`${JSON.stringify(manifest.toJSON(), null, 2)}\n`); } - }); + }, getJson)); // --------------------------------------------------------------------------- // restore @@ -199,39 +200,66 @@ program .option('--key-file ', 'Path to 32-byte raw encryption key file') .option('--vault-passphrase ', 'Vault-level passphrase for decryption (prefer GIT_CAS_PASSPHRASE env var)') .option('--cwd ', 'Git working directory', '.') - .action(async (opts) => { - try { - validateRestoreFlags(opts); - const observer = new EventEmitterObserver(); - const cas = createCas(opts.cwd, { observability: observer }); - const treeOid = opts.oid || await cas.resolveVaultEntry({ slug: opts.slug }); - const manifest = await cas.readManifest({ treeOid }); + .action(runAction(async (opts) => { + validateRestoreFlags(opts); + const quiet = program.opts().quiet || program.opts().json; + const observer = new EventEmitterObserver(); + const cas = 
createCas(opts.cwd, { observability: observer }); + const treeOid = opts.oid || await cas.resolveVaultEntry({ slug: opts.slug }); + const manifest = await cas.readManifest({ treeOid }); - const restoreOpts = { manifest }; - const encryptionKey = await resolveEncryptionKey(cas, opts); - if (encryptionKey) { - restoreOpts.encryptionKey = encryptionKey; - } + const restoreOpts = { manifest }; + const encryptionKey = await resolveEncryptionKey(cas, opts); + if (encryptionKey) { + restoreOpts.encryptionKey = encryptionKey; + } - const progress = createRestoreProgress({ - totalChunks: manifest.chunks.length, quiet: program.opts().quiet, - }); - progress.attach(observer); - let bytesWritten; - try { - ({ bytesWritten } = await cas.restoreFile({ - ...restoreOpts, - outputPath: opts.out, - })); - } finally { - progress.detach(); - } + const progress = createRestoreProgress({ + totalChunks: manifest.chunks.length, quiet, + }); + progress.attach(observer); + let bytesWritten; + try { + ({ bytesWritten } = await cas.restoreFile({ + ...restoreOpts, + outputPath: opts.out, + })); + } finally { + progress.detach(); + } + const json = program.opts().json; + if (json) { + process.stdout.write(`${JSON.stringify({ bytesWritten })}\n`); + } else { process.stdout.write(`${bytesWritten}\n`); - } catch (err) { - process.stderr.write(`error: ${err.message}\n`); + } + }, getJson)); + +// --------------------------------------------------------------------------- +// verify +// --------------------------------------------------------------------------- +program + .command('verify') + .description('Verify integrity of a stored asset') + .option('--slug ', 'Resolve tree OID from vault slug') + .option('--oid ', 'Direct tree OID') + .option('--cwd ', 'Git working directory', '.') + .action(runAction(async (opts) => { + validateRestoreFlags(opts); + const cas = createCas(opts.cwd); + const treeOid = opts.oid || await cas.resolveVaultEntry({ slug: opts.slug }); + const manifest = await cas.readManifest({ treeOid }); + const ok = await cas.verifyIntegrity(manifest); + const json = program.opts().json; + if (json) { + process.stdout.write(`${JSON.stringify({ ok, slug: manifest.slug, chunks: manifest.chunks.length })}\n`); + } else { + process.stdout.write(ok ? 
'ok\n' : `fail: ${manifest.slug}\n`); + } + if (!ok) { process.exit(1); } - }); + }, getJson)); // --------------------------------------------------------------------------- // vault init @@ -246,22 +274,22 @@ vault .option('--vault-passphrase ', 'Passphrase for vault-level encryption (prefer GIT_CAS_PASSPHRASE env var)') .option('--algorithm ', 'KDF algorithm (pbkdf2 or scrypt)', 'pbkdf2') .option('--cwd ', 'Git working directory', '.') - .action(async (opts) => { - try { - const cas = createCas(opts.cwd); - const initOpts = {}; - const passphrase = resolvePassphrase(opts); - if (passphrase) { - initOpts.passphrase = passphrase; - initOpts.kdfOptions = { algorithm: opts.algorithm }; - } - const { commitOid } = await cas.initVault(initOpts); + .action(runAction(async (opts) => { + const cas = createCas(opts.cwd); + const initOpts = {}; + const passphrase = resolvePassphrase(opts); + if (passphrase) { + initOpts.passphrase = passphrase; + initOpts.kdfOptions = { algorithm: opts.algorithm }; + } + const { commitOid } = await cas.initVault(initOpts); + const json = program.opts().json; + if (json) { + process.stdout.write(`${JSON.stringify({ commitOid })}\n`); + } else { process.stdout.write(`${commitOid}\n`); - } catch (err) { - process.stderr.write(`error: ${err.message}\n`); - process.exit(1); } - }); + }, getJson)); // --------------------------------------------------------------------------- // vault list @@ -269,19 +297,21 @@ vault vault .command('list') .description('List vault entries') + .option('--filter ', 'Filter entries by glob pattern') .option('--cwd ', 'Git working directory', '.') - .action(async (opts) => { - try { - const cas = createCas(opts.cwd); - const entries = await cas.listVault(); - for (const { slug, treeOid } of entries) { - process.stdout.write(`${slug}\t${treeOid}\n`); - } - } catch (err) { - process.stderr.write(`error: ${err.message}\n`); - process.exit(1); + .action(runAction(async (opts) => { + const cas = createCas(opts.cwd); + const all = await cas.listVault(); + const entries = filterEntries(all, opts.filter); + const json = program.opts().json; + if (json) { + process.stdout.write(`${JSON.stringify(entries)}\n`); + } else if (process.stdout.isTTY) { + process.stdout.write(formatTable(entries)); + } else { + process.stdout.write(formatTabSeparated(entries)); } - }); + }, getJson)); // --------------------------------------------------------------------------- // vault remove @@ -290,16 +320,16 @@ vault .command('remove ') .description('Remove an entry from the vault') .option('--cwd ', 'Git working directory', '.') - .action(async (slug, opts) => { - try { - const cas = createCas(opts.cwd); - const { removedTreeOid } = await cas.removeFromVault({ slug }); + .action(runAction(async (slug, opts) => { + const cas = createCas(opts.cwd); + const { commitOid, removedTreeOid } = await cas.removeFromVault({ slug }); + const json = program.opts().json; + if (json) { + process.stdout.write(`${JSON.stringify({ commitOid, removedTreeOid })}\n`); + } else { process.stdout.write(`${removedTreeOid}\n`); - } catch (err) { - process.stderr.write(`error: ${err.message}\n`); - process.exit(1); } - }); + }, getJson)); // --------------------------------------------------------------------------- // vault info @@ -309,21 +339,21 @@ vault .description('Show info for a vault entry') .option('--encryption', 'Show vault encryption details') .option('--cwd ', 'Git working directory', '.') - .action(async (slug, opts) => { - try { - const cas = createCas(opts.cwd); - const 
treeOid = await cas.resolveVaultEntry({ slug }); + .action(runAction(async (slug, opts) => { + const cas = createCas(opts.cwd); + const treeOid = await cas.resolveVaultEntry({ slug }); + const json = program.opts().json; + if (json) { + process.stdout.write(`${JSON.stringify({ slug, treeOid })}\n`); + } else { process.stdout.write(`slug\t${slug}\n`); process.stdout.write(`tree\t${treeOid}\n`); - if (opts.encryption) { - const metadata = await cas.getVaultMetadata(); - process.stdout.write(`\n${renderEncryptionCard({ metadata })}\n`); - } - } catch (err) { - process.stderr.write(`error: ${err.message}\n`); - process.exit(1); } - }); + if (opts.encryption && !json) { + const metadata = await cas.getVaultMetadata(); + process.stdout.write(`\n${renderEncryptionCard({ metadata })}\n`); + } + }, getJson)); // --------------------------------------------------------------------------- // vault history @@ -334,30 +364,24 @@ vault .option('--cwd ', 'Git working directory', '.') .option('-n, --max-count ', 'Limit number of commits') .option('--pretty', 'Render as color-coded timeline') - .action(async (opts) => { - try { - const runner = ShellRunnerFactory.create(); - const plumbing = new GitPlumbing({ runner, cwd: opts.cwd || '.' }); - const args = ['log', '--oneline', ContentAddressableStore.VAULT_REF]; - if (opts.maxCount) { - const n = parseInt(opts.maxCount, 10); - if (Number.isNaN(n) || n <= 0) { - process.stderr.write('error: --max-count must be a positive integer\n'); - process.exit(1); - } - args.push(`-${n}`); - } - const output = await plumbing.execute({ args }); - if (opts.pretty && process.stdout.isTTY) { - process.stdout.write(`${renderHistoryTimeline(output)}\n`); - } else { - process.stdout.write(`${output}\n`); + .action(runAction(async (opts) => { + const runner = ShellRunnerFactory.create(); + const plumbing = new GitPlumbing({ runner, cwd: opts.cwd || '.' }); + const args = ['log', '--oneline', ContentAddressableStore.VAULT_REF]; + if (opts.maxCount) { + const n = parseInt(opts.maxCount, 10); + if (Number.isNaN(n) || n <= 0) { + throw new Error('--max-count must be a positive integer'); } - } catch (err) { - process.stderr.write(`error: ${err.message}\n`); - process.exit(1); + args.push(`-${n}`); } - }); + const output = await plumbing.execute({ args }); + if (opts.pretty && process.stdout.isTTY) { + process.stdout.write(`${renderHistoryTimeline(output)}\n`); + } else { + process.stdout.write(`${output}\n`); + } + }, getJson)); // --------------------------------------------------------------------------- // vault dashboard @@ -366,15 +390,10 @@ vault .command('dashboard') .description('Interactive vault explorer') .option('--cwd ', 'Git working directory', '.') - .action(async (opts) => { - try { - const cas = createCas(opts.cwd); - const { launchDashboard } = await import('./ui/dashboard.js'); - await launchDashboard(cas); - } catch (err) { - process.stderr.write(`error: ${err.message}\n`); - process.exit(1); - } - }); + .action(runAction(async (opts) => { + const cas = createCas(opts.cwd); + const { launchDashboard } = await import('./ui/dashboard.js'); + await launchDashboard(cas); + }, getJson)); await program.parseAsync(); diff --git a/bin/ui/vault-list.js b/bin/ui/vault-list.js new file mode 100644 index 00000000..aadb8118 --- /dev/null +++ b/bin/ui/vault-list.js @@ -0,0 +1,60 @@ +/** + * Vault list filtering and table formatting utilities. + */ + +/** + * Convert a simple glob pattern to a RegExp. + * + * @param {string} pattern - Glob pattern (supports *, **, ?). 
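+ *   `*` matches within one path segment, `**` spans segments, and `?` matches
+ *   a single character; internally the pattern is compiled to an anchored RegExp.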
+ * @param {string} str - String to test. + * @returns {boolean} + */ +export function matchGlob(pattern, str) { + const escaped = pattern + .replace(/[.+^${}()|[\]\\]/g, '\\$&') + .replace(/\*\*/g, '\0') + .replace(/\*/g, '[^/]*') + .replace(/\0/g, '.*') + .replace(/\?/g, '.'); + return new RegExp(`^${escaped}$`).test(str); +} + +/** + * Filter vault entries by an optional glob pattern. + * + * @param {Array<{slug: string, treeOid: string}>} entries + * @param {string} [pattern] + * @returns {Array<{slug: string, treeOid: string}>} + */ +export function filterEntries(entries, pattern) { + if (!pattern) { + return entries; + } + return entries.filter(e => matchGlob(pattern, e.slug)); +} + +/** + * Format entries as an aligned table with header (for TTY output). + * + * @param {Array<{slug: string, treeOid: string}>} entries + * @returns {string} + */ +export function formatTable(entries) { + if (entries.length === 0) { + return ''; + } + const maxSlug = Math.max('SLUG'.length, ...entries.map(e => e.slug.length)); + const header = `${'SLUG'.padEnd(maxSlug)} TREE OID`; + const rows = entries.map(e => `${e.slug.padEnd(maxSlug)} ${e.treeOid}`); + return `${header}\n${rows.join('\n')}\n`; +} + +/** + * Format entries as tab-separated rows (for piped output). + * + * @param {Array<{slug: string, treeOid: string}>} entries + * @returns {string} + */ +export function formatTabSeparated(entries) { + return entries.map(e => `${e.slug}\t${e.treeOid}\n`).join(''); +} diff --git a/test/unit/cli/actions.test.js b/test/unit/cli/actions.test.js new file mode 100644 index 00000000..be4b297d --- /dev/null +++ b/test/unit/cli/actions.test.js @@ -0,0 +1,120 @@ +import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest'; +import { runAction, writeError, HINTS } from '../../../bin/actions.js'; + +describe('writeError — text mode', () => { + let stderrSpy; + + beforeEach(() => { + stderrSpy = vi.spyOn(process.stderr, 'write').mockImplementation(() => true); + }); + + afterEach(() => { + stderrSpy.mockRestore(); + }); + + it('writes error [CODE]: message for coded errors', () => { + const err = Object.assign(new Error('key required'), { code: 'MISSING_KEY' }); + writeError(err, false); + expect(stderrSpy).toHaveBeenCalledWith('error [MISSING_KEY]: key required\n'); + }); + + it('appends hint for known codes', () => { + const err = Object.assign(new Error('key required'), { code: 'MISSING_KEY' }); + writeError(err, false); + expect(stderrSpy).toHaveBeenCalledWith('hint: Provide --key-file or --vault-passphrase\n'); + }); + + it('no hint for unknown codes', () => { + const err = Object.assign(new Error('something'), { code: 'UNKNOWN_CODE' }); + writeError(err, false); + expect(stderrSpy).toHaveBeenCalledTimes(1); + }); + + it('no [CODE] prefix when err.code is absent', () => { + writeError(new Error('generic failure'), false); + expect(stderrSpy).toHaveBeenCalledWith('error: generic failure\n'); + }); + + it('no [CODE] prefix when err.code is not a string', () => { + const err = Object.assign(new Error('oops'), { code: 42 }); + writeError(err, false); + expect(stderrSpy).toHaveBeenCalledWith('error: oops\n'); + }); +}); + +describe('writeError — JSON mode', () => { + let stderrSpy; + + beforeEach(() => { + stderrSpy = vi.spyOn(process.stderr, 'write').mockImplementation(() => true); + }); + + afterEach(() => { + stderrSpy.mockRestore(); + }); + + it('writes {"error","code"} to stderr', () => { + const err = Object.assign(new Error('not found'), { code: 'MANIFEST_NOT_FOUND' }); + writeError(err, true); 
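+    // JSON mode emits a single newline-terminated JSON object on stderr.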
+ const output = JSON.parse(stderrSpy.mock.calls[0][0]); + expect(output).toEqual({ error: 'not found', code: 'MANIFEST_NOT_FOUND' }); + }); + + it('omits code when absent', () => { + writeError(new Error('boom'), true); + const output = JSON.parse(stderrSpy.mock.calls[0][0]); + expect(output).toEqual({ error: 'boom' }); + }); +}); + +describe('runAction', () => { + let exitSpy; + let stderrSpy; + + beforeEach(() => { + exitSpy = vi.spyOn(process, 'exit').mockImplementation(() => {}); + stderrSpy = vi.spyOn(process.stderr, 'write').mockImplementation(() => true); + }); + + afterEach(() => { + exitSpy.mockRestore(); + stderrSpy.mockRestore(); + }); + + it('does not exit on success', async () => { + const action = runAction(async () => {}, () => false); + await action(); + expect(exitSpy).not.toHaveBeenCalled(); + }); + + it('calls process.exit(1) on error', async () => { + const action = runAction(async () => { throw new Error('fail'); }, () => false); + await action(); + expect(exitSpy).toHaveBeenCalledWith(1); + }); + + it('passes arguments through to the wrapped function', async () => { + const spy = vi.fn(); + const action = runAction(spy, () => false); + await action('a', 'b', 'c'); + expect(spy).toHaveBeenCalledWith('a', 'b', 'c'); + }); + + it('uses JSON mode from getJson getter', async () => { + const err = Object.assign(new Error('oops'), { code: 'MISSING_KEY' }); + const action = runAction(async () => { throw err; }, () => true); + await action(); + const output = JSON.parse(stderrSpy.mock.calls[0][0]); + expect(output).toEqual({ error: 'oops', code: 'MISSING_KEY' }); + }); +}); + +describe('HINTS', () => { + it('contains expected error codes', () => { + expect(HINTS).toHaveProperty('MISSING_KEY'); + expect(HINTS).toHaveProperty('MANIFEST_NOT_FOUND'); + expect(HINTS).toHaveProperty('VAULT_ENTRY_NOT_FOUND'); + expect(HINTS).toHaveProperty('VAULT_ENTRY_EXISTS'); + expect(HINTS).toHaveProperty('INTEGRITY_ERROR'); + }); +}); diff --git a/test/unit/cli/vault-list.test.js b/test/unit/cli/vault-list.test.js new file mode 100644 index 00000000..e2e72997 --- /dev/null +++ b/test/unit/cli/vault-list.test.js @@ -0,0 +1,94 @@ +import { describe, it, expect } from 'vitest'; +import { matchGlob, filterEntries, formatTable, formatTabSeparated } from '../../../bin/ui/vault-list.js'; + +describe('matchGlob', () => { + it('matches single-segment wildcard', () => { + expect(matchGlob('photos/*', 'photos/hero.jpg')).toBe(true); + }); + + it('rejects non-matching prefix', () => { + expect(matchGlob('photos/*', 'other/hero.jpg')).toBe(false); + }); + + it('matches extension glob', () => { + expect(matchGlob('*.bin', 'asset.bin')).toBe(true); + }); + + it('rejects wrong extension', () => { + expect(matchGlob('*.bin', 'asset.json')).toBe(false); + }); + + it('matches double-star across segments', () => { + expect(matchGlob('assets/**/*.png', 'assets/img/icons/logo.png')).toBe(true); + }); + + it('matches question mark for single char', () => { + expect(matchGlob('file?.txt', 'file1.txt')).toBe(true); + expect(matchGlob('file?.txt', 'file12.txt')).toBe(false); + }); + + it('handles exact match', () => { + expect(matchGlob('exact', 'exact')).toBe(true); + expect(matchGlob('exact', 'other')).toBe(false); + }); +}); + +describe('filterEntries', () => { + const entries = [ + { slug: 'photos/hero.jpg', treeOid: 'aaa' }, + { slug: 'photos/thumb.png', treeOid: 'bbb' }, + { slug: 'videos/intro.mp4', treeOid: 'ccc' }, + ]; + + it('returns all entries when no pattern is provided', () => { + 
expect(filterEntries(entries)).toEqual(entries); + expect(filterEntries(entries, undefined)).toEqual(entries); + }); + + it('filters entries by glob pattern', () => { + const result = filterEntries(entries, 'photos/*'); + expect(result).toEqual([ + { slug: 'photos/hero.jpg', treeOid: 'aaa' }, + { slug: 'photos/thumb.png', treeOid: 'bbb' }, + ]); + }); + + it('returns empty array when nothing matches', () => { + expect(filterEntries(entries, 'docs/*')).toEqual([]); + }); +}); + +describe('formatTable', () => { + it('includes header and aligned columns', () => { + const entries = [ + { slug: 'short', treeOid: 'abc123' }, + { slug: 'a-longer-slug', treeOid: 'def456' }, + ]; + const output = formatTable(entries); + const lines = output.split('\n'); + expect(lines[0]).toMatch(/^SLUG\s+TREE OID$/); + expect(lines[1]).toContain('short'); + expect(lines[1]).toContain('abc123'); + expect(lines[2]).toContain('a-longer-slug'); + expect(lines[2]).toContain('def456'); + }); + + it('returns empty string for no entries', () => { + expect(formatTable([])).toBe(''); + }); +}); + +describe('formatTabSeparated', () => { + it('outputs tab-delimited rows', () => { + const entries = [ + { slug: 'a', treeOid: '111' }, + { slug: 'b', treeOid: '222' }, + ]; + const output = formatTabSeparated(entries); + expect(output).toBe('a\t111\nb\t222\n'); + }); + + it('returns empty string for no entries', () => { + expect(formatTabSeparated([])).toBe(''); + }); +}); From 1b16f6e53a69d91582bee81beb2df5434e0c93b1 Mon Sep 17 00:00:00 2001 From: James Ross Date: Fri, 27 Feb 2026 22:50:54 -0800 Subject: [PATCH 3/6] =?UTF-8?q?fix:=20M8=20code=20review=20=E2=80=94=2012?= =?UTF-8?q?=20issues=20(C1,=20M1=E2=80=93M4,=20L1/L3=E2=80=93L5,=20N1/N3)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Replace process.exit(1) with exitCode, include encryption in vault info JSON, accept Uint8Array in NodeCryptoAdapter, guard matchGlob pattern length, harden writeError for non-Error throws, remove redundant _validateKey call, fix ADR-001 method names, update test count and CHANGELOG. --- CHANGELOG.md | 15 +++++++++++ STATUS.md | 2 +- bin/actions.js | 14 +++++----- bin/git-cas.js | 27 ++++++++++++------- bin/ui/vault-list.js | 3 +++ docs/ADR-001-vault-in-facade.md | 2 +- src/domain/services/CasService.js | 1 - .../adapters/NodeCryptoAdapter.js | 23 +--------------- .../CasService.key-validation.test.js | 6 ++--- 9 files changed, 48 insertions(+), 45 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 7291b151..a5f877c6 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -5,6 +5,21 @@ All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). +## [Unreleased] — M8 Spit Shine (code review fixes) + +### Fixed +- **C1** `verify` command uses `process.exitCode = 1` instead of `process.exit(1)` to allow stdout to drain on pipes. +- **M4** `vault info --json --encryption` now includes encryption metadata in JSON output instead of silently dropping it. +- **M3** `NodeCryptoAdapter._validateKey` removed — inherits base class `CryptoPort._validateKey` which accepts both `Buffer` and `Uint8Array`, matching the port contract. +- **M2** `matchGlob` rejects patterns longer than 200 characters to prevent ReDoS on pathological input. +- **L1** `writeError` guards against non-Error throws (`err?.message ?? String(err)`). 
+- **L4** `store` action hoists `--json` flag read before `quiet` assignment (single call instead of two). +- **L5** `CasService.encrypt()` removed redundant `_validateKey` call — `encryptBuffer` already validates internally. +- **L3** `verify` command description clarified: "checks blob hashes; no key needed". +- **N1** ADR-001 method names corrected to match actual facade API (`initVault`, `addToVault`, etc.). +- **N3** Removed trailing blank line at EOF in `bin/git-cas.js`. +- **M1** STATUS.md test count updated from 567 to 616. + ## [4.0.0] — Conduit (2026-02-27) ### Breaking Changes diff --git a/STATUS.md b/STATUS.md index d4677670..8652eee6 100644 --- a/STATUS.md +++ b/STATUS.md @@ -2,7 +2,7 @@ **Current version:** v4.0.0 (Conduit) **Last release:** 2026-02-27 -**Test suite:** 567 tests (vitest) +**Test suite:** 616 tests (vitest) **Runtimes:** Node.js 22.x, Bun, Deno --- diff --git a/bin/actions.js b/bin/actions.js index a52a6804..967136fd 100644 --- a/bin/actions.js +++ b/bin/actions.js @@ -17,16 +17,16 @@ const HINTS = { * @param {boolean} json - Whether to output JSON. */ function writeError(err, json) { + const message = err?.message ?? String(err); + const code = typeof err?.code === 'string' ? err.code : undefined; if (json) { - const obj = { error: err.message }; - if (typeof err.code === 'string') { - obj.code = err.code; - } + const obj = { error: message }; + if (code) { obj.code = code; } process.stderr.write(`${JSON.stringify(obj)}\n`); } else { - const prefix = typeof err.code === 'string' ? `error [${err.code}]: ` : 'error: '; - process.stderr.write(`${prefix}${err.message}\n`); - const hint = typeof err.code === 'string' ? HINTS[err.code] : undefined; + const prefix = code ? `error [${code}]: ` : 'error: '; + process.stderr.write(`${prefix}${message}\n`); + const hint = code ? HINTS[code] : undefined; if (hint) { process.stderr.write(`hint: ${hint}\n`); } diff --git a/bin/git-cas.js b/bin/git-cas.js index 041f4497..7122390f 100755 --- a/bin/git-cas.js +++ b/bin/git-cas.js @@ -106,7 +106,8 @@ program .option('--vault-passphrase ', 'Vault-level passphrase for encryption (prefer GIT_CAS_PASSPHRASE env var)') .option('--cwd ', 'Git working directory', '.') .action(runAction(async (file, opts) => { - const quiet = program.opts().quiet || program.opts().json; + const json = program.opts().json; + const quiet = program.opts().quiet || json; const observer = new EventEmitterObserver(); const cas = createCas(opts.cwd, { observability: observer }); const encryptionKey = await resolveEncryptionKey(cas, opts); @@ -126,7 +127,6 @@ program progress.detach(); } - const json = program.opts().json; if (opts.tree) { const treeOid = await cas.createTree({ manifest }); await cas.addToVault({ slug: opts.slug, treeOid, force: !!opts.force }); @@ -240,7 +240,7 @@ program // --------------------------------------------------------------------------- program .command('verify') - .description('Verify integrity of a stored asset') + .description('Verify integrity of a stored asset (checks blob hashes; no key needed)') .option('--slug ', 'Resolve tree OID from vault slug') .option('--oid ', 'Direct tree OID') .option('--cwd ', 'Git working directory', '.') @@ -257,7 +257,7 @@ program process.stdout.write(ok ? 
'ok\n' : `fail: ${manifest.slug}\n`); } if (!ok) { - process.exit(1); + process.exitCode = 1; } }, getJson)); @@ -344,14 +344,21 @@ vault const treeOid = await cas.resolveVaultEntry({ slug }); const json = program.opts().json; if (json) { - process.stdout.write(`${JSON.stringify({ slug, treeOid })}\n`); + const result = { slug, treeOid }; + if (opts.encryption) { + const metadata = await cas.getVaultMetadata(); + if (metadata?.encryption) { + result.encryption = metadata.encryption; + } + } + process.stdout.write(`${JSON.stringify(result)}\n`); } else { process.stdout.write(`slug\t${slug}\n`); process.stdout.write(`tree\t${treeOid}\n`); - } - if (opts.encryption && !json) { - const metadata = await cas.getVaultMetadata(); - process.stdout.write(`\n${renderEncryptionCard({ metadata })}\n`); + if (opts.encryption) { + const metadata = await cas.getVaultMetadata(); + process.stdout.write(`\n${renderEncryptionCard({ metadata })}\n`); + } } }, getJson)); @@ -396,4 +403,4 @@ vault await launchDashboard(cas); }, getJson)); -await program.parseAsync(); +await program.parseAsync(); \ No newline at end of file diff --git a/bin/ui/vault-list.js b/bin/ui/vault-list.js index aadb8118..60e3f4c6 100644 --- a/bin/ui/vault-list.js +++ b/bin/ui/vault-list.js @@ -10,6 +10,9 @@ * @returns {boolean} */ export function matchGlob(pattern, str) { + if (pattern.length > 200) { + throw new Error(`Glob pattern too long (${pattern.length} chars, max 200)`); + } const escaped = pattern .replace(/[.+^${}()|[\]\\]/g, '\\$&') .replace(/\*\*/g, '\0') diff --git a/docs/ADR-001-vault-in-facade.md b/docs/ADR-001-vault-in-facade.md index 607ea7d3..5214e3f4 100644 --- a/docs/ADR-001-vault-in-facade.md +++ b/docs/ADR-001-vault-in-facade.md @@ -29,6 +29,6 @@ Vault logic lives in `VaultService`, a separate domain service. `ContentAddressa ## Consequences -- The facade has 7 vault pass-through methods (`vaultInit`, `vaultStore`, `vaultRestore`, `vaultList`, `vaultInfo`, `vaultRemove`, `vaultHistory`). This is acceptable given the flat API benefit. +- The facade has 7 vault pass-through methods (`initVault`, `addToVault`, `listVault`, `removeFromVault`, `resolveVaultEntry`, `getVaultMetadata`, `getVaultService`). This is acceptable given the flat API benefit. - Users must go through the facade or explicitly create `VaultService` — there is no implicit vault available on a bare `CasService`. - VaultService can evolve independently (e.g., named vaults, cross-repo sync) without touching CasService internals. diff --git a/src/domain/services/CasService.js b/src/domain/services/CasService.js index eb96e40c..a94dc1c6 100644 --- a/src/domain/services/CasService.js +++ b/src/domain/services/CasService.js @@ -155,7 +155,6 @@ export default class CasService { * @throws {CasError} INVALID_KEY_TYPE | INVALID_KEY_LENGTH if the key is invalid. */ async encrypt({ buffer, key }) { - this.crypto._validateKey(key); return await this.crypto.encryptBuffer(buffer, key); } diff --git a/src/infrastructure/adapters/NodeCryptoAdapter.js b/src/infrastructure/adapters/NodeCryptoAdapter.js index 62365c9f..134e02fe 100644 --- a/src/infrastructure/adapters/NodeCryptoAdapter.js +++ b/src/infrastructure/adapters/NodeCryptoAdapter.js @@ -1,7 +1,6 @@ import { createHash, createCipheriv, createDecipheriv, randomBytes, pbkdf2, scrypt } from 'node:crypto'; import { promisify } from 'node:util'; import CryptoPort from '../../ports/CryptoPort.js'; -import CasError from '../../domain/errors/CasError.js'; /** * Node.js implementation of CryptoPort using node:crypto. 
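
For readers without the base class at hand: the override deleted in the next hunk is redundant because key validation now lives on `CryptoPort` itself. The base implementation is not shown anywhere in this series; a minimal sketch of what the adapters plausibly inherit — reusing the `CasError(message, code, meta)` signature and 32-byte rule from the removed code, everything else assumed:

    // Sketch only: the real CryptoPort._validateKey is not part of these patches.
    import CasError from '../../domain/errors/CasError.js';

    // Buffer is a subclass of Uint8Array, so a single instanceof check accepts both,
    // matching the "must be a Buffer or Uint8Array" message the updated tests expect.
    function validateKey(key) {
      if (!(key instanceof Uint8Array)) {
        throw new CasError('Encryption key must be a Buffer or Uint8Array', 'INVALID_KEY_TYPE');
      }
      if (key.length !== 32) {
        throw new CasError(
          `Encryption key must be 32 bytes, got ${key.length}`,
          'INVALID_KEY_LENGTH',
          { expected: 32, actual: key.length },
        );
      }
    }
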
@@ -66,27 +65,7 @@ export default class NodeCryptoAdapter extends CryptoPort { return { encrypt, finalize }; } - /** - * Validates that a key is a 32-byte Buffer (strict Node.js check). - * @override - * @param {Buffer} key - * @throws {CasError} INVALID_KEY_TYPE | INVALID_KEY_LENGTH - */ - _validateKey(key) { - if (!Buffer.isBuffer(key)) { - throw new CasError( - 'Encryption key must be a Buffer', - 'INVALID_KEY_TYPE', - ); - } - if (key.length !== 32) { - throw new CasError( - `Encryption key must be 32 bytes, got ${key.length}`, - 'INVALID_KEY_LENGTH', - { expected: 32, actual: key.length }, - ); - } - } + /** @override – delegate to base class which accepts both Buffer and Uint8Array */ /** @override */ async _doDeriveKey(passphrase, saltBuf, { algorithm, iterations, cost, blockSize, parallelization, keyLength }) { diff --git a/test/unit/domain/services/CasService.key-validation.test.js b/test/unit/domain/services/CasService.key-validation.test.js index dd828c9e..7f622abf 100644 --- a/test/unit/domain/services/CasService.key-validation.test.js +++ b/test/unit/domain/services/CasService.key-validation.test.js @@ -108,7 +108,7 @@ describe('CasService key validation – encrypt() invalid key type', () => { await service.encrypt({ buffer: plaintext, key }); } catch (err) { expect(err.code).toBe('INVALID_KEY_TYPE'); - expect(err.message).toContain('must be a Buffer'); + expect(err.message).toContain('must be a Buffer or Uint8Array'); } }); @@ -119,7 +119,7 @@ describe('CasService key validation – encrypt() invalid key type', () => { await service.encrypt({ buffer: plaintext, key }); } catch (err) { expect(err.code).toBe('INVALID_KEY_TYPE'); - expect(err.message).toContain('must be a Buffer'); + expect(err.message).toContain('must be a Buffer or Uint8Array'); } }); @@ -130,7 +130,7 @@ describe('CasService key validation – encrypt() invalid key type', () => { await service.encrypt({ buffer: plaintext, key }); } catch (err) { expect(err.code).toBe('INVALID_KEY_TYPE'); - expect(err.message).toContain('must be a Buffer'); + expect(err.message).toContain('must be a Buffer or Uint8Array'); } }); }); From 647b2a7d3e4878584c1d005e220666b7285ab4d7 Mon Sep 17 00:00:00 2001 From: James Ross Date: Fri, 27 Feb 2026 23:08:37 -0800 Subject: [PATCH 4/6] fix: address CodeRabbit PR feedback (9 issues) - runAction: process.exitCode = 1 instead of process.exit(1) - store: --force without --tree now fails fast - inspect: --json honored in TTY mode - vault history: --json emits structured array - matchGlob: ? no longer matches / separator - NodeCryptoAdapter: remove orphaned comment, await in _doDeriveKey - Tests: pattern length guard, ? 
separator, exitCode assertions --- bin/actions.js | 2 +- bin/git-cas.js | 20 +++++++++++++++++-- bin/ui/vault-list.js | 2 +- .../adapters/NodeCryptoAdapter.js | 6 ++---- test/unit/cli/actions.test.js | 14 ++++++------- test/unit/cli/vault-list.test.js | 10 ++++++++++ 6 files changed, 39 insertions(+), 15 deletions(-) diff --git a/bin/actions.js b/bin/actions.js index 967136fd..dfe299c7 100644 --- a/bin/actions.js +++ b/bin/actions.js @@ -46,7 +46,7 @@ export function runAction(fn, getJson) { await fn(...args); } catch (err) { writeError(err, getJson()); - process.exit(1); + process.exitCode = 1; } }; } diff --git a/bin/git-cas.js b/bin/git-cas.js index 7122390f..c5c521fe 100755 --- a/bin/git-cas.js +++ b/bin/git-cas.js @@ -111,6 +111,9 @@ program const observer = new EventEmitterObserver(); const cas = createCas(opts.cwd, { observability: observer }); const encryptionKey = await resolveEncryptionKey(cas, opts); + if (opts.force && !opts.tree) { + throw new Error('--force requires --tree'); + } const storeOpts = { filePath: file, slug: opts.slug }; if (encryptionKey) { storeOpts.encryptionKey = encryptionKey; @@ -178,8 +181,11 @@ program const cas = createCas(opts.cwd); const treeOid = opts.oid || await cas.resolveVaultEntry({ slug: opts.slug }); const manifest = await cas.readManifest({ treeOid }); + const json = program.opts().json; - if (opts.heatmap) { + if (json) { + process.stdout.write(`${JSON.stringify(manifest.toJSON())}\n`); + } else if (opts.heatmap) { process.stdout.write(renderHeatmap({ manifest })); } else if (process.stdout.isTTY) { process.stdout.write(renderManifestView({ manifest })); @@ -383,7 +389,17 @@ vault args.push(`-${n}`); } const output = await plumbing.execute({ args }); - if (opts.pretty && process.stdout.isTTY) { + const json = program.opts().json; + if (json) { + const history = output + .split('\n') + .filter(Boolean) + .map((line) => { + const [commitOid, ...messageParts] = line.trim().split(/\s+/); + return { commitOid, message: messageParts.join(' ') }; + }); + process.stdout.write(`${JSON.stringify(history)}\n`); + } else if (opts.pretty && process.stdout.isTTY) { process.stdout.write(`${renderHistoryTimeline(output)}\n`); } else { process.stdout.write(`${output}\n`); diff --git a/bin/ui/vault-list.js b/bin/ui/vault-list.js index 60e3f4c6..cbaedbb4 100644 --- a/bin/ui/vault-list.js +++ b/bin/ui/vault-list.js @@ -18,7 +18,7 @@ export function matchGlob(pattern, str) { .replace(/\*\*/g, '\0') .replace(/\*/g, '[^/]*') .replace(/\0/g, '.*') - .replace(/\?/g, '.'); + .replace(/\?/g, '[^/]'); return new RegExp(`^${escaped}$`).test(str); } diff --git a/src/infrastructure/adapters/NodeCryptoAdapter.js b/src/infrastructure/adapters/NodeCryptoAdapter.js index 134e02fe..255f04ff 100644 --- a/src/infrastructure/adapters/NodeCryptoAdapter.js +++ b/src/infrastructure/adapters/NodeCryptoAdapter.js @@ -65,14 +65,12 @@ export default class NodeCryptoAdapter extends CryptoPort { return { encrypt, finalize }; } - /** @override – delegate to base class which accepts both Buffer and Uint8Array */ - /** @override */ async _doDeriveKey(passphrase, saltBuf, { algorithm, iterations, cost, blockSize, parallelization, keyLength }) { if (algorithm === 'pbkdf2') { - return promisify(pbkdf2)(passphrase, saltBuf, iterations, keyLength, 'sha512'); + return await promisify(pbkdf2)(passphrase, saltBuf, iterations, keyLength, 'sha512'); } - return promisify(scrypt)(passphrase, saltBuf, keyLength, { + return await promisify(scrypt)(passphrase, saltBuf, keyLength, { N: cost, r: blockSize, p: 
parallelization, diff --git a/test/unit/cli/actions.test.js b/test/unit/cli/actions.test.js index be4b297d..84870008 100644 --- a/test/unit/cli/actions.test.js +++ b/test/unit/cli/actions.test.js @@ -68,29 +68,29 @@ describe('writeError — JSON mode', () => { }); describe('runAction', () => { - let exitSpy; let stderrSpy; + const originalExitCode = process.exitCode; beforeEach(() => { - exitSpy = vi.spyOn(process, 'exit').mockImplementation(() => {}); + process.exitCode = undefined; stderrSpy = vi.spyOn(process.stderr, 'write').mockImplementation(() => true); }); afterEach(() => { - exitSpy.mockRestore(); + process.exitCode = originalExitCode; stderrSpy.mockRestore(); }); - it('does not exit on success', async () => { + it('does not set exitCode on success', async () => { const action = runAction(async () => {}, () => false); await action(); - expect(exitSpy).not.toHaveBeenCalled(); + expect(process.exitCode).toBeUndefined(); }); - it('calls process.exit(1) on error', async () => { + it('sets process.exitCode = 1 on error', async () => { const action = runAction(async () => { throw new Error('fail'); }, () => false); await action(); - expect(exitSpy).toHaveBeenCalledWith(1); + expect(process.exitCode).toBe(1); }); it('passes arguments through to the wrapped function', async () => { diff --git a/test/unit/cli/vault-list.test.js b/test/unit/cli/vault-list.test.js index e2e72997..438efa1e 100644 --- a/test/unit/cli/vault-list.test.js +++ b/test/unit/cli/vault-list.test.js @@ -31,6 +31,16 @@ describe('matchGlob', () => { expect(matchGlob('exact', 'exact')).toBe(true); expect(matchGlob('exact', 'other')).toBe(false); }); + + it('rejects patterns longer than 200 characters', () => { + const longPattern = 'a'.repeat(201); + expect(() => matchGlob(longPattern, 'test')).toThrow(/too long/); + }); + + it('? does not match path separator', () => { + expect(matchGlob('a?b', 'a/b')).toBe(false); + expect(matchGlob('a?b', 'axb')).toBe(true); + }); }); describe('filterEntries', () => { From 4fe0802724abcbb73984ad6576c6c0edbcbc6388 Mon Sep 17 00:00:00 2001 From: James Ross Date: Fri, 27 Feb 2026 23:08:41 -0800 Subject: [PATCH 5/6] docs: update CHANGELOG and STATUS.md for PR feedback fixes --- CHANGELOG.md | 8 +++++++- STATUS.md | 14 +++++++------- 2 files changed, 14 insertions(+), 8 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index a5f877c6..935a0c96 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -18,7 +18,13 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 - **L3** `verify` command description clarified: "checks blob hashes; no key needed". - **N1** ADR-001 method names corrected to match actual facade API (`initVault`, `addToVault`, etc.). - **N3** Removed trailing blank line at EOF in `bin/git-cas.js`. -- **M1** STATUS.md test count updated from 567 to 616. +- **M1** STATUS.md test count updated; M8/M9 task checkboxes marked complete. +- `runAction` uses `process.exitCode = 1` instead of `process.exit(1)` for consistent exit behavior across all commands. +- `store --force` without `--tree` now throws immediately instead of silently ignoring the flag. +- `inspect --json` now emits JSON even in TTY mode (previously fell through to rich view). +- `vault history --json` now emits structured JSON array of `{ commitOid, message }` objects. +- `matchGlob` `?` wildcard no longer matches `/` path separator, consistent with standard glob semantics. +- Orphaned JSDoc comment removed from `NodeCryptoAdapter`; `_doDeriveKey` now properly `await`s promisified calls. 
## [4.0.0] — Conduit (2026-02-27) diff --git a/STATUS.md b/STATUS.md index 8652eee6..960fcfab 100644 --- a/STATUS.md +++ b/STATUS.md @@ -2,7 +2,7 @@ **Current version:** v4.0.0 (Conduit) **Last release:** 2026-02-27 -**Test suite:** 616 tests (vitest) +**Test suite:** 618 tests (vitest) **Runtimes:** Node.js 22.x, Bun, Deno --- @@ -26,16 +26,16 @@ Five open milestones remain. M8/M9 are quick wins; M10–M12 are larger features ### M8 — Spit Shine (~3h) Code review polish. No new features. -- [ ] **8.2** Extract shared crypto helpers to CryptoPort base class -- [ ] **8.3** README polish and architectural decision record (ADR-001) +- [x] **8.2** Extract shared crypto helpers to CryptoPort base class +- [x] **8.3** README polish and architectural decision record (ADR-001) ### M9 — Cockpit (~5h) CLI improvements for CI/CD and operator workflows. -- [ ] **9.2** CLI `verify` command (`git cas verify --slug `) -- [ ] **9.3** CLI `--json` output mode (structured JSON for all commands) -- [ ] **9.4** CLI error handler DRY cleanup + actionable error messages -- [ ] **9.5** Vault list filtering (`--filter`) and table formatting +- [x] **9.2** CLI `verify` command (`git cas verify --slug `) +- [x] **9.3** CLI `--json` output mode (structured JSON for all commands) +- [x] **9.4** CLI error handler DRY cleanup + actionable error messages +- [x] **9.5** Vault list filtering (`--filter`) and table formatting ### M10 — Hydra (~22h) Content-defined chunking for dramatically better dedup on versioned files. From 1c1e3d04abbda1d257113fd876f1f4065c1c3ce2 Mon Sep 17 00:00:00 2001 From: James Ross Date: Fri, 27 Feb 2026 23:14:44 -0800 Subject: [PATCH 6/6] fix: guard WebCryptoAdapter.finalize() against premature call Throw STREAM_NOT_CONSUMED error if finalize() is called before the encrypt stream is fully consumed, preventing null finalTag encoding. --- src/infrastructure/adapters/WebCryptoAdapter.js | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/src/infrastructure/adapters/WebCryptoAdapter.js b/src/infrastructure/adapters/WebCryptoAdapter.js index ec517a8f..a6ffd519 100644 --- a/src/infrastructure/adapters/WebCryptoAdapter.js +++ b/src/infrastructure/adapters/WebCryptoAdapter.js @@ -85,13 +85,11 @@ export default class WebCryptoAdapter extends CryptoPort { // current CasService.restore limitation. const chunks = []; let finalTag = null; + let streamConsumed = false; const encrypt = async function* (source) { for await (const chunk of source) { chunks.push(chunk); - // We can't yield partial encrypted chunks for GCM in Web Crypto - // without complex chunk-chaining which would break compatibility - // with the Node adapter's single-stream GCM. } const buffer = Buffer.concat(chunks); @@ -106,11 +104,18 @@ export default class WebCryptoAdapter extends CryptoPort { const tagLength = 16; const ciphertext = fullBuffer.slice(0, -tagLength); finalTag = fullBuffer.slice(-tagLength); + streamConsumed = true; yield Buffer.from(ciphertext); }; const finalize = () => { + if (!streamConsumed) { + throw new CasError( + 'Cannot finalize before the encrypt stream is fully consumed', + 'STREAM_NOT_CONSUMED', + ); + } return this._buildMeta(this.#toBase64(nonce), this.#toBase64(finalTag)); };
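
The guard changes the adapter's streaming contract: the `encrypt` async generator must be fully drained before `finalize()` is called, because with Web Crypto's one-shot AES-GCM the auth tag only exists once the whole buffer has been encrypted. A hypothetical caller, to illustrate — only `encrypt`, `finalize`, and the `STREAM_NOT_CONSUMED` code come from the patch; the factory method name and surrounding variables are assumptions:

    // Usage sketch under assumed names (createEncryptStream is not confirmed API).
    async function encryptAll(adapter, key, chunkSource) {
      const { encrypt, finalize } = adapter.createEncryptStream(key); // name assumed
      const blocks = [];
      for await (const block of encrypt(chunkSource)) {
        blocks.push(block); // draining the generator is what flips streamConsumed
      }
      // Safe only after the loop; calling earlier now throws STREAM_NOT_CONSUMED
      // instead of encoding a null finalTag into the metadata.
      return { ciphertext: Buffer.concat(blocks), meta: finalize() };
    }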