
feat: add project-standards-reviewer as always-on ce:review persona #402

Merged
tmchow merged 3 commits into main from feat/skill-review-guide
Mar 27, 2026
Conversation


@tmchow tmchow commented Mar 27, 2026

Summary

  • Adds project-standards-reviewer as a new always-on persona in ce:review that audits diffs against the project's own CLAUDE.md and AGENTS.md standards (frontmatter, reference inclusion, naming, cross-platform portability, tool selection)
  • Orchestrator discovers standards file paths via glob and passes them to the reviewer, which reads only relevant sections -- inspired by Anthropic's code-review command pattern
  • Documents the "pass paths, not content" orchestration pattern as a learning and AGENTS.md best practice
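The "pass paths, not content" pattern described above can be sketched roughly as follows. This is a hypothetical illustration, not the PR's actual implementation: the function names (`discoverStandardsPaths`, `buildReviewerPrompt`) and file locations are assumed for the example; only the standards file names (CLAUDE.md, AGENTS.md) come from the PR.

```typescript
// Hypothetical sketch of "pass paths, not content": the orchestrator
// locates the standards files and hands the reviewer their *paths*,
// so each reviewer reads only the sections relevant to the diff.
import { existsSync } from "node:fs";
import { join } from "node:path";

// Standards files named in the PR; repo-root locations are assumed.
const CANDIDATES = ["CLAUDE.md", "AGENTS.md"];

function discoverStandardsPaths(repoRoot: string): string[] {
  return CANDIDATES.map((name) => join(repoRoot, name)).filter(existsSync);
}

// The prompt embeds paths only -- never file contents -- leaving the
// decision of what to read to the reviewer persona itself.
function buildReviewerPrompt(paths: string[], changedFiles: string[]): string {
  return [
    "Audit the diff against the project standards in these files:",
    ...paths.map((p) => `- ${p}`),
    "Read only the sections relevant to these changed files:",
    ...changedFiles.map((f) => `- ${f}`),
  ].join("\n");
}

console.log(buildReviewerPrompt(["CLAUDE.md"], ["skills/review/SKILL.md"]));
```

The design point is that the orchestrator's context stays small: it never loads the standards documents, so adding a large CLAUDE.md does not inflate every review invocation.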

Test plan

  • Verify bun test passes (496 tests, 0 failures)
  • Verify bun run release:validate shows metadata in sync (45 agents, 42 skills)
  • Run /ce:review on a branch with skill/agent changes and confirm project-standards appears in the reviewer team and produces findings
  • Run /ce:review on a branch with only TypeScript changes and confirm project-standards runs but produces no findings (rules don't apply to those file types)

tmchow added 3 commits March 26, 2026 19:14
Adds a new always-on reviewer that audits diffs against the project's own
CLAUDE.md and AGENTS.md standards -- frontmatter rules, reference inclusion,
naming conventions, cross-platform portability, and tool selection policies.

Inspired by Anthropic's code-review command pattern where CLAUDE.md compliance
is a first-class review lens. The orchestrator discovers standards file paths
via glob and passes them to the reviewer, which reads only the sections
relevant to the changed file types.

Also documents the "pass paths, not content" orchestration pattern as a
learning in docs/solutions/ and a best practice in AGENTS.md.
Empirical testing showed that "find all X, then filter" produces 2 tool calls
in Claude Code, versus 14 for "for each item, walk and check". The right
fix is writing the correct instruction in the skill itself, not adding
meta-rules to AGENTS.md about how to phrase instructions.
When in doubt about instruction phrasing efficiency, test with
claude -p and codex exec to compare tool call counts across platforms
before committing to a phrasing in high-frequency skills.
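The phrasing difference can be illustrated with a plain-shell analogy (this is a hypothetical demo, not the actual agent transcript; the `demo/` directory and file contents are invented for the example): "find all X, then filter" maps to one batched command, while "for each item, walk and check" maps to one invocation per file.

```shell
# Set up two example skill files, one missing a frontmatter field.
mkdir -p demo/skills/a demo/skills/b
printf 'description: a\n' > demo/skills/a/SKILL.md
printf 'name: b\n'        > demo/skills/b/SKILL.md

# Batched phrasing: a single grep lists every file missing the field.
grep -rL 'description:' demo/skills --include=SKILL.md

# Per-item phrasing: one grep per file -- the shape that, in the
# agent setting, produced 14 tool calls instead of 2.
for f in demo/skills/*/SKILL.md; do
  grep -L 'description:' "$f"
done
```

Both forms find the same file; the batched form just does it in one call, which is why the instruction's phrasing dominates the tool-call count.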
@tmchow tmchow merged commit b30288c into main Mar 27, 2026
2 checks passed
@github-actions github-actions bot mentioned this pull request Mar 27, 2026
