diff --git a/commands/capture-knowledge.md b/commands/capture-knowledge.md
index 8013a43..07e332b 100644
--- a/commands/capture-knowledge.md
+++ b/commands/capture-knowledge.md
@@ -5,8 +5,10 @@ description: Document a code entry point in knowledge docs.
 
 Guide me through creating a structured understanding of a code entry point and saving it to the knowledge docs.
 
 1. **Gather & Validate Entry Point** — If not already provided, ask for: entry point (file, folder, function, API), why it matters (feature, bug, investigation), and desired depth or focus areas. Confirm the entry point exists; if ambiguous or not found, clarify or suggest alternatives.
-2. **Collect Source Context** — Read the primary file/module and summarize purpose, exports, key patterns. For folders: list structure, highlight key modules. For functions/APIs: capture signature, parameters, return values, error handling. Extract essential snippets (avoid large dumps).
-3. **Analyze Dependencies** — Build a dependency view up to depth 3, tracking visited nodes to avoid loops. Categorize: imports, function calls, services, external packages. Note external systems or generated code to exclude.
-4. **Synthesize Explanation** — Draft overview (purpose, language, high-level behavior). Detail core logic, execution flow, key patterns. Highlight error handling, performance, security considerations. Identify potential improvements or risks.
-5. **Create Documentation** — Normalize name to kebab-case (`calculateTotalPrice` → `calculate-total-price`). Create `docs/ai/implementation/knowledge-{name}.md` with sections: Overview, Implementation Details, Dependencies, Visual Diagrams, Additional Insights, Metadata, Next Steps. Include mermaid diagrams when they clarify flows or relationships. Add metadata (analysis date, depth, files touched).
-6. **Review & Next Actions** — Summarize key insights and open questions. Suggest related areas for deeper dives. Confirm file path and remind to commit.
+2. **Use Memory for Context** — Search memory for prior knowledge about this module/domain: `npx ai-devkit@latest memory search --query "<module or domain>"`.
+3. **Collect Source Context** — Read the primary file/module and summarize purpose, exports, key patterns. For folders: list structure, highlight key modules. For functions/APIs: capture signature, parameters, return values, error handling. Extract essential snippets (avoid large dumps).
+4. **Analyze Dependencies** — Build a dependency view up to depth 3, tracking visited nodes to avoid loops. Categorize: imports, function calls, services, external packages. Note external systems or generated code to exclude.
+5. **Synthesize Explanation** — Draft overview (purpose, language, high-level behavior). Detail core logic, execution flow, key patterns. Highlight error handling, performance, security considerations. Identify potential improvements or risks.
+6. **Create Documentation** — Normalize name to kebab-case (`calculateTotalPrice` → `calculate-total-price`). Create `docs/ai/implementation/knowledge-{name}.md` with sections: Overview, Implementation Details, Dependencies, Visual Diagrams, Additional Insights, Metadata, Next Steps. Include mermaid diagrams when they clarify flows or relationships. Add metadata (analysis date, depth, files touched).
+7. **Store Reusable Knowledge** — If insights should persist across sessions, store them using `npx ai-devkit@latest memory store ...`.
+8. **Review & Next Actions** — Summarize key insights and open questions. Suggest related areas for deeper dives, confirm file path, and recommend `/remember` for key long-lived rules.
diff --git a/commands/check-implementation.md b/commands/check-implementation.md
index b78103d..144e9f4 100644
--- a/commands/check-implementation.md
+++ b/commands/check-implementation.md
@@ -2,9 +2,12 @@
 description: Compare implementation with design and requirements docs to ensure alignment.
 ---
 
-Compare the current implementation with the design in docs/ai/design/ and requirements in docs/ai/requirements/.
+Compare the current implementation with the design in `docs/ai/design/` and requirements in `docs/ai/requirements/`.
 
 1. If not already provided, ask for: feature/branch description, list of modified files, relevant design doc(s), and any known constraints or assumptions.
-2. For each design doc: summarize key architectural decisions and constraints, highlight components, interfaces, and data flows that must be respected.
-3. File-by-file comparison: confirm implementation matches design intent, note deviations or missing pieces, flag logic gaps, edge cases, or security issues, suggest simplifications or refactors, and identify missing tests or documentation updates.
-4. Summarize findings with recommended next steps.
+2. **Use Memory for Context** — Search memory for known constraints and prior decisions before assessing mismatches: `npx ai-devkit@latest memory search --query "<feature constraints and decisions>"`.
+3. For each design doc: summarize key architectural decisions and constraints, highlight components, interfaces, and data flows that must be respected.
+4. File-by-file comparison: confirm implementation matches design intent, note deviations or missing pieces, flag logic gaps, edge cases, or security issues, suggest simplifications or refactors, and identify missing tests or documentation updates.
+5. **Store Reusable Knowledge** — Save recurring alignment lessons/patterns with `npx ai-devkit@latest memory store ...`.
+6. Summarize findings with recommended next steps.
+7. **Next Command Guidance** — If major design issues are found, go back to `/review-design` or `/execute-plan`; if aligned, continue to `/writing-test`.
diff --git a/commands/code-review.md b/commands/code-review.md
index f35f366..13d7361 100644
--- a/commands/code-review.md
+++ b/commands/code-review.md
@@ -5,7 +5,10 @@ description: Pre-push code review against design docs.
 
 Perform a local code review **before** pushing changes.
 
 1. **Gather Context** — If not already provided, ask for: feature/branch description, list of modified files, relevant design doc(s) (e.g., `docs/ai/design/feature-{name}.md`), known constraints or risky areas, and which tests have been run. Also review the latest diff via `git status` and `git diff --stat`.
-2. **Understand Design Alignment** — For each design doc, summarize architectural intent and critical constraints.
-3. **File-by-File Review** — For every modified file: check alignment with design/requirements and flag deviations, spot logic issues/edge cases/redundant code, flag security concerns (input validation, secrets, auth, data handling), check error handling/performance/observability, and identify missing or outdated tests.
-4. **Cross-Cutting Concerns** — Verify naming consistency and project conventions. Confirm docs/comments updated where behavior changed. Identify missing tests (unit, integration, E2E). Check for needed configuration/migration updates.
-5. **Summarize Findings** — Categorize each finding as **blocking**, **important**, or **nice-to-have** with: file, issue, impact, recommendation, and design reference.
+2. **Use Memory for Context** — Search memory for project review standards and recurring pitfalls: `npx ai-devkit@latest memory search --query "code review checklist project conventions"`.
+3. **Understand Design Alignment** — For each design doc, summarize architectural intent and critical constraints.
+4. **File-by-File Review** — For every modified file: check alignment with design/requirements and flag deviations, spot logic issues/edge cases/redundant code, flag security concerns (input validation, secrets, auth, data handling), check error handling/performance/observability, and identify missing or outdated tests.
+5. **Cross-Cutting Concerns** — Verify naming consistency and project conventions. Confirm docs/comments updated where behavior changed. Identify missing tests (unit, integration, E2E). Check for needed configuration/migration updates.
+6. **Store Reusable Knowledge** — Save durable review findings/checklists with `npx ai-devkit@latest memory store ...`.
+7. **Summarize Findings** — Categorize each finding as **blocking**, **important**, or **nice-to-have** with: file, issue, impact, recommendation, and design reference.
+8. **Next Command Guidance** — If blocking issues remain, return to `/execute-plan` (code fixes) or `/writing-test` (test gaps); if clean, proceed with push/PR workflow.
diff --git a/commands/debug.md b/commands/debug.md
index 00a4df6..19aa991 100644
--- a/commands/debug.md
+++ b/commands/debug.md
@@ -5,7 +5,10 @@ description: Debug an issue with structured root-cause analysis before changing
 
 Help me debug an issue. Clarify expectations, identify gaps, and agree on a fix plan before changing code.
 
 1. **Gather Context** — If not already provided, ask for: issue description (what is happening vs what should happen), error messages/logs/screenshots, recent related changes or deployments, and scope of impact.
-2. **Clarify Reality vs Expectation** — Restate observed vs expected behavior. Confirm relevant requirements or docs that define the expectation. Define acceptance criteria for the fix.
-3. **Reproduce & Isolate** — Determine reproducibility (always, intermittent, environment-specific). Capture reproduction steps. List suspected components or modules.
-4. **Analyze Potential Causes** — Brainstorm root causes (data, config, code regressions, external dependencies). Gather supporting evidence (logs, metrics, traces). Highlight unknowns needing investigation.
-5. **Resolve** — Present resolution options (quick fix, refactor, rollback, etc.) with pros/cons and risks. Ask which option to pursue. Summarize chosen approach, pre-work, success criteria, and validation steps.
+2. **Use Memory for Context** — Search memory for similar incidents/fixes before deep investigation: `npx ai-devkit@latest memory search --query "<issue keywords>"`.
+3. **Clarify Reality vs Expectation** — Restate observed vs expected behavior. Confirm relevant requirements or docs that define the expectation. Define acceptance criteria for the fix.
+4. **Reproduce & Isolate** — Determine reproducibility (always, intermittent, environment-specific). Capture reproduction steps. List suspected components or modules.
+5. **Analyze Potential Causes** — Brainstorm root causes (data, config, code regressions, external dependencies). Gather supporting evidence (logs, metrics, traces). Highlight unknowns needing investigation.
+6. **Resolve** — Present resolution options (quick fix, refactor, rollback, etc.) with pros/cons and risks. Ask which option to pursue. Summarize chosen approach, pre-work, success criteria, and validation steps.
+7. **Store Reusable Knowledge** — Save root-cause and fix patterns via `npx ai-devkit@latest memory store ...`.
+8. **Next Command Guidance** — After selecting a fix path, continue with `/execute-plan`; when implemented, use `/check-implementation` and `/writing-test`.
diff --git a/commands/execute-plan.md b/commands/execute-plan.md
index 489218d..4eede56 100644
--- a/commands/execute-plan.md
+++ b/commands/execute-plan.md
@@ -5,7 +5,10 @@ description: Execute a feature plan task by task.
 
 Help me work through a feature plan one task at a time.
 
 1. **Gather Context** — If not already provided, ask for: feature name (kebab-case, e.g., `user-authentication`), brief feature/branch description, planning doc path (default `docs/ai/planning/feature-{name}.md`), and any supporting docs (design, requirements, implementation).
-2. **Load & Present Plan** — Read the planning doc and parse task lists (headings + checkboxes). Present an ordered task queue grouped by section, with status: `todo`, `in-progress`, `done`, `blocked`.
-3. **Interactive Task Execution** — For each task in order: display context and full bullet text, reference relevant design/requirements docs, offer to outline sub-steps before starting, prompt for status update (`done`, `in-progress`, `blocked`, `skipped`) with short notes after work, and if blocked record blocker and move to a "Blocked" list.
-4. **Update Planning Doc** — After each status change, generate a markdown snippet to paste back into the planning doc. After each section, ask if new tasks were discovered.
-5. **Session Summary** — Produce a summary: Completed, In Progress (with next steps), Blocked (with blockers), Skipped/Deferred, and New Tasks. Remind to update `docs/ai/planning/feature-{name}.md` and sync related docs if decisions changed.
+2. **Use Memory for Context** — Search for prior implementation notes/patterns before starting: `npx ai-devkit@latest memory search --query "<feature implementation notes>"`.
+3. **Load & Present Plan** — Read the planning doc and parse task lists (headings + checkboxes). Present an ordered task queue grouped by section, with status: `todo`, `in-progress`, `done`, `blocked`.
+4. **Interactive Task Execution** — For each task in order: display context and full bullet text, reference relevant design/requirements docs, offer to outline sub-steps before starting, prompt for status update (`done`, `in-progress`, `blocked`, `skipped`) with short notes after work, and if blocked record blocker and move to a "Blocked" list.
+5. **Update Planning Doc** — After each completed or status-changed task, run `/update-planning` to keep `docs/ai/planning/feature-{name}.md` accurate.
+6. **Store Reusable Knowledge** — Save reusable implementation guidance/decisions with `npx ai-devkit@latest memory store ...`.
+7. **Session Summary** — Produce a summary: Completed, In Progress (with next steps), Blocked (with blockers), Skipped/Deferred, and New Tasks.
+8. **Next Command Guidance** — Continue `/execute-plan` until plan completion; then run `/check-implementation`.
diff --git a/commands/new-requirement.md b/commands/new-requirement.md
index 72e2745..4ef4da4 100644
--- a/commands/new-requirement.md
+++ b/commands/new-requirement.md
@@ -5,14 +5,15 @@ description: Scaffold feature documentation from requirements through planning.
 
 Guide me through adding a new feature, from requirements documentation to implementation readiness.
 
 1. **Capture Requirement** — If not already provided, ask for: feature name (kebab-case, e.g., `user-authentication`), what problem it solves and who will use it, and key user stories.
-2. **Create Feature Documentation Structure** — Copy each template's content (preserving YAML frontmatter and section headings) into feature-specific files:
+2. **Use Memory for Context** — Before asking repetitive clarification questions, search memory for related decisions or conventions via `npx ai-devkit@latest memory search --query "<feature or domain>"` and reuse relevant context.
+3. **Create Feature Documentation Structure** — Copy each template's content (preserving YAML frontmatter and section headings) into feature-specific files:
    - `docs/ai/requirements/README.md` → `docs/ai/requirements/feature-{name}.md`
    - `docs/ai/design/README.md` → `docs/ai/design/feature-{name}.md`
    - `docs/ai/planning/README.md` → `docs/ai/planning/feature-{name}.md`
    - `docs/ai/implementation/README.md` → `docs/ai/implementation/feature-{name}.md`
    - `docs/ai/testing/README.md` → `docs/ai/testing/feature-{name}.md`
-3. **Requirements Phase** — Fill out `docs/ai/requirements/feature-{name}.md`: problem statement, goals/non-goals, user stories, success criteria, constraints, open questions.
-4. **Design Phase** — Fill out `docs/ai/design/feature-{name}.md`: architecture changes, data models, API/interfaces, components, design decisions, security and performance considerations.
-5. **Planning Phase** — Fill out `docs/ai/planning/feature-{name}.md`: task breakdown with subtasks, dependencies, effort estimates, implementation order, risks.
-6. **Documentation Review** — Run `/review-requirements` and `/review-design` to validate the drafted docs.
-7. **Next Steps** — This command focuses on documentation. When ready to implement, use `/execute-plan`. Generate a PR description covering: summary, requirements doc link, key changes, test status, and a readiness checklist.
+4. **Requirements Phase** — Fill out `docs/ai/requirements/feature-{name}.md`: problem statement, goals/non-goals, user stories, success criteria, constraints, open questions.
+5. **Design Phase** — Fill out `docs/ai/design/feature-{name}.md`: architecture changes, data models, API/interfaces, components, design decisions, security and performance considerations.
+6. **Planning Phase** — Fill out `docs/ai/planning/feature-{name}.md`: task breakdown with subtasks, dependencies, effort estimates, implementation order, risks.
+7. **Store Reusable Knowledge** — When important conventions or decisions are finalized, store them via `npx ai-devkit@latest memory store --title "<title>" --content "<knowledge>" --tags "<tags>"`.
+8. **Next Command Guidance** — Run `/review-requirements` first, then `/review-design`. If both pass, continue with `/execute-plan`.
diff --git a/commands/remember.md b/commands/remember.md
index 20d2772..b54bac1 100644
--- a/commands/remember.md
+++ b/commands/remember.md
@@ -2,9 +2,11 @@
 description: Store reusable guidance in the knowledge memory service.
 ---
 
-When I say "remember this" or want to save a reusable rule, help me store it in the knowledge memory service.
+Help me store reusable guidance in the knowledge memory service.
 
 1. **Capture Knowledge** — If not already provided, ask for: a short explicit title (5-12 words), detailed content (markdown, examples encouraged), optional tags (keywords like "api", "testing"), and optional scope (`global`, `project:<name>`, `repo:<name>`). If vague, ask follow-ups to make it specific and actionable.
-2. **Validate Quality** — Ensure it is specific and reusable (not generic advice). Avoid storing secrets or sensitive data.
-3. **Store** — Call `memory.storeKnowledge` with title, content, tags, scope. If MCP tools are unavailable, use `npx ai-devkit@latest memory store` instead.
-4. **Confirm** — Summarize what was saved and offer to store more knowledge if needed.
+2. **Search Before Store** — Check for existing similar entries first with `npx ai-devkit@latest memory search --query "<topic>"` to avoid duplicates.
+3. **Validate Quality** — Ensure it is specific and reusable (not generic advice). Avoid storing secrets or sensitive data.
+4. **Store** — Call `memory.storeKnowledge` with title, content, tags, scope. If MCP tools are unavailable, use `npx ai-devkit@latest memory store` instead.
+5. **Confirm** — Summarize what was saved and offer to retrieve related memory entries when helpful.
+6. **Next Command Guidance** — Continue with the current lifecycle phase command (`/execute-plan`, `/check-implementation`, `/writing-test`, etc.) as needed.
diff --git a/commands/review-design.md b/commands/review-design.md
index db60276..ea55cc5 100644
--- a/commands/review-design.md
+++ b/commands/review-design.md
@@ -2,14 +2,17 @@
 description: Review feature design for completeness.
 ---
 
-Review the design documentation in docs/ai/design/feature-{name}.md (and the project-level README if relevant). Summarize:
+Review the design documentation in `docs/ai/design/feature-{name}.md` (and the project-level README if relevant).
 
-- Architecture overview (ensure mermaid diagram is present and accurate)
-- Key components and their responsibilities
-- Technology choices and rationale
-- Data models and relationships
-- API/interface contracts (inputs, outputs, auth)
-- Major design decisions and trade-offs
-- Non-functional requirements that must be preserved
-
-Highlight any inconsistencies, missing sections, or diagrams that need updates.
+1. **Use Memory for Context** — Search memory for prior architecture constraints/patterns: `npx ai-devkit@latest memory search --query "<feature design architecture>"`.
+2. Summarize:
+   - Architecture overview (ensure mermaid diagram is present and accurate)
+   - Key components and their responsibilities
+   - Technology choices and rationale
+   - Data models and relationships
+   - API/interface contracts (inputs, outputs, auth)
+   - Major design decisions and trade-offs
+   - Non-functional requirements that must be preserved
+3. Highlight inconsistencies, missing sections, or diagrams that need updates.
+4. **Store Reusable Knowledge** — Persist approved design patterns/constraints with `npx ai-devkit@latest memory store ...` when they will help future work.
+5. **Next Command Guidance** — If requirements gaps are found, return to `/review-requirements`; if design is sound, continue to `/execute-plan`.
diff --git a/commands/review-requirements.md b/commands/review-requirements.md
index 963b9df..36e84e7 100644
--- a/commands/review-requirements.md
+++ b/commands/review-requirements.md
@@ -2,12 +2,15 @@
 description: Review feature requirements for completeness.
 ---
 
-Review `docs/ai/requirements/feature-{name}.md` and the project-level template `docs/ai/requirements/README.md` to ensure structure and content alignment. Summarize:
+Review `docs/ai/requirements/feature-{name}.md` and the project-level template `docs/ai/requirements/README.md` to ensure structure and content alignment.
 
-- Core problem statement and affected users
-- Goals, non-goals, and success criteria
-- Primary user stories & critical flows
-- Constraints, assumptions, open questions
-- Any missing sections or deviations from the template
-
-Identify gaps or contradictions and suggest clarifications.
+1. **Use Memory for Context** — Search memory for related requirements/domain decisions before starting: `npx ai-devkit@latest memory search --query "<feature requirements>"`.
+2. Summarize:
+   - Core problem statement and affected users
+   - Goals, non-goals, and success criteria
+   - Primary user stories & critical flows
+   - Constraints, assumptions, open questions
+   - Any missing sections or deviations from the template
+3. Identify gaps or contradictions and suggest clarifications.
+4. **Store Reusable Knowledge** — If new reusable requirement conventions are agreed, store them with `npx ai-devkit@latest memory store ...`.
+5. **Next Command Guidance** — If fundamentals are missing, go back to `/new-requirement`; otherwise continue to `/review-design`.
diff --git a/commands/simplify-implementation.md b/commands/simplify-implementation.md
index fcfec16..e0f3955 100644
--- a/commands/simplify-implementation.md
+++ b/commands/simplify-implementation.md
@@ -5,6 +5,9 @@ description: Simplify existing code to reduce complexity.
 
 Help me simplify an existing implementation while maintaining or improving its functionality.
 
 1. **Gather Context** — If not already provided, ask for: target file(s) or component(s) to simplify, current pain points (hard to understand, maintain, or extend?), performance or scalability concerns, constraints (backward compatibility, API stability, deadlines), and relevant design docs or requirements.
-2. **Analyze Current Complexity** — For each target: identify complexity sources (deep nesting, duplication, unclear abstractions, tight coupling, over-engineering, magic values), assess cognitive load for future maintainers, and identify scalability blockers (single points of failure, sync-where-async-needed, missing caching, inefficient algorithms).
-3. **Propose Simplifications** — Prioritize readability over brevity — apply the 30-second test: can a new team member understand each change quickly? For each issue, suggest concrete improvements (extract, consolidate, flatten, decouple, remove dead code, replace with built-ins). Provide before/after snippets.
-4. **Prioritize & Plan** — Rank by impact vs risk: (1) high impact, low risk — do first, (2) high impact, higher risk — plan carefully, (3) low impact, low risk — quick wins if time permits, (4) low impact, high risk — skip or defer. For each change specify risk level, testing requirements, and effort. Produce a prioritized action plan with recommended execution order.
+2. **Use Memory for Context** — Search memory for established patterns and prior refactors in this area: `npx ai-devkit@latest memory search --query "<component simplification pattern>"`.
+3. **Analyze Current Complexity** — For each target: identify complexity sources (deep nesting, duplication, unclear abstractions, tight coupling, over-engineering, magic values), assess cognitive load for future maintainers, and identify scalability blockers (single points of failure, sync-where-async-needed, missing caching, inefficient algorithms).
+4. **Propose Simplifications** — Prioritize readability over brevity; apply the 30-second test: can a new team member understand each change quickly? For each issue, suggest concrete improvements (extract, consolidate, flatten, decouple, remove dead code, replace with built-ins). Provide before/after snippets.
+5. **Prioritize & Plan** — Rank by impact vs risk: (1) high impact, low risk — do first, (2) high impact, higher risk — plan carefully, (3) low impact, low risk — quick wins if time permits, (4) low impact, high risk — skip or defer. For each change specify risk level, testing requirements, and effort. Produce a prioritized action plan with recommended execution order.
+6. **Store Reusable Knowledge** — Save reusable simplification patterns and trade-offs via `npx ai-devkit@latest memory store ...`.
+7. **Next Command Guidance** — After implementation, run `/check-implementation` and `/writing-test`.
diff --git a/commands/update-planning.md b/commands/update-planning.md
index b81d22d..2f67867 100644
--- a/commands/update-planning.md
+++ b/commands/update-planning.md
@@ -5,6 +5,9 @@ description: Update planning docs to reflect implementation progress.
 
 Help me reconcile current implementation progress with the planning documentation.
 
 1. **Gather Context** — If not already provided, ask for: feature/branch name and brief status, tasks completed since last update, new tasks discovered, current blockers or risks, and planning doc path (default `docs/ai/planning/feature-{name}.md`).
-2. **Review & Reconcile** — Summarize existing milestones, task breakdowns, and dependencies from the planning doc. For each planned task: mark status (done / in progress / blocked / not started), note scope changes, record blockers, identify skipped or added tasks.
-3. **Produce Updated Task List** — Generate an updated checklist grouped by: Done, In Progress, Blocked, Newly Discovered Work — with short notes per task.
-4. **Next Steps & Summary** — Suggest the next 2-3 actionable tasks and highlight risky areas. Prepare a summary paragraph for the planning doc covering: current state, major risks/blockers, upcoming focus, and any scope/timeline changes.
+2. **Use Memory for Context** — Search memory for prior decisions that affect priorities/scope: `npx ai-devkit@latest memory search --query "<feature planning updates>"`.
+3. **Review & Reconcile** — Summarize existing milestones, task breakdowns, and dependencies from the planning doc. For each planned task: mark status (done / in progress / blocked / not started), note scope changes, record blockers, identify skipped or added tasks.
+4. **Produce Updated Task List** — Generate an updated checklist grouped by: Done, In Progress, Blocked, Newly Discovered Work — with short notes per task.
+5. **Store Reusable Knowledge** — If new planning conventions or risk-handling rules emerge, store them with `npx ai-devkit@latest memory store ...`.
+6. **Next Steps & Summary** — Suggest the next 2-3 actionable tasks and prepare a summary paragraph for the planning doc.
+7. **Next Command Guidance** — Return to `/execute-plan` for remaining work. When all implementation tasks are complete, run `/check-implementation`.
diff --git a/commands/writing-test.md b/commands/writing-test.md
index 9b62c30..d6ba6d2 100644
--- a/commands/writing-test.md
+++ b/commands/writing-test.md
@@ -5,8 +5,11 @@ description: Add tests for a new feature.
 
 Review `docs/ai/testing/feature-{name}.md` and ensure it mirrors the base template before writing tests.
 
 1. **Gather Context** — If not already provided, ask for: feature name/branch, summary of changes (link to design & requirements docs), target environment, existing test suites, and any flaky/slow tests to avoid.
-2. **Analyze Testing Template** — Identify required sections from `docs/ai/testing/feature-{name}.md`. Confirm success criteria and edge cases from requirements & design docs. Note available mocks/stubs/fixtures.
-3. **Unit Tests (aim for 100% coverage)** — For each module/function: list behavior scenarios (happy path, edge cases, error handling), generate test cases with assertions using existing utilities/mocks, and highlight missing branches preventing full coverage.
-4. **Integration Tests** — Identify critical cross-component flows. Define setup/teardown steps and test cases for interaction boundaries, data contracts, and failure modes.
-5. **Coverage Strategy** — Recommend coverage tooling commands. Call out files/functions still needing coverage and suggest additional tests if <100%.
-6. **Update Documentation** — Summarize tests added or still missing. Update `docs/ai/testing/feature-{name}.md` with links to test files and results. Flag deferred tests as follow-up tasks.
+2. **Use Memory for Context** — Search memory for existing testing patterns and prior edge cases: `npx ai-devkit@latest memory search --query "<feature testing strategy>"`.
+3. **Analyze Testing Template** — Identify required sections from `docs/ai/testing/feature-{name}.md`. Confirm success criteria and edge cases from requirements & design docs. Note available mocks/stubs/fixtures.
+4. **Unit Tests (aim for 100% coverage)** — For each module/function: list behavior scenarios (happy path, edge cases, error handling), generate test cases with assertions using existing utilities/mocks, and highlight missing branches preventing full coverage.
+5. **Integration Tests** — Identify critical cross-component flows. Define setup/teardown steps and test cases for interaction boundaries, data contracts, and failure modes.
+6. **Coverage Strategy** — Recommend coverage tooling commands. Call out files/functions still needing coverage and suggest additional tests if <100%.
+7. **Store Reusable Knowledge** — Save reusable testing patterns or tricky fixtures with `npx ai-devkit@latest memory store ...`.
+8. **Update Documentation** — Summarize tests added or still missing. Update `docs/ai/testing/feature-{name}.md` with links to test files and results. Flag deferred tests as follow-up tasks.
+9. **Next Command Guidance** — If tests expose design issues, return to `/review-design`; otherwise continue to `/code-review`.
diff --git a/docs/ai/design/feature-project-skill-registry-priority.md b/docs/ai/design/feature-project-skill-registry-priority.md
new file mode 100644
index 0000000..6a7dd69
--- /dev/null
+++ b/docs/ai/design/feature-project-skill-registry-priority.md
@@ -0,0 +1,50 @@
+---
+phase: design
+title: System Design & Architecture
+description: Merge registry sources with project-first precedence for skill installation
+---
+
+# System Design & Architecture
+
+## Architecture Overview
+```mermaid
+graph TD
+    SkillAdd[ai-devkit skill add] --> SkillManager
+    SkillManager --> DefaultRegistry[Remote default registry.json]
+    SkillManager --> GlobalConfig[~/.ai-devkit/.ai-devkit.json]
+    SkillManager --> ProjectConfig[./.ai-devkit.json]
+    DefaultRegistry --> Merge[Registry merge]
+    GlobalConfig --> Merge
+    ProjectConfig --> Merge
+    Merge --> Resolved[Resolved registry map]
+```
+
+- `SkillManager.fetchMergedRegistry` remains the single merge point.
+- `ConfigManager` adds project registry extraction.
+- Merge order is implemented as object spread with source ordering.
+
+## Data Models
+- Registry map shape: `Record<string, string>`.
+- Project registry extraction supports:
+  - `registries` at root.
+  - `skills.registries` when `skills` is an object.
+
+## API Design
+- `ConfigManager.getSkillRegistries(): Promise<Record<string, string>>`.
+- `SkillManager.fetchMergedRegistry()` now merges three sources.
+
+## Component Breakdown
+- `packages/cli/src/lib/Config.ts`: parse project registry mappings.
+- `packages/cli/src/lib/SkillManager.ts`: apply precedence order.
+- Tests:
+  - `packages/cli/src/__tests__/lib/Config.test.ts`
+  - `packages/cli/src/__tests__/lib/SkillManager.test.ts`
+
+## Design Decisions
+- Keep merge logic centralized in `SkillManager` to avoid drift.
+- Keep parser tolerant to allow gradual config evolution.
+- Favor project determinism by applying project map last.
+
+## Non-Functional Requirements
+- No additional network calls.
+- No change to failure mode when default registry fetch fails (still supports fallback sources). diff --git a/docs/ai/implementation/feature-project-skill-registry-priority.md b/docs/ai/implementation/feature-project-skill-registry-priority.md new file mode 100644 index 0000000..d5a7096 --- /dev/null +++ b/docs/ai/implementation/feature-project-skill-registry-priority.md @@ -0,0 +1,59 @@ +--- +phase: implementation +title: Implementation Guide +description: Implementation notes for project-level registry override precedence +--- + +# Implementation Guide + +## Development Setup +- Work in feature branch/worktree: `feature-project-skill-registry-priority`. +- Install deps via `npm ci`. + +## Code Structure +- `SkillManager` owns merged registry resolution. +- `ConfigManager` owns project config parsing helpers. + +## Implementation Notes +### Core Features +- Added `ConfigManager.getSkillRegistries()` to read project registry map from: + - `registries` (root), or + - `skills.registries` (legacy-compatible fallback when `skills` is object). +- Updated `SkillManager.fetchMergedRegistry()` to merge in this order: + - default registry, + - global registries, + - project registries. + +### Patterns & Best Practices +- Ignore malformed/non-string registry values. +- Keep merge deterministic and centralized. + +## Error Handling +- If project config has no valid registry map, return `{}` and continue. +- Existing default-registry fetch warning behavior remains unchanged. + +## Performance Considerations +- No new network requests. +- Constant-time map merge relative to source map sizes. + +## Check Implementation (Phase 6) +- Date: 2026-02-27 +- Verification checklist: +- [x] Requirement: project config contributes registry mappings. + - Implemented in `ConfigManager.getSkillRegistries()` (`packages/cli/src/lib/Config.ts`). +- [x] Requirement: precedence is `project > global > default`. 
+ - Implemented in `SkillManager.fetchMergedRegistry()` merge order (`packages/cli/src/lib/SkillManager.ts`). +- [x] Requirement: backward compatibility for existing flows. + - Existing global override behavior remains active. + - Default registry fetch failure still falls back to other sources. + +## Code Review (Phase 8) +- Date: 2026-02-27 +- Findings: No blocking defects found in changed production code. +- Reviewed scope: + - `packages/cli/src/lib/Config.ts` + - `packages/cli/src/lib/SkillManager.ts` + - Updated unit tests for precedence and parsing behavior. +- Residual risks: + - Full CLI suite currently has one unrelated failing test (`commands/memory.test.ts` module resolution). + - End-to-end fixture coverage for project-level registry override remains optional follow-up. diff --git a/docs/ai/planning/feature-project-skill-registry-priority.md b/docs/ai/planning/feature-project-skill-registry-priority.md new file mode 100644 index 0000000..1720ae2 --- /dev/null +++ b/docs/ai/planning/feature-project-skill-registry-priority.md @@ -0,0 +1,42 @@ +--- +phase: planning +title: Project Planning & Task Breakdown +description: Implement and validate project/global/default registry precedence +--- + +# Project Planning & Task Breakdown + +## Milestones +- [x] Milestone 1: Define requirements and precedence contract. +- [x] Milestone 2: Implement registry source parsing and merge order. +- [x] Milestone 3: Add automated tests and validate feature docs. + +## Task Breakdown +### Phase 1: Requirements & Design +- [x] Task 1.1: Confirm desired precedence (`project > global > default`). +- [x] Task 1.2: Define where project registry mappings are read from. + +### Phase 2: Implementation +- [x] Task 2.1: Add `ConfigManager.getSkillRegistries()`. +- [x] Task 2.2: Update `SkillManager.fetchMergedRegistry()` merge order. + +### Phase 3: Validation +- [x] Task 3.1: Add/adjust tests for project registry parsing. +- [x] Task 3.2: Add/adjust tests for precedence conflicts. 
+- [x] Task 3.3: Run focused CLI tests and feature lint. + +## Dependencies +- Existing `ConfigManager` and `GlobalConfigManager` APIs. +- Existing `SkillManager` registry merge flow. + +## Timeline & Estimates +- Implementation and tests: same working session. +- Validation: focused unit suite execution. + +## Risks & Mitigation +- Risk: project config schema ambiguity. +- Mitigation: support both root and nested registry map formats and ignore invalid entries. + +## Execution Log +- 2026-02-27: Ran focused tests for `ConfigManager` and `SkillManager` (73 passing). +- 2026-02-27: Ran `npx ai-devkit@latest lint --feature project-skill-registry-priority` (pass). diff --git a/docs/ai/requirements/feature-project-skill-registry-priority.md b/docs/ai/requirements/feature-project-skill-registry-priority.md new file mode 100644 index 0000000..2315978 --- /dev/null +++ b/docs/ai/requirements/feature-project-skill-registry-priority.md @@ -0,0 +1,40 @@ +--- +phase: requirements +title: Requirements & Problem Understanding +description: Add project-level registry source with deterministic precedence for skill installation +--- + +# Requirements & Problem Understanding + +## Problem Statement +- Skill installation currently resolves registries from default remote registry plus global config (`~/.ai-devkit/.ai-devkit.json`). +- Teams need project-specific overrides in repository config so installs are reproducible across contributors and CI. +- Without project-level override, users must edit global state and cannot keep registry decisions version-controlled. + +## Goals & Objectives +- Add project `.ai-devkit.json` as an additional registry source for skill installation. +- Enforce deterministic conflict precedence: `project > global > default`. +- Preserve backward compatibility for existing projects and global-only users. + +## Non-Goals +- Redesigning the entire `.ai-devkit.json` schema. +- Changing non-install commands that do not rely on registry resolution. 
+- Introducing remote registry auth or secret management. + +## User Stories & Use Cases +- As a project maintainer, I can define custom registry mappings in project config so all contributors use the same registry source. +- As a developer with personal global overrides, project overrides still win inside that repository. +- As an existing user with only global config, behavior remains unchanged. + +## Success Criteria +- `ai-devkit skill add` reads registry maps from project config, global config, and default registry. +- On key collision, selected URL follows `project > global > default`. +- Unit tests cover precedence and parsing behavior. + +## Constraints & Assumptions +- Current runtime already reads `.ai-devkit.json` via `ConfigManager`. +- Project configs may contain either root `registries` or nested `skills.registries`; both should be accepted for resilience. +- Invalid non-string entries are ignored. + +## Questions & Open Items +- None blocking for implementation. diff --git a/docs/ai/testing/feature-project-skill-registry-priority.md b/docs/ai/testing/feature-project-skill-registry-priority.md new file mode 100644 index 0000000..105cf8e --- /dev/null +++ b/docs/ai/testing/feature-project-skill-registry-priority.md @@ -0,0 +1,45 @@ +--- +phase: testing +title: Testing Strategy +description: Test precedence and parsing for project/global/default skill registries +--- + +# Testing Strategy + +## Phase 7 Status +- Date: 2026-02-27 +- Status: Completed for changed scope +- Notes: Feature-specific tests pass; one unrelated pre-existing workspace test failure remains in full CLI sweep. + +## Test Coverage Goals +- Unit coverage for new/changed behavior in `ConfigManager` and `SkillManager`. +- Validate precedence conflict resolution and parser resilience. + +## Unit Tests +### ConfigManager +- [x] Reads registry map from root `registries`. +- [x] Falls back to nested `skills.registries` when root map is absent. 
+- [x] Returns empty map when no valid registry map exists. + +### SkillManager +- [x] Uses custom global registry over default (existing behavior retained). +- [x] Uses project registry over global and default on ID collision. + +## Integration Tests +- [ ] Optional follow-up: CLI-level `skill add` using fixture `.ai-devkit.json` with project overrides. + +## Test Reporting & Coverage +- Focused command executed: + - `npm run test --workspace=packages/cli -- --runInBand src/__tests__/lib/Config.test.ts src/__tests__/lib/SkillManager.test.ts` + - Result: 2 suites passed, 73 tests passed, 0 failed. +- Broader regression sweep: + - `npm run test --workspace=packages/cli -- --runInBand` + - Result: 25 suites passed, 1 failed. + - Failure: `src/__tests__/commands/memory.test.ts` (`Cannot find module '@ai-devkit/memory'`), outside this feature's changed files. +- Feature documentation lint: + - `npx ai-devkit@latest lint --feature project-skill-registry-priority` + - Result: pass. + +## Coverage Gaps +- No known unit-test gaps for changed paths. +- Optional integration follow-up remains open for full CLI fixture validation. 
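The precedence contract these docs and tests exercise (`project > global > default`) reduces to a spread-based merge in which later sources overwrite earlier keys. A minimal TypeScript sketch with hypothetical registry maps — the real merge lives in `SkillManager.fetchMergedRegistry`, shown in the diff below:

```typescript
type RegistryMap = Record<string, string>;

// Later spreads win, so project entries override global and default ones.
function mergeRegistries(
  defaults: RegistryMap,
  globals: RegistryMap,
  project: RegistryMap
): RegistryMap {
  return { ...defaults, ...globals, ...project };
}

const merged = mergeRegistries(
  { "acme/skills": "https://github.com/default/skills.git" },
  { "acme/skills": "https://github.com/global/skills.git" },
  { "acme/skills": "https://github.com/project/skills.git" }
);

console.log(merged["acme/skills"]); // https://github.com/project/skills.git
```

Because the merge is a plain object spread, precedence is fully determined by argument order, which is why the design doc insists on keeping the merge centralized in one place.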
diff --git a/packages/cli/src/__tests__/lib/Config.test.ts b/packages/cli/src/__tests__/lib/Config.test.ts index 0ae7992..8f0f951 100644 --- a/packages/cli/src/__tests__/lib/Config.test.ts +++ b/packages/cli/src/__tests__/lib/Config.test.ts @@ -404,4 +404,66 @@ describe('ConfigManager', () => { expect(mockFs.writeJson).not.toHaveBeenCalled(); }); }); + + describe('getSkillRegistries', () => { + it('returns registries from root-level "registries"', async () => { + (mockFs.pathExists as any).mockResolvedValue(true); + (mockFs.readJson as any).mockResolvedValue({ + version: '1.0.0', + environments: ['cursor'], + phases: [], + registries: { + 'project/skills': 'https://github.com/project/skills.git', + 'invalid/entry': 123 + }, + createdAt: '2024-01-01T00:00:00.000Z', + updatedAt: '2024-01-01T00:00:00.000Z' + }); + + const registries = await configManager.getSkillRegistries(); + + expect(registries).toEqual({ + 'project/skills': 'https://github.com/project/skills.git' + }); + }); + + it('falls back to nested "skills.registries" when root registries are missing', async () => { + (mockFs.pathExists as any).mockResolvedValue(true); + (mockFs.readJson as any).mockResolvedValue({ + version: '1.0.0', + environments: ['cursor'], + phases: [], + skills: { + registries: { + 'nested/skills': 'https://github.com/nested/skills.git', + 'invalid/value': false + } + }, + createdAt: '2024-01-01T00:00:00.000Z', + updatedAt: '2024-01-01T00:00:00.000Z' + }); + + const registries = await configManager.getSkillRegistries(); + + expect(registries).toEqual({ + 'nested/skills': 'https://github.com/nested/skills.git' + }); + }); + + it('returns empty object when no registry map exists', async () => { + (mockFs.pathExists as any).mockResolvedValue(true); + (mockFs.readJson as any).mockResolvedValue({ + version: '1.0.0', + environments: ['cursor'], + phases: [], + skills: [{ registry: 'codeaholicguy/ai-devkit', name: 'debug' }], + createdAt: '2024-01-01T00:00:00.000Z', + updatedAt: 
'2024-01-01T00:00:00.000Z' + }); + + const registries = await configManager.getSkillRegistries(); + + expect(registries).toEqual({}); + }); + }); }); diff --git a/packages/cli/src/__tests__/lib/SkillManager.test.ts b/packages/cli/src/__tests__/lib/SkillManager.test.ts index 9c24792..c214912 100644 --- a/packages/cli/src/__tests__/lib/SkillManager.test.ts +++ b/packages/cli/src/__tests__/lib/SkillManager.test.ts @@ -75,6 +75,7 @@ describe("SkillManager", () => { new MockedGlobalConfigManager() as jest.Mocked<GlobalConfigManager>; mockGlobalConfigManager.getSkillRegistries.mockResolvedValue({}); + mockConfigManager.getSkillRegistries.mockResolvedValue({}); skillManager = new SkillManager( mockConfigManager, @@ -216,6 +217,56 @@ describe("SkillManager", () => { ); }); + it("should prefer project registry URL over global and default", async () => { + const defaultGitUrl = "https://github.com/default/skills.git"; + const globalGitUrl = "https://github.com/global/skills.git"; + const projectGitUrl = "https://github.com/project/skills.git"; + + mockFetch({ + registries: { + [mockRegistryId]: defaultGitUrl, + }, + }); + + mockGlobalConfigManager.getSkillRegistries.mockResolvedValue({ + [mockRegistryId]: globalGitUrl, + }); + mockConfigManager.getSkillRegistries.mockResolvedValue({ + [mockRegistryId]: projectGitUrl, + }); + + const repoPath = path.join( + os.homedir(), + ".ai-devkit", + "skills", + mockRegistryId, + ); + + (mockedFs.pathExists as any).mockImplementation((checkPath: string) => { + if (checkPath === repoPath) { + return Promise.resolve(false); + } + + if (checkPath.includes(`${path.sep}skills${path.sep}${mockSkillName}`)) { + return Promise.resolve(true); + } + + if (checkPath.endsWith(`${path.sep}SKILL.md`)) { + return Promise.resolve(true); + } + + return Promise.resolve(true); + }); + + await skillManager.addSkill(mockRegistryId, mockSkillName); + + expect(mockedGitUtil.cloneRepository).toHaveBeenCalledWith( + path.join(os.homedir(), ".ai-devkit", 
"skills"), + mockRegistryId, + projectGitUrl, + ); + }); + it("should read custom registries from global config", async () => { const customGitUrl = "https://github.com/custom/skills.git"; const { GlobalConfigManager: RealGlobalConfigManager } = jest.requireActual( diff --git a/packages/cli/src/lib/Config.ts b/packages/cli/src/lib/Config.ts index ad50b9e..8cbc2a4 100644 --- a/packages/cli/src/lib/Config.ts +++ b/packages/cli/src/lib/Config.ts @@ -112,4 +112,23 @@ export class ConfigManager { skills.push(skill); return this.update({ skills }); } + + async getSkillRegistries(): Promise<Record<string, string>> { + const config = await this.read() as any; + const rootRegistries = config?.registries; + const nestedRegistries = + config?.skills && !Array.isArray(config.skills) + ? config.skills.registries + : undefined; + + const registries = rootRegistries ?? nestedRegistries; + + if (!registries || typeof registries !== 'object' || Array.isArray(registries)) { + return {}; + } + + return Object.fromEntries( + Object.entries(registries).filter(([, value]) => typeof value === 'string') + ) as Record<string, string>; + } } diff --git a/packages/cli/src/lib/SkillManager.ts b/packages/cli/src/lib/SkillManager.ts index 1d36a57..e58a215 100644 --- a/packages/cli/src/lib/SkillManager.ts +++ b/packages/cli/src/lib/SkillManager.ts @@ -379,12 +379,14 @@ export class SkillManager { defaultRegistries = {}; } - const customRegistries = await this.globalConfigManager.getSkillRegistries(); + const globalRegistries = await this.globalConfigManager.getSkillRegistries(); + const projectRegistries = await this.configManager.getSkillRegistries(); return { registries: { ...defaultRegistries, - ...customRegistries + ...globalRegistries, + ...projectRegistries } }; } diff --git a/packages/cli/templates/commands/capture-knowledge.md b/packages/cli/templates/commands/capture-knowledge.md index 8013a43..07e332b 100644 --- a/packages/cli/templates/commands/capture-knowledge.md +++ 
b/packages/cli/templates/commands/capture-knowledge.md @@ -5,8 +5,10 @@ description: Document a code entry point in knowledge docs. Guide me through creating a structured understanding of a code entry point and saving it to the knowledge docs. 1. **Gather & Validate Entry Point** — If not already provided, ask for: entry point (file, folder, function, API), why it matters (feature, bug, investigation), and desired depth or focus areas. Confirm the entry point exists; if ambiguous or not found, clarify or suggest alternatives. -2. **Collect Source Context** — Read the primary file/module and summarize purpose, exports, key patterns. For folders: list structure, highlight key modules. For functions/APIs: capture signature, parameters, return values, error handling. Extract essential snippets (avoid large dumps). -3. **Analyze Dependencies** — Build a dependency view up to depth 3, tracking visited nodes to avoid loops. Categorize: imports, function calls, services, external packages. Note external systems or generated code to exclude. -4. **Synthesize Explanation** — Draft overview (purpose, language, high-level behavior). Detail core logic, execution flow, key patterns. Highlight error handling, performance, security considerations. Identify potential improvements or risks. -5. **Create Documentation** — Normalize name to kebab-case (`calculateTotalPrice` → `calculate-total-price`). Create `docs/ai/implementation/knowledge-{name}.md` with sections: Overview, Implementation Details, Dependencies, Visual Diagrams, Additional Insights, Metadata, Next Steps. Include mermaid diagrams when they clarify flows or relationships. Add metadata (analysis date, depth, files touched). -6. **Review & Next Actions** — Summarize key insights and open questions. Suggest related areas for deeper dives. Confirm file path and remind to commit. +2. 
**Use Memory for Context** — Search memory for prior knowledge about this module/domain: `npx ai-devkit@latest memory search --query "<entry point or subsystem>"`. +3. **Collect Source Context** — Read the primary file/module and summarize purpose, exports, key patterns. For folders: list structure, highlight key modules. For functions/APIs: capture signature, parameters, return values, error handling. Extract essential snippets (avoid large dumps). +4. **Analyze Dependencies** — Build a dependency view up to depth 3, tracking visited nodes to avoid loops. Categorize: imports, function calls, services, external packages. Note external systems or generated code to exclude. +5. **Synthesize Explanation** — Draft overview (purpose, language, high-level behavior). Detail core logic, execution flow, key patterns. Highlight error handling, performance, security considerations. Identify potential improvements or risks. +6. **Create Documentation** — Normalize name to kebab-case (`calculateTotalPrice` → `calculate-total-price`). Create `docs/ai/implementation/knowledge-{name}.md` with sections: Overview, Implementation Details, Dependencies, Visual Diagrams, Additional Insights, Metadata, Next Steps. Include mermaid diagrams when they clarify flows or relationships. Add metadata (analysis date, depth, files touched). +7. **Store Reusable Knowledge** — If insights should persist across sessions, store them using `npx ai-devkit@latest memory store ...`. +8. **Review & Next Actions** — Summarize key insights and open questions. Suggest related areas for deeper dives, confirm file path, and suggest `/remember` for key long-lived rules. 
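The kebab-case normalization mentioned in the documentation step (`calculateTotalPrice` → `calculate-total-price`) can be sketched as a small helper. `toKebabCase` is an illustrative name, not a function the toolkit ships:

```typescript
// Convert camelCase/PascalCase (plus spaces or underscores) to kebab-case,
// e.g. "calculateTotalPrice" -> "calculate-total-price".
function toKebabCase(name: string): string {
  return name
    .replace(/([a-z0-9])([A-Z])/g, "$1-$2") // split at lower->upper boundaries
    .replace(/[\s_]+/g, "-")                // normalize other separators
    .toLowerCase();
}

console.log(toKebabCase("calculateTotalPrice")); // calculate-total-price
```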
diff --git a/packages/cli/templates/commands/check-implementation.md b/packages/cli/templates/commands/check-implementation.md index b78103d..144e9f4 100644 --- a/packages/cli/templates/commands/check-implementation.md +++ b/packages/cli/templates/commands/check-implementation.md @@ -2,9 +2,12 @@ description: Compare implementation with design and requirements docs to ensure alignment. --- -Compare the current implementation with the design in docs/ai/design/ and requirements in docs/ai/requirements/. +Compare the current implementation with the design in `docs/ai/design/` and requirements in `docs/ai/requirements/`. 1. If not already provided, ask for: feature/branch description, list of modified files, relevant design doc(s), and any known constraints or assumptions. -2. For each design doc: summarize key architectural decisions and constraints, highlight components, interfaces, and data flows that must be respected. -3. File-by-file comparison: confirm implementation matches design intent, note deviations or missing pieces, flag logic gaps, edge cases, or security issues, suggest simplifications or refactors, and identify missing tests or documentation updates. -4. Summarize findings with recommended next steps. +2. **Use Memory for Context** — Search memory for known constraints and prior decisions before assessing mismatches: `npx ai-devkit@latest memory search --query "<feature implementation alignment>"`. +3. For each design doc: summarize key architectural decisions and constraints, highlight components, interfaces, and data flows that must be respected. +4. File-by-file comparison: confirm implementation matches design intent, note deviations or missing pieces, flag logic gaps, edge cases, or security issues, suggest simplifications or refactors, and identify missing tests or documentation updates. +5. **Store Reusable Knowledge** — Save recurring alignment lessons/patterns with `npx ai-devkit@latest memory store ...`. +6. 
Summarize findings with recommended next steps. +7. **Next Command Guidance** — If major design issues are found, go back to `/review-design` or `/execute-plan`; if aligned, continue to `/writing-test`. diff --git a/packages/cli/templates/commands/code-review.md b/packages/cli/templates/commands/code-review.md index f35f366..13d7361 100644 --- a/packages/cli/templates/commands/code-review.md +++ b/packages/cli/templates/commands/code-review.md @@ -5,7 +5,10 @@ description: Pre-push code review against design docs. Perform a local code review **before** pushing changes. 1. **Gather Context** — If not already provided, ask for: feature/branch description, list of modified files, relevant design doc(s) (e.g., `docs/ai/design/feature-{name}.md`), known constraints or risky areas, and which tests have been run. Also review the latest diff via `git status` and `git diff --stat`. -2. **Understand Design Alignment** — For each design doc, summarize architectural intent and critical constraints. -3. **File-by-File Review** — For every modified file: check alignment with design/requirements and flag deviations, spot logic issues/edge cases/redundant code, flag security concerns (input validation, secrets, auth, data handling), check error handling/performance/observability, and identify missing or outdated tests. -4. **Cross-Cutting Concerns** — Verify naming consistency and project conventions. Confirm docs/comments updated where behavior changed. Identify missing tests (unit, integration, E2E). Check for needed configuration/migration updates. -5. **Summarize Findings** — Categorize each finding as **blocking**, **important**, or **nice-to-have** with: file, issue, impact, recommendation, and design reference. +2. **Use Memory for Context** — Search memory for project review standards and recurring pitfalls: `npx ai-devkit@latest memory search --query "code review checklist project conventions"`. +3. 
**Understand Design Alignment** — For each design doc, summarize architectural intent and critical constraints. +4. **File-by-File Review** — For every modified file: check alignment with design/requirements and flag deviations, spot logic issues/edge cases/redundant code, flag security concerns (input validation, secrets, auth, data handling), check error handling/performance/observability, and identify missing or outdated tests. +5. **Cross-Cutting Concerns** — Verify naming consistency and project conventions. Confirm docs/comments updated where behavior changed. Identify missing tests (unit, integration, E2E). Check for needed configuration/migration updates. +6. **Store Reusable Knowledge** — Save durable review findings/checklists with `npx ai-devkit@latest memory store ...`. +7. **Summarize Findings** — Categorize each finding as **blocking**, **important**, or **nice-to-have** with: file, issue, impact, recommendation, and design reference. +8. **Next Command Guidance** — If blocking issues remain, return to `/execute-plan` (code fixes) or `/writing-test` (test gaps); if clean, proceed with push/PR workflow. diff --git a/packages/cli/templates/commands/debug.md b/packages/cli/templates/commands/debug.md index 00a4df6..19aa991 100644 --- a/packages/cli/templates/commands/debug.md +++ b/packages/cli/templates/commands/debug.md @@ -5,7 +5,10 @@ description: Debug an issue with structured root-cause analysis before changing Help me debug an issue. Clarify expectations, identify gaps, and agree on a fix plan before changing code. 1. **Gather Context** — If not already provided, ask for: issue description (what is happening vs what should happen), error messages/logs/screenshots, recent related changes or deployments, and scope of impact. -2. **Clarify Reality vs Expectation** — Restate observed vs expected behavior. Confirm relevant requirements or docs that define the expectation. Define acceptance criteria for the fix. -3. 
**Reproduce & Isolate** — Determine reproducibility (always, intermittent, environment-specific). Capture reproduction steps. List suspected components or modules. -4. **Analyze Potential Causes** — Brainstorm root causes (data, config, code regressions, external dependencies). Gather supporting evidence (logs, metrics, traces). Highlight unknowns needing investigation. -5. **Resolve** — Present resolution options (quick fix, refactor, rollback, etc.) with pros/cons and risks. Ask which option to pursue. Summarize chosen approach, pre-work, success criteria, and validation steps. +2. **Use Memory for Context** — Search memory for similar incidents/fixes before deep investigation: `npx ai-devkit@latest memory search --query "<issue symptoms or error>"`. +3. **Clarify Reality vs Expectation** — Restate observed vs expected behavior. Confirm relevant requirements or docs that define the expectation. Define acceptance criteria for the fix. +4. **Reproduce & Isolate** — Determine reproducibility (always, intermittent, environment-specific). Capture reproduction steps. List suspected components or modules. +5. **Analyze Potential Causes** — Brainstorm root causes (data, config, code regressions, external dependencies). Gather supporting evidence (logs, metrics, traces). Highlight unknowns needing investigation. +6. **Resolve** — Present resolution options (quick fix, refactor, rollback, etc.) with pros/cons and risks. Ask which option to pursue. Summarize chosen approach, pre-work, success criteria, and validation steps. +7. **Store Reusable Knowledge** — Save root-cause and fix patterns via `npx ai-devkit@latest memory store ...`. +8. **Next Command Guidance** — After selecting a fix path, continue with `/execute-plan`; when implemented, use `/check-implementation` and `/writing-test`. 
diff --git a/packages/cli/templates/commands/execute-plan.md b/packages/cli/templates/commands/execute-plan.md index 489218d..4eede56 100644 --- a/packages/cli/templates/commands/execute-plan.md +++ b/packages/cli/templates/commands/execute-plan.md @@ -5,7 +5,10 @@ description: Execute a feature plan task by task. Help me work through a feature plan one task at a time. 1. **Gather Context** — If not already provided, ask for: feature name (kebab-case, e.g., `user-authentication`), brief feature/branch description, planning doc path (default `docs/ai/planning/feature-{name}.md`), and any supporting docs (design, requirements, implementation). -2. **Load & Present Plan** — Read the planning doc and parse task lists (headings + checkboxes). Present an ordered task queue grouped by section, with status: `todo`, `in-progress`, `done`, `blocked`. -3. **Interactive Task Execution** — For each task in order: display context and full bullet text, reference relevant design/requirements docs, offer to outline sub-steps before starting, prompt for status update (`done`, `in-progress`, `blocked`, `skipped`) with short notes after work, and if blocked record blocker and move to a "Blocked" list. -4. **Update Planning Doc** — After each status change, generate a markdown snippet to paste back into the planning doc. After each section, ask if new tasks were discovered. -5. **Session Summary** — Produce a summary: Completed, In Progress (with next steps), Blocked (with blockers), Skipped/Deferred, and New Tasks. Remind to update `docs/ai/planning/feature-{name}.md` and sync related docs if decisions changed. +2. **Use Memory for Context** — Search for prior implementation notes/patterns before starting: `npx ai-devkit@latest memory search --query "<feature implementation plan>"`. +3. **Load & Present Plan** — Read the planning doc and parse task lists (headings + checkboxes). Present an ordered task queue grouped by section, with status: `todo`, `in-progress`, `done`, `blocked`. 
+4. **Interactive Task Execution** — For each task in order: display context and full bullet text, reference relevant design/requirements docs, offer to outline sub-steps before starting, prompt for status update (`done`, `in-progress`, `blocked`, `skipped`) with short notes after work, and if blocked record blocker and move to a "Blocked" list. +5. **Update Planning Doc** — After each completed or status-changed task, run `/update-planning` to keep `docs/ai/planning/feature-{name}.md` accurate. +6. **Store Reusable Knowledge** — Save reusable implementation guidance/decisions with `npx ai-devkit@latest memory store ...`. +7. **Session Summary** — Produce a summary: Completed, In Progress (with next steps), Blocked (with blockers), Skipped/Deferred, and New Tasks. +8. **Next Command Guidance** — Continue `/execute-plan` until plan completion; then run `/check-implementation`. diff --git a/packages/cli/templates/commands/new-requirement.md b/packages/cli/templates/commands/new-requirement.md index 72e2745..4ef4da4 100644 --- a/packages/cli/templates/commands/new-requirement.md +++ b/packages/cli/templates/commands/new-requirement.md @@ -5,14 +5,15 @@ description: Scaffold feature documentation from requirements through planning. Guide me through adding a new feature, from requirements documentation to implementation readiness. 1. **Capture Requirement** — If not already provided, ask for: feature name (kebab-case, e.g., `user-authentication`), what problem it solves and who will use it, and key user stories. -2. **Create Feature Documentation Structure** — Copy each template's content (preserving YAML frontmatter and section headings) into feature-specific files: +2. **Use Memory for Context** — Before asking repetitive clarification questions, search memory for related decisions or conventions via `npx ai-devkit@latest memory search --query "<feature/topic>"` and reuse relevant context. +3. 
**Create Feature Documentation Structure** — Copy each template's content (preserving YAML frontmatter and section headings) into feature-specific files: - `docs/ai/requirements/README.md` → `docs/ai/requirements/feature-{name}.md` - `docs/ai/design/README.md` → `docs/ai/design/feature-{name}.md` - `docs/ai/planning/README.md` → `docs/ai/planning/feature-{name}.md` - `docs/ai/implementation/README.md` → `docs/ai/implementation/feature-{name}.md` - `docs/ai/testing/README.md` → `docs/ai/testing/feature-{name}.md` -3. **Requirements Phase** — Fill out `docs/ai/requirements/feature-{name}.md`: problem statement, goals/non-goals, user stories, success criteria, constraints, open questions. -4. **Design Phase** — Fill out `docs/ai/design/feature-{name}.md`: architecture changes, data models, API/interfaces, components, design decisions, security and performance considerations. -5. **Planning Phase** — Fill out `docs/ai/planning/feature-{name}.md`: task breakdown with subtasks, dependencies, effort estimates, implementation order, risks. -6. **Documentation Review** — Run `/review-requirements` and `/review-design` to validate the drafted docs. -7. **Next Steps** — This command focuses on documentation. When ready to implement, use `/execute-plan`. Generate a PR description covering: summary, requirements doc link, key changes, test status, and a readiness checklist. +4. **Requirements Phase** — Fill out `docs/ai/requirements/feature-{name}.md`: problem statement, goals/non-goals, user stories, success criteria, constraints, open questions. +5. **Design Phase** — Fill out `docs/ai/design/feature-{name}.md`: architecture changes, data models, API/interfaces, components, design decisions, security and performance considerations. +6. **Planning Phase** — Fill out `docs/ai/planning/feature-{name}.md`: task breakdown with subtasks, dependencies, effort estimates, implementation order, risks. +7. 
**Store Reusable Knowledge** — When important conventions or decisions are finalized, store them via `npx ai-devkit@latest memory store --title "<title>" --content "<knowledge>" --tags "<tags>"`. +8. **Next Command Guidance** — Run `/review-requirements` first, then `/review-design`. If both pass, continue with `/execute-plan`. diff --git a/packages/cli/templates/commands/remember.md b/packages/cli/templates/commands/remember.md index 20d2772..b54bac1 100644 --- a/packages/cli/templates/commands/remember.md +++ b/packages/cli/templates/commands/remember.md @@ -2,9 +2,11 @@ description: Store reusable guidance in the knowledge memory service. --- -When I say "remember this" or want to save a reusable rule, help me store it in the knowledge memory service. +Help me store reusable guidance in the knowledge memory service. 1. **Capture Knowledge** — If not already provided, ask for: a short explicit title (5-12 words), detailed content (markdown, examples encouraged), optional tags (keywords like "api", "testing"), and optional scope (`global`, `project:<name>`, `repo:<name>`). If vague, ask follow-ups to make it specific and actionable. -2. **Validate Quality** — Ensure it is specific and reusable (not generic advice). Avoid storing secrets or sensitive data. -3. **Store** — Call `memory.storeKnowledge` with title, content, tags, scope. If MCP tools are unavailable, use `npx ai-devkit@latest memory store` instead. -4. **Confirm** — Summarize what was saved and offer to store more knowledge if needed. +2. **Search Before Store** — Check for existing similar entries first with `npx ai-devkit@latest memory search --query "<topic>"` to avoid duplicates. +3. **Validate Quality** — Ensure it is specific and reusable (not generic advice). Avoid storing secrets or sensitive data. +4. **Store** — Call `memory.storeKnowledge` with title, content, tags, scope. If MCP tools are unavailable, use `npx ai-devkit@latest memory store` instead. +5. 
**Confirm** — Summarize what was saved and offer to retrieve related memory entries when helpful. +6. **Next Command Guidance** — Continue with the current lifecycle phase command (`/execute-plan`, `/check-implementation`, `/writing-test`, etc.) as needed. diff --git a/packages/cli/templates/commands/review-design.md b/packages/cli/templates/commands/review-design.md index db60276..ea55cc5 100644 --- a/packages/cli/templates/commands/review-design.md +++ b/packages/cli/templates/commands/review-design.md @@ -2,14 +2,17 @@ description: Review feature design for completeness. --- -Review the design documentation in docs/ai/design/feature-{name}.md (and the project-level README if relevant). Summarize: +Review the design documentation in `docs/ai/design/feature-{name}.md` (and the project-level README if relevant). -- Architecture overview (ensure mermaid diagram is present and accurate) -- Key components and their responsibilities -- Technology choices and rationale -- Data models and relationships -- API/interface contracts (inputs, outputs, auth) -- Major design decisions and trade-offs -- Non-functional requirements that must be preserved - -Highlight any inconsistencies, missing sections, or diagrams that need updates. +1. **Use Memory for Context** — Search memory for prior architecture constraints/patterns: `npx ai-devkit@latest memory search --query "<feature design architecture>"`. +2. Summarize: + - Architecture overview (ensure mermaid diagram is present and accurate) + - Key components and their responsibilities + - Technology choices and rationale + - Data models and relationships + - API/interface contracts (inputs, outputs, auth) + - Major design decisions and trade-offs + - Non-functional requirements that must be preserved +3. Highlight inconsistencies, missing sections, or diagrams that need updates. +4. 
**Store Reusable Knowledge** — Persist approved design patterns/constraints with `npx ai-devkit@latest memory store ...` when they will help future work. +5. **Next Command Guidance** — If requirements gaps are found, return to `/review-requirements`; if design is sound, continue to `/execute-plan`. diff --git a/packages/cli/templates/commands/review-requirements.md b/packages/cli/templates/commands/review-requirements.md index 963b9df..36e84e7 100644 --- a/packages/cli/templates/commands/review-requirements.md +++ b/packages/cli/templates/commands/review-requirements.md @@ -2,12 +2,15 @@ description: Review feature requirements for completeness. --- -Review `docs/ai/requirements/feature-{name}.md` and the project-level template `docs/ai/requirements/README.md` to ensure structure and content alignment. Summarize: +Review `docs/ai/requirements/feature-{name}.md` and the project-level template `docs/ai/requirements/README.md` to ensure structure and content alignment. -- Core problem statement and affected users -- Goals, non-goals, and success criteria -- Primary user stories & critical flows -- Constraints, assumptions, open questions -- Any missing sections or deviations from the template - -Identify gaps or contradictions and suggest clarifications. +1. **Use Memory for Context** — Search memory for related requirements/domain decisions before starting: `npx ai-devkit@latest memory search --query "<feature requirements>"`. +2. Summarize: + - Core problem statement and affected users + - Goals, non-goals, and success criteria + - Primary user stories & critical flows + - Constraints, assumptions, open questions + - Any missing sections or deviations from the template +3. Identify gaps or contradictions and suggest clarifications. +4. **Store Reusable Knowledge** — If new reusable requirement conventions are agreed, store them with `npx ai-devkit@latest memory store ...`. +5. 
**Next Command Guidance** — If fundamentals are missing, go back to `/new-requirement`; otherwise continue to `/review-design`. diff --git a/packages/cli/templates/commands/simplify-implementation.md b/packages/cli/templates/commands/simplify-implementation.md index fcfec16..e0f3955 100644 --- a/packages/cli/templates/commands/simplify-implementation.md +++ b/packages/cli/templates/commands/simplify-implementation.md @@ -5,6 +5,9 @@ description: Simplify existing code to reduce complexity. Help me simplify an existing implementation while maintaining or improving its functionality. 1. **Gather Context** — If not already provided, ask for: target file(s) or component(s) to simplify, current pain points (hard to understand, maintain, or extend?), performance or scalability concerns, constraints (backward compatibility, API stability, deadlines), and relevant design docs or requirements. -2. **Analyze Current Complexity** — For each target: identify complexity sources (deep nesting, duplication, unclear abstractions, tight coupling, over-engineering, magic values), assess cognitive load for future maintainers, and identify scalability blockers (single points of failure, sync-where-async-needed, missing caching, inefficient algorithms). -3. **Propose Simplifications** — Prioritize readability over brevity — apply the 30-second test: can a new team member understand each change quickly? For each issue, suggest concrete improvements (extract, consolidate, flatten, decouple, remove dead code, replace with built-ins). Provide before/after snippets. -4. **Prioritize & Plan** — Rank by impact vs risk: (1) high impact, low risk — do first, (2) high impact, higher risk — plan carefully, (3) low impact, low risk — quick wins if time permits, (4) low impact, high risk — skip or defer. For each change specify risk level, testing requirements, and effort. Produce a prioritized action plan with recommended execution order. +2. 
**Use Memory for Context** — Search memory for established patterns and prior refactors in this area: `npx ai-devkit@latest memory search --query "<component simplification pattern>"`. +3. **Analyze Current Complexity** — For each target: identify complexity sources (deep nesting, duplication, unclear abstractions, tight coupling, over-engineering, magic values), assess cognitive load for future maintainers, and identify scalability blockers (single points of failure, sync-where-async-needed, missing caching, inefficient algorithms). +4. **Propose Simplifications** — Prioritize readability over brevity; apply the 30-second test: can a new team member understand each change quickly? For each issue, suggest concrete improvements (extract, consolidate, flatten, decouple, remove dead code, replace with built-ins). Provide before/after snippets. +5. **Prioritize & Plan** — Rank by impact vs risk: (1) high impact, low risk — do first, (2) high impact, higher risk — plan carefully, (3) low impact, low risk — quick wins if time permits, (4) low impact, high risk — skip or defer. For each change specify risk level, testing requirements, and effort. Produce a prioritized action plan with recommended execution order. +6. **Store Reusable Knowledge** — Save reusable simplification patterns and trade-offs via `npx ai-devkit@latest memory store ...`. +7. **Next Command Guidance** — After implementation, run `/check-implementation` and `/writing-test`. diff --git a/packages/cli/templates/commands/update-planning.md b/packages/cli/templates/commands/update-planning.md index b81d22d..2f67867 100644 --- a/packages/cli/templates/commands/update-planning.md +++ b/packages/cli/templates/commands/update-planning.md @@ -5,6 +5,9 @@ description: Update planning docs to reflect implementation progress. Help me reconcile current implementation progress with the planning documentation. 1. 
**Gather Context** — If not already provided, ask for: feature/branch name and brief status, tasks completed since last update, new tasks discovered, current blockers or risks, and planning doc path (default `docs/ai/planning/feature-{name}.md`). -2. **Review & Reconcile** — Summarize existing milestones, task breakdowns, and dependencies from the planning doc. For each planned task: mark status (done / in progress / blocked / not started), note scope changes, record blockers, identify skipped or added tasks. -3. **Produce Updated Task List** — Generate an updated checklist grouped by: Done, In Progress, Blocked, Newly Discovered Work — with short notes per task. -4. **Next Steps & Summary** — Suggest the next 2-3 actionable tasks and highlight risky areas. Prepare a summary paragraph for the planning doc covering: current state, major risks/blockers, upcoming focus, and any scope/timeline changes. +2. **Use Memory for Context** — Search memory for prior decisions that affect priorities/scope: `npx ai-devkit@latest memory search --query "<feature planning updates>"`. +3. **Review & Reconcile** — Summarize existing milestones, task breakdowns, and dependencies from the planning doc. For each planned task: mark status (done / in progress / blocked / not started), note scope changes, record blockers, identify skipped or added tasks. +4. **Produce Updated Task List** — Generate an updated checklist grouped by: Done, In Progress, Blocked, Newly Discovered Work — with short notes per task. +5. **Store Reusable Knowledge** — If new planning conventions or risk-handling rules emerge, store them with `npx ai-devkit@latest memory store ...`. +6. **Next Steps & Summary** — Suggest the next 2-3 actionable tasks and prepare a summary paragraph for the planning doc. +7. **Next Command Guidance** — Return to `/execute-plan` for remaining work. When all implementation tasks are complete, run `/check-implementation`. 
diff --git a/packages/cli/templates/commands/writing-test.md b/packages/cli/templates/commands/writing-test.md
index 9b62c30..d6ba6d2 100644
--- a/packages/cli/templates/commands/writing-test.md
+++ b/packages/cli/templates/commands/writing-test.md
@@ -5,8 +5,11 @@ description: Add tests for a new feature.
 Review `docs/ai/testing/feature-{name}.md` and ensure it mirrors the base template before writing tests.
 1. **Gather Context** — If not already provided, ask for: feature name/branch, summary of changes (link to design & requirements docs), target environment, existing test suites, and any flaky/slow tests to avoid.
-2. **Analyze Testing Template** — Identify required sections from `docs/ai/testing/feature-{name}.md`. Confirm success criteria and edge cases from requirements & design docs. Note available mocks/stubs/fixtures.
-3. **Unit Tests (aim for 100% coverage)** — For each module/function: list behavior scenarios (happy path, edge cases, error handling), generate test cases with assertions using existing utilities/mocks, and highlight missing branches preventing full coverage.
-4. **Integration Tests** — Identify critical cross-component flows. Define setup/teardown steps and test cases for interaction boundaries, data contracts, and failure modes.
-5. **Coverage Strategy** — Recommend coverage tooling commands. Call out files/functions still needing coverage and suggest additional tests if <100%.
-6. **Update Documentation** — Summarize tests added or still missing. Update `docs/ai/testing/feature-{name}.md` with links to test files and results. Flag deferred tests as follow-up tasks.
+2. **Use Memory for Context** — Search memory for existing testing patterns and prior edge cases: `npx ai-devkit@latest memory search --query "<feature testing strategy>"`.
+3. **Analyze Testing Template** — Identify required sections from `docs/ai/testing/feature-{name}.md`. Confirm success criteria and edge cases from requirements & design docs. Note available mocks/stubs/fixtures.
+4. **Unit Tests (aim for 100% coverage)** — For each module/function: list behavior scenarios (happy path, edge cases, error handling), generate test cases with assertions using existing utilities/mocks, and highlight missing branches preventing full coverage.
+5. **Integration Tests** — Identify critical cross-component flows. Define setup/teardown steps and test cases for interaction boundaries, data contracts, and failure modes.
+6. **Coverage Strategy** — Recommend coverage tooling commands. Call out files/functions still needing coverage and suggest additional tests if <100%.
+7. **Store Reusable Knowledge** — Save reusable testing patterns or tricky fixtures with `npx ai-devkit@latest memory store ...`.
+8. **Update Documentation** — Summarize tests added or still missing. Update `docs/ai/testing/feature-{name}.md` with links to test files and results. Flag deferred tests as follow-up tasks.
+9. **Next Command Guidance** — If tests expose design issues, return to `/review-design`; otherwise continue to `/code-review`.
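The search-before-store memory flow that these templates repeat can be sketched as a small POSIX shell helper. The `npx ai-devkit@latest memory search`/`memory store` subcommands and their `--query`, `--title`, `--content`, and `--tags` flags are taken from the templates above; the `remember` helper and the `RUN` dry-run switch are illustrative and not part of ai-devkit.

```shell
#!/bin/sh
# Illustrative sketch of the search-before-store flow the templates describe.
# RUN is a hypothetical dry-run switch: with "echo" it only prints the
# commands; set RUN="" to actually invoke the CLI.
RUN="echo"

remember() {
  title="$1"; content="$2"; tags="$3"
  # 1. Search for existing similar entries to avoid duplicates.
  $RUN npx ai-devkit@latest memory search --query "$title"
  # 2. Store the knowledge entry with title, content, and tags.
  $RUN npx ai-devkit@latest memory store \
    --title "$title" \
    --content "$content" \
    --tags "$tags"
}

remember "Prefer kebab-case doc filenames" \
  "Normalize entry-point names to kebab-case before creating knowledge docs." \
  "docs,conventions"
```

Keeping `RUN` as `echo` lets you inspect the exact commands before running them against the real memory service.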