A practical, repeatable operating model for AI-assisted software engineering — designed to ship faster without sacrificing architecture, security, or quality.
This repository is intentionally language/framework agnostic. It focuses on the operating model: rules → tasks → execution → review → verification → learning loops.
This repo documents an Agentic Software Engineering Framework I use to build complex systems with AI support while maintaining:
- Architecture integrity (boundaries and consistency)
- Governance (guardrails to prevent drift)
- Quality (tests + review discipline)
- Traceability (decision logs and debugging loops)
- Security & privacy (safe prompting and redaction)
AI is treated as an acceleration layer: it augments engineering judgment rather than replacing it.
The goal is to enable fast iteration with reliable outcomes, by enforcing:
- structured development workflows
- architecture-first thinking
- reuse-first discipline
- human confirmation gates
- verification-driven delivery (tests + manual checks)
A structured loop:
Rules → Goals → Tasks → Execute (AI-assisted) → Review → Verify → Log → Iterate
Key ideas:
- Rules define architectural constraints and engineering standards.
- Goals define outcomes and success criteria.
- Tasks break work into small, verifiable units.
- AI tools accelerate implementation and analysis.
- Review + verification ensure correctness and safety.
- Logs capture decisions and learnings to improve the workflow.
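The loop above can be sketched in code. This is a minimal, tool-agnostic illustration, not part of the framework's actual tooling: the `Task` shape and the injected `implement`/`confirm`/`verify`/`log` callables are hypothetical names standing in for whatever AI tools, reviewers, and decision logs a team plugs in.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """One small, verifiable unit of work (hypothetical structure)."""
    title: str
    success_criteria: list[str]
    done: bool = False

def run_loop(tasks, implement, confirm, verify, log):
    """Sketch of Tasks -> Execute (AI-assisted) -> Review -> Verify -> Log.

    The callables are injected so the loop stays tool-agnostic:
      implement(task) -> draft   (AI-assisted execution)
      confirm(task, draft)       (human confirmation gate)
      verify(task, draft)        (tests + manual checks)
      log(task, message)         (decision/learning log)
    """
    for task in tasks:
        draft = implement(task)
        if not confirm(task, draft):      # human gate: nothing ships unreviewed
            log(task, "rejected at review")
            continue
        if verify(task, draft):           # verification-driven delivery
            task.done = True
            log(task, "verified")
        else:
            log(task, "failed verification")
    return [t for t in tasks if t.done]
```

The point of the sketch is the ordering: the confirmation gate sits before verification, so a human decides *what* is acceptable before tests decide *whether* it works.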
This framework is tool-agnostic, but different tools tend to be best at different steps.
| Workflow Step | Best Tool Type | Examples |
|---|---|---|
| PRD/spec → breakdown (stories/tasks) | Reasoning model | Claude |
| Small task implementation | IDE agent + code model | Cursor + OpenAI |
| Governance and consistency checks | Rules enforcement agent | Antigravity |
| Debugging and root cause analysis | IDE agent + reasoning model | Cursor + Claude |
| Documentation and technical writing | Reasoning model | Claude |
Important: tools assist execution; architecture and correctness remain human-owned.
- docs/01-overview.md
- docs/02-ai-assisted-development-model.md
- docs/03-architecture-first-development.md
- docs/04-task-driven-engineering.md
- docs/05-code-review-and-quality.md
- docs/06-debugging-and-incident-response.md
- docs/07-security-and-privacy.md
- docs/08-example-sprints.md
PRD breakdown
- prompts/prd-breakdown/prd-to-stories.md
- prompts/prd-breakdown/stories-to-tasks.md
Implementation
- prompts/implementation/implementation-agent.md
- prompts/implementation/bugfix-agent.md
Governance
- prompts/governance/rules-enforcement.md
- prompts/governance/architecture-check.md
- templates/prompt-template.md
- templates/prd-template.md
- templates/story-template.md
- templates/task-template.md
- templates/architecture-review-template.md
- templates/pr-review-template.md
- templates/incident-template.md
examples/bugfix-walkthrough.md
Before using any AI tool, define:
- architecture boundaries
- constraints (security/performance/compliance)
- what is in-scope vs out-of-scope
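Boundaries defined up front can also be enforced mechanically. The sketch below is one illustrative way to do that in Python using the standard-library `ast` module; the `api`/`domain`/`infra` package names and the `ALLOWED` rule table are hypothetical, not part of this framework.

```python
import ast

# Hypothetical boundary rules: each package maps to the packages
# it is allowed to import from. Here, domain stays dependency-free.
ALLOWED = {
    "api": {"api", "domain"},
    "domain": {"domain"},
    "infra": {"infra", "domain"},
}

def boundary_violations(package: str, source: str) -> list[str]:
    """Return imports in `source` that cross an architecture boundary
    for a module living in `package`."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        names = []
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        for name in names:
            top = name.split(".")[0]
            # Only judge imports of packages the rules know about.
            if top in ALLOWED and top not in ALLOWED[package]:
                violations.append(name)
    return violations
```

A check like this can run in CI or be handed to a governance agent as an executable statement of the boundaries, rather than prose the AI may drift from.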
Use:
- templates/task-template.md
- prompts/prd-breakdown/*
The smaller the task, the better AI performs.
Use:
- Cursor for code navigation + implementation
- Claude/OpenAI to generate drafts and alternatives
- Antigravity to enforce rules and consistency
Use:
- templates/pr-review-template.md
- tests + manual verification steps
Treat prompts and templates as versioned engineering assets.
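Versioned assets can be linted like code. As one minimal sketch, a check that every prompt file contains the sections a team's convention requires; the `REQUIRED_SECTIONS` headings here are hypothetical placeholders for whatever structure your prompt templates actually mandate.

```python
# Hypothetical required headings for any prompt file in the repo.
REQUIRED_SECTIONS = ("## Context", "## Constraints", "## Output format")

def lint_prompt(text: str) -> list[str]:
    """Return the required section headings missing from a prompt file."""
    return [section for section in REQUIRED_SECTIONS if section not in text]
```

Running such a lint in CI treats prompts exactly like other engineering assets: reviewed in PRs, versioned in git, and checked automatically before they drift.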
This playbook is intended for:
- software architects
- staff/principal engineers
- solutions and platform architects
- engineering teams adopting AI-assisted development responsibly
MIT
