---
description: Coordinate multiple agents for a complex multi-domain project using PM planning, parallel agent spawning, and QA review
---
- Response language follows the `language` setting in `.agents/oma-config.yaml` if configured.
- NEVER skip steps. Execute from Step 0 in order. Explicitly report completion of each step to the user before proceeding to the next.
- You MUST use MCP tools throughout the entire workflow. This is NOT optional.
- Use code analysis tools (`get_symbols_overview`, `find_symbol`, `find_referencing_symbols`, `search_for_pattern`) for code exploration.
- Use memory tools (read/write/edit) for progress tracking.
- Memory path: configurable via `memoryConfig.basePath` (default: `.serena/memories`).
- Tool names: configurable via `memoryConfig.tools` in `mcp.json`.
- Do NOT use raw file reads or grep as substitutes. MCP tools are the primary interface for code and memory operations.
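A sketch of how this configuration might look in `mcp.json`. The `basePath` value matches the documented default; the exact shape of the `tools` mapping is an assumption for illustration and depends on your MCP setup:

```jsonc
{
  // Illustrative only: "tools" shape is assumed, not a documented schema.
  "memoryConfig": {
    "basePath": ".serena/memories",
    "tools": { "read": "read_memory", "write": "write_memory", "edit": "edit_memory" }
  }
}
```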
- Read the oma-coordination skill BEFORE starting: read `.agents/skills/oma-coordination/SKILL.md` and follow its Core Rules.
- Follow the context-loading guide: read `.agents/skills/_shared/core/context-loading.md` and load only task-relevant resources.
Before starting, determine your runtime environment by following `.agents/skills/_shared/core/vendor-detection.md`.
The detected vendor determines how agents are spawned (Step 4) and monitored (Step 5).
## Step 0: Initialize

- Read `.agents/skills/oma-coordination/SKILL.md` and confirm Core Rules.
- Read `.agents/skills/_shared/core/context-loading.md` for resource loading strategy.
- Read `.agents/skills/_shared/runtime/memory-protocol.md` for memory protocol.
- Record session start using the memory write tool:
  - Create `session-coordinate.md` in the memory base path.
  - Include: session start time, user request summary.
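A minimal `session-coordinate.md` might look like this (contents illustrative; follow `memory-protocol.md` for any required fields):

```markdown
# session-coordinate

- Session start: 2025-01-15 10:30 UTC
- Request: "Add order management across web and mobile, with a new orders API"
```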
## Step 1: Analyze the Request

Analyze the user's request and identify the involved domains (frontend, backend, mobile, QA).

- Single domain: suggest using the specific agent directly.
- Multiple domains: proceed to Step 2.
- Use MCP code analysis tools (`get_symbols_overview` or `search_for_pattern`) to understand the existing codebase structure relevant to the request.
- Report analysis results to the user.
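For example, calls like these can map the relevant code (illustrative; exact parameter names follow your MCP server's tool schema):

```
get_symbols_overview(relative_path="src")       # map files to their top-level symbols
search_for_pattern(substring_pattern="order")   # find code paths relevant to the request
```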
## Step 2: PM Planning

// turbo
Activate the PM Agent to:

- Analyze requirements.
- Define API contracts.
- Create a prioritized task breakdown.
- Save the plan to `.agents/plan.json`.
- Use the memory write tool to record plan completion.
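A minimal sketch of what `.agents/plan.json` might contain. All field names here are illustrative assumptions; the actual schema is whatever the PM Agent produces:

```json
{
  "tasks": [
    {
      "id": "T1",
      "priority": "P0",
      "agent": "backend-engineer",
      "description": "Implement /api/orders endpoints",
      "dependsOn": [],
      "workspace": "./backend"
    },
    {
      "id": "T2",
      "priority": "P0",
      "agent": "frontend-engineer",
      "description": "Build order list and detail views",
      "dependsOn": [],
      "workspace": "./frontend"
    }
  ],
  "apiContracts": [
    { "endpoint": "GET /api/orders", "response": "Order[]" }
  ]
}
```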
## Step 3: Present the Plan

Present the PM Agent's task breakdown to the user:

- Priorities (P0, P1, P2)
- Agent assignments
- Dependencies
- You MUST get user confirmation before proceeding to Step 4. Do NOT proceed without confirmation.
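For example, the breakdown could be summarized like this (task contents are illustrative):

| Task | Priority | Agent | Depends on |
|---|---|---|---|
| T1: Implement /api/orders endpoints | P0 | backend-engineer | none |
| T2: Build order list and detail views | P0 | frontend-engineer | T1 (API contract) |
| T3: Mobile order screen | P1 | mobile-engineer | T1 (API contract) |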
## Step 4: Spawn Implementation Agents

// turbo
Spawn agents for each task by priority tier (P0 first, then P1, etc.). Spawn all same-priority tasks in parallel. Assign separate workspaces to avoid file conflicts.

If the detected vendor provides an Agent tool (e.g., Claude Code), use it to spawn subagents:

```
Agent(subagent_type="backend-engineer", prompt="Implement backend tasks per plan.", run_in_background=true)
Agent(subagent_type="frontend-engineer", prompt="Implement frontend tasks per plan.", run_in_background=true)
```

- Multiple Agent tool calls in the same message = true parallel execution.
- Agent definitions: `.claude/agents/{agent}.md`

Otherwise, request parallel subagent execution with the specific tasks. Pass each agent its task description, API contracts, and relevant context:

```bash
oh-my-ag agent:spawn backend "task description" session-id -w ./backend &
oh-my-ag agent:spawn frontend "task description" session-id -w ./frontend &
wait
```

## Step 5: Monitor Progress

- Use the memory read tool to poll `progress-{agent}.md` files.
- Use MCP code analysis tools (`find_symbol` and `search_for_pattern`) to verify API contract alignment between agents.
- Use the memory edit tool to record monitoring results.
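A progress file might look like this (format illustrative; actual fields follow `memory-protocol.md`):

```markdown
# progress-backend

- Status: in_progress
- Completed: T1
- Current: POST /api/orders validation
- Blockers: none
```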
## Step 6: QA Review

After all implementation agents complete, spawn the QA Agent to review all deliverables:
- Security (OWASP Top 10)
- Performance
- Accessibility (WCAG 2.1 AA)
- Code quality
If automated measurement is available:
- Load `quality-score.md` (conditional, per `context-loading.md`)
- Measure Quality Score based on QA findings
- Record as baseline in Experiment Ledger via memory tools
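An Experiment Ledger baseline entry might look like this (entirely illustrative, including the score; the real format is defined by `quality-score.md`):

```markdown
## Experiment Ledger

| id | change | score | decision |
|---|---|---|---|
| baseline | initial QA review | 72/100 | baseline |
```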
## Step 7: Fix Loop

If QA finds CRITICAL or HIGH issues:
- Re-spawn the responsible agent with the QA findings (see the sketch after this list).
- If Quality Score is active: measure after the fix, apply the Keep/Discard rule, record in the Experiment Ledger.
- Repeat Steps 5-7.
- If the same issue persists after 2 fix attempts: activate the Exploration Loop (load `exploration-loop.md` per `context-loading.md`):
  - Generate 2-3 alternative approaches via the Exploration Decision template.
  - Re-spawn the same agent type with different hypothesis prompts (separate workspaces).
  - QA scores each result.
  - The best result is adopted; the others are discarded.
  - All experiments are recorded in the Experiment Ledger.
- Continue until all critical issues are resolved.
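A re-spawn with findings might look like this, using the same CLI form as Step 4 (the finding text is hypothetical):

```bash
oh-my-ag agent:spawn backend "Fix QA finding: missing input validation on POST /api/orders (HIGH). See QA report in memory." session-id -w ./backend &
wait
```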
## Step 8: Finalize

- Use the memory write tool to record final results.
- If Quality Score was measured: generate an Experiment Ledger summary and auto-generate lessons from discarded experiments.