Subagent-Driven Development Engine
Execute implementation plans by dispatching fresh subagents per task with two-stage review — spec compliance first, then code quality — ensuring high-quality, fast iteration on complex projects.
When to Use
Use subagent-driven development when:
- Executing multi-task implementation plans where each task is independent
- Running parallel development across multiple files or features
- Seeking consistent quality through a systematic two-stage review
- Working on complex refactors or feature builds with 5+ tasks
Use direct development when:
- Single-file changes or quick fixes
- Tasks with heavy interdependencies requiring sequential execution
- Simple modifications that don't warrant the orchestration overhead
Quick Start
Basic Workflow
1. Plan → Break work into independent tasks
2. Dispatch → Fresh subagent per task (clean context)
3. Review Stage 1 → Spec compliance check
4. Review Stage 2 → Code quality review
5. Integrate → Merge all task outputs
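The five steps above can be sketched as a simple orchestration loop. This is a minimal sketch: `dispatch_subagent` and `review` are hypothetical stand-ins for your agent runtime, not a real API.

```python
def dispatch_subagent(task: dict) -> str:
    # Hypothetical stand-in for your agent runtime: each call starts
    # a fresh subagent that sees only this task's context.
    return f"implementation of {task['name']}"

def review(task: dict, output: str, stage: str) -> str:
    # Placeholder: a real review would produce feedback and may
    # trigger a revision cycle before returning an accepted output.
    return output

def run_plan(tasks: list[dict]) -> dict:
    results = {}
    for task in tasks:
        out = dispatch_subagent(task)             # fresh context per task
        out = review(task, out, stage="spec")     # Stage 1: spec compliance
        out = review(task, out, stage="quality")  # Stage 2: code quality
        results[task["name"]] = out               # integrate accepted outputs
    return results
```

The key property is that each loop iteration starts from a clean context; nothing from task A's conversation leaks into task B's dispatch.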
Task Dispatch Template
## Task: {task_name}

### Context
- Project: {project_description}
- Architecture: {relevant_architecture}
- File(s): {target_files}

### Specification
{detailed_requirements}

### Acceptance Criteria
- [ ] {criterion_1}
- [ ] {criterion_2}
- [ ] {criterion_3}

### Constraints
- Follow existing patterns in {reference_file}
- Use {framework/library} for {specific_concern}
- Do not modify {protected_files}
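One way to fill the template programmatically is with plain string formatting. A minimal sketch; the template shown is abbreviated, and `render_dispatch` is a hypothetical helper:

```python
# Abbreviated version of the dispatch template, as a format string.
DISPATCH_TEMPLATE = """\
## Task: {task_name}

### Specification
{spec}

### Acceptance Criteria
{criteria}
"""

def render_dispatch(task_name: str, spec: str, criteria: list[str]) -> str:
    # Turn the acceptance criteria into a markdown checklist.
    checklist = "\n".join(f"- [ ] {c}" for c in criteria)
    return DISPATCH_TEMPLATE.format(task_name=task_name, spec=spec,
                                    criteria=checklist)
```

Rendering the template up front forces you to write concrete acceptance criteria before any subagent is dispatched.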
Two-Stage Review
## Stage 1: Spec Compliance Review
- Does the implementation match all acceptance criteria?
- Are all edge cases from the spec handled?
- Does it integrate correctly with existing code?
- Are there any missing requirements?

## Stage 2: Code Quality Review
- Is the code clean, readable, and maintainable?
- Are there any bugs or logic errors?
- Does it follow project conventions?
- Are there performance concerns?
- Is error handling appropriate?
Core Concepts
Why Fresh Subagents?
Each subagent starts with a clean context, which provides:
- No context pollution — previous task decisions don't leak into current task
- Full context budget — entire context window available for the current task
- Parallel execution — independent tasks can run simultaneously
- Consistent quality — no degradation from accumulated context
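Because tasks are independent, dispatch can fan out. A minimal sketch using a thread pool, where `dispatch_subagent` is again a hypothetical stand-in for your agent runtime:

```python
from concurrent.futures import ThreadPoolExecutor

def dispatch_subagent(task: str) -> str:
    # Hypothetical: each call runs a fresh, isolated subagent.
    return f"done: {task}"

def run_parallel(tasks: list[str], max_parallel: int = 3) -> list[str]:
    # Independent tasks run concurrently; each subagent sees only
    # its own task. Results come back in task order.
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return list(pool.map(dispatch_subagent, tasks))
```

Capping `max_parallel` (here at 3, matching the default configuration) keeps the subsequent review workload manageable.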
Task Decomposition
| Task Type | Scope | Subagent Context |
|---|---|---|
| Feature | Single feature, 1-3 files | Feature spec + affected files |
| Refactor | One concern, multiple files | Refactoring goal + all affected files |
| Test | Test suite for one module | Module code + test patterns |
| Fix | Single bug | Bug report + relevant code |
| Config | Infrastructure/config change | Config docs + current config |
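The table can be encoded as a lookup so the orchestrator assembles context mechanically. A sketch; the mapping values are descriptive labels, not real file paths:

```python
# Mapping from the task-type table: what each subagent should see.
SUBAGENT_CONTEXT = {
    "feature":  ["feature spec", "affected files"],
    "refactor": ["refactoring goal", "all affected files"],
    "test":     ["module code", "test patterns"],
    "fix":      ["bug report", "relevant code"],
    "config":   ["config docs", "current config"],
}

def build_context(task_type: str) -> list[str]:
    # Minimal-context strategy: include only what this task type needs.
    if task_type not in SUBAGENT_CONTEXT:
        raise ValueError(f"unknown task type: {task_type}")
    return SUBAGENT_CONTEXT[task_type]
```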
Review Cycle
Subagent Output
       |
Stage 1: Spec Compliance
    /       \
  Pass     Fail → send feedback → subagent revises → re-review
    |
Stage 2: Code Quality
    /       \
  Pass     Fail → send feedback → subagent revises → re-review
    |
  Accept
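The cycle above, including the revision cap, can be sketched as two nested loops. `check` and `revise` are hypothetical callables: `check` returns a pass/fail flag plus feedback, and `revise` returns a new output:

```python
def review_with_revisions(output, check, revise, max_revisions: int = 2):
    # Check, then allow up to `max_revisions` revision attempts.
    for attempt in range(max_revisions + 1):
        passed, feedback = check(output)
        if passed:
            return output
        if attempt == max_revisions:
            raise RuntimeError("max revisions reached; escalate to a human")
        output = revise(output, feedback)

def two_stage_review(output, spec_check, quality_check, revise):
    # Stage 1 (spec compliance) must pass before Stage 2 (code quality) runs.
    output = review_with_revisions(output, spec_check, revise)
    return review_with_revisions(output, quality_check, revise)
```

Running the spec check to completion first means quality review never wastes effort on an implementation that misses requirements.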
Configuration
| Parameter | Default | Description |
|---|---|---|
| max_tasks_parallel | 3 | Concurrent subagents |
| review_stages | 2 | Number of review stages |
| max_revisions | 2 | Revision attempts before escalation |
| context_strategy | "minimal" | How much context to provide (minimal/full) |
| task_timeout | 300s | Maximum time per task |
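The defaults above translate directly into a config object. A sketch; the class name `EngineConfig` is an assumption, not part of any real API:

```python
from dataclasses import dataclass

@dataclass
class EngineConfig:
    max_tasks_parallel: int = 3        # concurrent subagents
    review_stages: int = 2             # number of review stages
    max_revisions: int = 2             # revision attempts before escalation
    context_strategy: str = "minimal"  # "minimal" or "full"
    task_timeout: int = 300            # maximum seconds per task
```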
Best Practices
- Keep tasks independent — if task B depends on task A's output, execute sequentially
- Provide minimal but complete context — include only files the subagent needs to touch
- Write specific acceptance criteria — vague criteria lead to interpretation differences
- Include reference patterns — point subagents to existing code that demonstrates the desired style
- Review spec compliance first — catching spec mismatches early prevents wasted quality review
- Limit parallel tasks — too many concurrent subagents makes review overwhelming
Common Issues
Subagent interprets spec differently than intended: Add concrete examples to the spec. Reference existing code that demonstrates the pattern. Be explicit about edge cases.
Review catches issues that should have been in the spec: Improve your spec template. Add a "non-obvious requirements" section. Include architecture constraints upfront.
Tasks turn out to be interdependent: Re-plan with explicit dependencies. Execute dependent tasks sequentially, passing the output of each to the next.
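Sequential execution with output chaining can be sketched in a few lines; `dispatch` is a hypothetical callable that accepts the previous task's output as extra context:

```python
def run_sequential(tasks: list[str], dispatch):
    # Dependent tasks run in order; each subagent receives the
    # previous task's output as part of its context.
    output = None
    for task in tasks:
        output = dispatch(task, previous=output)
    return output
```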