Task Decomposition Pro
An autonomous agent that breaks complex tasks into structured, executable subtasks with dependency graphs, priority ordering, and progress tracking, turning vague requirements into actionable work plans.
When to Use This Agent
Choose Task Decomposition Pro when:
- You have a complex feature request that spans multiple files and systems
- A task seems overwhelming and you need it broken into manageable pieces
- You need to identify dependencies between subtasks for parallel execution
- You want to estimate effort and identify critical path items
Consider alternatives when:
- The task is already clear and small (just do it)
- You need project management features (use Jira, Linear, or similar)
- You need architectural decision-making (use an architect agent)
Quick Start
# .claude/agents/task-decomposition.yml
name: task-decomposition-pro
description: Break complex tasks into executable subtasks
agent_prompt: |
  You are a Task Decomposition Expert. When given a complex task:
  1. Understand the full scope and desired outcome
  2. Identify the major workstreams
  3. Break each workstream into atomic, testable subtasks
  4. Map dependencies between subtasks
  5. Identify parallelizable work
  6. Estimate relative effort (S/M/L/XL) for each subtask
  7. Determine the critical path

  Rules:
  - Each subtask should be completable in under 2 hours
  - Each subtask should have a clear "done" definition
  - Mark external dependencies or blockers explicitly
  - Identify risks and mitigation strategies
Example invocation:
claude "Decompose: Add multi-tenancy support to our SaaS application"
Sample decomposition:
Task Decomposition: Multi-Tenancy Support
────────────────────────────────────────
Total Subtasks: 18 | Effort: XL | Critical Path: 12 subtasks
Workstream 1: Database Layer (5 subtasks)
1.1 [M] Add tenant_id column to all tables
→ Migration script, backfill existing data
Done: All tables have tenant_id, FK constraint, index
1.2 [S] Create tenant table (id, name, plan, settings)
Done: Migration applied, seed data for dev
1.3 [M] Add Row Level Security policies
Depends: 1.1, 1.2
Done: RLS enforced on all tenant tables
1.4 [S] Update database queries to include tenant filter
Depends: 1.1
Done: No query returns cross-tenant data
1.5 [S] Add tenant-aware connection pool
Depends: 1.2
Done: Pool routes by tenant, connection limits per tenant
Workstream 2: Authentication & Middleware (4 subtasks)
2.1 [M] Extract tenant from request (subdomain/header/JWT)
Done: req.tenantId available in all routes
2.2 [S] Add tenant context middleware
Depends: 2.1
Done: Tenant context propagated through request chain
...
Dependency Graph:
1.1 ──▶ 1.3
1.1 ──▶ 1.4
1.2 ──▶ 1.3
1.2 ──▶ 1.5
1.2 ──▶ 2.1 ──▶ 2.2
Critical Path: 1.1 → 1.3 → 3.1 → 3.2 → 4.1 → 4.3
Parallelizable: {1.1, 1.2} can run together, {1.4, 1.5} can run together
Core Concepts
Decomposition Methodology
| Level | Granularity | Example |
|---|---|---|
| Epic | Major feature or initiative | "Add multi-tenancy" |
| Workstream | Functional area or layer | "Database layer changes" |
| Subtask | Atomic, completable unit | "Add tenant_id to users table" |
| Step | Implementation detail | "Write migration, run, verify" |
Dependency Graph Builder
interface Subtask {
  id: string;
  title: string;
  workstream: string;
  effort: 'S' | 'M' | 'L' | 'XL';
  dependencies: string[];
  doneDefinition: string;
  risks: string[];
}

function buildExecutionOrder(subtasks: Subtask[]): Subtask[][] {
  const completed = new Set<string>();
  const batches: Subtask[][] = [];

  while (completed.size < subtasks.length) {
    const batch = subtasks.filter(task =>
      !completed.has(task.id) &&
      task.dependencies.every(dep => completed.has(dep))
    );
    if (batch.length === 0) {
      throw new Error('Circular dependency detected');
    }
    batches.push(batch);
    batch.forEach(task => completed.add(task.id));
  }

  return batches; // Each batch can be parallelized
}
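To illustrate, here is a runnable usage sketch. It repeats a trimmed-down Subtask shape and the batching function so the snippet stands alone, and feeds in four of the subtasks from the sample decomposition; the effort values are illustrative.

```typescript
type Effort = 'S' | 'M' | 'L' | 'XL';

interface Subtask {
  id: string;
  effort: Effort;
  dependencies: string[];
}

// Same algorithm as above: repeatedly pull every subtask whose
// dependencies are already completed into the next parallel batch.
function buildExecutionOrder(subtasks: Subtask[]): Subtask[][] {
  const completed = new Set<string>();
  const batches: Subtask[][] = [];
  while (completed.size < subtasks.length) {
    const batch = subtasks.filter(t =>
      !completed.has(t.id) && t.dependencies.every(d => completed.has(d))
    );
    if (batch.length === 0) throw new Error('Circular dependency detected');
    batches.push(batch);
    batch.forEach(t => completed.add(t.id));
  }
  return batches;
}

const tasks: Subtask[] = [
  { id: '1.1', effort: 'M', dependencies: [] },
  { id: '1.2', effort: 'S', dependencies: [] },
  { id: '1.3', effort: 'M', dependencies: ['1.1', '1.2'] },
  { id: '1.4', effort: 'S', dependencies: ['1.1'] },
];

const batches = buildExecutionOrder(tasks);
console.log(batches.map(b => b.map(t => t.id)));
// → [ [ '1.1', '1.2' ], [ '1.3', '1.4' ] ]
```

Note that 1.1 and 1.2 land in the same batch because neither depends on the other, matching the "Parallelizable" line in the sample output.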
Effort Estimation Guide
Size Guide:
┌──────┬───────────────┬──────────────────────────────┐
│ Size │ Time Estimate │ Characteristics              │
├──────┼───────────────┼──────────────────────────────┤
│ S    │ < 1 hour      │ Single file, clear scope     │
│ M    │ 1-3 hours     │ 2-3 files, some decisions    │
│ L    │ 3-8 hours     │ Multiple files, testing req. │
│ XL   │ 8+ hours      │ Cross-cutting, should split  │
└──────┴───────────────┴──────────────────────────────┘
If a subtask is XL, it should be decomposed further.
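The size buckets and the "split anything above the limit" rule can be expressed directly in code. This is a hypothetical helper, not part of the agent itself; the hour ranges come from the table above.

```typescript
type Effort = 'S' | 'M' | 'L' | 'XL';

// Hour ranges from the size guide above.
const HOURS: Record<Effort, [number, number]> = {
  S: [0, 1],          // single file, clear scope
  M: [1, 3],          // 2-3 files, some decisions
  L: [3, 8],          // multiple files, testing required
  XL: [8, Infinity],  // cross-cutting: decompose further
};

// True when a subtask exceeds the configured maximum size
// and should be decomposed before work starts.
function needsSplit(effort: Effort, maxSize: Effort = 'L'): boolean {
  const order: Effort[] = ['S', 'M', 'L', 'XL'];
  return order.indexOf(effort) > order.indexOf(maxSize);
}

console.log(HOURS['M']);       // → [ 1, 3 ]
console.log(needsSplit('XL')); // → true
console.log(needsSplit('M'));  // → false
```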
Configuration
| Option | Type | Default | Description |
|---|---|---|---|
| maxSubtaskSize | string | "L" | Maximum effort per subtask before splitting |
| includeEstimates | boolean | true | Add effort estimates |
| showDependencies | boolean | true | Map dependencies between subtasks |
| identifyRisks | boolean | true | Flag risks and blockers |
| outputFormat | string | "markdown" | Format: markdown, json, mermaid-gantt |
| maxDepth | number | 3 | Maximum decomposition depth |
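The option table can be captured as a typed configuration object. The key names mirror the table; the interface and type name are illustrative, not a verified schema.

```typescript
// Option shape derived from the configuration table above.
interface DecompositionOptions {
  maxSubtaskSize: 'S' | 'M' | 'L' | 'XL';
  includeEstimates: boolean;
  showDependencies: boolean;
  identifyRisks: boolean;
  outputFormat: 'markdown' | 'json' | 'mermaid-gantt';
  maxDepth: number;
}

// The documented defaults.
const defaults: DecompositionOptions = {
  maxSubtaskSize: 'L',
  includeEstimates: true,
  showDependencies: true,
  identifyRisks: true,
  outputFormat: 'markdown',
  maxDepth: 3,
};

console.log(defaults.outputFormat); // → markdown
```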
Best Practices
- Define "done" for every subtask. A subtask without a clear completion criterion ("Add tenant_id column") is ambiguous. Add a testable definition: "All tables have tenant_id column with NOT NULL constraint, foreign key to tenants table, and btree index. Migration is reversible." This prevents scope creep and stops work from being marked complete prematurely.
- Keep subtasks under 2 hours each. If a subtask takes longer than 2 hours, it is too large and likely hides complexity. Decompose further. Small subtasks give better progress visibility, easier code review, and a lower risk of blocked work.
- Identify the critical path first. The longest chain of dependent subtasks determines the minimum project duration. Focus your best developers on critical path items and parallelize non-critical work. A 5-day project with a 3-day critical path has 2 days of float for non-critical items.
- Surface external dependencies and blockers early. If a subtask requires API access, design approval, or third-party integration, flag it immediately. External dependencies often have unpredictable timelines and should be initiated as early as possible, even before the technical decomposition is complete.
- Review the decomposition with the team before starting work. The decomposition is a plan, not a decree. Team members who will implement the subtasks often spot missing steps, incorrect dependencies, or better parallelization opportunities. A 30-minute review saves days of misdirected work.
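The critical-path idea above can be sketched as a longest-path walk over the dependency graph. This is a minimal sketch that measures path length in subtask count (not hours) and assumes the graph is acyclic, as buildExecutionOrder already enforces.

```typescript
// deps maps each subtask id to the ids it depends on.
// Returns the length of the longest dependency chain, memoized
// so each subtask's depth is computed once.
function criticalPathLength(deps: Record<string, string[]>): number {
  const memo = new Map<string, number>();
  const depth = (id: string): number => {
    if (memo.has(id)) return memo.get(id)!;
    const ds = deps[id] ?? [];
    const d = 1 + (ds.length ? Math.max(...ds.map(depth)) : 0);
    memo.set(id, d);
    return d;
  };
  return Math.max(...Object.keys(deps).map(depth));
}

// Dependencies from the sample decomposition's graph.
const deps: Record<string, string[]> = {
  '1.1': [], '1.2': [],
  '1.3': ['1.1', '1.2'],
  '1.4': ['1.1'],
  '1.5': ['1.2'],
  '2.1': ['1.2'],
  '2.2': ['2.1'],
};

console.log(criticalPathLength(deps)); // → 3  (e.g. 1.2 → 2.1 → 2.2)
```

Weighting each node by estimated hours instead of counting 1 per subtask would turn this into a duration estimate for the critical path.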
Common Issues
Subtask dependencies create unnecessary bottlenecks. Two subtasks are marked as dependent when they could run in parallel. Review dependencies critically: does "Add user profile page" really depend on "Set up database schema," or can the frontend be built with mock data while the backend is in progress? Reduce unnecessary dependencies to maximize parallelism.
Effort estimates are consistently wrong. Small tasks take longer than expected due to hidden complexity (edge cases, testing, deployment). Apply a 1.5x multiplier to initial estimates for unfamiliar work and track actual vs. estimated time to calibrate future estimates. Over time, your estimates will improve based on historical data.
Decomposition becomes outdated as requirements change. The initial decomposition assumed a specific approach, but mid-project discoveries require a different architecture. Treat the decomposition as a living document. Re-decompose affected workstreams when significant changes occur rather than forcing new requirements into the old structure.