
Comprehensive Agent Module

Enterprise-grade skill for creating, managing, and orchestrating agents. Includes structured workflows, validation checks, and reusable patterns for ai maestro.

Skill · Cliptics · ai maestro · v1.0.0 · MIT


Overview

A skill for building, orchestrating, and managing multi-agent systems with Claude Code. Provides structured patterns for spawning agent teams, coordinating parallel work, managing task dependencies, and implementing production-grade agent workflows — from simple fan-out to complex swarm orchestration.

When to Use

  • Coordinating multiple Claude Code instances on a large task
  • Running parallel code reviews, tests, or research across a codebase
  • Building pipeline workflows where stages depend on prior results
  • Implementing self-organizing swarm patterns for complex refactors
  • Managing agent lifecycle (spawn, assign, monitor, cleanup)

Quick Start

```bash
# Simple: spawn a subagent for a focused task
claude "Spawn an Explore agent to find all API endpoints in src/"

# Team: create a review team with specialists
claude "Create a review team with security, performance, and architecture reviewers for this PR"

# Swarm: self-organizing workers
claude "Spawn a swarm of 4 agents to refactor all controllers to use the new service layer"
```

Core Concepts

Agent Types

| Type | Tools Available | Best For |
|---|---|---|
| general-purpose | All tools | Multi-step implementation tasks |
| Explore | Read-only (Glob, Grep, Read) | Fast codebase analysis |
| Plan | Read-only | Architecture design, strategy |
| Bash | Bash only | Command execution, builds |

Spawning Methods

Method 1: Subagent (Short-lived)

Synchronous — spawns, executes, returns result, and exits:

```
Task tool → subagent_type: "Explore"
prompt: "Find all files using deprecated API v1 endpoints"
```

Best for: Quick lookups, focused analysis, independent tasks.

Method 2: Teammate (Persistent)

Joins a named team, has an inbox, can receive follow-up messages:

```
Task tool → team_name: "review-team"
name: "security-reviewer"
prompt: "Review all auth-related changes for vulnerabilities"
```

Best for: Long-running work, multi-step coordination, inter-agent communication.

Orchestration Patterns

1. Fan-Out (Parallel Specialists)

Spawn multiple agents working simultaneously on independent tasks:

```
Leader
├── Agent A: Security review
├── Agent B: Performance review
├── Agent C: Architecture review
└── Agent D: Test coverage check
```

When to use: Code reviews, multi-file analysis, running independent checks.

Implementation:

  1. Leader creates tasks for each specialist
  2. Spawn agents in parallel (all start simultaneously)
  3. Each agent claims and completes their task
  4. Leader collects results and synthesizes
```
# Leader creates team and spawns reviewers
spawnTeam("code-review")

# Spawn specialists in parallel
spawn("security-sentinel", prompt: "Review for OWASP Top 10 vulnerabilities")
spawn("performance-oracle", prompt: "Identify N+1 queries, memory leaks, bottlenecks")
spawn("architecture-strategist", prompt: "Check SOLID principles and design patterns")

# Collect results
broadcast("Submit your findings as a completed task")
```
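The fan-out structure above can be sketched with plain Python threads. This is a hypothetical stand-in for real agent spawning: the reviewer functions here are illustrative placeholders, not part of the actual skill API.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative stand-ins for spawned specialist agents.
def security_review(diff):
    return f"security: scanned {len(diff)} lines for OWASP issues"

def performance_review(diff):
    return f"performance: checked {len(diff)} lines for N+1 queries"

def architecture_review(diff):
    return f"architecture: validated SOLID compliance on {len(diff)} lines"

def fan_out(diff):
    """Run all specialists in parallel and collect their findings in order."""
    reviewers = [security_review, performance_review, architecture_review]
    with ThreadPoolExecutor(max_workers=len(reviewers)) as pool:
        futures = [pool.submit(r, diff) for r in reviewers]
        return [f.result() for f in futures]

findings = fan_out(["line1", "line2"])
```

The leader's "collect and synthesize" step corresponds to gathering the futures' results after all workers finish.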

2. Pipeline (Sequential Stages)

Each stage feeds into the next, with automatic unblocking:

Research → Plan → Implement → Test → Review

When to use: Feature implementation, migrations, complex refactors.

Implementation:

  1. Create tasks with dependencies
  2. Each task auto-unblocks when its dependency completes
  3. Agents pick up work as it becomes available
```
TaskCreate: "Research existing auth patterns" (no deps)
TaskCreate: "Design new OAuth flow" (depends on: Research)
TaskCreate: "Implement OAuth service" (depends on: Design)
TaskCreate: "Write integration tests" (depends on: Implement)
```
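The auto-unblocking behavior can be sketched as a tiny dependency scheduler. The task names and the `ready`/`run_pipeline` helpers are illustrative, not the real task system:

```python
# Minimal sketch of dependency-driven unblocking (illustrative names only).
tasks = {
    "research":  {"deps": [],            "status": "created"},
    "design":    {"deps": ["research"],  "status": "created"},
    "implement": {"deps": ["design"],    "status": "created"},
    "test":      {"deps": ["implement"], "status": "created"},
}

def ready(name):
    """A task is ready when every dependency has completed."""
    return all(tasks[d]["status"] == "completed" for d in tasks[name]["deps"])

def run_pipeline():
    """Repeatedly pick up whatever has become unblocked, in order."""
    order = []
    while any(t["status"] != "completed" for t in tasks.values()):
        for name, t in tasks.items():
            if t["status"] == "created" and ready(name):
                t["status"] = "completed"  # stand-in for an agent doing the work
                order.append(name)
    return order
```

Running it completes the stages strictly in dependency order: research, design, implement, test.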

3. Map-Reduce

Distribute work across agents, then combine results:

```
Leader
├── Worker 1: Process files A-M
├── Worker 2: Process files N-Z
└── Reducer: Combine all results
```

When to use: Large-scale refactors, codebase-wide analysis, bulk operations.
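A minimal map-reduce sketch, assuming each worker returns a partial result that the reducer merges (the chunking and per-file "analysis" are placeholders):

```python
from concurrent.futures import ThreadPoolExecutor

def map_reduce(files, n_workers=2):
    """Split files across workers, process in parallel, merge the partials."""
    # Round-robin split of the file list into one chunk per worker.
    chunks = [files[i::n_workers] for i in range(n_workers)]

    def worker(chunk):
        # Stand-in for per-file analysis: map each file to some result.
        return {f: len(f) for f in chunk}

    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(worker, chunks))

    # Reduce step: combine every worker's partial result.
    merged = {}
    for partial in partials:
        merged.update(partial)
    return merged
```

The reducer here is a trivial dictionary merge; in a real run it would be the leader synthesizing each worker's findings.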

4. Swarm (Self-Organizing)

Workers claim available tasks from a shared pool:

```
Task Pool: [task1, task2, task3, task4, task5]
Worker A: claims task1 → completes → claims task4
Worker B: claims task2 → completes → claims task5
Worker C: claims task3 → completes → done
```

When to use: Many similar tasks, variable completion times, elastic workload.
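The claim-from-a-shared-pool behavior can be sketched with a thread-safe queue, where `get_nowait()` acts as the atomic claim that prevents duplicate work (the helper names are illustrative):

```python
import queue
import threading

def run_swarm(task_ids, n_workers=3):
    """Workers repeatedly claim tasks from a shared pool until it is empty."""
    pool = queue.Queue()
    for t in task_ids:
        pool.put(t)

    done, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                task = pool.get_nowait()  # atomic claim: no two workers get it
            except queue.Empty:
                return                    # pool drained, worker exits
            with lock:
                done.append(task)         # stand-in for completing the task
            pool.task_done()

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done
```

Because each worker loops back for more, faster workers naturally absorb more tasks, which is what makes the pattern suit variable completion times.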

Task Management

Task Lifecycle

```
created → claimed → in_progress → completed
                                → blocked (waiting on dependency)
                                → failed (needs retry or escalation)
```
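The lifecycle above can be expressed as an allowed-transition map; this is an illustrative sketch, not the skill's actual state machine:

```python
# Allowed task-state transitions, following the lifecycle diagram.
TRANSITIONS = {
    "created":     {"claimed"},
    "claimed":     {"in_progress"},
    "in_progress": {"completed", "blocked", "failed"},
    "blocked":     {"in_progress"},  # unblocks when its dependency completes
    "failed":      {"claimed"},      # retry: another agent may re-claim it
}

def advance(status, new_status):
    """Move a task to a new state, rejecting illegal jumps."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status
```

Guarding transitions this way catches bugs like marking a task completed before anyone claimed it.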

Creating Tasks

```
TaskCreate({
  subject: "Migrate user service to new DB schema",
  description: "Update all queries in src/services/userService.ts to use the new schema",
  dependencies: ["schema-migration-task-id"],  // Won't start until this completes
})
```

Task Dependencies

Tasks can depend on other tasks, creating execution chains:

```
Schema Migration ──→ Service Update ──→ API Tests
                 ──→ Model Update   ──→ Integration Tests
```

When a dependency completes, all blocked tasks automatically unblock.

Agent Communication

Direct Messages

Send targeted messages to specific agents:

```
write("security-reviewer", "Focus especially on the JWT token handling in auth.ts")
```

Broadcast

Message all team members (use sparingly — expensive):

```
broadcast("New requirement: all changes must maintain backward compatibility")
```

Structured Message Types

| Type | Purpose |
|---|---|
| text | General instruction or update |
| task_complete | Agent finished their work |
| plan_approval | Request leader sign-off before proceeding |
| shutdown_request | Gracefully end an agent |
| permission_request | Ask leader for elevated access |
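A simple way to keep messages well-formed is to validate the type against the table above. This envelope class is a hypothetical illustration, not the skill's real message format:

```python
from dataclasses import dataclass

# The message types from the table above.
MESSAGE_TYPES = {"text", "task_complete", "plan_approval",
                 "shutdown_request", "permission_request"}

@dataclass
class Message:
    """Illustrative message envelope: type, sending agent, and payload."""
    type: str
    sender: str
    body: str

    def __post_init__(self):
        if self.type not in MESSAGE_TYPES:
            raise ValueError(f"unknown message type: {self.type}")
```

Rejecting unknown types early keeps a leader from silently ignoring a malformed `task_complete` or `permission_request`.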

Configuration

Spawn Backend

Control how agents are spawned:

```json
{
  "agentTeams": {
    "backend": "tmux",
    "maxAgents": 8,
    "defaultTimeout": 300
  }
}
```
| Backend | Visibility | Speed | Best For |
|---|---|---|---|
| in-process | Hidden | Fastest | CI/CD, automated workflows |
| tmux | Terminal panes | Fast | Development, debugging |
| iterm2 | Split tabs | Fast | macOS visual debugging |

Resource Limits

```json
{
  "agentTeams": {
    "maxAgents": 8,
    "maxTasksPerAgent": 5,
    "taskTimeout": 600,
    "inactivityTimeout": 120
  }
}
```

Best Practices

  1. Match agent type to task — Use Explore for read-only, general-purpose for writes
  2. Write explicit prompts — Agents have no prior context; be detailed
  3. Use task dependencies — Don't poll; let the task system handle ordering
  4. Prefer write over broadcast — Targeted messages are cheaper and clearer
  5. Always cleanup — Call cleanup() when done to free resources
  6. Name agents meaningfully — `security-reviewer`, not `agent-3`
  7. Limit concurrency — 4-6 parallel agents is usually optimal
  8. Handle failures — Check task status and retry or escalate failed tasks
  9. Keep agents focused — One clear responsibility per agent
  10. Log agent output — Capture results for debugging and audit trails

Example: Full Code Review Workflow

```
# 1. Leader creates team
spawnTeam("pr-review-42")

# 2. Spawn parallel reviewers
spawn("security", type: "Explore", prompt: "Check for injection, XSS, auth issues")
spawn("perf", type: "Explore", prompt: "Find N+1 queries, unnecessary renders, memory leaks")
spawn("arch", type: "Plan", prompt: "Review architecture decisions and suggest improvements")
spawn("tests", type: "Bash", prompt: "Run test suite, report failures and coverage")

# 3. Wait for all to complete
# (Task system handles this automatically via dependencies)

# 4. Leader synthesizes
# Reads all completed tasks, produces unified review summary

# 5. Cleanup
cleanup("pr-review-42")
```

Troubleshooting

| Issue | Solution |
|---|---|
| Agent not responding | Check if it's blocked on a dependency |
| Task stuck in claimed | Agent may have crashed; reassign the task |
| Too many agents spawned | Set maxAgents in config to limit concurrency |
| Agents doing duplicate work | Use task claiming to ensure exclusivity |
| Slow performance | Reduce parallel agents; check resource usage |
| Agents losing context | Write detailed prompts; avoid assuming shared knowledge |