Task Researcher Agent
A research-only agent that performs deep, comprehensive analysis for task planning, investigating codebases, documentation, and requirements to produce thorough research reports without making any code changes.
When to Use This Agent
Choose Task Researcher Agent when:
- Investigating unfamiliar codebases before making changes
- Analyzing existing patterns and conventions for consistency
- Mapping dependencies and impact of proposed changes
- Researching available libraries, frameworks, or approaches
- Creating detailed context reports for planning or handoff
Consider alternatives when:
- Ready to implement changes (use a development agent)
- Creating actionable task plans from research (use a task planner agent)
- Doing quick, targeted code searches (use grep/glob directly)
Quick Start
```yaml
# .claude/agents/task-researcher-agent.yml
name: Task Researcher
model: claude-sonnet-4-20250514
tools:
  - Read
  - Glob
  - Grep
  - Bash
prompt: |
  You are a research specialist. Perform deep analysis of codebases,
  documentation, and requirements. Write findings to research tracking files.

  CRITICAL: You must NOT modify any code files. Your job is to investigate
  and document, never to implement.
```
Example invocation:
```shell
claude --agent task-researcher-agent "Research the authentication system in this codebase: how sessions work, where tokens are validated, what middleware is involved, and how it integrates with the database layer"
```
Core Concepts
Research Workflow
```
Define Scope → Explore   → Analyze  → Synthesize  → Document
     │            │           │           │            │
  Question    File scan    Patterns   Conclusions   Research
  Boundaries  Grep         Deps       Risks        report
  Depth       Read         Flows      Unknowns     Findings
```
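The five phases above can be sketched as a simple driver loop. All of the helper data and function names below are hypothetical illustrations of the phase ordering and the artifact each phase produces; they are not part of the agent template:

```python
# Hypothetical sketch of the five-phase research workflow:
# scope -> explore -> analyze -> synthesize -> document.
# File names and findings are stand-ins for real exploration.

def run_research(question: str, boundaries: list[str], depth: str) -> str:
    # Define Scope: question, boundaries, and depth budget.
    scope = {"question": question, "boundaries": boundaries, "depth": depth}

    # Explore: file scan, grep, read key files (stubbed out here).
    files = ["src/middleware/auth.ts", "src/routes/login.ts"]

    # Analyze: patterns, dependencies, flows.
    findings = [f"{f}: reviewed" for f in files]

    # Synthesize: conclusions, risks, unknowns.
    risks = ["No tests found for token refresh"]

    # Document: emit the research report.
    report = [f"# Research: {scope['question']}", "## Summary"]
    report += [f"- {x}" for x in findings]
    report += ["## Risks and Concerns"] + [f"- {r}" for r in risks]
    return "\n".join(report)
```

The point of the structure is that each phase consumes the previous phase's output, so scope creep in Explore is visible as bloat in every later section.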
Research Report Structure
```markdown
# Research: {Topic}

## Summary
- Key findings in 3-5 bullet points

## Architecture Overview
- How the relevant system works
- Component diagram (ASCII)

## File Map
- Critical files with descriptions and line references

## Patterns and Conventions
- Coding patterns used
- Naming conventions
- Error handling approaches

## Dependencies
- Internal dependencies (other modules)
- External dependencies (libraries, services)

## Risks and Concerns
- Technical debt identified
- Potential breaking changes
- Missing test coverage

## Open Questions
- Things that need clarification
- Assumptions made during research
```
Research Depth Levels
| Level | Scope | Time Budget | Use When |
|---|---|---|---|
| Quick scan | File listing, structure overview | 5-10 min | Orientation |
| Standard | Key files read, pattern identification | 30-60 min | Most tasks |
| Deep dive | Full dependency tracing, edge case analysis | 2-4 hours | Critical systems |
| Comprehensive | Cross-system analysis, security review | Half day | Major refactors |
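Choosing a depth level can be made explicit in tooling. The mapping below mirrors the table; the selection heuristic (risk and complexity thresholds) is an illustrative assumption, not a default the template ships with:

```python
# Time budgets in minutes, taken from the depth-levels table.
DEPTH_BUDGET_MIN = {
    "quick_scan": (5, 10),
    "standard": (30, 60),
    "deep_dive": (120, 240),
    "comprehensive": (240, 480),
}

def pick_depth(risk: str, complexity: str) -> str:
    """Illustrative heuristic: escalate depth as risk/complexity rise."""
    if risk == "high" and complexity == "high":
        return "comprehensive"
    if risk == "high" or complexity == "high":
        return "deep_dive"
    if complexity == "medium":
        return "standard"
    return "quick_scan"
```

A heuristic like this keeps research time proportional to task risk, which is the same principle the "Research takes too long" issue below warns about.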
Configuration
| Parameter | Description | Default |
|---|---|---|
| `output_dir` | Research report location | `.copilot-tracking/research/` |
| `depth` | Default research depth | `Standard` |
| `include_diagrams` | Generate ASCII architecture diagrams | `true` |
| `trace_dependencies` | Map dependency trees | `true` |
| `identify_patterns` | Catalog coding patterns | `true` |
| `flag_risks` | Highlight technical risks | `true` |
| `read_only` | Enforce no-modification constraint | `true` |
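Put together, an override block using these parameters might look like the following. The `research:` key placement is an assumption about the template's schema; the parameter names come from the table above:

```yaml
# Hypothetical configuration overrides; keys mirror the parameter table.
research:
  output_dir: .copilot-tracking/research/
  depth: deep_dive              # escalate default from Standard
  include_diagrams: true
  trace_dependencies: true
  identify_patterns: true
  flag_risks: true
  read_only: true               # never relax this for a research-only agent
```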
Best Practices
- **Start with the entry points and trace inward.** Don't read files randomly. Start with the obvious entry point (route handler, main function, component) and follow the execution path through imports, function calls, and data flow. This approach builds understanding in the same order the code executes, making it easier to construct a mental model of how the system works.
- **Document file references with line numbers.** When noting important code, include the exact file path and line number. "Authentication is checked in `src/middleware/auth.ts:42-65`" is useful. "There's some auth middleware somewhere" is not. Precise references let the person reading your research jump directly to the relevant code without searching.
- **Distinguish between facts and inferences.** Separate what you observed ("the middleware calls `validateToken` on line 42") from what you infer ("this likely means tokens are validated on every request"). Label inferences clearly so readers know which findings need verification. Wrong inferences presented as facts create dangerous assumptions in downstream planning.
- **Map the blast radius of potential changes.** For research that feeds into implementation planning, identify every file and system that would be affected by a change. Search for function callers, type references, and test files. A seemingly local change might have ten callers across five modules. Knowing the blast radius upfront prevents "I didn't know that would break that" surprises.
- **Record what you didn't find as explicitly as what you found.** Missing test coverage, absent error handling, undocumented APIs, and mysterious configuration values are all valuable findings. "No tests exist for the payment processing module" is as important as describing how the module works. Missing things represent risks that planning must account for.
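The blast-radius practice above can be approximated mechanically: search every source file for references to the symbol being changed and record where they occur. A minimal sketch, assuming a plain directory tree and a symbol name chosen for illustration:

```python
import os

def find_callers(root: str, symbol: str,
                 exts: tuple[str, ...] = (".ts", ".py")) -> dict[str, list[int]]:
    """Map each file under root to the line numbers that mention symbol."""
    hits: dict[str, list[int]] = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(exts):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as fh:
                lines = [i + 1 for i, line in enumerate(fh) if symbol in line]
            if lines:
                hits[path] = lines
    return hits
```

In practice the agent's Grep tool does the same job; the value of the sketch is the output shape, `file -> line numbers`, which feeds directly into the report's File Map section.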
Common Issues
Research scope creeps beyond the original question. It's tempting to follow every interesting code path, but research must serve a specific purpose. Define the research question and boundaries before starting. When you discover something interesting but out of scope, note it as a "further investigation" item rather than exploring it immediately. Scoped research is useful research; unfocused exploration produces unusable reports.
Research findings are too technical for the intended audience. Match the report detail level to who will read it. A report for a product manager should emphasize capabilities, limitations, and risks. A report for a senior engineer should include file paths, code patterns, and technical constraints. A report that mixes audiences serves neither well. Write for your reader.
Research takes too long and blocks implementation. Set a time budget proportional to the task risk and complexity. For well-understood systems with low-risk changes, 30 minutes of research is enough. For critical systems or major refactors, invest more. If research reveals the system is more complex than expected, report that finding early rather than continuing to research in silence while the team waits.