
Task Researcher Agent

A research-only agent that performs deep analysis for task planning: it investigates codebases, documentation, and requirements and produces thorough research reports without making any code changes.

When to Use This Agent

Choose Task Researcher Agent when:

  • Investigating unfamiliar codebases before making changes
  • Analyzing existing patterns and conventions for consistency
  • Mapping dependencies and impact of proposed changes
  • Researching available libraries, frameworks, or approaches
  • Creating detailed context reports for planning or handoff

Consider alternatives when:

  • Ready to implement changes (use a development agent)
  • Creating actionable task plans from research (use a task planner agent)
  • Doing quick, targeted code searches (use grep/glob directly)

Quick Start

# .claude/agents/task-researcher-agent.yml
name: Task Researcher
model: claude-sonnet-4-20250514
tools:
  - Read
  - Glob
  - Grep
  - Bash
prompt: |
  You are a research specialist. Perform deep analysis of codebases,
  documentation, and requirements. Write findings to research tracking files.

  CRITICAL: You must NOT modify any code files. Your job is to investigate
  and document, never to implement.

Example invocation:

claude --agent task-researcher-agent "Research the authentication system in this codebase: how sessions work, where tokens are validated, what middleware is involved, and how it integrates with the database layer"

Core Concepts

Research Workflow

Define Scope → Explore → Analyze → Synthesize → Document
     │            │         │          │            │
  Question     File scan  Patterns   Conclusions  Research
  Boundaries   Grep       Deps       Risks        report
  Depth        Read       Flows      Unknowns     Findings

Research Report Structure

# Research: {Topic}

## Summary
- Key findings in 3-5 bullet points

## Architecture Overview
- How the relevant system works
- Component diagram (ASCII)

## File Map
- Critical files with descriptions and line references

## Patterns and Conventions
- Coding patterns used
- Naming conventions
- Error handling approaches

## Dependencies
- Internal dependencies (other modules)
- External dependencies (libraries, services)

## Risks and Concerns
- Technical debt identified
- Potential breaking changes
- Missing test coverage

## Open Questions
- Things that need clarification
- Assumptions made during research

Research Depth Levels

| Level         | Scope                                       | Time Budget | Use When         |
|---------------|---------------------------------------------|-------------|------------------|
| Quick scan    | File listing, structure overview            | 5-10 min    | Orientation      |
| Standard      | Key files read, pattern identification      | 30-60 min   | Most tasks       |
| Deep dive     | Full dependency tracing, edge case analysis | 2-4 hours   | Critical systems |
| Comprehensive | Cross-system analysis, security review      | Half day    | Major refactors  |
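
One way to apply a depth level is to state it in the research prompt itself. The invocations below reuse the command shown in Quick Start; the depth wording is an illustrative convention, not a built-in flag:

claude --agent task-researcher-agent "Quick scan: give me a structure overview of the src/ directory"
claude --agent task-researcher-agent "Deep dive: trace every dependency of the payment processing module, including edge cases and test coverage gaps"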

Configuration

| Parameter          | Description                           | Default                     |
|--------------------|---------------------------------------|-----------------------------|
| output_dir         | Research report location              | .copilot-tracking/research/ |
| depth              | Default research depth                | Standard                    |
| include_diagrams   | Generate ASCII architecture diagrams  | true                        |
| trace_dependencies | Map dependency trees                  | true                        |
| identify_patterns  | Catalog coding patterns               | true                        |
| flag_risks         | Highlight technical risks             | true                        |
| read_only          | Enforce no-modification constraint    | true                        |
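
If your agent tooling reads these parameters from the agent definition, a configured researcher might look like the sketch below. Where the parameters actually live (the agent YAML, a separate config file, or the prompt itself) depends on your setup, so treat the keys as illustrative rather than a guaranteed schema.

# .claude/agents/task-researcher-agent.yml (illustrative placement of parameters)
name: Task Researcher
model: claude-sonnet-4-20250514
output_dir: .copilot-tracking/research/
depth: standard
include_diagrams: true
trace_dependencies: true
identify_patterns: true
flag_risks: true
read_only: true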

Best Practices

  1. Start with the entry points and trace inward. Don't read files randomly. Start with the obvious entry point (route handler, main function, component) and follow the execution path through imports, function calls, and data flow. This approach builds understanding in the same order the code executes, making it easier to construct a mental model of how the system works. (A shell sketch of this follows this list.)

  2. Document file references with line numbers. When noting important code, include the exact file path and line number. "Authentication is checked in src/middleware/auth.ts:42-65" is useful. "There's some auth middleware somewhere" is not. Precise references let the person reading your research jump directly to the relevant code without searching.

  3. Distinguish between facts and inferences. Separate what you observed ("the middleware calls validateToken on line 42") from what you infer ("this likely means tokens are validated on every request"). Label inferences clearly so readers know which findings need verification. Wrong inferences presented as facts create dangerous assumptions in downstream planning.

  4. Map the blast radius of potential changes. For research that feeds into implementation planning, identify every file and system that would be affected by a change. Search for function callers, type references, and test files. A seemingly local change might have ten callers across five modules. Knowing the blast radius upfront prevents "I didn't know that would break that" surprises. (See the sketch after this list.)

  5. Record what you didn't find as explicitly as what you found. Missing test coverage, absent error handling, undocumented APIs, and mysterious configuration values are all valuable findings. "No tests exist for the payment processing module" is as important as describing how the module works. Missing things represent risks that planning must account for.
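
A minimal shell sketch of practices 1 and 4, assuming a TypeScript project with an Express-style entry point; the paths and the validateToken symbol are hypothetical placeholders:

# 1. Start at the entry point and trace inward
grep -n "app.use" src/index.ts                     # which routers and middleware are registered? (hypothetical path)
grep -rn "middleware/auth" src/                    # who imports the auth middleware?

# 4. Map the blast radius of changing validateToken (hypothetical symbol)
grep -rln "validateToken" src/ tests/              # every file that mentions the function
grep -rn "validateToken(" --include="*.ts" src/    # call sites in TypeScript files only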

Common Issues

Research scope creeps beyond the original question. It's tempting to follow every interesting code path, but research must serve a specific purpose. Define the research question and boundaries before starting. When you discover something interesting but out of scope, note it as a "further investigation" item rather than exploring it immediately. Scoped research is useful research; unfocused exploration produces unusable reports.

Research findings are too technical for the intended audience. Match the report detail level to who will read it. A report for a product manager should emphasize capabilities, limitations, and risks. A report for a senior engineer should include file paths, code patterns, and technical constraints. A report that mixes audiences serves neither well. Write for your reader.

Research takes too long and blocks implementation. Set a time budget proportional to the task risk and complexity. For well-understood systems with low-risk changes, 30 minutes of research is enough. For critical systems or major refactors, invest more. If research reveals the system is more complex than expected, report that finding early rather than continuing to research in silence while the team waits.
