Research Analyst Guru

Battle-tested agent for comprehensive research needs. Includes structured workflows, validation checks, and reusable patterns for deep research teams.

AgentCliptics · deep research team · v1.0.0 · MIT

A senior research agent that conducts thorough cross-domain research covering information discovery, data synthesis, trend analysis, and insight generation to deliver comprehensive, accurate findings that enable strategic decisions.

When to Use This Agent

Choose Research Analyst Guru when:

  • Conducting deep research across multiple domains and sources
  • Synthesizing information from diverse sources into unified findings
  • Analyzing market trends, technology developments, or industry shifts
  • Generating strategic insights from raw research data
  • Preparing research briefings for leadership or stakeholder presentations

Consider alternatives when:

  • Researching within a specific codebase (use a code explorer agent)
  • Fact-checking specific claims (use a fact-checker agent)
  • Generating reports from existing research (use a report generator agent)

Quick Start

# .claude/agents/research-analyst-guru.yml
name: Research Analyst
model: claude-sonnet-4-20250514
tools:
  - Read
  - Write
  - Bash
  - Grep
  - WebSearch
prompt: |
  You are a senior research analyst. Conduct thorough research across
  diverse domains. Discover, validate, and synthesize information into
  actionable insights. Always cite sources, assess confidence levels,
  and distinguish facts from analysis.

Example invocation:

claude --agent research-analyst-guru "Research the current state of edge computing adoption in manufacturing. Cover key vendors, implementation patterns, ROI case studies, and barriers to adoption. Provide strategic recommendations."

Core Concepts

Research Methodology

Phase        Activities                                        Output
Scoping      Define questions, boundaries, sources             Research plan
Discovery    Search, collect, catalog sources                  Source library
Analysis     Read, extract, evaluate findings                  Annotated findings
Synthesis    Cross-reference, identify patterns                Unified narrative
Validation   Verify claims, check sources, assess confidence   Validated report
Delivery     Structure, format, present                        Final deliverable
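The six-phase workflow above can be sketched as an ordered pipeline. This is an illustrative sketch, not part of the agent itself; the `PHASES` structure and `next_phase` helper are assumptions mirroring the table.

```python
# Hypothetical sketch of the six-phase workflow; names mirror the table above.
PHASES = [
    ("Scoping",    "Research plan"),
    ("Discovery",  "Source library"),
    ("Analysis",   "Annotated findings"),
    ("Synthesis",  "Unified narrative"),
    ("Validation", "Validated report"),
    ("Delivery",   "Final deliverable"),
]

def next_phase(current: str) -> str:
    """Return the phase that follows `current`, or raise when the workflow is done."""
    names = [name for name, _ in PHASES]
    i = names.index(current)
    if i + 1 == len(names):
        raise StopIteration("Workflow complete")
    return names[i + 1]
```

Each phase's output is the input to the next, which is why skipping Validation leaves Delivery building on unverified claims.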

Source Evaluation Framework

CRAAP Test for Each Source:
  Currency:    When was it published/updated?
  Relevance:   Does it address our specific question?
  Authority:   Who published it? What's their expertise?
  Accuracy:    Is it supported by evidence? Peer-reviewed?
  Purpose:     Why does this source exist? (Inform, sell, persuade?)
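The CRAAP test can be operationalized as a simple scorer. This is a minimal sketch under assumed conventions: the 1–5 rating scale, equal weighting, and the 3.0 inclusion threshold are all illustrative choices, not part of the framework itself.

```python
# Illustrative CRAAP-test scorer; scale, weights, and threshold are assumptions.
CRITERIA = ["currency", "relevance", "authority", "accuracy", "purpose"]

def craap_score(ratings: dict) -> float:
    """Average 1-5 ratings across the five CRAAP criteria."""
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

def evaluate_source(name: str, ratings: dict, threshold: float = 3.0) -> dict:
    score = craap_score(ratings)
    return {
        "source": name,
        "score": score,
        # Weak sources are flagged, not excluded -- their existence is data too.
        "verdict": "include" if score >= threshold else "flag-as-weak",
    }

vendor_study = evaluate_source(
    "Vendor ROI case study",
    {"currency": 5, "relevance": 4, "authority": 2, "accuracy": 2, "purpose": 1},
)
# High currency and relevance cannot rescue low authority, accuracy, and purpose.
```

Here the vendor study scores 2.8 and is flagged rather than silently dropped, matching the practice of treating biased sources as data.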

Insight Generation Process

Data Points β†’ Patterns β†’ Insights β†’ Recommendations
     β”‚            β”‚          β”‚            β”‚
  Raw facts    Trends     "So what?"   "Now what?"
  Statistics   Clusters   Implications Actions
  Quotes       Anomalies  Predictions  Priorities
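The ladder above can be walked in code. This is a hypothetical sketch: the sample data points and the wording produced at each stage are invented for illustration.

```python
# Hypothetical walk up the Data Points -> Patterns -> Insights -> Recommendations ladder.
data_points = [42, 48, 55, 61, 58, 40]  # e.g. % efficiency gains reported across studies

def find_pattern(points: list) -> str:
    """Patterns stage: reduce raw facts to a trend statement."""
    lo, hi = min(points), max(points)
    return f"Reported gains cluster in the {lo}-{hi}% range"

def derive_insight(pattern: str) -> str:
    """Insights stage: the 'So what?'."""
    return f"{pattern}; real improvements exist but vary widely by context"

def recommend(insight: str) -> str:
    """Recommendations stage: the 'Now what?'."""
    return f"Because {insight.lower()}, pilot before committing budget"
```

Each stage consumes the previous stage's output, so a recommendation can always be traced back to the raw facts that support it.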

Configuration

Parameter             Description                      Default
research_depth        How deep to investigate          Standard
source_min            Minimum sources per finding      3
time_horizon          How far back to search           2 years
confidence_reporting  Report confidence per finding    true
cross_domain          Search across related domains    true
output_format         Research deliverable format     Markdown
include_raw_data      Append raw findings              true
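These parameters might be expressed in the agent file like this. The placement under a `config:` key is an assumption for illustration; only the parameter names and defaults come from the table above.

```yaml
# Hypothetical placement -- names and defaults match the configuration table.
config:
  research_depth: Standard
  source_min: 3
  time_horizon: 2 years
  confidence_reporting: true
  cross_domain: true
  output_format: Markdown
  include_raw_data: true
```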

Best Practices

  1. Define the research question precisely before searching. Broad questions produce broad, unfocused research. "What's happening in AI?" could fill volumes. "What AI-powered customer service solutions have been deployed by Fortune 500 retailers in the last 18 months, and what measurable impact have they reported?" produces targeted, actionable findings. Spend 10% of research time refining the question.

  2. Use the CRAAP test on every source before including it in findings. Not all information is equally reliable. A vendor case study claiming 300% ROI is marketing, not research. An independent analyst report with documented methodology is evidence. Evaluate Currency, Relevance, Authority, Accuracy, and Purpose for every source. Flag biased sources explicitly rather than excluding themβ€”their existence is data too.

  3. Synthesize across sources rather than summarizing each source independently. Listing "Source A says X, Source B says Y" is not synthesis. Synthesis identifies patterns: "Three independent studies found 40-60% efficiency gains, while two vendor reports claimed 200%+ gains, suggesting real improvements exist but vendor claims are inflated by approximately 3-4x." The insight emerges from the comparison, not from individual sources.

  4. Distinguish between facts, analysis, and speculation explicitly. Label each finding: "Fact: Gartner reports 45% of enterprises adopted edge computing in 2024." "Analysis: This adoption rate suggests a mainstream tipping point has been reached." "Speculation: Edge compute spending may exceed cloud spending for manufacturing by 2028." Readers need to know which findings are solid ground and which are informed projections.

  5. Present findings with a clear "so what" for each one. Raw findings without interpretation leave the reader to draw their own conclusions, which they may get wrong or simply skip. For each major finding, add the implication: "This means our competitors likely already have edge computing pilots, making our timeline for evaluation more urgent." Connect research to the reader's context and decisions.
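Practices 4 and 5 can be combined into one structure: every finding carries an explicit type label and a "so what". This is an illustrative sketch; the field names and example findings are assumptions that mirror the text above.

```python
# Illustrative finding structure for practices 4 and 5; field names are assumptions.
FINDING_TYPES = ("fact", "analysis", "speculation")

def make_finding(kind: str, text: str, implication: str = "") -> dict:
    """Attach a type label (practice 4) and a 'so what' (practice 5) to a finding."""
    if kind not in FINDING_TYPES:
        raise ValueError(f"kind must be one of {FINDING_TYPES}")
    return {"kind": kind, "text": text, "implication": implication}

briefing = [
    make_finding("fact",
                 "Gartner reports 45% of enterprises adopted edge computing in 2024."),
    make_finding("analysis",
                 "This adoption rate suggests a mainstream tipping point has been reached.",
                 "Competitors likely already run pilots; our evaluation timeline is urgent."),
    make_finding("speculation",
                 "Edge compute spending may exceed cloud spending for manufacturing by 2028."),
]
```

Forcing a `kind` on every entry makes it impossible to slip speculation into a briefing dressed as fact.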

Common Issues

Research is comprehensive but doesn't answer the original question. This happens when interesting tangents lead the researcher away from the core question. Revisit the original research question after completing each section and ask: "Does this finding help answer the question?" Remove or move to appendices any findings that are interesting but not relevant. Discipline in scope management produces actionable research.

Conflicting sources make it impossible to reach a definitive conclusion. Conflicting information is itself a finding, not a problem to hide. Report the conflict explicitly, analyze why sources disagree (different methodologies, different time periods, different definitions), and provide your assessment of which position is better supported. Decision-makers can work with uncertainty; they cannot work with concealed contradictions.

Research takes too long relative to the decision timeline. Set a time budget proportional to the decision's importance and reversibility. A $10M technology investment warrants two weeks of research. A blog post topic selection warrants two hours. Use progressive deepening: produce a preliminary brief quickly, then deepen specific areas based on stakeholder feedback. Delivering 80% of the insights in 50% of the time is usually more valuable than perfection.
