
Research Technical Mentor

Battle-tested agent for systematic technical research and validation. Includes structured workflows, validation checks, and reusable patterns for expert advisors.

Agent · Cliptics · expert advisors · v1.0.0 · MIT

Research Technical Mentor

Your agent for conducting structured technical research spikes — investigating technologies, patterns, and approaches through systematic research, producing actionable findings and recommendations.

When to Use This Agent

Choose Research Technical Mentor when:

  • Running a technical spike to evaluate a new technology or approach
  • Researching libraries, frameworks, or tools for a specific use case
  • Investigating how to solve a technical problem you haven't encountered before
  • Producing a research report with findings, trade-offs, and recommendations
  • Evaluating proof-of-concept viability before committing to implementation

Consider alternatives when:

  • You need actual implementation — use a developer agent
  • You need architecture design — use an architect agent
  • You need code review — use a code reviewer agent

Quick Start

```yaml
# .claude/agents/research-mentor.yml
name: Research Technical Mentor
model: claude-sonnet
tools:
  - Read
  - Write
  - Edit
  - Bash
  - Glob
  - Grep
description: Technical research agent for spikes, technology evaluation, and producing actionable research reports
```

Example invocation:

```shell
claude "Research WebSocket libraries for our Node.js backend — evaluate ws, Socket.IO, and uWebSockets for our use case of 10K concurrent connections with presence detection and room-based messaging"
```

Core Concepts

Research Spike Structure

| Phase | Activity | Duration |
|---|---|---|
| Define | Clarify the question and success criteria | 10% |
| Survey | Identify options and gather information | 30% |
| Evaluate | Compare options against criteria | 30% |
| Prototype | Build minimal proof-of-concept | 20% |
| Report | Document findings and recommendation | 10% |
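As a rough sketch, the percentage split above can be turned into a per-phase schedule for a given time box. The phase names and shares come straight from the table; the helper function name and the 8-hour example are illustrative, not part of the agent's API:

```python
# Sketch: split a research time box across the spike phases above.
# Percentages mirror the table; the helper name is illustrative.

PHASES = [
    ("Define", 0.10),
    ("Survey", 0.30),
    ("Evaluate", 0.30),
    ("Prototype", 0.20),
    ("Report", 0.10),
]

def phase_budget(total_hours: float) -> dict[str, float]:
    """Allocate the total time box proportionally to each phase."""
    return {name: round(total_hours * share, 2) for name, share in PHASES}

budget = phase_budget(8.0)  # one working day
print(budget)
# → {'Define': 0.8, 'Survey': 2.4, 'Evaluate': 2.4, 'Prototype': 1.6, 'Report': 0.8}
```

Writing the split down up front makes it obvious when Survey is eating the Prototype budget.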

Research Report Template

```markdown
# Technical Spike: [Topic]

## Question
[Specific question this research answers]

## Context
[Why we're investigating, constraints, requirements]

## Options Evaluated

### Option A: [Name]
- Pros: [specific advantages]
- Cons: [specific disadvantages]
- Fit score: [1-5 against our criteria]

### Option B: [Name]
- Pros: ...
- Cons: ...
- Fit score: ...

## Recommendation
[Recommended option with justification]

## Risks and Unknowns
[What we still don't know]

## Next Steps
[Actions to take based on this research]
```

Configuration

| Parameter | Description | Default |
|---|---|---|
| `research_depth` | Investigation depth (quick-survey, standard, deep-dive) | standard |
| `include_prototype` | Build proof-of-concept | when-feasible |
| `max_options` | Maximum options to evaluate in detail | 3 |
| `output_format` | Report format (markdown, slides, executive-summary) | markdown |
| `time_box` | Maximum research time | 1 day |
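Assuming these parameters live alongside the agent definition (the exact file location and nesting are assumptions, not documented behavior), a deep-dive evaluation might be configured as:

```yaml
# Hypothetical configuration — parameter names are from the table above;
# where this block lives in the agent file is an assumption.
research_depth: deep-dive
include_prototype: when-feasible
max_options: 3
output_format: executive-summary
time_box: 1 day
```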

Best Practices

  1. Define the research question precisely before starting. "Should we use GraphQL?" is too broad. "Does GraphQL reduce our frontend data-fetching complexity for the dashboard feature, considering our team's REST experience and our 15-endpoint API?" is researchable and answerable.

  2. Evaluate against your specific requirements, not general popularity. A library with 50K GitHub stars may not fit your constraints. Define 3-5 evaluation criteria from your project's needs (performance at your scale, team familiarity, maintenance burden, license compatibility) and score each option against them.
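Best practice 2 amounts to a small weighted scoring matrix. A minimal sketch, assuming four project-specific criteria — the weights and the per-option scores below are placeholder values, not real benchmark results:

```python
# Sketch of criteria-based scoring (best practice 2). Criteria, weights,
# and scores are placeholders to illustrate the method.

CRITERIA_WEIGHTS = {
    "performance_at_scale": 0.35,
    "team_familiarity": 0.25,
    "maintenance_burden": 0.25,
    "license_compatibility": 0.15,
}

def fit_score(scores: dict[str, int]) -> float:
    """Weighted average of 1-5 scores against project-specific criteria."""
    return round(sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items()), 2)

option_a = fit_score({"performance_at_scale": 5, "team_familiarity": 2,
                      "maintenance_burden": 3, "license_compatibility": 5})
option_b = fit_score({"performance_at_scale": 3, "team_familiarity": 5,
                      "maintenance_burden": 4, "license_compatibility": 5})
print(option_a, option_b)  # → 3.75 4.05
```

The point of making weights explicit is that "50K GitHub stars" never appears as a criterion unless you deliberately put it there.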

  3. Build the riskiest part first in the prototype. If the question is "Can this library handle 10K concurrent connections?", the prototype should demonstrate that specific capability. Don't build a complete application — build the smallest thing that answers the critical question.

  4. Time-box research to prevent analysis paralysis. Set a fixed time budget (half a day for simple evaluations, 2 days for complex ones). At the deadline, report what you know, what you don't know, and whether additional research is justified.

  5. End with a clear recommendation and next steps. Research without a recommendation is an information dump. Take a position: "We recommend Option B because [reasons]. Next steps: create a JIRA ticket for implementation, allocate 1 sprint for integration."

Common Issues

Research goes too broad and doesn't answer the specific question. When evaluating a database, you don't need to research every database — narrow to the 3 most likely candidates based on your requirements and evaluate those deeply.

Prototype proves the concept but ignores production concerns. A WebSocket library that handles 10K connections locally may not handle them through a load balancer. Include production-relevant constraints in the prototype: proxy compatibility, reconnection behavior, and monitoring integration.

Research findings become outdated before implementation starts. If research sits for 3 months before implementation, the landscape may have changed. Include a "valid until" date on research reports and schedule a re-validation if implementation is delayed.
