Research Technical Mentor
Your agent for conducting structured technical research spikes — investigating technologies, patterns, and approaches through systematic research, producing actionable findings and recommendations.
When to Use This Agent
Choose Research Technical Mentor when:
- Running a technical spike to evaluate a new technology or approach
- Researching libraries, frameworks, or tools for a specific use case
- Investigating how to solve a technical problem you haven't encountered before
- Producing a research report with findings, trade-offs, and recommendations
- Evaluating proof-of-concept viability before committing to implementation
Consider alternatives when:
- You need actual implementation — use a developer agent
- You need architecture design — use an architect agent
- You need code review — use a code reviewer agent
Quick Start
```yaml
# .claude/agents/research-mentor.yml
name: Research Technical Mentor
model: claude-sonnet
tools:
  - Read
  - Write
  - Edit
  - Bash
  - Glob
  - Grep
description: Technical research agent for spikes, technology evaluation, and producing actionable research reports
```
Example invocation:
```bash
claude "Research WebSocket libraries for our Node.js backend — evaluate ws, Socket.IO, and uWebSockets for our use case of 10K concurrent connections with presence detection and room-based messaging"
```
Core Concepts
Research Spike Structure
| Phase | Activity | Duration |
|---|---|---|
| Define | Clarify the question and success criteria | 10% |
| Survey | Identify options and gather information | 30% |
| Evaluate | Compare options against criteria | 30% |
| Prototype | Build minimal proof-of-concept | 20% |
| Report | Document findings and recommendation | 10% |
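The duration weights above can be turned into a concrete schedule once a time box is chosen. A minimal sketch (the helper name and 8-hour day are assumptions, not part of the agent):

```python
# Split a research time box across the spike phases using the
# percentage weights from the table above.

PHASE_WEIGHTS = {
    "Define": 0.10,
    "Survey": 0.30,
    "Evaluate": 0.30,
    "Prototype": 0.20,
    "Report": 0.10,
}

def phase_budget(time_box_hours: float) -> dict[str, float]:
    """Return hours allocated to each phase for a given time box."""
    return {phase: round(time_box_hours * w, 1)
            for phase, w in PHASE_WEIGHTS.items()}

# One-day spike assuming 8 working hours:
print(phase_budget(8))
# Survey and Evaluate each get 2.4 hours; Define and Report get 0.8
```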
Research Report Template
```markdown
# Technical Spike: [Topic]

## Question
[Specific question this research answers]

## Context
[Why we're investigating, constraints, requirements]

## Options Evaluated

### Option A: [Name]
- Pros: [specific advantages]
- Cons: [specific disadvantages]
- Fit score: [1-5 against our criteria]

### Option B: [Name]
- Pros: ...
- Cons: ...
- Fit score: ...

## Recommendation
[Recommended option with justification]

## Risks and Unknowns
[What we still don't know]

## Next Steps
[Actions to take based on this research]
```
Configuration
| Parameter | Description | Default |
|---|---|---|
research_depth | Investigation depth (quick-survey, standard, deep-dive) | standard |
include_prototype | Build proof-of-concept | when-feasible |
max_options | Maximum options to evaluate in detail | 3 |
output_format | Report format (markdown, slides, executive-summary) | markdown |
time_box | Maximum research time | 1 day |
Best Practices
- Define the research question precisely before starting. "Should we use GraphQL?" is too broad. "Does GraphQL reduce our frontend data-fetching complexity for the dashboard feature, considering our team's REST experience and our 15-endpoint API?" is researchable and answerable.
- Evaluate against your specific requirements, not general popularity. A library with 50K GitHub stars may not fit your constraints. Define 3-5 evaluation criteria from your project's needs (performance at your scale, team familiarity, maintenance burden, license compatibility) and score each option against them.
- Build the riskiest part first in the prototype. If the question is "Can this library handle 10K concurrent connections?", the prototype should demonstrate that specific capability. Don't build a complete application — build the smallest thing that answers the critical question.
- Time-box research to prevent analysis paralysis. Set a fixed time budget (half a day for simple evaluations, 2 days for complex ones). At the deadline, report what you know, what you don't know, and whether additional research is justified.
- End with a clear recommendation and next steps. Research without a recommendation is an information dump. Take a position: "We recommend Option B because [reasons]. Next steps: create a JIRA ticket for implementation, allocate 1 sprint for integration."
Common Issues
Research goes too broad and doesn't answer the specific question. When evaluating a database, you don't need to research every database — narrow to the 3 most likely candidates based on your requirements and evaluate those deeply.
Prototype proves the concept but ignores production concerns. A WebSocket library that handles 10K connections locally may not handle them through a load balancer. Include production-relevant constraints in the prototype: proxy compatibility, reconnection behavior, and monitoring integration.
Research findings become outdated before implementation starts. If research sits for 3 months before implementation, the landscape may have changed. Include a "valid until" date on research reports and schedule a re-validation if implementation is delayed.
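A minimal sketch of stamping a report with a "valid until" date, assuming a 90-day shelf life (the helper and the shelf-life value are illustrative choices, not part of the agent):

```python
# Stamp a research report with a "valid until" date so stale findings
# trigger re-validation before implementation starts.

from datetime import date, timedelta

def valid_until(research_date: date, shelf_life_days: int = 90) -> date:
    """Date after which the spike's findings should be re-validated."""
    return research_date + timedelta(days=shelf_life_days)

report_footer = f"Valid until: {valid_until(date(2024, 3, 1))}"
print(report_footer)  # Valid until: 2024-05-30
```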