
Technical Researcher Companion

Streamline your workflow with this agent. Includes structured workflows, validation checks, and reusable patterns for deep technical research.

AgentCliptics · deep research team · v1.0.0 · MIT


An agent focused on deep technical research covering architecture analysis, technology evaluation, implementation pattern investigation, and technical feasibility assessment for software engineering decisions.

When to Use This Agent

Choose Technical Researcher when:

  • Evaluating technologies, frameworks, or libraries for a project
  • Investigating implementation patterns for specific technical challenges
  • Conducting technical feasibility assessments for proposed features
  • Researching architectural patterns and their trade-offs
  • Deep-diving into technical documentation and specifications

Consider alternatives when:

  • Doing business or market research (use a research analyst)
  • Implementing code based on research findings (use a development agent)
  • Reviewing existing architecture (use an architecture review agent)

Quick Start

```yaml
# .claude/agents/technical-researcher.yml
name: Technical Researcher
model: claude-sonnet-4-20250514
tools:
  - Read
  - Write
  - Bash
  - Glob
  - Grep
  - WebSearch
prompt: |
  You are a technical researcher. Investigate technologies, architecture
  patterns, and implementation approaches. Produce findings with code
  examples, benchmark data, and practical trade-off analysis. Focus on
  production readiness and real-world experience, not just theoretical
  capabilities.
```

Example invocation:

```bash
claude --agent technical-researcher "Research WebSocket vs Server-Sent Events for our real-time notification system. Compare performance, browser support, scaling characteristics, and implementation complexity with code examples."
```

Core Concepts

Technical Evaluation Framework

| Dimension | Evaluation Criteria | Sources |
|-----------|---------------------|---------|
| Functionality | Does it solve the technical problem? | Docs, tutorials, examples |
| Performance | Benchmarks, latency, throughput | Benchmark studies, load tests |
| Scalability | How does it handle growth? | Architecture docs, case studies |
| Maturity | Community size, release history, stability | GitHub, npm stats |
| Operations | Monitoring, debugging, maintenance burden | Production experience reports |
| Integration | Compatibility with existing stack | API docs, plugin ecosystem |
| Security | Vulnerability history, security model | CVE databases, audits |
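
To make the framework actionable, the agent can score each dimension per option and weight the scores by project priorities. A minimal sketch in TypeScript; the dimension names mirror the table above, while the weights and scores are hypothetical examples, not values the template prescribes:

```typescript
// Hypothetical weighted scoring over the evaluation dimensions above.
// Weights reflect project priorities; scores (0-10) come from research findings.
type Dimension =
  | "functionality" | "performance" | "scalability" | "maturity"
  | "operations" | "integration" | "security";

function weightedScore(
  scores: Record<Dimension, number>,
  weights: Record<Dimension, number>,
): number {
  const dims = Object.keys(scores) as Dimension[];
  const totalWeight = dims.reduce((sum, d) => sum + weights[d], 0);
  const weighted = dims.reduce((sum, d) => sum + scores[d] * weights[d], 0);
  return weighted / totalWeight; // normalized back to the 0-10 scale
}

// Example: an enterprise project weighting maturity and security heavily.
const weights: Record<Dimension, number> = {
  functionality: 3, performance: 2, scalability: 2, maturity: 3,
  operations: 2, integration: 1, security: 3,
};
const optionA: Record<Dimension, number> = {
  functionality: 8, performance: 9, scalability: 7, maturity: 4,
  operations: 5, integration: 6, security: 5,
};
console.log(weightedScore(optionA, weights).toFixed(2)); // "6.19"
```

A fast but immature option scores lower under these weights than a raw feature comparison would suggest, which is exactly the contextual evaluation the framework is meant to surface.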

Technology Comparison Template

```markdown
## Comparison: {Option A} vs {Option B}

### Feature Matrix

| Capability | Option A | Option B |
|------------|----------|----------|
| {Feature 1} | {Support level} | {Support level} |

### Performance Benchmarks

| Metric | Option A | Option B | Methodology |
|--------|----------|----------|-------------|
| Throughput | X req/s | Y req/s | {How measured} |
| Latency (p99) | X ms | Y ms | {How measured} |

### Production Readiness

| Factor | Option A | Option B |
|--------|----------|----------|
| Companies using in prod | {list} | {list} |
| Known limitations | {list} | {list} |
| Migration complexity | {assessment} | {assessment} |

### Recommendation

{Which option for which context, with reasoning}
```

Configuration

| Parameter | Description | Default |
|-----------|-------------|---------|
| `evaluation_depth` | How deep to evaluate each option | Moderate |
| `include_benchmarks` | Include performance benchmark data | true |
| `include_code` | Add code examples | true |
| `production_focus` | Prioritize production readiness | true |
| `comparison_format` | Side-by-side vs. narrative | Side-by-side |
| `risk_assessment` | Include adoption risk analysis | true |
| `output_format` | Research report format | Markdown |

Best Practices

  1. Evaluate technologies in the context of your specific constraints. A technology that's perfect for a startup might be wrong for an enterprise. Evaluate against your actual requirements: team expertise, existing stack, deployment environment, scale requirements, and compliance needs. Generic technology rankings are less useful than contextual evaluations.

  2. Include code examples that demonstrate real use cases, not hello worlds. A "hello world" example proves a technology works. A production-pattern example (error handling, connection pooling, retry logic, monitoring hooks) reveals how the technology behaves under real conditions. Write code examples that match your actual use case as closely as possible (see the first sketch after this list).

  3. Cite production experience reports alongside documentation. Official documentation describes intended behavior. Blog posts, conference talks, and GitHub issues from production users describe actual behavior. "The documentation says it supports 100K connections, but three production reports indicate issues above 50K" is critical context for decision-making.

  4. Benchmark with realistic data and access patterns. Benchmarks with random data and uniform access patterns are misleading. Use data that resembles your actual workload: realistic data distributions, representative query patterns, and production-like infrastructure. A database that performs well with uniform keys may struggle with skewed distributions that match your real data.

  5. Assess the exit cost, not just the entry cost. How easy is it to start using the technology? More importantly, how hard is it to stop using it? Proprietary APIs, custom data formats, and deep framework coupling increase lock-in risk. Evaluate whether the technology can be replaced in the future without rewriting the entire system (see the second sketch after this list).
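
For practice 2, the difference is easiest to see in code. Below is a minimal sketch of a "production-pattern" example rather than a hello world: a fetch wrapper with a timeout, retries, and exponential backoff. The function and its parameters are illustrative, not from any specific library:

```typescript
// Production-pattern example: timeout, retry on server/network errors,
// and exponential backoff. A hello world would stop at a bare fetch().
async function fetchWithRetry(
  url: string,
  retries = 3,
  timeoutMs = 5000,
): Promise<Response> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs);
    try {
      const res = await fetch(url, { signal: controller.signal });
      if (res.status < 500) return res; // success or client error: caller decides
      lastError = new Error(`Server error: ${res.status}`);
    } catch (err) {
      lastError = err; // network failure or timeout
    } finally {
      clearTimeout(timer);
    }
    // Exponential backoff before the next attempt: 200ms, 400ms, 800ms, ...
    await new Promise((resolve) => setTimeout(resolve, 200 * 2 ** attempt));
  }
  throw lastError;
}
```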
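
For practice 5, one common way to cap exit cost is to code against a thin interface instead of a vendor SDK. A minimal sketch, with hypothetical names (`MessageQueue`, `InMemoryQueue`) chosen for illustration:

```typescript
// Application code depends only on this interface, not on any vendor API.
// Swapping providers later means writing one new adapter, not rewriting callers.
interface MessageQueue {
  publish(topic: string, payload: string): Promise<void>;
  subscribe(topic: string, handler: (payload: string) => void): void;
}

// In-memory adapter used here for illustration; a real project would add
// an adapter for its actual broker behind the same interface.
class InMemoryQueue implements MessageQueue {
  private handlers = new Map<string, Array<(payload: string) => void>>();

  async publish(topic: string, payload: string): Promise<void> {
    for (const handler of this.handlers.get(topic) ?? []) handler(payload);
  }

  subscribe(topic: string, handler: (payload: string) => void): void {
    const list = this.handlers.get(topic) ?? [];
    list.push(handler);
    this.handlers.set(topic, list);
  }
}

const queue: MessageQueue = new InMemoryQueue();
queue.subscribe("notifications", (msg) => console.log("received:", msg));
queue.publish("notifications", "hello");
```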

Common Issues

Research compares features without comparing operational complexity. Two databases might have the same features but vastly different operational requirements. One might need a dedicated DBA team; another might be fully managed. Include operational considerations: deployment complexity, monitoring requirements, backup procedures, upgrade paths, and debugging tools. The team that operates the technology daily cares more about these than about feature lists.

Benchmark results from the internet don't match your experience. Published benchmarks often use optimal configurations, hardware, and workloads that don't match your environment. Run your own benchmarks with your data, your queries, and your infrastructure. If you can't run benchmarks, at least verify that published benchmarks used comparable configurations and workloads to your planned usage.
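
When you do run your own benchmarks, generating keys from a skewed distribution is one way to approximate real access patterns. A minimal sketch, assuming a Zipf-like workload where a few hot keys dominate (the function and its parameters are hypothetical):

```typescript
// Sample keys with Zipf-like skew: key rank i is drawn with probability
// proportional to 1 / (i+1)^skew, so hot keys appear far more often than
// cold ones -- closer to real workloads than uniform random keys.
function zipfSampler(numKeys: number, skew = 1.0): () => number {
  const weights = Array.from({ length: numKeys }, (_, i) => 1 / (i + 1) ** skew);
  const total = weights.reduce((a, b) => a + b, 0);
  const cumulative: number[] = [];
  let acc = 0;
  for (const w of weights) {
    acc += w / total;
    cumulative.push(acc);
  }
  return () => {
    const r = Math.random();
    // Linear scan keeps the sketch simple; binary search would be faster.
    const idx = cumulative.findIndex((c) => r <= c);
    return idx === -1 ? numKeys - 1 : idx; // guard against float rounding
  };
}

const nextKey = zipfSampler(10_000, 1.2);
console.log(Array.from({ length: 5 }, nextKey)); // e.g. [0, 3, 0, 41, 1]
```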

Research recommends the newer technology without accounting for maturity risks. New technologies often have better architectures but less battle-tested code, smaller communities, fewer integrations, and thinner documentation. Weight maturity alongside capability. If your project is risk-tolerant (startup, internal tool), newer technologies may be worth the risk. If your project requires reliability (financial system, healthcare), prefer mature solutions even if they're less elegant.
