Research Brief Generator Partner
An agent that transforms refined research queries into structured research briefs for deep research teams. Includes structured workflows, validation checks, and reusable patterns.
An agent that transforms refined research queries into comprehensive, structured research briefs with clear objectives, methodology guidance, resource allocation, and success criteria that guide effective research execution.
When to Use This Agent
Choose Research Brief Generator when:
- Creating structured research plans from clarified queries
- Breaking complex research questions into manageable sub-tasks
- Defining research methodology and resource allocation
- Setting scope boundaries and success criteria for research projects
- Preparing research assignments for distributed research teams
Consider alternatives when:
- Clarifying ambiguous queries (use a query clarifier agent first)
- Conducting the actual research (use a research analyst agent)
- Generating final reports from findings (use a report generator agent)
Quick Start
```yaml
# .claude/agents/research-brief-generator.yml
name: Research Brief Generator
model: claude-sonnet-4-20250514
tools:
  - Read
  - Write
prompt: |
  You are a research brief specialist. Transform refined queries into
  comprehensive research briefs with objectives, methodology, scope,
  resource allocation, and success criteria. Create actionable plans
  that guide researchers to produce high-quality findings efficiently.
```
Example invocation:
```shell
claude --agent research-brief-generator "Create a research brief for investigating the total cost of ownership comparison between self-hosted Kubernetes and managed Kubernetes services for mid-size companies (500-2000 employees)"
```
Core Concepts
Brief Structure
```markdown
# Research Brief: {Title}

## Objective
{What this research aims to determine}

## Background
{Why this research is needed, decision context}

## Research Questions
1. Primary: {Main question to answer}
2. Secondary: {Supporting questions}

## Scope
- In scope: {What to investigate}
- Out of scope: {What to exclude}
- Time period: {Date range for data}

## Methodology
- Sources to consult
- Data collection approach
- Analysis framework

## Deliverables
- {Expected outputs with format}

## Timeline
- Phase 1: {dates} - {activities}
- Phase 2: {dates} - {activities}

## Success Criteria
- {How to determine if research is complete and useful}
```
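The skeleton above can be populated programmatically. A minimal sketch (the field names and `render_brief` helper are illustrative, not part of any agent runtime):

```python
# Render a subset of the brief skeleton from a plain dict, so every
# generated brief carries the same section structure.
BRIEF_TEMPLATE = """\
# Research Brief: {title}

## Objective
{objective}

## Research Questions
{questions}

## Scope
- In scope: {in_scope}
- Out of scope: {out_of_scope}
"""

def render_brief(brief: dict) -> str:
    # Number the research questions 1..n for the template.
    questions = "\n".join(
        f"{i}. {q}" for i, q in enumerate(brief["questions"], start=1)
    )
    return BRIEF_TEMPLATE.format(
        title=brief["title"],
        objective=brief["objective"],
        questions=questions,
        in_scope=brief["in_scope"],
        out_of_scope=brief["out_of_scope"],
    )

brief = {
    "title": "Kubernetes TCO Comparison",
    "objective": "Compare self-hosted vs. managed Kubernetes TCO for mid-size companies.",
    "questions": [
        "What is the TCO of self-hosted Kubernetes?",
        "What is the TCO of managed Kubernetes?",
    ],
    "in_scope": "Infrastructure, operations, and support costs",
    "out_of_scope": "Developer training costs",
}
print(render_brief(brief))
```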
Methodology Selection Guide
| Research Question Type | Recommended Methodology |
|---|---|
| "What exists?" (Landscape) | Market scan, vendor review |
| "How much?" (Quantitative) | Data analysis, surveys, benchmarks |
| "Why?" (Causal) | Case studies, expert interviews |
| "What should we do?" (Strategic) | Framework analysis, scenario planning |
| "How do they compare?" (Comparative) | Matrix analysis, feature comparison |
| "What will happen?" (Predictive) | Trend analysis, expert forecasting |
Configuration
| Parameter | Description | Default |
|---|---|---|
| `brief_format` | Output template style | Full structured brief |
| `max_sub_questions` | Maximum number of sub-questions | 5 |
| `include_timeline` | Add timeline estimates | `true` |
| `include_methodology` | Specify research methods | `true` |
| `scope_constraints` | Default scope boundaries | Time + geography |
| `success_criteria` | Define completion standards | `true` |
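One way to apply these defaults is to merge them with per-brief overrides. A sketch (the keys mirror the table; the `resolve_config` helper is hypothetical, not part of any agent runtime):

```python
# Defaults from the configuration table.
DEFAULTS = {
    "brief_format": "full",
    "max_sub_questions": 5,
    "include_timeline": True,
    "include_methodology": True,
    "scope_constraints": ["time", "geography"],
    "success_criteria": True,
}

def resolve_config(overrides: dict) -> dict:
    # Reject typos rather than silently ignoring unknown parameters.
    unknown = set(overrides) - set(DEFAULTS)
    if unknown:
        raise KeyError(f"Unknown parameters: {sorted(unknown)}")
    return {**DEFAULTS, **overrides}
```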
Best Practices
- Write the success criteria before the methodology. If you can't articulate what "done" looks like, the research will expand indefinitely. Define what a successful outcome looks like: "A comparison table covering at least 5 managed Kubernetes providers with pricing, feature parity, and operational overhead metrics, supported by at least 3 customer case studies each." This drives methodology choices and prevents scope creep.
- Break the primary question into 3-5 independently researchable sub-questions. Each sub-question should be answerable without completing the others, enabling parallel research. "What is the TCO of self-hosted Kubernetes?" and "What is the TCO of managed Kubernetes?" can be researched simultaneously. Dependencies between questions should be explicit and minimized.
- Specify source types and minimum counts per question. Don't leave source selection to chance. "Consult at least 2 vendor pricing pages, 3 independent benchmark studies, and 2 practitioner case studies" produces more reliable findings than "research the topic." Source type requirements ensure triangulation and prevent over-reliance on any single source category.
- Include explicit scope boundaries for what not to research. Without boundaries, researchers will expand scope to include adjacent topics. If the brief is about Kubernetes TCO, explicitly exclude: container runtime comparison, CI/CD pipeline costs, and developer training costs (unless they're specifically relevant). Clear exclusions save more time than clear inclusions.
- Align the brief's depth with the decision's importance. A brief for a $5M platform decision should call for deep research: vendor interviews, reference customers, proof-of-concept results. A brief for a team tooling choice should call for a quick comparison: feature matrix, pricing, community size. Over-researching low-stakes decisions wastes resources; under-researching high-stakes decisions creates risk.
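Several of these practices are mechanically checkable before a brief is handed off. A rough pre-flight sketch (field names are illustrative):

```python
# Lint a brief dict against the practices above. Returns a list of
# problems rather than raising, so a brief can be reviewed in one pass.
def lint_brief(brief: dict, max_sub_questions: int = 5) -> list[str]:
    problems = []
    subs = brief.get("sub_questions", [])
    if not 3 <= len(subs) <= max_sub_questions:
        problems.append(
            f"expected 3-{max_sub_questions} sub-questions, got {len(subs)}"
        )
    if not brief.get("success_criteria"):
        problems.append("success criteria missing")
    if not brief.get("out_of_scope"):
        problems.append("no explicit scope exclusions")
    return problems
```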
Common Issues
Briefs are too prescriptive, constraining researcher creativity. Specify what to find, not how to find it. "Determine the average infrastructure cost per pod across managed Kubernetes providers" tells the researcher what's needed. "Search Gartner reports for cost data, then cross-reference with AWS pricing calculator" tells them how to do their job. Leave methodology suggestions flexible.
Research brief doesn't account for the decision timeline. If the business decision is in two weeks, a brief calling for six weeks of research is useless. Work backward from the decision date: reserve the final week for report generation and review, allocate remaining time to research phases, and adjust depth to fit the available time. A timely good-enough brief is more valuable than a perfect late one.
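The back-planning step can be made concrete. A sketch, assuming a fixed one-week report reserve and evenly split research phases (both assumptions, adjust to the actual brief):

```python
# Work backward from the decision date: reserve the final week for
# report generation, then split the remaining days across phases.
from datetime import date, timedelta

def backplan(decision_date: date, start: date, phases: int = 2) -> dict:
    report_start = decision_date - timedelta(weeks=1)
    research_days = (report_start - start).days
    if research_days <= 0:
        raise ValueError("No research time left before the report week")
    per_phase = research_days // phases
    schedule = {
        f"phase_{i + 1}": (
            start + timedelta(days=i * per_phase),
            start + timedelta(days=(i + 1) * per_phase),
        )
        for i in range(phases)
    }
    schedule["report"] = (report_start, decision_date)
    return schedule
```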
Sub-questions overlap, causing duplicate research effort. Map sub-questions against each other and verify they're mutually exclusive. If "What are the infrastructure costs?" and "What are the operational costs?" overlap on monitoring and logging costs, assign that overlap to one question explicitly. The brief should be the single source of truth for research scope allocation.
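A coarse first pass at the overlap check can be automated by flagging sub-question pairs that share distinctive terms. This is a keyword heuristic, not semantic analysis, so flagged pairs still need a human call on where the overlap belongs:

```python
# Flag sub-question pairs sharing non-stopword terms.
from itertools import combinations

STOPWORDS = {"what", "are", "the", "is", "of", "how", "a", "an"}

def overlapping_pairs(questions: list[str]) -> list[tuple[str, str, set[str]]]:
    def terms(q: str) -> set[str]:
        return {w.strip("?.,").lower() for w in q.split()} - STOPWORDS

    flagged = []
    for a, b in combinations(questions, 2):
        shared = terms(a) & terms(b)
        if shared:
            flagged.append((a, b, shared))
    return flagged
```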