Expert Deep Research Executor Suite
A comprehensive skill that enables multi-step autonomous research with source aggregation. Built for Claude Code with best practices and real-world patterns.
Comprehensive research execution framework that conducts multi-source deep dives on technical topics, synthesizes findings, evaluates source credibility, and produces structured research reports.
When to Use This Skill
Choose Deep Research Executor when:
- Evaluating technologies, libraries, or frameworks for adoption
- Investigating complex bugs that require understanding underlying systems
- Preparing technical documentation requiring authoritative sources
- Comparing architectural approaches with evidence-based analysis
- Building knowledge bases on emerging technologies
Consider alternatives when:
- The answer is in the project's existing documentation
- A quick web search would suffice for simple factual lookups
- You need real-time monitoring rather than point-in-time research
Quick Start
```bash
# Start a deep research session
claude skill activate expert-deep-research-executor-suite

# Research a technology
claude "Deep research: Compare WebSocket vs SSE vs Long Polling for real-time features"

# Investigate a domain
claude "Research the current state of WebAssembly in server-side applications"
```
Example Research Report
```markdown
## Research Report: Edge Computing for API Latency Reduction

### Executive Summary

Edge computing reduces API latency by 40-60% for geographically distributed
users by processing requests at CDN edge nodes rather than centralized origin
servers.

### Key Findings

1. **Cloudflare Workers**: V8 isolate-based, <1ms cold start, 30s CPU limit
   - Source: Cloudflare documentation, benchmark studies
   - Confidence: High (well-documented, widely adopted)
2. **Deno Deploy**: Built on Deno runtime, TypeScript-native, global deployment
   - Source: Deno documentation, community benchmarks
   - Confidence: Medium (newer platform, smaller adoption)
3. **Vercel Edge Functions**: Next.js integrated, streaming support
   - Source: Vercel documentation, production case studies
   - Confidence: High (production-proven at scale)

### Comparison Matrix

| Feature | CF Workers | Deno Deploy | Vercel Edge |
|---------|-----------|-------------|-------------|
| Cold Start | <1ms | ~5ms | ~3ms |
| Max Duration | 30s | 50ms (free) | 30s |
| Runtime | V8 | Deno | V8 |
| Pricing | $0.50/M req | $0.50/M req | Included |

### Recommendations

For most web applications: Cloudflare Workers (maturity, ecosystem).
For Next.js projects: Vercel Edge Functions (integration).
For Deno/TypeScript: Deno Deploy (native support).
```
Core Concepts
Research Methodology
| Phase | Activities | Deliverable |
|---|---|---|
| Scoping | Define research questions, boundaries, success criteria | Research brief |
| Source Identification | Find primary docs, papers, benchmarks, case studies | Source registry |
| Data Collection | Extract relevant information from each source | Raw findings |
| Analysis | Cross-reference findings, identify patterns and conflicts | Analysis notes |
| Synthesis | Combine findings into coherent narrative with evidence | Draft report |
| Validation | Verify claims, check for bias, assess confidence levels | Final report |
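The phased methodology above can be sketched as a typed, ordered checklist in which no phase starts until every earlier phase has produced its deliverable. This is an illustrative sketch; the type and function names are hypothetical, not part of the skill's API.

```typescript
// Illustrative only: the six research phases as an ordered checklist.
type Phase = {
  name: string;
  deliverable: string;
  done: boolean;
};

const phases: Phase[] = [
  { name: "Scoping", deliverable: "Research brief", done: false },
  { name: "Source Identification", deliverable: "Source registry", done: false },
  { name: "Data Collection", deliverable: "Raw findings", done: false },
  { name: "Analysis", deliverable: "Analysis notes", done: false },
  { name: "Synthesis", deliverable: "Draft report", done: false },
  { name: "Validation", deliverable: "Final report", done: false },
];

// The next phase to work on is the first one whose deliverable is missing,
// which enforces the strict ordering of the methodology table.
function nextPhase(p: Phase[]): Phase | undefined {
  return p.find((phase) => !phase.done);
}
```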
Source Credibility Framework
| Tier | Source Type | Confidence Weight |
|---|---|---|
| Tier 1 | Official documentation, peer-reviewed papers | 1.0 |
| Tier 2 | Production case studies, official benchmarks | 0.8 |
| Tier 3 | Reputable tech blogs, conference talks | 0.6 |
| Tier 4 | Community posts, forum discussions | 0.4 |
| Tier 5 | Unverified claims, marketing materials | 0.2 |
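The tier weights above can be applied mechanically when scoring a claim. A minimal sketch, assuming the five-tier table is authoritative; the helper names and the choice of taking the strongest tier among corroborating sources are illustrative assumptions, not part of the skill.

```typescript
// Hypothetical helper: map a source tier (1-5) to the confidence weight
// from the credibility framework table.
const TIER_WEIGHTS: Record<number, number> = {
  1: 1.0, // official documentation, peer-reviewed papers
  2: 0.8, // production case studies, official benchmarks
  3: 0.6, // reputable tech blogs, conference talks
  4: 0.4, // community posts, forum discussions
  5: 0.2, // unverified claims, marketing materials
};

function tierWeight(tier: number): number {
  const w = TIER_WEIGHTS[tier];
  if (w === undefined) throw new RangeError(`Unknown source tier: ${tier}`);
  return w;
}

// One possible policy (an assumption): a claim backed by several sources is
// scored by its strongest supporting tier.
function corroboratedWeight(tiers: number[]): number {
  return Math.max(...tiers.map(tierWeight));
}
```

Using the strongest tier keeps a Tier 4 forum post from diluting a claim that official documentation already confirms; averaging instead would be a reasonable alternative policy.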
```typescript
interface ResearchReport {
  topic: string;
  questions: string[];
  sources: Source[];
  findings: Finding[];
  synthesis: string;
  recommendations: Recommendation[];
  confidenceScore: number; // 0-1 overall confidence
  limitations: string[];
}

interface Finding {
  claim: string;
  evidence: string;
  source: Source;
  confidence: 'high' | 'medium' | 'low';
  conflictsWith?: Finding[];
}
```
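One way to derive the report's overall `confidenceScore` is to average per-finding confidence while discounting findings involved in conflicts. The label-to-number mapping and the conflict penalty below are illustrative assumptions; the skill does not specify this formula.

```typescript
// Sketch: derive a 0-1 confidence score from per-finding confidence labels.
// A simplified stand-in for the Finding interface, for the example's sake.
type FindingLite = {
  confidence: "high" | "medium" | "low";
  hasConflict: boolean;
};

// Assumed numeric mapping for the three confidence labels.
const LABEL_SCORE = { high: 0.9, medium: 0.6, low: 0.3 } as const;

function overallConfidence(findings: FindingLite[]): number {
  if (findings.length === 0) return 0;
  const total = findings.reduce((sum, f) => {
    const base = LABEL_SCORE[f.confidence];
    // Halve the contribution of findings that conflict with others.
    return sum + (f.hasConflict ? base / 2 : base);
  }, 0);
  return total / findings.length;
}
```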
Configuration
| Parameter | Description | Default |
|---|---|---|
| depth | Research depth: shallow, moderate, deep, exhaustive | deep |
| max_sources | Maximum number of sources to consult | 20 |
| min_confidence | Minimum confidence threshold for including findings | 0.4 |
| include_counterarguments | Include opposing viewpoints and limitations | true |
| output_format | Report format: markdown, pdf, json | markdown |
| recency_bias | Prefer recent sources (weight by publication date) | true |
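The parameter table above could be represented as a typed configuration object with the stated defaults. A sketch only: the interface name, camelCase field names, and the merge helper are illustrative, not the skill's actual config schema.

```typescript
// Illustrative configuration shape mirroring the parameter table;
// default values match the table's Default column.
interface ResearchConfig {
  depth: "shallow" | "moderate" | "deep" | "exhaustive";
  maxSources: number;
  minConfidence: number;
  includeCounterarguments: boolean;
  outputFormat: "markdown" | "pdf" | "json";
  recencyBias: boolean;
}

const DEFAULT_CONFIG: ResearchConfig = {
  depth: "deep",
  maxSources: 20,
  minConfidence: 0.4,
  includeCounterarguments: true,
  outputFormat: "markdown",
  recencyBias: true,
};

// Merge caller overrides onto the defaults, leaving other fields untouched.
function withOverrides(overrides: Partial<ResearchConfig>): ResearchConfig {
  return { ...DEFAULT_CONFIG, ...overrides };
}
```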
Best Practices
- **Start with primary sources, then expand outward.** Official documentation and original research papers provide the most accurate foundation. Build on these with secondary analysis and community experience rather than starting with blog posts and opinions.
- **Track conflicting information explicitly.** When sources disagree, document both positions with their evidence. Don't silently pick one perspective. Conflicts often reveal nuance, edge cases, or different context assumptions.
- **Assess recency alongside credibility.** A highly credible source from 3 years ago may be outdated for fast-moving technologies. Note publication dates and check for updates or corrections to original findings.
- **Separate facts from interpretations in your notes.** Mark direct quotes and verifiable data points differently from analysis and opinions. This prevents inadvertently presenting someone's interpretation as established fact.
- **Define "good enough" criteria upfront.** Deep research can expand indefinitely. Set clear criteria for when you have sufficient evidence to answer your research questions and stop collecting when those criteria are met.
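The practice of weighing recency alongside credibility can be made concrete with an age discount on a source's credibility weight. A minimal sketch, assuming an exponential decay; the 18-month half-life is an arbitrary illustrative choice and should be tuned per topic (a fast-moving framework ecosystem decays faster than a stable protocol specification).

```typescript
// Sketch: discount a source's credibility weight by its age, using
// exponential decay with an assumed half-life (18 months here).
const HALF_LIFE_MONTHS = 18;

function recencyAdjusted(credibilityWeight: number, ageMonths: number): number {
  // After one half-life the weight is halved; after two, quartered, etc.
  const decay = Math.pow(0.5, ageMonths / HALF_LIFE_MONTHS);
  return credibilityWeight * decay;
}
```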
Common Issues
**Research scope creeps as interesting tangents emerge.** Maintain a strict research brief with 3-5 specific questions. When tangential topics arise, note them in a "future research" section rather than following them immediately. Time-box each research phase and enforce transitions.

**Conflicting sources make it impossible to reach firm conclusions.** Resolve conflicts by examining the methodology behind each claim. Check sample sizes, testing conditions, and potential biases. When resolution is impossible, present the range of findings with confidence intervals rather than forcing a single answer.

**Research findings become outdated before the report is complete.** For fast-moving topics, include version numbers, dates, and caveats about expected changes. Structure reports so individual sections can be updated independently. Link to live sources rather than only capturing snapshots.