Academic Researcher Pro
An agent specializing in finding and analyzing scholarly sources, research papers, and academic literature, providing rigorous literature reviews, citation analysis, and methodology evaluation for research-oriented projects.
When to Use This Agent
Choose Academic Researcher Pro when:
- Searching academic databases for peer-reviewed papers and preprints
- Conducting systematic literature reviews on technical topics
- Evaluating research quality, methodology, and citation impact
- Synthesizing findings across multiple research papers
- Extracting implementation details from academic publications
Consider alternatives when:
- Searching for general web content (use a web search agent)
- Building production implementations from papers (use an engineering agent)
- Writing academic papers yourself (use a technical writing agent)
Quick Start
```yaml
# .claude/agents/academic-researcher-pro.yml
name: Academic Researcher
model: claude-sonnet-4-20250514
tools:
  - Read
  - Write
  - Bash
  - Grep
  - WebSearch
prompt: |
  You are an academic research specialist. Find, analyze, and
  synthesize scholarly sources. Evaluate research quality using
  established criteria. Provide bibliometric analysis and systematic
  literature reviews. Always cite sources and assess methodology rigor.
```
Example invocation:
```bash
claude --agent academic-researcher-pro "Find the top 10 most cited papers on transformer attention mechanisms published since 2020. Summarize key contributions and identify gaps in the current research landscape."
```
Core Concepts
Research Database Reference
| Database | Coverage | Best For |
|---|---|---|
| ArXiv | Preprints (CS, math, physics) | Latest ML/AI research |
| PubMed | Biomedical literature | Medical and life sciences |
| Google Scholar | Broad academic coverage | Cross-disciplinary search |
| Semantic Scholar | AI-powered paper discovery | Citation analysis, related papers |
| IEEE Xplore | Engineering and CS | Systems and hardware papers |
| ACM Digital Library | Computing research | Software engineering, HCI |
| DBLP | Computer science bibliography | Conference proceedings |
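Of the databases above, ArXiv is the easiest to query programmatically: its public Atom API at `export.arxiv.org/api/query` needs no API key. The sketch below, written against that documented API, searches for recent papers and parses the feed; the helper names and the collapsed-whitespace handling of titles are this example's own choices.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ATOM_NS = {"atom": "http://www.w3.org/2005/Atom"}

def parse_arxiv_feed(feed_xml: str) -> list[dict]:
    """Extract id, title, and published date from an ArXiv Atom feed."""
    root = ET.fromstring(feed_xml)
    papers = []
    for entry in root.findall("atom:entry", ATOM_NS):
        title = entry.findtext("atom:title", default="", namespaces=ATOM_NS)
        papers.append({
            "id": entry.findtext("atom:id", default="", namespaces=ATOM_NS),
            "title": " ".join(title.split()),  # collapse the feed's line wrapping
            "published": entry.findtext("atom:published", default="", namespaces=ATOM_NS),
        })
    return papers

def search_arxiv(query: str, max_results: int = 5) -> list[dict]:
    """Query the public ArXiv API, newest submissions first."""
    params = urllib.parse.urlencode({
        "search_query": f"all:{query}",
        "start": 0,
        "max_results": max_results,
        "sortBy": "submittedDate",
        "sortOrder": "descending",
    })
    url = f"http://export.arxiv.org/api/query?{params}"
    with urllib.request.urlopen(url, timeout=30) as resp:
        return parse_arxiv_feed(resp.read().decode("utf-8"))
```

Keeping the feed parsing separate from the network call makes the parser testable offline and easy to swap for a richer extraction (authors, abstracts, categories).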
Literature Review Framework
```markdown
## Systematic Review: {Topic}

### Search Strategy
- Databases searched: {list}
- Search terms: {terms and boolean operators}
- Date range: {start} to {end}
- Inclusion criteria: {what qualifies}
- Exclusion criteria: {what's filtered out}

### Results Summary
- Total papers found: {N}
- After deduplication: {N}
- After title/abstract screening: {N}
- After full-text review: {N}
- Final included papers: {N}

### Thematic Analysis
- Theme 1: {description with paper citations}
- Theme 2: {description with paper citations}

### Gaps and Future Directions
- {Identified gaps in literature}
```
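The Results Summary is a screening funnel: each stage filters the previous stage's papers and records a count. A minimal sketch of that funnel, with the stage names and filter predicates left to the caller:

```python
def screening_funnel(papers, stages):
    """Apply screening stages in order and track the funnel counts.

    papers: list of paper records (dicts).
    stages: ordered list of (stage_name, keep_predicate) pairs.
    Returns (counts, included): the per-stage counts for the
    Results Summary, and the papers that survive every stage.
    """
    counts = [("Total papers found", len(papers))]
    remaining = list(papers)
    for name, keep in stages:
        remaining = [p for p in remaining if keep(p)]
        counts.append((name, len(remaining)))
    return counts, remaining
```

Deduplication works as a stage too, using a stateful predicate that remembers DOIs it has already seen; later stages (title/abstract screening, full-text review) are ordinary filters.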
Paper Quality Assessment
| Criterion | Questions | Weight |
|---|---|---|
| Methodology | Is the approach reproducible? Are baselines fair? | High |
| Results | Are results statistically significant? Ablation studies? | High |
| Novelty | What's new compared to prior work? | Medium |
| Citation Impact | How often cited? By whom? | Medium |
| Venue | Top-tier conference/journal? | Medium |
| Reproducibility | Code available? Data accessible? | High |
| Clarity | Well-written? Clear contributions? | Low |
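The rubric above can be turned into a single comparable number. In this sketch the numeric weights (High=3, Medium=2, Low=1) and the 0–5 per-criterion rating scale are illustrative choices, not part of the rubric itself:

```python
# Illustrative weight mapping; adjust to taste.
WEIGHTS = {"High": 3, "Medium": 2, "Low": 1}

# Criterion -> weight label, mirroring the assessment table.
CRITERIA = {
    "methodology": "High",
    "results": "High",
    "novelty": "Medium",
    "citation_impact": "Medium",
    "venue": "Medium",
    "reproducibility": "High",
    "clarity": "Low",
}

def quality_score(ratings: dict) -> float:
    """Weighted mean of per-criterion ratings (each 0-5), on a 0-5 scale."""
    total = sum(WEIGHTS[weight] * ratings[criterion]
                for criterion, weight in CRITERIA.items())
    weight_sum = sum(WEIGHTS[weight] for weight in CRITERIA.values())
    return round(total / weight_sum, 2)
```

A weighted mean keeps the score on the same 0–5 scale as the inputs, so a paper rated 4 on every criterion scores exactly 4.0, and improving a High-weight criterion moves the score more than improving a Low-weight one.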
Configuration
| Parameter | Description | Default |
|---|---|---|
| `databases` | Academic databases to search | ArXiv, Google Scholar |
| `date_range` | Publication date filter | Last 5 years |
| `min_citations` | Minimum citation count filter | 0 |
| `include_preprints` | Include non-peer-reviewed papers | true |
| `review_type` | Literature review methodology | Narrative |
| `citation_format` | Citation style | APA 7th |
| `output_format` | Report output format | Markdown |
Best Practices
- **Define search terms precisely with boolean operators.** Academic databases are sensitive to query formulation. "transformer attention" returns different results than "(transformer OR self-attention) AND (mechanism OR architecture)." Start broad, review initial results, and refine terms based on how the relevant papers describe themselves. The terminology used in paper titles and abstracts is your guide.
- **Evaluate papers by methodology rigor, not just citation count.** Highly cited papers aren't always the best. A 2020 paper with 50 citations may be more relevant than a 2015 paper with 500 citations. Check whether the paper includes ablation studies, uses appropriate baselines, reports confidence intervals, and makes code available. A well-conducted study with modest citation counts may be more reliable than a popular but methodologically weak one.
- **Track the citation graph, not just individual papers.** When you find a relevant paper, examine what it cites (to find foundational work) and what cites it (to find follow-up work). This bidirectional exploration reveals the complete research conversation around a topic. Semantic Scholar's citation analysis tools automate this process and help identify influential papers that keyword searches miss.
- **Distinguish between primary contributions and incremental improvements.** In fast-moving fields like ML, many papers propose minor variations on established methods. Identify the 3-5 papers that introduced fundamentally new ideas and treat others as incremental improvements. Your literature review should spend more space on foundational papers and summarize incremental work in aggregate.
- **Always check for replication studies and critiques.** A groundbreaking paper may have subsequent publications showing the results don't replicate, the evaluation was flawed, or the approach doesn't generalize. Search for papers that cite the original with terms like "replication," "revisiting," or "critical analysis." Including these perspectives gives a balanced view of the research landscape.
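The citation-graph exploration described above can be automated with the Semantic Scholar Graph API (`api.semanticscholar.org/graph/v1`), which exposes a paper's references and citations as paginated endpoints. A hedged sketch: the endpoint paths and `fields` parameter follow the public API, but the response handling here is minimal and the field list is an illustrative choice.

```python
import json
import urllib.request

API = "https://api.semanticscholar.org/graph/v1/paper"

def neighbor_url(paper_id: str, direction: str,
                 fields: str = "title,year,citationCount",
                 limit: int = 20) -> str:
    """direction: 'references' (what the paper cites) or
    'citations' (what cites the paper)."""
    if direction not in ("references", "citations"):
        raise ValueError(f"unknown direction: {direction}")
    return f"{API}/{paper_id}/{direction}?fields={fields}&limit={limit}"

def extract_neighbors(payload: dict, direction: str) -> list[dict]:
    """Pull the neighboring papers out of an API response payload."""
    key = "citedPaper" if direction == "references" else "citingPaper"
    return [item[key] for item in payload.get("data", [])]

def fetch_neighbors(paper_id: str, direction: str) -> list[dict]:
    with urllib.request.urlopen(neighbor_url(paper_id, direction),
                                timeout=30) as resp:
        return extract_neighbors(json.load(resp), direction)
```

Walking both directions from a seed paper (e.g. `fetch_neighbors("arXiv:1706.03762", "citations")`) and ranking neighbors by `citationCount` surfaces the foundational and follow-up work that keyword searches miss.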
Common Issues
**Search returns too many irrelevant results.** Narrow the search with specific venue filters (top-tier conferences only), date ranges, and more precise terminology. Use the most specific technical terms from relevant papers you've already found. Filter by citation count to focus on impactful work. If a term is overloaded (like "attention"), add qualifying terms to disambiguate.

**Conflicting findings across papers make synthesis difficult.** Conflicts often arise from different experimental setups, datasets, or evaluation metrics. Create a comparison table listing each paper's setup, metrics, and results side by side. The differences in methodology usually explain the conflicting results. Note these methodological differences in your synthesis rather than declaring one paper right and another wrong.

**Can't access full text of relevant papers.** Many papers are behind paywalls, but preprint versions often exist on ArXiv or the authors' personal pages. Search for the paper title plus "pdf" to find open-access versions. Institutional access through university libraries provides broader coverage. For papers without any accessible version, the abstract and citation metadata still contribute to a systematic review.
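That side-by-side comparison table is easy to generate as Markdown from per-paper records. A small sketch; the field names (`paper`, `dataset`, `metric`, `result`) are illustrative:

```python
def comparison_table(papers, fields=("paper", "dataset", "metric", "result")):
    """Render paper records as a Markdown comparison table,
    one row per paper, blank cell for any missing field."""
    header = "| " + " | ".join(fields) + " |"
    separator = "|" + "|".join("---" for _ in fields) + "|"
    rows = ["| " + " | ".join(str(p.get(f, "")) for f in fields) + " |"
            for p in papers]
    return "\n".join([header, separator] + rows)
```

Lining up each paper's setup and metrics this way usually makes the methodological source of a conflict visible at a glance.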