
Academic Researcher Pro

Enterprise-grade agent for academic and scholarly research. Includes structured workflows, validation checks, and reusable patterns for deep research teams.

AgentCliptics · deep research team · v1.0.0 · MIT


An agent that specializes in finding and analyzing scholarly sources, research papers, and academic literature. It provides rigorous literature reviews, citation analysis, and methodology evaluation for research-oriented projects.

When to Use This Agent

Choose Academic Researcher Pro when:

  • Searching academic databases for peer-reviewed papers and preprints
  • Conducting systematic literature reviews on technical topics
  • Evaluating research quality, methodology, and citation impact
  • Synthesizing findings across multiple research papers
  • Extracting implementation details from academic publications

Consider alternatives when:

  • Searching for general web content (use a web search agent)
  • Building production implementations from papers (use an engineering agent)
  • Writing academic papers yourself (use a technical writing agent)

Quick Start

```yaml
# .claude/agents/academic-researcher-pro.yml
name: Academic Researcher
model: claude-sonnet-4-20250514
tools:
  - Read
  - Write
  - Bash
  - Grep
  - WebSearch
prompt: |
  You are an academic research specialist. Find, analyze, and
  synthesize scholarly sources. Evaluate research quality using
  established criteria. Provide bibliometric analysis and systematic
  literature reviews. Always cite sources and assess methodology rigor.
```

Example invocation:

```shell
claude --agent academic-researcher-pro "Find the top 10 most cited papers on transformer attention mechanisms published since 2020. Summarize key contributions and identify gaps in the current research landscape."
```

Core Concepts

Research Database Reference

| Database | Coverage | Best For |
|---|---|---|
| ArXiv | Preprints (CS, math, physics) | Latest ML/AI research |
| PubMed | Biomedical literature | Medical and life sciences |
| Google Scholar | Broad academic coverage | Cross-disciplinary search |
| Semantic Scholar | AI-powered paper discovery | Citation analysis, related papers |
| IEEE Xplore | Engineering and CS | Systems and hardware papers |
| ACM Digital Library | Computing research | Software engineering, HCI |
| DBLP | Computer science bibliography | Conference proceedings |

Literature Review Framework

```markdown
## Systematic Review: {Topic}

### Search Strategy
- Databases searched: {list}
- Search terms: {terms and boolean operators}
- Date range: {start} to {end}
- Inclusion criteria: {what qualifies}
- Exclusion criteria: {what's filtered out}

### Results Summary
- Total papers found: {N}
- After deduplication: {N}
- After title/abstract screening: {N}
- After full-text review: {N}
- Final included papers: {N}

### Thematic Analysis
- Theme 1: {description with paper citations}
- Theme 2: {description with paper citations}

### Gaps and Future Directions
- {Identified gaps in literature}
```
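The Results Summary section is a simple funnel: each stage removes papers from the previous count. A minimal sketch for tracking it (the stage names and the example counts are illustrative, not part of the template):

```python
# Sketch of the screening funnel from the Results Summary section.
# Stage names and example counts are illustrative assumptions.
def screening_funnel(found: int, duplicates: int,
                     title_abstract_rejects: int,
                     full_text_rejects: int) -> dict:
    """Return the number of papers remaining at each screening stage."""
    stages = {"found": found}
    stages["after_dedup"] = stages["found"] - duplicates
    stages["after_screening"] = stages["after_dedup"] - title_abstract_rejects
    stages["included"] = stages["after_screening"] - full_text_rejects
    return stages

print(screening_funnel(412, 37, 290, 55))
```

Reporting the count at every stage, rather than only the final number, is what makes the review auditable.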

Paper Quality Assessment

| Criterion | Questions | Weight |
|---|---|---|
| Methodology | Is the approach reproducible? Are baselines fair? | High |
| Results | Are results statistically significant? Ablation studies? | High |
| Novelty | What's new compared to prior work? | Medium |
| Citation Impact | How often cited? By whom? | Medium |
| Venue | Top-tier conference/journal? | Medium |
| Reproducibility | Code available? Data accessible? | High |
| Clarity | Well-written? Clear contributions? | Low |
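One way to make these criteria operational is a weighted average. This sketch maps High/Medium/Low to numeric weights (3/2/1) and assumes 0-5 ratings per criterion; both the weight values and the rating scale are illustrative assumptions, not prescribed by the table:

```python
# Sketch: weighted paper-quality score from the assessment criteria.
# Weight values (High=3, Medium=2, Low=1) and the 0-5 rating scale
# are illustrative assumptions.
WEIGHTS = {
    "methodology": 3, "results": 3, "reproducibility": 3,
    "novelty": 2, "citation_impact": 2, "venue": 2,
    "clarity": 1,
}

def quality_score(ratings: dict) -> float:
    """Weighted average of 0-5 ratings over the criteria that were rated."""
    total = sum(WEIGHTS[criterion] * rating
                for criterion, rating in ratings.items())
    weight_sum = sum(WEIGHTS[criterion] for criterion in ratings)
    return round(total / weight_sum, 2)

print(quality_score({"methodology": 4, "results": 4, "clarity": 4}))
```

Averaging only over the criteria that were actually rated keeps partially assessed papers comparable.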

Configuration

| Parameter | Description | Default |
|---|---|---|
| databases | Academic databases to search | ArXiv, Google Scholar |
| date_range | Publication date filter | Last 5 years |
| min_citations | Minimum citation count filter | 0 |
| include_preprints | Include non-peer-reviewed papers | true |
| review_type | Literature review methodology | Narrative |
| citation_format | Citation style | APA 7th |
| output_format | Report output format | Markdown |

Best Practices

  1. Define search terms precisely with boolean operators. Academic databases are sensitive to query formulation. "transformer attention" returns different results than "(transformer OR self-attention) AND (mechanism OR architecture)." Start broad, review initial results, and refine terms based on how the relevant papers describe themselves. The terminology used in paper titles and abstracts is your guide.
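A boolean expression like the one above can be sent to the arXiv API, which supports AND/OR operators and field prefixes such as `abs:` (abstract) and `ti:` (title). The sketch below only builds the request URL; the search terms are illustrative:

```python
# Sketch: composing a boolean search query for the arXiv API.
# The field prefixes (abs:, ti:) and AND/OR operators are arXiv
# query syntax; the example terms are illustrative.
from urllib.parse import urlencode

ARXIV_API = "http://export.arxiv.org/api/query?"

def arxiv_query(terms: str, max_results: int = 20) -> str:
    """Build an arXiv API URL for a boolean search expression."""
    params = {
        "search_query": terms,
        "sortBy": "submittedDate",
        "sortOrder": "descending",
        "max_results": max_results,
    }
    return ARXIV_API + urlencode(params)

url = arxiv_query('(abs:transformer OR abs:"self-attention") AND abs:mechanism')
print(url)
```

Keeping the boolean expression as a plain string makes it easy to log alongside the review, so the search strategy stays reproducible.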

  2. Evaluate papers by methodology rigor, not just citation count. Highly cited papers aren't always the best. A 2020 paper with 50 citations may be more relevant than a 2015 paper with 500 citations. Check whether the paper includes ablation studies, uses appropriate baselines, reports confidence intervals, and makes code available. A well-conducted study with modest citation counts may be more reliable than a popular but methodologically weak one.

  3. Track the citation graph, not just individual papers. When you find a relevant paper, examine what it cites (to find foundational work) and what cites it (to find follow-up work). This bidirectional exploration reveals the complete research conversation around a topic. Semantic Scholar's citation analysis tools automate this process and help identify influential papers that keyword searches miss.
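The bidirectional exploration above can be sketched against the Semantic Scholar Graph API, which exposes `citations` and `references` endpoints per paper. The endpoint shape and field names follow the public API; the paper ID is a placeholder, and error handling is omitted for brevity:

```python
# Sketch: walking one hop of the citation graph via the Semantic
# Scholar Graph API. Endpoint shape follows the public API docs;
# paper IDs here are placeholders.
import json
from urllib.request import urlopen

BASE = "https://api.semanticscholar.org/graph/v1/paper"

def neighbors_url(paper_id: str, direction: str) -> str:
    """direction: 'citations' (follow-up work) or 'references' (foundations)."""
    return f"{BASE}/{paper_id}/{direction}?fields=title,year,citationCount"

def fetch_neighbors(paper_id: str, direction: str) -> list:
    """Fetch one hop of the citation graph (requires network access)."""
    with urlopen(neighbors_url(paper_id, direction)) as resp:
        return json.load(resp)["data"]
```

Calling `fetch_neighbors(pid, "references")` surfaces foundational work, while `fetch_neighbors(pid, "citations")` surfaces follow-ups, covering both directions of the research conversation.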

  4. Distinguish between primary contributions and incremental improvements. In fast-moving fields like ML, many papers propose minor variations on established methods. Identify the 3-5 papers that introduced fundamentally new ideas and treat others as incremental improvements. Your literature review should spend more space on foundational papers and summarize incremental work in aggregate.

  5. Always check for replication studies and critiques. A groundbreaking paper may have subsequent publications showing the results don't replicate, the evaluation was flawed, or the approach doesn't generalize. Search for papers that cite the original with terms like "replication," "revisiting," or "critical analysis." Including these perspectives gives a balanced view of the research landscape.

Common Issues

Search returns too many irrelevant results. Narrow the search with specific venue filters (top-tier conferences only), date ranges, and more precise terminology. Use the most specific technical terms from relevant papers you've already found. Filter by citation count to focus on impactful work. If a term is overloaded (like "attention"), add qualifying terms to disambiguate.

Conflicting findings across papers make synthesis difficult. Conflicts often arise from different experimental setups, datasets, or evaluation metrics. Create a comparison table listing each paper's setup, metrics, and results side by side. The differences in methodology usually explain the conflicting results. Note these methodological differences in your synthesis rather than declaring one paper right and another wrong.
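A comparison table like this is easy to generate from structured notes. In this sketch the column names and the two example papers are hypothetical, used only to show the shape of the output:

```python
# Sketch: rendering a side-by-side comparison table for papers with
# conflicting findings. Column names and example entries are hypothetical.
def comparison_table(papers: list) -> str:
    """Render a markdown table from a list of per-paper dicts."""
    cols = ["paper", "dataset", "metric", "result"]
    lines = [
        "| " + " | ".join(cols) + " |",
        "|" + "---|" * len(cols),
    ]
    for paper in papers:
        lines.append("| " + " | ".join(str(paper[c]) for c in cols) + " |")
    return "\n".join(lines)

print(comparison_table([
    {"paper": "Smith 2021", "dataset": "ImageNet", "metric": "top-1", "result": "81.2"},
    {"paper": "Lee 2022", "dataset": "ImageNet-21k", "metric": "top-1", "result": "84.0"},
]))
```

Once the setups sit side by side, a methodological difference (here, the training datasets) often accounts for the apparent conflict.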

Can't access full text of relevant papers. Many papers are behind paywalls, but preprint versions often exist on ArXiv or the authors' personal pages. Search for the paper title plus "pdf" to find open-access versions. Institutional access through university libraries provides broader coverage. For papers without any accessible version, the abstract and citation metadata still contribute to a systematic review.
