Fact Checker Assistant
An agent specialized in information verification, source validation, and misinformation detection. It systematically evaluates claims against reliable sources to determine accuracy, identify bias, and provide confidence-rated assessments.
When to Use This Agent
Choose Fact Checker when:
- Verifying specific claims or statistics before publishing
- Evaluating source credibility and potential bias
- Cross-referencing information across multiple sources
- Detecting misinformation patterns in content
- Providing confidence-rated assessments of factual accuracy
Consider alternatives when:
- Doing original research on a topic (use a research analyst agent)
- Writing content that needs editing (use a content editor agent)
- Conducting academic literature reviews (use an academic researcher agent)
Quick Start
```yaml
# .claude/agents/fact-checker-assistant.yml
name: Fact Checker
model: claude-sonnet-4-20250514
tools:
  - Read
  - Write
  - Bash
  - Grep
  - WebSearch
prompt: |
  You are a professional fact-checker. Systematically verify claims by
  identifying specific assertions, locating authoritative sources,
  cross-referencing across multiple independent sources, and rating
  confidence. Always distinguish between verified facts, plausible
  claims, and unverified assertions.
```
Example invocation:
```bash
claude --agent fact-checker-assistant "Verify the following claims from our blog post: 1) 73% of enterprises use multi-cloud 2) Kubernetes handles 80% of container orchestration 3) Serverless adoption grew 300% since 2020"
```
Core Concepts
Verification Methodology
```
Claim Extraction  →  Source Identification  →  Cross-Reference   →  Rating
       │                      │                      │                 │
Isolate specific       Find primary          3+ independent      Confidence
verifiable claims      authoritative         sources confirm     score with
from content           sources               or contradict       reasoning
```
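The four stages above can be sketched in Python. This is a minimal illustration of the data flow, not the agent's actual implementation; the `Claim` structure and `(tier, supports)` source encoding are assumptions for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str                                    # the isolated, verifiable assertion
    sources: list = field(default_factory=list)  # (tier, supports) tuples after cross-referencing

def cross_reference(claim):
    """Count how many gathered sources confirm vs. contradict the claim."""
    confirm = sum(1 for tier, supports in claim.sources if supports)
    contradict = len(claim.sources) - confirm
    return confirm, contradict

claim = Claim(
    "73% of enterprises use multi-cloud",
    sources=[(1, True), (2, True), (3, False)],  # two confirmations, one contradiction
)
print(cross_reference(claim))  # (2, 1)
```

The confirm/contradict counts then feed the confidence rating described in the next section.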
Source Credibility Hierarchy
| Tier | Source Type | Example | Reliability |
|---|---|---|---|
| 1 | Primary/official | Government data, academic papers | Highest |
| 2 | Expert analysis | Industry reports (Gartner, Forrester) | High |
| 3 | Quality journalism | Major tech publications | Medium-High |
| 4 | Community/crowd | Wikipedia, Stack Overflow | Medium |
| 5 | Opinion/advocacy | Blog posts, press releases | Low-Medium |
| 6 | Social/unverified | Social media, forums | Lowest |
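The hierarchy table maps naturally onto a tier lookup used to gate sources. The category names below are hypothetical labels, and the cutoff mirrors the `source_tier_min` parameter described later; lower tier numbers mean higher reliability.

```python
# Hypothetical mapping from source categories to credibility tiers,
# following the hierarchy table above (1 = most reliable).
SOURCE_TIERS = {
    "government_data": 1, "academic_paper": 1,
    "industry_report": 2,
    "tech_publication": 3,
    "wiki": 4, "qa_site": 4,
    "blog": 5, "press_release": 5,
    "social_media": 6, "forum": 6,
}

def meets_tier(source_type, min_tier=3):
    """Accept only sources at or above the minimum tier; unknown types are treated as tier 6."""
    return SOURCE_TIERS.get(source_type, 6) <= min_tier

print(meets_tier("academic_paper"))  # True
print(meets_tier("blog"))            # False
```

Treating unknown source types as tier 6 is a deliberately conservative default: an unclassifiable source should never pass the gate by accident.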
Confidence Rating Scale
Verified (95%+): Multiple tier-1 sources confirm
Likely True (80%): One tier-1 source + corroborating tier-2 sources
Plausible (60%): Tier-2/3 sources support, no contradictions
Uncertain (40%): Limited or conflicting sources
Likely False (20%): Multiple sources contradict
False (<5%): Definitively disproven by authoritative sources
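One possible encoding of this scale as a rating function is sketched below. The thresholds follow the scale above, but the exact rules (e.g. requiring two tier-1 confirmations for "Verified") are an interpretation, not a specification.

```python
def rate_confidence(tier1_confirm, tier2_confirm, tier3_confirm,
                    contradictions, disproven=False):
    """Map gathered evidence onto the confidence scale (one possible encoding)."""
    if disproven:
        return "False"
    if contradictions >= 2:
        return "Likely False"
    if tier1_confirm >= 2 and contradictions == 0:
        return "Verified"
    if tier1_confirm >= 1 and tier2_confirm >= 1 and contradictions == 0:
        return "Likely True"
    if (tier2_confirm + tier3_confirm) >= 1 and contradictions == 0:
        return "Plausible"
    return "Uncertain"

print(rate_confidence(2, 0, 0, 0))  # Verified
print(rate_confidence(1, 1, 0, 0))  # Likely True
print(rate_confidence(0, 0, 0, 1))  # Uncertain
```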
Configuration
| Parameter | Description | Default |
|---|---|---|
| min_sources | Minimum sources per claim | 2 |
| source_tier_min | Minimum source credibility tier | Tier 3 |
| confidence_threshold | Minimum confidence to pass | 60% |
| check_recency | Verify information currency | true |
| flag_bias | Identify potential source bias | true |
| output_format | Verification report format | Markdown table |
| include_corrections | Suggest corrections for false claims | true |
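If you consume these settings programmatically, a typed config object with the table's defaults keeps them in one place. The class below is an illustrative sketch; the field names mirror the table, but nothing here is part of the agent's actual API.

```python
from dataclasses import dataclass

@dataclass
class FactCheckerConfig:
    # Defaults mirror the configuration table above (illustrative only).
    min_sources: int = 2
    source_tier_min: int = 3
    confidence_threshold: float = 0.60
    check_recency: bool = True
    flag_bias: bool = True
    output_format: str = "markdown_table"
    include_corrections: bool = True

# Override only what differs from the defaults.
cfg = FactCheckerConfig(min_sources=3)
print(cfg.min_sources, cfg.confidence_threshold)  # 3 0.6
```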
Best Practices
- Extract and isolate specific claims before checking anything. A paragraph may contain three separate claims disguised as one statement. "AI adoption grew 300% as enterprises shifted to cloud-native architectures, spending $50B annually" contains three verifiable claims: growth rate, causal relationship, and spending figure. Check each independently because one may be true while others are false.
- Prioritize primary sources over secondary reporting. When an article cites a Gartner report, find the original Gartner report rather than trusting the article's interpretation. Secondary sources frequently misquote statistics, omit important context, or cite outdated versions of reports. The extra effort of finding primary sources significantly improves verification accuracy.
- Check the date of the claim and the date of the supporting evidence. A statistic from a 2020 report cited in a 2024 article may be outdated. Technology adoption rates, market sizes, and usage statistics change rapidly. Verify both that the claim was true when originally stated and whether more recent data has superseded it. Flag claims based on data older than two years as potentially outdated.
- Assess source independence, not just source count. Three articles citing the same original study are one source, not three. Check whether your sources conducted independent research or are all referencing the same upstream data. True cross-referencing requires sources that arrived at similar conclusions through different methodologies or independent data collection.
- Report confidence levels, not just true/false verdicts. Binary fact-checking oversimplifies complex claims. "Kubernetes handles 80% of container orchestration" might be approximately true (surveys show 65-85% depending on methodology) rather than exactly true or completely false. Communicate the nuance: the claim is directionally accurate, but the specific percentage varies by source and definition.
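The source-independence practice above is easy to automate in a rough form: collapse sources that trace back to the same upstream study before counting them. The `upstream` field and URLs below are hypothetical; a real implementation would need actual citation extraction.

```python
def independent_source_count(sources):
    """Collapse sources that cite the same upstream study;
    three articles citing one report count as a single source."""
    return len({s.get("upstream", s["url"]) for s in sources})

sources = [
    {"url": "a.example/article", "upstream": "gartner-2024"},
    {"url": "b.example/news",    "upstream": "gartner-2024"},
    {"url": "c.example/survey"},  # original research, no upstream citation
]
print(independent_source_count(sources))  # 2
```

Here the first two articles both lean on the same report, so the three sources provide only two independent lines of evidence.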
Common Issues
Claims use vague language that's difficult to verify. Statements like "most companies" or "significantly improved" resist precise verification. Flag these as unverifiable as stated and suggest specific alternatives: replace "most companies" with the actual percentage from a cited survey. Vague claims are often true in spirit but misleading in implication—quantifying them reveals the actual story.
Source appears authoritative but has a conflict of interest. A cloud provider's survey showing 90% cloud adoption has inherent bias: their methodology targets their customer base, and their incentive is to show high adoption. Note potential conflicts of interest alongside source credibility. A biased source isn't automatically wrong, but its claims need independent corroboration before being rated as verified.
Original source for a widely-cited statistic can't be found. Some frequently cited statistics have no traceable origin—they were manufactured, misremembered, or distorted through repeated citation. When the primary source can't be located, downgrade the confidence rating regardless of how widely the claim is repeated. Popularity of a claim is not evidence of its truth. Note the attribution gap in your report.
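For the vague-language issue, a simple keyword scan can surface quantifiers that resist verification so they get flagged for replacement with cited figures. The term list is an illustrative starting point, not an exhaustive taxonomy of vague language.

```python
import re

# Hypothetical list of vague quantifiers that resist precise verification.
VAGUE_TERMS = [r"\bmost\b", r"\bmany\b", r"\bsignificantly\b",
               r"\bvirtually all\b", r"\ba majority\b"]

def flag_vague_claims(text):
    """Return the vague quantifiers found, so they can be replaced with cited figures."""
    found = []
    for pattern in VAGUE_TERMS:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            found.append(match.group(0))
    return found

print(flag_vague_claims("Most companies significantly improved uptime"))
# ['Most', 'significantly']
```

A claim that triggers no flags is not automatically precise, but any flagged claim should be rewritten or marked unverifiable as stated.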