Fact Checker Assistant

A production-ready agent for fact verification, source validation, and misinformation detection. Includes structured workflows, validation checks, and reusable patterns for deep research teams.


An agent specialized in information verification, source validation, and misinformation detection, systematically evaluating claims against reliable sources to determine accuracy, identify bias, and provide confidence-rated assessments.

When to Use This Agent

Choose Fact Checker when:

  • Verifying specific claims or statistics before publishing
  • Evaluating source credibility and potential bias
  • Cross-referencing information across multiple sources
  • Detecting misinformation patterns in content
  • Providing confidence-rated assessments of factual accuracy

Consider alternatives when:

  • Doing original research on a topic (use a research analyst agent)
  • Writing content that needs editing (use a content editor agent)
  • Conducting academic literature reviews (use an academic researcher agent)

Quick Start

```yaml
# .claude/agents/fact-checker-assistant.yml
name: Fact Checker
model: claude-sonnet-4-20250514
tools:
  - Read
  - Write
  - Bash
  - Grep
  - WebSearch
prompt: |
  You are a professional fact-checker. Systematically verify claims by
  identifying specific assertions, locating authoritative sources,
  cross-referencing across multiple independent sources, and rating
  confidence. Always distinguish between verified facts, plausible
  claims, and unverified assertions.
```

Example invocation:

```bash
claude --agent fact-checker-assistant "Verify the following claims from our blog post: 1) 73% of enterprises use multi-cloud 2) Kubernetes handles 80% of container orchestration 3) Serverless adoption grew 300% since 2020"
```

Core Concepts

Verification Methodology

```
Claim Extraction → Source Identification → Cross-Reference → Rating
      │                   │                      │              │
  Isolate specific   Find primary          3+ independent    Confidence
  verifiable claims  authoritative         sources confirm   score with
  from content       sources               or contradict     reasoning
```
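The four pipeline stages above can be represented as a single record that a claim carries from extraction through rating. This is an illustrative sketch; the `ClaimCheck` name and fields are assumptions, not part of the agent's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ClaimCheck:
    """Tracks one claim through the four verification stages."""
    claim: str                                          # stage 1: isolated, verifiable claim
    sources: list[str] = field(default_factory=list)    # stage 2: candidate authoritative sources
    confirmations: int = 0                              # stage 3: independent sources confirming
    contradictions: int = 0                             # stage 3: independent sources contradicting
    confidence: Optional[float] = None                  # stage 4: confidence score
    reasoning: str = ""                                 # stage 4: justification for the rating

# A freshly extracted claim starts with no sources and no rating.
check = ClaimCheck("73% of enterprises use multi-cloud")
```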

Source Credibility Hierarchy

| Tier | Source Type | Example | Reliability |
|------|-------------|---------|-------------|
| 1 | Primary/official | Government data, academic papers | Highest |
| 2 | Expert analysis | Industry reports (Gartner, Forrester) | High |
| 3 | Quality journalism | Major tech publications | Medium-High |
| 4 | Community/crowd | Wikipedia, Stack Overflow | Medium |
| 5 | Opinion/advocacy | Blog posts, press releases | Low-Medium |
| 6 | Social/unverified | Social media, forums | Lowest |
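A minimal sketch of how the tier hierarchy might be encoded as a domain lookup. The `SOURCE_TIERS` mapping and `tier_for` helper are hypothetical names; a real registry would be far larger and likely pattern-based.

```python
# Hypothetical mapping of source domains to the credibility tiers above.
SOURCE_TIERS = {
    "census.gov": 1,       # tier 1: primary/official
    "arxiv.org": 1,
    "gartner.com": 2,      # tier 2: expert analysis
    "wikipedia.org": 4,    # tier 4: community/crowd
}

def tier_for(domain: str, default: int = 6) -> int:
    """Return the credibility tier for a domain.
    Unknown domains default to tier 6 (lowest) until assessed."""
    return SOURCE_TIERS.get(domain.lower(), default)
```

Defaulting unknown sources to the lowest tier is the conservative choice: a source earns credibility by being classified, not by being unfamiliar.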

Confidence Rating Scale

```
Verified (95%+):    Multiple tier-1 sources confirm
Likely True (80%):  One tier-1 source + corroborating tier-2 sources
Plausible (60%):    Tier-2/3 sources support, no contradictions
Uncertain (40%):    Limited or conflicting sources
Likely False (20%): Multiple sources contradict
False (<5%):        Definitively disproven by authoritative sources
```
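The scale above can be sketched as a rating function over the tiers of confirming and contradicting sources. This is an assumed simplification (the function name and thresholds are illustrative); real ratings would also weigh source independence and recency.

```python
def rate_confidence(confirming_tiers: list[int], contradicting_tiers: list[int]) -> str:
    """Map supporting/contradicting source tiers to a confidence label.
    Tier 1 = highest credibility, tier 6 = lowest."""
    tier1_confirms = sum(1 for t in confirming_tiers if t == 1)
    if contradicting_tiers and tier1_confirms == 0:
        # Contradicted, with no top-tier support.
        return "Likely False" if len(contradicting_tiers) >= 2 else "Uncertain"
    if tier1_confirms >= 2:
        return "Verified"
    if tier1_confirms == 1 and any(t == 2 for t in confirming_tiers):
        return "Likely True"
    if confirming_tiers and not contradicting_tiers and all(t <= 3 for t in confirming_tiers):
        return "Plausible"
    return "Uncertain"
```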

Configuration

| Parameter | Description | Default |
|-----------|-------------|---------|
| `min_sources` | Minimum sources per claim | 2 |
| `source_tier_min` | Minimum source credibility tier | Tier 3 |
| `confidence_threshold` | Minimum confidence to pass | 60% |
| `check_recency` | Verify information currency | true |
| `flag_bias` | Identify potential source bias | true |
| `output_format` | Verification report format | Markdown table |
| `include_corrections` | Suggest corrections for false claims | true |
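If these parameters were consumed programmatically, they might map to a config object like the sketch below. The `FactCheckConfig` class is an assumption for illustration; the template itself defines only the parameter table above.

```python
from dataclasses import dataclass

@dataclass
class FactCheckConfig:
    """Defaults mirror the configuration table above."""
    min_sources: int = 2
    source_tier_min: int = 3          # reject sources below tier 3
    confidence_threshold: float = 0.60
    check_recency: bool = True
    flag_bias: bool = True
    output_format: str = "markdown_table"
    include_corrections: bool = True
```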

Best Practices

  1. Extract and isolate specific claims before checking anything. A paragraph may contain three separate claims disguised as one statement. "AI adoption grew 300% as enterprises shifted to cloud-native architectures, spending $50B annually" contains three verifiable claims: growth rate, causal relationship, and spending figure. Check each independently because one may be true while others are false.

  2. Prioritize primary sources over secondary reporting. When an article cites a Gartner report, find the original Gartner report rather than trusting the article's interpretation. Secondary sources frequently misquote statistics, omit important context, or cite outdated versions of reports. The extra effort of finding primary sources significantly improves verification accuracy.

  3. Check the date of the claim and the date of the supporting evidence. A statistic from a 2020 report cited in a 2024 article may be outdated. Technology adoption rates, market sizes, and usage statistics change rapidly. Verify both that the claim was true when originally stated and whether more recent data has superseded it. Flag claims based on data older than two years as potentially outdated.

  4. Assess source independence, not just source count. Three articles citing the same original study are one source, not three. Check whether your sources conducted independent research or are all referencing the same upstream data. True cross-referencing requires sources that arrived at similar conclusions through different methodologies or independent data collection.

  5. Report confidence levels, not just true/false verdicts. Binary fact-checking oversimplifies complex claims. "Kubernetes handles 80% of container orchestration" might be approximately true (surveys show 65-85% depending on methodology) rather than exactly true or completely false. Communicate the nuance: the claim is directionally accurate, but the specific percentage varies by source and definition.
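Best practice #1 (claim extraction) can be illustrated with a toy splitter. A real agent would use the model itself to isolate claims; this regex heuristic, with the assumed name `extract_claims`, only demonstrates that one sentence can hide several independently verifiable assertions.

```python
import re

def extract_claims(text: str) -> list[str]:
    """Naively split a compound statement on conjunctions and commas
    that often join independent assertions."""
    parts = re.split(r",\s*|\s+as\s+|\s+while\s+", text)
    return [p.strip() for p in parts if p.strip()]

claims = extract_claims(
    "AI adoption grew 300% as enterprises shifted to cloud-native "
    "architectures, spending $50B annually"
)
# Yields three separate claims: the growth rate, the causal shift,
# and the spending figure, each to be verified independently.
```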

Common Issues

Claims use vague language that's difficult to verify. Statements like "most companies" or "significantly improved" resist precise verification. Flag these as unverifiable as stated and suggest specific alternatives: replace "most companies" with the actual percentage from a cited survey. Vague claims are often true in spirit but misleading in implication—quantifying them reveals the actual story.
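A sketch of flagging such vague quantifiers before verification. The `VAGUE_TERMS` list and substring matching are deliberate simplifications, not the agent's actual detection method.

```python
VAGUE_TERMS = {"most", "many", "significantly", "substantially", "a majority of"}

def flag_vague(claim: str) -> list[str]:
    """Return vague quantifiers found in a claim; each should be replaced
    with a sourced number before the claim is rated."""
    lowered = claim.lower()
    return sorted(t for t in VAGUE_TERMS if t in lowered)
```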

Source appears authoritative but has a conflict of interest. A cloud provider's survey showing 90% cloud adoption has inherent bias: their methodology targets their customer base, and their incentive is to show high adoption. Note potential conflicts of interest alongside source credibility. A biased source isn't automatically wrong, but its claims need independent corroboration before being rated as verified.

Original source for a widely-cited statistic can't be found. Some frequently cited statistics have no traceable origin—they were manufactured, misremembered, or distorted through repeated citation. When the primary source can't be located, downgrade the confidence rating regardless of how widely the claim is repeated. Popularity of a claim is not evidence of its truth. Note the attribution gap in your report.
