
SEO Analyzer Pro

Perform deep technical SEO audits with actionable diagnostics across site architecture, on-page elements, performance metrics, and structured data.

When to Use This Agent

Choose this agent when you need to:

  • Execute comprehensive technical SEO audits covering crawlability, indexation health, URL structure, and internal linking topology
  • Analyze on-page factors including meta tags, heading hierarchy, content quality signals, and keyword targeting
  • Evaluate Core Web Vitals performance, mobile-first indexing readiness, and schema markup for rich results

Consider alternatives when:

  • You need broader search strategy including AI citation optimization -- use the Search AI Assistant
  • Your primary concern is content creation rather than technical site analysis

Quick Start

Configuration

name: seo-analyzer-pro
type: agent
category: web-tools

Example Invocation

claude agent:invoke seo-analyzer-pro "Run a full technical SEO audit on our e-commerce category pages"

Example Output

Technical SEO Audit Report
===========================
Scope: E-commerce Category Pages (127 URLs)

INDEXATION: 98/127 indexed (77.2%), 8 orphaned pages
ON-PAGE: 34 missing meta descriptions, 7 duplicate title clusters
PERFORMANCE: Median LCP 2.9s, CLS 0.08, INP 165ms
STRUCTURED DATA: Product schema on 89/127, BreadcrumbList missing

Priority Fix Queue: 47 issues (12 critical, 18 high, 17 medium)

Core Concepts

Technical SEO Audit Dimensions

Aspect           | Details
-----------------|--------
Crawlability     | Robots.txt directives, XML sitemap coverage, crawl budget allocation, redirect chains
Indexation       | Index coverage, canonical consistency, noindex directives, duplicate detection
On-Page Elements | Title tags, meta descriptions, heading hierarchy, image alt attributes
Performance      | Core Web Vitals (LCP, CLS, INP), TTFB, render-blocking analysis
Structured Data  | Schema.org validation, rich result eligibility, JSON-LD completeness
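As a minimal sketch of how the on-page dimension might be checked, the stdlib `html.parser` is enough to pull out titles, meta descriptions, and heading structure. The `OnPageAuditor` class, the `audit_page` helper, and the 60-character title threshold are all illustrative assumptions, not part of the agent's actual implementation.

```python
from html.parser import HTMLParser

class OnPageAuditor(HTMLParser):
    """Collects title text, meta description, and headings from raw HTML."""
    def __init__(self):
        super().__init__()
        self.title_parts = []
        self._in_title = False
        self.meta_description = None
        self.headings = []  # tag names in document order, e.g. ["h1", "h2"]

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("name", "").lower() == "description":
            self.meta_description = attrs.get("content")
        elif tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            self.headings.append(tag)

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title_parts.append(data)

def audit_page(html: str) -> list[str]:
    """Return a list of on-page issues found in a single HTML document."""
    auditor = OnPageAuditor()
    auditor.feed(html)
    issues = []
    title = "".join(auditor.title_parts).strip()
    if not title:
        issues.append("missing <title>")
    elif len(title) > 60:  # illustrative length threshold
        issues.append("title exceeds 60 characters")
    if not auditor.meta_description:
        issues.append("missing meta description")
    if auditor.headings.count("h1") != 1:
        issues.append("page should have exactly one <h1>")
    return issues
```

A real audit would layer duplicate detection and keyword checks on top, but the same accumulate-then-score shape applies.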

SEO Audit Pipeline Architecture

+------------------+     +------------------+     +------------------+
|  URL Discovery   |---->|  Crawl Engine    |---->|  Response        |
|  Sitemap Parse   |     |  HTTP Analysis   |     |  Classification  |
|  Internal Links  |     |  Redirect Trace  |     |  Status Mapping  |
+------------------+     +------------------+     +------------------+
        |                        |                        |
        v                        v                        v
+------------------+     +------------------+     +------------------+
|  On-Page Audit   |     |  Performance     |     |  Schema          |
|  Content Scoring |     |  CWV Assessment  |     |  Validation      |
+------------------+     +------------------+     +------------------+
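The Response Classification stage above can be sketched as a pure mapping from crawl results to status buckets. The function name, bucket labels, and the two-hop chain cutoff are hypothetical choices made for illustration; they mirror the report categories rather than any fixed API.

```python
def classify_response(status: int, final_url: str, requested_url: str,
                      redirect_hops: int) -> str:
    """Map one crawl result onto an audit status bucket."""
    if redirect_hops > 2:
        return "redirect-chain"  # candidate for chain collapsing
    if status == 200:
        return "indexable" if final_url == requested_url else "redirected-ok"
    if status in (301, 308):
        return "permanent-redirect"
    if status in (302, 307):
        return "temporary-redirect"
    if status == 404:
        return "not-found"
    if status == 410:
        return "gone"
    if 500 <= status < 600:
        return "server-error"
    return "other"
```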

Configuration

Parameter         | Type    | Default | Description
------------------|---------|---------|------------
crawl_depth       | integer | 5       | Maximum link depth to crawl from seed URLs
audit_scope       | string  | "full"  | Scope: full, technical-only, on-page-only, performance-only
cwv_source        | string  | "lab"   | CWV data source: lab (Lighthouse), field (CrUX), both
content_min_words | integer | 300     | Minimum words before flagging thin content
schema_validation | boolean | true    | Validate structured data against Rich Results requirements
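One way to model these parameters with their defaults and validation is a dataclass; the `AuditConfig` name and the validation rules are assumptions sketched from the table above, not the agent's actual configuration loader.

```python
from dataclasses import dataclass

@dataclass
class AuditConfig:
    crawl_depth: int = 5
    audit_scope: str = "full"   # full | technical-only | on-page-only | performance-only
    cwv_source: str = "lab"     # lab | field | both
    content_min_words: int = 300
    schema_validation: bool = True

    def __post_init__(self):
        scopes = {"full", "technical-only", "on-page-only", "performance-only"}
        if self.audit_scope not in scopes:
            raise ValueError(f"audit_scope must be one of {sorted(scopes)}")
        if self.cwv_source not in {"lab", "field", "both"}:
            raise ValueError("cwv_source must be lab, field, or both")
        if self.crawl_depth < 1:
            raise ValueError("crawl_depth must be at least 1")
```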

Best Practices

  1. Prioritize Issues by Revenue Impact - A missing canonical tag on a high-traffic page demands immediate attention, while a duplicate meta description on a low-traffic archive can wait. Score findings by organic traffic value and conversion contribution.

  2. Audit Internal Linking Topology - Map click-depth distribution of important pages, ensuring high-value URLs sit within three clicks of the homepage. Identify and resolve orphaned pages receiving no internal links.

  3. Validate Schema Against Actual Eligibility - Implementing structured data does not guarantee rich results. Validate each type against Google's specific requirements using the Rich Results Test, not just JSON-LD syntax correctness.

  4. Separate Lab and Field Performance Data - Gaps between Lighthouse scores and CrUX field data typically indicate third-party script impact or geographic latency. Use both data sources for a complete performance picture.

  5. Track Indexation Trends Over Time - Monitor index coverage weekly to detect crawl budget erosion and unexpected noindex propagation. Sudden drops often signal technical regressions from recent deployments.
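Practice 2 above, mapping click-depth distribution and finding orphans, reduces to a breadth-first search over the internal-link graph. This is a self-contained sketch under the assumption that the crawler exposes links as an adjacency dict; the function names are illustrative.

```python
from collections import deque

def click_depths(links: dict[str, list[str]], home: str) -> dict[str, int]:
    """BFS from the homepage, returning the minimum click depth of every
    reachable URL. High-value URLs should show a depth of 3 or less."""
    depths = {home: 0}
    queue = deque([home])
    while queue:
        url = queue.popleft()
        for target in links.get(url, []):
            if target not in depths:
                depths[target] = depths[url] + 1
                queue.append(target)
    return depths

def orphaned(links: dict[str, list[str]], home: str) -> set[str]:
    """URLs in the graph with no internal-link path from the homepage."""
    reachable = click_depths(links, home)
    all_urls = set(links) | {u for targets in links.values() for u in targets}
    return all_urls - set(reachable)
```

For example, in a graph where `/orphan` receives no links, `orphaned(...)` surfaces it directly, matching the "8 orphaned pages" line in the sample report.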

Common Issues

  1. Faceted Navigation Crawl Bloat - Filter-based URLs generate thousands of low-value combinations that waste crawl budget. Point canonical tags at primary pages, block low-value URL patterns in robots.txt, and apply selective noindex to zero-demand filter combinations.

  2. Redirect Chain Accumulation - Site migrations create chains that grow over time, increasing latency and diluting link equity. Audit redirect maps quarterly, collapsing chains longer than two hops into single redirects.

  3. Conflicting Canonical and Noindex Signals - A page carrying both a canonical to another URL and a noindex directive sends contradictory signals. Choose one canonicalization strategy per URL - redirect, canonical, or noindex - never a combination.
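The chain-collapsing fix from issue 2 can be sketched as a rewrite over a redirect map, so every source points directly at its final destination. The function name and the loop guard are illustrative assumptions.

```python
def collapse_chains(redirects: dict[str, str], max_hops: int = 10) -> dict[str, str]:
    """Rewrite a redirect map so each source points straight at its final
    target, eliminating multi-hop chains. max_hops guards against loops."""
    collapsed = {}
    for source in redirects:
        target, hops = redirects[source], 0
        while target in redirects and hops < max_hops:
            target = redirects[target]
            hops += 1
        if target in redirects:  # still redirecting after max_hops: a loop
            raise ValueError(f"redirect loop involving {source!r}")
        collapsed[source] = target
    return collapsed
```

Running this over an exported redirect map turns `/old -> /mid -> /new` into two direct `-> /new` rules, the single-hop form recommended above.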
