Roier SEO
A comprehensive SEO optimization skill for performing technical SEO audits, keyword research, content optimization, and search performance tracking with data-driven strategies.
When to Use
Choose Roier SEO when:
- Performing comprehensive technical SEO audits on websites
- Optimizing content for search engine rankings and organic traffic
- Building keyword research strategies and content gap analysis
- Tracking search performance metrics and identifying ranking opportunities
Consider alternatives when:
- Managing paid search campaigns — use Google Ads or dedicated PPC tools
- Building social media strategy — use social media management platforms
- Creating content without SEO focus — use content management tools
Quick Start
```shell
# Install SEO analysis tools
pip install requests beautifulsoup4 advertools
npm install lighthouse
```
```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse
import json


class SEOAuditor:
    def __init__(self, base_url):
        self.base_url = base_url
        self.session = requests.Session()
        self.session.headers.update({
            'User-Agent': 'Mozilla/5.0 (compatible; SEOAuditBot/1.0)'
        })
        self.issues = []

    def audit_page(self, url):
        """Run comprehensive SEO audit on a single page"""
        resp = self.session.get(url, timeout=30)
        soup = BeautifulSoup(resp.text, 'html.parser')
        audit = {
            'url': url,
            'status_code': resp.status_code,
            'load_time': resp.elapsed.total_seconds(),
            'title': self._check_title(soup),
            'meta_description': self._check_meta_description(soup),
            'headings': self._check_headings(soup),
            'images': self._check_images(soup),
            'links': self._check_links(soup, url),
            'structured_data': self._check_structured_data(soup),
            'canonical': self._check_canonical(soup, url),
            'robots': self._check_robots_meta(soup)
        }
        return audit

    def _check_title(self, soup):
        title = soup.find('title')
        title_text = title.get_text(strip=True) if title else ''
        issues = []
        if not title_text:
            issues.append('Missing title tag')
        elif len(title_text) > 60:
            issues.append(f'Title too long ({len(title_text)} chars, max 60)')
        elif len(title_text) < 20:
            issues.append(f'Title too short ({len(title_text)} chars, min 20)')
        return {'text': title_text, 'length': len(title_text), 'issues': issues}

    def _check_meta_description(self, soup):
        meta = soup.find('meta', attrs={'name': 'description'})
        desc = meta.get('content', '') if meta else ''
        issues = []
        if not desc:
            issues.append('Missing meta description')
        elif len(desc) > 160:
            issues.append(f'Meta description too long ({len(desc)} chars)')
        elif len(desc) < 50:
            issues.append(f'Meta description too short ({len(desc)} chars)')
        return {'text': desc, 'length': len(desc), 'issues': issues}

    def _check_headings(self, soup):
        headings = {}
        for level in range(1, 7):
            tags = soup.find_all(f'h{level}')
            headings[f'h{level}'] = [tag.get_text(strip=True) for tag in tags]
        issues = []
        if len(headings.get('h1', [])) == 0:
            issues.append('Missing H1 tag')
        elif len(headings.get('h1', [])) > 1:
            issues.append(f'Multiple H1 tags ({len(headings["h1"])})')
        return {'structure': headings, 'issues': issues}

    def _check_images(self, soup):
        images = soup.find_all('img')
        missing_alt = [img.get('src', '') for img in images if not img.get('alt')]
        return {
            'total': len(images),
            'missing_alt': len(missing_alt),
            'issues': [f'{len(missing_alt)} images missing alt text'] if missing_alt else []
        }

    def _check_structured_data(self, soup):
        schemas = soup.find_all('script', type='application/ld+json')
        data = []
        for s in schemas:
            try:
                data.append(json.loads(s.string))
            except (json.JSONDecodeError, TypeError):
                pass
        return {'count': len(data), 'types': [d.get('@type', '') for d in data]}

    def _check_canonical(self, soup, url):
        canonical = soup.find('link', rel='canonical')
        href = canonical.get('href', '') if canonical else ''
        return {'url': href, 'matches': href == url}

    def _check_robots_meta(self, soup):
        meta = soup.find('meta', attrs={'name': 'robots'})
        return meta.get('content', 'index, follow') if meta else 'index, follow'

    def _check_links(self, soup, url):
        links = soup.find_all('a', href=True)
        internal = [l for l in links
                    if urlparse(l['href']).netloc in ('', urlparse(url).netloc)]
        external = [l for l in links
                    if l not in internal and l['href'].startswith('http')]
        return {'internal': len(internal), 'external': len(external), 'total': len(links)}

    def generate_report(self, url):
        audit = self.audit_page(url)
        all_issues = []
        for section in ['title', 'meta_description', 'headings', 'images']:
            if isinstance(audit[section], dict) and 'issues' in audit[section]:
                all_issues.extend(audit[section]['issues'])
        return {**audit, 'total_issues': len(all_issues), 'all_issues': all_issues}
```
Core Concepts
SEO Audit Checklist
| Category | Checks | Impact |
|---|---|---|
| Technical | Crawlability, robots.txt, sitemap, SSL | Critical |
| On-Page | Title, meta description, headings, content | High |
| Performance | Page speed, Core Web Vitals, mobile | High |
| Content | Keyword density, readability, freshness | High |
| Links | Internal linking, broken links, redirects | Medium |
| Structured Data | Schema markup, Open Graph, Twitter Cards | Medium |
| Mobile | Responsive design, viewport, tap targets | High |
| International | Hreflang tags, language markup | Low-Medium |
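The Technical row above starts with crawlability basics. Sitemap locations are commonly declared in robots.txt via the `Sitemap:` directive, with `/sitemap.xml` as the conventional fallback; a minimal parser sketch (the function name `parse_sitemap_urls` is illustrative):

```python
from urllib.parse import urljoin

def parse_sitemap_urls(robots_text, base_url):
    """Extract Sitemap: declarations from robots.txt, falling back to /sitemap.xml."""
    sitemaps = []
    for line in robots_text.splitlines():
        # The Sitemap directive is case-insensitive and its value is an absolute URL
        if line.strip().lower().startswith('sitemap:'):
            sitemaps.append(line.split(':', 1)[1].strip())
    return sitemaps or [urljoin(base_url, '/sitemap.xml')]
```

Feeding each returned URL through a simple GET check then covers the sitemap row of the table.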
Core Web Vitals Monitoring
```python
import subprocess
import json

def run_lighthouse(url):
    """Run Lighthouse audit and extract Core Web Vitals"""
    result = subprocess.run(
        ['npx', 'lighthouse', url, '--output=json', '--quiet',
         '--only-categories=performance', '--chrome-flags="--headless"'],
        capture_output=True, text=True
    )
    data = json.loads(result.stdout)
    audits = data['audits']
    return {
        'performance_score': data['categories']['performance']['score'] * 100,
        'LCP': audits['largest-contentful-paint']['numericValue'] / 1000,
        'FID': audits.get('max-potential-fid', {}).get('numericValue', 0),
        'CLS': audits['cumulative-layout-shift']['numericValue'],
        'TTFB': audits['server-response-time']['numericValue'] / 1000,
        'FCP': audits['first-contentful-paint']['numericValue'] / 1000,
        'TBT': audits['total-blocking-time']['numericValue']
    }
```
Configuration
| Option | Description | Default |
|---|---|---|
| target_url | Website URL to audit | Required |
| crawl_depth | Maximum crawl depth | 3 |
| max_pages | Maximum pages to audit | 100 |
| check_mobile | Run mobile usability checks | true |
| check_performance | Run Core Web Vitals | true |
| check_structured_data | Validate schema markup | true |
| competitive_analysis | Compare with competitor URLs | false |
| export_format | Report format: json, html, csv | "json" |
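Assembled into code, the options above might look like the following (a sketch; the key names mirror the table, and the target URL is a placeholder):

```python
import json

# Example audit configuration mirroring the options table above
config = {
    "target_url": "https://example.com",   # required
    "crawl_depth": 3,
    "max_pages": 100,
    "check_mobile": True,
    "check_performance": True,
    "check_structured_data": True,
    "competitive_analysis": False,
    "export_format": "json",  # one of: json, html, csv
}

print(json.dumps(config, indent=2))
```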
Best Practices
- Prioritize Core Web Vitals (LCP < 2.5s, INP < 200ms, CLS < 0.1; INP replaced FID as a Core Web Vital in 2024) because Google uses these as ranking signals and poor performance directly impacts both search rankings and user experience
- Write unique, descriptive title tags under 60 characters for every page that include the primary keyword naturally — duplicate or generic titles waste ranking potential and reduce click-through rates in search results
- Implement proper internal linking with descriptive anchor text to help search engines understand page relationships and distribute link equity throughout the site
- Use structured data markup (JSON-LD schema) for rich snippets, which can meaningfully improve click-through rates in search results; product pages, articles, FAQs, and how-to guides all have dedicated schema types
- Monitor and fix crawl errors regularly through Google Search Console because 404 errors, redirect loops, and blocked resources prevent search engines from indexing your content
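As an illustration of the structured-data practice above, a minimal JSON-LD Article object can be built like this (field values are placeholders, and the helper name `article_jsonld` is illustrative):

```python
import json

def article_jsonld(headline, author, date_published, url):
    """Build a minimal schema.org Article object for a JSON-LD script tag."""
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "datePublished": date_published,
        "mainEntityOfPage": url,
    }

# Serialized into the <script type="application/ld+json"> tag that crawlers read
snippet = '<script type="application/ld+json">%s</script>' % json.dumps(
    article_jsonld("How to Audit a Site", "Jane Doe", "2024-01-15",
                   "https://example.com/audit-guide"))
```

Richer properties (image, publisher, dateModified) are recommended by schema.org but omitted here to keep the sketch minimal.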
Common Issues
Duplicate content across URL variations: The same content accessible via HTTP/HTTPS, www/non-www, or trailing slash variants dilutes ranking signals. Implement canonical tags, configure proper 301 redirects, and enforce a single canonical URL format across the entire site.
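One way to enforce a single canonical format during an audit is to normalize every discovered URL before comparing or reporting, so that HTTP/HTTPS, www/non-www, and trailing-slash variants collapse to one form. A sketch (the normalization rules shown are one reasonable policy, not the only one):

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url):
    """Normalize scheme, host, and trailing slash so URL variants compare equal."""
    parts = urlsplit(url)
    host = parts.netloc.lower()
    if host.startswith('www.'):
        host = host[4:]
    # Drop the trailing slash everywhere except the root path
    path = parts.path.rstrip('/') or '/'
    # Force HTTPS and drop fragments; keep the query string intact
    return urlunsplit(('https', host, path, parts.query, ''))
```

Running both sides of a comparison through `canonicalize` makes duplicate-URL detection a simple equality check.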
Slow page load times hurting rankings: Large images, render-blocking JavaScript, and unoptimized fonts cause poor Core Web Vitals. Implement lazy loading for below-fold images, defer non-critical JavaScript, use next-gen image formats (WebP/AVIF), and enable proper caching headers.
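Two of these risks can be flagged statically from the HTML itself: images that lack `loading="lazy"` and external scripts in the head without `defer`/`async` (which block HTML parsing). A sketch using BeautifulSoup, which the Quick Start already installs (the function name is illustrative, and whether an image is actually below the fold requires rendering, so this only flags candidates):

```python
from bs4 import BeautifulSoup

def flag_performance_risks(html):
    """Flag images without lazy loading and render-blocking <head> scripts."""
    soup = BeautifulSoup(html, 'html.parser')
    eager_images = [img.get('src', '') for img in soup.find_all('img')
                    if img.get('loading') != 'lazy']
    head = soup.find('head')
    blocking_scripts = []
    if head:
        # External scripts without defer/async block HTML parsing
        blocking_scripts = [s.get('src') for s in head.find_all('script', src=True)
                            if not (s.has_attr('defer') or s.has_attr('async'))]
    return {'eager_images': eager_images, 'blocking_scripts': blocking_scripts}
```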
Missing or incorrect hreflang tags: International sites without proper hreflang tags show the wrong language version in search results. Validate hreflang annotations with a dedicated checker, ensure reciprocal tags exist between all language versions, and include a self-referencing hreflang on each page.
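Reciprocity can be checked mechanically: every alternate a page declares should declare that page back, and every page should reference itself. A sketch over a pre-collected map of hreflang annotations (the function name and data shape are illustrative assumptions):

```python
def check_hreflang_reciprocity(hreflang_map):
    """hreflang_map: {page_url: {lang_code: alternate_url, ...}, ...}
    Returns (missing_self, missing_reciprocal) issue lists."""
    missing_self = []
    missing_reciprocal = []
    for page, alternates in hreflang_map.items():
        # Each page should include a self-referencing hreflang
        if page not in alternates.values():
            missing_self.append(page)
        for lang, alt in alternates.items():
            # Every declared alternate must link back to this page
            back = hreflang_map.get(alt, {})
            if alt != page and page not in back.values():
                missing_reciprocal.append((page, alt))
    return missing_self, missing_reciprocal
```

Pages outside the map (not yet crawled) show up as missing reciprocal links, which is usually the desired behavior for an audit.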