Performance Audit Rapid
Battle-tested command for a comprehensive performance audit with metrics. Includes structured workflows, validation checks, and reusable patterns for performance work.
Conduct a rapid, comprehensive performance audit across your application stack, producing a prioritized report of findings with specific remediation steps.
When to Use This Command
Run this command when...
- You need a full performance health check before a release, launch, or scaling event
- Application performance metrics have crossed alert thresholds and you need a diagnostic audit
- A new team member needs to understand the current performance landscape and known bottlenecks
- You want a documented audit trail of performance findings for stakeholder communication
- Periodic performance review is due and you need a systematic assessment rather than ad-hoc investigation
Quick Start
```markdown
# .claude/commands/performance-audit-rapid.md
---
name: Performance Audit Rapid
description: Comprehensive performance audit with prioritized findings
command: true
---

Audit performance: $ARGUMENTS

1. Analyze technology stack and architecture
2. Measure build times, bundle sizes, and runtime performance
3. Review database queries, API response times, and resource usage
4. Generate prioritized findings report with remediation steps
```
```shell
# Invoke the command
claude "/performance-audit-rapid full application audit"

# Expected output
# > Analyzing technology stack...
# > Framework: Next.js 14 | Runtime: Node.js 20 | DB: PostgreSQL 15
# > Measuring performance baselines...
# > Build time: 42s | Bundle: 380KB | Largest page: 2.1s FCP
# > Audit findings (12 total):
# >   CRITICAL (2):
# >   - Unindexed foreign keys on 3 tables (affects all list queries)
# >   - Synchronous file processing blocks event loop (affects uploads)
# >   HIGH (4):
# >   - No CDN configured for static assets
# >   - Bundle includes 3 unused polyfills (62KB)
# >   - Missing connection pooling (new connection per query)
# >   - No response compression enabled
# >   MEDIUM (4): [listed in report]
# >   LOW (2): [listed in report]
# > Full report saved: reports/performance-audit-2026-03-15.md
```
Core Concepts
| Concept | Description |
|---|---|
| Stack Analysis | Maps the full technology stack to identify audit areas specific to each technology |
| Baseline Measurement | Captures current metrics across build, bundle, runtime, and database performance |
| Severity Classification | Categorizes findings as Critical, High, Medium, or Low based on user impact |
| Remediation Planning | Each finding includes specific steps, estimated effort, and expected improvement |
| Audit Reporting | Produces a structured document suitable for technical review and stakeholder sharing |
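Severity classification is the concept most teams implement differently. A minimal sketch of one possible user-impact heuristic (the thresholds, the `Finding` fields, and the impact formula are all assumptions here, not part of the command; calibrate them against your own SLOs):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    affected_requests_pct: float  # share of traffic touched by the issue
    added_latency_ms: float       # estimated latency cost per affected request

def classify(finding: Finding) -> str:
    """Map estimated user impact onto the four audit severities.

    Impact is modeled as traffic share x latency cost; the cutoffs
    below are illustrative placeholders, not a standard.
    """
    impact = finding.affected_requests_pct * finding.added_latency_ms
    if impact >= 5000 or finding.affected_requests_pct >= 90:
        return "critical"
    if impact >= 1000:
        return "high"
    if impact >= 100:
        return "medium"
    return "low"

# An unindexed foreign key hitting every list query scores critical.
print(classify(Finding("Unindexed foreign keys", 100, 120)))  # critical
```

Whatever formula you choose, keeping it explicit and versioned makes audit-to-audit severity changes explainable rather than arbitrary.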
```
                 Performance Audit Scope
                 =======================
 [Build Performance]            [Runtime Performance]
  Build time                     Response times (p50/p95/p99)
  Bundle sizes                   Memory usage patterns
  Compilation steps              CPU utilization
          |                               |
          +---------------+---------------+
                          |
                   [Audit Engine]
                          |
          +---------------+---------------+
          |                               |
 [Database Performance]          [Infrastructure]
  Query execution times           CDN configuration
  Index coverage                  Compression
  Connection pooling              Caching headers
          |                               |
          +---------------+---------------+
                          |
                 [Prioritized Report]
            Critical | High | Medium | Low
```
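The final step in the diagram, merging findings from all four scopes into one prioritized report, can be sketched in a few lines (the finding dicts and titles are illustrative, not the command's actual data model):

```python
# Rank order for the report: critical first, low last.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def prioritize(findings: list[dict]) -> list[dict]:
    """Return findings ordered critical -> low, regardless of source scope."""
    return sorted(findings, key=lambda f: SEVERITY_RANK[f["severity"]])

findings = [
    {"scope": "infrastructure", "severity": "high",
     "title": "No CDN configured for static assets"},
    {"scope": "database", "severity": "critical",
     "title": "Unindexed foreign keys on 3 tables"},
    {"scope": "build", "severity": "low",
     "title": "Source maps shipped in production bundle"},
]

for f in prioritize(findings):
    print(f"{f['severity'].upper()}: {f['title']}")
```

Because `sorted` is stable, findings of equal severity keep their scope order, which keeps related items grouped in the report.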
Configuration
| Parameter | Description | Default | Example | Required |
|---|---|---|---|---|
| $ARGUMENTS | Audit scope and focus areas | full application | "database only", "frontend" | No |
| severity_threshold | Minimum severity to include in report | low | "high", "critical" | No |
| output_file | Path for the audit report file | reports/performance-audit-{date}.md | "audit.md" | No |
| include_recommendations | Include detailed remediation steps | true | false | No |
| compare_baseline | Previous audit file to compare against | none | "reports/audit-2026-02.md" | No |
Best Practices
- Run audits on a regular schedule -- Monthly or per-sprint audits catch performance regressions before they compound. Comparing against previous audits reveals trends.
- Audit in production-like environments -- Development environments with small datasets and no concurrent users hide real performance issues. Use staging with production-scale data.
- Share audit reports with the team -- Performance is a team responsibility. Distribute the audit report so developers can address findings in their domains.
- Track remediation over time -- Use `compare_baseline` to measure progress against previous audit findings. Unresolved critical findings should escalate in priority.
- Audit before and after major releases -- Capture a pre-release baseline and a post-release measurement to catch any performance regressions introduced by new features.
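The baseline-comparison practices above amount to diffing two metric snapshots and flagging regressions beyond a tolerance. A hedged sketch of that logic (the metric names, snapshot format, and 5% tolerance are assumptions for illustration, not what `compare_baseline` actually does internally):

```python
def compare_baseline(previous: dict, current: dict,
                     tolerance: float = 0.05) -> dict:
    """Return metrics that regressed by more than `tolerance`.

    Assumes lower is better for every metric (times, sizes), so a
    relative increase beyond the tolerance counts as a regression.
    """
    regressions = {}
    for metric, old in previous.items():
        new = current.get(metric)
        if new is not None and old > 0 and (new - old) / old > tolerance:
            regressions[metric] = (old, new)
    return regressions

prev = {"build_time_s": 42, "bundle_kb": 380, "p95_ms": 310}
curr = {"build_time_s": 44, "bundle_kb": 455, "p95_ms": 305}

# Build time grew ~4.8% (within tolerance); bundle grew ~20% (flagged).
print(compare_baseline(prev, curr))  # {'bundle_kb': (380, 455)}
```

Persisting these snapshots alongside each audit report is what makes the "track remediation over time" practice cheap to follow.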
Common Issues
Audit takes too long on large codebases: Full-stack audits on monorepos can be time-consuming. Scope the audit to specific services or layers when time is limited.
Findings conflict with feature priorities: Performance remediation competes with feature work for engineering time. Present audit findings with estimated user impact metrics to help prioritize against feature requests.
False positives in compression checks: Some content types (images, already-compressed files) show as "uncompressed" in audits. Filter audit findings by content type to avoid misleading recommendations.
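Filtering compression findings by content type, as suggested above, is a small allow-list check. A minimal sketch, assuming each finding records the asset's MIME type (the finding shape and the exact type list are illustrative):

```python
# Formats that are already compressed at the codec/container level;
# flagging these as "uncompressed" would be a false positive.
ALREADY_COMPRESSED = {
    "image/jpeg", "image/png", "image/webp", "image/avif",
    "application/zip", "application/gzip", "video/mp4",
}

def is_real_compression_gap(finding: dict) -> bool:
    """Keep only findings for content types that would actually
    benefit from transport compression (text, JS, JSON, etc.)."""
    return finding["content_type"] not in ALREADY_COMPRESSED

findings = [
    {"url": "/app.js", "content_type": "application/javascript"},
    {"url": "/hero.webp", "content_type": "image/webp"},
]

print([f["url"] for f in findings if is_real_compression_gap(f)])
# ['/app.js']
```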