
Performance Audit Rapid

Battle-tested command for a comprehensive performance audit with actionable metrics. Includes structured workflows, validation checks, and reusable patterns for performance work.

Command · Cliptics · performance · v1.0.0 · MIT


Conduct a rapid, comprehensive performance audit across your application stack, producing a prioritized report of findings with specific remediation steps.

When to Use This Command

Run this command when...

  • You need a full performance health check before a release, launch, or scaling event
  • Application performance metrics have crossed alert thresholds and you need a diagnostic audit
  • A new team member needs to understand the current performance landscape and known bottlenecks
  • You want a documented audit trail of performance findings for stakeholder communication
  • Periodic performance review is due and you need a systematic assessment rather than ad-hoc investigation

Quick Start

```markdown
# .claude/commands/performance-audit-rapid.md
---
name: Performance Audit Rapid
description: Comprehensive performance audit with prioritized findings
command: true
---

Audit performance: $ARGUMENTS

1. Analyze technology stack and architecture
2. Measure build times, bundle sizes, and runtime performance
3. Review database queries, API response times, and resource usage
4. Generate prioritized findings report with remediation steps
```
```shell
# Invoke the command
claude "/performance-audit-rapid full application audit"

# Expected output
# > Analyzing technology stack...
# > Framework: Next.js 14 | Runtime: Node.js 20 | DB: PostgreSQL 15
# > Measuring performance baselines...
# > Build time: 42s | Bundle: 380KB | Largest page: 2.1s FCP
# > Audit findings (12 total):
# >   CRITICAL (2):
# >   - Unindexed foreign keys on 3 tables (affects all list queries)
# >   - Synchronous file processing blocks event loop (affects uploads)
# >   HIGH (4):
# >   - No CDN configured for static assets
# >   - Bundle includes 3 unused polyfills (62KB)
# >   - Missing connection pooling (new connection per query)
# >   - No response compression enabled
# >   MEDIUM (4): [listed in report]
# >   LOW (2): [listed in report]
# > Full report saved: reports/performance-audit-2026-03-15.md
```

Core Concepts

| Concept | Description |
| --- | --- |
| Stack Analysis | Maps the full technology stack to identify audit areas specific to each technology |
| Baseline Measurement | Captures current metrics across build, bundle, runtime, and database performance |
| Severity Classification | Categorizes findings as Critical, High, Medium, or Low based on user impact |
| Remediation Planning | Each finding includes specific steps, estimated effort, and expected improvement |
| Audit Reporting | Produces a structured document suitable for technical review and stakeholder sharing |
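Severity classification can be sketched as a simple mapping from estimated user impact to a bucket. The thresholds and the `Finding` fields below are illustrative assumptions for this example, not part of the command itself.

```python
# Illustrative sketch: classify a finding's severity from estimated user impact.
# The Finding fields and the percentage thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    affected_users_pct: float  # share of users hit by the issue (0-100)
    blocks_core_flow: bool     # does it block a critical user journey?

def classify(finding: Finding) -> str:
    if finding.blocks_core_flow:
        return "CRITICAL"
    if finding.affected_users_pct >= 50:
        return "HIGH"
    if finding.affected_users_pct >= 10:
        return "MEDIUM"
    return "LOW"

print(classify(Finding("Unindexed foreign keys", 90.0, True)))  # CRITICAL
```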
```
Performance Audit Scope
========================

  [Build Performance]     [Runtime Performance]
  Build time              Response times (p50/p95/p99)
  Bundle sizes            Memory usage patterns
  Compilation steps       CPU utilization
         |                        |
         +----------+-------------+
                    |
             [Audit Engine]
                    |
         +----------+-------------+
         |                        |
  [Database Performance]  [Infrastructure]
  Query execution times   CDN configuration
  Index coverage          Compression
  Connection pooling      Caching headers
         |                        |
         +----------+-------------+
                    |
          [Prioritized Report]
          Critical | High | Medium | Low
```
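The fan-in shown in the diagram can be sketched as merging findings from the four audit areas and ordering them Critical to Low. The area names and sample findings below are examples, not output of the actual command.

```python
# Illustrative sketch: merge findings from the four audit areas and sort them
# into report order (Critical first). Data shapes here are assumptions.
SEVERITY_ORDER = {"CRITICAL": 0, "HIGH": 1, "MEDIUM": 2, "LOW": 3}

findings = [
    {"area": "Database", "severity": "HIGH", "title": "Missing connection pooling"},
    {"area": "Build", "severity": "MEDIUM", "title": "Slow compilation step"},
    {"area": "Runtime", "severity": "CRITICAL", "title": "Event loop blocked by sync I/O"},
    {"area": "Infrastructure", "severity": "LOW", "title": "Missing caching headers"},
]

report = sorted(findings, key=lambda f: SEVERITY_ORDER[f["severity"]])
for f in report:
    print(f"{f['severity']:8} [{f['area']}] {f['title']}")
```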

Configuration

| Parameter | Description | Default | Example | Required |
| --- | --- | --- | --- | --- |
| $ARGUMENTS | Audit scope and focus areas | full application | "database only", "frontend" | No |
| severity_threshold | Minimum severity to include in report | low | "high", "critical" | No |
| output_file | Path for the audit report file | reports/performance-audit-{date}.md | "audit.md" | No |
| include_recommendations | Include detailed remediation steps | true | false | No |
| compare_baseline | Previous audit file to compare against | none | "reports/audit-2026-02.md" | No |
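One plausible way to resolve these parameters is to merge user overrides over the defaults and expand the `{date}` placeholder in `output_file`. The parameter names come from the table above; the resolution logic itself is an assumption for illustration.

```python
# Illustrative sketch of configuration resolution: defaults from the table,
# overridden per invocation, with {date} expanded in output_file.
from datetime import date

DEFAULTS = {
    "severity_threshold": "low",
    "output_file": "reports/performance-audit-{date}.md",
    "include_recommendations": True,
    "compare_baseline": None,
}

def resolve_config(overrides: dict) -> dict:
    config = {**DEFAULTS, **overrides}
    config["output_file"] = config["output_file"].format(date=date.today().isoformat())
    return config

cfg = resolve_config({"severity_threshold": "high"})
print(cfg["severity_threshold"])  # high
```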

Best Practices

  1. Run audits on a regular schedule -- Monthly or per-sprint audits catch performance regressions before they compound. Comparing against previous audits reveals trends.

  2. Audit in production-like environments -- Development environments with small datasets and no concurrent users hide real performance issues. Use staging with production-scale data.

  3. Share audit reports with the team -- Performance is a team responsibility. Distribute the audit report so developers can address findings in their domains.

  4. Track remediation over time -- Use compare_baseline to measure progress against previous audit findings. Unresolved critical findings should escalate in priority.

  5. Audit before and after major releases -- Capture a pre-release baseline and a post-release measurement to catch any performance regressions introduced by new features.
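The tracking described in practices 4 and 5 amounts to diffing two reports. A minimal sketch, assuming each report reduces to a set of finding titles (the sample titles are taken from the Quick Start output):

```python
# Illustrative sketch of baseline comparison: diff the finding titles of a
# previous audit against the current one to see what was fixed, what carried
# over, and what is new. The set representation is an assumption.
previous = {"Unindexed foreign keys", "No CDN configured", "No response compression"}
current = {"Unindexed foreign keys", "Bundle includes unused polyfills"}

resolved = previous - current   # fixed since the last audit
carried = previous & current    # still open; candidates for escalation
new = current - previous        # introduced since the last audit

print(sorted(resolved))
print(sorted(carried))
print(sorted(new))
```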

Common Issues

Audit takes too long on large codebases: Full-stack audits on monorepos can be time-consuming. Scope the audit to specific services or layers when time is limited.

Findings conflict with feature priorities: Performance remediation competes with feature work for engineering time. Present audit findings with estimated user impact metrics to help prioritize against feature requests.

False positives in compression checks: Some content types (images, already-compressed files) show as "uncompressed" in audits. Filter audit findings by content type to avoid misleading recommendations.
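Filtering compression findings by content type can be sketched as a small allow/deny check; the list of already-compressed types below is an illustrative assumption.

```python
# Illustrative sketch: suppress "uncompressed" findings for content types that
# are already compressed at the format level. The type list is an assumption.
ALREADY_COMPRESSED = {"image/png", "image/jpeg", "image/webp", "application/zip"}

def is_real_compression_finding(content_type: str) -> bool:
    return content_type not in ALREADY_COMPRESSED

assets = ["text/html", "image/png", "application/json", "image/jpeg"]
flagged = [t for t in assets if is_real_compression_finding(t)]
print(flagged)  # ['text/html', 'application/json']
```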
