Power Optimize Invoker
Enterprise-grade command for comprehensive performance optimization. Includes structured workflows, validation checks, and reusable performance patterns.
Invoke a comprehensive, power-level optimization pass that combines code profiling, architecture review, and infrastructure tuning into a unified performance improvement workflow.
When to Use This Command
Run this command when...
- You need a thorough optimization that goes beyond quick fixes and addresses architectural performance patterns
- The application has multiple interdependent performance issues that require coordinated optimization
- You want a senior-engineer-level performance review that considers system design, not just individual queries
- Performance budgets are being missed and you need a systematic approach to getting back on target
- You are preparing for a significant traffic increase and need to ensure the system can handle the load
Quick Start
```markdown
# .claude/commands/power-optimize-invoker.md
---
name: Power Optimize Invoker
description: Deep optimization combining profiling, architecture, and infrastructure
command: true
---

Power optimize: $ARGUMENTS

1. Profile the entire request lifecycle end-to-end
2. Identify bottlenecks at code, architecture, and infrastructure levels
3. Design optimization plan with dependency ordering
4. Implement optimizations from highest to lowest impact
5. Validate with load testing and metric comparison
```
```bash
# Invoke the command
claude "/power-optimize-invoker prepare API for 10x traffic increase"

# Expected output
# > Profiling end-to-end request lifecycle...
# > Bottleneck analysis (current capacity: ~500 req/s):
# >   Architecture: Synchronous payment processing blocks threads
# >   Code: Unoptimized serialization in response middleware (15ms/req)
# >   Database: Missing indexes on 4 high-traffic tables
# >   Infrastructure: Single Redis instance, no read replicas
# > Optimization plan (ordered by impact):
# >   1. Add database indexes (immediate: +40% throughput)
# >   2. Async payment processing with queue (+25% throughput)
# >   3. Optimize serialization middleware (+15% throughput)
# >   4. Redis cluster + DB read replicas (+20% throughput)
# > Implementing phases 1 and 2...
# > Projected capacity after all four phases: ~1,200 req/s
```
Core Concepts
| Concept | Description |
|---|---|
| End-to-End Profiling | Traces request lifecycle from ingress through every layer to response |
| Multi-Level Analysis | Examines code, architecture, database, caching, and infrastructure simultaneously |
| Capacity Planning | Estimates current and post-optimization throughput to validate against targets |
| Coordinated Optimization | Orders optimizations by dependency so changes do not conflict or negate each other |
| Load Validation | Verifies improvements under realistic load conditions, not just single-request benchmarks |
Power Optimize Layers

```
[Infrastructure]  Connection pools, replicas, scaling
        |
[Architecture]    Async patterns, caching layers, queues
        |
[Database]        Indexes, query plans, schema design
        |
[Application]     Algorithms, serialization, middleware
        |
[Code]            Hot loops, memory allocation, I/O
```

Each layer is optimized top-down for maximum compound effect.
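The "dependency ordering" step of the workflow can be sketched as a topological sort over optimization tasks: prerequisites (typically architectural changes) land before the lower-level work they would otherwise invalidate. Task names and dependencies here are hypothetical:

```python
# Sketch: order optimizations so prerequisite changes land first.
from graphlib import TopologicalSorter

# task -> set of tasks that must land before it (hypothetical plan)
plan = {
    "add_indexes": set(),
    "async_payments": set(),
    "optimize_serialization": {"async_payments"},  # async refactor changes the hot path
    "redis_cluster": {"async_payments"},           # queue load shapes caching needs
}

order = list(TopologicalSorter(plan).static_order())
print(order)  # prerequisites always appear before their dependents
```

This is why optimizing serialization before the async refactor would be wasted effort: the refactor rewrites the very code path being tuned.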
Configuration
| Parameter | Description | Default | Example | Required |
|---|---|---|---|---|
| $ARGUMENTS | Optimization goal or target system | entire application | "prepare for 10x traffic" | No |
| target_throughput | Desired requests per second after optimization | none | 5000 | No |
| optimization_depth | How many levels deep to optimize | all | "code-only", "infra-only" | No |
| implement_phases | Number of optimization phases to implement | 2 | 4 | No |
| include_load_test | Generate load test scripts for validation | true | false | No |
Best Practices
- **Define a measurable target** -- "Prepare for 10x traffic" or "reduce p99 latency below 200ms" gives the optimizer a clear goal. An open-ended "make it faster" leads to diminishing returns.
- **Optimize top-down through layers** -- Infrastructure and architecture changes have multiplicative effects on lower layers. Fix connection pooling before optimizing individual queries.
- **Implement in phases** -- Apply the highest-impact optimizations first, measure, then proceed. This prevents wasted effort on optimizations that become irrelevant after architectural changes.
- **Validate under load, not in isolation** -- Single-request benchmarks miss contention issues. Use the generated load test scripts to verify improvements hold under concurrent traffic.
- **Document capacity limits** -- Record the measured throughput ceiling after each optimization so the team knows when the next scaling intervention will be needed.
Common Issues
**Optimization at the wrong layer:** Spending time optimizing code when the real bottleneck is database I/O wastes effort. Always profile first to identify the actual limiting layer before optimizing.

**Async refactoring breaks transaction boundaries:** Converting synchronous payment processing to async requires careful handling of database transactions and error rollback. Design the queue consumer with idempotency and failure recovery.

**Load test results do not match production:** Load tests from a single client machine have different network characteristics than real distributed traffic. Use distributed load testing tools and test from multiple geographic regions.
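For the async-refactoring issue above, the core safeguard is an idempotent consumer: duplicate deliveries become no-ops, and failures are parked for recovery instead of being retried blindly. A minimal sketch, where the queue, payment gateway, and in-memory stores are all hypothetical stand-ins:

```python
# Sketch: idempotent payment-queue consumer with failure recovery.
processed: set[str] = set()    # stand-in for a durable store (e.g. a DB table)
dead_letter: list[dict] = []   # stand-in for a dead-letter queue

def charge(payment: dict) -> None:
    if payment["amount"] < 0:
        raise ValueError("invalid amount")  # stand-in for a gateway error

def consume(payment: dict) -> str:
    if payment["id"] in processed:
        return "skipped"                 # duplicate delivery: safe no-op
    try:
        charge(payment)
    except Exception:
        dead_letter.append(payment)      # park for recovery, never double-charge
        return "failed"
    processed.add(payment["id"])         # mark done only after success
    return "charged"

print(consume({"id": "p1", "amount": 10}))  # charged
print(consume({"id": "p1", "amount": 10}))  # skipped (idempotent)
print(consume({"id": "p2", "amount": -5}))  # failed -> dead letter
```

In production the processed-ID check and the charge should commit atomically (or the gateway's own idempotency keys should be used), so that a crash between the two cannot cause a double charge.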