# Easy Optimize Executor

A command template for orchestration workflows. It streamlines development with pre-configured patterns and best practices.

Execute targeted performance optimizations on code, queries, or configurations with automated profiling, improvement implementation, and before/after benchmarking.
## When to Use This Command

Run this command when:
- You have identified a slow function, endpoint, or query and want an automated optimization pass
- You want Claude to profile code, identify bottlenecks, and apply fixes in one streamlined execution
- A specific component needs optimization and you want measurable before-after metrics
- You need to optimize a configuration file such as webpack, database, or nginx with best-practice settings
- You want quick wins applied to existing code without a full performance audit
## Quick Start

```markdown
# .claude/commands/easy-optimize-executor.md
---
name: Easy Optimize Executor
description: Profile, optimize, and benchmark code or config in one step
command: true
---
Optimize: $ARGUMENTS

1. Profile the target to identify bottlenecks
2. Apply targeted optimizations
3. Measure improvement with before/after comparison
4. Report results with metrics
```

```bash
# Invoke the command
claude "/easy-optimize-executor the getUserOrders function in src/services/orders.ts"

# Expected output
# > Profiling getUserOrders...
# > Issue 1: N+1 query loading order items (23 queries per call)
# > Issue 2: No index on orders.user_id column
# > Issue 3: Unnecessary JSON serialization in loop
# > Applying optimizations...
# > Fix 1: Batch load with JOIN (23 queries -> 1)
# > Fix 2: Added index migration for orders.user_id
# > Fix 3: Moved serialization outside loop
# > Results: 340ms -> 45ms (87% improvement)
```
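The N+1 fix from the sample output can be illustrated with a minimal sketch. All names and data here are hypothetical stand-ins (in-memory lists rather than a real database), not the actual `getUserOrders` implementation:

```python
# Hypothetical data standing in for an orders database.
ORDERS = [{"id": i, "user_id": 7} for i in range(23)]
ITEMS = [{"order_id": i % 23, "sku": f"SKU-{i}"} for i in range(46)]

query_count = 0  # tracks how many "queries" each approach issues

def fetch_items_for(order_id):
    """Simulates one database round trip per order (the N+1 pattern)."""
    global query_count
    query_count += 1
    return [it for it in ITEMS if it["order_id"] == order_id]

def get_user_orders_n_plus_one(user_id):
    """One query for the orders, then one query per order for its items."""
    global query_count
    query_count += 1
    orders = [o for o in ORDERS if o["user_id"] == user_id]
    return [{**o, "items": fetch_items_for(o["id"])} for o in orders]

def get_user_orders_batched(user_id):
    """JOIN-style fix: fetch everything once, group items, then attach them."""
    global query_count
    query_count += 1
    orders = [o for o in ORDERS if o["user_id"] == user_id]
    by_order = {}
    for it in ITEMS:
        by_order.setdefault(it["order_id"], []).append(it)
    return [{**o, "items": by_order.get(o["id"], [])} for o in orders]

query_count = 0
get_user_orders_n_plus_one(7)
print(query_count)  # 24 queries: 1 for orders + 23 for items

query_count = 0
get_user_orders_batched(7)
print(query_count)  # 1 query
```

Both functions return identical results; only the number of round trips changes, which is why this optimization is behavior-preserving.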
## Core Concepts
| Concept | Description |
|---|---|
| Targeted Profiling | Analyzes the specific code or config mentioned, not the entire codebase |
| Bottleneck Ranking | Identifies multiple issues and ranks them by performance impact |
| Safe Optimization | Applies changes that preserve behavior while improving speed or efficiency |
| Metric Collection | Captures timing, memory, query count, or bundle size before and after |
| Incremental Approach | Applies one optimization at a time so each improvement is measurable |
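Metric collection for timing can be sketched as follows. This is a minimal illustration (the function names are invented for the example; a real executor may use language-native profilers instead):

```python
import statistics
import time

def benchmark(fn, *args, runs=3):
    """Time fn over several runs and return the median in milliseconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(*args)
        timings.append((time.perf_counter() - start) * 1000)
    return statistics.median(timings)

def slow_sum(n):  # stand-in for the function being optimized
    total = 0
    for i in range(n):
        total += i
    return total

def fast_sum(n):  # optimized variant with identical behavior
    return n * (n - 1) // 2

before = benchmark(slow_sum, 100_000, runs=5)
after = benchmark(fast_sum, 100_000, runs=5)
print(f"before={before:.3f}ms after={after:.3f}ms delta={100 * (1 - after / before):.0f}%")
```

Taking the median of several runs, rather than a single sample, keeps one slow outlier (GC pause, cache miss) from distorting the before/after comparison.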
## Optimize Executor Flow

```text
Target Code/Config
        |
    [Profile] --> Bottleneck List (ranked)
        |
   [Optimize] --> Apply fix #1 --> Measure
        |               |
        |         Apply fix #2 --> Measure
        |               |
        |         Apply fix #3 --> Measure
        |
    [Report]
        |
Before: 340ms   After: 45ms   Delta: -87%
```
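The incremental loop in the diagram (apply one fix, measure, keep only what improves the metric) can be sketched like this. The fix names and measurement values are hypothetical, standing in for real edits and benchmark runs:

```python
def apply_and_measure(baseline_ms, fixes, measure):
    """Apply fixes one at a time; keep each only if it improves the metric."""
    current = baseline_ms
    kept = []
    for name, apply_fix in fixes:
        apply_fix()
        new_ms = measure()
        if new_ms < current:
            kept.append((name, current, new_ms))
            current = new_ms
        # a real executor would revert the change here if it regressed
    return kept, current

# Hypothetical measurements standing in for real benchmark runs.
readings = iter([120, 80, 45])
measure = lambda: next(readings)
fixes = [
    ("batch JOIN", lambda: None),          # no-op stand-ins for code edits
    ("add index", lambda: None),
    ("hoist serialization", lambda: None),
]

kept, final_ms = apply_and_measure(340, fixes, measure)
print(kept)      # each kept fix as (name, before_ms, after_ms)
print(final_ms)  # 45
```

Applying one fix at a time is what makes each improvement attributable; if all three fixes landed in a single pass, a regression in one could hide inside the aggregate number.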
## Configuration

| Parameter | Description | Default | Example | Required |
|---|---|---|---|---|
| `$ARGUMENTS` | Target to optimize with optional context | none | `"database queries in UserService"` | Yes |
| `optimization_type` | Category of optimization to apply | auto-detect | `"query"`, `"memory"`, `"bundle"` | No |
| `max_changes` | Maximum number of optimizations to apply | 5 | 3 | No |
| `preserve_readability` | Avoid optimizations that reduce code clarity | true | false | No |
| `benchmark_runs` | Number of benchmark iterations for measurement | 3 | 10 | No |
## Best Practices

- **Point to specific code, not vague areas** -- "Optimize the search query in ProductRepository.findByFilters" gives the executor a clear target. Vague requests like "make it faster" force broad scanning that may miss the real bottleneck.
- **Commit before optimizing** -- Create a clean git state so you can diff the changes and revert individual optimizations that do not meet your quality standards.
- **Verify behavior preservation** -- Run your test suite after optimization to confirm the faster code still produces correct results. Speed without correctness is a regression.
- **Optimize hot paths first** -- Focus on code that runs frequently, such as request handlers, loops, and background jobs, rather than cold startup paths, for maximum user-facing impact.
- **Review generated index migrations** -- Database index suggestions should be validated against your write patterns. An index that speeds reads may slow writes on high-traffic tables.
## Common Issues

**Benchmark variance masks improvements:** Short-running functions show high variance in timing measurements. Increase `benchmark_runs` to 10 or higher for sub-millisecond functions to get stable before/after numbers.
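To see why more runs stabilize the numbers, compare the spread of benchmark results at different run counts. This sketch uses a seeded noise model in place of real profiler data, so only the shape of the effect carries over:

```python
import random
import statistics

random.seed(42)

def noisy_timing(true_ms=0.5, jitter=0.4):
    """Simulates one timing sample of a sub-millisecond function."""
    return max(0.0, random.gauss(true_ms, jitter))

def median_of(runs):
    """One benchmark result: the median of `runs` timing samples."""
    return statistics.median(noisy_timing() for _ in range(runs))

# Repeat each benchmark 200 times and compare how much the results wander.
spread_3 = statistics.stdev(median_of(3) for _ in range(200))
spread_10 = statistics.stdev(median_of(10) for _ in range(200))
print(f"stdev of 3-run medians:  {spread_3:.3f} ms")
print(f"stdev of 10-run medians: {spread_10:.3f} ms")
```

The 10-run medians cluster much more tightly, so a genuine 20% improvement is visible instead of drowning in measurement noise.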
**Optimization breaks existing tests:** Some optimizations change return types, ordering, or side effects. Review failing tests to determine whether the test expectation or the optimization needs adjustment.
**Cannot profile without a runtime:** Static analysis catches structural issues but misses runtime bottlenecks such as cache miss rates or I/O wait. For runtime-dependent optimizations, provide profiling data or logs as additional context.