
Easy Optimize Executor

A command template for orchestration workflows. Streamlines development with pre-configured patterns and best practices.

Command · Cliptics · orchestration · v1.0.0 · MIT


Execute targeted performance optimizations on code, queries, or configurations with automated profiling, improvement implementation, and before-after benchmarking.

When to Use This Command

Run this command when...

  • You have identified a slow function, endpoint, or query and want an automated optimization pass
  • You want Claude to profile code, identify bottlenecks, and apply fixes in one streamlined execution
  • A specific component needs optimization and you want measurable before-after metrics
  • You need to optimize a configuration file such as webpack, database, or nginx with best-practice settings
  • You want quick wins applied to existing code without a full performance audit

Quick Start

```markdown
# .claude/commands/easy-optimize-executor.md
---
name: Easy Optimize Executor
description: Profile, optimize, and benchmark code or config in one step
command: true
---

Optimize: $ARGUMENTS

1. Profile the target to identify bottlenecks
2. Apply targeted optimizations
3. Measure improvement with before/after comparison
4. Report results with metrics
```
```shell
# Invoke the command
claude "/easy-optimize-executor the getUserOrders function in src/services/orders.ts"

# Expected output
# > Profiling getUserOrders...
# > Issue 1: N+1 query loading order items (23 queries per call)
# > Issue 2: No index on orders.user_id column
# > Issue 3: Unnecessary JSON serialization in loop
# > Applying optimizations...
# > Fix 1: Batch load with JOIN (23 queries -> 1)
# > Fix 2: Added index migration for orders.user_id
# > Fix 3: Moved serialization outside loop
# > Results: 340ms -> 45ms (87% improvement)
```
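The first fix in the expected output above, replacing an N+1 item lookup with a single batched query, is the kind of change this command applies. A minimal in-memory sketch of the difference (stub data and hypothetical names, counting simulated queries instead of hitting a real database):

```typescript
// Hypothetical in-memory illustration of the N+1 fix: one query per order
// versus one batched query for all orders (analogous to a SQL JOIN).
type OrderItem = { orderId: number; sku: string };

const itemsTable: OrderItem[] = [
  { orderId: 1, sku: "A" },
  { orderId: 2, sku: "B" },
  { orderId: 2, sku: "C" },
];

let queryCount = 0;

// N+1 pattern: issues one query per order
function itemsForOrder(orderId: number): OrderItem[] {
  queryCount++;
  return itemsTable.filter((i) => i.orderId === orderId);
}

// Batched pattern: a single query covers every order at once
function itemsForOrders(orderIds: number[]): Map<number, OrderItem[]> {
  queryCount++;
  const byOrder = new Map<number, OrderItem[]>();
  for (const id of orderIds) byOrder.set(id, []);
  for (const item of itemsTable) byOrder.get(item.orderId)?.push(item);
  return byOrder;
}

const orderIds = [1, 2];

queryCount = 0;
for (const id of orderIds) itemsForOrder(id);
const naiveQueries = queryCount; // one query per order

queryCount = 0;
itemsForOrders(orderIds);
const batchedQueries = queryCount; // one query total

console.log({ naiveQueries, batchedQueries });
```

With 23 orders, the naive pattern issues 23 queries while the batched pattern still issues one, which is where the "23 queries -> 1" line in the report comes from.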

Core Concepts

| Concept | Description |
| --- | --- |
| Targeted Profiling | Analyzes the specific code or config mentioned, not the entire codebase |
| Bottleneck Ranking | Identifies multiple issues and ranks them by performance impact |
| Safe Optimization | Applies changes that preserve behavior while improving speed or efficiency |
| Metric Collection | Captures timing, memory, query count, or bundle size before and after |
| Incremental Approach | Applies one optimization at a time so each improvement is measurable |

Optimize Executor Flow
=======================

```
  Target Code/Config
        |
   [Profile] --> Bottleneck List (ranked)
        |
   [Optimize] --> Apply fix #1 --> Measure
        |              |
        |         Apply fix #2 --> Measure
        |              |
        |         Apply fix #3 --> Measure
        |
   [Report]
        |
  Before: 340ms    After: 45ms    Delta: -87%
```
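The apply-then-measure loop in the flow above can be sketched as a small function. This is a hypothetical illustration, not the command's actual implementation: each fix is applied, re-measured, and kept only if the measurement improved (the simulated latencies loosely mirror the 340ms -> 45ms example):

```typescript
// Hypothetical sketch of the incremental loop: apply one fix at a time,
// re-measure, and keep the fix only if the metric improved.
type Fix = { name: string; apply: () => void; revert: () => void };

function runIncremental(measure: () => number, fixes: Fix[], maxChanges: number): string[] {
  const applied: string[] = [];
  let baseline = measure();
  for (const fix of fixes.slice(0, maxChanges)) {
    fix.apply();
    const after = measure();
    if (after < baseline) {
      applied.push(fix.name); // improvement: keep it and rebase the baseline
      baseline = after;
    } else {
      fix.revert(); // no improvement: back it out
    }
  }
  return applied;
}

// Simulated latency in ms; each fix adjusts it instead of touching real code
let cost = 340;
const fixes: Fix[] = [
  { name: "batch queries", apply: () => (cost -= 250), revert: () => (cost += 250) },
  { name: "speculative cache", apply: () => (cost += 10), revert: () => (cost -= 10) }, // regression, reverted
  { name: "hoist serialization", apply: () => (cost -= 45), revert: () => (cost += 45) },
];

const applied = runIncremental(() => cost, fixes, 5);
console.log(applied, `${cost}ms`);
```

Because each fix is measured in isolation, a regression (the second fix above) is reverted instead of being masked by the improvements around it.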

Configuration

| Parameter | Description | Default | Example | Required |
| --- | --- | --- | --- | --- |
| $ARGUMENTS | Target to optimize with optional context | none | "database queries in UserService" | Yes |
| optimization_type | Category of optimization to apply | auto-detect | "query", "memory", "bundle" | No |
| max_changes | Maximum number of optimizations to apply | 5 | 3 | No |
| preserve_readability | Avoid optimizations that reduce code clarity | true | false | No |
| benchmark_runs | Number of benchmark iterations for measurement | 3 | 10 | No |
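The optional parameters have no dedicated flags in the command file shown in Quick Start; one plausible way to override them is inline in the prompt, since everything after the command name flows into $ARGUMENTS (hypothetical phrasing):

```shell
# Override defaults inline in the prompt (hypothetical phrasing)
claude "/easy-optimize-executor the findByFilters query in ProductRepository; optimization_type=query, max_changes=3, benchmark_runs=10"
```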

Best Practices

  1. Point to specific code, not vague areas -- "Optimize the search query in ProductRepository.findByFilters" gives the executor a clear target. Vague requests like "make it faster" force broad scanning that may miss the real bottleneck.

  2. Commit before optimizing -- Create a clean git state so you can diff the changes and revert individual optimizations that do not meet your quality standards.

  3. Verify behavior preservation -- Run your test suite after optimization to confirm the faster code still produces correct results. Speed without correctness is a regression.

  4. Optimize hot paths first -- Focus on code that runs frequently such as request handlers, loops, and background jobs rather than cold startup paths for maximum user-facing impact.

  5. Review generated index migrations -- Database index suggestions should be validated against your write patterns. An index that speeds reads may slow writes on high-traffic tables.

Common Issues

Benchmark variance masks improvements: Short-running functions show high variance in timing measurements. Increase benchmark_runs to 10 or higher for sub-millisecond functions to get stable before-after numbers.
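A median over repeated runs is one common way to damp that variance. A minimal harness sketch (illustrative names, not part of the command) showing what `benchmark_runs=10` amounts to:

```typescript
import { performance } from "node:perf_hooks";

// Minimal benchmark harness sketch: more runs give a more stable median
// for short-running functions, where single samples vary widely.
function benchmark(fn: () => void, runs: number): { median: number; samples: number[] } {
  const samples: number[] = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    fn();
    samples.push(performance.now() - start); // elapsed milliseconds
  }
  const sorted = [...samples].sort((a, b) => a - b);
  return { median: sorted[Math.floor(sorted.length / 2)], samples };
}

// Example: a sub-millisecond workload benchmarked over 10 runs
const result = benchmark(() => {
  let acc = 0;
  for (let i = 0; i < 100_000; i++) acc += i;
}, 10);

console.log(`median ${result.median.toFixed(3)} ms over ${result.samples.length} runs`);
```

The median is preferred over the mean here because a single GC pause or cache-cold run skews an average far more than it skews the middle sample.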

Optimization breaks existing tests: Some optimizations change return types, ordering, or side effects. Review failing tests to determine whether the test expectation or the optimization needs adjustment.

Cannot profile without runtime: Static analysis catches structural issues but misses runtime bottlenecks like cache miss rates or I/O wait. For runtime-dependent optimizations, provide profiling data or logs as additional context.
