
Code Permutation Tester Launcher

A command template for utilities workflows. Streamlines development with pre-configured patterns and best practices.

Command · Cliptics · utilities · v1.0.0 · MIT


Generate and test multiple implementation variants of a function to find the optimal approach by benchmarking performance, verifying correctness, and comparing readability.

When to Use This Command

Run this command when...

  • You have a performance-critical function and want to compare multiple implementation strategies
  • You want to test whether a recursive vs iterative vs functional approach is faster for your data
  • You need to validate that different implementations produce identical outputs before choosing one

Avoid this command when...

  • The function is trivial and performance differences would be negligible
  • You already know the optimal approach and just need to implement it

Quick Start

# .claude/commands/code-permutation-tester-launcher.md
---
allowed-tools: ["Bash", "Read", "Write", "Edit"]
---
Read the target function. Generate 3-5 alternative implementations. Run correctness tests and benchmarks on each. Compare results.

Example usage:

/code-permutation-tester-launcher src/utils/deepMerge.ts

Example output:

Target: deepMerge (src/utils/deepMerge.ts)
Generated 4 permutations:

  Variant A: Recursive (original)    2.3ms   PASS
  Variant B: Iterative w/ stack      1.1ms   PASS
  Variant C: JSON parse/stringify    0.8ms   FAIL (loses Date)
  Variant D: Lodash-style merge      1.4ms   PASS

Winner: Variant B (iterative) -- 52% faster, all tests pass
Recommendation: Replace original with Variant B

Core Concepts

Concept                   Description
Permutation generation    Creates alternative implementations using different algorithms
Correctness testing       Runs identical inputs through each variant to verify outputs match
Benchmarking              Measures execution time across many runs for statistical accuracy
Equivalence checking      Ensures all passing variants produce identical results

Original Function --> Generate Variants
                           |
              +----+----+----+----+
              | A  | B  | C  | D  |
              +----+----+----+----+
                   |
          Run Correctness Tests
                   |
          Run Benchmarks (1000x)
                   |
          Compare & Rank Results
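
The flow above can be sketched in TypeScript. This is a minimal, self-contained illustration under assumed details, not the command's actual implementation: two hypothetical `deepMerge` variants, an equivalence check against a reference output, and a simple 1000-iteration timing loop.

```typescript
type Obj = Record<string, unknown>;

// Variant A: recursive merge (assumed "original" style)
function mergeRecursive(a: Obj, b: Obj): Obj {
  const out: Obj = { ...a };
  for (const key of Object.keys(b)) {
    const av = out[key], bv = b[key];
    if (av && bv && typeof av === "object" && typeof bv === "object"
        && !Array.isArray(av) && !Array.isArray(bv)) {
      out[key] = mergeRecursive(av as Obj, bv as Obj);
    } else {
      out[key] = bv;
    }
  }
  return out;
}

// Variant B: iterative merge using an explicit work stack
function mergeIterative(a: Obj, b: Obj): Obj {
  const out: Obj = structuredClone(a);
  const stack: Array<[Obj, Obj]> = [[out, b]];
  while (stack.length > 0) {
    const [dst, src] = stack.pop()!;
    for (const key of Object.keys(src)) {
      const dv = dst[key], sv = src[key];
      if (dv && sv && typeof dv === "object" && typeof sv === "object"
          && !Array.isArray(dv) && !Array.isArray(sv)) {
        stack.push([dv as Obj, sv as Obj]);
      } else {
        dst[key] = sv;
      }
    }
  }
  return out;
}

// Correctness: every variant must match a reference output on the same input.
const input: [Obj, Obj] = [{ a: 1, nested: { x: 1 } }, { nested: { y: 2 } }];
const variants = { A: mergeRecursive, B: mergeIterative };
const reference = JSON.stringify(mergeRecursive(...input));
for (const [name, fn] of Object.entries(variants)) {
  const ok = JSON.stringify(fn(...input)) === reference;
  console.log(`Variant ${name}: ${ok ? "PASS" : "FAIL"}`);
}

// Benchmark: time 1000 iterations of each variant.
for (const [name, fn] of Object.entries(variants)) {
  const start = performance.now();
  for (let i = 0; i < 1000; i++) fn(...input);
  console.log(`Variant ${name}: ${(performance.now() - start).toFixed(2)}ms`);
}
```

In a real run the correctness suite would cover many inputs, not one, and the timing loop would typically include a warm-up pass so JIT compilation does not skew the first variant measured.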

Configuration

Option             Default  Description
variants           4        Number of alternative implementations to generate
iterations         1000     Benchmark iterations per variant for statistical accuracy
strategies         auto     Strategies to try (recursive, iterative, functional, streaming)
correctness-suite  auto     Test inputs for verifying equivalence
apply-winner       false    Replace original with the best-performing variant
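
Assuming these options are passed as flags in the same style as the `--strategies` flag shown under Common Issues (the exact flag names here are an assumption, not documented behavior), an invocation overriding the defaults might look like:

```
/code-permutation-tester-launcher src/utils/deepMerge.ts --variants 5 --iterations 5000 --apply-winner
```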

Best Practices

  1. Test correctness first -- a fast implementation that produces wrong results is worse than a slow correct one.
  2. Use realistic data -- benchmark with production-like inputs, not trivially small test data.
  3. Run enough iterations -- at least 1000 iterations smooths out noise from garbage collection and OS scheduling.
  4. Check edge cases -- some variants may fail on empty inputs, very large inputs, or circular references.
  5. Document the choice -- add a comment explaining why the chosen implementation was selected over alternatives.
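
Point 4 can be made concrete with a small sketch of the kind of edge-case check that catches Variant C in the example output above, where a JSON round-trip silently converts `Date` instances to strings. `jsonMerge` here is a hypothetical stand-in for that variant, not code from the command itself.

```typescript
// Hypothetical "Variant C" style merge: fast, but lossy for non-JSON types.
function jsonMerge(a: Record<string, unknown>, b: Record<string, unknown>): Record<string, unknown> {
  // Dates survive stringify only as ISO strings, so parse cannot restore them.
  return JSON.parse(JSON.stringify({ ...a, ...b }));
}

const withDate = { created: new Date("2024-01-01") };
const merged = jsonMerge(withDate, {}) as { created: unknown };
console.log(merged.created instanceof Date); // false: the Date came back as a string
```

A correctness suite that only used plain numbers and strings would mark this variant PASS; it takes a `Date` (or `Map`, `RegExp`, circular reference) in the test inputs to expose the failure.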

Common Issues

  1. All variants fail -- the original function may have dependencies not captured in the permutation. Ensure all imports are included.
  2. Benchmark results inconsistent -- increase iterations to 5000 and close other CPU-intensive applications to reduce noise.
  3. Generated variants too similar -- specify explicit strategies (e.g., --strategies recursive,iterative,streaming) to force meaningful diversity.