Code Permutation Tester Launcher
A command template for utility workflows. It streamlines development with pre-configured patterns and best practices.
Generate and test multiple implementation variants of a function to find the optimal approach by benchmarking performance, verifying correctness, and comparing readability.
When to Use This Command
Run this command when...
- You have a performance-critical function and want to compare multiple implementation strategies
- You want to test whether a recursive vs iterative vs functional approach is faster for your data
- You need to validate that different implementations produce identical outputs before choosing one
Avoid this command when...
- The function is trivial and performance differences would be negligible
- You already know the optimal approach and just need to implement it
Quick Start
```markdown
# .claude/commands/code-permutation-tester-launcher.md
---
allowed-tools: ["Bash", "Read", "Write", "Edit"]
---
Read the target function. Generate 3-5 alternative implementations.
Run correctness tests and benchmarks on each. Compare results.
```
Example usage:
```
/code-permutation-tester-launcher src/utils/deepMerge.ts
```
Example output:
```
Target: deepMerge (src/utils/deepMerge.ts)
Generated 4 permutations:
  Variant A: Recursive (original)     2.3ms  PASS
  Variant B: Iterative w/ stack       1.1ms  PASS
  Variant C: JSON parse/stringify     0.8ms  FAIL (loses Date)
  Variant D: Lodash-style merge       1.4ms  PASS
Winner: Variant B (iterative) -- 52% faster, all tests pass
Recommendation: Replace original with Variant B
```
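To make the example concrete, here is a hypothetical sketch of what two generated permutations of a `deepMerge`-style function might look like, one recursive and one iterative with an explicit stack. The names, types, and merge semantics are illustrative assumptions, not the template's actual output.

```typescript
type Obj = Record<string, unknown>;

const isObj = (v: unknown): v is Obj =>
  typeof v === "object" && v !== null && !Array.isArray(v);

// Variant A: recursive merge -- simple, but can blow the call
// stack on deeply nested inputs.
function mergeRecursive(a: Obj, b: Obj): Obj {
  const out: Obj = { ...a };
  for (const key of Object.keys(b)) {
    const av = out[key];
    const bv = b[key];
    out[key] = isObj(av) && isObj(bv) ? mergeRecursive(av, bv) : bv;
  }
  return out;
}

// Variant B: iterative merge with an explicit stack of
// (target, source) pairs -- no recursion depth limit.
function mergeIterative(a: Obj, b: Obj): Obj {
  const out: Obj = { ...a };
  const stack: Array<[Obj, Obj]> = [[out, b]];
  while (stack.length > 0) {
    const [target, source] = stack.pop()!;
    for (const key of Object.keys(source)) {
      const tv = target[key];
      const sv = source[key];
      if (isObj(tv) && isObj(sv)) {
        const copy = { ...tv }; // copy before mutating to avoid aliasing the input
        target[key] = copy;
        stack.push([copy, sv]);
      } else {
        target[key] = sv;
      }
    }
  }
  return out;
}
```

Both variants should produce identical results on plain-object inputs; the benchmark decides which one wins.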
Core Concepts
| Concept | Description |
|---|---|
| Permutation generation | Creates alternative implementations using different algorithms |
| Correctness testing | Runs identical inputs through each variant to verify output match |
| Benchmarking | Measures execution time across many runs for statistical accuracy |
| Equivalence checking | Ensures all passing variants produce identical results |
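The correctness-testing and equivalence-checking concepts above can be sketched as a small harness: run identical inputs through every variant and flag any whose output diverges from the original. The function names and the use of JSON serialization as a deep-equality proxy are assumptions for illustration.

```typescript
type Impl = (...args: any[]) => any;

// Runs each variant against the original on every input and
// reports which variants match on all of them.
function checkEquivalence(
  original: Impl,
  variants: Record<string, Impl>,
  inputs: unknown[][]
): Record<string, boolean> {
  const results: Record<string, boolean> = {};
  for (const [name, fn] of Object.entries(variants)) {
    results[name] = inputs.every((args) => {
      try {
        // JSON serialization is a crude deep-equality proxy; it
        // misses Dates, Maps, and functions -- use a real
        // deep-equal in production.
        return (
          JSON.stringify(fn(...args)) === JSON.stringify(original(...args))
        );
      } catch {
        return false; // a throwing variant counts as a failure
      }
    });
  }
  return results;
}
```

This is exactly why Variant C in the example output above fails: serializing through JSON loses `Date` objects, so its outputs no longer match the original's.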
```
Original Function --> Generate Variants
                           |
                 +----+----+----+----+
                 | A  | B  | C  | D  |
                 +----+----+----+----+
                           |
                Run Correctness Tests
                           |
               Run Benchmarks (1000x)
                           |
               Compare & Rank Results
```
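The benchmarking step in the flow above can be sketched as a minimal timing loop: warm each variant up, then measure mean milliseconds per call over many iterations. The function name and warm-up count are illustrative assumptions.

```typescript
// Times each variant over `iterations` runs and returns the mean
// milliseconds per call, keyed by variant name.
function benchmark(
  variants: Record<string, () => unknown>,
  iterations = 1000
): Record<string, number> {
  const timings: Record<string, number> = {};
  for (const [name, fn] of Object.entries(variants)) {
    // Warm up so JIT compilation doesn't skew the first samples.
    for (let i = 0; i < 50; i++) fn();
    const start = performance.now();
    for (let i = 0; i < iterations; i++) fn();
    timings[name] = (performance.now() - start) / iterations;
  }
  return timings;
}
```

In practice each `fn` would close over realistic, production-like inputs rather than run on empty arguments, per the best practices below.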
Configuration
| Option | Default | Description |
|---|---|---|
| `variants` | 4 | Number of alternative implementations to generate |
| `iterations` | 1000 | Benchmark iterations per variant for statistical accuracy |
| `strategies` | auto | Strategies to try (recursive, iterative, functional, streaming) |
| `correctness-suite` | auto | Test inputs for verifying equivalence |
| `apply-winner` | false | Replace original with the best-performing variant |
Best Practices
- Test correctness first -- a fast implementation that produces wrong results is worse than a slow correct one.
- Use realistic data -- benchmark with production-like inputs, not trivially small test data.
- Run enough iterations -- at least 1000 iterations smooths out noise from garbage collection and OS scheduling.
- Check edge cases -- some variants may fail on empty inputs, very large inputs, or circular references.
- Document the choice -- add a comment explaining why the chosen implementation was selected over alternatives.
Common Issues
- All variants fail -- the original function may have dependencies not captured in the permutation. Ensure all imports are included.
- Benchmark results inconsistent -- increase iterations to 5000 and close other CPU-intensive applications to reduce noise.
- Generated variants too similar -- specify explicit strategies (e.g., `--strategies recursive,iterative,streaming`) to force meaningful diversity.