
Easy Decision Executor

Powerful command for exploring complex decision branches. Includes structured workflows, validation checks, and reusable patterns for simulation.


Execute structured decision analysis with weighted criteria, outcome simulation, and clear action recommendations for time-sensitive business and technical choices.

When to Use This Command

Run this command when...

  • You face a multi-criteria decision with competing tradeoffs and need a systematic framework to cut through analysis paralysis
  • You need to document the rationale behind a decision for stakeholders with weighted scoring and sensitivity analysis
  • You want to compare three or more alternatives against quantified criteria before committing team resources

Do NOT use this command when...

  • The decision is trivial or binary with obvious preference -- just decide and move on
  • You need ongoing decision support rather than a one-shot evaluation

Quick Start

# .claude/commands/easy-decision-executor.md
# Execute structured decision analysis
Analyze decision: $ARGUMENTS

# Run the command
claude "easy-decision-executor choose between PostgreSQL, MongoDB, and DynamoDB for user analytics platform"
Expected output:
- Decision criteria with importance weights
- Alternative scoring matrix
- Sensitivity analysis on weight changes
- Recommended choice with confidence level
- Implementation action items

Core Concepts

| Concept | Description |
|---|---|
| Criteria Weighting | Assigns relative importance scores to each evaluation dimension |
| Scoring Matrix | Rates each alternative against every criterion on a consistent scale |
| Sensitivity Analysis | Tests how the recommendation changes when weights shift |
| Confidence Level | Quantifies certainty in the recommendation based on data quality |
| Action Mapping | Converts the chosen alternative into concrete next steps |
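The Criteria Weighting and Scoring Matrix concepts above amount to a weighted sum per alternative. A minimal sketch, with purely illustrative criteria, weights, and ratings (none of these numbers come from the command itself):

```python
# Illustrative weights (importance of each criterion, summing to 1.0)
weights = {"cost": 0.30, "scalability": 0.25, "team_expertise": 0.25, "migration_effort": 0.20}

# Scoring matrix: each alternative rated 1-10 against every criterion
scores = {
    "PostgreSQL": {"cost": 8, "scalability": 6, "team_expertise": 9, "migration_effort": 8},
    "MongoDB":    {"cost": 7, "scalability": 7, "team_expertise": 5, "migration_effort": 6},
    "DynamoDB":   {"cost": 5, "scalability": 9, "team_expertise": 4, "migration_effort": 4},
}

def weighted_total(ratings, weights):
    """Weighted sum of per-criterion ratings for one alternative."""
    return sum(weights[c] * r for c, r in ratings.items())

# Rank alternatives by weighted total, highest first
ranked = sorted(scores, key=lambda a: weighted_total(scores[a], weights), reverse=True)
for alt in ranked:
    print(f"{alt}: {weighted_total(scores[alt], weights):.2f}")
# With these illustrative numbers, PostgreSQL ranks first (7.75)
```

Changing the weights reorders the ranking, which is exactly what the Sensitivity Analysis step probes.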
Decision Execution Pipeline:

  Decision Statement
       |
  [Extract Criteria]
       |
  [Weight Criteria]
       |
  [Score Alternatives]
       |
  +----+----+
  |         |
 Rank    Sensitivity
  |      Analysis
  |         |
  +----+----+
       |
  Recommendation
       |
  Action Plan
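The Sensitivity Analysis branch of the pipeline can be sketched as nudging each weight up and down, renormalizing, and checking whether the top-ranked alternative flips. Everything below (names, weights, ratings, the default delta) is an illustrative assumption, not the command's actual implementation:

```python
# Illustrative two-alternative example with a deliberately narrow margin
weights = {"cost": 0.40, "scalability": 0.35, "team_expertise": 0.25}
scores = {
    "PostgreSQL": {"cost": 8, "scalability": 5, "team_expertise": 7},
    "DynamoDB":   {"cost": 6, "scalability": 9, "team_expertise": 6},
}

def top_choice(scores, weights):
    """Alternative with the highest weighted total."""
    return max(scores, key=lambda a: sum(weights[c] * r for c, r in scores[a].items()))

def perturb(weights, criterion, delta):
    """Shift one weight by delta, clamp at zero, renormalize to sum to 1."""
    shifted = dict(weights)
    shifted[criterion] = max(0.0, shifted[criterion] + delta)
    total = sum(shifted.values())
    return {c: w / total for c, w in shifted.items()}

def sensitivity(scores, weights, delta=0.1):
    """List the (criterion, delta) perturbations that change the recommendation."""
    baseline = top_choice(scores, weights)
    flips = [(c, d) for c in weights for d in (-delta, delta)
             if top_choice(scores, perturb(weights, c, d)) != baseline]
    return baseline, flips

baseline, flips = sensitivity(scores, weights)
print(baseline, flips)
# DynamoDB wins at the stated weights, but lowering the scalability
# weight by 0.1 flips the recommendation to PostgreSQL
```

A non-empty flip list signals a fragile recommendation: the runner-up is close enough that the weights deserve a second look before committing.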

Configuration

| Parameter | Default | Description |
|---|---|---|
| Criteria Count | 5-8 | Number of evaluation dimensions extracted from context |
| Scale Range | 1-10 | Numeric scale used for scoring alternatives |
| Sensitivity Steps | 3 | Number of weight perturbation rounds for robustness checking |
| Alternatives Cap | 5 | Maximum number of options compared simultaneously |
| Output Style | Full matrix | Summary table, full matrix, or executive brief |
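One way to picture these parameters is as a small config object. This is only a sketch mirroring the table's defaults; the field names are assumptions, not an actual API of the command:

```python
from dataclasses import dataclass

@dataclass
class DecisionConfig:
    """Hypothetical config mirroring the parameter table above."""
    criteria_min: int = 5            # lower bound on extracted criteria
    criteria_max: int = 8            # upper bound on extracted criteria
    scale_min: int = 1               # bottom of the scoring scale
    scale_max: int = 10              # top of the scoring scale
    sensitivity_steps: int = 3       # weight perturbation rounds
    alternatives_cap: int = 5        # max options compared at once
    output_style: str = "full matrix"  # "summary", "full matrix", or "executive brief"

config = DecisionConfig()
print(config)
```

Overriding a field (e.g. `DecisionConfig(output_style="executive brief")`) corresponds to steering the command with explicit instructions in the prompt.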

Best Practices

  1. State the decision clearly -- frame it as a choice between specific named alternatives rather than an open question to get focused evaluation
  2. Include evaluation criteria -- mention what matters most (cost, speed, scalability, team expertise) so weighting reflects your real priorities
  3. Provide context constraints -- factors like timeline, budget, and existing infrastructure sharpen the scoring against each criterion
  4. Challenge the recommendation -- use the sensitivity analysis to see how close the runner-up is. A narrow margin suggests more investigation is warranted
  5. Document for posterity -- save the output alongside your decision log so future teams understand why the choice was made

Common Issues

  1. All alternatives score similarly -- your criteria may be too generic. Add domain-specific dimensions like "migration effort from current stack" to differentiate
  2. Recommendation contradicts intuition -- check whether the weight assignments reflect your true priorities. Adjust and re-run rather than discarding the framework
  3. Too many criteria dilute signal -- consolidate overlapping criteria (e.g., merge "reliability" and "uptime" into one dimension) to sharpen differentiation