
Quick Decision Operator


Structured decision-making framework that analyzes options, trade-offs, and constraints to produce documented technical decisions with clear rationale.

When to Use This Command

Run this command when...

  • You are choosing between competing technologies, frameworks, or architectural approaches and need a structured comparison
  • A team decision needs documentation for future reference, showing what was considered and why the chosen option won
  • You want to weigh multiple factors (cost, complexity, timeline, risk) systematically rather than relying on gut feeling

Avoid this command when...

  • The decision is trivial or already made and you just need implementation guidance
  • You need ongoing monitoring of a decision's outcomes (use retrospective analysis commands instead)

Quick Start

```yaml
# decision-input.yaml
question: "Which message queue should we adopt?"
options:
  - name: RabbitMQ
    pros: [mature, AMQP standard, good tooling]
    cons: [Erlang dependency, complex clustering]
  - name: Amazon SQS
    pros: [fully managed, auto-scaling, low ops burden]
    cons: [vendor lock-in, 256KB message limit, higher latency]
  - name: Redis Streams
    pros: [already in stack, fast, simple]
    cons: [persistence concerns, no dead-letter natively]
constraints:
  - "Must handle 10K messages/sec"
  - "Team has no Erlang experience"
  - "Budget: $500/month max"
```
```bash
claude -p "Analyze the message queue decision in decision-input.yaml and recommend the best option"
```
Expected output:
```
=== Decision Analysis: Message Queue Selection ===

Option Scores (weighted):
  Amazon SQS:    82/100 (recommended)
  Redis Streams: 71/100
  RabbitMQ:      64/100

Key Factors:
  Throughput:  SQS (10K+), Redis (10K+), RabbitMQ (10K with tuning)
  Ops Burden:  SQS (none), Redis (low), RabbitMQ (high)
  Cost:        SQS ($340/mo est.), Redis ($0 infra), RabbitMQ ($200/mo)
  Lock-in:     SQS (high), Redis (none), RabbitMQ (low)

Recommendation: Amazon SQS
Rationale: Meets throughput requirement with zero ops burden.
Vendor lock-in mitigated by a message abstraction layer.
Constraints satisfied: no Erlang dependency, under $500/month.

Decision record saved: decisions/2026-03-15-message-queue.md
```

Core Concepts

| Concept | Description |
| --- | --- |
| Decision Matrix | Weighted scoring grid comparing options across multiple evaluation criteria |
| Constraint Filtering | Eliminates options that fail hard requirements before scoring begins |
| Trade-off Analysis | Explicit documentation of what you gain and lose with each option |
| Decision Record | Persistent document capturing context, options, rationale, and outcome for future reference |
| Reversibility Assessment | Evaluates how difficult each option is to undo if the decision proves wrong |
```
Options        Constraints        Scoring          Decision
[RabbitMQ] --> [Erlang exp?] --> FILTERED OUT
[SQS]      --> [10K msg/s?]  --> [Score: 82] --> RECOMMENDED
[Redis]    --> [10K msg/s?]  --> [Score: 71]
                                     |
                              Decision Record
                              (saved to repo)
```
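The filter-then-score flow above can be sketched in a few lines of Python. This is an illustrative toy, not the command's actual implementation: the criterion ratings and weights are invented to roughly mirror the Quick Start numbers, and the constraint check is reduced to a precomputed flag.

```python
# Illustrative sketch of constraint filtering followed by weighted scoring.
# All ratings (0-100) and weights are made-up example values.

def weighted_score(ratings, weights):
    """ratings: criterion -> 0-100; weights: criterion -> fraction (sums to 1)."""
    return sum(ratings[c] * weights[c] for c in weights)

weights = {"throughput": 0.30, "ops_burden": 0.40, "cost": 0.15, "lock_in": 0.15}

options = {
    # RabbitMQ fails the hard constraint "Team has no Erlang experience",
    # so it is dropped before scoring, as in the diagram above.
    "RabbitMQ":      {"meets_constraints": False},
    "Amazon SQS":    {"meets_constraints": True,
                      "ratings": {"throughput": 90, "ops_burden": 100,
                                  "cost": 60, "lock_in": 40}},
    "Redis Streams": {"meets_constraints": True,
                      "ratings": {"throughput": 70, "ops_burden": 60,
                                  "cost": 90, "lock_in": 85}},
}

for name, opt in options.items():
    if not opt["meets_constraints"]:
        print(f"{name}: FILTERED OUT")
        continue
    print(f"{name}: {weighted_score(opt['ratings'], weights):.0f}/100")
```

Because each weight is a fraction of 1, every final score stays on the same 0-100 scale as the per-criterion ratings, which keeps the output directly comparable across options.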

Configuration

| Parameter | Type | Default | Description | Example |
| --- | --- | --- | --- | --- |
| `question` | string | required | The decision question being analyzed | See Quick Start |
| `options` | array | required | List of options with pros and cons | See Quick Start |
| `constraints` | array | `[]` | Hard requirements that filter options | `["Must support TLS"]` |
| `weights` | object | equal | Custom weights for evaluation criteria | `{ "cost": 0.3, "ops": 0.4 }` |
| `output_path` | string | `decisions/` | Directory to save decision records | `docs/decisions/` |
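Putting the optional parameters together, a fuller input file might look like the following. The `weights` keys are illustrative; use whatever criteria names match your project's priorities.

```yaml
# decision-input.yaml -- hypothetical example using the optional parameters
question: "Which message queue should we adopt?"
options:
  - name: Amazon SQS
    pros: [fully managed]
    cons: [vendor lock-in]
  - name: Redis Streams
    pros: [already in stack]
    cons: [persistence concerns]
constraints:
  - "Must handle 10K messages/sec"
weights:
  cost: 0.3
  ops: 0.4
  lock_in: 0.3
output_path: docs/decisions/
```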

Best Practices

  1. Include at least three options -- Binary choices miss creative alternatives. Even if one option seems obvious, listing a third forces deeper analysis.
  2. Define constraints before scoring -- Hard constraints eliminate non-starters early, focusing the team's analysis energy on viable options.
  3. Weight criteria based on project priorities -- A startup optimizes for speed-to-market; an enterprise optimizes for compliance. Weights should reflect your context.
  4. Store decision records in version control -- Future team members will ask "why did we choose X?" The decision record provides the answer without archaeology.
  5. Revisit decisions at planned intervals -- Technology evolves. Schedule a 6-month or 12-month review to verify the decision still holds under current conditions.
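A saved decision record (best practices 4 and 5) might look like the sketch below; the exact template the command emits may differ, but the essential sections are context, options considered, rationale, and a planned review date.

```markdown
# Decision: Message Queue Selection
Date: 2026-03-15
Status: Accepted

## Context
Need 10K messages/sec; team has no Erlang experience; budget $500/month.

## Options Considered
RabbitMQ (filtered: Erlang dependency), Amazon SQS (82/100),
Redis Streams (71/100).

## Decision
Adopt Amazon SQS. Zero ops burden; vendor lock-in mitigated by a
message abstraction layer.

## Review
Revisit in 6 months (2026-09) against current throughput and cost.
```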

Common Issues

  1. All options score similarly -- The evaluation criteria may be too broad. Add more specific criteria or increase weight differentiation on the most important factors.
  2. Constraint filtering eliminates all options -- Constraints may be too strict. Review whether each constraint is truly a hard requirement or a strong preference that could be a weighted criterion instead.
  3. Team disagrees with the recommendation -- The model's weights may not reflect the team's true priorities. Re-run with collaboratively agreed weights and discuss the delta.