
Project Timeline Simulator Dispatcher




Simulate project delivery outcomes using Monte Carlo modeling, risk assessment, and scenario analysis to produce confidence-interval timelines and resource optimization recommendations.

When to Use This Command

Run this command when...

  • You are estimating delivery dates for a project and want probability-based timelines rather than single-point estimates
  • You need to model multiple scenarios (baseline, optimistic, pessimistic, disruption) with confidence intervals
  • You want to identify the critical path and understand which tasks have the most schedule risk
  • You are presenting timeline forecasts to stakeholders and need data-backed ranges instead of guesses
  • You need resource allocation recommendations that balance time, quality, and budget constraints

Quick Start

  # .claude/commands/project-timeline-simulator-dispatcher.yaml
  name: Project Timeline Simulator Dispatcher
  description: Monte Carlo project timeline simulation with risk assessment
  inputs:
    - name: scope
      description: "Project scope description or task list"
    - name: team_size
      description: "Number of contributors"
      default: "auto"
  # Simulate from current project data
  claude "project-timeline-simulator --scope 'E-commerce platform MVP'"

  # Simulate with explicit parameters
  claude "project-timeline-simulator --scope 'API migration' --team-size 4 --risk-level medium"
Output:
  [analyze] Team velocity: 12 commits/week (3 contributors)
  [simulate] Running 1000 Monte Carlo iterations...
  Timeline Forecast:
    Optimistic (P10):  6 weeks
    Baseline (P50):    9 weeks
    Pessimistic (P90): 14 weeks
  Critical Path: Auth -> Database -> API -> Integration Tests
  Risk Factors: 2 high, 3 medium, 1 low
  Done. Full report generated.

Core Concepts

  • Monte Carlo Simulation -- Runs thousands of iterations with randomized task durations to produce probability distributions
  • Scenario Modeling -- Generates baseline, optimistic, pessimistic, and disruption scenarios with different assumptions
  • Critical Path Analysis -- Identifies the longest chain of dependent tasks that determines the minimum project duration
  • Confidence Intervals -- Reports P10, P50, and P90 delivery dates instead of a single estimate
  • Risk-Adjusted Planning -- Factors in technical complexity, external dependencies, team capacity, and historical velocity

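A minimal sketch of the Monte Carlo idea, assuming each task's duration is drawn from a triangular (optimistic / most-likely / pessimistic) distribution and tasks run serially. The task names and week estimates below are illustrative, not output from the command:

```python
import random

# Hypothetical task list: (name, optimistic, most-likely, pessimistic)
# durations in weeks. Values are illustrative only.
tasks = [
    ("Auth", 1, 2, 4),
    ("Database", 2, 3, 5),
    ("API Layer", 1, 2, 4),
    ("Integration", 1, 2, 3),
]

def simulate_once():
    # Draw each task from a triangular distribution and sum along the
    # chain (tasks are assumed serial here, as on a critical path).
    return sum(random.triangular(low, high, mode)
               for _, low, mode, high in tasks)

totals = sorted(simulate_once() for _ in range(1000))

def percentile(p):
    # Index into the sorted sample to read off the p-th percentile.
    return totals[int(p / 100 * (len(totals) - 1))]

print(f"Optimistic (P10):  {percentile(10):.1f} weeks")
print(f"Baseline (P50):    {percentile(50):.1f} weeks")
print(f"Pessimistic (P90): {percentile(90):.1f} weeks")
```

With more iterations the percentile estimates stabilize; 1000 (the command's default) is usually enough for week-level precision.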
Simulation Output:
  Probability Distribution:
  P10 ├────┤              6 weeks (best case)
  P50 ├──────────┤        9 weeks (most likely)
  P90 ├────────────────┤  14 weeks (worst case)

  Critical Path:
  [Auth] ──> [Database] ──> [API Layer] ──> [Integration]
    2w          3w             2w              2w = 9w
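The critical path in the diagram can be recovered with a longest-path pass over the dependency DAG. A minimal sketch, where the graph and the per-task durations are assumptions mirroring the diagram above:

```python
# Hypothetical dependency graph matching the diagram; durations in weeks.
durations = {"Auth": 2, "Database": 3, "API Layer": 2, "Integration": 2}
depends_on = {
    "Auth": [],
    "Database": ["Auth"],
    "API Layer": ["Database"],
    "Integration": ["API Layer"],
}

def critical_path(durations, depends_on):
    # Earliest finish time of each task via memoized recursion over the DAG.
    memo = {}
    def finish(task):
        if task not in memo:
            start = max((finish(d) for d in depends_on[task]), default=0)
            memo[task] = start + durations[task]
        return memo[task]
    end = max(durations, key=finish)
    # Walk back through the predecessor with the latest finish time.
    path = [end]
    while depends_on[path[-1]]:
        path.append(max(depends_on[path[-1]], key=finish))
    return list(reversed(path)), finish(end)

path, total = critical_path(durations, depends_on)
print(" -> ".join(path) + f" = {total}w")
# Auth -> Database -> API Layer -> Integration = 9w
```

Any slip in a task on this path pushes the delivery date by the same amount, which is why the simulator flags these tasks first.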

Configuration

  • scope (string, required) -- Project description or path to a task breakdown file
  • team_size (integer, default: auto-detected) -- Number of contributors; auto-detected from git history
  • risk_level (string, default: "medium") -- Base risk assumption: low, medium, or high
  • iterations (integer, default: 1000) -- Number of Monte Carlo simulation iterations to run
  • velocity_period (integer, default: 90) -- Days of git history to use for velocity calculation
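One plausible reading of the velocity calculation is simply counting commits inside the velocity_period window and dividing by the number of weeks. The sketch below assumes that; in practice the timestamps would come from `git log --since`, whereas here they are fabricated for illustration (the `weekly_velocity` helper is hypothetical, not part of the command):

```python
from datetime import datetime, timedelta, timezone

def weekly_velocity(commit_times, period_days=90):
    # Keep only commits inside the window, then convert to a weekly rate.
    cutoff = datetime.now(timezone.utc) - timedelta(days=period_days)
    recent = [t for t in commit_times if t >= cutoff]
    return len(recent) / (period_days / 7)

# Fabricated history: one commit every two days for the last ~four months.
now = datetime.now(timezone.utc)
commits = [now - timedelta(days=d) for d in range(0, 120, 2)]
print(f"Team velocity: {weekly_velocity(commits):.1f} commits/week")
```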

Best Practices

  1. Use real velocity data -- Let the simulator auto-detect team velocity from git history rather than using aspirational estimates. Historical data produces more accurate forecasts.
  2. Present ranges, not points -- Share the P10-P50-P90 range with stakeholders. Committing to the P50 date gives only a 50% chance of on-time delivery; committing to P90 gives 90% confidence.
  3. Update simulations weekly -- As tasks complete and new information surfaces, re-running the simulation narrows the confidence interval and improves accuracy.
  4. Identify and mitigate critical path risks -- Tasks on the critical path directly impact delivery date. Assign your strongest team members to these tasks and remove blockers aggressively.
  5. Model disruption scenarios explicitly -- Include scenarios for key person unavailability, scope changes, and external dependency delays to understand worst-case exposure.

Common Issues

  1. Insufficient velocity data -- New projects or teams with less than 30 days of history produce wide confidence intervals. Supplement with industry benchmarks or team self-assessment.
  2. Task dependencies not modeled -- Without explicit dependency information, the simulator assumes tasks are parallelizable, producing overly optimistic timelines. Define dependencies for accurate critical path analysis.
  3. Scope creep invalidates forecasts -- Simulations are only valid for the scope defined at simulation time. When scope changes, re-run the simulation with updated task lists.