Team Velocity Auto

All-in-one command covering tracking, analysis, team metrics, and velocity forecasting. Includes structured workflows, validation checks, and reusable patterns for teams.

Command | Cliptics | team | v1.0.0 | MIT

Team Velocity Auto is a command that automates the collection, analysis, and forecasting of team delivery velocity across sprints and development cycles. It pulls commit history, story point data, and contributor patterns to produce a comprehensive performance picture. The command then applies predictive modeling to forecast future sprint outcomes and highlight areas where the team can improve throughput without sacrificing quality.

When to Use This Command

Run this command when...

  • You need an automated snapshot of your team's delivery pace across the last several sprints and want actionable insights rather than raw numbers.
  • Sprint planning is approaching and stakeholders require data-driven estimates for how many story points or tasks the team can realistically complete.
  • You suspect velocity has been declining or fluctuating and want to identify root causes such as team composition changes, scope creep, or technical debt accumulation.
  • Leadership requests a quarterly performance report with trend analysis and confidence-bounded delivery projections.
  • You are onboarding a new team lead who needs historical context about the team's capacity and consistency patterns.

Consider alternatives when...

  • You only need a simple commit count for the last week without any forecasting or trend analysis; a basic git log command suffices.
  • Your team does not follow sprint-based or iterative delivery cycles, making velocity as a metric less meaningful.
  • You need individual performance reviews rather than team-level aggregate analysis.

Quick Start

# .velocity-auto.yml
tracking:
  sprint_length: 14        # days per sprint
  lookback_sprints: 8      # number of sprints to analyze
  metric: story_points     # story_points | commits | tasks
forecasting:
  model: monte_carlo
  simulations: 10000
  confidence_levels: [70, 85, 95]

Example invocation:

team-velocity-auto "last 6 sprints with quarterly projection"

Example output:

Sprint Velocity Summary (Last 6 Sprints)
-----------------------------------------
Average Velocity:    42.3 story points/sprint
Standard Deviation:  5.8 points
Trend Direction:     +3.2% per sprint (upward)
Predictability:      78% (moderate-high)

Monte Carlo Forecast (Next Sprint):
  70% confidence: 38-48 points
  85% confidence: 35-51 points
  95% confidence: 31-55 points

Top Recommendations:
  1. Reduce WIP limit from 8 to 6 to stabilize throughput
  2. Address flaky CI pipeline causing 4.2 hrs/sprint delay
  3. Schedule tech debt sprint to restore declining merge rates

Core Concepts

  • Sprint Velocity: Measures delivery throughput per iteration. Calculated from completed story points, merged pull requests, or closed tasks within a defined sprint window.
  • Monte Carlo Forecasting: Produces probabilistic delivery predictions. Runs thousands of simulations using historical velocity distributions to generate confidence-bounded future estimates.
  • Velocity Stability Index: Quantifies consistency of output. A ratio of standard deviation to mean velocity; lower values indicate more predictable delivery patterns.
  • Capacity Factor Analysis: Accounts for team availability. Adjusts raw velocity by factoring in holidays, sick days, onboarding periods, and partial-sprint contributors.
  • Trend Decomposition: Separates signal from noise. Breaks velocity time series into trend, seasonal, and residual components to identify genuine improvement or decline.
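The Monte Carlo idea can be illustrated with a short sketch. This is a minimal bootstrap-style resampler, not the command's actual implementation: it draws repeatedly from historical sprint velocities and reads confidence bands off the simulated distribution (all names here are illustrative).

```python
import random

def monte_carlo_forecast(history, simulations=10_000, levels=(70, 85, 95)):
    """Resample past sprint velocities to estimate next-sprint outcomes."""
    outcomes = sorted(random.choice(history) for _ in range(simulations))
    bands = {}
    for level in levels:
        tail = (100 - level) / 200          # excluded mass, split across both tails
        lo = outcomes[int(tail * simulations)]
        hi = outcomes[int((1 - tail) * simulations) - 1]
        bands[level] = (lo, hi)
    return bands

history = [38, 45, 41, 47, 39, 44]          # last six sprints, in story points
bands = monte_carlo_forecast(history)
print(bands)
```

A production forecaster would typically smooth the empirical distribution (for example, by fitting a normal or kernel density) rather than resampling six discrete values, but the confidence-band mechanics are the same.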
                    Team Velocity Auto Architecture
  +------------------------------------------------------------------+
  |  DATA COLLECTION LAYER                                           |
  |  [Git Log] --> [Task Tracker] --> [Calendar/Availability]        |
  +------------------------------------------------------------------+
           |                |                    |
           v                v                    v
  +------------------------------------------------------------------+
  |  ANALYSIS ENGINE                                                 |
  |  Sprint Bucketing --> Capacity Adjustment --> Trend Detection     |
  +------------------------------------------------------------------+
           |
           v
  +------------------------------------------------------------------+
  |  FORECASTING MODULE                                              |
  |  Historical Distribution --> Monte Carlo Sim --> Confidence Bands |
  +------------------------------------------------------------------+
           |
           v
  +------------------------------------------------------------------+
  |  OUTPUT & RECOMMENDATIONS                                        |
  |  Velocity Report | Forecast Table | Optimization Suggestions     |
  +------------------------------------------------------------------+

Configuration

  • sprint_length (integer, default 14): Duration of each sprint in calendar days, used for bucketing commits and tasks.
  • lookback_sprints (integer, default 6): Number of completed sprints to include in the historical analysis window.
  • forecast_model (string, default monte_carlo): Forecasting algorithm to use: monte_carlo, linear_regression, or weighted_average.
  • confidence_levels (array, default [70, 85, 95]): Percentage confidence intervals to compute and display in forecast output.
  • exclude_authors (array, default []): Git author emails to exclude from velocity calculations; useful for bot accounts.
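A fuller configuration combining these parameters might look like the following. The nesting of exclude_authors and the bot addresses shown are assumptions for illustration, based on the parameter table above:

```yaml
# .velocity-auto.yml
tracking:
  sprint_length: 14
  lookback_sprints: 6
  metric: story_points
forecasting:
  model: monte_carlo
  confidence_levels: [70, 85, 95]
exclude_authors:
  - ci-bot@example.com
  - release-bot@example.com
```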

Best Practices

  1. Maintain consistent sprint boundaries. Velocity comparisons only make sense when sprints are measured against the same time window. If your team occasionally extends or shortens sprints, normalize the data to a per-day rate before comparing across iterations. This prevents misleading spikes or dips caused by calendar inconsistencies.
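  The per-day normalization above is simple arithmetic; a sketch with illustrative names:

```python
def per_day_rate(points_completed, sprint_days):
    """Normalize a sprint's output to story points per calendar day."""
    return points_completed / sprint_days

# A shortened 10-day sprint delivering 30 points matches the pace of a
# standard 14-day sprint delivering 42, despite the lower raw total.
print(per_day_rate(30, 10), per_day_rate(42, 14))  # 3.0 3.0
```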

  2. Combine multiple velocity signals. Story points alone can mask important details. Run the command with both story point and commit-based tracking to cross-reference. When commit velocity stays flat but story points rise, it may indicate story inflation rather than genuine improvement. The most reliable picture comes from triangulating at least two independent metrics.

  3. Account for capacity changes explicitly. A sprint where two team members are on vacation will naturally produce lower velocity. Rather than letting this drag down your averages and forecasts, configure the capacity factors so the command adjusts for known availability gaps. This produces forecasts that reflect what the team can actually deliver at full strength.
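  One simple way to make that adjustment is to scale raw velocity by the ratio of full-strength to actual person-days. This sketch uses illustrative names; the command's internal capacity model may differ:

```python
def capacity_adjusted_velocity(raw_points, team_size, sprint_days, person_days_absent):
    """Scale a sprint's raw velocity up to its full-strength equivalent."""
    full_capacity = team_size * sprint_days
    actual_capacity = full_capacity - person_days_absent
    return raw_points * full_capacity / actual_capacity

# 5-person team, 14-day sprint, 10 person-days lost to vacation:
print(capacity_adjusted_velocity(36, 5, 14, 10))  # 42.0
```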

  4. Review forecasts with the team, not just management. Velocity data is most powerful as a planning tool when the people doing the work understand and trust the numbers. Share the output during sprint planning so developers can flag whether the projections feel realistic. This builds a feedback loop that improves forecast accuracy over time.

  5. Track velocity stability, not just velocity magnitude. A team delivering 40 points per sprint with a standard deviation of 3 is far more predictable than a team averaging 50 points with a deviation of 15. Prioritize reducing variance through WIP limits, scope discipline, and technical debt management. Predictability enables better commitments to stakeholders.
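  The comparison above is the Velocity Stability Index from Core Concepts, i.e. the coefficient of variation; a quick sketch:

```python
from statistics import mean, stdev

def stability_index(velocities):
    """Coefficient of variation: stdev / mean. Lower is more predictable."""
    return stdev(velocities) / mean(velocities)

steady = [38, 41, 40, 43, 39, 42]    # ~40 points/sprint, low variance
erratic = [35, 62, 48, 70, 33, 52]   # ~50 points/sprint, high variance
print(stability_index(steady) < stability_index(erratic))  # True
```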

Common Issues

Velocity appears artificially inflated after adopting the tool. This usually happens when teams unconsciously begin splitting stories into smaller pieces to increase point counts without delivering more actual value. Fix this by tracking an additional output metric such as features shipped or customer-facing changes deployed. The command supports multi-metric tracking to expose this pattern.

Monte Carlo forecasts produce extremely wide confidence bands. Wide bands indicate high historical variance in velocity. Rather than treating this as a tool problem, it signals that the team's delivery process has significant unpredictability. Address the root cause by investigating what made past sprints so variable, such as unplanned work, scope changes, or dependency blockers, and work to reduce those factors.

Sprint bucketing misaligns with actual sprint dates. The command defaults to fixed-length windows counting backward from the current date, which may not match your actual sprint calendar. Configure explicit sprint start dates in the YAML configuration or integrate with your task tracker's sprint definitions to ensure each sprint window captures the correct set of work items.
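Anchoring buckets at an explicit sprint start date, rather than counting backward from today, can be sketched as follows (illustrative code, not the command's internals):

```python
from datetime import date

def bucket_by_sprint(items, sprint_start, sprint_length=14, lookback=6):
    """Assign (completion_date, points) items to fixed-length sprint
    windows anchored at an explicit start date."""
    buckets = [0] * lookback
    for completed_on, points in items:
        index = (completed_on - sprint_start).days // sprint_length
        if 0 <= index < lookback:
            buckets[index] += points
    return buckets

items = [(date(2024, 1, 3), 5), (date(2024, 1, 16), 8), (date(2024, 1, 29), 3)]
print(bucket_by_sprint(items, sprint_start=date(2024, 1, 1)))  # [5, 8, 3, 0, 0, 0]
```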
