# Retrospective Analyzer Launcher
An enterprise-grade command for quantitative analysis of team retrospectives. Includes structured workflows, validation checks, and reusable patterns for teams.
The Retrospective Analyzer Launcher command performs quantitative analysis of team retrospectives by examining sprint metrics, collaboration patterns, and velocity trends to produce actionable improvement insights. It correlates code activity data with sprint outcomes to identify what worked well, what created friction, and where the team should focus improvement efforts. Run this command when you want data-driven retrospective discussions rather than purely subjective team feedback.
## When to Use This Command
Run this command when...
- You are preparing for a sprint retrospective and want quantitative data to complement qualitative team feedback
- You need to track improvement trends across multiple retrospectives to verify that action items are producing results
- You want to identify patterns in sprint velocity, code churn, and review turnaround that reveal systemic workflow bottlenecks
- You are analyzing team collaboration metrics to understand how code review, pair programming, and knowledge sharing affect delivery
- You need to generate a retrospective report for distributed teams that summarizes achievements, challenges, and improvement areas with data
Consider alternatives when...
- You are conducting a lightweight retro that relies purely on team sentiment without quantitative analysis
- Your retrospective focuses on interpersonal dynamics that cannot be measured through code and project metrics
- You need to plan future sprints, which the sprint planning command handles with capacity and backlog analysis
## Quick Start
```yaml
# retro-config.yml
analysis:
  sprint-period: "last-2-weeks"
  metrics:
    - velocity
    - code-churn
    - review-turnaround
    - deployment-frequency
    - bug-rate
  comparison: "previous-sprint"
sources:
  git: true
  linear: true
  github: true
output:
  format: "report"
  visualizations: true
```
Example invocation:
```
/retrospective-analyzer-launcher --period "2026-03-01 to 2026-03-14" --compare previous
```
Example output:
```
Retrospective Analysis Report
===============================
Sprint: Mar 1-14, 2026
Compared to: Feb 15-28, 2026

Velocity:
  Completed: 38 story points (prev: 32, +18.7%)
  Carried over: 5 points (prev: 8, improved)
  Scope change: +3 points mid-sprint

Code Activity:
  Commits: 142 (prev: 128)
  Files changed: 234 (prev: 198)
  Code churn: 12% (prev: 18%, improved)
  Review turnaround: 4.2 hrs (prev: 6.8 hrs, improved)

Collaboration:
  PRs reviewed by >1 person: 78% (prev: 65%)
  Knowledge distribution: 3.2 contributors per module (prev: 2.8)
  Bus factor risk: 2 modules with single contributor

Deployments:
  Production deploys: 6 (prev: 4)
  Rollbacks: 0 (prev: 1)
  Incident count: 1 minor (prev: 2)

Top improvements:
  1. Review turnaround reduced by 38%
  2. Code churn decreased, indicating better design decisions
  3. Velocity increased with lower carryover

Areas for attention:
  1. Two modules have bus factor risk (single contributor)
  2. Mid-sprint scope additions continue to disrupt planning
  3. Test coverage decreased 2% this sprint
```
## Core Concepts
| Concept | Purpose | Details |
|---|---|---|
| Sprint Metrics | Quantify delivery performance | Measures velocity, completion rate, carryover, and scope changes to provide objective sprint performance data |
| Code Activity Analysis | Understand development patterns | Tracks commits, file changes, code churn, and refactoring ratio to assess code quality trends |
| Collaboration Metrics | Evaluate team dynamics | Measures review participation, knowledge distribution, and contributor diversity per module |
| Trend Comparison | Track improvement over time | Compares current sprint metrics against previous sprints to identify positive trends and regressions |
| Actionable Insights | Drive concrete improvements | Translates metric patterns into specific, implementable recommendations for the next sprint |
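The sprint-metric deltas shown in the example report can be sketched as a small helper. The function names and output format below are illustrative, not the command's actual internals:

```python
def percent_delta(current: float, previous: float) -> float:
    """Relative change from the previous sprint, in percent."""
    if previous == 0:
        raise ValueError("previous sprint value must be non-zero")
    return (current - previous) / previous * 100

def metric_line(name: str, current: float, previous: float,
                lower_is_better: bool = False) -> str:
    """Render one report line, flagging whether the trend improved."""
    delta = percent_delta(current, previous)
    improved = delta < 0 if lower_is_better else delta > 0
    note = "improved" if improved else "watch"
    return f"{name}: {current} (prev: {previous}, {delta:+.1f}%, {note})"

# Velocity rose from 32 to 38 points, roughly +18.8% over the previous sprint.
velocity = metric_line("Completed points", 38, 32)
# Churn fell from 18% to 12%; lower_is_better flags that as an improvement.
churn = metric_line("Code churn %", 12, 18, lower_is_better=True)
```

Marking direction per metric (`lower_is_better`) matters because improvements point in different directions: churn should fall while velocity should rise.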
```
Architecture: Retrospective Analysis
=======================================
+-------------------+      +---------------------+      +------------------+
| Data Collectors   | -->  | Metrics Calculator  | -->  | Trend Analyzer   |
| (git, linear, gh) |      | (velocity, churn)   |      | (sprint compare) |
+-------------------+      +---------------------+      +------------------+
                                                                 |
                                 +-------------------------------+
                                 v
                      +---------------------+      +-------------------+
                      | Pattern Detector    | -->  | Insight Generator |
                      | (bottleneck finder) |      | (recommendations) |
                      +---------------------+      +-------------------+
```
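The stages in the diagram can be read as a plain function pipeline. The stage names below follow the diagram, but the bodies are deliberately simplified illustrations, not the command's real implementation:

```python
def calculate_metrics(events):
    """Metrics Calculator: total story points per sprint from raw events."""
    totals = {}
    for event in events:
        totals[event["sprint"]] = totals.get(event["sprint"], 0) + event["points"]
    return totals

def compare_sprints(metrics, current, previous):
    """Trend Analyzer: delta between the current and previous sprint."""
    return {"delta": metrics[current] - metrics[previous]}

def detect_patterns(trend):
    """Pattern Detector: classify the velocity trend."""
    return ["velocity_up"] if trend["delta"] > 0 else ["velocity_down"]

def generate_insights(patterns):
    """Insight Generator: map detected patterns to recommendations."""
    hints = {
        "velocity_up": "Velocity increased; confirm carryover also fell.",
        "velocity_down": "Velocity dropped; review scope changes and absences.",
    }
    return [hints[p] for p in patterns]

events = [{"sprint": "prev", "points": 32}, {"sprint": "current", "points": 38}]
metrics = calculate_metrics(events)
trend = compare_sprints(metrics, "current", "prev")
insights = generate_insights(detect_patterns(trend))
```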
## Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
| period | string | "last-2-weeks" | Sprint period to analyze, specified as a date range or relative time expression |
| compare | string | "previous" | Comparison baseline: previous sprint, sprint average, or a specific date range |
| metrics | string[] | ["velocity","churn","reviews"] | Metrics to include in the analysis |
| team | string | "" | Filter analysis to a specific team when multiple teams share the repository |
| output | string | "report" | Output format: report for human-readable summary, json for data export, or dashboard for visualization |
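If you were wiring up flag parsing for a launcher like this yourself, the parameter table maps naturally onto `argparse`. This sketch mirrors the table's names and defaults and is not the command's actual source:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Defaults mirror the configuration table above.
    parser = argparse.ArgumentParser(prog="retrospective-analyzer-launcher")
    parser.add_argument("--period", default="last-2-weeks",
                        help="sprint period: date range or relative expression")
    parser.add_argument("--compare", default="previous",
                        help="baseline: previous sprint, average, or a date range")
    parser.add_argument("--metrics", nargs="+",
                        default=["velocity", "churn", "reviews"],
                        help="metrics to include in the analysis")
    parser.add_argument("--team", default="",
                        help="restrict analysis to one team")
    parser.add_argument("--output", default="report",
                        choices=["report", "json", "dashboard"])
    return parser

args = build_parser().parse_args(
    ["--period", "2026-03-01 to 2026-03-14", "--compare", "previous"]
)
```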
## Best Practices
- Combine quantitative metrics with qualitative team feedback. The analyzer provides objective data about what happened during the sprint, but team members provide irreplaceable context about why. Use the metrics as conversation starters rather than conversation replacements during the retrospective meeting.
- Track trends across at least three sprints before drawing conclusions. A single sprint's metrics can be influenced by holidays, team changes, or exceptional tasks. Wait until you have three or more data points before identifying a trend as meaningful and acting on it with process changes.
- Focus on a small number of improvement areas per sprint. The analysis may surface many potential improvements, but attempting to address all of them simultaneously dilutes focus and makes it impossible to attribute improvements to specific actions. Select two to three items and commit to measurable progress on each.
- Address bus factor risks proactively. When the analysis identifies modules with a single contributor, schedule knowledge-sharing sessions or pair programming rotations before an availability disruption creates a bottleneck. Bus factor risk is easier to address preventively than reactively.
- Review code churn trends to assess design decision quality. High code churn, where recently written code is modified or deleted, can indicate premature implementation before requirements stabilized or design gaps that required rework. A declining churn trend suggests the team is making better upfront design decisions.
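Code churn can be approximated from per-commit diff stats. The proxy below, the share of deleted lines among all changed lines, is one simple heuristic, not necessarily the command's exact definition:

```python
def churn_ratio(diff_stats):
    """Proxy for churn: deleted lines / all changed lines.

    diff_stats: iterable of (additions, deletions) pairs per commit, e.g.
    parsed from `git log --numstat` output.
    """
    added = sum(a for a, _ in diff_stats)
    deleted = sum(d for _, d in diff_stats)
    total = added + deleted
    return 0.0 if total == 0 else deleted / total

# 30 deleted lines out of 180 changed lines, i.e. roughly 16.7% churn.
ratio = churn_ratio([(100, 20), (50, 10)])
```

A stricter definition would only count deletions of lines added within the same analysis window; that requires line-level blame data rather than aggregate diff stats.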
## Common Issues
Metrics skewed by infrastructure or automation commits. Automated commits from CI pipelines, dependency updates, or code formatting tools inflate commit counts and file change metrics. Exclude bot commits and automated changes from the analysis to get an accurate picture of human development activity.
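A bot filter can be as simple as matching author names and message prefixes. The author list and prefixes below are assumed conventions; extend them to match your own CI and tooling:

```python
# Illustrative set of automation authors; extend per repository.
BOT_AUTHORS = {"dependabot[bot]", "github-actions[bot]", "renovate[bot]"}

def is_automated(author: str, message: str) -> bool:
    if author in BOT_AUTHORS or author.endswith("[bot]"):
        return True
    # Common automated-commit markers (assumed conventions, adjust as needed).
    return message.startswith(("chore(deps):", "style: apply formatting"))

def human_commits(commits):
    """Keep only commits authored by people, for accurate activity metrics."""
    return [c for c in commits if not is_automated(c["author"], c["message"])]

commits = [
    {"author": "alice", "message": "fix: handle empty sprint"},
    {"author": "dependabot[bot]", "message": "chore(deps): bump lodash"},
]
kept = human_commits(commits)
```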
Velocity comparison misleading across sprints with different capacity. Comparing story point completion between a full-capacity sprint and one with holidays or team absences produces misleading trends. Normalize velocity by available capacity to make cross-sprint comparisons meaningful.
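Capacity normalization is a single scaling step: divide completed points by available developer-days and project to full capacity. A minimal sketch, with hypothetical parameter names:

```python
def normalized_velocity(points: float, available_dev_days: float,
                        full_capacity_dev_days: float) -> float:
    """Scale completed points to a full-capacity-equivalent figure.

    E.g. 30 points delivered with 40 of 50 dev-days available normalizes
    to 37.5 points, making the sprint comparable to a full one.
    """
    if available_dev_days <= 0:
        raise ValueError("available_dev_days must be positive")
    return points * full_capacity_dev_days / available_dev_days
```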
Review turnaround metrics include weekends and off-hours. Elapsed time between PR creation and review includes nights and weekends, inflating the average turnaround. Calculate business-hours-only turnaround for a more accurate reflection of actual review responsiveness.
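Business-hours turnaround can be computed by clipping each calendar day to a working window. The 09:00-17:00 weekday window below is an assumption; adjust it to your team's schedule:

```python
from datetime import datetime, timedelta

def business_hours_between(start: datetime, end: datetime,
                           day_start: int = 9, day_end: int = 17) -> float:
    """Hours between start and end, counting only weekday working hours."""
    total = 0.0
    cur = start
    while cur < end:
        midnight = cur.replace(hour=0, minute=0, second=0, microsecond=0)
        segment_end = min(end, midnight + timedelta(days=1))
        if cur.weekday() < 5:  # Monday=0 .. Friday=4
            window_lo = cur.replace(hour=day_start, minute=0, second=0, microsecond=0)
            window_hi = cur.replace(hour=day_end, minute=0, second=0, microsecond=0)
            lo, hi = max(cur, window_lo), min(segment_end, window_hi)
            if hi > lo:
                total += (hi - lo).total_seconds() / 3600
        cur = segment_end
    return total

# PR opened Friday 16:00, reviewed Monday 10:00:
# 2 business hours rather than 66 elapsed hours.
turnaround = business_hours_between(datetime(2026, 3, 6, 16),
                                    datetime(2026, 3, 9, 10))
```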