Precision Ship-Learn-Next Iteration Studio
Enterprise-ready skill that automates iterating on what to build next based on user feedback. Built for Claude Code with best practices and real-world patterns.
# Ship-Learn-Next Iteration Studio
Iterative product development framework that structures work into rapid ship-learn-iterate cycles, combining lean startup principles with engineering best practices for continuous improvement.
## When to Use This Skill
Choose Ship-Learn-Next when:
- Launching new features with uncertainty about user reception
- Building MVPs and iterating based on feedback
- Running product experiments with measurable outcomes
- Managing development cycles focused on learning and adaptation
- Transitioning from waterfall to iterative development
Consider alternatives when:
- Building well-defined features with clear specifications
- Working on infrastructure with no user-facing component
- Performing maintenance or bug fixes with known solutions
## Quick Start

```shell
# Activate Ship-Learn-Next
claude skill activate precision-ship-learn-next-iteration-studio

# Plan an iteration cycle
claude "Plan a Ship-Learn-Next cycle for our new onboarding flow"
```

## Example: Iteration Cycle Document
```markdown
## Iteration: Onboarding Flow v2

### SHIP (What we're building this cycle)

**Hypothesis**: Replacing the 5-step onboarding wizard with a single-page
progressive form will increase completion rate from 45% to 65%.

**Scope** (1-week sprint):

- [ ] Single-page form with progressive disclosure
- [ ] Skip option for non-essential fields
- [ ] Welcome email triggered on completion
- [ ] Analytics events for each section interaction

**Out of scope**: A/B test framework, personalization, social login

**Ship criteria**: Deployed to 20% of new signups by Friday EOD

### LEARN (What we're measuring)

| Metric | Baseline | Target | Measurement |
|--------|----------|--------|-------------|
| Completion rate | 45% | 65% | Analytics events |
| Time to complete | 4.2 min | 2.5 min | Start-to-complete timestamp |
| Drop-off point | Step 3 (profile) | N/A | Section interaction events |
| Support tickets | 12/week | <8/week | Zendesk tagged tickets |

**Learning questions**:

1. Does progressive disclosure reduce cognitive overload?
2. Which optional fields do users skip most?
3. Does the skip option reduce data quality unacceptably?

### NEXT (Decision framework)

| If we learn... | Then we... |
|----------------|------------|
| Completion >60% | Roll out to 100%, iterate on details |
| Completion 50-60% | Analyze drop-off data, adjust form order |
| Completion <50% | Revert, try alternative approach (wizard with fewer steps) |
| Skip rate >40% on key fields | Add incentive copy or make progressive |
```
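A decision framework like the NEXT table above can be encoded so the cycle review is mechanical rather than ad hoc. This sketch mirrors the example's thresholds; the function name `next_action` is illustrative, not part of any skill API:

```python
def next_action(completion_rate: float, key_field_skip_rate: float = 0.0) -> str:
    """Map experiment results to a next-cycle decision.

    Thresholds mirror the NEXT table in the example iteration document.
    """
    if key_field_skip_rate > 0.40:
        return "Add incentive copy or make key fields progressive"
    if completion_rate > 0.60:
        return "Roll out to 100%, iterate on details"
    if completion_rate >= 0.50:
        return "Analyze drop-off data, adjust form order"
    return "Revert, try alternative approach (wizard with fewer steps)"
```

Writing the table as code before the experiment runs makes the pre-commitment concrete: the data decides, not a post-hoc debate.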
## Core Concepts

### Iteration Cycle Structure
| Phase | Duration | Activities | Output |
|---|---|---|---|
| Ship | 60% of cycle | Build, test, deploy the smallest useful increment | Working feature |
| Learn | 25% of cycle | Collect data, analyze metrics, gather feedback | Learning report |
| Next | 15% of cycle | Decide direction based on learnings | Next cycle plan |
### Experiment Design
| Component | Description | Example |
|---|---|---|
| Hypothesis | Testable prediction | "Feature X will improve metric Y by Z%" |
| Metric | Measurable outcome | Conversion rate, time-on-task, NPS |
| Sample Size | Users needed for significance | 1,000 per variant (95% confidence) |
| Duration | Time to collect sufficient data | 1-2 weeks |
| Success Criteria | Threshold for positive decision | >10% improvement, p < 0.05 |
| Rollback Plan | How to revert if negative | Feature flag, database migration reverse |
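The sample-size row above can be checked with the standard two-proportion power formula. A minimal sketch, assuming two-sided 95% confidence and 80% power via hardcoded z-scores (libraries like statsmodels compute the same thing):

```python
import math

def sample_size_per_variant(p1: float, p2: float,
                            z_alpha: float = 1.96,    # two-sided 95% confidence
                            z_beta: float = 0.8416    # 80% power
                            ) -> int:
    """Approximate users needed per variant to detect a shift from p1 to p2."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)
```

For the onboarding example (45% to 65%) this gives roughly 100 users per variant; detecting a 5-point lift instead pushes the requirement past 1,500, which is why small expected effects need long experiment durations.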
## Configuration

| Parameter | Description | Default |
|---|---|---|
| `cycle_length` | Iteration cycle duration | 2 weeks |
| `ship_ratio` | Percentage of cycle for building | 60% |
| `min_sample_size` | Minimum users for conclusions | 500 |
| `confidence_level` | Statistical confidence threshold | 95% |
| `rollout_strategy` | Progressive rollout: percentage, ring, flag | percentage |
| `metrics_tool` | Analytics platform | mixpanel |
## Best Practices

- **Ship the smallest thing that tests the hypothesis** — Don't build the perfect feature. Build the smallest version that generates the data you need to validate or invalidate your hypothesis. Polish and completeness come after validation.
- **Define success and failure criteria before shipping** — Write down specific thresholds ("completion rate above 60%") before you see the data. Post-hoc criteria are biased by results and lead to confirmation bias.
- **Measure behavior, not just satisfaction** — User surveys say what people think they want. Analytics show what they actually do. Prioritize behavioral metrics (completion rates, time-on-task, feature adoption) over self-reported satisfaction.
- **Make rollback easy and safe** — Use feature flags so you can disable new features instantly without deployment. Design database changes to be backward-compatible so the old code works with the new schema.
- **Document learnings permanently, not just in sprint retros** — Create a searchable learning repository. Future teams making similar decisions benefit enormously from past experiment results, especially failed experiments that prevent repeated mistakes.
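The feature-flag rollback and percentage rollout ideas above can be sketched with a deterministic hash bucket. This is a minimal illustration, not any particular flag library's API; the function and flag names are hypothetical:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percentage: float) -> bool:
    """Deterministically bucket a user into the first `percentage` percent.

    Hashing feature + user id gives each user a stable per-feature bucket,
    so redeploys never reshuffle who sees the feature, and setting
    percentage to 0 acts as an instant kill switch.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF * 100
    return bucket < percentage
```

Because the bucket is stable, ramping 20% → 50% → 100% only ever adds users, and reverting is a single config change rather than a deployment.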
## Common Issues

**Stakeholders want to ship the full feature instead of an MVP.** Frame the MVP as risk reduction, not compromise. "Shipping the full feature takes 6 weeks with unknown user reception. Shipping the core in 1 week tells us if users want it at all before investing 5 more weeks."

**Metrics show inconclusive results after the experiment period.** Extend the experiment duration, increase the sample size, or simplify the metric. If the effect is too small to measure, it may be too small to matter. Consider whether the feature provides qualitative value even without statistically significant quantitative improvement.

**Teams skip the Learn phase and immediately start the next build cycle.** Build the Learn phase into the sprint structure as non-negotiable. Schedule a "learning review" meeting before sprint planning. Without systematic learning, teams build features on assumptions rather than evidence.
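To judge whether results like these are actually conclusive, the usual quick check is a pooled two-proportion z-test. A stdlib-only sketch (scipy or statsmodels provide equivalent functions):

```python
import math

def two_proportion_p_value(success_a: int, n_a: int,
                           success_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference between two conversion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # erfc(|z|/sqrt(2)) equals 2 * (1 - Phi(|z|)): the two-sided tail mass.
    return math.erfc(abs(z) / math.sqrt(2))
```

If the p-value sits stubbornly above the `confidence_level` threshold after the planned duration, that is the signal to extend, enlarge the sample, or accept that the effect is too small to matter.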