
Precision Ship-learn-next Iteration Studio

Enterprise-ready skill for iterating on what to build next based on user feedback. Built for Claude Code with best practices and real-world patterns.

Skill · Community · productivity · v1.0.0 · MIT

Ship-Learn-Next Iteration Studio

Iterative product development framework that structures work into rapid ship-learn-iterate cycles, combining lean startup principles with engineering best practices for continuous improvement.

When to Use This Skill

Choose Ship-Learn-Next when:

  • Launching new features with uncertainty about user reception
  • Building MVPs and iterating based on feedback
  • Running product experiments with measurable outcomes
  • Managing development cycles focused on learning and adaptation
  • Transitioning from waterfall to iterative development

Consider alternatives when:

  • Building well-defined features with clear specifications
  • Working on infrastructure with no user-facing component
  • Performing maintenance or bug fixes with known solutions

Quick Start

```bash
# Activate Ship-Learn-Next
claude skill activate precision-ship-learn-next-iteration-studio

# Plan an iteration cycle
claude "Plan a Ship-Learn-Next cycle for our new onboarding flow"
```

Example: Iteration Cycle Document

```markdown
## Iteration: Onboarding Flow v2

### SHIP (What we're building this cycle)

**Hypothesis**: Replacing the 5-step onboarding wizard with a single-page
progressive form will increase completion rate from 45% to 65%.

**Scope** (1-week sprint):
- [ ] Single-page form with progressive disclosure
- [ ] Skip option for non-essential fields
- [ ] Welcome email triggered on completion
- [ ] Analytics events for each section interaction

**Out of scope**: A/B test framework, personalization, social login

**Ship criteria**: Deployed to 20% of new signups by Friday EOD

### LEARN (What we're measuring)

| Metric | Baseline | Target | Measurement |
|--------|----------|--------|-------------|
| Completion rate | 45% | 65% | Analytics events |
| Time to complete | 4.2 min | 2.5 min | Start-to-complete timestamp |
| Drop-off point | Step 3 (profile) | N/A | Section interaction events |
| Support tickets | 12/week | <8/week | Zendesk tagged tickets |

**Learning questions**:
1. Does progressive disclosure reduce cognitive overload?
2. Which optional fields do users skip most?
3. Does the skip option reduce data quality unacceptably?

### NEXT (Decision framework)

| If we learn... | Then we... |
|----------------|------------|
| Completion >60% | Roll out to 100%, iterate on details |
| Completion 50-60% | Analyze drop-off data, adjust form order |
| Completion <50% | Revert, try alternative approach (wizard with fewer steps) |
| Skip rate >40% on key fields | Add incentive copy or make progressive |
```
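The NEXT decision framework in the example maps cleanly to a small decision rule. A minimal sketch in Python — the thresholds are copied from the example table, and the function name is illustrative, not part of the skill:

```python
def next_decision(completion_rate: float, key_field_skip_rate: float) -> list[str]:
    """Map experiment results to next-cycle actions, per the NEXT table."""
    decisions = []
    if completion_rate > 0.60:
        decisions.append("Roll out to 100%, iterate on details")
    elif completion_rate >= 0.50:
        decisions.append("Analyze drop-off data, adjust form order")
    else:
        decisions.append("Revert, try alternative approach (wizard with fewer steps)")
    # The skip-rate rule is independent of the completion-rate rule.
    if key_field_skip_rate > 0.40:
        decisions.append("Add incentive copy or make progressive")
    return decisions
```

Encoding the table as code keeps the decision pre-committed: the rule is written before the data arrives, which is the point of the framework.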

Core Concepts

Iteration Cycle Structure

| Phase | Duration | Activities | Output |
|-------|----------|------------|--------|
| Ship | 60% of cycle | Build, test, deploy the smallest useful increment | Working feature |
| Learn | 25% of cycle | Collect data, analyze metrics, gather feedback | Learning report |
| Next | 15% of cycle | Decide direction based on learnings | Next cycle plan |
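The duration ratios above turn into concrete time budgets once a cycle length is fixed. A minimal sketch — the helper is hypothetical, and the 10-day figure in the usage note assumes a two-week cycle of working days:

```python
def phase_days(cycle_days: float, ship: float = 0.60,
               learn: float = 0.25, next_: float = 0.15) -> dict:
    """Split an iteration cycle into Ship/Learn/Next time budgets."""
    assert abs(ship + learn + next_ - 1.0) < 1e-9, "ratios must sum to 1"
    return {"ship": cycle_days * ship,
            "learn": cycle_days * learn,
            "next": cycle_days * next_}
```

For a 10-working-day cycle this yields 6 days building, 2.5 days analyzing, and 1.5 days deciding — a useful sanity check when sprint planning squeezes the Learn phase.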

Experiment Design

| Component | Description | Example |
|-----------|-------------|---------|
| Hypothesis | Testable prediction | "Feature X will improve metric Y by Z%" |
| Metric | Measurable outcome | Conversion rate, time-on-task, NPS |
| Sample Size | Users needed for significance | 1,000 per variant (95% confidence) |
| Duration | Time to collect sufficient data | 1-2 weeks |
| Success Criteria | Threshold for positive decision | >10% improvement, p < 0.05 |
| Rollback Plan | How to revert if negative | Feature flag, reverse database migration |

Configuration

| Parameter | Description | Default |
|-----------|-------------|---------|
| `cycle_length` | Iteration cycle duration | 2 weeks |
| `ship_ratio` | Percentage of cycle for building | 60% |
| `min_sample_size` | Minimum users for conclusions | 500 |
| `confidence_level` | Statistical confidence threshold | 95% |
| `rollout_strategy` | Progressive rollout: percentage, ring, flag | percentage |
| `metrics_tool` | Analytics platform | mixpanel |
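For illustration, these defaults could be mirrored in a config object. This is a hypothetical rendering; the skill does not ship such a class:

```python
from dataclasses import dataclass

@dataclass
class IterationConfig:
    """Hypothetical rendering of the configuration table's defaults."""
    cycle_length: str = "2 weeks"
    ship_ratio: float = 0.60              # fraction of cycle spent building
    min_sample_size: int = 500
    confidence_level: float = 0.95
    rollout_strategy: str = "percentage"  # or "ring", "flag"
    metrics_tool: str = "mixpanel"
```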

Best Practices

  1. Ship the smallest thing that tests the hypothesis — Don't build the perfect feature. Build the smallest version that generates the data you need to validate or invalidate your hypothesis. Polish and completeness come after validation.

  2. Define success and failure criteria before shipping — Write down specific thresholds ("completion rate above 60%") before you see the data. Post-hoc criteria are biased by results and lead to confirmation bias.

  3. Measure behavior, not just satisfaction — User surveys say what people think they want. Analytics show what they actually do. Prioritize behavioral metrics (completion rates, time-on-task, feature adoption) over self-reported satisfaction.

  4. Make rollback easy and safe — Use feature flags so you can disable new features instantly without deployment. Design database changes to be backward-compatible so the old code works with the new schema.

  5. Document learnings permanently, not just in sprint retros — Create a searchable learning repository. Future teams making similar decisions benefit enormously from past experiment results, especially failed experiments that prevent repeated mistakes.
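Practice 4's feature-flag rollback can be sketched with stable hash-based bucketing: the same user always lands in the same bucket (so rollouts are sticky), and setting the percentage to 0 acts as an instant kill switch. A minimal sketch assuming no particular flag service — names are illustrative:

```python
import hashlib

def flag_enabled(user_id: str, flag: str, rollout_pct: int) -> bool:
    """Stable percentage rollout: hash user+flag into a 0-99 bucket
    and enable the flag for buckets below rollout_pct."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_pct
```

Hashing the flag name together with the user ID keeps bucket assignments independent across experiments, so one rollout does not systematically overlap another.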

Common Issues

Stakeholders want to ship the full feature instead of an MVP. Frame the MVP as risk reduction, not compromise. "Shipping the full feature takes 6 weeks with unknown user reception. Shipping the core in 1 week tells us if users want it at all before investing 5 more weeks."

Metrics show inconclusive results after the experiment period. Extend the experiment duration, increase the sample size, or simplify the metric. If the effect is too small to measure, it may be too small to matter. Consider whether the feature provides qualitative value even without statistically significant quantitative improvement.
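One standard way to judge whether results are conclusive is a pooled two-proportion z-test. A minimal stdlib-only sketch (normal approximation; function name illustrative):

```python
import math

def lift_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in conversion rates
    (pooled two-proportion z-test, normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided tail probability via the normal CDF (erf).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```

With 450/1000 vs 650/1000 conversions the p-value is vanishingly small (conclusive); with 450/1000 vs 470/1000 it is roughly 0.37 — exactly the inconclusive case where extending the experiment or enlarging the sample is warranted.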

Teams skip the Learn phase and immediately start the next build cycle. Build the Learn phase into the sprint structure as non-negotiable. Schedule a "learning review" meeting before sprint planning. Without systematic learning, teams build features on assumptions rather than evidence.
