
Precision Test Case Design Toolkit

A skill for designing comprehensive test cases using combinatorial methodology. Built for Claude Code with best practices and real-world patterns.


Test Case Design Toolkit

A software testing skill for designing comprehensive test cases using structured methodologies including equivalence partitioning, boundary value analysis, decision tables, and state transition testing.

When to Use

Choose Test Case Design when:

  • Designing test suites for new features with systematic coverage
  • Identifying missing test scenarios in existing test suites
  • Creating test plans for complex business logic with many conditions
  • Training teams on structured testing methodologies

Consider alternatives when:

  • Writing automated test code — use testing framework documentation
  • Performing exploratory testing — use session-based test management
  • Load testing — use performance testing tools

Quick Start

```python
from dataclasses import dataclass
from typing import Any, Dict, List


@dataclass
class TestCase:
    id: str
    category: str
    description: str
    inputs: dict
    expected_output: Any
    priority: str  # critical, high, medium, low


class TestCaseDesigner:
    def __init__(self, feature_name: str):
        self.feature = feature_name
        self.test_cases: List[TestCase] = []
        self.case_counter = 0

    def _next_id(self) -> str:
        self.case_counter += 1
        return f"TC-{self.feature[:3].upper()}-{self.case_counter:03d}"

    def equivalence_partitioning(self, param_name, valid_classes, invalid_classes):
        """Generate test cases from equivalence classes."""
        for cls_name, representative in valid_classes.items():
            self.test_cases.append(TestCase(
                id=self._next_id(),
                category="equivalence-valid",
                description=f"{param_name}: valid class '{cls_name}'",
                inputs={param_name: representative},
                expected_output="success",
                priority="high",
            ))
        for cls_name, representative in invalid_classes.items():
            self.test_cases.append(TestCase(
                id=self._next_id(),
                category="equivalence-invalid",
                description=f"{param_name}: invalid class '{cls_name}'",
                inputs={param_name: representative},
                expected_output="error",
                priority="high",
            ))

    def boundary_value_analysis(self, param_name, min_val, max_val):
        """Generate boundary test cases."""
        boundaries = [
            (min_val - 1, "below minimum", "error", "critical"),
            (min_val, "at minimum", "success", "critical"),
            (min_val + 1, "just above minimum", "success", "medium"),
            ((min_val + max_val) // 2, "nominal/middle", "success", "low"),
            (max_val - 1, "just below maximum", "success", "medium"),
            (max_val, "at maximum", "success", "critical"),
            (max_val + 1, "above maximum", "error", "critical"),
        ]
        for value, desc, expected, priority in boundaries:
            self.test_cases.append(TestCase(
                id=self._next_id(),
                category="boundary",
                description=f"{param_name}: {desc} ({value})",
                inputs={param_name: value},
                expected_output=expected,
                priority=priority,
            ))

    def decision_table(self, conditions, actions, rules):
        """Generate test cases from decision table rules."""
        for i, rule in enumerate(rules):
            inputs = {cond: rule['conditions'][j] for j, cond in enumerate(conditions)}
            expected = {act: rule['actions'][j] for j, act in enumerate(actions)}
            self.test_cases.append(TestCase(
                id=self._next_id(),
                category="decision-table",
                description=f"Rule {i + 1}: " + ", ".join(f"{k}={v}" for k, v in inputs.items()),
                inputs=inputs,
                expected_output=expected,
                priority="high",
            ))

    def generate_report(self):
        by_priority: Dict[str, list] = {}
        by_category: Dict[str, list] = {}
        for tc in self.test_cases:
            by_priority.setdefault(tc.priority, []).append(tc)
            by_category.setdefault(tc.category, []).append(tc)
        return {
            "total": len(self.test_cases),
            "by_priority": {k: len(v) for k, v in by_priority.items()},
            "by_category": {k: len(v) for k, v in by_category.items()},
            "test_cases": [vars(tc) for tc in self.test_cases],
        }
```
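Generated cases pair inputs with expected outcomes, so they map directly onto a test loop. A minimal standalone sketch; `validate_age` and its 18–120 range are hypothetical, invented here to show the shape of the loop:

```python
# Hypothetical system under test: accepts integer ages 18..120 inclusive.
def validate_age(age) -> str:
    if isinstance(age, int) and 18 <= age <= 120:
        return "success"
    return "error"

# Boundary-value cases in the same shape the designer produces.
cases = [
    {"inputs": {"age": 17},  "expected_output": "error"},    # below minimum
    {"inputs": {"age": 18},  "expected_output": "success"},  # at minimum
    {"inputs": {"age": 120}, "expected_output": "success"},  # at maximum
    {"inputs": {"age": 121}, "expected_output": "error"},    # above maximum
]

for case in cases:
    actual = validate_age(case["inputs"]["age"])
    assert actual == case["expected_output"], case
print("all boundary cases passed")
```

The same loop works for equivalence and decision-table cases, since every `TestCase` carries its own `inputs` and `expected_output`.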

Core Concepts

Test Design Techniques

| Technique | Best For | Coverage Goal |
|---|---|---|
| Equivalence Partitioning | Input domains with distinct classes | One value per class |
| Boundary Value Analysis | Numeric ranges, limits | Edge values |
| Decision Table | Complex boolean logic | All rule combinations |
| State Transition | Stateful systems, workflows | All valid transitions |
| Pairwise/Combinatorial | Multiple parameters | All value pairs |
| Error Guessing | Common failure patterns | Known bug patterns |
| Use Case Testing | User workflows | End-to-end scenarios |

State Transition Testing

```python
class StateTransitionTester:
    def __init__(self):
        self.states = set()
        self.transitions = []

    def add_transition(self, from_state, event, to_state, action=None, guard=None):
        self.states.add(from_state)
        self.states.add(to_state)
        self.transitions.append({
            'from': from_state,
            'event': event,
            'to': to_state,
            'action': action,
            'guard': guard,
        })

    def generate_positive_tests(self):
        """Generate tests for all valid transitions."""
        tests = []
        for t in self.transitions:
            tests.append({
                'description': f"{t['from']} --[{t['event']}]--> {t['to']}",
                'initial_state': t['from'],
                'event': t['event'],
                'expected_state': t['to'],
                'expected_action': t['action'],
            })
        return tests

    def generate_negative_tests(self):
        """Generate tests for invalid transitions."""
        tests = []
        valid = {(t['from'], t['event']) for t in self.transitions}
        events = {t['event'] for t in self.transitions}
        for state in self.states:
            for event in events:
                if (state, event) not in valid:
                    tests.append({
                        'description': f"{state} --[{event}]--> should fail",
                        'initial_state': state,
                        'event': event,
                        'expected_result': 'error/ignored',
                    })
        return tests


# Usage: order state machine
tester = StateTransitionTester()
tester.add_transition('pending', 'pay', 'paid', action='process_payment')
tester.add_transition('paid', 'ship', 'shipped', action='create_shipment')
tester.add_transition('shipped', 'deliver', 'delivered')
tester.add_transition('pending', 'cancel', 'cancelled', action='refund')
tester.add_transition('paid', 'cancel', 'cancelled', action='refund')
```

Configuration

| Option | Description | Default |
|---|---|---|
| techniques | Test design techniques to apply | ["equivalence","boundary"] |
| coverage_goal | Target coverage: basic, thorough, exhaustive | "thorough" |
| priority_levels | Priority classifications | ["critical","high","medium","low"] |
| include_negative | Generate negative/error test cases | true |
| output_format | Report format: json, csv, markdown | "json" |
| max_combinations | Limit for combinatorial tests | 100 |
| feature_name | Feature being tested | (required) |
| requirements_ref | Link to requirements document | "" |
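A configuration built from the options above might look like this; the feature name and reference id are illustrative, and only the keys come from the table:

```python
# Example configuration; every key corresponds to an option in the table above.
config = {
    "feature_name": "checkout",                    # required, no default
    "techniques": ["equivalence", "boundary"],
    "coverage_goal": "thorough",                   # basic | thorough | exhaustive
    "priority_levels": ["critical", "high", "medium", "low"],
    "include_negative": True,
    "output_format": "json",                       # json | csv | markdown
    "max_combinations": 100,
    "requirements_ref": "",                        # link to requirements document
}
print(sorted(config))
```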

Best Practices

  1. Start with equivalence partitioning to identify the major input classes, then apply boundary value analysis to the boundaries between classes — this systematic approach gives the best coverage with the fewest test cases
  2. Use decision tables for complex business rules with multiple conditions that interact — decision tables make it visually clear which combinations are covered and which are missing
  3. Prioritize test cases by risk — not all test cases are equally important; critical business paths and security-sensitive functions should have the highest priority and run in every test cycle
  4. Include both positive and negative tests — testing that valid inputs produce correct results is important, but testing that invalid inputs are properly rejected prevents security vulnerabilities and data corruption
  5. Review test cases with developers and stakeholders before implementation to catch misunderstood requirements early — test design documents serve as executable specifications
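Practice 2 can be illustrated with a small standalone decision table. The free-shipping policy below is invented for illustration: two boolean conditions give four rules, and enumerating them guarantees no combination is missed.

```python
from itertools import product

# Made-up policy: free shipping iff the customer is a member
# AND the order total is at least 50.
conditions = {
    "is_member": [True, False],
    "total_ge_50": [True, False],
}

def expected_action(is_member: bool, total_ge_50: bool) -> str:
    return "free_shipping" if (is_member and total_ge_50) else "standard_rate"

# Enumerate every rule (column) of the decision table.
rules = []
for is_member, total_ge_50 in product(*conditions.values()):
    rules.append({
        "conditions": {"is_member": is_member, "total_ge_50": total_ge_50},
        "action": expected_action(is_member, total_ge_50),
    })

for rule in rules:
    print(rule)
# 2 conditions x 2 values each = 4 rules, so 4 cases cover every combination.
```

With more conditions the full table grows as 2^n, which is where the pairwise techniques discussed below under Common Issues become useful.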

Common Issues

Combinatorial explosion with many parameters: Testing all combinations of 5 parameters with 5 values each produces 3,125 test cases. Use pairwise testing to cover all value pairs with approximately 25-30 test cases, which catches the majority of interaction bugs with far fewer tests.
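The pairwise reduction can be sketched with a greedy generator: repeatedly pick the full combination that covers the most not-yet-covered value pairs. This is a simple illustration, not a production tool (dedicated tools such as PICT use better heuristics), and the parameter names are made up:

```python
from itertools import combinations, product

def pairwise_suite(params: dict) -> list:
    """Greedy all-pairs sketch: keep adding the combination that covers
    the most uncovered (param, value, param, value) pairs."""
    names = list(params)
    uncovered = {
        (a, va, b, vb)
        for a, b in combinations(names, 2)
        for va in params[a]
        for vb in params[b]
    }
    suite = []
    while uncovered:
        best, best_covered = None, set()
        # Exhaustive inner search is fine at this scale (5**5 candidates).
        for combo in product(*(params[n] for n in names)):
            row = dict(zip(names, combo))
            covered = {
                (a, row[a], b, row[b]) for a, b in combinations(names, 2)
            } & uncovered
            if len(covered) > len(best_covered):
                best, best_covered = row, covered
        suite.append(best)
        uncovered -= best_covered
    return suite

# Five illustrative parameters with five values each.
params = {p: list(range(5)) for p in ["browser", "os", "locale", "network", "role"]}
suite = pairwise_suite(params)
print(len(suite))  # far fewer rows than the 5**5 = 3125 exhaustive combinations
```

Each row covers 10 of the 250 required pairs, so the suite cannot be smaller than 25 rows; the greedy result typically lands close to that bound.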

Test cases not covering real-world scenarios: Systematic techniques generate cases from specifications but miss scenarios users actually encounter. Supplement specification-based tests with exploratory testing sessions and real usage data analysis to identify high-traffic paths.

Maintaining test cases as requirements change: Test cases based on old requirements become invalid when features change. Link test cases to specific requirements, mark them for review when requirements update, and treat test case maintenance as part of the feature development process.
