
Skill · Cliptics · ai research · v1.0.0 · MIT

QA Test Elite

Comprehensive QA test planning and execution toolkit for generating test plans, manual test cases, regression suites, design validation workflows, and structured bug reports.

When to Use

Use this toolkit when:

  • Creating test plans for new features or releases
  • Generating manual test cases from user stories or requirements
  • Building regression test suites for critical flows
  • Validating implementations against design specs (Figma, mockups)
  • Documenting bugs with reproducible steps and evidence

Use automated testing tools instead when:

  • Writing unit tests (use your testing framework directly)
  • Running CI/CD test suites (use test runners)
  • Performance testing (use dedicated load testing tools)

Quick Start

Generate a Test Plan

```markdown
# Test Plan: {feature_name}

## Scope
- Feature: {feature_description}
- Affected areas: {affected_components}
- Out of scope: {excluded_areas}

## Test Strategy

| Type | Coverage | Priority |
|------|----------|----------|
| Functional | Core user flows | P0 |
| Edge cases | Boundary conditions | P1 |
| Integration | API + UI interaction | P1 |
| Regression | Existing functionality | P0 |
| Accessibility | WCAG 2.1 AA | P2 |

## Entry Criteria
- [ ] Feature code merged to staging
- [ ] API endpoints deployed
- [ ] Test data prepared

## Exit Criteria
- [ ] All P0 test cases pass
- [ ] No critical or major bugs open
- [ ] Regression suite passes
- [ ] Accessibility audit complete
```

Generate Test Cases

```markdown
## Test Case: TC-{id}

**Title**: {descriptive_title}
**Priority**: P0 | P1 | P2
**Type**: Functional | Edge Case | Regression | Integration

### Preconditions
- {precondition_1}
- {precondition_2}

### Steps
1. {action_1} - Expected: {expected_result_1}
2. {action_2} - Expected: {expected_result_2}
3. {action_3} - Expected: {expected_result_3}

### Test Data

| Field | Value |
|-------|-------|
| {field_1} | {value_1} |
| {field_2} | {value_2} |

### Pass/Fail Criteria
- Pass: {pass_condition}
- Fail: {fail_condition}
```

Bug Report Template

```markdown
## Bug Report: BUG-{id}

**Title**: {concise_description}
**Severity**: Critical | Major | Minor | Cosmetic
**Priority**: P0 | P1 | P2 | P3
**Environment**: {browser} / {OS} / {device}

### Description
{what_happened_vs_expected}

### Steps to Reproduce
1. {step_1}
2. {step_2}
3. {step_3}

### Expected Result
{what_should_happen}

### Actual Result
{what_actually_happened}

### Evidence
- Screenshot: {link}
- Video: {link}
- Console logs: {paste}

### Additional Context
- First seen: {date}
- Regression: Yes/No
- Frequency: Always / Intermittent / Once
```

Core Concepts

Test Coverage Matrix

| Feature Area | Happy Path | Edge Cases | Error Handling | Integration | Accessibility |
|--------------|------------|------------|----------------|-------------|---------------|
| Login | TC-001 | TC-002 | TC-003 | TC-004 | TC-005 |
| Registration | TC-006 | TC-007 | TC-008 | TC-009 | TC-010 |
| Checkout | TC-011 | TC-012 | TC-013 | TC-014 | TC-015 |
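A coverage matrix like the one above can also be kept as data and scanned for gaps. Below is a minimal sketch; the `COVERAGE` dict, its test case IDs, and the deliberately missing cell are hypothetical:

```python
# Hypothetical coverage matrix: feature area -> {test type: test case ID or None}.
COVERAGE = {
    "Login":        {"Happy Path": "TC-001", "Edge Cases": "TC-002", "Error Handling": "TC-003"},
    "Registration": {"Happy Path": "TC-006", "Edge Cases": None,     "Error Handling": "TC-008"},
}

def coverage_gaps(matrix):
    """Return (feature_area, test_type) pairs with no test case assigned."""
    return [(area, test_type)
            for area, cells in matrix.items()
            for test_type, case_id in cells.items()
            if case_id is None]

print(coverage_gaps(COVERAGE))  # -> [('Registration', 'Edge Cases')]
```

Keeping the matrix as data makes the "which cells are empty" question answerable in one pass instead of by eyeballing a table.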

Risk-Based Test Prioritization

| Risk Level | Test Priority | Execution | Examples |
|------------|---------------|-----------|----------|
| High | P0 — Always | Every release | Payment, auth, data loss |
| Medium | P1 — Usually | Major releases | Search, filtering, sorting |
| Low | P2 — Sometimes | Quarterly | UI polish, tooltips |
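The prioritization table maps directly to a lookup. A minimal sketch (the `RISK_POLICY` dict and `schedule_for` helper are illustrative names, not part of any library):

```python
# Risk level -> (test priority, execution cadence), mirroring the table above.
RISK_POLICY = {
    "high":   ("P0", "every release"),
    "medium": ("P1", "major releases"),
    "low":    ("P2", "quarterly"),
}

def schedule_for(risk_level):
    """Look up the test priority and run cadence for a feature's risk level."""
    try:
        return RISK_POLICY[risk_level.lower()]
    except KeyError:
        raise ValueError(f"unknown risk level: {risk_level!r}")

print(schedule_for("High"))  # -> ('P0', 'every release')
```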

Regression Suite Structure

Regression Suite
  ├── Smoke Tests (P0, 15 min)
  │   └── Login, core navigation, critical CRUD
  ├── Core Regression (P0+P1, 2 hours)
  │   └── All major user flows end-to-end
  └── Full Regression (P0+P1+P2, 8 hours)
      └── Complete feature coverage
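One way to realize the tiers above is to filter test cases by priority at run time. This is a sketch under the assumption that each test case carries a `priority` field; the `TIERS` definitions and example case IDs are hypothetical:

```python
# Hypothetical tier definitions matching the structure above: each tier lists
# the priorities it runs and a rough time budget in minutes.
TIERS = {
    "smoke": {"priorities": {"P0"},               "budget_min": 15},
    "core":  {"priorities": {"P0", "P1"},         "budget_min": 120},
    "full":  {"priorities": {"P0", "P1", "P2"},   "budget_min": 480},
}

def select_cases(cases, tier):
    """Filter test cases (dicts with a 'priority' key) down to one tier."""
    allowed = TIERS[tier]["priorities"]
    return [case for case in cases if case["priority"] in allowed]

cases = [{"id": "TC-001", "priority": "P0"}, {"id": "TC-012", "priority": "P2"}]
print([c["id"] for c in select_cases(cases, "smoke")])  # -> ['TC-001']
```

In a real runner the same idea is usually expressed as tags or markers (e.g. running only tests marked P0 on every deploy).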

Configuration

| Parameter | Description |
|-----------|-------------|
| feature_scope | What feature to test |
| test_types | Which test types to include |
| priority_filter | P0 only, P0+P1, or all |
| environment | Staging, production, local |
| test_data_source | Where test data comes from |
| report_format | Markdown, JIRA, TestRail |
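A minimal sketch of validating these parameters before generating anything; the allowed-value sets are taken from the descriptions above, and the `validate_config` helper is a hypothetical name:

```python
# Allowed values for the constrained parameters listed in the table above.
ALLOWED = {
    "priority_filter": {"P0", "P0+P1", "all"},
    "environment":     {"staging", "production", "local"},
    "report_format":   {"markdown", "jira", "testrail"},
}

def validate_config(config):
    """Return (key, value) pairs whose value is outside the allowed set."""
    return [(key, config[key])
            for key, allowed in ALLOWED.items()
            if key in config and config[key] not in allowed]

cfg = {"priority_filter": "P0+P1", "environment": "staging", "report_format": "jira"}
print(validate_config(cfg))  # -> []
```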

Best Practices

  1. Write test cases from user stories — each acceptance criterion becomes at least one test case
  2. Prioritize ruthlessly — P0 tests should take < 30 minutes and cover critical paths
  3. Include negative tests — what happens when users do the wrong thing?
  4. Test data independence — each test should create its own data, not depend on other tests
  5. Document environment requirements — browser version, screen size, auth state
  6. Update regression suite after every bug fix — the bug's test case joins the regression suite

Common Issues

Test cases too vague to execute: Add specific test data values, exact URLs, and concrete expected results. Another tester should be able to execute without asking questions.

Regression suite takes too long: Split into smoke (15 min), core (2 hr), and full (8 hr) tiers. Run smoke on every deploy, core on releases, full quarterly.

Bug reports bounced by developers: Include exact reproduction steps, environment details, and console logs. Attach screenshots or screen recordings. Note whether it's a regression (worked before).
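A completeness check before filing catches most bounced reports. This is a sketch; the field names follow the bug report template above, and the `missing_fields` helper is hypothetical:

```python
# Required fields from the bug report template; a report missing any of these
# is likely to be bounced back by developers.
REQUIRED = ["title", "environment", "steps_to_reproduce", "expected", "actual"]

def missing_fields(report):
    """Return required fields that are absent or empty in a draft report."""
    return [field for field in REQUIRED if not report.get(field)]

draft = {"title": "Checkout button unresponsive",
         "expected": "Order is placed",
         "actual": "Nothing happens",
         "steps_to_reproduce": []}
print(missing_fields(draft))  # -> ['environment', 'steps_to_reproduce']
```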
