AgentCliptics · development tools · v1.0.0 · MIT

Expert QA Bot

A senior QA specialist agent with comprehensive expertise in quality assurance strategies, test methodologies, and quality metrics, focusing on preventing defects, ensuring user satisfaction, and maintaining high quality standards across the development lifecycle.

When to Use This Agent

Choose Expert QA Bot when:

  • Developing test plans and QA strategies for new features or releases
  • Establishing quality metrics and KPIs for the development team
  • Designing test case matrices covering functional, non-functional, and edge cases
  • Implementing quality gates in CI/CD pipelines
  • Conducting risk-based testing prioritization for releases

Consider alternatives when:

  • Writing automated test code (use a test engineering agent)
  • Debugging specific failures (use a debugging agent)
  • Running performance or load tests (use a performance engineering agent)

Quick Start

```yaml
# .claude/agents/expert-qa-bot.yml
name: Expert QA Bot
description: Comprehensive QA strategy and quality metrics
model: claude-sonnet
tools:
  - Read
  - Write
  - Glob
  - Grep
  - Bash
```

Example invocation:

claude "Create a comprehensive test plan for the new user authentication system covering functional, security, performance, and usability testing"

Core Concepts

Test Strategy Matrix

| Level | What | When | Who |
|-------|------|------|-----|
| Unit | Functions, methods | Every commit | Developer |
| Integration | Module interactions | Every PR | Developer + QA |
| API | Endpoint contracts | Every deploy | Automated |
| E2E | User workflows | Pre-release | QA + Automated |
| Exploratory | Unknown unknowns | Sprint cadence | QA |
| Performance | Load, stress, soak | Pre-release | QA + SRE |
| Security | Vulnerabilities | Quarterly | Security + QA |

Test Plan Template

```markdown
## Test Plan: User Authentication System

### Scope
- Login (email/password, SSO, magic link)
- Registration (form validation, email verification)
- Password reset (request, token validation, update)
- Session management (token refresh, timeout, concurrent)

### Risk Assessment
| Feature | Business Impact | Complexity | Test Priority |
|---------|----------------|------------|---------------|
| Login | Critical | Medium | P0 |
| Registration | High | Low | P1 |
| Password Reset | High | Medium | P1 |
| Session Mgmt | Critical | High | P0 |

### Test Cases: Login
| ID | Scenario | Steps | Expected | Priority |
|----|----------|-------|----------|----------|
| L001 | Valid credentials | Enter email+pass, click login | Dashboard shown, token stored | P0 |
| L002 | Invalid password | Enter email+wrong pass | Error shown, no token | P0 |
| L003 | Non-existent email | Enter unknown email | Generic error (no email hint) | P0 |
| L004 | SQL injection attempt | Enter `' OR 1=1--` in email | Input rejected, no DB error | P0 |
| L005 | Brute force protection | Attempt 10 failed logins | Account locked for 15 min | P0 |
| L006 | Concurrent sessions | Login from 2 browsers | Both active (or policy applied) | P1 |
| L007 | Expired session | Wait past timeout | Redirect to login | P1 |
| L008 | Remember me | Check remember, close browser | Session persists on return | P2 |

### Exit Criteria
- All P0 test cases pass
- 95% of P1 test cases pass
- No critical or high severity bugs open
- Performance: login < 500ms P95
- Security: no OWASP Top 10 vulnerabilities
```
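Test-plan rows translate directly into executable checks. The sketch below turns cases L001-L004 into plain assertions against a toy `login` function; the function, its user store, and its error strings are illustrative stand-ins, not the real authentication API.

```python
# Sketch: turning test-plan rows into executable checks.
# `login` and USERS are hypothetical stand-ins for the real auth call.
import re

USERS = {"alice@example.com": "s3cret!"}

def login(email: str, password: str) -> dict:
    """Toy login used only to illustrate the test cases."""
    # L004: reject malformed input before it reaches any backend query
    if not re.fullmatch(r"[^@\s']+@[^@\s']+\.[^@\s']+", email):
        return {"ok": False, "error": "Invalid input"}
    # L002/L003: same generic error for wrong password and unknown email
    if USERS.get(email) != password:
        return {"ok": False, "error": "Invalid credentials"}
    return {"ok": True, "token": "fake-token"}

# L001: valid credentials yield a token
assert login("alice@example.com", "s3cret!")["ok"]
# L002: invalid password -> error, no token
assert not login("alice@example.com", "wrong")["ok"]
# L003: unknown email -> same generic error (no email-existence hint)
assert login("bob@example.com", "x")["error"] == "Invalid credentials"
# L004: injection attempt rejected at input validation
assert login("' OR 1=1--", "x")["error"] == "Invalid input"
```

Note that L002 and L003 intentionally return the same message, so an attacker cannot probe which emails are registered.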

Quality Metrics Dashboard

```markdown
## Quality KPIs (Sprint 24)

### Defect Metrics
- Bugs Found: 23 (18 in testing, 5 in production)
- Escape Rate: 21.7% (target: <15%)
- Mean Time to Fix: 1.8 days (target: <2 days)
- Reopened Rate: 8.7% (target: <10%)

### Coverage Metrics
- Code Coverage: 78% (target: 80%)
- Requirement Coverage: 92% (target: 95%)
- API Endpoint Coverage: 100%

### Process Metrics
- Test Automation Rate: 65% (target: 70%)
- Flaky Test Rate: 4.2% (target: <5%)
- CI Pipeline Pass Rate: 91% (target: >95%)
```
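The ratio metrics on the dashboard are simple derivations from raw counts. A minimal sketch, assuming the sprint's 18 testing-found and 5 production-found bugs, and an assumed reopened count of 2 for illustration:

```python
# Sketch: deriving the dashboard ratios from raw defect counts.
found_in_testing, found_in_production = 18, 5
total_bugs = found_in_testing + found_in_production   # 23

# Escape rate: share of bugs that reached users before being found
escape_rate = found_in_production / total_bugs

# Reopened rate: fixes that bounced back (count of 2 is assumed here)
reopened = 2
reopened_rate = reopened / total_bugs

print(f"Escape rate: {escape_rate:.1%}")    # 21.7%
print(f"Reopened rate: {reopened_rate:.1%}")  # 8.7%
```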

Configuration

| Parameter | Description | Default |
|-----------|-------------|---------|
| methodology | QA methodology (risk-based, exploratory, scripted) | risk-based |
| automation_target | Target test automation percentage | 70 |
| defect_tool | Bug tracking tool (jira, linear, github) | Auto-detect |
| risk_tolerance | Release risk tolerance (low, medium, high) | low |
| reporting_cadence | Quality report frequency (daily, sprint, release) | sprint |
| compliance | Regulatory standards to track (soc2, hipaa, gdpr) | None |

Best Practices

  1. Prioritize test cases by risk, not by feature completeness. Allocate testing effort proportional to the business impact and technical complexity of each feature. A critical payment flow deserves 10x the test investment of a profile settings page. Use a risk matrix (impact x probability) to score each feature and allocate testing resources accordingly. Full coverage is a myth — smart coverage is the goal.
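The risk matrix above can be sketched as a score function: multiply an impact weight by a probability weight, then rank features by the product. The feature names and weights below are illustrative, not from a real backlog:

```python
# Sketch of a risk matrix (impact x probability) for allocating test effort.
IMPACT = {"low": 1, "medium": 2, "high": 3, "critical": 4}
PROBABILITY = {"low": 1, "medium": 2, "high": 3}

def risk_score(impact: str, probability: str) -> int:
    """Higher score = more test investment deserved."""
    return IMPACT[impact] * PROBABILITY[probability]

# (feature, impact, probability of defects) -- illustrative values
features = [
    ("payment flow", "critical", "medium"),
    ("profile settings", "low", "low"),
    ("password reset", "high", "medium"),
]

# Rank by risk; spend testing effort from the top down.
ranked = sorted(features, key=lambda f: risk_score(f[1], f[2]), reverse=True)
for name, imp, prob in ranked:
    print(f"{name}: {risk_score(imp, prob)}")
```

Here the payment flow (score 8) outranks password reset (6) and dwarfs profile settings (1), matching the "10x the test investment" intuition.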

  2. Implement quality gates that block deployment, not just report. A quality gate that sends an email when coverage drops below 80% is ignored. A gate that blocks the deployment pipeline forces action. Configure gates for: all P0 tests passing, code coverage above threshold, no critical security findings, and performance benchmarks met. Gates should be strict for production and advisory for staging.
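A blocking gate is just a check that fails the pipeline instead of emailing about it. A minimal sketch; the metric names, thresholds, and input dict are assumptions about how your CI surfaces these numbers:

```python
# Sketch: a hard quality gate -- return the list of failures, and have CI
# exit non-zero when any exist, blocking the deploy.
def check_gates(metrics: dict) -> list:
    failures = []
    if metrics["p0_failures"] > 0:
        failures.append("P0 tests failing")
    if metrics["coverage"] < 0.80:
        failures.append(f"coverage {metrics['coverage']:.0%} below 80%")
    if metrics["critical_security_findings"] > 0:
        failures.append("critical security findings open")
    if metrics["login_p95_ms"] > 500:
        failures.append("login P95 over 500ms")
    return failures

# Illustrative snapshot: coverage at 78% trips the gate.
snapshot = {"p0_failures": 0, "coverage": 0.78,
            "critical_security_findings": 0, "login_p95_ms": 430}
failures = check_gates(snapshot)
for f in failures:
    print(f"GATE FAILED: {f}")
# In production CI: sys.exit(1) if failures else 0 -- block, don't just report.
# For staging, log the failures but let the pipeline proceed (advisory mode).
```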

  3. Invest in exploratory testing to find bugs that automation cannot. Automated tests verify expected behavior. Exploratory testing discovers unexpected behavior. Dedicate 20% of QA time to unscripted exploration where testers follow their intuition, try unusual input combinations, and break the application in creative ways. Document discoveries as new automated test cases to prevent regression.

  4. Track the defect escape rate as the primary quality metric. The escape rate (bugs found in production / total bugs found) measures the effectiveness of your entire QA process. A decreasing escape rate means testing is catching more bugs before they reach users. Target an escape rate below 10%. When bugs escape, conduct a brief analysis: what test would have caught this, and why was it not in the test suite?

  5. Test negative scenarios with more rigor than positive ones. Valid inputs and expected workflows are easy to test and usually work. The bugs live in unexpected inputs, edge cases, error handling, and concurrent access patterns. For every positive test case, write at least two negative cases: invalid input, missing data, timeout, concurrent modification, and boundary values. This is where production failures originate.
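One mechanical way to generate negative cases is boundary-value analysis: for any field with a valid range, the interesting inputs cluster at the edges. A small sketch, assuming a hypothetical quantity field valid for 1-99:

```python
# Sketch: deriving boundary and negative cases from a field's valid range.
def boundary_cases(lo: int, hi: int) -> list:
    """Values just outside, at, and just inside the edges of [lo, hi]."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# A quantity field valid for 1..99 (hypothetical): test the edges, not the middle.
cases = boundary_cases(1, 99)
valid = [v for v in cases if 1 <= v <= 99]
invalid = [v for v in cases if not (1 <= v <= 99)]
print("negative cases:", invalid)  # 0 and 100 -- where off-by-one bugs live
```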

Common Issues

Test automation rate stagnates below targets. Teams write automated tests for new features but never automate existing manual test cases. Dedicate a fixed percentage of sprint capacity (10-15%) to automation debt reduction. Prioritize automating the tests that are run most frequently and provide the most value. Not every manual test needs automation — some are one-time or exploratory by nature.

Quality metrics look healthy but users still report bugs. Metrics like code coverage and test pass rates can be gamed or misleading. High coverage with shallow assertions does not catch bugs. All tests passing does not mean all scenarios are tested. Supplement internal metrics with user-facing quality indicators: support ticket volume, crash rates, user satisfaction scores, and feature adoption rates. These measure actual quality as experienced by users.

Release testing takes too long and delays deployments. A multi-day regression testing cycle creates bottlenecks that pressure teams to skip testing or batch large releases (which increases risk). Shorten the release testing cycle by: automating the regression suite, running tests in parallel, using risk-based selection to test only what changed, and implementing canary releases where small traffic percentages validate the release before full rollout.
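Risk-based selection of "only what changed" can be as simple as a mapping from changed paths to test suites. A sketch; the directory layout and suite names are illustrative:

```python
# Sketch: risk-based test selection -- run only the suites a change touches.
# Path prefixes and suite names are hypothetical.
SUITE_MAP = {
    "src/auth/": ["auth_unit", "auth_e2e", "security_smoke"],
    "src/billing/": ["billing_unit", "billing_e2e"],
    "docs/": [],  # docs-only changes need no test run
}

def select_suites(changed_files: list) -> set:
    suites = set()
    for path in changed_files:
        for prefix, mapped in SUITE_MAP.items():
            if path.startswith(prefix):
                suites.update(mapped)
    return suites

# In CI, changed_files would come from `git diff --name-only main...HEAD`.
print(sorted(select_suites(["src/auth/login.py", "docs/readme.md"])))
```

Unmapped paths select nothing here; a real implementation should fall back to the full regression suite for unknown paths rather than silently skipping them.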
