
Pro Bug Finder Toolkit

Enterprise-ready skill that automates identifying and locating bugs in codebases. Built for Claude Code with best practices and real-world patterns.

Skill · Community · debugging · v1.0.0 · MIT

Bug Finder Toolkit

Systematic bug detection and diagnosis framework covering debugging strategies, error reproduction, root cause identification, and automated bug detection patterns for finding and fixing software defects efficiently.

When to Use This Skill

Choose Bug Finder when:

  • Investigating reported bugs that are hard to reproduce
  • Setting up systematic debugging workflows for the team
  • Finding hidden bugs before they reach production
  • Diagnosing intermittent or non-deterministic failures
  • Building automated bug detection into CI/CD pipelines

Consider alternatives when:

  • Bug is already identified and needs fixing — just fix it
  • Need performance profiling — use profiling tools
  • Need security vulnerability scanning — use SAST/DAST tools

Quick Start

```bash
# Activate bug finder
claude skill activate pro-bug-finder-toolkit

# Investigate a bug
claude "Help me find the root cause of the intermittent login failure"

# Systematic bug hunt
claude "Scan the payment module for potential bugs and edge cases"
```

Example: Bug Investigation Workflow

```typescript
// Systematic bug investigation steps
interface BugInvestigation {
  report: {
    symptom: string;          // "Users see blank page after login"
    frequency: string;        // "~10% of login attempts"
    environment: string;      // "Production, Chrome 120+, macOS"
    firstOccurrence: string;  // "2024-03-10 14:00 UTC"
    userImpact: string;       // "Users cannot access dashboard"
  };
  reproduction: {
    steps: string[];          // Step-by-step reproduction
    preconditions: string[];  // Required state for reproduction
    dataRequirements: string; // Specific data needed
    reproducible: boolean;    // Can we reproduce consistently?
    minimalCase: string;      // Smallest reproduction case
  };
  diagnosis: {
    hypotheses: Hypothesis[];      // Ranked list of possible causes
    evidence: Evidence[];          // Collected data points
    rootCause: string;             // Confirmed root cause
    contributingFactors: string[]; // Related issues
  };
  fix: {
    change: string;         // Code change description
    testCoverage: string;   // Tests that verify the fix
    regressionRisk: string; // Potential side effects
    rolloutPlan: string;    // Deployment strategy
  };
}
```

Core Concepts

Debugging Strategies

| Strategy | When to Use | Approach |
| --- | --- | --- |
| Binary Search | Bug in a large change set | `git bisect` to find breaking commit |
| Divide and Conquer | Bug in complex system | Isolate components, test individually |
| Rubber Duck | Stuck on logic error | Explain the code step-by-step |
| Log Tracing | Production issue | Follow data through log entries |
| State Inspection | Incorrect output | Examine variable state at each step |
| Minimal Reproduction | Inconsistent bug | Strip away code until bug is isolated |

Common Bug Categories

| Category | Symptoms | Detection Method |
| --- | --- | --- |
| Race Conditions | Intermittent failures, data corruption | Stress testing, thread analysis |
| Off-by-One | Wrong counts, missing items, index errors | Boundary testing |
| Null/Undefined | Crashes, blank displays, TypeError | Static analysis, null checking |
| Memory Leaks | Growing memory, eventual crash | Heap profiling, long-running tests |
| State Mutations | Unexpected behavior, stale data | Immutability enforcement, logging |
| Edge Cases | Failures with unusual input | Property-based testing, fuzzing |

```bash
# Debugging commands

# Git bisect for finding breaking commit
git bisect start
git bisect bad HEAD
git bisect good v2.3.0
# Git bisect will checkout commits for testing

# Automated git bisect with test
git bisect start HEAD v2.3.0
git bisect run npm test

# Node.js debugging
node --inspect-brk app.js
# Then open chrome://inspect

# Heap snapshot for memory leaks
node --expose-gc --inspect app.js
# In DevTools: Memory tab → Take heap snapshot

# Network-level debugging
curl -v https://api.example.com/endpoint 2>&1 | grep -E "^[<>*]"
```
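Boundary testing from the Off-by-One row above can be sketched in TypeScript. `sliceWindow` is a hypothetical helper, not part of this skill; the point is that the tests probe the boundaries (empty input, single element, a window ending exactly at the last index, a window running past the end) where off-by-one bugs typically hide:

```typescript
// Hypothetical function under test: returns a window of `size` items
// starting at `start`. Array.prototype.slice clamps past-the-end ranges.
function sliceWindow<T>(items: T[], start: number, size: number): T[] {
  return items.slice(start, start + size);
}

// Boundary tests: each targets an edge where off-by-one errors appear.
console.assert(sliceWindow([], 0, 3).length === 0, "empty input");
console.assert(sliceWindow([1], 0, 1).length === 1, "single element");
console.assert(sliceWindow([1, 2, 3], 1, 2).join(",") === "2,3", "window reaching the last index");
console.assert(sliceWindow([1, 2, 3], 2, 5).join(",") === "3", "size past the end is clamped");
```

A hand-rolled loop with `<=` instead of `<` would fail the last two cases, which is exactly what boundary tests are designed to catch.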

Configuration

| Parameter | Description | Default |
| --- | --- | --- |
| `debug_level` | Logging verbosity: `error`, `warn`, `info`, `debug`, `trace` | `debug` |
| `breakpoints` | Set automatic breakpoints on errors | `true` |
| `stack_trace_depth` | Maximum stack trace depth | `20` |
| `timeout` | Investigation time limit | `2h` |
| `auto_bisect` | Use git bisect for regressions | `true` |
| `capture_state` | Save reproduction state for later | `true` |
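A configuration using these parameters might look like the sketch below. The file name and JSON shape are assumptions for illustration; the skill's actual configuration format is not specified here. This example raises verbosity and stack depth for a deep intermittent bug:

```json
{
  "debug_level": "trace",
  "breakpoints": true,
  "stack_trace_depth": 40,
  "timeout": "4h",
  "auto_bisect": true,
  "capture_state": true
}
```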

Best Practices

  1. Reproduce the bug before attempting to fix it — A bug you can't reproduce is a bug you can't verify is fixed. Invest time in creating a reliable reproduction case, even if it requires specific data, timing, or environment conditions.

  2. Use git bisect for any regression — If something worked before and doesn't now, git bisect finds the exact commit that broke it. Use git bisect run <test-command> for fully automated binary search across hundreds of commits.

  3. Write the test first, then fix the bug — Create a test that fails due to the bug before writing any fix. This proves the bug exists, documents the expected behavior, and prevents the same bug from recurring after future changes.

  4. Check the simplest explanation first — Before investigating complex race conditions, check for typos, wrong variable names, missing null checks, and incorrect API usage. Most bugs have simple causes that complex theories obscure.

  5. Document every bug investigation in a knowledge base — Record the symptom, root cause, fix, and any patterns that might help future investigations. Build a searchable bug database that accelerates future debugging by connecting similar symptoms to known causes.
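Practices 1 and 3 above can be sketched together in TypeScript. `pageCount` is a hypothetical function with a classic off-by-one-family bug (using `Math.floor`, which drops the partial last page); the regression test is written so it fails against the buggy version and passes after the fix:

```typescript
// Fixed version: Math.ceil includes the partial last page.
// The original bug used Math.floor, so 11 items at page size 5
// reported 2 pages and the last item was unreachable.
function pageCount(totalItems: number, pageSize: number): number {
  return Math.ceil(totalItems / pageSize);
}

// Regression tests written BEFORE the fix: the middle assertion
// failed against the Math.floor version, proving the bug existed.
console.assert(pageCount(10, 5) === 2, "exact multiple of page size");
console.assert(pageCount(11, 5) === 3, "partial last page (the reported bug)");
console.assert(pageCount(0, 5) === 0, "empty list boundary");
```

Because the failing test is committed alongside the fix, any future change that reintroduces the truncation is caught immediately.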

Common Issues

Bug only reproduces in production, not locally. Differences between environments cause most "works on my machine" bugs: environment variables, database state, timezone settings, concurrent load, and network latency. Create a staging environment that mirrors production as closely as possible and instrument logging to capture state during production failures.
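One concrete way to close the environment gap is to replay a production failure with its exact recorded inputs instead of live values. The sketch below assumes a hypothetical `formatSessionExpiry` function; the technique is pinning the non-deterministic input (the current time) to the timestamp from the bug report so the failure becomes repeatable locally:

```typescript
// Hypothetical function suspected in a production-only bug report.
function formatSessionExpiry(loginTime: Date, ttlMinutes: number): string {
  const expiry = new Date(loginTime.getTime() + ttlMinutes * 60_000);
  return expiry.toISOString();
}

// Reproduction: instead of `new Date()` (different on every run and
// machine), pin the exact timestamp captured from production logs.
const reported = new Date("2024-03-10T14:00:00.000Z"); // from the report
const result = formatSessionExpiry(reported, 30);
console.log(result); // "2024-03-10T14:30:00.000Z"
```

The same pinning idea applies to locale, timezone, feature flags, and database rows: log them at the failure site in production, then feed the recorded values into the local reproduction.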

Intermittent bug disappears when debugging is enabled. Adding logging or breakpoints changes timing, which can mask race conditions (Heisenbug). Use non-intrusive tracing, increase log verbosity in production temporarily, or use tools that capture execution without altering timing.
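One low-perturbation tracing approach is an in-memory ring buffer: the hot path does an O(1) array write with no I/O, and the trace is dumped only after the failure. This is a generic sketch, not a facility of this skill:

```typescript
// Fixed-capacity trace buffer: recording is a single array store,
// so it barely changes timing compared to console/file logging.
class RingTrace {
  private buf: string[];
  private next = 0;
  constructor(private capacity: number) {
    this.buf = new Array(capacity);
  }
  // O(1), no I/O: safe to call inside timing-sensitive code.
  record(event: string): void {
    this.buf[this.next % this.capacity] = event;
    this.next++;
  }
  // Called only after the failure, outside the hot path:
  // returns the last `capacity` events in order.
  dump(): string[] {
    const start = Math.max(0, this.next - this.capacity);
    const out: string[] = [];
    for (let i = start; i < this.next; i++) out.push(this.buf[i % this.capacity]);
    return out;
  }
}

const trace = new RingTrace(4);
for (let i = 0; i < 6; i++) trace.record(`step ${i}`);
console.log(trace.dump()); // ["step 2", "step 3", "step 4", "step 5"]
```

Because older entries are silently overwritten, memory stays bounded even on long runs, and the dump shows the events immediately preceding the failure.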

Fix introduces new bugs in unrelated areas. The "unrelated" area shares code paths, data, or assumptions with the fixed code. Run the full test suite after any fix, review all callers of modified functions, and use code coverage tools to identify untested paths affected by the change.
