
Thinking Beast Mode Guru

Boost productivity with this deep-reasoning coding agent. Includes structured workflows, validation checks, and reusable patterns for API and GraphQL work.

Agent · Cliptics · api graphql · v1.0.0 · MIT

Thinking Beast Mode Guru

An autonomous agent that activates deep, methodical reasoning for complex problems — applying extended thinking chains, multi-perspective analysis, and structured problem-solving frameworks to tackle challenges that resist simple solutions.

When to Use This Agent

Choose Thinking Beast Mode Guru when:

  • A problem has resisted multiple attempts at solution and needs deeper analysis
  • You need to reason through complex architectural trade-offs with many variables
  • The problem requires considering multiple perspectives, constraints, and edge cases
  • You want systematic analysis rather than a quick answer

Consider alternatives when:

  • The problem has a straightforward, known solution (just implement it)
  • You need code written quickly without deep analysis
  • The task is routine and well-documented

Quick Start

```yaml
# .claude/agents/thinking-beast-mode.yml
name: thinking-beast-mode-guru
description: Deep analytical reasoning for complex problems
agent_prompt: |
  You are in BEAST MODE — maximum analytical depth activated.

  When tackling a problem:
  1. Restate the problem precisely (eliminate ambiguity)
  2. Identify ALL constraints, assumptions, and unknowns
  3. Generate 3+ distinct approaches with trade-off analysis
  4. For each approach, trace through edge cases and failure modes
  5. Select the optimal approach with explicit justification
  6. Provide a step-by-step implementation plan

  Reasoning rules:
  - Show your work. Make every reasoning step explicit.
  - Challenge your own assumptions. What if you're wrong?
  - Consider second-order effects. What breaks downstream?
  - Think in failure modes. What's the worst case?
  - Quantify trade-offs where possible (latency, cost, complexity).
```

Example invocation:

```shell
claude "BEAST MODE: Our payment system processes duplicate charges 0.1% of the time. We've tried idempotency keys but the problem persists. Analyze root causes and design a bulletproof solution."
```

Sample deep analysis output:

```text
BEAST MODE Analysis — Duplicate Payment Charges
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Problem Restatement:
  0.1% of transactions (≈50/day at current volume) result in
  duplicate charges despite idempotency keys being implemented.

Root Cause Analysis (5 Whys × 3 Hypotheses):

  Hypothesis 1: Idempotency key collision
    → Keys generated client-side with insufficient entropy
    → P(collision) at 50K txn/day with UUID v4 ≈ 10^-32 ❌ Eliminated

  Hypothesis 2: Race condition in check-then-charge
    → Two requests with same key arrive within <10ms
    → DB read shows "not processed" for both before either writes
    → CONFIRMED: No database lock on idempotency check
    → Evidence: 92% of duplicates have <50ms timestamp gap

  Hypothesis 3: Retry storm from client timeout
    → Client times out at 30s, retries with SAME key
    → But key is marked "processing", not "completed"
    → Second request sees "processing" and waits... then also charges
    → CONFIRMED: 8% of duplicates from retry after timeout

Solutions Evaluated:
  A. SELECT FOR UPDATE lock              → adds ~5ms latency, fixes race condition
  B. Redis SETNX as distributed lock     → <1ms, but adds infra dependency
  C. Database unique constraint + INSERT → best: atomic, no extra infra

  Selected: C (unique constraint) + timeout fix for retry case
```
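The selected fix (C) can be sketched in TypeScript. This models the database's atomic "unique constraint + INSERT" check with an in-memory map; `IdempotencyStore` and `processCharge` are invented names for illustration, not an API from the template.

```typescript
// Sketch of solution C: claim the idempotency key atomically BEFORE charging.
// A real implementation would use INSERT ... ON CONFLICT DO NOTHING on a
// uniquely-constrained column; a Map stands in for that table here.

type ChargeResult = { status: "charged" | "duplicate"; txnId: string };

class IdempotencyStore {
  private keys = new Map<string, string>(); // idempotencyKey -> txnId

  // Mimics the unique-constraint INSERT: only the first writer of a
  // given key succeeds; every later attempt sees a conflict.
  tryInsert(key: string, txnId: string): boolean {
    if (this.keys.has(key)) return false;
    this.keys.set(key, txnId);
    return true;
  }

  lookup(key: string): string | undefined {
    return this.keys.get(key);
  }
}

function processCharge(
  store: IdempotencyStore,
  key: string,
  charge: () => void
): ChargeResult {
  const txnId = `txn-${key}`;
  // Insert BEFORE charging: the key claims the work atomically, so a
  // concurrent duplicate request loses the insert and never charges.
  if (!store.tryInsert(key, txnId)) {
    return { status: "duplicate", txnId: store.lookup(key)! };
  }
  charge();
  return { status: "charged", txnId };
}
```

Because the claim happens before the charge, the race from Hypothesis 2 collapses to a single winner; the retry case from Hypothesis 3 gets back the original transaction id instead of a second charge.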

Core Concepts

Deep Reasoning Framework

| Phase | Action | Output |
|-------|--------|--------|
| Reframe | Restate the problem precisely | Clear problem statement |
| Decompose | Break into sub-problems | Component analysis |
| Hypothesize | Generate multiple explanations | Hypothesis list |
| Analyze | Trace each hypothesis through evidence | Elimination or confirmation |
| Synthesize | Combine findings into solution | Integrated approach |
| Validate | Check solution against all constraints | Verification |
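The six phases form a strict sequence in which each phase's output feeds the next. A minimal typed sketch of that pipeline (the names below are this sketch's own, not part of the agent spec):

```typescript
// Encode the framework phases as a typed, ordered pipeline.
type Phase =
  | "reframe"
  | "decompose"
  | "hypothesize"
  | "analyze"
  | "synthesize"
  | "validate";

interface PhaseResult {
  phase: Phase;
  output: string;
}

const PHASES: Phase[] = [
  "reframe", "decompose", "hypothesize", "analyze", "synthesize", "validate",
];

// Run every phase in order, threading each phase's output into the next.
function runFramework(
  problem: string,
  handlers: Record<Phase, (input: string) => string>
): PhaseResult[] {
  const results: PhaseResult[] = [];
  let input = problem;
  for (const phase of PHASES) {
    input = handlers[phase](input); // each phase consumes the previous output
    results.push({ phase, output: input });
  }
  return results;
}
```

Encoding the order in one array makes it impossible to skip a phase or run validation before synthesis.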

Multi-Perspective Analysis

```text
Stakeholder Analysis Grid:
┌──────────────┬───────────────┬───────────────┬──────────────┐
│ Perspective  │ Priorities    │ Concerns      │ Constraints  │
├──────────────┼───────────────┼───────────────┼──────────────┤
│ End User     │ Reliability   │ Double charge │ Patience     │
│ Developer    │ Simplicity    │ Complexity    │ Time budget  │
│ Ops/SRE      │ Observability │ Alert fatigue │ Infra limits │
│ Business     │ Revenue       │ Chargebacks   │ Compliance   │
│ Security     │ Integrity     │ Fraud vector  │ PCI scope    │
└──────────────┴───────────────┴───────────────┴──────────────┘
```

Trade-Off Quantification

```typescript
// Structured trade-off comparison
interface Solution {
  name: string;
  latencyImpact: number;       // ms added per request
  complexityScore: number;     // 1-10
  reliabilityGain: number;     // percentage improvement
  infraCost: number;           // monthly $ added
  implementationHours: number;
  risks: string[];
}

const solutions: Solution[] = [
  {
    name: "Database unique constraint",
    latencyImpact: 2,
    complexityScore: 3,
    reliabilityGain: 99.9,
    infraCost: 0,
    implementationHours: 8,
    risks: ["Requires migration on large table"],
  },
  {
    name: "Redis distributed lock",
    latencyImpact: 1,
    complexityScore: 5,
    reliabilityGain: 99.8,
    infraCost: 50,
    implementationHours: 16,
    risks: ["Redis single point of failure", "Lock expiry edge case"],
  },
];
```
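Records shaped like `Solution` lend themselves to a single weighted score for ranking. This is a sketch, not part of the template: the weights are invented defaults to tune per project, and the interface is re-declared so the snippet stands alone.

```typescript
// Minimal trade-off scoring sketch (weights are illustrative assumptions).
interface Solution {
  name: string;
  latencyImpact: number;       // ms added per request
  complexityScore: number;     // 1-10
  reliabilityGain: number;     // percentage improvement
  infraCost: number;           // monthly $ added
  implementationHours: number;
  risks: string[];
}

function score(s: Solution): number {
  return (
    s.reliabilityGain * 2        // reliability dominates for payments
    - s.latencyImpact            // small per-request penalty
    - s.complexityScore * 3      // complexity compounds over time
    - s.infraCost * 0.1          // amortized monthly cost
    - s.implementationHours * 0.5
    - s.risks.length * 5         // flat penalty per named risk
  );
}

// Highest risk-adjusted score first.
function rank(candidates: Solution[]): Solution[] {
  return [...candidates].sort((a, b) => score(b) - score(a));
}
```

Applied to the two sample solutions above, the unique-constraint option scores higher, matching the analysis's pick of solution C.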

Configuration

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `analysisDepth` | string | `"deep"` | Depth: `quick`, `standard`, `deep`, `exhaustive` |
| `perspectives` | number | `3` | Minimum alternative approaches to evaluate |
| `showReasoning` | boolean | `true` | Display full chain of thought |
| `quantifyTradeoffs` | boolean | `true` | Add numeric trade-off comparisons |
| `includeRisks` | boolean | `true` | Analyze failure modes per solution |
| `outputFormat` | string | `"structured"` | Format: `structured`, `narrative`, `decision-matrix` |
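As a rough sketch, the options table maps onto a TypeScript shape like the following; the actual platform schema may differ, and `BeastModeConfig` is a name invented here.

```typescript
// Hypothetical typed mirror of the configuration table above.
interface BeastModeConfig {
  analysisDepth: "quick" | "standard" | "deep" | "exhaustive";
  perspectives: number;          // minimum alternative approaches
  showReasoning: boolean;        // display full chain of thought
  quantifyTradeoffs: boolean;    // add numeric trade-off comparisons
  includeRisks: boolean;         // analyze failure modes per solution
  outputFormat: "structured" | "narrative" | "decision-matrix";
}

// Defaults exactly as listed in the table.
const defaults: BeastModeConfig = {
  analysisDepth: "deep",
  perspectives: 3,
  showReasoning: true,
  quantifyTradeoffs: true,
  includeRisks: true,
  outputFormat: "structured",
};
```

Using union types for `analysisDepth` and `outputFormat` rejects typo'd values at compile time rather than at run time.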

Best Practices

  1. Restate the problem before solving it — Half of "hard problems" are actually poorly defined problems. Force yourself to write a single, precise sentence describing what needs to be solved. If you cannot, you do not yet understand the problem well enough to solve it.

  2. Generate at least 3 approaches before committing — The first solution you think of is rarely the best. Generating three distinct approaches forces you to consider trade-offs that a single approach hides. Even if you end up choosing the first approach, you will have validated it against alternatives.

  3. Trace through failure modes explicitly — For each proposed solution, ask: "What happens when this fails?" Trace the failure path completely. A solution that works 99.9% of the time but causes data corruption in the 0.1% case is worse than one that works 99% of the time but fails safely.

  4. Quantify trade-offs instead of describing them — "Adds some latency" is not actionable. "Adds 5ms p50 / 50ms p99 latency" enables informed decision-making. Whenever possible, express trade-offs in numbers: milliseconds, dollars, error rates, or lines of code.

  5. Challenge your assumptions explicitly — List every assumption you are making and ask "What if this is wrong?" The assumption that causes the most damage if wrong should be validated first. Many complex bugs exist because an assumption that was true during development became false in production.
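Practice 5 can be made mechanical with a small "assumption register": list assumptions, estimate the damage if each turns out false, and validate the riskiest first. The names and the 1-10 damage scale below are illustrative.

```typescript
// Hypothetical assumption register for ordering validation work.
interface Assumption {
  statement: string;
  damageIfWrong: number; // 1 (cosmetic) to 10 (data loss / money)
}

// Validate the most damaging assumption first.
function validationOrder(assumptions: Assumption[]): Assumption[] {
  return [...assumptions].sort((a, b) => b.damageIfWrong - a.damageIfWrong);
}
```

For the payment example, "idempotency keys are actually unique" would outrank "clock skew stays under a second" and therefore get checked first.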

Common Issues

Analysis paralysis — too many options, no decision — Deep analysis generates multiple valid approaches and the team cannot choose. Set a time limit for analysis (e.g., 2 hours), require a recommendation (not just a comparison), and use the "disagree and commit" principle — pick the approach with the best risk-adjusted expected value and move forward.

Second-order effects missed despite thorough analysis — The solution fixes the immediate problem but creates a new one downstream. For example, adding a database lock fixes duplicates but causes deadlocks under high concurrency. Extend the analysis by one more step: "If we implement this solution, what new problems does it create?" Trace the effects through the entire system.

Over-engineering the solution due to deep analysis — Beast mode analysis reveals 15 edge cases, and the team tries to handle all of them in the first implementation. Prioritize edge cases by probability × impact. Handle the top 3, document the rest, and add them to the backlog. A solution that handles the common cases and fails gracefully on rare ones ships faster than one that handles everything.
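The probability × impact prioritization from the last issue can be sketched in a few lines; `EdgeCase` and the sample scales are invented for illustration.

```typescript
// Rank edge cases by expected impact (probability × impact) and keep the top n.
interface EdgeCase {
  name: string;
  probability: number; // estimated chance of occurring, 0-1
  impact: number;      // severity if it occurs, 1-10
}

function topEdgeCases(cases: EdgeCase[], n = 3): EdgeCase[] {
  return [...cases]
    .sort((a, b) => b.probability * b.impact - a.probability * a.impact)
    .slice(0, n);
}
```

Everything outside the returned top-`n` set goes to the backlog with its score attached, so the cut line is explicit rather than implicit.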
