Critical Thinking Companion
Comprehensive agent designed to challenge assumptions and encourage critical analysis. Includes structured workflows, validation checks, and reusable patterns for expert advisors.
Your agent for challenging assumptions, stress-testing designs, and encouraging rigorous analysis before committing to implementation — acting as a thoughtful devil's advocate for engineering decisions.
When to Use This Agent
Choose Critical Thinking Companion when:
- Evaluating whether a proposed approach is truly the best option
- Stress-testing assumptions behind architectural or design decisions
- Seeking constructive challenge before committing to a significant change
- Identifying blind spots, risks, or unstated assumptions in a plan
- Making high-stakes decisions where the cost of being wrong is high
Consider alternatives when:
- You need actual implementation — use a developer or architect agent
- You need code review — use a code reviewer agent
- You need research on a specific topic — use a domain-specific agent
Quick Start
```yaml
# .claude/agents/critical-thinking.yml
name: Critical Thinking Companion
model: claude-sonnet
tools:
  - Read
  - Glob
  - Grep
description: >
  Critical analysis agent that challenges assumptions and stress-tests
  engineering decisions — analysis only, no code modifications
```
Example invocation:
claude "We're planning to migrate from REST to GraphQL for our public API. Challenge this decision — what are the risks, what assumptions are we making, and what could go wrong?"
Core Concepts
Critical Analysis Framework
| Lens | Questions | Purpose |
|---|---|---|
| Assumptions | What are we assuming is true? What if it isn't? | Surface hidden assumptions |
| Alternatives | What other approaches exist? Why aren't we using them? | Broaden the solution space |
| Risks | What could go wrong? What's the worst case? | Identify failure modes |
| Trade-offs | What are we giving up? Is the trade-off worth it? | Explicit cost-benefit |
| Evidence | What data supports this decision? What data is missing? | Ground decisions in facts |
| Reversibility | How hard is this to undo? What's the blast radius? | Calibrate decision weight |
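The six lenses can be applied mechanically as a first pass. A minimal sketch (the `challenge` helper and its output format are illustrative, not part of the agent's actual API):

```python
# The lens names and prompts mirror the framework table above.
LENSES = {
    "Assumptions": "What are we assuming is true? What if it isn't?",
    "Alternatives": "What other approaches exist? Why aren't we using them?",
    "Risks": "What could go wrong? What's the worst case?",
    "Trade-offs": "What are we giving up? Is the trade-off worth it?",
    "Evidence": "What data supports this decision? What data is missing?",
    "Reversibility": "How hard is this to undo? What's the blast radius?",
}

def challenge(decision: str) -> list[str]:
    """Return one probing question per lens for the given decision."""
    return [f"[{lens}] {decision}: {question}" for lens, question in LENSES.items()]

for line in challenge("Migrate public API from REST to GraphQL"):
    print(line)
```

Running every decision through all six prompts is cheap; the expensive part is answering them honestly.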
Decision Quality Spectrum
```
Low Quality Decision:
"Let's use Kafka because it's popular"
  └─ No analysis of actual requirements
  └─ No evaluation of alternatives
  └─ No consideration of operational cost

High Quality Decision:
"We chose Kafka because our event volume (50K/sec)
exceeds what SQS handles cost-effectively,
we need replay capability for data recovery,
and the team has Kafka operational experience.
We considered SQS (simpler) and Pulsar (newer)
but Kafka best fits our requirements and constraints."
```
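A high-quality decision can be captured as a structured record. A hedged sketch (the `DecisionRecord` class and its fields are hypothetical, chosen to mirror the Kafka example above):

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    """Minimal decision record; a decision without rationale or alternatives is low quality."""
    choice: str
    rationale: list[str]  # quantified reasons, not vibes
    alternatives_considered: dict[str, str] = field(default_factory=dict)  # option -> why rejected

    def is_well_formed(self) -> bool:
        # The "popular, so let's use it" decision fails this check.
        return bool(self.rationale) and bool(self.alternatives_considered)

kafka = DecisionRecord(
    choice="Kafka",
    rationale=[
        "Event volume (50K/sec) exceeds what SQS handles cost-effectively",
        "Replay capability needed for data recovery",
        "Team has Kafka operational experience",
    ],
    alternatives_considered={
        "SQS": "simpler, but not cost-effective at our volume",
        "Pulsar": "newer, no team operational experience",
    },
)
```

Forcing decisions through a record like this makes the "low quality" failure mode visible: empty `rationale` and `alternatives_considered` fields are a prompt to go do the analysis.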
Configuration
| Parameter | Description | Default |
|---|---|---|
| `challenge_intensity` | How aggressively to challenge (`gentle`, `moderate`, `rigorous`) | `moderate` |
| `focus_areas` | Priority concerns (`technical`, `business`, `operational`, `all`) | `all` |
| `output_style` | Response format (`questions`, `analysis`, `report`) | `analysis` |
| `include_alternatives` | Suggest alternative approaches | `true` |
| `risk_assessment` | Include risk severity and likelihood | `true` |
Best Practices
- Challenge the problem statement before challenging the solution. Often the proposed solution is wrong because the problem is poorly defined. Ask "Are we solving the right problem?" before asking "Is this the right solution?"
- Separate "can we do this?" from "should we do this?" Technical feasibility doesn't imply strategic wisdom. A migration might be technically straightforward but organizationally disruptive. Evaluate both dimensions independently.
- Quantify claims whenever possible. "This will improve performance" is unfalsifiable because it's unmeasurable. "This should reduce p99 latency from 800ms to 200ms" can be tested, validated, and measured. Push for specific, measurable claims.
- Consider second-order effects. Every change has consequences beyond its immediate impact. Adding a caching layer improves latency but introduces cache invalidation complexity, stale data risks, and operational overhead. Map the full consequence chain.
- End with a constructive recommendation, not just criticism. Pure criticism without direction is unhelpful. After identifying risks and weaknesses, suggest how to mitigate them, what additional analysis is needed, or which alternative deserves deeper evaluation.
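Mapping a full consequence chain amounts to walking a small graph from the proposed change to its downstream effects. A sketch under stated assumptions (the graph contents and the `consequence_chain` helper are illustrative, using the caching example from the practices above):

```python
from collections import deque

# Hypothetical consequence graph: change/effect -> direct downstream effects.
CONSEQUENCES = {
    "add caching layer": ["lower read latency", "cache invalidation complexity"],
    "cache invalidation complexity": ["stale data risk"],
    "stale data risk": ["need explicit TTL and invalidation policy"],
}

def consequence_chain(change: str) -> list[str]:
    """Breadth-first walk from a proposed change, listing every downstream effect once."""
    seen, order, queue = {change}, [], deque([change])
    while queue:
        node = queue.popleft()
        for effect in CONSEQUENCES.get(node, []):
            if effect not in seen:
                seen.add(effect)
                order.append(effect)
                queue.append(effect)
    return order
```

First-order effects appear early in the list; everything after them is the second-order cost that a latency-only analysis would miss.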
Common Issues
Critical thinking is perceived as negativity or obstruction. Frame challenges as curiosity, not opposition. "What's our plan if this assumption is wrong?" is collaborative. "This won't work because..." is adversarial. The goal is better decisions, not winning arguments.
Analysis paralysis from too many concerns. Not every risk deserves equal attention. Prioritize by impact (high, medium, low) and likelihood (likely, possible, unlikely). Focus deep analysis on high-impact, likely risks. Acknowledge low-impact risks without blocking progress.
Team skips critical thinking under time pressure. Time pressure is exactly when critical thinking is most valuable — rushing into the wrong solution wastes more time than pausing to validate the approach. Budget 10% of decision time for structured challenge, even under deadlines.
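The impact-and-likelihood triage above can be reduced to a simple scoring pass. A minimal sketch (the numeric weights and the `prioritize` helper are assumptions for illustration):

```python
# Map the qualitative scales from the text onto rough numeric weights.
IMPACT = {"high": 3, "medium": 2, "low": 1}
LIKELIHOOD = {"likely": 3, "possible": 2, "unlikely": 1}

def prioritize(risks: list[tuple[str, str, str]]) -> list[tuple[str, int]]:
    """Sort (name, impact, likelihood) risks by impact x likelihood, highest first."""
    scored = [(name, IMPACT[impact] * LIKELIHOOD[likelihood])
              for name, impact, likelihood in risks]
    return sorted(scored, key=lambda r: r[1], reverse=True)

risks = [
    ("vendor lock-in", "medium", "possible"),
    ("data loss during migration", "high", "likely"),
    ("minor UI regressions", "low", "likely"),
]
# Spend deep analysis on the top of this list; acknowledge the rest and move on.
```

High-impact, likely risks float to the top; low scorers get acknowledged without blocking progress.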