Behavioral Modes Smart
An adaptive skill that switches between operational behavioral modes -- brainstorm, implement, debug, review, teach, and ship. Includes structured workflows, validation checks, and reusable patterns for AI assistants.
Overview
Behavioral Modes Smart is an adaptive operating system for AI assistants that dynamically adjusts communication style, problem-solving approach, output format, and depth of reasoning based on the nature of the current task. Instead of using a one-size-fits-all response pattern, the AI detects whether the user needs brainstorming, implementation, debugging, code review, teaching, or deployment help -- and shifts its entire behavioral profile accordingly.
This matters because different phases of software development require fundamentally different kinds of thinking. The expansive, divergent thinking needed during architecture brainstorming is actively harmful during focused debugging. The detailed explanations valuable when teaching are noise during rapid implementation. Behavioral modes formalize these shifts so the AI consistently applies the right approach at the right time.
When to Use
- You want your AI assistant to automatically adapt its response style based on task context
- Your team switches frequently between ideation, implementation, review, and debugging phases
- You need consistent quality across different types of interactions without manually adjusting prompts
- You are building an agent system that delegates to specialized behavioral profiles
- You want to standardize how AI handles code reviews vs. feature requests vs. bug reports
- You are setting up CLAUDE.md or system prompts that should guide behavior across your entire team
Quick Start
```bash
# Add behavioral modes to your CLAUDE.md
cat >> CLAUDE.md << 'EOF'
## Behavioral Modes

Detect and apply the appropriate mode based on user intent:

- BRAINSTORM: Divergent thinking, multiple options, no code yet
- IMPLEMENT: Production code, minimal explanation, clean-code standards
- DEBUG: Systematic diagnosis, root cause analysis, prevention
- REVIEW: Severity-categorized feedback, constructive critique
- TEACH: Fundamentals-first, examples, progressive complexity
- SHIP: Stability focus, checklists, deployment readiness
EOF
```
```typescript
// modes/detector.ts - Automatic mode detection
type BehavioralMode = 'brainstorm' | 'implement' | 'debug' | 'review' | 'teach' | 'ship';

interface ModeSignals {
  keywords: string[];
  contextPatterns: RegExp[];
  weight: number;
}

const MODE_SIGNALS: Record<BehavioralMode, ModeSignals> = {
  brainstorm: {
    keywords: ['ideas', 'options', 'what if', 'how should', 'approach', 'architecture', 'design'],
    contextPatterns: [/what are.*ways/i, /how would you/i, /should (we|I)/i, /compare.*options/i],
    weight: 1.0,
  },
  implement: {
    keywords: ['build', 'create', 'add', 'implement', 'write', 'make', 'code'],
    contextPatterns: [/add a.*component/i, /create.*function/i, /implement.*feature/i],
    weight: 1.0,
  },
  debug: {
    keywords: ['error', 'bug', 'not working', 'fails', 'broken', 'crash', 'issue', 'wrong'],
    contextPatterns: [/getting.*error/i, /doesn't work/i, /TypeError/i, /undefined is not/i],
    weight: 1.2, // Slightly higher weight -- debugging is urgent
  },
  review: {
    keywords: ['review', 'check', 'audit', 'feedback', 'improve', 'assess'],
    contextPatterns: [/review (this|my)/i, /what do you think/i, /any issues with/i],
    weight: 1.0,
  },
  teach: {
    keywords: ['explain', 'how does', 'what is', 'learn', 'understand', 'why does', 'tutorial'],
    contextPatterns: [/how does.*work/i, /explain.*concept/i, /what's the difference/i],
    weight: 1.0,
  },
  ship: {
    keywords: ['deploy', 'release', 'production', 'launch', 'ship', 'go live', 'publish'],
    contextPatterns: [/ready (to|for) (deploy|production)/i, /release checklist/i],
    weight: 1.0,
  },
};

function detectMode(userMessage: string): BehavioralMode {
  const scores: Record<BehavioralMode, number> = {
    brainstorm: 0, implement: 0, debug: 0, review: 0, teach: 0, ship: 0,
  };
  const lowerMsg = userMessage.toLowerCase();

  for (const [mode, signals] of Object.entries(MODE_SIGNALS) as [BehavioralMode, ModeSignals][]) {
    // Keyword matching
    for (const keyword of signals.keywords) {
      if (lowerMsg.includes(keyword)) {
        scores[mode] += signals.weight;
      }
    }
    // Pattern matching -- patterns are stronger signals than bare keywords
    for (const pattern of signals.contextPatterns) {
      if (pattern.test(userMessage)) {
        scores[mode] += signals.weight * 1.5;
      }
    }
  }

  // Return the highest-scoring mode; default to implement
  const entries = Object.entries(scores) as [BehavioralMode, number][];
  entries.sort((a, b) => b[1] - a[1]);
  return entries[0][1] > 0 ? entries[0][0] : 'implement';
}
```
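To make the scoring concrete, here is a stripped-down, runnable sketch of the same weighted-scoring idea with only two modes. It is a simplification of the detector above, not a drop-in replacement.

```typescript
type Mode = 'implement' | 'debug';

// Minimal signal table: a keyword hit scores `weight`,
// a regex hit scores `weight * 1.5`, mirroring the full detector.
const SIGNALS: Record<Mode, { keywords: string[]; patterns: RegExp[]; weight: number }> = {
  implement: { keywords: ['build', 'create'], patterns: [/create.*function/i], weight: 1.0 },
  debug: { keywords: ['error', 'bug'], patterns: [/getting.*error/i], weight: 1.2 },
};

function detect(message: string): Mode {
  const lower = message.toLowerCase();
  const score = (mode: Mode): number => {
    const s = SIGNALS[mode];
    const kw = s.keywords.filter((k) => lower.includes(k)).length * s.weight;
    const pat = s.patterns.filter((p) => p.test(message)).length * s.weight * 1.5;
    return kw + pat;
  };
  return score('debug') > score('implement') ? 'debug' : 'implement';
}

// "I'm getting an error when I build" scores 1.2 (keyword 'error')
// plus 1.8 (pattern /getting.*error/) for debug, beating implement's 1.0.
console.log(detect("I'm getting an error when I build")); // debug
console.log(detect('create a helper function'));          // implement
```

Note how the pattern bonus lets a debug phrase win even when an implement keyword ("build") is also present.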
Core Concepts
1. Mode Definitions and Behavioral Contracts
Each mode defines a strict behavioral contract that governs how the AI responds.
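One way to make these contracts machine-checkable is to encode them as data. The shape below is a hypothetical sketch -- the `ModeContract` field names are illustrative, not part of the skill itself:

```typescript
type BehavioralMode = 'brainstorm' | 'implement' | 'debug' | 'review' | 'teach' | 'ship';

// Hypothetical contract shape -- each mode declares what it must
// and must not do, so a wrapper can validate responses against it.
interface ModeContract {
  mode: BehavioralMode;
  mustInclude: string[];        // required response elements
  mustAvoid: string[];          // forbidden response elements
  maxProseSentences?: number;   // e.g. IMPLEMENT caps explanation length
}

const implementContract: ModeContract = {
  mode: 'implement',
  mustInclude: ['complete runnable code', 'error handling'],
  mustAvoid: ['placeholders', 'TODOs', 'tutorial-style explanations'],
  maxProseSentences: 2,
};

console.log(implementContract.mustAvoid.includes('TODOs')); // true
```
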
BRAINSTORM Mode
Purpose: Divergent thinking, exploration, and option generation.
Behavioral Rules:
- Ask 1-2 clarifying questions before generating options
- Present at least 3 distinct approaches
- Include pros, cons, and trade-offs for each option
- Use Mermaid diagrams for architecture visualization
- Do NOT write implementation code
- End with a question to guide the user's decision
Output Format:
"Let's explore this. Here are three approaches:
Option A: [Name]
[Description]
Pros: [list]
Cons: [list]
Effort: [estimate]
Option B: [Name]
...
Which direction feels right? Or should we explore other angles?"
IMPLEMENT Mode
Purpose: Fast, production-quality code with minimal explanation.
Behavioral Rules:
- Write complete, runnable code (no placeholders, no TODOs)
- Include error handling and edge cases
- Follow clean-code principles: small functions, clear names, no unnecessary comments
- Provide only 1-2 sentence summary after the code
- Do NOT include tutorial-style explanations
- Do NOT explain language basics
- Read ALL referenced files before writing
Output Format:
[Complete code block]
"Added retry logic with exponential backoff to the API client."
DEBUG Mode
Purpose: Systematic root cause analysis.
Behavioral Rules:
- Ask for error messages and reproduction steps if not provided
- Form a hypothesis before making changes
- Trace the data flow from input to the error point
- Explain the root cause, not just the symptom
- Include a prevention strategy
Output Format:
"Symptom: [what the user sees]
Root cause: [why it happens]
Fix: [the code change]
Prevention: [how to avoid this in the future]"
REVIEW Mode
Purpose: Constructive, severity-categorized code feedback.
Behavioral Rules:
- Categorize findings: Critical / High / Medium / Low
- Explain the "why" behind every suggestion
- Provide improved code examples for non-trivial issues
- Acknowledge what is done well
- Be constructive, not combative
Output Format:
"## Code Review: [file or feature]
### Critical
- [issue] -- why it matters, suggested fix
### Improvements
- [suggestion] with code example
### Positive
- [acknowledgment of good patterns]"
TEACH Mode
Purpose: Concept explanation from fundamentals to advanced.
Behavioral Rules:
- Start with a simple analogy or mental model
- Progress from fundamentals to implementation details
- Include a practical code example with comments
- Suggest a hands-on exercise
- Check for understanding
Output Format:
"## Understanding [Concept]
### What is it?
[Simple explanation with analogy]
### How it works
[Technical explanation, optionally with diagram]
### Code Example
[Annotated example]
### Try it yourself
[Suggested exercise]"
SHIP Mode
Purpose: Production readiness verification and deployment.
Behavioral Rules:
- Focus on stability over new features
- Check for missing error handling, exposed secrets, console.logs
- Verify environment configurations
- Create actionable checklists
- Run (or instruct to run) all tests
Output Format:
"## Pre-Ship Checklist
### Code Quality
- [ ] No TypeScript errors (tsc --noEmit)
- [ ] ESLint passing
- [ ] All tests passing
### Security
- [ ] No exposed secrets or API keys
- [ ] Input validation on all endpoints
### Performance
- [ ] Bundle size within budget
- [ ] No console.log in production code
### Deployment
- [ ] Environment variables configured
- [ ] Database migrations applied
- [ ] Rollback plan documented"
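The six contracts above can be packaged as per-mode system-prompt directives. The sketch below is hypothetical -- the directive wording condenses the rules above, and you would adapt it to your own CLAUDE.md:

```typescript
type Mode = 'brainstorm' | 'implement' | 'debug' | 'review' | 'teach' | 'ship';

// Condensed one-line directives derived from the behavioral contracts above.
const MODE_DIRECTIVES: Record<Mode, string> = {
  brainstorm: 'Present 3+ distinct options with pros/cons; no implementation code.',
  implement: 'Complete, runnable code; at most 2 sentences of prose.',
  debug: 'State symptom, root cause, fix, and prevention, in that order.',
  review: 'Categorize findings by severity; acknowledge what is done well.',
  teach: 'Start with an analogy; progress to an annotated example and exercise.',
  ship: 'Produce an actionable pre-ship checklist; stability over features.',
};

function systemPromptFor(mode: Mode): string {
  return `You are in ${mode.toUpperCase()} mode. ${MODE_DIRECTIVES[mode]}`;
}

console.log(systemPromptFor('debug'));
// You are in DEBUG mode. State symptom, root cause, fix, and prevention, in that order.
```
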
2. Mode Transitions and Composition
Real tasks often span multiple modes. The AI should transition smoothly when the conversation shifts.
```typescript
// modes/state-machine.ts
interface ModeState {
  currentMode: BehavioralMode;
  previousMode: BehavioralMode | null;
  transitionCount: number;
  modeHistory: Array<{ mode: BehavioralMode; timestamp: number; trigger: string }>;
}

class ModeStateMachine {
  private state: ModeState;

  constructor(initialMode: BehavioralMode = 'implement') {
    this.state = {
      currentMode: initialMode,
      previousMode: null,
      transitionCount: 0,
      modeHistory: [{ mode: initialMode, timestamp: Date.now(), trigger: 'initial' }],
    };
  }

  transition(newMode: BehavioralMode, trigger: string): void {
    if (newMode === this.state.currentMode) return;
    this.state.previousMode = this.state.currentMode;
    this.state.currentMode = newMode;
    this.state.transitionCount++;
    this.state.modeHistory.push({ mode: newMode, timestamp: Date.now(), trigger });
  }

  getCurrentMode(): BehavioralMode {
    return this.state.currentMode;
  }

  // Some tasks are composites: brainstorm -> implement -> review
  getCompositeWorkflow(taskType: string): BehavioralMode[] {
    const workflows: Record<string, BehavioralMode[]> = {
      'new-feature': ['brainstorm', 'implement', 'review', 'ship'],
      'bug-fix': ['debug', 'implement', 'review'],
      'refactor': ['review', 'brainstorm', 'implement', 'review'],
      'learning': ['teach', 'implement'],
      'deployment': ['review', 'ship'],
    };
    return workflows[taskType] || ['implement'];
  }
}
```
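To see a composite workflow in action, here is a self-contained sketch that walks a workflow sequence and records each transition. The types are re-declared in simplified form so the snippet stands alone:

```typescript
type Mode = 'brainstorm' | 'implement' | 'debug' | 'review' | 'teach' | 'ship';

const WORKFLOWS: Record<string, Mode[]> = {
  'new-feature': ['brainstorm', 'implement', 'review', 'ship'],
  'bug-fix': ['debug', 'implement', 'review'],
};

// Walk a composite workflow, logging each mode transition.
function runWorkflow(taskType: string): string[] {
  const sequence = WORKFLOWS[taskType] ?? ['implement'];
  const log: string[] = [];
  let current: Mode | null = null;
  for (const next of sequence) {
    if (next !== current) {
      log.push(current === null ? `start:${next}` : `${current}->${next}`);
      current = next;
    }
  }
  return log;
}

console.log(runWorkflow('bug-fix'));
// ['start:debug', 'debug->implement', 'implement->review']
```

Unknown task types fall back to a single `implement` phase, matching the state machine's default.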
3. Multi-Agent Mode Collaboration (PEC Pattern)
The Plan-Execute-Critic (PEC) pattern cycles through specialized modes for complex tasks:
```python
# modes/pec_cycle.py
import json


class PECCycle:
    """
    Plan-Execute-Critic cycle for high-complexity tasks.
    Each phase uses a different behavioral mode.
    """

    def __init__(self, llm_client, max_cycles: int = 3):
        self.llm = llm_client
        self.max_cycles = max_cycles

    def run(self, task: str, context: str) -> dict:
        plan = None
        implementation = None
        critique = None
        for cycle in range(self.max_cycles):
            # PLAN phase (Brainstorm mode behavior)
            plan = self._plan(task, context, previous_critique=critique)
            # EXECUTE phase (Implement mode behavior)
            implementation = self._execute(plan)
            # CRITIC phase (Review mode behavior)
            critique = self._critique(plan, implementation)
            if critique["approved"]:
                return {
                    "status": "approved",
                    "plan": plan,
                    "implementation": implementation,
                    "cycles": cycle + 1,
                }
        return {
            "status": "max_cycles_reached",
            "plan": plan,
            "implementation": implementation,
            "last_critique": critique,
        }

    def _plan(self, task: str, context: str, previous_critique: dict = None) -> str:
        critique_section = ""
        if previous_critique:
            critique_section = (
                f"\nPrevious attempt was rejected. Feedback:\n{previous_critique['feedback']}\n"
                "Address these issues in the new plan."
            )
        response = self.llm.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=2048,
            system=(
                "You are in BRAINSTORM mode. Explore multiple approaches, "
                "then commit to the best one. Output a clear, step-by-step plan."
            ),
            messages=[{"role": "user", "content": f"Task: {task}\nContext: {context}{critique_section}"}],
        )
        return response.content[0].text

    def _execute(self, plan: str) -> str:
        response = self.llm.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=4096,
            system=(
                "You are in IMPLEMENT mode. Write production-quality code. "
                "No explanations, no TODOs, no placeholders."
            ),
            messages=[{"role": "user", "content": f"Execute this plan:\n{plan}"}],
        )
        return response.content[0].text

    def _critique(self, plan: str, implementation: str) -> dict:
        response = self.llm.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=2048,
            system=(
                "You are in REVIEW mode. Evaluate the implementation against the plan. "
                "Categorize issues by severity. Respond with JSON: "
                '{"approved": bool, "feedback": str, '
                '"issues": [{"severity": str, "description": str}]}'
            ),
            messages=[{"role": "user", "content": f"Plan:\n{plan}\n\nImplementation:\n{implementation}"}],
        )
        return json.loads(response.content[0].text)
```
4. Manual Mode Switching
Users should be able to explicitly override automatic detection using slash commands or directive keywords.
```typescript
// modes/manual.ts
const MANUAL_TRIGGERS: Record<string, BehavioralMode> = {
  '/brainstorm': 'brainstorm',
  '/implement': 'implement',
  '/debug': 'debug',
  '/review': 'review',
  '/teach': 'teach',
  '/ship': 'ship',
};

function parseManualMode(message: string): { mode: BehavioralMode | null; cleanMessage: string } {
  for (const [trigger, mode] of Object.entries(MANUAL_TRIGGERS)) {
    if (message.startsWith(trigger)) {
      return {
        mode,
        cleanMessage: message.slice(trigger.length).trim(),
      };
    }
  }
  return { mode: null, cleanMessage: message };
}

// Usage in system prompt / CLAUDE.md:
// When the user starts a message with /brainstorm, /implement, /debug,
// /review, /teach, or /ship, switch to that mode immediately.
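A condensed, runnable version of the trigger parsing for quick verification -- same logic as `parseManualMode` above, re-declared with a reduced trigger set so the snippet stands alone:

```typescript
type Mode = 'brainstorm' | 'implement' | 'debug' | 'review' | 'teach' | 'ship';

const TRIGGERS: Record<string, Mode> = {
  '/brainstorm': 'brainstorm',
  '/implement': 'implement',
  '/debug': 'debug',
};

// Returns the explicit mode, if any, plus the message with the trigger stripped.
function parseMode(message: string): { mode: Mode | null; cleanMessage: string } {
  for (const [trigger, mode] of Object.entries(TRIGGERS)) {
    if (message.startsWith(trigger)) {
      return { mode, cleanMessage: message.slice(trigger.length).trim() };
    }
  }
  return { mode: null, cleanMessage: message };
}

console.log(parseMode('/debug my API route 500s'));
// { mode: 'debug', cleanMessage: 'my API route 500s' }
console.log(parseMode('add pagination').mode); // null
```
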
Configuration Reference
| Parameter | Type | Default | Description |
|---|---|---|---|
| default_mode | string | implement | Mode used when no signals are detected |
| auto_detect | bool | true | Automatically detect mode from user message content |
| manual_override | bool | true | Allow slash-command mode switching |
| brainstorm_min_options | int | 3 | Minimum alternatives to present in brainstorm mode |
| implement_max_explanation | int | 2 | Maximum sentences of explanation in implement mode |
| review_severity_levels | string[] | ["critical","high","medium","low"] | Severity categories for review mode |
| debug_require_reproduction | bool | true | Whether debug mode should ask for repro steps |
| ship_checklist_categories | string[] | ["code","security","performance","deploy"] | Checklist sections for ship mode |
| pec_max_cycles | int | 3 | Maximum Plan-Execute-Critic iterations |
| mode_transition_logging | bool | false | Log mode transitions for analytics |
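The reference table can be captured as a typed configuration object. A sketch with the defaults filled in -- the `ModeConfig` type name is an assumption, not something the skill defines:

```typescript
// Hypothetical typed configuration mirroring the reference table above.
interface ModeConfig {
  defaultMode: string;
  autoDetect: boolean;
  manualOverride: boolean;
  brainstormMinOptions: number;
  implementMaxExplanation: number;
  reviewSeverityLevels: string[];
  debugRequireReproduction: boolean;
  shipChecklistCategories: string[];
  pecMaxCycles: number;
  modeTransitionLogging: boolean;
}

const DEFAULTS: ModeConfig = {
  defaultMode: 'implement',
  autoDetect: true,
  manualOverride: true,
  brainstormMinOptions: 3,
  implementMaxExplanation: 2,
  reviewSeverityLevels: ['critical', 'high', 'medium', 'low'],
  debugRequireReproduction: true,
  shipChecklistCategories: ['code', 'security', 'performance', 'deploy'],
  pecMaxCycles: 3,
  modeTransitionLogging: false,
};

console.log(DEFAULTS.pecMaxCycles); // 3
```
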
Best Practices
- Let automatic detection handle 80% of cases, but always support manual override. Users should never feel locked into a mode. The `/implement` and `/debug` triggers give immediate control.
- Keep IMPLEMENT mode ruthlessly concise. The most common complaint about AI code assistants is verbosity. In implement mode, the code IS the explanation. Limit prose to 1-2 sentences maximum.
- In DEBUG mode, always state the root cause before the fix. Users who understand why something broke can prevent future occurrences. "The null check was missing because..." is more valuable than just the fix.
- REVIEW mode should acknowledge positives, not just criticize. Starting with what is done well builds trust and encourages good patterns. "The error handling in the API layer is solid" before "The input validation is missing..."
- Use the PEC cycle for any task that takes more than 10 minutes. Planning before implementing and reviewing after implementing catches issues that single-pass coding misses.
- Avoid mode blending in a single response. If the user asks to brainstorm AND implement in one message, do the brainstorm first, get confirmation, then switch to implement. Mixing modes produces muddled output.
- Calibrate SHIP mode checklists to your actual stack. A generic checklist is less useful than one tailored to your frameworks, deployment targets, and known failure points.
- Track mode usage over time to improve detection. If your team overwhelmingly uses debug mode on Mondays after weekend deployments, you might want to make debug the default on Mondays.
Troubleshooting
Problem: AI keeps explaining code in IMPLEMENT mode.
Solution: Reinforce the behavioral contract in your CLAUDE.md or system prompt. Add explicit instructions: "In IMPLEMENT mode, output code blocks followed by at most 2 sentences. No step-by-step explanations."
Problem: Mode detection picks the wrong mode.
Solution: Manual override with /mode commands is the immediate fix. Long-term, adjust keyword weights in the detection logic. Debug signals (error, bug, not working) should have higher weight since debugging is typically more urgent.
Problem: BRAINSTORM mode produces options that are too similar.
Solution: Add a constraint in the brainstorm prompt: "Ensure options are architecturally distinct. Option A should use a fundamentally different approach from Option B." Also increase the minimum options count to 4-5 for complex decisions.
Problem: PEC cycle never converges (critic keeps rejecting).
Solution: Set a maximum cycle count (3 is typical). On the final cycle, the critic should provide a "good enough with caveats" approval. Also check if the critic's standards are unrealistically high -- calibrate against your actual code quality bar.
Problem: Mode transitions feel jarring mid-conversation.
Solution: Add a brief transition sentence when switching modes: "Switching to debug mode to investigate this error." This sets the user's expectations for the changed response style.