# Prompt Engineering Dynamic
Adaptive prompt engineering techniques that adjust strategy based on task complexity, model capabilities, and runtime feedback — enabling self-optimizing prompt systems.
## When to Use
Use dynamic prompting when:
- Task complexity varies widely across inputs (simple to multi-step reasoning)
- Need to optimize token costs by using simpler prompts when possible
- Building systems that adapt to different model versions
- Want automatic fallback strategies when initial prompts fail
Use static prompts when:
- Task complexity is consistent across inputs
- Token budget is not a concern
- Simplicity of implementation matters more than optimization
## Quick Start

### Complexity-Based Routing
```python
class DynamicPromptRouter:
    def __init__(self, llm_client):
        self.client = llm_client

    def classify_complexity(self, query):
        """Estimate task complexity to select a prompt strategy."""
        # Simple heuristics
        word_count = len(query.split())
        has_code = "```" in query or "def " in query
        multi_step = any(w in query.lower() for w in ["and then", "after that", "finally"])
        if word_count < 20 and not has_code and not multi_step:
            return "simple"
        elif has_code or multi_step:
            return "complex"
        else:
            return "moderate"

    def get_prompt(self, query, context=None):
        complexity = self.classify_complexity(query)
        if complexity == "simple":
            return self._zero_shot(query)
        elif complexity == "moderate":
            return self._few_shot(query, context)
        else:
            return self._chain_of_thought(query, context)

    def _zero_shot(self, query):
        return f"Answer concisely: {query}"

    def _few_shot(self, query, context):
        # _select_relevant_examples is assumed to be provided elsewhere
        # (e.g. embedding-based retrieval from the example pool)
        examples = self._select_relevant_examples(query)
        return f"Follow these examples:\n{examples}\n\nNow answer: {query}"

    def _chain_of_thought(self, query, context):
        return f"""Think through this step by step.

Context: {context or 'None provided'}

Question: {query}

Step 1: Identify what's being asked
Step 2: Break down into sub-problems
Step 3: Solve each sub-problem
Step 4: Combine into final answer

Reasoning:"""
```
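To sanity-check the routing thresholds without wiring up an LLM client, the heuristics can be restated as a standalone function (a sketch mirroring `classify_complexity` above; the example queries are illustrative):

```python
def classify_complexity(query: str) -> str:
    """Standalone mirror of DynamicPromptRouter.classify_complexity."""
    word_count = len(query.split())
    has_code = "```" in query or "def " in query
    multi_step = any(w in query.lower() for w in ["and then", "after that", "finally"])
    if word_count < 20 and not has_code and not multi_step:
        return "simple"
    elif has_code or multi_step:
        return "complex"
    return "moderate"

print(classify_complexity("What is a prompt?"))                      # simple
print(classify_complexity("Refactor this: def f(x): return x * 2"))  # complex
```

Logging these classifications next to actual task outcomes is the quickest way to tell whether the thresholds fit your traffic.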
### Self-Correcting Prompts
```python
class SelfCorrectingPrompt:
    def __init__(self, llm_client, max_retries=2):
        self.client = llm_client
        self.max_retries = max_retries

    def execute(self, prompt, validator_fn):
        response = self.client.complete(prompt)
        for attempt in range(self.max_retries):
            is_valid, error = validator_fn(response)
            if is_valid:
                return response
            # Self-correct with feedback
            correction_prompt = f"""Your previous response had an issue: {error}

Original prompt: {prompt}
Your response: {response}

Please fix the issue and provide a corrected response."""
            response = self.client.complete(correction_prompt)
        return response  # Return best attempt
```
## Core Concepts

### Adaptive Strategy Selection
| Input Complexity | Strategy | Token Cost | Accuracy |
|---|---|---|---|
| Simple | Zero-shot | Low | Good |
| Moderate | Few-shot (2-3 examples) | Medium | Better |
| Complex | Chain-of-thought | High | Best |
| Critical | Self-consistency (3-5 runs) | Very high | Highest |
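For the critical tier, self-consistency can be sketched as a majority vote over repeated samples. The `complete` callable here is a placeholder for any LLM call; in practice you would sample at a nonzero temperature so the runs actually differ:

```python
from collections import Counter

def self_consistency(complete, prompt, n_runs=3):
    """Run the same prompt n_runs times and keep the majority answer.

    Returns the winning answer plus an agreement ratio, which doubles
    as a rough confidence signal for the result.
    """
    answers = [complete(prompt) for _ in range(n_runs)]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / n_runs

# Toy stand-in for a sampled LLM call
replies = iter(["42", "41", "42"])
answer, agreement = self_consistency(lambda p: next(replies), "What is 6 * 7?")
print(answer)  # 42
```

A low agreement ratio is itself useful: it can trigger escalation to a stronger model or a human review.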
### Feedback Loop Architecture
```
Input → Complexity Classifier → Strategy Selector → Prompt Builder
                                                         |
                                                   LLM Response
                                                         |
                                                     Validator
                                                      /     \
                                                   Pass     Fail
                                                    |         |
                                                  Return   Retry with
                                                           correction
```
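The validator node in this loop is just a callable returning `(is_valid, error)`. As one concrete sketch, a validator for JSON-formatted output might look like this (the function name is illustrative, not part of any required API):

```python
import json

def json_validator(response: str):
    """Validator in the (is_valid, error) shape the retry loop expects."""
    try:
        json.loads(response)
        return True, None
    except json.JSONDecodeError as e:
        # The error string is fed back to the model as correction guidance
        return False, f"Response is not valid JSON: {e}"

print(json_validator('{"status": "ok"}'))  # (True, None)
ok, err = json_validator("not json")
print(ok)  # False
```

Because the error message is injected into the correction prompt, specific, actionable messages ("missing closing brace at position 42") correct faster than generic ones ("invalid output").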
## Configuration
| Parameter | Default | Description |
|---|---|---|
| `complexity_threshold_simple` | 20 | Word count below this → simple |
| `complexity_threshold_complex` | 50 | Word count above this → complex |
| `max_retries` | 2 | Self-correction attempts |
| `fallback_strategy` | `"chain_of_thought"` | Strategy when classifier is uncertain |
| `example_pool_size` | 50 | Available few-shot examples |
| `similarity_model` | `"all-MiniLM-L6-v2"` | Embedding model for example selection |
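One way to carry these parameters through the code is a small dataclass. The field names below follow the table, but the container itself is an illustrative sketch, not part of any required API:

```python
from dataclasses import dataclass

@dataclass
class RouterConfig:
    """Bundles the tunable parameters from the configuration table."""
    complexity_threshold_simple: int = 20
    complexity_threshold_complex: int = 50
    max_retries: int = 2
    fallback_strategy: str = "chain_of_thought"
    example_pool_size: int = 50
    similarity_model: str = "all-MiniLM-L6-v2"

# Override only what you need; everything else keeps its default
cfg = RouterConfig(max_retries=3)
print(cfg.max_retries, cfg.fallback_strategy)  # 3 chain_of_thought
```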
## Best Practices
- Start with static prompts and add dynamic routing only when you have data showing varying performance across complexity levels
- Log complexity classifications to validate your routing heuristics against actual outcomes
- Use the simplest effective strategy — zero-shot works for 60-70% of typical queries
- Set retry budgets — self-correction costs 2-3x tokens per retry, cap at 2 retries
- A/B test dynamic vs static — measure if the complexity is worth the engineering effort
- Cache similar queries — if the same complexity level is hit repeatedly, cache the strategy
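The caching advice above can be as simple as memoizing the routing decision, assuming classification is deterministic for a normalized query (the thresholds here mirror the configuration defaults):

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def cached_strategy(normalized_query: str) -> str:
    """Memoized routing decision keyed on the normalized query text."""
    word_count = len(normalized_query.split())
    if word_count < 20:
        return "zero_shot"
    elif word_count < 50:
        return "few_shot"
    return "chain_of_thought"

cached_strategy("what is rag")  # computed
cached_strategy("what is rag")  # served from cache
print(cached_strategy.cache_info().hits)  # 1
```

Normalizing the query (lowercasing, stripping whitespace) before lookup raises the hit rate; if classification uses an LLM or embeddings, the cache also avoids repeated classification cost.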
## Common Issues
**Complexity classifier routes too many queries to expensive strategies:** Raise the complexity thresholds. Add more specific heuristics. Use a lightweight model to classify instead of rule-based heuristics.

**Self-correction loops without improving:** Add diversity between retries (increase temperature). Change the correction prompt structure. Set a quality floor: if the first attempt is close enough, accept it.

**Dynamic routing adds latency:** Pre-compute complexity classifications in batch. Use cached strategies for known query patterns. Run classification in parallel with prompt preparation.
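Running classification in parallel with prompt preparation is straightforward with a thread pool; the two helpers below are simplified stand-ins for the real classifier and context builder:

```python
from concurrent.futures import ThreadPoolExecutor

def classify(query):
    """Stand-in for the complexity classifier."""
    return "simple" if len(query.split()) < 20 else "complex"

def prepare_context(query):
    """Stand-in for retrieval / few-shot example selection."""
    return f"context for: {query}"

query = "Summarize this ticket"
with ThreadPoolExecutor() as pool:
    # Both tasks start immediately; total latency is max(), not sum()
    complexity_f = pool.submit(classify, query)
    context_f = pool.submit(prepare_context, query)
    complexity, context = complexity_f.result(), context_f.result()
print(complexity)  # simple
```

This only pays off when both steps do real I/O or model calls; for pure rule-based heuristics the classification is already near-free.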