
Precision Prompt Engineering Patterns Toolkit

Enterprise-ready skill that teaches and applies prompt engineering techniques. Built for Claude Code with best practices and real-world patterns.


Prompt Engineering Patterns

A skill for designing effective AI prompts using structured patterns, techniques, and frameworks for consistent, high-quality AI outputs across various use cases.

When to Use

Choose Prompt Engineering Patterns when:

  • Designing prompts for consistent, reliable AI outputs in production systems
  • Optimizing prompts for specific tasks like classification, extraction, or generation
  • Building prompt templates with variable substitution for parameterized use
  • Implementing structured output formats (JSON, tables, specific schemas)

Consider alternatives when:

  • Simple one-off questions — just ask directly without engineering
  • Building a full AI application — use an AI framework with built-in prompting
  • Fine-tuning a model — prompt engineering may not be sufficient

Quick Start

```python
class PromptBuilder:
    def __init__(self):
        self.sections = []

    def system(self, content):
        self.sections.append({"role": "system", "content": content})
        return self

    def user(self, content):
        self.sections.append({"role": "user", "content": content})
        return self

    def assistant(self, content):
        self.sections.append({"role": "assistant", "content": content})
        return self

    def few_shot(self, examples):
        """Add few-shot examples as user/assistant pairs"""
        for ex in examples:
            self.user(ex['input'])
            self.assistant(ex['output'])
        return self

    def build(self):
        return self.sections


# Chain of Thought pattern
prompt = (PromptBuilder()
    .system("""You are a math tutor. For each problem:
1. Identify what is being asked
2. List the relevant formulas
3. Show step-by-step work
4. State the final answer clearly
Always show your reasoning before the answer.""")
    .few_shot([{
        'input': 'If a car travels 60 mph for 2.5 hours, how far does it go?',
        'output': """What is being asked: Total distance traveled
Formula: distance = speed × time
Step-by-step:
- Speed = 60 mph
- Time = 2.5 hours
- Distance = 60 × 2.5 = 150 miles
**Answer: 150 miles**"""
    }])
    .user('A train travels 85 km/h for 3 hours and 20 minutes. How far does it travel?')
    .build())
```

Core Concepts

Prompt Patterns

| Pattern | Description | Best For |
|---|---|---|
| Chain of Thought | Ask for step-by-step reasoning | Complex reasoning, math |
| Few-Shot | Provide examples of desired output | Classification, formatting |
| Role/Persona | Assign expert identity | Domain-specific tasks |
| Structured Output | Define exact output format | JSON, tables, schemas |
| Self-Consistency | Generate multiple answers, pick majority | Factual accuracy |
| Tree of Thought | Explore multiple reasoning paths | Complex decisions |
| ReAct | Interleave reasoning and action | Tool use, research |
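The Self-Consistency pattern above can be sketched as a simple majority vote over several sampled answers. In this sketch, `ask_model` is a hypothetical stand-in for whatever model call you use (it should sample with non-zero temperature):

```python
from collections import Counter

def self_consistent_answer(ask_model, prompt, n=5):
    """Sample the model n times and return the most common final answer.

    ask_model: callable taking a prompt string and returning an answer string.
    Returns the winning answer plus the fraction of samples that agreed.
    """
    answers = [ask_model(prompt).strip() for _ in range(n)]
    winner, votes = Counter(answers).most_common(1)[0]
    return winner, votes / n

# Usage with a stub model that returns a fixed sequence of sampled answers
samples = iter(["150", "150", "148", "150", "150"])
answer, agreement = self_consistent_answer(lambda p: next(samples), "Q?", n=5)
# answer == "150", agreement == 0.8
```

Low agreement ratios are a useful signal that the question is ambiguous or the prompt needs more constraints.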

Structured Output Patterns

```python
# JSON output enforcement
EXTRACTION_PROMPT = """Extract the following information from the text below.
Return ONLY a JSON object with these exact fields:
{
  "company_name": "string",
  "revenue": "number or null",
  "employees": "number or null",
  "founded_year": "number or null",
  "headquarters": "string or null",
  "industry": "string"
}

Rules:
- Use null for missing information, never guess
- Revenue should be in USD millions
- Do not include any text outside the JSON object

Text: {input_text}"""

# Classification with confidence
CLASSIFICATION_PROMPT = """Classify the following customer message into exactly one category.

Categories:
- billing: Payment, invoice, subscription, refund issues
- technical: Bugs, errors, feature not working
- account: Login, password, profile, settings
- sales: Pricing, plans, upgrades, enterprise
- other: Anything that doesn't fit above

Respond with JSON:
{
  "category": "one of the categories above",
  "confidence": "high/medium/low",
  "reasoning": "one sentence explaining the classification"
}

Message: {customer_message}"""
```
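One practical caveat: these templates mix literal JSON braces with `{input_text}`-style placeholders, so Python's `str.format()` would raise a `KeyError` on the schema braces. A minimal sketch of a targeted substitution that avoids this (not tied to any particular client library):

```python
def fill_template(template: str, **values) -> str:
    """Substitute {name} placeholders without touching literal JSON braces."""
    for key, value in values.items():
        template = template.replace("{" + key + "}", str(value))
    return template

prompt = fill_template(
    'Return JSON like {"x": 1}. Text: {input_text}',
    input_text="Acme Corp was founded in 1999.",
)
# The literal JSON braces survive; only {input_text} is replaced.
```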

Configuration

| Option | Description | Default |
|---|---|---|
| model | Target AI model | Model-dependent |
| temperature | Randomness (0 = deterministic, 1 = creative) | Task-dependent |
| max_tokens | Maximum response length | Task-dependent |
| few_shot_count | Number of examples to include | 2-3 |
| output_format | Expected response format | "text" |
| system_prompt | Base system instructions | "" |
| chain_of_thought | Enable step-by-step reasoning | false |
| validation_schema | JSON schema for output validation | null |
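The options above can be captured in a small config object. Field names mirror the table; the concrete defaults for model and temperature shown here are illustrative assumptions, since the table leaves them model- and task-dependent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PromptConfig:
    model: str = ""                      # set to your target model (no universal default)
    temperature: float = 0.2             # illustrative; 0 = deterministic, 1 = creative
    max_tokens: int = 1024               # illustrative; size to the task
    few_shot_count: int = 3              # 2-3 examples is the usual range
    output_format: str = "text"
    system_prompt: str = ""
    chain_of_thought: bool = False
    validation_schema: Optional[dict] = None

cfg = PromptConfig(chain_of_thought=True, output_format="json")
```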

Best Practices

  1. Put the most important instructions first because models pay more attention to the beginning and end of prompts — state the task, constraints, and output format at the top before providing context
  2. Use delimiters (triple backticks, XML tags, or markers) to clearly separate instructions from input data — this prevents prompt injection and helps the model distinguish between instructions and content to process
  3. Specify what NOT to do alongside positive instructions — "Do not include explanations, preamble, or markdown formatting" prevents common unwanted output patterns
  4. Use few-shot examples that demonstrate edge cases and the exact output format you want — two well-chosen examples are more effective than a paragraph of description
  5. Test prompts with adversarial inputs that might confuse or manipulate the model — include examples that test boundary conditions, ambiguous inputs, and potential injection attempts

Common Issues

Inconsistent output format across calls: The model sometimes returns JSON, sometimes text, sometimes markdown. Add explicit format instructions at both the beginning and end of the prompt, use a JSON schema validator to catch malformed responses, and implement retry logic that re-prompts with clearer instructions.
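The validate-and-retry loop described above can be sketched as follows; `call_model` is a hypothetical stand-in for your model call, and the reminder wording is illustrative:

```python
import json

def json_with_retry(call_model, prompt, max_attempts=3):
    """Call the model, parse the reply as JSON, and re-prompt on failure."""
    current = prompt
    for attempt in range(max_attempts):
        raw = call_model(current)
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            # Re-prompt with a sharper format instruction appended
            current = (prompt + "\n\nREMINDER: respond with ONLY a valid "
                       "JSON object, no prose, no markdown fences.")
    raise ValueError(f"No valid JSON after {max_attempts} attempts")

# Usage with a stub model that fails once, then complies
replies = iter(["Sure! Here is the data:", '{"category": "billing"}'])
result = json_with_retry(lambda p: next(replies), "Classify: ...")
# result == {"category": "billing"}
```

A schema validator can be slotted into the `try` block to also reject JSON that parses but has the wrong shape.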

Model ignoring instructions in long prompts: Important instructions buried in long system prompts get lower attention. Move critical constraints to the beginning, use numbered lists rather than paragraphs, and consider splitting complex prompts into multi-turn conversations where each turn has focused instructions.

Few-shot examples biasing output: The model over-indexes on the specific values in examples rather than learning the pattern. Use diverse examples that demonstrate different scenarios, explicitly state "these are examples of the format, not the only valid answers," and include examples that differ significantly from each other.
