# Master Prompt Suite
Production-grade collection of battle-tested prompt templates for common LLM tasks — code review, content generation, data extraction, classification, and conversational agents — with built-in evaluation and customization.
## When to Use
Use these templates when:
- You need proven prompt patterns for common LLM tasks
- You are starting a new LLM-powered feature and want a strong baseline
- You are building a prompt library for your organization
- You need role-based prompts ("act as expert X")
Customize further when:
- Domain-specific requirements differ from general templates
- Your evaluation shows room for improvement on specific tasks
- Production performance needs don't match template defaults
## Quick Start
### Code Review Prompt

````markdown
# Role
You are a staff-level software engineer conducting a thorough code review.

# Task
Review the code below for:
1. Bugs and logic errors
2. Security vulnerabilities (OWASP Top 10)
3. Performance issues
4. Code style and maintainability

# Code
```{language}
{code}
```

# Output Format
For each finding, provide:
- Severity: Critical / Major / Minor / Suggestion
- Line: Line number or range
- Issue: What's wrong
- Fix: Recommended solution

End with an Overall Assessment (1-2 sentences) and Score (1-10).
````
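Filling a template like this is plain string substitution. The sketch below, with illustrative names (`render_prompt` is not part of any library), shows one way to render the placeholders with Python's `str.format`:

```python
# Build the template programmatically; the fence variable avoids
# hard-coding backticks in the middle of this example.
fence = "`" * 3

CODE_REVIEW_TEMPLATE = (
    "# Role\n"
    "You are a staff-level software engineer conducting a thorough code review.\n\n"
    "# Code\n"
    f"{fence}{{language}}\n{{code}}\n{fence}"
)

def render_prompt(template: str, **values: str) -> str:
    """Fill {placeholder} slots; raises KeyError if a slot is unset."""
    return template.format(**values)

prompt = render_prompt(
    CODE_REVIEW_TEMPLATE,
    language="python",
    code="def add(a, b):\n    return a + b",
)
```

Because `format` raises on missing keys, an unfilled placeholder fails loudly at render time instead of leaking `{code}` into a live prompt.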
### Data Extraction Prompt
```markdown
# Role
You are a data extraction specialist.
# Task
Extract structured information from the following text.
# Text
{input_text}
# Required Fields
{field_definitions}
# Rules
- If a field is not found in the text, use null
- Dates should be in ISO 8601 format (YYYY-MM-DD)
- Numbers should be numeric values, not strings
- Extract exact text for string fields, do not paraphrase
# Output
Respond with valid JSON matching this schema:
{json_schema}
```
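The rules above (null for missing fields, ISO 8601 dates, numeric values) are easy to enforce on the model's output. A minimal validation sketch, with illustrative field names and a hypothetical `validate_extraction` helper:

```python
import json
import re

DATE_RE = re.compile(r"^\d{4}-\d{2}-\d{2}$")  # ISO 8601 date shape

def validate_extraction(raw: str, required_fields: list[str]) -> dict:
    """Parse model output as JSON and check the required fields exist."""
    data = json.loads(raw)  # raises json.JSONDecodeError if malformed
    for field in required_fields:
        if field not in data:
            raise KeyError(f"missing field: {field}")
    return data

# Illustrative model output, not a real extraction result.
raw_output = '{"name": "Acme Corp", "founded": "1999-04-12", "employees": 250}'
record = validate_extraction(raw_output, ["name", "founded", "employees"])
```

Checks like `DATE_RE.match(record["founded"])` and `isinstance(record["employees"], int)` can then gate whether the output is accepted or retried.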
### Classification Prompt

```markdown
# Task
Classify the following text into exactly one category.

# Categories
{categories_with_descriptions}

# Rules
- Choose the single most appropriate category
- If uncertain between two categories, choose the more specific one
- Respond with ONLY the category name, nothing else

# Text
{input_text}

# Category:
```
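The "respond with ONLY the category name" rule still needs defensive parsing, since models sometimes add punctuation or casing. A minimal sketch (the category set and `parse_category` name are illustrative):

```python
CATEGORIES = {"billing", "technical", "account", "other"}

def parse_category(model_output: str, categories: set[str]) -> str:
    """Normalize a raw completion and reject anything outside the set."""
    label = model_output.strip().strip(".").lower()
    if label not in categories:
        raise ValueError(f"unexpected category: {model_output!r}")
    return label

result = parse_category("  Billing.\n", CATEGORIES)
```

Raising on out-of-set labels makes it straightforward to log failures or fall back to a retry with a stricter prompt.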
## Core Concepts

### Template Categories
| Category | Templates | Use Case |
|---|---|---|
| Code Quality | Review, refactor, explain, debug | Development workflows |
| Content | Summarize, write, translate, edit | Content generation |
| Data | Extract, classify, transform, validate | Data processing |
| Analysis | Compare, evaluate, research, report | Decision support |
| Conversation | Support agent, tutor, advisor | Interactive applications |
### Customization Points

Each template has marked customization points:

- `{role}` → Expertise persona (e.g., "security expert" vs "UX designer")
- `{context}` → Domain-specific background information
- `{constraints}` → Task-specific rules and limitations
- `{format}` → Output structure (JSON, markdown, plain text)
- `{examples}` → Few-shot demonstrations for your domain
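One convenient pattern is to back every customization point with a default, so a caller only overrides what differs for their domain. A sketch, with all names and defaults illustrative:

```python
# Defaults for each customization point; overridden per call.
DEFAULTS = {
    "role": "helpful assistant",
    "constraints": "Follow the task exactly.",
    "format": "markdown",
}

TEMPLATE = (
    "# Role\nYou are a {role}.\n\n"
    "# Constraints\n{constraints}\n\n"
    "# Output\nUse {format}."
)

def customize(template: str, **overrides: str) -> str:
    """Merge overrides over defaults, then fill the template."""
    values = {**DEFAULTS, **overrides}
    return template.format(**values)

prompt = customize(TEMPLATE, role="security expert")
```

Unset points keep their proven defaults, which makes team-shared templates safer to reuse.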
### Evaluation Rubric
| Metric | Measurement | Target |
|---|---|---|
| Format compliance | Output matches expected structure | > 95% |
| Content accuracy | Correct information in output | > 90% |
| Consistency | Same input → similar output | > 85% |
| Latency | Time to generate response | < 5s |
| Token efficiency | Output tokens / useful content ratio | > 70% |
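The rubric's format-compliance metric can be computed mechanically over a batch of outputs. A minimal sketch for JSON-format templates (the sample outputs and the 95% threshold from the table above are illustrative inputs, not real measurements):

```python
import json

def format_compliance(outputs: list[str]) -> float:
    """Fraction of outputs that parse as valid JSON."""
    ok = 0
    for out in outputs:
        try:
            json.loads(out)
            ok += 1
        except ValueError:  # json.JSONDecodeError subclasses ValueError
            pass
    return ok / len(outputs)

outputs = ['{"a": 1}', '{"a": 2}', 'not json', '{"a": 3}']
score = format_compliance(outputs)
passes_target = score > 0.95  # rubric target: > 95%
```

The same loop shape works for markdown structure checks; only the per-output validator changes.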
## Configuration
| Parameter | Description |
|---|---|
| `template_name` | Which template to use |
| `role` | Expert persona for the model |
| `temperature` | 0.0 for deterministic, 0.7+ for creative |
| `max_tokens` | Response length limit |
| `few_shot_examples` | Domain-specific examples to include |
| `output_format` | JSON, markdown, or plain text |
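In code, these parameters map naturally onto a small typed config object. A sketch only; the class name and default values are illustrative, not a published API:

```python
from dataclasses import dataclass, field

@dataclass
class TemplateConfig:
    template_name: str
    role: str = "expert assistant"
    temperature: float = 0.0          # 0.0 deterministic, 0.7+ creative
    max_tokens: int = 1024
    few_shot_examples: list = field(default_factory=list)
    output_format: str = "json"       # "json", "markdown", or "plain"

cfg = TemplateConfig(template_name="data_extraction", temperature=0.2)
```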
## Best Practices
- Start with a template, then customize — don't build prompts from scratch when a proven pattern exists
- Add domain-specific examples — generic templates improve dramatically with 2-3 relevant examples
- Test templates on your actual data — performance varies significantly across domains
- Monitor template performance — track accuracy and format compliance in production
- Share templates across teams — standardized prompts reduce inconsistency
- Include negative examples — show the model what NOT to do for better compliance
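The last two practices combine well: a few-shot section that pairs correct outputs with explicitly labeled counter-examples. A sketch with illustrative names (`build_few_shot` is not part of any library):

```python
def build_few_shot(positive: list, negative: list) -> str:
    """Render (input, output) pairs as a few-shot prompt section,
    labeling negative pairs as behavior to avoid."""
    parts = []
    for text, out in positive:
        parts.append(f"Input: {text}\nCorrect output: {out}")
    for text, out in negative:
        parts.append(f"Input: {text}\nDo NOT respond like this: {out}")
    return "\n\n".join(parts)

section = build_few_shot(
    positive=[("Refund request for order 123", "billing")],
    negative=[("Refund request for order 123", "The category is billing.")],
)
```

The negative pair here shows the classic failure mode for classification templates: a full sentence where a bare label was required.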
## Common Issues
**Template output doesn't match expected format:** Add an explicit example of the complete expected output. Move the format specification to the end of the prompt. Use XML tags or JSON schemas for structure enforcement.

**Generic templates underperform on domain-specific tasks:** Add domain context to the role section. Include 2-3 few-shot examples from your domain. Adjust constraints to match your specific requirements.

**Templates too long for context window:** Remove redundant instructions. Compress few-shot examples. Split into a system prompt (template) plus a user prompt (input and task-specific instructions).
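The system/user split can be sketched in the common chat-API message shape (`{"role": ..., "content": ...}`); the template text and function name below are illustrative:

```python
# Stable instructions live in the system message; per-request input
# goes in the user message, so the template is not resent verbatim
# inside every user turn.
SYSTEM_TEMPLATE = (
    "You are a data extraction specialist. "
    "Respond with valid JSON. Use null for missing fields."
)

def build_messages(input_text: str, task_note: str = "") -> list:
    user = input_text if not task_note else f"{task_note}\n\n{input_text}"
    return [
        {"role": "system", "content": SYSTEM_TEMPLATE},
        {"role": "user", "content": user},
    ]

messages = build_messages("Acme Corp was founded in 1999.")
```

Besides saving tokens, the split lets many providers cache the fixed system portion across requests.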