Master Prompt Suite

Enterprise-grade skill for prompt control, structured output, regex constraints, and grammar-based generation. Includes structured workflows, validation checks, and reusable patterns for AI research.

Skill · Cliptics · AI research · v1.0.0 · MIT

Production-grade collection of battle-tested prompt templates for common LLM tasks — code review, content generation, data extraction, classification, and conversational agents — with built-in evaluation and customization.

When to Use

Use these templates when:

  • You need proven prompt patterns for common LLM tasks
  • You're starting a new LLM-powered feature and want a strong baseline
  • You're building a prompt library for your organization
  • You need role-based prompts ("act as expert X")

Customize further when:

  • Domain-specific requirements differ from general templates
  • Your evaluation shows room for improvement on specific tasks
  • Production performance needs don't match template defaults

Quick Start

Code Review Prompt

````markdown
# Role
You are a staff-level software engineer conducting a thorough code review.

# Task
Review the code below for:
1. Bugs and logic errors
2. Security vulnerabilities (OWASP Top 10)
3. Performance issues
4. Code style and maintainability

# Code
```{language}
{code}
```

# Output Format
For each finding, provide:
- Severity: Critical / Major / Minor / Suggestion
- Line: Line number or range
- Issue: What's wrong
- Fix: Recommended solution

End with an Overall Assessment (1-2 sentences) and Score (1-10).
````
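Filling a template's `{placeholder}` markers can be sketched in a few lines of Python. The `CODE_REVIEW_TEMPLATE` string and `render_prompt` helper below are illustrative, not part of the suite's API; the template is abbreviated to its first sections.

```python
# Minimal sketch: substitute {placeholder} fields into a prompt template.
# Names here are illustrative, not part of the suite's API.
CODE_REVIEW_TEMPLATE = (
    "# Role\n"
    "You are a staff-level software engineer conducting a thorough code review.\n\n"
    "# Code\n"
    "{code}"
)

def render_prompt(template: str, **fields: str) -> str:
    """Substitute {field} markers; str.format raises KeyError if one is missing."""
    return template.format(**fields)

prompt = render_prompt(CODE_REVIEW_TEMPLATE, code="def f():\n    return 1")
```

Because `str.format` raises on missing fields, an unfilled customization point fails loudly at render time instead of reaching the model.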


Data Extraction Prompt

```markdown
# Role
You are a data extraction specialist.

# Task
Extract structured information from the following text.

# Text
{input_text}

# Required Fields
{field_definitions}

# Rules
- If a field is not found in the text, use null
- Dates should be in ISO 8601 format (YYYY-MM-DD)
- Numbers should be numeric values, not strings
- Extract exact text for string fields, do not paraphrase

# Output
Respond with valid JSON matching this schema:
{json_schema}
```

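The template's rules (missing fields become null, ISO 8601 dates) can be enforced on the model's reply in post-processing. A hedged sketch, assuming a JSON reply; the `invoice_date` field name is invented for illustration:

```python
import json
from datetime import date

def parse_extraction(reply: str, required_fields: list[str]) -> dict:
    """Parse the model's JSON reply and enforce the template's rules:
    absent fields become null (None); date fields must be ISO 8601."""
    data = json.loads(reply)
    for field in required_fields:
        data.setdefault(field, None)          # rule: field not found -> null
    if data.get("invoice_date") is not None:  # 'invoice_date' is illustrative
        date.fromisoformat(data["invoice_date"])  # raises on non-ISO dates
    return data

result = parse_extraction('{"invoice_date": "2024-03-01", "total": 42.5}',
                          ["invoice_date", "total", "vendor"])
```

Validating on the consumer side catches format drift early, before malformed extractions reach downstream systems.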
Classification Prompt

```markdown
# Task
Classify the following text into exactly one category.

# Categories
{categories_with_descriptions}

# Rules
- Choose the single most appropriate category
- If uncertain between two categories, choose the more specific one
- Respond with ONLY the category name, nothing else

# Text
{input_text}

# Category:
```
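Since the template asks for only the category name, the reply can be validated against the known label set. A minimal sketch; the category names are invented for illustration:

```python
CATEGORIES = {"billing", "technical", "account"}  # illustrative label set

def parse_category(reply: str, categories: set[str]) -> str:
    """The template requests ONLY the category name: strip whitespace
    and reject anything outside the known label set."""
    label = reply.strip()
    if label not in categories:
        raise ValueError(f"unexpected label: {label!r}")
    return label
```

Rejecting unknown labels turns occasional model chatter ("The category is billing.") into a visible error rather than a silent misclassification.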

Core Concepts

Template Categories

| Category | Templates | Use Case |
|---|---|---|
| Code Quality | Review, refactor, explain, debug | Development workflows |
| Content | Summarize, write, translate, edit | Content generation |
| Data | Extract, classify, transform, validate | Data processing |
| Analysis | Compare, evaluate, research, report | Decision support |
| Conversation | Support agent, tutor, advisor | Interactive applications |

Customization Points

Each template has marked customization points:

- `{role}` → Expertise persona (e.g., "security expert" vs "UX designer")
- `{context}` → Domain-specific background information
- `{constraints}` → Task-specific rules and limitations
- `{format}` → Output structure (JSON, markdown, plain text)
- `{examples}` → Few-shot demonstrations for your domain
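Filling these customization points is ordinary string formatting. A sketch, with an abbreviated base template and made-up domain values:

```python
# Abbreviated base template with three customization points (illustrative).
BASE_TEMPLATE = "# Role\nYou are {role}.\n\n# Context\n{context}\n\n# Task\n{task}"

security_review = BASE_TEMPLATE.format(
    role="a security expert",                             # {role}
    context="Payment-processing backend in PCI-DSS scope",  # {context}
    task="Review the diff below for injection risks",       # {task}
)
```

The same base template then yields a UX-review variant just by swapping the `{role}` and `{task}` values.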

Evaluation Rubric

| Metric | Measurement | Target |
|---|---|---|
| Format compliance | Output matches expected structure | > 95% |
| Content accuracy | Correct information in output | > 90% |
| Consistency | Same input → similar output | > 85% |
| Latency | Time to generate response | < 5s |
| Token efficiency | Output tokens / useful content ratio | > 70% |
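The format-compliance row can be measured mechanically. A minimal sketch, assuming JSON output is the expected structure; for markdown or plain-text formats you would swap in a different check:

```python
import json

def format_compliance(outputs: list[str]) -> float:
    """Fraction of model outputs that parse as JSON -- one concrete way
    to measure 'format compliance' (target: > 0.95)."""
    ok = 0
    for out in outputs:
        try:
            json.loads(out)
            ok += 1
        except ValueError:  # json.JSONDecodeError subclasses ValueError
            pass
    return ok / len(outputs)
```

Run it over a held-out batch of real inputs per template; a score below target usually means the format specification should move to the end of the prompt or gain an explicit example.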

Configuration

| Parameter | Description |
|---|---|
| `template_name` | Which template to use |
| `role` | Expert persona for the model |
| `temperature` | 0.0 for deterministic, 0.7+ for creative |
| `max_tokens` | Response length limit |
| `few_shot_examples` | Domain-specific examples to include |
| `output_format` | JSON, markdown, or plain text |
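One way to carry these parameters through your code is a small config object. A sketch mirroring the table above; the defaults shown are assumptions, not values defined by the suite:

```python
from dataclasses import dataclass, field

@dataclass
class TemplateConfig:
    """Field names mirror the parameter table; defaults are illustrative."""
    template_name: str
    role: str = "expert assistant"
    temperature: float = 0.0        # 0.0 deterministic, 0.7+ creative
    max_tokens: int = 1024
    few_shot_examples: list = field(default_factory=list)
    output_format: str = "json"     # "json", "markdown", or "plain"

cfg = TemplateConfig("classification")
```

A typed config makes it harder to, say, pass a creative temperature into the deterministic extraction template by accident.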

Best Practices

  1. Start with a template, then customize — don't build prompts from scratch when a proven pattern exists
  2. Add domain-specific examples — generic templates improve dramatically with 2-3 relevant examples
  3. Test templates on your actual data — performance varies significantly across domains
  4. Monitor template performance — track accuracy and format compliance in production
  5. Share templates across teams — standardized prompts reduce inconsistency
  6. Include negative examples — show the model what NOT to do for better compliance

Common Issues

Template output doesn't match expected format: Add an explicit example of the complete expected output. Move format specification to the end of the prompt. Use XML tags or JSON schemas for structure enforcement.

Generic templates underperform on domain-specific tasks: Add domain context to the role section. Include 2-3 few-shot examples from your domain. Adjust constraints to match your specific requirements.

Templates too long for context window: Remove redundant instructions. Compress few-shot examples. Split into system prompt (template) + user prompt (input + task-specific instructions).
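The system/user split described above can be sketched as building a chat-message list; the `{'role', 'content'}` dict shape follows the common chat-completion convention, and the function name is illustrative:

```python
def split_prompt(template: str, user_input: str) -> list[dict]:
    """Put the reusable template in the system turn and only the
    per-request input in the user turn -- the context-window fix above."""
    return [
        {"role": "system", "content": template},
        {"role": "user", "content": user_input},
    ]

messages = split_prompt("You are a classifier. Respond with one label.",
                        "Text: my invoice is wrong")
```

Beyond saving tokens, keeping the template in the system turn lets providers cache it across requests while only the short user turn varies.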
