
Prompt Optimizer Skill

Optimizes prompts for better LLM outputs using the EARS methodology (Examples, Attributes, Rules, Structure). Transforms vague prompts into precise, high-performing instructions with measurable improvement.

Skill · Community · development · v1.0.0 · MIT

Description

This skill takes existing prompts and optimizes them using the EARS methodology — Examples, Attributes, Rules, Structure. It systematically identifies weaknesses in prompts and rewrites them for dramatically better LLM outputs.

Instructions

When the user asks you to optimize a prompt, follow the EARS framework:

E — Examples

Add concrete input/output examples to anchor the model's behavior:

## Before (vague)

"Summarize this article."

## After (with examples)

"Summarize this article in 2-3 sentences. Focus on the key finding and its implications.

Example input: [300-word article about climate study]

Example output: 'A new study published in Nature found that Arctic ice loss has accelerated 3x faster than previous models predicted. The researchers analyzed 40 years of satellite data, revealing that current climate models significantly underestimate ice sheet vulnerability. This finding suggests sea level rise projections may need upward revision.'"

A — Attributes

Specify the desired attributes of the output:

Attribute   Specification
Length      2-3 sentences
Tone        Professional, objective
Format      Plain text paragraph
Audience    Technical decision-makers
Focus       Key finding + implications

R — Rules

Add explicit constraints and guardrails:

## Rules

- Do NOT include author names or publication dates in the summary
- Do NOT use phrases like "this article discusses" or "the author argues"
- Start directly with the key finding
- Use specific numbers/data when available
- If the article has no clear finding, state that explicitly

S — Structure

Define the output structure explicitly:

## Output Structure

[Key finding in one sentence]. [Supporting evidence or methodology]. [Implication or significance].

Optimization Workflow

  1. Analyze the original prompt — identify what's missing (E, A, R, or S)
  2. Score the original: rate each EARS dimension 1-5
  3. Rewrite with all four dimensions addressed
  4. Compare before/after with a test input
  5. Iterate if the output quality is still below expectations
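The scoring and iteration steps above can be sketched as a small helper. This is an illustrative sketch only; the names `EarsScore` and `needs_rewrite` are hypothetical and not part of the skill itself:

```python
from dataclasses import dataclass


@dataclass
class EarsScore:
    """Per-dimension EARS rating, each on a 1-5 scale."""
    examples: int
    attributes: int
    rules: int
    structure: int

    @property
    def total(self) -> int:
        """Combined score out of 20."""
        return self.examples + self.attributes + self.rules + self.structure

    def weakest(self) -> str:
        """Name the dimension to improve first when iterating (step 5)."""
        dims = {
            "Examples": self.examples,
            "Attributes": self.attributes,
            "Rules": self.rules,
            "Structure": self.structure,
        }
        return min(dims, key=dims.get)


def needs_rewrite(score: EarsScore, threshold: int = 16) -> bool:
    """A prompt scoring 16+/20 counts as already good and gets minor tweaks only."""
    return score.total < threshold
```

For instance, the vague blog-post prompt scored E:1 A:1 R:1 S:1 totals 4/20 and clearly needs a rewrite.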

Scoring Rubric

Examples:   1=none  2=vague  3=one example   4=2-3 examples  5=diverse examples
Attributes: 1=none  2=length only  3=+tone  4=+audience  5=all specified
Rules:      1=none  2=basic  3=edge cases  4=guardrails  5=comprehensive
Structure:  1=none  2=format hint  3=template  4=+sections  5=full schema
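One way to keep the rubric consistent when reporting scores is to store the level labels as data and look them up. A minimal sketch, where the labels are copied from the rubric above and the function name `describe_score` is hypothetical:

```python
# Rubric labels for each EARS dimension, keyed by 1-5 level.
RUBRIC = {
    "Examples":   {1: "none", 2: "vague", 3: "one example", 4: "2-3 examples", 5: "diverse examples"},
    "Attributes": {1: "none", 2: "length only", 3: "+tone", 4: "+audience", 5: "all specified"},
    "Rules":      {1: "none", 2: "basic", 3: "edge cases", 4: "guardrails", 5: "comprehensive"},
    "Structure":  {1: "none", 2: "format hint", 3: "template", 4: "+sections", 5: "full schema"},
}


def describe_score(dimension: str, level: int) -> str:
    """Format a rating with its rubric label, e.g. ('Rules', 4) -> 'Rules 4/5 (guardrails)'."""
    if level not in range(1, 6):
        raise ValueError("EARS levels run from 1 to 5")
    return f"{dimension} {level}/5 ({RUBRIC[dimension][level]})"
```

This keeps before/after reports worded identically across optimizations instead of improvising a label each time.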

Rules

  • Always show the before/after comparison
  • Include EARS scores for both original and optimized versions
  • Test the optimized prompt with at least one example input
  • Keep the optimized prompt concise — longer is not always better
  • Preserve the user's intent — do not change what the prompt is trying to do
  • If the original prompt is already good (score 16+/20), say so and suggest minor tweaks only
  • For system prompts, focus heavily on Rules and Structure
  • For user-facing prompts, focus heavily on Examples and Attributes

Examples

User: Optimize this prompt: "Write me a blog post about AI" Action: Score (E:1 A:1 R:1 S:1 = 4/20), add topic focus, tone, length, audience, structure, and examples. Rewrite to score 16+.

User: Make this system prompt better: "You are a helpful assistant." Action: Add persona definition, capability boundaries, output format rules, and interaction examples.

User: My prompt gives inconsistent results, fix it. Action: Identify which EARS dimension is weakest and add constraints to reduce output variance.
