Screenshot Reviewer Guru
A review agent that validates synthesized task lists against their source screenshots. Includes structured workflows, validation checks, and reusable patterns for UI analysis.
Reviews synthesized task lists against original screenshots and analysis results to ensure completeness, consistency, and development readiness.
When to Use This Agent
Choose this agent when you need to:
- Validate that a generated task list fully accounts for all visible UI elements, interactions, and business functions from source screenshots
- Check task list quality for consistent terminology, uniform granularity, logical hierarchy, and absence of contradictions
- Produce a formal review verdict with specific change recommendations before the task list is handed to development teams
Consider alternatives when:
- You need to generate the initial task list from screenshots rather than review an existing one (use the synthesizer agent instead)
- Your review scope is code quality or architecture review rather than requirements completeness against visual specifications
Quick Start
Configuration
name: screenshot-reviewer-guru
type: agent
category: ui-analysis
Example Invocation
claude agent:invoke screenshot-reviewer-guru "Review this task list against the original dashboard screenshots and analysis JSONs"
Example Output
Review Summary: Dashboard Task List v1.2
Completeness: NEEDS_WORK
[x] Covered: Navigation structure, user management CRUD, analytics charts, notification panel
[ ] Missing: Export functionality visible in toolbar
[ ] Missing: Pagination controls on data tables
[ ] Missing: Empty state handling for zero-data scenarios
Consistency: PASS
No terminology conflicts detected
Granularity is uniform across modules
Quality: NEEDS_WORK
Issue: Task "Handle the database stuff" is implementation-specific and vague. Rewrite as "Persist user preference selections across sessions."
Issue: Missing acceptance criteria on 4 tasks in the Reporting module.
Recommended Changes:
1. Add 3 tasks for data export (CSV, PDF, print)
2. Add pagination task for all list views
3. Rewrite 2 vague tasks with behavioral descriptions
4. Add empty state tasks for dashboard widgets
Final Verdict: NEEDS_REVISION
Core Concepts
Review Dimensions Overview
| Aspect | Details |
|---|---|
| Completeness Check | All visible UI elements accounted for, all interactions covered, edge cases (empty states, errors, loading) included |
| Consistency Check | Uniform terminology throughout, uniform task granularity, logical hierarchy, no contradictory requirements |
| Quality Check | Tasks describe what, not how, no implementation details, specific and verifiable criteria, dependencies noted |
| Usability Check | Tasks actionable by developers, sensible grouping for development sprints, clear priority, no ambiguity |
| Cross-Reference | Task list validated against original screenshots, UI analysis JSON, interaction analysis JSON, and business analysis JSON |
Review Process Architecture
+------------------+   +------------------+   +------------------+
| Screenshot(s)    |   | Analysis         |   | Task List        |
|  Original        |   | JSONs (3)        |   | Under Review     |
|  Source          |   |  UI/Interaction  |   |                  |
|                  |   |  /Business       |   |  Modules         |
+--------+---------+   +--------+---------+   |  Features        |
         |                      |             |  Subtasks        |
         v                      v             +--------+---------+
                                                       |
+------------------------------------------------------v--------+
|                    Cross-Reference Engine                      |
|                                                                |
|   Compare visual elements  <->  task coverage                  |
|   Verify analysis findings <->  task inclusion                 |
|   Check terminology        <->  consistency                    |
|   Validate granularity     <->  uniformity                     |
+-----------------------------+----------------------------------+
                              |
                              v
       +-----------------------------------------------+
       |  Review Verdict: APPROVED or NEEDS_REVISION   |
       |  + Specific change recommendations            |
       |  + Corrected task sections if needed          |
       +-----------------------------------------------+
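The verdict stage at the bottom of the diagram can be sketched in Python. This is an illustrative sketch, not the agent's actual implementation: the function name, the severity labels, and the minor-issue cutoff of 5 are all assumptions chosen to mirror the verdictThreshold values described in Configuration below.

```python
# Hypothetical sketch of verdict aggregation; names and thresholds are illustrative.
MATERIAL = "material"   # gaps that would derail implementation
MINOR = "minor"         # stylistic or edge-case nitpicks

def decide_verdict(findings, threshold="moderate"):
    """Return APPROVED or NEEDS_REVISION from (severity, message) findings."""
    material = [f for f in findings if f[0] == MATERIAL]
    minor = [f for f in findings if f[0] == MINOR]
    if threshold == "zero-defect":
        return "NEEDS_REVISION" if findings else "APPROVED"
    if threshold == "permissive":
        return "NEEDS_REVISION" if material else "APPROVED"
    # "moderate": any material gap, or a large pile of minor issues, blocks approval
    return "NEEDS_REVISION" if material or len(minor) > 5 else "APPROVED"
```

Under this sketch, the NEEDS_WORK completeness findings in the Example Output above would count as material gaps, producing the NEEDS_REVISION verdict shown there.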
Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
| strictnessLevel | string | "standard" | Review rigor: lenient (approve if usable), standard, or strict (flag all imperfections) |
| edgeCaseChecks | boolean | true | Verify task list includes empty states, error handling, loading states, and boundary conditions |
| autoCorrect | boolean | false | Automatically provide corrected task list sections alongside issue descriptions |
| terminologyGlossary | string | "" | Path to project glossary file for enforcing consistent term usage across the task list |
| verdictThreshold | string | "moderate" | Approval threshold: permissive (few issues allowed), moderate, or zero-defect |
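As a concrete illustration, the parameters above might sit in the agent's YAML configuration alongside the frontmatter shown in Quick Start. The nesting under a config key is an assumption; only the parameter names and allowed values come from the table.

```yaml
# Hypothetical configuration layout; only the parameter names are documented above.
name: screenshot-reviewer-guru
type: agent
category: ui-analysis
config:
  strictnessLevel: strict        # lenient | standard | strict
  edgeCaseChecks: true
  autoCorrect: true              # emit corrected task sections with each issue
  terminologyGlossary: docs/glossary.md
  verdictThreshold: moderate     # permissive | moderate | zero-defect
```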
Best Practices
- Walk Through Screenshots Visually Before Reading the Task List - Start by independently cataloging what you observe in the screenshots before consulting the task list. This prevents anchoring bias, where you unconsciously skip missing items because the task list does not mention them. Compare your independent catalog against the task list to identify genuine coverage gaps.
- Flag Real Issues, Not Nitpicks - Distinguish between material gaps that would cause development confusion and minor stylistic preferences. A missing feature is a material gap. Preferring "User Profile" over "Account Settings" when both are understood is a nitpick. Reviews that flag dozens of trivial issues alongside critical gaps dilute the signal and slow the review process.
- Provide the Fix Alongside the Finding - Every issue identified should include a specific corrective action. Instead of "Task 3.2 is vague," write "Task 3.2: Replace 'Handle notifications' with 'Display real-time notification badges on the header bell icon with unread count and mark-as-read on click.'" Actionable feedback eliminates interpretation guesswork.
- Verify Edge Case Coverage Systematically - For every data display element in the task list, confirm that empty states, loading states, error states, and boundary conditions (maximum items, long text truncation, zero values) have corresponding tasks. These edge cases account for significant development effort and are the most commonly omitted requirements.
- Approve When Usable, Even If Imperfect - Demanding perfection in every review cycle creates approval bottlenecks that stall development. If the task list is actionable and covers all core functionality with minor gaps in edge cases, approve with noted recommendations. Reserve NEEDS_REVISION for material completeness failures or contradictory requirements that would derail implementation.
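The systematic edge-case audit described above can be sketched as a simple coverage matrix. This is a minimal sketch under loose assumptions: elements and tasks are plain strings, and coverage is detected by naive substring matching, which a real reviewer would replace with the structured analysis JSONs.

```python
# Illustrative edge-case audit; element/task shapes and matching are assumptions.
EDGE_STATES = ["empty", "loading", "error", "boundary"]

def missing_edge_tasks(data_elements, tasks):
    """For each data-display element, list edge states with no covering task."""
    lowered = [t.lower() for t in tasks]
    gaps = {}
    for element in data_elements:
        uncovered = [
            state for state in EDGE_STATES
            # covered only if some task mentions both the element and the state
            if not any(element.lower() in t and state in t for t in lowered)
        ]
        if uncovered:
            gaps[element] = uncovered
    return gaps
```

For a "user table" element with tasks covering only its empty and loading states, the sketch would report "error" and "boundary" as gaps, matching the kind of findings shown in the Example Output.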
Common Issues
- Orphaned Features Without Corresponding Tasks - Analysis JSONs identify features that never appear in the task list, or the task list references capabilities not visible in any screenshot. Cross-reference every analysis finding against the task list and every task against visual evidence. Orphaned items indicate either analysis errors or task list omissions that need resolution.
- Inconsistent Task Granularity Across Modules - One module defines tasks at the level of "Implement user authentication" (epic-sized) while another specifies "Add placeholder text to email input" (micro-task). Normalize granularity so all tasks represent roughly equivalent development effort, typically at the feature or user story level, with subtasks for specific behaviors.
- Implementation Details Leaking into Requirements - Tasks that specify "Create a React component with useState hook" or "Use PostgreSQL JSONB column" prescribe implementation rather than describing behavior. Requirements should state what the system does from a user perspective, leaving technology decisions to the development team during implementation planning.
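The bidirectional orphan check described in the first issue above reduces to two set differences. A minimal sketch, assuming feature identifiers have already been extracted from the analysis JSONs and the task list (the extraction step and the result keys are hypothetical):

```python
# Hypothetical orphan detection; input extraction and key names are assumptions.
def find_orphans(analysis_features, task_features):
    """Cross-reference analysis findings and task coverage in both directions.

    analysis_features: set of feature ids found in the analysis JSONs.
    task_features: set of feature ids referenced by the task list.
    """
    return {
        # identified in analysis but never appearing in the task list
        "missing_from_tasks": sorted(analysis_features - task_features),
        # referenced by tasks but not grounded in any screenshot or analysis
        "unsupported_tasks": sorted(task_features - analysis_features),
    }
```

Each entry in either list is a finding to resolve: either the analysis was wrong, or the task list has an omission or an invented requirement.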