
Advisor Code Reviewer

Streamline your code review workflow with this agent: it analyzes changes and conducts comprehensive quality checks. Includes structured workflows, validation checks, and reusable patterns for development tools.

By AgentCliptics · development tools · v1.0.0 · MIT


A comprehensive code review agent that analyzes pull requests, checks code quality, and generates actionable review reports with findings prioritized by severity and impact on maintainability, security, and performance.

When to Use This Agent

Choose Advisor Code Reviewer when:

  • Reviewing pull requests for code quality, correctness, and maintainability
  • Conducting pre-merge quality checks on feature branches
  • Analyzing code for security vulnerabilities and anti-patterns
  • Generating standardized code review reports for team visibility
  • Establishing code review checklists and quality gates

Consider alternatives when:

  • Reviewing system-level architecture decisions (use an architecture reviewer agent)
  • Running automated tests (use a test engineering agent)
  • Fixing specific bugs (use a debugging agent)

Quick Start

```yaml
# .claude/agents/advisor-code-reviewer.yml
name: Advisor Code Reviewer
description: Analyze code quality and generate review reports
model: claude-sonnet
tools:
  - Read
  - Glob
  - Grep
  - Bash
```

Example invocation:

```bash
claude "Review the changes in this PR branch (feature/user-auth) against main, focusing on security, error handling, and test coverage"
```

Core Concepts

Review Checklist Matrix

| Category | Checks | Severity |
| --- | --- | --- |
| Correctness | Logic errors, off-by-one, null handling | Critical |
| Security | Injection, auth bypasses, data exposure | Critical |
| Performance | N+1 queries, missing indexes, memory leaks | High |
| Error Handling | Unhandled rejections, swallowed errors, missing retries | High |
| Maintainability | Naming, complexity, DRY violations, dead code | Medium |
| Testing | Coverage gaps, flaky tests, missing edge cases | Medium |
| Style | Formatting, conventions, documentation | Low |
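The Severity column maps naturally onto the `severity_filter` setting described under Configuration; a minimal sketch in TypeScript (the `Finding` shape and `filterFindings` helper are assumptions, not part of the agent's documented interface):

```typescript
type Severity = 'low' | 'medium' | 'high' | 'critical';

// Rank severities so findings can be compared against a minimum threshold.
const rank: Record<Severity, number> = { low: 0, medium: 1, high: 2, critical: 3 };

interface Finding {
  category: string;   // e.g. "Security" — matches the matrix above
  message: string;
  severity: Severity;
}

// Keep only findings at or above the configured minimum severity.
function filterFindings(findings: Finding[], min: Severity): Finding[] {
  return findings.filter(f => rank[f.severity] >= rank[min]);
}
```

With `severity_filter: medium`, style nits are suppressed while security and performance findings pass through.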

Review Report Format

````markdown
## Code Review: feature/user-auth

### Summary
- Files changed: 12
- Lines added: 458, deleted: 102
- Critical issues: 2
- Improvements suggested: 5

### Critical Issues

#### 1. SQL Injection in User Search (src/routes/users.ts:47)

```typescript
// VULNERABLE
const query = `SELECT * FROM users WHERE name LIKE '%${searchTerm}%'`;

// FIXED
const query = `SELECT * FROM users WHERE name LIKE $1`;
const result = await db.query(query, [`%${searchTerm}%`]);
```

Impact: Attacker can extract or modify any database data.
Fix: Use parameterized queries for all user input.

#### 2. Missing Authentication on Admin Endpoint (src/routes/admin.ts:12)

The `/api/admin/users` endpoint lacks the `requireAuth` middleware. Any
unauthenticated request can list all user data.
````


Automated Quality Scoring

```typescript
interface ReviewScore {
  correctness: number;     // 0-10
  security: number;        // 0-10
  performance: number;     // 0-10
  errorHandling: number;   // 0-10
  maintainability: number; // 0-10
  testCoverage: number;    // 0-10
  overall: number;         // Weighted average
  verdict: 'approve' | 'request_changes' | 'needs_discussion';
}

// Scoring weights
const weights = {
  correctness: 0.25,
  security: 0.20,
  performance: 0.15,
  errorHandling: 0.15,
  maintainability: 0.15,
  testCoverage: 0.10,
};

// Auto-verdict rules
// overall >= 7.0 → approve
// any critical issue → request_changes
// overall 5.0-6.9 → needs_discussion
// overall < 5.0 → request_changes
```
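The weights and verdict rules above can be combined into a single scoring helper; a minimal sketch (the function name and rounding to one decimal place are assumptions):

```typescript
type Verdict = 'approve' | 'request_changes' | 'needs_discussion';

interface CategoryScores {
  correctness: number;
  security: number;
  performance: number;
  errorHandling: number;
  maintainability: number;
  testCoverage: number;
}

// Scoring weights from the ReviewScore definition above; they sum to 1.0.
const weights: CategoryScores = {
  correctness: 0.25,
  security: 0.20,
  performance: 0.15,
  errorHandling: 0.15,
  maintainability: 0.15,
  testCoverage: 0.10,
};

function scoreReview(
  scores: CategoryScores,
  hasCriticalIssue: boolean
): { overall: number; verdict: Verdict } {
  // Weighted average across categories, rounded to one decimal place.
  const overall =
    Math.round(
      (Object.keys(weights) as (keyof CategoryScores)[]).reduce(
        (sum, k) => sum + scores[k] * weights[k],
        0
      ) * 10
    ) / 10;

  // Verdict rules: any critical issue forces request_changes,
  // regardless of the numeric score.
  let verdict: Verdict;
  if (hasCriticalIssue || overall < 5.0) verdict = 'request_changes';
  else if (overall >= 7.0) verdict = 'approve';
  else verdict = 'needs_discussion';

  return { overall, verdict };
}
```

Putting the critical-issue check first mirrors the rule that no score can outweigh an unresolved security or correctness finding.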

Configuration

| Parameter | Description | Default |
| --- | --- | --- |
| review_scope | What to review (diff, full-files, repository) | diff |
| severity_filter | Minimum severity to report (low, medium, high, critical) | medium |
| focus_areas | Prioritized review areas | All categories |
| language_rules | Language-specific quality rules | Auto-detect |
| max_findings | Maximum findings per category | 5 |
| output_format | Report format (markdown, json, github-comments) | markdown |
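These parameters might be supplied alongside the Quick Start agent definition; the `config:` block below is an assumed layout for illustration, not a documented schema:

```yaml
# .claude/agents/advisor-code-reviewer.yml (hypothetical config section)
config:
  review_scope: diff
  severity_filter: medium
  focus_areas: [security, correctness, error-handling]
  language_rules: auto-detect
  max_findings: 5
  output_format: markdown
```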

Best Practices

  1. Review the PR description and linked issue before reading code. Understanding the intent behind changes lets you evaluate whether the implementation achieves its goal, not just whether the code is syntactically correct. A technically clean implementation that solves the wrong problem is worse than a rough implementation that solves the right one. Start with why, then evaluate how.

  2. Prioritize findings by risk and impact, not quantity. A review that flags 50 style nits and misses one SQL injection has failed its purpose. Report critical security and correctness issues first and prominently. Group lower-severity suggestions separately and label them as optional. Developers overwhelmed by low-priority feedback may miss the critical items buried in the noise.

  3. Provide fix suggestions, not just problem descriptions. "This has a race condition" is less helpful than showing the corrected code with a mutex or atomic operation. When pointing out issues, include a concrete code suggestion when possible. This transforms the review from criticism into collaboration and reduces the back-and-forth iterations needed to resolve each finding.

  4. Check that tests cover the changed behavior, not just that tests exist. A PR that adds a new API endpoint with tests only for the happy path has insufficient coverage. Verify that error cases, edge cases, and boundary conditions have tests. Check that mocks are realistic and that integration tests validate the full path. Untested error handling is effectively unverified error handling.

  5. Flag patterns, not just instances. If a PR has one function with inadequate error handling, there may be others. If a SQL query is unparameterized, check all queries in the changed files. Identifying systemic patterns and recommending project-wide fixes (linting rules, shared utilities) is more valuable than fixing individual instances that will recur in the next PR.
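The pattern-flagging practice above can be partially mechanized: once one unparameterized query turns up, sweep every changed file for the same shape. A heuristic sketch (the regex is illustrative, not a complete detector, and `findUnparameterizedQueries` is a hypothetical helper):

```typescript
// Matches template-literal SQL that interpolates a variable, e.g.
// `SELECT * FROM users WHERE name LIKE '%${term}%'`
const TEMPLATE_SQL = /`\s*(SELECT|INSERT|UPDATE|DELETE)[^`]*\$\{/i;

// Given a map of changed-file paths to their contents, report every
// line that looks like an unparameterized query, as "path:line".
function findUnparameterizedQueries(files: Record<string, string>): string[] {
  const hits: string[] = [];
  for (const [path, source] of Object.entries(files)) {
    source.split('\n').forEach((line, i) => {
      if (TEMPLATE_SQL.test(line)) hits.push(`${path}:${i + 1}`);
    });
  }
  return hits;
}
```

A sweep like this is what turns one finding into a project-wide recommendation, such as a lint rule banning template-literal SQL.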

Common Issues

Review feedback is too subjective and causes friction. Comments like "I don't like this approach" without rationale create conflict. Frame all feedback objectively: cite specific coding standards, link to documented patterns, or explain the concrete risk (performance, security, maintainability). Use language like "This could cause X because Y — consider Z instead" rather than "This is wrong" or "I would do it differently."

Large PRs receive superficial reviews because of cognitive overload. PRs with 500+ lines of changes cannot be thoroughly reviewed in a single session. Encourage the team to keep PRs under 200 lines by splitting features into smaller, independently reviewable increments. When faced with a large PR, review in multiple passes: first pass for architecture and data flow, second pass for correctness and security, third pass for edge cases and tests.

Review findings are not tracked or followed up on. Findings noted during review but not converted into code changes or tracked tickets get lost. Use the review tool's "request changes" feature to block merge until critical findings are addressed. For suggestions deferred to future work, create tracked issues immediately during the review so they enter the team's backlog rather than being forgotten.
