Code Review Engine
A systematic code review skill for analyzing code changes, identifying bugs, security vulnerabilities, performance issues, and suggesting improvements following established coding standards.
When to Use
Choose Code Review when:
- Reviewing pull requests or merge requests for quality and correctness
- Performing security-focused code audits on new features
- Checking code changes for adherence to team coding standards
- Identifying performance regressions in code changes before merging
Consider alternatives when:
- Running automated linting only — use ESLint, Pylint, or language-specific linters
- Performing dependency vulnerability scanning — use Snyk or npm audit
- Needing runtime behavior analysis — use profiling and testing tools
Quick Start
```bash
# Review diff of current branch vs main
git diff main...HEAD --stat
git diff main...HEAD -- '*.ts' '*.tsx'

# Run static analysis alongside review
npx eslint --format compact src/
npx tsc --noEmit
```
```python
import re
import subprocess
from dataclasses import dataclass


@dataclass
class ReviewFinding:
    file: str
    line: int
    severity: str  # critical, warning, suggestion
    category: str  # bug, security, performance, style
    message: str
    suggestion: str = ""


class CodeReviewer:
    def __init__(self, base_branch="main"):
        self.base_branch = base_branch
        self.findings: list[ReviewFinding] = []

    def get_changed_files(self):
        result = subprocess.run(
            ["git", "diff", "--name-only", f"{self.base_branch}...HEAD"],
            capture_output=True, text=True,
        )
        return result.stdout.strip().split("\n")

    def get_diff_lines(self, filepath):
        """Return (line_number, text) pairs for lines added in the diff."""
        result = subprocess.run(
            ["git", "diff", f"{self.base_branch}...HEAD", "--", filepath],
            capture_output=True, text=True,
        )
        added_lines = []
        current_line = 0
        for line in result.stdout.split("\n"):
            if line.startswith("@@"):
                # Hunk header, e.g. "@@ -10,6 +12,8 @@": take the new-file start line.
                match = re.match(r"@@ -\d+(?:,\d+)? \+(\d+)", line)
                if match:
                    current_line = int(match.group(1))
            elif line.startswith("+") and not line.startswith("+++"):
                added_lines.append((current_line, line[1:]))
                current_line += 1
            elif not line.startswith("-"):
                current_line += 1
        return added_lines

    def check_security(self, filepath, lines):
        """Check added lines for common security anti-patterns."""
        patterns = {
            r"\beval\(": "Avoid eval() - code injection risk",
            r"\.innerHTML\s*=": "Use textContent instead of innerHTML to prevent XSS",
            r"dangerouslySetInnerHTML": "Verify HTML is sanitized before rendering",
            r"SELECT\s.+FROM\s.+WHERE\s.+\+": "Possible SQL injection via concatenation",
            r"subprocess\.\w+\(.*shell=True": "Shell injection risk with shell=True",
            r"password\s*=\s*['\"]": "Hardcoded credential detected",
        }
        for line_num, code in lines:
            for pattern, msg in patterns.items():
                if re.search(pattern, code, re.IGNORECASE):
                    self.findings.append(ReviewFinding(
                        file=filepath,
                        line=line_num,
                        severity="critical",
                        category="security",
                        message=msg,
                    ))

    def generate_summary(self):
        critical = [f for f in self.findings if f.severity == "critical"]
        warnings = [f for f in self.findings if f.severity == "warning"]
        return {
            "total": len(self.findings),
            "critical": len(critical),
            "warnings": len(warnings),
            "files_reviewed": len(self.get_changed_files()),
            "findings": [vars(f) for f in self.findings],
        }
```
Core Concepts
Review Checklist Categories
| Category | Check Points | Priority |
|---|---|---|
| Correctness | Logic errors, edge cases, null handling | High |
| Security | Injection, auth bypass, data exposure | Critical |
| Performance | N+1 queries, unnecessary re-renders, memory leaks | High |
| Error Handling | Missing try/catch, unhandled promises, error propagation | Medium |
| Testing | Test coverage for changes, edge case tests | Medium |
| Readability | Naming, complexity, documentation | Low |
| Architecture | Separation of concerns, dependency direction | Medium |
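One of the performance checks in the table above can be sketched as a heuristic in the same style as `check_security`: flagging query-like calls inside a loop body as a possible N+1. The method names matched (`query`, `execute`, `filter`, `get`) and the indentation-based loop tracking are illustrative assumptions, not an exhaustive detector:

```python
import re

def check_n_plus_one(lines):
    """Rough N+1 heuristic: flag query-like calls inside a for-loop body.

    `lines` is a list of (line_number, source_text) pairs, the same shape
    as the diff lines used elsewhere in this skill. Method names and the
    indentation tracking are illustrative, not exhaustive.
    """
    findings = []
    in_loop = False
    loop_indent = 0
    for line_num, code in lines:
        indent = len(code) - len(code.lstrip())
        if re.match(r"\s*for\s.+:", code):
            in_loop, loop_indent = True, indent
            continue
        if in_loop and code.strip() and indent <= loop_indent:
            in_loop = False  # a dedent ends the loop body
        if in_loop and re.search(r"\.(query|execute|filter|get)\(", code):
            findings.append((line_num, "Query call inside loop - possible N+1; batch or prefetch"))
    return findings
```

A real reviewer would confirm whether the call actually hits a database before reporting, but a cheap lexical pass like this is useful for drawing a human's attention to the right lines.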
Automated Check Integration
```bash
#!/bin/bash
# Pre-review automated checks pipeline
set -e

echo "=== Type Check ==="
npx tsc --noEmit

echo "=== Lint Check ==="
npx eslint src/ --format json > lint-results.json

echo "=== Test Coverage ==="
npx jest --coverage --changedSince=main --json > test-results.json

echo "=== Dependency Audit ==="
npm audit --json > audit-results.json

echo "=== Bundle Size Check ==="
npx size-limit --json > size-results.json
```
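The JSON artifacts from the pipeline above can be folded into the review findings. As one example, a minimal sketch of converting ESLint's `--format json` output (a list of per-file results, each with a `messages` array where severity 2 is an error and 1 is a warning) into plain finding tuples:

```python
import json

def eslint_to_findings(lint_json: str):
    """Convert ESLint --format json output into (file, line, severity, message) tuples.

    ESLint uses severity 2 for errors and 1 for warnings; we map those onto
    this skill's "critical" and "warning" levels.
    """
    findings = []
    for file_result in json.loads(lint_json):
        for msg in file_result.get("messages", []):
            severity = "critical" if msg.get("severity") == 2 else "warning"
            findings.append((
                file_result["filePath"],
                msg.get("line", 0),
                severity,
                f"{msg.get('ruleId', 'unknown')}: {msg.get('message', '')}",
            ))
    return findings
```

The same shape works for the jest and npm-audit JSON files; only the field names differ.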
Configuration
| Option | Description | Default |
|---|---|---|
| `base_branch` | Branch to compare against | `"main"` |
| `file_patterns` | Glob patterns for files to review | `["*.ts", "*.tsx", "*.py"]` |
| `severity_threshold` | Minimum severity to report | `"warning"` |
| `check_security` | Enable security pattern detection | `true` |
| `check_performance` | Enable performance anti-pattern detection | `true` |
| `max_file_size` | Skip files larger than this (bytes) | `50000` |
| `ignore_patterns` | Files/directories to ignore | `["*.test.*", "*.spec.*"]` |
| `custom_rules` | Path to custom rule definitions | `""` |
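One way to realize the table above is a dataclass that merges a JSON config document over the defaults. The `ReviewConfig`/`load_config` names are a sketch, not part of the skill's fixed API; unknown keys are silently dropped here, though warning on them is a reasonable alternative:

```python
import json
from dataclasses import dataclass, field

@dataclass
class ReviewConfig:
    """Defaults mirror the configuration table above."""
    base_branch: str = "main"
    file_patterns: list = field(default_factory=lambda: ["*.ts", "*.tsx", "*.py"])
    severity_threshold: str = "warning"
    check_security: bool = True
    check_performance: bool = True
    max_file_size: int = 50000
    ignore_patterns: list = field(default_factory=lambda: ["*.test.*", "*.spec.*"])
    custom_rules: str = ""

def load_config(text: str) -> ReviewConfig:
    """Merge a JSON config document over the defaults, ignoring unknown keys."""
    data = json.loads(text)
    known = {k: v for k, v in data.items() if k in ReviewConfig.__dataclass_fields__}
    return ReviewConfig(**known)
```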
Best Practices
- Review the overall design before individual lines — understand what the change is trying to accomplish and whether the approach is sound before diving into implementation details like variable names or formatting
- Focus on logic and security first, style last — catching a SQL injection matters far more than debating bracket placement; use automated formatters for style enforcement so human review time goes to things that matter
- Check error paths and edge cases explicitly — most bugs live in the unhappy paths; trace what happens when inputs are null, empty, at boundary values, or when external calls fail
- Verify test coverage matches the risk level — critical business logic and security-sensitive code need thorough tests; low-risk utility functions need basic coverage at most
- Leave actionable comments with suggested fixes rather than vague criticism — instead of "this could be better," provide a specific code suggestion or explain exactly what change you recommend and why
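The last practice can be mechanized when findings come from the reviewer class: render each finding with its suggested fix attached. The helper below is a hypothetical sketch; the ```` ```suggestion ```` fence is the GitHub pull-request convention for a one-click applicable change, and other platforms would need a different format:

```python
def format_review_comment(file: str, line: int, message: str, suggestion: str = "") -> str:
    """Render a finding as an actionable review comment.

    When a concrete fix is available, it is attached as a GitHub-style
    suggestion block so the author can apply it directly.
    """
    comment = f"**{file}:{line}** - {message}"
    if suggestion:
        comment += f"\n\nSuggested change:\n```suggestion\n{suggestion}\n```"
    return comment
```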
Common Issues
Review fatigue on large pull requests: PRs with 500+ lines of changes get superficial reviews because no one can maintain focus that long. Encourage teams to split large changes into smaller, focused PRs of 200-300 lines maximum. When that is not possible, review in multiple sessions focusing on different aspects each time.
Missing context for architectural decisions: Reviewers often lack context about why a particular approach was chosen. PR descriptions should explain the reasoning behind design decisions, link to relevant issues or RFCs, and note alternatives that were considered and rejected.
Inconsistent review standards across team: Different reviewers focus on different things and apply varying levels of scrutiny. Create a team review checklist that covers security, testing, and architecture checks, and rotate the primary reviewer role so everyone maintains consistent standards.
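The PR-size guidance above can be enforced mechanically before review starts. A minimal sketch, assuming total churn (added plus removed lines, e.g. from `git diff --stat`) as the size measure and the ~300-line soft limit suggested earlier:

```python
def classify_pr_size(added: int, removed: int, soft_limit: int = 300) -> str:
    """Classify a change set by total churned lines against the ~300-line guidance."""
    total = added + removed
    if total <= soft_limit:
        return "ok"
    if total <= soft_limit * 2:
        return "large: consider splitting into focused PRs"
    return "oversized: review in multiple focused sessions"
```

A CI bot could post this classification on the PR so the size conversation happens before anyone invests review time.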