Auto Performance Monitor
Enterprise-grade hook that monitors system performance during sessions. Includes structured workflows, validation checks, and reusable patterns for performance monitoring.
Continuously monitors system performance metrics during Claude Code sessions, tracking resource usage, response latency, and throughput in real time.
When to Use This Hook
Attach this hook when you need to:
- Track CPU, memory, and I/O metrics during long-running code generation sessions
- Identify performance bottlenecks in build pipelines and test suites triggered by hooks
- Collect baseline performance data for capacity planning and optimization decisions
Consider alternatives when:
- You already have APM tools like Datadog or New Relic monitoring your development environment
- Your sessions are short and the overhead of continuous monitoring is not justified
Quick Start
Configuration
```yaml
name: auto-performance-monitor
type: hook
trigger: PostToolUse
category: performance
```
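As a sketch of how a `PostToolUse` hook like this is wired up, it would be registered in Claude Code's `.claude/settings.json`; the command path below is hypothetical, and an empty matcher applies the hook to every tool:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "~/.claude/hooks/perf-monitor.sh"
          }
        ]
      }
    ]
  }
}
```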
Example Trigger
```
# Hook triggers after any tool use to collect performance snapshots
claude> Run the test suite

# After test execution, performance metrics are captured
```
Example Output
Performance Monitor Snapshot
CPU Usage: 34% (baseline: 28%)
Memory: 1.2GB / 8GB (15%)
Disk I/O: 12MB/s read, 4MB/s write
Active Processes: 47
Session Duration: 14m 32s
Tool Executions: 8 (avg latency: 1.4s)
Status: All metrics within normal range
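The snapshot above is plain text, so producing it is mostly string formatting. A minimal sketch, with hypothetical field names for the collected metrics (the dict keys are illustrative, not part of the hook's API):

```python
def render_snapshot(metrics: dict) -> str:
    """Format collected metrics into a snapshot report like the one above."""
    # Derive the memory percentage from used/total rather than trusting
    # a separately reported figure, so the line stays self-consistent.
    mem_pct = round(100 * metrics["mem_used_gb"] / metrics["mem_total_gb"])
    lines = [
        "Performance Monitor Snapshot",
        f"CPU Usage: {metrics['cpu_pct']}% (baseline: {metrics['cpu_baseline_pct']}%)",
        f"Memory: {metrics['mem_used_gb']}GB / {metrics['mem_total_gb']}GB ({mem_pct}%)",
        f"Disk I/O: {metrics['read_mbs']}MB/s read, {metrics['write_mbs']}MB/s write",
    ]
    return "\n".join(lines)
```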
Core Concepts
Monitoring Dimensions Overview
| Aspect | Details |
|---|---|
| Resource Tracking | CPU, memory, disk I/O, and network bandwidth |
| Latency Metrics | Per-tool execution time and aggregate session latency |
| Baseline Comparison | Compares current readings against established baselines |
| Anomaly Detection | Flags readings that deviate more than 2 standard deviations |
| Trend Analysis | Tracks metric changes over the session to spot degradation |
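The anomaly rule in the table, flagging readings more than two standard deviations from baseline, can be sketched with the standard library. This is an illustration of the rule, not the hook's actual implementation:

```python
from statistics import mean, stdev

def is_anomaly(reading: float, history: list[float], n_std: float = 2.0) -> bool:
    """Flag a reading that deviates more than n_std standard deviations
    from the baseline formed by prior readings."""
    if len(history) < 2:
        return False  # not enough data to form a baseline
    baseline = mean(history)
    spread = stdev(history)
    if spread == 0:
        return reading != baseline  # flat baseline: any change is anomalous
    return abs(reading - baseline) > n_std * spread
```

For example, against a CPU history of `[28, 30, 27, 29, 31]` (mean 29, standard deviation about 1.58), a reading of 34 deviates by 5 and is flagged, while 30 is not.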
Monitoring Workflow
```
Tool Execution Completes
            |
Capture Metrics Snapshot
            |
     ┌──────┼──────┐
     |      |      |
    CPU  Memory  Disk
     |      |      |
     └──────┼──────┘
            |
   Compare to Baseline
            |
     ┌──────┴──────┐
     |             |
   Normal       Anomaly
     |             |
  Log Data      Alert +
                Recommend
```
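One pass through the workflow above can be sketched as follows, comparing each metric to its baseline, routing to log or alert, and then folding the new reading into the baseline. All names here are illustrative:

```python
from statistics import mean, stdev

def process_snapshot(snapshot: dict, history: dict, n_std: float = 2.0) -> dict:
    """Compare each metric in a snapshot to its baseline history and
    route it to 'log' (normal) or 'alert' (anomaly)."""
    routed = {}
    for metric, value in snapshot.items():
        past = history.get(metric, [])
        if len(past) >= 2 and stdev(past) > 0 and \
                abs(value - mean(past)) > n_std * stdev(past):
            routed[metric] = "alert"
        else:
            routed[metric] = "log"
        past.append(value)  # fold the new reading into the baseline
        history[metric] = past
    return routed
```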
Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
| `sample_interval_sec` | number | 30 | Seconds between metric snapshots |
| `cpu_warning_pct` | number | 80 | CPU usage percentage that triggers a warning |
| `memory_warning_pct` | number | 85 | Memory usage percentage that triggers a warning |
| `retention_minutes` | number | 60 | How long to retain metric history in session |
| `anomaly_std_devs` | number | 2 | Standard deviations from baseline to flag anomaly |
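The parameters above map naturally onto a validated config object. A sketch under the assumption that ranges should be sanity-checked at load time (the class and its checks are illustrative; only the names and defaults come from the table):

```python
from dataclasses import dataclass

@dataclass
class MonitorConfig:
    """Monitor settings; defaults mirror the parameter table above."""
    sample_interval_sec: float = 30
    cpu_warning_pct: float = 80
    memory_warning_pct: float = 85
    retention_minutes: float = 60
    anomaly_std_devs: float = 2

    def __post_init__(self):
        # Reject obviously invalid values before monitoring starts.
        if self.sample_interval_sec <= 0:
            raise ValueError("sample_interval_sec must be positive")
        if not 0 < self.cpu_warning_pct <= 100:
            raise ValueError("cpu_warning_pct must be in (0, 100]")
        if not 0 < self.memory_warning_pct <= 100:
            raise ValueError("memory_warning_pct must be in (0, 100]")
```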
Best Practices
- Establish Baselines First - Run a few normal sessions without alerts enabled to collect baseline metrics. The monitor needs historical context to distinguish genuine anomalies from normal variation.
- Keep Overhead Under 2% - Performance monitoring should not meaningfully impact the system it measures. Sample at reasonable intervals and avoid expensive metric-collection commands during critical operations.
- Focus on Actionable Metrics - Track metrics that lead to concrete actions. CPU percentage is actionable (you can optimize code), while system uptime during a coding session is not particularly useful.
- Correlate Metrics with Tool Use - The most valuable insight comes from correlating resource spikes with specific tool executions. Tag each snapshot with the tool that triggered it for easy post-session analysis.
- Set Graduated Alert Levels - Use warning thresholds for early awareness and critical thresholds for intervention. A single threshold creates either too many false alarms or too-late notifications.
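The graduated-threshold idea from the last practice above fits in a few lines; the function and level names are illustrative, not part of the hook:

```python
def alert_level(value: float, warning: float, critical: float) -> str:
    """Classify a reading against graduated thresholds:
    'ok' below warning, 'warning' in between, 'critical' at or above critical."""
    if value >= critical:
        return "critical"
    if value >= warning:
        return "warning"
    return "ok"
```

With `warning=80` and `critical=95`, a CPU reading of 85% raises early awareness without paging anyone, while 97% signals that intervention is needed.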
Common Issues
- High CPU from Monitor Itself - Running `top` or `ps aux` every few seconds adds measurable overhead. Use lightweight sampling commands like `/proc/stat` reads on Linux or `vm_stat` on macOS instead.
- Memory Readings Include Cache - Operating systems report cached memory as used. Use available memory rather than free memory for accurate readings, or the monitor will trigger false warnings.
- Baseline Drift Over Long Sessions - As a session progresses and more files are opened, baselines naturally shift upward. Use a rolling window for baseline calculation rather than a fixed initial snapshot.
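A rolling-window baseline, as suggested for the drift issue above, is a short sketch with `collections.deque` (illustrative, not the hook's internals): only the most recent readings count, so the baseline follows gradual drift instead of comparing against a stale initial snapshot.

```python
from collections import deque
from statistics import mean

class RollingBaseline:
    """Baseline over only the most recent `window` readings."""

    def __init__(self, window: int = 20):
        # deque(maxlen=...) silently evicts the oldest reading when full.
        self.readings = deque(maxlen=window)

    def update(self, value: float) -> None:
        self.readings.append(value)

    @property
    def value(self) -> float:
        return mean(self.readings) if self.readings else 0.0
```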
Similar Templates
Pre-Commit Security Scanner
Pre-commit hook that scans staged files for hardcoded secrets, API keys, passwords, and sensitive data patterns before allowing commits.
Agents Md Watcher
Hook that watches AGENTS.md and automatically loads agents configuration. Includes structured workflows, validation checks, and reusable patterns for automation.
Automated Build Inspector
Hook that automatically triggers build processes. Includes structured workflows, validation checks, and reusable patterns for automation.