
Auto Performance Monitor

Enterprise-grade hook for monitoring system performance during Claude Code sessions. Includes structured workflows, validation checks, and reusable patterns for performance monitoring.

Hook · Cliptics · performance · v1.0.0 · MIT

Continuously monitors system performance metrics during Claude Code sessions, tracking resource usage, response latency, and throughput in real time.

When to Use This Hook

Attach this hook when you need to:

  • Track CPU, memory, and I/O metrics during long-running code generation sessions
  • Identify performance bottlenecks in build pipelines and test suites triggered by hooks
  • Collect baseline performance data for capacity planning and optimization decisions

Consider alternatives when:

  • You already have APM tools like Datadog or New Relic monitoring your development environment
  • Your sessions are short and the overhead of continuous monitoring is not justified

Quick Start

Configuration

name: auto-performance-monitor
type: hook
trigger: PostToolUse
category: performance

Example Trigger

# Hook triggers after any tool use to collect performance snapshots
claude> Run the test suite
# After test execution, performance metrics are captured
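For context, a PostToolUse hook of this kind is typically registered in Claude Code's settings file. The sketch below is a hedged illustration: the matcher and the script path are placeholder assumptions, not values shipped with this template.

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "*",
        "hooks": [
          { "type": "command", "command": "~/.claude/hooks/perf-monitor.sh" }
        ]
      }
    ]
  }
}
```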

Example Output

Performance Monitor Snapshot
  CPU Usage: 34% (baseline: 28%)
  Memory: 1.2GB / 8GB (15%)
  Disk I/O: 12MB/s read, 4MB/s write
  Active Processes: 47
  Session Duration: 14m 32s
  Tool Executions: 8 (avg latency: 1.4s)
Status: All metrics within normal range

Core Concepts

Monitoring Dimensions Overview

| Aspect | Details |
|---|---|
| Resource Tracking | CPU, memory, disk I/O, and network bandwidth |
| Latency Metrics | Per-tool execution time and aggregate session latency |
| Baseline Comparison | Compares current readings against established baselines |
| Anomaly Detection | Flags readings that deviate more than 2 standard deviations from baseline |
| Trend Analysis | Tracks metric changes over the session to spot degradation |
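The latency dimension can be sketched as a small tracker that records per-tool execution times and an aggregate session average. This is a minimal illustration, not the template's internals; the tool names are examples.

```python
from collections import defaultdict

class LatencyTracker:
    """Per-tool and aggregate session latency (illustrative sketch)."""

    def __init__(self):
        self.samples = defaultdict(list)  # tool name -> list of durations (s)

    def record(self, tool, seconds):
        self.samples[tool].append(seconds)

    def per_tool_avg(self, tool):
        times = self.samples[tool]
        return sum(times) / len(times)

    def session_avg(self):
        all_times = [t for times in self.samples.values() for t in times]
        return sum(all_times) / len(all_times)

tracker = LatencyTracker()
tracker.record("Bash", 2.0)
tracker.record("Bash", 1.0)
tracker.record("Read", 0.5)
```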

Monitoring Workflow

Tool Execution Completes
         |
  Capture Metrics Snapshot
         |
  ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”¼ā”€ā”€ā”€ā”€ā”€ā”€ā”
  |      |      |
 CPU   Memory  Disk
  |      |      |
  ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”¼ā”€ā”€ā”€ā”€ā”€ā”€ā”˜
         |
  Compare to Baseline
         |
  ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”“ā”€ā”€ā”€ā”€ā”€ā”€ā”
  |             |
Normal      Anomaly
  |             |
  Log        Alert +
  Data       Recommend

Configuration

| Parameter | Type | Default | Description |
|---|---|---|---|
| sample_interval_sec | number | 30 | Seconds between metric snapshots |
| cpu_warning_pct | number | 80 | CPU usage percentage that triggers a warning |
| memory_warning_pct | number | 85 | Memory usage percentage that triggers a warning |
| retention_minutes | number | 60 | How long to retain metric history within a session |
| anomaly_std_devs | number | 2 | Standard deviations from baseline required to flag an anomaly |
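These parameters map naturally onto a small configuration object. The sketch below mirrors the defaults in the table; the class name is an assumption for illustration, not part of the template.

```python
from dataclasses import dataclass

@dataclass
class MonitorConfig:
    """Defaults mirror the configuration table above (class name assumed)."""
    sample_interval_sec: int = 30
    cpu_warning_pct: int = 80
    memory_warning_pct: int = 85
    retention_minutes: int = 60
    anomaly_std_devs: float = 2.0

cfg = MonitorConfig(cpu_warning_pct=75)  # override a single threshold
```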

Best Practices

  1. Establish Baselines First - Run a few normal sessions without alerts enabled to collect baseline metrics. The monitor needs historical context to distinguish genuine anomalies from normal variation.

  2. Keep Overhead Under 2% - Performance monitoring should not meaningfully impact the system it measures. Sample at reasonable intervals and avoid expensive metric collection commands during critical operations.

  3. Focus on Actionable Metrics - Track metrics that lead to concrete actions. CPU percentage is actionable (you can optimize code), while system uptime during a coding session is not particularly useful.

  4. Correlate Metrics with Tool Use - The most valuable insight comes from correlating resource spikes with specific tool executions. Tag each snapshot with the tool that triggered it for easy post-session analysis.

  5. Set Graduated Alert Levels - Use warning thresholds for early awareness and critical thresholds for intervention. A single threshold creates either too many false alarms or too-late notifications.
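Practice 4 can be as simple as attaching the triggering tool's name and a timestamp to every snapshot, then grouping by tool after the session. A minimal sketch, with hypothetical tool and metric names:

```python
import time
from collections import defaultdict

def tagged_snapshot(tool_name, metrics):
    """Attach the triggering tool and a timestamp to a metrics snapshot
    so resource spikes can be correlated with tool executions later."""
    return {"tool": tool_name, "ts": time.time(), **metrics}

# Hypothetical snapshots captured after two tool executions
snapshots = [
    tagged_snapshot("Bash", {"cpu_pct": 62, "mem_pct": 41}),
    tagged_snapshot("Read", {"cpu_pct": 18, "mem_pct": 40}),
]

# Post-session analysis: group snapshots by the tool that triggered them
by_tool = defaultdict(list)
for snap in snapshots:
    by_tool[snap["tool"]].append(snap)
```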

Common Issues

  1. High CPU from Monitor Itself - Running top or ps aux every few seconds adds measurable overhead. Use lightweight sampling instead, such as reading /proc/stat on Linux or running vm_stat on macOS.

  2. Memory Readings Include Cache - Operating systems report cached memory as used. Use available memory rather than free memory for accurate readings, or the monitor will trigger false warnings.

  3. Baseline Drift Over Long Sessions - As a session progresses and more files are opened, baselines naturally shift upward. Use a rolling window for baseline calculation rather than a fixed initial snapshot.
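Issue 3 suggests computing the baseline over a rolling window rather than a fixed initial snapshot. A minimal sketch using a fixed-size deque, with an illustrative window size:

```python
from collections import deque
import statistics

class RollingBaseline:
    """Rolling-window baseline: old readings age out, so long sessions
    are not compared against a stale initial snapshot."""

    def __init__(self, window=20):
        self.readings = deque(maxlen=window)  # oldest entries drop automatically

    def add(self, value):
        self.readings.append(value)

    def mean(self):
        return statistics.mean(self.readings)

baseline = RollingBaseline(window=3)  # illustrative window size
for value in [20, 30, 40, 50]:        # the first reading falls out of the window
    baseline.add(value)
```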
