
Optimize Database Fast

Enterprise-grade command for database query optimization and indexing. Includes structured workflows, validation checks, and reusable patterns for performance.

Command · Cliptics · performance · v1.0.0 · MIT

Quickly identify and resolve database performance bottlenecks through query analysis, index optimization, schema review, and connection pool tuning.

When to Use This Command

Run this command when...

  • Slow database queries are causing API response time degradation in production or staging
  • You need to analyze query execution plans and add missing indexes to critical tables
  • Database CPU or I/O utilization is spiking and you need to identify the responsible queries
  • A new feature introduced slow queries and you want rapid optimization before the next release
  • You want to review database schema design for normalization issues and performance anti-patterns

Quick Start

```markdown
# .claude/commands/optimize-database-fast.md
---
name: Optimize Database Fast
description: Fast database query and schema optimization with index analysis
command: true
---

Optimize database: $ARGUMENTS

1. Identify slow queries and execution bottlenecks
2. Analyze execution plans and index coverage
3. Optimize queries, add indexes, tune configuration
4. Measure query performance improvement
```
```bash
# Invoke the command
claude "/optimize-database-fast slow queries in the orders module"

# Expected output
# > Analyzing database queries in orders module...
# > Slow queries identified:
# >   1. getOrdersByUser: full table scan on orders (no index on user_id)
# >      Execution time: 340ms | Rows scanned: 1.2M
# >   2. getOrderItems: N+1 pattern loading items per order
# >      23 queries per request | Total: 180ms
# >   3. orderStatusReport: unoptimized aggregate query
# >      Execution time: 890ms | Missing compound index
# > Applying fixes:
# >   1. CREATE INDEX idx_orders_user_id ON orders(user_id)
# >   2. Rewrote to JOIN with eager loading (23 queries -> 1)
# >   3. Added compound index (status, created_at) + query rewrite
# > Results:
# >   Query 1: 340ms -> 4ms | Query 2: 180ms -> 12ms | Query 3: 890ms -> 25ms
```

Core Concepts

| Concept | Description |
| --- | --- |
| Query Analysis | Identifies slow queries through code scanning and execution plan review |
| Index Optimization | Recommends and creates indexes based on query patterns and WHERE clauses |
| N+1 Detection | Finds loop-based query patterns and rewrites them as JOINs or batch loads |
| Schema Review | Evaluates table design for normalization, data types, and constraint issues |
| Connection Pooling | Analyzes and tunes connection pool settings for optimal throughput |
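The N+1 rewrite described in the table can be illustrated with a small self-contained SQLite sketch. The table and column names here are hypothetical stand-ins, not part of the command itself:

```python
import sqlite3

# In-memory demo of the N+1 pattern and its JOIN rewrite.
# Schema is illustrative, not taken from any real project.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER);
    CREATE TABLE order_items (id INTEGER PRIMARY KEY, order_id INTEGER, sku TEXT);
    INSERT INTO orders VALUES (1, 42), (2, 42);
    INSERT INTO order_items VALUES (1, 1, 'A'), (2, 1, 'B'), (3, 2, 'C');
""")

def items_n_plus_one(user_id):
    # N+1: one query for the orders, then one query per order for its items.
    orders = conn.execute(
        "SELECT id FROM orders WHERE user_id = ?", (user_id,)).fetchall()
    result = []
    for (order_id,) in orders:  # N extra round trips to the database
        result += conn.execute(
            "SELECT sku FROM order_items WHERE order_id = ?",
            (order_id,)).fetchall()
    return [sku for (sku,) in result]

def items_joined(user_id):
    # Rewrite: a single JOIN fetches the same rows in one round trip.
    rows = conn.execute(
        """SELECT oi.sku FROM orders o
           JOIN order_items oi ON oi.order_id = o.id
           WHERE o.user_id = ? ORDER BY oi.id""", (user_id,)).fetchall()
    return [sku for (sku,) in rows]
```

Both functions return the same data; the JOIN version simply replaces N+1 queries with one, which is the shape of the fix the command applies.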
Database Optimization Flow

```text
  Scan Code for Queries
           |
  [Identify Slow Queries]
  Full scans | N+1 | Bad joins
           |
  [Analyze Execution Plans]
  EXPLAIN | Index usage | Row estimates
           |
  [Apply Optimizations]
  +---------+---------+--------+
  |         |         |        |
  Add       Rewrite   Tune     Fix
  indexes   queries   pool     schema
           |
  [Measure Improvement]
  340ms -> 4ms per query
```
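The "Analyze Execution Plans" step above can be sketched with SQLite's `EXPLAIN QUERY PLAN`; PostgreSQL and MySQL use `EXPLAIN` / `EXPLAIN ANALYZE` instead. The schema is a minimal hypothetical example:

```python
import sqlite3

# Show a plan flipping from a full scan to an index search once an
# index exists. Table and index names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER)")

def plan(sql):
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(r[-1] for r in rows)  # last column holds the plan detail

query = "SELECT * FROM orders WHERE user_id = 7"
before = plan(query)   # typically reports a SCAN of the whole table
conn.execute("CREATE INDEX idx_orders_user_id ON orders(user_id)")
after = plan(query)    # now a SEARCH using idx_orders_user_id
```

Confirming that `after` actually names the new index is the cheap sanity check to run before and after every index migration.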

Configuration

| Parameter | Description | Default | Example | Required |
| --- | --- | --- | --- | --- |
| $ARGUMENTS | Target module, table, or query to optimize | all detected queries | "orders module" | No |
| database_type | Database engine in use | auto-detect | "postgresql", "mysql", "mongodb" | No |
| include_migrations | Generate migration files for index changes | true | false | No |
| analyze_schema | Include schema design review | true | false | No |
| connection_pool_tune | Analyze and tune connection pool settings | false | true | No |

Best Practices

  1. Start with the slowest queries -- Focus on the queries that cause the most user-facing latency. A single query improvement from 500ms to 5ms can transform the entire application experience.

  2. Always generate migrations for indexes -- Keep include_migrations: true so index changes are version-controlled and reproducible across environments. Never add indexes directly in production.

  3. Test index impact on writes -- Every index speeds reads but slows writes. For high-write tables, benchmark INSERT and UPDATE performance after adding indexes.

  4. Profile with production-like data -- Query plans behave differently on small dev databases versus production datasets. Use production data volumes for accurate EXPLAIN analysis.

  5. Fix N+1 patterns at the ORM level -- Rather than just adding indexes, rewrite the code to use eager loading or batch fetching. N+1 patterns indicate an application-level problem, not just a database one.
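Practice 3 can be checked with a rough benchmark sketch: time bulk INSERTs with and without secondary indexes. The in-memory SQLite setup and `events` table are hypothetical, and absolute numbers are machine-dependent, so treat the output as directional only:

```python
import sqlite3
import time

def insert_time(extra_indexes):
    # Time 10,000 INSERTs with 0..3 secondary indexes on the table.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE events (id INTEGER PRIMARY KEY, a INT, b INT, c INT)")
    for i in range(extra_indexes):  # each index adds write amplification
        conn.execute(f"CREATE INDEX idx_events_{i} ON events({'abc'[i]})")
    start = time.perf_counter()
    with conn:  # single transaction, as a bulk load would use
        conn.executemany(
            "INSERT INTO events (a, b, c) VALUES (?, ?, ?)",
            [(i, i, i) for i in range(10_000)])
    return time.perf_counter() - start

bare = insert_time(0)      # baseline: no secondary indexes
indexed = insert_time(3)   # usually measurably slower per batch
```

The same comparison against your real high-write tables tells you whether a proposed index's read win is worth its write cost.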

Common Issues

Index not used despite being created: The query optimizer may choose a full scan when the selectivity of an index is too low (e.g., a boolean column). Use composite indexes or restructure the query to improve selectivity.

Migration conflicts in team environments: Multiple developers adding index migrations simultaneously can cause conflicts. Coordinate index changes through your team workflow and rebase before merging.

Connection pool exhaustion under load: Default pool sizes are often too small for production traffic. Monitor active connections and increase the pool size, but ensure the database server can handle the increased connection count.
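As a sketch of how a bounded pool behaves when connections are checked out and returned, here is a hand-rolled queue-based pool. This is purely illustrative; in production you would rely on your driver's own pooling (for example, psycopg_pool for PostgreSQL — verify for your stack):

```python
import queue
import sqlite3
from contextlib import contextmanager

class Pool:
    """Fixed-size connection pool sketch (illustrative, not production code)."""

    def __init__(self, size, factory):
        self._conns = queue.Queue(maxsize=size)
        for _ in range(size):
            self._conns.put(factory())

    @contextmanager
    def connection(self, timeout=5.0):
        # Blocks (up to timeout) when all connections are checked out --
        # this is what "pool exhaustion" looks like from the caller's side.
        conn = self._conns.get(timeout=timeout)
        try:
            yield conn
        finally:
            self._conns.put(conn)  # always return the connection to the pool

pool = Pool(size=2, factory=lambda: sqlite3.connect(":memory:"))
with pool.connection() as conn:
    one = conn.execute("SELECT 1").fetchone()[0]
```

Sizing the pool is a balance: too small and callers queue on `get()`; too large and the database server pays for idle connections it must still service.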
