Optimize Database Fast
Quickly identify and resolve database performance bottlenecks through query analysis, index optimization, schema review, and connection pool tuning.
When to Use This Command
Run this command when...
- Slow database queries are causing API response time degradation in production or staging
- You need to analyze query execution plans and add missing indexes to critical tables
- Database CPU or I/O utilization is spiking and you need to identify the responsible queries
- A new feature introduced slow queries and you want rapid optimization before the next release
- You want to review database schema design for normalization issues and performance anti-patterns
Quick Start
```markdown
# .claude/commands/optimize-database-fast.md
---
name: Optimize Database Fast
description: Fast database query and schema optimization with index analysis
command: true
---

Optimize database: $ARGUMENTS

1. Identify slow queries and execution bottlenecks
2. Analyze execution plans and index coverage
3. Optimize queries, add indexes, tune configuration
4. Measure query performance improvement
```
```bash
# Invoke the command
claude "/optimize-database-fast slow queries in the orders module"

# Expected output
# > Analyzing database queries in orders module...
# > Slow queries identified:
# >   1. getOrdersByUser: full table scan on orders (no index on user_id)
# >      Execution time: 340ms | Rows scanned: 1.2M
# >   2. getOrderItems: N+1 pattern loading items per order
# >      23 queries per request | Total: 180ms
# >   3. orderStatusReport: unoptimized aggregate query
# >      Execution time: 890ms | Missing compound index
# > Applying fixes:
# >   1. CREATE INDEX idx_orders_user_id ON orders(user_id)
# >   2. Rewrote to JOIN with eager loading (23 queries -> 1)
# >   3. Added compound index (status, created_at) + query rewrite
# > Results:
# >   Query 1: 340ms -> 4ms | Query 2: 180ms -> 12ms | Query 3: 890ms -> 25ms
```
Core Concepts
| Concept | Description |
|---|---|
| Query Analysis | Identifies slow queries through code scanning and execution plan review |
| Index Optimization | Recommends and creates indexes based on query patterns and WHERE clauses |
| N+1 Detection | Finds loop-based query patterns and rewrites them as JOINs or batch loads |
| Schema Review | Evaluates table design for normalization, data types, and constraint issues |
| Connection Pooling | Analyzes and tunes connection pool settings for optimal throughput |
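The query-analysis and index-optimization concepts above can be sketched with SQLite's `EXPLAIN QUERY PLAN` (a minimal stand-in for PostgreSQL's `EXPLAIN`; the `orders` table and index name are hypothetical, mirroring the Quick Start example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders (user_id, status) VALUES (?, ?)",
                 [(i % 100, "open") for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail); return the detail text
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()[0][3]

query = "SELECT * FROM orders WHERE user_id = 42"
before_plan = plan(query)   # contains "SCAN" -- a full table scan, no usable index
conn.execute("CREATE INDEX idx_orders_user_id ON orders(user_id)")
after_plan = plan(query)    # now a SEARCH using idx_orders_user_id

print(before_plan)
print(after_plan)
```

The same before/after comparison is what the command's execution-plan step automates: any plan line that reports a scan on a large table with a selective `WHERE` clause is an index candidate.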
Database Optimization Flow

```
Scan Code for Queries
          |
[Identify Slow Queries]
   Full scans | N+1 | Bad joins
          |
[Analyze Execution Plans]
   EXPLAIN | Index usage | Row estimates
          |
[Apply Optimizations]
   +--------+--------+--------+
   |        |        |        |
  Add    Rewrite   Tune     Fix
indexes  queries   pool    schema
          |
[Measure Improvement]
   340ms -> 4ms per query
```
Configuration
| Parameter | Description | Default | Example | Required |
|---|---|---|---|---|
| $ARGUMENTS | Target module, table, or query to optimize | all detected queries | "orders module" | No |
| database_type | Database engine in use | auto-detect | "postgresql", "mysql", "mongodb" | No |
| include_migrations | Generate migration files for index changes | true | false | No |
| analyze_schema | Include schema design review | true | false | No |
| connection_pool_tune | Analyze and tune connection pool settings | false | true | No |
Best Practices
- Start with the slowest queries -- Focus on the queries that cause the most user-facing latency. A single query improvement from 500ms to 5ms can transform the entire application experience.
- Always generate migrations for indexes -- Keep `include_migrations: true` so index changes are version-controlled and reproducible across environments. Never add indexes directly in production.
- Test index impact on writes -- Every index speeds reads but slows writes. For high-write tables, benchmark INSERT and UPDATE performance after adding indexes.
- Profile with production-like data -- Query plans behave differently on small dev databases versus production datasets. Use production data volumes for accurate EXPLAIN analysis.
- Fix N+1 patterns at the ORM level -- Rather than just adding indexes, rewrite the code to use eager loading or batch fetching. N+1 patterns indicate an application-level problem, not just a database one.
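The N+1 fix in the last practice can be sketched without an ORM: one query per order versus a single batched `IN` query grouped in application code. The `orders`/`order_items` schema here is hypothetical; a real fix would use your ORM's eager-loading API.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY);
CREATE TABLE order_items (id INTEGER PRIMARY KEY, order_id INTEGER, sku TEXT);
INSERT INTO orders (id) VALUES (1), (2), (3);
INSERT INTO order_items (order_id, sku) VALUES (1,'a'), (1,'b'), (2,'c'), (3,'d');
""")

order_ids = [r[0] for r in conn.execute("SELECT id FROM orders")]

# N+1 pattern: one round trip per order (1 + 3 queries here)
items_n_plus_1 = {}
for oid in order_ids:
    items_n_plus_1[oid] = [r[0] for r in conn.execute(
        "SELECT sku FROM order_items WHERE order_id = ?", (oid,))]

# Batch rewrite: a single IN query, grouped in application code (1 query)
placeholders = ",".join("?" * len(order_ids))
items_batched = {oid: [] for oid in order_ids}
for oid, sku in conn.execute(
        f"SELECT order_id, sku FROM order_items WHERE order_id IN ({placeholders})",
        order_ids):
    items_batched[oid].append(sku)

assert items_batched == items_n_plus_1  # same result, far fewer round trips
```

The round-trip count, not per-query cost, dominates N+1 latency, which is why the Quick Start example drops from 23 queries to 1 rather than indexing its way out.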
Common Issues
Index not used despite being created: The query optimizer may choose a full scan when the selectivity of an index is too low (e.g., a boolean column). Use composite indexes or restructure the query to improve selectivity.
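The low-selectivity fix above can be sketched in SQLite: a composite index that leads with the low-selectivity flag and adds a selective range column gives the optimizer a usable seek path. Table and index names are hypothetical; the same idea applies to PostgreSQL and MySQL composite (or partial) indexes.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, done INTEGER, created_at TEXT)")
conn.executemany("INSERT INTO tasks (done, created_at) VALUES (?, ?)",
                 [(i % 2, f"2024-01-{(i % 28) + 1:02d}") for i in range(1000)])

# done alone is a boolean: each value matches ~50% of rows, so an index on it
# adds little. Pairing it with created_at makes the combined key selective.
conn.execute("CREATE INDEX idx_tasks_done_created ON tasks(done, created_at)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM tasks "
    "WHERE done = 0 AND created_at > '2024-01-20'").fetchall()[0][3]
print(plan)  # a SEARCH using idx_tasks_done_created rather than a full scan
```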
Migration conflicts in team environments: Multiple developers adding index migrations simultaneously can cause conflicts. Coordinate index changes through your team workflow and rebase before merging.
Connection pool exhaustion under load: Default pool sizes are often too small for production traffic. Monitor active connections and increase the pool size, but ensure the database server can handle the increased connection count.
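The exhaustion behavior above can be sketched with a minimal fixed-size pool; this is an illustrative stdlib sketch, not how any particular pool library (PgBouncer, HikariCP, SQLAlchemy) is implemented. An undersized pool shows up as borrowers blocking, then timing out:

```python
import queue
import sqlite3

class ConnectionPool:
    """Minimal fixed-size pool: borrowers block when all connections are
    checked out, so an undersized pool surfaces as latency, then timeouts."""
    def __init__(self, size, factory):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self, timeout=5.0):
        # Raises queue.Empty when the pool stays exhausted past the timeout
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)

pool = ConnectionPool(4, lambda: sqlite3.connect(":memory:", check_same_thread=False))
conn = pool.acquire()
conn.execute("SELECT 1")
pool.release(conn)
```

Sizing the pool means raising `size` until timeouts disappear under peak load, while staying under the database server's `max_connections` (minus headroom for other clients).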