Ultimate PostgreSQL Query Runner Suite
Professional-grade skill for executing SQL queries safely against PostgreSQL databases, with an optional read-only mode. Built for Claude Code with best practices and real-world patterns.
PostgreSQL Query Runner Suite
Full-featured PostgreSQL query execution and database management toolkit with query building, performance optimization, migration support, and result formatting.
When to Use This Skill
Choose PostgreSQL Query Runner when:
- Running ad-hoc queries against PostgreSQL databases
- Debugging slow queries with EXPLAIN ANALYZE
- Building and testing complex SQL queries iteratively
- Performing database migrations or schema changes
- Exporting query results in various formats
Consider alternatives when:
- Using NoSQL databases like MongoDB or DynamoDB
- Need an ORM layer — use Prisma, Drizzle, or TypeORM instead
- Working with SQLite for local/embedded databases
Quick Start
```
# Connect and run queries
claude skill activate ultimate-postgresql-query-runner-suite

# Explore database schema
claude "Show me all tables and their relationships in the orders database"

# Optimize a slow query
claude "Analyze and optimize this query: SELECT * FROM orders JOIN users ON..."
```
Example Usage
```sql
-- Explore database structure
SELECT table_name, column_name, data_type, is_nullable, column_default
FROM information_schema.columns
WHERE table_schema = 'public'
ORDER BY table_name, ordinal_position;

-- Performance analysis
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT o.id, o.total, u.email, COUNT(oi.id) AS item_count
FROM orders o
JOIN users u ON u.id = o.user_id
JOIN order_items oi ON oi.order_id = o.id
WHERE o.created_at > NOW() - INTERVAL '30 days'
GROUP BY o.id, o.total, u.email
ORDER BY o.total DESC
LIMIT 50;

-- Index recommendations based on query patterns
SELECT schemaname, tablename, seq_scan, seq_tup_read, idx_scan, idx_tup_fetch,
       CASE WHEN seq_scan > 0
            THEN round(seq_tup_read::numeric / seq_scan, 0)
            ELSE 0
       END AS avg_rows_per_seq_scan
FROM pg_stat_user_tables
WHERE seq_scan > 100
ORDER BY seq_tup_read DESC;
```
Core Concepts
Query Operations
| Operation | Description | Example |
|---|---|---|
| Schema Inspection | List tables, columns, indexes, constraints | \dt, \d+ tablename |
| Query Execution | Run SELECT, INSERT, UPDATE, DELETE with results | Standard SQL |
| Query Planning | EXPLAIN ANALYZE with buffer and timing stats | EXPLAIN (ANALYZE, BUFFERS) |
| Index Analysis | Find missing indexes, unused indexes | pg_stat_user_indexes |
| Lock Monitoring | Identify blocking queries and deadlocks | pg_locks + pg_stat_activity |
| Connection Stats | Active connections, idle transactions | pg_stat_activity |
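The Lock Monitoring row above can be sketched with `pg_blocking_pids` (one common pattern; requires PostgreSQL 9.6+ and a live server to run):

```sql
-- Blocked sessions paired with the sessions blocking them.
SELECT blocked.pid    AS blocked_pid,
       blocked.query  AS blocked_query,
       blocking.pid   AS blocking_pid,
       blocking.query AS blocking_query
FROM pg_stat_activity blocked
JOIN pg_stat_activity blocking
  ON blocking.pid = ANY (pg_blocking_pids(blocked.pid))
WHERE cardinality(pg_blocking_pids(blocked.pid)) > 0;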
Common Query Patterns
```sql
-- Find slow queries currently running
SELECT pid,
       now() - pg_stat_activity.query_start AS duration,
       query, state, wait_event_type, wait_event
FROM pg_stat_activity
WHERE (now() - pg_stat_activity.query_start) > interval '5 seconds'
  AND state != 'idle'
ORDER BY duration DESC;

-- Kill a long-running query
SELECT pg_cancel_backend(pid);    -- graceful
SELECT pg_terminate_backend(pid); -- force

-- Table sizes with indexes
SELECT relname AS table_name,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
       pg_size_pretty(pg_relation_size(relid)) AS data_size,
       pg_size_pretty(pg_indexes_size(relid)) AS index_size
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC;

-- Dead tuples needing vacuum
SELECT relname, n_dead_tup, n_live_tup,
       round(n_dead_tup * 100.0 / NULLIF(n_live_tup, 0), 1) AS dead_pct,
       last_autovacuum
FROM pg_stat_user_tables
WHERE n_dead_tup > 1000
ORDER BY n_dead_tup DESC;
```
Configuration
| Parameter | Description | Default |
|---|---|---|
| connection_string | PostgreSQL connection URL | postgresql://localhost:5432/db |
| statement_timeout | Max query execution time | 30s |
| max_rows | Maximum rows to return in results | 1000 |
| output_format | Result format: table, csv, json | table |
| auto_explain | Automatically run EXPLAIN on slow queries | true |
| read_only | Restrict to SELECT queries only | false |
| search_path | PostgreSQL search path for schemas | public |
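A minimal sketch of how the connection_string parameter breaks down; the function and field names here are illustrative, not the skill's actual internals:

```python
from urllib.parse import urlparse

def parse_conn(url: str) -> dict:
    """Split a postgresql:// URL into its parts (illustrative helper)."""
    p = urlparse(url)
    return {
        "host": p.hostname or "localhost",
        "port": p.port or 5432,           # PostgreSQL's default port
        "dbname": p.path.lstrip("/") or "postgres",
        "user": p.username,
    }

print(parse_conn("postgresql://localhost:5432/db"))
```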
Best Practices
- Always use EXPLAIN ANALYZE before optimizing — Don't guess at query performance. Run EXPLAIN ANALYZE to see actual execution times, row estimates versus actual counts, and buffer usage. Estimated row counts that differ by 10x or more indicate stale statistics.
- Use parameterized queries, never string interpolation — Even for ad-hoc queries, use $1, $2 placeholders with parameter arrays. String concatenation opens SQL injection vulnerabilities and prevents query plan caching.
- Add indexes based on actual query patterns, not assumptions — Check pg_stat_user_tables for tables with high sequential scan counts relative to index scans. Create indexes for WHERE, JOIN, and ORDER BY columns that appear in frequent queries.
- Monitor and limit long-running transactions — Idle transactions hold locks and prevent vacuum from reclaiming dead tuples. Set idle_in_transaction_session_timeout and alert on transactions exceeding 5 minutes.
- Test schema migrations on a copy first — Use pg_dump to create a test database, run migrations there, verify data integrity, then apply to production. Always include rollback scripts for every migration.
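The parameterized-query rule can be demonstrated with Python's built-in sqlite3 module standing in for a PostgreSQL driver (placeholder syntax differs per driver: ? here, %s in psycopg, $1 in asyncpg, but the binding principle is identical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", (1, "alice@example.com"))

# Hostile input is bound as a value, never spliced into the SQL text,
# so the injection attempt is just a string that matches no id.
user_input = "1 OR 1=1"
rows = conn.execute("SELECT email FROM users WHERE id = ?", (user_input,)).fetchall()

# A legitimate lookup binds the same way.
rows_ok = conn.execute("SELECT email FROM users WHERE id = ?", (1,)).fetchall()
print(rows, rows_ok)
```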
Common Issues
Query performance degrades over time despite no code changes. Table statistics become stale as data grows and distribution changes. Run ANALYZE on affected tables to update planner statistics. Check if autovacuum is keeping up by monitoring n_dead_tup in pg_stat_user_tables — high dead tuple counts cause bloat and slow sequential scans.
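The fix described above is a one-liner per table (the table name here is hypothetical):

```sql
ANALYZE orders;           -- refresh planner statistics for one table
VACUUM (ANALYZE) orders;  -- reclaim dead tuples and refresh stats together
```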
Connection pool exhaustion causes "too many connections" errors. PostgreSQL defaults to 100 max connections (max_connections). Use a pooler — PgBouncer or an application-side pool — to cap total connections at roughly 25% of the server maximum. Investigate idle connections with SELECT * FROM pg_stat_activity WHERE state = 'idle' and set idle_in_transaction_session_timeout to reclaim connections stuck in open transactions.
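The ~25% sizing guideline works out to a simple cap; the fraction is a rule of thumb, and the function name is illustrative:

```python
def pool_size(max_connections: int, fraction: float = 0.25) -> int:
    """Suggested per-application pool cap, ~25% of the server's max_connections."""
    return max(1, int(max_connections * fraction))

print(pool_size(100))  # with the default max_connections of 100
```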
Migrations fail partway through, leaving the schema in an inconsistent state. Wrap DDL migrations in transactions where possible. For operations that cannot run inside transactions (CREATE INDEX CONCURRENTLY, ALTER TYPE ADD VALUE), use idempotent statements with IF NOT EXISTS clauses so they can be safely re-run after a failure.
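The idempotent-migration advice can be sketched as follows (table, column, and index names are hypothetical; requires a live server to run):

```sql
-- Safe to re-run after a partial failure: each statement is idempotent.
BEGIN;
ALTER TABLE orders ADD COLUMN IF NOT EXISTS shipped_at timestamptz;
COMMIT;

-- CREATE INDEX CONCURRENTLY cannot run inside a transaction block,
-- so it is issued separately and guarded with IF NOT EXISTS.
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_orders_shipped_at
    ON orders (shipped_at);
```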