
Ultimate PostgreSQL Query Runner Suite

Professional-grade skill designed to execute safe, read-only SQL queries against PostgreSQL databases. Built for Claude Code with best practices and real-world patterns.

Skill · Community · database · v1.0.0 · MIT

PostgreSQL Query Runner Suite

Full-featured PostgreSQL query execution and database management toolkit with query building, performance optimization, migration support, and result formatting.

When to Use This Skill

Choose PostgreSQL Query Runner when:

  • Running ad-hoc queries against PostgreSQL databases
  • Debugging slow queries with EXPLAIN ANALYZE
  • Building and testing complex SQL queries iteratively
  • Performing database migrations or schema changes
  • Exporting query results in various formats

Consider alternatives when:

  • Using NoSQL databases like MongoDB or DynamoDB
  • Need an ORM layer — use Prisma, Drizzle, or TypeORM instead
  • Working with SQLite for local/embedded databases

Quick Start

```bash
# Connect and run queries
claude skill activate ultimate-postgresql-query-runner-suite

# Explore database schema
claude "Show me all tables and their relationships in the orders database"

# Optimize a slow query
claude "Analyze and optimize this query: SELECT * FROM orders JOIN users ON..."
```

Example Usage

```sql
-- Explore database structure
SELECT table_name, column_name, data_type, is_nullable, column_default
FROM information_schema.columns
WHERE table_schema = 'public'
ORDER BY table_name, ordinal_position;

-- Performance analysis
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT o.id, o.total, u.email, COUNT(oi.id) AS item_count
FROM orders o
JOIN users u ON u.id = o.user_id
JOIN order_items oi ON oi.order_id = o.id
WHERE o.created_at > NOW() - INTERVAL '30 days'
GROUP BY o.id, o.total, u.email
ORDER BY o.total DESC
LIMIT 50;

-- Index recommendations based on query patterns
SELECT schemaname, tablename, seq_scan, seq_tup_read, idx_scan, idx_tup_fetch,
       CASE WHEN seq_scan > 0
            THEN round(seq_tup_read::numeric / seq_scan, 0)
            ELSE 0
       END AS avg_rows_per_seq_scan
FROM pg_stat_user_tables
WHERE seq_scan > 100
ORDER BY seq_tup_read DESC;
```

Core Concepts

Query Operations

| Operation | Description | Example |
|---|---|---|
| Schema Inspection | List tables, columns, indexes, constraints | `\dt`, `\d+ tablename` |
| Query Execution | Run SELECT, INSERT, UPDATE, DELETE with results | Standard SQL |
| Query Planning | EXPLAIN ANALYZE with buffer and timing stats | `EXPLAIN (ANALYZE, BUFFERS)` |
| Index Analysis | Find missing indexes, unused indexes | `pg_stat_user_indexes` |
| Lock Monitoring | Identify blocking queries and deadlocks | `pg_locks` + `pg_stat_activity` |
| Connection Stats | Active connections, idle transactions | `pg_stat_activity` |
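The lock-monitoring row above can be made concrete. As a sketch, assuming PostgreSQL 9.6 or later (which provides `pg_blocking_pids()`), a query such as the following pairs each blocked session with the session holding the lock:

```sql
-- Who is blocking whom: join pg_stat_activity against itself
-- via the array of blocking PIDs reported for each backend
SELECT blocked.pid    AS blocked_pid,
       blocked.query  AS blocked_query,
       blocking.pid   AS blocking_pid,
       blocking.query AS blocking_query
FROM pg_stat_activity AS blocked
JOIN LATERAL unnest(pg_blocking_pids(blocked.pid)) AS b(pid) ON true
JOIN pg_stat_activity AS blocking ON blocking.pid = b.pid;
```

Sessions with no blockers produce an empty array, so the lateral `unnest` simply drops them from the result.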

Common Query Patterns

```sql
-- Find slow queries currently running
SELECT pid,
       now() - pg_stat_activity.query_start AS duration,
       query, state, wait_event_type, wait_event
FROM pg_stat_activity
WHERE (now() - pg_stat_activity.query_start) > interval '5 seconds'
  AND state != 'idle'
ORDER BY duration DESC;

-- Kill a long-running query
SELECT pg_cancel_backend(pid);    -- graceful
SELECT pg_terminate_backend(pid); -- force

-- Table sizes with indexes
SELECT relname AS table_name,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
       pg_size_pretty(pg_relation_size(relid)) AS data_size,
       pg_size_pretty(pg_indexes_size(relid)) AS index_size
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC;

-- Dead tuples needing vacuum
SELECT relname, n_dead_tup, n_live_tup,
       round(n_dead_tup * 100.0 / NULLIF(n_live_tup, 0), 1) AS dead_pct,
       last_autovacuum
FROM pg_stat_user_tables
WHERE n_dead_tup > 1000
ORDER BY n_dead_tup DESC;
```

Configuration

| Parameter | Description | Default |
|---|---|---|
| `connection_string` | PostgreSQL connection URL | `postgresql://localhost:5432/db` |
| `statement_timeout` | Max query execution time | `30s` |
| `max_rows` | Maximum rows to return in results | `1000` |
| `output_format` | Result format: `table`, `csv`, `json` | `table` |
| `auto_explain` | Automatically run EXPLAIN on slow queries | `true` |
| `read_only` | Restrict to SELECT queries only | `false` |
| `search_path` | PostgreSQL search path for schemas | `public` |
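Several of these parameters correspond directly to PostgreSQL session settings. As a sketch of how a session could enforce them (the parameter names in the table belong to this skill; the `SET` statements below are stock PostgreSQL):

```sql
-- Enforce statement_timeout, read_only, and search_path per session
SET statement_timeout = '30s';
SET default_transaction_read_only = on;  -- corresponds to read_only = true
SET search_path = public;
```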

Best Practices

  1. Always use EXPLAIN ANALYZE before optimizing — Don't guess at query performance. Run EXPLAIN ANALYZE to see actual execution times, row estimates versus actual counts, and buffer usage. Estimated row counts that differ by 10x or more indicate stale statistics.

  2. Use parameterized queries, never string interpolation — Even for ad-hoc queries, use $1, $2 placeholders with parameter arrays. String concatenation opens SQL injection vulnerabilities and prevents query plan caching.

  3. Add indexes based on actual query patterns, not assumptions — Check pg_stat_user_tables for tables with high sequential scan counts relative to index scans. Create indexes for WHERE, JOIN, and ORDER BY columns that appear in frequent queries.

  4. Monitor and limit long-running transactions — Idle transactions hold locks and prevent vacuum from reclaiming dead tuples. Set idle_in_transaction_session_timeout and alert on transactions exceeding 5 minutes.

  5. Test schema migrations on a copy first — Use pg_dump to create a test database, run migrations there, verify data integrity, then apply to production. Always include rollback scripts for every migration.
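Practice 2 can be illustrated entirely in SQL with a prepared statement, where values travel as `$1`/`$2` parameters rather than being concatenated into the query text. The `orders` table and its columns are placeholder names carried over from the earlier examples:

```sql
-- Parameterized query: the plan is built once, values are bound at EXECUTE time
PREPARE recent_orders (bigint, interval) AS
  SELECT id, total
  FROM orders
  WHERE user_id = $1
    AND created_at > now() - $2;

EXECUTE recent_orders(42, '30 days');
DEALLOCATE recent_orders;
```

Client libraries expose the same mechanism through their own parameter-binding APIs, so ad-hoc queries get the same injection protection without explicit `PREPARE` statements.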

Common Issues

Query performance degrades over time despite no code changes. Table statistics become stale as data grows and distribution changes. Run ANALYZE on affected tables to update planner statistics. Check if autovacuum is keeping up by monitoring n_dead_tup in pg_stat_user_tables — high dead tuple counts cause bloat and slow sequential scans.
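A minimal sketch of the fix, assuming the affected table is the `orders` table from the earlier examples:

```sql
-- Refresh planner statistics for one table, then confirm when it last ran
ANALYZE orders;
SELECT relname, last_analyze, last_autoanalyze, n_dead_tup, n_live_tup
FROM pg_stat_user_tables
WHERE relname = 'orders';
```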

Connection pool exhaustion causes "too many connections" errors. PostgreSQL defaults to 100 max connections. Application-side connection pooling (PgBouncer or application pool) should limit connections to ~25% of max. Investigate idle connections with SELECT * FROM pg_stat_activity WHERE state = 'idle' and set idle_in_transaction_session_timeout to reclaim stuck connections.
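Setting the timeout server-wide might look like the following; it requires superuser privileges, and `'5min'` is an illustrative value, not a recommendation for every workload:

```sql
-- Reclaim connections stuck idle inside a transaction
ALTER SYSTEM SET idle_in_transaction_session_timeout = '5min';
SELECT pg_reload_conf();  -- apply without a server restart
```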

Migrations fail partway through, leaving the schema in an inconsistent state. Wrap DDL migrations in transactions where possible. For operations that cannot run inside transactions (CREATE INDEX CONCURRENTLY, ALTER TYPE ADD VALUE), use idempotent statements with IF NOT EXISTS clauses so they can be safely re-run after a failure.
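A sketch of such an idempotent migration, using placeholder object names; note that the `CONCURRENTLY` index build must sit outside the transaction block:

```sql
-- Transactional part: safe to re-run after a partial failure
BEGIN;
ALTER TABLE orders ADD COLUMN IF NOT EXISTS notes text;
COMMIT;

-- Non-transactional part: CREATE INDEX CONCURRENTLY cannot run inside BEGIN/COMMIT
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_orders_created_at
  ON orders (created_at);
```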
