
Specialist Neon Ally

Comprehensive agent for planning, testing, and applying schema migrations on Neon serverless PostgreSQL. Includes structured workflows, validation checks, and reusable patterns for data and AI teams.

Agent · Cliptics · data ai · v1.0.0 · MIT


An agent specialized in database migrations on Neon's serverless PostgreSQL, performing safe, reversible schema changes using Neon's branching workflow to test migrations against production-equivalent data before applying them.

When to Use This Agent

Choose Neon Ally when:

  • Planning and executing schema migrations on Neon PostgreSQL
  • Using Neon branching to safely test migration scripts
  • Performing zero-downtime migrations on production databases
  • Rolling back failed migrations with minimal data loss
  • Migrating data between Neon branches or from other databases

Consider alternatives when:

  • Setting up a new Neon project from scratch (use the Neon Champion agent)
  • Optimizing query performance without schema changes (use a PostgreSQL DBA agent)
  • Working with non-Neon PostgreSQL migrations (use a general migration tool)

Quick Start

```yaml
# .claude/agents/specialist-neon-ally.yml
name: Neon Migration Ally
model: claude-sonnet-4-20250514
tools:
  - Read
  - Write
  - Bash
  - Glob
  - Grep
prompt: |
  You are a Neon database migration specialist. Perform safe, reversible
  schema changes using Neon's branching workflow. Always create a branch,
  test the migration, verify the results, and only then apply to the
  target branch. Every migration must have a rollback plan.
```

Example invocation:

```shell
claude --agent specialist-neon-ally "Migrate the users table to add a jsonb preferences column with a default value, add a GIN index, and backfill existing rows - all with zero downtime"
```

Core Concepts

Migration Workflow with Branching

1. Create branch from main β†’ migration/add-preferences
2. Apply migration on branch β†’ ALTER TABLE, backfill
3. Test on branch β†’ Run application tests, verify data
4. Review migration β†’ Check performance, rollback plan
5. Apply to main β†’ Execute verified migration
6. Clean up β†’ Delete migration branch
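As a sketch, the six steps above could be driven from a small script. The commands below follow the Neon CLI (`neonctl`); the branch name, SQL file, and the exact flags shown are illustrative assumptions, not a verified invocation:

```python
# Hypothetical driver for the branch -> test -> apply -> clean up cycle.
# Command shapes follow the Neon CLI (neonctl); names and flags are examples.

def migration_plan(branch: str, sql_file: str) -> list[list[str]]:
    """Return the ordered commands for a branch-test-apply migration cycle."""
    return [
        # 1. Create branch from main
        ["neonctl", "branches", "create", "--name", branch, "--parent", "main"],
        # 2. Apply migration on the branch (steps 3-4, test and review,
        #    also run against the branch connection string)
        ["psql", f"$(neonctl connection-string {branch})", "-f", sql_file],
        # 5. Apply the verified migration to main
        ["psql", "$(neonctl connection-string main)", "-f", sql_file],
        # 6. Clean up the migration branch
        ["neonctl", "branches", "delete", branch],
    ]

for cmd in migration_plan("migration/add-preferences", "001_add_preferences.sql"):
    print(" ".join(cmd))
```

Printing the plan before executing it doubles as a dry run for review.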

Migration Safety Levels

| Operation | Risk | Strategy |
|---|---|---|
| Add column (nullable) | Low | Direct ALTER TABLE |
| Add column (NOT NULL + default) | Low | ALTER with DEFAULT |
| Add index | Medium | CREATE INDEX CONCURRENTLY |
| Rename column | High | Add new β†’ migrate β†’ drop old |
| Change column type | High | Add new β†’ convert β†’ swap |
| Drop column | High | Remove code refs β†’ deploy β†’ drop |
| Drop table | Critical | Archive β†’ verify β†’ drop |

Zero-Downtime Migration Pattern

```sql
-- Step 1: Add new column (no lock)
ALTER TABLE users ADD COLUMN preferences jsonb DEFAULT '{}';

-- Step 2: Create index concurrently (no lock)
CREATE INDEX CONCURRENTLY idx_users_preferences
  ON users USING GIN (preferences);

-- Step 3: Backfill in batches (minimal lock)
UPDATE users
SET preferences = '{"theme": "light"}'
WHERE id BETWEEN 1 AND 10000
  AND preferences = '{}';
-- Repeat for remaining batches

-- Step 4: Add constraint if needed (validated separately)
ALTER TABLE users
  ADD CONSTRAINT chk_preferences CHECK (preferences IS NOT NULL) NOT VALID;
ALTER TABLE users VALIDATE CONSTRAINT chk_preferences;
```

Configuration

| Parameter | Description | Default |
|---|---|---|
| branch_prefix | Prefix for migration branches | migration/ |
| batch_size | Rows per backfill batch | 10,000 |
| lock_timeout_ms | Maximum lock wait time | 5,000 |
| statement_timeout_ms | Maximum statement execution time | 30,000 |
| concurrent_index | Use CONCURRENTLY for indexes | true |
| auto_rollback | Auto-rollback on test failure | true |
| cleanup_branches | Delete branches after merge | true |

Best Practices

  1. Always test migrations on a Neon branch first. Create a branch from production, run the migration, then execute your full test suite against the branched database. Neon branches are instant, copy-on-write copies of your database with real production data. This catches data-dependent migration failures that empty test databases miss, like unique constraint violations on existing data.

  2. Use batched updates for large table backfills. Updating millions of rows in a single transaction locks the table and may timeout. Instead, update in batches of 10,000-50,000 rows using a loop with explicit commits between batches. Add a brief delay between batches to reduce write pressure. Monitor replication lag if using read replicas.
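The batching logic is simple to get wrong at the boundaries. A minimal sketch of an inclusive id-range generator, which a backfill loop could feed into per-batch UPDATE statements (the SQL template is illustrative):

```python
def batch_ranges(max_id: int, batch_size: int = 10_000):
    """Yield inclusive (start, end) id ranges covering 1..max_id."""
    start = 1
    while start <= max_id:
        end = min(start + batch_size - 1, max_id)
        yield start, end
        start = end + 1

for lo, hi in batch_ranges(25_000):
    # Each range maps to one batched UPDATE, committed independently:
    #   UPDATE users SET preferences = '...' WHERE id BETWEEN {lo} AND {hi};
    print(lo, hi)  # (1, 10000), (10001, 20000), (20001, 25000)
```

Committing between batches keeps each lock short; a `time.sleep()` between iterations adds the recommended write-pressure pause.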

  3. Make every migration reversible. Write both the up and down migration before executing. For additive changes (new columns, new indexes), the rollback is simple deletion. For destructive changes (dropping columns, changing types), the rollback requires data recoveryβ€”which means keeping the old structure until the new one is proven stable. Test the rollback on your migration branch.

  4. Use CREATE INDEX CONCURRENTLY for production indexes. Standard CREATE INDEX locks the table for writes during the entire build process. CONCURRENTLY builds the index without holding a write lock, though it takes longer and uses more resources. For production databases with any traffic, the concurrency trade-off is always worth it. Note that CONCURRENTLY cannot run inside a transaction.

  5. Deploy code changes before and after schema changes. Follow the expand-contract pattern: first deploy code that works with both old and new schemas, then apply the migration, then deploy code that removes old schema support. This ensures zero downtime because the running application is always compatible with the current database state.
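A sketch of the expand-phase code, assuming a hypothetical `users` row with a legacy `theme` column being replaced by the new `preferences` jsonb column; the column names are illustrative:

```python
def get_theme(user_row: dict) -> str:
    """Expand phase: prefer the new preferences column, fall back to the
    legacy theme column so the code works with both schemas."""
    prefs = user_row.get("preferences") or {}
    if "theme" in prefs:
        return prefs["theme"]
    # Legacy column; this fallback is deleted in the contract phase,
    # after the migration has backfilled every row.
    return user_row.get("theme", "light")
```

Because this function tolerates rows in either shape, it can be deployed before the migration runs and stays correct while the backfill is in flight.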

Common Issues

Migration succeeds on branch but fails on production. This happens when production has concurrent writes that conflict with the migration. Branches capture a point-in-time snapshot, so they don't replicate write contention. For migrations involving locks, set lock_timeout to 5 seconds so the migration fails fast rather than blocking production queries. Retry during lower traffic periods if the lock cannot be acquired.
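The fail-fast-then-retry behavior can be sketched as a small wrapper. Here the database-side `SET lock_timeout = '5s'` failure is simulated with `TimeoutError`; in a real driver it would surface as the library's lock-not-available error:

```python
import time

def run_with_lock_retry(execute, attempts: int = 3, pause_s: float = 0.0):
    """Run a migration callable that fails fast on lock contention
    (simulated as TimeoutError); retry after a pause, re-raising on the
    final attempt so the failure is visible."""
    for attempt in range(1, attempts + 1):
        try:
            return execute()
        except TimeoutError:
            if attempt == attempts:
                raise
            time.sleep(pause_s)  # in practice: wait for a lower-traffic window
```

Each attempt blocks production for at most the lock timeout, rather than queueing behind long-running queries indefinitely.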

Backfill takes too long on large tables. Estimate the total time before starting: measure one batch's duration and multiply by the number of batches. For multi-hour backfills, run them as background jobs that can be paused and resumed. Track progress by recording the last processed ID. Use pg_stat_progress_create_index for index builds and custom logging for data backfills.
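The estimate described above is plain arithmetic; a minimal sketch (the numbers in the example are made up):

```python
import math

def estimate_backfill(total_rows: int, batch_size: int,
                      seconds_per_batch: float, pause_s: float = 0.0) -> float:
    """Estimated wall-clock seconds: number of batches times the measured
    per-batch duration, plus the pause inserted between batches."""
    batches = math.ceil(total_rows / batch_size)
    return batches * seconds_per_batch + max(batches - 1, 0) * pause_s

# e.g. 50M rows in 10k batches at a measured 1.2 s/batch:
print(estimate_backfill(50_000_000, 10_000, 1.2))  # 6000.0 seconds
```

If the estimate lands in the hours, that is the signal to run the backfill as a pausable background job rather than a foreground migration step.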

Rollback after partial migration leaves database in inconsistent state. This is why migrations should be designed as atomic steps that each leave the database in a valid state. If step 3 of 5 fails, the database should be usable with only steps 1-2 applied. Design each step to be independently deployable and reversible. Never create dependencies between migration steps that require all-or-nothing completion.
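A sketch of a step runner with that property, assuming each step carries its own rollback; the step structure and names are illustrative:

```python
def apply_steps(steps, log):
    """Apply migration steps in order. Each step is (name, up, down), where
    up/down are callables. On failure, stop immediately: because no step
    depends on a later one, the database is valid with only the completed
    steps applied, and their `down` callables remain available."""
    applied = []
    for name, up, down in steps:
        try:
            up()
            applied.append((name, down))
            log.append(f"applied {name}")
        except Exception:
            log.append(f"failed {name}; stopping with {len(applied)} steps applied")
            break
    return applied
```

Running the rollbacks is then a choice, not a requirement: iterating `applied` in reverse and calling each `down` undoes the partial migration, but leaving it in place is also safe.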
