
Database Migration Generator

Creates safe, reversible database migration scripts from schema changes. Supports PostgreSQL, MySQL, and SQLite across common ORMs.


Create safe, reversible database migration scripts with proper sequencing, rollback support, and data transformation logic for schema changes across environments.

When to Use This Template

Choose Database Migration Generator when:

  • You need to alter tables, add columns, or modify indexes in production databases
  • Schema changes require careful data migration with zero-downtime deployment
  • You want to generate migration files that follow your ORM's conventions (Prisma, Knex, TypeORM)
  • Multiple environments need synchronized schema changes with rollback capability

Consider alternatives when:

  • You are designing a brand-new database schema from scratch (use a schema designer)
  • You need one-time data cleanup scripts without schema changes
  • Your database is schemaless (document stores with no migration tooling)

Quick Start

```yaml
# .claude/skills/database-migration-generator.yml
name: database-migration-generator
description: Generate database migration scripts with rollback support
prompt: |
  Generate a database migration for the requested schema change.
  Follow these rules:
  1. Create both "up" and "down" migration functions
  2. Use transactions for multi-step changes
  3. Handle existing data migration when altering columns
  4. Add appropriate indexes for new columns
  5. Include comments explaining each change
  6. Follow the project's migration naming convention
  Output migration files in the project's ORM format.
```

Example invocation:

```bash
claude "Add a 'status' enum column to the orders table with values: pending, processing, shipped, delivered, cancelled. Default to 'pending' for existing rows."
```

Generated migration (Knex.js):

```js
// migrations/20250115_add_status_to_orders.js
exports.up = async function (knex) {
  // Create the enum type first so the column can reference it
  await knex.raw(`
    CREATE TYPE order_status AS ENUM (
      'pending', 'processing', 'shipped', 'delivered', 'cancelled'
    )
  `);

  // Add the column with a default so existing rows get 'pending'
  await knex.schema.alterTable('orders', (table) => {
    table
      .enum('status', null, {
        useNative: true,
        existingType: true,
        enumName: 'order_status',
      })
      .defaultTo('pending')
      .notNullable();
    table.index('status', 'idx_orders_status');
  });
};

exports.down = async function (knex) {
  await knex.schema.alterTable('orders', (table) => {
    table.dropIndex('status', 'idx_orders_status');
    table.dropColumn('status');
  });
  await knex.raw('DROP TYPE order_status');
};
```

Core Concepts

Migration Safety Levels

| Level | Strategy | Downtime | Use Case |
|-------|----------|----------|----------|
| Safe | Add nullable column, backfill, add constraint | None | Production changes |
| Cautious | Lock table briefly, alter, release | Seconds | Low-traffic windows |
| Rebuild | Create new table, copy data, swap | Minutes | Major restructuring |
| Destructive | Drop column/table directly | None | Dev/staging cleanup |
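
The Rebuild strategy can be sketched as an ordered sequence of statements: clone the table structure, copy the rows, then swap names inside a transaction. This is a minimal sketch assuming PostgreSQL syntax; the `rebuildStatements` helper and the `events` table are hypothetical, not part of the template.

```javascript
// Hypothetical helper producing the statement sequence for a
// create-copy-swap rebuild of a table (PostgreSQL syntax assumed).
function rebuildStatements(table) {
  const tmp = `${table}_new`;
  const old = `${table}_old`;
  return [
    `CREATE TABLE ${tmp} (LIKE ${table} INCLUDING ALL)`, // clone structure + indexes
    `INSERT INTO ${tmp} SELECT * FROM ${table}`,         // copy data (slow part)
    'BEGIN',
    `ALTER TABLE ${table} RENAME TO ${old}`,             // swap old table out
    `ALTER TABLE ${tmp} RENAME TO ${table}`,             // swap new table in
    'COMMIT',
    `DROP TABLE ${old}`,                                 // cleanup after verification
  ];
}
```

The rename swap itself is fast; the downtime window in the table above comes from the data copy, during which writes to the original table must be paused or replayed.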

Zero-Downtime Migration Pattern

```js
// Step 1: Add new column (nullable, no constraint)
exports.up_step1 = async (knex) => {
  await knex.schema.alterTable('users', (table) => {
    table.string('email_normalized').nullable();
  });
};

// Step 2: Backfill data (run as a background job).
// Always re-query from the start: updated rows drop out of the
// whereNull set, so paging with an increasing offset would skip rows.
exports.up_step2 = async (knex) => {
  const batchSize = 1000;
  let rows;
  do {
    rows = await knex('users')
      .whereNull('email_normalized')
      .limit(batchSize);
    for (const row of rows) {
      await knex('users')
        .where('id', row.id)
        .update({ email_normalized: row.email.toLowerCase().trim() });
    }
  } while (rows.length === batchSize);
};

// Step 3: Add constraint after backfill completes
exports.up_step3 = async (knex) => {
  await knex.schema.alterTable('users', (table) => {
    table.string('email_normalized').notNullable().alter();
    table.unique('email_normalized', 'uq_users_email_normalized');
  });
};
```

Multi-Database Support

```prisma
// Prisma migration
model Order {
  id        String      @id @default(cuid())
  status    OrderStatus @default(PENDING)
  createdAt DateTime    @default(now())

  @@index([status])
}

enum OrderStatus {
  PENDING
  PROCESSING
  SHIPPED
  DELIVERED
  CANCELLED
}
```

```python
# SQLAlchemy / Alembic migration
def upgrade():
    op.add_column('orders',
        sa.Column('status',
            sa.Enum('pending', 'processing', 'shipped',
                    'delivered', 'cancelled', name='order_status'),
            nullable=False,
            server_default='pending'))
    op.create_index('idx_orders_status', 'orders', ['status'])

def downgrade():
    op.drop_index('idx_orders_status', 'orders')
    op.drop_column('orders', 'status')
    sa.Enum(name='order_status').drop(op.get_bind())
```

Configuration

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| orm | string | "auto" | Target ORM: prisma, knex, typeorm, sequelize, alembic |
| database | string | "postgresql" | Target database: postgresql, mysql, sqlite |
| transactional | boolean | true | Wrap migrations in transactions |
| batchSize | number | 1000 | Rows per batch during data migration |
| generateRollback | boolean | true | Always generate down/rollback migration |
| namingConvention | string | "timestamp" | File naming: timestamp, sequential, descriptive |

Best Practices

  1. Always write reversible migrations — Every up migration should have a corresponding down that restores the previous state exactly. Test rollbacks in staging before deploying to production. Irreversible changes (like dropping a column with data) should log warnings.

  2. Use transactions for multi-step changes — Wrap related alterations in a single transaction so that if step 3 of 5 fails, steps 1-2 are rolled back automatically. This prevents half-applied migrations that leave the schema in an inconsistent state.

  3. Separate schema changes from data migrations — Create one migration for the DDL change (add column) and a separate one for the data backfill. This keeps each migration fast and allows the data migration to run asynchronously without holding locks.

  4. Add indexes concurrently in PostgreSQL — Use CREATE INDEX CONCURRENTLY to avoid locking the table during index creation on large tables. Standard CREATE INDEX blocks writes for the entire duration on tables with millions of rows. Note that CONCURRENTLY cannot run inside a transaction block, so the migration must opt out of transactional wrapping.

  5. Test migrations against production-like data volumes — A migration that runs in 2 seconds on 1,000 rows may take 45 minutes on 10 million rows. Always benchmark against realistic data volumes and set appropriate lock timeouts to fail fast rather than block.
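
Because CREATE INDEX CONCURRENTLY refuses to run inside a transaction, a Knex migration using it must disable Knex's default per-migration transaction via `exports.config = { transaction: false }`. A minimal sketch (index and table names are illustrative):

```javascript
// Knex migration sketch: build an index without blocking writes.
// Disabling the wrapping transaction is required for CONCURRENTLY.
const config = { transaction: false };

const indexSql =
  'CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_orders_status ON orders (status)';

async function up(knex) {
  // Runs outside a transaction; PostgreSQL builds the index while
  // allowing concurrent reads and writes on the table.
  await knex.raw(indexSql);
}

async function down(knex) {
  await knex.raw('DROP INDEX CONCURRENTLY IF EXISTS idx_orders_status');
}

module.exports = { config, up, down };
```

One caveat with non-transactional migrations: a failed CONCURRENTLY build leaves an invalid index behind, which must be dropped before retrying.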

Common Issues

Migration timeout on large tables — Adding a column with a default value to a table with millions of rows can lock the table and timeout. In PostgreSQL 11+, adding a column with a non-volatile default is instant. For older versions or MySQL, add the column as nullable first, backfill in batches, then add the NOT NULL constraint.

Foreign key constraint failures during rollback — The down migration tries to drop a column that other tables reference via foreign keys. Always drop dependent foreign keys before dropping the referenced column, and recreate them in the up migration. Check information_schema.referential_constraints before generating rollbacks.
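
The dependency check above can be sketched as a query against information_schema that lists foreign keys pointing at a given table. This is a sketch assuming PostgreSQL; the `referencingConstraintsSql` helper is hypothetical, and running the query needs a live connection (e.g. `knex.raw(sql, [tableName])`).

```javascript
// Hypothetical helper: SQL listing foreign-key constraints that
// reference the given table, so rollbacks can drop them first.
// The ? placeholder is bound to the referenced table's name.
function referencingConstraintsSql() {
  return `
    SELECT tc.table_name, tc.constraint_name
    FROM information_schema.table_constraints tc
    JOIN information_schema.constraint_column_usage ccu
      ON ccu.constraint_name = tc.constraint_name
    WHERE tc.constraint_type = 'FOREIGN KEY'
      AND ccu.table_name = ?
  `.trim();
}
```

In a multi-schema database the join should also match on constraint_schema to avoid name collisions; it is omitted here for brevity.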

Enum type conflicts across migrations — Creating an enum type that already exists (from a failed previous migration) causes the migration to fail. PostgreSQL does not support CREATE TYPE IF NOT EXISTS, so check pg_type for the type's existence first, or wrap the creation in a DO block that swallows the duplicate_object error. When rolling back, only drop the enum if no other columns reference it.
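
The duplicate_object guard can be generated as a DO block, which is the standard idempotent-creation idiom for PostgreSQL enums. A sketch (the `createEnumIfAbsentSql` helper is hypothetical, and values are assumed not to contain quotes):

```javascript
// Hypothetical helper: idempotent enum creation for PostgreSQL.
// Wraps CREATE TYPE in a plpgsql DO block that ignores the
// duplicate_object error raised when the type already exists.
function createEnumIfAbsentSql(name, values) {
  const list = values.map((v) => `'${v}'`).join(', ');
  return `
    DO $$ BEGIN
      CREATE TYPE ${name} AS ENUM (${list});
    EXCEPTION
      WHEN duplicate_object THEN NULL;
    END $$;
  `.trim();
}
```

Note the guard only skips creation; if the existing type has different values than requested, the migration should detect and report that mismatch rather than silently proceed.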
