
Pro BullMQ Workspace

Enterprise-grade skill for BullMQ: expert patterns for Redis-backed job queues. Includes structured workflows, validation checks, and reusable patterns for development.

Skill · Cliptics · development · v1.0.0 · MIT

Pro BullMQ Workspace

A Claude Code skill for building robust job queues and background processing with BullMQ. Covers queue setup, job scheduling, worker configuration, retry strategies, rate limiting, prioritization, and monitoring for Node.js/TypeScript applications using Redis-backed queues.

When to Use This Skill

Choose Pro BullMQ Workspace when:

  • You need background job processing in a Node.js application
  • You want to implement reliable queues with retries and error handling
  • You need scheduled/recurring jobs (cron-like functionality)
  • You want rate-limited processing for API calls or external services
  • You need job prioritization and concurrent worker management

Consider alternatives when:

  • You need AWS-specific queuing (use SQS with AWS Serverless skill)
  • You want simple in-process scheduling (use node-cron)
  • You need stream processing (use Kafka or Redis Streams)

Quick Start

```bash
# Install BullMQ
npm install bullmq ioredis

# Install the skill
claude install pro-bullmq-workspace

# Set up a basic queue
claude "Set up a BullMQ queue for sending emails: add jobs, process with workers, handle failures with retries"

# Add scheduled jobs
claude "Add a scheduled job that runs daily at midnight to generate analytics reports using BullMQ"

# Implement rate limiting
claude "Implement a rate-limited queue for API calls: max 10 requests per second with proper backpressure"
```

Core Concepts

BullMQ Architecture

```
Producer → Queue (Redis) → Worker → Result

Components:
├── Queue: Named job container stored in Redis
├── Job: Unit of work with data, options, and lifecycle
├── Worker: Processes jobs from the queue
├── QueueScheduler: Manages delayed/repeatable jobs (BullMQ v1 only; not needed since v2, where workers handle delayed jobs)
├── QueueEvents: Event listener for job lifecycle events
└── FlowProducer: Creates job dependency chains
```

Job Options

| Option | Purpose | Example |
| --- | --- | --- |
| `attempts` | Max retry count | `{ attempts: 3 }` |
| `backoff` | Retry delay strategy | `{ type: 'exponential', delay: 1000 }` |
| `delay` | Delay before first attempt | `{ delay: 5000 }` (5 seconds) |
| `priority` | Job priority (lower = higher) | `{ priority: 1 }` |
| `repeat` | Recurring schedule | `{ pattern: '0 0 * * *' }` (daily midnight) |
| `removeOnComplete` | Cleanup after success | `{ removeOnComplete: 100 }` (keep last 100) |
| `removeOnFail` | Cleanup after failure | `{ removeOnFail: 500 }` |

Worker Configuration

```typescript
import { Worker, Job } from 'bullmq';

const worker = new Worker(
  'email-queue',
  async (job: Job) => {
    const { to, subject, body } = job.data;
    await sendEmail(to, subject, body);
    return { sent: true, timestamp: Date.now() };
  },
  {
    connection: { host: 'localhost', port: 6379 },
    concurrency: 5, // Process 5 jobs simultaneously
    limiter: {
      max: 10,        // Max 10 jobs
      duration: 1000, // Per 1 second
    },
  }
);

worker.on('completed', (job) => console.log(`Job ${job.id} completed`));
worker.on('failed', (job, err) => console.error(`Job ${job?.id} failed:`, err));
```

Configuration

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `redis_url` | string | `"redis://localhost:6379"` | Redis connection URL |
| `concurrency` | number | `5` | Concurrent job processing |
| `max_retries` | number | `3` | Default retry attempts |
| `backoff_type` | string | `"exponential"` | Backoff: exponential, fixed, custom |
| `default_timeout` | number | `30000` | Job timeout in milliseconds |
| `remove_on_complete` | number | `100` | Keep last N completed jobs |
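One way these parameters could be carried in code is a defaults object plus a merge helper. A sketch; `SkillConfig` and `resolveConfig` are illustrative names, not part of BullMQ:

```typescript
// Defaults mirroring the table above.
interface SkillConfig {
  redis_url: string;
  concurrency: number;
  max_retries: number;
  backoff_type: 'exponential' | 'fixed' | 'custom';
  default_timeout: number;
  remove_on_complete: number;
}

const defaults: SkillConfig = {
  redis_url: 'redis://localhost:6379',
  concurrency: 5,
  max_retries: 3,
  backoff_type: 'exponential',
  default_timeout: 30000,
  remove_on_complete: 100,
};

// Merge user overrides onto the defaults.
function resolveConfig(overrides: Partial<SkillConfig> = {}): SkillConfig {
  return { ...defaults, ...overrides };
}
```

For example, `resolveConfig({ concurrency: 10 })` keeps every default except concurrency.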

Best Practices

  1. Use exponential backoff for retries — When a job fails because an external service is down, retrying immediately just adds load. Exponential backoff (1s, 2s, 4s, 8s) gives the service time to recover and prevents retry storms.

  2. Set job timeouts — A job stuck waiting for an API response will block a worker slot indefinitely. Set lockDuration on the worker and ensure all async operations have timeouts. Stale jobs should fail and retry rather than hang forever.

  3. Monitor queue health — Track queue length, processing rate, failure rate, and worker utilization. Use BullMQ's built-in events or tools like Bull Board for a dashboard. A growing queue length is an early warning sign.

  4. Make jobs idempotent — Jobs may be processed more than once due to retries or worker crashes. Design job handlers so processing the same job twice produces the same result. Use unique identifiers to deduplicate side effects.

  5. Limit completed job retention — By default, BullMQ keeps all completed jobs in Redis forever. Use removeOnComplete: { count: 100 } to keep only the last 100 completed jobs. This prevents Redis memory from growing unbounded.
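The schedule in practice 1 can be computed directly. A sketch of the doubling sequence; BullMQ's built-in exponential strategy may round differently, so treat this as the shape of the curve rather than its exact internals:

```typescript
// Delay before retry n for { type: 'exponential', delay: baseDelayMs }:
// the base delay doubles with each failed attempt.
function exponentialDelay(baseDelayMs: number, attemptsMade: number): number {
  return baseDelayMs * 2 ** (attemptsMade - 1);
}

// With delay: 1000 → 1000, 2000, 4000, 8000 ms across attempts 1-4.
const schedule = [1, 2, 3, 4].map((n) => exponentialDelay(1000, n));
```

The same function shape can be supplied as a custom backoff strategy when the built-in curve doesn't fit.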

Common Issues

Jobs are stuck in "waiting" — No worker is processing the queue. Ensure a worker is running and connected to the same Redis instance with the same queue name. Check Redis connectivity from the worker process.

Redis memory growing continuously — Completed and failed jobs accumulate. Set removeOnComplete and removeOnFail to limit retention. Also check for jobs with large payloads — store data in a database and pass only IDs in job data.

Jobs processed out of order — BullMQ processes jobs concurrently by default, so order is not guaranteed. If order matters, set concurrency: 1 on the worker or use job dependencies with FlowProducer to enforce sequencing.
