Pro BullMQ Workspace
A Claude Code skill for building robust job queues and background processing with BullMQ. Covers queue setup, job scheduling, worker configuration, retry strategies, rate limiting, prioritization, and monitoring for Node.js/TypeScript applications using Redis-backed queues.
When to Use This Skill
Choose Pro BullMQ Workspace when:
- You need background job processing in a Node.js application
- You want to implement reliable queues with retries and error handling
- You need scheduled/recurring jobs (cron-like functionality)
- You want rate-limited processing for API calls or external services
- You need job prioritization and concurrent worker management
Consider alternatives when:
- You need AWS-specific queuing (use SQS with AWS Serverless skill)
- You want simple in-process scheduling (use node-cron)
- You need stream processing (use Kafka or Redis Streams)
Quick Start
```bash
# Install BullMQ
npm install bullmq ioredis

# Install the skill
claude install pro-bullmq-workspace

# Set up a basic queue
claude "Set up a BullMQ queue for sending emails: add jobs, process with workers, handle failures with retries"

# Add scheduled jobs
claude "Add a scheduled job that runs daily at midnight to generate analytics reports using BullMQ"

# Implement rate limiting
claude "Implement a rate-limited queue for API calls: max 10 requests per second with proper backpressure"
```
Core Concepts
BullMQ Architecture
Producer → Queue (Redis) → Worker → Result
Components:
├── Queue: Named job container stored in Redis
├── Job: Unit of work with data, options, and lifecycle
├── Worker: Processes jobs from the queue
├── QueueScheduler: Manages delayed and repeatable jobs (BullMQ < 3 only; newer versions handle this inside the worker)
├── QueueEvents: Event listener for job lifecycle events
└── FlowProducer: Creates job dependency chains
Job Options
| Option | Purpose | Example |
|---|---|---|
| `attempts` | Max retry count | `{ attempts: 3 }` |
| `backoff` | Retry delay strategy | `{ type: 'exponential', delay: 1000 }` |
| `delay` | Delay before first attempt | `{ delay: 5000 }` (5 seconds) |
| `priority` | Job priority (lower = higher) | `{ priority: 1 }` |
| `repeat` | Recurring schedule | `{ pattern: '0 0 * * *' }` (daily at midnight) |
| `removeOnComplete` | Cleanup after success | `{ removeOnComplete: 100 }` (keep last 100) |
| `removeOnFail` | Cleanup after failure | `{ removeOnFail: 500 }` |
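Several of these options are typically combined into one policy. A sketch under stated assumptions: the `generate-report` job name and payload are hypothetical, and only BullMQ's `queue.add(name, data, opts)` signature is assumed (typed structurally here so the queue instance can be created elsewhere):

```typescript
// Hypothetical retry/retention policy combining the options above.
export const reportJobOptions = {
  attempts: 3,                                   // up to 3 tries in total
  backoff: { type: 'exponential', delay: 1000 }, // wait 1s, 2s, 4s between tries
  priority: 1,                                   // lower number = higher priority
  removeOnComplete: { count: 100 },              // keep only the last 100 successes
  removeOnFail: { count: 500 },                  // keep the last 500 failures for debugging
};

// Minimal structural type for the part of BullMQ's Queue used here.
type AddableQueue = {
  add: (name: string, data: unknown, opts?: object) => Promise<unknown>;
};

// One-off job with the policy above.
export function enqueueReport(queue: AddableQueue, userId: string) {
  return queue.add('generate-report', { userId }, reportJobOptions);
}

// Repeatable job: the cron pattern fires it every midnight.
export function scheduleDailyReport(queue: AddableQueue) {
  return queue.add('daily-report', {}, {
    ...reportJobOptions,
    repeat: { pattern: '0 0 * * *' },
  });
}
```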
Worker Configuration
```typescript
import { Worker, Job } from 'bullmq';

const worker = new Worker(
  'email-queue',
  async (job: Job) => {
    const { to, subject, body } = job.data;
    await sendEmail(to, subject, body);
    return { sent: true, timestamp: Date.now() };
  },
  {
    connection: { host: 'localhost', port: 6379 },
    concurrency: 5, // Process 5 jobs simultaneously
    limiter: {
      max: 10,        // Max 10 jobs
      duration: 1000, // Per 1 second
    },
  },
);

worker.on('completed', (job) => console.log(`Job ${job.id} completed`));
worker.on('failed', (job, err) => console.error(`Job ${job?.id} failed:`, err));
```
Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
| `redis_url` | string | `"redis://localhost:6379"` | Redis connection URL |
| `concurrency` | number | `5` | Concurrent job processing |
| `max_retries` | number | `3` | Default retry attempts |
| `backoff_type` | string | `"exponential"` | Backoff: exponential, fixed, custom |
| `default_timeout` | number | `30000` | Job timeout in milliseconds |
| `remove_on_complete` | number | `100` | Keep last N completed jobs |
Best Practices
- **Use exponential backoff for retries.** When a job fails because an external service is down, retrying immediately just adds load. Exponential backoff (1s, 2s, 4s, 8s) gives the service time to recover and prevents retry storms.
- **Set job timeouts.** A job stuck waiting for an API response will block a worker slot indefinitely. Set `lockDuration` on the worker and ensure all async operations have timeouts. Stale jobs should fail and retry rather than hang forever.
- **Monitor queue health.** Track queue length, processing rate, failure rate, and worker utilization. Use BullMQ's built-in events or tools like Bull Board for a dashboard. A growing queue length is an early warning sign.
- **Make jobs idempotent.** Jobs may be processed more than once due to retries or worker crashes. Design job handlers so processing the same job twice produces the same result. Use unique identifiers to deduplicate side effects.
- **Limit completed job retention.** By default, BullMQ keeps completed jobs in Redis forever. Use `removeOnComplete: { count: 100 }` to keep only the last 100 completed jobs. This prevents Redis memory from growing unbounded.
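The idempotency advice can be sketched as follows. The payment scenario, `processedOrders` store, and `orderId` key are hypothetical, and in production the dedup record would live in a durable store rather than an in-memory set; with BullMQ you can additionally pass a deterministic `jobId` (e.g. `charge:${orderId}`) so repeated `queue.add()` calls for the same work are ignored:

```typescript
// Stand-in for a durable store (e.g. a database table keyed by orderId).
const processedOrders = new Set<string>();

// Idempotent handler: processing the same job twice has the same effect
// as processing it once.
export async function handleChargeJob(job: { data: { orderId: string } }) {
  const { orderId } = job.data;

  // A retried or duplicated job finds its work already recorded and skips it.
  if (processedOrders.has(orderId)) {
    return { charged: false, deduped: true };
  }

  // ... call the payment provider exactly once here ...

  processedOrders.add(orderId);
  return { charged: true, deduped: false };
}
```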
Common Issues
**Jobs are stuck in "waiting".** No worker is processing the queue. Ensure a worker is running and connected to the same Redis instance with the same queue name. Check Redis connectivity from the worker process.

**Redis memory growing continuously.** Completed and failed jobs accumulate. Set `removeOnComplete` and `removeOnFail` to limit retention. Also check for jobs with large payloads: store data in a database and pass only IDs in job data.

**Jobs processed out of order.** BullMQ processes jobs concurrently by default, so order is not guaranteed. If order matters, set `concurrency: 1` on the worker or use job dependencies with `FlowProducer` to enforce sequencing.