Skill · Cliptics · workflow automation · v1.0.0 · MIT

Trigger.dev Smart

A background job and workflow automation skill for building long-running tasks, scheduled jobs, and event-driven workflows in serverless environments using Trigger.dev v3.

When to Use

Choose Trigger.dev when:

  • Building background jobs that exceed serverless function time limits
  • Creating scheduled tasks, webhooks, and event-triggered workflows
  • Processing long-running operations like file conversions, data imports, and AI pipelines
  • Implementing job queues with retries, concurrency control, and monitoring

Consider alternatives when:

  • Simple cron jobs on a server — use system cron or node-cron
  • Real-time event streaming — use Kafka or Redis Streams
  • Complex DAG-based pipelines — use Apache Airflow

Quick Start

```shell
# Install Trigger.dev v3
npm install @trigger.dev/sdk
npx trigger.dev@latest init
npx trigger.dev@latest dev
```

```typescript
import { task, schedules } from '@trigger.dev/sdk/v3';

// Define a background task
export const processUpload = task({
  id: 'process-upload',
  maxDuration: 300, // 5 minutes max
  retry: {
    maxAttempts: 3,
    minTimeoutInMs: 1000,
    maxTimeoutInMs: 10000,
    factor: 2
  },
  run: async (payload: { fileUrl: string; userId: string }) => {
    // Step 1: Download file (downloadFile and the helpers below are app-specific)
    const file = await downloadFile(payload.fileUrl);

    // Step 2: Process file
    const processed = await convertFile(file, 'webp');

    // Step 3: Upload result
    const resultUrl = await uploadToStorage(processed);

    // Step 4: Notify user
    await notifyUser(payload.userId, {
      message: 'File processed successfully',
      url: resultUrl
    });

    return { resultUrl };
  }
});

// Scheduled task
export const dailyCleanup = schedules.task({
  id: 'daily-cleanup',
  cron: '0 2 * * *', // 2 AM daily
  run: async () => {
    const deleted = await cleanupExpiredFiles();
    const archived = await archiveOldRecords(30); // 30 days
    return { deleted, archived };
  }
});

// Task with subtasks
export const batchProcess = task({
  id: 'batch-process',
  run: async (payload: { items: string[] }) => {
    // Process items in parallel batches; batchTriggerAndWait resolves
    // with a result object whose `runs` array holds each child outcome
    const { runs } = await processUpload.batchTriggerAndWait(
      payload.items.map(item => ({
        payload: { fileUrl: item, userId: 'system' }
      }))
    );
    return {
      total: runs.length,
      succeeded: runs.filter(r => r.ok).length,
      failed: runs.filter(r => !r.ok).length
    };
  }
});

// Trigger from your API (e.g. a route handler in a separate file)
import { tasks } from '@trigger.dev/sdk/v3';

export async function POST(request: Request) {
  const body = await request.json();
  const handle = await tasks.trigger<typeof processUpload>('process-upload', {
    fileUrl: body.fileUrl,
    userId: body.userId
  });
  return Response.json({ jobId: handle.id });
}
```

Core Concepts

Task Types

| Type | Trigger | Use Case |
| --- | --- | --- |
| `task` | Programmatic via `trigger()` | Background processing |
| `schedules.task` | Cron schedule | Recurring jobs |
| `webhook` | HTTP webhook | External event handling |
| `batchTrigger` | Parallel execution | Bulk processing |

Concurrency and Queue Management

```typescript
import { task } from '@trigger.dev/sdk/v3';

export const rateLimitedTask = task({
  id: 'api-sync',
  queue: {
    concurrencyLimit: 5 // Max 5 parallel runs
  },
  retry: {
    maxAttempts: 5,
    minTimeoutInMs: 2000,
    factor: 3
  },
  run: async (payload: { endpoint: string }) => {
    const result = await fetch(payload.endpoint);
    if (result.status === 429) {
      throw new Error('Rate limited'); // Will retry with backoff
    }
    return result.json();
  }
});
```

Configuration

| Option | Description | Default |
| --- | --- | --- |
| `id` | Unique task identifier | Required |
| `maxDuration` | Maximum execution time (seconds) | 60 |
| `retry.maxAttempts` | Maximum retry attempts | 3 |
| `retry.factor` | Exponential backoff multiplier | 2 |
| `queue.concurrencyLimit` | Max parallel task executions | Unlimited |
| `machine` | Compute size: small, medium, large | "small" |
| `cron` | Cron expression for scheduled tasks | None |
| `timeout` | Task timeout (alternative to `maxDuration`) | 60s |

Best Practices

  1. Set appropriate maxDuration for each task type — file processing may need 300s while API calls need 30s; overly generous limits waste resources while too-short limits cause unnecessary failures
  2. Use batchTriggerAndWait for parallel processing instead of sequential loops — Trigger.dev handles parallel execution, resource management, and result aggregation more efficiently than manual Promise.all patterns
  3. Implement idempotent task handlers because retries will re-execute the entire task; use idempotency keys or check-before-write patterns to prevent duplicate side effects
  4. Use concurrency limits on tasks that call rate-limited APIs to prevent overwhelming external services during bursts of triggered jobs
  5. Monitor task execution through the Trigger.dev dashboard to identify slow tasks, high failure rates, and queue backlogs before they impact users
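Practice 3 above can be sketched as a check-before-write pattern. This is a minimal illustration, not Trigger.dev API: the `Map` stands in for a durable store (e.g. a database table with a unique constraint on the key), and `chargeCustomer` is a hypothetical side effect that must not run twice.

```typescript
// In-memory stand-in for a durable idempotency store.
const completed = new Map<string, { receiptId: string }>();

let chargeCount = 0;
// Hypothetical side effect we must not repeat on retry.
async function chargeCustomer(userId: string, cents: number) {
  chargeCount += 1;
  return { receiptId: `r-${userId}-${cents}` };
}

// Idempotent handler body: safe to re-run after a crash or retry.
async function handleCharge(payload: { orderId: string; userId: string; cents: number }) {
  const key = `charge:${payload.orderId}`; // one key per logical operation
  const existing = completed.get(key);
  if (existing) return existing; // already done: return the prior result
  const receipt = await chargeCustomer(payload.userId, payload.cents);
  completed.set(key, receipt); // record completion before returning
  return receipt;
}
```

Re-running the handler with the same `orderId` returns the stored result instead of charging again, which is exactly the behavior a retried task needs.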

Common Issues

Tasks failing with timeout errors: Long-running operations exceed the default 60-second timeout. Increase maxDuration for the specific task, or break the work into subtasks that each complete within the timeout and use batchTriggerAndWait to orchestrate them.

Retry storms overwhelming external services: A failing external API triggers retries across many tasks simultaneously, amplifying the load. Use concurrency limits on the queue, configure exponential backoff with reasonable delays, and add circuit breaker logic that stops retrying when the external service is clearly down.
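The circuit-breaker idea can be sketched independently of any SDK. The class below is illustrative, not a Trigger.dev feature; the thresholds and method names are assumptions.

```typescript
// Minimal circuit breaker: stop calling a failing dependency for a
// cool-down period instead of hammering it with retries.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(
    private maxFailures = 5,     // trips after this many consecutive failures
    private cooldownMs = 30_000, // stay open this long before probing again
    private now: () => number = Date.now
  ) {}

  isOpen(): boolean {
    if (this.failures < this.maxFailures) return false;
    if (this.now() - this.openedAt >= this.cooldownMs) {
      this.failures = 0; // half-open: allow one probe call
      return false;
    }
    return true;
  }

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.isOpen()) throw new Error('circuit open: skipping call');
    try {
      const result = await fn();
      this.failures = 0; // success resets the count
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures === this.maxFailures) this.openedAt = this.now();
      throw err;
    }
  }
}
```

Inside a task's `run`, note that throwing from an open circuit still counts as a failed attempt and schedules a retry, so pair this with generous backoff, or return a "skipped" result instead of throwing.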

Development environment not matching production: Tasks that work in the dev server may fail in production due to missing environment variables, different file system access, or network restrictions. Test tasks against production-like environments, ensure all environment variables are configured in the Trigger.dev dashboard, and avoid relying on local file system paths.
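One cheap guard against the dev-versus-production mismatch is to fail fast on missing environment variables when the task module loads. A minimal sketch; the variable names in the example comment are placeholders, not ones Trigger.dev requires.

```typescript
// Fail fast if required configuration is absent, so a misconfigured
// deployment surfaces immediately instead of deep inside a task run.
function requireEnv(
  names: string[],
  env: Record<string, string | undefined> = process.env
): Record<string, string> {
  const missing = names.filter(n => !env[n]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
  return Object.fromEntries(names.map(n => [n, env[n] as string]));
}

// Example: call at module load, before any task is defined.
// const { STORAGE_BUCKET, API_KEY } = requireEnv(['STORAGE_BUCKET', 'API_KEY']);
```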
