Expert Cloudflare Workers Workshop
A comprehensive skill for deploying and managing edge computing workers. Built for Claude Code with best practices and real-world patterns.
Cloudflare Workers Workshop
Complete Cloudflare Workers development guide covering edge computing, KV storage, Durable Objects, D1 database, R2 storage, and deployment patterns for globally distributed applications.
When to Use This Skill
Choose Cloudflare Workers when:
- Building globally distributed API endpoints with sub-millisecond cold starts
- Implementing edge-side logic (auth, redirects, A/B testing, geolocation)
- Creating full-stack applications with D1, KV, and R2
- Need serverless compute at Cloudflare's 300+ edge locations
- Building middleware for request/response transformation
Consider alternatives when:
- Need long-running processes (>30s CPU time) — use traditional servers
- Need WebSocket connections with state — use Durable Objects specifically
- Need GPU compute — use cloud VM instances
Quick Start
```bash
# Install Wrangler CLI
npm install -g wrangler

# Create new project
wrangler init my-worker
cd my-worker

# Activate workshop
claude skill activate expert-cloudflare-workers-workshop

# Develop locally
wrangler dev
```
Example: API Worker with D1 Database
```typescript
// src/index.ts
export interface Env {
  DB: D1Database;
  CACHE: KVNamespace;
  BUCKET: R2Bucket;
  API_KEY: string;
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const url = new URL(request.url);

    // CORS headers
    const corsHeaders = {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
      'Content-Type': 'application/json',
    };

    if (request.method === 'OPTIONS') {
      return new Response(null, { headers: corsHeaders });
    }

    try {
      // Route handling
      if (url.pathname === '/api/posts' && request.method === 'GET') {
        // Check KV cache first
        const cached = await env.CACHE.get('posts:all', 'json');
        if (cached) return Response.json(cached, { headers: corsHeaders });

        // Query D1 database
        const { results } = await env.DB.prepare(
          'SELECT * FROM posts ORDER BY created_at DESC LIMIT 50'
        ).all();

        // Cache for 5 minutes
        ctx.waitUntil(
          env.CACHE.put('posts:all', JSON.stringify(results), { expirationTtl: 300 })
        );

        return Response.json(results, { headers: corsHeaders });
      }

      if (url.pathname === '/api/posts' && request.method === 'POST') {
        const body = await request.json() as { title: string; content: string };

        const result = await env.DB.prepare(
          'INSERT INTO posts (title, content) VALUES (?, ?) RETURNING *'
        ).bind(body.title, body.content).first();

        // Invalidate cache
        ctx.waitUntil(env.CACHE.delete('posts:all'));

        return Response.json(result, { status: 201, headers: corsHeaders });
      }

      return Response.json({ error: 'Not found' }, { status: 404, headers: corsHeaders });
    } catch (err) {
      return Response.json({ error: 'Internal error' }, { status: 500, headers: corsHeaders });
    }
  },
} satisfies ExportedHandler<Env>;
```
Core Concepts
Cloudflare Services
| Service | Purpose | Use Case |
|---|---|---|
| Workers | Edge compute (V8 isolates) | API endpoints, middleware |
| KV | Global key-value store | Caching, config, session data |
| D1 | SQLite database at the edge | Application data storage |
| R2 | S3-compatible object storage | Files, images, backups |
| Durable Objects | Stateful edge compute | WebSockets, coordination |
| Queues | Message queues | Async processing, batching |
| AI | ML model inference at edge | Text generation, embeddings |
Worker Limits
| Limit | Free Plan | Paid Plan |
|---|---|---|
| CPU Time | 10ms/request | 30s/request |
| Memory | 128MB | 128MB |
| Request Size | 100MB | 100MB |
| KV Reads | 100K/day | Unlimited |
| D1 Rows Read | 5M/day | 25B/month |
| R2 Storage | 10GB | Unlimited (metered) |
```toml
# wrangler.toml configuration
name = "my-api"
main = "src/index.ts"
compatibility_date = "2024-03-01"

[vars]
ENVIRONMENT = "production"

[[kv_namespaces]]
binding = "CACHE"
id = "abc123"

[[d1_databases]]
binding = "DB"
database_id = "def456"

[[r2_buckets]]
binding = "BUCKET"
bucket_name = "my-files"

[triggers]
crons = ["*/5 * * * *"]  # Every 5 minutes
```
Configuration
| Parameter | Description | Default |
|---|---|---|
| `compatibility_date` | Workers runtime version | None (required) |
| `routes` | URL patterns to intercept, e.g. `["api.example.com/*"]` | None (uses workers.dev) |
| `cron_triggers` | Scheduled execution patterns | `[]` |
| `usage_model` | Billing: bundled or unbound | `bundled` |
| `logpush` | Enable log shipping | `false` |
| `node_compat` | Node.js API compatibility | `false` |
Best Practices
- **Use `ctx.waitUntil()` for non-blocking background work** — Cache writes, analytics logging, and webhook notifications don't need to block the response. Wrap them in `ctx.waitUntil()` to execute after the response is sent while keeping the worker alive.
- **Cache aggressively with KV for read-heavy workloads** — KV is globally replicated and optimized for reads. Cache API responses, computed results, and frequently accessed database queries. Use short TTLs (60-300s) for freshness with automatic expiration.
- **Use D1 for relational data, KV for simple lookups** — D1 provides SQL queries, joins, and transactions. KV is faster for simple key-value lookups but doesn't support queries. Choose based on data access patterns, not just data volume.
- **Handle errors gracefully with structured responses** — Workers that throw uncaught errors return Cloudflare's generic error page. Always wrap handler logic in try/catch and return structured JSON errors with appropriate status codes.
- **Use Wrangler's local development mode for fast iteration** — `wrangler dev` provides local D1, KV, and R2 emulation with hot reloading. Test locally before deploying to avoid consuming production quotas during development.
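The `ctx.waitUntil()` pattern can be sketched outside the Workers runtime with a mocked execution context. The `ExecutionContextLike` interface and `handle` function below are illustrative stand-ins, not real Workers types; the point is the ordering: the response returns immediately while the background promise settles later.

```typescript
// Mock of the small slice of ExecutionContext this sketch needs.
interface ExecutionContextLike {
  waitUntil(promise: Promise<unknown>): void;
}

const pending: Promise<unknown>[] = [];
const ctx: ExecutionContextLike = {
  // The real runtime keeps the worker alive until these promises settle.
  waitUntil: (p) => { pending.push(p); },
};

const events: string[] = [];

async function handle(ctx: ExecutionContextLike): Promise<string> {
  // Schedule a slow cache write WITHOUT awaiting it.
  ctx.waitUntil(
    new Promise<void>((resolve) =>
      setTimeout(() => { events.push('cache written'); resolve(); }, 10)
    )
  );
  events.push('response sent');
  return 'ok'; // the response is returned immediately
}

const done = handle(ctx)
  .then((body) => Promise.all(pending).then(() => body))
  .then((body) => {
    console.log(body, events.join(' -> ')); // ok response sent -> cache written
    return events;
  });
```

The same ordering holds in production: clients never wait on the cache write, but the write still completes before the worker is torn down.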
Common Issues
Worker exceeds CPU time limit on complex operations. Offload heavy computation to Cloudflare Queues for async processing, or split work across multiple worker invocations. For database-heavy operations, optimize queries with indexes and limit result sets.
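The producer side of that offload can be sketched as follows. The `QueueLike` mock and the chunking helper are illustrative; a real worker would call `send()` on a Queues binding declared in `wrangler.toml`.

```typescript
// Mock of the producer side of a Queues binding.
interface QueueLike<T> {
  send(message: T): Promise<void>;
}

const delivered: unknown[] = [];
const queue: QueueLike<{ ids: number[] }> = {
  send: async (msg) => { delivered.push(msg); },
};

// Split a large job into small messages so each consumer
// invocation stays well under the CPU time limit.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}

async function enqueueHeavyJob(queue: QueueLike<{ ids: number[] }>, ids: number[]) {
  for (const batch of chunk(ids, 100)) {
    await queue.send({ ids: batch });
  }
}

// 250 ids in batches of 100 -> 3 messages.
const jobDone = enqueueHeavyJob(queue, Array.from({ length: 250 }, (_, i) => i))
  .then(() => delivered.length);
```

Each message is then processed by a separate consumer invocation, so no single invocation carries the whole job's CPU cost.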
KV reads return stale data after writes. KV is eventually consistent — writes may take up to 60 seconds to propagate globally. For consistency-critical reads, use D1 or Durable Objects. For caching, design your application to tolerate brief staleness.
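A staleness-tolerant design usually means a read-through helper: serve the cached copy when present (accepting that it may be up to a minute behind), and only fall back to the source of truth on a miss. The `KVLike` mock and `loadFromD1` stand-in below are illustrative, not real bindings.

```typescript
// Mocks standing in for a KV binding and a D1 query.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl: number }): Promise<void>;
}

const store = new Map<string, string>();
const kv: KVLike = {
  get: async (k) => store.get(k) ?? null,
  put: async (k, v) => { store.set(k, v); },
};

let dbReads = 0;
async function loadFromD1(): Promise<string[]> {
  dbReads++;
  return ['post-1', 'post-2']; // pretend D1 query result
}

// Read-through: serve the (possibly ~60s stale) cached copy when present,
// otherwise hit the source of truth and repopulate the cache.
async function getPosts(kv: KVLike): Promise<string[]> {
  const cached = await kv.get('posts:all');
  if (cached) return JSON.parse(cached);
  const fresh = await loadFromD1();
  await kv.put('posts:all', JSON.stringify(fresh), { expirationTtl: 300 });
  return fresh;
}

const reads = (async () => {
  await getPosts(kv); // miss: reads D1
  await getPosts(kv); // hit: served from cache, possibly stale
  return dbReads;
})();
```

When a read truly cannot be stale, skip the cache entirely and query D1 (or a Durable Object) directly for that path.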
D1 migrations fail or cause downtime. Use Wrangler's migration system (wrangler d1 migrations) for schema changes. Test migrations against a local D1 database first. For zero-downtime migrations, use additive changes (add columns) rather than destructive ones (drop/rename).
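A typical migration flow with Wrangler looks like the following; the database name `my-db` and the migration message are placeholders for your own values.

```bash
# Create a new numbered migration file under migrations/
wrangler d1 migrations create my-db add_posts_table

# Apply pending migrations to the local database used by `wrangler dev`
wrangler d1 migrations apply my-db --local

# Once verified locally, apply to the remote (production) database
wrangler d1 migrations apply my-db --remote
```

Keeping each migration additive (new tables, new nullable columns) lets old and new worker versions run against the same schema during a deploy.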