Smart Serverless Functions Framework
All-in-one skill for building and managing serverless API endpoints and background tasks. Built for Claude Code with best practices and real-world patterns.
Serverless Functions Framework
Comprehensive serverless function development guide covering AWS Lambda, Vercel Functions, Cloudflare Workers, and edge functions with cold start optimization, monitoring, and deployment patterns.
When to Use This Skill
Choose Serverless Functions when:
- Building API endpoints without managing servers
- Processing events from queues, databases, or file uploads
- Creating webhooks for third-party service integrations
- Running scheduled tasks (cron jobs) without infrastructure
- Building microservices with independent scaling per function
Consider alternatives when:
- Need persistent connections (WebSockets) — use Durable Objects or dedicated servers
- Need GPU compute — use cloud VM instances
- Processing takes >15 minutes — use container-based compute
- Need local development parity — use Docker containers
Quick Start
```bash
# Activate serverless framework
claude skill activate smart-serverless-functions-framework

# Create serverless API
claude "Create a serverless REST API with Lambda and API Gateway"

# Optimize cold starts
claude "Optimize cold start performance for our Lambda functions"
```
Example: Multi-Platform Serverless Function
```typescript
// Portable serverless function (works on AWS Lambda, Vercel, Cloudflare)
import type { APIGatewayProxyEvent } from 'aws-lambda';
import { db } from './db'; // shared database client

interface FunctionContext {
  method: string;
  path: string;
  headers: Record<string, string>;
  body: any;
  query: Record<string, string>;
}

interface FunctionResponse {
  statusCode: number;
  headers?: Record<string, string>;
  body: string;
}

// Core business logic (platform-agnostic)
async function handleRequest(ctx: FunctionContext): Promise<FunctionResponse> {
  if (ctx.method === 'GET' && ctx.path === '/api/users') {
    const users = await db.query('SELECT id, name, email FROM users LIMIT 50');
    return {
      statusCode: 200,
      headers: { 'Content-Type': 'application/json', 'Cache-Control': 'max-age=60' },
      body: JSON.stringify(users),
    };
  }

  if (ctx.method === 'POST' && ctx.path === '/api/users') {
    const { name, email } = ctx.body;
    const user = await db.query(
      'INSERT INTO users (name, email) VALUES ($1, $2) RETURNING *',
      [name, email]
    );
    return { statusCode: 201, body: JSON.stringify(user) };
  }

  return { statusCode: 404, body: JSON.stringify({ error: 'Not found' }) };
}

// AWS Lambda adapter
export const lambdaHandler = async (event: APIGatewayProxyEvent) => {
  const ctx: FunctionContext = {
    method: event.httpMethod,
    path: event.path,
    headers: event.headers as Record<string, string>,
    body: event.body ? JSON.parse(event.body) : null,
    query: (event.queryStringParameters as Record<string, string>) || {},
  };
  return handleRequest(ctx);
};

// Vercel adapter
export default async function vercelHandler(req: Request) {
  const url = new URL(req.url);
  const ctx: FunctionContext = {
    method: req.method,
    path: url.pathname,
    headers: Object.fromEntries(req.headers),
    body: req.method !== 'GET' ? await req.json() : null,
    query: Object.fromEntries(url.searchParams),
  };
  const res = await handleRequest(ctx);
  return new Response(res.body, { status: res.statusCode, headers: res.headers });
}
```
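The example above lists Cloudflare as a target but only shows Lambda and Vercel adapters. A sketch of the missing Cloudflare Workers adapter (module syntax) is below; Workers hand the function a standard `Request` and expect a standard `Response`, so the mapping mirrors the Vercel one. The inline `handleRequest` stub here is a stand-in for the shared business logic above, included only so the snippet runs on its own.

```typescript
// Stand-in for the shared handleRequest and types, so this snippet is self-contained.
interface FunctionContext {
  method: string;
  path: string;
  headers: Record<string, string>;
  body: any;
  query: Record<string, string>;
}
interface FunctionResponse {
  statusCode: number;
  headers?: Record<string, string>;
  body: string;
}
async function handleRequest(ctx: FunctionContext): Promise<FunctionResponse> {
  return { statusCode: 200, body: JSON.stringify({ path: ctx.path }) };
}

// Cloudflare Workers adapter (module worker): export an object with a fetch method.
const worker = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    const ctx: FunctionContext = {
      method: request.method,
      path: url.pathname,
      headers: Object.fromEntries(request.headers),
      body: request.method !== 'GET' ? await request.json() : null,
      query: Object.fromEntries(url.searchParams),
    };
    const res = await handleRequest(ctx);
    return new Response(res.body, { status: res.statusCode, headers: res.headers });
  },
};

export default worker;
```

Because the adapter only touches Web-standard APIs, the same file also runs under Node 18+ for local testing.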
Core Concepts
Platform Comparison
| Feature | AWS Lambda | Vercel Functions | Cloudflare Workers |
|---|---|---|---|
| Runtime | Node, Python, Go, Java | Node, Edge Runtime | V8 Isolates |
| Cold Start | 100-500ms (provisioned: <1ms) | 50-200ms | <1ms |
| Max Duration | 15 min | 10s (hobby), 300s (pro) | 30s CPU |
| Max Memory | 10GB | 1024MB | 128MB |
| Pricing | Per-request + duration | Included in plan | Per-request |
| Edge Support | Lambda@Edge | Edge Runtime | Native (300+ locations) |
Cold Start Optimization
| Technique | Impact | Platform |
|---|---|---|
| Provisioned Concurrency | Eliminates cold starts | AWS Lambda |
| Smaller bundle size | Faster initialization | All |
| Lazy initialization | Defer heavy imports | All |
| Connection pooling | Reuse DB connections | AWS Lambda |
| V8 snapshots | Pre-compiled code | Cloudflare Workers |
| Edge runtime | Sub-ms cold starts | Vercel Edge, Cloudflare |
```typescript
// Cold start optimization patterns
import { Pool } from 'pg';
import type { DBClient } from './db';

// Lazy initialization - only import when needed
let dbClient: DBClient | null = null;

async function getDB(): Promise<DBClient> {
  if (!dbClient) {
    const { createClient } = await import('./db');
    dbClient = createClient(process.env.DATABASE_URL!);
  }
  return dbClient;
}

// Connection reuse (Lambda)
// Declare outside handler to persist across warm invocations
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 1, // Single connection per Lambda instance
  idleTimeoutMillis: 120000,
});

export async function handler(event: any) {
  const client = await pool.connect();
  try {
    const result = await client.query('SELECT NOW()');
    return { statusCode: 200, body: JSON.stringify(result.rows) };
  } finally {
    client.release();
  }
}
```
Configuration
| Parameter | Description | Default |
|---|---|---|
| `runtime` | Runtime: `nodejs20`, `python3.12`, `edge` | `nodejs20` |
| `memory` | Memory allocation (MB) | `128` |
| `timeout` | Max execution time (seconds) | `10` |
| `regions` | Deployment regions | `["us-east-1"]` |
| `provisioned_concurrency` | Pre-warmed instances | `0` |
| `environment_variables` | Secrets and config | `{}` |
| `layers` | Shared dependency layers | `[]` |
Best Practices
- Keep functions focused and small — Each function should handle one route or one event type. Small functions have faster cold starts, easier debugging, and independent scaling. Don't build monolithic Lambda handlers with complex routing.
- Reuse connections across warm invocations — Declare database connections, HTTP clients, and SDK instances outside the handler function. Lambda reuses the execution environment for subsequent invocations, so connections initialized at module level persist.
- Use edge runtime for latency-sensitive endpoints — Edge functions run at CDN locations worldwide with sub-millisecond cold starts. Use them for auth checks, redirects, A/B testing, and API responses that don't need heavy compute.
- Set appropriate timeout and memory limits — Over-provisioning wastes money; under-provisioning causes failures. Monitor actual usage and set limits 20% above typical peak values. Lambda costs scale linearly with memory — more memory also means faster CPU.
- Implement structured logging from day one — Use JSON-structured logs with request IDs, timing, and error details. Serverless functions have limited debugging tools — comprehensive logs are your primary observability mechanism.
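The structured-logging practice above can be sketched as a small helper that emits one JSON line per event, so log aggregators (CloudWatch Logs Insights and similar) can filter on fields. The field names (`requestId`, `durationMs`) are illustrative choices, not a standard.

```typescript
// Minimal structured logger sketch: one JSON line per event.
type LogLevel = 'debug' | 'info' | 'warn' | 'error';

interface LogFields {
  requestId: string;
  [key: string]: unknown;
}

function log(level: LogLevel, message: string, fields: LogFields): string {
  const entry = JSON.stringify({
    timestamp: new Date().toISOString(),
    level,
    message,
    ...fields,
  });
  console.log(entry); // platforms split log streams on newlines
  return entry; // returned so the entry is easy to test
}

// Usage inside a handler: time the request and attach the ID to every line.
export async function handler(event: { requestId: string }) {
  const start = Date.now();
  log('info', 'request.start', { requestId: event.requestId });
  // ... business logic ...
  log('info', 'request.end', {
    requestId: event.requestId,
    durationMs: Date.now() - start,
  });
}
```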
Common Issues
Cold starts cause timeout errors on first request after idle periods. Use provisioned concurrency (AWS Lambda) for production endpoints. For other platforms, implement client-side retry with exponential backoff. Keep functions warm with scheduled pings only as a last resort — it's wasteful and unreliable.
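The client-side retry suggested above can be sketched as a small wrapper with exponential backoff and jitter. The retry count and base delay below are illustrative defaults, not platform recommendations.

```typescript
// Retry an async operation with exponential backoff plus random jitter.
interface RetryOptions {
  retries: number;     // attempts after the first try
  baseDelayMs: number; // delay before the first retry
}

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function withRetry<T>(
  fn: () => Promise<T>,
  { retries, baseDelayMs }: RetryOptions = { retries: 3, baseDelayMs: 200 }
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === retries) break;
      // Exponential backoff: base * 2^attempt, plus jitter to avoid
      // synchronized retry storms against a cold endpoint.
      const delay = baseDelayMs * 2 ** attempt + Math.random() * baseDelayMs;
      await sleep(delay);
    }
  }
  throw lastError;
}
```

A caller would wrap the first request to an idle endpoint, e.g. `withRetry(() => fetch('https://api.example.com/users'))` (hypothetical URL).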
Database connection limits exhausted under high concurrency. Each serverless instance creates its own database connection. Use connection pooling services (RDS Proxy, PgBouncer, Supabase pooler) between Lambda and the database to limit total connections regardless of concurrent function instances.
Function works locally but fails in production. Common causes: missing environment variables, different file system (read-only in Lambda), different Node.js version, and missing native dependencies. Use `sam local invoke` (AWS) or `wrangler dev` (Cloudflare) for local testing with production-like environments.
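For the missing-environment-variable case above, one mitigation is to validate configuration at module load (i.e. during the cold start) so a bad deploy fails visibly instead of returning confusing 500s at request time. A sketch, with example variable names:

```typescript
// Fail fast on missing environment variables at cold start.
function requireEnv(names: string[]): Record<string, string> {
  const missing = names.filter((n) => !process.env[n]);
  if (missing.length > 0) {
    // Throwing at init surfaces the misconfiguration in deploy logs
    // instead of deep inside a request handler.
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
  return Object.fromEntries(names.map((n) => [n, process.env[n] as string]));
}

// Module scope: runs once per cold start, before any request is served.
// const config = requireEnv(['DATABASE_URL', 'API_KEY']);
```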