GraphQL Performance Optimizer Partner
An autonomous agent that analyzes and optimizes GraphQL API performance: resolving N+1 queries, implementing caching strategies, optimizing resolver chains, and reducing payload sizes for faster client experiences.
When to Use This Agent
Choose GraphQL Performance Optimizer Partner when:
- GraphQL queries take seconds instead of milliseconds
- Database monitoring shows excessive query counts per request
- Client applications experience slow data loading or high bandwidth
- You need to implement caching at the resolver or response level
Consider alternatives when:
- Your performance issues are at the database level, not GraphQL (optimize queries directly)
- You need to design a new GraphQL schema (use a GraphQL architect agent)
- The bottleneck is network latency, not server processing
Quick Start
```yaml
# .claude/agents/graphql-performance-optimizer.yml
name: graphql-performance-optimizer-partner
description: Optimize GraphQL API performance
agent_prompt: |
  You are a GraphQL Performance Optimizer. When analyzing performance:
  1. Profile query execution to identify slow resolvers
  2. Detect N+1 query patterns and implement DataLoader
  3. Add response caching at appropriate layers
  4. Optimize payload sizes with field selection analysis
  5. Implement query complexity limits
  6. Set up performance monitoring and alerting
  Always measure before and after. Quantify every optimization.
```
Example invocation:
claude "Our GraphQL API's getProject query takes 4 seconds. Profile it and optimize."
Sample optimization report:
GraphQL Performance Optimization: getProject
────────────────────────────────────────────
Before: 4,200ms | 847 database queries | 2.8MB response
Analysis:
1. N+1 on task.assignee: 200 tasks × 1 user query = 200 queries
2. N+1 on task.comments: 200 tasks × 1 comment query = 200 queries
3. Full user objects returned (avatar, bio, settings) when only name needed
4. No caching on project.stats (recalculated every request)
Optimizations Applied:
1. DataLoader for users: 200 queries → 1 batched query (-199)
2. DataLoader for comments: 200 queries → 1 batched query (-199)
3. Field-level resolvers: only fetch requested fields
4. Redis cache on stats: 5-minute TTL
After: 340ms | 12 database queries | 420KB response
Speed: 12.4x faster
Queries: 98.6% reduction
Payload: 85% smaller
Core Concepts
Performance Optimization Layers
| Layer | Technique | Impact |
|---|---|---|
| Query | Complexity limits, depth limiting | Prevent abuse |
| Resolver | DataLoader, batching | Eliminate N+1 |
| Data | Field selection, projections | Reduce DB load |
| Cache | Response cache, resolver cache | Eliminate redundant work |
| Network | Persisted queries, compression | Reduce payload size |
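The "Query" layer above can be illustrated with a minimal depth calculation. Production servers use validation rules (for example, the graphql-depth-limit package) that walk the real GraphQL AST; this sketch substitutes a toy nested-object shape for that AST to show the core recursion:

```typescript
// Toy selection-set shape standing in for the GraphQL AST
// (assumption: a real validation rule walks DocumentNode instead).
type SelectionSet = { [field: string]: SelectionSet | null };

// Depth of the deepest field chain; a depth-limit rule rejects the
// query when this exceeds a configured maximum.
function queryDepth(selections: SelectionSet): number {
  let max = 0;
  for (const child of Object.values(selections)) {
    const depth = child === null ? 1 : 1 + queryDepth(child);
    if (depth > max) max = depth;
  }
  return max;
}

// project { tasks { assignee { name } } } nests four levels deep
const depth = queryDepth({ project: { tasks: { assignee: { name: null } } } });
console.log(depth); // 4
```

Depth limiting is the cheapest defense in the table because it runs during validation, before any resolver executes.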
Resolver Performance Profiling
```typescript
// Apollo Server plugin for resolver timing
const performancePlugin = {
  requestDidStart() {
    const resolverTimings: Map<string, number[]> = new Map();
    return {
      executionDidStart() {
        return {
          willResolveField({ info }) {
            const path = `${info.parentType.name}.${info.fieldName}`;
            const start = process.hrtime.bigint();
            // End hook: record how long this field took to resolve
            return () => {
              const duration = Number(process.hrtime.bigint() - start) / 1e6;
              if (!resolverTimings.has(path)) resolverTimings.set(path, []);
              resolverTimings.get(path)!.push(duration);
            };
          },
        };
      },
      willSendResponse() {
        // Log slow resolvers
        for (const [path, timings] of resolverTimings) {
          const total = timings.reduce((a, b) => a + b, 0);
          const count = timings.length;
          if (total > 100) {
            console.warn(`Slow resolver: ${path} - ${count} calls, ${total.toFixed(0)}ms total`);
          }
        }
      },
    };
  },
};
```
Caching Strategies
```typescript
// Multi-layer caching for GraphQL
import DataLoader from 'dataloader';
import { KeyValueCache } from '@apollo/utils.keyvaluecache';

// Layer 1: DataLoader (per-request, automatic)
const userLoader = new DataLoader(batchUsers, {
  cache: true, // Deduplicates within a single request
});

// Layer 2: Redis cache (cross-request, TTL-based)
const resolvers = {
  Project: {
    stats: async (project, _args, { cache }) => {
      const cacheKey = `project:${project.id}:stats`;
      const cached = await cache.get(cacheKey);
      if (cached) return JSON.parse(cached);
      const stats = await computeProjectStats(project.id);
      await cache.set(cacheKey, JSON.stringify(stats), { ttl: 300 });
      return stats;
    },
  },
};

// Layer 3: CDN cache (for public queries)
// Set cache-control headers on responses
const cachePlugin = {
  requestDidStart() {
    return {
      willSendResponse({ response }) {
        if (isPublicQuery(response)) {
          response.http.headers.set('Cache-Control', 'public, max-age=60');
        }
      },
    };
  },
};
```
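The field-selection technique from the layers table can be sketched as a pure mapping from requested GraphQL fields to database columns. The `userColumns` map and column names below are illustrative, not from any real schema; in practice the requested field names would come from the resolver's `info` argument:

```typescript
// Illustrative GraphQL-field → database-column map (assumption: your
// schema defines its own mapping, often derived from `info`).
const userColumns: Record<string, string> = {
  name: "users.name",
  email: "users.email",
  avatar: "users.avatar_url",
};

// Keep only the columns the client actually asked for, so the SELECT
// never fetches avatar, bio, or settings when just name is needed.
function projectColumns(requested: string[]): string[] {
  return requested
    .map((field) => userColumns[field])
    .filter((col): col is string => col !== undefined);
}

console.log(projectColumns(["name", "email"])); // ["users.name", "users.email"]
```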
Configuration
| Option | Type | Default | Description |
|---|---|---|---|
| enableProfiling | boolean | true | Profile resolver execution times |
| slowResolverThreshold | number | 100 | Alert threshold in ms |
| cacheStrategy | string | "redis" | Cache backend: in-memory, redis, none |
| cacheTTL | number | 300 | Default cache TTL in seconds |
| enablePersistedQueries | boolean | true | Use APQ in production |
| maxQueryComplexity | number | 1000 | Maximum allowed query cost |
Best Practices
- Profile before optimizing: Install resolver-level tracing to identify which resolvers are slow and how many times they execute per request. Optimizing a resolver that runs once and takes 5ms, while ignoring one that runs 200 times at 2ms each, misses the real bottleneck.
- Batch everything with DataLoader, cache selectively: DataLoader should wrap every database call in every resolver, no exceptions. Caching, by contrast, should be selective: cache stable data (user profiles, product catalogs) aggressively, but never cache data that must be real-time (notifications, balances).
- Use persisted queries in production: Instead of sending the full query string on every request, clients send a hash of the query. This shrinks request payloads, enables CDN caching, and prevents arbitrary query injection. Apollo and Relay both support automatic persisted queries.
- Implement field-level projections: If a client only requests `user { name, email }`, your resolver should not load the entire user object with avatar, bio, and settings. Use the `info` parameter to determine which fields were requested and pass that selection to your database query.
- Set complexity limits based on real query patterns: Analyze your production query logs to find the most expensive legitimate query, then set the complexity limit at 1.5x that cost. This blocks abusive queries while allowing all real client queries to succeed.
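The complexity-limit practice above can be sketched with a pure cost function, assuming a toy field shape in which `first` arguments multiply the cost of everything nested below them (real implementations walk the query AST):

```typescript
// Toy field node (assumption: real limiters walk the GraphQL AST).
type FieldNode = { name: string; first?: number; children?: FieldNode[] };

// Cost of a field = the maximum number of objects it can return,
// i.e. the page sizes of all enclosing lists multiplied together.
function estimateCost(node: FieldNode, multiplier = 1): number {
  const own = multiplier * (node.first ?? 1);
  const children = (node.children ?? []).reduce(
    (sum, child) => sum + estimateCost(child, own),
    0,
  );
  return own + children;
}

// projects(first: 10) { tasks(first: 100) } → 10 + 10 × 100 = 1010
const cost = estimateCost({
  name: "projects",
  first: 10,
  children: [{ name: "tasks", first: 100 }],
});
console.log(cost); // 1010
```

Running this estimator over logged production queries is one way to find the "most expensive legitimate query" the bullet refers to.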
Common Issues
DataLoader returns wrong data or null for some items: The batch function must return results in exactly the same order as the input keys. If your database query returns results in a different order (most do), you need to reorder: `keys.map(key => results.find(r => r.id === key))`. Missing items should map to null, not be omitted.
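The reordering step above can be written as a standalone helper. This is a sketch, not part of the DataLoader API; it also builds a `Map` first, which avoids the O(n²) cost of calling `results.find` once per key:

```typescript
// Re-align batch query results to the order of the input keys,
// returning null for ids the database did not find.
function alignToKeys<T extends { id: string }>(
  keys: readonly string[],
  rows: T[],
): (T | null)[] {
  const byId = new Map(rows.map((row) => [row.id, row] as const));
  return keys.map((key) => byId.get(key) ?? null);
}

// Rows came back out of order and "b" is missing from the database:
console.log(alignToKeys(["a", "b", "c"], [{ id: "c" }, { id: "a" }]));
// → [{ id: "a" }, null, { id: "c" }]
```

A batch function would end with `return alignToKeys(keys, rows);` so every loaded key gets either its row or an explicit null.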
Cache invalidation delays cause stale data: After a mutation, the cached data in Redis still shows the old value until the TTL expires. Implement mutation-aware cache invalidation: after an `updateProject` mutation, delete the cache key for that project's stats. Use cache tags to invalidate groups of related keys at once.
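A minimal in-memory sketch of the tag idea follows; the class and method names are illustrative, and a real deployment would issue the same deletes against Redis rather than a local `Map`:

```typescript
// Cache where each key can carry tags; invalidating a tag deletes
// every key that was stored under it.
class TaggedCache {
  private store = new Map<string, string>();
  private keysByTag = new Map<string, Set<string>>();

  set(key: string, value: string, tags: string[] = []): void {
    this.store.set(key, value);
    for (const tag of tags) {
      if (!this.keysByTag.has(tag)) this.keysByTag.set(tag, new Set());
      this.keysByTag.get(tag)!.add(key);
    }
  }

  get(key: string): string | undefined {
    return this.store.get(key);
  }

  invalidateTag(tag: string): void {
    for (const key of this.keysByTag.get(tag) ?? []) this.store.delete(key);
    this.keysByTag.delete(tag);
  }
}

// After an updateProject mutation, drop every cached view of project 42:
const cache = new TaggedCache();
cache.set("project:42:stats", "{}", ["project:42"]);
cache.set("project:42:members", "[]", ["project:42"]);
cache.invalidateTag("project:42");
console.log(cache.get("project:42:stats")); // undefined
```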
Query complexity calculation does not account for resolved data: A query for `projects(first: 10) { tasks(first: 100) }` has a theoretical cost of 10 × 100 = 1000, but if most projects only have 5 tasks, the actual cost is much lower. Use a hybrid approach: estimate cost before execution (reject obviously abusive queries) and measure actual cost after execution (log for tuning).
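The hybrid approach can be sketched as a small gate-plus-log helper; the threshold, method names, and log shape are all illustrative rather than from any real library:

```typescript
// Pre-execution gate plus post-execution log for tuning the limit.
function makeHybridLimiter(maxEstimated: number) {
  const log: { estimated: number; actual: number }[] = [];
  return {
    // Reject obviously abusive queries before running any resolver
    admit(estimated: number): boolean {
      return estimated <= maxEstimated;
    },
    // Record what the query actually cost once it has run
    record(estimated: number, actual: number): void {
      log.push({ estimated, actual });
    },
    // Largest real cost seen so far: a basis for the next limit
    maxActual(): number {
      return log.reduce((max, entry) => Math.max(max, entry.actual), 0);
    },
  };
}

const limiter = makeHybridLimiter(1000);
console.log(limiter.admit(1010)); // false: over the estimated limit
limiter.record(800, 120); // estimated 800, actually resolved 120 objects
console.log(limiter.maxActual()); // 120
```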