

GraphQL Performance Optimizer Partner

An autonomous agent that analyzes and optimizes GraphQL API performance — resolving N+1 queries, implementing caching strategies, optimizing resolver chains, and reducing payload sizes for faster client experiences.

When to Use This Agent

Choose GraphQL Performance Optimizer Partner when:

  • GraphQL queries take seconds instead of milliseconds
  • Database monitoring shows excessive query counts per request
  • Client applications experience slow data loading or high bandwidth
  • You need to implement caching at the resolver or response level

Consider alternatives when:

  • Your performance issues are at the database level, not GraphQL (optimize queries directly)
  • You need to design a new GraphQL schema (use a GraphQL architect agent)
  • The bottleneck is network latency, not server processing

Quick Start

```yaml
# .claude/agents/graphql-performance-optimizer.yml
name: graphql-performance-optimizer-partner
description: Optimize GraphQL API performance
agent_prompt: |
  You are a GraphQL Performance Optimizer. When analyzing performance:
  1. Profile query execution to identify slow resolvers
  2. Detect N+1 query patterns and implement DataLoader
  3. Add response caching at appropriate layers
  4. Optimize payload sizes with field selection analysis
  5. Implement query complexity limits
  6. Set up performance monitoring and alerting
  Always measure before and after. Quantify every optimization.
```

Example invocation:

claude "Our GraphQL API's getProject query takes 4 seconds. Profile it and optimize."

Sample optimization report:

GraphQL Performance Optimization — getProject
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Before: 4,200ms | 847 database queries | 2.8MB response

Analysis:
  1. N+1 on task.assignee: 200 tasks × 1 user query = 200 queries
  2. N+1 on task.comments: 200 tasks × 1 comment query = 200 queries
  3. Full user objects returned (avatar, bio, settings) when only name needed
  4. No caching on project.stats (recalculated every request)

Optimizations Applied:
  1. DataLoader for users: 200 queries → 1 batched query (-198)
  2. DataLoader for comments: 200 queries → 1 batched query (-198)
  3. Field-level resolvers: only fetch requested fields
  4. Redis cache on stats: 5-minute TTL

After: 340ms | 12 database queries | 420KB response
  Speed: 12.4x faster
  Queries: 98.6% reduction
  Payload: 85% smaller

Core Concepts

Performance Optimization Layers

| Layer | Technique | Impact |
|---|---|---|
| Query | Complexity limits, depth limiting | Prevent abuse |
| Resolver | DataLoader, batching | Eliminate N+1 |
| Data | Field selection, projections | Reduce DB load |
| Cache | Response cache, resolver cache | Eliminate redundant work |
| Network | Persisted queries, compression | Reduce payload size |

Resolver Performance Profiling

```typescript
// Apollo Server plugin for resolver timing
const performancePlugin = {
  requestDidStart() {
    const resolverTimings: Map<string, number[]> = new Map();
    return {
      executionDidStart() {
        return {
          willResolveField({ info }) {
            const path = `${info.parentType.name}.${info.fieldName}`;
            const start = process.hrtime.bigint();
            return () => {
              const duration = Number(process.hrtime.bigint() - start) / 1e6;
              if (!resolverTimings.has(path)) resolverTimings.set(path, []);
              resolverTimings.get(path)!.push(duration);
            };
          }
        };
      },
      willSendResponse() {
        // Log slow resolvers
        for (const [path, timings] of resolverTimings) {
          const total = timings.reduce((a, b) => a + b, 0);
          const count = timings.length;
          if (total > 100) {
            console.warn(`Slow resolver: ${path} — ${count} calls, ${total.toFixed(0)}ms total`);
          }
        }
      }
    };
  }
};
```

Caching Strategies

```typescript
// Multi-layer caching for GraphQL
import DataLoader from 'dataloader';
import { KeyValueCache } from '@apollo/utils.keyvaluecache';

// Layer 1: DataLoader (per-request, automatic)
const userLoader = new DataLoader(batchUsers, {
  cache: true // Deduplicates within a single request
});

// Layer 2: Redis cache (cross-request, TTL-based)
const resolvers = {
  Project: {
    stats: async (project, _args, { cache }) => {
      const cacheKey = `project:${project.id}:stats`;
      const cached = await cache.get(cacheKey);
      if (cached) return JSON.parse(cached);
      const stats = await computeProjectStats(project.id);
      await cache.set(cacheKey, JSON.stringify(stats), { ttl: 300 });
      return stats;
    }
  }
};

// Layer 3: CDN cache (for public queries)
// Set cache-control headers on responses
const cachePlugin = {
  requestDidStart() {
    return {
      willSendResponse({ response }) {
        if (isPublicQuery(response)) {
          response.http.headers.set('Cache-Control', 'public, max-age=60');
        }
      }
    };
  }
};
```

Configuration

| Option | Type | Default | Description |
|---|---|---|---|
| enableProfiling | boolean | true | Profile resolver execution times |
| slowResolverThreshold | number | 100 | Alert threshold in ms |
| cacheStrategy | string | "redis" | Cache backend: in-memory, redis, none |
| cacheTTL | number | 300 | Default cache TTL in seconds |
| enablePersistedQueries | boolean | true | Use APQ in production |
| maxQueryComplexity | number | 1000 | Maximum allowed query cost |

Best Practices

  1. Profile before optimizing — Install resolver-level tracing to identify which resolvers are slow and how many times they execute per request. Optimizing a resolver that runs once and takes 5ms while ignoring one that runs 200 times and takes 2ms each misses the real bottleneck.

  2. Batch everything with DataLoader, cache selectively — DataLoader should wrap every database call in every resolver, no exceptions. But caching should be selective: cache stable data (user profiles, product catalogs) aggressively, but never cache data that must be real-time (notifications, balances).

  3. Use persisted queries in production — Instead of sending the full query string on every request, clients send a hash of the query. This reduces payload size by 90%+, enables CDN caching, and prevents arbitrary query injection. Apollo and Relay both support automatic persisted queries.

  4. Implement field-level projections — If a client only requests user { name, email }, your resolver should not load the entire user object with avatar, bio, and settings. Use the info parameter to determine which fields are requested and pass that to your database query as a field selection.

  5. Set complexity limits based on real query patterns — Analyze your production query logs to find the most expensive legitimate query. Set the complexity limit at 1.5x that cost. This blocks abusive queries while allowing all real client queries to succeed.
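The per-request batching from practice 2 can be sketched without external dependencies. `TinyLoader` below is a simplified, illustrative stand-in for the real `dataloader` package: all keys requested during one tick are collected and resolved with a single batched call. The user-fetching batch function is a hypothetical in-memory lookup.

```typescript
// Simplified DataLoader-style batcher (illustrative stand-in for the
// `dataloader` package): keys loaded in the same tick are queued and
// resolved together by one call to the batch function.
type BatchFn<K, V> = (keys: K[]) => Promise<V[]>;

class TinyLoader<K, V> {
  private queue: { key: K; resolve: (v: V) => void }[] = [];
  private scheduled = false;

  constructor(private batchFn: BatchFn<K, V>) {}

  load(key: K): Promise<V> {
    return new Promise((resolve) => {
      this.queue.push({ key, resolve });
      if (!this.scheduled) {
        this.scheduled = true;
        // Flush once all of the current tick's loads have been enqueued
        queueMicrotask(() => this.flush());
      }
    });
  }

  private async flush() {
    const batch = this.queue;
    this.queue = [];
    this.scheduled = false;
    const values = await this.batchFn(batch.map((item) => item.key));
    batch.forEach((item, i) => item.resolve(values[i]));
  }
}

// Hypothetical user fetch: one "database query" no matter how many
// resolvers ask for a user during this request
let dbCalls = 0;
const userLoader = new TinyLoader<number, { id: number; name: string }>(
  async (ids) => {
    dbCalls++; // real code: SELECT * FROM users WHERE id IN (...)
    return ids.map((id) => ({ id, name: `user-${id}` }));
  }
);
```

With this in place, 200 `userLoader.load(id)` calls made by 200 task resolvers in one request collapse into a single batched fetch.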
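The persisted-queries flow from practice 3 can be sketched from the client side. The `extensions.persistedQuery` shape below follows Apollo's automatic persisted queries protocol; the query itself is a placeholder.

```typescript
// Sketch of the APQ handshake: the client sends a SHA-256 hash of the
// query instead of the query text. Only if the server has never seen
// the hash does the client retry with the full string.
import { createHash } from "node:crypto";

const query = `query GetProject($id: ID!) { project(id: $id) { name } }`;

const sha256Hash = createHash("sha256").update(query).digest("hex");

// First request: hash only (tiny payload, CDN-cacheable)
const optimisticRequest = {
  variables: { id: "p1" },
  extensions: { persistedQuery: { version: 1, sha256Hash } },
};

// If the server responds with PersistedQueryNotFound, retry once with
// the full query string so the server can register the hash
const registrationRequest = { ...optimisticRequest, query };
```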
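Practice 4's field-level projection can be sketched as a helper that reads the requested leaf fields from the resolver's `info` argument and turns them into a column list. The AST shape below mirrors the `fieldNodes` structure of `GraphQLResolveInfo`, but fragments and aliases are omitted for brevity, so treat this as a simplified model rather than production parsing code.

```typescript
// Extract the leaf fields a client actually requested so the database
// query can select only those columns (simplified GraphQL AST shape).
interface SelectionNode {
  name: { value: string };
  selectionSet?: { selections: SelectionNode[] };
}

function requestedColumns(fieldNode: SelectionNode): string[] {
  const selections = fieldNode.selectionSet?.selections ?? [];
  // Keep only leaves; nested objects are handled by their own resolvers
  return selections
    .filter((s) => !s.selectionSet)
    .map((s) => s.name.value);
}

// Example: the client asked for user { name email tasks { id } }
const userFieldNode: SelectionNode = {
  name: { value: "user" },
  selectionSet: {
    selections: [
      { name: { value: "name" } },
      { name: { value: "email" } },
      {
        name: { value: "tasks" },
        selectionSet: { selections: [{ name: { value: "id" } }] },
      },
    ],
  },
};

const columns = requestedColumns(userFieldNode);
// Hypothetical query-builder call: SELECT name, email FROM users ...
```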
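The complexity limit from practice 5 can be sketched with a hand-rolled cost estimator. Libraries such as graphql-query-complexity operate on the real parsed AST; this sketch walks a simplified field tree where page-size arguments multiply the cost of sub-selections.

```typescript
// Hand-rolled query cost estimator: each list field multiplies the cost
// of its children by its page-size argument (simplified query model).
interface FieldCost {
  name: string;
  first?: number; // page-size argument, if any
  children?: FieldCost[];
}

function estimateCost(field: FieldCost): number {
  const multiplier = field.first ?? 1;
  const childCost = (field.children ?? []).reduce(
    (sum, c) => sum + estimateCost(c),
    0
  );
  // Each item costs 1 plus the cost of its own sub-selections
  return multiplier * (1 + childCost);
}

// projects(first: 10) { tasks(first: 100) }
const query: FieldCost = {
  name: "projects",
  first: 10,
  children: [{ name: "tasks", first: 100 }],
};

const cost = estimateCost(query); // 10 * (1 + 100) = 1010
const MAX_COMPLEXITY = 1000;
const rejected = cost > MAX_COMPLEXITY;
```

Setting `MAX_COMPLEXITY` at roughly 1.5× the most expensive legitimate query observed in production logs keeps this check permissive for real clients.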

Common Issues

DataLoader returns wrong data or null for some items — The batch function must return results in exactly the same order as the input keys. If your database query returns results in a different order (which most do), you need to reorder: keys.map(key => results.find(r => r.id === key)). Missing items should return null, not be omitted.
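The reordering described above can be sketched as a small helper used inside the batch function. The `User` shape and sample rows are placeholders; a `Map` lookup replaces the quadratic `keys.map` + `results.find` pattern.

```typescript
// Align database rows to DataLoader's input key order; missing keys map
// to null (they must not be omitted, or every later value shifts).
interface User {
  id: number;
  name: string;
}

function alignToKeys(keys: readonly number[], rows: User[]): (User | null)[] {
  const byId = new Map(rows.map((r) => [r.id, r]));
  // One Map lookup per key instead of O(n^2) scanning
  return keys.map((key) => byId.get(key) ?? null);
}

// The database returned rows out of order and has no row for id 7
const keys = [3, 7, 1];
const rows: User[] = [
  { id: 1, name: "Ana" },
  { id: 3, name: "Ben" },
];

const aligned = alignToKeys(keys, rows);
// aligned[0] is Ben (id 3), aligned[1] is null, aligned[2] is Ana
```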

Cache invalidation delays cause stale data — After a mutation, the cached data in Redis still shows the old value until TTL expires. Implement mutation-aware cache invalidation: after updateProject mutation, delete the cache key for that project's stats. Use cache tags to invalidate groups of related keys at once.
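The mutation-aware invalidation described above can be sketched with an in-memory `Map` standing in for Redis. `updateProject` and the key scheme are assumptions that mirror the stats resolver shown earlier; real code would also await the database write.

```typescript
// Delete derived cache keys inside the mutation path instead of waiting
// out the TTL (in-memory stand-in for a Redis client).
const statsCache = new Map<string, string>();

function invalidateProjectStats(projectId: string) {
  statsCache.delete(`project:${projectId}:stats`);
}

function updateProject(projectId: string, patch: { name?: string }) {
  // ... persist the patch to the database ...
  invalidateProjectStats(projectId); // next stats read recomputes
  return { id: projectId, ...patch };
}

statsCache.set("project:p1:stats", JSON.stringify({ taskCount: 200 }));
updateProject("p1", { name: "Renamed" });
const stale = statsCache.has("project:p1:stats"); // false after the mutation
```

With cache tags, `invalidateProjectStats` would instead delete every key registered under a `project:p1` tag, covering related derived data in one call.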

Query complexity calculation does not account for resolved data — A query for projects(first: 10) { tasks(first: 100) } has a theoretical cost of 10 × 100 = 1000, but if most projects only have 5 tasks, the actual cost is much lower. Use a hybrid approach: estimate cost before execution (reject obviously abusive queries) and measure actual cost after execution (log for tuning).
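The "measure after execution" half of that hybrid can be sketched as a walk over the response that counts how many objects were actually resolved. The counting model (one point per object, scalars free) and the sample response are assumptions for illustration.

```typescript
// Count resolved objects in a GraphQL response so the static complexity
// limit can be tuned against real traffic (scalars cost 0, objects 1).
function actualCost(value: unknown): number {
  if (Array.isArray(value)) {
    return value.reduce((sum: number, item) => sum + actualCost(item), 0);
  }
  if (value !== null && typeof value === "object") {
    return (
      1 +
      Object.values(value).reduce(
        (sum: number, v) => sum + actualCost(v),
        0
      )
    );
  }
  return 0;
}

// Estimated cost for projects(first: 10) { tasks(first: 100) } was 1000,
// but these projects only resolved a handful of tasks each:
const response = {
  projects: [
    { id: "p1", tasks: [{ id: "t1" }, { id: "t2" }] },
    { id: "p2", tasks: [{ id: "t3" }] },
  ],
};

const measured = actualCost(response.projects); // 5 objects resolved
```

Logging `measured` alongside the pre-execution estimate shows how loose the static limit is for each query shape.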
