
Efficient Implement Caching Strategy

Comprehensive command for designing and implementing multi-layer caching. Includes structured workflows, validation checks, and reusable patterns for performance.


Design and implement multi-layer caching solutions with browser, CDN, application, and database caching configured for your specific architecture and access patterns.

When to Use This Command

Run this command when...

  • Your application has performance bottlenecks caused by repeated data fetching or expensive computations
  • You need to implement caching at multiple layers: browser, CDN, application memory, and database query results
  • An API endpoint is receiving high traffic and response times need to be reduced through intelligent caching
  • You want to set up cache invalidation strategies that maintain data consistency across your caching layers
  • You are designing a caching architecture from scratch and need TTL policies, storage selection, and invalidation logic

Quick Start

```markdown
# .claude/commands/efficient-implement-caching-strategy.md
---
name: Efficient Implement Caching Strategy
description: Multi-layer caching with TTL, invalidation, and consistency
command: true
---

Implement caching: $ARGUMENTS

1. Analyze access patterns and bottlenecks
2. Design multi-layer caching architecture
3. Implement browser/CDN/application/database caching
4. Configure TTL policies and invalidation strategies
5. Add cache monitoring and hit-rate tracking
```
```bash
# Invoke the command
claude "/efficient-implement-caching-strategy for product catalog API"

# Expected output
# > Analyzing product catalog access patterns...
# > Read/write ratio: 95/5 (high cache potential)
# > Average response time: 420ms
# > Top queries: getProducts (80%), getProductById (15%)
# > Designing caching strategy...
# > Layer 1: Browser - Cache-Control headers (max-age=300)
# > Layer 2: CDN - Edge caching for GET /products (5min TTL)
# > Layer 3: Application - Redis cache for product queries (10min TTL)
# > Layer 4: Database - Query result cache (30min TTL)
# > Implementing...
# > Created: src/middleware/cacheHeaders.ts
# > Created: src/services/CacheService.ts
# > Modified: src/controllers/ProductController.ts
# > Post-implementation: avg response time 420ms -> 35ms
```

Core Concepts

| Concept | Description |
| --- | --- |
| Multi-Layer Architecture | Implements caching at browser, CDN, application, and database levels |
| Access Pattern Analysis | Studies read/write ratios and query frequency to determine optimal cache placement |
| TTL Policy Design | Sets time-to-live values per cache layer based on data freshness requirements |
| Invalidation Strategy | Configures event-driven, time-based, or write-through invalidation for consistency |
| Cache Monitoring | Adds hit rate, miss rate, and eviction tracking for ongoing cache health visibility |

Multi-Layer Cache Architecture
================================

  Client Request
       |
  [Browser Cache] -- hit --> Return cached (0ms)
       | miss
  [CDN Edge Cache] -- hit --> Return cached (5ms)
       | miss
  [App Memory/Redis] -- hit --> Return cached (2ms)
       | miss
  [Database Query Cache] -- hit --> Return cached (10ms)
       | miss
  [Database] --> Execute query (200ms+)
       |
  Populate all cache layers on response
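
The fallthrough in the diagram can be sketched as a read-through chain: each layer is tried in order, a miss falls to the next layer, and a hit back-fills every faster layer above it. This is an illustrative sketch only; the names (`CacheLayer`, `readThrough`, `makeLayer`) are hypothetical, not part of the generated code.

```typescript
interface CacheLayer {
  name: string;
  get(key: string): string | undefined;
  set(key: string, value: string): void;
}

// Hypothetical in-memory stand-in for a real layer (browser, Redis, etc.).
function makeLayer(name: string): CacheLayer {
  const store = new Map<string, string>();
  return {
    name,
    get: (k) => store.get(k),
    set: (k, v) => { store.set(k, v); },
  };
}

function readThrough(
  layers: CacheLayer[],
  key: string,
  loadFromDb: (key: string) => string,
): { value: string; servedBy: string } {
  for (let i = 0; i < layers.length; i++) {
    const hit = layers[i].get(key);
    if (hit !== undefined) {
      // Back-fill the faster layers that missed.
      for (let j = 0; j < i; j++) layers[j].set(key, hit);
      return { value: hit, servedBy: layers[i].name };
    }
  }
  // All layers missed: execute the query, then populate every layer.
  const value = loadFromDb(key);
  for (const layer of layers) layer.set(key, value);
  return { value, servedBy: "database" };
}

const layers = [makeLayer("browser"), makeLayer("app"), makeLayer("db-query")];
const first = readThrough(layers, "product:42", () => "Widget");
const second = readThrough(layers, "product:42", () => "Widget");
console.log(first.servedBy, second.servedBy); // prints: database browser
```

The second lookup is served by the fastest layer because the first lookup populated all of them on the way back.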

Configuration

| Parameter | Description | Default | Example | Required |
| --- | --- | --- | --- | --- |
| $ARGUMENTS | Target API, service, or feature to cache | none | "product catalog API" | Yes |
| cache_backend | Application-level cache storage | auto-detect | "redis", "memcached", "in-memory" | No |
| default_ttl | Default time-to-live in seconds | 300 | 600 | No |
| invalidation_strategy | How caches are invalidated on writes | "write-through" | "event-driven", "ttl-only" | No |
| enable_monitoring | Add cache hit/miss rate monitoring | true | false | No |

Best Practices

  1. Analyze before caching -- Not all endpoints benefit from caching. Focus on high-read, low-write endpoints with expensive underlying queries. Caching a rarely-accessed endpoint adds complexity without benefit.

  2. Set TTLs based on data sensitivity -- User profile data (changes rarely) can have long TTLs. Inventory counts (change frequently) need short TTLs or event-driven invalidation.

  3. Implement cache warming for critical paths -- Pre-populate caches for frequently accessed data during deployment or startup to avoid cold-start latency spikes.

  4. Monitor hit rates in production -- A cache with a low hit rate (below 70%) may have incorrect TTL settings or may be caching the wrong data. Use monitoring data to tune the strategy.

  5. Plan for cache failure -- Design the system to function correctly (albeit slowly) when the cache is unavailable. Never make the cache a hard dependency in the request path.
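
Practice 5 in particular is easy to get wrong. A minimal sketch of the idea, assuming an async cache client such as a Redis wrapper (the names `AsyncCache` and `getWithFallback` are illustrative): cache calls are wrapped in try/catch so an outage degrades to a direct, slower read instead of an error.

```typescript
type AsyncCache = {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
};

async function getWithFallback(
  cache: AsyncCache,
  key: string,
  loadFromSource: () => Promise<string>,
  ttlSeconds = 300,
): Promise<string> {
  try {
    const cached = await cache.get(key);
    if (cached !== null) return cached;
  } catch {
    // Cache read failed (e.g. Redis is down): fall through to the source.
  }
  const fresh = await loadFromSource();
  try {
    await cache.set(key, fresh, ttlSeconds);
  } catch {
    // Cache write failed: serve the fresh value anyway.
  }
  return fresh;
}

// Demo: a cache whose reads and writes always fail still serves requests.
const downCache: AsyncCache = {
  get: async () => { throw new Error("cache unavailable"); },
  set: async () => { throw new Error("cache unavailable"); },
};
getWithFallback(downCache, "product:42", async () => "Widget")
  .then((v) => console.log(v)); // prints: Widget
```

The cache becomes a best-effort accelerator rather than a hard dependency.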

Common Issues

Cache stampede on expiration: When a popular cache key expires, many simultaneous requests hit the database. Implement cache locking or staggered TTLs to prevent thundering herd problems.
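
One common mitigation, sketched here with hypothetical names (`inFlight`, `coalescedLoad`): coalesce concurrent misses for the same key into a single in-flight load, so only one request reaches the database when a hot key expires. Staggered TTLs (base TTL plus random jitter) are a complementary technique.

```typescript
// Map of keys to loads that are already running.
const inFlight = new Map<string, Promise<string>>();

async function coalescedLoad(
  key: string,
  loadFromDb: (key: string) => Promise<string>,
): Promise<string> {
  const pending = inFlight.get(key);
  if (pending) return pending; // piggyback on the load already running

  const p = loadFromDb(key).finally(() => inFlight.delete(key));
  inFlight.set(key, p);
  return p;
}

// Demo: 50 concurrent misses on one hot key trigger a single DB load.
let dbCalls = 0;
const slowLoad = (key: string) =>
  new Promise<string>((resolve) =>
    setTimeout(() => { dbCalls++; resolve(`row for ${key}`); }, 10));

Promise.all(
  Array.from({ length: 50 }, () => coalescedLoad("hot-key", slowLoad)),
).then(() => console.log(dbCalls)); // prints: 1
```

In a distributed setup, a per-key lock in Redis (e.g. SET with NX and a short expiry) serves the same purpose across processes.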

Stale data after writes: Write operations that do not invalidate the cache serve outdated data. Ensure every write path includes cache invalidation for affected keys and related collections.

Memory pressure from over-caching: Caching too many keys or large objects exhausts Redis memory or application heap. Set maximum cache sizes and implement eviction policies (LRU recommended).
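
For in-process caches, the size cap and LRU eviction mentioned above can be sketched compactly: a `Map` preserves insertion order, so re-inserting an entry on access keeps the least-recently-used entry first and evictable. This is illustrative only; production code would typically use a cache library or, for Redis, `maxmemory` with `maxmemory-policy allkeys-lru`.

```typescript
class LruCache<V> {
  private store = new Map<string, V>();
  constructor(private maxEntries: number) {}

  get(key: string): V | undefined {
    const value = this.store.get(key);
    if (value === undefined) return undefined;
    this.store.delete(key); // move to most-recently-used position
    this.store.set(key, value);
    return value;
  }

  set(key: string, value: V): void {
    this.store.delete(key);
    this.store.set(key, value);
    if (this.store.size > this.maxEntries) {
      // Evict the least-recently-used entry (first in insertion order).
      const oldest = this.store.keys().next().value as string;
      this.store.delete(oldest);
    }
  }
}

const cache = new LruCache<string>(2);
cache.set("a", "1");
cache.set("b", "2");
cache.get("a");      // "a" becomes most recently used
cache.set("c", "3"); // evicts "b", the least recently used
console.log(cache.get("b")); // prints: undefined
```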
