English | 한국어 | 简体中文 | 日本語 | Español
The multi-layer caching toolkit that Node.js deserves.
Stack memory + Redis + disk. One API. Zero stampedes.
Website | Quick Start | Features | API Reference | Integrations | Comparison | Tutorial | Migration Guide
Every growing Node.js service hits the same caching wall:
Memory-only cache --> Fast, but each instance has a different view of data
Redis-only cache --> Shared, but every request pays a network round-trip
Hand-rolled hybrid --> Works... until you need stampede prevention, invalidation, stale serving, observability, and distributed consistency
layercache gives you a unified multi-layer cache with production-grade features built in:
```
              ┌───────────────────────────────────────┐
your app ---->│              layercache               │
              │                                       │
              │  L1 Memory  ~0.01ms   (per-process)   │
              │      |                                │
              │  L2 Redis   ~0.5ms    (shared)        │
              │      |                                │
              │  L3 Disk    ~2ms      (persistent)    │
              │      |                                │
              │  Fetcher    ~20ms     (runs once)     │
              └───────────────────────────────────────┘
```
On a hit --> serves the fastest layer, backfills the rest
On a miss --> fetcher runs ONCE (even under 100x concurrency)
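The "fetcher runs once" guarantee is the classic single-flight pattern. As a standalone illustration of the idea (not layercache's actual implementation), concurrent callers for the same key can share one in-flight promise:

```typescript
// Standalone single-flight sketch: concurrent callers for the same key
// share one in-flight promise, so the fetcher runs exactly once.
const inflight = new Map<string, Promise<unknown>>()

async function singleFlight<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
  const pending = inflight.get(key)
  if (pending) return pending as Promise<T> // join the in-flight fetch
  const p = fetcher().finally(() => inflight.delete(key))
  inflight.set(key, p)
  return p
}

// 100 concurrent requests for one key -> one fetcher execution.
let executions = 0
const results = await Promise.all(
  Array.from({ length: 100 }, () =>
    singleFlight('user:123', async () => { executions++; return 'value' }))
)
console.log(executions) // 1
```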
```bash
npm install layercache
```

```typescript
import { CacheStack, MemoryLayer, RedisLayer } from 'layercache'
import Redis from 'ioredis'

const cache = new CacheStack([
  new MemoryLayer({ ttl: 60, maxSize: 1_000 }),        // L1: in-process
  new RedisLayer({ client: new Redis(), ttl: 3600 }),  // L2: shared
])

// Read-through: fetcher runs once, all layers filled
const user = await cache.get('user:123', () => db.findUser(123))
```

Memory-only (no Redis required)
```typescript
const cache = new CacheStack([
  new MemoryLayer({ ttl: 60 })
])
```

Three-layer setup with disk persistence
```typescript
import { CacheStack, MemoryLayer, RedisLayer, DiskLayer } from 'layercache'

const cache = new CacheStack([
  new MemoryLayer({ ttl: 60, maxSize: 5_000 }),
  new RedisLayer({ client: new Redis(), ttl: 3600, compression: 'gzip' }),
  new DiskLayer({ directory: './var/cache', maxFiles: 10_000 }),
])
```

| Feature | What it does |
|---|---|
| Layered reads + auto backfill | Reads hit L1 first; on a partial hit, upper layers are filled automatically |
| Stampede prevention | 100 concurrent requests for the same key = 1 fetcher execution |
| Distributed single-flight | Cross-instance dedup via Redis locks with lease renewal |
| Bulk operations | getMany() / setMany() / mdelete() with layer-level fast paths |
| wrap() API | Transparent function caching with automatic key derivation |
| Namespaces | Scoped cache views with hierarchical prefix support |
| Cache warming | Pre-populate layers at startup with priority-based loading |
| Negative caching | Cache misses (e.g., "user not found") for short TTLs |
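The wrap() API's transparent function caching can be pictured as a memoizer that derives the cache key from a name plus the serialized arguments. A simplified, self-contained sketch of the concept (`wrapSketch` is illustrative, not layercache's implementation):

```typescript
// Standalone sketch of transparent function caching with automatic
// key derivation (name + JSON-serialized args). Illustration only.
const store = new Map<string, unknown>()

function wrapSketch<A extends unknown[], R>(
  name: string,
  fn: (...args: A) => Promise<R>
): (...args: A) => Promise<R> {
  return async (...args: A) => {
    const key = `${name}:${JSON.stringify(args)}` // derived cache key
    if (store.has(key)) return store.get(key) as R
    const value = await fn(...args)
    store.set(key, value)
    return value
  }
}

let calls = 0
const findUser = wrapSketch('findUser', async (id: number) => { calls++; return { id } })
await findUser(7)
await findUser(7)  // same derived key -> served from cache
console.log(calls) // 1
```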
| Feature | What it does |
|---|---|
| Tag invalidation | Delete all keys with a given tag across all layers |
| Batch tag invalidation | Multi-tag operations with any / all semantics |
| Wildcard & prefix invalidation | Glob-style and hierarchical key patterns |
| Generation-based rotation | Bulk namespace invalidation without scanning |
| Stale-while-revalidate | Return cached value, refresh in background |
| Stale-if-error | Keep serving stale when upstream fails |
| Sliding TTL | Reset expiry on every read for frequently-accessed keys |
| Adaptive TTL | Auto-ramp TTL for hot keys up to a ceiling |
| Refresh-ahead | Proactively refresh before expiry |
| TTL policies | Align expirations to calendar boundaries (until-midnight, next-hour, custom) |
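Stale-while-revalidate is the key freshness trade-off in the table above: once an entry passes its soft TTL, it is still served immediately while a refresh happens in the background. A self-contained sketch of the core idea (not layercache's implementation):

```typescript
// Standalone stale-while-revalidate sketch: a stale entry is served
// instantly, and a non-blocking background refresh replaces it.
type Entry = { value: string; softExpiry: number }
const swrStore = new Map<string, Entry>()

async function getSWR(
  key: string,
  ttlMs: number,
  fetcher: () => Promise<string>
): Promise<string> {
  const entry = swrStore.get(key)
  const now = Date.now()
  if (entry) {
    if (now > entry.softExpiry) {
      // Stale: serve the old value now, refresh without blocking.
      void fetcher().then(value =>
        swrStore.set(key, { value, softExpiry: Date.now() + ttlMs }))
    }
    return entry.value
  }
  const value = await fetcher()
  swrStore.set(key, { value, softExpiry: now + ttlMs })
  return value
}

// Seed a stale entry: the caller still gets the old value instantly.
swrStore.set('greeting', { value: 'old', softExpiry: Date.now() - 1 })
const served = await getSWR('greeting', 60_000, async () => 'new')
console.log(served) // 'old' -- and a refresh is already in flight
```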
| Feature | What it does |
|---|---|
| Graceful degradation | Skip failed layers temporarily, keep cache available |
| Circuit breaker | Stop hammering broken upstreams after repeated failures |
| Fetcher rate limiting | Global, per-key, or per-fetcher scopes with custom buckets |
| Write policies | strict (fail if any layer fails) or best-effort |
| Write-behind | Batch writes with configurable flush interval |
| Compression | gzip / brotli in RedisLayer with configurable threshold |
| MessagePack | Pluggable serializers (JSON default, MessagePack alternative) |
| Persistence | Export/import snapshots to memory or disk |
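A circuit breaker like the one in the table works by counting consecutive failures and rejecting calls outright once a threshold is hit. A toy stand-in showing the mechanism (not layercache's implementation; names and defaults are illustrative):

```typescript
// Standalone circuit-breaker sketch: after `threshold` consecutive
// failures the breaker opens and rejects calls immediately until
// `cooldownMs` elapses, so a broken upstream stops being hammered.
class BreakerSketch {
  private failures = 0
  private openedAt = 0
  constructor(private threshold = 3, private cooldownMs = 10_000) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.failures >= this.threshold &&
        Date.now() - this.openedAt < this.cooldownMs) {
      throw new Error('circuit open')
    }
    try {
      const result = await fn()
      this.failures = 0 // any success closes the circuit
      return result
    } catch (err) {
      this.failures++
      if (this.failures >= this.threshold) this.openedAt = Date.now()
      throw err
    }
  }
}

const breaker = new BreakerSketch(3, 10_000)
const broken = async () => { throw new Error('upstream down') }
for (let i = 0; i < 3; i++) {
  await breaker.call(broken).catch(() => {}) // three strikes...
}
let rejected = ''
await breaker.call(async () => 'fine').catch(e => { rejected = e.message })
console.log(rejected) // 'circuit open' -- upstream never called
```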
| Feature | What it does |
|---|---|
| Metrics | Hits, misses, fetches, stale hits, circuit breaker trips, and more |
| Per-layer latency | Avg, max, and sample count using Welford's algorithm |
| Health checks | Async health endpoint per layer with latency measurement |
| Event hooks | hit, miss, set, delete, stale-serve, stampede-dedupe, backfill, warm, error |
| OpenTelemetry | Hook-based distributed tracing support without method monkey-patching |
| Prometheus exporter | Metrics export including latency gauges |
| HTTP stats handler | JSON endpoint for dashboards |
| Admin CLI | npx layercache stats \| keys \| invalidate for Redis-backed caches |
layercache plugs into the frameworks you already use:
| Framework | Integration |
|---|---|
| Express | createExpressCacheMiddleware(cache, opts) - auto-caches responses with x-cache: HIT/MISS header |
| Fastify | createFastifyLayercachePlugin(cache, opts) - registers fastify.cache with optional stats route |
| Hono | createHonoCacheMiddleware(cache, opts) - edge-compatible middleware |
| tRPC | createTrpcCacheMiddleware(cache, prefix, opts) - procedure middleware |
| GraphQL | cacheGraphqlResolver(cache, prefix, resolver, opts) - field resolver wrapper |
| Next.js | Works natively with App Router and API routes |
| OpenTelemetry | createOpenTelemetryPlugin(cache, tracer) - event-driven tracing spans without monkey-patching |
Express example

```typescript
import { CacheStack, MemoryLayer, createExpressCacheMiddleware } from 'layercache'

const cache = new CacheStack([new MemoryLayer({ ttl: 60 })])

app.get('/api/users', createExpressCacheMiddleware(cache, {
  ttl: 30,
  tags: ['users'],
  keyResolver: (req) => `users:${req.url}`
}), async (req, res) => {
  res.json(await db.getUsers())
})
```

Next.js App Router example
```typescript
export async function GET(_req: Request, { params }: { params: { id: string } }) {
  const data = await cache.get(`user:${params.id}`, () => db.findUser(Number(params.id)))
  return Response.json(data)
}
```

layercache is built for multi-instance production environments:
```
┌───────────┐   ┌───────────┐   ┌───────────┐
│ Server A  │   │ Server B  │   │ Server C  │
│ [Memory]  │   │ [Memory]  │   │ [Memory]  │
└─────┬─────┘   └─────┬─────┘   └─────┬─────┘
      │               │               │
      └───────── Redis Pub/Sub ───────┘   <-- L1 invalidation bus
                      │
               ┌──────┴─────┐
               │   Redis    │   <-- shared L2 + tag index + single-flight
               └────────────┘
```
- Redis single-flight - dedup misses across instances with distributed locks
- Redis invalidation bus - pub/sub-based L1 invalidation for memory consistency
- Redis tag index - shared tag tracking with optional sharding
- Snapshot persistence - export/import state between instances
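The invalidation bus keeps per-process L1 caches consistent: when one instance deletes a key, every instance drops its local copy. A self-contained sketch of that flow, with an in-process `ToyBus` standing in for Redis pub/sub (illustration only, not layercache's implementation):

```typescript
// Standalone L1-invalidation sketch: each "instance" has its own
// memory cache; a shared bus broadcasts deletes to all of them.
type Handler = (key: string) => void

class ToyBus {
  private subscribers: Handler[] = []
  subscribe(fn: Handler) { this.subscribers.push(fn) }
  publish(key: string) { for (const fn of this.subscribers) fn(key) }
}

class Instance {
  readonly l1 = new Map<string, string>()
  constructor(private bus: ToyBus) {
    bus.subscribe(key => this.l1.delete(key)) // react to remote deletes
  }
  invalidate(key: string) { this.bus.publish(key) } // broadcast to all
}

const bus = new ToyBus()
const a = new Instance(bus)
const b = new Instance(bus)
a.l1.set('user:1', 'alice')
b.l1.set('user:1', 'alice')
a.invalidate('user:1')          // server A invalidates the key
console.log(b.l1.has('user:1')) // false -- B's L1 dropped it too
```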
Full distributed setup
```typescript
import {
  CacheStack, MemoryLayer, RedisLayer,
  RedisInvalidationBus, RedisTagIndex, RedisSingleFlightCoordinator
} from 'layercache'
import Redis from 'ioredis'

const redis = new Redis()
const bus = new RedisInvalidationBus({ publisher: redis, subscriber: new Redis() })
const tagIndex = new RedisTagIndex({ client: redis, prefix: 'myapp:tags' })
const coordinator = new RedisSingleFlightCoordinator({ client: redis })

const cache = new CacheStack(
  [
    new MemoryLayer({ ttl: 60, maxSize: 10_000 }),
    new RedisLayer({ client: redis, ttl: 3600, prefix: 'myapp:cache:' })
  ],
  {
    invalidationBus: bus,
    tagIndex: tagIndex,
    singleFlightCoordinator: coordinator,
    gracefulDegradation: { retryAfterMs: 10_000 }
  }
)
```

```
┌─────────────────────┬──────────────┐
│ Scenario            │ Avg Latency  │
├─────────────────────┼──────────────┤
│ L1 memory hit       │   ~0.006 ms  │
│ L2 Redis hit        │   ~0.020 ms  │
│ No cache (sim. DB)  │   ~1.08 ms   │
└─────────────────────┴──────────────┘

┌─────────────────────┬────────┐
│ concurrentRequests  │   100  │
│ fetcherExecutions   │     1  │  <-- stampede prevention
└─────────────────────┴────────┘
```
Benchmark commands, fixtures, and scenario notes live in docs/benchmarking.md.
| Feature | node-cache-manager | keyv | cacheable | layercache |
|---|---|---|---|---|
| Multi-layer with auto backfill | Partial | Plugin | -- | Yes |
| Stampede prevention | -- | -- | -- | Yes |
| Distributed single-flight | -- | -- | -- | Yes |
| Tag invalidation | -- | -- | Yes | Yes |
| Distributed tags | -- | -- | -- | Yes |
| Cross-server L1 flush | -- | -- | -- | Yes |
| Stale-while-revalidate | -- | -- | -- | Yes |
| Circuit breaker | -- | -- | -- | Yes |
| Graceful degradation | -- | -- | -- | Yes |
| Sliding / adaptive TTL | -- | -- | -- | Yes |
| Cache warming | -- | -- | -- | Yes |
| Persistence / snapshots | -- | -- | -- | Yes |
| Compression | -- | -- | Yes | Yes |
| Admin CLI | -- | -- | -- | Yes |
| TypeScript-first | Partial | Yes | Yes | Yes |
| Wrap / decorator API | Yes | -- | -- | Yes |
| Namespaces | -- | Yes | Yes | Yes |
| Event hooks | Yes | Yes | Yes | Yes |
| Custom layers | Partial | -- | -- | Yes |
See the full comparison guide for detailed breakdowns.
| Document | Description |
|---|---|
| API Reference | Complete API documentation with all options |
| Tutorial | Step-by-step operational walkthrough |
| Comparison Guide | Detailed feature comparison with alternatives |
| Migration Guide | Migrate from node-cache-manager, keyv, or cacheable |
| Benchmarking | Benchmark scenarios and methodology |
| Changelog | Version history and breaking changes |
The examples/ directory contains ready-to-run projects:
- express-api/ - Express REST API with layered caching
- nextjs-api-routes/ - Next.js App Router with layercache
- Node.js >= 20
- TypeScript >= 5.0 (optional - fully typed, ships .d.ts)
- ioredis >= 5 (optional - only needed for Redis features)

Runtime dependencies: async-mutex and @msgpack/msgpack
Contributions welcome - bug fixes, docs, performance work, new adapters, or issue reports.
```bash
git clone https://github.com/flyingsquirrel0419/layercache
cd layercache
npm install
npm run lint && npm test && npm run build:all
```

See the Contributing Guide and Code of Conduct.
Apache 2.0 - use it freely in personal and commercial projects.
If layercache saves you time, consider giving it a star on GitHub. It helps others discover the project.
