HybridCache — The Best of In-Memory and Distributed Cache
HybridCache (.NET 9) combines a fast in-process L1 memory layer with a distributed L2 such as Redis, and adds stampede protection and tag-based invalidation out of the box. It replaces the IMemoryCache + IDistributedCache juggling act.
The Problem With the Old Stack
Before .NET 9, caching required two separate services with different APIs:
// IMemoryCache — fast, in-process, no distributed support
cache.GetOrCreate(key, entry => { ... });
// IDistributedCache — distributed but slow (serialization + network), no stampede protection
var bytes = await distributedCache.GetAsync(key);
var value = JsonSerializer.Deserialize<T>(bytes);

You'd typically implement a two-layer pattern yourself: check memory first, fall back to Redis, then to the database. Every team had a different (and usually buggy) version of this.
HybridCache is that pattern — built in, tested, and shipped with .NET 9.
Architecture
Request
│
▼
L1: IMemoryCache (in-process)
│ miss
▼
L2: IDistributedCache (Redis / SQL / etc.)
│ miss
▼
Factory function (database / API call)
│
▼ (result stored in both L2 and L1)
Response

On a cache hit at L1, there's zero network overhead. L2 ensures all instances share the same data. L1 acts as a read-through buffer.
Setup
Install:
dotnet add package Microsoft.Extensions.Caching.Hybrid

Register:
// Program.cs
// L2 backing store — Redis
builder.Services.AddStackExchangeRedisCache(options =>
{
options.Configuration = builder.Configuration.GetConnectionString("Redis");
options.InstanceName = "myapp:";
});
// HybridCache — wraps both layers
builder.Services.AddHybridCache(options =>
{
options.MaximumPayloadBytes = 1024 * 1024; // 1MB max per entry
options.MaximumKeyLength = 512;
// Default TTLs (overridable per-entry)
options.DefaultEntryOptions = new HybridCacheEntryOptions
{
Expiration = TimeSpan.FromMinutes(5), // L2 TTL
LocalCacheExpiration = TimeSpan.FromMinutes(1) // L1 TTL (shorter)
};
});

If you don't register IDistributedCache, HybridCache operates as an in-memory-only cache — still useful for the stampede protection.
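A minimal sketch of that in-memory-only registration, with no Redis backing store. AddHybridCache and HybridCacheEntryOptions are the real APIs; the option values are illustrative:

```csharp
using Microsoft.Extensions.Caching.Hybrid;

var builder = WebApplication.CreateBuilder(args);

// No IDistributedCache registered: HybridCache runs with only the
// in-process L1 layer. Stampede protection and in-process tag
// invalidation still work.
builder.Services.AddHybridCache(options =>
{
    options.DefaultEntryOptions = new HybridCacheEntryOptions
    {
        // With no L2, only the local expiration matters in practice.
        LocalCacheExpiration = TimeSpan.FromMinutes(5)
    };
});

var app = builder.Build();
```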
GetOrCreateAsync — The Main API
public class ProductService(HybridCache cache, IProductRepository repo)
{
public async Task<Product?> GetProductAsync(int id, CancellationToken ct = default)
{
return await cache.GetOrCreateAsync(
key: $"product:{id}",
factory: async (token) => await repo.FindByIdAsync(id, token),
cancellationToken: ct);
}
}

Per-entry options override the defaults:
public async Task<IReadOnlyList<Category>> GetCategoriesAsync(CancellationToken ct = default)
{
return await cache.GetOrCreateAsync(
key: "categories:all",
factory: async (token) => await repo.GetAllCategoriesAsync(token),
options: new HybridCacheEntryOptions
{
Expiration = TimeSpan.FromHours(1), // Redis TTL
LocalCacheExpiration = TimeSpan.FromMinutes(5) // Memory TTL
},
tags: ["categories"], // for tag-based invalidation
cancellationToken: ct);
}

Stampede Protection — Built In
HybridCache coalesces concurrent requests for the same key. When multiple callers request an uncached key simultaneously, only one factory call is made. All callers await that single result.
// 1000 concurrent requests for "product:42" — only one DB call
var tasks = Enumerable.Range(0, 1000)
.Select(_ => cache.GetOrCreateAsync("product:42", async _ =>
await repo.FindByIdAsync(42, ct)));
var results = await Task.WhenAll(tasks); // all get the same value, one DB hit

This is the key advantage over IMemoryCache.GetOrCreateAsync, which has no such protection.
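For contrast, here is a rough sketch of the hand-rolled per-key locking that teams used to layer on top of a cache to get the same effect. The CacheStampedeGuard name and structure are illustrative, not a real library type:

```csharp
using System.Collections.Concurrent;

// Illustrative sketch of the manual stampede guard that HybridCache
// makes unnecessary: one SemaphoreSlim per key, double-checked lookup.
public class CacheStampedeGuard<TValue>
{
    private readonly ConcurrentDictionary<string, TValue> _cache = new();
    private readonly ConcurrentDictionary<string, SemaphoreSlim> _locks = new();

    public async Task<TValue> GetOrCreateAsync(string key, Func<Task<TValue>> factory)
    {
        if (_cache.TryGetValue(key, out var cached))
            return cached;

        var gate = _locks.GetOrAdd(key, _ => new SemaphoreSlim(1, 1));
        await gate.WaitAsync();
        try
        {
            // Re-check: another caller may have populated the key while we waited.
            if (_cache.TryGetValue(key, out cached))
                return cached;

            var value = await factory();
            _cache[key] = value;
            return value;
        }
        finally
        {
            gate.Release();
        }
    }
}
```

Note everything this sketch omits: expiration, eviction of the per-key semaphores, failure handling, and any L2 coordination. That gap is exactly what HybridCache closes.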
Tag-Based Invalidation
Tags group related cache entries for bulk invalidation:
// Store with tags
await cache.GetOrCreateAsync(
"product:1",
async _ => await repo.FindByIdAsync(1, ct),
tags: ["products", "catalogue"],
cancellationToken: ct);
await cache.GetOrCreateAsync(
"product:2",
async _ => await repo.FindByIdAsync(2, ct),
tags: ["products", "catalogue"],
cancellationToken: ct);
// Invalidate all entries tagged "products" — across all instances
await cache.RemoveByTagAsync("products", ct);

RemoveByTagAsync evicts matching entries from both L1 and L2.
public class ProductsController(HybridCache cache, IProductService svc) : ControllerBase
{
[HttpPost]
public async Task<IActionResult> Create(CreateProductRequest request, CancellationToken ct)
{
var product = await svc.CreateProductAsync(request, ct);
await cache.RemoveByTagAsync("products", ct);
return CreatedAtAction(nameof(Get), new { id = product.Id }, product);
}
[HttpDelete("{id:int}")]
public async Task<IActionResult> Delete(int id, CancellationToken ct)
{
await svc.DeleteProductAsync(id, ct);
// Remove specific entry AND invalidate list caches
await cache.RemoveAsync($"product:{id}", ct);
await cache.RemoveByTagAsync("products", ct);
return NoContent();
}
}

Configuring Per-Entry Options at Registration
Set global defaults with per-type overrides:
builder.Services.AddHybridCache(options =>
{
options.DefaultEntryOptions = new HybridCacheEntryOptions
{
Expiration = TimeSpan.FromMinutes(5),
LocalCacheExpiration = TimeSpan.FromMinutes(1)
};
// These apply to all entries — override at call site for specific needs
options.MaximumPayloadBytes = 512 * 1024; // 512KB
});

Serialization
By default HybridCache uses System.Text.Json for L2 serialization. Register a custom serializer:
builder.Services.AddHybridCache()
.AddSerializer<Product, ProductSerializer>(); // custom IHybridCacheSerializer<T>
// Or register a default serializer factory for all types
builder.Services.AddHybridCache()
    .AddSerializerFactory<MessagePackSerializerFactory>();

L1 stores the deserialized object — no serialization cost on L1 hits.
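The AddSerializer call above assumes a ProductSerializer class. A minimal sketch of what implementing IHybridCacheSerializer&lt;T&gt; might look like (the Product shape is illustrative, and a real custom serializer would typically use a non-JSON format such as MessagePack, since System.Text.Json is already the default):

```csharp
using System.Buffers;
using System.Text.Json;
using Microsoft.Extensions.Caching.Hybrid;

public record Product(int Id, string Name, decimal Price); // illustrative shape

public class ProductSerializer : IHybridCacheSerializer<Product>
{
    // Called on L2 hits: turn the raw bytes back into an object.
    public Product Deserialize(ReadOnlySequence<byte> source)
    {
        var reader = new Utf8JsonReader(source);
        return JsonSerializer.Deserialize<Product>(ref reader)!;
    }

    // Called before writing to L2: write the object into the buffer.
    public void Serialize(Product value, IBufferWriter<byte> target)
    {
        using var writer = new Utf8JsonWriter(target);
        JsonSerializer.Serialize(writer, value);
    }
}
```

The buffer-oriented signatures (ReadOnlySequence, IBufferWriter) let the cache avoid intermediate byte[] allocations on the L2 path.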
HybridCache vs IMemoryCache vs IDistributedCache
| | IMemoryCache | IDistributedCache | HybridCache |
|---|---|---|---|
| Speed | Fastest (in-process) | Slower (network) | Fast (L1 hit = in-process) |
| Multi-instance | No | Yes | Yes |
| Stampede protection | No | No | Yes |
| Tag invalidation | No | No | Yes |
| .NET version | All | All | .NET 9+ |
| API simplicity | Medium | Low (byte arrays) | High |
| Best for | Single instance, objects | Shared state, sessions | Everything new in .NET 9+ |
Without Redis — In-Memory Only
If you don't register IDistributedCache, HybridCache still provides value:
- Single unified API
- Stampede protection (concurrent callers share one factory invocation)
- Tag-based invalidation within the process
- Consistent GetOrCreateAsync pattern
Upgrade to Redis later by registering AddStackExchangeRedisCache() — no changes to call sites.
Key Takeaways
- HybridCache replaces the manual L1+L2 pattern with a tested, first-party implementation
- L1 (memory) TTL should be shorter than L2 (Redis) TTL — short enough to pick up distributed invalidations
- Stampede protection is automatic — no SemaphoreSlim needed
- Tag-based invalidation works across all instances when backed by Redis
- Works without Redis — acts as an improved IMemoryCache with stampede protection