SQL & NoSQL Databases: Complete Guide · Lesson 5 of 9
Redis: Beyond Caching
Redis Is Not Just a Cache
Redis (Remote Dictionary Server) is an in-memory data structure server. While it's most commonly used as a cache, its rich data types make it suitable for:
- Sessions — sub-millisecond session reads for millions of users
- Leaderboards — sorted sets maintain ranked data automatically
- Rate limiting — atomic increment + expire in a single command
- Job queues — lists or Streams can replace a simple message broker
- Pub/Sub — fan-out to multiple subscribers in real time
- Distributed locks — Redlock algorithm for coordinating microservices
- Feature flags — hash maps read in under 1ms
Used by: Twitter, GitHub, Stack Overflow, Snapchat, Craigslist, Tinder.
Setup
Bash
# Docker
docker run -d \
--name redis-dev \
-p 6379:6379 \
redis:7-alpine \
redis-server --requirepass devpassword --save 60 1000
# Connect
redis-cli -a devpassword
# Or with URL
redis-cli -u "redis://:devpassword@localhost:6379"
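The C# snippets in this lesson use StackExchange.Redis. A minimal connection sketch, assuming the Docker container above (_redis is the IDatabase handle the later examples reference):
C#
using StackExchange.Redis;
// One multiplexer per application — it is thread-safe and multiplexes
// all commands over a shared connection.
ConnectionMultiplexer muxer = await ConnectionMultiplexer.ConnectAsync(
    "localhost:6379,password=devpassword");
IDatabase _redis = muxer.GetDatabase();
Data Types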
Strings — The Foundation
Bash
SET user:99:name "Sarah K."
GET user:99:name # "Sarah K."
SET counter 0
INCR counter # 1 (atomic — safe for concurrent use)
INCRBY counter 5 # 6
# Expiry (TTL)
SET session:abc123 '{"userId":"99"}' EX 3600 # expires in 1 hour
TTL session:abc123 # seconds remaining
PERSIST session:abc123 # remove TTL
# NX — only set if not exists (mutex/lock pattern)
SET lock:payment:order-99 "worker-1" NX EX 30
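The same commands through StackExchange.Redis — a sketch only; the method names are the client's actual API, the keys are the ones from the block above:
C#
// INCR → StringIncrementAsync (atomic on the server)
long hits = await _redis.StringIncrementAsync("counter");
// SET ... EX → pass an expiry
await _redis.StringSetAsync("session:abc123", "{\"userId\":\"99\"}",
    TimeSpan.FromHours(1));
// SET ... NX EX → When.NotExists plus an expiry (the lock pattern)
bool acquired = await _redis.StringSetAsync("lock:payment:order-99",
    "worker-1", TimeSpan.FromSeconds(30), When.NotExists);
Hashes — Object Storage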
Bash
# Store a user object
HSET user:99 name "Sarah K." email "sarah@example.com" plan "pro" loginCount 0
# Read
HGET user:99 email # sarah@example.com
HGETALL user:99 # all fields
HMGET user:99 name plan # multiple fields
# Update one field
HSET user:99 plan "enterprise"
HINCRBY user:99 loginCount 1 # atomic increment on a field
# Check and list
HEXISTS user:99 phone # 0 (false)
HKEYS user:99 # [name, email, plan, loginCount]
HLEN user:99 # 4
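In C#, hashes map naturally onto per-field reads and updates — a sketch with the client's hash methods:
C#
// HSET several fields at once
await _redis.HashSetAsync("user:99", new HashEntry[]
{
    new("name", "Sarah K."),
    new("email", "sarah@example.com"),
    new("plan", "pro"),
});
// HGET and HINCRBY
RedisValue email = await _redis.HashGetAsync("user:99", "email");
long logins = await _redis.HashIncrementAsync("user:99", "loginCount");
Lists — Queues and Stacks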
Bash
# Push to list
RPUSH queue:emails '{"to":"sarah@example.com","subject":"Welcome"}' # append right
LPUSH queue:priority '{"urgent":true}' # prepend left
# Pop (blocking — perfect for workers)
BRPOP queue:priority queue:emails 5 # block up to 5s; checks keys in order, pops the first non-empty
# Peek without consuming
LRANGE queue:emails 0 -1 # all items
LLEN queue:emails # count
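One client-side caveat worth knowing: StackExchange.Redis deliberately does not expose blocking pops like BRPOP, because all commands share one multiplexed connection. A worker written against it typically polls instead — a sketch:
C#
// RPUSH (append right) + LPOP (pop left) = FIFO queue
await _redis.ListRightPushAsync("queue:emails",
    "{\"to\":\"sarah@example.com\",\"subject\":\"Welcome\"}");
RedisValue job = await _redis.ListLeftPopAsync("queue:emails");
if (!job.IsNull)
    Console.WriteLine($"processing {job}"); // hand off to the email sender
Sets — Unique Collections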
Bash
# Tags, followers, online users
SADD tags:post:99 "redis" "database" "backend"
SADD tags:post:99 "redis" # duplicate — ignored
SMEMBERS tags:post:99 # all members
SISMEMBER tags:post:99 "redis" # 1 (true)
SCARD tags:post:99 # 3
# Set operations
SUNION tags:post:99 tags:post:100 # union of two tag sets
SINTER online:region:eu online:plan:pro # users who are EU AND pro
SDIFF followers:alice followers:bob # alice's followers not following bob
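The set algebra runs server-side from C# too — a sketch using the keys above:
C#
// SADD / SISMEMBER
await _redis.SetAddAsync("tags:post:99",
    new RedisValue[] { "redis", "database", "backend" });
bool tagged = await _redis.SetContainsAsync("tags:post:99", "redis");
// SINTER — intersection computed by the server
RedisValue[] euPro = await _redis.SetCombineAsync(
    SetOperation.Intersect, "online:region:eu", "online:plan:pro");
Sorted Sets — Leaderboards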
Bash
# Score is always a float — sorted automatically
ZADD leaderboard 10400 "mikhail"
ZADD leaderboard 8750 "yuki"
ZADD leaderboard 7200 "sarah"
# Top 10 (highest score first)
ZREVRANGE leaderboard 0 9 WITHSCORES
# Rank of a member (0-indexed)
ZREVRANK leaderboard "yuki" # 1 (2nd place)
ZSCORE leaderboard "sarah" # 7200.0
# Add XP
ZINCRBY leaderboard 300 "sarah" # now 7500
# Range by score
ZRANGEBYSCORE leaderboard 5000 +inf WITHSCORES
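The leaderboard reads map directly onto the client — a sketch with the scores from above:
C#
// ZINCRBY → SortedSetIncrementAsync
await _redis.SortedSetIncrementAsync("leaderboard", "sarah", 300);
// ZREVRANGE ... WITHSCORES → descending rank range with scores
SortedSetEntry[] top10 = await _redis.SortedSetRangeByRankWithScoresAsync(
    "leaderboard", 0, 9, Order.Descending);
foreach (var entry in top10)
    Console.WriteLine($"{entry.Element}: {entry.Score}");
Streams — Persistent Event Log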
Bash
# Append events (auto-generate ID)
XADD events:orders * orderId ORD-001 userId u_99 action placed total 10997
# Read from beginning
XRANGE events:orders - +
# Consumer group — each event goes to exactly one worker in the group (at-least-once delivery until XACK)
XGROUP CREATE events:orders workers $ MKSTREAM
XREADGROUP GROUP workers worker-1 COUNT 10 STREAMS events:orders >
# ... process events ...
XACK events:orders workers event-id-here
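A consumer-group worker in C# might look like this sketch (group and stream names from the commands above; error handling omitted):
C#
// XADD — producer side
await _redis.StreamAddAsync("events:orders", new NameValueEntry[]
{
    new("orderId", "ORD-001"), new("action", "placed"),
});
// XREADGROUP then XACK — each event is delivered to one worker in the group
StreamEntry[] batch = await _redis.StreamReadGroupAsync(
    "events:orders", "workers", "worker-1", StreamPosition.NewMessages, 10);
foreach (var entry in batch)
{
    // ... process entry.Values ...
    await _redis.StreamAcknowledgeAsync("events:orders", "workers", entry.Id);
}
Production Patterns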
Pattern 1: Cache-Aside
C#
public async Task<Product?> GetProductAsync(string sku)
{
var cacheKey = $"product:{sku}";
var cached = await _redis.StringGetAsync(cacheKey);
if (cached.HasValue)
return JsonSerializer.Deserialize<Product>(cached!);
var product = await _db.Products.FirstOrDefaultAsync(p => p.Sku == sku);
if (product is not null)
await _redis.StringSetAsync(cacheKey,
JsonSerializer.Serialize(product),
TimeSpan.FromMinutes(15));
return product;
}
Pattern 2: Rate Limiter (Sliding Window)
Lua
-- Lua script — executes atomically
local key = KEYS[1] -- "ratelimit:user:99:api"
local limit = tonumber(ARGV[1]) -- 100
local window = tonumber(ARGV[2]) -- 60 (seconds)
local now = tonumber(ARGV[3]) -- current time in milliseconds (matches window * 1000 below)
redis.call("ZREMRANGEBYSCORE", key, 0, now - window * 1000)
local count = redis.call("ZCARD", key)
if count < limit then
redis.call("ZADD", key, now, now)
redis.call("EXPIRE", key, window)
return 1 -- allowed
else
return 0 -- rejected
end
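Invoking the script from C# is a one-liner with ScriptEvaluateAsync — a sketch in which rateLimitLua holds the Lua above and the limit/window values are illustrative:
C#
// KEYS[1] = rate-limit key; ARGV = limit, window (s), now (ms)
var allowed = (int)await _redis.ScriptEvaluateAsync(rateLimitLua,
    new RedisKey[] { "ratelimit:user:99:api" },
    new RedisValue[] { 100, 60, DateTimeOffset.UtcNow.ToUnixTimeMilliseconds() });
if (allowed == 0)
    return Results.StatusCode(429); // hypothetical ASP.NET minimal-API handler
Pattern 3: Distributed Lock (Redlock)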
C#
// StackExchange.Redis — single-instance version; full Redlock repeats
// this acquire against a majority of independent Redis nodes
var lockKey = "lock:invoice:INV-001";
var lockValue = Guid.NewGuid().ToString(); // unique per worker
// Acquire
bool acquired = await _redis.StringSetAsync(
lockKey, lockValue,
TimeSpan.FromSeconds(30),
When.NotExists);
if (!acquired) return; // another worker has the lock
try
{
// Do exclusive work
await ProcessInvoiceAsync("INV-001");
}
finally
{
// Release only if we own the lock (Lua for atomicity)
var script = @"
if redis.call('GET', KEYS[1]) == ARGV[1] then
return redis.call('DEL', KEYS[1])
else return 0 end";
await _redis.ScriptEvaluateAsync(script,
new RedisKey[] { lockKey },
new RedisValue[] { lockValue });
}
Pattern 4: Pub/Sub Fan-Out
C#
// Publisher
var pub = _redis.GetSubscriber();
await pub.PublishAsync("notifications:user:99",
JsonSerializer.Serialize(new { type = "badge_earned", badge = "week-streak" }));
// Subscriber (e.g., WebSocket service) — the trailing * makes this a pattern subscription (PSUBSCRIBE)
var sub = _redis.GetSubscriber();
await sub.SubscribeAsync("notifications:user:*", (channel, message) =>
{
var userId = channel.ToString().Split(':')[2];
var payload = message.ToString();
_webSocketHub.SendToUser(userId, payload);
});
Note: Pub/Sub is fire-and-forget — a subscriber that is offline misses the message. Use Streams when you need replay.
Persistence Options
Bash
# RDB (snapshot) — good for backups, some data loss possible
save 900 1 # save if 1 change in 15 min
save 300 10 # save if 10 changes in 5 min
save 60 10000 # save if 10000 changes in 1 min
# AOF (append-only file) — much better durability
appendonly yes
appendfsync everysec # fsync every second (balanced)
# appendfsync always # fsync every write (slowest, safest)
# Both — use RDB for backups, AOF for durability
Clustering
Bash
# Redis Cluster — automatic sharding across 6 nodes (3 primary + 3 replica)
redis-cli --cluster create \
127.0.0.1:7000 127.0.0.1:7001 \
127.0.0.1:7002 127.0.0.1:7003 \
127.0.0.1:7004 127.0.0.1:7005 \
--cluster-replicas 1
16,384 hash slots are distributed across the primary nodes. Each key is hashed to a slot.
Limitation: Multi-key operations (MGET, transactions) must target keys on the same slot — use hash tags: {user:99}:cart and {user:99}:session share the same slot.
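A quick C# illustration: a multi-key read (MGET under the hood) works in cluster mode only because the hash tag — the part in braces — forces both keys into the same slot:
C#
// Both keys hash on "user:99", so the cluster places them in one slot
RedisValue[] values = await _redis.StringGetAsync(
    new RedisKey[] { "{user:99}:cart", "{user:99}:session" });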
Cloud Managed Redis
Azure Cache for Redis
Bash
# Create with Azure CLI
az redis create \
--resource-group myRG \
--name myapp-redis \
--location eastus \
--sku Premium \
--vm-size P2
# --enable-non-ssl-port is a bare flag; omit it to keep plain port 6379 disabled (TLS-only on 6380)
# Enable Redis Cluster on Premium tier
az redis update --name myapp-redis --resource-group myRG \
--shard-count 3
Tiers:
- Basic — single node, no SLA — dev only
- Standard — primary + replica, 99.9% SLA
- Premium — clustering, persistence, VNet injection, geo-replication, 99.9% SLA
- Enterprise — Redis Stack modules (Search, JSON, TimeSeries), 99.99% SLA
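Connecting from C# looks the same as local, just over TLS on port 6380 with the cache's access key as the password (the hostname below follows from the --name above; the key placeholder is yours to fill in):
C#
var muxer = await ConnectionMultiplexer.ConnectAsync(
    "myapp-redis.redis.cache.windows.net:6380," +
    "password=<access-key>,ssl=True,abortConnect=False");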
AWS ElastiCache for Redis
Bash
aws elasticache create-replication-group \
--replication-group-id myapp-redis \
--replication-group-description "Production Redis" \
--engine redis \
--engine-version 7.1 \
--cache-node-type cache.r7g.large \
--num-cache-clusters 3 \
--automatic-failover-enabled \
--multi-az-enabled \
--at-rest-encryption-enabled \
--transit-encryption-enabled
Options: Serverless (auto-scale, pay per GB-hour), Cluster Mode Enabled (sharding), Global Datastore (cross-region replication).
GCP Memorystore for Redis
Bash
gcloud redis instances create myapp-redis \
--size=5 \
--region=us-central1 \
--redis-version=redis_7_0 \
--tier=STANDARD_HA \
--replica-count=1
Key Takeaways
- Redis is a multi-tool — sessions, caches, queues, leaderboards, locks, and pub/sub are all natural fits.
- Sorted sets are a standout — few SQL or document databases can maintain continuously ranked data this cheaply.
- Always set a TTL on cached data — memory is finite, unbounded growth will crash your server.
- Use Lua scripts for operations that must be atomic across multiple keys.
- Redis Streams are a lightweight alternative to Kafka for ordered event logs with consumer groups.
- For production, use the Premium tier on Azure and a Multi-AZ replication group (or cluster) on AWS — Azure's Basic and Standard tiers don't support persistence, so a node failure can lose data.