Redis Complete Guide: Caching, Pub/Sub, Streams, and Production Patterns

13 min read · by DevToolBox

TL;DR: Redis is an in-memory data structure store used as a database, cache, message broker, and streaming engine. It supports strings, lists, sets, sorted sets, hashes, and streams. For caching, use TTL-based expiration with cache-aside or write-through patterns. Redis Cluster provides horizontal scaling and automatic failover. Use ioredis for Node.js and redis-py for Python. Secure with ACLs, TLS, and network isolation. Monitor with INFO, SLOWLOG, and Prometheus exporters.
Key Takeaways
  • Redis supports 6 core data structures: strings, lists, sets, sorted sets, hashes, and streams
  • Use cache-aside pattern with TTL for caching; choose eviction policy based on your workload
  • Pub/Sub for real-time broadcast, Streams for reliable event processing with consumer groups
  • Use Lua scripts for atomic multi-step operations to avoid race conditions
  • Redis Cluster for auto-sharding and failover; Sentinel for standalone Redis HA
  • Pipeline commands for 10-100x throughput improvement over individual commands
  • Secure production with ACLs, TLS, network isolation; never expose Redis to the internet
  • Monitor with INFO, SLOWLOG, and Prometheus Redis Exporter for alerting

1. Redis Data Structures

Redis is more than a simple key-value store. It is a data structure server that supports multiple rich data types, each with a dedicated set of commands. Understanding these data structures is the foundation for using Redis effectively.

Strings

Strings are the most basic Redis data type. They can hold text, integers, or binary data (up to 512MB). Strings support atomic INCR/DECR operations, making them ideal for counters and distributed locks.

# String operations
SET user:1001:name "Alice"
GET user:1001:name                    # "Alice"

# Atomic increment/decrement
SET page:views 0
INCR page:views                       # 1
INCRBY page:views 10                  # 11

# Set with TTL (seconds)
SET session:abc123 "user_data" EX 3600
TTL session:abc123                    # 3600

# Set only if not exists (distributed lock)
SET lock:order:5001 "worker-1" NX EX 30

# Multiple operations
MSET user:1:name "Alice" user:1:email "alice@example.com"
MGET user:1:name user:1:email

Lists

Lists are ordered collections of strings backed by quicklists. They support push/pop from both ends, making them ideal for message queues, recent activity feeds, and timelines.

# List operations — task queue
LPUSH queue:emails "email_1" "email_2" "email_3"
RPOP queue:emails                     # "email_1" (FIFO)
LLEN queue:emails                     # 2

# Blocking pop (wait up to 30s)
BRPOP queue:emails 30

# Recent activity feed (keep last 100)
LPUSH feed:user:1001 "liked post #42"
LTRIM feed:user:1001 0 99
LRANGE feed:user:1001 0 9             # Last 10 items

Sets

Sets are unordered collections of unique strings. They support intersection, union, and difference operations, ideal for tagging, unique visitor counting, and social relationships.

# Set operations — tagging
SADD article:1001:tags "redis" "database" "nosql"
SADD article:1002:tags "redis" "caching" "performance"

# Intersection — articles sharing tags
SINTER article:1001:tags article:1002:tags  # ["redis"]

# Union — all tags
SUNION article:1001:tags article:1002:tags
# ["redis", "database", "nosql", "caching", "performance"]

# Membership check
SISMEMBER article:1001:tags "redis"   # 1 (true)
SCARD article:1001:tags               # 3 (count)

Sorted Sets

Sorted sets are like sets but each member has an associated score. Members are ordered by score, making them perfect for leaderboards, priority queues, and time-series indexing.

# Sorted set — game leaderboard
ZADD leaderboard 1500 "player:alice"
ZADD leaderboard 2200 "player:bob"
ZADD leaderboard 1800 "player:charlie"
ZADD leaderboard 3100 "player:diana"

# Top 3 players (highest scores)
ZREVRANGE leaderboard 0 2 WITHSCORES
# ["player:diana", "3100", "player:bob", "2200", "player:charlie", "1800"]

# Rank of a player (0-indexed, descending)
ZREVRANK leaderboard "player:bob"     # 1

# Increment score
ZINCRBY leaderboard 500 "player:alice"  # 2000

# Range by score
ZRANGEBYSCORE leaderboard 1500 2500 WITHSCORES

Hashes

Hashes are collections of field-value pairs, like objects or dictionaries. They are ideal for storing object data (user profiles, configurations) and are more efficient than serializing JSON into strings.

# Hash — user profile
HSET user:1001 name "Alice" email "alice@example.com" age 28 role "admin"
HGET user:1001 name                   # "Alice"
HGETALL user:1001
# {name: "Alice", email: "alice@example.com", age: "28", role: "admin"}

# Update specific fields
HSET user:1001 age 29 last_login "2026-02-28"

# Increment numeric field
HINCRBY user:1001 login_count 1

# Check field existence
HEXISTS user:1001 email               # 1 (true)
HDEL user:1001 role

Streams

Redis Streams, introduced in Redis 5.0, are an append-only log data structure with consumer groups, acknowledgments, and persistence. They serve as a lightweight alternative to Apache Kafka.

# Stream — event log
XADD events * type "order" user_id "1001" amount "59.99"
XADD events * type "payment" user_id "1001" status "completed"

# Read last 10 events
XREVRANGE events + - COUNT 10

# Create consumer group
XGROUP CREATE events order-processors $ MKSTREAM

# Consumer reads (blocks up to 5000ms)
XREADGROUP GROUP order-processors worker-1 COUNT 1 BLOCK 5000 STREAMS events >

# Acknowledge processed message
XACK events order-processors 1677000000000-0

# Check pending messages
XPENDING events order-processors - + 10
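Stream entry IDs like `1677000000000-0` above are two numbers joined by a dash: a millisecond timestamp and a per-millisecond sequence counter. If you track the last-processed ID yourself, compare both parts numerically, never as strings. A minimal sketch (the helper names are ours, not part of any client library):

```typescript
// Parse a Redis stream entry ID ("<ms>-<seq>") into its two numeric parts.
// BigInt avoids precision loss for large millisecond timestamps.
function parseStreamId(id: string): { ms: bigint; seq: bigint } {
  const [ms, seq] = id.split("-");
  return { ms: BigInt(ms), seq: BigInt(seq) };
}

// Compare two stream IDs: negative if a < b, 0 if equal, positive if a > b.
// Timestamp is compared first, then the sequence number breaks ties.
function compareStreamIds(a: string, b: string): number {
  const pa = parseStreamId(a);
  const pb = parseStreamId(b);
  if (pa.ms !== pb.ms) return pa.ms < pb.ms ? -1 : 1;
  if (pa.seq !== pb.seq) return pa.seq < pb.seq ? -1 : 1;
  return 0;
}

console.log(compareStreamIds("1677000000000-1", "1677000000000-0")); // 1
```

A naive string comparison would order "9-0" after "10-0", which is why the numeric split matters.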

Data Structure Comparison

Type        Best For                                      Time Complexity  Max Size
String      Caching, counters, distributed locks          O(1)             512 MB
List        Queues, activity feeds, recent items          O(1) push/pop    4B+ elements
Set         Tags, unique values, relationships            O(1) add/check   4B+ members
Sorted Set  Leaderboards, range queries, priority queues  O(log N)         4B+ members
Hash        Object storage, user profiles, config         O(1) per field   4B+ fields
Stream      Event logs, message queues, audit trails      O(1) append      Memory limited

2. Redis as Cache

Caching is one of the most common use cases for Redis. A proper caching strategy can reduce database load by 80% or more and cut response times from hundreds of milliseconds to sub-millisecond.

TTL Strategies & Eviction Policies

# TTL strategies
SET product:1001 '{"name":"Widget","price":29.99}' EX 3600    # 1 hour
SET user:session:abc '{"uid":1001}' PX 1800000               # 30 min (ms)

# Check remaining TTL
TTL product:1001                      # seconds remaining
PTTL user:session:abc                 # milliseconds remaining

# Refresh TTL on access (sliding expiration)
GET product:1001
EXPIRE product:1001 3600

# Set TTL only if key exists
EXPIRE product:9999 3600              # 0 (key does not exist)

# Remove TTL (make persistent)
PERSIST product:1001

# --- Eviction policies (redis.conf) ---
# maxmemory 4gb
# maxmemory-policy allkeys-lru        # Best for general caching
# maxmemory-policy allkeys-lfu        # Best for skewed workloads
# maxmemory-policy volatile-lru       # Only evict keys with TTL
# maxmemory-policy volatile-ttl       # Evict soonest-expiring first
# maxmemory-policy noeviction         # Return errors when full
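One caveat with fixed TTLs: keys written together expire together, so the database absorbs a burst of simultaneous misses (a cache stampede). A common mitigation is to randomize each TTL slightly; a minimal sketch, with `jitteredTtl` being an illustrative helper of our own:

```typescript
// Add ± spread (default 10%) of random jitter to a base TTL in seconds,
// so cache entries written at the same moment do not all expire at once.
function jitteredTtl(baseSeconds: number, spread = 0.1): number {
  const delta = baseSeconds * spread;
  // Uniform in [base - delta, base + delta], rounded to whole seconds
  return Math.round(baseSeconds - delta + Math.random() * 2 * delta);
}

// Usage with ioredis, e.g.:
//   await redis.set(cacheKey, payload, "EX", jitteredTtl(3600));
```

With a 3600s base and 10% spread, expirations scatter across roughly 3240-3960 seconds instead of landing on the same tick.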

Cache-Aside vs Write-Through Patterns

Cache-Aside is the most common pattern: the application checks the cache first; on a miss, it reads from the database and populates the cache. Write-Through updates the cache synchronously on every database write, ensuring consistency at the cost of write latency.

// Cache-Aside pattern (Node.js with ioredis)
async function getUser(userId: string) {
  const cacheKey = `user:${userId}`;

  // 1. Check cache
  const cached = await redis.get(cacheKey);
  if (cached) {
    return JSON.parse(cached);  // Cache hit
  }

  // 2. Cache miss — fetch from DB
  const user = await db.query("SELECT * FROM users WHERE id = $1", [userId]);

  // 3. Populate cache with TTL
  await redis.set(cacheKey, JSON.stringify(user), "EX", 3600);

  return user;
}

// Write-Through pattern
async function updateUser(userId: string, data: Partial<User>) {
  // 1. Update database
  const user = await db.query(
    "UPDATE users SET name=$1, email=$2 WHERE id=$3 RETURNING *",
    [data.name, data.email, userId]
  );

  // 2. Update cache synchronously
  const cacheKey = `user:${userId}`;
  await redis.set(cacheKey, JSON.stringify(user), "EX", 3600);

  return user;
}

// Cache invalidation on delete
async function deleteUser(userId: string) {
  await db.query("DELETE FROM users WHERE id = $1", [userId]);
  await redis.del(`user:${userId}`);
}

3. Redis Pub/Sub & Streams

Redis provides two messaging mechanisms: Pub/Sub for real-time fire-and-forget broadcast, and Streams for persistent, reliable message processing.

Pub/Sub Real-Time Messaging

// Publisher (Node.js)
import Redis from "ioredis";
const publisher = new Redis();

async function publishEvent(channel: string, event: object) {
  await publisher.publish(channel, JSON.stringify(event));
}

// Publish order events
await publishEvent("orders", {
  type: "order.created",
  orderId: "ORD-5001",
  userId: "1001",
  total: 99.99,
  timestamp: Date.now(),
});

// Subscriber
const subscriber = new Redis();

subscriber.subscribe("orders", "payments", (err, count) => {
  console.log(`Subscribed to ${count} channels`);
});

subscriber.on("message", (channel, message) => {
  const event = JSON.parse(message);
  console.log(`[${channel}] ${event.type}:`, event);
});

// Pattern subscription (wildcard)
subscriber.psubscribe("orders.*", (err, count) => {
  console.log(`Pattern subscribed to ${count} patterns`);
});

subscriber.on("pmessage", (pattern, channel, message) => {
  console.log(`[${pattern}] ${channel}:`, message);
});

Streams with Consumer Groups

// Redis Streams with consumer groups (Node.js)
import Redis from "ioredis";
const redis = new Redis();

// Producer: add events to stream
async function addOrderEvent(order: { id: string; userId: string; total: number }) {
  const id = await redis.xadd(
    "stream:orders",
    "*",                          // Auto-generate ID
    "orderId", order.id,
    "userId", order.userId,
    "total", String(order.total),
    "timestamp", String(Date.now())
  );
  return id;
}

// Create consumer group (run once)
await redis.xgroup("CREATE", "stream:orders", "order-service", "$", "MKSTREAM")
  .catch(() => {}); // Ignore if group already exists

// Consumer: read and process
async function consumeOrders(consumerName: string) {
  while (true) {
    const results = await redis.xreadgroup(
      "GROUP", "order-service", consumerName,
      "COUNT", "10",
      "BLOCK", "5000",            // Block 5s if no messages
      "STREAMS", "stream:orders", ">"
    );

    if (results) {
      for (const [stream, messages] of results) {
        for (const [id, fields] of messages) {
          // Process the order
          console.log(`Processing order ${fields[1]} for user ${fields[3]}`);

          // Acknowledge after successful processing
          await redis.xack("stream:orders", "order-service", id);
        }
      }
    }
  }
}

// Start consumers
consumeOrders("worker-1");
consumeOrders("worker-2");

4. Redis Transactions & Lua Scripting

Redis transactions (MULTI/EXEC) guarantee atomic execution of a group of commands. For scenarios requiring conditional logic, Lua scripts are more powerful: they execute atomically on the server with no race conditions.

MULTI/EXEC Transactions

# Basic transaction
MULTI
SET account:1001:balance 500
SET account:1002:balance 300
EXEC
# Both commands execute atomically

# Optimistic locking with WATCH
WATCH account:1001:balance
# Read current balance
GET account:1001:balance              # "500"
MULTI
DECRBY account:1001:balance 100
INCRBY account:1002:balance 100
EXEC
# EXEC returns nil if account:1001:balance changed between WATCH and EXEC
# Application must retry in that case

Lua Scripting (Atomic Operations)

-- Lua: atomic transfer between accounts
-- KEYS[1] = source account, KEYS[2] = destination account
-- ARGV[1] = transfer amount

local source_balance = tonumber(redis.call("GET", KEYS[1])) or 0  -- treat missing key as 0
local amount = tonumber(ARGV[1])

if source_balance >= amount then
  redis.call("DECRBY", KEYS[1], amount)
  redis.call("INCRBY", KEYS[2], amount)
  return 1  -- success
else
  return 0  -- insufficient funds
end

// Execute Lua script from Node.js (ioredis)
const transferScript = `
local source_balance = tonumber(redis.call("GET", KEYS[1])) or 0
local amount = tonumber(ARGV[1])
if source_balance >= amount then
  redis.call("DECRBY", KEYS[1], amount)
  redis.call("INCRBY", KEYS[2], amount)
  return 1
else
  return 0
end
`;

// Define custom command
redis.defineCommand("transfer", {
  numberOfKeys: 2,
  lua: transferScript,
});

// Use it
const result = await (redis as any).transfer(
  "account:1001:balance",
  "account:1002:balance",
  "100"
);
console.log(result === 1 ? "Transfer successful" : "Insufficient funds");

5. Redis Cluster & Sentinel

Production Redis requires high availability and scalability. Redis Sentinel provides automatic failover, while Redis Cluster provides data sharding and distributed processing.

Redis Sentinel Configuration

# sentinel.conf
port 26379
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1
sentinel auth-pass mymaster your_strong_password

# Start sentinel
redis-sentinel /etc/redis/sentinel.conf

# Check sentinel status
redis-cli -p 26379 SENTINEL masters
redis-cli -p 26379 SENTINEL replicas mymaster
redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster

Redis Cluster Deployment

# Create a 6-node cluster (3 masters + 3 replicas)
# Start 6 Redis instances on ports 7000-7005
for port in 7000 7001 7002 7003 7004 7005; do
  mkdir -p /etc/redis/cluster/$port
  cat > /etc/redis/cluster/$port/redis.conf << EOF
port $port
cluster-enabled yes
cluster-config-file nodes-$port.conf
cluster-node-timeout 5000
appendonly yes
appendfilename "appendonly-$port.aof"
requirepass your_password
masterauth your_password
EOF
  redis-server /etc/redis/cluster/$port/redis.conf &
done

# Create the cluster
redis-cli --cluster create \
  127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 \
  127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 \
  --cluster-replicas 1 -a your_password

# Check cluster info
redis-cli -p 7000 -a your_password CLUSTER INFO
redis-cli -p 7000 -a your_password CLUSTER NODES

# Add a new node
redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000 -a your_password

# Reshard slots to the new node
redis-cli --cluster reshard 127.0.0.1:7000 -a your_password
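Under the hood, Redis Cluster assigns every key to one of 16384 hash slots: slot = CRC16(key) mod 16384, using the XMODEM/CCITT CRC16 variant. A `{hash tag}` restricts hashing to the tagged substring, forcing related keys onto the same node (required for multi-key commands in a cluster). A self-contained sketch of the slot computation:

```typescript
// CRC16-CCITT (XMODEM), the checksum Redis Cluster uses for key slotting.
function crc16(data: Buffer): number {
  let crc = 0;
  for (const byte of data) {
    crc ^= byte << 8;
    for (let i = 0; i < 8; i++) {
      crc = crc & 0x8000 ? ((crc << 1) ^ 0x1021) & 0xffff : (crc << 1) & 0xffff;
    }
  }
  return crc;
}

// Map a key to its cluster slot, honoring {hash tag} syntax: only the
// first non-empty {...} section is hashed, so user:{1001}:profile and
// user:{1001}:settings land in the same slot.
function clusterSlot(key: string): number {
  const open = key.indexOf("{");
  if (open !== -1) {
    const close = key.indexOf("}", open + 1);
    if (close > open + 1) key = key.slice(open + 1, close);
  }
  return crc16(Buffer.from(key)) % 16384;
}

console.log(clusterSlot("123456789")); // 12739 (reference value from the cluster spec)
```

This is why cluster-aware clients route each command themselves, and why multi-key operations on a cluster only work when all keys share a hash tag.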

Connecting to Redis Cluster (Node.js)

import Redis from "ioredis";

// Connect to Redis Cluster
const cluster = new Redis.Cluster(
  [
    { host: "127.0.0.1", port: 7000 },
    { host: "127.0.0.1", port: 7001 },
    { host: "127.0.0.1", port: 7002 },
  ],
  {
    redisOptions: {
      password: "your_password",
    },
    scaleReads: "slave",              // Read from replicas
    natMap: {},                        // NAT mapping if needed
  }
);

// Connect to Sentinel
const sentinel = new Redis({
  sentinels: [
    { host: "127.0.0.1", port: 26379 },
    { host: "127.0.0.1", port: 26380 },
    { host: "127.0.0.1", port: 26381 },
  ],
  name: "mymaster",
  password: "your_password",
  sentinelPassword: "sentinel_password",
});

6. Redis with Node.js

ioredis is the recommended Redis client for Node.js. It supports Cluster, Sentinel, pipelining, Lua scripting, and Streams with a Promise-based API and automatic reconnection.

Connection & Pipelining

import Redis from "ioredis";

// Basic connection with options
const redis = new Redis({
  host: "127.0.0.1",
  port: 6379,
  password: "your_password",
  db: 0,
  maxRetriesPerRequest: 3,
  retryStrategy(times) {
    const delay = Math.min(times * 50, 2000);
    return delay;
  },
  lazyConnect: true,                  // Connect on first command
});

await redis.connect();

// --- Pipelining: batch commands (10-100x throughput) ---
const pipeline = redis.pipeline();
for (let i = 0; i < 1000; i++) {
  pipeline.set(`key:${i}`, `value:${i}`, "EX", 3600);
}
const results = await pipeline.exec();
// results: [[null, "OK"], [null, "OK"], ...]

// Pipeline with mixed read/write
const pipe = redis.pipeline();
pipe.hgetall("user:1001");
pipe.lrange("feed:1001", 0, 9);
pipe.zrevrange("leaderboard", 0, 4, "WITHSCORES");
pipe.get("config:feature_flags");
const replies = await pipe.exec();  // Each entry is an [error, result] pair
const [[, user], [, feed], [, topPlayers], [, flags]] = replies!;

// --- Connection pool pattern ---
class RedisPool {
  private pool: Redis[] = [];
  private index = 0;

  constructor(private size: number, private options: object) {
    for (let i = 0; i < size; i++) {
      this.pool.push(new Redis(options));
    }
  }

  getClient(): Redis {
    const client = this.pool[this.index % this.size];
    this.index++;
    return client;
  }

  async disconnectAll(): Promise<void> {
    await Promise.all(this.pool.map((c) => c.quit()));
  }
}

7. Redis with Python

redis-py is the standard Redis client for Python. Since version 4.2, it includes built-in async support. It supports connection pooling, pipelines, Pub/Sub, and Cluster.

import redis
import json
from datetime import timedelta

# Connection pool (recommended for production)
pool = redis.ConnectionPool(
    host="127.0.0.1",
    port=6379,
    password="your_password",
    db=0,
    max_connections=20,
    decode_responses=True,           # Auto-decode bytes to str
)
r = redis.Redis(connection_pool=pool)

# Basic operations
r.set("user:1001", json.dumps({"name": "Alice", "email": "alice@example.com"}))
r.expire("user:1001", timedelta(hours=1))
user = json.loads(r.get("user:1001"))

# Pipeline (batch commands)
with r.pipeline() as pipe:
    pipe.hset("product:1", mapping={"name": "Widget", "price": "29.99", "stock": "150"})
    pipe.hset("product:2", mapping={"name": "Gadget", "price": "49.99", "stock": "75"})
    pipe.expire("product:1", 3600)
    pipe.expire("product:2", 3600)
    results = pipe.execute()

# Pub/Sub
pubsub = r.pubsub()
pubsub.subscribe("notifications")

for message in pubsub.listen():
    if message["type"] == "message":
        data = json.loads(message["data"])
        print(f"Received: {data}")

# --- Async redis (built-in since 4.2) ---
import redis.asyncio as aioredis

async def async_example():
    r = aioredis.Redis(
        host="127.0.0.1",
        port=6379,
        password="your_password",
        decode_responses=True,
    )

    await r.set("async_key", "async_value", ex=3600)
    value = await r.get("async_key")
    print(f"Async value: {value}")

    # Async pipeline
    async with r.pipeline() as pipe:
        await pipe.set("k1", "v1").set("k2", "v2").execute()

    await r.aclose()

8. Rate Limiting with Redis

Redis atomic operations and expiration make it ideal for distributed rate limiting. Here are three common rate limiting algorithm implementations.

Sliding Window Rate Limiter

// Sliding Window with Sorted Set (Node.js)
async function slidingWindowRateLimit(
  redis: Redis,
  key: string,
  limit: number,
  windowSec: number
): Promise<{ allowed: boolean; remaining: number; retryAfter: number }> {
  const now = Date.now();
  const windowStart = now - windowSec * 1000;

  const pipe = redis.pipeline();
  pipe.zremrangebyscore(key, 0, windowStart);   // Remove expired
  pipe.zadd(key, String(now), `${now}:${Math.random()}`);  // Unique member per request
  pipe.zcard(key);                               // Count in window
  pipe.expire(key, windowSec);                   // Auto-cleanup

  const results = await pipe.exec();
  const count = results![2][1] as number;

  if (count > limit) {
    // Get oldest entry to calculate retry-after
    const oldest = await redis.zrange(key, 0, 0, "WITHSCORES");
    const retryAfter = oldest.length > 1
      ? Math.ceil((Number(oldest[1]) + windowSec * 1000 - now) / 1000)
      : windowSec;

    return { allowed: false, remaining: 0, retryAfter };
  }

  return { allowed: true, remaining: limit - count, retryAfter: 0 };
}

// Usage: 100 requests per 60 seconds
const result = await slidingWindowRateLimit(redis, "rate:user:1001", 100, 60);
if (!result.allowed) {
  res.status(429).json({
    error: "Rate limit exceeded",
    retryAfter: result.retryAfter,
  });
}
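The windowing arithmetic can be sanity-checked without a Redis server by modeling the sorted set as a plain array of timestamps (in-memory only, so not distributed; purely illustrative):

```typescript
// In-memory model of the sorted-set sliding window: keep one timestamp
// per request, drop those outside the window, count the rest.
class SlidingWindowCounter {
  private hits: number[] = [];

  constructor(private limit: number, private windowMs: number) {}

  // `now` is a millisecond timestamp (passed in to keep the logic testable).
  allow(now: number): boolean {
    const windowStart = now - this.windowMs;
    this.hits = this.hits.filter((t) => t > windowStart); // expire old hits
    if (this.hits.length >= this.limit) return false;
    this.hits.push(now);
    return true;
  }
}

const limiter = new SlidingWindowCounter(3, 1000); // 3 requests per second
console.log(limiter.allow(0));    // true
console.log(limiter.allow(100));  // true
console.log(limiter.allow(200));  // true
console.log(limiter.allow(300));  // false (4th request inside the window)
console.log(limiter.allow(1500)); // true  (window has slid past 0-200)
```

The Redis version above does the same expire-then-count dance, but atomically and shared across all application instances.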

Token Bucket Algorithm (Lua Script)

-- Token Bucket Lua Script
-- KEYS[1] = bucket key
-- ARGV[1] = max tokens, ARGV[2] = refill rate (tokens/sec)
-- ARGV[3] = current timestamp (ms), ARGV[4] = tokens to consume

local key = KEYS[1]
local max_tokens = tonumber(ARGV[1])
local refill_rate = tonumber(ARGV[2])
local now = tonumber(ARGV[3])
local requested = tonumber(ARGV[4])

-- Get current state
local data = redis.call("HMGET", key, "tokens", "last_refill")
local tokens = tonumber(data[1]) or max_tokens
local last_refill = tonumber(data[2]) or now

-- Calculate refill
local elapsed = (now - last_refill) / 1000
local new_tokens = math.min(max_tokens, tokens + elapsed * refill_rate)

-- Check if enough tokens
if new_tokens >= requested then
  new_tokens = new_tokens - requested
  redis.call("HMSET", key, "tokens", new_tokens, "last_refill", now)
  redis.call("EXPIRE", key, math.ceil(max_tokens / refill_rate) * 2)
  return {1, math.floor(new_tokens)}  -- allowed, remaining
else
  redis.call("HMSET", key, "tokens", new_tokens, "last_refill", now)
  redis.call("EXPIRE", key, math.ceil(max_tokens / refill_rate) * 2)
  local wait = math.ceil((requested - new_tokens) / refill_rate)
  return {0, wait}  -- denied, retry_after_seconds
end
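The refill arithmetic in the script is easy to verify locally with an in-memory mirror (illustrative only; the real limiter must live in Redis so all application instances share one bucket):

```typescript
// In-memory mirror of the token-bucket refill math from the Lua script.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(private maxTokens: number, private refillRate: number, now: number) {
    this.tokens = maxTokens; // bucket starts full
    this.lastRefill = now;
  }

  // Try to consume `requested` tokens at time `now` (ms). Tokens refill
  // continuously at refillRate per second, capped at maxTokens.
  take(requested: number, now: number): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.maxTokens, this.tokens + elapsedSec * this.refillRate);
    this.lastRefill = now;
    if (this.tokens >= requested) {
      this.tokens -= requested;
      return true;
    }
    return false;
  }
}

const bucket = new TokenBucket(10, 2, 0); // 10 tokens max, refills 2/sec
console.log(bucket.take(10, 0));  // true  (bucket starts full)
console.log(bucket.take(1, 0));   // false (empty, no time has passed)
console.log(bucket.take(1, 500)); // true  (0.5s * 2/sec = 1 token back)
```

Unlike the sliding window, a token bucket allows short bursts up to the bucket size while still enforcing the average rate.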

Leaky Bucket Algorithm

# Leaky Bucket with Redis (Python)
import time
import redis

class LeakyBucket:
    def __init__(self, r: redis.Redis, key: str, capacity: int, leak_rate: float):
        """
        capacity: max requests in the bucket
        leak_rate: requests processed per second
        """
        self.r = r
        self.key = key
        self.capacity = capacity
        self.leak_rate = leak_rate

    def allow(self) -> bool:
        now = time.time()
        pipe = self.r.pipeline()

        # Remove leaked (processed) requests
        pipe.zremrangebyscore(self.key, 0, now - self.capacity / self.leak_rate)
        pipe.zcard(self.key)  # Current queue size
        _, queue_size = pipe.execute()

        if queue_size < self.capacity:
            # Add request to queue
            self.r.zadd(self.key, {f"{now}:{id(self)}": now})
            self.r.expire(self.key, int(self.capacity / self.leak_rate) + 10)
            return True

        return False  # Bucket is full

# Usage: 10 requests max, processes 2/sec
bucket = LeakyBucket(r, "leaky:api:user:1001", capacity=10, leak_rate=2.0)
if bucket.allow():
    print("Request accepted")
else:
    print("Rate limited — bucket full")

9. Session Management

Redis is ideal for distributed session storage. It supports fast reads/writes, automatic expiration, and session sharing across servers.

Distributed Sessions (Express.js)

import express from "express";
import session from "express-session";
import RedisStore from "connect-redis";
import Redis from "ioredis";

const redis = new Redis({
  host: "127.0.0.1",
  port: 6379,
  password: "your_password",
});

const app = express();

app.use(
  session({
    store: new RedisStore({ client: redis, prefix: "sess:" }),
    secret: "your-session-secret",
    resave: false,
    saveUninitialized: false,
    cookie: {
      secure: true,                   // HTTPS only
      httpOnly: true,                 // No JS access
      maxAge: 24 * 60 * 60 * 1000,   // 24 hours
      sameSite: "lax",
    },
  })
);

// Session is automatically stored in Redis
app.post("/login", async (req, res) => {
  const { username, password } = req.body;
  const user = await authenticateUser(username, password);

  if (user) {
    req.session.userId = user.id;
    req.session.role = user.role;
    res.json({ message: "Logged in" });
  } else {
    res.status(401).json({ error: "Invalid credentials" });
  }
});

// Logout — destroy session in Redis
app.post("/logout", (req, res) => {
  req.session.destroy((err) => {
    if (err) return res.status(500).json({ error: "Logout failed" });
    res.clearCookie("connect.sid");
    res.json({ message: "Logged out" });
  });
});

JWT + Redis (Token Blacklist)

import jwt from "jsonwebtoken";
import crypto from "node:crypto";
import Redis from "ioredis";

const redis = new Redis();
const JWT_SECRET = process.env.JWT_SECRET!;

// Issue JWT
function issueToken(userId: string, role: string): string {
  return jwt.sign(
    { sub: userId, role },
    JWT_SECRET,
    { expiresIn: "1h", jwtid: crypto.randomUUID() }  // jwtid sets the jti claim
  );
}

// Revoke JWT by adding to blacklist
async function revokeToken(token: string): Promise<void> {
  const decoded = jwt.decode(token) as jwt.JwtPayload;
  if (!decoded?.jti || !decoded?.exp) return;

  const ttl = decoded.exp - Math.floor(Date.now() / 1000);
  if (ttl > 0) {
    await redis.set(`blacklist:${decoded.jti}`, "1", "EX", ttl);
  }
}

// Verify JWT (check blacklist)
async function verifyToken(token: string): Promise<jwt.JwtPayload | null> {
  try {
    const decoded = jwt.verify(token, JWT_SECRET) as jwt.JwtPayload;

    // Check if token is blacklisted
    const isBlacklisted = await redis.exists(`blacklist:${decoded.jti}`);
    if (isBlacklisted) return null;

    return decoded;
  } catch {
    return null;
  }
}

// Middleware
async function authMiddleware(req: any, res: any, next: any) {
  const token = req.headers.authorization?.replace("Bearer ", "");
  if (!token) return res.status(401).json({ error: "No token" });

  const payload = await verifyToken(token);
  if (!payload) return res.status(401).json({ error: "Invalid token" });

  req.user = payload;
  next();
}

10. Redis Search & JSON

The RediSearch module adds full-text search, secondary indexing, and aggregation to Redis. RedisJSON allows native JSON document storage and manipulation. Together they enable powerful document search systems.

# --- RedisJSON ---
# Store JSON document
JSON.SET product:1001 $ '{"name":"Wireless Mouse","brand":"TechCo","price":29.99,"category":"electronics","tags":["wireless","ergonomic","bluetooth"],"specs":{"dpi":1600,"battery":"AA","weight":"85g"}}'

# Read nested fields
JSON.GET product:1001 $.name             # '["Wireless Mouse"]' (JSONPath returns an array)
JSON.GET product:1001 $.specs.dpi        # '[1600]'
JSON.GET product:1001 $.tags[0]          # '["wireless"]'

# Update nested fields
JSON.SET product:1001 $.price 24.99
JSON.NUMINCRBY product:1001 $.specs.dpi 400  # '[2000]'
JSON.ARRAPPEND product:1001 $.tags '"usb-c"'

# --- RediSearch ---
# Create search index on JSON documents
FT.CREATE idx:products ON JSON PREFIX 1 product: SCHEMA
  $.name AS name TEXT WEIGHT 5.0
  $.brand AS brand TEXT
  $.category AS category TAG
  $.price AS price NUMERIC SORTABLE
  $.tags[*] AS tags TAG

# Full-text search
FT.SEARCH idx:products "wireless mouse" LIMIT 0 10

# Filter by category and price range
FT.SEARCH idx:products "@category:{electronics} @price:[10 50]" SORTBY price ASC

# Autocomplete suggestions
FT.SUGADD autocomplete "Wireless Mouse" 100
FT.SUGADD autocomplete "Wireless Keyboard" 90
FT.SUGGET autocomplete "wire" FUZZY MAX 5

# Aggregation — average price per category
FT.AGGREGATE idx:products "*"
  GROUPBY 1 @category
  REDUCE AVG 1 @price AS avg_price
  REDUCE COUNT 0 AS total
  SORTBY 2 @avg_price DESC

Using RediSearch in Node.js

import { createClient, SchemaFieldTypes } from "redis";

const client = createClient({ url: "redis://localhost:6379" });
await client.connect();

// Create index
try {
  await client.ft.create("idx:products", {
    "$.name": { type: SchemaFieldTypes.TEXT, AS: "name", WEIGHT: 5 },
    "$.brand": { type: SchemaFieldTypes.TEXT, AS: "brand" },
    "$.category": { type: SchemaFieldTypes.TAG, AS: "category" },
    "$.price": { type: SchemaFieldTypes.NUMERIC, AS: "price", SORTABLE: true },
  }, { ON: "JSON", PREFIX: "product:" });
} catch (e) {
  // Index already exists
}

// Store products as JSON
await client.json.set("product:2001", "$", {
  name: "Mechanical Keyboard",
  brand: "KeyTech",
  category: "electronics",
  price: 89.99,
  tags: ["mechanical", "rgb", "cherry-mx"],
});

// Search
const results = await client.ft.search("idx:products", "@category:{electronics} @price:[50 100]", {
  SORTBY: { BY: "price", DIRECTION: "ASC" },
  LIMIT: { from: 0, size: 10 },
});

console.log(`Found ${results.total} products:`);
for (const doc of results.documents) {
  console.log(doc.id, doc.value);
}

11. Redis Performance Tuning

Optimizing Redis performance requires attention to memory management, persistence configuration, and command optimization. Here are the key tuning parameters for production.

Memory Optimization

# redis.conf — Memory optimization

# Set memory limit
maxmemory 4gb
maxmemory-policy allkeys-lfu

# Ziplist encoding for small hashes (saves 5-10x memory)
hash-max-ziplist-entries 128
hash-max-ziplist-value 64

# Ziplist encoding for small sorted sets
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# Listpack for small lists
list-max-ziplist-size -2      # 8kb per node
list-compress-depth 1         # Compress all but head/tail

# Intset for small integer sets
set-max-intset-entries 512

# Lazy freeing (non-blocking deletes)
lazyfree-lazy-eviction yes
lazyfree-lazy-expire yes
lazyfree-lazy-server-del yes
replica-lazy-flush yes

Persistence: RDB vs AOF

Feature            RDB                      AOF                        RDB + AOF
Mechanism          Point-in-time snapshots  Append-only write log      Both combined
Data loss risk     Up to snapshot interval  Up to 1 second             Up to 1 second
Recovery speed     Fast                     Slow (replay log)          Fast (loads RDB first)
Write performance  Brief pause on fork      Continuous small overhead  Both
Recommendation     Cache-only use           When durability needed     Recommended for production

# redis.conf — Persistence configuration

# RDB snapshots
save 900 1          # Snapshot if 1+ keys changed in 900s
save 300 10         # Snapshot if 10+ keys changed in 300s
save 60 10000       # Snapshot if 10000+ keys changed in 60s
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis

# AOF (recommended for production)
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec          # Fsync every second (best tradeoff)
# appendfsync always          # Fsync after every write (safest, slowest)
# appendfsync no              # Let OS decide (fastest, riskiest)

# AOF rewrite (compact the log)
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-use-rdb-preamble yes      # Hybrid format (fast load + AOF durability)

Benchmarking & Diagnostics

# Built-in benchmark tool
redis-benchmark -h 127.0.0.1 -p 6379 -c 50 -n 100000 -q

# Benchmark specific commands
redis-benchmark -t set,get,lpush,lpop,zadd -n 100000 -q

# Benchmark with pipeline
redis-benchmark -t set -n 1000000 -P 16 -q

# Latency diagnostics
redis-cli --latency                    # Continuous latency sampling
redis-cli --latency-history -i 5       # Latency over time (5s intervals)
redis-cli --latency-dist               # Latency histogram
redis-cli --intrinsic-latency 10       # System baseline latency (10s)

# Memory analysis
redis-cli INFO memory
redis-cli MEMORY DOCTOR
redis-cli MEMORY USAGE user:1001       # Bytes used by specific key

# Find big keys (scan without blocking)
redis-cli --bigkeys
redis-cli --memkeys

12. Redis Security

Redis is designed to run in trusted networks by default. Production deployments must configure authentication, network isolation, and access controls to protect data.

ACL (Access Control Lists)

# redis.conf — ACL configuration

# Default user (disable or set strong password)
requirepass your_very_strong_password_here

# Create users with specific permissions
# user <username> on|off [password] [commands] [keys]

# Read-only user for analytics
user analytics on >analytics_pass ~analytics:* +get +mget +hgetall +zrange +lrange -@all

# Application user with limited commands
user webapp on >webapp_secret_pass ~user:* ~session:* ~cache:* +@read +@write +@set +@hash -@admin -@dangerous

# Admin user (all permissions)
user admin on >admin_super_secret ~* +@all

# Disable default user (after creating named users)
user default off

# --- Runtime ACL management ---
ACL SETUSER readonly on >readonly_pass ~cache:* +get +mget
ACL LIST                                # List all users
ACL WHOAMI                              # Current user
ACL GETUSER webapp                      # Get user details
ACL DELUSER temp_user                   # Delete user
ACL LOG 10                              # Last 10 ACL violations
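Because ACL rules are applied left to right, it is worth generating `ACL SETUSER` commands programmatically so the reset-then-grant ordering is never wrong. A hypothetical helper (the command and rule syntax are real; the `buildAclSetUser` function and its spec shape are our own) that assembles the command as an argument array, the form most clients accept:

```javascript
// Hypothetical helper: build an ACL SETUSER command from a small spec.
// Emits rules in a safe order — revoke everything first, then key
// patterns, then explicit command grants (ACL rules apply left to right).
function buildAclSetUser({ name, password, keys = [], commands = [] }) {
  const args = ["ACL", "SETUSER", name, "on", `>${password}`, "-@all"];
  for (const pattern of keys) args.push(`~${pattern}`);
  for (const cmd of commands) args.push(cmd);
  return args;
}

buildAclSetUser({
  name: "analytics",
  password: "analytics_pass",
  keys: ["analytics:*"],
  commands: ["+get", "+mget", "+hgetall"],
});
// → ["ACL","SETUSER","analytics","on",">analytics_pass","-@all",
//    "~analytics:*","+get","+mget","+hgetall"]
```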

TLS Encryption & Network Security

# redis.conf — TLS configuration

# Enable TLS
tls-port 6380
port 0                                  # Disable non-TLS port

tls-cert-file /etc/redis/tls/redis.crt
tls-key-file /etc/redis/tls/redis.key
tls-ca-cert-file /etc/redis/tls/ca.crt

# Require client certificate (mutual TLS)
tls-auth-clients yes

# TLS for replication
tls-replication yes

# TLS for cluster bus
tls-cluster yes

# --- Network security ---

# Bind to specific interfaces only
bind 127.0.0.1 10.0.1.5              # Localhost + private network

# Enable protected mode
protected-mode yes

# Disable dangerous commands
rename-command FLUSHALL ""
rename-command FLUSHDB ""
rename-command DEBUG ""
rename-command CONFIG "REDIS_CONFIG_8fj3k"  # Rename to obscure name

# --- Firewall rules (iptables) ---
# Allow only app servers
# iptables -A INPUT -p tcp --dport 6379 -s 10.0.1.0/24 -j ACCEPT
# iptables -A INPUT -p tcp --dport 6379 -j DROP

TLS Connection (Node.js)

import Redis from "ioredis";
import fs from "fs";

const redis = new Redis({
  host: "redis.example.com",
  port: 6380,
  username: "webapp",
  password: "webapp_secret_pass",
  tls: {
    ca: fs.readFileSync("/path/to/ca.crt"),
    cert: fs.readFileSync("/path/to/client.crt"),
    key: fs.readFileSync("/path/to/client.key"),
    rejectUnauthorized: true,
  },
});

13. Redis Monitoring & Observability

Monitoring is essential for maintaining a healthy Redis deployment. Redis provides built-in INFO and SLOWLOG commands, and combined with Prometheus + Grafana, you can achieve comprehensive observability.

INFO Command & Key Metrics

# Essential monitoring commands
redis-cli INFO server                   # Version, uptime, OS info
redis-cli INFO memory                   # Memory usage details
redis-cli INFO stats                    # Ops/sec, hits, misses
redis-cli INFO replication              # Master/replica status
redis-cli INFO clients                  # Connected clients
redis-cli INFO keyspace                 # DB key counts

# Key metrics to monitor:
# used_memory / used_memory_rss         — Memory usage
# connected_clients                     — Active connections
# instantaneous_ops_per_sec             — Current throughput
# keyspace_hits / keyspace_misses       — Cache hit ratio
# evicted_keys                          — Keys removed by eviction
# rejected_connections                  — Max clients reached
# rdb_last_bgsave_status                — Last snapshot status

# Calculate hit ratio
# hit_ratio = keyspace_hits / (keyspace_hits + keyspace_misses) * 100
# Target: > 95% for caching workloads

# SLOWLOG — queries exceeding threshold
CONFIG SET slowlog-log-slower-than 10000   # Log queries > 10ms
CONFIG SET slowlog-max-len 128
SLOWLOG GET 10                              # Last 10 slow queries
SLOWLOG LEN                                 # Total slow query count
SLOWLOG RESET                               # Clear the log

# Real-time command monitoring (use briefly, high overhead)
redis-cli MONITOR                           # Prints all commands

# Client list (debug connections)
redis-cli CLIENT LIST
redis-cli CLIENT KILL ID 123               # Kill specific client
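The hit-ratio formula above needs a guard for the cold-start case where both counters are zero. A minimal sketch of the calculation as application code:

```javascript
// Sketch: cache hit ratio from the keyspace_hits / keyspace_misses
// counters reported by INFO stats. Returns null before any lookups,
// since the ratio is undefined when both counters are zero.
function hitRatio(hits, misses) {
  const total = hits + misses;
  if (total === 0) return null;
  return (hits / total) * 100;
}

hitRatio(950, 50);   // 95 — right at the target for caching workloads
hitRatio(0, 0);      // null
```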

Prometheus + Grafana Monitoring

# docker-compose.yml — Redis monitoring stack
version: "3.8"
services:
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    command: redis-server --requirepass your_password
    volumes:
      - redis_data:/data

  redis-exporter:
    image: oliver006/redis_exporter:latest
    ports:
      - "9121:9121"
    environment:
      REDIS_ADDR: redis://redis:6379
      REDIS_PASSWORD: your_password
    depends_on:
      - redis

  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      GF_SECURITY_ADMIN_PASSWORD: admin

volumes:
  redis_data:

# prometheus.yml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "redis"
    static_configs:
      - targets: ["redis-exporter:9121"]
    metrics_path: /metrics

Alerting Rules

# prometheus-alerts.yml
groups:
  - name: redis_alerts
    rules:
      - alert: RedisHighMemoryUsage
        expr: redis_memory_used_bytes / redis_memory_max_bytes > 0.85
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Redis memory usage above 85%"
          description: "Redis instance {{ $labels.instance }} memory at {{ $value | humanizePercentage }}"

      - alert: RedisHighLatency
        expr: rate(redis_commands_duration_seconds_total[2m]) / rate(redis_commands_processed_total[2m]) > 0.01
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "Redis average latency above 10ms"

      - alert: RedisLowHitRatio
        expr: |
          rate(redis_keyspace_hits_total[5m]) /
          (rate(redis_keyspace_hits_total[5m]) + rate(redis_keyspace_misses_total[5m])) < 0.9
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Redis cache hit ratio below 90%"

      - alert: RedisReplicationBroken
        expr: redis_connected_slaves < 1
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Redis has no connected replicas"

      - alert: RedisTooManyConnections
        expr: redis_connected_clients > 1000
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Redis has over 1000 connected clients"
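These metrics are monotonic counters, which is why alert expressions divide rates rather than raw totals: Prometheus's `rate()` computes the per-second increase between scrapes, and average latency falls out as a ratio of two rates. A sketch of that arithmetic over two samples (our own illustration with hypothetical values, not PromQL):

```javascript
// Sketch of what rate() does: per-second increase of a monotonic
// counter between two scrapes (assumes no counter reset in between).
function counterRate(prev, curr, seconds) {
  return (curr - prev) / seconds;
}

// Two scrapes 15s apart (hypothetical values):
const durRate = counterRate(120.0, 120.6, 15);  // seconds busy, per second
const cmdRate = counterRate(50000, 50300, 15);  // commands, per second
const avgLatency = durRate / cmdRate;           // seconds per command
// avgLatency ≈ 0.002 — 2ms average, below a 10ms alert threshold
```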

Custom Health Check Script

#!/bin/bash
# redis-health-check.sh — Quick Redis health report

REDIS_CLI="redis-cli -a your_password --no-auth-warning"

echo "=== Redis Health Report ==="
echo ""

# Uptime
UPTIME=$($REDIS_CLI INFO server | grep uptime_in_days | tr -d "\r")
echo "Uptime: $UPTIME"

# Memory
USED_MEM=$($REDIS_CLI INFO memory | grep used_memory_human | tr -d "\r")
MAX_MEM=$($REDIS_CLI CONFIG GET maxmemory | tail -1)
echo "Memory: $USED_MEM (max: $MAX_MEM bytes)"

# Hit ratio
HITS=$($REDIS_CLI INFO stats | grep keyspace_hits | cut -d: -f2 | tr -d "\r")
MISSES=$($REDIS_CLI INFO stats | grep keyspace_misses | cut -d: -f2 | tr -d "\r")
if [ "$HITS" -gt 0 ] 2>/dev/null; then
  RATIO=$(echo "scale=2; $HITS * 100 / ($HITS + $MISSES)" | bc)
  echo "Hit Ratio: $RATIO%"
fi

# Connected clients
CLIENTS=$($REDIS_CLI INFO clients | grep connected_clients | tr -d "\r")
echo "Clients: $CLIENTS"

# Ops per second
OPS=$($REDIS_CLI INFO stats | grep instantaneous_ops_per_sec | tr -d "\r")
echo "Throughput: $OPS"

# Slow queries
SLOW=$($REDIS_CLI SLOWLOG LEN)
echo "Slow queries: $SLOW"

# Evicted keys
EVICTED=$($REDIS_CLI INFO stats | grep evicted_keys | tr -d "\r")
echo "Evicted: $EVICTED"

echo ""
echo "=== End Report ==="

Summary

Redis is an indispensable component in modern application architectures. From simple caching to complex real-time data processing, Redis's rich data structures and high performance make it a go-to choice for developers. Here is a quick reference for choosing the right Redis pattern:

| Use Case | Recommended Approach | Key Commands |
| --- | --- | --- |
| Caching | Cache-aside + TTL + LFU eviction | SET EX, GET, DEL |
| Leaderboards | Sorted Sets | ZADD, ZREVRANGE, ZINCRBY |
| Rate Limiting | Sliding window (sorted set) or token bucket (Lua) | ZADD, ZRANGEBYSCORE, EVALSHA |
| Sessions | Hashes + TTL or connect-redis | HSET, HGETALL, EXPIRE |
| Message Queue | Streams + Consumer Groups | XADD, XREADGROUP, XACK |
| Real-time Notifications | Pub/Sub | PUBLISH, SUBSCRIBE, PSUBSCRIBE |
| Full-text Search | RediSearch + RedisJSON | FT.CREATE, FT.SEARCH, JSON.SET |
| Distributed Lock | SET NX EX + Lua renewal | SET NX EX, EVALSHA |

Whether you use Redis as a caching layer, message broker, or primary data store, the patterns and best practices covered in this guide will help you build high-performance, reliable, and secure Redis deployments. Always use connection pools, pipeline batch commands, configure appropriate persistence, and secure with ACLs and TLS.
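The sliding-window rate limiter mentioned in the table maps each request to a timestamped member of a per-client sorted set. The core logic can be sketched in-memory, with a plain array standing in for the sorted set (in Redis the trim would be ZREMRANGEBYSCORE, the count ZCARD, and the insert ZADD, ideally wrapped in a Lua script for atomicity):

```javascript
// Sketch of sliding-window rate limiting. A plain array of timestamps
// (oldest first) stands in for the per-client sorted set.
function makeLimiter(limit, windowMs) {
  const hits = [];
  return function allow(now) {
    // Drop entries older than the window (ZREMRANGEBYSCORE equivalent).
    while (hits.length && hits[0] <= now - windowMs) hits.shift();
    if (hits.length >= limit) return false;   // over the limit (ZCARD)
    hits.push(now);                           // record this request (ZADD)
    return true;
  };
}

const allow = makeLimiter(3, 1000);
allow(0);     // true
allow(100);   // true
allow(200);   // true
allow(300);   // false — 4th request inside the 1s window
allow(1100);  // true  — the oldest requests have aged out
```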
