
Redis Data Structures Guide: Strings, Hashes, Lists, Sets, and Sorted Sets

14 min read, by DevToolBox

Redis is not just a key-value cache — it is a data structure server. Each Redis data type is optimized for specific use cases, and choosing the right structure dramatically simplifies your application logic. This guide covers the six core Redis data structures with real-world examples and performance characteristics.

Strings

The most fundamental Redis type. Strings store any sequence of bytes (text, integers, serialized JSON, binary data). Maximum size is 512 MB.

# STRINGS — the simplest type, just bytes

# Basic set and get
SET user:1000:name "Alice"
GET user:1000:name           # "Alice"

# Set with expiry (TTL in seconds)
SET session:abc123 "user_data" EX 3600      # expires in 1 hour
SET session:abc123 "user_data" PX 3600000   # expires in 1 hour (milliseconds)
SET session:abc123 "user_data" EXAT 1800000000  # expires at Unix timestamp

# Atomic counter operations
SET page:home:views 0
INCR page:home:views         # 1
INCR page:home:views         # 2
INCRBY page:home:views 10    # 12
DECR page:home:views         # 11

# String manipulation
SET greeting "Hello"
APPEND greeting " World"     # "Hello World"
STRLEN greeting              # 11

# Get old value and set new value atomically
# (GETSET is deprecated since Redis 6.2; prefer SET key value GET)
GETSET page:home:views 0     # returns old value, resets to 0
SETNX lock:resource "owner"  # set only if not exists (mutex pattern)

# Multiple keys at once
MSET user:1:name "Alice" user:1:age 30 user:2:name "Bob"
MGET user:1:name user:1:age user:2:name
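The SETNX mutex pattern above is easy to model client-side. Here is a minimal pure-Python sketch using a plain dict in place of Redis; `acquire_lock` and `release_lock` are illustrative names, not redis-py API (a real implementation would use `SET key value NX EX <ttl>` so the lock expires if the owner crashes):

```python
def acquire_lock(store: dict, key: str, owner: str) -> bool:
    """Mimics SETNX: set the key only if it does not already exist.
    Returns True if this caller now owns the lock."""
    # dict.setdefault inserts only when the key is absent and always
    # returns the stored value, mirroring SETNX semantics.
    return store.setdefault(key, owner) == owner

def release_lock(store: dict, key: str, owner: str) -> bool:
    """Release only if we still own the lock, so we never delete a
    lock that expired and was re-acquired by another worker."""
    if store.get(key) == owner:
        del store[key]
        return True
    return False

store = {}
print(acquire_lock(store, "lock:resource", "worker-a"))  # True
print(acquire_lock(store, "lock:resource", "worker-b"))  # False, already held
release_lock(store, "lock:resource", "worker-a")
print(acquire_lock(store, "lock:resource", "worker-b"))  # True
```

Note the ownership check on release: in real Redis this check-and-delete must itself be atomic, which is why distributed locks wrap it in a Lua script.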

Lists

Redis Lists behave like doubly linked lists (internally a memory-efficient quicklist). Operations at the head or tail are O(1), and blocking pop commands make them a natural fit for task queues.

# LISTS — ordered sequences (linked list under the hood)

# Push to left (head) or right (tail)
RPUSH tasks "task1" "task2" "task3"   # right push (append)
LPUSH tasks "task0"                   # left push (prepend)

# Pop from left or right
LPOP tasks                # "task0" (removes and returns)
RPOP tasks                # "task3"

# Blocking pop (for task queues — blocks until item available)
BLPOP queue:jobs 30       # blocks up to 30 seconds

# Range (0-indexed; -1 means last element)
LRANGE tasks 0 -1         # all elements
LRANGE tasks 0 2          # first 3 elements

# Length and index access
LLEN tasks                # number of items
LINDEX tasks 0            # first item (no removal)

# Queue pattern (FIFO): producer RPUSH, consumer LPOP
RPUSH queue:emails "email1" "email2"
LPOP queue:emails         # "email1" (FIFO order)

# Stack pattern (LIFO): producer and consumer both on same end
LPUSH stack:undo "action1"
LPUSH stack:undo "action2"
LPOP stack:undo           # "action2" (LIFO order)
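The queue and stack patterns above map directly onto Python's `collections.deque`, which, like a Redis List, offers O(1) pushes and pops at both ends. A small local analogy:

```python
from collections import deque

# Queue (FIFO): producer appends to the tail (RPUSH),
# consumer pops from the head (LPOP).
queue = deque()
queue.append("email1")
queue.append("email2")
print(queue.popleft())  # "email1" — FIFO order

# Stack (LIFO): push and pop on the same end (LPUSH + LPOP).
stack = deque()
stack.appendleft("action1")
stack.appendleft("action2")
print(stack.popleft())  # "action2" — LIFO order
```

The only piece deque cannot model is BLPOP's blocking behavior, which is exactly what makes Redis Lists useful across processes.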

Sets

Unordered collections of unique strings. Redis provides set algebra operations (union, intersection, difference) that are invaluable for social features and recommendation systems.

# SETS — unordered collections of unique strings

# Add members
SADD tags:post:42 "nodejs" "javascript" "backend"
SADD tags:post:42 "nodejs"   # ignored (already exists)

# Check membership, count, remove
SISMEMBER tags:post:42 "nodejs"   # 1 (true)
SCARD tags:post:42                # 3
SREM tags:post:42 "backend"       # removes "backend"

# Get all members
SMEMBERS tags:post:42

# Random members (useful for recommendations)
SRANDMEMBER tags:post:42 2    # 2 random members (no removal)
SPOP tags:post:42             # 1 random member (with removal)

# Set operations (great for social features)
SADD user:1:following 2 3 4
SADD user:2:following 3 5 6

# Intersection: mutual follows
SINTER user:1:following user:2:following    # {3}

# Union: anyone either user follows
SUNION user:1:following user:2:following    # {2,3,4,5,6}

# Difference: who user 1 follows that user 2 doesn't
SDIFF user:1:following user:2:following     # {2,4}

# Store result of set operation into a new key
SINTERSTORE mutual:1:2 user:1:following user:2:following
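The set algebra commands correspond one-to-one with Python's built-in set operators, which makes the social-graph example easy to reason about locally:

```python
# Stand-ins for the two Redis sets above.
following_1 = {2, 3, 4}   # user:1:following
following_2 = {3, 5, 6}   # user:2:following

print(following_1 & following_2)  # SINTER -> {3}           mutual follows
print(following_1 | following_2)  # SUNION -> {2, 3, 4, 5, 6}
print(following_1 - following_2)  # SDIFF  -> {2, 4}        1 follows, 2 doesn't

# SINTERSTORE: keep the result under its own name for reuse
mutual_1_2 = following_1 & following_2
```

As in Redis, intersection cost is driven by the smallest set, so put your most selective set first when reasoning about SINTER performance.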

Sorted Sets (ZSET)

Like Sets but every member has an associated floating-point score. Members are ordered by score. The quintessential Redis structure for leaderboards, rate limiting, and time-series.

# SORTED SETS (ZSET) — unique members, each with a float score

# Add members with scores
ZADD leaderboard 1500 "alice"
ZADD leaderboard 2300 "bob"
ZADD leaderboard 1800 "carol"
ZADD leaderboard 2300 "dave"   # score tie with bob

# Score operations
ZINCRBY leaderboard 200 "alice"   # alice now has 1700

# Get rank (0-indexed, lowest score first)
ZRANK leaderboard "alice"         # rank by ascending score
ZREVRANK leaderboard "bob"        # rank by descending score (0 = highest)

# Get members by rank range
ZRANGE leaderboard 0 -1                     # all, ascending
ZRANGE leaderboard 0 -1 WITHSCORES         # with scores
ZREVRANGE leaderboard 0 2 WITHSCORES       # top 3

# Get members by score range
ZRANGEBYSCORE leaderboard 1000 2000 WITHSCORES
ZRANGEBYSCORE leaderboard -inf +inf LIMIT 0 10   # pagination

# Count members in score range
ZCOUNT leaderboard 1500 2000    # count between scores

# Use case: rate limiting (sliding window)
ZADD rate:user:123 1708000000 "req_1"
ZREMRANGEBYSCORE rate:user:123 -inf 1707999000  # remove old entries
ZCARD rate:user:123             # current window count
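The sliding-window steps above (add with timestamp score, trim old scores, count) can be sketched in pure Python. A sorted list of timestamps stands in for the sorted set, and `bisect` keeps it ordered; class and method names are illustrative:

```python
import bisect
import time

class SlidingWindowLimiter:
    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self.timestamps = []  # sorted ascending, like ZSET scores

    def allow(self, now=None) -> bool:
        now = time.time() if now is None else now
        # ZREMRANGEBYSCORE -inf (now - window): drop entries outside the window
        cutoff = bisect.bisect_left(self.timestamps, now - self.window)
        del self.timestamps[:cutoff]
        # ZCARD: count what remains in the window
        if len(self.timestamps) >= self.limit:
            return False
        # ZADD: record this request with its timestamp as the score
        bisect.insort(self.timestamps, now)
        return True

limiter = SlidingWindowLimiter(limit=3, window_seconds=60)
print([limiter.allow(now=t) for t in (0, 1, 2, 3)])  # [True, True, True, False]
print(limiter.allow(now=65))                         # True — old entries expired
```

In Redis the same three steps must run atomically (MULTI/EXEC or a Lua script), since two concurrent clients could otherwise both pass the count check.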

Hashes

Key-value maps within a key. Ideal for representing objects with multiple fields. More memory-efficient than storing each field as a separate key.

# HASHES — field-value pairs (like objects/maps)

# Set fields
HSET user:1000 name "Alice" age 30 email "alice@example.com"

# Get one field or all fields
HGET user:1000 name              # "Alice"
HMGET user:1000 name age         # ["Alice", "30"]
HGETALL user:1000                # {name: Alice, age: 30, email: ...}

# Field existence, count, keys, values
HEXISTS user:1000 email          # 1
HLEN user:1000                   # 3
HKEYS user:1000                  # [name, age, email]
HVALS user:1000                  # [Alice, 30, alice@example.com]

# Numeric operations on hash fields
HINCRBY user:1000 login_count 1
HINCRBYFLOAT user:1000 balance 10.50

# Delete a field
HDEL user:1000 age

# Scan a hash incrementally (for large hashes)
HSCAN user:1000 0 MATCH "em*" COUNT 10
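A Redis Hash is essentially a small map stored under one key, so a dict-of-dicts mirrors HSET/HGET/HINCRBY and shows why one hash beats many separate string keys: the object's fields stay together. These helper names are illustrative, not a Redis client API:

```python
store = {}  # key -> {field: value}, standing in for the Redis keyspace

def hset(key: str, **fields) -> None:
    # HSET: create the hash if absent, then set/overwrite fields
    store.setdefault(key, {}).update(fields)

def hget(key: str, field: str):
    # HGET: missing key or field yields None (Redis returns nil)
    return store.get(key, {}).get(field)

def hincrby(key: str, field: str, amount: int) -> int:
    # HINCRBY: treat a missing field as 0, like Redis does
    h = store.setdefault(key, {})
    h[field] = int(h.get(field, 0)) + amount
    return h[field]

hset("user:1000", name="Alice", age=30, email="alice@example.com")
print(hget("user:1000", "name"))               # Alice
print(hincrby("user:1000", "login_count", 1))  # 1
print(sorted(store["user:1000"]))              # HKEYS equivalent
```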

Streams

Redis Streams (added in 5.0) are an append-only log data structure. They support consumer groups for distributed, fault-tolerant event processing — similar to Apache Kafka but simpler.

# STREAMS — append-only log (like Kafka, but built-in)

# Add an event (auto-generated ID: timestamp-sequence)
XADD events:orders * user_id 123 product "Widget" amount 9.99
XADD events:orders * user_id 456 product "Gadget" amount 24.99

# Add with explicit ID
XADD events:orders 1708000000000-0 user_id 789 product "Doohickey" amount 4.99

# Read from beginning
XRANGE events:orders - +
XRANGE events:orders - + COUNT 10   # limit results

# Read from last N entries
XREVRANGE events:orders + - COUNT 5

# Consumer groups (for distributed processing)
XGROUP CREATE events:orders processors $ MKSTREAM

# Consumer reads from group
XREADGROUP GROUP processors worker1 COUNT 10 STREAMS events:orders >

# Acknowledge processed messages
XACK events:orders processors 1708000000000-0

# Stream info
XLEN events:orders              # total messages
XINFO STREAM events:orders      # metadata
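The core of a stream is just an append-only list of (ID, fields) entries. This in-memory sketch mimics XADD-style "timestamp-sequence" IDs (simplified: Redis resets the sequence per millisecond, while this uses one global counter) and leaves out consumer groups, persistence, and blocking reads:

```python
import itertools
import time

class MiniStream:
    def __init__(self):
        self.entries = []               # append-only list of (id, fields)
        self._seq = itertools.count()   # global sequence (simplification)

    def xadd(self, **fields) -> str:
        # Auto-generate a "milliseconds-sequence" style ID, like XADD key *
        entry_id = f"{int(time.time() * 1000)}-{next(self._seq)}"
        self.entries.append((entry_id, fields))
        return entry_id

    def xrange(self, count=None):
        # XRANGE - + [COUNT n]: read from the beginning, optionally limited
        return self.entries if count is None else self.entries[:count]

    def xlen(self) -> int:
        return len(self.entries)

stream = MiniStream()
stream.xadd(user_id=123, product="Widget", amount=9.99)
stream.xadd(user_id=456, product="Gadget", amount=24.99)
print(stream.xlen())                 # 2
print(stream.xrange(count=1)[0][1])  # first entry's fields
```

What real Streams add on top of this append-only core — per-consumer delivery cursors and a pending-entries list until XACK — is precisely what makes them fault-tolerant across workers.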

Data Structure Use Case Guide

| Structure  | Common Use Cases                                        | Example                          |
|------------|---------------------------------------------------------|----------------------------------|
| String     | Sessions, tokens, counters, caching, flags              | session:abc = user data          |
| List       | Task queues, activity feeds, recent items               | queue:jobs (RPUSH/BLPOP)         |
| Set        | Tags, likes, follows, unique visitors, permissions      | user:1:following                 |
| Sorted Set | Leaderboards, rate limiting, priority queues, timelines | leaderboard (score = points)     |
| Hash       | User profiles, product catalog, config objects          | user:1000 (name, age, email)     |
| Stream     | Event log, message queue, audit trail, IoT data         | events:orders (consumer groups)  |

Time Complexity Reference

| Operation           | Complexity   | Note                                      |
|---------------------|--------------|-------------------------------------------|
| GET/SET (String)    | O(1)         | Constant time                             |
| LPUSH/RPOP (List)   | O(1)         | Head/tail only                            |
| LRANGE (List)       | O(N)         | N = elements returned                     |
| SADD/SREM (Set)     | O(1)         | Per element                               |
| SINTER (Set)        | O(N*M)       | N = smallest set size, M = number of sets |
| ZADD (Sorted Set)   | O(log N)     | Skip list insertion                       |
| ZRANGE (Sorted Set) | O(log N + M) | M = elements returned                     |
| HSET/HGET (Hash)    | O(1)         | Per field                                 |
| HGETALL (Hash)      | O(N)         | N = number of fields                      |

Best Practices

  1. Use consistent key naming conventions: object_type:id:field (e.g., user:1000:profile). This makes key scanning and TTL management predictable.
  2. Always set TTLs on session data, tokens, and cache entries. Redis is not a primary store for data that must survive forever without eviction policies.
  3. Use HSET for objects instead of separate string keys. user:1000 with fields is more efficient than user:1000:name + user:1000:age.
  4. Use pipelines (MULTI/EXEC or client-side pipelining) to batch multiple commands and reduce round-trip latency.
  5. Avoid KEYS * in production. Use SCAN with a cursor pattern for safe, incremental key enumeration.
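Point 5 deserves a sketch: SCAN's value is that it walks the keyspace in small batches via a cursor instead of materializing every key at once. This pure-Python model uses a list index as the cursor for clarity (real SCAN cursors are opaque hash-table positions, not indices, and SCAN supports the same MATCH/COUNT options shown here):

```python
import fnmatch

def scan(keys, cursor: int, match: str = "*", count: int = 10):
    """Return (next_cursor, batch); next_cursor == 0 means iteration is done."""
    # Take one batch, then filter it — like SCAN, MATCH is applied per batch,
    # so a batch may legitimately come back empty.
    batch = [k for k in keys[cursor:cursor + count] if fnmatch.fnmatch(k, match)]
    next_cursor = cursor + count
    return (0 if next_cursor >= len(keys) else next_cursor), batch

keys = [f"user:{i}:profile" for i in range(25)]
cursor, found = 0, []
while True:
    cursor, batch = scan(keys, cursor, match="user:*", count=10)
    found.extend(batch)
    if cursor == 0:
        break
print(len(found))  # 25 — every key visited, never more than 10 per call
```

KEYS * blocks the single-threaded server for the whole keyspace; the cursor loop above spreads the same work across many short commands.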

Frequently Asked Questions

What is the maximum number of keys Redis can hold?

Redis can hold up to 2^32 - 1 keys per database (~4.2 billion). In practice, you are limited by available RAM. A Redis instance with 32 GB RAM can comfortably hold hundreds of millions of keys.

When should I use a Hash vs a String for an object?

Use Hash when: (1) you need to update individual fields without fetching the whole object, (2) you have many fields and want to save memory with compact hash encoding, or (3) you use HINCRBY for counters within the object. Use String (with JSON serialization) when: you always read/write the entire object atomically, or you need to index the JSON content.

How does Redis persistence work with data structures?

All Redis data structures are saved equally by both persistence mechanisms: RDB (snapshots at intervals) and AOF (append-only file — logs every write command). Data structures in memory serialize/deserialize seamlessly. Use RDB for backup, AOF for durability, or both for maximum safety.

What is the difference between EXPIRE and TTL?

EXPIRE key seconds sets the expiry time on a key. TTL key returns the remaining time to live in seconds (-1 means no expiry, -2 means key does not exist). PERSIST removes the expiry. EXPIREAT sets absolute expiry using a Unix timestamp. PTTL and PEXPIRE work in milliseconds.

How do I use Redis Sorted Sets for rate limiting?

Sliding window rate limiting with Sorted Sets: (1) Use current timestamp as score, (2) ZADD rate:user:123 timestamp request_id, (3) ZREMRANGEBYSCORE to remove entries older than the window, (4) ZCARD to count remaining entries, (5) if count < limit, allow the request. This is accurate and atomic when wrapped in a Lua script or MULTI/EXEC.
