
How to Set Up Redis Sentinel: Step-by-Step Tutorial

A complete tutorial for setting up Redis Sentinel with automatic failover. Covers installation, configuration, HAProxy routing, and production hardening.

February 17, 2026

Running Redis without high availability means a single server failure takes down your cache, session store, or message broker until someone manually intervenes. Redis Sentinel adds automatic failover, health monitoring, and service discovery so your application keeps running when a node goes down.

This tutorial walks through setting up a 3-node Redis Sentinel cluster from scratch: installing Redis, configuring replication, deploying Sentinel on each node, testing failover, adding HAProxy for transparent routing, and hardening the setup for production.

Prerequisites

Before starting, you need:

  • 3 Linux servers (Ubuntu 22.04 or Debian 12 recommended) with at least 2GB RAM each
  • SSH root access to all three servers
  • Private networking between the servers (same datacenter or VPC) -- Sentinel requires reliable, low-latency communication between nodes
  • Docker installed on all three servers (this guide uses Docker containers for isolation, but the configuration applies equally to bare-metal installs)
  • Ports open between the nodes: 6379 (Redis) and 26379 (Sentinel) on the private network

Throughout this guide, we will refer to the three servers as:

Hostname   Private IP   Role
redis-1    10.0.0.1     Primary + Sentinel
redis-2    10.0.0.2     Replica + Sentinel
redis-3    10.0.0.3     Replica + Sentinel

Replace the IPs with your actual private network addresses.
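If you prefer stable names over raw IPs, one option (assuming the addresses above) is to add matching entries to /etc/hosts on all three servers:

```
10.0.0.1 redis-1
10.0.0.2 redis-2
10.0.0.3 redis-3
```

The rest of this guide uses the IPs directly, so this step is optional.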

Step 1: Install Redis on All Three Nodes

Run the following on each server to pull the Redis 7 Alpine image and create a data volume:

docker pull redis:7-alpine
docker volume create redis-data
docker network create redis-sentinel

Create the configuration directory:

mkdir -p /opt/redis/redis
mkdir -p /opt/redis/sentinel

Step 2: Configure the Primary Node

On redis-1, create the Redis configuration at /opt/redis/redis/redis.conf:

bind 0.0.0.0
protected-mode yes
port 6379
requirepass YOUR_STRONG_PASSWORD
masterauth YOUR_STRONG_PASSWORD

# Announce this node's IP so replicas and Sentinel can find it
replica-announce-ip 10.0.0.1
replica-announce-port 6379

# Persistence
appendonly yes
appendfsync everysec

# Replication tuning
repl-diskless-sync yes
repl-backlog-size 256mb

# Write safety: require at least 1 replica to acknowledge writes
min-replicas-to-write 1
min-replicas-max-lag 10

Start Redis on the primary:

docker run -d \
  --name redis \
  --network redis-sentinel \
  --restart unless-stopped \
  -p 6379:6379 \
  -v redis-data:/data \
  -v /opt/redis/redis/redis.conf:/usr/local/etc/redis/redis.conf:ro \
  redis:7-alpine redis-server /usr/local/etc/redis/redis.conf

Verify it is running:

docker exec redis redis-cli -a YOUR_STRONG_PASSWORD PING
# Expected output: PONG

Step 3: Configure the Replica Nodes

On redis-2 and redis-3, create /opt/redis/redis/redis.conf. The configuration is identical to the primary except for two changes: the added replicaof directive and the node-specific replica-announce-ip value.

For redis-2 (10.0.0.2):

bind 0.0.0.0
protected-mode yes
port 6379
requirepass YOUR_STRONG_PASSWORD
masterauth YOUR_STRONG_PASSWORD

replica-announce-ip 10.0.0.2
replica-announce-port 6379

appendonly yes
appendfsync everysec
repl-diskless-sync yes
repl-backlog-size 256mb

min-replicas-to-write 1
min-replicas-max-lag 10

# This node is a replica of the primary
replicaof 10.0.0.1 6379

For redis-3, use the same configuration but set replica-announce-ip to 10.0.0.3.

Start Redis on both replicas using the same Docker command from Step 2. Then verify replication is working:

docker exec redis redis-cli -a YOUR_STRONG_PASSWORD INFO replication

On the primary, you should see role:master and connected_slaves:2. On each replica, you should see role:slave and master_link_status:up.
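To make this check scriptable (for cron or a deploy pipeline), here is a minimal sketch that parses the INFO replication output. check_replication is a hypothetical helper, and the "2 replicas" threshold matches the three-node layout in this guide:

```shell
# Sketch: fail if replication is unhealthy. check_replication is a hypothetical
# helper that takes raw `INFO replication` output as its single argument.
check_replication() {
  # Strip CR characters (redis-cli output uses CRLF line endings).
  info=$(printf '%s' "$1" | tr -d '\r')
  role=$(printf '%s\n' "$info" | awk -F: '$1=="role"{print $2}')
  if [ "$role" = "master" ]; then
    slaves=$(printf '%s\n' "$info" | awk -F: '$1=="connected_slaves"{print $2}')
    if [ "${slaves:-0}" -ge 2 ]; then
      echo "ok: master with $slaves replicas"
    else
      echo "degraded: only ${slaves:-0} replicas"; return 1
    fi
  else
    link=$(printf '%s\n' "$info" | awk -F: '$1=="master_link_status"{print $2}')
    if [ "$link" = "up" ]; then
      echo "ok: replica, link up"
    else
      echo "broken: master link is ${link:-unknown}"; return 1
    fi
  fi
}

# Example against a live node:
# check_replication "$(docker exec redis redis-cli -a YOUR_STRONG_PASSWORD INFO replication)"
```

The non-zero return code makes it easy to wire into an alerting script.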

Step 4: Configure Sentinel on Each Node

Sentinel runs as a separate process on each of the three servers. It monitors the Redis primary, detects failures, and coordinates failover by promoting a replica to primary.

Create /opt/redis/sentinel/sentinel.conf on all three nodes. The configuration is the same on every node except for the sentinel announce-ip line.

For redis-1 (10.0.0.1):

port 26379
bind 0.0.0.0
protected-mode yes

# Authentication
requirepass YOUR_STRONG_PASSWORD
sentinel sentinel-pass YOUR_STRONG_PASSWORD

# Announce this Sentinel's address to the others
sentinel announce-ip 10.0.0.1
sentinel announce-port 26379

# Enable hostname resolution (useful when using hostnames instead of IPs)
sentinel resolve-hostnames yes
sentinel announce-hostnames yes

# Monitor the primary: name, host, port, quorum
sentinel monitor mymaster 10.0.0.1 6379 2
sentinel auth-pass mymaster YOUR_STRONG_PASSWORD

# How long to wait before considering a node down (milliseconds)
sentinel down-after-milliseconds mymaster 5000

# Failover timeout (milliseconds)
sentinel failover-timeout mymaster 60000

# How many replicas to reconfigure simultaneously during failover
sentinel parallel-syncs mymaster 1

On redis-2 and redis-3, use the same configuration but update sentinel announce-ip to the respective node's IP. The sentinel monitor directive should still point to the primary's IP (10.0.0.1) -- Sentinel will automatically discover the current master after its first failover.

Start Sentinel on each node:

docker run -d \
  --name sentinel \
  --network redis-sentinel \
  --restart unless-stopped \
  -p 26379:26379 \
  -v /opt/redis/sentinel:/usr/local/etc/redis \
  redis:7-alpine redis-sentinel /usr/local/etc/redis/sentinel.conf

Note that the Sentinel configuration directory is mounted writable (not :ro) because Sentinel modifies its own configuration file at runtime to persist state across restarts.

Verify Sentinel Is Working

Wait about 30 seconds for the Sentinels to discover each other, then check the cluster state:

docker exec sentinel redis-cli -p 26379 -a YOUR_STRONG_PASSWORD SENTINEL masters

You should see the master listed with flags set to master, and num-slaves showing 2. Check that all three Sentinels see each other:

docker exec sentinel redis-cli -p 26379 -a YOUR_STRONG_PASSWORD SENTINEL sentinels mymaster

This should return information about the other two Sentinel instances. If num-other-sentinels shows 2, your quorum is ready.

Step 5: Testing Failover

Failover testing is not optional. You need to confirm that your cluster recovers correctly before running it in production.

Trigger a Manual Failover

From any Sentinel node, force a failover:

docker exec sentinel redis-cli -p 26379 -a YOUR_STRONG_PASSWORD SENTINEL failover mymaster

This tells Sentinel to promote one of the replicas to primary, even though the current primary is healthy. Watch what happens:

# Check which node is now the master
docker exec sentinel redis-cli -p 26379 -a YOUR_STRONG_PASSWORD SENTINEL get-master-addr-by-name mymaster

The output should show a different IP than 10.0.0.1, confirming a replica was promoted.

Simulate a Primary Crash

For a more realistic test, stop the Redis container on the current primary:

docker stop redis

Wait 10-15 seconds (the 5-second down-after-milliseconds window plus leader election and promotion time), then check the new master from one of the remaining Sentinel nodes:

docker exec sentinel redis-cli -p 26379 -a YOUR_STRONG_PASSWORD SENTINEL get-master-addr-by-name mymaster

A new primary should have been elected. Bring the old node back:

docker start redis

The old primary will rejoin as a replica automatically. Verify with:

docker exec redis redis-cli -a YOUR_STRONG_PASSWORD INFO replication

You should see role:slave -- it correctly joined as a replica of the new primary rather than competing for the master role.

Write During Failover

To test data integrity, start a write loop before triggering failover:

for i in $(seq 1 100); do
  docker exec redis redis-cli -a YOUR_STRONG_PASSWORD SET "test:$i" "$i" 2>/dev/null
  sleep 0.1
done

Some writes will fail during the failover window (typically 5-15 seconds). This is expected with asynchronous replication. After failover completes, verify your data on the new primary to understand how many writes were lost.
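A small sketch to quantify that loss afterwards. count_survivors is a hypothetical helper; point it at the new primary (or the HAProxy endpoint from Step 6) and it reports how many of the test keys made it through:

```shell
# Sketch: count how many of the N test keys survived the failover.
# count_survivors is a hypothetical helper; YOUR_STRONG_PASSWORD as before.
count_survivors() {
  n="$1"; host="$2"; survived=0
  for i in $(seq 1 "$n"); do
    val=$(redis-cli -h "$host" -a YOUR_STRONG_PASSWORD GET "test:$i" 2>/dev/null)
    if [ "$val" = "$i" ]; then
      survived=$((survived + 1))
    fi
  done
  echo "$survived of $n writes survived"
}

# Example: count_survivors 100 10.0.0.2
```

Run it once after a clean failover and once after a hard crash to see the difference asynchronous replication makes.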

Step 6: Add HAProxy for Transparent Routing

Sentinel handles failover, but your application still needs to know which node is the current primary. You have two options: use a Sentinel-aware Redis client library, or put HAProxy in front of Redis to always route traffic to the master automatically.

HAProxy is the simpler option because it works with any Redis client -- no Sentinel-specific code needed. HAProxy performs TCP health checks that authenticate with Redis and check INFO replication to find which node reports role:master.

Create /opt/haproxy/haproxy.cfg:

global
    log stdout format raw local0
    maxconn 4096

defaults
    log global
    mode tcp
    timeout connect 5s
    timeout client 1m
    timeout server 1m

frontend redis_front
    bind *:6380
    default_backend redis_master

backend redis_master
    mode tcp
    option tcp-check

    tcp-check connect
    tcp-check send AUTH\ YOUR_STRONG_PASSWORD\r\n
    tcp-check expect string +OK
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send INFO\ replication\r\n
    tcp-check expect string role:master

    server redis-1 10.0.0.1:6379 check inter 1s fall 3 rise 2
    server redis-2 10.0.0.2:6379 check inter 1s fall 3 rise 2
    server redis-3 10.0.0.3:6379 check inter 1s fall 3 rise 2

The key part is the tcp-check sequence. HAProxy connects to each Redis server, authenticates, pings, and then checks INFO replication. Only the node that responds with role:master is marked as healthy. When a failover happens, HAProxy detects the change within 1-3 seconds (based on the inter 1s check interval) and routes traffic to the new primary.
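The routing decision boils down to a single predicate, sketched here as a shell function (is_master is a hypothetical name) that is also handy when debugging a node by hand:

```shell
# Sketch: the same predicate HAProxy's health check applies -- a node is
# routable only if its `INFO replication` output reports role:master.
is_master() {
  printf '%s\n' "$1" | tr -d '\r' | grep -q '^role:master$'
}

# Example against a live node:
# is_master "$(redis-cli -h 10.0.0.1 -a YOUR_STRONG_PASSWORD INFO replication)" \
#   && echo "10.0.0.1 is the primary"
```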

Start HAProxy (this can run on any server -- even a separate node):

docker run -d \
  --name haproxy \
  --network redis-sentinel \
  --restart unless-stopped \
  -p 6380:6380 \
  -v /opt/haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro \
  haproxy:2.9-alpine

Now your application connects to redis://HAPROXY_IP:6380 with your Redis password, and HAProxy handles routing to whichever node is currently the primary. No Sentinel-aware client library needed.

Step 7: Production Hardening

A working Sentinel cluster is a start. Making it production-ready requires attention to persistence, memory management, security, and monitoring.

Persistence Configuration

The appendonly yes and appendfsync everysec settings in our configuration give a good balance between durability and performance. With everysec, you lose at most 1 second of data on a crash. If you need stronger guarantees, use appendfsync always (at the cost of write throughput). For pure caching workloads where data loss is acceptable, you can disable AOF and rely on RDB snapshots only.
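For that cache-only profile, the corresponding redis.conf fragment might look like the following (the snapshot thresholds shown are the classic defaults, illustrative rather than prescriptive):

```
# Cache profile: no AOF, periodic RDB snapshots only
appendonly no
save 900 1
save 300 10
save 60 10000
```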

Memory Limits

Always set a maxmemory limit to prevent Redis from consuming all available RAM and triggering the OOM killer. Add these lines to redis.conf:

maxmemory 1500mb
maxmemory-policy allkeys-lru

Set maxmemory to about 75% of available RAM to leave room for the OS, the AOF rewrite buffer, and replication buffers. The allkeys-lru eviction policy is a safe default for caching -- it evicts the least recently used keys when memory is full.
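As a quick sanity check of that sizing rule, here is a one-line sketch (maxmem_mb is a hypothetical helper):

```shell
# 75% of total RAM (in MB) as a starting point for maxmemory.
maxmem_mb() { echo $(( $1 * 75 / 100 )); }

maxmem_mb 2048   # 2GB node -> prints 1536, close to the 1500mb used above
```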

Network Security

Redis should never be exposed to the public internet. The configuration in this guide binds to 0.0.0.0, which means Redis listens on all interfaces. Restrict access using firewall rules:

# Allow Redis port only from private network
ufw allow from 10.0.0.0/24 to any port 6379
ufw allow from 10.0.0.0/24 to any port 26379

# If using HAProxy, allow client connections from your application servers
ufw allow from YOUR_APP_SUBNET to any port 6380

Keep protected-mode yes (enabled in our configuration) as a safety net: it makes Redis refuse remote connections when no authentication is configured, so an accidentally removed requirepass line fails closed instead of leaving the server open.

Monitoring

At minimum, monitor these metrics on each node:

  • Redis memory usage (INFO memory -- used_memory vs maxmemory)
  • Replication lag (INFO replication -- master_repl_offset vs replica offset)
  • Connected clients (INFO clients)
  • Sentinel quorum health (SENTINEL ckquorum mymaster)

For production monitoring, export metrics to Prometheus using a Redis exporter and set up alerts on replication lag and Sentinel quorum loss.

Backup Strategy

Even with replication, you need backups. Redis replication is for availability, not for protection against accidental FLUSHALL or application bugs that corrupt data.

Schedule RDB snapshots on a replica (not the primary, to avoid performance impact):

docker exec redis redis-cli -a YOUR_STRONG_PASSWORD BGSAVE

Copy the resulting dump.rdb file to off-site storage. Automate this with a cron job that runs BGSAVE, waits for completion, and uploads the dump to S3 or equivalent.
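The "waits for completion" step can be scripted by polling INFO persistence. A sketch, where fetch_info is a hypothetical wrapper you point at your replica:

```shell
# Sketch: poll rdb_bgsave_in_progress until the background save finishes.
# fetch_info is a hypothetical wrapper around your replica's INFO command.
fetch_info() {
  docker exec redis redis-cli -a YOUR_STRONG_PASSWORD INFO persistence
}

wait_for_bgsave() {
  while fetch_info | grep -q 'rdb_bgsave_in_progress:1'; do
    sleep 1
  done
  echo "bgsave complete"
}

# Example:
# docker exec redis redis-cli -a YOUR_STRONG_PASSWORD BGSAVE && wait_for_bgsave
```

Only copy dump.rdb after wait_for_bgsave returns; copying mid-save can capture a partial file.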

Manual Setup vs Using sshploy

The steps above give you a fully functional Redis Sentinel cluster, but there is a lot to manage: keeping configurations consistent across nodes, setting up firewall rules correctly, ensuring Docker networking is configured for cross-node communication, and handling the nuances of Sentinel's hostname resolution and announcement settings.

sshploy automates this entire process. You select your servers, configure your Redis password and cluster size, and sshploy runs the Ansible playbook that deploys Redis with replication, Sentinel with quorum, HAProxy for automatic master routing, and firewall rules scoped to your cluster. It takes under a minute and produces the same production-grade setup described in this guide, including the HAProxy health checks, write-safety settings, and persistence configuration. When you need to update or scale the cluster, sshploy re-runs the deployment while respecting Sentinel's current master election, so it never overrides an active failover.

FAQ

How many Sentinel nodes do I need?

You need at least 3 Sentinel instances to form a reliable quorum. Sentinel requires a majority to agree before triggering a failover -- with 3 nodes, 2 must agree (tolerating 1 failure). With 5 nodes, 3 must agree (tolerating 2 failures). Always use an odd number. Running Sentinel on the same servers as Redis is standard practice and saves you from provisioning extra nodes.
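The majority arithmetic, sketched as a shell one-liner (majority is a hypothetical helper):

```shell
# Majority of N Sentinels needed to authorize a failover.
majority() { echo $(( $1 / 2 + 1 )); }

majority 3   # prints 2 -- tolerates 1 failure
majority 5   # prints 3 -- tolerates 2 failures
```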

What happens to writes during a failover?

Writes sent to the old primary after it fails but before clients switch to the new primary are lost. Redis replication is asynchronous, so the replicas may not have received the most recent writes before the primary went down. The min-replicas-to-write and min-replicas-max-lag settings reduce this window by rejecting writes on the primary if replicas are not keeping up, but they cannot eliminate it entirely. For most caching and session workloads, this trade-off is acceptable.

Can I add more replicas later without downtime?

Yes. Adding a replica is non-disruptive. Configure the new node with replicaof pointing to the current primary's IP, start it, and Sentinel will automatically discover it. The new replica will perform an initial full sync (which increases load on the primary temporarily) and then switch to streaming replication. No existing nodes need to be restarted.
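For example, a hypothetical fourth node at 10.0.0.4 would reuse the Step 3 configuration with just these node-specific lines:

```
replica-announce-ip 10.0.0.4
replica-announce-port 6379
replicaof 10.0.0.1 6379
```

Remember to also extend your firewall rules and HAProxy server list to cover the new node.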

Should I use hostnames or IP addresses in the configuration?

IP addresses are more reliable for initial setup because they eliminate DNS resolution as a potential failure point. However, if you use hostnames, enable sentinel resolve-hostnames yes and sentinel announce-hostnames yes in your Sentinel configuration so Sentinel can properly resolve and announce node addresses. sshploy uses internal hostnames (like redis-1.internal) mapped via /etc/hosts for stable addressing across redeployments.

How do I connect my application to the Sentinel cluster?

You have two approaches. First, use a Sentinel-aware client library (most Redis clients support this) by passing the Sentinel addresses and master name -- the client queries Sentinel to discover the current primary. Second, use HAProxy as described in Step 6, which lets you connect with any standard Redis client to a single endpoint (HAPROXY_IP:6380). The HAProxy approach is simpler and works with every Redis client, which is why it is the recommended setup for most applications.

Ready to deploy?

Skip the manual setup. sshploy handles the entire deployment for you.

Deploy Redis Sentinel