Redis Master-Slave Replication and Sentinel

Redis Master-Slave Usage

Like MySQL, Redis supports master-slave replication, including one-master-many-slaves and multi-level (cascading) slave topologies.

A master-slave setup serves two purposes: pure redundant backup, and better read performance, since expensive operations such as SORT can be offloaded to the slaves. Redis replication runs asynchronously, so it neither blocks the master's main logic nor degrades the master's processing performance.

In a master-slave architecture, you can consider disabling data persistence on the master and letting only the slaves persist, which improves the master's processing performance.
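A minimal sketch of the relevant redis.conf lines for such a persistence-free master (note that the configs later in this article keep appendonly yes; the lines below apply only to this variant where persistence is delegated to the slaves):

```
# Master that does no persistence of its own:
save ""            # disable all RDB snapshot rules
appendonly no      # disable the AOF log
```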

In a master-slave architecture, slaves are usually configured as read-only, which prevents their data from being modified by mistake. However, a slave still accepts administrative commands such as CONFIG, so it should not be exposed directly to an untrusted network. If that cannot be avoided, consider renaming critical commands so that outsiders cannot execute them accidentally.
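Renaming is done with the rename-command directive in redis.conf; the replacement name below is an arbitrary example:

```
# Rename CONFIG to a hard-to-guess name (example value):
rename-command CONFIG my-secret-config-cmd
# Or disable a command outright by renaming it to the empty string:
rename-command FLUSHALL ""
```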

How Redis Replication Works

The slave sends a SYNC command to the master. On receiving it, the master runs BGSAVE, forking a child process dedicated to persistence, i.e. writing the master's dataset into an RDB file. While this persistence is in progress, the master buffers all write commands it executes in memory.

After BGSAVE completes, the master sends the resulting RDB file to the slave, which saves it to disk and then loads it into memory. Once that is done, the master sends the write commands it buffered during this period to the slave in Redis protocol format.

Note also that even if several slaves send SYNC at the same time, the master performs only one BGSAVE and then ships the resulting RDB file to all of them. Before Redis 2.8, any disconnection between slave and master triggered a full resynchronization; from 2.8 onward, Redis supports a more efficient partial (incremental) resynchronization, which greatly reduces the cost of recovering from a dropped connection.

The master maintains an in-memory buffer (the replication backlog) holding the data to be sent to slaves. After a brief network interruption, the slave tries to reconnect; once connected, it sends the ID of the master it wants to sync with and the replication offset from which it wants to resume. On receiving this request, the master first verifies that the requested master ID matches its own, then checks whether the requested offset is still present in its backlog. If both conditions hold, the master sends only the incremental data.

Partial resynchronization requires server-side support for the new PSYNC command, which is only available from Redis 2.8 onward.
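The master's choice between partial and full resync can be sketched as follows. This is a simplified didactic model, not the actual Redis implementation; the class name ReplBacklog and all values are made up for illustration:

```python
class ReplBacklog:
    """Simplified model of the master's replication backlog ring buffer."""

    def __init__(self, size, run_id):
        self.size = size          # backlog capacity in bytes
        self.run_id = run_id      # this master's run ID
        self.buffer = b""
        self.offset = 0           # master_repl_offset: total bytes ever fed

    def feed(self, data):
        # Append write-command bytes; keep only the newest `size` bytes.
        self.buffer = (self.buffer + data)[-self.size:]
        self.offset += len(data)

    def psync(self, slave_run_id, slave_offset):
        """Return ('CONTINUE', delta) for a partial resync, or a full resync."""
        first_kept = self.offset - len(self.buffer)  # oldest offset still buffered
        if slave_run_id != self.run_id or slave_offset < first_kept:
            return ("FULLRESYNC", None)              # must transfer a fresh RDB
        start = slave_offset - first_kept
        return ("CONTINUE", self.buffer[start:])     # send only the missing tail


backlog = ReplBacklog(size=8, run_id="abc123")
backlog.feed(b"SET a 1;")
print(backlog.psync("abc123", 4))   # offset still in backlog -> partial resync
print(backlog.psync("other", 4))    # run ID mismatch -> full resync
```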

Establish replication:

slaveof {masterHost} {masterPort}

Check replication status on a slave:

info replication

Break replication:

slaveof no one

Security:

The master enables password authentication with the requirepass parameter; clients authenticate with the AUTH command; each slave's masterauth parameter must match the master's password.

Read-only:

By default, slaves run in read-only mode via the slave-read-only yes setting.

Replication supports tree topologies: a slave can replicate from another slave, forming cascading replication chains.
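A sketch of such a chain, assuming 192.168.1.102 already replicates from the master at 192.168.1.101: a third-tier node simply points slaveof at the intermediate slave instead of the master:

```
# redis.conf on the second-tier slave:
slaveof 192.168.1.102 6379
masterauth "mypass"
```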

Replication procedure:

1) After slaveof is executed, the slave merely records the master's address and returns immediately.

2) A timer task running once per second inside the slave drives the replication logic and establishes the socket connection to the master.

3) The slave sends a PING command for the first communication.

4) Authentication is performed.

5) The dataset is synchronized; since 2.8 this uses the new replication command psync.

6) Commands are replicated continuously.
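The steps above can be sketched as a tiny linear state machine (a didactic model only; it is not how the Redis source organizes these states):

```python
# Order of states a slave walks through when connecting to its master,
# following the six steps listed above.
STATES = [
    "save_master_addr",   # 1) slaveof records host:port and returns
    "connect_socket",     # 2) per-second timer task opens the connection
    "send_ping",          # 3) first communication
    "auth",               # 4) password check via masterauth
    "sync_dataset",       # 5) psync full or partial resync
    "stream_commands",    # 6) steady-state command propagation
]

def next_state(state):
    """Return the state that follows `state`, or None once replication is steady."""
    i = STATES.index(state)
    return STATES[i + 1] if i + 1 < len(STATES) else None

s = STATES[0]
while s is not None:
    print(s)
    s = next_state(s)
```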

The psync command relies on the following components:

The replication offset each node maintains: master_repl_offset on the master and slave_repl_offset on the slave

The master's replication backlog buffer

The master's run ID
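All of these values can be read from INFO replication. A small sketch that extracts them from the raw INFO text (the field names are the real ones; the sample text is abridged from the output shown later in this article):

```python
def parse_info(raw):
    """Parse the 'key:value' lines of a Redis INFO section into a dict."""
    out = {}
    for line in raw.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and ":" in line:
            key, _, value = line.partition(":")
            out[key] = value
    return out

sample = """\
# Replication
role:master
connected_slaves:2
master_repl_offset:2507
repl_backlog_active:1
repl_backlog_size:1048576
"""

info = parse_info(sample)
print(info["role"], info["master_repl_offset"])
```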

Time cost of a full resynchronization:

Time for the master to run bgsave

Time to transfer the RDB file over the network

Time for the slave to flush its old data

Time for the slave to load the RDB file

Possible AOF rewrite time afterwards

Heartbeat:

Once replication is established, the master and slaves maintain a long-lived connection and send heartbeat commands to each other.

Replication is asynchronous.

Problems replication may encounter:

Replication lag

Reading expired data on slaves

Slave failures

Configuring Redis Master-Slave Backup and Failover

192.168.1.101   master   6379

192.168.1.102   slave1    6379

192.168.1.103   slave2    6379

Edit the master config file: /etc/redis/6379.conf

pidfile /var/run/redis_6379.pid

logfile /data/logs/redis.master.log

protected-mode yes

masterauth "mypass"

requirepass "mypass"

daemonize yes

tcp-backlog 511

timeout 0

tcp-keepalive 60

loglevel notice

databases 16

dir /data

stop-writes-on-bgsave-error no

repl-timeout 60

repl-ping-slave-period 10

repl-disable-tcp-nodelay no

repl-backlog-size 10M

repl-backlog-ttl 7200

slave-serve-stale-data yes

slave-read-only yes

slave-priority 100

lua-time-limit 5000

slowlog-log-slower-than 10000

slowlog-max-len 128

hash-max-ziplist-entries 512

hash-max-ziplist-value 64

list-max-ziplist-entries 512

list-max-ziplist-value 64

set-max-intset-entries 512

zset-max-ziplist-entries 128

zset-max-ziplist-value 64

activerehashing yes

client-output-buffer-limit normal 0 0 0

client-output-buffer-limit slave 512mb 128mb 60

client-output-buffer-limit pubsub 32mb 8mb 60

hz 10

port 6379

maxmemory 512mb

maxmemory-policy volatile-lru

appendonly yes

appendfsync everysec

appendfilename "appendonly-6379.aof"

dbfilename "dump-6379.rdb"

aof-rewrite-incremental-fsync yes

no-appendfsync-on-rewrite yes

auto-aof-rewrite-min-size 64m

auto-aof-rewrite-percentage 89

rdbcompression yes

rdbchecksum yes

repl-diskless-sync no

repl-diskless-sync-delay 5

maxclients 10000

hll-sparse-max-bytes 3000

min-slaves-to-write 0

min-slaves-max-lag 10

aof-load-truncated yes

notify-keyspace-events ""

Edit the slave1 config file: /etc/redis/6379.conf

pidfile /var/run/redis_6379.pid

logfile /data/logs/redis.slave1.log

protected-mode yes

masterauth "mypass"

requirepass "mypass"

daemonize yes

tcp-backlog 511

timeout 0

tcp-keepalive 60

loglevel notice

databases 16

dir /data

stop-writes-on-bgsave-error no

repl-timeout 60

repl-ping-slave-period 10

repl-disable-tcp-nodelay no

repl-backlog-size 10000000

repl-backlog-ttl 7200

slave-serve-stale-data yes

slave-read-only yes

slave-priority 100

lua-time-limit 5000

slowlog-log-slower-than 10000

slowlog-max-len 128

hash-max-ziplist-entries 512

hash-max-ziplist-value 64

list-max-ziplist-entries 512

list-max-ziplist-value 64

set-max-intset-entries 512

zset-max-ziplist-entries 128

zset-max-ziplist-value 64

activerehashing yes

client-output-buffer-limit normal 0 0 0

client-output-buffer-limit slave 512mb 128mb 60

client-output-buffer-limit pubsub 32mb 8mb 60

hz 10

port 6379

maxmemory 512mb

maxmemory-policy volatile-lru

appendonly yes

appendfsync everysec

appendfilename "appendonly-6379.aof"

dbfilename "dump-6379.rdb"

aof-rewrite-incremental-fsync yes

no-appendfsync-on-rewrite yes

auto-aof-rewrite-min-size 62500kb

auto-aof-rewrite-percentage 81

rdbcompression yes

rdbchecksum yes

repl-diskless-sync no

repl-diskless-sync-delay 5

maxclients 4064

hll-sparse-max-bytes 3000

min-slaves-to-write 0

min-slaves-max-lag 10

aof-load-truncated yes

notify-keyspace-events ""

slaveof 192.168.1.101 6379

Edit the slave2 config file: /etc/redis/6379.conf

pidfile /var/run/redis_6379.pid

logfile /data/logs/redis.slave2.log

protected-mode yes

masterauth "mypass"

requirepass "mypass"

daemonize yes

tcp-backlog 511

timeout 0

tcp-keepalive 60

loglevel notice

databases 16

dir /data

stop-writes-on-bgsave-error no

repl-timeout 60

repl-ping-slave-period 10

repl-disable-tcp-nodelay no

repl-backlog-size 10000000

repl-backlog-ttl 7200

slave-serve-stale-data yes

slave-read-only yes

slave-priority 100

lua-time-limit 5000

slowlog-log-slower-than 10000

slowlog-max-len 128

hash-max-ziplist-entries 512

hash-max-ziplist-value 64

list-max-ziplist-entries 512

list-max-ziplist-value 64

set-max-intset-entries 512

zset-max-ziplist-entries 128

zset-max-ziplist-value 64

activerehashing yes

client-output-buffer-limit normal 0 0 0

client-output-buffer-limit slave 512mb 128mb 60

client-output-buffer-limit pubsub 32mb 8mb 60

hz 10

port 6379

maxmemory 512mb

maxmemory-policy volatile-lru

appendonly yes

appendfsync everysec

appendfilename "appendonly-6379.aof"

dbfilename "dump-6379.rdb"

aof-rewrite-incremental-fsync yes

no-appendfsync-on-rewrite yes

auto-aof-rewrite-min-size 62500kb

auto-aof-rewrite-percentage 81

rdbcompression yes

rdbchecksum yes

repl-diskless-sync no

repl-diskless-sync-delay 5

maxclients 4064

hll-sparse-max-bytes 3000

min-slaves-to-write 0

min-slaves-max-lag 10

aof-load-truncated yes

notify-keyspace-events ""

slaveof 192.168.1.101 6379

Start the master:

[root@mydb1 bin]# pwd

/usr/local/redis/bin

[root@mydb1 bin]# ./redis-server /etc/redis/6379.conf &

Start slave1:

[root@mydb2 bin]# ./redis-server /etc/redis/6379.conf &

Start slave2:

[root@mydb3 bin]# ./redis-server /etc/redis/6379.conf &

Check the master's replication info:

[root@mydb1 bin]# ./redis-cli -h 192.168.1.101 -p 6379 -a "mypass"

192.168.1.101:6379> info replication

# Replication

role:master

connected_slaves:2

slave0:ip=192.168.1.102,port=6379,state=online,offset=2507,lag=0

slave1:ip=192.168.1.103,port=6379,state=online,offset=2507,lag=0

master_repl_offset:2507

repl_backlog_active:1

repl_backlog_size:1048576

repl_backlog_first_byte_offset:2

repl_backlog_histlen:2506

Check slave1's replication info:

[root@mydb2 bin]# ./redis-cli -h 192.168.1.102 -p 6379 -a "mypass"

192.168.1.102:6379> info replication

# Replication

role:slave

master_host:192.168.1.101

master_port:6379

master_link_status:up

master_last_io_seconds_ago:4

master_sync_in_progress:0

slave_repl_offset:2619

slave_priority:100

slave_read_only:1

connected_slaves:0

master_repl_offset:0

repl_backlog_active:0

repl_backlog_size:1048576

repl_backlog_first_byte_offset:0

repl_backlog_histlen:0

Check slave2's replication info:

[root@mydb3 bin]# ./redis-cli -h 192.168.1.103 -p 6379 -a "mypass"

192.168.1.103:6379> info replication

# Replication

role:slave

master_host:192.168.1.101

master_port:6379

master_link_status:up

master_last_io_seconds_ago:6

master_sync_in_progress:0

slave_repl_offset:2703

slave_priority:100

slave_read_only:1

connected_slaves:0

master_repl_offset:0

repl_backlog_active:0

repl_backlog_size:1048576

repl_backlog_first_byte_offset:0

repl_backlog_histlen:0

Test data synchronization.

Run on the master:

192.168.1.101:6379> set name allen

OK

192.168.1.101:6379> set age 32

OK

192.168.1.101:6379> set sex male

OK

192.168.1.101:6379> set phone 13718097805

OK

192.168.1.101:6379> keys *

1) "age"

2) "name"

3) "phone"

4) "sex"

Check on slave1:

192.168.1.102:6379> keys *

1) "sex"

2) "phone"

3) "age"

4) "name"

192.168.1.102:6379> get name

"allen"

Check on slave2:

192.168.1.103:6379> keys *

1) "name"

2) "phone"

3) "sex"

4) "age"

192.168.1.103:6379> get name

"allen"

Using Redis Sentinel for Redis HA

Only a sentinel configuration file needs to be created, as follows:

/etc/redis/sentinel.conf

port 26379

# change the bind address to each host's own IP
bind 192.168.1.101

protected-mode no

dir /data

logfile /data/logs/sentinel.log

sentinel monitor mymaster 192.168.1.101 6379 1

sentinel down-after-milliseconds mymaster 30000

sentinel parallel-syncs mymaster 1

sentinel failover-timeout mymaster 18000

sentinel auth-pass mymaster mypass

Start Redis Sentinel:

[root@mydb1 bin]# pwd

/usr/local/redis/bin

[root@mydb1 bin]# ./redis-sentinel /etc/redis/sentinel.conf &

[root@mydb2 bin]# ./redis-sentinel /etc/redis/sentinel.conf &

[root@mydb3 bin]# ./redis-sentinel /etc/redis/sentinel.conf &

Test scenario 1: slave1 goes down

192.168.1.102:6379> shutdown

not connected>

192.168.1.101:6379> info Replication

# Replication

role:master

connected_slaves:1

slave0:ip=192.168.1.103,port=6379,state=online,offset=110139,lag=0

master_repl_offset:110562

repl_backlog_active:1

repl_backlog_size:1048576

repl_backlog_first_byte_offset:2

repl_backlog_histlen:110561

192.168.1.103:6379> info Replication

# Replication

role:slave

master_host:192.168.1.101

master_port:6379

master_link_status:up

master_last_io_seconds_ago:2

master_sync_in_progress:0

slave_repl_offset:113128

slave_priority:100

slave_read_only:1

connected_slaves:0

master_repl_offset:0

repl_backlog_active:0

repl_backlog_size:1048576

repl_backlog_first_byte_offset:0

repl_backlog_histlen:0

Test scenario 2: slave1 comes back

[root@mydb2 bin]# ./redis-server /etc/redis/6379.conf &

192.168.1.101:6379> info Replication

# Replication

role:master

connected_slaves:2

slave0:ip=192.168.1.103,port=6379,state=online,offset=139827,lag=1

slave1:ip=192.168.1.102,port=6379,state=online,offset=139968,lag=0

master_repl_offset:139968

repl_backlog_active:1

repl_backlog_size:1048576

repl_backlog_first_byte_offset:2

repl_backlog_histlen:139967

192.168.1.102:6379> info Replication

# Replication

role:slave

master_host:192.168.1.101

master_port:6379

master_link_status:up

master_last_io_seconds_ago:0

master_sync_in_progress:0

slave_repl_offset:147215

slave_priority:100

slave_read_only:1

connected_slaves:0

master_repl_offset:0

repl_backlog_active:0

repl_backlog_size:1048576

repl_backlog_first_byte_offset:0

repl_backlog_histlen:0

192.168.1.103:6379> info Replication

# Replication

role:slave

master_host:192.168.1.101

master_port:6379

master_link_status:up

master_last_io_seconds_ago:1

master_sync_in_progress:0

slave_repl_offset:150627

slave_priority:100

slave_read_only:1

connected_slaves:0

master_repl_offset:0

repl_backlog_active:0

repl_backlog_size:1048576

repl_backlog_first_byte_offset:0

repl_backlog_histlen:0

Test scenario 3: the master goes down

192.168.1.101:6379> shutdown

not connected>

After a short while,

observe slave1's log:

6423:S 12 Aug 10:32:21.683 # Error condition on socket for SYNC: Connection refused

6423:S 12 Aug 10:32:22.692 * Connecting to MASTER 192.168.1.101:6379

6423:S 12 Aug 10:32:22.692 * MASTER <-> SLAVE sync started

6423:S 12 Aug 10:32:22.693 # Error condition on socket for SYNC: Connection refused

6423:M 12 Aug 10:32:22.841 * Discarding previously cached master state.

6423:M 12 Aug 10:32:22.842 * MASTER MODE enabled (user request from 'id=3 addr=192.168.1.103:41106 fd=6 name=sentinel-a409bbba-cmd age=175 idle=0 flags=x db=0 sub=0 psub=0 multi=3 qbuf=0 qbuf-free=32768 obl=36 oll=0 omem=0 events=r cmd=exec')

6423:M 12 Aug 10:32:22.843 # CONFIG REWRITE executed with success.

6423:M 12 Aug 10:32:23.857 * Slave 192.168.1.103:6379 asks for synchronization

6423:M 12 Aug 10:32:23.857 * Full resync requested by slave 192.168.1.103:6379

6423:M 12 Aug 10:32:23.857 * Starting BGSAVE for SYNC with target: disk

6423:M 12 Aug 10:32:23.858 * Background saving started by pid 6434

6434:C 12 Aug 10:32:23.903 * DB saved on disk

6434:C 12 Aug 10:32:23.903 * RDB: 6 MB of memory used by copy-on-write

6423:M 12 Aug 10:32:24.002 * Background saving terminated with success

6423:M 12 Aug 10:32:24.002 * Synchronization with slave 192.168.1.103:6379 succeeded

Observe slave2's log:

6134:S 12 Aug 10:32:50.887 # Error condition on socket for SYNC: Connection refused

6134:S 12 Aug 10:32:51.895 * Connecting to MASTER 192.168.1.101:6379

6134:S 12 Aug 10:32:51.895 * MASTER <-> SLAVE sync started

6134:S 12 Aug 10:32:51.896 # Error condition on socket for SYNC: Connection refused

6134:S 12 Aug 10:32:52.665 * Discarding previously cached master state.

6134:S 12 Aug 10:32:52.665 * SLAVE OF 192.168.1.102:6379 enabled (user request from 'id=4 addr=192.168.1.103:42804 fd=7 name=sentinel-a409bbba-cmd age=264 idle=0 flags=x db=0 sub=0 psub=0 multi=3 qbuf=141 qbuf-free=32627 obl=36 oll=0 omem=0 events=r cmd=exec')

6134:S 12 Aug 10:32:52.668 # CONFIG REWRITE executed with success.

6134:S 12 Aug 10:32:52.906 * Connecting to MASTER 192.168.1.102:6379

6134:S 12 Aug 10:32:52.906 * MASTER <-> SLAVE sync started

6134:S 12 Aug 10:32:52.906 * Non blocking connect for SYNC fired the event.

6134:S 12 Aug 10:32:52.907 * Master replied to PING, replication can continue...

6134:S 12 Aug 10:32:52.908 * Partial resynchronization not possible (no cached master)

6134:S 12 Aug 10:32:52.911 * Full resync from master: 018886996edab0be5e5be9e458f1debb32b83263:1

6134:S 12 Aug 10:32:53.054 * MASTER <-> SLAVE sync: receiving 129 bytes from master

6134:S 12 Aug 10:32:53.055 * MASTER <-> SLAVE sync: Flushing old data

6134:S 12 Aug 10:32:53.055 * MASTER <-> SLAVE sync: Loading DB in memory

6134:S 12 Aug 10:32:53.055 * MASTER <-> SLAVE sync: Finished with success

Observe the sentinel's log:

6157:X 12 Aug 10:32:51.778 # +sdown master mymaster 192.168.1.101 6379

6157:X 12 Aug 10:32:51.778 # +odown master mymaster 192.168.1.101 6379 #quorum 1/1

6157:X 12 Aug 10:32:51.778 # +new-epoch 1

6157:X 12 Aug 10:32:51.778 # +try-failover master mymaster 192.168.1.101 6379

6157:X 12 Aug 10:32:51.782 # +vote-for-leader a409bbbafcd5dd0da5639afb4485d228aac95b78 1

6157:X 12 Aug 10:32:51.783 # +elected-leader master mymaster 192.168.1.101 6379

6157:X 12 Aug 10:32:51.783 # +failover-state-select-slave master mymaster 192.168.1.101 6379

6157:X 12 Aug 10:32:51.836 # +selected-slave slave 192.168.1.102:6379 192.168.1.102 6379 @ mymaster 192.168.1.101 6379

6157:X 12 Aug 10:32:51.836 * +failover-state-send-slaveof-noone slave 192.168.1.102:6379 192.168.1.102 6379 @ mymaster 192.168.1.101 6379

6157:X 12 Aug 10:32:51.893 * +failover-state-wait-promotion slave 192.168.1.102:6379 192.168.1.102 6379 @ mymaster 192.168.1.101 6379

6157:X 12 Aug 10:32:52.597 # +promoted-slave slave 192.168.1.102:6379 192.168.1.102 6379 @ mymaster 192.168.1.101 6379

6157:X 12 Aug 10:32:52.597 # +failover-state-reconf-slaves master mymaster 192.168.1.101 6379

6157:X 12 Aug 10:32:52.665 * +slave-reconf-sent slave 192.168.1.103:6379 192.168.1.103 6379 @ mymaster 192.168.1.101 6379

6157:X 12 Aug 10:32:53.229 * +slave-reconf-inprog slave 192.168.1.103:6379 192.168.1.103 6379 @ mymaster 192.168.1.101 6379

6157:X 12 Aug 10:32:53.229 * +slave-reconf-done slave 192.168.1.103:6379 192.168.1.103 6379 @ mymaster 192.168.1.101 6379

6157:X 12 Aug 10:32:53.306 # +failover-end master mymaster 192.168.1.101 6379

6157:X 12 Aug 10:32:53.306 # +switch-master mymaster 192.168.1.101 6379 192.168.1.102 6379

6157:X 12 Aug 10:32:53.306 * +slave slave 192.168.1.103:6379 192.168.1.103 6379 @ mymaster 192.168.1.102 6379

slave1 has been promoted to master:

192.168.1.102:6379> info replication

# Replication

role:master

connected_slaves:1

slave0:ip=192.168.1.103,port=6379,state=online,offset=19592,lag=1

master_repl_offset:19733

repl_backlog_active:1

repl_backlog_size:1048576

repl_backlog_first_byte_offset:2

repl_backlog_histlen:19732

The original slave2 has been repointed as a slave of the new master:

192.168.1.103:6379> info replication

# Replication

role:slave

master_host:192.168.1.102

master_port:6379

master_link_status:up

master_last_io_seconds_ago:0

master_sync_in_progress:0

slave_repl_offset:26500

slave_priority:100

slave_read_only:1

connected_slaves:0

master_repl_offset:0

repl_backlog_active:0

repl_backlog_size:1048576

repl_backlog_first_byte_offset:0

repl_backlog_histlen:0

Test scenario 4: the old master comes back

[root@mydb1 bin]# ./redis-server /etc/redis/6379.conf &

When the master dies, sentinel automatically picks one of the slaves as the new master and rewrites each Redis instance's configuration file. When the old master restarts, sentinel automatically brings it back into the topology as a slave.
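The quorum logic visible in the sentinel log (+odown ... #quorum 1/1) can be sketched: a master is objectively down once at least quorum sentinels subjectively judge it down (a didactic model; real sentinel failover additionally requires a leader election among the sentinels):

```python
def objectively_down(sdown_votes, quorum):
    """True when enough sentinels subjectively consider the master down."""
    return sdown_votes >= quorum

# With `sentinel monitor mymaster ... 1` as configured above, a single
# sentinel's subjective-down verdict is enough to trigger a failover:
print(objectively_down(sdown_votes=1, quorum=1))
```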

192.168.1.101:6379> info replication

# Replication

role:slave

master_host:192.168.1.102

master_port:6379

master_link_status:up

master_last_io_seconds_ago:1

master_sync_in_progress:0

slave_repl_offset:48093

slave_priority:100

slave_read_only:1

connected_slaves:0

master_repl_offset:0

repl_backlog_active:0

repl_backlog_size:1048576

repl_backlog_first_byte_offset:0

repl_backlog_histlen:0

192.168.1.102:6379> info replication

# Replication

role:master

connected_slaves:2

slave0:ip=192.168.1.103,port=6379,state=online,offset=49108,lag=0

slave1:ip=192.168.1.101,port=6379,state=online,offset=49108,lag=0

master_repl_offset:49108

repl_backlog_active:1

repl_backlog_size:1048576

repl_backlog_first_byte_offset:2

repl_backlog_histlen:49107

192.168.1.103:6379> info replication

# Replication

role:slave

master_host:192.168.1.102

master_port:6379

master_link_status:up

master_last_io_seconds_ago:1

master_sync_in_progress:0

slave_repl_offset:50250

slave_priority:100

slave_read_only:1

connected_slaves:0

master_repl_offset:0

repl_backlog_active:0

repl_backlog_size:1048576

repl_backlog_first_byte_offset:0

repl_backlog_histlen:0

If, after the previously failed master restarts, you want to restore the original master-slave layout, log in to a sentinel and run:

[root@mydb3 bin]# ./redis-cli -h 192.168.1.103 -p 26379

192.168.1.103:26379> sentinel failover mymaster

One master, multiple slaves: the master does not enable AOF persistence and only takes a daily RDB backup (the official recommendation is to back up the RDB file every hour), while AOF is enabled on the slaves, and a script pushes the backup files to a backup server.

When a Redis server crashes and is restarted, it restores data into memory with the following priority: if AOF is enabled, the AOF file is loaded; otherwise the RDB file is loaded.

Be careful during recovery: if the master dies, do not simply restart it, or it will overwrite the slaves' AOF files. Make sure every file you intend to restore from is correct before starting anything, or the original files will be clobbered.

Original article (in Chinese): https://www.cnblogs.com/allenhu320/p/11339840.html