Redis Cluster

Redis Cluster Configuration

Redis Cluster automatically shards the dataset across multiple nodes.

It also provides a degree of availability during partitions: the cluster can keep serving requests when some nodes fail or become unreachable from the rest of the cluster.

Every Redis node in a cluster uses two TCP ports: a client port such as 6379, and a cluster bus port at the client port plus 10000, such as 16379, used for node-to-node communication.
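If the hosts run a firewall, both ports must be open between all cluster nodes. A minimal firewalld sketch, assuming the default ports used in this walkthrough:

# firewall-cmd --permanent --add-port=6379/tcp
# firewall-cmd --permanent --add-port=16379/tcp
# firewall-cmd --reload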

Redis Cluster uses a hash slot scheme rather than consistent hashing, with 16384 hash slots in total. With three nodes, node A might hold slots 0-5500, node B slots 5501-11000, and node C slots 11001-16383. This design makes adding and removing nodes straightforward: to add a new node D, you just move some slots from A, B, and C over to D; to remove node A, you migrate A's slots to B and C. Because hash slots can move between nodes without stopping operations, adding or removing nodes and rebalancing the slot distribution can all be done online.
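You can check the live slot layout at any time from any node; CLUSTER SLOTS lists each slot range together with the node that serves it (output omitted here), and CLUSTER NODES, used throughout this walkthrough, shows the ranges appended to each node line:

192.168.1.101:6379> cluster slots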

Suppose we now have three master nodes A, B, and C. They can be three ports on one machine or three separate servers. Distributing the 16384 slots with the hash slot scheme, the three nodes cover the following ranges:

Node A covers slots 0-5460;

Node B covers slots 5461-10922;

Node C covers slots 10923-16383.

Retrieving data:

When a value is stored, Redis Cluster computes the slot as CRC16(key) % 16384; if the result is, say, 6782, the key is assigned to node B. Likewise, when you connect to any of the nodes (A, B, or C) and request that key, the same calculation is performed and the request is redirected to node B to fetch the data.
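You do not need to compute CRC16 by hand; any node reports the slot for a key via CLUSTER KEYSLOT. The slot numbers below match the MOVED/Redirected output that appears later in this walkthrough:

192.168.1.101:6379> cluster keyslot name
(integer) 5798
192.168.1.101:6379> cluster keyslot phone
(integer) 8939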

Adding a new master node:

When a new node D is added, Redis Cluster takes a portion of slots from the front of each existing node's range and moves them to D (we will try this in the hands-on part below). The layout then becomes roughly:

Node A covers 1365-5460

Node B covers 6827-10922

Node C covers 12288-16383

Node D covers 0-1364, 5461-6826, 10923-12287

Removing a node works the same way in reverse: once all of its slots have been moved to other nodes, the node can be deleted.
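Under the hood, moving one slot between two masters is a short command sequence, which redis-trib (used later) repeats for every slot in the plan. A sketch for a single slot; the node IDs, the key somekey, and the target address 192.168.1.104 are placeholders:

# on the target node: mark the slot as importing from the source
192.168.1.104:6379> cluster setslot 42 importing <source-node-id>
# on the source node: mark the slot as migrating to the target
192.168.1.101:6379> cluster setslot 42 migrating <target-node-id>
# on the source node: list the keys in the slot, then move each one
192.168.1.101:6379> cluster getkeysinslot 42 100
192.168.1.101:6379> migrate 192.168.1.104 6379 somekey 0 5000
# finally, record the new owner (send this to every master)
192.168.1.101:6379> cluster setslot 42 node <target-node-id>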

A Simple Test

Edit the configuration on the three nodes:

bind 192.168.1.101

port 6379

pidfile /var/run/redis_6379.pid

logfile /data/logs/redis.a.log

daemonize yes

cluster-enabled yes

cluster-config-file /etc/redis/nodes-6379.conf

cluster-node-timeout 15000

The configuration on 192.168.1.102 and 192.168.1.103 is identical except for the bind address (192.168.1.102 and 192.168.1.103 respectively) and the log file name (redis.b.log and redis.c.log).

# mkdir -p /data/logs

[root@mydb1 bin]# pwd

/usr/local/redis/bin

[root@mydb1 bin]# ./redis-server /etc/redis/6379.conf

[root@mydb2 bin]# ./redis-server /etc/redis/6379.conf

[root@mydb3 bin]# ./redis-server /etc/redis/6379.conf

Check the cluster state:

[root@mydb1 bin]# ./redis-cli -h 192.168.1.101 -p 6379

192.168.1.101:6379> cluster nodes

e9442bd0ca9c9254263d1b29bf214f0c9f41719d :6379@16379 myself,master - 0 0 0 connected

192.168.1.101:6379> cluster info

cluster_state:fail

cluster_slots_assigned:0

cluster_slots_ok:0

cluster_slots_pfail:0

cluster_slots_fail:0

cluster_known_nodes:1

cluster_size:0

cluster_current_epoch:0

cluster_my_epoch:0

cluster_stats_messages_sent:0

cluster_stats_messages_received:0

Link the servers together with the cluster meet command; the gossip protocol then propagates the membership to every node:

192.168.1.101:6379> cluster meet 192.168.1.102 6379

OK

192.168.1.101:6379> cluster meet 192.168.1.103 6379

OK

192.168.1.101:6379> cluster nodes

e9442bd0ca9c9254263d1b29bf214f0c9f41719d 192.168.1.101:6379@16379 myself,master - 0 1556353980000 1 connected

b9b4014a7d7c769e80fe03ac00cf9ad8c10e7141 192.168.1.103:6379@16379 master - 0 1556382571080 2 connected

eda8fd7e15a0b501f2974a2c84e9369f821a61fd 192.168.1.102:6379@16379 master - 0 1556382572090 0 connected

192.168.1.101:6379> cluster info

cluster_state:fail

cluster_slots_assigned:0

cluster_slots_ok:0

cluster_slots_pfail:0

cluster_slots_fail:0

cluster_known_nodes:3

cluster_size:0

cluster_current_epoch:2

cluster_my_epoch:1

cluster_stats_messages_ping_sent:64

cluster_stats_messages_pong_sent:66

cluster_stats_messages_meet_sent:2

cluster_stats_messages_sent:132

cluster_stats_messages_ping_received:66

cluster_stats_messages_pong_received:66

cluster_stats_messages_received:132

Assign hash slots to the servers in the cluster

Redis Cluster partitions data by key using hash slots: each key-value pair is automatically mapped to a hash slot, but which Redis node serves a given hash slot is not automatic; the cluster administrator must assign the slots. Per the source code there are 16384 hash slots in total.
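Hand-editing the node config files (as done below) works, but the documented way to assign slots to a running node is the CLUSTER ADDSLOTS command; with bash brace expansion the same layout could be applied like this (a sketch of the alternative):

[root@mydb1 bin]# ./redis-cli -h 192.168.1.101 -p 6379 cluster addslots {0..5000}
[root@mydb2 bin]# ./redis-cli -h 192.168.1.102 -p 6379 cluster addslots {5001..10000}
[root@mydb3 bin]# ./redis-cli -h 192.168.1.103 -p 6379 cluster addslots {10001..16383}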

Edit each node's nodes-6379.conf file: keep only the line marked myself, append the desired slot range to it, and delete the other records:

[root@mydb1 ~]# cat /etc/redis/nodes-6379.conf

e9442bd0ca9c9254263d1b29bf214f0c9f41719d 192.168.1.101:6379@16379 myself,master - 0 0 1 connected 0-5000

[root@mydb2 ~]# cat /etc/redis/nodes-6379.conf

eda8fd7e15a0b501f2974a2c84e9369f821a61fd 192.168.1.102:6379@16379 myself,master - 0 0 0 connected 5001-10000

[root@mydb3 ~]# cat /etc/redis/nodes-6379.conf

b9b4014a7d7c769e80fe03ac00cf9ad8c10e7141 192.168.1.103:6379@16379 myself,master - 0 0 2 connected 10001-16383

Restart the three Redis services.

Use the cluster meet command again to link the nodes:

[root@mydb1 bin]# ./redis-cli -h 192.168.1.101 -p 6379

192.168.1.101:6379> cluster meet 192.168.1.102 6379

OK

192.168.1.101:6379> cluster meet 192.168.1.103 6379

OK

192.168.1.101:6379> cluster info

cluster_state:ok

cluster_slots_assigned:16384

cluster_slots_ok:16384

cluster_slots_pfail:0

cluster_slots_fail:0

cluster_known_nodes:3

cluster_size:3

cluster_current_epoch:2

cluster_my_epoch:1

cluster_stats_messages_ping_sent:101

cluster_stats_messages_pong_sent:120

cluster_stats_messages_meet_sent:2

cluster_stats_messages_sent:223

cluster_stats_messages_ping_received:120

cluster_stats_messages_pong_received:103

cluster_stats_messages_received:223

Note that this client was started without the -c (cluster) option, so keys that hash to a slot on another node return a MOVED error instead of being followed:

192.168.1.101:6379> set name allen

(error) MOVED 5798 192.168.1.102:6379

192.168.1.101:6379> set age 32

OK

192.168.1.101:6379> set phone 13718097805

(error) MOVED 8939 192.168.1.102:6379

192.168.1.101:6379> set job mysqldba

OK

192.168.1.102:6379> set name allen

OK

192.168.1.102:6379> set phone 13718097805

OK

A Three-Master, Three-Replica Test

When building a cluster, always add a replica for every master. With masters A, B, C and replicas A1, B1, C1, the system keeps working correctly even if B goes down: B1 takes over for B, Redis Cluster elects B1 as the new master, and the cluster continues to serve requests. When B comes back online, it becomes a replica of B1.
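You can also rehearse this promotion without killing anything: CLUSTER FAILOVER, run on a replica, performs a coordinated manual failover in which the replica and its master swap roles. For example, on one of the replicas created below:

192.168.1.102:6380> cluster failover
OK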

192.168.1.101  A 6379  A1 6380

192.168.1.102  B 6379  B1 6380

192.168.1.103  C 6379  C1 6380

On all three machines:

Install ruby and ruby-devel

Install rubygems

# yum -y install ruby ruby-devel ruby-rdoc

Download rubygems-2.6.8.tgz

# tar zxvf rubygems-2.6.8.tgz

# cd rubygems-2.6.8

# ruby setup.rb

This step needs the VM to have Internet access.

# gem install redis

If you need a specific redis gem version (redis-trib.rb predates the 4.x redis gem API), you can pin 3.3.2 explicitly:

# gem list

# gem uninstall redis --version 3.3.2

# gem install redis --version 3.3.2

# gem list

Create the working directories:

[root@mydb1 ~]# mkdir -p /redis_cluster/6379

[root@mydb1 ~]# mkdir -p /redis_cluster/6380

[root@mydb2 ~]# mkdir -p /redis_cluster/6379

[root@mydb2 ~]# mkdir -p /redis_cluster/6380

[root@mydb3 ~]# mkdir -p /redis_cluster/6379

[root@mydb3 ~]# mkdir -p /redis_cluster/6380

redis.conf文件cp到这些目录中

[root@mydb1 redis]# cp 6379.conf /redis_cluster/6379/redis.conf

[root@mydb1 redis]# cp 6379.conf /redis_cluster/6380/redis.conf

[root@mydb2 redis]# cp 6379.conf /redis_cluster/6379/redis.conf

[root@mydb2 redis]# cp 6379.conf /redis_cluster/6380/redis.conf

[root@mydb3 redis]# cp 6379.conf /redis_cluster/6379/redis.conf

[root@mydb3 redis]# cp 6379.conf /redis_cluster/6380/redis.conf

Edit all six configuration files:

bind 192.168.1.101                                     # change to the node's IP

port 6379                                              # change to the node's port

pidfile /var/run/redis_6379.pid                        # rename per instance

logfile /redis_cluster/6379/redis.a.log                # rename per instance

daemonize yes

cluster-enabled yes

cluster-config-file /redis_cluster/6379/nodes-6379.conf     # rename per instance

cluster-node-timeout 15000

Start all of the nodes:

[root@mydb1 bin]# ./redis-server /redis_cluster/6379/redis.conf

[root@mydb1 bin]# ./redis-server /redis_cluster/6380/redis.conf

[root@mydb2 bin]# ./redis-server /redis_cluster/6379/redis.conf

[root@mydb2 bin]# ./redis-server /redis_cluster/6380/redis.conf

[root@mydb3 bin]# ./redis-server /redis_cluster/6379/redis.conf

[root@mydb3 bin]# ./redis-server /redis_cluster/6380/redis.conf

Create the cluster:

[root@mydb1 bin]# ./redis-trib.rb create --replicas 1 192.168.1.101:6379 192.168.1.101:6380 192.168.1.102:6379 192.168.1.102:6380 192.168.1.103:6379 192.168.1.103:6380

>>> Creating cluster

>>> Performing hash slots allocation on 6 nodes...

Using 3 masters:

192.168.1.101:6379

192.168.1.102:6379

192.168.1.103:6379

Adding replica 192.168.1.102:6380 to 192.168.1.101:6379

Adding replica 192.168.1.103:6380 to 192.168.1.102:6379

Adding replica 192.168.1.101:6380 to 192.168.1.103:6379

M: 42a08452328aad56cd8f57b355cb8cb063fe912b 192.168.1.101:6379

   slots:0-5460 (5461 slots) master

S: 6048f2a9137e16f7d15bd1081995143864399e5b 192.168.1.101:6380

   replicates e84a386ef15c42cde124cb288c4e1a9b517cb461

M: bb4ade0eeb0df0994ab5fb5d111933e941d462c6 192.168.1.102:6379

   slots:5461-10922 (5462 slots) master

S: 13bf283128a23b06d31a4dd2b7ce42def0b98666 192.168.1.102:6380

   replicates 42a08452328aad56cd8f57b355cb8cb063fe912b

M: e84a386ef15c42cde124cb288c4e1a9b517cb461 192.168.1.103:6379

   slots:10923-16383 (5461 slots) master

S: cc30fe66c7f4d5696cf20fb729cbca12beaa5b18 192.168.1.103:6380

   replicates bb4ade0eeb0df0994ab5fb5d111933e941d462c6

Can I set the above configuration? (type 'yes' to accept): yes

>>> Nodes configuration updated

>>> Assign a different config epoch to each node

>>> Sending CLUSTER MEET messages to join the cluster

Waiting for the cluster to join...

>>> Performing Cluster Check (using node 192.168.1.101:6379)

M: 42a08452328aad56cd8f57b355cb8cb063fe912b 192.168.1.101:6379

   slots:0-5460 (5461 slots) master

   1 additional replica(s)

S: 6048f2a9137e16f7d15bd1081995143864399e5b 192.168.1.101:6380

   slots: (0 slots) slave

   replicates e84a386ef15c42cde124cb288c4e1a9b517cb461

M: bb4ade0eeb0df0994ab5fb5d111933e941d462c6 192.168.1.102:6379

   slots:5461-10922 (5462 slots) master

   1 additional replica(s)

S: 13bf283128a23b06d31a4dd2b7ce42def0b98666 192.168.1.102:6380

   slots: (0 slots) slave

   replicates 42a08452328aad56cd8f57b355cb8cb063fe912b

M: e84a386ef15c42cde124cb288c4e1a9b517cb461 192.168.1.103:6379

   slots:10923-16383 (5461 slots) master

   1 additional replica(s)

S: cc30fe66c7f4d5696cf20fb729cbca12beaa5b18 192.168.1.103:6380

   slots: (0 slots) slave

   replicates bb4ade0eeb0df0994ab5fb5d111933e941d462c6

[OK] All nodes agree about slots configuration.

>>> Check for open slots...

>>> Check slots coverage...

[OK] All 16384 slots covered.
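Note that redis-trib.rb was folded into redis-cli in Redis 5; on newer versions the equivalent command would be:

[root@mydb1 bin]# ./redis-cli --cluster create 192.168.1.101:6379 192.168.1.101:6380 192.168.1.102:6379 192.168.1.102:6380 192.168.1.103:6379 192.168.1.103:6380 --cluster-replicas 1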

Check the cluster state:

[root@mydb1 bin]# ./redis-trib.rb check 192.168.1.101:6379

>>> Performing Cluster Check (using node 192.168.1.101:6379)

M: 42a08452328aad56cd8f57b355cb8cb063fe912b 192.168.1.101:6379

   slots:0-5460 (5461 slots) master

   1 additional replica(s)

S: 6048f2a9137e16f7d15bd1081995143864399e5b 192.168.1.101:6380

   slots: (0 slots) slave

   replicates e84a386ef15c42cde124cb288c4e1a9b517cb461

M: bb4ade0eeb0df0994ab5fb5d111933e941d462c6 192.168.1.102:6379

   slots:5461-10922 (5462 slots) master

   1 additional replica(s)

S: 13bf283128a23b06d31a4dd2b7ce42def0b98666 192.168.1.102:6380

   slots: (0 slots) slave

   replicates 42a08452328aad56cd8f57b355cb8cb063fe912b

M: e84a386ef15c42cde124cb288c4e1a9b517cb461 192.168.1.103:6379

   slots:10923-16383 (5461 slots) master

   1 additional replica(s)

S: cc30fe66c7f4d5696cf20fb729cbca12beaa5b18 192.168.1.103:6380

   slots: (0 slots) slave

   replicates bb4ade0eeb0df0994ab5fb5d111933e941d462c6

[OK] All nodes agree about slots configuration.

>>> Check for open slots...

>>> Check slots coverage...

[OK] All 16384 slots covered.

Test with some data (this time the client is started with -c, so redirections are followed automatically):

[root@mydb1 bin]# ./redis-cli -h 192.168.1.101 -c -p 6379

192.168.1.101:6379> set name allen

-> Redirected to slot [5798] located at 192.168.1.102:6379

OK

[root@mydb2 bin]# ./redis-cli -h 192.168.1.102 -c -p 6379

192.168.1.102:6379> set age 32

-> Redirected to slot [741] located at 192.168.1.101:6379

OK

[root@mydb3 bin]# ./redis-cli -h 192.168.1.103 -c -p 6379

192.168.1.103:6379> set phone 13718097805

-> Redirected to slot [8939] located at 192.168.1.102:6379

OK

[root@mydb1 bin]# ./redis-cli -h 192.168.1.101 -c -p 6379

192.168.1.101:6379> get name

-> Redirected to slot [5798] located at 192.168.1.102:6379

"allen"

192.168.1.102:6379> get age

-> Redirected to slot [741] located at 192.168.1.101:6379

"32"

192.168.1.101:6379> get phone

-> Redirected to slot [8939] located at 192.168.1.102:6379

"13718097805"

Remove a slave node:

[root@mydb1 bin]# ./redis-trib.rb del-node 192.168.1.103:6380 'cc30fe66c7f4d5696cf20fb729cbca12beaa5b18'

>>> Removing node cc30fe66c7f4d5696cf20fb729cbca12beaa5b18 from cluster 192.168.1.103:6380

>>> Sending CLUSTER FORGET messages to the cluster...

>>> SHUTDOWN the node.

Add a new slave node

Before adding the node, delete its local persistence files (the aof and rdb files) and its old cluster config file.
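A sketch of that cleanup, assuming the rdb/aof files sit in redis-server's working directory and the cluster config path follows the template above:

[root@mydb3 bin]# rm -f /redis_cluster/6380/nodes-6380.conf
[root@mydb3 bin]# rm -f dump.rdb appendonly.aof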

Generate the corresponding configuration file:

/redis_cluster/6380/redis.conf

Start the node:

[root@mydb3 bin]# ./redis-server /redis_cluster/6380/redis.conf

Join the empty node to the cluster (again, the old slots/nodes file must be deleted first):

[root@mydb1 bin]# ./redis-trib.rb add-node 192.168.1.103:6380 192.168.1.102:6379

>>> Adding node 192.168.1.103:6380 to cluster 192.168.1.102:6379

>>> Performing Cluster Check (using node 192.168.1.102:6379)

M: cf6149a0f088098c3b88f2395146e713c8d152bb 192.168.1.102:6379

   slots:5461-10922 (5462 slots) master

   0 additional replica(s)

S: 63a51b06b2ad695acc29f27558004c82afd7c2cc 192.168.1.103:6379

   slots: (0 slots) slave

   replicates c3ebcc0f01d3e788d90b3f16dc4bd15204ddd9e9

S: c44d256ed6723a92b779455d2ef80099ca0cc4ae 192.168.1.102:6380

   slots: (0 slots) slave

   replicates 5cfa14086bf8f4da0b8bb94b0006565f7bdb0834

M: c3ebcc0f01d3e788d90b3f16dc4bd15204ddd9e9 192.168.1.101:6380

   slots:10923-16383 (5461 slots) master

   1 additional replica(s)

M: 5cfa14086bf8f4da0b8bb94b0006565f7bdb0834 192.168.1.101:6379

   slots:0-5460 (5461 slots) master

   1 additional replica(s)

[OK] All nodes agree about slots configuration.

>>> Check for open slots...

>>> Check slots coverage...

[OK] All 16384 slots covered.

>>> Send CLUSTER MEET to node 192.168.1.103:6380 to make it join the cluster.

[OK] New node added correctly.

Establish the master-replica relationship:

[root@mydb3 bin]# ./redis-cli -h 192.168.1.103 -c -p 6380

192.168.1.103:6380> cluster replicate cf6149a0f088098c3b88f2395146e713c8d152bb

OK
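The add-node and replicate steps can also be collapsed into one: redis-trib.rb add-node accepts --slave and --master-id. A sketch using the master ID from above:

[root@mydb1 bin]# ./redis-trib.rb add-node --slave --master-id cf6149a0f088098c3b88f2395146e713c8d152bb 192.168.1.103:6380 192.168.1.102:6379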

Reshard data online:

[root@mydb1 bin]# ./redis-trib.rb reshard 192.168.1.101:6379

>>> Performing Cluster Check (using node 192.168.1.101:6379)

M: 5cfa14086bf8f4da0b8bb94b0006565f7bdb0834 192.168.1.101:6379

   slots:0-5460 (5461 slots) master

   1 additional replica(s)

S: c44d256ed6723a92b779455d2ef80099ca0cc4ae 192.168.1.102:6380

   slots: (0 slots) slave

   replicates 5cfa14086bf8f4da0b8bb94b0006565f7bdb0834

M: c3ebcc0f01d3e788d90b3f16dc4bd15204ddd9e9 192.168.1.101:6380

   slots:10923-16383 (5461 slots) master

   1 additional replica(s)

M: cf6149a0f088098c3b88f2395146e713c8d152bb 192.168.1.102:6379

   slots:5461-10922 (5462 slots) master

   1 additional replica(s)

S: 63a51b06b2ad695acc29f27558004c82afd7c2cc 192.168.1.103:6379

   slots: (0 slots) slave

   replicates c3ebcc0f01d3e788d90b3f16dc4bd15204ddd9e9

S: a1f5874052d1d39b5ef757a52edf3892f5f2c4c1 192.168.1.103:6380

   slots: (0 slots) slave

   replicates cf6149a0f088098c3b88f2395146e713c8d152bb

[OK] All nodes agree about slots configuration.

>>> Check for open slots...

>>> Check slots coverage...

[OK] All 16384 slots covered.

How many slots do you want to move (from 1 to 16384)? 1000

What is the receiving node ID? cf6149a0f088098c3b88f2395146e713c8d152bb

Please enter all the source node IDs.

  Type 'all' to use all the nodes as source nodes for the hash slots.

  Type 'done' once you entered all the source nodes IDs.

Source node #1:all

Do you want to proceed with the proposed reshard plan (yes/no)? yes

[root@mydb1 bin]# ./redis-trib.rb check 192.168.1.101:6379

>>> Performing Cluster Check (using node 192.168.1.101:6379)

M: 5cfa14086bf8f4da0b8bb94b0006565f7bdb0834 192.168.1.101:6379

   slots:500-5460 (4961 slots) master

   1 additional replica(s)

S: c44d256ed6723a92b779455d2ef80099ca0cc4ae 192.168.1.102:6380

   slots: (0 slots) slave

   replicates 5cfa14086bf8f4da0b8bb94b0006565f7bdb0834

M: c3ebcc0f01d3e788d90b3f16dc4bd15204ddd9e9 192.168.1.101:6380

   slots:11423-16383 (4961 slots) master

   1 additional replica(s)

M: cf6149a0f088098c3b88f2395146e713c8d152bb 192.168.1.102:6379

   slots:0-499,5461-11422 (6462 slots) master

   1 additional replica(s)

S: 63a51b06b2ad695acc29f27558004c82afd7c2cc 192.168.1.103:6379

   slots: (0 slots) slave

   replicates c3ebcc0f01d3e788d90b3f16dc4bd15204ddd9e9

S: a1f5874052d1d39b5ef757a52edf3892f5f2c4c1 192.168.1.103:6380

   slots: (0 slots) slave

   replicates cf6149a0f088098c3b88f2395146e713c8d152bb

[OK] All nodes agree about slots configuration.

>>> Check for open slots...

>>> Check slots coverage...

[OK] All 16384 slots covered.
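The same reshard can also be driven non-interactively; redis-trib.rb accepts the plan on the command line. A sketch reproducing the move above:

[root@mydb1 bin]# ./redis-trib.rb reshard --from all --to cf6149a0f088098c3b88f2395146e713c8d152bb --slots 1000 --yes 192.168.1.101:6379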

Remove a master node

First move all of this node's slots to the other master nodes, then delete the master:

[root@mydb1 bin]# ./redis-trib.rb reshard 192.168.1.101:6379

>>> Performing Cluster Check (using node 192.168.1.101:6379)

M: 5cfa14086bf8f4da0b8bb94b0006565f7bdb0834 192.168.1.101:6379

   slots:500-5460 (4961 slots) master

   1 additional replica(s)

S: c44d256ed6723a92b779455d2ef80099ca0cc4ae 192.168.1.102:6380

   slots: (0 slots) slave

   replicates 5cfa14086bf8f4da0b8bb94b0006565f7bdb0834

M: c3ebcc0f01d3e788d90b3f16dc4bd15204ddd9e9 192.168.1.101:6380

   slots:11423-16383 (4961 slots) master

   1 additional replica(s)

M: cf6149a0f088098c3b88f2395146e713c8d152bb 192.168.1.102:6379

   slots:0-499,5461-11422 (6462 slots) master

   1 additional replica(s)

S: 63a51b06b2ad695acc29f27558004c82afd7c2cc 192.168.1.103:6379

   slots: (0 slots) slave

   replicates c3ebcc0f01d3e788d90b3f16dc4bd15204ddd9e9

S: a1f5874052d1d39b5ef757a52edf3892f5f2c4c1 192.168.1.103:6380

   slots: (0 slots) slave

   replicates cf6149a0f088098c3b88f2395146e713c8d152bb

[OK] All nodes agree about slots configuration.

>>> Check for open slots...

>>> Check slots coverage...

[OK] All 16384 slots covered.

How many slots do you want to move (from 1 to 16384)? 4961

What is the receiving node ID? cf6149a0f088098c3b88f2395146e713c8d152bb

Please enter all the source node IDs.

  Type 'all' to use all the nodes as source nodes for the hash slots.

  Type 'done' once you entered all the source nodes IDs.

Source node #1:5cfa14086bf8f4da0b8bb94b0006565f7bdb0834

Source node #2:done

Do you want to proceed with the proposed reshard plan (yes/no)? yes

[root@mydb1 bin]# ./redis-trib.rb check 192.168.1.101:6379

>>> Performing Cluster Check (using node 192.168.1.101:6379)

M: 5cfa14086bf8f4da0b8bb94b0006565f7bdb0834 192.168.1.101:6379

   slots: (0 slots) master

   0 additional replica(s)

S: c44d256ed6723a92b779455d2ef80099ca0cc4ae 192.168.1.102:6380

   slots: (0 slots) slave

   replicates c3ebcc0f01d3e788d90b3f16dc4bd15204ddd9e9

M: c3ebcc0f01d3e788d90b3f16dc4bd15204ddd9e9 192.168.1.101:6380

   slots:5191-5460,11423-16383 (5231 slots) master

   2 additional replica(s)

M: cf6149a0f088098c3b88f2395146e713c8d152bb 192.168.1.102:6379

   slots:0-5190,5461-11422 (11153 slots) master

   1 additional replica(s)

S: 63a51b06b2ad695acc29f27558004c82afd7c2cc 192.168.1.103:6379

   slots: (0 slots) slave

   replicates c3ebcc0f01d3e788d90b3f16dc4bd15204ddd9e9

S: a1f5874052d1d39b5ef757a52edf3892f5f2c4c1 192.168.1.103:6380

   slots: (0 slots) slave

   replicates cf6149a0f088098c3b88f2395146e713c8d152bb

[OK] All nodes agree about slots configuration.

>>> Check for open slots...

>>> Check slots coverage...

[OK] All 16384 slots covered.

[root@mydb1 bin]# ./redis-trib.rb del-node 192.168.1.101:6379 '5cfa14086bf8f4da0b8bb94b0006565f7bdb0834'

>>> Removing node 5cfa14086bf8f4da0b8bb94b0006565f7bdb0834 from cluster 192.168.1.101:6379

>>> Sending CLUSTER FORGET messages to the cluster...

>>> SHUTDOWN the node.

Add a new master node

As before, delete the node's local persistence files (aof, rdb) and its old cluster config file first.

Generate the corresponding configuration file:

/redis_cluster/6379/redis.conf

Start the node:

[root@mydb1 bin]# ./redis-server /redis_cluster/6379/redis.conf

Join the empty node to the cluster:

[root@mydb1 bin]# ./redis-trib.rb add-node 192.168.1.101:6379 192.168.1.102:6379  # the new node's address, then any existing cluster node's address
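On Redis 5 and later the equivalent would be:

[root@mydb1 bin]# ./redis-cli --cluster add-node 192.168.1.101:6379 192.168.1.102:6379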

[root@mydb1 bin]# ./redis-trib.rb check 192.168.1.101:6379

>>> Performing Cluster Check (using node 192.168.1.101:6379)

M: b4f4e3c93d57350c5a58f4cab27958fe598d1911 192.168.1.101:6379

   slots: (0 slots) master

   0 additional replica(s)

S: 63a51b06b2ad695acc29f27558004c82afd7c2cc 192.168.1.103:6379

   slots: (0 slots) slave

   replicates c3ebcc0f01d3e788d90b3f16dc4bd15204ddd9e9

S: c44d256ed6723a92b779455d2ef80099ca0cc4ae 192.168.1.102:6380

   slots: (0 slots) slave

   replicates c3ebcc0f01d3e788d90b3f16dc4bd15204ddd9e9

M: cf6149a0f088098c3b88f2395146e713c8d152bb 192.168.1.102:6379

   slots:0-5190,5461-11422 (11153 slots) master

   1 additional replica(s)

S: a1f5874052d1d39b5ef757a52edf3892f5f2c4c1 192.168.1.103:6380

   slots: (0 slots) slave

   replicates cf6149a0f088098c3b88f2395146e713c8d152bb

M: c3ebcc0f01d3e788d90b3f16dc4bd15204ddd9e9 192.168.1.101:6380

   slots:5191-5460,11423-16383 (5231 slots) master

   2 additional replica(s)

[OK] All nodes agree about slots configuration.

>>> Check for open slots...

>>> Check slots coverage...

[OK] All 16384 slots covered.

[root@mydb1 bin]# ./redis-trib.rb reshard 192.168.1.102:6379

>>> Performing Cluster Check (using node 192.168.1.102:6379)

M: cf6149a0f088098c3b88f2395146e713c8d152bb 192.168.1.102:6379

   slots:0-5190,5461-11422 (11153 slots) master

   1 additional replica(s)

S: a1f5874052d1d39b5ef757a52edf3892f5f2c4c1 192.168.1.103:6380

   slots: (0 slots) slave

   replicates cf6149a0f088098c3b88f2395146e713c8d152bb

S: 63a51b06b2ad695acc29f27558004c82afd7c2cc 192.168.1.103:6379

   slots: (0 slots) slave

   replicates c3ebcc0f01d3e788d90b3f16dc4bd15204ddd9e9

S: c44d256ed6723a92b779455d2ef80099ca0cc4ae 192.168.1.102:6380

   slots: (0 slots) slave

   replicates c3ebcc0f01d3e788d90b3f16dc4bd15204ddd9e9

M: c3ebcc0f01d3e788d90b3f16dc4bd15204ddd9e9 192.168.1.101:6380

   slots:5191-5460,11423-16383 (5231 slots) master

   2 additional replica(s)

M: b4f4e3c93d57350c5a58f4cab27958fe598d1911 192.168.1.101:6379

   slots: (0 slots) master

   0 additional replica(s)

[OK] All nodes agree about slots configuration.

>>> Check for open slots...

>>> Check slots coverage...

[OK] All 16384 slots covered.

How many slots do you want to move (from 1 to 16384)? 4000

What is the receiving node ID? b4f4e3c93d57350c5a58f4cab27958fe598d1911

Please enter all the source node IDs.

  Type 'all' to use all the nodes as source nodes for the hash slots.

  Type 'done' once you entered all the source nodes IDs.

Source node #1:cf6149a0f088098c3b88f2395146e713c8d152bb

Source node #2:done

Do you want to proceed with the proposed reshard plan (yes/no)? yes

[root@mydb1 bin]# ./redis-trib.rb check 192.168.1.101:6379

>>> Performing Cluster Check (using node 192.168.1.101:6379)

M: b4f4e3c93d57350c5a58f4cab27958fe598d1911 192.168.1.101:6379

   slots:0-3999 (4000 slots) master

   1 additional replica(s)

S: 63a51b06b2ad695acc29f27558004c82afd7c2cc 192.168.1.103:6379

   slots: (0 slots) slave

   replicates b4f4e3c93d57350c5a58f4cab27958fe598d1911

S: c44d256ed6723a92b779455d2ef80099ca0cc4ae 192.168.1.102:6380

   slots: (0 slots) slave

   replicates c3ebcc0f01d3e788d90b3f16dc4bd15204ddd9e9

M: cf6149a0f088098c3b88f2395146e713c8d152bb 192.168.1.102:6379

   slots:4000-5190,5461-11422 (7153 slots) master

   1 additional replica(s)

S: a1f5874052d1d39b5ef757a52edf3892f5f2c4c1 192.168.1.103:6380

   slots: (0 slots) slave

   replicates cf6149a0f088098c3b88f2395146e713c8d152bb

M: c3ebcc0f01d3e788d90b3f16dc4bd15204ddd9e9 192.168.1.101:6380

   slots:5191-5460,11423-16383 (5231 slots) master

   1 additional replica(s)

[OK] All nodes agree about slots configuration.

>>> Check for open slots...

>>> Check slots coverage...

[OK] All 16384 slots covered.

Original article: https://www.cnblogs.com/allenhu320/p/11339865.html