
Redis clustering solutions and a Redis Cluster deployment


1: Redis high availability and clustering can each be achieved in several ways. High availability can be provided by Sentinel or by redis master/slave replication plus keepalived; clustering can be done with client-side sharding, a proxy layer, Redis Cluster, or Codis. Each approach has its own pros and cons. The options and a concrete implementation follow:

1.1: Client-side sharding:

mysql, memcached, and redis can all be sharded on the client side (for mysql, the client can also split databases and tables this way). The client hashes each key and stores it on the redis server selected by the hash value; reads go back to the same location. The advantages are flexibility and no single point of failure; the drawbacks are that adding a node requires changing the sharding algorithm and migrating data by hand. For caching, client-side sharding fits memcached best, since a cache can tolerate losing part of its data, and the author notes memcached can be clustered for data synchronization.
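As a sketch, client-side sharding can be as simple as hashing the key modulo the server count (the shard names below are hypothetical; any stable hash works). It also demonstrates the drawback above: adding a node remaps most keys, which is why the data must then be migrated by hand:

```python
import zlib

# Hypothetical shard list; in practice each entry would be a redis/memcached connection.
SHARDS = ["redis-a:6379", "redis-b:6379", "redis-c:6379"]

def shard_for(key, shards=SHARDS):
    """Pick a server for a key: hash the key, then take it modulo the shard count."""
    return shards[zlib.crc32(key.encode()) % len(shards)]

# The same key always lands on the same server, so reads find what writes stored.
assert shard_for("user:1001") == shard_for("user:1001")

# But after adding a fourth node, most keys map to a different server:
grown = SHARDS + ["redis-d:6379"]
moved = sum(shard_for("key:%d" % i) != shard_for("key:%d" % i, grown)
            for i in range(1000))
print(moved, "of 1000 keys moved")
```

Consistent hashing reduces the remapped fraction to roughly 1/n, which is why memcached client libraries commonly use it instead of plain modulo.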
1.2: Redis Cluster:

Supported since version 3.0, Redis Cluster is decentralized (and can lose data in certain failure scenarios). It also shards data to a particular redis server by an algorithm, so the client no longer computes on its own which server a key lives on; instead, each redis server is assigned its slot range ahead of time. For example, redis A handles hash slots 0-5000, redis B handles 5001-10000, and redis C handles 10001-16383 (there are 16384 slots in total, numbered 0-16383). Redis Cluster requires a client that speaks the cluster protocol, and at the time of writing good client support was still limited.

Distributing the hash slots across nodes makes it easy to add or remove nodes. For example:
To add a new node D to the cluster, the cluster only needs to move some of the slots on nodes A, B, and C over to node D.
Similarly, to remove node A, the cluster moves all of node A's hash slots to nodes B and C, then removes the now-empty node A (which no longer holds any hash slots).
Because moving a hash slot from one node to another does not block either node, adding nodes, removing nodes, or changing how many slots a node holds never takes the cluster offline.
Redis Cluster needs a cluster-aware client; the plain redis module in Python, for instance, cannot be used against a cluster by itself. Official client options were scarce at the time, so you may have to develop your own.
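The slot a key maps to is computed as CRC16(key) mod 16384, where CRC16 is the XMODEM variant (polynomial 0x1021, initial value 0); this is exactly what a cluster-aware client must implement, and what produces the MOVED redirects shown later in section 2.4.3. A minimal pure-Python version:

```python
def crc16_xmodem(data):
    """CRC16/XMODEM (poly 0x1021, init 0), the checksum Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = (crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def key_slot(key):
    """Hash slot a key maps to: CRC16(key) mod 16384."""
    return crc16_xmodem(key.encode()) % 16384

assert crc16_xmodem(b"123456789") == 0x31C3  # standard XMODEM check value
print(key_slot("k1"))  # 12706, matching the MOVED redirect for k1 in section 2.4.3
```

Redis additionally hashes only the substring between `{` and `}` when a key contains one (a "hash tag"), so related keys can be forced into the same slot.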

1.3: Proxy:

Twemproxy, for example, puts a proxy between clients and servers to perform the sharding, and suits cache scenarios where data loss is acceptable; it also supports memcached, and the sharding algorithm is configurable per proxy. The drawbacks are that twemproxy itself becomes the bottleneck and that it does not support data migration. Official GitHub: https://github.com/twitter/twemproxy/

1.4: Codis: an open-source solution from Wandoujia (Pea Pod), and at the time one of the more stable redis clustering options.

Official Codis GitHub: https://github.com/CodisLabs/codis

Existing deployments can migrate to codis seamlessly.

It supports dynamic scale-out and scale-in.

It is fully transparent to applications; the business code does not know it is running on codis.

It uses multiple CPU cores, whereas twemproxy is single-core only.

codis is a centralized, proxy-based design: clients talk to the proxy exactly as they would to a single redis instance.

Some commands are not supported, such as keys *.

It supports group partitioning; each group can have one master and several slaves, with sentinel monitoring the replication so that a slave is promoted automatically when the master goes down.

The number of proxy processes should be at most the number of CPU cores, never more.

It is built on zookeeper, which stores the mapping of keys to redis hosts, so zookeeper itself must be made highly available.

Monitoring is available through an API and a dashboard.

tidb is a distributed mysql-compatible database from the same team; GitHub: https://github.com/pingcap/tidb

2: Implementing Redis Cluster:

2.1: Environment:

OS: CentOS 7.2.1511

Servers: 2, each running 8 redis instances

redis version: 3.2.6

2.2: Deploy the redis cluster and add hosts to it dynamically:

2.2.1: Download and install redis on each server:

# cd /opt
# wget http://download.redis.io/releases/redis-3.2.6.tar.gz
# tar xvf redis-3.2.6.tar.gz
# ln -sv /opt/redis-3.2.6 /usr/local/redis
# cd /usr/local/redis/
# make && make install

2.2.2: Each server needs 6 redis instances for the cluster itself, giving 3 masters and 3 slaves, plus 2 more instances (one master, one slave) reserved for adding hosts to the cluster dynamically later. That makes 8 redis instances per server, so 8 separate redis config files must be prepared:

[root@redis1 redis]# pwd
/usr/local/redis

# mkdir   conf.d && cd conf.d

# mkdir `seq 6381  6388`

# cp /usr/local/redis/redis.conf  /usr/local/redis/conf.d/6381/

# vim /usr/local/redis/conf.d/6381/redis.conf #mainly change the following settings, giving each instance its own port, PID file, log file, and persistence directory:

bind  0.0.0.0
port 6381
daemonize yes
pidfile /var/run/redis_6381.pid
loglevel notice
logfile "/usr/local/redis/conf.d/6381/6381.log"
dir /usr/local/redis/conf.d/6381/
maxmemory 512M
appendonly yes
appendfilename "6381.aof"
appendfsync everysec
cluster-enabled yes #cluster mode must be enabled, otherwise cluster creation fails
cluster-config-file  6381.conf #one per instance, created and managed by the cluster itself; the name must be unique within the cluster

2.2.3: Batch-generate the redis config files on each server:

# cp /usr/local/redis/conf.d/6381/redis.conf  /opt/  #copy the config as a template, then batch-generate the redis configs with sed
# sed 's/6381/6382/g' /opt/redis.conf  >> /usr/local/redis/conf.d/6382/redis.conf
# sed 's/6381/6383/g' /opt/redis.conf  >> /usr/local/redis/conf.d/6383/redis.conf
# sed 's/6381/6384/g' /opt/redis.conf  >> /usr/local/redis/conf.d/6384/redis.conf
# sed 's/6381/6385/g' /opt/redis.conf  >> /usr/local/redis/conf.d/6385/redis.conf
# sed 's/6381/6386/g' /opt/redis.conf  >> /usr/local/redis/conf.d/6386/redis.conf
# sed 's/6381/6387/g' /opt/redis.conf  >> /usr/local/redis/conf.d/6387/redis.conf
# sed 's/6381/6388/g' /opt/redis.conf  >> /usr/local/redis/conf.d/6388/redis.conf
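The same template substitution can also be scripted; a small Python sketch of the sed loop above (the generate() helper and its arguments are illustrative, mirroring the paths used here):

```python
from pathlib import Path

TEMPLATE_PORT = "6381"  # the port used throughout the template config

def render_config(template, port):
    """Equivalent of `sed 's/6381/PORT/g'`: swap every occurrence of the template port."""
    return template.replace(TEMPLATE_PORT, str(port))

def generate(template_path, out_root, ports=range(6382, 6389)):
    """Write one redis.conf per port under out_root/<port>/."""
    template = Path(template_path).read_text()
    for port in ports:
        out_dir = Path(out_root) / str(port)
        out_dir.mkdir(parents=True, exist_ok=True)
        (out_dir / "redis.conf").write_text(render_config(template, port))
```

Calling, say, generate("/opt/redis.conf", "/usr/local/redis/conf.d") would reproduce the seven sed commands above in one step.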

2.2.4: Start all 8 redis instances on each server and verify the ports are listening:

# for i in `seq 6381 6388`;do  /usr/local/redis/src/redis-server /usr/local/redis/conf.d/$i/redis.conf;done
# ss -tnl | grep 638  #each instance listens on its port plus a cluster bus port at port+10000 (e.g. 16381)

2.2.5: Verify that redis-cli can connect to the instances:

# redis-cli  -h 192.168.10.101 -p 6388
192.168.10.101:6388> 

2.3: Install the ruby management tool:

2.3.1: Configure the ruby gem source and install the redis management gem:

# yum install ruby rubygems -y
# gem install redis 
# gem sources -l  #the current source is the official one hosted abroad; it is slow and often unreachable from here, so remove it and add a domestic mirror
*** CURRENT SOURCES ***
https://rubygems.org/

# gem sources -r https://rubygems.org/  #remove the default gem source
# gem sources --add https://ruby.taobao.org/   #add the taobao mirror (this mirror has since been retired; https://gems.ruby-china.com/ is its successor)
# gem sources -l #verify the source was replaced
*** CURRENT SOURCES ***
https://ruby.taobao.org/
# gem sources -u  #update the source cache
# gem install redis #install the redis gem
Fetching: redis-3.3.2.gem (100%)
Successfully installed redis-3.3.2
Parsing documentation for redis-3.3.2
Installing ri documentation for redis-3.3.2
1 gem installed

2.3.2: Copy the ruby management script:

# cp /usr/local/redis/src/redis-trib.rb  /usr/local/bin/redis-trib

2.3.3: redis-trib command overview:

# redis-trib  help
Usage: redis-trib <command> <options> <arguments ...>

  help            (show this help)
  del-node        host:port node_id #remove a node
  reshard         host:port #reshard slots
                  --timeout <arg>
                  --pipeline <arg>
                  --slots <arg>
                  --to <arg>
                  --yes
                  --from <arg>
  fix             host:port
                  --timeout <arg>
  create          host1:port1 ... hostN:portN #create a cluster
                  --replicas <arg>
  rebalance       host:port
                  --timeout <arg>
                  --simulate
                  --pipeline <arg>
                  --threshold <arg>
                  --use-empty-masters
                  --auto-weights
                  --weight <arg>
  call            host:port command arg arg .. arg
  add-node        new_host:new_port existing_host:existing_port #add a node
                  --slave
                  --master-id <arg>
  check           host:port #check a node
  import          host:port
                  --replace
                  --copy
                  --from <arg>
  set-timeout     host:port milliseconds
  info            host:port

2.4: Deploy a redis cluster on a single machine:

2.4.1: Create the redis cluster:

# redis-trib  create --replicas 1  192.168.10.101:6381  192.168.10.101:6382 192.168.10.101:6383 192.168.10.101:6384 192.168.10.101:6385 192.168.10.101:6386
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.10.101:6381
192.168.10.101:6382
192.168.10.101:6383
Adding replica 192.168.10.101:6384 to 192.168.10.101:6381 #the first three nodes are masters, the last three are slaves
Adding replica 192.168.10.101:6385 to 192.168.10.101:6382
Adding replica 192.168.10.101:6386 to 192.168.10.101:6383
M: fc3b44c6d18abbf7191338a8a7fafdc516b6d758 192.168.10.101:6381 #master
   slots:0-5460 (5461 slots) master #this master's slot range
M: b3f5ba5a3b1f53e358f438c923f9591055510b96 192.168.10.101:6382 #master
   slots:5461-10922 (5462 slots) master #this master's slot range
M: 8fe3c52b352a1209dfc3b9f2f9fa1be9bc2e69a5 192.168.10.101:6383 #master
   slots:10923-16383 (5461 slots) master #this master's slot range
S: b786bd3a567634a7087aa289714063ed6e53bb47 192.168.10.101:6384 #slave
   replicates fc3b44c6d18abbf7191338a8a7fafdc516b6d758
S: 7d2da4da536003b2c25c8599526c1c937d071e1e 192.168.10.101:6385 #slave
   replicates b3f5ba5a3b1f53e358f438c923f9591055510b96
S: cc76054bf257f9bbfec868b86880e22f04fab96e 192.168.10.101:6386 #slave; the last three nodes replicate the first three
   replicates 8fe3c52b352a1209dfc3b9f2f9fa1be9bc2e69a5
Can I set the above configuration? (type 'yes' to accept): yes #type yes to continue, no to abort
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join......
>>> Performing Cluster Check (using node 192.168.10.101:6381)
M: fc3b44c6d18abbf7191338a8a7fafdc516b6d758 192.168.10.101:6381 #this redis instance's node ID plus its host IP and port
   slots:0-5460 (5461 slots) master #its slot range, 0-5460
   1 additional replica(s)
S: cc76054bf257f9bbfec868b86880e22f04fab96e 192.168.10.101:6386 #slave redis IP and port
   slots: (0 slots) slave #slaves hold no slots
   replicates 8fe3c52b352a1209dfc3b9f2f9fa1be9bc2e69a5
S: b786bd3a567634a7087aa289714063ed6e53bb47 192.168.10.101:6384
   slots: (0 slots) slave
   replicates fc3b44c6d18abbf7191338a8a7fafdc516b6d758
M: 8fe3c52b352a1209dfc3b9f2f9fa1be9bc2e69a5 192.168.10.101:6383
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: b3f5ba5a3b1f53e358f438c923f9591055510b96 192.168.10.101:6382
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 7d2da4da536003b2c25c8599526c1c937d071e1e 192.168.10.101:6385
   slots: (0 slots) slave
   replicates b3f5ba5a3b1f53e358f438c923f9591055510b96
[OK] All nodes agree about slots configuration. #configuration complete
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered. #16384 slots in total
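The allocation redis-trib printed can be sanity-checked: the three masters' ranges must be disjoint and together cover all 16384 slots. A quick check of the numbers above:

```python
# Slot ranges redis-trib assigned to the three masters above.
ranges = {"6381": (0, 5460), "6382": (5461, 10922), "6383": (10923, 16383)}

covered = set()
for lo, hi in ranges.values():
    slots = set(range(lo, hi + 1))
    assert covered.isdisjoint(slots)  # no slot owned by two masters
    covered |= slots

assert covered == set(range(16384))  # every slot 0-16383 is owned exactly once
print(len(covered))  # 16384
```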

2.4.2: Connect to the redis cluster:

# redis-cli -c   -h 192.168.10.101  -p 6381
192.168.10.101:6381> INFO  #the sections below are excerpts from the INFO output
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.10.101,port=6384,state=online,offset=771,lag=0
master_repl_offset:771
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:770

# CPU
used_cpu_sys:2.05
used_cpu_user:0.13
used_cpu_sys_children:0.00
used_cpu_user_children:0.00

# Cluster
cluster_enabled:1 #cluster mode is enabled

# redis-cli -c   -h 192.168.10.101  -p 6384  #check the state of a slave redis
192.168.10.101:6384> INFO
# Replication #replication info
role:slave #this node's role is slave
master_host:192.168.10.101 #the master's IP
master_port:6381
master_link_status:up
master_last_io_seconds_ago:8
master_sync_in_progress:0
slave_repl_offset:1051
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

# Cluster
cluster_enabled:1 #cluster is enabled

2.4.3: Write some data:

[root@redis1 6381]# redis-cli -c   -h 192.168.10.101  -p 6381
192.168.10.101:6381> set k1 v1
-> Redirected to slot [12706] located at 192.168.10.101:6383
OK
192.168.10.101:6383> set k2 v2
-> Redirected to slot [449] located at 192.168.10.101:6381
OK
192.168.10.101:6381> set k3  v3 
OK
192.168.10.101:6381> set k4  v4
-> Redirected to slot [8455] located at 192.168.10.101:6382
OK
192.168.10.101:6382> set k5  v5
-> Redirected to slot [12582] located at 192.168.10.101:6383
OK
192.168.10.101:6383> set k6   v6
-> Redirected to slot [325] located at 192.168.10.101:6381
OK
192.168.10.101:6381> set k7   v7
OK
192.168.10.101:6381> set k8   v8
-> Redirected to slot [8331] located at 192.168.10.101:6382
OK
192.168.10.101:6382> set k9   v9 #each key is redirected to the master that owns its hash slot, so the writes spread across all three masters
-> Redirected to slot [12458] located at 192.168.10.101:6383

2.4.4: View cluster info:

192.168.10.101:6383> CLUSTER INFO
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:3
cluster_stats_messages_sent:2570
cluster_stats_messages_received:2570

2.4.5: View the master/slave relationships in the cluster:

192.168.10.101:6383> CLUSTER NODES
cc76054bf257f9bbfec868b86880e22f04fab96e 192.168.10.101:6386 slave 8fe3c52b352a1209dfc3b9f2f9fa1be9bc2e69a5 0 1482903991630 6 connected
b3f5ba5a3b1f53e358f438c923f9591055510b96 192.168.10.101:6382 master - 0 1482903990620 2 connected 5461-10922
7d2da4da536003b2c25c8599526c1c937d071e1e 192.168.10.101:6385 slave b3f5ba5a3b1f53e358f438c923f9591055510b96 0 1482903989610 5 connected
b786bd3a567634a7087aa289714063ed6e53bb47 192.168.10.101:6384 slave fc3b44c6d18abbf7191338a8a7fafdc516b6d758 0 1482903988600 4 connected
fc3b44c6d18abbf7191338a8a7fafdc516b6d758 192.168.10.101:6381 master - 0 1482903989105 1 connected 0-5460
8fe3c52b352a1209dfc3b9f2f9fa1be9bc2e69a5 192.168.10.101:6383 myself,master - 0 0 3 connected 10923-16383
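The node listing above is the line-oriented CLUSTER NODES format (node ID, address, flags, master ID, ping/pong timestamps, config epoch, link state, then slot ranges), so it is easy to consume programmatically. A minimal parser, fed two of the lines above:

```python
def parse_cluster_nodes(text):
    """Parse `CLUSTER NODES` output into a list of node dicts."""
    nodes = []
    for line in text.strip().splitlines():
        parts = line.split()
        nodes.append({
            "id": parts[0],
            "addr": parts[1],
            "flags": parts[2].split(","),
            "master_id": parts[3] if parts[3] != "-" else None,
            "slots": parts[8:],  # e.g. ['5461-10922'] for masters, [] for slaves
        })
    return nodes

sample = (
    "b3f5ba5a3b1f53e358f438c923f9591055510b96 192.168.10.101:6382 master - 0 1482903990620 2 connected 5461-10922\n"
    "7d2da4da536003b2c25c8599526c1c937d071e1e 192.168.10.101:6385 slave b3f5ba5a3b1f53e358f438c923f9591055510b96 0 1482903989610 5 connected\n"
)
nodes = parse_cluster_nodes(sample)
masters = [n for n in nodes if "master" in n["flags"]]
print(len(masters))  # 1
```

(Note this address format is for redis 3.x; from redis 4.0 on, the address field carries an extra @cluster-bus-port suffix.)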

2.5: Dynamically add redis nodes and reshard:

2.5.1: Add a redis host to the cluster:

# Syntax: redis-trib add-node <new_node_ip:port> <any_existing_cluster_member_ip:port>
[root@redis1 6388]# redis-trib  add-node   192.168.10.101:6387   192.168.10.101:6381
>>> Adding node 192.168.10.101:6387 to cluster 192.168.10.101:6381
>>> Performing Cluster Check (using node 192.168.10.101:6381)
M: fc3b44c6d18abbf7191338a8a7fafdc516b6d758 192.168.10.101:6381
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: cc76054bf257f9bbfec868b86880e22f04fab96e 192.168.10.101:6386
   slots: (0 slots) slave
   replicates 8fe3c52b352a1209dfc3b9f2f9fa1be9bc2e69a5
S: b786bd3a567634a7087aa289714063ed6e53bb47 192.168.10.101:6384
   slots: (0 slots) slave
   replicates fc3b44c6d18abbf7191338a8a7fafdc516b6d758
M: 8fe3c52b352a1209dfc3b9f2f9fa1be9bc2e69a5 192.168.10.101:6383
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: b3f5ba5a3b1f53e358f438c923f9591055510b96 192.168.10.101:6382
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 7d2da4da536003b2c25c8599526c1c937d071e1e 192.168.10.101:6385
   slots: (0 slots) slave
   replicates b3f5ba5a3b1f53e358f438c923f9591055510b96
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.10.101:6387 to make it join the cluster.
[OK] New node added correctly.

2.6: After adding a host to the cluster, the new host must be resharded; otherwise it holds no slots and receives no data, as shown below:

2.6.1: Verify the new host's slot assignment:

192.168.10.101:6381> CLUSTER nodes
cc76054bf257f9bbfec868b86880e22f04fab96e 192.168.10.101:6386 slave 8fe3c52b352a1209dfc3b9f2f9fa1be9bc2e69a5 0 1482905220043 6 connected
b786bd3a567634a7087aa289714063ed6e53bb47 192.168.10.101:6384 slave fc3b44c6d18abbf7191338a8a7fafdc516b6d758 0 1482905221559 4 connected
45df2831eaba9b2d0108a38e6def32e76b12e027 192.168.10.101:6387 master - 0 1482905220548 0 connected #this redis master has no slot assignment yet, so no data will be routed to it
fc3b44c6d18abbf7191338a8a7fafdc516b6d758 192.168.10.101:6381 myself,master - 0 0 1 connected 0-5460
8fe3c52b352a1209dfc3b9f2f9fa1be9bc2e69a5 192.168.10.101:6383 master - 0 1482905217518 3 connected 10923-16383
b3f5ba5a3b1f53e358f438c923f9591055510b96 192.168.10.101:6382 master - 0 1482905222568 2 connected 5461-10922
7d2da4da536003b2c25c8599526c1c937d071e1e 192.168.10.101:6385 slave b3f5ba5a3b1f53e358f438c923f9591055510b96 0 1482905222063 5 connected

2.6.2: Reshard slots to the newly added redis node:

# redis-trib reshard 192.168.10.101:6387  #the host added to the redis cluster in the previous step
>>> Performing Cluster Check (using node 192.168.10.101:6387)
M: 141580829002b23dbff5a5d7609eaa5e9ec4710b 192.168.10.101:6387
   slots: (0 slots) master
   0 additional replica(s)
S: 4a968803993de79e660193c28d034af68a750a97 192.168.10.101:6384
   slots: (0 slots) slave
   replicates a6c2425025a04039185d33997092f1738d43614c
M: f331d11d3b1409c30c7f7473d9f9e72634b673fe 192.168.10.101:6383
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 6143ab6e475612877da739f61933873b1594e290 192.168.10.101:6386
   slots: (0 slots) slave
   replicates f331d11d3b1409c30c7f7473d9f9e72634b673fe
S: 0ef2996f7b2e8a77eb16f7d94829b9906d5a532f 192.168.10.101:6385
   slots: (0 slots) slave
   replicates ffcba6782faa729d0e794c8a0ff3241b068b39e3
M: ffcba6782faa729d0e794c8a0ff3241b068b39e3 192.168.10.101:6382
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: a6c2425025a04039185d33997092f1738d43614c 192.168.10.101:6381
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4000  #number of slots to assign to 192.168.10.101:6387
What is the receiving node ID? 141580829002b23dbff5a5d7609eaa5e9ec4710b  #the node ID of 192.168.10.101:6387
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:all #which source hosts give up slots to 192.168.10.101:6387; 'all' draws from every master automatically. (When removing a host from the cluster, the same mechanism can move all of its slots to other redis hosts.)

Ready to move 4000 slots.
  Source nodes:
    M: f331d11d3b1409c30c7f7473d9f9e72634b673fe 192.168.10.101:6383
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
    M: ffcba6782faa729d0e794c8a0ff3241b068b39e3 192.168.10.101:6382
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
    M: a6c2425025a04039185d33997092f1738d43614c 192.168.10.101:6381
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
  Destination node:
    M: 141580829002b23dbff5a5d7609eaa5e9ec4710b 192.168.10.101:6387
   slots: (0 slots) master
   0 additional replica(s)
  Resharding plan:
    Moving slot 5461 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
    Moving slot 5462 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
    Moving slot 5463 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
    Moving slot 5464 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
    Moving slot 5465 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
    Moving slot 5466 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
    Moving slot 5467 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
    Moving slot 5468 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
    Moving slot 5469 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
    Moving slot 5470 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
    Moving slot 5471 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
    Moving slot 5472 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
    Moving slot 5473 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
    Moving slot 5474 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
	..........(remaining plan lines omitted)
	Do you want to proceed with the proposed reshard plan (yes/no)? yes #confirm
Moving slot 5461 from 192.168.10.101:6382 to 192.168.10.101:6387:  #slots start moving from each source host to the new host
Moving slot 5462 from 192.168.10.101:6382 to 192.168.10.101:6387: 
Moving slot 5463 from 192.168.10.101:6382 to 192.168.10.101:6387: 
Moving slot 5464 from 192.168.10.101:6382 to 192.168.10.101:6387: 
Moving slot 5465 from 192.168.10.101:6382 to 192.168.10.101:6387:
...
Moving slot 10973 from 192.168.10.101:6383 to 192.168.10.101:6387:
Moving slot 10974 from 192.168.10.101:6383 to 192.168.10.101:6387:
...
Moving slot 684 from 192.168.10.101:6381 to 192.168.10.101:6387:
Moving slot 685 from 192.168.10.101:6381 to 192.168.10.101:6387:
...
Moving slot 691 from 192.168.10.101:6381 to 192.168.10.101:6387:

2.6.3: Check the redis cluster state after resharding:

[root@redis1 ~]# redis-cli   -c -h 192.168.10.101 -p 6387
192.168.10.101:6387> CLUSTER NODES
4a968803993de79e660193c28d034af68a750a97 192.168.10.101:6384 slave a6c2425025a04039185d33997092f1738d43614c 0 1484327687435 1 connected
141580829002b23dbff5a5d7609eaa5e9ec4710b 192.168.10.101:6387 myself,master - 0 0 7 connected 0-1332 5461-6794 10923-12255 #the new node now has slot assignments
f331d11d3b1409c30c7f7473d9f9e72634b673fe 192.168.10.101:6383 master - 0 1484327686426 3 connected 12256-16383
6143ab6e475612877da739f61933873b1594e290 192.168.10.101:6386 slave f331d11d3b1409c30c7f7473d9f9e72634b673fe 0 1484327684411 3 connected
0ef2996f7b2e8a77eb16f7d94829b9906d5a532f 192.168.10.101:6385 slave ffcba6782faa729d0e794c8a0ff3241b068b39e3 0 1484327688442 2 connected
ffcba6782faa729d0e794c8a0ff3241b068b39e3 192.168.10.101:6382 master - 0 1484327689449 2 connected 6795-10922
a6c2425025a04039185d33997092f1738d43614c 192.168.10.101:6381 master - 0 1484327683405 1 connected 1333-5460
192.168.10.101:6387> CLUSTER INFO
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:7
cluster_size:4
cluster_current_epoch:7
cluster_my_epoch:7
cluster_stats_messages_sent:5000
cluster_stats_messages_received:4987
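The ranges CLUSTER NODES reports for the new node (0-1332, 5461-6794, 10923-12255) confirm that the 4000 slots were drawn roughly evenly, about a third from each of the three original masters:

```python
# Slot ranges the new master 6387 received, per the CLUSTER NODES output above.
received = [(0, 1332), (5461, 6794), (10923, 12255)]
sizes = [hi - lo + 1 for lo, hi in received]
print(sizes, sum(sizes))  # [1333, 1334, 1333] 4000
```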

2.6.4: Add a redis slave for the new node:

[root@redis1 conf.d]# redis-trib  add-node  192.168.10.101:6388  192.168.10.101:6387  #syntax: new_node_ip:port  existing_node_ip:port
>>> Adding node 192.168.10.101:6388 to cluster 192.168.10.101:6387
>>> Performing Cluster Check (using node 192.168.10.101:6387)
M: 141580829002b23dbff5a5d7609eaa5e9ec4710b 192.168.10.101:6387
   slots:0-1332,5461-6794,10923-12255 (4000 slots) master
   0 additional replica(s)
S: 4a968803993de79e660193c28d034af68a750a97 192.168.10.101:6384
   slots: (0 slots) slave
   replicates a6c2425025a04039185d33997092f1738d43614c
M: f331d11d3b1409c30c7f7473d9f9e72634b673fe 192.168.10.101:6383
   slots:12256-16383 (4128 slots) master
   1 additional replica(s)
S: 6143ab6e475612877da739f61933873b1594e290 192.168.10.101:6386
   slots: (0 slots) slave
   replicates f331d11d3b1409c30c7f7473d9f9e72634b673fe
S: 0ef2996f7b2e8a77eb16f7d94829b9906d5a532f 192.168.10.101:6385
   slots: (0 slots) slave
   replicates ffcba6782faa729d0e794c8a0ff3241b068b39e3
M: ffcba6782faa729d0e794c8a0ff3241b068b39e3 192.168.10.101:6382
   slots:6795-10922 (4128 slots) master
   1 additional replica(s)
M: a6c2425025a04039185d33997092f1738d43614c 192.168.10.101:6381
   slots:1333-5460 (4128 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.10.101:6388 to make it join the cluster.
[OK] New node added correctly.

2.6.5: Log in to the newly added node and make it a slave:

[root@redis1 redis]# redis-cli  -c -h 192.168.10.101 -p 6388 #log in to the newly added node
192.168.10.101:6388> CLUSTER NODES
4a968803993de79e660193c28d034af68a750a97 192.168.10.101:6384 slave a6c2425025a04039185d33997092f1738d43614c 0 1484328517178 1 connected
8e40a0498f3645e812e463198e1b68ec54d5fdce 192.168.10.101:6388 myself,master - 0 0 0 connected #the new node joins as a master by default
a6c2425025a04039185d33997092f1738d43614c 192.168.10.101:6381 master - 0 1484328518189 1 connected 1333-5460
f331d11d3b1409c30c7f7473d9f9e72634b673fe 192.168.10.101:6383 master - 0 1484328515158 3 connected 12256-16383
6143ab6e475612877da739f61933873b1594e290 192.168.10.101:6386 slave f331d11d3b1409c30c7f7473d9f9e72634b673fe 0 1484328519197 3 connected
ffcba6782faa729d0e794c8a0ff3241b068b39e3 192.168.10.101:6382 master - 0 1484328516170 2 connected 6795-10922
0ef2996f7b2e8a77eb16f7d94829b9906d5a532f 192.168.10.101:6385 slave ffcba6782faa729d0e794c8a0ff3241b068b39e3 0 1484328520207 2 connected
141580829002b23dbff5a5d7609eaa5e9ec4710b 192.168.10.101:6387 master - 0 1484328517683 7 connected 0-1332 5461-6794 10923-12255
192.168.10.101:6388> CLUSTER REPLICATE 141580829002b23dbff5a5d7609eaa5e9ec4710b #make this node a slave; syntax: CLUSTER REPLICATE <master_node_id>
OK
192.168.10.101:6388> CLUSTER NODES #check the node states again
4a968803993de79e660193c28d034af68a750a97 192.168.10.101:6384 slave a6c2425025a04039185d33997092f1738d43614c 0 1484328550457 1 connected
8e40a0498f3645e812e463198e1b68ec54d5fdce 192.168.10.101:6388 myself,slave 141580829002b23dbff5a5d7609eaa5e9ec4710b 0 0 0 connected #it is now a slave
a6c2425025a04039185d33997092f1738d43614c 192.168.10.101:6381 master - 0 1484328550964 1 connected 1333-5460
f331d11d3b1409c30c7f7473d9f9e72634b673fe 192.168.10.101:6383 master - 0 1484328549952 3 connected 12256-16383
6143ab6e475612877da739f61933873b1594e290 192.168.10.101:6386 slave f331d11d3b1409c30c7f7473d9f9e72634b673fe 0 1484328549449 3 connected
ffcba6782faa729d0e794c8a0ff3241b068b39e3 192.168.10.101:6382 master - 0 1484328551467 2 connected 6795-10922
0ef2996f7b2e8a77eb16f7d94829b9906d5a532f 192.168.10.101:6385 slave ffcba6782faa729d0e794c8a0ff3241b068b39e3 0 1484328546423 2 connected
141580829002b23dbff5a5d7609eaa5e9ec4710b 192.168.10.101:6387 master - 0 1484328548440 7 connected 0-1332 5461-6794 10923-12255

2.7: Remove nodes from the redis cluster, useful for failed hardware, architecture changes, and similar scenarios. Suppose the 2 redis servers added above are to be removed:

2.7.1: First migrate all slots off the master node to the other masters:

[root@redis1 redis]# redis-trib   reshard 192.168.10.101:6381 
>>> Performing Cluster Check (using node 192.168.10.101:6381)
M: a6c2425025a04039185d33997092f1738d43614c 192.168.10.101:6381
   slots:1333-5460 (4128 slots) master
   1 additional replica(s)
S: 8e40a0498f3645e812e463198e1b68ec54d5fdce 192.168.10.101:6388
   slots: (0 slots) slave
   replicates 141580829002b23dbff5a5d7609eaa5e9ec4710b
M: ffcba6782faa729d0e794c8a0ff3241b068b39e3 192.168.10.101:6382
   slots:6795-10922 (4128 slots) master
   1 additional replica(s)
S: 6143ab6e475612877da739f61933873b1594e290 192.168.10.101:6386
   slots: (0 slots) slave
   replicates f331d11d3b1409c30c7f7473d9f9e72634b673fe
M: f331d11d3b1409c30c7f7473d9f9e72634b673fe 192.168.10.101:6383
   slots:12256-16383 (4128 slots) master
   1 additional replica(s)
S: 0ef2996f7b2e8a77eb16f7d94829b9906d5a532f 192.168.10.101:6385
   slots: (0 slots) slave
   replicates ffcba6782faa729d0e794c8a0ff3241b068b39e3
S: 4a968803993de79e660193c28d034af68a750a97 192.168.10.101:6384
   slots: (0 slots) slave
   replicates a6c2425025a04039185d33997092f1738d43614c
M: 141580829002b23dbff5a5d7609eaa5e9ec4710b 192.168.10.101:6387 
   slots:0-1332,5461-6794,10923-12255 (4000 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4000 #number of slots to move; it must equal the slot count held by the node being removed (shown on its M: line above; slaves hold none)
What is the receiving node ID? a6c2425025a04039185d33997092f1738d43614c  #the target node's ID, i.e. where the slots are moved to
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:141580829002b23dbff5a5d7609eaa5e9ec4710b #the node whose slots are being moved away, i.e. the source
Source node #2:done
    Moving slot 12252 from 141580829002b23dbff5a5d7609eaa5e9ec4710b
    Moving slot 12253 from 141580829002b23dbff5a5d7609eaa5e9ec4710b
    Moving slot 12254 from 141580829002b23dbff5a5d7609eaa5e9ec4710b
    Moving slot 12255 from 141580829002b23dbff5a5d7609eaa5e9ec4710b
Do you want to proceed with the proposed reshard plan (yes/no)? yes
Moving slot 12248 from 192.168.10.101:6387 to 192.168.10.101:6381: 
Moving slot 12249 from 192.168.10.101:6387 to 192.168.10.101:6381: 
Moving slot 12250 from 192.168.10.101:6387 to 192.168.10.101:6381: 
Moving slot 12251 from 192.168.10.101:6387 to 192.168.10.101:6381: 
Moving slot 12252 from 192.168.10.101:6387 to 192.168.10.101:6381: 
Moving slot 12253 from 192.168.10.101:6387 to 192.168.10.101:6381: 
Moving slot 12254 from 192.168.10.101:6387 to 192.168.10.101:6381: 
Moving slot 12255 from 192.168.10.101:6387 to 192.168.10.101:6381: 
..........(migration in progress)

2.7.2: Remove the node:

[root@redis1 redis]# redis-cli  -c -h 192.168.10.101 -p 6388 #first connect to the cluster and look up the ID of the node to remove
192.168.10.101:6388> CLUSTER NODES
4a968803993de79e660193c28d034af68a750a97 192.168.10.101:6384 slave a6c2425025a04039185d33997092f1738d43614c 0 1484329347313 8 connected
8e40a0498f3645e812e463198e1b68ec54d5fdce 192.168.10.101:6388 myself,slave a6c2425025a04039185d33997092f1738d43614c 0 0 0 connected
a6c2425025a04039185d33997092f1738d43614c 192.168.10.101:6381 master - 0 1484329344289 8 connected 0-6794 10923-12255
f331d11d3b1409c30c7f7473d9f9e72634b673fe 192.168.10.101:6383 master - 0 1484329346305 3 connected 12256-16383
6143ab6e475612877da739f61933873b1594e290 192.168.10.101:6386 slave f331d11d3b1409c30c7f7473d9f9e72634b673fe 0 1484329350343 3 connected
ffcba6782faa729d0e794c8a0ff3241b068b39e3 192.168.10.101:6382 master - 0 1484329347313 2 connected 6795-10922
0ef2996f7b2e8a77eb16f7d94829b9906d5a532f 192.168.10.101:6385 slave ffcba6782faa729d0e794c8a0ff3241b068b39e3 0 1484329348323 2 connected
141580829002b23dbff5a5d7609eaa5e9ec4710b 192.168.10.101:6387 master - 0 1484329349333 7 connected #the node ID and IP to remove; a master must first have all its slots migrated away as in the previous step, otherwise its data will be lost
192.168.10.101:6388> 
[root@redis1 redis]# redis-trib  del-node 192.168.10.101:6387 141580829002b23dbff5a5d7609eaa5e9ec4710b #remove the node; syntax: redis-trib del-node IP:PORT NODE_ID
>>> Removing node 141580829002b23dbff5a5d7609eaa5e9ec4710b from cluster 192.168.10.101:6387
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
[root@redis1 redis]# redis-cli  -c -h 192.168.10.101 -p 6388 #check again to confirm removal; a slave can be removed directly, only a master needs its slots migrated first
192.168.10.101:6388> CLUSTER  nodes
4a968803993de79e660193c28d034af68a750a97 192.168.10.101:6384 slave a6c2425025a04039185d33997092f1738d43614c 0 1484329378619 8 connected
8e40a0498f3645e812e463198e1b68ec54d5fdce 192.168.10.101:6388 myself,slave a6c2425025a04039185d33997092f1738d43614c 0 0 0 connected
a6c2425025a04039185d33997092f1738d43614c 192.168.10.101:6381 master - 0 1484329378113 8 connected 0-6794 10923-12255
f331d11d3b1409c30c7f7473d9f9e72634b673fe 192.168.10.101:6383 master - 0 1484329376600 3 connected 12256-16383
6143ab6e475612877da739f61933873b1594e290 192.168.10.101:6386 slave f331d11d3b1409c30c7f7473d9f9e72634b673fe 0 1484329375590 3 connected
ffcba6782faa729d0e794c8a0ff3241b068b39e3 192.168.10.101:6382 master - 0 1484329377610 2 connected 6795-10922
0ef2996f7b2e8a77eb16f7d94829b9906d5a532f 192.168.10.101:6385 slave ffcba6782faa729d0e794c8a0ff3241b068b39e3 0 1484329379627 2 connected
Original article (in Chinese): https://www.cnblogs.com/you0329/p/8591496.html