Migrating a standalone Redis instance's data to a cluster

1. First create the Redis cluster, then assign all slots to a single master node.

2. Copy the standalone instance's snapshot (RDB) or AOF file to that node of the cluster, so that all data lands in the 16384 slots held by the one master. Then start the remaining cluster nodes, redistribute the data in the 16384 slots across the other masters, and finally create a replica for each master.
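As background for the slot counts below: Redis Cluster maps every key to one of the 16384 slots via CRC16(key) mod 16384, honoring `{hash tags}` (per the Redis Cluster specification). A small self-contained Python sketch of that mapping, handy for predicting where a key will land after the migration:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM variant: poly 0x1021, init 0x0000), the checksum
    Redis Cluster uses for key-to-slot mapping."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def keyslot(key: bytes) -> int:
    """Return the cluster hash slot (0-16383) for a key, honoring {hash tags}:
    if the key contains a non-empty {...} section, only that section is hashed."""
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key) % 16384

print(keyslot(b"foo"))  # should match what CLUSTER KEYSLOT foo reports
```

Keys sharing a hash tag (e.g. `{user1000}.following` and `{user1000}.followers`) land in the same slot, which is what makes multi-key operations possible in a cluster.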

The detailed steps are as follows:

1. Create the Redis cluster.

2. If the standalone Redis instance has both RDB and AOF enabled, the AOF file alone is enough, because when both an AOF and an RDB file exist, Redis loads the AOF file first.

On the standalone instance, run BGREWRITEAOF to persist the data.

If AOF is not enabled and only RDB is, run BGSAVE instead.

BGSAVE returns OK immediately; Redis then forks a child process. The original Redis process (the parent) keeps serving client requests while the child writes the data to disk and then exits.

The client can run LASTSAVE to check the timestamp of the last successful save and thereby confirm whether BGSAVE completed.
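The LASTSAVE check can be automated: record the timestamp before issuing BGSAVE, then poll until it changes. A minimal Python sketch of that pattern — `lastsave` and `bgsave` here are hypothetical callables standing in for whatever client you use (e.g. redis-py's `lastsave()`/`bgsave()` methods, or subprocess wrappers around redis-cli):

```python
import time

def wait_for_bgsave(lastsave, bgsave, poll_interval=0.5, timeout=300):
    """Trigger BGSAVE and block until LASTSAVE reports a newer save.

    lastsave: callable returning the last successful save marker (timestamp)
    bgsave:   callable that issues the BGSAVE command
    """
    before = lastsave()          # marker of the previous successful save
    bgsave()                     # returns immediately; child saves in background
    deadline = time.monotonic() + timeout
    while lastsave() == before:  # unchanged marker means the save is still running
        if time.monotonic() > deadline:
            raise TimeoutError("BGSAVE did not finish within %ss" % timeout)
        time.sleep(poll_interval)
    return lastsave()
```

This is a sketch of the polling pattern only; in production you would also want to check the server log for BGSAVE errors rather than relying on the timestamp alone.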

3. Check the cluster's current slot assignment and move all slots onto one master node.

./redis-trib.rb check 10.253.112.161:7001

# From Redis 5 onward, use the redis-cli command instead
redis-cli --cluster info 10.253.112.161:7001
>>> Performing Cluster Check (using node 10.253.112.161:7001)
M: 8ec7056e12788255ada53b548ee960cb690e72c7 10.253.112.161:7001
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: 4e445cdf6b0430433a7a32928622e803f686ba76 10.253.112.161:7005
   slots: (0 slots) slave
   replicates 4faa969455ae34669e5b9bf4b14655e969514122
S: dd2407d636459bbc8dc88df11321e387e567ee1e 10.253.112.161:7004
   slots: (0 slots) slave
   replicates 8ec7056e12788255ada53b548ee960cb690e72c7
M: b816bbff799243edd86e067879e6bfbfe37330e0 10.253.112.161:7003
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: 4faa969455ae34669e5b9bf4b14655e969514122 10.253.112.161:7002
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: c7e644caca2d0725d8ff0bd852f6e51eceb48970 10.253.112.161:7006
   slots: (0 slots) slave
   replicates b816bbff799243edd86e067879e6bfbfe37330e0
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

You can see that 10.253.112.161:7001 holds slots 0-5460, 10.253.112.161:7002 holds slots 5461-10922, and 10.253.112.161:7003 holds slots 10923-16383.

We now want to migrate the slots of 10.253.112.161:7002 and 10.253.112.161:7003 onto 10.253.112.161:7001.
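Consolidation amounts to one reshard per non-target master, each moving that master's full slot count. A small Python sketch that derives the moves (slot counts taken from the check output above; node labels are shorthand for the host:port pairs):

```python
def consolidation_moves(slot_counts, target):
    """Given {node: slot_count}, return the (source_node, count) reshard
    moves needed to concentrate every slot on `target`."""
    return [(node, count) for node, count in slot_counts.items()
            if node != target and count > 0]

# Slot counts as reported by the cluster check above
counts = {"7001": 5461, "7002": 5462, "7003": 5461}
for src, n in consolidation_moves(counts, "7001"):
    print(f"reshard: move {n} slots from {src} to 7001")
```

Each tuple corresponds to one interactive `reshard` run below: the count answers "How many slots", the target answers "receiving node ID", and the source answers "source node ID".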

First, migrate the slots of 10.253.112.161:7002 to 10.253.112.161:7001:

./redis-trib.rb reshard 10.253.112.161:7002

redis-cli --cluster reshard 10.253.112.161:7002

The command first asks how many slots to move, then the ID of the receiving node, and finally the ID(s) of the source node(s).

>>> Performing Cluster Check (using node 10.253.112.161:7002)
M: 4faa969455ae34669e5b9bf4b14655e969514122 10.253.112.161:7002
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 4e445cdf6b0430433a7a32928622e803f686ba76 10.253.112.161:7005
   slots: (0 slots) slave
   replicates 4faa969455ae34669e5b9bf4b14655e969514122
M: 8ec7056e12788255ada53b548ee960cb690e72c7 10.253.112.161:7001
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: b816bbff799243edd86e067879e6bfbfe37330e0 10.253.112.161:7003
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: dd2407d636459bbc8dc88df11321e387e567ee1e 10.253.112.161:7004
   slots: (0 slots) slave
   replicates 8ec7056e12788255ada53b548ee960cb690e72c7
S: c7e644caca2d0725d8ff0bd852f6e51eceb48970 10.253.112.161:7006
   slots: (0 slots) slave
   replicates b816bbff799243edd86e067879e6bfbfe37330e0
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 5462  # number of slots to migrate out (7002 holds 5462 slots)
What is the receiving node ID? 8ec7056e12788255ada53b548ee960cb690e72c7  # ID of the node that receives the slots; we are consolidating all slots onto 10.253.112.161:7001, so this is 7001's ID
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:4faa969455ae34669e5b9bf4b14655e969514122    # ID of the node the slots move out of; we are migrating 10.253.112.161:7002's slots, so this is 7002's ID
Source node #2:done
.............
.............
Do you want to proceed with the proposed reshard plan (yes/no)?yes

In the same way, migrate the slots of 10.253.112.161:7003 to 10.253.112.161:7001:

./redis-trib.rb reshard 10.253.112.161:7003

redis-cli --cluster reshard 10.253.112.161:7003
[root@ddyh-app-02 redis-cluster]# ./redis-trib.rb check 10.253.112.161:7001  
>>> Performing Cluster Check (using node 10.253.112.161:7001)
M: 8ec7056e12788255ada53b548ee960cb690e72c7 10.253.112.161:7001
   slots:0-16383 (16384 slots) master
   3 additional replica(s)
S: 4e445cdf6b0430433a7a32928622e803f686ba76 10.253.112.161:7005
   slots: (0 slots) slave
   replicates 8ec7056e12788255ada53b548ee960cb690e72c7
S: dd2407d636459bbc8dc88df11321e387e567ee1e 10.253.112.161:7004
   slots: (0 slots) slave
   replicates 8ec7056e12788255ada53b548ee960cb690e72c7
M: b816bbff799243edd86e067879e6bfbfe37330e0 10.253.112.161:7003
   slots: (0 slots) master
   0 additional replica(s)
M: 4faa969455ae34669e5b9bf4b14655e969514122 10.253.112.161:7002
   slots: (0 slots) master
   0 additional replica(s)
S: c7e644caca2d0725d8ff0bd852f6e51eceb48970 10.253.112.161:7006
   slots: (0 slots) slave
   replicates 8ec7056e12788255ada53b548ee960cb690e72c7
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

You can see that all 16384 slots are now on 10.253.112.161:7001.

4. Stop the Redis cluster, then copy the standalone instance's AOF file or dump file into the data directory of the 10.253.112.161:7001 node. If the standalone instance had AOF enabled, only the AOF file is needed; if not, copy the dump.rdb file instead.

Rename the copied file to match the filename defined in the 10.253.112.161:7001 node's configuration file.

Note: Redis loads the AOF file in preference to the RDB. If the cluster has AOF enabled but you copy over only an RDB file, the cluster will come up empty, because there is no AOF file to load. In that case, disable AOF first and then start the cluster node; once the data has been imported, generate an AOF file, stop the cluster node, and re-enable AOF:

appendonly no

Then start only the 7001 node:

cd /data1/redis-cluster/redis-01
./redis-server redis.conf


Check whether the data has been imported; if not, look for errors in the log:

./redis-trib.rb info 10.253.112.161:7001

redis-cli --cluster info 10.253.112.161:7001

If the file you copied over was an AOF file, the following three steps are not needed:

  Log in with redis-cli and run BGREWRITEAOF to regenerate the AOF file.

  Shut down Redis and enable AOF (appendonly yes) in redis.conf.

  Restart the Redis service; it will then read appendonly.aof and load the complete data set.

5. Redistribute the slots on 10.253.112.161:7001 evenly between the other two master nodes:

./redis-trib.rb reshard --from 8ec7056e12788255ada53b548ee960cb690e72c7 --to 4faa969455ae34669e5b9bf4b14655e969514122 --slots 5462 --yes 10.253.112.161:7002
./redis-trib.rb reshard --from 8ec7056e12788255ada53b548ee960cb690e72c7 --to b816bbff799243edd86e067879e6bfbfe37330e0 --slots 5461 --yes 10.253.112.161:7003

#On Redis 5+, the equivalent non-interactive redis-cli form (note the --cluster- prefixed options):
redis-cli --cluster reshard 10.253.112.161:7002 --cluster-from 8ec7056e12788255ada53b548ee960cb690e72c7 --cluster-to 4faa969455ae34669e5b9bf4b14655e969514122 --cluster-slots 5462 --cluster-yes
redis-cli --cluster reshard 10.253.112.161:7003 --cluster-from 8ec7056e12788255ada53b548ee960cb690e72c7 --cluster-to b816bbff799243edd86e067879e6bfbfe37330e0 --cluster-slots 5461 --cluster-yes
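The --slots values 5462 and 5461 come from dividing the 16384 slots as evenly as possible among three masters (the target node simply keeps the remainder). A quick Python check of that arithmetic:

```python
def even_split(total_slots, n_masters):
    """Split total_slots into n_masters shares that differ by at most one."""
    base, extra = divmod(total_slots, n_masters)
    return [base + 1] * extra + [base] * (n_masters - extra)

shares = even_split(16384, 3)
print(shares)  # [5462, 5461, 5461]
assert sum(shares) == 16384
```

So 7001 sends 5462 slots to 7002 and 5461 slots to 7003, and keeps the remaining 5461 itself.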

Finally, verify the slot assignment and the data on each node once more.

Original article (in Chinese): https://www.cnblogs.com/zphqq/p/11315852.html