ZooKeeper Cluster Setup

First create three virtual machines, make sure they can reach one another over the network, and make sure each one has a Java environment installed.

OS        Java version  myid  IP          ZooKeeper version
CentOS 7  JDK 1.8       1     172.16.0.2  zookeeper-3.4.14
CentOS 7  JDK 1.8       2     172.16.0.3  zookeeper-3.4.14
CentOS 7  JDK 1.8       3     172.16.0.4  zookeeper-3.4.14

For details on creating the virtual machines, see my earlier post:

VMware Linux Network Configuration

For installing a Java environment on CentOS 7, see my earlier post:

CentOS 7 Environment Initialization

Configure ZooKeeper on one host

After connecting to the 172.16.0.2 host with XShell, upload zookeeper-3.4.14.tar.gz to the VM using the rz command, then extract it to /opt:

tar -xzvf zookeeper-3.4.14.tar.gz -C /opt/

Then work inside /opt/zookeeper-3.4.14:

cd /opt/zookeeper-3.4.14

Create a data directory to store data (snapshots) and a logs directory to store the transaction log files:

mkdir data
mkdir logs

Under data, create a myid file that records this ZooKeeper node's id within the cluster:

vim data/myid
## file content
1
## save and quit
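Instead of opening vim, the myid file can also be written in a single command (paths assume the /opt layout used above):

```shell
# Create the data directory if needed and record this node's cluster id
mkdir -p /opt/zookeeper-3.4.14/data
echo 1 > /opt/zookeeper-3.4.14/data/myid
# Verify the file content
cat /opt/zookeeper-3.4.14/data/myid
```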

Edit the ZooKeeper configuration file:

# Copy the sample configuration
cp conf/zoo_sample.cfg conf/zoo.cfg
# Edit zoo.cfg
vim conf/zoo.cfg

The contents of zoo.cfg:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/opt/zookeeper-3.4.14/data
dataLogDir=/opt/zookeeper-3.4.14/logs
# the port at which the clients will connect
clientPort=2181
# ZooKeeper cluster members
server.1=172.16.0.2:2888:3888
server.2=172.16.0.3:2888:3888
server.3=172.16.0.4:2888:3888
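Each `server.N` line has the form `server.<myid>=<host>:<peer-port>:<election-port>`: followers connect to the leader on port 2888, and port 3888 is used for leader election. With the timing values above, a follower gets initLimit × tickTime = 20 s for its initial sync with the leader and syncLimit × tickTime = 10 s to acknowledge a request. Annotated, one entry reads:

```
# server.<myid> = <host> : <port followers use to reach the leader> : <leader-election port>
server.1=172.16.0.2:2888:3888
```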

Package the configured ZooKeeper and send it to the other machines for configuration

cd /opt
# Package the configured zk
tar -czvf zookeeper.tar.gz zookeeper-3.4.14/
# Send zookeeper.tar.gz to the /opt/ directory of the other two machines
scp zookeeper.tar.gz root@172.16.0.3:/opt/
scp zookeeper.tar.gz root@172.16.0.4:/opt/

Once the transfer completes, extract the archive on each machine and edit its myid file:

cd /opt
# Extract
tar -xzvf zookeeper.tar.gz
# Edit myid: change it to 2 on 172.16.0.3 and to 3 on 172.16.0.4
vim zookeeper-3.4.14/data/myid
# myid content on 172.16.0.3
2
# myid content on 172.16.0.4
3
# save and quit
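If passwordless SSH is set up, the per-host myid edits can be scripted instead of done by hand. A minimal sketch, assuming the id-to-IP mapping from the table above (myid 2 → 172.16.0.3, myid 3 → 172.16.0.4) and root SSH access; it only prints the commands rather than running them:

```shell
# Dry run: print the ssh command that would set myid on each follower host.
# Remove the echo (or pipe the output to `sh`) to actually run them.
for id in 2 3; do
  ip="172.16.0.$((id + 1))"
  echo "ssh root@$ip \"echo $id > /opt/zookeeper-3.4.14/data/myid\""
done
```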

Start ZooKeeper

Run the following on each of the three machines in turn:

# Stop the firewall (ports 2181, 2888 and 3888 must be reachable between the nodes)
systemctl stop firewalld.service
# Start zk
cd /opt/zookeeper-3.4.14
./bin/zkServer.sh start

After startup, connect with the client to verify that each server is running:

./bin/zkCli.sh -server 172.16.0.2:2181
./bin/zkCli.sh -server 172.16.0.3:2181
./bin/zkCli.sh -server 172.16.0.4:2181

Install nc and use ZooKeeper's four-letter commands to check each node's role:

yum install nc
echo stat | nc 172.16.0.2 2181
echo stat | nc 172.16.0.3 2181
echo stat | nc 172.16.0.4 2181

The node reporting Mode: leader is the cluster leader; the other nodes report Mode: follower.
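The stat output is verbose; the role line can be filtered out with grep. In practice that is `echo stat | nc <ip> 2181 | grep '^Mode'`; the sketch below runs grep on a canned sample of the output so it works without a live server:

```shell
# Extract just the Mode line from a sample of `stat` output
printf 'Zookeeper version: 3.4.14\nLatency min/avg/max: 0/0/11\nMode: follower\nNode count: 4\n' \
  | grep '^Mode'
```

Alternatively, `./bin/zkServer.sh status` on each node prints the same Mode line without needing nc.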

Test

In my cluster's current state, 172.16.0.2 and 172.16.0.3 are followers and 172.16.0.4 is the leader.

To avoid confusion, connect each client to its own local server; then create a znode on one node and read it from the others:

# 172.16.0.4 (leader)
create /test 111
set /test 2222
# 172.16.0.3
[zk: 172.16.0.3:2181(CONNECTED) 0] get /test
2222
cZxid = 0x100000009
ctime = Fri Dec 13 18:44:39 HKT 2019
mZxid = 0x10000000a
mtime = Fri Dec 13 18:45:11 HKT 2019
pZxid = 0x10000000b
cversion = 1
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 4
numChildren = 1
# 172.16.0.2
[zk: 172.16.0.2:2181(CONNECTED) 0] get /test
2222
cZxid = 0x100000009
ctime = Fri Dec 13 18:44:39 HKT 2019
mZxid = 0x10000000a
mtime = Fri Dec 13 18:45:11 HKT 2019
pZxid = 0x10000000b
cversion = 1
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 4
numChildren = 1
Original article: https://www.cnblogs.com/yanshaoshuai/p/12033945.html