Installing a Kafka cluster on CentOS

1. Set up a ZooKeeper cluster first

Reference: https://www.cnblogs.com/sky-cheng/p/13182687.html

2. Configure the three servers

The three servers are:

172.28.5.120:kafka120.blockchain.hl95.com

172.28.5.124:kafka124.blockchain.hl95.com

172.28.5.125:kafka125.blockchain.hl95.com

Add the hostnames to the /etc/hosts file on all three servers:

172.28.5.120 kafka120.blockchain.hl95.com
172.28.5.124 kafka124.blockchain.hl95.com
172.28.5.125 kafka125.blockchain.hl95.com 

3. Download the installation package on each of the three servers

[root@redis-01 zookeeper-3.4.14]# cd /usr/local/src/
[root@redis-01 src]# wget http://mirrors.hust.edu.cn/apache/kafka/2.2.2/kafka_2.11-2.2.2.tgz

4. Extract the archive

[root@redis-01 src]# tar -zxvf kafka_2.11-2.2.2.tgz

5. Move it to /usr/local

[root@redis-01 src]# mv kafka_2.11-2.2.2/ ../

6. Notes on the config/server.properties file

broker.id=0  # unique ID of this broker within the cluster, analogous to ZooKeeper's myid
listeners=PLAINTEXT://kafka12x.blockchain.hl95.com:9092  # address and port this Kafka broker serves on; the default port is 9092
num.network.threads=3  # number of threads the broker uses for network processing
num.io.threads=8  # number of threads the broker uses for disk I/O
log.dirs=/usr/local/kafka_2.11-2.2.2/logs/  # directory where messages are stored; may be a comma-separated list of directories, and num.io.threads should be no smaller than the number of directories; with multiple directories, a newly created partition is placed in the directory that currently holds the fewest partitions
socket.send.buffer.bytes=102400  # send buffer size; data is not sent immediately but buffered until the buffer reaches a certain size, which improves performance
socket.receive.buffer.bytes=102400  # receive buffer size; data is flushed to disk once a certain amount has accumulated
socket.request.max.bytes=104857600  # maximum size of a request sent to or received from Kafka; must not exceed the Java heap size
num.partitions=1  # default number of partitions; a topic gets 1 partition by default
log.retention.hours=168  # default maximum message retention: 168 hours, i.e. 7 days
message.max.bytes=5242880  # maximum message size: 5 MB
default.replication.factor=2  # number of replicas Kafka keeps of each message; if one replica fails, another can still serve
replica.fetch.max.bytes=5242880  # maximum number of bytes per replica fetch request
log.segment.bytes=1073741824  # Kafka appends messages to segment files; when a segment exceeds this size, Kafka rolls over to a new file
log.retention.check.interval.ms=300000  # every 300000 ms, check the log directories against the retention limit (log.retention.hours=168) and delete any expired messages
zookeeper.connect=zookeeper120.blockchain.hl95.com:2181,zookeeper124.blockchain.hl95.com:2181,zookeeper125.blockchain.hl95.com:2181  # ZooKeeper connection string

The settings that mainly need to be changed on each broker:

broker.id=0  # unique ID of this broker within the cluster, analogous to ZooKeeper's myid
listeners=PLAINTEXT://kafka12x.blockchain.hl95.com:9092  # address and port this Kafka broker serves on; the default port is 9092
zookeeper.connect=zookeeper120.blockchain.hl95.com:2181,zookeeper124.blockchain.hl95.com:2181,zookeeper125.blockchain.hl95.com:2181  # ZooKeeper connection string
log.dirs=/usr/local/kafka_2.11-2.2.2/logs/  # directory where messages are stored
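For example, on the second broker (172.28.5.124) those settings would look like this (a sketch; broker.id and the listener hostname are the only values that differ from machine to machine):

```properties
# server.properties on 172.28.5.124 (second broker)
broker.id=1
listeners=PLAINTEXT://kafka124.blockchain.hl95.com:9092
zookeeper.connect=zookeeper120.blockchain.hl95.com:2181,zookeeper124.blockchain.hl95.com:2181,zookeeper125.blockchain.hl95.com:2181
log.dirs=/usr/local/kafka_2.11-2.2.2/logs/
```

The third broker (172.28.5.125) would use broker.id=2 and its own hostname in listeners.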

7. Start Kafka on each of the three servers

[root@redis-01 local]# cd /usr/local/kafka_2.11-2.2.2/
[root@redis-01 kafka_2.11-2.2.2]# bin/kafka-server-start.sh config/server.properties

This starts Kafka in the foreground; to run it in the background instead, pass the -daemon flag:

[root@redis-01 kafka_2.11-2.2.2]# bin/kafka-server-start.sh -daemon config/server.properties

8. Inspect the brokers with zkCli.sh

[root@redis-01 kafka_2.11-2.2.2]# zkCli.sh -server zookeeper120.blockchain.hl95.com:2181
Connecting to 172.28.5.120:2181
2020-06-28 16:18:02,148 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf, built on 03/06/2019 16:18 GMT
2020-06-28 16:18:02,153 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=slave1
2020-06-28 16:18:02,153 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_251
2020-06-28 16:18:02,156 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2020-06-28 16:18:02,156 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/local/jdk1.8.0_251/jre
2020-06-28 16:18:02,157 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/usr/local/zookeeper-3.4.14/bin/../zookeeper-server/target/classes:/usr/local/zookeeper-3.4.14/bin/../build/classes:/usr/local/zookeeper-3.4.14/bin/../zookeeper-server/target/lib/*.jar:/usr/local/zookeeper-3.4.14/bin/../build/lib/*.jar:/usr/local/zookeeper-3.4.14/bin/../lib/slf4j-log4j12-1.7.25.jar:/usr/local/zookeeper-3.4.14/bin/../lib/slf4j-api-1.7.25.jar:/usr/local/zookeeper-3.4.14/bin/../lib/netty-3.10.6.Final.jar:/usr/local/zookeeper-3.4.14/bin/../lib/log4j-1.2.17.jar:/usr/local/zookeeper-3.4.14/bin/../lib/jline-0.9.94.jar:/usr/local/zookeeper-3.4.14/bin/../lib/audience-annotations-0.5.0.jar:/usr/local/zookeeper-3.4.14/bin/../zookeeper-3.4.14.jar:/usr/local/zookeeper-3.4.14/bin/../zookeeper-server/src/main/resources/lib/*.jar:/usr/local/zookeeper-3.4.14/bin/../conf:/usr/local/jdk1.8.0_251/lib/dt.jar:/usr/local/jdk1.7.0_79/lib/rt.jar:/usr/local/jdk1.7.0_79/lib/tools.jar
2020-06-28 16:18:02,157 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2020-06-28 16:18:02,157 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2020-06-28 16:18:02,157 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
2020-06-28 16:18:02,157 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
2020-06-28 16:18:02,158 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
2020-06-28 16:18:02,158 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=3.10.0-862.el7.x86_64
2020-06-28 16:18:02,158 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=root
2020-06-28 16:18:02,158 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/root
2020-06-28 16:18:02,158 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/usr/local/kafka_2.11-2.2.2
2020-06-28 16:18:02,160 [myid:] - INFO  [main:ZooKeeper@442] - Initiating client connection, connectString=172.28.5.120:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@4b85612c
Welcome to ZooKeeper!
2020-06-28 16:18:02,190 [myid:] - INFO  [main-SendThread(slave1:2181):ClientCnxn$SendThread@1025] - Opening socket connection to server slave1/172.28.5.120:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2020-06-28 16:18:02,276 [myid:] - INFO  [main-SendThread(slave1:2181):ClientCnxn$SendThread@879] - Socket connection established to slave1/172.28.5.120:2181, initiating session
[zk: 172.28.5.120:2181(CONNECTING) 0] 2020-06-28 16:18:02,316 [myid:] - INFO  [main-SendThread(slave1:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server slave1/172.28.5.120:2181, sessionid = 0x103b998ee6f000a, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null

List the root znode:

ls /
[cluster, controller_epoch, controller, brokers, zookeeper, admin, isr_change_notification, consumers, log_dir_event_notification, latest_producer_id_block, config]
[zk: 172.28.5.120:2181(CONNECTED) 1] 

List /brokers:

[zk: 172.28.5.120:2181(CONNECTED) 1] ls /brokers
[ids, topics, seqid]
[zk: 172.28.5.120:2181(CONNECTED) 2] 

List /brokers/ids:

[zk: 172.28.5.120:2181(CONNECTED) 2] ls /brokers/ids
[0, 1, 2]
[zk: 172.28.5.120:2181(CONNECTED) 3] 

All three brokers are registered (each broker creates an ephemeral znode under /brokers/ids). Now stop the second one:

[root@redis-02 kafka_2.11-2.2.2]# bin/kafka-server-stop.sh
[root@redis-02 kafka_2.11-2.2.2]# 

Check the ids again:

[zk: 172.28.5.120:2181(CONNECTED) 6] ls /brokers/ids
[0, 2]
[zk: 172.28.5.120:2181(CONNECTED) 7] 

The broker with id=1 is now gone. Start it again:

[root@redis-02 kafka_2.11-2.2.2]# bin/kafka-server-start.sh config/server.properties 

Check the ids again:

[zk: 172.28.5.120:2181(CONNECTED) 7] ls /brokers/ids
[0, 1, 2]

The broker with id=1 is back online.

9. Testing the Kafka cluster

1. Create a topic

[root@redis-01 kafka_2.11-2.2.2]# bin/kafka-topics.sh --create --zookeeper zookeeper120.blockchain.hl95.com:2181,zookeeper124.blockchain.hl95.com:2181,zookeeper125.blockchain.hl95.com:2181 --replication-factor 3 --partitions 3 --topic test
Created topic test.
[root@redis-01 kafka_2.11-2.2.2]#

--replication-factor 3  # keep three replicas of each partition
--partitions 3  # create three partitions
--topic test  # topic name is test

Inspect with zkCli.sh:

[zk: zookeeper120.blockchain.hl95.com:2181(CONNECTED) 17] ls /brokers/topics/test/partitions
[0, 1, 2]
[zk: zookeeper120.blockchain.hl95.com:2181(CONNECTED) 51] get /brokers/ids/0
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://slave1:9092"],"jmx_port":-1,"host":"slave1","timestamp":"1593328382757","port":9092,"version":4}
cZxid = 0x300000018
ctime = Sun Jun 28 15:13:02 CST 2020
mZxid = 0x300000018
mtime = Sun Jun 28 15:13:02 CST 2020
pZxid = 0x300000018
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x103b998ee6f0000
dataLength = 182
numChildren = 0
[zk: 172.28.5.120:2181(CONNECTED) 52]
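The broker registration value returned by `get /brokers/ids/0` is plain JSON, so it can also be inspected programmatically. A minimal sketch (the raw string below is copied from the zkCli.sh output above):

```python
import json

# Broker registration data as stored in ZooKeeper under /brokers/ids/0
raw = ('{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},'
       '"endpoints":["PLAINTEXT://slave1:9092"],"jmx_port":-1,'
       '"host":"slave1","timestamp":"1593328382757","port":9092,"version":4}')

info = json.loads(raw)
print(info["endpoints"])          # the broker's advertised listener endpoints
print(info["host"], info["port"])
```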

2. Describe the topic

[root@redis-01 kafka_2.11-2.2.2]# bin/kafka-topics.sh --describe --zookeeper zookeeper120.blockchain.hl95.com:2181,zookeeper124.blockchain.hl95.com:2181,zookeeper125.blockchain.hl95.com:2181 --topic test
Topic:test  PartitionCount:3  ReplicationFactor:3  Configs:
    Topic: test  Partition: 0  Leader: 0  Replicas: 0,2,1  Isr: 0,1,2
    Topic: test  Partition: 1  Leader: 1  Replicas: 1,0,2  Isr: 1,0,2
    Topic: test  Partition: 2  Leader: 2  Replicas: 2,1,0  Isr: 2,1,0
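The Replicas column above comes from Kafka's round-robin replica assignment: the first replica of each partition walks over the brokers in order, and the remaining replicas are offset by a growing shift so that replica sets spread across the cluster. A simplified sketch of that algorithm (Kafka randomizes the start index and shift; they are fixed here so the output matches the --describe assignment above):

```python
def assign_replicas(n_brokers, n_partitions, replication_factor,
                    start_index=0, shift=1):
    """Simplified sketch of Kafka's round-robin replica assignment.

    Kafka picks start_index and shift at random; they are fixed here
    so the result reproduces the assignment shown by --describe.
    """
    assignment = {}
    for p in range(n_partitions):
        # First replica: round-robin over the brokers.
        first = (start_index + p) % n_brokers
        replicas = [first]
        # Remaining replicas: offset from the first by a growing shift.
        for j in range(replication_factor - 1):
            offset = 1 + (shift + j) % (n_brokers - 1)
            replicas.append((first + offset) % n_brokers)
        assignment[p] = replicas
    return assignment

print(assign_replicas(3, 3, 3))
# {0: [0, 2, 1], 1: [1, 0, 2], 2: [2, 1, 0]}
```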

3. List topics

[root@redis-01 kafka_2.11-2.2.2]# bin/kafka-topics.sh --list --zookeeper zookeeper120.blockchain.hl95.com:2181,zookeeper124.blockchain.hl95.com:2181,zookeeper125.blockchain.hl95.com:2181
test

4. Start a producer on one of the servers

[root@redis-01 kafka_2.11-2.2.2]# bin/kafka-console-producer.sh --broker-list zookeeper120.blockchain.hl95.com:9092 --topic test
>

5. Start consumers on the other two servers

[root@redis-02 kafka_2.11-2.2.2]# bin/kafka-console-consumer.sh --bootstrap-server zookeeper124.blockchain.hl95.com:9092 --topic test --from-beginning
[root@redis-03 kafka_2.11-2.2.2]# bin/kafka-console-consumer.sh --bootstrap-server zookeeper125.blockchain.hl95.com:9092 --topic test --from-beginning

6. Type messages on the producer; they appear on both consumers

[root@redis-01 kafka_2.11-2.2.2]# bin/kafka-console-producer.sh --broker-list zookeeper120.blockchain.hl95.com:9092 --topic test 

>hello

>kafka
[root@redis-02 kafka_2.11-2.2.2]# bin/kafka-console-consumer.sh --bootstrap-server zookeeper124.blockchain.hl95.com:9092 --topic test --from-beginning 
hello
kafka
[root@redis-03 kafka_2.11-2.2.2]# bin/kafka-console-consumer.sh --bootstrap-server zookeeper125.blockchain.hl95.com:9092 --topic test --from-beginning 
hello
kafka

7. Delete the topic

[root@redis-01 kafka_2.11-2.2.2]# bin/kafka-topics.sh --delete --zookeeper zookeeper120.blockchain.hl95.com:2181 --topic test
Topic test is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
[root@redis-01 kafka_2.11-2.2.2]#
Original post: https://www.cnblogs.com/sky-cheng/p/13201657.html