Kafka Cluster Installation and Configuration

Official website: https://kafka.apache.org/

Related open-source products: Facebook's Scribe, Apache's Chukwa, LinkedIn's Samza, and Cloudera's Flume.

Environment:
CentOS7
zookeeper-3.4.8
kafka-0.10.0.1

kafka1: 192.168.8.201:9092
kafka2: 192.168.8.202:9092
kafka3: 192.168.8.203:9092

zookeeper
zookeeper: 192.168.8.254:2181
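Setting up ZooKeeper itself is not covered in this article. A minimal sketch for starting the standalone zookeeper-3.4.8 listed above, assuming it is unpacked to /opt/zookeeper with a ready conf/zoo.cfg (both the path and the config are assumptions):
/opt/zookeeper/bin/zkServer.sh start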

kafka
I. Install the JDK
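Kafka 0.10 runs on Java 7 or later. A minimal sketch, assuming the OpenJDK 8 packages from the CentOS 7 base repository:
yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
java -version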

II. Install Kafka
Binary package
1. Download and extract
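A typical download-and-extract sequence, assuming the Apache archive URL for the Scala 2.11 build of this version:
cd /opt
wget https://archive.apache.org/dist/kafka/0.10.0.1/kafka_2.11-0.10.0.1.tgz
tar -xzf kafka_2.11-0.10.0.1.tgz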
mv /opt/kafka_2.11-0.10.0.1 /opt/kafka
2. Configuration file
kafka1
cp /opt/kafka/config/server.properties{,.default}
cat > /opt/kafka/config/server.properties <<EOF

broker.id=1

delete.topic.enable=true


advertised.listeners=PLAINTEXT://192.168.8.201:9092

num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

socket.request.max.bytes=104857600


num.partitions=1

num.recovery.threads.per.data.dir=1

log.dirs=/opt/kafka-logs

log.flush.interval.messages=10000

log.flush.interval.ms=1000

log.retention.hours=168

log.retention.bytes=1073741824

log.segment.bytes=1073741824

log.retention.check.interval.ms=300000


zookeeper.connect=192.168.8.254:2181

zookeeper.connection.timeout.ms=6000

EOF
Note: broker.id must be unique within the cluster (here broker.id=1, broker.id=2, and broker.id=3 respectively).
For kafka2 and kafka3, change broker.id and advertised.listeners to match each node (e.g. broker.id=2 with advertised.listeners=PLAINTEXT://192.168.8.202:9092).
3. Start the Kafka cluster
Tip: JVM options (heap size, etc.) can be modified in /opt/kafka/bin/kafka-server-start.sh
/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
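Run the start command on each of the three nodes. To verify that every broker has registered with ZooKeeper, list /brokers/ids (the expected output ends with the broker.id list [1, 2, 3]):
/opt/kafka/bin/zookeeper-shell.sh 192.168.8.254:2181 ls /brokers/ids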
4. Create a topic (3 partitions, 2 replicas)
/opt/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.8.254:2181 --partitions 3 --replication-factor 2 --topic mytopic
Note: --replication-factor must not exceed the number of available brokers; otherwise topic creation fails with an error like "larger than available brokers: 2" (the number reported is how many brokers were actually registered at the time).
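For illustration, requesting more replicas than there are brokers, say 4 replicas on this 3-broker cluster, triggers that error (the topic name below is just a hypothetical placeholder):
/opt/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.8.254:2181 --partitions 3 --replication-factor 4 --topic badtopic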
5. Describe the topic
/opt/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.8.254:2181 --topic mytopic

[root@node4 ~]# /opt/kafka/bin/kafka-topics.sh --zookeeper 192.168.8.254:2181 --describe --topic mytopic
Topic:mytopic	PartitionCount:3	ReplicationFactor:2	Configs:
	Topic: mytopic	Partition: 0	Leader: 1	Replicas: 1,3	Isr: 1
	Topic: mytopic	Partition: 1	Leader: 1	Replicas: 3,1	Isr: 1
	Topic: mytopic	Partition: 2	Leader: 1	Replicas: 1,3	Isr: 1
(In this capture Isr lists only broker 1, which suggests the other replica was not yet in sync when the command was run.)

6. Create a producer
echo -e "all streams lead to kafka\nhello kafka streams\njoin kafka summit" > file-input.txt
/opt/kafka/bin/kafka-console-producer.sh --broker-list 192.168.8.201:9092,192.168.8.202:9092,192.168.8.203:9092 --topic mytopic < file-input.txt
7. Create a consumer
/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.8.202:9092 --topic mytopic --from-beginning 

[root@node4 ~]# /opt/kafka/bin/kafka-console-producer.sh --broker-list 192.168.8.201:9092,192.168.8.202:9092,192.168.8.203:9092 --topic mytopic < file-input.txt
[root@node4 ~]# /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.8.202:9092 --topic mytopic --from-beginning
hello kafka streams
all streams lead to kafka
join kafka summit
(The consumer prints the lines in a different order than they were produced: Kafka only guarantees ordering within a single partition, and with 3 partitions the three lines were spread across partitions.)


Stop Kafka
/opt/kafka/bin/kafka-server-stop.sh
Delete a topic (deletion only takes full effect because delete.topic.enable=true was set in server.properties above; otherwise the topic is merely marked for deletion)
/opt/kafka/bin/kafka-topics.sh --zookeeper 192.168.8.254:2181 --delete --topic mytopic
or (/opt/kafka/bin/kafka-run-class.sh kafka.admin.DeleteTopicCommand --topic mytopic --zookeeper 192.168.8.254:2181)
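To verify that the topic is gone (or marked for deletion), list the remaining topics:
/opt/kafka/bin/kafka-topics.sh --zookeeper 192.168.8.254:2181 --list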
View Kafka's registration data in ZooKeeper, e.g. topics
/opt/kafka/bin/zookeeper-shell.sh 192.168.8.254:2181 ls /brokers/topics
or (/opt/kafka/bin/kafka-run-class.sh org.apache.zookeeper.ZooKeeperMain -server 192.168.8.254:2181 ls /brokers/topics)



Source package

cd /usr/local/src/kafka-0.10.0.1-src

The following is excerpted from README.md:
### Building a binary release gzipped tar ball ###
    ./gradlew clean
    ./gradlew releaseTarGz

The above command will fail if you haven't set up the signing key. To bypass signing the artifact, you can run:
    ./gradlew releaseTarGz -x signArchives
The release file can be found inside `./core/build/distributions/`.

In actual testing, the build failed with the following error:
Error: Could not find or load main class org.gradle.wrapper.GradleWrapperMain
Leaving this as an open problem for now.
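A likely cause, based on Kafka's README: source releases do not ship gradle/wrapper/gradle-wrapper.jar, so ./gradlew cannot start until the wrapper has been bootstrapped with a locally installed Gradle. A sketch of the usual workaround, assuming Gradle is already installed:
cd /usr/local/src/kafka-0.10.0.1-src
gradle
./gradlew releaseTarGz -x signArchives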
Original article (in Chinese): https://www.cnblogs.com/lixuebin/p/10814001.html