Deploying Kafka and ELK in Docker containers

1. Pull the ZooKeeper image
docker pull wurstmeister/zookeeper

2. Pull the Kafka image
docker pull wurstmeister/kafka:2.11-0.11.0.3

3. Start ZooKeeper
docker run -d --name zookeeper --publish 2181:2181 --volume /etc/localtime:/etc/localtime wurstmeister/zookeeper
This fails with:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused "rootfs_linux.go:58: mounting \"/etc/localtime\" to rootfs \"/var/lib/docker/overlay2/7b63cb086d7b04c3b5a979624d6ea3c39672b268e634d0d80224eefde47e51e2/merged\" at \"/var/lib/docker/overlay2/7b63cb086d7b04c3b5a979624d6ea3c39672b268e634d0d80224eefde47e51e2/merged/etc/localtime\" caused \"not a directory\""": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
Cause: as the error message says, this tries to mount a directory onto a file (or vice versa). Workaround: create a named volume edc-zookeeper-vol (list all volumes with: docker volume ls):
docker volume create edc-zookeeper-vol
Change the mount target to /var/lib/docker/volumes/etc/localtime and run again:
docker run -d --name zookeeper --publish 2181:2181 --volume edc-zookeeper-vol:/var/lib/docker/volumes/etc/localtime wurstmeister/zookeeper

4. Start Kafka
docker run -d --name kafka --publish 9092:9092  --link zookeeper  --env KAFKA_ZOOKEEPER_CONNECT=192.168.0.104:2181  --env KAFKA_ADVERTISED_HOST_NAME=192.168.0.104  --env KAFKA_ADVERTISED_PORT=9092  --volume edc-zookeeper-vol:/var/lib/docker/volumes/etc/localtime  wurstmeister/kafka:2.11-0.11.0.3
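The commands above hard-code the host IP 192.168.0.104; KAFKA_ADVERTISED_HOST_NAME must be an address that clients can actually reach, so it changes from machine to machine. As a small sketch (the helper name `build_kafka_run` is made up for illustration), the Kafka run command can be generated for any host IP:

```shell
# Sketch: build the `docker run` command for Kafka for a given host IP.
# KAFKA_ADVERTISED_HOST_NAME is handed out to clients by the broker,
# so it must match the address clients use to connect.
build_kafka_run() {
  ip="$1"
  printf 'docker run -d --name kafka --publish 9092:9092 --link zookeeper --env KAFKA_ZOOKEEPER_CONNECT=%s:2181 --env KAFKA_ADVERTISED_HOST_NAME=%s --env KAFKA_ADVERTISED_PORT=9092 wurstmeister/kafka:2.11-0.11.0.3' "$ip" "$ip"
}

# Print the command for the IP used in this article.
build_kafka_run 192.168.0.104
```

Running the printed command is then identical to the one-liner above.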

5. Create a topic
The detour below happened because kafka-topics.sh is not under the bin directory:
Find the Kafka container ID: docker ps
Enter the container: docker exec -it [container ID] /bin/bash
Create the topic:
bin/kafka-topics.sh --create --zookeeper 192.168.0.104:2181 --replication-factor 1 --partitions 1 --topic mykafka
Problem: the command fails with: bash: bin/kafka-topics.sh: No such file or directory
Detour: cd /bin and create the file: touch kafka-topics.sh
Problem: running the create command again fails with: bash: bin/kafka-topics.sh: Permission denied, because the new file is not executable.
"Fix": in the file's directory run chmod a+x *.sh and retry the create command — a dead end either way, since the touched file is empty.

In fact, kafka-topics.sh lives under /opt/kafka/bin.
Create the topic again:
/opt/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.0.104:2181 --replication-factor 1 --partitions 1 --topic mykafka
Output: Created topic "mykafka". — the topic was created successfully.
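A note for readers on newer Kafka: the --zookeeper flag used here is correct for 0.11, but since Kafka 2.2 kafka-topics.sh talks to the broker directly via --bootstrap-server, and --zookeeper was removed in Kafka 3.0. A small illustrative helper (the function name is made up) for picking the right flag from a version string:

```shell
# Pick the kafka-topics.sh connection flag for a given Kafka version:
# versions before 2.2 use --zookeeper, 2.2 and later use --bootstrap-server.
kafka_topics_flag() {
  major="${1%%.*}"
  rest="${1#*.}"
  minor="${rest%%.*}"
  if [ "$major" -gt 2 ] || { [ "$major" -eq 2 ] && [ "$minor" -ge 2 ]; }; then
    echo "--bootstrap-server"
  else
    echo "--zookeeper"
  fi
}

kafka_topics_flag 0.11.0   # the version used in this article; prints --zookeeper
```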

6. List topics
/opt/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.0.104:2181
Output: mykafka

7. Start a console producer
/opt/kafka/bin/kafka-console-producer.sh --broker-list 192.168.0.104:9092 --topic mykafka
A > prompt appears, indicating the producer is ready for input.

8. Open another terminal window, enter the Kafka container again, and start a console consumer:
/opt/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.0.104:2181 --topic mykafka --from-beginning

9. Type messages at the producer prompt; the consumer window receives them in real time.
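The same round trip can be exercised non-interactively from the host, assuming the container is named kafka and the mykafka topic exists as above (this needs the running cluster, so it is a sketch rather than something you can run standalone):

```shell
# One-shot smoke test from the host: produce one message...
echo "hello from script" | docker exec -i kafka /opt/kafka/bin/kafka-console-producer.sh \
  --broker-list 192.168.0.104:9092 --topic mykafka

# ...then read a single message back and exit.
docker exec kafka /opt/kafka/bin/kafka-console-consumer.sh \
  --zookeeper 192.168.0.104:2181 --topic mykafka --from-beginning --max-messages 1
```

The --max-messages option makes the console consumer exit after consuming the given number of records instead of blocking forever.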

10. Install ELK
docker pull sebp/elk

11. Start ELK
docker run -p 5601:5601 -p 9200:9200 -p 9300:9300 -p 5044:5044 -e ES_MIN_MEM=128m  -e ES_MAX_MEM=2048m -d --name elk sebp/elk

After about ten seconds the ELK panels are available at: http://localhost:5601/

12. Enter the ELK container: docker exec -it [container ID] /bin/bash

13. Verify that Logstash and Elasticsearch can communicate
/opt/logstash/bin/logstash -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'
Error: Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the "path.data" setting.
Fix: go to the data directory (root@c953e1538736:/opt/logstash/data#) and delete the .lock file.
Run the command again: /opt/logstash/bin/logstash -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'

14. The line Successfully started Logstash API endpoint {:port=>9600} indicates success.
Type a test line, e.g. olidigital.com hello world, and press Enter.
Open http://192.168.0.104:9200/_search?pretty in a browser; the log line just entered should appear in the results. (It did not show up in this run; still unresolved...)

15. Configure Logstash to consume messages from Kafka
1) Go to the config directory:
cd /opt/logstash/config

2) Edit the config file (it will be created if it does not exist):
vi logstash.config

Contents:
input {
        kafka{
                bootstrap_servers => ["192.168.0.104:9092"]
                client_id => "test"
                group_id => "test"
                consumer_threads => 5
                decorate_events => true
                topics => ["mi"]
        }
}
filter{
        json{
                source => "message"
        }
}

output {
        elasticsearch {
                hosts => ["localhost"]
                index => "mi-%{app_id}"
                codec => "json"
        }
}

16. In the ELK container, load the config file so Logstash consumes from Kafka:
/opt/logstash/bin/logstash -f /opt/logstash/config/logstash.config
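Once some events have flowed through the pipeline, whether they actually reached Elasticsearch can be checked from the host with curl (a sketch assuming Elasticsearch is still exposed on localhost:9200 as above; the mi-* index name follows the output section of the config):

```shell
# List Elasticsearch indices and look for the mi-* index created by the output block.
curl -s 'http://localhost:9200/_cat/indices?v' | grep mi-

# Query the indexed documents themselves.
curl -s 'http://localhost:9200/mi-*/_search?pretty'
```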
Original article: https://www.cnblogs.com/zhengwei-cq/p/14708558.html