Kafka Manager + ZooKeeper + Kafka: quickly clearing the Kafka message queue

During performance testing the Kafka message queue grows quite long, and waiting for the application to consume it all on its own takes far too long, so we need a way to clear the Kafka queue quickly.

The cleanup approach: copy the clean Kafka Manager + ZooKeeper + Kafka installations into backup folders. When a cleanup is needed, delete the kafka and zookeeper folders that are in use and restore them from the backups.

This is broken down into a few scripts.

1. Environment cleanup script clean_environment.sh. Run it only once, and make sure the kafka and zookeeper paths in it are correct. If the backup folders already exist, there is no need to run this script.

#!/bin/bash
# Export the existing kafka topics first (see the sketch after this script)
#========zookeeper
#stop zookeeper
/app/zookeeper/bin/zkServer.sh stop;
ps -ef|grep zookeeper |grep -v grep|awk '{print $2}'|xargs kill -9;
ps -aef |grep zookeeper;

#clean zookeeper data
ls -l /app/zookeeper/data/version-2/;
rm -rf /app/zookeeper/data/version-2/;
rm -rf /app/zookeeper/logs/*;
rm -rf /app/zookeeper_backup;
rm -rf /app/zookeeper_org;
sleep 5;
#backup zookeeper file
cp -rp /app/zookeeper/ /app/zookeeper_backup;
cp -rp /app/zookeeper/ /app/zookeeper_org;
ps -aef |grep zookeeper;

# stop kafka
/app/kafka_cluster/bin/kafka-server-stop.sh ;
# if any kafka processes are still alive, kill them
ps -ef|grep kafka_cluster |grep -v grep|awk '{print $2}'|xargs kill -9;
ps -aef |grep kafka_cluster;
#clean the two kafka log folders
rm -rf /app/kafka_cluster/kafka-logs/*;
rm -rf /app/kafka_cluster/logs/*;
rm -rf /app/kafka_cluster_org/;
rm -rf /app/kafka_cluster_backup/;
sleep 5;
#backup kafka file
cp -rp /app/kafka_cluster/ /app/kafka_cluster_org/;
cp -rp /app/kafka_cluster/ /app/kafka_cluster_backup/;
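
The first comment in this script mentions exporting the existing Kafka topics beforehand so they can be recreated later. A minimal sketch of how this could be done, assuming the same ZooKeeper connect string as the topic creation script below and writing into the topics.txt file that create-topics.sh reads:

#!/bin/bash
# Sketch: dump the current topic names into the topics.txt file used by create-topics.sh.
# The ZooKeeper connect string is an assumption, taken from the topic creation script below.
ZK_DIR="192.168.53.125:2181,192.168.53.126:2181,192.168.53.127:2181"
/app/kafka_cluster/bin/kafka-topics.sh --list --zookeeper $ZK_DIR \
    | grep -v "__consumer_offsets" > /home/root/topics.txt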

2. ZooKeeper initialization script init_zk.sh

#!/bin/bash
echo "Stopping the zookeeper service......"
/app/zookeeper/bin/zkServer.sh stop
sleep 3
echo "Re-initializing the zookeeper install directory......"
rm -rf /app/zookeeper
cp -rp /app/zookeeper_backup /app/zookeeper
echo "Starting the zookeeper service......"
/app/zookeeper/bin/zkServer.sh start
result=`ps -ef|grep "/app/zookeeper/"|grep -v grep|awk '{print $2}'`
echo -e "\033[42;30m zookeeper service PID(s): $result \033[0m"
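
To confirm that ZooKeeper actually came back up after the restore, a quick check can be added (a sketch only, assuming the same install path and the default client port 2181):

# Sketch: verify the restored ZooKeeper instance is serving requests.
/app/zookeeper/bin/zkServer.sh status
# The four-letter-word command "ruok" should answer "imok" (assumes nc is installed).
echo ruok | nc 127.0.0.1 2181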

3. Kafka initialization script

#!/bin/bash
kafka_cluster="kafka_cluster"
kafka_path="/app/"$kafka_cluster
kafka_backup_path="/app/"$kafka_cluster"_backup"
echo "Stopping the kafka service....."
$kafka_path/bin/kafka-server-stop.sh
sleep 10
ps -ef|grep $kafka_cluster |grep -v grep|awk '{print $2}'|xargs kill -9
#delete the current kafka folder and restore it from the backup
if [[ -d $kafka_backup_path && ${#kafka_path} -ge 8 ]];then
    echo "kafka_backup_path dir exists...."
    echo "the kafka folder is :$kafka_path , deleting $kafka_path"
    rm -rf $kafka_path
    echo "the kafka backup folder is :$kafka_backup_path , copying files from $kafka_backup_path to $kafka_path"
    cp -rp $kafka_backup_path $kafka_path
    sleep 20
else
   echo "kafka_backup_path : $kafka_backup_path does not exist, exiting..."
   exit 1
fi

echo "Starting the kafka service......"
$kafka_path/bin/kafka-server-start.sh -daemon $kafka_path/config/server.properties 
result=`ps -ef|grep $kafka_cluster|grep -v grep|awk '{print $2}'`
echo -e "\033[42;30m kafka service PID(s): $result \033[0m"
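
After the Kafka folder has been restored and the broker restarted, it is worth checking that the broker has re-registered in ZooKeeper before recreating the topics. A minimal sketch using zkCli.sh, assuming ZooKeeper is reachable locally on port 2181:

# Sketch: list the broker ids registered in ZooKeeper; an empty list means the broker
# has not rejoined the cluster yet.
/app/zookeeper/bin/zkCli.sh -server 127.0.0.1:2181 ls /brokers/ids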

4. Topic creation script create-topics.sh

#!/bin/bash
topic_file="/home/root/topics.txt"
ZK_DIR="192.168.53.125:2181,192.168.53.126:2181,192.168.53.127:2181"
CMD="/app/kafka_cluster/bin/kafka-topics.sh"

for line in `cat $topic_file`
do
    $CMD --create --zookeeper $ZK_DIR --replication-factor 3 --partitions 8 --topic $line
done

The topics file looks like this:

[root@fkafka-01:/home/root]$cat topics.txt
test1
test2
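
Once the topics have been recreated, they can be spot-checked with the same kafka-topics.sh tool, for example (a sketch using one of the topic names from topics.txt):

# Sketch: confirm a recreated topic has the expected replication factor and partition count.
/app/kafka_cluster/bin/kafka-topics.sh --describe \
    --zookeeper 192.168.53.125:2181,192.168.53.126:2181,192.168.53.127:2181 \
    --topic test1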


When a cleanup is needed, execute the following steps in order:

Stop kafka-manager > stop/start zookeeper > stop/start kafka > start kafka-manager

#stop kafka-manager

ps -ef|grep manager |grep -v grep|awk '{print $2}'|xargs kill -9
cd

#restore zookeeper and start it
./init_zk.sh

#restore kafka and start it (run the kafka initialization script from step 3 here)

#start kafka manager
cd /app/kafka-manager-1.3.0.7/
rm RUNNING_PID
nohup bin/kafka-manager -Dconfig.file=conf/application.conf -Dhttp.port=9000 &

#create the topics

./create-topics.sh
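
For convenience, the whole sequence can be wrapped into one script. This is only a sketch: it assumes init_zk.sh and create-topics.sh sit in the current directory as shown above, and that the Kafka initialization script from step 3 has been saved as init_kafka.sh (a placeholder name, not given in the original):

#!/bin/bash
# Sketch of a wrapper that runs the whole cleanup sequence in order.
# Script names and paths are assumptions based on the steps above.

# stop kafka-manager
ps -ef|grep manager |grep -v grep|awk '{print $2}'|xargs kill -9

# restore and start zookeeper
./init_zk.sh

# restore and start kafka (init_kafka.sh is a placeholder for the step 3 script)
./init_kafka.sh

# start kafka-manager
cd /app/kafka-manager-1.3.0.7/ || exit 1
rm -f RUNNING_PID
nohup bin/kafka-manager -Dconfig.file=conf/application.conf -Dhttp.port=9000 &
cd - > /dev/null

# recreate the topics
./create-topics.sh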

Original article: https://www.cnblogs.com/testway/p/7750999.html