Setting the Preferred Leader in Kafka

https://www.pianshen.com/article/12231023160/

After a broker restart, anomalies can appear, for example a partition's Preferred Leader flag changing from true to false.

When a topic is created, Kafka tries to spread its partitions evenly across all brokers, and likewise distributes the replicas evenly across different brokers.

All replicas of a partition are collectively called the "assigned replicas"; the first replica in the assigned-replicas list is the "preferred replica". For a freshly created topic, the preferred replica is normally the leader, and the leader replica handles all reads and writes.

As time goes by, however, brokers go down, leadership migrates, and the cluster load becomes unbalanced. We therefore want to rebalance the topic's leaders so that each partition again uses its preferred replica as leader.

Monitoring Kafka with kafka-eagle, we saw that the broker hosting partition 3 had restarted abnormally. In the screenshot, partition 3's Preferred Leader is false: its Replicas list is [1,3,4] and its current leader is replica 3. Because the leader (replica 3) is not the first replica in the Replicas list (replica 1), Preferred Leader shows as false.
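
The same check can be made from the command line without kafka-eagle; a minimal sketch using the ZooKeeper-based kafka-topics.sh from this install (the connection string matches the one used in the reassignment command below):

# /usr/local/xyhadoop/kafka/bin/kafka-topics.sh --zookeeper localhost:2181/kafka --describe --topic data_report_h5_merged_app

For each partition the output lists Leader, Replicas and Isr; for partition 3 the Leader (3) is not the first entry of Replicas ([1,3,4]), which is exactly the condition kafka-eagle reports as Preferred Leader = false.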

There are two ways to bring Preferred Leader back to true:

  • Method 1: move leadership back to replica 1
  • Method 2: reorder the Replicas list to [3,1,4]

Let's try method 2.

# cat move.json
{
    "partitions": [
    {
        "topic": "data_report_h5_merged_app",
        "partition": 3,
        "replicas": [
            3,
            1,
            4
        ]
    }]
}

 
 

# /usr/local/xyhadoop/kafka/bin/kafka-reassign-partitions.sh --zookeeper localhost:2181/kafka --reassignment-json-file move.json --execute
Current partition replica assignment

{"version":1,"partitions":[{"topic":"data_report_h5_merged_app","partition":2,"replicas":[1,2,3],"log_dirs":["any","any","any"]},{"topic":"data_report_h5_merged_app","partition":4,"replicas":[3,4,5],"log_dirs":["any","any","any"]},{"topic":"data_report_h5_merged_app","partition":1,"replicas":[5,1,2],"log_dirs":["any","any","any"]},{"topic":"data_report_h5_merged_app","partition":5,"replicas":[4,1,2],"log_dirs":["any","any","any"]},{"topic":"data_report_h5_merged_app","partition":7,"replicas":[1,3,4],"log_dirs":["any","any","any"]},{"topic":"data_report_h5_merged_app","partition":0,"replicas":[4,5,1],"log_dirs":["any","any","any"]},{"topic":"data_report_h5_merged_app","partition":3,"replicas":[1,3,4],"log_dirs":["any","any","any"]},{"topic":"data_report_h5_merged_app","partition":6,"replicas":[5,2,3],"log_dirs":["any","any","any"]}]}

Save this to use as the --reassignment-json-file option during rollback
Successfully started reassignment of partitions.
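
The --execute run only starts the reassignment; before relying on it, the same tool can be re-run in --verify mode with the same JSON file (a sketch, reusing move.json and the ZooKeeper string from above):

# /usr/local/xyhadoop/kafka/bin/kafka-reassign-partitions.sh --zookeeper localhost:2181/kafka --reassignment-json-file move.json --verify

Once it reports the reassignment of partition 3 as completed, the replica order stored in ZooKeeper is [3,1,4] and replica 3 is the preferred replica.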

Checking kafka-eagle again, the Preferred Leader for partition 3 is back to true.

 

Case 2:

Run a preferred-replica election for all topics:

./bin/kafka-preferred-replica-election.sh --zookeeper hadoop16:2181,hadoop17:2181,hadoop18:2181/kafka08
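
Manual election is usually only needed when automatic rebalancing is disabled or has not kicked in yet; otherwise the controller moves leadership back to the preferred replica on its own. A sketch for checking the relevant broker settings (the config path follows the install path used earlier and is an assumption; the values shown are the usual Kafka defaults):

# grep -E 'auto.leader.rebalance.enable|leader.imbalance' /usr/local/xyhadoop/kafka/config/server.properties

If nothing is set explicitly, the broker uses the defaults:

auto.leader.rebalance.enable=true
leader.imbalance.check.interval.seconds=300
leader.imbalance.per.broker.percentage=10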

 

Run a preferred-replica election for a single topic:

[sankuai@data-kafka01 kafka]$ cat topicPartitionList.json
{
  "partitions": [
    {"topic": "test.example", "partition": 0}
  ]
}

 

./bin/kafka-preferred-replica-election.sh --zookeeper hadoop16:2181,hadoop17:2181,hadoop18:2181/kafka08 --path-to-json-file topicPartitionList.json
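
On newer Kafka releases (2.4+) kafka-preferred-replica-election.sh is deprecated in favour of kafka-leader-election.sh, which talks to the brokers instead of ZooKeeper; a rough equivalent of the per-topic command above (the bootstrap address is an assumption based on the host in the prompt):

./bin/kafka-leader-election.sh --bootstrap-server data-kafka01:9092 --election-type preferred --topic test.example --partition 0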

Original article: https://www.cnblogs.com/kebibuluan/p/13813902.html