Kafka consumer groups and consumers

consumer group

consumer instance

A consumer group may contain one or more consumers, and the same consumer group can subscribe to one or more topics. A given partition of a topic is consumed by only one consumer within a consumer group. So how are partitions mapped to consumers?

Suppose consumer group cg1 (group.id=cg1) subscribes to topic1, cg1 has three consumers c1, c2 and c3, and topic1 has five partitions p1, p2, p3, p4 and p5. Which partition(s) of topic1 does c1 consume? And which consumer in cg1 consumes p1?

The Kafka 2.2.0 source has a PartitionAssignor interface (in the org.apache.kafka.clients.consumer.internals package of kafka-clients.jar). The interface has two direct implementations: the abstract class AbstractPartitionAssignor (also in org.apache.kafka.clients.consumer.internals of kafka-clients.jar) and StreamsPartitionAssignor (in org.apache.kafka.streams.processor.internals of kafka-streams.jar). AbstractPartitionAssignor has three subclasses, all in the org.apache.kafka.clients.consumer package of kafka-clients.jar: RangeAssignor, RoundRobinAssignor and StickyAssignor. These concrete classes implement different assignment strategies: RangeAssignor (the default) works topic by topic, sorting the partitions numerically and the consumers lexicographically and handing each consumer a contiguous range of partitions; RoundRobinAssignor spreads all subscribed partitions over the consumers one at a time in round-robin order; StickyAssignor produces an assignment that is as balanced as possible while preserving as many existing assignments as it can across rebalances. For the example above, RangeAssignor would assign c1 → p1, p2; c2 → p3, p4; c3 → p5 (assuming the member ids sort in the order c1, c2, c3).
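The strategy can be selected explicitly through the consumer's partition.assignment.strategy property. Below is a minimal sketch of such a configuration (the broker address, group id and class name are placeholders, and RoundRobinAssignor is chosen only as an example):

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.RoundRobinAssignor;

    class AssignorConfigExample {
        // builds consumer properties that switch from the default RangeAssignor to round-robin assignment
        static Properties consumerProps() {
            Properties props = new Properties();
            props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
            props.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "cg1");
            props.setProperty(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                    RoundRobinAssignor.class.getName());
            return props;
        }
    }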

On the consumer side a record is represented by a ConsumerRecord, whose member fields are:

String topic, int partition, long offset, long timestamp, TimestampType timestampType, int serializedKeySize, int serializedValueSize, Headers headers, K key, V value, Optional<Integer> leaderEpoch

Of these, leaderEpoch is the least intuitive: it is the epoch of the partition leader that appended the record. The epoch is incremented every time partition leadership changes, and the consumer uses it to detect log truncation after a leader failover; it is an Optional because older brokers do not provide it.
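Each field is exposed through an accessor method on ConsumerRecord; a small illustrative sketch of reading a few of them (the class and method names are just placeholders):

    import org.apache.kafka.clients.consumer.ConsumerRecord;

    class RecordPrinter {
        // prints a few of the fields listed above
        static void print(ConsumerRecord<String, String> record) {
            System.out.printf("topic=%s partition=%d offset=%d timestamp=%d leaderEpoch=%s key=%s value=%s%n",
                    record.topic(), record.partition(), record.offset(), record.timestamp(),
                    record.leaderEpoch(), record.key(), record.value());
        }
    }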

A KafkaConsumer instance is a Kafka consumer client that consumes records from a Kafka cluster. The client transparently handles the failure of Kafka brokers and transparently adapts as the topic partitions it fetches migrate within the cluster. Consumers in the same consumer group balance the consumption of messages across the group. The consumer maintains TCP connections to the Kafka brokers in order to fetch data. A consumer client must be closed after use, otherwise these resources leak. Unlike the producer client, the consumer client is not thread safe.
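Since poll() blocks, one way to honor the close requirement is to interrupt a blocking poll() from another thread with wakeup(), the only KafkaConsumer method that is safe to call concurrently, and close the consumer in a finally block. A minimal sketch of that pattern (the loop body is a placeholder; a full configuration example follows below):

    import java.time.Duration;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.errors.WakeupException;

    class PollLoop {
        static void run(KafkaConsumer<String, String> consumer) {
            // wakeup() may be called from another thread; it makes a blocking poll() throw WakeupException
            Runtime.getRuntime().addShutdownHook(new Thread(consumer::wakeup));
            try {
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                    records.forEach(System.out::println);
                }
            } catch (WakeupException e) {
                // expected during shutdown, nothing to do
            } finally {
                consumer.close(); // leaves the group and releases connections and buffers
            }
        }
    }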


Offsets and Consumer Position
Kafka maintains a numerical offset for each record in a partition. This offset acts as a unique identifier of a record within that partition, and also denotes the position of the consumer in the partition. For example, a consumer which is at position 5 has consumed records with offsets 0 through 4 and will next receive the record with offset 5. There are actually two notions of position relevant to the user of the consumer:
The position(TopicPartition) of the consumer gives the offset of the next record that will be given out. It will be one larger than the highest offset the consumer has seen in that partition. It automatically advances every time the consumer receives messages in a call to poll(Duration).
The committed position is the last offset that has been stored securely. Should the process fail and restart, this is the offset that the consumer will recover to. The consumer can either automatically commit offsets periodically, or it can choose to control this committed position manually by calling one of the commit APIs (e.g. commitSync() and commitAsync(OffsetCommitCallback)).
This distinction gives the consumer control over when a record is considered consumed. It is discussed in further detail below.
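Both positions can be inspected on a live consumer; the sketch below assumes the consumer already has partitions assigned (the class name is illustrative):

    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    class PositionInspector {
        // compares the fetch position with the last committed offset for each assigned partition
        static void inspect(KafkaConsumer<String, String> consumer) {
            for (TopicPartition tp : consumer.assignment()) {
                long position = consumer.position(tp);                 // offset of the next record to be returned
                OffsetAndMetadata committed = consumer.committed(tp);  // last committed offset, or null if none
                System.out.println(tp + ": position=" + position
                        + ", committed=" + (committed == null ? "none" : committed.offset()));
            }
        }
    }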

A simple example:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class SimpleConsumerDemo {

        public static void main(String[] args) {
            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "127.0.0.1:9092");
            props.setProperty("group.id", "my-test-consumer-group2");
            // commit consumed offsets automatically, every 500 ms
            props.setProperty("enable.auto.commit", "true");
            props.setProperty("auto.commit.interval.ms", "500");
            // with no committed offset yet, start from the beginning of the topic
            props.setProperty("auto.offset.reset", "earliest");
            props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Collections.singletonList("test"));
            while (true) {
                // block for at most 10 seconds waiting for new records
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(10000));
                System.out.println(System.currentTimeMillis());
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record);
                }
            }
        }
    }

The consumer client pulls data from the Kafka cluster with poll(Duration timeout), which returns a ConsumerRecords object. ConsumerRecords implements the Iterable<ConsumerRecord<K, V>> interface; on the consumer side a single record is a ConsumerRecord, and ConsumerRecords is a collection of them. In the example above, poll blocks for at most 10,000 ms, and it only blocks while no data is available to fetch, for example when the consumer has been started but the producer has not produced anything yet.
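Besides iterating over all records at once, a ConsumerRecords instance can also be walked partition by partition; a small sketch (the class name is illustrative):

    import java.util.List;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.common.TopicPartition;

    class PerPartitionHandling {
        // handles the records of each partition returned by a poll separately
        static void handle(ConsumerRecords<String, String> records) {
            for (TopicPartition partition : records.partitions()) {
                List<ConsumerRecord<String, String>> partitionRecords = records.records(partition);
                System.out.println(partition + " -> " + partitionRecords.size() + " records");
            }
        }
    }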

enable.auto.commit defaults to true, meaning the consumer's offsets are committed to Kafka automatically. The consumer has to commit its offsets; otherwise, after a restart it will re-consume messages it has already processed. Besides the default automatic commit, offsets can also be committed manually, which requires explicitly calling one of the commitXXX methods on the KafkaConsumer instance, such as the no-argument commitSync(). auto.commit.interval.ms defaults to 5000, i.e. offsets are committed only every 5000 ms, which is quite long; something around 500 ms is more reasonable.
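A minimal sketch of the manual variant, assuming enable.auto.commit has been set to "false" in the consumer properties (process() is a hypothetical application-specific handler):

    import java.time.Duration;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    class ManualCommitLoop {
        static void run(KafkaConsumer<String, String> consumer) {
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // hypothetical application-specific handling
                }
                // commit the offsets of everything returned by this poll, only after it has been processed
                consumer.commitSync();
            }
        }

        static void process(ConsumerRecord<String, String> record) {
            System.out.println(record);
        }
    }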

In a quick test without committing offsets: as the producer keeps writing to the topic, running kafka-consumer-groups --describe --group my-test-consumer-group2 --bootstrap-server 127.0.0.1:9092 shows that log-end-offset keeps growing while current-offset never changes, so lag keeps growing as well. log-end-offset is the offset at the end of the partition's log, current-offset is the consumer's committed offset, and lag is the difference between the two, i.e. how far the consumer is behind. After the consumer is restarted, it resumes from current-offset, the position committed during the previous run, and therefore re-consumes records it had already read.

auto.offset.reset defaults to latest, meaning that when a consumer consumes for the first time (there is no committed offset yet), it starts from the latest offset. If some messages are written to the topic first and the consumer is only started afterwards, the consumer cannot read those earlier messages; it only sees messages produced after it started.

We can explicitly set auto.offset.reset to earliest; then, on its first start, the consumer can also read the messages that were already in the topic before it started.

Note that this setting only affects the consumer's very first run. Once the group has committed offsets, every subsequent start resumes from the last committed offset, no matter whether auto.offset.reset is latest or earliest.
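If a topic has to be re-read even though the group has already committed offsets, auto.offset.reset will not help; one option is to seek explicitly once partitions are assigned. A minimal sketch of that approach (the class name is illustrative):

    import java.util.Collection;
    import java.util.Collections;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    class ReadFromBeginning {
        // rewinds to the start of every assigned partition, ignoring committed offsets
        static void subscribeFromBeginning(KafkaConsumer<String, String> consumer, String topic) {
            consumer.subscribe(Collections.singletonList(topic), new ConsumerRebalanceListener() {
                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                    // nothing to do
                }

                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    consumer.seekToBeginning(partitions);
                }
            });
        }
    }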

How many records, or how much data, does a single poll return?

It is controlled by the following parameters (a combined configuration sketch follows the list):

fetch.min.bytes: the minimum amount of data the broker returns for a fetch request; default 1 byte.

fetch.max.bytes: the maximum amount of data the broker returns for a fetch request; default 50 MB.

max.partition.fetch.bytes: the maximum amount of data returned per partition in a fetch request; default 1 MB.

max.poll.records: the maximum number of records returned by a single call to poll(); default 500.
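These settings go into the same consumer Properties as the rest of the configuration; a minimal sketch that simply spells out the defaults (the values are illustrative, not recommendations):

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;

    class FetchTuningExample {
        // spells out the fetch-related defaults; tune only if the defaults prove inadequate
        static Properties fetchTuning() {
            Properties props = new Properties();
            props.setProperty(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, "1");                 // fetch.min.bytes
            props.setProperty(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, "52428800");          // fetch.max.bytes, 50 MB
            props.setProperty(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, "1048576"); // max.partition.fetch.bytes, 1 MB
            props.setProperty(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "500");              // max.poll.records
            return props;
        }
    }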

Original post: https://www.cnblogs.com/koushr/p/5873375.html