In Apache Kafka why can't there be more consumer instances than partitions?
Ok, to understand this, one needs to understand several parts.
- In order to provide total ordering, a message can be sent to only one consumer. Otherwise it would be extremely inefficient, because the broker would need to wait for all consumers to receive a message before sending the next one:
However, although the server hands out messages in order, the messages are delivered asynchronously to consumers, so they may arrive out of order on different consumers. This effectively means the ordering of the messages is lost in the presence of parallel consumption. Messaging systems often work around this by having a notion of "exclusive consumer" that allows only one process to consume from a queue, but of course this means that there is no parallelism in processing.
Kafka does it better. By having a notion of parallelism—the partition—within the topics, Kafka is able to provide both ordering guarantees and load balancing over a pool of consumer processes. This is achieved by assigning the partitions in the topic to the consumers in the consumer group so that each partition is consumed by exactly one consumer in the group. By doing this we ensure that the consumer is the only reader of that partition and consumes the data in order. Since there are many partitions this still balances the load over many consumer instances. Note however that there cannot be more consumer instances than partitions.
Kafka only provides a total order over messages within a partition, not between different partitions in a topic.
Also, what you might think of as a performance penalty (multiple partitions) is actually a performance gain, because Kafka can process different partitions completely in parallel rather than handling one partition at a time.
- The picture shows different consumer groups, but the limit of at most one consumer per partition applies only within a group. You can still have multiple consumer groups.
At the beginning, two scenarios are described:
If all the consumer instances have the same consumer group, then this works just like a traditional queue balancing load over the consumers.
If all the consumer instances have different consumer groups, then this works like publish-subscribe and all messages are broadcast to all consumers.
So, the more consumer groups you have, the lower the performance, as Kafka needs to deliver the messages to all of those groups while guaranteeing the total order.
On the other hand, the fewer groups and the more partitions you have, the more you gain from parallelizing the message processing.
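Putting those points together, here is a toy sketch (illustration only, not Kafka's real partition assignor, and all names are made up) of why extra group members sit idle: each partition is dealt to exactly one consumer in the group, so once there are more consumers than partitions, someone gets nothing.

```java
import java.util.*;

// Toy round-robin partition assignment: each partition goes to exactly
// one member of the group, so it stays the sole reader of that partition.
public class AssignmentSketch {
    static Map<String, List<Integer>> assign(int partitions, List<String> members) {
        Map<String, List<Integer>> assignment = new LinkedHashMap<>();
        for (String m : members) assignment.put(m, new ArrayList<>());
        for (int p = 0; p < partitions; p++) {
            // deal partition p to one member only -- ordering within p is preserved
            assignment.get(members.get(p % members.size())).add(p);
        }
        return assignment;
    }

    public static void main(String[] args) {
        // 4 group members, but only 3 partitions: c4 ends up with nothing to read
        System.out.println(assign(3, Arrays.asList("c1", "c2", "c3", "c4")));
        // prints {c1=[0], c2=[1], c3=[2], c4=[]}
    }
}
```

With fewer members than partitions, the same logic hands some members several partitions, which is the load-balancing case.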
It is important to recall that Kafka keeps one offset per [consumer-group, topic, partition]. That is the reason.
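A toy model of that bookkeeping (a sketch of the idea, not Kafka's actual storage; the class and method names are invented) shows why two groups can share a partition while two consumers in the same group collide: there is exactly one committed offset per (group, topic, partition) key.

```java
import java.util.*;

// Toy model: one committed offset per (consumer-group, topic, partition).
// Different groups reading the same partition progress independently;
// two consumers in the SAME group would fight over one entry.
public class OffsetStore {
    private final Map<String, Long> committed = new HashMap<>();

    static String key(String group, String topic, int partition) {
        return group + "/" + topic + "/" + partition;
    }

    void commit(String group, String topic, int partition, long offset) {
        committed.put(key(group, topic, partition), offset);
    }

    long fetch(String group, String topic, int partition) {
        return committed.getOrDefault(key(group, topic, partition), 0L);
    }

    public static void main(String[] args) {
        OffsetStore store = new OffsetStore();
        store.commit("groupA", "mytopic", 0, 42);
        store.commit("groupB", "mytopic", 0, 7); // independent group, independent offset
        System.out.println(store.fetch("groupA", "mytopic", 0)); // 42
        System.out.println(store.fetch("groupB", "mytopic", 0)); // 7
    }
}
```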
I guess the sentence
Note however that there cannot be more consumer instances than partitions.
is referring to the "automatic consumer group rebalance" mode, the default consumer mode you get when you just subscribe() some number of consumers to a list of topics.
I assume that because, at least with Kafka 0.9.x, nothing prevents several consumer instances that are members of the same group from reading from the same partition.
You can do something like this in two or more different threads:
import java.util.*;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.TopicPartition;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "MyConsumerGroup");
props.put("enable.auto.commit", "false");
props.put("key.deserializer", "org.apache.kafka.common.serialization.IntegerDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(props);
TopicPartition partition0 = new TopicPartition("mytopic", 0);
consumer.assign(Arrays.asList(partition0)); // manual assignment, no group rebalancing
ConsumerRecords<Integer, String> records = consumer.poll(1000);
and you will have two (or more) consumers reading from the same partition.
Now, the "issue" is that both consumers will be sharing the same offset; you have no other option, since there is only one group, topic, and partition in play.
If both consumers read the current offset at the same time, then both of them will read the same value, and both of them will get the same messages.
If you want each consumer to read different messages, you will have to synchronize them so that only one can fetch and commit the offset at a time.
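That synchronization could look like the following sketch (pure Java, no broker involved; the names are illustrative, not a Kafka API): the shared offset is fetched and advanced in one atomic step, so every "message" index is claimed by exactly one of the two threads.

```java
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of externally synchronizing two consumers that share one offset:
// getAndIncrement() makes fetch-and-advance a single atomic operation,
// so no offset is ever "processed" twice.
public class SharedOffsetDemo {
    static Set<Long> runConsumers(int messagesPerThread) throws InterruptedException {
        AtomicLong sharedOffset = new AtomicLong(0); // stands in for the single committed offset
        Set<Long> claimed = ConcurrentHashMap.newKeySet();
        Runnable consumer = () -> {
            for (int i = 0; i < messagesPerThread; i++) {
                long offset = sharedOffset.getAndIncrement(); // fetch + advance atomically
                claimed.add(offset);                          // "process" the message at that offset
            }
        };
        Thread t1 = new Thread(consumer), t2 = new Thread(consumer);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return claimed;
    }

    public static void main(String[] args) throws InterruptedException {
        // two threads, 100 fetches each: 200 distinct offsets, none duplicated
        System.out.println(runConsumers(100).size()); // prints 200
    }
}
```

Without the atomic fetch-and-advance (e.g. reading the offset and committing it in two separate steps), both threads could read the same value and process the same message, which is exactly the situation described above.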
There is a reason why Kafka cannot support multiple consumers per partition.
The Kafka broker writes the data for each partition to its own file. So if two partitions are configured, the broker creates two files, and multiple consumer groups can be assigned to receive messages from them.
Now, within each partition, only one consumer consumes messages, based on an offset into the file. E.g., consumer 1 will first read messages from file offset 0 to 4096. These offsets are part of the payload, so the consumer knows which offset to use when requesting the next batch of messages.
If multiple consumers were reading from the same partition, consumer 1 would read from file offsets 0-4096, but consumer 2 would still try to read from offset 0 unless it also receives the messages sent to consumer 1. Sending the same messages to multiple consumers is not load balancing, so Kafka divides consumers into consumer groups: every consumer group receives the messages, but within a consumer group only one consumer receives each message.
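That duplicate-read scenario can be sketched as follows (a toy model of a partition log, not Kafka code; all names are invented): when each consumer keeps its own offset into the same log and both start at 0, both read the same messages, which is broadcast rather than load balancing.

```java
import java.util.*;

// Toy partition log: each consumer reads from its own offset into the
// same ordered list of messages. Two consumers starting at offset 0
// fetch identical batches -- duplicated work, not load balancing.
public class DuplicateReadSketch {
    static List<String> readFrom(List<String> partitionLog, int offset, int max) {
        return partitionLog.subList(offset, Math.min(offset + max, partitionLog.size()));
    }

    public static void main(String[] args) {
        List<String> log = Arrays.asList("m0", "m1", "m2", "m3");
        List<String> consumer1 = readFrom(log, 0, 2); // [m0, m1]
        List<String> consumer2 = readFrom(log, 0, 2); // [m0, m1] again
        System.out.println(consumer1.equals(consumer2)); // prints true
    }
}
```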