Apache Kafka: How It Handles Requests

Akhilesh Mahajan
4 min read · Jul 18, 2023


In this Part 2, we will take a deep dive into how Kafka handles client requests.

WHAT'S INSIDE A KAFKA CLUSTER

The Kafka cluster mainly consists of:

  1. A Control Plane: responsible for handling all the metadata of the Kafka cluster.
  2. A Data Plane: responsible for handling the actual data (client request processing).
[Figure: Inside a Kafka cluster]

HOW THE DATA PLANE HANDLES CLIENT REQUESTS

There are mainly two types of client requests:

  1. Produce request from the producer.
  2. Fetch request from the consumer.

Both types of requests go through some common steps.

A) HOW PRODUCE REQUESTS ARE HANDLED

  1. Assign Record To Partition
    The client starts by sending records. Each record is assigned to a partition so the producer knows which partition of the topic the record should go to. If the record has a key, the partition is chosen by hashing the key; otherwise, a round-robin mechanism is used to spread records across partitions (a simplified sketch follows below).
[Figure: Assigning a record to a partition]
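
To make this concrete, here is a minimal sketch of the partition-assignment logic. It is not Kafka's actual implementation (the real default partitioner uses murmur2 hashing); the class and method names are hypothetical and chosen only for illustration.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Hypothetical, simplified sketch of producer-side partition assignment.
public class PartitionAssignmentSketch {

    static int choosePartition(byte[] key, int numPartitions, int roundRobinCounter) {
        if (key != null) {
            // Keyed record: hash the key so the same key always lands on the same partition.
            int hash = Arrays.hashCode(key);
            return (hash & 0x7fffffff) % numPartitions;
        }
        // Keyless record: spread records across partitions round-robin.
        return roundRobinCounter % numPartitions;
    }

    public static void main(String[] args) {
        byte[] key = "user-42".getBytes(StandardCharsets.UTF_8);
        System.out.println("Keyed record goes to partition " + choosePartition(key, 6, 0));
        System.out.println("Keyless record goes to partition " + choosePartition(null, 6, 7));
    }
}
```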

2. Accumulate and Compress Records
Sending each record on its own is not very efficient. Instead, the producer library stores records in an in-memory data structure called a record batch. It is also more efficient to compress a batch of records than to compress each record individually.
Batches are drained and sent based on two properties:
1. By time (after a certain period of time has elapsed)
2. By size (after the batch buffer reaches its size limit)

[Figure: Accumulating and compressing records]

The produce request is then sent to the corresponding broker (the leader of the target partition).
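
On the client side, this batching and compression behaviour is controlled by producer configuration: linger.ms is the time-based trigger, batch.size is the size-based trigger, and compression.type selects the codec. A minimal sketch, assuming a local broker at localhost:9092 and a placeholder topic name:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class BatchingProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Time-based trigger: wait up to 20 ms to fill a batch before sending.
        props.put(ProducerConfig.LINGER_MS_CONFIG, "20");
        // Size-based trigger: send the batch once it reaches 32 KB.
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, Integer.toString(32 * 1024));
        // Compress the whole batch rather than individual records.
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 100; i++) {
                producer.send(new ProducerRecord<>("demo-topic", "key-" + i, "value-" + i));
            }
        } // close() flushes any remaining batched records
    }
}
```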

3. Adding the Request to a Queue
The request first lands in the broker's socket receive buffer, from where it is picked up by one of the network threads. The network threads are designed to do only lightweight work: they take the client request from the socket receive buffer and put it onto the shared request channel. From there, requests are picked up by a second thread pool, called the IO threads.

[Figure: Adding the request to the queue]
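
The sizes of these two thread pools and queues are broker settings that normally live in server.properties. The property names and default values below are Kafka's; the Java wrapper class is just a hypothetical way to list them here:

```java
import java.util.Properties;

// Hypothetical helper that lists the broker settings sizing the
// request-handling machinery described above (defaults shown).
public class BrokerRequestHandlingConfig {
    public static Properties defaults() {
        Properties props = new Properties();
        props.put("num.network.threads", "3");              // threads reading/writing sockets
        props.put("num.io.threads", "8");                   // IO (request handler) threads
        props.put("queued.max.requests", "500");            // capacity of the shared request channel
        props.put("socket.receive.buffer.bytes", "102400"); // socket receive buffer
        props.put("socket.send.buffer.bytes", "102400");    // socket send buffer
        return props;
    }

    public static void main(String[] args) {
        defaults().forEach((key, value) -> System.out.println(key + "=" + value));
    }
}
```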

An IO thread first validates the CRC (Cyclic Redundancy Check) of the batch and then appends the data to a data structure called the commit log.

[Figure: An IO thread verifies the CRC]
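
As a rough illustration of what a CRC check means, the toy example below recomputes a CRC32C checksum over the received bytes and compares it with the checksum sent along with the data. The real broker verifies the checksum stored inside each record batch; this sketch does not parse the batch format.

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32C;

// Minimal CRC illustration: recompute the checksum over the received bytes
// and compare it with the checksum computed by the sender.
public class CrcCheckSketch {

    static long checksum(byte[] data) {
        CRC32C crc = new CRC32C();
        crc.update(data, 0, data.length);
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] batch = "some record batch bytes".getBytes(StandardCharsets.UTF_8);
        long sentCrc = checksum(batch);    // computed on the producer side
        long recomputed = checksum(batch); // recomputed on the broker side

        if (sentCrc != recomputed) {
            throw new IllegalStateException("Corrupt batch: CRC mismatch");
        }
        System.out.println("CRC OK: " + recomputed);
    }
}
```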

The commit log is organized on disk as a set of segments. Each segment consists of two parts:
a) The log file, where the actual data is appended.
b) The index, which maps an offset to the actual position of the record within the log file.

[Figure: Commit log segments]
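
Here is a toy, in-memory model of a segment plus its offset index, just to show how an offset lookup maps to a position inside the segment. It is not how the broker actually stores segments (those are .log and .index file pairs on disk, and the real index is sparse).

```java
import java.util.TreeMap;

// Toy model of one log segment: an append-only buffer standing in for the
// .log file, plus an index mapping record offsets to positions within it.
public class SegmentSketch {
    private final StringBuilder log = new StringBuilder();        // stands in for the .log file
    private final TreeMap<Long, Integer> index = new TreeMap<>(); // offset -> position in the log
    private long nextOffset = 0;

    // Append a record and remember where it starts.
    long append(String record) {
        long offset = nextOffset++;
        index.put(offset, log.length());
        log.append(record);
        return offset;
    }

    // Find the position of the indexed entry at or just before the requested
    // offset; a real (sparse) index then scans forward from that point.
    int positionFor(long offset) {
        return index.floorEntry(offset).getValue();
    }

    public static void main(String[] args) {
        SegmentSketch segment = new SegmentSketch();
        segment.append("record-0");
        long off1 = segment.append("record-1");
        System.out.println("Offset " + off1 + " starts at position " + segment.positionFor(off1));
    }
}
```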

By default, the acknowledgement is sent only after the data has been replicated to the other brokers that host replicas of the partition. Since there is a limited number of IO threads, the pending request has to be detached from the IO thread while replication completes. The request is therefore parked in a map-like data structure called purgatory, and the IO thread is freed to process further requests.

[Figure: Adding the response to the socket send buffer]

Once replication to the other brokers is complete, an acknowledgement response is generated and placed on the shared response queue. From there, a network thread picks up the response and writes it to the socket send buffer.
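
From the producer's point of view, this replicate-then-acknowledge behaviour is governed by the acks setting. A minimal sketch that waits for the acknowledgement (broker address and topic name are placeholders):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

public class AcksExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // "all": the leader responds only after the in-sync replicas have the record.
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Blocking on get() waits for the acknowledgement described above.
            RecordMetadata metadata = producer
                    .send(new ProducerRecord<>("demo-topic", "key", "value"))
                    .get();
            System.out.printf("Acked: partition=%d offset=%d%n",
                    metadata.partition(), metadata.offset());
        }
    }
}
```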

This is how produce requests from producers are handled.

B) HOW FETCH REQUESTS ARE HANDLED

The client sends a fetch request specifying the topic, the partition, and the offset from which data should be fetched. The fetch request goes through the same steps as a produce request. In this case, the IO threads use the index file to find the actual position of the record in the log segment.
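
On the consumer side, a fetch for a specific topic, partition, and offset looks roughly like this (broker address, topic name, and starting offset are placeholders):

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class FetchExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition partition = new TopicPartition("demo-topic", 0);
            consumer.assign(Collections.singletonList(partition)); // pick the partition manually
            consumer.seek(partition, 42L);                         // start fetching from offset 42

            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d key=%s value=%s%n",
                        record.offset(), record.key(), record.value());
            }
        }
    }
}
```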

It is possible that no new data is available in the partition. In that case, the consumer would keep polling and doing unnecessary work. To avoid this, the consumer can specify the minimum number of bytes it wants in a response, along with the maximum time it is willing to wait. The broker then holds the fetch request until either enough bytes are available or the wait time expires.
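
These two knobs correspond to the consumer settings fetch.min.bytes and fetch.max.wait.ms; the values below are illustrative and would be merged into the consumer properties shown earlier.

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

// Illustrative values only: ask the broker to wait for at least 1 KB of data,
// but no longer than 500 ms, before answering a fetch request.
public class FetchTuning {
    public static Properties fetchTuningProps() {
        Properties props = new Properties();
        props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, "1024");
        props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, "500");
        return props;
    }
}
```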

The response is then sent back to the client along the same path (purgatory, shared response queue, network thread, socket send buffer) as the produce-request acknowledgement.

PART-1: https://medium.com/@akhileshmj/kafka-a-streaming-application-part-1-63f4ca7a5efd

Reference: https://youtu.be/s6-uDxDKH1k
Follow for more: https://www.linkedin.com/in/akhileshmahajan-/

Written by Akhilesh Mahajan

Full-Stack Developer | Golang, Java, Rust, Node, React Developer | AWS☁️, Docker, Kubernetes | Passionate about distributed systems and cloud-native applications