Why use Avro with Kafka - How to handle POJOs
You don't need AVSC; you can use an AVDL file, which looks roughly like a POJO containing only the fields:
@namespace("com.example.mycode.avro")
protocol ExampleProtocol {
  record User {
    long id;
    string name;
  }
}
Which, when run through the idl-protocol goal of the Avro Maven plugin, will generate this AVSC for you, rather than you writing it yourself:
{
  "type" : "record",
  "name" : "User",
  "namespace" : "com.example.mycode.avro",
  "fields" : [ {
    "name" : "id",
    "type" : "long"
  }, {
    "name" : "name",
    "type" : "string"
  } ]
}
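For reference, a minimal avro-maven-plugin configuration wiring up that goal might look like the following (the version shown is an assumption; use whatever Avro release you are on):

```xml
<plugin>
  <groupId>org.apache.avro</groupId>
  <artifactId>avro-maven-plugin</artifactId>
  <version>1.11.3</version>
  <executions>
    <execution>
      <goals>
        <!-- compiles src/main/avro/*.avdl into AVSC + Java classes -->
        <goal>idl-protocol</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```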
It will also place a SpecificRecord POJO, User.java, on your classpath for use in your own code.
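A minimal sketch of producing that generated class with Confluent's KafkaAvroSerializer (the broker address, registry URL, and topic name here are placeholders):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import com.example.mycode.avro.User;

public class UserProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081");

        // Generated SpecificRecord classes come with a builder
        User user = User.newBuilder().setId(1L).setName("alice").build();

        try (Producer<String, User> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("users", user.getName().toString(), user));
        }
    }
}
```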
If you already have a POJO, you don't need AVSC or AVDL files at all; there are libraries to convert POJOs directly. For example, Jackson is not only for JSON: you would likely just need to create a JacksonAvroSerializer for Kafka, or find one that already exists.
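As a sketch of that route, assuming the jackson-dataformat-avro module, you can derive a schema from a plain POJO and serialize it to Avro bytes:

```java
import com.fasterxml.jackson.dataformat.avro.AvroMapper;
import com.fasterxml.jackson.dataformat.avro.AvroSchema;
import com.fasterxml.jackson.dataformat.avro.schema.AvroSchemaGenerator;

public class PojoToAvro {
    // An ordinary POJO: no AVSC or AVDL file involved
    public static class User {
        public long id;
        public String name;
    }

    public static void main(String[] args) throws Exception {
        AvroMapper mapper = new AvroMapper();
        AvroSchemaGenerator gen = new AvroSchemaGenerator();
        mapper.acceptJsonFormatVisitor(User.class, gen);
        AvroSchema schema = gen.getGeneratedSchema();

        User user = new User();
        user.id = 1L;
        user.name = "alice";
        byte[] avroBytes = mapper.writer(schema).writeValueAsBytes(user);
    }
}
```

A Kafka Serializer wrapping this logic would do the mapper/schema setup once in configure() and call writeValueAsBytes per record.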
Avro also has built-in support for reflection-based serialization, and the Confluent Schema Registry serializers have a setting for using reflect-based models.
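For instance, Avro's ReflectData can derive a schema straight from a POJO class, a sketch of which is:

```java
import org.apache.avro.Schema;
import org.apache.avro.reflect.ReflectData;

public class ReflectExample {
    public static class User {
        long id;
        String name;
    }

    public static void main(String[] args) {
        // Derive the Avro schema from the class via reflection
        Schema schema = ReflectData.get().getSchema(User.class);
        System.out.println(schema.toString(true)); // pretty-printed AVSC
    }
}
```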
So, to the question: why Avro (for Kafka)?
Well, having a schema is a good thing. Think about RDBMS tables: you can describe the table and see all the columns. Move to NoSQL document databases, and they can contain literally anything; that is the JSON world of Kafka.
Let's assume you have consumers in your Kafka cluster that have no idea what is in a topic; they have to know exactly who or what has produced into it. They can try the console consumer, and if the payload is plaintext like JSON, then they have to figure out which fields they are interested in and perform flaky HashMap-like .get("name") operations again and again, only to run into an NPE when a field doesn't exist. With Avro, you clearly define defaults and nullable fields.
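For example, an optional field with a default is spelled out explicitly in AVSC (field name here is illustrative):

```json
{
  "name" : "email",
  "type" : [ "null", "string" ],
  "default" : null
}
```

A consumer reading an older record simply gets the default instead of an NPE.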
You aren't required to use a Schema Registry, but it provides that "explain topic" semantics for the RDBMS analogy. It also saves you from needing to send the schema along with every message, and from the expense of that extra bandwidth on the Kafka topic. The registry is not only useful for Kafka, though: it could equally be used by Spark, Flink, Hive, etc. for all the data-science analysis surrounding streaming data ingest.
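Those "explain topic" semantics boil down to a REST call; a sketch, assuming the registry runs at localhost:8081 and the default TopicNameStrategy subject naming for a topic called users:

```
curl http://localhost:8081/subjects/users-value/versions/latest
```

This returns the latest registered schema for the topic's values, which is exactly what a new consumer needs to know before reading anything.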
Assuming you do want JSON, then try using MsgPack instead; you'll likely see an increase in your Kafka throughput and save disk space on the brokers.
You can also use other formats like Protobuf or Thrift, as Uber has compared.
It is a matter of speed and storage. When serializing data, you often need to transmit the actual schema along with it, which causes an increase in payload size.
Total Payload Size
+-----------------+--------------------------------------------------+
| Schema | Serialised Data |
+-----------------+--------------------------------------------------+
Schema Registry provides a centralized repository for schemas and their metadata, so that all schemas are registered in one central system. This enables producers to include only the ID of the schema instead of the full schema itself (in text form).
Total Payload Size
+----+--------------------------------------------------+
| ID | Serialised Data |
+----+--------------------------------------------------+
Therefore, the payload is smaller and serialisation becomes faster.
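Concretely, Confluent's wire format replaces the schema with a fixed 5-byte prefix: one magic byte (0) followed by the 4-byte schema ID. A stdlib-only sketch of building that prefix (the ID value 42 is arbitrary):

```java
import java.nio.ByteBuffer;

public class WireFormat {
    // Confluent wire-format prefix: magic byte 0, then the schema ID as a big-endian int
    static byte[] header(int schemaId) {
        return ByteBuffer.allocate(5)
                .put((byte) 0)
                .putInt(schemaId)
                .array();
    }

    public static void main(String[] args) {
        byte[] h = header(42);
        // Always 5 bytes, regardless of how large the schema text is
        System.out.println(h.length); // 5
    }
}
```

Compare that with the hundreds of bytes the AVSC text above would add to every single record.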
Furthermore, Schema Registry versioning enables the enforcement of compatibility policies, which helps prevent newer schemas from breaking existing versions, something that could otherwise cause downtime or other significant issues in your pipeline.
Some more benefits of Schema Registry are thoroughly explained in this article by Confluent.