Kafka is a high-throughput publish-subscribe messaging system implemented as a distributed, partitioned, replicated commit log service.
Taken from the official Kafka site.
Fast
A single Kafka broker can handle hundreds of megabytes of reads and writes per second from thousands of clients.
Scalable
Kafka is designed to allow a single cluster to serve as the central data backbone for a large organization. It can be elastically and transparently expanded without downtime. Data streams are partitioned and spread over a cluster of machines to allow data streams larger than the capability of any single machine and to allow clusters of co-ordinated consumers.
Durable
Messages are persisted on disk and replicated within the cluster to prevent data loss. Each broker can handle terabytes of messages without performance impact.
Distributed by Design
Kafka has a modern cluster-centric design that offers strong durability and fault-tolerance guarantees.
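The partitioning and replication mentioned above are configured per topic. As a minimal sketch, assuming a Kafka client recent enough to ship the Java AdminClient API, the following creates a topic with six partitions and a replication factor of three; the broker address and topic name (localhost:9092, example-topic) are illustrative placeholders.

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder broker address; any broker in the cluster will do.
        props.put("bootstrap.servers", "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Six partitions spread the stream across brokers (scalability);
            // replication factor 3 keeps three copies of each partition (durability).
            NewTopic topic = new NewTopic("example-topic", 6, (short) 3);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}

Each partition can live on a different broker, which is how a single data stream can grow past the capacity of one machine while replicas guard against data loss.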
Before version 0.9.0.0, the Kafka Java API used Encoders and Decoders. They have been replaced by Serializer and Deserializer in the new API.
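As a minimal sketch of the new API, the producer below is configured with the built-in StringSerializer for keys and values (a KafkaConsumer would take the matching StringDeserializer); the broker address and topic name (localhost:9092, example-topic) are placeholders for illustration.

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address.
        props.put("bootstrap.servers", "localhost:9092");
        // The new API takes Serializer implementations where the old one took Encoders.
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Key and value are converted to bytes by the configured StringSerializer.
            producer.send(new ProducerRecord<>("example-topic", "key", "value"));
        }
    }
}

Custom types plug in the same way: implement the Serializer and Deserializer interfaces and pass the class names through the key.serializer/value.serializer (producer) and key.deserializer/value.deserializer (consumer) settings.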