In more ways than one, Apache Kafka 3.0 is a huge update. It adds new functionality, breaks some APIs, and improves KRaft, Apache Kafka's built-in consensus mechanism that will eventually replace Apache ZooKeeper™.
While KRaft is not yet recommended for production (see the list of known gaps), its metadata handling and APIs have been greatly improved. Notable additions include support for exactly-once semantics and partition reassignment. Check out KRaft's new features and give it a spin in a development environment.
Beginning with Apache Kafka 3.0, the producer enables the strongest delivery guarantees by default (acks=all, enable.idempotence=true). As a result, users now get ordering and durability out of the box.
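Equivalently, a 3.0 producer now behaves as if the following client properties were set explicitly (shown here as a plain client configuration for illustration; on older versions you would have to set them yourself):

```properties
# Defaults as of Apache Kafka 3.0, listed explicitly for illustration
acks=all                   # wait for all in-sync replicas to acknowledge
enable.idempotence=true    # deduplicate retried sends, preserving ordering
```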
Check out the updates, features, and improvements below.
KIP-750 (Part I): Deprecate support for Java 8 in Kafka.
In Apache Kafka 3.0, support for Java 8 is deprecated across all components of the project. This gives users time to adjust before Java 8 support is removed in the next major release (4.0).
KIP-751 (Part I): Deprecate support for Scala 2.12 in Kafka.
Similarly, support for Scala 2.12 is deprecated in Apache Kafka 3.0. As with Java 8, we're giving users time to adjust, because Scala 2.12 support is planned for removal in the next major release (4.0).
Kafka broker, producer, consumer, and AdminClient
KIP-630: Kafka Raft snapshots.
An important feature in 3.0 is the ability for KRaft controllers and KRaft brokers to generate, replicate, and load snapshots of the metadata topic partition named __cluster_metadata. The Kafka cluster uses this topic to store and replicate cluster metadata such as broker configuration, topic partition assignment, leadership, and so on. Kafka Raft snapshots provide an efficient way to save, load, and replicate this information as the state grows.
KIP-746: Revise KRaft metadata records.
Experience and continued development since the first version of the Kafka Raft controller have highlighted the need to revise a handful of the metadata record types used when Kafka runs without ZooKeeper (ZK).
KIP-730: Producer ID generation in KRaft mode.
With version 3.0 and KIP-730, the Kafka controller takes full responsibility for generating producer IDs, in both ZK and KRaft modes. This brings us closer to the bridge release, which will allow users to migrate from ZK-based Kafka deployments to KRaft-based deployments.
KIP-735: Increase the default consumer session timeout.
The default value of session.timeout.ms, a configuration property of the Kafka consumer, has been increased from 10 to 45 seconds. This helps the consumer tolerate temporary network outages by default and avoids successive rebalances when a consumer only appears to leave the group temporarily.
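To restore the pre-3.0 behavior, or tune it further, the timeout can still be set explicitly in the consumer configuration (values shown are illustrative):

```properties
# New default in 3.0 is 45000 ms (the old default was 10000 ms)
session.timeout.ms=45000
# heartbeat.interval.ms should stay well below the session timeout
heartbeat.interval.ms=3000
```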
KIP-699: Update FindCoordinator to resolve multiple coordinators at a time.
The ability of clients to quickly locate the coordinators of many consumer groups at the same time is critical to supporting operations that apply to multiple consumer groups at once. KIP-699 makes this feasible by adding the ability to discover the coordinators for many groups with a single request. Kafka clients have been updated to use this optimization when talking to newer Kafka brokers that support the request.
KIP-707: The future of KafkaFuture.
The KafkaFuture type was introduced to make the Kafka AdminClient easier to use at a time when pre-Java 8 versions were still in broad use and Kafka officially supported Java 7. Kafka now runs on Java versions that provide the CompletionStage and CompletableFuture types, which were added a few years ago. With KIP-707, KafkaFuture gains a method that returns a CompletionStage object, improving KafkaFuture's composability while remaining backward compatible.
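The practical benefit is fluent, non-blocking composition. The sketch below simulates this with the standard library's CompletableFuture (which implements CompletionStage); in real code the stage would come from calling toCompletionStage() on a KafkaFuture returned by the AdminClient, so the stand-in method here is purely illustrative:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class CompletionStageDemo {
    // Stand-in for an async admin result; in real code you would call
    // toCompletionStage() on a KafkaFuture returned by the AdminClient.
    static CompletionStage<Integer> topicCount() {
        return CompletableFuture.completedFuture(3);
    }

    public static void main(String[] args) throws Exception {
        // CompletionStage allows chaining follow-up work without blocking:
        CompletionStage<String> message =
            topicCount().thenApply(n -> "cluster has " + n + " topics");
        System.out.println(message.toCompletableFuture().get());
    }
}
```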
KIP-745: Restart connectors and tasks using the Connect API.
In Kafka Connect, a connector is represented at runtime as a group of Connector class instances and one or more Task class instances, and most connector operations exposed through the Connect REST API apply to the group as a whole. Notable exceptions from the beginning have been the restart endpoints for Connector and Task instances: to restart a connector as a whole, users had to make separate calls to restart its Connector and Task instances. With KIP-745 in version 3.0, users can restart all, or just the failed, Connector and Task instances of a connector with a single call.
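Per KIP-745, the existing restart endpoint gains query parameters that control the scope of the restart. A hypothetical request for a connector named my-connector might look like this (parameter names per the KIP; check the REST API docs for your Connect version):

```http
POST /connectors/my-connector/restart?includeTasks=true&onlyFailed=true
```

Here includeTasks=true restarts the Task instances as well as the Connector instance, and onlyFailed=true limits the restart to instances currently in the FAILED state.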
KIP-721: Enable connector log contexts in the Connect Log4j configuration.
Connector log contexts were introduced in version 2.3.0 but were not enabled by default until now. In version 3.0, the connector context is included by default in the Connect worker's Log4j log pattern. When you upgrade to 3.0 from a prior version, Log4j will add the connector context to the log lines it emits.
Improve the timestamp synchronization in Kafka Streams (KIP-695).
The semantics of how Streams tasks choose which partition to fetch data from are improved, and the meaning and available values of the configuration option max.task.idle.ms are expanded. This change required adding a new method, currentLag, to the Kafka consumer API, which returns the consumer lag of a specific partition if it is known locally, without contacting the Kafka broker.
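A rough sketch of the expanded configuration values (semantics per KIP-695; verify against the Streams documentation for your version):

```properties
# max.task.idle.ms controls how long a task waits for data on empty input partitions
# -1 : never idle; process whatever data is available immediately
#  0 : wait only for data the consumer has already fetched lag information for
# >0 : additionally idle up to this many milliseconds for new data to arrive
max.task.idle.ms=0
```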
KIP-740: Clean up the public API in TaskId.
The TaskId class has been substantially reworked in KIP-740. Several methods and internal fields have been deprecated, and the old topicGroupId and partition fields have been replaced with new subtopology() and partition() getters (see also KIP-744 for related changes and an amendment to KIP-740).
KIP-666: Add Instant-based methods to ReadOnlySessionStore.
The ReadOnlySessionStore and SessionStore interfaces in the Interactive Queries API have been enhanced with a new set of methods that accept arguments of the Instant data type. Any custom read-only Interactive Query session store implementations that need to use the new methods will be affected by this change.
KIP-743: Remove the 0.10.0-2.4 value of the built.in.metrics.version config from Streams.
Streams 3.0 no longer supports the legacy structure of the built-in metrics. KIP-743 removes the value 0.10.0-2.4 from the built.in.metrics.version configuration property. The only remaining valid value for this property is latest, which has been the default since 2.5.
Change the default SerDe to null (KIP-741).
The default SerDe properties no longer have a default value. Previously, Streams defaulted to ByteArraySerde. In version 3.0 there is no default, and users must either set their SerDes as needed in the API or set a default in their Streams configuration via DEFAULT_KEY_SERDE_CLASS_CONFIG and DEFAULT_VALUE_SERDE_CLASS_CONFIG. The previous default was almost never appropriate for real-world applications and caused more confusion than it avoided.
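A minimal sketch of setting the defaults explicitly in a Streams configuration file (the String SerDes here are an illustrative choice, not a recommendation):

```properties
# In 3.0, if no SerDe is set here or per-operation in the API,
# Streams raises an error at the point where a SerDe is first needed
default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
```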
KIP-623: Add an "internal-topics" option to the Streams application reset tool.
With the addition of a new command-line parameter, --internal-topics, the Streams application reset tool kafka-streams-application-reset becomes more flexible. The new parameter accepts a comma-separated list of internal topic names that should be scheduled for deletion by the tool. By combining this new parameter with the existing --dry-run parameter, users can verify which topics will be deleted, and specify a subset of them if necessary.
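A hypothetical invocation combining the two parameters (the application ID and topic name are illustrative):

```shell
bin/kafka-streams-application-reset.sh \
  --application-id my-streams-app \
  --internal-topics my-streams-app-KSTREAM-AGGREGATE-STATE-STORE-0000000001-changelog \
  --dry-run
```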
KIP-720: Deprecate MirrorMaker v1.
MirrorMaker 1 is deprecated in version 3.0. Future development of new features and major improvements will focus on MirrorMaker 2 (MM2).
KIP-716: With MirrorMaker2, allow customizing the offset-syncs topic’s location.
With version 3.0, users can configure where MirrorMaker2 creates and stores the internal topic it uses to translate consumer group offsets. This allows MirrorMaker2 users to keep the source Kafka cluster strictly read-only while storing the offset-sync records in a different Kafka cluster (either the target cluster or even a third cluster beyond the source and target clusters).
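In an MM2 configuration this is controlled per replication flow; a sketch assuming a flow named A->B (property name per KIP-716; the default is source):

```properties
# Store the offset-syncs topic on the target cluster instead of the source,
# keeping the source cluster read-only
A->B.offset-syncs.topic.location=target
```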