This patch adds support for running the ZooKeeper-based
kafka.security.authorizer.AclAuthorizer with KRaft clusters. Set the
authorizer.class.name and zookeeper.connect configs alongside the typical
KRaft configs (node.id, process.roles, etc.), and the cluster will use KRaft
for metadata and ZooKeeper for ACL storage. A system test that exercises the
authorizer is included.
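For example, a broker configuration combining the two might look like this (a minimal sketch; listener settings are elided and host/port values are placeholders):

process.roles=broker
node.id=1
controller.quorum.voters=1000@localhost:9093
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
zookeeper.connect=localhost:2181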
This patch also changes "Raft" to "KRaft" in several system test files. It also
fixes a bug where system test admin clients could not connect to a cluster
using broker credentials over the SSL security protocol when the broker used
SSL for inter-broker communication and SASL for client communication.
Reviewers: Colin P. McCabe <cmccabe@apache.org>, Ismael Juma <ismael@juma.me.uk>
Correct the empty metadata response comment, since it is no longer related only to brokers associated with the queried topic.
Reviewers: Boyang Chen <boyang@confluent.io>
This includes TASTy Reader support for Scala 3.0.0, which makes it easier
for Kafka libraries to be used in Scala 3.0 projects.
Release notes: https://github.com/scala/scala/releases/tag/v2.13.6
Reviewers: Ismael Juma <ismael@juma.me.uk>
Increase session timeout to fix flaky KTableKTableForeignKeyInnerJoinMultiIntegrationTest.shouldInnerJoinMultiPartitionQueryable
Reviewers: Anna Sophie Blee-Goldman <ableegoldman@apache.org>
In #10676 we renamed the internal Subtopology class that implemented the TopologyDescription.Subtopology interface. By mistake, we also renamed the interface itself, which is a public API. This wasn't really the intended point of that PR, so rather than do a retroactive KIP, let's just reverse the renaming.
Reviewers: Walker Carlson <wcarlson@confluent.io>, Guozhang Wang <guozhang@confluent.io>
The Kafka networking layer doesn't close FileRecords and assumes that they are already open when sending them over a channel. To support this pattern, this commit changes the ownership model for FileRawSnapshotReader so that it is owned by KafkaMetadataLog.
Reviewers: dengziming <swzmdeng@163.com>, David Arthur <mumrah@gmail.com>, Jun Rao <junrao@gmail.com>
Remove the clearPending method from AlterIsrManager
Reviewers: Colin P. McCabe <cmccabe@confluent.io>, Chia-Ping Tsai <chia7712@gmail.com>, Rajini Sivaram <rajinisivaram@googlemail.com>
Implement a striped replica placement algorithm for KRaft. This also
means implementing rack awareness. Previously, KRaft just chose
replicas randomly in a non-rack-aware fashion. Also, allow replicas to
be placed on fenced brokers if there are no other choices. This was
specified in KIP-631 but previously not implemented.
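As a rough, hypothetical illustration of the idea (not the actual KRaft code): interleave brokers across racks, then start each partition's replica list at a rotating offset so leaders and replicas are spread evenly.

import java.util.ArrayList;
import java.util.List;

public class StripedPlacementSketch {
    // brokersByRack: broker ids grouped by rack; assumes unfenced brokers only.
    public static List<List<Integer>> place(List<List<Integer>> brokersByRack,
                                            int partitions, int replicationFactor) {
        // Stripe: rack0[0], rack1[0], rack2[0], rack0[1], rack1[1], ...
        List<Integer> striped = new ArrayList<>();
        int maxRackSize = 0;
        for (List<Integer> rack : brokersByRack)
            maxRackSize = Math.max(maxRackSize, rack.size());
        for (int i = 0; i < maxRackSize; i++)
            for (List<Integer> rack : brokersByRack)
                if (i < rack.size())
                    striped.add(rack.get(i));
        // Adjacent entries in the stripe sit on different racks, so taking
        // replicationFactor consecutive entries spreads replicas across racks.
        List<List<Integer>> assignment = new ArrayList<>();
        for (int p = 0; p < partitions; p++) {
            List<Integer> replicas = new ArrayList<>();
            for (int r = 0; r < replicationFactor; r++)
                replicas.add(striped.get((p + r) % striped.size()));
            assignment.add(replicas);
        }
        return assignment;
    }
}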
Reviewers: Jun Rao <junrao@gmail.com>
The StreamsNamedRepartitionTopicTest system test did not have the @cluster annotation and was therefore taking up the entire cluster. For example, we see this in the log output:
kafkatest.tests.streams.streams_named_repartition_topic_test.StreamsNamedRepartitionTopicTest.test_upgrade_topology_with_named_repartition_topic is using entire cluster. It's possible this test has no associated cluster metadata.
This PR adds the missing annotation.
Reviewers: Bill Bejeck <bbejeck@apache.org>
Ensure the security protocol and SASL mechanism are updated in the cached SecurityConfig during rolling system tests. Also explicitly indicate which SASL mechanisms we wish to expose during the tests.
Reviewers: David Arthur <mumrah@gmail.com>
Minor follow-up to #10573. Removes this internal Producer config, which was only ever used to avoid a very minor amount of work downgrading the consumer group metadata in the txn commit request in Kafka Streams.
Reviewers: Ismael Juma <ismael@juma.me.uk>, Matthias J. Sax <mjsax@confluent.io>, Guozhang Wang <guozhang@confluent.io>
Introduce a TimelineInteger class, which represents a single integer
value that can be changed while maintaining snapshot consistency. Fix a
case where a metric value would be corrupted after a snapshot restore.
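A sketch of how a timeline value is used together with a snapshot registry (method names are approximate, not a verbatim copy of the Kafka API):

// Hypothetical usage sketch of TimelineInteger with a SnapshotRegistry.
SnapshotRegistry registry = new SnapshotRegistry(new LogContext());
TimelineInteger counter = new TimelineInteger(registry);
counter.set(1);
registry.createSnapshot(100);    // take a snapshot at epoch 100
counter.increment();             // counter now reads 2
registry.revertToSnapshot(100);  // roll back to epoch 100
assert counter.get() == 1;       // the pre-snapshot value is restored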
Reviewers: David Arthur <mumrah@gmail.com>
This PR aims to streamline the configurations for windowed deserializers as described in KIP-725. It deprecates the default.windowed.key.serde.inner and default.windowed.value.serde.inner configs in StreamsConfig and adds windowed.inner.class.serde.
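For example, with a String inner serde (a sketch using the literal config names from the KIP):

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;

Properties props = new Properties();
// Deprecated:
//   props.put("default.windowed.key.serde.inner", Serdes.StringSerde.class.getName());
//   props.put("default.windowed.value.serde.inner", Serdes.StringSerde.class.getName());
// New (KIP-725):
props.put("windowed.inner.class.serde", Serdes.StringSerde.class.getName());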
Reviewers: Anna Sophie Blee-Goldman <ableegoldman@apache.org>
When flush() is called, a copy of the incomplete batches is made. This
means that the full ProducerBatch(es) are held in memory until the flush
has completed. Note that the `Sender` removes producer batches
from the original incomplete collection when they're no longer
needed.
For batches that use the existing memory pool this
is not as wasteful, since the memory will be returned to the pool,
but non-pool memory can only be GC'd after the flush has
completed. Rather than use copyAll, we can make a new array with only the
produceFutures and await on those.
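Conceptually, the change looks like this (simplified; field and method names are approximate, not the exact Kafka code):

// Before (simplified): copy every incomplete ProducerBatch and await each,
// pinning the batch buffers in memory until flush() returns.
for (ProducerBatch batch : this.incomplete.copyAll())
    batch.produceFuture.await();

// After (simplified): snapshot only the lightweight futures and await those,
// so the Sender can release completed batches (and their buffers) sooner.
for (ProduceRequestResult result : this.incomplete.requestResults())
    result.await();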
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>, Ismael Juma <ismael@juma.me.uk>
The QuorumController should honor the timeout for RPC requests
that specify a timeout. For electLeaders, attempt to trigger a leader
election for all partitions when the request specifies null for the topics
argument.
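For example, via the Admin client, passing null for the partitions argument requests an election for every partition (a sketch; the bootstrap address is a placeholder):

import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.ElectionType;

Properties props = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
try (Admin admin = Admin.create(props)) {
    // A null partition set asks the controller to attempt a preferred
    // leader election on all partitions.
    admin.electLeaders(ElectionType.PREFERRED, null).partitions().get();
}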
Reviewers: David Arthur <mumrah@gmail.com>
This reverts #10405, which has several issues. For example:
* It fails to create a topic with 9000 partitions.
* It flushes in several unnecessary places.
* If multiple segments of the same partition are flushed at roughly the same time, we may end up doing multiple unnecessary flushes; the logic for handling the flush in LogSegments.scala is convoluted.
Kafka does not call fsync() on the directory when a new log segment is created and flushed to disk.
The problem is that the following sequence of calls doesn't guarantee file durability:
int fd = open("log", O_RDWR | O_CREAT, 0644); // suppose open() creates "log"
write(fd, buf, len);
fsync(fd);
If the system crashes after fsync() but before the parent directory has been flushed to disk, the log file can disappear.
This PR flushes the parent directory when flush() is called for the first time.
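On the JVM, flushing a directory can be done by opening a read-only FileChannel on it and forcing it, roughly like this (a sketch; the path is hypothetical):

import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Fsync the parent directory so the newly created log file's directory
// entry survives a crash.
try (FileChannel dir = FileChannel.open(Paths.get("/var/kafka-logs/topic-0"),
        StandardOpenOption.READ)) {
    dir.force(true);
}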
A performance test showed that this PR has minimal performance impact on Kafka clusters.
Reviewers: Jun Rao <junrao@gmail.com>
Currently, KafkaStreams#cleanUp only throws an IllegalStateException if the state is RUNNING or REBALANCING; however, the application could be in the process of shutting down, in which case StreamThreads may still be running. We should also throw if the state is PENDING_ERROR or PENDING_SHUTDOWN.
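A simplified sketch of the tightened guard (not the verbatim implementation):

// Reject cleanUp() whenever StreamThreads may still be running,
// including while a shutdown is in progress.
if (state == State.RUNNING || state == State.REBALANCING
        || state == State.PENDING_SHUTDOWN || state == State.PENDING_ERROR) {
    throw new IllegalStateException("Cannot clean up while running or shutting down.");
}
// ... otherwise it is safe to delete the local state directories ...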
Reviewers: Walker Carlson <wcarlson@confluent.io>, Guozhang Wang <guozhang@confluent.io>
Introduce a List serde for primitive types or custom serdes, with a serializer and a deserializer, according to KIP-466.
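For example, building a Serde for List<Integer> from the Integer inner serde:

import java.util.ArrayList;
import java.util.List;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;

// A List<Integer> serde per KIP-466: the list class plus an inner serde.
Serde<List<Integer>> listSerde = Serdes.ListSerde(ArrayList.class, Serdes.Integer());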
Reviewers: Anna Sophie Blee-Goldman <ableegoldman@apache.org>, Matthias J. Sax <mjsax@confluent.io>, John Roesler <roesler@confluent.io>, Michael Noll <michael@confluent.io>
Added a server-common module to hold server-side common classes. Moved ApiMessageAndVersion, RecordSerde, AbstractApiMessageSerde, and BytesApiMessageSerde to the server-common module.
Reviewers: Kowshik Prakasam <kprakasam@confluent.io>, Jun Rao <junrao@gmail.com>
Consecutive UUID generation could result in the same prefix.
Reviewers: Josep Prat <josep.prat@aiven.io>, Anna Sophie Blee-Goldman <ableegoldman@apache.org>
* Replace the deprecated Class.newInstance() with Class.getDeclaredConstructor().newInstance(), as sketched after this list.
* Throw ReflectiveOperationException to cover all other exceptions.
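For example (MyPlugin and pluginClass are hypothetical stand-ins):

Class<MyPlugin> pluginClass = MyPlugin.class;
try {
    // Before (deprecated since Java 9):
    //   MyPlugin plugin = pluginClass.newInstance();
    // After:
    MyPlugin plugin = pluginClass.getDeclaredConstructor().newInstance();
} catch (ReflectiveOperationException e) {
    // Covers NoSuchMethodException, InstantiationException,
    // IllegalAccessException, and InvocationTargetException.
    throw new RuntimeException(e);
}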
Reviewers: Tom Bentley <tbentley@redhat.com>
* Changes the new Throughput Generators to track messages per window
instead of making per-second calculations, which can have rounding errors.
Also, one of these had a calculation error, which prompted this change in
the first place.
* Fixes a couple of typos.
* Fixes an error where certain JSON fields were not exposed, causing the
workloads to not behave as intended.
* Fixes a bug where wait() was not called in a loop, which could exit too quickly on a spurious wakeup (see the sketch after this list).
* Adds additional constant payload generators.
* Fixes problems with an example spec.
* Fixes several off-by-one comparisons.
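The wait fix amounts to the standard wait-in-a-loop pattern (a generic sketch; lock and done are placeholders):

// Re-check the condition after every wakeup, since wait() can return
// spuriously or after an unrelated notify().
synchronized (lock) {
    while (!done) {
        lock.wait();
    }
}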
Reviewers: Colin P. McCabe <cmccabe@apache.org>
This PR upgrades RocksDB to 6.19.3. After the upgrade, the Gradle build exited with code 134 due to SIGABRT signals ("Pure virtual function called!") coming from the C++ part of RocksDB. This error was caused by RocksDB state stores not being properly closed in Streams' code. This PR adds the missing closings and updates the RocksDB option adapter.
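Illustratively (not the exact Streams code), every RocksDB object that wraps a native handle needs an explicit close, e.g. via try-with-resources (the path is a placeholder):

import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

// RocksDB Java objects wrap C++ resources; leaking them can crash the JVM
// from native code, e.g. with SIGABRT at shutdown.
try (Options options = new Options().setCreateIfMissing(true);
     RocksDB db = RocksDB.open(options, "/tmp/rocksdb-example")) {
    db.put("key".getBytes(), "value".getBytes());
} catch (RocksDBException e) {
    throw new RuntimeException(e);
}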
Reviewers: Anna Sophie Blee-Goldman <ableegoldman@apache.org>, Guozhang Wang <wangguoz@gmail.com>