- The `isReady` API in RemoteLogMetadataManager (RLMM) denotes whether the partition metadata is ready for remote storage operations. Plugin implementors can use this API to report the partition status while bootstrapping the RLMM (see the sketch below).
- Using this API, we start the remote log components gracefully: segment copy, delete, and other operations that hit remote storage are invoked only once the metadata is ready for a given partition.
- See KIP-1105 (https://cwiki.apache.org/confluence/display/KAFKA/KIP-1105%3A+Make+remote+log+manager+thread-pool+configs+dynamic) for more details.
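A minimal sketch of how a plugin might track readiness, assuming a signature like `isReady(TopicIdPartition)`; the bootstrap tracking shown here is illustrative, not the actual interface contract:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.common.TopicIdPartition;

// Illustrative RLMM fragment; the interface clause is omitted to keep the sketch
// self-contained. Partitions are marked ready as bootstrap completes, and the
// broker defers remote storage operations until isReady returns true.
public class BootstrappingMetadataManager /* implements RemoteLogMetadataManager */ {
    private final Set<TopicIdPartition> readyPartitions = ConcurrentHashMap.newKeySet();

    // Called by the plugin's bootstrap logic once a partition's metadata is loaded.
    void markReady(TopicIdPartition partition) {
        readyPartitions.add(partition);
    }

    public boolean isReady(TopicIdPartition topicIdPartition) {
        return readyPartitions.contains(topicIdPartition);
    }
}
```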
Reviewers: Federico Valeri <fvaleri@redhat.com>, Satish Duggana <satishd@apache.org>
After the `share.auto.offset.reset` dynamic config was added for share groups in commit 9db5ed0, we needed to set this config value to "earliest" in ShareRoundTripWorker when it creates the consumer.
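For context, group configs like this one can be altered via the Admin client; a hedged sketch (the group id and bootstrap address are placeholders, and the worker's actual wiring differs):

```java
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class SetShareAutoOffsetReset {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        try (Admin admin = Admin.create(props)) {
            // share.auto.offset.reset is a GROUP-level dynamic config.
            ConfigResource group =
                new ConfigResource(ConfigResource.Type.GROUP, "share-group"); // placeholder id
            AlterConfigOp op = new AlterConfigOp(
                new ConfigEntry("share.auto.offset.reset", "earliest"),
                AlterConfigOp.OpType.SET);
            Map<ConfigResource, Collection<AlterConfigOp>> updates = Map.of(group, List.of(op));
            admin.incrementalAlterConfigs(updates).all().get();
        }
    }
}
```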
Reviewers: Andrew Schofield <aschofield@confluent.io>, Manikumar Reddy <manikumar.reddy@gmail.com>
The logs for Kafka Streams local state restoration incorrectly refer to the StreamThread instead of the StateUpdater thread, which is responsible for the restoration process now that it is decoupled from the StreamThread. The restore consumer also references StreamThread instead of StateUpdater.
This commit corrects the log message for more clarity.
Reviewer: Bruno Cadonna <cadonna@apache.org>
When we introduced "startup tasks" in #16922, we initialized them with
no input partitions, because the input partitions aren't known until
assignment.
However, when we update them during assignment, it's possible that we
update the topology with the incorrect source topics for some internal
topics, due to a difference in the way internal topics are handled for
StandbyTasks.
To resolve this, we now initialize startup tasks with the correct input
partitions, by calculating them from the Topology.
When we assign our startup tasks, we now conditionally update their
input partitions only if they've actually changed, just as we do for
regular StandbyTasks.
With this, the E2E tests now pass, as expected.
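A self-contained sketch of the conditional update described above (names and types are illustrative; the real logic lives in the Streams task-management code):

```java
import java.util.Set;
import org.apache.kafka.common.TopicPartition;

// Illustrative sketch, not the actual Streams code: a startup task whose input
// partitions are computed from the Topology at creation and rewritten during
// assignment only when they actually changed.
class StartupTaskSketch {
    private Set<TopicPartition> inputPartitions;

    StartupTaskSketch(Set<TopicPartition> partitionsFromTopology) {
        // Initialize from the Topology so internal source topics resolve correctly.
        this.inputPartitions = partitionsFromTopology;
    }

    void onAssignment(Set<TopicPartition> assignedPartitions) {
        // Conditional update, mirroring regular StandbyTasks: skip the update
        // when nothing changed, avoiding incorrect internal source topics.
        if (!inputPartitions.equals(assignedPartitions)) {
            inputPartitions = assignedPartitions;
        }
    }
}
```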
Reviewer: Bruno Cadonna <cadonna@apache.org>
RoundRobinPartitioner does not account for the fact that, on new batch creation, the partition() method is called twice, which causes partitions to be skipped.
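An illustrative fragment (not the actual patch) showing why the double invocation breaks a counter-based round-robin: the counter advances on every `partition()` call, so the extra call skips a partition:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

class NaiveRoundRobin {
    // One monotonically increasing counter per topic.
    private final ConcurrentMap<String, AtomicInteger> counters = new ConcurrentHashMap<>();

    int nextPartition(String topic, int numPartitions) {
        int next = counters.computeIfAbsent(topic, t -> new AtomicInteger(0)).getAndIncrement();
        // If the producer calls partition() a second time for the same record when
        // a new batch is created, the counter advances twice and a partition is
        // skipped, producing an uneven distribution.
        return (next & Integer.MAX_VALUE) % numPartitions;
    }
}
```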
Reviewers: Viktor Somogyi-Vass <viktorsomogyi@gmail.com>, Mickael Maison <mickael.maison@gmail.com>
This patch cleans up the `Assertions` class in the `group-coordinator` module.
Reviewers: Lianet Magrans <lmagrans@confluent.io>, Chia-Ping Tsai <chia7712@gmail.com>
This patch does a few cleanups in GroupMetadataManagerTest:
* Uses `Map.of` where possible.
* Uses `List.of` instead of `Arrays.asList` (see the example below).
* Fixes inconsistent indentation in a few places.
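For example (illustrative):

```java
import java.util.Arrays;
import java.util.List;

class CollectionFactoryExample {
    List<String> before = Arrays.asList("foo", "bar"); // fixed-size view backed by an array
    List<String> after = List.of("foo", "bar");        // truly immutable and null-hostile
}
```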
Reviewers: Lianet Magrans <lmagrans@confluent.io>
This PR adds another dynamic config, `share.auto.offset.reset`, for share groups.
Reviewers: Andrew Schofield <aschofield@confluent.io>, Apoorv Mittal <apoorvmittal10@gmail.com>, Abhinav Dixit <adixit@confluent.io>, Manikumar Reddy <manikumar.reddy@gmail.com>
TimestampExtractor allows dropping records by returning a timestamp of -1. In this case, we still need to update consumed offsets so that we can commit progress.
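A minimal sketch of the scenario, using a hypothetical extractor that drops records lacking a usable payload (`TimestampExtractor#extract` is the real callback):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

// Hypothetical extractor: returning a negative timestamp tells Kafka Streams to
// drop the record; the fix ensures the record's offset is still tracked so it
// can be committed and not re-consumed after a restart.
public class DroppingTimestampExtractor implements TimestampExtractor {
    @Override
    public long extract(ConsumerRecord<Object, Object> record, long partitionTime) {
        return record.value() == null ? -1 : record.timestamp();
    }
}
```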
Reviewers: Bill Bejeck <bill@confluent.io>
As part of PR https://github.com/apache/kafka/pull/17636, the purgatory was moved from core to server-common; hence, this patch moves some existing classes used in share fetch to the share module.
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
Fixes an issue with the TopologyTestDriver (TTD) in the specific case where users don't specify an initial time for the driver and also don't specify a start timestamp for the TestInputTopic, then pipe input records without timestamps. This combination results in a slight mismatch in the expected timestamps for the piped records, which can be noticeable when writing tests with very small time deltas.
The problem is that, while both the TTD and the TestInputTopic will be initialized to the "current time" when not otherwise specified, it's possible for some milliseconds to have passed between the creation of the TTD and the creation of the TestInputTopic. This can result in a TestInputTopic getting a start timestamp that's several ms larger than the driver's time, and ultimately causing the piped input records to have timestamps slightly in the future relative to the driver.
In practice, even those who hit this issue might not notice it if they aren't manipulating time in their tests, or are advancing time by enough to absorb the few milliseconds of difference. However, we noticed a test fail due to this because we were testing a TTL-based processor and had advanced the driver time by only 1 millisecond past the TTL. The piped record should have been expired, but because its timestamp was a few milliseconds later than the driver's start time, the test ended up failing.
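A hedged sketch of the affected pattern (topic name, serdes, and configs are placeholders):

```java
import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.TestInputTopic;
import org.apache.kafka.streams.TopologyTestDriver;
import org.apache.kafka.streams.kstream.Consumed;

class TtdTimestampSketch {
    void run() {
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input", Consumed.with(Serdes.String(), Serdes.String()));
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "ttd-sketch");   // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234"); // unused by TTD

        // No initial time given: the driver captures "now" at construction.
        try (TopologyTestDriver driver = new TopologyTestDriver(builder.build(), props)) {
            // A few wall-clock milliseconds may elapse before the next line runs.
            // No start timestamp given: the topic captures its own, later, "now".
            TestInputTopic<String, String> input = driver.createInputTopic(
                "input", new StringSerializer(), new StringSerializer());

            input.pipeInput("key", "value"); // timestamp can be ahead of driver time
            driver.advanceWallClockTime(Duration.ofMillis(1)); // may not cross a TTL
        }
    }
}
```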
Reviewers: Matthias Sax <mjsax@apache.org>, Bruno Cadonna <cadonna@apache.org>, Lucas Brutschy <lbrutschy@confluent.io>
Currently, we are using the String representation of the share coordinator/share partition key (groupId:topicId:partition), as defined in KIP-932, in a few methods such as ShareCoordinator.partitionFor and ShareCoordinatorMetadataCacheHelper.getShareCoordinator.
This has the potential to introduce subtle bugs when incorrect strings are used to invoke these methods. Worse, the resulting failures might be intermittent.
This PR aims to remedy the situation by changing the type to the concrete SharePartitionKey. This way, callers need not worry about a specific encoding or format of the coordinator key, as long as the SharePartitionKey has the correct fields set.
There is one issue: the FIND_COORDINATOR RPC does require the coordinator key to be set as a String in the request body. We can't get around this and have to set the value as a String. However, on the KafkaApis handler side we parse this key into a SharePartitionKey again and gain compile-time safety. One downside is that we need to split and format the incoming coordinator key from the request, but that can be encapsulated at a single location in SharePartitionKey.
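A hedged sketch of the idea (the real SharePartitionKey differs in detail); the point is that producing and parsing the String form are each encapsulated in one place:

```java
import org.apache.kafka.common.Uuid;

// Illustrative typed key: callers pass fields, not a "groupId:topicId:partition"
// String, so a malformed encoding can no longer slip through at runtime.
public record SharePartitionKeySketch(String groupId, Uuid topicId, int partition) {
    // The only place the String form is produced, e.g. for FIND_COORDINATOR.
    public String asCoordinatorKey() {
        return groupId + ":" + topicId + ":" + partition;
    }

    // The only place the String form is parsed, e.g. in the KafkaApis handler.
    public static SharePartitionKeySketch fromCoordinatorKey(String key) {
        String[] parts = key.split(":");
        return new SharePartitionKeySketch(
            parts[0], Uuid.fromString(parts[1]), Integer.parseInt(parts[2]));
    }
}
```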
Added tests for partitionFor.
Reviewers: Andrew Schofield <aschofield@confluent.io>, Apoorv Mittal <apoorvmittal10@gmail.com>, Manikumar Reddy <manikumar.reddy@gmail.com>
This patch introduces the ConsumerGroupRegularExpression record (key + value) and updates the `GroupMetadataManager` and the `ConsumerGroup` to bookkeep it appropriately. Note that with this change, regular expressions are counted as subscribers in the `subscribedTopicNames` data structure. This is important because the topic metadata of the group is computed based on it.
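An illustrative fragment of the bookkeeping idea (names are assumptions, not the actual implementation): topics referenced via a regular expression are counted in the same subscriber map as explicit subscriptions:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

class SubscribedTopicBookkeeping {
    // topic name -> number of subscribers (explicit names and regex matches alike)
    private final Map<String, Integer> subscribedTopicNames = new HashMap<>();

    // Count topics matched by a member's regular expression as subscribed, so the
    // group's topic metadata is computed over them as well.
    void addRegexSubscriber(Set<String> topicsMatchedByRegex) {
        for (String topic : topicsMatchedByRegex) {
            subscribedTopicNames.merge(topic, 1, Integer::sum);
        }
    }
}
```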
Reviewers: Jeff Kim <jeff.kim@confluent.io>, Lianet Magrans <lmagrans@confluent.io>