In order for Gradle to restore the cache, it needs to use the same workflow ID. This PR consolidates the trunk and PR builds so that both use the CI workflow.
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
A few improvements for JUnit in the Actions workflow:
* Generate a human readable job summary of the tests
* Fail the workflow if JUnit tests fail
* Archive the HTML JUnit reports
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
This patch introduces a wrapper around [HdrHistogram](https://github.com/HdrHistogram/HdrHistogram) to use for group coordinator histograms, event queue time, event processing time, flush time, and purgatory time.
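To illustrate the shape of such a wrapper, here is a minimal stdlib-only stand-in: it records latency values and reports percentiles by sorting, where the actual patch delegates to HdrHistogram for constant-cost recording. All names here (LatencyHistogram, record, valueAtPercentile) are hypothetical, not the patch's API.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical stand-in for the HdrHistogram-backed wrapper. It stores raw
// values and sorts on query, which is fine for a sketch but not for a hot path.
class LatencyHistogram {
    private final List<Long> values = new ArrayList<>();

    void record(long valueMs) {
        values.add(valueMs);
    }

    // Nearest-rank percentile over the recorded values.
    long valueAtPercentile(double percentile) {
        if (values.isEmpty()) return 0L;
        List<Long> sorted = new ArrayList<>(values);
        Collections.sort(sorted);
        int rank = (int) Math.ceil(percentile / 100.0 * sorted.size());
        return sorted.get(Math.max(0, rank - 1));
    }
}

public class HistogramSketch {
    public static void main(String[] args) {
        // e.g. event queue time, one of the metrics the patch tracks
        LatencyHistogram eventQueueTime = new LatencyHistogram();
        for (long ms = 1; ms <= 100; ms++) eventQueueTime.record(ms);
        System.out.println(eventQueueTime.valueAtPercentile(99.0)); // 99
        System.out.println(eventQueueTime.valueAtPercentile(50.0)); // 50
    }
}
```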
Reviewers: David Jacot <djacot@confluent.io>
Add an integration test for share group list and describe admin operations.
Reviewers: Omnia Ibrahim <o.g.h.ibrahim@gmail.com>, Manikumar Reddy <manikumar.reddy@gmail.com>
- Because the server config UNSTABLE_API_VERSIONS_ENABLE_CONFIG is set to true, we could not test the scenario where ListOffsetsRequest uses an unstable version. This PR adds a test for that case.
- Get the MV from metadataCache.metadataVersion() instead of config.interBrokerProtocolVersion, since the MV can be set dynamically.
Reviewers: Jun Rao <junrao@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>
- Document the differing behavior for nonexistent resources. For example: a nonexistent broker causes a timeout; a nonexistent topic produces UnknownTopicOrPartitionException; a nonexistent group returns static/default configs; a nonexistent client_metrics resource returns empty configs.
- Update the out-of-date description "The resources (topic and broker resource types are currently supported)".
- Add some JUnit tests.
Reviewers: Andrew Schofield <aschofield@confluent.io>, Chia-Ping Tsai <chia7712@gmail.com>
This patch allows the gradle cache to be used on GitHub Actions.
Also adds a Python script to parse checkstyle reports and produce GitHub annotations on PRs.
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
testStateGlobalThreadClose() sometimes fails, with an unclear root cause. This PR is an attempt to fix it by cleaning up and improving the test code across the board.
Reviewers: Lucas Brutschy <lbrutschy@confluent.io>
When a broker tries to register with the controller quorum, its registration should be rejected if it doesn't support a feature that is currently enabled. (A feature is enabled if it is set to a non-zero feature level.) This is important for the newly added kraft.version feature flag.
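The check described above can be sketched as follows. This is an illustrative, stdlib-only rendering of the rule (reject if any feature finalized at a non-zero level falls outside the range the broker advertises); the method and type names are hypothetical, not Kafka's actual controller API.

```java
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of the registration check: a broker registration is
// rejected if any feature enabled in the cluster (finalized at a non-zero
// level, e.g. kraft.version) is outside the broker's supported range.
public class FeatureCheckSketch {
    record SupportedRange(short min, short max) {
        boolean contains(short level) { return level >= min && level <= max; }
    }

    static Optional<String> reasonToReject(Map<String, Short> enabledFeatures,
                                           Map<String, SupportedRange> brokerSupported) {
        for (Map.Entry<String, Short> e : enabledFeatures.entrySet()) {
            short level = e.getValue();
            if (level == 0) continue; // level 0 means the feature is disabled
            SupportedRange range = brokerSupported.get(e.getKey());
            if (range == null || !range.contains(level)) {
                return Optional.of("broker does not support " + e.getKey() + " at level " + level);
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        Map<String, Short> enabled = Map.of("kraft.version", (short) 1, "metadata.version", (short) 0);
        // Broker that does not advertise kraft.version at all: rejected.
        System.out.println(reasonToReject(enabled, Map.of()).isPresent());
        // Broker supporting kraft.version levels 0..1: accepted.
        System.out.println(reasonToReject(enabled,
                Map.of("kraft.version", new SupportedRange((short) 0, (short) 1))).isEmpty());
    }
}
```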
Reviewers: Colin P. McCabe <cmccabe@apache.org>, José Armando García Sancio <jsancio@apache.org>
This patch prepares the ground to switch on the new group coordinator by default. It ensures that the integration tests use the internal flag to enable/disable the new group coordinator instead of relying on the auto-enabling that we had in place for the early access and preview releases. The auto-enabling will be removed in a follow-up PR.
Reviewers: Jeff Kim <jeff.kim@confluent.io>, Chia-Ping Tsai <chia7712@gmail.com>
When a voter fails as leader (LeaderState) the quorum-state still states that it is the leader of
the epoch. When the voter starts it never starts as leader and instead starts as resigned
(ResignedState) if it was previously a leader. This causes the KRaft client to immediately notify
the state machine (e.g. QuorumController) that it is leader or active. This is incorrect for two
reasons.
One, the controller cannot be notified of leadership until it has reached the LEO. If the
controller is notified before that it will generate and append records that are not based on the
latest state.
Two, it is not practical to notify of local leadership when it is resigned since any write
operation (prepareAppend and schedulePreparedAppend) will fail with NotLeaderException while KRaft
is in the resigned state.
Reviewers: Colin P. McCabe <cmccabe@apache.org>, David Arthur <mumrah@gmail.com>
This patch creates separate GitHub Actions workflows for trunk and for pull requests.
On trunk, each commit will be built separately and the build scan will be uploaded to ge.apache.org. The trunk builds will also populate a Gradle cache managed by GitHub Actions.
Pull requests will be built on each commit, but a new commit will interrupt an ongoing build for the same PR. These builds will not populate the Gradle cache and will not upload the build scan unless the PR is in apache/kafka.
For now, only pull requests with branches named like "gh-*" will run the JUnit tests. This is to allow developers to opt in to the GH build.
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
This patch updates getOrMaybeCreateClassicGroup to only throw GroupIdNotFoundException as we did for other internal methods. The callers are responsible for translating the error to the appropriate one depending on the context. There is only one case.
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
Previously, we did not update the result count when merging commitAsync() requests into one batch in ShareConsumeRequestManager, which led to fewer acknowledgements being sent to the application thread (ShareConsumerImpl) than expected.
Fix: now, if the acknowledgement response came from a commitAsync(), we do not wait for other requests to complete; we always prepare a background event to be sent.
This PR also fixes a bug in ShareConsumeRequestManager, where during the final ShareAcknowledge sent during close(), we also pick up any piggybacked acknowledgements which were waiting to be sent along with ShareFetch.
Reviewers: Andrew Schofield <aschofield@confluent.io>, Manikumar Reddy <manikumar.reddy@gmail.com>
When you use kafka-share-groups.sh --describe for an empty group, it prints an empty table consisting of only the table header. kafka-consumer-groups.sh summarises the group status to make the output more informative and only prints the table if it contains more than zero rows.
This PR applies the same principle across all of the variants of describing share groups, which makes the output much nicer in cases where it would otherwise be strangely empty.
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>
Currently, users need to set --transaction-duration-ms to enable transactions in kafka-producer-perf-test, which is not straightforward. A better approach is to enable transactions when a transaction ID is provided.
This PR allows enabling transactions in kafka-producer-perf-test by any of the following:
- setting transactional.id=<id> via --producer-props, or
- setting transactional.id=<id> in the config file passed via --producer.config, or
- setting --transaction-id <id>, or
- setting --transaction-duration-ms=<ms>
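The decision logic above can be sketched as a single predicate: transactions are on when any of the transactional options is supplied. This is an illustrative stand-in, not the tool's actual code; the method name and parameters are hypothetical, though transactional.id is the real producer config key.

```java
import java.util.Map;

// Hypothetical sketch of the option handling: transactions are enabled when
// any of the ways of supplying a transactional id or duration is present.
public class PerfTestTransactionsSketch {
    static boolean transactionsEnabled(Map<String, String> producerProps,  // from --producer-props / --producer.config
                                       String transactionIdArg,            // from --transaction-id
                                       Long transactionDurationMsArg) {    // from --transaction-duration-ms
        return producerProps.containsKey("transactional.id")
                || transactionIdArg != null
                || transactionDurationMsArg != null;
    }

    public static void main(String[] args) {
        System.out.println(transactionsEnabled(Map.of("transactional.id", "perf-tx"), null, null)); // true
        System.out.println(transactionsEnabled(Map.of(), null, null)); // false
    }
}
```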
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
This patch removes Java versions 11 and 17 from the "check" job. It also fixes the Develocity build scan upload.
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
The branch method in both the Java and Scala KStream classes was deprecated in version 2.8:
1) org.apache.kafka.streams.scala.kstream.KStream#branch
2) org.apache.kafka.streams.kstream.KStream#branch(org.apache.kafka.streams.kstream.Predicate<? super K,? super V>...)
3) org.apache.kafka.streams.kstream.KStream#branch(org.apache.kafka.streams.kstream.Named, org.apache.kafka.streams.kstream.Predicate<? super K,? super V>...)
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
This change includes two improvements.
When the leader removes itself from the voter set, clients of RaftClient may call resign. In those cases the leader is not in the voter set and resign should not throw an exception.
Controllers that are observers must flush the log on every append because the leader may be trying to add them to the voter set. The leader always assumes that voters flush their log to disk before sending a Fetch request.
Reviewers: David Arthur <mumrah@gmail.com>, Alyssa Huang <ahuang@confluent.io>
Avoids stream allocation on a hot code path in Admin#listOffsets.
This patch avoids allocating the stream reference pipeline and spliterator for this case by explicitly allocating a pre-sized Node[] and using a for loop with an int induction variable over the supplied List of IDs.
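The two shapes can be compared in a minimal sketch. The Node record here is a stand-in for org.apache.kafka.common.Node, and the method names are illustrative; the point is the pattern, i.e. replacing the stream pipeline with a pre-sized array and an int-indexed loop.

```java
import java.util.Arrays;
import java.util.List;

public class PresizedArraySketch {
    record Node(int id) {} // stand-in for org.apache.kafka.common.Node

    // Stream version: allocates the pipeline, a spliterator, and boxing churn.
    static Node[] withStream(List<Integer> ids) {
        return ids.stream().map(Node::new).toArray(Node[]::new);
    }

    // Hot-path version: one pre-sized array, an int induction variable, no pipeline.
    static Node[] withLoop(List<Integer> ids) {
        Node[] nodes = new Node[ids.size()];
        for (int i = 0; i < nodes.length; i++) {
            nodes[i] = new Node(ids.get(i));
        }
        return nodes;
    }

    public static void main(String[] args) {
        List<Integer> ids = List.of(1, 2, 3);
        // Both produce the same result; only the allocation profile differs.
        System.out.println(Arrays.equals(withStream(ids), withLoop(ids))); // true
    }
}
```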
Reviewers: Apoorv Mittal <apoorvmittal10@gmail.com>, Kirk True <kirk@kirktrue.pro>, David Arthur <mumrah@gmail.com>
Now that ConsumerRecord.deliveryCount() exists, enhance kafka-console-share-consumer.sh to exploit it. Added support to the DefaultMessageFormatter and added the print.delivery option to the usage message for kafka-console-share-consumer.sh. Note that it was not added to kafka-console-consumer.sh, even though the option would be recognised, because delivery with a consumer group does not count deliveries; if enabled, the result would include Delivery:NOT_PRESENT for all records, which is not very useful with a consumer group.
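The formatter behavior can be sketched as follows. This is a hypothetical stand-in, not the DefaultMessageFormatter code: it shows only the decision of emitting a delivery count when present and Delivery:NOT_PRESENT when the broker did not supply one (as for ordinary consumer groups).

```java
import java.util.Optional;

// Hypothetical sketch of the print.delivery output described above.
public class DeliveryFormatSketch {
    static String formatDelivery(Optional<Integer> deliveryCount) {
        return deliveryCount.map(c -> "Delivery:" + c).orElse("Delivery:NOT_PRESENT");
    }

    public static void main(String[] args) {
        System.out.println(formatDelivery(Optional.of(2)));   // a share-group record delivered twice
        System.out.println(formatDelivery(Optional.empty())); // a consumer-group record, no count
    }
}
```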
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>