Spotbugs was temporarily disabled as part of KAFKA-15485 to support building Kafka with JDK 21. This PR upgrades the spotbugs version to 4.8.0, which adds support for JDK 21, and enables its use in the build again.
Reviewers: Divij Vaidya <diviv@amazon.com>
This PR is part of #13247
It contains ReassignPartitionsIntegrationTest rewritten in Java.
The goal of this PR is to reduce the size of the changes in the main PR.
Reviewers: Taras Ledkov <tledkov@apache.org>, Justine Olshan <jolshan@confluent.io>
* Update CI to build with Java 21 instead of Java 20
* Disable spotbugs when building with Java 21 as it doesn't support it yet (filed KAFKA-15492 for
addressing this)
* Disable SslTransportLayerTest.testValidEndpointIdentificationCN with Java 21 (same as Java 20)
Reviewers: Divij Vaidya <diviv@amazon.com>
Resolves cache misses in checkstyle tasks due to absolute paths in configProperties.
Sets the configDirectory extension property, which is made available by the checkstyle plugin as ${config_loc} in the checkstyle XML files, as shown in the Checkstyle Gradle docs. The absolute paths set in configProperties are then replaced by relative paths from configDirectory. Because the header and suppression config file names are static and only referenced once, these were removed from configProperties and the file names are given directly in checkstyle.xml.
Reviewers: Divij Vaidya <diviv@amazon.com>
This change does the following:
1. Make the implemented RemoteLogManagerConfigs public
2. Add tasks to generate html docs for the configs
3. Include config docs in the main site
Reviewers: Divij Vaidya <diviv@amazon.com>, Luke Chen <showuon@gmail.com>, Christo Lolov <lolovc@amazon.com>, Satish Duggana <satishd@apache.org>
`TieredStorageTestHarness` is a base class for integration tests exercising the tiered storage functionality. This uses `LocalTieredStorage` instance as the second-tier storage system and `TopicBasedRemoteLogMetadataManager` as the remote log metadata manager.
Co-authored-by: Alexandre Dupriez <alexandre.dupriez@gmail.com>
Co-authored-by: Kamal Chandraprakash <kamal.chandraprakash@gmail.com>
Only initialize remote topic metrics when system-wide remote storage is enabled to avoid impacting performance for existing brokers. Also add tests.
Reviewers: Divij Vaidya <diviv@amazon.com>, Kamal Chandraprakash <kamal.chandraprakash@gmail.com>
* KAFKA-14953: Adding RemoteLogManager metrics
In this PR, I have added the following metrics that are related to tiered storage, as mentioned in [KIP-405](https://cwiki.apache.org/confluence/display/KAFKA/KIP-405%3A+Kafka+Tiered+Storage).
|Metric|Description|
|-----------------------------------------|--------------------------------------------------------------|
| RemoteReadRequestsPerSec | Number of remote storage read requests per second |
| RemoteWriteRequestsPerSec | Number of remote storage write requests per second |
| RemoteBytesInPerSec | Number of bytes read from remote storage per second |
| RemoteReadErrorsPerSec | Number of remote storage read errors per second |
| RemoteBytesOutPerSec | Number of bytes copied to remote storage per second |
| RemoteWriteErrorsPerSec | Number of remote storage write errors per second |
| RemoteLogReaderTaskQueueSize | Number of remote storage read tasks pending for execution. |
| RemoteLogReaderAvgIdlePercent | Average idle percent of the remote storage reader thread pool|
| RemoteLogManagerTasksAvgIdlePercent | Average idle percent of RemoteLogManager thread pool |
Added unit tests for all the rate metrics.
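For illustration, a minimal sketch of registering and updating such a rate metric with the Yammer metrics library the broker uses; the registry setup and the group/type strings here are assumptions, not the actual RemoteLogManager code.
```java
import java.util.concurrent.TimeUnit;

import com.yammer.metrics.core.Meter;
import com.yammer.metrics.core.MetricName;
import com.yammer.metrics.core.MetricsRegistry;

public class RemoteReadRateExample {
    public static void main(String[] args) {
        MetricsRegistry registry = new MetricsRegistry();
        // Group/type/name are illustrative; the broker exposes the real metric
        // under its own naming conventions.
        MetricName name = new MetricName("kafka.server", "RemoteLogManager", "RemoteReadRequestsPerSec");
        Meter remoteReads = registry.newMeter(name, "requests", TimeUnit.SECONDS);

        remoteReads.mark(); // record one remote storage read request
        System.out.println("one-minute rate: " + remoteReads.oneMinuteRate());
    }
}
```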
Reviewers: Luke Chen <showuon@gmail.com>, Divij Vaidya <diviv@amazon.com>, Kamal Chandraprakash<kamal.chandraprakash@gmail.com>, Jorge Esteban Quilcate Otoya <quilcate.jorge@gmail.com>, Staniel Yao<yaolixinylx@gmail.com>, hudeqi<1217150961@qq.com>, Satish Duggana <satishd@apache.org>
KAFKA-14522 Rewrite and Move of RemoteIndexCache to storage module.
Cleaned up index file suffix usages and other minor cleanups.
Reviewers: Jun Rao <junrao@gmail.com>, Ismael Juma <ismael@juma.me.uk>, Luke Chen <showuon@gmail.com>, Divij Vaidya <diviv@amazon.com>, Kamal Chandraprakash<kamal.chandraprakash@gmail.com>, Jorge Esteban Quilcate Otoya <quilcate.jorge@gmail.com>
It's good for us to add support for Java 20 in preparation for Java 21 - the next LTS.
Given that Scala 2.12 support has been deprecated, a Scala 2.12 variant is not included.
Also remove some branch builds that add load to the CI, but have
low value: JDK 8 & Scala 2.13 (JDK 8 support has been deprecated),
JDK 11 & Scala 2.12 (Scala 2.12 support has been deprecated) and
JDK 17 & Scala 2.12 (Scala 2.12 support has been deprecated).
A newer version of Mockito (4.9.0 -> 4.11.0) is required for Java 20 support, but we
only use it with Scala 2.13+ since it causes compilation errors with Scala 2.12. Similarly,
we upgrade easymock when the Java version is 16 or newer as it's incompatible
with powermock (which doesn't support Java 16 or newer).
Filed KAFKA-15117 for a test that fails with Java 20 (SslTransportLayerTest.testValidEndpointIdentificationCN).
Finally, fixed some lossy conversions that were added after #13582 was submitted.
Reviewers: Ismael Juma <ismael@juma.me.uk>
Use the thread-safe Caffeine cache to cache indexes fetched from the remote tier locally. This PR removes lock contention that led to higher fetch latencies, as the I/O threads spent time unnecessarily waiting on a global cache lock while a single thread fetched the index from the remote tier. See PR #13850 for details and rejected alternatives.
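A minimal sketch of the Caffeine pattern described above, assuming a hypothetical `Entry` value type and `loadIndexFromRemote` loader; the real RemoteIndexCache sizes and cleans up entries differently.
```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.RemovalListener;

public class IndexCacheSketch {
    // Stand-in for the cached index entry (hypothetical).
    static class Entry {
        void close() { /* release file handles */ }
    }

    private final RemovalListener<String, Entry> onRemoval =
            (key, value, cause) -> { if (value != null) value.close(); };

    private final Cache<String, Entry> cache = Caffeine.newBuilder()
            .maximumSize(1024)          // bound the number of cached index entries
            .removalListener(onRemoval) // clean up entries evicted by the cache
            .build();

    Entry getIndex(String segmentId) {
        // The loader runs at most once per key; readers of other keys are not
        // blocked behind a single global lock while the fetch is in progress.
        return cache.get(segmentId, this::loadIndexFromRemote);
    }

    private Entry loadIndexFromRemote(String segmentId) {
        return new Entry(); // placeholder for the remote fetch
    }
}
```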
Reviewers: Luke Chen <showuon@gmail.com>, Satish Duggana <satishd@apache.org>
This fixes the following issue that we occasionally see in [builds](https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-13848/4/pipeline/13/).
```
[2023-06-14T11:41:50.769Z] * What went wrong:
[2023-06-14T11:41:50.769Z] A problem was found with the configuration of task ':rat' (type 'RatTask').
[2023-06-14T11:41:50.769Z] - Gradle detected a problem with the following location: '/home/jenkins/jenkins-agent/workspace/Kafka_kafka-pr_PR-13848'.
[2023-06-14T11:41:50.769Z]
[2023-06-14T11:41:50.769Z] Reason: Task ':rat' uses this output of task ':clients:processTestMessages' without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed.
[2023-06-14T11:41:50.769Z]
[2023-06-14T11:41:50.769Z] Possible solutions:
[2023-06-14T11:41:50.769Z] 1. Declare task ':clients:processTestMessages' as an input of ':rat'.
[2023-06-14T11:41:50.769Z] 2. Declare an explicit dependency on ':clients:processTestMessages' from ':rat' using Task#dependsOn.
[2023-06-14T11:41:50.769Z] 3. Declare an explicit dependency on ':clients:processTestMessages' from ':rat' using Task#mustRunAfter.
[2023-06-14T11:41:50.769Z]
[2023-06-14T11:41:50.769Z] Please refer to https://docs.gradle.org/8.1.1/userguide/validation_problems.html#implicit_dependency for more details about this problem.
```
Validated manually as well:
```
% ./gradlew rat
> Configure project :
Starting build with version 3.6.0-SNAPSHOT (commit id 874081ca) using Gradle 8.1.1, Java 17 and Scala 2.13.10
Build properties: maxParallelForks=10, maxScalacThreads=8, maxTestRetries=0
> Task :storage:processMessages
MessageGenerator: processed 4 Kafka message JSON files(s).
> Task :raft:processMessages
MessageGenerator: processed 1 Kafka message JSON files(s).
> Task :core:processMessages
MessageGenerator: processed 2 Kafka message JSON files(s).
> Task :group-coordinator:processMessages
MessageGenerator: processed 16 Kafka message JSON files(s).
> Task :streams:processMessages
MessageGenerator: processed 1 Kafka message JSON files(s).
> Task :metadata:processMessages
MessageGenerator: processed 20 Kafka message JSON files(s).
> Task :clients:processMessages
MessageGenerator: processed 146 Kafka message JSON files(s).
> Task :clients:processTestMessages
MessageGenerator: processed 4 Kafka message JSON files(s).
BUILD SUCCESSFUL in 8s
```
Reviewers: Divij Vaidya <diviv@amazon.com>
This patch rewrites `MockTime` in Java and moves it to the `server-common` module. This is a prerequisite to later move `MockTimer` to `server-common` as well.
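For illustration, a minimal Java sketch of the idea behind such a MockTime: a clock that only advances when the test advances it. It mirrors the shape of Kafka's `Time` interface but does not claim to match the actual class in `server-common`.
```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// A deterministic clock for tests: time only moves when the test says so.
public class MockTimeSketch {
    private final AtomicLong timeMs;
    private final AtomicLong highResTimeNs;

    public MockTimeSketch(long startMs) {
        this.timeMs = new AtomicLong(startMs);
        this.highResTimeNs = new AtomicLong(TimeUnit.MILLISECONDS.toNanos(startMs));
    }

    public long milliseconds() { return timeMs.get(); }

    public long nanoseconds() { return highResTimeNs.get(); }

    // "Sleeping" advances the clock instead of blocking the test thread.
    public void sleep(long ms) {
        timeMs.addAndGet(ms);
        highResTimeNs.addAndGet(TimeUnit.MILLISECONDS.toNanos(ms));
    }
}
```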
Reviewers: David Arthur <mumrah@gmail.com>
Also upgrade gradle plugins:
- `org.owasp.dependencycheck` gradle plugin to version `8.2.1`
- `com.github.johnrengelman.shadow` gradle plugin to version `8.1.1`
Gradle release notes:
* https://docs.gradle.org/8.1.1/release-notes.html
Reviewers: Ismael Juma <ismael@juma.me.uk>
Loosens the validation so that Kafka can accept duplicate listeners on the same port, if and only if the listeners are valid IP addresses with one address being an IPv4 address and the other being an IPv6 address.
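A rough sketch of the rule described above (an illustrative helper, not the actual validation code); a production check would also require both hosts to be literal IP addresses rather than resolvable hostnames.
```java
import java.net.Inet4Address;
import java.net.Inet6Address;
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ListenerPortCheck {
    // Two listeners may share a port only when one host is an IPv4 address
    // and the other is an IPv6 address.
    static boolean mayShareSamePort(String host1, String host2) {
        try {
            InetAddress a = InetAddress.getByName(host1);
            InetAddress b = InetAddress.getByName(host2);
            return (a instanceof Inet4Address && b instanceof Inet6Address)
                || (a instanceof Inet6Address && b instanceof Inet4Address);
        } catch (UnknownHostException e) {
            return false; // unresolvable host: never allowed to share a port
        }
    }

    public static void main(String[] args) {
        System.out.println(mayShareSamePort("127.0.0.1", "::1"));       // true
        System.out.println(mayShareSamePort("127.0.0.1", "127.0.0.1")); // false
    }
}
```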
Reviewers: Josep Prat <jlprat@apache.org>, Luke Chen <showuon@apache.org>
Introduces the use of persistent data structures in the KRaft metadata image to avoid copying the entire TopicsImage upon every change. Performance that was O(<number of topics in the cluster>) is now O(<number of topics changing>), which has dramatic time and GC improvements for the most common topic-related metadata events. We abstract away the chosen underlying persistent collection library via ImmutableMap<> and ImmutableSet<> interfaces and static factory methods.
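A rough sketch of the abstraction described above; the method names are illustrative, not the actual interface.
```java
import java.util.Map;

// Persistent-map abstraction: "mutations" return a new map that shares most of its
// structure with the old one, so building a new image touches only the changed entries.
public interface ImmutableMapSketch<K, V> extends Map<K, V> {
    ImmutableMapSketch<K, V> updated(K key, V value); // new map with the entry added/replaced
    ImmutableMapSketch<K, V> removed(K key);          // new map without the key
}
```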
Reviewers: Luke Chen <showuon@gmail.com>, Colin P. McCabe <cmccabe@apache.org>, Ismael Juma <ismael@juma.me.uk>, Purshotam Chauhan <pchauhan@confluent.io>
A previous change disabled strict stubbing for the `RocksDBMetricsRecorderTest`. To re-enable the behavior in JUnit 5, we need to pull in a new dependency in the `streams` gradle project.
Reviewers: Guozhang Wang <wangguoz@gmail.com>
The new group coordinator needs to access cluster metadata (e.g. topics, partitions, etc.) and it needs a mechanism to be notified when the metadata changes (e.g. to trigger a rebalance). In KRaft clusters, the easiest is to subscribe to metadata changes via the MetadataPublisher.
Reviewers: Justine Olshan <jolshan@confluent.io>
This fixes the following `./gradlew install` issue:
```text
* What went wrong:
A problem was found with the configuration of task ':storage:srcJar' (type 'Jar').
- Gradle detected a problem with the following location: '/Users/ijuma/src/kafka/storage/src/generated/java'.
Reason: Task ':storage:srcJar' uses this output of task ':storage:processMessages' without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed.
Possible solutions:
1. Declare task ':storage:processMessages' as an input of ':storage:srcJar'.
2. Declare an explicit dependency on ':storage:processMessages' from ':storage:srcJar' using Task#dependsOn.
3. Declare an explicit dependency on ':storage:processMessages' from ':storage:srcJar' using Task#mustRunAfter.
Please refer to https://docs.gradle.org/8.0.1/userguide/validation_problems.html#implicit_dependency for more details about this problem.
```
Reviewers: David Jacot <david.jacot@gmail.com>
Also re-enable it in CI. We do this by adjusting the `Jenkinsfile`
to use a more general task (`./gradlew check -x test`).
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>, Dejan Stojadinović <dejan2609@users.noreply.github.com>
Details:
* gradle upgrade: 7.6 -> 8.0.1
* spotbugs plugin upgrade: 5.0.9 -> 5.0.13
* tweaked the mechanics for `-release`/`-source`/`-target` to work around idiosyncrasies in Gradle 8.0.1 and newer Scala 2.13 versions.
* streams-scala `test` task no longer triggers the `spotless` task since a newer version is required for Gradle 8 support, but the newer version requires Java 11.
Note: relates to #5479
Gradle upgrade highlights:
* "Scala Incremental Compilation for Multi-Module projects broken in 7.x": https://github.com/gradle/gradle/issues/20101
* "Incremental compilation of java modules is broken with Gradle 7.6": https://github.com/gradle/gradle/issues/23067
Full release notes: https://docs.gradle.org/8.0/release-notes.html
Reviewers: Ismael Juma <ismael@juma.me.uk>
Reviewers: Daniel Urban <durban@cloudera.com>, Greg Harris <greg.harris@aiven.io>, Viktor Somogyi-Vass <viktorsomogyi@gmail.com>, Mickael Maison <mickael.maison@gmail.com>
This patch moves the current `__consumer_offsets` records from the `core` module to the new `group-coordinator` module.
Reviewers: Christo Lolov <lolovc@amazon.com>, Mickael Maison <mickael.maison@gmail.com>
Make sure no scaladoc warnings are emitted from the streams-scala project build.
We cannot fully fix all scaladoc warnings due to limitations of the scaladoc tool,
so this is a best-effort attempt at fixing as many warnings as possible. We also
disable one problematic class of scaladoc warnings (link errors) in the gradle build.
The causes of existing warnings are that we link to java members from scaladoc, which
is not possible, or we fail to disambiguate some members.
The broad rule applied in the changes is
- For links to Java members such as [[StateStore]], we use the fully qualified name in a code tag
to make manual link resolution via a search engine easy.
- For some common terms that are also linked to Java members, like [[Serde]], we omit the link.
- We disambiguate where possible.
- In the special case of @throws declarations with Java Exceptions, we do not seem to be able
to avoid the warning altogether.
Reviewers: Matthias J. Sax <mjsax@apache.org>, Guozhang Wang <wangguoz@gmail.com>
Most were converted not to use PowerMock, but some no
longer exist.
Reviewers: Chris Egerton <chrise@aiven.io>, Christo Lolov <christo_lolov@yahoo.com>
This patch migrates all the internal APIs of the current group coordinator to the new `GroupCoordinator` interface. It also makes the current implementation package private to ensure that it is not used anymore.
Reviewers: Justine Olshan <jolshan@confluent.io>
There were some concurrency inconsistencies in `KafkaScheduler` flagged by spotBugs
that had to be fixed, summary of changes below:
* Executor is `volatile`
* We always synchronize and check `isStarted` as the first thing within the critical
section when a mutating operation is performed.
* We don't synchronize (but ensure the executor is not null in a safe way) in read-only
operations that operate on the executor.
With regards to `MockScheduler/MockTask`:
* Set the type of `nextExecution` to `AtomicLong` and replaced inconsistent synchronization
* Extracted logic into `MockTask.rescheduleIfPeriodic`
Tweaked the `Scheduler` interface a bit (see the sketch after this list):
* Removed `unit` parameter since we always used `ms` except one invocation
* Introduced a couple of `scheduleOnce` overloads to replace the usage of default
arguments in Scala
* Pulled up `resizeThreadPool` to the interface and removed `isStarted` from the
interface.
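A hedged sketch of what the reworked interface can look like after these changes; the exact signatures in the patch may differ.
```java
import java.util.concurrent.ScheduledFuture;

public interface SchedulerSketch {
    // Delays and periods are in milliseconds; a negative period means "run once".
    ScheduledFuture<?> schedule(String name, Runnable task, long delayMs, long periodMs);

    // Overloads replacing the Scala default arguments.
    default ScheduledFuture<?> scheduleOnce(String name, Runnable task) {
        return scheduleOnce(name, task, 0L);
    }

    default ScheduledFuture<?> scheduleOnce(String name, Runnable task, long delayMs) {
        return schedule(name, task, delayMs, -1L);
    }

    // Pulled up from the implementation so callers depend only on the interface.
    void resizeThreadPool(int newSize);
}
```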
Other cleanups:
* Removed spotBugs exclusion affecting `kafka.log.LogConfig`, which no longer exists.
For broader context, see:
* KAFKA-14470: Move log layer to storage module
Reviewers: Jun Rao <junrao@gmail.com>
Also improved `LogValidatorTest` to cover a bug that was originally
only caught by `LogAppendTimeTest`.
For broader context on this change, please check:
* KAFKA-14470: Move log layer to storage module
Reviewers: Jun Rao <junrao@gmail.com>
This PR implements the follower fetch protocol as mentioned in KIP-405.
Added a new version for ListOffsets protocol to receive local log start offset on the leader replica. This is used by follower replicas to find the local log start offset on the leader.
Added a new version for FetchRequest protocol to receive OffsetMovedToTieredStorageException error. This is part of the enhanced fetch protocol as described in KIP-405.
We introduced a new field localLogStartOffset to maintain the log start offset in the local logs. The existing logStartOffset will continue to be the log start offset of the effective log that includes the segments in remote storage.
When a follower receives OffsetMovedToTieredStorage, then it tries to build the required state from the leader and remote storage so that it can be ready to move to fetch state.
Introduced RemoteLogManager, which is responsible for:
- initializing RemoteStorageManager and RemoteLogMetadataManager instances
- receiving leader and follower replica events and partition stop events and acting on them
- providing APIs to fetch indexes and metadata about remote log segments
Follow-up PRs will add more functionality, such as copying segments to tiered storage and retention checks to clean local and remote log segments. This will change the local log start offset and make sure the follower fetch protocol works fine for several cases.
You can look at the detailed protocol changes in KIP: https://cwiki.apache.org/confluence/display/KAFKA/KIP-405%3A+Kafka+Tiered+Storage#KIP405:KafkaTieredStorage-FollowerReplication
Co-authors: satishd@apache.org, kamal.chandraprakash@gmail.com, yingz@uber.com
Reviewers: Kowshik Prakasam <kprakasam@confluent.io>, Cong Ding <cong@ccding.com>, Tirtha Chatterjee <tirtha.p.chatterjee@gmail.com>, Yaodong Yang <yangyaodong88@gmail.com>, Divij Vaidya <diviv@amazon.com>, Luke Chen <showuon@gmail.com>, Jun Rao <junrao@gmail.com>
`core` should only be used for legacy cli tools and tools that require
access to `core` classes instead of communicating via the kafka protocol
(typically by using the client classes).
Summary of changes:
1. Convert the command implementation and tests to Java and move it to
the `tools` module.
2. Introduce mechanism to capture stdout and stderr from tests.
3. Change `kafka-metadata-quorum.sh` to point to the new command class.
4. Adjusted the test classpath of the `tools` module so that it supports tests
that rely on the `@ClusterTests` annotation.
5. Improved error handling when an exception different from `TerseFailure` is
thrown.
6. Changed `ToolsUtils` to avoid usage of arrays in favor of `List`.
Reviewers: dengziming <dengziming1993@gmail.com>
This patch moves the timeline data structures from the metadata module to the server-common module as those will be used in the new group coordinator.
Reviewers: José Armando García Sancio <jsancio@users.noreply.github.com>, Colin Patrick McCabe <cmccabe@apache.org>
This PR build on top of #11017. I have added the previous author's comment in this PR for attribution. I have also addressed the pending comments from @chia7712 in this PR.
Notes to help the reviewer:
Mockito has a mockStatic method which is equivalent to PowerMock's static mocking.
When we run the tests using @RunWith(MockitoJUnitRunner.StrictStubs.class) Mockito performs a verify() for all stubs that are mentioned, hence, there is no need to explicitly verify the stubs (unless you want to verify the number of times etc.). Note that this does not work for static mocks.
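For illustration, a minimal static-mocking pattern with Mockito's `mockStatic`; `SomeUtil` and `currentTimeMs` are hypothetical names.
```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mockStatic;

import org.junit.jupiter.api.Test;
import org.mockito.MockedStatic;

class StaticMockingExample {
    // Hypothetical utility whose static method we stub.
    static class SomeUtil {
        static long currentTimeMs() {
            return System.currentTimeMillis();
        }
    }

    @Test
    void staticCallsAreStubbedInsideTheScope() {
        try (MockedStatic<SomeUtil> mocked = mockStatic(SomeUtil.class)) {
            mocked.when(SomeUtil::currentTimeMs).thenReturn(42L);
            // Code under test running inside this block sees the stubbed value.
            assertEquals(42L, SomeUtil.currentTimeMs());
            // Static stubs are not auto-verified by strict stubbing, so verify explicitly if needed.
            mocked.verify(SomeUtil::currentTimeMs);
        }
    }
}
```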
Reviewers: Bruno Cadonna <cadonna@apache.org>, Walker Carlson <wcarlson@confluent.io>, Bill Bejeck <bbejeck@apache.org>
Also moves the Streams LogCaptureAppender class into the clients module so that it can be used by both Streams and Connect.
Reviewers: Nigel Liang <nigel@nigelliang.com>, Kalpesh Patel <kpatel@confluent.io>, John Roesler <vvcephei@apache.org>, Tom Bentley <tbentley@redhat.com>
In https://github.com/apache/kafka/pull/12695, we discovered a gap in our testing of `StandardAuthorizer`. We addressed the specific case that was failing, but I think we need to establish a better methodology for testing which incorporates randomized inputs. This patch is a start in that direction. We implement a few basic property tests using jqwik which focus on prefix searching. It catches the case from https://github.com/apache/kafka/pull/12695 prior to the fix. In the future, we can extend this to cover additional operation types, principal matching, etc.
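A toy jqwik property in the same spirit, shown only to illustrate the mechanics (randomized inputs, a reported seed on failure); it is not the actual authorizer test.
```java
import net.jqwik.api.ForAll;
import net.jqwik.api.Property;
import net.jqwik.api.constraints.AlphaChars;
import net.jqwik.api.constraints.StringLength;

class PrefixProperties {
    // Property: every prefix of a resource name is recognised as a prefix of that name.
    // jqwik generates many random names per run and prints the failing seed if one breaks.
    @Property
    boolean everyPrefixMatches(@ForAll @AlphaChars @StringLength(min = 1, max = 30) String resourceName) {
        for (int len = 1; len <= resourceName.length(); len++) {
            if (!resourceName.startsWith(resourceName.substring(0, len))) {
                return false;
            }
        }
        return true;
    }
}
```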
Reviewers: David Arthur <mumrah@gmail.com>
Kafka Streams expects a version resource at /kafka/kafka-streams-version.properties which is read by ClientMetrics. Unfortunately, the name of the generated file was wrong and thus was not copied as a resource.
Reviewer: Bruno Cadonna <cadonna@apache.org>
We sometimes see build failures where the code encounters an exit condition and fails abruptly. For example:
```
[2022-09-18T10:01:25.947Z] * What went wrong:
[2022-09-18T10:01:25.947Z] Execution failed for task ':core:unitTest'.
[2022-09-18T10:01:25.947Z] > Process 'Gradle Test Executor 116' finished with non-zero exit value 1
```
When this happens, it can be difficult to track the failure back to a specific test from the build output because we don't know which test was executing on 'Gradle Test Executor 116.'
There is a test logging property in gradle called `displayGranularity`, which lets us see the executor for each test run: https://docs.gradle.org/current/dsl/org.gradle.api.tasks.testing.logging.TestLogging.html#org.gradle.api.tasks.testing.logging.TestLogging:displayGranularity. When `displayGranularity` is set to 2 (the default), we get the following:
```
AdminZkClientTest > testGetBrokerMetadatas() PASSED
```
When set to 0, it looks like this instead:
```
Gradle Test Run :core:test > Gradle Test Executor 76 > AdminZkClientTest > testGetBrokerMetadatas() PASSED
```
Having this extra information should make it easier to debug failures.
Reviewers: Luke Chen <showuon@gmail.com>, David Jacot <djacot@confluent.io>
Verified that the artifact generated by `releaseTarGz` no longer includes
swagger-jaxrs2 or its dependencies (like snakeyaml).
Reviewers: José Armando García Sancio <jsancio@users.noreply.github.com>, Chris Egerton <fearthecellos@gmail.com>
Add `MetadataQuorumCommand` to describe quorum status. I'm trying to use an argparse4j-style command format; currently, we only support one sub-command, "describe", and we can specify two arguments: --status and --replication.
```
# describe quorum status
kafka-metadata-quorum.sh --bootstrap-server localhost:9092 describe --replication
ReplicaId LogEndOffset Lag LastFetchTimeMs LastCaughtUpTimeMs Status
0 10 0 -1 -1 Leader
1 10 0 -1 -1 Follower
2 10 0 -1 -1 Follower
kafka-metadata-quorum.sh --bootstrap-server localhost:9092 describe --status
ClusterId: fMCL8kv1SWm87L_Md-I2hg
LeaderId: 3002
LeaderEpoch: 2
HighWatermark: 10
MaxFollowerLag: 0
MaxFollowerLagTimeMs: -1
CurrentVoters: [3000,3001,3002]
CurrentObservers: [0,1,2]
# specify AdminClient properties
kafka-metadata-quorum.sh --bootstrap-server localhost:9092 --command-config config.properties describe --status
```
Reviewers: Jason Gustafson <jason@confluent.io>
When consuming both `kafka-client:3.0.1` and `kafka-client:3.0.1:test`
through maven, a hygiene tool was detecting multiple instances of the same
class loaded into the classpath.
Verified this change by building locally with a before and after build with
`./gradlew clients:publishToMavenLocal`, then used Beyond Compare to
verify the contents.
Reviewers: Ismael Juma <ismael@juma.me.uk>
This PR is created on top of #10904 and includes commits from original author for attribution.
## Testing
1. `./gradlew connect:runtime:unitTest --tests WorkerGroupMemberTest` is successful.
2. Verified that test is run as part of `./gradlew connect:runtime:unitTest` (see report in the PR)
Reviewers: Ismael Juma <ismael@juma.me.uk>
Co-authored-by: Chun-Hao Tang <tang7526@gmail.com>
Fix a bug in the KAFKA-14124 PR where a gradle test dependency was missing.
This causes missing test class exceptions.
Reviewers: Ismael Juma <ismael@juma.me.uk>
Before trying to commit a batch of records to the __cluster_metadata log, the active controller
should try to apply them to its current in-memory state. If this application process fails, the
active controller process should exit, allowing another node to take leadership. This will prevent
most bad metadata records from ending up in the log and help to surface errors during testing.
Similarly, if the active controller attempts to renounce leadership, and the renunciation process
itself fails, the process should exit. This will help avoid bugs where the active controller
continues in an undefined state.
In contrast, standby controllers that experience metadata application errors should continue on, in
order to avoid a scenario where a bad record brings down the whole controller cluster. The
intended effect of these changes is to make it harder to commit a bad record to the metadata log,
but to continue to ride out the bad record as well as possible if such a record does get committed.
This PR introduces the FaultHandler interface to implement these concepts. In junit tests, we use a
FaultHandler implementation which does not exit the process. This allows us to avoid terminating
the gradle test runner, which would be very disruptive. It also allows us to ensure that the test
surfaces these exceptions, which we previously were not doing (the mock fault handler stores the
exception).
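A rough sketch of the split described above between a process-exiting handler and a test handler that records the fault; the actual interface in the PR may differ.
```java
public interface FaultHandlerSketch {
    void handleFault(String failureMessage, Throwable cause);
}

// Production behavior: surface the fault and terminate so another controller can take over.
class ProcessExitingFaultHandler implements FaultHandlerSketch {
    @Override
    public void handleFault(String failureMessage, Throwable cause) {
        System.err.println("Fatal fault: " + failureMessage);
        Runtime.getRuntime().halt(1);
    }
}

// Test behavior: remember the first fault so the test can assert on it without killing the JVM.
class MockFaultHandler implements FaultHandlerSketch {
    private volatile RuntimeException firstFault;

    @Override
    public void handleFault(String failureMessage, Throwable cause) {
        if (firstFault == null) {
            firstFault = new RuntimeException(failureMessage, cause);
        }
    }

    public RuntimeException firstFault() {
        return firstFault;
    }
}
```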
In addition to the above, this PR fixes a bug where RaftClient#resign was not being called from the
renounce() function. This bug could have resulted in the raft layer not being informed of an active
controller resigning.
Reviewers: David Arthur <mumrah@gmail.com>
When the migration of the Streams project to JUnit 5 started with PR #12285, we discovered that the migrated tests were not run by the PR builds. This PR ensures that Streams' tests that are written in JUnit 4 and JUnit 5 are run in the PR builds.
Co-authored-by: Divij Vaidya <diviv@amazon.com>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Bruno Cadonna <cadonna@apache.org>
Highlights:
* The default Scala Zinc version was updated from 1.3.5 to 1.6.1
* Multiple Checkstyle tasks may now run in parallel within a project
* Support for Java 18
* Much more responsive continuous builds on Windows and macOS
* Improved diagnostics for dependency resolution
Some of our tests require java.util and java.lang modules to be open,
so do it explicitly given the following Gradle bug fix:
> When running on Java 9+, Gradle no longer opens the java.base/java.util
> and java.base/java.lang JDK modules for all Test tasks. In some cases,
> this would cause code to pass during testing but fail at runtime.
Release notes: https://docs.gradle.org/7.5/release-notes.html
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>, Luke Chen <showuon@gmail.com>
* KAFKA-13930: Add 3.2.0 Streams upgrade system tests
Apache Kafka 3.2.0 was recently released. Now we need
to test upgrades from 3.2 to trunk in our system tests.
Reviewer: Bill Bejeck <bbejeck@apache.org>
Clients remain connected and able to produce or consume despite an expired OAUTHBEARER token.
Root cause seems to be SaslServerAuthenticator#calcCompletionTimesAndReturnSessionLifetimeMs failing to set ReauthInfo#sessionExpirationTimeNanos when tokens have already expired (when the session lifetime goes negative), in turn causing KafkaChannel#serverAuthenticationSessionExpired to return false and finally SocketServer not closing the channel.
The issue is observed with OAUTHBEARER but seems to have a wider impact on SASL re-authentication.
Reviewers: Luke Chen <showuon@gmail.com>, Tom Bentley <tbentley@redhat.com>, Sam Barker <sbarker@redhat.com>
New gradle task `connect:runtime:genConnectOpenAPIDocs` that generates `connect_rest.yaml` under `docs/generated`.
This task is executed when `siteDocsTar` runs.
This patch builds on #12072 and adds controller support for metadata.version. The kafka-storage tool now allows a
user to specify a specific metadata.version to bootstrap into the cluster, otherwise the latest version is used.
Upon the first leader election of the KRaft quorum, this initial metadata.version is written into the metadata log. When
writing snapshots, a FeatureLevelRecord for metadata.version will be written out ahead of other records so we can
decode things at the correct version level.
This also includes additional validation in the controller when setting feature levels. It will now check that a given
metadata.version is supportable by the quorum, not just the brokers.
Reviewers: José Armando García Sancio <jsancio@gmail.com>, Colin P. McCabe <cmccabe@apache.org>, dengziming <dengziming1993@gmail.com>, Alyssa Huang <ahuang@confluent.io>
* Replace `log4j` with `reload4j` in `copyDependantLibs`. Since we have
some projects that have an explicit `reload4j` dependency, it
was included in the final release tar - i.e. it was effectively
a workaround for this bug.
* Exclude `log4j` and `slf4j-log4j12` transitive dependencies for
`streams:upgrade-system-tests`. Versions 0100 and 0101
had a transitive dependency to `log4j` and `slf4j-log4j12` via
`zkclient` and `zookeeper`. This avoids classpath conflicts that lead
to [NoSuchFieldError](https://github.com/qos-ch/reload4j/issues/41) in
system tests.
Reviewers: Jason Gustafson <jason@confluent.io>
Refactoring ApiVersion to MetadataVersion to support both old IBP versioning and new KRaft versioning (feature flags)
for KIP-778.
IBP versions are now encoded as enum constants and explicitly prefixed w/ IBP_ instead of KAFKA_, and having a
LegacyApiVersion vs DefaultApiVersion was not necessary and was replaced with appropriate parsing rules for extracting
the correct shortVersions/versions.
Co-authored-by: David Arthur <mumrah@gmail.com>
Reviewers: Ismael Juma <ismael@juma.me.uk>, David Arthur <mumrah@gmail.com>, dengziming <dengziming1993@gmail.com>, Colin P. McCabe <cmccabe@apache.org>
* Fix UP-TO-DATE check in `create*VersionFile` tasks
`create*VersionFile` tasks explicitly declared output UP-TO-DATE status
as being false. This change properly sets the inputs to
`create*VersionFile` tasks to the `commitId` and `version` values and
sets `receiptFile` locally rather than in an extra property.
* Enable output caching for `process*Messages` tasks
`process*Messages` tasks did not have output caching enabled. This
change enables that caching, as well as setting a property name and
RELATIVE path sensitivity.
* Fix existing Gradle deprecations
Replaces `JavaExec#main` with `JavaExec#mainClass`
Replaces `Report#destination` with `Report#outputLocation`
Adds a `generator` configuration to projects that need to resolve
the `generator` project (rather than referencing the runtimeClasspath
of the `generator` project from other project contexts).
Reviewers: Mickael Maison <mickael.maison@gmail.com>
This patch adds audit support through the kafka.authorizer.logger logger to StandardAuthorizer. It
follows the same conventions as AclAuthorizer with a similarly formatted log message. When
logIfAllowed is set in the Action, then the log message is at DEBUG level; otherwise, we log at
TRACE. When logIfDenied is set, then the log message is at INFO level; otherwise, we again log at
TRACE.
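The level-selection rule above, as a small illustrative helper (not the actual StandardAuthorizer code):
```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class AuditLogLevels {
    private static final Logger AUDIT_LOG = LoggerFactory.getLogger("kafka.authorizer.logger");

    static void logAuditMessage(String message, boolean allowed, boolean logIfAllowed, boolean logIfDenied) {
        if (allowed) {
            if (logIfAllowed) {
                AUDIT_LOG.debug(message); // allowed + logIfAllowed -> DEBUG
            } else {
                AUDIT_LOG.trace(message);
            }
        } else {
            if (logIfDenied) {
                AUDIT_LOG.info(message);  // denied + logIfDenied -> INFO
            } else {
                AUDIT_LOG.trace(message);
            }
        }
    }
}
```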
Reviewers: Colin P. McCabe <cmccabe@apache.org>
This regressed in ca375d8004 due to a typo. We need tests
for our builds. :)
I verified that passing the commitId via `-PcommitId=123`
works correctly.
Reviewers: Ismael Juma <ismael@juma.me.uk>
With major server components like the new quorum controller being moved outside of the `core` module, it is useful to have shared dependencies moved into `server-common`. An example of this is Yammer metrics which server components still rely heavily upon. All server components should have access to the default registry used by the broker so that new metrics can be registered and metric naming conventions should be standardized. This is particularly important in KRaft where we are attempting to recreate identically named metrics in the controller context. This patch takes a step in this direction. It moves `KafkaYammerMetrics` into `server-common` and it implements
standard metric naming utilities there.
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>
This bumps the slf4j version to 1.7.36 and swaps out log4j 1.2.17 with
reload4j 1.2.19
Signed-off-by: Mike Lothian <mike@fireburn.co.uk>
Reviewers: Luke Chen <showuon@gmail.com>, Ismael Juma <ismael@juma.me.uk>, Bruno Cadonna <cadonna@apache.org>
The caching store layers were passing down writes into lower store layers upon eviction, but not setting the context to the evicted records' context. Instead, the context was from whatever unrelated record was being processed at the time.
Reviewers: Matthias J. Sax <mjsax@apache.org>
`ext` contains definitions that should be accessible in `allprojects`
(even though we don't use any right now).
Reviewers: Jason Gustafson <jason@confluent.io>
Introduce `maxScalacThreads` and set the default to the lowest of `8`
and the number of processors available to the JVM. The number `8` was
picked empirically, the sweet spot is between 6 and 10.
On my desktop, `./gradlew clean core:compileScala core:compileTestScala`
improved from around 60s to 51s with this change.
While at it, we improve the build output to include more useful
information at the start: build id, max parallel forks, max scala
threads and max test retries.
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>, Sean Li
grgit 4.1.0 caused unsupported version error during gradle builds.
The reason was that grgit 4.1.0 uses always the latest JGit version
internally. Unfortunately, the latest JGit version was compiled with
a Java version later than Java 8 which caused the unsupported version
error during gradle builds for Java 8.
grgit 4.1.1 fixed this issue by upper bounding the version of JGit
to a version that is still compiled with Java 8. Consequently, we can
remove the hotfix we merged in commit d1e0d2b474
and instead bump the grgit version from 4.1.0 to 4.1.1.
Reviewer: John Roesler <vvcephei@apache.org>
A new version of JGit that is used by grgit that is used by gradle
causes the following error:
org/eclipse/jgit/storage/file/FileRepositoryBuilder has been compiled
by a more recent version of the Java Runtime (class file version 55.0),
this version of the Java Runtime only recognizes class file versions
up to 52.0
The reason is that version 6.0.0.202111291000-r of JGit was compiled
with a newer Java version than Java 8, probably Java 11.
Explicitly setting the version of JGit in gradle to 5.12.0.202106070339-r fixes
the issue.
Reviewers: David Jacot <djacot@confluent.io>, Ismael Juma <ismael@juma.me.uk>, Alexander Stohr, David Arthur <mumrah@gmail.com>
This task is to provide a concrete implementation of the interfaces defined in KIP-255 to allow Kafka to connect to an OAuth/OIDC identity provider for authentication and token retrieval. While KIP-255 provides an unsecured JWT example for development, this will fill in the gap and provide a production-grade implementation.
The OAuth/OIDC work will allow out-of-the-box configuration by any Apache Kafka users to connect to an external identity provider service (e.g. Okta, Auth0, Azure, etc.). The code will implement the standard OAuth client credentials grant type.
The proposed change is largely composed of a pair of AuthenticateCallbackHandler implementations: one to login on the client and one to validate on the broker.
See the following for more detail:
KIP-768
KAFKA-13202
Reviewers: Yi Ding <dingyi.zj@gmail.com>, Ismael Juma <ismael@juma.me.uk>, Jun Rao <junrao@gmail.com>
Java 17 is at release candidate stage and it will be a LTS release once
it's out (previous LTS release was Java 11).
Details:
* Replace Java 16 with Java 17 in Jenkins and Readme.
* Replace `--illegal-access=permit` (which was removed from Java 17)
with `--add-opens` for the packages we require internal access to.
Filed KAFKA-13275 for updating the tests not to require `--add-opens`
(where possible).
* Update `release.py` to use JDK 8 and JDK 17 (instead of JDK 8 and JDK 15).
* Removed all but one Streams test from `testsToExclude`. The
Connect test exclusion list remains the same.
* Add notable change to upgrade.html
* Upgrade to Gradle 7.2 as it's required for proper Java 17 support.
* Upgrade mockito to 3.12.4 for better Java 17 support.
* Adjusted `KafkaRaftClientTest` and `QuorumStateTest` not to require
private access to `jdk.internal.util.random`.
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>
Also adjusted the acceptable recovery lag to stabilize Streams tests.
Reviewers: Justine Olshan <jolshan@confluent.io>, Matthias J. Sax <mjsax@apache.org>, John Roesler <vvcephei@apache.org>
KAFKA-9555 Added default RLMM implementation based on internal topic storage.
This is the initial version of the default RLMM implementation.
This includes changes containing default RLMM configs, RLMM implementation, producer/consumer managers.
Introduced TopicBasedRemoteLogMetadataManagerHarness which takes care of bringing up a Kafka cluster and create remote log metadata topic and initializes TopicBasedRemoteLogMetadataManager.
Refactored existing RemoteLogMetadataCacheTest to RemoteLogSegmentLifecycleTest to have parameterized tests to run both RemoteLogMetadataCache and also TopicBasedRemoteLogMetadataManager.
Refactored existing InmemoryRemoteLogMetadataManagerTest, RemoteLogMetadataManagerTest to have parameterized tests to run both InmemoryRemoteLogMetadataManager and also TopicBasedRemoteLogMetadataManager.
This is part of tiered storage KIP-405 efforts.
Reviewers: Kowshik Prakasam <kprakasam@confluent.io>, Cong Ding <cong@ccding.com>, Jun Rao <junrao@gmail.com>
The broker should trigger a snapshot once
metadata.log.max.record.bytes.between.snapshots has been exceeded.
Reviewers: Jason Gustafson <jason@confluent.io>
This PR implements broker-side KRaft snapshots, including both saving and
loading. The code for triggering a periodic broker-side snapshot will come in a
follow-on PR. Loading should work with just this PR. It also implements
reloading broker snapshots after initialization.
In order to facilitate snapshots, this PR introduces the concept of
MetadataImage and MetadataDelta. MetadataImage represents the metadata state
retained in memory. It is basically a generalization of MetadataCache that
includes a few things that MetadataCache does not (such as features and client
quotas.) KRaftMetadataCache is now an accessor for the data stored in this object.
Similarly, MetadataImage replaces CacheConfigRepository and ClientQuotaCache.
It also subsumes kafka.server.metadata.MetadataImage and related classes.
MetadataDelta represents a change to a MetadataImage. When a KRaft snapshot is
loaded, we will accumulate all the changes into a MetadataDelta first, prior to
applying it. If we must reload a snapshot because we fell too far behind while
consuming metadata, the resulting MetadataDelta will contain all the changes
needed to catch us up. During normal operation, MetadataDelta is also used to
accumulate the changes of each incoming batch of metadata records. These
incremental deltas should be relatively small.
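Conceptually, the image/delta flow looks roughly like the sketch below; the types and method names are illustrative rather than the exact API.
```java
import java.util.HashMap;
import java.util.Map;

public class ImageDeltaSketch {
    // Immutable "image" of some metadata; a real image would hold topics, configs, quotas, etc.
    record Image(Map<String, Integer> topicPartitionCounts) { }

    // Accumulates changes relative to a base image, then produces a new image in one step.
    static class Delta {
        private final Image base;
        private final Map<String, Integer> changedTopics = new HashMap<>();

        Delta(Image base) {
            this.base = base;
        }

        void replayTopicChange(String topic, int partitionCount) {
            changedTopics.put(topic, partitionCount);
        }

        Image apply() {
            Map<String, Integer> merged = new HashMap<>(base.topicPartitionCounts());
            merged.putAll(changedTopics);
            return new Image(Map.copyOf(merged));
        }
    }

    public static void main(String[] args) {
        Image image = new Image(Map.of("orders", 3));
        Delta delta = new Delta(image);        // accumulate one batch of record changes
        delta.replayTopicChange("orders", 6);  // e.g. a partition count change
        delta.replayTopicChange("payments", 1);
        Image next = delta.apply();            // new image; the previous one remains usable
        System.out.println(next.topicPartitionCounts());
    }
}
```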
I have removed the logic for updating the various manager objects from
BrokerMetadataListener and placed it into BrokerMetadataPublisher. This makes
it easier to unit test BrokerMetadataListener.
Reviewers: David Arthur <mumrah@gmail.com>, Jason Gustafson <jason@confluent.io>
Gradle 7.1 improves Java incremental compilation:
https://docs.gradle.org/7.1.1/release-notes.html
We previously kept the JDK 15 build because some
tests didn't work with JDK 16. Since then, a number
of PRs were submitted to fix this so it's best
to remove the JDK 15 build before we create the
3.0 release branch.
Finally bump `test-retry` gradle plugin version too.
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>, Luke Chen <showuon@gmail.com>
Update the ZooKeeper version to v3.6.3. This requires adding dropwizard
as a new dependency.
Also, add Kafka v2.8.0 to the ducktape system test image.
Reviewers: Luke Chen <showuon@gmail.com>, Colin P. McCabe <cmccabe@apache.org>, Ismael Juma <ismael@juma.me.uk>
JDK 15 no longer receives updates, so we want to switch from JDK 15 to JDK 16.
However, we have a number of tests that don't yet pass with JDK 16.
Instead of replacing JDK 15 with JDK 16, we have both for now and we either
disable (via annotations) or exclude (via gradle) the tests that don't pass with
JDK 16 yet. The annotations approach is better, but it doesn't work for tests
that rely on the PowerMock JUnit 4 runner.
Also add `--illegal-access=permit` when building with JDK 16 to make MiniKdc
work for now. This has been removed in JDK 17, so we'll have to figure out
another solution when we migrate to that.
Relevant JIRAs for the disabled tests: KAFKA-12790, KAFKA-12941, KAFKA-12942.
Moved some assertions from `testTlsDefaults` to `testUnsupportedTlsVersion`
since the former claims to test the success case while the latter tests the failure case.
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
* Development of EasyMock and PowerMock has stagnated while Mockito continues to be actively developed. With the new Java cadence, it's a problem to depend on libraries that do bytecode generation and are not actively maintained. In addition, Mockito is also easier to use. KAFKA-7438
Reviewers: Ismael Juma <ismael@juma.me.uk>, Chia-Ping Tsai <chia7712@gmail.com>, Bruno Cadonna <cadonna@apache.org>
The command used by our private CI is `./gradlew cleanTest xxx:test`. It does not re-run tests when we use unitTest and integrationTest in place of test. The root cause is that we don't register the test output (unitTest and integrationTest) with the cleanTest task, so it does not delete the related test output.
Reviewers: Ismael Juma <ismael@juma.me.uk>
This patch removes the temporary shim layer we added to bridge the interface
differences between MetaLogManager and RaftClient. Instead, we now use the
RaftClient directly from the metadata module. This also means that the
metadata gradle module now depends on raft, rather than the other way around.
Finally, this PR also consolidates the handleResign and handleNewLeader APIs
into a single handleLeaderChange API.
Co-authored-by: Jason Gustafson <jason@confluent.io>
Added server-common module to have server side common classes. Moved ApiMessageAndVersion, RecordSerde, AbstractApiMessageSerde, and BytesApiMessageSerde to server-common module.
Reviewers: Kowshik Prakasam <kprakasam@confluent.io>, Jun Rao <junrao@gmail.com>
KAFKA-12429: Added serdes for the default implementation of RLMM based on an internal topic as storage. This topic will receive events of RemoteLogSegmentMetadata, RemoteLogSegmentUpdate, and RemotePartitionDeleteMetadata. These events are serialized into Kafka protocol message format.
Added tests for all the event types for that topic.
This is part of the tiered storage implementation KIP-405.
Reviewers: Kowshik Prakasam <kprakasam@confluent.io>, Jun Rao <junrao@gmail.com>
Move Trogdor out of tools and into its own gradle module. This allows us to minimize
the dependencies of the tools module. We still keep Trogdor in the CLASSPATH
created by kafka-run-class.sh.
Reviewers: Colin P. McCabe <cmccabe@apache.org>
Verified that `./bin/kafka-metadata-shell.sh --help` on the release tarball
works as expected. It failed with a `ClassNotFoundException` before this
change.
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>, Gwen (Chen) Shapira <cshapi@gmail.com>
The Gradle RAT plugin properly declares inputs and outputs and is also
cacheable. This also relieves the Kafka developers from maintaining the build
integration with RAT.
The generated RAT report is identical to the one generated previously. The only
difference is the RAT report name: the RAT plugin sets the HTML report name to
`index.html` (still under `build/rat`).
Verified that the rat task fails if unlicensed files are present (and not excluded). Also
`./gradlew rat` succeeds when there is no .git folder.
`testRuntime` is deprecated in Gradle 6.x and has been removed in Gradle 7.0.
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>, Satish Duggana <satishd@apache.org>
Moved tiered storage API classes from clients module to a new storage-api module.
Created storage and storage-api modules. All the remote storage API classes are moved to storage-api module. All the remote storage implementation classes will be added to storage module.
Reviewers: Jun Rao <junrao@gmail.com>
Fixes the LICENSE files that we ship with our releases:
* the source-distribution license included wrong and unnecessary dependencies
* the binary-distribution license was missing most of our actual dependencies
Reviewers: A. Sophie Blee-Goldman <ableegoldman@apache.org>, Ewen Cheslack-Postava <ewencp@apache.org>, Justin Mclean <jmclean@apache.org>
* Standardize license headers in scala, python, and gradle files.
* Relocate copyright attribution to the NOTICE.
* Add a license header check to `spotless` for scala files.
Reviewers: Ewen Cheslack-Postava <ewencp@apache.org>, Matthias J. Sax <mjsax@apache.org>, A. Sophie Blee-Goldman <ableegoldman@apache.org>
`Self-managed` is also used in the context of Cloud vs on-prem and it can
be confusing.
`KRaft` is a cute combination of `Kafka Raft` and it's pronounced like `craft`
(as in `craftsmanship`).
Reviewers: Colin P. McCabe <cmccabe@apache.org>, Jose Sancio <jsancio@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>, Ron Dagostino <rdagostino@confluent.io>
As @chia7712 found, we currently apply the `java` plugin indirectly via `rat.gradle`
if the `.git` folder exists (7071ded2a6/gradle/rat.gradle (L101)).
This led to inconsistent behavior if the `.git` directory was not present. With
this change, the `java-library` plugin (which has slightly more features than
the `java` plugin) is always applied.
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>, John Roesler <vvcephei@apache.org>
There were errors while generating javadoc for the streams:test-utils module
because the included TopologyTestDriver imported some excluded classes.
This fixes the errors by inlining the previously excluded packages.
Reviewers: Chia-Ping Tsai <chia7712@apache.org>, Ismael Juma <ijuma@apache.org>
1. replace org.junit.Assert by org.junit.jupiter.api.Assertions
2. replace org.junit by org.junit.jupiter.api
3. replace Before by BeforeEach
4. replace After by AfterEach
5. remove ExternalResource from all scala modules
6. add explicit AfterClass/BeforeClass to stop/start EmbeddedKafkaCluster
Note that this PR does not migrate the streams module to JUnit 5, so it does not introduce JUnit 5 callbacks to deal with beforeAll/afterAll. The next PR, which migrates the streams module, can replace the explicit beforeAll/afterAll with a JUnit 5 extension. Or we can keep the beforeAll/afterAll if it makes the code more readable.
Reviewers: John Roesler <vvcephei@apache.org>
KIP-500 is not particularly descriptive. I also tweaked the readme text a bit.
Tested that the readme for self-managed still works after these changes.
Reviewers: Colin P. McCabe <cmccabe@apache.org>, Ron Dagostino <rdagostino@confluent.io>, Jason Gustafson <jason@confluent.io>
Builds are failing since this file does not have a license. Similar .md files do not seem to have licenses,
so I've added this file to the exclude list.
It was incorrectly set within `dependencyUpdates` and it still worked.
That is, this is a no-op in terms of behavior, but makes it easier to
read and understand.
I tested that `releaseTarGz` produced a tar with two versions of
`javassist` _without this block_ and a single version with it (in either
position).
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
`org.rocksdb.Options` is part of Kafka Streams public api via `RocksDBConfigSetter`.
Reviewers: Anna Sophie Blee-Goldman <sophie@confluent.io>, Chia-Ping Tsai <chia7712@gmail.com>
This patch changes the raft simulation tests to use jqwik, which is a property testing library. This provides two main benefits:
- It simplifies the randomization of test parameters. Currently the tests use a fixed set of `Random` seeds, which means that most builds are doing redundant work. We get a bigger benefit from allowing each build to test different parameterizations.
- It makes it easier to reproduce failures. Whenever a test fails, jqwik will report the random seed that failed. A developer can then modify the `@Property` annotation to use that specific seed in order to reproduce the failure.
This patch also includes an optimization for `MockLog.earliestSnapshotId` which reduces the time to run the simulation tests dramatically.
Reviewers: Ismael Juma <ismael@juma.me.uk>, Chia-Ping Tsai <chia7712@gmail.com>, José Armando García Sancio <jsancio@gmail.com>, David Jacot <djacot@confluent.io>
This is useful when debugging build issues. I also removed two printlns that are now redundant, so
this makes the build more informative and less noisy at the same time. Example output:
> Starting build with version 3.0.0-SNAPSHOT using Gradle 6.8.3, Java 15 and Scala 2.13.5
Reviewers: A. Sophie Blee-Goldman <sophie@confluent.io>
- Use consistent options for `javadoc` and `aggregatedJavadoc`
- `aggregatedJavadoc` depends on `compileJava`
- `connect-api` inherits `options.links`
- `streams` and `streams-test-utils` javadoc exclusions should be more
specific to avoid unexpected behavior in `aggregatedJavadoc` when the
javadoc for multiple modules is generated together
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
Gradle 7.0 is required for Java 16 compatibility and it removes a number of
deprecated APIs. Fix most issues preventing the upgrade to Gradle 7.0.
The remaining ones are more complicated and should be handled
in a separate PR. Details of the changes:
* Release tarball no longer includes test, sources, javadoc and test sources jars (these
are still published to the Maven Central repository).
* Replace `compile` with `api` or `implementation` - note that `implementation`
dependencies appear with `runtime` scope in the pom file so this is a (positive)
change in behavior
* Add missing dependencies that were uncovered by the usage of `implementation`
* Replace `testCompile` with `testImplementation`
* Replace `runtime` with `runtimeOnly` and `testRuntime` with `testRuntimeOnly`
* Replace `configurations.runtime` with `configurations.runtimeClasspath`
* Replace `configurations.testRuntime` with `configurations.testRuntimeClasspath` (except for
the usage in the `streams` project as that causes a cyclic dependency error)
* Use `java-library` plugin instead of `java`
* Use `maven-publish` plugin instead of deprecated `maven` plugin - this changes the
commands used to publish and to install locally, but task aliases for `install` and
`uploadArchives` were added for backwards compatibility
* Removed `-x signArchives` line from the readme since it was wrong (it was a
no-op before and it fails now, however)
* Replaces `artifacts` block with an approach that works with the `maven-publish` plugin
* Don't publish `jmh-benchmark` module - the shadow jar is pretty large and not
particularly useful (before this PR, we would publish the non shadow jars)
* Replace `version` with `archiveVersion`, `baseName` with `archiveBaseName` and
`classifier` with `archiveClassifier`
* Update Gradle and plugins to the latest stable version (7.0 is not stable yet)
* Use `plugin` DSL to configure plugins
* Updated notable changes for 3.0
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>, Randall Hauch <rhauch@gmail.com>
KIP-405 introduces tiered storage feature in Kafka. With this feature, Kafka cluster is configured with two tiers of storage - local and remote. The local tier is the same as the current Kafka that uses the local disks on the Kafka brokers to store the log segments. The new remote tier uses systems, such as HDFS or S3 or other cloud storages to store the completed log segments. Consumers fetch the records stored in remote storage through the brokers with the existing protocol.
We introduced a few SPIs for plugging in log/index store and remote log metadata store.
This involves two parts
1. Storing the actual data in remote storage like HDFS, S3, or other cloud storages.
2. Storing the metadata about where the remote segments are stored. The default implementation uses an internal Kafka topic.
You can read KIP for more details at https://cwiki.apache.org/confluence/display/KAFKA/KIP-405%3A+Kafka+Tiered+Storage
Reviewers: Jun Rao <junrao@gmail.com>
Implements KIP-695
Reverts a previous behavior change to Consumer.poll and replaces
it with a new Consumer.currentLag API, which returns the client's
currently cached lag.
Uses this new API to implement the desired task idling semantics
improvement from KIP-695.
Reverts fdcf8fbf72 / KAFKA-10866: Add metadata to ConsumerRecords (#9836)
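A hedged usage sketch of the new API (the broker address, group id and topic below are illustrative):
```java
import java.time.Duration;
import java.util.List;
import java.util.OptionalLong;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class CurrentLagExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "lag-demo");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        TopicPartition tp = new TopicPartition("orders", 0);
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(List.of(tp));
            consumer.poll(Duration.ofMillis(200));      // poll so the client caches offset metadata
            OptionalLong lag = consumer.currentLag(tp); // empty until the lag is known client-side
            lag.ifPresent(value -> System.out.println("Cached lag for " + tp + ": " + value));
        }
    }
}
```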
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>, Guozhang Wang <guozhang@apache.org>
As mentioned in #9548, users currently use the kafka jar (`core` module)
for integration testing and the current inlining behavior causes
problems when the user's classpath contains a different Scala version
than the one that was used for compilation (e.g. 2.13.4 versus 2.13.3).
An example error:
`java.lang.NoClassDefFoundError: scala/math/Ordering$$anon$7`
We now disable inlining of the `scala` package by default, but make it
easy to enable it for those who so desire (a good option if you can
ensure the scala library version matches the one used for compilation).
While at it, we make it possible to disable scala compiler optimizations
(`none`) or to use only method local optimizations (`method`). This can
be useful if optimizing for compilation time during development.
Verified behavior by running gradlew with `--debug` and checking the
output after `[zinc] The Scala compiler is invoked with:`
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
The quorum controller stores metadata in the KIP-500 metadata log, not in Apache
ZooKeeper. Each controller node is a voter in the metadata quorum. The leader of the
quorum is the active controller, which processes write requests. The followers are standby
controllers, which replay the operations written to the log. If the active controller goes away,
a standby controller can take its place.
Like the ZooKeeper-based controller, the quorum controller is based on an event queue
backed by a single-threaded executor. However, unlike the ZK-based controller, the quorum
controller can have multiple operations in flight; it does not need to wait for one operation
to be finished before starting another. Therefore, calls into the QuorumController return
CompletableFuture objects which are completed with either a result or an error when the
operation is done. The QuorumController will also time out operations that have been
sitting on the queue too long without being processed. In this case, the future is completed
with a TimeoutException.
The controller uses timeline data structures to store multiple "versions" of its in-memory
state simultaneously. "Read operations" read only committed state, which is slightly older
than the most up-to-date in-memory state. "Write operations" read and write the latest
in-memory state. However, we can not return a successful result for a write operation until
its state has been committed to the log. Therefore, if a client receives an RPC response, it
knows that the requested operation has been performed, and can not be undone by a
controller failover.
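For illustration, a minimal sketch of the timeline idea: a value that keeps multiple versions keyed by offset so committed reads and latest writes can coexist. Class and method names here are illustrative.
```java
import java.util.Map;
import java.util.TreeMap;

public class TimelineValueSketch<T> {
    // Versions keyed by the log offset at which they were produced.
    private final TreeMap<Long, T> versions = new TreeMap<>();

    public void set(long offset, T value) {
        versions.put(offset, value);
    }

    // Write operations see the most recent in-memory state.
    public T latest() {
        Map.Entry<Long, T> entry = versions.lastEntry();
        return entry == null ? null : entry.getValue();
    }

    // Read operations see only state at or before the committed offset.
    public T readCommitted(long committedOffset) {
        Map.Entry<Long, T> entry = versions.floorEntry(committedOffset);
        return entry == null ? null : entry.getValue();
    }

    // Roll back uncommitted versions, e.g. when leadership is lost before commit.
    public void truncateAfter(long offset) {
        versions.tailMap(offset, false).clear();
    }
}
```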
Reviewers: Jun Rao <junrao@gmail.com>, Ron Dagostino <rdagostino@confluent.io>
The Kafka Metadata shell is a new command which allows users to
interactively examine the metadata stored in a KIP-500 cluster.
It can examine snapshot files that are specified via --snapshot.
The metadata tool works by replaying the log and storing the state into
in-memory nodes. These nodes are presented in a fashion similar to
filesystem directories.
Reviewers: Jason Gustafson <jason@confluent.io>, David Arthur <mumrah@gmail.com>, Igor Soarez <soarez@apple.com>
Add ClusterTool as specified in KIP-631. It can report the current cluster ID, and also send the new RPC for removing broker registrations.
Reviewers: David Arthur <mumrah@gmail.com>
This patch adds a `RecordSerde` implementation for the metadata record format expected by KIP-631.
Reviewers: Colin McCabe <cmccabe@apache.org>, Ismael Juma <mlists@juma.me.uk>
Replace BrokerStates.scala with BrokerState.java, to make it easier to use from Java code if needed. This also makes it easier to go from a numeric type to an enum.
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
This was causing IntelliJ to choose the Vintage runner when running `core` tests
which would then fail during the discovery phase.
Verified that `core` tests work in IntelliJ again after this change.
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
Ensure that runtime-only dependencies don't cause a different version to be used.
More specifically:
> The relationship is directed, which means that if the runtimeClasspath configuration
has to be resolved, Gradle will first resolve the compileClasspath and then "inject" the
result of resolution as strict constraints into the runtimeClasspath.
For more details, see:
https://docs.gradle.org/6.8/userguide/resolution_strategy_tuning.html#sec:configuration_consistency
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
Add a runtime dependency on the jupiter engine to avoid failures
during `./gradlew unitTest/integrationTest/test` for `upgrade-system-tests-*`.
Also remove `connect` and `examples` from the JUnit 5 blocklist since they
contain no tests.
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
Also:
* Remove unused powermock dependency
* Remove "examples" from the JUnit 4 list since one module was already
converted and the other has no tests
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
Add the metadata gradle module, which will contain the metadata record
definitions, and other metadata-related broker-side code.
Add MetadataParser, MetadataParseException, etc.
Reviewers: José Armando García Sancio <jsancio@gmail.com>, Ismael Juma <ismael@juma.me.uk>, David Arthur <mumrah@gmail.com>
Instead of listing the projects that should use JUnit 5, list the
projects that are still using JUnit 4.
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
* replace `org.junit.Assert` by `org.junit.jupiter.api.Assertions`
* replace `org.junit` by `org.junit.jupiter.api`
* replace `org.junit.runners.Parameterized` by `org.junit.jupiter.params.ParameterizedTest`
* replace `org.junit.runners.Parameterized.Parameters` by `org.junit.jupiter.params.provider.{Arguments, MethodSource}`
* replace `Before` by `BeforeEach`
* replace `After` by `AfterEach`
Reviewers: Ismael Juma <ismael@juma.me.uk>
* Use the packages/classes from JUnit 5
* Move description in `assert` methods to last parameter
* Convert parameterized tests so that they work with JUnit 5
* Remove `hamcrest` since it didn't seem to add much value
* Fix `Utils.mkEntry` to have correct `equals` implementation
* Add a missing `@Test` annotation in `SslSelectorTest` override
* Adjust regex in `SaslAuthenticatorTest` due to small change in the
assert failure string in JUnit 5
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
This PR includes the following changes; a brief before/after sketch follows the list.
1. replace org.junit.Assert by org.junit.jupiter.api.Assertions
2. replace org.junit by org.junit.jupiter.api
3. replace Before by BeforeEach
4. replace After by AfterEach
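A hypothetical test before and after the migration (not taken from this PR) shows how mechanical the change is:
```java
// Before (JUnit 4):
//   import org.junit.Assert;
//   import org.junit.Before;
//   import org.junit.Test;
//
//   @Before
//   public void setUp() { ... }
//
//   @Test
//   public void testSomething() { Assert.assertEquals("unexpected value", 42, compute()); }

// After (JUnit 5), for a hypothetical test class:
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

public class SomethingTest {
    private int base;

    @BeforeEach
    public void setUp() {
        base = 40;
    }

    @Test
    public void testSomething() {
        // Note that the description moves to the last parameter in JUnit 5 assertions.
        assertEquals(42, base + 2, "unexpected value");
    }
}
```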
Reviewers: Ismael Juma <ismael@confluent.io>
Also updated the jmh readme to make it easier for newcomers to learn
what's possible and what the best practices are.
There were some changes in the generated benchmarking code that
required adjusting `spotbugs-exclude.xml` and suppressing a `javac`
warning for the benchmarking module. I took the chance
to make the spotbugs exclusion more maintainable via a regex
pattern.
Tested the commands on Linux and macOS with zsh.
JMH highlights:
* async-profiler integration. Can be used with -prof async;
pass -prof async:help to list the accepted options.
* perf c2c integration. Can be used with -prof perfc2c,
if available.
* JFR profiler integration. Can be used with -prof jfr; pass
-prof jfr:help to list the accepted options.
Full details:
* 1.24: https://mail.openjdk.java.net/pipermail/jmh-dev/2020-August/002982.html
* 1.25: https://mail.openjdk.java.net/pipermail/jmh-dev/2020-August/002987.html
* 1.26: https://mail.openjdk.java.net/pipermail/jmh-dev/2020-October/003024.html
* 1.27: https://mail.openjdk.java.net/pipermail/jmh-dev/2020-December/003096.html
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>, Bill Bejeck <bbejeck@gmail.com>, Lucas Bradstreet <lucasbradstreet@gmail.com>
Scala 2.13.4 restores default global `ExecutionContext` to 2.12 behavior
(to fix a perf regression in some use cases) and improves pattern matching
(especially exhaustiveness checking). Most of the changes are related
to the latter as I have enabled the newly introduced `-Xlint:strict-unsealed-patmat`.
More details on the code changes:
* Don't swallow exception in `ReassignPartitionsCommand.topicDescriptionFutureToState`.
* `RequestChannel.Response` should be `sealed`.
* Introduce sealed ClientQuotaManager.BaseUserEntity to avoid false positive
exhaustiveness warning.
* Handle a number of cases where pattern matches were not exhaustive:
either by marking them with @unchecked or by adding a catch-all clause.
* Workaround scalac bug related to exhaustiveness warnings in ZooKeeperClient
* Remove warning suppression annotations related to the optimizer that are no
longer needed in ConsumerGroupCommand and AclAuthorizer.
* Use `forKeyValue` in `AclAuthorizer.acls` as the scala bug preventing us from
using it seems to be fixed.
* Also update scalaCollectionCompat to 2.3.0, which includes minor improvements.
Full release notes:
* https://github.com/scala/scala/releases/tag/v2.13.4
* https://github.com/scala/scala-collection-compat/releases/tag/v2.3.0
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>
I also removed a test class with no tests currently (Jason filed KAFKA-10519 for
filling the test gap).
Reviewers: Jason Gustafson <jason@confluent.io>
This is the core Raft implementation specified by KIP-595: https://cwiki.apache.org/confluence/display/KAFKA/KIP-595%3A+A+Raft+Protocol+for+the+Metadata+Quorum. We have created a separate "raft" module where most of the logic resides. The new APIs introduced in this patch in order to support Raft election and such are disabled in the server until the integration with the controller is complete. Until then, there is a standalone server which can be used for testing the performance of the Raft implementation. See `raft/README.md` for details.
Reviewers: Guozhang Wang <wangguoz@gmail.com>, Boyang Chen <boyang@confluent.io>
Co-authored-by: Boyang Chen <boyang@confluent.io>
Co-authored-by: Guozhang Wang <wangguoz@gmail.com>
This change sets the groundwork for migrating other modules incrementally.
Main changes:
- Replace `junit` 4.13 with `junit-jupiter` and `junit-vintage` 5.7.0-RC1.
- All modules except for `tools` depend on `junit-vintage`.
- `tools` depends on `junit-jupiter`.
- Convert `tools` tests to JUnit 5.
- Update `PushHttpMetricsReporterTest` to use `mockito` instead of `powermock` and `easymock`
(powermock doesn't seem to work well with JUnit 5 and we don't need it since mockito can mock
static methods).
- Update `mockito` to 3.5.7.
- Update `TestUtils` to use JUnit 5 assertions since `tools` depends on it.
Unrelated clean-ups:
- Remove `unit` from package names in a few `core` tests.
- Replace `try/catch/fail` with `assertThrows` in a number of places.
- Tag `CoordinatorTest` as integration test.
- Remove unnecessary type parameters when invoking methods and constructors.
Tested with IntelliJ and gradle. Verified that the following commands work as expected:
* ./gradlew tools:unitTest
* ./gradlew tools:integrationTest
* ./gradlew tools:test
* ./gradlew core:unitTest
* ./gradlew core:integrationTest
* ./gradlew clients:test
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>
For the generated message code, put the JSON conversion functionality
in a separate JsonConverter class.
Make MessageDataGenerator simply another generator class, alongside the
new JsonConverterGenerator class. Move some of the utility functions
from MessageDataGenerator into FieldSpec and other places, so that they
can be used by other generator classes.
Use argparse4j to support a better command-line for the generator.
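Roughly, the split looks like the sketch below, with hypothetical class and field names rather than the generator's actual output: the message class holds the data, and a companion converter class owns the JSON reading and writing.
```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.node.JsonNodeFactory;
import com.fasterxml.jackson.databind.node.ObjectNode;

// Hypothetical message class, standing in for a generated *Data class.
class ExampleData {
    int partitionId;
    String topicName;
}

// Hypothetical companion produced by a JsonConverterGenerator-style pass:
// JSON conversion lives here instead of inside the message class itself.
class ExampleDataJsonConverter {
    static JsonNode write(ExampleData data, short version) {
        ObjectNode node = new ObjectNode(JsonNodeFactory.instance);
        node.put("partitionId", data.partitionId);
        node.put("topicName", data.topicName);
        return node;
    }

    static ExampleData read(JsonNode json, short version) {
        ExampleData data = new ExampleData();
        data.partitionId = json.get("partitionId").asInt();
        data.topicName = json.get("topicName").asText();
        return data;
    }
}
```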
Reviewers: David Arthur <mumrah@gmail.com>
This patch ensures we use a force resolution strategy for the scala-library dependency.
I've tested this locally and saw a difference in the output.
With the change (using 2.4 and the jackson library 2.10.5):
```
./core/build/dependant-libs-2.12.10/scala-java8-compat_2.12-0.9.0.jar
./core/build/dependant-libs-2.12.10/scala-collection-compat_2.12-2.1.2.jar
./core/build/dependant-libs-2.12.10/scala-reflect-2.12.10.jar
./core/build/dependant-libs-2.12.10/scala-logging_2.12-3.9.2.jar
./core/build/dependant-libs-2.12.10/scala-library-2.12.10.jar
```
Without (using 2.4 and the jackson library 2.10.5):
```
find . -name 'scala*.jar'
./core/build/dependant-libs-2.12.10/scala-java8-compat_2.12-0.9.0.jar
./core/build/dependant-libs-2.12.10/scala-collection-compat_2.12-2.1.2.jar
./core/build/dependant-libs-2.12.10/scala-reflect-2.12.10.jar
./core/build/dependant-libs-2.12.10/scala-logging_2.12-3.9.2.jar
./core/build/dependant-libs-2.12.10/scala-library-2.12.12.jar
```
Reviewers: Ismael Juma <ismael@juma.me.uk>
I left out updates that could be risky. Preliminary testing indicates
we can build (including spotBugs) and run tests with Java 15 with
these changes. I will do more thorough testing once Java 15 reaches
release candidate stage in a few weeks.
Minor updates with mostly bug fixes:
- Scala: 2.12.11 -> 2.12.12 (compiler and collection performance improvements)
- Bouncy castle: 1.64 -> 1.66 (several bug fixes)
- HttpClient: 4.5.11 -> 4.5.12 (small number of bug fixes)
- Mockito: 3.3.3 -> 3.4.4 (several bug fixes and Java 15 support)
- Netty: 4.5.10 -> 4.5.11 (several bug fixes)
- Snappy: 1.1.7.3 -> 1.1.7.6 (small number of bug fixes)
- Zstd: 1.4.5-2 -> 1.4.5-6 (small number of bug fixes)
Gradle plugin and library upgrades:
- Gradle version plugins: 0.28.0 -> 0.29.0 (small number of bug fixes)
- Git: 4.0.1 -> 4.0.2 (small number of bug fixes)
- Scoverage plugin: 4.0.1 -> 4.0.2 (small number of bug fixes)
- Shadow plugin: 5.2.0 -> 6.0.0 (Java 15 support and require Gradle 6.0)
- Test Retry plugin: 1.1.5 -> 1.1.6 (small number of bug fixes)
- Spotless plugin: 4.4.4 -> 5.1.0 (several internal changes that should not matter to us)
- Spotbugs: 4.0.3 -> 4.0.6 (small number of bug fixes)
- Spotbugs plugin: 4.2.4 -> 4.4.4 (small number of bug fixes)
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>
Previously, we had some code hard-coded to generate message type classes
for RPCs. We might want to generate message type classes for other
things as well, so make it more generic.
Reviewers: Boyang Chen <boyang@confluent.io>
This includes important fixes. Netty is required by ZooKeeper if TLS is
enabled.
I verified that the netty jars were changed from 4.1.48 to 4.1.50 with
this PR, `find . -name '*netty*'`:
```text
./core/build/dependant-libs-2.13.3/netty-handler-4.1.50.Final.jar
./core/build/dependant-libs-2.13.3/netty-transport-native-epoll-4.1.50.Final.jar
./core/build/dependant-libs-2.13.3/netty-codec-4.1.50.Final.jar
./core/build/dependant-libs-2.13.3/netty-transport-native-unix-common-4.1.50.Final.jar
./core/build/dependant-libs-2.13.3/netty-transport-4.1.50.Final.jar
./core/build/dependant-libs-2.13.3/netty-resolver-4.1.50.Final.jar
./core/build/dependant-libs-2.13.3/netty-buffer-4.1.50.Final.jar
./core/build/dependant-libs-2.13.3/netty-common-4.1.50.Final.jar
```
Note that the previous netty exclude no longer worked since we upgraded
to ZooKeeper 3.5.x, as it switched to Netty 4, which has different module names.
Also, the Netty dependency is needed by ZooKeeper for TLS support so we
cannot exclude it.
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>
I had to fix several compiler errors due to deprecation of auto application of `()`. A related
Xlint config (`-Xlint:nullary-override`) is no longer valid in 2.13, so we now only enable it
for 2.12. The compiler flagged two new inliner warnings that required suppression and
the semantics of `&` in `@nowarn` annotations changed, requiring a small change in
one of the warning suppressions.
I also removed the deprecation of a number of methods in `KafkaZkClient` as
they should not have been deprecated in the first place since `KafkaZkClient` is an
internal class and we still use these methods in the Controller and so on. This
became visible because the Scala compiler now respects Java's `@Deprecated`
annotation.
Finally, I included a few minor clean-ups (e.g. using `toBuffer` instead of `toList`) when fixing
the compilation warnings.
Noteworthy bug fixes in Scala 2.13.3:
* Fix 2.13-only bug in Java collection converters that caused some operations to perform an extra pass
* Fix 2.13.2 performance regression in Vector: restore special cases for small operands in appendedAll and prependedAll
* Increase laziness of #:: for LazyList
* Fixes related to annotation parsing of @Deprecated from Java sources in mixed compilation
Full release notes:
https://github.com/scala/scala/releases/tag/v2.13.3
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>
Most builds don't require test coverage output, so it's wasteful
to spend cycles tracking coverage information for each method
invoked.
I ran a quick test on a fast desktop machine; the absolute
difference will be larger on a slower machine. The tests were
executed after `./gradlew clean` and with a gradle daemon
that was started just before the test (and mildly warmed up
by running `./gradlew clean` again).
`./gradlew unitTest --continue --profile`:
* With coverage enabled: 6m32s
* With coverage disabled: 5m47s
I ran the same test twice and the results were within 2s of
each other, so reasonably consistent.
A roughly 11% reduction (6m32s to 5m47s) in the time taken to run
the unit tests is a reasonable gain with little downside, so I think
this is a good change.
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>
* Add documentation for using transformation predicates.
* Add `PredicateDoc` for generating predicate config docs, following the style of `TransformationDoc`.
* Fix the header depth mismatch.
* Avoid generating HTML ids based purely on the config name since these
are very likely to conflict (e.g. #name). Instead, allow passing a function
which can be used to generate an id from a config key.
The docs have been generated and tested locally.
Reviewer: Konstantine Karantasis <konstantine@confluent.io>
Gradle 6.5 includes a fix for https://github.com/gradle/gradle/pull/12866, which
affects the performance of Scala compilation.
I profiled the scalac build with async profiler and 54% of the time was on GC
even after the Gradle upgrade (it was more than 60% before), so I switched to
the throughput GC (GC latency is less important for batch builds) and it
was reduced to 38%.
I also centralized the jvm configuration in `build.gradle` and simplified it a bit
by removing the minHeapSize configuration from the test tasks.
On my desktop, the time to execute clean builds with no cached Gradle daemon
was reduced from 127 seconds to 97 seconds. With a cached daemon, it was
reduced from 120 seconds to 88 seconds. The performance regression when
we upgraded to Gradle 6.x was 27 seconds with a cached daemon
(https://github.com/apache/kafka/pull/7677#issuecomment-616271179), so it
should be fixed now.
Gradle 6.4 with no cached daemon:
```
BUILD SUCCESSFUL in 2m 7s
115 actionable tasks: 112 executed, 3 up-to-date
./gradlew clean compileScala compileJava compileTestScala compileTestJava 1.15s user 0.12s system 0% cpu 2:08.06 total
```
Gradle 6.4 with cached daemon:
```
BUILD SUCCESSFUL in 2m 0s
115 actionable tasks: 111 executed, 4 up-to-date
./gradlew clean compileScala compileJava compileTestScala compileTestJava 0.95s user 0.10s system 0% cpu 2:01.42 total
```
Gradle 6.5 with no cached daemon:
```
BUILD SUCCESSFUL in 1m 46s
115 actionable tasks: 111 executed, 4 up-to-date
./gradlew clean compileScala compileJava compileTestScala compileTestJava 1.27s user 0.12s system 1% cpu 1:47.71 total
```
Gradle 6.5 with cached daemon:
```
BUILD SUCCESSFUL in 1m 37s
115 actionable tasks: 111 executed, 4 up-to-date
./gradlew clean compileScala compileJava compileTestScala compileTestJava 1.02s user 0.10s system 1% cpu 1:38.31 total
```
This PR with no cached Gradle daemon:
```
BUILD SUCCESSFUL in 1m 37s
115 actionable tasks: 81 executed, 34 up-to-date
./gradlew clean compileScala compileJava compileTestScala compileTestJava 1.27s user 0.10s system 1% cpu 1:38.70 total
```
This PR with cached Gradle daemon:
```
BUILD SUCCESSFUL in 1m 28s
115 actionable tasks: 111 executed, 4 up-to-date
./gradlew clean compileScala compileJava compileTestScala compileTestJava 1.02s user 0.10s system 1% cpu 1:29.35 total
```
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>
The version of Zinc included with Gradle 6.4 includes a fix for the blocker
that was preventing us from passing `-release 8` to scalac.
Release notes for Gradle 6.4:
https://docs.gradle.org/6.4/release-notes.html
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>
* add a config to set the TaskAssignor
* set the default assignor to HighAvailabilityTaskAssignor
* fix broken tests (with some TODOs in the system tests)
Implements: KIP-441
Reviewers: Bruno Cadonna <bruno@confluent.io>, A. Sophie Blee-Goldman <sophie@confluent.io>
* Upgrade to Scala 2.13.2 which introduces the ability to suppress warnings.
* Upgrade to scala-collection-compat 2.1.6 as it introduces the
@nowarn annotation for Scala 2.12.
* While at it, also update scala-java8-compat to 0.9.1.
* Fix compiler warnings and add @nowarn for the unfixed ones.
Scala 2.13.2 highlights (besides @nowarn):
* Rewrite Vector (using "radix-balanced finger tree vectors"),
for performance. Small vectors are now more compactly
represented. Some operations are now drastically faster on
large vectors. A few operations may be a little slower.
* Matching strings makes switches in bytecode.
https://github.com/scala/scala/releases/tag/v2.13.2
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>
Also:
* Remove deprecated `=` in resolutionStrategy.
* Replace `AES/GCM/PKCS5Padding` with `AES/GCM/NoPadding`
in `PasswordEncoderTest`. The former is invalid and JDK 14 rejects it
(see https://bugs.openjdk.java.net/browse/JDK-8229043); a small snippet below illustrates the difference.
With these changes, the build works with Java 14 and Scala 2.12. The
same will apply to Scala 2.13 when Scala 2.13.2 is released (should
happen within 1-2 weeks).
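The standalone check below (not part of this change) illustrates the difference between the two transformations:
```java
import javax.crypto.Cipher;
import javax.crypto.NoSuchPaddingException;

// GCM is an AEAD mode that needs no padding scheme; recent JDKs reject the
// PKCS5Padding variant while the NoPadding variant works everywhere.
public class GcmTransformationCheck {
    public static void main(String[] args) throws Exception {
        Cipher valid = Cipher.getInstance("AES/GCM/NoPadding");
        System.out.println("OK: " + valid.getAlgorithm());

        try {
            Cipher.getInstance("AES/GCM/PKCS5Padding");
        } catch (NoSuchPaddingException | java.security.NoSuchAlgorithmException e) {
            // On JDK 14+ this transformation is rejected (JDK-8229043).
            System.out.println("Rejected: " + e);
        }
    }
}
```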
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>, Matthias J. Sax <matthias@confluent.io>
* Introduce `gradlewAll` script to replace `*All` tasks since the approach
used by the latter has not worked since Gradle 6.0 and it's unclear when,
if ever, it will work again (see https://github.com/gradle/gradle/issues/11301).
* Update release script and README given the above.
* Update zinc to 1.3.5.
* Update gradle-versions-plugin to 0.28.0.
The major improvements in Gradle 6.0 to 6.3 are:
- Improved incremental compilation for Scala
- Support for Java 14 (although some Gradle plugins
like spotBugs may need to be updated or disabled,
will do that separately)
- Improved scalac reporting; warnings are clearly
marked as such, which is very helpful.
Tested `gradlewAll` manually for the commands listed in the README
and release script. For `uploadArchive`, I tested it with a local Maven
repository.
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>
Once Scala 2.13.2 is officially released, I will submit a follow-up PR
that enables `-Xfatal-warnings` with the necessary warning
exclusions. Compiler warning exclusions were only introduced in 2.13.2,
which is why we have to wait for that release. I used a snapshot build to
test it in the meantime.
Changes:
* Remove Deprecated annotation from internal request classes
* Class.newInstance is deprecated in favor of
Class.getConstructor().newInstance
* Replace deprecated JavaConversions with CollectionConverters
* Remove unused kafka.cluster.Cluster
* Don't use Map and Set methods deprecated in 2.13:
- collection.Map +, ++, -, --, mapValues, filterKeys, retain
- collection.Set +, ++, -, --
* Add scala-collection-compat dependency to streams-scala and
update version to 2.1.4.
* Replace usages of deprecated Either.get and Either.right
* Replace usage of deprecated Integer(String) constructor
* `import scala.language.implicitConversions` is not needed in Scala 2.13
* Replace usage of deprecated `toIterator`, `Traversable`, `seq`,
`reverseMap`, `hasDefiniteSize`
* Replace usage of deprecated alterConfigs with incrementalAlterConfigs
where possible
* Fix implicit widening conversions from Long/Int to Double/Float
* Avoid implicit conversions to String
* Eliminate usage of deprecated procedure syntax
* Remove `println` in `LogValidatorTest` instead of fixing the compiler
warning since tests should not `println`.
* Eliminate implicit conversion from Array to Seq
* Remove unnecessary usage of 3 argument assertEquals
* Replace `toStream` with `iterator`
* Do not use deprecated SaslConfigs.DEFAULT_SASL_ENABLED_MECHANISMS
* Replace StringBuilder.newBuilder with new StringBuilder
* Rename AclBuffers to AclSeqs and remove usage of `filterKeys`
* More consistent usage of Set/Map in Controller classes: this also fixes
deprecated warnings with Scala 2.13
* Add spotBugs exclusion for inliner artifact in KafkaApis with Scala 2.12.
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>