Recently, we found a regression that could have been detected by static
analysis: during a refactoring, a local variable stopped being passed to
a method and was left unused. It was fixed in
[7a749b5](7a749b589f),
but it almost slipped into 4.0. Unused variables are typically detected
by IDEs, but that is insufficient to prevent these kinds of bugs. This
change enables unused local variable detection in checkstyle for Kafka.
A few notes on the usage:
- There are two situations in which people actually want to have a local
variable but not use it. First, there are `for (Type ignored :
collection)` loops which have to loop once per element of `collection`
but do not use `ignored` in the loop body. These are typically still
easier to read than a classical `for` loop. Second, some IDEs flag it
when the return value of a method such as `File.delete` is not used. In
this case, people sometimes store the result in an unused local variable
to make ignoring the return value explicit and to avoid the squiggly
lines.
- In Java 22, unused local variables can be replaced with a single
underscore `_`. This is supported by checkstyle. In pre-22 versions,
IntelliJ allows such variables to be named `ignored` to suppress the
unused local variable warning. This pattern is often (but not
consistently) used in the Kafka codebase. It is, however, not
supported by checkstyle.
Since we cannot switch to Java 22 yet, and we want automated detection
using checkstyle, we have to resort to annotating the unused local
variables with `@SuppressWarnings("UnusedLocalVariable")`. We have to
apply this in 11 cases across the Kafka codebase. While not pretty, I'd
argue it's worth it to prevent bugs like the one fixed in
[7a749b5](7a749b589f).
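For illustration, a minimal sketch of the two patterns and the suppression; the class and method names are hypothetical, only the patterns themselves come from the notes above:

```java
import java.io.File;
import java.util.List;

// Hypothetical examples of the two legitimate unused-variable situations.
public class IgnoredVariableExamples {

    // Case 1: loop once per element without reading the loop variable.
    static int countElements(List<String> collection) {
        int count = 0;
        for (@SuppressWarnings("UnusedLocalVariable") String ignored : collection) {
            count++; // the body never reads 'ignored'
        }
        return count;
    }

    // Case 2: capture a return value purely to make ignoring it explicit.
    static void deleteQuietly(File file) {
        @SuppressWarnings("UnusedLocalVariable")
        boolean ignored = file.delete(); // result deliberately unused
    }
}
```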
Reviewers: Andrew Schofield <aschofield@confluent.io>, David Arthur
<mumrah@gmail.com>, Matthias J. Sax <matthias@confluent.io>, Bruno
Cadonna <cadonna@apache.org>, Kirk True <ktrue@confluent.io>
This PR addresses minor grammar and clarity issues in the upgrade.html doc.
Reviewers: mingdaoy <mingdaoy@gmail.com>, Colin P. McCabe <cmccabe@apache.org>, TengYao Chi <kitingiao@gmail.com>, Ken Huang <s7133700@gmail.com>, Jhen-Yung Hsu <jhenyunghsu@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>
* Add `DynamicThreadPool.java` to the server module.
* Remove the old `DynamicThreadPool` object from `DynamicBrokerConfig.scala`.
Reviewers: TengYao Chi <kitingiao@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>
Migrate the following data carrier classes to records to eliminate
hand-written constructors, `equals`, `hashCode`, and `toString`.
* `Entry` in `LogHistory`
* `SnapshotPath`
Additionally, migrate the following classes as discussed:
* `OffsetAndEpoch`
* `LogFetchInfo`
* `LogAppendInfo`
In Java, accessing a record component requires parentheses, since the
accessor is a method.
In Scala, parentheses are not needed because Scala allows omitting them
when calling parameterless methods; hence, there is no need to change
the Scala code.
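As a minimal sketch of the accessor difference (the record below is illustrative, not the actual Kafka class):

```java
// Illustrative record mirroring the migration pattern.
public record OffsetAndEpochExample(long offset, int epoch) { }

class Caller {
    static long offsetOf(OffsetAndEpochExample oae) {
        // Java: the generated accessor is a method, so parentheses are required.
        return oae.offset();
        // Scala call sites can keep writing `oae.offset`, since Scala permits
        // omitting parentheses on parameterless method calls.
    }
}
```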
Reviewers: Ken Huang <s7133700@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>
Running `bin/kafka-features.sh upgrade --release-version 4.0` results in
the following error. This PR fixes the issue by adding the required
argument.
`kafka-features: error: one of the arguments --bootstrap-server
--bootstrap-controller is required.`
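With the fix, the recommended invocation supplies one of the required arguments, for example (host and port are placeholders): `bin/kafka-features.sh --bootstrap-server localhost:9092 upgrade --release-version 4.0`.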
Reviewers: Colin P. McCabe <cmccabe@apache.org>
The Kafka controllers need to set kraft.version in their
ApiVersionsResponse messages according to the current kraft.version
reported by the Raft layer. Currently, however, they always set it to 0.
Also remove `FeatureControlManager.latestFinalizedFeatures`. It is not
needed and it does a lot of copying.
Reviewers: Jun Rao <junrao@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>
- Adding a space, an article, and punctuation to the Producer config doc
strings for consistency and readability.
Reviewers: TengYao Chi <kitingiao@gmail.com>, Ken Huang <s7133700@gmail.com>, Justine Olshan <jolshan@confluent.io>
`electionWasClean` should also consider whether the election is done
through ELR. Otherwise, the `uncleanLeaderElection` metric will wrongly
count ELR elections.
https://issues.apache.org/jira/browse/KAFKA-18940
Reviewers: Jun Rao <junrao@gmail.com>
If an invalid option is specified, the `kafka-client-metrics.sh` and
`kafka-groups.sh` utilities print an exception trace; one then has to
explicitly remove the invalid argument and append `--help` to fetch the
correct options. The PR fixes the error output to include the `cause`
and `usage`. This behaviour matches `kafka-console-consumer.sh`
and `kafka-console-share-consumer.sh`.
Reviewers: Andrew Schofield <aschofield@confluent.io>
Adds `describeStreamsGroup` to the Admin API.
This exposes the result of the `DESCRIBE_STREAMS_GROUP` RPC in the Admin
API.
Reviewers: Bill Bejeck <bill@confluent.io>
Add explicit not-null checks in `Snapshot` so we get a better error message in the event that a `Snapshot` object is accessed after `erase` has been called.
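A minimal sketch of the pattern, assuming a simplified `Snapshot` with a single erasable field (not the actual Kafka class):

```java
import java.util.Objects;

class Snapshot {
    private Object data = new Object();

    void erase() {
        data = null; // release the state; later access should fail loudly
    }

    Object data() {
        // Fail here with a descriptive message rather than with a bare
        // NullPointerException somewhere further down the call stack.
        return Objects.requireNonNull(data, "Snapshot accessed after erase() was called");
    }
}
```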
Reviewers: David Arthur <mumrah@gmail.com>
Add the verify_license.py script to our build to detect missing licenses.
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>, Ken Huang <s7133700@gmail.com>, David Arthur <mumrah@gmail.com>
When producers send future timestamps, time-based retention may be
blocked from removing log segments for an extended period of time. Log
cleaning should warn in the logs when this scenario occurs.
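A hedged sketch of such a check; the tolerance and names are assumptions for illustration, not Kafka's actual code:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class SegmentRetentionCheck {
    private static final Logger LOG = LoggerFactory.getLogger(SegmentRetentionCheck.class);
    private static final long FUTURE_TOLERANCE_MS = 60 * 60 * 1000L; // assumed: 1 hour

    static void warnIfTimestampInFuture(String segmentName, long segmentMaxTimestampMs, long nowMs) {
        if (segmentMaxTimestampMs > nowMs + FUTURE_TOLERANCE_MS) {
            LOG.warn("Segment {} has a max timestamp {} ms in the future; time-based "
                    + "retention cannot delete it until that timestamp is reached.",
                    segmentName, segmentMaxTimestampMs - nowMs);
        }
    }
}
```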
Reviewers: Viktor Somogyi-Vass <viktorsomogyi@gmail.com>
A cached `TransactionIndex` may get closed if interrupted, causing subsequent calls to always fail with `ClosedChannelException` and forcing the process to be restarted. To avoid this issue, a new method is exposed by `TransactionIndex` to validate the state of the channel, and the index is reopened if closed.
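A sketch of the reopen-if-closed idea under simplified assumptions; the names and structure do not match the actual `TransactionIndex`:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

class ReopeningIndex {
    private final Path file;
    private FileChannel channel;

    ReopeningIndex(Path file) throws IOException {
        this.file = file;
        this.channel = FileChannel.open(file, StandardOpenOption.READ);
    }

    // Validate the channel before every use and reopen it if an interrupt closed it,
    // instead of letting all subsequent calls fail with ClosedChannelException.
    private FileChannel openChannel() throws IOException {
        if (!channel.isOpen()) {
            channel = FileChannel.open(file, StandardOpenOption.READ);
        }
        return channel;
    }

    long sizeInBytes() throws IOException {
        return openChannel().size();
    }
}
```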
Reviewers: Luke Chen <showuon@gmail.com>, Kamal Chandraprakash <kamal.chandraprakash@gmail.com>, Nikhil Ramakrishnan <ramakrishnan.nikhil@gmail.com>
This patch is a first step towards resolving KAFKA-18046. Apache Kafka
4.0 ships with log4j2, so the issue raised in the ticket, high CPU usage
on the fetch path due to `LoggerFactory.getLogger()` being called while
handling every fetch response, needs to be addressed. Hence, I propose
to fix it by caching the `Logger` used by the `CompletedFetch` class.
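The pattern, sketched on a stand-in class (not the actual `CompletedFetch`):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class CompletedFetchExample {
    // Resolved once when the class is loaded, instead of once per fetch response.
    private static final Logger LOG = LoggerFactory.getLogger(CompletedFetchExample.class);

    void handleFetchResponse() {
        // Before the fix, the equivalent of LoggerFactory.getLogger(...) ran on
        // every call, which is comparatively expensive on this hot path.
        LOG.debug("Handled fetch response");
    }
}
```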
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>, Ismael Juma <ismael@juma.me.uk>
Add information about setting dynamic log levels and dynamic
configurations on KRaft controllers.
Reviewers: Ismael Juma <ismael@juma.me.uk>, Chia-Ping Tsai <chia7712@gmail.com>
Migrate AdminClientRebootstrapTest to the new test infra and remove the
old Scala test.
Reviewers: TengYao Chi <kitingiao@gmail.com>, David Arthur <mumrah@gmail.com>
Logging on a per-batch basis is very chatty and should only be done at
TRACE level to avoid spamming DEBUG logs.
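For illustration, the guarded-TRACE pattern on a stand-in class:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: per-batch messages demoted from DEBUG to TRACE.
class BatchLoggingExample {
    private static final Logger LOG = LoggerFactory.getLogger(BatchLoggingExample.class);

    void onBatch(long baseOffset, int recordCount) {
        // Per-batch detail stays out of DEBUG output; TRACE is opt-in only.
        if (LOG.isTraceEnabled()) {
            LOG.trace("Processed batch at offset {} with {} records", baseOffset, recordCount);
        }
    }
}
```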
Reviewers: Justine Olshan <jolshan@confluent.io>, Lucas Brutschy <lbrutschy@confluent.io>
In the documentation today, we have the following sentence:
By default, if no ResourcePatterns match a specific Resource R, then R
has no associated ACLs, and therefore no one other than super users is
allowed to access R. If you want to change that behavior, you can
include the following in server.properties.
Although it is correct, I have observed users being confused by it. I
think it could be made clearer that the default is deny and that this
property is a way to change that default.
Change: replace the above with the following:
Default Behavior Without ACLs:
If a resource (R) does not have any ACLs defined—that is, if no ACL
matches the resource—Kafka will restrict access to that resource. In
this situation, only super users are allowed to access it.
Changing the Default Behavior:
If you prefer that resources without any ACLs be accessible by all users
(instead of just super users), you can change the default behavior. To
do this, add the following line to your server.properties file:
allow.everyone.if.no.acl.found=true
With this setting enabled, if a resource does not have any ACLs defined,
Kafka will allow access to everyone. If a resource has one or more ACLs
defined, those ACL rules will be enforced as usual, regardless of the
setting.
Reviewers: TengYao Chi <kitingiao@gmail.com>, Andrew Schofield <aschofield@confluent.io>
User testing of the `KafkaShareConsumer` interface has revealed some
areas which confuse people. One of these is the way that it decides
whether you want to use implicit or explicit acknowledgement of records
by observing which calls the application issues. We are taking the
opportunity to refine the interface before it is finalised.
This PR introduces an experimental configuration called
`internal.share.acknowledgement.mode` which can be used to make the
application declare which kind of acknowledgement it wishes to use. We
plan to try out the configuration, assess whether it has helped, and
then create a proper consumer configuration that makes this area better.
That would require a lot of change in the tests, which explains why this
initial PR only has a small number of tests.
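A sketch of how an application might set it; the config name comes from this PR, while the value `"explicit"` is an assumption for illustration, since the accepted values are not listed here:

```java
import java.util.Properties;

class ShareConsumerConfigExample {
    static Properties shareConsumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "my-share-group");          // placeholder
        // Declare the acknowledgement mode up front instead of having the consumer
        // infer it from which acknowledgement calls the application makes.
        props.put("internal.share.acknowledgement.mode", "explicit"); // assumed value
        return props;
    }
}
```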
Reviewers: David Arthur <mumrah@gmail.com>
Increase the max number of forks for each JUnit task to 4. Also increase
the amount of memory given to forked workers to 3g.
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
Implement Admin API extensions beyond list/describe group (delete group,
offset-related APIs).
* adds methods for describing and manipulating offsets, as described in
KIP-1071
* adds corresponding unit tests
These do exactly the same thing as the corresponding consumer group
counterparts.
Reviewers: Lucas Brutschy <lbrutschy@confluent.io>
This PR adds a new flag to `VerifiableShareConsumer.java` called
`command.config`, which accepts a properties file for admin client
related configs.
Co-authored-by: Andrew Schofield <aschofield@confluent.io>
Reviewers: Apoorv Mittal <apoorvmittal10@gmail.com>, Andrew Schofield <aschofield@confluent.io>
Using `bin/kafka-features.sh upgrade --metadata 4.0` to finalize the
upgrade is deprecated, and furthermore it does not bump all the other
feature flags. Hence, we should recommend using `bin/kafka-features.sh
upgrade --release-version 4.0` in the documentation. This change was
introduced in [KIP-1022: Formatting and Updating
Features](https://cwiki.apache.org/confluence/display/KAFKA/KIP-1022%3A+Formatting+and+Updating+Features).
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>, Ismael Juma <ismael@juma.me.uk>
Per ASF Infra's instructions, we need to reapply this .asf.yaml change.
We'll do that by setting it to "true", then back to "false".
Reviewers: Andrew Schofield <aschofield@confluent.io>
Add a single job that runs after the whole CI pipeline and make it a
required check before merging a PR. This will prevent us from merging
PRs which have not run through the CI.
Reviewers: Justine Olshan <jolshan@confluent.io>
In this PR, we perform this refactor because the class is not needed:
there is no need to refer to the child classes through a common
reference, and the duplicated code is minimal.
Reviewers: Andrew Schofield <aschofield@confluent.io>
In KRaft mode, custom `KafkaPrincipalBuilder` instances must implement `KafkaPrincipalSerde`. This PR updates all related documentation to highlight this requirement.
Reviewers: Ken Huang <s7133700@gmail.com>, David Jacot <djacot@confluent.io>, TengYao Chi <kitingiao@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>