Bump commons-beanutils for CVE-2025-48734. Since `commons-validator`
hasn't had a new release with a newer `commons-beanutils` version, we bump it manually in Kafka.
Reviewers: Mickael Maison <mickael.maison@gmail.com>
Update opentelemetry-proto from 1.0.0-alpha to 1.3.2-alpha.
OpenTelemetry-Proto versions from v1.0.0 up to and including v1.3.2
introduce no breaking changes.
[release
note](https://github.com/open-telemetry/opentelemetry-proto/releases)
Starting with v1.4.0, however, protobuf-java was updated to version
4.28.3. To mitigate the risk of protobuf compatibility issues, upgrading
only to v1.3.2 keeps the existing protobuf version unchanged for now.
Reviewers: Apoorv Mittal <apoorvmittal10@gmail.com>, TengYao Chi
<kitingiao@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>
Log segment closure results in right-sizing the segment on disk along
with the associated index files.
This is especially important for time indexes, where a failure to
right-size may eventually cause log roll failures, leading to
under-replication and log cleaner failures.
This change uses `Utils.closeAll` which propagates exceptions, resulting
in an "unclean" shutdown. That would then cause the broker to attempt to
recover the log segment and the index on next startup, thereby avoiding
the failures described above.
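For illustration, a minimal sketch of the pattern, assuming hypothetical `Closeable` fields rather than the real LogSegment members; `Utils.closeAll` propagates the first exception instead of swallowing it:
```java
import java.io.Closeable;
import java.io.IOException;

import org.apache.kafka.common.utils.Utils;

// Minimal sketch (field names are illustrative, not the actual LogSegment code).
class SegmentResources implements Closeable {
    private final Closeable log;          // segment data file
    private final Closeable offsetIndex;  // offset index file
    private final Closeable timeIndex;    // time index file

    SegmentResources(Closeable log, Closeable offsetIndex, Closeable timeIndex) {
        this.log = log;
        this.offsetIndex = offsetIndex;
        this.timeIndex = timeIndex;
    }

    @Override
    public void close() throws IOException {
        // Any failure here bubbles up, so the shutdown is treated as unclean and
        // the segment and indexes are recovered on the next startup.
        Utils.closeAll(log, offsetIndex, timeIndex);
    }
}
```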
Reviewers: Omnia Ibrahim <o.g.h.ibrahim@gmail.com>, Jun Rao
<junrao@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>
For ShareFetch requests, the fetch happens through the DelayedShareFetch
operation. Operations that have already completed still hold references
to the data sent in the response. Since an operation is watched under
multiple keys, i.e. DelayedShareFetchGroupKey and
DelayedShareFetchPartitionKey, even when it completes via one watched
key, a reference to it remains under the other watched key. This means
the memory can only be freed once the purge operation is triggered by
DelayedOperationPurgatory, which removes the already-completed operation
from the remaining keys.
The purge operation is governed by the config
`ShareGroupConfig#SHARE_FETCH_PURGATORY_PURGE_INTERVAL_REQUESTS_CONFIG`;
if this value is not smaller than the number of share fetch requests
that can consume the broker's entire memory, the broker can go out of
memory. This could also be mitigated by lowering the fetch max bytes per
request, but that value is client-controlled, so it cannot be relied
upon to protect the broker.
This PR triggers completion on both watched keys, so the
DelayedShareFetch operation is removed from both keys, freeing the
broker memory as soon as the share fetch response is sent.
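Roughly, the idea can be sketched as below; the purgatory and key types are the ones named above, while the method wrapper and key constructor arguments are illustrative rather than the actual code:
```java
// Hedged sketch: complete the finished operation under both watch keys so the
// purgatory drops its references immediately rather than waiting for the
// periodic purge. Key constructor arguments are illustrative.
void completeOnBothKeys(DelayedOperationPurgatory<DelayedShareFetch> purgatory,
                        String groupId, Uuid topicId, int partitionIndex) {
    purgatory.checkAndComplete(new DelayedShareFetchGroupKey(groupId, topicId, partitionIndex));
    purgatory.checkAndComplete(new DelayedShareFetchPartitionKey(topicId, partitionIndex));
}
```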
#### Testing
Tested with LocalTieredStorage, where the broker goes OOM after reading
some 8040 messages before the fix, with the default configurations
mentioned in the doc
[here](https://kafka.apache.org/documentation/#tiered_storage_config_ex).
After the fix, consumption continues without any issue and the memory is
released immediately.
Reviewers: Jun Rao <junrao@gmail.com>, Andrew Schofield
<aschofield@confluent.io>
Previously, the confirmation prompt for updating the PR body treated any
input other than 'n' as approval, which could lead to unintended
actions.
With this change, the update only proceeds if the user enters 'y',
'Y', or presses Enter. For any other input, the operation is canceled
and an "Abort." message is printed. This makes the prompt behavior
clearer and more predictable.
Reviewers: TengYao Chi <frankvicky@apache.org>, PoAn Yang
<payang@apache.org>, Kuan-Po Tseng <brandboat@gmail.com>, Ken Huang
<s7133700@gmail.com>, Lan Ding <isDing_L@163.com>
We should be mindful of our users and let them know early if they are
using an unsupported feature in 4.1.
Unsupported features:
- Regular expressions
- Warm-up replicas (high availability assignor)
- Static membership
- Standby replicas enabled through local config
- Named topologies (already checked)
- Non-default kafka-client supplier
Reviewers: Bill Bejeck <bbejeck@apache.org>, Chia-Ping Tsai
<chia7712@gmail.com>
The original `props.setProperty(TopicConfig.SEGMENT_MS_CONFIG,
config.logSegmentMillis.toString)` in the `KafkaMetadataLog` constructor
was accidentally removed in #19371. Add a test to ensure this property
is properly assigned.
Reviewers: Ken Huang <s7133700@gmail.com>, TengYao Chi
<kitingiao@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>
jira: https://issues.apache.org/jira/browse/KAFKA-19382
Upgrade JUnit from 5.10.2 to
[5.13.1](https://github.com/junit-team/junit5/releases).
A new behavior was introduced in JUnit 5.12
(89a46dfa10)
that disallows providers such as `ClusterTestExtensions` from generating
empty invocation contexts. However, `ClusterTestExtensions` is invoked
via the JUnit extension mechanism, so it can legitimately produce empty
contexts for some tests.
```
> Configure project :
Starting build with version 4.1.0-SNAPSHOT (commit id c4a769bc) using
Gradle 8.14.1, Java 17 and Scala 2.13.16
Build properties: ignoreFailures=false, maxParallelForks=10,
maxScalacThreads=8, maxTestRetries=0
> Task :core:test kafka.api.ConsumerBounceTest.initializationError
failed, log available in
/Users/lansg/Project/OpenSource/kafka/kafka-fork/kafka/core/build/reports/testOutput/kafka.api.ConsumerBounceTest.initializationError.test.stdout
Gradle Test Run :core:test > Gradle Test Executor 5 > ConsumerBounceTest
> testCloseDuringRebalance(String) > initializationError FAILED
org.junit.platform.commons.PreconditionViolationException: Provider
[ClusterTestExtensions] did not provide any invocation contexts, but was
expected to do so. You may override
mayReturnZeroTestTemplateInvocationContexts() to allow this. at
java.base@17.0.13/java.util.ArrayList.forEach(ArrayList.java:1511) at
java.base@17.0.13/java.util.ArrayList.forEach(ArrayList.java:1511)
kafka.api.ConsumerBounceTest.initializationError failed, log available
in
/Users/lansg/Project/OpenSource/kafka/kafka-fork/kafka/core/build/reports/testOutput/kafka.api.ConsumerBounceTest.initializationError.test.stdout
```
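For reference, the escape hatch the exception message points at: a `TestTemplateInvocationContextProvider` (such as `ClusterTestExtensions`) can override the hook added in JUnit 5.12 to declare that zero invocation contexts are acceptable. A minimal sketch, not the actual Kafka extension:
```java
import java.util.stream.Stream;

import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.TestTemplateInvocationContext;
import org.junit.jupiter.api.extension.TestTemplateInvocationContextProvider;

// Minimal sketch of a provider that may legitimately yield no invocation contexts.
class EmptyTolerantProvider implements TestTemplateInvocationContextProvider {
    @Override
    public boolean supportsTestTemplate(ExtensionContext context) {
        return true;
    }

    @Override
    public Stream<TestTemplateInvocationContext> provideTestTemplateInvocationContexts(ExtensionContext context) {
        return Stream.empty(); // e.g. no matching cluster configuration for this test
    }

    @Override
    public boolean mayReturnZeroTestTemplateInvocationContexts(ExtensionContext context) {
        return true; // avoids the PreconditionViolationException shown above
    }
}
```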
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>, TengYao Chi
<frankvicky@apache.org>, Ken Huang <s7133700@gmail.com>
Added docs on enhancements to transactional producer error handling:
* Added standardized exception categories (`RetriableException`,
`RefreshRetriableException`, `AbortableException`,
`ApplicationRecoverableException`, `InvalidConfigurationException`,
`KafkaException`) to ensure clearer error handling patterns.
* Included a link to example template code for handling transaction
exceptions: [Transaction Client
Demo](https://github.com/apache/kafka/blob/trunk/examples/src/main/java/kafka/examples/TransactionalClientDemo.java).
Reviewers: Justine Olshan <jolshan@confluent.io>
To allow intercepting the internal subscribe call to the async consumer,
we need to extend the ConsumerWrapper interface accordingly, instead of
returning the wrapped async consumer back to the KS runtime.
Reviewers: Lucas Brutschy <lbrutschy@confluent.io>
These dependencies have been updated across both files:
- caffeine: from 3.1.8 to 3.2.0
- javassist: from 3.29.2-GA to 3.30.2-GA
- Jetty-related: all Jetty components have been updated from 12.0.15 to 12.0.22, including: jetty-alpn-client, jetty-client, jetty-ee10-servlet, jetty-ee10-servlets, jetty-http, jetty-io, jetty-security, jetty-server, jetty-session, jetty-util
- jose4j: from 0.9.4 to 0.9.6
- Jersey-related: all Jersey components have been updated from 3.1.9 to 3.1.10, including: jersey-client, jersey-common, jersey-container-servlet, jersey-container-servlet-core, jersey-hk2, jersey-server
- classgraph: from 4.8.173 to 4.8.179
- jline: from 3.25.1 to 3.30.4
- pcollections: from 4.0.1 to 4.0.2
- re2j: from 1.7 to 1.8
- snappy-java: from 1.1.10.5 to 1.1.10.7
New dependency (LICENSE-binary only):
- jspecify-1.0.0 has been added to LICENSE-binary.
gradle/dependencies.gradle specific updates (only reflected in the gradle/dependencies.gradle file):
- bcpkix: from 1.78.1 to 1.80
- bndlib: from 7.0.0 to 7.1.0
- jacoco: from 0.8.10 to 0.8.13
- hamcrest: from 2.2 to 3.0
- jqwik: from 1.8.3 to 1.9.2
Reviewers: Ken Huang <s7133700@gmail.com>, Chia-Ping Tsai
<chia7712@gmail.com>
We can use `pollUntilTrue` instead of `waitForCondition`, so do a
little refactoring to reduce duplicated code.
Reviewers: TengYao Chi <frankvicky@apache.org>, Lan Ding
<isDing_L@163.com>, TaiJuWu <tjwu1217@gmail.com>
## Summary
- MetadataShell may delete the lock file unintentionally when it exits or
fails to acquire the lock. If there's a running server, this causes unexpected
results as below:
 * MetadataShell succeeds unexpectedly on the 2nd run
 * Even worse, LogManager/RaftManager's locks also no longer work against
concurrent Kafka process startup
Reviewers: TengYao Chi <frankvicky@apache.org>
See Discussion:
https://github.com/apache/kafka/pull/19371#discussion_r2109549343
Make the following changes:
- Update the internal config name with the metadata prefix
- Add a warning message for setting
`INTERNAL_METADATA_LOG_SEGMENT_BYTES_CONFIG`
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
Rewrite PlaintextConsumerSubscriptionTest in Java using the new test
infra and move it to the client-integration-tests module.
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
- Replace the deprecated `becomeLeaderOrFollower` with the
metadata-based `applyDelta` method.
- Add overloaded `topicsCreateDelta` to support custom topic name and
topicId.
Reviewers: Ken Huang <s7133700@gmail.com>, TengYao Chi
<kitingiao@gmail.com>, Nick Guo <lansg0504@gmail.com>, Chia-Ping Tsai
<chia7712@gmail.com>
Description:
* Replace RPC with the KRaft mechanism to test activeProducerState in
ReplicaManagerTest
Reviewers: TaiJuWu <tjwu1217@gmail.com>, Chia-Ping Tsai
<chia7712@gmail.com>
Move AddPartitionsToTxnManager to server module and convert to Java.
This patch moves AddPartitionsToTxnManager from the core module to the
server module, with its package updated from `kafka.server` to
`org.apache.kafka.server.transaction`. Additionally, several
configurations used by AddPartitionsToTxnManager are moved from
KafkaConfig.scala to AbstractKafkaConfig.java:
- brokerId
- requestTimeoutMs
- controllerListenerNames
- interBrokerListenerName
- interBrokerSecurityProtocol
- effectiveListenerSecurityProtocolMap
The next PR will move AddPartitionsToTxnManagerTest.scala to Java.
Reviewers: Justine Olshan <jolshan@confluent.io>, Chia-Ping Tsai
<chia7712@gmail.com>
Remove the event IDs from ApplicationEvent and BackgroundEvent, as they
serve no functional purpose beyond uniquely identifying events in the
logs.
Reviewers: Andrew Schofield <aschofield@confluent.io>
While reading through the code, I found the method name to be somewhat
ambiguous and not fully descriptive of its purpose.
So I renamed the method to make its purpose clearer and more
self-explanatory. If there was another reason for the original naming,
I’d be happy to hear about it.
Reviewers: Lianet Magrans <lmagrans@confluent.io>
This change handles rejecting non-zero sequences when there is an empty
producer ID state with TV2. The scenario is covered by the retriable
OutOfOrderSequence error.
For Transactions V2 with empty state:
✅ Only sequence 0 is allowed for new producers or after state cleanup (new validation added)
❌ Any non-zero sequence is rejected with a specific error message
❌ Epoch bumps still require sequence 0 (existing validation remains)
For Transactions V1 with empty state:
✅ ANY sequence number is allowed (0, 5, 100, etc.)
❌ Epoch bumps still require sequence 0 (existing validation)
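Sketched as a hypothetical helper (the names do not correspond to the actual log/producer-state code), the empty-state rule above is roughly:
```java
import org.apache.kafka.common.errors.OutOfOrderSequenceException;

// Hypothetical helper illustrating the empty-state rule described above.
final class FirstSequenceCheck {
    static void validate(boolean transactionV2Enabled, boolean producerStateIsEmpty, int firstSequence) {
        if (transactionV2Enabled && producerStateIsEmpty && firstSequence != 0) {
            // Retriable: the producer can re-resolve its state and retry from sequence 0.
            throw new OutOfOrderSequenceException("Invalid sequence " + firstSequence
                + " for a producer with no state; expected sequence 0");
        }
        // With transactions V1 and empty state, any first sequence is accepted;
        // epoch-bump validation (sequence must be 0) is handled elsewhere in both versions.
    }
}
```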
Reviewers: Justine Olshan <jolshan@confluent.io>, Artem Livshits
<alivshits@confluent.io>
This pull request introduces a new example application,
`TransactionalClientDemo`, which demonstrates how to use Kafka's
transactional capabilities for exactly-once processing semantics. The
application consumes messages from an input topic, processes them to
generate word count statistics, and produces the results to an output
topic. It also includes robust error handling and transaction
management.
### Key Changes:
* Added `TransactionalClientDemo` class to demonstrate a transactional
Kafka client application. It handles consuming messages, processing
them, and producing results to an output topic while ensuring
exactly-once processing semantics.
* Implements transactional error handling based on KIP-1050 guidelines,
including handling `TransactionAbortableException`,
`InvalidConfigurationException`, `ApplicationRecoverableException`, and
generic `KafkaException`.
Ref :
[KIP-1050](https://cwiki.apache.org/confluence/display/KAFKA/KIP-1050%3A+Consistent+error+handling+for+Transactions)
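A condensed sketch of the consume-process-produce loop the demo follows, assuming a transactional producer (with `initTransactions()` already called) and a consumer with auto-commit disabled; the exception types are the KIP-1050 categories listed above (their package is assumed to be `org.apache.kafka.common.errors`), and the handling policy shown is illustrative rather than the demo's exact code:
```java
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.ApplicationRecoverableException;
import org.apache.kafka.common.errors.InvalidConfigurationException;
import org.apache.kafka.common.errors.TransactionAbortableException;

class TransactionalLoopSketch {
    void pollProcessCommit(KafkaConsumer<String, String> consumer, KafkaProducer<String, String> producer) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(200));
        if (records.isEmpty()) return;
        try {
            producer.beginTransaction();
            Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
            for (ConsumerRecord<String, String> record : records) {
                // "Processing" here is just forwarding the value to the output topic.
                producer.send(new ProducerRecord<>("output-topic", record.key(), record.value()));
                offsets.put(new TopicPartition(record.topic(), record.partition()),
                        new OffsetAndMetadata(record.offset() + 1));
            }
            // The consumed offsets commit atomically with the produced records.
            producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
            producer.commitTransaction();
        } catch (TransactionAbortableException e) {
            producer.abortTransaction();   // abort, then the same producer can retry the batch
        } catch (InvalidConfigurationException | ApplicationRecoverableException e) {
            throw e;                       // fix the configuration or restart the application
        } catch (KafkaException e) {
            producer.abortTransaction();   // generic failure: abort and apply the application's policy
            throw e;
        }
    }
}
```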
Reviewers: Justine Olshan <jolshan@confluent.io>, Artem Livshits
<alivshits@confluent.io>
* Add `group.share.assignors` config to `GroupCoordinatorConfig`.
* Send `rackId` in share group heartbeat request if it's not null.
* Add integration test `testShareConsumerWithRackAwareAssignor`.
Reviewers: Lan Ding <53332773+DL1231@users.noreply.github.com>, Andrew
Schofield <aschofield@confluent.io>
---------
Signed-off-by: PoAn Yang <payang@apache.org>
The mapKey optimisation can be used in some KIP-932 RPC schemas to
improve the efficiency of some key-based accesses.
* AlterShareGroupOffsetsResponse
* ShareFetchRequest
* ShareFetchResponse
* ShareAcknowledgeRequest
* ShareAcknowledgeResponse
Reviewers: Andrew Schofield <aschofield@confluent.io>
---------
Signed-off-by: PoAn Yang <payang@apache.org>
Updated the code to start the State Updater Thread only after the Stream
Thread is started.
Changes done:
1. Moved the starting of the StateUpdater thread to a new init method in
the TaskManager.
2. Called the init of TaskManager in the run method of the StreamThread.
3. Updated the test cases in the StreamThreadTest to mimic the
aforementioned behaviour.
Reviewers: Bruno Cadonna <cadonna@apache.org>
This PR simplifies two ConcurrentHashMap fields by removing their Atomic
wrappers:
- Change `brokerContactTimesMs` from `ConcurrentHashMap<Integer,
AtomicLong>` to `ConcurrentHashMap<Integer, Long>`.
- Change `brokerRegistrationStates` from `ConcurrentHashMap<Integer,
AtomicInteger>` to `ConcurrentHashMap<Integer, Integer>`.
This removes mutable holders without affecting thread safety (see
discussion in #19828).
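As a small illustration of why the wrappers are unnecessary (names below are illustrative, not the actual broker fields), `ConcurrentHashMap`'s atomic per-key operations such as `merge` give the same thread safety with plain boxed values:
```java
import java.util.concurrent.ConcurrentHashMap;

class BrokerContactTracker {
    private final ConcurrentHashMap<Integer, Long> brokerContactTimesMs = new ConcurrentHashMap<>();

    void recordContact(int brokerId, long nowMs) {
        // Atomically keep the newest timestamp per broker; no AtomicLong holder needed.
        brokerContactTimesMs.merge(brokerId, nowMs, Long::max);
    }

    Long lastContactMs(int brokerId) {
        return brokerContactTimesMs.get(brokerId); // may be null if the broker was never seen
    }
}
```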
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>, TengYao Chi
<frankvicky@apache.org>, Kevin Wu <kevin.wu2412@gmail.com>, Ken Huang
<s7133700@gmail.com>
As part of readying share groups for production, we want to ensure that
the performance of the server-side assignor is optimal. In common with
consumer group assignors, a JMH benchmark is used for the analysis.
Reviewers: Apoorv Mittal <apoorvmittal10@gmail.com>
Now that Kafka brokers support Java 17, this PR makes some changes in the
core module. The changes in this PR are limited to some Scala files
in the core module's tests. The changes mostly include:
- Collections.emptyList(), Collections.singletonList() and
Arrays.asList() are replaced with List.of()
- Collections.emptyMap() and Collections.singletonMap() are replaced
with Map.of()
- Collections.singleton() is replaced with Set.of()
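For example, a typical before/after of these replacements (illustrative only); note that the `of()` factories return fully immutable collections and reject null elements, which is why the change targets call sites where that is safe:
```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;

class FactoryMethodExamples {
    // Before: pre-Java-9 factory helpers.
    List<String> oldList = Arrays.asList("a", "b");
    Map<String, Integer> oldMap = Collections.singletonMap("a", 1);
    Set<String> oldSet = Collections.singleton("a");

    // After: Java 9+ immutable factories (they throw NullPointerException on null elements).
    List<String> newList = List.of("a", "b");
    Map<String, Integer> newMap = Map.of("a", 1);
    Set<String> newSet = Set.of("a");
}
```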
To be clear, the directories targeted in this PR under the unit.kafka
module are:
- log
- network
- security
- tools
- utils
Reviewers: TengYao Chi <frankvicky@apache.org>
This PR does the following:
1. Rewrite to the new test infra
2. Rewrite in Java
3. Move to clients-integration-tests
Reviewers: Ken Huang <s7133700@gmail.com>, Kuan-Po Tseng
<brandboat@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>
- Moving off deprecated methods
- Fixing argument order for assertEquals(...)
- A few other minor cleanups
Reviewers: PoAn Yang <payang@apache.org>, Lianet Magrans
<lmagrans@confluent.io>, Ken Huang <s7133700@gmail.com>
This PR is part of KIP-1034.
It brings support for the source raw key and the source raw value in the
`ErrorHandlerContext`, required by the routing to the DLQ implemented
by https://github.com/apache/kafka/pull/17942.
Reviewers: Bruno Cadonna <cadonna@apache.org>
Co-authored-by: Damien Gasparina <d.gasparina@gmail.com>
Fix to ensure that protocol name comparisons in integration tests ignore
case (the group protocol from the test parameter is lower case, whereas
the enum name is upper case).
The tests were not failing, but the custom configs/expectations were not
being applied depending on the protocol (checks like
`groupProtocol.equals(CLASSIC)` would never be true).
Found all comparisons using equals against the constant name and fixed
them (not too many, luckily).
I did consider changing the protocol parameter that is passed to every
test (which is now lowercase), but it still seems more robust to have the
tests ignore case.
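A minimal sketch of the case-insensitive check (illustrative, not the exact test code):
```java
import org.apache.kafka.clients.consumer.GroupProtocol;

final class ProtocolCheck {
    // The test parameter arrives as "classic"/"consumer", while the enum name is upper case.
    static boolean isClassic(String groupProtocol) {
        return groupProtocol.equalsIgnoreCase(GroupProtocol.CLASSIC.name());
    }
}
```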
Reviewers: Gaurav Narula <gaurav_narula2@apple.com>, Ken Huang
<s7133700@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>, TengYao Chi
<frankvicky@apache.org>
With transactions V2, the replica manager checks whether an incoming
produce request writes to a partition belonging to a transaction.
ReplicaManager figures this out by checking the producer epoch stored in
the partition log. However, the current code does not reject the produce
request if its producer epoch is lower than the stored producer epoch.
It is an optimization to reject such requests earlier instead of sending
an AddPartitionToTxn request and getting rejected in the response.
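Sketched with hypothetical names rather than the actual ReplicaManager code, the early check is roughly:
```java
import org.apache.kafka.common.errors.InvalidProducerEpochException;

final class ProducerEpochCheck {
    // Reject early instead of paying for an AddPartitionsToTxn round trip that
    // would be rejected anyway.
    static void check(short requestProducerEpoch, short storedProducerEpoch) {
        if (requestProducerEpoch < storedProducerEpoch) {
            throw new InvalidProducerEpochException("Produce request epoch " + requestProducerEpoch
                + " is older than the current producer epoch " + storedProducerEpoch);
        }
    }
}
```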
Reviewers: Justine Olshan <jolshan@confluent.io>, Artem Livshits
<alivshits@confluent.io>
We should behave more like a consumer group and make sure to not be
subscribed to the input topics anymore when the last member leaves the
group. We don't do this right now because our topology is still
initialized even after the last member leaves the group.
This will allow:
* Offsets to expire and be cleaned up.
* Offsets to be deleted through admin API calls.
Reviewers: Bill Bejeck <bbejeck@apache.org>
A heartbeat might be sent to the group coordinator, claiming to own
tasks that we do not know about. We need some logic to handle those
requests. In KIP-1071, we propose to return `INVALID_REQUEST` error
whenever this happens, effectively letting the clients crash.
This behavior will, however, make topology updates impossible. Bruno
Cadonna proposed to only check that owned tasks match our set of
expected tasks if the topology epochs between the group and the client
match. The aim of this change is to implement a check and a behavior
for the first version of the protocol, which is to always return
`INVALID_REQUEST` if an unknown task is sent to the group coordinator.
We can relax this constraint once we allow topology updating with
topology epochs.
To efficiently check this whenever we receive a heartbeat containing
tasks, we precompute the number of tasks for each subtopology. This also
benefits the performance of the assignor.
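A sketch of the check under simplified types (not the coordinator's actual classes): with per-subtopology task counts precomputed, an owned task reported in a heartbeat is known exactly when its subtopology exists and its partition index is in range.
```java
import java.util.Map;

final class OwnedTaskCheck {
    // taskCountsBySubtopology: precomputed number of tasks per subtopology.
    static boolean isKnownTask(Map<String, Integer> taskCountsBySubtopology,
                               String subtopologyId, int taskIndex) {
        Integer taskCount = taskCountsBySubtopology.get(subtopologyId);
        // Any owned task failing this check leads to INVALID_REQUEST, as described above.
        return taskCount != null && taskIndex >= 0 && taskIndex < taskCount;
    }
}
```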
Reviewers: Bill Bejeck <bbejeck@apache.org>
* Add `RackAwareAssignor`. It uses `racksForPartition` to check the rack
id of a partition and assign it to a member that has the same rack id.
* Add `ConsumerIntegrationTest#testRackAwareAssignment` to check that
`racksForPartition` works correctly.
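Roughly, the matching rule looks like the following sketch; the rack set would come from `racksForPartition`, and the surrounding types are simplified for illustration:
```java
import java.util.Optional;
import java.util.Set;

final class RackMatch {
    // partitionRacks would come from racksForPartition(topicId, partition); a partition
    // is a rack-local candidate for a member whose rack id is among those racks.
    static boolean matches(Set<String> partitionRacks, Optional<String> memberRack) {
        return memberRack.isPresent() && partitionRacks.contains(memberRack.get());
    }
}
```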
Reviewers: David Jacot <djacot@confluent.io>
---------
Signed-off-by: PoAn Yang <payang@apache.org>
In #19840, we broke de-duplication during ACL creation. This patch fixes
that and adds a test to cover this case.
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>
Hi, I've created a pull request.
jira: [19328](https://issues.apache.org/jira/browse/KAFKA-19328)
Problem:
1. doAnswer chaining works as intended only when calls are made
sequentially. In a multithreaded environment, its behavior is
unpredictable.
2. Errors in a thread can be swallowed and not seen in the main thread.
3. A chain of 5 doAnswer stubs is not enough for 100 threads; the last
answer in the chain is returned in most cases.
4. nextFetchOffset seems to be called before the doAnswer chain, so the
last values (25, 5, 26, 16) were always the ones found in the doAnswer
chain.
Solution:
Delete the doAnswer chain so that the four problems above disappear.
Reviewers: Abhinav Dixit <adixit@confluent.io>, Apoorv Mittal
<apoorvmittal10@gmail.com>, Andrew Schofield <aschofield@confluent.io>
This PR implements all the options for `--delete --group grpId` and
`--delete --all-groups`
Tests: Integration tests and unit tests.
Reviewers: Lucas Brutschy <lbrutschy@confluent.io>, Andrew Schofield
<aschofield@confluent.io>
The `LIST_CLIENT_METRICS_RESOURCES` RPC was generalised to all config
resources in AK 4.1 and the RPC was renamed to `LIST_CONFIG_RESOURCES`.
This PR updates the RPC authorisation table in the documentation.
Reviewers: Apoorv Mittal <apoorvmittal10@gmail.com>
### About
Replaced explicit `.close` calls with `try-with-resources` for a few tests
in `DelayedShareFetchTest` where we are required to use `mockStatic`.
### Testing
The code has been tested by running the unit tests.
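For reference, a minimal sketch of the pattern (the utility class under mock is hypothetical): scoping `Mockito.mockStatic` with try-with-resources deregisters the static mock even if an assertion throws, replacing explicit `.close()` calls:
```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;
import org.mockito.MockedStatic;
import org.mockito.Mockito;

class MockStaticScopingTest {
    // Hypothetical utility used only to illustrate the pattern.
    static class SomeUtility {
        static int defaultValue() { return 0; }
    }

    @Test
    void staticMockIsClosedAutomatically() {
        try (MockedStatic<SomeUtility> mocked = Mockito.mockStatic(SomeUtility.class)) {
            mocked.when(SomeUtility::defaultValue).thenReturn(42);
            assertEquals(42, SomeUtility.defaultValue());
        } // the static mock is deregistered here, even if the assertion throws
        assertEquals(0, SomeUtility.defaultValue());
    }
}
```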
Reviewers: Apoorv Mittal <apoorvmittal10@gmail.com>