Commit Graph

16390 Commits

Sanskar Jhajharia 3c7f99ad31
MINOR: Cleanup Server Module (#20180)
As the PR title suggests, this PR is an attempt to perform some cleanups
in the server module. The changes are mostly around the use of Record
type for some classes, changes to use enhanced switch, etc.

Reviewers: Ken Huang <s7133700@gmail.com>, Chia-Ping Tsai
 <chia7712@gmail.com>
2025-09-08 07:01:09 +08:00
Chang-Yu Huang d6688f869c
KAFKA-15983 Kafka-acls should return authorization already done if repeating work is issued (#20482)
# Description
`kafka-acls.sh` doesn't print a message about duplicate authorizations.

# Changes 
Now the CLI searches for existing AclBindings, prints duplicate bindings,
and removes them from the adding list.
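
The dedup step can be sketched as follows, with plain strings standing in for `AclBinding` objects and a hypothetical `filterNew` helper (not the tool's actual code):

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Illustrative sketch only: strings stand in for AclBinding instances.
public class AclDedupSketch {
    // Print bindings that already exist, and return only the genuinely new ones.
    static List<String> filterNew(Set<String> existing, List<String> toAdd) {
        return toAdd.stream()
                .filter(binding -> {
                    if (existing.contains(binding)) {
                        System.out.println("Authorization already done for: " + binding);
                        return false; // drop duplicate from the adding list
                    }
                    return true;
                })
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Set<String> existing = Set.of("User:alice READ topic-a");
        List<String> toAdd = List.of("User:alice READ topic-a", "User:bob WRITE topic-b");
        System.out.println("To add: " + filterNew(existing, toAdd));
    }
}
```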

Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
2025-09-07 06:22:02 +08:00
jimmy 350577d0ae
MINOR: Add doc for external schemas in JSONConverter (#20429)
This is a follow-up to #19449, which does the following:

1. Adds documentation explaining that `schema.content` only works for sink
connectors when `schemas.enable` is set to true.
2. Handles the case where jsonValue contains both the `schema` and
`payload` fields, in which case the corresponding values should be used.
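
For reference, with `schemas.enable=true` the JsonConverter expects each message to be an envelope carrying `schema` and `payload` fields, along these lines (illustrative values):

```json
{
  "schema": { "type": "string", "optional": false },
  "payload": "hello"
}
```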

Reviewers: Priyanka K U <priyanka.ku@gmail.com>, Chia-Ping Tsai
 <chia7712@gmail.com>
2025-09-06 23:51:59 +08:00
Chang-Chi Hsu f6f6172bd1
MINOR: update gradle from 8.14.1 to 8.14.3 (#20495)
**This upgrade includes:**
- Dependency configurations are now realized only when necessary, which
helps improve configuration performance and memory usage.
- The configuration cache improves build time by caching the result of
the configuration phase and reusing it for subsequent builds. This
feature can significantly improve build performance.

Reference: [Gradle 8.14.3 Release Notes](https://docs.gradle.org/8.14.3/release-notes.html#build-authoring-improvements)

Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
2025-09-06 23:40:49 +08:00
Matthias J. Sax 655cfaa7b6
MINOR: remove System.out in test (#20494)
This PR removes two System.out.println(...) statements from
StreamsGraphTest. These outputs were left over from debugging and are
not needed in the test logic.

Reviewers: Ken Huang <s7133700@gmail.com>, TengYao Chi
 <kitingiao@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>
2025-09-06 23:24:54 +08:00
Sanskar Jhajharia 52dfe1e1b3
MINOR: Cleanup Raft Module (#20348)
This PR aims at cleaning up the `raft` module further by getting rid of
some extra code which can be replaced by `record`

Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
2025-09-06 23:02:14 +08:00
Sanskar Jhajharia 5e2f54e37a
MINOR: Cleanup Connect Module (5/n) (#20393)
This PR aims at cleaning up the `connect:runtime` module further by
replacing extra code with `record` classes and making the related
changes.

Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
2025-09-06 10:08:56 +08:00
Matthias J. Sax 9ba7dd68e6
KAFKA-19668: processValue() must be declared as value-changing operation (#20470)
With "merge.repartition.topic" optimization enabled, Kafka Streams tries
to push repartition topics upstream, to be able to merge multiple
repartition topics from different downstream branches together.

However, it is not safe to push a repartition topic if the parent node
is value-changing: because of potentially changing data types, the
topology might become invalid, and fail with serde error at runtime.

The optimization itself works correctly; however, processValues() is not
correctly declared as value-changing, which can lead to invalid
topologies.

Reviewers: Bill Bejeck <bill@confluent.io>, Lucas Brutschy
 <lbrutschy@confluent.io>
2025-09-05 18:00:24 -07:00
Ken Huang 0a12eaa80e
KAFKA-19112 Unifying LIST-Type Configuration Validation and Default Values (#20334)
This PR adds three main changes:

- Disallowing null values for most LIST-type configurations makes sense,
since users cannot explicitly set a configuration to null in a
properties file. Therefore, only configurations with a default value of
null should be allowed to accept null.
- Disallowing duplicate values is reasonable, as there are currently no
known configurations in Kafka that require specifying the same value
multiple times. Allowing duplicates is both rare in practice and
potentially confusing to users.
- Disallowing empty lists, even though many configurations currently
accept them. In practice, setting an empty list for several of these
configurations can lead to server startup failures or unexpected
behavior. Therefore, enforcing non-empty lists helps prevent
misconfiguration and improves system robustness.
These changes may introduce some backward incompatibility, but this
trade-off is justified by the significant improvements in safety,
consistency, and overall user experience.
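
The three rules above can be sketched as a standalone validator; this is a minimal illustration with hypothetical names, not Kafka's actual `ConfigDef` validation code:

```java
import java.util.HashSet;
import java.util.List;

// Hypothetical validator illustrating the three rules; Kafka's real
// validation lives in ConfigDef, not in this class.
public class ListConfigCheck {
    static void validate(String name, List<String> values, boolean nullDefault) {
        if (values == null) {
            // null is only legal when the config's default value is itself null
            if (!nullDefault)
                throw new IllegalArgumentException(name + " must not be null");
            return;
        }
        if (values.isEmpty())
            throw new IllegalArgumentException(name + " must not be an empty list");
        if (new HashSet<>(values).size() != values.size())
            throw new IllegalArgumentException(name + " must not contain duplicate values");
    }

    public static void main(String[] args) {
        validate("metric.reporters", List.of("a", "b"), false); // passes
        try {
            validate("metric.reporters", List.of(), false);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```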

Additionally, we introduce two minor adjustments:

- Reclassify some STRING-type configurations as LIST-type, particularly
those using comma-separated values to represent multiple entries. This
change reflects the actual semantics used in Kafka.
- Update the default values for some configurations to better align with
other configs.
These changes will not introduce any compatibility issues.

Reviewers: Jun Rao <junrao@gmail.com>, Chia-Ping Tsai
 <chia7712@gmail.com>
2025-09-06 01:25:55 +08:00
Levani Kokhreidze 548fb18099
MINOR: Fix typo for the headers.separator cli option (#20489)
Should be `headers.separator=<headers.separator>` instead of
`headers.separator=<line.separator>`

Reviewers: Kuan-Po Tseng <brandboat@gmail.com>, Ken Huang
 <s7133700@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>
2025-09-06 00:57:21 +08:00
Lianet Magrans 5fefb16f14
MINOR: extend consumer close java doc with error handling behaviour (#20472)
Extend the consumer.close Javadoc to describe the error handling
behaviour.

Reviewers: Matthias J. Sax <matthias@confluent.io>, Chia-Ping Tsai
 <chia7712@gmail.com>, Andrew Schofield <aschofield@confluent.io>,
 TengYao Chi <frankvicky@apache.org>
2025-09-06 00:41:11 +08:00
Kuan-Po Tseng af03353f71
KAFKA-19659: Wrong generic type for UnregisterBrokerOptions (#20490)
Fix wrong generic type for UnregisterBrokerOptions

Reviewers: Andrew Schofield <aschofield@confluent.io>
2025-09-05 16:50:05 +01:00
Jonah Hooper 29ce96151c
MINOR; Revert "KAFKA-18681: Created GetReplicaLogInfo RPCs (#19664)" (#20371)
This reverts commit d86ba7f54a.

Reverting since we are planning to change how KIP-966 is implemented. We
should revert this RPC until we have more clarity on how this KIP will
be executed.

Reviewers: José Armando García Sancio <jsancio@apache.org>
2025-09-05 11:31:50 -04:00
Kirk True f922ff6d1f
KAFKA-19259: Async consumer fetch intermittent delays on console consumer (#19980)
There’s a difference in the two consumers’ `pollForFetches()` methods in
this case: `ClassicKafkaConsumer` doesn't block waiting for data in the
fetch buffer, but `AsyncKafkaConsumer` does.

In `ClassicKafkaConsumer.pollForFetches()`, after enqueuing the `FETCH`
request, the consumer makes a call to `ConsumerNetworkClient.poll()`. In
most cases `poll()` returns almost immediately because it successfully
sent the `FETCH` request. So even when the `pollTimeout` value is, e.g.
3000, the call to `ConsumerNetworkClient.poll()` doesn't block that long
waiting for a response.

After sending out a `FETCH` request, `AsyncKafkaConsumer` then calls
`FetchBuffer.awaitNotEmpty()` and proceeds to block there for the full
length of the timeout. In some cases, the response to the `FETCH` comes
back with no results, which doesn't unblock
`FetchBuffer.awaitNotEmpty()`. So because the application thread is
still waiting for data in the buffer, it remains blocked, preventing any
more `FETCH` requests from being sent, causing the long pauses in the
console consumer.

Reviewers: Lianet Magrans <lmagrans@confluent.io>, Andrew Schofield
 <aschofield@confluent.io>
2025-09-05 10:50:47 -04:00
Jim Galasyn b92d47d487
MINOR: Update Kafka Streams API broker compatibility table for 4.1 (#20423)
Update the Kafka Streams API broker compatibility table for version 4.1.

Reviewers: Matthias J. Sax <matthias@confluent.io>
2025-09-04 17:39:49 -07:00
Lan Ding 32c2383bfa
KAFKA-19658 Tweak org.apache.kafka.clients.consumer.OffsetAndMetadata (#20451)
1. Optimize the `equals()`, `hashCode()`, and `toString()` methods in
`OffsetAndMetadata`.
2. Add UT and IT to these modifications.

Reviewers: TengYao Chi <kitingiao@gmail.com>, Sean Quah
 <squah@confluent.io>, Chia-Ping Tsai <chia7712@gmail.com>
2025-09-05 06:06:08 +08:00
Ken Huang 8076702c4c
MINOR: Add Unit test for `TimingWheel` (#20443)
There is no unit test for `TimingWheel`; we should add tests for it.

Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
2025-09-05 05:55:57 +08:00
Shashank d68c41d2f3
KAFKA-19666: Clean up integration tests related to state-updater (#20462)
Clean up `KafkaStreamsTelemetryIntegrationTest.java`

Reviewers: Lucas Brutschy <lucasbru@apache.org>
2025-09-04 21:40:23 +02:00
Andrew Schofield 37e04eca81
KAFKA-19662: Allow resetting offset for unsubscribed topic in kafka-share-groups.sh (#20453)
The `kafka-share-groups.sh` tool checks whether a topic already has a
start-offset in the share group when resetting offsets. This is not
necessary. By removing the check, it is possible to set a start offset
for a topic which has not yet been subscribed but will be in the future,
thus initialising the consumption point.

There is still a small piece of outstanding work to do with resetting
the offset for a non-existent group which should also create the group.
A subsequent PR will be used to address that.

Reviewers: Jimmy Wang <48462172+JimmyWang6@users.noreply.github.com>,
Lan Ding <isDing_L@163.com>, Apoorv Mittal <apoorvmittal10@gmail.com>
2025-09-04 18:46:12 +01:00
Andrew Schofield 1d0c5f2820
KAFKA-19667: Close ShareConsumer in ShareConsumerPerformance after metrics displayed (#20467)
Ensure that metrics are retrieved and displayed (when requested) before
ShareConsumer.close() is called. This is important because metrics are
technically supposed to be removed on ShareConsumer.close(), which means
retrieving them after close() would yield an empty map.
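
The ordering constraint can be illustrated with a toy stand-in (not the real `ShareConsumer` API): metrics read before `close()` survive in the caller's snapshot, while reads after `close()` see an empty map.

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for a consumer whose close() removes all metrics.
public class MetricsOnClose {
    private final Map<String, Double> metrics =
            new HashMap<>(Map.of("records-consumed", 42.0));

    Map<String, Double> metrics() { return new HashMap<>(metrics); }

    void close() { metrics.clear(); } // metrics are removed on close()

    public static void main(String[] args) {
        MetricsOnClose consumer = new MetricsOnClose();
        Map<String, Double> snapshot = consumer.metrics(); // read BEFORE close
        consumer.close();
        System.out.println(snapshot);           // still populated
        System.out.println(consumer.metrics()); // empty after close
    }
}
```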

Related to https://github.com/apache/kafka/pull/20267.

Reviewers: Apoorv Mittal <apoorvmittal10@gmail.com>
2025-09-04 18:42:58 +01:00
jimmy 9257c431ed
MINOR: Fix failed e2e compatibility_test_new_broker_test and upgrade_test.py (#20471)
#20390 replaced the `--producer.config` option for the verifiable
producer and the `--consumer.config` option for the verifiable consumer
with `--command-config`. However, for e2e tests targeting older broker
versions, the original options should still be used.

Fix the following tests:
`consumer_protocol_migration_test.py`, `compatibility_test_new_broker_test.py`
and `upgrade_test.py`.

Reviewers: Chia-Ping Tsai <chia7712@gmail.com>, Manikumar Reddy <manikumar.reddy@gmail.com>
2025-09-04 21:02:45 +05:30
Hong-Yi Chen 6a1cdf8262
MINOR: Refactor CLI tools to use CommandLineUtils#maybePrintHelpOrVersion (#20469)
Refactor help and version handling in command-line tools by replacing
duplicate code with `CommandLineUtils#maybePrintHelpOrVersion`.

Reviewers: TengYao Chi <kitingiao@gmail.com>, Ken Huang
 <s7133700@gmail.com>, Jhen-Yung Hsu <jhenyunghsu@gmail.com>, Chia-Ping
 Tsai <chia7712@gmail.com>
2025-09-04 21:43:17 +08:00
Shivsundar R 29b940bef4
MINOR: Use drainEvents() in ShareConsumerImpl::processBackgroundEvents (#20474)
*What*

- Currently in `ShareConsumerImpl`, we were not resetting
`background-event-queue-size` metric to 0 after draining the events from
the queue.
- This PR fixes it by using `BackgroundEventHandler::drainEvents`
similar to `AsyncKafkaConsumer`.
- Added a unit test to verify the metric is reset to 0 after draining
the events.

Reviewers: Andrew Schofield <aschofield@confluent.io>, Chia-Ping Tsai
<chia7712@gmail.com>
2025-09-04 21:39:50 +08:00
lucliu1108 a81f08d368
KAFKA-19550: Integration test for Streams-related Admin APIs [1/N] (#20244)
This change adds:

- Integration test for `Admin#describeStreamsGroups` API
- Integration test for `Admin#deleteStreamsGroup` API

Reviewers: Alieh Saeedi <asaeedi@confluent.io>, Lucas Brutschy
 <lucasbru@apache.org>

---------

Co-authored-by: Lucas Brutschy <lbrutschy@gmail.com>
2025-09-04 15:09:21 +02:00
Mickael Maison 6097b330c3
MINOR: Update supported image tags in docker_scan (#20459)
Update the supported tags for the 4.1.0 release

Reviewers: Luke Chen <showuon@gmail.com>
2025-09-04 09:37:17 +02:00
Matthias J. Sax c3af2064e7
MINOR: code cleanup (#20455)
- rewrite code to avoid @Suppress
- remove unused code
- fix test error message

Reviewer: Lucas Brutschy <lbrutschy@confluent.io>
2025-09-03 17:16:05 -07:00
PoAn Yang ea5b5fec32
KAFKA-19432 Add an ERROR log message if broker.heartbeat.interval.ms is too large (#20046)
* Log an error message if `broker.heartbeat.interval.ms * 2` is larger
than `broker.session.timeout.ms`.
* Add test case
`testLogBrokerHeartbeatIntervalMsShouldBeLowerThanHalfOfBrokerSessionTimeoutMs`.
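
The check described above amounts to a simple comparison; a minimal sketch with hypothetical names:

```java
// Sketch of the validity check; class and method names are illustrative.
public class HeartbeatCheck {
    // Error condition: twice the heartbeat interval exceeds the session timeout.
    static boolean intervalTooLarge(long heartbeatIntervalMs, long sessionTimeoutMs) {
        return 2 * heartbeatIntervalMs > sessionTimeoutMs;
    }

    public static void main(String[] args) {
        if (intervalTooLarge(5_000, 9_000))
            System.err.println("ERROR: broker.heartbeat.interval.ms * 2 is larger "
                    + "than broker.session.timeout.ms");
    }
}
```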

Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
2025-09-04 03:40:21 +08:00
Hong-Yi Chen a9bce0647f
KAFKA-19535 add integration tests for DescribeProducersOptions#brokerId (#20420)
Add tests for producer state listing with a brokerId, without one, and
with an invalid brokerId.

Reviewers: TengYao Chi <kitingiao@gmail.com>, Chia-Ping Tsai
 <chia7712@gmail.com>
2025-09-04 03:15:21 +08:00
Nick Guo ef10a52a52
KAFKA-19011 Improve EndToEndLatency Tool with argument parser and message key/header support (#20301)
Jira: https://issues.apache.org/jira/browse/KAFKA-19011
KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-1172%3A+Improve+EndToEndLatency+tool

This PR improves the usability and maintainability of the
`kafka-e2e-latency.sh` tool:

- Replaces fixed-index argument parsing with a proper argument parser
(joptsimple)
- Adds support for configuring:
    - `--record-key-size`: size of the message key
    - `--num-headers`: number of headers per message
    - `--record-header-key-size`: size of each header key
    - `--record-header-size`: size of each header value
- Renames existing arguments to align with Kafka CLI conventions:
    - `broker_list` → `bootstrap-server`
    - `num_messages` → `num-records`
    - `message_size_bytes` → `record-size`
    - `properties_file` → `command-config`

Reviewers: Jhen-Yung Hsu <jhenyunghsu@gmail.com>, Ken Huang
 <s7133700@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>
2025-09-04 02:29:53 +08:00
Lucas Brutschy 6247fd9eb3
KAFKA-19478 [3/N]: Use heaps to discover the least loaded process (#20172)
The original implementation uses a linear search to find the least
loaded process in O(n); we can replace this with look-ups in a heap in
O(log(n)), as described below.

Active tasks: For active tasks, we can do exactly the same assignment as
in the original algorithm by first building a heap (by load) of all
processes. When we assign a task, we pick the head off the heap, assign
the task to it, update the load, and re-insert it into the heap in
O(log(n)).

Standby tasks: For standby tasks, we cannot do this optimization
directly, because of the order in which we assign tasks:

1. We first try to assign task A to a process that previously owned A.
2. If we did not find such a process, we assign A to the least loaded
node.
3. We now try to assign task B to a process that previously owned B
4. If we did not find such a process, we assign B to the least loaded
node
   ...

The problem is that we cannot efficiently keep a heap (by load)
throughout this process, because finding and removing the process that
previously owned A (and B and…) in the heap is O(n). We therefore need
to change the order of evaluation to be able to use a heap:

1. Try to assign all tasks A, B.. to a process that previously owned the
task
2. Build a heap.
3. Assign all remaining tasks to the least-loaded process that does not
yet own the task. Since at most NumStandbyReplicas already own the task,
we can do it by removing up to NumStandbyReplicas from the top of the
heap in O(log(n)), so we get O(log(NumProcesses)*NumStandbyReplicas).

Note that the change in order changes the resulting standby assignments
(although this difference does not show up in the existing unit tests).
I would argue that the new order of assignment will actually yield
better assignments, since the assignment will be more sticky, which has
the potential to reduce the amount of state we have to restore from the
changelog topic after assignments.

In our worst-performing benchmark, this improves the runtime by ~107x.
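
The active-task assignment described above can be sketched with a `PriorityQueue`; the types here are illustrative only, and the real assignor tracks much richer per-process state:

```java
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;

// Sketch: assign each task to the currently least-loaded process via a heap,
// O(log n) per task instead of a linear scan.
public class LeastLoadedSketch {
    static Map<String, String> assign(List<String> tasks, List<String> processes) {
        // heap entries: {load, index into processes}, ordered by load
        PriorityQueue<int[]> heap =
                new PriorityQueue<>(Comparator.comparingInt((int[] e) -> e[0]));
        for (int i = 0; i < processes.size(); i++) heap.add(new int[]{0, i});

        Map<String, String> assignment = new LinkedHashMap<>();
        for (String task : tasks) {
            int[] least = heap.poll();                 // head = least-loaded process
            assignment.put(task, processes.get(least[1]));
            least[0]++;                                // update its load
            heap.add(least);                           // re-insert in O(log n)
        }
        return assignment;
    }

    public static void main(String[] args) {
        System.out.println(assign(List.of("t1", "t2", "t3", "t4"),
                                  List.of("p1", "p2")));
    }
}
```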

Reviewers: Bill Bejeck<bbejeck@apache.org>
2025-09-03 17:13:01 +02:00
Andrew Schofield 4b9075b506
KAFKA-19653: Improve metavariable names in usage messages (#20438)
This trivial PR improves the so-called metavariable names in the usage
messages of the verifiable producer/consumer command-line tools. These
are the names of the replacement variables that appear solely in the
usage messages.

Verifiable producer (before):
```
usage: verifiable-producer [-h] --topic TOPIC
    [--max-messages MAX-MESSAGES] [--throughput THROUGHPUT]
    [--acks ACKS] [--producer.config CONFIG_FILE]
    [--message-create-time CREATETIME] [--value-prefix VALUE-PREFIX]
    [--repeating-keys REPEATING-KEYS] [--command-config CONFIG_FILE]
    --bootstrap-server HOST1:PORT1[,HOST2:PORT2[...]]
```

(after)
```
usage: verifiable-producer [-h] --topic TOPIC
    [--max-messages MAX-MESSAGES] [--throughput THROUGHPUT]
    [--acks ACKS] [--producer.config CONFIG-FILE]
    [--message-create-time CREATE-TIME] [--value-prefix VALUE-PREFIX]
    [--repeating-keys REPEATING-KEYS] [--command-config CONFIG-FILE]
    --bootstrap-server HOST1:PORT1[,HOST2:PORT2[...]]
```

Verifiable consumer (before):
```
usage: verifiable-consumer [-h] --topic TOPIC
    [--group-protocol GROUP_PROTOCOL]
    [--group-remote-assignor GROUP_REMOTE_ASSIGNOR]
    --group-id GROUP_ID
    [--group-instance-id GROUP_INSTANCE_ID]
    [--max-messages MAX-MESSAGES]
    [--session-timeout TIMEOUT_MS] [--verbose]
    [--enable-autocommit] [--reset-policy RESETPOLICY]
    [--assignment-strategy ASSIGNMENTSTRATEGY]
    [--consumer.config CONFIG_FILE] [--command-config CONFIG_FILE]
    --bootstrap-server HOST1:PORT1[,HOST2:PORT2[...]]
```

(after)
```
usage: verifiable-consumer [-h] --topic TOPIC
    [--group-protocol GROUP-PROTOCOL]
    [--group-remote-assignor GROUP-REMOTE-ASSIGNOR]
    --group-id GROUP-ID
    [--group-instance-id GROUP-INSTANCE-ID]
    [--max-messages MAX-MESSAGES]
    [--session-timeout TIMEOUT-MS] [--verbose]
    [--enable-autocommit] [--reset-policy RESET-POLICY]
    [--assignment-strategy ASSIGNMENT-STRATEGY]
    [--consumer.config CONFIG-FILE] [--command-config CONFIG-FILE]
    --bootstrap-server HOST1:PORT1[,HOST2:PORT2[...]]
```

Verifiable share consumer (before):
```
usage: verifiable-share-consumer
       [-h] --topic TOPIC --group-id GROUP_ID
       [--max-messages MAX-MESSAGES] [--verbose]
       [--acknowledgement-mode ACKNOWLEDGEMENTMODE]
       [--offset-reset-strategy OFFSETRESETSTRATEGY]
       [--command-config CONFIG_FILE]
       --bootstrap-server HOST1:PORT1[,HOST2:PORT2[...]]
```

(after):
```
usage: verifiable-share-consumer
       [-h] --topic TOPIC --group-id GROUP-ID
       [--max-messages MAX-MESSAGES] [--verbose]
       [--acknowledgement-mode ACKNOWLEDGEMENT-MODE]
       [--offset-reset-strategy OFFSET-RESET-STRATEGY]
       [--command-config CONFIG-FILE]
       --bootstrap-server HOST1:PORT1[,HOST2:PORT2[...]]
```

Reviewers: Kirk True <kirk@kirktrue.pro>, Ken Huang
 <s7133700@gmail.com>, Lianet Magrans <lmagrans@confluent.io>
2025-09-03 15:38:42 +01:00
Shivsundar R d226b43597
KAFKA-18220: Refactor AsyncConsumerMetrics to not extend KafkaConsumerMetrics (#20283)
*What*
https://issues.apache.org/jira/browse/KAFKA-18220

- Currently, `AsyncConsumerMetrics` extends `KafkaConsumerMetrics`, but
is being used both by `AsyncKafkaConsumer` and `ShareConsumerImpl`.

- `ShareConsumerImpl` only needs the async consumer metrics (the metrics
associated with the new consumer threading model).
- This needed fixing: we were unnecessarily making
`KafkaConsumerMetrics` a parent class of the `ShareConsumer` metrics.

Fix :
- In this PR, we have removed the dependency of `AsyncConsumerMetrics`
on `KafkaConsumerMetrics` and made it an independent class which both
`AsyncKafkaConsumer` and `ShareConsumerImpl` use.
- The "`asyncConsumerMetrics`" field represents the metrics associated
with the new consumer threading model (like application event queue
size, background queue size, etc).
- The "`kafkaConsumerMetrics`" and "`kafkaShareConsumerMetrics`" fields
denote the actual consumer metrics for `KafkaConsumer` and
`KafkaShareConsumer` respectively.

Reviewers: Andrew Schofield <aschofield@confluent.io>
2025-09-03 12:35:55 +01:00
dependabot[bot] 8448c288fa
MINOR: Bump requests from 2.31.0 to 2.32.4 in /tests (#19940)
Reviewers: Mickael Maison <mickael.maison@gmail.com>
2025-09-03 12:29:18 +02:00
Mickael Maison dd52058466
MINOR: Cleanups in Connect (#20077)
A few cleanups including Java 17 syntax, collections and assertEquals() order

Reviewers: Luke Chen <showuon@gmail.com>, Ken Huang <s7133700@gmail.com>, Jhen-Yung Hsu <jhenyunghsu@gmail.com>
2025-09-03 11:11:57 +02:00
Kirk True 4271fd8c8b
KAFKA-19564: Close Consumer in ConsumerPerformance only after metrics displayed (#20267)
Ensure that metrics are retrieved and displayed (when requested) before
`Consumer.close()` is called. This is important because metrics are
technically supposed to be removed on `Consumer.close()`, which means
retrieving them _after_ `close()` would yield an empty map.

Reviewers: Andrew Schofield <aschofield@confluent.io>
2025-09-03 09:25:21 +01:00
Federico Valeri 2ba30cc466
KAFKA-19574: Improve producer and consumer config files (#20302)
This is an attempt at improving the client configuration files. We now
have sections and comments similar to the other properties files.

Reviewers: Kirk True <ktrue@confluent.io>, Luke Chen <showuon@gmail.com>

---------

Signed-off-by: Federico Valeri <fedevaleri@gmail.com>
2025-09-02 11:24:35 +09:00
Matthias J. Sax 342a8e6773
MINOR: suppress build warning (#20424)
Suppress build warning.

Reviewers: TengYao Chi <frankvicky@apache.org>, Ken Huang
<s7133700@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>
2025-09-01 11:12:11 -07:00
Lan Ding 4f2114a49e
KAFKA-19645 add a lower bound to num.replica.fetchers (#20414)
Add a lower bound to num.replica.fetchers.

Reviewers: PoAn Yang <payang@apache.org>, TaiJuWu <tjwu1217@gmail.com>,
 Ken Huang <s7133700@gmail.com>, jimmy <wangzhiwang611@gmail.com>,
 Jhen-Yung Hsu <jhenyunghsu@gmail.com>, Chia-Ping Tsai
 <chia7712@gmail.com>
2025-08-31 11:12:57 +08:00
Yunchi Pang 5441f5e3e1
KAFKA-19616 Add compression type and level support to LogCompactionTester (#20396)
issue: [KAFKA-19616](https://issues.apache.org/jira/browse/KAFKA-19616)

**why**: validate log compaction works correctly with compressed data.
**what**: adds compression config options to `LogCompactionTester` tool
and extends test coverage to validate log compaction with different
compression types and levels.

Reviewers: TengYao Chi <kitingiao@gmail.com>, Chia-Ping Tsai
 <chia7712@gmail.com>
2025-08-30 10:21:22 +08:00
Bill Bejeck e389484697
MINOR: Prepend streams to group configs specific to Kafka Streams groups (#20448)
In the 4.1 `upgrade-guide` describing the new KIP-1071 protocol, it is
helpful to display the configs you can set via `kafka-configs.sh`
with `streams` prepended to the config names; the command fails
otherwise.

Reviewers: Andrew J Schofield <aschofield@apache.org>, Matthias J Sax
 <mjsax@apache.org>, Genseric Ghiro <gghiro@confluent.io>
2025-08-29 16:57:23 -04:00
Kuan-Po Tseng 26fea78ae1
MINOR: Remove default config in creating internal stream topic (#20421)
Clean up default configs in
`AutoTopicCreationManager#createStreamsInternalTopics`. The streams
protocol should be consistent with Kafka Streams on the classic
protocol, which creates the internal topics using CreateTopic and
therefore uses the controller config.

Reviewers: Lucas Brutschy <lucasbru@apache.org>
2025-08-29 15:23:53 +02:00
Jhen-Yung Hsu 65f789f560
KAFKA-19626: KIP-1147 Consistency of command-line arguments for remaining CLI tools (#20431)
This implements [KIP-1147](https://cwiki.apache.org/confluence/x/DguWF)
for kafka-cluster.sh, kafka-leader-election.sh and
kafka-streams-application-reset.sh.

Jira: https://issues.apache.org/jira/browse/KAFKA-19626

Reviewers: Andrew Schofield <aschofield@confluent.io>
2025-08-29 12:04:03 +01:00
knoxy5467 2dd2db7a1e
KAFKA-8350 Fix stack overflow when batch size is larger than cluster max.message.byte (#20358)
### Summary
This PR fixes two critical issues related to producer batch splitting
that can cause infinite retry loops and stack overflow errors when batch
sizes are significantly larger than broker-configured message size
limits.

### Issues Addressed
- **KAFKA-8350**: Producers endlessly retry batch splitting when
`batch.size` is much larger than topic-level `message.max.bytes`,
leading to infinite retry loops with "MESSAGE_TOO_LARGE" errors
- **KAFKA-8202**: Stack overflow errors in
`FutureRecordMetadata.chain()` due to excessive recursive splitting
attempts

### Root Cause
The existing batch splitting logic in
`RecordAccumulator.splitAndReenqueue()` always used the configured
`batchSize` parameter for splitting, regardless of whether the batch had
already been split before. This caused:

1. **Infinite loops**: When `batch.size` (e.g., 8MB) >>
`message.max.bytes` (e.g., 1MB), splits would never succeed since the
split size was still too large
2. **Stack overflow**: Repeated splitting attempts created deep call
chains in the metadata chaining logic

### Solution
Implemented progressive batch splitting logic:

```java
int maxBatchSize = this.batchSize;
if (bigBatch.isSplitBatch()) {
    maxBatchSize = Math.max(bigBatch.maxRecordSize,
                            bigBatch.estimatedSizeInBytes() / 2);
}
```

__Key improvements:__

- __First split__: Uses original `batchSize` (maintains backward
compatibility)

- __Subsequent splits__: Uses the larger of:
  - `maxRecordSize`: Ensures we can always split down to individual records
  - `estimatedSizeInBytes() / 2`: Provides geometric reduction for faster convergence
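
As a worked example of the geometric reduction (numbers illustrative: an 8 MB batch, a 1 MB broker limit, and a 1 KB `maxRecordSize`), each retry halves the split size until it fits:

```java
// Illustrative numbers only; mirrors the progressive-split formula above.
public class SplitSizes {
    // Next split size: larger of maxRecordSize and half the current estimated size.
    static long nextSplitSize(long estimatedSizeInBytes, int maxRecordSize) {
        return Math.max(maxRecordSize, estimatedSizeInBytes / 2);
    }

    public static void main(String[] args) {
        int maxRecordSize = 1024;      // assumed largest single record (1 KB)
        long size = 8L * 1024 * 1024;  // 8 MB batch
        long limit = 1024 * 1024;      // 1 MB message.max.bytes
        while (size > limit) {
            size = nextSplitSize(size, maxRecordSize);
            System.out.println("retry split size: " + size + " bytes");
        }
    }
}
```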

### Testing

Added comprehensive test
`testSplitAndReenqueuePreventInfiniteRecursion()` that:

- Creates oversized batches with 100 records of 1KB each
- Verifies splitting can reduce batches to single-record size
- Ensures no infinite recursion (safety limit of 100 operations)
- Validates no data loss or duplication during splitting
- Confirms all original records are preserved with correct keys

### Backward Compatibility

- No breaking changes to public APIs
- First split attempt still uses original `batchSize` configuration
- Progressive splitting only engages for retry scenarios

Reviewers: Colin P. McCabe <cmccabe@apache.org>, Jason Gustafson
<jason@confluent.io>, Rajini Sivaram <rajinisivaram@googlemail.com>

---------

Co-authored-by: Michael Knox <mrknox@amazon.com>
2025-08-29 11:51:47 +01:00
Matthias J. Sax c7154b8bf8
MINOR: improve RLMQuotaMetricsTest (#20425)
Adds metrics description verification to RLMQuotaMetricsTest.

Reviewers: Ken Huang <s7133700@gmail.com>, TengYao Chi
<kitingiao@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>
2025-08-29 17:35:48 +08:00
Apoorv Mittal 7eeb5c8344
MINOR: Removing incorrect multi threaded state transition tests (#20436)
These tests were written while finalizing approach for making inflight
state class thread safe but later approach changed and the lock is now
always required by SharePartition to change inflight state. Hence these
tests are incorrect and do not add any value.

Reviewers: Andrew Schofield <aschofield@confluent.io>
2025-08-29 07:45:07 +01:00
Andrew Schofield e6f3efc914
KAFKA-19635: Minor docs tweaks (#20434)
Improve the wording in the upgrade doc slightly. Also fix a tiny
annoyance in the output from the message generator.

Reviewers: Apoorv Mittal <apoorvmittal10@gmail.com>
2025-08-28 18:52:04 +01:00
Andrew Schofield 50009cc76a
KAFKA-19635: KIP-1147 changes for upgrade.html (#20415)
Updates to `docs/upgrade.html` for

https://cwiki.apache.org/confluence/display/KAFKA/KIP-1147:+Improve+consistency+of+command-line+arguments.

Reviewers: Apoorv Mittal <apoorvmittal10@gmail.com>
2025-08-28 16:24:45 +01:00
Lucas Brutschy 3c378dab7d
KAFKA-19647: Implement integration test for offline migration (#20412)
In KAFKA-19570 we implemented offline migration between groups, that is,
the following integration test or system test should be possible:

Test A:

- Start a streams application with the classic protocol, process up to a
certain offset, commit the offset and shut down.
- Start the same streams application with the streams protocol (same app ID!).
- Make sure that the offsets before the one committed in the first run
are not reprocessed in the second run.

Test B:

- Start a streams application with the streams protocol, process up to a
certain offset, commit the offset and shut down.
- Start the same streams application with the classic protocol (same app ID!).
- Make sure that the offsets before the one committed in the first run
are not reprocessed in the second run.

We have unit tests that make sure that non-empty groups will not be
converted. This should be enough.

Reviewers: Bill Bejeck <bbejeck@apache.org>
2025-08-28 17:07:58 +02:00
Apoorv Mittal 6956417a3e
MINOR: Updated name from messages to records for consistency in share partition (#20416)
Minor PR to rename maxInFlightMessages to maxInFlightRecords to
maintain consistency in share-partition-related classes.

Reviewers: Andrew Schofield <aschofield@confluent.io>
2025-08-28 13:52:04 +01:00
Ken Huang 2cc66f12c3
MINOR: Remove OffsetsForLeaderEpochRequest unused static field (#20418)
This field was used for replica_id, but after 51c833e795,
OffsetsForLeaderEpochRequest directly relies on the internal structs
generated by the automated protocol. Therefore, we can safely remove
it.

Reviewers: Lan Ding <isDing_L@163.com>, TengYao Chi
<frankvicky@apache.org>
2025-08-28 17:24:01 +08:00