1. Fix an ambiguous statement that could be read as saying scalac is used for the clients module
2. Remove "in KRaft mode" because that's the only mode
3. Put IntelliJ first in the IDE section and only mention the native Gradle integration
4. Restore the instructions for installing all projects to the local Maven repo; these were accidentally removed as part of the Scala 2.12 support removal.
Currently, the command `./gradlew :streams:testAll` suggested in the README fails with an error. With this commit, the command passes and runs the tests in the streams suite.
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>, Lucas Brutschy <lbrutschy@confluent.io>
Also remove a confusing comment related to Connect & Java 17 from build.gradle.
I also updated KIP-1013 to note the implications of KIP-1032.
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
Added transaction version 2 (TV2) to some of the system tests. Also marked TV2 as production ready.
Also fixes the defaultVersion test.
Reviewers: Jun Rao <jun@confluent.io>
Included in this change:
1. Remove deprecated protocol api versions from json files.
3. Remove fields that are no longer used from json files (affects ListOffsets, OffsetCommit, DescribeConfigs).
4. Remove record down-conversion support from KafkaApis.
5. No longer return `Errors.UNSUPPORTED_COMPRESSION_TYPE` on the fetch path[1].
6. Deprecate `TopicConfig.MESSAGE_DOWNCONVERSION_ENABLE_CONFIG` and make the relevant
configs (`message.downconversion.enable` and `log.message.downconversion.enable`) no-ops since
down-conversion is no longer supported. It was an oversight not to deprecate this via KIP-724.
7. Fix `shouldRetainsBufferReference` to handle null request schemas for a given version.
8. Simplify producer logic since it only supports the v2 record format now.
9. Fix tests so they don't exercise protocol api versions that have been removed.
10. Add upgrade note.
Testing:
1. System tests have a lot of failures, but those tests fail on trunk too and I didn't see any issues specific to this change. It's hard to be sure given the number of failing tests, but let's not block on that given the other testing that has been done (see below).
3. Java producers and consumers with versions 0.9-0.10.1 don't have api versions support and hence they fail in an ungraceful manner: the broker disconnects and the clients reconnect until the relevant timeout is triggered.
4. The same thing seems to happen for the console producer 0.10.2, although it's unclear why since api versions should be supported. I will look into this separately; it's unlikely to be related to this PR.
5. Console consumer 0.10.2 fails with the expected error and a reasonable message[2].
6. Console producer and consumer 0.11.0 work fine; newer versions should naturally also work fine.
7. kcat 1.5.0 (based on librdkafka 1.1.0) produce and consume fail with a reasonable message[3][4].
8. kcat 1.6.0-1.7.0 (based on librdkafka 1.5.0 and 1.7.0 respectively) consume fails with a reasonable message[5].
9. kcat 1.6.0-1.7.0 produce works fine.
10. kcat 1.7.1 (based on librdkafka 1.8.2) works fine for both produce and consume.
11. confluent-go-client (librdkafka based) 1.8.2 works fine for both produce and consume.
12. I will test more clients, but I don't think we need to block the PR on that.
Note that this also completes part of KIP-724: produce v2 and lower as well as fetch v3 and lower are no longer supported.
Future PRs will remove conditional code that is no longer needed (some of that has been done in KafkaApis,
but only what was required due to the schema changes). We can probably do that in master only as it does
not change behavior.
Note that I did not touch `ignorable` fields even though some of them could have been
changed. The reasoning is that this could result in incompatible changes for clients
that use new protocol versions without setting such fields _if_ we don't manually
validate their presence. I will file a JIRA ticket to look into this carefully for each
case (i.e. if we do validate their presence for the appropriate versions, we can
set them to ignorable=false in the json file).
[1] We would return this error if a fetch < v10 was used and the compression topic config was set
to zstd, but we would not do the same for the case where zstd compression was applied at the producer
level (the most common case). Since there is no efficient way to do the check for the common
case, I made it consistent for both by having no checks.
[2] ```org.apache.kafka.common.errors.UnsupportedVersionException: The broker is too new to support JOIN_GROUP version 1```
[3]```METADATA|rdkafka#producer-1| [thrd:main]: localhost:9092/bootstrap: Metadata request failed: connected: Local: Required feature not supported by broker (0ms): Permanent```
[4]```METADATA|rdkafka#consumer-1| [thrd:main]: localhost:9092/bootstrap: Metadata request failed: connected: Local: Required feature not supported by broker (0ms): Permanent```
[5] `ERROR: Topic test-topic [0] error: Failed to query logical offset END: Local: Required feature not supported by broker`
Reviewers: David Arthur <mumrah@gmail.com>
Librdkafka totally breaks if produce v3 is removed - it starts sending records with record format v0.
These api versions have to be undeprecated - KIP-896 has been updated.
Reviewers: David Arthur <mumrah@gmail.com>
We remove the deprecated overload of StateStore.init() and thus no longer
need to cast. This PR removes all unnecessary casts, and
additionally cleans up all related classes to reduce warnings.
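A minimal sketch of what a store looks like after the removal (the store class below is hypothetical; `StateStoreContext` and the remaining `init()` overload are the real API):
```
import org.apache.kafka.streams.processor.StateStore;
import org.apache.kafka.streams.processor.StateStoreContext;

// Hypothetical store: with the deprecated ProcessorContext overload gone, only the
// StateStoreContext variant of init() remains, so the context can be used directly
// without casting.
public class MyCustomStore implements StateStore {

    private StateStoreContext context;

    @Override
    public void init(final StateStoreContext context, final StateStore root) {
        this.context = context;
        // register the restore callback against the root store, no cast needed
        context.register(root, (key, value) -> { });
    }

    @Override
    public String name() {
        return "my-custom-store";
    }

    @Override
    public void flush() { }

    @Override
    public void close() { }

    @Override
    public boolean persistent() {
        return false;
    }

    @Override
    public boolean isOpen() {
        return true;
    }
}
```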
Reviewers: Bill Bejeck <bill@confluent.io>
This is just a mechanical change to make the prepareTransitionTo method use named parameters instead of positional parameters.
Reviewers: Justine Olshan <jolshan@confluent.io>, Ritika Reddy <rreddy@confluent.io>
This PR adds `entityType: topicName` to SubscribedTopicNames in ShareGroupHeartbeatRequest.json, as per the recent update to KIP-932.
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>
This patch is the first one in a series to improve how coordinator records are managed. It focuses on making coordinator records first-class citizens in the generator.
* Introduce `coordinator-key` and `coordinator-value` in the schema;
* Introduce `apiKey` for those. This is done to avoid relying on the version to determine the type.
* It also allows the generator to enforce some rules: the key cannot use flexible versions, the key must have a single version `0`, there must be a key and a value for a given api key, etc.
* It generates an enum with all the coordinator record types. This is pretty handy in the code.
The patch also updates the group coordinators to use those.
Reviewers: Jeff Kim <jeff.kim@confluent.io>, Andrew Schofield <aschofield@confluent.io>
Reviewers: John Huang <pegasashjy@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>, Bill Bejeck <bill@confluent.io>, Lucas Brutschy <lbrutschy@confluent.io>
A lot of these tests assumed that the commit/abort happened immediately. Spoiler alert -- it does not.
For some I ensure that the first send of the next transaction is successful before grabbing the epoch. I also loosened some checks since we don't need to guarantee the exact epoch.
I went back and forth with completely deleting testBumpTransactionalEpochWithTV2Enabled since we don't have client side epoch bumps with V2 (which is what the test was originally testing), but I opted to keep it to just confirm the epoch on each transaction -- even in the timeout scenario.
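As a rough illustration of the pattern, assuming a transactional `KafkaProducer` and a hypothetical `test-topic`: the first send of the next transaction is completed before the epoch is read, since the previous commit/abort may still be in flight.
```
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public final class TransactionEpochCheckSketch {

    // Sketch only: ensures the first send of the new transaction has completed
    // before the test reads the producer epoch, instead of assuming the previous
    // commit/abort took effect immediately.
    static void sendFirstRecordOfNextTransaction(final KafkaProducer<byte[], byte[]> producer)
            throws ExecutionException, InterruptedException {
        producer.beginTransaction();
        // Blocking on the future guarantees the broker has seen the new transaction
        // (and any epoch bump) before the test grabs the epoch for its assertions.
        producer.send(new ProducerRecord<>("test-topic", new byte[0], new byte[0])).get();
    }
}
```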
Reviewers: Calvin Liu <caliu@confluent.io>, Artem Livshits <alivshits@confluent.io>, Jeff Kim <jeff.kim@confluent.io>, David Mao <dmao@confluent.io>
When inter.broker.listener is explicitly set, validate that it is not in the set of controller.listener.names.
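A minimal sketch of the intended check, under the assumption that the listener names have already been parsed out of the config (the surrounding class and method are hypothetical; `ConfigException` is the real class):
```
import java.util.Set;
import org.apache.kafka.common.config.ConfigException;

final class ListenerValidationSketch {

    // Sketch only: if inter.broker.listener.name is explicitly set, it must not
    // also appear in controller.listener.names, since controller listeners are
    // not usable for inter-broker traffic.
    static void validateInterBrokerListener(final String interBrokerListenerName,
                                            final Set<String> controllerListenerNames) {
        if (interBrokerListenerName != null
                && controllerListenerNames.contains(interBrokerListenerName)) {
            throw new ConfigException("inter.broker.listener.name " + interBrokerListenerName
                + " must not be in controller.listener.names " + controllerListenerNames);
        }
    }
}
```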
Reviewers: Colin P. McCabe <cmccabe@apache.org>, David Arthur <mumrah@gmail.com>
While looking at the message formatters in https://github.com/apache/kafka/pull/18261, I noticed a few incorrect test cases.
* We should not log anything when the record type is unknown because the formatters have clear goals.
* We should not parse the value when the key is null or when the key cannot be parsed. While it works in the tests, in practice this is wrong because we cannot assume the type of the value if the type of the key is not defined. The key drives the type of the entire record (see the sketch below).
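As a rough sketch of the intended behaviour (the formatter class and the parsing helpers below are hypothetical stand-ins for the real formatters), the value is only interpreted once the key has been parsed, and unknown record types produce no output:
```
import java.io.PrintStream;
import java.util.Optional;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.MessageFormatter;

// Hypothetical formatter sketch: the key drives the type of the whole record,
// so nothing is written when the key is missing or cannot be parsed.
public class RecordTypeAwareFormatterSketch implements MessageFormatter {

    @Override
    public void writeTo(final ConsumerRecord<byte[], byte[]> record, final PrintStream output) {
        if (record.key() == null) {
            return; // no key, so the record type (and thus the value schema) is unknown
        }
        final Optional<String> parsedKey = tryParseKey(record.key());
        if (parsedKey.isEmpty()) {
            return; // unknown record type: stay silent rather than guessing the value type
        }
        output.println(parsedKey.get() + "::" + tryParseValue(record.value()));
    }

    // Hypothetical helpers standing in for the real key/value deserialization.
    private Optional<String> tryParseKey(final byte[] key) {
        return Optional.of(new String(key));
    }

    private String tryParseValue(final byte[] value) {
        return value == null ? "<null>" : new String(value);
    }
}
```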
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
KIP-1071 specifies records that the group coordinator needs to store
in the consumer offset topic to persist the state of a Streams
group. These records are specified in JSON files from which the actual
classes for the records are generated.
This commit adds the records needed by the group coordinator to store
the state of a Streams group.
Reviewer: Lucas Brutschy <lbrutschy@confluent.io>
The tests are flaky because the tests end before the verified calls
are executed. This happens because the verified calls are made on the
state updater thread, while the verifications run on a different
thread.
This commit fixes the flaky tests by ensuring that the calls were
performed by the state updater, either by shutting down the state
updater or by waiting for the condition.
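As a rough illustration of the "waiting" option (the `StateManager` mock and its `restore()` method are hypothetical, and Mockito's timeout-based verification stands in here for whatever waiting mechanism the tests actually use):
```
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.timeout;
import static org.mockito.Mockito.verify;

// Sketch only: the background thread stands in for the state updater thread that
// eventually performs the verified call.
class StateUpdaterVerificationSketch {

    interface StateManager {
        void restore();
    }

    public static void main(final String[] args) {
        final StateManager stateManager = mock(StateManager.class);

        // Simulates the state updater thread performing the call some time later.
        new Thread(() -> {
            try {
                Thread.sleep(100);
            } catch (final InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            stateManager.restore();
        }).start();

        // Verifying immediately would race against the background thread; timeout()
        // keeps polling until the invocation is observed, so the test no longer ends
        // before the state updater has made the call.
        verify(stateManager, timeout(30_000)).restore();
    }
}
```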
Reviewers: Lucas Brutschy <lbrutschy@confluent.io>, Matthias J. Sax <matthias@confluent.io>
StreamsResetter should log the deprecation warning only if the deprecated
flag is used.
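A minimal sketch of the intended guard; the flag name below is a hypothetical placeholder, the point being that the warning is only emitted when the deprecated flag is actually present on the command line:
```
import java.util.Set;

final class DeprecationWarningSketch {

    // Sketch only: "--some-deprecated-flag" stands in for whatever StreamsResetter
    // option is deprecated; the warning is tied to the flag being present rather
    // than being printed unconditionally.
    static void maybeWarnAboutDeprecatedFlag(final Set<String> parsedFlags) {
        if (parsedFlags.contains("--some-deprecated-flag")) {
            System.err.println("WARN: --some-deprecated-flag is deprecated and will be removed in a future release.");
        }
    }

    public static void main(final String[] args) {
        maybeWarnAboutDeprecatedFlag(Set.of("--some-deprecated-flag", "--input-topics")); // warns
        maybeWarnAboutDeprecatedFlag(Set.of("--input-topics"));                           // stays silent
    }
}
```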
Reviewers: Bruno Cadonna <bruno@confluent.io>, Chia-Ping Tsai <chia7712@gmail.com>
We want to bump the epoch if we are upgrading to TV2. Given that we already have code in place for this, I thought we could piggyback on the completing transaction epoch bump logic. For just initializing producers, I moved the check to the end of InitTransaction. Note, we have to do this check after we initialize the producer ID to ensure we have updated ApiVersions correctly.
Reviewers: Calvin Liu <caliu@confluent.io>, Artem Livshits <alivshits@confluent.io>, Jeff Kim <jeff.kim@confluent.io>
Adds a new RPC StreamsGroupDescribe that returns, given the group ID, all metadata related to the streams group, such as
- The topology metadata of the group.
- The topology epoch of the group.
- The latest member metadata that each member provided through the StreamsGroupHeartbeat API.
- The current target assignment generated by the assignor.
This just adds the JSON as defined in KIP-1071, together with some plumbing.
Reviewers: Bill Bejeck <bbejeck@gmail.com>
For PRs that have been reviewed (by anyone, not just a committer), remove the "triage" label. This job runs once per night.
Reviewers: Justine Olshan <jolshan@confluent.io>, Chia-Ping Tsai <chia7712@gmail.com>
The StreamsGroupHeartbeat API is the new core API used by Streams applications to form a group. The API allows members to initialize a topology and to advertise their state and their owned tasks. The group coordinator uses it to assign/revoke tasks to/from members. This API is also used as a liveness check.
This change adds the JSON definition of the RPC, as defined in KIP-1071.
Reviewers: Bruno Cadonna <cadonna@apache.org>
Migrates KTableSuppressProcessorSupplier to use the ProcessorSupplier#stores() method
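A rough sketch of the pattern (the supplier and store names are hypothetical; `ProcessorSupplier#stores()` is the real API): the supplier returns its store builders from `stores()`, so Kafka Streams creates and connects the store automatically instead of it being added to the topology separately.
```
import java.util.Set;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorSupplier;
import org.apache.kafka.streams.processor.api.Record;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;

// Hypothetical supplier sketch: implementing stores() lets Kafka Streams create and
// connect the state store for every processor returned by get().
public class CountingProcessorSupplier implements ProcessorSupplier<String, String, String, Long> {

    static final String STORE_NAME = "count-store";

    @Override
    public Set<StoreBuilder<?>> stores() {
        final StoreBuilder<?> countStore = Stores.keyValueStoreBuilder(
            Stores.inMemoryKeyValueStore(STORE_NAME), Serdes.String(), Serdes.Long());
        return Set.of(countStore);
    }

    @Override
    public Processor<String, String, String, Long> get() {
        return new Processor<String, String, String, Long>() {
            @Override
            public void process(final Record<String, String> record) {
                // the logic that reads and updates the connected "count-store" would live here
            }
        };
    }
}
```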
Reviewers: Guozhang Wang <guozhang.wang.us@gmail.com>, Anna Sophie Blee-Goldman <ableegoldman@apache.org>