Compare commits


323 Commits
main ... 1.5

Author SHA1 Message Date
Xu Han@AutoMQ 6636f8791a
feat(release): release 1.5.5 (#2761)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-08-15 20:39:43 +08:00
Xu Han@AutoMQ 5ae27285d8
fix(failover): fix failover get wrong node config (#2760)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-08-15 18:42:28 +08:00
woshigaopp f8781a9be4
fix(e2e): cherry pick to fix e2e (#2756)
remove unnecessary imports to fix e2e.
2025-08-14 19:26:17 +08:00
Xu Han@AutoMQ c1e4cb7b96
feat(release): release 1.5.4 (#2755)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-08-14 15:20:43 +08:00
Xu Han@AutoMQ d50499eea6
fix(s3stream): fix the network out over-consumed (#2752) (#2753)
Consider the following scenario:
1. A Fetch request contains partitions P1 and P2. The data of P1 is in LogCache, while the data of P2 is not.
2. First, a fast read will be attempted. At this time, P1 will return data and consume Network Out, and P2 will return a FastReadException.
3. Due to the FastReadException, the entire Fetch attempts a slow read. At this time, both P1 and P2 return data and consume Network Out.
4. As a result, the Network Out already consumed in step 2 is charged a second time.

Solution: Move the S3Stream network-out accounting to ElasticReplicaManager, so network-out traffic is not over-consumed when a Fetch mixes tail-read and catch-up-read partitions.
2025-08-13 10:52:40 +08:00
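A minimal sketch of the idea in the entry above, assuming hypothetical names (FetchCoordinator, NetworkOutLimiter, FastReadException, readPartition); the actual ElasticReplicaManager/S3Stream code differs.

```java
// Sketch only: charge network-out once per Fetch at the replica-manager level,
// instead of inside each stream read, so a fast-read attempt that is retried
// as a slow read does not consume the quota twice.
import java.util.LinkedHashMap;
import java.util.Map;

class FetchCoordinator {
    private final NetworkOutLimiter networkOutLimiter;

    FetchCoordinator(NetworkOutLimiter limiter) {
        this.networkOutLimiter = limiter;
    }

    Map<String, byte[]> fetch(Iterable<String> partitions) {
        Map<String, byte[]> results;
        try {
            results = readAll(partitions, true);   // fast read: only data already in LogCache
        } catch (FastReadException e) {
            results = readAll(partitions, false);  // slow read: all partitions again
        }
        // Single accounting point: charge only the bytes actually returned to the client.
        long bytes = results.values().stream().mapToLong(b -> b.length).sum();
        networkOutLimiter.consume(bytes);
        return results;
    }

    private Map<String, byte[]> readAll(Iterable<String> partitions, boolean fastRead) {
        Map<String, byte[]> out = new LinkedHashMap<>();
        for (String p : partitions) {
            out.put(p, readPartition(p, fastRead)); // may throw FastReadException in fast mode
        }
        return out;
    }

    // Placeholder for the real stream read.
    private byte[] readPartition(String partition, boolean fastRead) { return new byte[0]; }
}

interface NetworkOutLimiter { void consume(long bytes); }

class FastReadException extends RuntimeException { }
```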
Xu Han@AutoMQ 8641ba864c
fix(s3stream): add pending requests await timeout for S3Stream#close (#2751) 2025-08-12 17:05:36 +08:00
Xu Han@AutoMQ 387557b10d
fix: Added support for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (#2748)
fix: Added support for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY  (#2747)

Added support for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to the
list of supported environment variables. Issue #2746

Co-authored-by: Saumya Pandey <saumyapandeycse98@gmail.com>
2025-08-11 11:30:52 +08:00
Gezi-lzq bd9b78ee46
fix(streamReader): implement scheduled cleanup for expired stream readers (#2719) (#2735)
- Add Time dependency to StreamReader and StreamReaders for time-related operations
- Update constructors to accept Time, defaulting to Time.SYSTEM
- Replace System.currentTimeMillis() with time.milliseconds() throughout
- Refactor StreamReadersTest to use MockTime for simulating time passage
- Remove reflection-based time manipulation in tests for cleaner and safer testing

---------

Signed-off-by: Gezi-lzq <lzqtxwd@gmail.com>
2025-07-31 10:45:18 +08:00
woshigaopp 6462c967a5
feat: modify for enterprise e2e (#2733)
* feat: modify for enterprise e2e

* feat: add AutoMQ inject start/end

* feat: undo modify runclass.sh
2025-07-30 23:52:41 +08:00
Gezi-lzq 219dba3e95
perf(log): increase FETCH_BATCH_SIZE to 512KB in StreamSegmentInputStream (#2722) (#2730) 2025-07-30 22:13:11 +08:00
Gezi-lzq 92b6f7ef44
fix(log): Prevent potential offset overflow in ElasticLogSegment (#2720) (#2726)
* fix(log): Prevent potential offset overflow in ElasticLogSegment

This commit addresses an issue where a log segment could accommodate more than Integer.MAX_VALUE records, leading to a potential integer overflow when calculating relative offsets.

The root cause was that the check `offset - baseOffset <= Integer.MAX_VALUE` allowed a relative offset to be exactly `Integer.MAX_VALUE`. Since offsets are 0-based, this allows for `Integer.MAX_VALUE + 1` records, which cannot be represented by a standard Integer.

This fix implements the following changes:
1.  In `ElasticLogSegment`, the offset validation is changed from `<=` to `< Integer.MAX_VALUE` to ensure the relative offset strictly fits within an Integer's bounds.
2.  In `LogCleaner`, a new segment grouping method `groupSegmentsBySizeV2` is introduced for `ElasticUnifiedLog`. This method uses the same stricter offset check to prevent incorrectly grouping segments that would exceed the offset limit.
3.  The corresponding unit tests in `LogCleanerTest` have been updated to reflect these new boundaries and validate the fix.

Fixes: #2718

* fix(logCleaner): unify segment grouping logic

* fix(logCleaner): extract offset range check for segment grouping to prevent overflow in ElasticLogSegment

* style(logCleaner): fix indentation in segment grouping condition for readability

* style(logCleaner): fix line break in offset range check for readability

* chore: add AutoMQ inject

* style(logCleaner): remove unnecessary blank line after segment grouping

* fix(stream): validate record batch count to prevent negative values in append
2025-07-30 12:42:02 +08:00
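A hedged sketch of the stricter relative-offset bound described in the entry above; class and method names are illustrative, not the actual ElasticLogSegment code.

```java
// Sketch: a segment whose relative offset would reach Integer.MAX_VALUE can no
// longer accept the record, because offsets are 0-based and MAX_VALUE + 1 records
// cannot be addressed with a 32-bit int.
final class SegmentOffsetCheck {
    private SegmentOffsetCheck() { }

    // Old check: offset - baseOffset <= Integer.MAX_VALUE  (allows one record too many)
    // New check: offset - baseOffset <  Integer.MAX_VALUE
    static boolean canAppend(long baseOffset, long offset) {
        long relative = offset - baseOffset;
        return relative >= 0 && relative < Integer.MAX_VALUE;
    }
}
```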
Gezi-lzq a885e5ae8c
fix(logCleaner): optimize write buffer management and clear buffer before use (#2704) (#2711) 2025-07-29 18:32:53 +08:00
woshigaopp 93087a07f9
fix: resolve Base64 decoding error in certificate parsing (#2615) (#2… (#2707)
fix: resolve Base64 decoding error in certificate parsing (#2615) (#2693)

- Fix IllegalArgumentException: Illegal base64 character 20 in S3StreamKafkaMetricsManager
- Replace single newline removal with comprehensive whitespace cleanup using replaceAll("\s", "")
- Add graceful error handling for both Base64 and certificate parsing failures
- Add comprehensive unit tests covering various whitespace scenarios and edge cases
- Improve logging with specific error messages for failed certificate parsing

Fixes #2615

(cherry picked from commit 75bdea05e5)

Co-authored-by: Vivek Chavan <111511821+vivekchavan14@users.noreply.github.com>
2025-07-28 17:25:24 +08:00
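A rough sketch of the whitespace cleanup and graceful error handling described above, assuming a plain Base64-encoded X.509 payload; it is not the actual S3StreamKafkaMetricsManager code.

```java
// Sketch: strip all whitespace (not just newlines) before Base64-decoding a
// certificate, and fail gracefully instead of propagating IllegalArgumentException
// ("Illegal base64 character 20" was caused by embedded spaces).
import java.io.ByteArrayInputStream;
import java.security.cert.CertificateException;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import java.util.Base64;
import java.util.Optional;

final class CertDecoder {
    static Optional<X509Certificate> decode(String base64Cert) {
        try {
            String cleaned = base64Cert.replaceAll("\\s", "");
            byte[] der = Base64.getDecoder().decode(cleaned);
            CertificateFactory factory = CertificateFactory.getInstance("X.509");
            return Optional.of(
                (X509Certificate) factory.generateCertificate(new ByteArrayInputStream(der)));
        } catch (IllegalArgumentException | CertificateException e) {
            // Log and skip the bad certificate rather than failing the whole metrics manager.
            return Optional.empty();
        }
    }
}
```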
Gezi-lzq ca0e9bf40f
feat(log): enhance reading logic to handle offset gaps and add unit tests (#2699) (#2701) 2025-07-28 14:09:27 +08:00
Xu Han@AutoMQ 3e0af68ae9
chore(github): change code owners (#2696)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-07-25 10:56:32 +08:00
Xu Han@AutoMQ a9ee4f8e7b
perf(s3stream): avoid S3StreamSetObject objectId long primitive unbox… (#2694)
perf(s3stream): avoid S3StreamSetObject objectId long primitive unboxing (#2687)

Co-authored-by: lifepuzzlefun <wjl_is_213@163.com>
2025-07-25 10:56:04 +08:00
Gezi-lzq 60d671f706
feat(catalog): avoid static global credentials provider (#2684) (#2686)
* feat(catalog): Avoid static global credentials provider

Refactors `CredentialProviderHolder` to prevent "Connection Pool Shutdown"
errors by creating a new provider instance for each catalog.

Previously, a single static `AwsCredentialsProvider` was shared globally.
If this provider was closed, it would affect all subsequent operations.
By creating a new provider on each `create()` call from Iceberg, this
change removes the global singleton and isolates provider instances.

Fixes #2680

* Update core/src/main/java/kafka/automq/table/CredentialProviderHolder.java
* fix(credentials): update DefaultCredentialsProvider instantiation to use builder pattern

---------

Signed-off-by: Gezi-lzq <lzqtxwd@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-18 18:03:29 +08:00
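A minimal sketch of the per-catalog provider creation described above, using the AWS SDK v2 builder; the real CredentialProviderHolder wiring with Iceberg is more involved.

```java
// Sketch: build a fresh DefaultCredentialsProvider on each create() call instead of
// sharing one static instance, so closing one catalog's provider cannot break others.
import software.amazon.awssdk.auth.credentials.AwsCredentialsProvider;
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;

public class CredentialProviderFactory {
    // Called once per catalog; no static shared provider is kept.
    public static AwsCredentialsProvider create() {
        return DefaultCredentialsProvider.builder().build();
    }
}
```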
Yu Ning 5a8af4b89d
feat(block-wal): use version V1 and support sequential return (#2681)
* feat(s3stream/block-wal): complete appends sequentially (#2665)

* feat(s3stream/block-wal): complete appends sequentially



* fix: use a lock to ensure there is at most one callback thread



---------



* feat(s3stream/wal): write a padding record when no space at the end of device (#2673)

* refactor(RecordHeader): remove useless methods



* feat(SlidingWindowService): write a padding block when not enough space



* feat(recovery): handle padding records



* fix: fix incorrect assertion



---------



* feat(s3stream/wal): defaults to using version V1 and forward compatible (#2676)

* feat: introduce the Block WAL V1



* feat: impl `RecoverIteratorV1` which only recovers continuous records



* feat: wal forward compatibility



* test: fix tests



* test: test recover from WAL V1



* test: test upgrade



---------



---------

Signed-off-by: Ning Yu <ningyu@automq.com>
2025-07-17 14:15:29 +08:00
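A simplified sketch of completing WAL append callbacks strictly in order, as the bullets above describe (sequential completion guarded by a lock so at most one thread drains callbacks); the offset model and types are illustrative.

```java
// Sketch: buffer out-of-order append completions and invoke callbacks strictly in
// offset order; the lock ensures at most one thread runs the callback drain loop.
import java.util.PriorityQueue;
import java.util.concurrent.locks.ReentrantLock;

class SequentialCompleter {
    private final PriorityQueue<PendingAppend> pending =
        new PriorityQueue<>((a, b) -> Long.compare(a.offset, b.offset));
    private final ReentrantLock lock = new ReentrantLock();
    private long nextOffsetToComplete = 0;

    void flushed(long offset, Runnable callback) {
        lock.lock();
        try {
            pending.add(new PendingAppend(offset, callback));
            // Complete every append whose offset is contiguous with what has already completed.
            while (!pending.isEmpty() && pending.peek().offset == nextOffsetToComplete) {
                PendingAppend head = pending.poll();
                nextOffsetToComplete = head.offset + 1;
                head.callback.run();
            }
        } finally {
            lock.unlock();
        }
    }

    private static final class PendingAppend {
        final long offset;
        final Runnable callback;
        PendingAppend(long offset, Runnable callback) {
            this.offset = offset;
            this.callback = callback;
        }
    }
}
```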
Gezi-lzq a0500b4562
fix(worker): update CommitResponse to use partition type from writerF… (#2679)
fix(worker): update CommitResponse to use partition type from writerFactory (#2677)

* fix(worker): update CommitResponse to use partition type from writerFactory

* fix(worker): mock partitionSpec in TopicPartitionsWorkerTest for unpartitioned partitions

* fix(worker): reorganize imports in TopicPartitionsWorkerTest for clarity
2025-07-17 11:14:47 +08:00
Xu Han@AutoMQ 38541af171
perf(misc): optimize FairLimiter implementation (#2670) (#2672)
* fix(s3stream): avoid StreamMetadataManager add callback when retry

* perf(misc): optimize FairLimiter implementation

Co-authored-by: lifepuzzlefun <wjl_is_213@163.com>
2025-07-17 10:08:09 +08:00
Gezi-lzq 0cd047c2ce
fix: support list more than 1000 objects by prefix (#2660) (#2666)
This commit fixes an issue where the doList method in AwsObjectStorage.java
did not handle paginated results from the S3 listObjectsV2 API. The
method now recursively fetches all pages of objects, ensuring that it can
retrieve more than the default 1000 objects.
2025-07-14 14:14:08 +08:00
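A hedged sketch of paging through listObjectsV2 with a continuation token, the fix described above; the actual AwsObjectStorage#doList is asynchronous and differs in detail.

```java
// Sketch: follow the continuation token until the listing is complete,
// so prefixes with more than 1000 objects are fully enumerated.
import java.util.ArrayList;
import java.util.List;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.ListObjectsV2Request;
import software.amazon.awssdk.services.s3.model.ListObjectsV2Response;
import software.amazon.awssdk.services.s3.model.S3Object;

final class PrefixLister {
    static List<S3Object> listAll(S3Client s3, String bucket, String prefix) {
        List<S3Object> all = new ArrayList<>();
        String token = null;
        do {
            ListObjectsV2Response resp = s3.listObjectsV2(ListObjectsV2Request.builder()
                .bucket(bucket)
                .prefix(prefix)
                .continuationToken(token) // null on the first page
                .build());
            all.addAll(resp.contents());
            token = resp.nextContinuationToken(); // null once the last page is reached
        } while (token != null);
        return all;
    }
}
```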
Xu Han@AutoMQ caaa41718d
feat(release): 1.5.3 (#2663)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-07-10 11:17:03 +08:00
1sonofqiu 8d624daf56
chore(container): update welcome message and container patch (#2658) (#2664) 2025-07-10 11:16:52 +08:00
Gezi-lzq 1dfa8f75a0
fix(s3stream): avoid StreamMetadataManager add callback when retry (#… (#2661)
fix(s3stream): avoid StreamMetadataManager add callback when retry (#2659)

Co-authored-by: lifepuzzlefun <wjl_is_213@163.com>
2025-07-08 18:10:56 +08:00
Xu Han@AutoMQ f473d7da1a
fix(image): guard streams image access with lock to prevent data loss (#2653) (#2654)
fix(image): guard streams image access with lock to prevent compaction from skipping data

Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-07-04 18:39:06 +08:00
Gezi-lzq 73642827c3
feat(release): release 1.5.3-rc0 (#2651) 2025-07-03 18:37:10 +08:00
Gezi-lzq a67349f845
fix(deps): resolve logging conflicts from Hive Metastore (#2648) (#2649)
feat(dependencies): add jcl-over-slf4j library and exclude conflicting logging implementations
2025-07-03 16:48:35 +08:00
Yu Ning 76195e23b9
chore(tools/perf): log client logs to a separate file (#2645)
Signed-off-by: Ning Yu <ningyu@automq.com>
2025-07-02 14:49:21 +08:00
Xu Han@AutoMQ a9cf9b23ec
feat(release): release 1.5.2 (#2639)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-06-25 19:34:18 +08:00
Gezi-lzq e86e00ae45
fix(docker): update minio policy commands to use 'anonymous' instead of 'policy' (#2640) (#2641)
fix(docker): update minio policy commands to use 'anonymous' instead of 'policy'
2025-06-25 19:09:06 +08:00
Xu Han@AutoMQ 2c42dec469
feat(tabletopic): default include hive catalog dependencies (#2637)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-06-25 16:02:49 +08:00
Gezi-lzq 5e7806178d
fix(metadata): ensure correct GroupCoordinator updates for topic dele… (#2635)
fix(metadata): ensure correct GroupCoordinator updates for topic deletions (#2626)
2025-06-22 10:50:36 +08:00
Shichao Nie fe462777e6
feat(auto_balancer): only use pending append/fetch latency to identif… (#2632)
feat(auto_balancer): only use pending append/fetch latency to identify slow broker

Signed-off-by: Shichao Nie <niesc@automq.com>
2025-06-18 11:47:34 +08:00
Gezi-lzq beb23756d4
fix(docker): update AutoMQ image version to 1.5.1 in docker-compose files (#2630) 2025-06-17 16:00:14 +08:00
Gezi-lzq 92094b533b
fix(docker): update MinIO command from config host to alias set (#2628)
* fix(docker): update MinIO command from config host to alias set

* fix(docker): update MinIO and mc images to specific release versions
2025-06-17 11:57:33 +08:00
Yu Ning b3c1f10813
fix(quota): update broker quota configs on "--broker-defaults" (#2618)
Signed-off-by: Ning Yu <ningyu@automq.com>
2025-06-09 18:04:14 +08:00
woshigaopp 47a4517a4d
fix(e2e): cherry pick fix e2e 1.5 (#2616)
* fix(e2e): fix e2e test performance and log compaction

* fix(e2e): fix e2e test reassign and round_trip

* fix(e2e): fix e2e test GroupModeTransactionsTest

* fix(e2e): fix e2e test reassignTest

* fix(e2e): fix e2e test kafka start failed because not support file wal

* fix(e2e): fix e2e test kafka start failed because not support file wal

* fix(e2e): fix e2e test kafka start failed because not support file wal

* fix(e2e): fix e2e test kafka start failed because wait logic

* fix(e2e): fix e2e test kafka start failed because wait too short

* fix(e2e): format code

* fix(e2e): fix e2e test kafka start failed because not support file wal

* fix(e2e): format code

* fix(e2e): format code

* fix(e2e): format code
2025-06-06 18:45:34 +08:00
Yu Ning 1119acc065
feat(tools/perf): log cpu, memory usage and min latency (#2611)
* chore(tools/perf): add an option "--send-throughput"

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat(tools/perf): log cpu and memory usages (#2607)

* feat: introduce `CpuMonitor`

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat: introduce `MemoryMonitor`

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat(tools/perf): log cpu and memory usages

Signed-off-by: Ning Yu <ningyu@automq.com>

---------

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat(tools/perf): log the min latency (#2608)

Signed-off-by: Ning Yu <ningyu@automq.com>

---------

Signed-off-by: Ning Yu <ningyu@automq.com>
2025-06-03 20:29:37 +08:00
Yu Ning 0fc2c7f0bd
perf(s3client): increase outbound network limiter pool size (#2603)
Signed-off-by: Ning Yu <ningyu@automq.com>
2025-05-29 18:01:17 +08:00
Yu Ning c59127cd2a
fix(s3client): limit the network usage of object storages (#2600)
Signed-off-by: Ning Yu <ningyu@automq.com>
2025-05-29 16:54:41 +08:00
Yu Ning 6995c3f64d
perf(s3stream): recover and upload data in segments (#2596)
perf(s3stream): recover and upload data in segments (#2593)

* feat: pause recovery once the cache is full



* feat(s3stream): recover and upload data in segments



* test: test segmented recovery



---------

Signed-off-by: Ning Yu <ningyu@automq.com>
2025-05-28 11:05:54 +08:00
Xu Han@AutoMQ 2b407e0786
fix(s3stream): fix node epoch outdated (#2595)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-05-23 18:22:11 +08:00
Yu Ning 92aef1d11e
refactor(s3stream): preparation for segmented recovery (#2592)
Signed-off-by: Ning Yu <ningyu@automq.com>
2025-05-23 16:15:40 +08:00
Yu Ning 8f58a419c9
refactor(s3stream): in preparation for segmented recovery (#2589) (#2590)
* refactor: remove useless param passing



* revert: "release Bytebuf allocated by WAL earlier to prevent memory fragmentation (#2341)"

This reverts commit 7b4240aa31.

* refactor: extract `filterOutInvalidStreams`



* refactor: extract `releaseRecords`



* refactor: rename "expectXXX" to "expectedXXX"



* refactor: extract methods



---------

Signed-off-by: Ning Yu <ningyu@automq.com>
2025-05-23 11:39:54 +08:00
1sonofqiu 39bc9dae1f
chore(chart): update demo-values.yaml to correct AWS environment variable names (#2588) 2025-05-22 14:14:46 +08:00
Xu Han@AutoMQ f6bf3e64f8
fix(stream): obey aws auth chain (#2585)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-05-22 11:44:55 +08:00
Xu Han@AutoMQ 8ec58aa0c7
chore(all): make code more extendable (#2582)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-05-22 10:46:40 +08:00
Yu Ning 86071df7b0
perf(tool/perf): reduce the record header size (#2580)
* perf(tool/perf): reduce the record header size

Signed-off-by: Ning Yu <ningyu@automq.com>

* style: fix lint

Signed-off-by: Ning Yu <ningyu@automq.com>

---------

Signed-off-by: Ning Yu <ningyu@automq.com>
2025-05-20 10:20:49 +08:00
Xu Han@AutoMQ dbef35334d
feat(release): release 1.5.0 (#2577)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-05-19 11:59:05 +08:00
Xu Han@AutoMQ 52871652c0
feat(docker): cherry pick #2468 #2469 (#2576)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-05-19 11:16:54 +08:00
1sonofqiu 4afee951c4
feat(container): cp_2561_2570_2562_2574 (#2575)
* feat(container): automq kafka container features and patch file for u… (#2561)

feat(container): automq kafka container features and patch file for upgrade

* fix(dockerfile): update Dockerfile and scripts for AutoMQ packaging im… (#2570)

fix(container): update Dockerfile and scripts for AutoMQ packaging improvements

* feat(docker): Docker release for bitnami chart (#2563)

* feat(docker): add GitHub Actions workflow for bitnami chart Docker image release

* feat(docker): add GitHub Actions workflow for bitnami chart Docker image release

* fix(docker): update image tag format for automq-bitnami in release workflow and temporarily remove the latest tag until the AutoMQ Docker Compose is refactored into Bitnami Docker Compose.

* feat(helm): add demo-values.yaml and update README for AutoMQ deployment (#2562)

* feat(helm): add demo-values.yaml and update README for AutoMQ deployment

* fix(demo/readme): fix demo-values.yaml and change readme description

* fix(README): update Helm chart references to use 'kafka'

* feat(values): update demo-values.yaml and README for AutoMQ deployment

* fix(demo): image tag

* fix(readme): bitnami helm chart version

* fix(readme): bitnami helm chart version

* fix(docker): update Dockerfile for AutoMQ while github action packagi… (#2574)

fix(docker): update Dockerfile for AutoMQ while github action packaging installations and permissions
2025-05-19 10:52:56 +08:00
1sonofqiu f57de0ed0f
feat: cherry-pick 2560 (#2571)
feat(bitnami): init and cp bitnami kafka container 3.9.0 (#2560)

* feat(bitnami): init and cp bitnami kafka container 3.9.0

* refactor(bitnami): rename Bitnami container files for consistency
2025-05-16 19:53:21 +08:00
Xu Han@AutoMQ 813c6ec54c
chore(log): log readiness check result to stdout (#2568)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-05-16 16:06:40 +08:00
Gezi-lzq 5f2808d0b0
feat(docker): add Spark Iceberg Docker setup and demonstration notebook (#2567)
* feat(docker): add Spark Iceberg Docker setup and demonstration notebook (#2553)

* feat(docker): add Spark Iceberg Docker setup and demonstration notebook

* docs(readme): add Quick Start guide for AutoMQ Table Topic with Docker Compose

* fix(workflow): update Docker image tag for Spark Iceberg build (#2555)

* fix(workflow): remove conditional image build for main branch

* fix(workflow): specify image tag for Spark Iceberg Docker build

* fix(workflow): update Docker image tag for Spark Iceberg build

* fix(docker): update automq image tag to 1.5.0-rc0 in docker-compose.yml
2025-05-16 15:19:52 +08:00
Xu Han@AutoMQ eea5f0a94d
feat(config): remove client logs from stdout (#2565)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-05-16 15:13:10 +08:00
Xu Han@AutoMQ e89412ec14
fix(perf): unify the time in different nodes (#2554)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-05-15 18:56:19 +08:00
Yu Ning 59b3a21899
refactor(s3stream/object-wal): complete appends sequentially (#2549)
* refactor(s3stream/object-wal): complete appends sequentially (#2426)

* chore: add todos

Signed-off-by: Ning Yu <ningyu@automq.com>

* refactor(s3stream/object-wal): sequentially succeed

Signed-off-by: Ning Yu <ningyu@automq.com>

* refactor(s3stream/object-wal): drop discontinuous objects during recovery

Signed-off-by: Ning Yu <ningyu@automq.com>

* test: introduce `MockObjectStorage`

Signed-off-by: Ning Yu <ningyu@automq.com>

* test: test sequentially succeed

Signed-off-by: Ning Yu <ningyu@automq.com>

* refactor: record endOffset in the object path

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat: different version of wal object header

Signed-off-by: Ning Yu <ningyu@automq.com>

* refactor: adapt to the new object header format

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat: recover from the trim offset

Signed-off-by: Ning Yu <ningyu@automq.com>

* test: test recover continuous objects from trim offset

Signed-off-by: Ning Yu <ningyu@automq.com>

* test: test marshal and unmarshal wal object header

Signed-off-by: Ning Yu <ningyu@automq.com>

* test: fix tests

Signed-off-by: Ning Yu <ningyu@automq.com>

* test: test recover from discontinuous objects

Signed-off-by: Ning Yu <ningyu@automq.com>

* test: test recover from v0 and v1 objects

Signed-off-by: Ning Yu <ningyu@automq.com>

* style: fix lint

Signed-off-by: Ning Yu <ningyu@automq.com>

---------

Signed-off-by: Ning Yu <ningyu@automq.com>

* test: fix tests

Signed-off-by: Ning Yu <ningyu@automq.com>

---------

Signed-off-by: Ning Yu <ningyu@automq.com>
2025-05-15 11:24:46 +08:00
Gezi-lzq 814ff0e2e6
fix(github-release): correct IS_LATEST variable usage in upload condi… (#2538)
fix(github-release): correct IS_LATEST variable usage in upload condition (#2537)
2025-05-14 17:09:46 +08:00
Yu Ning 8e1992d29a
fix(s3stream/wal): fix incorrect offset return value during recovery (#2543)
Signed-off-by: Ning Yu <ningyu@automq.com>
2025-05-14 17:08:52 +08:00
Shichao Nie fed831524d
fix(s3stream): add delayed deletion for S3 WAL (#2525)
Signed-off-by: Shichao Nie <niesc@automq.com>
2025-05-13 18:48:33 +08:00
Xu Han@AutoMQ e4494409fd
fix(storage): fix upload wal rate missing (#2528)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-05-13 18:45:14 +08:00
Xu Han@AutoMQ 3343c0ed06
feat(zerozone): fast upload when snapshot-read enable (#2534)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-05-13 18:44:19 +08:00
Gezi-lzq 35279a0552
chore(test): increase timeout for write completion in AbstractObjectSto… (#2536)
fix(test): increase timeout for write completion in AbstractObjectStorageTest
2025-05-13 17:49:45 +08:00
Xu Han@AutoMQ 7a7cb7e842
feat(table): validate table config (#2523)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-05-13 16:13:26 +08:00
Shichao Nie a906f602f8
feat(cli): add default s3 wal config (#2512)
Signed-off-by: Shichao Nie <niesc@automq.com>
2025-05-13 11:43:46 +08:00
Xu Han@AutoMQ e820ea4048
fix(test): fix e2e dependencies conflict (#2518) (#2519)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-05-12 16:40:42 +08:00
Xu Han@AutoMQ ddc8e3a6a7
feat(failover): add wal failover support (#2516) (#2517)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-05-09 15:43:20 +08:00
Xu Han@AutoMQ c9a97ebf19
feat(table_topic): add table topic support (#2511) (#2513)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-05-08 14:10:56 +08:00
Shichao Nie 09359077c8
fix(s3stream): fix backward compatibility of default aws credentials (#2510)
* fix(s3stream): fix backward compatibility of default aws credentials

Signed-off-by: Shichao Nie <niesc@automq.com>

* fix(s3stream): fix broken test

Signed-off-by: Shichao Nie <niesc@automq.com>

---------

Signed-off-by: Shichao Nie <niesc@automq.com>
2025-05-07 17:54:50 +08:00
Xu Han@AutoMQ fea145bfd8
feat(zerozone): support zero cross az traffic cost (#2506)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-05-06 15:54:52 +08:00
Gezi-lzq d041e13f9e
fix(storage): ensure proper reference counting for ByteBuf in write o… (#2499)
fix(storage): ensure proper reference counting for ByteBuf in write o… (#2452)

* fix(storage): ensure proper reference counting for ByteBuf in write operations

* feat(storage): implement fast retry mechanism and improve resource management in write operations

* test(storage): add concurrency test for write operations and ensure buffer release

* test(storage): add test for write permit acquisition and blocking behavior

* style(test): format code for consistency in AbstractObjectStorageTest

* feat(storage): add constructor for MemoryObjectStorage with concurrency support

* fix(storage): ensure proper release of ByteBuf resources in write operations

* chore: polish code

* fix(storage): improve error handling and resource management in write operations

* fix(storage): ensure proper release of resources on timeout in AbstractObjectStorage

* test(storage): increase timeout duration for resource cleanup assertions
2025-05-06 10:10:59 +08:00
Xu Han@AutoMQ 961ba10695
fix(snapshot_read): fix snapshot-read cache tryload trigger (#2460)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-04-28 19:12:22 +08:00
Xu Han@AutoMQ 887d5053e2
feat(snapshot_read): snapshot-read cache (#2453)
feat(snapshot_read): snapshot read cache

Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-04-28 14:25:55 +08:00
Xu Han@AutoMQ ddfadbea0d
fix(gradle): fix kafka-client conflict #2445 (#2446)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-04-23 18:11:06 +08:00
woshigaopp 8a152dd74c
feat(metrics): cherry pick add cert metrics 1.4 (#2439)
* feat: add cert metrics

* feat: check cert null

* feat: fix format

* feat: adjust cert metrics position

* feat: remove cert prefix
2025-04-22 15:14:33 +08:00
Xu Han@AutoMQ a67e45e1e3
feat(snapshot_read): support preferred node (#2436) (#2437)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-04-21 14:46:34 +08:00
daniel-y dc5b56b9ef
chore(license): use the apache license for the next major version (#2434)
Signed-off-by: daniel-y <daniel@automq.com>
2025-04-18 17:38:40 +08:00
Shichao Nie 6220570ca2
fix(s3stream): fix potential index leak on stream deletion (#2429)
Signed-off-by: Shichao Nie <niesc@automq.com>
2025-04-18 11:41:34 +08:00
Xu Han@AutoMQ c775509bfd
fix(snapshot_read): prevent append (#2425)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-04-17 16:19:01 +08:00
Xu Han@AutoMQ e9ba7a8c71
feat(s3stream): add trigger wal upload interval (#2347) (#2423)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-04-17 12:00:16 +08:00
Xu Han@AutoMQ 0c98593176
feat(interceptor): extend client id (#2422)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-04-16 17:53:36 +08:00
Xu Han@AutoMQ d33bd4bbfc
feat(circuit): prevent unregister locked node (#2418) (#2419)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-04-16 11:22:47 +08:00
Xu Han@AutoMQ 022f2a7eb1
feat(circuit): add LocalFileObjectStorage storage limit (#2415) (#2417)
* feat(circuit): add LocalFileObjectStorage storage limit



* fix(circuit): address code review comments



---------

Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-04-15 19:52:05 +08:00
Yu Ning 303cd28732
perf(s3stream/wal): add append timeout in block WAL (#2399)
Signed-off-by: Ning Yu <ningyu@automq.com>
2025-04-11 14:17:09 +08:00
Xu Han@AutoMQ 1de161e4e6
feat(circuit): support node circuit breaker (#2409) (#2413)
* feat(circuit): add circuit object storage



* fix(circuit): fix code review comment



---------

Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-04-09 14:22:18 +08:00
Xu Han@AutoMQ 10ba1a633f
feat(multi_read): add get partitions api (#2338) (#2345)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-04-09 10:55:47 +08:00
Xu Han@AutoMQ c04275422f
feat(release): release 1.3.3 (#2412)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-04-08 16:17:01 +08:00
Xu Han@AutoMQ d3e09e1ad2
chore(test): add test timeout (#2411)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-04-08 15:05:13 +08:00
Yu Ning 3a92901eb3
perf(s3stream/objectstorage): unify the throttle criteria (#2396)
perf(s3stream/objectstorage): unify the throttle criteria (#2386)

* perf(s3stream/objectstorage): unify the throttle criteria



* refactor(s3stream/objectstorage): retry on 403 responses



* refactor(objectstorage): use `TimeoutException` instead of `ApiCallAttemptTimeoutException`



* refactor: suppress the cause of `ObjectNotExistException`



* style: fix lint



---------

Signed-off-by: Ning Yu <ningyu@automq.com>
2025-04-02 10:52:27 +08:00
Xu Han@AutoMQ f88c72b158
feat(s3stream): support s3 write timeout (#2356) (#2361)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-03-18 15:58:59 +08:00
Xu Han@AutoMQ 87b63600c2
feat(release): bump version to 1.3.3-rc0 (#2355)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-03-17 19:57:49 +08:00
Shichao Nie dfa35b90b8
feat(linking): use linkId for update group API (#2352)
Signed-off-by: Shichao Nie <niesc@automq.com>
2025-03-14 15:18:00 +08:00
Xu Han@AutoMQ 95a2107030
refactor(automq): rename producerouter to traffic interceptor (#2353)
refactor(automq): rename producerouter to traffic interceptor (#2350)

Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-03-14 15:07:15 +08:00
Shichao Nie 1610f2f4a5
refactor(core): refine config names for kafka linking (#2348) (#2349)
Signed-off-by: Shichao Nie <niesc@automq.com>
2025-03-14 11:27:21 +08:00
Yu Ning 814644530c
refactor(controller): add method `ControllerServer#reconfigurables` (#2346)
refactor(controller): add method `ControllerServer#reconfigurables` (#2344)

Signed-off-by: Ning Yu <ningyu@automq.com>
2025-03-13 11:12:47 +08:00
Yu Ning 94107fc1a9
fix(s3storage): release Bytebuf allocated by WAL earlier to prevent memory fragmentation (#2342)
Signed-off-by: Ning Yu <ningyu@automq.com>
2025-03-12 11:33:03 +08:00
Yu Ning c0f5a7de29
perf(s3stream): limit write traffic to object storage (#2335)
* chore(objectstorage): log next retry delay

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat: a `TrafficLimiter` to limit the network traffic

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat: a `TrafficMonitor` to monitor the network traffic

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat: record success and failed write requests

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat: queued pending write tasks

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat: run write tasks one by one

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat: use a `TrafficRegulator` to control the rate of write requests

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat: limit the inflight force upload tasks

Signed-off-by: Ning Yu <ningyu@automq.com>

* fix: correct retry count

Signed-off-by: Ning Yu <ningyu@automq.com>

* chore: fix commit object logs

Signed-off-by: Ning Yu <ningyu@automq.com>

* chore: log force uploads

Signed-off-by: Ning Yu <ningyu@automq.com>

* fix: catch exceptions

Signed-off-by: Ning Yu <ningyu@automq.com>

* style: fix lint

Signed-off-by: Ning Yu <ningyu@automq.com>

* fix: fix re-trigger run write task

Signed-off-by: Ning Yu <ningyu@automq.com>

* perf: increase the limit when there is no traffic

Signed-off-by: Ning Yu <ningyu@automq.com>

* refactor: remove useless try-catch

Signed-off-by: Ning Yu <ningyu@automq.com>

* perf: ensure only one inflight force upload task

Signed-off-by: Ning Yu <ningyu@automq.com>

* refactor: move inner classes outside

Signed-off-by: Ning Yu <ningyu@automq.com>

* perf: increase rate limit slower

Signed-off-by: Ning Yu <ningyu@automq.com>

* chore: add a prefix in `AbstractObjectStorage#logger`

Signed-off-by: Ning Yu <ningyu@automq.com>

* chore: reduce useless logs

Signed-off-by: Ning Yu <ningyu@automq.com>

* perf: reduce the sample count on warmup

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat: introduce `TrafficVolumeLimiter` based on IBM `AsyncSemaphore`

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat: limit the inflight write requests

Signed-off-by: Ning Yu <ningyu@automq.com>

* chore: reduce useless logs

Signed-off-by: Ning Yu <ningyu@automq.com>

* fix: fix release size

Signed-off-by: Ning Yu <ningyu@automq.com>

* fix: release permits once the request failed

Signed-off-by: Ning Yu <ningyu@automq.com>

* perf: increase to max after 2 hours

Signed-off-by: Ning Yu <ningyu@automq.com>

* fix: limit the request size

Signed-off-by: Ning Yu <ningyu@automq.com>

* perf: adjust constants

Signed-off-by: Ning Yu <ningyu@automq.com>

---------

Signed-off-by: Ning Yu <ningyu@automq.com>
2025-03-11 20:44:53 +08:00
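A toy sketch of the regulator idea in the bullets above: sample recent success/failure traffic and adjust the allowed write rate, backing off on failures and raising the limit slowly otherwise. All names are illustrative; the real TrafficMonitor/TrafficRegulator are more elaborate.

```java
// Sketch: additive-increase / multiplicative-decrease of an upload rate limit,
// driven by observed success and failure bytes in the last window.
import java.util.concurrent.atomic.AtomicLong;

class SimpleTrafficRegulator {
    private final AtomicLong successBytes = new AtomicLong();
    private final AtomicLong failedBytes = new AtomicLong();
    private final long maxRateBytesPerSec;
    private volatile long rateLimitBytesPerSec;

    SimpleTrafficRegulator(long initialRate, long maxRate) {
        this.rateLimitBytesPerSec = initialRate;
        this.maxRateBytesPerSec = maxRate;
    }

    void recordSuccess(long bytes) { successBytes.addAndGet(bytes); }
    void recordFailure(long bytes) { failedBytes.addAndGet(bytes); }

    long currentLimit() { return rateLimitBytesPerSec; }

    // Called periodically (e.g. once per second) by a scheduler.
    void regulate() {
        long ok = successBytes.getAndSet(0);
        long failed = failedBytes.getAndSet(0);
        if (failed > 0) {
            // Back off sharply when the object storage starts throttling or failing writes.
            rateLimitBytesPerSec = Math.max(rateLimitBytesPerSec / 2, 1);
        } else {
            // Healthy or idle: raise the limit slowly, never beyond the configured max.
            rateLimitBytesPerSec = Math.min(
                rateLimitBytesPerSec + rateLimitBytesPerSec / 10, maxRateBytesPerSec);
        }
    }
}
```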
Shichao Nie 19900d258e
feat(core): add kafka linking interface (#2336)
* feat(core): add kafka linking interface (#2289)

Signed-off-by: Shichao Nie <niesc@automq.com>

* fix(linking): fix npe on default implementation (#2306)

Signed-off-by: Shichao Nie <niesc@automq.com>

* feat(linking): add updateGroup interface (#2328)

* feat(core): rename kafka linking interface

Signed-off-by: Shichao Nie <niesc@automq.com>

* feat(core): adjust shutdown order

Signed-off-by: Shichao Nie <niesc@automq.com>

* feat(linking): add update group interface

Signed-off-by: Shichao Nie <niesc@automq.com>

---------

Signed-off-by: Shichao Nie <niesc@automq.com>

* feat(core): enable producer id modification in MutableRecordBatch (#2329)

Signed-off-by: Shichao Nie <niesc@automq.com>

* feat(core): add connection id param (#2334)

Signed-off-by: Shichao Nie <niesc@automq.com>

---------

Signed-off-by: Shichao Nie <niesc@automq.com>
2025-03-10 11:10:20 +08:00
Xu Han@AutoMQ 00dcea1738
feat(release): bump version to 1.3.2 (#2330)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-02-27 19:38:04 +08:00
Xu Han@AutoMQ 72009f1b60
fix(action): fix release action (#2324)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-02-19 21:02:10 +08:00
Shichao Nie 866088c70f
fix(s3stream): skip waiting for pending part on release (#2316) (#2319)
Signed-off-by: Shichao Nie <niesc@automq.com>
2025-02-19 19:06:03 +08:00
Xu Han@AutoMQ 27aeefe056
fix(action): change upload bucket (#2313)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-02-19 14:49:32 +08:00
Yu Ning 60c2ff747a
fix(tool/perf): add admin properties in `ConsumerService#admin` (#2311)
Signed-off-by: Ning Yu <ningyu@automq.com>
2025-02-19 13:58:52 +08:00
Yu Ning e63bfc8f29
feat(tool/perf): add option "--common-config-file" (#2309)
feat(tool/perf): add option "--common-config-file" (#2308)

* refactor: rename "--common-configs" to "--admin-configs"



* feat: add option "--common-config-file"



* refactor: use `Properties` rather than `Map` to pass configs



* style: fix lint



* refactor: remove "--admin-config"



---------

Signed-off-by: Ning Yu <ningyu@automq.com>
2025-02-19 10:32:40 +08:00
Yu Ning 5cda6e4086
feat(s3stream/failover): check request epoch >= wal epoch (#2305)
feat(s3stream/failover): check request epoch >= wal epoch (#2302)

* chore(controller): more logs in `getOpeningStream`



* feat(s3stream/failover): check request epoch >= wal epoch



* test(s3stream/failover): test node epoch checker



* test: increase timeout



* chore: expose `QuorumController#streamControlManager`



---------

Signed-off-by: Ning Yu <ningyu@automq.com>
2025-02-14 18:39:36 +08:00
Yu Ning d21b058721
feat(s3stream/wal): add constraints in recovery mode (#2304)
feat(s3stream/wal): add constraints in recovery mode (#2301)

* feat(s3stream/wal): add constraints in recovery mode



* refactor: log it rather than throw an exception



---------

Signed-off-by: Ning Yu <ningyu@automq.com>
2025-02-14 18:39:18 +08:00
Yu Ning 2bef7e31ec
fix(s3stream/wal): increase max record size from 64MiB to 128MiB (#2299)
Signed-off-by: Ning Yu <ningyu@automq.com>
2025-02-14 12:04:36 +08:00
Yu Ning c077d513eb
fix(s3stream/wal): increase max record size from 16MiB to 64MiB (#2297)
Signed-off-by: Ning Yu <ningyu@automq.com>
2025-02-14 11:16:59 +08:00
Yu Ning 402c0a68f5
fix(network): adjust number of permits if the request is huge (#2295)
fix(network): adjust number of permits if the request is huge (#2294)

* refactor: use only one semaphore to limit the total size of queued requests



* fix(network): adjust number of permits if the request is huge



---------

Signed-off-by: Ning Yu <ningyu@automq.com>
2025-02-12 20:13:24 +08:00
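A small sketch of the permit adjustment mentioned above: when a single request is larger than the semaphore's total capacity, acquire only the permits that exist so the request is not blocked forever. Names are illustrative.

```java
// Sketch: size-based admission with a single semaphore; a huge request is clamped
// to the total capacity instead of dead-locking on acquire().
import java.util.concurrent.Semaphore;

class RequestSizeLimiter {
    private final Semaphore permits;
    private final int capacityBytes;

    RequestSizeLimiter(int capacityBytes) {
        this.capacityBytes = capacityBytes;
        this.permits = new Semaphore(capacityBytes);
    }

    int acquireFor(int requestBytes) throws InterruptedException {
        // A request larger than the capacity can never obtain that many permits; clamp it.
        int needed = Math.min(requestBytes, capacityBytes);
        permits.acquire(needed);
        return needed; // caller must release exactly this many
    }

    void release(int acquired) {
        permits.release(acquired);
    }
}
```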
Xu Han@AutoMQ 513e5845d1
fix(s3stream): halt the process when node is fenced (#2292)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-02-12 16:31:09 +08:00
Yu Ning 3f1ca24b26
fix(tools/perf): only delete test topics on reset (#2285)
Signed-off-by: Ning Yu <ningyu@automq.com>
2025-02-07 09:54:37 +08:00
Xu Han@AutoMQ 3235c490cf
chore(s3stream): replace eventloop with executor in async semaphore (#2283)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-02-06 10:49:04 +08:00
Xu Han@AutoMQ b4c5341e43
feat(s3stream): fast fail s3 request (#2281)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-02-06 10:02:13 +08:00
Shichao Nie aeb6c7d2f8 fix(core): add missing setting method
Signed-off-by: Shichao Nie <niesc@automq.com>
2025-01-24 10:09:48 +08:00
Yu Ning 9eca0de19d refactor(controller): consider brokers that have recently `CONTROLLED_SHUTDOWN` as `SHUTTING_DOWN` (#2261)
* refactor(controller): consider brokers that have recently `CONTROLLED_SHUTDOWN` as `SHUTTING_DOWN`

Signed-off-by: Ning Yu <ningyu@automq.com>

* test: test `BrokerHeartbeatManager#brokerState`

Signed-off-by: Ning Yu <ningyu@automq.com>

* revert(NodeState): revert `SHUTDOWN` and `SHUTTING_DOWN` to `FENCED` and `CONTROLLED_SHUTDOWN`

Signed-off-by: Ning Yu <ningyu@automq.com>

---------

Signed-off-by: Ning Yu <ningyu@automq.com>
2025-01-24 10:09:32 +08:00
Ning Yu 9f3b55b87a fix(tools/perf): fix option name "--max-consume-record-rate"
Signed-off-by: Ning Yu <ningyu@automq.com>
2025-01-24 10:08:56 +08:00
Ning Yu d38e9301fe fix(s3stream/wal): check `isBlockDev` by prefix in some rare cases
Signed-off-by: Ning Yu <ningyu@automq.com>
2025-01-21 21:02:53 +08:00
Yu Ning fd3e7122b9
feat(tools/perf): support to limit the max poll rate of consumers (#2271)
Signed-off-by: Ning Yu <ningyu@automq.com>
2025-01-13 17:10:31 +08:00
Shichao Nie 3fdef4c070
feat(config): fix exporter uri type (#2267)
Signed-off-by: Shichao Nie <niesc@automq.com>
2025-01-10 15:21:12 +08:00
Shichao Nie 5860a396b7
fix(s3stream): fix compaction block on upload exception (#2264)
Signed-off-by: Shichao Nie <niesc@automq.com>
2025-01-10 10:21:24 +08:00
Xu Han@AutoMQ 273c134683
chore(version): bump version to 1.3.2-rc0 (#2260)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-01-08 10:47:59 +08:00
Xu Han@AutoMQ 62abf707e5
perf(produce): fix compressed-record validation allocating too much memory (#2257)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-01-07 22:17:27 +08:00
Yu Ning cd7d337601
chore(version): bump version to 1.3.1 (#2254)
Signed-off-by: Ning Yu <ningyu@automq.com>
2025-01-03 17:53:10 +08:00
Ning Yu 732b3f8d1c fix(cli/deploy): override "controller.quorum.bootstrap.servers"
Signed-off-by: Ning Yu <ningyu@automq.com>
2025-01-03 14:00:53 +08:00
Xu Han@AutoMQ 8189d89e78 chore(config): change s3.stream.object.compaction.max.size.bytes default value from 1GiB to 10GiB (#2249)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-01-02 17:38:12 +08:00
Robin Han abba7c44b7 feat(table): support partition & upsert config
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2025-01-02 13:28:47 +08:00
Ning Yu bb2409d99d style: fix lint
Signed-off-by: Ning Yu <ningyu@automq.com>
2025-01-02 12:39:55 +08:00
Ning Yu d233658268 feat: use the internal partitioner to choose the partition to send msg to
Signed-off-by: Ning Yu <ningyu@automq.com>
2025-01-02 12:39:55 +08:00
Ning Yu 64b704ef5e feat: add a random string in the topic name
Signed-off-by: Ning Yu <ningyu@automq.com>
2025-01-02 12:39:55 +08:00
Ning Yu 9462e42f25 feat: only delete topics created by the perf tool
Signed-off-by: Ning Yu <ningyu@automq.com>
2025-01-02 12:39:55 +08:00
Shichao Nie 9502ac3696
fix(s3stream): report compaction delay after two compaction period (#2242)
Signed-off-by: Shichao Nie <niesc@automq.com>
2024-12-24 17:14:19 +08:00
Gezi-lzq 50946bf210
feat(config): add table topic conversion type configuration (#2203) (#2240)
* feat(config): add table topic conversion type configurations

* feat(config): rename table topic type to schema type and update related configurations

* feat(config): add table topic schema registry URL configuration and validation

* test(config): add unit tests for ControllerConfigurationValidator table topic schema configuration

* fix(tests): update exception type in ControllerConfigurationValidatorTableTest for schema validation

* feat(config): polish code
2024-12-20 18:44:51 +08:00
Robin Han 6605e37dd3 ~
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2024-12-20 17:17:54 +08:00
Xu Han@AutoMQ 8125b80ed4 feat(tools/perf): support schema message perf (#2226)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2024-12-20 15:09:33 +08:00
Xu Han@AutoMQ 7e8776da63 chore(gradle): update aws version to 2.29.26 (#2210)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2024-12-20 15:09:33 +08:00
Xu Han@AutoMQ 9fe78c2aaa feat(table): auto create table topic control topic (#2186)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2024-12-20 15:09:33 +08:00
Xu Han@AutoMQ a1f9969773 chore(table): set table max.message.bytes to 20MiB (#2182)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2024-12-20 15:09:33 +08:00
Xu Han@AutoMQ 0fca069a12 chore(stream): move asyncsemaphore to util (#2173)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2024-12-20 15:09:33 +08:00
Xu Han@AutoMQ 2d6fe1f805 feat(table): table topic aspect 2024-12-20 15:09:33 +08:00
Ning Yu 9cb537a9b3 style: fix lint
Signed-off-by: Ning Yu <ningyu@automq.com>
2024-12-20 11:56:15 +08:00
Yu Ning 609bcc6672 fix(backpressure): fix metric value of back pressure state (#2209)
fix: fix metric value of back pressure state

Signed-off-by: Ning Yu <ningyu@automq.com>
2024-12-20 11:56:15 +08:00
Yu Ning 9e7fec1afc fix(backpressure): start before registering to dynamic configs (#2208)
Signed-off-by: Ning Yu <ningyu@automq.com>
2024-12-20 11:56:15 +08:00
Yu Ning 985596c44c feat(backpressure): stop and remove all scheduled tasks on shutdown (#2207)
Signed-off-by: Ning Yu <ningyu@automq.com>
2024-12-20 11:56:15 +08:00
Yu Ning 5e11479053 feat(backpressure): support dynamic configs (#2204)
* feat(backpressure): make back pressure manager configurable

Signed-off-by: Ning Yu <ningyu@automq.com>

* test: test the disabled case

Signed-off-by: Ning Yu <ningyu@automq.com>

* refactor: move backpressure from s3stream to kafka.core

Signed-off-by: Ning Yu <ningyu@automq.com>

* refactor: init `BackPressureManager` in `BrokerServer`

Signed-off-by: Ning Yu <ningyu@automq.com>

* refactor: introduce `BackPressureConfig`

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat: make `BackPressureManager` reconfigurable

Signed-off-by: Ning Yu <ningyu@automq.com>

* test: test reconfigurable

Signed-off-by: Ning Yu <ningyu@automq.com>

* refactor: rename config key

Signed-off-by: Ning Yu <ningyu@automq.com>

* refactor: move metric "back_pressure_state" from s3stream to core

Signed-off-by: Ning Yu <ningyu@automq.com>

---------

Signed-off-by: Ning Yu <ningyu@automq.com>
2024-12-20 11:56:15 +08:00
Yu Ning 37ae1766d9 feat(backpressure): add metrics (#2198)
* feat(backpressure): log it on recovery from backpressure

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat: add metric fetch_limiter_waiting_task_num

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat: add metric fetch_limiter_timeout_count

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat: add metric fetch_limiter_time

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat: add metric back_pressure_state

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat: add metric broker_quota_limit

Signed-off-by: Ning Yu <ningyu@automq.com>

* fix(backpressure): run checkers with fixed delay

Signed-off-by: Ning Yu <ningyu@automq.com>

* style: fix lint

Signed-off-by: Ning Yu <ningyu@automq.com>

* perf: drop too large values

Signed-off-by: Ning Yu <ningyu@automq.com>

* refactor: record -1 for other states

Signed-off-by: Ning Yu <ningyu@automq.com>

* test: fix tests

Signed-off-by: Ning Yu <ningyu@automq.com>

---------

Signed-off-by: Ning Yu <ningyu@automq.com>
2024-12-20 11:56:15 +08:00
Yu Ning 8e004eea2d fix(quota): check whether the client is in the white list before fetch (#2181)
Signed-off-by: Ning Yu <ningyu@automq.com>
2024-12-20 11:56:15 +08:00
Yu Ning d9245691b4 fix(quota): limit the max throttle time (#2180)
Signed-off-by: Ning Yu <ningyu@automq.com>
2024-12-20 11:56:15 +08:00
Yu Ning b660d48fe1 feat(quota): exclude internal client IDs from broker quota (#2179)
* feat(quota): exclude internal client IDs from broker quota

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat(autobalancer): mark producers and consumers internal clients

Signed-off-by: Ning Yu <ningyu@automq.com>

---------

Signed-off-by: Ning Yu <ningyu@automq.com>
2024-12-20 11:56:15 +08:00
Yu Ning df78978543 feat(quota): support to get current quota metric value... (#2170)
* fix: fix logs

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat(quota): support to get current quota metric value

Signed-off-by: Ning Yu <ningyu@automq.com>

* refactor(backpressure): remove `Regulator#minimize`

Signed-off-by: Ning Yu <ningyu@automq.com>

* perf(quota): increase the max of broker quota throttle time

Signed-off-by: Ning Yu <ningyu@automq.com>

* perf(backpressure): decrease cooldown time

Signed-off-by: Ning Yu <ningyu@automq.com>

* perf(quota): increase the max of broker quota throttle time

Signed-off-by: Ning Yu <ningyu@automq.com>

* docs: update comments

Signed-off-by: Ning Yu <ningyu@automq.com>

---------

Signed-off-by: Ning Yu <ningyu@automq.com>
2024-12-20 11:56:15 +08:00
Yu Ning 42099b0b3f chore(backpressure): log it on back pressure (#2164)
Signed-off-by: Ning Yu <ningyu@automq.com>
2024-12-20 11:56:15 +08:00
Yu Ning e8502b64b0 feat(quota): support to get current quota by type (#2163)
Signed-off-by: Ning Yu <ningyu@automq.com>
2024-12-20 11:56:15 +08:00
Yu Ning 4bfa8af6bd refactor(backpressure): introduce interface `Checker` (#2162)
Signed-off-by: Ning Yu <ningyu@automq.com>
2024-12-20 11:56:15 +08:00
Yu Ning 574c0ea4cc feat(backpressure): back pressure by system load (#2161)
* feat(backpressure): init backpressure module

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat(backpressure): implement `DefaultBackPressureManager`

Signed-off-by: Ning Yu <ningyu@automq.com>

* test(backpressure): test `DefaultBackPressureManager`

Signed-off-by: Ning Yu <ningyu@automq.com>

---------

Signed-off-by: Ning Yu <ningyu@automq.com>
2024-12-20 11:56:15 +08:00
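A rough sketch of a `Checker` interface and a manager that periodically runs each checker and applies the reported load level, in the spirit of the backpressure commits above; the real DefaultBackPressureManager differs.

```java
// Sketch: each Checker reports a load level on a fixed schedule; the manager records
// the latest level (a real implementation would throttle fetches accordingly) and
// removes all scheduled tasks on shutdown.
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

enum LoadLevel { NORMAL, HIGH, CRITICAL }

interface Checker {
    String name();
    LoadLevel check();
    long intervalMs();
}

class BackPressureManager {
    private final List<Checker> checkers;
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();
    private volatile LoadLevel current = LoadLevel.NORMAL;

    BackPressureManager(List<Checker> checkers) {
        this.checkers = checkers;
    }

    void start() {
        for (Checker checker : checkers) {
            scheduler.scheduleWithFixedDelay(
                () -> apply(checker.check()), 0, checker.intervalMs(), TimeUnit.MILLISECONDS);
        }
    }

    void shutdown() {
        scheduler.shutdownNow(); // stop and remove all scheduled tasks on shutdown
    }

    private void apply(LoadLevel level) {
        current = level;
        // A real implementation would tighten or relax broker quotas here.
    }

    LoadLevel currentLevel() { return current; }
}
```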
Yu Ning 5d2f4a4c5d feat(quota): support broker quota for slow fetch (#2160)
* feat(quota): introduce `SLOW_FETCH` broker quota

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat(quota): add slow fetch quota

Signed-off-by: Ning Yu <ningyu@automq.com>

* test(quota): test broker slow fetch quota

Signed-off-by: Ning Yu <ningyu@automq.com>

* test(quota): test zero quota value

Signed-off-by: Ning Yu <ningyu@automq.com>

---------

Signed-off-by: Ning Yu <ningyu@automq.com>
2024-12-20 11:56:15 +08:00
Yu Ning ef0ce42126 feat(quota): support to update broker request rate quota (#2158)
* refactor(quota): refactor `maybeRecordAndGetThrottleTimeMs`

Signed-off-by: Ning Yu <ningyu@automq.com>

* fix(quota): throttle the produce request whatever the acks is

Signed-off-by: Ning Yu <ningyu@automq.com>

* refactor(quota): separate `Request` in `ClientQuotaManager` and `RequestRate` in `BrokerQuotaManager`

Signed-off-by: Ning Yu <ningyu@automq.com>

* style: fix lint

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat(quota): support to update broker request rate quota

Signed-off-by: Ning Yu <ningyu@automq.com>

* test(quota): test update quota

Signed-off-by: Ning Yu <ningyu@automq.com>

---------

Signed-off-by: Ning Yu <ningyu@automq.com>
2024-12-20 11:56:15 +08:00
Xu Han@AutoMQ 786d405caa
feat(version): bump to 1.3.1-rc0 (#2234)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2024-12-20 10:56:34 +08:00
Shichao Nie 48eeb81cec
fix(core): fix potential infinite recursion on reading empty segment (#2229)
Signed-off-by: Shichao Nie <niesc@automq.com>
2024-12-17 11:43:38 +08:00
Shichao Nie d79d08ab4a
feat(telemetry): support gzip compression on uploading metrics & logs… (#2222)
feat(telemetry): support gzip compression on uploading metrics & logs to s3

Signed-off-by: Shichao Nie <niesc@automq.com>
2024-12-13 16:39:41 +08:00
Xu Han@AutoMQ 080bae4d07
feat(tools/perf): support schema message perf (#2227)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2024-12-13 14:31:46 +08:00
Xu Han@AutoMQ 504761b7dd
fix(tools/perf): fix the perf tools await count (#2220)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2024-12-12 10:00:07 +08:00
Xu Han@AutoMQ f6936cadab
feat(table): enhance command utils (#2217)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2024-12-06 14:39:40 +08:00
Xu Han@AutoMQ 7bf19db2c8
fix(docker): fix docker compose quick start (#2212)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2024-12-05 18:48:41 +08:00
Shichao Nie 0bac03ed48
feat(version): bump version to 1.3.0-rc2 (#2211)
Signed-off-by: Shichao Nie <niesc@automq.com>
2024-12-05 15:32:35 +08:00
Shichao Nie 78cb2cb508
fix(core): write next node id into image (#2205)
Signed-off-by: Shichao Nie <niesc@automq.com>
2024-12-03 10:37:03 +08:00
Shichao Nie 1966e042ff
fix(core): fix getting duplicated node id (#2199)
Signed-off-by: Shichao Nie <niesc@automq.com>
2024-12-02 14:49:31 +08:00
Xu Han@AutoMQ 7a12c17e5d
fix(issues2193): retry 2 times to cover most of BlockNotContinuousException (#2195)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2024-11-29 19:57:27 +08:00
Yu Ning a14f07e8b7
fix: use the "adjusted" `maxSize` in `ElasticLogSegment#readAsync` (#2190)
Signed-off-by: Ning Yu <ningyu@automq.com>
2024-11-28 10:54:20 +08:00
Yu Ning cc34841c49
fix: release `PooledMemoryRecords` if it's dropped in the fetch session (#2187)
Signed-off-by: Ning Yu <ningyu@automq.com>
2024-11-28 10:53:21 +08:00
Shichao Nie f6f5412c76
feat(core): reuse unregistered node when requesting for next node id (#2183)
Signed-off-by: Shichao Nie <niesc@automq.com>
2024-11-27 17:59:40 +08:00
Yu Ning f7e8b9abb3
fix(stream): release `FetchResult`s if the subsequent fetch fails (#2174)
fix(stream): release `FetchResult`s if the subsequent fetch fails (#2172)

* fix(stream): release `FetchResult`s if the subsequent fetch fails



* revert: "fix(stream): release `FetchResult`s if the subsequent fetch fails"

This reverts commit 5836a6afa0.

* refactor: add the `FetchResult` into the list in order rather than in reverse order



* fix: release `FetchResult`s if failed to fetch



---------

Signed-off-by: Ning Yu <ningyu@automq.com>
2024-11-25 09:53:06 +08:00
Shichao Nie bf8ebdb098
feat(version): bump version to 1.3.0-rc1 (#2178)
Signed-off-by: Shichao Nie <niesc@automq.com>
2024-11-25 09:46:24 +08:00
Shichao Nie 67347165ca
fix(issue2151): avoid using stale broker IPs for AutoBalancer consume… (#2177)
fix(issue2151): avoid using stale broker IPs for AutoBalancer consumer (#2152)

close #2151

Signed-off-by: Shichao Nie <niesc@automq.com>
2024-11-23 21:07:08 +08:00
Xu Han@AutoMQ a2ab79deae
chore(workflow): add spotless check (#2169)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2024-11-21 18:50:56 +08:00
Yu Ning 82a76e87e0
feat(tools/perf): create topics in batch (#2165)
Signed-off-by: Ning Yu <ningyu@automq.com>
2024-11-20 15:59:45 +08:00
Yu Ning 7d0f2a2746
chore(github): update code owners (#2156)
Signed-off-by: Ning Yu <ningyu@automq.com>
2024-11-13 17:33:00 +08:00
Xu Han@AutoMQ e0c761c110
feat(version): bump version to 1.3.0-rc0 (#2153)
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2024-11-11 21:48:19 +08:00
Xu Han@AutoMQ c091ba740f
Merge pull request #2150 from AutoMQ/merge_3.9
feat(merge): merge apache kafka 3.9.0 cc53a63
2024-11-08 15:43:31 +08:00
Shichao Nie 504b0b1cb2
fix(issue2140): remove override equals and hashCode method for ObjectReader (#2148)
close #2140

Signed-off-by: Shichao Nie <niesc@automq.com>
2024-11-08 15:33:13 +08:00
Robin Han aa856f83d9
Merge branch '3.9.0' into merge_3.9 2024-11-08 15:30:15 +08:00
Shichao Nie 1ada92c329
fix(issue2139): add computeIfAbsent atomic operation to AsyncLRUCache (#2145)
close #2139

Signed-off-by: Shichao Nie <niesc@automq.com>
2024-11-08 15:00:39 +08:00
Shichao Nie 8be9e519c7
fix(issue2139): prevent read object info from closed ObjectReader (#2143)
Signed-off-by: Shichao Nie <niesc@automq.com>
2024-11-08 14:02:22 +08:00
Yu Ning 0a851c3047
feat(tools/perf): run benchmark without consumer (#2134)
Signed-off-by: Ning Yu <ningyu@automq.com>
2024-11-08 11:14:15 +08:00
Yu Ning fb5bce8291
fix(s3stream/storage): correct if condition on `awaitTermination` (#2137)
Signed-off-by: Ning Yu <ningyu@automq.com>
2024-11-08 11:13:26 +08:00
Yu Ning 37e1af586d
refactor(tools/perf): retry sending messages while waiting for topics to become ready (#2132)
Signed-off-by: Ning Yu <ningyu@automq.com>
2024-11-07 15:41:31 +08:00
Yu Ning cab9e191da
perf(log): avoid too many checkpoint at the same time (#2129)
Signed-off-by: Ning Yu <ningyu@automq.com>
2024-11-07 14:10:38 +08:00
Yu Ning c243eaf9ca
perf(tools/perf): assuming all partitions have the same offset at the same time (#2127)
* feat(tools/perf): log progress on resetting offsets

Signed-off-by: Ning Yu <ningyu@automq.com>

* fix: reset timeouts

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat: increase the log interval

Signed-off-by: Ning Yu <ningyu@automq.com>

* perf(tools/perf): assuming all partitions have the same offset at the same time

Signed-off-by: Ning Yu <ningyu@automq.com>

* feat: limit the min of --backlog-duration

Signed-off-by: Ning Yu <ningyu@automq.com>

---------

Signed-off-by: Ning Yu <ningyu@automq.com>
2024-11-07 10:10:28 +08:00
Colin P. McCabe cc53a632ed Bump version to 3.9.0 2024-11-06 13:15:24 -08:00
Yu Ning 7f9a63195c
revert(s3stream/limiter): increase the max tokens of network limiters (#2125)
This reverts commit bc63e6b614.
2024-11-06 16:53:23 +08:00
Shichao Nie c32772f588
fix(e2e): remove unstable autobalancer tests (#2123)
Signed-off-by: Shichao Nie <niesc@automq.com>
2024-11-06 15:54:54 +08:00
Shichao Nie aafef77114
fix(checkstyle): fix checkstyle (#2121)
Signed-off-by: Shichao Nie <niesc@automq.com>
2024-11-06 11:01:11 +08:00
Shichao Nie dec14165fb
fix(metrics): fix metrics name for BrokerTopicPartitionMetrics (#2118)
Signed-off-by: Shichao Nie <niesc@automq.com>
2024-11-05 20:20:43 +08:00
Shichao Nie fcd0d5e529
fix(s3stream): fix available bandwidth metrics (#2120)
Signed-off-by: Shichao Nie <niesc@automq.com>
2024-11-05 20:20:00 +08:00
Shichao Nie 9b4db72a0d
fix(compaction): prevent double release on compaction shutdown (#2116)
Signed-off-by: Shichao Nie <niesc@automq.com>
2024-11-05 11:22:26 +08:00
ShivsundarR 4a562cddcb
Removed Set.of usage (#17683)
Reviewers: Federico Valeri <fedevaleri@gmail.com>, Lianet Magrans <lmagrans@confluent.io>, Chia-Ping Tsai <chia7712@gmail.com>
2024-11-04 20:04:39 +01:00
Xu Han@AutoMQ c92bedb6a9
fix(s3stream): wait force upload complete before return (#2113)
Signed-off-by: Shichao Nie <niesc@automq.com>
Co-authored-by: Shichao Nie <niesc@automq.com>
2024-11-03 22:30:40 +08:00
Shichao Nie c51cc0298a
fix(issue2108): avoid blocking at the end of a compaction iteration when there is un-uploaded data (#2111)
Signed-off-by: Shichao Nie <niesc@automq.com>
2024-11-03 16:14:38 +08:00
Yu Ning e9b2117b38
perf: limit the inflight requests (#2100) (#2106)
* docs: add todos



* perf(network): limit the inflight requests by size



* perf(ReplicaManager): limit the queue size of the `fetchExecutor`s



* perf(KafkaApis): limit the queue size of async request handlers



* refactor(network): make "queued.max.requests.size.bytes" configurable



* style: fix lint



* fix(network): limit the min queued request size per queue



---------

Signed-off-by: Ning Yu <ningyu@automq.com>
2024-11-01 19:30:39 +08:00
Jonah Hooper bcb5d167fd [KAFKA-17870] Fail CreateTopicsRequest if total number of partitions exceeds 10k (#17604)
We fail the entire CreateTopicsRequest if more than 10k total partitions would be
created by a single request. The usual pattern for this API is to try to succeed with
at least some of the topics, but since the 10k limit applies to the request as a whole,
no topic should be created if the request exceeds it.

Reviewers: Colin P. McCabe <cmccabe@apache.org>
2024-10-31 13:55:49 -07:00
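An illustrative sketch of the request-level guard described above: reject the whole CreateTopicsRequest when the summed partition count crosses the limit. This is not the actual controller code; the names are hypothetical.

```java
// Sketch: fail the whole request (rather than individual topics) when the total
// number of partitions requested in one CreateTopicsRequest exceeds the limit.
import java.util.Map;

final class CreateTopicsGuard {
    static final int MAX_PARTITIONS_PER_REQUEST = 10_000;

    // topicName -> requested partition count
    static void validate(Map<String, Integer> requestedTopics) {
        int total = requestedTopics.values().stream().mapToInt(Integer::intValue).sum();
        if (total > MAX_PARTITIONS_PER_REQUEST) {
            throw new IllegalArgumentException(
                "Creating " + total + " partitions in one request exceeds the limit of "
                    + MAX_PARTITIONS_PER_REQUEST + "; no topics were created.");
        }
    }
}
```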
Xu Han@AutoMQ 72dfbc32f5
Merge pull request #2105 from AutoMQ/merge_3_9
feat(merge): merge apache kafka 3.9 398b4c4fa1
2024-10-31 17:26:03 +08:00
Robin Han 0e84ac7de2
fix(merge): fix unit test
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2024-10-31 17:11:23 +08:00
Robin Han 7e759baf40
fix(merge): fix automq version
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2024-10-31 16:04:47 +08:00
Robin Han 2854533f42
fix(merge): fix compile error
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2024-10-31 12:02:17 +08:00
Robin Han fbd0c7ce3e
fix(merge): fix conflict
Signed-off-by: Robin Han <hanxvdovehx@gmail.com>
2024-10-31 11:25:36 +08:00
Colin Patrick McCabe 398b4c4fa1 KAFKA-17868: Do not ignore --feature flag in kafka-storage.sh (#17597)
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>, Justine Olshan <jolshan@confluent.io>
2024-10-25 17:08:51 -07:00
Colin Patrick McCabe c821449fb7 KAFKA-17794: Add some formatting safeguards for KIP-853 (#17504)
KIP-853 adds support for dynamic KRaft quorums. This means that the quorum topology is
no longer statically determined by the controller.quorum.voters configuration. Instead, it
is contained in the storage directories of each controller and broker.

Users of dynamic quorums must format at least one controller storage directory with either
the --initial-controllers or --standalone flags.  If they fail to do this, no quorum can be
established. This PR changes the storage tool to warn about the case where a KIP-853 flag has
not been supplied to format a KIP-853 controller. (Note that broker storage directories
can continue to be formatted without a KIP-853 flag.)

There are cases where we don't want to specify initial voters when formatting a controller. One
example is where we format a single controller with --standalone, and then dynamically add 4
more controllers with no initial topology. In this case, we want the 4 later controllers to grab
the quorum topology from the initial one. To support this case, this PR adds the
--no-initial-controllers flag.

Reviewers: José Armando García Sancio <jsancio@apache.org>, Federico Valeri <fvaleri@redhat.com>
2024-10-21 10:41:26 -07:00
Federico Valeri 7842e25d32 KAFKA-17031: Make RLM thread pool configurations public and fix default handling (#17499)
According to KIP-950, remote.log.manager.thread.pool.size should be marked as deprecated and replaced by two new configurations: remote.log.manager.copier.thread.pool.size and remote.log.manager.expiration.thread.pool.size. Fix default handling so that -1 works as expected.

Reviewers: Luke Chen <showuon@gmail.com>, Gaurav Narula <gaurav_narula2@apple.com>, Satish Duggana <satishd@apache.org>, Colin P. McCabe <cmccabe@apache.org>
2024-10-21 10:39:53 -07:00
Josep Prat de9a7199df KAFKA-17810 upgrade Jetty because of CVE-2024-8184 (#17517)
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
2024-10-21 02:51:37 +08:00
Colin Patrick McCabe abd4bf08ab
KAFKA-17790: Document that control.plane.listener should be removed before ZK migration is finished (#17501)
Reviewers: Luke Chen <showuon@gmail.com>
2024-10-15 14:36:16 -07:00
Colin P. McCabe 796ce2121b KAFKA-17788: During ZK migration, always include control.plane.listener.name in advertisedBrokerListeners
During ZK migration, always include control.plane.listener.name in advertisedBrokerListeners, to be
bug-compatible with earlier Apache Kafka versions that ignored this misconfiguration. (Just as
before, control.plane.listener.name is not supported in KRaft mode itself.)

Reviewers: Luke Chen <showuon@gmail.com>
2024-10-15 14:34:42 -07:00
Ken Huang 51253e2bf4
KAFKA-17520 align the low bound of ducktape version (#17481)
Reviewers: Colin Patrick McCabe <cmccabe@apache.org>, Chia-Ping Tsai <chia7712@gmail.com>
2024-10-15 00:15:59 +08:00
David Arthur 8c3c6c3841
KAFKA-17193: Pin all external GitHub Actions to the specific git hash (#16960) (#17461)
Co-authored-by: Mickael Maison <mimaison@users.noreply.github.com>

Reviewers: Chia-Ping Tsai <chia7712@gmail.com>, Colin P. McCabe <cmccabe@apache.org>
2024-10-10 13:20:49 -07:00
Mickael Maison 44f15cc22c KAFKA-17749: Fix Throttler metrics name
Reviewers: Colin P. McCabe <cmccabe@apache.org>
2024-10-10 09:20:14 -07:00
PoAn Yang 4878174b77 KAFKA-16972 Move BrokerTopicMetrics to org.apache.kafka.storage.log.metrics (#16387)
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
2024-10-10 09:15:21 -07:00
Apoorv Mittal db4c80a455 KAFKA-17731: Removed timed waiting signal for client telemetry close (#17431)
Reviewers: Andrew Schofield <aschofield@confluent.io>, Kirk True <ktrue@confluent.io>, Chia-Ping Tsai <chia7712@gmail.com>, Lianet Magrans <lmagrans@confluent.io>
2024-10-10 08:55:43 -07:00
Colin Patrick McCabe bf95a3239c
KAFKA-17753: Update protobuf and commons-io dependencies (#17436)
Reviewers: Josep Prat <jlprat@apache.org>
2024-10-09 16:34:26 -07:00
Gaurav Narula ab6dafaab6 KAFKA-17751; fix pollTimeout calculation in pollFollowerAsVoter (#17434)
KAFKA-16534 introduced a change to send UpdateVoterRequest every "3 * fetchTimeoutMs" if the voter's configured endpoints are different from the endpoints persisted in the KRaft log. It also introduced a regression where, if the voter nodes do not need an update, updateVoterTimer wasn't reset. This resulted in a busy loop in the KafkaRaftClient#poll method and high CPU usage.

This PR modifies the conditions in pollFollowerAsVoter to reset updateVoterTimer appropriately.

Reviewers: José Armando García Sancio <jsancio@apache.org>
2024-10-09 18:13:10 -04:00
Colin Patrick McCabe 8af063a165
KAFKA-17735: release.py must not use home.apache.org (#17421)
Previously, Apache Kafka was uploading release candidate (RC) artifacts
to users' home directories on home.apache.org. However, since this
resource has been decommissioned, we need to follow the standard
approach of putting release candidate artifacts into the appropriate
subversion directory, at https://dist.apache.org/repos/dist/dev/kafka/.

Reviewers: Justine Olshan <jolshan@confluent.io>
2024-10-08 15:40:27 -07:00
Colin Patrick McCabe 0a70c3a61e
KAFKA-17714 Fix StorageToolTest.scala to compile under Scala 2.12 (#17400)
Reviewers: David Arthur <mumrah@gmail.com>, Justine Olshan <jolshan@confluent.io>, Chia-Ping Tsai <chia7712@gmail.com>
2024-10-08 11:00:18 +08:00
David Arthur 1d54a7373c KAFKA-17146 Include note to remove migration znode (#16770)
When reverting the ZK migration, we must also remove the /migration ZNode in order to allow the migration to be re-attempted in the future.

Reviewers: Colin P. McCabe <cmccabe@apache.org>, Chia-Ping Tsai <chia7712@gmail.com>
2024-10-07 10:34:22 -07:00
José Armando García Sancio 550bf60460 KAFKA-16927; Handle expanding leader endpoints (#17363)
When a replica restarts in the follower state it is possible for the set of leader endpoints to not match the latest set of leader endpoints. Voters will discover the latest set of leader endpoints through the BEGIN_QUORUM_EPOCH request. This means that KRaft needs to allow for the replica to transition from Follower to Follower when only the set of leader endpoints has changed.

Reviewers: Colin P. McCabe <cmccabe@apache.org>, Alyssa Huang <ahuang@confluent.io>
2024-10-04 14:53:17 +00:00
Alyssa Huang 5c95a5da31 MINOR: Fix kafkatest advertised listeners (#17294)
Followup for #17146

Reviewers: Bill Bejeck <bbejeck@apache.org>
2024-10-01 17:21:28 +00:00
Bill Bejeck edd77c1e25 MINOR: Need to split the controller bootstrap servers on ',' in list comprehension (#17183)
Kafka Streams system tests were failing with this error:

Failed to parse host name from entry 3001@d for the configuration controller.quorum.voters.  Each entry should be in the form `{id}@{host}:{port}`.

The cause is that in kafka.py line 876, we create a delimited string from a list comprehension, but the input is a string itself, so each character gets appended vs. the bootstrap server string of host:port. To fix this, this PR adds split(',') to controller_quorum_bootstrap_servers. Note that this only applies when dynamicRaftQuorum=False

Reviewers: Alyssa Huang <ahuang@confluent.io>, Chia-Ping Tsai <chia7712@gmail.com>
2024-10-01 17:07:26 +00:00
David Arthur 2cbc5bd3ca KAFKA-17636 Fix missing SCRAM bootstrap records (#17305)
Fixes a regression introduced by #16669 which inadvertently stopped processing SCRAM arguments from kafka-storage.sh

Reviewers: Colin P. McCabe <cmccabe@apache.org>, Federico Valeri <fedevaleri@gmail.com>
2024-09-28 10:04:29 -04:00
Alyssa Huang 89cb632acd KAFKA-17608, KAFKA-17604, KAFKA-16963; KRaft controller crashes when active controller is removed (#17146)
This change fixes a few issues.

KAFKA-17608; KRaft controller crashes when active controller is removed
When a control batch is committed, the quorum controller currently increases the last stable offset but fails to create a snapshot for that offset. This causes an issue if the quorum controller renounces and needs to revert to that offset (which has no snapshot present). Since the control batches are no-ops for the quorum controller, it does not need to update its offsets for control records. We skip the commit-handling logic for control batches.

KAFKA-17604; Describe quorum output missing added voters endpoints
Describe quorum output will miss endpoints of voters which were added via AddRaftVoter. This is due to a bug in LeaderState's updateVoterAndObserverStates which will pull replica state from observer states map (which does not include endpoints). The fix is to populate endpoints from the lastVoterSet passed into the method.

Reviewers: José Armando García Sancio <jsancio@apache.org>, Colin P. McCabe <cmccabe@apache.org>, Chia-Ping Tsai <chia7712@apache.org>
2024-09-26 18:04:05 +00:00
Alyssa Huang c2c2dd424b KAFKA-16963: Ducktape test for KIP-853 (#17081)
Add a ducktape system test for KIP-853 quorum reconfiguration, including adding and removing voters.

Reviewers: Colin P. McCabe <cmccabe@apache.org>
2024-09-26 18:03:56 +00:00
Colin Patrick McCabe 57b098c397 KAFKA-17584: Fix incorrect synonym handling for dynamic log configurations (#17258)
Several Kafka log configurations have synonyms. For example, log retention can be configured
either by log.retention.ms, or by log.retention.minutes, or by log.retention.hours. There is also
a facility in Kafka to dynamically change broker configurations without restarting the broker. These
dynamically set configurations are stored in the metadata log and override what is in the broker
properties file.

Unfortunately, these two features interacted poorly; there was a bug where the dynamic log
configuration update code ignored synonyms. For example, if you set log.retention.minutes and then
reconfigured something unrelated that triggered the LogConfig update path, the retention value that
you had configured was overwritten.

The reason for this was incorrect handling of synonyms. The code tried to treat the Kafka broker
configuration as a bag of key/value entities rather than extracting the correct retention time (or
other setting with overrides) from the KafkaConfig object.

Reviewers: Luke Chen <showuon@gmail.com>, Jun Rao <junrao@gmail.com>, Kamal Chandraprakash<kamal.chandraprakash@gmail.com>, Christo Lolov <lolovc@amazon.com>, Federico Valeri <fedevaleri@gmail.com>, Rajini Sivaram <rajinisivaram@googlemail.com>, amangandhi94 <>
2024-09-26 14:20:33 +08:00
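
As a hedged illustration of the synonym precedence mentioned in the commit above (log.retention.ms overrides log.retention.minutes, which overrides log.retention.hours), a minimal sketch with illustrative names; the real resolution lives in KafkaConfig/LogConfig.

```java
import java.util.Map;

// Sketch of retention synonym resolution; not Kafka's actual code.
final class RetentionSynonyms {
    static long retentionMs(Map<String, String> props) {
        if (props.containsKey("log.retention.ms"))
            return Long.parseLong(props.get("log.retention.ms"));
        if (props.containsKey("log.retention.minutes"))
            return Long.parseLong(props.get("log.retention.minutes")) * 60_000L;
        // 168 hours (7 days) is the documented default for log.retention.hours
        return Long.parseLong(props.getOrDefault("log.retention.hours", "168")) * 3_600_000L;
    }
}
```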
José Armando García Sancio e36c82d71c MINOR: Replace gt and lt char with html encoding (#17235)
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
2024-09-24 17:07:16 +00:00
Ken Huang 333483a16e MINOR: add a space for kafka.metrics.polling.interval.secs description (#17256)
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
2024-09-24 21:52:16 +08:00
TengYao Chi 7d14cd6b33 KAFKA-17459 Stabilize reassign_partitions_test.py (#17250)
This test expects that each partition can receive the record, so using a non-null key helps distribute the records more randomly.

Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
2024-09-24 17:37:03 +08:00
Jakub Scholz 83091994a6 KAFKA-17543: Improve and clarify the error message about generated broker IDs in migration (#17210)
This PR tries to improve the error message when broker.id is set to -1 and ZK migration is enabled. It is not
needed to disable the broker.id.generation.enable option. It is sufficient to just not use it (by not setting
the broker.id to -1).

Reviewers: Chia-Ping Tsai <chia7712@gmail.com>, Luke Chen <showuon@gmail.com>
2024-09-18 11:47:01 -07:00
José Armando García Sancio c141acb6bf KAFKA-17048; Update docs for KIP-853 (#17076)
Change the configurations under config/kraft to use controller.quorum.bootstrap.servers instead of controller.quorum.voters. Add comments explaining how to use the older static quorum configuration where appropriate.

In docs/ops.html, remove the reference to "tentative timelines for ZooKeeper removal" and "Tiered storage is considered as an early access feature" since they are no longer up-to-date. Add KIP-853 information.

In docs/quickstart.html, move the ZK instructions to be after the KRaft instructions. Update the KRaft instructions to use KIP-853.

In docs/security.html, add an explanation of --bootstrap-controller and document controller.quorum.bootstrap.servers instead of controller.quorum.voters.

Reviewers: Mickael Maison <mickael.maison@gmail.com>, Alyssa Huang <ahuang@confluent.io>, Colin P. McCabe <cmccabe@apache.org>
2024-09-18 11:34:12 -07:00
Colin Patrick McCabe 389a8d8dec
Revert "KAFKA-16803: Change fork, update ShadowJavaPlugin to 8.1.7 (#16295)" (#17218)
This reverts commit 391778b8d7.

Unfortunately that commit re-introduced bug #15127 which prevented the publishing of kafka-clients
artifacts to remote maven. As that bug says:

    The issue triggers only with publishMavenJavaPublicationToMavenRepository due to signing.
    Generating signed asc files error out for shadowed release artifacts as the module name
    (clients) differs from the artifact name (kafka-clients).

    The fix is basically to explicitly define artifact of shadowJar to signing and publish plugin.
    project.shadow.component(mavenJava) previously outputs the name as client-<version>-all.jar
    though the classifier and archivesBaseName are already defined correctly in :clients and
    shadowJar construction.

Reviewers: David Arthur <mumrah@gmail.com>
2024-09-17 12:05:25 -07:00
Colin Patrick McCabe f324ef461f
MINOR: update documentation link to 3.9 (#17216)
Reviewers: David Arthur <mumrah@gmail.com>
2024-09-17 07:36:08 -07:00
Colin Patrick McCabe a1a4389c35 KAFKA-17543: Enforce that broker.id.generation.enable is not used when migrating to KRaft (#17192)
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>, David Arthur <mumrah@gmail.com>
2024-09-13 17:25:26 -07:00
Matthias J. Sax b43439482d KAFKA-17527: Fix NPE for null RecordContext (#17169)
Reviewers: Bruno Cadonna <bruno@confluent.io>
2024-09-13 16:32:54 -07:00
Colin Patrick McCabe 7d3ba8a0eb KAFKA-16468: verify that migrating brokers provide their inter.broker.listener (#17159)
When brokers undergoing ZK migration register with the controller, it should verify that they have
provided a way to contact them via their inter.broker.listener. Otherwise the migration will fail
later on with a more confusing error message.

Reviewers: David Arthur <mumrah@gmail.com>
2024-09-13 09:18:43 -07:00
Bruno Cadonna 4f0675d5e9 KAFKA-17489: Do not handle failed tasks as tasks to assign (#17115)
Failed tasks discovered when removed from the state updater during assignment or revocation are added to the task registry. From there they are retrieved and handled as normal tasks. This leads to a couple of IllegalStateExceptions because it breaks some invariants that ensure that only good tasks are assigned and processed.

This commit solves this bug by distinguish failed from non-failed tasks in the task registry.

Reviewer: Lucas Brutschy <lbrutschy@confluent.io>
2024-09-13 12:16:19 +02:00
David Arthur 4734077f47 KAFKA-17506 KRaftMigrationDriver initialization race (#17147)
There is a race condition between KRaftMigrationDriver running its first poll() and being notified by Raft about a leader change. If onControllerChange is called before RecoverMigrationStateFromZKEvent is run, we will end up getting stuck in the INACTIVE state.

This patch fixes the race by enqueuing a RecoverMigrationStateFromZKEvent from onControllerChange if the driver has not yet initialized. If another RecoverMigrationStateFromZKEvent was already enqueued, the second one to run will just be ignored.

Reviewers: Luke Chen <showuon@gmail.com>
2024-09-11 10:42:16 -04:00
Vikas Singh 5a4d2b44d2 MINOR: Few cleanups
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>
2024-09-11 14:40:10 +05:30
David Arthur 6d3e77533e KAFKA-15793 Fix ZkMigrationIntegrationTest#testMigrateTopicDeletions (#17004)
Reviewers: Igor Soarez <soarez@apple.com>, Ajit Singh <>
2024-09-10 13:06:13 -07:00
xijiu f7fe4b9441 KAFKA-17458 Add 3.8 to transactions_upgrade_test.py, transactions_mixed_versions_test.py, and kraft_upgrade_test.py (#17084)
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
2024-09-11 02:13:01 +08:00
Ken Huang e885daf0e3 KAFKA-17492 skip features with minVersion of 0 instead of replacing 0 with 1 when BrokerRegistrationRequest < 4 (#17128)
The 3.8 controller assumes that unknown features have min version = 0, but KAFKA-17011 replaced min=0 with min=1 when BrokerRegistrationRequest < 4. Hence, to support upgrading from 3.8.0 to 3.9, this PR changes the implementation of ApiVersionsResponse (<4) and BrokerRegistrationRequest (<4) to skip features with a supported minVersion of 0 instead of replacing 0 with 1.

Reviewers: Jun Rao <junrao@gmail.com>, Colin P. McCabe <cmccabe@apache.org>, Chia-Ping Tsai <chia7712@gmail.com>
2024-09-11 01:17:41 +08:00
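
A minimal sketch of the behavior change described in the commit above, using plain maps rather than Kafka's internal feature types; names are illustrative.

```java
import java.util.Map;
import java.util.stream.Collectors;

// Sketch: for old request versions, a feature whose supported minVersion is 0
// is omitted entirely rather than rewritten to minVersion 1.
final class FeatureRangeDowngrade {
    // key = feature name, value = [minVersion, maxVersion]
    static Map<String, short[]> forOldRequestVersion(Map<String, short[]> supported) {
        return supported.entrySet().stream()
            .filter(e -> e.getValue()[0] != 0)  // skip features with minVersion == 0
            .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }
}
```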
TengYao Chi 4b6437e6a5
KAFKA-17497 Add e2e for zk migration with old controller (#17131)
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
2024-09-10 12:14:47 +08:00
David Arthur 034780dce9 KAFKA-15648 Update leader volatile before handleLeaderChange in LocalLogManager (#17118)
Update the leader before calling handleLeaderChange and use the given epoch in LocalLogManager#prepareAppend. This should hopefully fix several flaky QuorumControllerTest tests.

Reviewers: José Armando García Sancio <jsancio@apache.org>
2024-09-06 14:23:13 -04:00
David Arthur d067ed0a2f
KAFKA-17457 Don't allow ZK migration to start without transactions (#17094)
This patch raises the minimum MetadataVersion for migrations to 3.6-IV1 (metadata transactions). This is only enforced on the controller during bootstrap (when the log is empty). If the log is not empty on controller startup, as in the case of a software upgrade, we allow the migration to continue where it left off.

The broker will log an ERROR message if migrations are enabled and the IBP is not at least 3.6-IV1.

Reviewers: Colin P. McCabe <cmccabe@apache.org>
2024-09-06 13:28:53 -04:00
Sebastien Viale 74d55ca639 KAFKA-16448: Add timestamp to error handler context (#17054)
Part of KIP-1033.

Co-authored-by: Dabz <d.gasparina@gmail.com>
Co-authored-by: loicgreffier <loic.greffier@michelin.com>

Reviewers: Matthias J. Sax <matthias@confluent.io>
2024-09-05 08:40:22 -07:00
Luke Chen 3cabf333ce MINOR: Update doc for tiered storage GA (#17088)
Reviewers: Satish Duggana <satishd@apache.org>
2024-09-05 19:19:33 +08:00
TengYao Chi 14e1ebee9e KAFKA-17454 Fix failed transactions_mixed_versions_test.py when running with 3.2 (#17067)
Why does df04887ba5 not fix it?

The fix in df04887ba5 is to NOT collect the log from path `/mnt/kafka/kafka-operational-logs/debug/xxxx.log` if the task is successful. It does not change the log level. See ducktape b2ad7693f2/ducktape/tests/test.py (L181)

Why does df04887ba5 not hit the "sort" error?

df04887ba5 does NOT show the error since the number of features is only "one" (only metadata.version). Hence, the bug is not triggered as it does not need to "sort". Now we have two features - metadata.version and kraft.version - so the sort is executed and then we see the "hello bug".

Why should we change kafka.log_level to INFO?

The template of log4j.properties is controlled by `log_level` (https://github.com/apache/kafka/blob/trunk/tests/kafkatest/services/kafka/templates/log4j.properties#L16), and the bug happens when writing a debug message (e4ca066680/core/src/main/scala/kafka/server/metadata/BrokerMetadataListener.scala (L274)). Hence, raising the log level to INFO avoids triggering the bug.

Reviewers: Justine Olshan <jolshan@confluent.io>, Chia-Ping Tsai <chia7712@gmail.com>
2024-09-04 15:56:11 +08:00
Justine Olshan 4756402f75 MINOR: Reduce log levels for transactions_mixed_versions_test with 3.2 due to bug in that version (#16787)
7496e62434 fixed an error that caused an exception to be thrown on broker startup when debug logs were on. This made it to every version except 3.2. 

The Kraft upgrade tests totally turn off debug logs, but I think we only need to remove them for the broken version.

Note: this bug is also present in 3.1, but there is no logging on startup like in subsequent versions.

Reviewers: Chia-Ping Tsai <chia7712@gmail.com>, David Jacot <david.jacot@gmail.com>
2024-09-04 15:56:04 +08:00
Luke Chen 28f5ff6039 KAFKA-17412: add doc for `unclean.leader.election.enable` in KRaft (#17051)
Reviewers: Colin P. McCabe <cmccabe@apache.org>
2024-09-03 16:12:20 -07:00
PoAn Yang a954ad1c67 KAFKA-17331 Throw unsupported version exception if the server does NOT support EarliestLocalSpec and LatestTieredSpec (#16873)
Add the version check to server side for the specific timestamp:
- the version must be >=8 if timestamp=-4L
- the version must be >=9 if timestamp=-5L

Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
2024-09-01 21:14:27 +08:00
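
A hedged sketch of the server-side version guard described in the commit above; the class and method names are illustrative, and only the sentinel timestamps and version bounds come from the commit message.

```java
import org.apache.kafka.common.errors.UnsupportedVersionException;

// Sketch: reject EarliestLocal/LatestTiered timestamps on ListOffsets versions that predate them.
final class ListOffsetsVersionGuard {
    static void validate(long targetTimestamp, short requestVersion) {
        if (targetTimestamp == -4L && requestVersion < 8)   // EARLIEST_LOCAL_TIMESTAMP
            throw new UnsupportedVersionException("EarliestLocalSpec requires ListOffsets version >= 8");
        if (targetTimestamp == -5L && requestVersion < 9)   // LATEST_TIERED_TIMESTAMP
            throw new UnsupportedVersionException("LatestTieredSpec requires ListOffsets version >= 9");
    }
}
```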
Colin Patrick McCabe c7cc4d0b68 KAFKA-17434: Do not test impossible scenarios in upgrade_test.py (#17024)
Because of KIP-902 (Upgrade Zookeeper version to 3.8.2), it is not possible to upgrade from a Kafka version
earlier than 2.4 to a version later than 2.4. Therefore, we should not test these upgrade scenarios
in upgrade_test.py. They do happen to work sometimes, but only in the trivial case where we don't
create topics or make changes during the upgrade (which would reveal the ZK incompatibility).
Instead, we should test only supported scenarios.

Reviewers: Reviewers: José Armando García Sancio <jsancio@gmail.com>
2024-08-29 12:53:48 -07:00
Krishna Agarwal d9ebb2e79b
MINOR: Add experimental message for the native docker image (#17041)
The docker image for Native Apache Kafka was introduced with KIP-974 and was first released with the 3.8 AK release.
The docker image for Native Apache Kafka is currently intended for local development and testing purposes.

This PR intends to add a logline indicating the same during docker image startup.

Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>
2024-08-29 17:33:11 +05:30
Luke Chen fbaea5ff6a KAFKA-17062: handle dangling "copy_segment_start" state when deleting remote logs (#16959)
The COPY_SEGMENT_STARTED state segments are counted when calculating the remote retention size. This causes unexpected segment deletion due to retention-size breaches. This PR fixes it by
  1. only counting COPY_SEGMENT_FINISHED and DELETE_SEGMENT_STARTED state segments when calculating remote log size.
  2. During segment copy, if we encounter errors, we will delete the segment immediately.
  3. Tests added.

Co-authored-by: Guillaume Mallet <>

Reviewers: Kamal Chandraprakash<kamal.chandraprakash@gmail.com>, Satish Duggana <satishd@apache.org>, Guillaume Mallet <>
2024-08-29 14:10:30 +08:00
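
A minimal sketch of point 1 in the commit above, with illustrative types standing in for Kafka's remote log segment metadata.

```java
import java.util.List;

// Sketch: only COPY_SEGMENT_FINISHED and DELETE_SEGMENT_STARTED segments
// count toward the remote retention size.
final class RemoteRetentionSize {
    enum State { COPY_SEGMENT_STARTED, COPY_SEGMENT_FINISHED, DELETE_SEGMENT_STARTED, DELETE_SEGMENT_FINISHED }
    record Segment(State state, long sizeInBytes) {}

    static long remoteLogSize(List<Segment> segments) {
        return segments.stream()
            .filter(s -> s.state() == State.COPY_SEGMENT_FINISHED || s.state() == State.DELETE_SEGMENT_STARTED)
            .mapToLong(Segment::sizeInBytes)
            .sum();
    }
}
```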
Colin Patrick McCabe 7356328f53 KAFKA-12670: Support configuring unclean leader election in KRaft (#16866)
Previously in KRaft mode, we could request an unclean leader election for a specific topic using
the electLeaders API. This PR adds an additional way to trigger unclean leader election when in
KRaft mode via the static controller configuration and various dynamic configurations.

In order to support all possible configuration methods, we have to do a multi-step configuration
lookup process:

1. check the dynamic topic configuration for the topic.
2. check the dynamic node configuration.
3. check the dynamic cluster configuration.
4. check the controller's static configuration.

Fortunately, we already have the logic to do this multi-step lookup in KafkaConfigSchema.java.
This PR reuses that logic. It also makes setting a configuration schema in
ConfigurationControlManager mandatory. Previously, it was optional for unit tests.

Of course, the dynamic configuration can change over time, or the active controller can change
to a different one with a different configuration. These changes can make unclean leader
elections possible for partitions that they were not previously possible for. In order to address
this, I added a periodic background task which scans leaderless partitions to check if they are
eligible for an unclean leader election.

Finally, this PR adds the UncleanLeaderElectionsPerSec metric.

Co-authored-by: Luke Chen showuon@gmail.com

Reviewers: Igor Soarez <soarez@apple.com>, Luke Chen <showuon@gmail.com>
2024-08-28 14:14:41 -07:00
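
A hedged sketch of the four-step lookup order described in the commit above; the config-source methods are stand-ins for the real logic in KafkaConfigSchema.java, not Kafka's actual API.

```java
import java.util.Optional;

// Sketch of the unclean.leader.election.enable lookup order.
final class UncleanLeaderElectionLookup {
    Optional<Boolean> dynamicTopicConfig(String topic) { return Optional.empty(); }
    Optional<Boolean> dynamicNodeConfig() { return Optional.empty(); }
    Optional<Boolean> dynamicClusterConfig() { return Optional.empty(); }
    boolean staticControllerConfig() { return false; }

    boolean uncleanElectionEnabled(String topic) {
        return dynamicTopicConfig(topic)               // 1. dynamic topic configuration
            .or(this::dynamicNodeConfig)               // 2. dynamic node configuration
            .or(this::dynamicClusterConfig)            // 3. dynamic cluster configuration
            .orElseGet(this::staticControllerConfig);  // 4. controller's static configuration
    }
}
```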
Dmitry Werner a4728f566f KAFKA-15746: KRaft support in ControllerMutationQuotaTest (#16620)
Reviewers: Mickael Maison <mickael.maison@gmail.com>
2024-08-28 14:14:34 -07:00
kevin-wu24 e5f47ba350 KAFKA-15406: Add the ForwardingManager metrics from KIP-938 (#16904)
Implement the remaining ForwardingManager metrics from KIP-938: Add more metrics for measuring KRaft performance:

kafka.server:type=ForwardingManager,name=QueueTimeMs.p99
kafka.server:type=ForwardingManager,name=QueueTimeMs.p999
kafka.server:type=ForwardingManager,name=QueueLength
kafka.server:type=ForwardingManager,name=RemoteTimeMs.p99
kafka.server:type=ForwardingManager,name=RemoteTimeMs.p999

Reviewers: Colin P. McCabe <cmccabe@apache.org>
2024-08-28 11:33:31 -07:00
José Armando García Sancio 145fa49e54 KAFKA-17426; Check node directory id for KRaft (#17017)
Reviewers: Colin P. McCabe <cmccabe@apache.org>
2024-08-28 11:33:09 -07:00
Kirk True a87b501a47 KAFKA-17335 Lack of default for URL encoding configuration for OAuth causes NPE (#16990)
AccessTokenRetrieverFactory uses the value of sasl.oauthbearer.header.urlencode provided by the user, or null if no value was provided for that configuration. When the HttpAccessTokenRetriever is created, the JVM attempts to unbox the value into a boolean, and a NullPointerException is thrown if the value is null.

The fix is to explicitly check the Boolean, and if it's null, use Boolean.FALSE.

Reviewers: bachmanity1 <81428651+bachmanity1@users.noreply.github.com>, Chia-Ping Tsai <chia7712@gmail.com>
2024-08-28 23:12:09 +08:00
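
A minimal sketch of the unboxing failure and the defensive default described in the commit above; variable names are illustrative.

```java
// Sketch of the NPE and its fix.
public class UrlEncodeDefault {
    public static void main(String[] args) {
        Boolean configuredValue = null;  // sasl.oauthbearer.header.urlencode was not set by the user
        // boolean broken = configuredValue;  // auto-unboxing a null Boolean throws NullPointerException
        boolean urlEncode = configuredValue != null ? configuredValue : Boolean.FALSE;  // defensive default
        System.out.println("urlencode header value: " + urlEncode);
    }
}
```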
Arpit Goyal a19792fbd7 KAFKA-17422: Adding copySegmentLatch countdown after expiration task is over (#17012)
The given test took 5 seconds because the logic waited the full 5 seconds for the expiration task to complete. Add the copySegmentLatch countdown after the expiration task is over.

Reviewers: Luke Chen <showuon@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>
2024-08-28 10:45:51 +08:00
Kuan-Po Tseng 6d2b81e07f KAFKA-17360 local log retention ms/bytes "-2" is not treated correctly (#16932)
1) When the local.retention.ms/bytes is set to -2, we didn't replace it with the server-side retention.ms/bytes config, so the -2 local retention won't take effect.
2) When setting retention.ms/bytes to -2, we can notice this log message:

```
Deleting segment LogSegment(baseOffset=10045, size=1037087, lastModifiedTime=1724040653922, largestRecordTimestamp=1724040653835) due to local log retention size -2 breach. Local log size after deletion will be 13435280. (kafka.log.UnifiedLog) [kafka-scheduler-6]
```
This is not helpful for users. We should replace -2 with real retention value when logging.

Reviewers: Luke Chen <showuon@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>
2024-08-25 19:46:43 +08:00
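
A hedged sketch of the substitution described in point 1 above (method names are illustrative): -2 means "inherit the topic-wide retention", so it is replaced with retention.ms/retention.bytes before being applied or logged.

```java
// Sketch of the -2 fallback for local retention; not Kafka's actual code.
final class LocalRetentionFallback {
    static long effectiveLocalRetentionMs(long localRetentionMs, long retentionMs) {
        return localRetentionMs == -2L ? retentionMs : localRetentionMs;
    }

    static long effectiveLocalRetentionBytes(long localRetentionBytes, long retentionBytes) {
        return localRetentionBytes == -2L ? retentionBytes : localRetentionBytes;
    }
}
```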
TengYao Chi db686fb964 KAFKA-17331 Set correct version for EarliestLocalSpec and LatestTieredSpec (#16876)
Add the version check to client side when building ListOffsetRequest for the specific timestamp:
1) the version must be >=8 if timestamp=-4L (EARLIEST_LOCAL_TIMESTAMP)
2) the version must be >=9 if timestamp=-5L (LATEST_TIERED_TIMESTAMP)

Reviewers: PoAn Yang <payang@apache.org>, Chia-Ping Tsai <chia7712@gmail.com>
2024-08-25 17:40:26 +08:00
TengYao Chi dcb4578903 KAFKA-17315 Fix the behavior of delegation tokens that expire immediately upon creation in KRaft mode (#16858)
In kraft mode, expiring a delegation token (`expiryTimePeriodMs` < 0) has the following behavior differences from zk mode.

1. `ExpiryTimestampMs` is set to "expiryTimePeriodMs" [0] rather than "now" [1]
2. it throws an exception directly if the token is already expired [2]. By contrast, zk mode does not. [3]

[0] 49fc14f611/metadata/src/main/java/org/apache/kafka/controller/DelegationTokenControlManager.java (L316)
[1] 49fc14f611/core/src/main/scala/kafka/server/DelegationTokenManagerZk.scala (L292)
[2] 49fc14f611/metadata/src/main/java/org/apache/kafka/controller/DelegationTokenControlManager.java (L305)
[3] 49fc14f611/core/src/main/scala/kafka/server/DelegationTokenManagerZk.scala (L293)

Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
2024-08-25 07:30:34 +08:00
Matthias J. Sax 1f6d5aec82 MINOR: fix HTML for topology.optimization config (#16953)
The HTML rendering broke via https://issues.apache.org/jira/browse/KAFKA-14209 in 3.4 release. The currently shown value is some garbage org.apache.kafka.streams.StreamsConfig$$Lambda$20/0x0000000800c0cf18@b1bc7ed

cf https://kafka.apache.org/documentation/#streamsconfigs_topology.optimization

Verified the fix via running StreamsConfig#main() locally.

Reviewers: Bill Bejeck <bill@confluent.io>, Chia-Ping Tsai <chia7712@gmail.com>
2024-08-22 17:14:13 -07:00
Ken Huang 29bbb6555c KAFKA-17336 Add IT to make sure the production MV does not use unstable version of LIST_OFFSET (#16893)
- Because the server config UNSTABLE_API_VERSIONS_ENABLE_CONFIG is true, we can't test the scenario where ListOffsetsRequest is an unstable version. We want to test this case in this PR.
- get the MV from metadataCache.metadataVersion() instead of config.interBrokerProtocolVersion since MV can be set dynamically.

Reviewers: Jun Rao <junrao@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>
2024-08-23 03:48:45 +08:00
Alyssa Huang b377ea94c5 KAFKA-17305; Check broker registrations for missing features (#16848)
When a broker tries to register with the controller quorum, its registration should be rejected if it doesn't support a feature that is currently enabled. (A feature is enabled if it is set to a non-zero feature level.) This is important for the newly added kraft.version feature flag.

Reviewers: Colin P. McCabe <cmccabe@apache.org>, José Armando García Sancio <jsancio@apache.org>
2024-08-21 11:15:17 -07:00
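
A hedged sketch of the registration check described in the commit above, using plain maps instead of the controller's real feature records; names are illustrative.

```java
import java.util.Map;

// Sketch: a feature is "enabled" when finalized at a non-zero level, and the broker
// must support that level or its registration is rejected.
final class BrokerFeatureCheck {
    // finalizedLevels: feature name -> finalized level; supportedRanges: feature name -> [min, max]
    static boolean registrationAllowed(Map<String, Short> finalizedLevels,
                                       Map<String, short[]> supportedRanges) {
        return finalizedLevels.entrySet().stream()
            .filter(e -> e.getValue() > 0)
            .allMatch(e -> {
                short[] range = supportedRanges.get(e.getKey());
                return range != null && range[0] <= e.getValue() && e.getValue() <= range[1];
            });
    }
}
```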
José Armando García Sancio 283b82d46a KAFKA-17333; ResignedState should not notify of leader change (#16900)
When a voter fails as leader (LeaderState) the quorum-state still states that it is the leader of
the epoch. When the voter starts it never starts as leader and instead starts as resigned
(ResignedState) if it was previously a leader. This causes the KRaft client to immediately notify
the state machine (e.g QuorumController) that it is leader or active. This is incorrect for two
reasons.

One, the controller cannot be notified of leadership until it has reached the LEO. If the
controller is notified before that it will generate and append records that are not based on the
latest state.

Two, it is not practical to notify of local leadership when it is resigned since any write
operation (prepareAppend and schedulePreparedAppend) will fail with NotLeaderException while KRaft
is in the resigned state.

Reviewers: Colin P. McCabe <cmccabe@apache.org>, David Arthur <mumrah@gmail.com>
2024-08-21 09:32:09 -07:00
Sean Quah 321ab71192
KAFKA-17279: Handle retriable errors from offset fetches (#16826) (#16934)
Handle retriable errors from offset fetches in ConsumerCoordinator.

Reviewers: Lianet Magrans <lianetmr@gmail.com>, David Jacot <djacot@confluent.io>
2024-08-21 05:17:31 -07:00
Mason Chen b7a97e7102 KAFKA-17169: Add EndpointsTest (#16659)
Reviewers: Omnia Ibrahim <o.g.h.ibrahim@gmail.com>, Colin P. McCabe <cmccabe@apache.org>
2024-08-20 15:09:27 -07:00
José Armando García Sancio 313af4e83d KAFKA-17332; Controller always flush and can call resign on observers (#16907)
This change includes two improvements.

When the leader removes itself from the voters set clients of RaftClient may call resign. In those cases the leader is not in the voter set and should not throw an exception.

Controllers that are observers must flush the log on every append because the leader may be trying to add them to the voter set. The leader always assumes that voters flush their disk before sending a Fetch request.

Reviewers: David Arthur <mumrah@gmail.com>, Alyssa Huang <ahuang@confluent.io>
2024-08-20 00:45:01 +00:00
José Armando García Sancio ed7cadd4c0 KAFKA-16842; Fix config validation and support unknown voters (#16892)
This change fixes the Kafka configuration validation to take into account the reconfiguration changes to configuration and allows KRaft observers to start with an unknown set of voters.

For the Kafka configuration validation the high-level change is that now the user only needs to specify either the controller.quorum.bootstrap.servers property or the controller.quorum.voters property. The other notable change in the configuration is that controller listeners can now be (and should be) specified in advertise.listeners property.

Because Kafka can now be configured without any voters and just the bootstrap servers. The KRaft client needs to allow for an unknown set of voters during the initial startup. This is done by adding the VoterSet#empty set of voters to the KRaftControlRecordStateMachine.

Lastly the RaftClientTestContext type is updated to support this new configuration for KRaft and a test is added to verify that observers can start and send Fetch requests when the voters are unknown.

Reviewers: David Arthur <mumrah@gmail.com>
2024-08-16 19:55:16 +00:00
TengYao Chi bcf4c73bae KAFKA-17238 Move VoterSet and ReplicaKey from raft.internals to raft (#16775)
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
2024-08-16 00:45:32 +08:00
Josep Prat 3b90bbaf6f MINOR: Fix visibility for classes exposed outside of their scope (#16886)
These 2 classes are package protected but they are part of the public
API of public methods. To have clean APIs we should make this
consistent.

Static class ReplicaState is exposed in RaftUtil#singletonDescribeQuorumResponse method which is public.

RequestSender is implemented by a public class and it's exposed in the public constructor of AddVoterHandler.

Reviewers: José Armando García Sancio <jsancio@apache.org>
2024-08-15 16:11:07 +00:00
Ken Huang ba1995704a KAFKA-17326 The LIST_OFFSET request is removed from the "Api Keys" page (#16870)
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
2024-08-15 19:00:14 +08:00
José Armando García Sancio 4ea3b6181a KAFKA-17304; Make RaftClient API for writing to log explicit (#16862)
RaftClient API is changed to separate the batch accumulation (RaftClient#prepareAppend) from scheduling the append of accumulated batches (RaftClient#schedulePreparedAppend) to the KRaft log. This change is needed to better match the controller's flow of replaying the generated records before replicating them. When the controller replays records it needs to know the offset associated with each record. To compute that offset, the KRaft client needs to be aware of the records and their log position.

The controller uses this new API by generating the cluster metadata records, computing their offsets using RaftClient#prepareAppend, replaying the records in the state machine, and finally allowing KRaft to append the records with RaftClient#schedulePreparedAppend.

To implement this API the BatchAccumulator is changed to also support this access pattern. This is done by adding a drainOffset to the implementation. The batch accumulator is allowed to return any record and batch that is less than the drain offset.

Lastly, this change also removes some functionality that is no longer needed like non-atomic appends and validation of the base offset.

Reviewers: Colin Patrick McCabe <cmccabe@apache.org>, David Arthur <mumrah@gmail.com>
2024-08-14 19:48:47 +00:00
PoAn Yang 41107041f3 KAFKA-17309 Fix flaky testCallFailWithUnsupportedVersionExceptionDoesNotHaveConcurrentModificationException (#16854)
Reviewers: TengYao Chi <kitingiao@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>
2024-08-13 21:06:58 +08:00
Ken Huang 32346c646b KAFKA-17319 change ListOffsetsRequest latestVersionUnstable to false (#16865)
Reviewers: Luke Chen <showuon@gmail.com>, PoAn Yang <payang@apache.org>, TengYao Chi <kitingiao@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>
2024-08-13 20:45:01 +08:00
Luke Chen 500dc27c2b KAFKA-17300: Add document for new tiered storage feature in v3.9.0. (#16836)
- Added document for disabling tiered storage at topic level
- Added notable changes items in v3.9.0 for tiered storage quota

Reviewers: Satish Duggana <satishd@apache.org>, Kamal Chandraprakash<kamal.chandraprakash@gmail.com>, Abhijeet Kumar<abhijeet.cse.kgp@gmail.com>
2024-08-13 17:33:12 +05:30
Nancy df5c31e8df KAFKA-16887: Upgrade the document to add remote copy/fetch quotas metrics values. (#16863)
Reviewers: Abhijeet Kumar<abhijeet.cse.kgp@gmail.com>, Luke Chen <showuon@gmail.com>, Satish Duggana <satishd@apache.org>
2024-08-13 12:31:52 +05:30
Colin Patrick McCabe b53dfebad5 KAFKA-17018: update MetadataVersion for the Kafka release 3.9 (#16841)
- Mark 3.9-IV0 as stable. Metadata version 3.9-IV0 should return Fetch version 17.

- Move ELR to 4.0-IV0. Remove 3.9-IV1 since it's no longer needed.

- Create a new 4.0-IV1 MV for KIP-848.

Reviewers: Jun Rao <junrao@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>, Justine Olshan <jolshan@confluent.io>
2024-08-12 16:31:47 -07:00
Colin Patrick McCabe 161c8c6383 KAFKA-17190: AssignmentsManager gets stuck retrying on deleted topics (#16672)
In MetadataVersion 3.7-IV2 and above, the broker's AssignmentsManager sends an RPC to the
controller informing it about which directory we have chosen to place each new replica on.
Unfortunately, the code does not check to see if the topic still exists in the MetadataImage before
sending the RPC. It will also retry infinitely. Therefore, after a topic is created and deleted in
rapid succession, we can get stuck including the now-defunct replica in our subsequent
AssignReplicasToDirsRequests forever.

In order to prevent this problem, the AssignmentsManager should check if a topic still exists (and
is still present on the broker in question) before sending the RPC. In order to prevent log spam,
we should not log any error messages until several minutes have gone past without success.
Finally, rather than creating a new EventQueue event for each assignment request, we should simply
modify a shared data structure and schedule a deferred event to send the accumulated RPCs. This
will improve efficiency.

Reviewers: Igor Soarez <i@soarez.me>, Ron Dagostino <rndgstn@gmail.com>
2024-08-12 12:24:04 -07:00
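
A minimal sketch of the existence check described in the commit above, with plain collections standing in for Kafka's MetadataImage types.

```java
import java.util.Map;
import java.util.Set;

// Sketch: pending assignments for topics that no longer exist are dropped
// before the AssignReplicasToDirs RPC is built.
final class AssignmentPruning {
    static void pruneDeletedTopics(Map<String, String> pendingDirByTopic, Set<String> existingTopics) {
        pendingDirByTopic.keySet().removeIf(topic -> !existingTopics.contains(topic));
    }
}
```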
Kuan-Po Tseng fccae40564 KAFKA-17310 locking the offline dir can destroy the broker exceptionally (#16856)
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
2024-08-12 16:45:09 +08:00
Alyssa Huang 5c97ab0a34 KAFKA-17067; Fix KRaft transition to CandidateState (#16820)
Only voters should be able to transition to Candidate state. This removes VotedState as one of the EpochStates and moves voted information into UnattachedState.

Reviewers: José Armando García Sancio <jsancio@apache.org>
2024-08-10 11:45:04 +00:00
José Armando García Sancio 682299a7df KAFKA-16534; Implement update voter sending (#16837)
This change implements the KRaft voter sending UpdateVoter request. The
UpdateVoter RPC is used to update a voter's listeners and supported
kraft versions. The UpdateVoter RPC is sent if the replicated voter set
(VotersRecord in the log) doesn't match the local voter's supported
kraft versions and controller listeners.

To not starve the Fetch request, the UpdateVoter request is sent at most
every 3 fetch timeouts. This is required to make sure that replication
is making progress and eventually the voter set in the replicated log
matches the local voter configuration.

This change also modifies the semantic for UpdateVoter. Now the
UpdateVoter response is sent right after the leader has created the new
voter set. This is required so that updating voter can transition from
sending UpdateVoter request to sending Fetch request. If the leader
waits for the VotersRecord control record to commit before sending the
UpdateVoter response, it may never send the UpdateVoter response. This
can happen if the leader needs that voter's Fetch request to commit the
control record.

Reviewers: Colin P. McCabe <cmccabe@apache.org>
2024-08-09 00:05:18 +00:00
Colin Patrick McCabe 75cf36050d KAFKA-16523; kafka-metadata-quorum: support add-controller and remove-controller (#16774)
This PR adds support for add-controller and remove-controller in the kafka-metadata-quorum.sh
command-line tool. It also fixes some minor server-side bugs that blocked the tool from working.

In kafka-metadata-quorum.sh, the implementation of remove-controller is fairly straightforward. It
just takes some command-line flags and uses them to invoke AdminClient. The add-controller
implementation is a bit more complex because we have to look at the new controller's configuration
file. The parsing logic for the advertised.listeners and listeners server configurations that we
need was previously implemented in the :core module. However, the gradle module where
kafka-metadata-quorum.sh lives, :tools, cannot depend on :core. Therefore, I moved listener parsing
into SocketServerConfigs.listenerListToEndPoints. This will be a small step forward in our efforts
to move Kafka configuration out of :core.

I also made some minor changes in kafka-metadata-quorum.sh and Kafka-storage-tool.sh to handle
--help without displaying a backtrace on the screen, and give slightly better error messages on
stderr. Also, in DynamicVoter.toString, we now enclose the host in brackets if it contains a colon
(as IPV6 addresses can).

This PR fixes our handling of clusterId in addRaftVoter and removeRaftVoter, in two ways. Firstly,
it marks clusterId as nullable in the AddRaftVoterRequest.json and RemoveRaftVoterRequest.json
schemas, as it was always intended to be. Secondly, it allows AdminClient to optionally send
clusterId, by using AddRaftVoterOptions and RemoveRaftVoterOptions. We now also remember to
properly set timeoutMs in AddRaftVoterRequest. This PR adds unit tests for
KafkaAdminClient#addRaftVoter and KafkaAdminClient#removeRaftVoter, to make sure they are sending
the right things.

Finally, I fixed some minor server-side bugs that were blocking the handling of these RPCs.
Firstly, ApiKeys.ADD_RAFT_VOTER and ApiKeys.REMOVE_RAFT_VOTER are now marked as forwardable so that
forwarding from the broker to the active controller works correctly. Secondly,
org.apache.kafka.raft.KafkaNetworkChannel has now been updated to enable API_VERSIONS_REQUEST and
API_VERSIONS_RESPONSE.

Co-authored-by: Murali Basani muralidhar.basani@aiven.io
Reviewers: José Armando García Sancio <jsancio@apache.org>, Alyssa Huang <ahuang@confluent.io>
2024-08-09 00:03:13 +00:00
PoAn Yang c4cc6d2ff3 KAFKA-17223 Retrying the call after encountering UnsupportedVersionException will cause ConcurrentModificationException (#16753)
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
2024-08-08 16:09:34 -07:00
Alyssa Huang b048798a09 KAFKA-16521: Have Raft endpoints printed as name://host:port (#16830)
Reviewers: Colin P. McCabe <cmccabe@apache.org>
2024-08-08 09:23:03 -07:00
Dmitry Werner 9230a3899f
KAFKA-17242: Do not log spurious timeout message for MirrorCheckpointTask sync store startup (#16773)
Reviewers: Chris Egerton <chrise@aiven.io>
2024-08-08 10:04:00 -04:00
TengYao Chi 0b57b36c8f
KAFKA-17232: Do not generate task configs in MirrorCheckpointConnector if initial consumer group load times out (#16767)
Reviewers: Hongten <hongtenzone@foxmail.com>, Chris Egerton <chrise@aiven.io>
2024-08-08 09:58:34 -04:00
Luke Chen 7fe3cec4eb KAFKA-17236: Handle local log deletion when remote.log.copy.disabled=true (#16765)
Handle local log deletion when remote.log.copy.disabled=true based on the KIP-950.

When tiered storage is disabled or becomes read-only on a topic, the local retention configuration becomes irrelevant, and all data expiration follows the topic-wide retention configuration exclusively.

- Added a remoteLogEnabledAndRemoteCopyEnabled method to check whether the topic has tiered storage enabled and remote log copy enabled. We should adopt local.retention.ms/bytes when remote.storage.enable=true and remote.log.copy.disable=false.
- Changed to use retention.bytes/retention.ms when remote copy is disabled.
- Added validation to ask users to set local.retention.ms == retention.ms and local.retention.bytes == retention.bytes
- Added tests

Reviewers: Kamal Chandraprakash<kamal.chandraprakash@gmail.com>, Satish Duggana <satishd@apache.org>, Christo Lolov <lolovc@amazon.com>
2024-08-08 19:49:23 +08:00
Ken Huang dd5e7a8291 KAFKA-17276; replicaDirectoryId for Fetch and FetchSnapshot should be ignorable (#16819)
The replicaDirectoryId field for FetchRequest and FetchSnapshotRequest should be ignorable. This allows data objects with the directory id to be serialized to any version of the requests.

Reviewers: José Armando García Sancio <jsancio@apache.org>, Chia-Ping Tsai <chia7712@apache.org>
2024-08-08 00:58:01 +00:00
dujian0068 c736d02b52 KAFKA-16584: Make log processing summary configurable or debug--update upgrade-guide (#16709)
Updates Kafka Streams upgrade-guide for KIP-1049.

Reviewers: Bill Bejeck <bill@confluent.io>, Matthias J. Sax <matthias@confluent.io>
2024-08-06 12:09:56 -07:00
Mickael Maison 4e6508b5e3 KAFKA-17227: Refactor compression code to only load codecs when used (#16782)
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>, Josep Prat <josep.prat@aiven.io>
2024-08-06 11:04:28 +02:00
Kuan-Po Tseng 4537c8af5b KAFKA-17235 system test test_performance_service.py failed (#16789)
related to https://issues.apache.org/jira/browse/KAFKA-17235

The root cause of this issue is a change we introduced in KAFKA-16879, where we modified the PushHttpMetricsReporter constructor to use Time.System [1]. However, Time.System doesn't exist in Kafka versions 0.8.2 and 0.9.

In test_performance_services.py, we have system tests for Kafka versions 0.8.2 and 0.9 [2]. These tests always use the tools JAR from the trunk branch, regardless of the Kafka version being tested [3], while the client JAR aligns with the Kafka version specified in the test suite [4]. This discrepancy is what causes the issue to arise.

To resolve this issue, we have a few options:

1) Add Time.System to Kafka 0.8.2 and 0.9: This isn't practical, as we no longer maintain these versions.
2) Modify the PushHttpMetricsReporter constructor to use new SystemTime() instead of Time.System: This would contradict the intent of KAFKA-16879, which aims to make SystemTime a singleton.
3) Implement Time in PushHttpMetricsReporter and use it to get the current time
4) Remove system tests for Kafka 0.8.2 and 0.9 from test_performance_services.py

Given that we no longer maintain Kafka 0.8.2 and 0.9, and altering the constructor goes against the design goals of KAFKA-16879, option 4 appears to be the most feasible solution. However, I'm not sure whether it's acceptable to remove these old version tests. Maybe someone else has a better solution

"We'll proceed with option 3 since support for versions 0.8 and 0.9 is still required, meaning we can't remove those Kafka versions from the system tests."

Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
2024-08-06 14:52:17 +08:00
José Armando García Sancio 81edb74c5e KAFKA-16533; Update voter handling
Add support for handling the update voter RPC. The update voter RPC is used to automatically update
the voters' supported kraft versions and available endpoints as the operator upgrades and
reconfigures the KRaft controllers.

The update voter RPC is handled as follows:

1. Check that the leader has fenced the previous leader(s) by checking that the HWM is known;
   otherwise, return the REQUEST_TIMED_OUT error.

2. Check that the cluster supports kraft.version 1; otherwise, return the UNSUPPORTED_VERSION error.

3. Check that there are no uncommitted voter changes, otherwise return the REQUEST_TIMED_OUT error.

4. Check that the updated voter still supports the currently finalized kraft.version; otherwise
   return the INVALID_REQUEST error.

5. Check that the updated voter is still listening on the default listener.

6. Append the updated VotersRecord to the log. The KRaft internal listener will read this
   uncommitted record from the log and update the voter in the set of voters.

7. Wait for the VotersRecord to commit using the majority of the voters. Return a REQUEST_TIMED_OUT
   error if it doesn't commit in time.

8. Send the UpdateVoter successful response to the voter.

This change also implements the ability for the leader to update its own entry in the voter
set when it becomes leader for an epoch. This is done by updating the voter set and writing a
control batch as the first batch in a new leader epoch.

Finally, fix a bug in KafkaAdminClient's handling of removeRaftVoterResponse where we tried to cast
the response to the wrong type.

Reviewers: Alyssa Huang <ahuang@confluent.io>, Colin P. McCabe <cmccabe@apache.org>
2024-08-05 13:31:51 -07:00
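
A hedged sketch of the validation order in steps 1-5 above; the context interface and its methods are hypothetical, only the error codes named in the commit message are used, and the error code for step 5 is an assumption since the commit does not state it.

```java
import org.apache.kafka.common.protocol.Errors;

// Sketch of the UpdateVoter validation sequence; not the actual KRaft implementation.
final class UpdateVoterValidation {
    interface Ctx {                                     // hypothetical request context
        boolean highWatermarkKnown();
        boolean kraftVersionAtLeast(int version);
        boolean uncommittedVoterChangePending();
        boolean voterSupportsFinalizedKraftVersion();
        boolean voterListensOnDefaultListener();
    }

    static Errors validate(Ctx ctx) {
        if (!ctx.highWatermarkKnown()) return Errors.REQUEST_TIMED_OUT;               // step 1
        if (!ctx.kraftVersionAtLeast(1)) return Errors.UNSUPPORTED_VERSION;           // step 2
        if (ctx.uncommittedVoterChangePending()) return Errors.REQUEST_TIMED_OUT;     // step 3
        if (!ctx.voterSupportsFinalizedKraftVersion()) return Errors.INVALID_REQUEST; // step 4
        if (!ctx.voterListensOnDefaultListener()) return Errors.INVALID_REQUEST;      // step 5 (assumed code)
        return Errors.NONE;  // steps 6-8: append VotersRecord, wait for commit, respond
    }
}
```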
Colin Patrick McCabe 129e7fb0b8 KAFKA-16518: Implement KIP-853 flags for storage-tool.sh (#16669)
As part of KIP-853, storage-tool.sh now has two new flags: --standalone, and --initial-voters. This PR implements these two flags in storage-tool.sh.

There are currently two valid ways to format a cluster:

The pre-KIP-853 way, where you use a statically configured controller quorum. In this case, neither --standalone nor --initial-voters may be specified, and kraft.version must be set to 0.

The KIP-853 way, where one of --standalone and --initial-voters must be specified with the initial value of the dynamic controller quorum. In this case, kraft.version must be set to 1.

This PR moves the formatting logic out of StorageTool.scala and into Formatter.java. The tool file was never intended to get so huge, or to implement complex logic like generating metadata records. Those things should be done by code in the metadata or raft gradle modules. This is also useful for junit tests, which often need to do formatting. (The 'info' and 'random-uuid' commands remain in StorageTool.scala, for now.)

Reviewers: José Armando García Sancio <jsancio@apache.org>
2024-08-05 13:31:40 -07:00
Josep Prat c7d02127b1
KAFKA-17227: Update zstd-jni lib (#16763)
* KAFKA-17227: Update zstd-jni lib
* Add note in upgrade docs
* Change zstd-jni version in docker native file and add warning in dependencies.gradle file
* Add reference to snappy in upgrade

Reviewers:  Chia-Ping Tsai <chia7712@gmail.com>,  Mickael Maison <mickael.maison@gmail.com>
2024-08-05 09:55:42 +02:00
Kuan-Po Tseng b65644c3e3 KAFKA-16154: Broker returns offset for LATEST_TIERED_TIMESTAMP (#16783)
This PR supports EarliestLocalSpec and LatestTierSpec in GetOffsetShell, and adds integration tests.

Reviewers: Luke Chen <showuon@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>, PoAn Yang <payang@apache.org>
2024-08-05 10:41:56 +08:00
Matthias J. Sax 2ddbfebecb KAFKA-16448: Unify error-callback exception handling (#16745)
Follow up code cleanup for KIP-1033.

This PR unifies the handling of both error cases for exception handlers:
 - handler throws an exception
 - handler returns null

The unification happens for all 5 handler cases:
 - deserialzation
 - production / serialization
 - production / send
 - processing
 - punctuation

Reviewers:  Sebastien Viale <sebastien.viale@michelin.com>, Loic Greffier <loic.greffier@michelin.com>, Bill Bejeck <bill@confluent.io>
2024-08-03 13:08:11 -07:00
Luke Chen b622121c0a KAFKA-16855: remote log disable policy in KRaft (#16653)
Reviewers: Kamal Chandraprakash <kamal.chandraprakash@gmail.com>, Christo Lolov <lolovc@amazon.com>
2024-08-03 20:21:05 +08:00
Luke Chen 38db4c46ff KAFKA-17205: Allow topic config validation in controller level in KRaft mode (#16693)
Reviewers: Kamal Chandraprakash <kamal.chandraprakash@gmail.com>, Christo Lolov <lolovc@amazon.com>
2024-08-03 20:20:19 +08:00
PoAn Yang 66485b04c6 KAFKA-16480: ListOffsets change should have an associated API/IBP version update (#16781)
1. Use oldestAllowedVersion as 9 if using ListOffsetsRequest#EARLIEST_LOCAL_TIMESTAMP or ListOffsetsRequest#LATEST_TIERED_TIMESTAMP.
   2. Add test cases to ListOffsetsRequestTest#testListOffsetsRequestOldestVersion to make sure requireTieredStorageTimestamp return 9 as minVersion.
   3. Add EarliestLocalSpec and LatestTierSpec to OffsetSpec.
   4. Add more cases to KafkaAdminClient#getOffsetFromSpec.
   5. Add testListOffsetsEarliestLocalSpecMinVersion and testListOffsetsLatestTierSpecSpecMinVersion to KafkaAdminClientTest to make sure request builder has oldestAllowedVersion as 9.

Signed-off-by: PoAn Yang <payang@apache.org>

Reviewers: Luke Chen <showuon@gmail.com>
2024-08-03 20:17:58 +08:00
TengYao Chi 4e75c57bbb KAFKA-17245: Revert TopicRecord changes. (#16780)
Revert KAFKA-16257 changes because KIP-950 doesn't need it anymore.

Reviewers: Luke Chen <showuon@gmail.com>
2024-08-03 20:17:25 +08:00
TengYao Chi 6b039ce75b KAFKA-16390: add `group.coordinator.rebalance.protocols=classic,consumer` to broker configs when system tests need the new coordinator (#16715)
Fix an issue that cause system test failing when using AsyncKafkaConsumer.
A configuration option, group.coordinator.rebalance.protocols, was introduced to specify the rebalance protocols used by the group coordinator. By default, the rebalance protocol is set to classic. When the new group coordinator is enabled, the rebalance protocols are set to classic,consumer.

Reviewers: Chia-Ping Tsai <chia7712@gmail.com>, David Jacot <djacot@confluent.io>, Lianet Magrans <lianetmr@gmail.com>, Kirk True <kirk@kirktrue.pro>, Justine Olshan <jolshan@confluent.io>
2024-08-02 16:19:04 -07:00
Sebastien Viale 4afe5f380a KAFKA-16448: Update documentation (#16776)
Updated docs for KIP-1033.

Reviewers: Matthias J. Sax <matthias@confluent.io>
2024-08-02 09:54:51 -07:00
Ken Huang fbb598ce82 KAFKA-16666 Migrate GroupMetadataMessageFormatter to tools module (#16748)
We need to migrate GroupMetadataMessageFormatter from Scala code to Java code, and make the message format a JSON pattern.

Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
2024-08-02 11:54:43 +08:00
Kondrat Bertalan 60e1478fb9
KAFKA-17192 Fix MirrorMaker2 worker config does not pass config.provi… (#16678)
Reviewers: Chris Egerton <chrise@aiven.io>
2024-08-01 16:13:38 -04:00
Alyssa Huang 25f04804cd KAFKA-16521; kafka-metadata-quorum describe command changes for KIP-853 (#16759)
describe --status now includes directory id and endpoint information for voters and observers.
describe --replication now includes directory id.

Reviewers: Colin P. McCabe <cmccabe@apache.org>, José Armando García Sancio <jsancio@apache.org>
2024-08-01 19:30:56 +00:00
Sebastien Viale 578fef2355 KAFKA-16448: Handle processing exceptions in punctuate (#16300)
This PR is part of KIP-1033 which aims to bring a ProcessingExceptionHandler to Kafka Streams in order to deal with exceptions that occur during processing.

This PR actually catches processing exceptions from punctuate.

Co-authored-by: Dabz <d.gasparina@gmail.com>
Co-authored-by: loicgreffier <loic.greffier@michelin.com>

Reviewers: Bruno Cadonna <bruno@confluent.io>, Matthias J. Sax <matthias@confluent.io>
2024-07-31 16:06:39 -07:00
Matthias J. Sax 2c957a6e5c MINOR: simplify code which calles `Punctuator.punctuate()` (#16725)
Reviewers: Bill Bejeck <bill@confluent.io>
2024-07-31 16:06:25 -07:00
Loïc GREFFIER aaed1bdd89 KAFKA-16448: Unify class cast exception handling for both key and value (#16736)
Part of KIP-1033. Minor code cleanup.

Reviewers: Matthias J. Sax <matthias@confluent.io>
2024-07-31 13:23:03 -07:00
Matthias J. Sax ccb04acb56
Revert "KAFKA-16508: Streams custom handler should handle the timeout exceptions (#16450)" (#16738)
This reverts commit 15a4501bde.

We consider this change backward incompatible and will fix forward for 4.0
release via KIP-1065, but need to revert for 3.9 release.

Reviewers: Josep Prat <josep.prat@aiven.io>, Bill Bejeck <bill@confluent.io>
2024-07-31 10:29:02 -07:00
Ken Huang fbdfd0d596 KAFKA-16666 Migrate OffsetMessageFormatter to tools module (#16689)
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
2024-07-31 15:19:28 +08:00
Sebastien Viale c8dc09c265 KAFKA-16448: Handle fatal user exception during processing error (#16675)
This PR is part of KIP-1033 which aims to bring a ProcessingExceptionHandler to Kafka Streams in order to deal with exceptions that occur during processing.

This PR catch the exceptions thrown while handling a processing exception

Co-authored-by: Dabz <d.gasparina@gmail.com>
Co-authored-by: loicgreffier <loic.greffier@michelin.com>

Reviewers: Bruno Cadonna <bruno@confluent.io>, Matthias J. Sax <matthias@confluent.io>
2024-07-30 22:57:31 -07:00
Josep Prat 0370a6464b
MINOR: Add text and link to blog in announcement template email (#16734)
Reviewers: Igor Soarez <soarez@apple.com>
2024-07-30 21:50:31 +02:00
Josep Prat 3d2ea547d8
KAFKA-17214: Add 3.8.0 version to core and client system tests (#16726)
Reviewers: Greg Harris <greg.harris@aiven.io>
2024-07-30 19:42:12 +02:00
Josep Prat b8c54c3f38
KAFKA-17214: Add 3.8.0 version to streams system tests (#16728)
* KAFKA-17214: Add 3.8.0 version to streams system tests

Reviewers: Bill Bejeck <bbejeck@gmail.com>
2024-07-30 19:41:36 +02:00
PaulRMellor 0969789973 KAFKA-15469: Add documentation for configuration providers (#16650)
Reviewers: Mickael Maison <mickael.maison@gmail.com>
2024-07-30 15:35:40 +02:00
Josep Prat bc243ab1e8
MINOR: Add 3.8.0 to system tests (#16714)
Reviewers:  Manikumar Reddy <manikumar.reddy@gmail.com>
2024-07-30 09:20:35 +02:00
Matthias J. Sax b8532070f7 HOTFIX: fix compilation error 2024-07-29 21:08:49 -07:00
Sebastien Viale 10d9f7872d KAFKA-16448: Add ErrorHandlerContext in deserialization exception handler (#16432)
This PR is part of KIP-1033 which aims to bring a ProcessingExceptionHandler to Kafka Streams in order to deal with exceptions that occur during processing.

This PR exposes the new ErrorHandlerContext as a parameter to the deserialization exception handlers and deprecates the previous handle signature.

Co-authored-by: Dabz <d.gasparina@gmail.com>
Co-authored-by: loicgreffier <loic.greffier@michelin.com>

Reviewers: Bruno Cadonna <bruno@confluent.io>, Matthias J. Sax <matthias@confluent.io>
2024-07-29 20:35:25 -07:00
Sebastien Viale a4ea9aec73 KAFKA-16448: Add ErrorHandlerContext in production exception handler (#16433)
This PR is part of KIP-1033 which aims to bring a ProcessingExceptionHandler to Kafka Streams in order to deal with exceptions that occur during processing.

This PR exposes the new ErrorHandlerContext as a parameter to the production exception handler and deprecates the previous handle signature.

Co-authored-by: Dabz <d.gasparina@gmail.com>
Co-authored-by: loicgreffier <loic.greffier@michelin.com>

Reviewers: Bruno Cadonna <bruno@confluent.io>, Matthias J. Sax <matthias@confluent.io>
2024-07-29 20:35:17 -07:00
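The two commits above give the deserialization and production exception handlers an ErrorHandlerContext parameter and deprecate the old handle signatures. A minimal sketch of the new deserialization overload follows, with signatures assumed from KIP-1033; the production handler follows the same pattern, and depending on the release the deprecated ProcessorContext-based overload may still need to be overridden.

```java
// Sketch of the new ErrorHandlerContext-based overload (signatures assumed
// from KIP-1033; they may differ slightly per release).
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
import org.apache.kafka.streams.errors.ErrorHandlerContext;

import java.util.Map;

public class SkipCorruptRecordsHandler implements DeserializationExceptionHandler {
    public DeserializationHandlerResponse handle(ErrorHandlerContext context,
                                                 ConsumerRecord<byte[], byte[]> record,
                                                 Exception exception) {
        // The context now carries topic/partition/offset directly, so the
        // deprecated ProcessorContext parameter is no longer needed.
        System.err.printf("Skipping corrupt record at %s-%d offset %d: %s%n",
                context.topic(), context.partition(), context.offset(), exception.getMessage());
        return DeserializationHandlerResponse.CONTINUE;
    }

    public void configure(Map<String, ?> configs) {
        // nothing to configure in this sketch
    }
}
```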
Colin P. McCabe f26f0b6626 tests/kafkatest/version.py: Add 3.9.0 as DEV_VERSION 2024-07-29 15:58:04 -07:00
1367 changed files with 65065 additions and 16033 deletions

2
.github/CODEOWNERS vendored
View File

@ -13,4 +13,4 @@
# See the License for the specific language governing permissions and
# limitations under the License.
* @superhx @SCNieh @ShadowySpirits @Chillax-0v0
* @superhx @Gezi-lzq @1sonofqiu @woshigaopp

View File

@ -49,7 +49,7 @@ jobs:
- name: Setup Gradle
uses: gradle/gradle-build-action@v2.9.0
- name: Checkstyle
run: ./gradlew --build-cache rat checkstyleMain checkstyleTest
run: ./gradlew --build-cache rat checkstyleMain checkstyleTest spotlessJavaCheck
spotbugs:
name: "Spotbugs"
runs-on: ${{ matrix.os }}

View File

@ -0,0 +1,67 @@
name: Docker Bitnami Release
on:
workflow_dispatch:
push:
tags:
- '[0-9]+.[0-9]+.[0-9]+'
- '[0-9]+.[0-9]+.[0-9]+-rc[0-9]+'
jobs:
docker-release:
name: Docker Image Release
strategy:
matrix:
platform: [ "ubuntu-24.04" ]
jdk: ["17"]
runs-on: ${{ matrix.platform }}
permissions:
contents: write
steps:
- name: Checkout Code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up JDK ${{ matrix.jdk }}
uses: actions/setup-java@v3
with:
java-version: ${{ matrix.jdk }}
distribution: "zulu"
- name: Setup Gradle
uses: gradle/gradle-build-action@v2.12.0
- name: Get project version
id: get_project_version
run: |
project_version=$(./gradlew properties | grep "version:" | awk '{print $2}')
echo "PROJECT_VERSION=${project_version}" >> $GITHUB_OUTPUT
- name: Build TarGz
run: |
./gradlew -Pprefix=automq-${{ github.ref_name }}_ --build-cache --refresh-dependencies clean releaseTarGz
# docker image release
- name: Cp TarGz to Docker Path
run: |
cp ./core/build/distributions/automq-${{ github.ref_name }}_kafka-${{ steps.get_project_version.outputs.PROJECT_VERSION }}.tgz ./container/bitnami
- name: Determine Image Tags
id: image_tags
run: |
echo "tags=${{ secrets.DOCKERHUB_USERNAME }}/automq:${{ github.ref_name }}-bitnami" >> $GITHUB_OUTPUT
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_READ_WRITE_TOKEN }}
- name: Build and push
uses: docker/build-push-action@v5
with:
context: ./container/bitnami
push: true
tags: ${{ steps.image_tags.outputs.tags }}
platforms: linux/amd64,linux/arm64

View File

@ -1,6 +1,7 @@
name: Docker Release
on:
workflow_dispatch:
push:
tags:
- '[0-9]+.[0-9]+.[0-9]+'
@ -12,7 +13,7 @@ jobs:
name: Docker Image Release
strategy:
matrix:
platform: [ "ubuntu-22.04" ]
platform: [ "ubuntu-24.04" ]
jdk: ["17"]
runs-on: ${{ matrix.platform }}
permissions:
@ -69,4 +70,4 @@ jobs:
context: ./docker
push: true
tags: ${{ steps.image_tags.outputs.tags }}
platforms: linux/amd64,linux/arm64
platforms: linux/amd64,linux/arm64

View File

@ -46,7 +46,7 @@ jobs:
run: |
python docker_build_test.py kafka/test -tag=test -type=${{ github.event.inputs.image_type }} -u=${{ github.event.inputs.kafka_url }}
- name: Run CVE scan
uses: aquasecurity/trivy-action@master
uses: aquasecurity/trivy-action@6e7b7d1fd3e4fef0c5fa8cce1229c54b2c9bd0d8 # v0.24.0
with:
image-ref: 'kafka/test:test'
format: 'table'

View File

@ -45,7 +45,7 @@ jobs:
run: |
python docker_official_image_build_test.py kafka/test -tag=test -type=${{ github.event.inputs.image_type }} -v=${{ github.event.inputs.kafka_version }}
- name: Run CVE scan
uses: aquasecurity/trivy-action@master
uses: aquasecurity/trivy-action@6e7b7d1fd3e4fef0c5fa8cce1229c54b2c9bd0d8 # v0.24.0
with:
image-ref: 'kafka/test:test'
format: 'table'

View File

@ -31,11 +31,11 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
uses: docker/setup-qemu-action@49b3bc8e6bdd4a60e6116a5414239cba5943d3cf # v3.2.0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@988b5a0280414f521da01fcc63a27aeeb4b104db # v3.6.1
- name: Login to Docker Hub
uses: docker/login-action@v3
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3.3.0
with:
username: ${{ secrets.DOCKERHUB_USER }}
password: ${{ secrets.DOCKERHUB_TOKEN }}

View File

@ -47,11 +47,11 @@ jobs:
python -m pip install --upgrade pip
pip install -r docker/requirements.txt
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
uses: docker/setup-qemu-action@49b3bc8e6bdd4a60e6116a5414239cba5943d3cf # v3.2.0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@988b5a0280414f521da01fcc63a27aeeb4b104db # v3.6.1
- name: Login to Docker Hub
uses: docker/login-action@v3
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3.3.0
with:
username: ${{ secrets.DOCKERHUB_USER }}
password: ${{ secrets.DOCKERHUB_TOKEN }}

View File

@ -29,7 +29,7 @@ jobs:
supported_image_tag: ['latest', '3.7.0']
steps:
- name: Run CVE scan
uses: aquasecurity/trivy-action@master
uses: aquasecurity/trivy-action@6e7b7d1fd3e4fef0c5fa8cce1229c54b2c9bd0d8 # v0.24.0
if: always()
with:
image-ref: apache/kafka:${{ matrix.supported_image_tag }}

View File

@ -30,32 +30,39 @@ jobs:
uses: gradle/gradle-build-action@v2.12.0
- name: Build TarGz
id: build-targz
run: |
./gradlew -Pprefix=automq-${{ github.ref_name }}_ --build-cache --refresh-dependencies clean releaseTarGz
mkdir -p core/build/distributions/latest
LATEST_TAG=$(git tag --sort=-v:refname | grep -E '^[0-9]+\.[0-9]+\.[0-9]+$' | head -n 1)
echo "LATEST_TAG=$LATEST_TAG"
IS_LATEST="false"
if [ "$LATEST_TAG" == "${{ github.ref_name }}" ]; then
IS_LATEST=true
fi
echo "IS_LATEST=$IS_LATEST" >> $GITHUB_OUTPUT
for file in core/build/distributions/automq-*.tgz; do
if [[ ! "$file" =~ site-docs ]]; then
echo "Find latest tgz file: $file"
cp "$file" core/build/distributions/latest/automq-kafka-latest.tgz
break
if [ "$IS_LATEST" = "true" ]; then
echo "Find latest tgz file: $file"
cp "$file" core/build/distributions/latest/automq-kafka-latest.tgz
fi
else
echo "Skip and remove site-docs file: $file"
rm "$file"
fi
done
- uses: jakejarvis/s3-sync-action@master
name: s3-upload-latest
if: ${{ github.repository_owner == 'AutoMQ' }}
- uses: tvrcgo/oss-action@master
name: upload-latest
if: ${{ github.repository_owner == 'AutoMQ' && steps.build-targz.outputs.IS_LATEST == 'true' }}
with:
args: --follow-symlinks --delete
env:
AWS_S3_BUCKET: ${{ secrets.AWS_CN_PROD_BUCKET }}
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_CN_PROD_AK }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_CN_PROD_SK }}
AWS_REGION: 'cn-northwest-1'
SOURCE_DIR: 'core/build/distributions/latest'
DEST_DIR: 'community_edition/artifacts'
bucket: ${{ secrets.UPLOAD_BUCKET }}
key-id: ${{ secrets.UPLOAD_BUCKET_AK }}
key-secret: ${{ secrets.UPLOAD_BUCKET_SK }}
region: 'oss-cn-hangzhou'
assets: |
core/build/distributions/latest/automq-kafka-latest.tgz:community_edition/artifacts/automq-kafka-latest.tgz
- name: GitHub Release
uses: softprops/action-gh-release@v1

View File

@ -0,0 +1,31 @@
name: Spark Iceberg image
on:
workflow_dispatch:
jobs:
docker:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_READ_WRITE_TOKEN }}
- name: Build and Push
uses: docker/build-push-action@v6
with:
context: docker/table_topic/spark_iceberg/
platforms: linux/amd64,linux/arm64
push: true
tags: automqinc/spark-iceberg:latest

4
.gitignore vendored
View File

@ -62,3 +62,7 @@ storage/kafka-tiered-storage/
docker/test/report_*.html
kafka.Kafka
__pycache__
# Ignore bin folder generated by the build, but exclude the one in the root
bin/
!/bin/

View File

@ -1,6 +0,0 @@
<component name="CopyrightManager">
<copyright>
<option name="notice" value="Copyright 2024, AutoMQ HK Limited.&#10;&#10;The use of this file is governed by the Business Source License,&#10;as detailed in the file &quot;/LICENSE.S3Stream&quot; included in this repository.&#10;&#10;As of the Change Date specified in that file, in accordance with&#10;the Business Source License, use of this software will be governed&#10;by the Apache License, Version 2.0" />
<option name="myName" value="BSL" />
</copyright>
</component>

View File

@ -1,7 +0,0 @@
<component name="CopyrightManager">
<settings default="BSL">
<module2copyright>
<element module="All" copyright="BSL" />
</module2copyright>
</settings>
</component>

221
LICENSE
View File

@ -1,29 +1,202 @@
Copyright (c) 2023-2024 AutoMQ HK Limited.
this software are licensed as follows:
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
1. Apache Kafka Source and Dependency Licensing:
All code in this repository that is forked from Apache Kafka and its
dependencies will continue to be licensed under the original Apache Kafka
open source license. For detailed licensing information regarding Apache
Kafka and its dependencies, please refer to the files under the "/licenses/"
folder in this repository.
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
2. S3Stream Component Licensing:
The S3Stream component added to this project (specifically referring to all
files under the "/S3Stream/" directory) is licensed under a revised Business
Source License (BSL) by AutoMQ HK Limited, with the specific terms available
in the /LICENSE.S3Stream file in this repository. Any dependencies used by
the S3Stream component are subject to their respective open source licenses.
1. Definitions.
3. File-Level License Precedence:
For each file in this repository, if the license is explicitly specified in
the header of the file, the license stated in the file header shall prevail.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -1,96 +0,0 @@
License text copyright © 2023 MariaDB plc, All Rights Reserved.
"Business Source License" is a trademark of MariaDB plc.
Parameters
Licensor: AutoMQ HK Limited.
Licensed Work: AutoMQ Version 1.1.2 or later. The Licensed Work is (c) 2024
AutoMQ HK Limited.
Additional Use Grant: You may make production use of the Licensed Work, provided
Your use does not include offering the Licensed Work to third
parties on a hosted or embedded basis in order to compete with
AutoMQ's paid version(s) of the Licensed Work. For purposes
of this license:
A "competitive offering" is a Product that is offered to third
parties on a paid basis, including through paid support
arrangements, that significantly overlaps with the capabilities
of AutoMQ's paid version(s) of the Licensed Work. If Your
Product is not a competitive offering when You first make it
generally available, it will not become a competitive offering
later due to AutoMQ releasing a new version of the Licensed
Work with additional capabilities. In addition, Products that
are not provided on a paid basis are not competitive.
"Product" means software that is offered to end users to manage
in their own environments or offered as a service on a hosted
basis.
"Embedded" means including the source code or executable code
from the Licensed Work in a competitive offering. "Embedded"
also means packaging the competitive offering in such a way
that the Licensed Work must be accessed or downloaded for the
competitive offering to operate.
Hosting or using the Licensed Work(s) for internal purposes
within an organization is not considered a competitive
offering. AutoMQ considers your organization to include all
of your affiliates under common control.
For binding interpretive guidance on using AutoMQ products
under the Business Source License, please visit our FAQ.
(https://www.automq.com/license-faq)
Change Date: Change date is four years from release date.
Please see https://github.com/AutoMQ/automq/releases for exact dates
Change License: Apache License, Version 2.0
URL: https://www.apache.org/licenses/LICENSE-2.0
For information about alternative licensing arrangements for the Licensed Work,
please contact licensing@automq.com.
Notice
Business Source License 1.1
Terms
The Licensor hereby grants you the right to copy, modify, create derivative
works, redistribute, and make non-production use of the Licensed Work. The
Licensor may make an Additional Use Grant, above, permitting limited production use.
Effective on the Change Date, or the fourth anniversary of the first publicly
available distribution of a specific version of the Licensed Work under this
License, whichever comes first, the Licensor hereby grants you rights under
the terms of the Change License, and the rights granted in the paragraph
above terminate.
If your use of the Licensed Work does not comply with the requirements
currently in effect as described in this License, you must purchase a
commercial license from the Licensor, its affiliated entities, or authorized
resellers, or you must refrain from using the Licensed Work.
All copies of the original and modified Licensed Work, and derivative works
of the Licensed Work, are subject to this License. This License applies
separately for each version of the Licensed Work and the Change Date may vary
for each version of the Licensed Work released by Licensor.
You must conspicuously display this License on each original or modified copy
of the Licensed Work. If you receive the Licensed Work in original or
modified form from a third party, the terms and conditions set forth in this
License apply to your use of that work.
Any use of the Licensed Work in violation of this License will automatically
terminate your rights under this License for the current and all other
versions of the Licensed Work.
This License does not grant you any right in any trademark or logo of
Licensor or its affiliates (provided that you may use a trademark or logo of
Licensor as expressly required by this License).
TO THE EXTENT PERMITTED BY APPLICABLE LAW, THE LICENSED WORK IS PROVIDED ON
AN "AS IS" BASIS. LICENSOR HEREBY DISCLAIMS ALL WARRANTIES AND CONDITIONS,
EXPRESS OR IMPLIED, INCLUDING (WITHOUT LIMITATION) WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, AND
TITLE.

2
NOTICE
View File

@ -1,5 +1,5 @@
AutoMQ NOTICE
Copyright 2023-2024, AutoMQ HK Limited.
Copyright 2023-2025, AutoMQ HK Limited.
---------------------------
Apache Kafka NOTICE

View File

@ -1,5 +1,5 @@
AutoMQ Binary NOTICE
Copyright 2023-2024, AutoMQ HK Limited.
Copyright 2023-2025, AutoMQ HK Limited.
---------------------------
Apache Kafka Binary NOTICE

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell;

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell;

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.commands.cluster;

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.commands.cluster;

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.commands.cluster;
@ -102,9 +110,11 @@ public class Deploy implements Callable<Integer> {
String globalAccessKey = null;
String globalSecretKey = null;
for (Env env : topo.getGlobal().getEnvs()) {
if ("KAFKA_S3_ACCESS_KEY".equals(env.getName())) {
if ("KAFKA_S3_ACCESS_KEY".equals(env.getName()) ||
"AWS_ACCESS_KEY_ID".equals(env.getName())) {
globalAccessKey = env.getValue();
} else if ("KAFKA_S3_SECRET_KEY".equals(env.getName())) {
} else if ("KAFKA_S3_SECRET_KEY".equals(env.getName()) ||
"AWS_SECRET_ACCESS_KEY".equals(env.getName())) {
globalSecretKey = env.getValue();
}
}
@ -159,6 +169,7 @@ public class Deploy implements Callable<Integer> {
sb.append("--override cluster.id=").append(topo.getGlobal().getClusterId()).append(" ");
sb.append("--override node.id=").append(node.getNodeId()).append(" ");
sb.append("--override controller.quorum.voters=").append(getQuorumVoters(topo)).append(" ");
sb.append("--override controller.quorum.bootstrap.servers=").append(getBootstrapServers(topo)).append(" ");
sb.append("--override advertised.listeners=").append("PLAINTEXT://").append(node.getHost()).append(":9092").append(" ");
}
@ -181,4 +192,14 @@ public class Deploy implements Callable<Integer> {
.map(node -> node.getNodeId() + "@" + node.getHost() + ":9093")
.collect(Collectors.joining(","));
}
private static String getBootstrapServers(ClusterTopology topo) {
List<Node> nodes = topo.getControllers();
if (!(nodes.size() == 1 || nodes.size() == 3)) {
throw new IllegalArgumentException("Only support 1 or 3 controllers");
}
return nodes.stream()
.map(node -> node.getHost() + ":9093")
.collect(Collectors.joining(","));
}
}

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.commands.cluster;

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.constant;

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.log;

View File

@ -1,17 +1,26 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.log;
import com.automq.shell.AutoMQApplication;
import com.automq.shell.util.Utils;
import com.automq.stream.s3.operator.ObjectStorage;
import com.automq.stream.s3.operator.ObjectStorage.ObjectInfo;
import com.automq.stream.s3.operator.ObjectStorage.ObjectPath;
@ -204,7 +213,7 @@ public class LogUploader implements LogRecorder {
try {
String objectKey = getObjectKey();
objectStorage.write(WriteOptions.DEFAULT, objectKey, uploadBuffer.retainedSlice().asReadOnly()).get();
objectStorage.write(WriteOptions.DEFAULT, objectKey, Utils.compress(uploadBuffer.slice().asReadOnly())).get();
break;
} catch (Exception e) {
e.printStackTrace(System.err);

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.log;

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.log;

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.metrics;

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.metrics;

View File

@ -1,20 +1,30 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.metrics;
import com.automq.shell.util.Utils;
import com.automq.stream.s3.operator.ObjectStorage;
import com.automq.stream.s3.operator.ObjectStorage.ObjectInfo;
import com.automq.stream.s3.operator.ObjectStorage.ObjectPath;
import com.automq.stream.s3.operator.ObjectStorage.WriteOptions;
import com.automq.stream.utils.Threads;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;
@ -152,12 +162,16 @@ public class S3MetricsExporter implements MetricExporter {
CompletableFuture.allOf(deleteFutures).join();
}
}
Thread.sleep(Duration.ofMinutes(1).toMillis());
if (Threads.sleep(Duration.ofMinutes(1).toMillis())) {
break;
}
} catch (InterruptedException e) {
break;
} catch (Exception e) {
LOGGER.error("Cleanup s3 metrics failed", e);
if (Threads.sleep(Duration.ofMinutes(1).toMillis())) {
break;
}
}
}
}
@ -242,7 +256,7 @@ public class S3MetricsExporter implements MetricExporter {
synchronized (uploadBuffer) {
if (uploadBuffer.readableBytes() > 0) {
try {
objectStorage.write(WriteOptions.DEFAULT, getObjectKey(), uploadBuffer.retainedSlice().asReadOnly()).get();
objectStorage.write(WriteOptions.DEFAULT, getObjectKey(), Utils.compress(uploadBuffer.slice().asReadOnly())).get();
} catch (Exception e) {
LOGGER.error("Failed to upload metrics to s3", e);
return CompletableResultCode.ofFailure();

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.model;

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.model;

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.model;

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.model;

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.model;

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.model;

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.model;

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.stream;

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.stream;

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.util;

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.util;

View File

@ -0,0 +1,69 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.util;
import com.automq.stream.s3.ByteBufAlloc;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;
import io.netty.buffer.ByteBuf;
public class Utils {
public static ByteBuf compress(ByteBuf input) throws IOException {
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
GZIPOutputStream gzipOutputStream = new GZIPOutputStream(byteArrayOutputStream);
byte[] buffer = new byte[input.readableBytes()];
input.readBytes(buffer);
gzipOutputStream.write(buffer);
gzipOutputStream.close();
ByteBuf compressed = ByteBufAlloc.byteBuffer(byteArrayOutputStream.size());
compressed.writeBytes(byteArrayOutputStream.toByteArray());
return compressed;
}
public static ByteBuf decompress(ByteBuf input) throws IOException {
byte[] compressedData = new byte[input.readableBytes()];
input.readBytes(compressedData);
ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(compressedData);
GZIPInputStream gzipInputStream = new GZIPInputStream(byteArrayInputStream);
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
byte[] buffer = new byte[1024];
int bytesRead;
while ((bytesRead = gzipInputStream.read(buffer)) != -1) {
byteArrayOutputStream.write(buffer, 0, bytesRead);
}
gzipInputStream.close();
byteArrayOutputStream.close();
byte[] uncompressedData = byteArrayOutputStream.toByteArray();
ByteBuf output = ByteBufAlloc.byteBuffer(uncompressedData.length);
output.writeBytes(uncompressedData);
return output;
}
}

View File

@ -9,10 +9,12 @@ global:
config: |
s3.data.buckets=0@s3://xxx_bucket?region=us-east-1
s3.ops.buckets=1@s3://xxx_bucket?region=us-east-1
s3.wal.path=0@s3://xxx_bucket?region=us-east-1
log.dirs=/root/kraft-logs
envs:
- name: KAFKA_S3_ACCESS_KEY
- name: AWS_ACCESS_KEY_ID
value: 'xxxxx'
- name: KAFKA_S3_SECRET_KEY
- name: AWS_SECRET_ACCESS_KEY
value: 'xxxxx'
controllers:
# By default the controllers are combined nodes whose roles are controller and broker.
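For non-Kubernetes deployments, the same credentials can be exported as plain environment variables before starting the broker. A minimal sketch with placeholder values (the start command itself is omitted; it is unchanged by this configuration):
```shell
export AWS_ACCESS_KEY_ID=xxxxx
export AWS_SECRET_ACCESS_KEY=xxxxx
# then start the broker as usual so it picks up the credentials from the environment
```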

View File

@ -0,0 +1,50 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.util;
import com.automq.stream.s3.ByteBufAlloc;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.Timeout;
import io.netty.buffer.ByteBuf;
@Timeout(60)
@Tag("S3Unit")
public class UtilsTest {
@Test
public void testCompression() {
String testStr = "This is a test string";
ByteBuf input = ByteBufAlloc.byteBuffer(testStr.length());
input.writeBytes(testStr.getBytes());
try {
ByteBuf compressed = Utils.compress(input);
ByteBuf decompressed = Utils.decompress(compressed);
String decompressedStr = decompressed.toString(io.netty.util.CharsetUtil.UTF_8);
System.out.printf("Original: %s, Decompressed: %s\n", testStr, decompressedStr);
Assertions.assertEquals(testStr, decompressedStr);
} catch (Exception e) {
Assertions.fail("Exception occurred during compression/decompression: " + e.getMessage());
}
}
}

View File

@ -47,7 +47,7 @@ if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
fi
if [ "x$KAFKA_OPTS" = "x" ]; then
export KAFKA_OPTS="-Dio.netty.allocator.maxOrder=11"
export KAFKA_OPTS="-XX:+ExitOnOutOfMemoryError -XX:+HeapDumpOnOutOfMemoryError -Dio.netty.allocator.maxOrder=11"
fi
EXTRA_ARGS=${EXTRA_ARGS-'-name kafkaServer -loggc'}

View File

@ -44,7 +44,9 @@ plugins {
// be dropped from gradle/resources/dependencycheck-suppressions.xml
id "com.github.spotbugs" version '5.1.3' apply false
id 'org.scoverage' version '8.0.3' apply false
id 'io.github.goooler.shadow' version '8.1.3' apply false
// Updating the shadow plugin version to 8.1.1 causes issue with signing and publishing the shadowed
// artifacts - see https://github.com/johnrengelman/shadow/issues/901
id 'com.github.johnrengelman.shadow' version '8.1.0' apply false
// Spotless 6.13.0 has an issue with Java 21 (see https://github.com/diffplug/spotless/pull/1920), and Spotless 6.14.0+ requires JRE 11
// We are going to drop JDK8 support. Hence, Spotless is upgraded to the newest version and applied only if the build env is compatible with JDK 11.
// spotless 6.15.0+ has an issue at runtime with JDK8 even though we define it with `apply:false`. See https://github.com/diffplug/spotless/issues/2156 for more details
@ -128,6 +130,9 @@ allprojects {
repositories {
mavenCentral()
maven {
url = uri("https://packages.confluent.io/maven/")
}
}
dependencyUpdates {
@ -147,6 +152,10 @@ allprojects {
}
configurations.all {
// Globally exclude commons-logging and logback to ensure a single logging implementation (reload4j)
exclude group: "commons-logging", module: "commons-logging"
exclude group: "ch.qos.logback", module: "logback-classic"
exclude group: "ch.qos.logback", module: "logback-core"
// zinc is the Scala incremental compiler, it has a configuration for its own dependencies
// that are unrelated to the project dependencies, we should not change them
if (name != "zinc") {
@ -162,8 +171,8 @@ allprojects {
// ZooKeeper (potentially older and containing CVEs)
libs.nettyHandler,
libs.nettyTransportNativeEpoll,
// be explicit about the reload4j version instead of relying on the transitive versions
libs.reload4j
// be explicit about the reload4j version instead of relying on the transitive versions
libs.reload4j
)
}
}
@ -295,7 +304,7 @@ subprojects {
if (!shouldPublishWithShadow) {
from components.java
} else {
apply plugin: 'io.github.goooler.shadow'
apply plugin: 'com.github.johnrengelman.shadow'
project.shadow.component(mavenJava)
// Fix for avoiding inclusion of runtime dependencies marked as 'shadow' in MANIFEST Class-Path.
@ -728,7 +737,7 @@ subprojects {
jacoco {
toolVersion = versions.jacoco
}
jacocoTestReport {
dependsOn tasks.test
sourceSets sourceSets.main
@ -752,8 +761,8 @@ subprojects {
skipProjects = [ ":jmh-benchmarks", ":trogdor" ]
skipConfigurations = [ "zinc" ]
}
// the task `removeUnusedImports` is implemented by google-java-format,
// and unfortunately the google-java-format version used by spotless 6.14.0 can't work with JDK 21.
// the task `removeUnusedImports` is implemented by google-java-format,
// and unfortunately the google-java-format version used by spotless 6.14.0 can't work with JDK 21.
// Hence, we apply spotless tasks only if the env is either JDK11 or JDK17
if ((JavaVersion.current().isJava11() || (JavaVersion.current() == JavaVersion.VERSION_17))) {
apply plugin: 'com.diffplug.spotless'
@ -835,6 +844,7 @@ project(':server') {
dependencies {
implementation project(':clients')
implementation project(':metadata')
implementation project(':server-common')
implementation project(':storage')
implementation project(':group-coordinator')
@ -944,6 +954,7 @@ project(':core') {
implementation libs.scalaReflect
implementation libs.scalaLogging
implementation libs.slf4jApi
implementation libs.commonsIo // ZooKeeper dependency. Do not use, this is going away.
implementation(libs.zookeeper) {
// Dropwizard Metrics are required by ZooKeeper as of v3.6.0,
// but the library should *not* be used in Kafka code
@ -965,6 +976,8 @@ project(':core') {
implementation libs.guava
implementation libs.slf4jBridge
implementation libs.slf4jReload4j
// The `jcl-over-slf4j` library is used to redirect JCL logging to SLF4J.
implementation libs.jclOverSlf4j
implementation libs.opentelemetryJava8
implementation libs.opentelemetryOshi
@ -976,6 +989,71 @@ project(':core') {
implementation libs.opentelemetryJmx
implementation libs.awsSdkAuth
// table topic start
implementation ("org.apache.avro:avro:${versions.avro}")
implementation ("org.apache.avro:avro-protobuf:${versions.avro}")
implementation('com.google.protobuf:protobuf-java:3.25.5')
implementation ("org.apache.iceberg:iceberg-core:${versions.iceberg}")
implementation ("org.apache.iceberg:iceberg-api:${versions.iceberg}")
implementation ("org.apache.iceberg:iceberg-data:${versions.iceberg}")
implementation ("org.apache.iceberg:iceberg-parquet:${versions.iceberg}")
implementation ("org.apache.iceberg:iceberg-common:${versions.iceberg}")
implementation ("org.apache.iceberg:iceberg-aws:${versions.iceberg}")
implementation ("software.amazon.awssdk:glue:${versions.awsSdk}")
implementation ("software.amazon.awssdk:s3tables:${versions.awsSdk}")
implementation 'software.amazon.s3tables:s3-tables-catalog-for-iceberg:0.1.0'
implementation ('org.apache.hadoop:hadoop-common:3.4.1') {
exclude group: 'org.eclipse.jetty', module: '*'
exclude group: 'com.sun.jersey', module: '*'
}
// for hadoop common
implementation ("org.eclipse.jetty:jetty-webapp:${versions.jetty}")
implementation (libs.kafkaAvroSerializer) {
exclude group: 'org.apache.kafka', module: 'kafka-clients'
}
// > hive ext start
implementation 'org.apache.iceberg:iceberg-hive-metastore:1.6.1'
implementation('org.apache.hive:hive-metastore:3.1.3') {
// Remove useless dependencies (copy from iceberg-kafka-connect)
exclude group: "org.apache.avro", module: "avro"
exclude group: "org.slf4j", module: "slf4j-log4j12"
exclude group: "org.pentaho" // missing dependency
exclude group: "org.apache.hbase"
exclude group: "org.apache.logging.log4j"
exclude group: "co.cask.tephra"
exclude group: "com.google.code.findbugs", module: "jsr305"
exclude group: "org.eclipse.jetty.aggregate", module: "jetty-all"
exclude group: "org.eclipse.jetty.orbit", module: "javax.servlet"
exclude group: "org.apache.parquet", module: "parquet-hadoop-bundle"
exclude group: "com.tdunning", module: "json"
exclude group: "javax.transaction", module: "transaction-api"
exclude group: "com.zaxxer", module: "HikariCP"
exclude group: "org.apache.hadoop", module: "hadoop-yarn-server-common"
exclude group: "org.apache.hadoop", module: "hadoop-yarn-server-applicationhistoryservice"
exclude group: "org.apache.hadoop", module: "hadoop-yarn-server-resourcemanager"
exclude group: "org.apache.hadoop", module: "hadoop-yarn-server-web-proxy"
exclude group: "org.apache.hive", module: "hive-service-rpc"
exclude group: "com.github.joshelser", module: "dropwizard-metrics-hadoop-metrics2-reporter"
}
implementation ('org.apache.hadoop:hadoop-mapreduce-client-core:3.4.1') {
exclude group: 'com.sun.jersey', module: '*'
exclude group: 'com.sun.jersey.contribs', module: '*'
exclude group: 'com.github.pjfanning', module: 'jersey-json'
}
// > hive ext end
// > Protobuf ext start
// Wire Runtime for schema handling
implementation ("com.squareup.wire:wire-schema:${versions.wire}")
implementation ("com.squareup.wire:wire-runtime:${versions.wire}")
implementation 'com.google.api.grpc:proto-google-common-protos:2.52.0'
// > Protobuf ext end
// table topic end
implementation(libs.oshi) {
exclude group: 'org.slf4j', module: '*'
}
@ -990,6 +1068,7 @@ project(':core') {
testImplementation project(':storage:storage-api').sourceSets.test.output
testImplementation project(':server').sourceSets.test.output
testImplementation libs.bcpkix
testImplementation libs.mockitoJunitJupiter // supports MockitoExtension
testImplementation libs.mockitoCore
testImplementation libs.guava
testImplementation(libs.apacheda) {
@ -1160,7 +1239,6 @@ project(':core') {
from(project.file("$rootDir/docker/docker-compose.yaml")) { into "docker/" }
from(project.file("$rootDir/docker/telemetry")) { into "docker/telemetry/" }
from(project.file("$rootDir/LICENSE")) { into "" }
from(project.file("$rootDir/LICENSE.S3Stream")) { into "" }
from "$rootDir/NOTICE-binary" rename {String filename -> filename.replace("-binary", "")}
from(configurations.runtimeClasspath) { into("libs/") }
from(configurations.archives.artifacts.files) { into("libs/") }
@ -1201,6 +1279,38 @@ project(':core') {
from(project(':tools:tools-api').configurations.runtimeClasspath) { into("libs/") }
duplicatesStrategy 'exclude'
}
// AutoMQ inject start
tasks.create(name: "releaseE2ETar", dependsOn: [configurations.archives.artifacts, 'copyDependantTestLibs'], type: Tar) {
def prefix = project.findProperty('prefix') ?: ''
archiveBaseName = "${prefix}kafka"
into "${prefix}kafka-${archiveVersion.get()}"
compression = Compression.GZIP
from(project.file("$rootDir/bin")) { into "bin/" }
from(project.file("$rootDir/config")) { into "config/" }
from(project.file("$rootDir/licenses")) { into "licenses/" }
from(project.file("$rootDir/docker/docker-compose.yaml")) { into "docker/" }
from(project.file("$rootDir/docker/telemetry")) { into "docker/telemetry/" }
from(project.file("$rootDir/LICENSE")) { into "" }
from "$rootDir/NOTICE-binary" rename {String filename -> filename.replace("-binary", "")}
from(configurations.runtimeClasspath) { into("libs/") }
from(configurations.archives.artifacts.files) { into("libs/") }
from(project.siteDocsTar) { into("site-docs/") }
// Include main and test jars from all subprojects
rootProject.subprojects.each { subproject ->
if (subproject.tasks.findByName('jar')) {
from(subproject.tasks.named('jar')) { into('libs/') }
}
if (subproject.tasks.findByName('testJar')) {
from(subproject.tasks.named('testJar')) { into('libs/') }
}
from(subproject.configurations.runtimeClasspath) { into('libs/') }
}
duplicatesStrategy 'exclude'
}
// AutoMQ inject end
jar {
dependsOn('copyDependantLibs')
@ -1220,7 +1330,7 @@ project(':core') {
//By default gradle does not handle test dependencies between the sub-projects
//This line is to include clients project test jar to dependant-testlibs
from (project(':clients').testJar ) { "$buildDir/dependant-testlibs" }
// log4j-appender is not in core dependencies,
// log4j-appender is not in core dependencies,
// so we add it to dependant-testlibs to avoid ClassNotFoundException in running kafka_log4j_appender.py
from (project(':log4j-appender').jar ) { "$buildDir/dependant-testlibs" }
duplicatesStrategy 'exclude'
@ -1253,6 +1363,7 @@ project(':core') {
}
}
project(':metadata') {
base {
archivesName = "kafka-metadata"
@ -1480,7 +1591,7 @@ project(':transaction-coordinator') {
implementation project(':clients')
generator project(':generator')
}
sourceSets {
main {
java {
@ -1571,6 +1682,7 @@ project(':clients') {
implementation libs.snappy
implementation libs.slf4jApi
implementation libs.opentelemetryProto
implementation libs.protobuf
// libraries which should be added as runtime dependencies in generated pom.xml should be defined here:
shadowed libs.zstd
@ -1755,6 +1867,7 @@ project(':raft') {
testImplementation libs.junitJupiter
testImplementation libs.mockitoCore
testImplementation libs.jqwik
testImplementation libs.hamcrest
testRuntimeOnly libs.slf4jReload4j
testRuntimeOnly libs.junitPlatformLanucher
@ -1845,7 +1958,12 @@ project(':server-common') {
implementation libs.jacksonDatabind
implementation libs.pcollections
implementation libs.opentelemetrySdk
// AutoMQ inject start
implementation project(':s3stream')
implementation libs.commonLang
// AutoMQ inject end
testImplementation project(':clients')
testImplementation project(':clients').sourceSets.test.output
@ -2140,11 +2258,12 @@ project(':s3stream') {
implementation 'commons-codec:commons-codec:1.17.0'
implementation 'org.hdrhistogram:HdrHistogram:2.2.2'
implementation 'software.amazon.awssdk.crt:aws-crt:0.30.8'
implementation 'com.ibm.async:asyncutil:0.1.0'
testImplementation 'org.slf4j:slf4j-simple:2.0.9'
testImplementation 'org.junit.jupiter:junit-jupiter:5.10.0'
testImplementation 'org.mockito:mockito-core:5.5.0'
testImplementation 'org.mockito:mockito-junit-jupiter:5.5.0'
testImplementation libs.junitJupiter
testImplementation libs.mockitoCore
testImplementation libs.mockitoJunitJupiter // supports MockitoExtension
testImplementation 'org.awaitility:awaitility:4.2.1'
}
@ -2217,21 +2336,15 @@ project(':tools') {
}
dependencies {
implementation (project(':clients')){
exclude group: 'org.slf4j', module: '*'
}
implementation (project(':server-common')){
exclude group: 'org.slf4j', module: '*'
}
implementation (project(':log4j-appender')){
exclude group: 'org.slf4j', module: '*'
}
implementation project(':automq-shell')
implementation project(':clients')
implementation project(':metadata')
implementation project(':storage')
implementation project(':server')
implementation project(':server-common')
implementation project(':connect:runtime')
implementation project(':tools:tools-api')
implementation project(':transaction-coordinator')
implementation project(':group-coordinator')
implementation libs.argparse4j
implementation libs.jacksonDatabind
implementation libs.jacksonDataformatCsv
@ -2243,6 +2356,16 @@ project(':tools') {
implementation libs.hdrHistogram
implementation libs.spotbugsAnnotations
// AutoMQ inject start
implementation project(':automq-shell')
implementation libs.guava
implementation (libs.kafkaAvroSerializer) {
exclude group: 'org.apache.kafka', module: 'kafka-clients'
}
implementation libs.bucket4j
implementation libs.oshi
// AutoMQ inject end
// for SASL/OAUTHBEARER JWT validation
implementation (libs.jose4j){
exclude group: 'org.slf4j', module: '*'
@ -2279,7 +2402,7 @@ project(':tools') {
testImplementation project(':connect:runtime')
testImplementation project(':connect:runtime').sourceSets.test.output
testImplementation project(':storage:storage-api').sourceSets.main.output
testImplementation project(':group-coordinator')
testImplementation project(':storage').sourceSets.test.output
testImplementation libs.junitJupiter
testImplementation libs.mockitoCore
testImplementation libs.mockitoJunitJupiter // supports MockitoExtension
@ -2577,6 +2700,7 @@ project(':streams') {
':streams:upgrade-system-tests-35:test',
':streams:upgrade-system-tests-36:test',
':streams:upgrade-system-tests-37:test',
':streams:upgrade-system-tests-38:test',
':streams:examples:test'
]
)
@ -3076,9 +3200,24 @@ project(':streams:upgrade-system-tests-37') {
}
}
project(':streams:upgrade-system-tests-38') {
base {
archivesName = "kafka-streams-upgrade-system-tests-38"
}
dependencies {
testImplementation libs.kafkaStreams_38
testRuntimeOnly libs.junitJupiter
}
systemTestLibs {
dependsOn testJar
}
}
project(':jmh-benchmarks') {
apply plugin: 'io.github.goooler.shadow'
apply plugin: 'com.github.johnrengelman.shadow'
shadowJar {
archiveBaseName = 'kafka-jmh-benchmarks'

chart/bitnami/README.md Normal file
View File

@ -0,0 +1,62 @@
# AutoMQ
[AutoMQ](https://www.automq.com/) is a cloud-native alternative to Kafka by decoupling durability to cloud storage services like S3. 10x Cost-Effective. No Cross-AZ Traffic Cost. Autoscale in seconds. Single-digit ms latency.
This Helm chart simplifies the deployment of AutoMQ into your Kubernetes cluster using the Software model.
## Prerequisites
### Install Helm
Install Helm, version v3.8.0 or later.
[Helm quickstart](https://helm.sh/zh/docs/intro/quickstart/)
```shell
helm version
```
### Using the Bitnami Helm repository
AutoMQ is fully compatible with Bitnami's Helm Charts, so you can customize your AutoMQ Kubernetes cluster based on the relevant values.yaml of Bitnami.
[Bitnami Helm Charts](https://github.com/bitnami/charts)
## Quickstart
### Setup a Kubernetes Cluster
The quickest way to set up a Kubernetes cluster to install Bitnami Charts is by following the "Bitnami Get Started" guides for the different services:
[Get Started with Bitnami Charts using the Amazon Elastic Container Service for Kubernetes (EKS)](https://docs.bitnami.com/kubernetes/get-started-eks/)
### Installing the AutoMQ with Bitnami Chart
As an alternative to supplying the configuration parameters as arguments, you can create a supplemental YAML file containing your specific config parameters. Any parameters not specified in this file will default to those set in [values.yaml](values.yaml).
1. Create an empty `automq-values.yaml` file
2. Edit the file with your specific parameters:
You can refer to the [demo-values.yaml](/chart/bitnami/demo-values.yaml) we provide, which is based on the Bitnami [values.yaml](https://github.com/bitnami/charts/blob/main/bitnami/kafka/values.yaml)
and deploys AutoMQ on AWS across 3 Availability Zones using m7g.xlarge instances (4 vCPUs, 16GB Mem, 156MiB/s network bandwidth).
Replace the `${...}` placeholders with your own bucket configuration, such as ops-bucket, data-bucket, region, endpoint, and access-key/secret-key.
3. Install or upgrade the AutoMQ Helm chart using your custom yaml file:
We recommend using the Bitnami Helm chart `--version` [31.x.x (31.1.0 ~ 31.5.0)](https://artifacthub.io/packages/helm/bitnami/kafka) when installing AutoMQ.
```shell
helm install automq-release oci://registry-1.docker.io/bitnamicharts/kafka -f demo-values.yaml --version 31.5.0 --namespace automq --create-namespace
```
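Once the release is installed, you can verify that the pods come up (assuming the `automq` namespace and release name used in the command above):
```shell
kubectl get pods --namespace automq
helm status automq-release --namespace automq
```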
### Upgrading
To upgrade the deployment:
```shell
helm repo update
helm upgrade automq-release oci://registry-1.docker.io/bitnamicharts/kafka -f demo-values.yaml --version 31.5.0 --namespace automq --create-namespace
```
### Uninstalling the Chart
To uninstall/delete the deployment:
```shell
helm uninstall automq-release --namespace automq
```
This command removes all the Kubernetes components associated with the chart and deletes the release.

View File

@ -0,0 +1,141 @@
global:
security:
allowInsecureImages: true
image:
registry: automqinc
repository: automq
tag: 1.5.5-bitnami
pullPolicy: Always
extraEnvVars:
- name: AWS_ACCESS_KEY_ID
value: "${access-key}"
- name: AWS_SECRET_ACCESS_KEY
value: "${secret-key}"
controller:
replicaCount: 3
resources:
requests:
cpu: "3000m"
memory: "12Gi"
limits:
cpu: "4000m"
memory: "16Gi"
heapOpts: -Xmx6g -Xms6g -XX:MaxDirectMemorySize=6g -XX:MetaspaceSize=96m
extraConfig: |
elasticstream.enable=true
autobalancer.client.auth.sasl.mechanism=PLAIN
autobalancer.client.auth.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="inter_broker_user" password="interbroker-password-placeholder" user_inter_broker_user="interbroker-password-placeholder";
autobalancer.client.auth.security.protocol=SASL_PLAINTEXT
autobalancer.client.listener.name=INTERNAL
s3.wal.cache.size=2147483648
s3.block.cache.size=1073741824
s3.stream.allocator.policy=POOLED_DIRECT
s3.network.baseline.bandwidth=245366784
# Replace the following with your bucket config
s3.ops.buckets=1@s3://${ops-bucket}?region=${region}&endpoint=${endpoint}
s3.data.buckets=0@s3://${data-bucket}?region=${region}&endpoint=${endpoint}
s3.wal.path=0@s3://${data-bucket}?region=${region}&endpoint=${endpoint}
automq.zonerouter.channels=0@s3://${data-bucket}?region=${region}&endpoint=${endpoint}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/instance
operator: In
# your helm release name
values:
- automq-release
- key: app.kubernetes.io/component
operator: In
values:
- controller-eligible
- broker
topologyKey: kubernetes.io/hostname
# --- nodeAffinity recommended ---
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: "${your-node-label-key}"
# operator: In
# values:
# - "${your-node-label-value}"
topologySpreadConstraints:
- maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
app.kubernetes.io/component: controller-eligible
tolerations:
- key: "dedicated"
operator: "Equal"
value: "automq"
effect: "NoSchedule"
persistence:
size: 20Gi
broker:
replicaCount: 3
resources:
requests:
cpu: "3000m"
memory: "12Gi"
limits:
cpu: "4000m"
memory: "16Gi"
heapOpts: -Xmx6g -Xms6g -XX:MaxDirectMemorySize=6g -XX:MetaspaceSize=96m
extraConfig: |
elasticstream.enable=true
autobalancer.client.auth.sasl.mechanism=PLAIN
autobalancer.client.auth.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="inter_broker_user" password="interbroker-password-placeholder" user_inter_broker_user="interbroker-password-placeholder";
autobalancer.client.auth.security.protocol=SASL_PLAINTEXT
autobalancer.client.listener.name=INTERNAL
s3.wal.cache.size=2147483648
s3.block.cache.size=1073741824
s3.stream.allocator.policy=POOLED_DIRECT
s3.network.baseline.bandwidth=245366784
# Replace the following with your bucket config
s3.ops.buckets=1@s3://${ops-bucket}?region=${region}&endpoint=${endpoint}
s3.data.buckets=0@s3://${data-bucket}?region=${region}&endpoint=${endpoint}
s3.wal.path=0@s3://${data-bucket}?region=${region}&endpoint=${endpoint}
automq.zonerouter.channels=0@s3://${data-bucket}?region=${region}&endpoint=${endpoint}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/instance
operator: In
# your helm release name
values:
- automq-release
- key: app.kubernetes.io/component
operator: In
values:
- controller-eligible
- broker
topologyKey: kubernetes.io/hostname
# --- nodeAffinity recommended ---
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: "${your-node-label-key}"
# operator: In
# values:
# - "${your-node-label-value}"
topologySpreadConstraints:
- maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
app.kubernetes.io/component: broker
tolerations:
- key: "dedicated"
operator: "Equal"
value: "automq"
effect: "NoSchedule"
brokerRackAssignment: aws-az

View File

@ -182,6 +182,10 @@
<subpackage name="migration">
<allow pkg="org.apache.kafka.controller" />
</subpackage>
<subpackage name="storage">
<allow pkg="org.apache.kafka.common.internals" />
<allow pkg="org.apache.kafka.snapshot" />
</subpackage>
<subpackage name="util">
<allow class="org.apache.kafka.common.compress.Compression" exact-match="true" />
</subpackage>

View File

@ -80,6 +80,8 @@
<allow pkg="org.apache.kafka.raft" />
<subpackage name="server">
<allow pkg="org.apache.kafka.server" />
<allow pkg="org.apache.kafka.image" />
<subpackage name="metrics">
<allow class="org.apache.kafka.server.authorizer.AuthorizableRequestContext" />
<allow pkg="org.apache.kafka.server.telemetry" />

View File

@ -83,6 +83,11 @@
<allow pkg="org.apache.kafka.coordinator.transaction"/>
</subpackage>
<subpackage name="storage.log">
<allow pkg="org.apache.kafka.server" />
<allow pkg="com.yammer.metrics" />
</subpackage>
<!-- START OF TIERED STORAGE INTEGRATION TEST IMPORT DEPENDENCIES -->
<subpackage name="tiered.storage">
<allow pkg="scala" />

View File

@ -49,6 +49,7 @@
<subpackage name="common">
<allow class="org.apache.kafka.clients.consumer.ConsumerRecord" exact-match="true" />
<allow class="org.apache.kafka.clients.NodeApiVersions" exact-match="true" />
<allow class="org.apache.kafka.common.message.ApiMessageType" exact-match="true" />
<disallow pkg="org.apache.kafka.clients" />
<allow pkg="org.apache.kafka.common" exact-match="true" />
@ -76,7 +77,10 @@
<allow pkg="net.jpountz.xxhash" />
<allow pkg="org.xerial.snappy" />
<allow pkg="org.apache.kafka.common.compress" />
<allow class="org.apache.kafka.common.record.CompressionType" exact-match="true" />
<allow class="org.apache.kafka.common.record.CompressionType" />
<allow class="org.apache.kafka.common.record.CompressionType.GZIP" />
<allow class="org.apache.kafka.common.record.CompressionType.LZ4" />
<allow class="org.apache.kafka.common.record.CompressionType.ZSTD" />
<allow class="org.apache.kafka.common.record.RecordBatch" exact-match="true" />
</subpackage>
@ -150,6 +154,7 @@
</subpackage>
<subpackage name="record">
<allow class="org.apache.kafka.common.config.ConfigDef.Range.between" exact-match="true" />
<allow pkg="org.apache.kafka.common.compress" />
<allow pkg="org.apache.kafka.common.header" />
<allow pkg="org.apache.kafka.common.record" />
@ -278,12 +283,16 @@
<subpackage name="tools">
<allow pkg="org.apache.kafka.common"/>
<allow pkg="org.apache.kafka.metadata.properties" />
<allow pkg="org.apache.kafka.network" />
<allow pkg="org.apache.kafka.server.util" />
<allow pkg="kafka.admin" />
<allow pkg="kafka.server" />
<allow pkg="org.apache.kafka.storage.internals" />
<allow pkg="org.apache.kafka.server.config" />
<allow pkg="org.apache.kafka.server.common" />
<allow pkg="org.apache.kafka.server.log.remote.metadata.storage" />
<allow pkg="org.apache.kafka.server.log.remote.storage" />
<allow pkg="org.apache.kafka.clients" />
<allow pkg="org.apache.kafka.clients.admin" />
<allow pkg="org.apache.kafka.clients.producer" />
@ -301,6 +310,7 @@
<allow pkg="kafka.utils" />
<allow pkg="scala.collection" />
<allow pkg="org.apache.kafka.coordinator.transaction" />
<allow pkg="org.apache.kafka.coordinator.group" />
<subpackage name="consumer">
<allow pkg="org.apache.kafka.tools"/>

View File

@ -39,7 +39,7 @@
<suppress checks="(NPathComplexity|ClassFanOutComplexity|CyclomaticComplexity|ClassDataAbstractionCoupling|FinalLocalVariable|LocalVariableName|MemberName|ParameterName|MethodLength|JavaNCSS|AvoidStarImport)"
files="core[\\/]src[\\/](generated|generated-test)[\\/].+.java$"/>
<suppress checks="NPathComplexity" files="(ClusterTestExtensions|KafkaApisBuilder|SharePartition).java"/>
<suppress checks="NPathComplexity|ClassFanOutComplexity|ClassDataAbstractionCoupling" files="(RemoteLogManager|RemoteLogManagerTest).java"/>
<suppress checks="NPathComplexity|ClassFanOutComplexity|ClassDataAbstractionCoupling|JavaNCSS" files="(RemoteLogManager|RemoteLogManagerTest).java"/>
<suppress checks="MethodLength" files="RemoteLogManager.java"/>
<suppress checks="ClassFanOutComplexity" files="RemoteLogManagerTest.java"/>
<suppress checks="MethodLength"
@ -190,11 +190,11 @@
<!-- Raft -->
<suppress checks="NPathComplexity"
files="RecordsIterator.java"/>
files="(DynamicVoter|RecordsIterator).java"/>
<!-- Streams -->
<suppress checks="ClassFanOutComplexity"
files="(KafkaStreams|KStreamImpl|KTableImpl|InternalTopologyBuilder|StreamsPartitionAssignor|StreamThread|IQv2StoreIntegrationTest|KStreamImplTest|RocksDBStore).java"/>
files="(KafkaStreams|KStreamImpl|KTableImpl|InternalTopologyBuilder|StreamsPartitionAssignor|StreamThread|IQv2StoreIntegrationTest|KStreamImplTest|RocksDBStore|StreamTask).java"/>
<suppress checks="MethodLength"
files="KTableImpl.java"/>
@ -326,7 +326,7 @@
<suppress checks="(ParameterNumber|ClassDataAbstractionCoupling)"
files="(QuorumController).java"/>
<suppress checks="(CyclomaticComplexity|NPathComplexity)"
files="(PartitionRegistration|PartitionChangeBuilder).java"/>
files="(PartitionRegistration|PartitionChangeBuilder|ScramParser).java"/>
<suppress checks="CyclomaticComplexity"
files="(ClientQuotasImage|KafkaEventQueue|MetadataDelta|QuorumController|ReplicationControlManager|KRaftMigrationDriver|ClusterControlManager|MetaPropertiesEnsemble).java"/>
<suppress checks="NPathComplexity"
@ -372,7 +372,7 @@
<suppress checks="CyclomaticComplexity"
files="(S3StreamsMetadataImage|S3StreamMetricsManager|BlockCache|StreamReader|S3MetricsExporter|PrometheusUtils).java"/>
<suppress checks="NPathComplexity"
files="(StreamControlManager|S3StreamsMetadataImage|CompactionManagerTest|S3StreamMetricsManager|CompactionManager|BlockCache|DefaultS3BlockCache|StreamReader|S3Utils|AnomalyDetector|Recreate|ForceClose|QuorumController).java"/>
files="(StreamControlManager|S3StreamsMetadataImage|CompactionManagerTest|S3StreamMetricsManager|CompactionManager|BlockCache|DefaultS3BlockCache|StreamReader|S3Utils|AnomalyDetector|Recreate|ForceClose|QuorumController|AbstractObjectStorage).java"/>
<suppress checks="MethodLength"
files="(S3StreamMetricsManager|BlockWALServiceTest).java"/>
<suppress id="dontUseSystemExit"

View File

@ -18,9 +18,21 @@ package org.apache.kafka.clients.admin;
import org.apache.kafka.common.annotation.InterfaceStability;
import java.util.Optional;
/**
* Options for {@link Admin#addRaftVoter}.
*/
@InterfaceStability.Stable
public class AddRaftVoterOptions extends AbstractOptions<AddRaftVoterOptions> {
private Optional<String> clusterId = Optional.empty();
public AddRaftVoterOptions setClusterId(Optional<String> clusterId) {
this.clusterId = clusterId;
return this;
}
public Optional<String> clusterId() {
return clusterId;
}
}

View File

@ -1729,6 +1729,16 @@ public interface Admin extends AutoCloseable {
* @return {@link GetNodesResult}
*/
GetNodesResult getNodes(Collection<Integer> nodeIdList, GetNodesOptions options);
/**
* Update consumer group
*
* @param groupId group id
* @param groupSpec {@link UpdateGroupSpec}
* @param options {@link UpdateGroupOptions}
* @return {@link UpdateGroupResult}
*/
UpdateGroupResult updateGroup(String groupId, UpdateGroupSpec groupSpec, UpdateGroupOptions options);
// AutoMQ inject end
/**

View File

@ -314,5 +314,11 @@ public class ForwardingAdmin implements Admin {
public GetNodesResult getNodes(Collection<Integer> nodeIdList, GetNodesOptions options) {
return delegate.getNodes(nodeIdList, options);
}
@Override
public UpdateGroupResult updateGroup(String groupId, UpdateGroupSpec groupSpec, UpdateGroupOptions options) {
return delegate.updateGroup(groupId, groupSpec, options);
}
// AutoMQ inject end
}

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kafka.clients.admin;

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kafka.clients.admin;

View File

@ -56,6 +56,7 @@ import org.apache.kafka.clients.admin.internals.ListConsumerGroupOffsetsHandler;
import org.apache.kafka.clients.admin.internals.ListOffsetsHandler;
import org.apache.kafka.clients.admin.internals.ListTransactionsHandler;
import org.apache.kafka.clients.admin.internals.RemoveMembersFromConsumerGroupHandler;
import org.apache.kafka.clients.admin.internals.UpdateGroupHandler;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.consumer.internals.ConsumerProtocol;
import org.apache.kafka.common.Cluster;
@ -241,6 +242,7 @@ import org.apache.kafka.common.requests.ListPartitionReassignmentsResponse;
import org.apache.kafka.common.requests.MetadataRequest;
import org.apache.kafka.common.requests.MetadataResponse;
import org.apache.kafka.common.requests.RemoveRaftVoterRequest;
import org.apache.kafka.common.requests.RemoveRaftVoterResponse;
import org.apache.kafka.common.requests.RenewDelegationTokenRequest;
import org.apache.kafka.common.requests.RenewDelegationTokenResponse;
import org.apache.kafka.common.requests.UnregisterBrokerRequest;
@ -1202,16 +1204,27 @@ public class KafkaAdminClient extends AdminClient {
long pollTimeout = Long.MAX_VALUE;
log.trace("Trying to choose nodes for {} at {}", pendingCalls, now);
Iterator<Call> pendingIter = pendingCalls.iterator();
while (pendingIter.hasNext()) {
Call call = pendingIter.next();
List<Call> toRemove = new ArrayList<>();
// Capture pendingCalls.size() before the for-loop to avoid an infinite loop:
// if call.fail keeps re-adding the call to pendingCalls,
// a loop like for (int i = 0; i < pendingCalls.size(); i++) would never terminate.
int pendingSize = pendingCalls.size();
// pendingCalls could be modified in this loop,
// hence using for-loop instead of iterator to avoid ConcurrentModificationException.
for (int i = 0; i < pendingSize; i++) {
Call call = pendingCalls.get(i);
// If the call is being retried, await the proper backoff before finding the node
if (now < call.nextAllowedTryMs) {
pollTimeout = Math.min(pollTimeout, call.nextAllowedTryMs - now);
} else if (maybeDrainPendingCall(call, now)) {
pendingIter.remove();
toRemove.add(call);
}
}
// Use remove instead of removeAll to avoid deleting all matching elements
for (Call call : toRemove) {
pendingCalls.remove(call);
}
return pollTimeout;
}
@ -4701,6 +4714,8 @@ public class KafkaAdminClient extends AdminClient {
setPort(endpoint.port())));
return new AddRaftVoterRequest.Builder(
new AddRaftVoterRequestData().
setClusterId(options.clusterId().orElse(null)).
setTimeoutMs(timeoutMs).
setVoterId(voterId) .
setVoterDirectoryId(voterDirectoryId).
setListeners(listeners));
@ -4745,13 +4760,14 @@ public class KafkaAdminClient extends AdminClient {
RemoveRaftVoterRequest.Builder createRequest(int timeoutMs) {
return new RemoveRaftVoterRequest.Builder(
new RemoveRaftVoterRequestData().
setClusterId(options.clusterId().orElse(null)).
setVoterId(voterId) .
setVoterDirectoryId(voterDirectoryId));
}
@Override
void handleResponse(AbstractResponse response) {
AddRaftVoterResponse addResponse = (AddRaftVoterResponse) response;
RemoveRaftVoterResponse addResponse = (RemoveRaftVoterResponse) response;
if (addResponse.data().errorCode() != Errors.NONE.code()) {
ApiError error = new ApiError(
addResponse.data().errorCode(),
@ -4857,6 +4873,14 @@ public class KafkaAdminClient extends AdminClient {
return new GetNodesResult(future);
}
@Override
public UpdateGroupResult updateGroup(String groupId, UpdateGroupSpec groupSpec, UpdateGroupOptions options) {
SimpleAdminApiFuture<CoordinatorKey, Void> future = UpdateGroupHandler.newFuture(groupId);
UpdateGroupHandler handler = new UpdateGroupHandler(groupId, groupSpec, logContext);
invokeDriver(handler, future, options.timeoutMs);
return new UpdateGroupResult(future.get(CoordinatorKey.byGroupId(groupId)));
}
private <K, V> void invokeDriver(
AdminApiHandler<K, V> handler,
AdminApiFuture<K, V> future,
@ -4931,6 +4955,10 @@ public class KafkaAdminClient extends AdminClient {
return ListOffsetsRequest.EARLIEST_TIMESTAMP;
} else if (offsetSpec instanceof OffsetSpec.MaxTimestampSpec) {
return ListOffsetsRequest.MAX_TIMESTAMP;
} else if (offsetSpec instanceof OffsetSpec.EarliestLocalSpec) {
return ListOffsetsRequest.EARLIEST_LOCAL_TIMESTAMP;
} else if (offsetSpec instanceof OffsetSpec.LatestTieredSpec) {
return ListOffsetsRequest.LATEST_TIERED_TIMESTAMP;
}
return ListOffsetsRequest.LATEST_TIMESTAMP;
}

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kafka.clients.admin;

View File

@ -26,6 +26,8 @@ public class OffsetSpec {
public static class EarliestSpec extends OffsetSpec { }
public static class LatestSpec extends OffsetSpec { }
public static class MaxTimestampSpec extends OffsetSpec { }
public static class EarliestLocalSpec extends OffsetSpec { }
public static class LatestTieredSpec extends OffsetSpec { }
public static class TimestampSpec extends OffsetSpec {
private final long timestamp;
@ -70,4 +72,23 @@ public class OffsetSpec {
return new MaxTimestampSpec();
}
/**
* Used to retrieve the local log start offset.
* Local log start offset is the offset of a log above which reads
* are guaranteed to be served from the disk of the leader broker.
* <br/>
* Note: When tiered Storage is not enabled, it behaves the same as retrieving the earliest timestamp offset.
*/
public static OffsetSpec earliestLocal() {
return new EarliestLocalSpec();
}
/**
* Used to retrieve the highest offset of data stored in remote storage.
* <br/>
* Note: When tiered storage is not enabled, we will return unknown offset.
*/
public static OffsetSpec latestTiered() {
return new LatestTieredSpec();
}
}
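The two new specs plug into the existing `Admin#listOffsets(Map<TopicPartition, OffsetSpec>)` API alongside `earliest()` and `latest()`. A minimal usage sketch, assuming a local broker at `localhost:9092` and a hypothetical topic name:
```java
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.common.TopicPartition;

import java.util.Map;
import java.util.Properties;

public class ListOffsetsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0);
            // earliestLocal(): first offset still served from the leader's local disk.
            // latestTiered(): highest offset already stored in remote (tiered) storage.
            ListOffsetsResult result = admin.listOffsets(Map.of(tp, OffsetSpec.earliestLocal()));
            System.out.println("earliest local offset: " + result.partitionResult(tp).get().offset());
        }
    }
}
```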

View File

@ -91,10 +91,8 @@ public class RaftVoterEndpoint {
@Override
public String toString() {
return "RaftVoterEndpoint" +
"(name=" + name +
", host=" + host +
", port=" + port +
")";
// enclose IPv6 hosts in square brackets for readability
String hostString = host.contains(":") ? "[" + host + "]" : host;
return name + "://" + hostString + ":" + port;
}
}

View File

@ -18,9 +18,21 @@ package org.apache.kafka.clients.admin;
import org.apache.kafka.common.annotation.InterfaceStability;
import java.util.Optional;
/**
* Options for {@link Admin#removeRaftVoter}.
*/
@InterfaceStability.Stable
public class RemoveRaftVoterOptions extends AbstractOptions<RemoveRaftVoterOptions> {
private Optional<String> clusterId = Optional.empty();
public RemoveRaftVoterOptions setClusterId(Optional<String> clusterId) {
this.clusterId = clusterId;
return this;
}
public Optional<String> clusterId() {
return clusterId;
}
}

View File

@ -0,0 +1,23 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kafka.clients.admin;
public class UpdateGroupOptions extends AbstractOptions<UpdateGroupOptions> {
}

View File

@ -0,0 +1,37 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kafka.clients.admin;
import org.apache.kafka.common.KafkaFuture;
public class UpdateGroupResult extends AbstractOptions<UpdateGroupResult> {
private final KafkaFuture<Void> future;
UpdateGroupResult(final KafkaFuture<Void> future) {
this.future = future;
}
/**
* Return a future which succeeds if the group update succeeds.
*/
public KafkaFuture<Void> all() {
return future;
}
}

View File

@ -0,0 +1,68 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kafka.clients.admin;
import java.util.Objects;
public class UpdateGroupSpec {
private String linkId;
private boolean promoted;
public UpdateGroupSpec linkId(String linkId) {
this.linkId = linkId;
return this;
}
public UpdateGroupSpec promoted(boolean promoted) {
this.promoted = promoted;
return this;
}
public String linkId() {
return linkId;
}
public boolean promoted() {
return promoted;
}
@Override
public boolean equals(Object o) {
if (this == o)
return true;
if (o == null || getClass() != o.getClass())
return false;
UpdateGroupSpec spec = (UpdateGroupSpec) o;
return promoted == spec.promoted && Objects.equals(linkId, spec.linkId);
}
@Override
public int hashCode() {
return Objects.hash(linkId, promoted);
}
@Override
public String toString() {
return "UpdateGroupsSpec{" +
"linkId='" + linkId + '\'' +
", promoted=" + promoted +
'}';
}
}
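A minimal sketch of invoking the new AutoMQ group-update admin call, assuming a local broker at `localhost:9092`; the group id and link id are hypothetical:
```java
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.UpdateGroupOptions;
import org.apache.kafka.clients.admin.UpdateGroupSpec;

import java.util.Properties;

public class UpdateGroupExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Hypothetical link id; promoted(true) marks the group as promoted on that link.
            UpdateGroupSpec spec = new UpdateGroupSpec()
                .linkId("link-1")
                .promoted(true);
            // The request is routed to the group coordinator by UpdateGroupHandler.
            admin.updateGroup("my-consumer-group", spec, new UpdateGroupOptions()).all().get();
        }
    }
}
```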

View File

@ -93,9 +93,20 @@ public final class ListOffsetsHandler extends Batched<TopicPartition, ListOffset
.stream()
.anyMatch(key -> offsetTimestampsByPartition.get(key) == ListOffsetsRequest.MAX_TIMESTAMP);
return ListOffsetsRequest.Builder
.forConsumer(true, options.isolationLevel(), supportsMaxTimestamp)
.setTargetTimes(new ArrayList<>(topicsByName.values()));
boolean requireEarliestLocalTimestamp = keys
.stream()
.anyMatch(key -> offsetTimestampsByPartition.get(key) == ListOffsetsRequest.EARLIEST_LOCAL_TIMESTAMP);
boolean requireTieredStorageTimestamp = keys
.stream()
.anyMatch(key -> offsetTimestampsByPartition.get(key) == ListOffsetsRequest.LATEST_TIERED_TIMESTAMP);
return ListOffsetsRequest.Builder.forConsumer(true,
options.isolationLevel(),
supportsMaxTimestamp,
requireEarliestLocalTimestamp,
requireTieredStorageTimestamp)
.setTargetTimes(new ArrayList<>(topicsByName.values()));
}
@Override

View File

@ -0,0 +1,144 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kafka.clients.admin.internals;
import org.apache.kafka.clients.admin.UpdateGroupSpec;
import org.apache.kafka.common.Node;
import org.apache.kafka.common.message.AutomqUpdateGroupRequestData;
import org.apache.kafka.common.message.AutomqUpdateGroupResponseData;
import org.apache.kafka.common.protocol.Errors;
import org.apache.kafka.common.requests.AbstractResponse;
import org.apache.kafka.common.requests.FindCoordinatorRequest.CoordinatorType;
import org.apache.kafka.common.requests.s3.AutomqUpdateGroupRequest;
import org.apache.kafka.common.requests.s3.AutomqUpdateGroupResponse;
import org.apache.kafka.common.utils.LogContext;
import org.slf4j.Logger;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import static java.util.Collections.singleton;
public class UpdateGroupHandler extends AdminApiHandler.Batched<CoordinatorKey, Void> {
private final CoordinatorKey groupId;
private final UpdateGroupSpec groupSpec;
private final Logger logger;
private final AdminApiLookupStrategy<CoordinatorKey> lookupStrategy;
public UpdateGroupHandler(
String groupId,
UpdateGroupSpec groupSpec,
LogContext logContext
) {
this.groupId = CoordinatorKey.byGroupId(groupId);
this.groupSpec = groupSpec;
this.logger = logContext.logger(UpdateGroupHandler.class);
this.lookupStrategy = new CoordinatorStrategy(CoordinatorType.GROUP, logContext);
}
@Override
public String apiName() {
return "updateGroup";
}
@Override
public AdminApiLookupStrategy<CoordinatorKey> lookupStrategy() {
return lookupStrategy;
}
public static AdminApiFuture.SimpleAdminApiFuture<CoordinatorKey, Void> newFuture(
String groupId
) {
return AdminApiFuture.forKeys(Collections.singleton(CoordinatorKey.byGroupId(groupId)));
}
private void validateKeys(Set<CoordinatorKey> groupIds) {
if (!groupIds.equals(singleton(groupId))) {
throw new IllegalArgumentException("Received unexpected group ids " + groupIds +
" (expected only " + singleton(groupId) + ")");
}
}
@Override
public AutomqUpdateGroupRequest.Builder buildBatchedRequest(
int coordinatorId,
Set<CoordinatorKey> groupIds
) {
validateKeys(groupIds);
return new AutomqUpdateGroupRequest.Builder(
new AutomqUpdateGroupRequestData()
.setLinkId(groupSpec.linkId())
.setGroupId(this.groupId.idValue)
.setPromoted(groupSpec.promoted())
);
}
@Override
public ApiResult<CoordinatorKey, Void> handleResponse(
Node coordinator,
Set<CoordinatorKey> groupIds,
AbstractResponse abstractResponse
) {
validateKeys(groupIds);
final Map<CoordinatorKey, Void> completed = new HashMap<>();
final Map<CoordinatorKey, Throwable> failed = new HashMap<>();
final List<CoordinatorKey> groupsToUnmap = new ArrayList<>();
AutomqUpdateGroupResponse response = (AutomqUpdateGroupResponse) abstractResponse;
AutomqUpdateGroupResponseData data = response.data();
Errors error = Errors.forCode(data.errorCode());
if (error != Errors.NONE) {
handleError(
CoordinatorKey.byGroupId(data.groupId()),
error,
data.errorMessage(),
failed,
groupsToUnmap
);
} else {
completed.put(groupId, null);
}
return new ApiResult<>(completed, failed, groupsToUnmap);
}
private void handleError(
CoordinatorKey groupId,
Errors error,
String errorMsg,
Map<CoordinatorKey, Throwable> failed,
List<CoordinatorKey> groupsToUnmap
) {
switch (error) {
case COORDINATOR_NOT_AVAILABLE:
case NOT_COORDINATOR:
// If the coordinator is unavailable or there was a coordinator change, then we unmap
// the key so that we retry the `FindCoordinator` request
logger.debug("`{}` request for group id {} returned error {}. " +
"Will attempt to find the coordinator again and retry.", apiName(), groupId.idValue, error);
groupsToUnmap.add(groupId);
break;
default:
logger.error("`{}` request for group id {} failed due to unexpected error {}.", apiName(), groupId.idValue, error);
failed.put(groupId, error.exception(errorMsg));
}
}
}
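For orientation, a minimal hedged sketch of how this handler might be driven (not part of the diff): the group id is illustrative, `spec` stands in for an UpdateGroupSpec built by the caller, and the generic admin driver that actually invokes these methods is internal and only summarized in comments.
// Hedged sketch: the admin driver that calls these entry points is omitted.
static AdminApiFuture.SimpleAdminApiFuture<CoordinatorKey, Void> updateGroup(String groupId, UpdateGroupSpec spec) {
    LogContext logContext = new LogContext("[AutomqUpdateGroup groupId=" + groupId + "] ");
    UpdateGroupHandler handler = new UpdateGroupHandler(groupId, spec, logContext);
    AdminApiFuture.SimpleAdminApiFuture<CoordinatorKey, Void> future = UpdateGroupHandler.newFuture(groupId);
    // The driver resolves the group coordinator via handler.lookupStrategy(), sends the
    // AUTOMQ_UPDATE_GROUP request built by handler.buildBatchedRequest(...), and completes
    // `future` (or unmaps the key and retries FindCoordinator) based on handler.handleResponse(...).
    return future;
}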

View File

@ -1233,7 +1233,7 @@ public class AsyncKafkaConsumer<K, V> implements ConsumerDelegate<K, V> {
wakeupTrigger.disableWakeups();
final Timer closeTimer = time.timer(timeout);
clientTelemetryReporter.ifPresent(reporter -> reporter.initiateClose(timeout.toMillis()));
clientTelemetryReporter.ifPresent(ClientTelemetryReporter::initiateClose);
closeTimer.update();
// Prepare shutting down the network thread
swallow(log, Level.ERROR, "Failed to release assignment before closing consumer",

View File

@ -1130,7 +1130,7 @@ public class ClassicKafkaConsumer<K, V> implements ConsumerDelegate<K, V> {
AtomicReference<Throwable> firstException = new AtomicReference<>();
final Timer closeTimer = createTimerForRequest(timeout);
clientTelemetryReporter.ifPresent(reporter -> reporter.initiateClose(timeout.toMillis()));
clientTelemetryReporter.ifPresent(ClientTelemetryReporter::initiateClose);
closeTimer.update();
// Close objects with a timeout. The timeout is required because the coordinator & the fetcher send requests to
// the server in the process of closing which may not respect the overall timeout defined for closing the

View File

@ -1457,15 +1457,16 @@ public final class ConsumerCoordinator extends AbstractCoordinator {
if (responseError != Errors.NONE) {
log.debug("Offset fetch failed: {}", responseError.message());
if (responseError == Errors.COORDINATOR_LOAD_IN_PROGRESS) {
// just retry
future.raise(responseError);
} else if (responseError == Errors.NOT_COORDINATOR) {
if (responseError == Errors.COORDINATOR_NOT_AVAILABLE ||
responseError == Errors.NOT_COORDINATOR) {
// re-discover the coordinator and retry
markCoordinatorUnknown(responseError);
future.raise(responseError);
} else if (responseError == Errors.GROUP_AUTHORIZATION_FAILED) {
future.raise(GroupAuthorizationException.forGroupId(rebalanceConfig.groupId));
} else if (responseError.exception() instanceof RetriableException) {
// retry
future.raise(responseError);
} else {
future.raise(new KafkaException("Unexpected error in fetch offset response: " + responseError.message()));
}

View File

@ -391,7 +391,7 @@ public class OffsetFetcher {
final Map<TopicPartition, ListOffsetsPartition> timestampsToSearch,
boolean requireTimestamp) {
ListOffsetsRequest.Builder builder = ListOffsetsRequest.Builder
.forConsumer(requireTimestamp, isolationLevel, false)
.forConsumer(requireTimestamp, isolationLevel)
.setTargetTimes(ListOffsetsRequest.toListOffsetsTopics(timestampsToSearch));
log.debug("Sending ListOffsetRequest {} to broker {}", builder, node);

View File

@ -337,7 +337,7 @@ public class OffsetsRequestManager implements RequestManager, ClusterResourceLis
boolean requireTimestamps,
List<NetworkClientDelegate.UnsentRequest> unsentRequests) {
ListOffsetsRequest.Builder builder = ListOffsetsRequest.Builder
.forConsumer(requireTimestamps, isolationLevel, false)
.forConsumer(requireTimestamps, isolationLevel)
.setTargetTimes(ListOffsetsRequest.toListOffsetsTopics(targetTimes));
log.debug("Creating ListOffset request {} for broker {} to reset positions", builder,

View File

@ -828,7 +828,7 @@ public class ShareConsumerImpl<K, V> implements ShareConsumerDelegate<K, V> {
wakeupTrigger.disableWakeups();
final Timer closeTimer = time.timer(timeout);
clientTelemetryReporter.ifPresent(reporter -> reporter.initiateClose(timeout.toMillis()));
clientTelemetryReporter.ifPresent(ClientTelemetryReporter::initiateClose);
closeTimer.update();
// Prepare shutting down the network thread

View File

@ -1392,6 +1392,9 @@ public class KafkaProducer<K, V> implements Producer<K, V> {
} else {
// Try to close gracefully.
final Timer closeTimer = time.timer(timeout);
clientTelemetryReporter.ifPresent(ClientTelemetryReporter::initiateClose);
closeTimer.update();
if (this.sender != null) {
this.sender.initiateClose();
closeTimer.update();
@ -1406,7 +1409,6 @@ public class KafkaProducer<K, V> implements Producer<K, V> {
closeTimer.update();
}
}
clientTelemetryReporter.ifPresent(reporter -> reporter.initiateClose(closeTimer.remainingMs()));
}
}

View File

@ -19,9 +19,6 @@ package org.apache.kafka.clients.producer;
import org.apache.kafka.clients.ClientDnsLookup;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.MetadataRecoveryStrategy;
import org.apache.kafka.common.compress.GzipCompression;
import org.apache.kafka.common.compress.Lz4Compression;
import org.apache.kafka.common.compress.ZstdCompression;
import org.apache.kafka.common.config.AbstractConfig;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigDef.Importance;
@ -381,9 +378,9 @@ public class ProducerConfig extends AbstractConfig {
Importance.LOW,
ACKS_DOC)
.define(COMPRESSION_TYPE_CONFIG, Type.STRING, CompressionType.NONE.name, in(Utils.enumOptions(CompressionType.class)), Importance.HIGH, COMPRESSION_TYPE_DOC)
.define(COMPRESSION_GZIP_LEVEL_CONFIG, Type.INT, GzipCompression.DEFAULT_LEVEL, new GzipCompression.LevelValidator(), Importance.MEDIUM, COMPRESSION_GZIP_LEVEL_DOC)
.define(COMPRESSION_LZ4_LEVEL_CONFIG, Type.INT, Lz4Compression.DEFAULT_LEVEL, between(Lz4Compression.MIN_LEVEL, Lz4Compression.MAX_LEVEL), Importance.MEDIUM, COMPRESSION_LZ4_LEVEL_DOC)
.define(COMPRESSION_ZSTD_LEVEL_CONFIG, Type.INT, ZstdCompression.DEFAULT_LEVEL, between(ZstdCompression.MIN_LEVEL, ZstdCompression.MAX_LEVEL), Importance.MEDIUM, COMPRESSION_ZSTD_LEVEL_DOC)
.define(COMPRESSION_GZIP_LEVEL_CONFIG, Type.INT, CompressionType.GZIP.defaultLevel(), CompressionType.GZIP.levelValidator(), Importance.MEDIUM, COMPRESSION_GZIP_LEVEL_DOC)
.define(COMPRESSION_LZ4_LEVEL_CONFIG, Type.INT, CompressionType.LZ4.defaultLevel(), CompressionType.LZ4.levelValidator(), Importance.MEDIUM, COMPRESSION_LZ4_LEVEL_DOC)
.define(COMPRESSION_ZSTD_LEVEL_CONFIG, Type.INT, CompressionType.ZSTD.defaultLevel(), CompressionType.ZSTD.levelValidator(), Importance.MEDIUM, COMPRESSION_ZSTD_LEVEL_DOC)
.define(BATCH_SIZE_CONFIG, Type.INT, 16384, atLeast(0), Importance.MEDIUM, BATCH_SIZE_DOC)
.define(PARTITIONER_ADPATIVE_PARTITIONING_ENABLE_CONFIG, Type.BOOLEAN, true, Importance.LOW, PARTITIONER_ADPATIVE_PARTITIONING_ENABLE_DOC)
.define(PARTITIONER_AVAILABILITY_TIMEOUT_MS_CONFIG, Type.LONG, 0, atLeast(0), Importance.LOW, PARTITIONER_AVAILABILITY_TIMEOUT_MS_DOC)
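For context on the level configs above, a hedged example of the resulting producer settings; the broker address and level value are illustrative.
Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "zstd");
// The value is checked by CompressionType.ZSTD.levelValidator(), i.e. between(minLevel(), maxLevel()).
props.put(ProducerConfig.COMPRESSION_ZSTD_LEVEL_CONFIG, 6);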

View File

@ -17,8 +17,6 @@
package org.apache.kafka.common.compress;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigException;
import org.apache.kafka.common.record.CompressionType;
import org.apache.kafka.common.utils.BufferSupplier;
import org.apache.kafka.common.utils.ByteBufferInputStream;
@ -30,14 +28,11 @@ import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.util.Objects;
import java.util.zip.Deflater;
import java.util.zip.GZIPInputStream;
public class GzipCompression implements Compression {
import static org.apache.kafka.common.record.CompressionType.GZIP;
public static final int MIN_LEVEL = Deflater.BEST_SPEED;
public static final int MAX_LEVEL = Deflater.BEST_COMPRESSION;
public static final int DEFAULT_LEVEL = Deflater.DEFAULT_COMPRESSION;
public class GzipCompression implements Compression {
private final int level;
@ -47,7 +42,7 @@ public class GzipCompression implements Compression {
@Override
public CompressionType type() {
return CompressionType.GZIP;
return GZIP;
}
@Override
@ -101,10 +96,10 @@ public class GzipCompression implements Compression {
}
public static class Builder implements Compression.Builder<GzipCompression> {
private int level = DEFAULT_LEVEL;
private int level = GZIP.defaultLevel();
public Builder level(int level) {
if ((level < MIN_LEVEL || MAX_LEVEL < level) && level != DEFAULT_LEVEL) {
if ((level < GZIP.minLevel() || GZIP.maxLevel() < level) && level != GZIP.defaultLevel()) {
throw new IllegalArgumentException("gzip doesn't support given compression level: " + level);
}
@ -117,22 +112,4 @@ public class GzipCompression implements Compression {
return new GzipCompression(level);
}
}
public static class LevelValidator implements ConfigDef.Validator {
@Override
public void ensureValid(String name, Object o) {
if (o == null)
throw new ConfigException(name, null, "Value must be non-null");
int level = ((Number) o).intValue();
if (level > MAX_LEVEL || (level < MIN_LEVEL && level != DEFAULT_LEVEL)) {
throw new ConfigException(name, o, "Value must be between " + MIN_LEVEL + " and " + MAX_LEVEL + " or equal to " + DEFAULT_LEVEL);
}
}
@Override
public String toString() {
return "[" + MIN_LEVEL + ",...," + MAX_LEVEL + "] or " + DEFAULT_LEVEL;
}
}
}

View File

@ -16,6 +16,7 @@
*/
package org.apache.kafka.common.compress;
import org.apache.kafka.common.record.CompressionType;
import org.apache.kafka.common.utils.ByteUtils;
import net.jpountz.lz4.LZ4Compressor;
@ -75,7 +76,7 @@ public final class Lz4BlockOutputStream extends OutputStream {
*
* For backward compatibility, Lz4BlockOutputStream uses fastCompressor at the default compression level, and highCompressor for any other level.
*/
compressor = level == Lz4Compression.DEFAULT_LEVEL ? LZ4Factory.fastestInstance().fastCompressor() : LZ4Factory.fastestInstance().highCompressor(level);
compressor = level == CompressionType.LZ4.defaultLevel() ? LZ4Factory.fastestInstance().fastCompressor() : LZ4Factory.fastestInstance().highCompressor(level);
checksum = XXHashFactory.fastestInstance().hash32();
this.useBrokenFlagDescriptorChecksum = useBrokenFlagDescriptorChecksum;
bd = new BD(blockSize);

View File

@ -28,13 +28,9 @@ import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.util.Objects;
public class Lz4Compression implements Compression {
import static org.apache.kafka.common.record.CompressionType.LZ4;
// These values come from net.jpountz.lz4.LZ4Constants
// We may need to update them if the lz4 library changes these values.
public static final int MIN_LEVEL = 1;
public static final int MAX_LEVEL = 17;
public static final int DEFAULT_LEVEL = 9;
public class Lz4Compression implements Compression {
private final int level;
@ -44,7 +40,7 @@ public class Lz4Compression implements Compression {
@Override
public CompressionType type() {
return CompressionType.LZ4;
return LZ4;
}
@Override
@ -89,10 +85,10 @@ public class Lz4Compression implements Compression {
}
public static class Builder implements Compression.Builder<Lz4Compression> {
private int level = DEFAULT_LEVEL;
private int level = LZ4.defaultLevel();
public Builder level(int level) {
if (level < MIN_LEVEL || MAX_LEVEL < level) {
if (level < LZ4.minLevel() || LZ4.maxLevel() < level) {
throw new IllegalArgumentException("lz4 doesn't support given compression level: " + level);
}

View File

@ -26,7 +26,6 @@ import org.apache.kafka.common.utils.ChunkedBytesStream;
import com.github.luben.zstd.BufferPool;
import com.github.luben.zstd.RecyclingBufferPool;
import com.github.luben.zstd.Zstd;
import com.github.luben.zstd.ZstdInputStreamNoFinalizer;
import com.github.luben.zstd.ZstdOutputStreamNoFinalizer;
@ -37,11 +36,9 @@ import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.util.Objects;
public class ZstdCompression implements Compression {
import static org.apache.kafka.common.record.CompressionType.ZSTD;
public static final int MIN_LEVEL = Zstd.minCompressionLevel();
public static final int MAX_LEVEL = Zstd.maxCompressionLevel();
public static final int DEFAULT_LEVEL = Zstd.defaultCompressionLevel();
public class ZstdCompression implements Compression {
private final int level;
@ -51,7 +48,7 @@ public class ZstdCompression implements Compression {
@Override
public CompressionType type() {
return CompressionType.ZSTD;
return ZSTD;
}
@Override
@ -125,10 +122,10 @@ public class ZstdCompression implements Compression {
}
public static class Builder implements Compression.Builder<ZstdCompression> {
private int level = DEFAULT_LEVEL;
private int level = ZSTD.defaultLevel();
public Builder level(int level) {
if (MAX_LEVEL < level || level < MIN_LEVEL) {
if (level < ZSTD.minLevel() || ZSTD.maxLevel() < level) {
throw new IllegalArgumentException("zstd doesn't support given compression level: " + level);
}

View File

@ -81,38 +81,37 @@ public class TopicConfig {
public static final String REMOTE_LOG_STORAGE_ENABLE_CONFIG = "remote.storage.enable";
public static final String REMOTE_LOG_STORAGE_ENABLE_DOC = "To enable tiered storage for a topic, set this configuration as true. " +
"You can not disable this config once it is enabled. It will be provided in future versions.";
"You can not disable this config once it is enabled. It will be provided in future versions.";
public static final String LOCAL_LOG_RETENTION_MS_CONFIG = "local.retention.ms";
public static final String LOCAL_LOG_RETENTION_MS_DOC = "The number of milliseconds to keep the local log segment before it gets deleted. " +
"Default value is -2, it represents `retention.ms` value is to be used. The effective value should always be less than or equal " +
"to `retention.ms` value.";
"Default value is -2, it represents `retention.ms` value is to be used. The effective value should always be less than or equal " +
"to `retention.ms` value.";
public static final String LOCAL_LOG_RETENTION_BYTES_CONFIG = "local.retention.bytes";
public static final String LOCAL_LOG_RETENTION_BYTES_DOC = "The maximum size of local log segments that can grow for a partition before it " +
"deletes the old segments. Default value is -2, it represents `retention.bytes` value to be used. The effective value should always be " +
"less than or equal to `retention.bytes` value.";
"deletes the old segments. Default value is -2, it represents `retention.bytes` value to be used. The effective value should always be " +
"less than or equal to `retention.bytes` value.";
public static final String REMOTE_LOG_DISABLE_POLICY_RETAIN = "retain";
public static final String REMOTE_LOG_DISABLE_POLICY_DELETE = "delete";
public static final String REMOTE_LOG_COPY_DISABLE_CONFIG = "remote.log.copy.disable";
public static final String REMOTE_LOG_COPY_DISABLE_DOC = "Determines whether tiered data for a topic should become read only," +
" so that no more data is uploaded for the topic. Once this config is set to true, the local retention configuration " +
"(i.e. local.retention.ms/bytes) becomes irrelevant, and all data expiration follows the topic-wide retention configuration " +
"(i.e. retention.ms/bytes).";
public static final String REMOTE_LOG_DISABLE_POLICY_CONFIG = "remote.log.disable.policy";
public static final String REMOTE_LOG_DISABLE_POLICY_DOC = String.format("Determines whether tiered data for a topic should be retained or " +
"deleted after tiered storage disablement on a topic. The two valid options are \"%s\" and \"%s\". If %s is " +
"selected then all data in remote will be kept post-disablement and will only be deleted when it breaches expiration " +
"thresholds. If %s is selected then the data will be made inaccessible immediately by advancing the log start offset and will be " +
"deleted asynchronously.", REMOTE_LOG_DISABLE_POLICY_RETAIN, REMOTE_LOG_DISABLE_POLICY_DELETE,
REMOTE_LOG_DISABLE_POLICY_RETAIN, REMOTE_LOG_DISABLE_POLICY_DELETE);
public static final String REMOTE_LOG_DELETE_ON_DISABLE_CONFIG = "remote.log.delete.on.disable";
public static final String REMOTE_LOG_DELETE_ON_DISABLE_DOC = "Determines whether tiered data for a topic should be " +
"deleted after tiered storage is disabled on a topic. This configuration should be enabled when trying to " +
"set `remote.storage.enable` from true to false";
public static final String MAX_MESSAGE_BYTES_CONFIG = "max.message.bytes";
public static final String MAX_MESSAGE_BYTES_DOC =
"The largest record batch size allowed by Kafka (after compression if compression is enabled). " +
"If this is increased and there are consumers older than 0.10.2, the consumers' fetch " +
"size must also be increased so that they can fetch record batches this large. " +
"In the latest message format version, records are always grouped into batches for efficiency. " +
"In previous message format versions, uncompressed records are not grouped into batches and this " +
"limit only applies to a single record in that case.";
"If this is increased and there are consumers older than 0.10.2, the consumers' fetch " +
"size must also be increased so that they can fetch record batches this large. " +
"In the latest message format version, records are always grouped into batches for efficiency. " +
"In previous message format versions, uncompressed records are not grouped into batches and this " +
"limit only applies to a single record in that case.";
public static final String INDEX_INTERVAL_BYTES_CONFIG = "index.interval.bytes";
public static final String INDEX_INTERVAL_BYTES_DOC = "This setting controls how frequently " +
@ -166,7 +165,9 @@ public class TopicConfig {
public static final String UNCLEAN_LEADER_ELECTION_ENABLE_CONFIG = "unclean.leader.election.enable";
public static final String UNCLEAN_LEADER_ELECTION_ENABLE_DOC = "Indicates whether to enable replicas " +
"not in the ISR set to be elected as leader as a last resort, even though doing so may result in data " +
"loss.";
"loss.<p>Note: In KRaft mode, when enabling this config dynamically, it needs to wait for the unclean leader election" +
"thread to trigger election periodically (default is 5 minutes). Please run `kafka-leader-election.sh` with `unclean` option " +
"to trigger the unclean leader election immediately if needed.</p>";
public static final String MIN_IN_SYNC_REPLICAS_CONFIG = "min.insync.replicas";
public static final String MIN_IN_SYNC_REPLICAS_DOC = "When a producer sets acks to \"all\" (or \"-1\"), " +
@ -256,4 +257,32 @@ public class TopicConfig {
"broker will not perform down-conversion for consumers expecting an older message format. The broker responds " +
"with <code>UNSUPPORTED_VERSION</code> error for consume requests from such older clients. This configuration" +
"does not apply to any message format conversion that might be required for replication to followers.";
// AutoMQ inject start
public static final String TABLE_TOPIC_ENABLE_CONFIG = "automq.table.topic.enable";
public static final String TABLE_TOPIC_ENABLE_DOC = "This configuration controls whether the table topic feature is enabled";
public static final String TABLE_TOPIC_COMMIT_INTERVAL_CONFIG = "automq.table.topic.commit.interval.ms";
public static final String TABLE_TOPIC_COMMIT_INTERVAL_DOC = "The table topic commit interval (ms)";
public static final String TABLE_TOPIC_NAMESPACE_CONFIG = "automq.table.topic.namespace";
public static final String TABLE_TOPIC_NAMESPACE_DOC = "The table topic namespace";
public static final String TABLE_TOPIC_SCHEMA_TYPE_CONFIG = "automq.table.topic.schema.type";
public static final String TABLE_TOPIC_SCHEMA_TYPE_DOC = "The table topic schema type; supported values: schemaless, schema";
public static final String TABLE_TOPIC_ID_COLUMNS_CONFIG = "automq.table.topic.id.columns";
public static final String TABLE_TOPIC_ID_COLUMNS_DOC = "The primary key: a comma-separated list of columns that identify a row in the table, "
+ "e.g. [region, name]";
public static final String TABLE_TOPIC_PARTITION_BY_CONFIG = "automq.table.topic.partition.by";
public static final String TABLE_TOPIC_PARTITION_BY_DOC = "The partition fields of the table, e.g. [bucket(name), month(timestamp)]";
public static final String TABLE_TOPIC_UPSERT_ENABLE_CONFIG = "automq.table.topic.upsert.enable";
public static final String TABLE_TOPIC_UPSERT_ENABLE_DOC = "This configuration controls whether table topic upsert is enabled";
public static final String TABLE_TOPIC_CDC_FIELD_CONFIG = "automq.table.topic.cdc.field";
public static final String TABLE_TOPIC_CDC_FIELD_DOC = "The name of the field containing the CDC operation (I, U, or D)";
public static final String KAFKA_LINKS_ID_CONFIG = "automq.kafka.links.id";
public static final String KAFKA_LINKS_ID_DOC = "The unique id of a Kafka link";
public static final String KAFKA_LINKS_TOPIC_START_TIME_CONFIG = "automq.kafka.links.topic.start.time";
public static final String KAFKA_LINKS_TOPIC_START_TIME_DOC = "The offset to start replicating from. Valid values: -1 (latest), -2 (earliest), positive value (timestamp)";
public static final String KAFKA_LINKS_TOPIC_STATE_CONFIG = "automq.kafka.links.topic.state";
public static final String KAFKA_LINKS_TOPIC_STATE_DOC = "The state of the topic being linked";
// AutoMQ inject end
}
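To illustrate the AutoMQ table-topic configs above, a hedged sketch of creating a topic with them via the admin client; the topic name, partition count, and values are illustrative, and `admin` is assumed to be an existing Admin instance.
Map<String, String> configs = new HashMap<>();
configs.put(TopicConfig.TABLE_TOPIC_ENABLE_CONFIG, "true");
configs.put(TopicConfig.TABLE_TOPIC_COMMIT_INTERVAL_CONFIG, "60000"); // commit every minute
configs.put(TopicConfig.TABLE_TOPIC_SCHEMA_TYPE_CONFIG, "schema");    // or "schemaless"
configs.put(TopicConfig.TABLE_TOPIC_ID_COLUMNS_CONFIG, "[region, name]");
NewTopic tableTopic = new NewTopic("orders", 16, (short) 3).configs(configs);
admin.createTopics(Collections.singleton(tableTopic)).all().get();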

View File

@ -1,4 +1,6 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
@ -14,16 +16,15 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kafka.raft.errors;
/**
* Indicates that an append operation cannot be completed because it would have resulted in an
* unexpected base offset.
*/
public class UnexpectedBaseOffsetException extends RaftException {
private static final long serialVersionUID = 1L;
package org.apache.kafka.common.errors.s3;
public UnexpectedBaseOffsetException(String s) {
super(s);
import org.apache.kafka.common.errors.ApiException;
public class NodeLockedException extends ApiException {
public NodeLockedException(String message) {
super(message);
}
}

View File

@ -0,0 +1,28 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kafka.common.errors.s3;
import org.apache.kafka.common.errors.ApiException;
public class ObjectNotCommittedException extends ApiException {
public ObjectNotCommittedException(String message) {
super(message);
}
}

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kafka.common.errors.s3;

View File

@ -30,7 +30,13 @@ public class Topic {
public static final String TRANSACTION_STATE_TOPIC_NAME = "__transaction_state";
public static final String SHARE_GROUP_STATE_TOPIC_NAME = "__share_group_state";
public static final String CLUSTER_METADATA_TOPIC_NAME = "__cluster_metadata";
// AutoMQ inject start
public static final String AUTO_BALANCER_METRICS_TOPIC_NAME = "__auto_balancer_metrics";
public static final String TABLE_TOPIC_CONTROL_TOPIC_NAME = "__automq_table_control";
public static final String TABLE_TOPIC_DATA_TOPIC_NAME = "__automq_table_data";
// AutoMQ inject end
public static final TopicPartition CLUSTER_METADATA_TOPIC_PARTITION = new TopicPartition(
CLUSTER_METADATA_TOPIC_NAME,
0

View File

@ -129,4 +129,13 @@ public final class KafkaMetric implements Metric {
this.config = config;
}
}
// AutoMQ inject start
/**
* A public method to expose the {@link #measurableValue} method.
*/
public double measurableValueV2(long timeMs) {
return measurableValue(timeMs);
}
// AutoMQ inject end
}
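A one-line hedged sketch of the added accessor; `metric` is assumed to be a KafkaMetric backed by a Measurable value provider.
double current = metric.measurableValueV2(Time.SYSTEM.milliseconds());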

View File

@ -123,8 +123,8 @@ public enum ApiKeys {
SHARE_GROUP_DESCRIBE(ApiMessageType.SHARE_GROUP_DESCRIBE),
SHARE_FETCH(ApiMessageType.SHARE_FETCH),
SHARE_ACKNOWLEDGE(ApiMessageType.SHARE_ACKNOWLEDGE),
ADD_RAFT_VOTER(ApiMessageType.ADD_RAFT_VOTER),
REMOVE_RAFT_VOTER(ApiMessageType.REMOVE_RAFT_VOTER),
ADD_RAFT_VOTER(ApiMessageType.ADD_RAFT_VOTER, false, RecordBatch.MAGIC_VALUE_V0, true),
REMOVE_RAFT_VOTER(ApiMessageType.REMOVE_RAFT_VOTER, false, RecordBatch.MAGIC_VALUE_V0, true),
UPDATE_RAFT_VOTER(ApiMessageType.UPDATE_RAFT_VOTER),
INITIALIZE_SHARE_GROUP_STATE(ApiMessageType.INITIALIZE_SHARE_GROUP_STATE, true),
READ_SHARE_GROUP_STATE(ApiMessageType.READ_SHARE_GROUP_STATE, true),
@ -148,9 +148,11 @@ public enum ApiKeys {
AUTOMQ_REGISTER_NODE(ApiMessageType.AUTOMQ_REGISTER_NODE, false, false),
AUTOMQ_GET_NODES(ApiMessageType.AUTOMQ_GET_NODES, false, true),
AUTOMQ_ZONE_ROUTER(ApiMessageType.AUTOMQ_ZONE_ROUTER, false, false),
AUTOMQ_GET_PARTITION_SNAPSHOT(ApiMessageType.AUTOMQ_GET_PARTITION_SNAPSHOT, false, false),
GET_NEXT_NODE_ID(ApiMessageType.GET_NEXT_NODE_ID, false, true),
DESCRIBE_STREAMS(ApiMessageType.DESCRIBE_STREAMS, false, true);
DESCRIBE_STREAMS(ApiMessageType.DESCRIBE_STREAMS, false, true),
AUTOMQ_UPDATE_GROUP(ApiMessageType.AUTOMQ_UPDATE_GROUP);
// AutoMQ for Kafka inject end
private static final Map<ApiMessageType.ListenerType, EnumSet<ApiKeys>> APIS_BY_LISTENER =
@ -307,7 +309,7 @@ public enum ApiKeys {
b.append("<th>Key</th>\n");
b.append("</tr>");
clientApis().stream()
.filter(apiKey -> !apiKey.messageType.latestVersionUnstable())
.filter(apiKey -> apiKey.toApiVersion(false).isPresent())
.forEach(apiKey -> {
b.append("<tr>\n");
b.append("<td>");

View File

@ -151,6 +151,8 @@ import org.apache.kafka.common.errors.s3.KeyExistException;
import org.apache.kafka.common.errors.s3.NodeEpochExpiredException;
import org.apache.kafka.common.errors.s3.NodeEpochNotExistException;
import org.apache.kafka.common.errors.s3.NodeFencedException;
import org.apache.kafka.common.errors.s3.NodeLockedException;
import org.apache.kafka.common.errors.s3.ObjectNotCommittedException;
import org.apache.kafka.common.errors.s3.ObjectNotExistException;
import org.apache.kafka.common.errors.s3.OffsetNotMatchedException;
import org.apache.kafka.common.errors.s3.RedundantOperationException;
@ -435,6 +437,8 @@ public enum Errors {
KEY_EXIST(512, "The key already exists.", KeyExistException::new),
KEY_NOT_EXIST(513, "The key does not exist.", ObjectNotExistException::new),
NODE_FENCED(514, "The node is fenced.", NodeFencedException::new),
NODE_LOCKED(515, "The node is locked.", NodeLockedException::new),
OBJECT_NOT_COMMITED(516, "The object is not committed.", ObjectNotCommittedException::new),
STREAM_INNER_ERROR(599, "The stream inner error.", StreamInnerErrorException::new),
// AutoMQ inject end
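A short sketch of how the new codes round-trip through the existing Errors helpers; the message text is illustrative.
Errors locked = Errors.forCode((short) 515);            // Errors.NODE_LOCKED
ApiException e = Errors.OBJECT_NOT_COMMITED.exception("object 42 has not been committed yet");
// `e` is an ObjectNotCommittedException, as registered above.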

View File

@ -499,6 +499,11 @@ public abstract class AbstractLegacyRecordBatch extends AbstractRecordBatch impl
throw new UnsupportedOperationException("Magic versions prior to 2 do not support partition leader epoch");
}
@Override
public void setProducerId(long producerId) {
throw new UnsupportedOperationException("Magic versions prior to 2 do not support producer id");
}
private void setTimestampAndUpdateCrc(TimestampType timestampType, long timestamp) {
byte attributes = LegacyRecord.computeAttributes(magic(), compressionType(), timestampType);
buffer.put(LOG_OVERHEAD + LegacyRecord.ATTRIBUTES_OFFSET, attributes);

View File

@ -16,6 +16,13 @@
*/
package org.apache.kafka.common.record;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigException;
import java.util.zip.Deflater;
import static org.apache.kafka.common.config.ConfigDef.Range.between;
/**
* The compression type to use
*/
@ -23,7 +30,46 @@ public enum CompressionType {
NONE((byte) 0, "none", 1.0f),
// Shipped with the JDK
GZIP((byte) 1, "gzip", 1.0f),
GZIP((byte) 1, "gzip", 1.0f) {
public static final int MIN_LEVEL = Deflater.BEST_SPEED;
public static final int MAX_LEVEL = Deflater.BEST_COMPRESSION;
public static final int DEFAULT_LEVEL = Deflater.DEFAULT_COMPRESSION;
@Override
public int defaultLevel() {
return DEFAULT_LEVEL;
}
@Override
public int maxLevel() {
return MAX_LEVEL;
}
@Override
public int minLevel() {
return MIN_LEVEL;
}
@Override
public ConfigDef.Validator levelValidator() {
return new ConfigDef.Validator() {
@Override
public void ensureValid(String name, Object o) {
if (o == null)
throw new ConfigException(name, null, "Value must be non-null");
int level = ((Number) o).intValue();
if (level > MAX_LEVEL || (level < MIN_LEVEL && level != DEFAULT_LEVEL)) {
throw new ConfigException(name, o, "Value must be between " + MIN_LEVEL + " and " + MAX_LEVEL + " or equal to " + DEFAULT_LEVEL);
}
}
@Override
public String toString() {
return "[" + MIN_LEVEL + ",...," + MAX_LEVEL + "] or " + DEFAULT_LEVEL;
}
};
}
},
// We should only load classes from a given compression library when we actually use said compression library. This
// is because compression libraries include native code for a set of platforms and we want to avoid errors
@ -31,8 +77,65 @@ public enum CompressionType {
// To ensure this, we only reference compression library code from classes that are only invoked when actual usage
// happens.
SNAPPY((byte) 2, "snappy", 1.0f),
LZ4((byte) 3, "lz4", 1.0f),
ZSTD((byte) 4, "zstd", 1.0f);
LZ4((byte) 3, "lz4", 1.0f) {
// These values come from net.jpountz.lz4.LZ4Constants
// We may need to update them if the lz4 library changes these values.
private static final int MIN_LEVEL = 1;
private static final int MAX_LEVEL = 17;
private static final int DEFAULT_LEVEL = 9;
@Override
public int defaultLevel() {
return DEFAULT_LEVEL;
}
@Override
public int maxLevel() {
return MAX_LEVEL;
}
@Override
public int minLevel() {
return MIN_LEVEL;
}
@Override
public ConfigDef.Validator levelValidator() {
return between(MIN_LEVEL, MAX_LEVEL);
}
},
ZSTD((byte) 4, "zstd", 1.0f) {
// These values come from the zstd library. We don't use the Zstd.minCompressionLevel(),
// Zstd.maxCompressionLevel() and Zstd.defaultCompressionLevel() methods to not load the Zstd library
// while parsing configuration.
// See ZSTD_minCLevel in https://github.com/facebook/zstd/blob/dev/lib/compress/zstd_compress.c#L6987
// and ZSTD_TARGETLENGTH_MAX https://github.com/facebook/zstd/blob/dev/lib/zstd.h#L1249
private static final int MIN_LEVEL = -131072;
// See ZSTD_MAX_CLEVEL in https://github.com/facebook/zstd/blob/dev/lib/compress/clevels.h#L19
private static final int MAX_LEVEL = 22;
// See ZSTD_CLEVEL_DEFAULT in https://github.com/facebook/zstd/blob/dev/lib/zstd.h#L129
private static final int DEFAULT_LEVEL = 3;
@Override
public int defaultLevel() {
return DEFAULT_LEVEL;
}
@Override
public int maxLevel() {
return MAX_LEVEL;
}
@Override
public int minLevel() {
return MIN_LEVEL;
}
@Override
public ConfigDef.Validator levelValidator() {
return between(MIN_LEVEL, MAX_LEVEL);
}
};
// compression type is represented by two bits in the attributes field of the record batch header, so `byte` is
// large enough
@ -78,6 +181,22 @@ public enum CompressionType {
throw new IllegalArgumentException("Unknown compression name: " + name);
}
public int defaultLevel() {
throw new UnsupportedOperationException("Compression levels are not defined for this compression type: " + name);
}
public int maxLevel() {
throw new UnsupportedOperationException("Compression levels are not defined for this compression type: " + name);
}
public int minLevel() {
throw new UnsupportedOperationException("Compression levels are not defined for this compression type: " + name);
}
public ConfigDef.Validator levelValidator() {
throw new UnsupportedOperationException("Compression levels are not defined for this compression type: " + name);
}
@Override
public String toString() {
return name;

View File
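To make the per-type level API introduced in CompressionType above concrete, a short hedged sketch; the values follow the constants shown in the diff.
int gzipDefault = CompressionType.GZIP.defaultLevel();  // Deflater.DEFAULT_COMPRESSION (-1)
int zstdMax = CompressionType.ZSTD.maxLevel();          // 22
// Level validation without loading any compression library:
CompressionType.LZ4.levelValidator().ensureValid("compression.lz4.level", 9);
// Types without levels (NONE, SNAPPY) throw UnsupportedOperationException from these methods.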

@ -190,6 +190,16 @@ public class DefaultRecordBatch extends AbstractRecordBatch implements MutableRe
return buffer.getLong(PRODUCER_ID_OFFSET);
}
@Override
public void setProducerId(long producerId) {
if (producerId() == producerId) {
return;
}
buffer.putLong(PRODUCER_ID_OFFSET, producerId);
long crc = computeChecksum();
ByteUtils.writeUnsignedInt(buffer, CRC_OFFSET, crc);
}
@Override
public short producerEpoch() {
return buffer.getShort(PRODUCER_EPOCH_OFFSET);

View File

@ -65,4 +65,13 @@ public interface MutableRecordBatch extends RecordBatch {
* @return The closeable iterator
*/
CloseableIterator<Record> skipKeyValueIterator(BufferSupplier bufferSupplier);
// AutoMQ injection start
/**
* Set the producer id for this batch of records.
* @param producerId The producer id to use
*/
void setProducerId(long producerId);
// AutoMQ injection end
}
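A hedged sketch of the new setter in use, assuming the Compression and MemoryRecords helpers available in this codebase; the producer id is illustrative.
MemoryRecords records = MemoryRecords.withRecords(Compression.NONE,
    new SimpleRecord("key".getBytes(), "value".getBytes()));
for (MutableRecordBatch batch : records.batches()) {
    batch.setProducerId(42L); // DefaultRecordBatch rewrites the producer id field and recomputes the CRC
}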

View File

@ -1,12 +1,20 @@
/*
* Copyright 2024, AutoMQ HK Limited.
* Copyright 2025, AutoMQ HK Limited.
*
* The use of this file is governed by the Business Source License,
* as detailed in the file "/LICENSE.S3Stream" included in this repository.
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kafka.common.record;

View File

@ -24,7 +24,9 @@ import org.apache.kafka.common.protocol.MessageUtil;
import org.apache.kafka.common.protocol.ObjectSerializationCache;
import org.apache.kafka.common.protocol.SendBuilder;
import org.apache.kafka.common.requests.s3.AutomqGetNodesRequest;
import org.apache.kafka.common.requests.s3.AutomqGetPartitionSnapshotRequest;
import org.apache.kafka.common.requests.s3.AutomqRegisterNodeRequest;
import org.apache.kafka.common.requests.s3.AutomqUpdateGroupRequest;
import org.apache.kafka.common.requests.s3.AutomqZoneRouterRequest;
import org.apache.kafka.common.requests.s3.CloseStreamsRequest;
import org.apache.kafka.common.requests.s3.CommitStreamObjectRequest;
@ -375,10 +377,14 @@ public abstract class AbstractRequest implements AbstractRequestResponse {
return AutomqGetNodesRequest.parse(buffer, apiVersion);
case AUTOMQ_ZONE_ROUTER:
return AutomqZoneRouterRequest.parse(buffer, apiVersion);
case AUTOMQ_GET_PARTITION_SNAPSHOT:
return AutomqGetPartitionSnapshotRequest.parse(buffer, apiVersion);
case GET_NEXT_NODE_ID:
return GetNextNodeIdRequest.parse(buffer, apiVersion);
case DESCRIBE_STREAMS:
return DescribeStreamsRequest.parse(buffer, apiVersion);
case AUTOMQ_UPDATE_GROUP:
return AutomqUpdateGroupRequest.parse(buffer, apiVersion);
// AutoMQ for Kafka inject end
case SHARE_GROUP_HEARTBEAT:

View File

@ -22,7 +22,9 @@ import org.apache.kafka.common.protocol.Errors;
import org.apache.kafka.common.protocol.MessageUtil;
import org.apache.kafka.common.protocol.SendBuilder;
import org.apache.kafka.common.requests.s3.AutomqGetNodesResponse;
import org.apache.kafka.common.requests.s3.AutomqGetPartitionSnapshotResponse;
import org.apache.kafka.common.requests.s3.AutomqRegisterNodeResponse;
import org.apache.kafka.common.requests.s3.AutomqUpdateGroupResponse;
import org.apache.kafka.common.requests.s3.AutomqZoneRouterResponse;
import org.apache.kafka.common.requests.s3.CloseStreamsResponse;
import org.apache.kafka.common.requests.s3.CommitStreamObjectResponse;
@ -312,10 +314,14 @@ public abstract class AbstractResponse implements AbstractRequestResponse {
return AutomqGetNodesResponse.parse(responseBuffer, version);
case AUTOMQ_ZONE_ROUTER:
return AutomqZoneRouterResponse.parse(responseBuffer, version);
case AUTOMQ_GET_PARTITION_SNAPSHOT:
return AutomqGetPartitionSnapshotResponse.parse(responseBuffer, version);
case GET_NEXT_NODE_ID:
return GetNextNodeIdResponse.parse(responseBuffer, version);
case DESCRIBE_STREAMS:
return DescribeStreamsResponse.parse(responseBuffer, version);
case AUTOMQ_UPDATE_GROUP:
return AutomqUpdateGroupResponse.parse(responseBuffer, version);
// AutoMQ for Kafka inject end
case SHARE_GROUP_HEARTBEAT:

View File

@ -46,7 +46,7 @@ public class AddRaftVoterResponse extends AbstractResponse {
@Override
public void maybeSetThrottleTimeMs(int throttleTimeMs) {
// not supported
data.setThrottleTimeMs(throttleTimeMs);
}
@Override

View File

@ -32,26 +32,39 @@ public class ApiVersionsRequest extends AbstractRequest {
public static class Builder extends AbstractRequest.Builder<ApiVersionsRequest> {
private static final String DEFAULT_CLIENT_SOFTWARE_NAME = "apache-kafka-java";
private static final ApiVersionsRequestData DATA = new ApiVersionsRequestData()
private static final ApiVersionsRequestData DEFAULT_DATA = new ApiVersionsRequestData()
.setClientSoftwareName(DEFAULT_CLIENT_SOFTWARE_NAME)
.setClientSoftwareVersion(AppInfoParser.getVersion());
private final ApiVersionsRequestData data;
public Builder() {
super(ApiKeys.API_VERSIONS);
this(DEFAULT_DATA,
ApiKeys.API_VERSIONS.oldestVersion(),
ApiKeys.API_VERSIONS.latestVersion());
}
public Builder(short version) {
super(ApiKeys.API_VERSIONS, version);
this(DEFAULT_DATA, version, version);
}
public Builder(
ApiVersionsRequestData data,
short oldestAllowedVersion,
short latestAllowedVersion
) {
super(ApiKeys.API_VERSIONS, oldestAllowedVersion, latestAllowedVersion);
this.data = data.duplicate();
}
@Override
public ApiVersionsRequest build(short version) {
return new ApiVersionsRequest(DATA, version);
return new ApiVersionsRequest(data, version);
}
@Override
public String toString() {
return DATA.toString();
return data.toString();
}
}
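A brief sketch of the widened Builder introduced above; the client software name and version are illustrative.
ApiVersionsRequestData data = new ApiVersionsRequestData()
    .setClientSoftwareName("my-client")
    .setClientSoftwareVersion("1.0.0");
short latest = ApiKeys.API_VERSIONS.latestVersion();
ApiVersionsRequest request = new ApiVersionsRequest.Builder(data, (short) 0, latest).build(latest);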

View File

@ -289,19 +289,17 @@ public class ApiVersionsResponse extends AbstractResponse {
SupportedFeatureKeyCollection converted = new SupportedFeatureKeyCollection();
for (Map.Entry<String, SupportedVersionRange> feature : latestSupportedFeatures.features().entrySet()) {
final SupportedVersionRange versionRange = feature.getValue();
final SupportedFeatureKey key = new SupportedFeatureKey();
key.setName(feature.getKey());
if (alterV0 && versionRange.min() == 0) {
// Some older clients will have deserialization problems if a feature's
// minimum supported level is 0. Therefore, when preparing ApiVersionResponse
// at versions less than 4, we must set the minimum version for these features
// to 1 rather than 0. See KAFKA-17011 for details.
key.setMinVersion((short) 1);
// at versions less than 4, we must omit these features. See KAFKA-17492.
} else {
final SupportedFeatureKey key = new SupportedFeatureKey();
key.setName(feature.getKey());
key.setMinVersion(versionRange.min());
key.setMaxVersion(versionRange.max());
converted.add(key);
}
key.setMaxVersion(versionRange.max());
converted.add(key);
}
return converted;

Some files were not shown because too many files have changed in this diff.