The task-level commit metrics were removed without deprecation in KIP-447 and the corresponding PR #8218. However, we forgot to update the docs and to remove the code that creates the task-level commit sensor.
This PR removes the task-level commit metrics from the docs and removes the code that creates the task-level commit sensor. The code was effectively dead since it was only used in tests.
Reviewers: Guozhang Wang <wangguoz@gmail.com>, Matthias J. Sax <matthias@confluent.io>
Adding the note from the kafka-site repo to the main repo. I also included the fixed link.
apache/kafka-site@9fa596c, apache/kafka-site@4eb2409
Reviewers: Divij Vaidya <diviv@amazon.com>, Ismael Juma <ismael@juma.me.uk>
[KIP-962](https://cwiki.apache.org/confluence/display/KAFKA/KIP-962%3A+Relax+non-null+key+requirement+in+Kafka+Streams)
The key requirements were relaxed for the following Kafka Streams DSL operators (see the sketch after this list):
left join KStream-KStream: left records with a null key are no longer dropped; the ValueJoiner is called with 'null' for the right value.
outer join KStream-KStream: left/right records with a null key are no longer dropped; the ValueJoiner is called with 'null' for the right/left value.
left-foreign-key join KTable-KTable: left records for which the ForeignKeyExtractor returns a null foreign key are no longer dropped; the ValueJoiner is called with 'null' for the right value.
left join KStream-KTable: left records with a null key are no longer dropped; the ValueJoiner is called with 'null' for the right value.
left join KStream-GlobalTable: records for which the KeyValueMapper returns 'null' are no longer dropped; the ValueJoiner is called with 'null' for the right value.
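For illustration, a minimal sketch of the first case (topic names and value types are assumptions, not taken from the PR); with this change a left record with a null key is no longer dropped and the ValueJoiner is invoked with null for the right value:
import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> left = builder.stream("left-topic");
KStream<String, String> right = builder.stream("right-topic");
left.leftJoin(
        right,
        // rightValue is null for unmatched left records, including null-key records
        (leftValue, rightValue) -> leftValue + "/" + rightValue,
        JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofMinutes(5)))
    .to("joined-topic");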
Reviewers: Walker Carlson <wcarlson@apache.org>
Part of KIP-985.
Updates JavaDocs for `RangeQuery` and `ReadOnlyKeyValueStore` with regard to ordering guarantees.
Updates Kafka Streams upgrade guide with KIP information.
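For context, a hedged sketch of issuing a RangeQuery through IQv2 (the store name, types, and the withDescendingKeys call are assumptions based on KIP-985, not taken from this PR); per the updated JavaDocs, results are ordered by key within each partition, but not across partitions:
import org.apache.kafka.streams.query.RangeQuery;
import org.apache.kafka.streams.query.StateQueryRequest;
import org.apache.kafka.streams.query.StateQueryResult;
import org.apache.kafka.streams.state.KeyValueIterator;

// assuming an existing, running KafkaStreams instance named kafkaStreams
RangeQuery<String, Long> query = RangeQuery.<String, Long>withRange("a", "f").withDescendingKeys();
StateQueryRequest<KeyValueIterator<String, Long>> request =
        StateQueryRequest.inStore("counts-store").withQuery(query);
StateQueryResult<KeyValueIterator<String, Long>> result = kafkaStreams.query(request);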
Reviewer: Matthias J. Sax <matthias@confluent.io>
Some users complained that they have no way to determine whether a problem lies in the RSM plug-in they implemented or in Kafka itself. Also, for users who just want to try the tiered storage feature out before implementing anything, it would be good to have a default RSM implementation.
Per the discussion in the KIP, there will be no default RSM implementation in Kafka, but we can use the LocalTieredStorage implementation written for integration tests to resolve the issues above.
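As a hedged illustration, trying the feature out with LocalTieredStorage would look roughly like this in the broker config (class and package names are assumptions and should be checked against the shipped artifacts):
# enable tiered storage with the test-only local plugin (for experimentation, not production)
remote.log.storage.system.enable=true
remote.log.storage.manager.class.name=org.apache.kafka.server.log.remote.storage.LocalTieredStorage
remote.log.metadata.manager.class.name=org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager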
Reviewers: Christo Lolov <lolovc@amazon.com>, Divij Vaidya <diviv@amazon.com>, Kamal Chandraprakash <kamal.chandraprakash@gmail.com>, Satish Duggana <satishd@apache.org>
This change adds the upgrade documentation for 3.6.0 and fixes the position of the notable changes in 3.5.0.
In previous releases, notable changes have always come after the upgrade instructions.
Reviewers: Luke Chen <showuon@gmail.com>, Satish Duggana <satishd@apache.org>
In KIP-778 we introduced the "unsafe" (lossy) downgrade for the case where the metadata has changed in one of the versions between the target and the current one, as defined in MetadataVersion.
The documentation says it is possible:
"Note that the cluster metadata version cannot be downgraded to a pre-production 3.0.x, 3.1.x, or 3.2.x version once it has been upgraded. However, it is possible to downgrade to production versions such as 3.3-IV0, 3.3-IV1, etc."
The command line tool shows that this doesn't work:
bin/kafka-features.sh --bootstrap-server :9092 downgrade --metadata 3.4 --unsafe
Could not downgrade metadata.version to 8. Invalid metadata.version 8. Unsafe metadata downgrade is not supported in this version.
1 out of 1 operation(s) failed.
In addition to unsafe downgrades, safe metadata downgrades are also unsupported in practice. For example, when you upgrade to 3.5, you land on 3.5-IV2 as the metadata version, which has metadata changes and won't let you downgrade. This is true for every other release at the moment.
This change fixes the documentation to reflect that, and improves the error messages.
Signed-off-by: Federico Valeri <fedevaleri@gmail.com>
Reviewers: Luke Chen <showuon@gmail.com>, Jakub Scholz <github@scholzj.com>
I've added details for VerificationFailureRate and VerificationTimeMs.
I considered adding documentation for the AddPartitionsToTxnVerification metrics, but I noticed that the request metrics simply list Produce|FetchConsumer|FetchFollower. Since we don't already report the AddPartitionsToTxn request metrics in this file (nor those of all the other APIs we report), it doesn't make sense to add the verification variant.
Filed a follow-up JIRA in case we want to redo that whole section.
Reviewers: Divij Vaidya <diviv@amazon.com>
In "ZooKeeper to KRaft Migration" documentation, we are still reporting 3.4 as metadata version. Reworking that phrase to make it more clear and avoid the need to update it in the future.
Signed-off-by: Federico Valeri <fedevaleri@gmail.com>
Reviewers: Luke Chen <showuon@gmail.com>
This change does the following:
1. Make RemoteLogManagerConfigs that are implemented public
2. Add tasks to generate html docs for the configs
3. Include config docs in the main site
Reviewers: Divij Vaidya <diviv@amazon.com>, Luke Chen <showuon@gmail.com>, Christo Lolov <lolovc@amazon.com>, Satish Duggana <satishd@apache.org>
This implementation introduces two new configurations `log.message.timestamp.before.max.ms` and `log.message.timestamp.after.max.ms` and deprecates `log.message.timestamp.difference.max.ms`.
The default value for all three configs remains Long.MAX_VALUE for backward compatibility, but the newly added configurations give finer control when validating message timestamps that lie in the past or the future relative to the broker's timestamp.
To maintain backward compatibility, if the default value of `log.message.timestamp.before.max.ms` is unchanged, we assume users are still using the deprecated config `log.message.timestamp.difference.max.ms` and validate using its value. This ensures that existing customers who have customized the value of `log.message.timestamp.difference.max.ms` will continue to see no change in behavior.
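For example (values are illustrative), a broker could accept timestamps up to one hour in the past but only five minutes in the future relative to its own clock:
log.message.timestamp.before.max.ms=3600000
log.message.timestamp.after.max.ms=300000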
Reviewers: Divij Vaidya <diviv@amazon.com>, Christo Lolov <lolovc@amazon.com>
Reviewers: Ismael Juma <ismael@confluent.io>, Mickael Maison <mickael.maison@gmail.com>, Divij Vaidya <diviv@amazon.com>, Matthias J. Sax <matthias@confluent.io>
Fixed a regression described in KAFKA-15053 where security.protocol only allowed uppercase values like PLAINTEXT, SSL, SASL_PLAINTEXT, and SASL_SSL. With this fix, both lower-case and upper-case values are supported (e.g. PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL, plaintext, ssl, sasl_plaintext, sasl_ssl).
Reviewers: Chris Egerton <chrise@aiven.io>, Divij Vaidya <diviv@amazon.com>
Most of the content in the README.md was already covered in the docs, so only the section on exactly-once support had to be added.
Reviewers: Luke Chen <showuon@gmail.com>
Loosens the validation so that Kafka accepts duplicate listeners on the same port, but only if the listeners are valid IP addresses with one being an IPv4 address and the other an IPv6 address.
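A hedged sketch of a configuration that is now accepted (listener names and addresses are illustrative):
listeners=PLAINTEXT://127.0.0.1:9092,PLAINTEXT_V6://[::1]:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,PLAINTEXT_V6:PLAINTEXT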
Reviewers: Josep Prat <jlprat@apache.org>, Luke Chen <showuon@apache.org>
The CRC field of the record batch was incorrectly documented as int32 while in reality it is an unsigned 32-bit (uint32) field.
Reviewers: Luke Chen <showuon@gmail.com>
KIP-914 introduced a new boolean isLatest into Change to indicate whether a change update represents the latest for the key. Even though Change is serialized into the table repartition topic, the new boolean does not need to be serialized in, because the table repartition map processor performs an optimization to drop records for which isLatest = false. If not for this optimization, the downstream table aggregate would have to drop such records instead, and isLatest would need to be serialized into the repartition topic.
In light of the possibility that isLatest may need to be serialized into the repartition topic in the future, e.g., if other downstream processors are added which need to distinguish between records for which isLatest = true vs isLatest = false, this PR reserves repartition topic formats which include isLatest. Reserving these formats now comes at no additional cost to users since a rolling bounce is already required for the upcoming release due to KIP-904. If we don't reserve them now and instead have to add them later, then another bounce would be required at that time. Reserving formats is cheap, so we choose to do it now.
Reviewers: Matthias J. Sax <matthias@confluent.io>
Moved the information about the BooleanSerde addition from general upgrade notes to notes in Streams.
Reviewers: Walker Carlson <wcarlson@confluent.io>, Divij Vaidya <diviv@amazon.com>
Currently, the KRaft-related current-state metric reports the follower state for a broker, while technically it should be reported as an observer, as the kafka-metadata-quorum tool does.
Reviewers: Luke Chen <showuon@gmail.com>, dengziming <dengziming1993@gmail.com>
Documentation only: minor clarification on how max.warmup.replicas works; specifically, that one "warmup replica" corresponds to a Task that is restoring its state. Also clarifies how max.warmup.replicas interacts with probing.rebalance.interval.ms.
Reviewers: Lucas Brutschy <lbrutschy@confluent.io>, Matthias J. Sax <matthias@confluent.io>, Guozhang Wang <wangguoz@gmail.com>
This is the PR for the implementation of KIP-847: https://cwiki.apache.org/confluence/display/KAFKA/KIP-847%3A+Add+ProducerIdCount+metrics
Add ProducerIdCount metric at the broker level:
kafka.server:type=ReplicaManager,name=ProducerIdCount
Added unit tests to ensure the metric reports the count correctly.
Co-authored-by: Artem Livshits <84364232+artemlivshits@users.noreply.github.com>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Divij Vaidya <diviv@amazon.com>, Christo Lolov <christo_lolov@yahoo.com>, Alexandre Dupriez <alexandre.dupriez@gmail.com>, Justine Olshan <jolshan@confluent.io>
The table of (stateless) transformations uses the transformation name in the first column and a description in the second column. I adjusted the transformation name for FlatMapValues accordingly.
Reviewers: Matthias J. Sax <mjsax@apache.org>, Bill Bejeck <bbejeck@apache.org>
This patch adds a section in security.html about listener configuration. This includes the basics of how to define the security mapping of each listener as well as the configurations to control inter-cluster traffic.
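For reference, a minimal sketch of the kind of listener setup the new section covers (listener names, ports, and protocols are illustrative):
listeners=INTERNAL://:9092,EXTERNAL://:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SASL_SSL
inter.broker.listener.name=INTERNAL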
Reviewers: José Armando García Sancio <jsancio@users.noreply.github.com>, Luke Chen <showuon@gmail.com>
This commit adds KRaft monitoring related metrics to the Kafka docs (docs/ops.html).
Reviewers: Jason Gustafson <jason@confluent.io>, Luke Chen <showuon@gmail.com>
This PR is part of a series implementing the self-join rewriting. As part of it, we decided to clean up the TOPOLOGY_OPTIMIZATION_CONFIG and make it a list of optimization rules. Acceptable values are NO_OPTIMIZATION, OPTIMIZE (which applies all optimization rules), or a comma-separated list of specific optimizations.
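A minimal sketch of the resulting config (the specific rule names in the commented-out line are illustrative):
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
// apply all optimization rules:
props.put(StreamsConfig.TOPOLOGY_OPTIMIZATION_CONFIG, StreamsConfig.OPTIMIZE);
// or, alternatively, a comma-separated list of specific rules, e.g.:
// props.put(StreamsConfig.TOPOLOGY_OPTIMIZATION_CONFIG, "reuse.ktable.source.topics,merge.repartition.topics");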
Reviewers: Guozhang Wang <guozhang@apache.org>, John Roesler <vvcephei@apache.org>
Update security documentation to describe how to configure the KRaft `Authorizer` implementation and include a note about principal forwarding.
Additionally, this patch renames `KafkaConfig.Defaults.DefaultPrincipalSerde` to `DefaultPrincipalBuilder` since the former is somewhat misleading.
Reviewers: David Arthur <mumrah@gmail.com>
The following options are deprecated in kafka-acls.sh: `--authorizer`, `--authorizer-properties`, and `--zk-tls-config-file`. This patch updates the security documentation to mention the deprecation and changes examples to use `--bootstrap-server` when possible.
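For example, instead of passing `--authorizer-properties`, ACLs can be managed directly through the brokers (principal and topic names are illustrative):
bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --allow-principal User:alice --operation Read --topic test-topic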
Reviewers: Luke Chen <showuon@gmail.com>
Document process for recovering and formatting the metadata log directory for the KRaft controller.
Reviewers: Colin Patrick McCabe <cmccabe@apache.org>, Jason Gustafson <jason@confluent.io>
Update the quickstart HTML pages for Kafka and Kafka Streams to include how to quickly start and
experiment with a Kafka cluster using KRaft in addition to ZooKeeper.
Reviewers: Colin Patrick McCabe <cmccabe@apache.org>, Chase Thomas <forlack@users.noreply.github.com>, Luke Chen <showuon@gmail.com>
This PR adds support to kafka-features.sh for the --metadata flag, as specified in KIP-778. This
flag makes it possible to upgrade to a new metadata version without consulting a table mapping
version names to short integers. Change --feature to use a key=value format.
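For reference, describing the current feature levels and upgrading the metadata version then look roughly like this (bootstrap server and target version are illustrative):
bin/kafka-features.sh --bootstrap-server localhost:9092 describe
bin/kafka-features.sh --bootstrap-server localhost:9092 upgrade --metadata 3.3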
FeatureCommandTest.scala: make most tests here true unit tests (that don't start brokers) in order
to improve test run time, and allow us to test more cases. For the integration test part, test both
KRaft and ZK-based clusters. Add support for mocking feature operations in MockAdminClient.java.
upgrade.html: add a section describing how the metadata.version should be upgraded in KRaft
clusters.
Add kraft_upgrade_test.py to test upgrades between KRaft versions.
Reviewers: David Arthur <mumrah@gmail.com>, dengziming <dengziming1993@gmail.com>, José Armando García Sancio <jsancio@gmail.com>
This PR is a mirror of apache/kafka-site#433, which used placeholder images for the Kafka Streams videos that users need to click in order to load the iframe with the corresponding video.
Reviewers: Mickael Maison <mimaison@apache.org>
Update notable changes documentation to mention requiring IDEMPOTENT_WRITE permission
when producing messages with default/idempotent configuration and broker version lower than
2.8.0.
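A hedged example of granting that permission (principal name is illustrative):
bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --allow-principal User:my-producer --operation IdempotentWrite --cluster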
Reviewers: Ismael Juma <ismael@juma.me.uk>, Luke Chen <showuon@gmail.com>
New gradle task `connect:runtime:genConnectOpenAPIDocs` that generates `connect_rest.yaml` under `docs/generated`.
This task is executed when `siteDocsTar` runs.
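The task can also be invoked on its own:
./gradlew connect:runtime:genConnectOpenAPIDocs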
Implementation of KIP-846: Source/sink node metrics for Consumed/Produced throughput in Streams
Adds the following INFO topic-level metrics for the total bytes/records consumed and produced:
bytes-consumed-total
records-consumed-total
bytes-produced-total
records-produced-total
Reviewers: Kvicii <Karonazaba@gmail.com>, Guozhang Wang <guozhang@apache.org>, Bruno Cadonna <cadonna@apache.org>
If the wildcard * in a command is not wrapped in single quotes, bash expands it to the file names in the current folder, so we need to wrap it in single quotes. Updated the doc and the command option description; an example is shown below.
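To illustrate the quoting issue (the tool and option here are illustrative, not necessarily the ones changed in this PR):
# without quotes, bash may expand * into file names from the current directory:
bin/kafka-acls.sh --bootstrap-server localhost:9092 --list --topic *
# with single quotes, the literal wildcard reaches the tool:
bin/kafka-acls.sh --bootstrap-server localhost:9092 --list --topic '*'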
Reviewers: dengziming <dengziming1993@gmail.com>, Luke Chen <showuon@gmail.com>
Improve documentation for Kafka zero-copy. Kafka combines the page cache and zero-copy to greatly improve message consumption efficiency, but zero-copy only works with PlaintextTransportLayer.
Reviewers: Divij Vaidya <divijvaidya13@gmail.com>, Guozhang Wang <wangguoz@gmail.com>
Prior to this commit, the FK response sink routed FK results to
SubscriptionResolverJoinProcessorSupplier using the primary key.
There are cases where this behavior is incorrect, for example when
the KTable key serde differs from the data source serde, which can
happen without a key-changing operation.
Instead of determining the resolver partition by serializing the PK,
this patch includes the target partition in SubscriptionWrapper payloads.
The default FK response-sink partitioner extracts the correct partition
from the value and routes the message accordingly.
Reviewers: Matthias J. Sax <matthias@confluent.io>
The `DESTROYED` state is represented internally as a tombstone record when running in distributed mode and by the removal of the connector/task from the in-memory status map when running in standalone mode. As a result, it will never appear to users of the REST API, and we should remove mention of it from our docs so that developers creating tooling against the REST API don't write unnecessary logic to account for that state.
Reviewers: Mickael Maison <mickael.maison@gmail.com>
With this change we stop including, by default, the non-production-grade connectors that are meant to be used for demos and quick starts in the CLASSPATH and plugin.path of Connect deployments. The packages of these connectors will still be shipped with the Apache Kafka distribution and will be available for explicit inclusion.
The changes have been tested through the system tests and the existing unit and integration tests.
Reviewers: Mickael Maison <mickael.maison@gmail.com>, Randall Hauch <rhauch@gmail.com>
This bumps the slf4j version to 1.7.36 and swaps out log4j 1.2.17 with
reload4j 1.2.19
Signed-off-by: Mike Lothian <mike@fireburn.co.uk>
Reviewers: Luke Chen <showuon@gmail.com>, Ismael Juma <ismael@juma.me.uk>, Bruno Cadonna <cadonna@apache.org>
Adds documentation for KIP-708: Rack awareness for Kafka Streams
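For context, a hedged sketch of the configs the new docs cover (config names per KIP-708; tag keys and values are illustrative):
import java.util.Properties;

Properties props = new Properties();
// tag this Streams instance (KIP-708 client tags)
props.put("client.tag.zone", "eu-central-1a");
props.put("client.tag.cluster", "k8s-cluster-1");
// distribute standby replicas across these tag dimensions
props.put("rack.aware.assignment.tags", "zone,cluster");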
Co-authored-by: Bruno Cadonna <cadonna@apache.org>
Reviewers: Luke Chen <showuon@gmail.com>, Bruno Cadonna <cadonna@apache.org>
With AK 3.0, idempotence was enabled by default in Kafka producers. However, if idempotence is enabled, Connect won't be able to communicate via its producers with Kafka brokers older than version 0.11. Perhaps more importantly, for brokers older than version 2.8 the IDEMPOTENT_WRITE ACL is required to be granted to the principal of the Connect worker.
Therefore this commit disables producer idempotence by default for all the producers instantiated by Connect. Users can still choose to enable producer idempotence by explicitly setting the right worker and/or connector properties, as sketched below.
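A hedged sketch of opting back in (worker-level and, where client config overrides are enabled, connector-level):
# worker config: re-enable idempotence for the producers of all connectors on this worker
producer.enable.idempotence=true
# connector config: per-connector override (requires connector client config overrides to be allowed)
# producer.override.enable.idempotence=true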
The changes were tested via existing unit, integration and system tests.
Reviewers: Randall Hauch <rhauch@gmail.com>
Disable idempotence when conflicting config values for acks, retries
and max.in.flight.requests.per.connection are set by the user. For the
former two configs, we log at info level when we disable idempotence
due to conflicting configs. For the latter, we log at warn level since
it's due to an implementation detail that is likely to be surprising.
This mitigates compatibility impact of enabling idempotence by default.
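For example (a minimal sketch with illustrative settings), a producer configured with acks=1 and no explicit enable.idempotence now falls back to a non-idempotent producer and logs that at info level:
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.ACKS_CONFIG, "1"); // conflicts with idempotence, which needs acks=all
KafkaProducer<String, String> producer = new KafkaProducer<>(props);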
Added unit tests to verify the change in behavior.
Reviewers: Ismael Juma <ismael@juma.me.uk>, Jason Gustafson <jason@confluent.io>, Mickael Maison <mickael.maison@gmail.com>
* Mention `acks=1` to `acks=all` change in 3.0.0 upgrade docs
* Have a separate section for 3.0.1 and 3.1.1 as some may skip the
3.0.0/3.1.0 section when upgrading to a bug fix.
* Move the 3.0.0 note to the top since it's more impactful than the
other changes.
Reviewers: Jason Gustafson <jason@confluent.io>
In v3.0, we changed the default value for `enable.idempotence` to true, but we didn't adjust the validator and the `idempotence` enabled check method. So if a user didn't explicitly enable idempotence, this feature won't be turned on. This patch addresses the problem, cleans up associated logic, and fixes tests that broke as a result of properly applying the new default. Specifically it does the following:
1. fix the `ProducerConfig#idempotenceEnabled` method, to make it correctly detect if `idempotence` is enabled or not
2. remove some unnecessary config overrides and checks, since we already have defaults for the `acks`, `retries` and `enable.idempotence` configs.
3. move the config validator for the idempotent producer from `KafkaProducer` into `ProducerConfig`. The config validation should be the responsibility of `ProducerConfig` class.
4. add an `AbstractConfig#hasKeyInOriginals` method to check for the existence of a key without copying the `originals` configs.
5. fix many broken tests. As mentioned, we didn't actually enable idempotence in v3.0; after this PR, some tests broke due to behavioral differences between the idempotent and non-idempotent producer.
6. add additional tests to validate configuration behavior
Reviewers: Kirk True <kirk@mustardgrain.com>, Ismael Juma <ismael@juma.me.uk>, Mickael Maison <mimaison@users.noreply.github.com>, Jason Gustafson <jason@confluent.io>
This PR updates the documentation and tooling to match https://github.com/apache/kafka/pull/10914, which added support for encoding `deleteHorizonMs` in the record batch schema. The changes include adding the new attribute and updating field names. We have also updated stale references to the old `FirstTimestamp` field in the code and comments. Finally, in the `DumpLogSegments` tool, when record batch information is printed, it will also include the value of `deleteHorizonMs` (e.g. `OptionalLong.empty` or `OptionalLong[123456]`).
Reviewers: Vincent Jiang <84371940+vincent81jiang@users.noreply.github.com>, Kvicii <42023367+Kvicii@users.noreply.github.com>, Jason Gustafson <jason@confluent.io>
* Update main version of main documentation and add link to previous version;
* Update quick start guide (links do not work yet obviously);
* Add upgrade section.
Reviewers: Luke Chen <showuon@gmail.com>, Manikumar Reddy <manikumar.reddy@gmail.com>
In the RocksDB memory management doc, we mentioned in a footnote that a RocksDB bug prevented the strict_capacity_limit boolean parameter in the LRUCache constructor from being set to true. However, the bug was already fixed in 6.11.4, and we're using 6.22 now, so the note can be removed.
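With that restriction gone, a bounded-memory config setter can cap the cache strictly; a hedged sketch loosely modeled on the docs example (sizes and ratios are illustrative):
import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Cache;
import org.rocksdb.LRUCache;
import org.rocksdb.Options;

public class BoundedMemoryRocksDBConfig implements RocksDBConfigSetter {
    private static final long TOTAL_OFF_HEAP_MEMORY = 64 * 1024 * 1024L;
    // third argument is strict_capacity_limit, which can now be set to true
    private static final Cache CACHE = new LRUCache(TOTAL_OFF_HEAP_MEMORY, -1, true, 0.1);

    @Override
    public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
        final BlockBasedTableConfig tableConfig = (BlockBasedTableConfig) options.tableFormatConfig();
        tableConfig.setBlockCache(CACHE);
        tableConfig.setCacheIndexAndFilterBlocks(true);
        options.setTableFormatConfig(tableConfig);
    }

    @Override
    public void close(final String storeName, final Options options) {
        // the static cache is shared across stores, so it is not closed here
    }
}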
Reviewer: Bruno Cadonna <cadonna@apache.org>
Deprecate the eager rebalancing protocol in Kafka Streams and log a warning message when upgrade.from is set to 2.3 or lower. Also add a note in the upgrade doc to prepare users for the removal of eager rebalancing support.
Reviewers: Anna Sophie Blee-Goldman <ableegoldman@apache.org>
This PR adds some RocksDB metrics that could not be added in KIP-471 due to the RocksDB version at the time. The new metrics are extracted using histogram data provided by the RocksDB API, whereas the old ones were extracted using tickers. The new metrics are memtable-flush-time-(avg|min|max) and compaction-time-(avg|min|max).
Reviewer: Bruno Cadonna <cadonna@apache.org>
KAFKA-13243: KIP-773 Differentiate metric latency measured in ms and ns
Implementation of KIP-773
Deprecates inconsistent metrics bufferpool-wait-time-total,
io-waittime-total, and iotime-total.
Introduces new metrics bufferpool-wait-time-ns-total,
io-wait-time-ns-total, and io-time-ns-total with the same semantics as
before.
Adds metrics (old and new) in ops.html.
Adds upgrade guide for these metrics.
Reviewers: Guozhang Wang <wangguoz@gmail.com>, Tom Bentley <tbentley@redhat.com>
Java 17 is at release candidate stage and it will be a LTS release once
it's out (previous LTS release was Java 11).
Details:
* Replace Java 16 with Java 17 in Jenkins and Readme.
* Replace `--illegal-access=permit` (which was removed from Java 17)
with `--add-opens` for the packages we require internal access to.
Filed KAFKA-13275 for updating the tests not to require `--add-opens`
(where possible).
* Update `release.py` to use JDK 8 and JDK 17 (instead of JDK 8 and JDK 15).
* Removed all but one Streams test from `testsToExclude`. The
Connect test exclusion list remains the same.
* Add notable change to upgrade.html
* Upgrade to Gradle 7.2 as it's required for proper Java 17 support.
* Upgrade mockito to 3.12.4 for better Java 17 support.
* Adjusted `KafkaRaftClientTest` and `QuorumStateTest` not to require
private access to `jdk.internal.util.random`.
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>
Right now, we have scattered uses of `SerDes` throughout the docs. These should be updated to be `Serdes`, as that's what we commonly use now.
Reviewers: Boyang Chen <bchen11@outlook.com>
There were a few followup things to address from #10926, most importantly a number of updates to the javadocs. Also includes a few missing verification checks.
Reviewers: Guozhang Wang <guozhang@confluent.io>, Matthias J. Sax <mjsax@apache.org>, Israel Ekpo
We removed the default 24-hour grace period in KIP-633 and deprecated some grace-period methods, but we forgot to update the Streams docs.
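The updated docs point users at the explicit-grace APIs; a minimal sketch (window sizes are illustrative):
import java.time.Duration;
import org.apache.kafka.streams.kstream.TimeWindows;

TimeWindows withGrace = TimeWindows.ofSizeAndGrace(Duration.ofMinutes(5), Duration.ofMinutes(1));
TimeWindows withoutGrace = TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5));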
Reviewer: Bruno Cadonna <cadonna@apache.org>
Per the documentation, we must use the `--zookeeper` option in 3 places (alter and describe):
1. user configs where the config is a SCRAM mechanism name (i.e. a SCRAM credential for a user)
2. update broker configs for a particular broker when that broker is down
3. broker default configs when all brokers are down
Reference:
1. [config SCRAM Credentials](https://kafka.apache.org/documentation/#security_sasl_scram_credentials)
2. [Update config before broker started](https://kafka.apache.org/documentation/#dynamicbrokerconfigs)
So, after this PR, we only support `--zookeeper` for the `users` and `brokers` entity types. Added some argument parsing rules and tests; an example of case 1 is shown below.
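Example of case 1 (credentials are illustrative):
bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret]' --entity-type users --entity-name alice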
Reviewers: Ron Dagostino <rdagostino@confluent.io>, Ismael Juma <ismael@juma.me.uk>
Also:
* Deprecate `log.message.format.version` and `message.format.version`.
* Log broker warning if the deprecated config values are ignored due to
the inter-broker protocol version.
* Log warning if `message.format.version` is set via `ConfigCommand`.
* Always down-convert if fetch version is v3 or lower.
* Add tests to verify new message format version based on the
inter-broker protocol version.
* Adjust existing tests that create topics with an older message format to
have the inter-broker protocol set to 2.8.
* Add upgrade note.
Note that the log compaction change to always write new segments with
record format v2 if the IBP is 3.0 or higher will be done as part of
KAFKA-13093 (with Kafka 3.1 as the target release version).
Reviewers: David Jacot <djacot@confluent.io>, David Arthur <mumrah@gmail.com>, Jason Gustafson <jason@confluent.io>
Also update the upgrade guide to mention the grace period KIP and note the throughput impact of the fix.
Reviewers: Luke Chen <showuon@gmail.com>, Matthias J. Sax <mjsax@apache.org>
This patch adds a check to ensure that principal builder implementations implement `KafkaPrincipalSerde` as specified in KIP-590: https://cwiki.apache.org/confluence/display/KAFKA/KIP-590%3A+Redirect+Zookeeper+Mutation+Protocols+to+The+Controller. This patch also changes the default value of `principal.builder.class` to `DefaultKafkaPrincipalBuilder`, which was already the implicit behavior when no principal builder was specified.
Reviewers: Ismael Juma <ismael@juma.me.uk>, Jason Gustafson <jason@confluent.io>
Update the docs for task idling, since the semantics have
changed in 3.0.
Reviewers: Jim Galasyn <jim.galasyn@confluent.io>, Luke Chen <showuon@gmail.com>, Boyang Chen <boyang@apache.org>
#10813 changed the default serde from ByteArraySerde as discussed in KIP-741. This adds proper documentation so users know to set a serde through the configs or explicitly pass one in.
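A minimal sketch of the two options now documented (serde choices are illustrative):
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsConfig;

// option 1: configure default serdes explicitly
Properties props = new Properties();
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.StringSerde.class);
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.StringSerde.class);
// option 2: pass serdes in at the call site, e.g.
// builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()));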
Reviewers: Walker Carlson <wcarlson@confluent.io>, Anna Sophie Blee-Goldman <ableegoldman@apache.org>
Updated the documentation for the connector restart REST API to include the task restart behavior, calling out that by default the behavior of previous versions is preserved.
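For example (host and connector name are illustrative), restarting a connector together with only its failed tasks:
curl -X POST 'http://localhost:8083/connectors/my-connector/restart?includeTasks=true&onlyFailed=true'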
Author: Kalpesh Patel <kpatel@confluent.io>
Reviewer: Randall Hauch <rhauch@gmail.com>
This new policy enables active/passive, one-way replication without renaming topics, similar to MM1. This implementation is described in KIP-382 (adopted), originally as "LegacyReplicationPolicy".
This enables operators to migrate from MM1 to MM2 without re-architecting their replication flows, and enables some additional use-cases for MM2. For example, operators may wish to "upgrade" their Kafka clusters by mirroring everything to a completely new cluster. Such a migration would have been difficult with either MM1 or MM2 previously.
When using IdentityReplicationPolicy, operators should be aware that MM2 will not be able to detect cycles among replicated topics. A misconfigured topology may result in replicating the same records back-and-forth or in an infinite loop. However, we don't prevent this behavior, as some use-cases involve filtering records (via SMTs) to prevent cycles.
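Enabling the policy in the MM2 properties file looks roughly like this (a hedged sketch):
replication.policy.class=org.apache.kafka.connect.mirror.IdentityReplicationPolicy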
Reviewers: Mickael Maison <mickael.maison@gmail.com>
Co-authored-by: Ryanne Dolan <rdolan@twitter.com>
Co-authored-by: Matthew de Detrich <mdedetrich@gmail.com>
Co-authored-by: Ivan Yurchenko <ivanyu@aiven.io>
Update the ZooKeeper version to v3.6.3. This requires adding dropwizard
as a new dependency.
Also, add Kafka v2.8.0 to the ducktape system test image.
Reviewers: Luke Chen <showuon@gmail.com>, Colin P. McCabe <cmccabe@apache.org>, Ismael Juma <ismael@juma.me.uk>
Implementation of KIP-744.
Creates new interfaces for TaskMetadata, ThreadMetadata, and
StreamsMetadata, providing internal implementations for each of them.
Deprecates the current TaskMetadata and ThreadMetadata under o.a.k.s.processor,
and StreamsMetadata under o.a.k.s.state.
Updates references in internal classes from the deprecated classes to the new interfaces.
Deprecates methods on KafkaStreams returning the deprecated ThreadMetadata and
StreamsMetadata, and provides new ones returning the new interfaces.
Updates Javadocs referencing the deprecated classes and methods to point
to the right ones.
Co-authored-by: Bruno Cadonna <cadonna@apache.org>
Reviewers: Anna Sophie Blee-Goldman <ableegoldman@apache.org>, Bruno Cadonna <cadonna@apache.org>
Remove the `rest.host.name` and `rest.port` Connect worker configs that were deprecated in KIP-208 and AK 1.1.
Author: Kalpesh Patel <kalpeshpatel.india@gmail.com>
Reviewers: Randall Hauch <rhauch@gmail.com>, Konstantine Karantasis <konstantine@confluent.io>, wenbingshen <oliver.shen999@gmail.com>
Fixes the indentation of the code listings for:
api.html
configuration.html
design.html
implementation.html
toc.html
These changes consist of whitespace added or removed for consistency. It also contains a couple of fixes for unbalanced HTML tags.
Reviewers: Luke Chen <showuon@gmail.com>, Bill Bejeck <bbejeck@apache.org>
They have been deprecated since 0.10.0. Full list of removed configs:
* port
* host.name
* advertised.port
* advertised.host.name
Also adjust tests to take the removals into account. Some tests were
no longer relevant and have been removed.
Finally, took the chance to:
* Clean up unnecessary usage of `KafkaConfig$.MODULE$` in
related files.
* Add missing `Test` annotations to `AdvertiseBrokerTest` and
make necessary changes for the tests to pass.
Reviewers: David Jacot <djacot@confluent.io>, Luke Chen <showuon@gmail.com>
They were both deprecated in Apache Kafka 2.4 and it's a straightforward change
to use the non deprecated variants.
Reviewers: David Jacot <djacot@confluent.io>
Fix examples under security.html so they use the right bash prompt (`>`
instead of `$`) and also use the right tool for showing code listings.
Reviewers: Luke Chen <showuon@gmail.com>, Bruno Cadonna <cadonna@apache.org>
Quick follow-up to KIP-740 to actually deprecate this constructor, and update the upgrade guide with what we changed in KIP-740. I also noticed the TaskId#parse method had been modified previously and should be re-added to the public TaskId class. It had no tests, so now it does.
Reviewers: Matthias J. Sax <mjsax@confluent.io>, Luke Chen <showuon@gmail.com>
Code samples are now unified and correctly formatted.
Samples under Streams consistently use the prism library.
Reviewers: Bruno Cadonna <cadonna@apache.org>
Introduce a List serde for primitive types or custom serdes, with a serializer and a deserializer, according to KIP-466 (see the sketch below).
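A minimal sketch (types are illustrative; see KIP-466 for the full surface):
import java.util.ArrayList;
import java.util.List;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;

Serde<List<Integer>> listSerde = Serdes.ListSerde(ArrayList.class, Serdes.Integer());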
Reviewers: Anna Sophie Blee-Goldman <ableegoldman@apache.org>, Matthias J. Sax <mjsax@conflunet.io>, John Roesler <roesler@confluent.io>, Michael Noll <michael@confluent.io>
Deprecates the following
1. StreamsConfig.EXACTLY_ONCE
2. StreamsConfig.EXACTLY_ONCE_BETA
3. Producer#sendOffsetsToTransaction(Map offsets, String consumerGroupId)
And introduces a new StreamsConfig.EXACTLY_ONCE_V2 config. Additionally, this PR replaces usages of the term "eos-beta" throughout the code with the term "eos-v2"
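Switching an application over is a one-line config change (a minimal sketch; eos-v2 requires brokers on 2.5 or newer):
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);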
Reviewers: Matthias J. Sax <mjsax@confluent.io>
Allow users to specify a subset of internal topics to clean up with the application reset tool.
Reviewers: Boyang Chen <boyang@confluent.io>, Anna Sophie Blee-Goldman <ableegoldman@apache.org>, Walker Carlson <wcarlson@confluent.io>
KIP-216: IQ should throw different exceptions for different errors, Part 2
Reviewers: Matthias J. Sax <mjsax@confluent.io>, Anna Sophie Blee-Goldman <ableegoldman@apache.org>, Bruno Cadonna <cadonna@confluent.io>
These SMTs were originally specified in KIP-145 but never implemented
at the time.
HeaderTo is not included since its original specification doesn't deal with
the fact that there can be >1 header with the same name, but a field can only
have a single value (which could be an array, but not if the headers for
the given name had different schemas).
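A hedged sketch of using one of the new SMTs (InsertHeader, assuming it is among those added here) in a connector config; the header name and value are illustrative:
transforms=addOrigin
transforms.addOrigin.type=org.apache.kafka.connect.transforms.InsertHeader
transforms.addOrigin.header=origin-cluster
transforms.addOrigin.value.literal=cluster-a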
Reviewers: Chris Egerton <chrise@confluent.io>, Mickael Maison <mickael.maison@gmail.com>
The methods have been deprecated since 0.11 without replacement because
message format 2 moved the checksum to the record batch (instead of the
record).
Unfortunately, we did not deprecate the constructors that take a checksum
(even though we intended to) so we cannot remove them. I have deprecated
them for removal in 4.0 and added a single non deprecated constructor to
`ConsumerRecord` and `RecordMetadata` that take all remaining parameters.
`ConsumerRecord` could do with one additional convenience constructor, but
that requires a KIP and hence should be done separately.
Also:
* Removed `ChecksumMessageFormatter`, which is technically not public
API, but may have been used with the console consumer.
* Updated all usages of `ConsumerRecord`/`RecordMetadata` constructors
to use the non deprecated ones.
* Added tests for the deprecated `ConsumerRecord`/`RecordMetadata`
constructors.
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>, David Jacot <djacot@confluent.io>
`Sum` and `Total` classes were deprecated and replaced by `WindowedSum` and `CumulativeSum` in 2.4. This patch removes them for 3.0.
Reviewers: Chia-Ping Tsai <chia7712@gmail.com>, Ismael Juma <ismael@juma.me.uk>
`quota.producer.default` and `quota.consumer.default` were deprecated in AK 0.11.0.0. Dynamic default quotas must be used instead. This patch removes them for AK 3.0.
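Dynamic default quotas are set with kafka-configs.sh instead (values are illustrative):
bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1048576,consumer_byte_rate=2097152' --entity-type clients --entity-default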
Reviewers: Rajini Sivaram <rajinisivaram@googlemail.com>, Ismael Juma <ismael@juma.me.uk>
Fix the formatting of the example RocksDBConfigSetter due to misaligned spaces within the <pre> tag.
Reviewers: Anna Sophie Blee-Goldman <ableegoldman@apache.org>