Commit Graph

258 Commits

Author SHA1 Message Date
Colin P. McCabe cd01042ac8 MINOR: Add the KIP-500 metadata shell
The Kafka Metadata shell is a new command which allows users to
interactively examine the metadata stored in a KIP-500 cluster.

It can read the metadata from the controllers directly, by connecting to
them, or from a metadata snapshot on disk.  In the former case, the
quorum voters must be specified by passing the --controllers flag; in
the latter case, the snapshot file should be specified via --snapshot.

The metadata tool works by replaying the log and storing the state into
in-memory nodes.  These nodes are presented in a fashion similar to
filesystem directories.

This PR currently includes the metalog/ directory since that is a
dependency of the metadata shell.  Eventually, however, we want to
migrate to using the Raft API directly.
2021-02-10 14:12:28 -08:00
David Arthur e7e4252b0f
JUnit extensions for integration tests (#9986)
Adds a JUnit 5 extension for running the same test with different types of clusters.
See core/src/test/java/kafka/test/junit/README.md for details
2021-02-09 11:49:33 -05:00
Colin Patrick McCabe d98df7fc4d
MINOR: Add KafkaEventQueue (#10030)
Add KafkaEventQueue, which is used by the KIP-500 controller to manage its event queue.
Compared to using an Executor, KafkaEventQueue has the following advantages:

* Events can be given "deadlines." If an event lingers in the queue beyond the deadline, it
will be completed with a timeout exception. This is useful for implementing timeouts for
controller RPCs.

* Events can be prepended to the queue as well as appended.

* Events can be given tags to make them easier to manage. This is especially useful for
rescheduling or cancelling events which were previously scheduled to execute in the future.

Reviewers: Jun Rao <junrao@gmail.com>, José Armando García Sancio <jsancio@gmail.com>
2021-02-04 14:46:57 -08:00
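The deadline behavior is the distinctive part of this queue. Below is a minimal, hypothetical sketch of that idea only (the class and method names are illustrative, not the actual KafkaEventQueue API): an event that is still waiting when its deadline fires is completed exceptionally with a timeout.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Illustrative sketch only: events that linger past their deadline are failed with a timeout.
public class DeadlineEventQueueSketch {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

    public CompletableFuture<Void> append(Runnable event, long deadlineMs) {
        CompletableFuture<Void> result = new CompletableFuture<>();
        // Fail the event if it has not started running before its deadline.
        ScheduledFuture<?> deadline = timer.schedule(
            () -> result.completeExceptionally(new TimeoutException("event deadline exceeded")),
            deadlineMs, TimeUnit.MILLISECONDS);
        worker.execute(() -> {
            if (result.isDone()) {
                return; // the deadline already fired; skip the stale event
            }
            deadline.cancel(false);
            try {
                event.run();
                result.complete(null);
            } catch (Throwable t) {
                result.completeExceptionally(t);
            }
        });
        return result;
    }

    public void shutdown() {
        worker.shutdown();
        timer.shutdown();
    }
}
```

Prepending and tagging would layer onto the same structure, for example with a deque plus a tag-to-event index.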
Jason Gustafson f58c2acf26
KAFKA-12250; Add metadata record serde for KIP-631 (#9998)
This patch adds a `RecordSerde` implementation for the metadata record format expected by KIP-631. 

Reviewers: Colin McCabe <cmccabe@apache.org>, Ismael Juma <mlists@juma.me.uk>
2021-02-03 16:16:35 -08:00
Colin Patrick McCabe 772f2cfc82
MINOR: Replace BrokerStates.scala with BrokerState.java (#10028)
Replace BrokerStates.scala with BrokerState.java, to make it easier to use from Java code if needed.  This also makes it easier to go from a numeric type to an enum.

Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
2021-02-03 13:41:38 -08:00
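As a rough illustration of the numeric-to-enum round trip this enables (the values below are placeholders, not the full set defined in BrokerState.java):

```java
// Placeholder values for illustration; see BrokerState.java for the real set.
public enum BrokerStateSketch {
    NOT_RUNNING((byte) 0),
    STARTING((byte) 1),
    RUNNING((byte) 3);

    private final byte value;

    BrokerStateSketch(byte value) {
        this.value = value;
    }

    public byte value() {
        return value;
    }

    // Going from a numeric type to an enum, as the commit message describes.
    public static BrokerStateSketch fromValue(byte value) {
        for (BrokerStateSketch state : values()) {
            if (state.value == value) {
                return state;
            }
        }
        throw new IllegalArgumentException("Unknown broker state value: " + value);
    }
}
```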
Colin Patrick McCabe 1711cfa4eb
KAFKA-12209: Add the timeline data structures for the KIP-631 controller (#9901)
Reviewers: Jun Rao <junrao@gmail.com>
2021-02-02 11:33:55 -08:00
John Roesler 4d28391480
KAFKA-10867: Improved task idling (#9840)
Use the new ConsumerRecords.metadata() API to implement
improved task idling as described in KIP-695

Reviewers: Guozhang Wang <guozhang@apache.org>
2021-01-27 21:57:20 -06:00
John Roesler fdcf8fbf72
KAFKA-10866: Add metadata to ConsumerRecords (#9836)
Expose fetched metadata via the ConsumerRecords
object as described in KIP-695.

Reviewers: Guozhang Wang <guozhang@apache.org>
2021-01-27 18:18:38 -06:00
Ismael Juma 24a2ed26a6
MINOR: Update zstd-jni to 1.4.8-2 (#9957)
* The latest version of zstd-jni doesn't use `RecyclingBufferPool` by default, so we
pass it via the relevant constructors to maintain the behavior from before this
change.
* zstd-jni fixes an issue when using Alpine, see https://github.com/luben/zstd-jni/issues/157.
* zstd 1.4.7 includes several months of improvements across many axes,
from performance to various fixes. Details: https://github.com/facebook/zstd/releases/tag/v1.4.7
* zstd 1.4.8 is a hotfix release, details: https://github.com/facebook/zstd/releases/tag/v1.4.8

Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
2021-01-24 20:20:52 -08:00
Colin Patrick McCabe 217334b0f4
KAFKA-12183: Add the KIP-631 metadata record definitions (#9876)
Add the metadata gradle module, which will contain the metadata record
definitions, and other metadata-related broker-side code.

Add MetadataParser, MetadataParseException, etc.

Reviewers: José Armando García Sancio <jsancio@gmail.com>, Ismael Juma <ismael@juma.me.uk>, David Arthur <mumrah@gmail.com>
2021-01-14 09:58:52 -08:00
Ning Zhang 2cde6f61b8
KAFKA-10304: Refactor MM2 integration tests (#9224)
Co-authored-by: Ning Zhang <nzhang1220@fb.com>
Reviewers: Mickael Maison <mickael.maison@gmail.com>
2021-01-14 14:48:17 +00:00
Ismael Juma 52b8aa0fdc
KAFKA-7340: Migrate clients module to JUnit 5 (#9874)
* Use the packages/classes from JUnit 5
* Move description in `assert` methods to last parameter
* Convert parameterized tests so that they work with JUnit 5
* Remove `hamcrest`; it didn't seem to add much value
* Fix `Utils.mkEntry` to have correct `equals` implementation
* Add a missing `@Test` annotation in `SslSelectorTest` override
* Adjust regex in `SaslAuthenticatorTest` due to small change in the
assert failure string in JUnit 5

Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
2021-01-13 16:17:45 -08:00
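For example, the "description to last parameter" change looks roughly like this (a contrived test, not code from the PR):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class AssertMessageOrderExample {
    @Test
    void messageMovesToLastParameter() {
        int expected = 3;
        int actual = 1 + 2;
        // JUnit 4 (org.junit.Assert): the failure message comes first.
        // assertEquals("sum should match", expected, actual);
        // JUnit 5 (org.junit.jupiter.api.Assertions): the message comes last.
        assertEquals(expected, actual, "sum should match");
    }
}
```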
Colin P. McCabe 1e4d335245 KAFKA-12180: Implement the KIP-631 message generator changes
* Implement the uint16 type
* Implement MetadataRecordType and MetadataJsonConverters

Reviewers: Jason Gustafson <jason@confluent.io>
2021-01-12 12:43:59 -08:00
Jason Gustafson eb9fe411bb
KAFKA-10842; Use `InterBrokerSendThread` for raft's outbound network channel (#9732)
This patch contains the following improvements:

- Separate inbound/outbound request flows so that we can open the door for concurrent inbound request handling
- Rewrite `KafkaNetworkChannel` to use `InterBrokerSendThread` which fixes a number of bugs/shortcomings
- Get rid of a lot of boilerplate conversions in `KafkaNetworkChannel` 
- Improve validation of inbound responses in `KafkaRaftClient` by checking correlationId. This fixes a bug which could cause an out-of-order Fetch to be applied incorrectly.

Reviewers: David Arthur <mumrah@gmail.com>
2020-12-21 18:15:15 -08:00
Justine Olshan 1dd1e7f945
KAFKA-10545: Create topic IDs and propagate to brokers (#9626)
This change propagates topic ids to brokers in LeaderAndIsr Request. It also removes the topic name from the LeaderAndIsr Response, reorganizes the response to be sorted by topic, and includes the topic ID.

In addition, the topic ID is persisted to each replica in Log as well as in a file on disk. This file is read on startup and if the topic ID exists, it will be reloaded.

Reviewers: David Jacot <djacot@confluent.io>, dengziming <dengziming1993@gmail.com>, Nikhil Bhatia <rite2nikhil@gmail.com>, Rajini Sivaram <rajinisivaram@googlemail.com>
2020-12-18 22:19:50 +00:00
Scott Hendricks baef516789
Add ConfigurableProducerSpec to Trogdor for improved E2E latency tracking. (#9736)
Reviewer: Colin P. McCabe <cmccabe@apache.org>
2020-12-18 13:03:59 -08:00
Cheng Tan ae3a6ed990
KAFKA-10619: Idempotent producer will get authorized once it has WRITE access to at least one topic (KIP-679) (#9485)
Includes:
- New API to authorize by resource type
- Default implementation for the method that supports super users and ACLs
- Optimized implementation in AclAuthorizer that supports ACLs, super users and allow.everyone.if.no.acl.found
- Benchmarks and tests
- InitProducerIdRequest authorized for Cluster:IdempotentWrite or WRITE to any topic, ProduceRequest authorized only for topic even if idempotent

Reviewers: Lucas Bradstreet <lucas@confluent.io>, Rajini Sivaram <rajinisivaram@googlemail.com>
2020-12-18 18:08:46 +00:00
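The rule in the last bullet can be read as a simple predicate. The sketch below uses purely hypothetical helper methods rather than the real Authorizer API, just to spell the rule out:

```java
import java.security.Principal;

// Hypothetical helpers only; the real logic lives behind the authorizer's new
// resource-type check and in AclAuthorizer's optimized implementation.
public class IdempotentWriteRuleSketch {

    // InitProducerId: allowed with Cluster:IdempotentWrite, or with WRITE on at least one topic.
    public boolean mayInitProducerId(Principal principal) {
        return hasClusterIdempotentWrite(principal) || hasWriteOnAnyTopic(principal);
    }

    // Produce: still authorized per topic, even for an idempotent producer.
    public boolean mayProduceTo(Principal principal, String topic) {
        return hasTopicWrite(principal, topic);
    }

    private boolean hasClusterIdempotentWrite(Principal principal) {
        return false; // placeholder: would consult ACLs for Cluster:IdempotentWrite
    }

    private boolean hasWriteOnAnyTopic(Principal principal) {
        return false; // placeholder: would consult ACLs for Topic:WRITE on any topic
    }

    private boolean hasTopicWrite(Principal principal, String topic) {
        return false; // placeholder: would consult ACLs for WRITE on this topic
    }
}
```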
José Armando García Sancio ab0807dd85
KAFKA-10394: Add classes to read and write snapshot for KIP-630 (#9512)
This PR adds support for generating snapshot for KIP-630.

1. Adds the interfaces `RawSnapshotWriter` and `RawSnapshotReader` and the implementations `FileRawSnapshotWriter` and `FileRawSnapshotReader` respectively. These interfaces and implementations are low-level APIs for writing and reading snapshots. They are internal to the Raft implementation and are not exposed to the users of `RaftClient`. They operate at the `Record` level. These types are exposed to the `RaftClient` through the `ReplicatedLog` interface.

2. Adds a buffered snapshot writer: `SnapshotWriter<T>`. This type is a higher-level type and it is exposed through the `RaftClient` interface. A future PR will add the related `SnapshotReader<T>`, which will be used by the state machine to load a snapshot.

Reviewers: Jason Gustafson <jason@confluent.io>
2020-12-07 14:06:25 -08:00
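To make the layering concrete, here is a hypothetical sketch of the two levels; the method names are invented for illustration and the real interfaces live in the raft module:

```java
import java.io.IOException;
import java.nio.ByteBuffer;

// Invented method names, only to illustrate the two layers described above.
interface RawSnapshotWriterSketch extends AutoCloseable {
    // Record-level writes, internal to the Raft implementation.
    void append(ByteBuffer records) throws IOException;

    // Make the snapshot immutable once it is fully written.
    void freeze() throws IOException;
}

interface BufferedSnapshotWriterSketch<T> extends AutoCloseable {
    // Typed, buffered writes exposed to users through the RaftClient.
    void append(T value) throws IOException;
}
```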
Chia-Ping Tsai 6bbf69fb00
KAFKA-10497 Convert group coordinator metadata schemas to use generat… (#9318)
Reviewers: David Jacot <djacot@confluent.io>
2020-11-18 14:49:04 +08:00
A. Sophie Blee-Goldman e71cb7ab11
KAFKA-10689: fix windowed FKJ topology and put checks in assignor to avoid infinite loops (#9568)
Fix infinite loop in assignor when trying to resolve the number of partitions in a topology with a windowed FKJ. Also adds a check to this loop to break out and fail the application if we detect that we are/will be stuck in an infinite loop

Reviewers: Matthias Sax <matthias@confluent.io>
2020-11-17 16:57:53 -08:00
Boyang Chen 0814e4f645
KAFKA-10181: Use Envelope RPC to do redirection for (Incremental)AlterConfig, AlterClientQuota and CreateTopics (#9103)
This PR adds support for forwarding of the following RPCs:

AlterConfigs
IncrementalAlterConfigs
AlterClientQuotas
CreateTopics

Co-authored-by: Jason Gustafson <jason@confluent.io>
Reviewers: Jason Gustafson <jason@confluent.io>
2020-11-04 14:21:44 -08:00
Matthias J. Sax e8ad80ebe1
MINOR: remove explicit passing of AdminClient into StreamsPartitionAssignor (#9384)
Currently, we pass multiple object references (AdminClient, TaskManager, and a few more) into StreamsPartitionAssignor. Furthermore, we (mis)use TaskManager#mainConsumer() to get access to the main consumer (we need to do this to avoid a cyclic dependency).

This PR unifies how object references are passed into a single ReferenceContainer class to
 - not "misuse" the TaskManager as a reference container
 - unify how object references are passed

Note: we need to use a reference container to avoid cyclic dependencies, instead of using a config for each passed reference individually.

Reviewers: John Roesler <john@confluent.io>
2020-10-15 16:10:27 -07:00
John Roesler 27b0e35e7a
KAFKA-10437: Implement new PAPI support for test-utils (#9396)
Implements KIP-478 for the test-utils module:
* adds mocks of the new ProcessorContext and StateStoreContext
* adds tests that all stores and store builders are usable with the new mock
* adds tests that the new Processor api is usable with the new mock
* updates the demonstration Processor to the new api

Reviewers: Guozhang Wang <guozhang@apache.org>
2020-10-13 11:15:22 -05:00
Lee Dongjin 8d4bbf22ad
MINOR: trivial cleanups, javadoc errors, omitted StateStore tests, etc. (#8130)
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>, Guozhang Wang <guozhang@confluent.io>, Matthias J. Sax <matthias@confluent.io>
2020-10-07 19:08:31 -07:00
Rajini Sivaram 7be8bd8cbf
KAFKA-10338; Support PEM format for SSL key and trust stores (KIP-651) (#9345)
Adds support for SSL key and trust stores to be specified in PEM format either as files or directly as configuration values.

Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>
2020-10-06 19:13:43 +01:00
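A sketch of the two configuration styles this enables, using the KIP-651 property names as I understand them (the key and certificate material is obviously placeholder):

```java
import java.util.Properties;

public class PemSslConfigSketch {
    // PEM material passed directly as configuration values.
    public static Properties inlinePem() {
        Properties props = new Properties();
        props.put("security.protocol", "SSL");
        props.put("ssl.keystore.type", "PEM");
        props.put("ssl.keystore.certificate.chain", "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----");
        props.put("ssl.keystore.key", "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----");
        props.put("ssl.truststore.type", "PEM");
        props.put("ssl.truststore.certificates", "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----");
        return props;
    }

    // Or point the usual store locations at PEM files on disk.
    public static Properties pemFiles() {
        Properties props = new Properties();
        props.put("security.protocol", "SSL");
        props.put("ssl.keystore.type", "PEM");
        props.put("ssl.keystore.location", "/path/to/keystore.pem");
        props.put("ssl.truststore.type", "PEM");
        props.put("ssl.truststore.location", "/path/to/truststore.pem");
        return props;
    }
}
```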
Guozhang Wang 53a35c1de3
MINOR: Refactor unit tests around RocksDBConfigSetter (#9358)
* Extract the mock RocksDBConfigSetter into a separate class.
* De-dup unit tests covering RocksDBConfigSetter.

Reviewers: Boyang Chen <boyang@confluent.io>
2020-10-06 09:09:54 -07:00
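For context, a user-supplied RocksDBConfigSetter (the kind of class these tests mock and exercise) looks roughly like the sketch below; the block size is a placeholder, not a recommendation. It is registered via the `rocksdb.config.setter` Streams config.

```java
import java.util.Map;

import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Options;

public class CustomRocksDBConfig implements RocksDBConfigSetter {
    @Override
    public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
        // Placeholder tuning: raise the block size for every store.
        final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        tableConfig.setBlockSize(16 * 1024L);
        options.setTableFormatConfig(tableConfig);
    }

    @Override
    public void close(final String storeName, final Options options) {
        // Release any RocksObjects created in setConfig here.
    }
}
```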
John Roesler 69790a1463
KAFKA-10535: Split ProcessorContext into Processor/StateStore/Record Contexts (#9361)
Migrate different components of the old ProcessorContext interface
into separate interfaces that are more appropriate for their usages.
See KIP-478 for the details.

Reviewers: Guozhang Wang <guozhang@apache.org>, Paul Whalen <pgwhalen@gmail.com>
2020-10-02 18:49:12 -05:00
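A small sketch of a processor written against the split interfaces (assuming the `org.apache.kafka.streams.processor.api` types; this is not code from the PR):

```java
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;

// The typed ProcessorContext is used only for forwarding here.
public class UppercaseProcessor implements Processor<String, String, String, String> {
    private ProcessorContext<String, String> context;

    @Override
    public void init(final ProcessorContext<String, String> context) {
        this.context = context;
    }

    @Override
    public void process(final Record<String, String> record) {
        context.forward(record.withValue(record.value().toUpperCase()));
    }
}
```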
leah 95986a8f48
MINOR: Fix KStreamKTableJoinTest and StreamTaskTest (#9357)
Reviewers: Matthias J. Sax <matthias@confluent.io>, Guozhang Wang <wangguoz@gmail.com>
2020-09-30 20:07:23 -07:00
Jason Gustafson b7c8490cf4
KAFKA-10492; Core Kafka Raft Implementation (KIP-595) (#9130)
This is the core Raft implementation specified by KIP-595: https://cwiki.apache.org/confluence/display/KAFKA/KIP-595%3A+A+Raft+Protocol+for+the+Metadata+Quorum. We have created a separate "raft" module where most of the logic resides. The new APIs introduced in this patch in order to support Raft election and such are disabled in the server until the integration with the controller is complete. Until then, there is a standalone server which can be used for testing the performance of the Raft implementation. See `raft/README.md` for details.

Reviewers: Guozhang Wang <wangguoz@gmail.com>, Boyang Chen <boyang@confluent.io>

Co-authored-by: Boyang Chen <boyang@confluent.io>
Co-authored-by: Guozhang Wang <wangguoz@gmail.com>
2020-09-22 11:32:44 -07:00
leah 2194ccba5b
Adding reverse iterator usage for sliding windows processing (extending KIP-450) (#9239)
Add a backwardFetch call to the window store for sliding window
processing. While the implementation works with the forward call
to the window store, using backwardFetch allows for the iterator
to be closed earlier, making the implementation more efficient.

Reviewers: A. Sophie Blee-Goldman <sophie@confluent.io>, John Roesler <vvcephei@apache.org>
2020-09-11 16:38:17 -05:00
leah dd2b9eca5d
KAFKA-5636: Improve handling of "early" records in sliding windows (#9157)
Update for KIP-450 to handle "early" records.

Reviewers: A. Sophie Blee-Goldman <sophie@confluent.io>, Matthias J. Sax <matthias@confluent.io>
2020-09-09 12:02:19 -07:00
Colin Patrick McCabe b6ba67482f
KAFKA-10384: Separate converters from generated messages (#9194)
For the generated message code, put the JSON conversion functionality
in a separate JsonConverter class.

Make MessageDataGenerator simply another generator class, alongside the
new JsonConverterGenerator class.  Move some of the utility functions
from MessageDataGenerator into FieldSpec and other places, so that they
can be used by other generator classes.

Use argparse4j to support a better command-line for the generator.

Reviewers: David Arthur <mumrah@gmail.com>
2020-08-26 15:10:09 -07:00
David Arthur 1a9697430a
KAFKA-8806 Reduce calls to validateOffsetsIfNeeded (#7222)
Only check if positions need validation if there is new metadata. 

Also fix some inefficient java.util.stream code in the hot path of SubscriptionState.
2020-08-21 10:25:52 -04:00
Jason Gustafson 3a189ad868
KAFKA-10386; Fix flexible version support for `records` type (#9163)
This patch fixes the generated serde logic for the 'records' type so that it uses the compact byte array representation consistently when flexible versions are enabled.

Reviewers: David Arthur <mumrah@gmail.com>
2020-08-13 09:52:23 -07:00
Matthias J. Sax b351493543
KAFKA-9274: Remove `retries` for global task (#9047)
- part of KIP-572
 - removed the usage of `retries` in `GlobalStateManager`
 - instead of `retries`, the new `task.timeout.ms` config is used

Reviewers: John Roesler <john@confluent.io>, Boyang Chen <boyang@confluent.io>, Guozhang Wang <guozhang@confluent.io>
2020-08-05 14:14:18 -07:00
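In configuration terms, global-store restoration is now bounded by a single timeout instead of a retry count. A minimal sketch, assuming the KIP-572 config name and what I recall as its documented default:

```java
import java.util.Properties;

import org.apache.kafka.streams.StreamsConfig;

public class TaskTimeoutConfigSketch {
    public static Properties streamsProps() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");            // placeholder id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address
        // KIP-572: a single timeout bounds retrying, instead of a `retries` count.
        props.put("task.timeout.ms", "300000");
        return props;
    }
}
```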
John Roesler 26a217c8e7
MINOR: Streams integration tests should not call exit (#9067)
- replace System.exit with Exit.exit in all relevant classes
- forbid use of System.exit in all relevant classes and add exceptions for others

Co-authored-by: John Roesler <vvcephei@apache.org>
Co-authored-by: Matthias J. Sax <matthias@confluent.io>

Reviewers: Lucas Bradstreet <lucas@confluent.io>, Ismael Juma <ismael@confluent.io>
2020-08-05 13:52:50 -07:00
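The pattern being enforced looks roughly like the sketch below, assuming the `org.apache.kafka.common.utils.Exit` helper behaves as I recall (tests can swap in a no-op exit procedure so the JVM keeps running):

```java
import org.apache.kafka.common.utils.Exit;

public class ExitPatternSketch {

    public static void shutdownWithError() {
        // Instead of System.exit(1), which would kill the test JVM:
        Exit.exit(1);
    }

    public static void main(String[] args) {
        // In a test, the exit procedure can be replaced so the JVM keeps running.
        Exit.setExitProcedure((statusCode, message) ->
            System.out.println("intercepted exit with code " + statusCode));
        shutdownWithError();
        Exit.resetExitProcedure();
    }
}
```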
David Arthur 4cd2396db3
KAFKA-9629 Use generated protocol for Fetch API (#9008)
Refactored FetchRequest and FetchResponse to use the generated message classes for serialization and deserialization. This allows us to bypass unnecessary Struct conversion in a few places. A new "records" type was added to the message protocol which uses BaseRecords as the field type. When sending, we can set a FileRecords instance on the message, and when receiving, the message class will use MemoryRecords.

Also included a few JMH benchmarks which indicate a small performance improvement for requests with high partition counts or small record sizes.

Reviewers: Jason Gustafson <jason@confluent.io>, Boyang Chen <boyang@confluent.io>, David Jacot <djacot@confluent.io>, Lucas Bradstreet <lucas@confluent.io>, Ismael Juma <ismael@juma.me.uk>, Colin P. McCabe <cmccabe@apache.org>
2020-07-30 13:29:39 -04:00
Mickael Maison caa806cd82
KAFKA-10232: MirrorMaker2 internal topics Formatters KIP-597 (#8604)
This PR includes 3 MessageFormatters for MirrorMaker2 internal topics:
- HeartbeatFormatter
- CheckpointFormatter
- OffsetSyncFormatter

This also introduces a new public interface org.apache.kafka.common.MessageFormatter that users can implement to build custom formatters.

Reviewers: Konstantine Karantasis <k.karantasis@gmail.com>, Ryanne Dolan <ryannedolan@gmail.com>, David Jacot <djacot@confluent.io>

Co-authored-by: Mickael Maison <mickael.maison@gmail.com>
Co-authored-by: Edoardo Comar <ecomar@uk.ibm.com>
2020-07-03 10:41:45 +01:00
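A sketch of a trivial formatter built on the new public interface; a real MirrorMaker2 formatter would decode the internal topic schemas instead of printing raw bytes:

```java
import java.io.PrintStream;
import java.nio.charset.StandardCharsets;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.MessageFormatter;

public class PlainTextFormatter implements MessageFormatter {
    @Override
    public void writeTo(final ConsumerRecord<byte[], byte[]> record, final PrintStream output) {
        // Print the raw key and value; a real formatter would decode the internal schema.
        final String key = record.key() == null ? "null" : new String(record.key(), StandardCharsets.UTF_8);
        final String value = record.value() == null ? "null" : new String(record.value(), StandardCharsets.UTF_8);
        output.println(key + "\t" + value);
    }
}
```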
A. Sophie Blee-Goldman 42aa0f38b9
MINOR: clean up unused checkstyle suppressions for Streams (#8861)
Reviewers: John Roesler <john@confluent.io>, Matthias J. Sax <matthias@confluent.io>
2020-06-17 17:04:43 -07:00
A. Sophie Blee-Goldman 03ed08d0d1
KAFKA-10144: clean up corrupted standby tasks before attempting a commit (#8849)
We need to make sure that corrupted standby tasks are actually cleaned up upon a TaskCorruptedException. However, due to the commit prior to invoking handleCorruption, it's possible to throw a TaskMigratedException before actually cleaning up any of the corrupted tasks.

This is fine for active tasks since handleLostAll will finish up the job, but it does nothing with standby tasks. We should make sure that standby tasks are handled before attempting to commit (which we can do, since we don't need to commit anything for the corrupted standbys).

Reviewers: Guozhang Wang <wangguoz@gmail.com>
2020-06-12 16:21:57 -07:00
Adam Bellemare bcf45b09d3
KAFKA-10049: Fixed FKJ bug where wrapped serdes are set incorrectly when using default StreamsConfig serdes (#8764)
Bug Details:
The value serde was mistakenly set to the key serde for an internal wrapped serde in the FKJ workflow.

Testing:
Modified the existing test to reproduce the issue, then verified that the test passes.

Reviewers: Guozhang Wang <wangguoz@gmail.com>, John Roesler <vvcephei@apache.org>
2020-06-12 10:00:38 -05:00
Kowshik Prakasam 4f96c5b424
KAFKA-10027: Implement read path for feature versioning system (KIP-584) (#8680)
In this PR, I have implemented various classes and integration for the read path of the feature versioning system (KIP-584). The ultimate plan is that the cluster-wide finalized features information is going to be stored in ZK under the node /feature. The read path implemented in this PR is centered around reading this finalized features information from ZK, and, processing it inside the Broker.

Here is a summary of what's in this PR (a lot of it is new classes):

A facility is provided in the broker to declare its supported features, and advertise its supported features via its own BrokerIdZNode under a features key.
A facility is provided in the broker to listen to and propagate cluster-wide finalized feature changes from ZK.
When new finalized features are read from ZK, feature incompatibilities are detected by comparing against the broker's own supported features.
ApiVersionsResponse is now served containing supported and finalized feature information (using the newly added tagged fields).

Reviewers: Boyang Chen <boyang@confluent.io>, Jun Rao <junrao@gmail.com>
2020-06-11 11:28:57 -07:00
Konstantine Karantasis 09b22e7e67
KAFKA-9848: Avoid triggering scheduled rebalance delay when task assignment fails but Connect workers remain in the group (#8805)
In the first version of the incremental cooperative protocol, in the presence of a failed sync request by the leader, the assignor was designed to treat the unapplied assignments as lost and trigger a rebalance delay. 

This commit applies optimizations in these cases to avoid the unnecessary activation of the rebalancing delay. First, if the worker that loses the sync group request or response is the leader, then it detects this failure by checking whether the generation is the expected one when it performs task assignments. If it is not, it resets its view of the previous assignment because that assignment wasn't successfully applied and doesn't represent a correct state. Furthermore, if the worker that missed the assignment sync is an ordinary worker, the leader is able to detect the lost assignments and, instead of triggering a rebalance delay among the same members of the group, it treats the lost tasks as new tasks and reassigns them immediately. If the lost assignment included revocations that were not applied, the leader reapplies these revocations.

Existing unit tests and integration tests are adapted to test the proposed optimizations. 

Reviewers: Randall Hauch <rhauch@gmail.com>
2020-06-09 09:41:11 -07:00
Tom Bentley 78e8a49cda
KAFKA-9434: automated protocol for alterReplicaLogDirs (#8311)
Reviewers: David Jacot <djacot@confluent.io>, Mickael Maison <mickael.maison@gmail.com>
2020-06-04 15:36:37 +01:00
A. Sophie Blee-Goldman c6633a157e
KAFKA-9987: optimize sticky assignment algorithm for same-subscription case (#8668)
Motivation and pseudo code algorithm in the ticket.

Added a scale test with a large number of topic partitions and consumers and a 30s timeout.
With these changes, assignment with 2,000 consumers and 200 topics with 2,000 partitions each completes within a few seconds.

Porting the same test to trunk took 2 minutes even with a 100x reduction in the number of topics (i.e., 2 minutes for 2,000 consumers and 2 topics with 2,000 partitions each).

Should be cherry-picked to 2.6, 2.5, and 2.4

Reviewers: Guozhang Wang <wangguoz@gmail.com>
2020-06-01 15:57:15 -07:00
Mickael Maison fe948d39e5
KAFKA-9130; KIP-518 Allow listing consumer groups per state (#8238)
Implementation of KIP-518: https://cwiki.apache.org/confluence/display/KAFKA/KIP-518%3A+Allow+listing+consumer+groups+per+state. 

Reviewers: David Jacot <djacot@confluent.io>, Jason Gustafson <jason@confluent.io>

Co-authored-by: Mickael Maison <mickael.maison@gmail.com>
Co-authored-by: Edoardo Comar <ecomar@uk.ibm.com>
2020-05-29 11:25:20 -07:00
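On the admin-client side, the new filter can be used roughly like this (the bootstrap address is a placeholder):

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ConsumerGroupListing;
import org.apache.kafka.clients.admin.ListConsumerGroupsOptions;
import org.apache.kafka.clients.admin.ListConsumerGroupsResult;
import org.apache.kafka.common.ConsumerGroupState;

public class ListStableGroups {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address
        try (Admin admin = Admin.create(props)) {
            // Only return groups that are currently in the Stable state.
            ListConsumerGroupsResult result = admin.listConsumerGroups(
                new ListConsumerGroupsOptions().inStates(Collections.singleton(ConsumerGroupState.STABLE)));
            for (ConsumerGroupListing listing : result.all().get()) {
                System.out.println(listing.groupId() + " " + listing.state());
            }
        }
    }
}
```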
A. Sophie Blee-Goldman 9d52deca24
KAFKA-9501: convert between active and standby without closing stores (#8248)
This PR has gone through several significant transitions of its own, but here's the latest:

* TaskManager just collects the tasks to transition and refers to the active/standby task creator to handle closing & recycling the old task and creating the new one. If we ever hit an exception during the close, we bail and close all the remaining tasks as dirty.

* The task creators tell the task to "close but recycle state". If this is successful, it tells the recycled processor context and state manager that they should transition to the new type.

* During "close and recycle" the task just does a normal clean close, but instead of closing the state manager it informs it to recycle itself: maintain all of its store information (most importantly the current store offsets) but unregister the changelogs from the changelog reader

* The new task will (re-)register its changelogs during initialization, but skip re-registering any stores. It will still read the checkpoint file, but only use the written offsets if the store offsets are not already initialized from pre-transition

* To ensure we don't end up with manual compaction disabled for standbys, we have to call the state restore listener's onRestoreEnd for any active restoring stores that are switching to standbys

Reviewers: John Roesler <vvcephei@apache.org>, Guozhang Wang <wangguoz@gmail.com>
2020-05-29 10:48:03 -07:00
Aakash Shah 38c1e96d2c
KAFKA-9971: Error Reporting in Sink Connectors (KIP-610) (#8720)
Implementation for KIP-610: https://cwiki.apache.org/confluence/display/KAFKA/KIP-610%3A+Error+Reporting+in+Sink+Connectors. Based on this KIP, sink connectors can now report errors at the final stages of the stream that exports records to the sink system.
 
This PR adds the `ErrantRecordReporter` interface as well as its implementation - `WorkerErrantRecordReporter`. The `WorkerErrantRecordReporter` is created in `Worker` and brought up through `WorkerSinkTask` to `WorkerSinkTaskContext`. 

An integration test and unit tests have been added.

Reviewers: Lev Zemlyanov <lev@confluent.io>, Greg Harris <gregh@confluent.io>, Chris Egerton <chrise@confluent.io>, Randall Hauch <rhauch@gmail.com>, Konstantine Karantasis <konstantine@confluent.io>
2020-05-27 23:49:57 -07:00
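A sketch of how a sink task might use the new reporter; the delivery logic and version handling are omitted, and a null reporter means error reporting was not configured:

```java
import java.util.Collection;
import java.util.Map;

import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.sink.ErrantRecordReporter;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public class ReportingSinkTask extends SinkTask {
    private ErrantRecordReporter reporter;

    @Override
    public void start(final Map<String, String> props) {
        // Null when error reporting is not configured for this connector.
        reporter = context.errantRecordReporter();
    }

    @Override
    public void put(final Collection<SinkRecord> records) {
        for (final SinkRecord record : records) {
            try {
                deliver(record);
            } catch (final Exception e) {
                if (reporter != null) {
                    reporter.report(record, e); // hand the bad record to the framework
                } else {
                    throw new ConnectException(e);
                }
            }
        }
    }

    private void deliver(final SinkRecord record) {
        // Write the record to the sink system (omitted).
    }

    @Override
    public void stop() { }

    @Override
    public String version() {
        return "0.0.0";
    }
}
```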
xiaodongdu 9c833f665f
KAFKA-9960: implement KIP-606 to add metadata context to MetricsReporter (#8691)
Implemented KIP-606 to add metadata context to MetricsReporter.

Author: Xiaodong Du <xdu@confluent.io>
Reviewers: David Arthur <mumrah@gmail.com>, Randall Hauch <rhauch@gmail.com>, Xavier Léauté <xavier@confluent.io>, Ryan Pridgeon <ryan.n.pridgeon@gmail.com>
2020-05-27 20:18:36 -05:00
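A sketch of a reporter picking up the new metadata context, assuming the KIP-606 `contextChange`/`MetricsContext` hook has roughly this shape; the export logic is omitted:

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;

import org.apache.kafka.common.metrics.KafkaMetric;
import org.apache.kafka.common.metrics.MetricsContext;
import org.apache.kafka.common.metrics.MetricsReporter;

public class ContextAwareReporter implements MetricsReporter {
    private Map<String, String> contextLabels = Collections.emptyMap();

    @Override
    public void contextChange(final MetricsContext metricsContext) {
        // Labels such as the cluster id, captured once the context is known.
        contextLabels = metricsContext.contextLabels();
    }

    @Override
    public void init(final List<KafkaMetric> metrics) { }

    @Override
    public void metricChange(final KafkaMetric metric) {
        // Tag the exported metric with contextLabels here (omitted).
    }

    @Override
    public void metricRemoval(final KafkaMetric metric) { }

    @Override
    public void close() { }

    @Override
    public void configure(final Map<String, ?> configs) { }
}
```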
A. Sophie Blee-Goldman 83c616f706
KAFKA-9983: KIP-613: add INFO level e2e latency metrics (#8697)
Add e2e latency metrics at the beginning and end of task topologies
as INFO-level processor-node-level metrics.

Implements: KIP-613
Reviewers: John Roesler <vvcephei@apache.org>, Andrew Choi <a24choi@edu.uwaterloo.ca>, Bruno Cadonna <cadonna@confluent.io>, Matthias J. Sax <matthias@confluent.io>
2020-05-27 15:55:29 -05:00