Related to https://github.com/apache/kafka/pull/3321
Author: Konstantine Karantasis <konstantine@confluent.io>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#3326 from kkonstantine/MINOR-Add-tests-for-PluginDesc
When the `SetSchemaMetadata` SMT is used to change the name and/or version of the key or value’s schema, any references to the old schema in the key or value must be changed to reference the new schema. Only keys or values that are `Struct` have such references, and so currently only these are adjusted.
This is based on `trunk` since the fix is expected to be targeted to the 0.11.1 release.
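For illustration, the adjustment needed for `Struct` values looks roughly like the following (a sketch using the public Connect data API; `withUpdatedSchema` is a hypothetical helper, not the actual SMT code):

```java
import org.apache.kafka.connect.data.Field;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;

final class SchemaMetadataSketch {
    // Rebuild a Struct against a schema carrying the new name/version so that
    // struct.schema() no longer references the old schema.
    static Struct withUpdatedSchema(Struct value, String newName, Integer newVersion) {
        SchemaBuilder builder = SchemaBuilder.struct().name(newName).version(newVersion);
        for (Field field : value.schema().fields()) {
            builder.field(field.name(), field.schema());
        }
        Schema updatedSchema = builder.build();

        Struct updated = new Struct(updatedSchema);
        for (Field field : updatedSchema.fields()) {
            updated.put(field.name(), value.get(field.name()));
        }
        return updated;
    }
}
```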
Author: Randall Hauch <rhauch@gmail.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#3198 from rhauch/kafka-5164
- Added a boolean `allow_auto_topic_creation` to MetadataRequest and
bumped the protocol version to V4.
- When connecting to brokers older than 0.11.0.0, the `allow_auto_topic_creation`
field won't be considered, so we send a metadata request for all topics
to keep the behavior consistent.
- Set `allow_auto_topic_creation` to false in the new AdminClient and
StreamsKafkaClient (which exists for the purpose of creating topics
manually); set it to true everywhere else for now. Other clients will eventually
rely on client-side auto topic creation, but that’s not there yet.
- Add `allowAutoTopicCreation` field to `Metadata`, which is used by
`DefaultMetadataUpdater`. This is not strictly needed for the new
`AdminClient`, but it avoids surprises if it ever adds a topic to `Metadata`
via `setTopics` or `addTopic`.
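A rough sketch of the resulting client-side decision (method names approximate, not the exact `DefaultMetadataUpdater` code):

```java
// Illustrative only: clients that must not trigger broker-side topic creation
// pass the flag as false; against pre-0.11 brokers the flag cannot be expressed,
// so a full-topic metadata request keeps the behavior consistent.
MetadataRequest.Builder builder;
if (brokerSupportsMetadataV4) {
    builder = new MetadataRequest.Builder(topics, allowAutoTopicCreation);
} else {
    builder = MetadataRequest.Builder.allTopics();
}
```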
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Jun Rao <junrao@gmail.com>
Closes#3098 from ijuma/kafka-5291-admin-client-no-auto-topic-creation
Summary:
- add `reconnect.backoff.max.ms` common client configuration parameter
- if `reconnect.backoff.max.ms` > `reconnect.backoff.ms`, apply an exponential backoff policy
- apply +/- 20% random jitter to smooth cluster reconnects
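A minimal sketch of the delay such a policy computes (illustrative; the actual client tracks this state per node):

```java
import java.util.concurrent.ThreadLocalRandom;

final class ReconnectBackoffSketch {
    // Exponential backoff capped at reconnect.backoff.max.ms, with +/- 20%
    // random jitter applied to each computed delay.
    static long reconnectBackoffMs(long baseMs, long maxMs, int consecutiveFailures) {
        double exp = baseMs * Math.pow(2, consecutiveFailures);
        double capped = Math.min(exp, maxMs);
        double jitter = 0.8 + ThreadLocalRandom.current().nextDouble() * 0.4; // 0.8 .. 1.2
        return (long) (capped * jitter);
    }
}
```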
Author: Dana Powers <dana.powers@gmail.com>
Reviewers: Ewen Cheslack-Postava <me@ewencp.org>, Roger Hoover <roger.hoover@gmail.com>, Ismael Juma <ismael@juma.me.uk>
Closes#1523 from dpkp/exp_backoff
The backing store for offsets, status, and configs now attempts to use the new AdminClient to look up the internal topics and create them if they don’t yet exist. If the necessary APIs are not available in the connected broker, the stores fall back to the old behavior of relying upon auto-created topics. Kafka Connect requires a minimum of Apache Kafka 0.10.0.1-cp1, and the AdminClient can work with all versions since 0.10.0.0.
All three of Connect’s internal topics are created as compacted topics, and new distributed worker configuration properties control the replication factor for all three topics and the number of partitions for the offsets and status topics; the config topic requires a single partition and does not allow it to be set via configuration. All of these new configuration properties have sensible defaults, meaning users can upgrade without having to change any of the existing configurations. In most situations, existing Connect deployments will have already created the storage topics prior to upgrading.
The replication factor defaults to 3, so anyone running a Kafka cluster with fewer than 3 brokers will receive an error unless they explicitly set the replication factor for the three internal topics. This is actually the desired behavior, since it signals to users that they are not using sufficient replication for production use.
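For example, a single-broker development cluster would need overrides along these lines in the distributed worker config (a sketch; property names assume the `*.storage.*` naming pattern introduced by this change):

```properties
# Single-broker cluster: override the default replication factor of 3.
offset.storage.replication.factor=1
config.storage.replication.factor=1
status.storage.replication.factor=1
# Partition counts for the offsets and status topics (the config topic always has 1 partition).
offset.storage.partitions=25
status.storage.partitions=5
```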
The integration tests use a cluster with a single broker, so they were changed to explicitly specify a replication factor of 1 and a single partition.
The `KafkaAdminClientTest` was refactored to extract a utility for setting up a `KafkaAdminClient` with a `MockClient` for unit tests.
Author: Randall Hauch <rhauch@gmail.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#2984 from rhauch/kafka-4667
Also:
1. FindCoordinator is more general and takes a coordinator_type
so that it can be used for the group and transaction coordinators.
2. Include an error message in FindCoordinatorResponse to make the
errors at the client side more informative. We have just added the
field to the protocol in this PR; a subsequent PR will update the
code to use it.
3. Rename `Errors` names for FindCoordinator to be more generic. This
is a compatible change as the ids remain the same.
4. Since the exception classes for the error codes are in a public
package, we introduce new ones and deprecate the old ones.
The classes were not thrown back to the user (KAFKA-5052 aside),
so this is a compatible change.
5. Update InitPidRequest for transactions. Since this protocol API
was introduced recently and is not used by default, we did not bump
its version.
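A purely illustrative sketch of the generalized lookup (hypothetical types, not the actual Kafka request classes):

```java
// The request now carries a coordinator_type so the same RPC can locate
// either the group coordinator or the transaction coordinator.
enum CoordinatorType { GROUP, TRANSACTION }

final class CoordinatorLookup {
    final CoordinatorType type; // which coordinator to locate
    final String key;           // group.id for GROUP, transactional.id for TRANSACTION

    CoordinatorLookup(CoordinatorType type, String key) {
        this.type = type;
        this.key = key;
    }
}

// Old GroupCoordinatorRequest behavior:
//   new CoordinatorLookup(CoordinatorType.GROUP, "my-consumer-group")
// New transactional lookup:
//   new CoordinatorLookup(CoordinatorType.TRANSACTION, "my-transactional-id")
```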
Author: Apurva Mehta <apurva@confluent.io>
Reviewers: Ismael Juma <ismael@juma.me.uk>
Closes#2825 from apurvam/exactly-once-rpc-stubs
Author: Ewen Cheslack-Postava <me@ewencp.org>
Reviewers: Damian Guy <damian.guy@gmail.com>, Ismael Juma <ismael@juma.me.uk>
Closes#2660 from ewencp/minor-make-configdef-safer
https://issues.apache.org/jira/browse/KAFKA-4810
> Currently SchemaBuilder is strict when checking that certain fields have not been set yet (e.g. version, name, doc). It just checks that the field is null. This is intended to protect the user from buggy code that overwrites a field with different values, but it's a bit too strict currently. In generic code for converting schemas (e.g. Converters) you will sometimes initialize a builder with these values (e.g. because you get a SchemaBuilder for a logical type, which sets name & version), but then have generic code for setting name & version from the source schema.
Changed the validation method to not only check if a field is null but also to check if the new value that is being set is the same as the current value of the field.
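A minimal sketch of the relaxed check (a hypothetical helper; the real validation lives in `SchemaBuilder`):

```java
import org.apache.kafka.connect.errors.SchemaBuilderException;

final class SchemaBuilderCheckSketch {
    // Instead of rejecting any second assignment, only reject when the new
    // value actually differs from what is already set.
    static void checkCanSet(String fieldName, Object currentValue, Object newValue) {
        if (currentValue != null && !currentValue.equals(newValue)) {
            throw new SchemaBuilderException(fieldName + " has already been set to a different value");
        }
    }
}
```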
ewencp
Author: Vitaly Pushkar <vitaly.pushkar@gmail.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#2806 from vitaly-pushkar/KAFKA-4810-schema-builder-default-fields-validation
Author: Colin P. Mccabe <cmccabe@confluent.io>
Reviewers: Konstantine Karantasis <konstantine@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#2763 from cmccabe/KAFKA-4977
Addresses for https://issues.apache.org/jira/browse/KAFKA-4878
* Adjusted the error message to explicitly state the errors and their number
* Dried up the logic for generating the message between standalone and distributed
Example: with two config keys misspelled in the file source config:
```
namse=local-file-source
connector.class=FileStreamSource
tasks.max=1
fisle=test.txt
topic=connect-test
```
Produces:
```
[2017-03-22 08:57:11,896] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:99)
java.util.concurrent.ExecutionException: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration is invalid and contains the following 2 error(s):
Missing required configuration "file" which has no default value.
Missing required configuration "name" which has no default value.
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
```
Author: Armin Braun <me@obrown.io>
Reviewers: Gwen Shapira, Konstantine Karantasis, Ewen Cheslack-Postava
Closes#2722 from original-brownbear/KAFKA-4878
Author: Jason Gustafson <jason@confluent.io>
Reviewers: Jun Rao <junrao@gmail.com>, Apurva Mehta <apurva@confluent.io>, Guozhang Wang <wangguoz@gmail.com>, Ismael Juma <ismael@juma.me.uk>
Closes#2614 from hachikuji/exactly-once-message-format
Changing getCanonicalName() references to getName() so that docs update with "$" instead of ".". Also added a connect-plugin-discovery.sh CLI to list all of the transformations available.
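For reference, the difference for static nested classes (standard Java behavior):

```java
public class Demo {
    public static class Key {}

    public static void main(String[] args) {
        System.out.println(Key.class.getCanonicalName()); // Demo.Key
        System.out.println(Key.class.getName());          // Demo$Key
        // Only the "$" form is a loadable class name, e.g. Class.forName("Demo$Key").
    }
}
```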
Author: Bruce Szalwinski <bruce.szalwinski@cdk.com>
Reviewers: Gwen Shapira
Closes#2720 from bruce-szalwinski/transforms and squashes the following commits:
ec3b5b9 [Bruce Szalwinski] remove connect-plugin-discovery. will submit in a different PR
eba0af7 [Bruce Szalwinski] Key / Value transformations are static nested classes and so are referenced using OuterClass$Key and OuterClass$Value.
Fixes related to the handling of MAX_POLL_INTERVAL_MS_CONFIG during deadlock and to CommitFailedException when partitions are revoked.
Author: Sachin Mittal <sjmittal@gmail.com>
Reviewers: Matthias J. Sax, Damian Guy, Guozhang Wang
Closes#2642 from sjmittal/trunk
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Manikumar reddy O <manikumar.reddy@gmail.com>
Closes#2690 from ijuma/fix-header-in-byte-array-converter
If the leader node of one or more partitions in a consumer subscription is temporarily unavailable, request a metadata refresh so that partitions skipped for assignment don't have to wait for metadata expiry before reassignment. A metadata refresh is also requested if a subscribed topic or assigned partition doesn't exist.
Author: Rajini Sivaram <rajinisivaram@googlemail.com>
Reviewers: Vahid Hashemian <vahidhashemian@us.ibm.com>, Ismael Juma <ismael@juma.me.uk>, Jason Gustafson <jason@confluent.io>
Closes#2622 from rajinisivaram/KAFKA-4631
Author: Matthias J. Sax <matthias@confluent.io>
Reviewers: Jason Gustafson <jason@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#2303 from mjsax/licenseHeader
Want to use these methods in an external project.
Author: Chris Egerton <fearthecellos@gmail.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#2610 from C0urante/public-json-schema-conversion
Author: Armin Braun <me@obrown.io>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Jason Gustafson <jason@confluent.io>
Closes#2582 from original-brownbear/cleanup-nonfinal-close
…lass should be static
Author: Colin P. Mccabe <cmccabe@confluent.io>
Reviewers: Jason Gustafson <jason@confluent.io>, Ismael Juma <ismael@juma.me.uk>
Closes#2558 from cmccabe/KAFKA-4774
Author: Vahid Hashemian <vahidhashemian@us.ibm.com>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Jason Gustafson <jason@confluent.io>
Closes#2475 from vahidhashemian/minor/use_explicit_Errors_type_when_possible
Author: Maysam Yabandeh <myabandeh@dropbox.com>
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Jun Rao <junrao@gmail.com>
Closes#2474 from ijuma/kafka-4039-deadlock-during-shutdown
ewencp ignore this PR if you have already started to work on this ticket.
Author: Balint Molnar <balintmolnar91@gmail.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#2423 from baluchicken/KAFKA-4679
(cherry picked from commit 1434b61d5d)
Signed-off-by: Ewen Cheslack-Postava <me@ewencp.org>
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Jun Rao <junrao@gmail.com>, Rajini Sivaram <rajinisivaram@googlemail.com>
Closes#2406 from ijuma/kafka-4636-per-listener-security-settings
This behaviour was changed in 8b3c6c0, but it caused interceptor
test failures (which rely on callbacks), and since we're so close to
code freeze, it's better to be conservative.
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Jason Gustafson <jason@confluent.io>
Closes#2440 from ijuma/kafka-4699-callbacks-invoked-before-future-is-completed
Renames `HoistToStruct` SMT to `HoistField`.
Adds the following SMTs:
`ExtractField`
`MaskField`
`RegexRouter`
`ReplaceField`
`SetSchemaMetadata`
`ValueToKey`
Adds HTML doc generation and updates to `connect.html`.
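For example, a couple of the new transformations could be wired into a connector config roughly like this (a sketch; aliases, field names and values are hypothetical):

```properties
# Apply two of the new SMTs to record values.
transforms=mask,route
transforms.mask.type=org.apache.kafka.connect.transforms.MaskField$Value
transforms.mask.fields=ssn,credit_card
transforms.route.type=org.apache.kafka.connect.transforms.RegexRouter
transforms.route.regex=(.*)-raw
transforms.route.replacement=$1-masked
```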
Author: Shikhar Bhushan <shikhar@confluent.io>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#2374 from shikhar/more-smt
Besides API and runtime changes, this PR also includes 2 data transformations (`InsertField`, `HoistToStruct`) and 1 routing transformation (`TimestampRouter`).
There is some gnarliness in `ConnectorConfig` / `ConfigDef` around creating, parsing and validating a dynamic `ConfigDef`.
Author: Shikhar Bhushan <shikhar@confluent.io>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#2299 from shikhar/smt-2017
The client should send older versions of requests to the broker if necessary.
Author: Colin P. Mccabe <cmccabe@confluent.io>
Reviewers: Jason Gustafson <jason@confluent.io>, Jun Rao <junrao@gmail.com>, Ismael Juma <ismael@juma.me.uk>
Closes#2264 from cmccabe/KAFKA-4507
Otherwise, in this test, the sink task goes through the pause/resume cycle with 0 assigned partitions, since the default metadata refresh interval is quite long.
Author: Shikhar Bhushan <shikhar@confluent.io>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#2313 from shikhar/kafka-4575
h/t ewencp for pointing out the issue
Author: Shikhar Bhushan <shikhar@confluent.io>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#2277 from shikhar/kafka-4527
Author: Ashish Singh <asingh@cloudera.com>
Reviewers: Jason Gustafson <jason@confluent.io>, Colin P. Mccabe <cmccabe@confluent.io>, Dana Powers <dana.powers@gmail.com>, Gwen Shapira <cshapi@gmail.com>, Grant Henke <granthenke@gmail.com>, Ismael Juma <ismael@juma.me.uk>
Closes#1251 from SinghAsDev/KAFKA-3600
With KAFKA-3008 (#1788), the implementation does not respect the contract that 'sgn(x.compareTo(y)) == -sgn(y.compareTo(x))'
This fix addresses the hang with JDK8 in DistributedHerderTest.compareTo()
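For reference, the `Comparable` contract in question requires antisymmetry; a sketch of a conforming implementation over a single field:

```java
// Comparing by one numeric field satisfies
// sgn(x.compareTo(y)) == -sgn(y.compareTo(x)) for all x, y.
final class ScheduledRequest implements Comparable<ScheduledRequest> {
    final long at; // scheduled time

    ScheduledRequest(long at) { this.at = at; }

    @Override
    public int compareTo(ScheduledRequest other) {
        return Long.compare(this.at, other.at);
    }
}
```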
Author: Shikhar Bhushan <shikhar@confluent.io>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Konstantine Karantasis <konstantine@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#2232 from shikhar/herderreq-compareto
Resolves
KAFKA-4306: Connect workers won't shut down if brokers are not available
KAFKA-4154: Kafka Connect fails to shutdown if it has not completed startup
Author: Konstantine Karantasis <konstantine@confluent.io>
Reviewers: Shikhar Bhushan <shikhar@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#2201 from kkonstantine/KAFKA-4306-Connect-workers-will-not-shut-down-if-brokers-are-not-available
Also:
* Make all implementations of `Time` thread-safe as they are accessed from multiple threads in some cases.
* Change default implementation of `MockTime` to use two separate variables for `nanoTime` and `currentTimeMillis` as they have different `origins`.
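A minimal sketch of the idea behind keeping the two clocks separate (not the actual `MockTime` source):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// nanoTime() and currentTimeMillis() have different origins (an arbitrary point
// vs. the epoch), so a mock clock tracks them independently and advances both
// together when time is moved forward.
class SimpleMockTime {
    private final AtomicLong nanos = new AtomicLong(System.nanoTime());
    private final AtomicLong millis = new AtomicLong(System.currentTimeMillis());

    long nanoseconds() { return nanos.get(); }
    long milliseconds() { return millis.get(); }

    void sleep(long ms) {
        millis.addAndGet(ms);
        nanos.addAndGet(TimeUnit.MILLISECONDS.toNanos(ms));
    }
}
```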
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>, Shikhar Bhushan <shikhar@confluent.io>, Jason Gustafson <jason@confluent.io>, Eno Thereska <eno.thereska@gmail.com>, Damian Guy <damian.guy@gmail.com>
Closes#2095 from ijuma/kafka-2247-consolidate-time-interfaces
Leverage the fix from KAFKA-2690 to remove secrets from task logging.
Author: rnpridgeon <ryan.n.pridgeon@gmail.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#2115 from rnpridgeon/KAFKA-4364
The Kafka Connect REST API does not handle connectors with slashes in their names in many places because it expects path parameters. This PR intends to:
* Reject as bad requests API calls trying to create connectors with slashes in their names
* Add support for connectors with slashes in their names in the DELETE part of the API, to allow users to clean up their connectors without dropping everything.
This PR also adds the unit test needed for the creation part; the DELETE part was tested manually.
Author: Olivier Girardot <o.girardot@lateral-thoughts.com>
Reviewers: Shikhar Bhushan <shikhar@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#2096 from ogirardot/fix/connectors-with-slashes-cannot-be-deleted
When storing a non-primitive type in a Connect offset, the following NullPointerException will occur:
```
07:18:23.702 [pool-3-thread-1] ERROR o.a.k.c.storage.OffsetStorageWriter - CRITICAL: Failed to serialize offset data, making it impossible to commit offsets under namespace tenant-db-bootstrap-source. This likely won't recover unless the unserializable partition or offset information is overwritten.
07:18:23.702 [pool-3-thread-1] ERROR o.a.k.c.storage.OffsetStorageWriter - Cause of serialization failure:
java.lang.NullPointerException: null
at org.apache.kafka.connect.storage.OffsetUtils.validateFormat(OffsetUtils.java:51)
at org.apache.kafka.connect.storage.OffsetStorageWriter.doFlush(OffsetStorageWriter.java:143)
at org.apache.kafka.connect.runtime.WorkerSourceTask.commitOffsets(WorkerSourceTask.java:319)
... snip ...
```
The attached patch fixes the specific case where OffsetUtils.validateFormat is attempting to provide a useful error message, but fails to do so because the schemaType method can return null.
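A sketch of the kind of null-safe check involved (hypothetical helper; the real logic lives in `OffsetUtils.validateFormat`):

```java
import org.apache.kafka.connect.data.ConnectSchema;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.errors.DataException;

final class OffsetFormatSketch {
    // If the value's class has no corresponding Connect schema type, report that
    // directly instead of dereferencing a null type while building the error message.
    static void validateValue(String key, Object value) {
        if (value == null) {
            return; // null values are fine
        }
        Schema.Type schemaType = ConnectSchema.schemaType(value.getClass());
        if (schemaType == null) {
            throw new DataException("Key " + key + " has value " + value
                    + " of unsupported type " + value.getClass());
        }
        if (!schemaType.isPrimitive()) {
            throw new DataException("Offsets may only contain primitive types, but field "
                    + key + " contains " + schemaType);
        }
    }
}
```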
This contribution is my original work and I license the work to the project under the project's open source license.
Author: Mathieu Fenniak <mathieu.fenniak@replicon.com>
Reviewers: Gwen Shapira
Closes#2087 from mfenniak/fix-npr-with-clearer-error-message
There should be only one case where these clean-ups have a functional impact: repeated identical logs were replaced with a single log for the stale controller epoch case.
The rest should just make the code easier to read and make it a bit less wasteful. I did this exercise because unused variables sometimes mask bugs.
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Jason Gustafson <jason@confluent.io>
Closes#1985 from ijuma/remove-unused
And improve readability by adding proper punctuation.
Author: Vahid Hashemian <vahidhashemian@us.ibm.com>
Reviewers: Jason Gustafson <jason@confluent.io>
Closes#2002 from vahidhashemian/doc/fix_typos
Cleaner to just check once for optional & default value from the `convertToConnect()` function.
It also helps address an issue with conversions for logical type schemas that have default values and null as the included value. That test case is _probably_ not an issue in practice, since when using the `JsonConverter` to serialize a missing field with a default value, it will serialize the default value for the field. But in the face of JSON data streaming in from a topic being [generous on input, strict on output](http://tedwise.com/2009/05/27/generous-on-input-strict-on-output) seems best.
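A sketch of the consolidated check (hypothetical shape; the real code lives in `JsonConverter.convertToConnect`):

```java
import com.fasterxml.jackson.databind.JsonNode;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.errors.DataException;

final class ConvertToConnectSketch {
    // Handle null/missing JSON values once, before dispatching to per-type
    // converters, so logical types don't each need their own null handling.
    static Object convertToConnect(Schema schema, JsonNode jsonValue) {
        if (jsonValue == null || jsonValue.isNull()) {
            if (schema != null && schema.defaultValue() != null)
                return schema.defaultValue();   // be generous on input
            if (schema == null || schema.isOptional())
                return null;
            throw new DataException("Invalid null value for required " + schema.type() + " field");
        }
        // ... dispatch to the per-type / logical-type converters ...
        return null; // placeholder for the rest of the conversion
    }
}
```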
Author: Shikhar Bhushan <shikhar@confluent.io>
Reviewers: Randall Hauch <rhauch@gmail.com>, Jason Gustafson <jason@confluent.io>
Closes#1872 from shikhar/kafka-4183
The `JsonConverter` class has `LogicalTypeConverter` implementations for Date, Time, Timestamp, and Decimal, but these implementations fail when the input literal value (deserialized from the message) is null.
Test cases were added to check for these cases, and these failed before the `LogicalTypeConverter` implementations were fixed to consider whether the schema has a default value or is optional, similarly to how the `JsonToConnectTypeConverter` implementations do this. Once the fixes were made, the new tests pass.
Author: Randall Hauch <rhauch@gmail.com>
Reviewers: Shikhar Bhushan <shikhar@confluent.io>, Jason Gustafson <jason@confluent.io>
Closes#1867 from rhauch/kafka-4183
Invoke the statusListener.onFailure() callback on start failures so that the statusBackingStore is updated. This involved a fix to the putSafe() functionality, which previously prevented any update not preceded by a (non-safe) put() from completing, as is the case here when a connector or task transitions directly to FAILED.
Worker start methods can still throw if the same connector name or task ID is already registered with the worker, as this condition should not happen.
Author: Shikhar Bhushan <shikhar@confluent.io>
Reviewers: Jason Gustafson <jason@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1778 from shikhar/distherder-stayup-take4
Author: Jason Gustafson <jason@confluent.io>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>, Ismael Juma <ismael@juma.me.uk>, Guozhang Wang <wangguoz@gmail.com>
Closes#1627 from hachikuji/KAFKA-3888
Author: Ewen Cheslack-Postava <me@ewencp.org>
Reviewers: Jason Gustafson, Gwen Shapira
Closes#1727 from ewencp/kafka-3847-per-task-producers and squashes the following commits:
7d39724 [Ewen Cheslack-Postava] Add timeout for closing producers.
98ec7f6 [Ewen Cheslack-Postava] KAFKA-3847: Use a separate producer per source task
ewencp I went down the list of connect configs and it looks like only the internal converter configs are mismarked. It looks like the `cluster` config that is present in the current docs is already gone. The only other values I could see an argument for changing importance on are the ssl configs (marked high), but they are consistent with the producer/consumer config docs, so that's at least consistent. Everything else marked high looks either mandatory or something that requires consideration in a production deployment to me.
Author: Dustin Cote <dustin@confluent.io>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1653 from cotedm/KAFKA-2932
Fix the test by using a more liberal timeout and forcing more frequent SinkTask.put() calls. Also add some logging to aid future debugging.
Author: Ewen Cheslack-Postava <me@ewencp.org>
Reviewers: Jason Gustafson <jason@confluent.io>, Ismael Juma <ismael@juma.me.uk>
Closes#1663 from ewencp/kafka-3935-fix-restart-system-test
Was just reading kafka source code, my favourite Friday afternoon activity, when I found these small grammatical errors in some `DataException` messages.
Could someone please review? ewencp dguy
Author: Laurier Mantel <laurier.mantel@shopify.com>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1551 from LaurierMantel/maps-typos
And not the containing struct's default value.
The contribution is my original work and I license the work to the project under the project's open source license.
ewencp
Author: Rollulus <roelboel@xs4all.nl>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1528 from rollulus/kafka-3864
ExecutorService needs to be shut down on close, lest a zombie thread
prevent clean shutdown.
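For reference, the standard shutdown pattern:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

final class ExecutorShutdownSketch {
    // Stop accepting new work, wait briefly for in-flight tasks, then
    // force-interrupt anything still running so no thread survives close().
    static void shutdownQuietly(ExecutorService executor) {
        executor.shutdown();
        try {
            if (!executor.awaitTermination(30, TimeUnit.SECONDS)) {
                executor.shutdownNow();
            }
        } catch (InterruptedException e) {
            executor.shutdownNow();
            Thread.currentThread().interrupt();
        }
    }
}
```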
ewencp
Author: Peter Davis <peter.davis@expeditors.com>
Reviewers: Liquan Pei <liquanpei@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1383 from davispw/KAFKA-3710
Author: Christian Posta <christian.posta@gmail.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1401 from christian-posta/ceposta-connect-class-cast-error
Author: Jason Gustafson <jason@confluent.io>
Reviewers: Grant Henke <granthenke@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1322 from hachikuji/KAFKA-3659
hachikuji ewencp Can you take a look when you have time?
Author: Liquan Pei <liquanpei@gmail.com>
Reviewers: Jason Gustafson <jason@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1248 from Ishiihara/kafka-3459
ewencp granders Can you take a look? Thanks!
Author: Liquan Pei <liquanpei@gmail.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1259 from Ishiihara/fix-warning
ewencp Can you take a quick look?
Author: Liquan Pei <liquanpei@gmail.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1252 from Ishiihara/pre-list-connectors
For enums and other constant strings, use locale independent case conversions to enable comparisons to work regardless of the default locale.
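For reference, the classic pitfall this avoids (e.g. the Turkish dotless-i):

```java
import java.util.Locale;

// In a Turkish default locale, "file".toUpperCase() yields "FİLE" (dotted capital I),
// so a comparison against the constant "FILE" fails. Locale.ROOT makes the
// conversion independent of the default locale.
final class CaseInsensitive {
    static boolean equalsIgnoreLocale(String a, String b) {
        return a.toUpperCase(Locale.ROOT).equals(b.toUpperCase(Locale.ROOT));
    }
}
```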
Author: Rajini Sivaram <rajinisivaram@googlemail.com>
Reviewers: Manikumar Reddy, Ismael Juma, Guozhang Wang, Gwen Shapira
Closes#1220 from rajinisivaram/KAFKA-3548
This patch fixes all occurrences of two consecutive 'the's in the code comments.
Author: Ishita Mandhan (imandha@us.ibm.com)
Author: Ishita Mandhan <imandha@us.ibm.com>
Reviewers: Guozhang Wang <wangguoz@gmail.com>
Closes#1240 from imandhan/typofixes
Fail unsent requests only when returning from KafkaConsumer.poll().
Author: Rajini Sivaram <rajinisivaram@googlemail.com>
Reviewers: Jason Gustafson <jason@confluent.io>, Ismael Juma <ismael@juma.me.uk>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1183 from rajinisivaram/KAFKA-3488
Currently the property TASKS_MAX_CONFIG is not validated against nonsensical values such as 0. This patch leverages the Range.atLeast() method to ensure the value is at least 1.
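A sketch of the validator attached to the config definition (other arguments abbreviated for illustration):

```java
import org.apache.kafka.common.config.ConfigDef;

// Attach a Range validator so tasks.max < 1 is rejected at config-parse time.
ConfigDef configDef = new ConfigDef()
        .define("tasks.max", ConfigDef.Type.INT, 1,
                ConfigDef.Range.atLeast(1),
                ConfigDef.Importance.HIGH,
                "Maximum number of tasks to use for this connector.");
```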
Author: Ryan P <Ryan.N.Pridgeon@Gmail.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1132 from rnpridgeon/KAFKA-3445
Added commitRecord(SourceRecord record) to SourceTask. This method is called during the callback from producer.send() when the message has been sent successfully. Added commitTaskRecord(SourceRecord record) to WorkerSourceTask to handle calling commitRecord on the SourceTask. Updated tests for calls to commitRecord.
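A connector could hook this callback roughly as follows (a sketch; the acknowledgement logic and `acknowledgeUpstream` are hypothetical):

```java
import java.util.Map;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

// A task that acknowledges each record to its upstream system once Kafka
// has confirmed the send.
public abstract class AckingSourceTask extends SourceTask {
    @Override
    public void commitRecord(SourceRecord record) {
        // Invoked from the producer.send() callback after a successful send.
        acknowledgeUpstream(record.sourceOffset());
    }

    protected abstract void acknowledgeUpstream(Map<String, ?> sourceOffset);
}
```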
Author: Jeremy Custenborder <jcustenborder@gmail.com>
Reviewers: Jason Gustafson <jason@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#950 from jcustenborder/KAFKA-3260
* Fix and suppress a number of unchecked warnings (except for Kafka Streams)
* Add `SafeVarargs` annotation to fix warnings
* Suppress unfixable deprecation warnings
* Replace deprecated by non-deprecated usage where possible
* Avoid reflective calls via structural types in Scala
* Tweak compiler settings for scalac and javac
Once we drop Java 7 and Scala 2.10, we can tweak the compiler settings further so that they warn us about more things.
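For reference, `SafeVarargs` only applies to methods that cannot be overridden (standard Java):

```java
import java.util.Arrays;
import java.util.List;

final class Lists {
    // @SafeVarargs suppresses the "possible heap pollution" warning on generic
    // varargs; it is permitted only on static, final or (Java 9+) private methods.
    @SafeVarargs
    static <T> List<T> listOf(T... items) {
        return Arrays.asList(items);
    }
}
```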
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Grant Henke, Gwen Shapira, Guozhang Wang
Closes#1042 from ijuma/kafka-3375-suppress-depreccated-tweak-compiler
Added an offsetBackingStore config to StandaloneConfig and DistributedConfig;
added configs for offset.storage.topic and config.storage.topic to DistributedConfig.
Author: jinxing <jinxing@fenbi.com>
Reviewers: Jason Gustafson <jason@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#734 from ZoneMayor/trunk-KAFKA-2934
…ssage to assist with figuring this out.
Author: Gwen Shapira <cshapi@gmail.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#993 from gwenshap/KAFKA-2944
See KIP-31 and KIP-32 for details.
A few notes on the patch:
1. This patch implements KIP-31 and KIP-32. The patch includes features in KAFKA-3025, KAFKA-3026 and KAFKA-3036.
2. All unit tests passed.
3. The unit tests were run with both the new and old message formats.
4. When message format conversion occurs during consumption, the consumer will not be able to detect the message-size-too-large situation. I did not try to fix this because the situation seems rare and only happens during the migration phase.
Author: Jiangjie Qin <becket.qin@gmail.com>
Author: Ismael Juma <ismael@juma.me.uk>
Author: Jiangjie (Becket) Qin <becket.qin@gmail.com>
Reviewers: Jason Gustafson <jason@confluent.io>, Anna Povzner <anna@confluent.io>, Ismael Juma <ismael@juma.me.uk>, Guozhang Wang <wangguoz@gmail.com>, Jun Rao <junrao@gmail.com>
Closes#764 from becketqin/KAFKA-3025
1. Added a test case to prove commit() on SourceTask was not being called.
2. Added commitSourceTask() which logs potential exceptions.
3. Added it after the call to finishSuccessfulFlush().
Author: Jeremy Custenborder <jeremy@scarcemedia.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#909 from jcustenborder/KAFKA-3225