<!DOCTYPE import-control PUBLIC
    "-//Puppy Crawl//DTD Import Control 1.1//EN"
    "http://www.puppycrawl.com/dtds/import_control_1_1.dtd">

<!--
 Licensed to the Apache Software Foundation (ASF) under one or more
 contributor license agreements. See the NOTICE file distributed with
 this work for additional information regarding copyright ownership.
 The ASF licenses this file to You under the Apache License, Version 2.0
 (the "License"); you may not use this file except in compliance with
 the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
-->

<import-control pkg="org.apache.kafka">

  <!-- THINK HARD ABOUT THE LAYERING OF THE PROJECT BEFORE CHANGING THIS FILE -->

  <!-- common library dependencies -->
  <allow pkg="java" />
  <allow pkg="javax.management" />
  <allow pkg="org.slf4j" />
  <allow pkg="org.junit" />
  <allow pkg="org.opentest4j" />
  <allow pkg="org.hamcrest" />
  <allow pkg="org.mockito" />
  <allow pkg="org.easymock" />
  <allow pkg="org.powermock" />
  <allow pkg="java.security" />
  <allow pkg="javax.net.ssl" />
  <allow pkg="javax.security" />
  <allow pkg="org.ietf.jgss" />
  <allow pkg="net.jqwik.api" />

  <!-- no one depends on the server -->
  <disallow pkg="kafka" />

  <!-- anyone can use public classes -->
  <allow pkg="org.apache.kafka.common" exact-match="true" />
  <allow pkg="org.apache.kafka.common.security" />
  <allow pkg="org.apache.kafka.common.serialization" />
  <allow pkg="org.apache.kafka.common.utils" />
  <allow pkg="org.apache.kafka.common.errors" exact-match="true" />
  <allow pkg="org.apache.kafka.common.memory" />

  <subpackage name="common">
    <allow class="org.apache.kafka.clients.consumer.ConsumerRecord" exact-match="true" />
    <allow class="org.apache.kafka.common.message.ApiMessageType" exact-match="true" />
    <disallow pkg="org.apache.kafka.clients" />
    <allow pkg="org.apache.kafka.common" exact-match="true" />
    <allow pkg="org.apache.kafka.common.annotation" />
    <allow pkg="org.apache.kafka.common.config" exact-match="true" />
    <allow pkg="org.apache.kafka.common.internals" exact-match="true" />
    <allow pkg="org.apache.kafka.test" />

    <subpackage name="acl">
      <allow pkg="org.apache.kafka.common.annotation" />
      <allow pkg="org.apache.kafka.common.acl" />
      <allow pkg="org.apache.kafka.common.resource" />
    </subpackage>

    <subpackage name="config">
      <allow pkg="org.apache.kafka.common.config" />
      <!-- for testing -->
      <allow pkg="org.apache.kafka.common.metrics" />
    </subpackage>

    <!-- Third-party compression libraries should only be referenced from this package -->
    <subpackage name="compress">
      <allow pkg="com.github.luben.zstd" />
      <allow pkg="net.jpountz.lz4" />
      <allow pkg="net.jpountz.xxhash" />
      <allow pkg="org.apache.kafka.common.compress" />
      <allow pkg="org.xerial.snappy" />
    </subpackage>

    <subpackage name="message">
      <allow pkg="com.fasterxml.jackson" />
      <allow pkg="org.apache.kafka.common.protocol" />
      <allow pkg="org.apache.kafka.common.protocol.types" />
      <allow pkg="org.apache.kafka.common.message" />
      <allow pkg="org.apache.kafka.common.record" />
    </subpackage>

    <subpackage name="metadata">
      <allow pkg="com.fasterxml.jackson" />
      <allow pkg="org.apache.kafka.common.protocol" />
      <allow pkg="org.apache.kafka.common.protocol.types" />
      <allow pkg="org.apache.kafka.common.message" />
      <allow pkg="org.apache.kafka.common.metadata" />
    </subpackage>

    <subpackage name="metrics">
      <allow pkg="org.apache.kafka.common.metrics" />
    </subpackage>

    <subpackage name="memory">
      <allow pkg="org.apache.kafka.common.metrics" />
    </subpackage>

    <subpackage name="network">
      <allow pkg="org.apache.kafka.common.security.auth" />
      <allow pkg="org.apache.kafka.common.protocol" />
      <allow pkg="org.apache.kafka.common.config" />
      <allow pkg="org.apache.kafka.common.metrics" />
      <allow pkg="org.apache.kafka.common.security" />
      <allow class="org.apache.kafka.common.requests.ApiVersionsResponse" />
    </subpackage>

    <subpackage name="resource">
      <allow pkg="org.apache.kafka.common.annotation" />
      <allow pkg="org.apache.kafka.common.resource" />
    </subpackage>

    <subpackage name="security">
      <allow pkg="org.apache.kafka.common.annotation" />
      <allow pkg="org.apache.kafka.common.network" />
      <allow pkg="org.apache.kafka.common.config" />
      <allow pkg="org.apache.kafka.common.protocol" />
      <allow pkg="org.apache.kafka.common.errors" />
      <!-- To access DefaultPrincipalData -->
      <allow pkg="org.apache.kafka.common.message" />
      <subpackage name="authenticator">
        <allow pkg="org.apache.kafka.common.message" />
        <allow pkg="org.apache.kafka.common.protocol.types" />
        <allow pkg="org.apache.kafka.common.requests" />
        <allow pkg="org.apache.kafka.clients" />
      </subpackage>
      <subpackage name="ssl">
        <allow pkg="javax.crypto" />
      </subpackage>
      <subpackage name="scram">
        <allow pkg="javax.crypto" />
      </subpackage>
      <subpackage name="oauthbearer">
        <allow pkg="com.fasterxml.jackson.databind" />
        <allow pkg="org.jose4j" />
      </subpackage>
    </subpackage>

    <subpackage name="protocol">
      <allow pkg="org.apache.kafka.common.errors" />
      <allow pkg="org.apache.kafka.common.message" />
      <allow pkg="org.apache.kafka.common.network" />
      <allow pkg="org.apache.kafka.common.protocol" />
      <allow pkg="org.apache.kafka.common.protocol.types" />
      <allow pkg="org.apache.kafka.common.record" />
      <allow pkg="org.apache.kafka.common.requests" />
      <allow pkg="org.apache.kafka.common.resource" />
      <allow pkg="com.fasterxml.jackson" />
    </subpackage>

    <subpackage name="record">
      <allow pkg="org.apache.kafka.common.compress" />
      <allow pkg="org.apache.kafka.common.header" />
      <allow pkg="org.apache.kafka.common.record" />
      <allow pkg="org.apache.kafka.common.message" />
      <allow pkg="org.apache.kafka.common.network" />
      <allow pkg="org.apache.kafka.common.protocol" />
      <allow pkg="org.apache.kafka.common.protocol.types" />
      <allow pkg="org.apache.kafka.common.errors" />
    </subpackage>

    <subpackage name="header">
      <allow pkg="org.apache.kafka.common.header" />
      <allow pkg="org.apache.kafka.common.record" />
    </subpackage>

    <subpackage name="requests">
      <allow pkg="org.apache.kafka.common.acl" />
      <allow pkg="org.apache.kafka.common.feature" />
      <allow pkg="org.apache.kafka.common.protocol" />
      <allow pkg="org.apache.kafka.common.message" />
      <allow pkg="org.apache.kafka.common.network" />
      <allow pkg="org.apache.kafka.common.quota" />
      <allow pkg="org.apache.kafka.common.requests" />
      <allow pkg="org.apache.kafka.common.resource" />
      <allow pkg="org.apache.kafka.common.record" />
      <!-- for AuthorizableRequestContext interface -->
      <allow pkg="org.apache.kafka.server.authorizer" />
      <!-- for IncrementalAlterConfigsRequest Builder -->
      <allow pkg="org.apache.kafka.clients.admin" />
      <!-- for testing -->
      <allow pkg="org.apache.kafka.common.errors" />
    </subpackage>

    <subpackage name="serialization">
      <allow pkg="org.apache.kafka.clients" />
      <allow class="org.apache.kafka.common.errors.SerializationException" />
      <allow class="org.apache.kafka.common.header.Headers" />
    </subpackage>

    <subpackage name="utils">
      <allow pkg="org.apache.kafka.common" />
      <allow pkg="org.apache.log4j" />
    </subpackage>

    <subpackage name="quotas">
      <allow pkg="org.apache.kafka.common" />
    </subpackage>
  </subpackage>

  <subpackage name="controller">
    <allow pkg="com.yammer.metrics"/>
    <allow pkg="org.apache.kafka.clients" />
    <allow pkg="org.apache.kafka.clients.admin" />
    <allow pkg="org.apache.kafka.common.acl" />
    <allow pkg="org.apache.kafka.common.annotation" />
    <allow pkg="org.apache.kafka.common.config" />
    <allow pkg="org.apache.kafka.common.feature" />
    <allow pkg="org.apache.kafka.common.internals" />
    <allow pkg="org.apache.kafka.common.message" />
    <allow pkg="org.apache.kafka.common.metadata" />
    <allow pkg="org.apache.kafka.common.metrics" />
    <allow pkg="org.apache.kafka.common.network" />
    <allow pkg="org.apache.kafka.common.protocol" />
    <allow pkg="org.apache.kafka.common.quota" />
    <allow pkg="org.apache.kafka.common.record" />
    <allow pkg="org.apache.kafka.common.requests" />
    <allow pkg="org.apache.kafka.common.resource" />
    <allow pkg="org.apache.kafka.controller" />
    <allow pkg="org.apache.kafka.image.writer" />
    <allow pkg="org.apache.kafka.metadata" />
    <allow pkg="org.apache.kafka.metadata.authorizer" />
    <allow pkg="org.apache.kafka.metadata.migration" />
    <allow pkg="org.apache.kafka.metalog" />
    <allow pkg="org.apache.kafka.queue" />
    <allow pkg="org.apache.kafka.raft" />
    <allow pkg="org.apache.kafka.server.authorizer" />
    <allow pkg="org.apache.kafka.server.common" />
    <allow pkg="org.apache.kafka.server.config" />
    <allow pkg="org.apache.kafka.server.fault" />
    <allow pkg="org.apache.kafka.server.metrics" />
    <allow pkg="org.apache.kafka.server.policy"/>
    <allow pkg="org.apache.kafka.server.util"/>
    <allow pkg="org.apache.kafka.snapshot" />
    <allow pkg="org.apache.kafka.test" />
    <allow pkg="org.apache.kafka.timeline" />
  </subpackage>

  <subpackage name="image">
    <allow pkg="org.apache.kafka.common.config" />
    <allow pkg="org.apache.kafka.common.message" />
    <allow pkg="org.apache.kafka.common.metadata" />
    <allow pkg="org.apache.kafka.common.protocol" />
    <allow pkg="org.apache.kafka.common.quota" />
    <allow pkg="org.apache.kafka.common.requests" />
    <allow pkg="org.apache.kafka.common.resource" />
    <allow pkg="org.apache.kafka.image" />
    <allow pkg="org.apache.kafka.image.writer" />
    <allow pkg="org.apache.kafka.metadata" />
    <allow pkg="org.apache.kafka.queue" />
    <allow pkg="org.apache.kafka.raft" />
    <allow pkg="org.apache.kafka.server.common" />
    <allow pkg="org.apache.kafka.server.fault" />
    <allow pkg="org.apache.kafka.server.util" />
    <allow pkg="org.apache.kafka.snapshot" />
    <allow pkg="org.apache.kafka.test" />
  </subpackage>

  <subpackage name="metadata">
    <allow pkg="org.apache.kafka.clients" />
    <allow pkg="org.apache.kafka.common.annotation" />
    <allow pkg="org.apache.kafka.common.config" />
    <allow pkg="org.apache.kafka.common.message" />
    <allow pkg="org.apache.kafka.common.metadata" />
    <allow pkg="org.apache.kafka.common.protocol" />
    <allow pkg="org.apache.kafka.common.record" />
    <allow pkg="org.apache.kafka.common.requests" />
    <allow pkg="org.apache.kafka.image" />
    <allow pkg="org.apache.kafka.metadata" />
    <allow pkg="org.apache.kafka.metalog" />
    <allow pkg="org.apache.kafka.queue" />
    <allow pkg="org.apache.kafka.raft" />
    <allow pkg="org.apache.kafka.server.authorizer" />
    <allow pkg="org.apache.kafka.server.common" />
    <allow pkg="org.apache.kafka.server.fault" />
    <allow pkg="org.apache.kafka.server.config" />
    <allow pkg="org.apache.kafka.server.util"/>
    <allow pkg="org.apache.kafka.test" />

    <subpackage name="authorizer">
      <allow pkg="org.apache.kafka.common.acl" />
      <allow pkg="org.apache.kafka.common.requests" />
      <allow pkg="org.apache.kafka.common.resource" />
      <allow pkg="org.apache.kafka.controller" />
      <allow pkg="org.apache.kafka.metadata" />
      <allow pkg="org.apache.kafka.common.internals" />
    </subpackage>

    <subpackage name="bootstrap">
      <allow pkg="org.apache.kafka.snapshot" />
    </subpackage>

    <subpackage name="fault">
      <allow pkg="org.apache.kafka.server.fault" />
    </subpackage>
  </subpackage>

  <subpackage name="metalog">
    <allow pkg="org.apache.kafka.common.metadata" />
    <allow pkg="org.apache.kafka.common.protocol" />
    <allow pkg="org.apache.kafka.common.record" />
    <allow pkg="org.apache.kafka.metadata" />
    <allow pkg="org.apache.kafka.metalog" />
    <allow pkg="org.apache.kafka.raft" />
    <allow pkg="org.apache.kafka.snapshot" />
    <allow pkg="org.apache.kafka.queue" />
    <allow pkg="org.apache.kafka.server.common" />
    <allow pkg="org.apache.kafka.test" />
  </subpackage>

  <subpackage name="queue">
    <allow pkg="org.apache.kafka.test" />
  </subpackage>

  <subpackage name="clients">
    <allow pkg="org.apache.kafka.common" />
    <allow pkg="org.apache.kafka.clients" exact-match="true"/>
    <allow pkg="org.apache.kafka.test" />

    <subpackage name="consumer">
      <allow pkg="org.apache.kafka.clients.consumer" />
    </subpackage>

    <subpackage name="producer">
      <allow pkg="org.apache.kafka.clients.consumer" />
      <allow pkg="org.apache.kafka.clients.producer" />
    </subpackage>

    <subpackage name="admin">
      <allow pkg="org.apache.kafka.clients.admin" />
      <allow pkg="org.apache.kafka.clients.consumer.internals" />
      <allow pkg="org.apache.kafka.clients.consumer" />
    </subpackage>
  </subpackage>

  <subpackage name="coordinator">
    <subpackage name="group">
      <allow pkg="org.apache.kafka.common.annotation" />
      <allow pkg="org.apache.kafka.common.message" />
      <allow pkg="org.apache.kafka.common.protocol" />
      <allow pkg="org.apache.kafka.common.requests" />
      <allow pkg="org.apache.kafka.server.util"/>
    </subpackage>
  </subpackage>

  <subpackage name="server">
    <allow pkg="org.apache.kafka.common" />
    <allow pkg="joptsimple" />

    <!-- This is required to make AlterConfigPolicyTest work. -->
    <allow pkg="org.apache.kafka.server.policy" />

    <subpackage name="common">
      <allow pkg="org.apache.kafka.server.common" />
    </subpackage>

    <subpackage name="metrics">
      <allow pkg="com.yammer.metrics" />
    </subpackage>

    <subpackage name="log">
      <allow pkg="com.fasterxml.jackson" />
      <allow pkg="kafka.api" />
      <allow pkg="kafka.utils" />
      <allow pkg="org.apache.kafka.clients" />
      <allow pkg="org.apache.kafka.server.common" />
      <allow pkg="org.apache.kafka.server.config" />
      <allow pkg="org.apache.kafka.server.log" />
      <allow pkg="org.apache.kafka.server.record" />
      <allow pkg="org.apache.kafka.test" />
      <allow pkg="org.apache.kafka.storage"/>

      <subpackage name="remote">
        <allow pkg="scala.collection" />
      </subpackage>
    </subpackage>
  </subpackage>

  <subpackage name="storage.internals">
    <allow pkg="org.apache.kafka.server"/>
    <allow pkg="org.apache.kafka.storage.internals"/>
    <allow pkg="org.apache.kafka.common" />
  </subpackage>

  <subpackage name="shell">
    <allow pkg="com.fasterxml.jackson" />
    <allow pkg="kafka.raft"/>
    <allow pkg="kafka.server"/>
    <allow pkg="kafka.tools"/>
    <allow pkg="net.sourceforge.argparse4j" />
    <allow pkg="org.apache.kafka.common"/>
    <allow pkg="org.apache.kafka.metadata"/>
    <allow pkg="org.apache.kafka.controller.util"/>
    <allow pkg="org.apache.kafka.queue"/>
    <allow pkg="org.apache.kafka.raft"/>
    <allow pkg="org.apache.kafka.server.common" />
    <allow pkg="org.apache.kafka.shell"/>
    <allow pkg="org.apache.kafka.snapshot"/>
    <allow pkg="org.jline"/>
    <allow pkg="scala.compat"/>
  </subpackage>

  <subpackage name="tools">
    <allow pkg="org.apache.kafka.common"/>
    <allow pkg="org.apache.kafka.server.util" />
    <allow pkg="org.apache.kafka.clients.admin" />
    <allow pkg="org.apache.kafka.clients.producer" />
    <allow pkg="org.apache.kafka.clients.consumer" />
    <allow pkg="org.apache.kafka.test" />
    <allow pkg="com.fasterxml.jackson" />
    <allow pkg="org.jose4j" />
    <allow pkg="net.sourceforge.argparse4j" />
    <allow pkg="org.apache.log4j" />
    <allow pkg="kafka.test" />
    <allow pkg="joptsimple" />
    <allow pkg="javax.rmi.ssl"/>
  </subpackage>

  <subpackage name="trogdor">
    <allow pkg="com.fasterxml.jackson" />
    <allow pkg="javax.servlet" />
    <allow pkg="javax.ws.rs" />
    <allow pkg="net.sourceforge.argparse4j" />
    <allow pkg="org.apache.kafka.clients" />
    <allow pkg="org.apache.kafka.clients.admin" />
    <allow pkg="org.apache.kafka.clients.consumer" exact-match="true"/>
    <allow pkg="org.apache.kafka.clients.producer" exact-match="true"/>
    <allow pkg="org.apache.kafka.common" />
    <allow pkg="org.apache.kafka.test"/>
    <allow pkg="org.apache.kafka.trogdor" />
    <allow pkg="org.eclipse.jetty" />
    <allow pkg="org.glassfish.jersey" />
  </subpackage>

  <subpackage name="message">
    <allow pkg="com.fasterxml.jackson" />
    <allow pkg="com.fasterxml.jackson.annotation" />
    <allow pkg="net.sourceforge.argparse4j" />
    <allow pkg="org.apache.message" />
  </subpackage>

  <subpackage name="streams">
    <allow pkg="org.apache.kafka.common"/>
    <allow pkg="org.apache.kafka.test"/>
    <allow pkg="org.apache.kafka.clients"/>
    <allow pkg="org.apache.kafka.clients.producer" exact-match="true"/>
    <allow pkg="org.apache.kafka.clients.consumer" exact-match="true"/>

    <allow pkg="org.apache.kafka.streams"/>

    <subpackage name="examples">
      <allow pkg="com.fasterxml.jackson" />
      <allow pkg="org.apache.kafka.connect.json" />
    </subpackage>

    <subpackage name="internals">
      <allow pkg="com.fasterxml.jackson" />
    </subpackage>

    <subpackage name="perf">
      <allow pkg="com.fasterxml.jackson.databind" />
    </subpackage>

    <subpackage name="integration">
      <allow pkg="kafka.admin" />
      <allow pkg="kafka.api" />
      <allow pkg="kafka.cluster" />
      <allow pkg="kafka.server" />
      <allow pkg="kafka.tools" />
      <allow pkg="kafka.utils" />
      <allow pkg="kafka.log" />
      <allow pkg="scala" />
      <allow class="kafka.zk.EmbeddedZookeeper"/>
      <allow pkg="com.fasterxml.jackson" />
    </subpackage>

    <subpackage name="test">
      <allow pkg="kafka.admin" />
    </subpackage>

    <subpackage name="tools">
      <allow pkg="kafka.tools" />
    </subpackage>

    <subpackage name="state">
      <allow pkg="org.rocksdb" />
    </subpackage>

    <subpackage name="processor">
      <subpackage name="internals">
        <allow pkg="com.fasterxml.jackson" />
        <allow pkg="kafka.utils" />
        <allow pkg="org.apache.zookeeper" />
      </subpackage>
    </subpackage>
  </subpackage>

  <subpackage name="log4jappender">
    <allow pkg="org.apache.log4j" />
    <allow pkg="org.apache.kafka.clients" />
    <allow pkg="org.apache.kafka.common" />
    <allow pkg="org.apache.kafka.test" />
  </subpackage>

  <subpackage name="test">
    <allow pkg="org.apache.kafka" />
    <allow pkg="org.bouncycastle" />
    <allow pkg="org.rocksdb" />
  </subpackage>

  <subpackage name="raft">
    <allow pkg="org.apache.kafka.raft" />
    <allow pkg="org.apache.kafka.metadata" />
    <allow pkg="org.apache.kafka.snapshot" />
    <allow pkg="org.apache.kafka.clients" />
    <allow pkg="org.apache.kafka.common.config" />
    <allow pkg="org.apache.kafka.common.message" />
    <allow pkg="org.apache.kafka.common.metadata" />
    <allow pkg="org.apache.kafka.common.metrics" />
    <allow pkg="org.apache.kafka.common.record" />
    <allow pkg="org.apache.kafka.common.requests" />
    <allow pkg="org.apache.kafka.common.protocol" />
    <allow pkg="org.apache.kafka.server.common" />
    <allow pkg="org.apache.kafka.server.common.serialization" />
    <allow pkg="org.apache.kafka.test"/>
    <allow pkg="com.fasterxml.jackson" />
    <allow pkg="net.jqwik"/>
  </subpackage>

  <subpackage name="snapshot">
    <allow pkg="org.apache.kafka.common.record" />
    <allow pkg="org.apache.kafka.common.message" />
    <allow pkg="org.apache.kafka.raft" />
    <allow pkg="org.apache.kafka.server.common" />
    <allow pkg="org.apache.kafka.test"/>
  </subpackage>

  <subpackage name="connect">
    <allow pkg="org.apache.kafka.common" />
    <allow pkg="org.apache.kafka.connect.data" />
    <allow pkg="org.apache.kafka.connect.errors" />
    <allow pkg="org.apache.kafka.connect.header" />
    <allow pkg="org.apache.kafka.connect.components"/>
    <allow pkg="org.apache.kafka.clients" />
    <allow pkg="org.apache.kafka.test"/>

    <subpackage name="source">
      <allow pkg="org.apache.kafka.connect.connector" />
      <allow pkg="org.apache.kafka.connect.storage" />
    </subpackage>

    <subpackage name="sink">
      <allow pkg="org.apache.kafka.clients.consumer" />
      <allow pkg="org.apache.kafka.connect.connector" />
      <allow pkg="org.apache.kafka.connect.storage" />
    </subpackage>

    <subpackage name="converters">
      <allow pkg="org.apache.kafka.connect.storage" />
    </subpackage>

    <subpackage name="connector.policy">
      <allow pkg="org.apache.kafka.connect.health" />
      <allow pkg="org.apache.kafka.connect.connector" />
      <!-- for testing -->
      <allow pkg="org.apache.kafka.connect.runtime" />
    </subpackage>

    <subpackage name="rest">
      <allow pkg="org.apache.kafka.connect.health" />
      <allow pkg="javax.ws.rs" />
      <allow pkg="javax.security.auth"/>
      <subpackage name="basic">
        <allow pkg="org.apache.kafka.connect.rest"/>
        <allow pkg="javax.annotation"/>
      </subpackage>
    </subpackage>

    <subpackage name="mirror">
      <allow pkg="org.apache.kafka.clients.consumer" />
      <allow pkg="org.apache.kafka.connect.source" />
      <allow pkg="org.apache.kafka.connect.sink" />
      <allow pkg="org.apache.kafka.connect.storage" />
      <allow pkg="org.apache.kafka.connect.connector" />
      <allow pkg="org.apache.kafka.connect.runtime" />
      <allow pkg="org.apache.kafka.connect.runtime.distributed" />
      <allow pkg="org.apache.kafka.connect.util" />
      <allow pkg="org.apache.kafka.connect.converters" />
      <allow pkg="net.sourceforge.argparse4j" />
      <!-- for tests -->
      <allow pkg="org.apache.kafka.connect.integration" />
      <allow pkg="org.apache.kafka.connect.mirror" />
      <allow pkg="kafka.server" />
      <subpackage name="rest">
        <allow pkg="javax.ws.rs" />
      </subpackage>
    </subpackage>

    <subpackage name="runtime">
      <allow pkg="org.apache.kafka.connect" />
      <allow pkg="org.reflections"/>
      <allow pkg="org.reflections.util"/>
      <allow pkg="javax.crypto"/>
      <allow pkg="org.eclipse.jetty.util" />

      <subpackage name="rest">
        <allow pkg="org.eclipse.jetty" />
        <allow pkg="javax.ws.rs" />
        <allow pkg="javax.servlet" />
        <allow pkg="org.glassfish.jersey" />
        <allow pkg="com.fasterxml.jackson" />
        <allow pkg="org.apache.http"/>
        <allow pkg="io.swagger.v3.oas.annotations"/>
        <subpackage name="resources">
          <allow pkg="org.apache.log4j" />
        </subpackage>
      </subpackage>

      <subpackage name="isolation">
        <allow pkg="com.fasterxml.jackson" />
        <allow pkg="org.apache.maven.artifact.versioning" />
        <allow pkg="javax.tools" />
      </subpackage>

      <subpackage name="distributed">
        <allow pkg="javax.ws.rs.core" />
      </subpackage>
    </subpackage>

    <subpackage name="cli">
      <allow pkg="org.apache.kafka.connect.runtime" />
      <allow pkg="org.apache.kafka.connect.storage" />
      <allow pkg="org.apache.kafka.connect.util" />
      <allow pkg="org.apache.kafka.common" />
      <allow pkg="org.apache.kafka.connect.connector.policy" />
    </subpackage>

    <subpackage name="storage">
      <allow pkg="org.apache.kafka.connect" />
      <allow pkg="org.apache.kafka.common.serialization" />
      <allow pkg="javax.crypto.spec"/>
    </subpackage>

    <subpackage name="util">
      <allow pkg="org.apache.kafka.connect" />
      <allow pkg="org.reflections.vfs" />
      <!-- for annotations to avoid code duplication -->
      <allow pkg="com.fasterxml.jackson.annotation" />
      <allow pkg="com.fasterxml.jackson.databind" />
      <subpackage name="clusters">
        <allow pkg="kafka.cluster" />
        <allow pkg="kafka.server" />
        <allow pkg="kafka.zk" />
        <allow pkg="kafka.utils" />
        <allow class="javax.servlet.http.HttpServletResponse" />
        <allow class="javax.ws.rs.core.Response" />
        <allow pkg="com.fasterxml.jackson.core.type" />
        <allow pkg="org.apache.kafka.metadata" />
      </subpackage>
    </subpackage>

    <subpackage name="integration">
      <allow pkg="org.apache.kafka.connect.util.clusters" />
      <allow pkg="org.apache.kafka.connect" />
      <allow pkg="org.apache.kafka.tools" />
      <allow pkg="javax.ws.rs" />
      <allow pkg="org.apache.http"/>
      <allow pkg="org.eclipse.jetty.util"/>
    </subpackage>

    <subpackage name="json">
      <allow pkg="com.fasterxml.jackson" />
      <allow pkg="org.apache.kafka.common.serialization" />
      <allow pkg="org.apache.kafka.common.errors" />
      <allow pkg="org.apache.kafka.connect.storage" />
    </subpackage>

    <subpackage name="file">
      <allow pkg="org.apache.kafka.connect" />
      <allow pkg="org.apache.kafka.clients.consumer" />
      <!-- for tests -->
      <allow pkg="org.easymock" />
      <allow pkg="org.powermock" />
    </subpackage>

    <subpackage name="tools">
      <allow pkg="org.apache.kafka.connect" />
      <allow pkg="org.apache.kafka.tools" />
      <allow pkg="com.fasterxml.jackson" />
    </subpackage>

    <subpackage name="transforms">
      <allow class="org.apache.kafka.connect.connector.ConnectRecord" />
      <allow class="org.apache.kafka.connect.source.SourceRecord" />
      <allow class="org.apache.kafka.connect.sink.SinkRecord" />
      <allow pkg="org.apache.kafka.connect.transforms.util" />
    </subpackage>
  </subpackage>

</import-control>