kafka/checkstyle/import-control-core.xml

<!DOCTYPE import-control PUBLIC
"-//Puppy Crawl//DTD Import Control 1.1//EN"
"http://www.puppycrawl.com/dtds/import_control_1_1.dtd">
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<import-control pkg="kafka">
  <!-- THINK HARD ABOUT THE LAYERING OF THE PROJECT BEFORE CHANGING THIS FILE -->
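  <!-- How these rules are read (roughly, per the Checkstyle ImportControl docs): when an import is checked,
       the deepest matching subpackage element is consulted first, then its parents up to this root element;
       within each level the first matching allow or disallow rule wins, and an import that matches no rule
       at all is rejected. -->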
  <!-- common library dependencies -->
  <allow pkg="java" />
  <allow pkg="scala" />
  <allow pkg="javax.management" />
  <allow pkg="org.slf4j" />
  <allow pkg="org.junit" />
  <allow pkg="java.security" />
  <allow pkg="javax.net.ssl" />
  <allow pkg="javax.security" />
  <allow pkg="kafka.common" />
  <allow pkg="kafka.utils" />
  <allow pkg="kafka.serializer" />
  <allow pkg="org.apache.kafka.common" />
  <allow pkg="org.mockito" class="AssignmentsManagerTest"/>
  <allow pkg="org.apache.kafka.server"/>
  <allow pkg="org.opentest4j" class="RemoteLogManagerTest"/>
  <!-- see KIP-544 for why KafkaYammerMetrics should be used instead of the global default yammer metrics registry
       https://cwiki.apache.org/confluence/display/KAFKA/KIP-544%3A+Make+metrics+exposed+via+JMX+configurable -->
  <disallow class="com.yammer.metrics.Metrics" />
  <allow pkg="com.yammer.metrics"/>
<subpackage name="testkit">
<allow pkg="kafka.metrics"/>
<allow pkg="kafka.raft"/>
<allow pkg="kafka.server"/>
<allow pkg="kafka.tools"/>
<allow pkg="org.apache.kafka.clients"/>
<allow pkg="org.apache.kafka.controller"/>
<allow pkg="org.apache.kafka.raft"/>
<allow pkg="org.apache.kafka.test"/>
<allow pkg="org.apache.kafka.metadata" />
<allow pkg="org.apache.kafka.metalog" />
<allow pkg="org.apache.kafka.server.common" />
<allow pkg="org.apache.kafka.server.fault" />
<allow class="org.apache.kafka.storage.internals.log.CleanerConfig" />
<allow class="org.apache.kafka.network.SocketServerConfigs" />
</subpackage>
<subpackage name="tools">
<allow pkg="org.apache.kafka.clients.admin" />
<allow pkg="kafka.admin" />
<allow pkg="org.apache.kafka.clients.consumer" />
<allow pkg="org.apache.kafka.server.util" />
<allow pkg="joptsimple" />
</subpackage>
<subpackage name="coordinator">
<allow class="kafka.server.MetadataCache" />
</subpackage>
<subpackage name="examples">
<allow pkg="org.apache.kafka.clients" />
</subpackage>
<subpackage name="log.remote">
<allow pkg="org.apache.kafka.server.common" />
<allow pkg="org.apache.kafka.server.log.remote" />
<allow pkg="org.apache.kafka.server.metrics" />
<allow pkg="org.apache.kafka.storage.internals" />
<allow pkg="kafka.log" />
<allow pkg="kafka.cluster" />
<allow pkg="kafka.server" />
<allow pkg="org.mockito" />
<allow pkg="org.apache.kafka.test" />
</subpackage>
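  <!-- kafka.server is effectively unrestricted within the project: it may import from any kafka.* or org.apache.kafka.* package -->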
<subpackage name="server">
<allow pkg="kafka" />
<allow pkg="org.apache.kafka" />
</subpackage>
<subpackage name="test">
<allow pkg="org.apache.kafka.controller"/>
<allow pkg="org.apache.kafka.metadata"/>
<allow pkg="org.apache.kafka.server.authorizer"/>
<allow pkg="org.apache.kafka.server.common" />
<allow pkg="org.apache.kafka.test" />
<allow pkg="kafka.testkit"/>
<allow pkg="kafka.test.annotation"/>
<allow pkg="kafka.test.junit"/>
<allow pkg="kafka.network"/>
<allow pkg="kafka.api"/>
<allow pkg="kafka.server"/>
<allow pkg="kafka.zk" />
<allow pkg="org.apache.kafka.clients.admin"/>
<allow pkg="org.apache.kafka.clients.consumer"/>
<allow pkg="org.apache.kafka.coordinator.group"/>
<allow pkg="org.apache.kafka.coordinator.transaction"/>
<subpackage name="annotation">
<allow pkg="kafka.test"/>
</subpackage>
<subpackage name="junit">
<allow pkg="kafka.test"/>
<allow pkg="org.apache.kafka.clients"/>
<allow pkg="org.apache.kafka.metadata" />
</subpackage>
<subpackage name="server">
<allow pkg="kafka.test" />
</subpackage>
</subpackage>
<subpackage name="admin">
<allow pkg="kafka.admin"/>
<allow pkg="kafka.cluster"/>
<allow pkg="kafka.security.authorizer"/>
<allow pkg="kafka.server"/>
<allow pkg="kafka.zk"/>
<allow pkg="org.apache.kafka.clients.admin"/>
<allow pkg="org.apache.kafka.coordinator.group"/>
<allow pkg="org.apache.kafka.metadata.authorizer"/>
<allow pkg="org.apache.kafka.security"/>
<allow pkg="org.apache.kafka.server"/>
<allow pkg="org.apache.kafka.test"/>
<allow pkg="org.apache.log4j"/>
<allow pkg="kafka.test"/>
<allow pkg="kafka.test.annotation"/>
<allow pkg="kafka.test.junit"/>
</subpackage>
</import-control>