<?xml version="1.0"?>
<!DOCTYPE suppressions PUBLIC
"-//Puppy Crawl//DTD Suppressions 1.1//EN"
"http://www.puppycrawl.com/dtds/suppressions_1_1.dtd">
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<suppressions>
<!-- Note that [/\\] must be used as the path separator for cross-platform support -->
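<!--
  Illustrative only (hypothetical module name): a path-scoped suppression written with
  the cross-platform separator looks like
    <suppress checks="NPathComplexity"
              files="some-module[\\/]src[\\/]main[\\/].+.java$"/>
  so the same pattern matches both "/" (Unix) and "\" (Windows) path separators.
-->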
<!-- Generator -->
<suppress checks="CyclomaticComplexity|BooleanExpressionComplexity"
files="(SchemaGenerator|MessageDataGenerator|FieldSpec|FieldType).java"/>
<suppress checks="NPathComplexity"
files="(MessageDataGenerator|FieldSpec|WorkerSinkTask).java"/>
<suppress checks="JavaNCSS"
files="(ApiMessageType|FieldSpec|MessageDataGenerator|KafkaConsumerTest).java"/>
<suppress checks="MethodLength"
files="(FieldSpec|MessageDataGenerator).java"/>
<suppress id="dontUseSystemExit"
files="MessageGenerator.java"/>
<!-- core -->
<suppress checks="(NPathComplexity|ClassFanOutComplexity|CyclomaticComplexity|ClassDataAbstractionCoupling|FinalLocalVariable|LocalVariableName|MemberName|ParameterName|MethodLength|JavaNCSS|AvoidStarImport)"
files="core[\\/]src[\\/](generated|generated-test)[\\/].+.java$"/>
<suppress checks="NPathComplexity" files="(ClusterTestExtensions|KafkaApisBuilder).java"/>
<suppress checks="NPathComplexity|ClassFanOutComplexity|ClassDataAbstractionCoupling" files="(RemoteLogManager|RemoteLogManagerTest).java"/>
<suppress checks="ClassFanOutComplexity" files="RemoteLogManagerTest.java"/>
<suppress checks="MethodLength"
files="(KafkaClusterTestKit).java"/>
<!-- Clients -->
<suppress id="dontUseSystemExit"
files="Exit.java"/>
<suppress checks="ClassFanOutComplexity"
files="(AbstractFetch|Sender|SenderTest|ConsumerCoordinator|KafkaConsumer|PrototypeAsyncConsumer|KafkaProducer|Utils|TransactionManager|TransactionManagerTest|KafkaAdminClient|NetworkClient|Admin|KafkaRaftClient|KafkaRaftClientTest|RaftClientTestContext).java"/>
<suppress checks="ClassFanOutComplexity"
files="(SaslServerAuthenticator|SaslAuthenticatorTest).java"/>
<suppress checks="NPath"
files="SaslServerAuthenticator.java"/>
<suppress checks="ClassFanOutComplexity"
files="Errors.java"/>
<suppress checks="ClassFanOutComplexity"
files="Utils.java"/>
<suppress checks="ClassFanOutComplexity"
files="AbstractRequest.java"/>
<suppress checks="ClassFanOutComplexity"
files="AbstractResponse.java"/>
<suppress checks="ClassFanOutComplexity"
files="PrototypeAsyncConsumer.java"/>
<suppress checks="MethodLength"
files="(KerberosLogin|RequestResponseTest|ConnectMetricsRegistry|KafkaConsumer|AbstractStickyAssignor|AbstractRequest|AbstractResponse).java"/>
<suppress checks="ParameterNumber"
files="(NetworkClient|FieldSpec|KafkaRaftClient).java"/>
<suppress checks="ParameterNumber"
files="(KafkaConsumer|PrototypeAsyncConsumer|ConsumerCoordinator).java"/>
<suppress checks="ParameterNumber"
files="(RecordAccumulator|Sender).java"/>
<suppress checks="ParameterNumber"
files="ConfigDef.java"/>
<suppress checks="ParameterNumber"
files="DefaultRecordBatch.java"/>
<suppress checks="ParameterNumber"
files="MemoryRecordsBuilder.java"/>
<suppress checks="ClassDataAbstractionCoupling"
files="(KafkaConsumer|PrototypeAsyncConsumer|ConsumerCoordinator|AbstractFetch|KafkaProducer|AbstractRequest|AbstractResponse|TransactionManager|Admin|KafkaAdminClient|MockAdminClient|KafkaRaftClient|KafkaRaftClientTest).java"/>
<suppress checks="ClassDataAbstractionCoupling"
files="(Errors|SaslAuthenticatorTest|AgentTest|CoordinatorTest).java"/>
<suppress checks="BooleanExpressionComplexity"
files="(Utils|Topic|KafkaLZ4BlockOutputStream|AclData|JoinGroupRequest).java"/>
<suppress checks="CyclomaticComplexity"
files="(AbstractFetch|ConsumerCoordinator|FetchCollector|OffsetFetcherUtils|KafkaProducer|Sender|ConfigDef|KerberosLogin|AbstractRequest|AbstractResponse|Selector|SslFactory|SslTransportLayer|SaslClientAuthenticator|SaslClientCallbackHandler|SaslServerAuthenticator|AbstractCoordinator|TransactionManager|AbstractStickyAssignor|DefaultSslEngineFactory|Authorizer|RecordAccumulator|MemoryRecords|FetchSessionHandler).java"/>
<suppress checks="JavaNCSS"
files="(AbstractRequest|AbstractResponse|KerberosLogin|WorkerSinkTaskTest|TransactionManagerTest|SenderTest|KafkaAdminClient|ConsumerCoordinatorTest|KafkaAdminClientTest|KafkaRaftClientTest).java"/>
<suppress checks="NPathComplexity"
files="(ConsumerCoordinator|BufferPool|MetricName|Node|ConfigDef|RecordBatch|SslFactory|SslTransportLayer|MetadataResponse|KerberosLogin|Selector|Sender|Serdes|TokenInformation|Agent|Values|PluginUtils|MiniTrogdorCluster|TasksRequest|KafkaProducer|AbstractStickyAssignor|KafkaRaftClient|Authorizer|FetchSessionHandler|RecordAccumulator).java"/>
<suppress checks="(JavaNCSS|CyclomaticComplexity|MethodLength)"
files="CoordinatorClient.java"/>
<suppress checks="(UnnecessaryParentheses|BooleanExpressionComplexity|CyclomaticComplexity|WhitespaceAfter|LocalVariableName)"
files="Murmur3.java"/>
<suppress checks="(NPathComplexity|ClassFanOutComplexity|CyclomaticComplexity|ClassDataAbstractionCoupling|LocalVariableName|MemberName|ParameterName|MethodLength|JavaNCSS|AvoidStarImport)"
files="clients[\\/]src[\\/](generated|generated-test)[\\/].+.java$"/>
<suppress checks="NPathComplexity"
files="MessageTest.java|OffsetFetchRequest.java"/>
<!-- Clients tests -->
<suppress checks="ClassDataAbstractionCoupling"
files="(Sender|Fetcher|FetchRequestManager|OffsetFetcher|KafkaConsumer|PrototypeAsyncConsumer|Metrics|RequestResponse|TransactionManager|KafkaAdminClient|Message|KafkaProducer)Test.java"/>
<suppress checks="ClassFanOutComplexity"
files="(ConsumerCoordinator|KafkaConsumer|RequestResponse|Fetcher|FetchRequestManager|KafkaAdminClient|Message|KafkaProducer)Test.java"/>
<suppress checks="ClassFanOutComplexity"
files="MockAdminClient.java"/>
<suppress checks="CyclomaticComplexity"
files="(OffsetFetcher|RequestResponse)Test.java"/>
<suppress checks="JavaNCSS"
files="RequestResponseTest.java|FetcherTest.java|FetchRequestManagerTest.java|KafkaAdminClientTest.java"/>
<suppress checks="NPathComplexity"
files="MemoryRecordsTest|MetricsTest|RequestResponseTest|TestSslUtils|AclAuthorizerBenchmark"/>
<suppress checks="(WhitespaceAround|LocalVariableName|ImportControl|AvoidStarImport)"
files="Murmur3Test.java"/>
<!-- Connect -->
<suppress checks="ClassFanOutComplexity"
files="(AbstractHerder|DistributedHerder|Worker).java"/>
<suppress checks="ClassFanOutComplexity"
files="Worker(|Test).java"/>
<suppress checks="MethodLength"
files="(DistributedHerder|DistributedConfig|KafkaConfigBackingStore|Values|IncrementalCooperativeAssignor).java"/>
<suppress checks="ParameterNumber"
files="Worker(SinkTask|SourceTask|Coordinator).java"/>
<suppress checks="ParameterNumber"
files="(ConfigKeyInfo|DistributedHerder).java"/>
<suppress checks="DefaultComesLast"
files="LoggingResource.java" />
<suppress checks="ClassDataAbstractionCoupling"
files="(RestServer|AbstractHerder|DistributedHerder|Worker).java"/>
<suppress checks="BooleanExpressionComplexity"
files="JsonConverter.java"/>
<suppress checks="CyclomaticComplexity"
files="(FileStreamSourceTask|DistributedHerder|KafkaConfigBackingStore).java"/>
<suppress checks="CyclomaticComplexity"
files="(JsonConverter|Values|ConnectHeaders).java"/>
<suppress checks="JavaNCSS"
files="(KafkaConfigBackingStore|Values|ConnectMetricsRegistry).java"/>
<suppress checks="NPathComplexity"
files="(DistributedHerder|RestClient|RestServer|JsonConverter|KafkaConfigBackingStore|FileStreamSourceTask|WorkerSourceTask|TopicAdmin).java"/>
<!-- connect tests -->
<suppress checks="ClassDataAbstractionCoupling"
files="(DistributedHerder|KafkaBasedLog|WorkerSourceTaskWithTopicCreation|WorkerSourceTask)Test.java"/>
<suppress checks="ClassFanOutComplexity"
files="(WorkerSink|WorkerSource|ErrorHandling)Task(|WithTopicCreation)Test.java"/>
<suppress checks="ClassFanOutComplexity"
files="DistributedHerderTest.java"/>
<suppress checks="MethodLength"
files="(RequestResponse|WorkerSinkTask)Test.java"/>
<suppress checks="JavaNCSS"
files="(DistributedHerder|Worker)Test.java"/>
<!-- Raft -->
<suppress checks="NPathComplexity"
files="RecordsIterator.java"/>
<!-- Streams -->
<suppress checks="ClassFanOutComplexity"
files="(KafkaStreams|KStreamImpl|KTableImpl|InternalTopologyBuilder|StreamsPartitionAssignor|StreamThread|IQv2StoreIntegrationTest|KStreamImplTest).java"/>
<suppress checks="MethodLength"
files="KTableImpl.java"/>
<suppress checks="ParameterNumber"
files="StreamThread.java"/>
<suppress checks="ClassDataAbstractionCoupling"
files="(KafkaStreams|KStreamImpl|KTableImpl).java"/>
<suppress checks="CyclomaticComplexity"
files="(KafkaStreams|StreamsPartitionAssignor|StreamThread|TaskManager|PartitionGroup|SubscriptionWrapperSerde|AssignorConfiguration).java"/>
<suppress checks="StaticVariableName"
files="StreamsMetricsImpl.java"/>
<suppress checks="NPathComplexity"
files="(KafkaStreams|StreamsPartitionAssignor|StreamThread|TaskManager|GlobalStateManagerImpl|KStreamImplJoin|TopologyConfig|KTableKTableOuterJoin).java"/>
<suppress checks="(FinalLocalVariable|UnnecessaryParentheses|BooleanExpressionComplexity|CyclomaticComplexity|WhitespaceAfter|LocalVariableName)"
files="Murmur3.java"/>
<suppress checks="(NPathComplexity|CyclomaticComplexity)"
files="(KStreamSlidingWindowAggregate|RackAwareTaskAssignor).java"/>
<!-- suppress FinalLocalVariable outside of the streams package. -->
<suppress checks="FinalLocalVariable"
files="^(?!.*[\\/]org[\\/]apache[\\/]kafka[\\/]streams[\\/].*$)"/>
<!-- Generated code -->
<suppress checks="(NPathComplexity|ClassFanOutComplexity|CyclomaticComplexity|ClassDataAbstractionCoupling|FinalLocalVariable|LocalVariableName|MemberName|ParameterName|MethodLength|JavaNCSS|AvoidStarImport)"
files="streams[\\/]src[\\/](generated|generated-test)[\\/].+.java$"/>
<suppress checks="(NPathComplexity|ClassFanOutComplexity|CyclomaticComplexity|ClassDataAbstractionCoupling|FinalLocalVariable|LocalVariableName|MemberName|ParameterName|MethodLength|JavaNCSS|AvoidStarImport)"
files="raft[\\/]src[\\/](generated|generated-test)[\\/].+.java$"/>
<suppress checks="(NPathComplexity|ClassFanOutComplexity|CyclomaticComplexity|ClassDataAbstractionCoupling|FinalLocalVariable|LocalVariableName|MemberName|ParameterName|MethodLength|JavaNCSS|AvoidStarImport)"
files="storage[\\/]src[\\/](generated|generated-test)[\\/].+.java$"/>
<suppress checks="(NPathComplexity|ClassFanOutComplexity|CyclomaticComplexity|ClassDataAbstractionCoupling|FinalLocalVariable|LocalVariableName|MemberName|ParameterName|MethodLength|JavaNCSS|AvoidStarImport)"
files="group-coordinator[\\/]src[\\/](generated|generated-test)[\\/].+.java$"/>
<suppress checks="ImportControl" files="FetchResponseData.java"/>
<suppress checks="ImportControl" files="RecordsSerdeTest.java"/>
<!-- Streams tests -->
<suppress checks="ClassFanOutComplexity"
files="(RecordCollectorTest|StreamsPartitionAssignorTest|StreamThreadTest|StreamTaskTest|TaskManagerTest|TopologyTestDriverTest).java"/>
<suppress checks="MethodLength"
files="(EosIntegrationTest|EosV2UpgradeIntegrationTest|KStreamKStreamJoinTest|RocksDBWindowStoreTest|StreamStreamJoinIntegrationTest).java"/>
<suppress checks="ClassDataAbstractionCoupling"
files=".*[/\\]streams[/\\].*test[/\\].*.java"/>
<suppress checks="CyclomaticComplexity"
files="(EosV2UpgradeIntegrationTest|KStreamKStreamJoinTest|KTableKTableForeignKeyJoinIntegrationTest|KTableKTableForeignKeyVersionedJoinIntegrationTest|RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest|RelationalSmokeTest|MockProcessorContextStateStoreTest).java"/>
<suppress checks="JavaNCSS"
files="(EosV2UpgradeIntegrationTest|KStreamKStreamJoinTest|StreamThreadTest|TaskManagerTest|StreamTaskTest).java"/>
<suppress checks="NPathComplexity"
files="(EosV2UpgradeIntegrationTest|EosTestDriver|KStreamKStreamJoinTest|KTableKTableForeignKeyJoinIntegrationTest|KTableKTableForeignKeyVersionedJoinIntegrationTest|RelationalSmokeTest|MockProcessorContextStateStoreTest|TopologyTestDriverTest).java"/>
<suppress checks="(FinalLocalVariable|WhitespaceAround|LocalVariableName|ImportControl|AvoidStarImport)"
files="Murmur3Test.java"/>
<suppress checks="MethodLength"
files="(KStreamSlidingWindowAggregateTest|KStreamKStreamLeftJoinTest|KStreamKStreamOuterJoinTest|KTableKTableForeignKeyVersionedJoinIntegrationTest).java"/>
<suppress checks="ClassFanOutComplexity"
files="StreamTaskTest.java"/>
<!-- Streams test-utils -->
<suppress checks="ClassFanOutComplexity"
files="TopologyTestDriver.java"/>
<suppress checks="ClassDataAbstractionCoupling"
files="TopologyTestDriver.java"/>
<!-- Streams examples -->
<suppress id="dontUseSystemExit"
files="PageViewTypedDemo.java|PipeDemo.java|TemperatureDemo.java|WordCountDemo.java|WordCountProcessorDemo.java|WordCountTransformerDemo.java"/>
<!-- Tools -->
<suppress checks="ClassDataAbstractionCoupling"
files="VerifiableConsumer.java"/>
<suppress checks="CyclomaticComplexity"
files="(StreamsResetter|ProducerPerformance|Agent).java"/>
<suppress checks="BooleanExpressionComplexity"
files="StreamsResetter.java"/>
<suppress checks="NPathComplexity"
files="(ProducerPerformance|StreamsResetter|Agent|TransactionalMessageCopier|ReplicaVerificationTool).java"/>
<suppress checks="ImportControl"
files="SignalLogger.java"/>
<suppress checks="IllegalImport"
files="SignalLogger.java"/>
<suppress checks="ParameterNumber"
files="ProduceBenchSpec.java"/>
<suppress checks="ParameterNumber"
files="ConsumeBenchSpec.java"/>
<suppress checks="ParameterNumber"
files="SustainedConnectionSpec.java"/>
<suppress id="dontUseSystemExit"
files="VerifiableConsumer.java"/>
<suppress id="dontUseSystemExit"
files="VerifiableProducer.java"/>
<!-- Shell -->
<suppress checks="CyclomaticComplexity"
files="(GlobComponent|MetadataNodeManager).java"/>
<suppress checks="MethodLength"
files="(MetadataNodeManager).java"/>
<suppress checks="JavaNCSS"
files="(MetadataNodeManager).java"/>
<!-- Log4J-Appender -->
<suppress checks="CyclomaticComplexity"
files="KafkaLog4jAppender.java"/>
<suppress checks="NPathComplexity"
files="KafkaLog4jAppender.java"/>
<suppress checks="JavaNCSS"
files="RequestResponseTest.java"/>
<!-- metadata -->
<suppress checks="ClassDataAbstractionCoupling"
files="(QuorumController|QuorumControllerTest|ReplicationControlManager|ReplicationControlManagerTest|ClusterControlManagerTest|KRaftMigrationDriverTest).java"/>
<suppress checks="ClassFanOutComplexity"
files="(QuorumController|QuorumControllerTest|ReplicationControlManager|ReplicationControlManagerTest).java"/>
<suppress checks="(ParameterNumber|ClassDataAbstractionCoupling)"
files="(QuorumController).java"/>
<suppress checks="NPathComplexity"
files="(PartitionRegistration|PartitionChangeBuilder).java"/>
<suppress checks="CyclomaticComplexity"
files="(ClientQuotasImage|KafkaEventQueue|MetadataDelta|QuorumController|ReplicationControlManager|KRaftMigrationDriver|ClusterControlManager).java"/>
<suppress checks="NPathComplexity"
files="(ClientQuotasImage|KafkaEventQueue|ReplicationControlManager|FeatureControlManager|KRaftMigrationDriver|ScramControlManager|ClusterControlManager|MetadataDelta).java"/>
<suppress checks="(NPathComplexity|ClassFanOutComplexity|CyclomaticComplexity|ClassDataAbstractionCoupling|LocalVariableName|MemberName|ParameterName|MethodLength|JavaNCSS|AvoidStarImport)"
files="metadata[\\/]src[\\/](generated|generated-test)[\\/].+.java$"/>
<suppress checks="BooleanExpressionComplexity"
files="(MetadataImage).java"/>
<suppress checks="ImportControl"
files="ApiVersionsResponse.java"/>
<suppress checks="AvoidStarImport"
files="MetadataVersionTest.java"/>
<!-- group coordinator -->
<suppress checks="CyclomaticComplexity"
files="(ConsumerGroupMember|GroupMetadataManager).java"/>
<suppress checks="(NPathComplexity|MethodLength)"
files="(GroupMetadataManager|ConsumerGroupTest|GroupMetadataManagerTest).java"/>
<suppress checks="NPathComplexity"
files="CoordinatorRuntime.java"/>
<suppress checks="ClassFanOutComplexity"
files="(GroupMetadataManager|GroupMetadataManagerTest|GroupCoordinatorService|GroupCoordinatorServiceTest).java"/>
<suppress checks="ParameterNumber"
files="(ConsumerGroupMember|GroupMetadataManager|GroupCoordinatorConfig).java"/>
<suppress checks="ClassDataAbstractionCouplingCheck"
files="(RecordHelpersTest|GroupMetadataManager|GroupMetadataManagerTest|GroupCoordinatorServiceTest|GroupCoordinatorShardTest).java"/>
<suppress checks="JavaNCSS"
files="GroupMetadataManagerTest.java"/>
<!-- storage -->
<suppress checks="CyclomaticComplexity"
files="(LogValidator|RemoteLogManagerConfig|RemoteLogManager).java"/>
<suppress checks="NPathComplexity"
files="(LogValidator|RemoteLogManager|RemoteIndexCache).java"/>
<suppress checks="ParameterNumber"
files="(LogAppendInfo|RemoteLogManagerConfig).java"/>
<!-- benchmarks -->
<suppress checks="(ClassDataAbstractionCoupling|ClassFanOutComplexity)"
files="(ReplicaFetcherThreadBenchmark).java"/>
</suppressions>