MINOR: Various javadoc improvements in clients and connect (#5878)

Fixed formatting issues and added links in a few classes
Mickael Maison 2018-11-23 05:20:30 +00:00 committed by Manikumar Reddy
parent 50a61c0dbf
commit 4e90af34c6
8 changed files with 47 additions and 41 deletions


@@ -29,29 +29,33 @@ import java.util.SortedSet;
 import java.util.TreeSet;
 
 /**
- * The round robin assignor lays out all the available partitions and all the available consumers. It
+ * <p>The round robin assignor lays out all the available partitions and all the available consumers. It
  * then proceeds to do a round robin assignment from partition to consumer. If the subscriptions of all consumer
  * instances are identical, then the partitions will be uniformly distributed. (i.e., the partition ownership counts
  * will be within a delta of exactly one across all consumers.)
  *
- * For example, suppose there are two consumers C0 and C1, two topics t0 and t1, and each topic has 3 partitions,
- * resulting in partitions t0p0, t0p1, t0p2, t1p0, t1p1, and t1p2.
+ * <p>For example, suppose there are two consumers <code>C0</code> and <code>C1</code>, two topics <code>t0</code> and <code>t1</code>, and each topic has 3 partitions,
+ * resulting in partitions <code>t0p0</code>, <code>t0p1</code>, <code>t0p2</code>, <code>t1p0</code>, <code>t1p1</code>, and <code>t1p2</code>.
  *
- * The assignment will be:
- * C0: [t0p0, t0p2, t1p1]
- * C1: [t0p1, t1p0, t1p2]
+ * <p>The assignment will be:
+ * <ul>
+ * <li><code>C0: [t0p0, t0p2, t1p1]</code>
+ * <li><code>C1: [t0p1, t1p0, t1p2]</code>
+ * </ul>
  *
- * When subscriptions differ across consumer instances, the assignment process still considers each
+ * <p>When subscriptions differ across consumer instances, the assignment process still considers each
  * consumer instance in round robin fashion but skips over an instance if it is not subscribed to
  * the topic. Unlike the case when subscriptions are identical, this can result in imbalanced
- * assignments. For example, we have three consumers C0, C1, C2, and three topics t0, t1, t2,
- * with 1, 2, and 3 partitions, respectively. Therefore, the partitions are t0p0, t1p0, t1p1, t2p0,
- * t2p1, t2p2. C0 is subscribed to t0; C1 is subscribed to t0, t1; and C2 is subscribed to t0, t1, t2.
+ * assignments. For example, we have three consumers <code>C0</code>, <code>C1</code>, <code>C2</code>, and three topics <code>t0</code>, <code>t1</code>, <code>t2</code>,
+ * with 1, 2, and 3 partitions, respectively. Therefore, the partitions are <code>t0p0</code>, <code>t1p0</code>, <code>t1p1</code>, <code>t2p0</code>,
+ * <code>t2p1</code>, <code>t2p2</code>. <code>C0</code> is subscribed to <code>t0</code>; <code>C1</code> is subscribed to <code>t0</code>, <code>t1</code>; and <code>C2</code> is subscribed to <code>t0</code>, <code>t1</code>, <code>t2</code>.
  *
- * That assignment will be:
- * C0: [t0p0]
- * C1: [t1p0]
- * C2: [t1p1, t2p0, t2p1, t2p2]
+ * <p>That assignment will be:
+ * <ul>
+ * <li><code>C0: [t0p0]</code>
+ * <li><code>C1: [t1p0]</code>
+ * <li><code>C2: [t1p1, t2p0, t2p1, t2p2]</code>
+ * </ul>
  */
 public class RoundRobinAssignor extends AbstractPartitionAssignor {
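
For context, a minimal sketch (not part of this commit) of how a consumer opts into this strategy; the broker address and group id are placeholders:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.RoundRobinAssignor;
import org.apache.kafka.common.serialization.StringDeserializer;

public class RoundRobinConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");           // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Select round robin assignment instead of the default range assignor
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG, RoundRobinAssignor.class.getName());
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    }
}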


@@ -25,19 +25,19 @@ import java.util.Locale;
 /**
  * Represents an operation which an ACL grants or denies permission to perform.
  *
- * Some operations imply other operations.
+ * Some operations imply other operations:
+ * <ul>
+ * <li><code>ALLOW ALL</code> implies <code>ALLOW</code> everything
+ * <li><code>DENY ALL</code> implies <code>DENY</code> everything
  *
- * ALLOW ALL implies ALLOW everything
- * DENY ALL implies DENY everything
+ * <li><code>ALLOW READ</code> implies <code>ALLOW DESCRIBE</code>
+ * <li><code>ALLOW WRITE</code> implies <code>ALLOW DESCRIBE</code>
+ * <li><code>ALLOW DELETE</code> implies <code>ALLOW DESCRIBE</code>
  *
- * ALLOW READ implies ALLOW DESCRIBE
- * ALLOW WRITE implies ALLOW DESCRIBE
- * ALLOW DELETE implies ALLOW DESCRIBE
- *
- * ALLOW ALTER implies ALLOW DESCRIBE
- *
- * ALLOW ALTER_CONFIGS implies ALLOW DESCRIBE_CONFIGS
+ * <li><code>ALLOW ALTER</code> implies <code>ALLOW DESCRIBE</code>
  *
+ * <li><code>ALLOW ALTER_CONFIGS</code> implies <code>ALLOW DESCRIBE_CONFIGS</code>
+ * </ul>
  * The API for this class is still evolving and we may break compatibility in minor releases, if necessary.
  */
 @InterfaceStability.Evolving
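
The implication rules above can be read as a small predicate. A hypothetical helper (illustration only, not part of Kafka's API) covering the ALLOW cases:

import org.apache.kafka.common.acl.AclOperation;

public class AclImplications {
    // Returns true if an ACL allowing `granted` also allows `requested`,
    // per the implication rules listed in the AclOperation javadoc.
    public static boolean allowImplies(AclOperation granted, AclOperation requested) {
        if (granted == requested || granted == AclOperation.ALL)
            return true; // ALLOW ALL implies ALLOW everything
        if (requested == AclOperation.DESCRIBE)
            return granted == AclOperation.READ || granted == AclOperation.WRITE
                || granted == AclOperation.DELETE || granted == AclOperation.ALTER;
        if (requested == AclOperation.DESCRIBE_CONFIGS)
            return granted == AclOperation.ALTER_CONFIGS;
        return false;
    }
}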


@@ -18,11 +18,11 @@
 package org.apache.kafka.common.config;
 
 /**
- * Keys that can be used to configure a topic. These keys are useful when creating or reconfiguring a
+ * <p>Keys that can be used to configure a topic. These keys are useful when creating or reconfiguring a
  * topic using the AdminClient.
  *
- * The intended pattern is for broker configs to include a `log.` prefix. For example, to set the default broker
- * cleanup policy, one would set log.cleanup.policy instead of cleanup.policy. Unfortunately, there are many cases
+ * <p>The intended pattern is for broker configs to include a <code>`log.`</code> prefix. For example, to set the default broker
+ * cleanup policy, one would set <code>log.cleanup.policy</code> instead of <code>cleanup.policy</code>. Unfortunately, there are many cases
  * where this pattern is not followed.
  */
 // This is a public API, so we should not remove or alter keys without a discussion and a deprecation period.
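
A minimal sketch of using these keys when creating a topic with the AdminClient (topic name, sizing, and broker address below are placeholders):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class CreateCompactedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            // Use the TopicConfig constant rather than the raw "cleanup.policy" string
            NewTopic topic = new NewTopic("example-topic", 3, (short) 1)
                .configs(Collections.singletonMap(
                    TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT));
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}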


@@ -23,23 +23,24 @@ import java.security.Principal;
 import static java.util.Objects.requireNonNull;
 
 /**
- * Principals in Kafka are defined by a type and a name. The principal type will always be "User"
+ * <p>Principals in Kafka are defined by a type and a name. The principal type will always be <code>"User"</code>
  * for the simple authorizer that is enabled by default, but custom authorizers can leverage different
  * principal types (such as to enable group or role-based ACLs). The {@link KafkaPrincipalBuilder} interface
  * is used when you need to derive a different principal type from the authentication context, or when
  * you need to represent relations between different principals. For example, you could extend
  * {@link KafkaPrincipal} in order to link a user principal to one or more role principals.
  *
- * For custom extensions of {@link KafkaPrincipal}, there two key points to keep in mind:
- *
- * 1. To be compatible with the ACL APIs provided by Kafka (including the command line tool), each ACL
+ * <p>For custom extensions of {@link KafkaPrincipal}, there two key points to keep in mind:
+ * <ol>
+ * <li>To be compatible with the ACL APIs provided by Kafka (including the command line tool), each ACL
  * can only represent a permission granted to a single principal (consisting of a principal type and name).
  * It is possible to use richer ACL semantics, but you must implement your own mechanisms for adding
  * and removing ACLs.
- * 2. In general, {@link KafkaPrincipal} extensions are only useful when the corresponding Authorizer
+ * <li>In general, {@link KafkaPrincipal} extensions are only useful when the corresponding Authorizer
  * is also aware of the extension. If you have a {@link KafkaPrincipalBuilder} which derives user groups
  * from the authentication context (e.g. from an SSL client certificate), then you need a custom
  * authorizer which is capable of using the additional group information.
+ * </ol>
  */
 public class KafkaPrincipal implements Principal {
     public static final String USER_TYPE = "User";
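
A minimal sketch of such an extension (hypothetical class, illustration only); per point 2 above, a matching custom authorizer would still be needed to act on the roles:

import java.util.Set;
import org.apache.kafka.common.security.auth.KafkaPrincipal;

// Hypothetical principal that carries role names derived during authentication,
// e.g. by a custom KafkaPrincipalBuilder.
public class RoleAwarePrincipal extends KafkaPrincipal {
    private final Set<String> roles;

    public RoleAwarePrincipal(String name, Set<String> roles) {
        super(KafkaPrincipal.USER_TYPE, name);
        this.roles = roles;
    }

    public Set<String> roles() {
        return roles; // a role-aware authorizer would consult these when evaluating ACLs
    }
}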


@@ -23,12 +23,12 @@ import org.apache.kafka.common.errors.PolicyViolationException;
 import java.util.Map;
 
 /**
- * An interface for enforcing a policy on alter configs requests.
+ * <p>An interface for enforcing a policy on alter configs requests.
  *
- * Common use cases are requiring that the replication factor, min.insync.replicas and/or retention settings for a
+ * <p>Common use cases are requiring that the replication factor, <code>min.insync.replicas</code> and/or retention settings for a
  * topic remain within an allowable range.
  *
- * If <code>alter.config.policy.class.name</code> is defined, Kafka will create an instance of the specified class
+ * <p>If <code>alter.config.policy.class.name</code> is defined, Kafka will create an instance of the specified class
  * using the default constructor and will then pass the broker configs to its <code>configure()</code> method. During
  * broker shutdown, the <code>close()</code> method will be invoked so that resources can be released (if necessary).
  */


@@ -24,12 +24,12 @@ import java.util.List;
 import java.util.Map;
 
 /**
- * An interface for enforcing a policy on create topics requests.
+ * <p>An interface for enforcing a policy on create topics requests.
  *
- * Common use cases are requiring that the replication factor, min.insync.replicas and/or retention settings for a
+ * <p>Common use cases are requiring that the replication factor, <code>min.insync.replicas</code> and/or retention settings for a
  * topic are within an allowable range.
  *
- * If <code>create.topic.policy.class.name</code> is defined, Kafka will create an instance of the specified class
+ * <p>If <code>create.topic.policy.class.name</code> is defined, Kafka will create an instance of the specified class
  * using the default constructor and will then pass the broker configs to its <code>configure()</code> method. During
  * broker shutdown, the <code>close()</code> method will be invoked so that resources can be released (if necessary).
  */
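
An analogous sketch for topic creation (hypothetical policy; the threshold of 3 is an assumed value):

import java.util.Map;
import org.apache.kafka.common.errors.PolicyViolationException;
import org.apache.kafka.server.policy.CreateTopicPolicy;

// Requires a minimum replication factor for newly created topics.
public class MinReplicationCreateTopicPolicy implements CreateTopicPolicy {
    @Override
    public void configure(Map<String, ?> configs) { /* receives the broker configs */ }

    @Override
    public void validate(RequestMetadata requestMetadata) throws PolicyViolationException {
        Short replicationFactor = requestMetadata.replicationFactor();
        if (replicationFactor != null && replicationFactor < 3)
            throw new PolicyViolationException("Topic " + requestMetadata.topic()
                + " must have a replication factor of at least 3");
    }

    @Override
    public void close() { /* release resources if necessary */ }
}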


@@ -26,7 +26,7 @@ import java.util.Objects;
 /**
  * <p>
  * Base class for records containing data to be copied to/from Kafka. This corresponds closely to
- * Kafka's ProducerRecord and ConsumerRecord classes, and holds the data that may be used by both
+ * Kafka's {@link org.apache.kafka.clients.producer.ProducerRecord ProducerRecord} and {@link org.apache.kafka.clients.consumer.ConsumerRecord ConsumerRecord} classes, and holds the data that may be used by both
  * sources and sinks (topic, kafkaPartition, key, value). Although both implementations include a
  * notion of offset, it is not included here because they differ in type.
  * </p>
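
A minimal sketch of constructing such a record in a source task (file name, topic, and values below are placeholders):

import java.util.Collections;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.source.SourceRecord;

public class ExampleRecordFactory {
    // Wraps one line of a file as a SourceRecord, a ConnectRecord subclass.
    public static SourceRecord lineRecord(long position, String line) {
        return new SourceRecord(
            Collections.singletonMap("filename", "/var/log/example.log"), // source partition (placeholder)
            Collections.singletonMap("position", position),               // source offset
            "example-topic",       // topic (placeholder)
            null,                  // kafkaPartition: let Connect choose
            Schema.STRING_SCHEMA,  // value schema
            line);                 // value
    }
}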


@@ -29,14 +29,15 @@ import java.util.Map;
  * <p>
  * Connectors manage integration of Kafka Connect with another system, either as an input that ingests
  * data into Kafka or an output that passes data to an external system. Implementations should
- * not use this class directly; they should inherit from SourceConnector or SinkConnector.
+ * not use this class directly; they should inherit from {@link org.apache.kafka.connect.source.SourceConnector SourceConnector}
+ * or {@link org.apache.kafka.connect.sink.SinkConnector SinkConnector}.
  * </p>
  * <p>
  * Connectors have two primary tasks. First, given some configuration, they are responsible for
  * creating configurations for a set of {@link Task}s that split up the data processing. For
  * example, a database Connector might create Tasks by dividing the set of tables evenly among
  * tasks. Second, they are responsible for monitoring inputs for changes that require
- * reconfiguration and notifying the Kafka Connect runtime via the ConnectorContext. Continuing the
+ * reconfiguration and notifying the Kafka Connect runtime via the {@link ConnectorContext}. Continuing the
  * previous example, the connector might periodically check for new tables and notify Kafka Connect of
  * additions and deletions. Kafka Connect will then request new configurations and update the running
  * Tasks.
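
A skeletal sketch of the first responsibility, producing task configurations (hypothetical connector and task names, not from this commit); the second responsibility would be a periodic check that calls ConnectorContext.requestTaskReconfiguration() when inputs change:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.Task;
import org.apache.kafka.connect.source.SourceConnector;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

public class ExampleTableSourceConnector extends SourceConnector {
    private Map<String, String> config;

    @Override public String version() { return "0.1.0"; }
    @Override public void start(Map<String, String> props) { this.config = props; }
    @Override public Class<? extends Task> taskClass() { return ExampleTableSourceTask.class; }
    @Override public void stop() { }
    @Override public ConfigDef config() { return new ConfigDef(); }

    @Override
    public List<Map<String, String>> taskConfigs(int maxTasks) {
        // Split the work into per-task configurations; here every task
        // simply receives a copy of the connector configuration.
        List<Map<String, String>> configs = new ArrayList<>();
        for (int i = 0; i < maxTasks; i++)
            configs.add(new HashMap<>(config));
        return configs;
    }

    // Minimal no-op task so the sketch is self-contained.
    public static class ExampleTableSourceTask extends SourceTask {
        @Override public String version() { return "0.1.0"; }
        @Override public void start(Map<String, String> props) { }
        @Override public List<SourceRecord> poll() throws InterruptedException { return null; }
        @Override public void stop() { }
    }
}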