mirror of https://github.com/apache/kafka.git

MINOR: Small fixes in the documentation (#8623)

These minor documentation fixes included:
1. fix broken links
2. remove redundant sentences
3. fix content format issue

Reviewers: Konstantine Karantasis <konstantine@confluent.io>

parent ad0850659f
commit 3ec5e8e652
@@ -32,7 +32,7 @@
 <h3><a id="connect_user" href="#connect_user">8.2 User Guide</a></h3>
 
-<p>The quickstart provides a brief example of how to run a standalone version of Kafka Connect. This section describes how to configure, run, and manage Kafka Connect in more detail.</p>
+<p>The <a href="../quickstart">quickstart</a> provides a brief example of how to run a standalone version of Kafka Connect. This section describes how to configure, run, and manage Kafka Connect in more detail.</p>
 
 <h4><a id="connect_running" href="#connect_running">Running Kafka Connect</a></h4>
@@ -174,7 +174,7 @@
 <li>InsertField - Add a field using either static data or record metadata</li>
 <li>ReplaceField - Filter or rename fields</li>
 <li>MaskField - Replace field with valid null value for the type (0, empty string, etc)</li>
-<li>ValueToKey</li>
+<li>ValueToKey - Replace the record key with a new key formed from a subset of fields in the record value</li>
 <li>HoistField - Wrap the entire event as a single field inside a Struct or a Map</li>
 <li>ExtractField - Extract a specific field from Struct and Map and include only this field in results</li>
 <li>SetSchemaMetadata - modify the schema name or version</li>
@@ -309,7 +309,7 @@
 }
 </pre>
 
-<p>We will define the <code>FileStreamSourceTask</code> class below. Next, we add some standard lifecycle methods, <code>start()</code> and <code>stop()</code></p>:
+<p>We will define the <code>FileStreamSourceTask</code> class below. Next, we add some standard lifecycle methods, <code>start()</code> and <code>stop()</code>:</p>
 
 <pre class="brush: java;">
 @Override
@@ -477,19 +477,25 @@
 Throttle was removed.</pre>
 
 <p>The administrator can also validate the assigned configs using the kafka-configs.sh. There are two pairs of throttle
-configuration used to manage the throttling process. The throttle value itself. This is configured, at a broker
+configuration used to manage the throttling process. First pair refers to the throttle value itself. This is configured, at a broker
 level, using the dynamic properties: </p>
 
-<pre class="brush: text;">leader.replication.throttled.rate
-follower.replication.throttled.rate</pre>
+<pre class="brush: text;">
+leader.replication.throttled.rate
+follower.replication.throttled.rate
+</pre>
 
-<p>There is also an enumerated set of throttled replicas: </p>
+<p>Then there is the configuration pair of enumerated sets of throttled replicas: </p>
 
-<pre class="brush: text;">leader.replication.throttled.replicas
-follower.replication.throttled.replicas</pre>
+<pre class="brush: text;">
+leader.replication.throttled.replicas
+follower.replication.throttled.replicas
+</pre>
 
-<p>Which are configured per topic. </p>
-
-<p>All four config values are automatically assigned by kafka-reassign-partitions.sh (discussed below).</p>
-
+<p>Which are configured per topic. All four config values are automatically assigned by kafka-reassign-partitions.sh
+(discussed below). </p>
 <p>To view the throttle limit configuration:</p>
 
 <pre class="brush: bash;">
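As a side note on what the per-topic values above look like: a minimal sketch, assuming a hypothetical topic with partitions 0 and 1 whose leaders sit on brokers 101 and 102 (IDs are illustrative, not from the commit). Each throttled-replica entry is a `partitionId:brokerId` pair; the wildcard `*` throttles all replicas of the topic:

```
# Hypothetical topic-level dynamic config, as set by kafka-reassign-partitions.sh
# or kafka-configs.sh. Entries are [partitionId]:[brokerId], comma separated.
leader.replication.throttled.replicas=0:101,1:102
follower.replication.throttled.replicas=0:102,1:101
```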
@@ -240,7 +240,7 @@ keyUsage = digitalSignature, keyEncipherment
 
 <b>Note:</b>
 If you configure the Kafka brokers to require client authentication by setting ssl.client.auth to be "requested" or "required" in the
-<a href="#config_broker">Kafka brokers config</a> then you must provide a truststore for the Kafka brokers as well and it should have
+<a href="#brokerconfigs">Kafka brokers config</a> then you must provide a truststore for the Kafka brokers as well and it should have
 all the CA certificates that clients' keys were signed by.
 <pre class="brush: bash;">
 keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert
@@ -361,9 +361,9 @@ keyUsage = digitalSignature, keyEncipherment
 
 <p>
 The JRE/JDK will have a default pseudo-random number generator (PRNG) that is used for cryptography operations, so it is not required to configure the
-implementation used with the <pre>ssl.secure.random.implementation</pre>. However, there are performance issues with some implementations (notably, the
-default chosen on Linux systems, <pre>NativePRNG</pre>, utilizes a global lock). In cases where performance of SSL connections becomes an issue,
-consider explicitly setting the implementation to be used. The <pre>SHA1PRNG</pre> implementation is non-blocking, and has shown very good performance
+implementation used with the <code>ssl.secure.random.implementation</code>. However, there are performance issues with some implementations (notably, the
+default chosen on Linux systems, <code>NativePRNG</code>, utilizes a global lock). In cases where performance of SSL connections becomes an issue,
+consider explicitly setting the implementation to be used. The <code>SHA1PRNG</code> implementation is non-blocking, and has shown very good performance
 characteristics under heavy load (50 MB/sec of produced messages, plus replication traffic, per-broker).
 </p>
@@ -609,7 +609,7 @@ keyUsage = digitalSignature, keyEncipherment
 
 </li>
 <tt>KafkaServer</tt> section in the JAAS file tells the broker which principal to use and the location of the keytab where this principal is stored. It
-allows the broker to login using the keytab specified in this section. See <a href="#security_sasl_brokernotes">notes</a> for more details on Zookeeper SASL configuration.
+allows the broker to login using the keytab specified in this section. See <a href="#security_jaas_broker">notes</a> for more details on Zookeeper SASL configuration.
 <li>Pass the JAAS and optionally the krb5 file locations as JVM parameters to each Kafka broker (see <a href="https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html">here</a> for more details):
 <pre> -Djava.security.krb5.conf=/etc/kafka/krb5.conf
 -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf</pre>
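For context on the hunk above, a minimal sketch of the kind of <tt>KafkaServer</tt> JAAS section it refers to; the keytab path and principal are placeholder values, not from the commit:

```
// Sketch of a broker-side JAAS section for Kerberos login.
// The keytab location and principal below are hypothetical examples.
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_server.keytab"
    principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
};
```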
@@ -767,7 +767,7 @@ keyUsage = digitalSignature, keyEncipherment
 <pre class="brush: bash;">
 > bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --describe --entity-type users --entity-name alice
 </pre>
-<p>Credentials may be deleted for one or more SCRAM mechanisms using the <i>--delete</i> option:
+<p>Credentials may be deleted for one or more SCRAM mechanisms using the <i>--alter --delete-config</i> option:
 <pre class="brush: bash;">
 > bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name alice
 </pre>
@@ -1193,7 +1193,7 @@ keyUsage = digitalSignature, keyEncipherment
 <h3><a id="security_authz" href="#security_authz">7.4 Authorization and ACLs</a></h3>
 Kafka ships with a pluggable Authorizer and an out-of-box authorizer implementation that uses zookeeper to store all the acls. The Authorizer is configured by setting <tt>authorizer.class.name</tt> in server.properties. To enable the out of the box implementation use:
 <pre>authorizer.class.name=kafka.security.authorizer.AclAuthorizer</pre>
-Kafka acls are defined in the general format of "Principal P is [Allowed/Denied] Operation O From Host H on any Resource R matching ResourcePattern RP". You can read more about the acl structure in KIP-11 and resource patterns in KIP-290. In order to add, remove or list acls you can use the Kafka authorizer CLI. By default, if no ResourcePatterns match a specific Resource R, then R has no associated acls, and therefore no one other than super users is allowed to access R. If you want to change that behavior, you can include the following in server.properties.
+Kafka acls are defined in the general format of "Principal P is [Allowed/Denied] Operation O From Host H on any Resource R matching ResourcePattern RP". You can read more about the acl structure in <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface">KIP-11</a> and resource patterns in <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-290%3A+Support+for+Prefixed+ACLs">KIP-290</a>. In order to add, remove or list acls you can use the Kafka authorizer CLI. By default, if no ResourcePatterns match a specific Resource R, then R has no associated acls, and therefore no one other than super users is allowed to access R. If you want to change that behavior, you can include the following in server.properties.
 <pre>allow.everyone.if.no.acl.found=true</pre>
 One can also add super users in server.properties like the following (note that the delimiter is semicolon since SSL user names may contain comma). Default PrincipalType string "User" is case sensitive.
 <pre>super.users=User:Bob;User:Alice</pre>
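Taken together, a hedged sketch of how the three authorizer settings quoted in this hunk might sit in one server.properties; the principal names are the documentation's own placeholders, and enabling `allow.everyone.if.no.acl.found` is optional, not a recommendation:

```
# Pluggable authorizer backed by ZooKeeper-stored ACLs
authorizer.class.name=kafka.security.authorizer.AclAuthorizer

# Optional: permit access to resources that have no matching ACL
# (without this line, such resources are accessible to super users only)
allow.everyone.if.no.acl.found=true

# Semicolon-delimited, since SSL principal names may contain commas
super.users=User:Bob;User:Alice
```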
@@ -1438,7 +1438,7 @@ keyUsage = digitalSignature, keyEncipherment
 <pre class="brush: bash;">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic Test-topic</pre>
 By default, all principals that don't have an explicit acl that allows access for an operation to a resource are denied. In rare cases where an allow acl is defined that allows access to all but some principal we will have to use the --deny-principal and --deny-host option. For example, if we want to allow all users to Read from Test-topic but only deny User:BadBob from IP 198.51.100.3 we can do so using following commands:
 <pre class="brush: bash;">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:* --allow-host * --deny-principal User:BadBob --deny-host 198.51.100.3 --operation Read --topic Test-topic</pre>
-Note that ``--allow-host`` and ``deny-host`` only support IP addresses (hostnames are not supported).
+Note that <code>--allow-host</code> and <code>--deny-host</code> only support IP addresses (hostnames are not supported).
 Above examples add acls to a topic by specifying --topic [topic-name] as the resource pattern option. Similarly user can add acls to cluster by specifying --cluster and to a consumer group by specifying --group [group-name].
 You can add acls on any resource of a certain type, e.g. suppose you wanted to add an acl "Principal User:Peter is allowed to produce to any Topic from IP 198.51.200.0"
 You can do that by using the wildcard resource '*', e.g. by executing the CLI with following options:
@@ -1451,7 +1451,7 @@ keyUsage = digitalSignature, keyEncipherment
 <li><b>Removing Acls</b><br>
 Removing acls is pretty much the same. The only difference is instead of --add option users will have to specify --remove option. To remove the acls added by the first example above we can execute the CLI with following options:
 <pre class="brush: bash;"> bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --remove --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic Test-topic </pre>
-If you wan to remove the acl added to the prefixed resource pattern above we can execute the CLI with following options:
+If you want to remove the acl added to the prefixed resource pattern above we can execute the CLI with following options:
 <pre class="brush: bash;"> bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --remove --allow-principal User:Jane --producer --topic Test- --resource-pattern-type Prefixed</pre></li>
 
 <li><b>List Acls</b><br>
@@ -2158,23 +2158,23 @@ keyUsage = digitalSignature, keyEncipherment
 <li>Perform a rolling restart of brokers setting the JAAS login file and/or defining ZooKeeper mutual TLS configurations (including connecting to the TLS-enabled ZooKeeper port) as required, which enables brokers to authenticate to ZooKeeper. At the end of the rolling restart, brokers are able to manipulate znodes with strict ACLs, but they will not create znodes with those ACLs</li>
 <li>If you enabled mTLS, disable the non-TLS port in ZooKeeper</li>
 <li>Perform a second rolling restart of brokers, this time setting the configuration parameter <tt>zookeeper.set.acl</tt> to true, which enables the use of secure ACLs when creating znodes</li>
-<li>Execute the ZkSecurityMigrator tool. To execute the tool, there is this script: <tt>./bin/zookeeper-security-migration.sh</tt> with <tt>zookeeper.acl</tt> set to secure. This tool traverses the corresponding sub-trees changing the ACLs of the znodes. Use the <code>--zk-tls-config-file <file></code> option if you enable mTLS.</li>
+<li>Execute the ZkSecurityMigrator tool. To execute the tool, there is this script: <tt>bin/zookeeper-security-migration.sh</tt> with <tt>zookeeper.acl</tt> set to secure. This tool traverses the corresponding sub-trees changing the ACLs of the znodes. Use the <code>--zk-tls-config-file <file></code> option if you enable mTLS.</li>
 </ol>
 <p>It is also possible to turn off authentication in a secure cluster. To do it, follow these steps:</p>
 <ol>
 <li>Perform a rolling restart of brokers setting the JAAS login file and/or defining ZooKeeper mutual TLS configurations, which enables brokers to authenticate, but setting <tt>zookeeper.set.acl</tt> to false. At the end of the rolling restart, brokers stop creating znodes with secure ACLs, but are still able to authenticate and manipulate all znodes</li>
-<li>Execute the ZkSecurityMigrator tool. To execute the tool, run this script <tt>./bin/zookeeper-security-migration.sh</tt> with <tt>zookeeper.acl</tt> set to unsecure. This tool traverses the corresponding sub-trees changing the ACLs of the znodes. Use the <code>--zk-tls-config-file <file></code> option if you need to set TLS configuration.</li></li>
+<li>Execute the ZkSecurityMigrator tool. To execute the tool, run this script <tt>bin/zookeeper-security-migration.sh</tt> with <tt>zookeeper.acl</tt> set to unsecure. This tool traverses the corresponding sub-trees changing the ACLs of the znodes. Use the <code>--zk-tls-config-file <file></code> option if you need to set TLS configuration.</li></li>
 <li>If you are disabling mTLS, enable the non-TLS port in ZooKeeper</li>
 <li>Perform a second rolling restart of brokers, this time omitting the system property that sets the JAAS login file and/or removing ZooKeeper mutual TLS configuration (including connecting to the non-TLS-enabled ZooKeeper port) as required</li>
 <li>If you are disabling mTLS, disable the TLS port in ZooKeeper</li>
 </ol>
 Here is an example of how to run the migration tool:
 <pre class="brush: bash;">
-./bin/zookeeper-security-migration.sh --zookeeper.acl=secure --zookeeper.connect=localhost:2181
+bin/zookeeper-security-migration.sh --zookeeper.acl=secure --zookeeper.connect=localhost:2181
 </pre>
 <p>Run this to see the full list of parameters:</p>
 <pre class="brush: bash;">
-./bin/zookeeper-security-migration.sh --help
+bin/zookeeper-security-migration.sh --help
 </pre>
 <h4><a id="zk_authz_ensemble" href="#zk_authz_ensemble">7.6.3 Migrating the ZooKeeper ensemble</a></h4>
 It is also necessary to enable SASL and/or mTLS authentication on the ZooKeeper ensemble. To do it, we need to perform a rolling restart of the server and set a few properties. See above for mTLS information. Please refer to the ZooKeeper documentation for more detail:
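As a rough illustration of the "few properties" involved on the ZooKeeper side, a sketch of the TLS-related entries that could appear in zoo.cfg; the paths, passwords, and port are assumptions for illustration, and the ZooKeeper documentation remains the authoritative list:

```
# Sketch only: enable a TLS client port alongside (or instead of) the plaintext one.
# All values below are hypothetical placeholders.
secureClientPort=2182
serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
ssl.keyStore.location=/path/to/zk.keystore.jks
ssl.keyStore.password=zk-ks-passwd
ssl.trustStore.location=/path/to/zk.truststore.jks
ssl.trustStore.password=zk-ts-passwd
```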