From 3ec5e8e652baac0ff10fb4164c545c09163c2d25 Mon Sep 17 00:00:00 2001
From: showuon <43372967+showuon@users.noreply.github.com>
Date: Tue, 19 May 2020 22:31:06 +0800
Subject: [PATCH] MINOR: Small fixes in the documentation (#8623)
These minor documentation fixes included:
1. fix broken links
2. remove redundant sentences
3. fix content format issue
Reviewers: Konstantine Karantasis
The quickstart provides a brief example of how to run a standalone version of Kafka Connect. This section describes how to configure, run, and manage Kafka Connect in more detail.
The administrator can also validate the assigned configs using the kafka-configs.sh. There are two pairs of throttle
- configuration used to manage the throttling process. The throttle value itself. This is configured, at a broker
+ configuration used to manage the throttling process. First pair refers to the throttle value itself. This is configured, at a broker
level, using the dynamic properties:
- There is also an enumerated set of throttled replicas:
+ Then there is the configuration pair of enumerated sets of throttled replicas:
Which are configured per topic. All four config values are automatically assigned by kafka-reassign-partitions.sh (discussed below).
To view the throttle limit configuration:
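A sketch of the commands this refers to (assuming the ZooKeeper-based kafka-configs.sh usage shown elsewhere in these docs; host and port are placeholders):
# broker-level throttle rates (leader/follower replication.throttled.rate)
> bin/kafka-configs.sh --describe --zookeeper localhost:2181 --entity-type brokers
# topic-level throttled replica lists (leader/follower replication.throttled.replicas)
> bin/kafka-configs.sh --describe --zookeeper localhost:2181 --entity-type topics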
The JRE/JDK will have a default pseudo-random number generator (PRNG) that is used for cryptography operations, so it is not required to configure the
implementation used with the ssl.secure.random.implementation.
8.2 User Guide
Running Kafka Connect
@@ -174,7 +174,7 @@
- FileStreamSourceTask class below. Next, we add some standard lifecycle methods, start() and stop()
+ FileStreamSourceTask class below. Next, we add some standard lifecycle methods, start() and stop():
@Override
diff --git a/docs/ops.html b/docs/ops.html
index 19fa1360ea5..765c198e614 100644
--- a/docs/ops.html
+++ b/docs/ops.html
@@ -477,19 +477,25 @@
Throttle was removed.
- leader.replication.throttled.rate
- follower.replication.throttled.rate
+
+ leader.replication.throttled.rate
+ follower.replication.throttled.rate
+
- leader.replication.throttled.replicas
- follower.replication.throttled.replicas
+
+ leader.replication.throttled.replicas
+ follower.replication.throttled.replicas
+
+
+
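For illustration (broker id and rate values below are hypothetical), the rate pair above is set per broker with kafka-configs.sh, e.g.:
# throttle leader and follower replication to ~50 MB/s on broker 0
> bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'leader.replication.throttled.rate=50000000,follower.replication.throttled.rate=50000000' --entity-type brokers --entity-name 0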
diff --git a/docs/security.html b/docs/security.html
index 984a1a96d61..f0a1e5f7a23 100644
--- a/docs/security.html
+++ b/docs/security.html
@@ -240,7 +240,7 @@ keyUsage = digitalSignature, keyEncipherment
Note:
If you configure the Kafka brokers to require client authentication by setting ssl.client.auth to be "requested" or "required" in the
- Kafka brokers config then you must provide a truststore for the Kafka brokers as well and it should have
+ Kafka brokers config then you must provide a truststore for the Kafka brokers as well and it should have
all the CA certificates that clients' keys were signed by.
keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert
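For illustration only (paths and passwords below are placeholders, not taken from this patch), the broker-side settings this note refers to would look something like:
# server.properties (hypothetical values)
ssl.client.auth=required
ssl.truststore.location=/var/private/ssl/server.truststore.jks
ssl.truststore.password=test1234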
@@ -361,9 +361,9 @@ keyUsage = digitalSignature, keyEncipherment
- ssl.secure.random.implementation
- . However, there are performance issues with some implementations (notably, the
- default chosen on Linux systems, NativePRNG
- , utilizes a global lock). In cases where performance of SSL connections becomes an issue,
- consider explicitly setting the implementation to be used. The SHA1PRNG
- implementation is non-blocking, and has shown very good performance
+ implementation used with the ssl.secure.random.implementation. However, there are performance issues with some implementations (notably, the
+ default chosen on Linux systems, NativePRNG, utilizes a global lock). In cases where performance of SSL connections becomes an issue,
+ consider explicitly setting the implementation to be used. The SHA1PRNG implementation is non-blocking, and has shown very good performance
characteristics under heavy load (50 MB/sec of produced messages, plus replication traffic, per-broker).
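As a sketch of what this paragraph suggests (placement in server.properties is an assumption, the key and value are the ones discussed above):
# server.properties: explicitly pin the PRNG implementation used for SSL
ssl.secure.random.implementation=SHA1PRNG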
-Djava.security.krb5.conf=/etc/kafka/krb5.conf
-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
@@ -767,7 +767,7 @@ keyUsage = digitalSignature, keyEncipherment
> bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --describe --entity-type users --entity-name alice
- Credentials may be deleted for one or more SCRAM mechanisms using the --delete option:
+ Credentials may be deleted for one or more SCRAM mechanisms using the --alter --delete-config option:
> bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name alice
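For context, the matching creation command uses the --alter --add-config form (the password value here is a placeholder):
> bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --add-config 'SCRAM-SHA-512=[password=alice-secret]' --entity-type users --entity-name alice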
@@ -1193,7 +1193,7 @@ keyUsage = digitalSignature, keyEncipherment
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
- Kafka acls are defined in the general format of "Principal P is [Allowed/Denied] Operation O From Host H on any Resource R matching ResourcePattern RP". You can read more about the acl structure in KIP-11 and resource patterns in KIP-290. In order to add, remove or list acls you can use the Kafka authorizer CLI. By default, if no ResourcePatterns match a specific Resource R, then R has no associated acls, and therefore no one other than super users is allowed to access R. If you want to change that behavior, you can include the following in server.properties.
+ Kafka acls are defined in the general format of "Principal P is [Allowed/Denied] Operation O From Host H on any Resource R matching ResourcePattern RP". You can read more about the acl structure in KIP-11 and resource patterns in KIP-290. In order to add, remove or list acls you can use the Kafka authorizer CLI. By default, if no ResourcePatterns match a specific Resource R, then R has no associated acls, and therefore no one other than super users is allowed to access R. If you want to change that behavior, you can include the following in server.properties.
allow.everyone.if.no.acl.found=true
One can also add super users in server.properties like the following (note that the delimiter is semicolon since SSL user names may contain comma). Default PrincipalType string "User" is case sensitive.
super.users=User:Bob;User:Alice
@@ -1438,7 +1438,7 @@ keyUsage = digitalSignature, keyEncipherment
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic Test-topic
By default, all principals that don't have an explicit acl that allows access for an operation to a resource are denied. In rare cases where an allow acl is defined that allows access to all but some principal we will have to use the --deny-principal and --deny-host option. For example, if we want to allow all users to Read from Test-topic but only deny User:BadBob from IP 198.51.100.3 we can do so using following commands:
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:* --allow-host * --deny-principal User:BadBob --deny-host 198.51.100.3 --operation Read --topic Test-topic
- Note that ``--allow-host`` and ``deny-host`` only support IP addresses (hostnames are not supported).
+ Note that --allow-host and --deny-host only support IP addresses (hostnames are not supported).
Above examples add acls to a topic by specifying --topic [topic-name] as the resource pattern option. Similarly user can add acls to cluster by specifying --cluster and to a consumer group by specifying --group [group-name].
You can add acls on any resource of a certain type, e.g. suppose you wanted to add an acl "Principal User:Peter is allowed to produce to any Topic from IP 198.51.200.0"
You can do that by using the wildcard resource '*', e.g. by executing the CLI with following options:
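A sketch of such a command (it is not part of this diff's context; the flags mirror the earlier examples and the host value is the one quoted above):
> bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Peter --allow-host 198.51.200.0 --producer --topic *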
@@ -1451,7 +1451,7 @@ keyUsage = digitalSignature, keyEncipherment
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --remove --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic Test-topic
- If you wan to remove the acl added to the prefixed resource pattern above we can execute the CLI with following options:
+ If you want to remove the acl added to the prefixed resource pattern above we can execute the CLI with following options:
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --remove --allow-principal User:Jane --producer --topic Test- --resource-pattern-type Prefixed
--zk-tls-config-file <file> option if you enable mTLS.
It is also possible to turn off authentication in a secure cluster. To do it, follow these steps:
--zk-tls-config-file <file> option if you need to set TLS configuration.
- ./bin/zookeeper-security-migration.sh --zookeeper.acl=secure --zookeeper.connect=localhost:2181
+ bin/zookeeper-security-migration.sh --zookeeper.acl=secure --zookeeper.connect=localhost:2181
Run this to see the full list of parameters:
- ./bin/zookeeper-security-migration.sh --help
+ bin/zookeeper-security-migration.sh --help