KAFKA-18012: Update the Scram configuration section for KRaft (#17844)

Reviewers: Mickael Maison <mickael.maison@gmail.com>
PoAn Yang 2024-11-27 18:37:24 +08:00 committed by GitHub
parent 3f834781a4
commit 3710add2a7
GPG Key ID: B5690EEEBB952194
1 changed file with 20 additions and 20 deletions


@@ -814,27 +814,27 @@ sasl.mechanism=PLAIN</code></pre></li>
Kafka supports <a href="https://tools.ietf.org/html/rfc7677">SCRAM-SHA-256</a> and SCRAM-SHA-512 which
can be used with TLS to perform secure authentication. Under the default implementation of <code>principal.builder.class</code>, the username is used as the authenticated
<code>Principal</code> for configuration of ACLs etc. The default SCRAM implementation in Kafka
stores SCRAM credentials in the metadata log. Refer to <a href="#security_sasl_scram_security">Security Considerations</a>
for more details.</p>
<ol>
<li><h5 class="anchor-heading"><a id="security_sasl_scram_credentials" class="anchor-link"></a><a href="#security_sasl_scram_credentials">Creating SCRAM Credentials</a></h5>
<p>The SCRAM implementation in Kafka uses the metadata log as credential store. Credentials can be created in
the metadata log using <code>kafka-storage.sh</code> or <code>kafka-configs.sh</code>. For each SCRAM mechanism enabled, credentials must be created
by adding a config with the mechanism name. Credentials for inter-broker communication must be created
before Kafka brokers are started. <code>kafka-storage.sh</code> can format storage with initial credentials.
Client credentials may be created and updated dynamically, and updated credentials will be used to authenticate new connections.
<code>kafka-configs.sh</code> can be used to create and update credentials after Kafka brokers are started.</p>
<p>Create initial SCRAM credentials for user <i>admin</i> with password <i>admin-secret</i>:
<pre><code class="language-bash">$ bin/kafka-storage.sh format -t $(bin/kafka-storage.sh random-uuid) -c config/kraft/server.properties --add-scram 'SCRAM-SHA-256=[name="admin",password="admin-secret"]'</code></pre>
<p>Create SCRAM credentials for user <i>alice</i> with password <i>alice-secret</i> (refer to <a href="#security_sasl_scram_clientconfig">Configuring Kafka Clients</a> for client configuration):
<pre><code class="language-bash">$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret]' --entity-type users --entity-name alice --command-config client.properties</code></pre>
<p>The default iteration count of 4096 is used if iterations are not specified. A random salt is created if one is not specified.
The SCRAM identity consisting of salt, iterations, StoredKey and ServerKey is stored in the metadata log.
See <a href="https://tools.ietf.org/html/rfc5802">RFC 5802</a> for details on SCRAM identity and the individual fields.
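<p>To make the stored identity concrete, the following sketch (not part of Kafka's tooling) derives a SCRAM-SHA-256 StoredKey the way RFC 5802 defines it: SaltedPassword = Hi(password, salt, i) (i.e. PBKDF2 with HMAC-SHA-256), ClientKey = HMAC(SaltedPassword, "Client Key"), StoredKey = H(ClientKey). The password and salt are made-up illustrative values, and it assumes OpenSSL 3 (for the <code>openssl kdf</code> subcommand) and <code>xxd</code> are available:

```shell
# Illustrative only: compute the SCRAM-SHA-256 StoredKey per RFC 5802.
# Assumes OpenSSL 3 (`openssl kdf`) and xxd; password/salt are made up.
PASSWORD='alice-secret'
SALT='random-salt'      # a broker generates a random salt instead
ITERATIONS=4096         # Kafka's default iteration count

# SaltedPassword := Hi(password, salt, i), i.e. PBKDF2 with HMAC-SHA-256
SALTED=$(openssl kdf -keylen 32 -kdfopt digest:SHA256 \
  -kdfopt pass:"$PASSWORD" -kdfopt salt:"$SALT" \
  -kdfopt iter:"$ITERATIONS" PBKDF2 | tr -d ':')

# ClientKey := HMAC(SaltedPassword, "Client Key")
CLIENT_KEY=$(printf '%s' 'Client Key' | \
  openssl dgst -sha256 -mac HMAC -macopt hexkey:"$SALTED" | awk '{print $NF}')

# StoredKey := H(ClientKey); the broker stores this, never the password itself
STORED_KEY=$(printf '%s' "$CLIENT_KEY" | xxd -r -p | \
  openssl dgst -sha256 | awk '{print $NF}')
echo "$STORED_KEY"
```

Because only StoredKey and ServerKey are persisted, a broker can verify clients without ever holding the plaintext password.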
<p>Existing credentials may be listed using the <i>--describe</i> option:
<pre><code class="language-bash">$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type users --entity-name alice --command-config client.properties</code></pre>
<p>Credentials may be deleted for one or more SCRAM mechanisms using the <i>--alter --delete-config</i> option:
<pre><code class="language-bash">$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --delete-config 'SCRAM-SHA-256' --entity-type users --entity-name alice --command-config client.properties</code></pre>
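<p>The <code>client.properties</code> file passed via <code>--command-config</code> in the examples above is not spelled out in this section; a minimal sketch, assuming a <code>SASL_SSL</code> listener and the <i>admin</i>/<i>admin-secret</i> credential created with <code>kafka-storage.sh</code> (adjust the protocol, mechanism and TLS settings to your cluster), could look like:

```shell
# Hypothetical client.properties for the kafka-configs.sh examples above.
# Assumes a SASL_SSL listener and the admin/admin-secret SCRAM credential;
# values are illustrative, not a definitive configuration.
cat > client.properties <<'EOF'
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="admin" \
    password="admin-secret";
EOF
```

Since this file carries the admin client's own SCRAM credentials in cleartext, restrict its file permissions accordingly.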
</li>
<li><h5 class="anchor-heading"><a id="security_sasl_scram_brokerconfig" class="anchor-link"></a><a href="#security_sasl_scram_brokerconfig">Configuring Kafka Brokers</a></h5>
<ol>
@@ -882,17 +882,17 @@ sasl.mechanism=SCRAM-SHA-256 (or SCRAM-SHA-512)</code></pre></li>
</li>
<li><h5><a id="security_sasl_scram_security" href="#security_sasl_scram_security">Security Considerations for SASL/SCRAM</a></h5>
<ul>
<li>The default implementation of SASL/SCRAM in Kafka stores SCRAM credentials in the metadata log. This
is suitable for production use in installations where KRaft controllers are secure and on a private network.</li>
<li>Kafka supports only the strong hash functions SHA-256 and SHA-512 with a minimum iteration count
of 4096. Strong hash functions combined with strong passwords and high iteration counts protect
against brute force attacks if KRaft controller security is compromised.</li>
<li>SCRAM should be used only with TLS encryption to prevent interception of SCRAM exchanges. This
protects against dictionary or brute force attacks and against impersonation if KRaft controller security is compromised.</li>
<li>From Kafka version 2.0 onwards, the default SASL/SCRAM credential store may be overridden using custom callback handlers
by configuring <code>sasl.server.callback.handler.class</code> in installations where KRaft controllers are not secure.</li>
<li>For more details on security considerations, refer to
<a href="https://tools.ietf.org/html/rfc5802#section-9">RFC 5802</a>.</li>
</ul>
</li>
</ol>