From 3710add2a7f8048f001c0f6a255b0bb209d35830 Mon Sep 17 00:00:00 2001
From: PoAn Yang
Date: Wed, 27 Nov 2024 18:37:24 +0800
Subject: [PATCH] KAFKA-18012: Update the Scram configuration section for KRaft (#17844)

Reviewers: Mickael Maison
---
 docs/security.html | 40 ++++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/docs/security.html b/docs/security.html
index 069ac42c106..7b08458a864 100644
--- a/docs/security.html
+++ b/docs/security.html
@@ -814,27 +814,27 @@ sasl.mechanism=PLAIN
     Kafka supports SCRAM-SHA-256 and SCRAM-SHA-512 which can be used with TLS to perform secure
     authentication. Under the default implementation of principal.builder.class, the username is used as
     the authenticated Principal for configuration of ACLs etc. The default SCRAM implementation in Kafka
-    stores SCRAM credentials in Zookeeper and is suitable for use in Kafka installations where Zookeeper
-    is on a private network. Refer to Security Considerations
+    stores SCRAM credentials in the metadata log. Refer to Security Considerations
     for more details.

  1. Creating SCRAM Credentials

-    The SCRAM implementation in Kafka uses Zookeeper as credential store. Credentials can be created in
-    Zookeeper using kafka-configs.sh. For each SCRAM mechanism enabled, credentials must be created
+    The SCRAM implementation in Kafka uses the metadata log as credential store. Credentials can be created in
+    the metadata log using kafka-storage.sh or kafka-configs.sh. For each SCRAM mechanism enabled, credentials must be created
     by adding a config with the mechanism name. Credentials for inter-broker communication must be created
-    before Kafka brokers are started. Client credentials may be created and updated dynamically and updated
-    credentials will be used to authenticate new connections.
+    before Kafka brokers are started. kafka-storage.sh can format storage with initial credentials.
+    Client credentials may be created and updated dynamically, and updated credentials will be used to authenticate new connections.
+    kafka-configs.sh can be used to create and update credentials after Kafka brokers are started.

-    Create SCRAM credentials for user alice with password alice-secret:
-
-    $ bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]' --entity-type users --entity-name alice
+    Create initial SCRAM credentials for user admin with password admin-secret:
+
+    $ bin/kafka-storage.sh format -t $(bin/kafka-storage.sh random-uuid) -c config/kraft/server.properties --add-scram 'SCRAM-SHA-256=[name="admin",password="admin-secret"]'
+
+    Create SCRAM credentials for user alice with password alice-secret (refer to Configuring Kafka Clients for client configuration):
+
+    $ bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret]' --entity-type users --entity-name alice --command-config client.properties
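The kafka-configs.sh commands above authenticate to the broker via the file passed with --command-config. A minimal sketch of what that client.properties could contain, assuming the admin credentials created with kafka-storage.sh above and a SASL_SSL listener (listener type and truststore settings depend on your cluster):

```properties
# Sketch of client.properties for kafka-configs.sh (values are assumptions, adjust to your cluster)
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="admin" \
    password="admin-secret";
```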

-    The default iteration count of 4096 is used if iterations are not specified. A random salt is created
-    and the SCRAM identity consisting of salt, iterations, StoredKey and ServerKey are stored in Zookeeper.
+    The default iteration count of 4096 is used if iterations are not specified. A random salt is created if one is not specified.
+    The SCRAM identity consisting of salt, iterations, StoredKey and ServerKey is stored in the metadata log.
     See RFC 5802 for details on SCRAM identity and the individual fields.

-    The following examples also require a user admin for inter-broker communication which can be created using:
-
-    $ bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
-
     Existing credentials may be listed using the --describe option:

-    $ bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --describe --entity-type users --entity-name alice
+    $ bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type users --entity-name alice --command-config client.properties

     Credentials may be deleted for one or more SCRAM mechanisms using the --alter --delete-config option:

-    $ bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name alice
+    $ bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --delete-config 'SCRAM-SHA-256' --entity-type users --entity-name alice --command-config client.properties
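The stored SCRAM identity mentioned above (salt, iterations, StoredKey, ServerKey) follows RFC 5802. A minimal sketch of how those values are derived, illustrative only and not Kafka's implementation:

```python
import hashlib
import hmac
import os

def scram_sha256_identity(password: str, salt: bytes, iterations: int = 4096) -> dict:
    """Derive the SCRAM-SHA-256 identity fields per RFC 5802 (illustrative sketch)."""
    # SaltedPassword := Hi(password, salt, iterations), i.e. PBKDF2-HMAC-SHA-256
    salted = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    # ClientKey := HMAC(SaltedPassword, "Client Key"); StoredKey := H(ClientKey)
    client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
    stored_key = hashlib.sha256(client_key).digest()
    # ServerKey := HMAC(SaltedPassword, "Server Key")
    server_key = hmac.new(salted, b"Server Key", hashlib.sha256).digest()
    return {"salt": salt, "iterations": iterations,
            "stored_key": stored_key, "server_key": server_key}

# The server never stores the password itself, only the derived identity.
identity = scram_sha256_identity("alice-secret", os.urandom(16))
```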
  2. Configuring Kafka Brokers
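The broker hunk context below shows sasl.mechanism=SCRAM-SHA-256 (or SCRAM-SHA-512). A sketch of the corresponding broker-side settings, with listener names and the admin credentials as assumptions taken from the examples above:

```properties
# Sketch of broker SCRAM settings (listener names and credentials are assumptions)
listeners=SASL_SSL://host.name:9092
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
sasl.enabled.mechanisms=SCRAM-SHA-256
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="admin" \
    password="admin-secret";
```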
      @@ -882,17 +882,17 @@ sasl.mechanism=SCRAM-SHA-256 (or SCRAM-SHA-512)
    1. Security Considerations for SASL/SCRAM

-      • The default implementation of SASL/SCRAM in Kafka stores SCRAM credentials in Zookeeper. This
-        is suitable for production use in installations where Zookeeper is secure and on a private network.
+      • The default implementation of SASL/SCRAM in Kafka stores SCRAM credentials in the metadata log. This
+        is suitable for production use in installations where KRaft controllers are secure and on a private network.
       • Kafka supports only the strong hash functions SHA-256 and SHA-512 with a minimum iteration count
         of 4096. Strong hash functions combined with strong passwords and high iteration counts protect
-        against brute force attacks if Zookeeper security is compromised.
+        against brute force attacks if KRaft controller security is compromised.
       • SCRAM should be used only with TLS-encryption to prevent interception of SCRAM exchanges. This
-        protects against dictionary or brute force attacks and against impersonation if Zookeeper is compromised.
+        protects against dictionary or brute force attacks and against impersonation if KRaft controller security is compromised.
       • From Kafka version 2.0 onwards, the default SASL/SCRAM credential store may be overridden using custom callback handlers
-        by configuring sasl.server.callback.handler.class in installations where Zookeeper is not secure.
+        by configuring sasl.server.callback.handler.class in installations where KRaft controllers are not secure.
       • For more details on security considerations, refer to RFC 5802.