KAFKA-18229: Move configs out of "kraft" directory (#18389)

Reviewers: Mickael Maison <mickael.maison@gmail.com>, Ismael Juma <ismael@juma.me.uk>, José Armando García Sancio <jsancio@apache.org>
TengYao Chi, 2025-01-22 22:47:57 +08:00, committed via GitHub
parent 5a57473a52
commit 7e86bd8281
15 changed files with 19 additions and 145 deletions


@@ -110,8 +110,8 @@ fail due to code changes. You can just run:
 Using compiled files:
 KAFKA_CLUSTER_ID="$(./bin/kafka-storage.sh random-uuid)"
-./bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c config/kraft/reconfig-server.properties
-./bin/kafka-server-start.sh config/kraft/reconfig-server.properties
+./bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c config/server.properties
+./bin/kafka-server-start.sh config/server.properties
 Using docker image:
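
Read straight through, the new side of this hunk gives the full updated sequence:

```bash
KAFKA_CLUSTER_ID="$(./bin/kafka-storage.sh random-uuid)"
./bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c config/server.properties
./bin/kafka-server-start.sh config/server.properties
```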


@@ -1,129 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
############################# Server Basics #############################
# The role of this server. Setting this puts us in KRaft mode
process.roles=broker,controller
# The node id associated with this instance's roles
node.id=1
# The connect string for the controller quorum
controller.quorum.voters=1@localhost:9093
############################# Socket Server Settings #############################
# The address the socket server listens on.
# Combined nodes (i.e. those with `process.roles=broker,controller`) must list the controller listener here at a minimum.
# If the broker listener is not defined, the default listener will use a host name equal to the value of java.net.InetAddress.getCanonicalHostName(),
# with the PLAINTEXT listener name and port 9092.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://:9092,CONTROLLER://:9093
# Name of listener used for communication between brokers.
inter.broker.listener.name=PLAINTEXT
# Listener name, hostname and port the broker or the controller will advertise to clients.
# If not set, it uses the value for "listeners".
advertised.listeners=PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093
# A comma-separated list of the names of the listeners used by the controller.
# If no explicit mapping is set in `listener.security.protocol.map`, the default is to use the PLAINTEXT protocol.
# This is required if running in KRaft mode.
controller.listener.names=CONTROLLER
# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/tmp/kraft-combined-logs
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# Increasing this value is recommended for installations with data dirs located in a RAID array.
num.recovery.threads.per.data.dir=1
############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets", "__share_group_state" and "__transaction_state"
# For anything other than development testing, a value greater than 1 (such as 3) is recommended to ensure availability.
offsets.topic.replication.factor=1
share.coordinator.state.topic.replication.factor=1
share.coordinator.state.topic.min.isr=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria is met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
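
The flush and retention knobs above set cluster-wide defaults and, as the file's comments note, can be overridden per topic. A hedged sketch using kafka-configs.sh (the topic name and broker address are placeholders):

```bash
# Override retention and segment size for a single topic at runtime
bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config retention.ms=86400000,segment.bytes=536870912
```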


@@ -78,7 +78,7 @@ RUN set -eux ; \
 chmod -R ug+w /etc/kafka /var/lib/kafka /etc/kafka/secrets; \
 cp /opt/kafka/config/log4j.properties /etc/kafka/docker/log4j.properties; \
 cp /opt/kafka/config/tools-log4j.properties /etc/kafka/docker/tools-log4j.properties; \
-cp /opt/kafka/config/kraft/reconfig-server.properties /etc/kafka/docker/server.properties; \
+cp /opt/kafka/config/kraft/server.properties /etc/kafka/docker/server.properties; \
 rm kafka.tgz kafka.tgz.asc KEYS; \
 apk del wget gpg gpg-agent; \
 apk cache clean;
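
An image built from this Dockerfile is typically run by publishing the broker port. A hedged usage sketch (the image tag is a placeholder, not something this commit defines):

```bash
# Start a single-node Kafka container; the baked-in /etc/kafka/docker/server.properties is used by default
docker run -p 9092:9092 apache/kafka:latest
```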


@@ -17,9 +17,9 @@
 KAFKA_CLUSTER_ID="$(opt/kafka/bin/kafka-storage.sh random-uuid)"
 TOPIC="test-topic"
-KAFKA_JVM_PERFORMANCE_OPTS="-XX:ArchiveClassesAtExit=storage.jsa" opt/kafka/bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c opt/kafka/config/kraft/reconfig-server.properties
-KAFKA_JVM_PERFORMANCE_OPTS="-XX:ArchiveClassesAtExit=kafka.jsa" opt/kafka/bin/kafka-server-start.sh opt/kafka/config/kraft/reconfig-server.properties &
+KAFKA_JVM_PERFORMANCE_OPTS="-XX:ArchiveClassesAtExit=storage.jsa" opt/kafka/bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c opt/kafka/config/kraft/server.properties
+KAFKA_JVM_PERFORMANCE_OPTS="-XX:ArchiveClassesAtExit=kafka.jsa" opt/kafka/bin/kafka-server-start.sh opt/kafka/config/kraft/server.properties &
 check_timeout() {
 if [ $TIMEOUT -eq 0 ]; then
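
The `-XX:ArchiveClassesAtExit` runs above have the JVM record AppCDS archives on exit. A hedged sketch of how a later startup would consume one (`-XX:SharedArchiveFile` is the standard companion JVM flag; this line is not part of the script itself):

```bash
# Reuse the class-data archive recorded by the earlier run to speed up startup
KAFKA_JVM_PERFORMANCE_OPTS="-XX:SharedArchiveFile=kafka.jsa" \
  opt/kafka/bin/kafka-server-start.sh opt/kafka/config/kraft/server.properties
```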


@@ -78,7 +78,7 @@ RUN set -eux ; \
 chmod -R ug+w /etc/kafka /var/lib/kafka /etc/kafka/secrets; \
 cp /opt/kafka/config/log4j2.yaml /etc/kafka/docker/log4j2.yaml; \
 cp /opt/kafka/config/tools-log4j2.yaml /etc/kafka/docker/tools-log4j2.yaml; \
-cp /opt/kafka/config/kraft/reconfig-server.properties /etc/kafka/docker/server.properties; \
+cp /opt/kafka/config/server.properties /etc/kafka/docker/server.properties; \
 rm kafka.tgz kafka.tgz.asc KEYS; \
 apk del wget gpg gpg-agent; \
 apk cache clean;


@@ -17,9 +17,9 @@
 KAFKA_CLUSTER_ID="$(opt/kafka/bin/kafka-storage.sh random-uuid)"
 TOPIC="test-topic"
-KAFKA_JVM_PERFORMANCE_OPTS="-XX:ArchiveClassesAtExit=storage.jsa" opt/kafka/bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c opt/kafka/config/kraft/reconfig-server.properties
-KAFKA_JVM_PERFORMANCE_OPTS="-XX:ArchiveClassesAtExit=kafka.jsa" opt/kafka/bin/kafka-server-start.sh opt/kafka/config/kraft/reconfig-server.properties &
+KAFKA_JVM_PERFORMANCE_OPTS="-XX:ArchiveClassesAtExit=storage.jsa" opt/kafka/bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c opt/kafka/config/server.properties
+KAFKA_JVM_PERFORMANCE_OPTS="-XX:ArchiveClassesAtExit=kafka.jsa" opt/kafka/bin/kafka-server-start.sh opt/kafka/config/server.properties &
 check_timeout() {
 if [ $TIMEOUT -eq 0 ]; then


@@ -63,7 +63,7 @@ RUN apk update ; \
 chmod -R ug+w /etc/kafka /opt/kafka /mnt/shared/config ;
 COPY --chown=appuser:root --from=build-native-image /app/kafka/kafka.Kafka /opt/kafka/
-COPY --chown=appuser:root --from=build-native-image /app/kafka/config/kraft/reconfig-server.properties /etc/kafka/docker/
+COPY --chown=appuser:root --from=build-native-image /app/kafka/config/server.properties /etc/kafka/docker/
 COPY --chown=appuser:root --from=build-native-image /app/kafka/config/log4j2.yaml /etc/kafka/docker/
 COPY --chown=appuser:root --from=build-native-image /app/kafka/config/tools-log4j2.yaml /etc/kafka/docker/
 COPY --chown=appuser:root resources/common-scripts /etc/kafka/docker/


@@ -52,10 +52,10 @@ $ cd kafka_{{scalaVersion}}-{{fullDotVersion}}</code></pre>
 <pre><code class="language-bash">$ KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"</code></pre>
 <p>Format Log Directories</p>
-<pre><code class="language-bash">$ bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c config/kraft/reconfig-server.properties</code></pre>
+<pre><code class="language-bash">$ bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c config/server.properties</code></pre>
 <p>Start the Kafka Server</p>
-<pre><code class="language-bash">$ bin/kafka-server-start.sh config/kraft/reconfig-server.properties</code></pre>
+<pre><code class="language-bash">$ bin/kafka-server-start.sh config/server.properties</code></pre>
 <p>Once the Kafka server has successfully launched, you will have a basic Kafka environment running and ready to use.</p>
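
As a smoke test once the server is up, the quickstart's usual next step still applies unchanged. A hedged sketch (the topic name is a placeholder):

```bash
# Create a topic and write one event to confirm the broker is reachable
bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092
echo "hello" | bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092
```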


@@ -825,7 +825,7 @@ sasl.mechanism=PLAIN</code></pre></li>
 Client credentials may be created and updated dynamically and updated credentials will be used to authenticate new connections.
 <code>kafka-configs.sh</code> can be used to create and update credentials after Kafka brokers are started.</p>
 <p>Create initial SCRAM credentials for user <i>admin</i> with password <i>admin-secret</i>:
-<pre><code class="language-bash">$ bin/kafka-storage.sh format -t $(bin/kafka-storage.sh random-uuid) -c config/kraft/server.properties --add-scram 'SCRAM-SHA-256=[name="admin",password="admin-secret"]'</code></pre>
+<pre><code class="language-bash">$ bin/kafka-storage.sh format -t $(bin/kafka-storage.sh random-uuid) -c config/server.properties --add-scram 'SCRAM-SHA-256=[name="admin",password="admin-secret"]'</code></pre>
 <p>Create SCRAM credentials for user <i>alice</i> with password <i>alice-secret</i> (refer to <a href="#security_sasl_scram_clientconfig">Configuring Kafka Clients</a> for client configuration):
 <pre><code class="language-bash">$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret]' --entity-type users --entity-name alice --command-config client.properties</code></pre>
 <p>The default iteration count of 4096 is used if iterations are not specified. A random salt is created if it's not specified.
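
To confirm a credential took effect, the same tool can describe it. A hedged sketch, assuming the alice credential above was created:

```bash
# List the SCRAM credentials registered for user alice
bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe \
  --entity-type users --entity-name alice --command-config client.properties
```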


@@ -106,13 +106,13 @@ $ cd kafka_{{scalaVersion}}-{{fullDotVersion}}</code></pre>
 Format Log Directories
 </p>
-<pre><code class="language-bash">$ bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c config/kraft/reconfig-server.properties</code></pre>
+<pre><code class="language-bash">$ bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c config/server.properties</code></pre>
 <p>
 Start the Kafka Server
 </p>
-<pre><code class="language-bash">$ bin/kafka-server-start.sh config/kraft/reconfig-server.properties</code></pre>
+<pre><code class="language-bash">$ bin/kafka-server-start.sh config/server.properties</code></pre>
 <h4><a id="quickstart_streams_prepare" href="#quickstart_streams_prepare">Step 3: Prepare input topic and start Kafka producer</a></h4>


@@ -82,6 +82,9 @@
 </li>
 <li>The function <code>onNewBatch</code> in <code>org.apache.kafka.clients.producer.Partitioner</code> class was removed.
 </li>
+<li>The default properties files for KRaft mode are no longer stored in a separate <code>config/kraft</code> directory, since ZooKeeper has been removed. They have been consolidated with the other configuration files:
+all configuration files now live in the <code>config</code> directory.
+</li>
 </ul>
 </li>
 <li><b>Broker</b>
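
For operators whose wrapper scripts still reference the old layout, a hedged migration sketch that mirrors the path substitutions made throughout this commit (my-kafka-start.sh is a placeholder for your own script; review the matches before running):

```bash
# Rewrite old KRaft config paths to the consolidated layout
sed -i \
  -e 's|config/kraft/reconfig-server.properties|config/server.properties|g' \
  -e 's|config/kraft/server.properties|config/server.properties|g' \
  my-kafka-start.sh
```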


@@ -12,8 +12,8 @@ Running Kafka in Kraft mode:
 ```
 KAFKA_CLUSTER_ID="$(./bin/kafka-storage.sh random-uuid)"
-./bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c config/kraft/reconfig-server.properties
-./bin/kafka-server-start.sh config/kraft/reconfig-server.properties &> /tmp/kafka.log &
+./bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c config/server.properties
+./bin/kafka-server-start.sh config/server.properties &> /tmp/kafka.log &
 ```
 Then, we want to run a Trogdor Agent, plus a Trogdor Coordinator.
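
The README then launches those two daemons; a hedged sketch of the usual invocation, following the upstream Trogdor documentation (node name and log paths are illustrative):

```bash
./bin/trogdor.sh agent -c ./config/trogdor.conf -n node0 &> /tmp/trogdor-agent.log &
./bin/trogdor.sh coordinator -c ./config/trogdor.conf -n node0 &> /tmp/trogdor-coordinator.log &
```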