diff --git a/README.md b/README.md
index efcddbcd359..b7c602d574a 100644
--- a/README.md
+++ b/README.md
@@ -110,8 +110,8 @@ fail due to code changes. You can just run:
 Using compiled files:
 
     KAFKA_CLUSTER_ID="$(./bin/kafka-storage.sh random-uuid)"
-    ./bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c config/kraft/reconfig-server.properties
-    ./bin/kafka-server-start.sh config/kraft/reconfig-server.properties
+    ./bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c config/server.properties
+    ./bin/kafka-server-start.sh config/server.properties
 
 Using docker image:
diff --git a/config/kraft/broker.properties b/config/broker.properties
similarity index 100%
rename from config/kraft/broker.properties
rename to config/broker.properties
diff --git a/config/kraft/controller.properties b/config/controller.properties
similarity index 100%
rename from config/kraft/controller.properties
rename to config/controller.properties
diff --git a/config/kraft/server.properties b/config/kraft/server.properties
deleted file mode 100644
index 311fefbdf86..00000000000
--- a/config/kraft/server.properties
+++ /dev/null
@@ -1,129 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements. See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License. You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-############################# Server Basics #############################
-
-# The role of this server. Setting this puts us in KRaft mode
-process.roles=broker,controller
-
-# The node id associated with this instance's roles
-node.id=1
-
-# The connect string for the controller quorum
-controller.quorum.voters=1@localhost:9093
-
-############################# Socket Server Settings #############################
-
-# The address the socket server listens on.
-# Combined nodes (i.e. those with `process.roles=broker,controller`) must list the controller listener here at a minimum.
-# If the broker listener is not defined, the default listener will use a host name that is equal to the value of java.net.InetAddress.getCanonicalHostName(),
-# with PLAINTEXT listener name, and port 9092.
-# FORMAT:
-# listeners = listener_name://host_name:port
-# EXAMPLE:
-# listeners = PLAINTEXT://your.host.name:9092
-listeners=PLAINTEXT://:9092,CONTROLLER://:9093
-
-# Name of listener used for communication between brokers.
-inter.broker.listener.name=PLAINTEXT
-
-# Listener name, hostname and port the broker or the controller will advertise to clients.
-# If not set, it uses the value for "listeners".
-advertised.listeners=PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093
-
-# A comma-separated list of the names of the listeners used by the controller.
-# If no explicit mapping set in `listener.security.protocol.map`, default will be using PLAINTEXT protocol
-# This is required if running in KRaft mode.
-controller.listener.names=CONTROLLER
-
-# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
-listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
-
-# The number of threads that the server uses for receiving requests from the network and sending responses to the network
-num.network.threads=3
-
-# The number of threads that the server uses for processing requests, which may include disk I/O
-num.io.threads=8
-
-# The send buffer (SO_SNDBUF) used by the socket server
-socket.send.buffer.bytes=102400
-
-# The receive buffer (SO_RCVBUF) used by the socket server
-socket.receive.buffer.bytes=102400
-
-# The maximum size of a request that the socket server will accept (protection against OOM)
-socket.request.max.bytes=104857600
-
-
-############################# Log Basics #############################
-
-# A comma separated list of directories under which to store log files
-log.dirs=/tmp/kraft-combined-logs
-
-# The default number of log partitions per topic. More partitions allow greater
-# parallelism for consumption, but this will also result in more files across
-# the brokers.
-num.partitions=1
-
-# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
-# This value is recommended to be increased for installations with data dirs located in RAID array.
-num.recovery.threads.per.data.dir=1
-
-############################# Internal Topic Settings #############################
-# The replication factor for the group metadata internal topics "__consumer_offsets", "__share_group_state" and "__transaction_state"
-# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
-offsets.topic.replication.factor=1
-share.coordinator.state.topic.replication.factor=1
-share.coordinator.state.topic.min.isr=1
-transaction.state.log.replication.factor=1
-transaction.state.log.min.isr=1
-
-############################# Log Flush Policy #############################
-
-# Messages are immediately written to the filesystem but by default we only fsync() to sync
-# the OS cache lazily. The following configurations control the flush of data to disk.
-# There are a few important trade-offs here:
-# 1. Durability: Unflushed data may be lost if you are not using replication.
-# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
-# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
-# The settings below allow one to configure the flush policy to flush data after a period of time or
-# every N messages (or both). This can be done globally and overridden on a per-topic basis.
-
-# The number of messages to accept before forcing a flush of data to disk
-#log.flush.interval.messages=10000
-
-# The maximum amount of time a message can sit in a log before we force a flush
-#log.flush.interval.ms=1000
-
-############################# Log Retention Policy #############################
-
-# The following configurations control the disposal of log segments. The policy can
-# be set to delete segments after a period of time, or after a given size has accumulated.
-# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
-# from the end of the log.
-
-# The minimum age of a log file to be eligible for deletion due to age
-log.retention.hours=168
-
-# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
-# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
-#log.retention.bytes=1073741824
-
-# The maximum size of a log segment file. When this size is reached a new log segment will be created.
-log.segment.bytes=1073741824
-
-# The interval at which log segments are checked to see if they can be deleted according
-# to the retention policies
-log.retention.check.interval.ms=300000
diff --git a/config/kraft/reconfig-server.properties b/config/server.properties
similarity index 100%
rename from config/kraft/reconfig-server.properties
rename to config/server.properties
diff --git a/docker/docker_official_images/3.7.0/jvm/Dockerfile b/docker/docker_official_images/3.7.0/jvm/Dockerfile
index b8d0ceb00ab..905e2f2149b 100755
--- a/docker/docker_official_images/3.7.0/jvm/Dockerfile
+++ b/docker/docker_official_images/3.7.0/jvm/Dockerfile
@@ -78,7 +78,7 @@ RUN set -eux ; \
     chmod -R ug+w /etc/kafka /var/lib/kafka /etc/kafka/secrets; \
     cp /opt/kafka/config/log4j.properties /etc/kafka/docker/log4j.properties; \
     cp /opt/kafka/config/tools-log4j.properties /etc/kafka/docker/tools-log4j.properties; \
-    cp /opt/kafka/config/kraft/reconfig-server.properties /etc/kafka/docker/server.properties; \
+    cp /opt/kafka/config/kraft/server.properties /etc/kafka/docker/server.properties; \
     rm kafka.tgz kafka.tgz.asc KEYS; \
     apk del wget gpg gpg-agent; \
     apk cache clean;
diff --git a/docker/docker_official_images/3.7.0/jvm/jsa_launch b/docker/docker_official_images/3.7.0/jvm/jsa_launch
index dd0299767e3..c14420b8cbd 100755
--- a/docker/docker_official_images/3.7.0/jvm/jsa_launch
+++ b/docker/docker_official_images/3.7.0/jvm/jsa_launch
@@ -17,9 +17,9 @@
 KAFKA_CLUSTER_ID="$(opt/kafka/bin/kafka-storage.sh random-uuid)"
 TOPIC="test-topic"
 
-KAFKA_JVM_PERFORMANCE_OPTS="-XX:ArchiveClassesAtExit=storage.jsa" opt/kafka/bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c opt/kafka/config/kraft/reconfig-server.properties
+KAFKA_JVM_PERFORMANCE_OPTS="-XX:ArchiveClassesAtExit=storage.jsa" opt/kafka/bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c opt/kafka/config/kraft/server.properties
 
-KAFKA_JVM_PERFORMANCE_OPTS="-XX:ArchiveClassesAtExit=kafka.jsa" opt/kafka/bin/kafka-server-start.sh opt/kafka/config/kraft/reconfig-server.properties &
+KAFKA_JVM_PERFORMANCE_OPTS="-XX:ArchiveClassesAtExit=kafka.jsa" opt/kafka/bin/kafka-server-start.sh opt/kafka/config/kraft/server.properties &
 
 check_timeout() {
   if [ $TIMEOUT -eq 0 ]; then
diff --git a/docker/jvm/Dockerfile b/docker/jvm/Dockerfile
index 767b414ab7a..e633237c873 100644
--- a/docker/jvm/Dockerfile
+++ b/docker/jvm/Dockerfile
@@ -78,7 +78,7 @@ RUN set -eux ; \
     chmod -R ug+w /etc/kafka /var/lib/kafka /etc/kafka/secrets; \
     cp /opt/kafka/config/log4j2.yaml /etc/kafka/docker/log4j2.yaml; \
     cp /opt/kafka/config/tools-log4j2.yaml /etc/kafka/docker/tools-log4j2.yaml; \
-    cp /opt/kafka/config/kraft/reconfig-server.properties /etc/kafka/docker/server.properties; \
+    cp /opt/kafka/config/server.properties /etc/kafka/docker/server.properties; \
     rm kafka.tgz kafka.tgz.asc KEYS; \
     apk del wget gpg gpg-agent; \
     apk cache clean;
diff --git a/docker/jvm/jsa_launch b/docker/jvm/jsa_launch
index dd0299767e3..bfb6d73fa8b 100755
--- a/docker/jvm/jsa_launch
+++ b/docker/jvm/jsa_launch
@@ -17,9 +17,9 @@
 KAFKA_CLUSTER_ID="$(opt/kafka/bin/kafka-storage.sh random-uuid)"
 TOPIC="test-topic"
 
-KAFKA_JVM_PERFORMANCE_OPTS="-XX:ArchiveClassesAtExit=storage.jsa" opt/kafka/bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c opt/kafka/config/kraft/reconfig-server.properties
+KAFKA_JVM_PERFORMANCE_OPTS="-XX:ArchiveClassesAtExit=storage.jsa" opt/kafka/bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c opt/kafka/config/server.properties
 
-KAFKA_JVM_PERFORMANCE_OPTS="-XX:ArchiveClassesAtExit=kafka.jsa" opt/kafka/bin/kafka-server-start.sh opt/kafka/config/kraft/reconfig-server.properties &
+KAFKA_JVM_PERFORMANCE_OPTS="-XX:ArchiveClassesAtExit=kafka.jsa" opt/kafka/bin/kafka-server-start.sh opt/kafka/config/server.properties &
 
 check_timeout() {
   if [ $TIMEOUT -eq 0 ]; then
diff --git a/docker/native/Dockerfile b/docker/native/Dockerfile
index 57c11ba7048..08db80781f2 100644
--- a/docker/native/Dockerfile
+++ b/docker/native/Dockerfile
@@ -63,7 +63,7 @@ RUN apk update ; \
     chmod -R ug+w /etc/kafka /opt/kafka /mnt/shared/config ;
 
 COPY --chown=appuser:root --from=build-native-image /app/kafka/kafka.Kafka /opt/kafka/
-COPY --chown=appuser:root --from=build-native-image /app/kafka/config/kraft/reconfig-server.properties /etc/kafka/docker/
+COPY --chown=appuser:root --from=build-native-image /app/kafka/config/server.properties /etc/kafka/docker/
 COPY --chown=appuser:root --from=build-native-image /app/kafka/config/log4j2.yaml /etc/kafka/docker/
 COPY --chown=appuser:root --from=build-native-image /app/kafka/config/tools-log4j2.yaml /etc/kafka/docker/
 COPY --chown=appuser:root resources/common-scripts /etc/kafka/docker/
diff --git a/docs/quickstart.html b/docs/quickstart.html
index 1ded73e2256..e42d965bcef 100644
--- a/docs/quickstart.html
+++ b/docs/quickstart.html
@@ -52,10 +52,10 @@ $ cd kafka_{{scalaVersion}}-{{fullDotVersion}}
$ KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"

Format Log Directories

-$ bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c config/kraft/reconfig-server.properties
+$ bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c config/server.properties

Start the Kafka Server

-$ bin/kafka-server-start.sh config/kraft/reconfig-server.properties
+$ bin/kafka-server-start.sh config/server.properties

Once the Kafka server has successfully launched, you will have a basic Kafka environment running and ready to use.

diff --git a/docs/security.html b/docs/security.html
index 1b82dfc1223..fe65412fa70 100644
--- a/docs/security.html
+++ b/docs/security.html
@@ -825,7 +825,7 @@ sasl.mechanism=PLAIN
Client credentials may be created and updated dynamically and updated credentials will be used to authenticate new connections. kafka-configs.sh can be used to create and update credentials after Kafka brokers are started.

Create initial SCRAM credentials for user admin with password admin-secret:

-$ bin/kafka-storage.sh format -t $(bin/kafka-storage.sh random-uuid) -c config/kraft/server.properties --add-scram 'SCRAM-SHA-256=[name="admin",password="admin-secret"]'
+$ bin/kafka-storage.sh format -t $(bin/kafka-storage.sh random-uuid) -c config/server.properties --add-scram 'SCRAM-SHA-256=[name="admin",password="admin-secret"]'

Create SCRAM credentials for user alice with password alice-secret (refer to Configuring Kafka Clients for client configuration):

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret]' --entity-type users --entity-name alice --command-config client.properties

The default iteration count of 4096 is used if iterations are not specified. A random salt is created if it's not specified.
diff --git a/docs/streams/quickstart.html b/docs/streams/quickstart.html
index 710bec43351..ad14001293d 100644
--- a/docs/streams/quickstart.html
+++ b/docs/streams/quickstart.html
@@ -106,13 +106,13 @@ $ cd kafka_{{scalaVersion}}-{{fullDotVersion}}
Format Log Directories

-$ bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c config/kraft/reconfig-server.properties
+$ bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c config/server.properties

Start the Kafka Server

-$ bin/kafka-server-start.sh config/kraft/reconfig-server.properties
+$ bin/kafka-server-start.sh config/server.properties

Step 3: Prepare input topic and start Kafka producer

diff --git a/docs/upgrade.html b/docs/upgrade.html
index 1be5e1836c7..b45e25c400c 100644
--- a/docs/upgrade.html
+++ b/docs/upgrade.html
@@ -82,6 +82,9 @@
  • The function onNewBatch in org.apache.kafka.clients.producer.Partitioner class was removed.
+  • The default properties files for KRaft mode are no longer stored in the separate config/kraft directory since ZooKeeper has been removed. These files have been consolidated with the other configuration files, so all configuration files are now in the config directory.
  • Broker
diff --git a/trogdor/README.md b/trogdor/README.md
index a44da002eb3..daebf2f50aa 100644
--- a/trogdor/README.md
+++ b/trogdor/README.md
@@ -12,8 +12,8 @@ Running Kafka in Kraft mode:
 ```
 KAFKA_CLUSTER_ID="$(./bin/kafka-storage.sh random-uuid)"
-./bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c config/kraft/reconfig-server.properties
-./bin/kafka-server-start.sh config/kraft/reconfig-server.properties &> /tmp/kafka.log &
+./bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c config/server.properties
+./bin/kafka-server-start.sh config/server.properties &> /tmp/kafka.log &
 ```
 
 Then, we want to run a Trogdor Agent, plus a Trogdor Coordinator.
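For quick reference, here is a minimal sketch (not part of the patch itself) of the single-node KRaft startup flow against the consolidated config/server.properties path used throughout the changes above; it assumes an unpacked Kafka distribution as the working directory.

```
# Generate a cluster ID, format the log directories, and start a combined
# broker/controller using the consolidated config/server.properties.
KAFKA_CLUSTER_ID="$(./bin/kafka-storage.sh random-uuid)"
./bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c config/server.properties
./bin/kafka-server-start.sh config/server.properties
```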