kafka/raft
Latest commit: 5e4c8f704c by Divij Vaidya
KAFKA-13943; Make `LocalLogManager` implementation consistent with the `RaftClient` contract (#12224)
Fixes two issues in the implementation of `LocalLogManager`:

- As per the interface contract for `RaftClient.scheduleAtomicAppend()`, it should throw a `NotLeaderException` when the provided leader epoch does not match the current epoch. However, the current `LocalLogManager` implementation of this API returns `Long.MAX_VALUE` instead of throwing the exception. This change fixes that behaviour and makes it consistent with the interface contract.
- As per the interface contract for `RaftClient.resign(epoch)`, the call should be ignored when the provided epoch does not match the current epoch. In the current `LocalLogManager` implementation, however, the leader epoch may change while the thread is waiting to acquire the lock in `shared.tryAppend()` (note that `tryAppend()` is a synchronized method). In that case, if a `NotLeaderException` is thrown (per the change above), the subsequent resign should be ignored (see the caller-side sketch below).
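
To make the intended behaviour concrete, the following rough sketch shows how a caller is expected to interact with these two APIs under the contract described above. The method names, signatures, return type, and the `NotLeaderException` package are assumptions inferred from this description, not copied from the code:

import java.util.List;

import org.apache.kafka.raft.RaftClient;
import org.apache.kafka.raft.errors.NotLeaderException;

public class AtomicAppendSketch {
    // Sketch only: attempt an atomic append as the leader of `epoch`.
    static <T> void appendAsLeader(RaftClient<T> client, int epoch, List<T> records) {
        try {
            // Per the contract, this throws NotLeaderException (rather than
            // returning a sentinel such as Long.MAX_VALUE) when `epoch` no
            // longer matches the current leader epoch.
            long offset = client.scheduleAtomicAppend(epoch, records);
            System.out.println("Scheduled atomic append ending at offset " + offset);
        } catch (NotLeaderException e) {
            // Leadership was lost for `epoch`. Calling resign(epoch) here is
            // harmless: per the contract, resign() ignores the call when the
            // given epoch does not match the current epoch.
            client.resign(epoch);
        }
    }
}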

Reviewers: José Armando García Sancio <jsancio@users.noreply.github.com>, Tom Bentley <tbentley@redhat.com>, Jason Gustafson <jason@confluent.io>
2022-07-05 20:08:28 -07:00
Name        Last commit                                                                                              Last updated
bin         MINOR: Self-managed -> KRaft (Kafka Raft) (#10414)                                                       2021-03-29 15:39:10 -07:00
config      KAFKA-13073: Fix MockLog snapshot implementation (#11032)                                                2021-07-13 17:06:18 -07:00
src         KAFKA-13943; Make `LocalLogManager` implementation consistent with the `RaftClient` contract (#12224)    2022-07-05 20:08:28 -07:00
.gitignore  KAFKA-13429: ignore bin on new modules (#11415)                                                           2021-11-10 14:36:24 -06:00
README.md   KAFKA-12672: Added config for raft testing server (#10545)                                                2021-04-15 20:48:53 -07:00

README.md

KRaft (Kafka Raft)

KRaft (Kafka Raft) is a protocol based on the Raft Consensus Protocol tailored for Apache Kafka.

This protocol is used by Apache Kafka in KRaft (Kafka Raft Metadata) mode. We also provide a standalone test server that can be used for performance testing; how to set it up is described below.

Run Single Quorum

bin/test-kraft-server-start.sh --config config/kraft.properties
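
The command above reads config/kraft.properties from the repository's config directory. A minimal single-node configuration, mirroring the multi-node property files shown below, might look roughly like this (the contents are illustrative, not copied from the shipped file):

node.id=1
listeners=PLAINTEXT://localhost:9092
controller.listener.names=PLAINTEXT
controller.quorum.voters=1@localhost:9092
log.dirs=/tmp/kraft-logs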

Run Multi Node Quorum

Create 3 separate KRaft quorum properties files as follows:

cat << EOF >> config/kraft-quorum-1.properties
node.id=1
listeners=PLAINTEXT://localhost:9092
controller.listener.names=PLAINTEXT
controller.quorum.voters=1@localhost:9092,2@localhost:9093,3@localhost:9094
log.dirs=/tmp/kraft-logs-1
EOF

cat << EOF >> config/kraft-quorum-2.properties
node.id=2
listeners=PLAINTEXT://localhost:9093
controller.listener.names=PLAINTEXT
controller.quorum.voters=1@localhost:9092,2@localhost:9093,3@localhost:9094
log.dirs=/tmp/kraft-logs-2
EOF

cat << EOF >> config/kraft-quorum-3.properties
node.id=3
listeners=PLAINTEXT://localhost:9094
controller.listener.names=PLAINTEXT
controller.quorum.voters=1@localhost:9092,2@localhost:9093,3@localhost:9094
log.dirs=/tmp/kraft-logs-3
EOF

Open 3 separate terminals and run one command in each:

bin/test-kraft-server-start.sh --config config/kraft-quorum-1.properties
bin/test-kraft-server-start.sh --config config/kraft-quorum-2.properties
bin/test-kraft-server-start.sh --config config/kraft-quorum-3.properties

Once a leader is elected, it will begin writing to an internal __raft_performance_test topic with a steady workload of random data. You can control the workload using the --throughput and --record-size arguments passed to test-kraft-server-start.sh.
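
For example, a heavier workload with larger records could be driven as follows (the flag names come from the note above; the values here are purely illustrative):

bin/test-kraft-server-start.sh --config config/kraft-quorum-1.properties --throughput 5000 --record-size 256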