KAFKA-2715: Removed previous system_test folder

ewencp Nothing too complicated here

Author: Geoff Anderson <geoff@confluent.io>

Reviewers: Ewen Cheslack-Postava, Gwen Shapira

Closes #392 from granders/minor-remove-system-test
Geoff Anderson 2015-10-30 15:13:16 -07:00 committed by Gwen Shapira
parent c001b2040c
commit d50499a0e0
251 changed files with 2 additions and 26346 deletions

Vagrantfile (vendored), 2 lines changed
View File

@ -148,7 +148,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
end
# Exclude some directories that can grow very large from syncing
-override.vm.synced_folder ".", "/vagrant", type: "rsync", :rsync_excludes => ['.git', 'core/data/', 'logs/', 'system_test/', 'tests/results/', 'results/']
+override.vm.synced_folder ".", "/vagrant", type: "rsync", :rsync_excludes => ['.git', 'core/data/', 'logs/', 'tests/results/', 'results/']
end
def name_node(node, name)

View File

@ -75,8 +75,7 @@ rat {
'gradlew',
'gradlew.bat',
'**/README.md',
-'.reviewboardrc',
-'system_test/**'
+'.reviewboardrc'
])
}

View File

@ -1,83 +0,0 @@
# ==========================
# Quick Start
# ==========================
* Please note that the following command should be executed after downloading the Kafka source code to build all the required binaries:
1. <kafka install dir>/ $ ./gradlew jar
Now you are ready to follow the steps below.
1. Update "kafka_home" & "java_home" in system_test/cluster_config.json to match your environment
2. Edit system_test/replication_testsuite/testcase_1/testcase_1_properties.json and update "broker-list" to the proper settings for your environment. (If the test is run on a single localhost, no change is required.)
3. Create testcase_to_run.json file with the tests you wish to run. You can start by just copying one of our preset test suites. For example:
cp testcase_to_run_sanity.json testcase_to_run.json
4. To run the test, go to <kafka_home>/system_test and run the following command:
$ python -u -B system_test_runner.py 2>&1 | tee system_test_output.log
5. To turn on debugging, update system_test/logging.conf by changing the level in the handlers section from INFO to DEBUG.
We also have three built-in test suites you can use after you set your environment (steps 1 and 2 above); a consolidated sketch of the full flow follows this list:
* run_sanity.sh - will run a single basic replication test
* run_all_replica.sh - will run all replication tests
* run_all.sh - will run all replication and mirror_maker tests
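A consolidated sketch of the steps above for a single-machine run (everything here is taken from the steps above):
# build the required binaries first
<kafka install dir>/ $ ./gradlew jar
# do steps 1 and 2 (cluster_config.json and testcase_1_properties.json), then pick a preset suite and run it
<kafka install dir>/ $ cd system_test
$ cp testcase_to_run_sanity.json testcase_to_run.json
$ python -u -B system_test_runner.py 2>&1 | tee system_test_output.log
# for more verbose output, change INFO to DEBUG in the handlers section of logging.conf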
# ==========================
# Overview
# ==========================
"system_test" is now transformed to a system regression test framework intended for the automation of system / integration testing of data platform software such as Kafka. The test framework is implemented in Python which is a popular scripting language with well supported features.
The framework has the following levels:
1. The first level is generic and does not depend on any product specific details.
location: system_test
a. system_test_runner.py - It implements the main class RegTest as an entry point.
b. system_test_env.py - It implements the class RegTestEnv which defines the testing environment of a test session such as the base directory and environment variables specific to the local machine.
2. The second level defines a test suite, such as Kafka's replication suite (including basic testing, failure testing, etc.)
location: system_test/<suite directory name>*.
* Please note the test framework will look for a specific suffix of the directories under system_test to determine what test suites are available. The suffix of <suite directory name> can be defined in SystemTestEnv class (system_test_env.py)
a. replica_basic_test.py - This is a test module file. It implements the test logic for basic replication testing as follows:
i. start zookeepers
ii. start brokers
iii. create kafka topics
iv. lookup the brokerid as a leader
v. terminate the leader (if defined in the testcase config json file)
vi. start producer to send n messages
vii. start consumer to receive messages
viii. validate if there is data loss
b. config/ - This config directory provides templates for all properties files needed for zookeeper, brokers, producer and consumer (any changes in the files under this directory would be reflected or overwritten by the settings under testcase_<n>/testcase_<n>_properties.json)
c. testcase_<n>** - The testcase directory contains the testcase argument definition file: testcase_1_properties.json. This file defines the specific configurations for the testcase, such as the following (e.g. producer-related):
i. no. of producer threads
ii. no. of messages to produce
iii. zkconnect string
When a test case is run, the test framework copies and updates the template properties files into testcase_<n>/config. The logs of the various components are saved in testcase_<n>/logs.
** Please note the test framework will look for a specific prefix of the directories under system_test/<test suite dir>/ to determine what test cases are available. The prefix of <testcase directory name> can be defined in the SystemTestEnv class (system_test_env.py). The sketch below shows what these naming conventions look like on disk.
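As a quick illustration of the suffix/prefix discovery convention described above (the listings below are illustrative; the exact suffix and prefix strings are whatever SystemTestEnv defines):
# test suites are directories under system_test/ ending with the configured suffix
$ ls -d system_test/*testsuite
system_test/mirror_maker_testsuite  system_test/replication_testsuite
# test cases are directories inside a suite starting with the configured prefix
$ ls -d system_test/replication_testsuite/testcase_*
system_test/replication_testsuite/testcase_1 ...
# after a run, each test case directory holds the generated configs and the component logs
$ ls system_test/replication_testsuite/testcase_1
config  logs  testcase_1_properties.json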
# ==========================
# Adding Test Case
# ==========================
To create a new test suite called "broker_testsuite", please do the following:
1. Copy and paste system_test/replication_testsuite => system_test/broker_testsuite
2. Rename system_test/broker_testsuite/replica_basic_test.py => system_test/broker_testsuite/broker_basic_test.py
3. Edit system_test/broker_testsuite/broker_basic_test.py and update all ReplicaBasicTest-related class names to BrokerBasicTest (as an example)
4. Follow the flow of system_test/broker_testsuite/broker_basic_test.py and modify the necessary test logic accordingly.
To create a new test case under "replication_testsuite", please do the following (both procedures are sketched after these steps):
1. Copy and paste system_test/replication_testsuite/testcase_1 => system_test/replication_testsuite/testcase_2
2. Rename system_test/replication_testsuite/testcase_2/testcase_1_properties.json => system_test/replication_testsuite/testcase_2/testcase_2_properties.json
3. Update system_test/replication_testsuite/testcase_2/testcase_2_properties.json with the corresponding settings for testcase 2.
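A minimal sketch of both procedures above (the class rename via sed is just the example given above; review the resulting file afterwards):
# new test suite
$ cp -r system_test/replication_testsuite system_test/broker_testsuite
$ mv system_test/broker_testsuite/replica_basic_test.py system_test/broker_testsuite/broker_basic_test.py
$ sed -i 's/ReplicaBasicTest/BrokerBasicTest/g' system_test/broker_testsuite/broker_basic_test.py
# new test case in an existing suite
$ cp -r system_test/replication_testsuite/testcase_1 system_test/replication_testsuite/testcase_2
$ mv system_test/replication_testsuite/testcase_2/testcase_1_properties.json system_test/replication_testsuite/testcase_2/testcase_2_properties.json
# then edit testcase_2_properties.json with the settings for testcase 2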
Note:
The following testcases are for the old producer and the old mirror maker. We can remove them once we phase out the old producer client.
replication_testsuite: testcase_{10101 - 10110} testcase_{10131 - 10134}
mirror_maker_testsuite: testcase_{15001 - 15006}

View File

@ -1 +0,0 @@

View File

@ -1,72 +0,0 @@
** Please note that the following commands should be executed
after downloading the kafka source code to build all the
required binaries:
1. <kafka install dir>/ $ ./sbt update
2. <kafka install dir>/ $ ./sbt package
Now you are ready to follow the steps below.
This script performs broker failure tests in an environment with
Mirrored Source & Target clusters in a single machine:
1. Start a cluster of Kafka source brokers
2. Start a cluster of Kafka target brokers
3. Start one or more Mirror Maker instances to create mirroring
between source and target clusters
4. A producer produces batches of messages to the SOURCE brokers
in the background
5. The Kafka SOURCE brokers, TARGET brokers and Mirror Makers will
be terminated in a round-robin fashion, waiting for the consumer
to catch up after each bounce.
6. Repeat step 5 as many times as specified in the script
7. An independent ConsoleConsumer in publish/subscribe mode
consumes messages from the SOURCE broker cluster
8. An independent ConsoleConsumer in publish/subscribe mode
consumes messages from the TARGET broker cluster
Expected results:
==================
There should be no discrepancies when comparing the unique
message checksums from the source ConsoleConsumer and the
target ConsoleConsumer.
Notes:
==================
The number of Kafka SOURCE brokers can be increased as follows:
1. Update the value of $num_kafka_source_server in this script
2. Make sure that there is a corresponding number of prop files:
$base_dir/config/server_source{1..4}.properties
The number of Kafka TARGET brokers can be increased as follows:
1. Update the value of $num_kafka_target_server in this script
2. Make sure that there is a corresponding number of prop files (an example is sketched below):
$base_dir/config/server_target{1..3}.properties
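For example, a fifth SOURCE broker could be added roughly as follows (the property names come from the existing server_source*.properties files; the new port and log directory are arbitrary choices):
# in this script: num_kafka_source_server=5
$ cp config/server_source4.properties config/server_source5.properties
# give the new broker a unique id, port and log directory
$ sed -i 's/^broker.id=4/broker.id=5/' config/server_source5.properties
$ sed -i 's/^port=9094/port=9095/' config/server_source5.properties
$ sed -i 's/kafka-source4-logs/kafka-source5-logs/' config/server_source5.properties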
Quick Start:
==================
In the directory <kafka home>/system_test/broker_failure,
execute this script as follows:
$ bin/run-test.sh -n <num of iterations> -s <servers to bounce>
num of iterations - the number of iterations that the test runs
servers to bounce - the servers to be bounced in a round-robin fashion.
Values to be entered:
1 - source broker
2 - mirror maker
3 - target broker
Example:
* To bounce only mirror maker and target broker
in turns, enter the value 23.
* To bounce only mirror maker, enter the value 2.
* To run the test without bouncing, enter 0. (Concrete invocations are sketched below.)
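Concrete invocations for the examples above might look like this (iteration counts are arbitrary):
# bounce mirror maker and target broker in turns, 10 iterations
$ bin/run-test.sh -n 10 -s 23
# bounce only the mirror maker
$ bin/run-test.sh -n 5 -s 2
# run without bouncing anything
$ bin/run-test.sh -n 5 -s 0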
At the end of the test, the received message checksums in both
SOURCE & TARGET will be compared. If all checksums match,
the test is PASSED. Otherwise, the test is FAILED.
In the event of failure, by default the brokers and zookeepers
remain running to make it easier to debug the issue - hit Ctrl-C
to shut them down.
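The comparison itself boils down to diffing the sorted unique MessageID lists extracted from the two console consumer logs; a minimal sketch, using the log file names from run-test.sh below:
# extract and de-duplicate the MessageIDs seen by each consumer
$ grep MessageID console_consumer_source.log | sed 's/^.*MessageID://' | awk -F ':' '{print $1}' | sort -u > source_mids
$ grep MessageID console_consumer_target.log | sed 's/^.*MessageID://' | awk -F ':' '{print $1}' | sort -u > target_mids
# any output here means the source and target consumers disagree, i.e. the test FAILED
$ comm -3 source_mids target_mids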

View File

@ -1,67 +0,0 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
if [ $# -lt 1 ];
then
echo "USAGE: $0 classname [opts]"
exit 1
fi
base_dir=$(dirname $0)/..
kafka_inst_dir=${base_dir}/../..
for file in $kafka_inst_dir/project/boot/scala-2.8.0/lib/*.jar;
do
CLASSPATH=$CLASSPATH:$file
done
for file in $kafka_inst_dir/core/target/scala_2.8.0/*.jar;
do
CLASSPATH=$CLASSPATH:$file
done
for file in $kafka_inst_dir/core/lib/*.jar;
do
CLASSPATH=$CLASSPATH:$file
done
for file in $kafka_inst_dir/perf/target/scala_2.8.0/kafka*.jar;
do
CLASSPATH=$CLASSPATH:$file
done
for file in $kafka_inst_dir/core/lib_managed/scala_2.8.0/compile/*.jar;
do
if [ ${file##*/} != "sbt-launch.jar" ]; then
CLASSPATH=$CLASSPATH:$file
fi
done
if [ -z "$KAFKA_JMX_OPTS" ]; then
KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false "
fi
if [ -z "$KAFKA_OPTS" ]; then
KAFKA_OPTS="-Xmx512M -server -Dlog4j.configuration=file:$base_dir/config/log4j.properties"
fi
if [ $JMX_PORT ]; then
KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT "
fi
if [ -z "$JAVA_HOME" ]; then
JAVA="java"
else
JAVA="$JAVA_HOME/bin/java"
fi
$JAVA $KAFKA_OPTS $KAFKA_JMX_OPTS -cp $CLASSPATH $@

View File

@ -1,815 +0,0 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ===========
# run-test.sh
# ===========
# ====================================
# Do not change the following
# (keep this section at the beginning
# of this script)
# ====================================
readonly system_test_root=$(dirname $0)/../.. # path of <kafka install>/system_test
readonly common_dir=${system_test_root}/common # common util scripts for system_test
source ${common_dir}/util.sh # include the util script
readonly base_dir=$(dirname $0)/.. # the base dir of this test suite
readonly test_start_time="$(date +%s)" # time starting this test
readonly bounce_source_id=1
readonly bounce_mir_mkr_id=2
readonly bounce_target_id=3
readonly log4j_prop_file=$base_dir/config/log4j.properties
iter=1 # init a counter to keep track of iterations
num_iterations=5 # total no. of iterations to run
svr_to_bounce=0 # servers to bounce: 1-source 2-mirror_maker 3-target
# 12 - source & mirror_maker
# 13 - source & target
# ====================================
# No need to change the following
# configurations in most cases
# ====================================
readonly zk_source_port=2181 # source zk port
readonly zk_target_port=2182 # target zk port
readonly test_topic=test01 # topic used in this test
readonly consumer_grp=group1 # consumer group
readonly source_console_consumer_grp=source
readonly target_console_consumer_grp=target
readonly message_size=100
readonly console_consumer_timeout_ms=15000
readonly num_kafka_source_server=4 # requires same no. of property files such as:
# $base_dir/config/server_source{1..4}.properties
readonly num_kafka_target_server=3 # requires same no. of property files such as:
# $base_dir/config/server_target{1..3}.properties
readonly num_kafka_mirror_maker=3 # any values greater than 0
readonly wait_time_after_killing_broker=0 # wait after broker is stopped but before starting again
readonly wait_time_after_restarting_broker=10
# ====================================
# Change the following as needed
# ====================================
num_msg_per_batch=500 # no. of msg produced in each calling of ProducerPerformance
num_producer_threads=5 # no. of producer threads to send msg
producer_sleep_min=5 # min & max sleep time (in sec) between each
producer_sleep_max=5 # batch of messages sent from producer
# ====================================
# zookeeper
# ====================================
pid_zk_source=
pid_zk_target=
zk_log4j_log=
# ====================================
# kafka source
# ====================================
kafka_source_pids=
kafka_source_prop_files=
kafka_source_log_files=
kafka_topic_creation_log_file=$base_dir/kafka_topic_creation.log
kafka_log4j_log=
# ====================================
# kafka target
# ====================================
kafka_target_pids=
kafka_target_prop_files=
kafka_target_log_files=
# ====================================
# mirror maker
# ====================================
kafka_mirror_maker_pids=
kafka_mirror_maker_log_files=
consumer_prop_file=$base_dir/config/whitelisttest.consumer.properties
mirror_producer_prop_files=
# ====================================
# console consumer source
# ====================================
console_consumer_source_pid=
console_consumer_source_log=$base_dir/console_consumer_source.log
console_consumer_source_mid_log=$base_dir/console_consumer_source_mid.log
console_consumer_source_mid_sorted_log=$base_dir/console_consumer_source_mid_sorted.log
console_consumer_source_mid_sorted_uniq_log=$base_dir/console_consumer_source_mid_sorted_uniq.log
# ====================================
# console consumer target
# ====================================
console_consumer_target_pid=
console_consumer_target_log=$base_dir/console_consumer_target.log
console_consumer_target_mid_log=$base_dir/console_consumer_target_mid.log
console_consumer_target_mid_sorted_log=$base_dir/console_consumer_target_mid_sorted.log
console_consumer_target_mid_sorted_uniq_log=$base_dir/console_consumer_target_mid_sorted_uniq.log
# ====================================
# producer
# ====================================
background_producer_pid=
producer_performance_log=$base_dir/producer_performance.log
producer_performance_mid_log=$base_dir/producer_performance_mid.log
producer_performance_mid_sorted_log=$base_dir/producer_performance_mid_sorted.log
producer_performance_mid_sorted_uniq_log=$base_dir/producer_performance_mid_sorted_uniq.log
tmp_file_to_stop_background_producer=/tmp/tmp_file_to_stop_background_producer
# ====================================
# test reports
# ====================================
checksum_diff_log=$base_dir/checksum_diff.log
# ====================================
# initialize prop and log files
# ====================================
initialize() {
for ((i=1; i<=$num_kafka_target_server; i++))
do
kafka_target_prop_files[${i}]=$base_dir/config/server_target${i}.properties
kafka_target_log_files[${i}]=$base_dir/kafka_target${i}.log
kafka_mirror_maker_log_files[${i}]=$base_dir/kafka_mirror_maker${i}.log
done
for ((i=1; i<=$num_kafka_source_server; i++))
do
kafka_source_prop_files[${i}]=$base_dir/config/server_source${i}.properties
kafka_source_log_files[${i}]=$base_dir/kafka_source${i}.log
done
for ((i=1; i<=$num_kafka_mirror_maker; i++))
do
mirror_producer_prop_files[${i}]=$base_dir/config/mirror_producer${i}.properties
done
zk_log4j_log=`grep "log4j.appender.zookeeperAppender.File=" $log4j_prop_file | awk -F '=' '{print $2}'`
kafka_log4j_log=`grep "log4j.appender.kafkaAppender.File=" $log4j_prop_file | awk -F '=' '{print $2}'`
}
# =========================================
# cleanup
# =========================================
cleanup() {
info "cleaning up"
rm -rf $tmp_file_to_stop_background_producer
rm -rf $kafka_topic_creation_log_file
rm -rf /tmp/zookeeper_source
rm -rf /tmp/zookeeper_target
rm -rf /tmp/kafka-source{1..4}-logs
rm -rf /tmp/kafka-target{1..3}-logs
rm -rf $zk_log4j_log
rm -rf $kafka_log4j_log
for ((i=1; i<=$num_kafka_target_server; i++))
do
rm -rf ${kafka_target_log_files[${i}]}
rm -rf ${kafka_mirror_maker_log_files[${i}]}
done
rm -f $base_dir/zookeeper_source.log
rm -f $base_dir/zookeeper_target.log
rm -f $base_dir/kafka_source{1..4}.log
rm -f $producer_performance_log
rm -f $producer_performance_mid_log
rm -f $producer_performance_mid_sorted_log
rm -f $producer_performance_mid_sorted_uniq_log
rm -f $console_consumer_target_log
rm -f $console_consumer_source_log
rm -f $console_consumer_target_mid_log
rm -f $console_consumer_source_mid_log
rm -f $checksum_diff_log
rm -f $console_consumer_target_mid_sorted_log
rm -f $console_consumer_source_mid_sorted_log
rm -f $console_consumer_target_mid_sorted_uniq_log
rm -f $console_consumer_source_mid_sorted_uniq_log
}
# =========================================
# wait_for_zero_consumer_lags
# =========================================
wait_for_zero_consumer_lags() {
this_group_name=$1
this_zk_port=$2
# no of times to check for zero lagging
no_of_zero_to_verify=3
while [ 'x' == 'x' ]
do
TOTAL_LAG=0
CONSUMER_LAGS=`$base_dir/bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
--group $this_group_name \
--zkconnect localhost:$this_zk_port \
--topic $test_topic \
| grep "Consumer lag" | tr -d ' ' | cut -f2 -d '='`
for lag in $CONSUMER_LAGS;
do
TOTAL_LAG=$(($TOTAL_LAG + $lag))
done
info "mirror console consumer TOTAL_LAG = $TOTAL_LAG"
if [ $TOTAL_LAG -eq 0 ]; then
if [ $no_of_zero_to_verify -eq 0 ]; then
echo
return 0
fi
no_of_zero_to_verify=$(($no_of_zero_to_verify - 1))
fi
sleep 1
done
}
# =========================================
# create_topic
# =========================================
create_topic() {
this_topic_to_create=$1
this_zk_conn_str=$2
this_replica_factor=$3
info "creating topic [$this_topic_to_create] on [$this_zk_conn_str]"
$base_dir/../../bin/kafka-create-topic.sh \
--topic $this_topic_to_create \
--zookeeper $this_zk_conn_str \
--replica $this_replica_factor \
2> $kafka_topic_creation_log_file
}
# =========================================
# start_zk
# =========================================
start_zk() {
info "starting zookeepers"
$base_dir/../../bin/zookeeper-server-start.sh \
$base_dir/config/zookeeper_source.properties \
2>&1 > $base_dir/zookeeper_source.log &
pid_zk_source=$!
$base_dir/../../bin/zookeeper-server-start.sh \
$base_dir/config/zookeeper_target.properties \
2>&1 > $base_dir/zookeeper_target.log &
pid_zk_target=$!
}
# =========================================
# start_source_servers_cluster
# =========================================
start_source_servers_cluster() {
info "starting source cluster"
for ((i=1; i<=$num_kafka_source_server; i++))
do
start_source_server $i
done
}
# =========================================
# start_source_server
# =========================================
start_source_server() {
s_idx=$1
$base_dir/bin/kafka-run-class.sh kafka.Kafka \
${kafka_source_prop_files[$s_idx]} \
2>&1 >> ${kafka_source_log_files[$s_idx]} &
kafka_source_pids[${s_idx}]=$!
info " -> kafka_source_pids[$s_idx]: ${kafka_source_pids[$s_idx]}"
}
# =========================================
# start_target_servers_cluster
# =========================================
start_target_servers_cluster() {
info "starting mirror cluster"
for ((i=1; i<=$num_kafka_target_server; i++))
do
start_target_server $i
done
}
# =========================================
# start_target_server
# =========================================
start_target_server() {
s_idx=$1
$base_dir/bin/kafka-run-class.sh kafka.Kafka \
${kafka_target_prop_files[${s_idx}]} \
2>&1 >> ${kafka_target_log_files[${s_idx}]} &
kafka_target_pids[$s_idx]=$!
info " -> kafka_target_pids[$s_idx]: ${kafka_target_pids[$s_idx]}"
}
# =========================================
# start_target_mirror_maker
# =========================================
start_target_mirror_maker() {
info "starting mirror maker"
for ((i=1; i<=$num_kafka_mirror_maker; i++))
do
start_mirror_maker $i
done
}
# =========================================
# start_mirror_maker
# =========================================
start_mirror_maker() {
s_idx=$1
$base_dir/bin/kafka-run-class.sh kafka.tools.MirrorMaker \
--consumer.config $consumer_prop_file \
--producer.config ${mirror_producer_prop_files[${s_idx}]} \
--whitelist=\".*\" \
2>&1 >> ${kafka_mirror_maker_log_files[$s_idx]} &
kafka_mirror_maker_pids[${s_idx}]=$!
info " -> kafka_mirror_maker_pids[$s_idx]: ${kafka_mirror_maker_pids[$s_idx]}"
}
# =========================================
# start_console_consumer
# =========================================
start_console_consumer() {
this_consumer_grp=$1
this_consumer_zk_port=$2
this_consumer_log=$3
this_msg_formatter=$4
info "starting console consumers for $this_consumer_grp"
$base_dir/bin/kafka-run-class.sh kafka.tools.ConsoleConsumer \
--zookeeper localhost:$this_consumer_zk_port \
--topic $test_topic \
--group $this_consumer_grp \
--from-beginning \
--consumer-timeout-ms $console_consumer_timeout_ms \
--formatter "kafka.tools.ConsoleConsumer\$${this_msg_formatter}" \
2>&1 > ${this_consumer_log} &
console_consumer_pid=$!
info " -> console consumer pid: $console_consumer_pid"
}
# =========================================
# force_shutdown_background_producer
# - to be called when user press Ctrl-C
# =========================================
force_shutdown_background_producer() {
info "force shutting down producer"
`ps auxw | grep "run\-test\|ProducerPerformance" | grep -v grep | awk '{print $2}' | xargs kill -9`
}
# =========================================
# force_shutdown_consumer
# - to be called when user press Ctrl-C
# =========================================
force_shutdown_consumer() {
info "force shutting down consumer"
`ps auxw | grep ChecksumMessageFormatter | grep -v grep | awk '{print $2}' | xargs kill -9`
}
# =========================================
# shutdown_servers
# =========================================
shutdown_servers() {
info "shutting down mirror makers"
for ((i=1; i<=$num_kafka_mirror_maker; i++))
do
#info "stopping mm pid: ${kafka_mirror_maker_pids[$i]}"
if [ "x${kafka_mirror_maker_pids[$i]}" != "x" ]; then
kill_child_processes 0 ${kafka_mirror_maker_pids[$i]};
fi
done
info "shutting down target servers"
for ((i=1; i<=$num_kafka_target_server; i++))
do
if [ "x${kafka_target_pids[$i]}" != "x" ]; then
kill_child_processes 0 ${kafka_target_pids[$i]};
fi
done
info "shutting down source servers"
for ((i=1; i<=$num_kafka_source_server; i++))
do
if [ "x${kafka_source_pids[$i]}" != "x" ]; then
kill_child_processes 0 ${kafka_source_pids[$i]};
fi
done
info "shutting down zookeeper servers"
if [ "x${pid_zk_target}" != "x" ]; then kill_child_processes 0 ${pid_zk_target}; fi
if [ "x${pid_zk_source}" != "x" ]; then kill_child_processes 0 ${pid_zk_source}; fi
}
# =========================================
# start_background_producer
# =========================================
start_background_producer() {
topic=$1
batch_no=0
while [ ! -e $tmp_file_to_stop_background_producer ]
do
sleeptime=$(get_random_range $producer_sleep_min $producer_sleep_max)
info "producing $num_msg_per_batch messages on topic '$topic'"
$base_dir/bin/kafka-run-class.sh \
kafka.tools.ProducerPerformance \
--brokerinfo zk.connect=localhost:2181 \
--topics $topic \
--messages $num_msg_per_batch \
--message-size $message_size \
--threads $num_producer_threads \
--initial-message-id $batch_no \
2>&1 >> $base_dir/producer_performance.log # appending all producers' msgs
batch_no=$(($batch_no + $num_msg_per_batch))
sleep $sleeptime
done
}
# =========================================
# cmp_checksum
# =========================================
cmp_checksum() {
cmp_result=0
grep MessageID $console_consumer_source_log | sed s'/^.*MessageID://g' | awk -F ':' '{print $1}' > $console_consumer_source_mid_log
grep MessageID $console_consumer_target_log | sed s'/^.*MessageID://g' | awk -F ':' '{print $1}' > $console_consumer_target_mid_log
grep MessageID $producer_performance_log | sed s'/^.*MessageID://g' | awk -F ':' '{print $1}' > $producer_performance_mid_log
sort $console_consumer_target_mid_log > $console_consumer_target_mid_sorted_log
sort $console_consumer_source_mid_log > $console_consumer_source_mid_sorted_log
sort $producer_performance_mid_log > $producer_performance_mid_sorted_log
sort -u $console_consumer_target_mid_log > $console_consumer_target_mid_sorted_uniq_log
sort -u $console_consumer_source_mid_log > $console_consumer_source_mid_sorted_uniq_log
sort -u $producer_performance_mid_log > $producer_performance_mid_sorted_uniq_log
msg_count_from_source_consumer=`cat $console_consumer_source_mid_log | wc -l | tr -d ' '`
uniq_msg_count_from_source_consumer=`cat $console_consumer_source_mid_sorted_uniq_log | wc -l | tr -d ' '`
msg_count_from_mirror_consumer=`cat $console_consumer_target_mid_log | wc -l | tr -d ' '`
uniq_msg_count_from_mirror_consumer=`cat $console_consumer_target_mid_sorted_uniq_log | wc -l | tr -d ' '`
uniq_msg_count_from_producer=`cat $producer_performance_mid_sorted_uniq_log | wc -l | tr -d ' '`
total_msg_published=`cat $producer_performance_mid_log | wc -l | tr -d ' '`
duplicate_msg_in_producer=$(( $total_msg_published - $uniq_msg_count_from_producer ))
crc_only_in_mirror_consumer=`comm -23 $console_consumer_target_mid_sorted_uniq_log $console_consumer_source_mid_sorted_uniq_log`
crc_only_in_source_consumer=`comm -13 $console_consumer_target_mid_sorted_uniq_log $console_consumer_source_mid_sorted_uniq_log`
crc_common_in_both_consumer=`comm -12 $console_consumer_target_mid_sorted_uniq_log $console_consumer_source_mid_sorted_uniq_log`
crc_only_in_producer=`comm -23 $producer_performance_mid_sorted_uniq_log $console_consumer_source_mid_sorted_uniq_log`
duplicate_mirror_mid=`comm -23 $console_consumer_target_mid_sorted_log $console_consumer_target_mid_sorted_uniq_log`
no_of_duplicate_msg=$(( $msg_count_from_mirror_consumer - $uniq_msg_count_from_mirror_consumer \
+ $msg_count_from_source_consumer - $uniq_msg_count_from_source_consumer - \
2*$duplicate_msg_in_producer ))
source_mirror_uniq_msg_diff=$(($uniq_msg_count_from_source_consumer - $uniq_msg_count_from_mirror_consumer))
echo ""
echo "========================================================"
echo "no. of messages published : $total_msg_published"
echo "producer unique msg rec'd : $uniq_msg_count_from_producer"
echo "source consumer msg rec'd : $msg_count_from_source_consumer"
echo "source consumer unique msg rec'd : $uniq_msg_count_from_source_consumer"
echo "mirror consumer msg rec'd : $msg_count_from_mirror_consumer"
echo "mirror consumer unique msg rec'd : $uniq_msg_count_from_mirror_consumer"
echo "total source/mirror duplicate msg : $no_of_duplicate_msg"
echo "source/mirror uniq msg count diff : $source_mirror_uniq_msg_diff"
echo "========================================================"
echo "(Please refer to $checksum_diff_log for more details)"
echo ""
echo "========================================================" >> $checksum_diff_log
echo "crc only in producer" >> $checksum_diff_log
echo "========================================================" >> $checksum_diff_log
echo "${crc_only_in_producer}" >> $checksum_diff_log
echo "" >> $checksum_diff_log
echo "========================================================" >> $checksum_diff_log
echo "crc only in source consumer" >> $checksum_diff_log
echo "========================================================" >> $checksum_diff_log
echo "${crc_only_in_source_consumer}" >> $checksum_diff_log
echo "" >> $checksum_diff_log
echo "========================================================" >> $checksum_diff_log
echo "crc only in mirror consumer" >> $checksum_diff_log
echo "========================================================" >> $checksum_diff_log
echo "${crc_only_in_mirror_consumer}" >> $checksum_diff_log
echo "" >> $checksum_diff_log
echo "========================================================" >> $checksum_diff_log
echo "duplicate crc in mirror consumer" >> $checksum_diff_log
echo "========================================================" >> $checksum_diff_log
echo "${duplicate_mirror_mid}" >> $checksum_diff_log
echo "================="
if [[ $source_mirror_uniq_msg_diff -eq 0 && $uniq_msg_count_from_source_consumer -gt 0 ]]; then
echo "## Test PASSED"
else
echo "## Test FAILED"
fi
echo "================="
echo
return $cmp_result
}
# =========================================
# start_test
# =========================================
start_test() {
echo
info "==========================================================="
info "#### Starting Kafka Broker / Mirror Maker Failure Test ####"
info "==========================================================="
echo
start_zk
sleep 2
start_source_servers_cluster
sleep 2
create_topic $test_topic localhost:$zk_source_port 1
sleep 2
start_target_servers_cluster
sleep 2
start_target_mirror_maker
sleep 2
start_background_producer $test_topic &
background_producer_pid=$!
info "Started background producer pid [${background_producer_pid}]"
sleep 5
# loop for no. of iterations specified in $num_iterations
while [ $num_iterations -ge $iter ]
do
# if $svr_to_bounce is '0', it means no bouncing
if [[ $num_iterations -ge $iter && $svr_to_bounce -gt 0 ]]; then
idx=
# check which type of broker bouncing is requested: source, mirror_maker or target
# $svr_to_bounce contains $bounce_target_id - eg. '3', '123', ... etc
svr_idx=`expr index $svr_to_bounce $bounce_target_id`
if [[ $num_iterations -ge $iter && $svr_idx -gt 0 ]]; then
echo
info "=========================================="
info "Iteration $iter of ${num_iterations}"
info "=========================================="
# bounce target kafka broker
idx=$(get_random_range 1 $num_kafka_target_server)
if [ "x${kafka_target_pids[$idx]}" != "x" ]; then
echo
info "#### Bouncing Kafka TARGET Broker ####"
info "terminating kafka target[$idx] with process id ${kafka_target_pids[$idx]}"
kill_child_processes 0 ${kafka_target_pids[$idx]}
info "sleeping for ${wait_time_after_killing_broker}s"
sleep $wait_time_after_killing_broker
info "starting kafka target server"
start_target_server $idx
fi
iter=$(($iter+1))
info "sleeping for ${wait_time_after_restarting_broker}s"
sleep $wait_time_after_restarting_broker
fi
# $svr_to_bounce contains $bounce_mir_mkr_id - eg. '2', '123', ... etc
svr_idx=`expr index $svr_to_bounce $bounce_mir_mkr_id`
if [[ $num_iterations -ge $iter && $svr_idx -gt 0 ]]; then
echo
info "=========================================="
info "Iteration $iter of ${num_iterations}"
info "=========================================="
# bounce mirror maker
idx=$(get_random_range 1 $num_kafka_mirror_maker)
if [ "x${kafka_mirror_maker_pids[$idx]}" != "x" ]; then
echo
info "#### Bouncing Kafka Mirror Maker ####"
info "terminating kafka mirror maker [$idx] with process id ${kafka_mirror_maker_pids[$idx]}"
kill_child_processes 0 ${kafka_mirror_maker_pids[$idx]}
info "sleeping for ${wait_time_after_killing_broker}s"
sleep $wait_time_after_killing_broker
info "starting kafka mirror maker"
start_mirror_maker $idx
fi
iter=$(($iter+1))
info "sleeping for ${wait_time_after_restarting_broker}s"
sleep $wait_time_after_restarting_broker
fi
# $svr_to_bounce contains $bounce_source_id - eg. '1', '123', ... etc
svr_idx=`expr index $svr_to_bounce $bounce_source_id`
if [[ $num_iterations -ge $iter && $svr_idx -gt 0 ]]; then
echo
info "=========================================="
info "Iteration $iter of ${num_iterations}"
info "=========================================="
# bounce source kafka broker
idx=$(get_random_range 1 $num_kafka_source_server)
if [ "x${kafka_source_pids[$idx]}" != "x" ]; then
echo
info "#### Bouncing Kafka SOURCE Broker ####"
info "terminating kafka source[$idx] with process id ${kafka_source_pids[$idx]}"
kill_child_processes 0 ${kafka_source_pids[$idx]}
info "sleeping for ${wait_time_after_killing_broker}s"
sleep $wait_time_after_killing_broker
info "starting kafka source server"
start_source_server $idx
fi
iter=$(($iter+1))
info "sleeping for ${wait_time_after_restarting_broker}s"
sleep $wait_time_after_restarting_broker
fi
else
echo
info "=========================================="
info "Iteration $iter of ${num_iterations}"
info "=========================================="
info "No bouncing performed"
iter=$(($iter+1))
info "sleeping for ${wait_time_after_restarting_broker}s"
sleep $wait_time_after_restarting_broker
fi
done
# notify background producer to stop
`touch $tmp_file_to_stop_background_producer`
echo
info "Tests completed. Waiting for consumers to catch up "
# =======================================================
# remove the following 'sleep 30' when KAFKA-313 is fixed
# =======================================================
info "sleeping 30 sec"
sleep 30
}
# =========================================
# print_usage
# =========================================
print_usage() {
echo
echo "Error : invalid no. of arguments"
echo "Usage : $0 -n <no. of iterations> -s <servers to bounce>"
echo
echo " num of iterations - the number of iterations that the test runs"
echo
echo " servers to bounce - the servers to be bounced in a round-robin fashion"
echo " Values of the servers:"
echo " 0 - no bouncing"
echo " 1 - source broker"
echo " 2 - mirror maker"
echo " 3 - target broker"
echo " Example:"
echo " * To bounce only mirror maker and target broker"
echo " in turns, enter the value 23"
echo " * To bounce only mirror maker, enter the value 2"
echo " * To run the test without bouncing, enter 0"
echo
echo "Usage Example : $0 -n 10 -s 12"
echo " (run 10 iterations and bounce source broker (1) + mirror maker (2) in turn)"
echo
}
# =========================================
#
# Main test begins here
#
# =========================================
# get command line arguments
while getopts "hb:i:n:s:x:" opt
do
case $opt in
b)
num_msg_per_batch=$OPTARG
;;
h)
print_usage
exit
;;
i)
producer_sleep_min=$OPTARG
;;
n)
num_iterations=$OPTARG
;;
s)
svr_to_bounce=$OPTARG
;;
x)
producer_sleep_max=$OPTARG
;;
?)
print_usage
exit
;;
esac
done
# initialize and cleanup
initialize
cleanup
sleep 5
# Ctrl-c trap. Catches INT signal
trap "shutdown_servers; force_shutdown_consumer; force_shutdown_background_producer; cmp_checksum; exit 0" INT
# starting the test
start_test
# starting consumer to consume data in source
start_console_consumer $source_console_consumer_grp $zk_source_port $console_consumer_source_log DecodedMessageFormatter
# starting consumer to consume data in target
start_console_consumer $target_console_consumer_grp $zk_target_port $console_consumer_target_log DecodedMessageFormatter
# wait for zero source consumer lags
wait_for_zero_consumer_lags $source_console_consumer_grp $zk_source_port
# wait for zero target consumer lags
wait_for_zero_consumer_lags $target_console_consumer_grp $zk_target_port
# =======================================================
# remove the following 'sleep 30' when KAFKA-313 is fixed
# =======================================================
info "sleeping 30 sec"
sleep 30
shutdown_servers
cmp_checksum
result=$?
# ===============================================
# Report the time taken
# ===============================================
test_end_time="$(date +%s)"
total_test_time_sec=$(( $test_end_time - $test_start_time ))
total_test_time_min=$(( $total_test_time_sec / 60 ))
info "Total time taken: $total_test_time_min min for $num_iterations iterations"
echo
exit $result

View File

@ -1,86 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
log4j.rootLogger=INFO, stdout
# ====================================
# messages going to kafkaAppender
# ====================================
log4j.logger.kafka=DEBUG, kafkaAppender
log4j.logger.org.I0Itec.zkclient.ZkClient=INFO, kafkaAppender
log4j.logger.org.apache.zookeeper=INFO, kafkaAppender
# ====================================
# messages going to zookeeperAppender
# ====================================
# (comment out this line to redirect ZK-related messages to kafkaAppender
# to allow reading both Kafka and ZK debugging messages in a single file)
log4j.logger.org.apache.zookeeper=INFO, zookeeperAppender
# ====================================
# stdout
# ====================================
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
# ====================================
# fileAppender
# ====================================
log4j.appender.fileAppender=org.apache.log4j.FileAppender
log4j.appender.fileAppender.File=/tmp/kafka_all_request.log
log4j.appender.fileAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.fileAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
# ====================================
# kafkaAppender
# ====================================
log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaAppender.File=/tmp/kafka.log
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.additivity.kafka=true
# ====================================
# zookeeperAppender
# ====================================
log4j.appender.zookeeperAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.zookeeperAppender.File=/tmp/zookeeper.log
log4j.appender.zookeeperAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.zookeeperAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.additivity.org.apache.zookeeper=false
# ====================================
# other available debugging info
# ====================================
#log4j.logger.kafka.server.EmbeddedConsumer$MirroringThread=TRACE
#log4j.logger.kafka.server.KafkaRequestHandlers=TRACE
#log4j.logger.kafka.producer.async.AsyncProducer=TRACE
#log4j.logger.kafka.producer.async.ProducerSendThread=TRACE
#log4j.logger.kafka.producer.async.DefaultEventHandler=TRACE
log4j.logger.kafka.consumer=DEBUG
log4j.logger.kafka.tools.VerifyConsumerRebalance=DEBUG
log4j.logger.kafka.tools.ConsumerOffsetChecker=DEBUG
# to print message checksum from ProducerPerformance
log4j.logger.kafka.perf.ProducerPerformance$ProducerThread=DEBUG
# to print socket buffer size validated by Kafka broker
log4j.logger.kafka.network.Acceptor=DEBUG
# to print socket buffer size validated by SimpleConsumer
log4j.logger.kafka.consumer.SimpleConsumer=TRACE

View File

@ -1,27 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# zk connection string
# comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
zk.connect=localhost:2182
# timeout in ms for connecting to zookeeper
zk.connection.timeout.ms=1000000
producer.type=async
# to avoid dropping events if the queue is full, wait indefinitely
queue.enqueue.timeout.ms=-1

View File

@ -1,28 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# zk connection string
# comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
#broker.list=0:localhost:9081
zk.connect=localhost:2182
# timeout in ms for connecting to zookeeper
zk.connection.timeout.ms=1000000
producer.type=async
# to avoid dropping events if the queue is full, wait indefinitely
queue.enqueue.timeout.ms=-1

View File

@ -1,28 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# zk connection string
# comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
#broker.list=0:localhost:9082
zk.connect=localhost:2182
# timeout in ms for connecting to zookeeper
zk.connection.timeout.ms=1000000
producer.type=async
# to avoid dropping events if the queue is full, wait indefinitely
queue.enqueue.timeout.ms=-1

View File

@ -1,28 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# zk connection string
# comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
#broker.list=0:localhost:9083
zk.connect=localhost:2182
# timeout in ms for connecting to zookeeper
zk.connection.timeout.ms=1000000
producer.type=async
# to avoid dropping events if the queue is full, wait indefinitely
queue.enqueue.timeout.ms=-1

View File

@ -1,76 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
# the id of the broker
broker.id=1
# hostname of broker. If not set, will pick up from the value returned
# from getLocalHost. If there are multiple interfaces getLocalHost
# may not be what you want.
# host.name=
# number of logical partitions on this broker
num.partitions=1
# the port the socket server runs on
port=9091
# the number of processor threads the socket server uses. Defaults to the number of cores on the machine
num.threads=8
# the directory in which to store log files
log.dir=/tmp/kafka-source1-logs
# the send buffer used by the socket server
socket.send.buffer.bytes=1048576
# the receive buffer used by the socket server
socket.receive.buffer.bytes=1048576
# the maximum size of a log segment
log.segment.bytes=10000000
# the interval between running cleanup on the logs
log.cleanup.interval.mins=1
# the minimum age of a log file to be eligible for deletion
log.retention.hours=168
#the number of messages to accept without flushing the log to disk
log.flush.interval.messages=600
#set the following properties to use zookeeper
# enable connecting to zookeeper
enable.zookeeper=true
# zk connection string
# comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
zk.connect=localhost:2181
# timeout in ms for connecting to zookeeper
zk.connection.timeout.ms=1000000
# time based topic flush intervals in ms
#log.flush.intervals.ms.per.topic=topic:1000
# default time based flush interval in ms
log.flush.interval.ms=1000
# time based topic flusher time rate in ms
log.flush.scheduler.interval.ms=1000

View File

@ -1,76 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
# the id of the broker
broker.id=2
# hostname of broker. If not set, will pick up from the value returned
# from getLocalHost. If there are multiple interfaces getLocalHost
# may not be what you want.
# host.name=
# number of logical partitions on this broker
num.partitions=1
# the port the socket server runs on
port=9092
# the number of processor threads the socket server uses. Defaults to the number of cores on the machine
num.threads=8
# the directory in which to store log files
log.dir=/tmp/kafka-source2-logs
# the send buffer used by the socket server
socket.send.buffer.bytes=1048576
# the receive buffer used by the socket server
socket.receive.buffer.bytes=1048576
# the maximum size of a log segment
log.segment.bytes=10000000
# the interval between running cleanup on the logs
log.cleanup.interval.mins=1
# the minimum age of a log file to be eligible for deletion
log.retention.hours=168
#the number of messages to accept without flushing the log to disk
log.flush.interval.messages=600
#set the following properties to use zookeeper
# enable connecting to zookeeper
enable.zookeeper=true
# zk connection string
# comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
zk.connect=localhost:2181
# timeout in ms for connecting to zookeeper
zk.connection.timeout.ms=1000000
# time based topic flush intervals in ms
#log.flush.intervals.ms.per.topic=topic:1000
# default time based flush interval in ms
log.flush.interval.ms=1000
# time based topic flusher time rate in ms
log.flush.scheduler.interval.ms=1000

View File

@ -1,76 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
# the id of the broker
broker.id=3
# hostname of broker. If not set, will pick up from the value returned
# from getLocalHost. If there are multiple interfaces getLocalHost
# may not be what you want.
# host.name=
# number of logical partitions on this broker
num.partitions=1
# the port the socket server runs on
port=9093
# the number of processor threads the socket server uses. Defaults to the number of cores on the machine
num.threads=8
# the directory in which to store log files
log.dir=/tmp/kafka-source3-logs
# the send buffer used by the socket server
socket.send.buffer.bytes=1048576
# the receive buffer used by the socket server
socket.receive.buffer.bytes=1048576
# the maximum size of a log segment
log.segment.bytes=10000000
# the interval between running cleanup on the logs
log.cleanup.interval.mins=1
# the minimum age of a log file to be eligible for deletion
log.retention.hours=168
#the number of messages to accept without flushing the log to disk
log.flush.interval.messages=600
#set the following properties to use zookeeper
# enable connecting to zookeeper
enable.zookeeper=true
# zk connection string
# comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
zk.connect=localhost:2181
# timeout in ms for connecting to zookeeper
zk.connection.timeout.ms=1000000
# time based topic flush intervals in ms
#log.flush.intervals.ms.per.topic=topic:1000
# default time based flush interval in ms
log.flush.interval.ms=1000
# time based topic flusher time rate in ms
log.flush.scheduler.interval.ms=1000

View File

@ -1,76 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
# the id of the broker
broker.id=4
# hostname of broker. If not set, will pick up from the value returned
# from getLocalHost. If there are multiple interfaces getLocalHost
# may not be what you want.
# host.name=
# number of logical partitions on this broker
num.partitions=1
# the port the socket server runs on
port=9094
# the number of processor threads the socket server uses. Defaults to the number of cores on the machine
num.threads=8
# the directory in which to store log files
log.dir=/tmp/kafka-source4-logs
# the send buffer used by the socket server
socket.send.buffer.bytes=1048576
# the receive buffer used by the socket server
socket.receive.buffer.bytes=1048576
# the maximum size of a log segment
log.segment.bytes=10000000
# the interval between running cleanup on the logs
log.cleanup.interval.mins=1
# the minimum age of a log file to be eligible for deletion
log.retention.hours=168
#the number of messages to accept without flushing the log to disk
log.flush.interval.messages=600
#set the following properties to use zookeeper
# enable connecting to zookeeper
enable.zookeeper=true
# zk connection string
# comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
zk.connect=localhost:2181
# timeout in ms for connecting to zookeeper
zk.connection.timeout.ms=1000000
# time based topic flush intervals in ms
#log.flush.intervals.ms.per.topic=topic:1000
# default time based flush interval in ms
log.flush.interval.ms=1000
# time based topic flusher time rate in ms
log.flush.scheduler.interval.ms=1000

View File

@ -1,79 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
# the id of the broker
broker.id=1
# hostname of broker. If not set, will pick up from the value returned
# from getLocalHost. If there are multiple interfaces getLocalHost
# may not be what you want.
# host.name=
# number of logical partitions on this broker
num.partitions=1
# the port the socket server runs on
port=9081
# the number of processor threads the socket server uses. Defaults to the number of cores on the machine
num.threads=8
# the directory in which to store log files
log.dir=/tmp/kafka-target1-logs
# the send buffer used by the socket server
socket.send.buffer.bytes=1048576
# the receive buffer used by the socket server
socket.receive.buffer.bytes=1048576
# the maximum size of a log segment
log.segment.bytes=10000000
# the interval between running cleanup on the logs
log.cleanup.interval.mins=1
# the minimum age of a log file to be eligible for deletion
log.retention.hours=168
#the number of messages to accept without flushing the log to disk
log.flush.interval.messages=600
#set the following properties to use zookeeper
# enable connecting to zookeeper
enable.zookeeper=true
# zk connection string
# comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
zk.connect=localhost:2182
# timeout in ms for connecting to zookeeper
zk.connection.timeout.ms=1000000
# time based topic flush intervals in ms
#log.flush.intervals.ms.per.topic=topic:1000
# default time based flush interval in ms
log.flush.interval.ms=1000
# time based log flusher interval in ms
log.flush.scheduler.interval.ms=1000
# topic partition count map
# topic.partition.count.map=topic1:3, topic2:4

View File

@ -1,79 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
# the id of the broker
broker.id=2
# hostname of broker. If not set, will pick up from the value returned
# from getLocalHost. If there are multiple interfaces getLocalHost
# may not be what you want.
# host.name=
# number of logical partitions on this broker
num.partitions=1
# the port the socket server runs on
port=9082
# the number of processor threads the socket server uses. Defaults to the number of cores on the machine
num.threads=8
# the directory in which to store log files
log.dir=/tmp/kafka-target2-logs
# the send buffer used by the socket server
socket.send.buffer.bytes=1048576
# the receive buffer used by the socket server
socket.receive.buffer.bytes=1048576
# the maximum size of a log segment
log.segment.bytes=10000000
# the interval between running cleanup on the logs
log.cleanup.interval.mins=1
# the minimum age of a log file to be eligible for deletion
log.retention.hours=168
#the number of messages to accept without flushing the log to disk
log.flush.interval.messages=600
#set the following properties to use zookeeper
# enable connecting to zookeeper
enable.zookeeper=true
# zk connection string
# comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
zk.connect=localhost:2182
# timeout in ms for connecting to zookeeper
zk.connection.timeout.ms=1000000
# time based topic flush intervals in ms
#log.flush.intervals.ms.per.topic=topic:1000
# default time based flush interval in ms
log.flush.interval.ms=1000
# time based log flusher interval in ms
log.flush.scheduler.interval.ms=1000
# topic partition count map
# topic.partition.count.map=topic1:3, topic2:4

View File

@ -1,79 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
# the id of the broker
broker.id=3
# hostname of broker. If not set, will pick up from the value returned
# from getLocalHost. If there are multiple interfaces getLocalHost
# may not be what you want.
# host.name=
# number of logical partitions on this broker
num.partitions=1
# the port the socket server runs on
port=9083
# the number of processor threads the socket server uses. Defaults to the number of cores on the machine
num.threads=8
# the directory in which to store log files
log.dir=/tmp/kafka-target3-logs
# the send buffer used by the socket server
socket.send.buffer.bytes=1048576
# the receive buffer used by the socket server
socket.receive.buffer.bytes=1048576
# the maximum size of a log segment
log.segment.bytes=10000000
# the interval between running cleanup on the logs
log.cleanup.interval.mins=1
# the minimum age of a log file to be eligible for deletion
log.retention.hours=168
#the number of messages to accept without flushing the log to disk
log.flush.interval.messages=600
#set the following properties to use zookeeper
# enable connecting to zookeeper
enable.zookeeper=true
# zk connection string
# comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
zk.connect=localhost:2182
# timeout in ms for connecting to zookeeper
zk.connection.timeout.ms=1000000
# time based topic flush intervals in ms
#log.flush.intervals.ms.per.topic=topic:1000
# default time based flush interval in ms
log.flush.interval.ms=1000
# time based log flusher interval in ms
log.flush.scheduler.interval.ms=1000
# topic partition count map
# topic.partition.count.map=topic1:3, topic2:4

View File

@ -1,29 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.consumer.ConsumerConfig for more details
# zk connection string
# comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
zk.connect=localhost:2181
# timeout in ms for connecting to zookeeper
zk.connection.timeout.ms=1000000
#consumer group id
group.id=group1
mirror.topics.whitelist=test_1,test_2
auto.offset.reset=smallest

View File

@ -1,18 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# the directory where the snapshot is stored.
dataDir=/tmp/zookeeper_source
# the port at which the clients will connect
clientPort=2181

View File

@ -1,18 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# the directory where the snapshot is stored.
dataDir=/tmp/zookeeper_target
# the port at which the clients will connect
clientPort=2182

View File

@ -1,58 +0,0 @@
{
"cluster_config": [
{
"entity_id": "0",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name": "source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9990"
},
{
"entity_id": "1",
"hostname": "localhost",
"role": "broker",
"cluster_name": "source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9991"
},
{
"entity_id": "2",
"hostname": "localhost",
"role": "broker",
"cluster_name": "source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9992"
},
{
"entity_id": "3",
"hostname": "localhost",
"role": "broker",
"cluster_name": "source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9993"
},
{
"entity_id": "4",
"hostname": "localhost",
"role": "producer_performance",
"cluster_name": "source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9997"
},
{
"entity_id": "5",
"hostname": "localhost",
"role": "console_consumer",
"cluster_name": "source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9998"
}
]
}
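Entries like those above are what the framework's utilities (for example system_test_utils.get_data_from_list_of_dicts, used later in mirror_maker_test.py) filter on when they need every entity of a given role. A minimal sketch of that lookup in Python; the helper name and the file name "cluster_config.json" are illustrative, not part of the original scripts:

    import json

    def entity_ids_for_role(cluster_config_path, role):
        # Load a cluster_config JSON like the one above and collect the
        # entity_id of every entry whose "role" matches.
        with open(cluster_config_path) as f:
            entries = json.load(f)["cluster_config"]
        return [e["entity_id"] for e in entries if e["role"] == role]

    # For the config above, entity_ids_for_role("cluster_config.json", "broker")
    # would return ["1", "2", "3"].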

View File

@ -1,182 +0,0 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# =========================================
# info - print messages with timestamp
# =========================================
info() {
echo -e "$(date +"%Y-%m-%d %H:%M:%S") $*"
}
# =========================================
# info_no_newline - print messages with
# timestamp without newline
# =========================================
info_no_newline() {
echo -e -n "$(date +"%Y-%m-%d %H:%M:%S") $*"
}
# =========================================
# get_random_range - return a random number
# between the lower & upper bounds
# usage:
# random_no=$(get_random_range $lower $upper)
# =========================================
get_random_range() {
lo=$1
up=$2
range=$(($up - $lo + 1))
echo $(($(($RANDOM % range)) + $lo))
}
# =========================================
# kill_child_processes - terminate a
# process and its child processes
# =========================================
kill_child_processes() {
isTopmost=$1
curPid=$2
childPids=$(ps a -o pid= -o ppid= | grep "${curPid}$" | awk '{print $1;}')
for childPid in $childPids
do
kill_child_processes 0 $childPid
done
if [ $isTopmost -eq 0 ]; then
kill -15 $curPid 2> /dev/null
fi
}
# =========================================================================
# generate_kafka_properties_files -
# 1. it takes the following arguments and generate server_{1..n}.properties
# for the total no. of kafka broker as specified in "num_server"; the
# resulting properties files will be located at:
# <kafka home>/system_test/<test suite>/config
# 2. the default values in the generated properties files will be copied
# from the settings in config/server.properties while the brokerid and
# server port will be incremented accordingly
# 3. to generate properties files with non-default values such as
# "socket.send.buffer.bytes=2097152", simply add the property with new value
# to the array variable kafka_properties_to_replace as shown below
# =========================================================================
generate_kafka_properties_files() {
test_suite_full_path=$1 # eg. <kafka home>/system_test/single_host_multi_brokers
num_server=$2 # total no. of brokers in the cluster
brokerid_to_start=$3 # this should be '0' in most cases
kafka_port_to_start=$4 # if 9091 is used, the rest would be 9092, 9093, ...
this_config_dir=${test_suite_full_path}/config
# info "test suite full path : $test_suite_full_path"
# info "broker id to start : $brokerid_to_start"
# info "kafka port to start : $kafka_port_to_start"
# info "num of server : $num_server"
# info "config dir : $this_config_dir"
# =============================================
# array to keep kafka properties statements
# from the file 'server.properties' need
# to be changed from their default values
# =============================================
# kafka_properties_to_replace # DO NOT uncomment this line !!
# =============================================
# Uncomment the following kafka properties
# array element as needed to change the default
# values. Other kafka properties can be added
# in a similar fashion.
# =============================================
# kafka_properties_to_replace[1]="socket.send.buffer.bytes=2097152"
# kafka_properties_to_replace[2]="socket.receive.buffer.bytes=2097152"
# kafka_properties_to_replace[3]="num.partitions=3"
# kafka_properties_to_replace[4]="socket.request.max.bytes=10485760"
server_properties=`cat ${this_config_dir}/server.properties`
for ((i=1; i<=$num_server; i++))
do
# ======================
# update misc properties
# ======================
for ((j=1; j<=${#kafka_properties_to_replace[@]}; j++))
do
keyword_to_replace=`echo ${kafka_properties_to_replace[${j}]} | awk -F '=' '{print $1}'`
string_to_be_replaced=`echo "$server_properties" | grep $keyword_to_replace`
# info "string to be replaced : [$string_to_be_replaced]"
# info "string to replace : [${kafka_properties_to_replace[${j}]}]"
echo "${server_properties}" | \
sed -e "s/${string_to_be_replaced}/${kafka_properties_to_replace[${j}]}/g" \
>${this_config_dir}/server_${i}.properties
server_properties=`cat ${this_config_dir}/server_${i}.properties`
done
# ======================
# update brokerid
# ======================
keyword_to_replace="brokerid="
string_to_be_replaced=`echo "$server_properties" | grep $keyword_to_replace`
brokerid_idx=$(( $brokerid_to_start + $i))
string_to_replace="${keyword_to_replace}${brokerid_idx}"
# info "string to be replaced : [${string_to_be_replaced}]"
# info "string to replace : [${string_to_replace}]"
echo "${server_properties}" | \
sed -e "s/${string_to_be_replaced}/${string_to_replace}/g" \
>${this_config_dir}/server_${i}.properties
server_properties=`cat ${this_config_dir}/server_${i}.properties`
# ======================
# update kafka_port
# ======================
keyword_to_replace="port="
string_to_be_replaced=`echo "$server_properties" | grep $keyword_to_replace`
port_idx=$(( $kafka_port_to_start + $i - 1 ))
string_to_replace="${keyword_to_replace}${port_idx}"
# info "string to be replaced : [${string_to_be_replaced}]"
# info "string to replace : [${string_to_replace}]"
echo "${server_properties}" | \
sed -e "s/${string_to_be_replaced}/${string_to_replace}/g" \
>${this_config_dir}/server_${i}.properties
server_properties=`cat ${this_config_dir}/server_${i}.properties`
# ======================
# update kafka_log dir
# ======================
keyword_to_replace="log.dir="
string_to_be_replaced=`echo "$server_properties" | grep $keyword_to_replace`
string_to_be_replaced=${string_to_be_replaced//\//\\\/}
string_to_replace="${keyword_to_replace}\/tmp\/kafka_server_${i}_logs"
# info "string to be replaced : [${string_to_be_replaced}]"
# info "string to replace : [${string_to_replace}]"
echo "${server_properties}" | \
sed -e "s/${string_to_be_replaced}/${string_to_replace}/g" \
>${this_config_dir}/server_${i}.properties
server_properties=`cat ${this_config_dir}/server_${i}.properties`
done
}
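For readers more at home on the Python side of the framework, the per-broker rewrite this shell helper performs (copy server.properties, bump the broker id and port, point log.dir at a per-broker directory) can be sketched as below. The function name, output directory and argument defaults are illustrative assumptions, not part of the original scripts:

    def write_broker_properties(template_path, out_dir, num_brokers,
                                first_broker_id=0, first_port=9091):
        # Read the template once, then emit server_1.properties .. server_N.properties
        # with brokerid, port and log.dir overridden per broker, mirroring the
        # sed-based loop in the shell helper above.
        with open(template_path) as f:
            template = f.readlines()
        for i in range(1, num_brokers + 1):
            lines = []
            for line in template:
                if line.startswith("brokerid="):
                    line = "brokerid=%d\n" % (first_broker_id + i)
                elif line.startswith("port="):
                    line = "port=%d\n" % (first_port + i - 1)
                elif line.startswith("log.dir="):
                    line = "log.dir=/tmp/kafka_server_%d_logs\n" % i
                lines.append(line)
            with open("%s/server_%d.properties" % (out_dir, i), "w") as out:
                out.writelines(lines)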

View File

@ -1,56 +0,0 @@
# ==============================================
# declaration - must have a 'root' logger
# ==============================================
[loggers]
keys=root,namedLogger,anonymousLogger
[handlers]
keys=namedConsoleHandler,anonymousConsoleHandler
[formatters]
keys=namedFormatter,anonymousFormatter
# ==============================================
# loggers section
# ==============================================
[logger_root]
level=NOTSET
handlers=
[logger_namedLogger]
level=DEBUG
handlers=namedConsoleHandler
qualname=namedLogger
propagate=0
[logger_anonymousLogger]
level=DEBUG
handlers=anonymousConsoleHandler
qualname=anonymousLogger
propagate=0
# ==============================================
# handlers section
# ** Change 'level' to INFO/DEBUG in this section
# ==============================================
[handler_namedConsoleHandler]
class=StreamHandler
level=INFO
formatter=namedFormatter
args=[]
[handler_anonymousConsoleHandler]
class=StreamHandler
level=INFO
formatter=anonymousFormatter
args=[]
# ==============================================
# formatters section
# ==============================================
[formatter_namedFormatter]
format=%(asctime)s - %(levelname)s - %(message)s %(name_of_class)s
[formatter_anonymousFormatter]
format=%(asctime)s - %(levelname)s - %(message)s
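This is a standard Python logging.config file. Note that namedFormatter references a user-supplied field, %(name_of_class)s, which is why the test classes pass extra={'name_of_class': ...} when they log. A minimal sketch of how it is consumed, assuming the file above is saved as logging.conf; the class name in the example is illustrative:

    import logging
    import logging.config

    logging.config.fileConfig("logging.conf")

    logger = logging.getLogger("namedLogger")
    anonymous = logging.getLogger("anonymousLogger")

    # namedFormatter expects name_of_class, so it must be supplied via "extra"
    logger.info("validating data matched", extra={"name_of_class": "SomeTestClass"})
    anonymous.info("sleeping for 5s")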

View File

@ -1,174 +0,0 @@
{
"dashboards": [
{
"role": "broker",
"graphs": [
{
"graph_name": "Produce-Request-Rate",
"y_label": "requests-per-sec",
"bean_name": "kafka.network:type=RequestMetrics,name=Produce-RequestsPerSec",
"attributes": "OneMinuteRate"
},
{
"graph_name": "Produce-Request-Time",
"y_label": "ms,ms",
"bean_name": "kafka.network:type=RequestMetrics,name=Produce-TotalTimeMs",
"attributes": "Mean,99thPercentile"
},
{
"graph_name": "Produce-Request-Remote-Time",
"y_label": "ms,ms",
"bean_name": "kafka.network:type=RequestMetrics,name=Produce-RemoteTimeMs",
"attributes": "Mean,99thPercentile"
},
{
"graph_name": "Fetch-Consumer-Request-Rate",
"y_label": "requests-per-sec",
"bean_name": "kafka.network:type=RequestMetrics,name=Fetch-Consumer-RequestsPerSec",
"attributes": "OneMinuteRate"
},
{
"graph_name": "Fetch-Consumer-Request-Time",
"y_label": "ms,ms",
"bean_name": "kafka.network:type=RequestMetrics,name=Fetch-Consumer-TotalTimeMs",
"attributes": "Mean,99thPercentile"
},
{
"graph_name": "Fetch-Consumer-Request-Remote-Time",
"y_label": "ms,ms",
"bean_name": "kafka.network:type=RequestMetrics,name=Fetch-Consumer-RemoteTimeMs",
"attributes": "Mean,99thPercentile"
},
{
"graph_name": "Fetch-Follower-Request-Rate",
"y_label": "requests-per-sec",
"bean_name": "kafka.network:type=RequestMetrics,name=Fetch-Follower-RequestsPerSec",
"attributes": "OneMinuteRate"
},
{
"graph_name": "Fetch-Follower-Request-Time",
"y_label": "ms,ms",
"bean_name": "kafka.network:type=RequestMetrics,name=Fetch-Follower-TotalTimeMs",
"attributes": "Mean,99thPercentile"
},
{
"graph_name": "Fetch-Follower-Request-Remote-Time",
"y_label": "ms,ms",
"bean_name": "kafka.network:type=RequestMetrics,name=Fetch-Follower-RemoteTimeMs",
"attributes": "Mean,99thPercentile"
},
{
"graph_name": "ProducePurgatoryExpirationRate",
"y_label": "expirations-per-sec",
"bean_name": "kafka.server:type=DelayedProducerRequestMetrics,name=AllExpiresPerSecond",
"attributes": "OneMinuteRate"
},
{
"graph_name": "FetchConsumerPurgatoryExpirationRate",
"y_label": "expirations-per-sec",
"bean_name": "kafka.server:type=DelayedFetchRequestMetrics,name=ConsumerExpiresPerSecond",
"attributes": "OneMinuteRate"
},
{
"graph_name": "FetchFollowerPurgatoryExpirationRate",
"y_label": "expirations-per-sec",
"bean_name": "kafka.server:type=DelayedFetchRequestMetrics,name=FollowerExpiresPerSecond",
"attributes": "OneMinuteRate"
},
{
"graph_name": "ProducePurgatoryQueueSize",
"y_label": "size",
"bean_name": "kafka.server:type=ProducerRequestPurgatory,name=NumDelayedOperations",
"attributes": "Value"
},
{
"graph_name": "FetchPurgatoryQueueSize",
"y_label": "size",
"bean_name": "kafka.server:type=FetchRequestPurgatory,name=NumDelayedOperations",
"attributes": "Value"
},
{
"graph_name": "ControllerLeaderElectionRateAndTime",
"y_label": "elections-per-sec,ms,ms",
"bean_name": "kafka.controller:type=ControllerStat,name=LeaderElectionRateAndTimeMs",
"attributes": "OneMinuteRate,Mean,99thPercentile"
},
{
"graph_name": "LogFlushRateAndTime",
"y_label": "flushes-per-sec,ms,ms",
"bean_name": "kafka.log:type=LogFlushStats,name=LogFlushRateAndTimeMs",
"attributes": "OneMinuteRate,Mean,99thPercentile"
},
{
"graph_name": "AllBytesOutRate",
"y_label": "bytes-per-sec",
"bean_name": "kafka.server:type=BrokerTopicMetrics,name=AllTopicsBytesOutPerSec",
"attributes": "OneMinuteRate"
},
{
"graph_name": "AllBytesInRate",
"y_label": "bytes-per-sec",
"bean_name": "kafka.server:type=BrokerTopicMetrics,name=AllTopicsBytesInPerSec",
"attributes": "OneMinuteRate"
},
{
"graph_name": "AllMessagesInRate",
"y_label": "messages-per-sec",
"bean_name": "kafka.server:type=BrokerTopicMetrics,name=AllTopicsMessagesInPerSec",
"attributes": "OneMinuteRate"
}
]
},
{
"role": "producer_performance",
"graphs": [
{
"graph_name": "ProduceRequestRateAndTime",
"y_label": "requests-per-sec,ms,ms",
"bean_name": "kafka.producer:type=ProducerRequestStat,name=ProduceRequestRateAndTimeMs",
"attributes": "OneMinuteRate,Mean,99thPercentile"
},
{
"graph_name": "ProduceRequestSize",
"y_label": "bytes,bytes",
"bean_name": "kafka.producer:type=ProducerRequestStat,name=ProducerRequestSize",
"attributes": "Mean,99thPercentile"
}
]
},
{
"role": "console_consumer",
"graphs": [
{
"graph_name": "FetchRequestRateAndTime",
"y_label": "requests-per-sec,ms,ms",
"bean_name": "kafka.consumer:type=FetchRequestAndResponseStat,name=FetchRequestRateAndTimeMs",
"attributes": "OneMinuteRate,Mean,99thPercentile"
},
{
"graph_name": "FetchResponseSize",
"y_label": "bytes,bytes",
"bean_name": "kafka.consumer:type=FetchRequestAndResponseStat,name=FetchResponseSize",
"attributes": "Mean,99thPercentile"
},
{
"graph_name": "ConsumedMessageRate",
"y_label": "messages-per-sec",
"bean_name": "kafka.consumer:type=ConsumerTopicStat,name=AllTopicsMessagesPerSec",
"attributes": "OneMinuteRate"
}
]
},
{
"role": "zookeeper",
"graphs": [
{
"graph_name": "ZookeeperServerStats",
"y_label": "zookeeper-latency-ms",
"bean_name": "org.apache.ZooKeeperService:name0=StandaloneServer_port-1",
"attributes": "AvgRequestLatency"
}
]
}
]
}
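Each dashboard entry above pairs a JMX bean with the attributes to sample and the labels to use when graphing; the actual polling and plotting is done by the framework's metrics module. A small sketch of walking this file in Python, read-only; the file name "metrics.json" is an assumption:

    import json

    with open("metrics.json") as f:
        dashboards = json.load(f)["dashboards"]

    for dashboard in dashboards:
        print("role:", dashboard["role"])
        for graph in dashboard["graphs"]:
            # One line per graph: which MBean is sampled and which attributes are plotted
            print("  %-45s %s -> %s" % (graph["graph_name"],
                                        graph["bean_name"],
                                        graph["attributes"]))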

View File

@ -1,136 +0,0 @@
{
"cluster_config": [
{
"entity_id": "0",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9100"
},
{
"entity_id": "1",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9101"
},
{
"entity_id": "2",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9102"
},
{
"entity_id": "3",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9103"
},
{
"entity_id": "4",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9104"
},
{
"entity_id": "5",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9105"
},
{
"entity_id": "6",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9106"
},
{
"entity_id": "7",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9107"
},
{
"entity_id": "8",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9108"
},
{
"entity_id": "9",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9109"
},
{
"entity_id": "10",
"hostname": "localhost",
"role": "producer_performance",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9110"
},
{
"entity_id": "11",
"hostname": "localhost",
"role": "console_consumer",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9111"
},
{
"entity_id": "12",
"hostname": "localhost",
"role": "mirror_maker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9112"
},
{
"entity_id": "13",
"hostname": "localhost",
"role": "console_consumer",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9113"
}
]
}

View File

@ -1,12 +0,0 @@
zookeeper.connect=localhost:2108
zookeeper.connection.timeout.ms=1000000
group.id=mm_regtest_grp
auto.commit.interval.ms=120000
auto.offset.reset=smallest
#fetch.message.max.bytes=1048576
#rebalance.max.retries=4
#rebalance.backoff.ms=2000
socket.receive.buffer.bytes=1048576
fetch.message.max.bytes=1048576
zookeeper.sync.time.ms=15000
shallow.iterator.enable=false

View File

@ -1,12 +0,0 @@
# old producer
metadata.broker.list=localhost:9094
compression.codec=0
request.retries=3
request.required.acks=1
# new producer
block.on.buffer.full=true
bootstrap.servers=localhost:9094
compression.type=none
retries=3
acks=1
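The two halves of this file express the same intent in the old and new producer vocabularies. A hedged mapping sketch of the key correspondence, with the dict contents taken from the file above; note that values are not converted (compression.codec=0 vs compression.type=none, for example), only key names:

    # Rough correspondence between "old producer" and "new producer" keys
    # as used in the mirror producer config above.
    OLD_TO_NEW = {
        "metadata.broker.list": "bootstrap.servers",
        "compression.codec": "compression.type",
        "request.retries": "retries",
        "request.required.acks": "acks",
    }

    def translate_keys(old_config):
        # Rename the keys we know how to map; unknown keys pass through unchanged.
        return {OLD_TO_NEW.get(k, k): v for k, v in old_config.items()}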

View File

@ -1,139 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
# Hostname the broker will advertise to consumers. If not set, kafka will use the value returned
# from InetAddress.getLocalHost(). If there are multiple interfaces getLocalHost
# may not be what you want.
#host.name=
############################# Socket Server Settings #############################
# The port the socket server listens on
port=9091
# The number of threads handling network requests
num.network.threads=2
# The number of threads doing disk I/O
num.io.threads=2
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=1048576
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=1048576
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# The directory under which to store log files
log.dir=/tmp/kafka_server_logs
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=5
# Overrides for the default given by num.partitions on a per-topic basis
#topic.partition.count.map=topic1:3, topic2:4
############################# Log Flush Policy #############################
# The following configurations control the flush of data to disk. This is the most
# important performance knob in kafka.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data is at greater risk of loss in the event of a crash.
# 2. Latency: Data is not made available to consumers until it is flushed (which adds latency).
# 3. Throughput: The flush is generally the most expensive operation.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
log.flush.interval.ms=1000
# Per-topic overrides for log.flush.interval.ms
#log.flush.intervals.ms.per.topic=topic1:1000, topic2:3000
# The interval (in ms) at which logs are checked to see if they need to be flushed to disk.
log.flush.scheduler.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria is met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824
log.retention.bytes=-1
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
#log.segment.size=536870912
log.segment.bytes=102400
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.cleanup.interval.mins=1
############################# Zookeeper #############################
# Enable connecting to zookeeper
enable.zookeeper=true
# Zk connection string (see zk docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=1000000
monitoring.period.secs=1
message.max.bytes=1000000
queued.max.requests=500
log.roll.hours=168
log.index.size.max.bytes=10485760
log.index.interval.bytes=4096
auto.create.topics.enable=true
controller.socket.timeout.ms=30000
default.replication.factor=1
replica.lag.time.max.ms=10000
replica.lag.max.messages=4000
replica.socket.timeout.ms=30000
replica.socket.receive.buffer.bytes=65536
replica.fetch.max.bytes=1048576
replica.fetch.wait.max.ms=500
replica.fetch.min.bytes=4096
num.replica.fetchers=1

View File

@ -1,23 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# the directory where the snapshot is stored.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections since this is a non-production config
maxClientCnxns=0
syncLimit=5
initLimit=10
tickTime=2000

View File

@ -1,324 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#!/usr/bin/env python
# ===================================
# mirror_maker_test.py
# ===================================
import inspect
import logging
import os
import signal
import subprocess
import sys
import time
import traceback
from system_test_env import SystemTestEnv
sys.path.append(SystemTestEnv.SYSTEM_TEST_UTIL_DIR)
from setup_utils import SetupUtils
from replication_utils import ReplicationUtils
import system_test_utils
from testcase_env import TestcaseEnv
# product specific: Kafka
import kafka_system_test_utils
import metrics
class MirrorMakerTest(ReplicationUtils, SetupUtils):
testModuleAbsPathName = os.path.realpath(__file__)
testSuiteAbsPathName = os.path.abspath(os.path.dirname(testModuleAbsPathName))
def __init__(self, systemTestEnv):
# SystemTestEnv - provides cluster level environment settings
# such as entity_id, hostname, kafka_home, java_home which
# are available in a list of dictionary named
# "clusterEntityConfigDictList"
self.systemTestEnv = systemTestEnv
super(MirrorMakerTest, self).__init__(self)
# dict to pass user-defined attributes to logger argument: "extra"
self.d = {'name_of_class': self.__class__.__name__}
def signal_handler(self, signal, frame):
self.log_message("Interrupt detected - User pressed Ctrl+c")
# perform the necessary cleanup here when user presses Ctrl+c and it may be product specific
self.log_message("stopping all entities - please wait ...")
kafka_system_test_utils.stop_all_remote_running_processes(self.systemTestEnv, self.testcaseEnv)
sys.exit(1)
def runTest(self):
# ======================================================================
# get all testcase directories under this testsuite
# ======================================================================
testCasePathNameList = system_test_utils.get_dir_paths_with_prefix(
self.testSuiteAbsPathName, SystemTestEnv.SYSTEM_TEST_CASE_PREFIX)
testCasePathNameList.sort()
replicationUtils = ReplicationUtils(self)
# =============================================================
# launch each testcase one by one: testcase_1, testcase_2, ...
# =============================================================
for testCasePathName in testCasePathNameList:
skipThisTestCase = False
try:
# ======================================================================
# A new instance of TestcaseEnv to keep track of this testcase's env vars
# and initialize some env vars as testCasePathName is available now
# ======================================================================
self.testcaseEnv = TestcaseEnv(self.systemTestEnv, self)
self.testcaseEnv.testSuiteBaseDir = self.testSuiteAbsPathName
self.testcaseEnv.initWithKnownTestCasePathName(testCasePathName)
self.testcaseEnv.testcaseArgumentsDict = self.testcaseEnv.testcaseNonEntityDataDict["testcase_args"]
# ======================================================================
# SKIP if this case is IN testcase_to_skip.json or NOT IN testcase_to_run.json
# ======================================================================
testcaseDirName = self.testcaseEnv.testcaseResultsDict["_test_case_name"]
if self.systemTestEnv.printTestDescriptionsOnly:
self.testcaseEnv.printTestCaseDescription(testcaseDirName)
continue
elif self.systemTestEnv.isTestCaseToSkip(self.__class__.__name__, testcaseDirName):
self.log_message("Skipping : " + testcaseDirName)
skipThisTestCase = True
continue
else:
self.testcaseEnv.printTestCaseDescription(testcaseDirName)
system_test_utils.setup_remote_hosts_with_testcase_level_cluster_config(self.systemTestEnv, testCasePathName)
# ============================================================================== #
# ============================================================================== #
# Product Specific Testing Code Starts Here: #
# ============================================================================== #
# ============================================================================== #
# initialize self.testcaseEnv with user-defined environment variables (product specific)
self.testcaseEnv.userDefinedEnvVarDict["zkConnectStr"] = ""
self.testcaseEnv.userDefinedEnvVarDict["stopBackgroundProducer"] = False
self.testcaseEnv.userDefinedEnvVarDict["backgroundProducerStopped"] = False
# initialize signal handler
signal.signal(signal.SIGINT, self.signal_handler)
# TestcaseEnv.testcaseConfigsList initialized by reading testcase properties file:
# system_test/<suite_name>_testsuite/testcase_<n>/testcase_<n>_properties.json
self.testcaseEnv.testcaseConfigsList = system_test_utils.get_json_list_data(
self.testcaseEnv.testcasePropJsonPathName)
# clean up data directories specified in zookeeper.properties and kafka_server_<n>.properties
kafka_system_test_utils.cleanup_data_at_remote_hosts(self.systemTestEnv, self.testcaseEnv)
# create "LOCAL" log directories for metrics, dashboards for each entity under this testcase
# for collecting logs from remote machines
kafka_system_test_utils.generate_testcase_log_dirs(self.systemTestEnv, self.testcaseEnv)
# TestcaseEnv - initialize producer & consumer config / log file pathnames
kafka_system_test_utils.init_entity_props(self.systemTestEnv, self.testcaseEnv)
# generate remote hosts log/config dirs if not exist
kafka_system_test_utils.generate_testcase_log_dirs_in_remote_hosts(self.systemTestEnv, self.testcaseEnv)
# generate properties files for zookeeper, kafka, producer, consumer and mirror-maker:
# 1. copy system_test/<suite_name>_testsuite/config/*.properties to
# system_test/<suite_name>_testsuite/testcase_<n>/config/
# 2. update all properties files in system_test/<suite_name>_testsuite/testcase_<n>/config
# by overriding the settings specified in:
# system_test/<suite_name>_testsuite/testcase_<n>/testcase_<n>_properties.json
kafka_system_test_utils.generate_overriden_props_files(self.testSuiteAbsPathName,
self.testcaseEnv, self.systemTestEnv)
# =============================================
# preparing all entities to start the test
# =============================================
self.log_message("starting zookeepers")
kafka_system_test_utils.start_zookeepers(self.systemTestEnv, self.testcaseEnv)
self.anonLogger.info("sleeping for 2s")
time.sleep(2)
self.log_message("starting brokers")
kafka_system_test_utils.start_brokers(self.systemTestEnv, self.testcaseEnv)
self.anonLogger.info("sleeping for 5s")
time.sleep(5)
self.log_message("creating topics")
kafka_system_test_utils.create_topic_for_producer_performance(self.systemTestEnv, self.testcaseEnv)
self.anonLogger.info("sleeping for 5s")
time.sleep(5)
self.log_message("starting mirror makers")
kafka_system_test_utils.start_mirror_makers(self.systemTestEnv, self.testcaseEnv)
self.anonLogger.info("sleeping for 10s")
time.sleep(10)
# =============================================
# starting producer
# =============================================
self.log_message("starting producer in the background")
kafka_system_test_utils.start_producer_performance(self.systemTestEnv, self.testcaseEnv, False)
msgProducingFreeTimeSec = self.testcaseEnv.testcaseArgumentsDict["message_producing_free_time_sec"]
self.anonLogger.info("sleeping for " + msgProducingFreeTimeSec + " sec to produce some messages")
time.sleep(int(msgProducingFreeTimeSec))
# =============================================
# A while-loop to bounce mirror maker as specified
# by "num_iterations" in testcase_n_properties.json
# =============================================
i = 1
numIterations = int(self.testcaseEnv.testcaseArgumentsDict["num_iteration"])
bouncedEntityDownTimeSec = 15
try:
bouncedEntityDownTimeSec = int(self.testcaseEnv.testcaseArgumentsDict["bounced_entity_downtime_sec"])
except:
pass
while i <= numIterations:
self.log_message("Iteration " + str(i) + " of " + str(numIterations))
# =============================================
# Bounce Mirror Maker if specified in testcase config
# =============================================
bounceMirrorMaker = self.testcaseEnv.testcaseArgumentsDict["bounce_mirror_maker"]
self.log_message("bounce_mirror_maker flag : " + bounceMirrorMaker)
if (bounceMirrorMaker.lower() == "true"):
clusterConfigList = self.systemTestEnv.clusterEntityConfigDictList
mirrorMakerEntityIdList = system_test_utils.get_data_from_list_of_dicts(
clusterConfigList, "role", "mirror_maker", "entity_id")
stoppedMirrorMakerEntityId = mirrorMakerEntityIdList[0]
mirrorMakerPPid = self.testcaseEnv.entityMirrorMakerParentPidDict[stoppedMirrorMakerEntityId]
self.log_message("stopping mirror maker : " + mirrorMakerPPid)
kafka_system_test_utils.stop_remote_entity(self.systemTestEnv, stoppedMirrorMakerEntityId, mirrorMakerPPid)
self.anonLogger.info("sleeping for " + str(bouncedEntityDownTimeSec) + " sec")
time.sleep(bouncedEntityDownTimeSec)
# starting previously terminated broker
self.log_message("starting the previously terminated mirror maker")
kafka_system_test_utils.start_mirror_makers(self.systemTestEnv, self.testcaseEnv, stoppedMirrorMakerEntityId)
self.anonLogger.info("sleeping for 15s")
time.sleep(15)
i += 1
# while loop
# =============================================
# tell producer to stop
# =============================================
self.testcaseEnv.lock.acquire()
self.testcaseEnv.userDefinedEnvVarDict["stopBackgroundProducer"] = True
time.sleep(1)
self.testcaseEnv.lock.release()
time.sleep(1)
# =============================================
# wait for producer thread's update of
# "backgroundProducerStopped" to be "True"
# =============================================
while 1:
self.testcaseEnv.lock.acquire()
self.logger.info("status of backgroundProducerStopped : [" + \
str(self.testcaseEnv.userDefinedEnvVarDict["backgroundProducerStopped"]) + "]", extra=self.d)
if self.testcaseEnv.userDefinedEnvVarDict["backgroundProducerStopped"]:
time.sleep(1)
self.testcaseEnv.lock.release()
self.logger.info("all producer threads completed", extra=self.d)
break
time.sleep(1)
self.testcaseEnv.lock.release()
time.sleep(2)
self.anonLogger.info("sleeping for 15s")
time.sleep(15)
self.anonLogger.info("terminate Mirror Maker")
cmdStr = "ps auxw | grep Mirror | grep -v grep | tr -s ' ' | cut -f2 -d ' ' | xargs kill -15"
subproc = system_test_utils.sys_call_return_subproc(cmdStr)
for line in subproc.stdout.readlines():
line = line.rstrip('\n')
self.anonLogger.info("#### ["+line+"]")
self.anonLogger.info("sleeping for 15s")
time.sleep(15)
# =============================================
# starting consumer
# =============================================
self.log_message("starting consumer in the background")
kafka_system_test_utils.start_console_consumer(self.systemTestEnv, self.testcaseEnv)
self.anonLogger.info("sleeping for 10s")
time.sleep(10)
# =============================================
# this testcase is completed - stop all entities
# =============================================
self.log_message("stopping all entities")
for entityId, parentPid in self.testcaseEnv.entityBrokerParentPidDict.items():
kafka_system_test_utils.stop_remote_entity(self.systemTestEnv, entityId, parentPid)
for entityId, parentPid in self.testcaseEnv.entityZkParentPidDict.items():
kafka_system_test_utils.stop_remote_entity(self.systemTestEnv, entityId, parentPid)
# make sure all entities are stopped
kafka_system_test_utils.ps_grep_terminate_running_entity(self.systemTestEnv)
# =============================================
# collect logs from remote hosts
# =============================================
kafka_system_test_utils.collect_logs_from_remote_hosts(self.systemTestEnv, self.testcaseEnv)
# =============================================
# validate the data matched and checksum
# =============================================
self.log_message("validating data matched")
kafka_system_test_utils.validate_data_matched(self.systemTestEnv, self.testcaseEnv, replicationUtils)
kafka_system_test_utils.validate_broker_log_segment_checksum(self.systemTestEnv, self.testcaseEnv, "source")
kafka_system_test_utils.validate_broker_log_segment_checksum(self.systemTestEnv, self.testcaseEnv, "target")
# =============================================
# draw graphs
# =============================================
metrics.draw_all_graphs(self.systemTestEnv.METRICS_PATHNAME,
self.testcaseEnv,
self.systemTestEnv.clusterEntityConfigDictList)
# build dashboard, one for each role
metrics.build_all_dashboards(self.systemTestEnv.METRICS_PATHNAME,
self.testcaseEnv.testCaseDashboardsDir,
self.systemTestEnv.clusterEntityConfigDictList)
except Exception as e:
self.log_message("Exception while running test {0}".format(e))
traceback.print_exc()
self.testcaseEnv.validationStatusDict["Test completed"] = "FAILED"
finally:
if not skipThisTestCase and not self.systemTestEnv.printTestDescriptionsOnly:
self.log_message("stopping all entities - please wait ...")
kafka_system_test_utils.stop_all_remote_running_processes(self.systemTestEnv, self.testcaseEnv)
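The producer shutdown in runTest is a small two-flag handshake guarded by a lock: the test sets stopBackgroundProducer, then spins until the producer thread acknowledges by flipping backgroundProducerStopped. Reduced to a standalone sketch using only threading primitives; the flag names follow the dict keys above, and the producer body is a stand-in:

    import threading
    import time

    lock = threading.Lock()
    flags = {"stopBackgroundProducer": False, "backgroundProducerStopped": False}

    def background_producer():
        while True:
            with lock:
                if flags["stopBackgroundProducer"]:
                    flags["backgroundProducerStopped"] = True  # acknowledge the stop request
                    return
            time.sleep(1)  # stand-in for one produce iteration

    t = threading.Thread(target=background_producer)
    t.start()

    with lock:
        flags["stopBackgroundProducer"] = True      # ask the producer to stop

    while True:
        with lock:
            if flags["backgroundProducerStopped"]:  # wait for the acknowledgement
                break
        time.sleep(1)
    t.join()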

View File

@ -1,158 +0,0 @@
{
"description": {"01":"To Test : 'Replication with Mirror Maker'",
"02":"Set up 2 clusters such as : SOURCE => MirrorMaker => TARGET",
"03":"Set up 2-node Zk cluster for both SOURCE & TARGET",
"04":"Produce and consume messages to a single topic - single partition.",
"05":"This test sends messages to 3 replicas",
"06":"At the end it verifies the log size and contents",
"07":"Use a consumer to verify no message loss in TARGET cluster.",
"08":"Producer dimensions : mode:sync, acks:-1, comp:0",
"09":"Log segment size : 10240"
},
"testcase_args": {
"bounce_leader": "false",
"bounce_mirror_maker": "false",
"replica_factor": "3",
"num_partition": "1",
"num_iteration": "1",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15",
"num_messages_to_produce_per_producer_call": "50"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2108",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_0.log",
"config_filename": "zookeeper_0.properties"
},
{
"entity_id": "1",
"clientPort": "2118",
"dataDir": "/tmp/zookeeper_1",
"log_filename": "zookeeper_1.log",
"config_filename": "zookeeper_1.properties"
},
{
"entity_id": "2",
"clientPort": "2128",
"dataDir": "/tmp/zookeeper_2",
"log_filename": "zookeeper_2.log",
"config_filename": "zookeeper_2.properties"
},
{
"entity_id": "3",
"clientPort": "2138",
"dataDir": "/tmp/zookeeper_3",
"log_filename": "zookeeper_3.log",
"config_filename": "zookeeper_3.properties"
},
{
"entity_id": "4",
"port": "9091",
"broker.id": "1",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_4_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_4.log",
"config_filename": "kafka_server_4.properties"
},
{
"entity_id": "5",
"port": "9092",
"broker.id": "2",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_5_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_5.log",
"config_filename": "kafka_server_5.properties"
},
{
"entity_id": "6",
"port": "9093",
"broker.id": "3",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_6_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_6.log",
"config_filename": "kafka_server_6.properties"
},
{
"entity_id": "7",
"port": "9094",
"broker.id": "4",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_7_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_7.log",
"config_filename": "kafka_server_7.properties"
},
{
"entity_id": "8",
"port": "9095",
"broker.id": "5",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_8_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_8.log",
"config_filename": "kafka_server_8.properties"
},
{
"entity_id": "9",
"port": "9096",
"broker.id": "6",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_9_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_9.log",
"config_filename": "kafka_server_9.properties"
},
{
"entity_id": "10",
"topic": "test_1",
"threads": "5",
"compression-codec": "0",
"message-size": "500",
"message": "500",
"request-num-acks": "-1",
"sync":"true",
"producer-num-retries":"5",
"log_filename": "producer_performance_10.log",
"config_filename": "producer_performance_10.properties"
},
{
"entity_id": "11",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"log_filename": "console_consumer_11.log",
"config_filename": "console_consumer_11.properties"
},
{
"entity_id": "12",
"log_filename": "mirror_maker_12.log",
"mirror_consumer_config_filename": "mirror_consumer_12.properties",
"mirror_producer_config_filename": "mirror_producer_12.properties"
},
{
"entity_id": "13",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"log_filename": "console_consumer_13.log",
"config_filename": "console_consumer_13.properties"
}
]
}

View File

@ -1,158 +0,0 @@
{
"description": {"01":"Replication with Mirror Maker => Bounce Mirror Maker",
"02":"Set up 2 clusters such as : SOURCE => MirrorMaker => TARGET",
"03":"Set up 2-node Zk cluster for both SOURCE & TARGET",
"04":"Produce and consume messages to a single topic - single partition.",
"05":"This test sends messages to 3 replicas",
"06":"At the end it verifies the log size and contents",
"07":"Use a consumer to verify no message loss in TARGET cluster.",
"08":"Producer dimensions : mode:sync, acks:-1, comp:0",
"09":"Log segment size : 20480"
},
"testcase_args": {
"bounce_leader": "false",
"bounce_mirror_maker": "true",
"replica_factor": "3",
"num_partition": "1",
"num_iteration": "1",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15",
"num_messages_to_produce_per_producer_call": "50"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2108",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_0.log",
"config_filename": "zookeeper_0.properties"
},
{
"entity_id": "1",
"clientPort": "2118",
"dataDir": "/tmp/zookeeper_1",
"log_filename": "zookeeper_1.log",
"config_filename": "zookeeper_1.properties"
},
{
"entity_id": "2",
"clientPort": "2128",
"dataDir": "/tmp/zookeeper_2",
"log_filename": "zookeeper_2.log",
"config_filename": "zookeeper_2.properties"
},
{
"entity_id": "3",
"clientPort": "2138",
"dataDir": "/tmp/zookeeper_3",
"log_filename": "zookeeper_3.log",
"config_filename": "zookeeper_3.properties"
},
{
"entity_id": "4",
"port": "9091",
"broker.id": "1",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_4_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_4.log",
"config_filename": "kafka_server_4.properties"
},
{
"entity_id": "5",
"port": "9092",
"broker.id": "2",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_5_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_5.log",
"config_filename": "kafka_server_5.properties"
},
{
"entity_id": "6",
"port": "9093",
"broker.id": "3",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_6_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_6.log",
"config_filename": "kafka_server_6.properties"
},
{
"entity_id": "7",
"port": "9094",
"broker.id": "4",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_7_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_7.log",
"config_filename": "kafka_server_7.properties"
},
{
"entity_id": "8",
"port": "9095",
"broker.id": "5",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_8_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_8.log",
"config_filename": "kafka_server_8.properties"
},
{
"entity_id": "9",
"port": "9096",
"broker.id": "6",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_9_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_9.log",
"config_filename": "kafka_server_9.properties"
},
{
"entity_id": "10",
"topic": "test_1",
"threads": "5",
"compression-codec": "0",
"message-size": "500",
"message": "100",
"request-num-acks": "-1",
"sync":"true",
"producer-num-retries":"5",
"log_filename": "producer_performance_10.log",
"config_filename": "producer_performance_10.properties"
},
{
"entity_id": "11",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"log_filename": "console_consumer_11.log",
"config_filename": "console_consumer_11.properties"
},
{
"entity_id": "12",
"log_filename": "mirror_maker_12.log",
"mirror_consumer_config_filename": "mirror_consumer_12.properties",
"mirror_producer_config_filename": "mirror_producer_12.properties"
},
{
"entity_id": "13",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"log_filename": "console_consumer_13.log",
"config_filename": "console_consumer_13.properties"
}
]
}

View File

@ -1,135 +0,0 @@
{
"cluster_config": [
{
"entity_id": "0",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9100"
},
{
"entity_id": "1",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9101"
},
{
"entity_id": "2",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9102"
},
{
"entity_id": "3",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9103"
},
{
"entity_id": "4",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9104"
},
{
"entity_id": "5",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9105"
},
{
"entity_id": "6",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9106"
},
{
"entity_id": "7",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9107"
},
{
"entity_id": "8",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9108"
},
{
"entity_id": "9",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9109"
},
{
"entity_id": "10",
"hostname": "localhost",
"role": "producer_performance",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9110"
},
{
"entity_id": "11",
"hostname": "localhost",
"role": "console_consumer",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9111"
},
{
"entity_id": "12",
"hostname": "localhost",
"role": "mirror_maker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9112"
},
{
"entity_id": "13",
"hostname": "localhost",
"role": "mirror_maker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9113"
}
]
}
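The cluster_config block above assigns each entity_id a hostname, a role (zookeeper, broker, producer_performance, console_consumer or mirror_maker) and a JMX port. A minimal sketch of loading such a file and grouping entities by role, assuming only the field names visible above (the file name and loader are placeholders, not the framework's actual code):

import json
from collections import defaultdict

def load_cluster_config(path="cluster_config.json"):
    # Parse the cluster definition and bucket entities by role.
    with open(path) as f:
        config = json.load(f)
    by_role = defaultdict(list)
    for entity in config["cluster_config"]:
        by_role[entity["role"]].append(entity)
    return by_role

if __name__ == "__main__":
    for role, entities in sorted(load_cluster_config().items()):
        ids = ", ".join(e["entity_id"] for e in entities)
        print("%s -> entity ids: %s" % (role, ids))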


@ -1,156 +0,0 @@
{
"description": {"01":"Replication with Mirror Maker => Bounce Mirror Maker",
"02":"Set up 2 clusters such as : SOURCE => MirrorMaker => TARGET",
"03":"Set up 2-node Zk cluster for both SOURCE & TARGET",
"04":"Produce and consume messages to a single topic - single partition.",
"05":"This test sends messages to 3 replicas",
"06":"At the end it verifies the log size and contents",
"07":"Use a consumer to verify no message loss in TARGET cluster.",
"08":"Producer dimensions : mode:async, acks:-1, comp:1",
"09":"Log segment size : 20480"
},
"testcase_args": {
"bounce_leader": "false",
"bounce_mirror_maker": "true",
"bounced_entity_downtime_sec": "30",
"replica_factor": "3",
"num_partition": "1",
"num_iteration": "1",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15",
"num_messages_to_produce_per_producer_call": "50"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2108",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_0.log",
"config_filename": "zookeeper_0.properties"
},
{
"entity_id": "1",
"clientPort": "2118",
"dataDir": "/tmp/zookeeper_1",
"log_filename": "zookeeper_1.log",
"config_filename": "zookeeper_1.properties"
},
{
"entity_id": "2",
"clientPort": "2128",
"dataDir": "/tmp/zookeeper_2",
"log_filename": "zookeeper_2.log",
"config_filename": "zookeeper_2.properties"
},
{
"entity_id": "3",
"clientPort": "2138",
"dataDir": "/tmp/zookeeper_3",
"log_filename": "zookeeper_3.log",
"config_filename": "zookeeper_3.properties"
},
{
"entity_id": "4",
"port": "9091",
"broker.id": "1",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_4_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_4.log",
"config_filename": "kafka_server_4.properties"
},
{
"entity_id": "5",
"port": "9092",
"broker.id": "2",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_5_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_5.log",
"config_filename": "kafka_server_5.properties"
},
{
"entity_id": "6",
"port": "9093",
"broker.id": "3",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_6_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_6.log",
"config_filename": "kafka_server_6.properties"
},
{
"entity_id": "7",
"port": "9094",
"broker.id": "4",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_7_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_7.log",
"config_filename": "kafka_server_7.properties"
},
{
"entity_id": "8",
"port": "9095",
"broker.id": "5",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_8_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_8.log",
"config_filename": "kafka_server_8.properties"
},
{
"entity_id": "9",
"port": "9096",
"broker.id": "6",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_9_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_9.log",
"config_filename": "kafka_server_9.properties"
},
{
"entity_id": "10",
"topic": "test_1",
"threads": "5",
"compression-codec": "2",
"message-size": "500",
"message": "100",
"request-num-acks": "-1",
"sync":"false",
"producer-num-retries":"5",
"log_filename": "producer_performance_10.log",
"config_filename": "producer_performance_10.properties"
},
{
"entity_id": "11",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"log_filename": "console_consumer_11.log",
"config_filename": "console_consumer_11.properties"
},
{
"entity_id": "12",
"log_filename": "mirror_maker_12.log",
"mirror_consumer_config_filename": "mirror_consumer_12.properties",
"mirror_producer_config_filename": "mirror_producer_12.properties"
},
{
"entity_id": "13",
"log_filename": "mirror_maker_13.log",
"mirror_consumer_config_filename": "mirror_consumer_13.properties",
"mirror_producer_config_filename": "mirror_producer_13.properties"
}
]
}
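Each broker entity in the testcase properties above mixes Kafka server overrides (port, broker.id, log.segment.bytes, log.dir, default.replication.factor, num.partitions) with harness bookkeeping (entity_id, log_filename, config_filename). A hedged sketch of rendering the overrides into the per-broker properties file named by config_filename; the split between server keys and bookkeeping keys is an assumption drawn from the field names, not from the framework's generator:

import json

# Fields assumed to describe the harness rather than the Kafka server itself.
BOOKKEEPING_KEYS = {"entity_id", "log_filename", "config_filename"}

def write_broker_properties(entity, out_dir="."):
    # Everything that is not bookkeeping becomes a server.properties override.
    overrides = {k: v for k, v in entity.items() if k not in BOOKKEEPING_KEYS}
    path = "%s/%s" % (out_dir, entity["config_filename"])
    with open(path, "w") as f:
        for key, value in sorted(overrides.items()):
            f.write("%s=%s\n" % (key, value))
    return path

if __name__ == "__main__":
    with open("testcase_properties.json") as f:  # hypothetical local copy
        testcase = json.load(f)
    for broker in (e for e in testcase["entities"] if "broker.id" in e):
        print("wrote %s" % write_broker_properties(broker))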


@ -1,135 +0,0 @@
{
"cluster_config": [
{
"entity_id": "0",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9100"
},
{
"entity_id": "1",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9101"
},
{
"entity_id": "2",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9102"
},
{
"entity_id": "3",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9103"
},
{
"entity_id": "4",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9104"
},
{
"entity_id": "5",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9105"
},
{
"entity_id": "6",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9106"
},
{
"entity_id": "7",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9107"
},
{
"entity_id": "8",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9108"
},
{
"entity_id": "9",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9109"
},
{
"entity_id": "10",
"hostname": "localhost",
"role": "producer_performance",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9110"
},
{
"entity_id": "11",
"hostname": "localhost",
"role": "console_consumer",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9111"
},
{
"entity_id": "12",
"hostname": "localhost",
"role": "mirror_maker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9112"
},
{
"entity_id": "13",
"hostname": "localhost",
"role": "mirror_maker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9113"
}
]
}


@ -1,156 +0,0 @@
{
"description": {"01":"Replication with Mirror Maker => Bounce Mirror Maker",
"02":"Set up 2 clusters such as : SOURCE => MirrorMaker => TARGET",
"03":"Set up 2-node Zk cluster for both SOURCE & TARGET",
"04":"Produce and consume messages to a single topic - single partition.",
"05":"This test sends messages to 3 replicas",
"06":"At the end it verifies the log size and contents",
"07":"Use a consumer to verify no message loss in TARGET cluster.",
"08":"Producer dimensions : mode:async, acks:1, comp:1",
"09":"Log segment size : 20480"
},
"testcase_args": {
"bounce_leader": "false",
"bounce_mirror_maker": "true",
"bounced_entity_downtime_sec": "30",
"replica_factor": "3",
"num_partition": "1",
"num_iteration": "1",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15",
"num_messages_to_produce_per_producer_call": "50"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2108",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_0.log",
"config_filename": "zookeeper_0.properties"
},
{
"entity_id": "1",
"clientPort": "2118",
"dataDir": "/tmp/zookeeper_1",
"log_filename": "zookeeper_1.log",
"config_filename": "zookeeper_1.properties"
},
{
"entity_id": "2",
"clientPort": "2128",
"dataDir": "/tmp/zookeeper_2",
"log_filename": "zookeeper_2.log",
"config_filename": "zookeeper_2.properties"
},
{
"entity_id": "3",
"clientPort": "2138",
"dataDir": "/tmp/zookeeper_3",
"log_filename": "zookeeper_3.log",
"config_filename": "zookeeper_3.properties"
},
{
"entity_id": "4",
"port": "9091",
"broker.id": "1",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_4_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_4.log",
"config_filename": "kafka_server_4.properties"
},
{
"entity_id": "5",
"port": "9092",
"broker.id": "2",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_5_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_5.log",
"config_filename": "kafka_server_5.properties"
},
{
"entity_id": "6",
"port": "9093",
"broker.id": "3",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_6_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_6.log",
"config_filename": "kafka_server_6.properties"
},
{
"entity_id": "7",
"port": "9094",
"broker.id": "4",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_7_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_7.log",
"config_filename": "kafka_server_7.properties"
},
{
"entity_id": "8",
"port": "9095",
"broker.id": "5",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_8_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_8.log",
"config_filename": "kafka_server_8.properties"
},
{
"entity_id": "9",
"port": "9096",
"broker.id": "6",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_9_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_9.log",
"config_filename": "kafka_server_9.properties"
},
{
"entity_id": "10",
"topic": "test_1",
"threads": "5",
"compression-codec": "1",
"message-size": "500",
"message": "100",
"request-num-acks": "1",
"sync":"false",
"producer-num-retries":"5",
"log_filename": "producer_performance_10.log",
"config_filename": "producer_performance_10.properties"
},
{
"entity_id": "11",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"log_filename": "console_consumer_11.log",
"config_filename": "console_consumer_11.properties"
},
{
"entity_id": "12",
"log_filename": "mirror_maker_12.log",
"mirror_consumer_config_filename": "mirror_consumer_12.properties",
"mirror_producer_config_filename": "mirror_producer_12.properties"
},
{
"entity_id": "13",
"log_filename": "mirror_maker_13.log",
"mirror_consumer_config_filename": "mirror_consumer_13.properties",
"mirror_producer_config_filename": "mirror_producer_13.properties"
}
]
}


@ -1,153 +0,0 @@
{
"cluster_config": [
{
"entity_id": "0",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9100"
},
{
"entity_id": "1",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9101"
},
{
"entity_id": "2",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9102"
},
{
"entity_id": "3",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9103"
},
{
"entity_id": "4",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9104"
},
{
"entity_id": "5",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9105"
},
{
"entity_id": "6",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9106"
},
{
"entity_id": "7",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9107"
},
{
"entity_id": "8",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9108"
},
{
"entity_id": "9",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9109"
},
{
"entity_id": "10",
"hostname": "localhost",
"role": "producer_performance",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9110"
},
{
"entity_id": "11",
"hostname": "localhost",
"role": "producer_performance",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9111"
},
{
"entity_id": "12",
"hostname": "localhost",
"role": "console_consumer",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9112"
},
{
"entity_id": "13",
"hostname": "localhost",
"role": "console_consumer",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9113"
},
{
"entity_id": "14",
"hostname": "localhost",
"role": "mirror_maker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9114"
},
{
"entity_id": "15",
"hostname": "localhost",
"role": "mirror_maker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9115"
}
]
}
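A cluster_config file and its matching testcase_*_properties.json describe the same entities from two angles, placement (hostname, role, jmx_port) versus test behaviour (ports, topics, log and config file names), and they line up on entity_id. A small illustrative sketch of that join; the file names are placeholders:

import json

def merge_entities(cluster_path, testcase_path):
    # Combine placement info from cluster_config with per-entity test
    # settings from the testcase properties file, keyed by entity_id.
    with open(cluster_path) as f:
        placement = {e["entity_id"]: e for e in json.load(f)["cluster_config"]}
    with open(testcase_path) as f:
        settings = {e["entity_id"]: e for e in json.load(f)["entities"]}
    merged = {}
    for entity_id, info in placement.items():
        merged[entity_id] = dict(info)
        merged[entity_id].update(settings.get(entity_id, {}))
    return merged

if __name__ == "__main__":
    merged = merge_entities("cluster_config.json", "testcase_properties.json")
    for entity_id in sorted(merged, key=int):
        e = merged[entity_id]
        print("%s %s %s %s" % (entity_id, e.get("role"), e.get("hostname"), e.get("jmx_port")))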


@ -1,178 +0,0 @@
{
"description": {"01":"Replication with Mirror Maker => Bounce Mirror Maker",
"02":"Set up 2 clusters such as : SOURCE => MirrorMaker => TARGET",
"03":"Set up 2-node Zk cluster for both SOURCE & TARGET",
"04":"Produce and consume messages to 2 topics - 2 partitions.",
"05":"This test sends messages to 3 replicas",
"06":"At the end it verifies the log size and contents",
"07":"Use a consumer to verify no message loss in TARGET cluster.",
"08":"Producer dimensions : mode:async, acks:-1, comp:1",
"09":"Log segment size : 20480"
},
"testcase_args": {
"bounce_leader": "false",
"bounce_mirror_maker": "true",
"bounced_entity_downtime_sec": "30",
"replica_factor": "3",
"num_partition": "2",
"num_iteration": "1",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15",
"num_messages_to_produce_per_producer_call": "50"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2108",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_0.log",
"config_filename": "zookeeper_0.properties"
},
{
"entity_id": "1",
"clientPort": "2118",
"dataDir": "/tmp/zookeeper_1",
"log_filename": "zookeeper_1.log",
"config_filename": "zookeeper_1.properties"
},
{
"entity_id": "2",
"clientPort": "2128",
"dataDir": "/tmp/zookeeper_2",
"log_filename": "zookeeper_2.log",
"config_filename": "zookeeper_2.properties"
},
{
"entity_id": "3",
"clientPort": "2138",
"dataDir": "/tmp/zookeeper_3",
"log_filename": "zookeeper_3.log",
"config_filename": "zookeeper_3.properties"
},
{
"entity_id": "4",
"port": "9091",
"broker.id": "1",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_4_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_4.log",
"config_filename": "kafka_server_4.properties"
},
{
"entity_id": "5",
"port": "9092",
"broker.id": "2",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_5_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_5.log",
"config_filename": "kafka_server_5.properties"
},
{
"entity_id": "6",
"port": "9093",
"broker.id": "3",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_6_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_6.log",
"config_filename": "kafka_server_6.properties"
},
{
"entity_id": "7",
"port": "9094",
"broker.id": "4",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_7_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_7.log",
"config_filename": "kafka_server_7.properties"
},
{
"entity_id": "8",
"port": "9095",
"broker.id": "5",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_8_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_8.log",
"config_filename": "kafka_server_8.properties"
},
{
"entity_id": "9",
"port": "9096",
"broker.id": "6",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_9_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_9.log",
"config_filename": "kafka_server_9.properties"
},
{
"entity_id": "10",
"topic": "test_1",
"threads": "5",
"compression-codec": "1",
"message-size": "500",
"message": "100",
"request-num-acks": "-1",
"sync":"false",
"producer-num-retries":"5",
"log_filename": "producer_performance_10.log",
"config_filename": "producer_performance_10.properties"
},
{
"entity_id": "11",
"topic": "test_2",
"threads": "5",
"compression-codec": "1",
"message-size": "500",
"message": "100",
"request-num-acks": "-1",
"sync":"false",
"producer-num-retries":"5",
"log_filename": "producer_performance_11.log",
"config_filename": "producer_performance_11.properties"
},
{
"entity_id": "12",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"log_filename": "console_consumer_12.log",
"config_filename": "console_consumer_12.properties"
},
{
"entity_id": "13",
"topic": "test_2",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"log_filename": "console_consumer_13.log",
"config_filename": "console_consumer_13.properties"
},
{
"entity_id": "14",
"log_filename": "mirror_maker_14.log",
"mirror_consumer_config_filename": "mirror_consumer_14.properties",
"mirror_producer_config_filename": "mirror_producer_14.properties"
},
{
"entity_id": "15",
"log_filename": "mirror_maker_15.log",
"mirror_consumer_config_filename": "mirror_consumer_15.properties",
"mirror_producer_config_filename": "mirror_producer_15.properties"
}
]
}
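The bounce_mirror_maker, bounced_entity_downtime_sec and num_iteration arguments above drive the failure-injection half of these test cases. A dry-run sketch of the bounce loop those arguments imply; the stop/start hooks and the entity id are stand-ins, not the framework's real calls:

import time

def bounce(entity_id, downtime_sec, iterations, stop, start, dry_run=True):
    # Repeatedly take the entity down, wait out the configured downtime,
    # and bring it back up.
    for i in range(iterations):
        print("iteration %d: stopping mirror maker entity %s" % (i + 1, entity_id))
        if not dry_run:
            stop(entity_id)
            time.sleep(downtime_sec)
        print("iteration %d: restarting mirror maker entity %s" % (i + 1, entity_id))
        if not dry_run:
            start(entity_id)

testcase_args = {"bounce_mirror_maker": "true",
                 "bounced_entity_downtime_sec": "30",
                 "num_iteration": "1"}
if testcase_args["bounce_mirror_maker"] == "true":
    bounce("14", int(testcase_args["bounced_entity_downtime_sec"]),
           int(testcase_args["num_iteration"]),
           stop=lambda eid: None, start=lambda eid: None)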


@ -1,153 +0,0 @@
{
"cluster_config": [
{
"entity_id": "0",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9100"
},
{
"entity_id": "1",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9101"
},
{
"entity_id": "2",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9102"
},
{
"entity_id": "3",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9103"
},
{
"entity_id": "4",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9104"
},
{
"entity_id": "5",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9105"
},
{
"entity_id": "6",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9106"
},
{
"entity_id": "7",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9107"
},
{
"entity_id": "8",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9108"
},
{
"entity_id": "9",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9109"
},
{
"entity_id": "10",
"hostname": "localhost",
"role": "producer_performance",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9110"
},
{
"entity_id": "11",
"hostname": "localhost",
"role": "producer_performance",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9111"
},
{
"entity_id": "12",
"hostname": "localhost",
"role": "console_consumer",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9112"
},
{
"entity_id": "13",
"hostname": "localhost",
"role": "console_consumer",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9113"
},
{
"entity_id": "14",
"hostname": "localhost",
"role": "mirror_maker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9114"
},
{
"entity_id": "15",
"hostname": "localhost",
"role": "mirror_maker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9115"
}
]
}


@ -1,178 +0,0 @@
{
"description": {"01":"Replication with Mirror Maker => Bounce Mirror Maker",
"02":"Set up 2 clusters such as : SOURCE => MirrorMaker => TARGET",
"03":"Set up 2-node Zk cluster for both SOURCE & TARGET",
"04":"Produce and consume messages to 2 topics - 2 partitions.",
"05":"This test sends messages to 3 replicas",
"06":"At the end it verifies the log size and contents",
"07":"Use a consumer to verify no message loss in TARGET cluster.",
"08":"Producer dimensions : mode:async, acks:1, comp:1",
"09":"Log segment size : 20480"
},
"testcase_args": {
"bounce_leader": "false",
"bounce_mirror_maker": "true",
"bounced_entity_downtime_sec": "30",
"replica_factor": "3",
"num_partition": "2",
"num_iteration": "1",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15",
"num_messages_to_produce_per_producer_call": "50"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2108",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_0.log",
"config_filename": "zookeeper_0.properties"
},
{
"entity_id": "1",
"clientPort": "2118",
"dataDir": "/tmp/zookeeper_1",
"log_filename": "zookeeper_1.log",
"config_filename": "zookeeper_1.properties"
},
{
"entity_id": "2",
"clientPort": "2128",
"dataDir": "/tmp/zookeeper_2",
"log_filename": "zookeeper_2.log",
"config_filename": "zookeeper_2.properties"
},
{
"entity_id": "3",
"clientPort": "2138",
"dataDir": "/tmp/zookeeper_3",
"log_filename": "zookeeper_3.log",
"config_filename": "zookeeper_3.properties"
},
{
"entity_id": "4",
"port": "9091",
"broker.id": "1",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_4_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_4.log",
"config_filename": "kafka_server_4.properties"
},
{
"entity_id": "5",
"port": "9092",
"broker.id": "2",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_5_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_5.log",
"config_filename": "kafka_server_5.properties"
},
{
"entity_id": "6",
"port": "9093",
"broker.id": "3",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_6_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_6.log",
"config_filename": "kafka_server_6.properties"
},
{
"entity_id": "7",
"port": "9094",
"broker.id": "4",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_7_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_7.log",
"config_filename": "kafka_server_7.properties"
},
{
"entity_id": "8",
"port": "9095",
"broker.id": "5",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_8_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_8.log",
"config_filename": "kafka_server_8.properties"
},
{
"entity_id": "9",
"port": "9096",
"broker.id": "6",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_9_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_9.log",
"config_filename": "kafka_server_9.properties"
},
{
"entity_id": "10",
"topic": "test_1",
"threads": "5",
"compression-codec": "1",
"message-size": "500",
"message": "100",
"request-num-acks": "1",
"sync":"false",
"producer-num-retries":"5",
"log_filename": "producer_performance_10.log",
"config_filename": "producer_performance_10.properties"
},
{
"entity_id": "11",
"topic": "test_2",
"threads": "5",
"compression-codec": "1",
"message-size": "500",
"message": "100",
"request-num-acks": "1",
"sync":"false",
"producer-num-retries":"5",
"log_filename": "producer_performance_11.log",
"config_filename": "producer_performance_11.properties"
},
{
"entity_id": "12",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"log_filename": "console_consumer_12.log",
"config_filename": "console_consumer_12.properties"
},
{
"entity_id": "13",
"topic": "test_2",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"log_filename": "console_consumer_13.log",
"config_filename": "console_consumer_13.properties"
},
{
"entity_id": "14",
"log_filename": "mirror_maker_14.log",
"mirror_consumer_config_filename": "mirror_consumer_14.properties",
"mirror_producer_config_filename": "mirror_producer_14.properties"
},
{
"entity_id": "15",
"log_filename": "mirror_maker_15.log",
"mirror_consumer_config_filename": "mirror_consumer_15.properties",
"mirror_producer_config_filename": "mirror_producer_15.properties"
}
]
}


@ -1,160 +0,0 @@
{
"description": {"01":"To Test : 'Replication with Mirror Maker'",
"02":"Set up 2 clusters such as : SOURCE => MirrorMaker => TARGET",
"03":"Set up 2-node Zk cluster for both SOURCE & TARGET",
"04":"Produce and consume messages to a single topic - single partition.",
"05":"This test sends messages to 3 replicas",
"06":"At the end it verifies the log size and contents",
"07":"Use a consumer to verify no message loss in TARGET cluster.",
"08":"Producer dimensions : mode:sync, acks:-1, comp:0",
"09":"Log segment size : 20480"
},
"testcase_args": {
"bounce_leader": "false",
"bounce_mirror_maker": "false",
"replica_factor": "3",
"num_partition": "1",
"num_iteration": "1",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15",
"num_messages_to_produce_per_producer_call": "50"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2108",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_0.log",
"config_filename": "zookeeper_0.properties"
},
{
"entity_id": "1",
"clientPort": "2118",
"dataDir": "/tmp/zookeeper_1",
"log_filename": "zookeeper_1.log",
"config_filename": "zookeeper_1.properties"
},
{
"entity_id": "2",
"clientPort": "2128",
"dataDir": "/tmp/zookeeper_2",
"log_filename": "zookeeper_2.log",
"config_filename": "zookeeper_2.properties"
},
{
"entity_id": "3",
"clientPort": "2138",
"dataDir": "/tmp/zookeeper_3",
"log_filename": "zookeeper_3.log",
"config_filename": "zookeeper_3.properties"
},
{
"entity_id": "4",
"port": "9091",
"broker.id": "1",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_4_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_4.log",
"config_filename": "kafka_server_4.properties"
},
{
"entity_id": "5",
"port": "9092",
"broker.id": "2",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_5_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_5.log",
"config_filename": "kafka_server_5.properties"
},
{
"entity_id": "6",
"port": "9093",
"broker.id": "3",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_6_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_6.log",
"config_filename": "kafka_server_6.properties"
},
{
"entity_id": "7",
"port": "9094",
"broker.id": "4",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_7_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_7.log",
"config_filename": "kafka_server_7.properties"
},
{
"entity_id": "8",
"port": "9095",
"broker.id": "5",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_8_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_8.log",
"config_filename": "kafka_server_8.properties"
},
{
"entity_id": "9",
"port": "9096",
"broker.id": "6",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_9_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_9.log",
"config_filename": "kafka_server_9.properties"
},
{
"entity_id": "10",
"new-producer":"true",
"topic": "test_1",
"threads": "5",
"compression-codec": "0",
"message-size": "500",
"message": "500",
"request-num-acks": "-1",
"sync":"true",
"producer-num-retries":"5",
"log_filename": "producer_performance_10.log",
"config_filename": "producer_performance_10.properties"
},
{
"entity_id": "11",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"log_filename": "console_consumer_11.log",
"config_filename": "console_consumer_11.properties"
},
{
"entity_id": "12",
"new-producer":"true",
"log_filename": "mirror_maker_12.log",
"mirror_consumer_config_filename": "mirror_consumer_12.properties",
"mirror_producer_config_filename": "mirror_producer_12.properties"
},
{
"entity_id": "13",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"log_filename": "console_consumer_13.log",
"config_filename": "console_consumer_13.properties"
}
]
}
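The producer_performance entity above is a flat set of key/value pairs (new-producer, topic, threads, compression-codec, message-size, message, request-num-acks, sync, producer-num-retries), which lends itself to being turned into a command line. The mapping below, where every non-bookkeeping key becomes a "--key value" pair and "true" values become bare switches, only illustrates that shape; the framework's actual flag handling may differ:

def producer_perf_args(entity):
    # Naive key -> "--key value" mapping; "true" values become bare switches.
    skip = {"entity_id", "log_filename", "config_filename"}
    args = []
    for key, value in sorted(entity.items()):
        if key in skip:
            continue
        if value == "true":
            args.append("--%s" % key)
        else:
            args.extend(["--%s" % key, value])
    return args

producer_entity = {
    "entity_id": "10", "new-producer": "true", "topic": "test_1",
    "threads": "5", "compression-codec": "0", "message-size": "500",
    "message": "500", "request-num-acks": "-1", "sync": "true",
    "producer-num-retries": "5",
    "log_filename": "producer_performance_10.log",
    "config_filename": "producer_performance_10.properties",
}
print(" ".join(producer_perf_args(producer_entity)))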


@ -1,160 +0,0 @@
{
"description": {"01":"Replication with Mirror Maker => Bounce Mirror Maker",
"02":"Set up 2 clusters such as : SOURCE => MirrorMaker => TARGET",
"03":"Set up 2-node Zk cluster for both SOURCE & TARGET",
"04":"Produce and consume messages to a single topic - single partition.",
"05":"This test sends messages to 3 replicas",
"06":"At the end it verifies the log size and contents",
"07":"Use a consumer to verify no message loss in TARGET cluster.",
"08":"Producer dimensions : mode:sync, acks:-1, comp:0",
"09":"Log segment size : 20480"
},
"testcase_args": {
"bounce_leader": "false",
"bounce_mirror_maker": "true",
"replica_factor": "3",
"num_partition": "1",
"num_iteration": "1",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15",
"num_messages_to_produce_per_producer_call": "50"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2108",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_0.log",
"config_filename": "zookeeper_0.properties"
},
{
"entity_id": "1",
"clientPort": "2118",
"dataDir": "/tmp/zookeeper_1",
"log_filename": "zookeeper_1.log",
"config_filename": "zookeeper_1.properties"
},
{
"entity_id": "2",
"clientPort": "2128",
"dataDir": "/tmp/zookeeper_2",
"log_filename": "zookeeper_2.log",
"config_filename": "zookeeper_2.properties"
},
{
"entity_id": "3",
"clientPort": "2138",
"dataDir": "/tmp/zookeeper_3",
"log_filename": "zookeeper_3.log",
"config_filename": "zookeeper_3.properties"
},
{
"entity_id": "4",
"port": "9091",
"broker.id": "1",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_4_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_4.log",
"config_filename": "kafka_server_4.properties"
},
{
"entity_id": "5",
"port": "9092",
"broker.id": "2",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_5_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_5.log",
"config_filename": "kafka_server_5.properties"
},
{
"entity_id": "6",
"port": "9093",
"broker.id": "3",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_6_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_6.log",
"config_filename": "kafka_server_6.properties"
},
{
"entity_id": "7",
"port": "9094",
"broker.id": "4",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_7_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_7.log",
"config_filename": "kafka_server_7.properties"
},
{
"entity_id": "8",
"port": "9095",
"broker.id": "5",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_8_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_8.log",
"config_filename": "kafka_server_8.properties"
},
{
"entity_id": "9",
"port": "9096",
"broker.id": "6",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_9_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_9.log",
"config_filename": "kafka_server_9.properties"
},
{
"entity_id": "10",
"new-producer":"true",
"topic": "test_1",
"threads": "5",
"compression-codec": "0",
"message-size": "500",
"message": "100",
"request-num-acks": "-1",
"sync":"true",
"producer-num-retries":"5",
"log_filename": "producer_performance_10.log",
"config_filename": "producer_performance_10.properties"
},
{
"entity_id": "11",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"log_filename": "console_consumer_11.log",
"config_filename": "console_consumer_11.properties"
},
{
"entity_id": "12",
"new-producer":"true",
"log_filename": "mirror_maker_12.log",
"mirror_consumer_config_filename": "mirror_consumer_12.properties",
"mirror_producer_config_filename": "mirror_producer_12.properties"
},
{
"entity_id": "13",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"log_filename": "console_consumer_13.log",
"config_filename": "console_consumer_13.properties"
}
]
}


@ -1,135 +0,0 @@
{
"cluster_config": [
{
"entity_id": "0",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9100"
},
{
"entity_id": "1",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9101"
},
{
"entity_id": "2",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9102"
},
{
"entity_id": "3",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9103"
},
{
"entity_id": "4",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9104"
},
{
"entity_id": "5",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9105"
},
{
"entity_id": "6",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9106"
},
{
"entity_id": "7",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9107"
},
{
"entity_id": "8",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9108"
},
{
"entity_id": "9",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9109"
},
{
"entity_id": "10",
"hostname": "localhost",
"role": "producer_performance",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9110"
},
{
"entity_id": "11",
"hostname": "localhost",
"role": "console_consumer",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9111"
},
{
"entity_id": "12",
"hostname": "localhost",
"role": "mirror_maker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9112"
},
{
"entity_id": "13",
"hostname": "localhost",
"role": "mirror_maker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9113"
}
]
}


@ -1,159 +0,0 @@
{
"description": {"01":"Replication with Mirror Maker => Bounce Mirror Maker",
"02":"Set up 2 clusters such as : SOURCE => MirrorMaker => TARGET",
"03":"Set up 2-node Zk cluster for both SOURCE & TARGET",
"04":"Produce and consume messages to a single topic - single partition.",
"05":"This test sends messages to 3 replicas",
"06":"At the end it verifies the log size and contents",
"07":"Use a consumer to verify no message loss in TARGET cluster.",
"08":"Producer dimensions : mode:async, acks:-1, comp:1",
"09":"Log segment size : 20480"
},
"testcase_args": {
"bounce_leader": "false",
"bounce_mirror_maker": "true",
"bounced_entity_downtime_sec": "30",
"replica_factor": "3",
"num_partition": "1",
"num_iteration": "1",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15",
"num_messages_to_produce_per_producer_call": "50"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2108",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_0.log",
"config_filename": "zookeeper_0.properties"
},
{
"entity_id": "1",
"clientPort": "2118",
"dataDir": "/tmp/zookeeper_1",
"log_filename": "zookeeper_1.log",
"config_filename": "zookeeper_1.properties"
},
{
"entity_id": "2",
"clientPort": "2128",
"dataDir": "/tmp/zookeeper_2",
"log_filename": "zookeeper_2.log",
"config_filename": "zookeeper_2.properties"
},
{
"entity_id": "3",
"clientPort": "2138",
"dataDir": "/tmp/zookeeper_3",
"log_filename": "zookeeper_3.log",
"config_filename": "zookeeper_3.properties"
},
{
"entity_id": "4",
"port": "9091",
"broker.id": "1",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_4_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_4.log",
"config_filename": "kafka_server_4.properties"
},
{
"entity_id": "5",
"port": "9092",
"broker.id": "2",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_5_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_5.log",
"config_filename": "kafka_server_5.properties"
},
{
"entity_id": "6",
"port": "9093",
"broker.id": "3",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_6_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_6.log",
"config_filename": "kafka_server_6.properties"
},
{
"entity_id": "7",
"port": "9094",
"broker.id": "4",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_7_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_7.log",
"config_filename": "kafka_server_7.properties"
},
{
"entity_id": "8",
"port": "9095",
"broker.id": "5",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_8_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_8.log",
"config_filename": "kafka_server_8.properties"
},
{
"entity_id": "9",
"port": "9096",
"broker.id": "6",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_9_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_9.log",
"config_filename": "kafka_server_9.properties"
},
{
"entity_id": "10",
"new-producer":"true",
"topic": "test_1",
"threads": "5",
"compression-codec": "2",
"message-size": "500",
"message": "100",
"request-num-acks": "-1",
"sync":"false",
"producer-num-retries":"5",
"log_filename": "producer_performance_10.log",
"config_filename": "producer_performance_10.properties"
},
{
"entity_id": "11",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"log_filename": "console_consumer_11.log",
"config_filename": "console_consumer_11.properties"
},
{
"entity_id": "12",
"new-producer":"true",
"log_filename": "mirror_maker_12.log",
"mirror_consumer_config_filename": "mirror_consumer_12.properties",
"mirror_producer_config_filename": "mirror_producer_12.properties"
},
{
"entity_id": "13",
"new-producer":"true",
"log_filename": "mirror_maker_13.log",
"mirror_consumer_config_filename": "mirror_consumer_13.properties",
"mirror_producer_config_filename": "mirror_producer_13.properties"
}
]
}


@ -1,135 +0,0 @@
{
"cluster_config": [
{
"entity_id": "0",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9100"
},
{
"entity_id": "1",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9101"
},
{
"entity_id": "2",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9102"
},
{
"entity_id": "3",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9103"
},
{
"entity_id": "4",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9104"
},
{
"entity_id": "5",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9105"
},
{
"entity_id": "6",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9106"
},
{
"entity_id": "7",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9107"
},
{
"entity_id": "8",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9108"
},
{
"entity_id": "9",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9109"
},
{
"entity_id": "10",
"hostname": "localhost",
"role": "producer_performance",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9110"
},
{
"entity_id": "11",
"hostname": "localhost",
"role": "console_consumer",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9111"
},
{
"entity_id": "12",
"hostname": "localhost",
"role": "mirror_maker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9112"
},
{
"entity_id": "13",
"hostname": "localhost",
"role": "mirror_maker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9113"
}
]
}


@ -1,159 +0,0 @@
{
"description": {"01":"Replication with Mirror Maker => Bounce Mirror Maker",
"02":"Set up 2 clusters such as : SOURCE => MirrorMaker => TARGET",
"03":"Set up 2-node Zk cluster for both SOURCE & TARGET",
"04":"Produce and consume messages to a single topic - single partition.",
"05":"This test sends messages to 3 replicas",
"06":"At the end it verifies the log size and contents",
"07":"Use a consumer to verify no message loss in TARGET cluster.",
"08":"Producer dimensions : mode:async, acks:1, comp:1",
"09":"Log segment size : 20480"
},
"testcase_args": {
"bounce_leader": "false",
"bounce_mirror_maker": "true",
"bounced_entity_downtime_sec": "30",
"replica_factor": "3",
"num_partition": "1",
"num_iteration": "1",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15",
"num_messages_to_produce_per_producer_call": "50"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2108",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_0.log",
"config_filename": "zookeeper_0.properties"
},
{
"entity_id": "1",
"clientPort": "2118",
"dataDir": "/tmp/zookeeper_1",
"log_filename": "zookeeper_1.log",
"config_filename": "zookeeper_1.properties"
},
{
"entity_id": "2",
"clientPort": "2128",
"dataDir": "/tmp/zookeeper_2",
"log_filename": "zookeeper_2.log",
"config_filename": "zookeeper_2.properties"
},
{
"entity_id": "3",
"clientPort": "2138",
"dataDir": "/tmp/zookeeper_3",
"log_filename": "zookeeper_3.log",
"config_filename": "zookeeper_3.properties"
},
{
"entity_id": "4",
"port": "9091",
"broker.id": "1",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_4_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_4.log",
"config_filename": "kafka_server_4.properties"
},
{
"entity_id": "5",
"port": "9092",
"broker.id": "2",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_5_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_5.log",
"config_filename": "kafka_server_5.properties"
},
{
"entity_id": "6",
"port": "9093",
"broker.id": "3",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_6_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_6.log",
"config_filename": "kafka_server_6.properties"
},
{
"entity_id": "7",
"port": "9094",
"broker.id": "4",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_7_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_7.log",
"config_filename": "kafka_server_7.properties"
},
{
"entity_id": "8",
"port": "9095",
"broker.id": "5",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_8_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_8.log",
"config_filename": "kafka_server_8.properties"
},
{
"entity_id": "9",
"port": "9096",
"broker.id": "6",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_9_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_9.log",
"config_filename": "kafka_server_9.properties"
},
{
"entity_id": "10",
"new-producer":"true",
"topic": "test_1",
"threads": "5",
"compression-codec": "1",
"message-size": "500",
"message": "100",
"request-num-acks": "1",
"sync":"false",
"producer-num-retries":"5",
"log_filename": "producer_performance_10.log",
"config_filename": "producer_performance_10.properties"
},
{
"entity_id": "11",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"log_filename": "console_consumer_11.log",
"config_filename": "console_consumer_11.properties"
},
{
"entity_id": "12",
"new-producer":"true",
"log_filename": "mirror_maker_12.log",
"mirror_consumer_config_filename": "mirror_consumer_12.properties",
"mirror_producer_config_filename": "mirror_producer_12.properties"
},
{
"entity_id": "13",
"new-producer":"true",
"log_filename": "mirror_maker_13.log",
"mirror_consumer_config_filename": "mirror_consumer_13.properties",
"mirror_producer_config_filename": "mirror_producer_13.properties"
}
]
}


@ -1,153 +0,0 @@
{
"cluster_config": [
{
"entity_id": "0",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9100"
},
{
"entity_id": "1",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9101"
},
{
"entity_id": "2",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9102"
},
{
"entity_id": "3",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9103"
},
{
"entity_id": "4",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9104"
},
{
"entity_id": "5",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9105"
},
{
"entity_id": "6",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9106"
},
{
"entity_id": "7",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9107"
},
{
"entity_id": "8",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9108"
},
{
"entity_id": "9",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9109"
},
{
"entity_id": "10",
"hostname": "localhost",
"role": "producer_performance",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9110"
},
{
"entity_id": "11",
"hostname": "localhost",
"role": "producer_performance",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9111"
},
{
"entity_id": "12",
"hostname": "localhost",
"role": "console_consumer",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9112"
},
{
"entity_id": "13",
"hostname": "localhost",
"role": "console_consumer",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9113"
},
{
"entity_id": "14",
"hostname": "localhost",
"role": "mirror_maker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9114"
},
{
"entity_id": "15",
"hostname": "localhost",
"role": "mirror_maker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9115"
}
]
}


@ -1,182 +0,0 @@
{
"description": {"01":"Replication with Mirror Maker => Bounce Mirror Maker",
"02":"Set up 2 clusters such as : SOURCE => MirrorMaker => TARGET",
"03":"Set up 2-node Zk cluster for both SOURCE & TARGET",
"04":"Produce and consume messages to 2 topics - 2 partitions.",
"05":"This test sends messages to 3 replicas",
"06":"At the end it verifies the log size and contents",
"07":"Use a consumer to verify no message loss in TARGET cluster.",
"08":"Producer dimensions : mode:async, acks:-1, comp:1",
"09":"Log segment size : 20480"
},
"testcase_args": {
"bounce_leader": "false",
"bounce_mirror_maker": "true",
"bounced_entity_downtime_sec": "30",
"replica_factor": "3",
"num_partition": "2",
"num_iteration": "1",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15",
"num_messages_to_produce_per_producer_call": "50"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2108",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_0.log",
"config_filename": "zookeeper_0.properties"
},
{
"entity_id": "1",
"clientPort": "2118",
"dataDir": "/tmp/zookeeper_1",
"log_filename": "zookeeper_1.log",
"config_filename": "zookeeper_1.properties"
},
{
"entity_id": "2",
"clientPort": "2128",
"dataDir": "/tmp/zookeeper_2",
"log_filename": "zookeeper_2.log",
"config_filename": "zookeeper_2.properties"
},
{
"entity_id": "3",
"clientPort": "2138",
"dataDir": "/tmp/zookeeper_3",
"log_filename": "zookeeper_3.log",
"config_filename": "zookeeper_3.properties"
},
{
"entity_id": "4",
"port": "9091",
"broker.id": "1",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_4_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_4.log",
"config_filename": "kafka_server_4.properties"
},
{
"entity_id": "5",
"port": "9092",
"broker.id": "2",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_5_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_5.log",
"config_filename": "kafka_server_5.properties"
},
{
"entity_id": "6",
"port": "9093",
"broker.id": "3",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_6_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_6.log",
"config_filename": "kafka_server_6.properties"
},
{
"entity_id": "7",
"port": "9094",
"broker.id": "4",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_7_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_7.log",
"config_filename": "kafka_server_7.properties"
},
{
"entity_id": "8",
"port": "9095",
"broker.id": "5",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_8_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_8.log",
"config_filename": "kafka_server_8.properties"
},
{
"entity_id": "9",
"port": "9096",
"broker.id": "6",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_9_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_9.log",
"config_filename": "kafka_server_9.properties"
},
{
"entity_id": "10",
"new-producer":"true",
"topic": "test_1",
"threads": "5",
"compression-codec": "1",
"message-size": "500",
"message": "100",
"request-num-acks": "-1",
"sync":"false",
"producer-num-retries":"5",
"log_filename": "producer_performance_10.log",
"config_filename": "producer_performance_10.properties"
},
{
"entity_id": "11",
"new-producer":"true",
"topic": "test_2",
"threads": "5",
"compression-codec": "1",
"message-size": "500",
"message": "100",
"request-num-acks": "-1",
"sync":"false",
"producer-num-retries":"5",
"log_filename": "producer_performance_11.log",
"config_filename": "producer_performance_11.properties"
},
{
"entity_id": "12",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"log_filename": "console_consumer_12.log",
"config_filename": "console_consumer_12.properties"
},
{
"entity_id": "13",
"topic": "test_2",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"log_filename": "console_consumer_13.log",
"config_filename": "console_consumer_13.properties"
},
{
"entity_id": "14",
"new-producer":"true",
"log_filename": "mirror_maker_14.log",
"mirror_consumer_config_filename": "mirror_consumer_14.properties",
"mirror_producer_config_filename": "mirror_producer_14.properties"
},
{
"entity_id": "15",
"new-producer":"true",
"log_filename": "mirror_maker_15.log",
"mirror_consumer_config_filename": "mirror_consumer_15.properties",
"mirror_producer_config_filename": "mirror_producer_15.properties"
}
]
}


@ -1,153 +0,0 @@
{
"cluster_config": [
{
"entity_id": "0",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9100"
},
{
"entity_id": "1",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9101"
},
{
"entity_id": "2",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9102"
},
{
"entity_id": "3",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9103"
},
{
"entity_id": "4",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9104"
},
{
"entity_id": "5",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9105"
},
{
"entity_id": "6",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9106"
},
{
"entity_id": "7",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9107"
},
{
"entity_id": "8",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9108"
},
{
"entity_id": "9",
"hostname": "localhost",
"role": "broker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9109"
},
{
"entity_id": "10",
"hostname": "localhost",
"role": "producer_performance",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9110"
},
{
"entity_id": "11",
"hostname": "localhost",
"role": "producer_performance",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9111"
},
{
"entity_id": "12",
"hostname": "localhost",
"role": "console_consumer",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9112"
},
{
"entity_id": "13",
"hostname": "localhost",
"role": "console_consumer",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9113"
},
{
"entity_id": "14",
"hostname": "localhost",
"role": "mirror_maker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9114"
},
{
"entity_id": "15",
"hostname": "localhost",
"role": "mirror_maker",
"cluster_name":"target",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9115"
}
]
}


@ -1,182 +0,0 @@
{
"description": {"01":"Replication with Mirror Maker => Bounce Mirror Maker",
"02":"Set up 2 clusters such as : SOURCE => MirrorMaker => TARGET",
"03":"Set up 2-node Zk cluster for both SOURCE & TARGET",
"04":"Produce and consume messages to 2 topics - 2 partitions.",
"05":"This test sends messages to 3 replicas",
"06":"At the end it verifies the log size and contents",
"07":"Use a consumer to verify no message loss in TARGET cluster.",
"08":"Producer dimensions : mode:async, acks:1, comp:1",
"09":"Log segment size : 20480"
},
"testcase_args": {
"bounce_leader": "false",
"bounce_mirror_maker": "true",
"bounced_entity_downtime_sec": "30",
"replica_factor": "3",
"num_partition": "2",
"num_iteration": "1",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15",
"num_messages_to_produce_per_producer_call": "50"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2108",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_0.log",
"config_filename": "zookeeper_0.properties"
},
{
"entity_id": "1",
"clientPort": "2118",
"dataDir": "/tmp/zookeeper_1",
"log_filename": "zookeeper_1.log",
"config_filename": "zookeeper_1.properties"
},
{
"entity_id": "2",
"clientPort": "2128",
"dataDir": "/tmp/zookeeper_2",
"log_filename": "zookeeper_2.log",
"config_filename": "zookeeper_2.properties"
},
{
"entity_id": "3",
"clientPort": "2138",
"dataDir": "/tmp/zookeeper_3",
"log_filename": "zookeeper_3.log",
"config_filename": "zookeeper_3.properties"
},
{
"entity_id": "4",
"port": "9091",
"broker.id": "1",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_4_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_4.log",
"config_filename": "kafka_server_4.properties"
},
{
"entity_id": "5",
"port": "9092",
"broker.id": "2",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_5_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_5.log",
"config_filename": "kafka_server_5.properties"
},
{
"entity_id": "6",
"port": "9093",
"broker.id": "3",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_6_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_6.log",
"config_filename": "kafka_server_6.properties"
},
{
"entity_id": "7",
"port": "9094",
"broker.id": "4",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_7_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_7.log",
"config_filename": "kafka_server_7.properties"
},
{
"entity_id": "8",
"port": "9095",
"broker.id": "5",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_8_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_8.log",
"config_filename": "kafka_server_8.properties"
},
{
"entity_id": "9",
"port": "9096",
"broker.id": "6",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_9_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_9.log",
"config_filename": "kafka_server_9.properties"
},
{
"entity_id": "10",
"new-producer":"true",
"topic": "test_1",
"threads": "5",
"compression-codec": "1",
"message-size": "500",
"message": "100",
"request-num-acks": "1",
"sync":"false",
"producer-num-retries":"5",
"log_filename": "producer_performance_10.log",
"config_filename": "producer_performance_10.properties"
},
{
"entity_id": "11",
"new-producer":"true",
"topic": "test_2",
"threads": "5",
"compression-codec": "1",
"message-size": "500",
"message": "100",
"request-num-acks": "1",
"sync":"false",
"producer-num-retries":"5",
"log_filename": "producer_performance_11.log",
"config_filename": "producer_performance_11.properties"
},
{
"entity_id": "12",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"log_filename": "console_consumer_12.log",
"config_filename": "console_consumer_12.properties"
},
{
"entity_id": "13",
"topic": "test_2",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"log_filename": "console_consumer_13.log",
"config_filename": "console_consumer_13.properties"
},
{
"entity_id": "14",
"new-producer":"true",
"log_filename": "mirror_maker_14.log",
"mirror_consumer_config_filename": "mirror_consumer_14.properties",
"mirror_producer_config_filename": "mirror_producer_14.properties"
},
{
"entity_id": "15",
"new-producer":"true",
"log_filename": "mirror_maker_15.log",
"mirror_consumer_config_filename": "mirror_consumer_15.properties",
"mirror_producer_config_filename": "mirror_producer_15.properties"
}
]
}
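
Entities 14 and 15 above are the two MirrorMaker instances that bridge the source and target clusters, each driven by a generated consumer and producer properties file. A rough sketch of how such an entity definition could be turned into a launch command is shown below; the helper is hypothetical, and the flag names are assumptions based on the kafka.tools.MirrorMaker CLI of this era:

    def mirror_maker_command(kafka_home, entity):
        # Assemble a kafka.tools.MirrorMaker invocation from a testcase entity definition.
        # --consumer.config / --producer.config / --whitelist are assumed CLI flags.
        return [
            kafka_home + "/bin/kafka-run-class.sh", "kafka.tools.MirrorMaker",
            "--consumer.config", entity["mirror_consumer_config_filename"],
            "--producer.config", entity["mirror_producer_config_filename"],
            "--whitelist", ".*",  # mirror all topics (test_1 and test_2 in this testcase)
        ]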

View File

@ -1,103 +0,0 @@
{
"cluster_config": [
{
"entity_id": "0",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9100"
},
{
"entity_id": "1",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9101"
},
{
"entity_id": "2",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9102"
},
{
"entity_id": "3",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9103"
},
{
"entity_id": "4",
"hostname": "localhost",
"role": "broker",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9104"
},
{
"entity_id": "5",
"hostname": "localhost",
"role": "producer_performance",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9105"
},
{
"entity_id": "6",
"hostname": "localhost",
"role": "console_consumer",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9106"
},
{
"entity_id": "7",
"hostname": "localhost",
"role": "console_consumer",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9107"
},
{
"entity_id": "8",
"hostname": "localhost",
"role": "console_consumer",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9108"
},
{
"entity_id": "9",
"hostname": "localhost",
"role": "console_consumer",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9109"
},
{
"entity_id": "10",
"hostname": "localhost",
"role": "console_consumer",
"cluster_name":"source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9110"
}
]
}

View File

@ -1,2 +0,0 @@
auto.offset.reset=smallest
auto.commit.interval.ms=1000

View File

@ -1,143 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
# Hostname the broker will advertise to consumers. If not set, kafka will use the value returned
# from InetAddress.getLocalHost(). If there are multiple interfaces getLocalHost
# may not be what you want.
#host.name=
############################# Socket Server Settings #############################
# The port the socket server listens on
port=9091
# The number of threads handling network requests
num.network.threads=2
# The number of threads doing disk I/O
num.io.threads=2
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=1048576
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=1048576
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# The directory under which to store log files
log.dir=/tmp/kafka_server_logs
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=5
# Overrides for the default given by num.partitions on a per-topic basis
#topic.partition.count.map=topic1:3, topic2:4
############################# Log Flush Policy #############################
# The following configurations control the flush of data to disk. This is the most
# important performance knob in kafka.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data is at greater risk of loss in the event of a crash.
# 2. Latency: Data is not made available to consumers until it is flushed (which adds latency).
# 3. Throughput: The flush is generally the most expensive operation.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
log.flush.interval.ms=1000
# Per-topic overrides for log.flush.interval.ms
#log.flush.intervals.ms.per.topic=topic1:1000, topic2:3000
# The interval (in ms) at which logs are checked to see if they need to be flushed to disk.
log.flush.scheduler.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria is met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824
log.retention.bytes=-1
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
#log.segment.size=536870912
log.segment.bytes=102400
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.cleanup.interval.mins=1
############################# Zookeeper #############################
# Enable connecting to zookeeper
enable.zookeeper=true
# Zk connection string (see zk docs for details).
# This is a comma separated list of host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=1000000
monitoring.period.secs=1
message.max.bytes=1000000
queued.max.requests=500
log.roll.hours=168
log.index.size.max.bytes=10485760
log.index.interval.bytes=4096
auto.create.topics.enable=true
controller.socket.timeout.ms=30000
default.replication.factor=1
replica.lag.time.max.ms=10000
replica.lag.max.messages=4000
replica.socket.timeout.ms=30000
replica.socket.receive.buffer.bytes=65536
replica.fetch.max.bytes=1048576
replica.fetch.wait.max.ms=500
replica.fetch.min.bytes=4096
num.replica.fetchers=1
offsets.topic.num.partitions=2
offsets.topic.replication.factor=4
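
The per-broker entries in the testcase JSON files (port, broker.id, log.segment.bytes, log.dir, and so on) are meant to be overlaid on a base properties file like the one above. A minimal sketch of that merge is shown below, with an illustrative helper name that is not part of the framework:

    def override_properties(base_path, overrides, out_path):
        # Rewrite a base server.properties, replacing the value of any key the testcase JSON overrides.
        # Framework-only keys in the JSON are skipped; keys missing from the base file are not appended.
        skip = {"entity_id", "log_filename", "config_filename"}
        with open(base_path) as f:
            lines = f.readlines()
        with open(out_path, "w") as out:
            for line in lines:
                key = line.split("=", 1)[0].strip()
                if "=" in line and not line.startswith("#") and key in overrides and key not in skip:
                    out.write("%s=%s\n" % (key, overrides[key]))
                else:
                    out.write(line)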

View File

@ -1,23 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# the directory where the snapshot is stored.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections since this is a non-production config
maxClientCnxns=0
syncLimit=5
initLimit=10
tickTime=2000

View File

@ -1,299 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#!/usr/bin/env python
# ===================================
# offset_management_test.py
# ===================================
import os
import signal
import sys
import time
import traceback
from system_test_env import SystemTestEnv
sys.path.append(SystemTestEnv.SYSTEM_TEST_UTIL_DIR)
from setup_utils import SetupUtils
from replication_utils import ReplicationUtils
import system_test_utils
from testcase_env import TestcaseEnv
# product specific: Kafka
import kafka_system_test_utils
import metrics
class OffsetManagementTest(ReplicationUtils, SetupUtils):
testModuleAbsPathName = os.path.realpath(__file__)
testSuiteAbsPathName = os.path.abspath(os.path.dirname(testModuleAbsPathName))
def __init__(self, systemTestEnv):
# SystemTestEnv - provides cluster level environment settings
# such as entity_id, hostname, kafka_home, java_home which
# are available in a list of dictionary named
# "clusterEntityConfigDictList"
self.systemTestEnv = systemTestEnv
super(OffsetManagementTest, self).__init__(self)
# dict to pass user-defined attributes to logger argument: "extra"
d = {'name_of_class': self.__class__.__name__}
def signal_handler(self, signal, frame):
self.log_message("Interrupt detected - User pressed Ctrl+c")
# perform the necessary cleanup here when user presses Ctrl+c and it may be product specific
self.log_message("stopping all entities - please wait ...")
kafka_system_test_utils.stop_all_remote_running_processes(self.systemTestEnv, self.testcaseEnv)
sys.exit(1)
def runTest(self):
# ======================================================================
# get all testcase directories under this testsuite
# ======================================================================
testCasePathNameList = system_test_utils.get_dir_paths_with_prefix(
self.testSuiteAbsPathName, SystemTestEnv.SYSTEM_TEST_CASE_PREFIX)
testCasePathNameList.sort()
replicationUtils = ReplicationUtils(self)
# =============================================================
# launch each testcase one by one: testcase_1, testcase_2, ...
# =============================================================
for testCasePathName in testCasePathNameList:
skipThisTestCase = False
try:
# ======================================================================
# A new instance of TestcaseEnv to keep track of this testcase's env vars
# and initialize some env vars as testCasePathName is available now
# ======================================================================
self.testcaseEnv = TestcaseEnv(self.systemTestEnv, self)
self.testcaseEnv.testSuiteBaseDir = self.testSuiteAbsPathName
self.testcaseEnv.initWithKnownTestCasePathName(testCasePathName)
self.testcaseEnv.testcaseArgumentsDict = self.testcaseEnv.testcaseNonEntityDataDict["testcase_args"]
# ======================================================================
# SKIP if this case is IN testcase_to_skip.json or NOT IN testcase_to_run.json
# ======================================================================
testcaseDirName = self.testcaseEnv.testcaseResultsDict["_test_case_name"]
if self.systemTestEnv.printTestDescriptionsOnly:
self.testcaseEnv.printTestCaseDescription(testcaseDirName)
continue
elif self.systemTestEnv.isTestCaseToSkip(self.__class__.__name__, testcaseDirName):
self.log_message("Skipping : " + testcaseDirName)
skipThisTestCase = True
continue
else:
self.testcaseEnv.printTestCaseDescription(testcaseDirName)
system_test_utils.setup_remote_hosts_with_testcase_level_cluster_config(self.systemTestEnv, testCasePathName)
# ============================================================================== #
# ============================================================================== #
# Product Specific Testing Code Starts Here: #
# ============================================================================== #
# ============================================================================== #
# initialize self.testcaseEnv with user-defined environment variables (product specific)
self.testcaseEnv.userDefinedEnvVarDict["stopBackgroundProducer"] = False
self.testcaseEnv.userDefinedEnvVarDict["backgroundProducerStopped"] = False
# initialize signal handler
signal.signal(signal.SIGINT, self.signal_handler)
# TestcaseEnv.testcaseConfigsList initialized by reading testcase properties file:
# system_test/<suite_name>_testsuite/testcase_<n>/testcase_<n>_properties.json
self.testcaseEnv.testcaseConfigsList = system_test_utils.get_json_list_data(
self.testcaseEnv.testcasePropJsonPathName)
# clean up data directories specified in zookeeper.properties and kafka_server_<n>.properties
kafka_system_test_utils.cleanup_data_at_remote_hosts(self.systemTestEnv, self.testcaseEnv)
# create "LOCAL" log directories for metrics, dashboards for each entity under this testcase
# for collecting logs from remote machines
kafka_system_test_utils.generate_testcase_log_dirs(self.systemTestEnv, self.testcaseEnv)
# TestcaseEnv - initialize producer & consumer config / log file pathnames
kafka_system_test_utils.init_entity_props(self.systemTestEnv, self.testcaseEnv)
# generate remote hosts log/config dirs if not exist
kafka_system_test_utils.generate_testcase_log_dirs_in_remote_hosts(self.systemTestEnv, self.testcaseEnv)
# generate properties files for zookeeper, kafka, producer, and consumer:
# 1. copy system_test/<suite_name>_testsuite/config/*.properties to
# system_test/<suite_name>_testsuite/testcase_<n>/config/
# 2. update all properties files in system_test/<suite_name>_testsuite/testcase_<n>/config
# by overriding the settings specified in:
# system_test/<suite_name>_testsuite/testcase_<n>/testcase_<n>_properties.json
kafka_system_test_utils.generate_overriden_props_files(self.testSuiteAbsPathName,
self.testcaseEnv, self.systemTestEnv)
# =============================================
# preparing all entities to start the test
# =============================================
self.log_message("starting zookeepers")
kafka_system_test_utils.start_zookeepers(self.systemTestEnv, self.testcaseEnv)
self.anonLogger.info("sleeping for 2s")
time.sleep(2)
self.log_message("starting brokers")
kafka_system_test_utils.start_brokers(self.systemTestEnv, self.testcaseEnv)
self.anonLogger.info("sleeping for 5s")
time.sleep(5)
self.log_message("creating offset topic")
kafka_system_test_utils.create_topic(self.systemTestEnv, self.testcaseEnv, "__consumer_offsets", 3, 2)
self.anonLogger.info("sleeping for 5s")
time.sleep(5)
# =============================================
# starting producer
# =============================================
self.log_message("starting producer in the background")
kafka_system_test_utils.start_producer_performance(self.systemTestEnv, self.testcaseEnv, False)
msgProducingFreeTimeSec = self.testcaseEnv.testcaseArgumentsDict["message_producing_free_time_sec"]
self.anonLogger.info("sleeping for " + msgProducingFreeTimeSec + " sec to produce some messages")
time.sleep(int(msgProducingFreeTimeSec))
kafka_system_test_utils.start_console_consumers(self.systemTestEnv, self.testcaseEnv)
kafka_system_test_utils.get_leader_for(self.systemTestEnv, self.testcaseEnv, "__consumer_offsets", 0)
# =============================================
# A while-loop to bounce consumers as specified
# by "num_iterations" in testcase_n_properties.json
# =============================================
i = 1
numIterations = int(self.testcaseEnv.testcaseArgumentsDict["num_iteration"])
bouncedEntityDownTimeSec = 10
try:
bouncedEntityDownTimeSec = int(self.testcaseEnv.testcaseArgumentsDict["bounced_entity_downtime_sec"])
except:
pass
# group1 -> offsets partition 0 // has one consumer; eid: 6
# group2 -> offsets partition 1 // has four consumers; eid: 7, 8, 9, 10
offsets_0_leader_entity = kafka_system_test_utils.get_leader_for(self.systemTestEnv, self.testcaseEnv, "__consumer_offsets", 0)
offsets_1_leader_entity = kafka_system_test_utils.get_leader_for(self.systemTestEnv, self.testcaseEnv, "__consumer_offsets", 1)
while i <= numIterations:
self.log_message("Iteration " + str(i) + " of " + str(numIterations))
kafka_system_test_utils.stop_remote_entity(self.systemTestEnv, offsets_0_leader_entity, self.testcaseEnv.entityBrokerParentPidDict[offsets_0_leader_entity])
kafka_system_test_utils.stop_remote_entity(self.systemTestEnv, offsets_1_leader_entity, self.testcaseEnv.entityBrokerParentPidDict[offsets_1_leader_entity])
# =============================================
# Bounce consumers if specified in testcase config
# =============================================
bounceConsumers = self.testcaseEnv.testcaseArgumentsDict["bounce_consumers"]
self.log_message("bounce_consumers flag : " + bounceConsumers)
if (bounceConsumers.lower() == "true"):
clusterConfigList = self.systemTestEnv.clusterEntityConfigDictList
consumerEntityIdList = system_test_utils.get_data_from_list_of_dicts( clusterConfigList, "role", "console_consumer", "entity_id")
for stoppedConsumerEntityId in consumerEntityIdList:
consumerPPID = self.testcaseEnv.entityConsoleConsumerParentPidDict[stoppedConsumerEntityId]
self.log_message("stopping consumer: " + consumerPPID)
kafka_system_test_utils.stop_remote_entity(self.systemTestEnv, stoppedConsumerEntityId, consumerPPID)
self.anonLogger.info("sleeping for " + str(bouncedEntityDownTimeSec) + " sec")
time.sleep(bouncedEntityDownTimeSec)
# leaders would have changed during the above bounce.
self.log_message("starting the previously terminated consumers.")
for stoppedConsumerEntityId in consumerEntityIdList:
# starting previously terminated consumer
kafka_system_test_utils.start_console_consumers(self.systemTestEnv, self.testcaseEnv, stoppedConsumerEntityId)
self.log_message("starting the previously terminated brokers")
kafka_system_test_utils.start_entity_in_background(self.systemTestEnv, self.testcaseEnv, offsets_0_leader_entity)
kafka_system_test_utils.start_entity_in_background(self.systemTestEnv, self.testcaseEnv, offsets_1_leader_entity)
self.anonLogger.info("sleeping for 15s")
time.sleep(15)
i += 1
# while loop
# =============================================
# tell producer to stop
# =============================================
self.testcaseEnv.lock.acquire()
self.testcaseEnv.userDefinedEnvVarDict["stopBackgroundProducer"] = True
time.sleep(1)
self.testcaseEnv.lock.release()
time.sleep(1)
# =============================================
# wait for producer thread's update of
# "backgroundProducerStopped" to be "True"
# =============================================
while 1:
self.testcaseEnv.lock.acquire()
self.logger.info("status of backgroundProducerStopped : [" + \
str(self.testcaseEnv.userDefinedEnvVarDict["backgroundProducerStopped"]) + "]", extra=self.d)
if self.testcaseEnv.userDefinedEnvVarDict["backgroundProducerStopped"]:
time.sleep(1)
self.logger.info("all producer threads completed", extra=self.d)
break
time.sleep(1)
self.testcaseEnv.lock.release()
time.sleep(2)
self.anonLogger.info("sleeping for 15s")
time.sleep(15)
# =============================================
# this testcase is completed - stop all entities
# =============================================
self.log_message("stopping all entities")
for entityId, parentPid in self.testcaseEnv.entityBrokerParentPidDict.items():
kafka_system_test_utils.stop_remote_entity(self.systemTestEnv, entityId, parentPid)
for entityId, parentPid in self.testcaseEnv.entityZkParentPidDict.items():
kafka_system_test_utils.stop_remote_entity(self.systemTestEnv, entityId, parentPid)
# make sure all entities are stopped
kafka_system_test_utils.ps_grep_terminate_running_entity(self.systemTestEnv)
# =============================================
# collect logs from remote hosts
# =============================================
kafka_system_test_utils.collect_logs_from_remote_hosts(self.systemTestEnv, self.testcaseEnv)
# =============================================
# validate the data matched and checksum
# =============================================
self.log_message("validating data matched")
kafka_system_test_utils.validate_data_matched_in_multi_topics_from_single_consumer_producer(self.systemTestEnv, self.testcaseEnv, replicationUtils)
except Exception as e:
self.log_message("Exception while running test {0}".format(e))
traceback.print_exc()
self.testcaseEnv.validationStatusDict["Test completed"] = "FAILED"
finally:
if not skipThisTestCase and not self.systemTestEnv.printTestDescriptionsOnly:
self.log_message("stopping all entities - please wait ...")
kafka_system_test_utils.stop_all_remote_running_processes(self.systemTestEnv, self.testcaseEnv)
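
The get_leader_for calls above ultimately resolve partition leadership from ZooKeeper's partition state znodes. One way to read that state directly, assuming the kazoo client library and Kafka's /brokers/topics/<topic>/partitions/<n>/state layout:

    import json
    from kazoo.client import KazooClient

    def partition_leader(zk_connect, topic, partition):
        # Returns the broker id currently leading the given partition, per its ZooKeeper state znode.
        zk = KazooClient(hosts=zk_connect)
        zk.start()
        try:
            data, _stat = zk.get("/brokers/topics/%s/partitions/%d/state" % (topic, partition))
            return json.loads(data)["leader"]
        finally:
            zk.stop()

    # e.g. partition_leader("localhost:2108", "__consumer_offsets", 0)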

View File

@ -1,95 +0,0 @@
{
"description": {"01":"To Test : 'Basic offset management test.'",
"02":"Set up a Zk and Kafka cluster.",
"03":"Produce messages to a multiple topics - various partition counts.",
"04":"Start multiple consumer groups to read various subsets of above topics.",
"05":"Bounce consumers.",
"06":"Verify that there are no duplicate messages or lost messages on any consumer group.",
"07":"Producer dimensions : mode:sync, acks:-1, comp:0"
},
"testcase_args": {
"bounce_leaders": "false",
"bounce_consumers": "true",
"replica_factor": "3",
"num_partition": "1",
"num_iteration": "1",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15",
"num_messages_to_produce_per_producer_call": "50",
"num_topics_for_auto_generated_string":"1"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2108",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_0.log",
"config_filename": "zookeeper_0.properties"
},
{
"entity_id": "1",
"port": "9091",
"broker.id": "1",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_1_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_1.log",
"config_filename": "kafka_server_1.properties"
},
{
"entity_id": "2",
"port": "9092",
"broker.id": "2",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_2_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_2.log",
"config_filename": "kafka_server_2.properties"
},
{
"entity_id": "3",
"port": "9093",
"broker.id": "3",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_3_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_3.log",
"config_filename": "kafka_server_3.properties"
},
{
"entity_id": "4",
"port": "9094",
"broker.id": "4",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_4_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_4.log",
"config_filename": "kafka_server_4.properties"
},
{
"entity_id": "5",
"topic": "test",
"threads": "3",
"compression-codec": "0",
"message-size": "500",
"message": "1000",
"request-num-acks": "-1",
"sync":"true",
"producer-num-retries":"5",
"log_filename": "producer_performance_10.log",
"config_filename": "producer_performance_10.properties"
},
{
"entity_id": "6",
"topic": "test_0001",
"group.id": "group1",
"consumer-timeout-ms": "30000",
"log_filename": "console_consumer.log",
"config_filename": "console_consumer_6.properties"
}
]
}
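
The "no duplicate messages or lost messages" check described above reduces to comparing the IDs the producer wrote against the IDs each consumer group read back out of its logs. A simplified sketch of that comparison, assuming each produced message carries a unique ID that can be recovered from both the producer and consumer logs:

    def check_consumer_group(produced_ids, consumed_ids):
        # Both arguments are lists of per-message IDs recovered from the producer and consumer logs.
        produced = set(produced_ids)
        consumed = set(consumed_ids)
        lost = sorted(produced - consumed)
        duplicate_count = len(consumed_ids) - len(consumed)
        return {"lost": lost, "duplicates": duplicate_count,
                "ok": not lost and duplicate_count == 0}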

View File

@ -1,147 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1
# Hostname the broker will advertise to consumers. If not set, kafka will use the value returned
# from InetAddress.getLocalHost(). If there are multiple interfaces getLocalHost
# may not be what you want.
#host.name=
############################# Socket Server Settings #############################
# The port the socket server listens on
port=9091
# The number of threads handling network requests
num.network.threads=2
# The number of threads doing disk I/O
num.io.threads=2
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=1048576
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=1048576
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# The directory under which to store log files
log.dir=/tmp/kafka_server_1_logs
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=5
# Overrides for the default given by num.partitions on a per-topic basis
#topic.partition.count.map=topic1:3, topic2:4
############################# Log Flush Policy #############################
# The following configurations control the flush of data to disk. This is the most
# important performance knob in kafka.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data is at greater risk of loss in the event of a crash.
# 2. Latency: Data is not made available to consumers until it is flushed (which adds latency).
# 3. Throughput: The flush is generally the most expensive operation.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
log.flush.interval.ms=1000
# Per-topic overrides for log.flush.interval.ms
#log.flush.intervals.ms.per.topic=topic1:1000, topic2:3000
# The interval (in ms) at which logs are checked to see if they need to be flushed to disk.
log.flush.scheduler.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria is met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824
log.retention.bytes=-1
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
#log.segment.size=536870912
log.segment.bytes=10240
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.cleanup.interval.mins=1
############################# Zookeeper #############################
# Enable connecting to zookeeper
enable.zookeeper=true
# Zk connection string (see zk docs for details).
# This is a comma separated list of host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2108
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=1000000
monitoring.period.secs=1
message.max.bytes=1000000
queued.max.requests=500
log.roll.hours=168
log.index.size.max.bytes=10485760
log.index.interval.bytes=4096
auto.create.topics.enable=true
controller.socket.timeout.ms=30000
default.replication.factor=3
replica.lag.time.max.ms=10000
replica.lag.max.messages=4000
replica.socket.timeout.ms=30000
replica.socket.receive.buffer.bytes=65536
replica.fetch.max.bytes=1048576
replica.fetch.wait.max.ms=500
replica.fetch.min.bytes=4096
num.replica.fetchers=1
offsets.topic.num.partitions=2
offsets.topic.replication.factor=4
kafka.csv.metrics.dir=/home/jkoshy/Projects/kafka/system_test/offset_management_testsuite/testcase_7002/logs/broker-1/metrics
kafka.csv.metrics.reporter.enabled=true
kafka.metrics.polling.interval.secs=5
kafka.metrics.reporters=kafka.metrics.KafkaCSVMetricsReporter

View File

@ -1,147 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=2
# Hostname the broker will advertise to consumers. If not set, kafka will use the value returned
# from InetAddress.getLocalHost(). If there are multiple interfaces getLocalHost
# may not be what you want.
#host.name=
############################# Socket Server Settings #############################
# The port the socket server listens on
port=9092
# The number of threads handling network requests
num.network.threads=2
# The number of threads doing disk I/O
num.io.threads=2
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=1048576
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=1048576
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# The directory under which to store log files
log.dir=/tmp/kafka_server_2_logs
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=5
# Overrides for the default given by num.partitions on a per-topic basis
#topic.partition.count.map=topic1:3, topic2:4
############################# Log Flush Policy #############################
# The following configurations control the flush of data to disk. This is the most
# important performance knob in kafka.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data is at greater risk of loss in the event of a crash.
# 2. Latency: Data is not made available to consumers until it is flushed (which adds latency).
# 3. Throughput: The flush is generally the most expensive operation.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
log.flush.interval.ms=1000
# Per-topic overrides for log.flush.interval.ms
#log.flush.intervals.ms.per.topic=topic1:1000, topic2:3000
# The interval (in ms) at which logs are checked to see if they need to be flushed to disk.
log.flush.scheduler.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria is met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824
log.retention.bytes=-1
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
#log.segment.size=536870912
log.segment.bytes=10240
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.cleanup.interval.mins=1
############################# Zookeeper #############################
# Enable connecting to zookeeper
enable.zookeeper=true
# Zk connection string (see zk docs for details).
# This is a comma separated list of host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2108
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=1000000
monitoring.period.secs=1
message.max.bytes=1000000
queued.max.requests=500
log.roll.hours=168
log.index.size.max.bytes=10485760
log.index.interval.bytes=4096
auto.create.topics.enable=true
controller.socket.timeout.ms=30000
default.replication.factor=3
replica.lag.time.max.ms=10000
replica.lag.max.messages=4000
replica.socket.timeout.ms=30000
replica.socket.receive.buffer.bytes=65536
replica.fetch.max.bytes=1048576
replica.fetch.wait.max.ms=500
replica.fetch.min.bytes=4096
num.replica.fetchers=1
offsets.topic.num.partitions=2
offsets.topic.replication.factor=4
kafka.csv.metrics.dir=/home/jkoshy/Projects/kafka/system_test/offset_management_testsuite/testcase_7002/logs/broker-2/metrics
kafka.csv.metrics.reporter.enabled=true
kafka.metrics.polling.interval.secs=5
kafka.metrics.reporters=kafka.metrics.KafkaCSVMetricsReporter

View File

@ -1,147 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=3
# Hostname the broker will advertise to consumers. If not set, kafka will use the value returned
# from InetAddress.getLocalHost(). If there are multiple interfaces getLocalHost
# may not be what you want.
#host.name=
############################# Socket Server Settings #############################
# The port the socket server listens on
port=9093
# The number of threads handling network requests
num.network.threads=2
# The number of threads doing disk I/O
num.io.threads=2
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=1048576
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=1048576
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# The directory under which to store log files
log.dir=/tmp/kafka_server_3_logs
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=5
# Overrides for the default given by num.partitions on a per-topic basis
#topic.partition.count.map=topic1:3, topic2:4
############################# Log Flush Policy #############################
# The following configurations control the flush of data to disk. This is the most
# important performance knob in kafka.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data is at greater risk of loss in the event of a crash.
# 2. Latency: Data is not made available to consumers until it is flushed (which adds latency).
# 3. Throughput: The flush is generally the most expensive operation.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
log.flush.interval.ms=1000
# Per-topic overrides for log.flush.interval.ms
#log.flush.intervals.ms.per.topic=topic1:1000, topic2:3000
# The interval (in ms) at which logs are checked to see if they need to be flushed to disk.
log.flush.scheduler.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria is met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824
log.retention.bytes=-1
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
#log.segment.size=536870912
log.segment.bytes=10240
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.cleanup.interval.mins=1
############################# Zookeeper #############################
# Enable connecting to zookeeper
enable.zookeeper=true
# Zk connection string (see zk docs for details).
# This is a comma separated list of host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2108
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=1000000
monitoring.period.secs=1
message.max.bytes=1000000
queued.max.requests=500
log.roll.hours=168
log.index.size.max.bytes=10485760
log.index.interval.bytes=4096
auto.create.topics.enable=true
controller.socket.timeout.ms=30000
default.replication.factor=3
replica.lag.time.max.ms=10000
replica.lag.max.messages=4000
replica.socket.timeout.ms=30000
replica.socket.receive.buffer.bytes=65536
replica.fetch.max.bytes=1048576
replica.fetch.wait.max.ms=500
replica.fetch.min.bytes=4096
num.replica.fetchers=1
offsets.topic.num.partitions=2
offsets.topic.replication.factor=4
kafka.csv.metrics.dir=/home/jkoshy/Projects/kafka/system_test/offset_management_testsuite/testcase_7002/logs/broker-3/metrics
kafka.csv.metrics.reporter.enabled=true
kafka.metrics.polling.interval.secs=5
kafka.metrics.reporters=kafka.metrics.KafkaCSVMetricsReporter

View File

@ -1,147 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=4
# Hostname the broker will advertise to consumers. If not set, kafka will use the value returned
# from InetAddress.getLocalHost(). If there are multiple interfaces getLocalHost
# may not be what you want.
#host.name=
############################# Socket Server Settings #############################
# The port the socket server listens on
port=9094
# The number of threads handling network requests
num.network.threads=2
# The number of threads doing disk I/O
num.io.threads=2
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=1048576
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=1048576
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# The directory under which to store log files
log.dir=/tmp/kafka_server_4_logs
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=5
# Overrides for the default given by num.partitions on a per-topic basis
#topic.partition.count.map=topic1:3, topic2:4
############################# Log Flush Policy #############################
# The following configurations control the flush of data to disk. This is the most
# important performance knob in kafka.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data is at greater risk of loss in the event of a crash.
# 2. Latency: Data is not made available to consumers until it is flushed (which adds latency).
# 3. Throughput: The flush is generally the most expensive operation.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
log.flush.interval.ms=1000
# Per-topic overrides for log.flush.interval.ms
#log.flush.intervals.ms.per.topic=topic1:1000, topic2:3000
# The interval (in ms) at which logs are checked to see if they need to be flushed to disk.
log.flush.scheduler.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria is met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824
log.retention.bytes=-1
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
#log.segment.size=536870912
log.segment.bytes=10240
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.cleanup.interval.mins=1
############################# Zookeeper #############################
# Enable connecting to zookeeper
enable.zookeeper=true
# Zk connection string (see zk docs for details).
# This is a comma separated list of host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2108
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=1000000
monitoring.period.secs=1
message.max.bytes=1000000
queued.max.requests=500
log.roll.hours=168
log.index.size.max.bytes=10485760
log.index.interval.bytes=4096
auto.create.topics.enable=true
controller.socket.timeout.ms=30000
default.replication.factor=3
replica.lag.time.max.ms=10000
replica.lag.max.messages=4000
replica.socket.timeout.ms=30000
replica.socket.receive.buffer.bytes=65536
replica.fetch.max.bytes=1048576
replica.fetch.wait.max.ms=500
replica.fetch.min.bytes=4096
num.replica.fetchers=1
offsets.topic.num.partitions=2
offsets.topic.replication.factor=4
kafka.csv.metrics.dir=/home/jkoshy/Projects/kafka/system_test/offset_management_testsuite/testcase_7002/logs/broker-4/metrics
kafka.csv.metrics.reporter.enabled=true
kafka.metrics.polling.interval.secs=5
kafka.metrics.reporters=kafka.metrics.KafkaCSVMetricsReporter

View File

@ -1,24 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# the directory where the snapshot is stored.
dataDir=/tmp/zookeeper_0
# the port at which the clients will connect
clientPort=2108
# disable the per-ip limit on the number of connections since this is a non-production config
maxClientCnxns=0
syncLimit=5
initLimit=10
tickTime=2000
server.1=localhost:2107:2109

View File

@ -1,127 +0,0 @@
{
"description": {"01":"To Test : 'Basic offset management test.'",
"02":"Set up a Zk and Kafka cluster.",
"03":"Produce messages to a multiple topics - various partition counts.",
"04":"Start multiple consumer groups to read various subsets of above topics.",
"05":"Bounce consumers.",
"06":"Verify that there are no duplicate messages or lost messages on any consumer group.",
"07":"Producer dimensions : mode:sync, acks:-1, comp:0"
},
"testcase_args": {
"bounce_leaders": "false",
"bounce_consumers": "true",
"replica_factor": "3",
"num_partition": "1",
"num_iteration": "1",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15",
"num_messages_to_produce_per_producer_call": "50",
"num_topics_for_auto_generated_string":"3"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2108",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_0.log",
"config_filename": "zookeeper_0.properties"
},
{
"entity_id": "1",
"port": "9091",
"broker.id": "1",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_1_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_1.log",
"config_filename": "kafka_server_1.properties"
},
{
"entity_id": "2",
"port": "9092",
"broker.id": "2",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_2_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_2.log",
"config_filename": "kafka_server_2.properties"
},
{
"entity_id": "3",
"port": "9093",
"broker.id": "3",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_3_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_3.log",
"config_filename": "kafka_server_3.properties"
},
{
"entity_id": "4",
"port": "9094",
"broker.id": "4",
"log.segment.bytes": "20480",
"log.dir": "/tmp/kafka_server_4_logs",
"default.replication.factor": "3",
"num.partitions": "5",
"log_filename": "kafka_server_4.log",
"config_filename": "kafka_server_4.properties"
},
{
"entity_id": "5",
"topic": "test",
"threads": "5",
"compression-codec": "0",
"message-size": "500",
"message": "1000",
"request-num-acks": "-1",
"sync":"true",
"producer-num-retries":"5",
"log_filename": "producer_performance_10.log",
"config_filename": "producer_performance_10.properties"
},
{
"entity_id": "6",
"topic": "test_0001",
"group.id": "group1",
"consumer-timeout-ms": "30000",
"log_filename": "console_consumer.log",
"config_filename": "console_consumer_6.properties"
},
{
"entity_id": "7",
"topic": "test_0002",
"group.id": "group2",
"consumer-timeout-ms": "30000",
"log_filename": "console_consumer.log",
"config_filename": "console_consumer_7.properties"
},
{
"entity_id": "8",
"topic": "test_0002",
"group.id": "group2",
"consumer-timeout-ms": "30000",
"log_filename": "console_consumer.log",
"config_filename": "console_consumer_8.properties"
},
{
"entity_id": "9",
"topic": "test_0002",
"group.id": "group2",
"consumer-timeout-ms": "30000",
"log_filename": "console_consumer.log",
"config_filename": "console_consumer_9.properties"
},
{
"entity_id": "10",
"topic": "test_0003",
"group.id": "group2",
"consumer-timeout-ms": "30000",
"log_filename": "console_consumer.log",
"config_filename": "console_consumer_10.properties"
}
]
}

View File

@ -1,9 +0,0 @@
This test produces a large number of messages to a broker. It measures the throughput and verifies
that the amount of data received matches what is expected.
To run this test, do
bin/run-test.sh
The expected output is given in expected.out. There are 2 things to pay attention to:
1. The output should have a line "test passed".
2. The throughput from the producer should be around 300,000 Messages/sec on a typical machine.
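
As a quick sanity check on the numbers above, the produced payload works out to exactly the byte count reported in expected.out:

    # 2,000,000 messages of 200 bytes each, matching "bytes: 400000000" in expected.out
    num_messages = 2000000
    message_size = 200
    assert num_messages * message_size == 400000000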

View File

@ -1,32 +0,0 @@
start the servers ...
start producing 2000000 messages ...
[2011-05-17 14:31:12,568] INFO Creating async producer for broker id = 0 at localhost:9092 (kafka.producer.ProducerPool)
thread 0: 100000 messages sent 3272786.7779 nMsg/sec 3.1212 MBs/sec
thread 0: 200000 messages sent 3685956.5057 nMsg/sec 3.5152 MBs/sec
thread 0: 300000 messages sent 3717472.1190 nMsg/sec 3.5453 MBs/sec
thread 0: 400000 messages sent 3730647.2673 nMsg/sec 3.5578 MBs/sec
thread 0: 500000 messages sent 3730647.2673 nMsg/sec 3.5578 MBs/sec
thread 0: 600000 messages sent 3722315.2801 nMsg/sec 3.5499 MBs/sec
thread 0: 700000 messages sent 3718854.5928 nMsg/sec 3.5466 MBs/sec
thread 0: 800000 messages sent 3714020.4271 nMsg/sec 3.5420 MBs/sec
thread 0: 900000 messages sent 3713330.8578 nMsg/sec 3.5413 MBs/sec
thread 0: 1000000 messages sent 3710575.1391 nMsg/sec 3.5387 MBs/sec
thread 0: 1100000 messages sent 3711263.6853 nMsg/sec 3.5393 MBs/sec
thread 0: 1200000 messages sent 3716090.6726 nMsg/sec 3.5439 MBs/sec
thread 0: 1300000 messages sent 3709198.8131 nMsg/sec 3.5374 MBs/sec
thread 0: 1400000 messages sent 3705762.4606 nMsg/sec 3.5341 MBs/sec
thread 0: 1500000 messages sent 3701647.2330 nMsg/sec 3.5302 MBs/sec
thread 0: 1600000 messages sent 3696174.4594 nMsg/sec 3.5249 MBs/sec
thread 0: 1700000 messages sent 3703703.7037 nMsg/sec 3.5321 MBs/sec
thread 0: 1800000 messages sent 3703017.9596 nMsg/sec 3.5315 MBs/sec
thread 0: 1900000 messages sent 3700277.5208 nMsg/sec 3.5289 MBs/sec
thread 0: 2000000 messages sent 3702332.4695 nMsg/sec 3.5308 MBs/sec
[2011-05-17 14:33:01,102] INFO Closing all async producers (kafka.producer.ProducerPool)
[2011-05-17 14:33:01,103] INFO Closed AsyncProducer (kafka.producer.async.AsyncProducer)
Total Num Messages: 2000000 bytes: 400000000 in 108.678 secs
Messages/sec: 18402.9886
MB/sec: 3.5101
wait for data to be persisted
test passed
bin/../../../bin/kafka-server-start.sh: line 11: 21110 Terminated $(dirname $0)/kafka-run-class.sh kafka.Kafka $@
bin/../../../bin/zookeeper-server-start.sh: line 9: 21109 Terminated $(dirname $0)/kafka-run-class.sh org.apache.zookeeper.server.quorum.QuorumPeerMain $@
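For reference, the summary figures above follow from the script's parameters (num_messages=2000000, message_size=200): 2,000,000 x 200 = 400,000,000 bytes, 400,000,000 / 1,048,576 / 108.678 s ≈ 3.51 MB/sec, and 2,000,000 / 108.678 s ≈ 18,403 messages/sec.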

View File

@ -1,61 +0,0 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
num_messages=2000000
message_size=200
base_dir=$(dirname $0)/..
rm -rf /tmp/zookeeper
rm -rf /tmp/kafka-logs
echo "start the servers ..."
$base_dir/../../bin/zookeeper-server-start.sh $base_dir/config/zookeeper.properties 2>&1 > $base_dir/zookeeper.log &
$base_dir/../../bin/kafka-server-start.sh $base_dir/config/server.properties 2>&1 > $base_dir/kafka.log &
sleep 4
echo "start producing $num_messages messages ..."
$base_dir/../../bin/kafka-run-class.sh kafka.tools.ProducerPerformance --brokerinfo broker.list=0:localhost:9092 --topics test01 --messages $num_messages --message-size $message_size --batch-size 200 --threads 1 --reporting-interval 100000 --async --compression-codec 1
echo "wait for data to be persisted"
cur_offset="-1"
quit=0
while [ $quit -eq 0 ]
do
    sleep 2
    target_size=`$base_dir/../../bin/kafka-run-class.sh kafka.tools.GetOffsetShell --server kafka://localhost:9092 --topic test01 --partition 0 --time -1 --offsets 1 | tail -1`
    if [ $target_size -eq $cur_offset ]
    then
        quit=1
    fi
    cur_offset=$target_size
done
sleep 2
actual_size=`$base_dir/../../bin/kafka-run-class.sh kafka.tools.GetOffsetShell --server kafka://localhost:9092 --topic test01 --partition 0 --time -1 --offsets 1 | tail -1`
num_batches=`expr $num_messages \/ $message_size`
expected_size=`expr $num_batches \* 262`
if [ $actual_size != $expected_size ]
then
    echo "actual size: $actual_size expected size: $expected_size test failed!!! look at it!!!"
else
    echo "test passed"
fi
ps ax | grep -i 'kafka.kafka' | grep -v grep | awk '{print $1}' | xargs kill -15 > /dev/null
sleep 2
ps ax | grep -i 'QuorumPeerMain' | grep -v grep | awk '{print $1}' | xargs kill -15 > /dev/null

View File

@ -1,61 +0,0 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
num_messages=2000000
message_size=200
base_dir=$(dirname $0)/..
rm -rf /tmp/zookeeper
rm -rf /tmp/kafka-logs
echo "start the servers ..."
$base_dir/../../bin/zookeeper-server-start.sh $base_dir/config/zookeeper.properties 2>&1 > $base_dir/zookeeper.log &
$base_dir/../../bin/kafka-server-start.sh $base_dir/config/server.properties 2>&1 > $base_dir/kafka.log &
sleep 4
echo "start producing $num_messages messages ..."
$base_dir/../../bin/kafka-run-class.sh kafka.tools.ProducerPerformance --brokerinfo broker.list=0:localhost:9092 --topics test01 --messages $num_messages --message-size $message_size --batch-size 200 --threads 1 --reporting-interval 100000 --async
echo "wait for data to be persisted"
cur_offset="-1"
quit=0
while [ $quit -eq 0 ]
do
    sleep 2
    target_size=`$base_dir/../../bin/kafka-run-class.sh kafka.tools.GetOffsetShell --server kafka://localhost:9092 --topic test01 --partition 0 --time -1 --offsets 1 | tail -1`
    if [ $target_size -eq $cur_offset ]
    then
        quit=1
    fi
    cur_offset=$target_size
done
sleep 2
actual_size=`$base_dir/../../bin/kafka-run-class.sh kafka.tools.GetOffsetShell --server kafka://localhost:9092 --topic test01 --partition 0 --time -1 --offsets 1 | tail -1`
msg_full_size=`expr $message_size + 10`
expected_size=`expr $num_messages \* $msg_full_size`
if [ $actual_size != $expected_size ]
then
    echo "actual size: $actual_size expected size: $expected_size test failed!!! look at it!!!"
else
    echo "test passed"
fi
ps ax | grep -i 'kafka.kafka' | grep -v grep | awk '{print $1}' | xargs kill -15 > /dev/null
sleep 2
ps ax | grep -i 'QuorumPeerMain' | grep -v grep | awk '{print $1}' | xargs kill -15 > /dev/null
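The size check above expects the final log offset to equal num_messages * (message_size + 10): with num_messages=2000000 and message_size=200 this is 2,000,000 x 210 = 420,000,000 bytes. The extra 10 bytes per message presumably cover the fixed per-message framing overhead in the on-disk log format of this Kafka version.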

View File

@ -1,78 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
# the id of the broker
broker.id=0
# hostname of broker. If not set, it will use the value returned
# by getLocalHost. If there are multiple interfaces, getLocalHost
# may not be what you want.
# host.name=
# number of logical partitions on this broker
num.partitions=1
# the port the socket server runs on
port=9092
# the number of processor threads the socket server uses. Defaults to the number of cores on the machine
num.threads=8
# the directory in which to store log files
log.dir=/tmp/kafka-logs
# the send buffer used by the socket server
socket.send.buffer.bytes=1048576
# the receive buffer used by the socket server
socket.receive.buffer.bytes=1048576
# the maximum size of a log segment
log.segment.bytes=536870912
# the interval between running cleanup on the logs
log.cleanup.interval.mins=1
# the minimum age of a log file to be eligible for deletion
log.retention.hours=168
#the number of messages to accept without flushing the log to disk
log.flush.interval.messages=600
#set the following properties to use zookeeper
# enable connecting to zookeeper
enable.zookeeper=true
# zk connection string
# comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
zookeeper.connect=localhost:2181
# timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=1000000
# time based topic flush intervals in ms
#log.flush.intervals.ms.per.topic=topic:1000
# default time based flush interval in ms
log.flush.interval.ms=1000
# time based log flusher scheduler interval in ms
log.flush.scheduler.interval.ms=1000
# topic partition count map
# topic.partition.count.map=topic1:3, topic2:4

View File

@ -1,18 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# the directory where the snapshot is stored.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181

View File

@ -1,139 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
# Hostname the broker will advertise to consumers. If not set, kafka will use the value returned
# from InetAddress.getLocalHost(). If there are multiple interfaces getLocalHost
# may not be what you want.
#host.name=
############################# Socket Server Settings #############################
# The port the socket server listens on
port=9091
# The number of threads handling network requests
num.network.threads=2
# The number of threads doing disk I/O
num.io.threads=2
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=1048576
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=1048576
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# The directory under which to store log files
log.dir=/tmp/kafka_server_logs
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=5
# Overrides for the default given by num.partitions on a per-topic basis
#topic.partition.count.map=topic1:3, topic2:4
############################# Log Flush Policy #############################
# The following configurations control the flush of data to disk. This is the most
# important performance knob in kafka.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data is at greater risk of loss in the event of a crash.
# 2. Latency: Data is not made available to consumers until it is flushed (which adds latency).
# 3. Throughput: The flush is generally the most expensive operation.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
log.flush.interval.ms=1000
# Per-topic overrides for log.flush.interval.ms
#log.flush.intervals.ms.per.topic=topic1:1000, topic2:3000
# The interval (in ms) at which logs are checked to see if they need to be flushed to disk.
log.flush.scheduler.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria is met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824
log.retention.bytes=-1
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
#log.segment.size=536870912
log.segment.bytes=102400
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.cleanup.interval.mins=1
############################# Zookeeper #############################
# Enable connecting to zookeeper
enable.zookeeper=true
# Zk connection string (see zk docs for details).
# This is a comma separated list of host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
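# e.g. with a chroot appended: "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002/kafka"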
zookeeper.connect=localhost:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=1000000
monitoring.period.secs=1
message.max.bytes=1000000
queued.max.requests=500
log.roll.hours=168
log.index.size.max.bytes=10485760
log.index.interval.bytes=4096
auto.create.topics.enable=true
controller.socket.timeout.ms=30000
default.replication.factor=1
replica.lag.time.max.ms=10000
replica.lag.max.messages=4000
replica.socket.timeout.ms=30000
replica.socket.receive.buffer.bytes=65536
replica.fetch.max.bytes=1048576
replica.fetch.wait.max.ms=500
replica.fetch.min.bytes=1
num.replica.fetchers=1

View File

@ -1,20 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# the directory where the snapshot is stored.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections since this is a non-production config
maxClientCnxns=0

View File

@ -1,461 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#!/usr/bin/env python
# ===================================
# replica_basic_test.py
# ===================================
import inspect
import logging
import os
import pprint
import signal
import subprocess
import sys
import time
import traceback
from system_test_env import SystemTestEnv
sys.path.append(SystemTestEnv.SYSTEM_TEST_UTIL_DIR)
from setup_utils import SetupUtils
from replication_utils import ReplicationUtils
import system_test_utils
from testcase_env import TestcaseEnv
# product specific: Kafka
import kafka_system_test_utils
import metrics
class ReplicaBasicTest(ReplicationUtils, SetupUtils):
testModuleAbsPathName = os.path.realpath(__file__)
testSuiteAbsPathName = os.path.abspath(os.path.dirname(testModuleAbsPathName))
def __init__(self, systemTestEnv):
# SystemTestEnv - provides cluster level environment settings
# such as entity_id, hostname, kafka_home, java_home which
# are available in a list of dictionary named
# "clusterEntityConfigDictList"
self.systemTestEnv = systemTestEnv
super(ReplicaBasicTest, self).__init__(self)
# dict to pass user-defined attributes to logger argument: "extra"
d = {'name_of_class': self.__class__.__name__}
def signal_handler(self, signal, frame):
self.log_message("Interrupt detected - User pressed Ctrl+c")
# perform the necessary cleanup here when user presses Ctrl+c and it may be product specific
self.log_message("stopping all entities - please wait ...")
kafka_system_test_utils.stop_all_remote_running_processes(self.systemTestEnv, self.testcaseEnv)
sys.exit(1)
def runTest(self):
# ======================================================================
# get all testcase directories under this testsuite
# ======================================================================
testCasePathNameList = system_test_utils.get_dir_paths_with_prefix(
self.testSuiteAbsPathName, SystemTestEnv.SYSTEM_TEST_CASE_PREFIX)
testCasePathNameList.sort()
replicationUtils = ReplicationUtils(self)
# =============================================================
# launch each testcase one by one: testcase_1, testcase_2, ...
# =============================================================
for testCasePathName in testCasePathNameList:
skipThisTestCase = False
try:
# ======================================================================
# A new instance of TestcaseEnv to keep track of this testcase's env vars
# and initialize some env vars as testCasePathName is available now
# ======================================================================
self.testcaseEnv = TestcaseEnv(self.systemTestEnv, self)
self.testcaseEnv.testSuiteBaseDir = self.testSuiteAbsPathName
self.testcaseEnv.initWithKnownTestCasePathName(testCasePathName)
self.testcaseEnv.testcaseArgumentsDict = self.testcaseEnv.testcaseNonEntityDataDict["testcase_args"]
# ======================================================================
# SKIP if this case is IN testcase_to_skip.json or NOT IN testcase_to_run.json
# ======================================================================
testcaseDirName = self.testcaseEnv.testcaseResultsDict["_test_case_name"]
if self.systemTestEnv.printTestDescriptionsOnly:
self.testcaseEnv.printTestCaseDescription(testcaseDirName)
continue
elif self.systemTestEnv.isTestCaseToSkip(self.__class__.__name__, testcaseDirName):
self.log_message("Skipping : " + testcaseDirName)
skipThisTestCase = True
continue
else:
self.testcaseEnv.printTestCaseDescription(testcaseDirName)
system_test_utils.setup_remote_hosts_with_testcase_level_cluster_config(self.systemTestEnv, testCasePathName)
# ============================================================================== #
# ============================================================================== #
# Product Specific Testing Code Starts Here: #
# ============================================================================== #
# ============================================================================== #
# get optional testcase arguments
logRetentionTest = "false"
try:
logRetentionTest = self.testcaseEnv.testcaseArgumentsDict["log_retention_test"]
except:
pass
consumerMultiTopicsMode = "false"
try:
consumerMultiTopicsMode = self.testcaseEnv.testcaseArgumentsDict["consumer_multi_topics_mode"]
except:
pass
autoCreateTopic = "false"
try:
autoCreateTopic = self.testcaseEnv.testcaseArgumentsDict["auto_create_topic"]
except:
pass
# initialize self.testcaseEnv with user-defined environment variables (product specific)
self.testcaseEnv.userDefinedEnvVarDict["zkConnectStr"] = ""
self.testcaseEnv.userDefinedEnvVarDict["stopBackgroundProducer"] = False
self.testcaseEnv.userDefinedEnvVarDict["backgroundProducerStopped"] = False
self.testcaseEnv.userDefinedEnvVarDict["leaderElectionLatencyList"] = []
# initialize signal handler
signal.signal(signal.SIGINT, self.signal_handler)
# TestcaseEnv.testcaseConfigsList initialized by reading testcase properties file:
# system_test/<suite_name>_testsuite/testcase_<n>/testcase_<n>_properties.json
self.testcaseEnv.testcaseConfigsList = system_test_utils.get_json_list_data(
self.testcaseEnv.testcasePropJsonPathName)
# clean up data directories specified in zookeeper.properties and kafka_server_<n>.properties
kafka_system_test_utils.cleanup_data_at_remote_hosts(self.systemTestEnv, self.testcaseEnv)
# create "LOCAL" log directories for metrics, dashboards for each entity under this testcase
# for collecting logs from remote machines
kafka_system_test_utils.generate_testcase_log_dirs(self.systemTestEnv, self.testcaseEnv)
# TestcaseEnv - initialize producer & consumer config / log file pathnames
kafka_system_test_utils.init_entity_props(self.systemTestEnv, self.testcaseEnv)
# generate remote hosts log/config dirs if not exist
kafka_system_test_utils.generate_testcase_log_dirs_in_remote_hosts(self.systemTestEnv, self.testcaseEnv)
# generate properties files for zookeeper, kafka, producer, consumer:
# 1. copy system_test/<suite_name>_testsuite/config/*.properties to
# system_test/<suite_name>_testsuite/testcase_<n>/config/
# 2. update all properties files in system_test/<suite_name>_testsuite/testcase_<n>/config
# by overriding the settings specified in:
# system_test/<suite_name>_testsuite/testcase_<n>/testcase_<n>_properties.json
kafka_system_test_utils.generate_overriden_props_files(self.testSuiteAbsPathName,
self.testcaseEnv, self.systemTestEnv)
# =============================================
# preparing all entities to start the test
# =============================================
self.log_message("starting zookeepers")
kafka_system_test_utils.start_zookeepers(self.systemTestEnv, self.testcaseEnv)
self.anonLogger.info("sleeping for 2s")
time.sleep(2)
self.log_message("starting brokers")
kafka_system_test_utils.start_brokers(self.systemTestEnv, self.testcaseEnv)
self.anonLogger.info("sleeping for 5s")
time.sleep(5)
if autoCreateTopic.lower() == "false":
self.log_message("creating topics")
kafka_system_test_utils.create_topic_for_producer_performance(self.systemTestEnv, self.testcaseEnv)
self.anonLogger.info("sleeping for 5s")
time.sleep(5)
# =============================================
# start ConsoleConsumer if this is a Log Retention test
# =============================================
if logRetentionTest.lower() == "true":
self.log_message("starting consumer in the background")
kafka_system_test_utils.start_console_consumer(self.systemTestEnv, self.testcaseEnv)
time.sleep(1)
# =============================================
# starting producer
# =============================================
self.log_message("starting producer in the background")
kafka_system_test_utils.start_producer_performance(self.systemTestEnv, self.testcaseEnv, False)
msgProducingFreeTimeSec = self.testcaseEnv.testcaseArgumentsDict["message_producing_free_time_sec"]
self.anonLogger.info("sleeping for " + msgProducingFreeTimeSec + " sec to produce some messages")
time.sleep(int(msgProducingFreeTimeSec))
# =============================================
# A while-loop to bounce leader as specified
# by "num_iterations" in testcase_n_properties.json
# =============================================
i = 1
numIterations = int(self.testcaseEnv.testcaseArgumentsDict["num_iteration"])
brokerType = self.testcaseEnv.testcaseArgumentsDict["broker_type"]
bounceBrokerFlag = self.testcaseEnv.testcaseArgumentsDict["bounce_broker"]
while i <= numIterations:
self.log_message("Iteration " + str(i) + " of " + str(numIterations))
self.log_message("bounce_broker flag : " + bounceBrokerFlag)
leaderDict = None
controllerDict = None
stoppedBrokerEntityId = ""
# ==============================================
# Find out the entity id for the stopping broker
# ==============================================
if brokerType == "leader" or brokerType == "follower":
self.log_message("looking up leader")
leaderDict = kafka_system_test_utils.get_leader_attributes(self.systemTestEnv, self.testcaseEnv)
# ==========================
# leaderDict looks like this:
# ==========================
#{'entity_id': u'3',
# 'partition': '0',
# 'timestamp': 1345050255.8280001,
# 'hostname': u'localhost',
# 'topic': 'test_1',
# 'brokerid': '3'}
if brokerType == "leader":
stoppedBrokerEntityId = leaderDict["entity_id"]
self.log_message("Found leader with entity id: " + stoppedBrokerEntityId)
else: # Follower
self.log_message("looking up follower")
# a list of all brokers
brokerEntityIdList = system_test_utils.get_data_from_list_of_dicts(self.systemTestEnv.clusterEntityConfigDictList, "role", "broker", "entity_id")
# we pick the first non-leader broker as the follower
firstFollowerEntityId = None
for brokerEntityId in brokerEntityIdList:
if brokerEntityId != leaderDict["entity_id"]:
firstFollowerEntityId = brokerEntityId
break
stoppedBrokerEntityId = firstFollowerEntityId
self.log_message("Found follower with entity id: " + stoppedBrokerEntityId)
elif brokerType == "controller":
self.log_message("looking up controller")
controllerDict = kafka_system_test_utils.get_controller_attributes(self.systemTestEnv, self.testcaseEnv)
# ==========================
# controllerDict looks like this:
# ==========================
#{'entity_id': u'3',
# 'timestamp': 1345050255.8280001,
# 'hostname': u'localhost',
# 'brokerid': '3'}
stoppedBrokerEntityId = controllerDict["entity_id"]
self.log_message("Found controller with entity id: " + stoppedBrokerEntityId)
# =============================================
# Bounce the broker
# =============================================
if bounceBrokerFlag.lower() == "true":
if brokerType == "leader":
# validate to see if leader election is successful
self.log_message("validating leader election")
kafka_system_test_utils.validate_leader_election_successful(self.testcaseEnv, leaderDict, self.testcaseEnv.validationStatusDict)
# trigger leader re-election by stopping leader to get re-election latency
#reelectionLatency = kafka_system_test_utils.get_reelection_latency(self.systemTestEnv, self.testcaseEnv, leaderDict, self.leaderAttributesDict)
#latencyKeyName = "Leader Election Latency - iter " + str(i) + " brokerid " + leaderDict["brokerid"]
#self.testcaseEnv.validationStatusDict[latencyKeyName] = str("{0:.2f}".format(reelectionLatency * 1000)) + " ms"
#self.testcaseEnv.userDefinedEnvVarDict["leaderElectionLatencyList"].append("{0:.2f}".format(reelectionLatency * 1000))
elif brokerType == "follower":
# stopping Follower
self.log_message("stopping follower with entity id: " + firstFollowerEntityId)
kafka_system_test_utils.stop_remote_entity(self.systemTestEnv, firstFollowerEntityId, self.testcaseEnv.entityBrokerParentPidDict[firstFollowerEntityId])
elif brokerType == "controller":
# stopping Controller
self.log_message("stopping controller : " + controllerDict["brokerid"])
kafka_system_test_utils.stop_remote_entity(self.systemTestEnv, controllerDict["entity_id"], self.testcaseEnv.entityBrokerParentPidDict[controllerDict["entity_id"]])
brokerDownTimeInSec = 5
try:
brokerDownTimeInSec = int(self.testcaseEnv.testcaseArgumentsDict["broker_down_time_in_sec"])
except:
pass # take default
time.sleep(brokerDownTimeInSec)
# starting previously terminated broker
self.log_message("starting the previously terminated broker")
kafka_system_test_utils.start_entity_in_background(self.systemTestEnv, self.testcaseEnv, stoppedBrokerEntityId)
else:
# GC Pause simulation
pauseTime = None
try:
hostname = leaderDict["hostname"]
pauseTime = self.testcaseEnv.testcaseArgumentsDict["pause_time_in_seconds"]
parentPid = self.testcaseEnv.entityBrokerParentPidDict[leaderDict["entity_id"]]
pidStack = system_test_utils.get_remote_child_processes(hostname, parentPid)
system_test_utils.simulate_garbage_collection_pause_in_remote_process(hostname, pidStack, pauseTime)
except:
pass
self.anonLogger.info("sleeping for 60s")
time.sleep(60)
i += 1
# while loop
# update Leader Election Latency MIN/MAX to testcaseEnv.validationStatusDict
#self.testcaseEnv.validationStatusDict["Leader Election Latency MIN"] = None
#try:
# self.testcaseEnv.validationStatusDict["Leader Election Latency MIN"] = \
# min(self.testcaseEnv.userDefinedEnvVarDict["leaderElectionLatencyList"])
#except:
# pass
#
#self.testcaseEnv.validationStatusDict["Leader Election Latency MAX"] = None
#try:
# self.testcaseEnv.validationStatusDict["Leader Election Latency MAX"] = \
# max(self.testcaseEnv.userDefinedEnvVarDict["leaderElectionLatencyList"])
#except:
# pass
# =============================================
# tell producer to stop
# =============================================
self.testcaseEnv.lock.acquire()
self.testcaseEnv.userDefinedEnvVarDict["stopBackgroundProducer"] = True
time.sleep(1)
self.testcaseEnv.lock.release()
time.sleep(1)
# =============================================
# wait for producer thread's update of
# "backgroundProducerStopped" to be "True"
# =============================================
while 1:
self.testcaseEnv.lock.acquire()
self.logger.info("status of backgroundProducerStopped : [" + \
str(self.testcaseEnv.userDefinedEnvVarDict["backgroundProducerStopped"]) + "]", extra=self.d)
if self.testcaseEnv.userDefinedEnvVarDict["backgroundProducerStopped"]:
time.sleep(1)
self.testcaseEnv.lock.release()
self.logger.info("all producer threads completed", extra=self.d)
break
time.sleep(1)
self.testcaseEnv.lock.release()
time.sleep(2)
# =============================================
# collect logs from remote hosts to find the
# minimum common offset of a certain log
# segment file among all replicas
# =============================================
minStartingOffsetDict = None
if logRetentionTest.lower() == "true":
self.anonLogger.info("sleeping for 60s to make sure log truncation is completed")
time.sleep(60)
kafka_system_test_utils.collect_logs_from_remote_hosts(self.systemTestEnv, self.testcaseEnv)
minStartingOffsetDict = kafka_system_test_utils.getMinCommonStartingOffset(self.systemTestEnv, self.testcaseEnv)
print
pprint.pprint(minStartingOffsetDict)
# =============================================
# starting debug consumer
# =============================================
if consumerMultiTopicsMode.lower() == "false":
self.log_message("starting debug consumers in the background")
kafka_system_test_utils.start_simple_consumer(self.systemTestEnv, self.testcaseEnv, minStartingOffsetDict)
self.anonLogger.info("sleeping for 10s")
time.sleep(10)
# =============================================
# starting console consumer
# =============================================
if logRetentionTest.lower() == "false":
self.log_message("starting consumer in the background")
kafka_system_test_utils.start_console_consumer(self.systemTestEnv, self.testcaseEnv)
time.sleep(10)
# =============================================
# this testcase is completed - stop all entities
# =============================================
self.log_message("stopping all entities")
for entityId, parentPid in self.testcaseEnv.entityBrokerParentPidDict.items():
kafka_system_test_utils.stop_remote_entity(self.systemTestEnv, entityId, parentPid)
for entityId, parentPid in self.testcaseEnv.entityZkParentPidDict.items():
kafka_system_test_utils.stop_remote_entity(self.systemTestEnv, entityId, parentPid)
# make sure all entities are stopped
kafka_system_test_utils.ps_grep_terminate_running_entity(self.systemTestEnv)
# =============================================
# collect logs from remote hosts
# =============================================
kafka_system_test_utils.collect_logs_from_remote_hosts(self.systemTestEnv, self.testcaseEnv)
# =============================================
# validate the data matched and checksum
# =============================================
self.log_message("validating data matched")
if logRetentionTest.lower() == "true":
kafka_system_test_utils.validate_data_matched(self.systemTestEnv, self.testcaseEnv, replicationUtils)
elif consumerMultiTopicsMode.lower() == "true":
kafka_system_test_utils.validate_data_matched_in_multi_topics_from_single_consumer_producer(
self.systemTestEnv, self.testcaseEnv, replicationUtils)
else:
kafka_system_test_utils.validate_simple_consumer_data_matched_across_replicas(self.systemTestEnv, self.testcaseEnv)
kafka_system_test_utils.validate_broker_log_segment_checksum(self.systemTestEnv, self.testcaseEnv)
kafka_system_test_utils.validate_data_matched(self.systemTestEnv, self.testcaseEnv, replicationUtils)
kafka_system_test_utils.validate_index_log(self.systemTestEnv, self.testcaseEnv)
# =============================================
# draw graphs
# =============================================
metrics.draw_all_graphs(self.systemTestEnv.METRICS_PATHNAME,
self.testcaseEnv,
self.systemTestEnv.clusterEntityConfigDictList)
# build dashboard, one for each role
metrics.build_all_dashboards(self.systemTestEnv.METRICS_PATHNAME,
self.testcaseEnv.testCaseDashboardsDir,
self.systemTestEnv.clusterEntityConfigDictList)
except Exception as e:
self.log_message("Exception while running test {0}".format(e))
traceback.print_exc()
self.testcaseEnv.validationStatusDict["Test completed"] = "FAILED"
finally:
if not skipThisTestCase and not self.systemTestEnv.printTestDescriptionsOnly:
self.log_message("stopping all entities - please wait ...")
kafka_system_test_utils.stop_all_remote_running_processes(self.systemTestEnv, self.testcaseEnv)

View File

@ -1,85 +0,0 @@
{
"description": {"01":"Replication Basic : Base Test",
"02":"Produce and consume messages to a single topic - single partition.",
"03":"This test sends messages to 3 replicas",
"04":"At the end it verifies the log size and contents",
"05":"Use a consumer to verify no message loss.",
"06":"Producer dimensions : mode:sync, acks:-1, comp:0",
"07":"Log segment size : 10000000"
},
"testcase_args": {
"broker_type": "leader",
"bounce_broker": "false",
"replica_factor": "3",
"num_partition": "1",
"num_iteration": "1",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15",
"num_messages_to_produce_per_producer_call": "50"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2188",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_2188.log",
"config_filename": "zookeeper_2188.properties"
},
{
"entity_id": "1",
"port": "9091",
"broker.id": "1",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_1_logs",
"default.replication.factor": "3",
"num.partitions": "1",
"log_filename": "kafka_server_9091.log",
"config_filename": "kafka_server_9091.properties"
},
{
"entity_id": "2",
"port": "9092",
"broker.id": "2",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_2_logs",
"default.replication.factor": "3",
"num.partitions": "1",
"log_filename": "kafka_server_9092.log",
"config_filename": "kafka_server_9092.properties"
},
{
"entity_id": "3",
"port": "9093",
"broker.id": "3",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_3_logs",
"default.replication.factor": "3",
"num.partitions": "1",
"log_filename": "kafka_server_9093.log",
"config_filename": "kafka_server_9093.properties"
},
{
"entity_id": "4",
"new-producer":"true",
"topic": "test_1",
"threads": "5",
"compression-codec": "0",
"message-size": "500",
"message": "500",
"request-num-acks": "-1",
"producer-retry-backoff-ms": "300",
"sync":"true",
"log_filename": "producer_performance.log",
"config_filename": "producer_performance.properties"
},
{
"entity_id": "5",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"zookeeper": "localhost:2188",
"log_filename": "console_consumer.log",
"config_filename": "console_consumer.properties"
}
]
}
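For reference, a minimal sketch of reading a testcase properties file such as the one above into suite-level arguments and per-entity overrides; the helper names and the hard-coded path are illustrative only (the framework itself goes through system_test_utils.get_json_list_data):

#!/usr/bin/env python
import json

def load_testcase_properties(path):
    # parse a testcase_<n>_properties.json file like the one shown above
    with open(path) as f:
        data = json.load(f)
    return data["testcase_args"], data["entities"]

def overrides_for(entities, entity_id):
    # return the override dict for one entity_id, or {} if it is absent
    for entity in entities:
        if entity.get("entity_id") == entity_id:
            return dict(entity)
    return {}

if __name__ == "__main__":
    # path is illustrative
    args, entities = load_testcase_properties("testcase_1_properties.json")
    print(args["replica_factor"])
    print(overrides_for(entities, "4").get("compression-codec"))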

View File

@ -1,85 +0,0 @@
{
"description": {"01":"Replication Basic : 1. comp => 1",
"02":"Produce and consume messages to a single topic - single partition.",
"03":"This test sends messages to 3 replicas",
"04":"At the end it verifies the log size and contents",
"05":"Use a consumer to verify no message loss.",
"06":"Producer dimensions : mode:sync, acks:-1, comp:1",
"07":"Log segment size : 10000000"
},
"testcase_args": {
"broker_type": "leader",
"bounce_broker": "false",
"replica_factor": "3",
"num_partition": "1",
"num_iteration": "1",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15",
"num_messages_to_produce_per_producer_call": "50"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2188",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_2188.log",
"config_filename": "zookeeper_2188.properties"
},
{
"entity_id": "1",
"port": "9091",
"broker.id": "1",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_1_logs",
"default.replication.factor": "3",
"num.partitions": "1",
"log_filename": "kafka_server_9091.log",
"config_filename": "kafka_server_9091.properties"
},
{
"entity_id": "2",
"port": "9092",
"broker.id": "2",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_2_logs",
"default.replication.factor": "3",
"num.partitions": "1",
"log_filename": "kafka_server_9092.log",
"config_filename": "kafka_server_9092.properties"
},
{
"entity_id": "3",
"port": "9093",
"broker.id": "3",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_3_logs",
"default.replication.factor": "3",
"num.partitions": "1",
"log_filename": "kafka_server_9093.log",
"config_filename": "kafka_server_9093.properties"
},
{
"entity_id": "4",
"new-producer":"true",
"topic": "test_1",
"threads": "5",
"compression-codec": "1",
"message-size": "500",
"message": "100",
"request-num-acks": "-1",
"producer-retry-backoff-ms": "300",
"sync":"true",
"log_filename": "producer_performance.log",
"config_filename": "producer_performance.properties"
},
{
"entity_id": "5",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"zookeeper": "localhost:2188",
"log_filename": "console_consumer.log",
"config_filename": "console_consumer.properties"
}
]
}

View File

@ -1,85 +0,0 @@
{
"description": {"01":"Replication Basic : 1. acks => 1; 2. comp => 1",
"02":"Produce and consume messages to a single topic - single partition.",
"03":"This test sends messages to 3 replicas",
"04":"At the end it verifies the log size and contents",
"05":"Use a consumer to verify no message loss.",
"06":"Producer dimensions : mode:sync, acks:1, comp:1",
"07":"Log segment size : 10000000"
},
"testcase_args": {
"broker_type": "leader",
"bounce_broker": "false",
"replica_factor": "3",
"num_partition": "1",
"num_iteration": "1",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15",
"num_messages_to_produce_per_producer_call": "50"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2188",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_2188.log",
"config_filename": "zookeeper_2188.properties"
},
{
"entity_id": "1",
"port": "9091",
"broker.id": "1",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_1_logs",
"default.replication.factor": "3",
"num.partitions": "1",
"log_filename": "kafka_server_9091.log",
"config_filename": "kafka_server_9091.properties"
},
{
"entity_id": "2",
"port": "9092",
"broker.id": "2",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_2_logs",
"default.replication.factor": "3",
"num.partitions": "1",
"log_filename": "kafka_server_9092.log",
"config_filename": "kafka_server_9092.properties"
},
{
"entity_id": "3",
"port": "9093",
"broker.id": "3",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_3_logs",
"default.replication.factor": "3",
"num.partitions": "1",
"log_filename": "kafka_server_9093.log",
"config_filename": "kafka_server_9093.properties"
},
{
"entity_id": "4",
"new-producer":"true",
"topic": "test_1",
"threads": "5",
"compression-codec": "1",
"message-size": "500",
"message": "100",
"request-num-acks": "1",
"producer-retry-backoff-ms": "300",
"sync":"true",
"log_filename": "producer_performance.log",
"config_filename": "producer_performance.properties"
},
{
"entity_id": "5",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"zookeeper": "localhost:2188",
"log_filename": "console_consumer.log",
"config_filename": "console_consumer.properties"
}
]
}

View File

@ -1,85 +0,0 @@
{
"description": {"01":"Replication Basic : 1. mode => async; 2. comp => 1",
"02":"Produce and consume messages to a single topic - single partition.",
"03":"This test sends messages to 3 replicas",
"04":"At the end it verifies the log size and contents",
"05":"Use a consumer to verify no message loss.",
"06":"Producer dimensions : mode:async, acks:-1, comp:1",
"07":"Log segment size : 10000000"
},
"testcase_args": {
"broker_type": "leader",
"bounce_broker": "false",
"replica_factor": "3",
"num_partition": "1",
"num_iteration": "1",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15",
"num_messages_to_produce_per_producer_call": "50"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2188",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_2188.log",
"config_filename": "zookeeper_2188.properties"
},
{
"entity_id": "1",
"port": "9091",
"broker.id": "1",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_1_logs",
"default.replication.factor": "3",
"num.partitions": "1",
"log_filename": "kafka_server_9091.log",
"config_filename": "kafka_server_9091.properties"
},
{
"entity_id": "2",
"port": "9092",
"broker.id": "2",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_2_logs",
"default.replication.factor": "3",
"num.partitions": "1",
"log_filename": "kafka_server_9092.log",
"config_filename": "kafka_server_9092.properties"
},
{
"entity_id": "3",
"port": "9093",
"broker.id": "3",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_3_logs",
"default.replication.factor": "3",
"num.partitions": "1",
"log_filename": "kafka_server_9093.log",
"config_filename": "kafka_server_9093.properties"
},
{
"entity_id": "4",
"new-producer":"true",
"topic": "test_1",
"threads": "5",
"compression-codec": "1",
"message-size": "500",
"message": "100",
"request-num-acks": "-1",
"producer-retry-backoff-ms": "300",
"sync":"false",
"log_filename": "producer_performance.log",
"config_filename": "producer_performance.properties"
},
{
"entity_id": "5",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"zookeeper": "localhost:2188",
"log_filename": "console_consumer.log",
"config_filename": "console_consumer.properties"
}
]
}

View File

@ -1,85 +0,0 @@
{
"description": {"01":"Replication Basic : 1. mode => async; 2. acks => 1; 3. comp => 1",
"02":"Produce and consume messages to a single topic - single partition.",
"03":"This test sends messages to 3 replicas",
"04":"At the end it verifies the log size and contents",
"05":"Use a consumer to verify no message loss.",
"06":"Producer dimensions : mode:async, acks:1, comp:1",
"07":"Log segment size : 10000000"
},
"testcase_args": {
"broker_type": "leader",
"bounce_broker": "false",
"replica_factor": "3",
"num_partition": "1",
"num_iteration": "1",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15",
"num_messages_to_produce_per_producer_call": "50"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2188",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_2188.log",
"config_filename": "zookeeper_2188.properties"
},
{
"entity_id": "1",
"port": "9091",
"broker.id": "1",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_1_logs",
"default.replication.factor": "3",
"num.partitions": "1",
"log_filename": "kafka_server_9091.log",
"config_filename": "kafka_server_9091.properties"
},
{
"entity_id": "2",
"port": "9092",
"broker.id": "2",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_2_logs",
"default.replication.factor": "3",
"num.partitions": "1",
"log_filename": "kafka_server_9092.log",
"config_filename": "kafka_server_9092.properties"
},
{
"entity_id": "3",
"port": "9093",
"broker.id": "3",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_3_logs",
"default.replication.factor": "3",
"num.partitions": "1",
"log_filename": "kafka_server_9093.log",
"config_filename": "kafka_server_9093.properties"
},
{
"entity_id": "4",
"new-producer":"true",
"topic": "test_1",
"threads": "5",
"compression-codec": "1",
"message-size": "500",
"message": "100",
"request-num-acks": "1",
"producer-retry-backoff-ms": "300",
"sync":"false",
"log_filename": "producer_performance.log",
"config_filename": "producer_performance.properties"
},
{
"entity_id": "5",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"zookeeper": "localhost:2188",
"log_filename": "console_consumer.log",
"config_filename": "console_consumer.properties"
}
]
}

View File

@ -1,85 +0,0 @@
{
"description": {"01":"Replication Basic : 1. comp => 1",
"02":"Produce and consume messages to a single topic - 3 partitions.",
"03":"This test sends messages to 3 replicas",
"04":"At the end it verifies the log size and contents",
"05":"Use a consumer to verify no message loss.",
"06":"Producer dimensions : mode:sync, acks:-1, comp:1",
"07":"Log segment size : 10000000"
},
"testcase_args": {
"broker_type": "leader",
"bounce_broker": "false",
"replica_factor": "3",
"num_partition": "3",
"num_iteration": "1",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15",
"num_messages_to_produce_per_producer_call": "50"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2188",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_2188.log",
"config_filename": "zookeeper_2188.properties"
},
{
"entity_id": "1",
"port": "9091",
"broker.id": "1",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_1_logs",
"default.replication.factor": "3",
"num.partitions": "3",
"log_filename": "kafka_server_9091.log",
"config_filename": "kafka_server_9091.properties"
},
{
"entity_id": "2",
"port": "9092",
"broker.id": "2",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_2_logs",
"default.replication.factor": "3",
"num.partitions": "3",
"log_filename": "kafka_server_9092.log",
"config_filename": "kafka_server_9092.properties"
},
{
"entity_id": "3",
"port": "9093",
"broker.id": "3",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_3_logs",
"default.replication.factor": "3",
"num.partitions": "3",
"log_filename": "kafka_server_9093.log",
"config_filename": "kafka_server_9093.properties"
},
{
"entity_id": "4",
"new-producer":"true",
"topic": "test_1",
"threads": "5",
"compression-codec": "1",
"message-size": "500",
"message": "100",
"request-num-acks": "-1",
"producer-retry-backoff-ms": "300",
"sync":"true",
"log_filename": "producer_performance.log",
"config_filename": "producer_performance.properties"
},
{
"entity_id": "5",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"zookeeper": "localhost:2188",
"log_filename": "console_consumer.log",
"config_filename": "console_consumer.properties"
}
]
}

View File

@ -1,85 +0,0 @@
{
"description": {"01":"Replication Basic : 1. mode => async; 2. comp => 1",
"02":"Produce and consume messages to a single topic - 3 partitions.",
"03":"This test sends messages to 3 replicas",
"04":"At the end it verifies the log size and contents",
"05":"Use a consumer to verify no message loss.",
"06":"Producer dimensions : mode:async, acks:-1, comp:1",
"07":"Log segment size : 10000000"
},
"testcase_args": {
"broker_type": "leader",
"bounce_broker": "false",
"replica_factor": "3",
"num_partition": "3",
"num_iteration": "1",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15",
"num_messages_to_produce_per_producer_call": "50"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2188",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_2188.log",
"config_filename": "zookeeper_2188.properties"
},
{
"entity_id": "1",
"port": "9091",
"broker.id": "1",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_1_logs",
"default.replication.factor": "3",
"num.partitions": "3",
"log_filename": "kafka_server_9091.log",
"config_filename": "kafka_server_9091.properties"
},
{
"entity_id": "2",
"port": "9092",
"broker.id": "2",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_2_logs",
"default.replication.factor": "3",
"num.partitions": "3",
"log_filename": "kafka_server_9092.log",
"config_filename": "kafka_server_9092.properties"
},
{
"entity_id": "3",
"port": "9093",
"broker.id": "3",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_3_logs",
"default.replication.factor": "3",
"num.partitions": "3",
"log_filename": "kafka_server_9093.log",
"config_filename": "kafka_server_9093.properties"
},
{
"entity_id": "4",
"new-producer":"true",
"topic": "test_1",
"threads": "5",
"compression-codec": "1",
"message-size": "500",
"message": "100",
"request-num-acks": "-1",
"producer-retry-backoff-ms": "300",
"sync":"false",
"log_filename": "producer_performance.log",
"config_filename": "producer_performance.properties"
},
{
"entity_id": "5",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"zookeeper": "localhost:2188",
"log_filename": "console_consumer.log",
"config_filename": "console_consumer.properties"
}
]
}

View File

@ -1,85 +0,0 @@
{
"description": {"01":"Replication Basic : 1. acks => 1; 2. comp => 1",
"02":"Produce and consume messages to a single topic - 3 partitions.",
"03":"This test sends messages to 3 replicas",
"04":"At the end it verifies the log size and contents",
"05":"Use a consumer to verify no message loss.",
"06":"Producer dimensions : mode:sync, acks:1, comp:1",
"07":"Log segment size : 10000000"
},
"testcase_args": {
"broker_type": "leader",
"bounce_broker": "false",
"replica_factor": "3",
"num_partition": "3",
"num_iteration": "1",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15",
"num_messages_to_produce_per_producer_call": "50"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2188",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_2188.log",
"config_filename": "zookeeper_2188.properties"
},
{
"entity_id": "1",
"port": "9091",
"broker.id": "1",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_1_logs",
"default.replication.factor": "3",
"num.partitions": "3",
"log_filename": "kafka_server_9091.log",
"config_filename": "kafka_server_9091.properties"
},
{
"entity_id": "2",
"port": "9092",
"broker.id": "2",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_2_logs",
"default.replication.factor": "3",
"num.partitions": "3",
"log_filename": "kafka_server_9092.log",
"config_filename": "kafka_server_9092.properties"
},
{
"entity_id": "3",
"port": "9093",
"broker.id": "3",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_3_logs",
"default.replication.factor": "3",
"num.partitions": "3",
"log_filename": "kafka_server_9093.log",
"config_filename": "kafka_server_9093.properties"
},
{
"entity_id": "4",
"new-producer":"true",
"topic": "test_1",
"threads": "5",
"compression-codec": "1",
"message-size": "500",
"message": "100",
"request-num-acks": "1",
"producer-retry-backoff-ms": "300",
"sync":"true",
"log_filename": "producer_performance.log",
"config_filename": "producer_performance.properties"
},
{
"entity_id": "5",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"zookeeper": "localhost:2188",
"log_filename": "console_consumer.log",
"config_filename": "console_consumer.properties"
}
]
}

View File

@ -1,85 +0,0 @@
{
"description": {"01":"Replication Basic : 1. mode => async; 2. acks => 1; 3. comp => 1",
"02":"Produce and consume messages to a single topic - 3 partitions.",
"03":"This test sends messages to 3 replicas",
"04":"At the end it verifies the log size and contents",
"05":"Use a consumer to verify no message loss.",
"06":"Producer dimensions : mode:async, acks:1, comp:1",
"07":"Log segment size : 10000000"
},
"testcase_args": {
"broker_type": "leader",
"bounce_broker": "false",
"replica_factor": "3",
"num_partition": "3",
"num_iteration": "1",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15",
"num_messages_to_produce_per_producer_call": "50"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2188",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_2188.log",
"config_filename": "zookeeper_2188.properties"
},
{
"entity_id": "1",
"port": "9091",
"broker.id": "1",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_1_logs",
"default.replication.factor": "3",
"num.partitions": "3",
"log_filename": "kafka_server_9091.log",
"config_filename": "kafka_server_9091.properties"
},
{
"entity_id": "2",
"port": "9092",
"broker.id": "2",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_2_logs",
"default.replication.factor": "3",
"num.partitions": "3",
"log_filename": "kafka_server_9092.log",
"config_filename": "kafka_server_9092.properties"
},
{
"entity_id": "3",
"port": "9093",
"broker.id": "3",
"log.segment.bytes": "10000000",
"log.dir": "/tmp/kafka_server_3_logs",
"default.replication.factor": "3",
"num.partitions": "3",
"log_filename": "kafka_server_9093.log",
"config_filename": "kafka_server_9093.properties"
},
{
"entity_id": "4",
"new-producer":"true",
"topic": "test_1",
"threads": "5",
"compression-codec": "1",
"message-size": "500",
"message": "100",
"request-num-acks": "1",
"producer-retry-backoff-ms": "300",
"sync":"false",
"log_filename": "producer_performance.log",
"config_filename": "producer_performance.properties"
},
{
"entity_id": "5",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"zookeeper": "localhost:2188",
"log_filename": "console_consumer.log",
"config_filename": "console_consumer.properties"
}
]
}

View File

@ -1,85 +0,0 @@
{
"description": {"01":"Replication Basic : 1. mode => async; 2. acks => 1; 3. comp => 1; 4. log segment size => 1M",
"02":"Produce and consume messages to a single topic - 3 partitions.",
"03":"This test sends messages to 3 replicas",
"04":"At the end it verifies the log size and contents",
"05":"Use a consumer to verify no message loss.",
"06":"Producer dimensions : mode:async, acks:1, comp:1",
"07":"Log segment size : 1048576 (1M)"
},
"testcase_args": {
"broker_type": "leader",
"bounce_broker": "false",
"replica_factor": "3",
"num_partition": "3",
"num_iteration": "1",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15",
"num_messages_to_produce_per_producer_call": "50"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2188",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_2188.log",
"config_filename": "zookeeper_2188.properties"
},
{
"entity_id": "1",
"port": "9091",
"broker.id": "1",
"log.segment.bytes": "1048576",
"log.dir": "/tmp/kafka_server_1_logs",
"default.replication.factor": "3",
"num.partitions": "3",
"log_filename": "kafka_server_9091.log",
"config_filename": "kafka_server_9091.properties"
},
{
"entity_id": "2",
"port": "9092",
"broker.id": "2",
"log.segment.bytes": "1048576",
"log.dir": "/tmp/kafka_server_2_logs",
"default.replication.factor": "3",
"num.partitions": "3",
"log_filename": "kafka_server_9092.log",
"config_filename": "kafka_server_9092.properties"
},
{
"entity_id": "3",
"port": "9093",
"broker.id": "3",
"log.segment.bytes": "1048576",
"log.dir": "/tmp/kafka_server_3_logs",
"default.replication.factor": "3",
"num.partitions": "3",
"log_filename": "kafka_server_9093.log",
"config_filename": "kafka_server_9093.properties"
},
{
"entity_id": "4",
"new-producer":"true",
"topic": "test_1",
"threads": "5",
"compression-codec": "1",
"message-size": "500",
"message": "100",
"request-num-acks": "1",
"producer-retry-backoff-ms": "300",
"sync":"false",
"log_filename": "producer_performance.log",
"config_filename": "producer_performance.properties"
},
{
"entity_id": "5",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"zookeeper": "localhost:2188",
"log_filename": "console_consumer.log",
"config_filename": "console_consumer.properties"
}
]
}


@@ -1,86 +0,0 @@
{
"description": {"01":"Replication Basic : 1. auto create topic => true",
"02":"Produce and consume messages to a single topic - 3 partitions.",
"03":"This test sends messages to 3 replicas",
"04":"At the end it verifies the log size and contents",
"05":"Use a consumer to verify no message loss.",
"06":"Producer dimensions : mode:async, acks:1, comp:1",
"07":"Log segment size : 1048576 (1M)"
},
"testcase_args": {
"broker_type": "leader",
"bounce_broker": "false",
"replica_factor": "3",
"num_partition": "3",
"num_iteration": "1",
"auto_create_topic": "true",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15",
"num_messages_to_produce_per_producer_call": "50"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2188",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_2188.log",
"config_filename": "zookeeper_2188.properties"
},
{
"entity_id": "1",
"port": "9091",
"broker.id": "1",
"num.partitions": "3",
"default.replication.factor": "3",
"log.segment.bytes": "1048576",
"log.dir": "/tmp/kafka_server_1_logs",
"log_filename": "kafka_server_9091.log",
"config_filename": "kafka_server_9091.properties"
},
{
"entity_id": "2",
"port": "9092",
"broker.id": "2",
"num.partitions": "3",
"default.replication.factor": "3",
"log.segment.bytes": "1048576",
"log.dir": "/tmp/kafka_server_2_logs",
"log_filename": "kafka_server_9092.log",
"config_filename": "kafka_server_9092.properties"
},
{
"entity_id": "3",
"port": "9093",
"broker.id": "3",
"num.partitions": "3",
"default.replication.factor": "3",
"log.segment.bytes": "1048576",
"log.dir": "/tmp/kafka_server_3_logs",
"log_filename": "kafka_server_9093.log",
"config_filename": "kafka_server_9093.properties"
},
{
"entity_id": "4",
"new-producer":"true",
"topic": "test_1",
"threads": "5",
"compression-codec": "1",
"message-size": "500",
"message": "100",
"request-num-acks": "1",
"producer-retry-backoff-ms": "300",
"sync":"false",
"log_filename": "producer_performance.log",
"config_filename": "producer_performance.properties"
},
{
"entity_id": "5",
"topic": "test_1",
"groupid": "mytestgroup",
"consumer-timeout-ms": "10000",
"zookeeper": "localhost:2188",
"log_filename": "console_consumer.log",
"config_filename": "console_consumer.properties"
}
]
}


@@ -1,76 +0,0 @@
{
"cluster_config": [
{
"entity_id": "0",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name": "source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9990"
},
{
"entity_id": "1",
"hostname": "localhost",
"role": "broker",
"cluster_name": "source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9991"
},
{
"entity_id": "2",
"hostname": "localhost",
"role": "broker",
"cluster_name": "source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9992"
},
{
"entity_id": "3",
"hostname": "localhost",
"role": "broker",
"cluster_name": "source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9993"
},
{
"entity_id": "4",
"hostname": "localhost",
"role": "producer_performance",
"cluster_name": "source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9997"
},
{
"entity_id": "5",
"hostname": "localhost",
"role": "producer_performance",
"cluster_name": "source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9998"
},
{
"entity_id": "6",
"hostname": "localhost",
"role": "console_consumer",
"cluster_name": "source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9999"
},
{
"entity_id": "7",
"hostname": "localhost",
"role": "console_consumer",
"cluster_name": "source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9099"
}
]
}


@@ -1,105 +0,0 @@
{
"description": {"01":"Replication Basic on Multi Topics & Partitions : Base Test",
"02":"Produce and consume messages to 2 topics - 3 partitions",
"03":"This test sends messages to 3 replicas",
"04":"At the end it verifies the log size and contents",
"05":"Use a consumer to verify no message loss.",
"06":"Producer dimensions : mode:sync, acks:-1, comp:0",
"07":"Log segment size : 102400"
},
"testcase_args": {
"broker_type": "leader",
"bounce_broker": "false",
"replica_factor": "3",
"num_partition": "3",
"num_iteration": "1",
"sleep_seconds_between_producer_calls": "1",
"message_producing_free_time_sec": "15"
},
"entities": [
{
"entity_id": "0",
"clientPort": "2188",
"dataDir": "/tmp/zookeeper_0",
"log_filename": "zookeeper_2188.log",
"config_filename": "zookeeper_2188.properties"
},
{
"entity_id": "1",
"port": "9091",
"broker.id": "1",
"log.segment.bytes": "102400",
"log.dir": "/tmp/kafka_server_1_logs",
"default.replication.factor": "3",
"num.partitions": "3",
"log_filename": "kafka_server_9091.log",
"config_filename": "kafka_server_9091.properties"
},
{
"entity_id": "2",
"port": "9092",
"broker.id": "2",
"log.segment.bytes": "102400",
"log.dir": "/tmp/kafka_server_2_logs",
"default.replication.factor": "3",
"num.partitions": "3",
"log_filename": "kafka_server_9092.log",
"config_filename": "kafka_server_9092.properties"
},
{
"entity_id": "3",
"port": "9093",
"broker.id": "3",
"log.segment.bytes": "102400",
"log.dir": "/tmp/kafka_server_3_logs",
"default.replication.factor": "3",
"num.partitions": "3",
"log_filename": "kafka_server_9093.log",
"config_filename": "kafka_server_9093.properties"
},
{
"entity_id": "4",
"new-producer":"true",
"topic": "test_1",
"threads": "5",
"compression-codec": "0",
"message-size": "500",
"message": "100",
"request-num-acks": "-1",
"producer-retry-backoff-ms": "300",
"sync":"true",
"log_filename": "producer_performance_4.log",
"config_filename": "producer_performance_4.properties"
},
{
"entity_id": "5",
"new-producer":"true",
"topic": "test_2",
"threads": "5",
"compression-codec": "0",
"message-size": "500",
"message": "100",
"request-num-acks": "-1",
"producer-retry-backoff-ms": "300",
"sync":"true",
"log_filename": "producer_performance_5.log",
"config_filename": "producer_performance_5.properties"
},
{
"entity_id": "6",
"topic": "test_1",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"log_filename": "console_consumer_6.log",
"config_filename": "console_consumer_6.properties"
},
{
"entity_id": "7",
"topic": "test_2",
"group.id": "mytestgroup",
"consumer-timeout-ms": "10000",
"log_filename": "console_consumer_7.log",
"config_filename": "console_consumer_7.properties"
}
]
}


@@ -1,76 +0,0 @@
{
"cluster_config": [
{
"entity_id": "0",
"hostname": "localhost",
"role": "zookeeper",
"cluster_name": "source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9990"
},
{
"entity_id": "1",
"hostname": "localhost",
"role": "broker",
"cluster_name": "source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9991"
},
{
"entity_id": "2",
"hostname": "localhost",
"role": "broker",
"cluster_name": "source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9992"
},
{
"entity_id": "3",
"hostname": "localhost",
"role": "broker",
"cluster_name": "source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9993"
},
{
"entity_id": "4",
"hostname": "localhost",
"role": "producer_performance",
"cluster_name": "source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9997"
},
{
"entity_id": "5",
"hostname": "localhost",
"role": "producer_performance",
"cluster_name": "source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9998"
},
{
"entity_id": "6",
"hostname": "localhost",
"role": "console_consumer",
"cluster_name": "source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9999"
},
{
"entity_id": "7",
"hostname": "localhost",
"role": "console_consumer",
"cluster_name": "source",
"kafka_home": "default",
"java_home": "default",
"jmx_port": "9099"
}
]
}

Some files were not shown because too many files have changed in this diff.