mirror of https://github.com/apache/kafka.git
MINOR: Update Quickstart in documentation to account for Windows platforms
Author: Vahid Hashemian <vahidhashemian@us.ibm.com>
Reviewers: Jason Gustafson <jason@confluent.io>
Closes #1990 from vahidhashemian/doc/quickstart_update_windows
parent 44d18d273c
commit 179d069857
@@ -18,6 +18,7 @@
<h3><a id="quickstart" href="#quickstart">1.3 Quick Start</a></h3>
This tutorial assumes you are starting fresh and have no existing Kafka or ZooKeeper data.
Since the Kafka console scripts are different for Unix-based and Windows platforms, on Windows use <code>bin\windows\</code> instead of <code>bin/</code>, and change the script extension to <code>.bat</code>.
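For example, a topic listing command used later in this guide would be written on each platform as follows (shown here only to illustrate the naming convention):
<pre>
> <b>bin/kafka-topics.sh --list --zookeeper localhost:2181</b>              <i>(Unix)</i>
> <b>bin\windows\kafka-topics.bat --list --zookeeper localhost:2181</b>     <i>(Windows)</i>
</pre>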
<h4><a id="quickstart_download" href="#quickstart_download">Step 1: Download the code</a></h4>
@@ -93,7 +94,7 @@ All of the command line tools have additional options; running the command with
So far we have been running against a single broker, but that's no fun. For Kafka, a single broker is just a cluster of size one, so nothing much changes other than starting a few more broker instances. But just to get a feel for it, let's expand our cluster to three nodes (still all on our local machine).
<p>
First we make a config file for each of the brokers:
First we make a config file for each of the brokers (on Windows use the <code>copy</code> command instead):
<pre>
> <b>cp config/server.properties config/server-1.properties</b>
> <b>cp config/server.properties config/server-2.properties</b>
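The quickstart then has you edit these copies so that each broker gets a unique id, listener port, and log directory; the overrides look roughly like the following (values shown for illustration):
<pre>
config/server-1.properties:
    broker.id=1
    listeners=PLAINTEXT://:9093
    log.dir=/tmp/kafka-logs-1

config/server-2.properties:
    broker.id=2
    listeners=PLAINTEXT://:9094
    log.dir=/tmp/kafka-logs-2
</pre>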
@@ -173,6 +174,13 @@ Now let's test out fault-tolerance. Broker 1 was acting as the leader so let's k
> <b>kill -9 7564</b>
</pre>
On Windows use:
<pre>
> <b>wmic process get processid,caption,commandline | find "java.exe" | find "server-1.properties"</b>
java.exe java -Xmx1G -Xms1G -server -XX:+UseG1GC ... build\libs\kafka_2.10-0.10.1.0.jar" kafka.Kafka config\server-1.properties <i>644</i>
> <b>taskkill /pid 644 /f</b>
</pre>
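In both cases the number passed to the kill command is simply the broker's process id as reported on your machine (<i>7564</i> and <i>644</i> in these examples), so substitute whatever id you see locally.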
Leadership has switched to one of the slaves and node 1 is no longer in the in-sync replica set:
<pre>
> <b>bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic</b>
@@ -297,6 +305,12 @@ We will now prepare input data to a Kafka topic, which will subsequently process
<pre>
> <b>echo -e "all streams lead to kafka\nhello kafka streams\njoin kafka summit" > file-input.txt</b>
</pre>
Or on Windows:
<pre>
> <b>echo all streams lead to kafka> file-input.txt</b>
> <b>echo hello kafka streams>> file-input.txt</b>
> <b>echo|set /p=join kafka summit>> file-input.txt</b>
</pre>
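The <code>echo|set /p=</code> form on the last line writes its text without appending a trailing newline, which a plain Windows <code>echo</code> would otherwise add.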
<p>
Next, we send this input data to the input topic named <b>streams-file-input</b> using the console producer (in practice,
@@ -313,7 +327,7 @@ stream data will likely be flowing continuously into Kafka where the application
<pre>
> <b>cat file-input.txt | bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-file-input</b>
> <b>bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-file-input < file-input.txt</b>
</pre>
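Following the Windows convention noted at the start of this quickstart, the same step would presumably be run on Windows with the <code>.bat</code> script and the same input redirection:
<pre>
> <b>bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic streams-file-input < file-input.txt</b>
</pre>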
<p>
@@ -349,12 +363,9 @@ with the following output data being printed to the console:
<pre>
all 1
streams 1
lead 1
to 1
kafka 1
hello 1
kafka 2
streams 2
join 1
kafka 3
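Note that the counts are cumulative: each output line is an updated total for a word as new occurrences arrive, which is why <code>kafka</code> appears first with 1, then 2, and finally 3.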