<!--
 Licensed to the Apache Software Foundation (ASF) under one or more
 contributor license agreements. See the NOTICE file distributed with
 this work for additional information regarding copyright ownership.
 The ASF licenses this file to You under the Apache License, Version 2.0
 (the "License"); you may not use this file except in compliance with
 the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
-->

<script><!--#include virtual="js/templateData.js" --></script>

<script id="quickstart-docker-template" type="text/x-handlebars-template">
  <div class="quickstart-step">
    <h4 class="anchor-heading">
      <a class="anchor-link" id="step-1-get-kafka" href="#step-1-get-kafka"></a>
      <a href="#step-1-get-kafka">Step 1: Get Kafka</a>
    </h4>

    <p>
      This docker-compose file will run everything for you via <a href="https://www.docker.com/" rel="nofollow">Docker</a>.
      Copy and paste it into a file named <code>docker-compose.yml</code> on your local filesystem.
    </p>
    <pre class="line-numbers"><code class="language-bash">---
version: '2'

services:
  broker:
    image: apache-kafka/broker:2.5.0
    hostname: kafka-broker
    container_name: kafka-broker

    # ...rest omitted...</code></pre>
  </div>

  <div class="quickstart-step">
    <h4 class="anchor-heading">
      <a class="anchor-link" id="step-2-start-kafka" href="#step-2-start-kafka"></a>
      <a href="#step-2-start-kafka">Step 2: Start the Kafka environment</a>
    </h4>

    <p>
      From the directory containing the <code>docker-compose.yml</code> file created in the previous step, run the
      following command to start all services in the correct order:
    </p>
    <pre class="line-numbers"><code class="language-bash">$ docker-compose up</code></pre>
    <p>
      Once all services have successfully launched, you will have a basic Kafka environment running and ready to use.
    </p>
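    <p>
      If you want to confirm that the broker container is up before moving on, the standard Docker Compose status and
      log commands can help. This is an optional sketch; <code>broker</code> is the service name from the compose file
      above, and it is run from a second terminal in the same directory:
    </p>
    <pre class="line-numbers"><code class="language-bash"># List the services and their current state
$ docker-compose ps

# Follow the broker logs until it reports that it has finished starting up
$ docker-compose logs -f broker</code></pre>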
  </div>

  <div class="quickstart-step">
    <h4 class="anchor-heading">
      <a class="anchor-link" id="step-3-create-a-topic" href="#step-3-create-a-topic"></a>
      <a href="#step-3-create-a-topic">Step 3: Create a topic to store your events</a>
    </h4>
    <p>Kafka is a distributed <em>event streaming platform</em> that lets you read, write, store, and process
      <a href="/documentation/#messages" rel="nofollow"><em>events</em></a> (also called <em>records</em> or <em>messages</em> in the documentation)
      across many machines.
      Example events are payment transactions, geolocation updates from mobile phones, shipping orders, sensor measurements
      from IoT devices or medical equipment, and much more.
      These events are organized and stored in <a href="/documentation/#intro_topics" rel="nofollow"><em>topics</em></a>.
      Very simplified, a topic is similar to a folder in a filesystem, and the events are the files in that folder.</p>
    <p>So before you can write your first events, you must create a topic:</p>
    <pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092</code></pre>
    <p>All of Kafka's command line tools have additional options: run the <code>kafka-topics.sh</code> command without any
      arguments to display usage information.
      For example, it can also show you
      <a href="/documentation/#intro_topics" rel="nofollow">details such as the partition count</a> of the new topic:</p>
    <pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-topics.sh --describe --topic quickstart-events --bootstrap-server localhost:9092
Topic:quickstart-events  PartitionCount:1  ReplicationFactor:1  Configs:
    Topic: quickstart-events  Partition: 0  Leader: 0  Replicas: 0  Isr: 0</code></pre>
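    <p>
      The describe output above shows that the new topic has a single partition. If you want to experiment with more
      partitions, <code>kafka-topics.sh</code> lets you specify the count when creating a topic. A sketch, where the
      topic name is just an example and <code>localhost:9092</code> assumes the broker's listener inside the container,
      as in the commands above:
    </p>
    <pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-topics.sh --create --topic quickstart-events-partitioned \
    --partitions 3 --replication-factor 1 --bootstrap-server localhost:9092</code></pre>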
  </div>

  <div class="quickstart-step">
    <h4 class="anchor-heading">
      <a class="anchor-link" id="step-4-write-events" href="#step-4-write-events"></a>
      <a href="#step-4-write-events">Step 4: Write some events into the topic</a>
    </h4>
    <p>A Kafka client communicates with the Kafka brokers via the network for writing (or reading) events.
      Once received, the brokers will store the events in a durable and fault-tolerant manner for as long as you
      need—even forever.</p>
    <p>Run the console producer client to write a few events into your topic.
      By default, each line you enter will result in a separate event being written to the topic.</p>
    <pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092
This is my first event
This is my second event</code></pre>
    <p>You can stop the producer client with <code>Ctrl-C</code> at any time.</p>
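    <p>
      Each event can also carry an optional key. If you want to try writing keyed events from the console producer,
      the client accepts properties for parsing a key out of each input line. A sketch, where the separator character
      and the example keys are arbitrary choices:
    </p>
    <pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092 \
    --property parse.key=true --property key.separator=:
user1:This event has a key
user2:So does this one</code></pre>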
  </div>

  <div class="quickstart-step">
    <h4 class="anchor-heading">
      <a class="anchor-link" id="step-5-read-the-events" href="#step-5-read-the-events"></a>
      <a href="#step-5-read-the-events">Step 5: Read the events</a>
    </h4>
    <p>Open another terminal session and run the console consumer client to read the events you just created:</p>
    <pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092
This is my first event
This is my second event</code></pre>
    <p>You can stop the consumer client with <code>Ctrl-C</code> at any time.</p>
    <p>Feel free to experiment: for example, switch back to your producer terminal (previous step) to write
      additional events, and see how the events immediately show up in your consumer terminal.</p>
    <p>Because events are durably stored in Kafka, they can be read as many times and by as many consumers as you want.
      You can easily verify this by opening yet another terminal session and re-running the previous command.</p>
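    <p>
      Consumers that should share the work of reading a topic do so by joining the same consumer group: consumers in
      different groups each receive all events, while consumers in the same group split the topic's partitions between
      them. A quick way to see the first half of this is to give the console consumer an explicit group name. A sketch,
      where the group names are arbitrary:
    </p>
    <pre class="line-numbers"><code class="language-bash"># In one terminal
$ docker exec -it kafka-broker kafka-console-consumer.sh --topic quickstart-events --from-beginning \
    --group quickstart-group-a --bootstrap-server localhost:9092

# In another terminal, a different group reads the same events independently
$ docker exec -it kafka-broker kafka-console-consumer.sh --topic quickstart-events --from-beginning \
    --group quickstart-group-b --bootstrap-server localhost:9092</code></pre>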
  </div>

  <div class="quickstart-step">
    <h4 class="anchor-heading">
      <a class="anchor-link" id="step-6-import-export" href="#step-6-import-export"></a>
      <a href="#step-6-import-export">Step 6: Import/export your data as streams of events with Kafka Connect</a>
    </h4>
    <p>You probably have lots of data in existing systems like relational databases or traditional messaging systems, along
      with many applications that already use these systems.
      <a href="/documentation/#connect" rel="nofollow">Kafka Connect</a> allows you to continuously ingest data from external
      systems into Kafka, and vice versa, making it easy to integrate existing systems with Kafka. To make this process
      even easier, there are hundreds of such connectors readily available.</p>
    <p>Take a look at the <a href="/documentation/#connect" rel="nofollow">Kafka Connect section</a> in the documentation to
      learn more about how to continuously import/export your data into and out of Kafka.</p>
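    <p>
      To give a flavor of what a connector looks like, the Kafka distribution ships a simple file source connector that
      streams lines from a text file into a topic. Below is a minimal sketch of its standalone configuration; the file
      names, paths, and topic name are illustrative, the Connect worker is not part of the docker-compose file above,
      and depending on your Kafka version you may need to add the file connector to the worker's
      <code>plugin.path</code>:
    </p>
    <pre class="line-numbers"><code class="language-bash"># connect-file-source.properties (sketch)
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=/tmp/quickstart-input.txt
topic=connect-quickstart-events

# Run it with a standalone Connect worker from a Kafka installation:
$ connect-standalone.sh connect-standalone.properties connect-file-source.properties</code></pre>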
  </div>

  <div class="quickstart-step">
    <h4 class="anchor-heading">
      <a class="anchor-link" id="step-7-process-events" href="#step-7-process-events"></a>
      <a href="#step-7-process-events">Step 7: Process your events with Kafka Streams</a>
    </h4>

    <p>Once your data is stored in Kafka as events, you can process the data with the
      <a href="/documentation/streams" rel="nofollow">Kafka Streams</a> client library for Java/Scala.
      It allows you to implement mission-critical real-time applications and microservices, where the input and/or output data
      is stored in Kafka topics. Kafka Streams combines the simplicity of writing and deploying standard Java and Scala
      applications on the client side with the benefits of Kafka's server-side cluster technology to make these applications
      highly scalable, elastic, fault-tolerant, and distributed. The library supports exactly-once processing, stateful
      operations and aggregations, windowing, joins, processing based on event-time, and much more.</p>
    <p>To give you a first taste, here's how one would implement the popular <code>WordCount</code> algorithm:</p>
    <pre class="line-numbers"><code class="language-java">KStream&lt;String, String&gt; textLines = builder.stream("quickstart-events");

KTable&lt;String, Long&gt; wordCounts = textLines
    .flatMapValues(line -> Arrays.asList(line.toLowerCase().split(" ")))
    .groupBy((keyIgnored, word) -> word)
    .count();

wordCounts.toStream().to("output-topic", Produced.with(Serdes.String(), Serdes.Long()));</code></pre>
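    <p>
      Once such an application is running, you could peek at its results with the console consumer by reading the
      <code>output-topic</code> used in the snippet above. A sketch; because the counts are written as longs, the value
      deserializer must match:
    </p>
    <pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-console-consumer.sh --topic output-topic --from-beginning \
    --bootstrap-server localhost:9092 \
    --property print.key=true \
    --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
    --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer</code></pre>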
    <p>The <a href="/25/documentation/streams/quickstart" rel="nofollow">Kafka Streams demo</a> and the
      <a href="/25/documentation/streams/tutorial" rel="nofollow">app development tutorial</a> demonstrate how to code and run
      such a streaming application from start to finish.</p>
  </div>

  <div class="quickstart-step">
    <h4 class="anchor-heading">
      <a class="anchor-link" id="step-8-terminate" href="#step-8-terminate"></a>
      <a href="#step-8-terminate">Step 8: Terminate the Kafka environment</a>
    </h4>
    <p>Now that you have reached the end of the quickstart, feel free to tear down the Kafka environment, or continue playing around.</p>
    <p>Run the following command to tear down the environment, which also deletes any events you have created along the way:</p>
    <pre class="line-numbers"><code class="language-bash">$ docker-compose down</code></pre>
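    <p>
      Note that <code>docker-compose down</code> removes the containers and network but leaves any named volumes in
      place. If the omitted parts of the compose file declare volumes for broker data, adding the <code>-v</code> flag
      removes those as well, so the next start begins from a clean slate (a sketch; whether this matters depends on how
      your compose file is set up):
    </p>
    <pre class="line-numbers"><code class="language-bash">$ docker-compose down -v</code></pre>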
  </div>

  <div class="quickstart-step">
    <h4 class="anchor-heading">
      <a class="anchor-link" id="quickstart_kafkacongrats" href="#quickstart_kafkacongrats"></a>
      <a href="#quickstart_kafkacongrats">Congratulations!</a>
    </h4>

    <p>You have successfully finished the Apache Kafka quickstart.</p>

    <p>To learn more, we suggest the following next steps:</p>

    <ul>
      <li>
        Read through the brief <a href="/intro">Introduction</a> to learn how Kafka works at a high level, its
        main concepts, and how it compares to other technologies. To understand Kafka in more detail, head over to the
        <a href="/documentation/">Documentation</a>.
      </li>
      <li>
        Browse through the <a href="/powered-by">Use Cases</a> to learn how other users in our world-wide
        community are getting value out of Kafka.
      </li>
      <!--
      <li>
        Learn how _Kafka compares to other technologies_ [note to design team: this new page is not yet written] you might be familiar with.
      </li>
      -->
      <li>
        Join a <a href="/events">local Kafka meetup group</a> and
        <a href="https://kafka-summit.org/past-events/">watch talks from Kafka Summit</a>,
        the main conference of the Kafka community.
      </li>
    </ul>
  </div>
</script>

<div class="p-quickstart-docker"></div>