diff --git a/docs/api.html b/docs/api.html
index 8d5be9b030d..c4572411c47 100644
--- a/docs/api.html
+++ b/docs/api.html
@@ -165,3 +165,22 @@ This new unified consumer API removes the distinction between the 0.8 high-level
Examples showing how to use the consumer are given in the
javadocs.
+
+
+
+As of the 0.10.0 release we have added a new client library named Kafka Streams that lets users implement stream processing
+applications on data stored in Kafka topics. Kafka Streams is considered alpha quality and its public APIs are likely to change in
+future releases.
+You can use Kafka Streams by adding a dependency on the streams jar using
+the following example Maven coordinates (substitute the version number for the release you are using):
+
+
+ <dependency>
+ <groupId>org.apache.kafka</groupId>
+ <artifactId>kafka-streams</artifactId>
+ <version>0.10.0.0</version>
+ </dependency>
+
+
+Examples showing how to use this library are given in the
+javadocs (note that classes annotated with @InterfaceStability.Unstable may change without backward compatibility in future releases).
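+
+To give a sense of the programming model, below is a minimal sketch of a complete Kafka Streams application written against the
+0.10.0.0 API; the application id, broker address, and topic names are illustrative placeholders. It simply copies records from
+one topic to another:
+
+ import java.util.Properties;
+
+ import org.apache.kafka.common.serialization.Serdes;
+ import org.apache.kafka.streams.KafkaStreams;
+ import org.apache.kafka.streams.StreamsConfig;
+ import org.apache.kafka.streams.kstream.KStream;
+ import org.apache.kafka.streams.kstream.KStreamBuilder;
+
+ public class PipeDemo {
+     public static void main(String[] args) {
+         // Illustrative configuration; replace the application id and broker list with your own.
+         Properties props = new Properties();
+         props.put(StreamsConfig.APPLICATION_ID_CONFIG, "pipe-demo");
+         props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
+         props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
+         props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
+
+         // Build a trivial topology: read records from an input topic and forward them unchanged.
+         KStreamBuilder builder = new KStreamBuilder();
+         KStream<String, String> source = builder.stream("pipe-input-topic");
+         source.to("pipe-output-topic");
+
+         KafkaStreams streams = new KafkaStreams(builder, props);
+         streams.start();
+     }
+ }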
\ No newline at end of file
diff --git a/docs/documentation.html b/docs/documentation.html
index 70002ab8ec4..ddc31021801 100644
--- a/docs/documentation.html
+++ b/docs/documentation.html
@@ -40,6 +40,7 @@ Prior releases: 0.7.x,
2.2.2 Old Simple Consumer API
2.2.3 New Consumer API
+ 2.3 Streams API
3. Configuration
diff --git a/docs/quickstart.html b/docs/quickstart.html
index 7a923c69fc0..4d4f7eae683 100644
--- a/docs/quickstart.html
+++ b/docs/quickstart.html
@@ -258,15 +258,15 @@ This quickstart example will demonstrate how to run a streaming application code
of the WordCountDemo
example code (converted to use Java 8 lambda expressions for easy reading).
-KStream<String, Long> wordCounts = textLines
-// Split each text line, by whitespace, into words.
-.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
-// Ensure the words are available as message keys for the next aggregate operation.
-.map((key, value) -> new KeyValue<>(value, value))
-// Count the occurrences of each word (message key).
-.countByKey(stringSerializer, longSerializer, stringDeserializer, longDeserializer, "Counts")
-// Convert the resulted aggregate table into another stream.
-.toStream();
+KTable<String, Long> wordCounts = textLines
+ // Split each text line, by whitespace, into words.
+ .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
+
+ // Ensure the words are available as record keys for the next aggregate operation.
+ .map((key, value) -> new KeyValue<>(value, value))
+
+ // Count the occurrences of each word (record key) and store the results into a table named "Counts".
+ .countByKey("Counts");
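+
+The table of counts can then be written back to Kafka. A minimal sketch, assuming the 0.10.0.0 KTable#to(Serde, Serde, String)
+overload; the output topic name is an illustrative placeholder:
+
+ // Publish the running word counts to a downstream topic.
+ wordCounts.to(Serdes.String(), Serdes.Long(), "streams-wordcount-output");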
diff --git a/docs/upgrade.html b/docs/upgrade.html
index 486954c1c62..4b8ec7eb9f0 100644
--- a/docs/upgrade.html
+++ b/docs/upgrade.html
@@ -90,6 +90,7 @@ work with 0.10.0.x brokers. Therefore, 0.9.0.0 clients should be upgraded to 0.9
+ - Starting from Kafka 0.10.0.0, a new client library named Kafka Streams is available for stream processing on data stored in Kafka topics. Because of the message format changes mentioned above, this new client library works only with brokers on version 0.10.x and above. For more information please read this section.
- The default value of the configuration parameter receive.buffer.bytes is now 64K for the new consumer.
- The new consumer now exposes the configuration parameter exclude.internal.topics to prevent internal topics (such as the consumer offsets topic) from accidentally being included in regular expression subscriptions. By default, it is enabled; see the sketch after this list.
- The old Scala producer has been deprecated. Users should migrate their code to the Java producer included in the kafka-clients JAR as soon as possible.
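
As an illustration of the exclude.internal.topics parameter, here is a minimal sketch of a new-consumer pattern subscription; the broker address, group id, and topic pattern are hypothetical placeholders:

 import java.util.Collection;
 import java.util.Properties;
 import java.util.regex.Pattern;

 import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
 import org.apache.kafka.clients.consumer.KafkaConsumer;
 import org.apache.kafka.common.TopicPartition;

 Properties props = new Properties();
 props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
 props.put("group.id", "my-group");                // placeholder group id
 props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
 props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
 props.put("exclude.internal.topics", "true");     // the default; internal topics are never pattern-matched

 KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
 // With exclude.internal.topics=true, this broad pattern cannot accidentally pull in
 // internal topics such as the consumer offsets topic.
 consumer.subscribe(Pattern.compile(".*"), new ConsumerRebalanceListener() {
     public void onPartitionsAssigned(Collection<TopicPartition> partitions) {}
     public void onPartitionsRevoked(Collection<TopicPartition> partitions) {}
 });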
diff --git a/docs/uses.html b/docs/uses.html
index f769bedfcad..5b97272598a 100644
--- a/docs/uses.html
+++ b/docs/uses.html
@@ -45,7 +45,7 @@ In comparison to log-centric systems like Scribe or Flume, Kafka offers equally
-Many users end up doing stage-wise processing of data where data is consumed from topics of raw data and then aggregated, enriched, or otherwise transformed into new Kafka topics for further consumption. For example a processing flow for article recommendation might crawl article content from RSS feeds and publish it to an "articles" topic; further processing might help normalize or deduplicate this content to a topic of cleaned article content; a final stage might attempt to match this content to users. This creates a graph of real-time data flow out of the individual topics. Storm and Samza are popular frameworks for implementing these kinds of transformations.
+Many users of Kafka process data in processing pipelines consisting of multiple stages, where raw input data is consumed from Kafka topics and then aggregated, enriched, or otherwise transformed into new topics for further consumption or follow-up processing. For example, a processing pipeline for recommending news articles might crawl article content from RSS feeds and publish it to an "articles" topic; further processing might normalize or deduplicate this content and publish the cleansed article content to a new topic; a final processing stage might attempt to recommend this content to users. Such processing pipelines create graphs of real-time data flows based on the individual topics. Starting in 0.10.0.0, a lightweight but powerful stream processing library called Kafka Streams is available in Apache Kafka to perform such data processing as described above. Apart from Kafka Streams, alternative open source stream processing tools include Apache Storm and Apache Samza.
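+
+As a concrete illustration of one such pipeline stage, the following sketch (hypothetical topic names, a stand-in normalization step, and the 0.10.0.0 Kafka Streams API) consumes raw articles and publishes cleansed content to a new topic:
+
+ import java.util.Properties;
+
+ import org.apache.kafka.common.serialization.Serdes;
+ import org.apache.kafka.streams.KafkaStreams;
+ import org.apache.kafka.streams.StreamsConfig;
+ import org.apache.kafka.streams.kstream.KStreamBuilder;
+
+ Properties props = new Properties();
+ props.put(StreamsConfig.APPLICATION_ID_CONFIG, "article-cleanser");  // placeholder id
+ props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder brokers
+ props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
+ props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
+
+ KStreamBuilder builder = new KStreamBuilder();
+ // Consume raw article content and publish a cleansed version to a follow-up topic.
+ builder.<String, String>stream("articles")
+        .mapValues(content -> content.trim().toLowerCase()) // stand-in for real normalization/deduplication
+        .to("cleansed-articles");
+
+ new KafkaStreams(builder, props).start();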