KAFKA-3461: Fix typos in the Kafka web documentation.

This PR fixes 8 typos in the HTML files of the `docs` module. I list them explicitly here, since GitHub sometimes does not highlight corrections on long lines correctly.
- docs/api.html: compatability => compatibility
- docs/connect.html: simultaneoulsy => simultaneously
- docs/implementation.html: LATIEST_TIME => LATEST_TIME, nPartions => nPartitions
- docs/migration.html: Decomission => Decommission
- docs/ops.html: stoping => stopping, ConumserGroupCommand => ConsumerGroupCommand, youre => you're

Author: Dongjoon Hyun <dongjoon@apache.org>

Reviewers: Ismael Juma

Closes #1138 from dongjoon-hyun/KAFKA-3461
Commit e79d9af3cf (parent 34a5944721)
Dongjoon Hyun authored on 2016-04-12 13:48:18 -07:00; committed by Gwen Shapira
5 changed files with 21 additions and 21 deletions

diff --git a/docs/api.html b/docs/api.html

@@ -15,7 +15,7 @@
limitations under the License.
-->
-Apache Kafka includes new java clients (in the org.apache.kafka.clients package). These are meant to supplant the older Scala clients, but for compatability they will co-exist for some time. These clients are available in a separate jar with minimal dependencies, while the old Scala clients remain packaged with the server.
+Apache Kafka includes new java clients (in the org.apache.kafka.clients package). These are meant to supplant the older Scala clients, but for compatibility they will co-exist for some time. These clients are available in a separate jar with minimal dependencies, while the old Scala clients remain packaged with the server.
<h3><a id="producerapi" href="#producerapi">2.1 Producer API</a></h3>
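For context on the paragraph above, a minimal sketch of the new Java producer from the org.apache.kafka.clients package; the broker address and topic name are placeholders:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class NewProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // The new client lives in its own jar; no Scala dependencies required.
        Producer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("my-topic", "key", "value")); // placeholder topic
        producer.close();
    }
}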

diff --git a/docs/connect.html b/docs/connect.html

@@ -297,7 +297,7 @@ The framework will promptly request new configuration information and update the
Ideally this code for monitoring changes would be isolated to the <code>Connector</code> and tasks would not need to worry about them. However, changes can also affect tasks, most commonly when one of their input streams is destroyed in the input system, e.g. if a table is dropped from a database. If the <code>Task</code> encounters the issue before the <code>Connector</code>, which will be common if the <code>Connector</code> needs to poll for changes, the <code>Task</code> will need to handle the subsequent error. Thankfully, this can usually be handled simply by catching and handling the appropriate exception.
-<code>SinkConnectors</code> usually only have to handle the addition of streams, which may translate to new entries in their outputs (e.g., a new database table). The framework manages any changes to the Kafka input, such as when the set of input topics changes because of a regex subscription. <code>SinkTasks</code>should expect new input streams, which may require creating new resources in the downstream system, such as a new table in a database. The trickiest situation to handle in these cases may be conflicts between multiple <code>SinkTasks</code>seeing a new input stream for the first time and simultaneoulsy trying to create the new resource. <code>SinkConnectors</code>, on the other hand, will generally require no special code for handling a dynamic set of streams.
+<code>SinkConnectors</code> usually only have to handle the addition of streams, which may translate to new entries in their outputs (e.g., a new database table). The framework manages any changes to the Kafka input, such as when the set of input topics changes because of a regex subscription. <code>SinkTasks</code> should expect new input streams, which may require creating new resources in the downstream system, such as a new table in a database. The trickiest situation to handle in these cases may be conflicts between multiple <code>SinkTasks</code> seeing a new input stream for the first time and simultaneously trying to create the new resource. <code>SinkConnectors</code>, on the other hand, will generally require no special code for handling a dynamic set of streams.
<h4><a id="connect_schemas" href="#connect_schemas">Working with Schemas</a></h4>
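To make the race described above concrete, here is a hypothetical sketch of a SinkTask that creates a downstream table the first time it sees a new stream. DbClient and its connect/createTableIfNotExists/insert/commit/close methods are invented stand-ins for a real database client and are not part of the Connect API:

import java.util.Collection;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public class ExampleDbSinkTask extends SinkTask {
    private final Set<String> knownTables = new HashSet<>();
    private DbClient db; // invented client, stands in for e.g. a JDBC wrapper

    @Override
    public void start(Map<String, String> props) {
        db = DbClient.connect(props.get("connection.url")); // invented call
    }

    @Override
    public void put(Collection<SinkRecord> records) {
        for (SinkRecord record : records) {
            String table = record.topic(); // one table per input topic
            if (knownTables.add(table)) {
                // Another task may see the new stream at the same time, so a
                // create that tolerates "already exists" sidesteps the conflict.
                db.createTableIfNotExists(table);
            }
            db.insert(table, record.key(), record.value());
        }
    }

    @Override
    public void flush(Map<TopicPartition, OffsetAndMetadata> currentOffsets) {
        db.commit(); // invented: push any buffered writes before offsets are committed
    }

    @Override
    public void stop() {
        db.close();
    }

    @Override
    public String version() {
        return "0.1"; // example version string
    }
}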

diff --git a/docs/implementation.html b/docs/implementation.html

@@ -90,7 +90,7 @@ class SimpleConsumer {
* Get a list of valid offsets (up to maxSize) before the given time.
* The result is a list of offsets, in descending order.
* @param time: time in millisecs,
-* if set to OffsetRequest$.MODULE$.LATIEST_TIME(), get from the latest offset available.
+* if set to OffsetRequest$.MODULE$.LATEST_TIME(), get from the latest offset available.
* if set to OffsetRequest$.MODULE$.EARLIEST_TIME(), get from the earliest offset available.
*/
public long[] getOffsetsBefore(String topic, int partition, long time, int maxNumOffsets);
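A usage sketch of the method above, assuming "consumer" is an already-connected SimpleConsumer and the topic name is a placeholder:

// Fetch the single latest offset for partition 0, using the constant
// named in the javadoc above.
long[] offsets = consumer.getOffsetsBefore("my-topic", 0,
        OffsetRequest$.MODULE$.LATEST_TIME(), 1);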
@@ -292,7 +292,7 @@ Since the broker registers itself in ZooKeeper using ephemeral znodes, this regi
</p>
<h4><a id="impl_zktopic" href="#impl_zktopic">Broker Topic Registry</a></h4>
<pre>
-/brokers/topics/[topic]/[0...N] --> nPartions (ephemeral node)
+/brokers/topics/[topic]/[0...N] --> nPartitions (ephemeral node)
</pre>
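As an illustration, such a node can be inspected with ZooKeeper's shell. The topic name and the value 3 (broker 0 hosting three partitions) are hypothetical, and the output is trimmed to the node's data:
<pre>
&gt; bin/zkCli.sh -server localhost:2181 get /brokers/topics/my-topic/0
3
</pre>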
<p>

diff --git a/docs/migration.html b/docs/migration.html

@@ -27,7 +27,7 @@
<li>Use the 0.7 to 0.8 <a href="tools.html">migration tool</a> to mirror data from the 0.7 cluster into the 0.8 cluster.
<li>When the 0.8 cluster is fully caught up, redeploy all data <i>consumers</i> running the 0.8 client and reading from the 0.8 cluster.
<li>Finally migrate all 0.7 producers to 0.8 client publishing data to the 0.8 cluster.
-<li>Decomission the 0.7 cluster.
+<li>Decommission the 0.7 cluster.
<li>Drink.
</ol>
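For reference, the mirroring step above is driven by the 0.7-to-0.8 migration tool. The invocation below is a sketch from memory; the flag names are assumptions and should be checked against the <a href="tools.html">tools page</a>:
<pre>
&gt; bin/kafka-run-class.sh kafka.tools.KafkaMigrationTool \
    --kafka.07.jar kafka-0.7.jar --zkclient.01.jar zkclient-0.1.jar \
    --consumer.config consumer.properties --producer.config producer.properties \
    --whitelist=".*"
</pre>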

diff --git a/docs/ops.html b/docs/ops.html

@@ -70,7 +70,7 @@ Instructions for changing the replication factor of a topic can be found <a href
<h4><a id="basic_ops_restarting" href="#basic_ops_restarting">Graceful shutdown</a></h4>
-The Kafka cluster will automatically detect any broker shutdown or failure and elect new leaders for the partitions on that machine. This will occur whether a server fails or it is brought down intentionally for maintenance or configuration changes. For the latter cases Kafka supports a more graceful mechanism for stoping a server than just killing it.
+The Kafka cluster will automatically detect any broker shutdown or failure and elect new leaders for the partitions on that machine. This will occur whether a server fails or it is brought down intentionally for maintenance or configuration changes. For the latter cases Kafka supports a more graceful mechanism for stopping a server than just killing it.
When a server is stopped gracefully it has two optimizations it will take advantage of:
<ol>
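Controlled shutdown must be enabled on the broker for this mechanism to apply; a minimal server.properties fragment, assuming defaults elsewhere:
<pre>
# Allow the broker to migrate partition leadership away before exiting
controlled.shutdown.enable=true
</pre>
With this set, stopping the broker with bin/kafka-server-stop.sh takes the graceful path instead of a hard kill.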
@@ -138,7 +138,7 @@ Note, however, after 0.9.0, the kafka.tools.ConsumerOffsetChecker tool is deprec
<h4><a id="basic_ops_consumer_group" href="#basic_ops_consumer_group">Managing Consumer Groups</a></h4>
-With the ConumserGroupCommand tool, we can list, delete, or describe consumer groups. For example, to list all consumer groups across all topics:
+With the ConsumerGroupCommand tool, we can list, delete, or describe consumer groups. For example, to list all consumer groups across all topics:
<pre>
&gt; bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --list
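Similarly, to inspect the offsets and lag of a single group, using the group name shown in the surrounding example:
<pre>
&gt; bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --describe --group test-consumer-group
</pre>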
@@ -156,7 +156,7 @@ test-consumer-group test-foo 0 1
</pre>
-When youre using the <a href="https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Client+Re-Design">new consumer-groups API</a> where the broker handles coordination of partition handling and rebalance, you can manage the groups with the "--new-consumer" flags:
+When you're using the <a href="https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Client+Re-Design">new consumer-groups API</a> where the broker handles coordination of partition handling and rebalance, you can manage the groups with the "--new-consumer" flags:
<pre>
&gt; bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server broker1:9092 --list