Addresses comments from previous PR [#1187]
Changed the print and writeAsText return types to void (usage sketch below)
Flush System.out on close
Changed IllegalStateException to TopologyBuilderException
Updated MockProcessorContext.topic method to return a String
Renamed KStreamPrinter to KeyValuePrinter
Updated the printing of null keys to 'null' to match ConsoleConsumer
Updated the JavaDoc to note the need to override toString
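For illustration, a minimal usage sketch of the print/writeAsText additions against the 0.10-era DSL; the topic name, file path, and config values are placeholders.

```java
import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class PrintExample {
    public static void main(String[] args) {
        KStreamBuilder builder = new KStreamBuilder();
        KStream<String, String> words = builder.stream("words-topic"); // placeholder topic

        // Both calls now return void, so they terminate the chain instead of returning a KStream.
        words.print();                        // prints each record; a null key is rendered as 'null'
        words.writeAsText("/tmp/words.txt");  // placeholder path

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "print-example");     // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        new KafkaStreams(builder, props).start();
    }
}
```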
Author: bbejeck <bbejeck@gmail.com>
Reviewers: Dan Norwood, Guozhang Wang
Closes #1209 from bbejeck/KAFKA-3338_Adding_print/writeAsText_to_Streams_DSL
miguno guozhangwang please have a look if you can.
Author: Eno Thereska <eno.thereska@gmail.com>
Reviewers: Michael G. Noll <michael@confluent.io>, Guozhang Wang <wangguoz@gmail.com>
Closes #1193 from enothereska/kafka-3512-ForEach
Author: Guozhang Wang <wangguoz@gmail.com>
Reviewers: Anna Povzner <anna@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #1190 from guozhangwang/K3505
guozhangwang
Set the timestamp on produced records in SinkNode. This forces the producer record's timestamp to be the same as the context's timestamp.
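An illustrative helper, not the actual SinkNode code: the produced record's timestamp slot is filled from ProcessorContext.timestamp(), which is the effect described above.

```java
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.streams.processor.ProcessorContext;

class SinkTimestampSketch {
    // Illustrative only: forward a record with the context's timestamp so the
    // produced record carries the same timestamp as the record being processed.
    static void forwardWithContextTimestamp(Producer<byte[], byte[]> producer,
                                            ProcessorContext context,
                                            String topic, byte[] key, byte[] value) {
        ProducerRecord<byte[], byte[]> record =
            new ProducerRecord<>(topic, null /* partition */, context.timestamp(), key, value);
        producer.send(record);
    }
}
```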
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang <wangguoz@gmail.com>
Closes #1137 from ymatsuda/set_timestamp_in_sinknode
guozhangwang
Without clearing the dirty flags, RocksDBStore will perform a flush for every new record. This bug made store performance painfully slow.
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang <wangguoz@gmail.com>
Closes #1163 from ymatsuda/clear_dirty_flag
…nology
Hi all,
This is my first contribution and I hope it will be good.
The PR is related to this issue:
https://issues.apache.org/jira/browse/KAFKA-3449
Thanks a lot,
Andrea
Author: Andrea Cosentino <ancosen@gmail.com>
Reviewers: Yasuhiro Matsuda, Guozhang Wang
Closes #1134 from oscerd/KAFKA-3449
Fix an NPE in StoreChangeLogger caused by a record outside the window retention period.
guozhangwang
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang <wangguoz@gmail.com>
Closes #1124 from ymatsuda/logger_npe
Replace `update` with `withPartitions`, which returns a copy instead of mutating the instance.
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Guozhang Wang <wangguoz@gmail.com>
Closes #1118 from ijuma/kafka-3432-cluster-update-thread-safety
Also improves Java docs for the Streams high-level DSL.
Author: Guozhang Wang <wangguoz@gmail.com>
Reviewers: Ismael Juma, Michael G. Noll
Closes #1097 from guozhangwang/KNewJavaDoc
Also include:
1) remove streams-specific configs before passing configs to the producer and consumer, to avoid warning messages;
2) add a `ConsumerRecord` timestamp extractor and set it as the default extractor (sketched below).
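A sketch of an extractor that returns the timestamp embedded in the ConsumerRecord, as item 2 describes; this assumes the single-argument extract signature of the TimestampExtractor interface of that era, and the class name here is made up.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

// Hypothetical class name; the built-in default added by this change behaves like this.
public class RecordTimestampExtractor implements TimestampExtractor {
    @Override
    public long extract(ConsumerRecord<Object, Object> record) {
        // Use the timestamp embedded in the ConsumerRecord (KIP-32 message timestamps).
        return record.timestamp();
    }
}
```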
Author: Guozhang Wang <wangguoz@gmail.com>
Reviewers: Michael G. Noll, Ewen Cheslack-Postava
Closes #1093 from guozhangwang/KConfigWarn
guozhangwang ymatsuda: please review.
Author: Michael G. Noll <michael@confluent.io>
Reviewers: Guozhang Wang <wangguoz@gmail.com>
Closes #1081 from miguno/KAFKA-3411
guozhangwang Very minor cleanup.
Author: Liquan Pei <liquanpei@gmail.com>
Reviewers: Guozhang Wang <wangguoz@gmail.com>
Closes #1063 from Ishiihara/minor-cleanup
Author: Guozhang Wang <wangguoz@gmail.com>
Reviewers: Yasuhiro Matsuda <yasuhiro.matsuda@gmail.com>, Jun Rao <junrao@gmail.com>
Closes #990 from guozhangwang/K3311
and remove TOTAL_RECORDS_TO_PROCESS
guozhangwang
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang <wangguoz@gmail.com>
Closes #985 from ymatsuda/config_params
guozhangwang
Author: Kim Christensen <kich@mvno.dk>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Guozhang Wang <wangguoz@gmail.com>
Closes #912 from kichristensen/KAFKA-3133
guozhangwang, made the changes as requested. I reverted my original commit and that seems to have closed the other pull request - sorry if that mucks up the process a bit
Author: tomdearman <tom.dearman@gmail.com>
Reviewers: Guozhang Wang <wangguoz@gmail.com>
Closes #978 from tomdearman/KAFKA-3278
guozhangwang
* add table aggregate to the system test
* actually create change log partition replica
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang <wangguoz@gmail.com>
Closes #966 from ymatsuda/enh_systest
There is a possibility that state directory locking fails when another stream thread is taking a long time to close all tasks. Simple retries should alleviate the problem.
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang <wangguoz@gmail.com>
Closes #899 from ymatsuda/minor2
guozhangwang
My bad. I removed ZOOKEEPER_CONNECT_CONFIG from the consumer's config by mistake. It is needed by our own partition assigner running in the consumers.
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang <wangguoz@gmail.com>
Closes #959 from ymatsuda/hotfix3
StreamThread should keep going after a commit fails due to a group rebalance.
Currently, the thread just dies.
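A schematic of the intended behavior, not the actual StreamThread code: a commit that fails because the group has already rebalanced is caught so the loop can continue rather than the thread dying.

```java
import org.apache.kafka.clients.consumer.CommitFailedException;
import org.apache.kafka.clients.consumer.Consumer;

// Schematic only, assuming a plain consumer field; names are made up.
class CommitSketch {
    private final Consumer<byte[], byte[]> consumer;

    CommitSketch(Consumer<byte[], byte[]> consumer) {
        this.consumer = consumer;
    }

    void commitOffsetsSafely() {
        try {
            consumer.commitSync(); // may fail if the group has already rebalanced
        } catch (CommitFailedException e) {
            // Partitions were reassigned; the rebalance callback will re-create tasks,
            // so swallow the failure and keep the thread running.
            System.err.println("Commit failed due to rebalance, continuing: " + e);
        }
    }
}
```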
guozhangwang
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang <wangguoz@gmail.com>
Closes #933 from ymatsuda/catch_commit_failure
See KIP-31 and KIP-32 for details.
A few notes on the patch:
1. This patch implements KIP-31 and KIP-32. It includes the features in KAFKA-3025, KAFKA-3026, and KAFKA-3036
2. All unit tests passed.
3. The unit tests were run with both the new and the old message formats.
4. When message format conversion occurs during consumption, the consumer will not be able to detect a message-size-too-large situation. I did not try to fix this because the situation seems rare and only happens during the migration phase.
Author: Jiangjie Qin <becket.qin@gmail.com>
Author: Ismael Juma <ismael@juma.me.uk>
Author: Jiangjie (Becket) Qin <becket.qin@gmail.com>
Reviewers: Jason Gustafson <jason@confluent.io>, Anna Povzner <anna@confluent.io>, Ismael Juma <ismael@juma.me.uk>, Guozhang Wang <wangguoz@gmail.com>, Jun Rao <junrao@gmail.com>
Closes #764 from becketqin/KAFKA-3025
Removing streams' specific config params from producer/consumer configs to reduce warning messages.
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang <wangguoz@gmail.com>
Closes #906 from ymatsuda/clean_config
Pass through the root StateStore in the init method so the inner StateStore can register that object.
Author: tomdearman <tom.dearman@gmail.com>
Reviewers: Yasuhiro Matsuda
Closes #904 from tomdearman/KAFKA-3229
* We need to poll periodically even when all partitions are paused in order to respond to a possible rebalance promptly.
* There is a race condition when two (or more) threads try to clean up the same state directory. One of the threads fails with a FileNotFoundException, so the new code simply catches and ignores it.
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Gwen Shapira
Closes #893 from ymatsuda/hotfix
* During window store initialization, we have to open segments in segment id order and update ```currentSegmentId```, otherwise cleanup won't work.
* ```getSegment()``` should not create a segment or clean up old segments if the segment id is greater than ```currentSegmentId```. Segment maintenance should be driven only by data insertion, not by queries.
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes #891 from ymatsuda/hotfix2
Buffered records of change logs must be cleared upon reassignment of standby tasks.
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes #889 from ymatsuda/hotfix
guozhangwang
A window store should open all existing segments. This is important for segment cleanup, and it also ensures that the first fetch() call returns the hits (the values in the search range); previously, fetch() missed the hits immediately after initialization.
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes #886 from ymatsuda/hotfix3
The range is inclusive according to KeyValueStore's JavaDoc.
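For illustration, a range query over a hypothetical store; per the fix, both endpoints ("apple" and "cherry" here) are included in the results.

```java
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;

class RangeSketch {
    // Hypothetical store contents; range(from, to) returns both endpoints if present.
    static void printRange(KeyValueStore<String, Long> store) {
        KeyValueIterator<String, Long> iter = store.range("apple", "cherry");
        try {
            while (iter.hasNext()) {
                KeyValue<String, Long> kv = iter.next();
                System.out.println(kv.key + " -> " + kv.value);
            }
        } finally {
            iter.close(); // iterators over state stores must be closed
        }
    }
}
```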
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang
Closes #883 from ymatsuda/minor
* RocksDBStore.putInternal should bypass logging.
* StoreChangeLogger should not call context.recordCollector() when there is nothing to log
* This is for standby tasks. In a standby task, recordCollector() throws an exception; there should be nothing to log anyway.
* Fixed a ConcurrentModificationException in StreamThread
guozhangwang
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang
Closes #877 from ymatsuda/hotfix2
Work around partition ordering not being preserved by consumer group management.
guozhangwang
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang
Closes #868 from ymatsuda/partitionOrder
guozhangwang
Temporarily disabled state store access checking.
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang
Closes #864 from ymatsuda/fix_table_lookup
We need to close the producer before closing tasks, to make sure all messages are acked and hence checkpoint offsets are updated before closing the tasks and their state. The order was mistakenly reversed before.
Author: Guozhang Wang <wangguoz@gmail.com>
Reviewers: Yasuhiro Matsuda <yasuhiro@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #845 from guozhangwang/KStreamState
Changes include:
1) Move logging logic from MeteredXXXStore to the internal stores, and leave the WindowedStore API clean by removing all internalPut/Get functions.
2) Wrap common logging behavior of InMemory and LRUCache stores into one class.
3) Fix a bug in StoreChangeLogger where byte arrays are not comparable in a HashSet, by using a dedicated RawStoreChangeLogger.
4) Add a caching layer on top of RocksDBStore with object caching; it relies on the object's equals and hashCode functions being consistent with the serdes.
Author: Guozhang Wang <wangguoz@gmail.com>
Reviewers: Yasuhiro Matsuda <yasuhiro@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #826 from guozhangwang/K3060
guozhangwang
Remove an unused class, FilteredIterator, and its test.
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Gwen Shapira
Closes #816 from ymatsuda/remove_obsolete_class
It behaves better on Windows and provides more useful error messages.
Also:
* Minor inconsistency fix in `kafka.server.OffsetCheckpoint`.
* Remove delete from `streams.state.OffsetCheckpoint` constructor (similar to the change in `kafka.server.OffsetCheckpoint` in 836cb19633 (diff-2503b32f29cbbd61ed8316f127829455L29)).
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes #771 from ijuma/kafka-3105-use-atomic-move-with-fallback-instead-of-rename
guozhangwang
When ```WindowedSerializer``` is specified in ```to(...)``` or ```through(...)``` for a key, we use ```WindowedStreamPartitioner```.
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang
Closes #779 from ymatsuda/partitioner
Added an option to use custom partitioning logic within each topology sink.
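A sketch of the kind of partitioner this option enables, assuming the StreamPartitioner shape of that era (partition(key, value, numPartitions)); the prefix-based routing rule is made up.

```java
import org.apache.kafka.streams.processor.StreamPartitioner;

// Hypothetical partitioner: route records by a key prefix so related keys
// land in the same partition of the sink topic.
public class PrefixPartitioner implements StreamPartitioner<String, String> {
    @Override
    public Integer partition(String key, String value, int numPartitions) {
        String prefix = (key == null) ? "" : key.split("-", 2)[0];
        return (prefix.hashCode() & 0x7fffffff) % numPartitions; // mask keeps the index non-negative
    }
}
```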
Author: Randall Hauch <rhauch@gmail.com>
Reviewers: Guozhang Wang
Closes #309 from rhauch/kafka-2649
guozhangwang
An implementation of a local store for join windows. This implementation uses "rolling" RocksDB instances for timestamp-based truncation.
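The "rolling" idea can be summarized by how a timestamp maps to a segment: records in the same time interval share one RocksDB instance, and whole instances are dropped once they fall out of retention. A schematic with made-up constants:

```java
// Schematic only; the interval and retention values are made up.
class SegmentSketch {
    static final long SEGMENT_INTERVAL_MS = 60_000L;       // one RocksDB instance per minute
    static final long RETENTION_PERIOD_MS = 10 * 60_000L;  // keep ten minutes of data

    static long segmentId(long timestamp) {
        return timestamp / SEGMENT_INTERVAL_MS; // records in the same interval share a segment
    }

    static boolean isExpired(long segmentStartMs, long streamTimeMs) {
        // An entire segment (RocksDB instance) is dropped, rather than deleting row by row.
        return segmentStartMs < streamTimeMs - RETENTION_PERIOD_MS;
    }
}
```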
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang
Closes #726 from ymatsuda/windowed_join
guozhangwang
At the DAG level, `KTable<K,V>` sends (key, (new value, old value)) downstream. This is done by wrapping the new value and the old value in an instance of the `Change<V>` class and sending it as the "value" part of the stream. The old value is omitted (set to null) by default as an optimization. When any downstream processor needs to use the old value, the framework should enable it (see `KTableImpl.enableSendingOldValues()` and implementations of `KTableProcessorSupplier.enableSendingOldValues()`).
NOTE: This is meant to be used by aggregation. But if there is a use case like a SQL database trigger, we can add a new KTable method to expose this.
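A minimal sketch of the Change<V> wrapper described above (the exact fields of the real class may differ): downstream KTable processors see both values, and oldValue stays null unless old-value forwarding has been enabled.

```java
// Illustrative shape of the change wrapper; the real class may differ in detail.
public class Change<V> {
    public final V newValue;
    public final V oldValue; // null unless sending old values has been enabled

    public Change(V newValue, V oldValue) {
        this.newValue = newValue;
        this.oldValue = oldValue;
    }

    @Override
    public String toString() {
        return "(" + newValue + " <- " + oldValue + ")";
    }
}
```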
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang
Closes #672 from ymatsuda/trigger
guozhangwang
* a test for ktable state store creation
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang
Closes #661 from ymatsuda/more_ktable_test
onurkaraman becketqin Do you have time to review this patch? It addresses the ticket that jjkoshy filed in KAFKA-2668.
Author: Dong Lin <lindong28@gmail.com>
Reviewers: Onur Karaman <okaraman@linkedin.com>, Joel Koshy <jjkoshy@gmail.com>, Guozhang Wang <wangguoz@gmail.com>, Jun Rao <junrao@gmail.com>
Closes #328 from lindong28/KAFKA-2668
guozhangwang
* fix ProcessorStateManager to use correct ktable partitions
* more ktable tests
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang
Closes #635 from ymatsuda/more_ktable_test
guozhangwang
* added KTable API and impl
* added standby support for KTable
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang
Closes #604 from ymatsuda/add_ktable
Changed StreamingConfig.getConsumerConfigs to use getBaseConsumerConfigs.
Author: bbejeck <bbejeck@gmail.com>
Author: Bill Bejeck <bbejeck@gmail.com>
Reviewers: Guozhang Wang
Closes #596 from bbejeck/KAFKA-2902-StreamingConfig-use-getBaseConsumerConfigs
Starting a KafkaStream was failing because the TopologyBuilder.addSink method was not connecting the sink with its parent processor(s)/source(s). Just needed to wire up the sink with its parent(s) in TopologyBuilder.addSink.
Author: bbejeck <bbejeck@gmail.com>
Reviewers: Guozhang Wang
Closes #572 from bbejeck/KAFKA-2872_kafka_stream_sink_not_connected_to_parent
guozhangwang
A restore consumer does not belong to a consumer group.
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang
Closes #543 from ymatsuda/no_group_for_restore_consumer
guozhangwang
An optimization that may reduce unnecessary polls for standby tasks.
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang
Closes #535 from ymatsuda/remove_empty_standby_task
guozhangwang
* added a new config param "num.standby.replicas" (the default value is 0); a config sketch follows this list
* added a new abstract class AbstractTask
* added StandbyTask as a subclass of AbstractTask
* modified StreamTask to a subclass of AbstractTask
* StreamThread
* standby tasks are created by calling StreamThread.addStandbyTask() from onPartitionsAssigned()
* standby tasks are destroyed by calling StreamThread.removeStandbyTasks() from onPartitionsRevoked()
* In addStandbyTasks(), change log partitions are assigned to restoreConsumer.
* In removeStandbyTasks(), change log partitions are removed from restoreConsumer.
* StreamThread polls change log records using restoreConsumer in the runLoop with timeout=0.
* If records are returned, StreamThread calls StandbyTask.update and passes the records to each standby task.
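A minimal configuration sketch for the new parameter described in this list; every other required streams and client setting is elided, and the values are placeholders.

```java
import java.util.Properties;

class StandbyConfigSketch {
    static Properties streamingProps() {
        Properties props = new Properties();
        // ... other required streams/client settings go here ...
        props.put("num.standby.replicas", "1"); // default is 0; keep one standby replica per task
        return props;
    }
}
```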
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang
Closes #526 from ymatsuda/standby_task
Fails when order of elements is incorrect
Author: Grant Henke <granthenke@gmail.com>
Reviewers: Yasuhiro Matsuda
Closes #510 from granthenke/streams-test
guozhangwang
When a rebalance happens, each consumer reports the following information to the coordinator.
* Client UUID (a unique id assigned to an instance of KafkaStreaming)
* Task ids of previously running tasks
* Task ids of valid local states on the client's state directory
TaskAssignor does the following
* Assign a task to a client that was running it previously. If there is no such client, assign the task to a client that has a valid local state for it.
* Try to balance the load among stream threads.
* A client may have more than one stream thread. The assignor tries to assign tasks to a client in proportion to its number of threads.
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang
Closes #497 from ymatsuda/task_assignment
* Added StateStoreSupplier
* StateStore
* Added init(ProcessorContext context) method
* TopologyBuilder
* Added addStateStore(StateStoreSupplier supplier, String... processNames)
* Added connectProcessorAndStateStores(String processorName, String... stateStoreNames)
* This is for the case where processors have not yet been created when a store is added to the topology (used by KStream).
* KStream
* added stateStoreNames to process(), transform(), transformValues() (see the sketch after this list).
* Refactored existing state stores to implement StateStoreSupplier
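A sketch of the wiring described above, using the method names from this list; the store name "counts" and the supplier/processor arguments are hypothetical, and the processor is assumed to look the store up via context.getStateStore("counts") in its init().

```java
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.processor.ProcessorSupplier;
import org.apache.kafka.streams.processor.StateStoreSupplier;
import org.apache.kafka.streams.processor.TopologyBuilder;

class StateStoreWiringSketch {
    // storeSupplier.name() is assumed to return "counts"; processorSupplier is user code.
    static void wire(TopologyBuilder builder,
                     KStream<String, String> stream,
                     StateStoreSupplier storeSupplier,
                     ProcessorSupplier<String, String> processorSupplier) {
        builder.addStateStore(storeSupplier);        // register the store with the topology
        stream.process(processorSupplier, "counts"); // connect the store to the processor by name
    }
}
```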
guozhangwang
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang
Closes #387 from ymatsuda/state_store_supplier
StreamThread should keep calling consumer.poll() even when no task is assigned. This is necessary to get a task.
guozhangwang
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang
Closes #373 from ymatsuda/no_task
guozhangwang
* A task id is now a class, ```TaskId```, that has topic group id and partition id fields.
* ```TopologyBuilder``` assigns a topic group id to a topic group. Related methods are changed accordingly.
* A state store uses the partition id part of the task id as the change log partition id.
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang
Closes #365 from ymatsuda/task_id
guozhangwang
* added ```PartitionGrouper``` (abstract class)
* This class is responsible for grouping partitions. Each group forms a task.
* Users may implement this class for custom grouping.
* added ```DefaultPartitionGrouper```
* our default implementation of ```PartitionGrouper``` (a grouping schematic follows this list)
* added ```KafkaStreamingPartitionAssignor```
* We always use this as ```PartitionAssignor``` of stream consumers.
* Actual grouping is delegated to ```PartitionGrouper```.
* ```TopologyBuilder```
* added ```topicGroups()```
* This returns groups of related topics according to the topology
* added ```copartitionSources(sourceNodes...)```
* This is used by the DSL layer. It asserts that the specified source nodes must be copartitioned.
* added ```copartitionGroups()```
* This returns groups of copartitioned topics
* KStream layer
* keep track of source nodes to determine copartitioned sources when streams are joined
* source nodes are set to null when the partitioning property is not preserved (e.g. ```map()```, ```transform()```), which indicates the stream is no longer joinable
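A schematic of the default grouping idea (not the actual DefaultPartitionGrouper code): same-numbered partitions across a group's copartitioned topics are combined into one task.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.kafka.common.TopicPartition;

class GroupingSketch {
    // Schematic only: group partition i of every topic in a copartitioned topic group
    // into task i, assuming all topics in the group have numPartitions partitions.
    static Map<Integer, List<TopicPartition>> groupByPartition(List<String> topicGroup, int numPartitions) {
        Map<Integer, List<TopicPartition>> tasks = new HashMap<>();
        for (int partition = 0; partition < numPartitions; partition++) {
            List<TopicPartition> members = new ArrayList<>();
            for (String topic : topicGroup) {
                members.add(new TopicPartition(topic, partition));
            }
            tasks.put(partition, members); // each set of same-numbered partitions forms one task
        }
        return tasks;
    }
}
```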
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang
Closes #353 from ymatsuda/grouping
Added a new `KeyValueStore` implementation called `InMemoryLRUCacheStore` that keeps a maximum number of entries in memory; as the size exceeds the capacity, the least-recently-used entry is removed from the store and the backing topic. Also added unit tests for this new store and the existing `InMemoryKeyValueStore` and `RocksDBKeyValueStore` implementations. A new `KeyValueStoreTestDriver` class simplifies all of the other tests, and can be used by other libraries to help test their own custom implementations.
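The eviction policy described above can be pictured with a plain LinkedHashMap in access order; this only illustrates the LRU behavior, not the store itself (which also deletes the evicted entry from the backing topic).

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustration of the LRU policy only; the real store also removes evicted entries
// from its backing topic.
class LruCacheSketch<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCacheSketch(int capacity) {
        super(16, 0.75f, true); // access order, so reads refresh an entry's recency
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least-recently-used entry once over capacity
    }
}
```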
This PR depends upon [KAFKA-2593](https://issues.apache.org/jira/browse/KAFKA-2593) and its PR at https://github.com/apache/kafka/pull/255. Once that PR is merged, I can rebase this PR if desired.
Two issues were uncovered when creating these new unit tests, and both are also addressed as separate (small) commits in this PR:
* The `RocksDBKeyValueStore` initialization was not creating the file system directory if missing.
* `MeteredKeyValueStore` was casting to `ProcessorContextImpl` to access the `RecordCollector`, which prevented using `MeteredKeyValueStore` implementations in tests where something other than `ProcessorContextImpl` was used. The fix was to introduce a `RecordCollector.Supplier` interface to define this `recordCollector()` method, and change `ProcessorContextImpl` and `MockProcessorContext` to both implement this interface. Now, `MeteredKeyValueStore` can cast to the new interface to access the record collector rather than to a single concrete implementation, making it possible to use any and all current stores inside unit tests.
Author: Randall Hauch <rhauch@gmail.com>
Reviewers: Edward Ribeiro, Guozhang Wang
Closes #256 from rhauch/kafka-2594
guozhangwang
StreamTaskTest did not set up a temp directory for each test. This occasionally caused interference between tests through state directory locking.
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang
Closes #317 from ymatsuda/fix_StreamTaskTest
guozhangwang
This change aims to remove unnecessary ```consumer.poll(0)``` calls.
* once after some partition is resumed
* whenever the size of the top queue in any task is below ```BUFFERED_RECORDS_PER_PARTITION_CONFIG```
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang
Closes #315 from ymatsuda/less_poll_zero
Add support for the key value stores to use specified serializers and deserializers (aka, "serdes"). Prior to this change, the stores were limited to only the default serdes specified in the topology's configuration and exposed to the processors via the ProcessorContext.
Now, using InMemoryKeyValueStore and RocksDBKeyValueStore are similar: both are parameterized on the key and value types, and both have similar multiple static factory methods. The static factory methods either take explicit key and value serdes, take key and value class types so the serdes can be inferred (only for the built-in serdes for string, integer, long, and byte array types), or use the default serdes on the ProcessorContext.
Author: Randall Hauch <rhauch@gmail.com>
Reviewers: Guozhang Wang
Closes #255 from rhauch/kafka-2593
guozhangwang
Fix the order of flushing. Undo a change I made some time ago.
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang
Closes #304 from ymatsuda/flush_order
guozhangwang
* added back type-safe stateful transform methods (kstream.transform() and kstream.transformValues())
* changed kstream.process() to void
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang
Closes #292 from ymatsuda/transform_method
A few of Kafka Streams' interfaces and classes are not well-aligned with Java 8's functional interfaces. By making these changes, when Kafka moves to Java 8 these classes can extend standard Java 8 functional interfaces while remaining backward compatible. This will make it easier for developers to use Kafka Streams, and may allow us to eventually remove these custom interfaces and just use the standard Java 8 interfaces.
The changes include:
1. The 'apply' method of KStream's `Predicate` functional interface was renamed to `test` to match the method name on `java.util.function.BiPredicate`. This will allow KStream's `Predicate` to extend `BiPredicate` when Kafka moves to Java 8, and for the `KStream.filter` and `filterOut` methods to accept `BiPredicate` (a usage sketch follows below).
2. Renamed the `ProcessorDef` and `WindowDef` interfaces to `ProcessorSupplier` and `WindowSupplier`, respectively. Also the `SlidingWindowDef` class was renamed to `SlidingWindowSupplier`, and the `MockProcessorDef` test class was renamed to `MockProcessorSupplier`. The `instance()` method in all were renamed to `get()`, so that all of these can extend/implement Java 8's `java.util.function.Supplier<T>` interface in the future with no other changes and while remaining backward compatible. Variable names that used some form of "def" were changed to use "supplier".
These two sets of changes were made in separate commits.
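For illustration, a predicate written against the renaming described in item 1 (test() instead of apply()); the filtering condition is made up.

```java
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Predicate;

class PredicateSketch {
    // Made-up condition: keep records whose value is non-empty.
    static KStream<String, String> keepNonEmpty(KStream<String, String> stream) {
        Predicate<String, String> nonEmpty = new Predicate<String, String>() {
            @Override
            public boolean test(String key, String value) { // renamed from apply() by this change
                return value != null && !value.isEmpty();
            }
        };
        return stream.filter(nonEmpty);
    }
}
```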
Author: Randall Hauch <rhauch@gmail.com>
Reviewers: Ismael Juma, Guozhang Wang
Closes #270 from rhauch/kafka-2600
guozhangwang
This code change properly types ProcessorDef. This also makes KStream.process() type-safe.
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Ismael Juma, Guozhang Wang
Closes #289 from ymatsuda/typing_ProcessorDef
ymatsuda junrao Could you take a quick look? The current unit test is failing on this.
Author: Guozhang Wang <wangguoz@gmail.com>
Reviewers: Ismael Juma, Jun Rao
Closes #276 from guozhangwang/HF-ProcessorStateManager
Remove state storage upon unclean shutdown and fix streaming metrics used for local state.
Author: Guozhang Wang <wangguoz@gmail.com>
Reviewers: Edward Ribeiro, Yasuhiro Matsuda, Jun Rao
Closes #265 from guozhangwang/K2591
This work has been contributed by Jesse Anderson, Randall Hauch, Yasuhiro Matsuda and Guozhang Wang. The detailed design can be found in https://cwiki.apache.org/confluence/display/KAFKA/KIP-28+-+Add+a+processor+client.
Author: Guozhang Wang <wangguoz@gmail.com>
Author: Yasuhiro Matsuda <yasuhiro.matsuda@gmail.com>
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Author: ymatsuda <yasuhiro.matsuda@gmail.com>
Author: Randall Hauch <rhauch@gmail.com>
Author: Jesse Anderson <jesse@smokinghand.com>
Author: Ismael Juma <ismael@juma.me.uk>
Author: Jesse Anderson <eljefe6a@gmail.com>
Reviewers: Ismael Juma, Randall Hauch, Edward Ribeiro, Gwen Shapira, Jun Rao, Jay Kreps, Yasuhiro Matsuda, Guozhang Wang
Closes #130 from guozhangwang/streaming