From 8deb6c6911616f887ebb2678f3f12ee1da09a618 Mon Sep 17 00:00:00 2001
From: Clemens Hutter
Date: Fri, 8 Aug 2025 11:17:36 +0200
Subject: [PATCH] MINOR: Remove SPAM URL in Streams Documentation (#20321)

The previous URL http://lambda-architecture.net/ seems to now be controlled by spammers

Co-authored-by: Shashank

Reviewers: Mickael Maison
---
 docs/streams/core-concepts.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/streams/core-concepts.html b/docs/streams/core-concepts.html
index c400ca08453..a2d1b7209b5 100644
--- a/docs/streams/core-concepts.html
+++ b/docs/streams/core-concepts.html
@@ -279,7 +279,7 @@

     In stream processing, one of the most frequently asked question is "does my stream processing system guarantee that each record is processed once and only once, even if some failures are encountered in the middle of processing?"
     Failing to guarantee exactly-once stream processing is a deal-breaker for many applications that cannot tolerate any data-loss or data duplicates, and in that case a batch-oriented framework is usually used in addition
-    to the stream processing pipeline, known as the <a href="http://lambda-architecture.net/">Lambda Architecture</a>.
+    to the stream processing pipeline, known as the Lambda Architecture.
     Prior to 0.11.0.0, Kafka only provides at-least-once delivery guarantees and hence any stream processing systems that leverage it as the backend storage could not guarantee end-to-end exactly-once semantics.
     In fact, even for those stream processing systems that claim to support exactly-once processing, as long as they are reading from / writing to Kafka as the source / sink, their applications cannot actually guarantee
     that no duplicates will be generated throughout the pipeline.
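
For readers of the documentation text touched by this patch, a minimal sketch of how a Kafka Streams application typically opts in to the exactly-once processing guarantee mentioned above; the application id and bootstrap server address are illustrative placeholders, not values taken from this change.

import java.util.Properties;

import org.apache.kafka.streams.StreamsConfig;

public class ExactlyOnceConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Illustrative placeholders; substitute your own application id and broker address.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // The default guarantee is at-least-once; exactly-once must be enabled explicitly.
        // EXACTLY_ONCE_V2 is available from Kafka Streams 2.8 and requires brokers 2.5 or newer.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);
        System.out.println("processing.guarantee = " + props.get(StreamsConfig.PROCESSING_GUARANTEE_CONFIG));
    }
}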