MINOR: Fix reassign partitions system test (#18860)

The tests which set reassign_from_offset_zero=False have a setup phase that produces records with old timestamps to the topic and waits until retention cleans them, so that the main phase of the test can run against non-zero offsets. The setup phase did not wait long enough for the cleaning task to kick in, mainly because the scheduled task had not started yet: log.initial.task.delay.ms defaults to 30s. Reducing it to 5s helps stabilize the test. The patch also increases the sleep to 12s to leave a bit more headroom.
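For illustration, a minimal sketch of the seeding pattern the setup phase relies on: produce records whose timestamps already exceed the retention limit, then wait for the broker's retention task to delete them. This is not the test's actual code; it uses kafka-python, and the topic name, broker address, and record count are illustrative assumptions.

```python
# Minimal sketch (not the system test code): seed a topic with records whose
# timestamps are already older than the retention limit so the broker's
# retention task deletes them on its next pass. Assumes kafka-python, a
# broker on localhost:9092, and an existing topic named "test_topic".
import time

from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")

# A timestamp far enough in the past to exceed log.retention.ms.
old_timestamp_ms = int(time.time() * 1000) - 24 * 60 * 60 * 1000

for i in range(1000):
    producer.send("test_topic", value=b"payload-%d" % i,
                  timestamp_ms=old_timestamp_ms)
producer.flush()

# With log.retention.check.interval.ms=5000 and log.initial.task.delay.ms=5000,
# the first retention pass can run roughly 10s after broker startup, so a 12s
# sleep leaves a little headroom before the main phase of the test starts.
time.sleep(12)
```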

```
================================================================================
SESSION REPORT (ALL TESTS)
ducktape version: 0.12.0
session_id:       2025-02-11--016
run time:         26 minutes 9.451 seconds
tests run:        12
passed:           12
flaky:            0
failed:           0
ignored:          0
================================================================================
```

Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
David Jacot committed on 2025-02-11 15:46:19 +01:00 (commit 84b639d932, parent 1bebdd9fe8)
2 changed files with 6 additions and 4 deletions

config_property.py:

```diff
@@ -44,6 +44,7 @@ LOG_SEGMENT_BYTES = "log.segment.bytes"
 LOG_RETENTION_CHECK_INTERVAL_MS = "log.retention.check.interval.ms"
 LOG_RETENTION_MS = "log.retention.ms"
 LOG_CLEANER_ENABLE = "log.cleaner.enable"
+LOG_INITIAL_TASK_DELAY = "log.initial.task.delay.ms"
 METADATA_LOG_DIR = "metadata.log.dir"
 METADATA_LOG_SEGMENT_BYTES = "metadata.log.segment.bytes"
```

reassign_partitions_test.py:

```diff
@@ -47,7 +47,8 @@ class ReassignPartitionsTest(ProduceConsumeValidateTest):
         self.kafka = KafkaService(test_context, num_nodes=4, zk=None,
                                   server_prop_overrides=[
                                       [config_property.LOG_ROLL_TIME_MS, "5000"],
-                                      [config_property.LOG_RETENTION_CHECK_INTERVAL_MS, "5000"]
+                                      [config_property.LOG_RETENTION_CHECK_INTERVAL_MS, "5000"],
+                                      [config_property.LOG_INITIAL_TASK_DELAY, "5000"]
                                   ],
                                   topics={self.topic: {
                                       "partitions": self.num_partitions,
@@ -130,11 +131,11 @@ class ReassignPartitionsTest(ProduceConsumeValidateTest):
         self.logger.info("Seeded topic with %d messages which will be deleted" %\
                          producer.num_acked)
         # Since the configured check interval is 5 seconds, we wait another
-        # 6 seconds to ensure that at least one more cleaning so that the last
-        # segment is deleted. An altenate to using timeouts is to poll each
+        # 12 seconds to ensure that at least one more cleaning pass runs, so
+        # that the last segment is deleted. An alternative is to poll each
         # partition until the log start offset matches the end offset. The
         # latter is more robust.
-        time.sleep(6)
+        time.sleep(12)

     @cluster(num_nodes=8)
     @matrix(
```
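The comment above notes that polling each partition until the log start offset matches the end offset would be more robust than a fixed sleep. A hedged sketch of that polling loop, again using kafka-python with an illustrative topic name, broker address, and timeout (none of which are part of the patch):

```python
# Sketch of the polling alternative mentioned in the diff comment: wait until
# every partition's log start offset has caught up with its end offset, i.e.
# all seeded records have been deleted by retention. Assumes kafka-python,
# a broker on localhost:9092, and an existing topic named "test_topic".
import time

from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(bootstrap_servers="localhost:9092")
partitions = [TopicPartition("test_topic", p)
              for p in consumer.partitions_for_topic("test_topic")]

deadline = time.time() + 60
while time.time() < deadline:
    start = consumer.beginning_offsets(partitions)
    end = consumer.end_offsets(partitions)
    if all(start[tp] == end[tp] for tp in partitions):
        break  # every segment holding the seeded records has been deleted
    time.sleep(1)
else:
    raise TimeoutError("retention did not delete the seeded records in time")
```

Compared with a fixed sleep, this loop exits as soon as the last segment is gone and fails loudly if retention never runs, which is why the comment calls it the more robust option.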