This very small patch requires extended explanations. The patch
swaps two lines in a rabbit_variable_queue setup: one that sets
the memory hint to 0, which causes reduce_memory_usage to always
flush to disk and fsync; and another that publishes a lot of
messages to the queue, which is afterwards manipulated further
to get it into the exact right state for the relevant tests.
The problem with calling reduce_memory_usage after every single
message has been published is not the writing to disk itself
(the v2 tests do not suffer from performance issues in that
regard) but rather that rabbit_queue_index will always flush its
journal (containing the one message), which results in opening
the segment file, appending to it, and closing it. The file
handling is done by file_handle_cache which, in this case, will
always fsync the data before closing the file. It is this one
fsync per message that makes the relevant tests very slow.
By swapping the lines, meaning we publish all messages first and
then set the memory hint to 0, we end up with a single
reduce_memory_usage call that results in an fsync, at the end
(there may be other fsyncs as part of normal operations). We
still get the same result, because all messages will have been
flushed to disk, only this time in far fewer operations.
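The effect can be seen with a rough standalone sketch in plain
Erlang (not the actual suite code; the module name, file names
and message count are made up for illustration) that compares an
fsync after every write with a single fsync after all writes:

    %% Illustrative only: one fsync per message vs. one fsync at
    %% the end, using nothing but the Erlang file module.
    -module(fsync_demo).
    -export([run/0]).

    run() ->
        Msgs = [<<I:64>> || I <- lists:seq(1, 1000)],
        {T1, ok} = timer:tc(fun() -> write_sync_each(Msgs) end),
        {T2, ok} = timer:tc(fun() -> write_sync_once(Msgs) end),
        io:format("fsync per message: ~pms~nsingle fsync: ~pms~n",
                  [T1 div 1000, T2 div 1000]).

    %% Mirrors the old ordering: every publish ends up flushing
    %% the journal and fsyncing before the segment file is closed.
    write_sync_each(Msgs) ->
        {ok, Fd} = file:open("each.seg", [write, raw, binary]),
        [begin
             ok = file:write(Fd, M),
             ok = file:datasync(Fd)
         end || M <- Msgs],
        file:close(Fd).

    %% Mirrors the new ordering: write everything first, then do
    %% a single fsync at the end.
    write_sync_once(Msgs) ->
        {ok, Fd} = file:open("once.seg", [write, raw, binary]),
        [ok = file:write(Fd, M) || M <- Msgs],
        ok = file:datasync(Fd),
        file:close(Fd).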
The slowness doesn't seem to have been causing problems on CI,
which already runs the tests very fast, but the change should
help on macOS and possibly in other development environments.
On dirty recovery the count in the segment file was already
accurate. It was not accurate otherwise as it assumed that
all messages would be written to the index, which is not
the case in the current implementation.
Because queues deliver messages sequentially, we do not need to
keep track of delivers per message; we only need to keep track
of the highest message that was delivered, via its seq_id().
This allows us to avoid updating the index and storing data
unnecessarily, and it can help simplify the code (not yet
visible in this WIP commit because the old code was left in
place or commented out for the time being).
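A minimal sketch of that idea (illustrative module and function
names, not the actual index code):

    %% Keep only a high-water mark of delivered seq_ids instead
    %% of a per-message delivered flag.
    -module(deliver_highwater).
    -export([new/0, record_delivered/2, is_delivered/2]).

    %% Nothing delivered yet.
    new() -> -1.

    %% Delivery is sequential, so remembering the highest seq_id
    %% seen is enough.
    record_delivered(SeqId, HighestDelivered) ->
        erlang:max(SeqId, HighestDelivered).

    %% Anything at or below the mark counts as delivered.
    is_delivered(SeqId, HighestDelivered) ->
        SeqId =< HighestDelivered.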
Includes a few small bug fixes.
The new default of 2048 was chosen based on various scenarios.
It provides much better memory usage when many queues are used
(allowing one host to go from 500 queues to 800+ queues), and
the performance cost for single queues appears to be nil or
negligible (< 1%).