BEFORE: time gmake -C deps/rabbit ct-dynamic_qq 1.92s user 1.44s system 2% cpu 2:23.56 total
AFTER: time gmake -C deps/rabbit ct-dynamic_qq 1.66s user 1.22s system 2% cpu 1:56.44 total
Reduce the number of tests that are run for 2 nodes.
BEFORE: time gmake -C deps/rabbit ct-rabbit_stream_queue 7.22s user 5.72s system 2% cpu 8:28.18 total
AFTER: time gmake -C deps/rabbit ct-rabbit_stream_queue 27.04s user 8.43s system 10% cpu 5:38.63 total
in rabbitmq.conf.
Note that this does not include any tests: a test
would have to use a writable directory, and it is
not obvious what cross-platform path could be used
without computing it programmatically.
It's a trivial schema file that uses an existing
core validator => let's leave it as is.
The FD limits are still valuable.
The FDs-used stat will still show some information
during the CQv1 to v2 upgrade, so it is kept for now.
In the future it will have to be reworked to query
the system, or be removed.
The message has been tweaked; it isn't about FHC
or queues but about system limits only. The
ulimit() function can later be moved out of
FHC when FHC gets fully removed.
They are no longer used.
This removes a couple file_handle_cache:info/1 calls.
We are not removing them from the HTTP API to avoid
breaking things unintentionally.
Part of the removal of file_handle_cache.
The Prometheus endpoint was updated but the Grafana dashboard
was not.
The FD stats are using the system's state rather than
file_handle_cache so there's no need to remove them.
Stats were not removed, including management UI stats
relating to FDs.
Web-MQTT and Web-STOMP configuration relating to FHC
were not removed.
The file_handle_cache itself must be kept until we
remove CQv1.
Besides fixing a regression detected by priority_queue_SUITE,
this introduces a drive-by change:
rabbit_priority_queue: avoid an exception when
the max priority is a negative value that slipped
through validation.
DQT = default queue type.
When a client provides no queue type, validation
should take the defaults (virtual host, global,
and the last resort fallback) into account
instead of considering the type to
be "undefined".
References #11457, #11528
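The intended fallback chain can be sketched as follows. This is an illustrative sketch only: the function and variable names (resolve_queue_type/3, first_defined/1) are made up and do not match the actual rabbit code. The point is that "no type provided" must fall through the defaults instead of being treated as the literal type 'undefined'.

```erlang
%% Sketch: resolve the effective queue type when the client
%% provides none. Names are hypothetical.
resolve_queue_type(undefined, VHostDefault, GlobalDefault) ->
    %% fall through: virtual host DQT, then global DQT,
    %% then the last resort fallback (classic queues)
    first_defined([VHostDefault, GlobalDefault, rabbit_classic_queue]);
resolve_queue_type(ClientProvided, _VHostDefault, _GlobalDefault) ->
    ClientProvided.

first_defined([undefined | Rest]) -> first_defined(Rest);
first_defined([Value | _Rest])    -> Value.
```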
The queue type argument won't always be a binary,
for example, when a virtual host is created.
As such, the validation code should accept at
least atoms in addition to binaries.
While at it, improve logging and error reporting
when DQT validation fails, and make the definition
import tests focused on virtual hosts a bit more
robust.
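A minimal sketch of the kind of normalization this implies; the function name normalize_dqt/1 is hypothetical and not the actual rabbit code:

```erlang
%% Sketch: internal callers (e.g. virtual host creation) may pass
%% an atom while clients pass a binary; validation should accept
%% both by normalizing to one representation first.
normalize_dqt(Type) when is_atom(Type)   -> atom_to_binary(Type, utf8);
normalize_dqt(Type) when is_binary(Type) -> Type.
```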
ra_state may contain a QQ state such as {'foo',init,unknown}.
Before this fix, all_replica_states did not map such states
to a 2-tuple, which led to a crash in maps:from_list because
a 3-tuple can't be handled.
A crash in rabbit_quorum_queue:all_replica_states leads to no
results being returned from a given node when the CLI asks for
QQs with minimum quorum.
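The shape of the fix can be sketched like this; the helper name normalize_replica_state/1 is illustrative, not the actual rabbit code:

```erlang
%% Sketch: drop the third element of any ra_state 3-tuple such as
%% {QName, init, unknown} so that maps:from_list/1, which requires
%% key-value 2-tuples, only ever sees {QName, RaState} pairs.
normalize_replica_state({QName, RaState, _Details}) -> {QName, RaState};
normalize_replica_state({QName, RaState})           -> {QName, RaState}.
```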
In case 16, an await_condition/2 condition was
not correctly matching the error. As a result,
the function proceeded to the assertion step
earlier than it should have, failing with
an obscure function_clause.
This was because an {error, Context} clause
was not correct.
In addition to fixing it, this change adds a
catch-all clause and verifies the loaded
tagged virtual host before running any assertions
on it.
If the virtual host was not imported, case 16
will now fail with a specific CT log message.
References #11457 because the changes there
exposed this behavior in CI.