Commit Graph

1065 Commits

Jean-Sébastien Pédron b15eb0ff1b
rabbit_db: `join/2` now takes care of stopping/starting RabbitMQ
[Why]
Up until now, a user had to run the following three commands to expand a
cluster:
1. stop_app
2. join_cluster
3. start_app

Stopping and starting the `rabbit` application and taking care of the
underlying Mnesia application could be handled by `join_cluster`
directly.

[How]
After the call to `can_join/1` and before proceeding with the actual
join, the code remembers the state of `rabbit`, the Feature flags
controller and Mnesia.

After the join, it restarts whatever needs to be restarted. It does
so regardless of the success or failure of the join. One exception is
when the node switched from Mnesia to Khepri as part of that join. In
this case, Mnesia is left stopped.
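
A hedged sketch of that flow; `do_join/2` and the result shape are
placeholders for the actual cluster-join code in `rabbit_db`:

    %% Sketch only: remember what was running, join, then restore.
    join(RemoteNode, NodeType) ->
        RabbitWasRunning = rabbit:is_running(),
        MnesiaWasRunning = mnesia:system_info(is_running) =:= yes,
        Result = do_join(RemoteNode, NodeType),
        %% Restart services regardless of the join outcome, except
        %% Mnesia when the node switched to Khepri during the join.
        case Result of
            {ok, khepri}            -> ok;
            _ when MnesiaWasRunning -> ok = mnesia:start();
            _                       -> ok
        end,
        RabbitWasRunning andalso rabbit:start(),
        Result.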
2023-10-26 11:22:47 +02:00
Michael Klishin e87a3995c5
Closes #9733 2023-10-19 11:27:14 -04:00
Jean-Sébastien Pédron f571b86692
rabbit_khepri: Remove Mnesia files after migration
[Why]
When a Khepri-based node joins a Mnesia-based cluster, it is reset and
switches back from Khepri to Mnesia. If there are Mnesia files left in
its data directory, Mnesia will restart with stale/incorrect data and
the operation will fail.

After a migration to Khepri, we need to make sure there are no stale
Mnesia files.

[How]
We use `rabbit_mnesia` to query the Mnesia files and delete them.
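
A minimal sketch of that cleanup, assuming the files live in the
directory reported by `mnesia:system_info(directory)`:

    %% Sketch only: remove every file in the Mnesia directory.
    delete_mnesia_files() ->
        Dir = mnesia:system_info(directory),
        {ok, Files} = file:list_dir(Dir),
        lists:foreach(
          fun(File) -> ok = file:delete(filename:join(Dir, File)) end,
          Files).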
2023-10-17 09:38:12 +02:00
Karl Nilsson 8db5316b87 Stream queue: treat discard and return like settle
Currently these are not allowed for use with stream queues
which is a bit too strict. Some client implementations will
automatically nack or reject messages that are pending when an
application requests to stop consuming. Treating all message outcomes
the same way makes at least as much sense as treating them differently.
2023-10-05 20:30:30 -04:00
Alex Valiushko 2d569f1701 New quorum queue members join as temporary non-voters
Because both `add_member` and `grow` default to Membership status `promotable`,
new members will have to catch up before they are considered cluster members.
This can be overridden with either the `voter` or the permanent
`non_voter` status (an example follows the list below). The latter is
useless without additional tooling, so it is kept undocumented.

- non-voters do not affect quorum size for election purposes
- `observer_cli` reports their status with lowercase 'f'
- `rabbitmq-queues check_if_node_is_quorum_critical` takes voter status into
account
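
For illustration, overriding the default status when growing a queue
might look like this (assuming the CLI exposes a `--membership` flag, as
this change suggests):

    rabbitmq-queues grow rabbit@newnode all --membership voter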
2023-10-05 20:30:30 -04:00
Simon Unge cffc77d396 Add overflow as an operator policy 2023-10-05 11:22:51 +00:00
Diana Parra Corbacho 5f0981c5a3
Allow to use Khepri database to store metadata instead of Mnesia
[Why]

Mnesia is a very powerful and convenient tool for Erlang applications:
it is a persistent disc-based database, it handles replication across
multiple Erlang nodes and it is available out-of-the-box from the
Erlang/OTP distribution. RabbitMQ relies on Mnesia to manage all its
metadata:

* virtual hosts' properties
* internal users
* queue, exchange and binding declarations (not queues data)
* runtime parameters and policies
* ...

Unfortunately Mnesia makes it difficult to handle network partitions and,
as a consequence, the merge conflicts between Erlang nodes once the
network partition is resolved. RabbitMQ provides several partition
handling strategies but they are not bullet-proof. Users still hit
situations where it is a pain to repair a cluster following a network
partition.

[How]

@kjnilsson created Ra [1], a Raft consensus library that RabbitMQ
already uses successfully to implement quorum queues and streams for
instance. Those queues do not suffer from network partitions.

We created Khepri [2], a new persistent and replicated database engine
based on Ra and we want to use it in place of Mnesia in RabbitMQ to
solve the problems with network partitions.

This patch integrates Khepri as an experimental feature. When enabled,
RabbitMQ will store all its metadata in Khepri instead of Mnesia.

This change comes with behavior changes. While Khepri remains disabled,
you should see no changes to the behavior of RabbitMQ. If there are
changes, it is a bug. After Khepri is enabled, there are significant
changes of behavior that you should be aware of.

Because it is based on the Raft consensus algorithm, when there is a
network partition, only the cluster members that are in the partition
with at least `(Number of nodes in the cluster ÷ 2) + 1` nodes
can "make progress". In other words, only those nodes may write to the
Khepri database and read from the database and expect a consistent
result.

For instance in a cluster of 5 RabbitMQ nodes (a quick arithmetic check follows the list):
* If there are two partitions, one with 3 nodes, one with 2 nodes, only
  the group of 3 nodes will be able to write to the database.
* If there are three partitions, two with 2 nodes, one with 1 node, none
  of the group can write to the database.
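
As a quick check of the quorum arithmetic above, in an Erlang shell:

    %% Quorum size for the 5-node example: floor(5 ÷ 2) + 1 = 3.
    1> (5 div 2) + 1.
    3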

Because the Khepri database will be used for all kinds of metadata, it
means that RabbitMQ nodes that can't write to the database will be
unable to perform some operations. A list of operations and what to
expect is documented in the associated pull request and the RabbitMQ
website.

This requirement from Raft also affects the startup of RabbitMQ nodes in
a cluster. Indeed, at least a quorum number of nodes must be started at
once to allow nodes to become ready.

To enable Khepri, you need to enable the `khepri_db` feature flag:

    rabbitmqctl enable_feature_flag khepri_db

When the `khepri_db` feature flag is enabled, the migration code
performs the following two tasks (sketched in Erlang below):
1. It synchronizes the Khepri cluster membership from the Mnesia
   cluster. It uses `mnesia_to_khepri:sync_cluster_membership/1` from
   the `khepri_mnesia_migration` application [3].
2. It copies data from relevant Mnesia tables to Khepri, doing some
   conversion if necessary on the way. Again, it uses
   `mnesia_to_khepri:copy_tables/4` from `khepri_mnesia_migration` to do
   it.
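
Sketched as Erlang calls (variable names here are illustrative; check
the `khepri_mnesia_migration` documentation for the exact API):

    %% 1. Align the Khepri cluster membership with the Mnesia cluster.
    mnesia_to_khepri:sync_cluster_membership(StoreId),
    %% 2. Copy and convert the relevant Mnesia tables.
    mnesia_to_khepri:copy_tables(StoreId, MigrationId, Tables, ConverterMod).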

This can be performed on a running standalone RabbitMQ node or cluster.
Data will be migrated from Mnesia to Khepri without any service
interruption. Note that during the migration, the performance may
decrease and the memory footprint may go up.

Because this feature flag is considered experimental, it is not enabled
by default even on a brand new RabbitMQ deployment.

More about the implementation details below:

In the past months, all accesses to Mnesia were isolated in a collection
of `rabbit_db*` modules. This is where the integration of Khepri mostly
takes place: we use a function called `rabbit_khepri:handle_fallback/1`
which selects the database and performs the query or the transaction.
Here is an example from `rabbit_db_vhost`:

* Up until RabbitMQ 3.12.x:

        get(VHostName) when is_binary(VHostName) ->
            get_in_mnesia(VHostName).

* Starting with RabbitMQ 3.13.0:

        get(VHostName) when is_binary(VHostName) ->
            rabbit_khepri:handle_fallback(
              #{mnesia => fun() -> get_in_mnesia(VHostName) end,
                khepri => fun() -> get_in_khepri(VHostName) end}).

This `rabbit_khepri:handle_fallback/1` function relies on two things:
1. the fact that the `khepri_db` feature flag is enabled, in which case
   it always executes the Khepri-based variant.
2. the ability or not to read and write to Mnesia tables otherwise.

Before the feature flag is enabled, or during the migration, the
function will try to execute the Mnesia-based variant. If it succeeds,
then it returns the result. If it fails because one or more Mnesia
tables can't be used, it restarts from scratch: it means the feature
flag is being enabled and depending on the outcome, either the
Mnesia-based variant will succeed (the feature flag couldn't be enabled)
or the feature flag will be marked as enabled and it will call the
Khepri-based variant. The meat of this function really lives in the
`khepri_mnesia_migration` application [3] and
`rabbit_khepri:handle_fallback/1` is a wrapper on top of it that knows
about the feature flag.

However, some calls to the database do not depend on the existence of
Mnesia tables, such as functions where we need to learn about the
members of a cluster. For those, we can't rely on exceptions from
Mnesia. Therefore, we just look at the state of the feature flag to
determine which database to use. There are two situations though:

* Sometimes, we need the feature flag state query to block because the
  function interested in it can't return a valid answer during the
  migration. Here is an example:

        case rabbit_khepri:is_enabled(RemoteNode) of
            true  -> can_join_using_khepri(RemoteNode);
            false -> can_join_using_mnesia(RemoteNode)
        end

* Sometimes, we need the feature flag state query to NOT block (for
  instance because it would cause a deadlock). Here is an example:

        case rabbit_khepri:get_feature_state() of
            enabled -> members_using_khepri();
            _       -> members_using_mnesia()
        end

Direct accesses to Mnesia still exist. They are limited to code that is
specific to Mnesia such as classic queue mirroring or network partition
handling strategies.

Now, to discover the Mnesia tables to migrate and how to migrate them,
we use an Erlang module attribute called
`rabbit_mnesia_tables_to_khepri_db` which indicates a list of Mnesia
tables and an associated converter module. Here is an example in the
`rabbitmq_recent_history_exchange` plugin:

    -rabbit_mnesia_tables_to_khepri_db(
       [{?RH_TABLE, rabbit_db_rh_exchange_m2k_converter}]).

The converter module (`rabbit_db_rh_exchange_m2k_converter` in this
example) is in fact a "sub" converter module called by
`rabbit_db_m2k_converter`. See the documentation of a `mnesia_to_khepri`
converter module to learn more about these modules.
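
Below is a hedged sketch of such a converter module. The callback names
follow the `mnesia_to_khepri_converter` behaviour, but check its
documentation for the exact signatures; `khepri_path_for/1` is a
hypothetical helper:

    -module(my_m2k_converter).
    -behaviour(mnesia_to_khepri_converter).
    -export([init_copy_to_khepri/3, copy_to_khepri/3,
             delete_from_khepri/3]).

    init_copy_to_khepri(_StoreId, _MigrationId, Tables) ->
        %% Prepare any state needed during the copy.
        {ok, #{tables => Tables}}.

    copy_to_khepri(_Table, Record, State) ->
        %% Convert the Mnesia record and write it to Khepri.
        ok = khepri:put(khepri_path_for(Record), Record),
        {ok, State}.

    delete_from_khepri(_Table, Key, State) ->
        ok = khepri:delete(khepri_path_for(Key)),
        {ok, State}.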

[1] https://github.com/rabbitmq/ra
[2] https://github.com/rabbitmq/khepri
[3] https://github.com/rabbitmq/khepri_mnesia_migration

See #7206.

Co-authored-by: Jean-Sébastien Pédron <jean-sebastien@rabbitmq.com>
Co-authored-by: Diana Parra Corbacho <dparracorbac@vmware.com>
Co-authored-by: Michael Davis <mcarsondavis@gmail.com>
2023-09-29 16:00:11 +02:00
Michael Klishin 8852729cea Reduce log spam 2023-09-28 11:46:39 -04:00
Karl Nilsson 882e0c1749 Ra 2.7.0
This includes a new ra:key_metrics/1 API that is more readily available
than parsing the output of sys:get_status/1.

The rabbit_quorum_queue:status/1 function has been ported to use
this API instead and now includes a few new fields.
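
For example (hedged; the exact contents of the returned map may differ):

    %% ServerId is a Ra server id such as {RaName, Node}.
    Metrics = ra:key_metrics(ServerId),
    %% Metrics is a map of Raft details (term, commit index, state,
    %% etc.) that is cheap to read even when the server is busy.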
2023-09-28 11:46:39 -04:00
Michael Klishin e63b6a6099 policy_SUITE: rename a test
it was named by copying and pasting an adjacent
one that indeed had to do with queue type-specific
policies but "version-specific" policies are not
something RabbitMQ supports

References #9547 #9541
2023-09-27 01:06:47 -04:00
Jean-Sébastien Dominique 8c6ba6daca Add Classic Queue version to operator policies 2023-09-26 20:13:52 -04:00
Michael Klishin 1e8701f2ca
Merge pull request #9542 from SimonUnge/queue_pattern_bug
Fix wrong queue-pattern type
2023-09-26 19:32:54 -04:00
Michael Klishin a0ae78e141
Merge pull request #9525 from rabbitmq/operator-policies
Operator policies
2023-09-26 19:31:57 -04:00
Michael Klishin 7508a97140
Merge pull request #9483 from rabbitmq/test-resilience-qq
Test suites: wait for conditions to reduce flakiness
2023-09-26 19:31:37 -04:00
Simon Unge e8a872ff42 Fix wrong queue-pattern type 2023-09-26 20:57:24 +00:00
Diana Parra Corbacho d2b055ad49 Operator policies: add unsupported ones to each queue type
Only valid policies are effectively applied on each queue type,
but they need to be added to 'unsupported-capabilities' to be
excluded from the queue info.
2023-09-22 16:40:20 +02:00
Jean-Sébastien Pédron 6b815b94b8
Start to remove the use of `rabbit_control_helper`
[Why]
The CLI is only compatible with the version of RabbitMQ it is shipped
with. It does not pretend to be backward- or forward-compatible with
other versions.

Unfortunately, `rabbit_control_helper` always uses the CLI's module from
the first RabbitMQ node and executes it against any node in a testcase.
This may break for the reason described above.

[How]
There is no reason to fix `rabbit_control_helper`; we just need to
switch to the initial way of using the CLI,
`rabbit_ct_broker_helper:rabbitmqctl()`. This one was already fixed to
use the appropriate copy of the CLI.

This patch only fixes `clustering_management_SUITE` and
`rabbitmq_4_0_deprecations_SUITE`: the former because it broke for this
reason, the latter as a low-hanging fruit.
2023-09-22 16:05:24 +02:00
Diana Parra Corbacho b94b22b765 rabbitmqctl_integration_SUITE: wait for condition to reduce flakiness 2023-09-20 22:55:07 +02:00
Diana Parra Corbacho ded2c197b8 logging_SUITE: increase wait for long entries 2023-09-20 22:55:07 +02:00
Diana Parra Corbacho 1e7c64c41c quorum_queue_SUITE: wait for conditions or wait for longer to reduce flakiness 2023-09-20 22:55:07 +02:00
Michael Klishin 343e59363e
Merge pull request #9459 from Ayanda-D/propagate-protocol-exceptions-in-channel-interceptors
Handle and propagate protocol exceptions in channel interceptors to the calling parent channels
2023-09-19 10:48:55 -04:00
Michael Klishin 6d420cbe41
Merge pull request #9460 from rabbitmq/flake-cli-forget-cluster-node
cli_forget_cluster_node_SUITE: wait for leader election in both queues
2023-09-19 10:47:40 -04:00
Diana Parra Corbacho 17def72a3b cli_forget_cluster_node_SUITE: wait for leader election in both queues 2023-09-19 16:07:39 +02:00
Ayanda Dube c817bed018 allow propagation of protocol exceptions in channel interceptors to parent/executing channels 2023-09-19 14:01:50 +01:00
Arnaud Cogoluègnes dba1e177b5
Skip test in mixed-version cluster mode
Applies only to 3.13.
2023-09-18 18:33:01 +02:00
Arnaud Cogoluègnes c697f29d23
Do not merge user-sent x-death values (test) 2023-09-18 18:33:01 +02:00
Arnaud Cogoluègnes 7957473717
Add comments to dead letter test 2023-09-18 18:33:01 +02:00
Arnaud Cogoluègnes 11d528d748
Keep CC header after dead-lettering without routing key
To avoid mutating the message.

See https://rabbitmq.com/dlx.html#effects
2023-09-18 18:33:00 +02:00
Arnaud Cogoluègnes 9199935c5b
Fix MC test 2023-09-18 18:32:59 +02:00
Arnaud Cogoluègnes e22fcd70fe
Make MC conversion function return ok or error 2023-09-18 18:32:59 +02:00
Arnaud Cogoluègnes 8affb7af4d
Fix Bazel configuration for transaction test suite 2023-09-18 18:32:58 +02:00
Arnaud Cogoluègnes 0bcfd304b2
Create transaction test suite 2023-09-18 18:32:58 +02:00
Arnaud Cogoluègnes a225f30bd0
Add cc header and dlx integration tests
Following up on failures detected by Java project test
suites after the merge of the message container PR.
These tests are ported to Erlang in the broker test suite.
2023-09-18 18:32:58 +02:00
Michael Klishin 97462e232d
Merge pull request #9293 from rabbitmq/delete-stream-fix
Delete stream replica even if the node is down
2023-09-18 11:25:50 -04:00
Michael Klishin 779c08945b
Merge pull request #9449 from rabbitmq/forget-cluster-node-streams
CLI forget_cluster_node: shrink stream queues
2023-09-18 10:58:54 -04:00
Michael Klishin 2413744a5a
Merge branch 'main' into delete-stream-fix 2023-09-18 10:35:57 -04:00
Diana Parra Corbacho c241b36bf7 CLI forget_cluster_node: shrink stream queues 2023-09-18 16:08:40 +02:00
Michael Klishin 6c2ed30935
Merge pull request #9450 from rabbitmq/test-await
Tests: increase await timeout
2023-09-18 09:51:09 -04:00
Diana Parra Corbacho 40d055732a Tests: increase await timeout
There are some flakes in these low wait tests on CI
2023-09-18 15:38:18 +02:00
Rin Kuryloski a8dcf86f49 Rename mc_SUITE -> mc_unit_SUITE
since the short name seems to break running the suite in Windows CI
2023-09-18 11:10:55 +02:00
Luke Bakken c94d22aceb Use pg_local to track AMQP 1.0 connections
Fixes #9371

Since each AMQP 1.0 connection opens several direct AMQP connections, we
must assign each direct connection a unique name to prevent multiple
entries in the `connection_created_stats` table.

Also, use `pg_local` to track AMQP 1.0 connections instead of walking
the supervisor trees.

Nuke authz_backends from connection created event 💥

Fix regex for connection name because UniqueId is part of it now (channel number)
2023-09-15 09:03:43 -07:00
Diana Parra Corbacho 7540ccc628 forget_cluster_node: handle errors while shrinking quorum queues 2023-09-15 13:34:53 +02:00
Diana Parra Corbacho 08ed15a6b3 Tests: replace some sleeps for wait for condition 2023-09-14 16:35:00 +02:00
Diana Parra Corbacho 1ac9249512 Rename quorum_queue_utils to queue_utils
Not quorum queue specific any longer.
It's used in many test suites to query any queue type.
2023-09-13 16:29:23 +02:00
Michael Klishin 08fc83442b
Merge pull request #9384 from cloudamqp/prio_queue_version
Fix queue storage version returned by priority queues
2023-09-12 15:24:30 -04:00
Péter Gömöri 03b2db6a52 Fix queue storage version returned by priority queues
Fixes #9370
2023-09-12 19:50:51 +02:00
Ayanda-D 1d163886fd new rabbit_amqqueue_control for queue related control operations and fix bazel issues raised in MK's review 2023-09-11 14:13:03 +01:00
Ayanda-D 3ac283d13e use queue resource names for new queue operations (changes the api specs) 2023-09-11 14:13:03 +01:00
Ayanda-D f0a29c95a5 make kill_queue/{2,3} and kill_queue_hard/{2,3} from crashing_queues_SUITE reusable 2023-09-11 14:13:02 +01:00
Michal Kuratczyk 1768694185
Delete stream replica even if node is down.
Otherwise we can't forget replicas on nodes that are no longer
cluster members.

Fixes https://github.com/rabbitmq/rabbitmq-server/issues/9282
2023-09-11 13:55:53 +02:00
David Ansari 62009dc8d8
Translate AMQP 0.9.1 CC headers to AMQP 1.0 x-cc (#9321)
* Translate AMQP 0.9.1 CC headers to AMQP 1.0 x-cc

Translate AMQP 0.9.1 CC headers to AMQP 1.0 x-cc message annotations.

We want CC headers to be kept as an AMQP legacy feature and therefore
special-case their conversion to AMQP 1.0.

* Translate x-cc from 1.0 message annotation to 091 CC header

for the case where you publish via 0.9.1 with CC to a stream and consume
via 0.9.1 from a stream, in which case the consuming 0.9.1 client would
like to know the original CC headers.
2023-09-07 18:25:45 +02:00
Karl Nilsson fb91185f54
Various message container fixes and improvements (#9278)
* AMQP encoded bodies should be converted to amqp correctly

Fix for AMQP encoded amqpl payloads.

Also removing some headers added during amqpl->amqpl conversions that
duplicate information in the amqp header.

* we should not need to prepare for read to set annotations

* fix tagged_prop() type spec

* tagged_prop() -> tagged_value()
2023-09-04 16:35:19 +01:00
Diana Parra Corbacho 6b4cfe86b7 routing_SUITE: extract topic test from unit_access_control_SUITE
As the unit_access_control_SUITE topic test is the only testcase
that covers topic routing, it makes sense to extract it and run
it as a standalone test suite. It eases the development and testing
of topic routing features.
2023-09-01 10:36:17 +02:00
Michael Klishin 16ab86b0ac
Merge pull request #9233 from rabbitmq/clustering-utils
feature_flags_SUITE: wait for cluster status instead of a fixed time
2023-08-31 19:45:58 +04:00
Diana Parra Corbacho 89fd473b19 confirms_rejects_SUITE: replace timer:sleep by wait for conditions
Reduces flakes
2023-08-31 16:31:39 +02:00
Diana Parra Corbacho 1b90263417 feature_flags_SUITE: wait for cluster status instead of a fixed time
Extracted to clustering_utils.erl the utility functions to check cluster status
2023-08-31 15:19:58 +02:00
Karl Nilsson 119f034406
Message Containers (#5077)
This PR implements an approach for a "protocol (data format) agnostic core" where the format of the message isn't converted at point of reception.

Currently all non-AMQP 0.9.1 originating messages are converted into an AMQP 0.9.1 flavoured basic_message record before being sent to a queue. If the messages are then consumed by the originating protocol they are converted back from AMQP 0.9.1. For some protocols such as MQTT 3.1 this isn't too expensive as MQTT is mostly a fairly easily mapped subset of AMQP 0.9.1, but for others such as AMQP 1.0 the conversions are awkward and in some cases lossy even if consuming from the originating protocol.

This PR instead wraps all incoming messages in their originating form into a generic, extensible message container type (mc). The container module exposes an API to get common message details such as size and various properties (ttl, priority etc) directly from the source data type. Each protocol needs to implement the mc behaviour such that when a message originating from one protocol is consumed by another protocol we convert it to the target protocol at that point.

The message container also contains annotations, dead letter records and other meta data we need to record during the lifetime of a message. The original protocol message is never modified unless it is consumed.

This includes conversion modules to and from amqp, amqpl (AMQP 0.9.1) and mqtt.


COMMIT HISTORY:

* Refactor away from using the delivery{} record

In many places including exchange types. This should make it
easier to move towards using a message container type instead of
basic_message.

Add mc module and move direct replies outside of exchange

Lots of changes incl classic queues

Implement stream support incl amqp conversions

simplify mc state record

move mc.erl

mc dlx stuff

recent history exchange

Make tracking work

But doesn't take a protocol agnostic approach as we just convert
everything into AMQP legacy and back. Might be good enough for now.

Tracing as a whole may want a bit of a re-vamp at some point.

tidy

make quorum queue peek work by legacy conversion

dead lettering fixes

dead lettering fixes

CMQ fixes

rabbit_trace type fixes

fixes

fix

Fix classic queue props

test assertion fix

feature flag and backwards compat

Enable message_container feature flag in some SUITEs

Dialyzer fixes

fixes

fix

test fixes

Various

Manually update a gazelle generated file

until a gazelle enhancement can be made
https://github.com/rabbitmq/rules_erlang/issues/185

Add message_containers_SUITE to bazel

and regen bazel files with gazelle from rules_erlang@main

Simplify essential property access

Such as durable, ttl and priority by extracting them into annotations
at message container init time.

Move type

to remove dependency on amqp10 stuff in mc.erl

mostly because I don't know how to make bazel do the right thing

add more stuff

Refine routing header stuff

wip

Cosmetics

Do not use "maybe" as type name as "maybe" is a keyword since OTP 25
which makes Erlang LS complain.

* Dedup death queue names

* Fix function clause crashes

Fix failing tests in the MQTT shared_SUITE:
A classic queue message ID can be undefined as set in
fbe79ff47b/deps/rabbit/src/rabbit_classic_queue_index_v2.erl (L1048)

Fix failing tests in the MQTT shared_SUITE-mixed:
When feature flag message_containers is disabled, the
message is not an #mc{} record, but a #basic_message{} record.

* Fix is_utf8_no_null crash

Prior to this commit, the function crashed if invalid UTF-8 was
provided, e.g.:
```
1> rabbit_misc:is_valid_shortstr(<<"😇"/utf16>>).
** exception error: no function clause matching rabbit_misc:is_utf8_no_null(<<216,61,222,7>>) (rabbit_misc.erl, line 1481)
```

* Implement mqtt mc behaviour

For now via amqp translation.

This is still work in progress, but the following SUITEs pass:
```
make -C deps/rabbitmq_mqtt ct-shared t=[mqtt,v5,cluster_size_1] FULL=1
make -C deps/rabbitmq_mqtt ct-v5 t=[mqtt,cluster_size_1] FULL=1
```

* Shorten mc file names

Module name length matters because for each persistent message the #mc{}
record is persisted to disk.

```
1> iolist_size(term_to_iovec({mc, rabbit_mc_amqp_legacy})).
30
2> iolist_size(term_to_iovec({mc, mc_amqpl})).
17
```

This commit renames the mc modules:
```
ag -l rabbit_mc_amqp_legacy | xargs sed -i 's/rabbit_mc_amqp_legacy/mc_amqpl/g'
ag -l rabbit_mc_amqp | xargs sed -i 's/rabbit_mc_amqp/mc_amqp/g'
ag -l rabbit_mqtt_mc | xargs sed -i 's/rabbit_mqtt_mc/mc_mqtt/g'
```

* mc: make deaths an annotation + fixes

* Fix mc_mqtt protocol_state callback

* Fix test will_delay_node_restart

```
make -C deps/rabbitmq_mqtt ct-v5 t=[mqtt,cluster_size_3]:will_delay_node_restart FULL=1
```

* Bazel run gazelle

* mix format rabbitmqctl.ex

* Ensure ttl annotation is reflected in amqp legacy protocol state

* Fix id access in message store

* Fix rabbit_message_interceptor_SUITE

* dialyzer fixes

* Fix rabbit:rabbit_message_interceptor_SUITE-mixed

set_annotation/3 should not result in duplicate keys

* Fix MQTT shared_SUITE-mixed

Up to 3.12 non-MQTT publishes were always QoS 1 regardless of delivery_mode.
75a953ce28/deps/rabbitmq_mqtt/src/rabbit_mqtt_processor.erl (L2075-L2076)
From now on, non-MQTT publishes are QoS 1 if durable.
This makes more sense.

The MQTT plugin must send a #basic_message{} to an old node that does
not understand message containers.

* Field content of 'v1_0.data' can be binary

Fix
```
bazel test //deps/rabbitmq_mqtt:shared_SUITE-mixed \
    --test_env FOCUS="-group [mqtt,v4,cluster_size_1] -case trace" \
    -t- --test_sharding_strategy=disabled
```

* Remove route/2 and implement route/3 for all exchange types.

This removes the route/2 callback from rabbit_exchange_type and
makes route/3 mandatory instead. This is a breaking change and
will require all implementations of exchange types to update their
code, however this is necessary anyway for them to correctly handle
the mc type.

stream filtering fixes

* Translate directly from MQTT to AMQP 0.9.1

* handle undecoded properties in mc_compat

amqpl: put clause in right order

recover death deatails from amqp data

* Replace callback init_amqp with convert_from

* Fix return value of lists:keyfind/3

* Translate directly from AMQP 0.9.1 to MQTT

* Fix MQTT payload size

MQTT payload can be a list when converted from AMQP 0.9.1 for example

First conversions tests

Plus some other conversion related fixes.

bazel

bazel

translate amqp 1.0 null to undefined

mc: property/2 and correlation_id/message_id return type tagged values.

To ensure we can support a variety of types better.

The type type tags are AMQP 1.0 flavoured.

fix death recovery

mc_mqtt: impl new api

Add callbacks to allow protocols to compact data before storage

And make readable if needing to query things repeatedly.

bazel fix

* more decoding

* tracking mixed versions compat

* mc: flip default of `durable` annotation to save some data.

Assuming most messages are durable and that in-memory messages suffer less
from persistence overhead, it makes sense for a non-existent `durable`
annotation to mean durable=true.

* mc conversion tests and tidy up

* mc make x_header unstrict again

* amqpl: death record fixes

* bazel

* amqp -> amqpl conversion test

* Fix crash in mc_amqp:size/1

Body can be a single amqp-value section (instead of
being a list) as shown by test
```
make -C deps/rabbitmq_amqp1_0/ ct-system t=java
```
on branch native-amqp.

* Fix crash in lists:flatten/1

Data can be a single amqp-value section (instead of
being a list) as shown by test
```
make -C deps/rabbitmq_amqp1_0 ct-system t=dotnet:roundtrip_to_amqp_091
```
on branch native-amqp.

* Fix crash in rabbit_writer

Running test
```
make -C deps/rabbitmq_amqp1_0 ct-system t=dotnet:roundtrip_to_amqp_091
```
on branch native-amqp resulted in the following crash:
```
crasher:
  initial call: rabbit_writer:enter_mainloop/2
  pid: <0.711.0>
  registered_name: []
  exception error: bad argument
    in function  size/1
       called as size([<<0>>,<<"Sw">>,[<<160,2>>,<<"hi">>]])
       *** argument 1: not tuple or binary
    in call from rabbit_binary_generator:build_content_frames/7 (rabbit_binary_generator.erl, line 89)
    in call from rabbit_binary_generator:build_simple_content_frames/4 (rabbit_binary_generator.erl, line 61)
    in call from rabbit_writer:assemble_frames/5 (rabbit_writer.erl, line 334)
    in call from rabbit_writer:internal_send_command_async/3 (rabbit_writer.erl, line 365)
    in call from rabbit_writer:handle_message/2 (rabbit_writer.erl, line 265)
    in call from rabbit_writer:handle_message/3 (rabbit_writer.erl, line 232)
    in call from rabbit_writer:mainloop1/2 (rabbit_writer.erl, line 223)
```
because #content.payload_fragments_rev is currently supposed to
be a flat list of binaries instead of being an iolist.

This commit fixes this crash inefficiently by calling
iolist_to_binary/1. A better solution would be to allow AMQP legacy's #content.payload_fragments_rev
to be an iolist.

* Add accidentally deleted line back

* mc: optimise mc_amqp internal format

By removing the outer records for message and delivery annotations
as well as application properties and footers.

* mc: optimise mc_amqp map_add by using upsert

* mc: refactoring and bug fixes

* mc_SUITE routingheader assertions

* mc remove serialize/1 callback as only used by amqp

* mc_amqp: avoid returning a nested list from protocol_state

* test and bug fix

* move infer_type to mc_util

* mc fixes and additional assertions

* Support headers exchange routing for MQTT messages

When a headers exchange is bound to the MQTT topic exchange, routing
will be performed based on both MQTT topic (by the topic exchange) and
MQTT User Property (by the headers exchange).

This combines the best worlds of both MQTT 5.0 and AMQP 0.9.1 and
enables powerful routing topologies.

When the User Property contains the same name multiple times, only the
last name (and value) will be considered by the headers exchange.

* Fix crash when sending from stream to amqpl

When publishing a message via the stream protocol and consuming it via
AMQP 0.9.1, the following crash occurred prior to this commit:
```
crasher:
  initial call: rabbit_channel:init/1
  pid: <0.818.0>
  registered_name: []
  exception exit: {{badmatch,undefined},
                   [{rabbit_channel,handle_deliver0,4,
                                    [{file,"rabbit_channel.erl"},
                                     {line,2728}]},
                    {lists,foldl,3,[{file,"lists.erl"},{line,1594}]},
                    {rabbit_channel,handle_cast,2,
                                    [{file,"rabbit_channel.erl"},
                                     {line,728}]},
                    {gen_server2,handle_msg,2,
                                 [{file,"gen_server2.erl"},{line,1056}]},
                    {proc_lib,wake_up,3,
                              [{file,"proc_lib.erl"},{line,251}]}]}
```

This commit first gives `mc:init/3` the chance to set exchange and
routing_keys annotations.
If not set, `rabbit_stream_queue` will set these annotations assuming
the message was originally published via the stream protocol.

* Support consistent hash exchange routing for MQTT 5.0

When a consistent hash exchange is bound to the MQTT topic exchange,
MQTT 5.0 messages can be routed to queues consistently based on the
Correlation-Data in the PUBLISH packet.

* Convert MQTT 5.0 User Property

* to AMQP 0.9.1 headers
* from AMQP 0.9.1 headers
* to AMQP 1.0 application properties and message annotations
* from AMQP 1.0 application properties and message annotations

* Make use of Annotations in mc_mqtt:protocol_state/2

mc_mqtt:protocol_state/2 includes Annotations as parameter.
It's cleaner to make use of these Annotations when computing the
protocol state instead of relying on the caller (rabbitmq_mqtt_processor)
to compute the protocol state.

* Enforce AMQP 0.9.1 field name length limit

The AMQP 0.9.1 spec prohibits field names longer than 128 characters.
Therefore, when converting AMQP 1.0 message annotations, application
properties or MQTT 5.0 User Property to AMQP 0.9.1 headers, drop any
names longer than 128 characters.

* Fix type specs

Apply feedback from Michael Davis

Co-authored-by: Michael Davis <mcarsondavis@gmail.com>

* Add mc_mqtt unit test suite

Implement mc_mqtt:x_header/2

* Translate indicator that payload is UTF-8 encoded

when converting between MQTT 5.0 and AMQP 1.0

* Translate single amqp-value section from AMQP 1.0 to MQTT

Convert to a text representation, if possible, and indicate to MQTT
client that the payload is UTF-8 encoded. This way, the MQTT client will
be able to parse the payload.

If conversion to text representation is not possible, encode the payload
using the AMQP 1.0 type system and indicate the encoding via Content-Type
message/vnd.rabbitmq.amqp.

This Content-Type is not registered.
Type "message" makes sense since it's a message.
Vendor tree "vnd.rabbitmq.amqp" makes sense since merely subtype "amqp" is not
registered.

* Fix payload conversion

* Translate Response Topic between MQTT and AMQP

Translate MQTT 5.0 Response Topic to AMQP 1.0 reply-to address and vice
versa.

The Response Topic must be a UTF-8 encoded string.

This commit re-uses the already defined RabbitMQ target addresses:
```
"/topic/"     RK        Publish to amq.topic with routing key RK
"/exchange/"  X "/" RK  Publish to exchange X with routing key RK
```

By default, the MQTT topic exchange is configured to be amq.topic using
the 1st target address.

When an operator modifies the mqtt.exchange, the 2nd target address is
used.

* Apply PR feedback

and fix formatting

Co-authored-by: Michael Davis <mcarsondavis@gmail.com>

* tidy up

* Add MQTT message_containers test

* consistent hash exchange: avoid amqp legacy conversion

When hashing on a header value.

* Avoid converting to amqp legacy when using exchange federation

* Fix test flake

* test and dialyzer fixes

* dialyzer fix

* Add MQTT protocol interoperability tests

Test receiving from and sending to MQTT 5.0 and
* AMQP 0.9.1
* AMQP 1.0
* STOMP
* Streams

* Regenerate portions of deps/rabbit/app.bzl with gazelle

I'm not exactly sure how this happened, but gazelle seems to have been
run with an older version of the rules_erlang gazelle extension at
some point. This caused generation of a structure that is no longer
used. This commit updates the structure to the current pattern.

* mc: refactoring

* mc_amqpl: handle delivery annotations

Just in case they are included.

Also use iolist_to_iovec to create flat list of binaries when
converting from amqp with amqp encoded payload.

---------

Co-authored-by: David Ansari <david.ansari@gmx.de>
Co-authored-by: Michael Davis <mcarsondavis@gmail.com>
Co-authored-by: Rin Kuryloski <kuryloskip@vmware.com>
2023-08-31 11:27:13 +01:00
Jean-Sébastien Pédron 3ddca1ff53
rabbit_feature_flags: Run `post_enable` callback also in case of error
[Why]
The `enable` callback is executed on each node of the cluster. It could
succeed on some of them and fail on other nodes. If it succeeds
everywhere, the controller could still fail to mark the feature flag as
enabled on some of the nodes.

When this happens, we correctly mark the feature flag back as disabled
everywhere. However, the controller never gave a chance to the feature
flag callbacks to roll back anything.

[How]
Now, the controller always runs the `post_enable` callback (if any)
after it ran the `enable` callback. It adds the following field to the
passed map of arguments to indicate if the feature flag was enabled or
not:

    #{enabled => boolean()}
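
A hedged sketch of a `post_enable` callback using that field
(`rollback/0` stands in for whatever undo logic the flag needs):

    post_enable(#{enabled := true} = _Args) ->
        %% The `enable` callback succeeded everywhere; nothing to undo.
        ok;
    post_enable(#{enabled := false} = _Args) ->
        %% The feature flag could not be enabled: undo whatever the
        %% `enable` callback did.
        rollback().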

While here, fix two things:
1. One call to `restore_feature_flag_state()` was passed an older
   "version" of the inventory, instead of the latest modified one.
2. One log message had no domain set.
2023-08-29 13:17:22 +02:00
Jean-Sébastien Pédron b4b2be4dfc
rabbit_feature_flags: Wait for in-flight operations before terminating the controller
[Why]
The feature flags controller ensures all nodes in a cluster are running
before a feature flag can be enabled. It continues to do so whenever it
wants to record a state change because it requires that all nodes get
the new state otherwise the task in aborted.

However, it's difficult to verify that through out the entire process if
the feature flag has an `enable` callback. But again, if we loose a node
during the execution of the callback or between its execution and the
time we mark the feature flag as enabled on all nodes, that's ok because
the feature flag will be marked as disabled everywhere: the remaining
running nodes will go back from `state_changing` to `false` and the
stopped nodes will keep their initial state of `false`.

Nonetheless, we can increase the chance of letting an `enable` operation
finish if the controller waits for anything in-flight before it
actually exits.

[How]
The `terminate/3` function now tries to register globally, like if the
controller wanted to lock the cluster and run a task. If it succeeds to
register, it means nothing is running in parallel and it can exit. If it
fails, it waits for the globally-registered controller to finish and
tries to register again.

We expose a new `wait_for_task_and_stop/0` function to explicitly stop
the feature flags controller and call it from the `rabbit` application
pre-stop phase. The reason is that when the supervisor asks the
controller to stop as part of the regular shutdown of a supervision
tree, it has a timeout and could kill the controller if an in-flight
operation takes too much time. To avoid this kill, we prefer to use
`wait_for_task_and_stop/0` which has no timeout.
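
A hedged sketch of that synchronization, using a global name as the
cluster-wide lock (the name and details are illustrative):

    wait_for_task_and_stop() ->
        case global:register_name(ff_controller_lock, self()) of
            yes ->
                %% Nothing is in flight; safe to stop.
                global:unregister_name(ff_controller_lock),
                ok;
            no ->
                %% Wait for the in-flight task's owner to go away,
                %% then try to register again.
                case global:whereis_name(ff_controller_lock) of
                    undefined ->
                        wait_for_task_and_stop();
                    Pid ->
                        MRef = erlang:monitor(process, Pid),
                        receive
                            {'DOWN', MRef, process, Pid, _} ->
                                wait_for_task_and_stop()
                        end
                end
        end.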
2023-08-29 11:20:23 +02:00
Karl Nilsson 49108a69cd QQ: fix bug when subscribing using an already existing consumer tag
When subscribing using a consumer tag that is already in the quorum
queues state (but perhaps with a cancelled status) and that has
pending messages the next_msg_id which is used to initialise the
queue type consumer state did not take the in-flight message ids into
account. This resulted in some messages occasionally not being delivered
to the client and thus would appear stuck as awaiting acknowledgement
for the consumer.

When a new checkout operation detects there are in-flight messages
we set the last_msg_id to `undefined` and just accept the next message
that arrives, irrespective of their message id. This isn't 100% foolproof
as there may be cases where messages are lost between queue and channel
and we'd fail to trigger the fallback query for missing messages.

It is however much better than what we have atm.

NB: really the ideal solution would be to make checkout operations
async so that any inflight messages are delivered before the checkout
result. That is a much bigger change for another day.
2023-08-23 13:48:14 +01:00
Jean-Sébastien Pédron df416eb414 logging_SUITE: Don't use non-exclusive transient queues
[Why]
They are deprecated. Currently, we simply got a warning in the logs but
in a few minor versions, the testcase will start to fail because it
may not be able to declare a queue.
2023-08-18 10:01:13 +02:00
Jean-Sébastien Pédron 3cc3a3879b logging_SUITE: Ensure the exchange was declared in `logging_to_exchange_works`
[Why]
We were running the check to make sure the exchange was declared, but we
didn't verify the result of that check. The testcase would still fail
later but if we verify its existence early, the testcase can fail early
too.
2023-08-18 09:59:37 +02:00
Jean-Sébastien Pédron 270aef9437 logging_SUITE: Enable debug logging in exchange tests
[Why]
This helps when we have to debug the logging configuration or the
testsuite itself.
2023-08-18 09:58:53 +02:00
Marcial Rosales dbffccba9d Fix #9043 2023-08-14 11:51:46 +01:00
David Ansari 2a4301e12d Nack rejected messages to MQTT 5.0 client
since MQTT 5.0 supports negative acknowledgements thanks to reason codes
in the PUBACK packet.

We could either choose reason code 128 or 131. The description code for
131 applies for rejected messages, hence this commit uses 131:
> The PUBLISH is valid but the receiver is not willing to accept it.
2023-08-09 15:31:14 +02:00
Jean-Sébastien Pédron ada57c0770
per_vhost_connection_limit_SUITE: Ensure maintenance mode table is replicated
See #9005 for an explanation of the bug.
2023-08-07 17:26:15 +02:00
Diana Parra Corbacho eef97418bc
peer_discovery_classic_config_SUITE: Fix invalid app config
[Why]
The testcase used to set the `cluster_formation` proplist twice. It is
very ambiguous what we should do: is only one of them relevant or should
they be merged?

[How]
We merge both proplists into a single one.
2023-08-07 17:02:58 +02:00
Jean-Sébastien Pédron 32356eef5b
rabbit_stream_queue_SUITE: Fix `consume_and_reject` channel close detection
[Why]
The previous detection was based on a reuse of the channel to get the
error from an exit exception. The problem is that it is very dependent
on the timing: if the channel process exits before it is reused, the
test fails for two possible reasons:

1. The channel and connection processes exit before they are reused and
   the channel manager opens a new pair. The problem is that the declare
   succeeds but the test expected a failure.

2. The channel and connection processes exit during the reuse and
   `rabbit_ct_client_helpers:open_channel` in
   `retry_if_coordinator_unavailable()` waits for a response from the
   channel manager forever (this is probably a weakness of the channel
   manager in rabbitmq_ct_client_helpers). This indefinite wait causes
   the testcase to timeout.

[How]
A simpler solution is to monitor the exit reason of the channel process
that triggers the error on the server side.
2023-08-04 10:05:22 +02:00
Diana Parra Corbacho 2afd7f098e Tests: split parallel stream queue groups and retry if coordinator unavailable
This test suite times out often in CI. It is probably a real test issue
as CI is slower than our dev machines.
2023-07-27 14:25:47 +02:00
Diana Parra Corbacho 48c1a8245b Use rpcs instead of ctl commands to avoid CI failures 2023-07-25 18:18:49 +02:00
Jean-Sébastien Pédron 24d77a046f
rabbit_feature_flags: Enable deprecated features required by feature flags on init
[Why]
We don't record the state of deprecated features because it is
controlled from configuration and they can be disabled (the deprecated
feature can be turned back on) if the deprecated feature allows it.

However, some feature flags may depend on deprecated features. If those
feature flags are enabled, we need to enable the deprecated features
(turn off the deprecated features) they depend on regardless of the
configuration.

[How]
During the (re)initialization of the registry, we go through all enabled
feature flags and deprecated features' `depends_on` declarations and
consider all their dependencies to be implicitly enabled.
2023-07-25 12:29:15 +02:00
Jean-Sébastien Pédron 1e81522cf8
rabbit_db: Reset feature flags registry after a db reset
[Why]
A database reset removes the enabled feature flags file on disc. A reset
of the registry ensures that the next time the registry is reloaded, it
is also initialized from scratch.

[How]
We call `rabbit_feature_flags:reset_registry/0` after both a regular
reset and a forced reset.

The `reset_registry/0` is also exposed by the `rabbit_feature_flags`
module now. The actual implementation in `rabbit_ff_registry_factory`
should only be called by the Feature flags subsystem itself.
2023-07-25 12:00:37 +02:00
Jean-Sébastien Pédron ca1a583120
Don't run testcases in parallel when using Bazel
[Why]
Testcases fail with various system errors in CI, like the inability to
spawn system processes or to open a TCP port.

[How]
We check if the `$RABBITMQ_RUN` environment variable is set. It is only
set by Bazel and not make(1). Based on that, we compute the test group
options to include `parallel` or not.
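
A minimal sketch of that check:

    %% Run testcases in parallel only under make(1), where the
    %% $RABBITMQ_RUN environment variable is unset.
    group_options() ->
        case os:getenv("RABBITMQ_RUN") of
            false -> [parallel];
            _     -> []
        end.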
2023-07-25 11:35:19 +02:00
Jean-Sébastien Pédron b9b4f8a4c1
rabbit_db: Fall back to `rabbit_mnesia` if functions are undefined in remote nodes
[Why]
The CLI may be used against a remote node running a different version.
We took that into account in several uses of the `rabbit_db*` modules on
remote nodes, but not everywhere. Likewise in the
`clustering_management_SUITE` testsuite.

[How]
This patch falls back to previous `rabbit_mnesia`-based calls if the
initial call throws an `undef` exception.
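
A hedged sketch of the fallback pattern (module and function names
mirror the calls mentioned above):

    members(Node) ->
        case rpc:call(Node, rabbit_db_cluster, members, []) of
            {badrpc, {'EXIT', {undef, _}}} ->
                %% Remote node predates `rabbit_db_cluster`; fall back.
                rpc:call(Node, rabbit_mnesia, cluster_nodes, [all]);
            Members ->
                Members
        end.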
2023-07-20 15:47:40 +02:00
Simon Unge 559a83d45f See #7209. Evaluate quorum queue membership periodically. 2023-07-11 13:14:04 -07:00
Arnaud Cogoluègnes b89976ad91
Do not run stream filtering test in mixed-version clusters
The feature should not be used during an upgrade,
because it must be enabled on all nodes. The test
will always fail.
2023-07-10 15:21:54 +02:00
Arnaud Cogoluègnes 051ef818fd
Support stream filtering in AMQP 0.9.1 2023-07-10 15:21:54 +02:00
Karl Nilsson 86479670cf
Make filter size configurable
as a queue arg and policy
2023-07-10 15:21:53 +02:00
Michael Klishin 7471d27994
Merge pull request #8799 from rabbitmq/at-least-once-dead-lettering-fix
Fix at-least-once dead lettering when the target includes the source
2023-07-10 13:48:48 +04:00
Karl Nilsson 46e8f9ae30 fix pattern match 2023-07-07 17:25:15 +01:00
Karl Nilsson 5d563d08d5 Fix at-least-once dead lettering when the target includes the source.
If the target for at-least-once dead lettering included the source queue,
the dead letter outbound queue in the quorum queue would never be cleared.

This changes the queue -> dead letter worker message format to better distinguish
between those and queue events for "normal" queue type interactions.
2023-07-07 15:57:49 +01:00
Jean-Sébastien Pédron eaf1f0e56b
Merge pull request #8694 from rabbitmq/test-resilience
Tests: more resilient time-dependent tests
2023-07-07 16:42:25 +02:00
Diana Parra Corbacho 2aa84d6ddf Tests: more resilient time-dependent tests 2023-07-07 13:01:49 +02:00
Jean-Sébastien Pédron f4d75ae8d8
clustering_management_SUITE: Reorganize the `cluster_status/1` code
[Why]
We want the code to depend less on Mnesia (and not at all in the
future). We also want to make room to introduce the use of Khepri.

[How]
For now, we simply store each list in a variable. This gives them a name
to better understand what each one is.

`rabbit_mnesia:cluster_nodes(all)` is also replaced by
`rabbit_db_cluster:members()`. The other two calls to `rabbit_mnesia`
are left alone as they are quite specific to Mnesia.
2023-07-07 09:42:32 +02:00
Jean-Sébastien Pédron 9c358dd9f3
rabbit_peer_discovery: Move peer discovery driving code from `rabbit_mnesia`
[Why]
Peer discovery is not Mnesia-specific and will be used once we introduce
Khepri.

[How]
The whole peer discovery driving code is moved from `rabbit_mnesia` to
`rabbit_peer_discovery`. When `rabbit_mnesia` calls that code, it simply
passes a callback for the Mnesia-specific cluster expansion code.
2023-07-07 09:42:32 +02:00
Jean-Sébastien Pédron a595128d88
peer_discovery_classic_config_SUITE: Increase timetrap for `successful_discovery_with_a_subset_of_nodes_coming_online`
[Why]
Now that feature flags compatibility is tested first, before
Mnesia-specific checks, when a peer is not started yet, the feature
flags check lasts the entire timeout, so one minute. This retry
mechanism was added to feature flags in #8411.

Thus, instead of 20 seconds, the testcase takes 10 minutes now (10
retries of one minute each).
2023-07-07 09:42:32 +02:00
Jean-Sébastien Pédron 9ba4d43000
Mark transient non-exclusive queues as deprecated
[Why]
Transient queues are queues that are removed upon node restart. An
application developer can't rely on such an arbitrary event to reason
about a queue's lifetime.

The only exception are exclusive transient queues which have a lifetime
linked to that of a client connection.

[How]
Non-exclusive transient queues are marked as deprecated in the code
using the Deprecated features subsystem (based on feature flags). See
pull request #7390 for a description of that subsystem.

To test RabbitMQ behavior as if the feature was removed, the following
configuration setting can be used:
deprecated_features.permit.transient_nonexcl_queues = false

Non-exclusive transient queues can be turned off anytime, there are no
conditions to do that.

Once non-exclusive transient queues are turned off, declaring a new
queue with those arguments will be rejected with a protocol error.

Note that given the marketing calendar, the deprecated feature will go
directly from "permitted by default" to "removed" in RabbitMQ 4.0. It
won't go through the gradual deprecation process.
2023-07-06 11:02:49 +02:00
Jean-Sébastien Pédron 469afafd86
Mark classic queue mirroring as deprecated
[Why]
Classic queue mirroring will be removed in RabbitMQ 4.0. Quorum queues
provide a better safer alternative. Non-replicated classic queues remain
supported.

[How]
Classic queue mirroring is marked as deprecated in the code using the
Deprecated features subsystem (based on feature flags). See #7390 for a
description of that subsystem.

To test RabbitMQ behavior as if the feature was removed, the following
configuration setting can be used:
deprecated_features.permit.classic_queue_mirroring = false

To turn off classic queue mirroring, there must be no classic mirrored
queues declared and no HA policy defined. A node with classic mirrored
queues will refuse to start if classic queue mirroring is turned off.

Once classic queue mirroring is turned off, users will not be able to
declare HA policies. Trying to do that from the CLI or the management
API will be rejected with a warning in the logs. This impacts clustering
too: a node with classic queue mirroring turned off will only cluster
with another node which has no HA policy or has classic queue mirroring
turned off.

Note that given the marketing calendar, the deprecated feature will go
directly from "permitted by default" to "removed" in RabbitMQ 4.0. It
won't go through the gradual deprecation process.

V2: Renamed the deprecated feature from `classic_mirrored_queues` to
    `classic_queue_mirroring` to better reflect the intention. Otherwise
    it could be unclear whether only the mirroring property is
    deprecated/removed or classic queues entirely.
2023-07-06 11:02:49 +02:00
Jean-Sébastien Pédron 3031a09981
Mark RAM node type as deprecated
[Why]
RAM nodes provide no safety at all and they lost interest with recent
fast storage solutions.

[How]
RAM nodes are marked as deprecated in the code using the Deprecated
features subsystem (based on feature flags). See pull request #7390 for
a description of that subsystem.

To test RabbitMQ behavior as if the feature was removed, the following
configuration setting can be used:
deprecated_features.permit.ram_node_type = false

RAM nodes can be turned off anytime, there are no conditions to do that.

Once RAM nodes are turned off, an existing node previously created as a
RAM node will change itself to a disc node during boot. If a new node is
added to the cluster using peer discovery or the CLI, it will be as a
disc node and a warning will be logged if the requested node type is
RAM. The `change_cluster_node_type` CLI command will reject a change to
a RAM node with an error.

Note that given the marketing calendar, the deprecated feature will go
directly from "permitted by default" to "removed" in RabbitMQ 4.0. It
won't go through the gradual deprecation process.
2023-07-06 11:02:49 +02:00
Jean-Sébastien Pédron 05f6a9813f
Mark global QoS setting as deprecated
[Why]
Global QoS, where a single shared prefetch is used for an entire
channel, is not recommended practice. Per-consumer QoS (non-global)
should be set instead.

[How]
The global QoS setting is marked as deprecated in the code using the
Deprecated features subsystem (based on feature flags). See #7390 for a
description of that subsystem.

To test RabbitMQ behavior as if the feature was removed, the following
configuration setting can be used:
deprecated_features.permit.global_qos = false

Global QoS can be turned off anytime, there are no conditions to do
that.

Once global QoS is turned off, the prefetch setting will always be
considered as non-global (i.e. per-consumer). A warning message will be
logged if the default prefetch setting enables global QoS or anytime a
client requests a global QoS on the channel.

Note that given the marketing calendar, the deprecated feature will go
directly from "permitted by default" to "removed" in RabbitMQ 4.0. It
won't go through the gradual deprecation process.
2023-07-06 11:02:49 +02:00
Loïc Hoguin 610af302c6
Add support for LOCAL proxy header
This is what the proxy uses for health checks. In those cases
we use the socket's IP/ports for the connection name as we
have nothing else we can use.
2023-06-23 12:12:58 +02:00
David Ansari 86b794cdd3 Remove topic routing regression
Hashing the #resource{} record is expensive.
Routing to 40k queues via the topic exchanges takes:
~150ms prior to this commit
~100ms after this commit

As rabbit_exchange already deduplicates destination queues and binding
keys, there's no need to use maps in rabbit_db_topic_exchange or
rabbit_exchange_type_topic.
2023-06-21 17:14:08 +01:00
David Ansari bb20618b13 Return matched binding keys faster
For MQTT 5.0 destination queues, the topic exchange does not only have
to return the destination queue names, but also the matched binding
keys.
This is needed to implement MQTT 5.0 subscription options No Local,
Retain As Published and Subscription Identifiers.

Prior to this commit, as the trie was walked down, we remembered the
edges being walked and assembled the final binding key with
list_to_binary/1.

list_to_binary/1 is very expensive with long lists (long topic names),
even in OTP 26.
The CPU flame graph showed ~3% of CPU usage was spent only in
list_to_binary/1.

Unfortunately and unnecessarily, the current topic exchange
implementation stores topic levels as lists.

It would be better to store topic levels as binaries:
split_topic_key/1 should ideally use binary:split/3 similar as follows:
```
1> P = binary:compile_pattern(<<".">>).
{bm,#Ref<0.1273071188.1488322568.63736>}
2> Bin = <<"aaa.bbb..ccc">>.
<<"aaa.bbb..ccc">>
3> binary:split(Bin, P, [global]).
[<<"aaa">>,<<"bbb">>,<<>>,<<"ccc">>]
```
The compiled pattern could be placed into persistent term.

This commit decided to avoid migrating Mnesia tables to use binaries
instead of lists. Mnesia migrations are non-trivial, especially with the
current feature flag subsystem.
Furthermore the Mnesia topic tables are already getting migrated to
their Khepri counterparts in 3.13.
Adding additional migration only for Mnesia does not make sense.

So, instead of assembling the binding key as we walk down the trie and
then calling list_to_binary/1 in the leaf, it
would be better to just fetch the binding key from the database in the leaf.

As we reach the leaf of the trie, we know both source and destination.
Unfortunately, we cannot fetch the binding key efficiently with the
current rabbit_route (sorted by source exchange) and
rabbit_reverse_route (sorted by destination) tables as the key is in
the middle between source and destination.
If there are a huge number of bindings for a given source exchange (very
realistic in MQTT use cases) or a large number of bindings for a given
destination (also realistic), it would require scanning these large
number of bindings.

Therefore this commit takes the simplest possible solution:
The solution leverages the fact that binding arguments are already part of
table rabbit_topic_trie_binding.
So, if we simply include the binding key into the binding arguments, we
can fetch and return it efficiently in the topic exchange
implementation.

The following patch, which omits fetching the empty list binding
argument (the default), makes routing slower because function
`analyze_pattern.constprop.0` requires significantly more (~2.5%) CPU time
```
@@ -273,7 +273,11 @@ trie_bindings(X, Node) ->
                                    node_id       = Node,
                                    destination   = '$1',
                                    arguments     = '$2'}},
-    mnesia:select(?MNESIA_BINDING_TABLE, [{MatchHead, [], [{{'$1', '$2'}}]}]).
+    mnesia:select(
+      ?MNESIA_BINDING_TABLE,
+      [{MatchHead, [{'andalso', {'is_list', '$2'}, {'=/=', '$2', []}}], [{{'$1', '$2'}}]},
+       {MatchHead, [], ['$1']}
+      ]).
```
Hence, this commit always fetches the binding arguments.

All MQTT 5.0 destination queues will create a binding that
contains the binding key in the binding arguments.

Not only does this solution avoid expensive list_to_binary/1 calls, but
it also means that Erlang app rabbit (specifically the topic exchange
implementation) does not need to be aware of MQTT anymore:
It just returns the binding key when the binding args tell to do so.

In future, once the Khepri migration has completed, we should be able to
relatively simply remove the binding key from the binding arguments
again to free up some storage space.

Note that one of the advantages of a trie data structure is its space
efficiency: you don't have to store the same prefixes multiple
times.
However, for RabbitMQ the binding key is already stored at least N times
in various routing tables, so storing it a few times more via the
binding arguments should be acceptable.
The speed improvements are favoured over a few more MBs ETS usage.
2023-06-21 17:14:08 +01:00
David Ansari 48a442b23e Change routing options from list to map
as small maps with atom keys are optimized in OTP 26.
Rename v2 to return_binding_keys to make the routing option clearer.
2023-06-21 17:14:08 +01:00
David Ansari e2b545f270 Support MQTT 5.0 features No Local, RAP, Subscription IDs
Support subscription options "No Local" and "Retain As Published"
as well as Subscription Identifiers.

All three MQTT 5.0 features can be set on a per subscription basis.
Due to wildcards in topic filters, multiple subscriptions
can match a given topic. Therefore, to implement Retain As Published and
Subscription Identifiers, the destination MQTT connection process needs
to know what subscription(s) caused it to receive the message.

There are a few ways how this could be implemented:

1. The destination MQTT connection process is aware of all its
   subscriptions. Whenever it receives a message, it can match the
   message's routing key / topic against all its known topic filters.
   However, iteratively matching the routing key against all topic
   filters for every received message can become very expensive in the
   worst case when the MQTT client creates many subscriptions containing
   wildcards. This could be the case for an MQTT client that acts as a
   bridge or proxy or dispatcher: it could subscribe via a wildcard for
   each of its own clients.

2. Instead of iteratively matching the topic of the received message
   against all topic filters that contain wildcards, a better approach
   would be for every MQTT subscriber connection process to maintain a
   local trie data structure (similar to how topic exchanges are
   implemented) and therefore perform matching more efficiently.
   However, this does not sound optimal either because routing is
   effectively performed twice: in the topic exchange and again against
   a much smaller trie in each destination connection process.

3. Given that the topic exchange already performs routing, a much more
   sensible way would be to send the matched binding key(s) to the
   destination MQTT connection process. A subscription (topic filter)
   maps to a binding key in AMQP 0.9.1 routing. Therefore, for the first
   time in RabbitMQ, the routing function should not only output a list
   of unique destination queues, but also the binding keys (subscriptions)
   that caused the message to be routed to the destination queue.

This commit therefore implements the 3rd approach.
The downside of the 3rd approach is that it requires API changes to the
routing function and topic exchange.

Specifically, this commit adds a new function rabbit_exchange:route/3
that accepts a list of routing options. If that list contains version 2,
the caller of the routing function knows how to handle the return value
that could also contain binding keys.
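A hedged sketch of a call site (the exact return shape is an assumption
based on the description above):
```
%% Sketch only: opt into the v2 return format, which may include matched
%% binding keys alongside destination queue names (assumed shape:
%% entries are either QName or {QName, BindingKeys}).
route_with_binding_keys(Exchange, Delivery) ->
    rabbit_exchange:route(Exchange, Delivery, [v2]).
```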

This commit allows an MQTT connection process, the channel process, and
at-most-once dead lettering to handle binding keys. Binding keys are
included as AMQP 0.9.1 headers into the basic message.
Therefore, whenever a message is sent from an MQTT client or AMQP 0.9.1
client or AMQP 1.0 client or STOMP client, the MQTT receiver will know
the subscription identifier that caused the message to be received.

Note that due to the low number of allowed wildcard characters (# and
+), the cardinality of matched binding keys shouldn't be high even if
the topic contains, say, 3 levels and the message is sent to, say,
5 million destination queues. In other words, sending multiple
distinct basic messages to the destination shouldn't hurt the delegate
optimisation too much. The delegate optimisation implemented for classic
queues and rabbit_mqtt_qos0_queue(s) still takes place for all basic
messages that contain the same set of matched binding keys.

The topic exchange returns all matched binding keys by remembering the
edges walked down to the leaves. As an optimisation, only for MQTT
queues are binding keys being returned. This does add a small dependency
from app rabbit to app rabbitmq_mqtt which is not optimal. However, this
dependency should be simple to remove when omitting this optimisation.

Another important feature of this commit is persisting subscription
options and subscription identifiers because they are part of the
MQTT 5.0 session state.

In MQTT v3 and v4, the only subscription information that was part of
the session state was the topic filter and the QoS level.
Both were implicitly stored in the form of bindings:
The topic filter as the binding key and the QoS level as the destination
queue name of the binding.

For MQTT v5 we need to persist more subscription information.
From a domain perspective, it makes sense to store subscription options
as part of subscriptions, i.e. bindings, even though they are currently
not used in routing.
Therefore, this commit stores subscription options as binding arguments.

Storing subscription options as binding arguments in turn comes with
new challenges: how to handle mixed version clusters and upgrading an
MQTT session from v3 or v4 to v5?
Imagine an MQTT client connects via v5 with Session Expiry Interval > 0
to a new node in a mixed version cluster, creates a subscription,
disconnects, and subsequently connects via v3 to an old node. The
client should continue to receive messages.

To simplify such edge cases, this commit introduces a new feature flag
called mqtt_v5. If mqtt_v5 is disabled, clients cannot connect to
RabbitMQ via MQTT 5.0.
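Gating the CONNECT handling could then look like this sketch
(process_connect/1 and reject_connect/1 are hypothetical names;
rabbit_feature_flags:is_enabled/1 is the existing check):
```
%% Sketch: refuse MQTT 5.0 connections while the feature flag is disabled.
handle_connect(5, Packet) ->
    case rabbit_feature_flags:is_enabled(mqtt_v5) of
        true  -> process_connect(Packet);
        false -> reject_connect(unsupported_protocol_version)
    end;
handle_connect(_ProtoVer, Packet) ->
    process_connect(Packet).
```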

This still doesn't entirely solve the problem of MQTT session upgrades
(v4 to v5 client) or session downgrades (v5 to v4 client).

Ideally, once mqtt_v5 is enabled, all MQTT bindings contain non-empty binding
arguments. However, this will require a feature flag migration function
to modify all MQTT bindings. To be more precise, all MQTT bindings need
to be deleted and added because the binding argument is part of the
Mnesia table key.

Since feature flag migration functions are non-trivial to implement in
RabbitMQ (they can run on every node multiple times and concurrently),
this commit takes a simpler approach:
All v3 / v4 sessions keep the empty binding argument [].
All v5 sessions use the new binding argument [#mqtt_subscription_opts{}].

This requires only handling a session upgrade / downgrade by
creating a binding (with the new binding arg) and deleting the old
binding (with the old binding arg) when processing the CONNECT packet.

Note that such session upgrades or downgrades should be rather rare in
practice. Therefore these binding transactions shouldn't hurt performance.

The No Local option is implemented within the MQTT publishing connection
process: The message is not sent to the MQTT destination if the
destination queue name matches the current MQTT client ID and the
message was routed due to a subscription that has the No Local flag set.
This avoids unnecessary traffic on the MQTT queue.
The alternative would have been that the "receiving side" (same process)
filters the message out - which would have been more consistent in how
Retain As Published and Subscription Identifiers are implemented, but
would have caused unnecessary load on the MQTT queue.
2023-06-21 17:14:08 +01:00
David Ansari 49f1071591 Add MQTT v5 feature Maximum Packet Size set by client
"Allow the Client and Server to independently specify the maximum
packet size they support. It is an error for the session partner
to send a larger packet."

This commit implements the part where the Client specifies the maximum
packet size.

As per the protocol spec, the server drops an MQTT packet instead of
sending it if the packet is too large.
A debug message is logged for "infrequent" packet types.
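A minimal sketch of the drop logic (state field and helper names are
hypothetical):
```
%% Sketch: never send a packet larger than the Maximum Packet Size the
%% client announced in its CONNECT properties.
maybe_send(Packet, #state{max_packet_size_client = Max} = State) ->
    Data = serialise(Packet),
    case iolist_size(Data) =< Max of
        true  -> send(Data, State);
        false -> log_dropped(Packet, State)  %% debug log for infrequent types
    end.
```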

For PUBLISH packets, the message is rejected back to the queue such that it
will be dead lettered, if dead lettering is configured.
At the very least, Prometheus metrics for dead lettered messages will
be incremented, even if dead lettering is not configured.
2023-06-21 17:14:08 +01:00
Diana Parra Corbacho 91bcdd9c5b Test: verify just vhost folder, not whole directory 2023-06-21 13:05:03 +02:00
Diana Parra Corbacho 2387fae370 More resilient ttl test 2023-06-21 13:00:42 +02:00
Loïc Hoguin 985f4e8a68
CQ shared store write optimisations (#8507)
* CQ: Don't use FHC for writes in shared store

* CQ: Send confirms when flushing to disk in shared store

Before they would only be sent periodically or when
rolling over to a new file.

* CQ: Fast-confirm when flushing data to disk

We know the messages are on disk or were acked so there is no
need to do sets intersections/subtracts in this scenario.

* Fix a Dialyzer warning

* Faster confirms for unwritten messages

Instead of having the message store send a message to the queue
with the confirms for messages ignored due to the flying
optimisation, we have the queue handle the confirms directly
when removing the messages.

This avoids sending potentially 1 Erlang message per 1 AMQP
message to the queue.

* Refactor rabbit_msg_file:pread into rabbit_msg_store

Also make use of the opened file for multi-reads instead
of opening/reading/closing each time.

* CQ: Make sure we keep the updated CState when using read_many

* CQ shared store: Run compaction on older file candidates

The way I initially did this, the maybe_gc would be triggered
based on candidates from 15s ago, but run against candidates
from just now. This is sub-optimal because when messages are
consumed rapidly, just-now candidates are likely to be in a
file about to be deleted, and we don't want to run compaction
on those.

Instead, when sending the maybe_gc we also send the candidates
we had at the time. Then 15s later we check if the file still
exists. If it's gone, great! No compaction to do.

* CQ: Add a few todos for later
2023-06-20 20:04:17 +02:00
Michael Klishin d34203b571 clustering_management_SUITE: remove a group that's now gone 2023-06-19 13:51:38 +04:00
Michael Klishin ba52bbeed0 Drop a particularly flaky CMQ test
Since CMQs are on their way out, we are only willing
to spend so much time on it.

The test covers a scenario where four nodes are stopped, then
one force booted and then immediately removed from the cluster.
In other words, a scenario that's quite unrealistic.
2023-06-19 12:44:57 +04:00
Michael Klishin 16f49d336f Add a shorthand for the OAuth 2 authN/authZ backend
References #8512
2023-06-10 00:51:00 +04:00
Jean-Sébastien Pédron ac0565287b
Deprecated features: New module to manage deprecated features (!)
This introduces a way to declare deprecated features in the code, not
only in our communication. The new module makes it possible to disallow
the use of a deprecated feature and/or warn users when they rely on such
a feature.

[Why]
Currently, we only tell people about deprecated features through blog
posts and the mailing-list. This might be insufficient to make our users
aware that a feature they use will be removed in a future version:
* They may not read our blog or mailing-list
* They may not understand that they use such a deprecated feature
* They might wait for the big removal before they plan testing
* They might not take it seriously enough

The idea behind this patch is to increase the chance that users notice
that they are using something which is about to be dropped from
RabbitMQ. Another benefit is that they should be able to test how
RabbitMQ will behave in the future before the actual removal. This
should allow them to test and plan changes.

[How]
When a feature is deprecated in other large projects (such as FreeBSD
where I took the idea from), it goes through a lifecycle:
1. The feature is still available, but users get a warning somehow when
   they use it. They can disable it to test.
2. The feature is still available, but disabled out-of-the-box. Users
   can re-enable it (and get a warning).
3. The feature is disconnected from the build. Therefore, the code
   behind it is still there, but users have to recompile the thing to be
   able to use it.
4. The feature is removed from the source code. Users have to adapt or
   they can't upgrade anymore.

The solution in this patch offers the same lifecycle. A deprecated
feature will be in one of these deprecation phases:
1. `permitted_by_default`: The feature is available. Users get a warning
   if they use it. They can disable it from the configuration.
2. `denied_by_default`: The feature is available but disabled by
   default. Users get an error if they use it and RabbitMQ behaves like
   the feature is removed. They can re-enable it from the configuration
   and get a warning.
3. `disconnected`: The feature is present in the source code, but is
   disabled and can't be re-enabled without recompiling RabbitMQ. Users
   get the same behavior as if the code was removed.
4. `removed`: The feature's code is gone.

The whole thing is based on the feature flags subsystem, but it has the
following differences with other feature flags:
* The semantic is reversed: the feature flag behind a deprecated feature
  is disabled when the deprecated feature is permitted, or enabled when
  the deprecated feature is denied.
* The feature flag behind a deprecated feature is enabled out-of-the-box
  (meaning the deprecated feature is denied):
    * if the deprecation phase is `permitted_by_default` and the
      configuration denies the deprecated feature
    * if the deprecation phase is `denied_by_default` and the
      configuration doesn't permit the deprecated feature
    * if the deprecation phase is `disconnected` or `removed`
* Feature flags behind deprecated features don't appear in feature flags
  listings.

Otherwise, deprecated features' feature flags are managed like other
feature flags, in particular inside clusters.

To declare a deprecated feature:

    -rabbit_deprecated_feature(
       {my_deprecated_feature,
        #{deprecation_phase => permitted_by_default,
          msgs => #{when_permitted => "This feature will be removed in RabbitMQ X.0"},
         }}).

Then, to check the state of a deprecated feature in the code:

    case rabbit_deprecated_features:is_permitted(my_deprecated_feature) of
        true ->
            %% The deprecated feature is still permitted.
            ok;
        false ->
            %% The deprecated feature is gone or should be considered
            %% unavailable.
            error
    end.

Warnings and errors are logged automatically. A message is generated
automatically, but it is possible to define a message in the deprecated
feature flag declaration like in the example above.

Here is an example of a logged warning that was generated automatically:

    Feature `my_deprecated_feature` is deprecated.
    By default, this feature can still be used for now.
    Its use will not be permitted by default in a future minor RabbitMQ version and the feature will be removed from a future major RabbitMQ version; actual versions to be determined.
    To continue using this feature when it is not permitted by default, set the following parameter in your configuration:
        "deprecated_features.permit.my_deprecated_feature = true"
    To test RabbitMQ as if the feature was removed, set this in your configuration:
        "deprecated_features.permit.my_deprecated_feature = false"

To override the default state of `permitted_by_default` and
`denied_by_default` deprecation phases, users can set the following
configuration:

    # In rabbitmq.conf:
    deprecated_features.permit.my_deprecated_feature = true # or false

The actual behavior protected by a deprecated feature check is out of
scope for this subsystem. It is the responsibility of each deprecated
feature code to determine what to do when the deprecated feature is
denied.

V1: Deprecated feature states are initially computed during the
    initialization of the registry, based on their deprecation phase and
    possibly the configuration. They don't go through the `enable/1`
    code at all.

V2: Manage deprecated feature states as any other non-required
    feature flags. This makes it possible to execute an
    `is_feature_used()` callback to determine if a deprecated feature can
    be denied. This also makes it possible to prevent the RabbitMQ node
    from starting if it
    continues to use a deprecated feature.

V3: Manage deprecated feature states from the registry initialization
    again. This is required because we need to know very early if some
    of them are denied, so that an upgrade to a version of RabbitMQ
    where a deprecated feature is disconnected or removed can be
    performed.

    To still prevent the start of a RabbitMQ node when a denied
    deprecated feature is actively used, we run the `is_feature_used()`
    callback of all denied deprecated features as part of the
    `sync_cluster()` task. This task is executed as part of a feature
    flag refresh executed when RabbitMQ starts or when plugins are
    enabled. So even though a deprecated feature is marked as denied in
    the registry early in the boot process, we will still abort the
    start of a RabbitMQ node if the feature is used.

V4: Support context-dependent warnings. It is now possible to set a
    specific message when deprecated feature is permitted, when it is
    denied and when it is removed. Generic per-context messages are
    still generated.

V5: Improve default warning messages, thanks to @pstack2021.

V6: Rename the configuration variable from `permit_deprecated_features.*`
    to `deprecated_features.permit.*`. As @michaelklishin said, we tend
    to use shorter top-level names.
2023-06-06 13:02:03 +02:00
Jean-Sébastien Pédron 8749c605f5
rabbit_feature_flags: Retry after erpc:call() fails with `noconnection`
[Why]
There could be a transient network issue. Let's give a few more chances
to perform the requested RPC call.

[How]
We retry until the given timeout is reached, if any.

To honor that timeout, we measure the time taken by the RPC call itself.
We also sleep between retries. Before each retry, the timeout is reduced
by the total of the time taken by the RPC call and the sleep.

References #8346.

V2: Treat `infinity` timeout differently. In this case, we never retry
    following a `noconnection` error. The reason is that this timeout is
    used specifically for callbacks executed remotely. We don't know how
    long they take (for instance if there is a lot of data to migrate).
    We don't want an infinite retry loop either, so in this case, we
    don't retry.
2023-06-06 09:40:16 +02:00
Jean-Sébastien Pédron 7c53958a20
rabbit_feature_flags: Use cluster members hint for cluster sync
[Why]
During peer discovery, when the feature flags state is synchronized on a
starting node that joins a cluster thanks to peer discovery, the list of
nodes returned by `rabbit_nodes:list_running()` is incorrect because
Mnesia is not initialized yet.

Because of that, the synchronization works on the wrong inventory of
feature flags. In the end, the state of feature flags is incorrect
across the cluster.

[How]
`rabbit_mnesia` passes a list of nodes to
`rabbit_feature_flags:sync_feature_flags_with_cluster/2`. We can use
this list, as we did in feature flags V1. This makes sure that the
synchronization works with a valid list of cluster members, in case the
cluster state is not ready yet.

V2: Filter the given list of nodes to only keep those where `rabbit` is
    running. This avoids trying to collect inventory from nodes which
    are stopped.
2023-06-05 13:46:07 +02:00
Michael Klishin 6d2e497382
Merge pull request #8453 from cloudamqp/cqv1_missing_del
Handle missing delivery marker in CQ v1 index
2023-06-02 14:32:37 +04:00
Péter Gömöri 18a6881a7f fixup! Tests for "Handle missing delivery marker in CQ v1 index" 2023-06-01 23:28:39 +02:00
Loïc Hoguin ef7c68a9cc
CQ shared store: rework the flying optimisation
Instead of doing a complicated +1/-1 dance, we do an update_counter
of an integer value using 2^n values. We always know exactly
which state we are in when looking at the ETS table. We can
also avoid some ETS operations as a result, although the
performance improvements are minimal.
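A sketch of the idea (constants and table name are hypothetical): each
event adds its own power of two, so the accumulated integer encodes an
unambiguous state.
```
-define(FLYING_WRITE,  1).  %% 2^0: a write was requested
-define(FLYING_IGNORE, 2).  %% 2^1: the write can be skipped (message gone)

%% Each transition is a single update_counter/4; reading the integer
%% back tells us exactly which events have happened for this message.
note_write(MsgId) ->
    ets:update_counter(flying_ets, MsgId, ?FLYING_WRITE, {MsgId, 0}).

note_ignore(MsgId) ->
    ets:update_counter(flying_ets, MsgId, ?FLYING_IGNORE, {MsgId, 0}).
```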
2023-05-30 11:19:46 +02:00
Loïc Hoguin 4e4e6e401a
CQ: Remove mechanism for closing FHC FDs in queues
We no longer use FHC there and don't keep FDs open
after reading.
2023-05-30 11:19:45 +02:00
Michael Klishin a68e2d383d Commit a new case for definition_import_SUITE 2023-05-29 00:46:37 +04:00
Michael Klishin 0c6f8aa316 Fail boot if definition file is invalid JSON
and `definitions.skip_if_unchanged` is set to `true`.

References #2610, #6418.
Closes #8372.
2023-05-29 00:39:58 +04:00
Michael Klishin c30cd649e6
Merge pull request #8308 from rabbitmq/default-to-cqv2
Default to classic queues v2
2023-05-25 19:49:48 +04:00
Michael Klishin fba3329377
Merge pull request #8322 from rabbitmq/disaster-recovery-shrink-qq
Disaster recovery: force shrink all quorum queues to a 1-node cluster
2023-05-25 19:19:59 +04:00
Diana Parra Corbacho 5d75eee6f7 Disaster recovery: force shrink all quorum queues to a 1-node cluster 2023-05-25 10:55:21 +02:00
Michal Kuratczyk 3c52f55c5e
Use lazy queue in dynamic_ha_SUITE
In mixed version cluster tests where the new node
uses CQv2, when mirror synchronisation happens,
v2 (source) overloads v1 (destination) leading to
a memory spike and a crash (in a memory-constrained
CI environment). Given that in 3.12 we switch to
a lazy-like mode for all classic queues, I think
we can make use of a lazy queue in the test.
2023-05-24 16:18:25 +02:00
Karl Nilsson 1c727a4ee4 test fix 2023-05-24 10:52:30 +01:00
Michael Klishin e569a2b4f5
Merge pull request #8260 from rabbitmq/ik-consumer-timeout-followup
Consumer Timeout Follow-up
2023-05-23 00:28:26 +04:00
Jean-Sébastien Pédron aacfa1978e
rabbit_feature_flags: Fix possible deadlock when calling the Code server (take 2)
[Why]
The background reason for this fix is about the same as the one
explained in the previous version of this fix; see commit
e0a2f10272.

This time, the order of events that led to a similar deadlock is the
following:

0. No `rabbit_ff_registry` is loaded yet.
1. Process A, B and C call `rabbit_ff_registry:something()` indirectly
   which triggers two initializations in parallel.
    * Process A did it from an explicit call to
      `rabbit_ff_registry_factory:initialize_factory()` during RabbitMQ
      boot.
    * Process B and C indirectly called it because they checked if a
      feature flag was enabled.
2. Process B acquires the lock first and finishes the initialization. A
   new registry is loaded and the old `rabbit_ff_registry` module copy
   is marked as "old". At this point, process A and C still reference
   that old copy because `rabbit_ff_registry:something()` is up above in
   its call stack.
3. Process A acquires the lock, prepares the new registry and tries to
   soft-purge the old `rabbit_ff_registry` copy before loading the new
   one.

This is where the deadlock happens: process A requests the Code server
to purge the old copy, but the Code server waits for process C to stop
using it.

The difference between the steps described in the first bug fix
attempt's commit and these ones is that the process which lingers on the
deleted `rabbit_ff_registry` (process C above) isn't the one who
acquired the lock; process A has it.

That's why the first bug fix isn't effective in this case: it relied on
the fact that the process which lingers on the deleted
`rabbit_ff_registry` is the process which attempts to purge the module.

[How]
In this commit, we go with a more drastic change. This time, we put a
wrapper in front of `rabbit_ff_registry` called
`rabbit_ff_registry_wrapper`. This wrapper is responsible for doing the
automatic initialization if the loaded registry is the stub module. The
`rabbit_ff_registry` stub now always returns `init_required` instead of
performing the initialization and calling itself recursively.

This way, processes linger on `rabbit_ff_registry_wrapper`, not on
`rabbit_ff_registry`. Thanks to this, the Code server can proceed with
the purge.
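A simplified sketch of the wrapper pattern (using the placeholder
function name from above):
```
%% Sketch: the wrapper performs the initialization when the stub answers
%% init_required, so processes linger on the wrapper module, not on the
%% regenerated registry.
-module(rabbit_ff_registry_wrapper).
-export([something/0]).

something() ->
    case rabbit_ff_registry:something() of
        init_required ->
            rabbit_ff_registry_factory:initialize_factory(),
            something();
        Result ->
            Result
    end.
```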

See #8112.
2023-05-22 17:15:24 +02:00
Iliia Khaprov c02ca3b52b Remove per consumer timeout capability 2023-05-22 11:59:18 +02:00
Michal Kuratczyk f8a3643d5d
Remove "lazy" from Management and lazy-specific tests 2023-05-18 13:59:50 +02:00
David Ansari ddabc35191 Change rabbitmq.conf key to message_interceptors.incoming.*
as it nicer categorises if there will be a future
"message_interceptors.outgoing.*" key.

We leave the advanced config file key because simple single value
settings should not require using the advanced config file.
2023-05-15 10:06:01 +00:00
David Ansari 044f6e3bac Move plugin rabbitmq-message-timestamp to the core
As reported in https://groups.google.com/g/rabbitmq-users/c/x8ACs4dBlkI/
plugins that implement rabbit_channel_interceptor break with
Native MQTT in 3.12 because Native MQTT does not use rabbit_channel anymore.
Specifically, these plugins don't work anymore in 3.12 when sending a message
from an MQTT publisher to an AMQP 0.9.1 consumer.

Two of these plugins are
https://github.com/rabbitmq/rabbitmq-message-timestamp
and
https://github.com/rabbitmq/rabbitmq-routing-node-stamp

This commit moves both plugins into rabbitmq-server.
Therefore, these plugins are deprecated starting in 3.12.

Instead of using these plugins, the user gets the same behaviour by
configuring rabbitmq.conf as follows:
```
incoming_message_interceptors.set_header_timestamp.overwrite = false
incoming_message_interceptors.set_header_routing_node.overwrite = false
```

While both plugins were incompatible to be used together, this commit
allows setting both headers.

We name the top level configuration key `incoming_message_interceptors`
because only incoming messages are intercepted.
Currently, only `set_header_timestamp` and `set_header_routing_node` are
supported. (We might support more in the future.)
Both can set `overwrite` to `false` or `true`.
The meaning of `overwrite` is the same as documented in
https://github.com/rabbitmq/rabbitmq-message-timestamp#always-overwrite-timestamps
i.e. whether headers should be overwritten if they are already present
in the message.

Both `set_header_timestamp` and `set_header_routing_node` behave exactly
to plugins `rabbitmq-message-timestamp` and `rabbitmq-routing-node-stamp`,
respectively.

Upon node boot, the configuration is put into persistent_term to not
cause any performance penalty in the default case where these settings
are disabled.

The channel and MQTT connection process will intercept incoming messages
and - if configured - add the desired AMQP 0.9.1 headers.

For now, this allows using Native MQTT in 3.12 with the old plugins
behaviour.

In the future, once "message containers" are implemented,
we can think about more generic message interceptors where plugins can be
written to modify arbitrary headers or message contents for various protocols.

Likewise, in the future, once MQTT 5.0 is implemented, we can think
about an MQTT connection interceptor which could function similar to a
`rabbit_channel_interceptor` allowing to modify any MQTT packet.
2023-05-15 08:37:52 +00:00
Jean-Sébastien Pédron e0a2f10272
rabbit_feature_flags: Fix possible deadlock when calling the Code server
[Why]
The Feature flags registry is implemented as a module called
`rabbit_ff_registry` recompiled and reloaded at runtime.

There is a copy on disk which is a stub responsible for triggering the
first initialization of the real registry and for pleasing Dialyzer. Once
the initialization is done, this stub calls `rabbit_ff_registry` again to
get an actual return value. This is kind of recursive: the on-disk
`rabbit_ff_registry` copy calls the `rabbit_ff_registry` copy generated
at runtime.

Early during RabbitMQ startup, there could be multiple processes
indirectly calling `rabbit_ff_registry` and possibly triggering that
first initialization concurrently. Unfortunately, there is a slight
chance of race condition and deadlock:

0. No `rabbit_ff_registry` is loaded yet.
1. Both process A and B call `rabbit_ff_registry:something()` indirectly
   which triggers two initializations in parallel.
2. Process A acquires the lock first and finishes the initialization. A
   new registry is loaded and the old `rabbit_ff_registry` module copy
   is marked as "old". At this point, process B still references that
   old copy because `rabbit_ff_registry:something()` is up above in its
   call stack.
3. Process B acquires the lock, prepares the new registry and tries to
   soft-purge the old `rabbit_ff_registry` copy before loading the new
   one.

This is where the deadlock happens: process B requests the Code server
to purge the old copy, but the Code server waits for process B to stop
using it.

[How]
With this commit, process B calls `erlang:check_process_code/2` before
asking for a soft purge. If it is using an old copy, it skips the purge
because it will deadlock anyway.
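In other words, a guard along these lines (simplified sketch):
```
%% Sketch: only request a soft purge when this process is not itself
%% lingering on the old module copy; otherwise the Code server would
%% wait on us and we would deadlock.
maybe_soft_purge() ->
    case erlang:check_process_code(self(), rabbit_ff_registry) of
        false -> _ = code:soft_purge(rabbit_ff_registry), ok;
        true  -> ok
    end.
```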
2023-05-09 10:43:29 +02:00
Michael Klishin 52b1eb9a43
Naming 2023-05-04 04:53:22 +04:00
Simon Unge d32c19e86f See #8076. Skip arg and type check on re-declare of QQ if declare type is classic. 2023-05-03 16:11:16 -07:00
Michael Klishin 7f78fc1fd8
Update a test assertion in rabbitmq_queues_cli_integration_SUITE 2023-05-02 12:59:17 +04:00
Simon Unge d0fadf9e08 Fix so that default policy ha-mode and ha-sync-mode are converted to binary 2023-05-01 14:46:05 -07:00
Simon Unge 367b1f0a6d Add ha-sync-mode as an operator policy 2023-04-27 15:16:39 -07:00
Iliia Khaprov 4e8f05e0ca Allow setting consumer timeout via queue policy/arg and as consumer arg. Close #5437 2023-04-25 18:10:46 +02:00
Rin Kuryloski 854d01d9a5 Restore the original -include_lib statements from before #6466
since this broke erlang_ls

requires rules_erlang 3.9.13
2023-04-20 12:40:45 +02:00
Rin Kuryloski 9666aeed63 Adjust -include in some tests to work with both bazel and make 2023-04-19 14:28:22 +02:00
Michael Klishin c0ed80c625
Merge pull request #6466 from rabbitmq/gazelle
Use gazelle for some maintenance of bazel BUILD files
2023-04-19 09:33:44 +04:00
Alex Valiushko 4c30d9a6b4 address feedback 2023-04-17 17:51:38 -07:00
Alex Valiushko 127dca732d add runtime_parameters_SUITE 2023-04-17 16:29:12 -07:00
Alex Valiushko 13a37f512b add config fields 2023-04-17 11:26:43 -07:00
Rin Kuryloski 8de8f59d47 Use gazelle generated bazel files
Bazel build files are now maintained primarily with `bazel run
gazelle`. This will analyze and merge changes into the build files as
necessitated by certain code changes (e.g. the introduction of new
modules).

In some cases there are hints to gazelle in the build files, such as `#
gazelle:erlang...` or `# keep` comments. xref checks on plugins that
depend on the cli are a good example.
2023-04-17 18:13:18 +02:00
Michal Kuratczyk 3c2917b871
Don't rely on implicit list ordering
While at it, refactor `rabbit_misc:plmerge/2` to use the same precedence
as maps:merge and lists:merge (the second argument supersedes the first
one)
2023-04-13 14:37:18 +02:00
Michal Kuratczyk 83cd34a078
Fix for OTP-26+ (`verify_none`)
We don't care about the security of the TLS connection in that test.
2023-04-13 14:37:18 +02:00
Loïc Hoguin 77d58dddf7
Use verify_none in proxy_protocol TLS test
Fix for OTP-26+. We don't care about the security of the TLS
connection in that test.
2023-04-13 14:37:18 +02:00
Michal Kuratczyk 435d3e7f8d
Exclude shake128/256 from hashes
These algorithms, introduced in OTP-26, are not compatible
with what we do in this test.
2023-04-13 14:37:18 +02:00
Michal Kuratczyk 07b8f1b686
Remove a test that can't pass on OTP-26
OTP-26 changed the default version for binary_to_term from 1 to 2.
There's one place where we explicitly ask for version 1 anyway
(in the STOMP plugin) and seems like we need to keep it like this.
2023-04-13 14:37:17 +02:00
Rin Kuryloski ae9f377ee4 Mark per_user_connection_channel_limit_partitions_SUITE flaky
The suite creates and autoheals partitions, so I'm not convinced it's
worth chasing the root cause of the flake, whatever it is
2023-04-13 11:09:12 +02:00
Michael Klishin ac89309a9c
Merge pull request #7846 from rabbitmq/stream-at-most-once-dead-lettering
Streams: make at-most-once dead lettering work
2023-04-05 19:42:26 +04:00
Arnaud Cogoluègnes 70af1c4607
Merge pull request #7827 from rabbitmq/qq-return-crash
Quorum queues: avoid potential crash when returning message.
2023-04-05 16:56:55 +02:00
Karl Nilsson e7d7f6f225 Streams: make at-most-once dead lettering work
Previously osiris did not support uncorrelated writes which means
we could not use a "stateless" queue type delivery and these were
silently dropped.

This had the impact that at-most-once dead letter was not possible
where the dead letter target is a stream.

This change bumps the osiris version that has the required API
to allow for uncorrelated writes (osiris:write/2).

Currently there is no feature flag to control this as osiris writer
processes just log and drop any messages they don't understand.
2023-04-05 15:34:22 +01:00
Arnaud Cogoluègnes b840200798
Poll with basic.get in test
To make sure to get the message.
2023-04-05 14:35:55 +02:00
Arnaud Cogoluègnes f20f415576 Fix test after message structure change
References #7743
2023-04-04 19:32:57 +04:00
Karl Nilsson 01f6d0fc19 Quorum queues: avoid potential crash when returning message.
Returns reaching a Ra member that used to be leader but now has stepped
down would cause that follower to crash and restart.

This commit avoids this scenario as well as giving the return commands
a good chance of being resent to the new leader in a timely manner.
(see the Ra release for this).
2023-04-04 16:02:26 +01:00
Michael Klishin f1a922a17c Virtual host limit: error type naming
vhost_precondition_failed => vhost_limit_exceeded

vhost_limit_exceeded is the error type used by
definition import when a per-vhost limit is exceeded.
It feels appropriate for this case, too.
2023-04-01 23:11:48 +04:00
Simon Unge 574ca55a3f See #7777. Use vhost_max to stop vhost creation in rabbitmq 2023-03-31 12:18:16 -07:00
Simon Unge 9363648e0c See #7389. Only one tick process per QQ 2023-03-30 14:13:02 +04:00
Michael Klishin 8baaa86961
Merge pull request #7753 from SimonUnge/sunge/max_node_connection
See #7593. Use connection_max to stop connections in rabbitmq
2023-03-29 12:46:39 +04:00
Simon Unge b42e99acfe See #7593. Use connection_max to stop connections in rabbitmq 2023-03-28 17:07:57 -07:00
Karl Nilsson 38c29d909c QQ: add test for message ttl using a policy.
This test also updates the policy and validates that the new message
ttl configuration is correctly applied.
2023-03-28 12:23:21 +01:00
Michael Klishin f55259bf86
Merge pull request #7725 from rabbitmq/derpecate-cmqs-without-ffs
Make it possible to disable Classic Mirrored Queues via configuration
2023-03-25 00:11:24 +04:00
Michael Klishin 87b65c2142 permit_deprecated_features.* => deprecated_features.permit.* 2023-03-24 19:54:58 +04:00
Karl Nilsson f0e9242806 QQ: do not add x-delivery-count header for the first delivery.
The x-delivery-count header only needs to be added when a message is
redelivered. Adding it on the first delivery attempt is unnecessary,
not recorded in the quorum queue documentation and causes additional work
deserialising the binary basic properties data to add this header.

This could be notable for messages with substantial property data,
including heavy use of headers, for example.
2023-03-24 12:11:09 +00:00
Rin Kuryloski c61d16c971 Include the queue type in the queue_deleted rabbit_event
This is useful for understanding if a deleted queue was matching any
policies given the more selective policies introduced in #7601.

Does not apply to bulk deletion of transient queues on node down.
2023-03-17 11:50:14 +01:00
Michal Kuratczyk 0a3136a916
Allow applying policies to specific queue types
Rather than relying on queue name conventions, allow applying policies
based on the queue type. For example, this allows multiple policies that
apply to all queue names (".*") that specify different parameters for
different queue types.
2023-03-13 12:36:48 +01:00
Diana Parra Corbacho 35465562aa Improve test assertions to include `rabbit_nodes` output 2023-03-08 15:12:08 +01:00
Jean-Sébastien Pédron 0fad97f5d5
per_user_connection_channel_tracking_SUITE: Remove dead code
This code became unused when the `tracking_records_in_ets` feature flag
was made required and its compatibility code was removed.

See #7270.
2023-03-01 12:16:16 +01:00
Karl Nilsson cb3407564b
Chunk quorum queue deliveries (#7175)
This puts a limit to the amount of message data that is added
to the process heap at the same time to around 128KB.

Large prefetch values combined with large messages could cause
excessive garbage collection work.

Also simplify the intermediate delivery message format to avoid
unnecessary allocations.
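A generic sketch of such chunking (not the actual rabbit_fifo code;
msg_size/1 is a hypothetical helper):
```
-define(CHUNK_BUDGET, 128 * 1024).  %% ~128KB of message data per chunk

%% Accumulate messages until the budget is reached, then emit the chunk
%% and continue with the remainder.
take_chunk(Msgs) ->
    take_chunk(Msgs, 0, []).

take_chunk([], _Size, Acc) ->
    {lists:reverse(Acc), []};
take_chunk([Msg | Rest], Size, Acc) when Size < ?CHUNK_BUDGET ->
    take_chunk(Rest, Size + msg_size(Msg), [Msg | Acc]);
take_chunk(Msgs, _Size, Acc) ->
    {lists:reverse(Acc), Msgs}.
```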
2023-02-27 15:30:20 +00:00
Alex Valiushko 89582422f5 Add default_users per #7208 2023-02-24 15:41:25 -08:00
Simon Unge d66b38d333 See #7323. Rename default policy for ha-* and add option to massage key/value for aggregate_props 2023-02-22 11:46:03 -08:00
Michael Klishin 232c7faece
Merge branch 'main' into ha-mode-operator-policy 2023-02-22 20:21:59 +04:00
Jean-Sébastien Pédron 42bcd94dce
rabbit_db_cluster: New module on top of databases clustering
This new module sits on top of `rabbit_mnesia` and provide an API with
all cluster-related functions.

`rabbit_mnesia` should be called directly inside Mnesia-specific code
only, `rabbit_mnesia_rename` or classic mirrored queues for instance.
Otherwise, `rabbit_db_cluster` must be used.

Several modules, in particular in `rabbitmq_cli`, continue to call
`rabbit_mnesia` as a fallback option if the `rabbit_db_cluster` module
unavailable. This will be the case when the CLI will interact with an
older RabbitMQ version.

This will help with the introduction of a new database backend.
2023-02-22 15:28:04 +01:00
Simon Unge 36a559da51 See #7323. Cleanup testcase. 2023-02-21 11:33:47 -08:00
Michael Klishin 04ec916d54
Merge branch 'main' into ha-mode-operator-policy 2023-02-21 18:41:06 +04:00
Jean-Sébastien Pédron a4e8cdda58
rabbit_feature_flags: Support required feature flags in plugins
[Why]
If a plugin was already enabled when RabbitMQ starts, its required
feature flags were correctly handled and thus enabled. However, this was
not the case for a plugin enabled at runtime.

Here is an example with the `drop_unroutable_metric` from the
rabbitmq_management_agent plugin:

    Feature flags: `drop_unroutable_metric`: required feature flag not
    enabled! It must be enabled before upgrading RabbitMQ.

Supporting required feature flags in plugins is trickier than in the
core broker. Indeed, with the broker, we know when this is the first
time the broker is started. Therefore we are sure that a required
feature flag can be enabled directly, there is no existing data/context
that could conflict with the code behind the required feature flag.

For plugins, this is different: a plugin can be enabled/disabled at
runtime and between broker restarts (and thus upgrades). So, when a
plugin is enabled and it has a required feature flag, we have no way to
make sure that there is no existing and conflicting data/context.

[How]
In this patch, if the required feature flag is provided by a plugin
(i.e. not `rabbit`), we always mark it as enabled.

The plugin is responsible for handling any existing data/context and
performing any cleanup/conversion.

Reported by: @ansd
2023-02-20 10:56:36 +01:00
Simon Unge a22486211a See #7323. Oper policy for ha-mode and ha-params 2023-02-17 12:13:56 -08:00
Michael Klishin 634f9b602f
Merge pull request #7270 from rabbitmq/ff-tracking-records-in-ets
Remove compatibility for flag tracking_records_in_ets
2023-02-16 09:44:46 -03:00
Diana Parra Corbacho 56e4ed5464 Remove compatibility for flag tracking_records_in_ets 2023-02-14 22:51:57 +01:00
Michael Klishin 5a8e74ed5d
Merge pull request #7280 from rabbitmq/rin/rabbit_vhost-update_tags-skip-notify-if-unchanged
rabbit_vhost:set_tags/2 avoids notifying if tags are unchanged
2023-02-14 06:46:05 -03:00
Rin Kuryloski 0476d105d1 Tighten some test assertions 2023-02-14 10:13:38 +01:00
Michael Klishin d0dc951343
Merge pull request #7058 from rabbitmq/add-node-lists-functions-to-clarify-intent
rabbit_nodes: Add list functions to clarify which nodes we are interested in
2023-02-13 23:06:50 -03:00
Rin Kuryloski f87a5512c1 Relax and assertion which may vary when run not in isolation 2023-02-13 21:28:28 +01:00
Rin Kuryloski 12ec3a55ff rabbit_vhost:set_tags/2 avoids notifying if tags are unchanged
Additionally, tags are now always sorted when set
2023-02-13 20:38:25 +01:00
David Ansari 575f4e78bc Remove compatibility for feature flag stream_queue
Remove compatibility code for feature flag `stream_queue`
because this feature flag is required in 3.12.

See #7219
2023-02-13 15:31:40 +00:00
Jean-Sébastien Pédron d65637190a
rabbit_nodes: Add list functions to clarify which nodes we are interested in
So far, we had the following functions to list nodes in a RabbitMQ
cluster:
* `rabbit_mnesia:cluster_nodes/1` to get members of the Mnesia cluster;
  the argument was used to select members (all members or only those
  running Mnesia and participating in the cluster)
* `rabbit_nodes:all/0` to get all members of the Mnesia cluster
* `rabbit_nodes:all_running/0` to get all members who currently run
  Mnesia

Basically:
* `rabbit_nodes:all/0` calls `rabbit_mnesia:cluster_nodes(all)`
* `rabbit_nodes:all_running/0` calls `rabbit_mnesia:cluster_nodes(running)`

We also have:
* `rabbit_node_monitor:alive_nodes/1` which filters the given list of
  nodes to only select those currently running Mnesia
* `rabbit_node_monitor:alive_rabbit_nodes/1` which filters the given
  list of nodes to only select those currently running RabbitMQ

Most of the code uses `rabbit_mnesia:cluster_nodes/1` or the
`rabbit_nodes:all*/0` functions. `rabbit_mnesia:cluster_nodes(running)`
or `rabbit_nodes:all_running/0` is often used as a close approximation
of "all cluster members running RabbitMQ". This list might be incorrect
in times where a node is joining the cluster or is being worked on
(i.e. Mnesia is running but not RabbitMQ).

With Khepri, the same approximation won't be possible because we
will try to keep Khepri/Ra running even if RabbitMQ is stopped to
expand/shrink the cluster.

So in order to clarify what we want when we query a list of nodes, this
patch introduces the following functions:
* `rabbit_nodes:list_members/0` to get all cluster members, regardless
  of their state
* `rabbit_nodes:list_reachable/0` to get all cluster members we can
  reach using Erlang distribution, regardless of the state of RabbitMQ
* `rabbit_nodes:list_running/0` to get all cluster members who run
  RabbitMQ, regardless of the maintenance state
* `rabbit_nodes:list_serving/0` to get all cluster members who run
  RabbitMQ and are accepting clients

In addition to the list functions, there are the corresponding
`rabbit_nodes:is_*(Node)` checks and `rabbit_nodes:filter_*(Nodes)`
filtering functions.
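For example (a sketch exercising only the functions named above):
```
%% Pick the list that matches the question being asked.
all_members() -> rabbit_nodes:list_members().    %% regardless of state
reachable()   -> rabbit_nodes:list_reachable().  %% Erlang distribution up
running()     -> rabbit_nodes:list_running().    %% RabbitMQ running
serving()     -> rabbit_nodes:list_serving().    %% accepting clients
```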

The code is modified to use these new functions. One possible
significant change is that the new list functions will perform RPC calls
to query the nodes' state, unlike `rabbit_mnesia:cluster_nodes(running)`.
2023-02-13 12:58:40 +01:00
David Ansari 5045fce6de Require all feature flags introduced before 3.11.1
RabbitMQ 3.12 requires feature flag `feature_flags_v2` which got
introduced in 3.11.0 (see
https://github.com/rabbitmq/rabbitmq-server/pull/6810).

Therefore, we can mark all feature flags that got introduced in 3.11.0
or before 3.11.0 as required because users will have to upgrade to
3.11.x first, before upgrading to 3.12.x

The advantage of marking these feature flags as required is that we can
start deleting any compatibility code for these feature flags, similar
to what was done in https://github.com/rabbitmq/rabbitmq-server/issues/5215

This list shows when a given feature flag was first introduced:

```
classic_mirrored_queue_version 3.11.0
stream_single_active_consumer 3.11.0
direct_exchange_routing_v2 3.11.0
listener_records_in_ets 3.11.0
tracking_records_in_ets 3.11.0

empty_basic_get_metric 3.8.10
drop_unroutable_metric 3.8.10
```

In this commit, we also force all required feature flags in Erlang
application `rabbit` to be enabled in mixed version cluster testing
and delete any tests that relied on a feature flag starting as disabled.

Furthermore, this commit already deletes the callback (migration) functions
given they do not run anymore in 3.12.x.

All other clean up (i.e. branching depending on whether a feature flag
is enabled) will be done in separate commits.
2023-02-08 16:00:03 +00:00
Jean-Sébastien Pédron 9a99480bc9
Merge pull request #6821 from rabbitmq/rabbit-db-modules
Move missing Mnesia-specific code to rabbit_db_* modules
2023-02-02 15:40:11 +01:00
Diana Parra Corbacho 9cf10ed8a7 Unit test rabbit_db_* modules, spec and API updates 2023-02-02 15:01:42 +01:00
Michal Kuratczyk 67c123f91f
Merge pull request #7115 from rabbitmq/cq-perf-regression-fix
CQ: Fix performance regression after moving to v2 sets
2023-02-01 14:40:54 +01:00
Arnaud Cogoluègnes 1425e5cb02
Mark AMQP 1.0 properties chunk as binary (#7001)
* Mark AMQP 1.0 properties chunk as binary

It is marked as an UTF8 string, which is not, so
strict AMQP 1.0 codecs can fail.

* Re-use AMQP 1.0 binary chunks if available

Instead of converting from AMQP 091 back to AMQP 1.0.
This is for AMQP 1.0 properties, application properties,
and message annotations.

* Test AMQP 1.0 binary chunk reuse

* Support AMQP 1.0 multi-value body better

In the rabbit_msg_record module, mostly. Before this commit,
only one Data section was supported. Now multiple Data sections,
multiple Sequence sections, and an AMQP value section are supported.

* Add test for non-single-data-section AMQP 1.0 message

* Squash some Dialyzer warnings

* Silent dialyzer for a function for now

* Fix type declaration, use type, not atom

* Address review comments
2023-01-31 15:23:21 +00:00
Loïc Hoguin e330f683b6
CQ: Fix performance regression after moving to v2 sets
sets:from_list must also be told to use v2, otherwise
it will use v1.
2023-01-31 15:37:47 +01:00
Diana Parra Corbacho f2443f6d10 Move mnesia queries from rabbit_misc to rabbit_mnesia 2023-01-31 10:23:16 +01:00
Diana Parra Corbacho 452152469d Move mirrored supervisor Mnesia-specific code to rabbit_db_* modules 2023-01-31 10:23:16 +01:00
Diana Parra Corbacho d0ac99df5e Move queue/exchange/binding/policy Mnesia-specific code to rabbit_db_* modules 2023-01-31 10:23:16 +01:00
Diana Parra Corbacho d8ae41119c Move missing Mnesia-specific code to rabbit_db_topic_exchange module 2023-01-31 10:23:16 +01:00
David Ansari 8a2a82e19b Remove feature flag no_queue_name_in_classic_queue_client
as it was unnecessary to introduce it in the first place.

Remove the queue name from all queue type clients and pass the queue
name to the queue type callbacks that need it.

We have to leave feature flag classic_queue_type_delivery_support
required because we removed the monitor registry
1fd4a6d353/deps/rabbit/src/rabbit_queue_type.erl (L322-L325)

Implements review from Karl:
"rather than changing the message format we could amend the queue type
callbacks involved with the stateful operation to also take the queue
name record as an argument. This way we don't need to maintain the extra
queue name (which uses memory for known but obscurely technical reasons
with how maps work) in the queue type state (as it is used in the queue
type state map as the key)"
2023-01-24 17:32:59 +00:00
David Ansari a4db85de0d Make pipeline fail when there are dialyzer warnings
We want the build to fail if there are any dialyzer warnings in
rabbitmq_mqtt or rabbitmq_web_mqtt. Otherwise we rely on people manually
executing and checking the results of dialyzer.

Also, we want any test to fail that is flaky.
Flaky tests can indicate subtle errors in either test or program execution.
Instead of marking them as flaky, we should understand and - if possible -
fix the underlying root cause.

Fix OTP 25.0 dialyzer warning

Type gen_server:format_status() is known in OTP 25.2, but not in 25.0
2023-01-24 17:32:59 +00:00
David Ansari 3980c28596 Allow higher load on Mnesia by default
Prior to this commit, when connecting or disconnecting many thousands of
MQTT subscribers, RabbitMQ printed many times:
```
[warning] <0.241.0> Mnesia('rabbit@mqtt-rabbit-1-server-0.mqtt-rabbit-1-nodes.default'): ** WARNING ** Mnesia is overloaded: {dump_log,write_threshold}
```

Each MQTT subscription causes queues and bindings to be written into Mnesia.

In order to allow for higher Mnesia load, the user can configure
```
[
 {mnesia,[
  {dump_log_write_threshold, 10000}
 ]}
].
```
in advanced.config

or set this value via
```
RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="-mnesia dump_log_write_threshold 10000"
```

The Mnesia default for dump_log_write_threshold is 1,000.
The Mnesia default for dump_log_time_threshold is 180,000 ms.

It is reasonable to increase the default for dump_log_write_threshold from
1,000 to 5,000 and in return decrease the default dump_log_time_threshold
from 3 minutes to 1.5 minutes.
This way, users can achieve higher MQTT scalability by default.

This setting cannot be changed at Mnesia runtime, it needs to be set
before Mnesia gets started.
Since the rabbitmq_mqtt plugin can be enabled dynamically after Mnesia
started, this setting must therefore apply globally to RabbitMQ.

Users can continue to set their own defaults via advanced.config or
RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS. They continue to be respected
as shown by the new test suite included in this commit.
2023-01-24 17:30:10 +00:00
David Ansari 61a33da838 Make rabbit_fifo_dlx_worker tests less flaky
Tests sporadically fail with:
```
=== Ended at 2022-11-17 20:27:09
=== Location: [{rabbit_fifo_dlx_integration_SUITE,assert_active_dlx_workers,938},
              {test_server,ts_tc,1782},
              {test_server,run_test_case_eval1,1291},
              {test_server,run_test_case_eval,1223}]
=== === Reason: {assertMatch,
                     [{module,rabbit_fifo_dlx_integration_SUITE},
                      {line,938},
                      {expression,
                          "rabbit_ct_broker_helpers : rpc ( Config , Server , supervisor , count_children , [ rabbit_fifo_dlx_sup ] , 1000 )"},
                      {pattern,"[ _ , { active , N } , _ , _ ]"},
                      {value,
                          [{specs,1},
                           {active,2},
                           {supervisors,0},
                           {workers,2}]}]}
  in function  rabbit_fifo_dlx_integration_SUITE:assert_active_dlx_workers/3 (rabbit_fifo_dlx_integration_SUITE.erl, line 938)
  in call from test_server:ts_tc/3 (test_server.erl, line 1782)
  in call from test_server:run_test_case_eval1/6 (test_server.erl, line 1291)
  in call from test_server:run_test_case_eval/9 (test_server.erl, line 1223)
```

This commits attempts to remove that failure by using
supervisor:which_children/1 because the docs for
supervisor:count_children/1 say:
"active - The count of all actively running child processes managed by this supervisor.
For a simple_one_for_one supervisors, no check is done to ensure that each child process
is still alive, although the result provided here is likely to be very accurate unless
the supervisor is heavily overloaded."
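The assertion can instead count live workers directly, along these lines
(sketch):
```
%% Count only children that are actually alive at this moment.
active_workers() ->
    length([Pid || {_Id, Pid, worker, _Mods}
                       <- supervisor:which_children(rabbit_fifo_dlx_sup),
                   is_pid(Pid)]).
```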
2023-01-24 17:30:10 +00:00
David Ansari 14f59f1380 Handle soft limit exceeded as queue action
Instead of performing credit_flow within quorum queue and stream queue
clients, return new {block | unblock, QueueName} actions.

The queue client process can then decide what to do.

For example, the channel continues to use credit_flow such that the
channel gets blocked sending any more credits to rabbit_reader.

However, the MQTT connection process does not use credit_flow. It
instead blocks its reader directly.
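A sketch of how a queue type client owner might consume these actions
(block_source/2 and unblock_source/2 are hypothetical):
```
%% The owning process decides how to react to the new actions.
handle_queue_actions([{block, QName} | Rest], State) ->
    handle_queue_actions(Rest, block_source(QName, State));
handle_queue_actions([{unblock, QName} | Rest], State) ->
    handle_queue_actions(Rest, unblock_source(QName, State));
handle_queue_actions([], State) ->
    State.
```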
2023-01-24 17:29:07 +00:00
David Ansari af68fb4484 Decrease memory usage of queue_type state
Prior to this commit, 1 MQTT publisher publishing to 1 Million target
classic queues requires around 680 MB of process memory.

After this commit, it requires around 290 MB of process memory.

This commit requires feature flag classic_queue_type_delivery_support
and introduces a new one called no_queue_name_in_classic_queue_client.

Instead of storing the binary queue name 4 times, this commit now stores
it only 1 time.

The monitor_registry is removed since only classic queue clients monitor
their classic queue server processes.

The classic queue client does not store the queue name anymore. Instead
the queue name is included in messages handled by the classic queue
client.

Storing the queue name in the record ctx was unnecessary.

More potential future memory optimisations:
* When routing to destination queues, looking up the queue record,
  delivering to queue: Use streaming / batching instead of fetching all
  at once
* Only fetch ETS columns that are necessary instead of whole queue
  records
* Do not hold the same vhost binary in memory many times. Instead,
  maintain a mapping.
* Remove unnecessary tuple fields.
2023-01-24 17:29:07 +00:00
David Ansari ab8957ba9c Use best-effort client ID tracking
"Each Client connecting to the Server has a unique ClientId"

"If the ClientId represents a Client already connected to
the Server then the Server MUST disconnect the existing
Client [MQTT-3.1.4-2]."

Instead of tracking client IDs via Raft, we use local ETS tables in this
commit.

Previous tracking of client IDs via Raft:
(+) consistency (does the right thing)
(-) state of Ra process becomes large > 1GB with many (> 1 Million) MQTT clients
(-) Ra process becomes a bottleneck when many MQTT clients (e.g. 300k)
    disconnect at the same time because monitor (DOWN) Ra commands get
    written resulting in Ra machine timeout.
(-) if we need consistency, we ideally want a single source of truth,
    e.g. only Mnesia, or only Khepri (but not Mnesia + MQTT ra process)

While above downsides could be fixed (e.g. avoiding DOWN commands by
instead doing periodic cleanups of client ID entries using session interval
in MQTT 5 or using subscription_ttl parameter in current RabbitMQ MQTT config),
in this case we do not necessarily need the consistency guarantees Raft provides.

In this commit, we try to comply with [MQTT-3.1.4-2] on a best-effort
basis: If there are no network failures and no messages get lost,
existing clients with duplicate client IDs get disconnected.

In the presence of network failures / lost messages, two clients with
the same client ID can end up publishing or receiving from the same
queue. Arguably, that's acceptable and less worse than the scaling
issues we experience when we want stronger consistency.

Note that it is also the responsibility of the client to not connect
twice with the same client ID.

This commit also ensures that the client ID is a binary to save memory.

A new feature flag is introduced, which when enabled, deletes the Ra
cluster named 'mqtt_node'.

Independent of that feature flag, client IDs are tracked locally in ETS
tables.
If that feature flag is disabled, client IDs are additionally tracked in
Ra.

The feature flag is required such that clients can continue to connect
to all nodes except for the node being updated in a rolling update.

This commit also fixes a bug where previously all MQTT connections were
cluster-wide closed when one RabbitMQ node was put into maintenance
mode.
2023-01-24 17:29:07 +00:00
Jean-Sébastien Pédron 027b8874b7
rabbit: Add `is_serving/{0,1}` function
This function returns true if the node runs RabbitMQ and is accepting
clients. In other words, RabbitMQ is not under maintenance.
2023-01-18 18:01:26 +01:00
Jean-Sébastien Pédron 950c4ef7eb
Use `rabbit:data_dir/0` instead of `rabbit_mnesia:dir/0` where it makes sense
Some testcases are interested in RabbitMQ data directory, not Mnesia
directory per se. In this case, call `rabbit:data_dir/0` instead.
2023-01-13 11:56:21 +01:00
Jean-Sébastien Pédron 3f0d187f9f
rabbit_db: Add `init/0`, `is_virgin_node/0`, `dir/0` and `ensure_dir_exists/0` functions
These functions sit on top of their equivalent in `rabbit_mnesia`. In
the future, they will take care of picking the right database layer,
whatever it is.

The start of `mnesia_sync` is now part of this initialization instead of
a separate boot step in `rabbit` because it is specific to our use of
Mnesia.

In addition, `rabbit_db` provides `is_virgin_node/1` to query the state
of a remote node. This is used by `rabbit_ff_controller` in the feature
flags subsystem.

At this point, the underlying equivalent functions in `rabbit_mnesia`
become private to this module (and other modules implementing the
interaction with Mnesia). Other parts of RabbitMQ, including plugins,
should now use `rabbit_db`, not `rabbit_mnesia`.
2023-01-13 11:37:20 +01:00
Jean-Sébastien Pédron e72cd47a06
rabbit_feature_flags: Use `erpc:call/5` to properly catch exceptions
With `rpc:call/5`, the `throw(reason)` in a migration function would be
detected as an error by the feature flags subsystem, but the return
value of `sync_cluster/0` would be `reason` instead of `{error, reason}`
(which was expected by the caller).

This should make sure that the call to
`rabbit_feature_flags:sync_feature_flags_with_cluster/2` in
`rabbit_mnesia` gets the proper return value and aborts the node
startup.
2023-01-11 15:44:12 +01:00
Jean-Sébastien Pédron 44f2eb4802
rabbit_feature_flags: Restore feature flag state if we fail to enable it
When a node joins a cluster, it will synchronize its feature flags
states with the cluster. As part of that, it will run the migration
functions of the feature flags which must be enabled.

The migration function will be executed on the joining node only. It was
already executed on each member and supposedly succeeded at the time the
feature flag was enabled initially.

On the joining node, if the migration function fails, we used to mark
the feature flag state as disabled. This is a bug because existing
cluster members see the state going from enabled to disabled.

Instead of marking it as disabled everywhere, we should restore the
state as it was before we tried to enable it on the joining node:
* enabled for the cluster members
* disabled for the joining node

This fixes a bug discovered as part of the investigation on an issue in
the migration function of the `direct_exchange_routing_v2` feature flag
(#6847).
2023-01-11 15:27:36 +01:00
Jean-Sébastien Pédron 67f7c7ea04
rabbit_feature_flags: Ack behavior change with plugins in feature_flags_SUITE
A plugin's stable feature flag will now be enabled on initial node
start with `feature_flags_v2` required.
2023-01-10 16:38:24 +01:00
Jean-Sébastien Pédron 32b5a84212
rabbit_feature_flags: Make several fixes to `enable_feature_flag_when_ff_file_is_unwritable` testcase
The testcase was moved to another testsuite but called functions from
its initial testsuite. That code relies on the name of the
testsuite, so it broke.

At the same time, make it depend on the testsuite's feature flag, not
stream queues. The stream queues feature flag will become required at
some point and break the testcase.
2023-01-10 16:38:24 +01:00
Jean-Sébastien Pédron 7163ea8d98
rabbit_feature_flags: Use `?LOG_*()` macros for logging
... instead of the internal `rabbit_log_feature_flags` module.
2023-01-10 16:38:24 +01:00
Jean-Sébastien Pédron d2af280b5f
rabbit_feature_flags: Remove support for v1 migration functions
All existing feature flags either use v2 callbacks or don't have any
associated migration code. Future flags will have to use v2 callbacks if
they need to.

v1 migration functions were deprecated with the introduction of the
feature flags controller in RabbitMQ 3.11.0. Removing support for v1
migration functions simplifies the code.
2023-01-10 16:32:33 +01:00
Jean-Sébastien Pédron 09094f207f
rabbit_feature_flags: Remove v1 code
`feature_flags_v2` is required now, so we can drop the old code path.
2023-01-10 13:18:22 +01:00
Jean-Sébastien Pédron 55f4e675f9
rabbit_feature_flags: Mark `feature_flags_v2` as required
This means users of RabbitMQ 3.10.x or older will have to upgrade to
RabbitMQ 3.11.x and enable `feature_flags_v2` before they can upgrade to
a more recent release.
2023-01-10 13:18:18 +01:00
Jean-Sébastien Pédron c239399e12
Merge pull request #6791 from rabbitmq/enable-remote-ff-on-virgin-node
rabbit_feature_flags: Sync enabled feature flags differently on virgin node
2023-01-05 16:18:33 +01:00
Jean-Sébastien Pédron 2f301b4a38
rabbit_feature_flags: Sync enabled feature flags differently on virgin node
If a virgin node starts as clustered (thanks to peer discovery), we need
to mark feature flags already enabled remotely as enabled locally too.

We can't do a regular cluster sync because remote nodes may have
required feature flags which are disabled locally on the virgin node.
Therefore, those nodes don't have the migration code for these feature
flags anymore and the feature flags' state can't be changed.

By doing this special sync, we allow a clustered virgin node to join a
cluster with remote nodes having required feature flags.
2023-01-05 15:53:45 +01:00
David Ansari b0d03081fb
Test Unicode char in queue names 2023-01-04 07:59:23 -08:00
David Ansari 8f0800e578
Make classic queues v2 memory efficient
Store directory names as binary instead of string.

This commit saves >1GB of memory per 100,000 classic queues v2.
With longish node names, the memory savings are even much higher.

This commit is especially a prerequisite for scalable MQTT where every
subscribing MQTT connection creates its own classic queue.

So, with 3 million MQTT subscribers, this commit saves >30 GB of memory.

This commit stores file names as binaries and converts back to
file:filename() when passed to file API functions.
This is to reduce the risk of breaking behaviour for path names containing
unicode chars on certain platforms.

Alternatives to the implementation in this commit:
1. Store common directory list prefix only once (e.g. put it into
   persistent_term) and store per queue directory names in ETS.
2. Use file:filename_all() instead of file:filename() and pass binaries
   to the file module functions. However this might be brittle on some
   platforms since these binaries are interpreted as "raw filenames".
   Using raw filenames requires more changes to classic queues which we
   want to avoid to reduce risk.

The downside of the implementation in this commit is that the binary
sometimes gets converted back to a list.
This happens whenever a file is flushed or a new file gets created for
example.
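
A minimal sketch of the pattern (names are illustrative): keep the
directory as a binary in the state and convert it to a file:filename()
string only at the call site.

```
open_segment(DirBin, Name) when is_binary(DirBin) ->
    Path = filename:join(binary_to_list(DirBin), Name),
    file:open(Path, [raw, binary, read, write]).
```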

Following perf tests did not show any regression in performance:
```
java -jar target/perf-test.jar -s 10 -x 1 -y 0 -u q -f persistent -z 30
java -jar target/perf-test.jar -s 10000 -x 1 -y 0 -u q -f persistent -z 30
java -jar target/perf-test.jar -s 10 -x 100 -qp q%d -qpf 1 -qpt 100 -y 0 -f persistent -z 60 -c 1000
```

Furthermore `rabbit_file` did not show up in the CPU flame graphs
either.
2023-01-04 07:59:23 -08:00
Michael Klishin 0a8dd19434
Cosmetics
(cherry picked from commit 042725d8364bac3fed40df4dcdb534728dd56576)
2023-01-02 07:15:58 -05:00
Michael Klishin ec4f1dba7d
(c) year bump: 2022 => 2023 2023-01-01 23:17:36 -05:00
Michal Kuratczyk 44ed4eb340
Make a test less flaky; formatting 2022-12-22 14:59:47 +01:00
Michal Kuratczyk 4160dac33e
Check if stream coordinator is quorum critial 2022-12-22 10:02:13 +01:00
Alex Valiushko e07ed47d83 Parse and apply default_policies.operator
Example:

  default_policies.operator.policy-name.vhost_pattern = ^device
  default_policies.operator.policy-name.queue_pattern = .*
  default_policies.operator.policy-name.max_length_bytes = 1GB
  default_policies.operator.policy-name.max_length = 1000000
2022-12-16 10:25:30 -08:00
Arnaud Cogoluègnes d3caa1cdaa
Merge pull request #6440 from rabbitmq/stream-balancing
Streams rebalancing
2022-12-14 16:51:30 +01:00
Jean-Sébastien Pédron 6467e8ec2b
topic_permission_SUITE: Use binaries for usernames and vhost names
One testsuite was using strings to check the non-existence of a user and
a virtual host. Given these names are expected to be binaries, strings
will never match.
2022-12-13 14:55:46 +01:00
Michael Klishin 8326ec3983
Expose aten poll interval in rabbitmq.conf
as `raft.adaptive_failure_detector.poll_interval`.

On systems under peak load, inter-node communication link congestion
can result in false positives and trigger QQ leader re-elections that
are unnecessary and could make the situation worse.

Using a higher poll interval would at least reduce the probability of
false positives.

Per discussion with @kjnilsson @mkuratczyk.
2022-12-12 16:45:45 +04:00
Jean-Sébastien Pédron a6f98886c9
logging_SUITE: Remove unused `Context` variable 2022-12-01 10:15:54 +01:00
Michael Klishin ee57f0f3b4
Merge pull request #6535 from SimonUnge/4980-absolute_before_relative
See #4980. Give *.absolute precedence over *.relative configuration
2022-12-01 01:31:12 +04:00
Simon Unge 9af4567342 See #4980. Give *.absolute precedence over *.relative configuration 2022-11-30 12:44:18 -08:00
Michael Klishin 85c89931e2
Merge pull request #6462 from rabbitmq/rename-mnesia-dir-to-data-dir
Use "data directory" instead of "Mnesia directory" to indicate RabbitMQ's data location
2022-12-01 00:04:27 +04:00
Michal Kuratczyk 17e8fa6f67
Don't test rebalancing if the FF is unavailable 2022-11-30 19:24:47 +01:00
Karl Nilsson 8a6df5d955 Add feature flag for the restart_stream feature.
As it requires a new stream coordinator command which cannot be processed
by older machine versions.
2022-11-30 14:02:57 +00:00
Jean-Sébastien Pédron 15d9cdea61
Call `rabbit:data_dir/0` instead of `rabbit_mnesia:dir/0`
This is a follow-up commit to the parent commit. To quote part of the
parent commit's message:

> Historically, this was the Mnesia directory. But semantically, this
> should be the reverse: RabbitMQ owns the data directory and Mnesia is
> configured to put its files there too.

Now all subsystems call `rabbit:data_dir/0`. They are not tied to Mnesia
anymore.
2022-11-30 14:41:32 +01:00
Jean-Sébastien Pédron 15aaa009cd
feature_flags_SUITE: Test a plugin's feature flag is enabled after a node restart
This situation was untested by the testsuite and it happens that it
doesn't work correctly...

References #6500.
2022-11-30 12:13:58 +01:00
Michal Kuratczyk feff12cbe2
Stream rebalancing support 2022-11-29 18:59:50 +01:00
Karl Nilsson 97c2bb15c6 Stream coordinator restart stream preferred leader flag
Allow the restart_stream command / API to provide a preferred leader
hint. If this leader replies to the coordinator stop phase as one of the
first n/2+1 members and has a sufficiently up-to-date stream tail it will be
selected as the leader, else it will fall back and use the modulus logic to
select the next leader.
2022-11-29 16:30:44 +00:00
Karl Nilsson 6959a3624b Stream coordinator: refactoring
Refactor stream coordinator code not to rely on the node field in the
members record as this information is already present in the members map
as the key. This way we could repurpose this field.
2022-11-29 16:30:44 +00:00
Karl Nilsson 9736425fa5 Add restart_stream command to rabbitmq-streams
Also add epoch to stream_status output which requires osiris 1.4.1
2022-11-29 16:30:41 +00:00
Karl Nilsson 2266c1019b Stream coordinator
Implement "tie-break" selection when more than one replica has the
potential to become the next writer. For now only a simple modulo based
selection is made.

This may improve writer distribution in cases where a rabbit node goes down
and there are many streams.
2022-11-29 16:29:30 +00:00
David Ansari f489866011 Add rabbit_fifo_dlx_worker test
which tests that messages get delivered to target quorum queue
eventually when it gets rejected initially due to target
quorum queue not being available.
2022-11-29 14:26:17 +01:00
Michael Klishin 12c80f6cc5
Exclude one more new QQ test from mixed version runs 2022-11-27 17:06:51 +04:00
Michael Klishin e6a7f67747
Exclude a new QQ test from mixed version runs 2022-11-27 10:47:12 +04:00
Péter Gömöri 25ad13ce24 Fix case_clause crash when stream queue provided for quorum queue cmds 2022-11-26 00:57:01 +01:00
Michal Kuratczyk d8ff99180b
Consider streams in the quorum critical check (#6448)
Extended the quorum critical check to also consider streams and their
potential unavailability if the node is stopped.
2022-11-23 15:24:26 +01:00
David Ansari 5bf8192982 Support code coverage
Previously it was not possible to see code coverage for the majority of
test cases: integration tests that create RabbitMQ nodes.
It was only possible to see code coverage for unit tests.
This commit makes it possible to see code coverage for tests that create
RabbitMQ nodes.

The only thing you need to do is set the `COVER` variable, for example
```
make -C deps/rabbitmq_mqtt ct COVER=1
```
will show you coverage across all tests in the MQTT plugin.

Whenever a RabbitMQ node is started `ct_cover:add_nodes/1` is called.
Contrary to the documentation which states

> To have effect, this function is to be called from init_per_suite/1 (see common_test) before any tests are performed.

I found that it also works in init_per_group/1 or even within the test cases themselves.
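
A sketch of that pattern, assuming `rabbit_ct_broker_helpers` is used to
look up the started node names:

```
init_per_group(_Group, Config) ->
    Nodes = rabbit_ct_broker_helpers:get_node_configs(Config, nodename),
    {ok, _Started} = ct_cover:add_nodes(Nodes),
    Config.
```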

Whenever a RabbitMQ node is stopped or killed `ct_cover:remove_nodes/1`
is called to transfer results from the RabbitMQ node to the CT node.

Since the erlang.mk file writes a file called `test/ct.cover.spec`
including the line:
```
{export,".../rabbitmq-server/deps/rabbitmq_mqtt/cover/ct.coverdata"}.
```
results across all test suites will be accumulated in that file.

The accumulated result can be seen through the link `Coverage log` on the test suite result pages.
2022-11-10 15:04:31 +01:00
Alex Valiushko 87017827e0 fix rabbit_direct_reply_to:compute_key_and_suffix_v1 type signature 2022-11-09 20:33:05 -08:00
Michael Klishin b73377489f
Merge pull request #6183 from rabbitmq/qq-info-optimise
QQ: don't try to contact non-connected nodes for stats
2022-10-28 11:38:47 +04:00
Karl Nilsson f8a8fb749f Correctly decrease global counters in rabbit_channel:terminate/2
Previously publisher counts would be decremented only if "publishing_mode"
was set to false resulting in ever decreasing global counters.

No consumer counts were decremented in terminate previously resulting
in ever growing consumer counts.

Add assertion to ensure global counters are decremented
2022-10-26 10:19:08 +01:00
Michael Klishin 6f6e4b1acd
Reduce CT logging here 2022-10-25 12:38:14 +04:00
Karl Nilsson e2a5ba5b78 SC: fail active mnesia update actions on leader change
Otherwise, mnesia update actions that are in progress when a stream
coordinator leader change occurs could get stuck and never restart.

Exclude test from mixed versions

The assertion fixes a bug and requires the new code to be running.
2022-10-21 16:08:30 +01:00
Karl Nilsson 5c1b11fbb7 Make rabbit_fifo_int_SUITE:basics test less flaky 2022-10-20 12:38:29 +01:00
Michael Klishin 20bc656d14 Rename a couple of snippets 2022-10-20 04:26:16 +04:00
Michael Klishin edcc31ef58 Update default virtual host limit tests 2022-10-20 03:40:36 +04:00
Michael Klishin 919248293b Rename a schema key
References #6172
2022-10-20 03:08:06 +04:00
Alex Valiushko 27ebc04dc9 Add ability to set default vhost limits by pattern
Limits are defined in the instance config:

    default_limits.vhosts.1.pattern = ^device
    default_limits.vhosts.1.max_connections = 10
    default_limits.vhosts.1.max_queues = 10

    default_limits.vhosts.2.pattern = ^system
    default_limits.vhosts.2.max_connections = 100

    default_limits.vhosts.3.pattern = .*
    default_limits.vhosts.3.max_connections = 20
    default_limits.vhosts.3.max_queues = 20

Where pattern is a regular expression used to match limits to a newly
created vhost, and the limits are non-negative integers. First matching
set of limits is applied, only once, during vhost creation.
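
A sketch of the first-match semantics (function names are illustrative,
not the actual implementation):

```
match_limits(VHost, LimitSets) ->
    Matches = fun(#{pattern := Pattern}) ->
                  re:run(VHost, Pattern, [{capture, none}]) =:= match
              end,
    case lists:search(Matches, LimitSets) of
        {value, Limits} -> {ok, Limits};
        false           -> no_limits
    end.
```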
2022-10-19 20:00:25 +00:00
Karl Nilsson 4486843fbc QQ: don't try to contact non-connected nodes for stats
Some systems may incur a substantial latency penalty when attempting
reconnections to down nodes, so to avoid this, some stat-related functions
that gather info from all QQ member nodes now only try those nodes
that are connected. This should help keep things like the mgmt API
functions and ctl commands a bit more responsive.
2022-10-19 12:13:24 +01:00
Karl Nilsson 0cecb97a29 Stream coordinator: fix member check in ensure_coordinator_started/0
Previously if a command was issued from a rabbit node where there is
not yet a local stream coordinator member it would try to create
a new stream coordinator cluster, which would fail even if there
is a functional cluster spanning the other rabbit nodes.

This was due to use of global:whereis_name to discover stream coordinator
members on other nodes, however, Ra members never register globally
they only do local name registrations.

Replacing with a simple erpc that does erlang:whereis/1
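
That is, roughly:

```
%% Ra servers only register locally, so ask the remote node directly:
case erpc:call(Node, erlang, whereis, [ServerName]) of
    Pid when is_pid(Pid) -> {ok, Pid};
    undefined            -> not_found
end
```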

Pairing with @acogoluegnes
2022-10-19 11:02:10 +01:00
Arnaud Cogoluègnes 2460f1468b
Merge pull request #6073 from rabbitmq/stream-coordinator-overview
Implement ra_machine:overview/1 for rabbit_stream_coordinator
2022-10-13 11:27:23 +02:00
Karl Nilsson 2743df2e5f Implement ra_machine:overview/1 for rabbit_stream_coordinator
So that e.g. sys:get_status/1 would return a more compact state
representation than just a full state dump.
2022-10-12 15:23:45 +01:00
Michael Klishin 4fa66aa896
Merge pull request #6049 from rabbitmq/lukebakken/unicode-followup
Use Unicode-friendly format strings all over the codebase
2022-10-12 08:07:36 +04:00
Karl Nilsson fae8ec57d8 Make rabbit_fifo_basics_SUITE:basics less dependent on ra_event order
To allow for differences between Ra versions and the order they
execute effects.
2022-10-10 16:37:37 +01:00
Luke Bakken 7fe159edef
Yolo-replace format strings
Replaces `~s` and `~p` with their unicode-friendly counterparts.

```
git ls-files *.erl | xargs sed -i.ORIG -e s/~s>/~ts/g -e s/~p>/~tp/g
```
2022-10-10 10:32:03 +04:00
Michael Klishin 2eac17d640
Merge pull request #6019 from cloudamqp/prepare_plugins_order
Load plugins in dependency order
2022-10-08 09:10:15 +04:00
Jean-Sébastien Pédron 4b132daaba
Remove upgrade-specific log file
This category should be unused with the decommissioning of the old
upgrade subsystem (in favor of the feature flags subsystem). It means:
1. The upgrade log file will not be created by default anymore.
2. The `$RABBITMQ_UPGRADE_LOG` environment variable is now unsupported.

The configuration variables remain to avoid breaking an existing and
working configuration.
2022-10-06 21:28:50 +02:00
Karl Nilsson 3f53846c70 Refactor quorum queue periodic metric emission
Primarily to avoid the transient process that is spawned every
Ra "tick" to query the quorum queue process for information that could
be passed in when the process is spawned.

Refactored to make use of the overview map instead of a custom
tuple.

Also use ets:lookup_element where applicable
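
For example, fetching a single field without copying the whole row (the
table name and key position here are illustrative):

```
Reductions = ets:lookup_element(queue_metrics, QName, 2).
```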
2022-10-05 11:24:34 +01:00
Péter Gömöri 3b422d34b7 Load plugins in dependency order
When enabling a plugin on the fly, the dir of the plugin and its
dependencies are added to the load path (and their modules are
loaded). There are some libraries which require their dependencies to be
loaded before they are (e.g lz4 requires host_triple to load the NIF
module).

During `rabbit_plugins:prepare_plugins/1`, `dependencies/3` gathers the
wanted apps in the right order (every app comes after its deps) however
`validate_plugins/1` used to reverse their order, so later code loading
in `prepare_plugin/2` happened in the wrong order.

This is now fixed, so `validate_plugins/1` preserves the order of apps.
2022-10-05 01:00:05 +02:00
Michael Klishin b31f23c4d0
Merge pull request #5944 from rabbitmq/gh_5927
Fix channel crash when cancelling then consuming using the same consumer tag and channel
2022-10-04 11:20:17 +04:00
Karl Nilsson 25a6ec3919 Change rabbit_fifo_client get_missing_deliveries to use aux_command
As since QQ v2 we don't ever keep any messages in memory and we need
to read them from the log. The only way to do this is by using an
aux command.

Execute get_checked_out query on local members if possible

This reduces the chance of crashing a QQ member during a live upgrade
where the follower does not have the appropriate code in handle_aux.
2022-10-03 16:05:34 +01:00
Karl Nilsson 63f01c8c6c Ensure consumer msg_id state is synchronised
By returning the next msg id for a merged consumer, the rabbit_fifo_client
can set its next expected msg_id accordingly and avoid triggering
a fetch of "missing" messages from the queue.
2022-10-03 16:05:34 +01:00
Michael Klishin 69b06d30f1
Merge pull request #4522 from rabbitmq/loic-cq-dont-reduce-memory-usage
CQ: Merge lazy/default behavior into a unified mode
2022-10-01 20:11:41 +04:00
Arnaud Cogoluègnes 3767401696
Make ensure_monitors more defensive in SAC coordinator
Do not assume the connection PID of a consumer is still
known from the state when cleaning up state during consumer
unregistration.

Fixes #5889
2022-09-28 16:00:00 +02:00
Michal Kuratczyk 2855278034
Migrate from supervisor2 to supervisor 2022-09-27 13:53:06 +02:00
Loïc Hoguin e09cbeb00c
CQ: Fix channel_operation_timeout_SUITE mixed versions
Since ram_pending_acks is now a map the test must support both
map and gb_trees to continue working. Also updated the state to
reflect a field name change.
2022-09-27 12:00:10 +02:00
Loïc Hoguin f1ae007455
CQ: Fix test compilation error following rebase 2022-09-27 12:00:09 +02:00
Loïc Hoguin f59020191b
CQ: Enable broken checks in backing_queue_SUITE again 2022-09-27 12:00:09 +02:00
Loïc Hoguin 723cc54705
CQ: Some cleanup 2022-09-27 12:00:09 +02:00
Loïc Hoguin 8051b00305
CQv2: Small fixes of and via the property suite 2022-09-27 12:00:09 +02:00
Loïc Hoguin 3683ab9a6e
CQ: Use v2 sets instead of gb_sets for confirms
For the following flags I see an improvement of
30k/s to 34k/s on my machine:

-x 1 -y 1 -A 1000 -q 1000 -c 1000 -s 1000 -f persistent
-u cqv2 --queue-args=x-queue-version=2
2022-09-27 12:00:08 +02:00
Loïc Hoguin a31be66af5
CQ: Fix prop suite after removal of lazy and other changes 2022-09-27 12:00:08 +02:00
Loïc Hoguin 1fb44267b4
Tweak test suites following CQ changes 2022-09-27 12:00:08 +02:00
Loïc Hoguin 38f335e83b
Rework CQ stats code to be obvious and efficient 2022-09-27 12:00:08 +02:00
Loïc Hoguin 341e908bbf
CQ: Merge lazy/default behavior into a unified mode
No longer reduce memory usage as well (except an explicit GC
that I am pondering about removing).
2022-09-27 12:00:03 +02:00
Michael Klishin 7332a3ab8a
Merge pull request #5737 from rabbitmq/rabbit-event-stats
Stop sending stats to rabbit_event
2022-09-09 21:04:50 +04:00
David Ansari 831a8e5211 Fix flaky rabbit_fifo_int:credit tests
Prior to this commit test credit() was flaky.

It used to fail with:
```
rabbit_fifo_int_SUITE:credit failed on line 437
Reason: {badmatch,{[{{resource,"/",queue,<<"credit">>},credit,1,fals...}
```
In that case returned events were:
```
Events=[{ra_event,{credit,ct_rabbit@nuc},{applied,[{3,ok}]}},
        {ra_event,{credit,ct_rabbit@nuc},
                  {machine,{delivery,<<"tag">>,[{1,{0,m2}}]}}}]
```
instead of:
```
Events=[{ra_event,
            {credit,ct_rabbit@nuc},
            {machine,{delivery,<<"tag">>,[{1,{0,m2}}]}}},
        {ra_event,
            {credit,ct_rabbit@nuc},
            {applied,
                [{3,ok},
                 {4,
                  {multi,
                      [{send_credit_reply,0},
                       {send_drained,{<<"tag">>,3}}]}}]}}]
```

The fix is we wait for both ra 'applied' events.
2022-09-09 15:46:40 +00:00
David Ansari b6952540a3 Remove rabbit_misc:atom_to_binary/1
Nowadays, we have erlang:atom_to_binary/1.
2022-09-09 10:52:38 +00:00
David Ansari 19628af139 Add gen_event behaviour declaration
This makes it easier to see that a module is an event handler.
Furthermore the compiler checks for required callback functions.
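
For example:

```
-module(my_event_handler).
-behaviour(gen_event).  %% the compiler warns if callbacks are missing

-export([init/1, handle_event/2, handle_call/2]).

init(Args) -> {ok, Args}.
handle_event(_Event, State) -> {ok, State}.
handle_call(_Request, State) -> {ok, ok, State}.
```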
2022-09-09 10:52:38 +00:00
David Ansari b953b0f10e Stop sending stats to rabbit_event
Stop sending connection_stats from protocol readers to rabbit_event.
Stop sending queue_stats from queues to rabbit_event.
Sending these stats every 5 seconds to the event manager process is
superfluous because no one handles these events.

They seem to be a relic from before rabbit_core_metrics ETS tables got
introduced in 2016.

Delete test head_message_timestamp_statistics because it tests that
head_message_timestamp is set correctly in queue_stats events
although queue_stats events are used nowhere.
The functionality of head_message_timestamp itself is still tested in
deps/rabbit/test/priority_queue_SUITE.erl and
deps/rabbit/test/temp/head_message_timestamp_tests.py
2022-09-09 10:52:38 +00:00
Luke Bakken a9e6fca15b
Disk monitor strikes back!
* Crash when a sub-command times out
* Use atom `NaN` when free space can not be determined

Fixes #5721

Use port to run /bin/sh on `unix` systems to then run `df` command

Update disk monitor tests to not use mocks because we no longer use rabbit_misc:os_cmd/1
2022-09-08 14:43:43 -07:00
Michael Klishin ead5acc7d6 Squash a few compiler warnings
one revealed a real issue in a CLI command
2022-08-28 18:16:01 +04:00
David Ansari 530b65fa15 Introduce new credit_mode {simple_prefetch, MaxCredits} for v3
In rabbit_fifo Ra machine v3 instead of using credit_mode
simple_prefetch, use credit_mode {simple_prefetch, MaxCredits}.

The goal is to rely less on consumer metadata which is supposed to just be a
map of informational metadata.
We know that the prefetch is part of consumer metadata up until now.
However, the prefetch might not be part anymore of consumer metadata in
a future Ra version.

This commit therefore ensures that:
1. in the conversion from v2 to v3, {simple_prefetch, MaxCredits} is
   set as credit_mode if the consumer uses simple_prefetch, and
2. whenever a new credit_mode is set (in merge_consumer() or
   update_consumer()), ensure that the credit_mode is set correctly if
   the machine runs in v3
2022-08-24 17:43:12 +02:00
David Ansari 03659864bb Fix duplicate credit grant in quorum queue
Prior to this commit, when a consumer NACKed a message with requeue=true
(resulting in a Ra #return{} command) and that message got
dropped or dead-lettered (for example because the delivery-limit
was exceeded), that consumer got too many credits granted,
which could lead to exceeding (and therefore violating) the
configured Prefetch value.
The consumer got credit granted twice:
1. for completing the message, and
2. for returning the message.

The same issue occurs in the scenario where a consumer (or its node)
is DOWN. In that case the consumer got credit granted twice:
1. for completing the message (if delivery limit is exceeded), and
2. for returning the message (so that other live consumers can consume)

This bug has existed since 3.8. It got reported in
https://groups.google.com/g/rabbitmq-users/c/iMcX0oXzURQ

From now on, credit for a consumer is increased in only 3 places:

* a message is completed, or
* a message is returned, or
* a message is requeued using the new #requeue{} Ra command (when
  delivery-limit is not configured)

Furthermore, this commit also fixes a somewhat related bug:
When a consumer with checked out messages gets cancelled, and
a new consumer subscribes with the same consumer ID and on the same channel
but with a lower Prefetch value, that new consumer's Prefetch value was
not always respected. This commit fixes this issue by ensuring that
the credit in simple_prefetch mode does not exceed the consumer
Prefetch value.

This commit requires a new Ra machine version v3 which is done in-place
in rabbit_fifo due to the small number of changes compared to v2.
2022-08-24 17:43:12 +02:00
Karl Nilsson 8b029d4ad0 rabbit_stream_queue flake improvement
By waiting for the first remove_replica command to complete we
may reduce the likelihood of this test flaking.
2022-08-11 15:46:35 +01:00
David Ansari 6211b900d8 Fix failing test
due to the changes in https://github.com/rabbitmq/ra/pull/298

'delivery' ra event is now received before 'applied' ra event.
2022-08-09 16:46:56 +02:00
David Ansari 23e7fc860b Enable a post-#4563 test
When a non-mirrored durable classic queue is hosted on a node
that goes down, prior to #4563 not only was the behaviour
that the queue gets deleted from the rabbit_queue table,
but also that its corresponding bindings get deleted.
The purpose of this test was to make sure that bindings
get also properly deleted from the new rabbit_index_route
table.

Given that the behaviour now changed #4563 we can either
delete this test or - as done in this commit - adapt this test.
2022-08-08 06:30:51 +00:00
Michael Klishin 842bae6163 Disable a test that needs revision post-#4563
How the behavior of this test should change
is yet to be discussed with @dcorbacho @ansd @lhoguin
2022-08-06 09:25:02 +04:00
Loïc Hoguin 744e66e42a
CQv1: Fix failure to recover messages in rare cases
When a full recovery was done it was possible to lose messages
for v1 queues when the queues only had a journal file and no
segment files.

In practice it should be a rare event because it requires the
queue (or maybe the node) to crash first and then the vhost or
the node to be restarted gracefully.
2022-08-04 13:50:12 +02:00
Iliia Khaprov 0f541f443f close #5399, set default vhost queue type from import's metadata 2022-08-01 15:01:13 +02:00
Jean-Sébastien Pédron 6e9ee4d0da
Remove test code which depended on the `quorum_queue` feature flag
These checks are now irrelevant as the feature flag is required.
2022-08-01 12:41:30 +02:00
Jean-Sébastien Pédron 909f861e55
Remove pre-quorum-queue compatibility code
Quorum queues were introduced in RabbitMQ 3.8.0. This was first time we
added a breaking change protected behind a feature flag. This allowed a
RabbitMQ cluster to be upgraded one node at a time, without having to
stop the entire cluster.

The breaking change was a new field in the `#amqqueue{}` record. This
broke the API and the ABI because records are a compile-time thing in
Erlang.

The compatibility code is in the wild for long enough that we want to
support the new `#amqqueue{}` record only from now on. The
`quorum_queue` feature flag was marked as required in a previous commit
(see #5202). This allows us to remove code in this patch.

References #5215.
2022-08-01 12:31:40 +02:00
Jean-Sébastien Pédron 776b4323bd
Remove test code which depended on the `virtual_host_metadata` feature flag
These checks are now irrelevant as the feature flag is required.
2022-08-01 11:56:53 +02:00
Michael Klishin 3fe97685d6
Merge branch 'master' into mk-swap-json-library 2022-07-30 15:35:25 +04:00
Michael Klishin 3664b2475b
Merge pull request #5385 from rabbitmq/conditional-logging-format
Add conditional logging to text formatter
2022-07-29 21:43:31 +04:00
Michael Klishin 01871b4a65
Adapt JSON logging test for Thoas
Only top-level features are atomized by rabbit_json:encode/1
now
2022-07-29 18:27:38 +04:00
David Ansari 4d17f63e2f Add test for routing from exchange to exchange 2022-07-29 10:18:49 +00:00
Michael Klishin 1bea14fdca
Begin adapting logging_SUITE for Thoas 2022-07-29 14:01:00 +04:00
Jean-Sébastien Pédron 5b98d7d2a2
Remove test code which depended on the `maintenance_mode_status` feature flag
These checks are now irrelevant as the feature flag is required.
2022-07-29 11:51:52 +02:00
Jean-Sébastien Pédron 32049cd256
Remove test code which depended on the `user_limits` feature flag
These checks are now irrelevant as the feature flag is required.
2022-07-29 11:04:48 +02:00
Iliia Khaprov 1c1f5403d6 Add conditional logging to text formatter.
Just like the OTP logger.
2022-07-29 10:40:29 +02:00
Michael Klishin 9c99f76579
Replace JSX with Thoas for JSON operations
Thoas is more efficient both in terms of encoding
time and peak memory footprint.

In the process we have discovered an issue:
https://github.com/lpil/thoas/issues/15

Pair: @pjk25
2022-07-29 10:34:47 +04:00
Jean-Sébastien Pédron f46acbd34c
Merge pull request #5301 from rabbitmq/tracking-to-ets
Move connection and channel tracking tables to ETS
2022-07-28 17:09:23 +02:00
David Ansari 4b6d72ea41 Add more direct_exchange_routing_v2 tests
1. Recover bindings
2. Enable feature flag with concurrent definition import
2022-07-28 14:06:59 +00:00
dcorbacho 6069af791c Minor fixes to move tracking tables to ETS 2022-07-28 15:49:20 +02:00
dcorbacho 5795ba94b1 Move connection and channel tracking tables to ETS 2022-07-28 15:49:20 +02:00
Jean-Sébastien Pédron 9c33a470c7
Merge pull request #5345 from rabbitmq/improve-ff-v2-callbacks
rabbit_feature_flags: Use one callback per command
2022-07-28 11:54:03 +02:00
Michael Klishin b623093d60
Merge pull request #5325 from rabbitmq/fix-direct-exchange-routing-v2
Fix rabbit_index_route inconsistencies
2022-07-28 09:23:21 +04:00
Jean-Sébastien Pédron bc6e28f5f3
rabbit_feature_flags: Use one callback per command
In the initial Feature flags subsystem implementation, we used a
migration function taking three arguments:
* the name of the feature flag
* its properties
* a command (`enable` or `is_enabled`)

However we wanted to implement a command (`post_enable`) and pass more
variables to the migration function. With the rework in #3940, the
migration function was therefore changed to take a single argument. That
argument was a record containing the command and much more information.
The content of the record could be different, depending on the command.

This improved the extensibility and flexibility of how we could call
the migration function. Unfortunately, this didn't improve its return
value as we wanted to return different things depending on the command.

In this patch, we change this completely by using a map of callbacks,
one per command, instead of that single migration function.

So before, where we had:

    #{migration_fun => {Module, Function}}

The new property is now:

    #{callbacks => #{enable => {Module, Function},
                     post_enable => {Module, Function}}}

All callbacks are still optional. You don't have to implement a fallback
empty function clause to skip commands you don't want to use or don't
support, as you would have to with the migration function.

Callbacks may be called with a callback-specific map of arguments and
they should return the expected callback-specific return values.
Everything is defined with specs.
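
A hypothetical flag declaration using the callbacks map could look like
this (the flag and module names are made up):

```
-rabbit_feature_flag(
   {my_feature,
    #{desc      => "Example feature flag",
      stability => stable,
      callbacks => #{enable      => {my_module, enable},
                     post_enable => {my_module, post_enable}}}}).
```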

If a feature flag uses this new callbacks map, it will automatically
depend on `feature_flags_v2`, like the previous arity-1 migration
function. A feature flag can't define both the old migration function
and the new callbacks map.

Note that this arity-1 migration function never made it to a release of
RabbitMQ, so its support is entirely dropped with no backward
compatibility support. Likewise for the `#ffcommand{}` record.
2022-07-27 18:09:41 +02:00
Rin Kuryloski 7cf6deb855 Adjust the feature_flags_SUITE skip logic to account for bazel
When tests run in bazel, only compiled files are present in the
sandbox, so the checks that are performed must be modified
2022-07-27 14:45:37 +02:00
Rin Kuryloski e4995253f7 Re-enable mixed version tests for the secondary feature flags suite 2022-07-27 14:38:12 +02:00
Jean-Sébastien Pédron 46227fa955
feature_flags_SUITE: Skip `feature_flags_v2` if unsupported by secondary umbrella
The `feature_flags_v2` test group is only relevant if all nodes support
the `feature_flags_v2` feature flag. When doing mixed-version testing,
we must ensure that both umbrellas support that feature flag. If they
do not, we can skip the entire test group.

While here, add a dot at the end of a comment title.
2022-07-27 10:51:45 +02:00
Jean-Sébastien Pédron f5b0c8c2c9
feature_flags_SUITE: Remove CLI work around
We had this setup step to work around the by-design circular dependency
in the CLI (the broker depends on the CLI to start and the CLI depends
on the broker to work).

Unfortunately, it pollutes the code path and breaks the testsuite when
doing mixed-version testing: the copied `rabbit` taken from the main
umbrella ends up in the code path of the secondary umbrella, overriding
the `rabbit` there.

This patch removes this workaround to see what breaks, but it seems to
work fine so far. Let's see how it goes!
2022-07-27 10:51:45 +02:00
Jean-Sébastien Pédron c29dbc227a
feature_flags_SUITE: Adapt the argument to `rabbit_feature_flags:inject_test_feature_flags()`
Up to RabbitMQ 3.10.x, this function took an application attribute. In
`master`, it takes the feature flags map only.

The testsuite now verifies if the remote node supports
`feature_flags_v2` to determine if it should call the function with the
feature flags map directly or convert it to an application attribute
first.

This fixes some failure of the testsuite in mixed-version cluster
testing.
2022-07-27 10:51:45 +02:00
Rin Kuryloski 38f864a445
feature_flags_SUITE: Restore testsuite under mixed-version testing
This testsuite is mainly relevant in mixed-version testing. Indeed, we
really hope that two brokers running the same code base can work
together!
2022-07-27 10:51:41 +02:00
Karl Nilsson 935a768a90 adjust test function in stream queue test 2022-07-26 17:54:57 +01:00
Karl Nilsson 7d11d0f592 make stream queue suite less flaky 2022-07-26 15:40:11 +01:00
Karl Nilsson a772508a2a avoid policy pattern clashes in stream suite 2022-07-26 14:42:07 +01:00
Karl Nilsson c9b414a26e Avoid policy name collisions in stream queue suite 2022-07-26 14:42:07 +01:00
Karl Nilsson a5ced5d97e Fix stream queue segment size policy test
Set the policy _before_ creating the stream as there is a current
limitation which means streams won't be updated immediately when
only changing the segment size.
2022-07-26 14:42:07 +01:00
Michael Klishin 4cacec6bfd
Merge pull request #5305 from rabbitmq/default-queue-type-per-vhost
Configure default queue type by vhost
2022-07-26 01:37:56 +04:00
David Ansari 5a707e90cd Fix rabbit_index_route inconsistencies
Prior to this commit, when bindings were deleted while enabling feature
flag direct_exchange_routing_v2 - specifically AFTER
rabbit_binding:populate_index_route_table/0 ran and BEFORE the feature
flags controller changed the feature state from 'state_changing' to 'enabled' -
some bindings were incorrectly present in table rabbit_index_route.

This commit fixes this issue.

If the state is 'state_changing', bindings must be deleted when the
table already got created.

(Somewhat unexpectedly) checking for table existence within a Mnesia
transaction can return 'true' although the subsequent Mnesia table operation
will fail with {no_exists, rabbit_index_route}.

Therefore, a lock on the schema table must be set when checking for
table existence.
(Mnesia itself creates a write lock on the schema table when creating a
table, see
09c601fa21/lib/mnesia/src/mnesia_schema.erl (L1525) )
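
A sketch of the fix (the exact existence check and the surrounding code
are simplified here):

```
delete_binding(Key) ->
    mnesia:transaction(
      fun() ->
          %% serialise with concurrent table creation
          mnesia:lock({table, schema}, write),
          case lists:member(rabbit_index_route, mnesia:system_info(tables)) of
              true  -> mnesia:delete({rabbit_index_route, Key});
              false -> ok
          end
      end).
```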

An alternative fix would have been to catch {no_exists,
rabbit_index_route} outside the Mnesia transaction, i.e. in all the callers of
the rabbit_binding:remove... functions and then retry the
rabbit_binding:remove... function.
2022-07-25 17:57:35 +00:00
Michael Klishin 8f779ce461
Avoid direct references to jsx
and remove an unused Honeycomb Common Test helper module
we ended up not using.

Discovered when spiking a JSON library switch to Thoas.

Pair: @pjk25
2022-07-25 19:34:51 +04:00
Karl Nilsson 15435eb922 Introduce new queue type callback to check argument compatibilty 2022-07-25 12:34:51 +01:00
Karl Nilsson 2c4e8cfa29 Configure default queue type by vhost
This allows operators to override the default queue type on a per-vhost
basis by setting the default_queue_type metadata key on the vhost record.

When a queue is declared without specifying a queue type (x-queue-type)
and there is a default set for the vhost, we will augment the declare arguments
with the default. This allows future declares with and without the x-queue-type
argument to succeed.
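
Roughly (the vhost lookup helper below is hypothetical):

```
augment_args(Args, VHost) ->
    case rabbit_misc:table_lookup(Args, <<"x-queue-type">>) of
        undefined ->
            case vhost_default_queue_type(VHost) of  %% hypothetical helper
                undefined -> Args;
                Type      -> [{<<"x-queue-type">>, longstr, Type} | Args]
            end;
        _Found ->
            Args
    end.
```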

Also only change the default _if_ the queue is durable and not
exclusive.
2022-07-25 12:34:28 +01:00
David Ansari b2fbf154b0 Use feature_flags_v2 API for direct_exchange_routing_v2
Changes in this commit:

1.
Use feature_flags_v2 API for feature flag direct_exchange_routing_v2.
Both feature flags feature_flags_v2 and direct_exchange_routing_v2
will be introduced in 3.11.
Therefore, direct_exchange_routing_v2 can depend on feature_flags_v2.

2.
rabbit_binding:populate_index_route_table/0 should be run only during the
feature flag migration. Thereafter, upon node start-ups, binding recovery takes
care of populating the table.

3.
Merge direct_exchange_routing_v2_post_3_11_SUITE into
direct_exchange_routing_v2_SUITE. We don't need two separate test
suites with almost identical setup and teardown.
2022-07-19 10:55:36 +00:00
Michael Klishin 269504c9d3
Merge pull request #5203 from rabbitmq/ensure-index-route-table
Ensure rabbit_index_route table is created after joining a cluster
2022-07-15 20:14:11 +04:00
dcorbacho ae9547df34 Ensure that the post 3.11 suite doesn't run in mixed mode
This commit can be reverted once all mixed versions are >= 3.11
2022-07-15 17:44:14 +02:00
dcorbacho caa0cfd575 Ensure rabbit_index_route table is created after joining a cluster
With the feature flags controller, the features are not re-enabled every time.
This table needs to be considered as a special case.
2022-07-15 16:11:04 +02:00
Jean-Sébastien Pédron d4bb6539a5
feature_flags_v2_SUITE: Remove trailing whitespace 2022-07-15 10:32:28 +02:00
Jean-Sébastien Pédron e682c11300
feature_flags_v2_SUITE: Fix typo in "Overring" 2022-07-15 10:32:28 +02:00
Jean-Sébastien Pédron abd4edddc1
feature_flags_SUITE: Use `stream_queue` instead of `quorum_queue` in testcases
`quorum_queue` is now required and can't be used in tests.

Also, part of the `enable_feature_flag_when_ff_file_is_unwritable`
testcase was commented out because it relied on the `is_enabled` command
which was dropped in `rabbit_ff_controller`. This should be introduced
at some point with a more robust design.
2022-07-15 10:32:28 +02:00
Jean-Sébastien Pédron a1d0e45560
feature_flags_SUITE: Test required feature flags using `quorum_queue`
Our test framework can inject feature flags, but we would need a special
handling for required injected feature flags which would not test the
regular code path.

Therefore we rely on the `quorum_queue` feature flag, now required, to
test that it is correctly enabled on boot and when clustering.
2022-07-15 10:32:28 +02:00
David Ansari ceb5c72bbb Do not compute checksums for quorum queues
Make use of https://github.com/rabbitmq/ra/pull/292

The new default will be to NOT compute CRC32 for quorum queue segments
and to NOT compute Adler32 for WAL to achieve better performance.

See https://github.com/rabbitmq/ra/pull/292#pullrequestreview-1013194678
for performance improvements.
2022-07-06 13:37:50 +02:00
David Ansari a8442ccc7a Fix mixed version channel crash
Fixes #5141
2022-07-04 08:20:55 +00:00
Jean-Sébastien Pédron bcb8733880
rabbit_feature_flags: Add a feature flags controller process
This gen_statem-based process is responsible for handling concurrency
when feature flags are enabled and synchronized when a cluster is
expanded.

This clarifies and stabilizes the behavior of the feature flag subsystem
w.r.t. situations where e.g. a feature flag migration function takes
time to update data and a new node joins a cluster and synchronizes its
feature flag states with the cluster. There was a chance that the
feature flag was marked as enabled on the joining node, even though the
migration function didn't take care of that node.

With this new feature flags controller, enabling or synchronizing
feature flags blocks and delays any concurrent operations which try to
modify feature flags states too.

This change also clarifies where and when the migration function is
called: it is called at least once on each node who knows the feature
flag and when the state goes from "disabled" to "enabled" on that node.

Note that even if the feature flag is being enabled on a subset of the
nodes (because other nodes already have it enabled), it is marked as
"state_changing" everywhere during the migration. This is to prevent
that a node where it is enabled assumes it is enabled on all nodes who
know the feature flag.

There is a new feature as well: just after a feature flag is enabled,
the migration function is called a second time for any post-enable
actions. The feature flag is marked as enabled between these "enable"
and "post-enable" steps. The success or failure of this "post-enable"
run does not affect the state of the feature flag (i.e. it is ignored).

A new migration function API is introduced to allow more advanced
things. The new API is:

    my_migration_function(
      #ffcommand{name = ...,
                 props = ...,
                 command = enable | post_enable,
                 extra = #{...}})

The record is defined in `include/feature_flags.hrl`. Here is the
meaning of each field:

* `name` and `props` are the equivalent of the `FeatureName` and
  `FeatureProps` arguments of the previous migration function API.

* `command` is basically the same as the previous `Arg` arguments.

* `extra` is a map containing context-specific information. For instance, it
  contains the list of nodes where the feature flag state changes.

This whole new behavior is behind a new feature flag called
`feature_flags_v2`. If a feature flag uses the new migration function
API, `feature_flags_v2` will be automatically enabled.

If many feature flags are enabled at once (like when a fresh RabbitMQ
node is started for the first time), `feature_flags_v2` will be enabled
first if it is in the list.
2022-06-28 10:13:19 +02:00
David Ansari 0ec9566e95 Do not route to duplicate extra BCC destinations 2022-06-27 17:17:20 +00:00
Jean-Sébastien Pédron 2e3ba4c1d7
unit_config_value_encryption_SUITE: Fix log message + add stacktrace
The format string started with "~s" but there was no corresponding
argument. I just removed the "~s".

While here, the log message now contains the stacktrace too.
2022-06-20 13:37:20 +02:00
Arnaud Cogoluègnes 68a1a848a9
Limit stream max segment size value in policy
To 3 GB.
2022-06-10 13:42:52 +02:00
Arnaud Cogoluègnes e44b65957d
Limit stream max segment size to 3 GB
Values too large can overflow the stream position field
in the index (32 bit int).
2022-06-10 11:45:57 +02:00
Philip Kuryloski 327f075d57 Make rabbitmq-server work with rules_erlang 3
Also rework elixir dependency handling, so we no longer rely on mix to
fetch the rabbitmq_cli deps

Also:

- Specify ra version with a commit rather than a branch
- Fixup compilation options for erlang 23
- Add missing ra reference in MODULE.bazel
- Add missing flag in oci.yaml
- Reduce bazel rbe jobs to try to save memory
- Use bazel built erlang for erlang git master tests
- Use the same cache for all the workflows but windows
- Avoid using `mix local.hex --force` in elixir rules
  - Fetching seems blocked in CI, and this should reduce hex api usage in
    all builds, which is always nice
- Remove xref and dialyze tags since rules_erlang 3 includes them in
  the defaults
2022-06-08 14:04:53 +02:00
David Ansari e26dd85a18 Fix quorum queue crash when reject with requeue followed by dead-letter
Prior to this commit, for at-most-once and at-least-once dead lettering
a quorum queue crashed if:
1. no delivery-limit set (causing Ra #requeue{} instead of #enqueue{} command
when message gets requeued)
2. message got rejected with requeue = true
3. requeued message got dead lettered

Fixes #4940
2022-06-01 13:53:19 +00:00
Michael Klishin b7b73302f9
Revisit test introduced in #4891 to not use 3.10-specific logger settings
(cherry picked from commit aa9d10e1a93f2c4f258f635f39d9873d9c3adaab)
2022-05-25 14:57:53 +04:00
Péter Gömöri 02975a560a Allow updating config (e.g. log level) of log exchange 2022-05-24 19:18:57 +02:00
Michael Klishin 721a8f06e3
Skip this test in mixed cluster environments 2022-05-19 14:53:16 +04:00
Lajos Gerecs 25f8a9611b implement fallback secret for credentials obfuscation
Author:    Lajos Gerecs <lajos.gerecs@erlang-solutions.com>
2022-05-18 23:03:46 +04:00
Michael Klishin 40faa1e625
Merge pull request #4838 from rabbitmq/handle-void-header-types
Handle void types in AMQP 0.9.1 -> AMQP 1.0 conversion
2022-05-18 20:34:02 +04:00
Karl Nilsson a0dea885f7 tidy up 2022-05-18 17:04:27 +01:00
Karl Nilsson 7242601a2b Handle void types in AMQP 0.9.1 -> AMQP 1.0 conversion
void -> null
2022-05-18 17:01:02 +01:00
David Ansari 2d14403dad Reduce expiry limit from 100 to 10 years 2022-05-18 12:11:57 +00:00
David Ansari de4eeb678e Set maximum expiration
When applications accidentally set an unreasonable high value for
the message TTL expiration field, e.g. 6779303336614035452,
before this commit quorum queue and classic queue processes crashed:

```
2022-05-17 13:35:26.488670+00:00 [notice] <0.1000.0> queue 'test' in vhost '/': candidate -> leader in term: 2 machine version: 2
2022-05-17 13:35:26.489492+00:00 [error] <0.1000.0>   crasher:
2022-05-17 13:35:26.489492+00:00 [error] <0.1000.0>     initial call: ra_server_proc:init/1
2022-05-17 13:35:26.489492+00:00 [error] <0.1000.0>     pid: <0.1000.0>
2022-05-17 13:35:26.489492+00:00 [error] <0.1000.0>     registered_name: '%2F_test'
2022-05-17 13:35:26.489492+00:00 [error] <0.1000.0>     exception error: bad argument
2022-05-17 13:35:26.489492+00:00 [error] <0.1000.0>       in function  erlang:start_timer/4
2022-05-17 13:35:26.489492+00:00 [error] <0.1000.0>          called as erlang:start_timer(6779303336614035351,<0.1000.0>,
2022-05-17 13:35:26.489492+00:00 [error] <0.1000.0>                                       {timeout,expire_msgs},
2022-05-17 13:35:26.489492+00:00 [error] <0.1000.0>                                       [])
2022-05-17 13:35:26.489492+00:00 [error] <0.1000.0>          *** argument 1: exceeds the maximum supported time value
2022-05-17 13:35:26.489492+00:00 [error] <0.1000.0>       in call from gen_statem:loop_timeouts_start/16 (gen_statem.erl, line 2108)
2022-05-17 13:35:26.489492+00:00 [error] <0.1000.0>     ancestors: [<0.999.0>,ra_server_sup_sup,<0.250.0>,ra_systems_sup,ra_sup,
2022-05-17 13:35:26.489492+00:00 [error] <0.1000.0>                   <0.186.0>]
```

In this commit, we disallow expiry fields higher than 100 years.
This causes the channel to be closed which is better than crashing the
queue process.

This new validation applies to message TTLs and queue expiry.

From the docs of erlang:start_timer:
"The absolute point in time, the timer is set to expire on, must be in the interval
[erlang:convert_time_unit(erlang:system_info(start_time), native, millisecond),
 erlang:convert_time_unit(erlang:system_info(end_time), native, millisecond)].
If a relative time is specified, the Time value is not allowed to be negative.

end_time:
The last Erlang monotonic time in native time unit that can be represented
internally in the current Erlang runtime system instance.
The time between the start time and the end time is at least a quarter of a millennium."
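
The validation amounts to a simple bound check; a rough sketch (the
names and the exact constant are illustrative):

```
-define(MAX_EXPIRY_MS, 100 * 365 * 24 * 60 * 60 * 1000).  %% ~100 years

check_expiry(Expiry) when is_integer(Expiry),
                          Expiry >= 0,
                          Expiry =< ?MAX_EXPIRY_MS ->
    ok;
check_expiry(Expiry) ->
    {error, {invalid_expiry, Expiry}}.
```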
2022-05-18 11:01:17 +00:00
David Ansari 4472ddf71c Increase receiving throughput from a stream via AMQP
This commit increases consumption throughput from a stream via AMQP 0.9.1
for 1 consumer by 83k msg/s or 55%,
for 4 consumers by 140k msg/s or 44%.

This commit tries to follow https://www.erlang.org/doc/efficiency_guide/binaryhandling.html
by reusing match contexts instead of creating new sub-binaries.

The CPU and mmap() memory flame graphs show that
when producing and consuming from a stream via AMQP 0.9.1
module amqp10_binary_parser requires
before this commit: 10.1% CPU time and 8.0% of mmap system calls
after this commit:  2.6% CPU time 2.5% of mmap system calls
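
The idiom, in miniature: match every field in a single pattern and
recurse on the tail so the compiler can keep one match context alive
instead of materialising a sub-binary per iteration (illustrative
parser, not the actual amqp10_binary_parser code):

```
count_frames(<<Size:32, _Body:Size/binary, Rest/binary>>, N) ->
    count_frames(Rest, N + 1);
count_frames(<<>>, N) ->
    N.
```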

Performance tests

Start rabbitmq-server without any plugins enabled and with 4 schedulers:
```
make run-broker PLUGINS="" RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+JPperf true +S 4"
```

Test 1

Perf test client:
```
-x 1 -y 2 -qa x-queue-type=stream -ad false -f persistent -u s1 --qos 10000 --multi-ack-every 1000 -z 30
```

master branch:
sending rate avg msg/s 143k - 146k
receiving rate avg msg/s 188k - 194k

PR:
sending rate avg 133k - 138k
receiving rate avg 266k - 276k

This shows that with AMQP 0.9.1 and a stream, prior to this commit the broker could not
deliver messages to consumers as fast as they were published.
After this commit, it can.

Test 2

First, produce a few millions messages:
```
-x 1 -y 0 -qa x-queue-type=stream -ad false -f persistent -u s2
```
Then, consume them:
```
-x 0 -y 1 -qa x-queue-type=stream -ad false -f persistent -u s2 --qos 10000 --multi-ack-every 1000 -ca x-stream-offset=first -z 30
```

receiving rate avg msg/s
master branch:
147k - 156k

PR:
230k - 237k

Improvement: 83k / 55%

Test 3

```
-x 0 -y 4 -qa x-queue-type=stream -ad false -f persistent -u s2 --qos 10000 --multi-ack-every 1000 -ca x-stream-offset=first -z 30
```

receiving rate avg msg/s
master branch:
313k - 319k

PR:
450k - 461k

Improvement: 140k / 44%
2022-05-16 09:07:46 +00:00
David Ansari 04938f1d6a Fix unit test
The new default format of the log level is the full name.
2022-05-12 10:01:04 +00:00
David Ansari f4c5694813 Add feature flag direct_exchange_routing_v2 2022-05-11 15:25:15 +00:00
David Ansari 84ff7e6dea Increase routing throughput for direct exchange 2022-05-11 15:25:15 +00:00
David Ansari 70a639cd19 Avoid ETS lookup if no extra_bcc queue set
A queue (Q1) can have an extra_bcc queue (Q2).
Whenever a message is routed to Q1, it must also be routed to Q2.

Commit fc2d37ed1c
puts the logic to determine extra_bcc queues into
rabbit_exchange:route/2.
That is functionally correct because it ensures that messages being dead
lettered to target queues will also route to the target queues'
extra_bcc queues.
For every message being routed, that commit uses ets:lookup/2
just to check for an extra_bcc queue.

(Technically, that commit is not a regression because it does not slow
down the common case where a message is routed to a single target queue
because before that commit rabbit_channel:deliver_to_queues/3
used two ets:lookup/2 calls.)

However we can do better by avoiding the ets:lookup/2 for the common
case where there is no extra_bcc queue set.

One option is to use ets:lookup_element/3 to only fetch the queue
'options' field.

A better option (implemented in this commit) is determining whether to
send to an extra_bcc queue in the rabbit_channel and in the at-most
and at-least once dead lettering modules where the queue records
are already looked up.

This commit speeds up sending throughput by a few thousand messages per
second.
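
Conceptually (the options accessor and lookup helper below are
illustrative, not the exact code in this commit):

```
route_targets(Q) ->
    case maps:get(extra_bcc, amqqueue:get_options(Q), undefined) of
        undefined -> [Q];
        BccName   -> [Q | lookup_extra_bcc(BccName)]  %% hypothetical helper
    end.
```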
2022-05-11 07:31:30 +00:00
Arnaud Cogoluègnes 85b0625b6c
Address code review comments for stream SAC
References #3753
2022-05-09 10:52:38 +02:00
Arnaud Cogoluègnes f4e2a95e6c
Address code review comments for stream SAC
References #3753
2022-05-09 10:52:37 +02:00
Arnaud Cogoluègnes 8406e01297
Bump stream coordinator machine version to 3
References #3753
2022-05-09 10:52:35 +02:00
Arnaud Cogoluègnes bee4fcab11
Fix stream SAC coordinator unit test
It was broken after introducing the "connection label" field
for the list_stream_group_consumers CLI command.

References #3753
2022-05-09 10:52:31 +02:00
Arnaud Cogoluègnes 29b4b3e6be
Add unit test for SAC in super stream partition
References #3753
2022-05-09 10:52:28 +02:00
Arnaud Cogoluègnes 6b6953e948
Add unit for "simple" SAC group
"simple" meaning not part of a super stream.

References #3753
2022-05-09 10:52:28 +02:00
Arnaud Cogoluègnes d1fea82c80
Use helpers in stream SAC coordinator test
References #3753
2022-05-09 10:52:27 +02:00
Arnaud Cogoluègnes eeefd7c860
Monitor connection PIDs in stream SAC coordinator
References #3753
2022-05-09 10:52:27 +02:00
Arnaud Cogoluègnes f1876e212c
Fix test 2022-05-09 10:52:27 +02:00
Arnaud Cogoluègnes e02aa98405
Start to monitor connection PIDs in stream SAC coordinator
References #3753
2022-05-09 10:52:25 +02:00
Alex Valiushko 2945139ff9 Implement cat log file rotation 2022-05-06 13:03:15 -07:00
David Ansari c68ff2b070 Add tests for queue and consumer argument validation 2022-04-16 10:20:12 +02:00
David Ansari ef3c9cd526 Validate queue arguments
Throw a PRECONDITION_FAILED error when a queue is created with an
invalid argument.
There can be RabbitMQ plugins and extensions out there defining arbitrary
queue arguments. Therefore we only check that a queue is not declared
with arguments that we know are supported only by OTHER queue types.

The benefit of this change is that queues are not mistakenly declared
with the wrong arguments. For example, a stream should not be
declared with `x-dead-letter-exchange` because this argument is not
supported in streams. It is only supported by classic and quorum queues.
Instead of silently allowing this argument and leaving users wondering why the
stream does not dead-letter any messages, it's better to fail early with a
meaningful error message.

We still allow any other arbitrary queue arguments.
(Therefore, unfortunately, when `x-dead-letter-exchange` is misspelled,
it will still be accepted as an argument).

For argument equivalence, we only validate the arguments that are
supported for a queue type. There is no benefit in validating any other
arguments that are not supported anyways.
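
A sketch of the deny-list idea (the argument lists below are
illustrative, not exhaustive):

```
unsupported_args(stream) ->
    [<<"x-dead-letter-exchange">>, <<"x-max-priority">>];
unsupported_args(_QueueType) ->
    [].

check_args(QueueType, Args) ->
    Unsupported = unsupported_args(QueueType),
    case [K || {K, _T, _V} <- Args, lists:member(K, Unsupported)] of
        []  -> ok;
        Bad -> {error, {unsupported_arguments, Bad}}
    end.
```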
2022-04-16 10:20:12 +02:00
Michael Klishin 37a3448672
Merge pull request #4442 from rabbitmq/quorum-queue-leader-locator
Add quorum queue-leader-locator
2022-04-15 09:31:45 +04:00
David Ansari 367c8b7d1a Add test for random replica and leader selection 2022-04-12 17:03:42 +02:00
David Ansari eeb7bc98bc Skip leader_locator_client_local in mixed versions
because deleting a quorum queue in a mixed version cluster doesn't
delete the Ra server in time on the RabbitMQ node with the lower
version.

Therefore, this mixed versions test is skipped for the same reason that
test delete_declare is skipped.
2022-04-12 15:48:51 +02:00
David Ansari 597d2d36e4 Fix mixed version tests by enabling maintenance_mode_status
feature flag.

Feature flags are off by default for the RabbitMQ node(s) with the lower
version in mixed-version clusters.
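
For reference, a suite can turn such a flag on explicitly; a sketch
assuming the rabbit_ct_broker_helpers helper used across the test suites:

    %% Sketch: enable the feature flag on the test cluster before running
    %% the mixed-version test case.
    ok = rabbit_ct_broker_helpers:enable_feature_flag(
           Config, maintenance_mode_status).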
2022-04-12 11:02:17 +02:00
David Ansari 2cabc03fa2 Add test for importing existing queue
Reimporting a queue (i.e. same vhost and same name) with different properties
or queue arguments should be a no-op because it might be dangerous to
blindly override queue properties and arguments of existing queues.

(cherry picked from commit 3ae2befb30)
2022-04-11 17:05:45 +02:00
David Ansari 3ae2befb30 Add test for importing existing queue
Reimporting a queue (i.e. same vhost and same name) with different properties
or queue arguments should be a no-op because it might be dangerous to
blindly override queue properties and arguments of existing queues.
2022-04-11 15:58:13 +02:00
David Ansari f32e80c01c Convert random and least-leaders to balanced
Deprecate queue-leader-locator values 'random' and 'least-leaders'.
Both become value 'balanced'.

From now on, only the queue-leader-locator values 'client-local' and
'balanced' should be set.

'balanced' will place the leader on the node with the fewest leaders if
there are few queues, and will select a random leader if there are many
queues.
This avoids the expensive least-leaders calculation when there are many
queues.

This change also allows us to change the implementation of 'balanced' in
the future. For example 'balanced' could place a leader on a node
depending on resource usage or available node resources.

There is no need to expose implementation details like 'random' or
'least-leaders' as configuration to users.
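
A minimal sketch of that behaviour; the 1000-queue threshold and all
names here are illustrative assumptions, not the real implementation:

    %% Sketch: 'balanced' scans for the node with the fewest leaders only
    %% while the queue count is small, otherwise it picks a random node.
    leader_node(<<"client-local">>, LocalNode, _LeaderCounts, _QueueCount) ->
        LocalNode;
    leader_node(<<"balanced">>, _LocalNode, LeaderCounts, QueueCount)
      when QueueCount =< 1000 ->
        %% LeaderCounts :: #{node() => non_neg_integer()}
        {_Fewest, Node} = lists:min([{C, N} || {N, C} <- maps:to_list(LeaderCounts)]),
        Node;
    leader_node(<<"balanced">>, _LocalNode, LeaderCounts, _QueueCount) ->
        Nodes = maps:keys(LeaderCounts),
        lists:nth(rand:uniform(length(Nodes)), Nodes).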
2022-04-11 10:39:28 +02:00
Arnaud Cogoluègnes 4f4b3cbbb6
Handle machine upgrades incrementally in stream coordinator
Ra applies machine upgrades from any version to any version,
e.g. 0 to 2. This commit "fills in the gaps" in the stream coordinator
to make sure all 1-to-1 upgrades are applied, e.g. 0 to 1 and then
1 to 2 in the previous example.
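
Sketched against Ra's convention of applying a {machine_version, From, To}
command; upgrade_step/2 is an illustrative helper, not the actual code:

    %% Sketch: walk the versions one step at a time instead of jumping
    %% straight from FromVsn to ToVsn.
    apply(_Meta, {machine_version, FromVsn, ToVsn}, State0) ->
        State = lists:foldl(fun(Vsn, S) -> upgrade_step(Vsn, S) end,
                            State0,
                            lists:seq(FromVsn + 1, ToVsn)),
        {State, ok, []}.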

Fixes #4510
2022-04-11 10:04:39 +02:00
Michael Klishin 296566c305
Merge pull request #4463 from rabbitmq/unsupported-policies-queue-declare
Bugfix: check unsupported policies on queue.declare
2022-04-09 07:46:45 +04:00
Phil Kuryloski 36d707def2
Merge pull request #4440 from rabbitmq/use-rules_erlang-2.5.0
Updates for rules_erlang 2.5.0
2022-04-08 12:36:02 +02:00
dcorbacho 9bbfcdbc7a Bugfix: check unsupported policies on queue.declare
The amqqueue record does not exist when `rabbit_amqqueue:is_policy_applicable/2`
is called. It is available in `rabbit_policy`, so let's send it through.
2022-04-08 12:06:46 +02:00
Philip Kuryloski 2dd9bde891 Bring over PROJECT_APP_EXTRA_KEYS values from make to bazel 2022-04-07 17:39:33 +02:00
David Ansari 1315b1d4b1 Prefer running nodes for replica selection
When declaring a quorum queue or a stream, select its replicas in the
following order:
1. local RabbitMQ node (to have data locality for declaring client)
2. running RabbitMQ nodes
3. RabbitMQ nodes with the fewest quorum queue or stream replicas (to have a "balanced" RabbitMQ cluster).

From now on, quorum queues and streams behave the same way for replica
selection strategy and leader locator strategy.
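
A compact sketch of that ordering; function and variable names are
illustrative:

    %% Sketch: sort candidates so running nodes come first, then by
    %% existing replica count; the local node is always the head.
    select_members(N, LocalNode, AllNodes, RunningNodes, ReplicaCounts) ->
        Candidates = AllNodes -- [LocalNode],
        Keyed = [{not lists:member(Node, RunningNodes), %% false sorts first
                  maps:get(Node, ReplicaCounts, 0),
                  Node} || Node <- Candidates],
        Sorted = [Node || {_Stopped, _Count, Node} <- lists:sort(Keyed)],
        [LocalNode | lists:sublist(Sorted, N - 1)].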
2022-04-07 11:55:19 +02:00
Lajos Gerecs 772a660be6 Fix report crashing on priority queues
Special case the online property.

Implement a safer add function to avoid this in the future.

Fixes: #4405
2022-04-07 11:02:47 +02:00
David Ansari f903ef95cc Filter out drained nodes when selecting stream leader 2022-04-06 13:29:51 +02:00
David Ansari f4503fb8d8 Filter out drained nodes when selecting quorum queue leader 2022-04-06 13:29:50 +02:00
David Ansari 542f21506c Support quorum queue leader locator
Prior to this commit:
1. When a new quorum queue was created, the local node + random nodes
   were selected as replicas.
2. The local node always became the leader.

For example, when an AMQP client connects to a single RabbitMQ node and
creates N quorum queues, all N leaders will be on that node and replicas
are not evenly distributed across the RabbitMQ cluster.
If N is small and the RabbitMQ cluster has many nodes, some nodes might
not host any quorum queue replicas at all.

After this commit:
1. When a new quorum queue is created, the local node + the RabbitMQ nodes
   with the fewest quorum queue replicas are selected.
   This will nicely distribute the quorum queue replicas across the
   RabbitMQ cluster.
2. Support (x-)queue-leader-locator argument / policy with
    * client-local (stays the default)
    * random
    * least-leaders
    The same settings are already available for streams.
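
For example, with the Erlang AMQP 0-9-1 client the locator can be set per
queue at declaration time (the queue name is arbitrary; the records come
from amqp_client.hrl):

    %% Sketch: declare a quorum queue asking for the least-leaders strategy.
    Declare = #'queue.declare'{
                 queue     = <<"orders">>,
                 durable   = true,
                 arguments = [{<<"x-queue-type">>, longstr, <<"quorum">>},
                              {<<"x-queue-leader-locator">>, longstr,
                               <<"least-leaders">>}]},
    #'queue.declare_ok'{} = amqp_channel:call(Channel, Declare).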
2022-04-05 16:01:51 +02:00
Michael Klishin fa98a41549
Revert "Bump a few timeouts in a flakey suite"
This reverts commit fe2b79835b.

The issue turned out to be a legitimate bug in Ra,
addressed in ecb176b10c.
2022-03-24 18:20:24 +04:00
Michael Klishin fe2b79835b
Bump a few timeouts in a flakey suite 2022-03-24 14:47:51 +04:00
Michael Klishin c38a3d697d
Bump (c) year 2022-03-21 01:21:56 +04:00
Michael Klishin 03b7e90c85
Merge pull request #4272 from rabbitmq/rabbit_fifo_dlx_integration_SUITE-flake
Adjust assertions in an effort to reduce flakes
2022-03-18 18:13:29 +04:00
Loïc Hoguin 7198d4720b
Make backing_queue_SUITE fast on macOS
This very small patch requires an extended explanation. The patch
swaps two lines in a rabbit_variable_queue setup: one that sets
the memory hint to 0, which causes reduce_memory_usage to
always flush to disk and fsync; and another that publishes a
lot of messages to the queue, which will after that point be
manipulated further to get the queue into the exact right state
for the relevant tests.

The problem with calling reduce_memory_usage after every single
message has been published is not writing to disk (v2 tests do
not suffer from performance issues in that regard) but rather
that rabbit_queue_index will always flush its journal (containing
the one message), which results in opening the segment file,
appending to it, and closing it. The file handling is done
by file_handle_cache which, in this case, will always fsync
the data before closing the file. It is this one fsync
per message that makes the relevant tests very slow.

By swapping the lines, meaning we publish all messages first
and then set the memory hint to 0, we end up with a single
reduce_memory_usage call that results in one fsync at the
end. (There may be other fsyncs as part of normal operations.)
We still get the same result because all messages will have
been flushed to disk, only this time in far fewer operations.

This doesn't seem to have been causing problems on CI, which
already runs the tests very fast, but it should help macOS and
possibly other development environments.
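
In sketch form, with illustrative helper names, the swap amounts to:

    %% Before: one journal flush + fsync per published message.
    %%   VQ1 = set_memory_hint(0, VQ0),
    %%   VQ2 = publish_all(Msgs, VQ1).
    %% After: publish everything first, flush (and fsync) once.
    init_queue_state(Msgs, VQ0) ->
        VQ1 = publish_all(Msgs, VQ0),   %% no per-message fsync here
        set_memory_hint(0, VQ1).        %% single reduce_memory_usage flush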
2022-03-18 13:26:57 +01:00
Philip Kuryloski 8ccf7415f7 Adjust assertions in an effort to reduce flakes 2022-03-15 10:46:19 +01:00
Arnaud Cogoluègnes 6ab1158633
Re-issue monitors to clean up stale listeners
References #4133
2022-03-09 15:37:37 +01:00
Arnaud Cogoluègnes df61e44a0b
Fix stream coordinator v1-to-v2 machine state change
Monitors were not converted.

References #4133
2022-03-09 12:46:35 +01:00
Arnaud Cogoluègnes 824bf6aa67
Filter a stream queue test in mixed-version cluster mode
Unlikely to pass as it randomly stops a node, which
can be the stream coordinator leader. The 2 remaining nodes
then cannot elect a new leader because they don't have
the same version.

References #4133
2022-03-08 18:20:16 +01:00
Arnaud Cogoluègnes e6a2670bf5
Skip stream queue tests in mixed-version cluster testing
These tests require a level of availability that mixed-version
clusters cannot provide, so they are skipped under these
conditions.

References #4133
2022-03-08 17:40:12 +01:00
Arnaud Cogoluègnes 7c47f004b4
Use node with latest machine version to connect
References #4133
2022-03-08 17:40:11 +01:00
Arnaud Cogoluègnes 218a0ba2d2
Skip 2 stream queue tests in mixed-version cluster testing
These 2 tests can fail in mixed-version, 2-node cluster testing. If
the stream coordinator leader ends up on the lower version, it
does not contain the fixes and the tests fail.

With this commit, the 2 tests are skipped under the appropriate
conditions.

References #4133
2022-03-08 17:40:11 +01:00
Arnaud Cogoluègnes 636bb55723
Bump machine version from 1 to 2 in stream coordinator
A version bump is necessary because of the state changes made to
handle local member listeners.
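
The version is advertised through the optional ra_machine callbacks,
roughly as follows:

    %% Sketch: bump the advertised machine version; which_module/1 keeps
    %% all versions served by the same module here.
    version() -> 2.

    which_module(_Version) -> ?MODULE.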

References #4133
2022-03-08 17:40:10 +01:00
Arnaud Cogoluègnes ae1684efe3
Delete queue at end of test 2022-03-08 17:40:10 +01:00