Commit Graph

3094 Commits

Author SHA1 Message Date
Diana Parra Corbacho ea6ef17cc0 Mqtt: test close connection 2024-08-27 16:44:18 +02:00
Michael Davis 5b3ae230b7
Merge pull request #12082 from rabbitmq/md/khepri/db-queue-deletion 2024-08-27 07:47:06 -05:00
David Ansari 55e6d582c4 Increase default rabbit.max_link_credit
from 128 to 170. See comments for rationale.

On an Ubuntu box, run
```
quiver //host.docker.internal//queues/my-quorum-queue --durable --count 100k --duration 10m --body-size 12 --credit 10000
```

Before this commit:
```
RESULTS

Count ............................................... 100,000 messages
Duration ............................................... 11.0 seconds
Sender rate ........................................... 9,077 messages/s
Receiver rate ......................................... 9,097 messages/s
End-to-end rate ....................................... 9,066 messages/s
```

After this commit:
```
RESULTS

Count ............................................... 100,000 messages
Duration ................................................ 6.2 seconds
Sender rate .......................................... 16,215 messages/s
Receiver rate ........................................ 16,271 messages/s
End-to-end rate ...................................... 16,166 messages/s
```

That's because more `#enqueue{}` Ra commands can be batched before
fsyncing.

So, this commit brings the performance of the scenario "a single connection publishing to
a quorum queue with a large number (>200) of unconfirmed publishes" in AMQP 1.0
closer to that of AMQP 0.9.1.
2024-08-27 12:08:46 +02:00
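The link-credit default described above lives under the `rabbit` application environment. A hedged `advanced.config` sketch for overriding it (the key name comes from this commit; the value here is only illustrative, e.g. to restore the previous default):

``` erl
[
  {rabbit, [
      %% Pre-4.0 default was 128; 4.0 raises it to 170.
      {max_link_credit, 128}
  ]}
].
```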
Michael Klishin 6b444ae907 Exclude this Khepri-specific test from mixed version cluster runs 2024-08-24 21:54:25 -04:00
Michael Klishin f47daee915 Wording #12113 2024-08-24 19:07:09 -04:00
Michal Kuratczyk 6ca2022fcf await quorum+1 improvements
1. If khepri_db is enabled, rabbitmq_metadata is a critical component
2. When waiting for quorum+1, periodically log what doesn't have the
   quorum+1
   - for components: just list them
   - for queues: list how many we are waiting for and how to display
     them (because there could be a large number, logging that
     could be impractical or even dangerous)
3. Make the tests significantly faster by using a single group
2024-08-24 18:49:35 -04:00
Michael Klishin 96fc028352 Add a type spec 2024-08-24 18:25:44 -04:00
Michael Klishin c41c27de06 One more node-wide DQT test
References #11541 #11457 #11528
2024-08-24 05:50:20 -04:00
Michael Klishin 29051a8113 DQT: fall back to node-wide default
when virtual host does not have any metadata.

References #11541 #11457 #11528
2024-08-24 04:03:04 -04:00
Péter Gömöri 34bcb91159 Prevent exchange logging crash
Don't let the `log` callback of exchange_logging handler crash,
because in case of a crash OTP logger removes the exchange_logger
handler, which in turn deletes the log exchange and its bindings.

It was seen several times in production that the log exchange suddenly
disappears, and without debug logging there is no trace of why.

With this commit `erlang:display` will print the reason and stacktrace
to stderr without using the logging infrastructure.
2024-08-23 00:28:10 +02:00
Michael Davis 4a8d01e79b
Handle rabbit_amqqueue:internal_delete/2 failures in quorum queues 2024-08-22 12:18:45 -04:00
Michael Davis 2302eb9a11
Handle rabbit_amqqueue:internal_delete/3 failures in classic queues
The design of `rabbit_amqqueue_process` makes this change challenging.
The old implementation of the handler of the `{delete,_,_,_}` command
simply stopped the process and any cleanup was done in `gen_server2`'s
`terminate` callback. This makes it impossible to pass any error back
to the caller if the record can't be deleted from the metadata store
before a timeout.

The strategy taken here slightly mirrors an existing
`{shutdown, missing_owner}` termination value which can be returned from
`init_it2/3`. We pass the `ReplyTo` for the call with the state. We then
optionally reply to this `ReplyTo` if it is set in `terminate_delete/4`
with the result of `rabbit_amqqueue:internal_delete/3`. So deletion of
a classic queue will terminate the process but may return an error to
the caller if the record can't be removed from the metadata store
before the timeout.
2024-08-22 12:17:44 -04:00
Michael Klishin 39679f58d9
Merge pull request #12073 from rabbitmq/osiris-1.8.3
Osiris v1.8.3
2024-08-22 12:17:37 -04:00
Diana Parra Corbacho 0061944e9c Cancel AMQP stream consumer when local stream member is deleted
The consumer reader process is gone and there is no way to recover
it as the node does not have a member of the stream anymore,
so it should be cancelled/detached.
2024-08-22 12:39:52 +02:00
Jean-Sébastien Pédron 363cc8586c
rabbit_khepri: Set `default_ra_system` Khepri setting
[Why]
It allows restarting Khepri using `khepri:start()`, e.g. from a shell.
2024-08-22 12:18:19 +02:00
Michael Davis a7d099de8c
cluster_minority_SUITE: Add a case for queue deletion 2024-08-21 16:23:48 -04:00
Michael Davis 0bb203e769
rabbit_db_queue: Add timeout error to delete/2 spec 2024-08-21 16:23:48 -04:00
Michael Davis 9774d8d833
minor: Use rabbit_misc:rs/1 formatting for stream delete failure msg
`rabbit_misc:rs/1` formats a string "queue {name} in vhost {vhost}" so
the "queue" and single quotes in the prior message can be removed.
2024-08-21 15:21:26 -04:00
Karl Nilsson baa64102fd Osiris v1.8.3
This release contains fixes around certain recovery failures where
there are either orphaned segment files (that do not have a corresponding
index file) or index files that do not have a corresponding segment
file.
2024-08-21 08:48:58 +01:00
Jean-Sébastien Pédron 20f2850875
rabbit_db_exchange: List exchange names from Khepri projection
[Why]
All other queries are based on projections, not direct queries to
Khepri. Using projections for exchange names should be faster and more
consistent with the rest of the module.

[How]
The Khepri query is replaced by an ETS query.
2024-08-20 17:35:34 +02:00
David Ansari 1c6f4be308 Rename quorum queue priority from "low" to "normal"
Rename the two quorum queue priority levels from "low" and "high" to "normal" and
"high". This improves the user experience because the default priority level is low /
normal. Prior to this commit, users were confused about why their messages showed
up as low priority. Furthermore, there is no need to consult the docs to
know whether the default priority level is low or high.
2024-08-20 11:18:36 +02:00
David Ansari b105ca9877 Remove randomized_startup_delay_range config
For RabbitMQ 4.0, this commit removes support for the deprecated `rabbitmq.conf` settings
```
cluster_formation.randomized_startup_delay_range.min
cluster_formation.randomized_startup_delay_range.max
```

The rabbitmq/cluster-operator already removed these settings in
b81e0f9bb8
2024-08-19 14:34:32 +02:00
David Ansari 314ff387b1 Build map more efficiently
Call maps:from_list/1 once instead of iteratively adding key/value
associations to the map.
2024-08-19 12:09:20 +02:00
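The change amounts to collecting all key/value pairs first and constructing the map in one call. A minimal sketch (function and variable names are illustrative, not taken from the commit):

``` erl
%% Before: build the map by adding one association at a time,
%% allocating an intermediate map per key.
build_iteratively(Pairs) ->
    lists:foldl(fun({K, V}, Acc) -> maps:put(K, V, Acc) end, #{}, Pairs).

%% After: construct the whole map in a single call.
build_at_once(Pairs) ->
    maps:from_list(Pairs).
```

Both functions return the same map for a given list of pairs; the second avoids the intermediate allocations.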
Michael Davis 49c645a076
Fix rabbit_db_queue_SUITE:update_decorators case
This test called `rabbit_db_queue:update_decorators/1` which doesn't
exist - instead it can call `update_decorators/2` with an empty list.
This commit also adds the test to the `all_tests/0` list; its absence
is why this wasn't caught before.
2024-08-16 13:27:29 -04:00
Michael Davis f80cd7d477
rabbit_db_queue: Remove unused `set_many/1`
This function was only used by classic mirrored queue code which was
removed in 3bbda5b.
2024-08-16 13:26:37 -04:00
Michael Klishin 7121b802e4
Merge pull request #12026 from rabbitmq/maintenance-revive-fixes
Fixes to rabbit_maintenance:revive/0
2024-08-16 12:15:21 -04:00
Michael Klishin f1d51e19f4
Merge pull request #12032 from rabbitmq/sasl-mechanisms-order
Maintain order of configured SASL mechanisms
2024-08-16 12:13:32 -04:00
David Ansari b6fbc0292a Maintain order of configured SASL mechanisms
RabbitMQ should advertise the SASL mechanisms in the order as
configured in `rabbitmq.conf`.

Starting RabbitMQ with the following `rabbitmq.conf`:
```
auth_mechanisms.1 = PLAIN
auth_mechanisms.2 = AMQPLAIN
auth_mechanisms.3 = ANONYMOUS
```

translates prior to this commit to:
```
1> application:get_env(rabbit, auth_mechanisms).
{ok,['ANONYMOUS','AMQPLAIN','PLAIN']}
```

and after this commit to:
```
1> application:get_env(rabbit, auth_mechanisms).
{ok,['PLAIN','AMQPLAIN','ANONYMOUS']}
```

In our 4.0 docs we write:
> The server mechanisms are ordered in decreasing level of preference.

which complies with https://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-security-v1.0-os.html#type-sasl-mechanisms
2024-08-16 14:38:36 +02:00
Karl Nilsson c12d5a1b17
Merge pull request #11937 from rabbitmq/default-delivery-limit
QQ: introduce a default delivery limit
2024-08-16 12:55:46 +01:00
David Ansari 3e7f5a00e2 Fix AMQP 1.0 SASL CR Demo
```
switch_callback(State1, {frame_header, sasl}, 8);
```
was missing.

Tidy up various other small things.
2024-08-16 13:24:49 +02:00
Karl Nilsson 3a386f46d2 Show delivery-count on queue page for quorum queues.
To make it more visible that a default is in place.

Also added publisher count as it was easy to do so.
2024-08-16 10:32:45 +01:00
Karl Nilsson 2dcced6967 Maintenance mode: change revive to use quorum queue recovery function.
As this already does the job.
2024-08-16 10:05:53 +01:00
Karl Nilsson 8b2fccc659 Fix rabbit_amqqueue:list_local_followers/1
To ensure it only returns followers for queues that actually have
a local member.
2024-08-16 09:35:10 +01:00
Karl Nilsson daecdb07c2 QQ: introduce a delivery_limit default
If the delivery_limit of a quorum queue is not set via a queue argument and/or
policy, it now defaults to 20.
2024-08-16 08:58:44 +01:00
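Operators who want a different limit can still set one explicitly. A hedged example using a policy (the policy name and queue-name pattern here are illustrative):

```
rabbitmqctl set_policy delivery-limit "^qq\." '{"delivery-limit": 50}' --apply-to quorum_queues
```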
GitHub 3e9cb1ed1b bazel run gazelle 2024-08-16 04:02:25 +00:00
Michael Klishin 178f9a962e
Merge pull request #11964 from rabbitmq/qq-checkpointing-tweaks
QQ: checkpointing frequency improvements
2024-08-15 20:49:24 -04:00
Michael Klishin 1fb70c7e95 Correct a couple of doc guide links 2024-08-15 16:04:46 -04:00
Michael Davis 9ca77f8efe
Remove max_in_memory_length/bytes from QQ config type
Also remove a resolved TODO about conversion for the `last_checkpoint`
field.
2024-08-15 15:44:28 -04:00
Michael Davis 140abd871a
Merge pull request #11980 from rabbitmq/md/khepri-minority-errors/queue-declaration 2024-08-15 14:26:08 -05:00
Michael Klishin 2058f449a1
Merge pull request #11999 from rabbitmq/sasl-anon
Add SASL mechanism ANONYMOUS
2024-08-15 13:12:41 -04:00
Michael Klishin 2f165e02f2 rabbitmq-upgrade revive: handle more errors
returned by Ra, e.g. when a replica cannot be
restarted because of a concurrent delete
or because a QQ was inserted into a schema data
store but not yet registered as a process on
the node.

References #12013.
2024-08-15 10:02:02 -04:00
David Ansari b09f2d4da3 Save a Cuttlefish translation 2024-08-15 15:00:09 +02:00
David Ansari ba14b158af Remove mqtt.default_user and mqtt.default_pass
This commit is a breaking change in RabbitMQ 4.0.

 ## What?
Remove mqtt.default_user and mqtt.default_pass
Instead, rabbit.anonymous_login_user and rabbit.anonymous_login_pass
should be used.

 ## Why?
RabbitMQ 4.0 simplifies anonymous logins.
There should be a single configuration place
```
rabbit.anonymous_login_user
rabbit.anonymous_login_pass
```
that is used for anonymous logins for any protocol.

Anonymous login is orthogonal to the protocol the client uses.
Hence, there should be a single configuration place which can then be
used for MQTT, AMQP 1.0, AMQP 0.9.1, and RabbitMQ Stream protocol.

This will also simplify switching to SASL for MQTT 5.0 in the future.
2024-08-15 10:58:48 +00:00
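In `rabbitmq.conf` terms, the single configuration place described above would look like this (a sketch; the commit names the `rabbit` application keys, and the conf key spelling is assumed to mirror them):

``` ini
anonymous_login_user = guest
anonymous_login_pass = guest
```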
David Ansari d46f07c0a4 Add SASL mechanism ANONYMOUS
## 1. Introduce new SASL mechanism ANONYMOUS

 ### What?
Introduce a new `rabbit_auth_mechanism` implementation for SASL
mechanism ANONYMOUS called `rabbit_auth_mechanism_anonymous`.

 ### Why?
As described in AMQP section 5.3.3.1, ANONYMOUS should be used when the
client doesn't need to authenticate.

Introducing a new `rabbit_auth_mechanism` consolidates and simplifies how anonymous
logins work across all RabbitMQ protocols that support SASL. This commit
therefore allows AMQP 0.9.1, AMQP 1.0, and stream clients to connect to
RabbitMQ out of the box without providing any username or password.

Today's AMQP 0.9.1 and stream protocol client libraries hard code the RabbitMQ
default credentials `guest:guest`, for example in:
* 0215e85643/src/main/java/com/rabbitmq/client/ConnectionFactory.java (L58-L61)
* ddb7a2f068/uri.go (L31-L32)

Hard coding RabbitMQ specific default credentials in dozens of different
client libraries is an anti-pattern in my opinion.
Furthermore, there are various AMQP 1.0 and MQTT client libraries which
we do not control or maintain and which still should work out of the box
when a user is getting started with RabbitMQ (that is without
providing `guest:guest` credentials).

 ### How?
The old RabbitMQ 3.13 AMQP 1.0 plugin `default_user`
[configuration](146b4862d8/deps/rabbitmq_amqp1_0/Makefile (L6))
is replaced with the following two new `rabbit` configurations:
```
{anonymous_login_user, <<"guest">>},
{anonymous_login_pass, <<"guest">>},
```
We call it `anonymous_login_user` because this user will be used for
anonymous logins. The subsequent commit uses the same setting for
anonymous logins in MQTT. Hence, this user is orthogonal to the protocol
used when the client connects.

Setting `anonymous_login_pass` could have been left out.
This commit decides to include it because our documentation has so far
recommended:
> It is highly recommended to pre-configure a new user with a generated username and password or delete the guest user
> or at least change its password to reasonably secure generated value that won't be known to the public.

By having the new module `rabbit_auth_mechanism_anonymous` internally
authenticate with `anonymous_login_pass` instead of blindly allowing
access without any password, we protect operators that relied on the
sentence:
> or at least change its password to reasonably secure generated value that won't be known to the public

To ease the getting started experience, since RabbitMQ already deploys a
guest user with full access to the default virtual host `/`, this commit
also allows SASL mechanism ANONYMOUS in `rabbit` setting `auth_mechanisms`.

In production, operators should disable SASL mechanism ANONYMOUS by
setting `anonymous_login_user` to `none` (or by removing ANONYMOUS from
the `auth_mechanisms` setting). This will be documented separately.
Even if operators forget or don't read the docs, this new ANONYMOUS
mechanism won't do any harm because it relies on the default user name
`guest` and password `guest`, which is recommended against in
production, and who by default can only connect from the local host.

 ## 2. Require SASL security layer in AMQP 1.0

 ### What?
An AMQP 1.0 client must use the SASL security layer.

 ### Why?
This is in line with the mandatory usage of SASL in AMQP 0.9.1 and
RabbitMQ stream protocol.
Since (presumably) any AMQP 1.0 client knows how to authenticate with a
username and password using SASL mechanism PLAIN, any AMQP 1.0 client
also (presumably) implements the trivial SASL mechanism ANONYMOUS.

Skipping SASL is not recommended in production anyway.
By requiring SASL, configuration for operators becomes easier.
Following the principle of least surprise, when an operator
configures `auth_mechanisms` to exclude `ANONYMOUS`, anonymous logins
will be prohibited in SASL and also by disallowing skipping the SASL
layer.

 ### How?
This commit implements AMQP 1.0 figure 2.13.

A follow-up commit needs to be pushed to `v3.13.x` which will use SASL
mechanism `anon` instead of `none` in the Erlang AMQP 1.0 client
such that AMQP 1.0 shovels running on 3.13 can connect to 4.0 RabbitMQ nodes.
2024-08-15 10:58:48 +00:00
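For production, the lockdown described above could look like this in `rabbitmq.conf` (a sketch; exact key spellings assumed from the commit message):

``` ini
# Disable anonymous logins entirely
anonymous_login_user = none

# ...or keep the user but stop advertising ANONYMOUS
auth_mechanisms.1 = PLAIN
auth_mechanisms.2 = AMQPLAIN
```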
Karl Nilsson 0f1f27c1dd QQ: adjust checkpointing algo to something more like
it was in 3.13.x.

Also add a force_checkpoint aux command that the purge operation
emits - this can also be used to try to force a checkpoint.
2024-08-15 11:54:18 +01:00
Michael Davis 8eef209791
Handle database timeouts in `rabbit_amqqueue:store_queue/1` 2024-08-14 15:11:28 -04:00
Michael Klishin 8fa7f3add0 Document man page sync with the new website 2024-08-14 12:53:51 -04:00
Michael Klishin 242b2243bb First man page updates for 4.0 2024-08-14 12:35:12 -04:00
Michael Klishin 8ef8d18f5f
Merge pull request #11986 from rabbitmq/amqplain
Restrict username and password in AMQPLAIN
2024-08-13 21:33:46 -04:00
Michael Klishin dad09e6123
Merge pull request #11989 from rabbitmq/mk-encrypted-values-in-rabbitmq-conf
Make it possible to specify encrypted values in rabbitmq conf
2024-08-13 18:48:31 -04:00
Michael Klishin 8b90d4a27c Allow for tagged values for a few more rabbitmq.conf settings 2024-08-13 16:27:00 -04:00
Michael Davis 3f734ef560
Handle timeouts in transient queue deletion
Transient queue deletion previously caused a crash if Khepri was enabled
and a node with a transient queue went down while its cluster was in a
minority. We need to handle the `{error,timeout}` return possible from
`rabbit_db_queue:delete_transient/1`. In the
`rabbit_amqqueue:on_node_down/1` callback we log a warning when we see
this return.

We then try this deletion again during that node's
`rabbit_khepri:init/0` which is called from a boot step after
`rabbit_khepri:setup/0`. At that point we can return an error and halt
the node's boot if the command times out. The cluster is very likely to
be in a majority at that point since `rabbit_khepri:setup/0` waits for
a leader to be elected (requiring a majority).

This fixes a crash report found in the `cluster_minority_SUITE`'s
`end_per_group`.
2024-08-13 11:40:18 -04:00
Michael Davis 0dd26f0c52
rabbit_db_queue: Transactionally delete transient queues from Khepri
The prior code skirted transactions because the filter function might
cause Khepri to call itself. We want to use the same idea as the old
code - get all queues, filter them, then delete them - but we want to
perform the deletion in a transaction and fail the transaction if any
queues changed since we read them.

This fixes a bug - that the call to `delete_in_khepri/2` could return
an error tuple that would be improperly recognized as `Deletions` -
but should also make deleting transient queues atomic and fast.
Each call to `delete_in_khepri/2` needed to wait on Ra to replicate
because the deletion is an individual command sent from one process.
Performing all deletions at once means we only need to wait for one
command to be replicated across the cluster.

We also bubble up any errors to delete now rather than storing them as
deletions. This fixes a crash that occurs on node down when Khepri is
in a minority.
2024-08-13 11:40:18 -04:00
Michael Klishin 1c7e590495 Initial encrypted value support for rabbitmq.conf
This makes it possible to specify an encrypted
value in rabbitmq.conf using a prefix.

For example, to specify a default user password
as an encrypted value:

``` ini
default_user = bunnies-444
default_pass = encrypted:F/bjQkteQENB4rMUXFKdgsJEpYMXYLzBY/AmcYG83Tg8AOUwYP7Oa0Q33ooNEpK9
```

``` erl
[
  {rabbit, [
      {config_entry_decoder, [
             {passphrase, <<"bunnies">>}
       ]}
    ]}
].
```
2024-08-13 10:34:52 -04:00
David Ansari 29437d0344 Restrict username and password in AMQPLAIN
Restrict both username and password in SASL mechanism AMQPLAIN to be a
binary.
2024-08-13 14:11:58 +02:00
Michael Davis 8889d40a92
Handle database timeouts when declaring queues
This fixes a case-clause crash in the logs in `cluster_minority_SUITE`.
When the database is not available `rabbit_amqqueue:declare/6,7` should
return a `protocol_error` record with an error message rather than a
hard crash. Also included in this change is the necessary changes to
typespecs: `rabbit_db_queue:create_or_get/1` is the first function to
return a possible `{error,timeout}`. That bubbles up through
`rabbit_amqqueue:internal_declare/3` and must be handled in each
`rabbit_queue_type:declare/2` callback.
2024-08-12 16:16:57 -04:00
Michael Davis f60a9b5e57
minor: Clean up error message for failure to declare stream queue
`rabbit_misc:rs/1` for a queue resource will print
`queue '<QName>' in vhost '<VHostName>'` so the "a queue" and
surrounding single quotes should be removed here.
2024-08-12 16:16:57 -04:00
Michael Davis d3752c4aaa
minor: Correct outdated spec for rabbit_amqqueue:lookup/1
The clause of the spec that allowed passing a list of queue name
resources is out of date: the guard prevents a list from ever matching.
2024-08-12 16:16:39 -04:00
Michael Davis d0da0b556a
Move Khepri DB init to `rabbit_khepri:init/0` 2024-08-12 14:16:50 -04:00
Michael Davis 053c871ffc
rabbit_db: Lower log level of Khepri members log line 2024-08-12 14:10:15 -04:00
David Ansari 10a309d82f
Log AMQP connection name and container-id (#11975)
* Log AMQP connection name and container-id

Fixes #11958

 ## What
Log container-id and connection name.
Example JSON log:
```
{"time":"2024-08-12 10:49:44.365724+02:00","level":"info","msg":"accepting AMQP connection [::1]:56754 -> [::1]:5672","pid":"<0.1164.0>","domain":"rabbitmq.connection"}
{"time":"2024-08-12 10:49:44.381244+02:00","level":"debug","msg":"User 'guest' authenticated successfully by backend rabbit_auth_backend_internal","pid":"<0.1164.0>","domain":"rabbitmq","connection":"[::1]:56754 -> [::1]:5672"}
{"time":"2024-08-12 10:49:44.381578+02:00","level":"info","msg":"AMQP 1.0 connection from container 'my container ID': user 'guest' authenticated and granted access to vhost '/'","pid":"<0.1164.0>","domain":"rabbitmq.connection","connection":"[::1]:56754 -> [::1]:5672","container_id":"my container ID"}
{"time":"2024-08-12 10:49:44.381654+02:00","level":"debug","msg":"AMQP 1.0 connection.open frame: hostname = localhost, extracted vhost = /, idle-time-out = {uint,\n                                                                                            30000}","pid":"<0.1164.0>","domain":"rabbitmq","connection":"[::1]:56754 -> [::1]:5672","container_id":"my container ID"}
{"time":"2024-08-12 10:49:44.386412+02:00","level":"debug","msg":"AMQP 1.0 created session process <0.1170.0> for channel number 0","pid":"<0.1164.0>","domain":"rabbitmq","connection":"[::1]:56754 -> [::1]:5672","container_id":"my container ID"}

{"time":"2024-08-12 10:49:46.387957+02:00","level":"debug","msg":"AMQP 1.0 closed session process <0.1170.0> with channel number 0","pid":"<0.1164.0>","domain":"rabbitmq","connection":"[::1]:56754 -> [::1]:5672","container_id":"my container ID"}
{"time":"2024-08-12 10:49:46.388201+02:00","level":"info","msg":"closing AMQP connection ([::1]:56754 -> [::1]:5672)","pid":"<0.1164.0>","domain":"rabbitmq.connection","connection":"[::1]:56754 -> [::1]:5672","container_id":"my container ID"}
```

If JSON logging is not used, this commit still includes the container-ID
once at info level:
```
2024-08-12 10:48:57.451580+02:00 [info] <0.1164.0> accepting AMQP connection [::1]:56715 -> [::1]:5672
2024-08-12 10:48:57.465924+02:00 [debug] <0.1164.0> User 'guest' authenticated successfully by backend rabbit_auth_backend_internal
2024-08-12 10:48:57.466289+02:00 [info] <0.1164.0> AMQP 1.0 connection from container 'my container ID': user 'guest' authenticated and granted access to vhost '/'
2024-08-12 10:48:57.466377+02:00 [debug] <0.1164.0> AMQP 1.0 connection.open frame: hostname = localhost, extracted vhost = /, idle-time-out = {uint,
2024-08-12 10:48:57.466377+02:00 [debug] <0.1164.0>                                                                                             30000}
2024-08-12 10:48:57.470800+02:00 [debug] <0.1164.0> AMQP 1.0 created session process <0.1170.0> for channel number 0

2024-08-12 10:48:59.472928+02:00 [debug] <0.1164.0> AMQP 1.0 closed session process <0.1170.0> with channel number 0
2024-08-12 10:48:59.473332+02:00 [info] <0.1164.0> closing AMQP connection ([::1]:56715 -> [::1]:5672)
```

 ## Why?
See #11958 and https://www.rabbitmq.com/docs/connections#client-provided-names

To provide a similar feature to AMQP 0.9.1 this commit uses container-id as sent by the client in the open frame.
> Examples of containers are brokers and client applications.

The advantage is that the `container-id` is mandatory. Hence, in AMQP 1.0, we can enforce the desired behaviour that we document on our website for AMQP 0.9.1:
> The name is optional; however, developers are strongly encouraged to provide one as it would significantly simplify certain operational tasks.

* Clarify that container refers to AMQP 1.0

Rename container_id to amqp_container and change the log message such that
it's unambiguous that the word "container" refers to AMQP 1.0 containers
(to reduce confusion with the meaning of "container" in Docker / Kubernetes).
2024-08-12 18:41:25 +02:00
GitHub 0cdd894f81 bazel run gazelle 2024-08-10 04:02:30 +00:00
Michael Davis 543bf76a74
Add `cluster_upgrade_SUITE` to check mixed-version upgrades
This suite uses the mixed version secondary umbrella as a starting
version for a cluster and then has a helper to upgrade the cluster to
the current code. This is meant to ensure that we can upgrade from the
previous minor.
2024-08-09 16:23:35 -04:00
GitHub 84be037e73 bazel run gazelle 2024-08-09 04:02:26 +00:00
David Ansari 28bd6d45dc Store incoming max_message_size in #incoming_link{}
This keeps functions pure and ensures that existing links do not break
if an operator were to dynamically change the server's max_message_size.

Each link now has a max_message_size:
* incoming links as determined by RabbitMQ config
* outgoing links as determined by the client
2024-08-08 18:21:21 +02:00
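The idea above, reading the server limit once when the link attaches so that later config changes cannot affect established links, can be sketched as follows (record shape and default are illustrative, not the actual module):

``` erl
-record(incoming_link, {max_message_size :: pos_integer() | unlimited}).

attach_incoming_link() ->
    %% Read the configured limit once; it is then fixed for the
    %% lifetime of this link, keeping later size checks pure.
    Max = application:get_env(rabbit, max_message_size, unlimited),
    #incoming_link{max_message_size = Max}.
```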
David Ansari 3e708bc99a Avoid persistent_term for credit config
Put credit configuration into session state to make functions pure.
Although these credit configurations are not meant to be dynamically
changed at runtime, prior to this commit persistent_term:get/1 could
return different results across invocations, leading to bugs in how
credit is granted and recorded.
2024-08-08 18:21:21 +02:00
David Ansari aeedad7b51 Fix test flake
Prior to this commit, test
```
ERL_AFLAGS="+S 2" make -C deps/rabbit ct-amqp_client t=cluster_size_3:detach_requeues_two_connections_quorum_queue
```
failed rarely locally, and more often in CI.
An instance of a failed test in CI is
https://github.com/rabbitmq/rabbitmq-server/actions/runs/10298099899/job/28502687451?pr=11945

The test failed with:
```
=== === Reason: {assertEqual,[{module,amqp_client_SUITE},
                               {line,2800},
                               {expression,"amqp10_msg : body ( Msg1 )"},
                               {expected,[<<"1">>]},
                               {value,[<<"2">>]}]}
  in function  amqp_client_SUITE:detach_requeues_two_connections/2 (amqp_client_SUITE.erl, line 2800)
```
because it could happen that Receiver1's credit top up to the quorum
queue is applied before Receiver0's credit top up such that Receiver1
gets enqueued to the ServiceQueue before Receiver0.
2024-08-08 14:20:05 +02:00
Karl Nilsson 194d4ba2f5
Quorum queues v4 (#10637)
This commit contains the following new quorum queue features:

* Fair share high/low priorities
* SAC consumers honour consumer priorities
* Credited consumer refactoring to meet AMQP requirements
* Use the checkpoints feature to reduce memory use for queues with long backlogs
* Consumer cancel option that immediately removes the consumer and returns all pending messages
* More compact versions of the most common commands such as enqueue, settle and credit
* Correctly track the delivery-count to be compatible with the AMQP spec
* Better support for the "modified" AMQP 1.0 outcome

Commits:

* Quorum queues v4 scaffolding.

Create the new version but not including any changes yet.

QQ: force delete followers after leader has terminated.

Also try a longer sleep for mqtt_shared_SUITE so that the
delete operation stands a chance to time out and move on
to the forced deletion stage.

In some mixed machine version scenarios some followers will never
apply the poison pill command so we may as well force delete them
just in case.

QQ: skip test in amqp_client that cannot pass with mixed machine versions

QQ: remove dead code

Code relating to prior machine versions and state conversions.

rabbit_fifo_prop_SUITE fixes

* QQ: add v4 ff and new more compact enqueue command.

Also update rabbit_fifo_* suites to test more relevant code versions
where applicable.

QQ: always use the updated credit mode format

QQv4: use more compact consumer reference in settle, credit, return

This introduces a new type: consumer_key(), which is either the consumer_id
or the raft index the checkout was processed at. If the consumer is
using one of the updated credit spec formats rabbit_fifo will use the
raft index as the primary key for the consumer such that the rabbit
fifo client can then use the more space efficient integer index
instead of the full consumer id in subsequent commands.

There is compatibility code to still accept the consumer id in
settle, return, discard and credit commands, but this is slightly
slower and of course less space efficient.

The old form will be used in cases where the fifo client may have
already removed the local consumer state (as happens after a cancel).

Lots of test refactorings of the rabbit_fifo_SUITE to begin to use
the new forms.

* More test refactoring and new API fixes

rabbit_fifo_prop_SUITE refactoring and other fixes.


* First pass SAC consumer priority implementation.

Single active consumers will be activated if they have a higher priority
than the currently active consumer. If the currently active consumer
has pending messages, no further messages will be assigned to the
consumer and the activation of the new consumer will happen once
all pending messages are settled. This is to ensure processing order.

Consumers with the same priority will internally be ordered to
favour those with credit then those that attached first.

QQ: add SAC consumer priority integration tests

QQ: add check for ff in tests

* QQ: add new consumer cancel option: 'remove'

This option immediately removes and returns all messages for a
consumer instead of the softer 'cancel' option which keeps the
consumer around until all pending messages have been either
settled or returned.

This involves a change to the rabbit_queue_type:cancel/5 API
to rabbit_queue_type:cancel/3.

* QQ: capture checked out time for each consumer message.

This will form the basis for queue initiated consumer timeouts.

* QQ: Refactor to use the new ra_machine:handle_aux/5 API

Instead of the old ra_machine:handle_aux/6 callback.

* QQ hi/lo priority queue

* QQ: Avoid using mc:size/1 inside rabbit_fifo

As we don't want to depend on external functions for things that may
change the state of the queue.

* QQ bug fix: Maintain order when returning multiple

Prior to this commit, quorum queues requeued messages in an undefined
order, which is wrong.

This commit fixes this bug and requeues messages always in the order as
nacked / rejected / released by the client.

We ensure that order of requeues is deterministic from the client's
point of view and doesn't depend on whether the quorum queue soft limit
was exceeded temporarily.
So, even when rabbit_fifo_client batches requeues, the order as nacked
by the client is still maintained.

* Simplify

* Add rabbit_quorum_queue:file_handle* functions back.

For backwards compat.

* dialyzer fix

* dynamic_qq_SUITE: avoid mixed versions failure.

* QQ: track number of requeues for message.

To be able to calculate the correct value for the AMQP delivery_count
header we need to be able to distinguish between messages that were
"released" or returned in QQ speak and those that were returned
due to errors such as channel termination.

This commit implements such tracking as well as the calculation
of a new mc annotation `delivery_count` that AMQP makes use
of to set the header value accordingly.

* Use QQ consumer removal when AMQP client detaches

This enables us to unskip some AMQP tests.

* Use AMQP address v2 in fsharp-tests

* rabbit_fifo: Use Ra checkpoints

* quorum queues: Use a custom interval for checkpoints

* rabbit_fifo_SUITE: List actual effects in ?ASSERT_EFF failure

* QQ: Checkpoints modifications

* fixes

* QQ: emit release cursors on tick for followers and leaders

Otherwise followers could end up holding on to segments for a bit longer
after traffic stops.

* Support draining a QQ SAC waiting consumer

By issuing drain=true, the client says "either send a transfer or a flow frame".
Since there are no messages to send to an inactive consumer, the sending
queue should advance the delivery-count consuming all link-credit and send
a credit_reply with drain=true to the session proc which causes the session
proc to send a flow frame to the client.
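The drain rule above can be sketched as follows (Python, assumed names — a toy model, not the session implementation): with no messages to send, the queue consumes all remaining link-credit by advancing the delivery-count, so the session can report drain completion to the client.

```python
SERIAL_MOD = 2 ** 32  # AMQP delivery-count is a 32-bit sequence number

def drain_inactive_consumer(delivery_count, link_credit):
    # Advance delivery-count by all outstanding credit, zeroing the credit.
    delivery_count = (delivery_count + link_credit) % SERIAL_MOD
    link_credit = 0
    # queue -> session: credit_reply{drain=true}; session -> client: flow
    return delivery_count, link_credit

print(drain_inactive_consumer(10, 100))   # (110, 0)
```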

* Extract applying #credit{} cmd into 2 functions

This commit is only refactoring and doesn't change any behaviour.

* Fix default priority level

Prior to this commit, when a message didn't have a priority level set,
it got enqueued as high prio.

This is wrong because the default priority is 4 and
"for example, if 2 distinct priorities are implemented,
then levels 0 to 4 are equivalent, and levels 5 to 9 are equivalent
and levels 4 and 5 are distinct."
Hence, by default, a message without a priority set must be enqueued as
low prio.
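A minimal sketch of the mapping described above (Python, illustrative names): the AMQP default priority is 4, and with two internal levels, priorities 0-4 collapse to "low" while 5 and above collapse to "high".

```python
DEFAULT_PRIORITY = 4  # AMQP default when no priority is set

def internal_priority(amqp_priority=None):
    # A message without a priority gets the default (4), which is low.
    p = DEFAULT_PRIORITY if amqp_priority is None else amqp_priority
    return "high" if p >= 5 else "low"

print(internal_priority())    # 'low'  (the bug made this 'high')
print(internal_priority(5))   # 'high'
```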

* bazel run gazelle

* Avoid deprecated time unit

* Fix aux_test

* Delete dead code

* Fix rabbit_fifo_q:get_lowest_index/1

* Delete unused normalize functions

* Generate less garbage

* Add integration test for QQ SAC with consumer priority

* Improve readability

* Change modified outcome behaviour

With the new quorum queue v4 improvements where a requeue counter was
added in addition to the quorum queue delivery counter, the following
sentence from https://github.com/rabbitmq/rabbitmq-server/pull/6292#issue-1431275848
doesn't apply anymore:

> Also the case where delivery_failed=false|undefined requires the release of the
> message without incrementing the delivery_count. Again this is not something
> that our queues are able to do so again we have to reject without requeue.

Therefore, we simplify the modified outcome behaviour:
RabbitMQ will from now on only discard the message if the modified's
undeliverable-here field is true.

* Introduce single feature flag rabbitmq_4.0.0

 ## What?

Merge all feature flags introduced in RabbitMQ 4.0.0 into a single
feature flag called rabbitmq_4.0.0.

 ## Why?

1. This fixes the crash in
https://github.com/rabbitmq/rabbitmq-server/pull/10637#discussion_r1681002352
2. It's better user experience.

* QQ: expose priority metrics in UI

* Enable skipped test after rebasing onto main

* QQ: add new command "modify" to better handle AMQP modified outcomes.

This new command can be used to annotate returned or rejected messages.

This commit also retains the delivery-count across dead-letter boundaries
such that the AMQP header delivery-count field can now include _all_ failed
delivery attempts since the message was originally received.

Internally the quorum queue has moved its delivery_count header to
only track the AMQP protocol delivery attempts and now introduces
a new acquired_count to track all message acquisitions by consumers.

* Type tweaks and naming

* Add test for modified outcome with classic queue

* Add test routing on message-annotations in modified outcome

* Skip tests in mixed version tests

Skip tests in mixed version tests because feature flag
rabbitmq_4.0.0 is needed for the new #modify{} Ra command
being sent to quorum queues.

---------

Co-authored-by: David Ansari <david.ansari@gmx.de>
Co-authored-by: Michael Davis <mcarsondavis@gmail.com>
2024-08-08 08:48:27 +01:00
Karl Nilsson 7b5d339aec QQ: improve shrink_all to retry once if cluster change is not permitted.
This could happen if a leader election occurred just before the
member removal was initiated. In particular, this could happen
when stopping and forgetting an existing rabbit node.
2024-08-07 12:03:00 +01:00
Karl Nilsson e24bd06e71 QQ: refactor and improve leader detection code.
The leader returned in rabbit_quorum_queue:info/2 only ever queried
the pid field from the queue record when more up to date info could
have been available in the ra_leaderboard table.
2024-08-07 12:02:53 +01:00
David Ansari 9f61bebc23 Avoid returning leader info when leader is unknown
Prior to this commit, atom `undefined` was turned into a binary.
2024-08-06 22:46:40 +02:00
David Ansari f447986f8f Reuse timestamp in rabbit_message_interceptor
## What?
`mc:init()` already sets mc annotation `rts` (received timestamp).
This commit reuses this timestamp in `rabbit_message_interceptor`.

 ## Why?
`os:system_time/1` can jump forward or backward between invocations.
Using two different timestamps for the same meaning, the time the message
was received by RabbitMQ, can be misleading.
2024-08-06 11:11:41 +02:00
Diana Parra Corbacho 647d65b8c8 Classic peer discovery: node list warnings
Log warnings when:
- Local node is not present. Even though we force it on the node
list, this will not work for other cluster nodes if they have
the same list.
- There are duplicated nodes
2024-08-05 10:07:14 +02:00
David Ansari 93d1ac9bb8 Speed up AMQP connection and session (de)registration
## What?

Prior to this commit connecting 40k AMQP clients with 5 sessions each,
i.e. 200k sessions in total, took 7m55s.

After this commit, the same scenario takes 1m37s.

Additionally, prior to this commit, disconnecting all connections and sessions
at once caused the pg process to become overloaded taking ~14 minutes to
process its mailbox.

After this commit, these same deregistrations take less than 5 seconds.

To repro:
```go

package main

import (
	"context"
	"log"
	"time"

	"github.com/Azure/go-amqp"
)

func main() {
	for i := 0; i < 40_000; i++ {
		if i%1000 == 0 {
			log.Printf("opened %d connections", i)
		}
		conn, err := amqp.Dial(
			context.TODO(),
			"amqp://localhost",
			&amqp.ConnOptions{SASLType: amqp.SASLTypeAnonymous()})
		if err != nil {
			log.Fatal("open connection:", err)
		}
		for j := 0; j < 5; j++ {
			_, err = conn.NewSession(context.TODO(), nil)
			if err != nil {
				log.Fatal("begin session:", err)
			}
		}
	}
	log.Println("opened all connections")
	time.Sleep(5 * time.Hour)
}
```

 ## How?

This commit uses separate pg scopes (that is processes and ETS tables) to register
AMQP connections and AMQP sessions. Since each Pid is now its own group,
registration and deregistration is fast.
2024-08-02 13:46:30 +02:00
Michael Klishin 0525ab06a0 rabbitmq.conf.example: mention log.file.rotation.* keys 2024-08-01 01:11:46 -04:00
Michael Klishin f91498284e
Merge pull request #11867 from rabbitmq/mqtt-credential-expiration
Disconnect MQTT client when its credential expires
2024-07-30 21:10:12 -04:00
David Ansari d7f29426a8 Fix test flake
Sometimes in CI under Khepri, the test case errored with:
```
receiver_attached flushed: {amqp10_event,
                            {session,<0.396.0>,
                             {ended,
                              {'v1_0.error',
                               {symbol,<<"amqp:internal-error">>},
                               {utf8,
                                <<"stream queue 'leader_transfer_stream_credit_single' in vhost '/' does not have a running replica on the local node">>},
                               undefined}}}}
```
2024-07-30 21:05:25 +02:00
David Ansari 7fb78338c6 Disconnect MQTT client when its credential expires
Fixes https://github.com/rabbitmq/rabbitmq-server/discussions/11854
Fixes https://github.com/rabbitmq/rabbitmq-server/issues/11862

This commit uses the same approach as implemented for AMQP 1.0 and
Streams: When a token expires, RabbitMQ will close the connection.
2024-07-30 19:55:46 +02:00
David Ansari 9d9a69aed9 Make AMQP flow control configurable
Make the following AMQP 1.0 flow control variables configurable via
`advanced.config`:
* `max_incoming_window` (session flow control)
* `max_link_credit` (link flow control)
* `max_queue_credit` (link flow control)
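An illustrative `advanced.config` fragment for these settings; the placement under the `rabbit` application and the values shown are assumptions, not documented defaults.

```erlang
%% Illustrative only: key placement and values are assumed.
[
  {rabbit, [
    {max_incoming_window, 400},
    {max_link_credit, 170},
    {max_queue_credit, 256}
  ]}
].
```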
2024-07-30 16:40:52 +02:00
David Ansari c771b2422a Make classic_queue_consumer_unsent_message_limit configurable
Similar to other RabbitMQ internal credit flow configurations such as
`credit_flow_default_credit` and `msg_store_credit_disc_bound`, this
commit makes the `classic_queue_consumer_unsent_message_limit`
configurable via `advanced.config`.

See https://github.com/rabbitmq/rabbitmq-server/pull/11822 for the
original motivation to make this setting configurable.
2024-07-29 22:48:48 +02:00
David Ansari ce915ae05a Fix quorum queue credit reply crash in AMQP session
Fixes #11841

PR #11307 introduced the invariant that at most one credit request between
session proc and quorum queue proc can be in flight at any given time.
This is not the case when rabbit_fifo_client re-sends credit
requests on behalf of the session proc when the quorum queue leader changes.

This commit therefore removes assertions which assumed only a single credit
request to be in flight.

This commit also removes field queue_flow_ctl.desired_credit
since it is redundant to field client_flow_ctl.credit
2024-07-28 12:34:41 +02:00
David Ansari d3109e9f09 Remove max_frame_size from AMQP writer
because the session process already splits frames that are too large
into smaller frames
2024-07-26 16:35:36 +02:00
David Ansari dde8e699a1 Report frame_max as integer
Resolves https://github.com/rabbitmq/rabbitmq-server/issues/11838
2024-07-26 16:35:36 +02:00
GitHub f011b54767 bazel run gazelle 2024-07-26 04:02:38 +00:00
Michael Klishin 4aaa1c410e
Merge pull request #11664 from rabbitmq/khepri-node-added-event
rabbit_node_monitor: use a leader query for cluster members on node_added event
2024-07-25 15:41:28 -04:00
Michael Klishin 29251a0a54
Merge pull request #11706 from rabbitmq/md/khepri-minority-errors/rabbit_db_vhost
Handle timeouts possible in Khepri minority in `rabbit_db_vhost`
2024-07-25 15:40:40 -04:00
Michael Klishin 5a56e326d9
Merge pull request #11741 from rabbitmq/retry-register-projections-during-boot
rabbit_khepri: Retry register_projections during boot
2024-07-25 15:39:15 -04:00
Michael Davis 4207faf433
Merge pull request #11785 from rabbitmq/md/khepri-minority-errors/rabbit_db_exchange
Handle timeouts possible in Khepri minority in `rabbit_db_exchange`
2024-07-24 12:11:17 -05:00
Michael Davis b56abeec12
Use `rabbit_misc:rs/1` on exchange resource records
This fixes a potential crash in `rabbit_amqp_management` where we tried
to format the exchange resource as a string (`~ts`). The other changes
are cosmetic.
2024-07-24 11:32:33 -04:00
Michael Davis fb3154ba82
rabbit_channel: Fix formatting of error message for exchange deletion
Co-authored-by: David Ansari <david.ansari@gmx.de>
2024-07-24 11:19:31 -04:00
Karl Nilsson 1a9da90153
Merge pull request #11809 from rabbitmq/qq-system-recovery
QQ: use a dedicated function for queue recovery after Ra system restart.
2024-07-24 16:14:05 +01:00
Michael Davis 98616a0037
rabbit_amqp_management: Use HTTP code 503 for timeout errors
`rabbit_amqp_management` returns HTTP status codes to the client. 503
means that a service is unavailable (which Khepri is while it is in a
minority) so it's a more appropriate code than the generic 500
internal server error.
2024-07-24 11:13:17 -04:00
Michal Kuratczyk ae41f65c64
Fix rabbit_priority_queue:update_rates bug (#11814)
update_rates fails after publishing a message to a queue
with priorities enabled.
2024-07-24 16:34:56 +02:00
Karl Nilsson 4863bc3b8f QQ: use a dedicated function for queue recovery after Ra system restart.
Previously we used the `registered` approach where all Ra servers with
a registered name would be recovered. This could have unintended
side effects for queues that were deleted while not all members of
the quorum queue were running. In that case the Ra system would
recover the members that were not deleted, which is not ideal, as a
dangling member would just sit and loop in pre-vote state and a
future declaration of the queue might partially fail.

Instead we rely on the meta data store for the truth about which
members should be restarted after a ra system restart.
2024-07-24 14:24:42 +01:00
David Ansari be6a7fec95 Fix test flake
Sometimes on Khepri the test failed with:
```
=== Ended at 2024-07-24 10:07:15
=== Location: [{gen_server,call,419},
              {amqpl_direct_reply_to_SUITE,rpc,226},
              {test_server,ts_tc,1793},
              {test_server,run_test_case_eval1,1302},
              {test_server,run_test_case_eval,1234}]
=== === Reason: {{shutdown,
                      {server_initiated_close,404,
                          <<"NOT_FOUND - no queue 'tests.amqpl_direct_reply_to.rpc.requests' in vhost '/'">>}},
                  {gen_server,call,
                      [<0.272.0>,
                       {call,
                           {'basic.get',0,
                               <<"tests.amqpl_direct_reply_to.rpc.requests">>,
                               false},
                           none,<0.246.0>},
                       infinity]}}
```

https://github.com/rabbitmq/rabbitmq-server/actions/runs/10074558971/job/27851173817?pr=11809
shows an instance of this flake.
2024-07-24 13:42:20 +02:00
Michael Davis 52a0d70e15
Handle database timeouts when declaring exchanges
The spec of `rabbit_exchange:declare/7` needs to be updated to return
`{ok, Exchange} | {error, Reason}` instead of the old return value of
`rabbit_types:exchange()`. This is safe to do since `declare/7` is not
called by RPC - from the CLI or otherwise - outside of test suites, and
in test suites only through the CLI's `TestHelper.declare_exchange/7`.
Callers of this helper are updated in this commit.

Otherwise this commit updates callers to unwrap the `{ok, Exchange}`
and bubble up errors.
2024-07-22 16:02:03 -04:00
Michael Davis 96c60a2de4
Move 'for_each_while_ok/2' helper to rabbit_misc 2024-07-22 16:02:03 -04:00
Michael Davis 70595822e4
rabbit_db_exchange: Allow infinite timeout for serial updates in Khepri
It's unlikely that these operations will time out since the serial
number is always updated after some other transaction, for example
adding or deleting an exchange.

In the future we could consider moving the serial updates into those
transactions. In the meantime we can remove the possibility of timeouts
by giving the serial update unlimited time to finish.
2024-07-22 15:59:55 -04:00
Michael Davis e7489d2cb7
Handle database failures when deleting exchanges
A common case for exchange deletion is that callers want the deletion
to be idempotent: they treat the `ok` and `{error, not_found}` returns
from `rabbit_exchange:delete/3` the same way. To simplify these
callsites we add a `rabbit_exchange:ensure_deleted/3` that wraps
`rabbit_exchange:delete/3` and returns `ok` when the exchange did not
exist. Part of this commit is to update callsites to use this helper.

The other part is to handle the `rabbit_khepri:timeout()` error possible
when Khepri is in a minority. For most callsites this is just a matter
of adding a branch to their `case` clauses and an appropriate error and
message.
2024-07-22 15:59:55 -04:00
Michael Davis 80f599b001
rabbit_db_exchange: Reflect possible failure in update/2 spec 2024-07-22 15:59:44 -04:00
Michael Davis 83994501b5
rabbit_db_vhost: Bubble up database errors in delete/1
We need to bubble up the error through the caller
`rabbit_vhost:delete/2`. The CLI calls `rabbit_vhost:delete/2` and
already handles the `{error, timeout}` but the management UI needs an
update so that an HTTP DELETE returns an error code when the deletion
times out.
2024-07-22 15:55:57 -04:00
Michael Davis 2a86dde998
rabbit_db_vhost: Add `no_return()` to `update/2` spec
This function throws if the database fails to apply the transaction.
This function is only called by the `rabbit_vhost_limit` runtime
parameter module in its `notify/5` and `notify_clear/4` callbacks. These
callers have no way of handling this error but it should be very
difficult for them to face this crash: setting the runtime parameter
would need to succeed first which needs Khepri to be in majority. Khepri
would need to enter a minority between inserting/updating/deleting the
runtime parameter and updating the vhost. It's possible but unlikely.

In the future we could consider refactoring vhost limits to update the
vhost as the runtime parameter is changed, transactionally. I figure
that to be a very large change though so we leave this to the future.
2024-07-22 15:55:57 -04:00
Michael Davis 4fd77d5fbf
rabbit_db_vhost: Add `no_return()` to `set_tags/2` spec
`set_tags/2` throws for database errors. This is benign since it's
caught by the CLI (the only caller) and turned into a Khepri-specific
error.
2024-07-22 15:55:57 -04:00
Michael Davis 1695d390d9
rabbit_db_vhost: Add timeout error to `merge_metadata/2` spec
This error is already handled by the callers of
`rabbit_vhost:update_metadata/3` (the CLI) and `rabbit_vhost:put_vhost/6`
(see the parent commit) but was just missing from the spec.
2024-07-22 15:55:57 -04:00
Michael Davis 63b5100374
rabbit_definitions: Handle vhost creation failure
`rabbit_definitions:concurrent_for_all/4` doesn't pay any attention to
the return value of the `Fun`, only counting an error when it catches
`{error, E}`. So we need to `throw/1` the error from
`rabbit_vhost:put_vhost/6`.

The other callers of `rabbit_vhost:put_vhost/6` - the management UI and
the CLI (indirectly through `rabbit_vhost:add/2,3`) already handle this
error return.
2024-07-22 15:55:57 -04:00
Michael Davis e459ee5c77
rabbit_db_vhost: Declare no-return in create_or_get/3 spec
`create_or_get_in_khepri/2` throws errors like the
`rabbit_khepri:timeout_error()`. Callers of `create_or_get/3` like
`rabbit_vhost:do_add/3` and its callers handle the throw with a `try`/
`catch` block and return the error tuple, which is then handled by
their callers.
2024-07-22 15:55:57 -04:00
Michael Davis f1be7bacc2
Handle database failures when adding/removing bindings
This ensures that the call graph of `rabbit_db_binding:create/2` and
`rabbit_db_binding:delete/2` handle the `{error, timeout}` error
possible when Khepri is in a minority.
2024-07-22 14:16:39 -04:00
Michael Davis fe280280a4
rabbit_db_bindings: Explicitly mark exists_in_khepri tx as read-only
This is essentially a cosmetic change. Read-only transactions are done
with queries in Khepri rather than commands, like read-write
transactions. Local queries cannot timeout like commands so marking the
transaction as 'ro' means that we don't need to handle a potential
'{error, timeout}' return.
2024-07-22 14:16:39 -04:00
Michael Davis aace1b5377
Introduce a rabbit_khepri:timeout_error() error type 2024-07-22 14:16:39 -04:00
Gabriele Santomaggio e094a9bfc8
Merge pull request #11742 from rabbitmq/fix_catch_precodition_fail_management
Handle more failure types in `rabbit_amqqueue:declare/6` when declaring a stream
2024-07-22 12:50:04 +02:00
David Ansari 909f0d814a Add test case
and remove inner case statement since we only want
rabbit_amqqueue:declare/6 to be protected.
2024-07-22 11:02:23 +02:00
Michael Davis 38cd40b31e
maintenance_mode_SUITE: Skip leadership transfer case on mnesia
This case only targets Khepri. Instead of setting the `metadata_store`
config option we should skip the test when the configured metadata
store is mnesia.
2024-07-19 15:28:25 -04:00
Michael Klishin e366b1ddd4 Make bazel test //deps/rabbit:dialyze pass 2024-07-19 14:22:16 -04:00
Karl Nilsson 42991f7838 Ra v2.13.3
This contains a fix in the ra_directory module to ensure
names can be deleted even when a Ra server has never been started
during the current node lifetime.

Also contains a small tweak to ensure the ra_directory:unregister_name
is called before deleting a Ra data directory which is less likely
to cause a corrupt state that will stop a Ra system from starting.
2024-07-19 18:47:27 +01:00
Arnaud Cogoluègnes eeb35d2688
Add stream replication port range in ini-style configuration
This is more straightforward than configuring Osiris in the advanced
configuration file.
2024-07-19 16:47:59 +02:00
Gabriele Santomaggio f9707530b0
Remove case args
Signed-off-by: Gabriele Santomaggio <g.santomaggio@gmail.com>
2024-07-19 08:26:27 +02:00
Gabriele Santomaggio 93946eeda0
Handle rabbit_amqqueue:declare errors
Errors from rabbit_amqqueue:declare/6 are now handled, and in case of known errors, the correct error code is sent back.

Signed-off-by: Gabriele Santomaggio <g.santomaggio@gmail.com>
2024-07-18 12:00:52 +02:00
Diana Parra Corbacho 992c260c56 Catch throw:timeout as returned from Khepri 0.14.0 2024-07-17 15:26:24 +02:00
Diana Parra Corbacho f257e1181f rabbit_khepri: Retry register_projections during boot
Gives some time to form a majority during the boot process,
allowing nodes to boot more easily
2024-07-17 13:17:33 +02:00
Lois Soto Lopez bb93e718c2 Prometheus: some per-exchange/per-queue metrics aggregated per-channel
Add copies of some per-object metrics that are labeled per-channel
aggregated to reduce cardinality. These metrics are valuable and
easier to process if exposed on per-exchange and per-queue basis.
2024-07-16 14:30:25 +02:00
Diana Parra Corbacho e856a6cc21 rabbit_mnesia: Emit notify_left_cluster from forget_cluster_node
This function is called directly from CLI commands, skipping the `rabbit_db_cluster` layer
2024-07-16 12:48:29 +02:00
Diana Parra Corbacho db03d8c6cb rabbit_db_cluster: generate left cluster notifications
They must be sent during reset and when leaving the cluster for
any metadata store
2024-07-16 12:48:29 +02:00
Diana Parra Corbacho 19a71d8d28 rabbit_node_monitor: use a leader query for cluster members on node_added event
If the membership hasn't been updated locally yet, the event is never generated
2024-07-16 12:48:29 +02:00
Michal Kuratczyk 9debca24d8 Remove HA policy example from OpenStack script 2024-07-15 12:38:01 -04:00
Michal Kuratczyk 6b1377163d Remove sync_queue and cancel_sync_queue from man page 2024-07-15 12:38:01 -04:00
Karl Nilsson 131379a483 mc: increase utf8 scanning limit for longstr conversions.
The AMQP 0.9.1 longstr type is problematic as it can contain arbitrary
binary data but is typically used for utf8 by users.

The current conversion into AMQP avoids scanning arbitrarily large
longstr to see if they only contain valid utf8 by treating all
longstr data longer than 255 bytes as binary. This is in hindsight
too strict and thus this commit increases the scanning limit to
4096 bytes - enough to cover the vast majority of AMQP 0.9.1 header
values.

This change also converts the AMQP binary types into longstr to
ensure that existing data (held in streams, for example) is converted
to the AMQP 0.9.1 type the user most likely intended.
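The conversion rule can be sketched as follows (Python, illustrative names — the real code lives in the Erlang `mc` module): a longstr becomes an AMQP 1.0 utf8 value only if it is within the scanning limit (raised from 255 to 4096 bytes) and actually valid UTF-8; otherwise it is treated as binary.

```python
UTF8_SCAN_LIMIT = 4096  # raised from 255 in this change

def longstr_to_amqp_type(data: bytes) -> str:
    # Only scan payloads within the limit; larger ones are assumed binary.
    if len(data) <= UTF8_SCAN_LIMIT:
        try:
            data.decode("utf-8")
            return "utf8"
        except UnicodeDecodeError:
            return "binary"
    return "binary"

print(longstr_to_amqp_type(b"hello"))       # utf8
print(longstr_to_amqp_type(b"\xff\xfe"))    # binary (invalid UTF-8)
print(longstr_to_amqp_type(b"x" * 5000))    # binary (over the limit)
```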
2024-07-15 14:07:19 +02:00
Michael Klishin aeeb990e57
Merge pull request #11709 from rabbitmq/gazelle-main
bazel run gazelle
2024-07-13 01:49:50 -04:00
Michael Klishin 6f67a85ad9
Merge pull request #11705 from rabbitmq/amqp-consumer-priority
Support consumer priority in AMQP
2024-07-13 00:11:42 -04:00
GitHub e74ecff203 bazel run gazelle 2024-07-13 04:02:17 +00:00
Michael Klishin bd5e9fa2ac
Merge pull request #11700 from rabbitmq/md/khepri/projections-ets-try-catch
Use 'try'/'catch' rather than 'ets:whereis/1' for Khepri projections
2024-07-12 17:09:02 -04:00
David Ansari e6587c6e45 Support consumer priority in AMQP
Arguments
* `rabbitmq:stream-offset-spec`,
* `rabbitmq:stream-filter`,
* `rabbitmq:stream-match-unfiltered`
are set in the `filter` field of the `Source`.
This makes sense for these consumer arguments because:
> A filter acts as a function on a message which returns a boolean result
> indicating whether the message can pass through that filter or not.

Consumer priority is not really such a predicate.
Therefore, it makes more sense to set consumer priority in the
`properties` field of the `Attach` frame.

We call the key `rabbitmq:priority` which maps to consumer argument
`x-priority`.

While AMQP 0.9.1 consumers are allowed to set any integer data
type for the priority level, this commit decides to enforce an `int`
value (range -(2^31) to 2^31 - 1 inclusive).
Consumer priority levels outside of this range are not needed in
practice.
2024-07-12 20:31:01 +02:00
David Ansari 3863db3989 Fix queue type consumer arguments
see https://www.rabbitmq.com/blog/2023/10/24/stream-filtering-internals#bonus-stream-filtering-on-amqp

`x-credit` was used by the 3.13 AMQP 1.0 plugin
2024-07-12 16:58:40 +02:00
Michael Davis 9f255db90f
Use 'try'/'catch' rather than 'ets:whereis/1' for Khepri projections
`ets:whereis/1` adds some overhead - it's two ETS calls rather than one
when `ets:whereis/1` returns a table identifier. It's also not atomic:
the table could disappear between `ets:whereis/1` calls and the call to
read data from a projection. We replace all `ets:whereis/1` calls on
projection tables with `try`/`catch` and return default values when we
catch the `badarg` `error` which ETS emits when passed a non-existing
table name.

One special case though is `ets:info/2`, which returns `undefined` when
passed a non-existing table name. That block is refactored to use a
`case` instead.
2024-07-12 10:35:29 -04:00
Michal Kuratczyk f398892bda
Deprecate queue-master-locator (#11565)
* Deprecate queue-master-locator

This should not be a breaking change - all validation should still pass
* CQs can now use `queue-leader-locator`
* `queue-leader-locator` takes precedence over `queue-master-locator` if both are used
* regardless of which name is used, effectively there are only two values: `client-local` (default) or `balanced`
* other values (`min-masters`, `random`, `least-leaders`) are mapped to `balanced`
* Management UI no longer shows `master-locator` fields when declaring a queue/policy, but such arguments can still be used manually (unless not permitted)
* exclusive queues are always declared locally, as before
2024-07-12 13:22:55 +02:00
Michael Klishin 6b1e003afe Revert "New metrics return on detailed only"
This reverts commit 1aec73b21c.
2024-07-11 21:34:40 -04:00
Michael Klishin 85a4b365d0 Revert "Use functions w/out _process as its more approp."
This reverts commit 4d592da5ef.
2024-07-11 21:34:34 -04:00
Michael Klishin 2ec9625f1b Revert "Update deps/rabbit/src/rabbit_core_metrics_gc.erl"
This reverts commit b5fb5c4f2c.
2024-07-11 21:34:28 -04:00
LoisSotoLopez 6b4e3225d3 Update deps/rabbit/src/rabbit_core_metrics_gc.erl
Co-authored-by: Péter Gömöri <gomoripeti@users.noreply.github.com>
2024-07-11 17:34:18 -04:00
Lois Soto Lopez 94e3b2ccaa Use functions w/out _process as its more approp. 2024-07-11 17:34:18 -04:00
Lois Soto Lopez 18e667fc8f New metrics return on detailed only
Make new metrics return on detailed only and adjust some of the
help messages.
2024-07-11 17:34:18 -04:00
Michael Davis 8c6b866fc5
Merge pull request #11667 from rabbitmq/md/khepri-projections-wrap-ets-calls
rabbit_db_*: Wrap `ets` calls to projections in `whereis/1` checks
2024-07-11 11:27:56 -05:00
David Ansari 1ca9b95952 Delete unnecessary function
as suggested by JSP in PR feedback
2024-07-11 11:20:26 +02:00
David Ansari e31df4cd01 Fix test case rebalance
This test case was wrongly skipped and therefore never ran.
2024-07-11 11:20:26 +02:00
David Ansari 18e8c1d5f8 Require all stable feature flags added up to 3.13.0
Since feature flag `message_containers` introduced in 3.13.0 is required in 4.0,
we can also require all other feature flags introduced in or before 3.13.0
and remove their compatibility code for 4.0:

* restart_streams
* stream_sac_coordinator_unblock_group
* stream_filtering
* stream_update_config_command
2024-07-11 11:20:26 +02:00
Michael Davis 88c1ad2f6e
Adapt to new `{error, timeout}` return value in Khepri 0.14.0
See rabbitmq/khepri#256.
2024-07-10 16:07:43 -04:00
Michael Davis ae0663d7ca
Merge pull request #11663 from rabbitmq/md/ci/turn-off-mixed-version-khepri-tests
Turn off mixed version tests against Khepri
2024-07-10 15:07:07 -05:00
Michael Davis c490043484
rabbit_db_*: Wrap `ets` calls to projections in `whereis/1` checks
Projections might not be available in a mixed-version scenario where a
cluster has nodes which are all blank/uninitialized and the majority
of nodes run a version of Khepri with a new machine version while the
minority does not have the new machine version's code.

In this case, the cluster's effective machine version will be set to
the newer version as the majority of members have access to the new
code. The older version members will be unable to apply commands
including the `register_projection` commands that set up these ETS
tables. When these ETS tables don't exist, calls like `ets:tab2list/1`
or `ets:lookup/2` cause `badarg` errors.

We use default empty values when `ets:whereis/1` returns `undefined` for
a projection table name. Instead we could use local queries or leader
queries. Writing equivalent queries is a fair amount more work and the
code would be hard to test. `ets:whereis/1` should only return
`undefined` in the above scenario which should only be a problem in
our mixed-version testing - not in practice.
2024-07-10 14:24:27 -04:00
Michael Klishin 348ca6f5a7 amqpl_direct_reply_to_SUITE: use separate queue names to avoid interference 2024-07-10 13:48:38 -04:00
Michael Davis 56bbf3760d
Respect RABBITMQ_METADATA_STORE in clustering_recovery_SUITE 2024-07-10 13:46:22 -04:00
Michael Davis 0a4e5a9845
Respect RABBITMQ_METADATA_STORE in clustering_management_SUITE 2024-07-10 13:46:05 -04:00
Michael Davis ecb757554e
Skip cluster_minority_SUITE when RABBITMQ_METADATA_STORE is set to mnesia
This suite is only meant to run with Khepri as the metadata store.
Instead of setting this explicitly we can look at the configured
metadata store and conditionally skip the entire suite. This prevents
these tests from running twice in CI.
2024-07-10 13:46:05 -04:00
Michael Davis bda1f7cef6
Skip metadata_store_clustering_SUITE in mixed versions testing
Khepri is not yet compatible with mixed-version testing and this suite
only tests clustering when Khepri is the metadata store in at least some
of the nodes.
2024-07-10 13:46:05 -04:00
David Ansari 5deff457a0 Fix direct reply to crash when tracing is enabled
This commit fixes https://github.com/rabbitmq/rabbitmq-server/discussions/11662

Prior to this commit the following crash occurred when an RPC reply message entered
RabbitMQ and tracing was enabled:
```
** Reason for termination ==
** {function_clause,
       [{rabbit_trace,'-tap_in/6-fun-0-',
            [{virtual_reply_queue,
                 <<"amq.rabbitmq.reply-to.g1h2AA5yZXBseUAyNzc5NjQyMAAAC1oAAAAAZo4bIw==.+Uvn1EmAp0ZA+oQx2yoQFA==">>}],
            [{file,"rabbit_trace.erl"},{line,62}]},
        {lists,map,2,[{file,"lists.erl"},{line,1559}]},
        {rabbit_trace,tap_in,6,[{file,"rabbit_trace.erl"},{line,62}]},
        {rabbit_channel,handle_method,3,
            [{file,"rabbit_channel.erl"},{line,1284}]},
        {rabbit_channel,handle_cast,2,
            [{file,"rabbit_channel.erl"},{line,659}]},
        {gen_server2,handle_msg,2,[{file,"gen_server2.erl"},{line,1056}]},
        {proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,241}]}]}
```

(Note that no trace message is emitted for messages that are delivered to
direct reply to requesting clients (neither in 3.12, nor in 3.13, nor
after this commit). This behaviour can be added in future when a direct
reply virtual queue becomes its own queue type.)
2024-07-10 18:14:35 +02:00
Luke Bakken c3c4612efa
Take `-setcookie` argument into account for peer args
Fixes issue raised in discussion #11653

The user used the following env var:

```
RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="-setcookie FOOBARBAZ"
```

...and the cookie was not passed to the peer node for peer discovery.
2024-07-09 17:00:58 -07:00
David Ansari 37b1ffe244 Upgrade AMQP address format in tests
from v1 to v2.

We still use AMQP address format v1 when connecting to an old 3.13 node.
2024-07-08 13:23:50 +02:00
Michael Klishin 398c7de368
Merge pull request #11619 from rabbitmq/federation-definition-export
rabbit_definitions: ensure federation-upstream-set parameters are exported
2024-07-04 17:08:38 -04:00
Diana Parra Corbacho aa89307182 Use rabbit_data_coercion:to_map 2024-07-04 15:40:18 +02:00
David Ansari 19523876cd Make AMQP address v2 format user friendly
This commit is a follow up of https://github.com/rabbitmq/rabbitmq-server/pull/11604

This commit changes the AMQP address format v2 from
```
/e/:exchange/:routing-key
/e/:exchange
/q/:queue
```
to
```
/exchanges/:exchange/:routing-key
/exchanges/:exchange
/queues/:queue
```

Advantages:
1. more user friendly
2. matches nicely with the plural forms of HTTP API v1 and HTTP API v2

This plural form is still non-overlapping with AMQP address format v1.

Although it might feel unusual at first to send a message to `/queues/q1`,
if you think about `queues` just being a namespace or entity type, this
address format makes sense.
2024-07-04 14:33:05 +02:00
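To make the v2 address strings from the commit above concrete, here is a minimal, illustrative sketch (not part of the repository; the helper names are hypothetical) of building `/exchanges/...` and `/queues/...` addresses with percent-encoded segments, so that names containing `/` stay unambiguous:

```python
from urllib.parse import quote

def exchange_address(exchange, routing_key=None):
    # /exchanges/:exchange[/:routing-key] -- each segment percent-encoded
    addr = "/exchanges/" + quote(exchange, safe="")
    if routing_key is not None:
        addr += "/" + quote(routing_key, safe="")
    return addr

def queue_address(queue):
    # /queues/:queue
    return "/queues/" + quote(queue, safe="")

print(exchange_address("amq.direct", "my/key"))  # /exchanges/amq.direct/my%2Fkey
print(queue_address("q1"))                       # /queues/q1
```

Encoding each segment separately is what keeps a routing key such as `my/key` from being read as an extra path segment.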
Diana Parra Corbacho a8dcca084a rabbit_definitions: ensure federation-upstream-set parameters are exported
These parameters are not proplists but a list of maps
2024-07-04 12:41:34 +02:00
Michael Klishin 20aee3f09f
Merge pull request #11604 from rabbitmq/amqp-addr
Use different AMQP address format for v1 and v2
2024-07-03 19:32:05 -04:00
Michael Davis 427876b694
rabbit_runtime_parameters: Remove dead 'value_global/2', 'value/4'
`rabbit_runtime_parameters:value_global/2` was only used in
`rabbit_nodes:cluster_name/0` since near the beginning of the commit
history of the server and its usage was eliminated in 06932b9fcb
(#3085, released in v3.8.17+ and v3.9.0+).

`rabbit_runtime_parameters:value/4` doesn't appear to have been ever
used since it was introduced near the beginning of the commit history.
It may have been added just to mirror `value_global/2`'s interface.

Eliminating these dead functions allows us to also eliminate a somewhat
complicated function `rabbit_db_rtparams:get_or_set/2`.
2024-07-03 16:44:52 -04:00
David Ansari 7b18bd7a81 Enforce percent encoding
Partially copy file
https://github.com/ninenines/cowlib/blob/optimise-urldecode/src/cow_uri.erl
We use this copy because:
1. uri_string:unquote/1 is lax: It doesn't validate that characters that are
   required to be percent encoded are indeed percent encoded. In RabbitMQ,
   we want to enforce that proper percent encoding is done by AMQP clients.
2. uri_string:unquote/1 and cow_uri:urldecode/1 in cowlib v2.13.0 are both
   slow because they allocate a new binary for the common case where no
   character was percent encoded.
When a new cowlib version is released, we should make app rabbit depend on
that cowlib version, call cow_uri:urldecode/1, and delete this file (rabbit_uri.erl).
2024-07-03 17:01:51 +02:00
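The strictness argument in the commit above can be illustrated with a small stand-in (Python rather than Erlang; this is not the actual rabbit_uri.erl code): a decoder that rejects malformed percent triplets instead of passing them through the way a lax decoder would:

```python
def strict_percent_decode(s: str) -> bytes:
    # Reject malformed '%XX' triplets instead of keeping an invalid '%'
    # as-is, mirroring the "enforce proper percent encoding" idea above.
    out = bytearray()
    i = 0
    while i < len(s):
        if s[i] == "%":
            hex_part = s[i + 1:i + 3]
            if len(hex_part) != 2 or not all(h in "0123456789abcdefABCDEF" for h in hex_part):
                raise ValueError(f"invalid percent encoding at index {i}")
            out.append(int(hex_part, 16))
            i += 3
        else:
            out.extend(s[i].encode("utf-8"))
            i += 1
    return bytes(out)

print(strict_percent_decode("my%2Fkey"))  # b'my/key'
```

A lax decoder (e.g. `uri_string:unquote/1`) would accept an input such as `"bad%2"` unchanged; the strict version raises instead.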
David Ansari 0de9591050 Use different AMQP address format for v1 and v2
to distinguish between v1 and v2 address formats.

Previously, v1 and v2 address formats overlapped and behaved differently
for example for:
```
/queue/:queue
/exchange/:exchange
```

This PR changes the v2 format to:
```
/e/:exchange/:routing-key
/e/:exchange
/q/:queue
```
to distinguish between v1 and v2 addresses.

This allows calling `rabbit_deprecated_features:is_permitted(amqp_address_v1)`
only if we know that the user requests address format v1.

Note that `rabbit_deprecated_features:is_permitted/1` should only
be called when the old feature is actually used.

Use percent encoding / decoding for address URI format v2.
This allows any UTF-8 encoded characters, including slashes (`/`),
to be used in routing keys, exchange names, and queue names, and is
more future proof.
2024-07-03 16:36:03 +02:00
GitHub db63067f1e bazel run gazelle 2024-07-02 04:02:31 +00:00
Karl Nilsson 7273c6846d
Merge pull request #11582 from rabbitmq/testfixes-glorious-testfixes-and-other-improvements-hurray
Various test improvements
2024-07-01 16:54:49 +01:00
Loïc Hoguin a561b45dcf
Merge pull request #11573 from rabbitmq/loic-more-make
Another make PR
2024-07-01 14:27:04 +02:00
Karl Nilsson 2b09891a77 Fix flake in rabbit_stream_queue_SUITE 2024-07-01 10:27:52 +01:00
Karl Nilsson 087d701b6e speed up policy_SUITE 2024-07-01 10:27:52 +01:00
Karl Nilsson a277713105 speed up limit and tracking* suites
By removing explicit meta data store groups.
2024-07-01 10:27:52 +01:00
Karl Nilsson b67f7292b5 remove explicit meta data store groups from bindings_SUITE 2024-07-01 10:27:52 +01:00
Karl Nilsson a5819bf41e Remove meta data store groups from publisher_confirms_parallel_SUITE 2024-07-01 10:27:52 +01:00
Karl Nilsson 6a655b6daa Fix test flake in publisher_confirms_parallel_SUITE
test `confirm_nack` had a race condition that meant either ack or nack
were likely outcomes. By stopping the queue before the publish we
ensure only nack is the valid outcome.
2024-07-01 10:27:52 +01:00
Simon Unge fa7d911391 Add logging for when limit is hit for vhost_max and components such as shovels and federations 2024-06-28 21:35:14 +00:00
Michael Klishin 447fac9feb
Merge pull request #11583 from rabbitmq/remove-cancel-on-ha-failover
Remove x-cancel-on-ha-failover
2024-06-28 16:04:05 -04:00
Michael Klishin 756d3578b6 rabbit_channel: fix a compilation warning 2024-06-28 12:54:48 -04:00
David Ansari 166e5d8a2f Bump AMQP.NET Lite to 2.4.11 2024-06-28 18:50:59 +02:00
Diana Parra Corbacho 37ec060f7a Remove x-cancel-on-ha-failover 2024-06-28 17:11:37 +02:00
David Ansari c189af8857 Simplify
Also, notify decorators when requeuing due to consumer
removal causes the queue to become empty.
2024-06-28 15:38:21 +02:00
Loïc Hoguin bbfa066d79
Cleanup .gitignore files for the monorepo
We don't need to duplicate so many patterns in so many
files since we have a monorepo (and want to keep it).

If I managed to miss something or remove something that
should stay, please put it back. Note that monorepo-wide
patterns should go in the top-level .gitignore file.
Other .gitignore files are for application or folder-
specific patterns.
2024-06-28 12:00:52 +02:00
David Ansari daf61c9653 Classic queue removes consumer
## What?

The rabbit_queue_type API has allowed cancelling a consumer.
Cancelling a consumer merely stops the queue sending more messages
to the consumer. However, messages checked out to the cancelled consumer
remain checked out until acked by the client. It's up to the client when
and whether it wants to ack the remaining checked out messages.

For AMQP 0.9.1, this behaviour is necessary because the client may
receive messages between the point in time it sends the basic.cancel
request and the point in time it receives the basic.cancel_ok response.

AMQP 1.0 is a better designed protocol because a receiver can stop a
link as shown in figure 2.46.
After a link is stopped, the client knows that the queue won't deliver
any more messages.
Once the link is stopped, the client can subsequently detach the link.

This commit extends the rabbit_queue_type API to allow a consumer to be
immediately removed:
1. Any checked out messages to that receiver will be requeued by the
   queue. (As explained previously, a client that first stops a link
   by sending a FLOW with `link_credit=0` and `echo=true` and waiting
   for the FLOW reply from the server and settles any messages before
   it sends the DETACH frame, won't have any checked out messages).
2. The queue entirely forgets this consumer and therefore stops
   delivering messages to the receiver.

This new behaviour of consumer removal is similar to what happens when
an AMQP 0.9.1 channel is closed: All checked out messages to that
channel will be requeued.

 ## Why?

Removing the consumer immediately simplifies many aspects:
1. The server session process doesn't need to requeue any checked out
   messages for the receiver when the receiver detaches the link.
   Specifically, messages in the outgoing_unsettled_map and
   outgoing_pending queue don't need to be requeued because the queue
   takes care of requeueing any checked out messages.
2. It simplifies reasoning about clients first detaching and then
   re-attaching in the same session with the same link handle (the handle
   becomes available for re-use once a link is closed): This will result
   in the same RabbitMQ queue consumer tag.
3. It simplifies queue implementations since state needs to be held and
   special logic needs to be applied to consumers that are only
   cancelled (basic.cancel AMQP 0.9.1) but not removed.
4. It makes the single active consumer feature safer when it comes to
   maintaining message order: If a client cancels consumption via AMQP
   0.9.1 basic.cancel, but still has in-flight checked out messages,
   the queue will activate the next consumer. If the AMQP 0.9.1 client
   shortly after crashes, messages to the old consumer will be requeued
   which results in messages being out of order. To maintain message order,
   an AMQP 0.9.1 client must close the whole channel such that messages
   are requeued before the next consumer is activated.
   For AMQP 1.0, the client can either stop the link first (preferred)
   or detach the link directly. Detaching the link will requeue all
   messages before activating the next consumer, therefore maintaining
   message order. Even if the session crashes, message order will be
   maintained.

  ## How?

`rabbit_queue_type:cancel` accepts a spec as argument.
The new interaction between session proc and classic queue proc (let's
call it cancel API v2) must be hidden behind a feature flag.
This commit re-uses feature flag credit_api_v2 since it also gets
introduced in 4.0.
2024-06-27 19:50:36 +02:00
David Ansari 19670b625b Copy remove consumer API from qq-v4 2024-06-27 15:09:39 +02:00
Karl Nilsson 237d050341 remove .DS_Store 2024-06-27 10:51:39 +01:00
Loïc Hoguin 18f8ee1457
Merge pull request #11549 from rabbitmq/loic-make-cleanups
Various make cleanup/consolidation
2024-06-27 11:42:24 +02:00
David Ansari 53b20ec78a Fix amqp_client_SUITE flake
Fix the following sporadic failure:

```
=== === Reason: {assertMatch,
                     [{module,amqp_client_SUITE},
                      {line,446},
                      {expression,
                          "rabbitmq_amqp_client : delete_queue ( LinkPair , QName )"},
                      {pattern,"{ ok , # { message_count := NumMsgs } }"},
                      {value,{ok,#{message_count => 29}}}]}
  in function  amqp_client_SUITE:sender_settle_mode_mixed/1 (amqp_client_SUITE.erl, line 446)
 ```
The last message (30th) is sent as settled.
It apparently happened that all messages up to 29 got stored.
The 29th message also got confirmed.
Subsequently the queue got deleted with only 29 ready messages.

Bumping NumMsgs to 31 ensures that the last message is sent unsettled.
2024-06-27 08:57:53 +00:00
David Ansari eed48ef3ee Fix amqp_client_SUITE flake
Fix the following sporadic error in CI:
```
=== Location: [{amqp_client_SUITE,available_messages,3137},
              {test_server,ts_tc,1793},
              {test_server,run_test_case_eval1,1302},
              {test_server,run_test_case_eval,1234}]
=== === Reason: {assertEqual,
                     [{module,amqp_client_SUITE},
                      {line,3137},
                      {expression,"get_available_messages ( Receiver )"},
                      {expected,5000},
                      {value,0}]}
```

The client decrements the available variable from 1 to 0 when it
receives the transfer and sends a credit_exhausted event to the CT test
case proc. The CT test case proc queries the client session proc for available
messages, which is still 0. The FLOW frame from RabbitMQ to the client with the
available=5000 set could arrive shortly after.
2024-06-27 08:00:48 +00:00
David Ansari 6aa2de3f4f Avoid crash in test case
Avoid the following unexpected error in mixed version testing where
feature flag credit_api_v2 is disabled:
```
[error] <0.1319.0> Timed out waiting for credit reply from quorum queue 'stop' in vhost '/'. Hint: Enable feature flag credit_api_v2
[warning] <0.1319.0> Closing session for connection <0.1314.0>: {'v1_0.error',
[warning] <0.1319.0>                                             {symbol,<<"amqp:internal-error">>},
[warning] <0.1319.0>                                             {utf8,
[warning] <0.1319.0>                                              <<"Timed out waiting for credit reply from quorum queue 'stop' in vhost '/'. Hint: Enable feature flag credit_api_v2">>},
[warning] <0.1319.0>                                             undefined}
[error] <0.1319.0> ** Generic server <0.1319.0> terminating
[error] <0.1319.0> ** Last message in was {'$gen_cast',
[error] <0.1319.0>                            {frame_body,
[error] <0.1319.0>                                {'v1_0.flow',
[error] <0.1319.0>                                    {uint,283},
[error] <0.1319.0>                                    {uint,65535},
[error] <0.1319.0>                                    {uint,298},
[error] <0.1319.0>                                    {uint,4294967295},
[error] <0.1319.0>                                    {uint,1},
[error] <0.1319.0>                                    {uint,282},
[error] <0.1319.0>                                    {uint,50},
[error] <0.1319.0>                                    {uint,0},
[error] <0.1319.0>                                    undefined,undefined,undefined}}}
```

Presumably, the server session proc timed out receiving a credit reply from
the quorum queue because the test case deleted the quorum queue
concurrently with the client (and therefore also the server session
process) topping up link credit.

This commit detaches the link first and ends the session synchronously
before deleting the quorum queue.
2024-06-27 07:30:23 +00:00
Michael Klishin 083800809f
Merge pull request #11436 from rabbitmq/loic-remove-fhc
Remove most of the fd related FHC code
2024-06-26 15:19:49 -04:00
Michael Klishin de866b81cb
Merge pull request #11547 from rabbitmq/speed-up-rabbit-tests
Speed up rabbit tests suites
2024-06-26 15:15:06 -04:00
Michael Klishin 8ae8f733e5
Merge pull request #11567 from rabbitmq/osiris-v1.8.2
Osiris v1.8.2
2024-06-26 13:59:57 -04:00
David Ansari 2adcdc1a28 Attempt to de-flake
This commit attempts to remove the following flake:
```
{amqp_client_SUITE,server_closes_link,1113}
{badmatch,[<14696.3530.0>,<14696.3453.0>]}
```
by waiting after each test case until sessions were de-registered from
the 1st RabbitMQ node.
2024-06-26 15:57:26 +02:00
Karl Nilsson 8995e00d3f Osiris v1.8.2
* Use the latest method of building the RabbitMQ OCI in actions by @pjk25 in https://github.com/rabbitmq/osiris/pull/159
* Correctly handle case where replica exits in handle_continue. by @kjnilsson in https://github.com/rabbitmq/osiris/pull/160
* Add osiris.app.src file by @lukebakken in https://github.com/rabbitmq/osiris/pull/158
2024-06-26 14:19:16 +01:00
David Ansari dcb2fe0fd9 Fix message IDs settlement order
## What?

This commit fixes issues that were present only on `main`
branch and were introduced by #9022.

1. Classic queues (specifically `rabbit_queue_consumers:subtract_acks/3`)
   expect message IDs to be (n)acked in the order as they were delivered
   to the channel / session proc.
   Hence, the `lists:usort(MsgIds0)` in `rabbit_classic_queue:settle/5`
   was wrong, causing not all messages to be acked and adding a regression
   also to AMQP 0.9.1.
2. The order in which the session proc requeues or rejects multiple
   message IDs at once is important. For example, if the client sends a
   DISPOSITION with first=3 and last=5, the message IDs corresponding to
   delivery IDs 3,4,5 must be requeued or rejected in exactly that
   order.
   For example, quorum queues use this order of message IDs in
   34d3f94374/deps/rabbit/src/rabbit_fifo.erl (L226-L234)
   to dead letter in that order.

 ## How?

The session proc will settle (internal) message IDs to queues in ascending
(AMQP) delivery ID order, i.e. in the order messages were sent to the
client and in the order messages were settled by the client.

This commit chooses to keep the session's outgoing_unsettled_map map
data structure.

An alternative would have been to use a queue or lqueue for the
outgoing_unsettled_map as done in
* 34d3f94374/deps/rabbit/src/rabbit_channel.erl (L135)
* 34d3f94374/deps/rabbit/src/rabbit_queue_consumers.erl (L43)

Whether a queue (as done by `rabbit_channel`) or a map (as done by
`rabbit_amqp_session`) performs better depends on the pattern how
clients ack messages.

A queue will likely perform good enough because usually the oldest
delivered messages will be acked first.
However, given that there can be many different consumers on an AMQP
0.9.1 channel or AMQP 1.0 session, this commit favours a map because
it will likely generate less garbage and is very efficient when for
example a single new message (or few new messages) gets acked while
many (older) messages are still checked out by the session (but by
possibly different AMQP 1.0 receivers).
2024-06-26 13:35:41 +02:00
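The ordering requirement in the commit above can be sketched as follows (illustrative only; the names are hypothetical, not the rabbit_amqp_session internals): expanding a DISPOSITION `[first, last]` range and settling the corresponding message IDs in ascending delivery-ID order:

```python
def settle_range(unsettled, first, last):
    # Expand the [first, last] delivery-ID range in ascending order and
    # collect the corresponding queue message IDs; the queue must receive
    # them in exactly this order (e.g. for in-order dead-lettering).
    msg_ids = []
    for delivery_id in range(first, last + 1):
        if delivery_id in unsettled:
            msg_ids.append(unsettled.pop(delivery_id))
    return msg_ids

unsettled = {3: "m3", 4: "m4", 5: "m5", 7: "m7"}
print(settle_range(unsettled, 3, 5))  # ['m3', 'm4', 'm5']
```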
Karl Nilsson 1825f2be77 Fix flake in rabbit_stream_queue_SUITE 2024-06-26 09:32:08 +01:00
Loïc Hoguin a64d1e67fc
Remove looking_glass
It has largely been superseded by `perf`. It is no longer
generally useful. It can always be added to BUILD_DEPS for
the rare cases it is needed, or installed locally and
pointed to by setting its path to ERL_LIBS.
2024-06-26 09:56:46 +02:00
Karl Nilsson c36b56b626 speed up signal_handling_SUITE 2024-06-26 07:38:29 +01:00
Karl Nilsson d25017d8cf speed up rabbit_fifo_dlx_integration_SUITE slightly 2024-06-26 07:38:29 +01:00
Karl Nilsson e2767a750d further speed up quorum_queue_SUITE 2024-06-26 07:38:29 +01:00
Karl Nilsson e84cb65287 speed up dead_lettering_SUITE
By lowering tick intervals.
2024-06-26 07:38:29 +01:00
Karl Nilsson 3bb5030132 speed up queue_parallel_SUITE
By removing groups that test quorum configurations that are no
longer relevant.
2024-06-26 07:38:29 +01:00
Karl Nilsson 6843384c85 speed up metrics_SUITE 2024-06-26 07:38:29 +01:00
Karl Nilsson 7f0f632de7 dynamic_qq test tweak 2024-06-26 07:38:29 +01:00
Karl Nilsson 30c14e6f35 speed up product_info_SUITE
By running all tests in a single node rather than starting a new
one for each test.
2024-06-26 07:38:29 +01:00
Karl Nilsson 371b429ea1 speed up consumer_timeout_SUITE
By reducing consumer timeout and the receive timeout.
2024-06-26 07:38:29 +01:00
Karl Nilsson 25d79fa448 speed up rabbit_fifo_prop_SUITE
mostly by reducing the sizes of some properties as they can run
quite slowly in ci
2024-06-26 07:38:29 +01:00
Karl Nilsson f919fee7f1 speed up quorum_queues_SUITE
AFTER: gmake -C deps/rabbit ct-quorum_queue  6.15s user 4.25s system 2% cpu 6:25.29 total
2024-06-26 07:38:29 +01:00
Karl Nilsson 3551309baf speed up dynamic_qq_SUITE
BEFORE: time gmake -C deps/rabbit ct-dynamic_qq  1.92s user 1.44s system 2% cpu 2:23.56 total

AFTER: time gmake -C deps/rabbit ct-dynamic_qq  1.66s user 1.22s system 2% cpu 1:56.44 total
2024-06-26 07:38:29 +01:00
Karl Nilsson 13a1a7c7fe speed up rabbit_stream_queue_SUITE
Reduce the number of tests that are run for 2 nodes.

BEFORE: time gmake -C deps/rabbit ct-rabbit_stream_queue  7.22s user 5.72s system 2% cpu 8:28.18 total

AFTER: time gmake -C deps/rabbit ct-rabbit_stream_queue  27.04s user 8.43s system 10% cpu 5:38.63 total
2024-06-26 07:38:29 +01:00
Loïc Hoguin 9f15e978b1
make: Remove xrefr
It is no longer used by Erlang.mk.
2024-06-25 13:08:08 +02:00
Michael Klishin 6bc258e9ea
README: remove Erlang 22 and 23 status badges
Actions are visible enough on GitHub to not have to worry
about updating them.
2024-06-24 14:58:24 -04:00
David Ansari b524639e32 Remove dead code 2024-06-24 18:17:40 +02:00
Loïc Hoguin 2a64a0f6c8
Restore FD info in rabbitmqctl status
The FD limits are still valuable.

The FD used will still show some information during CQv1
upgrade to v2 so it is kept for now. But in the future
it will have to be reworked to query the system, or be
removed.
2024-06-24 12:07:51 +02:00
Loïc Hoguin b14bb68bff
Restore a warning about low FD limits
The message has been tweaked; it isn't about FHC
or queues but about system limits only. The
ulimit() function can later be moved out of
FHC when FHC gets fully removed.
2024-06-24 12:07:51 +02:00
Loïc Hoguin d222a36ca2
Additional cleanup following partial FHC removal 2024-06-24 12:07:51 +02:00
Loïc Hoguin 5c8366f753
Remove file_handle_cache_stats module
The stats were not removed from management agent, instead
they are hardcoded to zero in the agent itself.
2024-06-24 12:07:51 +02:00
Loïc Hoguin 6a47eaad22
Zero sockets_used/sockets_limit stats
They are no longer used.

This removes a couple file_handle_cache:info/1 calls.

We are not removing them from the HTTP API to avoid
breaking things unintentionally.
2024-06-24 12:07:51 +02:00
Loïc Hoguin 49bedfc17e
Remove most of the fd related FHC code
Stats were not removed, including management UI stats
relating to FDs.

Web-MQTT and Web-STOMP configuration relating to FHC
were not removed.

The file_handle_cache itself must be kept until we
remove CQv1.
2024-06-24 12:07:51 +02:00
Michael Klishin 2e5cb21bce Handle a case where a DQT is 'quorum' but client-provided props are incompatible 2024-06-24 04:11:02 -04:00
Michael Klishin 1a48bb7921 Correctly merge non-empty x-args that do not include queue type
Besides fixing a regression detected by priority_queue_SUITE,
this introduces a drive-by change:

rabbit_priority_queue: avoid an exception when
max priority is a negative value that circumvented validation
2024-06-24 02:09:13 -04:00
Michael Klishin f3b7a346f9 Make 'queue.declare' aware of virtual host DQT
at validation time.

DQT = default queue type.

When a client provides no queue type, validation
should take the defaults (virtual host, global,
and the last resort fallback) into account
instead of considering the type to
be "undefined".

References #11457 #11528
2024-06-24 01:13:14 -04:00
Michael Klishin f5cb65b5d1 rabbit_queue: edit a comment post-#9815 2024-06-23 23:02:41 -04:00
Michael Klishin 7335aaaf1b
Merge pull request #11528 from rabbitmq/mk-fix-virtual-host-creation-post-11457
Follow-up to #11457
2024-06-22 05:15:22 -04:00
Michael Klishin b822be02af
Revert "cuttlefish tls schema for amqp_client" 2024-06-22 04:16:50 -04:00
Michael Klishin 1e577a82fc Follow-up to #11457
The queue type argument won't always be a binary,
for example, when a virtual host is created.

As such, the validation code should accept at
least atoms in addition to binaries.

While at it, improve logging and error reporting
when DQT validation fails, and make the
definition import tests focussed on
virtual hosts a bit more robust.
2024-06-22 02:16:22 -04:00
Simon Unge 24fb0334ac Add schema duplicate for amqp 1.0 2024-06-21 21:43:07 -04:00
Simon Unge 145592efe9 Remove server options and move to rabbit schema 2024-06-21 21:43:07 -04:00
Michael Klishin 0b84acd164
Merge pull request #11515 from rabbitmq/issue-11514
Handle unknown QQ state
2024-06-21 21:41:59 -04:00
Michael Klishin 7c2d29d1e4
Merge pull request #11513 from rabbitmq/loic-remove-ram-durations
CQ: Remove rabbit_memory_monitor and RAM durations
2024-06-21 21:40:13 -04:00
Michael Klishin f51528244e definition_import_SUITE: fix a subtle timing issue
In case 16, an await_condition/2 condition was
not correctly matching the error. As a result,
the function proceeded to the assertion step
earlier than it should have, failing with
an obscure function_clause.

This was because an {error, Context} clause
was not correct.

In addition to fixing it, this change adds a
catch-all clause and verifies the loaded
tagged virtual host before running any assertions
on it.

If the virtual host was not imported, case 16
will now fail with a specific CT log message.

References #11457 because the changes there
exposed this behavior in CI.
2024-06-21 21:32:38 -04:00
Luke Bakken 1cf5477d35 Add `vhost:new_metadata` function 2024-06-21 11:25:18 -04:00
Luke Bakken d6fa3b71d3 Remove unused function `rabbit_vhost:put_vhost/5` 2024-06-21 11:25:18 -04:00
Luke Bakken ce16e61d08 Do not overwrite `default_queue_type` with `undefined`
Importing a definitions file with no `default_queue_type` metadata for a vhost will result in that vhost's value being set to `undefined`.

Once set to a non-`undefined` value, this PR prevents `default_queue_type` from being set back to `undefined`
2024-06-21 11:25:18 -04:00
Michal Kuratczyk bfedfdca71
Handle unknown QQ state
ra_state may contain a QQ state such as {'foo',init,unknown}.
Before this fix, all_replica_states doesn't map such states
to a 2-tuple which leads to a crash in maps:from_list because
a 3-tuple can't be handled.

A crash in rabbit_quorum_queue:all_replica_states leads to no
results being returned from a given node when the CLI asks for
QQs with minimum quorum.
2024-06-20 17:49:43 +02:00
Loïc Hoguin 1ca46f1c63
CQ: Remove rabbit_memory_monitor and RAM durations
CQs have not used RAM durations for some time, following
the introduction of v2.
2024-06-20 15:19:51 +02:00
Jean-Sébastien Pédron d0c13b4a42
Revert "rabbit_feature_flags: Retry after erpc:call() fails with `noconnection`"
This reverts commit 8749c605f5.

[Why]
The patch was supposed to solve an issue that we didn't understand and
that was likely a network/DNS problem outside of RabbitMQ. We know it
didn't solve that issue because it was reported again 6 months after the
initial pull request (#8411).

What we are sure of, however, is that it significantly increased RabbitMQ's
testing time, because the code loops for 10+ minutes if the remote node
is not running.

The retry in the Feature flags subsystem was not the right place either.
The `noconnection` error is visible there because it runs earlier during
RabbitMQ startup. But retrying there won't solve a network issue
magically.

There are two ways to create a cluster:
1. peer discovery and this subsystem takes care of retries if necessary
   and appropriate
2. manually using the CLI, in which case the user is responsible for
   starting RabbitMQ nodes and clustering them

Let's revert it until the root cause is really understood.
2024-06-20 14:26:24 +02:00
Michal Kuratczyk 489de21a76
Deflake vhost_is_created_with_default_user
This test relied on the order of items
in the list of users returned and would crash
with `case_clause` if `guest` was returned first.
2024-06-19 13:52:56 +02:00
Rin Kuryloski 5debebfaf3 Use rules_elixir to build the cli without mix
Certain elixir-native deps are still built with mix, but this can be
corrected later
2024-06-18 14:50:34 +02:00
Arnaud Cogoluègnes a52b8c0eb3
Bump Maven to 3.9.8 in Java test suites 2024-06-18 08:55:19 +02:00
Michal Kuratczyk 41a4d1711d
OTP27 support (#11366)
* "maybe" is now a keyword
* Bump horus to 0.2.5 and switch to hex
* Get rid of some deprecated callbacks/functions
2024-06-18 07:32:58 +02:00
Loïc Hoguin d959c8a43d
Merge pull request #11112 from rabbitmq/loic-faster-cq-shared-store-gc
4.x: Additional CQv2 message store optimisations
2024-06-17 15:30:25 +02:00
Michael Klishin 6605a7330f
Merge pull request #11455 from rabbitmq/default-max-message-size
Reduce default maximum message size to 16MB
2024-06-14 13:49:26 -04:00
Diana Parra Corbacho c11b812ef6 Reduce default maximum message size to 16MB 2024-06-14 11:55:03 +02:00
Loïc Hoguin cc73b86e80
CQ: Remove rabbit_msg_store.hrl
It is no longer needed as only a single module uses it.
2024-06-14 11:52:03 +02:00
Loïc Hoguin 41ce4da5ca
CQ: Remove ability to change shared store index module
It will always use the ETS index. This change lets us
do optimisations that would otherwise not be possible,
including 81b2c39834953d9e1bd28938b7a6e472498fdf13.

A small functional change is included in this commit:
we now always use ets:update_counter to update the
ref_count, instead of a mix of update_{counter,fields}.

When upgrading to 4.0, the index will be rebuilt for
all users that were using a custom index module.
2024-06-14 11:52:03 +02:00
Loïc Hoguin d45fbc3da4
CQ: Write large messages into their own files 2024-06-14 11:52:03 +02:00
Loïc Hoguin 18acc01a47
CQ: Make dirty recovery of shared store more efficient
This only applies to v2 because modifying this part of the v1
code is seen as too risky considering v1 will soon get removed.
2024-06-14 11:52:03 +02:00
Jean-Sébastien Pédron 3968f8e1fe
Merge pull request #11414 from rabbitmq/ff-controller-block-startup-v2
Avoid deadlock on terminating ff controller
2024-06-13 17:31:54 +02:00
Diana Parra Corbacho e8986588d0 rabbit_feature_flags: avoid deadlock on terminating controller
If the global process is itself, it means it just crashed.
It shouldn't wait but terminate
2024-06-13 15:07:59 +02:00
Jean-Sébastien Pédron df658317ee
peer_discovery_tmp_hidden_node_SUITE: Use IP address to simulate a long node name
[Why]
The `longnames`-based testcase depends on a configured FQDN. Buildbuddy
hosts were incorrectly configured and were lacking one. As a workaround,
`/etc/hosts` was modified to add an FQDN.

We don't use Buildbuddy anymore, but the problem also appeared on team
members' Broadcom-managed laptops (also having an incomplete network
configuration). More importantly, GitHub workers seem to have the same
problem, but randomly!

At this point, we can't rely on the host's correct network
configuration.

[How]
The testsuite is modified to use a hard-coded IP address, 127.0.0.1, to
simulate a long Erlang node name (i.e. it has dots).
2024-06-13 15:04:07 +02:00
Jean-Sébastien Pédron 0875169cc1
Merge pull request #11437 from rabbitmq/stop-rabbitmq_prelaunch-if-rabbit-fails-to-start
rabbit: Stop `rabbitmq_prelaunch` if we fail to start `rabbit`
2024-06-13 09:17:30 +02:00
Michael Klishin 3fbc4f12d4
Merge pull request #11435 from rabbitmq/rabbitmq-server-11434
3.13: ctl list_unresponsive_queues: handle queues of type MQTT QoS 0
2024-06-12 18:03:17 -04:00
Michael Klishin 18705dde66 Closes #11434 2024-06-12 12:37:09 -04:00
Jean-Sébastien Pédron 87a8c15380
rabbit: Stop `rabbitmq_prelaunch` if we fail to start `rabbit`
[Why]
This is important if the environment changes between that error and the
next attempt to start `rabbit': the environment is only read during the
start of `rabbitmq_prelaunch' (and the cached context is cleaned on
stop).

This fixes a transient failure of the
`unit_log_management_SUITE:log_file_fails_to_initialise_during_startup/1`
testcase.

V2: Explicitly ignore the return value of `application:stop/1`.
2024-06-12 18:26:04 +02:00
Karl Nilsson cca64e8ed6 Add new local random exchange type.
This exchange type will only bind classic queues and will only return
routes for queues that are local to the publishing connection. If more than
one queue is bound it will make a random choice of the locally bound queues.

This exchange type is suitable as a component in systems that run
highly available low-latency RPC workloads.

Co-authored-by: Marcial Rosales <mrosales@pivotal.io>
2024-06-12 17:16:45 +01:00
Karl Nilsson 3390fc97fb etcd peer discovery fixes
Instead of relying on the complex and non-deterministic default node
selection mechanism inside peer discovery, this change makes the
etcd backend implementation perform the leader selection itself based on
the etcd create_revision of each entry. Although not spelled out anywhere
explicitly, it is likely that a property called "Create Revision" is going
to remain consistent throughout the lifetime of the etcd key.

Either way this is likely to be an improvement on the current approach.
2024-06-12 15:31:27 +01:00
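The selection rule this commit relies on can be sketched in a few lines. A hedged Python sketch (the actual backend is Erlang, and the entry shape shown here is an assumption): since create_revision is assigned by etcd and is stable for the lifetime of a key, every node that lists the registered entries deterministically picks the same leader.

```python
def select_leader(entries):
    """Pick the entry with the smallest etcd create_revision, i.e. the
    node whose key was created first. All nodes evaluating the same
    entry list arrive at the same answer."""
    return min(entries, key=lambda e: e["create_revision"])["node"]
```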
Loïc Hoguin 4bfef36251
make: Remove unused USE_PROPER_QC
This has been in the file since 2015 but as we are now just
including 'proper' as a dependency we don't need it anymore
(and haven't needed it for some time).

This removal changes base execution speed quite a bit:

  make -C deps/rabbit nope  0,29s user 0,22s system 235% cpu 0,220 total
  make -C deps/rabbit nope  0,03s user 0,02s system 101% cpu 0,053 total
2024-06-10 09:42:29 +02:00
Michael Klishin f39b7bba9c bazel run gazelle post CMQ removal 2024-06-08 22:16:11 -04:00
Michael Klishin b9f387b988
Merge pull request #11412 from rabbitmq/link-error
Prefer link error over session error
2024-06-07 11:10:48 -04:00
Michael Davis 5e4a4326ef
Merge pull request #11398 from rabbitmq/md/khepri/read-only-permissions-transaction-queries
Khepri: Use read-only transactions to query for user/topic permissions
2024-06-07 08:20:02 -05:00
David Ansari 895bf3e3cb Prefer link error over session error
when AMQP client publishes with a bad 'to' address such that other links
on the same session can continue to work.
2024-06-07 13:41:05 +02:00
Michael Klishin c592405b7a
Merge pull request #11397 from rabbitmq/direct-reply-to
4.x: add tests for AMQP 0.9.1 direct reply to feature
2024-06-07 02:28:31 -04:00
Michael Klishin c03d90fbec Pass Dialyzer 2024-06-06 22:09:19 -04:00
Michael Klishin 959d3fd15c Simplify a rabbitmqctl_integration_SUITE test 2024-06-06 21:21:26 -04:00
Michael Klishin 1fc325f5d7 Introduce a way to disable virtual host reconciliation
Certain test suites need virtual hosts to be down
for an extended period of time, and this feature
gets in the way ;)
2024-06-06 21:21:26 -04:00
Michael Klishin 5337f8f525 Reconcile virtual hosts shortly after a cluster member is detected to have come online 2024-06-06 21:21:26 -04:00
Michael Klishin 6e41d3e6e6 Introduce 'rabbitmqctl reconcile_vhosts' 2024-06-06 21:21:26 -04:00
Michael Klishin 37778fd934 Periodically reconcile virtual host processes
for up to 10 times.

When a cluster is formed from scratch and a virtual
host is declared in the process (can be via
definitions, or plugins, or any other way),
that virtual host's process tree will be started
on all reachable cluster nodes at that moment
in time.

However, this can be just a subset of all nodes
expected to join the cluster.

The most effective solution is to run this
reconciliation process on a timer for up to
5 minutes by default. This matches how long
some other parts of RabbitMQ (3.x) expect
cluster formation to take, at most.

Per discussion with @dcorbacho @mkuratczyk.
2024-06-06 21:21:26 -04:00
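The reconciliation loop described above (run on a timer, bounded at 10 attempts, stop early once every expected node hosts the virtual host's process tree) can be sketched like this. A minimal Python sketch; the function and parameter names are invented for illustration and do not correspond to the Erlang implementation:

```python
import time

def reconcile_periodically(reconcile, interval_s, max_runs=10):
    """Invoke the reconciliation function up to max_runs times,
    sleeping interval_s between attempts, and stop early once it
    reports that all reachable nodes are covered."""
    for _ in range(max_runs):
        if reconcile():
            return True
        time.sleep(interval_s)
    return False
```

With a default interval on the order of tens of seconds, ten runs roughly matches the "up to 5 minutes" cluster formation window mentioned above.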
David Ansari d806d698af Add tests for AMQP 0.9.1 direct reply to feature
Add tests for https://www.rabbitmq.com/docs/direct-reply-to

Relates https://github.com/rabbitmq/rabbitmq-server/issues/11380
2024-06-06 18:11:39 +02:00
Michael Klishin 959ebeaf7d
Merge pull request #11395 from charltonliv/patch-1
Fix typo in rabbitmq.conf.example
2024-06-06 11:05:51 -04:00
Michael Davis d0425e5361
cluster_minority_SUITE: Test definitions export 2024-06-06 10:49:27 -04:00
Michael Davis 7a8669ea73
Improve cluster_minority_SUITE
This is a mix of a few changes:

* Suppress the compiler warning from the export_all attribute.
* Lower Khepri's command handling timeout value. By default this is
  set to 30s in rabbit which makes each of the cases in
  `client_operations` take an excessively long time. Before this change
  the suite took around 10 minutes to complete. Now it takes between two
  and three minutes.
* Swap the order of client and broker teardown steps in end_per_group
  hook. The client teardown steps will always fail if run after the
  broker teardown steps because they rely on a value in `Config` that
  is deleted by broker teardown.
2024-06-06 10:49:27 -04:00
Michael Davis 8f8170dd01
rabbit_db_user: Use read-only transactions to query user/topic permissions
The parent commit favors low-latency queries for read-only transactions,
so marking these read-only transactions as explicitly read-only will
cause them to be run against the local node. This ensures that functions
like `rabbit_auth_backend_internal:list_permissions/0` do not return
`{error, timeout}` when the cluster is in minority, fixing 'bad
generator' errors during definitions export (via
`rabbit_definitions:all_definitions/0`).
2024-06-06 10:49:27 -04:00
Michael Davis a3abe3e6b2
rabbit_khepri: Use low-latency favor for read-only transactions
Read-write transactions behave like commands (for example 'put' or
'delete') but read-only transactions behave like queries (for example
'get' or 'exists'). When a transaction is passed as read-only we can
re-use the same default options we use for most Khepri queries and
favor low-latency queries. This reads the values from the local node
rather than the leader (or a consistent query), making the query faster
and preventing timeouts when attempting the query when the cluster is
in minority.

The child commit will update some read-only transaction callsites that
take advantage of this low-latency favor.
2024-06-06 10:49:27 -04:00
David Ansari f5d2fd68e3 Prefer monotonic time to measure durations
Monotonic time instead of wall clock time should be used to measure a
duration.

This is mostly a cosmetic change given that consumer timeouts are 30
mins by default.

See also
https://github.com/rabbitmq/khepri/issues/239
https://github.com/rabbitmq/rabbitmq-server/pull/10928
2024-06-06 16:34:08 +02:00
Charlton Liv 3e88eb16ab
Update rabbitmq.conf.example 2024-06-06 22:31:00 +08:00
Michael Klishin 41e5d38b94 rabbitmq-diagnostics status: drop date-based support status field
As of [1], this field has become irrelevant or even
misleading.

1. https://www.rabbitmq.com/blog/2024/05/31/new-community-support-policy
2024-06-05 23:20:35 -04:00
Michael Davis 79e1350a87
Remove feature flag code handling potential code server deadlocks
These test cases and RPC calls were concerned with deadlocks possible
from modifying the code server process from multiple callers. With the
switch to persistent_term in the parent commits these kinds of deadlocks
are no longer possible.

We should keep the RPC calls to `rabbit_ff_registry_wrapper:inventory/0`
though for mixed-version compatibility with nodes that use module
generation instead of `persistent_term` for their registry.
2024-06-05 11:26:41 -04:00
Michael Davis ea2da2d32d
rabbit_feature_flags: Inject test feature flags atomically
`rabbit_feature_flags:inject_test_feature_flags/2` could discard flags
from the persistent term because the read and write to the persistent
term were not atomic. `feature_flags_SUITE:registry_concurrent_reloads/1`
spawns many processes to modify the feature flag registry concurrently
but also updates this persistent term concurrently. The processes would
race so that many would read the initial flags, add their flag and write
that state, discarding any flags that had been written in the meantime.

We can add a lock around changes to the persistent term to make the
changes atomic.

Non-atomic updates to the persistent term caused unexpected behavior in
the `registry_concurrent_reloads/1` case previously. The
`PT_TESTSUITE_ATTRS` persistent_term only ended up containing a few of
the desired feature flags (for example only `ff_02` and `ff_06` along
with the base `ff_a` and `ff_b`). The case did not fail because the
registry continued to make progress towards that set of feature flags.
However the test case never reached the "all feature flags appeared"
state of the spammer and so the new assertion added at the end of the
case in this commit would fail.
2024-06-05 11:26:41 -04:00
Michael Davis df19ea4434
rabbit_ff_registry_factory: Lock around registry initialization
This prevents a potential race which was possible when two processes
attempted to initialize the feature flag registry at the same time. One
process attempting to initialize the registry could take a relatively
long time to read the available feature flags meanwhile another process
could read a more recently changed set of available feature flags.
Locking before reading and writing the flag prevents one update from
clobbering the other.

This race was not possible before the registry used a persistent_term
because we read the version of the registry module as the very first
step when initializing. When we created a new module we checked the
version again and retried the update if it changed in the meantime. That
retry mechanism has been removed since we no longer track the current
version of the registry. Instead we can use a local lock for the same
effect.
2024-06-05 11:26:41 -04:00
Michael Davis 82cac70779
rabbit_ff_registry: Refactor as a `persistent_term`
Previously the feature flag registry was a module which was regenerated
(i.e. with `erl_syntax` and `compile:forms/2`) whenever the set of known
feature flags or feature flag states changed. Compiling and loading
a module is fairly expensive though, costing around 35-40ms for me
locally, and the registry module is regenerated many times on boot. So
this change saves a fair amount of time on boot. A regular single-node
broker boots locally for me in around 4250ms via `bazel run broker`.
With this patch the boot time falls to around 3300ms.

This patch also improves the time it takes to enable individual feature
flags. `time rabbitmqctl enable_feature_flag khepri_db` for example
changes locally for me from around 1.2s to 1.0s with this change.
2024-06-05 11:26:41 -04:00
Karl Nilsson ebbff46c96
Merge pull request #11307 from rabbitmq/amqp-flow-control-poc-2
Introduce outbound RabbitMQ internal AMQP flow control
2024-06-05 15:01:05 +01:00
David Ansari 5f659b5eb9 Support AMQP tracing
All incoming and outgoing AMQP frames excluding payload will be logged.

In addition, HTTP over AMQP management requests and response payloads
will be logged.

Each trace includes the module name, function name and module line which
outputs the trace.

Example log output:
```
[debug] <0.823.0> rabbit_amqp_reader:parse_frame_body/2 372
[debug] <0.823.0> channel 1 ->
[debug] <0.823.0>  {'v1_0.transfer',[{handle,{uint,0}},
[debug] <0.823.0>                    {delivery_id,{uint,4294967295}},
[debug] <0.823.0>                    {delivery_tag,{binary,<<>>}},
[debug] <0.823.0>                    {message_format,{uint,0}},
[debug] <0.823.0>                    {settled,true},
[debug] <0.823.0>                    {more,false},
[debug] <0.823.0>                    {rcv_settle_mode,undefined},
[debug] <0.823.0>                    {state,undefined},
[debug] <0.823.0>                    {resume,undefined},
[debug] <0.823.0>                    {aborted,undefined},
[debug] <0.823.0>                    {batchable,undefined}]}
[debug] <0.823.0>  followed by 100 bytes of payload
[debug] <0.823.0>
[debug] <0.828.0> rabbit_amqp_management:handle_request/5 69
[debug] <0.828.0> HTTP over AMQP request:
[debug] <0.828.0>  [{'v1_0.properties',
[debug] <0.828.0>       [{message_id,{binary,<<74,195,231,105,234,167,156,6>>}},
[debug] <0.828.0>        {user_id,undefined},
[debug] <0.828.0>        {to,{utf8,<<"/queues/q1">>}},
[debug] <0.828.0>        {subject,{utf8,<<"PUT">>}},
[debug] <0.828.0>        {reply_to,{utf8,<<"$me">>}},
[debug] <0.828.0>        {correlation_id,undefined},
[debug] <0.828.0>        {content_type,undefined},
[debug] <0.828.0>        {content_encoding,undefined},
[debug] <0.828.0>        {absolute_expiry_time,undefined},
[debug] <0.828.0>        {creation_time,undefined},
[debug] <0.828.0>        {group_id,undefined},
[debug] <0.828.0>        {group_sequence,undefined},
[debug] <0.828.0>        {reply_to_group_id,undefined}]},
[debug] <0.828.0>   {'v1_0.amqp_value',
[debug] <0.828.0>       [{content,
[debug] <0.828.0>            {map,
[debug] <0.828.0>                [{{utf8,<<"durable">>},true},
[debug] <0.828.0>                 {{utf8,<<"arguments">>},
[debug] <0.828.0>                  {map,
[debug] <0.828.0>                      [{{utf8,<<"x-queue-type">>},{utf8,<<"quorum">>}}]}}]}}]}]
[debug] <0.828.0> HTTP over AMQP response:
[debug] <0.828.0>  [{'v1_0.properties',
[debug] <0.828.0>       [{message_id,undefined},
[debug] <0.828.0>        {user_id,undefined},
[debug] <0.828.0>        {to,undefined},
[debug] <0.828.0>        {subject,{utf8,<<"201">>}},
[debug] <0.828.0>        {reply_to,undefined},
[debug] <0.828.0>        {correlation_id,{binary,<<74,195,231,105,234,167,156,6>>}},
[debug] <0.828.0>        {content_type,undefined},
[debug] <0.828.0>        {content_encoding,undefined},
[debug] <0.828.0>        {absolute_expiry_time,undefined},
[debug] <0.828.0>        {creation_time,undefined},
[debug] <0.828.0>        {group_id,undefined},
[debug] <0.828.0>        {group_sequence,undefined},
[debug] <0.828.0>        {reply_to_group_id,undefined}]},
[debug] <0.828.0>   {'v1_0.application_properties',
[debug] <0.828.0>       [{content,[{{utf8,<<"http:response">>},{utf8,<<"1.1">>}}]}]},
[debug] <0.828.0>   {'v1_0.amqp_value',
[debug] <0.828.0>       [{content,
[debug] <0.828.0>            {map,
[debug] <0.828.0>                [{{utf8,<<"leader">>},{utf8,<<"rabbit@VQD7JFK37T">>}},
[debug] <0.828.0>                 {{utf8,<<"replicas">>},
[debug] <0.828.0>                  {array,utf8,[{utf8,<<"rabbit@VQD7JFK37T">>}]}},
[debug] <0.828.0>                 {{utf8,<<"message_count">>},{ulong,0}},
[debug] <0.828.0>                 {{utf8,<<"consumer_count">>},{uint,0}},
[debug] <0.828.0>                 {{utf8,<<"name">>},{utf8,<<"q1">>}},
[debug] <0.828.0>                 {{utf8,<<"vhost">>},{utf8,<<"/">>}},
[debug] <0.828.0>                 {{utf8,<<"durable">>},{boolean,true}},
[debug] <0.828.0>                 {{utf8,<<"auto_delete">>},{boolean,false}},
[debug] <0.828.0>                 {{utf8,<<"exclusive">>},{boolean,false}},
[debug] <0.828.0>                 {{utf8,<<"type">>},{utf8,<<"quorum">>}},
[debug] <0.828.0>                 {{utf8,<<"arguments">>},
[debug] <0.828.0>                  {map,
[debug] <0.828.0>                      [{{utf8,<<"x-queue-type">>},{utf8,<<"quorum">>}}]}}]}}]}]
[debug] <0.828.0>
[debug] <0.826.0> rabbit_amqp_writer:assemble_frame/4 215
[debug] <0.826.0> channel 1 <-
[debug] <0.826.0>  {'v1_0.transfer',[{handle,{uint,1}},
[debug] <0.826.0>                    {delivery_id,{uint,0}},
[debug] <0.826.0>                    {delivery_tag,{binary,<<>>}},
[debug] <0.826.0>                    {message_format,{uint,0}},
[debug] <0.826.0>                    {settled,true},
[debug] <0.826.0>                    {more,undefined},
[debug] <0.826.0>                    {rcv_settle_mode,undefined},
[debug] <0.826.0>                    {state,undefined},
[debug] <0.826.0>                    {resume,undefined},
[debug] <0.826.0>                    {aborted,undefined},
[debug] <0.826.0>                    {batchable,undefined}]}
[debug] <0.826.0>  followed by 268 bytes of payload
[debug] <0.826.0>
```
2024-06-05 13:38:15 +02:00
David Ansari cf3c8baa11 Avoid frequent #resource{} record creation
Avoid #resource{} record creation in every queue type
interaction by storing the queue #resource{} in the
session state.
2024-06-05 13:26:20 +02:00
Michael Klishin 44c381929e
Merge pull request #11373 from rabbitmq/qq-server-recovery
QQ: Enable server recovery.
2024-06-04 15:50:16 -04:00
Karl Nilsson 08a8f93e97 QQ: Enable server recovery.
This ensures quorum queue processes are restarted after a ra system restart.
2024-06-04 16:41:12 +01:00
David Ansari d70e529d9a Introduce outbound RabbitMQ internal AMQP flow control
## What?

Introduce RabbitMQ internal flow control for messages sent to AMQP
clients.

Prior to this PR, when an AMQP client granted a large amount of link
credit (e.g. 100k) to the sending queue, the sending queue sent
that amount of messages to the session process no matter what.
This becomes problematic for memory usage when the session process
cannot send out messages fast enough to the AMQP client, especially if
1. The writer proc cannot send fast enough. This can happen when
the AMQP client does not receive fast enough and causes TCP
back-pressure to the server. Or
2. The server session proc is limited by remote-incoming-window.

Both scenarios are now added as test cases.
Tests
* tcp_back_pressure_rabbitmq_internal_flow_quorum_queue
* tcp_back_pressure_rabbitmq_internal_flow_classic_queue
cover scenario 1.

Tests
* incoming_window_closed_rabbitmq_internal_flow_quorum_queue
* incoming_window_closed_rabbitmq_internal_flow_classic_queue
cover scenario 2.

This PR sends messages from queues to AMQP clients in a more controlled
manner.

To illustrate:
```
make run-broker PLUGINS="rabbitmq_management" RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+S 4"
observer_cli:start()
mq
```
where `mq` sorts by message queue length.
Create a stream:
```
deps/rabbitmq_management/bin/rabbitmqadmin declare queue name=s1 queue_type=stream durable=true
```
Next, send and receive from the Stream via AMQP.
Grant a large number of link credit to the sending stream:
```
docker run -it --rm --add-host host.docker.internal:host-gateway ssorj/quiver:latest
bash-5.1# quiver --version
quiver 0.4.0-SNAPSHOT
bash-5.1# quiver //host.docker.internal//queue/s1 --durable -d 30s --credit 100000
```

**Before** this PR:
```
RESULTS

Count ............................................... 100,696 messages
Duration ............................................... 30.0 seconds
Sender rate ......................................... 120,422 messages/s
Receiver rate ......................................... 3,363 messages/s
End-to-end rate ....................................... 3,359 messages/s
```
We observe that all 100k link credit worth of messages are buffered in the
writer proc's mailbox:
```
|No | Pid        | MsgQueue  |Name or Initial Call                 |      Memory | Reductions          |Current Function                  |
|1  |<0.845.0>   |100001     |rabbit_amqp_writer:init/1            |  126.0734 MB| 466633491           |prim_inet:send/5                  |
```

**After** this PR:
```
RESULTS

Count ............................................. 2,973,440 messages
Duration ............................................... 30.0 seconds
Sender rate ......................................... 123,322 messages/s
Receiver rate ........................................ 99,250 messages/s
End-to-end rate ...................................... 99,148 messages/s
```
We observe that the message queue lengths of both writer and session
procs are low.

 ## How?

Our goal is to have queues send out messages in a controlled manner
without overloading RabbitMQ itself.
We want RabbitMQ internal flow control between:
```
AMQP writer proc <--- session proc <--- queue proc
```
A similar concept exists for classic queues sending via AMQP 0.9.1.
We want an approach that applies to AMQP and works generically for all
queue types.

For the interaction between AMQP writer proc and session proc we use a
simple credit based approach reusing module `credit_flow`.

For the interaction between session proc and queue proc, the following options
exist:

 ### Option 1
The session process provides explicit feedback to the queue after it
has sent N messages.
This approach is implemented in
https://github.com/ansd/rabbitmq-server/tree/amqp-flow-control-poc-1
and works well.
A new `rabbit_queue_type:sent/4` API was added which lets the queue proc know
that it can send further messages to the session proc.

Pros:
* Will work equally well for AMQP 0.9.1, e.g. when quorum queues send messages
  in auto ack mode to AMQP 0.9.1 clients.
* Simple for the session proc

Cons:
* Slightly more complexity in every queue type implementation
* Multiple Ra commands (settle, credit, sent) to decide when a quorum
  queue sends more messages.

 ### Option 2
A dual link approach where two AMQP links exists between
```
AMQP client <---link--> session proc <---link---> queue proc
```
When the client grants a large amount of credits, the session proc will
top up credits to the queue proc periodically in smaller batches.

Pros:
* No queue type modifications required.
* Re-uses AMQP link flow control

Cons:
* Significant added complexity in the session proc. A client can
  dynamically decrease or increase credits and dynamically change the drain
  mode while the session tops up credit to the queue.

 ### Option 3
Credit is a 32 bit unsigned integer.
The spec mandates that the receiver independently chooses a credit.
Nothing in the spec prevents the receiver from choosing a credit of 1 billion.
However the credit value is merely a **maximum**:
> The link-credit variable defines the current maximum legal amount that the delivery-count can be increased by.

Therefore, the server is not required to send all available messages to this
receiver.

For delivery-count:
> Only the sender MAY independently modify this field.

"independently" could be interpreted to mean that the sender may add to the
delivery-count irrespective of what the client chose for drain and link-credit.

Option 3: the queue proc could, at credit time, already consume credit
and advance the delivery-count if the credit is too large, before checking out any messages.
For example if credit is 100k, but the queue only wants to send 1k, the queue could
consume 99k of credits and advance the delivery-count, and subsequently send maximum 1k messages.
If the queue advanced the delivery-count, RabbitMQ must send a FLOW to the receiver,
otherwise the receiver wouldn’t know that it ran out of link-credit.

Pros:
* Very simple

Cons:
* Possibly unexpected behaviour for receiving AMQP clients
* Possibly poor end-to-end throughput in auto-ack mode because the queue
  would send a batch of messages followed by a FLOW containing the advanced
  delivery-count. Only thereafter will the client learn that it ran out of
  credits and top up again. This feels like synchronously pulling a batch
  of messages. In contrast, option 2 sends out more messages as soon as
  the previous messages left RabbitMQ without requiring again a credit top
  up from the receiver.
* drain mode with large credits requires the queue to send all available
  messages and only thereafter advance the delivery-count. Therefore,
  drain mode breaks option 3 somewhat.

 ### Option 4
Session proc drops message payload when its outgoing-pending queue gets
too large and re-reads payloads from the queue once the message can be
sent (see `get_checked_out` Ra command for quorum queues).

Cons:
* Would need to be implemented for every queue type, especially classic queues
* Doesn't limit the amount of message metadata in the session proc's
  outgoing-pending queue

 ### Decision: Option 2
This commit implements option 2 to avoid any queue type modification.
At most one credit request is in-flight between session process and
queue process for a given queue consumer.
If the AMQP client sends another FLOW in between, the session proc
stashes the FLOW until it processes the previous credit reply.

A delivery is only sent from the outgoing-pending queue if the
session proc is not blocked by
1. writer proc, or
2. remote-incoming-window

The credit reply is placed into the outgoing-pending queue.
This ensures that the session proc will only top up the next batch of
credits if sufficient messages were sent out to the writer proc.

A future commit could additionally have each queue limit the number of
unacked messages for a given AMQP consumer, or alternatively make use
of session outgoing-window.
2024-06-04 13:11:55 +02:00
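The chosen option 2 can be sketched as a small credit-accounting object on the session side: however much credit the AMQP client grants, the session forwards it to the queue in small batches, with at most one batch outstanding at a time. An illustrative Python sketch; the class, the batch size, and the method names are assumptions, not the Erlang session implementation:

```python
class SessionCredit:
    """Session-side view of option 2: top up the queue's credit in
    small batches so that at most one batch of messages can be
    in flight between queue proc and session proc."""

    def __init__(self, client_credit, batch_size=256):
        self.remaining = client_credit   # credit granted by the AMQP client
        self.batch_size = batch_size
        self.outstanding = 0             # credit currently held by the queue

    def top_up(self):
        """Grant the queue the next batch, but only if the previous
        batch has been fully delivered to the writer proc."""
        if self.outstanding > 0 or self.remaining == 0:
            return 0
        grant = min(self.batch_size, self.remaining)
        self.remaining -= grant
        self.outstanding = grant
        return grant

    def delivered(self, n):
        """Called once n deliveries have left for the writer proc."""
        self.outstanding -= n
```

The effect is that a client granting 100k credit no longer causes 100k messages to pile up in the session and writer mailboxes; the queue only ever holds one small batch of credit at a time.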
Diana Parra Corbacho 11a90b975c Keep StartMode argument for mixed-version compatibility 2024-06-04 13:00:31 +02:00
Diana Parra Corbacho a83e80fc42 Re-introduce `gm_group` table
For mixed-version clusters, as the gm table is created even though
CMQs have already been deprecated
2024-06-04 13:00:31 +02:00
Diana Parra Corbacho 3bbda5bdba Remove classic mirror queues 2024-06-04 13:00:31 +02:00
Michael Klishin bd111f01f7
Merge pull request #11278 from SimonUnge/qq_repair_amqqueue_on_tick 2024-06-03 08:57:19 -04:00
David Ansari bd847b8cac Put credit flow config into persistent term
Put configuration credit_flow_default_credit into persistent term such
that the tuple doesn't have to be copied on the hot path.

Also, change persistent term keys from `{rabbit, AtomKey}` to `AtomKey`
so that hashing becomes cheaper.
2024-05-31 16:20:51 +02:00
Simon Unge 83a0eedb4e Let QQs check if it needs to repair amqqueue nodes 2024-05-29 20:17:31 +00:00
Simon Unge 19a751890c Remove checks of the vhost limit as that is now handled by rabbit_queue_type:declare
Add a new error return tuple when the queue limit is exceeded
2024-05-27 09:53:54 +02:00
Michael Klishin a8d9b9c1db
Merge pull request #11245 from cloudamqp/expose_segment_count_prometheus
Prometheus: add segment file count to queue_metrics and expose
2024-05-24 21:24:57 -04:00
Michal Kuratczyk cfa3de4b2b
Remove unused imports (thanks elp!) 2024-05-23 16:36:08 +02:00
markus812498 3b1ff80f0a added segment file count to queue_metrics ets and exposed in /metrics endpoint 2024-05-23 09:50:24 +12:00
Simon Unge 0d68c79e00 Fix broken code 2024-05-21 17:57:42 +00:00
Simon Unge 60ae6c0943 cluster wide queue limit 2024-05-21 17:41:53 +00:00
Michael Klishin f124bca95d
Merge pull request #11222 from SimonUnge/move_vhost_limit_check
Enforce/honor per-vhost queue limit for all protocols
2024-05-21 13:28:25 -04:00
Loïc Hoguin 93e4e88872
CQ: Fix entry missing from cache leading to crash on read
The issue comes from a mechanic that allows us to avoid writing
to disk when a message has already been consumed. It works fine
in normal circumstances, but fan-out makes things trickier.

When multiple queues write and read the same message, we could
get a crash. Let's say queues A and B both handle message Msg.

* Queue A asks store to write Msg
* Queue B asks store to write Msg
* Queue B asks store to delete Msg (message was immediately consumed)
* Store processes Msg write from queue A
  * Store writes Msg to current file
* Store processes Msg write from queue B
  * Store notices queue B doesn't need Msg anymore; doesn't write
  * Store clears Msg from the cache
* Queue A tries to read Msg
  * Msg is missing from the cache
  * Queue A tries to read from disk
  * Msg is in the current write file and may not be on disk yet
  * Crash

The problem is that the store clears Msg from the cache. We need
all messages written to the current file to remain in the cache
as we can't guarantee the data is on disk when comes the time
to read. That is, until we roll over to the next file.

The issue was that a match was wrong: instead of matching a single
location from the index, the code was matching against a list. The
error was present in the code for almost 13 years since commit
2ef30dc95e.
2024-05-21 15:48:44 +02:00
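The invariant this fix restores is that entries written to the current file stay in the cache until the file rolls over, even when some reader no longer needs them, because the bytes may not yet be on disk. A minimal Python sketch of that invariant, with an invented `StoreCache` shape that does not mirror the Erlang message store:

```python
class StoreCache:
    """Cache for messages written to the store's current file. The cache,
    not the disk, is authoritative for this file: entries are only
    dropped at rollover, when the previous file is known to be on disk."""

    def __init__(self):
        self.cache = {}          # msg_id -> payload, current file only

    def write(self, msg_id, payload):
        self.cache[msg_id] = payload

    def read(self, msg_id, read_disk):
        # Serve from the cache first; falling through to disk is only
        # safe for messages written to already rolled-over files.
        if msg_id in self.cache:
            return self.cache[msg_id]
        return read_disk(msg_id)

    def roll_over(self):
        self.cache.clear()       # safe: previous file is fully on disk
```

Clearing an entry early, as the buggy match effectively allowed, would reproduce the crash in the trace above: a reader falls through to disk for data that is still only in the write buffer.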
Michael Klishin 81bcec49a4
Merge pull request #11279 from rabbitmq/mk-start-virtual-host-on-an-arbitrary-set-of-nodes
Introduce rabbit_vhost_sup_sup:start_on_all_nodes/2
2024-05-20 19:56:28 -04:00
Michael Klishin 22b16a180c
Merge pull request #11280 from rabbitmq/mk-rabbit-direct-cosmetics
rabbit_direct: log cosmetics
2024-05-20 18:37:52 -04:00
Michael Klishin 64a8ee6de4 Log this message at info level 2024-05-20 18:24:19 -04:00
Michael Klishin 93227a9cc9 Make a log message less scary
This callback is also invoked when a virtual host
shuts down cleanly during a node shutdown.
2024-05-20 18:21:45 -04:00
Michael Klishin 68e49170af rabbit_direct: log cosmetics 2024-05-20 17:21:05 -04:00
Michael Klishin f22a02713f Introduce rabbit_vhost_sup_sup:start_on_all_nodes/2 2024-05-20 17:02:55 -04:00
Michael Klishin ee3092940d rabbit_vhost: log a warning when a virtual host process stops (or fails) 2024-05-20 00:20:23 -04:00
David Ansari df9e9978ce Fix type spec
and improve naming
2024-05-17 12:14:58 +02:00
David Ansari e8c5d22e9b Have minimum queue TTL take effect for quorum queues
For classic queues, if both policy and queue argument are set
for queue TTL, the minimum takes effect.

Prior to this commit, for quorum queues if both policy and
queue argument are set for queue TTL, the policy always overrides the
queue argument.

This commit brings the quorum queue queue TTL resolution to classic
queue's behaviour. This allows developers to provide a custom lower
queue TTL while the operator policy acts as an upper-bound safeguard.
2024-05-16 17:40:59 +02:00
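The resolution rule this commit brings to quorum queues is simply "the minimum of the two wins". A hedged one-function Python sketch of that rule (the function name is invented; `None` stands for "not set"):

```python
def effective_queue_ttl(policy_ttl, argument_ttl):
    """Both a policy and a queue argument may set the queue TTL (ms).
    The smaller value takes effect, so the policy acts as an upper
    bound while the queue argument can only lower it further."""
    values = [v for v in (policy_ttl, argument_ttl) if v is not None]
    return min(values) if values else None
```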
Simon Unge 4a6c009df9 Minor cosmetic fix 2024-05-14 23:18:33 +00:00
Simon Unge a5214f356c Now with spec 2024-05-14 20:54:43 +00:00
Simon Unge d2192fbcfd Prepare for adding more checks 2024-05-14 20:25:49 +00:00
Simon Unge 36f7e8d6f4 Instead of throwing an error, return protocol_error 2024-05-14 20:25:49 +00:00
Simon Unge 23a32e18e7 Move the check to be called on all queue types 2024-05-14 20:25:49 +00:00
Michael Davis 3435a0d362
Fix a flake in amqp_client_SUITE
`event_recorder` listens for events globally rather than per-connection
so it's possible for `connection_closed` and other events from prior
test cases to be captured and mixed with the events the test is
interested in, causing a flake.

Instead we can assert that the events we gather from `event_recorder`
contain the ones emitted by the connection.
2024-05-14 14:50:50 -04:00
David Ansari 1c4af0ccfc
Merge pull request #11230 from rabbitmq/dead-letter-bcc
Remove BCC from x-death routing-keys
2024-05-14 16:50:24 +02:00
David Ansari f122483e34 Allow AMQP client to leave 'more' unset 2024-05-14 15:33:49 +02:00
David Ansari 90a40107b4 Remove BCC from x-death routing-keys
This commit is a follow up of https://github.com/rabbitmq/rabbitmq-server/pull/11174
which broke the following Java client test:
```
./mvnw verify -P '!setup-test-cluster' -Drabbitmqctl.bin=DOCKER:rabbitmq -Dit.test=DeadLetterExchange#deadLetterNewRK
```

The desired documented behaviour is the following:
> routing-keys: the routing keys (including CC keys but excluding BCC ones) the message was published with

This behaviour should be respected also for messages dead lettered into a
stream. Therefore, instead of first including the BCC keys in the `#death.routing_keys` field
and removing it again in mc_amqpl before sending the routing-keys to the
client as done in v3.13.2 in
dc25ef5329/deps/rabbit/src/mc_amqpl.erl (L527)
we instead omit directly the BCC keys from `#death.routing_keys` when
recording a death event.

This commit records the BCC keys in their own mc `bcc` annotation in `mc_amqpl:init/1`.
2024-05-14 13:40:23 +02:00
Jean-Sébastien Pédron 3147ab7d47
rabbit_peer_discovery: Allow backends to select the node to join themselves
[Why]
Before, the backend would always return a list of nodes and the
subsystem would select one based on their uptimes, the nodes they are
already clustered with, and the readiness of their database.

This works well in general but has some limitations. For instance with
the Consul backend, the discoverability of nodes depends on when each
one registered and in which order. Therefore, the node with the highest
uptime might not be the first that registers. In this case, the one that
registers first will only discover itself and boot as a standalone node.
However, the one with the highest uptime that registered after will
discover both nodes. It will then select itself as the node to join
because it has the highest uptime. In the end both nodes form distinct
clusters.

Another example is the Kubernetes backend. The current solution works
fine but it could be optimized: the backend knows we always want to join
the first node ("$node-0") regardless of the order in which they are
started because picking the first node alphabetically is fine.

Therefore we want to let the backend select the node to join if it
wants to.

[How]
The `list_nodes()` callback can now return the following term:

    {ok, {SelectedNode :: node(), NodeType}}

If the subsystem sees this return value, it will consider that the
returned node is the one to join. It will still query properties because
we want to make sure the node's database is ready before joining it.
2024-05-14 09:40:44 +02:00
Jean-Sébastien Pédron cb9f0d8a44
rabbit_peer_discovery: Register node before running discovery
[Why]
The two backends that use registration are Consul and etcd. The
discovery process relies on the registered nodes: they return whatever
was previously registered.

With the new checks and failsafes added in peer discovery in RabbitMQ
3.13.0, the fact that registration happens after running discovery
breaks the Consul and etcd backends.

It used to work before because the first node would eventually time out
waiting for a non-empty list of nodes from the backend and proceed as a
standalone node, registering itself on the way. Following nodes would
then discover that first node.

Among the new checks, the node running discovery expects to find itself
in the list of discovered nodes. Because it didn't register yet, it will
never find itself.

[How]
The solution is to register first, then run discovery. The node should
at least get itself in the discovered nodes.
2024-05-14 09:40:44 +02:00
David Ansari 083889ec01 Fix test assertion 2024-05-13 18:21:40 +02:00
David Ansari 68f8ef14f3 Fix comment 2024-05-13 17:59:42 +02:00
Loïc Hoguin 0942573be7
Merge pull request #10656 from rabbitmq/loic-remove-cqv1-option
4.x: remove availability of CQv1
2024-05-13 14:33:16 +02:00
Loïc Hoguin ecf46002e0
Remove availability of CQv1
We reject CQv1 in rabbit.schema as well.

Most of the v1 code is still around as it is needed
for conversion to v2. It will be removed at a later
time when conversion is no longer supported.

We don't shard the CQ property suite anymore:
there's only 1 case remaining.
2024-05-13 13:06:07 +02:00
David Ansari 6b300a2f34 Fix dead lettering
# What?

This commit fixes #11159, #11160, #11173.

  # How?

  ## Background

RabbitMQ allows dead lettering messages for four different reasons. Three
of these cause messages to be dead lettered automatically within the
broker (maxlen, expired, delivery_limit), and one is caused by an
explicit client action (rejected).

RabbitMQ also allows dead letter topologies. When a message is dead
lettered, it is re-published to an exchange, and therefore zero to
multiple target queues. These target queues can in turn dead letter
messages. Hence it is possible to create a cycle of queues where
messages get dead lettered endlessly, which is what we want to avoid.

  ## Alternative approach

One approach to avoid such endless cycles is to use a concept similar
to the TTL field of an IPv4 datagram, or the hop limit field of an IPv6
datagram. These fields ensure that IP packets aren't circulating forever
in the Internet. Each router decrements this counter. If this counter
reaches 0, the sender is notified and the packet gets dropped.

We could use the same approach in RabbitMQ: Whenever a queue dead
letters a message, a dead_letter_hop_limit field could be decremented.
If this field reaches 0, the message will be dropped.
Such a hop limit field could have a sensible default value, for example
32. The sender of the message could override this value. Likewise, the
client rejecting a message could set a new value via the Modified
outcome.
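The hop-limit alternative described above could be sketched like this. Note this approach was not implemented; the annotation key `dead_letter_hop_limit` and the default of 32 are taken from the hypothetical proposal in the text:

```python
DEFAULT_DEAD_LETTER_HOP_LIMIT = 32  # illustrative default from the proposal

def on_dead_letter(annotations):
    """Decrement a hypothetical dead_letter_hop_limit annotation.

    Returns True if the message may be republished to the dead-letter
    exchange, False if it must be dropped (limit exhausted), analogous
    to an IP packet whose TTL reached 0.
    """
    limit = annotations.get("dead_letter_hop_limit",
                            DEFAULT_DEAD_LETTER_HOP_LIMIT)
    if limit <= 0:
        return False  # drop the message; notify the sender if possible
    annotations["dead_letter_hop_limit"] = limit - 1
    return True
```

A sender (or a client using the Modified outcome) would simply set its own starting value for the annotation.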

Such an approach has multiple advantages:
1. No dead letter cycle detection per se needs to be performed within
   the broker which is a slight simplification to what we have today.
2. Simpler dead letter topologies. One very common use case is that
   clients re-try sending the message after some time by consuming from
   a dead-letter queue and rejecting the message such that the message
   gets republished to the original queue. Instead of requiring explicit
   client actions, which increases complexity, an x-message-ttl argument
   could be set on the dead-letter queue to automatically retry after
   some time. This is a big simplification because it eliminates the
   need of various frameworks that retry, such as
   https://docs.spring.io/spring-cloud-stream/reference/rabbit/rabbit_overview/rabbitmq-retry.html
3. No dead letter history information needs to be compressed because
   there is a clear limit on how often a message gets dead lettered.
   Therefore, the full history including timestamps of every dead letter
   event will be available to clients.

Disadvantages:
1. Breaks a lot of clients, even for 4.0.

  ## 3.12 approach

Instead of decrementing a counter, the approach up to 3.12 has been to
drop the message if the message cycled automatically. A message cycled
automatically if no client explicitly rejected the message, i.e. the
message got dead lettered due to maxlen, expired, or delivery_limit, but
not due to rejected.

In this approach, the broker must be able to detect such cycles
reliably.
Reliably detecting dead letter cycles broke in 3.13 due to #11159 and #11160.

To reliably detect cycles, the broker must be able to obtain the exact
order of dead letter events for a given message. In 3.13.0 - 3.13.2, the
order cannot exactly be determined because wall clock time is used to
record the death time.

This commit uses the same approach as done in 3.12: a list ordered by
death recency is used with the most recent death at the head of the
list.

To not grow this list endlessly (for example when a client rejects the
same message hundreds of times), this list should be compacted.
This commit, like 3.12, compacts by tuple `{Queue, Reason}`:
If this message got already dead lettered from this Queue for this
Reason, then only a counter is incremented and the element is moved to
the front of the list.

  ## Streams & AMQP 1.0 clients

Dead lettering from a stream doesn't make sense because:
1. a client cannot reject a message from a stream since the stream must
   maintain the total order of events to be consumed by multiple clients.
2. TTL is implemented by Stream retention where only old Stream segments
   are automatically deleted (or archived in the future).
3. same applies to maxlen

Although messages cannot be dead lettered **from** a stream, messages can be dead lettered
**into** a stream. This commit provides clients consuming from a stream the death history: #11173

Additionally, this commit provides AMQP 1.0 clients the death history via
message annotation `x-opt-deaths` which contains the same information as
AMQP 0.9.1 header `x-death`.

Both, storing the death history in a stream and providing death history
to an AMQP 1.0 client, use the same encoding: a message annotation
`x-opt-deaths` that contains an array of maps ordered by death recency.
The information encoded is the same as in the AMQP 0.9.1 x-death header.

Instead of providing an array of maps, a better approach could be to use
an array of a custom AMQP death type, such as:
```xml
<amqp name="rabbitmq">
    <section name="custom-types">
        <type name="death" class="composite" source="list">
            <descriptor name="rabbitmq:death:list" code="0x00000000:0x000000255"/>
            <field name="queue" type="string" mandatory="true" label="the name of the queue the message was dead lettered from"/>
            <field name="reason" type="symbol" mandatory="true" label="the reason why this message was dead lettered"/>
            <field name="count" type="ulong" default="1" label="how many times this message was dead lettered from this queue for this reason"/>
            <field name="time" mandatory="true" type="timestamp" label="the first time when this message was dead lettered from this queue for this reason"/>
            <field name="exchange" type="string" default="" label="the exchange this message was published to before it was dead lettered for the first time from this queue for this reason"/>
            <field name="routing-keys" type="string" default="" multiple="true" label="the routing keys this message was published with before it was dead lettered for the first time from this queue for this reason"/>
            <field name="ttl" type="milliseconds" label="the time to live of this message before it was dead lettered for the first time from this queue for reason ‘expired’"/>
        </type>
    </section>
</amqp>
```

However, encoding and decoding custom AMQP types that are nested within
arrays which in turn are nested within the message annotation map can be
difficult for clients and the broker. Also, each client will need to
know the custom AMQP type. Therefore, for now, we use an array of maps.
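The array-of-maps shape of `x-opt-deaths` might look roughly like this (shown as Python data for readability; field names mirror the custom-type sketch above, and the broker's exact keys and value types may differ):

```python
# Illustrative shape of the x-opt-deaths message annotation: an array
# of maps ordered by death recency, most recent death first.
x_opt_deaths = [
    {
        "queue": "orders",            # queue the message was dead lettered from
        "reason": "expired",          # why it was dead lettered
        "count": 3,                   # deaths from this queue for this reason
        "first-time": 1715600000000,  # ms since epoch of the first such death
        "exchange": "amq.direct",
        "routing-keys": ["orders"],
        "ttl": 30000,                 # only present for reason 'expired'
    },
    {
        "queue": "orders-dlq",
        "reason": "rejected",
        "count": 1,
        "first-time": 1715599000000,
        "exchange": "",
        "routing-keys": ["orders-dlq"],
    },
]
```

The same information is carried for AMQP 0.9.1 consumers in the `x-death` header.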

  ## Feature flag
The new way to record death information is done via mc annotation
`deaths_v2`.
Because old nodes do not know this new annotation, recording death
information via mc annotation `deaths_v2` is hidden behind a new feature
flag `message_containers_deaths_v2`.

If this feature flag is disabled, a message will continue to use the
3.13.0 - 3.13.2 way to record death information in mc annotation
`deaths`, or even the older way within `x-death` header directly if
feature flag message_containers is also disabled.

Only if feature flag `message_containers_deaths_v2` is enabled and this
message hasn't been dead lettered before, will the new mc annotation
`deaths_v2` be used.
2024-05-13 11:00:39 +02:00
Karl Nilsson d180474b0e
Merge pull request #11121 from rabbitmq/qq-default-config-changes
4.x: Use compressed mem tables and set a wal max entries default for quorum queues
2024-05-13 09:24:27 +01:00
Karl Nilsson 49a4c66fce Move feature flag check outside of mc
The `mc` module is ideally meant to be kept pure and portable, while
feature flags have external infrastructure dependencies as well as
impure semantics.

Moving the check of this feature flag into the amqp session
simplifies the code (as no message containers with the new
format will enter the system before the feature flag is enabled).
2024-05-13 10:05:40 +02:00
Carl Hörberg b6aacb7181 Declaring an exchange with an invalid type is a precondition failure
Not an invalid command that closes the whole connection.
2024-05-11 22:11:24 +02:00
Karl Nilsson 5123680a17 QQ: default to compressed mem tables and set a wal max entries default
Compressed ETS tables may introduce a small throughput penalty (low single
digit %) but can reduce peak Ra memory use by 30-50%.

Also set a default wal_max_entries value to avoid mem tables growing
too large when using very small message sizes (as more than 1M tiny
messages can easily fit into one WAL file).

Ra 2.10.1 contains a needed type spec fix.
2024-05-09 14:24:22 +01:00
Michael Klishin 8cb86ca65d Make: move seshat dep definition to rabbitmq-components.mk 2024-05-09 02:17:41 -04:00
Michael Klishin 7a8d8736f6
A simpler does_policy_configure_cmq impl for maps
Suggested by @the-mikedavis

Co-authored-by: Michael Davis <mcarsondavis@gmail.com>
2024-05-08 11:44:36 -04:00
Michael Klishin b9e015ca0c Be more defensive when determining if a policy has CMQ keys
References #11192.
2024-05-08 11:18:51 -04:00
Luke Bakken 620fff22f1
Log errors from `ranch:handshake`
Fixes #11171

An MQTT user encountered TLS handshake timeouts with their IoT device,
and the actual error from `ssl:handshake` / `ranch:handshake` was not
caught and logged.

At this time, `ranch` uses `exit(normal)` in the case of timeouts, but
that should change in the future
(https://github.com/ninenines/ranch/issues/336)
2024-05-06 08:24:38 -07:00
Simon Unge ca4ead762f rabbit_quorum_queue_periodic_membership_reconciliation:on_node_down should be called even with Khepri as db 2024-05-03 22:07:30 -04:00
Simon Unge ad69f5b506 Changed schema logic 2024-05-03 19:18:21 +00:00
Simon Unge 2e2bbaff33 Added global default queue type config 2024-05-03 18:41:59 +00:00
David Ansari 1ecaaadb32
Merge pull request #10964 from rabbitmq/amqp-parser
4.x: change AMQP on disk message format & speed up the AMQP parser
2024-05-03 14:48:23 +02:00
Michael Klishin f2af07c613 This should be logged at debug level
There are two info messages in the same function,
they should be enough.
2024-05-03 01:42:22 -04:00
David Ansari c3289689bd Remove unused code 2024-05-02 12:13:02 +02:00
David Ansari 4209f3fa56 Set durable annotation for AMQP messages
This is similar to https://github.com/rabbitmq/rabbitmq-server/pull/11057

What?
For incoming AMQP messages, always set the durable message container annotation.

Why?
Even though defaulting to durable=true when no durable annotation is set
(as was done prior to this commit) is good enough, explicitly setting the
durable annotation makes the code a bit more future proof and
maintainable going forward in 4.0, where we will rely more on the durable
annotation because AMQP 1.0 message headers will be omitted in classic
and quorum queues.

The performance impact of always setting the durable annotation is negligible.
2024-05-02 07:56:00 +00:00
David Ansari 6018155e9b Add property test for AMQP serializer and parser 2024-05-02 07:56:00 +00:00
David Ansari 9f42e40346 Fix crash when old node receives new AMQP message
Similar to how we convert from mc_amqp to mc_amqpl before
sending to a classic queue or quorum queue process if
feature flag message_containers_store_amqp_v1 is disabled,
we also need to do the same conversion before sending to an MQTT QoS 0
queue on the old node.
2024-05-02 07:56:00 +00:00
David Ansari 6225dc9928 Do not parse entire AMQP body
Prior to this commit the entire amqp-value or amqp-sequence sections
were parsed when converting a message from mc_amqp.
Parsing the entire amqp-value or amqp-sequence section can generate a
huge amount of garbage depending on how large these sections are.

Given that other protocols cannot make use of amqp-value and
amqp-sequence sections anyway, leave them AMQP encoded when converting
from mc_amqp.

In fact, prior to this commit, the entire body section was parsed,
generating huge amounts of garbage just to subsequently encode it again
in mc_amqpl or mc_mqtt.

The new conversion interface from mc_amqp to other mc_* modules will
either output amqp-data sections or the encoded amqp-value /
amqp-sequence sections.
2024-05-02 07:56:00 +00:00
David Ansari 0697c20d70 Support end-to-end checksums over the bare message
This commit enables client apps to automatically perform end-to-end
checksumming over the bare message (i.e. body + application defined
headers).

This commit allows an app to configure the AMQP client:
* for a sending link to automatically compute CRC-32 or Adler-32
checksums over each bare message including the computed checksum
as a footer annotation, and
* for a receiving link to automatically lookup the expected CRC-32
or Adler-32 checksum in the footer annotation and, if present, check
the received checksum against the actually computed checksum.

The commit comes with the following advantages:
1. Transparent end-to-end checksumming. Although checksumming is
   performed by TCP and RabbitMQ queues using the disk, end-to-end
   checksumming is a level higher up and can therefore detect bit flips
   within RabbitMQ nodes or load balancers and other bit flips that
   went unnoticed.
2. Not only is the body checksummed, but also the properties and
   application-properties sections. This is an advantage over AMQP 0.9.1
   because the AMQP protocol disallows modification of the bare message.
3. This commit is currently used for testing the RabbitMQ AMQP
   implementation, but it shows the feasibility of how apps could also
   get integrity guarantees of the whole bare message using HMACs or
   signatures.
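The end-to-end checksum idea can be sketched in a few lines. This is a minimal illustration, not the client library's API; the footer key `"x-opt-crc-32"` is a hypothetical name, and CRC-32 is computed here with Python's `zlib` for demonstration:

```python
import zlib

def add_checksum_footer(bare_message: bytes, footer: dict) -> dict:
    """Sender side: compute CRC-32 over the bare message (properties +
    application-properties + body) and record it as a footer annotation."""
    footer["x-opt-crc-32"] = zlib.crc32(bare_message)
    return footer

def verify_checksum_footer(bare_message: bytes, footer: dict) -> bool:
    """Receiver side: if a checksum footer is present, recompute the
    CRC-32 over the received bare message and compare."""
    expected = footer.get("x-opt-crc-32")
    if expected is None:
        return True  # no checksum present; nothing to verify
    return zlib.crc32(bare_message) == expected
```

Because the footer travels outside the bare message, the checksum itself does not alter the bytes being checksummed, which is what makes this scheme compatible with AMQP's rule that intermediaries must not modify the bare message.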
2024-05-02 07:56:00 +00:00
David Ansari 1d02ea9e55 Fix crashes when message gets dead lettered
Fix crashes when message is originally sent via AMQP and
stored within a classic or quorum queue and subsequently
dead lettered where the dead letter exchange needs access to message
annotations or properties or application-properties.
2024-05-02 07:56:00 +00:00
David Ansari 8040b8f36c Do not store delivery-annotations
as they are only meant to be used between the sending and the receiving peer
2024-05-02 07:56:00 +00:00
David Ansari e8e9ef32cb Delete unused module rabbit_msg_record 2024-05-02 07:56:00 +00:00
David Ansari eac469a8a2 Change AMQP header durable to true by default
AMQP 1.0 section 3.2.1 defines durable=false to be the default.

However, the same section also mentions:
> If the header section is omitted the receiver MUST assume the appropriate
> default values (or the meaning implied by no value being set) for the
> fields within the header unless other target or node specific defaults
> have otherwise been set.

We want RabbitMQ to be secure by default, hence in RabbitMQ we set
durable=true to be the default.
2024-05-02 07:56:00 +00:00
David Ansari 2e15704104 Maintain footer
Ensure the footer gets delivered to the AMQP client as received from the
AMQP client when feature flag message_containers_store_amqp_v1 is
disabled.

Fixes test
```
bazel test //deps/rabbit:amqp_system_SUITE-mixed  -t- --test_sharding_strategy=disabled --test_env FOCUS="-group [dotnet] -case footer"
```
2024-05-02 07:56:00 +00:00
David Ansari 81709d9745 Fix MQTT QoS
This commit fixes test
```
bazel test //deps/rabbitmq_mqtt:shared_SUITE-mixed -t- \
    --test_sharding_strategy=disabled --test_env \
    FOCUS="-group [mqtt,v3,cluster_size_3] -case pubsub"
```

Fix some mixed version tests

Assume the AMQP body, especially the amqp-value section, won't be
parsed. Hence, omit smart conversions from AMQP to MQTT involving the
Payload-Format-Indicator bit.

Fix test

Fix
```
bazel test //deps/amqp10_client:system_SUITE-mixed -t- --test_sharding_strategy=disabled --test_env FOCUS="-group [rabbitmq]
```
2024-05-02 07:56:00 +00:00
David Ansari 0b51b8d39b Introduce feature flag message_containers_store_amqp_v1
Prior to this commit test case
```
bazel test //deps/rabbit:amqp_client_SUITE-mixed -t- \
    --test_sharding_strategy=disabled --test_env \
    FOCUS="-group [cluster_size_3] -case quorum_queue_on_old_node"
```
was failing because `mc_amqp:size(#v1{})` was called on the old node
which doesn't understand the new AMQP on disk message format.

Even though the old 3.13 node never stored mc_amqp messages in classic
or quorum queues, either
1. it will need to understand the new mc_amqp message format, or
2. we should prevent the new format from being sent to 3.13 nodes.

In this commit we decide for the 2nd solution.
In `mc:prepare(store, Msg)`, we convert the new mc_amqp format to
mc_amqpl which is guaranteed to be understood by the old node.
Note that `mc:prepare(store, Msg)` is called not only before actual
storage, but already before the message is sent to the queue process
(which might be hosted by the old node).
The 2nd solution is easier to reason about over the 1st solution
because:
a) We don't have to backport code meant to be for 4.0 to 3.13, and
b) 3.13 is guaranteed to never store mc_amqp messages in classic or
   quorum queues, even in mixed version clusters.

The disadvantage of the 2nd solution is that messages are converted from
mc_amqp to mc_amqpl and back to mc_amqp if there is an AMQP sender and
AMQP receiver. However, this only happens while the new feature flag
is disabled during the rolling upgrade. In a certain sense, this is a
hybrid of how the AMQP 1.0 plugin worked in 3.13: even though we don't
proxy via AMQP 0.9.1 anymore, we still convert to AMQP 0.9.1 (mc_amqpl)
messages when feature flag message_containers_store_amqp_v1 is disabled.
2024-05-02 07:56:00 +00:00