Commit Graph

1065 Commits

David Ansari c2ce905797
Enforce AMQP 1.0 channel-max (#12221)
* Enforce AMQP 1.0 channel-max

Enforce AMQP 1.0 field `channel-max` in the `open` frame by introducing
a new more user friendly setting called `session_max`:
> The channel-max value is the highest channel number that can be used on the connection.
> This value plus one is the maximum number of sessions that can be simultaneously active on the connection.

We set the default value of `session_max` to 64 so that, by
default, RabbitMQ 4.0 allows a maximum of 64 AMQP 1.0 sessions per AMQP 1.0 connection.

More than 64 AMQP 1.0 sessions per connection make little sense.
See also https://www.rabbitmq.com/blog/2024/09/02/amqp-flow-control#session

Limiting the maximum number of sessions per connection can be useful to
protect against
* applications that accidentally open new sessions without ending old sessions
  (session leaks)
* too many metrics being exposed, for example in the future via the
  "/metrics/per-object" Prometheus endpoint with timeseries per session
  being emitted.
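Concretely, an operator could cap sessions in `rabbitmq.conf` like this (an illustrative sketch; the key name follows this commit's `session_max`, the value is arbitrary):

```ini
# rabbitmq.conf (illustrative): allow at most 8 AMQP 1.0 sessions
# per AMQP 1.0 connection (default after this commit: 64)
session_max = 8
```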

This commit does not make use of the existing `channel_max` setting
because:
1. Given that `channel_max = 0` means "no limit", there is no way for an
   operator to limit the number of sessions per connection to 1.
2. Operators might want to set different limits for maximum number of
   AMQP 0.9.1 channels and maximum number of AMQP 1.0 sessions.
3. The default of `channel_max` is very high: It allows using more than
   2,000 AMQP 0.9.1 channels per connection. Lowering this default might
   break existing AMQP 0.9.1 applications.

This commit also fixes a bug in the AMQP 1.0 Erlang client which, prior
to this commit, used channel number 1 for the first session. That's wrong
if a broker allows at most one session by replying with `channel-max = 0`
in the `open` frame. Additionally, the spec recommends:
> To make it easier to monitor AMQP sessions, it is RECOMMENDED that implementations always assign the lowest available unused channel number.
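The spec's recommendation and the `channel-max = 0` edge case can be sketched as follows (a hypothetical helper, not the Erlang client's actual code):

```python
def next_channel(in_use, channel_max):
    # Assign the lowest available unused channel number, as the AMQP 1.0
    # spec recommends. With channel-max = 0, only channel 0 is usable,
    # so a client that starts counting at channel 1 can never open a session.
    for ch in range(channel_max + 1):
        if ch not in in_use:
            return ch
    raise RuntimeError("no channels available on this connection")
```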

Note that in AMQP 0.9.1, channel number 0 has a special meaning:
> The channel number is 0 for all frames which are global to the connection and 1-65535 for frames that
refer to specific channels.

* Apply PR feedback
2024-09-05 17:45:27 +02:00
Jean-Sébastien Pédron 1383c0c415
rabbit_db: Unify Khepri paths API
[Why]

Currently, `rabbit_db_*` modules use and export the following kind of
functions to return the path to the resources they manage:

    khepri_db_thing:khepri_things_path(),
    khepri_db_thing:khepri_thing_path(Identifier).

Internally, `khepri_db_thing:khepri_thing_path(Identifier)` appends
`Identifier` to the list returned by
`khepri_db_thing:khepri_things_path()`. This works for the organization
of the records we have today in Khepri:

    |-- thing
    |   |-- <<"identifier1">>
    |   `-- <<"identifier2">>
    `-- other_thing
        `-- <<"other_identifier1">>

However, with the upcoming organization that leverages the tree in
Khepri, identifiers may be in the middle of the path instead of a leaf
component. We may also put `other_thing` under `thing` in the tree.

That's why we can't really expose a parent directory for `thing` and
`other_thing`. Therefore, `khepri_db_thing:khepri_things_path/0` needs
to go away; only `khepri_db_thing:khepri_thing_path/1` should be
exported and used.

In addition to that, there are several places where paths are hard-coded
(i.e. their definition is duplicated).

[How]

The patch does exactly that. Uses of
`khepri_db_thing:khepri_things_path()` are generally replaced by
`rabbit_db_thing:khepri_thing_path(?KHEPRI_WILDCARD_STAR)`.

Places where the path definitions were duplicated are fixed too by
calling the path building functions.

In the future, for a resource that depends on another one, the
corresponding module will call the `rabbit_db_thing:khepri_thing_path/1`
for that other resource and build its path on top of that.
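The unified scheme can be illustrated with a small sketch (a Python stand-in for the Erlang modules; names mirror the commit):

```python
KHEPRI_WILDCARD_STAR = "*"  # stands in for Khepri's wildcard path component

def khepri_thing_path(identifier):
    # The single exported path builder: returns the full path for one
    # resource. With the upcoming tree layout, identifiers may sit in
    # the middle of the path, so no "parent directory" function exists.
    return ["thing", identifier]

# Callers that used the old khepri_things_path/0 now pass a wildcard:
all_things_path = khepri_thing_path(KHEPRI_WILDCARD_STAR)
```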
2024-09-05 13:58:04 +02:00
Arnaud Cogoluègnes 56964a8f28
Merge pull request #12074 from rabbitmq/issue-11915
Cancel AMQP stream consumer when local stream member is deleted
2024-09-02 16:07:58 +02:00
Loïc Hoguin f0932e3d42
Merge pull request #11778 from rabbitmq/loic-make-it-big
Make cleanups and ct-master introduction
2024-09-02 13:54:39 +02:00
Loïc Hoguin 05b701b3f4
rabbit tests: Don't fail if rabbit already loaded
It seems that this can happen if multiple test suites run
one after the other and a previous test suite did not clean up
perfectly.
2024-09-02 11:44:16 +02:00
Jean-Sébastien Pédron fa6d89212a
Merge pull request #12163 from rabbitmq/fix-node-state-after-failure-to-join-cluster
rabbit_db_cluster: Reset feature flags immediately after a failure to join
2024-08-31 13:38:46 +02:00
Jean-Sébastien Pédron bfc6f83306
rabbit_db_cluster: Reset feature flags immediately after a failure to join
[Why]
If a node failed to join a cluster, `rabbit` was restarted, then the
feature flags were reset and the error was returned. I.e., the error
handling was in a single place at the end of the function.

We need to reset feature flags after a failure because the feature flags
states were copied from the remote node just before the join.

However, resetting them after restarting `rabbit` was incorrect because
feature flags were initialized in a way that didn't match the rest of
the state. This led to crashes during the start of `rabbit`.

[How]
The feature flags are now reset after the failure to join but before
starting `rabbit`.

A new testcase was added to test this scenario.
2024-08-30 17:41:25 +02:00
Michal Kuratczyk 301424235c
Update .NET to 8.0 2024-08-30 08:42:53 +02:00
Loïc Hoguin a17fb13a03
make: Initial work on using ct_master to run tests
Because `ct_master` is yet another Erlang node, used to run
multiple CT nodes (it is effectively in a cluster with the CT
nodes), the tests that change the net_ticktime could no longer
work properly: net_ticktime must have the same value across
the cluster.

The same value had to be set for all tests in order to solve
this. This is why it was changed to 5s across the board. The
lower net_ticktime was used in most places to speed up tests
that must deal with cluster failures, so that value is good
enough for these cases.

One test in amqp_client was using the net_ticktime to test
the behavior of the direct connection timeout with varying
net_ticktime configurations. The test now mocks the
`net_kernel:get_net_ticktime()` function to achieve the
same result.
2024-08-29 15:23:31 +02:00
Loïc Hoguin c66e8740e8
rabbit tests: Redirect logs to ct always
Doing it on a per test suite basis leads to issues if multiple
suites try to configure it, and there's no cleanup performed
anyway.
2024-08-29 15:22:40 +02:00
Michal Kuratczyk 8a03975ba7
Set the default vm_memory_high_watermark to 0.6 (#12161)
The default of 0.4 was very conservative even when it was
set years ago. Since then:
- we moved to CQv2, which has much more predictable memory usage than (non-lazy) CQv1 used to
- we removed CQ mirroring which caused large sudden memory spikes in some situations
- we removed the option to store message payload in memory in quorum queues

For the past two years or so, we've been running all our internal tests and benchmarks
with a value of 0.8 with no OOM kills at all (note: we do this on
Kubernetes, where the Cluster Operator overrides the available memory,
leaving some additional headroom, but effectively we are still using more than
0.6 of memory).
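In `rabbitmq.conf` terms, the new default corresponds to (shown here explicitly; the same value can still be overridden):

```ini
# rabbitmq.conf: new default memory threshold (previously 0.4)
vm_memory_high_watermark.relative = 0.6
```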
2024-08-29 12:10:49 +02:00
Michael Klishin 4fbfd9853a
Merge pull request #12153 from rabbitmq/cloudamqp-exchange_logging_unicode
Support unicode messages by exchange logging
2024-08-28 22:36:14 -04:00
Péter Gömöri 531d6d2922 Support unicode messages by exchange logging
Before this commit, formatting the AMQP body would crash and the log
message would not be published to the log exchange.

Before commit 34bcb911 it even crashed the whole exchange logging
handler which caused the log exchange to be deleted.
2024-08-28 17:33:17 +02:00
Michal Kuratczyk fa221d8eca
Remove memory_monitor_interval 2024-08-28 08:12:49 +02:00
Michael Davis 5b3ae230b7
Merge pull request #12082 from rabbitmq/md/khepri/db-queue-deletion 2024-08-27 07:47:06 -05:00
Michael Klishin 6b444ae907 Exclude this Khepri-specific test from mixed version cluster runs 2024-08-24 21:54:25 -04:00
Michal Kuratczyk 6ca2022fcf await quorum+1 improvements
1. If khepri_db is enabled, rabbitmq_metadata is a critical component
2. When waiting for quorum+1, periodically log what doesn't have the
   quorum+1
   - for components: just list them
   - for queues: list how many we are waiting for and how to display
     them (because there could be a large number, logging that
     could be impractical or even dangerous)
3. make the tests significantly faster by using a single group
2024-08-24 18:49:35 -04:00
Michael Klishin c41c27de06 One more node-wide DQT test
References #11541 #11457 #11528
2024-08-24 05:50:20 -04:00
Diana Parra Corbacho 0061944e9c Cancel AMQP stream consumer when local stream member is deleted
The consumer reader process is gone and there is no way to recover
it as the node does not have a member of the stream anymore,
so it should be cancelled/detached.
2024-08-22 12:39:52 +02:00
Michael Davis a7d099de8c
cluster_minority_SUITE: Add a case for queue deletion 2024-08-21 16:23:48 -04:00
David Ansari 1c6f4be308 Rename quorum queue priority from "low" to "normal"
Rename the two quorum queue priority levels from "low" and "high" to "normal" and
"high". This improves user experience because the default priority level is low /
normal. Prior to this commit, users were confused about why their messages
showed up as low priority. Furthermore, with this rename there is no need to
consult the docs to know whether the default priority level is low or high.
2024-08-20 11:18:36 +02:00
David Ansari b105ca9877 Remove randomized_startup_delay_range config
For RabbitMQ 4.0, this commit removes support for the deprecated `rabbitmq.conf` settings
```
cluster_formation.randomized_startup_delay_range.min
cluster_formation.randomized_startup_delay_range.max
```

The rabbitmq/cluster-operator already removed these settings in
b81e0f9bb8
2024-08-19 14:34:32 +02:00
Michael Davis 49c645a076
Fix rabbit_db_queue_SUITE:update_decorators case
This test called `rabbit_db_queue:update_decorators/1` which doesn't
exist - instead it can call `update_decorators/2` with an empty list.
This commit also adds the test to the `all_tests/0` list - it being
absent is why this wasn't caught before.
2024-08-16 13:27:29 -04:00
Michael Davis f80cd7d477
rabbit_db_queue: Remove unused `set_many/1`
This function was only used by classic mirrored queue code which was
removed in 3bbda5b.
2024-08-16 13:26:37 -04:00
Michael Klishin 7121b802e4
Merge pull request #12026 from rabbitmq/maintenance-revive-fixes
Fixes to rabbit_maintenance:revive/0
2024-08-16 12:15:21 -04:00
David Ansari b6fbc0292a Maintain order of configured SASL mechanisms
RabbitMQ should advertise the SASL mechanisms in the order as
configured in `rabbitmq.conf`.

Starting RabbitMQ with the following `rabbitmq.conf`:
```
auth_mechanisms.1 = PLAIN
auth_mechanisms.2 = AMQPLAIN
auth_mechanisms.3 = ANONYMOUS
```

translates prior to this commit to:
```
1> application:get_env(rabbit, auth_mechanisms).
{ok,['ANONYMOUS','AMQPLAIN','PLAIN']}
```

and after this commit to:
```
1> application:get_env(rabbit, auth_mechanisms).
{ok,['PLAIN','AMQPLAIN','ANONYMOUS']}
```

In our 4.0 docs we write:
> The server mechanisms are ordered in decreasing level of preference.

which complies with https://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-security-v1.0-os.html#type-sasl-mechanisms
2024-08-16 14:38:36 +02:00
Karl Nilsson 2dcced6967 Maintenance mode: change revive to use quorum queue recovery function.
As this already does the job.
2024-08-16 10:05:53 +01:00
Michael Klishin 178f9a962e
Merge pull request #11964 from rabbitmq/qq-checkpointing-tweaks
QQ: checkpointing frequency improvements
2024-08-15 20:49:24 -04:00
David Ansari d46f07c0a4 Add SASL mechanism ANONYMOUS
## 1. Introduce new SASL mechanism ANONYMOUS

 ### What?
Introduce a new `rabbit_auth_mechanism` implementation for SASL
mechanism ANONYMOUS called `rabbit_auth_mechanism_anonymous`.

 ### Why?
As described in AMQP section 5.3.3.1, ANONYMOUS should be used when the
client doesn't need to authenticate.

Introducing a new `rabbit_auth_mechanism` consolidates and simplifies how anonymous
logins work across all RabbitMQ protocols that support SASL. This commit
therefore allows AMQP 0.9.1, AMQP 1.0, and stream clients to connect out of
the box to RabbitMQ without providing any username or password.

Today's AMQP 0.9.1 and stream protocol client libraries hard-code the RabbitMQ
default credentials `guest:guest`, as done for example in:
* 0215e85643/src/main/java/com/rabbitmq/client/ConnectionFactory.java (L58-L61)
* ddb7a2f068/uri.go (L31-L32)

Hard coding RabbitMQ specific default credentials in dozens of different
client libraries is an anti-pattern in my opinion.
Furthermore, there are various AMQP 1.0 and MQTT client libraries which
we do not control or maintain and which still should work out of the box
when a user is getting started with RabbitMQ (that is without
providing `guest:guest` credentials).

 ### How?
The old RabbitMQ 3.13 AMQP 1.0 plugin `default_user`
[configuration](146b4862d8/deps/rabbitmq_amqp1_0/Makefile (L6))
is replaced with the following two new `rabbit` configurations:
```
{anonymous_login_user, <<"guest">>},
{anonymous_login_pass, <<"guest">>},
```
We call it `anonymous_login_user` because this user will be used for
anonymous logins. The subsequent commit uses the same setting for
anonymous logins in MQTT. Hence, this user is orthogonal to the protocol
used when the client connects.

Setting `anonymous_login_pass` could have been left out.
This commit decides to include it because our documentation has so far
recommended:
> It is highly recommended to pre-configure a new user with a generated username and password or delete the guest user
> or at least change its password to reasonably secure generated value that won't be known to the public.

By having the new module `rabbit_auth_mechanism_anonymous` internally
authenticate with `anonymous_login_pass` instead of blindly allowing
access without any password, we protect operators that relied on the
sentence:
> or at least change its password to reasonably secure generated value that won't be known to the public

To ease the getting started experience, since RabbitMQ already deploys a
guest user with full access to the default virtual host `/`, this commit
also allows SASL mechanism ANONYMOUS in `rabbit` setting `auth_mechanisms`.

In production, operators should disable SASL mechanism ANONYMOUS by
setting `anonymous_login_user` to `none` (or by removing ANONYMOUS from
the `auth_mechanisms` setting). This will be documented separately.
Even if operators forget or don't read the docs, this new ANONYMOUS
mechanism won't do any harm because it relies on the default user name
`guest` and password `guest`, which is recommended against in
production, and who by default can only connect from the local host.
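In ini-style `rabbitmq.conf` terms, disabling anonymous logins in production would look roughly like this (a sketch based on the setting named above):

```ini
# Disable SASL mechanism ANONYMOUS in production (sketch):
anonymous_login_user = none
```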

 ## 2. Require SASL security layer in AMQP 1.0

 ### What?
An AMQP 1.0 client must use the SASL security layer.

 ### Why?
This is in line with the mandatory usage of SASL in AMQP 0.9.1 and
RabbitMQ stream protocol.
Since (presumably) any AMQP 1.0 client knows how to authenticate with a
username and password using SASL mechanism PLAIN, any AMQP 1.0 client
also (presumably) implements the trivial SASL mechanism ANONYMOUS.

Skipping SASL is not recommended in production anyway.
By requiring SASL, configuration for operators becomes easier.
Following the principle of least surprise, when an operator
configures `auth_mechanisms` to exclude `ANONYMOUS`, anonymous logins
will be prohibited in SASL and also by disallowing skipping the SASL
layer.

 ### How?
This commit implements AMQP 1.0 figure 2.13.

A follow-up commit needs to be pushed to `v3.13.x` which will use SASL
mechanism `anon` instead of `none` in the Erlang AMQP 1.0 client
such that AMQP 1.0 shovels running on 3.13 can connect to 4.0 RabbitMQ nodes.
2024-08-15 10:58:48 +00:00
Karl Nilsson 0f1f27c1dd Qq: adjust checkpointing algo to something more like
it was in 3.13.x.

Also add a force_checkpoint aux command that the purge operation
emits - this can also be used to try to force a checkpoint
2024-08-15 11:54:18 +01:00
Michael Klishin 8b90d4a27c Allow for tagged values for a few more rabbitmq.conf settings 2024-08-13 16:27:00 -04:00
Michael Davis 543bf76a74
Add `cluster_upgrade_SUITE` to check mixed-version upgrades
This suite uses the mixed version secondary umbrella as a starting
version for a cluster and then has a helper to upgrade the cluster to
the current code. This is meant to ensure that we can upgrade from the
previous minor.
2024-08-09 16:23:35 -04:00
David Ansari aeedad7b51 Fix test flake
Prior to this commit, test
```
ERL_AFLAGS="+S 2" make -C deps/rabbit ct-amqp_client t=cluster_size_3:detach_requeues_two_connections_quorum_queue
```
failed rarely locally, and more often in CI.
An instance of a failed test in CI is
https://github.com/rabbitmq/rabbitmq-server/actions/runs/10298099899/job/28502687451?pr=11945

The test failed with:
```
=== === Reason: {assertEqual,[{module,amqp_client_SUITE},
                               {line,2800},
                               {expression,"amqp10_msg : body ( Msg1 )"},
                               {expected,[<<"1">>]},
                               {value,[<<"2">>]}]}
  in function  amqp_client_SUITE:detach_requeues_two_connections/2 (amqp_client_SUITE.erl, line 2800)
```
because it could happen that Receiver1's credit top-up to the quorum
queue is applied before Receiver0's credit top-up, such that Receiver1
gets enqueued to the ServiceQueue before Receiver0.
2024-08-08 14:20:05 +02:00
Karl Nilsson 194d4ba2f5
Quorum queues v4 (#10637)
This commit contains the following new quorum queue features:

* Fair share high/low priorities
* SAC consumers honour consumer priorities
* Credited consumer refactoring to meet AMQP requirements.
* Use checkpoints feature to reduce memory use for queues with long backlogs
 * Consumer cancel option that immediately removes consumer and returns all pending messages.
 * More compact commands of the most common commands such as enqueue, settle and credit
 * Correctly track the delivery-count to be compatible with the AMQP spec
 * Support the "modified" AMQP 1.0 outcome better.

Commits:

* Quorum queues v4 scaffolding.

Create the new version but not including any changes yet.

QQ: force delete followers after leader has terminated.

Also try a longer sleep for mqtt_shared_SUITE so that the
delete operation stands a chance to time out and move on
to the forced deletion stage.

In some mixed machine version scenarios some followers will never
apply the poison pill command so we may as well force delete them
just in case.

QQ: skip test in amqp_client that cannot pass with mixed machine versions

QQ: remove dead code

Code relating to prior machine versions and state conversions.

rabbit_fifo_prop_SUITE fixes

* QQ: add v4 ff and new more compact enqueue command.

Also update rabbit_fifo_* suites to test more relevant code versions
where applicable.

QQ: always use the updated credit mode format

QQv4: use more compact consumer reference in settle, credit, return

This introduces a new type, consumer_key(), which is either the consumer_id
or the raft index at which the checkout was processed. If the consumer is
using one of the updated credit spec formats, rabbit_fifo will use the
raft index as the primary key for the consumer, so that the rabbit_fifo
client can then use the more space-efficient integer index
instead of the full consumer id in subsequent commands.

There is compatibility code to still accept the consumer id in
settle, return, discard and credit commands, but this is slightly
slower and of course less space efficient.

The old form will be used in cases where the fifo client may have
already removed the local consumer state (as happens after a cancel).

Lots of test refactorings of the rabbit_fifo_SUITE to begin to use
the new forms.

* More test refactoring and new API fixes

rabbit_fifo_prop_SUITE refactoring and other fixes.


* First pass SAC consumer priority implementation.

Single active consumers will be activated if they have a higher priority
than the currently active consumer. If the currently active consumer
has pending messages, no further messages will be assigned to it,
and the activation of the new consumer will happen once
all pending messages are settled. This is to ensure processing order.

Consumers with the same priority will internally be ordered to
favour those with credit then those that attached first.

QQ: add SAC consumer priority integration tests

QQ: add check for ff in tests

* QQ: add new consumer cancel option: 'remove'

This option immediately removes and returns all messages for a
consumer instead of the softer 'cancel' option which keeps the
consumer around until all pending messages have been either
settled or returned.

This involves a change to the rabbit_queue_type:cancel/5 API
to rabbit_queue_type:cancel/3.

* QQ: capture checked out time for each consumer message.

This will form the basis for queue initiated consumer timeouts.

* QQ: Refactor to use the new ra_machine:handle_aux/5 API

Instead of the old ra_machine:handle_aux/6 callback.

* QQ hi/lo priority queue

* QQ: Avoid using mc:size/1 inside rabbit_fifo

As we don't want to depend on external functions for things that may
change the state of the queue.

* QQ bug fix: Maintain order when returning multiple

Prior to this commit, quorum queues requeued messages in an undefined
order, which is wrong.

This commit fixes this bug and requeues messages always in the order as
nacked / rejected / released by the client.

We ensure that order of requeues is deterministic from the client's
point of view and doesn't depend on whether the quorum queue soft limit
was exceeded temporarily.
So, even when rabbit_fifo_client batches requeues, the order as nacked
by the client is still maintained.

* Simplify

* Add rabbit_quorum_queue:file_handle* functions back.

For backwards compat.

* dialyzer fix

* dynamic_qq_SUITE: avoid mixed versions failure.

* QQ: track number of requeues for message.

To be able to calculate the correct value for the AMQP delivery_count
header we need to be able to distinguish between messages that were
"released" or returned in QQ speak and those that were returned
due to errors such as channel termination.

This commit implements such tracking, as well as the calculation
of a new mc annotation `delivery_count` that AMQP makes use
of to set the header value accordingly.

* Use QQ consumer removal when AMQP client detaches

This enables us to unskip some AMQP tests.

* Use AMQP address v2 in fsharp-tests

* rabbit_fifo: Use Ra checkpoints

* quorum queues: Use a custom interval for checkpoints

* rabbit_fifo_SUITE: List actual effects in ?ASSERT_EFF failure

* QQ: Checkpoints modifications

* fixes

* QQ: emit release cursors on tick for followers and leaders

else followers could end up holding on to segments a bit longer
after traffic stops.

* Support draining a QQ SAC waiting consumer

By issuing drain=true, the client says "either send a transfer or a flow frame".
Since there are no messages to send to an inactive consumer, the sending
queue should advance the delivery-count consuming all link-credit and send
a credit_reply with drain=true to the session proc which causes the session
proc to send a flow frame to the client.
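The drain behaviour described above amounts to the following (a minimal sketch of the AMQP 1.0 drain rule, not RabbitMQ's code):

```python
def drain_credit(delivery_count, link_credit):
    # drain=true with nothing to send: the sending queue advances
    # delivery-count by all remaining link-credit and drops credit to
    # zero; a flow frame then reports the new values to the client.
    return delivery_count + link_credit, 0
```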

* Extract applying #credit{} cmd into 2 functions

This commit is only refactoring and doesn't change any behaviour.

* Fix default priority level

Prior to this commit, when a message didn't have a priority level set,
it got enqueued as high prio.

This is wrong because the default priority is 4 and
"for example, if 2 distinct priorities are implemented,
then levels 0 to 4 are equivalent, and levels 5 to 9 are equivalent
and levels 4 and 5 are distinct."
Hence, by default, a message without a priority set must be enqueued as
low priority.
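The fix follows the AMQP priority-banding rule quoted above; as a sketch (hypothetical helper, not the rabbit_fifo code):

```python
def qq_priority(amqp_priority=None):
    # The AMQP default priority is 4. With two internal levels,
    # priorities 0..4 collapse to the low level and 5..9 to "high",
    # so a message without a priority must be enqueued as low.
    p = 4 if amqp_priority is None else amqp_priority
    return "high" if p >= 5 else "low"
```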

* bazel run gazelle

* Avoid deprecated time unit

* Fix aux_test

* Delete dead code

* Fix rabbit_fifo_q:get_lowest_index/1

* Delete unused normalize functions

* Generate less garbage

* Add integration test for QQ SAC with consumer priority

* Improve readability

* Change modified outcome behaviour

With the new quorum queue v4 improvements where a requeue counter was
added in addition to the quorum queue delivery counter, the following
sentence from https://github.com/rabbitmq/rabbitmq-server/pull/6292#issue-1431275848
doesn't apply anymore:

> Also the case where delivery_failed=false|undefined requires the release of the
> message without incrementing the delivery_count. Again this is not something
> that our queues are able to do so again we have to reject without requeue.

Therefore, we simplify the modified outcome behaviour:
RabbitMQ will from now on only discard the message if the modified's
undeliverable-here field is true.

* Introduce single feature flag rabbitmq_4.0.0

 ## What?

Merge all feature flags introduced in RabbitMQ 4.0.0 into a single
feature flag called rabbitmq_4.0.0.

 ## Why?

1. This fixes the crash in
https://github.com/rabbitmq/rabbitmq-server/pull/10637#discussion_r1681002352
2. It's better user experience.

* QQ: expose priority metrics in UI

* Enable skipped test after rebasing onto main

* QQ: add new command "modify" to better handle AMQP modified outcomes.

This new command can be used to annotate returned or rejected messages.

This commit also retains the delivery-count across dead letter boundaries
such that the AMQP header delivery-count field can now include _all_ failed
deliver attempts since the message was originally received.

Internally the quorum queue has moved it's delivery_count header to
only track the AMQP protocol delivery attempts and now introduces
a new acquired_count to track all message acquisitions by consumers.
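The split between the two counters can be sketched as follows (names follow the commit; the class itself is illustrative):

```python
class MessageCounters:
    """delivery_count tracks only AMQP protocol delivery attempts;
    acquired_count counts every acquisition by any consumer."""

    def __init__(self):
        self.delivery_count = 0
        self.acquired_count = 0

    def on_acquired(self, is_amqp_delivery_attempt):
        # Every acquisition bumps acquired_count; only AMQP delivery
        # attempts also bump delivery_count.
        self.acquired_count += 1
        if is_amqp_delivery_attempt:
            self.delivery_count += 1
```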

* Type tweaks and naming

* Add test for modified outcome with classic queue

* Add test routing on message-annotations in modified outcome

* Skip tests in mixed version tests

Skip tests in mixed version tests because feature flag
rabbitmq_4.0.0 is needed for the new #modify{} Ra command
being sent to quorum queues.

---------

Co-authored-by: David Ansari <david.ansari@gmx.de>
Co-authored-by: Michael Davis <mcarsondavis@gmail.com>
2024-08-08 08:48:27 +01:00
Karl Nilsson e24bd06e71 QQ: refactor and improve leader detection code.
The leader returned by rabbit_quorum_queue:info/2 only ever queried
the pid field from the queue record, when more up-to-date info could
have been available in the ra_leaderboard table.
2024-08-07 12:02:53 +01:00
David Ansari d7f29426a8 Fix test flake
Sometimes in CI under Khepri, the test case errored with:
```
receiver_attached flushed: {amqp10_event,
                            {session,<0.396.0>,
                             {ended,
                              {'v1_0.error',
                               {symbol,<<"amqp:internal-error">>},
                               {utf8,
                                <<"stream queue 'leader_transfer_stream_credit_single' in vhost '/' does not have a running replica on the local node">>},
                               undefined}}}}
```
2024-07-30 21:05:25 +02:00
David Ansari ce915ae05a Fix quorum queue credit reply crash in AMQP session
Fixes #11841

PR #11307 introduced the invariant that at most one credit request between
session proc and quorum queue proc can be in flight at any given time.
This is not the case when rabbit_fifo_client re-sends credit
requests on behalf of the session proc when the quorum queue leader changes.

This commit therefore removes assertions which assumed only a single credit
request to be in flight.

This commit also removes the field queue_flow_ctl.desired_credit
since it is redundant with the field client_flow_ctl.credit
2024-07-28 12:34:41 +02:00
Michael Klishin 4aaa1c410e
Merge pull request #11664 from rabbitmq/khepri-node-added-event
rabbit_node_monitor: use a leader query for cluster members on node_added event
2024-07-25 15:41:28 -04:00
Michael Davis 4207faf433
Merge pull request #11785 from rabbitmq/md/khepri-minority-errors/rabbit_db_exchange
Handle timeouts possible in Khepri minority in `rabbit_db_exchange`
2024-07-24 12:11:17 -05:00
Michal Kuratczyk ae41f65c64
Fix rabbit_priority_queue:update_rates bug (#11814)
update_rates fails after publishing a message to a queue
with priorities enabled.
2024-07-24 16:34:56 +02:00
David Ansari be6a7fec95 Fix test flake
Sometimes on Khepri the test failed with:
```
=== Ended at 2024-07-24 10:07:15
=== Location: [{gen_server,call,419},
              {amqpl_direct_reply_to_SUITE,rpc,226},
              {test_server,ts_tc,1793},
              {test_server,run_test_case_eval1,1302},
              {test_server,run_test_case_eval,1234}]
=== === Reason: {{shutdown,
                      {server_initiated_close,404,
                          <<"NOT_FOUND - no queue 'tests.amqpl_direct_reply_to.rpc.requests' in vhost '/'">>}},
                  {gen_server,call,
                      [<0.272.0>,
                       {call,
                           {'basic.get',0,
                               <<"tests.amqpl_direct_reply_to.rpc.requests">>,
                               false},
                           none,<0.246.0>},
                       infinity]}}
```

https://github.com/rabbitmq/rabbitmq-server/actions/runs/10074558971/job/27851173817?pr=11809
shows an instance of this flake.
2024-07-24 13:42:20 +02:00
Michael Davis 52a0d70e15
Handle database timeouts when declaring exchanges
The spec of `rabbit_exchange:declare/7` needs to be updated to return
`{ok, Exchange} | {error, Reason}` instead of the old return value of
`rabbit_types:exchange()`. This is safe to do since `declare/7` is not
called by RPC - from the CLI or otherwise - outside of test suites, and
in test suites only through the CLI's `TestHelper.declare_exchange/7`.
Callers of this helper are updated in this commit.

Otherwise this commit updates callers to unwrap the `{ok, Exchange}`
and bubble up errors.
2024-07-22 16:02:03 -04:00
Michael Davis e7489d2cb7
Handle database failures when deleting exchanges
A common case for exchange deletion is that callers want the deletion
to be idempotent: they treat the `ok` and `{error, not_found}` returns
from `rabbit_exchange:delete/3` the same way. To simplify these
callsites we add a `rabbit_exchange:ensure_deleted/3` that wraps
`rabbit_exchange:delete/3` and returns `ok` when the exchange did not
exist. Part of this commit is to update callsites to use this helper.

The other part is to handle the `rabbit_khepri:timeout()` error possible
when Khepri is in a minority. For most callsites this is just a matter
of adding a branch to their `case` clauses and an appropriate error and
message.
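The idempotent wrapper can be sketched as follows (a Python stand-in for `rabbit_exchange:ensure_deleted/3`):

```python
def ensure_deleted(delete_fn, exchange):
    # Treat "already gone" the same as a successful delete, but let
    # other errors (e.g. a Khepri timeout in minority) bubble up.
    result = delete_fn(exchange)
    if result in ("ok", ("error", "not_found")):
        return "ok"
    return result
```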
2024-07-22 15:59:55 -04:00
Michael Davis f1be7bacc2
Handle database failures when adding/removing bindings
This ensures that the call graph of `rabbit_db_binding:create/2` and
`rabbit_db_binding:delete/2` handle the `{error, timeout}` error
possible when Khepri is in a minority.
2024-07-22 14:16:39 -04:00
Michael Davis 9d55d397e5
maintenance_mode_SUITE: Skip leadership transfer case on mnesia
This case only targets Khepri. Instead of setting the `metadata_store`
config option we should skip the test when the configured metadata
store is mnesia.
2024-07-19 15:31:55 -04:00
Arnaud Cogoluègnes eeb35d2688
Add stream replication port range in ini-style configuration
This is more straightforward than configuring Osiris in the advanced
configuration file.
2024-07-19 16:47:59 +02:00
Diana Parra Corbacho 19a71d8d28 rabbit_node_monitor: use a leader query for cluster members on node_added event
If the membership hasn't been updated locally yet, the event is never generated
2024-07-16 12:48:29 +02:00
Karl Nilsson 131379a483 mc: increase utf8 scanning limit for longstr conversions.
The AMQP 0.9.1 longstr type is problematic as it can contain arbitrary
binary data but is typically used for utf8 by users.

The current conversion into AMQP avoids scanning arbitrarily large
longstr to see if they only contain valid utf8 by treating all
longstr data longer than 255 bytes as binary. This is in hindsight
too strict and thus this commit increases the scanning limit to
4096 bytes - enough to cover the vast majority of AMQP 0.9.1 header
values.

This change also converts the AMQP binary type into longstr to
ensure that existing data (held in streams, for example) is converted
to the AMQP 0.9.1 type the user most likely intended.
2024-07-15 14:07:19 +02:00
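The conversion heuristic above can be sketched as follows — a hedged Python illustration, not the Erlang `mc` code: longstr values up to the scan limit are checked for valid UTF-8 and kept as a string type; anything longer, or not valid UTF-8, is treated as binary. The 4096-byte limit mirrors the commit; the function name is illustrative.

```python
SCAN_LIMIT = 4096  # bytes; raised from 255 in this commit


def longstr_to_amqp_type(data: bytes):
    """Classify an AMQP 0.9.1 longstr for conversion into AMQP 1.0."""
    if len(data) <= SCAN_LIMIT:
        try:
            data.decode("utf-8", errors="strict")
            return ("utf8", data)
        except UnicodeDecodeError:
            return ("binary", data)
    # Too large to scan cheaply: conservatively treat as binary.
    return ("binary", data)
```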
David Ansari e6587c6e45 Support consumer priority in AMQP
Arguments
* `rabbitmq:stream-offset-spec`,
* `rabbitmq:stream-filter`,
* `rabbitmq:stream-match-unfiltered`
are set in the `filter` field of the `Source`.
This makes sense for these consumer arguments because:
> A filter acts as a function on a message which returns a boolean result
> indicating whether the message can pass through that filter or not.

Consumer priority is not really such a predicate.
Therefore, it makes more sense to set consumer priority in the
`properties` field of the `Attach` frame.

We call the key `rabbitmq:priority` which maps to consumer argument
`x-priority`.

While AMQP 0.9.1 consumers are allowed to set any integer data
type for the priority level, this commit decides to enforce an `int`
value (range -(2^31) to 2^31 - 1 inclusive).
Consumer priority levels outside of this range are not needed in
practice.
2024-07-12 20:31:01 +02:00
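The enforced range corresponds to a signed 32-bit AMQP `int`. A minimal sketch of that bound check (in Python, with an illustrative function name — the real validation lives in the server's attach handling):

```python
INT_MIN, INT_MAX = -(2 ** 31), 2 ** 31 - 1  # AMQP `int` range


def validate_consumer_priority(value: int) -> int:
    """Accept a rabbitmq:priority value only if it fits in an AMQP int."""
    if not (INT_MIN <= value <= INT_MAX):
        raise ValueError("rabbitmq:priority must be a 32-bit signed int")
    return value
```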
Michal Kuratczyk f398892bda
Deprecate queue-master-locator (#11565)
* Deprecate queue-master-locator

This should not be a breaking change - all validation should still pass
* CQs can now use `queue-leader-locator`
* `queue-leader-locator` takes precedence over `queue-master-locator` if both are used
* regardless of which name is used, effectively there are only two values: `client-local` (default) or `balanced`
* other values (`min-masters`, `random`, `least-leaders`) are mapped to `balanced`
* Management UI no longer shows `master-locator` fields when declaring a queue/policy, but such arguments can still be used manually (unless not permitted)
* exclusive queues are always declared locally, as before
2024-07-12 13:22:55 +02:00
David Ansari e31df4cd01 Fix test case rebalance
This test case was wrongly skipped and therefore never ran.
2024-07-11 11:20:26 +02:00
David Ansari 18e8c1d5f8 Require all stable feature flags added up to 3.13.0
Since feature flag `message_containers` introduced in 3.13.0 is required in 4.0,
we can also require all other feature flags introduced in or before 3.13.0
and remove their compatibility code for 4.0:

* restart_streams
* stream_sac_coordinator_unblock_group
* stream_filtering
* stream_update_config_command
2024-07-11 11:20:26 +02:00
Michael Davis 88c1ad2f6e
Adapt to new `{error, timeout}` return value in Khepri 0.14.0
See rabbitmq/khepri#256.
2024-07-10 16:07:43 -04:00
Michael Davis ae0663d7ca
Merge pull request #11663 from rabbitmq/md/ci/turn-off-mixed-version-khepri-tests
Turn off mixed version tests against Khepri
2024-07-10 15:07:07 -05:00
Michael Klishin 348ca6f5a7 amqpl_direct_reply_to_SUITE: use separate queue names to avoid interference 2024-07-10 13:48:38 -04:00
Michael Davis 56bbf3760d
Respect RABBITMQ_METADATA_STORE in clustering_recovery_SUITE 2024-07-10 13:46:22 -04:00
Michael Davis 0a4e5a9845
Respect RABBITMQ_METADATA_STORE in clustering_management_SUITE 2024-07-10 13:46:05 -04:00
Michael Davis ecb757554e
Skip cluster_minority_SUITE when RABBITMQ_METADATA_STORE is set to mnesia
This suite is only meant to run with Khepri as the metadata store.
Instead of setting this explicitly we can look at the configured
metadata store and conditionally skip the entire suite. This prevents
these tests from running twice in CI.
2024-07-10 13:46:05 -04:00
Michael Davis bda1f7cef6
Skip metadata_store_clustering_SUITE in mixed versions testing
Khepri is not yet compatible with mixed-version testing and this suite
only tests clustering when Khepri is the metadata store in at least some
of the nodes.
2024-07-10 13:46:05 -04:00
David Ansari 5deff457a0 Fix direct reply to crash when tracing is enabled
This commit fixes https://github.com/rabbitmq/rabbitmq-server/discussions/11662

Prior to this commit the following crash occurred when an RPC reply message entered
RabbitMQ and tracing was enabled:
```
** Reason for termination ==
** {function_clause,
       [{rabbit_trace,'-tap_in/6-fun-0-',
            [{virtual_reply_queue,
                 <<"amq.rabbitmq.reply-to.g1h2AA5yZXBseUAyNzc5NjQyMAAAC1oAAAAAZo4bIw==.+Uvn1EmAp0ZA+oQx2yoQFA==">>}],
            [{file,"rabbit_trace.erl"},{line,62}]},
        {lists,map,2,[{file,"lists.erl"},{line,1559}]},
        {rabbit_trace,tap_in,6,[{file,"rabbit_trace.erl"},{line,62}]},
        {rabbit_channel,handle_method,3,
            [{file,"rabbit_channel.erl"},{line,1284}]},
        {rabbit_channel,handle_cast,2,
            [{file,"rabbit_channel.erl"},{line,659}]},
        {gen_server2,handle_msg,2,[{file,"gen_server2.erl"},{line,1056}]},
        {proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,241}]}]}
```

(Note that no trace message is emitted for messages that are delivered to
direct reply to requesting clients (neither in 3.12, nor in 3.13, nor
after this commit). This behaviour can be added in future when a direct
reply virtual queue becomes its own queue type.)
2024-07-10 18:14:35 +02:00
David Ansari 37b1ffe244 Upgrade AMQP address format in tests
from v1 to v2.

We still use AMQP address format v1 when connecting to an old 3.13 node.
2024-07-08 13:23:50 +02:00
David Ansari 19523876cd Make AMQP address v2 format user friendly
This commit is a follow up of https://github.com/rabbitmq/rabbitmq-server/pull/11604

This commit changes the AMQP address format v2 from
```
/e/:exchange/:routing-key
/e/:exchange
/q/:queue
```
to
```
/exchanges/:exchange/:routing-key
/exchanges/:exchange
/queues/:queue
```

Advantages:
1. more user friendly
2. matches nicely with the plural forms of HTTP API v1 and HTTP API v2

This plural form is still non-overlapping with AMQP address format v1.

Although it might feel unusual at first to send a message to `/queues/q1`,
if you think about `queues` just being a namespace or entity type, this
address format makes sense.
2024-07-04 14:33:05 +02:00
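The three v2 address shapes above can be sketched with a small parser — an illustrative Python sketch, not the server's implementation; it also omits the percent-decoding step (slashes inside names must be percent-encoded, so a plain segment split suffices here):

```python
def parse_v2_address(addr: str):
    """Classify a v2 address into (kind, name, routing_key_or_None)."""
    if addr.startswith("/exchanges/"):
        rest = addr[len("/exchanges/"):]
        exchange, sep, routing_key = rest.partition("/")
        return ("exchange", exchange, routing_key if sep else None)
    if addr.startswith("/queues/"):
        return ("queue", addr[len("/queues/"):], None)
    raise ValueError("not a v2 address: " + addr)
```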
David Ansari 7b18bd7a81 Enforce percent encoding
Partially copy file
https://github.com/ninenines/cowlib/blob/optimise-urldecode/src/cow_uri.erl
We use this copy because:
1. uri_string:unquote/1 is lax: It doesn't validate that characters that are
   required to be percent encoded are indeed percent encoded. In RabbitMQ,
   we want to enforce that proper percent encoding is done by AMQP clients.
2. uri_string:unquote/1 and cow_uri:urldecode/1 in cowlib v2.13.0 are both
   slow because they allocate a new binary for the common case where no
   character was percent encoded.
When a new cowlib version is released, we should make app rabbit depend on
app cowlib calling cow_uri:urldecode/1 and delete this file (rabbit_uri.erl).
2024-07-03 17:01:51 +02:00
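The "strict, not lax" distinction can be illustrated with a simplified decoder: unlike a lenient `unquote`, it rejects any `%` not followed by two hex digits. This Python sketch is only an analogue of the idea — the real code is `rabbit_uri.erl`, and a full implementation would additionally decode percent-escaped bytes as UTF-8 and enforce which characters must be encoded.

```python
HEX_DIGITS = set("0123456789abcdefABCDEF")


def strict_unquote(s: str) -> str:
    """Percent-decode, raising on malformed escape sequences."""
    out = []
    i = 0
    while i < len(s):
        if s[i] == "%":
            hex_pair = s[i + 1:i + 3]
            if len(hex_pair) != 2 or not set(hex_pair) <= HEX_DIGITS:
                raise ValueError("invalid percent encoding at position %d" % i)
            out.append(chr(int(hex_pair, 16)))
            i += 3
        else:
            out.append(s[i])
            i += 1
    return "".join(out)
```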
David Ansari 0de9591050 Use different AMQP address format for v1 and v2
to distinguish between v1 and v2 address formats.

Previously, v1 and v2 address formats overlapped and behaved differently
for example for:
```
/queue/:queue
/exchange/:exchange
```

This PR changes the v2 format to:
```
/e/:exchange/:routing-key
/e/:exchange
/q/:queue
```
to distinguish between v1 and v2 addresses.

This allows calling `rabbit_deprecated_features:is_permitted(amqp_address_v1)`
only if we know that the user requests address format v1.

Note that `rabbit_deprecated_features:is_permitted/1` should only
be called when the old feature is actually used.

Use percent encoding / decoding for address URI format v2.
This allows using any UTF-8 encoded characters, including slashes (`/`),
in routing keys, exchange names, and queue names, and is more future
proof.
2024-07-03 16:36:03 +02:00
Karl Nilsson 7273c6846d
Merge pull request #11582 from rabbitmq/testfixes-glorious-testfixes-and-other-improvements-hurray
Various test improvements
2024-07-01 16:54:49 +01:00
Loïc Hoguin a561b45dcf
Merge pull request #11573 from rabbitmq/loic-more-make
Another make PR
2024-07-01 14:27:04 +02:00
Karl Nilsson 2b09891a77 Fix flake in rabbit_stream_queue_SUITE 2024-07-01 10:27:52 +01:00
Karl Nilsson 087d701b6e speed up policy_SUITE 2024-07-01 10:27:52 +01:00
Karl Nilsson a277713105 speed up limit and tracking* suites
By removing explicit meta data store groups.
2024-07-01 10:27:52 +01:00
Karl Nilsson b67f7292b5 remove explicit meta data store groups from bindings_SUITE 2024-07-01 10:27:52 +01:00
Karl Nilsson a5819bf41e Remove meta data store groups from publisher_confirms_parallel_SUITE 2024-07-01 10:27:52 +01:00
Karl Nilsson 6a655b6daa Fix test flake in publisher_confirms_parallel_SUITE
test `confirm_nack` had a race condition that meant either ack or nack
were likely outcomes. By stopping the queue before the publish we
ensure only nack is the valid outcome.
2024-07-01 10:27:52 +01:00
David Ansari 166e5d8a2f Bump AMQP.NET Lite to 2.4.11 2024-06-28 18:50:59 +02:00
Loïc Hoguin bbfa066d79
Cleanup .gitignore files for the monorepo
We don't need to duplicate so many patterns in so many
files since we have a monorepo (and want to keep it).

If I managed to miss something or remove something that
should stay, please put it back. Note that monorepo-wide
patterns should go in the top-level .gitignore file.
Other .gitignore files are for application or folder-
specific patterns.
2024-06-28 12:00:52 +02:00
David Ansari daf61c9653 Classic queue removes consumer
## What?

The rabbit_queue_type API has allowed cancelling a consumer.
Cancelling a consumer merely stops the queue sending more messages
to the consumer. However, messages checked out to the cancelled consumer
remain checked out until acked by the client. It's up to the client when
and whether it wants to ack the remaining checked out messages.

For AMQP 0.9.1, this behaviour is necessary because the client may
receive messages between the point in time it sends the basic.cancel
request and the point in time it receives the basic.cancel_ok response.

AMQP 1.0 is a better designed protocol because a receiver can stop a
link as shown in figure 2.46.
After a link is stopped, the client knows that the queue won't deliver
any more messages.
Once the link is stopped, the client can subsequently detach the link.

This commit extends the rabbit_queue_type API to allow a consumer being
immediately removed:
1. Any checked out messages to that receiver will be requeued by the
   queue. (As explained previously, a client that first stops a link
   by sending a FLOW with `link_credit=0` and `echo=true` and waiting
   for the FLOW reply from the server and settles any messages before
   it sends the DETACH frame, won't have any checked out messages).
2. The queue entirely forgets this consumer and therefore stops
   delivering messages to the receiver.

This new behaviour of consumer removal is similar to what happens when
an AMQP 0.9.1 channel is closed: All checked out messages to that
channel will be requeued.

 ## Why?

Removing the consumer immediately simplifies many aspects:
1. The server session process doesn't need to requeue any checked out
   messages for the receiver when the receiver detaches the link.
   Specifically, messages in the outgoing_unsettled_map and
   outgoing_pending queue don't need to be requeued because the queue
   takes care of requeueing any checked out messages.
2. It simplifies reasoning about clients first detaching and then
   re-attaching in the same session with the same link handle (the handle
   becomes available for re-use once a link is closed): This will result
   in the same RabbitMQ queue consumer tag.
3. It simplifies queue implementations since state would otherwise need
   to be held, and special logic applied, for consumers that are only
   cancelled (basic.cancel AMQP 0.9.1) but not removed.
4. It makes the single active consumer feature safer when it comes to
   maintaining message order: If a client cancels consumption via AMQP
   0.9.1 basic.cancel, but still has in-flight checked out messages,
   the queue will activate the next consumer. If the AMQP 0.9.1 client
   shortly after crashes, messages to the old consumer will be requeued
   which results in messages being out of order. To maintain message order,
   an AMQP 0.9.1 client must close the whole channel such that messages
   are requeued before the next consumer is activated.
   For AMQP 1.0, the client can either stop the link first (preferred)
   or detach the link directly. Detaching the link will requeue all
   messages before activating the next consumer, therefore maintaining
   message order. Even if the session crashes, message order will be
   maintained.

  ## How?

`rabbit_queue_type:cancel` accepts a spec as argument.
The new interaction between session proc and classic queue proc (let's
call it cancel API v2) must be hidden behind a feature flag.
This commit re-uses feature flag credit_api_v2 since it also gets
introduced in 4.0.
2024-06-27 19:50:36 +02:00
David Ansari 53b20ec78a Fix amqp_client_SUITE flake
Fix the following sporadic failure:

```
=== === Reason: {assertMatch,
                     [{module,amqp_client_SUITE},
                      {line,446},
                      {expression,
                          "rabbitmq_amqp_client : delete_queue ( LinkPair , QName )"},
                      {pattern,"{ ok , # { message_count := NumMsgs } }"},
                      {value,{ok,#{message_count => 29}}}]}
  in function  amqp_client_SUITE:sender_settle_mode_mixed/1 (amqp_client_SUITE.erl, line 446)
 ```
The last message (the 30th) is sent as settled.
It apparently happened that all messages up to 29 got stored.
The 29th message also got confirmed.
Subsequently the queue got deleted with only 29 ready messages.

Bumping NumMsgs to 31 ensures that the last message is sent unsettled.
2024-06-27 08:57:53 +00:00
David Ansari eed48ef3ee Fix amqp_client_SUITE flake
Fix the following sporadic error in CI:
```
=== Location: [{amqp_client_SUITE,available_messages,3137},
              {test_server,ts_tc,1793},
              {test_server,run_test_case_eval1,1302},
              {test_server,run_test_case_eval,1234}]
=== === Reason: {assertEqual,
                     [{module,amqp_client_SUITE},
                      {line,3137},
                      {expression,"get_available_messages ( Receiver )"},
                      {expected,5000},
                      {value,0}]}
```

The client decrements the available variable from 1 to 0 when it
receives the transfer and sends a credit_exhausted event to the CT test
case proc. The CT test case proc queries the client session proc for available
messages, which is still 0. The FLOW frame from RabbitMQ to the client
with available=5000 could arrive shortly after.
2024-06-27 08:00:48 +00:00
David Ansari 6aa2de3f4f Avoid crash in test case
Avoid the following unexpected error in mixed version testing where
feature flag credit_api_v2 is disabled:
```
[error] <0.1319.0> Timed out waiting for credit reply from quorum queue 'stop' in vhost '/'. Hint: Enable feature flag credit_api_v2
[warning] <0.1319.0> Closing session for connection <0.1314.0>: {'v1_0.error',
[warning] <0.1319.0>                                             {symbol,<<"amqp:internal-error">>},
[warning] <0.1319.0>                                             {utf8,
[warning] <0.1319.0>                                              <<"Timed out waiting for credit reply from quorum queue 'stop' in vhost '/'. Hint: Enable feature flag credit_api_v2">>},
[warning] <0.1319.0>                                             undefined}
[error] <0.1319.0> ** Generic server <0.1319.0> terminating
[error] <0.1319.0> ** Last message in was {'$gen_cast',
[error] <0.1319.0>                            {frame_body,
[error] <0.1319.0>                                {'v1_0.flow',
[error] <0.1319.0>                                    {uint,283},
[error] <0.1319.0>                                    {uint,65535},
[error] <0.1319.0>                                    {uint,298},
[error] <0.1319.0>                                    {uint,4294967295},
[error] <0.1319.0>                                    {uint,1},
[error] <0.1319.0>                                    {uint,282},
[error] <0.1319.0>                                    {uint,50},
[error] <0.1319.0>                                    {uint,0},
[error] <0.1319.0>                                    undefined,undefined,undefined}}}
```

Presumably, the server session proc timed out receiving a credit reply from
the quorum queue because the test case deleted the quorum queue
concurrently with the client (and therefore also the server session
process) topping up link credit.

This commit detaches the link first and ends the session synchronously
before deleting the quorum queue.
2024-06-27 07:30:23 +00:00
Michael Klishin 083800809f
Merge pull request #11436 from rabbitmq/loic-remove-fhc
Remove most of the fd related FHC code
2024-06-26 15:19:49 -04:00
Michael Klishin de866b81cb
Merge pull request #11547 from rabbitmq/speed-up-rabbit-tests
Speed up rabbit tests suites
2024-06-26 15:15:06 -04:00
David Ansari 2adcdc1a28 Attempt to de-flake
This commit attempts to remove the following flake:
```
{amqp_client_SUITE,server_closes_link,1113}
{badmatch,[<14696.3530.0>,<14696.3453.0>]}
```
by waiting after each test case until sessions were de-registered from
the 1st RabbitMQ node.
2024-06-26 15:57:26 +02:00
David Ansari dcb2fe0fd9 Fix message IDs settlement order
## What?

This commit fixes issues that were present only on `main`
branch and were introduced by #9022.

1. Classic queues (specifically `rabbit_queue_consumers:subtract_acks/3`)
   expect message IDs to be (n)acked in the order as they were delivered
   to the channel / session proc.
   Hence, the `lists:usort(MsgIds0)` in `rabbit_classic_queue:settle/5`
   was wrong, causing not all messages to be acked and adding a regression
   that also affected AMQP 0.9.1.
2. The order in which the session proc requeues or rejects multiple
   message IDs at once is important. For example, if the client sends a
   DISPOSITION with first=3 and last=5, the message IDs corresponding to
   delivery IDs 3,4,5 must be requeued or rejected in exactly that
   order.
   For example, quorum queues use this order of message IDs in
   34d3f94374/deps/rabbit/src/rabbit_fifo.erl (L226-L234)
   to dead letter in that order.

 ## How?

The session proc will settle (internal) message IDs to queues in ascending
(AMQP) delivery ID order, i.e. in the order messages were sent to the
client and in the order messages were settled by the client.

This commit chooses to keep the session's outgoing_unsettled_map map
data structure.

An alternative would have been to use a queue or lqueue for the
outgoing_unsettled_map as done in
* 34d3f94374/deps/rabbit/src/rabbit_channel.erl (L135)
* 34d3f94374/deps/rabbit/src/rabbit_queue_consumers.erl (L43)

Whether a queue (as done by `rabbit_channel`) or a map (as done by
`rabbit_amqp_session`) performs better depends on the pattern how
clients ack messages.

A queue will likely perform good enough because usually the oldest
delivered messages will be acked first.
However, given that there can be many different consumers on an AMQP
0.9.1 channel or AMQP 1.0 session, this commit favours a map because
it will likely generate less garbage and is very efficient when for
example a single new message (or few new messages) gets acked while
many (older) messages are still checked out by the session (but by
possibly different AMQP 1.0 receivers).
2024-06-26 13:35:41 +02:00
Karl Nilsson 1825f2be77 Fix flake in rabbit_stream_queue_SUITE 2024-06-26 09:32:08 +01:00
Karl Nilsson c36b56b626 speed up signal_handling_SUITE 2024-06-26 07:38:29 +01:00
Karl Nilsson d25017d8cf speed up rabbit_fifo_dlx_integration_SUITE slightly 2024-06-26 07:38:29 +01:00
Karl Nilsson e2767a750d further speed up quorum_queue_SUITE 2024-06-26 07:38:29 +01:00
Karl Nilsson e84cb65287 speed up dead_lettering_SUITE
By lowering tick intervals.
2024-06-26 07:38:29 +01:00
Karl Nilsson 3bb5030132 speed up queue_parallel_SUITE
By removing groups that test quorum configurations that are no
longer relevant.
2024-06-26 07:38:29 +01:00
Karl Nilsson 6843384c85 speed up metrics_SUITE 2024-06-26 07:38:29 +01:00
Karl Nilsson 7f0f632de7 dynamic_qq test tweak 2024-06-26 07:38:29 +01:00
Karl Nilsson 30c14e6f35 speed up product_info_SUITE
By running all tests in a single node rather than starting a new
one for each test.
2024-06-26 07:38:29 +01:00
Karl Nilsson 371b429ea1 speed up consumer_timeout_SUITE
By reducing consumer timeout and the receive timeout.
2024-06-26 07:38:29 +01:00
Karl Nilsson 25d79fa448 speed up rabbit_fifo_prop_SUITE
mostly by reducing the sizes of some properties as they can run
quite slowly in ci
2024-06-26 07:38:29 +01:00
Karl Nilsson f919fee7f1 speed up quorum_queues_SUITE
AFTER: gmake -C deps/rabbit ct-quorum_queue  6.15s user 4.25s system 2% cpu 6:25.29 total
2024-06-26 07:38:29 +01:00
Karl Nilsson 3551309baf speed up dynamic_qq_SUITE
BEFORE: time gmake -C deps/rabbit ct-dynamic_qq  1.92s user 1.44s system 2% cpu 2:23.56 total

AFTER: time gmake -C deps/rabbit ct-dynamic_qq  1.66s user 1.22s system 2% cpu 1:56.44 total
2024-06-26 07:38:29 +01:00
Karl Nilsson 13a1a7c7fe speed up rabbit_stream_queue_SUITE
Reduce the number of tests that are run for 2 nodes.

BEFORE: time gmake -C deps/rabbit ct-rabbit_stream_queue  7.22s user 5.72s system 2% cpu 8:28.18 total

AFTER time gmake -C deps/rabbit ct-rabbit_stream_queue  27.04s user 8.43s system 10% cpu 5:38.63 total
2024-06-26 07:38:29 +01:00
Loïc Hoguin 6a47eaad22
Zero sockets_used/sockets_limit stats
They are no longer used.

This removes a couple file_handle_cache:info/1 calls.

We are not removing them from the HTTP API to avoid
breaking things unintentionally.
2024-06-24 12:07:51 +02:00
Loïc Hoguin 49bedfc17e
Remove most of the fd related FHC code
Stats were not removed, including management UI stats
relating to FDs.

Web-MQTT and Web-STOMP configuration relating to FHC
were not removed.

The file_handle_cache itself must be kept until we
remove CQv1.
2024-06-24 12:07:51 +02:00
Michael Klishin 7335aaaf1b
Merge pull request #11528 from rabbitmq/mk-fix-virtual-host-creation-post-11457
Follow-up to #11457
2024-06-22 05:15:22 -04:00
Michael Klishin b822be02af
Revert "cuttlefish tls schema for amqp_client" 2024-06-22 04:16:50 -04:00
Michael Klishin 1e577a82fc Follow-up to #11457
The queue type argument won't always be a binary,
for example, when a virtual host is created.

As such, the validation code should accept at
least atoms in addition to binaries.

While at it, improve logging and error reporting
when DQT validation fails, and while at it,
make the definition import tests focussed on
virtual host a bit more robust.
2024-06-22 02:16:22 -04:00
Simon Unge 24fb0334ac Add schema duplicate for amqp 1.0 2024-06-21 21:43:07 -04:00
Simon Unge 145592efe9 Remove server options and move to rabbit schema 2024-06-21 21:43:07 -04:00
Michael Klishin 0b84acd164
Merge pull request #11515 from rabbitmq/issue-11514
Handle unknown QQ state
2024-06-21 21:41:59 -04:00
Michael Klishin 7c2d29d1e4
Merge pull request #11513 from rabbitmq/loic-remove-ram-durations
CQ: Remove rabbit_memory_monitor and RAM durations
2024-06-21 21:40:13 -04:00
Michael Klishin f51528244e definition_import_SUITE: fix a subtle timing issue
In case 16, an await_condition/2 condition was
not correctly matching the error. As a result,
the function proceeded to the assertion step
earlier than it should have, failing with
an obscure function_clause.

This was because an {error, Context} clause
was not correct.

In addition to fixing it, this change adds a
catch-all clause and verifies the loaded
tagged virtual host before running any assertions
on it.

If the virtual host was not imported, case 16
will now fail with a specific CT log message.

References #11457 because the changes there
exposed this behavior in CI.
2024-06-21 21:32:38 -04:00
Luke Bakken ce16e61d08 Do not overwrite `default_queue_type` with `undefined`
Importing a definitions file with no `default_queue_type` metadata for a vhost will result in that vhost's value being set to `undefined`.

Once `default_queue_type` is set to a non-`undefined` value, this PR prevents it from being set back to `undefined`.
2024-06-21 11:25:18 -04:00
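The guard described above amounts to a one-directional merge: an imported `undefined` never overwrites an existing value. A minimal Python sketch, with `None` standing in for `undefined` and an illustrative function name:

```python
def merge_default_queue_type(current, imported):
    """Keep the current default_queue_type if the import carries none."""
    if imported is None:
        return current  # never downgrade a set value back to undefined
    return imported
```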
Michal Kuratczyk bfedfdca71
Handle unknown QQ state
ra_state may contain a QQ state such as {'foo',init,unknown}.
Before this fix, all_replica_states doesn't map such states
to a 2-tuple which leads to a crash in maps:from_list because
a 3-tuple can't be handled.

A crash in rabbit_quorum_queue:all_replica_states leads to no
results being returned from a given node when the CLI asks for
QQs with minimum quorum.
2024-06-20 17:49:43 +02:00
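The tuple-shape tolerance the fix adds can be sketched as follows — an illustrative Python analogue of mapping ra_state entries into a map, where 3-tuples like `{'foo', init, unknown}` must not crash the conversion:

```python
def all_replica_states(entries):
    """Build a name -> state map, tolerating 2- and 3-tuple entries."""
    states = {}
    for entry in entries:
        if len(entry) == 2:
            name, state = entry
        else:
            # e.g. ("foo", "init", "unknown"): take the trailing state
            name, _, state = entry
        states[name] = state
    return states
```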
Loïc Hoguin 1ca46f1c63
CQ: Remove rabbit_memory_monitor and RAM durations
CQs have not used RAM durations for some time, following
the introduction of v2.
2024-06-20 15:19:51 +02:00
Michal Kuratczyk 489de21a76
Deflake vhost_is_created_with_default_user
This test relied on the order of items
in the list of users returned and would crash
with `case_clause` if `guest` was returned first.
2024-06-19 13:52:56 +02:00
Arnaud Cogoluègnes a52b8c0eb3
Bump Maven to 3.9.8 in Java test suites 2024-06-18 08:55:19 +02:00
Loïc Hoguin d959c8a43d
Merge pull request #11112 from rabbitmq/loic-faster-cq-shared-store-gc
4.x: Additional CQv2 message store optimisations
2024-06-17 15:30:25 +02:00
Michael Klishin 6605a7330f
Merge pull request #11455 from rabbitmq/default-max-message-size
Reduce default maximum message size to 16MB
2024-06-14 13:49:26 -04:00
Diana Parra Corbacho c11b812ef6 Reduce default maximum message size to 16MB 2024-06-14 11:55:03 +02:00
Loïc Hoguin 41ce4da5ca
CQ: Remove ability to change shared store index module
It will always use the ETS index. This change lets us
do optimisations that would otherwise not be possible,
including 81b2c39834953d9e1bd28938b7a6e472498fdf13.

A small functional change is included in this commit:
we now always use ets:update_counter to update the
ref_count, instead of a mix of update_{counter,fields}.

When upgrading to 4.0, the index will be rebuilt for
all users that were using a custom index module.
2024-06-14 11:52:03 +02:00
Jean-Sébastien Pédron df658317ee
peer_discovery_tmp_hidden_node_SUITE: Use IP address to simulate a long node name
[Why]
The `longnames`-based testcase depends on a configured FQDN. Buildbuddy
hosts were incorrectly configured and were lacking one. As a workaround,
`/etc/hosts` was modified to add an FQDN.

We don't use Buildbuddy anymore, but the problem also appeared on team
members' Broadcom-managed laptops (also having an incomplete network
configuration). More importantly, GitHub workers seem to have the same
problem, but randomly!

At this point, we can't rely on the host's correct network
configuration.

[How]
The testsuite is modified to use a hard-coded IP address, 127.0.0.1, to
simulate a long Erlang node name (i.e. it has dots).
2024-06-13 15:04:07 +02:00
Karl Nilsson cca64e8ed6 Add new local random exchange type.
This exchange type will only bind classic queues and will only return
routes for queues that are local to the publishing connection. If more than
one queue is bound it will make a random choice of the locally bound queues.

This exchange type is suitable as a component in systems that run
highly available low-latency RPC workloads.

Co-authored-by: Marcial Rosales <mrosales@pivotal.io>
2024-06-12 17:16:45 +01:00
Karl Nilsson 3390fc97fb etcd peer discovery fixes
Instead of relying on the complex and non-deterministic default node
selection mechanism inside peer discovery, this change makes the
etcd backend implementation perform the leader selection itself based on
the etcd create_revision of each entry. Although not spelled out anywhere
explicitly, it is likely that a property called "Create Revision" is going
to remain consistent throughout the lifetime of the etcd key.

Either way this is likely to be an improvement on the current approach.
2024-06-12 15:31:27 +01:00
Michael Klishin b9f387b988
Merge pull request #11412 from rabbitmq/link-error
Prefer link error over session error
2024-06-07 11:10:48 -04:00
Michael Davis 5e4a4326ef
Merge pull request #11398 from rabbitmq/md/khepri/read-only-permissions-transaction-queries
Khepri: Use read-only transactions to query for user/topic permissions
2024-06-07 08:20:02 -05:00
David Ansari 895bf3e3cb Prefer link error over session error
when AMQP client publishes with a bad 'to' address such that other links
on the same session can continue to work.
2024-06-07 13:41:05 +02:00
Michael Klishin c592405b7a
Merge pull request #11397 from rabbitmq/direct-reply-to
4.x: add tests for AMQP 0.9.1 direct reply to feature
2024-06-07 02:28:31 -04:00
Michael Klishin 959d3fd15c Simplify a rabbitmqctl_integration_SUITE test 2024-06-06 21:21:26 -04:00
Michael Klishin 1fc325f5d7 Introduce a way to disable virtual host reconciliation
Certain test suites need virtual hosts to be down
for an extended period of time, and this feature
gets in the way ;)
2024-06-06 21:21:26 -04:00
David Ansari d806d698af Add tests for AMQP 0.9.1 direct reply to feature
Add tests for https://www.rabbitmq.com/docs/direct-reply-to

Relates https://github.com/rabbitmq/rabbitmq-server/issues/11380
2024-06-06 18:11:39 +02:00
Michael Davis d0425e5361
cluster_minority_SUITE: Test definitions export 2024-06-06 10:49:27 -04:00
Michael Davis 7a8669ea73
Improve cluster_minority_SUITE
This is a mix of a few changes:

* Suppress the compiler warning from the export_all attribute.
* Lower Khepri's command handling timeout value. By default this is
  set to 30s in rabbit which makes each of the cases in
  `client_operations` take an excessively long time. Before this change
  the suite took around 10 minutes to complete. Now it takes between two
  and three minutes.
* Swap the order of client and broker teardown steps in end_per_group
  hook. The client teardown steps will always fail if run after the
  broker teardown steps because they rely on a value in `Config` that
  is deleted by broker teardown.
2024-06-06 10:49:27 -04:00
Michael Davis 79e1350a87
Remove feature flag code handling potential code server deadlocks
These test cases and RPC calls were concerned with deadlocks possible
from modifying the code server process from multiple callers. With the
switch to persistent_term in the parent commits these kinds of deadlocks
are no longer possible.

We should keep the RPC calls to `rabbit_ff_registry_wrapper:inventory/0`
though for mixed-version compatibility with nodes that use module
generation instead of `persistent_term` for their registry.
2024-06-05 11:26:41 -04:00
Michael Davis ea2da2d32d
rabbit_feature_flags: Inject test feature flags atomically
`rabbit_feature_flags:inject_test_feature_flags/2` could discard flags
from the persistent term because the read and write to the persistent
term were not atomic. `feature_flags_SUITE:registry_concurrent_reloads/1`
spawns many processes to modify the feature flag registry concurrently
but also updates this persistent term concurrently. The processes would
race so that many would read the initial flags, add their flag and write
that state, discarding any flags that had been written in the meantime.

We can add a lock around changes to the persistent term to make the
changes atomic.

Non-atomic updates to the persistent term caused unexpected behavior in
the `registry_concurrent_reloads/1` case previously. The
`PT_TESTSUITE_ATTRS` persistent_term only ended up containing a few of
the desired feature flags (for example only `ff_02` and `ff_06` along
with the base `ff_a` and `ff_b`). The case did not fail because the
registry continued to make progress towards that set of feature flags.
However the test case never reached the "all feature flags appeared"
state of the spammer and so the new assertion added at the end of the
case in this commit would fail.
2024-06-05 11:26:41 -04:00
Karl Nilsson ebbff46c96
Merge pull request #11307 from rabbitmq/amqp-flow-control-poc-2
Introduce outbound RabbitMQ internal AMQP flow control
2024-06-05 15:01:05 +01:00
Michael Klishin 44c381929e
Merge pull request #11373 from rabbitmq/qq-server-recovery
QQ: Enable server recovery.
2024-06-04 15:50:16 -04:00
Karl Nilsson 08a8f93e97 QQ: Enable server recovery.
This ensures quorum queue processes are restarted after a ra system restart.
2024-06-04 16:41:12 +01:00
David Ansari d70e529d9a Introduce outbound RabbitMQ internal AMQP flow control
## What?

Introduce RabbitMQ internal flow control for messages sent to AMQP
clients.

Prior to this PR, when an AMQP client granted a large amount of link
credit (e.g. 100k) to the sending queue, the sending queue sent
that number of messages to the session process no matter what.
This becomes problematic for memory usage when the session process
cannot send out messages fast enough to the AMQP client, especially if
1. The writer proc cannot send fast enough. This can happen when
the AMQP client does not receive fast enough and causes TCP
back-pressure to the server. Or
2. The server session proc is limited by remote-incoming-window.

Both scenarios are now added as test cases.
Tests
* tcp_back_pressure_rabbitmq_internal_flow_quorum_queue
* tcp_back_pressure_rabbitmq_internal_flow_classic_queue
cover scenario 1.

Tests
* incoming_window_closed_rabbitmq_internal_flow_quorum_queue
* incoming_window_closed_rabbitmq_internal_flow_classic_queue
cover scenario 2.

This PR sends messages from queues to AMQP clients in a more controlled
manner.

To illustrate:
```
make run-broker PLUGINS="rabbitmq_management" RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+S 4"
observer_cli:start()
mq
```
where `mq` sorts by message queue length.
Create a stream:
```
deps/rabbitmq_management/bin/rabbitmqadmin declare queue name=s1 queue_type=stream durable=true
```
Next, send and receive from the Stream via AMQP.
Grant a large number of link credit to the sending stream:
```
docker run -it --rm --add-host host.docker.internal:host-gateway ssorj/quiver:latest
bash-5.1# quiver --version
quiver 0.4.0-SNAPSHOT
bash-5.1# quiver //host.docker.internal//queue/s1 --durable -d 30s --credit 100000
```

**Before** this PR:
```
RESULTS

Count ............................................... 100,696 messages
Duration ............................................... 30.0 seconds
Sender rate ......................................... 120,422 messages/s
Receiver rate ......................................... 3,363 messages/s
End-to-end rate ....................................... 3,359 messages/s
```
We observe that all 100k link credit worth of messages are buffered in the
writer proc's mailbox:
```
|No | Pid        | MsgQueue  |Name or Initial Call                 |      Memory | Reductions          |Current Function                  |
|1  |<0.845.0>   |100001     |rabbit_amqp_writer:init/1            |  126.0734 MB| 466633491           |prim_inet:send/5                  |
```

**After** this PR:
```
RESULTS

Count ............................................. 2,973,440 messages
Duration ............................................... 30.0 seconds
Sender rate ......................................... 123,322 messages/s
Receiver rate ........................................ 99,250 messages/s
End-to-end rate ...................................... 99,148 messages/s
```
We observe that the message queue lengths of both writer and session
procs are low.

 ## How?

Our goal is to have queues send out messages in a controlled manner
without overloading RabbitMQ itself.
We want RabbitMQ internal flow control between:
```
AMQP writer proc <--- session proc <--- queue proc
```
A similar concept exists for classic queues sending via AMQP 0.9.1.
We want an approach that applies to AMQP and works generically for all
queue types.

For the interaction between AMQP writer proc and session proc we use a
simple credit based approach reusing module `credit_flow`.

For the interaction between session proc and queue proc, the following options
exist:

 ### Option 1
The session process provides explicit feedback to the queue after it
has sent N messages.
This approach is implemented in
https://github.com/ansd/rabbitmq-server/tree/amqp-flow-control-poc-1
and works well.
A new `rabbit_queue_type:sent/4` API was added which lets the queue proc know
that it can send further messages to the session proc.

Pros:
* Will work equally well for AMQP 0.9.1, e.g. when quorum queues send messages
  in auto ack mode to AMQP 0.9.1 clients.
* Simple for the session proc

Cons:
* Slightly added complexity in every queue type implementation
* Multiple Ra commands (settle, credit, sent) to decide when a quorum
  queue sends more messages.

 ### Option 2
A dual link approach where two AMQP links exist between
```
AMQP client <---link--> session proc <---link---> queue proc
```
When the client grants a large amount of credits, the session proc will
top up credits to the queue proc periodically in smaller batches.

Pros:
* No queue type modifications required.
* Re-uses AMQP link flow control

Cons:
* Significant added complexity in the session proc. A client can
  dynamically decrease or increase credits and dynamically change the drain
  mode while the session tops up credit to the queue.

 ### Option 3
Credit is a 32-bit unsigned integer.
The spec mandates that the receiver independently chooses a credit.
Nothing in the spec prevents the receiver from choosing a credit of 1 billion.
However the credit value is merely a **maximum**:
> The link-credit variable defines the current maximum legal amount that the delivery-count can be increased by.

Therefore, the server is not required to send all available messages to this
receiver.

For delivery-count:
> Only the sender MAY independently modify this field.

"independently" could be interpreted as the sender could add to the delivery-count
irrespective of what the client chose for drain and link-credit.

Option 3: At credit time, the queue proc could immediately consume excess
credit and advance the delivery-count before checking out any messages.
For example, if credit is 100k but the queue only wants to send 1k, the queue
could consume 99k of credit, advance the delivery-count, and subsequently send
at most 1k messages.
If the queue advanced the delivery-count, RabbitMQ must send a FLOW to the receiver,
otherwise the receiver wouldn’t know that it ran out of link-credit.

Pros:
* Very simple

Cons:
* Possibly unexpected behaviour for receiving AMQP clients
* Possibly poor end-to-end throughput in auto-ack mode because the queue
  would send a batch of messages followed by a FLOW containing the advanced
  delivery-count. Only thereafter would the client learn that it ran out of
  credit and top up again. This feels like synchronously pulling a batch
  of messages. In contrast, option 2 sends out more messages as soon as
  the previous messages have left RabbitMQ, without requiring another
  credit top-up from the receiver.
* drain mode with large credits requires the queue to send all available
  messages and only thereafter advance the delivery-count. Therefore,
  drain mode breaks option 3 somewhat.
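Option 3's credit-consumption step can be sketched as follows (Python, illustrative only; `QUEUE_SEND_LIMIT` is a hypothetical cap, and delivery-count uses 32-bit serial number arithmetic per the spec):

```python
QUEUE_SEND_LIMIT = 1_000  # hypothetical maximum the queue is willing to send

def consume_excess_credit(link_credit, delivery_count):
    """If the receiver granted more credit than the queue wants to use,
    consume the excess up front and advance the delivery-count.

    The caller must then emit a FLOW so the receiver learns that its
    link-credit was used up without corresponding deliveries."""
    usable = min(link_credit, QUEUE_SEND_LIMIT)
    excess = link_credit - usable
    return usable, (delivery_count + excess) % 2**32
```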

 ### Option 4
Session proc drops message payload when its outgoing-pending queue gets
too large and re-reads payloads from the queue once the message can be
sent (see `get_checked_out` Ra command for quorum queues).

Cons:
* Would need to be implemented for every queue type, especially classic queues
* Doesn't limit the amount of message metadata in the session proc's
  outgoing-pending queue

 ### Decision: Option 2
This commit implements option 2 to avoid any queue type modification.
At most one credit request is in-flight between session process and
queue process for a given queue consumer.
If the AMQP client sends another FLOW in between, the session proc
stashes the FLOW until it processes the previous credit reply.

A delivery is only sent from the outgoing-pending queue if the
session proc is not blocked by
1. writer proc, or
2. remote-incoming-window

The credit reply is placed into the outgoing-pending queue.
This ensures that the session proc will only top up the next batch of
credits if sufficient messages were sent out to the writer proc.

A future commit could additionally have each queue limit the number of
unacked messages for a given AMQP consumer, or alternatively make use
of session outgoing-window.
2024-06-04 13:11:55 +02:00
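The option-2 behaviour chosen above can be sketched in Python (names such as `SessionFlow` and `BATCH` are hypothetical, not the actual session proc code): the session tops up credit to the queue in small batches, keeps at most one credit request in flight, and stashes a newer client FLOW until the previous credit reply is processed.

```python
BATCH = 256  # hypothetical per-batch credit the session grants to the queue

class SessionFlow:
    def __init__(self):
        self.client_credit = 0    # link credit currently granted by the client
        self.in_flight = False    # at most one credit request to the queue proc
        self.stashed_flow = None  # newer FLOW, held until the reply arrives
        self.granted = 0          # credit last topped up to the queue proc

    def on_client_flow(self, credit):
        """Client granted (possibly very large) link credit."""
        if self.in_flight:
            self.stashed_flow = credit  # process after the credit reply
        else:
            self.client_credit = credit
            self._top_up()

    def on_credit_reply(self, sent):
        """Queue confirmed it used up the previous batch of credit."""
        self.in_flight = False
        self.client_credit -= sent
        if self.stashed_flow is not None:
            self.client_credit = self.stashed_flow
            self.stashed_flow = None
        self._top_up()

    def _top_up(self):
        # Only ever one outstanding batch: the next top-up happens when
        # the credit reply comes back through the outgoing-pending queue.
        if self.client_credit > 0 and not self.in_flight:
            self.in_flight = True
            self.granted = min(BATCH, self.client_credit)
```

This mirrors the invariant stated in the commit: at most one credit request in flight per consumer, with a later FLOW stashed rather than acted on immediately.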
Diana Parra Corbacho 3bbda5bdba Remove classic mirror queues 2024-06-04 13:00:31 +02:00
David Ansari bd847b8cac Put credit flow config into persistent term
Put configuration credit_flow_default_credit into persistent term such
that the tuple doesn't have to be copied on the hot path.

Also, change persistent term keys from `{rabbit, AtomKey}` to `AtomKey`
so that hashing becomes cheaper.
2024-05-31 16:20:51 +02:00
Simon Unge 19a751890c Remove checks to vhost-limit as that is now handled by rabbit_queue_type:declare
Add new error return tuple when queue limit is exceeded
2024-05-27 09:53:54 +02:00
Michal Kuratczyk cfa3de4b2b
Remove unused imports (thanks elp!) 2024-05-23 16:36:08 +02:00
Simon Unge 60ae6c0943 cluster wide queue limit 2024-05-21 17:41:53 +00:00
David Ansari df9e9978ce Fix type spec
and improve naming
2024-05-17 12:14:58 +02:00
David Ansari e8c5d22e9b Have minimum queue TTL take effect for quorum queues
For classic queues, if both policy and queue argument are set
for queue TTL, the minimum takes effect.

Prior to this commit, for quorum queues if both policy and
queue argument are set for queue TTL, the policy always overrides the
queue argument.

This commit aligns quorum queue TTL resolution with classic queue
behaviour. This allows developers to provide a custom lower queue TTL
while the operator policy acts as an upper bound safeguard.
2024-05-16 17:40:59 +02:00
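The resolution rule above boils down to taking the minimum when both values are set; a sketch (not the actual quorum queue code):

```python
def effective_queue_ttl(policy_ttl, argument_ttl):
    """Resolve queue TTL (milliseconds) from operator policy and queue
    argument: the minimum wins when both are set, so a developer may
    lower the TTL while the policy acts as an upper bound safeguard."""
    values = [v for v in (policy_ttl, argument_ttl) if v is not None]
    return min(values) if values else None
```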
Michael Davis 3435a0d362
Fix a flake in amqp_client_SUITE
`event_recorder` listens for events globally rather than per-connection
so it's possible for `connection_closed` and other events from prior
test cases to be captured and mixed with the events the test is
interested in, causing a flake.

Instead we can assert that the events we gather from `event_recorder`
contain the ones emitted by the connection.
2024-05-14 14:50:50 -04:00
David Ansari 90a40107b4 Remove BCC from x-death routing-keys
This commit is a follow up of https://github.com/rabbitmq/rabbitmq-server/pull/11174
which broke the following Java client test:
```
./mvnw verify -P '!setup-test-cluster' -Drabbitmqctl.bin=DOCKER:rabbitmq -Dit.test=DeadLetterExchange#deadLetterNewRK
```

The desired documented behaviour is the following:
> routing-keys: the routing keys (including CC keys but excluding BCC ones) the message was published with

This behaviour should be respected also for messages dead lettered into a
stream. Therefore, instead of first including the BCC keys in the `#death.routing_keys` field
and removing them again in mc_amqpl before sending the routing-keys to the
client as done in v3.13.2 in
dc25ef5329/deps/rabbit/src/mc_amqpl.erl (L527)
we instead directly omit the BCC keys from `#death.routing_keys` when
recording a death event.

This commit records the BCC keys in their own mc `bcc` annotation in `mc_amqpl:init/1`.
2024-05-14 13:40:23 +02:00
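The documented behaviour can be sketched with a hypothetical helper (not the broker's code): the recorded routing keys include CC keys but exclude the keys that arrived via the BCC annotation.

```python
def death_routing_keys(routing_keys, bcc_keys):
    """Routing keys to record in a death event: CC keys included,
    BCC keys excluded, original order preserved."""
    bcc = set(bcc_keys)
    return [k for k in routing_keys if k not in bcc]
```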
David Ansari 083889ec01 Fix test assertion 2024-05-13 18:21:40 +02:00
David Ansari 68f8ef14f3 Fix comment 2024-05-13 17:59:42 +02:00
Loïc Hoguin 0942573be7
Merge pull request #10656 from rabbitmq/loic-remove-cqv1-option
4.x: remove availability of CQv1
2024-05-13 14:33:16 +02:00
Loïc Hoguin ecf46002e0
Remove availability of CQv1
We reject CQv1 in rabbit.schema as well.

Most of the v1 code is still around as it is needed
for conversion to v2. It will be removed at a later
time when conversion is no longer supported.

We don't shard the CQ property suite anymore:
there's only 1 case remaining.
2024-05-13 13:06:07 +02:00
David Ansari 6b300a2f34 Fix dead lettering
# What?

This commit fixes #11159, #11160, #11173.

  # How?

  ## Background

RabbitMQ allows dead lettering messages for four different reasons; three
of them cause messages to be dead lettered automatically inside the broker
(maxlen, expired, delivery_limit), and one is caused by an explicit client
action (rejected).

RabbitMQ also allows dead letter topologies. When a message is dead
lettered, it is re-published to an exchange, and therefore zero to
multiple target queues. These target queues can in turn dead letter
messages. Hence it is possible to create a cycle of queues where
messages get dead lettered endlessly, which is what we want to avoid.

  ## Alternative approach

One approach to avoid such endless cycles is to use a concept similar to
the TTL field of an IPv4 datagram, or the hop limit field of an IPv6
datagram. These fields ensure that IP packets don't circulate forever
in the Internet. Each router decrements this counter. If the counter
reaches 0, the sender is notified and the packet gets dropped.

We could use the same approach in RabbitMQ: Whenever a queue dead
letters a message, a dead_letter_hop_limit field could be decremented.
If this field reaches 0, the message will be dropped.
Such a hop limit field could have a sensible default value, for example
32. The sender of the message could override this value. Likewise, the
client rejecting a message could set a new value via the Modified
outcome.

Such an approach has multiple advantages:
1. No dead letter cycle detection per se needs to be performed within
   the broker which is a slight simplification to what we have today.
2. Simpler dead letter topologies. One very common use case is that
   clients re-try sending the message after some time by consuming from
   a dead-letter queue and rejecting the message such that the message
   gets republished to the original queue. Instead of requiring explicit
   client actions, which increases complexity, a x-message-ttl argument
   could be set on the dead-letter queue to automatically retry after
   some time. This is a big simplification because it eliminates the
   need of various frameworks that retry, such as
   https://docs.spring.io/spring-cloud-stream/reference/rabbit/rabbit_overview/rabbitmq-retry.html
3. No dead letter history information needs to be compressed because
   there is a clear limit on how often a message gets dead lettered.
   Therefore, the full history including timestamps of every dead letter
   event will be available to clients.

Disadvantages:
1. Breaks a lot of clients, even for 4.0.
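The hop-limit alternative could be sketched like this (illustrative only; the commit ultimately rejects this design, and the field name is hypothetical):

```python
DEFAULT_HOP_LIMIT = 32  # sensible default; the sender could override it

def dead_letter(message):
    """Decrement the hop limit on each dead-letter event, IP-TTL style.

    Returns the message to republish, or None if it must be dropped,
    bounding any dead-letter cycle without explicit cycle detection."""
    remaining = message.get("dead_letter_hop_limit", DEFAULT_HOP_LIMIT)
    if remaining == 0:
        return None  # limit exhausted: drop instead of republishing
    message = dict(message)
    message["dead_letter_hop_limit"] = remaining - 1
    return message
```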

  ## 3.12 approach

Instead of decrementing a counter, the approach up to 3.12 has been to
drop the message if the message cycled automatically. A message cycled
automatically if no client explicitly rejected the message, i.e. the
message got dead lettered due to maxlen, expired, or delivery_limit, but
not due to rejected.

In this approach, the broker must be able to detect such cycles
reliably.
Reliably detecting dead letter cycles broke in 3.13 due to #11159 and #11160.

To reliably detect cycles, the broker must be able to obtain the exact
order of dead letter events for a given message. In 3.13.0 - 3.13.2, the
order cannot exactly be determined because wall clock time is used to
record the death time.

This commit uses the same approach as done in 3.12: a list ordered by
death recency is used with the most recent death at the head of the
list.

To not grow this list endlessly (for example when a client rejects the
same message hundreds of times), this list should be compacted.
This commit, like 3.12, compacts by tuple `{Queue, Reason}`:
If this message got already dead lettered from this Queue for this
Reason, then only a counter is incremented and the element is moved to
the front of the list.
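A simplified sketch of this 3.12-style compaction (Python; the real implementation works on mc annotations, not dicts): the deaths list is ordered by recency, and a repeated `{Queue, Reason}` pair only bumps a counter and moves to the front.

```python
def record_death(deaths, queue, reason):
    """Prepend a death event to `deaths` (a list of dicts, most recent
    first), compacting by (queue, reason): an existing entry for the
    same pair gets its counter incremented and moves to the front."""
    for i, d in enumerate(deaths):
        if d["queue"] == queue and d["reason"] == reason:
            bumped = dict(d, count=d["count"] + 1)
            return [bumped] + deaths[:i] + deaths[i + 1:]
    return [{"queue": queue, "reason": reason, "count": 1}] + deaths
```

Because the list stays ordered by death recency, the broker can read the exact order of dead letter events off the head of the list, which is what reliable cycle detection needs.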

  ## Streams & AMQP 1.0 clients

Dead lettering from a stream doesn't make sense because:
1. a client cannot reject a message from a stream since the stream must
   maintain the total order of events to be consumed by multiple clients.
2. TTL is implemented by Stream retention where only old Stream segments
   are automatically deleted (or archived in the future).
3. same applies to maxlen

Although messages cannot be dead lettered **from** a stream, messages can be dead lettered
**into** a stream. This commit provides clients consuming from a stream the death history: #11173

Additionally, this commit provides AMQP 1.0 clients the death history via
message annotation `x-opt-deaths` which contains the same information as
AMQP 0.9.1 header `x-death`.

Both storing the death history in a stream and providing death history
to an AMQP 1.0 client use the same encoding: a message annotation
`x-opt-deaths` that contains an array of maps ordered by death recency.
The information encoded is the same as in the AMQP 0.9.1 x-death header.

Instead of providing an array of maps, a better approach could be to use
an array of a custom AMQP death type, such as:
```xml
<amqp name="rabbitmq">
    <section name="custom-types">
        <type name="death" class="composite" source="list">
            <descriptor name="rabbitmq:death:list" code="0x00000000:0x000000255"/>
            <field name="queue" type="string" mandatory="true" label="the name of the queue the message was dead lettered from"/>
            <field name="reason" type="symbol" mandatory="true" label="the reason why this message was dead lettered"/>
            <field name="count" type="ulong" default="1" label="how many times this message was dead lettered from this queue for this reason"/>
            <field name="time" mandatory="true" type="timestamp" label="the first time when this message was dead lettered from this queue for this reason"/>
            <field name="exchange" type="string" default="" label="the exchange this message was published to before it was dead lettered for the first time from this queue for this reason"/>
            <field name="routing-keys" type="string" default="" multiple="true" label="the routing keys this message was published with before it was dead lettered for the first time from this queue for this reason"/>
            <field name="ttl" type="milliseconds" label="the time to live of this message before it was dead lettered for the first time from this queue for reason ‘expired’"/>
        </type>
    </section>
</amqp>
```

However, encoding and decoding custom AMQP types that are nested within
arrays, which in turn are nested within the message annotation map, can be
difficult for clients and the broker. Also, each client would need to
know the custom AMQP type. For now, we therefore use an array of maps.

  ## Feature flag
The new way to record death information is done via mc annotation
`deaths_v2`.
Because old nodes do not know this new annotation, recording death
information via mc annotation `deaths_v2` is hidden behind a new feature
flag `message_containers_deaths_v2`.

If this feature flag is disabled, a message will continue to use the
3.13.0 - 3.13.2 way to record death information in mc annotation
`deaths`, or even the older way within `x-death` header directly if
feature flag message_containers is also disabled.

Only if feature flag `message_containers_deaths_v2` is enabled and this
message hasn't been dead lettered before, will the new mc annotation
`deaths_v2` be used.
2024-05-13 11:00:39 +02:00
Karl Nilsson 49a4c66fce Move feature flag check outside of mc
the `mc` module is ideally meant to be kept pure and portable,
and feature flags have external infrastructure dependencies
as well as impure semantics.

Moving the check of this feature flag into the amqp session
simplifies the code (as no message containers with the new
format will enter the system before the feature flag is enabled).
2024-05-13 10:05:40 +02:00
David Ansari 6018155e9b Add property test for AMQP serializer and parser 2024-05-02 07:56:00 +00:00
David Ansari 6225dc9928 Do not parse entire AMQP body
Prior to this commit the entire amqp-value or amqp-sequence sections
were parsed when converting a message from mc_amqp.
Parsing the entire amqp-value or amqp-sequence section can generate a
huge amount of garbage depending on how large these sections are.

Given that other protocols cannot make use of amqp-value and
amqp-sequence sections anyway, leave them AMQP-encoded when converting
from mc_amqp.

In fact, prior to this commit, the entire body section was parsed,
generating huge amounts of garbage, just to subsequently encode it again
in mc_amqpl or mc_mqtt.

The new conversion interface from mc_amqp to other mc_* modules will
either output amqp-data sections or the encoded amqp-value /
amqp-sequence sections.
2024-05-02 07:56:00 +00:00
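The idea can be sketched as keeping the body section opaque and decoding only on demand (hypothetical wrapper, not the mc_amqp API):

```python
class LazyBody:
    """Holds the encoded amqp-value/amqp-sequence bytes.

    The conversion path passes the encoded bytes through untouched,
    avoiding the decode + re-encode round trip and its garbage; decoding
    only happens if a consumer of the converted message really needs it."""

    def __init__(self, encoded: bytes, decoder):
        self._encoded = encoded
        self._decoder = decoder
        self.decode_calls = 0  # instrumentation for the sketch

    def encoded(self) -> bytes:
        return self._encoded   # conversion path: no parsing at all

    def decoded(self):
        self.decode_calls += 1  # only paid when actually required
        return self._decoder(self._encoded)
```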