from 128 to 170. See comments for rationale.
On an Ubuntu box, run
```
quiver //host.docker.internal//queues/my-quorum-queue --durable --count 100k --duration 10m --body-size 12 --credit 10000
```
Before this commit:
```
RESULTS
Count ............................................... 100,000 messages
Duration ............................................... 11.0 seconds
Sender rate ........................................... 9,077 messages/s
Receiver rate ......................................... 9,097 messages/s
End-to-end rate ....................................... 9,066 messages/s
```
After this commit:
```
RESULTS
Count ............................................... 100,000 messages
Duration ................................................ 6.2 seconds
Sender rate .......................................... 16,215 messages/s
Receiver rate ........................................ 16,271 messages/s
End-to-end rate ...................................... 16,166 messages/s
```
That's because more `#enqueue{}` Ra commands can be batched before
fsyncing.
This commit therefore brings the performance of the scenario "a single connection publishing to
a quorum queue with a large number (>200) of unconfirmed publishes" in AMQP 1.0
closer to AMQP 0.9.1.
1. If khepri_db is enabled, rabbitmq_metadata is a critical component
2. When waiting for quorum+1, periodically log what doesn't have the
quorum+1
- for components: just list them
- for queues: list how many we are waiting for and how to display
them (because there could be a large number, logging that
could be impractical or even dangerous)
3. Make the tests significantly faster by using a single group
Don't let the `log` callback of the exchange_logging handler crash,
because when it crashes, the OTP logger removes the exchange_logging
handler, which in turn deletes the log exchange and its bindings.
It was seen several times in production that the log exchange suddenly
disappeared, and without debug logging there was no trace of why.
With this commit `erlang:display` will print the reason and stacktrace
to stderr without using the logging infrastructure.
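A minimal sketch of that guard, assuming a hypothetical inner `do_log/2` that holds the handler's real formatting and publishing logic:
``` erl
log(LogEvent, Config) ->
    try
        do_log(LogEvent, Config)
    catch
        Class:Reason:Stacktrace ->
            %% print to stderr directly; going through the logger here
            %% could crash (and remove) the handler again
            erlang:display({exchange_logging_handler_crashed,
                            Class, Reason, Stacktrace})
    end.
```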
The design of `rabbit_amqqueue_process` makes this change challenging.
The old implementation of the handler of the `{delete,_,_,_}` command
simply stopped the process and any cleanup was done in `gen_server2`'s
`terminate` callback. This makes it impossible to pass any error back
to the caller if the record can't be deleted from the metadata store
before a timeout.
The strategy taken here slightly mirrors an existing
`{shutdown, missing_owner}` termination value which can be returned from
`init_it2/3`. We pass the `ReplyTo` for the call with the state. We then
optionally reply to this `ReplyTo` if it is set in `terminate_delete/4`
with the result of `rabbit_amqqueue:internal_delete/3`. So deletion of
a classic queue will terminate the process but may return an error to
the caller if the record can't be removed from the metadata store
before the timeout.
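A rough sketch of the reply flow described above (function names from the text; argument order and shapes are assumptions):
``` erl
terminate_delete(ReplyTo, QName, ActingUser, Reason) ->
    Result = rabbit_amqqueue:internal_delete(QName, ActingUser, Reason),
    case ReplyTo of
        undefined -> ok;
        _         -> gen_server2:reply(ReplyTo, Result)
    end.
```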
The consumer reader process is gone and there is no way to recover
it as the node does not have a member of the stream anymore,
so it should be cancelled/detached.
This release contains fixes around certain recovery failures where
there are either orphaned segment files (that do not have a corresponding
index file) or index files that do not have a corresponding segment
file.
[Why]
All other queries are based on projections, not direct queries to
Khepri. Using projections for exchange names should be faster and more
consistent with the rest of the module.
[How]
The Khepri query is replaced by an ETS query.
Rename the two quorum queue priority levels from "low" and "high" to "normal" and
"high". This improves user experience because the default priority level is low /
normal. Prior to this commit users were confused why their messages show
up as low priority. Furthermore there is no need to consult the docs to
know whether the default priority level is low or high.
For RabbitMQ 4.0, this commit removes support for the deprecated `rabbitmq.conf` settings
```
cluster_formation.randomized_startup_delay_range.min
cluster_formation.randomized_startup_delay_range.max
```
The rabbitmq/cluster-operator already removed these settings in
b81e0f9bb8
This test called `rabbit_db_queue:update_decorators/1` which doesn't
exist - instead it can call `update_decorators/2` with an empty list.
This commit also adds the test to the `all_tests/0` list - it being
absent is why this wasn't caught before.
RabbitMQ should advertise the SASL mechanisms in the order as
configured in `rabbitmq.conf`.
Starting RabbitMQ with the following `rabbitmq.conf`:
```
auth_mechanisms.1 = PLAIN
auth_mechanisms.2 = AMQPLAIN
auth_mechanisms.3 = ANONYMOUS
```
translates prior to this commit to:
```
1> application:get_env(rabbit, auth_mechanisms).
{ok,['ANONYMOUS','AMQPLAIN','PLAIN']}
```
and after this commit to:
```
1> application:get_env(rabbit, auth_mechanisms).
{ok,['PLAIN','AMQPLAIN','ANONYMOUS']}
```
In our 4.0 docs we write:
> The server mechanisms are ordered in decreasing level of preference.
which complies with https://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-security-v1.0-os.html#type-sasl-mechanisms
returned by Ra, e.g. when a replica cannot be
restarted because of a concurrent delete
or because a QQ was inserted into a schema data
store but not yet registered as a process on
the node.
References #12013.
This commit is a breaking change in RabbitMQ 4.0.
## What?
Remove mqtt.default_user and mqtt.default_pass
Instead, rabbit.anonymous_login_user and rabbit.anonymous_login_pass
should be used.
## Why?
RabbitMQ 4.0 simplifies anonymous logins.
There should be a single configuration place
```
rabbit.anonymous_login_user
rabbit.anonymous_login_pass
```
that is used for anonymous logins for any protocol.
Anonymous login is orthogonal to the protocol the client uses.
Hence, there should be a single configuration place which can then be
used for MQTT, AMQP 1.0, AMQP 0.9.1, and RabbitMQ Stream protocol.
This will also simplify switching to SASL for MQTT 5.0 in the future.
## 1. Introduce new SASL mechanism ANONYMOUS
### What?
Introduce a new `rabbit_auth_mechanism` implementation for SASL
mechanism ANONYMOUS called `rabbit_auth_mechanism_anonymous`.
### Why?
As described in AMQP section 5.3.3.1, ANONYMOUS should be used when the
client doesn't need to authenticate.
Introducing a new `rabbit_auth_mechanism` consolidates and simplifies how anonymous
logins work across all RabbitMQ protocols that support SASL. This commit
therefore allows AMQP 0.9.1, AMQP 1.0, and stream clients to connect out of
the box to RabbitMQ without providing any username or password.
Today's AMQP 0.9.1 and stream protocol client libraries hard code the RabbitMQ
default credentials `guest:guest`, for example in:
* 0215e85643/src/main/java/com/rabbitmq/client/ConnectionFactory.java (L58-L61)
* ddb7a2f068/uri.go (L31-L32)
Hard coding RabbitMQ specific default credentials in dozens of different
client libraries is an anti-pattern in my opinion.
Furthermore, there are various AMQP 1.0 and MQTT client libraries which
we do not control or maintain and which still should work out of the box
when a user is getting started with RabbitMQ (that is without
providing `guest:guest` credentials).
### How?
The old RabbitMQ 3.13 AMQP 1.0 plugin `default_user`
[configuration](146b4862d8/deps/rabbitmq_amqp1_0/Makefile (L6))
is replaced with the following two new `rabbit` configurations:
```
{anonymous_login_user, <<"guest">>},
{anonymous_login_pass, <<"guest">>},
```
We call it `anonymous_login_user` because this user will be used for
anonymous logins. The subsequent commit uses the same setting for
anonymous logins in MQTT. Hence, this user is orthogonal to the protocol
used when the client connects.
Setting `anonymous_login_pass` could have been left out.
This commit decides to include it because our documentation has so far
recommended:
> It is highly recommended to pre-configure a new user with a generated username and password or delete the guest user
> or at least change its password to reasonably secure generated value that won't be known to the public.
By having the new module `rabbit_auth_mechanism_anonymous` internally
authenticate with `anonymous_login_pass` instead of blindly allowing
access without any password, we protect operators that relied on the
sentence:
> or at least change its password to reasonably secure generated value that won't be known to the public
To ease the getting started experience, since RabbitMQ already deploys a
guest user with full access to the default virtual host `/`, this commit
also allows SASL mechanism ANONYMOUS in `rabbit` setting `auth_mechanisms`.
In production, operators should disable SASL mechanism ANONYMOUS by
setting `anonymous_login_user` to `none` (or by removing ANONYMOUS from
the `auth_mechanisms` setting). This will be documented separately.
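A sketch of what that could look like in `rabbitmq.conf` (consult the 4.0 docs for the authoritative keys):
```
# disable anonymous logins entirely
anonymous_login_user = none
# and/or advertise only the SASL mechanisms you need
auth_mechanisms.1 = PLAIN
auth_mechanisms.2 = AMQPLAIN
```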
Even if operators forget or don't read the docs, this new ANONYMOUS
mechanism won't do any harm because it relies on the default username
`guest` and password `guest`, whose use is recommended against in
production and which by default can only connect from localhost.
## 2. Require SASL security layer in AMQP 1.0
### What?
An AMQP 1.0 client must use the SASL security layer.
### Why?
This is in line with the mandatory usage of SASL in AMQP 0.9.1 and
RabbitMQ stream protocol.
Since (presumably) any AMQP 1.0 client knows how to authenticate with a
username and password using SASL mechanism PLAIN, any AMQP 1.0 client
also (presumably) implements the trivial SASL mechanism ANONYMOUS.
Skipping SASL is not recommended in production anyway.
By requiring SASL, configuration for operators becomes easier.
Following the principle of least surprise, when an operator
configures `auth_mechanisms` to exclude `ANONYMOUS`, anonymous logins
will be prohibited both in SASL and by disallowing clients to skip the
SASL layer.
### How?
This commit implements AMQP 1.0 figure 2.13.
A follow-up commit needs to be pushed to `v3.13.x` which will use SASL
mechanism `anon` instead of `none` in the Erlang AMQP 1.0 client
such that AMQP 1.0 shovels running on 3.13 can connect to 4.0 RabbitMQ nodes.
Transient queue deletion previously caused a crash if Khepri was enabled
and a node with a transient queue went down while its cluster was in a
minority. We need to handle the `{error,timeout}` return possible from
`rabbit_db_queue:delete_transient/1`. In the
`rabbit_amqqueue:on_node_down/1` callback we log a warning when we see
this return.
We then try this deletion again during that node's
`rabbit_khepri:init/0` which is called from a boot step after
`rabbit_khepri:setup/0`. At that point we can return an error and halt
the node's boot if the command times out. The cluster is very likely to
be in a majority at that point since `rabbit_khepri:setup/0` waits for
a leader to be elected (requiring a majority).
This fixes a crash report found in the `cluster_minority_SUITE`'s
`end_per_group`.
The prior code skirted transactions because the filter function might
cause Khepri to call itself. We want to use the same idea as the old
code - get all queues, filter them, then delete them - but we want to
perform the deletion in a transaction and fail the transaction if any
queues changed since we read them.
This fixes a bug - that the call to `delete_in_khepri/2` could return
an error tuple that would be improperly recognized as `Deletions` -
but should also make deleting transient queues atomic and fast.
Each call to `delete_in_khepri/2` needed to wait on Ra to replicate
because the deletion is an individual command sent from one process.
Performing all deletions at once means we only need to wait for one
command to be replicated across the cluster.
We also bubble up any errors to delete now rather than storing them as
deletions. This fixes a crash that occurs on node down when Khepri is
in a minority.
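A minimal sketch of that read-compare-delete pattern, assuming Khepri's transaction API and illustrative paths (not the actual `rabbit_db_queue` code):
``` erl
delete_if_unchanged(QsReadOutsideTx) ->
    %% wrapper name/arity assumed; aborting rolls back the whole
    %% transaction if any queue changed since it was read outside the tx
    rabbit_khepri:transaction(
      fun() ->
              lists:foreach(
                fun({Path, Q}) ->
                        case khepri_tx:get(Path) of
                            {ok, Q} -> ok = khepri_tx:delete(Path);
                            _       -> khepri_tx:abort(queues_changed)
                        end
                end, QsReadOutsideTx)
      end).
```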
This makes possible to specify an encrypted
value in rabbitmq.conf using a prefix.
For example, to specify a default user password
as an encrypted value:
``` ini
default_user = bunnies-444
default_pass = encrypted:F/bjQkteQENB4rMUXFKdgsJEpYMXYLzBY/AmcYG83Tg8AOUwYP7Oa0Q33ooNEpK9
```
``` erl
[
 {rabbit, [
     {config_entry_decoder, [
         {passphrase, <<"bunnies">>}
     ]}
 ]}
].
```
This fixes a case-clause crash in the logs in `cluster_minority_SUITE`.
When the database is not available `rabbit_amqqueue:declare/6,7` should
return a `protocol_error` record with an error message rather than a
hard crash. Also included in this change are the necessary typespec
updates: `rabbit_db_queue:create_or_get/1` is the first function to
return a possible `{error,timeout}`. That bubbles up through
`rabbit_amqqueue:internal_declare/3` and must be handled in each
`rabbit_queue_type:declare/2` callback.
`rabbit_misc:rs/1` for a queue resource will print
`queue '<QName>' in vhost '<VHostName>'` so the "a queue" and
surrounding single quotes should be removed here.
* Log AMQP connection name and container-id
Fixes #11958
## What
Log container-id and connection name.
Example JSON log:
```
{"time":"2024-08-12 10:49:44.365724+02:00","level":"info","msg":"accepting AMQP connection [::1]:56754 -> [::1]:5672","pid":"<0.1164.0>","domain":"rabbitmq.connection"}
{"time":"2024-08-12 10:49:44.381244+02:00","level":"debug","msg":"User 'guest' authenticated successfully by backend rabbit_auth_backend_internal","pid":"<0.1164.0>","domain":"rabbitmq","connection":"[::1]:56754 -> [::1]:5672"}
{"time":"2024-08-12 10:49:44.381578+02:00","level":"info","msg":"AMQP 1.0 connection from container 'my container ID': user 'guest' authenticated and granted access to vhost '/'","pid":"<0.1164.0>","domain":"rabbitmq.connection","connection":"[::1]:56754 -> [::1]:5672","container_id":"my container ID"}
{"time":"2024-08-12 10:49:44.381654+02:00","level":"debug","msg":"AMQP 1.0 connection.open frame: hostname = localhost, extracted vhost = /, idle-time-out = {uint,\n 30000}","pid":"<0.1164.0>","domain":"rabbitmq","connection":"[::1]:56754 -> [::1]:5672","container_id":"my container ID"}
{"time":"2024-08-12 10:49:44.386412+02:00","level":"debug","msg":"AMQP 1.0 created session process <0.1170.0> for channel number 0","pid":"<0.1164.0>","domain":"rabbitmq","connection":"[::1]:56754 -> [::1]:5672","container_id":"my container ID"}
{"time":"2024-08-12 10:49:46.387957+02:00","level":"debug","msg":"AMQP 1.0 closed session process <0.1170.0> with channel number 0","pid":"<0.1164.0>","domain":"rabbitmq","connection":"[::1]:56754 -> [::1]:5672","container_id":"my container ID"}
{"time":"2024-08-12 10:49:46.388201+02:00","level":"info","msg":"closing AMQP connection ([::1]:56754 -> [::1]:5672)","pid":"<0.1164.0>","domain":"rabbitmq.connection","connection":"[::1]:56754 -> [::1]:5672","container_id":"my container ID"}
```
If JSON logging is not used, this commit still includes the container-ID
once at info level:
```
2024-08-12 10:48:57.451580+02:00 [info] <0.1164.0> accepting AMQP connection [::1]:56715 -> [::1]:5672
2024-08-12 10:48:57.465924+02:00 [debug] <0.1164.0> User 'guest' authenticated successfully by backend rabbit_auth_backend_internal
2024-08-12 10:48:57.466289+02:00 [info] <0.1164.0> AMQP 1.0 connection from container 'my container ID': user 'guest' authenticated and granted access to vhost '/'
2024-08-12 10:48:57.466377+02:00 [debug] <0.1164.0> AMQP 1.0 connection.open frame: hostname = localhost, extracted vhost = /, idle-time-out = {uint,
2024-08-12 10:48:57.466377+02:00 [debug] <0.1164.0> 30000}
2024-08-12 10:48:57.470800+02:00 [debug] <0.1164.0> AMQP 1.0 created session process <0.1170.0> for channel number 0
2024-08-12 10:48:59.472928+02:00 [debug] <0.1164.0> AMQP 1.0 closed session process <0.1170.0> with channel number 0
2024-08-12 10:48:59.473332+02:00 [info] <0.1164.0> closing AMQP connection ([::1]:56715 -> [::1]:5672)
```
## Why?
See #11958 and https://www.rabbitmq.com/docs/connections#client-provided-names
To provide a similar feature to AMQP 0.9.1, this commit uses the container-id as sent by the client in the open frame.
> Examples of containers are brokers and client applications.
The advantage is that the `container-id` is mandatory. Hence, in AMQP 1.0, we can enforce the desired behaviour that we document on our website for AMQP 0.9.1:
> The name is optional; however, developers are strongly encouraged to provide one as it would significantly simplify certain operational tasks.
* Clarify that container refers to AMQP 1.0
Rename container_id to amqp_container and change the log message such that
it's unambiguous that the word "container" refers to AMQP 1.0 containers
(to reduce confusion with the meaning of "container" in Docker / Kubernetes).
This suite uses the mixed version secondary umbrella as a starting
version for a cluster and then has a helper to upgrade the cluster to
the current code. This is meant to ensure that we can upgrade from the
previous minor.
This keeps functions pure and ensures that existing links do not break
if an operator were to dynamically change the server's max_message_size.
Each link now has a max_message_size:
* incoming links as determined by RabbitMQ config
* outgoing links as determined by the client
Put credit configuration into session state to make functions pure.
Although these credit configurations are not meant to be dynamically
changed at runtime, prior to this commit it could happen that
`persistent_term:get/1` returned different results across invocations,
leading to bugs in how credit was granted and recorded.
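A minimal sketch of the idea (keys and defaults are illustrative, not the real names): read the settings once when the session starts and keep them in the session state from then on.
``` erl
init_cfg() ->
    #{max_message_size => persistent_term:get(max_message_size, 16#800000),
      max_link_credit  => application:get_env(rabbit, max_link_credit, 128)}.
```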
Prior to this commit, test
```
ERL_AFLAGS="+S 2" make -C deps/rabbit ct-amqp_client t=cluster_size_3:detach_requeues_two_connections_quorum_queue
```
failed rarely locally, and more often in CI.
An instance of a failed test in CI is
https://github.com/rabbitmq/rabbitmq-server/actions/runs/10298099899/job/28502687451?pr=11945
The test failed with:
```
=== === Reason: {assertEqual,[{module,amqp_client_SUITE},
{line,2800},
{expression,"amqp10_msg : body ( Msg1 )"},
{expected,[<<"1">>]},
{value,[<<"2">>]}]}
in function amqp_client_SUITE:detach_requeues_two_connections/2 (amqp_client_SUITE.erl, line 2800)
```
because it could happen that Receiver1's credit top up to the quorum
queue is applied before Receiver0's credit top up such that Receiver1
gets enqueued to the ServiceQueue before Receiver0.
This commit contains the following new quorum queue features:
* Fair share high/low priorities
* SAC consumers honour consumer priorities
* Credited consumer refactoring to meet AMQP requirements.
* Use checkpoints feature to reduce memory use for queues with long backlogs
* Consumer cancel option that immediately removes consumer and returns all pending messages.
* More compact commands of the most common commands such as enqueue, settle and credit
* Correctly track the delivery-count to be compatible with the AMQP spec
* Support the "modified" AMQP 1.0 outcome better.
Commits:
* Quorum queues v4 scaffolding.
Create the new version but not including any changes yet.
QQ: force delete followers after leader has terminated.
Also try a longer sleep for mqtt_shared_SUITE so that the
delete operation stands a chance to time out and move on
to the forced deletion stage.
In some mixed machine version scenarios some followers will never
apply the poison pill command so we may as well force delete them
just in case.
QQ: skip test in amqp_client that cannot pass with mixed machine versions
QQ: remove dead code
Code relating to prior machine versions and state conversions.
rabbit_fifo_prop_SUITE fixes
* QQ: add v4 ff and new more compact enqueue command.
Also update rabbit_fifo_* suites to test more relevant code versions
where applicable.
QQ: always use the updated credit mode format
QQv4: use more compact consumer reference in settle, credit, return
This introduces a new type: consumer_key(), which is either the consumer_id
or the raft index the checkout was processed at. If the consumer is
using one of the updated credit spec formats, rabbit_fifo will use the
raft index as the primary key for the consumer such that the rabbit
fifo client can then use the more space efficient integer index
instead of the full consumer id in subsequent commands.
There is compatibility code to still accept the consumer id in
settle, return, discard and credit commands, but this is slightly
slower and of course less space efficient.
The old form will be used in cases where the fifo client may have
already removed the local consumer state (as happens after a cancel).
Lots of test refactorings of the rabbit_fifo_SUITE to begin to use
the new forms.
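A sketch of the consumer_key() type introduced above (names approximate):
``` erl
-type consumer_key() :: consumer_id() | ra:index().
```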
* More test refactoring and new API fixes
rabbit_fifo_prop_SUITE refactoring and other fixes.
* First pass SAC consumer priority implementation.
Single active consumers will be activated if they have a higher priority
than the currently active consumer. If the currently active consumer
has pending messages, no further messages will be assigned to the
consumer and the activation of the new consumer will happen once
all pending messages are settled. This is to ensure processing order.
Consumers with the same priority will internally be ordered to
favour those with credit then those that attached first.
QQ: add SAC consumer priority integration tests
QQ: add check for ff in tests
* QQ: add new consumer cancel option: 'remove'
This option immediately removes and returns all messages for a
consumer instead of the softer 'cancel' option which keeps the
consumer around until all pending messages have been either
settled or returned.
This involves a change to the rabbit_queue_type:cancel/5 API
to rabbit_queue_type:cancel/3.
* QQ: capture checked out time for each consumer message.
This will form the basis for queue initiated consumer timeouts.
* QQ: Refactor to use the new ra_machine:handle_aux/5 API
Instead of the old ra_machine:handle_aux/6 callback.
* QQ hi/lo priority queue
* QQ: Avoid using mc:size/1 inside rabbit_fifo
As we don't want to depend on external functions for things that may
change the state of the queue.
* QQ bug fix: Maintain order when returning multiple
Prior to this commit, quorum queues requeued messages in an undefined
order, which is wrong.
This commit fixes this bug and requeues messages always in the order as
nacked / rejected / released by the client.
We ensure that order of requeues is deterministic from the client's
point of view and doesn't depend on whether the quorum queue soft limit
was exceeded temporarily.
So, even when rabbit_fifo_client batches requeues, the order as nacked
by the client is still maintained.
* Simplify
* Add rabbit_quorum_queue:file_handle* functions back.
For backwards compat.
* dialyzer fix
* dynamic_qq_SUITE: avoid mixed versions failure.
* QQ: track number of requeues for message.
To be able to calculate the correct value for the AMQP delivery_count
header we need to be able to distinguish between messages that were
"released" or returned in QQ speak and those that were returned
due to errors such as channel termination.
This commit implements such tracking, as well as the calculation
of a new mc annotation `delivery_count` that AMQP makes use
of to set the header value accordingly.
* Use QQ consumer removal when AMQP client detaches
This enables us to unskip some AMQP tests.
* Use AMQP address v2 in fsharp-tests
* rabbit_fifo: Use Ra checkpoints
* quorum queues: Use a custom interval for checkpoints
* rabbit_fifo_SUITE: List actual effects in ?ASSERT_EFF failure
* QQ: Checkpoints modifications
* fixes
* QQ: emit release cursors on tick for followers and leaders
else followers could end up holding on to segments a bit longer
after traffic stops.
* Support draining a QQ SAC waiting consumer
By issuing drain=true, the client says "either send a transfer or a flow frame".
Since there are no messages to send to an inactive consumer, the sending
queue should advance the delivery-count consuming all link-credit and send
a credit_reply with drain=true to the session proc which causes the session
proc to send a flow frame to the client.
* Extract applying #credit{} cmd into 2 functions
This commit is only refactoring and doesn't change any behaviour.
* Fix default priority level
Prior to this commit, when a message didn't have a priority level set,
it got enqueued as high prio.
This is wrong because the default priority is 4 and
"for example, if 2 distinct priorities are implemented,
then levels 0 to 4 are equivalent, and levels 5 to 9 are equivalent
and levels 4 and 5 are distinct."
Hence, by default, a message without a priority set must be enqueued as
low priority.
* bazel run gazelle
* Avoid deprecated time unit
* Fix aux_test
* Delete dead code
* Fix rabbit_fifo_q:get_lowest_index/1
* Delete unused normalize functions
* Generate less garbage
* Add integration test for QQ SAC with consumer priority
* Improve readability
* Change modified outcome behaviour
With the new quorum queue v4 improvements where a requeue counter was
added in addition to the quorum queue delivery counter, the following
sentence from https://github.com/rabbitmq/rabbitmq-server/pull/6292#issue-1431275848
doesn't apply anymore:
> Also the case where delivery_failed=false|undefined requires the release of the
> message without incrementing the delivery_count. Again this is not something
> that our queues are able to do so again we have to reject without requeue.
Therefore, we simplify the modified outcome behaviour:
RabbitMQ will from now on only discard the message if the modified's
undeliverable-here field is true.
* Introduce single feature flag rabbitmq_4.0.0
## What?
Merge all feature flags introduced in RabbitMQ 4.0.0 into a single
feature flag called rabbitmq_4.0.0.
## Why?
1. This fixes the crash in
https://github.com/rabbitmq/rabbitmq-server/pull/10637#discussion_r1681002352
2. It's better user experience.
* QQ: expose priority metrics in UI
* Enable skipped test after rebasing onto main
* QQ: add new command "modify" to better handle AMQP modified outcomes.
This new command can be used to annotate returned or rejected messages.
This commit also retains the delivery-count across dead letter boundaries
such that the AMQP header delivery-count field can now include _all_ failed
deliver attempts since the message was originally received.
Internally the quorum queue has moved its delivery_count header to
only track the AMQP protocol delivery attempts and now introduces
a new acquired_count to track all message acquisitions by consumers.
* Type tweaks and naming
* Add test for modified outcome with classic queue
* Add test routing on message-annotations in modified outcome
* Skip tests in mixed version tests
Skip tests in mixed version tests because feature flag
rabbitmq_4.0.0 is needed for the new #modify{} Ra command
being sent to quorum queues.
---------
Co-authored-by: David Ansari <david.ansari@gmx.de>
Co-authored-by: Michael Davis <mcarsondavis@gmail.com>
This could happen if a leader election occurred just before the
member removal was initiated. In particular this could
happen when stopping and forgetting an existing rabbit node.
The leader returned by rabbit_quorum_queue:info/2 only ever queried
the pid field from the queue record, when more up-to-date info could
have been available in the ra_leaderboard table.
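A minimal sketch of the improved lookup (field shapes simplified):
``` erl
leader(Q) ->
    {RaName, _Node} = amqqueue:get_pid(Q),
    case ra_leaderboard:lookup_leader(RaName) of
        undefined -> amqqueue:get_pid(Q);
        Leader    -> Leader
    end.
```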
## What?
`mc:init()` already sets mc annotation `rts` (received timestamp).
This commit reuses this timestamp in `rabbit_message_interceptor`.
## Why?
`os:system_time/1` can jump forward or backward between invocations.
Using two different timestamps for the same meaning, the time the message
was received by RabbitMQ, can be misleading.
Log warnings when:
- Local node is not present. Even though we force it on the node
list, this will not work for other cluster nodes if they have
the same list.
- There are duplicated nodes
## What?
Prior to this commit, connecting 40k AMQP clients with 5 sessions each,
i.e. 200k sessions in total, took 7m55s.
After this commit, the same scenario takes 1m37s.
Additionally, prior to this commit, disconnecting all connections and sessions
at once caused the pg process to become overloaded, taking ~14 minutes to
process its mailbox.
After this commit, these same deregistrations take less than 5 seconds.
To repro:
```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/Azure/go-amqp"
)

func main() {
	for i := 0; i < 40_000; i++ {
		if i%1000 == 0 {
			log.Printf("opened %d connections", i)
		}
		conn, err := amqp.Dial(
			context.TODO(),
			"amqp://localhost",
			&amqp.ConnOptions{SASLType: amqp.SASLTypeAnonymous()})
		if err != nil {
			log.Fatal("open connection:", err)
		}
		for j := 0; j < 5; j++ {
			_, err = conn.NewSession(context.TODO(), nil)
			if err != nil {
				log.Fatal("begin session:", err)
			}
		}
	}
	log.Println("opened all connections")
	time.Sleep(5 * time.Hour)
}
```
## How?
This commit uses separate pg scopes (that is, processes and ETS tables) to register
AMQP connections and AMQP sessions. Since each Pid is now its own group,
registration and deregistration are fast.
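A minimal sketch of the scheme with an illustrative scope name; `pg:start_link/1` starts a dedicated scope process and ETS table, and using the member pid itself as the group name keeps every group tiny:
``` erl
start_scope() ->
    {ok, _} = pg:start_link(amqp_session_scope),
    ok.

register_session(SessionPid) ->
    ok = pg:join(amqp_session_scope, SessionPid, SessionPid).

unregister_session(SessionPid) ->
    ok = pg:leave(amqp_session_scope, SessionPid, SessionPid).
```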
Sometimes in CI under Khepri, the test case errored with:
```
receiver_attached flushed: {amqp10_event,
{session,<0.396.0>,
{ended,
{'v1_0.error',
{symbol,<<"amqp:internal-error">>},
{utf8,
<<"stream queue 'leader_transfer_stream_credit_single' in vhost '/' does not have a running replica on the local node">>},
undefined}}}}
```
Similar to other RabbitMQ internal credit flow configurations such as
`credit_flow_default_credit` and `msg_store_credit_disc_bound`, this
commit makes the `classic_queue_consumer_unsent_message_limit`
configurable via `advanced.config`.
See https://github.com/rabbitmq/rabbitmq-server/pull/11822 for the
original motivation to make this setting configurable.
Fixes #11841
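A sketch of such an `advanced.config` (the setting name is from this commit; the value shown is illustrative, not the default):
``` erl
[
 {rabbit, [
     {classic_queue_consumer_unsent_message_limit, 512}
 ]}
].
```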
PR #11307 introduced the invariant that at most one credit request between
session proc and quorum queue proc can be in flight at any given time.
This is not the case when rabbit_fifo_client re-sends credit
requests on behalf of the session proc when the quorum queue leader changes.
This commit therefore removes assertions which assumed only a single credit
request to be in flight.
This commit also removes field queue_flow_ctl.desired_credit
since it is redundant with field client_flow_ctl.credit.
This fixes a potential crash in `rabbit_amqp_management` where we tried
to format the exchange resource as a string (`~ts`). The other changes
are cosmetic.
`rabbit_amqp_management` returns HTTP status codes to the client. 503
means that a service is unavailable (which Khepri is while it is in a
minority) so it's a more appropriate code than the generic 500
internal server error.
Previously we used the `registered` approach where all Ra servers that
have a registered name would be recovered. This could have unintended
side effects for queues that, for example, were deleted while not all
members of the quorum queue were running. In this case
the Ra system would have recovered the members that were not deleted,
which is not ideal, as a dangling member would just sit and loop in
pre-vote state and a future declaration of the queue may partially
fail.
Instead we rely on the metadata store as the source of truth about which
members should be restarted after a Ra system restart.
Sometimes on Khepri the test failed with:
```
=== Ended at 2024-07-24 10:07:15
=== Location: [{gen_server,call,419},
{amqpl_direct_reply_to_SUITE,rpc,226},
{test_server,ts_tc,1793},
{test_server,run_test_case_eval1,1302},
{test_server,run_test_case_eval,1234}]
=== === Reason: {{shutdown,
{server_initiated_close,404,
<<"NOT_FOUND - no queue 'tests.amqpl_direct_reply_to.rpc.requests' in vhost '/'">>}},
{gen_server,call,
[<0.272.0>,
{call,
{'basic.get',0,
<<"tests.amqpl_direct_reply_to.rpc.requests">>,
false},
none,<0.246.0>},
infinity]}}
```
https://github.com/rabbitmq/rabbitmq-server/actions/runs/10074558971/job/27851173817?pr=11809
shows an instance of this flake.
The spec of `rabbit_exchange:declare/7` needs to be updated to return
`{ok, Exchange} | {error, Reason}` instead of the old return value of
`rabbit_types:exchange()`. This is safe to do since `declare/7` is not
called by RPC - from the CLI or otherwise - outside of test suites, and
in test suites only through the CLI's `TestHelper.declare_exchange/7`.
Callers of this helper are updated in this commit.
Otherwise this commit updates callers to unwrap the `{ok, Exchange}`
and bubble up errors.
It's unlikely that these operations will time out since the serial
number is always updated after some other transaction, for example
adding or deleting an exchange.
In the future we could consider moving the serial updates into those
transactions. In the meantime we can remove the possibility of timeouts
by giving the serial update unlimited time to finish.
A common case for exchange deletion is that callers want the deletion
to be idempotent: they treat the `ok` and `{error, not_found}` returns
from `rabbit_exchange:delete/3` the same way. To simplify these
callsites we add a `rabbit_exchange:ensure_deleted/3` that wraps
`rabbit_exchange:delete/3` and returns `ok` when the exchange did not
exist. Part of this commit is to update callsites to use this helper.
The other part is to handle the `rabbit_khepri:timeout()` error possible
when Khepri is in a minority. For most callsites this is just a matter
of adding a branch to their `case` clauses and an appropriate error and
message.
We need to bubble up the error through the caller
`rabbit_vhost:delete/2`. The CLI calls `rabbit_vhost:delete/2` and
already handles the `{error, timeout}` but the management UI needs an
update so that an HTTP DELETE returns an error code when the deletion
times out.
This function throws if the database fails to apply the transaction.
This function is only called by the `rabbit_vhost_limit` runtime
parameter module in its `notify/5` and `notify_clear/4` callbacks. These
callers have no way of handling this error but it should be very
difficult for them to face this crash: setting the runtime parameter
would need to succeed first which needs Khepri to be in majority. Khepri
would need to enter a minority between inserting/updating/deleting the
runtime parameter and updating the vhost. It's possible but unlikely.
In the future we could consider refactoring vhost limits to update the
vhost as the runtime parameter is changed, transactionally. I figure
that to be a very large change though so we leave this to the future.
This error is already handled by the callers of
`rabbit_vhost:update_metadata/3` (the CLI) and `rabbit_vhost:put_vhost/6`
(see the parent commit) but was just missing from the spec.
`rabbit_definitions:concurrent_for_all/4` doesn't pay any attention to
the return value of the `Fun`, only counting an error when it catches
`{error, E}`. So we need to `throw/1` the error from
`rabbit_vhost:put_vhost/6`.
The other callers of `rabbit_vhost:put_vhost/6` - the management UI and
the CLI (indirectly through `rabbit_vhost:add/2,3`) already handle this
error return.
`create_or_get_in_khepri/2` throws errors like the
`rabbit_khepri:timeout_error()`. Callers of `create_or_get/3` like
`rabbit_vhost:do_add/3` and its callers handle the throw with a `try`/
`catch` block and return the error tuple, which is then handled by
their callers.
This ensures that the call graph of `rabbit_db_binding:create/2` and
`rabbit_db_binding:delete/2` handle the `{error, timeout}` error
possible when Khepri is in a minority.
This is essentially a cosmetic change. Read-only transactions are done
with queries in Khepri rather than commands, like read-write
transactions. Local queries cannot timeout like commands so marking the
transaction as 'ro' means that we don't need to handle a potential
'{error, timeout}' return.
This case only targets Khepri. Instead of setting the `metadata_store`
config option we should skip the test when the configured metadata
store is mnesia.
This contains a fix in the ra_directory module to ensure
names can be deleted even when a Ra server has never been started
during the current node lifetime.
Also contains a small tweak to ensure that ra_directory:unregister_name
is called before deleting a Ra data directory, which is less likely
to cause a corrupt state that would stop a Ra system from starting.
`rabbit_amqqueue:declare` is handled, and in case of known errors, the correct error code is sent back.
Signed-off-by: Gabriele Santomaggio <g.santomaggio@gmail.com>
Add copies of some per-object metrics that are labeled per-channel,
aggregated to reduce cardinality. These metrics are valuable and
easier to process if exposed on a per-exchange and per-queue basis.
The AMQP 0.9.1 longstr type is problematic as it can contain arbitrary
binary data but is typically used for utf8 by users.
The current conversion into AMQP avoids scanning arbitrarily large
longstr to see if they only contain valid utf8 by treating all
longstr data longer than 255 bytes as binary. This is in hindsight
too strict and thus this commit increases the scanning limit to
4096 bytes - enough to cover the vast majority of AMQP 0.9.1 header
values.
This change also converts the AMQP binary types into longstr to
ensure that existing data (held in streams for example) is converted
to the AMQP 0.9.1 type that is most likely what the user intended.
Arguments
* `rabbitmq:stream-offset-spec`,
* `rabbitmq:stream-filter`,
* `rabbitmq:stream-match-unfiltered`
are set in the `filter` field of the `Source`.
This makes sense for these consumer arguments because:
> A filter acts as a function on a message which returns a boolean result
> indicating whether the message can pass through that filter or not.
Consumer priority is not really such a predicate.
Therefore, it makes more sense to set consumer priority in the
`properties` field of the `Attach` frame.
We call the key `rabbitmq:priority` which maps to consumer argument
`x-priority`.
While AMQP 0.9.1 consumers are allowed to set any integer data
type for the priority level, this commit decides to enforce an `int`
value (range -(2^31) to 2^31 - 1 inclusive).
Consumer priority levels outside of this range are not needed in
practice.
`ets:whereis/1` adds some overhead - it's two ETS calls rather than one
when `ets:whereis/1` returns a table identifier. It's also not atomic:
the table could disappear between `ets:whereis/1` calls and the call to
read data from a projection. We replace all `ets:whereis/1` calls on
projection tables with `try`/`catch` and return default values when we
catch the `badarg` `error` which ETS emits when passed a non-existing
table name.
One special case though is `ets:info/2`, which returns `undefined` when
passed a non-existing table name. That block is refactored to use a
`case` instead.
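A minimal sketch of the pattern with an illustrative projection table name:
``` erl
lookup(Key) ->
    try
        ets:lookup(rabbit_khepri_exchange, Key)
    catch
        %% the projection table doesn't exist (yet): fall back to a default
        error:badarg -> []
    end.
```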
* Deprecate queue-master-locator
This should not be a breaking change - all validation should still pass
* CQs can now use `queue-leader-locator`
* `queue-leader-locator` takes precedence over `queue-master-locator` if both are used
* regardless of which name is used, effectively there are only two values: `client-local` (default) or `balanced`
* other values (`min-masters`, `random`, `least-leaders`) are mapped to `balanced`
* Management UI no longer shows `master-locator` fields when declaring a queue/policy, but such arguments can still be used manually (unless not permitted)
* exclusive queues are always declared locally, as before
Since feature flag `message_containers` introduced in 3.13.0 is required in 4.0,
we can also require all other feature flags introduced in or before 3.13.0
and remove their compatibility code for 4.0:
* restart_streams
* stream_sac_coordinator_unblock_group
* stream_filtering
* stream_update_config_command
Projections might not be available in a mixed-version scenario where a
cluster has nodes which are all blank/uninitialized and the majority
of nodes run a version of Khepri with a new machine version while the
minority does not have the new machine version's code.
In this case, the cluster's effective machine version will be set to
the newer version as the majority of members have access to the new
code. The older version members will be unable to apply commands
including the `register_projection` commands that set up these ETS
tables. When these ETS tables don't exist, calls like `ets:tab2list/1`
or `ets:lookup/2` cause `badarg` errors.
We use default empty values when `ets:whereis/1` returns `undefined` for
a projection table name. Instead we could use local queries or leader
queries. Writing equivalent queries is a fair amount more work and the
code would be hard to test. `ets:whereis/1` should only return
`undefined` in the above scenario which should only be a problem in
our mixed-version testing - not in practice.
This suite is only meant to run with Khepri as the metadata store.
Instead of setting this explicitly we can look at the configured
metadata store and conditionally skip the entire suite. This prevents
these tests from running twice in CI.
Khepri is not yet compatible with mixed-version testing and this suite
only tests clustering when Khepri is the metadata store in at least some
of the nodes.
This commit fixes https://github.com/rabbitmq/rabbitmq-server/discussions/11662
Prior to this commit the following crash occurred when an RPC reply message entered
RabbitMQ and tracing was enabled:
```
** Reason for termination ==
** {function_clause,
[{rabbit_trace,'-tap_in/6-fun-0-',
[{virtual_reply_queue,
<<"amq.rabbitmq.reply-to.g1h2AA5yZXBseUAyNzc5NjQyMAAAC1oAAAAAZo4bIw==.+Uvn1EmAp0ZA+oQx2yoQFA==">>}],
[{file,"rabbit_trace.erl"},{line,62}]},
{lists,map,2,[{file,"lists.erl"},{line,1559}]},
{rabbit_trace,tap_in,6,[{file,"rabbit_trace.erl"},{line,62}]},
{rabbit_channel,handle_method,3,
[{file,"rabbit_channel.erl"},{line,1284}]},
{rabbit_channel,handle_cast,2,
[{file,"rabbit_channel.erl"},{line,659}]},
{gen_server2,handle_msg,2,[{file,"gen_server2.erl"},{line,1056}]},
{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,241}]}]}
```
(Note that no trace message is emitted for messages that are delivered to
direct reply to requesting clients (neither in 3.12, nor in 3.13, nor
after this commit). This behaviour can be added in future when a direct
reply virtual queue becomes its own queue type.)
Fixes issue raised in discussion #11653
The user used the following env var:
```
RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="-setcookie FOOBARBAZ"
```
...and the cookie was not passed to the peer node for peer discovery.
This commit is a follow up of https://github.com/rabbitmq/rabbitmq-server/pull/11604
This commit changes the AMQP address format v2 from
```
/e/:exchange/:routing-key
/e/:exchange
/q/:queue
```
to
```
/exchanges/:exchange/:routing-key
/exchanges/:exchange
/queues/:queue
```
Advantages:
1. more user friendly
2. matches nicely with the plural forms of HTTP API v1 and HTTP API v2
This plural form is still non-overlapping with AMQP address format v1.
Although it might feel unusual at first to send a message to `/queues/q1`,
if you think about `queues` just being a namespace or entity type, this
address format makes sense.
`rabbit_runtime_parameters:value_global/2` was only used in
`rabbit_nodes:cluster_name/0` since near the beginning of the commit
history of the server and its usage was eliminated in 06932b9fcb
(#3085, released in v3.8.17+ and v3.9.0+).
`rabbit_runtime_parameters:value/4` doesn't appear to have been ever
used since it was introduced near the beginning of the commit history.
It may have been added just to mirror `value_global/2`'s interface.
Eliminating these dead functions allows us to also eliminate a somewhat
complicated function `rabbit_db_rtparams:get_or_set/2`.
Partially copy file
https://github.com/ninenines/cowlib/blob/optimise-urldecode/src/cow_uri.erl
We use this copy because:
1. uri_string:unquote/1 is lax: It doesn't validate that characters that are
required to be percent encoded are indeed percent encoded. In RabbitMQ,
we want to enforce that proper percent encoding is done by AMQP clients.
2. uri_string:unquote/1 and cow_uri:urldecode/1 in cowlib v2.13.0 are both
slow because they allocate a new binary for the common case where no
character was percent encoded.
When a new cowlib version is released, we should make the rabbit app depend on
that cowlib version, call cow_uri:urldecode/1, and delete this file (rabbit_uri.erl).
to distinguish between v1 and v2 address formats.
Previously, v1 and v2 address formats overlapped and behaved differently
for example for:
```
/queue/:queue
/exchange/:exchange
```
This PR changes the v2 format to:
```
/e/:exchange/:routing-key
/e/:exchange
/q/:queue
```
to distinguish between v1 and v2 addresses.
This makes it possible to call `rabbit_deprecated_features:is_permitted(amqp_address_v1)`
only when we know that the user requests address format v1.
Note that `rabbit_deprecated_features:is_permitted/1` should only
be called when the old feature is actually used.
Use percent encoding / decoding for address URI format v2.
This makes it possible to use any UTF-8 encoded characters, including
slashes (`/`), in routing keys, exchange names, and queue names, and is
more future proof.
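For example, a (hypothetical) queue named `my/queue` is addressed as:
```
/queues/my%2Fqueue
```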
Test `confirm_nack` had a race condition that meant either ack or nack
were likely outcomes. By stopping the queue before the publish we
ensure nack is the only valid outcome.
We don't need to duplicate so many patterns in so many
files since we have a monorepo (and want to keep it).
If I managed to miss something or remove something that
should stay, please put it back. Note that monorepo-wide
patterns should go in the top-level .gitignore file.
Other .gitignore files are for application or folder-
specific patterns.
## What?
The rabbit_queue_type API has allowed cancelling a consumer.
Cancelling a consumer merely stops the queue sending more messages
to the consumer. However, messages checked out to the cancelled consumer
remain checked out until acked by the client. It's up to the client when
and whether it wants to ack the remaining checked out messages.
For AMQP 0.9.1, this behaviour is necessary because the client may
receive messages between the point in time it sends the basic.cancel
request and the point in time it receives the basic.cancel_ok response.
AMQP 1.0 is a better designed protocol because a receiver can stop a
link as shown in figure 2.46.
After a link is stopped, the client knows that the queue won't deliver
any more messages.
Once the link is stopped, the client can subsequently detach the link.
This commit extends the rabbit_queue_type API to allow a consumer being
immediately removed:
1. Any checked out messages to that receiver will be requeued by the
queue. (As explained previously, a client that first stops a link
by sending a FLOW with `link_credit=0` and `echo=true` and waiting
for the FLOW reply from the server and settles any messages before
it sends the DETACH frame, won't have any checked out messages).
2. The queue entirely forgets this consumer and therefore stops
delivering messages to the receiver.
This new behaviour of consumer removal is similar to what happens when
an AMQP 0.9.1 channel is closed: All checked out messages to that
channel will be requeued.
## Why?
Removing the consumer immediately simplifies many aspects:
1. The server session process doesn't need to requeue any checked out
messages for the receiver when the receiver detaches the link.
Specifically, messages in the outgoing_unsettled_map and
outgoing_pending queue don't need to be requeued because the queue
takes care of requeueing any checked out messages.
2. It simplifies reasoning about clients first detaching and then
re-attaching in the same session with the same link handle (the handle
becomes available for re-use once a link is closed): This will result
in the same RabbitMQ queue consumer tag.
3. It simplifies queue implementations since state no longer needs to be held,
and special logic no longer applied, for consumers that are only
cancelled (basic.cancel AMQP 0.9.1) but not removed.
4. It makes the single active consumer feature safer when it comes to
maintaining message order: If a client cancels consumption via AMQP
0.9.1 basic.cancel, but still has in-flight checked out messages,
the queue will activate the next consumer. If the AMQP 0.9.1 client
shortly after crashes, messages to the old consumer will be requeued,
which results in messages being out of order. To maintain message order,
an AMQP 0.9.1 client must close the whole channel such that messages
are requeued before the next consumer is activated.
For AMQP 1.0, the client can either stop the link first (preferred)
or detach the link directly. Detaching the link will requeue all
messages before activating the next consumer, therefore maintaining
message order. Even if the session crashes, message order will be
maintained.
## How?
`rabbit_queue_type:cancel` accepts a spec as argument.
The new interaction between session proc and classic queue proc (let's
call it cancel API v2) must be hidden behind a feature flag.
This commit re-uses feature flag credit_api_v2 since it also gets
introduced in 4.0.
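A hypothetical sketch of a spec-based call; the field names are assumptions, not the actual API:
``` erl
remove_consumer(Q, CTag, Username, QState0) ->
    Spec = #{consumer_tag => CTag,
             reason       => remove,  %% requeue checked-out msgs, forget consumer
             user         => Username},
    rabbit_queue_type:cancel(Q, Spec, QState0).
```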
Fix the following sporadic failure:
```
=== === Reason: {assertMatch,
[{module,amqp_client_SUITE},
{line,446},
{expression,
"rabbitmq_amqp_client : delete_queue ( LinkPair , QName )"},
{pattern,"{ ok , # { message_count := NumMsgs } }"},
{value,{ok,#{message_count => 29}}}]}
in function amqp_client_SUITE:sender_settle_mode_mixed/1 (amqp_client_SUITE.erl, line 446)
```
The last message (30th) is sent as settled.
It apparently happened that all messages up to the 29th got stored.
The 29th message also got confirmed.
Subsequently the queue got deleted with only 29 ready messages.
Bumping NumMsgs to 31 ensures that the last message is sent unsettled.
Fix the following sporadic error in CI:
```
=== Location: [{amqp_client_SUITE,available_messages,3137},
{test_server,ts_tc,1793},
{test_server,run_test_case_eval1,1302},
{test_server,run_test_case_eval,1234}]
=== === Reason: {assertEqual,
[{module,amqp_client_SUITE},
{line,3137},
{expression,"get_available_messages ( Receiver )"},
{expected,5000},
{value,0}]}
```
The client decrements the available variable from 1 to 0 when it
receives the transfer and sends a credit_exhausted event to the CT test
case proc. The CT test case proc queries the client session proc for available
messages, which is still 0. The FLOW frame from RabbitMQ to the client with the
available=5000 set could arrive shortly after.
Avoid the following unexpected error in mixed version testing where
feature flag credit_api_v2 is disabled:
```
[error] <0.1319.0> Timed out waiting for credit reply from quorum queue 'stop' in vhost '/'. Hint: Enable feature flag credit_api_v2
[warning] <0.1319.0> Closing session for connection <0.1314.0>: {'v1_0.error',
[warning] <0.1319.0> {symbol,<<"amqp:internal-error">>},
[warning] <0.1319.0> {utf8,
[warning] <0.1319.0> <<"Timed out waiting for credit reply from quorum queue 'stop' in vhost '/'. Hint: Enable feature flag credit_api_v2">>},
[warning] <0.1319.0> undefined}
[error] <0.1319.0> ** Generic server <0.1319.0> terminating
[error] <0.1319.0> ** Last message in was {'$gen_cast',
[error] <0.1319.0> {frame_body,
[error] <0.1319.0> {'v1_0.flow',
[error] <0.1319.0> {uint,283},
[error] <0.1319.0> {uint,65535},
[error] <0.1319.0> {uint,298},
[error] <0.1319.0> {uint,4294967295},
[error] <0.1319.0> {uint,1},
[error] <0.1319.0> {uint,282},
[error] <0.1319.0> {uint,50},
[error] <0.1319.0> {uint,0},
[error] <0.1319.0> undefined,undefined,undefined}}}
```
Presumably, the server session proc timed out receiving a credit reply from
the quorum queue because the test case deleted the quorum queue
concurrently with the client (and therefore also the server session
process) topping up link credit.
This commit detaches the link first and ends the session synchronously
before deleting the quorum queue.
This commit attempts to remove the following flake:
```
{amqp_client_SUITE,server_closes_link,1113}
{badmatch,[<14696.3530.0>,<14696.3453.0>]}
```
by waiting after each test case until sessions were de-registered from
the 1st RabbitMQ node.
## What?
This commit fixes issues that were present only on `main`
branch and were introduced by #9022.
1. Classic queues (specifically `rabbit_queue_consumers:subtract_acks/3`)
expect message IDs to be (n)acked in the order in which they were delivered
to the channel / session proc.
Hence, the `lists:usort(MsgIds0)` in `rabbit_classic_queue:settle/5`
was wrong, causing not all messages to be acked and adding a regression
to AMQP 0.9.1 as well.
2. The order in which the session proc requeues or rejects multiple
message IDs at once is important. For example, if the client sends a
DISPOSITION with first=3 and last=5, the message IDs corresponding to
delivery IDs 3,4,5 must be requeued or rejected in exactly that
order.
For example, quorum queues use this order of message IDs in
34d3f94374/deps/rabbit/src/rabbit_fifo.erl (L226-L234)
to dead letter in that order.
## How?
The session proc will settle (internal) message IDs to queues in ascending
(AMQP) delivery ID order, i.e. in the order messages were sent to the
client and in the order messages were settled by the client.
This commit chooses to keep the session's outgoing_unsettled_map map
data structure.
An alternative would have been to use a queue or lqueue for the
outgoing_unsettled_map as done in
* 34d3f94374/deps/rabbit/src/rabbit_channel.erl (L135)
* 34d3f94374/deps/rabbit/src/rabbit_queue_consumers.erl (L43)
Whether a queue (as done by `rabbit_channel`) or a map (as done by
`rabbit_amqp_session`) performs better depends on the pattern how
clients ack messages.
A queue will likely perform good enough because usually the oldest
delivered messages will be acked first.
However, given that there can be many different consumers on an AMQP
0.9.1 channel or AMQP 1.0 session, this commit favours a map because
it will likely generate less garbage and is very efficient when for
example a single new message (or few new messages) gets acked while
many (older) messages are still checked out by the session (but by
possibly different AMQP 1.0 receivers).
It has largely been superseded by `perf`. It is no longer
generally useful. It can always be added to BUILD_DEPS for
the rare cases it is needed, or installed locally and
pointed to by setting its path in ERL_LIBS.
BEFORE: time gmake -C deps/rabbit ct-dynamic_qq 1.92s user 1.44s system 2% cpu 2:23.56 total
AFTER: time gmake -C deps/rabbit ct-dynamic_qq 1.66s user 1.22s system 2% cpu 1:56.44 total
Reduce the number of tests that are run for 2 nodes.
BEFORE: time gmake -C deps/rabbit ct-rabbit_stream_queue 7.22s user 5.72s system 2% cpu 8:28.18 total
AFTER: time gmake -C deps/rabbit ct-rabbit_stream_queue 27.04s user 8.43s system 10% cpu 5:38.63 total
The FD limits are still valuable.
The FD used will still show some information during CQv1
upgrade to v2 so it is kept for now. But in the future
it will have to be reworked to query the system, or be
removed.
The message has been tweaked; it isn't about FHC
or queues but about system limits only. The
ulimit() function can later be moved out of
FHC when FHC gets fully removed.
They are no longer used.
This removes a couple file_handle_cache:info/1 calls.
We are not removing them from the HTTP API to avoid
breaking things unintentionally.
Stats were not removed, including management UI stats
relating to FDs.
Web-MQTT and Web-STOMP configuration relating to FHC
were not removed.
The file_handle_cache itself must be kept until we
remove CQv1.
Besides fixing a regression detected by priority_queue_SUITE,
this introduces a drive-by change in
rabbit_priority_queue: avoid an exception when the
max priority is a negative value that previously slipped
through validation.
DQT = default queue type.
When a client provides no queue type, validation
should take the defaults (virtual host, global,
and the last resort fallback) into account
instead of considering the type to
be "undefined".
References #11457, #11528
The queue type argument won't always be a binary,
for example, when a virtual host is created.
As such, the validation code should accept at
least atoms in addition to binaries.
While at it, improve logging and error reporting
when DQT validation fails, and make the definition
import tests focussed on virtual hosts a bit more robust.
In case 16, an await_condition/2 condition was
not correctly matching the error. As a result,
the function proceeded to the assertion step
earlier than it should have, failing with
an obscure function_clause.
This was because an {error, Context} clause
was not correct.
In addition to fixing it, this change adds a
catch-all clause and verifies the loaded
tagged virtual host before running any assertions
on it.
If the virtual host was not imported, case 16
will now fail with a specific CT log message.
References #11457 because the changes there
exposed this behavior in CI.
Importing a definitions file with no `default_queue_type` metadata for a vhost
will result in that vhost's value being set to `undefined`.
Once set to a non-`undefined` value, this PR prevents `default_queue_type`
from being set back to `undefined`.
ra_state may contain a QQ state such as {'foo',init,unknown}.
Before this fix, all_replica_states didn't map such states
to a 2-tuple, which leads to a crash in maps:from_list because
a 3-tuple can't be handled.
A crash in rabbit_quorum_queue:all_replica_states leads to no
results being returned from a given node when the CLI asks for
QQs with minimum quorum.
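A sketch of the shape problem, with an illustrative helper (the actual fix may differ):
```erlang
%% ra_state entries may be 2- or 3-tuples; maps:from_list/1 accepts only
%% {Key, Value} 2-tuples and crashes with badarg otherwise.
to_replica_state({QName, RaState})           -> {QName, RaState};
to_replica_state({QName, RaState, _Details}) -> {QName, RaState}.
%% Without the second clause, maps:from_list([{foo, init, unknown}])
%% raises badarg.
```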
This reverts commit 8749c605f5.
[Why]
The patch was supposed to solve an issue that we didn't understand and
that was likely a network/DNS problem outside of RabbitMQ. We know it
didn't solve that issue because it was reported again 6 months after the
initial pull request (#8411).
What we are sure of, however, is that it increased RabbitMQ's testing
time significantly, because the code loops for 10+ minutes if the remote
node is not running.
The retry in the Feature flags subsystem was not the right place either.
The `noconnection` error is visible there because it runs earlier during
RabbitMQ startup. But retrying there won't solve a network issue
magically.
There are two ways to create a cluster:
1. peer discovery and this subsystem takes care of retries if necessary
and appropriate
2. manually using the CLI, in which case the user is responsible for
starting RabbitMQ nodes and clustering them
Let's revert it until the root cause is really understood.
It will always use the ETS index. This change lets us
do optimisations that would otherwise not be possible,
including 81b2c39834953d9e1bd28938b7a6e472498fdf13.
A small functional change is included in this commit:
we now always use ets:update_counter to update the
ref_count, instead of a mix of update_{counter,fields}.
When upgrading to 4.0, the index will be rebuilt for
all users that were using a custom index module.
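For illustration, a bump via ets:update_counter/3 looks like this; the table layout below is an assumption, not the real index schema:
```erlang
%% With the ref_count in tuple position 2, incrementing it is one call.
T = ets:new(msg_store_index_sketch, [set]),
true = ets:insert(T, {<<"msg-id">>, 0, some_file}),
1 = ets:update_counter(T, <<"msg-id">>, {2, +1}).
```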
[Why]
The `longnames`-based testcase depends on a configured FQDN. Buildbuddy
hosts were incorrectly configured and were lacking one. As a workaround,
`/etc/hosts` was modified to add an FQDN.
We don't use Buildbuddy anymore, but the problem also appeared on team
members' Broadcom-managed laptops (also having an incomplete network
configuration). More importantly, GitHub workers seem to have the same
problem, but randomly!
At this point, we can't rely on the host's correct network
configuration.
[How]
The testsuite is modified to use a hard-coded IP address, 127.0.0.1, to
simulate a long Erlang node name (i.e. it has dots).
[Why]
This is important if the environment changes between that error and the
next attempt to start `rabbit': the environment is only read during the
start of `rabbitmq_prelaunch' (and the cached context is cleaned on
stop).
This fixes a transient failure of the
`unit_log_management_SUITE:log_file_fails_to_initialise_during_startup/1`
testcase.
V2: Explicitly ignore the return value of `application:stop/1`.
This exchange type will only bind classic queues and will only return
routes for queues that are local to the publishing connection. If more than
one queue is bound it will make a random choice of the locally bound queues.
This exchange type is suitable as a component in systems that run
highly available low-latency RPC workloads.
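A sketch of the routing rule described above, assuming the bindings have already been filtered down to queues local to the publishing connection (names are illustrative):
```erlang
%% Return at most one destination, picked uniformly at random.
route_local_random([])          -> [];
route_local_random(LocalQueues) ->
    [lists:nth(rand:uniform(length(LocalQueues)), LocalQueues)].
```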
Co-authored-by: Marcial Rosales <mrosales@pivotal.io>
Instead of relying on the complex and non-deterministic default node
selection mechanism inside peer discovery, this change makes the
etcd backend implementation perform the leader selection itself, based on
the etcd create_revision of each entry. Although not spelled out
explicitly anywhere, it is likely that a property called "Create Revision"
is going to remain consistent throughout the lifetime of the etcd key.
Either way, this is likely to be an improvement on the current approach.
This has been in the file since 2015 but as we are now just
including 'proper' as a dependency we don't need it anymore
(and haven't needed it for some time).
This removal improves base execution speed quite a bit, by up to 10x:
make -C deps/rabbit nope 0,29s user 0,22s system 235% cpu 0,220 total
make -C deps/rabbit nope 0,03s user 0,02s system 101% cpu 0,053 total
When a cluster is formed from scratch and a virtual
host is declared in the process (can be via
definitions, or plugins, or any other way),
that virtual host's process tree will be started
on all reachable cluster nodes at that moment
in time.
However, this can be just a subset of all nodes
expected to join the cluster.
The most effective solution is to run this
reconciliation process on a timer for up to
5 minutes by default. This matches how long
some other parts of RabbitMQ (3.x) expect
cluster formation to take, at most.
Per discussion with @dcorbacho @mkuratczyk.
This is a mix of a few changes:
* Suppress the compiler warning from the export_all attribute.
* Lower Khepri's command handling timeout value. By default this is
set to 30s in rabbit which makes each of the cases in
`client_operations` take an excessively long time. Before this change
the suite took around 10 minutes to complete. Now it takes between two
and three minutes.
* Swap the order of client and broker teardown steps in end_per_group
hook. The client teardown steps will always fail if run after the
broker teardown steps because they rely on a value in `Config` that
is deleted by broker teardown.
The parent commit favors low-latency queries for read-only transactions,
so marking these read-only transactions as explicitly read-only will
cause them to be run against the local node. This ensures that functions
like `rabbit_auth_backend_internal:list_permissions/0` do not return
`{error, timeout}` when the cluster is in minority, fixing 'bad
generator' errors during definitions export (via
`rabbit_definitions:all_definitions/0`).
Read-write transactions behave like commands (for example 'put' or
'delete') but read-only transactions behave like queries (for example
'get' or 'exists'). When a transaction is passed as read-only we can
re-use the same default options we use for most Khepri queries and
favor low-latency queries. This reads the values from the local node
rather than the leader (or a consistent query), making the query faster
and preventing timeouts when attempting the query when the cluster is
in minority.
The child commit will update some read-only transaction callsites to
take advantage of this low-latency preference.
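A hedged sketch of a read-only transaction (Khepri API shapes and the path are assumptions for illustration):
```erlang
%% Marking the transaction read-only lets Khepri run it as a low-latency
%% query against the local node instead of going through the Ra leader.
{ok, Result} = khepri:transaction(
                 StoreId,
                 fun() -> khepri_tx:get([some, path]) end,
                 ro).
```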
These test cases and RPC calls were concerned with deadlocks possible
from modifying the code server process from multiple callers. With the
switch to persistent_term in the parent commits these kinds of deadlocks
are no longer possible.
We should keep the RPC calls to `rabbit_ff_registry_wrapper:inventory/0`
though for mixed-version compatibility with nodes that use module
generation instead of `persistent_term` for their registry.
`rabbit_feature_flags:inject_test_feature_flags/2` could discard flags
from the persistent term because the read and write to the persistent
term were not atomic. `feature_flags_SUITE:registry_concurrent_reloads/1`
spawns many processes to modify the feature flag registry concurrently
but also updates this persistent term concurrently. The processes would
race so that many would read the initial flags, add their flag and write
that state, discarding any flags that had been written in the meantime.
We can add a lock around changes to the persistent term to make the
changes atomic.
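A minimal sketch of the read-modify-write under a node-local lock; the key and function names are illustrative, not the actual helper:
```erlang
add_feature_flags(NewFlags) ->
    Lock = {feature_flags_pt_sketch, self()},
    true = global:set_lock(Lock, [node()]),   %% lock scoped to this node
    try
        Flags = persistent_term:get(ff_testsuite_attrs, #{}),
        persistent_term:put(ff_testsuite_attrs, maps:merge(Flags, NewFlags))
    after
        true = global:del_lock(Lock, [node()])
    end.
```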
Non-atomic updates to the persistent term caused unexpected behavior in
the `registry_concurrent_reloads/1` case previously. The
`PT_TESTSUITE_ATTRS` persistent_term only ended up containing a few of
the desired feature flags (for example only `ff_02` and `ff_06` along
with the base `ff_a` and `ff_b`). The case did not fail because the
registry continued to make progress towards that set of feature flags.
However the test case never reached the "all feature flags appeared"
state of the spammer and so the new assertion added at the end of the
case in this commit would fail.
This prevents a potential race which was possible when two processes
attempted to initialize the feature flag registry at the same time. One
process attempting to initialize the registry could take a relatively
long time to read the available feature flags meanwhile another process
could read a more recently changed set of available feature flags.
Locking before reading and writing the flag prevents one update from
clobbering the other.
This race was not possible before the registry used a persistent_term
because we read the version of the registry module as the very first
step when initializing. When we created a new module we checked the
version again and retried the update if it changed in the meantime. That
retry mechanism has been removed since we no longer track the current
version of the registry. Instead we can use a local lock for the same
effect.
Previously the feature flag registry was a module which was regenerated
(i.e. with `erl_syntax` and `compile:forms/2`) whenever the set of known
feature flags or feature flag states changed. Compiling and loading
a module is fairly expensive though, costing around 35-40ms for me
locally, and the registry module is regenerated many times on boot. So
this change saves a fair amount of time on boot. A regular single-node
broker boots locally for me in around 4250ms via `bazel run broker`.
With this patch the boot time falls to around 3300ms.
This patch also improves the time it takes to enable individual feature
flags. `time rabbitmqctl enable_feature_flag khepri_db` for example
changes locally for me from around 1.2s to 1.0s with this change.
## What?
Introduce RabbitMQ internal flow control for messages sent to AMQP
clients.
Prior to this PR, when an AMQP client granted a large amount of link
credit (e.g. 100k) to the sending queue, the sending queue sent
that amount of messages to the session process no matter what.
This becomes problematic for memory usage when the session process
cannot send out messages fast enough to the AMQP client, especially if
1. The writer proc cannot send fast enough. This can happen when
the AMQP client does not receive fast enough and causes TCP
back-pressure to the server. Or
2. The server session proc is limited by remote-incoming-window.
Both scenarios are now added as test cases.
Tests
* tcp_back_pressure_rabbitmq_internal_flow_quorum_queue
* tcp_back_pressure_rabbitmq_internal_flow_classic_queue
cover scenario 1.
Tests
* incoming_window_closed_rabbitmq_internal_flow_quorum_queue
* incoming_window_closed_rabbitmq_internal_flow_classic_queue
cover scenario 2.
This PR sends messages from queues to AMQP clients in a more controlled
manner.
To illustrate:
```
make run-broker PLUGINS="rabbitmq_management" RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+S 4"
observer_cli:start()
mq
```
where `mq` sorts by message queue length.
Create a stream:
```
deps/rabbitmq_management/bin/rabbitmqadmin declare queue name=s1 queue_type=stream durable=true
```
Next, send and receive from the Stream via AMQP.
Grant a large number of link credit to the sending stream:
```
docker run -it --rm --add-host host.docker.internal:host-gateway ssorj/quiver:latest
bash-5.1# quiver --version
quiver 0.4.0-SNAPSHOT
bash-5.1# quiver //host.docker.internal//queue/s1 --durable -d 30s --credit 100000
```
**Before** this PR:
```
RESULTS
Count ............................................... 100,696 messages
Duration ............................................... 30.0 seconds
Sender rate ......................................... 120,422 messages/s
Receiver rate ......................................... 3,363 messages/s
End-to-end rate ....................................... 3,359 messages/s
```
We observe that all 100k link credit worth of messages are buffered in the
writer proc's mailbox:
```
|No | Pid | MsgQueue |Name or Initial Call | Memory | Reductions |Current Function |
|1 |<0.845.0> |100001 |rabbit_amqp_writer:init/1 | 126.0734 MB| 466633491 |prim_inet:send/5 |
```
**After** this PR:
```
RESULTS
Count ............................................. 2,973,440 messages
Duration ............................................... 30.0 seconds
Sender rate ......................................... 123,322 messages/s
Receiver rate ........................................ 99,250 messages/s
End-to-end rate ...................................... 99,148 messages/s
```
We observe that the message queue lengths of both writer and session
procs are low.
## How?
Our goal is to have queues send out messages in a controlled manner
without overloading RabbitMQ itself.
We want RabbitMQ internal flow control between:
```
AMQP writer proc <--- session proc <--- queue proc
```
A similar concept exists for classic queues sending via AMQP 0.9.1.
We want an approach that applies to AMQP and works generically for all
queue types.
For the interaction between AMQP writer proc and session proc we use a
simple credit based approach reusing module `credit_flow`.
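For reference, the credit_flow pattern looks roughly like this; only the credit_flow calls are real, the surrounding functions and message shapes are illustrative:
```erlang
%% Sender side (e.g. session proc sending frames to the writer proc):
maybe_send(WriterPid, Frame) ->
    case credit_flow:blocked() of
        true  -> {stashed, Frame};            %% wait for a credit bump
        false -> credit_flow:send(WriterPid), %% consume one credit
                 WriterPid ! {frame, self(), Frame},
                 sent
    end.

%% Sender side, when the receiver grants credit back:
handle_info({bump_credit, Msg}, State) ->
    credit_flow:handle_bump_msg(Msg),
    {noreply, State}.

%% Receiver side (writer proc), after handling a frame from SessionPid:
handle_frame(SessionPid, Frame) ->
    credit_flow:ack(SessionPid),              %% periodically bumps the sender
    do_write(Frame).
```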
For the interaction between session proc and queue proc, the following options
exist:
### Option 1
The session process provides explicit feedback to the queue after it
has sent N messages.
This approach is implemented in
https://github.com/ansd/rabbitmq-server/tree/amqp-flow-control-poc-1
and works well.
A new `rabbit_queue_type:sent/4` API was added which lets the queue proc know
that it can send further messages to the session proc.
Pros:
* Will work equally well for AMQP 0.9.1, e.g. when quorum queues send messages
in auto ack mode to AMQP 0.9.1 clients.
* Simple for the session proc
Cons:
* Slightly added complexity in every queue type implementation
* Multiple Ra commands (settle, credit, sent) to decide when a quorum
queue sends more messages.
### Option 2
A dual link approach where two AMQP links exists between
```
AMQP client <---link--> session proc <---link---> queue proc
```
When the client grants a large amount of credits, the session proc will
top up credits to the queue proc periodically in smaller batches.
Pros:
* No queue type modifications required.
* Re-uses AMQP link flow control
Cons:
* Significant added complexity in the session proc. A client can
dynamically decrease or increase credits and dynamically change the drain
mode while the session tops up credit to the queue.
### Option 3
Credit is a 32 bit unsigned integer.
The spec mandates that the receiver independently chooses a credit.
Nothing in the spec prevents the receiver from choosing a credit of 1 billion.
However the credit value is merely a **maximum**:
> The link-credit variable defines the current maximum legal amount that the delivery-count can be increased by.
Therefore, the server is not required to send all available messages to this
receiver.
For delivery-count:
> Only the sender MAY independently modify this field.
"independently" could be interpreted as the sender could add to the delivery-count
irrespective of what the client chose for drain and link-credit.
Option 3: The queue proc could at credit time already consume credit
and advance the delivery-count if credit is too large before checking out any messages.
For example if credit is 100k, but the queue only wants to send 1k, the queue could
consume 99k of credits and advance the delivery-count, and subsequently send maximum 1k messages.
If the queue advanced the delivery-count, RabbitMQ must send a FLOW to the receiver,
otherwise the receiver wouldn’t know that it ran out of link-credit.
Pros:
* Very simple
Cons:
* Possibly unexpected behaviour for receiving AMQP clients
* Possibly poor end-to-end throughput in auto-ack mode because the queue
would send a batch of messages followed by a FLOW containing the advanced
delivery-count. Only thereafter will the client learn that it ran out of
credits and top up again. This feels like synchronously pulling a batch
of messages. In contrast, option 2 sends out more messages as soon as
the previous messages left RabbitMQ, without requiring another credit
top-up from the receiver.
* drain mode with large credits requires the queue to send all available
messages and only thereafter advance the delivery-count. Therefore,
drain mode breaks option 3 somewhat.
### Option 4
Session proc drops message payload when its outgoing-pending queue gets
too large and re-reads payloads from the queue once the message can be
sent (see `get_checked_out` Ra command for quorum queues).
Cons:
* Would need to be implemented for every queue type, especially classic queues
* Doesn't limit the amount of message metadata in the session proc's
outgoing-pending queue
### Decision: Option 2
This commit implements option 2 to avoid any queue type modification.
At most one credit request is in-flight between session process and
queue process for a given queue consumer.
If the AMQP client sends another FLOW in between, the session proc
stashes the FLOW until it processes the previous credit reply.
A delivery is only sent from the outgoing-pending queue if the
session proc is not blocked by
1. writer proc, or
2. remote-incoming-window
The credit reply is placed into the outgoing-pending queue.
This ensures that the session proc will only top up the next batch of
credits if sufficient messages were sent out to the writer proc.
A future commit could additionally have each queue limit the number of
unacked messages for a given AMQP consumer, or alternatively make use
of session outgoing-window.
Put configuration credit_flow_default_credit into persistent term such
that the tuple doesn't have to be copied on the hot path.
Also, change persistent term keys from `{rabbit, AtomKey}` to `AtomKey`
so that hashing becomes cheaper.
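For illustration (the concrete values are assumptions):
```erlang
%% Cache the {InitialCredit, MoreCreditAfter} tuple once...
ok = persistent_term:put(credit_flow_default_credit, {400, 200}),
%% ...then read it on the hot path: persistent_term:get/1 returns the
%% term without copying, and a flat atom key hashes more cheaply than
%% the old {rabbit, Key} tuple key.
{InitialCredit, MoreCreditAfter} =
    persistent_term:get(credit_flow_default_credit).
```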
The issue comes from a mechanism that allows us to avoid writing
to disk when a message has already been consumed. It works fine
in normal circumstances, but fan-out makes things trickier.
When multiple queues write and read the same message, we could
get a crash. Let's say queues A and B both handle message Msg.
* Queue A asks store to write Msg
* Queue B asks store to write Msg
* Queue B asks store to delete Msg (message was immediately consumed)
* Store processes Msg write from queue A
* Store writes Msg to current file
* Store processes Msg write from queue B
* Store notices queue B doesn't need Msg anymore; doesn't write
* Store clears Msg from the cache
* Queue A tries to read Msg
* Msg is missing from the cache
* Queue A tries to read from disk
* Msg is in the current write file and may not be on disk yet
* Crash
The problem is that the store clears Msg from the cache. We need
all messages written to the current file to remain in the cache,
as we can't guarantee the data is on disk when the time comes
to read. That is, until we roll over to the next file.
The issue was an incorrect match: instead of matching a single
location from the index, the code was matching against a list. The
error had been present in the code for almost 13 years, since commit
2ef30dc95e.
For classic queues, if both policy and queue argument are set
for queue TTL, the minimum takes effect.
Prior to this commit, for quorum queues if both policy and
queue argument are set for queue TTL, the policy always overrides the
queue argument.
This commit brings quorum queue TTL resolution in line with classic
queue behaviour. This allows developers to provide a custom lower
queue TTL while the operator policy acts as an upper-bound safeguard.
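The resolution rule boils down to taking the minimum; a sketch (not the actual code):
```erlang
%% Effective TTL when an operator policy and/or a queue argument exist.
effective_ttl(undefined, ArgTTL)    -> ArgTTL;
effective_ttl(PolicyTTL, undefined) -> PolicyTTL;
effective_ttl(PolicyTTL, ArgTTL)    -> min(PolicyTTL, ArgTTL).
```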
`event_recorder` listens for events globally rather than per-connection
so it's possible for `connection_closed` and other events from prior
test cases to be captured and mixed with the events the test is
interested in, causing a flake.
Instead we can assert that the events we gather from `event_recorder`
contain the ones emitted by the connection.
This commit is a follow up of https://github.com/rabbitmq/rabbitmq-server/pull/11174
which broke the following Java client test:
```
./mvnw verify -P '!setup-test-cluster' -Drabbitmqctl.bin=DOCKER:rabbitmq -Dit.test=DeadLetterExchange#deadLetterNewRK
```
The desired documented behaviour is the following:
> routing-keys: the routing keys (including CC keys but excluding BCC ones) the message was published with
This behaviour should be respected also for messages dead lettered into a
stream. Therefore, instead of first including the BCC keys in the `#death.routing_keys` field
and removing them again in mc_amqpl before sending the routing-keys to the
client, as done in v3.13.2 in
dc25ef5329/deps/rabbit/src/mc_amqpl.erl (L527)
we instead omit the BCC keys directly from `#death.routing_keys` when
recording a death event.
This commit records the BCC keys in their own mc `bcc` annotation in `mc_amqpl:init/1`.
[Why]
Before, the backend would always return a list of nodes and the
subsystem would select one based on their uptimes, the nodes they are
already clustered with, and the readiness of their database.
This works well in general but has some limitations. For instance with
the Consul backend, the discoverability of nodes depends on when each
one registered and in which order. Therefore, the node with the highest
uptime might not be the first that registers. In this case, the one that
registers first will only discover itself and boot as a standalone node.
However, the one with the highest uptime that registered after will
discover both nodes. It will then select itself as the node to join
because it has the highest uptime. In the end both nodes form distinct
clusters.
Another example is the Kubernetes backend. The current solution works
fine but it could be optimized: the backend knows we always want to join
the first node ("$node-0") regardless of the order in which they are
started because picking the first node alphabetically is fine.
Therefore we want to let the backend select the node to join if it
wants.
[How]
The `list_nodes()` callback can now return the following term:
```
{ok, {SelectedNode :: node(), NodeType}}
```
If the subsystem sees this return value, it will consider that the
returned node is the one to join. It will still query properties because
we want to make sure the node's database is ready before joining it.
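A hedged sketch of a backend opting into the new contract (node name and node type are illustrative):
```erlang
%% Instead of returning a list of all discovered nodes, the backend can
%% return the single node the subsystem should join.
list_nodes() ->
    {ok, {'rabbit@myrabbit-server-0', disc}}.
```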
[Why]
The two backends that use registration are Consul and etcd. The
discovery process relies on the registered nodes: they return whatever
was previously registered.
With the new checks and failsafes added in peer discovery in RabbitMQ
3.13.0, the fact that registration happens after running discovery
breaks the Consul and etcd backends.
It used to work before because the first node would eventually time out
waiting for a non-empty list of nodes from the backend and proceed as a
standalone node, registering itself on the way. Following nodes would
then discover that first node.
Among the new checks, the node running discovery expects to find itself
in the list of discovered nodes. Because it didn't register yet, it will
never find itself.
[How]
The solution is to register first, then run discovery. The node should
at least get itself in the discovered nodes.
We reject CQv1 in rabbit.schema as well.
Most of the v1 code is still around as it is needed
for conversion to v2. It will be removed at a later
time when conversion is no longer supported.
We don't shard the CQ property suite anymore:
there's only 1 case remaining.
# What?
This commit fixes #11159, #11160, and #11173.
# How?
## Background
RabbitMQ allows messages to be dead lettered for four different reasons. Three
of those reasons cause messages to be dead lettered automatically and
internally in the broker (maxlen, expired, delivery_limit), and one reason
is caused by an explicit client action (rejected).
RabbitMQ also allows dead letter topologies. When a message is dead
lettered, it is re-published to an exchange, and therefore zero to
multiple target queues. These target queues can in turn dead letter
messages. Hence it is possible to create a cycle of queues where
messages get dead lettered endlessly, which is what we want to avoid.
## Alternative approach
One approach to avoid such endless cycles is to use a concept similar to
the TTL field of the IPv4 datagram, or the hop limit field of an IPv6
datagram. These fields ensure that IP packets aren't circulating forever
in the Internet. Each router decrements this counter. If this counter
reaches 0, the sender is notified and the message gets dropped.
We could use the same approach in RabbitMQ: Whenever a queue dead
letters a message, a dead_letter_hop_limit field could be decremented.
If this field reaches 0, the message will be dropped.
Such a hop limit field could have a sensible default value, for example
32. The sender of the message could override this value. Likewise, the
client rejecting a message could set a new value via the Modified
outcome.
Such an approach has multiple advantages:
1. No dead letter cycle detection per se needs to be performed within
the broker which is a slight simplification to what we have today.
2. Simpler dead letter topologies. One very common use case is that
clients re-try sending the message after some time by consuming from
a dead-letter queue and rejecting the message such that the message
gets republished to the original queue. Instead of requiring explicit
client actions, which increases complexity, an x-message-ttl argument
could be set on the dead-letter queue to automatically retry after
some time. This is a big simplification because it eliminates the
need of various frameworks that retry, such as
https://docs.spring.io/spring-cloud-stream/reference/rabbit/rabbit_overview/rabbitmq-retry.html
3. No dead letter history information needs to be compressed because
there is a clear limit on how often a message gets dead lettered.
Therefore, the full history including timestamps of every dead letter
event will be available to clients.
Disadvantages:
1. Breaks a lot of clients, even for 4.0.
## 3.12 approach
Instead of decrementing a counter, the approach up to 3.12 has been to
drop the message if the message cycled automatically. A message cycled
automatically if no client explicitly rejected the message, i.e. the
message got dead lettered due to maxlen, expired, or delivery_limit, but
not due to rejected.
In this approach, the broker must be able to detect such cycles
reliably.
Reliably detecting dead letter cycles broke in 3.13 due to #11159 and #11160.
To reliably detect cycles, the broker must be able to obtain the exact
order of dead letter events for a given message. In 3.13.0 - 3.13.2, the
order cannot be determined exactly because wall clock time is used to
record the death time.
This commit uses the same approach as done in 3.12: a list ordered by
death recency is used with the most recent death at the head of the
list.
To not grow this list endlessly (for example when a client rejects the
same message hundreds of times), this list should be compacted.
This commit, like 3.12, compacts by tuple `{Queue, Reason}`:
If this message got already dead lettered from this Queue for this
Reason, then only a counter is incremented and the element is moved to
the front of the list.
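A sketch of that compaction step, with an illustrative list shape (the real death entries carry more fields):
```erlang
%% Bump the counter for an existing {Queue, Reason} key and move it to
%% the head; otherwise prepend a fresh entry.
record_death(Key = {_Queue, _Reason}, Ts, Deaths0) ->
    case lists:keytake(Key, 1, Deaths0) of
        {value, {Key, Count, FirstTs}, Rest} -> [{Key, Count + 1, FirstTs} | Rest];
        false                                -> [{Key, 1, Ts} | Deaths0]
    end.
```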
## Streams & AMQP 1.0 clients
Dead lettering from a stream doesn't make sense because:
1. a client cannot reject a message from a stream since the stream must
maintain the total order of events to be consumed by multiple clients.
2. TTL is implemented by Stream retention where only old Stream segments
are automatically deleted (or archived in the future).
3. same applies to maxlen
Although messages cannot be dead lettered **from** a stream, messages can be dead lettered
**into** a stream. This commit provides clients consuming from a stream the death history: #11173
Additionally, this commit provides AMQP 1.0 clients the death history via
message annotation `x-opt-deaths` which contains the same information as
AMQP 0.9.1 header `x-death`.
Both storing the death history in a stream and providing death history
to an AMQP 1.0 client use the same encoding: a message annotation
`x-opt-deaths` that contains an array of maps ordered by death recency.
The information encoded is the same as in the AMQP 0.9.1 x-death header.
Instead of providing an array of maps, a better approach could be to use
an array of a custom AMQP death type, such as:
```xml
<amqp name="rabbitmq">
<section name="custom-types">
<type name="death" class="composite" source="list">
<descriptor name="rabbitmq:death:list" code="0x00000000:0x000000255"/>
<field name="queue" type="string" mandatory="true" label="the name of the queue the message was dead lettered from"/>
<field name="reason" type="symbol" mandatory="true" label="the reason why this message was dead lettered"/>
<field name="count" type="ulong" default="1" label="how many times this message was dead lettered from this queue for this reason"/>
<field name="time" mandatory="true" type="timestamp" label="the first time when this message was dead lettered from this queue for this reason"/>
<field name="exchange" type="string" default="" label="the exchange this message was published to before it was dead lettered for the first time from this queue for this reason"/>
<field name="routing-keys" type="string" default="" multiple="true" label="the routing keys this message was published with before it was dead lettered for the first time from this queue for this reason"/>
<field name="ttl" type="milliseconds" label="the time to live of this message before it was dead lettered for the first time from this queue for reason ‘expired’"/>
</type>
</section>
</amqp>
```
However, encoding and decoding custom AMQP types that are nested within
arrays, which in turn are nested within the message annotation map, can be
difficult for clients and the broker. Also, each client would need to
know the custom AMQP type. For now, therefore, we use an array of maps.
## Feature flag
The new way to record death information is done via mc annotation
`deaths_v2`.
Because old nodes do not know this new annotation, recording death
information via mc annotation `deaths_v2` is hidden behind a new feature
flag `message_containers_deaths_v2`.
If this feature flag is disabled, a message will continue to use the
3.13.0 - 3.13.2 way to record death information in mc annotation
`deaths`, or even the older way within `x-death` header directly if
feature flag message_containers is also disabled.
Only if feature flag `message_containers_deaths_v2` is enabled and this
message hasn't been dead lettered before, will the new mc annotation
`deaths_v2` be used.
The `mc` module is ideally meant to be kept pure and portable,
while feature flags have external infrastructure dependencies
as well as impure semantics.
Moving the check of this feature flag into the amqp session
simplifies the code (as no message containers with the new
format will enter the system before the feature flag is enabled).
Compressed ETS tables may introduce a small throughput penalty (low single
digit %) but can reduce peak Ra memory use by 30-50%.
Also set a default wal_max_entries value to avoid mem tables growing
too large when using very small message sizes (as more than 1M tiny
messages can easily fit into one WAL file).
Ra 2.10.1 contains a type spec fix that is needed here.
Fixes #11171
An MQTT user encountered TLS handshake timeouts with their IoT device,
and the actual error from `ssl:handshake` / `ranch:handshake` was not
caught and logged.
At this time, `ranch` uses `exit(normal)` in the case of timeouts, but
that should change in the future
(https://github.com/ninenines/ranch/issues/336)
This is similar to https://github.com/rabbitmq/rabbitmq-server/pull/11057
What?
For incoming AMQP messages, always set the durable message container annotation.
Why?
Even though defaulting to durable=true when no durable annotation is set (as
was the case prior to this commit) is good enough, explicitly setting the
durable annotation makes the code a bit more future proof and maintainable
going forward in 4.0, where we will rely more on the durable annotation
because AMQP 1.0 message headers will be omitted in classic and quorum queues.
The performance impact of always setting the durable annotation is negligible.
Similar to how we convert from mc_amqp to mc_amqpl before
sending to a classic queue or quorum queue process if
feature flag message_containers_store_amqp_v1 is disabled,
we also need to do the same conversion before sending to an MQTT QoS 0
queue on the old node.
Prior to this commit, the entire amqp-value or amqp-sequence section
was parsed when converting a message from mc_amqp.
Parsing the entire amqp-value or amqp-sequence section can generate a
huge amount of garbage depending on how large these sections are.
Given that other protocols cannot make use of amqp-value and
amqp-sequence sections anyway, leave them AMQP-encoded when converting
from mc_amqp.
In fact, prior to this commit, the entire body section was parsed,
generating huge amounts of garbage, just to subsequently encode it again
in mc_amqpl or mc_mqtt.
The new conversion interface from mc_amqp to other mc_* modules will
either output amqp-data sections or the encoded amqp-value /
amqp-sequence sections.
This commit enables client apps to automatically perform end-to-end
checksumming over the bare message (i.e. body + application defined
headers).
This commit allows an app to configure the AMQP client:
* for a sending link to automatically compute CRC-32 or Adler-32
checksums over each bare message including the computed checksum
as a footer annotation, and
* for a receiving link to automatically lookup the expected CRC-32
or Adler-32 checksum in the footer annotation and, if present, check
the received checksum against the actually computed checksum.
The commit comes with the following advantages:
1. Transparent end-to-end checksumming. Although checksumming is
performed by TCP and RabbitMQ queues using the disk, end-to-end
checksumming is a level higher up and can therefore detect bit flips
within RabbitMQ nodes or load balancers and other bit flips that
went unnoticed.
2. Not only is the body checksummed, but also the properties and
application-properties sections. This is an advantage over AMQP 0.9.1
because the AMQP protocol disallows modification of the bare message.
3. This commit is currently used for testing the RabbitMQ AMQP
implementation, but it shows the feasibility of how apps could also
get integrity guarantees for the whole bare message using HMACs or
signatures.
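A minimal sketch of the sender/receiver sides; the footer key name is an assumption, only erlang:crc32/1 is the real call:
```erlang
%% Sender: checksum the encoded bare message and attach it as a footer.
attach_checksum(BareBin) ->
    {footer, #{<<"x-opt-crc-32">> => erlang:crc32(BareBin)}}.

%% Receiver: recompute and compare if the footer annotation is present.
verify_checksum(BareBin, #{<<"x-opt-crc-32">> := Expected}) ->
    case erlang:crc32(BareBin) of
        Expected -> ok;
        Actual   -> {error, {checksum_mismatch, Expected, Actual}}
    end;
verify_checksum(_BareBin, _Footer) ->
    ok. %% no checksum present, nothing to verify
```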
Fix crashes when a message is originally sent via AMQP, stored within
a classic or quorum queue, and subsequently dead lettered, where the
dead letter exchange needs access to message annotations, properties,
or application-properties.
AMQP 3.2.1 defines durable=false to be the default.
However, the same section also mentions:
> If the header section is omitted the receiver MUST assume the appropriate
> default values (or the meaning implied by no value being set) for the
> fields within the header unless other target or node specific defaults
> have otherwise been set.
We want RabbitMQ to be secure by default, hence in RabbitMQ we set
durable=true to be the default.
Ensure the footer gets delivered to the AMQP client as received from the
AMQP client when feature flag message_containers_store_amqp_v1 is disabled.
Fixes test
```
bazel test //deps/rabbit:amqp_system_SUITE-mixed -t- --test_sharding_strategy=disabled --test_env FOCUS="-group [dotnet] -case footer"
```
This commit fixes test
```
bazel test //deps/rabbitmq_mqtt:shared_SUITE-mixed -t- \
--test_sharding_strategy=disabled --test_env \
FOCUS="-group [mqtt,v3,cluster_size_3] -case pubsub"
```
Fix some mixed version tests
Assume the AMQP body, especially the amqp-value section, won't be parsed.
Hence, omit smart conversions from AMQP to MQTT involving the
Payload-Format-Indicator bit.
Fix test
Fix
```
bazel test //deps/amqp10_client:system_SUITE-mixed -t- --test_sharding_strategy=disabled --test_env FOCUS="-group [rabbitmq]"
```
Prior to this commit test case
```
bazel test //deps/rabbit:amqp_client_SUITE-mixed -t- \
--test_sharding_strategy=disabled --test_env \
FOCUS="-group [cluster_size_3] -case quorum_queue_on_old_node"
```
was failing because `mc_amqp:size(#v1{})` was called on the old node
which doesn't understand the new AMQP on disk message format.
Even though the old 3.13 node never stored mc_amqp messages in classic
or quorum queues,
1. it will either need to understand the new mc_amqp message format, or
2. we should prevent the new format from being sent to 3.13 nodes.
In this commit we decide on the 2nd solution.
In `mc:prepare(store, Msg)`, we convert the new mc_amqp format to
mc_amqpl which is guaranteed to be understood by the old node.
Note that `mc:prepare(store, Msg)` is called not only before actual
storage, but already before the message is sent to the queue process
(which might be hosted by the old node).
The 2nd solution is easier to reason about than the 1st solution
because:
a) We don't have to backport code meant to be for 4.0 to 3.13, and
b) 3.13 is guaranteed to never store mc_amqp messages in classic or
quorum queues, even in mixed version clusters.
The disadvantage of the 2nd solution is that messages are converted from
mc_amqp to mc_amqpl and back to mc_amqp if there is an AMQP sender and
AMQP receiver. However, this only happens while the new feature flag
is disabled during the rolling upgrade. In a certain sense, this is a
hybrid of how the AMQP 1.0 plugin worked in 3.13: even though we don't
proxy via AMQP 0.9.1 anymore, we still convert to AMQP 0.9.1 (mc_amqpl)
messages while feature flag message_containers_store_amqp_v1 is disabled.
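A hedged sketch of the decision point (function shape assumed, not the actual mc code):
```erlang
%% Convert new-format AMQP message containers to mc_amqpl before they can
%% reach a queue process that may live on a 3.13 node.
prepare(store, Msg) ->
    case rabbit_feature_flags:is_enabled(message_containers_store_amqp_v1) of
        true  -> Msg;
        false -> mc:convert(mc_amqpl, Msg)
    end;
prepare(_For, Msg) ->
    Msg.
```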