[How]
1. Use feature flags correctly: the code should not merely test whether a feature
flag is enabled, assuming something else enabled it. It should enable the
flag itself and react to an error.
2. Use `close_connection_sync/1` instead of the asynchronous
`amqp10_client:close_connection/1` to make sure connections are really
closed. The wait in `end_per_testcase/2` was apparently not enough.
3. For the two testcases that flake the most for me, enclose the code in
a try/after and make sure to close the connection at the end,
regardless of the result. This should be done for all testcases
because the test group uses a single set of RabbitMQ nodes for all
testcases; testcases are therefore supposed to clean up after
themselves.
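A minimal sketch of that try/after pattern (assumptions: `connection_config/1` and `run_testcase_body/1` are hypothetical helpers; `close_connection_sync/1` is the synchronous close from step 2):

```erlang
flaky_testcase(Config) ->
    OpnConf = connection_config(Config),  %% hypothetical helper building the config map
    {ok, Connection} = amqp10_client:open_connection(OpnConf),
    try
        %% the actual testcase body: open sessions, attach links, assert
        run_testcase_body(Connection)
    after
        %% always close the connection, regardless of the result, since
        %% the test group shares a single set of RabbitMQ nodes
        ok = close_connection_sync(Connection)
    end.
```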
build(deps): bump org.springframework.boot:spring-boot-starter-parent from 3.4.1 to 3.4.2 in /deps/rabbitmq_auth_backend_http/examples/rabbitmq_auth_backend_spring_boot
build(deps): bump org.springframework.boot:spring-boot-starter-parent from 3.4.1 to 3.4.2 in /deps/rabbitmq_auth_backend_http/examples/rabbitmq_auth_backend_spring_boot_kotlin
This commit makes no functional change and mostly deletes dead code.
1. Code targeting Erlang 22 and below is deleted since the minimum
required Erlang version is higher nowadays.
"In OTP 23 distribution flag DFLAG_BIG_CREATION became mandatory. All
pids are now encoded using NEW_PID_EXT, even external pids received
as PID_EXT from older nodes."
https://www.erlang.org/doc/apps/erts/erl_ext_dist.html#new_pid_ext
2. All v1 encoding and decoding of the Pid is deleted since the lowest
supported RabbitMQ version supports the v2 encoding nowadays.
Exits with reason "killed" only occur "naturally" in OTP
when a supervisor tries to shut a child down and it times out.
They are used for failure simulation in tests quite frequently, however.
This version contains bug fixes and a change to use async_dist
when a quorum queue sends a message to a remote node (e.g. a consumer
delivery). Using async_dist will reduce chances of messages not
reaching consumers in a timely manner when the system is loaded
and occasionally fills the distribution buffer.
When a leader changes, all enqueuer and consumer processes are notified
from the `state_enter(leader,` callback. However, a new leader may not
yet have applied all commands that the old leader had. If any of those
commands is a checkout or a register_enqueuer command, these processes
will not be notified of the new leader and thus may never resend their
pending commands.
The new leader will, however, send an applied notification when it does
apply these entries. These notifications are always sent from the leader
process, so they can also be used to trigger pending resends. This
commit implements that.
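On the client side this could be sketched roughly as follows (illustrative only; the state record and the `resend_pending/2` and `record_applied/2` helpers are hypothetical, not the actual `rabbit_fifo_client` code):

```erlang
%% Applied notifications are always sent by the leader process, so a
%% sender that differs from the leader we have recorded implies a
%% leadership change: update the leader and resend pending commands.
handle_applied(FromPid, Applied, #state{leader = Leader,
                                        pending = Pending} = State0) ->
    State = case FromPid of
                Leader ->
                    State0;
                NewLeader ->
                    ok = resend_pending(NewLeader, Pending),
                    State0#state{leader = NewLeader}
            end,
    record_applied(Applied, State).
```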
## What?
This commit fixes #13040.
Prior to this commit, exchange federation crashed if the MQTT topic exchange
(`amq.topic` by default) got federated and MQTT 5.0 clients subscribed on the
downstream. That's because the federation plugin sends bindings from downstream
to upstream via AMQP 0.9.1. However, binding arguments containing Erlang record
`mqtt_subscription_opts` (henceforth binding args v1) cannot be encoded in AMQP 0.9.1.
## Why?
Federating the MQTT topic exchange could be useful for warm standby use cases.
## How?
This commit makes binding arguments a valid AMQP 0.9.1 table (henceforth
binding args v2).
Binding args v2 can only be used if all nodes support it. Hence binding
args v2 comes with feature flag `rabbitmq_4.1.0`. Note that the AMQP
over WebSocket
[PR](https://github.com/rabbitmq/rabbitmq-server/pull/13071) already
introduces this same feature flag. Although the feature flag subsystem
allows plugins to define their own feature flags, and the MQTT plugin
defined its own feature flags in the past, reusing feature flag
`rabbitmq_4.1.0` is simpler.
This commit also avoids database migrations for both Mnesia and Khepri
if feature flag `rabbitmq_4.1.0` gets enabled. Instead, it's simpler to
migrate binding args v1 to binding args v2 at MQTT connection establishment
time if the feature flag is enabled. (If the feature flag is disabled at
connection establishment time, but gets enabled during the connection
lifetime, the connection keeps using binding args v1.)
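A sketch of that connection-time migration (the helper names, record fields, and v2 table keys are assumptions, not the actual MQTT plugin code; only `rabbit_feature_flags:is_enabled/1` is a real API):

```erlang
%% If feature flag rabbitmq_4.1.0 is enabled, rewrite binding args v1
%% (containing the mqtt_subscription_opts record) into a plain
%% AMQP 0.9.1 table (binding args v2) when the connection is set up.
maybe_migrate_binding_args(Bindings) ->
    case rabbit_feature_flags:is_enabled('rabbitmq_4.1.0') of
        true  -> [migrate_binding(B) || B <- Bindings];
        false -> Bindings  %% connection keeps using binding args v1
    end.

migrate_binding(#binding{args = [#mqtt_subscription_opts{
                                    qos = Qos,
                                    no_local = NoLocal}]} = B) ->
    %% hypothetical v2 layout: a valid AMQP 0.9.1 table
    B#binding{args = [{<<"x-mqtt-qos">>, byte, Qos},
                      {<<"x-mqtt-no-local">>, bool, NoLocal}]};
migrate_binding(B) ->
    B.
```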
This commit adds two new suites:
1. `federation_SUITE` which tests that federating the MQTT topic
exchange works, and
2. `feature_flag_SUITE` which tests the binding args migration from v1 to v2.
Visualise busy links from publisher to RabbitMQ. If the link credit
reaches 0, we set a yellow background colour in the cell.
Note that these credit values can change many times per second while the
management UI refreshes only every few seconds. However, it may still
give a user an idea of what links are currently busy.
We use yellow since that's consistent with the `flow` state in AMQP
0.9.1, which is also set to yellow.
We do not want to highlight **outgoing** links with credit 0 as
that might be a paused consumer, and therefore not a busy link.
We also use a yellow background colour if incoming-window is 0 (in case of
a cluster-wide memory or disk alarm) or if remote-incoming-window is 0,
as consumers should try to keep their incoming-window open and instead
use link credit if they want to pause consumption.
Additionally, we set a grey background colour for the `/management`
address just to highlight these slightly, since they are "special" link
pairs.
msg_store_io_batch_size is no longer used.
msg_store_credit_disc_bound appears to be used in the code, but I don't
see any impact of that value on performance. It should be properly
investigated and either removed completely or fixed, because there's
hardly any point in warning about the configured values
(plus, this setting is hopefully almost never used anyway).
According to the `rabbit_backing_queue` behaviour it must always
return `ok`, but it used to return a list of results, one for each
priority. That caused the crash below further up the call chain.
```
> rabbit_classic_queue:delete_crashed(Q)
** exception error: no case clause matching [ok,ok,ok,ok,ok,ok,ok,ok,ok,ok,ok]
in function rabbit_classic_queue:delete_crashed/2 (rabbit_classic_queue.erl, line 516)
```
Other backing_queue implementations (`rabbit_variable_queue`) just
exit with a badmatch upon error.
This (very minor) issue has been present since 3.13.0, when
`rabbit_classic_queue:delete_crashed_in_backing_queue/1` was
introduced with Khepri in commit 5f0981c5. Before that, the result of
`BQ:delete_crashed/1` was simply ignored.
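The fix can be sketched like this (illustrative; `fold_priorities/3` is a hypothetical helper iterating over the per-priority backing queues):

```erlang
%% rabbit_backing_queue requires delete_crashed/1 to return `ok`, so
%% run the delete for every priority sub-queue and discard the
%% per-priority `ok`s instead of returning them as a list.
delete_crashed(Q) ->
    ok = fold_priorities(
           fun(BQ, PQ, ok) -> BQ:delete_crashed(PQ) end, ok, Q).
```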
Include monitored session pids in format_status/1 of rabbit_amqp_writer.
They could be useful when debugging.
The maximum number of sessions per connection is limited, hence the
output won't be too large.