Fixes #11171
An MQTT user encountered TLS handshake timeouts with their IoT device,
and the actual error from `ssl:handshake` / `ranch:handshake` was not
caught and logged.
At this time, `ranch` uses `exit(normal)` in the case of timeouts, but
that should change in the future
(https://github.com/ninenines/ranch/issues/336).
This is similar to https://github.com/rabbitmq/rabbitmq-server/pull/11057.
## What?
For incoming AMQP messages, always set the durable message container annotation.
## Why?
Even though defaulting to durable=true when no durable annotation is set (as prior
to this commit) is good enough, explicitly setting the durable annotation makes
the code a bit more future-proof and maintainable going forward in 4.0, where we
will rely more on the durable annotation because AMQP 1.0 message headers will
be omitted in classic and quorum queues.
The performance impact of always setting the durable annotation is negligible.
Similar to how we convert from mc_amqp to mc_amqpl before
sending to a classic queue or quorum queue process if
feature flag message_containers_store_amqp_v1 is disabled,
we also need to do the same conversion before sending to an MQTT QoS 0
queue on the old node.
Prior to this commit the entire amqp-value or amqp-sequence sections
were parsed when converting a message from mc_amqp.
Parsing the entire amqp-value or amqp-sequence section can generate a
huge amount of garbage depending on how large these sections are.
Given that other protocols cannot make use of amqp-value and
amqp-sequence sections anyway, leave them AMQP-encoded when converting
from mc_amqp.
In fact, prior to this commit, the entire body section was parsed,
generating huge amounts of garbage just to subsequently encode it again
in mc_amqpl or mc_mqtt.
The new conversion interface from mc_amqp to other mc_* modules will
either output amqp-data sections or the encoded amqp-value /
amqp-sequence sections.
This commit enables client apps to automatically perform end-to-end
checksumming over the bare message (i.e. body + application defined
headers).
This commit allows an app to configure the AMQP client:
* for a sending link to automatically compute CRC-32 or Adler-32
checksums over each bare message including the computed checksum
as a footer annotation, and
* for a receiving link to automatically look up the expected CRC-32
or Adler-32 checksum in the footer annotation and, if present, check
it against the actually computed checksum (see the sketch after this list).
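A minimal sender-side sketch using Erlang's built-in `erlang:crc32/1` and
`erlang:adler32/1`; the footer annotation keys and the helper are illustrative
assumptions, not the actual client API:
```
%% Sketch (footer keys and helper are hypothetical): checksum the encoded
%% bare message and return the footer annotation to attach to it.
-spec checksum_footer(crc32 | adler32, iodata()) -> {binary(), non_neg_integer()}.
checksum_footer(crc32, BareMessage) ->
    {<<"x-opt-crc-32">>, erlang:crc32(BareMessage)};
checksum_footer(adler32, BareMessage) ->
    {<<"x-opt-adler-32">>, erlang:adler32(BareMessage)}.
```
The receiving link would recompute the same function over the received bare
message and compare the result with the footer value, failing the transfer on
a mismatch.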
The commit comes with the following advantages:
1. Transparent end-to-end checksumming. Although checksums are already
computed by TCP and by RabbitMQ queues when writing to disk, end-to-end
checksumming operates a level higher and can therefore detect bit flips
within RabbitMQ nodes or load balancers that would otherwise go
unnoticed.
2. Not only is the body checksummed, but also the properties and
application-properties sections. This is an advantage over AMQP 0.9.1
because the AMQP protocol disallows modification of the bare message.
3. This commit is currently used for testing the RabbitMQ AMQP
implementation, but it shows the feasibility of how apps could also
get integrity guarantees over the whole bare message using HMACs or
signatures.
Fix crashes when a message is originally sent via AMQP, stored within
a classic or quorum queue, and subsequently dead-lettered, where the
dead letter exchange needs access to message annotations, properties,
or application-properties.
Section 3.2.1 of the AMQP 1.0 spec defines durable=false to be the default.
However, the same section also mentions:
> If the header section is omitted the receiver MUST assume the appropriate
> default values (or the meaning implied by no value being set) for the
> fields within the header unless other target or node specific defaults
> have otherwise been set.
We want RabbitMQ to be safe by default, hence in RabbitMQ we set
durable=true as the default.
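A one-line illustration of that default, assuming a hypothetical annotation map:
```
%% Sketch: an unset durable annotation is interpreted as durable=true.
is_durable(Annotations) when is_map(Annotations) ->
    maps:get(durable, Annotations, true).
```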
Ensure the footer gets delivered to the AMQP client as received from the AMQP
client when feature flag message_containers_store_amqp_v1 is disabled.
Fixes test
```
bazel test //deps/rabbit:amqp_system_SUITE-mixed -t- --test_sharding_strategy=disabled --test_env FOCUS="-group [dotnet] -case footer"
```
This commit fixes test
```
bazel test //deps/rabbitmq_mqtt:shared_SUITE-mixed -t- \
--test_sharding_strategy=disabled --test_env \
FOCUS="-group [mqtt,v3,cluster_size_3] -case pubsub"
```
Fix some mixed version tests
Assume the AMQP body, especially the amqp-value section, won't be parsed.
Hence, omit smart conversions from AMQP to MQTT involving the
Payload-Format-Indicator bit.
Fix test
Fix
```
bazel test //deps/amqp10_client:system_SUITE-mixed -t- --test_sharding_strategy=disabled --test_env FOCUS="-group [rabbitmq]"
```
Prior to this commit test case
```
bazel test //deps/rabbit:amqp_client_SUITE-mixed -t- \
--test_sharding_strategy=disabled --test_env \
FOCUS="-group [cluster_size_3] -case quorum_queue_on_old_node"
```
was failing because `mc_amqp:size(#v1{})` was called on the old node,
which doesn't understand the new AMQP on-disk message format.
Even though the old 3.13 node never stored mc_amqp messages in classic
or quorum queues,
1. it will either need to understand the new mc_amqp message format, or
2. we should prevent the new format from being sent to 3.13 nodes.
In this commit, we opt for the 2nd solution.
In `mc:prepare(store, Msg)`, we convert the new mc_amqp format to
mc_amqpl which is guaranteed to be understood by the old node.
Note that `mc:prepare(store, Msg)` is called not only before actual
storage, but already before the message is sent to the queue process
(which might be hosted by the old node).
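A hedged sketch of that conversion step (the placement of the feature flag
check is illustrative; `mc:convert/2` and `rabbit_feature_flags:is_enabled/1`
are the existing entry points):
```
%% Sketch: while the feature flag is disabled, down-convert to mc_amqpl
%% before the message can reach a queue process on an old node.
prepare(store, Msg) ->
    case rabbit_feature_flags:is_enabled(message_containers_store_amqp_v1) of
        true  -> Msg;
        false -> mc:convert(mc_amqpl, Msg)
    end.
```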
The 2nd solution is easier to reason about than the 1st solution
because:
a) We don't have to backport code meant to be for 4.0 to 3.13, and
b) 3.13 is guaranteed to never store mc_amqp messages in classic or
quorum queues, even in mixed version clusters.
The disadvantage of the 2nd solution is that messages are converted from
mc_amqp to mc_amqpl and back to mc_amqp if there is an AMQP sender and
AMQP receiver. However, this only happens while the new feature flag
is disabled during the rolling upgrade. In a certain sense, this is a
hybrid of how the AMQP 1.0 plugin worked in 3.13: even though we don't
proxy via AMQP 0.9.1 anymore, we still convert to AMQP 0.9.1 (mc_amqpl)
messages when feature flag message_containers_store_amqp_v1 is disabled.
In OpenLDAP 2.5, `memberOf` seems to be a read-only annotation. Instead,
the LDAPWiki recommends adding the user's name to the `member` attribute
on the relevant group objects:
<https://ldapwiki.com/wiki/Wiki.jsp?page=MemberOf>
We already do that when setting up the group objects, so this annotation
is safe to remove.
A folder can be a mount point, and the kernel won't allow
deleting a mount point - it will return ebusy.
We don't need to check for ebusy when deleting a file.
Previously `rabbitmqctl list_shovels` would only list
shovels running on the local node (where the command was executed).
For consistency with other commands, such as `list_connections`,
it should list all shovels in the cluster.
Crashes could happen because compaction would wrongly write
over valid messages, or truncate over valid messages: when
looking for messages in the files, compaction would encounter
leftover data that made it look like there was a message,
which prompted it to not look for the real messages
hidden within.
To avoid this we ensure that there can't be leftover data
as a result of compaction. We get this guarantee by blanking
data in the holes in the file before we start copying messages
closer to the start of the file. This requires us to do a few
more writes but we know that the only data in the files at any
point are valid messages.
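A conceptual sketch of the blanking step (the function and the hole
representation are hypothetical):
```
%% Sketch: zero out the gaps between valid messages before copying
%% messages towards the start of the file, so that no stale bytes can be
%% mistaken for a message boundary by a later scan.
blank_holes(Fd, Holes) ->
    lists:foreach(
      fun({Offset, Size}) ->
              ok = file:pwrite(Fd, Offset, binary:copy(<<0>>, Size))
      end, Holes).
```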
Note that it's possible that some of the messages in the files
are no longer referenced; that's OK. We filter them out after
scanning the file.
This was also a good time to merge two almost identical scan
functions, and to be more explicit about which messages should be
dropped after scanning the file (the messages no longer in the
ETS index and the fan-out messages that ended up re-written in
a more recent file).
```
make -C deps/rabbit ct-amqp_system t=dotnet:fragmentation
```
fails in the new make CI with:
```
amqp_system_SUITE > dotnet > fragmentation
#1. {error,{{badmatch,{error,134,
"Unhandled exception. Amqp.AmqpException: Invalid frame size:527, maximum frame size:512.\n at Amqp.Connection.ThrowIfClosed(String operation)\n at Amqp.Connection.AddSession(Session session)\n at Amqp.Session..ctor(Connection connection, Begin begin, OnBegin onBegin)\n at Amqp.Session..ctor(Connection connection)\n at Program.AmqpClient.connectWithOpen(String uri, Open opn) in /home/runner/work/rabbitmq-server/rabbitmq-server/deps/rabbit/test/amqp_system_SUITE_data/fsharp-tests/Program.fs:line 53\n at Program.Test.fragmentation(String uri) in /home/runner/work/rabbitmq-server/rabbitmq-server/deps/rabbit/test/amqp_system_SUITE_data/fsharp-tests/Program.fs:line 284\n at Program.main(String[] argv) in /home/runner/work/rabbitmq-server/rabbitmq-server/deps/rabbit/test/amqp_system_SUITE_data/fsharp-tests/Program.fs:line 533\n"}},
[{amqp_system_SUITE,run_dotnet_test,2,
[{file,"amqp_system_SUITE.erl"},
{line,228}]},
{test_server,ts_tc,3,[{file,"test_server.erl"},{line,1793}]},
{test_server,run_test_case_eval1,6,
[{file,"test_server.erl"},{line,1302}]},
{test_server,run_test_case_eval,9,
[{file,"test_server.erl"},{line,1234}]}]}}
```
RabbitMQ includes its node name and cluster name in the open frame sent to
the client. Running this test locally shows an open frame size of 467
bytes.
The suspicion is that the node name and cluster name in CI are longer,
causing the open frame from RabbitMQ to the client to exceed the maximum
frame size of 512 bytes.
Previously `rabbit_vhost:delete/2` threw this error within
`rabbit_policy:list/2` when the vhost didn't exist. We can return the
value instead so that `rabbitmqctl delete_vhost` behaves the same
between these changes and the branch fork point.
The `rabbit_vhost_sup_sup:delete_on_all_nodes/1` cleanup is idempotent so
we can run all teardown steps for the vhost and return this error
instead. This might be useful if deletion fails for some resources
under the vhost, for example a queue. If the vhost record is gone it
might still be useful to try to clean up the vhost's resources.
The x_jms_topic_table Mnesia table must be on all nodes
for messages to be published to JMS topic exchanges
and routed to topic subscribers.
The table used to be RAM-only on a single node, so it would
be unavailable when that node was down and empty
when it came back up, losing the state for subscribers
that were still online because they were connected to other nodes.
References #9005
These changes to the defaults can affect other test cases. In particular,
they affect the time-to-live and deletion events in the vhost
deletion idempotency case.
In the rabbitmqctl docs for list_unresponsive_queues, `type` is listed as
a queueinfoitem parameter, but this is not reflected in the source code.
This commit adds `type` as a queueinfoitem parameter for list_unresponsive_queues.
I noticed that `stop_app` would take over 5 seconds on an empty
node which is strange. I found this gap in the logs:
```
2024-04-18 10:49:04.646450+09:00 [info] <0.599.0> Management plugin: to stop collect_statistics.
2024-04-18 10:49:09.647652+09:00 [debug] <0.247.0> Set stop reason to: normal
```
There is no point waiting for those stats to stop being emitted when we stop the
node completely. The wait was added in https://github.com/rabbitmq/rabbitmq-server/commit/0e640da5
to ensure the stats stop when the management_agent gets disabled.
This is a follow up to https://github.com/rabbitmq/rabbitmq-server/pull/11012
## What?
For incoming MQTT messages, always set the `durable` message container
annotation.
## Why?
Even though defaulting to `durable=true` when no durable annotation is
set (as prior to this commit) is good enough, explicitly setting the
durable annotation makes the code a bit more future-proof and
maintainable going forward in 4.0, where we will rely more on the durable
annotation because AMQP 1.0 message headers will be omitted in classic
and quorum queues (see https://github.com/rabbitmq/rabbitmq-server/pull/10964).
For MQTT messages, it's important to know whether the message was
published with QoS 0 or QoS 1 because it affects the QoS for the MQTT
message that will be delivered to the MQTT subscriber.
The performance impact of always setting the durable annotation is
negligible.
Bump org.springframework.boot:spring-boot-starter-parent from 3.2.4 to 3.2.5 in /deps/rabbitmq_auth_backend_http/examples/rabbitmq_auth_backend_spring_boot_kotlin
Resolves https://github.com/rabbitmq/rabbitmq-server/discussions/11021
Prior to this commit, an MQTT client that connects to RabbitMQ needed
configure access to its will queue even if the will queue never
existed. This breaks client apps connecting with either v3 or v4, or with
v5 without making use of the Will-Delay-Interval.
Specifically, in 3.13.0 and 3.13.1 an MQTT client that connects to
RabbitMQ unnecessarily needs configure access to queue
`mqtt-will-<MQTT client ID>`.
This commit checks for configure access only if the queue actually gets
deleted, i.e. if it existed.
* make amqp10_common dialyze green in make
* make rabbitmq_ct_client_helpers dialyze green with make
* fixup rabbitmq_prelaunch path ref
* Cleanup unused dep_* vars
* Fixup xref for rabbitmq_ct_helpers
I could not figure out how to make xref aware of the CLI code without
also checking the CLI code and reporting additional errors
* remove unused file
* fix make dialyze for rabbitmq_stream_common
* update deps/oauth2_client/Makefile to match Bazel
By default, when the 'durable' message container (mc) annotation is unset,
messages are interpreted to be durable.
Prior to this commit, MQTT messages that were sent with QoS 0 were
stored durably in classic queues.
This commit takes the same approach for mc_mqtt as for mc_amqpl and mc_amqp:
If the message is durable, the durable mc annotation will not be set.
If the message is non-durable, the durable mc annotation will be set to false.
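A sketch of that convention (the function is illustrative, not the actual
mc_mqtt code):
```
%% Sketch: set the durable annotation only for non-durable (QoS 0)
%% messages; an unset annotation already means durable=true.
durable_annotations(0)                -> #{durable => false};
durable_annotations(Qos) when Qos > 0 -> #{}.
```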
[Why]
We want to make sure that the queried remote node doesn't know about the
querying node. That's why we use a temporary hidden node in the first
place.
In fact, it is possible to start the temporary hidden node and
communicate with it using an alternative connection instead of the
regular Erlang distribution. This further ensures that the two RabbitMQ
nodes don't connect to each other while properties are queried.
[How]
We use the `standard_io` alternative connection. We need to use
`peer:call/4` instead of `erpc:call/4` to run code on the temporary
hidden node.
While here, we add assertions to verify that the three nodes didn't form
a full mesh network at the end of the query code.
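A minimal sketch of the technique using OTP's `peer` module (arguments are
illustrative):
```
%% Sketch: start a temporary hidden node reachable only over
%% stdin/stdout, so no Erlang distribution connection is created.
{ok, Peer, _Node} = peer:start_link(#{name => peer:random_name(),
                                      connection => standard_io,
                                      args => ["-hidden"]}),
%% Run code on the peer without connecting the two nodes.
Release = peer:call(Peer, erlang, system_info, [otp_release]),
peer:stop(Peer).
```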
[Why]
As shown in #10728, in an IPv6-only environment, `kernel` name
resolution must be configured through an inetrc file.
The temporary hidden node must be configured exactly like the main node
(and all nodes in the cluster in fact) to allow communication. Thus we
must pass the same inetrc file to that temporary hidden node. This
wasn’t the case before this patch.
[How]
We query the main node’s kernel to see if there is any inetrc file set
and we use the same on the temporary hidden node’s command line.
While here, extract the handling of the `proto_dist` module from the TLS
code. This parameter may be used outside of TLS like this IPv6-only
environment.
V2: Accept `inetrc` filenames as atoms, not only lists; `kernel` seems
to accept them. This makes for a better user experience for users
accustomed to Bourne shell quoting who don't know that single quotes
have a special meaning in Erlang.
Fixes #10728.
[Why]
When peer discovery runs on initial boot, the database layer is not
ready when peer discovery is executed. Thus, if the node selects itself
as the node to "join", it shouldn't pay attention to the database
readiness, otherwise it would wait forever.
This commit will print
```
[debug] <0.725.0> Transitioned from tcp_connected to peer_properties_exchanged
[debug] <0.725.0> Transitioned from peer_properties_exchanged to authenticating
[debug] <0.725.0> User 'guest' authenticated successfully by backend rabbit_auth_backend_internal
[debug] <0.725.0> Transitioned from authenticating to authenticated
[debug] <0.725.0> Tuning response 1048576 0
[debug] <0.725.0> Open frame received for fakevhost
[warning] <0.725.0> Opening connection failed: access to vhost 'fakevhost' refused for user 'guest'
[debug] <0.725.0> Transitioned from tuning to failure
[warning] <0.725.0> Closing socket #Port<0.48>. Invalid transition from tuning to failure.
[debug] <0.725.0> rabbit_stream_reader terminating in state 'tuning' with reason 'normal'
```
when the user doesn't have access to the vhost.
A fairly large chunk of boot time is spent trying to look up modules
that have certain attributes via `Module:module_info(attributes)`.
Executing the builtin `module_info/1` function is very fast, but
only after the module is initially loaded. For any unloaded module,
attempting to execute the function loads the module first. Code loading
can be fairly slow, with some modules taking around a millisecond
individually, and all code loading is currently done serially by the
code server.
We use this for `rabbit_boot_step` and `rabbit_feature_flag` attributes,
for example, and we can't avoid scanning many modules without much larger
breaking changes. When we read those attributes, though, we only look up
modules from applications that depend on the `rabbit` app. This saves
quite a lot of work because we avoid most dependencies and builtin
modules from Erlang/OTP that we would never load anyway, for example
the `wx` modules.
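A hedged sketch of this filtering (function and helper names are hypothetical):
```
%% Sketch: restrict the expensive module_info/1 scan to modules of
%% applications that list `rabbit` among their dependencies, instead of
%% scanning every available module.
modules_with_attribute(Attr) ->
    Apps = [App || {App, _Desc, _Vsn} <- application:loaded_applications(),
                   depends_on_rabbit(App)],
    ModLists = [Ms || App <- Apps,
                      {ok, Ms} <- [application:get_key(App, modules)]],
    [M || M <- lists:append(ModLists),
          proplists:is_defined(Attr, M:module_info(attributes))].

depends_on_rabbit(App) ->
    case application:get_key(App, applications) of
        {ok, Deps} -> lists:member(rabbit, Deps);
        undefined  -> false
    end.
```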
We can re-use that function in the management plugin to avoid scanning
most available modules for the `rabbit_mgmt_extension` behaviour. We
also need to stop the `prometheus` dependency from scanning for its
interceptor and collector behaviours on boot. We can do this by setting
explicit empty/default values for the application environment variables
`prometheus` uses as defaults. This is a safe change because we don't
use interceptors and we register all collectors explicitly.
**There is a functional change to the management plugin to be aware
of**: any plugins that use the `rabbit_mgmt_extension` behaviour must
declare a dependency on the `rabbit` application. This is true for all
tier-1 plugins but should be kept in mind for community plugins.
For me locally this reduces single node boot (`bazel run broker`) time
from ~6100ms to ~4300ms.
Avoid the following function clause error:
```
[{rabbit_amqp_session,incoming_mgmt_link_transfer,
[{'v1_0.transfer',
{uint,0},
{uint,1},
{binary,<<0>>},
{uint,0},
false,false,undefined,undefined,undefined,undefined,undefined},
<<0,83,112,192,2,1,65>>,
{state,
{cfg,65528,<0.3506.0>,<0.3510.0>,
{user,<<"guest">>,
[administrator],
[{rabbit_auth_backend_internal,
#Fun<rabbit_auth_backend_internal.3.111050101>}]},
<<"/">>,1,0,#{},none,<<"127.0.0.1:43416 -> 127.0.0.1:5672">>},
2,398,4294967292,1600,2147483645,
{[],[]},
0,#{},#{},#{},#{},#{},#{},[],[],[],[],
{rabbit_queue_type,#{}},
[{{resource,<<"/">>,exchange,<<>>},write},
{{resource,<<"/">>,queue,
<<"ResourceListenerTest_publisherIsClosedOnExchangeDeletion-aec3-6e1a90386458">>},
configure}],
[]}],
[{file,"rabbit_amqp_session.erl"},{line,1694}]},
{rabbit_amqp_session,handle_control,2,
[{file,"rabbit_amqp_session.erl"},{line,1068}]},
{rabbit_amqp_session,handle_cast,2,
[{file,"rabbit_amqp_session.erl"},{line,391}]},
{gen_server,try_handle_cast,3,[{file,"gen_server.erl"},{line,1121}]},
{gen_server,handle_msg,6,[{file,"gen_server.erl"},{line,1183}]},
{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,241}]}] [condition = amqp:internal-error]
```
when the client keeps sending messages although its target queue got
deleted and the server already sent a DETACH frame to the client.
From now on, the server instead closes the session with session error
amqp:session:unattached-handle.
The test case is included in the AMQP Java client.
Prefer building a list and calling maps:from_list/1 at the end over
building the map element by element.
The 1st approach is chosen in the standard library functions, e.g.
maps:map/2, maps:filter/2, maps:with/2.
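An illustrative comparison (the per-element transformation is hypothetical):
```
%% Preferred: build the key/value pairs as a list, convert once.
to_map(KVs) ->
    maps:from_list([{K, V * 2} || {K, V} <- KVs]).

%% Avoided: rebuild the map accumulator for every element.
to_map_incremental(KVs) ->
    lists:foldl(fun({K, V}, Acc) -> maps:put(K, V * 2, Acc) end, #{}, KVs).
```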
When generating iodata() in the AMQP 1.0 generator, prefer integers over
binaries.
Rename functions and variable names to better reflect the AMQP 1.0 spec
instead of using AMQP 0.9.1 wording.
When feature flag message_containers is enabled, setting
```
message_interceptors.incoming.set_header_timestamp
```
wasn't respected anymore when a message is published via MQTT to a
stream and subsequently consumed via AMQP 0.9.1.
This commit ensures that AMQP 0.9.1 header timestamp_in_ms will be
set.
Note that we must not modify the AMQP 1.0 properties section when messages
are received via AMQP 1.0 and consumed via AMQP 1.0.
Also, message annotation keys not starting with "x-" are reserved.
Exit code is useful for monitoring and process supervisors when it comes
to deciding on what to do when the process exits, for example we may
want to restart it or send a report. The current implementation of the
`rabbitmq-server` script does not propagate the exit code in the general
case, which makes it impossible to know whether the exit was clean and,
for example, to use restart policy `on-failure` in Docker.
This change makes the exit code propagate.
Before this change some Management API endpoints handling POST requests crashed and returned HTTP 500 error code when called for a non-existing vhost. The reason was that parsing of the virtual host name could return a `not_found` atom which could potentially reach later steps of the data flow, which expect a vhost name binary only. Instead of returning `not_found`, now the code fails early with HTTP 400 error code and a descriptive error reason.
See more details in the GitHub issue.
Fixes #10901
## What?
Introduce a new address format (let's call it v2) for AMQP 1.0 source and target addresses.
The old format (let's call it v1) is described in
https://github.com/rabbitmq/rabbitmq-server/tree/v3.13.x/deps/rabbitmq_amqp1_0#routing-and-addressing
The only v2 source address format is:
```
/queue/:queue
```
The 4 possible v2 target addresses formats are:
```
/exchange/:exchange/key/:routing-key
/exchange/:exchange
/queue/:queue
<null>
```
where the last AMQP <null> value format requires that each message’s `to` field contains one of:
```
/exchange/:exchange/key/:routing-key
/exchange/:exchange
/queue/:queue
```
## Why?
The AMQP address v1 format comes with the following flaws:
1. Obscure address format:
Without reading the documentation, the differences for example between source addresses
```
/amq/queue/:queue
/queue/:queue
:queue
```
are unknown to users. Hence, the address format is obscure.
2. Implicit creation of topologies
Some address formats implicitly create queues (and bindings), such as source address
```
/exchange/:exchange/:binding-key
```
or target address
```
/queue/:queue
```
These queues and bindings are never deleted (by the AMQP 1.0 plugin).
Implicit creation of such topologies is also obscure.
3. Redundant address formats
```
/queue/:queue
:queue
```
have the same meaning and are therefore redundant.
4. Properties section must be parsed to determine whether a routing key is present
Target address
```
/exchange/:exchange
```
requires RabbitMQ to parse the properties section in order to check whether the message `subject` is set.
If `subject` is not set, the routing key will default to the empty string.
5. Using `subject` as routing key misuses the purpose of this field.
According to the AMQP spec, the message `subject` field's purpose is:
> A common field for summary information about the message content and purpose.
6. Exchange names, queue names and routing keys must not contain the "/" (slash) character.
The current 3.13 implementation splits on "/", disallowing this
character in exchange names, queue names, and routing keys, which is
unnecessarily prohibitive.
7. Clients must create a separate link per target exchange
While this is a reasonable working assumption, there might be rare use
cases where it could make sense to create many exchanges (e.g. 1
exchange per queue, see
https://github.com/rabbitmq/rabbitmq-server/discussions/10708) and have
a single application publish to all these exchanges.
With the v1 address format, for an application to send to 500 different
exchanges, it needs to create 500 links.
Due to these disadvantages and thanks to #10559 which allows clients to explicitly create topologies,
we can create a simpler, clearer, and better v2 address format.
## How?
### Design goals
Following the 7 cons from v1, the design goals for v2 are:
1. The address format should be simple so that users have a chance to
understand the meaning of the address without necessarily consulting the docs.
2. The address format should not implicitly create queues, bindings, or exchanges.
Instead, topologies should be created either explicitly via the new management node
prior to link attachment (see #10559), or in future, we might support the `dynamic`
source or target properties so that RabbitMQ creates queues dynamically.
3. No redundant address formats.
4. The target address format should explicitly state whether the routing key is present, empty,
or will be provided dynamically in each message.
5. `Subject` should not be used as routing key. Instead, a better
fitting field should be used.
6. Exchange names, queue names, and routing keys should be allowed to
contain any valid UTF-8 encoded data, including the "/" character.
7. Allow both the target exchange and routing key to be dynamically provided within each message.
Furthermore
8. v2 must co-exist with v1 for at least some time. Applications should be able to upgrade to
RabbitMQ 4.0 while continuing to use v1. Examples include AMQP 1.0 shovels and plugins communicating
between a 4.0 and a 3.13 cluster. Starting with 4.1, we should change the AMQP 1.0 shovel and plugin clients
to use only the new v2 address format. This will allow AMQP 1.0 and plugins to communicate between a 4.1 and 4.2 cluster.
We will deprecate v1 in 4.0 and remove support for v1 in a later 4.x version.
### Additional Context
The address is usually a String, but can be of any type.
The [AMQP Addressing extension](https://docs.oasis-open.org/amqp/addressing/v1.0/addressing-v1.0.html)
suggests that addresses are URIs and are therefore hierarchical and could even contain query parameters:
> An AMQP address is a URI reference as defined by RFC3986.
> the path expression is a sequence of identifier segments that reflects a path through an
> implementation specific relationship graph of AMQP nodes and their termini.
> The path expression MUST resolve to a node’s terminus in an AMQP container.
The [Using the AMQP Anonymous Terminus for Message Routing Version 1.0](https://docs.oasis-open.org/amqp/anonterm/v1.0/cs01/anonterm-v1.0-cs01.html)
extension allows for the target being `null` and the `To` property to contain the node address.
This corresponds to AMQP 0.9.1 where clients can send each message on the same channel to a different `{exchange, routing-key}` destination.
The following v2 address formats will be used.
### v2 addresses
A new deprecated feature flag `amqp_address_v1` will be introduced in 4.0 which is permitted by default.
Starting with 4.1, we should change the AMQP 1.0 shovel and plugin AMQP 1.0 clients to use only the new v2 address format.
However, 4.1 server code must still understand the 4.0 AMQP 1.0 shovel and plugin AMQP 1.0 clients’ v1 address format.
The new deprecated feature flag will therefore be denied by default in 4.2.
This allows AMQP 1.0 shovels and plugins to work between
* 4.0 and 3.13 clusters using v1
* 4.1 and 4.0 clusters using v2 from 4.1 to v4.0 and v1 from 4.0 to 4.1
* 4.2 and 4.1 clusters using v2
without having to support both v1 and v2 at the same time in the AMQP 1.0 shovel and plugin clients.
While supporting both v1 and v2 in these clients is feasible, it's simpler to switch the client code directly from v1 to v2.
### v2 source addresses
The source address format is
```
/queue/:queue
```
If the deprecated feature flag `amqp_address_v1` is permitted and the queue does not exist, the queue will be auto-created.
If the deprecated feature flag `amqp_address_v1` is denied, the queue must exist.
### v2 target addresses
v1 requires attaching a new link for each destination exchange.
v2 will allow dynamic `{exchange, routing-key}` combinations for a given link.
v2 therefore allows for the rare use cases where a single AMQP 1.0 publisher app needs to send to many different exchanges.
Setting up a link per destination exchange could be cumbersome.
Hence, v2 will support the dynamic `{exchange, routing-key}` combinations of AMQP 0.9.1.
To achieve this, we make use of the "Anonymous Terminus for Message Routing" extension:
The target address will contain the AMQP value null.
The `To` field in each message must be set and contain either address format
```
/exchange/:exchange/key/:routing-key
```
or
```
/exchange/:exchange
```
when using the empty routing key.
The `to` field requires an address type and is better suited than the `subject` field.
Note that each message will contain this `To` value for the anonymous terminus.
Hence, we want to save some bytes being sent across the network and stored on disk.
Using a format
```
/e/:exchange/k/:routing-key
```
saves more bytes, but is too obscure.
However, we use only `/key/` instead of `/routing-key/` to save a few bytes.
This also simplifies the format because users don’t have to remember whether to spell `routing-key` or `routing_key` or `routingkey`.
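A hedged sketch of the anonymous terminus from the Erlang amqp10_client point
of view (link attachment is omitted; `Sender` is assumed to be a sender link
attached with a null target):
```
%% Sketch: one sender link, per-message routing via the `to` property
%% using the v2 address format.
send_to_exchange(Sender, Exchange, RoutingKey, Payload) ->
    To = <<"/exchange/", Exchange/binary, "/key/", RoutingKey/binary>>,
    Msg0 = amqp10_msg:new(<<"dtag-1">>, Payload, true),
    Msg = amqp10_msg:set_properties(#{to => To}, Msg0),
    amqp10_client:send_msg(Sender, Msg).
```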
The other allowed target address formats are:
```
/exchange/:exchange/key/:routing-key
```
where exchange and routing key are static on the given link.
```
/exchange/:exchange
```
where exchange and routing key are static on the given link, and routing key will be the empty string (useful for example for the fanout exchange).
```
/queue/:queue
```
This provides RabbitMQ beginners the illusion of sending a message directly
to a queue without having to understand what exchanges and routing keys are.
If the deprecated feature flag `amqp_address_v1` is permitted and the queue does not exist, the queue will be auto-created.
If the deprecated feature flag `amqp_address_v1` is denied, the queue must exist.
Besides the additional queue existence check, this queue target is different from
```
/exchange//key/:queue
```
in that queue specific optimisations might be done (in future) by RabbitMQ
(for example different receiving queue types could grant different amounts of link credits to the sending clients).
A write permission check against the `amq.default` exchange will be performed nevertheless.
v2 will prohibit the v1 static link & dynamic routing-key combination
where the routing key is sent in the message `subject` as that’s also obscure.
For this use case, v2’s new anonymous terminus can be used where both exchange and routing key are defined in the message’s `To` field.
(The bare message must not be modified because it could be signed.)
The alias format
```
/topic/:topic
```
will also be removed.
Sending to topic exchanges is arguably an advanced feature.
Users can directly use the format
```
/exchange/amq.topic/key/:topic
```
which reduces the number of redundant address formats.
### v2 address format reference
To sum up (and as stated at the top of this commit message):
The only v2 source address format is:
```
/queue/:queue
```
The 4 possible v2 target addresses formats are:
```
/exchange/:exchange/key/:routing-key
/exchange/:exchange
/queue/:queue
<null>
```
where the last AMQP <null> value format requires that each message’s `to` field contains one of:
```
/exchange/:exchange/key/:routing-key
/exchange/:exchange
/queue/:queue
```
Hence, all 8 listed design goals are reached.
* Use file path validators to improve error messages
when a certificate, key or another file does not
exist or cannot be read by the node
* Introduce a number of standard TLS options in
addition to the Kubelet-provided CA certificate
```
bazel test //deps/rabbit:amqp_client_SUITE-mixed -t- --test_sharding_strategy=disabled --test_env FOCUS="-group [cluster_size_3] -case async_notify_unsettled_classic_queue" --config=rbe-26 --runs_per_test=40
```
was failing 8 out of 40 times.
Skip this test as we know that link flow control with classic queues is
broken in 3.13:
https://github.com/rabbitmq/rabbitmq-server/issues/2597
Credit API v2 in RabbitMQ 4.0 fixes this bug.
Not only quorum queues, but also classic queues are wrongly implemented
when draining in 3.13.
Like quorum queues, classic queues reply with a send_drained event
before delivering the message(s).
Therefore, we have to skip the drain test in such mixed version
clusters where the leader runs on the old (3.13.1) node.
The new 4.0 implementation with credit API v2 fixes this bug.
Prior to this commit, mixed version test classic_queue_on_old_node
of amqp_client_SUITE was failing.
Commit 02c29ac1c0
must make sure that the new (4.0) AMQP 1.0 classic queue client
continues to convey RabbitMQ internal credit flow control information
back to the old (3.13.1) classic queue server.
Otherwise, the old classic queue server will stop sending more messages
to the new classic queue client after exactly 200 messages, which caused
this mixed version test to fail.