[Why]
We want to make sure that the queried remote node doesn't know about the
querying node. That's why we use a temporary hidden node in the first
place.
In fact, it is possible to start the temporary hidden node and
communicate with it using an alternative connection instead of the
regular Erlang distribution. This further ensures that the two RabbitMQ
nodes don't connect to each other while properties are queried.
[How]
We use the `standard_io` alternative connection. We need to use
`peer:call/4` instead of `erpc:call/4` to run code on the temporary
hidden node.
While here, we add assertions to verify that the three nodes didn't form
a full mesh network at the end of the query code.
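A minimal sketch of the mechanism, using OTP's `peer` module (illustrative, not the actual patch):
```
%% Start a node whose control channel is its standard I/O rather than
%% Erlang distribution, then run code on it with peer:call/4.
%% erpc:call/4 would require a distribution connection.
{ok, Peer, _Node} = peer:start_link(#{name => peer:random_name(),
                                      connection => standard_io,
                                      args => ["-hidden"]}),
Release = peer:call(Peer, erlang, system_info, [otp_release]),
peer:stop(Peer).
```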
[Why]
As shown in #10728, in an IPv6-only environment, `kernel` name
resolution must be configured through an inetrc file.
The temporary hidden node must be configured exactly like the main node
(and all nodes in the cluster in fact) to allow communication. Thus we
must pass the same inetrc file to that temporary hidden node. This
wasn’t the case before this patch.
[How]
We query the main node’s kernel to see if there is any inetrc file set
and we use the same on the temporary hidden node’s command line.
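A hedged sketch of the idea (the helper name is illustrative):
```
%% If the main node has `-kernel inetrc` set, reproduce it on the
%% temporary hidden node's command line. `kernel` accepts both lists
%% (strings) and atoms as the filename.
inetrc_args() ->
    case application:get_env(kernel, inetrc) of
        {ok, Inetrc} ->
            ["-kernel", "inetrc",
             lists:flatten(io_lib:format("~p", [Inetrc]))];
        undefined ->
            []
    end.
```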
While here, extract the handling of the `proto_dist` module from the TLS
code. This parameter may be used outside of TLS like this IPv6-only
environment.
V2: Accept `inetrc` filenames as atoms, not only lists. `kernel` seems
    to accept them. This improves the experience for users who are used
    to Bourne shell quoting and don't know that single quotes have a
    special meaning in Erlang.
Fixes #10728.
[Why]
When peer discovery runs on initial boot, the database layer is not
ready yet. Thus if the node selects itself as the node to "join", it
shouldn't pay attention to the database readiness, otherwise it would
wait forever.
This commit will print
```
[debug] <0.725.0> Transitioned from tcp_connected to peer_properties_exchanged
[debug] <0.725.0> Transitioned from peer_properties_exchanged to authenticating
[debug] <0.725.0> User 'guest' authenticated successfully by backend rabbit_auth_backend_internal
[debug] <0.725.0> Transitioned from authenticating to authenticated
[debug] <0.725.0> Tuning response 1048576 0
[debug] <0.725.0> Open frame received for fakevhost
[warning] <0.725.0> Opening connection failed: access to vhost 'fakevhost' refused for user 'guest'
[debug] <0.725.0> Transitioned from tuning to failure
[warning] <0.725.0> Closing socket #Port<0.48>. Invalid transition from tuning to failure.
[debug] <0.725.0> rabbit_stream_reader terminating in state 'tuning' with reason 'normal'
```
when the user doesn't have access to the vhost.
A fairly large chunk of boot time is spent trying to look up modules
that have certain attributes via `Module:module_info(attributes)`.
Executing the builtin `module_info/1` function is very fast, but
only after the module is initially loaded. For any unloaded module,
attempting to execute the function loads the module. Code loading can
be fairly slow with some modules taking around a millisecond
individually, and all code loading is currently done in serial by the
code server.
We use this for `rabbit_boot_step` and `rabbit_feature_flag` attributes
for example and we can't avoid scanning many modules without much larger
breaking changes. When we read those attributes, though, we only look up
modules from applications that depend on the `rabbit` app. This saves
quite a lot of work because we avoid most dependencies and builtin
modules from Erlang/OTP that we would never load anyway, for example
the `wx` modules.
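A hedged sketch of the filtering idea (helper names are illustrative, and only direct dependencies are checked here):
```
%% Only consider modules of applications that declare a dependency on
%% the `rabbit` application; unrelated apps such as `wx` are never
%% touched, so their modules are never loaded.
modules_to_scan() ->
    Apps = [App || {App, _Desc, _Vsn} <- application:loaded_applications(),
                   depends_on_rabbit(App)],
    lists:append([Mods || App <- Apps,
                          {ok, Mods} <- [application:get_key(App, modules)]]).

depends_on_rabbit(App) ->
    case application:get_key(App, applications) of
        {ok, Deps} -> lists:member(rabbit, Deps);
        undefined  -> false
    end.
```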
We can re-use that function in the management plugin to avoid scanning
most available modules for the `rabbit_mgmt_extension` behaviour. We
also need to stop the `prometheus` dependency from scanning for its
interceptor and collector behaviours on boot. We can do this by setting
explicit empty/default values for the application environment variables
`prometheus` uses as defaults. This is a safe change because we don't
use interceptors and we register all collectors explicitly.
**There is a functional change to the management plugin to be aware
of**: any plugins that use the `rabbit_mgmt_extension` behaviour must
declare a dependency on the `rabbit` application. This is true for all
tier-1 plugins but should be kept in mind for community plugins.
For me locally this reduces single node boot (`bazel run broker`) time
from ~6100ms to ~4300ms.
Avoid following function clause error:
```
[{rabbit_amqp_session,incoming_mgmt_link_transfer,
[{'v1_0.transfer',
{uint,0},
{uint,1},
{binary,<<0>>},
{uint,0},
false,false,undefined,undefined,undefined,undefined,undefined},
<<0,83,112,192,2,1,65>>,
{state,
{cfg,65528,<0.3506.0>,<0.3510.0>,
{user,<<"guest">>,
[administrator],
[{rabbit_auth_backend_internal,
#Fun<rabbit_auth_backend_internal.3.111050101>}]},
<<"/">>,1,0,#{},none,<<"127.0.0.1:43416 -> 127.0.0.1:5672">>},
2,398,4294967292,1600,2147483645,
{[],[]},
0,#{},#{},#{},#{},#{},#{},[],[],[],[],
{rabbit_queue_type,#{}},
[{{resource,<<"/">>,exchange,<<>>},write},
{{resource,<<"/">>,queue,
<<"ResourceListenerTest_publisherIsClosedOnExchangeDeletion-aec3-6e1a90386458">>},
configure}],
[]}],
[{file,"rabbit_amqp_session.erl"},{line,1694}]},
{rabbit_amqp_session,handle_control,2,
[{file,"rabbit_amqp_session.erl"},{line,1068}]},
{rabbit_amqp_session,handle_cast,2,
[{file,"rabbit_amqp_session.erl"},{line,391}]},
{gen_server,try_handle_cast,3,[{file,"gen_server.erl"},{line,1121}]},
{gen_server,handle_msg,6,[{file,"gen_server.erl"},{line,1183}]},
{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,241}]}] [condition = amqp:internal-error]
```
when the client keeps sending messages although its target queue got
deleted and the server already sent a DETACH frame to the client.
From now on, the server instead closes the session with the session
error `amqp:session:unattached-handle`.
The test case is included in the AMQP Java client.
Prefer building a list and calling maps:from_list/1 at the end over
building the map element by element.
The first approach is the one taken by standard library functions, e.g.
maps:map/2, maps:filter/2, maps:with/2.
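An illustration of the two styles (`transform/1` is a placeholder):
```
%% Preferred: build a list, convert once.
to_map(Pairs) ->
    maps:from_list([{K, transform(V)} || {K, V} <- Pairs]).

%% Avoided: grow the map element by element.
to_map_slow(Pairs) ->
    lists:foldl(fun({K, V}, Acc) -> Acc#{K => transform(V)} end,
                #{}, Pairs).
```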
When generating iodata() in the AMQP 1.0 generator, prefer integers over
binaries.
Rename functions and variable names to better reflect the AMQP 1.0 spec
instead of using AMQP 0.9.1 wording.
When feature flag message_containers is enabled, setting
```
message_interceptors.incoming.set_header_timestamp
```
was no longer respected when a message was published via MQTT to a
stream and subsequently consumed via AMQP 0.9.1.
This commit ensures that AMQP 0.9.1 header timestamp_in_ms will be
set.
Note that we must not modify the AMQP 1.0 properties section when messages
are received via AMQP 1.0 and consumed via AMQP 1.0.
Also, message annotation keys not starting with "x-" are reserved.
The exit code is useful for monitoring tools and process supervisors
when deciding what to do when the process exits; for example, we may
want to restart it or send a report. The current implementation of the
`rabbitmq-server` script does not propagate the exit code in the general
case, which makes it impossible to know whether the exit was clean and,
for example, to use the restart policy `on-failure` in Docker.
This change propagates the exit code.
Before this change, some Management API endpoints handling POST requests crashed and returned an HTTP 500 error code when called for a non-existing vhost. The reason was that parsing of the virtual host name could return a `not_found` atom which could potentially reach later steps of the data flow, which expect a vhost name binary only. Instead of returning `not_found`, the code now fails early with an HTTP 400 error code and a descriptive error reason.
See more details in the GitHub issue.
Fixes #10901
## What?
Introduce a new address format (let's call it v2) for AMQP 1.0 source and target addresses.
The old format (let's call it v1) is described in
https://github.com/rabbitmq/rabbitmq-server/tree/v3.13.x/deps/rabbitmq_amqp1_0#routing-and-addressing
The only v2 source address format is:
```
/queue/:queue
```
The 4 possible v2 target addresses formats are:
```
/exchange/:exchange/key/:routing-key
/exchange/:exchange
/queue/:queue
<null>
```
where the last AMQP <null> value format requires that each message’s `to` field contains one of:
```
/exchange/:exchange/key/:routing-key
/exchange/:exchange
/queue/:queue
```
## Why?
The AMQP address v1 format comes with the following flaws:
1. Obscure address format:
Without reading the documentation, the differences for example between source addresses
```
/amq/queue/:queue
/queue/:queue
:queue
```
are unknown to users. Hence, the address format is obscure.
2. Implicit creation of topologies
Some address formats implicitly create queues (and bindings), such as source address
```
/exchange/:exchange/:binding-key
```
or target address
```
/queue/:queue
```
These queues and bindings are never deleted (by the AMQP 1.0 plugin).
Implicit creation of such topologies is also obscure.
3. Redundant address formats
```
/queue/:queue
:queue
```
have the same meaning and are therefore redundant.
4. Properties section must be parsed to determine whether a routing key is present
Target address
```
/exchange/:exchange
```
requires RabbitMQ to parse the properties section in order to check whether the message `subject` is set.
If `subject` is not set, the routing key will default to the empty string.
5. Using `subject` as routing key misuses the purpose of this field.
According to the AMQP spec, the message `subject` field's purpose is:
> A common field for summary information about the message content and purpose.
6. Exchange names, queue names and routing keys must not contain the "/" (slash) character.
The current 3.13 implementation splits on "/", disallowing this
character in exchange names, queue names, and routing keys, which is
unnecessarily prohibitive.
7. Clients must create a separate link per target exchange
While this is a reasonable working assumption, there might be rare use
cases where it could make sense to create many exchanges (e.g. 1
exchange per queue, see
https://github.com/rabbitmq/rabbitmq-server/discussions/10708) and have
a single application publish to all these exchanges.
With the v1 address format, for an application to send to 500 different
exchanges, it needs to create 500 links.
Due to these disadvantages and thanks to #10559 which allows clients to explicitly create topologies,
we can create a simpler, clearer, and better v2 address format.
## How?
### Design goals
Following the 7 cons from v1, the design goals for v2 are:
1. The address format should be simple so that users have a chance to
understand the meaning of the address without necessarily consulting the docs.
2. The address format should not implicitly create queues, bindings, or exchanges.
Instead, topologies should be created either explicitly via the new management node
prior to link attachment (see #10559), or in future, we might support the `dynamic`
source or target properties so that RabbitMQ creates queues dynamically.
3. No redundant address formats.
4. The target address format should explicitly state whether the routing key is present, empty,
or will be provided dynamically in each message.
5. `Subject` should not be used as routing key. Instead, a better
fitting field should be used.
6. Exchange names, queue names, and routing keys should be allowed to
contain valid UTF-8 encoded data including the "/" character.
7. Allow both target exchange and routing key to be dynamically provided within each message.
Furthermore
8. v2 must co-exist with v1 for at least some time. Applications should be able to upgrade to
RabbitMQ 4.0 while continuing to use v1. Examples include AMQP 1.0 shovels and plugins communicating
between a 4.0 and a 3.13 cluster. Starting with 4.1, we should change the AMQP 1.0 shovel and plugin clients
to use only the new v2 address format. This will allow AMQP 1.0 and plugins to communicate between a 4.1 and 4.2 cluster.
We will deprecate v1 in 4.0 and remove support for v1 in a later 4.x version.
### Additional Context
The address is usually a String, but can be of any type.
The [AMQP Addressing extension](https://docs.oasis-open.org/amqp/addressing/v1.0/addressing-v1.0.html)
suggests that addresses are URIs and are therefore hierarchical and could even contain query parameters:
> An AMQP address is a URI reference as defined by RFC3986.
> the path expression is a sequence of identifier segments that reflects a path through an
> implementation specific relationship graph of AMQP nodes and their termini.
> The path expression MUST resolve to a node’s terminus in an AMQP container.
The [Using the AMQP Anonymous Terminus for Message Routing Version 1.0](https://docs.oasis-open.org/amqp/anonterm/v1.0/cs01/anonterm-v1.0-cs01.html)
extension allows for the target being `null` and the `To` property to contain the node address.
This corresponds to AMQP 0.9.1 where clients can send each message on the same channel to a different `{exchange, routing-key}` destination.
The following v2 address formats will be used.
### v2 addresses
A new deprecated feature flag `amqp_address_v1` will be introduced in 4.0 which is permitted by default.
Starting with 4.1, we should change the AMQP 1.0 shovel and plugin AMQP 1.0 clients to use only the new v2 address format.
However, 4.1 server code must still understand the 4.0 AMQP 1.0 shovel and plugin AMQP 1.0 clients’ v1 address format.
The new deprecated feature flag will therefore be denied by default in 4.2.
This allows AMQP 1.0 shovels and plugins to work between
* 4.0 and 3.13 clusters using v1
* 4.1 and 4.0 clusters using v2 from 4.1 to 4.0 and v1 from 4.0 to 4.1
* 4.2 and 4.1 clusters using v2
without having to support both v1 and v2 at the same time in the AMQP 1.0 shovel and plugin clients.
While supporting both v1 and v2 in these clients is feasible, it's simpler to switch the client code directly from v1 to v2.
### v2 source addresses
The source address format is
```
/queue/:queue
```
If the deprecated feature flag `amqp_address_v1` is permitted and the queue does not exist, the queue will be auto-created.
If the deprecated feature flag `amqp_address_v1` is denied, the queue must exist.
### v2 target addresses
v1 requires attaching a new link for each destination exchange.
v2 will allow dynamic `{exchange, routing-key}` combinations for a given link.
v2 therefore allows for the rare use cases where a single AMQP 1.0 publisher app needs to send to many different exchanges.
Setting up a link per destination exchange could be cumbersome.
Hence, v2 will support the dynamic `{exchange, routing-key}` combinations of AMQP 0.9.1.
To achieve this, we make use of the "Anonymous Terminus for Message Routing" extension:
The target address will contain the AMQP value null.
The `To` field in each message must be set and contain either address format
```
/exchange/:exchange/key/:routing-key
```
or
```
/exchange/:exchange
```
when using the empty routing key.
The `to` field requires an address type and is better suited than the `subject` field.
Note that each message will contain this `to` value for the anonymous terminus.
Hence, it is worth saving some bytes that are sent across the network and stored on disk.
Using a format
```
/e/:exchange/k/:routing-key
```
saves more bytes, but is too obscure.
However, we use only `/key/` instead of `/routing-key/` to save a few bytes.
This also simplifies the format because users don’t have to remember whether to spell `routing-key` or `routing_key` or `routingkey`.
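For illustration, a hedged sketch of publishing via the anonymous terminus with the Erlang `amqp10_client`, assuming `null` denotes the null target address (connection and session setup omitted):
```
%% Attach a sender link with a null target; each message then carries
%% its destination in the `to` property.
{ok, Sender} = amqp10_client:attach_sender_link(
                 Session, <<"anon-sender">>, null),
Msg0 = amqp10_msg:new(<<"dtag-1">>, <<"hello">>, true),
Msg = amqp10_msg:set_properties(
        #{to => <<"/exchange/amq.direct/key/my-key">>}, Msg0),
ok = amqp10_client:send_msg(Sender, Msg).
```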
The other allowed target address formats are:
```
/exchange/:exchange/key/:routing-key
```
where exchange and routing key are static on the given link.
```
/exchange/:exchange
```
where exchange and routing key are static on the given link, and routing key will be the empty string (useful for example for the fanout exchange).
```
/queue/:queue
```
This provides RabbitMQ beginners the illusion of sending a message directly
to a queue without having to understand what exchanges and routing keys are.
If the deprecated feature flag `amqp_address_v1` is permitted and the queue does not exist, the queue will be auto-created.
If the deprecated feature flag `amqp_address_v1` is denied, the queue must exist.
Besides the additional queue existence check, this queue target is different from
```
/exchange//key/:queue
```
in that queue specific optimisations might be done (in future) by RabbitMQ
(for example different receiving queue types could grant different amounts of link credits to the sending clients).
A write permission check to the amq.default exchange will be performed nevertheless.
v2 will prohibit the v1 static link & dynamic routing-key combination
where the routing key is sent in the message `subject` as that’s also obscure.
For this use case, v2’s new anonymous terminus can be used where both exchange and routing key are defined in the message’s `To` field.
(The bare message must not be modified because it could be signed.)
The alias format
```
/topic/:topic
```
will also be removed.
Sending to topic exchanges is arguably an advanced feature.
Users can directly use the format
```
/exchange/amq.topic/key/:topic
```
which reduces the number of redundant address formats.
### v2 address format reference
To sum up (and as stated at the top of this commit message):
The only v2 source address format is:
```
/queue/:queue
```
The 4 possible v2 target addresses formats are:
```
/exchange/:exchange/key/:routing-key
/exchange/:exchange
/queue/:queue
<null>
```
where the last AMQP <null> value format requires that each message’s `to` field contains one of:
```
/exchange/:exchange/key/:routing-key
/exchange/:exchange
/queue/:queue
```
Hence, all 8 listed design goals are reached.
* Use file path validators to improve error messages
when a certificate, key or another file does not
exist or cannot be read by the node
* Introduce a number of standard TLS options in
addition to the Kubelet-provided CA certificate
```
bazel test //deps/rabbit:amqp_client_SUITE-mixed -t- --test_sharding_strategy=disabled --test_env FOCUS="-group [cluster_size_3] -case async_notify_unsettled_classic_queue" --config=rbe-26 --runs_per_test=40
```
was failing 8 out of 40 times.
Skip this test as we know that link flow control with classic queues is
broken in 3.13:
https://github.com/rabbitmq/rabbitmq-server/issues/2597
Credit API v2 in RabbitMQ 4.0 fixes this bug.
Not only quorum queues, but also classic queues are wrongly implemented
when draining in 3.13.
Like quorum queues, classic queues reply with a send_drained event
before delivering the message(s).
Therefore, we have to skip the drain test in such mixed version
clusters where the leader runs on the old (3.13.1) node.
The new 4.0 implementation with credit API v2 fixes this bug.
Prior to this commit, mixed version test classic_queue_on_old_node
of amqp_client_SUITE was failing.
Commit 02c29ac1c0
must make sure that the new (4.0) AMQP 1.0 classic queue client
continues to convey RabbitMQ internal credit flow control information
back to the old (3.13.1) classic queue server.
Otherwise, the old classic queue server will stop sending more messages
to the new classic queue client after exactly 200 messages, which caused
this mixed version test to fail.
Khepri v0.13.0 contains a fix for how projections are handled during
registration and recovery. The error returned from
`khepri:register_projection/1,2,3` has also been updated to use the
`?khepri_error(..)` helper macro.
Co-authored-by: Jean-Sébastien Pédron <jean-sebastien.pedron@dumbbell.fr>
Apply the following PR feedback:
> The data returned is strongly validated; there can't be an extra unknown field.
> I would suggest ignoring field names that are not known as this allows for better
> extensibility. On the other hand, there should be a check that all required fields
> are found.
## What?
* Allow AMQP 1.0 clients to dynamically create and delete RabbitMQ
topologies (exchanges, queues, bindings).
* Provide an Erlang AMQP 1.0 client that manages topologies.
## Why?
Today, RabbitMQ topologies can be created via:
* [Management HTTP API](https://www.rabbitmq.com/docs/management#http-api)
(including Management UI and
[messaging-topology-operator](https://github.com/rabbitmq/messaging-topology-operator))
* [Definition Import](https://www.rabbitmq.com/docs/definitions#import)
* AMQP 0.9.1 clients
Up to RabbitMQ 3.13 the RabbitMQ AMQP 1.0 plugin auto creates queues
and bindings depending on the terminus [address
format](https://github.com/rabbitmq/rabbitmq-server/tree/v3.13.x/deps/rabbitmq_amqp1_0#routing-and-addressing).
Such implicit creation of topologies is limiting and obscure.
For some address formats, queues will be created, but not deleted.
Some of RabbitMQ's success is due to its flexible routing topologies
that AMQP 0.9.1 clients can create and delete dynamically.
This commit allows dynamic management of topologies for AMQP 1.0 clients.
This commit builds on top of Native AMQP 1.0 (PR #9022) and will be
available in RabbitMQ 4.0.
## How?
This commits adds the following management operations for AMQP 1.0 clients:
* declare queue
* delete queue
* purge queue
* bind queue to exchange
* unbind queue from exchange
* declare exchange
* delete exchange
* bind exchange to exchange
* unbind exchange from exchange
Hence, at least the AMQP 0.9.1 management operations are supported for
AMQP 1.0 clients.
In addition the operation
* get queue
is provided which - similar to `declare queue` - returns queue
information including the current leader and replicas.
This allows clients to publish or consume locally on the node that hosts
the queue.
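A hedged sketch of how the accompanying Erlang client might be used (function names assumed; connection and session setup omitted):
```
%% Attach the management link pair, declare a quorum queue, bind it,
%% then clean up. Each call is a request/response over the link pair.
{ok, LinkPair} = rabbitmq_amqp_client:attach_management_link_pair_sync(
                   Session, <<"my-link-pair">>),
{ok, _QInfo} = rabbitmq_amqp_client:declare_queue(
                 LinkPair, <<"q1">>,
                 #{arguments => #{<<"x-queue-type">> => {utf8, <<"quorum">>}}}),
ok = rabbitmq_amqp_client:bind_queue(
       LinkPair, <<"q1">>, <<"amq.direct">>, <<"k1">>, #{}),
{ok, _} = rabbitmq_amqp_client:delete_queue(LinkPair, <<"q1">>),
ok = rabbitmq_amqp_client:detach_management_link_pair_sync(LinkPair).
```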
Compared to AMQP 0.9.1 whose commands and command fields are fixed, the
new AMQP Management API is extensible: New operations and new fields can
easily be added in the future.
There are different design options how management operations could be
supported for AMQP 1.0 clients:
1. Use a special exchange type as done in https://github.com/rabbitmq/rabbitmq-management-exchange
This has the advantage that any protocol client (e.g. also STOMP clients) could
dynamically manage topologies. However, a special exchange type is the wrong abstraction.
2. Clients could send "special" messages with special headers that the broker interprets.
This commit implements a variation of the 2nd option in a more
standardized way by re-using a subset of the following latest AMQP 1.0
extension specifications:
* [AMQP Request-Response Messaging with Link Pairing Version 1.0 - Committee Specification 01](https://docs.oasis-open.org/amqp/linkpair/v1.0/cs01/linkpair-v1.0-cs01.html) (February 2021)
* [HTTP Semantics and Content over AMQP Version 1.0 - Working Draft 06](https://groups.oasis-open.org/higherlogic/ws/public/document?document_id=65571) (July 2019)
* [AMQP Management Version 1.0 - Working Draft 16](https://groups.oasis-open.org/higherlogic/ws/public/document?document_id=65575) (July 2019)
An important goal is to keep the interaction between AMQP 1.0 client and RabbitMQ
simple, to increase usage, development, and adoption of future RabbitMQ AMQP 1.0
client library wrappers.
The AMQP 1.0 client has to create a link pair to the special `/management` node.
This allows the client to send and receive from the management node.
Similar to AMQP 0.9.1, there is no need for a reply queue since the reply
will be sent directly to the client.
Requests and responses are modelled via HTTP, but sent via AMQP using
the `HTTP Semantics and Content over AMQP` extension (henceforth `HTTP
over AMQP` extension).
This commit tries to follow the `HTTP over AMQP` extension as much as
possible but deviates where this draft spec doesn't make sense.
The projected mode §4.1 is used as opposed to tunneled mode §4.2.
A named relay `/management` is used (§6.3) where the message field `to` is the URL.
Deviations are
* §3.1 mandates that URIs are not encoded in an AMQP message.
However, we percent-encode URIs in the AMQP message. Otherwise there
is for example no way to distinguish a `/` in a queue name from the
URI path separator `/`.
* §4.1.4 mandates a data section. This commit uses an amqp-value section
as it's a better fit given that the content is AMQP encoded data.
Using an HTTP API allows for a common well understood interface and future extensibility.
Instead of re-using the current RabbitMQ HTTP API, this commit uses a
new HTTP API (let's call it v2) which could be used as a future API for
plain HTTP clients.
### HTTP API v1
The current HTTP API (let's call it v1) is **not** used since v1
comes with a couple of weaknesses:
1. Deep level of nesting becomes confusing and difficult to manage.
Examples of deep nesting in v1:
```
/api/bindings/vhost/e/source/e/destination/props
/api/bindings/vhost/e/exchange/q/queue/props
```
2. Redundant endpoints returning the same resources
v1 has 9 endpoints to list binding(s):
```
/api/exchanges/vhost/name/bindings/source
/api/exchanges/vhost/name/bindings/destination
/api/queues/vhost/name/bindings
/api/bindings
/api/bindings/vhost
/api/bindings/vhost/e/exchange/q/queue
/api/bindings/vhost/e/exchange/q/queue/props
/api/bindings/vhost/e/source/e/destination
/api/bindings/vhost/e/source/e/destination/props
```
3. Verbs in path names
Path names should be nouns instead.
v1 contains verbs:
```
/api/queues/vhost/name/get
/api/exchanges/vhost/name/publish
```
### AMQP Management extension
Only a few aspects of the AMQP Management extension are used.
The central idea of the AMQP management spec is **dynamic discovery** such that broker independent AMQP 1.0
clients can discover objects, types, operations, and HTTP endpoints of specific brokers.
In fact, clients are only conformant if:
> All request addresses are dynamically discovered starting from the discovery document.
> A requesting container MUST NOT use fixed assumptions about the addressing structure of the management API.
While this is a nice and powerful idea, no AMQP 1.0 client and no AMQP 1.0 server implement the
latest AMQP 1.0 management spec from 2019, partly presumably due to its complexity.
Therefore, the idea of such dynamic discovery has failed to be implemented in practice.
The AMQP management spec mandates that the management endpoint returns a discovery document containing
broker specific collections, types, configuration, and operations including their endpoints.
The API endpoints of the AMQP management spec are therefore all designed around dynamic discovery.
For example, to create either a queue or an exchange, the client has to
```
POST /$management/entities
```
which shows that the entities collection acts as a generic factory, see section 2.2.
The server will then create the resource and reply with a location header containing a URI pointing to the resource.
For RabbitMQ, we don’t need such a generic factory to create queues or exchanges.
To list bindings for a queue Q1, the spec suggests
```
GET /$management/Queues/Q1/$management/entities
```
which again shows the generic entities endpoint as well as a `$management` endpoint under Q1 to
allow a queue to return a discovery document.
For RabbitMQ, we don’t need such generic endpoints and discovery documents.
Given we aim for our own thin RabbitMQ AMQP 1.0 client wrapper libraries which expose
the RabbitMQ model to the developer, we can directly use fixed HTTP endpoint assumptions
in our RabbitMQ specific libraries.
This is by far simpler than using the dynamic endpoints of the management spec.
Simplicity leads to higher adoption and enables more developers to write RabbitMQ AMQP 1.0 client
library wrappers.
The AMQP Management extension also suffers from a deep level of nesting in paths.
Examples:
```
/$management/Queues/Q1/$management/entities
/$management/Queues/Q1/Bindings/Binding1
```
as well as verbs in path names: Section 7.1.4 suggests using verbs in path names,
for example “purge”, due to the dynamic operations discovery document.
### HTTP API v2
This commit introduces a new HTTP API v2 following best practices.
It could serve as a future API for plain HTTP clients.
This commit and RabbitMQ 4.0 will only implement a minimal set of
HTTP API v2 endpoints and only for HTTP over AMQP.
In other words, the existing HTTP API v1 Cowboy handlers will continue to be
used for all plain HTTP requests in RabbitMQ 4.0 and will remain untouched for RabbitMQ 4.0.
Over time, after 4.0 shipped, we could ship a pure HTTP API implementation for HTTP API v2.
Hence, the new HTTP API v2 endpoints for HTTP over AMQP should be designed such that they
can be re-used in the future for a pure HTTP implementation.
The minimal set of endpoints for RabbitMQ 4.0 are:
```
GET / PUT / DELETE
/vhosts/:vhost/queues/:queue
```
read, create, delete a queue
```
DELETE
/vhosts/:vhost/queues/:queue/messages
```
purges a queue
```
GET / DELETE
/vhosts/:vhost/bindings/:binding
```
read, delete bindings
where `:binding` is a binding ID of the following path segment:
```
src=e1;dstq=q2;key=my-key;args=
```
The binding arguments field `args` is empty by default, i.e. there are no binding arguments.
If the binding includes binding arguments, `args` will be an Erlang portable term hash
provided by the server, similar to what’s provided in HTTP API v1 today.
Alternatively, we could use an arguments scheme of:
```
args=k1,utf8,v1&k2,uint,3
```
However, such a scheme leads to long URIs when there are many binding arguments.
Note that it’s perfectly fine for URI producing applications to include URI
reserved characters `=` / `;` / `,` / `$` in a path segment.
To create a binding, the client therefore needs to POST to a bindings factory URI:
```
POST
/vhosts/:vhost/bindings
```
To list all bindings between a source exchange e1 and destination exchange e2 with binding key k1:
```
GET
/vhosts/:vhost/bindings?src=e1&dste=e2&key=k1
```
This endpoint will be called by the RabbitMQ AMQP 1.0 client library to unbind a
binding with non-empty binding arguments to get the binding ID before invoking a
```
DELETE
/vhosts/:vhost/bindings/:binding
```
In future, after RabbitMQ 4.0 shipped, new API endpoints could be added.
The following is up for discussion and is only meant to show the clean and simple design of HTTP API v2.
Bindings endpoint can be queried as follows:
to list all bindings for a given source exchange e1:
```
GET
/vhosts/:vhost/bindings?src=e1
```
to list all bindings for a given destination queue q1:
```
GET
/vhosts/:vhost/bindings?dstq=q1
```
to list all bindings between a source exchange e1 and destination queue q1:
```
GET
/vhosts/:vhost/bindings?src=e1&dstq=q1
```
multiple bindings between source exchange e1 and destination queue q1 could be deleted at once as follows:
```
DELETE /vhosts/:vhost/bindings?src=e1&dstq=q1
```
GET could be supported globally across all vhosts:
```
/exchanges
/queues
/bindings
```
Publish a message:
```
POST
/vhosts/:vhost/queues/:queue/messages
```
Consume or peek a message (depending on query parameters):
```
GET
/vhosts/:vhost/queues/:queue/messages
```
Note that the AMQP 1.0 client omits the `/vhosts/:vhost` path prefix.
Since an AMQP connection belongs to a single vhost, there is no need to
additionally include the vhost in every HTTP request.
Pros of HTTP API v2:
1. Low level of nesting
Queues, exchanges, bindings are top level entities directly under vhosts.
Although the HTTP API doesn’t have to reflect how resources are stored in the database,
v2 does nicely reflect the Khepri tree structure.
2. Nouns instead of verbs
HTTP API v2 is very simple to read and understand as shown by
```
POST /vhosts/:vhost/queues/:queue/messages to post messages, i.e. publish to a queue.
GET /vhosts/:vhost/queues/:queue/messages to get messages, i.e. consume or peek from a queue.
DELETE /vhosts/:vhost/queues/:queue/messages to delete messages, i.e. purge a queue.
```
A separate new HTTP API v2 allows us to ship only handlers for HTTP over AMQP for RabbitMQ 4.0
and therefore move faster while still keeping the option on the table to re-use the new v2 API
for pure HTTP in the future.
In contrast, re-using the HTTP API v1 for HTTP over AMQP is possible, but dirty because separate handlers
(HTTP over AMQP and pure HTTP) replying differently will be needed for the same v1 endpoints.
For `/api/queues`, users can specify `disable_stats=true` and
`enable_queue_totals=true` parameters to return a concise set of
fields. However, `enable_queue_totals` is not currently
supported for `/api/queues/<vhost>/<name>`, probably just a small
oversight that slipped through the cracks. This commit adds that
support and updates the respective unit test, focusing on not breaking
existing public functions and on simplicity, at the cost of a slight
bit of duplication.
Fixes https://github.com/rabbitmq/rabbitmq-server/pull/10761#discussion_r1528039577 :
"Could you please check a real condition that the old
version can't be used as part of this test?
is_mixed_versions() will still return true in 10 years
when testing RabbitMQ 21.x against 22.x. This function should
almost never be used."
The Web MQTT link is not used in the rabbitmq_mqtt Erlang app.
This link is only used in the rabbitmq_web_mqtt Erlang app.
Hence, move the link to the correct Erlang app.
PR #10761 added a new CLI command to list Web MQTT connections.
That new CLI command relies on feature flag delete_ra_cluster_mqtt_node
being enabled.
This commit ensures exactly this condition.
This contains an important bug fix for streams on Windows systems
where a log could get corrupted after a simple reboot.
It also contains a few changes to how replica reader processes
exit on error to avoid logging too much.
These files seem to be generated incorrectly on Windows due to recent rules_python changes, and since they change rarely, it seems reasonable to commit them. The bazel build automatically generates tests to ensure that the files are up to date.
[Why]
If the testcase runs between the time the node is started and the time
the `rabbit` application environment is loaded, it will fail.
[How]
Instead of waiting for the node to be reachable only, we also wait for
the `rabbit` application environment to be filled.
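A hedged sketch of the stronger wait condition (the helper name is illustrative; real code would bound the retries):
```
%% Waiting for the node to answer pings is not enough; also wait until
%% the `rabbit` application environment has been loaded on it.
wait_for_rabbit_env(Node) ->
    case rpc:call(Node, application, get_all_env, [rabbit]) of
        [_ | _]        -> ok;
        _EmptyOrBadRpc -> timer:sleep(100),
                          wait_for_rabbit_env(Node)
    end.
```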
`rabbit_db_queue:update_durable/2`'s caller
(`rabbit_amqqueue:mark_local_durable_queues_stopped`/1) passes a filter
function that performs some operations that aren't allowed within
Khepri transactions like looking up and using the current node and
executing an RPC. Calling
`rabbit_amqqueue:mark_local_durable_queues_stopped/1` on a Rabbit with
the `khepri_db` feature flag enabled will result in an error.
We can safely update a number of queues by using Khepri's
`khepri_adv:get_many/3` advanced API which returns the internal version
number of each queue. We can filter and update the queues outside of
a transaction function and then perform all updates at once, failing if
any queue has changed since the `khepri_adv:get_many/3` query. So we
get the main benefits of a transaction but we can still execute any
update or filter function.
Use of `sessionStorage` makes user experience extremely hostile, as separate tabs in a browser do not share the session. In addition to that, opening a new tab happens to initiate complete IdP signout if another signed in tab is open. None of these problems appear if `localStorage` is used.
Original author clearly had an idea to implement this, but for whatever reason kept this line commented out. Maybe because `WebStorageStateStore` type needs to be qualified with `oidc.`?
[Why]
The given `TakeFromRemoteNode` argument was unpacked and another tuple
was created to pass to `update_context/3`. However, the constructed
tuple would be:
{{Node, Timeout}, Timeout}
... which is incorrect.
the original paths, e.g. /streams.html, do have redirects
in place but it turned out to be a surprisingly fragile
Cloudflare feature when there are hundreds of them,
so we better switch now.
[Why]
It looks like `exit(Spammer, normal)` doesn't terminate the process.
This leaves a dangling process around and seems to cause transient
failures in the `try_to_deadlock_in_registry_reload_1` testcase that
follows it.
[How]
We use `exit(Spammer, kill)` and a monitor to wait for the process to
actually terminate.
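The pattern boils down to (a minimal sketch):
```
%% `kill` cannot be trapped, and the monitor guarantees the process is
%% really gone before the next testcase starts.
MRef = erlang:monitor(process, Spammer),
exit(Spammer, kill),
receive
    {'DOWN', MRef, process, Spammer, killed} -> ok
after 30000 ->
    exit(spammer_still_alive)
end.
```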
Without a feature flag it is possible to add a member on a newer node
with a Ra command format that the other nodes do not yet understand
resulting in crashed nodes.
when a message was published to a stream via the stream protocol.
Fixes the following test:
```
./mvnw test -Dtest=AmqpInteroperabilityTest#publishToStreamConsumeFromStreamQueue
```
for default and pre-declared exchanges to save copying
the #exchange{} record (i.e. save an ETS lookup call) on
every received message.
The default and pre-declared exchanges are protected from deletion and
modification. Exchange routing decorators are not used in tier 1 plugins
and in no open source tier 2 plugin.
If the server initiates the detach due to an error condition, it
destroys and therefore forgets the link.
This should be okay because according to section 2.6.5:
"When an error occurs at a link endpoint, the endpoint MUST be detached
with appropriate error information supplied in the error field of the detach
frame. The link endpoint MUST then be destroyed."
It is also valid that the client replies with a detach:
"If any input (other than a detach) related to the endpoint either via
the input handle or delivery-ids be received, the session MUST be
terminated with an errant-link session-error."
In this case, the server must not reply with yet another (i.e. must
not send a third) detach.
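A hedged sketch of the resulting handling (the state shape and `send_detach/1` are illustrative):
```
%% A DETACH for a handle we no longer track is assumed to be the
%% client's reply to our earlier error DETACH: drop it silently
%% instead of sending a third DETACH.
handle_detach(Handle, #state{links = Links0} = State) ->
    case maps:take(Handle, Links0) of
        {_Link, Links} ->
            send_detach(Handle),  %% normal close: reply exactly once
            State#state{links = Links};
        error ->
            State                 %% link already destroyed; stay silent
    end.
```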
```
matching on the float 0.0 will no longer also match -0.0 in OTP 27. If you specifically intend to match 0.0 alone, write +0.0 instead
```
such that
```
bazel build //:package-generic-unix --test_build
```
succeeds when building with OTP 26.1.2
As these are the most likely to potentially run a backlog of
rabbit messages in their mailboxes and we do not want to include
these in gc runs unnecessarily.
This aligns flow control behaviour for AMQP across all queue types.
All flow is controlled by the AMQP credit flow gestures rather than
relying on additional, parallel mechanisms.
This will allow us to adjust the flow control approach for all
queue types and expect consistent results.
What?
Protect receiving application from being overloaded with new messages
while still processing existing messages if the auto credit renewal
feature of the Erlang AMQP 1.0 client library is used.
This feature can therefore be thought of as the equivalent of the
prefetch window in AMQP 0.9.1 or the MQTT 5.0 property Receive Maximum.
How?
The credit auto renewal feature in RabbitMQ 3.x was wrongly implemented.
This commit takes the same approach as done in the server:
The incoming_unsettled map is held in the link instead of in the session
to accurately and quickly determine the number of unsettled messages for
a receiving link.
The amqp10_client lib will grant more credits to the sender when the sum
of remaining link credits and number of unsettled deliveries falls below
the threshold RenewWhenBelow.
This avoids maintaining additional state like the `link_credit_unsettled`
or an alternative delivery_count_settled sequence number, which is more
complex to implement correctly.
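A hedged sketch of the renewal condition (record fields and `grant_credit/2` are illustrative):
```
%% Top the link up again once remaining link credit plus the number of
%% unsettled deliveries falls below RenewWhenBelow.
maybe_auto_flow(#link{link_credit = Credit,
                      incoming_unsettled = Unsettled,
                      auto_flow = {auto, RenewWhenBelow, Credits}} = Link) ->
    case Credit + map_size(Unsettled) < RenewWhenBelow of
        true  -> grant_credit(Link, Credits);
        false -> ok
    end;
maybe_auto_flow(_LinkWithManualFlow) ->
    ok.
```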
This commit breaks the amqp10_client_session:disposition/6 API:
This commit forces the client application to only range settle for a
given link, i.e. not across multiple links on a given session at once.
The latter is allowed according to the AMQP spec.
What?
To not risk any regressions, keep the behaviour of RabbitMQ 3.x
where channel processes and connection helper processes such as
rabbit_queue_collector and rabbit_heartbeat are terminated after
rabbit_reader process.
For example, when RabbitMQ terminates with SIGTERM, we want
exclusive queues to be deleted synchronously (as in 3.x).
Prior to this commit:
1. java -jar target/perf-test.jar -x 0 -y 1
2. ./sbin/rabbitmqctl stop_app
resulted in the following crash:
```
crasher:
initial call: rabbit_reader:init/2
pid: <0.2389.0>
registered_name: []
exception exit: {noproc,
{gen_server,call,[<0.2391.0>,delete_all,infinity]}}
in function gen_server:call/3 (gen_server.erl, line 419)
in call from rabbit_reader:close_connection/1 (rabbit_reader.erl, line 683)
in call from rabbit_reader:send_error_on_channel0_and_close/4 (rabbit_reader.erl, line 1668)
in call from rabbit_reader:handle_dependent_exit/3 (rabbit_reader.erl, line 710)
in call from rabbit_reader:mainloop/4 (rabbit_reader.erl, line 530)
in call from rabbit_reader:run/1 (rabbit_reader.erl, line 452)
in call from rabbit_reader:start_connection/4 (rabbit_reader.erl, line 351)
```
because rabbit_queue_collector was terminated before rabbit_reader.
This commit fixes this crash.
How?
Any Erlang supervisor, including the rabbit_connection_sup supervisor,
terminates its children in the reverse of their start order.
Since we want channel and queue collector processes - children of
rabbit_connection_helper_sup - to be terminated after the
reader process, we must start rabbit_connection_helper_sup before the
reader process.
Since rabbit_connection_sup - the ranch_protocol implementation - does
not know yet whether it will supervise an AMQP 0.9.1 or AMQP 1.0
connection, it creates a rabbit_connection_helper_sup for each AMQP protocol
version, removing the superfluous one as soon as protocol version negotiation is
completed. Spawning and deleting this additional process has a negligible
effect on performance.
The whole problem is that the rabbit_connection_helper_sup differs in
its supervisor flags for AMQP 0.9.1 and AMQP 1.0 when it is started
because for Native AMQP 1.0 in 4.0 we remove the unnecessary
rabbit_amqp1_0_session_sup_sup supervisor level.
Therefore, we achieve our goal:
* in Native AMQP 1.0, 1 additional Erlang process is created per session
* in AMQP 1.0 in 3.x, 15 additional Erlang processes are created per session
What?
For credit API v1, increase the outgoing delivery-count as soon as the
message is scheduled for delivery, that is before the message is queued
in the session's outgoing_pending queue.
Why?
1. More correct for credit API v1 in case a FLOW is received
for an outgoing link topping up credit while an outgoing transfer on
the same link is queued in outgoing_pending. For the server's credit
calculation to be correct, it doesn't matter whether the outgoing
in-flight message travels through the network, is queued in TCP
buffers, processed by the writer, or just queued in the session's
outgoing_pending queue.
2. Higher performance as no map update is performed for credit API v2
in send_pending()
3. Simplifies code
"In the event that the receiving link endpoint has not yet seen the
initial attach frame from the sender this field MUST NOT be set."
[2.7.4]
Since we (the server / the receiving link endpoint), have already seen
the initial attach frame from the sender, set the delivery-count.
## What
Similar to Native MQTT in #5895, this commit implements Native AMQP 1.0.
By "native", we mean do not proxy via AMQP 0.9.1 anymore.
## Why
Native AMQP 1.0 comes with the following major benefits:
1. Similar to Native MQTT, this commit provides better throughput, latency,
scalability, and resource usage for AMQP 1.0.
See https://blog.rabbitmq.com/posts/2023/03/native-mqtt for native MQTT improvements.
See further below for some benchmarks.
2. Since AMQP 1.0 is not limited anymore by the AMQP 0.9.1 protocol,
this commit allows implementing more AMQP 1.0 features in the future.
Some features are already implemented in this commit (see next section).
3. Simpler, better understandable, and more maintainable code.
Native AMQP 1.0 as implemented in this commit has the
following major benefits compared to AMQP 0.9.1:
4. Memory and disk alarms will only stop accepting incoming TRANSFER frames.
New connections can still be created to consume from RabbitMQ to empty queues.
5. Due to 4., there is no need anymore for separate connections for publishers and
consumers, as we currently recommend for AMQP 0.9.1, which potentially
halves the number of physical TCP connections.
6. When a single connection sends to multiple target queues, a single
slow target queue won't block the entire connection.
Publisher can still send data quickly to all other target queues.
7. A publisher can request whether it wants publisher confirmation on a per-message basis.
In AMQP 0.9.1 publisher confirms are configured per channel only.
8. Consumers can change their "prefetch count" dynamically which isn't
possible in our AMQP 0.9.1 implementation. See #10174
9. AMQP 1.0 is an extensible protocol
This commit also fixes dozens of bugs present in the AMQP 1.0 plugin in
RabbitMQ 3.x - most of which cannot be backported due to the complexity
and limitations of the old 3.x implementation.
This commit contains breaking changes and is therefore targeted for RabbitMQ 4.0.
## Implementation details
1. Breaking change: With Native AMQP, the behaviour of
```
Convert AMQP 0.9.1 message headers to application properties for an AMQP 1.0 consumer
amqp1_0.convert_amqp091_headers_to_app_props = false | true (default false)
Convert AMQP 1.0 Application Properties to AMQP 0.9.1 headers
amqp1_0.convert_app_props_to_amqp091_headers = false | true (default false)
```
will break because we always convert according to the message container conversions.
For example, AMQP 0.9.1 x-headers will go into message-annotations instead of application properties.
Also, `false` won’t be respected since we always convert the headers with message containers.
2. Remove rabbit_queue_collector
rabbit_queue_collector is responsible for synchronously deleting
exclusive queues. Since the AMQP 1.0 plugin never creates exclusive
queues, rabbit_queue_collector doesn't need to be started in the first
place. This will save 1 Erlang process per AMQP 1.0 connection.
3. 7 processes per connection + 1 process per session in this commit instead of
7 processes per connection + 15 processes per session in 3.x
Supervision hierarchy got re-designed.
4. Use 1 writer process per AMQP 1.0 connection
AMQP 0.9.1 uses a separate rabbit_writer Erlang process per AMQP 0.9.1 channel.
Prior to this commit, AMQP 1.0 used a separate rabbit_amqp1_0_writer process per AMQP 1.0 session.
Advantage of single writer proc per session (prior to this commit):
* High parallelism for serialising packets if multiple sessions within
a connection write heavily at the same time.
This commit uses a single writer process per AMQP 1.0 connection that is
shared across all AMQP 1.0 sessions.
Advantages of single writer proc per connection (this commit):
* Lower memory usage with hundreds of thousands of AMQP 1.0 sessions
* Less TCP and IP header overhead given that the single writer process
can accumulate bytes across all sessions before flushing the socket.
In other words, this commit decides that a reader / writer process pair
per AMQP 1.0 connection is good enough for bi-directional TRANSFER flows.
Having a writer per session is too heavy.
We still ensure high throughput by having separate reader, writer, and
session processes.
5. Transform rabbit_amqp1_0_writer into gen_server
Why:
Prior to this commit, when clicking on the AMQP 1.0 writer process in
observer, the process crashed.
Instead of handling all these debug messages of the sys module, it's better
to implement a gen_server.
There is no advantage to using a special OTP process over gen_server
for the AMQP 1.0 writer.
gen_server also provides cleaner format status output.
How:
Message callbacks return a timeout of 0.
After all messages in the inbox are processed, the timeout message is
handled by flushing any pending bytes.
6. Remove stats timer from writer
AMQP 1.0 connections haven't emitted any stats previously.
7. When there are contiguous queue confirmations in the session process
mailbox, batch them. When the confirmations are sent to the publisher, a
single DISPOSITION frame is sent for contiguously confirmed delivery
IDs.
This approach should be good enough. However, it's suboptimal in
scenarios where contiguous delivery IDs that need confirmations are rare,
for example:
* There are multiple links in the session with different sender
settlement modes and sender publishes across these links interleaved.
* sender settlement mode is mixed and sender publishes interleaved settled
and unsettled TRANSFERs.
8. Introduce credit API v2
Why:
The AMQP 0.9.1 credit extension which is to be removed in 4.0 was poorly
designed since basic.credit is a synchronous call into the queue process
blocking the entire AMQP 1.0 session process.
How:
Change the interactions between queue clients and queue server
implementations:
* Clients only request a credit reply if the FLOW's `echo` field is set
* Include all link flow control state held by the queue process into a
new credit_reply queue event:
* `available` after the queue sends any deliveries
* `link-credit` after the queue sends any deliveries
* `drain` which allows us to combine the old queue events
send_credit_reply and send_drained into a single new queue event
credit_reply.
* Include the consumer tag into the credit_reply queue event such that
the AMQP 1.0 session process can process any credit replies
asynchronously.
Link flow control state `delivery-count` also moves to the queue processes.
The new interactions are hidden behind feature flag credit_api_v2 to
allow for rolling upgrades from 3.13 to 4.0.
9. Use serial number arithmetic in quorum queues and session process (see the sketch below).
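A hedged sketch of 32-bit serial number arithmetic (RFC 1982), as used for fields like delivery-count and transfer-id:
```
-define(SERIAL_MASK, 16#FFFFFFFF).

%% Addition is modular, so counters may wrap without breaking ordering.
serial_add(S, N) ->
    (S + N) band ?SERIAL_MASK.

%% true if A precedes B; the RFC leaves a distance of exactly 2^31
%% undefined (not handled in this sketch).
serial_less(A, B) ->
    Diff = (B - A) band ?SERIAL_MASK,
    Diff =/= 0 andalso Diff < 16#80000000.
```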
10. Completely bypass the rabbit_limiter module for AMQP 1.0
flow control. The goal is to eventually remove the rabbit_limiter module
in 4.0 since AMQP 0.9.1 global QoS will be unsupported in 4.0. This
commit lifts the AMQP 1.0 link flow control logic out of rabbit_limiter
into rabbit_queue_consumers.
11. Fix credit bug for streams:
AMQP 1.0 settlements shouldn't top up link credit,
only FLOW frames should top up link credit.
12. Allow sender settle mode unsettled for streams
since AMQP 1.0 acknowledgements to streams are no-ops (currently).
13. Fix AMQP 1.0 client bugs
Auto renewing credits should not be related to settling TRANSFERs.
Remove field link_credit_unsettled as it was wrong and confusing.
Prior to this commit auto renewal did not work when the sender uses
sender settlement mode settled.
14. Fix AMQP 1.0 client bugs
The wrong outdated Link was passed to function auto_flow/2
15. Use osiris chunk iterator
Only hold messages of uncompressed sub batches in memory if consumer
doesn't have sufficient credits.
Compressed sub batches are skipped for non Stream protocol consumers.
16. Fix incoming link flow control
Always use confirms between AMQP 1.0 queue clients and queue servers.
As already done internally by rabbit_fifo_client and
rabbit_stream_queue, use confirms for classic queues as well.
17. Include link handle into correlation when publishing messages to target queues
such that session process can correlate confirms from target queues to
incoming links.
18. Only grant more credits to publishers if the publisher doesn't have
sufficient credits anymore and there are not too many unconfirmed messages on the link.
19. Completely ignore `block` and `unblock` queue actions and RabbitMQ credit flow
between classic queue process and session process.
20. Link flow control is independent between links.
A client can refer to a queue or to an exchange with multiple
dynamically added target queues. Multiple incoming links can also fan
in to the same queue. Whatever the link topology looks like, this
commit ensures that each link is only granted more credits if that link
isn't overloaded.
21. A connection or a session can send to many different queues.
In AMQP 0.9.1, a single slow queue will lead to the entire channel, and
then the entire connection, being blocked.
This commit makes sure that a single slow queue from one link won't slow
down sending on other links.
For example, having link A sending to a local classic queue and
link B sending to 5 replica quorum queue, link B will naturally
grant credits slower than link A. So, despite the quorum queue being
slower in confirming messages, the same AMQP 1.0 connection and session
can still pump data very fast into the classic queue.
22. If a cluster-wide memory or disk alarm occurs,
each session sends a FLOW with incoming-window set to 0 to the sending client.
If sending clients don’t obey, the client is forcefully disconnected.
If the cluster-wide memory alarm clears:
each session resumes with a FLOW defaulting to the initial incoming-window.
23. All operations apart from publishing TRANSFERs to RabbitMQ can continue during cluster-wide alarms,
specifically, attaching consumers and consuming, i.e. emptying queues.
There is no need for separate AMQP 1.0 connections for publishers and consumers as recommended in our AMQP 0.9.1 implementation.
24. Flow control summary:
* If queue becomes bottleneck, that’s solved by slowing down individual sending links (AMQP 1.0 link flow control).
* If session becomes bottleneck (more unlikely), that’s solved by AMQP 1.0 session flow control.
* If connection becomes bottleneck, it naturally won’t read fast enough from the socket causing TCP backpressure being applied.
Nowhere will RabbitMQ internal credit based flow control (i.e. module credit_flow) be used on the incoming AMQP 1.0 message path.
25. Register AMQP sessions
Prefer local-only pg over our custom pg_local implementation as
pg is a better process group implementation than pg_local.
pg_local was identified as bottleneck in tests where many MQTT clients were disconnected at once.
26. Start a local-only pg when Rabbit boots:
> A scope can be kept local-only by using a scope name that is unique cluster-wide, e.g. the node name:
> pg:start_link(node()).
Register AMQP 1.0 connections and sessions with pg.
In future we should remove pg_local and instead use the new local-only
pg for all registered processes such as AMQP 0.9.1 connections and channels.
27. Requeue messages if link detached
Although the spec allows settling delivery IDs on detached links, RabbitMQ does not respect the 'closed'
field of the DETACH frame and therefore handles every DETACH frame as closed. Since the link is closed,
we expect every outstanding delivery to be requeued.
In addition to consumer cancellation, detaching a link therefore causes in flight deliveries to be requeued.
Note that this behaviour is different from merely consumer cancellation in AMQP 0.9.1:
"After a consumer is cancelled there will be no future deliveries dispatched to it. Note that there can
still be "in flight" deliveries dispatched previously. Cancelling a consumer will neither discard nor requeue them."
[https://www.rabbitmq.com/consumers.html#unsubscribing]
An AMQP receiver can first drain, and then detach to prevent "in flight" deliveries.
28. Init AMQP session with BEGIN frame
Similar to how there can't be an MQTT processor without a CONNECT
frame, there can't be an AMQP session without a BEGIN frame.
This allows having strict dialyzer types for session flow control
fields (i.e. not allowing 'undefined').
29. Move serial_number to AMQP 1.0 common lib
such that it can be used by both AMQP 1.0 server and client
30. Fix AMQP client to do serial number arithmetic.
31. AMQP client: Differentiate between delivery-id and transfer-id for better
understandability.
32. Fix link flow control in classic queues
This commit fixes
```
java -jar target/perf-test.jar -ad false -f persistent -u cq -c 3000 -C 1000000 -y 0
```
followed by
```
./omq -x 0 amqp -T /queue/cq -D 1000000 --amqp-consumer-credits 2
```
Prior to this commit (and on RabbitMQ 3.x), consumption would halt after around
8 - 10,000 messages.
The bug was that in flight messages from classic queue process to
session process were not taken into account when topping up credit to
the classic queue process.
Fixes #2597
The solution to this bug (and a much cleaner design anyway independent of
this bug) is that queues should hold all link flow control state including
the delivery-count.
Hence, when credit API v2 is used the delivery-count will be held by the
classic queue process, quorum queue process, and stream queue client
instead of managing the delivery-count in the session.
33. The double level crediting between (a) session process and
rabbit_fifo_client, and (b) rabbit_fifo_client and rabbit_fifo was
removed. Therefore, instead of managing 3 separate delivery-counts (i. session,
ii. rabbit_fifo_client, iii. rabbit_fifo), only 1 delivery-count is used
in rabbit_fifo. This is a big simplification.
34. This commit fixes quorum queues without bumping the machine version
nor introducing new rabbit_fifo commands.
Whether credit API v2 is used is solely determined at link attachment time
depending on whether feature flag credit_api_v2 is enabled.
Even if that feature flag is enabled later on, such a link will
keep using credit API v1 until it is detached (or the node is shut down).
Eventually, after feature flag credit_api_v2 has been enabled and a
subsequent rolling upgrade, all links will use credit API v2.
This approach is safe and simple.
The 2 alternatives to move delivery-count from the session process to the
queue processes would have been:
i. Explicit feature flag credit_api_v2 migration function
* Can use a gen_server:call and only finish migration once all delivery-counts were migrated.
Cons:
* Extra new message format just for migration is required.
* Risky as migration will fail if a target queue doesn’t reply.
ii. Session always includes DeliveryCountSnd when crediting to the queue:
Cons:
* 2 delivery counts would be held simultaneously in session proc and queue proc;
could be solved by deleting the session proc’s delivery-count for credit-reply
* What happens if the receiver doesn’t provide credit for a very long time? Is that a problem?
35. Support stream filtering in AMQP 1.0 (by @acogoluegnes)
Use the x-stream-filter-value message annotation
to carry the filter value in a published message.
Use the rabbitmq:stream-filter and rabbitmq:stream-match-unfiltered
filters when creating a receiver that wants to filter
out messages from a stream.
36. Remove credit extension from AMQP 0.9.1 client
37. Support maintenance mode closing AMQP 1.0 connections.
38. Remove AMQP 0.9.1 client dependency from AMQP 1.0 implementation.
39. Move AMQP 1.0 plugin to the core. AMQP 1.0 is enabled by default.
The old rabbitmq_amqp1_0 plugin will be kept as a no-op plugin so that deployment
tools that execute the following do not fail:
```
rabbitmq-plugins enable rabbitmq_amqp1_0
rabbitmq-plugins disable rabbitmq_amqp1_0
```
40. Breaking change: Remove CLI command `rabbitmqctl list_amqp10_connections`.
Instead, list both AMQP 0.9.1 and AMQP 1.0 connections in `list_connections`:
```
rabbitmqctl list_connections protocol
Listing connections ...
protocol
{1, 0}
{0,9,1}
```
## Benchmarks
### Throughput & Latency
Setup:
* Single node Ubuntu 22.04
* Erlang 26.1.1
Start RabbitMQ:
```
make run-broker PLUGINS="rabbitmq_management rabbitmq_amqp1_0" FULL=1 RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+S 3"
```
Predeclare durable classic queue cq1, durable quorum queue qq1, durable stream queue sq1.
Start client:
https://github.com/ssorj/quiver, https://hub.docker.com/r/ssorj/quiver/tags (digest 453a2aceda64)
```
docker run -it --rm --add-host host.docker.internal:host-gateway ssorj/quiver:latest
bash-5.1# quiver --version
quiver 0.4.0-SNAPSHOT
```
1. Classic queue
```
quiver //host.docker.internal//amq/queue/cq1 --durable --count 1m --duration 10m --body-size 12 --credit 1000
```
This commit:
```
Count ............................................. 1,000,000 messages
Duration ............................................... 73.8 seconds
Sender rate .......................................... 13,548 messages/s
Receiver rate ........................................ 13,547 messages/s
End-to-end rate ...................................... 13,547 messages/s
Latencies by percentile:
0% ........ 0 ms 90.00% ........ 9 ms
25% ........ 2 ms 99.00% ....... 14 ms
50% ........ 4 ms 99.90% ....... 17 ms
100% ....... 26 ms 99.99% ....... 24 ms
```
RabbitMQ 3.x (main branch as of 30 January 2024):
```
---------------------- Sender ----------------------- --------------------- Receiver ---------------------- --------
Time [s] Count [m] Rate [m/s] CPU [%] RSS [M] Time [s] Count [m] Rate [m/s] CPU [%] RSS [M] Lat [ms]
----------------------------------------------------- ----------------------------------------------------- --------
2.1 130,814 65,342 6 73.6 2.1 3,217 1,607 0 8.0 511
4.1 163,580 16,367 2 74.1 4.1 3,217 0 0 8.0 0
6.1 229,114 32,767 3 74.1 6.1 3,217 0 0 8.0 0
8.1 261,880 16,367 2 74.1 8.1 67,874 32,296 8 8.2 7,662
10.1 294,646 16,367 2 74.1 10.1 67,874 0 0 8.2 0
12.1 360,180 32,734 3 74.1 12.1 67,874 0 0 8.2 0
14.1 392,946 16,367 3 74.1 14.1 68,604 365 0 8.2 12,147
16.1 458,480 32,734 3 74.1 16.1 68,604 0 0 8.2 0
18.1 491,246 16,367 2 74.1 18.1 68,604 0 0 8.2 0
20.1 556,780 32,767 4 74.1 20.1 68,604 0 0 8.2 0
22.1 589,546 16,375 2 74.1 22.1 68,604 0 0 8.2 0
receiver timed out
24.1 622,312 16,367 2 74.1 24.1 68,604 0 0 8.2 0
quiver: error: PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/cq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-otujr23y' returned non-zero exit status 1.
Traceback (most recent call last):
File "/usr/local/lib/quiver/python/quiver/pair.py", line 144, in run
_plano.wait(receiver, check=True)
File "/usr/local/lib/quiver/python/plano/main.py", line 1243, in wait
raise PlanoProcessError(proc)
plano.main.PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/cq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-otujr23y' returned non-zero exit status 1.
```
2. Quorum queue:
```
quiver //host.docker.internal//amq/queue/qq1 --durable --count 1m --duration 10m --body-size 12 --credit 1000
```
This commit:
```
Count ............................................. 1,000,000 messages
Duration .............................................. 101.4 seconds
Sender rate ........................................... 9,867 messages/s
Receiver rate ......................................... 9,868 messages/s
End-to-end rate ....................................... 9,865 messages/s
Latencies by percentile:
0% ....... 11 ms 90.00% ....... 23 ms
25% ....... 15 ms 99.00% ....... 28 ms
50% ....... 18 ms 99.90% ....... 33 ms
100% ....... 49 ms 99.99% ....... 47 ms
```
RabbitMQ 3.x:
```
---------------------- Sender ----------------------- --------------------- Receiver ---------------------- --------
Time [s] Count [m] Rate [m/s] CPU [%] RSS [M] Time [s] Count [m] Rate [m/s] CPU [%] RSS [M] Lat [ms]
----------------------------------------------------- ----------------------------------------------------- --------
2.1 130,814 65,342 9 69.9 2.1 18,430 9,206 5 7.6 1,221
4.1 163,580 16,375 5 70.2 4.1 18,867 218 0 7.6 2,168
6.1 229,114 32,767 6 70.2 6.1 18,867 0 0 7.6 0
8.1 294,648 32,734 7 70.2 8.1 18,867 0 0 7.6 0
10.1 360,182 32,734 6 70.2 10.1 18,867 0 0 7.6 0
12.1 425,716 32,767 6 70.2 12.1 18,867 0 0 7.6 0
receiver timed out
14.1 458,482 16,367 5 70.2 14.1 18,867 0 0 7.6 0
quiver: error: PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/qq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-b1gcup43' returned non-zero exit status 1.
Traceback (most recent call last):
File "/usr/local/lib/quiver/python/quiver/pair.py", line 144, in run
_plano.wait(receiver, check=True)
File "/usr/local/lib/quiver/python/plano/main.py", line 1243, in wait
raise PlanoProcessError(proc)
plano.main.PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/qq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-b1gcup43' returned non-zero exit status 1.
```
3. Stream:
```
quiver-arrow send //host.docker.internal//amq/queue/sq1 --durable --count 1m -d 10m --summary --verbose
```
This commit:
```
Count ............................................. 1,000,000 messages
Duration ................................................ 8.7 seconds
Message rate ........................................ 115,154 messages/s
```
RabbitMQ 3.x:
```
Count ............................................. 1,000,000 messages
Duration ............................................... 21.2 seconds
Message rate ......................................... 47,232 messages/s
```
### Memory usage
Start RabbitMQ:
```
ERL_MAX_PORTS=3000000 RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+P 3000000 +S 6" make run-broker PLUGINS="rabbitmq_amqp1_0" FULL=1 RABBITMQ_CONFIG_FILE="rabbitmq.conf"
```
```
/bin/cat rabbitmq.conf
tcp_listen_options.sndbuf = 2048
tcp_listen_options.recbuf = 2048
vm_memory_high_watermark.relative = 0.95
vm_memory_high_watermark_paging_ratio = 0.95
loopback_users = none
```
Create 50k connections with 2 sessions per connection, i.e. 100k sessions in total:
```go
package main
import (
"context"
"log"
"time"
"github.com/Azure/go-amqp"
)
func main() {
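// Open 50,000 connections with two sessions each. Nothing is closed
// on purpose: the process sleeps at the end so that steady-state
// memory usage can be sampled on the broker ("nuc" below is the
// hostname of the broker machine in this setup).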
for i := 0; i < 50000; i++ {
conn, err := amqp.Dial(context.TODO(), "amqp://nuc", &amqp.ConnOptions{SASLType: amqp.SASLTypeAnonymous()})
if err != nil {
log.Fatal("dialing AMQP server:", err)
}
_, err = conn.NewSession(context.TODO(), nil)
if err != nil {
log.Fatal("creating AMQP session:", err)
}
_, err = conn.NewSession(context.TODO(), nil)
if err != nil {
log.Fatal("creating AMQP session:", err)
}
}
log.Println("opened all connections")
time.Sleep(5 * time.Hour)
}
```
This commit:
```
erlang:memory().
[{total,4586376480},
{processes,4025898504},
{processes_used,4025871040},
{system,560477976},
{atom,1048841},
{atom_used,1042841},
{binary,233228608},
{code,21449982},
{ets,108560464}]
erlang:system_info(process_count).
450289
```
7 procs per connection + 1 proc per session.
(7 + 2*1) * 50,000 = 450,000 procs
RabbitMQ 3.x:
```
erlang:memory().
[{total,15168232704},
{processes,14044779256},
{processes_used,14044755120},
{system,1123453448},
{atom,1057033},
{atom_used,1052587},
{binary,236381264},
{code,21790238},
{ets,391423744}]
erlang:system_info(process_count).
1850309
```
7 procs per connection + 15 per session
(7 + 2*15) * 50,000 = 1,850,000 procs
50k connections + 100k sessions require
with this commit: 4.5 GB
in RabbitMQ 3.x: 15 GB
## Future work
1. More efficient parser and serializer
2. TODO in mc_amqp: Do not store the parsed message on disk.
3. Implement both AMQP HTTP extension and AMQP management extension to allow AMQP
clients to create RabbitMQ objects (queues, exchanges, ...).
Fixes https://github.com/rabbitmq/rabbitmq-server/discussions/10620
Up to RabbitMQ 3.12:
* When an AMQP 0.9.1 publisher sends a message with P_basic.headers
unset, RabbitMQ will deliver an AMQP 0.9.1 message with
P_basic.headers unset.
* When an AMQP 0.9.1 publisher sends a message with P_basic.headers
being an empty list ([]), RabbitMQ will deliver an AMQP 0.9.1 message with
P_basic.headers being an empty list ([]).
In 3.13 including message containers, the 1st behaviour stayed the same
while the 2nd behaviour changed to:
* When an AMQP 0.9.1 publisher sends a message with P_basic.headers
being an empty list ([]), RabbitMQ will deliver an AMQP 0.9.1 message with
P_basic.headers unset.
This commit fixes this regression by using the same behaviour as in
3.12.
If anything fails during file handle reservation, it will take a
quorum queue process down with it. This commit makes this function
more defensive and avoids printing the full stack trace
if the failure happened during shutdown (which is currently quite likely).
rabbit_common is indirectly included via rabbit_stream_reader.hrl, and
the rules_erlang gazelle extension does not yet know how to detect
this, so the directive declares it manually.
An AMQP boolean can be encoded using 1 byte or 2 bytes:
https://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-types-v1.0-os.html#type-boolean
Prior to this commit, our Erlang parser returned:
* Erlang terms `true` or `false` for the 1 byte AMQP encoding
* Erlang terms `{boolean, true}` or `{boolean, false}` for the 2 byte AMQP encoding
Having a serializer and parser that perform the opposite actions such
that
```
Term = parse(serialize(Term))
```
is desirable as it provides a symmetric property useful not only for
property based testing, but also for avoiding altering message hashes
when serializing and parsing the same term.
However, dealing with `{boolean, boolean()}` tuples instead of `boolean()` is quite unwieldy, since
all Erlang code must handle both forms, leading to subtle bugs such as those in:
* 4cbeab8974/deps/rabbitmq_amqp1_0/src/rabbit_amqp1_0_message.erl (L155-L158)
* b8173c9d3b/deps/rabbitmq_mqtt/src/mc_mqtt.erl (L83-L88)
* b8173c9d3b/deps/rabbit/src/mc_amqpl.erl (L123-L127)
Therefore, this commit takes the safe approach and always
parses to an Erlang `boolean()`, independent of whether the AMQP boolean
was encoded with 1 or 2 bytes.
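A sketch of the normalization, with constructor bytes taken from the AMQP 1.0 core types specification linked above (function name is illustrative):
```erlang
%% 1-byte encodings: 0x41 = true, 0x42 = false.
%% 2-byte encoding: constructor 0x56 followed by an octet 0 or 1.
%% All four forms now parse to a plain Erlang boolean().
parse_boolean(<<16#41, Rest/binary>>)    -> {true, Rest};
parse_boolean(<<16#42, Rest/binary>>)    -> {false, Rest};
parse_boolean(<<16#56, 1, Rest/binary>>) -> {true, Rest};
parse_boolean(<<16#56, 0, Rest/binary>>) -> {false, Rest}.
```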
[Why]
We don't need to change the mirrored_supervisor child ID format for
Khepri. Unfortunately, the temporary experimental format was erroneously
backported to the 3.11.x and 3.12.x releases...
This broke the federation and shovel plugins during upgrades.
[How]
Here, we restore the original behavior, meaning that the ID stays as it
was and we just modify it when we need a Khepri path.
The code is updated to know about the temporary experimental format as
well because it will be used by the latest 3.11.x and 3.12.x releases.
[Why]
The format was changed to be compatible with Khepri paths. However, this
ID is used in in-memory states here and there as well. So changing its
format makes upgrades complicated because the code has to handle both
the old and new formats possibly used by the mirrored supervisor already
running on other nodes.
[How]
Instead, this patch converts the ID (in its old format) to something
compatible with a Khepri path only when we need to build a Khepri path.
This relies on the fact that the `Group` is a module and we can call it
to let it convert the opaque ID to a Khepri path.
While here, improve the type specs to document that a group is always a
module name and to document what a child ID can be.
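Conceptually, the conversion is deferred until a path is actually needed; a hypothetical sketch (the callback name `id_to_khepri_path/1` is illustrative, not the real API):
```erlang
%% `Group` is always a module name, so the opaque child ID can be
%% converted lazily, only at the point where a Khepri path is built.
khepri_path(Group, ChildId) when is_atom(Group) ->
    Group:id_to_khepri_path(ChildId).
```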
[Why]
There is a bug in Khepri that prevents the mirrored supervisor from
restarting its processes on the new node. This is unrelated to the
shovel plugin or this testsuite.
[How]
Mark the testsuite as "flaky" until a solution is found in Khepri.
[Why]
An upgrade scenario going from RabbitMQ 3.11.24 to the upcoming 3.12.8
was shared in issue #9894 to demonstrate that the change of child ID
format broke rolling upgrades when there are existing dynamic shovels.
[How]
The testcase uses 4 nodes:
* one reference node
* one node to host source and target queues
* one "old" node
* one "new" node
The reference node uses the new version so we can see which child ID format that version produces.
The node hosting the source and target queues uses the old version, though its version is not
actually relevant for this testcase.
The testcase uses the old node to create the dynamic shovel, then the
new node to simulate an upgrade by clustering it with the old node and
stopping the old one.
[Why]
An upgrade scenario going from RabbitMQ 3.12.x to the upcoming 3.13.0
was shared in issue #10306 to demonstrate that the change of child ID
format broke rolling upgrades when there are existing federated
exchanges.
[How]
The testcase uses 5 nodes:
* one upstream node
* two "old" downstream nodes
* two "new" downstream nodes
The old downstream nodes are used to prepare a 2-node cluster that is
about to be upgraded. The new downstream nodes are added to the cluster
then the old downstream nodes are stopped to simulate that rolling
upgrade.
The child ID format was restored in the previous commit, thus there is
no conversion to handle and the testcase should just work with a fresh
3.13.0+ cluster or with a mixed-version cluster with 3.12.x. It failed
during the preparation of the previous commit to make sure it was
effective.
The offset lag for a consumer is the difference between the
last committed offset (offset of the first message in the
last chunk confirmed by a quorum of stream members) and
the current offset of the consumer (offset of the first message
in the last chunk dispatched to the consumer).
The calculation is simple in most cases, but it needs
to be refined with more context for edge cases (subscription
to a stream that has no messages yet, subscription at the
very end of a quiet stream).
Example: subscription at "next" (waiting for new messages) in
a quiet stream (no messages published). The previous implementation
would return consumer offset = 0 and offset lag = last committed
offset, where we would expect to get consumer offset = next offset
and offset lag = 0.
This commit fixes the calculation for these edge cases.
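In the common case the rule reduces to a simple difference; a sketch, assuming offsets are plain integers (function name is illustrative):
```erlang
%% Offset lag = last committed offset - consumer offset, clamped at 0
%% so that edge cases (e.g. a "next" subscription on an empty stream,
%% where the consumer offset equals the next offset) report no lag.
offset_lag(CommittedOffset, ConsumerOffset) ->
    max(0, CommittedOffset - ConsumerOffset).
```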