AMQP 1.0 shovels don't need the amqp10 message format: the binary
can be translated directly into a message container, and the other
way around. The new amqp10_raw_msg just stores the payload
and the information required to create the transfer frame, skipping
a few unnecessary encode/decode operations on the AMQP 1.0 sections.
# What?
* Support Direct Reply-To for AMQP 1.0
* Compared to AMQP 0.9.1, this PR allows for multiple volatile queues on a single
AMQP 1.0 session. Use case: JMS clients can create multiple temporary queues on
the same JMS/AMQP session:
* https://jakarta.ee/specifications/messaging/3.1/apidocs/jakarta.messaging/jakarta/jms/session#createTemporaryQueue()
* https://jakarta.ee/specifications/messaging/3.1/apidocs/jakarta.messaging/jakarta/jms/jmscontext#createTemporaryQueue()
* Fix missing metrics for Direct Reply-To in AMQP 0.9.1, e.g.
`messages_delivered_total`
* Fix missing metrics (even without using Direct Reply-To) in AMQP 0.9.1:
If the stats level is not `fine`, global metrics `rabbitmq_global_messages_delivered_*` should still be incremented.
# Why?
* Allow for scalable at-most-once RPC reply delivery
Example use case: thousands of requesters connect, send a single
request, wait for a single reply, and disconnect.
This PR won't create any queue and won't write to the metadata store.
Therefore, there's less pressure on the metadata store, less pressure
on the Management API when listing all queues, less pressure on the
metrics subsystem, etc.
* Feature parity with AMQP 0.9.1
# How?
This PR extracts the previously channel specific Direct Reply-To code
into a new queue type: `rabbit_volatile_queue`.
"Volatile" describes the semantics, not a use-case. It signals non-durable,
zero-buffer, at-most-once, may-drop, and "not stored in Khepri."
This new queue type is then used for AMQP 1.0 and AMQP 0.9.1.
Sending to the volatile queue is stateless, as was previously the case with Direct Reply-To in AMQP 0.9.1 and as is done
for the MQTT QoS 0 queue.
This allows for use cases where a single responder replies to e.g. 100k different requesters.
RabbitMQ automatically grants new link-credit to the responder because the new queue type confirms immediately.
The key gets implicitly checked by the channel/session:
If the queue name (including the key) doesn’t exist, the `handle_event` callback for this queue isn’t invoked and therefore
no delivery will be sent to the responder.
This commit supports Direct Reply-To across AMQP 1.0 and 0.9.1. In other
words, the requester can be an AMQP 1.0 client while the responder is an
AMQP 0.9.1 client or vice versa.
RabbitMQ will internally convert between the AMQP 0.9.1 `reply_to` property and the AMQP
1.0 `/queues/<queue>` address. The AMQP 0.9.1 `reply_to` property is
expected to contain a queue name. That's in line with the AMQP 0.9.1
spec:
> One of the standard message properties is Reply-To, which is designed
specifically for carrying the name of reply queues.
Compared to AMQP 0.9.1 where the requester sets the `reply_to` property
to `amq.rabbitmq.reply-to` and RabbitMQ modifies this field when
forwarding the message to the request queue, in AMQP 1.0 the requester
learns about the queue name from the broker at link attachment time.
The requester has to set the reply-to property to the server generated
queue name. That's because the server isn't allowed to modify the bare
message.
During link attachment time, the client has to set certain fields.
These fields are expected to be set by the RabbitMQ client libraries.
Here is an Erlang example:
```erl
Source = #{address => undefined,
           durable => none,
           expiry_policy => <<"link-detach">>,
           dynamic => true,
           capabilities => [<<"rabbitmq:volatile-queue">>]},
AttachArgs = #{name => <<"receiver">>,
               role => {receiver, Source, self()},
               snd_settle_mode => settled,
               rcv_settle_mode => first},
{ok, Receiver} = amqp10_client:attach_link(Session, AttachArgs),
AddressReplyQ = receive {amqp10_event, {link, Receiver, {attached, Attach}}} ->
                        #'v1_0.attach'{source = #'v1_0.source'{address = {utf8, Addr}}} = Attach,
                        Addr
                end,
```
The client then sends the message by setting the reply-to address as
follows:
```erl
amqp10_client:send_msg(
  SenderRequester,
  amqp10_msg:set_properties(
    #{message_id => <<"my ID">>,
      reply_to => AddressReplyQ},
    amqp10_msg:new(<<"tag">>, <<"request">>))),
```
If the responder attaches to the queue target in the reply-to field,
RabbitMQ will check if the requester link is still attached. If the
requester detached, the link will be refused.
The responder can also attach to the anonymous null target and set the
`to` field to the `reply-to` address.
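For illustration, a minimal sketch of such a responder using the Erlang client; `Session` and `ReplyToAddress` are assumed to be bound already (the latter taken from the request's `reply-to` property), and the exact client-side representation of the anonymous (null) target is an assumption:
```erl
TargetAnon = #{address => undefined},  %% assumed representation of the anonymous target
AttachArgs = #{name => <<"responder">>,
               role => {sender, TargetAnon},
               snd_settle_mode => settled,
               rcv_settle_mode => first},
{ok, Responder} = amqp10_client:attach_link(Session, AttachArgs),
%% wait until RabbitMQ grants link-credit before sending
receive {amqp10_event, {link, Responder, credited}} -> ok end,
Reply = amqp10_msg:set_properties(
          #{to => ReplyToAddress,            %% address the reply via the `to' field
            correlation_id => <<"my ID">>},  %% correlate with the request's message-id
          amqp10_msg:new(<<"reply-tag">>, <<"reply">>)),
ok = amqp10_client:send_msg(Responder, Reply),
```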
If RabbitMQ cannot deliver a reply, instead of buffering the reply,
RabbitMQ drops the reply and increments the following Prometheus metric:
```
rabbitmq_global_messages_dead_lettered_maxlen_total{queue_type="rabbit_volatile_queue",dead_letter_strategy="disabled"} 0.0
```
That's in line with the MQTT QoS 0 queue type.
A reply message could be dropped for a variety of reasons:
1. The requester ran out of link-credit. It's therefore the requester's
responsibility to grant sufficient link-credit on its receiving link.
2. RabbitMQ isn't allowed to deliver any message due to session flow
control. It's the requester's responsibility to keep the session window
large enough.
3. The requester doesn't consume messages fast enough, causing TCP
backpressure to be applied, or the RabbitMQ AMQP writer proc isn't
scheduled quickly enough. The latter can happen for example if
RabbitMQ runs with a single scheduler (is assigned a single CPU
core). In either case, RabbitMQ's internal flow control causes the
volatile queue to drop messages.
Therefore, if high throughput is required while message loss is undesirable, a classic queue should be used
instead of a volatile queue since the former buffers messages while the
latter doesn't.
The main difference between the volatile queue and the MQTT QoS 0 queue
is that the former isn't written to the metadata store.
# Breaking Change
Prior to this PR the following [documented caveat](https://www.rabbitmq.com/docs/4.0/direct-reply-to#limitations) applied:
> If the RPC server publishes with the mandatory flag set then `amq.rabbitmq.reply-to.*`
is treated as **not** a queue; i.e. if the server only publishes to this name then the message
will be considered "not routed"; a `basic.return` will be sent if the mandatory flag was set.
This PR removes this caveat.
This PR introduces the following new behaviour:
> If the RPC server publishes with the mandatory flag set, then `amq.rabbitmq.reply-to.*`
is treated as a queue (assuming this queue name is encoded correctly). However,
whether the requester is still there to consume the reply is not checked at routing time.
In other words, if the RPC server only publishes to this name, then the message will be
considered "routed" and RabbitMQ will therefore not send a `basic.return`.
## What?
Refuse or detach the link instead of ending the session for many
link-level errors, including the following:
* Source queue or target exchange doesn't exist during attach
* Trying to consume from an exclusive queue on a different connection
* Delivery of message to a target queue fails
* Wrong delivery-id
* Wrong settled flag
* Queue declaration fails for dynamic queues
* Publishing to internal exchange
## Why?
Because many errors are scoped to a single terminus, detaching just that link
preserves the rest of the session’s links - avoiding needless disruption of
other traffic. AMQP 1.0’s error model is hierarchical; RabbitMQ should escalate to
ending the session only for session-level faults or if the client keeps
using a destroyed link.
## How?
Refuse the link as per figure 2.33.
This commit fixes the failing test cases:
```
make -C deps/rabbit ct-msg_size_metrics t=tests:message_size
make -C deps/amqp10_client ct-system t=rabbitmq:roundtrip_large_messages
```
The second test case failed with:
```
flush {amqp10_event,
{connection,<0.193.0>,
{closed,
{framing_error,
<<"frame size (131073 bytes) > maximum frame size (131072 bytes)">>}}}}
```
## What?
This commit allows AMQP 1.0 clients to define SQL-like filter expressions when consuming from streams, enabling server-side message filtering.
RabbitMQ will only dispatch messages that match the provided filter expression, reducing network traffic and client-side processing overhead.
SQL filter expressions are a more powerful alternative to the [AMQP Property Filter Expressions](https://www.rabbitmq.com/blog/2024/12/13/amqp-filter-expressions) introduced in RabbitMQ 4.1.
SQL filter expressions are based on the [JMS message selector syntax](https://jakarta.ee/specifications/messaging/3.1/jakarta-messaging-spec-3.1#message-selector-syntax) and support:
* Comparison operators (`=`, `<>`, `>`, `<`, `>=`, `<=`)
* Logical operators (`AND`, `OR`, `NOT`)
* Arithmetic operators (`+`, `-`, `*`, `/`)
* Special operators (`BETWEEN`, `LIKE`, `IN`, `IS NULL`)
* Access to the properties and application-properties sections
**Examples**
Simple expression:
```sql
header.priority > 4
```
Complex expression:
```sql
order_type IN ('premium', 'express') AND
total_amount BETWEEN 100 AND 5000 AND
(customer_region LIKE 'EU-%' OR customer_region = 'US-CA') AND
properties.creation-time >= 1750772279000 AND
NOT cancelled
```
Like AMQP property filter expressions, SQL filter expressions can be
combined with Bloom filters. Combining both allows for highly customisable
expressions (SQL) and extremely fast evaluation (Bloom filter) if only a
subset of the chunks need to be read from disk.
## Why?
Compared to AMQP property filter expressions, SQL filter expressions provide the following advantage:
* High expressiveness and flexibility in defining the filter
As with AMQP property filter expressions, the following advantages also apply:
* No false positives (as can occur with Bloom filters)
* Multiple concurrent clients can attach to the same stream each consuming
only a specific subset of messages while preserving message order.
* Low network overhead as only messages that match the filter are
transferred to the client
* Likewise, lower resource usage (CPU and memory) on clients since they
don't need to deserialise messages that they are not interested in.
* If the SQL expression is simple, even the broker will save resources
because it doesn't need to serialise and send messages that the client
isn't interested in.
## How?
### JMS Message Selector Syntax vs. AMQP Extension Spec
The AMQP Filter Expressions Version 1.0 extension Working Draft 09 defines SQL Filter Expressions in Section 6.
This spec differs from the JMS message selector spec. Neither is a subset of the other. We can choose to follow either.
However, I think it makes most sense to follow the JMS spec because:
* The JMS spec is better defined
* The JMS spec is far more widespread than the AMQP Working Draft spec. (A slight variation of the AMQP Working
Draft is used by Azure Service Bus: https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-messaging-sql-filter)
* The JMS spec is mostly simpler (partly because it matches only on simple types)
* This will allow for a single SQL parser in RabbitMQ for both AMQP clients consuming from a stream and possibly in future for JMS clients consuming from queues or topics.
<details>
<summary>AMQP extension spec vs JMS spec</summary>
* **AMQP**: `!=` is a synonym for `<>`.
  **JMS**: defines only `<>`.
  **Conclusion**: `<>` is sufficient.
* **AMQP**: Strings can be tested for "greater than": "both operands are of type string or of type symbol (any combination is permitted) and the lexicographical rank of the left operand is greater than the lexicographical rank of the right operand".
  **JMS**: "String and Boolean comparison is restricted to = and <>."
  **Conclusion**: The JMS behaviour is sufficient.
* **AMQP**: `IN <set-expression>`, where the set-expression can contain non-string literals.
  **JMS**: The set-expression can contain only string literals.
  **Conclusion**: The JMS behaviour is sufficient.
* **AMQP**: `EXISTS` predicate to check for composite types.
  **JMS**: Only simple types.
  **Conclusion**: We want to match only on simple types, i.e. allow matching only against values in the application-properties and properties sections and the priority field of the header section.
* **AMQP**: Modulo operator `%`.
  **Conclusion**: JMS doesn't define the modulo operator. Let's start without it. We can decide in future to add support since it can actually be useful, for example for two receivers who want to process every other message.
* **AMQP**: The `+` operator can concatenate string and symbol values.
  **Conclusion**: Such string concatenation isn't defined in JMS. We don't need it.
* **AMQP**: Defines NAN and INF.
  **JMS**: "Approximate literals use the Java floating-point literal syntax." Examples include "7."
  **Conclusion**: We can go with the JMS spec given that it needs to be implemented anyway for JMS support. Scientific notation is supported in both the AMQP spec and the JMS spec.
* **AMQP**: String literals can be surrounded by single or double quotation marks.
  **JMS**: A string literal is enclosed in single quotes.
  **Conclusion**: Supporting single quotes is good enough.
* **AMQP**: "A binary constant is a string of pairs of hexadecimal digits prefixed by '0x' that are not enclosed in quotation marks".
  **Conclusion**: JMS doesn't support binary constants. We can start without binary constants. Matching against binary values is still supported if these binary values can be expressed as UTF-8 strings.
* **AMQP**: Functions DATE, UTC, SUBSTRING, LOWER, UPPER, LEFT, RIGHT, plus vendor-specific functions.
  **Conclusion**: JMS doesn't define such functions. We can start without them.
* **AMQP**: `<field><array_element_reference>` and `<field>'.'<composite_type_reference>` to access map and array elements.
  **Conclusion**: Same as above: we want to match only on simple types, i.e. allow matching only against values in the application-properties and properties sections and the priority field of the header section.
* **AMQP**: Allows for delimited identifiers.
  **JMS**: Java identifier part characters.
  **Conclusion**: We can go with Java identifiers, extending the allowed characters by `.` and `-` to reference field names such as `properties.group-id`.
* **JMS**: BETWEEN operator.
  **Conclusion**: The BETWEEN operator isn't supported in the AMQP spec. Let's support it as a convenience since it's already available in JMS.
</details>
### Filter Name
The client provides a filter with name `sql-filter` instead of name
`jms-selector` to make it possible to differentiate between JMS clients and other
native AMQP 1.0 clients using SQL expressions. This way, we can also
optionally extend the SQL grammar in the future.
### Identifiers
JMS message selectors allow identifiers to refer to some well-known JMS headers, which map to well-known AMQP fields, for example:
```erl
jms_header_to_amqp_field_name(<<"JMSDeliveryMode">>) -> durable;
jms_header_to_amqp_field_name(<<"JMSPriority">>) -> priority;
jms_header_to_amqp_field_name(<<"JMSMessageID">>) -> message_id;
jms_header_to_amqp_field_name(<<"JMSTimestamp">>) -> creation_time;
jms_header_to_amqp_field_name(<<"JMSCorrelationID">>) -> correlation_id;
jms_header_to_amqp_field_name(<<"JMSType">>) -> subject;
%% amqp-bindmap-jms-v1.0-wd10 § 3.2.2 JMS-defined ’JMSX’ Properties
jms_header_to_amqp_field_name(<<"JMSXUserID">>) -> user_id;
jms_header_to_amqp_field_name(<<"JMSXGroupID">>) -> group_id;
jms_header_to_amqp_field_name(<<"JMSXGroupSeq">>) -> group_sequence;
```
This commit does a similar mapping for `header.` and `properties.` prefixed identifiers to field names in the AMQP header and properties sections.
The only field in the AMQP header section that can be filtered on is `priority`, i.e. identifier `header.priority`.
By default, as described in the AMQP extension spec, if an identifier is not prefixed, it refers to a key in the application-properties section.
Hence, all identifiers prefixed with `header.` and `properties.` have special meanings and MUST be avoided by applications unless they want to refer to those specific fields.
Azure Service Bus uses the `sys.` and `user.` prefixes for well known field names and arbitrary application-provided keys, respectively.
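For example, the following (illustrative) expression combines a header field, `properties.` prefixed fields, and a plain application-properties key:
```sql
header.priority >= 4 AND
properties.subject = 'order.created' AND
properties.group-id LIKE 'region-%' AND
premium_customer
```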
### SQL lexer, parser and evaluator
This commit implements the SQL lexer and parser in files rabbit_jms_selector_lexer.xrl and
rabbit_jms_selector_parser.yrl, respectively.
Advantages:
* Both the definitions in the lexer and the grammar in the parser are defined **declaratively**.
* In total, the entire SQL syntax and grammar is defined in only 240 lines.
* Therefore, lexer and parser are simple to maintain.
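For illustration, a leex-generated lexer and a yecc-generated parser are typically driven as follows (a sketch; the actual wrapper code in RabbitMQ may differ):
```erl
SQL = "header.priority > 4 AND order_type IN ('premium', 'express')",
%% leex-generated modules expose string/1, yecc-generated modules expose parse/1
{ok, Tokens, _EndLoc} = rabbit_jms_selector_lexer:string(SQL),
{ok, Ast} = rabbit_jms_selector_parser:parse(Tokens),
%% The AST is then post-processed per client type (see below) before the
%% evaluator applies it to each message on the hot path.
```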
The idea of this commit is to use the same lexer and parser for native AMQP clients consuming
from streams (this commit) as for JMS clients (in the future).
All native-AMQP-client vs. JMS-client specifics are then applied after
the Abstract Syntax Tree (AST) has been created by the parser.
For example, this commit transforms the AST specifically for native AMQP clients
by mapping `properties.` prefixed identifiers (field names) to atoms.
A JMS client's mapping from `JMS` prefixed headers can transform the AST
differently.
Likewise, this commit transforms the AST to compile a regex for complex LIKE
expressions when consuming from a stream, while a future version
might not want to compile a regex when consuming from quorum queues.
Module `rabbit_jms_ast` provides such AST helper functions.
The lexer and parser are not performance critical as this work happens
upon receivers attaching to the stream.
The evaluator however is performance critical as message evaluation
happens on the hot path.
### LIKE expressions
The evaluator has been optimised to only compile a regex when necessary.
If the LIKE expression-value contains no wildcard or only a single `%`
wildcard, Erlang pattern matching is used as it's more efficient.
Since `_` can match any UTF-8 character, a regex will be compiled with
the `[unicode]` option.
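Illustrative examples (identifiers are placeholders): the first pattern contains only a single trailing `%`, so plain Erlang pattern matching suffices; the second contains `_` wildcards, so a regex is compiled with the `[unicode]` option.
```sql
properties.subject LIKE 'orders.%'
postal_code LIKE '1____'
```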
### Filter errors
Any error upon a receiver attaching to a stream causes the filter to
not become active. RabbitMQ will log a warning describing the reason and
will omit the named filter in its attach reply frame. The client lib is
responsible for detaching the link as explained in the AMQP spec:
> The receiving endpoint sets its desired filter, the sending endpoint sets the filter actually in place
(including any filters defaulted at the node). The receiving endpoint MUST check that the filter in
place meets its needs and take responsibility for detaching if it does not.
This applies to lexer and parser errors.
Errors during message evaluation result in an unknown value.
How conditional operators treat unknown is described in the JMS spec. If the
entire selector condition evaluates to unknown, the message does not match and
will therefore not be delivered to the client.
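For example (illustrative), if a message carries no `region` application property, the first expression below evaluates to unknown and the message is not delivered; the second is NOT unknown, which is also unknown, so the message is still not delivered; only the third evaluates to true and lets the message through.
```sql
region = 'EU'
NOT region = 'EU'
region IS NULL
```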
## Clients
Support for passing the SQL expression from app to broker is provided by
the Java client in https://github.com/rabbitmq/rabbitmq-amqp-java-client/pull/216
We've experienced lots of failures in CI:
```
GEN test/system_SUITE_data/apache-activemq-5.18.3-bin.tar.gz
make: *** [Makefile:65: test/system_SUITE_data/apache-activemq-5.18.3-bin.tar.gz] Error 28
make: Leaving directory '/home/runner/work/rabbitmq-server/rabbitmq-server/deps/amqp10_client'
Error: Process completed with exit code 2.
```
Bumping to the latest ActiveMQ Classic version may or may not help with
these failures.
Either way, we want to test against the latest ActiveMQ version. Version
5.18.3 reached end-of-life and is no longer maintained.
[Why]
They make it more difficult to compile RabbitMQ on Windows. They were
probably useful at the time of the switch to a monorepository, but I
don't see a need for them anymore.
The AMQP spec defines:
```
<field name="durable" type="boolean" default="false"/>
```
RabbitMQ 4.0 and 4.1 interpret the durable field as true if not set.
The idea was to favour safety over performance.
This complies with the AMQP spec because the spec allows other target or
node specific defaults for the durable field:
> If the header section is omitted the receiver MUST assume the appropriate
> default values (or the meaning implied by no value being set) for the fields
> within the header unless other target or node specific defaults have otherwise
> been set.
However, some client libraries completely omit the header section if the
app explicitly sets durable=false. This complies with the spec, but it
means that RabbitMQ cannot differentiate between "client app forgot to set
the durable field" vs "client lib opted in for an optimisation omitting
the header section".
This is problematic with JMS message selectors where JMS apps can filter
on JMSDeliveryMode. To be able to correctly filter on JMSDeliveryMode,
RabbitMQ needs to know whether the JMS app sent the message as
PERSISTENT or NON_PERSISTENT.
Rather than relying on client libs to always send the header section
including the durable field, this commit makes RabbitMQ comply with the
default value for durable in the AMQP spec.
Some client lib maintainers agreed to send the header section, while
other maintainers refused to do so:
* https://github.com/Azure/go-amqp/issues/330
* https://issues.apache.org/jira/browse/QPIDJMS-608
Likely, the AMQP spec was designed to allow omitting the header section when
performance is important, as is the case with durable=false. Omitting
the header section saves a few bytes per message on the wire and
some marshalling and unmarshalling overhead on both client and server.
Therefore, it's better to push the "safe by default" behaviour from the broker
back to the client libs. Client libs should send messages as durable by
default unless the client app explicitly opts in to sending messages as
non-durable. This is also what JMS does: by default, JMS apps send
messages as PERSISTENT:
> The message producer's default delivery mode is PERSISTENT.
Therefore, this commit also makes the AMQP Erlang client send messages as
durable, by default.
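A minimal sketch of how an app can still opt out per message with the Erlang client (assuming the existing `amqp10_msg:set_headers/2` API and an already attached `Sender`):
```erl
%% The client now defaults to durable=true; sending a non-durable message
%% requires setting the header explicitly.
Msg = amqp10_msg:set_headers(#{durable => false},
                             amqp10_msg:new(<<"tag">>, <<"transient payload">>)),
ok = amqp10_client:send_msg(Sender, Msg),
```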
This commit will apply to RabbitMQ 4.2.
It's arguably not a breaking change because in RabbitMQ, message durability
is actually determined more by the queue type the message is sent to than by the
durable field of the message:
* Quorum queues and streams store messages durably (fsync or replicate)
no matter what the durable field is
* MQTT QoS 0 queues hold messages in memory no matter what the
durable field is
* Classic queues do not fsync even if the durable field is set to true
In addition, the RabbitMQ AMQP Java library introduced in RabbitMQ 4.0 sends messages with
durable=true:
53e3dd6abb/src/main/java/com/rabbitmq/client/amqp/impl/AmqpPublisher.java (L91)
The tests for selecting messages by JMSDeliveryMode relying on the
behaviour in this commit can be found on the `jms` branch.
This commit fixes a bug in the Erlang AMQP 1.0 client.
Prior to this commit, to repro this bug:
1. Send more than 2^16 messages to a queue.
2. Grant more than a total of 2^16 link credit initially (on a single link
or across multiple links) on a single session without any
auto or manual link credit renewal.
The expectation is that thanks to sufficiently granted initial link-credit,
the client will receive all messages.
However, consumption stops after exactly 2^16-1 messages.
That's because the client lib was never sending a flow frame to the server.
So, after the client received all 2^16-1 messages (the initial
incoming-window set by the client), the server's remote-incoming-window
reached 0 causing the server to stop delivering messages.
The expectation is that the client lib automatically handles session
flow control without any manual involvement of the client app.
This commit implements this fix:
* We keep the server's remote-incoming window always large by default as
explained in https://www.rabbitmq.com/blog/2024/09/02/amqp-flow-control#incoming-window
* Hence, the client lib sets its incoming-window to 100,000 initially.
* The client lib tracks its incoming-window, decrementing it by 1 for
every transfer it receives. (This wasn't done prior to this commit.)
* Whenever this window shrinks below 50,000, the client sends a flow
frame without any link information, widening its incoming-window back to 100,000.
* For test cases (maybe later for apps as well), there is a new function
`amqp10_client_session:flow/3`, which allows for a test case to do manual
session flow control. Its API is designed very similar to
`amqp10_client_session:flow_link/4` in that the test can optionally request
the lib to auto widen the session window whenever it falls below a certain threshold.
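A conceptual sketch of this renewal logic (illustrative only; the module, record, and function names are not the actual amqp10_client_session internals):
```erl
-module(session_window_sketch).
-export([new/0, on_transfer/1]).

-define(INITIAL_WINDOW, 100_000).
-define(RENEW_BELOW, 50_000).

-record(state, {incoming_window = ?INITIAL_WINDOW :: non_neg_integer()}).

new() -> #state{}.

%% Decrement the incoming-window for every received transfer; once it would
%% drop below the threshold, send a session flow frame (without any link
%% information) widening the window back to its initial size.
on_transfer(#state{incoming_window = Win}) when Win - 1 < ?RENEW_BELOW ->
    send_session_flow(?INITIAL_WINDOW),
    #state{incoming_window = ?INITIAL_WINDOW};
on_transfer(#state{incoming_window = Win}) ->
    #state{incoming_window = Win - 1}.

send_session_flow(IncomingWindow) ->
    %% placeholder: the real client emits an AMQP flow performative here
    io:format("flow{incoming-window=~b}~n", [IncomingWindow]).
```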
This avoids using Mix while compiling, which simplifies
a number of things and lets us do further build improvements
later on.
Elixir is only enabled from within rabbitmq_cli currently.
Eunit is disabled since there are only Elixir tests.
Dialyzer will force-enable Elixir in order to process
Elixir-compiled beam files.
This commit also includes a few changes that are
related:
* The Erlang distribution will now be started for parallel-ct
* Many unnecessary PROJECT_MOD lines have been removed
* `eunit_formatters` has been removed, it provides little value
* The new `maybe_flock` Erlang.mk function is used where possible
* Build test deps when testing rabbitmq_cli (Mix won't do it anymore)
* rabbitmq_ct_helpers now use the early plugins to have Dialyzer
properly set up
Fix crash in close_sent since the client might receive the open frame if
it previously sent the close frame in state open_sent.
We explicitly ignore the open frame. The alternative is to add another
gen_statem state CLOSE_PIPE, which might be overkill however.
This commit also fixes a wrong comment: No sessions have begun if the
app requests the connection to be closed in state open_sent.
## What?
Support the `dynamic` field of sources and targets.
## Why?
1. This allows AMQP clients to dynamically create exclusive queues, which
can be useful for RPC workloads.
2. Support creation of JMS temporary queues over AMQP using the Qpid JMS
client. Exclusive queues map very nicely to JMS temporary queues
because:
> Although sessions are used to create temporary destinations, this is only
for convenience. Their scope is actually the entire connection. Their
lifetime is that of their connection and any of the connection’s sessions
are allowed to create a consumer for them.
https://jakarta.ee/specifications/messaging/3.1/jakarta-messaging-spec-3.1#creating-temporary-destinations
## How?
If the terminus contains the capability `temporary-queue` as defined in
[amqp-bindmap-jms-v1.0-wd10](https://groups.oasis-open.org/higherlogic/ws/public/document?document_id=67638)
[5.2], and as sent by the Qpid JMS client,
RabbitMQ will create an exclusive queue.
(This allows a future commit to take other actions if capability
`temporary-topic` is used, such as additionally creating bindings.)
No matter what the desired node properties are, RabbitMQ will set the
lifetime policy delete-on-close, deleting the exclusive queue when the
link that caused its creation ceases to exist. This means the exclusive
queue will be deleted if either:
* the link gets detached, or
* the session ends, or
* the connection closes
Although the AMQP JMS Mapping and Qpid JMS create only a **sending** link
with `dynamic=true`, this commit also supports **receiving** links with
`dynamic=true` for non-JMS AMQP clients.
RabbitMQ is free to choose the generated queue name. As suggested by the
AMQP spec, the generated queue name will contain the container-id and link
name unless they are very long.
Co-authored-by: Arnaud Cogoluègnes <acogoluegnes@gmail.com>
## What?
Implement the AMQP over WebSocket Binding Committee Specification 01 in
the AMQP 1.0 Erlang client:
https://docs.oasis-open.org/amqp-bindmap/amqp-wsb/v1.0/cs01/amqp-wsb-v1.0-cs01.html
## Why?
1. This allows writing integration tests for the server implementation
of AMQP over WebSocket.
2. Erlang and Elixir clients can use AMQP over WebSocket in environments
where firewalls prohibit access to the AMQP port.
## How?
Use gun as WebSocket client.
The new module `amqp10_client_socket` handles socket operations (open, close, send) for:
* TCP sockets
* SSL sockets
* WebSockets
Prior to this commit, the amqp10_client_connection process closed only the
write end of the socket after it sent the AMQP close performative.
This commit removes this premature socket closure because:
1. There is no equivalent feature provided in Gun since sending a
WebSocket close frame causes Gun to cleanly close the connection for
both writing and reading.
2. It's unnecessary and can result in unexpected and confusing behaviour on the server.
3. It's better practice to keep the TCP connection fully open until
the AMQP closing handshake completes.
4. When amqp10_client_frame_reader terminates, it will cleanly close
the socket for both writing and reading.
Prior to this commit, when the sending client overshot RabbitMQ's incoming-window
(which is allowed in the event of a cluster wide memory or disk alarm),
and RabbitMQ sent a FLOW frame to the client, RabbitMQ sent a negative
incoming-window field in the FLOW frame causing the following crash in
the writer proc:
```
crasher:
initial call: rabbit_amqp_writer:init/1
pid: <0.19353.0>
registered_name: []
exception error: bad argument
in function iolist_size/1
called as iolist_size([<<112,0,0,23,120>>,
[82,-15],
<<"pÿÿÿü">>,<<"pÿÿÿÿ">>,67,
<<112,0,0,23,120>>,
"Rª",64,64,64,64])
*** argument 1: not an iodata term
in call from amqp10_binary_generator:generate1/1 (amqp10_binary_generator.erl, line 141)
in call from amqp10_binary_generator:generate1/1 (amqp10_binary_generator.erl, line 88)
in call from amqp10_binary_generator:generate/1 (amqp10_binary_generator.erl, line 79)
in call from rabbit_amqp_writer:assemble_frame/3 (rabbit_amqp_writer.erl, line 206)
in call from rabbit_amqp_writer:internal_send_command_async/3 (rabbit_amqp_writer.erl, line 189)
in call from rabbit_amqp_writer:handle_cast/2 (rabbit_amqp_writer.erl, line 110)
in call from gen_server:try_handle_cast/3 (gen_server.erl, line 1121)
```
This commit fixes this crash by maintaining a floor of zero for the
incoming-window field in the FLOW frame.
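A minimal sketch of the fix (illustrative; not the actual variable names): the session's bookkeeping value can go negative when a publisher overshoots during an alarm, so it is clamped before being encoded into the flow frame.
```erl
%% Never advertise a negative incoming-window: the AMQP uint encoder cannot
%% encode a negative integer, which is what crashed the writer above.
AdvertisedIncomingWindow = max(0, IncomingWindow),
```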
Fixes #12816
Introduce a single place in the AMQP 1.0 Erlang client that infers the AMQP 1.0 type.
Erlang integers are inferred to be AMQP type `long` to avoid overflow surprises.
Provides a specific function to fix client SSL options, i.e. apply all
fixes that are applied for TLS listeners and clients on previous
versions, but also set the `cacerts` option to the CA certificates obtained by
`public_key:cacerts_get/0`, only when no `cacertfile` or `cacerts` is
provided.
This commit notifies the client app with the AMQP performative if
connection config `notify_with_performative` is set to `true`.
This allows the client app to learn about all fields including
properties and capabilities returned by the AMQP server.
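A short sketch of how an app might opt in (the connection config keys besides `notify_with_performative` are the usual amqp10_client ones; the exact shape of the event payload is an assumption):
```erl
OpnConf = #{address => "localhost",
            port => 5672,
            container_id => <<"my-container">>,
            sasl => anon,
            notify_with_performative => true},
{ok, Connection} = amqp10_client:open_connection(OpnConf),
Open = receive {amqp10_event, {connection, Connection, {opened, Performative}}} ->
               %% Performative carries the server's open frame including
               %% its properties and capabilities.
               Performative
       end,
```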
Support tracking the requeue history as described in
https://github.com/rabbitmq/rabbitmq-website/pull/2095
This commit:
1. adds a test case tracing the requeue history via AMQP 1.0
using the modified outcome and
2. fixes bugs in the broker which crashed if a modified message
annotation value is an AMQP 1.0 list, map, or array.
Complex modified annotation values (list, map, array) are stored as tagged values from now on.
This means AMQP 0.9.1 consumers will not receive modified annotations of
type list, map, or array (which is okay).
* Support AMQP filter expressions
## What?
This PR implements the following property filter expressions for AMQP clients
consuming from streams as defined in
[AMQP Filter Expressions Version 1.0 Working Draft 09](https://groups.oasis-open.org/higherlogic/ws/public/document?document_id=66227):
* properties filters [section 4.2.4]
* application-properties filters [section 4.2.5]
String prefix and suffix matching is also supported.
This PR also fixes a bug where RabbitMQ would accept wrong filters.
Specifically, prior to this PR the values of the filter-set's map were
allowed to be symbols. However, "every value MUST be either null or of a
described type which provides the archetype filter."
## Why?
This feature gives RabbitMQ the ability to have multiple concurrent clients,
each consuming only a subset of messages while maintaining message order.
This feature also reduces network traffic between RabbitMQ and clients by
only dispatching those messages that the clients are actually interested in.
Note that AMQP filter expressions are more fine grained than the [bloom filter based
stream filtering](https://www.rabbitmq.com/blog/2023/10/16/stream-filtering) because
* they do not suffer false positives
* the unit of filtering is per-message instead of per-chunk
* matching can be performed on **multiple** values in the properties and
application-properties sections
* prefix and suffix matching on the actual values is supported.
Both AMQP filter expressions and Bloom filters can be used together.
## How?
If a filter isn't valid, RabbitMQ ignores the filter. RabbitMQ only
replies with filters it actually supports and validated successfully to
comply with:
"The receiving endpoint sets its desired filter, the sending endpoint
[RabbitMQ] sets the filter actually in place (including any filters defaulted at
the node)."
* Delete streams test case
The test suite constructed a wrong filter-set.
Specifically, the value of the filter-set didn't use a described type as
mandated by the spec.
Using https://azure.github.io/amqpnetlite/api/Amqp.Types.DescribedValue.html
throws errors that the descriptor can't be encoded. Given that this code
path is already tested via the amqp_filtex_SUITE, this F# test is
therefore deleted.
* Re-introduce the AMQP filter-set bug
Since clients might rely on the wrong filter-set value type, we keep supporting
the bug behind a deprecated feature flag and will gradually remove support for
this bug.
* Revert "Delete streams test case"
This reverts commit c95cfeaef7.
For messages published to RabbitMQ, RabbitMQ honors the transfer `settled`
field, no matter what value the sender settle mode was set to in the attach
frame.
Therefore, prior to this commit, a client could send a transfer with
`settled=true` even though sender settle mode was set to `unsettled` in the
attach frame.
This commit enforces that the publisher sets only transfer `settled` fields
that comply with the spec.
If sender settle mode is:
* `unsettled`, the transfer `settled` flag must be `false`.
* `settled`, the transfer `settled` flag must be `true`.
* `mixed`, the transfer `settled` flag can be `true` or `false`.
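An illustrative sketch of this check (not the actual rabbit_amqp_session code; names are made up):
```erl
%% Reject transfers whose settled flag contradicts the link's sender settle mode.
validate_transfer_settled(Settled, SndSettleMode) ->
    case {SndSettleMode, Settled} of
        {unsettled, true} -> {error, illegal_settled_flag};
        {settled, false}  -> {error, illegal_settled_flag};
        {_, _}            -> ok
    end.
```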
```
<field name="message-annotations" type="fields"/>
```
Prior to this commit, integration tests succeeded because both the Erlang
client and the RabbitMQ server contained a bug.
This bug was noticed by a Java client test suite.
* Enforce AMQP 1.0 channel-max
Enforce AMQP 1.0 field `channel-max` in the `open` frame by introducing
a new more user friendly setting called `session_max`:
> The channel-max value is the highest channel number that can be used on the connection.
> This value plus one is the maximum number of sessions that can be simultaneously active on the connection.
We set the default value of `session_max` to 64 such that, by
default, RabbitMQ 4.0 allows a maximum of 64 AMQP 1.0 sessions per AMQP 1.0 connection.
More than 64 AMQP 1.0 sessions per connection makes little sense.
See also https://www.rabbitmq.com/blog/2024/09/02/amqp-flow-control#session
Limiting the maximum number of sessions per connection can be useful to
protect against
* applications that accidentally open new sessions without ending old sessions
(session leaks)
* too many metrics being exposed, for example in the future via the
"/metrics/per-object" Prometheus endpoint with timeseries per session
being emitted.
This commit does not make use of the existing `channel_max` setting
because:
1. Given that `channel_max = 0` means "no limit", there is no way for an
operator to limit the number of sessions per connection to 1.
2. Operators might want to set different limits for maximum number of
AMQP 0.9.1 channels and maximum number of AMQP 1.0 sessions.
3. The default of `channel_max` is very high: It allows using more than
2,000 AMQP 0.9.1 channels per connection. Lowering this default might
break existing AMQP 0.9.1 applications.
This commit also fixes a bug in the AMQP 1.0 Erlang client which, prior
to this commit, used channel number 1 for the first session. That's wrong
if a broker allows a maximum of 1 session by replying with `channel-max = 0`
in the `open` frame. Additionally, the spec recommends:
> To make it easier to monitor AMQP sessions, it is RECOMMENDED that implementations always assign the lowest available unused channel number.
Note that in AMQP 0.9.1, channel number 0 has a special meaning:
> The channel number is 0 for all frames which are global to the connection and 1-65535 for frames that
refer to specific channels.
* Apply PR feedback
This has no real impact on performance[1] but should
make it clear which applications can run the broker
and/or publish to Hex.pm. In particular, applications
that we can't run the broker from will now give up
early if we try to.
Note that while the broker can't normally run from the
amqp_client application's directory, it can run from
tests and some of the tests start the broker.
[1] on my machine
No real need to have two files, especially since one of them contains
only a few variable definitions. The plan is to keep
separate files only for larger features such as dist or run.