# What?
* Support Direct Reply-To for AMQP 1.0
* Compared to AMQP 0.9.1, this PR allows for multiple volatile queues on a single
AMQP 1.0 session. Use case: JMS clients can create multiple temporary queues on
the same JMS/AMQP session:
* https://jakarta.ee/specifications/messaging/3.1/apidocs/jakarta.messaging/jakarta/jms/session#createTemporaryQueue()
* https://jakarta.ee/specifications/messaging/3.1/apidocs/jakarta.messaging/jakarta/jms/jmscontext#createTemporaryQueue()
* Fix missing metrics for Direct Reply-To in AMQP 0.9.1, e.g.
`messages_delivered_total`
* Fix missing metrics (even without using Direct Reply-To) in AMQP 0.9.1:
If stats level is not `fine`, global metrics `rabbitmq_global_messages_delivered_*` should still be incremented.
# Why?
* Allow for scalable at-most-once RPC reply delivery.
Example use case: thousands of requesters connect, send a single
request, wait for a single reply, and disconnect.
This PR won't create any queue and won't write to the metadata store.
Therefore, there's less pressure on the metadata store, less pressure
on the Management API when listing all queues, less pressure on the
metrics subsystem, etc.
* Feature parity with AMQP 0.9.1
# How?
This PR extracts the previously channel specific Direct Reply-To code
into a new queue type: `rabbit_volatile_queue`.
"Volatile" describes the semantics, not a use-case. It signals non-durable,
zero-buffer, at-most-once, may-drop, and "not stored in Khepri."
This new queue type is then used for AMQP 1.0 and AMQP 0.9.1.
Sending to the volatile queue is stateless, as it previously was with Direct Reply-To in AMQP 0.9.1
and as is done for the MQTT QoS 0 queue.
This allows for use cases where a single responder replies to e.g. 100k different requesters.
RabbitMQ will automatically grant new link-credit to the responder because the new queue type confirms immediately.
The key is implicitly checked by the channel/session:
If the queue name (including the key) doesn’t exist, the `handle_event` callback for this queue isn’t invoked and therefore
no delivery will be sent to the responder.
This commit supports Direct Reply-To across AMQP 1.0 and 0.9.1. In other
words, the requester can be an AMQP 1.0 client while the responder is an
AMQP 0.9.1 client or vice versa.
RabbitMQ will internally convert between AMQP 0.9.1 `reply_to` and AMQP
1.0 `/queues/<queue>` address. The AMQP 0.9.1 `reply_to` property is
expected to contain a queue name. That's in line with the AMQP 0.9.1
spec:
> One of the standard message properties is Reply-To, which is designed
specifically for carrying the name of reply queues.
Compared to AMQP 0.9.1 where the requester sets the `reply_to` property
to `amq.rabbitmq.reply-to` and RabbitMQ modifies this field when
forwarding the message to the request queue, in AMQP 1.0 the requester
learns about the queue name from the broker at link attachment time.
The requester has to set the reply-to property to the server generated
queue name. That's because the server isn't allowed to modify the bare
message.
At link attachment time, the client has to set certain fields.
These fields are expected to be set by the RabbitMQ client libraries.
Here is an Erlang example:
```erl
Source = #{address => undefined,
           durable => none,
           expiry_policy => <<"link-detach">>,
           dynamic => true,
           capabilities => [<<"rabbitmq:volatile-queue">>]},
AttachArgs = #{name => <<"receiver">>,
               role => {receiver, Source, self()},
               snd_settle_mode => settled,
               rcv_settle_mode => first},
{ok, Receiver} = amqp10_client:attach_link(Session, AttachArgs),
AddressReplyQ = receive
                    {amqp10_event, {link, Receiver, {attached, Attach}}} ->
                        #'v1_0.attach'{source = #'v1_0.source'{
                                                   address = {utf8, Addr}}} = Attach,
                        Addr
                end,
```
The client then sends the message by setting the reply-to address as
follows:
```erl
amqp10_client:send_msg(
  SenderRequester,
  amqp10_msg:set_properties(
    #{message_id => <<"my ID">>,
      reply_to => AddressReplyQ},
    amqp10_msg:new(<<"tag">>, <<"request">>))),
```
If the responder attaches to the queue target in the reply-to field,
RabbitMQ will check if the requester link is still attached. If the
requester detached, the link will be refused.
The responder can also attach to the anonymous null target and set the
`to` field to the `reply-to` address.
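For illustration, a responder using the anonymous terminus could look like this in the Erlang client, modelled on the requester example above (the exact attach-args shape and variable names are assumptions, not code from this PR):
```erl
%% Sketch: attach a sender link to the anonymous (null) target on the
%% responder's session, i.e. no target address.
Target = #{address => undefined},
SenderArgs = #{name => <<"responder">>,
               role => {sender, Target},
               snd_settle_mode => settled,
               rcv_settle_mode => first},
{ok, Responder} = amqp10_client:attach_link(Session, SenderArgs),
%% Address each reply via the `to` property, copying the request's
%% reply-to address (AddressReplyQ) and message_id.
ok = amqp10_client:send_msg(
       Responder,
       amqp10_msg:set_properties(
         #{to => AddressReplyQ,
           correlation_id => <<"my ID">>},
         amqp10_msg:new(<<"reply-tag">>, <<"reply">>))),
```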
If RabbitMQ cannot deliver a reply, instead of buffering the reply,
RabbitMQ will drop the reply and increment the following Prometheus metric:
```
rabbitmq_global_messages_dead_lettered_maxlen_total{queue_type="rabbit_volatile_queue",dead_letter_strategy="disabled"} 0.0
```
That's in line with the MQTT QoS 0 queue type.
A reply message could be dropped for a variety of reasons:
1. The requester ran out of link-credit. It's therefore the requester's
responsibility to grant sufficient link-credit on its receiving link.
2. RabbitMQ isn't allowed to deliver any message due to session flow
control. It's the requester's responsibility to keep the session window
large enough.
3. The requester doesn't consume messages fast enough, causing TCP
backpressure to be applied, or the RabbitMQ AMQP writer proc isn't
scheduled quickly enough. The latter can happen for example if
RabbitMQ runs with a single scheduler (is assigned a single CPU
core). In either case, RabbitMQ internal flow control causes the
volatile queue to drop messages.
Therefore, if high throughput is required while message loss is undesirable, a classic queue should be used
instead of a volatile queue since the former buffers messages while the
latter doesn't.
The main difference between the volatile queue and the MQTT QoS 0 queue
is that the former isn't written to the metadata store.
# Breaking Change
Prior to this PR the following [documented caveat](https://www.rabbitmq.com/docs/4.0/direct-reply-to#limitations) applied:
> If the RPC server publishes with the mandatory flag set then `amq.rabbitmq.reply-to.*`
is treated as **not** a queue; i.e. if the server only publishes to this name then the message
will be considered "not routed"; a `basic.return` will be sent if the mandatory flag was set.
This PR removes this caveat.
This PR introduces the following new behaviour:
> If the RPC server publishes with the mandatory flag set, then `amq.rabbitmq.reply-to.*`
is treated as a queue (assuming this queue name is encoded correctly). However,
whether the requester is still there to consume the reply is not checked at routing time.
In other words, if the RPC server only publishes to this name, then the message will be
considered "routed" and RabbitMQ will therefore not send a `basic.return`.
This is a follow-up to commit 93db480bc4
`erlang.mk` supports the `YRL_ERLC_OPTS` variable to set `erlc`-specific
compiler options when processing `.yrl` and `.xrl` files. Using this
variable allows `make RMQ_ERLC_OPTS=` to disable the
`+deterministic` option, which in turn allows using `c()` in the erl shell to
recompile modules on the fly while a cluster is running.
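For illustration, the intended on-the-fly workflow in a running node's shell might look like this (module name just an example):
```erl
%% After `make RMQ_ERLC_OPTS=`, beams keep their full source paths
%% (no +deterministic), so c/1 can find and recompile the module:
1> c(rabbit_fifo).
{ok,rabbit_fifo}
```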
Previously, you could see that, even when `make RMQ_ERLC_OPTS=` was run, these generated
files had still been produced with the `+deterministic` option, because their
`-file` directives use only basenames:
* `deps/rabbit/src/rabbit_amqp_sql_lexer.erl`
* `deps/rabbit/src/rabbit_amqp_sql_parser.erl`
```
-file("rabbit_amqp_sql_parser.yrl", 0).
-module(rabbit_amqp_sql_parser).
-file("rabbit_amqp_sql_parser.erl", 3).
-export([parse/1, parse_and_scan/1, format_error/1]).
-file("rabbit_amqp_sql_parser.yrl", 122).
```
This commit also ignores those two files, as they will always be
auto-generated.
## What?
This commit allows AMQP 1.0 clients to define SQL-like filter expressions when consuming from streams, enabling server-side message filtering.
RabbitMQ will only dispatch messages that match the provided filter expression, reducing network traffic and client-side processing overhead.
SQL filter expressions are a more powerful alternative to the [AMQP Property Filter Expressions](https://www.rabbitmq.com/blog/2024/12/13/amqp-filter-expressions) introduced in RabbitMQ 4.1.
SQL filter expressions are based on the [JMS message selector syntax](https://jakarta.ee/specifications/messaging/3.1/jakarta-messaging-spec-3.1#message-selector-syntax) and support:
* Comparison operators (`=`, `<>`, `>`, `<`, `>=`, `<=`)
* Logical operators (`AND`, `OR`, `NOT`)
* Arithmetic operators (`+`, `-`, `*`, `/`)
* Special operators (`BETWEEN`, `LIKE`, `IN`, `IS NULL`)
* Access to the properties and application-properties sections
**Examples**
Simple expression:
```sql
header.priority > 4
```
Complex expression:
```sql
order_type IN ('premium', 'express') AND
total_amount BETWEEN 100 AND 5000 AND
(customer_region LIKE 'EU-%' OR customer_region = 'US-CA') AND
properties.creation-time >= 1750772279000 AND
NOT cancelled
```
Like AMQP property filter expressions, SQL filter expressions can be
combined with Bloom filters. Combining both allows for highly customisable
expressions (SQL) and extremely fast evaluation (Bloom filter) if only a
subset of the chunks need to be read from disk.
## Why?
Compared to AMQP property filter expressions, SQL filter expressions provide the following advantage:
* High expressiveness and flexibility in defining the filter
As with AMQP property filter expressions, the following advantages apply:
* No false positives (as is the case for Bloom filters)
* Multiple concurrent clients can attach to the same stream each consuming
only a specific subset of messages while preserving message order.
* Low network overhead as only messages that match the filter are
transferred to the client
* Likewise, lower resource usage (CPU and memory) on clients since they
don't need to deserialise messages that they are not interested in.
* If the SQL expression is simple, even the broker will save resources
because it doesn't need to serialise and send messages that the client
isn't interested in.
## How?
### JMS Message Selector Syntax vs. AMQP Extension Spec
The AMQP Filter Expressions Version 1.0 extension Working Draft 09 defines SQL Filter Expressions in Section 6.
This spec differs from the JMS message selector spec. Neither is a subset of the other. We can choose to follow either.
However, I think it makes most sense to follow the JMS spec because:
* The JMS spec is better defined
* The JMS spec is far more widespread than the AMQP Working Draft spec. (A slight variation of the AMQP Working
Draft is used by Azure Service Bus: https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-messaging-sql-filter)
* The JMS spec is mostly simpler (partly because it matches only simple types)
* This will allow for a single SQL parser in RabbitMQ for both AMQP clients consuming from a stream and possibly in future for JMS clients consuming from queues or topics.
<details>
<summary>AMQP extension spec vs JMS spec</summary>

* **AMQP**: `!=` is a synonym for `<>`.
  **JMS**: defines only `<>`.
  **Conclusion**: `<>` is sufficient.
* **AMQP**: Strings can be tested for "greater than": "both operands are of type string or of type symbol (any combination is permitted) and the lexicographical rank of the left operand is greater than the lexicographical rank of the right operand".
  **JMS**: "String and Boolean comparison is restricted to = and <>."
  **Conclusion**: The JMS behaviour is sufficient.
* **AMQP**: `IN <set-expression>`, where the set-expression can contain non-string literals.
  **JMS**: The set-expression can contain only string literals.
  **Conclusion**: The JMS behaviour is sufficient.
* **AMQP**: `EXISTS` predicate to check for composite types.
  **JMS**: Only simple types.
  **Conclusion**: We want to match only on simple types, i.e. allow matching only against values in the application-properties and properties sections and the priority field of the header section.
* **AMQP**: Modulo operator `%`.
  **Conclusion**: JMS doesn't define the modulo operator. Let's start without it. We can decide in future to add support since it can actually be useful, for example for two receivers that want to process every other message.
* **AMQP**: The `+` operator can concatenate string and symbol values.
  **Conclusion**: Such string concatenation isn't defined in JMS. We don't need it.
* **AMQP**: Defines `NAN` and `INF`.
  **JMS**: "Approximate literals use the Java floating-point literal syntax." Examples include "7.".
  **Conclusion**: We can go with the JMS spec given that it needs to be implemented anyway for JMS support. Scientific notation is supported in both the AMQP spec and the JMS spec.
* **AMQP**: String literals can be surrounded by single or double quotation marks.
  **JMS**: A string literal is enclosed in single quotes.
  **Conclusion**: Supporting single quotes is good enough.
* **AMQP**: "A binary constant is a string of pairs of hexadecimal digits prefixed by '0x' that are not enclosed in quotation marks".
  **Conclusion**: JMS doesn't support binary constants. We can start without them. Matching against binary values is still supported if these binary values can be expressed as UTF-8 strings.
* **AMQP**: Functions `DATE`, `UTC`, `SUBSTRING`, `LOWER`, `UPPER`, `LEFT`, `RIGHT`, plus vendor-specific functions.
  **Conclusion**: JMS doesn't define such functions. We can start without them.
* **AMQP**: `<field><array_element_reference>` and `<field>'.'<composite_type_reference>` to access map and array elements.
  **Conclusion**: Same as above: we want to match only on simple types, i.e. allow matching only against values in the application-properties and properties sections and the priority field of the header section.
* **AMQP**: Allows for delimited identifiers.
  **JMS**: Java identifier part characters.
  **Conclusion**: We can go with Java identifiers, extending the allowed characters with `.` and `-` to reference field names such as `properties.group-id`.
* **JMS**: `BETWEEN` operator.
  **Conclusion**: The BETWEEN operator isn't supported in the AMQP spec. Let's support it as a convenience since it's already available in JMS.
</details>
### Filter Name
The client provides a filter with name `sql-filter` instead of name
`jms-selector` to allow differentiating between JMS clients and other
native AMQP 1.0 clients using SQL expressions. This way, we can also
optionally extend the SQL grammar in future.
### Identifiers
JMS message selectors allow identifiers to contain some well known JMS headers that match to well known AMQP fields, for example:
```erl
jms_header_to_amqp_field_name(<<"JMSDeliveryMode">>) -> durable;
jms_header_to_amqp_field_name(<<"JMSPriority">>) -> priority;
jms_header_to_amqp_field_name(<<"JMSMessageID">>) -> message_id;
jms_header_to_amqp_field_name(<<"JMSTimestamp">>) -> creation_time;
jms_header_to_amqp_field_name(<<"JMSCorrelationID">>) -> correlation_id;
jms_header_to_amqp_field_name(<<"JMSType">>) -> subject;
%% amqp-bindmap-jms-v1.0-wd10 § 3.2.2 JMS-defined 'JMSX' Properties
jms_header_to_amqp_field_name(<<"JMSXUserID">>) -> user_id;
jms_header_to_amqp_field_name(<<"JMSXGroupID">>) -> group_id;
jms_header_to_amqp_field_name(<<"JMSXGroupSeq">>) -> group_sequence;
```
This commit does similar matching for `header.`- and `properties.`-prefixed identifiers to field names in the AMQP properties section.
The only field in the AMQP header section that can be filtered on is `priority`, i.e. the identifier `header.priority`.
By default, as described in the AMQP extension spec, if an identifier is not prefixed, it refers to a key in the application-properties section.
Hence, all identifiers prefixed with `header.` and `properties.` have special meanings and MUST be avoided by applications unless they want to refer to those specific fields.
Azure Service Bus uses the `sys.` and `user.` prefixes for well known field names and arbitrary application-provided keys, respectively.
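For illustration, the prefix handling could be sketched like this (function name and return shapes hypothetical; the actual code maps `properties.`-prefixed identifiers to atoms, as noted below):
```erl
%% Hypothetical sketch: prefixed identifiers refer to well-known sections,
%% unprefixed identifiers refer to keys in the application-properties section.
identifier_to_field(<<"header.priority">>) ->
    {header, priority};
identifier_to_field(<<"properties.", FieldName/binary>>) ->
    {properties, FieldName};
identifier_to_field(Key) ->
    {application_properties, Key}.
```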
### SQL lexer, parser and evaluator
This commit implements the SQL lexer and parser in files `rabbit_jms_selector_lexer.xrl` and
`rabbit_jms_selector_parser.yrl`, respectively.
Advantages:
* Both the definitions in the lexer and the grammar in the parser are defined **declaratively**.
* In total, the entire SQL syntax and grammar is defined in only 240 lines.
* Therefore, lexer and parser are simple to maintain.
The idea of this commit is to use the same lexer and parser for native AMQP clients consuming
from streams (this commit) as for JMS clients (in the future).
All differences between native AMQP clients and JMS clients are then handled after
the Abstract Syntax Tree (AST) has been created by the parser.
For example, this commit transforms the AST specifically for native AMQP clients
by mapping `properties.` prefixed identifiers (field names) to atoms.
A JMS client's mapping from `JMS` prefixed headers can transform the AST
differently.
Likewise, this commit transforms the AST to compile a regex for complex LIKE
expressions when consuming from a stream while a future version
might not want to compile a regex when consuming from quorum queues.
Module `rabbit_jms_ast` provides such AST helper functions.
The lexer and parser are not performance critical as this work happens
upon receivers attaching to the stream.
The evaluator however is performance critical as message evaluation
happens on the hot path.
### LIKE expressions
The evaluator has been optimised to only compile a regex when necessary.
If the LIKE expression-value contains no wildcard or only a single `%`
wildcard, Erlang pattern matching is used as it's more efficient.
Since `_` can match any UTF-8 character, a regex will be compiled with
the `[unicode]` option.
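A hedged sketch of the idea (helper names hypothetical, not the actual evaluator code):
```erl
%% 'app-%' has a single trailing '%', so a plain binary prefix check
%% suffices and no regex is compiled:
like_prefix(Subject, Prefix) when is_binary(Subject), is_binary(Prefix) ->
    Size = byte_size(Prefix),
    byte_size(Subject) >= Size andalso
        binary:part(Subject, 0, Size) =:= Prefix.

%% A pattern containing '_' (match exactly one character) falls back to
%% a regex compiled with the unicode option, e.g. for LIKE 'a_c':
compile_like() ->
    {ok, MP} = re:compile(<<"^a.c$">>, [unicode]),
    MP.
```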
### Filter errors
Any error upon a receiver attaching to a stream causes the filter not to
become active. RabbitMQ will log a warning describing the reason and
will omit the named filter in its attach reply frame. The client lib is
responsible for detaching the link as explained in the AMQP spec:
> The receiving endpoint sets its desired filter, the sending endpoint sets the filter actually in place
(including any filters defaulted at the node). The receiving endpoint MUST check that the filter in
place meets its needs and take responsibility for detaching if it does not.
This applies to lexer and parser errors.
Errors during message evaluation will result in an unknown value.
Conditional operators on unknown are described in the JMS spec. If the
entire selector condition is unknown, the message does not match, and
will therefore not be delivered to the client.
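For illustration, the JMS truth table for `AND` over three-valued logic could be sketched as follows (function name hypothetical):
```erl
%% false short-circuits; unknown propagates unless the other side is false.
sql_and(false, _)   -> false;
sql_and(_, false)   -> false;
sql_and(true, true) -> true;
sql_and(_, _)       -> unknown.
```
A message is delivered only when the whole expression evaluates to `true`.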
## Clients
Support for passing the SQL expression from app to broker is provided by
the Java client in https://github.com/rabbitmq/rabbitmq-amqp-java-client/pull/216
A boolean status in the stream SAC coordinator is not enough to follow
the evolution of a consumer. For example a former active consumer that
is stepping down can go down before another consumer in the group is
activated, letting the coordinator expect an activation request that
will never arrive, leaving the group without any active consumer.
This commit introduces three statuses: active (formerly "true"), waiting
(formerly "false"), and deactivating. The coordinator will now know when
a deactivating consumer goes down and will trigger a rebalancing to
avoid a stuck group.
This commit also introduces a status related to the connectivity state
of a consumer. The possible values are: connected, disconnected, and
presumed_down. Consumers are connected by default; they can become
disconnected if the coordinator receives a down event with a
noconnection reason, meaning the node of the consumer has been
disconnected from the other nodes. Consumers can become connected again
when their node rejoins the other nodes.
Disconnected consumers are still considered part of a group, as they are
expected to come back at some point. For example there is no rebalancing
in a group if the active consumer got disconnected.
The coordinator sets a timer when a disconnection occurs. When the timer
expires, corresponding disconnected consumers pass into the "presumed
down" state. At this point they are no longer considered part of their
respective group and are excluded from rebalancing decisions. They are expected
to get removed from the group by the appropriate down event of a
monitor.
So the consumer status is now a tuple, e.g. {connected, active}. Note
this is an implementation detail: only the stream SAC coordinator deals with
the status of stream SAC consumers.
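A minimal `-type` sketch of that tuple, derived from the states described above (type names hypothetical):
```erl
-type connectivity()    :: connected | disconnected | presumed_down.
-type activity()        :: active | waiting | deactivating.
-type consumer_status() :: {connectivity(), activity()}.
```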
Two new configuration entries are introduced:
* rabbit.stream_sac_disconnected_timeout: this is the duration in ms of the
disconnected-to-forgotten timer.
* rabbit.stream_cmd_timeout: this is the timeout in ms to apply RA commands
in the coordinator. It used to be a fixed value of 30 seconds. The
default value is still the same. The setting has been introduced to
make integration tests faster.
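An advanced.config sketch of the two new entries (values illustrative, assuming they are plain `rabbit` application env keys):
```erl
%% Both durations are in milliseconds.
[{rabbit, [{stream_sac_disconnected_timeout, 60000},
           {stream_cmd_timeout, 30000}]}].
```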
Fixes #14070
## What?
PR #13971 added a property test that applies the same quorum queue Raft
command on different quorum queue members on different Erlang nodes
ensuring that the state machine ends up in exactly the same state.
However, the different Erlang nodes all run the **same** Erlang/OTP version.
This commit adds another property test where the different Erlang nodes
run **different** Erlang/OTP versions.
## Why?
This test allows spotting any non-determinism that could occur when
running quorum queue members in a mixed version cluster, where mixed
version means in our context different Erlang/OTP versions.
## How?
CI currently runs tests with Erlang 27.
This commit starts an Erlang 26 node in docker, specifically for the
`rabbit_fifo_prop_SUITE`.
Test case `two_nodes_different_otp_version` running Erlang 27 then transfers
a few Erlang modules (e.g. module `rabbit_fifo`) to the Erlang 26 node.
The test case then runs the Ra commands on its own node in Erlang 27 and
on the Erlang 26 node in Docker.
By default, this test case is skipped locally.
However, to run this test case locally, simply start an Erlang node as
follows:
```
erl -sname rabbit_fifo_prop@localhost
```
As a follow-up to my GChat thread about removing the default logger handler to clean up CT stdout, I was looking at
injecting a logger config with an undefined default handler into ct_run. It is possible, but it breaks cth_styledout: no
nice green things whatsoever. Then I found rabbit_ct_hook, which calls redirect_logger_to_ct_logs, which in turn
calls logger:remove_handler(default), apparently with zero effect! To cut a long story short, it turned out rabbit_ct_hook
must run before cth_styledout for the remove_handler line to have any effect.
This avoids using Mix while compiling, which simplifies
a number of things and lets us do further build improvements
later on.
Elixir is only enabled from within rabbitmq_cli currently.
Eunit is disabled since there are only Elixir tests.
Dialyzer will force-enable Elixir in order to process
Elixir-compiled beam files.
This commit also includes a few changes that are
related:
* The Erlang distribution will now be started for parallel-ct
* Many unnecessary PROJECT_MOD lines have been removed
* `eunit_formatters` has been removed, it provides little value
* The new `maybe_flock` Erlang.mk function is used where possible
* Build test deps when testing rabbitmq_cli (Mix won't do it anymore)
* rabbitmq_ct_helpers now use the early plugins to have Dialyzer
properly set up
* RMQ-1263: Check if queue is protected from deletion inside rabbit_amqqueue:with_delete
The delayed exchange automatically manages its associated delayed queue. We don't want users to delete it accidentally.
If the queue is indeed protected, its removal can be forced by calling with
`?INTERNAL_USER` as the ActingUser.
* RMQ-1263: Correct a type spec of amqqueue:internal_owner/1
* RMQ-1263: Add protected queues test
---------
Co-authored-by: Iliia Khaprov <iliia.khaprov@broadcom.net>
Co-authored-by: Michael Klishin <klishinm@vmware.com>
(cherry picked from commit 97f44adfad6d0d98feb1c3a47de76e72694c19e0)
[Why]
This testsuite is very unstable and it is difficult to debug while it is
part of a `parallel-ct` group. It also forced us to re-run the entire
`parallel-ct` group just to retry that one testsuite.
Since https://github.com/rabbitmq/rabbitmq-server/pull/13242 updated
Cowlib to v2.14.0, this commit deletes rabbit_uri as written in the
comments of rabbit_uri.erl:
```
This file is a partial copy of
https://github.com/ninenines/cowlib/blob/optimise-urldecode/src/cow_uri.erl
We use this copy because:
1. uri_string:unquote/1 is lax: It doesn't validate that characters that are
required to be percent encoded are indeed percent encoded. In RabbitMQ,
we want to enforce that proper percent encoding is done by AMQP clients.
2. uri_string:unquote/1 and cow_uri:urldecode/1 in cowlib v2.13.0 are both
slow because they allocate a new binary for the common case where no
character was percent encoded.
When a new cowlib version is released, we should make app rabbit depend on
app cowlib calling cow_uri:urldecode/1 and delete this file (rabbit_uri.erl).
```
This commit contains the following changes:
1. Simplify .NET suite
2. Simplify Java package naming
3. Extract JMS tests into separate suite. This way, it's easier to run,
debug, and add new tests compared to the previous suite which mixed
.NET tests with JMS tests.
4. Add tests for different JMS message types
msg_store_io_batch_size is no longer used
msg_store_credit_disc_bound appears to be used in the code, but I don't
see any impact of that value on the performance. It should be properly
investigated and either removed completely or fixed, because there's
hardly any point in warning about the values configured
(plus, this setting is hopefully almost never used anyway)
The credit_flow between publishing AMQP 0.9.1 channel (or MQTT
connection) and (non-mirrored) classic queue processes was
unintentionally removed in 4.0 together with anything else related to
CQ mirroring.
By default we restore the 3.x behaviour for non-mirrored classic
queues. It is possible to disable flow control (the earlier 4.0.x
behaviour) with the new env `classic_queue_flow_control`. In 3.x this
was possible with the config `mirroring_flow_control`.
(cherry picked from commit d65bd7d07a)
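A sketch of disabling the restored flow control via advanced.config (assuming a boolean `rabbit` application env named as in the text above):
```erl
%% Restores the 4.0.x behaviour: no credit_flow towards classic queues.
[{rabbit, [{classic_queue_flow_control, false}]}].
```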
[Why]
We pin a version of Horus even if we don't use it directly (it is a
dependency of Khepri). But currently, we can't update Khepri while still
needing the fix in Horus 0.3.1.
Horus 0.3.1 works around a crash in `cover` that mostly affects CI for
now.
This pinning will have to go away with the next update of Khepri.
Instead of every time we run Make for these applications.
This means that during development we are free to modify
these values or create new test suites without having to
worry about the check. If we forget to add the test
suites to PARALLEL_CT, the workflow will tell us.
* Support AMQP filter expressions
## What?
This PR implements the following property filter expressions for AMQP clients
consuming from streams as defined in
[AMQP Filter Expressions Version 1.0 Working Draft 09](https://groups.oasis-open.org/higherlogic/ws/public/document?document_id=66227):
* properties filters [section 4.2.4]
* application-properties filters [section 4.2.5]
String prefix and suffix matching is also supported.
This PR also fixes a bug where RabbitMQ would accept wrong filters.
Specifically, prior to this PR the values of the filter-set's map were
allowed to be symbols. However, "every value MUST be either null or of a
described type which provides the archetype filter."
## Why?
This feature adds the ability to RabbitMQ to have multiple concurrent clients
each consuming only a subset of messages while maintaining message order.
This feature also reduces network traffic between RabbitMQ and clients by
only dispatching those messages that the clients are actually interested in.
Note that AMQP filter expressions are more fine grained than the [bloom filter based
stream filtering](https://www.rabbitmq.com/blog/2023/10/16/stream-filtering) because
* they do not suffer false positives
* the unit of filtering is per-message instead of per-chunk
* matching can be performed on **multiple** values in the properties and
application-properties sections
* prefix and suffix matching on the actual values is supported.
Both AMQP filter expressions and Bloom filters can be used together.
## How?
If a filter isn't valid, RabbitMQ ignores the filter. RabbitMQ only
replies with filters it actually supports and validated successfully to
comply with:
"The receiving endpoint sets its desired filter, the sending endpoint
[RabbitMQ] sets the filter actually in place (including any filters defaulted at
the node)."
* Delete streams test case
The test suite constructed a wrong filter-set.
Specifically the value of the filter-set didn't use a described type as
mandated by the spec.
Using https://azure.github.io/amqpnetlite/api/Amqp.Types.DescribedValue.html
throws errors that the descriptor can't be encoded. Given that this code
path is already tested via the amqp_filtex_SUITE, this F# test is
therefore deleted.
* Re-introduce the AMQP filter-set bug
Since clients might rely on the wrong filter-set value type, we keep
the buggy behaviour behind a deprecated feature flag and will gradually
remove support for it.
* Revert "Delete streams test case"
This reverts commit c95cfeaef7.
The problem comes from `ct_master` which doesn't tell us
in the return value whether the tests succeeded. In order
to get that information a CT hook was created. But then
we run into another problem: despite its documentation
claiming otherwise, `ct_master` does not handle `ct_hooks`
instructions in the test spec.
So for the time being we fork `ct_master` into a new
`ct_master_fork` module and insert our hook directly
in the code. Later on we will submit patches to OTP.
All CT logs will now be under <toplevel>/logs. An improved
test workflow would be to always keep the logs/all_runs.html
page open in the browser and refresh it whenever tests are
run in any of the rabbit applications.
This is a proof of concept that mostly works but is missing
some tests, such as rabbitmq_mqtt or rabbitmq_cli. It also
doesn't apply to mixed version testing yet.
Otherwise some plugins can't build if we try to run tests
directly after checkout. This is because the plugins
depend on osiris as well as rabbit, but there is no
dep_osiris defined in the plugin itself.
## What?
1. Support `handle-max` field in the AMQP 1.0 `begin` frame
2. Add a new setting `link_max_per_session` which defaults to 256.
3. Rename `session_max` to `session_max_per_connection`
## Why?
1. Operators might want to limit the number of links per session. A
similar setting `consumer_max_per_channel` exists for AMQP 0.9.1.
2. We should use RabbitMQ 4.0 as an opportunity to set a sensible
default as to how many links can be active on a given session simultaneously.
The session code does iterate over every link in some scenarios (e.g.
queue was deleted). At some point, it's better to just open a second
session instead of attaching hundreds or thousands of links to a single session.
A default `link_max_per_session` of 256 should be more than enough given
that `session_max_per_connection` is 64. So, the defaults allow
`256 * 64 = 16,384` links to be active on an AMQP 1.0 connection.
(Operators might want to lower both defaults.)
3. The name is clearer given that we might introduce
`session_max_per_node` in the future since
`channel_max_per_node` exists for AMQP 0.9.1.
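For illustration, lowering both defaults might look like this in advanced.config (assuming the setting names map directly to `rabbit` application env keys):
```erl
%% Illustrative values only.
[{rabbit, [{session_max_per_connection, 8},
           {link_max_per_session, 32}]}].
```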
### Additional Context
> Link handles MAY be reused once a link is closed for both send and receive.
> To make it easier to monitor AMQP link attach frames, it is RECOMMENDED that
> implementations always assign the lowest available handle to this field.
* Enforce AMQP 1.0 channel-max
Enforce AMQP 1.0 field `channel-max` in the `open` frame by introducing
a new more user friendly setting called `session_max`:
> The channel-max value is the highest channel number that can be used on the connection.
> This value plus one is the maximum number of sessions that can be simultaneously active on the connection.
We set the default value of `session_max` to 64 such that, by
default, RabbitMQ 4.0 allows at most 64 AMQP 1.0 sessions per AMQP 1.0 connection.
More than 64 AMQP 1.0 sessions per connection makes little sense.
See also https://www.rabbitmq.com/blog/2024/09/02/amqp-flow-control#session
Limiting the maximum number of sessions per connection can be useful to
protect against
* applications that accidentally open new sessions without ending old sessions
(session leaks)
* too many metrics being exposed, for example in the future via the
"/metrics/per-object" Prometheus endpoint with timeseries per session
being emitted.
This commit does not make use of the existing `channel_max` setting
because:
1. Given that `channel_max = 0` means "no limit", there is no way for an
operator to limit the number of sessions per connections to 1.
2. Operators might want to set different limits for maximum number of
AMQP 0.9.1 channels and maximum number of AMQP 1.0 sessions.
3. The default of `channel_max` is very high: It allows using more than
2,000 AMQP 0.9.1 channels per connection. Lowering this default might
break existing AMQP 0.9.1 applications.
This commit also fixes a bug in the AMQP 1.0 Erlang client which, prior
to this commit, used channel number 1 for the first session. That's wrong
if a broker allows maximum 1 session by replying with `channel-max = 0`
in the `open` frame. Additionally, the spec recommends:
> To make it easier to monitor AMQP sessions, it is RECOMMENDED that implementations always assign the lowest available unused channel number.
Note that in AMQP 0.9.1, channel number 0 has a special meaning:
> The channel number is 0 for all frames which are global to the connection and 1-65535 for frames that
refer to specific channels.
* Apply PR feedback
Because `ct_master` is yet another Erlang node, and it is used
to run multiple CT nodes, meaning it is in a cluster of CT
nodes, the tests that change the net_ticktime could not
work properly anymore. This is because net_ticktime must
be the same value across the cluster.
The same value had to be set for all tests in order to solve
this. This is why it was changed to 5s across the board. The
lower net_ticktime was used in most places to speed up tests
that must deal with cluster failures, so that value is good
enough for these cases.
One test in amqp_client was using the net_ticktime to test
the behavior of the direct connection timeout with varying
net_ticktime configurations. The test now mocks the
`net_kernel:get_net_ticktime()` function to achieve the
same result.
This has no real impact on performance[1] but should
make it clear which application can run the broker
and/or publish to Hex.pm. In particular, applications
that we can't run the broker from will now give up
early if we try to.
Note that while the broker can't normally run from the
amqp_client application's directory, it can run from
tests and some of the tests start the broker.
[1] on my machine
No real need to have two files, especially since it contains
only a few variable definitions. Plan is to only keep
separate files for larger features such as dist or run.
The default of 0.4 was very conservative even when it was
set years ago. Since then:
- we moved to CQv2, which has much more predictable memory usage than (non-lazy) CQv1 did
- we removed CQ mirroring which caused large sudden memory spikes in some situations
- we removed the option to store message payload in memory in quorum queues
For the past two years or so, we've been running all our internal tests and benchmarks
using the value of 0.8 with no OOMkills at all (note: we do this on
Kubernetes where the Cluster Operator overrides the available memory,
leaving some additional headroom, but effectively we are still using more than
0.6 of the memory).
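For context, a sketch of setting the discussed value via advanced.config (assuming this refers to the classic `vm_memory_high_watermark` setting):
```erl
%% Relative watermark: fraction of available memory.
[{rabbit, [{vm_memory_high_watermark, 0.8}]}].
```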