This is mostly the same as the `messages_total` property test but checks
that the Raft indexes in `ra_indexes` are exactly the union of the
indexes checked out by all consumers and any indexes in the `returns`
queue. This is the intended state of `ra_indexes`, and a failure of this
condition could cause bugs that would prevent snapshotting.
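The checked invariant can be sketched as a postcondition (the variable names and the `rabbit_fifo_index:to_list/1` accessor below are illustrative, not the exact record fields used by the test):

```erlang
%% Sketch: ra_indexes must equal the union of all consumer checkouts
%% and the Raft indexes of messages sitting in the returns queue.
Checked = lists:append([maps:keys(Ch)
                        || #consumer{checked_out = Ch} <- maps:values(Consumers)]),
Returns = [Idx || #msg{raft_idx = Idx} <- queue:to_list(ReturnsQ)],
Expected = lists:usort(Checked ++ Returns),
Expected =:= lists:sort(rabbit_fifo_index:to_list(RaIndexes)).
```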
The AMQP spec defines:
```
<field name="durable" type="boolean" default="false"/>
```
RabbitMQ 4.0 and 4.1 interpret the durable field as true if not set.
The idea was to favour safety over performance.
This complies with the AMQP spec because the spec allows other target or
node specific defaults for the durable field:
> If the header section is omitted the receiver MUST assume the appropriate
> default values (or the meaning implied by no value being set) for the fields
> within the header unless other target or node specific defaults have otherwise
> been set.
However, some client libraries completely omit the header section if the
app explicitly sets durable=false. This complies with the spec, but it
means that RabbitMQ cannot differentiate between "client app forgot to set
the durable field" vs "client lib opted in for an optimisation omitting
the header section".
This is problematic with JMS message selectors where JMS apps can filter
on JMSDeliveryMode. To be able to correctly filter on JMSDeliveryMode,
RabbitMQ needs to know whether the JMS app sent the message as
PERSISTENT or NON_PERSISTENT.
Rather than relying on client libs to always send the header section
including the durable field, this commit makes RabbitMQ comply with the
default value for durable in the AMQP spec.
Some client lib maintainers agreed to send the header section, while
other maintainers refused to do so:
https://github.com/Azure/go-amqp/issues/330
https://issues.apache.org/jira/browse/QPIDJMS-608
The AMQP spec was likely designed to allow omitting the header section
when performance is important, as is the case with durable=false.
Omitting the header section saves a few bytes per message on the wire and
some marshalling and unmarshalling overhead on both client and server.
Therefore, it's better to push the "safe by default" behaviour from the broker
back to the client libs. Client libs should send messages as durable by
default unless the client app explicitly opts in to send messages as
non-durable. This is also what JMS does: By default JMS apps send
messages as PERSISTENT:
> The message producer's default delivery mode is PERSISTENT.
Therefore, this commit also makes the AMQP Erlang client send messages as
durable, by default.
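With this change, an app using the Erlang AMQP 1.0 client that genuinely wants non-durable messages has to opt out explicitly, roughly like this (a sketch using `amqp10_msg:set_headers/2`; the sender setup is assumed):

```erlang
Msg0 = amqp10_msg:new(<<"tag">>, <<"payload">>),
%% From this commit on, messages are sent durable unless the app opts out:
NonDurable = amqp10_msg:set_headers(#{durable => false}, Msg0),
ok = amqp10_client:send_msg(Sender, NonDurable).
```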
This commit will apply to RabbitMQ 4.2.
It's arguably not a breaking change because in RabbitMQ, message durability
is determined more by the queue type the message is sent to than by the
durable field of the message:
* Quorum queues and streams store messages durably (fsync or replicate)
  no matter what the durable field is
* MQTT QoS 0 queues hold messages in memory no matter what the
durable field is
* Classic queues do not fsync even if the durable field is set to true
In addition, the RabbitMQ AMQP Java library introduced in RabbitMQ 4.0 sends messages with
durable=true:
53e3dd6abb/src/main/java/com/rabbitmq/client/amqp/impl/AmqpPublisher.java (L91)
The tests for selecting messages by JMSDeliveryMode relying on the
behaviour in this commit can be found on the `jms` branch.
https://github.com/erlang/otp/issues/9739
In OTP28+, splitting an empty string returns an empty list, not an empty
string (the input).
Additionally, the `street-address` macro was removed in OTP 28, so it is
replaced with the value it used to expand to.
Lastly, rabbitmq_auth_backend_oauth2 has an MQTT test, so rabbitmq_mqtt
is added to TEST_DEPS.
revive_local_queue_replicas -> revive_local_queue_members
can_redeliver converted from callback to capabilities key
rebalance_moduled converted from callback to capabilities key
Building from source using this command:
```
make RMQ_ERLC_OPTS= FULL=1
```
... then starting RabbitMQ via `make run-broker`, allows re-compilation
from the erl shell:
```
1> c(rabbit).
Recompiling /home/lbakken/development/rabbitmq/rabbitmq-server/deps/rabbit/src/rabbit.erl
{ok,rabbit}
```
When `+deterministic` is passed to `erlc`, the `compile` data in each
module's information is missing the source path for the module.
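The difference is visible in the module's compile info: without `+deterministic`, the `source` entry carries the absolute path the shell needs to recompile (the path below is illustrative, taken from the session above):

```erlang
1> proplists:get_value(source, rabbit:module_info(compile)).
"/home/lbakken/development/rabbitmq/rabbitmq-server/deps/rabbit/src/rabbit.erl"
```

With `+deterministic`, `proplists:get_value(source, ...)` returns `undefined` here, so `c(rabbit).` cannot locate the source file.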
Follow-up to #3442
`sets` v2 were not yet available when this module was written. Compared
to `gb_sets`, v2 `sets` are faster and more memory efficient:
```
> List = lists:seq(1, 50_000).
> tprof:profile(sets, from_list, [List, [{version, 2}]], #{type => call_memory}).
****** Process <0.94.0>  --  100.00% of total ***
FUNCTION                   CALLS   WORDS  PER CALL  [     %]
maps:from_keys/2               1  184335 184335.00  [100.00]
                                  184335            [ 100.0]
ok
> tprof:profile(gb_sets, from_list, [List], #{type => call_memory}).
****** Process <0.97.0>  --  100.00% of total ***
FUNCTION                   CALLS   WORDS  PER CALL  [     %]
lists:rumergel/3               1       2      2.00  [  0.00]
gb_sets:from_ordset/1          1       3      3.00  [  0.00]
lists:reverse/2                1  100000 100000.00  [ 16.76]
lists:usplit_1/5           49999  100002      2.00  [ 16.76]
gb_sets:balance_list_1/2   65535  396605      6.05  [ 66.48]
                                  596612            [ 100.0]
```
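Switching is mostly a matter of constructing the set with the version option; the rest of the `sets` API is unchanged. For example:

```erlang
%% v2 sets are map-based: pass {version, 2} at construction time.
S0 = sets:new([{version, 2}]),
S1 = sets:add_element(42, S0),
true = sets:is_element(42, S1),
S2 = sets:from_list([1, 2, 3], [{version, 2}]),
3 = sets:size(S2).
```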
The `rabbit_mgmt_gc` gen_server performs garbage collections
periodically. When doing so it can create potentially fairly large
terms, for example by creating a set out of
`rabbit_exchange:list_names/0`. With many exchanges, for example, the
process memory usage can climb steadily especially when the management
agent is mostly idle since `rabbit_mgmt_gc` won't hit enough reductions
to cause a full-sweep GC on itself. Since the process is only active
periodically (once every 2min by default) we can hibernate it to GC the
terms it created.
This can save a medium amount of memory in situations where there are
very many pieces of metadata (exchanges, vhosts, queues, etc.). For
example on an idle single-node broker with 50k exchanges,
`rabbit_mgmt_gc` can hover around 50MB before being naturally GC'd. With
this patch the process memory usage stays consistent between `start_gc`
timer messages at around 1KB.
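Hibernation is requested by returning `hibernate` from the timer-message callback, along these lines (a sketch, not the exact diff; `gc/0` and `start_timer/1` stand in for the module's internals):

```erlang
handle_info(start_gc, State) ->
    gc(),                %% builds potentially large temporary terms
    start_timer(State),
    %% Hibernate so a full-sweep GC reclaims the garbage just created;
    %% the process is idle until the next timer fires anyway.
    {noreply, State, hibernate}.
```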
The `below_node_connection_limit_test` and `ready_to_serve_clients_test`
cases could possibly flake because `is_quorum_critical_single_node_test`
uses the channel manager in `rabbit_ct_client_helpers` to open a
connection. This can cause the line
```
true = lists:all(fun(E) -> is_pid(E) end, Connections),
```
to fail to match. The last connection could have been rejected if the
channel manager kept its connection open, so instead of being a pid the
element would have been `{error, not_allowed}`.
With `rabbit_ct_client_helpers:close_channels_and_connection/2` we can
reset the channel manager and force it to close its connection.
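A minimal sketch of the fix in the affected test cases (the node index argument `0` is illustrative):

```erlang
%% Close the channel manager's cached connection so it cannot consume
%% the last slot under the connection limit.
rabbit_ct_client_helpers:close_channels_and_connection(Config, 0),
```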