As requested in https://github.com/rabbitmq/rabbitmq-server/discussions/6331#discussioncomment-5796154
include in the MQTT connection closed event all infos that are also emitted
in the MQTT connection created event.
This ensures that infos such as the MQTT client ID are part of the connection
closed event, making it easy for the user to correlate the two event types.
Note that the MQTT plugin emits connection created and connection closed events only if
the CONNECT packet was successfully processed, i.e. authentication was successful.
Remove the disconnected_at property because it was never used.
rabbit_event already adds a timestamp to any event.
Up to 3.11.x an MQTT client ID is tracked in Ra
as a list of bytes as returned by binary_to_list/1 in
48467d6e12/deps/rabbitmq_mqtt/src/rabbit_mqtt_frame.erl (L137)
This has two downsides:
1. Lists consume more memory than binaries when tracking many clients (see the sketch after this list).
2. It violates the MQTT spec which states
"The ClientId MUST be a UTF-8 encoded string as defined in Section 1.5.3 [MQTT-3.1.3-4]." [v4 3.1.3.1]
Therefore, the original idea was to always store MQTT client IDs as
binaries starting with Native MQTT in 3.12.
However, this leads to client ID tracking misbehaving in mixed version
clusters since new nodes would register client IDs as binaries and old
nodes would register client IDs as lists. This means that a client
registering on a new node with the same client ID as a connection to the
old node did not terminate the connection on the old node.
Therefore, for backwards compatibility, we leave the client ID as a list of bytes
in the Ra machine state because the feature flag delete_ra_cluster_mqtt_node
introduced in v3.12 will delete the Ra cluster anyway and
the new client ID tracking via pg local will store client IDs as
binaries.
An interesting side note learned here is that the compiled module
rabbit_mqtt_collector must not change. This commit only modifies
function specs. However, as soon as the compiled code changes, the
module becomes a new version. The new version causes the anonymous Ra query
function to fail in mixed version clusters: when the old node does a
ra:leader_query where the leader is on the new node, the query function
fails on the new node with `badfun` because the new node does not have
the same module version. For more context, read:
https://web.archive.org/web/20181017104411/http://www.javalimit.com/2010/05/passing-funs-to-other-erlang-nodes.html
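A minimal sketch of the failure mode (module and function names are hypothetical): an anonymous fun carries the identity of its defining module, so applying it on a node where that module was compiled differently raises `badfun`.
```
-module(fun_version_demo).
-export([query_fun/0, run_remote/1]).

query_fun() ->
    %% The fun embeds fun_version_demo's module checksum.
    fun(State) -> maps:get(client_ids, State, []) end.

run_remote(Node) ->
    F = query_fun(),
    %% If Node has a different version of fun_version_demo loaded,
    %% this returns {badrpc, {'EXIT', {{badfun, F}, _}}} instead of [].
    rpc:call(Node, erlang, apply, [F, [#{client_ids => []}]]).
```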
What:
Delete bindings of exclusive durable queues after an
unclean shutdown.
Why:
Native MQTT in 3.12 uses only durable queues to ease transition to
Khepri. Since auto-delete queues are not deleted after an unclean
shutdown, Native MQTT uses exclusive (instead of auto-delete) queues
for clean sessions.
While this bug is not specific to Native MQTT, it is most relevant for the
upcoming 3.12 release since exclusive durable queues are otherwise rarely used.
How:
During queue recovery, not all bindings have been recovered yet.
Therefore, if an exclusive durable queue needs to be deleted during queue
recovery, only durable bindings should be queried.
Queue types need to make sure that their exclusive durable queues, including
their bindings, are deleted before binding recovery starts.
Otherwise, binding deletion and binding recovery get interleaved, leading to
topic bindings being created and left behind.
Therefore, a classic queue process replies to the recovery process after it
deleted its queue record and associated bindings from the database.
This is useful for understanding whether a deleted queue matched any
policies, given the more selective policies introduced in #7601.
This does not apply to the bulk deletion of transient queues on node down.
Deploying a 5 node RabbitMQ cluster with rabbitmq_mqtt plugin enabled
using the cluster-operator with rabbitmq image
rabbitmq:3.12.0-beta.1-management often fails with the following
error:
```
Feature flags: failed to enable `delete_ra_cluster_mqtt_node`:
{error,
{no_more_servers_to_try,
[{error,
noproc},
{error,
noproc},
{timeout,
{mqtt_node,
'rabbit@mqtt-rabbit-3-12-server-2.mqtt-rabbit-3-12-nodes.default'}}]}}
```
During rabbitmq_mqtt plugin start, the plugin decides whether it should
create a Ra cluster:
If feature flag delete_ra_cluster_mqtt_node is enabled, no Ra cluster
gets created.
If this feature flag is disabled, a Ra cluster is created.
Even though all feature flags are enabled by default for a fresh
cluster, the feature flag subsystem cannot make any promise about when feature
flag migration functions run during the boot process:
the migration functions can run before plugins are started or many
seconds after plugins are started.
There is also no API that tells when feature flag initialisation on the
local node is completed.
Therefore, during a fresh 3.12 cluster start with rabbitmq_mqtt enabled,
some nodes decide to start a Ra cluster (because not all feature flags
are enabled yet when the rabbitmq_mqtt plugin is started).
Prior to this commit, when the feature flag delete_ra_cluster_mqtt_node
got enabled, Ra cluster deletion timed out because the Ra cluster never got
initialised successfully: Members never proceeded past the pre-vote
phase because their Ra peers already had feature flag delete_ra_cluster_mqtt_node
enabled and therefore did not participate.
One possible fix is to remove the feature flag delete_ra_cluster_mqtt_node
and have the newly upgraded 3.12 node delete its Ra membership during a
rolling update. However, this approach is too risky because during a
rolling update of a 3 node cluster A, B, and C the following issue
arises:
1. C upgrades successfully and deletes its Ra membership
2. B shuts down. At this point A and B are members of the Ra cluster
while only A is online (i.e. not a majority). MQTT clients which try
to re-connect to A fail because A times out registering the client in
ad5cc7e250/deps/rabbitmq_mqtt/src/rabbit_mqtt_processor.erl (L174-L179)
Therefore this commit fixes 3.12 cluster creation as follows:
The Ra cluster deletion timeout is reduced from 60 seconds to 15 seconds,
and the Ra server is force deleted if Ra cluster deletion fails.
Force deleting the server will wipe the Ra cluster data on disk.
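A sketch of that fallback logic; the exact ra API calls, arities, and return values shown here are assumptions, not the literal plugin code:
```
delete_ra_node(ServerId) ->
    %% Reduced timeout: 15 seconds instead of 60.
    case ra:delete_cluster([ServerId], 15000) of
        {ok, _Leader} ->
            ok;
        {error, _Reason} ->
            %% The cluster never initialised, so deletion cannot succeed;
            %% force delete the local server and wipe its data on disk.
            ra:force_delete_server(default, ServerId)
    end.
```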
Instead of using atom `undefined`, use an integer as correlation term
when sending the will message to destination queues.
Classic queue clients, for example, expect a non-negative integer.
Quorum queues expect any term.
"If the Client supplies a zero-byte ClientId with CleanSession set to 0,
the Server MUST respond to the CONNECT Packet with a CONNACK return code 0x02
(Identifier rejected) and then close the Network Connection" [MQTT-3.1.3-8].
In Web MQTT, the CONNACK was not sent to the client because the Web MQTT
connection process terminated before sending the CONNACK to the client.
So far, we had the following functions to list nodes in a RabbitMQ
cluster:
* `rabbit_mnesia:cluster_nodes/1` to get members of the Mnesia cluster;
the argument was used to select members (all members or only those
running Mnesia and participating in the cluster)
* `rabbit_nodes:all/0` to get all members of the Mnesia cluster
* `rabbit_nodes:all_running/0` to get all members who currently run
Mnesia
Basically:
* `rabbit_nodes:all/0` calls `rabbit_mnesia:cluster_nodes(all)`
* `rabbit_nodes:all_running/0` calls `rabbit_mnesia:cluster_nodes(running)`
We also have:
* `rabbit_node_monitor:alive_nodes/1` which filters the given list of
nodes to only select those currently running Mnesia
* `rabbit_node_monitor:alive_rabbit_nodes/1` which filters the given
list of nodes to only select those currently running RabbitMQ
Most of the code uses `rabbit_mnesia:cluster_nodes/1` or the
`rabbit_nodes:all*/0` functions. `rabbit_mnesia:cluster_nodes(running)`
or `rabbit_nodes:all_running/0` is often used as a close approximation
of "all cluster members running RabbitMQ". This list might be incorrect
in times where a node is joining the clustered or is being worked on
(i.e. Mnesia is running but not RabbitMQ).
With Khepri, the same approximation won't be possible because we
will try to keep Khepri/Ra running even if RabbitMQ is stopped to
expand/shrink the cluster.
So in order to clarify what we want when we query a list of nodes, this
patch introduces the following functions:
* `rabbit_nodes:list_members/0` to get all cluster members, regardless
of their state
* `rabbit_nodes:list_reachable/0` to get all cluster members we can
reach using Erlang distribution, regardless of the state of RabbitMQ
* `rabbit_nodes:list_running/0` to get all cluster members who run
RabbitMQ, regardless of the maintenance state
* `rabbit_nodes:list_serving/0` to get all cluster members who run
RabbitMQ and are accepting clients
In addition to the list functions, there are the corresponding
`rabbit_nodes:is_*(Node)` checks and `rabbit_nodes:filter_*(Nodes)`
filtering functions.
The code is modified to use these new functions. One possible
significant change is that the new list functions will perform RPC calls
to query the nodes' state, unlike `rabbit_mnesia:cluster_nodes(running)`.
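A usage sketch of the new API (the is_*/filter_* names follow the pattern described above):
```
%% Shell-style sketch:
AllMembers     = rabbit_nodes:list_members(),   %% every cluster member
Reachable      = rabbit_nodes:list_reachable(), %% reachable via Erlang distribution
Running        = rabbit_nodes:list_running(),   %% members running RabbitMQ
Serving        = rabbit_nodes:list_serving(),   %% running and accepting clients
true           = rabbit_nodes:is_running(node()),
ServingMembers = rabbit_nodes:filter_serving(AllMembers).
```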
MQTT and Web MQTT tests of the shared_SUITE now run in parallel.
Before this commit, the shared_SUITE ran for 14 minutes in GitHub Actions;
after this commit, it runs for 4 minutes.
AMQP 0.9.1 header x-mqtt-dup was determined by the incoming MQTT PUBLISH
packet's DUP flag. Its only use was to determine the outgoing MQTT
PUBLISH packet's DUP flag. However, that's wrong behaviour because
the MQTT 3.1.1 protocol spec mandates:
"The value of the DUP flag from an incoming PUBLISH packet is not
propagated when the PUBLISH Packet is sent to subscribers by the Server.
The DUP flag in the outgoing PUBLISH packet is set independently to the
incoming PUBLISH packet, its value MUST be determined solely by whether
the outgoing PUBLISH packet is a retransmission."
[MQTT-3.3.1-3]
Native MQTT fixes this wrong behaviour. Therefore, we can delete this
AMQP 0.9.1 header.
Native MQTT introduced a regression where the "{username}" and "{vhost}"
variables were not expanded in permission patterns.
This regression went unnoticed because the java_SUITE's
topicAuthorisationVariableExpansion test wrongfully passed: its topic
started with "test-topic", which matched another allow-listed topic
(namely "test-topic") instead of the pattern
"{username}.{client_id}.a".
This other java_SUITE regression got introduced by commit
26a17e8530
This commit fixes both the buggy Java test and the actual regression
introduced in Native MQTT.
This commit is pure refactoring making the code base more maintainable.
Replace rabbit_misc:pipeline/3 with the new OTP 25 experimental maybe
expression because
"Frequent ways in which people work with sequences of failable
operations include folds over lists of functions, and abusing list
comprehensions. Both patterns have heavy weaknesses that makes them less
than ideal."
https://www.erlang.org/eeps/eep-0049#obsoleting-messy-patterns
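As an illustration of the shape (not the actual rabbit_mqtt_processor code), a sequence of failable steps with the OTP 25 `maybe` expression reads like this; the helper functions are stubs:
```
-module(maybe_example).
-feature(maybe_expr, enable).
-export([connect/1]).

connect(Packet) ->
    maybe
        {ok, ClientId} ?= parse_client_id(Packet),
        {ok, User}     ?= authenticate(Packet),
        ok             ?= check_vhost_access(User),
        {ok, #{client_id => ClientId, user => User}}
    else
        {error, Reason} ->
            {error, Reason}
    end.

%% Stubs for illustration only.
parse_client_id(_Packet) -> {ok, <<"client-1">>}.
authenticate(_Packet) -> {ok, guest}.
check_vhost_access(_User) -> ok.
```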
Additionally, this commit is more restrictive in the type spec of
rabbit_mqtt_processor state fields.
Specifically, many fields were defined as `undefined | T`, where
`undefined` applied only temporarily until the first CONNECT packet was
processed by the processor.
It's better to initialise the MQTT processor upon the first CONNECT packet
because there is no point in having a processor without having received
any packet.
This allows many type specs in the processor to change from `undefined |
T` to just `T`.
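An illustration of the spec tightening (record and field names are hypothetical):
```
%% Before: the state record existed before the first CONNECT was processed.
-record(state_before, {client_id :: undefined | binary()}).

%% After: the processor state is only created once CONNECT arrived,
%% so the field is always set.
-record(state_after, {client_id :: binary()}).
```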
Additionally, memory is saved by removing the `received_connect_packet`
field from the `rabbit_mqtt_reader` and `rabbit_web_mqtt_handler` state.
Include API functions in the rabbit_mqtt_retained_msg_store
behaviour module.
"There is a best practice to have the behaviour module include
the API also as it helps other parts of the code to be correct
and a bit more dialyzable."
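A sketch of that pattern (module, callback, and function names are illustrative, not the exact rabbit_mqtt_retained_msg_store API): the behaviour module exports the API and dispatches to the configured implementation.
```
-module(retained_store_example).
-export([insert/3]).

-callback insert(State :: term(), Topic :: binary(), Msg :: term()) -> ok.

%% API function living next to the -callback declaration: callers go
%% through here, which keeps call sites dialyzable.
insert({Mod, ModState}, Topic, Msg) ->
    Mod:insert(ModState, Topic, Msg).
```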
This commit also fixes a bug where the retainer process had only
60 milliseconds shutdown time before being unconditionally killed.
60 milliseconds can be too short to dump a large ETS table containing
many retained messages to disk.
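The knob in question is the child spec's shutdown value; a sketch (the new value shown is illustrative, not necessarily what the commit chose):
```
#{id       => rabbit_mqtt_retainer,
  start    => {rabbit_mqtt_retainer, start_link, []},
  %% How long the supervisor waits for terminate/2 to finish before it
  %% unconditionally kills the child; 60 (milliseconds) was too short to
  %% dump a large ETS table of retained messages to disk.
  shutdown => 300000}.
```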
Prior to this commit, test `connection_id_tracking_with_decommissioned_node`
in `deps.rabbitmq_mqtt.cluster_SUITE` was flaky and sometimes failed with
```
{cluster_SUITE,connection_id_tracking_with_decommissioned_node,160}
{test_case_failed,failed to match connection count 0}
```
Remove the separately provided peer address, as it seems to always match peer_host.
Commit 7e09b85426 added the peer address provided by the Web MQTT plugin.
However, this seems unnecessary since function rabbit_net:peername/1 on
the unwrapped socket provides the same address.
The peer address was the address of the proxy when the proxy protocol was
enabled.
This commit simplifies code and reduces memory consumption.
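A sketch of the direct lookup; the return shape mirrors inet:peername/1:
```
%% Socket is the unwrapped socket of the (Web) MQTT connection.
{ok, {PeerAddr, PeerPort}} = rabbit_net:peername(Socket).
```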
In MQTT 3.1.1, the CONNECT packet consists of
1. 10 bytes variable header
2. ClientId (up to 23 bytes must be supported)
3. Will Topic
4. Will Message (maximum length 2^16 bytes)
5. User Name
6. Password
Restricting the CONNECT packet size to 2^16 = 65,536 bytes
seems to be a reasonable default.
The value is configurable via the MQTT app parameter
`max_packet_size_unauthenticated`.
Instead of being called `max_packet_size_connect`, the
name `max_packet_size_unauthenticated` is more generic
because MQTT 5 introduces an AUTH packet type.
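For example, the default can be overridden in advanced.config (the value shown is just an example):
```
[
  {rabbitmq_mqtt, [
    %% Maximum packet size in bytes accepted from a client that has not
    %% yet authenticated, i.e. the CONNECT packet.
    {max_packet_size_unauthenticated, 65536}
  ]}
].
```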
When a single field in a record is updated, all remaining
fields' pointers are copied. Hence, if the record is large,
a lot will be copied.
Therefore, put static or rarely changing fields into their own record.
The same was done for the state in rabbit_channel or rabbit_fifo
for example.
Also, merge the #info{} record into the new #cfg{} record,
as it was unnecessary to introduce it in the first place.
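A sketch of the pattern with illustrative field names (not the actual processor state):
```
-record(cfg,   {client_id  :: binary(),
                vhost      :: binary(),
                clean_sess :: boolean()}).

-record(state, {cfg           :: #cfg{},
                unacked = #{} :: map(),
                packet_id = 1 :: pos_integer()}).

%% Updating a hot field copies only the small outer #state{} tuple;
%% the static data behind 'cfg' is shared, not copied again.
incr_packet_id(#state{packet_id = Id} = State) ->
    State#state{packet_id = (Id rem 16#FFFF) + 1}.
```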
Remove the queue name from all queue type clients and pass the queue
name to the queue type callbacks that need it.
We have to leave feature flag classic_queue_type_delivery_support
required because we removed the monitor registry
1fd4a6d353/deps/rabbit/src/rabbit_queue_type.erl (L322-L325)
Implements review from Karl:
"rather than changing the message format we could amend the queue type
callbacks involved with the stateful operation to also take the queue
name record as an argument. This way we don't need to maintain the extra
queue name (which uses memory for known but obscurely technical reasons
with how maps work) in the queue type state (as it is used in the queue
type state map as the key)"
Instead of having optional rabbit_queue_type callbacks, add stub
implementations to rabbit_mqtt_qos0_queue throwing an exception.
The exception uses erlang:error/2 including stack trace and arguments
of the unsupported functions to ease debugging in case these functions
were ever to be called.
Dialyzer suppressions are added for these functions such that dialyzer
won't complain about:
```
rabbit_mqtt_qos0_queue.erl:244:1: Function init/1 only terminates with explicit exception
```
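A sketch of such a stub and its suppression (callback name and arity are illustrative):
```
-dialyzer({nowarn_function, [dequeue/2]}).

-spec dequeue(term(), term()) -> no_return().
dequeue(QState, Opts) ->
    %% erlang:error/2 records the arguments in the stack trace,
    %% which eases debugging if this unsupported callback is ever called.
    erlang:error(not_implemented, [QState, Opts]).
```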
Use delegate.
For large fan-outs with medium to large message sizes,
this commit reduces inter-node data traffic by
multiple orders of magnitude, preventing busy distribution
ports.
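A sketch of the call shape (the message term is illustrative, and how delegate applies the MFA to each pid is an assumption about its API): delegate batches the casts so that each remote node receives one distribution message regardless of how many destination queue processes live on it.
```
deliver(QPids, Delivery) ->
    %% Roughly equivalent to gen_server2:cast(QPid, {deliver, Delivery}) for
    %% each QPid, but routed through one delegate process per remote node.
    delegate:invoke_no_result(QPids, {gen_server2, cast, [{deliver, Delivery}]}).
```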
We want the build to fail if there are any dialyzer warnings in
rabbitmq_mqtt or rabbitmq_web_mqtt. Otherwise we rely on people manually
executing and checking the results of dialyzer.
Also, we want any flaky test to fail.
Flaky tests can indicate subtle errors in either test or program execution.
Instead of marking them as flaky, we should understand and, if possible,
fix the underlying root cause.
Fix OTP 25.0 dialyzer warning:
type gen_server:format_status() is known in OTP 25.2, but not in 25.0.
Run test either on a RabbitMQ cluster of size 1 or size 3.
Running a test on both cluster sizes does not result in higher test
coverage.
This puts less pressure on Buildbuddy and reduces overall test
execution time.
Although the first element (destination queue) of the compound key
in rabbit_reverse_route is provided, scaling tests with millions of
subscribers have shown via timer:tc/1 that the mnesia:match_object/3
query often takes > 100 microseconds, sometimes even a few milliseconds.
So, querying bindings is not super expensive, but moderately expensive
when done many times concurrently.
Especially when mass-disconnecting millions of clients, `msacc` showed
that all schedulers were 80% busy in `ets`.
To put less pressure on the CPU, this commit instead slightly increases
memory usage: when a client first connects, bindings are queried and cached
in the process state only if a prior session for the same client is present.
Thereafter, bindings are not queried again.
Up to RabbitMQ 3.11, the following bug existed.
The MQTT 3.1.1 protocol spec mandates:
```
The Session state in the Server consists of:
* The Client’s subscriptions.
* ...
```
However, because QoS 0 queues were auto-delete up to 3.11 (or exclusive
prior to this commit), QoS 0 queues and therefore their bindings were
deleted when a non-clean session terminated.
When the same client re-connected, its QoS 0 subscription was lost.
Note that storing **messages** for QoS 0 subscriptions is optional while the
client is disconnected. However, storing the subscription itself (i.e.
bindings in RabbitMQ terms) is not optional: the client must receive new
messages for its QoS 0 subscriptions after it reconnects, without having
to send a SUBSCRIBE packet again.
"After the disconnection of a Session that had CleanSession set to 0,
the Server MUST store further QoS 1 and QoS 2 messages that match any
subscriptions that the client had at the time of disconnection as part
of the Session state [MQTT-3.1.2-5]. It MAY also store QoS 0 messages
that meet the same criteria."
This commit additionally implements the last sentence.
Prior to this commit, there was a CPU bottleneck (not present in 3.11.x)
when creating, deleting or disconnecting many MQTT subscribers.
Example:
Add 120 MQTT connections per second each creating a subscription.
Starting at around 300k MQTT subscribers, all 45 CPUs on the server were
maxed out spending time in `ets` according to msacc.
When running a similar workload with only 30k MQTT subscribers on a
local server with only 5 CPUs, all 5 CPUs were maxed out and the CPU
flame graph showed that 86% of the CPU time was spent in function
rabbit_mqtt_processor:topic_names/2.
This commit uses the rabbit_reverse_route table to query MQTT
subscriptions for a given client ID. CPU usage is now drastically lower.
The configured source topic exchange is always the same in the MQTT
plugin. There is however a high cardinality in the destination queues
(MQTT client IDs) and routing keys (topics).
When a cluster wide memory or disk alarm is fired, in AMQP 0.9.1 only
connections that are publishing messages get blocked.
Connections that only consume can continue to empty the queues.
Prior to this commit, all MQTT connections got blocked during a memory
or disk alarm. This has two downsides:
1. MQTT connections that only consume, but never publish, cannot empty
queues anymore.
2. If the memory or disk alarm takes long, the MQTT client does not
receive a PINGRESP from the server when it sends a PINGREQ potentially
leading to mass client disconnection (depending on the MQTT client
implementation).
This commit makes sure that MQTT connections that never sent a single
PUBLISH packet (e.g. "pure" MQTT subscribers) are not blocked during
memory or disk alarms.
In contrast to AMQP 0.9.1, new connections are still blocked from being
accepted because accepting (many) new MQTT connections also leads to
increased resource usage.
The implementation in this commit is simpler, but also more naive, than
the logic in rabbit_reader: rabbit_reader blocks connections more
dynamically, whereas rabbit_mqtt_reader and rabbit_web_mqtt_handler
block a connection if it ever sent a single PUBLISH packet
during its lifetime.
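A simplified, self-contained sketch of that rule (field and function names are illustrative):
```
-module(alarm_block_example).
-export([maybe_block/1]).

-record(state, {published = false :: boolean(),
                blocked   = false :: boolean()}).

%% On a memory or disk alarm, block only connections that have ever published.
maybe_block(#state{published = true} = State)  -> State#state{blocked = true};
maybe_block(#state{published = false} = State) -> State.
```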
Convert from the old rabbit_log* API to the new Logger macros for MQTT
and Web MQTT connections.
Advantages:
* Metadata (mfa, file, line, pid, gl, time) is auto-inserted by Logger.
* Log lines output by the shared module rabbit_mqtt_processor now
include, via the domain, whether it's an MQTT or Web MQTT connection.
Instead of using domain [rabbitmq, connection], this commit now uses
the smaller and more specialized domains [rabbitmq, connection, mqtt] and
[rabbitmq, connection, web_mqtt] for MQTT and Web MQTT processes
respectively, resulting in the following example output:
"msg":"Received a CONNECT,", "domain":"rabbitmq.connection.mqtt"
or
"msg":"Received a CONNECT,", "domain":"rabbitmq.connection.web_mqtt"
New test suite deps/rabbitmq_mqtt/test/shared_SUITE contains tests that
are executed against both MQTT and Web MQTT.
This has two major advantages:
1. Eliminates test code duplication between rabbitmq_mqtt and
rabbitmq_web_mqtt making the tests easier to maintain and to understand.
2. Increases test coverage of Web MQTT.
It's acceptable to add a **test** dependency from rabbitmq_mqtt to
rabbitmq_web_mqtt. Obviously, there should be no such dependency
for non-test code.
Topic, username, and password are parsed as binaries.
Storing topics as lists or converting between
lists and binaries back and forth several times is
unnecessary and expensive.