accept:
- accept_encoding_header
- accept_header
- accept_neg
- accept_parser
amqp_client:
- amqp_auth_mechanisms
- amqp_channel
- amqp_channel_sup
- amqp_channel_sup_sup
- amqp_channels_manager
- amqp_client
- amqp_connection
- amqp_connection_sup
- amqp_connection_type_sup
- amqp_direct_connection
- amqp_direct_consumer
- amqp_gen_connection
- amqp_gen_consumer
- amqp_main_reader
- amqp_network_connection
- amqp_rpc_client
- amqp_rpc_server
- amqp_selective_consumer
- amqp_ssl
- amqp_sup
- amqp_uri
- amqp_util
- rabbit_routing_util
- uri_parser
amqp10_client:
- amqp10_client
- amqp10_client_app
- amqp10_client_connection
- amqp10_client_connection_sup
- amqp10_client_connections_sup
- amqp10_client_frame_reader
- amqp10_client_session
- amqp10_client_sessions_sup
- amqp10_client_sup
- amqp10_client_types
- amqp10_msg
amqp10_common:
- amqp10_binary_generator
- amqp10_binary_parser
- amqp10_framing
- amqp10_framing0
Support MQTT 5.0 features No Local, RAP, Subscription IDs

Support subscription options "No Local" and "Retain As Published"
as well as Subscription Identifiers.
All three MQTT 5.0 features can be set on a per-subscription basis.
Due to wildcards in topic filters, multiple subscriptions
can match a given topic. Therefore, to implement Retain As Published and
Subscription Identifiers, the destination MQTT connection process needs
to know which subscription(s) caused it to receive the message.
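To make the overlap concrete, here is a minimal, self-contained topic-filter
matcher (an illustrative sketch, not RabbitMQ's implementation). Both example
filters at the bottom match the same published topic, so the topic alone does
not identify the subscription that delivered the message:

    %% Minimal MQTT topic-filter matcher (illustrative sketch only).
    %% '#' matches all remaining levels, '+' matches exactly one level.
    match(Filter, Topic) ->
        do_match(binary:split(Filter, <<"/">>, [global]),
                 binary:split(Topic, <<"/">>, [global])).

    do_match([<<"#">>], _) -> true;
    do_match([<<"+">> | Fs], [_ | Ts]) -> do_match(Fs, Ts);
    do_match([F | Fs], [F | Ts]) -> do_match(Fs, Ts);
    do_match([], []) -> true;
    do_match(_, _) -> false.

    %% Both subscriptions match the same published topic:
    %% match(<<"sport/+/score">>, <<"sport/tennis/score">>) =:= true
    %% match(<<"sport/#">>, <<"sport/tennis/score">>) =:= true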
There are a few ways this could be implemented:
1. The destination MQTT connection process is aware of all its
subscriptions. Whenever it receives a message, it can match the
message's routing key / topic against all its known topic filters.
However, iteratively matching the routing key against all topic
filters for every received message can become very expensive in the
worst case when the MQTT client creates many subscriptions containing
wildcards. This could be the case for an MQTT client that acts as a
bridge or proxy or dispatcher: it could subscribe via a wildcard for
each of its own clients.
2. Instead of iteratively matching the topic of the received message
against all topic filters that contain wildcards, a better approach
would be for every MQTT subscriber connection process to maintain a
local trie data structure (similar to how topic exchanges are
implemented) and therefore perform matching more efficiently.
However, this does not sound optimal either because routing is
effectively performed twice: in the topic exchange and again against
a much smaller trie in each destination connection process.
3. Given that the topic exchange already performs routing, a much more
sensible way would be to send the matched binding key(s) to the
destination MQTT connection process. A subscription (topic filter)
maps to a binding key in AMQP 0.9.1 routing. Therefore, for the first
time in RabbitMQ, the routing function should not only output a list
of unique destination queues, but also the binding keys (subscriptions)
that caused the message to be routed to the destination queue.
This commit therefore implements the 3rd approach.
The downside of the 3rd approach is that it requires API changes to the
routing function and topic exchange.
Specifically, this commit adds a new function rabbit_exchange:route/3
that accepts a list of routing options. If that list contains version 2,
the caller of the routing function signals that it knows how to handle a
return value that may also contain binding keys.
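As a sketch of the caller's side, where the option literal (v2) and the
{QName, BindingKeys} destination shape are assumptions for illustration,
not the exact API introduced here:

    %% Hypothetical caller of the new rabbit_exchange:route/3.
    route_with_binding_keys(Exchange, Delivery) ->
        Destinations = rabbit_exchange:route(Exchange, Delivery, [v2]),
        %% Destinations routed via a binding key carry it along;
        %% plain queue names are normalised to an empty key list.
        lists:map(fun({QName, BindingKeys}) -> {QName, BindingKeys};
                     (QName)                -> {QName, []}
                  end,
                  Destinations).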
This commit allows an MQTT connection process, the channel process, and
at-most-once dead lettering to handle binding keys. Binding keys are
included as AMQP 0.9.1 headers in the basic message.
Therefore, whenever a message is sent from an MQTT client, an AMQP 0.9.1
client, an AMQP 1.0 client, or a STOMP client, the MQTT receiver will know
the subscription identifier that caused the message to be received.
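On the receiving side, the matched binding keys can then be read back from
the header table, roughly as follows (the header name x-binding-keys is an
assumption for illustration, not necessarily the key the broker uses):

    %% AMQP 0.9.1 header tables are lists of {Name, Type, Value} entries.
    binding_keys_from_headers(undefined) ->
        [];
    binding_keys_from_headers(Headers) when is_list(Headers) ->
        case lists:keyfind(<<"x-binding-keys">>, 1, Headers) of
            {_, array, Keys} -> [Key || {longstr, Key} <- Keys];
            false            -> []
        end.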
Note that due to the low number of allowed wildcard characters (# and
+), the cardinality of matched binding keys shouldn't be high, even if
the topic contains, say, 3 levels and the message is sent to, say,
5 million destination queues. In other words, sending multiple
distinct basic messages to the destinations shouldn't hurt the delegate
optimisation too much. The delegate optimisation implemented for classic
queues and rabbit_mqtt_qos0_queue(s) still takes place for all basic
messages that contain the same set of matched binding keys.
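In other words, the fan-out can still be batched per distinct binding-key
set, along these lines (illustrative only; maps:groups_from_list/3 requires
OTP 25 or later):

    %% Group destination queues by their (sorted) set of matched binding
    %% keys; each group can then be delivered as one basic message.
    group_by_binding_keys(Destinations) ->
        maps:groups_from_list(
          fun({_QName, BindingKeys}) -> lists:usort(BindingKeys) end,
          fun({QName, _BindingKeys}) -> QName end,
          Destinations).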
The topic exchange returns all matched binding keys by remembering the
edges walked down to the leaves. As an optimisation, binding keys are
returned only for MQTT queues. This does add a small dependency
from app rabbit to app rabbitmq_mqtt, which is not optimal. However, this
dependency should be simple to remove by omitting this optimisation.
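Conceptually, the walk looks like this simplified sketch (an assumed
in-memory structure, not the Mnesia-based trie of rabbit_exchange_type_topic;
'#' handling is omitted for brevity). The edge words accumulated on the way
down form the binding key returned with each match:

    -record(node, {bindings = []  :: [term()],   %% destinations bound here
                   children = #{} :: map()}).    %% edge word => child node

    %% Returns {Destination, BindingKey} pairs instead of destinations only.
    trie_match(Node, TopicWords) ->
        trie_match(Node, TopicWords, []).

    trie_match(#node{bindings = Bindings}, [], Acc) ->
        BindingKey = lists:reverse(Acc),
        [{Dest, BindingKey} || Dest <- Bindings];
    trie_match(#node{children = Children}, [Word | Rest], Acc) ->
        Exact = child_match(maps:find(Word, Children), Rest, [Word | Acc]),
        Plus  = child_match(maps:find(<<"+">>, Children), Rest, [<<"+">> | Acc]),
        Exact ++ Plus.

    child_match({ok, Child}, Rest, Acc) -> trie_match(Child, Rest, Acc);
    child_match(error, _Rest, _Acc) -> [].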
Another important feature of this commit is persisting subscription
options and subscription identifiers, because they are part of the
MQTT 5.0 session state.
In MQTT v3 and v4, the only subscription information that was part of
the session state was the topic filter and the QoS level.
Both pieces of information were implicitly stored in the form of bindings:
the topic filter as the binding key and the QoS level as the destination
queue name of the binding.
For MQTT v5 we need to persist more subscription information.
From a domain perspective, it makes sense to store subscription options
as part of subscriptions, i.e. bindings, even though they are currently
not used in routing.
Therefore, this commit stores subscription options as binding arguments.
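A sketch of the resulting binding arguments; the record fields shown are
assumptions derived from the options named in this message, not the exact
definition of #mqtt_subscription_opts{}:

    %% Assumed field set; the real record may differ.
    -record(mqtt_subscription_opts, {
              qos                 :: 0..2,
              no_local            :: boolean(),
              retain_as_published :: boolean(),
              subscription_id     :: undefined | pos_integer()
             }).

    %% v3/v4 sessions keep [], v5 sessions store the options record.
    binding_args(ProtoVer, _Opts) when ProtoVer =:= 3; ProtoVer =:= 4 ->
        [];
    binding_args(5, Opts = #mqtt_subscription_opts{}) ->
        [Opts].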
Storing subscription options as binding arguments in turn brings
new challenges: how to handle mixed-version clusters and how to upgrade an
MQTT session from v3 or v4 to v5?
Imagine an MQTT client connects via v5 with Session Expiry Interval > 0
to a new node in a mixed version cluster, creates a subscription,
disconnects, and subsequently connects via v3 to an old node. The
client should continue to receive messages.
To simplify such edge cases, this commit introduces a new feature flag
called mqtt_v5. If mqtt_v5 is disabled, clients cannot connect to
RabbitMQ via MQTT 5.0.
This still doesn't entirely solve the problem of MQTT session upgrades
(v4 to v5 client) or session downgrades (v5 to v4 client).
Ideally, once mqtt_v5 is enabled, all MQTT bindings would contain non-empty
binding arguments. However, this would require a feature flag migration
function to modify all MQTT bindings. To be more precise, all MQTT bindings
would need to be deleted and re-added because the binding argument is part
of the Mnesia table key.
Since feature flag migration functions are non-trivial to implement in
RabbitMQ (they can run on every node multiple times and concurrently),
this commit takes a simpler approach:
All v3 / v4 sessions keep the empty binding argument [].
All v5 sessions use the new binding argument [#mqtt_subscription_opts{}].
A session upgrade / downgrade is then handled by creating a binding
(with the new binding argument) and deleting the old binding (with the
old binding argument) when processing the CONNECT packet.
Note that such session upgrades or downgrades should be rather rare in
practice, so these binding transactions shouldn't hurt performance.
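A session upgrade or downgrade therefore amounts to swapping binding
arguments, roughly as below (add_binding/3 and delete_binding/3 are assumed
helper names, not the real functions):

    %% Only the binding argument changes between the protocol versions;
    %% topic filter (binding key) and destination queue stay the same.
    migrate_session_binding(Queue, TopicFilter, OldArgs, NewArgs)
      when OldArgs =/= NewArgs ->
        ok = add_binding(Queue, TopicFilter, NewArgs),
        ok = delete_binding(Queue, TopicFilter, OldArgs);
    migrate_session_binding(_Queue, _TopicFilter, _Args1, _Args2) ->
        ok.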
The No Local option is implemented within the MQTT publishing connection
process: the message is not sent to the MQTT destination if the
destination queue name matches the current MQTT client ID and the
message was routed due to a subscription that has the No Local flag set.
This avoids unnecessary traffic on the MQTT queue.
The alternative would have been for the "receiving side" (the same process)
to filter the message out - which would have been more consistent with how
Retain As Published and Subscription Identifiers are implemented, but
would have caused unnecessary load on the MQTT queue.
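A minimal sketch of that check on the publishing side, under assumed names
(queue_name_for/1 is a hypothetical helper deriving a client's queue name
from its client ID):

    %% Skip the delivery if the matched subscription has No Local set and
    %% the destination queue belongs to the publishing client itself.
    should_deliver(DestinationQueue, PublisherClientId,
                   #mqtt_subscription_opts{no_local = NoLocal}) ->
        not (NoLocal andalso
             DestinationQueue =:= queue_name_for(PublisherClientId)).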
app:
- appup_src
aten:
- aten
- aten_app
- aten_detect
- aten_detector
- aten_emitter
- aten_sink
- aten_sup
base64url:
- base64url
bazdep:
- bazdep
certifi:
- certifi
- certifi_pt
codepath:
- codepath
cover:
- foo
cowboy:
- cowboy
- cowboy_app
- cowboy_bstr
- cowboy_children
- cowboy_clear
- cowboy_clock
- cowboy_compress_h
- cowboy_constraints
- cowboy_handler
- cowboy_http
- cowboy_http2
- cowboy_loop
- cowboy_metrics_h
- cowboy_middleware
- cowboy_req
- cowboy_rest
- cowboy_router
- cowboy_static
- cowboy_stream
- cowboy_stream_h
- cowboy_sub_protocol
- cowboy_sup
- cowboy_tls
- cowboy_tracer_h
- cowboy_websocket
cowlib:
- cow_base64url
- cow_cookie
- cow_date
- cow_hpack
- cow_http
- cow_http2
- cow_http2_machine
- cow_http_hd
- cow_http_struct_hd
- cow_http_te
- cow_iolists
- cow_link
- cow_mimetypes
- cow_multipart
- cow_qs
- cow_spdy
- cow_sse
- cow_uri
- cow_uri_template
- cow_ws
credentials_obfuscation:
- credentials_obfuscation
- credentials_obfuscation_app
- credentials_obfuscation_pbe
- credentials_obfuscation_sup
- credentials_obfuscation_svc
ct_helper:
- ct_helper
- ct_helper_error_h
cth_styledout:
- cth_styledout
cuttlefish:
- conf_parse
- cuttlefish
- cuttlefish_advanced
- cuttlefish_bytesize
- cuttlefish_conf
- cuttlefish_datatypes
- cuttlefish_duration
- cuttlefish_duration_parse
- cuttlefish_effective
- cuttlefish_enum
- cuttlefish_error
- cuttlefish_escript
- cuttlefish_flag
- cuttlefish_generator
- cuttlefish_mapping
- cuttlefish_rebar_plugin
- cuttlefish_schema
- cuttlefish_translation
- cuttlefish_unit
- cuttlefish_util
- cuttlefish_validator
- cuttlefish_variable
- cuttlefish_vmargs
dummy:
- dummy_app
- dummy_server
- dummy_sup
eetcd:
- eetcd_auth_gen
- eetcd_cluster_gen
- eetcd_election_gen
- eetcd_health_gen
- eetcd_kv_gen
- eetcd_lease_gen
- eetcd_lock_gen
- eetcd_maintenance_gen
- eetcd_watch_gen
- eetcd
- eetcd_app
- eetcd_auth
- eetcd_cluster
- eetcd_compare
- eetcd_conn
- eetcd_conn_sup
- eetcd_data_coercion
- eetcd_election
- eetcd_grpc
- eetcd_kv
- eetcd_lease
- eetcd_lease_sup
- eetcd_lock
- eetcd_maintenance
- eetcd_op
- eetcd_stream
- eetcd_sup
- eetcd_watch
- auth_pb
- gogo_pb
- health_pb
- kv_pb
- router_pb
emqtt:
- emqtt
- emqtt_cli
- emqtt_frame
- emqtt_inflight
- emqtt_props
- emqtt_quic
- emqtt_quic_connection
- emqtt_quic_stream
- emqtt_secret
- emqtt_sock
- emqtt_ws
enough:
- enough
erlc:
- first_erl
- foo
- foo_app
- foo_test_worker
- foo_worker
eunit:
- foo
eunit_surefire:
- foo
foo:
- java
- lisp
- pascal
- perl
foodep:
- foodep
gen_batch_server:
- gen_batch_server
getopt:
- getopt
gpb:
- gpb
- gpb_compile
gun:
- gun
- gun_app
- gun_content_handler
- gun_data_h
- gun_http
- gun_http2
- gun_sse_h
- gun_sup
- gun_tcp
- gun_tls
- gun_ws
- gun_ws_h
hackney:
- hackney
- hackney_app
- hackney_bstr
- hackney_connect
- hackney_connection
- hackney_connections
- hackney_cookie
- hackney_date
- hackney_headers
- hackney_headers_new
- hackney_http
- hackney_http_connect
- hackney_local_tcp
- hackney_manager
- hackney_metrics
- hackney_multipart
- hackney_pool
- hackney_pool_handler
- hackney_request
- hackney_response
- hackney_socks5
- hackney_ssl
- hackney_ssl_certificate
- hackney_stream
- hackney_sup
- hackney_tcp
- hackney_trace
- hackney_url
- hackney_util
idna:
- idna
- idna_bidi
- idna_context
- idna_data
- idna_mapping
- idna_table
- idna_ucs
- punycode
inet_tcp_proxy_dist:
- inet_tcp_proxy_dist
- inet_tcp_proxy_dist_app
- inet_tcp_proxy_dist_conn_sup
- inet_tcp_proxy_dist_controller
- inet_tcp_proxy_dist_sup
jose:
- jose_base
- jose_base64
- jose_base64url
- jose
- jose_app
- jose_block_encryptor
- jose_chacha20_poly1305
- jose_chacha20_poly1305_crypto
- jose_chacha20_poly1305_libsodium
- jose_chacha20_poly1305_unsupported
- jose_crypto_compat
- jose_curve25519
- jose_curve25519_libdecaf
- jose_curve25519_libsodium
- jose_curve25519_unsupported
- jose_curve448
- jose_curve448_libdecaf
- jose_curve448_unsupported
- jose_public_key
- jose_server
- jose_sha3
- jose_sha3_keccakf1600_driver
- jose_sha3_keccakf1600_nif
- jose_sha3_libdecaf
- jose_sha3_unsupported
- jose_sup
- jose_xchacha20_poly1305
- jose_xchacha20_poly1305_crypto
- jose_xchacha20_poly1305_unsupported
- jose_json
- jose_json_jason
- jose_json_jiffy
- jose_json_jsone
- jose_json_jsx
- jose_json_ojson
- jose_json_poison
- jose_json_poison_compat_encoder
- jose_json_poison_lexical_encoder
- jose_json_thoas
- jose_json_unsupported
- jose_jwa
- jose_jwa_aes
- jose_jwa_aes_kw
- jose_jwa_base64url
- jose_jwa_bench
- jose_jwa_chacha20
- jose_jwa_chacha20_poly1305
- jose_jwa_concat_kdf
- jose_jwa_curve25519
- jose_jwa_curve448
- jose_jwa_ed25519
- jose_jwa_ed448
- jose_jwa_hchacha20
- jose_jwa_math
- jose_jwa_pkcs1
- jose_jwa_pkcs5
- jose_jwa_pkcs7
- jose_jwa_poly1305
- jose_jwa_sha3
- jose_jwa_unsupported
- jose_jwa_x25519
- jose_jwa_x448
- jose_jwa_xchacha20
- jose_jwa_xchacha20_poly1305
- jose_jwe
- jose_jwe_alg
- jose_jwe_alg_aes_kw
- jose_jwe_alg_c20p_kw
- jose_jwe_alg_dir
- jose_jwe_alg_ecdh_1pu
- jose_jwe_alg_ecdh_es
- jose_jwe_alg_pbes2
- jose_jwe_alg_rsa
- jose_jwe_alg_xc20p_kw
- jose_jwe_enc
- jose_jwe_enc_aes
- jose_jwe_enc_c20p
- jose_jwe_enc_xc20p
- jose_jwe_zip
- jose_jwk
- jose_jwk_der
- jose_jwk_kty
- jose_jwk_kty_ec
- jose_jwk_kty_oct
- jose_jwk_kty_okp_ed25519
- jose_jwk_kty_okp_ed25519ph
- jose_jwk_kty_okp_ed448
- jose_jwk_kty_okp_ed448ph
- jose_jwk_kty_okp_x25519
- jose_jwk_kty_okp_x448
- jose_jwk_kty_rsa
- jose_jwk_oct
- jose_jwk_openssh_key
- jose_jwk_pem
- jose_jwk_set
- jose_jwk_use_enc
- jose_jwk_use_sig
- jose_jws
- jose_jws_alg
- jose_jws_alg_ecdsa
- jose_jws_alg_eddsa
- jose_jws_alg_hmac
- jose_jws_alg_none
- jose_jws_alg_poly1305
- jose_jws_alg_rsa_pkcs1_v1_5
- jose_jws_alg_rsa_pss
- jose_jwt
meck:
- meck
- meck_args_matcher
- meck_code
- meck_code_gen
- meck_cover
- meck_expect
- meck_history
- meck_matcher
- meck_proc
- meck_ret_spec
- meck_util
2023-04-19 21:32:34 +08:00
metrics:
- metrics
- metrics_dummy
- metrics_exometer
- metrics_folsom
mimerl:
- mimerl
2023-02-23 21:47:41 +08:00
my_plugin:
- my_plugin
2023-04-19 21:32:34 +08:00
neotoma:
- neotoma
- neotoma_parse
2023-01-25 16:41:56 +08:00
observer_cli:
- observer_cli
- observer_cli_application
- observer_cli_escriptize
- observer_cli_ets
- observer_cli_help
- observer_cli_inet
- observer_cli_lib
- observer_cli_mnesia
- observer_cli_plugin
- observer_cli_port
- observer_cli_process
- observer_cli_store
- observer_cli_system
osiris:
- osiris
- osiris_app
- osiris_bench
- osiris_counters
- osiris_ets
- osiris_log
- osiris_log_shared
- osiris_replica
- osiris_replica_reader
- osiris_replica_reader_sup
- osiris_retention
- osiris_server_sup
- osiris_sup
- osiris_tracking
- osiris_util
- osiris_writer
2023-04-19 21:32:34 +08:00
parse_trans:
- ct_expand
- exprecs
- parse_trans
- parse_trans_codegen
- parse_trans_mod
- parse_trans_pp
2023-01-25 16:41:56 +08:00
prometheus:
- prometheus_mnesia_collector
- prometheus_vm_dist_collector
- prometheus_vm_memory_collector
- prometheus_vm_msacc_collector
- prometheus_vm_statistics_collector
- prometheus_vm_system_info_collector
- prometheus_http
- prometheus_mnesia
- prometheus_test_instrumenter
- prometheus_protobuf_format
- prometheus_text_format
- prometheus_boolean
- prometheus_counter
- prometheus_gauge
- prometheus_histogram
- prometheus_quantile_summary
- prometheus_summary
- prometheus_model
- prometheus_model_helpers
- prometheus
- prometheus_buckets
- prometheus_collector
- prometheus_format
- prometheus_instrumenter
- prometheus_metric
- prometheus_metric_spec
- prometheus_misc
- prometheus_registry
- prometheus_sup
- prometheus_time
proper:
- proper
- proper_arith
- proper_array
- proper_dict
- proper_erlang_abstract_code
- proper_fsm
- proper_gb_sets
- proper_gb_trees
- proper_gen
- proper_gen_next
- proper_orddict
- proper_ordsets
- proper_prop_remover
- proper_queue
- proper_sa
- proper_sets
- proper_shrink
- proper_statem
- proper_symb
- proper_target
- proper_transformer
- proper_types
- proper_typeserver
- proper_unicode
- proper_unused_imports_remover
- vararg
2023-04-19 21:32:34 +08:00
proto_gpb:
- foo
- foo_app
- foo_sup
proto_protobuffs:
- foo
- foo_app
- foo_sup
protobuffs:
- protobuffs
- protobuffs_compile
2023-01-25 16:41:56 +08:00
quantile_estimator:
- quantile
- quantile_estimator
ra:
- ra
- ra_app
- ra_bench
- ra_counters
- ra_dbg
- ra_directory
- ra_env
2023-03-10 02:54:20 +08:00
- ra_ets_queue
2023-01-25 16:41:56 +08:00
- ra_file_handle
- ra_flru
- ra_leaderboard
- ra_lib
- ra_log
2023-03-10 02:54:20 +08:00
- ra_log_cache
2023-01-25 16:41:56 +08:00
- ra_log_ets
- ra_log_meta
- ra_log_pre_init
- ra_log_reader
- ra_log_segment
- ra_log_segment_writer
- ra_log_snapshot
- ra_log_sup
- ra_log_wal
- ra_log_wal_sup
- ra_machine
- ra_machine_ets
- ra_machine_simple
- ra_metrics_ets
- ra_monitors
- ra_server
- ra_server_proc
- ra_server_sup
- ra_server_sup_sup
- ra_snapshot
- ra_sup
- ra_system
- ra_system_sup
- ra_systems_sup
2023-02-23 21:47:41 +08:00
rabbit:
- amqqueue
- background_gc
- code_server_cache
- gatherer
- gm
- internal_user
- lqueue
- mirrored_supervisor
- mirrored_supervisor_sups
- pg_local
- pid_recomposition
- rabbit
- rabbit_access_control
- rabbit_alarm
- rabbit_amqqueue
- rabbit_amqqueue_process
- rabbit_amqqueue_sup
- rabbit_amqqueue_sup_sup
- rabbit_auth_backend_internal
- rabbit_auth_mechanism_amqplain
- rabbit_auth_mechanism_cr_demo
- rabbit_auth_mechanism_plain
- rabbit_autoheal
- rabbit_backing_queue
- rabbit_basic
- rabbit_binding
- rabbit_boot_steps
- rabbit_channel
- rabbit_channel_interceptor
- rabbit_channel_sup
- rabbit_channel_sup_sup
- rabbit_channel_tracking
- rabbit_channel_tracking_handler
- rabbit_classic_queue
- rabbit_classic_queue_index_v2
- rabbit_classic_queue_store_v2
- rabbit_client_sup
- rabbit_config
- rabbit_confirms
- rabbit_connection_helper_sup
- rabbit_connection_sup
- rabbit_connection_tracking
- rabbit_connection_tracking_handler
- rabbit_control_pbe
- rabbit_core_ff
- rabbit_core_metrics_gc
- rabbit_credential_validation
- rabbit_credential_validator
- rabbit_credential_validator_accept_everything
- rabbit_credential_validator_min_password_length
- rabbit_credential_validator_password_regexp
- rabbit_cuttlefish
- rabbit_db
- rabbit_db_binding
- rabbit_db_cluster
- rabbit_db_exchange
- rabbit_db_maintenance
- rabbit_db_msup
- rabbit_db_policy
- rabbit_db_queue
- rabbit_db_rtparams
- rabbit_db_topic_exchange
- rabbit_db_user
- rabbit_db_vhost
- rabbit_db_vhost_defaults
- rabbit_dead_letter
- rabbit_definitions
- rabbit_definitions_hashing
- rabbit_definitions_import_https
- rabbit_definitions_import_local_filesystem
Deprecated features: New module to manage deprecated features (!)
This introduces a way to declare deprecated features in the code, not
only in our communication. The new module makes it possible to disallow
the use of a deprecated feature and/or to warn the user when they rely
on such a feature.
[Why]
Currently, we only tell people about deprecated features through blog
posts and the mailing-list. This might be insufficient to make our users
aware that a feature they use will be removed in a future version:
* They may not read our blog or mailing-list
* They may not understand that they use such a deprecated feature
* They might wait for the big removal before they plan testing
* They might not take it seriously enough
The idea behind this patch is to increase the chance that users notice
that they are using something which is about to be dropped from
RabbitMQ. Another benefit is that they should be able to test how
RabbitMQ will behave in the future before the actual removal. This
should allow them to test and plan changes.
[How]
When a feature is deprecated in other large projects (such as FreeBSD,
where I took the idea from), it goes through a lifecycle:
1. The feature is still available, but users get a warning somehow when
they use it. They can disable it to test.
2. The feature is still available, but disabled out-of-the-box. Users
can re-enable it (and get a warning).
3. The feature is disconnected from the build. The code behind it is
still there, but users have to recompile the thing to be able to use it.
4. The feature is removed from the source code. Users have to adapt, or
they can't upgrade anymore.
The solution in this patch offers the same lifecycle. A deprecated
feature will be in one of these deprecation phases:
1. `permitted_by_default`: The feature is available. Users get a warning
if they use it. They can disable it from the configuration.
2. `denied_by_default`: The feature is available but disabled by
default. Users get an error if they use it and RabbitMQ behaves as if
the feature were removed. They can re-enable it from the configuration
and get a warning.
3. `disconnected`: The feature is present in the source code, but is
disabled and can't be re-enabled without recompiling RabbitMQ. Users
get the same behavior as if the code was removed.
4. `removed`: The feature's code is gone.
The whole thing is based on the feature flags subsystem, but it has the
following differences with other feature flags:
* The semantic is reversed: the feature flag behind a deprecated feature
is disabled when the deprecated feature is permitted, or enabled when
the deprecated feature is denied.
* The feature flag behind a deprecated feature is enabled out-of-the-box
(meaning the deprecated feature is denied; see the sketch after this
list):
* if the deprecation phase is `permitted_by_default` and the
configuration denies the deprecated feature
* if the deprecation phase is `denied_by_default` and the
configuration doesn't permit the deprecated feature
* if the deprecation phase is `disconnected` or `removed`
* Feature flags behind deprecated features don't appear in feature flags
listings.
Otherwise, deprecated features' feature flags are managed like other
feature flags, in particular inside clusters.
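A minimal sketch of the decision table from the list above, assuming
illustrative function and type names (this is not the actual
rabbit_deprecated_features code):
```erlang
%% Sketch only: names are assumptions; the logic mirrors the bullets
%% above ("is the feature flag behind the deprecated feature enabled,
%% i.e. is the deprecated feature denied?").
-module(deprecation_phase_sketch).
-export([is_denied/2]).

-type phase() :: permitted_by_default | denied_by_default
               | disconnected | removed.

%% PermitConfig is what the user set via
%% "deprecated_features.permit.<name>" (undefined when unset).
-spec is_denied(phase(), boolean() | undefined) -> boolean().
is_denied(permitted_by_default, PermitConfig) ->
    %% Denied only when the configuration explicitly denies it.
    PermitConfig =:= false;
is_denied(denied_by_default, PermitConfig) ->
    %% Denied unless the configuration explicitly permits it.
    PermitConfig =/= true;
is_denied(disconnected, _PermitConfig) ->
    true;
is_denied(removed, _PermitConfig) ->
    true.
```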
To declare a deprecated feature:
-rabbit_deprecated_feature(
{my_deprecated_feature,
#{deprecation_phase => permitted_by_default,
msgs => #{when_permitted => "This feature will be removed in RabbitMQ X.0"}
}}).
Then, to check the state of a deprecated feature in the code:
case rabbit_deprecated_features:is_permitted(my_deprecated_feature) of
true ->
%% The deprecated feature is still permitted.
ok;
false ->
%% The deprecated feature is gone or should be considered
%% unavailable.
error
end.
Warnings and errors are logged automatically. A message is generated
automatically, but it is possible to define a message in the deprecated
feature flag declaration, as in the example above.
Here is an example of a logged warning that was generated automatically:
Feature `my_deprecated_feature` is deprecated.
By default, this feature can still be used for now.
Its use will not be permitted by default in a future minor RabbitMQ version and the feature will be removed from a future major RabbitMQ version; actual versions to be determined.
To continue using this feature when it is not permitted by default, set the following parameter in your configuration:
"deprecated_features.permit.my_deprecated_feature = true"
To test RabbitMQ as if the feature was removed, set this in your configuration:
"deprecated_features.permit.my_deprecated_feature = false"
To override the default state of the `permitted_by_default` and
`denied_by_default` deprecation phases, users can set the following
configuration:
# In rabbitmq.conf:
deprecated_features.permit.my_deprecated_feature = true # or false
The actual behavior protected by a deprecated feature check is out of
scope for this subsystem. It is the responsibility of each deprecated
feature's code to determine what to do when the deprecated feature is
denied.
V1: Deprecated feature states are initially computed during the
initialization of the registry, based on their deprecation phase and
possibly the configuration. They don't go through the `enable/1`
code at all.
V2: Manage deprecated feature states as any other non-required
feature flags. This allows executing an `is_feature_used()`
callback to determine if a deprecated feature can be denied. This
also allows preventing the RabbitMQ node from starting if it
continues to use a deprecated feature.
V3: Manage deprecated feature states from the registry initialization
again. This is required because we need to know very early if some
of them are denied, so that an upgrade to a version of RabbitMQ
where a deprecated feature is disconnected or removed can be
performed.
To still prevent the start of a RabbitMQ node when a denied
deprecated feature is actively used, we run the `is_feature_used()`
callback of all denied deprecated features as part of the
`sync_cluster()` task. This task is executed as part of a feature
flag refresh executed when RabbitMQ starts or when plugins are
enabled. So even though a deprecated feature is marked as denied in
the registry early in the boot process, we will still abort the
start of a RabbitMQ node if the feature is used.
V4: Support context-dependent warnings. It is now possible to set a
specific message when a deprecated feature is permitted, when it is
denied and when it is removed. Generic per-context messages are
still generated.
V5: Improve default warning messages, thanks to @pstack2021.
V6: Rename the configuration variable from `permit_deprecated_features.*`
to `deprecated_features.permit.*`. As @michaelklishin said, we tend
to use shorter top-level names.
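The V2 and V3 notes above rely on an `is_feature_used()` callback to
detect whether a denied deprecated feature is still actively used. The
declaration below is only a sketch of that idea; the `callbacks` map
key, callback reference format and module names are assumptions for
illustration, not the confirmed API:
```erlang
%% Sketch only: the callbacks map shape is an assumption.
-rabbit_deprecated_feature(
   {my_deprecated_feature,
    #{deprecation_phase => denied_by_default,
      callbacks => #{is_feature_used =>
                         {my_app_deprecations, my_feature_is_used}}}}).
```
If such a callback reports the feature as used while the feature is
denied, the node aborts start-up during the `sync_cluster()` feature
flag refresh, as described in V3.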
2023-02-23 00:26:52 +08:00
- rabbit_deprecated_features
2023-02-23 21:47:41 +08:00
- rabbit_diagnostics
- rabbit_direct
- rabbit_direct_reply_to
- rabbit_disk_monitor
- rabbit_epmd_monitor
- rabbit_event_consumer
- rabbit_exchange
- rabbit_exchange_decorator
- rabbit_exchange_parameters
- rabbit_exchange_type
- rabbit_exchange_type_direct
- rabbit_exchange_type_fanout
- rabbit_exchange_type_headers
- rabbit_exchange_type_invalid
- rabbit_exchange_type_topic
- rabbit_feature_flags
- rabbit_ff_controller
- rabbit_ff_extra
- rabbit_ff_registry
- rabbit_ff_registry_factory
2023-05-23 12:02:30 +08:00
- rabbit_ff_registry_wrapper
2023-02-23 21:47:41 +08:00
- rabbit_fhc_helpers
- rabbit_fifo
- rabbit_fifo_client
- rabbit_fifo_dlx
- rabbit_fifo_dlx_client
- rabbit_fifo_dlx_sup
- rabbit_fifo_dlx_worker
- rabbit_fifo_index
- rabbit_fifo_v0
- rabbit_fifo_v1
- rabbit_file
- rabbit_global_counters
- rabbit_guid
- rabbit_health_check
- rabbit_limiter
- rabbit_log_channel
- rabbit_log_connection
- rabbit_log_mirroring
- rabbit_log_prelaunch
- rabbit_log_queue
- rabbit_log_tail
- rabbit_logger_exchange_h
- rabbit_looking_glass
- rabbit_maintenance
- rabbit_memory_monitor
Move plugin rabbitmq-message-timestamp to the core
As reported in https://groups.google.com/g/rabbitmq-users/c/x8ACs4dBlkI/
plugins that implement rabbit_channel_interceptor break with
Native MQTT in 3.12 because Native MQTT does not use rabbit_channel anymore.
Specifically, these plugins no longer work in 3.12 when sending a message
from an MQTT publisher to an AMQP 0.9.1 consumer.
Two of these plugins are
https://github.com/rabbitmq/rabbitmq-message-timestamp
and
https://github.com/rabbitmq/rabbitmq-routing-node-stamp
This commit moves both plugins into rabbitmq-server.
Therefore, these plugins are deprecated starting in 3.12.
Instead of using these plugins, the user gets the same behaviour by
configuring rabbitmq.conf as follows:
```
incoming_message_interceptors.set_header_timestamp.overwrite = false
incoming_message_interceptors.set_header_routing_node.overwrite = false
```
While the two plugins could not be used together, this commit
allows setting both headers.
We name the top-level configuration key `incoming_message_interceptors`
because only incoming messages are intercepted.
Currently, only `set_header_timestamp` and `set_header_routing_node` are
supported. (We might support more in the future.)
Both can set `overwrite` to `false` or `true`.
The meaning of `overwrite` is the same as documented in
https://github.com/rabbitmq/rabbitmq-message-timestamp#always-overwrite-timestamps
i.e. whether headers should be overwritten if they are already present
in the message.
Both `set_header_timestamp` and `set_header_routing_node` behave exactly
like the plugins `rabbitmq-message-timestamp` and
`rabbitmq-routing-node-stamp`, respectively.
Upon node boot, the configuration is put into persistent_term so as not
to cause any performance penalty in the default case where these settings
are disabled.
The channel and MQTT connection process will intercept incoming messages
and - if configured - add the desired AMQP 0.9.1 headers.
For now, this allows using Native MQTT in 3.12 with the old plugins'
behaviour.
In the future, once "message containers" are implemented,
we can think about more generic message interceptors where plugins can be
written to modify arbitrary headers or message contents for various protocols.
Likewise, in the future, once MQTT 5.0 is implemented, we can think
about an MQTT connection interceptor which could function similarly to a
`rabbit_channel_interceptor`, allowing modification of any MQTT packet.
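A minimal sketch of the persistent_term pattern described above; the
term key, interceptor list shape and header name are illustrative
assumptions, not rabbitmq-server internals:
```erlang
%% Sketch only: names and data shapes are assumptions.
-module(message_interceptor_sketch).
-export([init/0, intercept/1]).

init() ->
    %% Stored once at node boot; persistent_term reads are then
    %% effectively free on the hot path.
    persistent_term:put(incoming_message_interceptors,
                        [{set_header_timestamp, false}]).

%% Apply every configured interceptor to the AMQP 0.9.1-style
%% header list of an incoming message.
intercept(Headers) when is_list(Headers) ->
    Interceptors = persistent_term:get(incoming_message_interceptors, []),
    lists:foldl(fun apply_interceptor/2, Headers, Interceptors).

apply_interceptor({set_header_timestamp, Overwrite}, Headers) ->
    set_header(<<"timestamp_in_ms">>, os:system_time(millisecond),
               Overwrite, Headers);
apply_interceptor(_Unknown, Headers) ->
    Headers.

%% Add the header, respecting Overwrite when it is already present.
set_header(Name, Value, Overwrite, Headers) ->
    case lists:keymember(Name, 1, Headers) of
        true when not Overwrite -> Headers;
        _ -> lists:keystore(Name, 1, Headers, {Name, Value})
    end.
```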
2023-05-12 22:12:50 +08:00
- rabbit_message_interceptor
2023-02-23 21:47:41 +08:00
- rabbit_metrics
- rabbit_mirror_queue_coordinator
- rabbit_mirror_queue_master
- rabbit_mirror_queue_misc
- rabbit_mirror_queue_mode
- rabbit_mirror_queue_mode_all
- rabbit_mirror_queue_mode_exactly
- rabbit_mirror_queue_mode_nodes
- rabbit_mirror_queue_slave
- rabbit_mirror_queue_sync
- rabbit_mnesia
- rabbit_mnesia_rename
- rabbit_msg_file
- rabbit_msg_record
- rabbit_msg_store
- rabbit_msg_store_ets_index
- rabbit_msg_store_gc
- rabbit_networking
- rabbit_networking_store
- rabbit_node_monitor
- rabbit_nodes
- rabbit_observer_cli
- rabbit_observer_cli_classic_queues
2023-05-19 00:25:08 +08:00
- rabbit_observer_cli_quorum_queues
2023-02-23 21:47:41 +08:00
- rabbit_osiris_metrics
- rabbit_parameter_validation
- rabbit_peer_discovery
- rabbit_peer_discovery_classic_config
- rabbit_peer_discovery_dns
- rabbit_plugins
- rabbit_policies
- rabbit_policy
- rabbit_policy_merge_strategy
- rabbit_prelaunch_cluster
- rabbit_prelaunch_enabled_plugins_file
- rabbit_prelaunch_feature_flags
- rabbit_prelaunch_logging
- rabbit_prequeue
- rabbit_priority_queue
- rabbit_process
- rabbit_queue_consumers
- rabbit_queue_decorator
- rabbit_queue_index
- rabbit_queue_location
- rabbit_queue_location_client_local
- rabbit_queue_location_min_masters
- rabbit_queue_location_random
- rabbit_queue_location_validator
- rabbit_queue_master_location_misc
- rabbit_queue_master_locator
- rabbit_queue_type
- rabbit_queue_type_util
- rabbit_quorum_memory_manager
- rabbit_quorum_queue
2023-05-17 08:06:01 +08:00
- rabbit_quorum_queue_periodic_membership_reconciliation
2023-02-23 21:47:41 +08:00
- rabbit_ra_registry
- rabbit_ra_systems
- rabbit_reader
- rabbit_recovery_terms
- rabbit_release_series
- rabbit_restartable_sup
- rabbit_router
- rabbit_runtime_parameters
- rabbit_ssl
- rabbit_stream_coordinator
- rabbit_stream_queue
- rabbit_stream_sac_coordinator
- rabbit_sup
- rabbit_sysmon_handler
- rabbit_sysmon_minder
- rabbit_table
- rabbit_time_travel_dbg
- rabbit_trace
- rabbit_tracking
- rabbit_tracking_store
- rabbit_upgrade_preparation
- rabbit_variable_queue
- rabbit_version
- rabbit_vhost
- rabbit_vhost_limit
- rabbit_vhost_msg_store
- rabbit_vhost_process
- rabbit_vhost_sup
- rabbit_vhost_sup_sup
- rabbit_vhost_sup_wrapper
- rabbit_vm
- supervised_lifecycle
- tcp_listener
- tcp_listener_sup
- term_to_binary_compat
- vhost
rabbit_common:
- app_utils
- code_version
- credit_flow
- delegate
- delegate_sup
- file_handle_cache
- file_handle_cache_stats
- gen_server2
- mirrored_supervisor_locks
- mnesia_sync
- pmon
- priority_queue
- rabbit_amqp_connection
- rabbit_amqqueue_common
- rabbit_auth_backend_dummy
- rabbit_auth_mechanism
- rabbit_authn_backend
- rabbit_authz_backend
- rabbit_basic_common
- rabbit_binary_generator
- rabbit_binary_parser
- rabbit_cert_info
- rabbit_channel_common
- rabbit_command_assembler
- rabbit_control_misc
- rabbit_core_metrics
- rabbit_data_coercion
- rabbit_date_time
- rabbit_env
- rabbit_error_logger_handler
- rabbit_event
- rabbit_framing
- rabbit_framing_amqp_0_8
- rabbit_framing_amqp_0_9_1
- rabbit_heartbeat
- rabbit_http_util
- rabbit_json
- rabbit_log
- rabbit_misc
- rabbit_msg_store_index
- rabbit_net
- rabbit_nodes_common
- rabbit_numerical
- rabbit_password
- rabbit_password_hashing
- rabbit_password_hashing_md5
- rabbit_password_hashing_sha256
- rabbit_password_hashing_sha512
- rabbit_pbe
- rabbit_peer_discovery_backend
- rabbit_policy_validator
- rabbit_queue_collector
- rabbit_registry
- rabbit_registry_class
- rabbit_resource_monitor_misc
- rabbit_runtime
- rabbit_runtime_parameter
- rabbit_semver
- rabbit_semver_parser
- rabbit_ssl_options
- rabbit_types
- rabbit_writer
- supervisor2
- vm_memory_monitor
- worker_pool
- worker_pool_sup
- worker_pool_worker
rabbitmq_amqp1_0:
- Elixir.RabbitMQ.CLI.Ctl.Commands.ListAmqp10ConnectionsCommand
- rabbit_amqp1_0
- rabbit_amqp1_0_channel
- rabbit_amqp1_0_incoming_link
- rabbit_amqp1_0_link_util
- rabbit_amqp1_0_message
- rabbit_amqp1_0_outgoing_link
- rabbit_amqp1_0_reader
- rabbit_amqp1_0_session
- rabbit_amqp1_0_session_process
- rabbit_amqp1_0_session_sup
- rabbit_amqp1_0_session_sup_sup
- rabbit_amqp1_0_util
- rabbit_amqp1_0_writer
rabbitmq_auth_backend_cache:
- rabbit_auth_backend_cache
- rabbit_auth_backend_cache_app
- rabbit_auth_cache
- rabbit_auth_cache_dict
- rabbit_auth_cache_ets
- rabbit_auth_cache_ets_segmented
- rabbit_auth_cache_ets_segmented_stateless
rabbitmq_auth_backend_http:
- rabbit_auth_backend_http
- rabbit_auth_backend_http_app
rabbitmq_auth_backend_ldap:
- rabbit_auth_backend_ldap
- rabbit_auth_backend_ldap_app
- rabbit_auth_backend_ldap_util
- rabbit_log_ldap
rabbitmq_auth_backend_oauth2:
- Elixir.RabbitMQ.CLI.Ctl.Commands.AddUaaKeyCommand
- rabbit_auth_backend_oauth2
- rabbit_auth_backend_oauth2_app
- rabbit_oauth2_scope
- uaa_jwks
- uaa_jwt
- uaa_jwt_jwk
- uaa_jwt_jwt
- wildcard
rabbitmq_auth_mechanism_ssl:
- rabbit_auth_mechanism_ssl
- rabbit_auth_mechanism_ssl_app
rabbitmq_aws:
- rabbitmq_aws
- rabbitmq_aws_app
- rabbitmq_aws_config
- rabbitmq_aws_json
- rabbitmq_aws_sign
- rabbitmq_aws_sup
- rabbitmq_aws_urilib
- rabbitmq_aws_xml
rabbitmq_consistent_hash_exchange:
- Elixir.RabbitMQ.CLI.Diagnostics.Commands.ConsistentHashExchangeRingStateCommand
- rabbit_db_ch_exchange
- rabbit_exchange_type_consistent_hash
rabbitmq_ct_client_helpers:
- rabbit_ct_client_helpers
rabbitmq_ct_helpers:
- cth_log_redirect_any_domains
- rabbit_control_helper
- rabbit_ct_broker_helpers
- rabbit_ct_config_schema
- rabbit_ct_helpers
- rabbit_ct_proper_helpers
- rabbit_ct_vm_helpers
- rabbit_mgmt_test_util
rabbitmq_event_exchange:
- rabbit_event_exchange_decorator
- rabbit_exchange_type_event
rabbitmq_federation:
- Elixir.RabbitMQ.CLI.Ctl.Commands.FederationStatusCommand
- Elixir.RabbitMQ.CLI.Ctl.Commands.RestartFederationLinkCommand
- rabbit_federation_app
- rabbit_federation_db
- rabbit_federation_event
- rabbit_federation_exchange
- rabbit_federation_exchange_link
- rabbit_federation_exchange_link_sup_sup
- rabbit_federation_link_sup
- rabbit_federation_link_util
- rabbit_federation_parameters
- rabbit_federation_pg
- rabbit_federation_queue
- rabbit_federation_queue_link
- rabbit_federation_queue_link_sup_sup
- rabbit_federation_status
- rabbit_federation_sup
- rabbit_federation_upstream
- rabbit_federation_upstream_exchange
- rabbit_federation_util
- rabbit_log_federation
rabbitmq_federation_management:
- rabbit_federation_mgmt
rabbitmq_jms_topic_exchange:
- rabbit_db_jms_exchange
- rabbit_jms_topic_exchange
- sjx_evaluator
rabbitmq_management:
- rabbit_mgmt_app
- rabbit_mgmt_cors
- rabbit_mgmt_csp
- rabbit_mgmt_db
- rabbit_mgmt_db_cache
- rabbit_mgmt_db_cache_sup
- rabbit_mgmt_dispatcher
- rabbit_mgmt_extension
- rabbit_mgmt_features
- rabbit_mgmt_headers
- rabbit_mgmt_hsts
- rabbit_mgmt_load_definitions
- rabbit_mgmt_login
- rabbit_mgmt_oauth_bootstrap
- rabbit_mgmt_reset_handler
- rabbit_mgmt_stats
- rabbit_mgmt_sup
- rabbit_mgmt_sup_sup
- rabbit_mgmt_util
- rabbit_mgmt_wm_aliveness_test
- rabbit_mgmt_wm_auth
- rabbit_mgmt_wm_auth_attempts
- rabbit_mgmt_wm_binding
- rabbit_mgmt_wm_bindings
- rabbit_mgmt_wm_channel
- rabbit_mgmt_wm_channels
- rabbit_mgmt_wm_channels_vhost
- rabbit_mgmt_wm_cluster_name
- rabbit_mgmt_wm_connection
- rabbit_mgmt_wm_connection_channels
- rabbit_mgmt_wm_connection_user_name
- rabbit_mgmt_wm_connections
- rabbit_mgmt_wm_connections_vhost
- rabbit_mgmt_wm_consumers
- rabbit_mgmt_wm_definitions
- rabbit_mgmt_wm_environment
- rabbit_mgmt_wm_exchange
- rabbit_mgmt_wm_exchange_publish
- rabbit_mgmt_wm_exchanges
- rabbit_mgmt_wm_extensions
- rabbit_mgmt_wm_feature_flag_enable
- rabbit_mgmt_wm_feature_flags
- rabbit_mgmt_wm_global_parameter
- rabbit_mgmt_wm_global_parameters
- rabbit_mgmt_wm_hash_password
- rabbit_mgmt_wm_health_check_alarms
- rabbit_mgmt_wm_health_check_certificate_expiration
- rabbit_mgmt_wm_health_check_local_alarms
- rabbit_mgmt_wm_health_check_node_is_mirror_sync_critical
- rabbit_mgmt_wm_health_check_node_is_quorum_critical
- rabbit_mgmt_wm_health_check_port_listener
- rabbit_mgmt_wm_health_check_protocol_listener
- rabbit_mgmt_wm_health_check_virtual_hosts
- rabbit_mgmt_wm_healthchecks
- rabbit_mgmt_wm_limit
- rabbit_mgmt_wm_limits
- rabbit_mgmt_wm_login
- rabbit_mgmt_wm_node
- rabbit_mgmt_wm_node_memory
- rabbit_mgmt_wm_node_memory_ets
- rabbit_mgmt_wm_nodes
- rabbit_mgmt_wm_operator_policies
- rabbit_mgmt_wm_operator_policy
- rabbit_mgmt_wm_overview
- rabbit_mgmt_wm_parameter
- rabbit_mgmt_wm_parameters
- rabbit_mgmt_wm_permission
- rabbit_mgmt_wm_permissions
- rabbit_mgmt_wm_permissions_user
- rabbit_mgmt_wm_permissions_vhost
- rabbit_mgmt_wm_policies
- rabbit_mgmt_wm_policy
- rabbit_mgmt_wm_queue
- rabbit_mgmt_wm_queue_actions
- rabbit_mgmt_wm_queue_get
- rabbit_mgmt_wm_queue_purge
- rabbit_mgmt_wm_queues
2023-06-13 05:36:54 +08:00
|
|
|
- rabbit_mgmt_wm_quorum_queue_replicas_add_member
|
|
|
|
|
- rabbit_mgmt_wm_quorum_queue_replicas_delete_member
|
2023-06-14 06:01:31 +08:00
|
|
|
- rabbit_mgmt_wm_quorum_queue_replicas_grow
|
|
|
|
|
- rabbit_mgmt_wm_quorum_queue_replicas_shrink
|
2023-02-23 21:47:41 +08:00
|
|
|
- rabbit_mgmt_wm_rebalance_queues
|
|
|
|
|
- rabbit_mgmt_wm_redirect
|
|
|
|
|
- rabbit_mgmt_wm_reset
|
|
|
|
|
- rabbit_mgmt_wm_static
|
|
|
|
|
- rabbit_mgmt_wm_topic_permission
|
|
|
|
|
- rabbit_mgmt_wm_topic_permissions
|
|
|
|
|
- rabbit_mgmt_wm_topic_permissions_user
|
|
|
|
|
- rabbit_mgmt_wm_topic_permissions_vhost
|
|
|
|
|
- rabbit_mgmt_wm_user
|
|
|
|
|
- rabbit_mgmt_wm_user_limit
|
|
|
|
|
- rabbit_mgmt_wm_user_limits
|
|
|
|
|
- rabbit_mgmt_wm_users
|
|
|
|
|
- rabbit_mgmt_wm_users_bulk_delete
|
|
|
|
|
- rabbit_mgmt_wm_vhost
|
|
|
|
|
- rabbit_mgmt_wm_vhost_restart
|
|
|
|
|
- rabbit_mgmt_wm_vhosts
|
|
|
|
|
- rabbit_mgmt_wm_whoami
|
|
|
|
|
rabbitmq_management_agent:
- Elixir.RabbitMQ.CLI.Ctl.Commands.ResetStatsDbCommand
- exometer_slide
- rabbit_mgmt_agent_app
- rabbit_mgmt_agent_config
- rabbit_mgmt_agent_sup
- rabbit_mgmt_agent_sup_sup
- rabbit_mgmt_data
- rabbit_mgmt_data_compat
- rabbit_mgmt_db_handler
- rabbit_mgmt_external_stats
- rabbit_mgmt_ff
- rabbit_mgmt_format
- rabbit_mgmt_gc
- rabbit_mgmt_metrics_collector
- rabbit_mgmt_metrics_gc
- rabbit_mgmt_storage
rabbitmq_mqtt:
- Elixir.RabbitMQ.CLI.Ctl.Commands.DecommissionMqttNodeCommand
- Elixir.RabbitMQ.CLI.Ctl.Commands.ListMqttConnectionsCommand
- mqtt_machine
- mqtt_machine_v0
- mqtt_node
- rabbit_mqtt
- rabbit_mqtt_collector
- rabbit_mqtt_confirms
- rabbit_mqtt_ff
- rabbit_mqtt_internal_event_handler
- rabbit_mqtt_keepalive
- rabbit_mqtt_packet
- rabbit_mqtt_processor
- rabbit_mqtt_qos0_queue
- rabbit_mqtt_reader
- rabbit_mqtt_retained_msg_store
- rabbit_mqtt_retained_msg_store_dets
- rabbit_mqtt_retained_msg_store_ets
- rabbit_mqtt_retained_msg_store_noop
- rabbit_mqtt_retainer
- rabbit_mqtt_retainer_sup
- rabbit_mqtt_sup
- rabbit_mqtt_util
rabbitmq_peer_discovery_aws:
- rabbit_peer_discovery_aws
- rabbitmq_peer_discovery_aws
rabbitmq_peer_discovery_common:
- rabbit_peer_discovery_cleanup
- rabbit_peer_discovery_common_app
- rabbit_peer_discovery_common_sup
- rabbit_peer_discovery_config
- rabbit_peer_discovery_httpc
- rabbit_peer_discovery_util
rabbitmq_peer_discovery_consul:
- rabbit_peer_discovery_consul
- rabbitmq_peer_discovery_consul
- rabbitmq_peer_discovery_consul_app
- rabbitmq_peer_discovery_consul_health_check_helper
- rabbitmq_peer_discovery_consul_sup
rabbitmq_peer_discovery_etcd:
- rabbit_peer_discovery_etcd
- rabbitmq_peer_discovery_etcd
- rabbitmq_peer_discovery_etcd_app
- rabbitmq_peer_discovery_etcd_sup
- rabbitmq_peer_discovery_etcd_v3_client
rabbitmq_peer_discovery_k8s:
- rabbit_peer_discovery_k8s
- rabbitmq_peer_discovery_k8s
- rabbitmq_peer_discovery_k8s_app
- rabbitmq_peer_discovery_k8s_node_monitor
- rabbitmq_peer_discovery_k8s_sup
rabbitmq_prelaunch:
- rabbit_boot_state
- rabbit_boot_state_sup
- rabbit_boot_state_systemd
- rabbit_boot_state_xterm_titlebar
- rabbit_logger_fmt_helpers
- rabbit_logger_json_fmt
- rabbit_logger_std_h
- rabbit_logger_text_fmt
- rabbit_prelaunch
- rabbit_prelaunch_app
- rabbit_prelaunch_conf
- rabbit_prelaunch_dist
- rabbit_prelaunch_early_logging
- rabbit_prelaunch_erlang_compat
- rabbit_prelaunch_errors
- rabbit_prelaunch_file
- rabbit_prelaunch_sighandler
- rabbit_prelaunch_sup
rabbitmq_prometheus:
- prometheus_process_collector
- prometheus_rabbitmq_alarm_metrics_collector
- prometheus_rabbitmq_core_metrics_collector
- prometheus_rabbitmq_global_metrics_collector
- rabbit_prometheus_app
- rabbit_prometheus_dispatcher
- rabbit_prometheus_handler
rabbitmq_random_exchange:
- rabbit_exchange_type_random
rabbitmq_recent_history_exchange:
- rabbit_db_rh_exchange
- rabbit_exchange_type_recent_history
rabbitmq_sharding:
- rabbit_sharding_exchange_decorator
- rabbit_sharding_exchange_type_modulus_hash
- rabbit_sharding_interceptor
- rabbit_sharding_policy_validator
- rabbit_sharding_shard
- rabbit_sharding_util
rabbitmq_shovel:
- Elixir.RabbitMQ.CLI.Ctl.Commands.DeleteShovelCommand
- Elixir.RabbitMQ.CLI.Ctl.Commands.RestartShovelCommand
- Elixir.RabbitMQ.CLI.Ctl.Commands.ShovelStatusCommand
- rabbit_amqp091_shovel
- rabbit_amqp10_shovel
- rabbit_log_shovel
- rabbit_shovel
- rabbit_shovel_behaviour
- rabbit_shovel_config
- rabbit_shovel_dyn_worker_sup
- rabbit_shovel_dyn_worker_sup_sup
- rabbit_shovel_locks
- rabbit_shovel_parameters
- rabbit_shovel_status
- rabbit_shovel_sup
- rabbit_shovel_util
- rabbit_shovel_worker
- rabbit_shovel_worker_sup
rabbitmq_shovel_management:
- rabbit_shovel_mgmt
- rabbit_shovel_mgmt_util
rabbitmq_stomp:
- Elixir.RabbitMQ.CLI.Ctl.Commands.ListStompConnectionsCommand
- rabbit_stomp
- rabbit_stomp_client_sup
- rabbit_stomp_connection_info
- rabbit_stomp_frame
- rabbit_stomp_internal_event_handler
- rabbit_stomp_processor
- rabbit_stomp_reader
- rabbit_stomp_sup
- rabbit_stomp_util
rabbitmq_stream:
- Elixir.RabbitMQ.CLI.Ctl.Commands.AddSuperStreamCommand
- Elixir.RabbitMQ.CLI.Ctl.Commands.DeleteSuperStreamCommand
- Elixir.RabbitMQ.CLI.Ctl.Commands.ListStreamConnectionsCommand
- Elixir.RabbitMQ.CLI.Ctl.Commands.ListStreamConsumerGroupsCommand
- Elixir.RabbitMQ.CLI.Ctl.Commands.ListStreamConsumersCommand
- Elixir.RabbitMQ.CLI.Ctl.Commands.ListStreamGroupConsumersCommand
- Elixir.RabbitMQ.CLI.Ctl.Commands.ListStreamPublishersCommand
- rabbit_stream
- rabbit_stream_connection_sup
- rabbit_stream_manager
- rabbit_stream_metrics
- rabbit_stream_metrics_gc
- rabbit_stream_reader
- rabbit_stream_sup
- rabbit_stream_utils
rabbitmq_stream_common:
- rabbit_stream_core
rabbitmq_stream_management:
- rabbit_stream_connection_consumers_mgmt
- rabbit_stream_connection_mgmt
- rabbit_stream_connection_publishers_mgmt
- rabbit_stream_connections_mgmt
- rabbit_stream_connections_vhost_mgmt
- rabbit_stream_consumers_mgmt
- rabbit_stream_management_utils
- rabbit_stream_mgmt_db
- rabbit_stream_publishers_mgmt
rabbitmq_top:
- rabbit_top_app
- rabbit_top_extension
- rabbit_top_sup
- rabbit_top_util
- rabbit_top_wm_ets_tables
- rabbit_top_wm_process
- rabbit_top_wm_processes
- rabbit_top_worker
rabbitmq_tracing:
- rabbit_tracing_app
- rabbit_tracing_consumer
- rabbit_tracing_consumer_sup
- rabbit_tracing_files
- rabbit_tracing_mgmt
- rabbit_tracing_sup
- rabbit_tracing_traces
- rabbit_tracing_util
- rabbit_tracing_wm_file
- rabbit_tracing_wm_files
- rabbit_tracing_wm_trace
- rabbit_tracing_wm_traces
rabbitmq_trust_store:
- rabbit_trust_store
- rabbit_trust_store_app
- rabbit_trust_store_certificate_provider
- rabbit_trust_store_file_provider
- rabbit_trust_store_http_provider
- rabbit_trust_store_sup
rabbitmq_web_dispatch:
- rabbit_cowboy_middleware
- rabbit_cowboy_redirect
- rabbit_cowboy_stream_h
- rabbit_web_dispatch
- rabbit_web_dispatch_access_control
- rabbit_web_dispatch_app
- rabbit_web_dispatch_listing_handler
- rabbit_web_dispatch_registry
- rabbit_web_dispatch_sup
- rabbit_web_dispatch_util
- webmachine_log
- webmachine_log_handler
rabbitmq_web_mqtt:
- rabbit_web_mqtt_app
- rabbit_web_mqtt_handler
- rabbit_web_mqtt_stream_handler
rabbitmq_web_mqtt_examples:
- rabbit_web_mqtt_examples_app
rabbitmq_web_stomp:
- rabbit_web_stomp_app
- rabbit_web_stomp_connection_sup
- rabbit_web_stomp_handler
- rabbit_web_stomp_internal_event_handler
- rabbit_web_stomp_listener
- rabbit_web_stomp_middleware
- rabbit_web_stomp_stream_handler
- rabbit_web_stomp_sup
rabbitmq_web_stomp_examples:
- rabbit_web_stomp_examples_app
ranch:
- ranch
- ranch_acceptor
- ranch_acceptors_sup
- ranch_app
- ranch_conns_sup
- ranch_conns_sup_sup
- ranch_crc32c
- ranch_embedded_sup
- ranch_listener_sup
- ranch_protocol
- ranch_proxy_header
- ranch_server
- ranch_server_proxy
- ranch_ssl
- ranch_sup
- ranch_tcp
- ranch_transport
rebar:
- rebar
- rebar_abnfc_compiler
- rebar_app_utils
- rebar_appups
- rebar_asn1_compiler
- rebar_base_compiler
- rebar_cleaner
- rebar_config
- rebar_core
- rebar_cover_utils
- rebar_ct
- rebar_deps
- rebar_dia_compiler
- rebar_dialyzer
- rebar_edoc
- rebar_erlc_compiler
- rebar_erlydtl_compiler
- rebar_escripter
- rebar_eunit
- rebar_file_utils
- rebar_getopt
- rebar_lfe_compiler
- rebar_log
- rebar_metacmds
- rebar_mustache
- rebar_neotoma_compiler
- rebar_otp_app
- rebar_otp_appup
- rebar_port_compiler
- rebar_proto_compiler
- rebar_proto_gpb_compiler
- rebar_protobuffs_compiler
- rebar_qc
- rebar_rand_compat
- rebar_rel_utils
- rebar_reltool
- rebar_require_vsn
- rebar_shell
- rebar_subdirs
- rebar_templater
- rebar_upgrade
- rebar_utils
- rebar_xref
- rmemo
recon:
- recon
- recon_alloc
- recon_lib
- recon_map
- recon_rec
- recon_trace
redbug:
- redbug
- redbug_compiler
- redbug_dtop
- redbug_lexer
- redbug_parser
- redbug_targ
seshat:
- seshat
- seshat_app
- seshat_counters_server
- seshat_sup
ssl_verify_fun:
- ssl_verify_fingerprint
- ssl_verify_fun_cert_helpers
- ssl_verify_fun_encodings
- ssl_verify_hostname
- ssl_verify_pk
- ssl_verify_string
- ssl_verify_util
stdout_formatter:
- stdout_formatter
- stdout_formatter_paragraph
- stdout_formatter_table
- stdout_formatter_utils
syslog:
- syslog
- syslog_error_h
- syslog_lager_backend
- syslog_lib
- syslog_logger
- syslog_logger_h
- syslog_monitor
- syslog_rfc3164
- syslog_rfc5424
sysmon_handler:
- sysmon_handler_app
- sysmon_handler_example_handler
- sysmon_handler_filter
- sysmon_handler_sup
- sysmon_handler_testhandler
systemd:
- systemd
- systemd_app
- systemd_journal_h
- systemd_kmsg_formatter
- systemd_protocol
- systemd_socket
- systemd_sup
- systemd_watchdog
thoas:
- thoas
- thoas_decode
- thoas_encode
trust_store_http:
- trust_store_http
- trust_store_http_app
- trust_store_http_sup
- trust_store_invalid_handler
- trust_store_list_handler
unicode_util_compat:
- string_compat
- unicode_util_compat
unuseddep:
- unuseddep