rabbitmq-server/deps/rabbit/BUILD.bazel

load("@rules_erlang//:eunit2.bzl", "eunit")
load("@rules_erlang//:xref2.bzl", "xref")
load("@rules_erlang//:dialyze.bzl", "dialyze", "plt")
load("//:rabbitmq_home.bzl", "rabbitmq_home")
load("//:rabbitmq_run.bzl", "rabbitmq_run")
load(
"//:rabbitmq.bzl",
"RABBITMQ_DIALYZER_OPTS",
"assert_suites",
"rabbitmq_app",
"rabbitmq_integration_suite",
"rabbitmq_suite",
"without",
)
load(
":app.bzl",
"all_beam_files",
"all_srcs",
"all_test_beam_files",
"test_suite_beam_files",
)
load(":bats.bzl", "bats")
exports_files(glob([
"scripts/**",
]) + ["INSTALL"])
_APP_ENV = """[
%% See https://www.rabbitmq.com/docs/consumers#acknowledgement-timeout
%% 30 minutes
{consumer_timeout, 1800000},
{tcp_listeners, [5672]},
{num_tcp_acceptors, 10},
{ssl_listeners, []},
{num_ssl_acceptors, 10},
{ssl_options, []},
{vm_memory_high_watermark, 0.6},
{vm_memory_calculation_strategy, rss},
{disk_free_limit, 50000000}, %% 50MB
{backing_queue_module, rabbit_variable_queue},
%% 0 ("no limit") would make a better default, but that
%% breaks the QPid Java client
{frame_max, 131072},
%% see rabbitmq-server#1593
{channel_max, 2047},
{session_max_per_connection, 64},
{link_max_per_session, 256},
{ranch_connection_max, infinity},
{heartbeat, 60},
{msg_store_file_size_limit, 16777216},
{msg_store_shutdown_timeout, 600000},
{fhc_write_buffering, true},
{fhc_read_buffering, false},
{queue_index_max_journal_entries, 32768},
{queue_index_embed_msgs_below, 4096},
{default_user, <<"guest">>},
{default_pass, <<"guest">>},
{default_user_tags, [administrator]},
{default_vhost, <<"/">>},
{default_permissions, [<<".*">>, <<".*">>, <<".*">>]},
{loopback_users, [<<"guest">>]},
{password_hashing_module, rabbit_password_hashing_sha256},
{server_properties, []},
{collect_statistics, none},
{collect_statistics_interval, 5000},
{mnesia_table_loading_retry_timeout, 30000},
{mnesia_table_loading_retry_limit, 10},
{anonymous_login_user, <<"guest">>},
{anonymous_login_pass, <<"guest">>},
{auth_mechanisms, ['PLAIN', 'AMQPLAIN', 'ANONYMOUS']},
{auth_backends, [rabbit_auth_backend_internal]},
{delegate_count, 16},
{trace_vhosts, []},
{ssl_cert_login_from, distinguished_name},
{ssl_handshake_timeout, 5000},
{ssl_allow_poodle_attack, false},
{handshake_timeout, 10000},
{reverse_dns_lookups, false},
{cluster_partition_handling, ignore},
{cluster_keepalive_interval, 10000},
{autoheal_state_transition_timeout, 60000},
{tcp_listen_options, [{backlog, 128},
{nodelay, true},
{linger, {true, 0}},
{exit_on_close, false}
]},
{ssl_apps, [asn1, crypto, public_key, ssl]},
%% see rabbitmq-server#114
{classic_queue_flow_control, true},
%% see rabbitmq-server#227 and related tickets.
%% msg_store_credit_disc_bound only takes effect when
%% messages are persisted to the message store. If messages
%% are embedded on the queue index, then modifying this
%% setting has no effect because credit_flow is not used when
%% writing to the queue index. See the setting
%% queue_index_embed_msgs_below above.
{msg_store_credit_disc_bound, {4000, 800}},
{msg_store_io_batch_size, 4096},
%% see rabbitmq-server#143,
%% rabbitmq-server#949, rabbitmq-server#1098
{credit_flow_default_credit, {400, 200}},
{quorum_commands_soft_limit, 32},
{quorum_cluster_size, 3},
%% see rabbitmq-server#248
%% and rabbitmq-server#667
{channel_operation_timeout, 15000},
%% used by rabbit_peer_discovery_classic_config
{cluster_nodes, {[], disc}},
{config_entry_decoder, [{passphrase, undefined}]},
{background_gc_enabled, false},
{background_gc_target_interval, 60000},
%% rabbitmq-server#589
{proxy_protocol, false},
{disk_monitor_failure_retries, 10},
{disk_monitor_failure_retry_interval, 120000},
%% either "stop_node" or "continue".
%% by default we choose to not terminate the entire node if one
%% vhost had to shut down, see server#1158 and server#1280
{vhost_restart_strategy, continue},
%% {global, prefetch count}
{default_consumer_prefetch, {false, 0}},
%% interval at which the channel can perform periodic actions
{channel_tick_interval, 60000},
%% Default max message size is 16 MB
{max_message_size, 16777216},
%% Socket writer will run GC every 1 GB of outgoing data
{writer_gc_threshold, 1000000000},
%% interval at which connection/channel tracking executes post operations
{tracking_execution_timeout, 15000},
{stream_messages_soft_limit, 256},
{track_auth_attempt_source, false},
{credentials_obfuscation_fallback_secret, <<"nocookie">>},
{dead_letter_worker_consumer_prefetch, 32},
{dead_letter_worker_publisher_confirm_timeout, 180000},
{vhost_process_reconciliation_run_interval, 30},
%% for testing
{vhost_process_reconciliation_enabled, true},
{license_line, "Licensed under the MPL 2.0. Website: https://rabbitmq.com"}
]
"""
APP_MODULE = "rabbit"
APP_REGISTERED = [
"rabbit_amqqueue_sup",
"rabbit_direct_client_sup",
"rabbit_log",
"rabbit_node_monitor",
"rabbit_router",
]
all_beam_files(name = "all_beam_files")
all_test_beam_files(name = "all_test_beam_files")
all_srcs(name = "all_srcs")
test_suite_beam_files(name = "test_suite_beam_files")
# gazelle:erlang_apps_dirs apps
# gazelle:erlang_app_extra_app sasl
# gazelle:erlang_app_extra_app os_mon
# gazelle:erlang_app_extra_app inets
# gazelle:erlang_app_extra_app compiler
# gazelle:erlang_app_extra_app crypto
# gazelle:erlang_app_extra_app public_key
# gazelle:erlang_app_extra_app ssl
# gazelle:erlang_app_extra_app syntax_tools
# gazelle:erlang_app_extra_app xmerl
# gazelle:erlang_app_dep cuttlefish
# gazelle:erlang_app_dep syslog
# gazelle:erlang_app_dep observer_cli
# gazelle:erlang_app_dep redbug
# gazelle:erlang_app_dep sysmon_handler
# gazelle:erlang_app_dep systemd
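# The gazelle directives above are consumed by the rules_erlang gazelle
# extension when this file is regenerated: extra_app entries are added to the
# `applications` list of the generated .app file, and app_dep entries declare
# runtime dependencies that cannot be inferred from the sources alone.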
rabbitmq_app(
name = "erlang_app",
srcs = [":all_srcs"],
hdrs = [":public_hdrs"],
app_description = "RabbitMQ",
app_env = _APP_ENV,
    app_module = APP_MODULE,
    app_name = "rabbit",
    app_registered = APP_REGISTERED,
    beam_files = [":beam_files"],
    extra_apps = [
        "compiler",
        "inets",
        "os_mon",
        "public_key",
        "sasl",
        "ssl",
        "syntax_tools",
        "xmerl",
        "crypto",
    ],
    license_files = [":license_files"],
    priv = [":priv"],
    deps = [
        "//deps/amqp10_common:erlang_app",
        "//deps/rabbit_common:erlang_app",
"//deps/rabbitmq_prelaunch:erlang_app",
"@cuttlefish//:erlang_app",
"@gen_batch_server//:erlang_app",
"@khepri//:erlang_app",
"@khepri_mnesia_migration//:erlang_app",
"@observer_cli//:erlang_app",
"@osiris//:erlang_app",
"@ra//:erlang_app",
"@ranch//:erlang_app",
"@recon//:erlang_app",
"@redbug//:erlang_app",
"@seshat//:erlang_app",
"@stdout_formatter//:erlang_app",
"@syslog//:erlang_app",
"@sysmon_handler//:erlang_app",
"@systemd//:erlang_app",
],
)
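# The following targets provide static analysis for the compiled application:
# xref() checks the beam files for calls to undefined functions, and plt()
# builds the dialyzer persistent lookup table (PLT) that the dialyze step
# consumes for success-typing analysis.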
xref(
name = "xref",
    additional_libs = [
        "//deps/rabbitmq_cli:erlang_app",  # keep
    ],
    target = ":erlang_app",
)
plt(
name = "deps_plt",
apps = [
"mnesia", # keep
"runtime_tools", # keep
],
for_target = ":erlang_app",
ignore_warnings = True,
libs = ["@rules_elixir//elixir"], # keep
plt = "//:base_plt",
    deps = [
47 ms ``` RabbitMQ 3.x: ``` ---------------------- Sender ----------------------- --------------------- Receiver ---------------------- -------- Time [s] Count [m] Rate [m/s] CPU [%] RSS [M] Time [s] Count [m] Rate [m/s] CPU [%] RSS [M] Lat [ms] ----------------------------------------------------- ----------------------------------------------------- -------- 2.1 130,814 65,342 9 69.9 2.1 18,430 9,206 5 7.6 1,221 4.1 163,580 16,375 5 70.2 4.1 18,867 218 0 7.6 2,168 6.1 229,114 32,767 6 70.2 6.1 18,867 0 0 7.6 0 8.1 294,648 32,734 7 70.2 8.1 18,867 0 0 7.6 0 10.1 360,182 32,734 6 70.2 10.1 18,867 0 0 7.6 0 12.1 425,716 32,767 6 70.2 12.1 18,867 0 0 7.6 0 receiver timed out 14.1 458,482 16,367 5 70.2 14.1 18,867 0 0 7.6 0 quiver: error: PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/qq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-b1gcup43' returned non-zero exit status 1. Traceback (most recent call last): File "/usr/local/lib/quiver/python/quiver/pair.py", line 144, in run _plano.wait(receiver, check=True) File "/usr/local/lib/quiver/python/plano/main.py", line 1243, in wait raise PlanoProcessError(proc) plano.main.PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/qq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-b1gcup43' returned non-zero exit status 1. ``` 3. Stream: ``` quiver-arrow send //host.docker.internal//amq/queue/sq1 --durable --count 1m -d 10m --summary --verbose ``` This commit: ``` Count ............................................. 1,000,000 messages Duration ................................................ 8.7 seconds Message rate ........................................ 115,154 messages/s ``` RabbitMQ 3.x: ``` Count ............................................. 1,000,000 messages Duration ............................................... 21.2 seconds Message rate ......................................... 47,232 messages/s ``` ### Memory usage Start RabbitMQ: ``` ERL_MAX_PORTS=3000000 RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+P 3000000 +S 6" make run-broker PLUGINS="rabbitmq_amqp1_0" FULL=1 RABBITMQ_CONFIG_FILE="rabbitmq.conf" ``` ``` /bin/cat rabbitmq.conf tcp_listen_options.sndbuf = 2048 tcp_listen_options.recbuf = 2048 vm_memory_high_watermark.relative = 0.95 vm_memory_high_watermark_paging_ratio = 0.95 loopback_users = none ``` Create 50k connections with 2 sessions per connection, i.e. 100k session in total: ```go package main import ( "context" "log" "time" "github.com/Azure/go-amqp" ) func main() { for i := 0; i < 50000; i++ { conn, err := amqp.Dial(context.TODO(), "amqp://nuc", &amqp.ConnOptions{SASLType: amqp.SASLTypeAnonymous()}) if err != nil { log.Fatal("dialing AMQP server:", err) } _, err = conn.NewSession(context.TODO(), nil) if err != nil { log.Fatal("creating AMQP session:", err) } _, err = conn.NewSession(context.TODO(), nil) if err != nil { log.Fatal("creating AMQP session:", err) } } log.Println("opened all connections") time.Sleep(5 * time.Hour) } ``` This commit: ``` erlang:memory(). [{total,4586376480}, {processes,4025898504}, {processes_used,4025871040}, {system,560477976}, {atom,1048841}, {atom_used,1042841}, {binary,233228608}, {code,21449982}, {ets,108560464}] erlang:system_info(process_count). 450289 ``` 7 procs per connection + 1 proc per session. 
(7 + 2*1) * 50,000 = 450,000 procs RabbitMQ 3.x: ``` erlang:memory(). [{total,15168232704}, {processes,14044779256}, {processes_used,14044755120}, {system,1123453448}, {atom,1057033}, {atom_used,1052587}, {binary,236381264}, {code,21790238}, {ets,391423744}] erlang:system_info(process_count). 1850309 ``` 7 procs per connection + 15 per session (7 + 2*15) * 50,000 = 1,850,000 procs 50k connections + 100k session require with this commit: 4.5 GB in RabbitMQ 3.x: 15 GB ## Future work 1. More efficient parser and serializer 2. TODO in mc_amqp: Do not store the parsed message on disk. 3. Implement both AMQP HTTP extension and AMQP management extension to allow AMQP clients to create RabbitMQ objects (queues, exchanges, ...).
"//deps/rabbitmq_cli:erlang_app", # keep
],
)
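Points 25 and 26 of the "Support AMQP 1.0 natively" blame note above describe registering AMQP connections and sessions in a node-local pg scope instead of pg_local. Below is a minimal sketch of that pattern; the scope name (the node name) comes straight from the OTP documentation quoted in the note, while the module and group names are illustrative assumptions, not the identifiers rabbit actually uses.

```erlang
%% Minimal sketch of node-local pg registration for AMQP sessions.
%% Group and module names are illustrative assumptions.
-module(amqp_session_registry_sketch).
-export([start_scope/0, register_session/1, local_sessions/0]).

%% Using the node name keeps the pg scope local-only, as the OTP
%% documentation quoted in the blame note recommends.
start_scope() ->
    pg:start_link(node()).

%% Join a session process to a node-local group.
register_session(SessionPid) when is_pid(SessionPid) ->
    pg:join(node(), amqp_sessions, SessionPid).

%% List the session processes registered on this node.
local_sessions() ->
    pg:get_local_members(node(), amqp_sessions).
```

Because membership never leaves the local node, mass disconnects do not fan out cluster-wide, which is the pg_local bottleneck the note mentions for MQTT.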
dialyze(
name = "dialyze",
dialyzer_opts = RABBITMQ_DIALYZER_OPTS,
plt = ":deps_plt",
target = ":erlang_app",
)
bats(
srcs = glob(["test/**/*.bats"]),
data = glob(
["scripts/*"],
exclude = ["scripts/*.bat"],
),
tags = ["bats"],
)
rabbitmq_home(
name = "broker-for-tests-home",
testonly = True,
plugins = [
":test_erlang_app",
"//deps/rabbitmq_ct_client_helpers:erlang_app",
"//deps/rabbitmq_amqp1_0:erlang_app",
"@inet_tcp_proxy_dist//:erlang_app",
"@meck//:erlang_app",
],
)
rabbitmq_run(
name = "rabbitmq-for-tests-run",
testonly = True,
home = ":broker-for-tests-home",
)
rabbitmq_suite(
name = "amqqueue_backward_compatibility_SUITE",
size = "small",
)
rabbitmq_integration_suite(
name = "backing_queue_SUITE",
size = "large",
)
rabbitmq_integration_suite(
name = "channel_interceptor_SUITE",
size = "medium",
additional_beam = [
"test/dummy_interceptor.beam",
"test/failing_dummy_interceptor.beam",
],
)
rabbitmq_integration_suite(
name = "channel_operation_timeout_SUITE",
size = "medium",
additional_beam = [
"test/channel_operation_timeout_test_queue.beam",
],
)
rabbitmq_integration_suite(
name = "classic_queue_prop_SUITE",
size = "large",
deps = [
"@proper//:erlang_app",
],
)
rabbitmq_integration_suite(
name = "cluster_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "clustering_events_SUITE",
size = "medium",
additional_beam = [
":test_event_recorder_beam",
],
)
rabbitmq_integration_suite(
name = "quorum_queue_member_reconciliation_SUITE",
size = "medium",
additional_beam = [
":test_queue_utils_beam",
],
)
rabbitmq_integration_suite(
name = "cluster_limit_SUITE",
size = "medium",
additional_beam = [
":test_queue_utils_beam",
],
)
rabbitmq_integration_suite(
name = "clustering_management_SUITE",
size = "large",
additional_beam = [
":test_clustering_utils_beam",
],
shard_count = 45,
sharding_method = "case",
)
rabbitmq_integration_suite(
name = "clustering_recovery_SUITE",
size = "medium",
Allow to use Khepri database to store metadata instead of Mnesia [Why] Mnesia is a very powerful and convenient tool for Erlang applications: it is a persistent disc-based database, it handles replication accross multiple Erlang nodes and it is available out-of-the-box from the Erlang/OTP distribution. RabbitMQ relies on Mnesia to manage all its metadata: * virtual hosts' properties * intenal users * queue, exchange and binding declarations (not queues data) * runtime parameters and policies * ... Unfortunately Mnesia makes it difficult to handle network partition and, as a consequence, the merge conflicts between Erlang nodes once the network partition is resolved. RabbitMQ provides several partition handling strategies but they are not bullet-proof. Users still hit situations where it is a pain to repair a cluster following a network partition. [How] @kjnilsson created Ra [1], a Raft consensus library that RabbitMQ already uses successfully to implement quorum queues and streams for instance. Those queues do not suffer from network partitions. We created Khepri [2], a new persistent and replicated database engine based on Ra and we want to use it in place of Mnesia in RabbitMQ to solve the problems with network partitions. This patch integrates Khepri as an experimental feature. When enabled, RabbitMQ will store all its metadata in Khepri instead of Mnesia. This change comes with behavior changes. While Khepri remains disabled, you should see no changes to the behavior of RabbitMQ. If there are changes, it is a bug. After Khepri is enabled, there are significant changes of behavior that you should be aware of. Because it is based on the Raft consensus algorithm, when there is a network partition, only the cluster members that are in the partition with at least `(Number of nodes in the cluster ÷ 2) + 1` number of nodes can "make progress". In other words, only those nodes may write to the Khepri database and read from the database and expect a consistent result. For instance in a cluster of 5 RabbitMQ nodes: * If there are two partitions, one with 3 nodes, one with 2 nodes, only the group of 3 nodes will be able to write to the database. * If there are three partitions, two with 2 nodes, one with 1 node, none of the group can write to the database. Because the Khepri database will be used for all kind of metadata, it means that RabbitMQ nodes that can't write to the database will be unable to perform some operations. A list of operations and what to expect is documented in the associated pull request and the RabbitMQ website. This requirement from Raft also affects the startup of RabbitMQ nodes in a cluster. Indeed, at least a quorum number of nodes must be started at once to allow nodes to become ready. To enable Khepri, you need to enable the `khepri_db` feature flag: rabbitmqctl enable_feature_flag khepri_db When the `khepri_db` feature flag is enabled, the migration code performs the following two tasks: 1. It synchronizes the Khepri cluster membership from the Mnesia cluster. It uses `mnesia_to_khepri:sync_cluster_membership/1` from the `khepri_mnesia_migration` application [3]. 2. It copies data from relevant Mnesia tables to Khepri, doing some conversion if necessary on the way. Again, it uses `mnesia_to_khepri:copy_tables/4` from `khepri_mnesia_migration` to do it. This can be performed on a running standalone RabbitMQ node or cluster. Data will be migrated from Mnesia to Khepri without any service interruption. 
Note that during the migration, the performance may decrease and the memory footprint may go up. Because this feature flag is considered experimental, it is not enabled by default even on a brand new RabbitMQ deployment. More about the implementation details below: In the past months, all accesses to Mnesia were isolated in a collection of `rabbit_db*` modules. This is where the integration of Khepri mostly takes place: we use a function called `rabbit_khepri:handle_fallback/1` which selects the database and perform the query or the transaction. Here is an example from `rabbit_db_vhost`: * Up until RabbitMQ 3.12.x: get(VHostName) when is_binary(VHostName) -> get_in_mnesia(VHostName). * Starting with RabbitMQ 3.13.0: get(VHostName) when is_binary(VHostName) -> rabbit_khepri:handle_fallback( #{mnesia => fun() -> get_in_mnesia(VHostName) end, khepri => fun() -> get_in_khepri(VHostName) end}). This `rabbit_khepri:handle_fallback/1` function relies on two things: 1. the fact that the `khepri_db` feature flag is enabled, in which case it always executes the Khepri-based variant. 4. the ability or not to read and write to Mnesia tables otherwise. Before the feature flag is enabled, or during the migration, the function will try to execute the Mnesia-based variant. If it succeeds, then it returns the result. If it fails because one or more Mnesia tables can't be used, it restarts from scratch: it means the feature flag is being enabled and depending on the outcome, either the Mnesia-based variant will succeed (the feature flag couldn't be enabled) or the feature flag will be marked as enabled and it will call the Khepri-based variant. The meat of this function really lives in the `khepri_mnesia_migration` application [3] and `rabbit_khepri:handle_fallback/1` is a wrapper on top of it that knows about the feature flag. However, some calls to the database do not depend on the existence of Mnesia tables, such as functions where we need to learn about the members of a cluster. For those, we can't rely on exceptions from Mnesia. Therefore, we just look at the state of the feature flag to determine which database to use. There are two situations though: * Sometimes, we need the feature flag state query to block because the function interested in it can't return a valid answer during the migration. Here is an example: case rabbit_khepri:is_enabled(RemoteNode) of true -> can_join_using_khepri(RemoteNode); false -> can_join_using_mnesia(RemoteNode) end * Sometimes, we need the feature flag state query to NOT block (for instance because it would cause a deadlock). Here is an example: case rabbit_khepri:get_feature_state() of enabled -> members_using_khepri(); _ -> members_using_mnesia() end Direct accesses to Mnesia still exists. They are limited to code that is specific to Mnesia such as classic queue mirroring or network partitions handling strategies. Now, to discover the Mnesia tables to migrate and how to migrate them, we use an Erlang module attribute called `rabbit_mnesia_tables_to_khepri_db` which indicates a list of Mnesia tables and an associated converter module. Here is an example in the `rabbitmq_recent_history_exchange` plugin: -rabbit_mnesia_tables_to_khepri_db( [{?RH_TABLE, rabbit_db_rh_exchange_m2k_converter}]). The converter module — `rabbit_db_rh_exchange_m2k_converter` in this example — is is fact a "sub" converter module called but `rabbit_db_m2k_converter`. See the documentation of a `mnesia_to_khepri` converter module to learn more about these modules. 
[1] https://github.com/rabbitmq/ra [2] https://github.com/rabbitmq/khepri [3] https://github.com/rabbitmq/khepri_mnesia_migration See #7206. Co-authored-by: Jean-Sébastien Pédron <jean-sebastien@rabbitmq.com> Co-authored-by: Diana Parra Corbacho <dparracorbac@vmware.com> Co-authored-by: Michael Davis <mcarsondavis@gmail.com>
additional_beam = [
":test_clustering_utils_beam",
],
shard_count = 8,
sharding_method = "case",
)
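The Khepri blame note above explains how `rabbit_khepri:handle_fallback/1` picks between the Mnesia-based and Khepri-based variants and restarts from scratch when the Mnesia tables disappear mid-migration. The sketch below only illustrates that selection logic under stated assumptions: the real implementation lives in rabbit_khepri and the khepri_mnesia_migration application, and the error pattern matched here is an assumption.

```erlang
%% Simplified sketch of the fallback selection described in the Khepri
%% blame note. Not the real implementation.
-module(khepri_fallback_sketch).
-export([handle_fallback/1]).

%% Funs mirrors the map shown in the rabbit_db_vhost example from the
%% blame note: one fun per database backend.
handle_fallback(#{mnesia := MnesiaFun, khepri := KhepriFun} = Funs) ->
    case khepri_enabled() of
        true ->
            %% khepri_db enabled: always run the Khepri-based variant.
            KhepriFun();
        false ->
            try
                MnesiaFun()
            catch
                %% Assumed error shape: the Mnesia tables were removed
                %% because the feature flag is being enabled concurrently,
                %% so start over and re-evaluate which variant to run.
                exit:{aborted, {no_exists, _}} ->
                    handle_fallback(Funs)
            end
    end.

%% Placeholder; the real check queries the khepri_db feature flag,
%% e.g. rabbit_feature_flags:is_enabled(khepri_db).
khepri_enabled() ->
    false.
```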
rabbitmq_integration_suite(
name = "config_schema_SUITE",
size = "medium",
data = [
"test/definition_import_SUITE_data/case1.json",
],
)
rabbitmq_integration_suite(
name = "confirms_rejects_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "consumer_timeout_SUITE",
size = "medium",
additional_beam = [
":test_queue_utils_beam",
],
)
rabbitmq_integration_suite(
name = "crashing_queues_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "dead_lettering_SUITE",
size = "large",
additional_beam = [
":test_queue_utils_beam",
],
shard_count = 6,
Fix dead lettering # What? This commit fixes #11159, #11160, #11173. # How? ## Background RabbitMQ allows to dead letter messages for four different reasons, out of which three reasons cause messages to be dead lettered automatically internally in the broker: (maxlen, expired, delivery_limit) and 1 reason is caused by an explicit client action (rejected). RabbitMQ also allows dead letter topologies. When a message is dead lettered, it is re-published to an exchange, and therefore zero to multiple target queues. These target queues can in turn dead letter messages. Hence it is possible to create a cycle of queues where messages get dead lettered endlessly, which is what we want to avoid. ## Alternative approach One approach to avoid such endless cycles is to use a similar concept of the TTL field of the IPv4 datagram, or the hop limit field of an IPv6 datagram. These fields ensure that IP packets aren't cicrulating forever in the Internet. Each router decrements this counter. If this counter reaches 0, the sender will be notified and the message gets dropped. We could use the same approach in RabbitMQ: Whenever a queue dead letters a message, a dead_letter_hop_limit field could be decremented. If this field reaches 0, the message will be dropped. Such a hop limit field could have a sensible default value, for example 32. The sender of the message could override this value. Likewise, the client rejecting a message could set a new value via the Modified outcome. Such an approach has multiple advantages: 1. No dead letter cycle detection per se needs to be performed within the broker which is a slight simplification to what we have today. 2. Simpler dead letter topologies. One very common use case is that clients re-try sending the message after some time by consuming from a dead-letter queue and rejecting the message such that the message gets republished to the original queue. Instead of requiring explicit client actions, which increases complexity, a x-message-ttl argument could be set on the dead-letter queue to automatically retry after some time. This is a big simplification because it eliminates the need of various frameworks that retry, such as https://docs.spring.io/spring-cloud-stream/reference/rabbit/rabbit_overview/rabbitmq-retry.html 3. No dead letter history information needs to be compressed because there is a clear limit on how often a message gets dead lettered. Therefore, the full history including timestamps of every dead letter event will be available to clients. Disadvantages: 1. Breaks a lot of clients, even for 4.0. ## 3.12 approach Instead of decrementing a counter, the approach up to 3.12 has been to drop the message if the message cycled automatically. A message cycled automatically if no client expliclity rejected the message, i.e. the mesage got dead lettered due to maxlen, expired, or delivery_limit, but not due to rejected. In this approach, the broker must be able to detect such cycles reliably. Reliably detecting dead letter cycles broke in 3.13 due to #11159 and #11160. To reliably detect cycles, the broker must be able to obtain the exact order of dead letter events for a given message. In 3.13.0 - 3.13.2, the order cannot exactly be determined because wall clock time is used to record the death time. This commit uses the same approach as done in 3.12: a list ordered by death recency is used with the most recent death at the head of the list. 
To not grow this list endlessly (for example when a client rejects the same message hundreds of times), this list should be compacted. This commit, like 3.12, compacts by tuple `{Queue, Reason}`: If this message got already dead lettered from this Queue for this Reason, then only a counter is incremented and the element is moved to the front of the list. ## Streams & AMQP 1.0 clients Dead lettering from a stream doesn't make sense because: 1. a client cannot reject a message from a stream since the stream must maintain the total order of events to be consumed by multiple clients. 2. TTL is implemented by Stream retention where only old Stream segments are automatically deleted (or archived in the future). 3. same applies to maxlen Although messages cannot be dead lettered **from** a stream, messages can be dead lettered **into** a stream. This commit provides clients consuming from a stream the death history: #11173 Additionally, this commit provides AMQP 1.0 clients the death history via message annotation `x-opt-deaths` which contains the same information as AMQP 0.9.1 header `x-death`. Both, storing the death history in a stream and providing death history to an AMQP 1.0 client, use the same encoding: a message annoation `x-opt-deaths` that contains an array of maps ordered by death recency. The information encoded is the same as in the AMQP 0.9.1 x-death header. Instead of providing an array of maps, a better approach could be to use an array of a custom AMQP death type, such as: ```xml <amqp name="rabbitmq"> <section name="custom-types"> <type name="death" class="composite" source="list"> <descriptor name="rabbitmq:death:list" code="0x00000000:0x000000255"/> <field name="queue" type="string" mandatory="true" label="the name of the queue the message was dead lettered from"/> <field name="reason" type="symbol" mandatory="true" label="the reason why this message was dead lettered"/> <field name="count" type="ulong" default="1" label="how many times this message was dead lettered from this queue for this reason"/> <field name="time" mandatory="true" type="timestamp" label="the first time when this message was dead lettered from this queue for this reason"/> <field name="exchange" type="string" default="" label="the exchange this message was published to before it was dead lettered for the first time from this queue for this reason"/> <field name="routing-keys" type="string" default="" multiple="true" label="the routing keys this message was published with before it was dead lettered for the first time from this queue for this reason"/> <field name="ttl" type="milliseconds" label="the time to live of this message before it was dead lettered for the first time from this queue for reason ‘expired’"/> </type> </section> </amqp> ``` However, encoding and decoding custom AMQP types that are nested within arrays which in turn are nested within the message annotation map can be difficult for clients and the broker. Also, each client will need to know the custom AMQP type. For now, therefore we use an array of maps. ## Feature flag The new way to record death information is done via mc annotation `deaths_v2`. Because old nodes do not know this new annotation, recording death information via mc annotation `deaths_v2` is hidden behind a new feature flag `message_containers_deaths_v2`. 
If this feature flag is disabled, a message will continue to use the 3.13.0 - 3.13.2 way to record death information in mc annotation `deaths`, or even the older way within `x-death` header directly if feature flag message_containers is also disabled. Only if feature flag `message_containers_deaths_v2` is enabled and this message hasn't been dead lettered before, will the new mc annotation `deaths_v2` be used.
)
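The dead-lettering blame note above describes compacting the death history by `{Queue, Reason}`: a repeat death bumps a counter and moves the entry to the head of the recency-ordered list instead of growing it. Below is a minimal sketch of that compaction; the record fields are modelled on the x-death information listed in the note, not on the actual mc annotation layout.

```erlang
%% Sketch of death-history compaction keyed by {Queue, Reason}.
-module(death_history_sketch).
-export([record_death/3]).

%% Fields modelled on the x-death information in the blame note; the
%% real mc annotation layout may differ.
-record(death, {queue, reason, count = 1, first_time}).

%% Deaths is ordered by recency: most recent death at the head.
record_death(Queue, Reason, Deaths) ->
    Matches = fun(#death{queue = Q, reason = R}) ->
                      Q =:= Queue andalso R =:= Reason
              end,
    case lists:splitwith(fun(D) -> not Matches(D) end, Deaths) of
        {_, []} ->
            %% First death from this queue for this reason: prepend.
            [#death{queue = Queue,
                    reason = Reason,
                    first_time = erlang:system_time(millisecond)} | Deaths];
        {Before, [D | After]} ->
            %% Seen before: bump the counter, keep the original timestamp,
            %% and move the entry to the front of the list.
            [D#death{count = D#death.count + 1} | Before ++ After]
    end.
```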
Fix message IDs settlement order ## What? This commit fixes issues that were present only on `main` branch and were introduced by #9022. 1. Classic queues (specifically `rabbit_queue_consumers:subtract_acks/3`) expect message IDs to be (n)acked in the order as they were delivered to the channel / session proc. Hence, the `lists:usort(MsgIds0)` in `rabbit_classic_queue:settle/5` was wrong causing not all messages to be acked adding a regression to also AMQP 0.9.1. 2. The order in which the session proc requeues or rejects multiple message IDs at once is important. For example, if the client sends a DISPOSITION with first=3 and last=5, the message IDs corresponding to delivery IDs 3,4,5 must be requeued or rejected in exactly that order. For example, quorum queues use this order of message IDs in https://github.com/rabbitmq/rabbitmq-server/blob/34d3f943742bdcf7d34859edff8d45f35e4007d4/deps/rabbit/src/rabbit_fifo.erl#L226-L234 to dead letter in that order. ## How? The session proc will settle (internal) message IDs to queues in ascending (AMQP) delivery ID order, i.e. in the order messages were sent to the client and in the order messages were settled by the client. This commit chooses to keep the session's outgoing_unsettled_map map data structure. An alternative would have been to use a queue or lqueue for the outgoing_unsettled_map as done in * https://github.com/rabbitmq/rabbitmq-server/blob/34d3f943742bdcf7d34859edff8d45f35e4007d4/deps/rabbit/src/rabbit_channel.erl#L135 * https://github.com/rabbitmq/rabbitmq-server/blob/34d3f943742bdcf7d34859edff8d45f35e4007d4/deps/rabbit/src/rabbit_queue_consumers.erl#L43 Whether a queue (as done by `rabbit_channel`) or a map (as done by `rabbit_amqp_session`) performs better depends on the pattern how clients ack messages. A queue will likely perform good enough because usually the oldest delivered messages will be acked first. However, given that there can be many different consumers on an AQMP 0.9.1 channel or AMQP 1.0 session, this commit favours a map because it will likely generate less garbage and is very efficient when for example a single new message (or few new messages) gets acked while many (older) messages are still checked out by the session (but by possibly different AMQP 1.0 receivers).
rabbitmq_integration_suite(
name = "amqpl_consumer_ack_SUITE",
)
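The settlement-order blame note above fixes the order in which message IDs are settled: for a DISPOSITION covering first..last, the session must hand message IDs to the queues in ascending delivery-id order, i.e. the order in which the messages were delivered. A small sketch of that collection step follows; the map shape and function names are assumptions, and serial-number wraparound of delivery IDs is ignored for brevity.

```erlang
%% Sketch: collect queue-internal message IDs for a DISPOSITION range in
%% ascending delivery-id order before settling them.
-module(settle_order_sketch).
-export([msg_ids_in_delivery_order/3]).

%% Unsettled is assumed to map DeliveryId => {QueueName, MsgId}; the real
%% session state is richer.
msg_ids_in_delivery_order(First, Last, Unsettled) ->
    lists:reverse(collect(First, Last, Unsettled, [])).

collect(Current, Last, _Unsettled, Acc) when Current > Last ->
    Acc;
collect(Current, Last, Unsettled, Acc) ->
    case maps:find(Current, Unsettled) of
        {ok, {_QName, MsgId}} ->
            %% Keep ascending delivery-id order so queues see acks in the
            %% order the messages were delivered.
            collect(Current + 1, Last, Unsettled, [MsgId | Acc]);
        error ->
            %% Delivery id not (or no longer) unsettled: skip it.
            collect(Current + 1, Last, Unsettled, Acc)
    end.
```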
Fix dead lettering

# What?

This commit fixes #11159, #11160, #11173.

# How?

## Background

RabbitMQ allows dead lettering messages for four different reasons. Three of these reasons cause messages to be dead lettered automatically, internally in the broker (maxlen, expired, delivery_limit), and one is caused by an explicit client action (rejected). RabbitMQ also allows dead letter topologies. When a message is dead lettered, it is re-published to an exchange, and therefore to zero or more target queues. These target queues can in turn dead letter messages. Hence it is possible to create a cycle of queues where messages get dead lettered endlessly, which is what we want to avoid.

## Alternative approach

One approach to avoid such endless cycles is to use a concept similar to the TTL field of an IPv4 datagram, or the hop limit field of an IPv6 datagram. These fields ensure that IP packets don't circulate forever in the Internet: each router decrements the counter, and if it reaches 0, the sender is notified and the packet gets dropped. We could use the same approach in RabbitMQ: whenever a queue dead letters a message, a dead_letter_hop_limit field could be decremented. If this field reaches 0, the message would be dropped. Such a hop limit field could have a sensible default value, for example 32. The sender of the message could override this value. Likewise, the client rejecting a message could set a new value via the Modified outcome. Such an approach has multiple advantages:
1. No dead letter cycle detection per se needs to be performed within the broker, which is a slight simplification of what we have today.
2. Simpler dead letter topologies. One very common use case is that clients retry sending the message after some time by consuming from a dead-letter queue and rejecting the message such that the message gets republished to the original queue. Instead of requiring explicit client actions, which increases complexity, an x-message-ttl argument could be set on the dead-letter queue to automatically retry after some time. This is a big simplification because it eliminates the need for various retry frameworks, such as https://docs.spring.io/spring-cloud-stream/reference/rabbit/rabbit_overview/rabbitmq-retry.html
3. No dead letter history information needs to be compressed because there is a clear limit on how often a message gets dead lettered. Therefore, the full history, including timestamps of every dead letter event, will be available to clients.

Disadvantages:
1. Breaks a lot of clients, even for 4.0.

## 3.12 approach

Instead of decrementing a counter, the approach up to 3.12 has been to drop the message if the message cycled automatically. A message cycled automatically if no client explicitly rejected the message, i.e. the message got dead lettered due to maxlen, expired, or delivery_limit, but not due to rejected. In this approach, the broker must be able to detect such cycles reliably. Reliably detecting dead letter cycles broke in 3.13 due to #11159 and #11160. To reliably detect cycles, the broker must be able to obtain the exact order of dead letter events for a given message. In 3.13.0 - 3.13.2, the order cannot be determined exactly because wall clock time is used to record the death time. This commit uses the same approach as 3.12: a list ordered by death recency, with the most recent death at the head of the list.

To avoid growing this list endlessly (for example when a client rejects the same message hundreds of times), the list should be compacted. This commit, like 3.12, compacts by the tuple `{Queue, Reason}`: if the message was already dead lettered from this Queue for this Reason, only a counter is incremented and the element is moved to the front of the list.

## Streams & AMQP 1.0 clients

Dead lettering from a stream doesn't make sense because:
1. a client cannot reject a message from a stream since the stream must maintain the total order of events to be consumed by multiple clients.
2. TTL is implemented by stream retention, where only old stream segments are automatically deleted (or archived in the future).
3. the same applies to maxlen.

Although messages cannot be dead lettered **from** a stream, messages can be dead lettered **into** a stream. This commit provides clients consuming from a stream the death history: #11173. Additionally, this commit provides AMQP 1.0 clients the death history via message annotation `x-opt-deaths`, which contains the same information as the AMQP 0.9.1 header `x-death`. Both storing the death history in a stream and providing the death history to an AMQP 1.0 client use the same encoding: a message annotation `x-opt-deaths` that contains an array of maps ordered by death recency. The information encoded is the same as in the AMQP 0.9.1 x-death header. Instead of providing an array of maps, a better approach could be to use an array of a custom AMQP death type, such as:
```xml
<amqp name="rabbitmq">
  <section name="custom-types">
    <type name="death" class="composite" source="list">
      <descriptor name="rabbitmq:death:list" code="0x00000000:0x000000255"/>
      <field name="queue" type="string" mandatory="true" label="the name of the queue the message was dead lettered from"/>
      <field name="reason" type="symbol" mandatory="true" label="the reason why this message was dead lettered"/>
      <field name="count" type="ulong" default="1" label="how many times this message was dead lettered from this queue for this reason"/>
      <field name="time" mandatory="true" type="timestamp" label="the first time when this message was dead lettered from this queue for this reason"/>
      <field name="exchange" type="string" default="" label="the exchange this message was published to before it was dead lettered for the first time from this queue for this reason"/>
      <field name="routing-keys" type="string" default="" multiple="true" label="the routing keys this message was published with before it was dead lettered for the first time from this queue for this reason"/>
      <field name="ttl" type="milliseconds" label="the time to live of this message before it was dead lettered for the first time from this queue for reason ‘expired’"/>
    </type>
  </section>
</amqp>
```
However, encoding and decoding custom AMQP types that are nested within arrays, which in turn are nested within the message annotation map, can be difficult for clients and the broker. Also, each client would need to know the custom AMQP type. For now, therefore, we use an array of maps.

## Feature flag

The new way to record death information is via the mc annotation `deaths_v2`. Because old nodes do not know this new annotation, recording death information via mc annotation `deaths_v2` is hidden behind a new feature flag `message_containers_deaths_v2`.

If this feature flag is disabled, a message will continue to use the 3.13.0 - 3.13.2 way of recording death information in the mc annotation `deaths`, or even the older way within the `x-death` header directly if the feature flag message_containers is also disabled. Only if the feature flag `message_containers_deaths_v2` is enabled and the message hasn't been dead lettered before will the new mc annotation `deaths_v2` be used.
2024-05-04 23:12:02 +08:00
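As a small illustration of the `{Queue, Reason}` compaction rule described above, here is a minimal, hedged sketch (the module and function names are illustrative, not the actual mc annotation code):

```erlang
-module(deaths_compact_sketch).
-export([record_death/3]).

%% The death history is a list ordered by recency (most recent first).
%% A repeated {Queue, Reason} death bumps a counter and moves the entry
%% to the head of the list instead of growing the list.
record_death(Queue, Reason, Deaths) ->
    Key = {Queue, Reason},
    case lists:keytake(Key, 1, Deaths) of
        {value, {Key, Count}, Rest} ->
            [{Key, Count + 1} | Rest];
        false ->
            [{Key, 1} | Deaths]
    end.
```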
rabbitmq_integration_suite(
name = "message_containers_deaths_v2_SUITE",
size = "medium",
shard_count = 1,
)
rabbitmq_integration_suite(
name = "definition_import_SUITE",
size = "medium",
)
Deprecated features: New module to manage deprecated features (!)

This introduces a way to declare deprecated features in the code, not only in our communication. The new module allows disallowing the use of a deprecated feature and/or warning the user when they rely on such a feature.

[Why]

Currently, we only tell people about deprecated features through blog posts and the mailing-list. This might be insufficient to make our users aware that a feature they use will be removed in a future version:
* They may not read our blog or mailing-list
* They may not understand that they use such a deprecated feature
* They might wait for the big removal before they plan testing
* They might not take it seriously enough

The idea behind this patch is to increase the chance that users notice that they are using something which is about to be dropped from RabbitMQ. Another benefit is that they should be able to test how RabbitMQ will behave in the future before the actual removal. This should allow them to test and plan changes.

[How]

When a feature is deprecated in other large projects (such as FreeBSD, where I took the idea from), it goes through a lifecycle:
1. The feature is still available, but users get a warning somehow when they use it. They can disable it to test.
2. The feature is still available, but disabled out-of-the-box. Users can re-enable it (and get a warning).
3. The feature is disconnected from the build. Therefore, the code behind it is still there, but users have to recompile the thing to be able to use it.
4. The feature is removed from the source code. Users have to adapt or they can't upgrade anymore.

The solution in this patch offers the same lifecycle. A deprecated feature will be in one of these deprecation phases:
1. `permitted_by_default`: The feature is available. Users get a warning if they use it. They can disable it from the configuration.
2. `denied_by_default`: The feature is available but disabled by default. Users get an error if they use it and RabbitMQ behaves as if the feature were removed. They can re-enable it from the configuration and get a warning.
3. `disconnected`: The feature is present in the source code, but is disabled and can't be re-enabled without recompiling RabbitMQ. Users get the same behavior as if the code was removed.
4. `removed`: The feature's code is gone.

The whole thing is based on the feature flags subsystem, but it has the following differences from other feature flags:
* The semantics are reversed: the feature flag behind a deprecated feature is disabled when the deprecated feature is permitted, or enabled when the deprecated feature is denied.
* The feature flag behind a deprecated feature is enabled out-of-the-box (meaning the deprecated feature is denied):
  * if the deprecation phase is `permitted_by_default` and the configuration denies the deprecated feature
  * if the deprecation phase is `denied_by_default` and the configuration doesn't permit the deprecated feature
  * if the deprecation phase is `disconnected` or `removed`
* Feature flags behind deprecated features don't appear in feature flags listings.

Otherwise, deprecated features' feature flags are managed like other feature flags, in particular inside clusters.

To declare a deprecated feature:

    -rabbit_deprecated_feature(
       {my_deprecated_feature,
        #{deprecation_phase => permitted_by_default,
          msgs => #{when_permitted => "This feature will be removed in RabbitMQ X.0"}
         }}).

Then, to check the state of a deprecated feature in the code:

    case rabbit_deprecated_features:is_permitted(my_deprecated_feature) of
        true ->
            %% The deprecated feature is still permitted.
            ok;
        false ->
            %% The deprecated feature is gone or should be considered
            %% unavailable.
            error
    end.

Warnings and errors are logged automatically. A message is generated automatically, but it is possible to define a message in the deprecated feature flag declaration, as in the example above. Here is an example of a logged warning that was generated automatically:

    Feature `my_deprecated_feature` is deprecated.
    By default, this feature can still be used for now. Its use will not be
    permitted by default in a future minor RabbitMQ version and the feature
    will be removed from a future major RabbitMQ version; actual versions
    to be determined.
    To continue using this feature when it is not permitted by default, set
    the following parameter in your configuration:
        "deprecated_features.permit.my_deprecated_feature = true"
    To test RabbitMQ as if the feature was removed, set this in your
    configuration:
        "deprecated_features.permit.my_deprecated_feature = false"

To override the default state of `permitted_by_default` and `denied_by_default` deprecation phases, users can set the following configuration:

    # In rabbitmq.conf:
    deprecated_features.permit.my_deprecated_feature = true # or false

The actual behavior protected by a deprecated feature check is out of scope for this subsystem. It is the responsibility of each deprecated feature's code to determine what to do when the deprecated feature is denied.

V1: Deprecated feature states are initially computed during the initialization of the registry, based on their deprecation phase and possibly the configuration. They don't go through the `enable/1` code at all.

V2: Manage deprecated feature states as any other non-required feature flags. This allows executing an `is_feature_used()` callback to determine if a deprecated feature can be denied. This also allows preventing the RabbitMQ node from starting if it continues to use a deprecated feature.

V3: Manage deprecated feature states from the registry initialization again. This is required because we need to know very early if some of them are denied, so that an upgrade to a version of RabbitMQ where a deprecated feature is disconnected or removed can be performed. To still prevent the start of a RabbitMQ node when a denied deprecated feature is actively used, we run the `is_feature_used()` callback of all denied deprecated features as part of the `sync_cluster()` task. This task is executed as part of a feature flag refresh executed when RabbitMQ starts or when plugins are enabled. So even though a deprecated feature is marked as denied in the registry early in the boot process, we will still abort the start of a RabbitMQ node if the feature is used.

V4: Support context-dependent warnings. It is now possible to set a specific message when a deprecated feature is permitted, when it is denied, and when it is removed. Generic per-context messages are still generated.

V5: Improve default warning messages, thanks to @pstack2021.

V6: Rename the configuration variable from `permit_deprecated_features.*` to `deprecated_features.permit.*`. As @michaelklishin said, we tend to use shorter top-level names.
2023-02-23 00:26:52 +08:00
rabbitmq_integration_suite(
name = "deprecated_features_SUITE",
size = "medium",
additional_beam = [
":feature_flags_v2_SUITE_beam_files",
],
)
rabbitmq_integration_suite(
name = "disconnect_detected_during_alarm_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "disk_monitor_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "dynamic_qq_SUITE",
size = "large",
additional_beam = [
":test_queue_utils_beam",
],
deps = [
"@proper//:erlang_app",
],
)
rabbitmq_integration_suite(
name = "feature_flags_SUITE",
size = "large",
additional_beam = [
":test_clustering_utils_beam",
],
flaky = True,
shard_count = 5,
runtime_deps = [
"//deps/rabbit/test/feature_flags_SUITE_data/my_plugin:erlang_app",
],
)
rabbitmq_integration_suite(
name = "feature_flags_v2_SUITE",
size = "large",
)
Emit histogram metric for received message sizes per protocol (#12342)

* Add global histogram metrics for received message sizes per protocol

fixup: add new files to bazel
fixup: expose message_size_bytes as prometheus classic histogram type

`rabbit_msg_size_metrics` does not use `seshat` any more, but `counters` directly.

fixup: add msg_size_metrics unit test

* Improve message size histogram

1. Avoid unnecessary time series emitted for the stream protocol. The stream protocol cannot observe message sizes. This commit ensures that the following time series are omitted:
```
rabbitmq_global_message_size_bytes_bucket{protocol="stream",le="64"} 0
rabbitmq_global_message_size_bytes_bucket{protocol="stream",le="256"} 0
rabbitmq_global_message_size_bytes_bucket{protocol="stream",le="1024"} 0
rabbitmq_global_message_size_bytes_bucket{protocol="stream",le="4096"} 0
rabbitmq_global_message_size_bytes_bucket{protocol="stream",le="16384"} 0
rabbitmq_global_message_size_bytes_bucket{protocol="stream",le="65536"} 0
rabbitmq_global_message_size_bytes_bucket{protocol="stream",le="262144"} 0
rabbitmq_global_message_size_bytes_bucket{protocol="stream",le="1048576"} 0
rabbitmq_global_message_size_bytes_bucket{protocol="stream",le="4194304"} 0
rabbitmq_global_message_size_bytes_bucket{protocol="stream",le="16777216"} 0
rabbitmq_global_message_size_bytes_bucket{protocol="stream",le="67108864"} 0
rabbitmq_global_message_size_bytes_bucket{protocol="stream",le="268435456"} 0
rabbitmq_global_message_size_bytes_bucket{protocol="stream",le="+Inf"} 0
rabbitmq_global_message_size_bytes_count{protocol="stream"} 0
rabbitmq_global_message_size_bytes_sum{protocol="stream"} 0
```
This reduces the number of time series by 15.

2. Further reduce the number of time series by reducing the number of buckets. Instead of 13 buckets, emit only 9 buckets. Buckets are not free; each is an extra time series stored.

Prior to this commit:
```
curl -s -u guest:guest localhost:15692/metrics | ag message_size | wc -l
92
```
After this commit:
```
curl -s -u guest:guest localhost:15692/metrics | ag message_size | wc -l
57
```

3. The emitted metric should be called `rabbitmq_message_size_bytes_bucket` instead of `rabbitmq_global_message_size_bytes_bucket`. The latter is poor naming. There is no need to use `global` in the metric name given that this metric doesn't exist in the old flawed aggregated metrics.

4. This commit simplifies module `rabbit_global_counters`.

5. Avoid garbage collecting the 10-element list of buckets per message received.

---------

Co-authored-by: Péter Gömöri <peter@84codes.com>
2024-09-25 00:08:24 +08:00
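A minimal sketch of the `counters`-based histogram idea described above. The module name and bucket bounds are illustrative only (the real `rabbit_msg_size_metrics` bounds and layout differ); it shows one counter slot per bucket plus sum and count:

```erlang
-module(msg_size_hist_sketch).
-export([new/0, observe/2]).

%% Illustrative bucket upper bounds in bytes; not the real bounds.
-define(BOUNDS, [64, 256, 1024, 16384, 262144, 1048576, 16777216, 67108864]).

%% One counter per bucket, one for the +Inf bucket, plus sum and count.
new() ->
    counters:new(length(?BOUNDS) + 3, [write_concurrency]).

observe(Ref, Size) ->
    counters:add(Ref, bucket_index(Size, ?BOUNDS, 1), 1),
    counters:add(Ref, length(?BOUNDS) + 2, Size), %% sum of observed sizes
    counters:add(Ref, length(?BOUNDS) + 3, 1).    %% total observation count

bucket_index(_Size, [], Index) -> Index;          %% falls into the +Inf bucket
bucket_index(Size, [Bound | _], Index) when Size =< Bound -> Index;
bucket_index(Size, [_ | Rest], Index) -> bucket_index(Size, Rest, Index + 1).
```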
rabbitmq_integration_suite(
name = "msg_size_metrics_SUITE",
runtime_deps = [
"//deps/rabbitmq_amqp_client:erlang_app",
],
)
rabbitmq_integration_suite(
name = "list_consumers_sanity_check_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "list_queues_online_and_offline_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "logging_SUITE",
runtime_deps = [
"@syslog//:erlang_app",
],
)
rabbitmq_suite(
name = "lqueue_SUITE",
size = "small",
deps = [
"@proper//:erlang_app",
],
)
rabbitmq_integration_suite(
name = "maintenance_mode_SUITE",
size = "medium",
additional_beam = [
":test_queue_utils_beam",
],
)
Move plugin rabbitmq-message-timestamp to the core

As reported in https://groups.google.com/g/rabbitmq-users/c/x8ACs4dBlkI/ plugins that implement rabbit_channel_interceptor break with Native MQTT in 3.12 because Native MQTT does not use rabbit_channel anymore. Specifically, these plugins don't work anymore in 3.12 when sending a message from an MQTT publisher to an AMQP 0.9.1 consumer. Two of these plugins are https://github.com/rabbitmq/rabbitmq-message-timestamp and https://github.com/rabbitmq/rabbitmq-routing-node-stamp

This commit moves both plugins into rabbitmq-server. Therefore, these plugins are deprecated starting in 3.12. Instead of using these plugins, the user gets the same behaviour by configuring rabbitmq.conf as follows:
```
incoming_message_interceptors.set_header_timestamp.overwrite = false
incoming_message_interceptors.set_header_routing_node.overwrite = false
```
While the two plugins could not be used together, this commit allows setting both headers. We name the top-level configuration key `incoming_message_interceptors` because only incoming messages are intercepted. Currently, only `set_header_timestamp` and `set_header_routing_node` are supported. (We might support more in the future.) Both can set `overwrite` to `false` or `true`. The meaning of `overwrite` is the same as documented in https://github.com/rabbitmq/rabbitmq-message-timestamp#always-overwrite-timestamps i.e. whether headers should be overwritten if they are already present in the message. `set_header_timestamp` and `set_header_routing_node` behave exactly like the plugins `rabbitmq-message-timestamp` and `rabbitmq-routing-node-stamp`, respectively.

Upon node boot, the configuration is put into persistent_term so as not to cause any performance penalty in the default case where these settings are disabled. The channel and MQTT connection process will intercept incoming messages and - if configured - add the desired AMQP 0.9.1 headers.

For now, this allows using Native MQTT in 3.12 with the old plugins' behaviour. In the future, once "message containers" are implemented, we can think about more generic message interceptors where plugins can be written to modify arbitrary headers or message contents for various protocols. Likewise, in the future, once MQTT 5.0 is implemented, we can think about an MQTT connection interceptor which could function similarly to a `rabbit_channel_interceptor`, allowing any MQTT packet to be modified.
2023-05-12 22:12:50 +08:00
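A minimal sketch of the `overwrite` semantics described above (illustrative module and function names, and a simplified `{Name, Value}` header representation rather than the real AMQP 0.9.1 header tuples): the header is only stamped when absent, unless overwrite is enabled.

```erlang
-module(interceptor_overwrite_sketch).
-export([maybe_set_header/4]).

%% Only set the header when it is missing, unless Overwrite is true.
maybe_set_header(Name, Value, Overwrite, Headers) ->
    case {lists:keymember(Name, 1, Headers), Overwrite} of
        {true, false} -> Headers;
        _             -> lists:keystore(Name, 1, Headers, {Name, Value})
    end.
```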
rabbitmq_integration_suite(
name = "rabbit_message_interceptor_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "message_size_limit_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "per_node_limit_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "metrics_SUITE",
size = "medium",
deps = [
"@proper//:erlang_app",
],
)
rabbitmq_integration_suite(
name = "mirrored_supervisor_SUITE",
size = "small",
additional_beam = [
"test/mirrored_supervisor_SUITE_gs.beam",
],
)
rabbitmq_suite(
name = "msg_store_SUITE",
size = "small",
deps = [
"//deps/rabbit_common:erlang_app",
],
)
rabbitmq_integration_suite(
name = "peer_discovery_classic_config_SUITE",
size = "large",
)
rabbitmq_integration_suite(
name = "peer_discovery_dns_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "peer_discovery_tmp_hidden_node_SUITE",
size = "large",
)
rabbitmq_integration_suite(
name = "per_user_connection_channel_limit_partitions_SUITE",
size = "large",
)
rabbitmq_integration_suite(
name = "per_user_connection_channel_limit_SUITE",
size = "medium",
shard_count = 4,
)
rabbitmq_integration_suite(
name = "per_user_connection_channel_tracking_SUITE",
size = "medium",
shard_count = 4,
)
rabbitmq_integration_suite(
name = "per_user_connection_tracking_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "per_vhost_connection_limit_partitions_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "per_vhost_connection_limit_SUITE",
size = "medium",
shard_count = 5,
)
rabbitmq_integration_suite(
name = "per_vhost_msg_store_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "per_vhost_queue_limit_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "policy_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "priority_queue_recovery_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "priority_queue_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "product_info_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "proxy_protocol_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "publisher_confirms_parallel_SUITE",
size = "medium",
additional_beam = [
":test_queue_utils_beam",
],
)
rabbitmq_integration_suite(
name = "queue_length_limits_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "queue_parallel_SUITE",
size = "large",
additional_beam = [
":test_queue_utils_beam",
],
shard_count = 3,
)
rabbitmq_integration_suite(
name = "queue_type_SUITE",
size = "medium",
additional_beam = [
":test_queue_utils_beam",
],
)
rabbitmq_integration_suite(
name = "quorum_queue_SUITE",
size = "large",
additional_beam = [
":test_queue_utils_beam",
":test_clustering_utils_beam",
],
shard_count = 6,
)
rabbitmq_integration_suite(
name = "classic_queue_SUITE",
size = "medium",
)
rabbitmq_suite(
name = "rabbit_confirms_SUITE",
size = "small",
deps = [
"//deps/rabbit_common:erlang_app",
],
)
rabbitmq_integration_suite(
name = "rabbit_core_metrics_gc_SUITE",
size = "medium",
)
rabbitmq_suite(
name = "rabbit_cuttlefish_SUITE",
)
rabbitmq_suite(
name = "rabbit_fifo_int_SUITE",
size = "medium",
additional_beam = [
":test_test_util_beam",
],
deps = [
"//deps/rabbit_common:erlang_app",
"//deps/rabbitmq_ct_helpers:erlang_app",
"@aten//:erlang_app",
"@gen_batch_server//:erlang_app",
"@meck//:erlang_app",
"@ra//:erlang_app",
],
)
rabbitmq_suite(
name = "rabbit_fifo_prop_SUITE",
size = "large",
additional_beam = [
":test_test_util_beam",
],
deps = [
"//deps/rabbit_common:erlang_app",
"@meck//:erlang_app",
"@proper//:erlang_app",
"@ra//:erlang_app",
],
)
rabbitmq_suite(
name = "rabbit_fifo_dlx_SUITE",
size = "small",
deps = [
"//deps/rabbit_common:erlang_app",
],
)
Quorum queues v4 (#10637) This commit contains the following new quorum queue features: * Fair share high/low priorities * SAC consumers honour consumer priorities * Credited consumer refactoring to meet AMQP requirements. * Use checkpoints feature to reduce memory use for queues with long backlogs * Consumer cancel option that immediately removes consumer and returns all pending messages. * More compact commands of the most common commands such as enqueue, settle and credit * Correctly track the delivery-count to be compatible with the AMQP spec * Support the "modified" AMQP 1.0 outcome better. Commits: * Quorum queues v4 scaffolding. Create the new version but not including any changes yet. QQ: force delete followers after leader has terminated. Also try a longer sleep for mqtt_shared_SUITE so that the delete operation stands a chance to time out and move on to the forced deletion stage. In some mixed machine version scenarios some followers will never apply the poison pill command so we may as well force delete them just in case. QQ: skip test in amqp_client that cannot pass with mixed machine versions QQ: remove dead code Code relating to prior machine versions and state conversions. rabbit_fifo_prop_SUITE fixes * QQ: add v4 ff and new more compact enqueue command. Also update rabbit_fifo_* suites to test more relevant code versions where applicable. QQ: always use the updated credit mode format QQv4: use more compact consumer reference in settle, credit, return This introudces a new type: consumer_key() which is either the consumer_id or the raft index the checkout was processed at. If the consumer is using one of the updated credit spec formats rabbit_fifo will use the raft index as the primary key for the consumer such that the rabbit fifo client can then use the more space efficient integer index instead of the full consumer id in subsequent commands. There is compatibility code to still accept the consumer id in settle, return, discard and credit commands but this is slighlyt slower and of course less space efficient. The old form will be used in cases where the fifo client may have already remove the local consumer state (as happens after a cancel). Lots of test refactorings of the rabbit_fifo_SUITE to begin to use the new forms. * More test refactoring and new API fixes rabbit_fifo_prop_SUITE refactoring and other fixes. * First pass SAC consumer priority implementation. Single active consumers will be activated if they have a higher priority than the currently active consumer. if the currently active consumer has pending messages, no further messages will be assigned to the consumer and the activation of the new consumer will happen once all pending messages are settled. This is to ensure processing order. Consumers with the same priority will internally be ordered to favour those with credit then those that attached first. QQ: add SAC consumer priority integration tests QQ: add check for ff in tests * QQ: add new consumer cancel option: 'remove' This option immediately removes and returns all messages for a consumer instead of the softer 'cancel' option which keeps the consumer around until all pending messages have been either settled or returned. This involves a change to the rabbit_queue_type:cancel/5 API to rabbit_queue_type:cancel/3. * QQ: capture checked out time for each consumer message. This will form the basis for queue initiated consumer timeouts. * QQ: Refactor to use the new ra_machine:handle_aux/5 API Instead of the old ra_machine:handle_aux/6 callback. 
* QQ hi/lo priority queue * QQ: Avoid using mc:size/1 inside rabbit_fifo As we dont want to depend on external functions for things that may change the state of the queue. * QQ bug fix: Maintain order when returning multiple Prior to this commit, quorum queues requeued messages in an undefined order, which is wrong. This commit fixes this bug and requeues messages always in the order as nacked / rejected / released by the client. We ensure that order of requeues is deterministic from the client's point of view and doesn't depend on whether the quorum queue soft limit was exceeded temporarily. So, even when rabbit_fifo_client batches requeues, the order as nacked by the client is still maintained. * Simplify * Add rabbit_quorum_queue:file_handle* functions back. For backwards compat. * dialyzer fix * dynamic_qq_SUITE: avoid mixed versions failure. * QQ: track number of requeues for message. To be able to calculate the correct value for the AMQP delivery_count header we need to be able to distinguish between messages that were "released" or returned in QQ speak and those that were returned due to errors such as channel termination. This commit implement such tracking as well as the calculation of a new mc annotations `delivery_count` that AMQP makes use of to set the header value accordingly. * Use QQ consumer removal when AMQP client detaches This enables us to unskip some AMQP tests. * Use AMQP address v2 in fsharp-tests * QQ: track number of requeues for message. To be able to calculate the correct value for the AMQP delivery_count header we need to be able to distinguish between messages that were "released" or returned in QQ speak and those that were returned due to errors such as channel termination. This commit implement such tracking as well as the calculation of a new mc annotations `delivery_count` that AMQP makes use of to set the header value accordingly. * rabbit_fifo: Use Ra checkpoints * quorum queues: Use a custom interval for checkpoints * rabbit_fifo_SUITE: List actual effects in ?ASSERT_EFF failure * QQ: Checkpoints modifications * fixes * QQ: emit release cursors on tick for followers and leaders else followers could end up holding on to segments a bit longer after traffic stops. * Support draining a QQ SAC waiting consumer By issuing drain=true, the client says "either send a transfer or a flow frame". Since there are no messages to send to an inactive consumer, the sending queue should advance the delivery-count consuming all link-credit and send a credit_reply with drain=true to the session proc which causes the session proc to send a flow frame to the client. * Extract applying #credit{} cmd into 2 functions This commit is only refactoring and doesn't change any behaviour. * Fix default priority level Prior to this commit, when a message didn't have a priority level set, it got enqueued as high prio. This is wrong because the default priority is 4 and "for example, if 2 distinct priorities are implemented, then levels 0 to 4 are equivalent, and levels 5 to 9 are equivalent and levels 4 and 5 are distinct." Hence, by default a message without priority set, must be enqueued as low prio. 
* bazel run gazelle * Avoid deprecated time unit * Fix aux_test * Delete dead code * Fix rabbit_fifo_q:get_lowest_index/1 * Delete unused normalize functions * Generate less garbage * Add integration test for QQ SAC with consumer priority * Improve readability * Change modified outcome behaviour With the new quorum queue v4 improvements where a requeue counter was added in addition to the quorum queue delivery counter, the following sentence from https://github.com/rabbitmq/rabbitmq-server/pull/6292#issue-1431275848 doesn't apply anymore: > Also the case where delivery_failed=false|undefined requires the release of the > message without incrementing the delivery_count. Again this is not something > that our queues are able to do so again we have to reject without requeue. Therefore, we simplify the modified outcome behaviour: RabbitMQ will from now on only discard the message if the modified's undeliverable-here field is true. * Introduce single feature flag rabbitmq_4.0.0 ## What? Merge all feature flags introduced in RabbitMQ 4.0.0 into a single feature flag called rabbitmq_4.0.0. ## Why? 1. This fixes the crash in https://github.com/rabbitmq/rabbitmq-server/pull/10637#discussion_r1681002352 2. It's better user experience. * QQ: expose priority metrics in UI * Enable skipped test after rebasing onto main * QQ: add new command "modify" to better handle AMQP modified outcomes. This new command can be used to annotate returned or rejected messages. This commit also retains the delivery-count across dead letter boundaries such that the AMQP header delivery-count field can now include _all_ failed deliver attempts since the message was originally received. Internally the quorum queue has moved it's delivery_count header to only track the AMQP protocol delivery attempts and now introduces a new acquired_count to track all message acquisitions by consumers. * Type tweaks and naming * Add test for modified outcome with classic queue * Add test routing on message-annotations in modified outcome * Skip tests in mixed version tests Skip tests in mixed version tests because feature flag rabbitmq_4.0.0 is needed for the new #modify{} Ra command being sent to quorum queues. --------- Co-authored-by: David Ansari <david.ansari@gmx.de> Co-authored-by: Michael Davis <mcarsondavis@gmail.com>
2024-08-08 15:48:27 +08:00
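The simplified "modified" outcome behaviour described in the commit message above (discard only when undeliverable-here is true, otherwise return) can be sketched as follows. The module, function, and map key names are illustrative, not the actual quorum queue code:

```erlang
-module(modified_outcome_sketch).
-export([action_for_modified/1]).

%% Discard (i.e. dead letter) only when the modified outcome carries
%% undeliverable-here = true; otherwise return the message to the queue.
action_for_modified(#{undeliverable_here := true}) -> discard;
action_for_modified(_Modified)                     -> return.
```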
rabbitmq_suite(
name = "rabbit_fifo_q_SUITE",
size = "small",
deps = [
"//deps/rabbit_common:erlang_app",
"@proper//:erlang_app",
],
)
rabbitmq_integration_suite(
name = "rabbit_fifo_dlx_integration_SUITE",
size = "medium",
additional_beam = [
":test_test_util_beam",
":test_queue_utils_beam",
":quorum_queue_SUITE_beam_files",
],
deps = [
"@proper//:erlang_app",
"@ra//:erlang_app",
],
)
rabbitmq_suite(
name = "rabbit_fifo_SUITE",
size = "medium",
additional_beam = [
":test_test_util_beam",
":rabbit_fifo_v0_SUITE_beam_files",
],
deps = [
"//deps/rabbit_common:erlang_app",
"@meck//:erlang_app",
"@ra//:erlang_app",
],
)
rabbitmq_suite(
name = "rabbit_fifo_v0_SUITE",
size = "medium",
additional_beam = [
":test_test_util_beam",
],
deps = [
"//deps/rabbit_common:erlang_app",
"@meck//:erlang_app",
"@ra//:erlang_app",
],
)
Message Containers (#5077) This PR implements an approach for a "protocol (data format) agnostic core" where the format of the message isn't converted at point of reception. Currently all non AMQP 0.9.1 originating messages are converted into a AMQP 0.9.1 flavoured basic_message record before sent to a queue. If the messages are then consumed by the originating protocol they are converted back from AMQP 0.9.1. For some protocols such as MQTT 3.1 this isn't too expensive as MQTT is mostly a fairly easily mapped subset of AMQP 0.9.1 but for others such as AMQP 1.0 the conversions are awkward and in some cases lossy even if consuming from the originating protocol. This PR instead wraps all incoming messages in their originating form into a generic, extensible message container type (mc). The container module exposes an API to get common message details such as size and various properties (ttl, priority etc) directly from the source data type. Each protocol needs to implement the mc behaviour such that when a message originating form one protocol is consumed by another protocol we convert it to the target protocol at that point. The message container also contains annotations, dead letter records and other meta data we need to record during the lifetime of a message. The original protocol message is never modified unless it is consumed. This includes conversion modules to and from amqp, amqpl (AMQP 0.9.1) and mqtt. COMMIT HISTORY: * Refactor away from using the delivery{} record In many places including exchange types. This should make it easier to move towards using a message container type instead of basic_message. Add mc module and move direct replies outside of exchange Lots of changes incl classic queues Implement stream support incl amqp conversions simplify mc state record move mc.erl mc dlx stuff recent history exchange Make tracking work But doesn't take a protocol agnostic approach as we just convert everything into AMQP legacy and back. Might be good enough for now. Tracing as a whole may want a bit of a re-vamp at some point. tidy make quorum queue peek work by legacy conversion dead lettering fixes dead lettering fixes CMQ fixes rabbit_trace type fixes fixes fix Fix classic queue props test assertion fix feature flag and backwards compat Enable message_container feature flag in some SUITEs Dialyzer fixes fixes fix test fixes Various Manually update a gazelle generated file until a gazelle enhancement can be made https://github.com/rabbitmq/rules_erlang/issues/185 Add message_containers_SUITE to bazel and regen bazel files with gazelle from rules_erlang@main Simplify essential proprty access Such as durable, ttl and priority by extracting them into annotations at message container init time. Move type to remove dependenc on amqp10 stuff in mc.erl mostly because I don't know how to make bazel do the right thing add more stuff Refine routing header stuff wip Cosmetics Do not use "maybe" as type name as "maybe" is a keyword since OTP 25 which makes Erlang LS complain. * Dedup death queue names * Fix function clause crashes Fix failing tests in the MQTT shared_SUITE: A classic queue message ID can be undefined as set in https://github.com/rabbitmq/rabbitmq-server/blob/fbe79ff47b4edbc0fd95457e623d6593161ad198/deps/rabbit/src/rabbit_classic_queue_index_v2.erl#L1048 Fix failing tests in the MQTT shared_SUITE-mixed: When feature flag message_containers is disabled, the message is not an #mc{} record, but a #basic_message{} record. 
rabbitmq_suite(
name = "mc_unit_SUITE",
size = "small",
deps = [
"//deps/amqp10_common:erlang_app",
"//deps/rabbit_common:erlang_app",
],
)
rabbitmq_suite(
name = "rabbit_stream_coordinator_SUITE",
deps = [
"//deps/rabbit_common:erlang_app",
],
)
rabbitmq_suite(
name = "rabbit_stream_sac_coordinator_SUITE",
runtime_deps = [
"@meck//:erlang_app",
],
deps = [
"//deps/rabbit_common:erlang_app",
],
)
rabbitmq_suite(
name = "rabbit_access_control_SUITE",
runtime_deps = [
"@meck//:erlang_app",
],
deps = [
"//deps/rabbit_common:erlang_app",
],
)
rabbitmq_integration_suite(
name = "rabbit_stream_queue_SUITE",
size = "large",
additional_beam = [
":test_queue_utils_beam",
],
shard_count = 20,
deps = [
"@proper//:erlang_app",
],
)
rabbitmq_integration_suite(
name = "rabbitmq_4_0_deprecations_SUITE",
size = "large",
)
rabbitmq_integration_suite(
name = "rabbitmq_queues_cli_integration_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "rabbitmqctl_integration_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "rabbitmqctl_shutdown_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "signal_handling_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "single_active_consumer_SUITE",
size = "medium",
additional_beam = [
":test_queue_utils_beam",
],
)
rabbitmq_integration_suite(
name = "term_to_binary_compat_prop_SUITE",
deps = [
"@proper//:erlang_app",
],
)
rabbitmq_integration_suite(
name = "topic_permission_SUITE",
size = "medium",
additional_beam = [
":test_amqp_utils_beam",
],
runtime_deps = [
"//deps/rabbitmq_amqp_client:erlang_app",
],
)
rabbitmq_integration_suite(
name = "transactions_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "unit_access_control_authn_authz_context_propagation_SUITE",
size = "medium",
additional_beam = [
"test/rabbit_auth_backend_context_propagation_mock.beam",
"test/rabbit_foo_protocol_connection_info.beam",
],
)
rabbitmq_integration_suite(
name = "unit_access_control_credential_validation_SUITE",
size = "medium",
deps = [
"@proper//:erlang_app",
],
)
rabbitmq_integration_suite(
name = "unit_access_control_SUITE",
size = "medium",
additional_beam = [
"test/rabbit_dummy_protocol_connection_info.beam",
],
)
rabbitmq_suite(
name = "unit_amqp091_content_framing_SUITE",
size = "small",
deps = [
"//deps/rabbit_common:erlang_app",
],
)
rabbitmq_integration_suite(
name = "unit_amqp091_server_properties_SUITE",
size = "medium",
)
rabbitmq_suite(
name = "unit_quorum_queue_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "unit_app_management_SUITE",
size = "medium",
)
rabbitmq_suite(
name = "unit_cluster_formation_locking_mocks_SUITE",
size = "small",
deps = [
"@meck//:erlang_app",
],
)
rabbitmq_suite(
name = "unit_cluster_formation_sort_nodes_SUITE",
size = "small",
deps = [
"@meck//:erlang_app",
],
)
rabbitmq_suite(
name = "unit_collections_SUITE",
size = "small",
deps = [
"//deps/rabbit_common:erlang_app",
],
)
rabbitmq_suite(
name = "unit_config_value_encryption_SUITE",
size = "medium",
deps = [
"//deps/rabbit_common:test_erlang_app",
"//deps/rabbitmq_prelaunch:test_erlang_app",
"@credentials_obfuscation//:erlang_app",
],
)
rabbitmq_integration_suite(
name = "unit_connection_tracking_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "unit_credit_flow_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "unit_disk_monitor_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "unit_file_handle_cache_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "unit_gen_server2_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "unit_log_management_SUITE",
size = "medium",
)
rabbitmq_suite(
name = "unit_msg_size_metrics_SUITE",
size = "small",
)
rabbitmq_suite(
name = "unit_operator_policy_SUITE",
size = "small",
deps = [
"//deps/rabbit_common:test_erlang_app",
],
)
rabbitmq_suite(
name = "unit_pg_local_SUITE",
size = "small",
)
rabbitmq_suite(
name = "unit_plugin_directories_SUITE",
size = "small",
deps = [
"//deps/rabbit_common:test_erlang_app",
],
)
rabbitmq_integration_suite(
name = "unit_plugin_versioning_SUITE",
size = "small",
)
rabbitmq_integration_suite(
name = "unit_policy_validators_SUITE",
size = "small",
)
rabbitmq_suite(
name = "unit_priority_queue_SUITE",
size = "small",
)
rabbitmq_suite(
name = "unit_queue_consumers_SUITE",
size = "small",
)
rabbitmq_suite(
name = "unit_queue_location_SUITE",
size = "small",
deps = [
"@meck//:erlang_app",
],
)
rabbitmq_integration_suite(
name = "unit_stats_and_metrics_SUITE",
size = "medium",
additional_beam = [
"test/dummy_event_receiver.beam",
],
)
rabbitmq_suite(
name = "unit_supervisor2_SUITE",
size = "small",
additional_beam = [
"test/dummy_supervisor2.beam",
],
)
rabbitmq_integration_suite(
name = "unit_vm_memory_monitor_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "upgrade_preparation_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "vhost_SUITE",
size = "medium",
additional_beam = [
"test/test_rabbit_event_handler.beam",
],
)
rabbitmq_integration_suite(
name = "direct_exchange_routing_v2_SUITE",
size = "medium",
additional_beam = [
":test_queue_utils_beam",
],
)
rabbitmq_integration_suite(
name = "rabbit_local_random_exchange_SUITE",
size = "small",
additional_beam = [
":test_queue_utils_beam",
],
)
rabbitmq_integration_suite(
name = "rabbit_direct_reply_to_prop_SUITE",
size = "medium",
deps = [
"@proper//:erlang_app",
],
)
rabbitmq_integration_suite(
name = "unicode_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "exchanges_SUITE",
size = "medium",
additional_beam = [
":test_queue_utils_beam",
],
)
rabbitmq_integration_suite(
name = "bindings_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "rabbit_db_queue_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "rabbit_db_maintenance_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "rabbit_db_topic_exchange_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "rabbit_db_exchange_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "rabbit_db_binding_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "rabbit_db_msup_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "rabbit_db_policy_SUITE",
size = "medium",
)
rabbitmq_integration_suite(
name = "runtime_parameters_SUITE",
size = "small",
additional_beam = [
"test/dummy_runtime_parameters.beam",
],
)
rabbitmq_integration_suite(
name = "metadata_store_clustering_SUITE",
size = "large",
shard_count = 19,
sharding_method = "case",
)
rabbitmq_integration_suite(
name = "metadata_store_phase1_SUITE",
size = "small",
deps = [
"@khepri//:erlang_app",
],
)
rabbitmq_integration_suite(
name = "metadata_store_migration_SUITE",
size = "small",
)
rabbitmq_integration_suite(
name = "routing_SUITE",
size = "large",
)
rabbitmq_integration_suite(
name = "cli_forget_cluster_node_SUITE",
size = "medium",
additional_beam = [
":test_clustering_utils_beam",
],
)
rabbitmq_integration_suite(
name = "cluster_minority_SUITE",
size = "medium",
additional_beam = [
":test_clustering_utils_beam",
],
)
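
# Hedged note: cluster_upgrade_SUITE presumably covers upgrading a running
# cluster between node versions; ":test_queue_utils_beam" just makes the shared
# queue test helpers available to the suite at runtime.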
rabbitmq_integration_suite(
name = "cluster_upgrade_SUITE",
size = "medium",
additional_beam = [
":test_queue_utils_beam",
],
)
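
# amqp_client_SUITE is large, so shard_count = 3 asks Bazel to split its cases
# across three parallel test shards; the additional beam files are, presumably,
# the shared AMQP test helpers and the event recorder used by these tests.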
rabbitmq_integration_suite(
name = "amqp_client_SUITE",
size = "large",
additional_beam = [
":test_amqp_utils_beam",
":test_event_recorder_beam",
],
shard_count = 3,
runtime_deps = [
"//deps/rabbitmq_amqp_client:erlang_app",
    ],
)
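
# Illustrative note (not part of the original rule set): `shard_count = 3`
# asks Bazel to split the suite above into three parallel test shards. A
# single shard can be forced for local debugging, for example:
#   bazel test //deps/rabbit:<suite_name> --test_sharding_strategy=disabled
# The //deps/rabbit label prefix is assumed from this package's path; use the
# actual suite target name in place of <suite_name>.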
rabbitmq_integration_suite(
    name = "amqp_filtex_SUITE",
    additional_beam = [
        ":test_amqp_utils_beam",
    ],
    runtime_deps = [
        "//deps/rabbitmq_amqp_client:erlang_app",
    ],
)
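
# Sketch of a typical invocation for the suite above (the label assumes this
# package lives at deps/rabbit, as its path suggests):
#   bazel test //deps/rabbit:amqp_filtex_SUITE --test_output=errors
# `runtime_deps` only puts rabbitmq_amqp_client on the runtime code path of
# the test node; it is not a compile-time dependency of the suite modules.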
Support AMQP 1.0 natively ## What Similar to Native MQTT in #5895, this commits implements Native AMQP 1.0. By "native", we mean do not proxy via AMQP 0.9.1 anymore. ## Why Native AMQP 1.0 comes with the following major benefits: 1. Similar to Native MQTT, this commit provides better throughput, latency, scalability, and resource usage for AMQP 1.0. See https://blog.rabbitmq.com/posts/2023/03/native-mqtt for native MQTT improvements. See further below for some benchmarks. 2. Since AMQP 1.0 is not limited anymore by the AMQP 0.9.1 protocol, this commit allows implementing more AMQP 1.0 features in the future. Some features are already implemented in this commit (see next section). 3. Simpler, better understandable, and more maintainable code. Native AMQP 1.0 as implemented in this commit has the following major benefits compared to AMQP 0.9.1: 4. Memory and disk alarms will only stop accepting incoming TRANSFER frames. New connections can still be created to consume from RabbitMQ to empty queues. 5. Due to 4. no need anymore for separate connections for publishers and consumers as we currently recommended for AMQP 0.9.1. which potentially halves the number of physical TCP connections. 6. When a single connection sends to multiple target queues, a single slow target queue won't block the entire connection. Publisher can still send data quickly to all other target queues. 7. A publisher can request whether it wants publisher confirmation on a per-message basis. In AMQP 0.9.1 publisher confirms are configured per channel only. 8. Consumers can change their "prefetch count" dynamically which isn't possible in our AMQP 0.9.1 implementation. See #10174 9. AMQP 1.0 is an extensible protocol This commit also fixes dozens of bugs present in the AMQP 1.0 plugin in RabbitMQ 3.x - most of which cannot be backported due to the complexity and limitations of the old 3.x implementation. This commit contains breaking changes and is therefore targeted for RabbitMQ 4.0. ## Implementation details 1. Breaking change: With Native AMQP, the behaviour of ``` Convert AMQP 0.9.1 message headers to application properties for an AMQP 1.0 consumer amqp1_0.convert_amqp091_headers_to_app_props = false | true (default false) Convert AMQP 1.0 Application Properties to AMQP 0.9.1 headers amqp1_0.convert_app_props_to_amqp091_headers = false | true (default false) ``` will break because we always convert according to the message container conversions. For example, AMQP 0.9.1 x-headers will go into message-annotations instead of application properties. Also, `false` won’t be respected since we always convert the headers with message containers. 2. Remove rabbit_queue_collector rabbit_queue_collector is responsible for synchronously deleting exclusive queues. Since the AMQP 1.0 plugin never creates exclusive queues, rabbit_queue_collector doesn't need to be started in the first place. This will save 1 Erlang process per AMQP 1.0 connection. 3. 7 processes per connection + 1 process per session in this commit instead of 7 processes per connection + 15 processes per session in 3.x Supervision hierarchy got re-designed. 4. Use 1 writer process per AMQP 1.0 connection AMQP 0.9.1 uses a separate rabbit_writer Erlang process per AMQP 0.9.1 channel. Prior to this commit, AMQP 1.0 used a separate rabbit_amqp1_0_writer process per AMQP 1.0 session. Advantage of single writer proc per session (prior to this commit): * High parallelism for serialising packets if multiple sessions within a connection write heavily at the same time. 
This commit uses a single writer process per AMQP 1.0 connection that is shared across all AMQP 1.0 sessions. Advantages of single writer proc per connection (this commit): * Lower memory usage with hundreds of thousands of AMQP 1.0 sessions * Less TCP and IP header overhead given that the single writer process can accumulate across all sessions bytes before flushing the socket. In other words, this commit decides that a reader / writer process pair per AMQP 1.0 connection is good enough for bi-directional TRANSFER flows. Having a writer per session is too heavy. We still ensure high throughput by having separate reader, writer, and session processes. 5. Transform rabbit_amqp1_0_writer into gen_server Why: Prior to this commit, when clicking on the AMQP 1.0 writer process in observer, the process crashed. Instead of handling all these debug messages of the sys module, it's better to implement a gen_server. There is no advantage of using a special OTP process over gen_server for the AMQP 1.0 writer. gen_server also provides cleaner format status output. How: Message callbacks return a timeout of 0. After all messages in the inbox are processed, the timeout message is handled by flushing any pending bytes. 6. Remove stats timer from writer AMQP 1.0 connections haven't emitted any stats previously. 7. When there are contiguous queue confirmations in the session process mailbox, batch them. When the confirmations are sent to the publisher, a single DISPOSITION frame is sent for contiguously confirmed delivery IDs. This approach should be good enough. However it's sub optimal in scenarios where contiguous delivery IDs that need confirmations are rare, for example: * There are multiple links in the session with different sender settlement modes and sender publishes across these links interleaved. * sender settlement mode is mixed and sender publishes interleaved settled and unsettled TRANSFERs. 8. Introduce credit API v2 Why: The AMQP 0.9.1 credit extension which is to be removed in 4.0 was poorly designed since basic.credit is a synchronous call into the queue process blocking the entire AMQP 1.0 session process. How: Change the interactions between queue clients and queue server implementations: * Clients only request a credit reply if the FLOW's `echo` field is set * Include all link flow control state held by the queue process into a new credit_reply queue event: * `available` after the queue sends any deliveries * `link-credit` after the queue sends any deliveries * `drain` which allows us to combine the old queue events send_credit_reply and send_drained into a single new queue event credit_reply. * Include the consumer tag into the credit_reply queue event such that the AMQP 1.0 session process can process any credit replies asynchronously. Link flow control state `delivery-count` also moves to the queue processes. The new interactions are hidden behind feature flag credit_api_v2 to allow for rolling upgrades from 3.13 to 4.0. 9. Use serial number arithmetic in quorum queues and session process. 10. Completely bypass the rabbit_limiter module for AMQP 1.0 flow control. The goal is to eventually remove the rabbit_limiter module in 4.0 since AMQP 0.9.1 global QoS will be unsupported in 4.0. This commit lifts the AMQP 1.0 link flow control logic out of rabbit_limiter into rabbit_queue_consumers. 11. Fix credit bug for streams: AMQP 1.0 settlements shouldn't top up link credit, only FLOW frames should top up link credit. 12. 
Allow sender settle mode unsettled for streams since AMQP 1.0 acknowledgements to streams are no-ops (currently). 13. Fix AMQP 1.0 client bugs Auto renewing credits should not be related to settling TRANSFERs. Remove field link_credit_unsettled as it was wrong and confusing. Prior to this commit auto renewal did not work when the sender uses sender settlement mode settled. 14. Fix AMQP 1.0 client bugs The wrong outdated Link was passed to function auto_flow/2 15. Use osiris chunk iterator Only hold messages of uncompressed sub batches in memory if consumer doesn't have sufficient credits. Compressed sub batches are skipped for non Stream protocol consumers. 16. Fix incoming link flow control Always use confirms between AMQP 1.0 queue clients and queue servers. As already done internally by rabbit_fifo_client and rabbit_stream_queue, use confirms for classic queues as well. 17. Include link handle into correlation when publishing messages to target queues such that session process can correlate confirms from target queues to incoming links. 18. Only grant more credits to publishers if publisher hasn't sufficient credits anymore and there are not too many unconfirmed messages on the link. 19. Completely ignore `block` and `unblock` queue actions and RabbitMQ credit flow between classic queue process and session process. 20. Link flow control is independent between links. A client can refer to a queue or to an exchange with multiple dynamically added target queues. Multiple incoming links can also fan in to the same queue. However the link topology looks like, this commit ensures that each link is only granted more credits if that link isn't overloaded. 21. A connection or a session can send to many different queues. In AMQP 0.9.1, a single slow queue will lead to the entire channel, and then entire connection being blocked. This commit makes sure that a single slow queue from one link won't slow down sending on other links. For example, having link A sending to a local classic queue and link B sending to 5 replica quorum queue, link B will naturally grant credits slower than link A. So, despite the quorum queue being slower in confirming messages, the same AMQP 1.0 connection and session can still pump data very fast into the classic queue. 22. If cluster wide memory or disk alarm occurs. Each session sends a FLOW with incoming-window to 0 to sending client. If sending clients don’t obey, force disconnect the client. If cluster wide memory alarm clears: Each session resumes with a FLOW defaulting to initial incoming-window. 23. All operations apart of publishing TRANSFERS to RabbitMQ can continue during cluster wide alarms, specifically, attaching consumers and consuming, i.e. emptying queues. There is no need for separate AMQP 1.0 connections for publishers and consumers as recommended in our AMQP 0.9.1 implementation. 24. Flow control summary: * If queue becomes bottleneck, that’s solved by slowing down individual sending links (AMQP 1.0 link flow control). * If session becomes bottleneck (more unlikely), that’s solved by AMQP 1.0 session flow control. * If connection becomes bottleneck, it naturally won’t read fast enough from the socket causing TCP backpressure being applied. Nowhere will RabbitMQ internal credit based flow control (i.e. module credit_flow) be used on the incoming AMQP 1.0 message path. 25. Register AMQP sessions Prefer local-only pg over our custom pg_local implementation as pg is a better process group implementation than pg_local. 
pg_local was identified as bottleneck in tests where many MQTT clients were disconnected at once. 26. Start a local-only pg when Rabbit boots: > A scope can be kept local-only by using a scope name that is unique cluster-wide, e.g. the node name: > pg:start_link(node()). Register AMQP 1.0 connections and sessions with pg. In future we should remove pg_local and instead use the new local-only pg for all registered processes such as AMQP 0.9.1 connections and channels. 27. Requeue messages if link detached Although the spec allows to settle delivery IDs on detached links, RabbitMQ does not respect the 'closed' field of the DETACH frame and therefore handles every DETACH frame as closed. Since the link is closed, we expect every outstanding delivery to be requeued. In addition to consumer cancellation, detaching a link therefore causes in flight deliveries to be requeued. Note that this behaviour is different from merely consumer cancellation in AMQP 0.9.1: "After a consumer is cancelled there will be no future deliveries dispatched to it. Note that there can still be "in flight" deliveries dispatched previously. Cancelling a consumer will neither discard nor requeue them." [https://www.rabbitmq.com/consumers.html#unsubscribing] An AMQP receiver can first drain, and then detach to prevent "in flight" deliveries 28. Init AMQP session with BEGIN frame Similar to how there can't be an MQTT processor without a CONNECT frame, there can't be an AMQP session without a BEGIN frame. This allows having strict dialyzer types for session flow control fields (i.e. not allowing 'undefined'). 29. Move serial_number to AMQP 1.0 common lib such that it can be used by both AMQP 1.0 server and client 30. Fix AMQP client to do serial number arithmetic. 31. AMQP client: Differentiate between delivery-id and transfer-id for better understandability. 32. Fix link flow control in classic queues This commit fixes ``` java -jar target/perf-test.jar -ad false -f persistent -u cq -c 3000 -C 1000000 -y 0 ``` followed by ``` ./omq -x 0 amqp -T /queue/cq -D 1000000 --amqp-consumer-credits 2 ``` Prior to this commit, (and on RabbitMQ 3.x) the consuming would halt after around 8 - 10,000 messages. The bug was that in flight messages from classic queue process to session process were not taken into account when topping up credit to the classic queue process. Fixes #2597 The solution to this bug (and a much cleaner design anyway independent of this bug) is that queues should hold all link flow control state including the delivery-count. Hence, when credit API v2 is used the delivery-count will be held by the classic queue process, quorum queue process, and stream queue client instead of managing the delivery-count in the session. 33. The double level crediting between (a) session process and rabbit_fifo_client, and (b) rabbit_fifo_client and rabbit_fifo was removed. Therefore, instead of managing 3 separate delivery-counts (i. session, ii. rabbit_fifo_client, iii. rabbit_fifo), only 1 delivery-count is used in rabbit_fifo. This is a big simplification. 34. This commit fixes quorum queues without bumping the machine version nor introducing new rabbit_fifo commands. Whether credit API v2 is used is solely determined at link attachment time depending on whether feature flag credit_api_v2 is enabled. Even when that feature flag will be enabled later on, this link will keep using credit API v1 until detached (or the node is shut down). 
Eventually, after feature flag credit_api_v2 has been enabled and a subsequent rolling upgrade, all links will use credit API v2. This approach is safe and simple. The 2 alternatives to move delivery-count from the session process to the queue processes would have been: i. Explicit feature flag credit_api_v2 migration function * Can use a gen_server:call and only finish migration once all delivery-counts were migrated. Cons: * Extra new message format just for migration is required. * Risky as migration will fail if a target queue doesn’t reply. ii. Session always includes DeliveryCountSnd when crediting to the queue: Cons: * 2 delivery counts will be hold simultaneously in session proc and queue proc; could be solved by deleting the session proc’s delivery-count for credit-reply * What happens if the receiver doesn’t provide credit for a very long time? Is that a problem? 35. Support stream filtering in AMQP 1.0 (by @acogoluegnes) Use the x-stream-filter-value message annotation to carry the filter value in a published message. Use the rabbitmq:stream-filter and rabbitmq:stream-match-unfiltered filters when creating a receiver that wants to filter out messages from a stream. 36. Remove credit extension from AMQP 0.9.1 client 37. Support maintenance mode closing AMQP 1.0 connections. 38. Remove AMQP 0.9.1 client dependency from AMQP 1.0 implementation. 39. Move AMQP 1.0 plugin to the core. AMQP 1.0 is enabled by default. The old rabbitmq_amqp1_0 plugin will be kept as a no-op plugin to prevent deployment tools from failing that execute: ``` rabbitmq-plugins enable rabbitmq_amqp1_0 rabbitmq-plugins disable rabbitmq_amqp1_0 ``` 40. Breaking change: Remove CLI command `rabbitmqctl list_amqp10_connections`. Instead, list both AMQP 0.9.1 and AMQP 1.0 connections in `list_connections`: ``` rabbitmqctl list_connections protocol Listing connections ... protocol {1, 0} {0,9,1} ``` ## Benchmarks ### Throughput & Latency Setup: * Single node Ubuntu 22.04 * Erlang 26.1.1 Start RabbitMQ: ``` make run-broker PLUGINS="rabbitmq_management rabbitmq_amqp1_0" FULL=1 RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+S 3" ``` Predeclare durable classic queue cq1, durable quorum queue qq1, durable stream queue sq1. Start client: https://github.com/ssorj/quiver https://hub.docker.com/r/ssorj/quiver/tags (digest 453a2aceda64) ``` docker run -it --rm --add-host host.docker.internal:host-gateway ssorj/quiver:latest bash-5.1# quiver --version quiver 0.4.0-SNAPSHOT ``` 1. Classic queue ``` quiver //host.docker.internal//amq/queue/cq1 --durable --count 1m --duration 10m --body-size 12 --credit 1000 ``` This commit: ``` Count ............................................. 1,000,000 messages Duration ............................................... 73.8 seconds Sender rate .......................................... 13,548 messages/s Receiver rate ........................................ 13,547 messages/s End-to-end rate ...................................... 13,547 messages/s Latencies by percentile: 0% ........ 0 ms 90.00% ........ 9 ms 25% ........ 2 ms 99.00% ....... 14 ms 50% ........ 4 ms 99.90% ....... 17 ms 100% ....... 26 ms 99.99% ....... 
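# AMQP-related integration test suites.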
rabbitmq_integration_suite(
name = "amqp_proxy_protocol_SUITE",
size = "medium",
)
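
# amqp_system_SUITE drives an external .NET client (hence the "dotnet" tag);
# flaky = True lets Bazel retry spurious failures, shard_count = 2 splits the
# cases across two shards, and TMPDIR is pointed at Bazel's per-test temp dir.
# A sketch for running it in isolation, assuming the macro exposes a test
# target under the suite name:
#   bazel test //deps/rabbit:amqp_system_SUITE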
rabbitmq_integration_suite(
name = "amqp_system_SUITE",
flaky = True,
shard_count = 2,
tags = [
"dotnet",
],
test_env = {
"TMPDIR": "$TEST_TMPDIR",
},
)
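
# amqp_auth_SUITE lists pre-compiled test helper modules in additional_beam
# (test_amqp_utils and test_event_recorder), which the rabbitmq_integration_suite
# macro is expected to make available on the suite's code path at test time.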
rabbitmq_integration_suite(
name = "amqp_auth_SUITE",
additional_beam = [
":test_amqp_utils_beam",
":test_event_recorder_beam",
],
shard_count = 2,
runtime_deps = [
Enable AMQP 1.0 clients to manage topologies ## What? * Allow AMQP 1.0 clients to dynamically create and delete RabbitMQ topologies (exchanges, queues, bindings). * Provide an Erlang AMQP 1.0 client that manages topologies. ## Why? Today, RabbitMQ topologies can be created via: * [Management HTTP API](https://www.rabbitmq.com/docs/management#http-api) (including Management UI and [messaging-topology-operator](https://github.com/rabbitmq/messaging-topology-operator)) * [Definition Import](https://www.rabbitmq.com/docs/definitions#import) * AMQP 0.9.1 clients Up to RabbitMQ 3.13 the RabbitMQ AMQP 1.0 plugin auto creates queues and bindings depending on the terminus [address format](https://github.com/rabbitmq/rabbitmq-server/tree/v3.13.x/deps/rabbitmq_amqp1_0#routing-and-addressing). Such implicit creation of topologies is limiting and obscure. For some address formats, queues will be created, but not deleted. Some of RabbitMQ's success is due to its flexible routing topologies that AMQP 0.9.1 clients can create and delete dynamically. This commit allows dynamic management of topologies for AMQP 1.0 clients. This commit builds on top of Native AMQP 1.0 (PR #9022) and will be available in RabbitMQ 4.0. ## How? This commits adds the following management operations for AMQP 1.0 clients: * declare queue * delete queue * purge queue * bind queue to exchange * unbind queue from exchange * declare exchange * delete exchange * bind exchange to exchange * unbind exchange from exchange Hence, at least the AMQP 0.9.1 management operations are supported for AMQP 1.0 clients. In addition the operation * get queue is provided which - similar to `declare queue` - returns queue information including the current leader and replicas. This allows clients to publish or consume locally on the node that hosts the queue. Compared to AMQP 0.9.1 whose commands and command fields are fixed, the new AMQP Management API is extensible: New operations and new fields can easily be added in the future. There are different design options how management operations could be supported for AMQP 1.0 clients: 1. Use a special exchange type as done in https://github.com/rabbitmq/rabbitmq-management-exchange This has the advantage that any protocol client (e.g. also STOMP clients) could dynamically manage topologies. However, a special exchange type is the wrong abstraction. 2. Clients could send "special" messages with special headers that the broker interprets. This commit decided for a variation of the 2nd option using a more standardized way by re-using a subest of the following latest AMQP 1.0 extension specifications: * [AMQP Request-Response Messaging with Link Pairing Version 1.0 - Committee Specification 01](https://docs.oasis-open.org/amqp/linkpair/v1.0/cs01/linkpair-v1.0-cs01.html) (February 2021) * [HTTP Semantics and Content over AMQP Version 1.0 - Working Draft 06](https://groups.oasis-open.org/higherlogic/ws/public/document?document_id=65571) (July 2019) * [AMQP Management Version 1.0 - Working Draft 16](https://groups.oasis-open.org/higherlogic/ws/public/document?document_id=65575) (July 2019) An important goal is to keep the interaction between AMQP 1.0 client and RabbitMQ simple to increase usage, development and adoptability of future RabbitMQ AMQP 1.0 client library wrappers. The AMQP 1.0 client has to create a link pair to the special `/management` node. This allows the client to send and receive from the management node. 
Similar to AMQP 0.9.1, there is no need for a reply queue since the reply will be sent directly to the client. Requests and responses are modelled via HTTP, but sent via AMQP using the `HTTP Semantics and Content over AMQP` extension (henceforth `HTTP over AMQP` extension). This commit tries to follow the `HTTP over AMQP` extension as much as possible but deviates where this draft spec doesn't make sense. The projected mode §4.1 is used as opposed to tunneled mode §4.2. A named relay `/management` is used (§6.3) where the message field `to` is the URL. Deviations are * §3.1 mandates that URIs are not encoded in an AMQP message. However, we percent encode URIs in the AMQP message. Otherwise there is for example no way to distinguish a `/` in a queue name from the URI path separator `/`. * §4.1.4 mandates a data section. This commit uses an amqp-value section as it's a better fit given that the content is AMQP encoded data. Using an HTTP API allows for a common well understood interface and future extensibility. Instead of re-using the current RabbitMQ HTTP API, this commit uses a new HTTP API (let's call it v2) which could be used as a future API for plain HTTP clients. ### HTTP API v1 The current HTTP API (let's call it v1) is **not** used since v1 comes with a couple of weaknesses: 1. Deep level of nesting becomes confusing and difficult to manage. Examples of deep nesting in v1: ``` /api/bindings/vhost/e/source/e/destination/props /api/bindings/vhost/e/exchange/q/queue/props ``` 2. Redundant endpoints returning the same resources v1 has 9 endpoints to list binding(s): ``` /api/exchanges/vhost/name/bindings/source /api/exchanges/vhost/name/bindings/destination /api/queues/vhost/name/bindings /api/bindings /api/bindings/vhost /api/bindings/vhost/e/exchange/q/queue /api/bindings/vhost/e/exchange/q/queue/props /api/bindings/vhost/e/source/e/destination /api/bindings/vhost/e/source/e/destination/props ``` 3. Verbs in path names Path names should be nouns instead. v1 contains verbs: ``` /api/queues/vhost/name/get /api/exchanges/vhost/name/publish ``` ### AMQP Management extension Only few aspects of the AMQP Management extension are used. The central idea of the AMQP management spec is **dynamic discovery** such that broker independent AMQP 1.0 clients can discover objects, types, operations, and HTTP endpoints of specific brokers. In fact, clients are only conformant if: > All request addresses are dynamically discovered starting from the discovery document. > A requesting container MUST NOT use fixed assumptions about the addressing structure of the management API. While this is a nice and powerful idea, no AMQP 1.0 client and no AMQP 1.0 server implement the latest AMQP 1.0 management spec from 2019, partly presumably due to its complexity. Therefore, the idea of such dynamic discovery has failed to be implemented in practice. The AMQP management spec mandates that the management endpoint returns a discovery document containing broker specific collections, types, configuration, and operations including their endpoints. The API endpoints of the AMQP management spec are therefore all designed around dynamic discovery. For example, to create either a queue or an exchange, the client has to ``` POST /$management/entities ``` which shows that the entities collection acts as a generic factory, see section 2.2. The server will then create the resource and reply with a location header containing a URI pointing to the resource. 
For RabbitMQ, we don’t need such a generic factory to create queues or exchanges.

To list bindings for a queue Q1, the spec suggests
```
GET /$management/Queues/Q1/$management/entities
```
which again shows the generic entities endpoint as well as a `$management` endpoint under Q1 to allow a queue to return a discovery document. For RabbitMQ, we don’t need such generic endpoints and discovery documents.

Given we aim for our own thin RabbitMQ AMQP 1.0 client wrapper libraries which expose the RabbitMQ model to the developer, we can directly use fixed HTTP endpoint assumptions in our RabbitMQ specific libraries. This is by far simpler than using the dynamic endpoints of the management spec. Simplicity leads to higher adoption and enables more developers to write RabbitMQ AMQP 1.0 client library wrappers.

The AMQP Management extension also suffers from a deep level of nesting in paths. Examples:
```
/$management/Queues/Q1/$management/entities
/$management/Queues/Q1/Bindings/Binding1
```
as well as verbs in path names: Section 7.1.4 suggests using verbs in path names, for example “purge”, due to the dynamic operations discovery document.

### HTTP API v2

This commit introduces a new HTTP API v2 following best practices. It could serve as a future API for plain HTTP clients. This commit and RabbitMQ 4.0 will only implement a minimal set of HTTP API v2 endpoints, and only for HTTP over AMQP. In other words, the existing HTTP API v1 Cowboy handlers will continue to be used for all plain HTTP requests in RabbitMQ 4.0 and will remain untouched for RabbitMQ 4.0. Over time, after 4.0 has shipped, we could ship a pure HTTP implementation of HTTP API v2. Hence, the new HTTP API v2 endpoints for HTTP over AMQP should be designed such that they can be re-used in the future for a pure HTTP implementation.

The minimal set of endpoints for RabbitMQ 4.0 is:
```
GET / PUT / DELETE /vhosts/:vhost/queues/:queue
```
read, create, delete a queue
```
DELETE /vhosts/:vhost/queues/:queue/messages
```
purges a queue
```
GET / DELETE /vhosts/:vhost/bindings/:binding
```
read, delete bindings where `:binding` is a binding ID of the following path segment:
```
src=e1;dstq=q2;key=my-key;args=
```
The binding arguments field `args` has an empty value by default, i.e. there are no binding arguments. If the binding includes binding arguments, `args` will be an Erlang portable term hash provided by the server, similar to what’s provided in HTTP API v1 today. Alternatively, we could use an arguments scheme of:
```
args=k1,utf8,v1&k2,uint,3
```
However, such a scheme leads to long URIs when there are many binding arguments. Note that it’s perfectly fine for URI producing applications to include URI reserved characters `=` / `;` / `,` / `$` in a path segment.

To create a binding, the client therefore needs to POST to a bindings factory URI:
```
POST /vhosts/:vhost/bindings
```
To list all bindings between a source exchange e1 and destination exchange e2 with binding key k1:
```
GET /vhosts/:vhost/bindings?src=e1&dste=e2&key=k1
```
This endpoint will be called by the RabbitMQ AMQP 1.0 client library to unbind a binding with non-empty binding arguments, in order to get the binding ID before invoking a
```
DELETE /vhosts/:vhost/bindings/:binding
```
In the future, after RabbitMQ 4.0 has shipped, new API endpoints could be added. The following is up for discussion and is only meant to show the clean and simple design of HTTP API v2.
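Since the binding ID packs source, destination, binding key, and an arguments hash into a single path segment, a client library has to compose that segment itself. Below is a minimal Python sketch, assuming percent-encoding of the individual names and using a hypothetical helper name; it is not the actual rabbitmq_amqp_client implementation.
```python
from urllib.parse import quote

def binding_segment(src: str, dst_queue: str, key: str, args_hash: str = "") -> str:
    """Compose the binding ID path segment described above, e.g.
    src=e1;dstq=q2;key=my-key;args=

    `args_hash` is the server-provided opaque hash used when the binding
    has non-empty arguments; it stays empty otherwise.  Helper name and
    encoding details are illustrative assumptions.
    """
    enc = lambda s: quote(s, safe="")
    return f"src={enc(src)};dstq={enc(dst_queue)};key={enc(key)};args={args_hash}"

# Used as the final path element, e.g.
# DELETE /vhosts/:vhost/bindings/src=e1;dstq=q2;key=my-key;args=
assert binding_segment("e1", "q2", "my-key") == "src=e1;dstq=q2;key=my-key;args="
```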
Bindings endpoint can be queried as follows: to list all bindings for a given source exchange e1: ``` GET /vhosts/:vhost/bindings?src=e1 ``` to list all bindings for a given destination queue q1: ``` GET /vhosts/:vhost/bindings?dstq=q1 ``` to list all bindings between a source exchange e1 and destination queue q1: ``` GET /vhosts/:vhost/bindings?src=e1&dstq=q1 ``` multiple bindings between source exchange e1 and destination queue q1 could be deleted at once as follows: ``` DELETE /vhosts/:vhost/bindings?src=e1&dstq=q1 ``` GET could be supported globally across all vhosts: ``` /exchanges /queues /bindings ``` Publish a message: ``` POST /vhosts/:vhost/queues/:queue/messages ``` Consume or peek a message (depending on query parameters): ``` GET /vhosts/:vhost/queues/:queue/messages ``` Note that the AMQP 1.0 client omits the `/vhost/:vhost` path prefix. Since an AMQP connection belongs to a single vhost, there is no need to additionally include the vhost in every HTTP request. Pros of HTTP API v2: 1. Low level of nesting Queues, exchanges, bindings are top level entities directly under vhosts. Although the HTTP API doesn’t have to reflect how resources are stored in the database, v2 does nicely reflect the Khepri tree structure. 2. Nouns instead of verbs HTTP API v2 is very simple to read and understand as shown by ``` POST /vhosts/:vhost/queues/:queue/messages to post messages, i.e. publish to a queue. GET /vhosts/:vhost/queues/:queue/messages to get messages, i.e. consume or peek from a queue. DELETE /vhosts/:vhost/queues/:queue/messages to delete messages, i.e. purge a queue. ``` A separate new HTTP API v2 allows us to ship only handlers for HTTP over AMQP for RabbitMQ 4.0 and therefore move faster while still keeping the option on the table to re-use the new v2 API for pure HTTP in the future. In contrast, re-using the HTTP API v1 for HTTP over AMQP is possible, but dirty because separate handlers (HTTP over AMQP and pure HTTP) replying differently will be needed for the same v1 endpoints.
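A corresponding sketch for the query-parameter form used when listing (or bulk-deleting) bindings. The helper is hypothetical; it only shows how the `src`, `dstq`/`dste`, and `key` parameters from the examples above combine into a URI.
```python
from urllib.parse import urlencode

def bindings_query(**filters: str) -> str:
    # Hypothetical helper: the keyword arguments mirror the query parameters
    # shown above, e.g. src (source exchange), dstq (destination queue),
    # dste (destination exchange), key (binding key).
    return "/bindings?" + urlencode(filters)

# All bindings between source exchange e1 and destination queue q1:
assert bindings_query(src="e1", dstq="q1") == "/bindings?src=e1&dstq=q1"
```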
"//deps/rabbitmq_amqp_client:erlang_app",
Introduce new AMQP 1.0 address format

## What?

Introduce a new address format (let's call it v2) for AMQP 1.0 source and target addresses. The old format (let's call it v1) is described in https://github.com/rabbitmq/rabbitmq-server/tree/v3.13.x/deps/rabbitmq_amqp1_0#routing-and-addressing

The only v2 source address format is:
```
/queue/:queue
```
The 4 possible v2 target address formats are:
```
/exchange/:exchange/key/:routing-key
/exchange/:exchange
/queue/:queue
<null>
```
where the last AMQP <null> value format requires that each message’s `to` field contains one of:
```
/exchange/:exchange/key/:routing-key
/exchange/:exchange
/queue/:queue
```

## Why?

The AMQP address v1 format comes with the following flaws:

1. Obscure address format: Without reading the documentation, the differences for example between source addresses
```
/amq/queue/:queue
/queue/:queue
:queue
```
are unknown to users. Hence, the address format is obscure.

2. Implicit creation of topologies: Some address formats implicitly create queues (and bindings), such as source address
```
/exchange/:exchange/:binding-key
```
or target address
```
/queue/:queue
```
These queues and bindings are never deleted (by the AMQP 1.0 plugin). Implicit creation of such topologies is also obscure.

3. Redundant address formats:
```
/queue/:queue
:queue
```
have the same meaning and are therefore redundant.

4. The properties section must be parsed to determine whether a routing key is present: Target address
```
/exchange/:exchange
```
requires RabbitMQ to parse the properties section in order to check whether the message `subject` is set. If `subject` is not set, the routing key will default to the empty string.

5. Using `subject` as routing key misuses the purpose of this field. According to the AMQP spec, the message `subject` field's purpose is:
> A common field for summary information about the message content and purpose.

6. Exchange names, queue names, and routing keys must not contain the "/" (slash) character. The current 3.13 implementation splits on "/", disallowing this character in exchange names, queue names, and routing keys, which is unnecessarily prohibitive.

7. Clients must create a separate link per target exchange. While this is a reasonable working assumption, there might be rare use cases where it could make sense to create many exchanges (e.g. 1 exchange per queue, see https://github.com/rabbitmq/rabbitmq-server/discussions/10708) and have a single application publish to all these exchanges. With the v1 address format, for an application to send to 500 different exchanges, it needs to create 500 links.

Due to these disadvantages, and thanks to #10559 which allows clients to explicitly create topologies, we can create a simpler, clearer, and better v2 address format.

## How?

### Design goals

Following the 7 cons of v1, the design goals for v2 are:

1. The address format should be simple so that users have a chance to understand the meaning of the address without necessarily consulting the docs.
2. The address format should not implicitly create queues, bindings, or exchanges. Instead, topologies should be created either explicitly via the new management node prior to link attachment (see #10559), or, in future, we might support the `dynamic` source or target properties so that RabbitMQ creates queues dynamically.
3. No redundant address formats.
4. The target address format should explicitly state whether the routing key is present, empty, or will be provided dynamically in each message.
5. `Subject` should not be used as the routing key. Instead, a better fitting field should be used.
6. Exchange names, queue names, and routing keys should be allowed to contain valid UTF-8 encoded data, including the "/" character.
7. Allow both target exchange and routing key to be dynamically provided within each message.

Furthermore:

8. v2 must co-exist with v1 for at least some time. Applications should be able to upgrade to RabbitMQ 4.0 while continuing to use v1. Examples include AMQP 1.0 shovels and plugins communicating between a 4.0 and a 3.13 cluster. Starting with 4.1, we should change the AMQP 1.0 shovel and plugin clients to use only the new v2 address format. This will allow AMQP 1.0 shovels and plugins to communicate between a 4.1 and 4.2 cluster. We will deprecate v1 in 4.0 and remove support for v1 in a later 4.x version.

### Additional Context

The address is usually a String, but can be of any type. The [AMQP Addressing extension](https://docs.oasis-open.org/amqp/addressing/v1.0/addressing-v1.0.html) suggests that addresses are URIs and are therefore hierarchical and could even contain query parameters:
> An AMQP address is a URI reference as defined by RFC3986.
> the path expression is a sequence of identifier segments that reflects a path through an
> implementation specific relationship graph of AMQP nodes and their termini.
> The path expression MUST resolve to a node’s terminus in an AMQP container.

The [Using the AMQP Anonymous Terminus for Message Routing Version 1.0](https://docs.oasis-open.org/amqp/anonterm/v1.0/cs01/anonterm-v1.0-cs01.html) extension allows for the target being `null` and the `To` property to contain the node address. This corresponds to AMQP 0.9.1 where clients can send each message on the same channel to a different `{exchange, routing-key}` destination.

The following v2 address formats will be used.

### v2 addresses

A new deprecated feature flag `amqp_address_v1` will be introduced in 4.0 which is permitted by default. Starting with 4.1, we should change the AMQP 1.0 shovel and plugin AMQP 1.0 clients to use only the new v2 address format. However, 4.1 server code must still understand the 4.0 AMQP 1.0 shovel and plugin AMQP 1.0 clients’ v1 address format. The new deprecated feature flag will therefore be denied by default in 4.2. This allows AMQP 1.0 shovels and plugins to work between
* 4.0 and 3.13 clusters using v1
* 4.1 and 4.0 clusters using v2 from 4.1 to 4.0 and v1 from 4.0 to 4.1
* 4.2 and 4.1 clusters using v2

without having to support both v1 and v2 at the same time in the AMQP 1.0 shovel and plugin clients. While supporting both v1 and v2 in these clients is feasible, it's simpler to switch the client code directly from v1 to v2.

### v2 source addresses

The source address format is
```
/queue/:queue
```
If the deprecated feature flag `amqp_address_v1` is permitted and the queue does not exist, the queue will be auto-created. If the deprecated feature flag `amqp_address_v1` is denied, the queue must exist.

### v2 target addresses

v1 requires attaching a new link for each destination exchange. v2 will allow dynamic `{exchange, routing-key}` combinations for a given link. v2 therefore allows for the rare use cases where a single AMQP 1.0 publisher app needs to send to many different exchanges. Setting up a link per destination exchange could be cumbersome. Hence, v2 will support the dynamic `{exchange, routing-key}` combinations of AMQP 0.9.1.
To achieve this, we make use of the "Anonymous Terminus for Message Routing" extension: The target address will contain the AMQP value null. The `To` field in each message must be set and contain either address format
```
/exchange/:exchange/key/:routing-key
```
or
```
/exchange/:exchange
```
when using the empty routing key.

The `to` field requires an address type and is better suited than the `subject` field. Note that each message will contain this `To` value for the anonymous terminus. Hence, we should save some bytes being sent across the network and stored on disk. Using a format
```
/e/:exchange/k/:routing-key
```
saves more bytes, but is too obscure. However, we use only `/key/` instead of `/routing-key/` to save a few bytes. This also simplifies the format because users don’t have to remember whether to spell `routing-key`, `routing_key`, or `routingkey`.

The other allowed target address formats are:
```
/exchange/:exchange/key/:routing-key
```
where exchange and routing key are static on the given link.
```
/exchange/:exchange
```
where exchange and routing key are static on the given link, and the routing key will be the empty string (useful for example for the fanout exchange).
```
/queue/:queue
```
This provides RabbitMQ beginners the illusion of sending a message directly to a queue without having to understand what exchanges and routing keys are. If the deprecated feature flag `amqp_address_v1` is permitted and the queue does not exist, the queue will be auto-created. If the deprecated feature flag `amqp_address_v1` is denied, the queue must exist. Besides the additional queue existence check, this queue target is different from
```
/exchange//key/:queue
```
in that queue specific optimisations might be done (in future) by RabbitMQ (for example, different receiving queue types could grant different amounts of link credits to the sending clients). A write permission check against the amq.default exchange will be performed nevertheless.

v2 will prohibit the v1 static link & dynamic routing-key combination where the routing key is sent in the message `subject`, as that’s also obscure. For this use case, v2’s new anonymous terminus can be used where both exchange and routing key are defined in the message’s `To` field. (The bare message must not be modified because it could be signed.)

The alias format
```
/topic/:topic
```
will also be removed. Sending to topic exchanges is arguably an advanced feature. Users can directly use the format
```
/exchange/amq.topic/key/:topic
```
which reduces the number of redundant address formats.

### v2 address format reference

To sum up (and as stated at the top of this commit message):

The only v2 source address format is:
```
/queue/:queue
```
The 4 possible v2 target address formats are:
```
/exchange/:exchange/key/:routing-key
/exchange/:exchange
/queue/:queue
<null>
```
where the last AMQP <null> value format requires that each message’s `to` field contains one of:
```
/exchange/:exchange/key/:routing-key
/exchange/:exchange
/queue/:queue
```
Hence, all 8 listed design goals are reached.
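To make the v2 target address forms above concrete, here is a small Python sketch that builds the three non-null formats and returns None for the anonymous terminus case, where the address moves into each message's `to` field instead. The helper name is hypothetical, and percent-encoding of reserved characters such as `/` in names is an assumption made in line with design goal 6; the authoritative escaping rules are the server's.
```python
from typing import Optional
from urllib.parse import quote

def v2_target_address(exchange: Optional[str] = None,
                      routing_key: Optional[str] = None,
                      queue: Optional[str] = None) -> Optional[str]:
    # Hypothetical helper, not the client library API.
    enc = lambda s: quote(s, safe="")
    if queue is not None:
        return f"/queue/{enc(queue)}"
    if exchange is not None and routing_key is not None:
        return f"/exchange/{enc(exchange)}/key/{enc(routing_key)}"
    if exchange is not None:
        return f"/exchange/{enc(exchange)}"
    return None  # <null> target: per-message routing via the `to` field

assert v2_target_address(exchange="amq.topic", routing_key="a/b") == "/exchange/amq.topic/key/a%2Fb"
assert v2_target_address(queue="orders") == "/queue/orders"
assert v2_target_address() is None
```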
],
)
rabbitmq_integration_suite(
name = "amqp_address_SUITE",
Support AMQP filter expressions (#12415)

* Support AMQP filter expressions

## What?

This PR implements the following property filter expressions for AMQP clients consuming from streams, as defined in [AMQP Filter Expressions Version 1.0 Working Draft 09](https://groups.oasis-open.org/higherlogic/ws/public/document?document_id=66227):
* properties filters [section 4.2.4]
* application-properties filters [section 4.2.5]

String prefix and suffix matching is also supported.

This PR also fixes a bug where RabbitMQ would accept wrong filters. Specifically, prior to this PR the values of the filter-set's map were allowed to be symbols. However, "every value MUST be either null or of a described type which provides the archetype filter."

## Why?

This feature adds the ability to RabbitMQ to have multiple concurrent clients each consuming only a subset of messages while maintaining message order. This feature also reduces network traffic between RabbitMQ and clients by only dispatching those messages that the clients are actually interested in.

Note that AMQP filter expressions are more fine grained than the [bloom filter based stream filtering](https://www.rabbitmq.com/blog/2023/10/16/stream-filtering) because
* they do not suffer false positives
* the unit of filtering is per-message instead of per-chunk
* matching can be performed on **multiple** values in the properties and application-properties sections
* prefix and suffix matching on the actual values is supported.

Both AMQP filter expressions and bloom filters can be used together.

## How?

If a filter isn't valid, RabbitMQ ignores the filter. RabbitMQ only replies with filters it actually supports and validated successfully, to comply with:
"The receiving endpoint sets its desired filter, the sending endpoint [RabbitMQ] sets the filter actually in place (including any filters defaulted at the node)."

* Delete streams test case

The test suite constructed a wrong filter-set. Specifically, the value of the filter-set didn't use a described type as mandated by the spec. Using https://azure.github.io/amqpnetlite/api/Amqp.Types.DescribedValue.html throws errors that the descriptor can't be encoded. Given that this code path is already tested via the amqp_filtex_SUITE, this F# test is therefore deleted.

* Re-introduce the AMQP filter-set bug

Since clients might rely on the wrong filter-set value type, we keep supporting the bug behind a deprecated feature flag and will gradually remove support for this bug.

* Revert "Delete streams test case"

This reverts commit c95cfeaef74160894050ae51a563bf839384d2d7.
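The matching semantics described above (equality on `properties` and `application-properties` fields, plus string prefix and suffix matching) can be illustrated with a small, purely Python-level sketch. The filter representation below is an assumption made only for this example; the real filter-set carries AMQP described types as mandated by the working draft.
```python
def message_matches(filters: dict, properties: dict, app_properties: dict) -> bool:
    """Illustrative per-message filter evaluation (not the broker code).

    `filters` maps ("properties" | "application-properties", field-name)
    to a (mode, value) pair, where mode is "equals", "prefix" or "suffix".
    """
    sections = {"properties": properties,
                "application-properties": app_properties}
    for (section, field), (mode, expected) in filters.items():
        actual = sections[section].get(field)
        if actual is None:
            return False
        if mode == "equals" and actual != expected:
            return False
        if mode == "prefix" and not str(actual).startswith(expected):
            return False
        if mode == "suffix" and not str(actual).endswith(expected):
            return False
    return True

# Only messages whose subject equals "orders.created" and whose
# application-property "region" starts with "eu-" pass this filter:
f = {("properties", "subject"): ("equals", "orders.created"),
     ("application-properties", "region"): ("prefix", "eu-")}
assert message_matches(f, {"subject": "orders.created"}, {"region": "eu-west-1"})
```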
additional_beam = [
":test_amqp_utils_beam",
],
shard_count = 2,
runtime_deps = [
"//deps/rabbitmq_amqp_client:erlang_app",
Support AMQP 1.0 natively

## What

Similar to Native MQTT in #5895, this commit implements Native AMQP 1.0. By "native", we mean that AMQP 1.0 is no longer proxied via AMQP 0.9.1.

## Why

Native AMQP 1.0 comes with the following major benefits:

1. Similar to Native MQTT, this commit provides better throughput, latency, scalability, and resource usage for AMQP 1.0. See https://blog.rabbitmq.com/posts/2023/03/native-mqtt for native MQTT improvements. See further below for some benchmarks.
2. Since AMQP 1.0 is not limited anymore by the AMQP 0.9.1 protocol, this commit allows implementing more AMQP 1.0 features in the future. Some features are already implemented in this commit (see next section).
3. Simpler, easier to understand, and more maintainable code.

Native AMQP 1.0 as implemented in this commit has the following major benefits compared to AMQP 0.9.1:

4. Memory and disk alarms will only stop accepting incoming TRANSFER frames. New connections can still be created to consume from RabbitMQ to empty queues.
5. Due to 4., there is no need anymore for separate connections for publishers and consumers, as we currently recommend for AMQP 0.9.1, which potentially halves the number of physical TCP connections.
6. When a single connection sends to multiple target queues, a single slow target queue won't block the entire connection. The publisher can still send data quickly to all other target queues.
7. A publisher can request whether it wants publisher confirmation on a per-message basis. In AMQP 0.9.1, publisher confirms are configured per channel only.
8. Consumers can change their "prefetch count" dynamically, which isn't possible in our AMQP 0.9.1 implementation. See #10174
9. AMQP 1.0 is an extensible protocol.

This commit also fixes dozens of bugs present in the AMQP 1.0 plugin in RabbitMQ 3.x - most of which cannot be backported due to the complexity and limitations of the old 3.x implementation.

This commit contains breaking changes and is therefore targeted for RabbitMQ 4.0.

## Implementation details

1. Breaking change: With Native AMQP, the behaviour of
```
Convert AMQP 0.9.1 message headers to application properties for an AMQP 1.0 consumer
amqp1_0.convert_amqp091_headers_to_app_props = false | true (default false)
Convert AMQP 1.0 Application Properties to AMQP 0.9.1 headers
amqp1_0.convert_app_props_to_amqp091_headers = false | true (default false)
```
will break because we always convert according to the message container conversions. For example, AMQP 0.9.1 x-headers will go into message-annotations instead of application properties. Also, `false` won’t be respected since we always convert the headers with message containers.

2. Remove rabbit_queue_collector

rabbit_queue_collector is responsible for synchronously deleting exclusive queues. Since the AMQP 1.0 plugin never creates exclusive queues, rabbit_queue_collector doesn't need to be started in the first place. This will save 1 Erlang process per AMQP 1.0 connection.

3. 7 processes per connection + 1 process per session in this commit, instead of 7 processes per connection + 15 processes per session in 3.x. The supervision hierarchy got re-designed.

4. Use 1 writer process per AMQP 1.0 connection

AMQP 0.9.1 uses a separate rabbit_writer Erlang process per AMQP 0.9.1 channel. Prior to this commit, AMQP 1.0 used a separate rabbit_amqp1_0_writer process per AMQP 1.0 session. Advantage of a single writer proc per session (prior to this commit):
* High parallelism for serialising packets if multiple sessions within a connection write heavily at the same time.
This commit uses a single writer process per AMQP 1.0 connection that is shared across all AMQP 1.0 sessions. Advantages of single writer proc per connection (this commit): * Lower memory usage with hundreds of thousands of AMQP 1.0 sessions * Less TCP and IP header overhead given that the single writer process can accumulate across all sessions bytes before flushing the socket. In other words, this commit decides that a reader / writer process pair per AMQP 1.0 connection is good enough for bi-directional TRANSFER flows. Having a writer per session is too heavy. We still ensure high throughput by having separate reader, writer, and session processes. 5. Transform rabbit_amqp1_0_writer into gen_server Why: Prior to this commit, when clicking on the AMQP 1.0 writer process in observer, the process crashed. Instead of handling all these debug messages of the sys module, it's better to implement a gen_server. There is no advantage of using a special OTP process over gen_server for the AMQP 1.0 writer. gen_server also provides cleaner format status output. How: Message callbacks return a timeout of 0. After all messages in the inbox are processed, the timeout message is handled by flushing any pending bytes. 6. Remove stats timer from writer AMQP 1.0 connections haven't emitted any stats previously. 7. When there are contiguous queue confirmations in the session process mailbox, batch them. When the confirmations are sent to the publisher, a single DISPOSITION frame is sent for contiguously confirmed delivery IDs. This approach should be good enough. However it's sub optimal in scenarios where contiguous delivery IDs that need confirmations are rare, for example: * There are multiple links in the session with different sender settlement modes and sender publishes across these links interleaved. * sender settlement mode is mixed and sender publishes interleaved settled and unsettled TRANSFERs. 8. Introduce credit API v2 Why: The AMQP 0.9.1 credit extension which is to be removed in 4.0 was poorly designed since basic.credit is a synchronous call into the queue process blocking the entire AMQP 1.0 session process. How: Change the interactions between queue clients and queue server implementations: * Clients only request a credit reply if the FLOW's `echo` field is set * Include all link flow control state held by the queue process into a new credit_reply queue event: * `available` after the queue sends any deliveries * `link-credit` after the queue sends any deliveries * `drain` which allows us to combine the old queue events send_credit_reply and send_drained into a single new queue event credit_reply. * Include the consumer tag into the credit_reply queue event such that the AMQP 1.0 session process can process any credit replies asynchronously. Link flow control state `delivery-count` also moves to the queue processes. The new interactions are hidden behind feature flag credit_api_v2 to allow for rolling upgrades from 3.13 to 4.0. 9. Use serial number arithmetic in quorum queues and session process. 10. Completely bypass the rabbit_limiter module for AMQP 1.0 flow control. The goal is to eventually remove the rabbit_limiter module in 4.0 since AMQP 0.9.1 global QoS will be unsupported in 4.0. This commit lifts the AMQP 1.0 link flow control logic out of rabbit_limiter into rabbit_queue_consumers. 11. Fix credit bug for streams: AMQP 1.0 settlements shouldn't top up link credit, only FLOW frames should top up link credit. 12. 
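Point 7 above batches contiguous queue confirmations so that one DISPOSITION frame settles a whole run of delivery-ids. Below is a minimal sketch of that grouping step, using plain integers (the real session process additionally has to respect serial-number wrap-around; see the serial-number sketch further below).
```python
def disposition_ranges(delivery_ids):
    """Group confirmed delivery-ids into contiguous [first, last] ranges,
    one DISPOSITION frame per range (illustrative sketch only)."""
    ranges = []
    for did in sorted(delivery_ids):
        if ranges and did == ranges[-1][1] + 1:
            ranges[-1][1] = did          # extend the current contiguous run
        else:
            ranges.append([did, did])    # start a new run
    return ranges

assert disposition_ranges([3, 1, 2, 7, 8, 10]) == [[1, 3], [7, 8], [10, 10]]
```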
Allow sender settle mode unsettled for streams since AMQP 1.0 acknowledgements to streams are no-ops (currently). 13. Fix AMQP 1.0 client bugs Auto renewing credits should not be related to settling TRANSFERs. Remove field link_credit_unsettled as it was wrong and confusing. Prior to this commit auto renewal did not work when the sender uses sender settlement mode settled. 14. Fix AMQP 1.0 client bugs The wrong outdated Link was passed to function auto_flow/2 15. Use osiris chunk iterator Only hold messages of uncompressed sub batches in memory if consumer doesn't have sufficient credits. Compressed sub batches are skipped for non Stream protocol consumers. 16. Fix incoming link flow control Always use confirms between AMQP 1.0 queue clients and queue servers. As already done internally by rabbit_fifo_client and rabbit_stream_queue, use confirms for classic queues as well. 17. Include link handle into correlation when publishing messages to target queues such that session process can correlate confirms from target queues to incoming links. 18. Only grant more credits to publishers if publisher hasn't sufficient credits anymore and there are not too many unconfirmed messages on the link. 19. Completely ignore `block` and `unblock` queue actions and RabbitMQ credit flow between classic queue process and session process. 20. Link flow control is independent between links. A client can refer to a queue or to an exchange with multiple dynamically added target queues. Multiple incoming links can also fan in to the same queue. However the link topology looks like, this commit ensures that each link is only granted more credits if that link isn't overloaded. 21. A connection or a session can send to many different queues. In AMQP 0.9.1, a single slow queue will lead to the entire channel, and then entire connection being blocked. This commit makes sure that a single slow queue from one link won't slow down sending on other links. For example, having link A sending to a local classic queue and link B sending to 5 replica quorum queue, link B will naturally grant credits slower than link A. So, despite the quorum queue being slower in confirming messages, the same AMQP 1.0 connection and session can still pump data very fast into the classic queue. 22. If cluster wide memory or disk alarm occurs. Each session sends a FLOW with incoming-window to 0 to sending client. If sending clients don’t obey, force disconnect the client. If cluster wide memory alarm clears: Each session resumes with a FLOW defaulting to initial incoming-window. 23. All operations apart of publishing TRANSFERS to RabbitMQ can continue during cluster wide alarms, specifically, attaching consumers and consuming, i.e. emptying queues. There is no need for separate AMQP 1.0 connections for publishers and consumers as recommended in our AMQP 0.9.1 implementation. 24. Flow control summary: * If queue becomes bottleneck, that’s solved by slowing down individual sending links (AMQP 1.0 link flow control). * If session becomes bottleneck (more unlikely), that’s solved by AMQP 1.0 session flow control. * If connection becomes bottleneck, it naturally won’t read fast enough from the socket causing TCP backpressure being applied. Nowhere will RabbitMQ internal credit based flow control (i.e. module credit_flow) be used on the incoming AMQP 1.0 message path. 25. Register AMQP sessions Prefer local-only pg over our custom pg_local implementation as pg is a better process group implementation than pg_local. 
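Point 18 above only tops up a publisher's link credit when the publisher is running low and the link does not have too many unconfirmed messages. Here is a hedged sketch of such a decision; the function name and all threshold values are illustrative assumptions, not RabbitMQ's actual constants.
```python
def maybe_grant_credit(link_credit: int, unconfirmed: int,
                       max_credit: int = 128,
                       credit_low_watermark: int = 32,
                       max_unconfirmed: int = 256) -> int:
    """Return how much credit to grant the sender on this link (sketch)."""
    if link_credit > credit_low_watermark:
        return 0                      # sender still has plenty of credit
    if unconfirmed >= max_unconfirmed:
        return 0                      # too many unconfirmed messages in flight
    return max_credit - link_credit   # refill back to the target window

assert maybe_grant_credit(link_credit=10, unconfirmed=5) == 118
assert maybe_grant_credit(link_credit=10, unconfirmed=500) == 0
```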
pg_local was identified as bottleneck in tests where many MQTT clients were disconnected at once. 26. Start a local-only pg when Rabbit boots: > A scope can be kept local-only by using a scope name that is unique cluster-wide, e.g. the node name: > pg:start_link(node()). Register AMQP 1.0 connections and sessions with pg. In future we should remove pg_local and instead use the new local-only pg for all registered processes such as AMQP 0.9.1 connections and channels. 27. Requeue messages if link detached Although the spec allows to settle delivery IDs on detached links, RabbitMQ does not respect the 'closed' field of the DETACH frame and therefore handles every DETACH frame as closed. Since the link is closed, we expect every outstanding delivery to be requeued. In addition to consumer cancellation, detaching a link therefore causes in flight deliveries to be requeued. Note that this behaviour is different from merely consumer cancellation in AMQP 0.9.1: "After a consumer is cancelled there will be no future deliveries dispatched to it. Note that there can still be "in flight" deliveries dispatched previously. Cancelling a consumer will neither discard nor requeue them." [https://www.rabbitmq.com/consumers.html#unsubscribing] An AMQP receiver can first drain, and then detach to prevent "in flight" deliveries 28. Init AMQP session with BEGIN frame Similar to how there can't be an MQTT processor without a CONNECT frame, there can't be an AMQP session without a BEGIN frame. This allows having strict dialyzer types for session flow control fields (i.e. not allowing 'undefined'). 29. Move serial_number to AMQP 1.0 common lib such that it can be used by both AMQP 1.0 server and client 30. Fix AMQP client to do serial number arithmetic. 31. AMQP client: Differentiate between delivery-id and transfer-id for better understandability. 32. Fix link flow control in classic queues This commit fixes ``` java -jar target/perf-test.jar -ad false -f persistent -u cq -c 3000 -C 1000000 -y 0 ``` followed by ``` ./omq -x 0 amqp -T /queue/cq -D 1000000 --amqp-consumer-credits 2 ``` Prior to this commit, (and on RabbitMQ 3.x) the consuming would halt after around 8 - 10,000 messages. The bug was that in flight messages from classic queue process to session process were not taken into account when topping up credit to the classic queue process. Fixes #2597 The solution to this bug (and a much cleaner design anyway independent of this bug) is that queues should hold all link flow control state including the delivery-count. Hence, when credit API v2 is used the delivery-count will be held by the classic queue process, quorum queue process, and stream queue client instead of managing the delivery-count in the session. 33. The double level crediting between (a) session process and rabbit_fifo_client, and (b) rabbit_fifo_client and rabbit_fifo was removed. Therefore, instead of managing 3 separate delivery-counts (i. session, ii. rabbit_fifo_client, iii. rabbit_fifo), only 1 delivery-count is used in rabbit_fifo. This is a big simplification. 34. This commit fixes quorum queues without bumping the machine version nor introducing new rabbit_fifo commands. Whether credit API v2 is used is solely determined at link attachment time depending on whether feature flag credit_api_v2 is enabled. Even when that feature flag will be enabled later on, this link will keep using credit API v1 until detached (or the node is shut down). 
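Points 29 and 30 above concern serial number arithmetic: AMQP 1.0 delivery-count, delivery-id, and transfer-id are 32-bit RFC 1982 serial numbers, so additions and comparisons must wrap. A minimal Python sketch of those rules follows (illustrative only; the shared Erlang serial number module mentioned above is the real implementation).
```python
SERIAL_BITS = 32
MOD = 1 << SERIAL_BITS          # 2^32
HALF = 1 << (SERIAL_BITS - 1)   # 2^31

def serial_add(a: int, n: int) -> int:
    # RFC 1982 addition wraps modulo 2^32 (n is expected to be < 2^31).
    return (a + n) % MOD

def serial_compare(a: int, b: int) -> int:
    """-1 if a < b, 0 if equal, +1 if a > b, honouring wrap-around.
    (RFC 1982 leaves the case |a - b| == 2^31 undefined.)"""
    if a == b:
        return 0
    if (a < b and b - a < HALF) or (a > b and a - b > HALF):
        return -1
    return 1

assert serial_add(2**32 - 1, 2) == 1        # wraps around past 2^32 - 1
assert serial_compare(2**32 - 1, 1) == -1   # "just before" the wrap point
```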
Eventually, after feature flag credit_api_v2 has been enabled and a subsequent rolling upgrade, all links will use credit API v2. This approach is safe and simple. The 2 alternatives to move delivery-count from the session process to the queue processes would have been: i. Explicit feature flag credit_api_v2 migration function * Can use a gen_server:call and only finish migration once all delivery-counts were migrated. Cons: * Extra new message format just for migration is required. * Risky as migration will fail if a target queue doesn’t reply. ii. Session always includes DeliveryCountSnd when crediting to the queue: Cons: * 2 delivery counts will be hold simultaneously in session proc and queue proc; could be solved by deleting the session proc’s delivery-count for credit-reply * What happens if the receiver doesn’t provide credit for a very long time? Is that a problem? 35. Support stream filtering in AMQP 1.0 (by @acogoluegnes) Use the x-stream-filter-value message annotation to carry the filter value in a published message. Use the rabbitmq:stream-filter and rabbitmq:stream-match-unfiltered filters when creating a receiver that wants to filter out messages from a stream. 36. Remove credit extension from AMQP 0.9.1 client 37. Support maintenance mode closing AMQP 1.0 connections. 38. Remove AMQP 0.9.1 client dependency from AMQP 1.0 implementation. 39. Move AMQP 1.0 plugin to the core. AMQP 1.0 is enabled by default. The old rabbitmq_amqp1_0 plugin will be kept as a no-op plugin to prevent deployment tools from failing that execute: ``` rabbitmq-plugins enable rabbitmq_amqp1_0 rabbitmq-plugins disable rabbitmq_amqp1_0 ``` 40. Breaking change: Remove CLI command `rabbitmqctl list_amqp10_connections`. Instead, list both AMQP 0.9.1 and AMQP 1.0 connections in `list_connections`: ``` rabbitmqctl list_connections protocol Listing connections ... protocol {1, 0} {0,9,1} ``` ## Benchmarks ### Throughput & Latency Setup: * Single node Ubuntu 22.04 * Erlang 26.1.1 Start RabbitMQ: ``` make run-broker PLUGINS="rabbitmq_management rabbitmq_amqp1_0" FULL=1 RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+S 3" ``` Predeclare durable classic queue cq1, durable quorum queue qq1, durable stream queue sq1. Start client: https://github.com/ssorj/quiver https://hub.docker.com/r/ssorj/quiver/tags (digest 453a2aceda64) ``` docker run -it --rm --add-host host.docker.internal:host-gateway ssorj/quiver:latest bash-5.1# quiver --version quiver 0.4.0-SNAPSHOT ``` 1. Classic queue ``` quiver //host.docker.internal//amq/queue/cq1 --durable --count 1m --duration 10m --body-size 12 --credit 1000 ``` This commit: ``` Count ............................................. 1,000,000 messages Duration ............................................... 73.8 seconds Sender rate .......................................... 13,548 messages/s Receiver rate ........................................ 13,547 messages/s End-to-end rate ...................................... 13,547 messages/s Latencies by percentile: 0% ........ 0 ms 90.00% ........ 9 ms 25% ........ 2 ms 99.00% ....... 14 ms 50% ........ 4 ms 99.90% ....... 17 ms 100% ....... 26 ms 99.99% ....... 
24 ms ``` RabbitMQ 3.x (main branch as of 30 January 2024): ``` ---------------------- Sender ----------------------- --------------------- Receiver ---------------------- -------- Time [s] Count [m] Rate [m/s] CPU [%] RSS [M] Time [s] Count [m] Rate [m/s] CPU [%] RSS [M] Lat [ms] ----------------------------------------------------- ----------------------------------------------------- -------- 2.1 130,814 65,342 6 73.6 2.1 3,217 1,607 0 8.0 511 4.1 163,580 16,367 2 74.1 4.1 3,217 0 0 8.0 0 6.1 229,114 32,767 3 74.1 6.1 3,217 0 0 8.0 0 8.1 261,880 16,367 2 74.1 8.1 67,874 32,296 8 8.2 7,662 10.1 294,646 16,367 2 74.1 10.1 67,874 0 0 8.2 0 12.1 360,180 32,734 3 74.1 12.1 67,874 0 0 8.2 0 14.1 392,946 16,367 3 74.1 14.1 68,604 365 0 8.2 12,147 16.1 458,480 32,734 3 74.1 16.1 68,604 0 0 8.2 0 18.1 491,246 16,367 2 74.1 18.1 68,604 0 0 8.2 0 20.1 556,780 32,767 4 74.1 20.1 68,604 0 0 8.2 0 22.1 589,546 16,375 2 74.1 22.1 68,604 0 0 8.2 0 receiver timed out 24.1 622,312 16,367 2 74.1 24.1 68,604 0 0 8.2 0 quiver: error: PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/cq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-otujr23y' returned non-zero exit status 1. Traceback (most recent call last): File "/usr/local/lib/quiver/python/quiver/pair.py", line 144, in run _plano.wait(receiver, check=True) File "/usr/local/lib/quiver/python/plano/main.py", line 1243, in wait raise PlanoProcessError(proc) plano.main.PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/cq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-otujr23y' returned non-zero exit status 1. ``` 2. Quorum queue: ``` quiver //host.docker.internal//amq/queue/qq1 --durable --count 1m --duration 10m --body-size 12 --credit 1000 ``` This commit: ``` Count ............................................. 1,000,000 messages Duration .............................................. 101.4 seconds Sender rate ........................................... 9,867 messages/s Receiver rate ......................................... 9,868 messages/s End-to-end rate ....................................... 9,865 messages/s Latencies by percentile: 0% ....... 11 ms 90.00% ....... 23 ms 25% ....... 15 ms 99.00% ....... 28 ms 50% ....... 18 ms 99.90% ....... 33 ms 100% ....... 49 ms 99.99% ....... 
47 ms ``` RabbitMQ 3.x: ``` ---------------------- Sender ----------------------- --------------------- Receiver ---------------------- -------- Time [s] Count [m] Rate [m/s] CPU [%] RSS [M] Time [s] Count [m] Rate [m/s] CPU [%] RSS [M] Lat [ms] ----------------------------------------------------- ----------------------------------------------------- -------- 2.1 130,814 65,342 9 69.9 2.1 18,430 9,206 5 7.6 1,221 4.1 163,580 16,375 5 70.2 4.1 18,867 218 0 7.6 2,168 6.1 229,114 32,767 6 70.2 6.1 18,867 0 0 7.6 0 8.1 294,648 32,734 7 70.2 8.1 18,867 0 0 7.6 0 10.1 360,182 32,734 6 70.2 10.1 18,867 0 0 7.6 0 12.1 425,716 32,767 6 70.2 12.1 18,867 0 0 7.6 0 receiver timed out 14.1 458,482 16,367 5 70.2 14.1 18,867 0 0 7.6 0 quiver: error: PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/qq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-b1gcup43' returned non-zero exit status 1. Traceback (most recent call last): File "/usr/local/lib/quiver/python/quiver/pair.py", line 144, in run _plano.wait(receiver, check=True) File "/usr/local/lib/quiver/python/plano/main.py", line 1243, in wait raise PlanoProcessError(proc) plano.main.PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/qq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-b1gcup43' returned non-zero exit status 1. ``` 3. Stream: ``` quiver-arrow send //host.docker.internal//amq/queue/sq1 --durable --count 1m -d 10m --summary --verbose ``` This commit: ``` Count ............................................. 1,000,000 messages Duration ................................................ 8.7 seconds Message rate ........................................ 115,154 messages/s ``` RabbitMQ 3.x: ``` Count ............................................. 1,000,000 messages Duration ............................................... 21.2 seconds Message rate ......................................... 47,232 messages/s ``` ### Memory usage Start RabbitMQ: ``` ERL_MAX_PORTS=3000000 RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+P 3000000 +S 6" make run-broker PLUGINS="rabbitmq_amqp1_0" FULL=1 RABBITMQ_CONFIG_FILE="rabbitmq.conf" ``` ``` /bin/cat rabbitmq.conf tcp_listen_options.sndbuf = 2048 tcp_listen_options.recbuf = 2048 vm_memory_high_watermark.relative = 0.95 vm_memory_high_watermark_paging_ratio = 0.95 loopback_users = none ``` Create 50k connections with 2 sessions per connection, i.e. 100k session in total: ```go package main import ( "context" "log" "time" "github.com/Azure/go-amqp" ) func main() { for i := 0; i < 50000; i++ { conn, err := amqp.Dial(context.TODO(), "amqp://nuc", &amqp.ConnOptions{SASLType: amqp.SASLTypeAnonymous()}) if err != nil { log.Fatal("dialing AMQP server:", err) } _, err = conn.NewSession(context.TODO(), nil) if err != nil { log.Fatal("creating AMQP session:", err) } _, err = conn.NewSession(context.TODO(), nil) if err != nil { log.Fatal("creating AMQP session:", err) } } log.Println("opened all connections") time.Sleep(5 * time.Hour) } ``` This commit: ``` erlang:memory(). [{total,4586376480}, {processes,4025898504}, {processes_used,4025871040}, {system,560477976}, {atom,1048841}, {atom_used,1042841}, {binary,233228608}, {code,21449982}, {ets,108560464}] erlang:system_info(process_count). 450289 ``` 7 procs per connection + 1 proc per session. 
(7 + 2*1) * 50,000 = 450,000 procs RabbitMQ 3.x: ``` erlang:memory(). [{total,15168232704}, {processes,14044779256}, {processes_used,14044755120}, {system,1123453448}, {atom,1057033}, {atom_used,1052587}, {binary,236381264}, {code,21790238}, {ets,391423744}] erlang:system_info(process_count). 1850309 ``` 7 procs per connection + 15 per session (7 + 2*15) * 50,000 = 1,850,000 procs 50k connections + 100k session require with this commit: 4.5 GB in RabbitMQ 3.x: 15 GB ## Future work 1. More efficient parser and serializer 2. TODO in mc_amqp: Do not store the parsed message on disk. 3. Implement both AMQP HTTP extension and AMQP management extension to allow AMQP clients to create RabbitMQ objects (queues, exchanges, ...).
],
)
rabbitmq_integration_suite(
name = "amqp_credit_api_v2_SUITE",
runtime_deps = [
"//deps/rabbitmq_amqp_client:erlang_app",
Eventually, after feature flag credit_api_v2 has been enabled and a subsequent rolling upgrade, all links will use credit API v2. This approach is safe and simple. The 2 alternatives to move delivery-count from the session process to the queue processes would have been: i. Explicit feature flag credit_api_v2 migration function * Can use a gen_server:call and only finish migration once all delivery-counts were migrated. Cons: * Extra new message format just for migration is required. * Risky as migration will fail if a target queue doesn’t reply. ii. Session always includes DeliveryCountSnd when crediting to the queue: Cons: * 2 delivery counts will be hold simultaneously in session proc and queue proc; could be solved by deleting the session proc’s delivery-count for credit-reply * What happens if the receiver doesn’t provide credit for a very long time? Is that a problem? 35. Support stream filtering in AMQP 1.0 (by @acogoluegnes) Use the x-stream-filter-value message annotation to carry the filter value in a published message. Use the rabbitmq:stream-filter and rabbitmq:stream-match-unfiltered filters when creating a receiver that wants to filter out messages from a stream. 36. Remove credit extension from AMQP 0.9.1 client 37. Support maintenance mode closing AMQP 1.0 connections. 38. Remove AMQP 0.9.1 client dependency from AMQP 1.0 implementation. 39. Move AMQP 1.0 plugin to the core. AMQP 1.0 is enabled by default. The old rabbitmq_amqp1_0 plugin will be kept as a no-op plugin to prevent deployment tools from failing that execute: ``` rabbitmq-plugins enable rabbitmq_amqp1_0 rabbitmq-plugins disable rabbitmq_amqp1_0 ``` 40. Breaking change: Remove CLI command `rabbitmqctl list_amqp10_connections`. Instead, list both AMQP 0.9.1 and AMQP 1.0 connections in `list_connections`: ``` rabbitmqctl list_connections protocol Listing connections ... protocol {1, 0} {0,9,1} ``` ## Benchmarks ### Throughput & Latency Setup: * Single node Ubuntu 22.04 * Erlang 26.1.1 Start RabbitMQ: ``` make run-broker PLUGINS="rabbitmq_management rabbitmq_amqp1_0" FULL=1 RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+S 3" ``` Predeclare durable classic queue cq1, durable quorum queue qq1, durable stream queue sq1. Start client: https://github.com/ssorj/quiver https://hub.docker.com/r/ssorj/quiver/tags (digest 453a2aceda64) ``` docker run -it --rm --add-host host.docker.internal:host-gateway ssorj/quiver:latest bash-5.1# quiver --version quiver 0.4.0-SNAPSHOT ``` 1. Classic queue ``` quiver //host.docker.internal//amq/queue/cq1 --durable --count 1m --duration 10m --body-size 12 --credit 1000 ``` This commit: ``` Count ............................................. 1,000,000 messages Duration ............................................... 73.8 seconds Sender rate .......................................... 13,548 messages/s Receiver rate ........................................ 13,547 messages/s End-to-end rate ...................................... 13,547 messages/s Latencies by percentile: 0% ........ 0 ms 90.00% ........ 9 ms 25% ........ 2 ms 99.00% ....... 14 ms 50% ........ 4 ms 99.90% ....... 17 ms 100% ....... 26 ms 99.99% ....... 
24 ms ``` RabbitMQ 3.x (main branch as of 30 January 2024): ``` ---------------------- Sender ----------------------- --------------------- Receiver ---------------------- -------- Time [s] Count [m] Rate [m/s] CPU [%] RSS [M] Time [s] Count [m] Rate [m/s] CPU [%] RSS [M] Lat [ms] ----------------------------------------------------- ----------------------------------------------------- -------- 2.1 130,814 65,342 6 73.6 2.1 3,217 1,607 0 8.0 511 4.1 163,580 16,367 2 74.1 4.1 3,217 0 0 8.0 0 6.1 229,114 32,767 3 74.1 6.1 3,217 0 0 8.0 0 8.1 261,880 16,367 2 74.1 8.1 67,874 32,296 8 8.2 7,662 10.1 294,646 16,367 2 74.1 10.1 67,874 0 0 8.2 0 12.1 360,180 32,734 3 74.1 12.1 67,874 0 0 8.2 0 14.1 392,946 16,367 3 74.1 14.1 68,604 365 0 8.2 12,147 16.1 458,480 32,734 3 74.1 16.1 68,604 0 0 8.2 0 18.1 491,246 16,367 2 74.1 18.1 68,604 0 0 8.2 0 20.1 556,780 32,767 4 74.1 20.1 68,604 0 0 8.2 0 22.1 589,546 16,375 2 74.1 22.1 68,604 0 0 8.2 0 receiver timed out 24.1 622,312 16,367 2 74.1 24.1 68,604 0 0 8.2 0 quiver: error: PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/cq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-otujr23y' returned non-zero exit status 1. Traceback (most recent call last): File "/usr/local/lib/quiver/python/quiver/pair.py", line 144, in run _plano.wait(receiver, check=True) File "/usr/local/lib/quiver/python/plano/main.py", line 1243, in wait raise PlanoProcessError(proc) plano.main.PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/cq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-otujr23y' returned non-zero exit status 1. ``` 2. Quorum queue: ``` quiver //host.docker.internal//amq/queue/qq1 --durable --count 1m --duration 10m --body-size 12 --credit 1000 ``` This commit: ``` Count ............................................. 1,000,000 messages Duration .............................................. 101.4 seconds Sender rate ........................................... 9,867 messages/s Receiver rate ......................................... 9,868 messages/s End-to-end rate ....................................... 9,865 messages/s Latencies by percentile: 0% ....... 11 ms 90.00% ....... 23 ms 25% ....... 15 ms 99.00% ....... 28 ms 50% ....... 18 ms 99.90% ....... 33 ms 100% ....... 49 ms 99.99% ....... 
47 ms ``` RabbitMQ 3.x: ``` ---------------------- Sender ----------------------- --------------------- Receiver ---------------------- -------- Time [s] Count [m] Rate [m/s] CPU [%] RSS [M] Time [s] Count [m] Rate [m/s] CPU [%] RSS [M] Lat [ms] ----------------------------------------------------- ----------------------------------------------------- -------- 2.1 130,814 65,342 9 69.9 2.1 18,430 9,206 5 7.6 1,221 4.1 163,580 16,375 5 70.2 4.1 18,867 218 0 7.6 2,168 6.1 229,114 32,767 6 70.2 6.1 18,867 0 0 7.6 0 8.1 294,648 32,734 7 70.2 8.1 18,867 0 0 7.6 0 10.1 360,182 32,734 6 70.2 10.1 18,867 0 0 7.6 0 12.1 425,716 32,767 6 70.2 12.1 18,867 0 0 7.6 0 receiver timed out 14.1 458,482 16,367 5 70.2 14.1 18,867 0 0 7.6 0 quiver: error: PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/qq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-b1gcup43' returned non-zero exit status 1. Traceback (most recent call last): File "/usr/local/lib/quiver/python/quiver/pair.py", line 144, in run _plano.wait(receiver, check=True) File "/usr/local/lib/quiver/python/plano/main.py", line 1243, in wait raise PlanoProcessError(proc) plano.main.PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/qq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-b1gcup43' returned non-zero exit status 1. ``` 3. Stream: ``` quiver-arrow send //host.docker.internal//amq/queue/sq1 --durable --count 1m -d 10m --summary --verbose ``` This commit: ``` Count ............................................. 1,000,000 messages Duration ................................................ 8.7 seconds Message rate ........................................ 115,154 messages/s ``` RabbitMQ 3.x: ``` Count ............................................. 1,000,000 messages Duration ............................................... 21.2 seconds Message rate ......................................... 47,232 messages/s ``` ### Memory usage Start RabbitMQ: ``` ERL_MAX_PORTS=3000000 RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+P 3000000 +S 6" make run-broker PLUGINS="rabbitmq_amqp1_0" FULL=1 RABBITMQ_CONFIG_FILE="rabbitmq.conf" ``` ``` /bin/cat rabbitmq.conf tcp_listen_options.sndbuf = 2048 tcp_listen_options.recbuf = 2048 vm_memory_high_watermark.relative = 0.95 vm_memory_high_watermark_paging_ratio = 0.95 loopback_users = none ``` Create 50k connections with 2 sessions per connection, i.e. 100k session in total: ```go package main import ( "context" "log" "time" "github.com/Azure/go-amqp" ) func main() { for i := 0; i < 50000; i++ { conn, err := amqp.Dial(context.TODO(), "amqp://nuc", &amqp.ConnOptions{SASLType: amqp.SASLTypeAnonymous()}) if err != nil { log.Fatal("dialing AMQP server:", err) } _, err = conn.NewSession(context.TODO(), nil) if err != nil { log.Fatal("creating AMQP session:", err) } _, err = conn.NewSession(context.TODO(), nil) if err != nil { log.Fatal("creating AMQP session:", err) } } log.Println("opened all connections") time.Sleep(5 * time.Hour) } ``` This commit: ``` erlang:memory(). [{total,4586376480}, {processes,4025898504}, {processes_used,4025871040}, {system,560477976}, {atom,1048841}, {atom_used,1042841}, {binary,233228608}, {code,21449982}, {ets,108560464}] erlang:system_info(process_count). 450289 ``` 7 procs per connection + 1 proc per session. 
(7 + 2*1) * 50,000 = 450,000 procs RabbitMQ 3.x: ``` erlang:memory(). [{total,15168232704}, {processes,14044779256}, {processes_used,14044755120}, {system,1123453448}, {atom,1057033}, {atom_used,1052587}, {binary,236381264}, {code,21790238}, {ets,391423744}] erlang:system_info(process_count). 1850309 ``` 7 procs per connection + 15 per session (7 + 2*15) * 50,000 = 1,850,000 procs 50k connections + 100k session require with this commit: 4.5 GB in RabbitMQ 3.x: 15 GB ## Future work 1. More efficient parser and serializer 2. TODO in mc_amqp: Do not store the parsed message on disk. 3. Implement both AMQP HTTP extension and AMQP management extension to allow AMQP clients to create RabbitMQ objects (queues, exchanges, ...).
2023-07-21 18:29:07 +08:00
],
)
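# Integration suite for direct reply-to ("amqpl" is the internal shorthand for AMQP 0.9.1).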
rabbitmq_integration_suite(
name = "amqpl_direct_reply_to_SUITE",
)
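# assert_suites() (from rabbitmq.bzl) is a sanity check that every Common Test
# suite in this package has a corresponding rule declared above.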
assert_suites()
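# All roff man pages under docs/, one glob pattern per manual section (1-9).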
filegroup(
name = "manpages",
srcs = glob([
"docs/*.1",
"docs/*.2",
"docs/*.3",
"docs/*.4",
"docs/*.5",
"docs/*.6",
"docs/*.7",
"docs/*.8",
"docs/*.9",
]),
)
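# Gzip-compresses every man page into share/man/man<section>/ and archives the
# result as manpages.tar, mirroring a standard share/man install layout.
# Example build command, assuming this package is //deps/rabbit:
#   bazel build //deps/rabbit:manpages-dir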
genrule(
name = "manpages-dir",
srcs = [":manpages"],
outs = ["manpages.tar"],
cmd = """set -euo pipefail
DESTDIR=share/man
mkdir -p $${DESTDIR}
for mp in $(SRCS); do
section=$${mp##*.}
mkdir -p $${DESTDIR}/man$$section
gzip < $$mp \\
> $${DESTDIR}/man$$section/$$(basename $$mp).gz
done
tar -cf $@ share
rm -dr share
""",
visibility = ["//visibility:public"],
)
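# Converts each man page into an HTML fragment with mandoc, then post-processes
# it with awk: the mandoc head/foot tables are dropped, <h1>/<h2> headings are
# demoted one level, indented example blocks get a lang-bash class for syntax
# highlighting, and curly quotes are rewritten to &quot;. The fragments are
# packed into web-manpages.tar.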
genrule(
name = "web-manpages",
srcs = [":manpages"],
outs = ["web-manpages.tar"],
cmd = """set -euo pipefail
mkdir web-manpages-tmp
for mp in $(SRCS); do
d=web-manpages-tmp/$$(basename $${mp}).html
echo "Converting $$mp to $$d..."
mandoc -T html -O 'fragment,man=%N.%S.html' "$$mp" | \\
awk '\\
/^<table class="head">$$/ { remove_table=1; next; } \\
/^<table class="foot">$$/ { remove_table=1; next; } \\
/^<\\/table>$$/ { if (remove_table) { remove_table=0; next; } } \\
{ if (!remove_table) { \\
line=$$0; \\
gsub(/<h2/, "<h3", line); \\
gsub(/<\\/h2>/, "</h3>", line); \\
gsub(/<h1/, "<h2", line); \\
gsub(/<\\/h1>/, "</h2>", line); \\
gsub(/class="D1"/, "class=\"D1 lang-bash\"", line); \\
gsub(/class="Bd Bd-indent"/, "class=\"Bd Bd-indent lang-bash\"", line); \\
gsub(/&#[xX]201[cCdD];/, "\\&quot;", line); \\
print line; \\
} } \\
' > "$$d"
done
tar --strip-components 1 -cf $@ web-manpages-tmp/*
rm -dr web-manpages-tmp
""",
visibility = ["//visibility:public"],
)
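# Convenience alias so the application can be referred to as ":rabbit"
# rather than ":erlang_app".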
alias(
name = "rabbit",
actual = ":erlang_app",
visibility = ["//visibility:public"],
)
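# EUnit run over the test application. compiled_suites lists pre-compiled
# test helper beams made available to the tests. Typical invocation, assuming
# this package is //deps/rabbit:
#   bazel test //deps/rabbit:eunit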
eunit(
name = "eunit",
compiled_suites = [
":test_channel_operation_timeout_test_queue_beam",
":test_dummy_event_receiver_beam",
":test_dummy_interceptor_beam",
":test_dummy_runtime_parameters_beam",
":test_dummy_supervisor2_beam",
":test_failing_dummy_interceptor_beam",
":test_mirrored_supervisor_SUITE_gs_beam",
":test_queue_utils_beam",
":test_rabbit_auth_backend_context_propagation_mock_beam",
":test_rabbit_dummy_protocol_connection_info_beam",
":test_rabbit_foo_protocol_connection_info_beam",
":test_test_util_beam",
":test_test_rabbit_event_handler_beam",
":test_clustering_utils_beam",
":test_event_recorder_beam",
":test_rabbit_ct_hook_beam",
":test_amqp_utils_beam",
],
target = ":test_erlang_app",
test_env = {
"COVERDATA_TO_LCOV_APPS_DIRS": "deps:deps/rabbit/apps",
},
)