Support AMQP 1.0 natively
## What

Similar to Native MQTT in #5895, this commit implements Native AMQP 1.0. By "native", we mean that we no longer proxy via AMQP 0.9.1.

## Why

Native AMQP 1.0 comes with the following major benefits:

1. Similar to Native MQTT, this commit provides better throughput, latency, scalability, and resource usage for AMQP 1.0. See https://blog.rabbitmq.com/posts/2023/03/native-mqtt for the Native MQTT improvements, and see further below for some benchmarks.
2. Since AMQP 1.0 is no longer limited by the AMQP 0.9.1 protocol, this commit allows implementing more AMQP 1.0 features in the future. Some features are already implemented in this commit (see next section).
3. Simpler, more understandable, and more maintainable code.

Native AMQP 1.0 as implemented in this commit has the following major benefits compared to AMQP 0.9.1:

4. Memory and disk alarms will only stop accepting incoming TRANSFER frames. New connections can still be created to consume from RabbitMQ, i.e. to empty queues.
5. Due to 4., there is no need anymore for separate connections for publishers and consumers, as we currently recommend for AMQP 0.9.1. This potentially halves the number of physical TCP connections.
6. When a single connection sends to multiple target queues, a single slow target queue won't block the entire connection. The publisher can still send data quickly to all other target queues.
7. A publisher can request publisher confirmation on a per-message basis (see the sketch below). In AMQP 0.9.1, publisher confirms are configured per channel only.
8. Consumers can change their "prefetch count" dynamically, which isn't possible in our AMQP 0.9.1 implementation. See #10174.
9. AMQP 1.0 is an extensible protocol.

This commit also fixes dozens of bugs present in the AMQP 1.0 plugin in RabbitMQ 3.x - most of which cannot be backported due to the complexity and limitations of the old 3.x implementation.

This commit contains breaking changes and is therefore targeted for RabbitMQ 4.0.
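To illustrate point 7 with RabbitMQ's Erlang AMQP 1.0 client: the settled flag travels with every message, so a confirm is requested (or not) per message rather than per channel. This is a minimal sketch, assuming an already attached sender link `Sender`; the delivery tags are made up and the shape of the disposition event follows the amqp10_client conventions.

```erlang
%% Pre-settled transfer: the broker sends no confirmation for it.
PreSettled = amqp10_msg:new(<<"tag-1">>, <<"fire-and-forget">>, true),
ok = amqp10_client:send_msg(Sender, PreSettled),

%% Unsettled transfer: the broker replies with a DISPOSITION frame,
%% which the client forwards to the link owner as an Erlang message.
Unsettled = amqp10_msg:new(<<"tag-2">>, <<"please-confirm">>, false),
ok = amqp10_client:send_msg(Sender, Unsettled),
receive
    {amqp10_disposition, {accepted, <<"tag-2">>}} -> ok
after 5000 ->
    exit(confirm_timeout)
end.
```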
## Implementation details

1. Breaking change: With Native AMQP, the behaviour of
```
Convert AMQP 0.9.1 message headers to application properties for an AMQP 1.0 consumer
amqp1_0.convert_amqp091_headers_to_app_props = false | true (default false)

Convert AMQP 1.0 Application Properties to AMQP 0.9.1 headers
amqp1_0.convert_app_props_to_amqp091_headers = false | true (default false)
```
will break because we always convert according to the message container conversions. For example, AMQP 0.9.1 x-headers will go into message-annotations instead of application properties. Also, `false` won't be respected since we always convert the headers with message containers.

2. Remove rabbit_queue_collector. rabbit_queue_collector is responsible for synchronously deleting exclusive queues. Since the AMQP 1.0 plugin never creates exclusive queues, rabbit_queue_collector doesn't need to be started in the first place. This saves 1 Erlang process per AMQP 1.0 connection.

3. 7 processes per connection + 1 process per session in this commit, instead of 7 processes per connection + 15 processes per session in 3.x. The supervision hierarchy was re-designed.

4. Use 1 writer process per AMQP 1.0 connection. AMQP 0.9.1 uses a separate rabbit_writer Erlang process per AMQP 0.9.1 channel. Prior to this commit, AMQP 1.0 used a separate rabbit_amqp1_0_writer process per AMQP 1.0 session.
Advantage of a single writer process per session (prior to this commit):
* High parallelism for serialising packets if multiple sessions within a connection write heavily at the same time.
This commit uses a single writer process per AMQP 1.0 connection that is shared across all AMQP 1.0 sessions.
Advantages of a single writer process per connection (this commit):
* Lower memory usage with hundreds of thousands of AMQP 1.0 sessions.
* Less TCP and IP header overhead, given that the single writer process can accumulate bytes across all sessions before flushing the socket.
In other words, this commit decides that a reader / writer process pair per AMQP 1.0 connection is good enough for bi-directional TRANSFER flows. Having a writer per session is too heavy. We still ensure high throughput by having separate reader, writer, and session processes.

5. Transform rabbit_amqp1_0_writer into a gen_server.
Why: Prior to this commit, when clicking on the AMQP 1.0 writer process in observer, the process crashed. Instead of handling all these debug messages of the sys module, it's better to implement a gen_server. There is no advantage in using a special OTP process over a gen_server for the AMQP 1.0 writer. A gen_server also provides cleaner format_status output.
How: Message callbacks return a timeout of 0. After all messages in the inbox are processed, the timeout message is handled by flushing any pending bytes.
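A minimal sketch of the timeout-0 flush pattern from point 5, assuming a raw gen_tcp socket and a map-based state. Module and function names are made up for illustration; the real rabbit_amqp1_0_writer adds framing, monitoring, and error handling on top of this idea.

```erlang
-module(amqp_writer_sketch).
-behaviour(gen_server).

-export([start_link/1, send_frame/2]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

start_link(Socket) ->
    gen_server:start_link(?MODULE, Socket, []).

%% Called by the connection reader and by every session process.
send_frame(Writer, IoData) ->
    gen_server:cast(Writer, {send, IoData}).

init(Socket) ->
    {ok, #{socket => Socket, pending => []}}.

%% Every callback returns a timeout of 0: the 'timeout' message is only
%% delivered once the mailbox is empty, i.e. after all frames queued so
%% far have been accumulated into the pending buffer.
handle_cast({send, IoData}, #{pending := Pending} = State) ->
    {noreply, State#{pending := [Pending, IoData]}, 0}.

handle_call(_Request, _From, State) ->
    {reply, ok, State, 0}.

%% Mailbox drained: flush everything in a single socket write.
handle_info(timeout, #{socket := Socket, pending := Pending} = State) ->
    ok = gen_tcp:send(Socket, Pending),
    {noreply, State#{pending := []}};
handle_info(_Other, State) ->
    {noreply, State, 0}.
```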
6. Remove the stats timer from the writer. AMQP 1.0 connections haven't emitted any stats previously.

7. When there are contiguous queue confirmations in the session process mailbox, batch them. When the confirmations are sent to the publisher, a single DISPOSITION frame is sent for contiguously confirmed delivery IDs. This approach should be good enough. However, it is suboptimal in scenarios where contiguous delivery IDs that need confirmations are rare, for example:
* There are multiple links in the session with different sender settlement modes and the sender publishes across these links interleaved.
* The sender settlement mode is mixed and the sender publishes settled and unsettled TRANSFERs interleaved.
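To make the batching in point 7 concrete, here is a small self-contained helper in that spirit (not the actual session code): it collapses confirmed delivery IDs into contiguous [first, last] ranges, each of which maps to one DISPOSITION frame. For simplicity it uses plain integers and ignores delivery-id wrap-around, which the real code handles with serial number arithmetic (point 9 below).

```erlang
-module(disposition_ranges_sketch).
-export([ranges/1]).

%% ranges([1,2,3,7,8,10]) -> [{1,3},{7,8},{10,10}]
%% Each {First, Last} pair would become one DISPOSITION frame with the
%% corresponding first/last delivery-id fields.
-spec ranges([non_neg_integer()]) -> [{non_neg_integer(), non_neg_integer()}].
ranges([]) ->
    [];
ranges(DeliveryIds) ->
    [First | Rest] = lists:usort(DeliveryIds),
    Acc = lists:foldl(
            fun(Id, [{F, L} | T]) when Id =:= L + 1 ->
                    %% Extends the current contiguous range.
                    [{F, Id} | T];
               (Id, T) ->
                    %% Gap found: start a new range.
                    [{Id, Id} | T]
            end, [{First, First}], Rest),
    lists:reverse(Acc).
```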
8. Introduce credit API v2.
Why: The AMQP 0.9.1 credit extension, which is to be removed in 4.0, was poorly designed since basic.credit is a synchronous call into the queue process, blocking the entire AMQP 1.0 session process.
How: Change the interactions between queue clients and queue server implementations:
* Clients only request a credit reply if the FLOW's `echo` field is set.
* Include all link flow control state held by the queue process in a new credit_reply queue event:
  * `available` after the queue sends any deliveries
  * `link-credit` after the queue sends any deliveries
  * `drain`, which allows us to combine the old queue events send_credit_reply and send_drained into the single new queue event credit_reply.
* Include the consumer tag in the credit_reply queue event such that the AMQP 1.0 session process can process any credit replies asynchronously.
Link flow control state `delivery-count` also moves to the queue processes. The new interactions are hidden behind the feature flag credit_api_v2 to allow rolling upgrades from 3.13 to 4.0.

9. Use serial number arithmetic in quorum queues and the session process.
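For reference, serial number arithmetic as in RFC 1982 for 32-bit AMQP sequence numbers (delivery-count, delivery-id, transfer-id) can be sketched as below. This illustrates the concept only and is not the module added by this commit.

```erlang
-module(serial_number_sketch).
-export([add/2, compare/2]).

-define(HALF, 16#80000000).   % 2^31
-define(MASK, 16#FFFFFFFF).   % 2^32 - 1

%% Addition wraps around at 2^32 (RFC 1982 section 3.1).
-spec add(non_neg_integer(), non_neg_integer()) -> non_neg_integer().
add(Serial, N) when N >= 0, N < ?HALF ->
    (Serial + N) band ?MASK.

%% Comparison following RFC 1982 section 3.2, e.g.
%% compare(16#FFFFFFFF, 0) =:= less, because 0 is the successor of the
%% largest 32-bit serial number. (The ambiguous case where the distance
%% is exactly 2^31 is treated as 'greater' in this sketch.)
-spec compare(non_neg_integer(), non_neg_integer()) -> less | equal | greater.
compare(S, S) ->
    equal;
compare(S1, S2) when (S1 < S2 andalso S2 - S1 < ?HALF) orelse
                     (S1 > S2 andalso S1 - S2 > ?HALF) ->
    less;
compare(_, _) ->
    greater.
```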
10. Completely bypass the rabbit_limiter module for AMQP 1.0 flow control. The goal is to eventually remove the rabbit_limiter module in 4.0 since AMQP 0.9.1 global QoS will be unsupported in 4.0. This commit lifts the AMQP 1.0 link flow control logic out of rabbit_limiter into rabbit_queue_consumers.

11. Fix a credit bug for streams: AMQP 1.0 settlements shouldn't top up link credit; only FLOW frames should top up link credit.

12. Allow sender settle mode unsettled for streams, since AMQP 1.0 acknowledgements to streams are no-ops (currently).

13. Fix AMQP 1.0 client bugs. Auto-renewing credits should not be related to settling TRANSFERs. Remove the field link_credit_unsettled as it was wrong and confusing. Prior to this commit, auto renewal did not work when the sender used sender settlement mode settled.

14. Fix an AMQP 1.0 client bug: the wrong, outdated Link was passed to function auto_flow/2.

15. Use the osiris chunk iterator. Only hold messages of uncompressed sub-batches in memory if the consumer doesn't have sufficient credits. Compressed sub-batches are skipped for non-Stream-protocol consumers.

16. Fix incoming link flow control. Always use confirms between AMQP 1.0 queue clients and queue servers. As already done internally by rabbit_fifo_client and rabbit_stream_queue, use confirms for classic queues as well.

17. Include the link handle in the correlation when publishing messages to target queues, such that the session process can correlate confirms from target queues to incoming links.

18. Only grant more credits to publishers if the publisher doesn't have sufficient credits anymore and there are not too many unconfirmed messages on the link.

19. Completely ignore `block` and `unblock` queue actions and RabbitMQ credit flow between the classic queue process and the session process.

20. Link flow control is independent between links. A client can refer to a queue or to an exchange with multiple dynamically added target queues. Multiple incoming links can also fan in to the same queue. Whatever the link topology looks like, this commit ensures that each link is only granted more credits if that link isn't overloaded.

21. A connection or a session can send to many different queues. In AMQP 0.9.1, a single slow queue will lead to the entire channel, and then the entire connection, being blocked. This commit makes sure that a single slow queue from one link won't slow down sending on other links. For example, with link A sending to a local classic queue and link B sending to a 5-replica quorum queue, link B will naturally grant credits more slowly than link A. So, despite the quorum queue being slower in confirming messages, the same AMQP 1.0 connection and session can still pump data very fast into the classic queue.

22. If a cluster-wide memory or disk alarm occurs, each session sends a FLOW with incoming-window set to 0 to the sending client. If sending clients don't obey, the client is force-disconnected. If the cluster-wide memory alarm clears, each session resumes with a FLOW defaulting to the initial incoming-window.

23. All operations apart from publishing TRANSFERs to RabbitMQ can continue during cluster-wide alarms, specifically attaching consumers and consuming, i.e. emptying queues. There is no need for separate AMQP 1.0 connections for publishers and consumers as recommended in our AMQP 0.9.1 implementation.

24. Flow control summary:
* If a queue becomes the bottleneck, that's solved by slowing down individual sending links (AMQP 1.0 link flow control).
* If a session becomes the bottleneck (more unlikely), that's solved by AMQP 1.0 session flow control.
* If a connection becomes the bottleneck, it naturally won't read fast enough from the socket, causing TCP backpressure to be applied.
Nowhere on the incoming AMQP 1.0 message path will RabbitMQ's internal credit-based flow control (i.e. module credit_flow) be used.

25. Register AMQP sessions. Prefer local-only pg over our custom pg_local implementation, as pg is a better process group implementation than pg_local. pg_local was identified as a bottleneck in tests where many MQTT clients were disconnected at once.

26. Start a local-only pg when Rabbit boots (see the sketch below):
> A scope can be kept local-only by using a scope name that is unique cluster-wide, e.g. the node name:
> pg:start_link(node()).
Register AMQP 1.0 connections and sessions with pg. In the future, we should remove pg_local and instead use the new local-only pg for all registered processes such as AMQP 0.9.1 connections and channels.
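A sketch of the node-local pg scope from points 25 and 26; the module and group names here are made up for illustration.

```erlang
-module(amqp_pg_sketch).
-export([start_scope/0, join/1, local_members/0]).

%% Started once when the node boots. Using the node name as the scope
%% name keeps the scope unique cluster-wide, so membership stays local
%% and is never replicated to other nodes.
start_scope() ->
    pg:start_link(node()).

%% Each AMQP 1.0 connection/session process joins the group with its pid.
join(Pid) ->
    ok = pg:join(node(), amqp_sessions, Pid).

%% Listing the registered processes on this node is a cheap local lookup.
local_members() ->
    pg:get_local_members(node(), amqp_sessions).
```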
27. Requeue messages if a link is detached. Although the spec allows settling delivery IDs on detached links, RabbitMQ does not respect the 'closed' field of the DETACH frame and therefore handles every DETACH frame as closed. Since the link is closed, we expect every outstanding delivery to be requeued. In addition to consumer cancellation, detaching a link therefore causes in-flight deliveries to be requeued. Note that this behaviour is different from merely cancelling a consumer in AMQP 0.9.1: "After a consumer is cancelled there will be no future deliveries dispatched to it. Note that there can still be "in flight" deliveries dispatched previously. Cancelling a consumer will neither discard nor requeue them." [https://www.rabbitmq.com/consumers.html#unsubscribing] An AMQP receiver can first drain, and then detach, to prevent "in flight" deliveries (see the sketch below).
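A sketch of the drain-then-detach sequence with RabbitMQ's Erlang AMQP 1.0 client, assuming `Receiver` is an attached receiver link reference; the credit value is arbitrary and meant as an illustration only.

```erlang
drain_then_detach(Receiver) ->
    %% Grant a final batch of credit with drain=true: the broker either
    %% uses it to deliver what it has or advances its delivery-count and
    %% reports the link as drained, so nothing is left "in flight".
    ok = amqp10_client:flow_link_credit(Receiver, 100, never, true),
    %% ... receive and settle the remaining {amqp10_msg, Receiver, Msg}
    %% deliveries here, until the link reports its credit as exhausted ...
    %% Detaching the quiesced link then causes no requeueing.
    ok = amqp10_client:detach_link(Receiver).
```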
28. Init an AMQP session with a BEGIN frame. Similar to how there can't be an MQTT processor without a CONNECT frame, there can't be an AMQP session without a BEGIN frame. This allows strict dialyzer types for session flow control fields (i.e. not allowing 'undefined').

29. Move serial_number to the AMQP 1.0 common lib such that it can be used by both the AMQP 1.0 server and client.

30. Fix the AMQP client to do serial number arithmetic.

31. AMQP client: Differentiate between delivery-id and transfer-id for better understandability.

32. Fix link flow control in classic queues. This commit fixes
```
java -jar target/perf-test.jar -ad false -f persistent -u cq -c 3000 -C 1000000 -y 0
```
followed by
```
./omq -x 0 amqp -T /queue/cq -D 1000000 --amqp-consumer-credits 2
```
Prior to this commit (and on RabbitMQ 3.x), consuming would halt after around 8 - 10,000 messages. The bug was that in-flight messages from the classic queue process to the session process were not taken into account when topping up credit to the classic queue process. Fixes #2597.
The solution to this bug (and a much cleaner design anyway, independent of this bug) is that queues should hold all link flow control state, including the delivery-count. Hence, when credit API v2 is used, the delivery-count is held by the classic queue process, quorum queue process, and stream queue client instead of being managed in the session.

33. The double-level crediting between (a) session process and rabbit_fifo_client, and (b) rabbit_fifo_client and rabbit_fifo was removed. Therefore, instead of managing 3 separate delivery-counts (i. session, ii. rabbit_fifo_client, iii. rabbit_fifo), only 1 delivery-count is used in rabbit_fifo. This is a big simplification.

34. This commit fixes quorum queues without bumping the machine version or introducing new rabbit_fifo commands. Whether credit API v2 is used is solely determined at link attachment time, depending on whether feature flag credit_api_v2 is enabled. Even when that feature flag is enabled later on, the link will keep using credit API v1 until it is detached (or the node is shut down). Eventually, after feature flag credit_api_v2 has been enabled and a subsequent rolling upgrade, all links will use credit API v2. This approach is safe and simple. The 2 alternatives for moving the delivery-count from the session process to the queue processes would have been:
i. An explicit feature flag credit_api_v2 migration function.
   * Could use a gen_server:call and only finish the migration once all delivery-counts were migrated.
   Cons:
   * An extra new message format just for migration is required.
   * Risky, as the migration will fail if a target queue doesn't reply.
ii. The session always includes DeliveryCountSnd when crediting the queue.
   Cons:
   * 2 delivery-counts would be held simultaneously in the session proc and the queue proc; this could be solved by deleting the session proc's delivery-count for credit-reply.
   * What happens if the receiver doesn't provide credit for a very long time? Is that a problem?

35. Support stream filtering in AMQP 1.0 (by @acogoluegnes). Use the x-stream-filter-value message annotation to carry the filter value in a published message. Use the rabbitmq:stream-filter and rabbitmq:stream-match-unfiltered filters when creating a receiver that wants to filter out messages from a stream.
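A hedged sketch of point 35 with the Erlang client, assuming an existing `Session` and attached sender link `Sender` to a stream. The stream address, the filter value "invoices", and in particular the encoding of the filter values passed to the attach call are assumptions here, not a definitive API reference.

```erlang
%% Publisher side: carry the filter value in the message annotation
%% defined by the stream filtering feature.
Msg0 = amqp10_msg:new(<<"tag-1">>, <<"payload">>, true),
Msg  = amqp10_msg:set_message_annotations(
         #{<<"x-stream-filter-value">> => <<"invoices">>}, Msg0),
ok = amqp10_client:send_msg(Sender, Msg),

%% Consumer side: ask the stream to filter on the same value and to also
%% deliver unfiltered messages (value encodings are assumptions).
Filter = #{<<"rabbitmq:stream-filter">> => <<"invoices">>,
           <<"rabbitmq:stream-match-unfiltered">> => true},
{ok, Receiver} = amqp10_client:attach_receiver_link(
                   Session, <<"filtered-receiver">>, <<"/queue/my-stream">>,
                   settled, unsettled_state, Filter).
```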
36. Remove the credit extension from the AMQP 0.9.1 client.

37. Support maintenance mode closing AMQP 1.0 connections.

38. Remove the AMQP 0.9.1 client dependency from the AMQP 1.0 implementation.

39. Move the AMQP 1.0 plugin to the core. AMQP 1.0 is enabled by default. The old rabbitmq_amqp1_0 plugin will be kept as a no-op plugin so that deployment tools that execute
```
rabbitmq-plugins enable rabbitmq_amqp1_0
rabbitmq-plugins disable rabbitmq_amqp1_0
```
do not fail.

40. Breaking change: Remove the CLI command `rabbitmqctl list_amqp10_connections`. Instead, list both AMQP 0.9.1 and AMQP 1.0 connections in `list_connections`:
```
rabbitmqctl list_connections protocol
Listing connections ...
protocol
{1, 0}
{0,9,1}
```

## Benchmarks

### Throughput & Latency

Setup:
* Single node Ubuntu 22.04
* Erlang 26.1.1

Start RabbitMQ:
```
make run-broker PLUGINS="rabbitmq_management rabbitmq_amqp1_0" FULL=1 RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+S 3"
```

Predeclare durable classic queue cq1, durable quorum queue qq1, and durable stream queue sq1.

Start the client:
https://github.com/ssorj/quiver
https://hub.docker.com/r/ssorj/quiver/tags (digest 453a2aceda64)
```
docker run -it --rm --add-host host.docker.internal:host-gateway ssorj/quiver:latest
bash-5.1# quiver --version
quiver 0.4.0-SNAPSHOT
```

1. Classic queue
```
quiver //host.docker.internal//amq/queue/cq1 --durable --count 1m --duration 10m --body-size 12 --credit 1000
```

This commit:
```
Count ............................................. 1,000,000 messages
Duration ............................................... 73.8 seconds
Sender rate .......................................... 13,548 messages/s
Receiver rate ........................................ 13,547 messages/s
End-to-end rate ...................................... 13,547 messages/s

Latencies by percentile:

          0% ........ 0 ms       90.00% ........ 9 ms
         25% ........ 2 ms       99.00% ....... 14 ms
         50% ........ 4 ms       99.90% ....... 17 ms
        100% ....... 26 ms       99.99% ....... 24 ms
```

RabbitMQ 3.x (main branch as of 30 January 2024):
```
---------------------- Sender -----------------------  --------------------- Receiver ----------------------  --------
Time [s]  Count [m]  Rate [m/s]  CPU [%]  RSS [M]       Time [s]  Count [m]  Rate [m/s]  CPU [%]  RSS [M]      Lat [ms]
-----------------------------------------------------  -----------------------------------------------------  --------
     2.1    130,814      65,342        6     73.6            2.1      3,217       1,607        0      8.0           511
     4.1    163,580      16,367        2     74.1            4.1      3,217           0        0      8.0             0
     6.1    229,114      32,767        3     74.1            6.1      3,217           0        0      8.0             0
     8.1    261,880      16,367        2     74.1            8.1     67,874      32,296        8      8.2         7,662
    10.1    294,646      16,367        2     74.1           10.1     67,874           0        0      8.2             0
    12.1    360,180      32,734        3     74.1           12.1     67,874           0        0      8.2             0
    14.1    392,946      16,367        3     74.1           14.1     68,604         365        0      8.2        12,147
    16.1    458,480      32,734        3     74.1           16.1     68,604           0        0      8.2             0
    18.1    491,246      16,367        2     74.1           18.1     68,604           0        0      8.2             0
    20.1    556,780      32,767        4     74.1           20.1     68,604           0        0      8.2             0
    22.1    589,546      16,375        2     74.1           22.1     68,604           0        0      8.2             0
receiver timed out
    24.1    622,312      16,367        2     74.1           24.1     68,604           0        0      8.2             0
quiver: error: PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/cq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-otujr23y' returned non-zero exit status 1.
Traceback (most recent call last):
  File "/usr/local/lib/quiver/python/quiver/pair.py", line 144, in run
    _plano.wait(receiver, check=True)
  File "/usr/local/lib/quiver/python/plano/main.py", line 1243, in wait
    raise PlanoProcessError(proc)
plano.main.PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/cq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-otujr23y' returned non-zero exit status 1.
```

2. Quorum queue:
```
quiver //host.docker.internal//amq/queue/qq1 --durable --count 1m --duration 10m --body-size 12 --credit 1000
```

This commit:
```
Count ............................................. 1,000,000 messages
Duration .............................................. 101.4 seconds
Sender rate ........................................... 9,867 messages/s
Receiver rate ......................................... 9,868 messages/s
End-to-end rate ....................................... 9,865 messages/s

Latencies by percentile:

          0% ....... 11 ms       90.00% ....... 23 ms
         25% ....... 15 ms       99.00% ....... 28 ms
         50% ....... 18 ms       99.90% ....... 33 ms
        100% ....... 49 ms       99.99% ....... 47 ms
```

RabbitMQ 3.x:
```
---------------------- Sender -----------------------  --------------------- Receiver ----------------------  --------
Time [s]  Count [m]  Rate [m/s]  CPU [%]  RSS [M]       Time [s]  Count [m]  Rate [m/s]  CPU [%]  RSS [M]      Lat [ms]
-----------------------------------------------------  -----------------------------------------------------  --------
     2.1    130,814      65,342        9     69.9            2.1     18,430       9,206        5      7.6         1,221
     4.1    163,580      16,375        5     70.2            4.1     18,867         218        0      7.6         2,168
     6.1    229,114      32,767        6     70.2            6.1     18,867           0        0      7.6             0
     8.1    294,648      32,734        7     70.2            8.1     18,867           0        0      7.6             0
    10.1    360,182      32,734        6     70.2           10.1     18,867           0        0      7.6             0
    12.1    425,716      32,767        6     70.2           12.1     18,867           0        0      7.6             0
receiver timed out
    14.1    458,482      16,367        5     70.2           14.1     18,867           0        0      7.6             0
quiver: error: PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/qq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-b1gcup43' returned non-zero exit status 1.
Traceback (most recent call last):
  File "/usr/local/lib/quiver/python/quiver/pair.py", line 144, in run
    _plano.wait(receiver, check=True)
  File "/usr/local/lib/quiver/python/plano/main.py", line 1243, in wait
    raise PlanoProcessError(proc)
plano.main.PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/qq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-b1gcup43' returned non-zero exit status 1.
```

3. Stream:
```
quiver-arrow send //host.docker.internal//amq/queue/sq1 --durable --count 1m -d 10m --summary --verbose
```

This commit:
```
Count ............................................. 1,000,000 messages
Duration ................................................ 8.7 seconds
Message rate ........................................ 115,154 messages/s
```

RabbitMQ 3.x:
```
Count ............................................. 1,000,000 messages
Duration ............................................... 21.2 seconds
Message rate ......................................... 47,232 messages/s
```

### Memory usage

Start RabbitMQ:
```
ERL_MAX_PORTS=3000000 RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+P 3000000 +S 6" make run-broker PLUGINS="rabbitmq_amqp1_0" FULL=1 RABBITMQ_CONFIG_FILE="rabbitmq.conf"
```

```
/bin/cat rabbitmq.conf

tcp_listen_options.sndbuf = 2048
tcp_listen_options.recbuf = 2048
vm_memory_high_watermark.relative = 0.95
vm_memory_high_watermark_paging_ratio = 0.95
loopback_users = none
```

Create 50k connections with 2 sessions per connection, i.e. 100k sessions in total:

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/Azure/go-amqp"
)

func main() {
	for i := 0; i < 50000; i++ {
		conn, err := amqp.Dial(context.TODO(), "amqp://nuc", &amqp.ConnOptions{SASLType: amqp.SASLTypeAnonymous()})
		if err != nil {
			log.Fatal("dialing AMQP server:", err)
		}
		_, err = conn.NewSession(context.TODO(), nil)
		if err != nil {
			log.Fatal("creating AMQP session:", err)
		}
		_, err = conn.NewSession(context.TODO(), nil)
		if err != nil {
			log.Fatal("creating AMQP session:", err)
		}
	}
	log.Println("opened all connections")
	time.Sleep(5 * time.Hour)
}
```

This commit:
```
erlang:memory().
[{total,4586376480},
 {processes,4025898504},
 {processes_used,4025871040},
 {system,560477976},
 {atom,1048841},
 {atom_used,1042841},
 {binary,233228608},
 {code,21449982},
 {ets,108560464}]

erlang:system_info(process_count).
450289
```
7 procs per connection + 1 proc per session:
(7 + 2*1) * 50,000 = 450,000 procs

RabbitMQ 3.x:
```
erlang:memory().
[{total,15168232704},
 {processes,14044779256},
 {processes_used,14044755120},
 {system,1123453448},
 {atom,1057033},
 {atom_used,1052587},
 {binary,236381264},
 {code,21790238},
 {ets,391423744}]

erlang:system_info(process_count).
1850309
```
7 procs per connection + 15 procs per session:
(7 + 2*15) * 50,000 = 1,850,000 procs

50k connections with 100k sessions require 4.5 GB with this commit, and 15 GB in RabbitMQ 3.x.

## Future work

1. More efficient parser and serializer.
2. TODO in mc_amqp: Do not store the parsed message on disk.
3. Implement both the AMQP HTTP extension and the AMQP management extension to allow AMQP clients to create RabbitMQ objects (queues, exchanges, ...).
This commit is contained in:
parent
3513df355b
commit
8cb313d5a1
2
Makefile
2
Makefile
|
@ -176,8 +176,6 @@ RSYNC_FLAGS += -a $(RSYNC_V) \
|
|||
--exclude '/cowboy/doc/' \
|
||||
--exclude '/cowboy/examples/' \
|
||||
--exclude '/rabbit/escript/' \
|
||||
--exclude '/rabbitmq_amqp1_0/test/swiftmq/build/'\
|
||||
--exclude '/rabbitmq_amqp1_0/test/swiftmq/swiftmq*'\
|
||||
--exclude '/rabbitmq_cli/escript/' \
|
||||
--exclude '/rabbitmq_mqtt/test/build/' \
|
||||
--exclude '/rabbitmq_mqtt/test/test_client/'\
|
||||
|
|
|
@ -100,7 +100,6 @@ dialyze(
|
|||
)
|
||||
|
||||
broker_for_integration_suites(
|
||||
extra_plugins = ["//deps/rabbitmq_amqp1_0:erlang_app"],
|
||||
)
|
||||
|
||||
TEST_DEPS = [
|
||||
|
|
|
@ -30,7 +30,7 @@ PACKAGES_DIR ?= $(abspath PACKAGES)
|
|||
|
||||
BUILD_DEPS = rabbit_common elvis_mk
|
||||
DEPS = amqp10_common credentials_obfuscation
|
||||
TEST_DEPS = rabbit rabbitmq_amqp1_0 rabbitmq_ct_helpers
|
||||
TEST_DEPS = rabbit rabbitmq_ct_helpers
|
||||
LOCAL_DEPS = ssl inets crypto public_key
|
||||
|
||||
DEP_EARLY_PLUGINS = rabbit_common/mk/rabbitmq-early-test.mk
|
||||
|
@ -51,20 +51,6 @@ include erlang.mk
|
|||
HEX_TARBALL_FILES += rabbitmq-components.mk \
|
||||
git-revisions.txt
|
||||
|
||||
# --------------------------------------------------------------------
|
||||
# Compiler flags.
|
||||
# --------------------------------------------------------------------
|
||||
|
||||
# gen_fsm is deprecated starting from Erlang 20, but we want to support
|
||||
# Erlang 19 as well.
|
||||
|
||||
ERTS_VER := $(shell erl -version 2>&1 | sed -E 's/.* version //')
|
||||
ERLANG_20_ERTS_VER := 9.0
|
||||
|
||||
ifeq ($(call compare_version,$(ERTS_VER),$(ERLANG_20_ERTS_VER),>=),true)
|
||||
ERLC_OPTS += -Dnowarn_deprecated_gen_fsm
|
||||
endif
|
||||
|
||||
# Dialyze the tests.
|
||||
DIALYZER_OPTS += --src -r test
|
||||
|
||||
|
|
|
@ -2,16 +2,16 @@
|
|||
|
||||
This is an [Erlang client for the AMQP 1.0](https://www.amqp.org/resources/specifications) protocol.
|
||||
|
||||
It's primary purpose is to be used in RabbitMQ related projects but it is a
|
||||
generic client that was tested with at least 4 implementations of AMQP 1.0.
|
||||
Its primary purpose is to be used in RabbitMQ related projects but it is a
|
||||
generic client that was tested with at least 3 implementations of AMQP 1.0.
|
||||
|
||||
If you are looking for an Erlang client for [AMQP 0-9-1](https://www.rabbitmq.com/tutorials/amqp-concepts.html) — a completely different
|
||||
protocol despite the name — [consider this one](https://github.com/rabbitmq/rabbitmq-erlang-client).
|
||||
protocol despite the name — [consider this one](../amqp_client).
|
||||
|
||||
## Project Maturity and Status
|
||||
|
||||
This client is used in the cross-protocol version of the RabbitMQ Shovel plugin. It is not 100%
|
||||
feature complete but moderately mature and was tested against at least three AMQP 1.0 servers:
|
||||
feature complete but moderately mature and was tested against at least 3 AMQP 1.0 servers:
|
||||
RabbitMQ, Azure ServiceBus, ActiveMQ.
|
||||
|
||||
This client library is not officially supported by VMware at this time.
|
||||
|
@ -80,8 +80,8 @@ after 2000 ->
|
|||
exit(credited_timeout)
|
||||
end.
|
||||
|
||||
%% create a new message using a delivery-tag, body and indicate
|
||||
%% it's settlement status (true meaning no disposition confirmation
|
||||
%% Create a new message using a delivery-tag, body and indicate
|
||||
%% its settlement status (true meaning no disposition confirmation
|
||||
%% will be sent by the receiver).
|
||||
OutMsg = amqp10_msg:new(<<"my-tag">>, <<"my-body">>, true),
|
||||
ok = amqp10_client:send_msg(Sender, OutMsg),
|
||||
|
@ -112,7 +112,7 @@ after the `Open` frame has been successfully written to the socket rather than
|
|||
waiting until the remote end returns with their `Open` frame. The client will
|
||||
notify the caller of various internal/async events using `amqp10_event`
|
||||
messages. In the example above when the remote replies with their `Open` frame
|
||||
a message is sent of the following forma:
|
||||
a message is sent of the following form:
|
||||
|
||||
```
|
||||
{amqp10_event, {connection, ConnectionPid, opened}}
|
||||
|
|
|
@ -13,7 +13,6 @@ def all_beam_files(name = "all_beam_files"):
|
|||
"src/amqp10_client_app.erl",
|
||||
"src/amqp10_client_connection.erl",
|
||||
"src/amqp10_client_connection_sup.erl",
|
||||
"src/amqp10_client_connections_sup.erl",
|
||||
"src/amqp10_client_frame_reader.erl",
|
||||
"src/amqp10_client_session.erl",
|
||||
"src/amqp10_client_sessions_sup.erl",
|
||||
|
@ -42,7 +41,6 @@ def all_test_beam_files(name = "all_test_beam_files"):
|
|||
"src/amqp10_client_app.erl",
|
||||
"src/amqp10_client_connection.erl",
|
||||
"src/amqp10_client_connection_sup.erl",
|
||||
"src/amqp10_client_connections_sup.erl",
|
||||
"src/amqp10_client_frame_reader.erl",
|
||||
"src/amqp10_client_session.erl",
|
||||
"src/amqp10_client_sessions_sup.erl",
|
||||
|
@ -77,7 +75,6 @@ def all_srcs(name = "all_srcs"):
|
|||
"src/amqp10_client_app.erl",
|
||||
"src/amqp10_client_connection.erl",
|
||||
"src/amqp10_client_connection_sup.erl",
|
||||
"src/amqp10_client_connections_sup.erl",
|
||||
"src/amqp10_client_frame_reader.erl",
|
||||
"src/amqp10_client_session.erl",
|
||||
"src/amqp10_client_sessions_sup.erl",
|
||||
|
|
|
@ -35,7 +35,7 @@
|
|||
settle_msg/3,
|
||||
flow_link_credit/3,
|
||||
flow_link_credit/4,
|
||||
echo/1,
|
||||
stop_receiver_link/1,
|
||||
link_handle/1,
|
||||
get_msg/1,
|
||||
get_msg/2,
|
||||
|
@ -55,7 +55,7 @@
|
|||
-type attach_role() :: amqp10_client_session:attach_role().
|
||||
-type attach_args() :: amqp10_client_session:attach_args().
|
||||
-type filter() :: amqp10_client_session:filter().
|
||||
-type properties() :: amqp10_client_session:properties().
|
||||
-type properties() :: amqp10_client_types:properties().
|
||||
|
||||
-type connection_config() :: amqp10_client_connection:connection_config().
|
||||
|
||||
|
@ -109,10 +109,10 @@ open_connection(ConnectionConfig0) ->
|
|||
notify_when_closed => NotifyWhenClosed
|
||||
},
|
||||
Sasl = maps:get(sasl, ConnectionConfig1),
|
||||
ConnectionConfig2 = ConnectionConfig1#{sasl => amqp10_client_connection:encrypt_sasl(Sasl)},
|
||||
amqp10_client_connection:open(ConnectionConfig2).
|
||||
ConnectionConfig = ConnectionConfig1#{sasl => amqp10_client_connection:encrypt_sasl(Sasl)},
|
||||
amqp10_client_connection:open(ConnectionConfig).
|
||||
|
||||
%% @doc Opens a connection using a connection_config map
|
||||
%% @doc Closes a connection.
|
||||
%% This is asynchronous and will notify completion to the caller using
|
||||
%% an amqp10_event of the following format:
|
||||
%% {amqp10_event, {connection, ConnectionPid, {closed, Why}}}
|
||||
|
@ -271,9 +271,8 @@ attach_receiver_link(Session, Name, Source, SettleMode, Durability, Filter) ->
|
|||
%% This is asynchronous and will notify completion of the attach request to the
|
||||
%% caller using an amqp10_event of the following format:
|
||||
%% {amqp10_event, {link, LinkRef, attached | {detached, Why}}}
|
||||
-spec attach_receiver_link(pid(), binary(), binary(),
|
||||
snd_settle_mode(), terminus_durability(), filter(),
|
||||
properties()) ->
|
||||
-spec attach_receiver_link(pid(), binary(), binary(), snd_settle_mode(),
|
||||
terminus_durability(), filter(), properties()) ->
|
||||
{ok, link_ref()}.
|
||||
attach_receiver_link(Session, Name, Source, SettleMode, Durability, Filter, Properties)
|
||||
when is_pid(Session) andalso
|
||||
|
@ -307,43 +306,45 @@ detach_link(#link_ref{link_handle = Handle, session = Session}) ->
|
|||
amqp10_client_session:detach(Session, Handle).
|
||||
|
||||
%% @doc Grant credit to a sender.
|
||||
%% The amqp10_client will automatically grant more credit to the sender when
|
||||
%% The amqp10_client will automatically grant Credit to the sender when
|
||||
%% the remaining link credit falls below the value of RenewWhenBelow.
|
||||
%% If RenewWhenBelow is 'never' the client will never grant new credit. Instead
|
||||
%% If RenewWhenBelow is 'never' the client will never grant more credit. Instead
|
||||
%% the caller will be notified when the link_credit reaches 0 with an
|
||||
%% amqp10_event of the following format:
|
||||
%% {amqp10_event, {link, LinkRef, credit_exhausted}}
|
||||
-spec flow_link_credit(link_ref(), Credit :: non_neg_integer(),
|
||||
RenewWhenBelow :: never | non_neg_integer()) -> ok.
|
||||
RenewWhenBelow :: never | pos_integer()) -> ok.
|
||||
flow_link_credit(Ref, Credit, RenewWhenBelow) ->
|
||||
flow_link_credit(Ref, Credit, RenewWhenBelow, false).
|
||||
|
||||
-spec flow_link_credit(link_ref(), Credit :: non_neg_integer(),
|
||||
RenewWhenBelow :: never | non_neg_integer(),
|
||||
RenewWhenBelow :: never | pos_integer(),
|
||||
Drain :: boolean()) -> ok.
|
||||
flow_link_credit(#link_ref{role = receiver, session = Session,
|
||||
link_handle = Handle},
|
||||
Credit, RenewWhenBelow, Drain) ->
|
||||
Credit, RenewWhenBelow, Drain)
|
||||
when RenewWhenBelow =:= never orelse
|
||||
is_integer(RenewWhenBelow) andalso
|
||||
RenewWhenBelow > 0 andalso
|
||||
RenewWhenBelow =< Credit ->
|
||||
Flow = #'v1_0.flow'{link_credit = {uint, Credit},
|
||||
drain = Drain},
|
||||
ok = amqp10_client_session:flow(Session, Handle, Flow, RenewWhenBelow).
|
||||
|
||||
%% @doc Request that the sender's flow state is echoed back
|
||||
%% This may be used to determine when the Link has finally quiesced.
|
||||
%% see §2.6.10 of the spec
|
||||
echo(#link_ref{role = receiver, session = Session,
|
||||
link_handle = Handle}) ->
|
||||
%% @doc Stop a receiving link.
|
||||
%% See AMQP 1.0 spec §2.6.10.
|
||||
stop_receiver_link(#link_ref{role = receiver,
|
||||
session = Session,
|
||||
link_handle = Handle}) ->
|
||||
Flow = #'v1_0.flow'{link_credit = {uint, 0},
|
||||
echo = true},
|
||||
ok = amqp10_client_session:flow(Session, Handle, Flow, 0).
|
||||
ok = amqp10_client_session:flow(Session, Handle, Flow, never).
|
||||
|
||||
%%% messages
|
||||
|
||||
%% @doc Send a message on a the link referred to be the 'LinkRef'.
|
||||
%% Returns ok for "async" transfers when messages are sent with settled=true
|
||||
%% else it returns the delivery state from the disposition
|
||||
-spec send_msg(link_ref(), amqp10_msg:amqp10_msg()) ->
|
||||
ok | {error, insufficient_credit | link_not_found | half_attached}.
|
||||
ok | amqp10_client_session:transfer_error().
|
||||
send_msg(#link_ref{role = sender, session = Session,
|
||||
link_handle = Handle}, Msg0) ->
|
||||
Msg = amqp10_msg:set_handle(Handle, Msg0),
|
||||
|
|
|
@ -7,8 +7,6 @@
|
|||
|
||||
-define(AMQP_PROTOCOL_HEADER, <<"AMQP", 0, 1, 0, 0>>).
|
||||
-define(SASL_PROTOCOL_HEADER, <<"AMQP", 3, 1, 0, 0>>).
|
||||
-define(MIN_MAX_FRAME_SIZE, 512).
|
||||
-define(MAX_MAX_FRAME_SIZE, 1024 * 1024).
|
||||
-define(FRAME_HEADER_SIZE, 8).
|
||||
|
||||
-define(TIMEOUT, 5000).
|
||||
|
|
|
@ -9,30 +9,12 @@
|
|||
|
||||
-behaviour(application).
|
||||
|
||||
%% Application callbacks
|
||||
%% application callbacks
|
||||
-export([start/2,
|
||||
stop/1]).
|
||||
|
||||
-type start_type() :: (
|
||||
normal |
|
||||
{takeover, Node :: node()} |
|
||||
{failover, Node :: node()}
|
||||
).
|
||||
-type state() :: term().
|
||||
|
||||
%%====================================================================
|
||||
%% API
|
||||
%%====================================================================
|
||||
|
||||
-spec start(StartType :: start_type(), StartArgs :: term()) ->
|
||||
{ok, Pid :: pid()} | {ok, Pid :: pid(), State :: state()} | {error, Reason :: term()}.
|
||||
start(_Type, _Args) ->
|
||||
amqp10_client_sup:start_link().
|
||||
|
||||
-spec stop(State :: state()) -> ok.
|
||||
stop(_State) ->
|
||||
ok.
|
||||
|
||||
%%====================================================================
|
||||
%% Internal functions
|
||||
%%====================================================================
|
||||
|
|
|
@ -11,21 +11,13 @@
|
|||
|
||||
-include("amqp10_client.hrl").
|
||||
-include_lib("amqp10_common/include/amqp10_framing.hrl").
|
||||
-include_lib("amqp10_common/include/amqp10_types.hrl").
|
||||
|
||||
-ifdef(nowarn_deprecated_gen_fsm).
|
||||
-compile({nowarn_deprecated_function,
|
||||
[{gen_fsm, reply, 2},
|
||||
{gen_fsm, send_all_state_event, 2},
|
||||
{gen_fsm, send_event, 2},
|
||||
{gen_fsm, start_link, 3},
|
||||
{gen_fsm, sync_send_all_state_event, 2}]}).
|
||||
-endif.
|
||||
|
||||
%% Public API.
|
||||
%% public API
|
||||
-export([open/1,
|
||||
close/2]).
|
||||
|
||||
%% Private API.
|
||||
%% private API
|
||||
-export([start_link/2,
|
||||
socket_ready/2,
|
||||
protocol_header_received/5,
|
||||
|
@ -34,13 +26,14 @@
|
|||
encrypt_sasl/1,
|
||||
decrypt_sasl/1]).
|
||||
|
||||
%% gen_fsm callbacks.
|
||||
%% gen_statem callbacks
|
||||
-export([init/1,
|
||||
callback_mode/0,
|
||||
terminate/3,
|
||||
code_change/4]).
|
||||
|
||||
%% gen_fsm state callbacks.
|
||||
%% gen_statem state callbacks
|
||||
%% see figure 2.23
|
||||
-export([expecting_socket/3,
|
||||
sasl_hdr_sent/3,
|
||||
sasl_hdr_rcvds/3,
|
||||
|
@ -71,8 +64,10 @@
|
|||
notify => pid() | none, % the pid to send connection events to
|
||||
notify_when_opened => pid() | none,
|
||||
notify_when_closed => pid() | none,
|
||||
max_frame_size => non_neg_integer(), % TODO: constrain to large than 512
|
||||
outgoing_max_frame_size => non_neg_integer() | undefined,
|
||||
%% incoming maximum frame size set by our client application
|
||||
max_frame_size => pos_integer(), % TODO: constrain to large than 512
|
||||
%% outgoing maximum frame size set by AMQP peer in OPEN performative
|
||||
outgoing_max_frame_size => pos_integer() | undefined,
|
||||
idle_time_out => milliseconds(),
|
||||
% set to a negative value to allow a sender to "overshoot" the flow
|
||||
% control by this margin
|
||||
|
@ -80,9 +75,7 @@
|
|||
%% These credentials_obfuscation-wrapped values have the type of
|
||||
%% decrypted_sasl/0
|
||||
sasl => encrypted_sasl() | decrypted_sasl(),
|
||||
notify => pid(),
|
||||
notify_when_opened => pid() | none,
|
||||
notify_when_closed => pid() | none
|
||||
properties => amqp10_client_types:properties()
|
||||
}.
|
||||
|
||||
-record(state,
|
||||
|
@ -167,13 +160,13 @@ protocol_header_received(Pid, Protocol, Maj, Min, Rev) ->
|
|||
|
||||
-spec begin_session(pid()) -> supervisor:startchild_ret().
|
||||
begin_session(Pid) ->
|
||||
gen_statem:call(Pid, begin_session, {dirty_timeout, ?TIMEOUT}).
|
||||
gen_statem:call(Pid, begin_session, ?TIMEOUT).
|
||||
|
||||
heartbeat(Pid) ->
|
||||
gen_statem:cast(Pid, heartbeat).
|
||||
|
||||
%% -------------------------------------------------------------------
|
||||
%% gen_fsm callbacks.
|
||||
%% gen_statem callbacks.
|
||||
%% -------------------------------------------------------------------
|
||||
|
||||
callback_mode() -> [state_functions].
|
||||
|
@ -259,7 +252,7 @@ hdr_sent({call, From}, begin_session,
|
|||
State1 = State#state{pending_session_reqs = [From | PendingSessionReqs]},
|
||||
{keep_state, State1}.
|
||||
|
||||
open_sent(_EvtType, #'v1_0.open'{max_frame_size = MFSz,
|
||||
open_sent(_EvtType, #'v1_0.open'{max_frame_size = MaybeMaxFrameSize,
|
||||
idle_time_out = Timeout},
|
||||
#state{pending_session_reqs = PendingSessionReqs,
|
||||
config = Config} = State0) ->
|
||||
|
@ -271,8 +264,14 @@ open_sent(_EvtType, #'v1_0.open'{max_frame_size = MFSz,
|
|||
heartbeat_timer = Tmr};
|
||||
_ -> State0
|
||||
end,
|
||||
State1 = State#state{config =
|
||||
Config#{outgoing_max_frame_size => unpack(MFSz)}},
|
||||
MaxFrameSize = case unpack(MaybeMaxFrameSize) of
|
||||
undefined ->
|
||||
%% default as per 2.7.1
|
||||
?UINT_MAX;
|
||||
Bytes when is_integer(Bytes) ->
|
||||
Bytes
|
||||
end,
|
||||
State1 = State#state{config = Config#{outgoing_max_frame_size => MaxFrameSize}},
|
||||
State2 = lists:foldr(
|
||||
fun(From, S0) ->
|
||||
{Ret, S2} = handle_begin_session(From, S0),
|
||||
|
@ -403,32 +402,32 @@ handle_begin_session({FromPid, _Ref},
|
|||
end,
|
||||
{Ret, State1}.
|
||||
|
||||
send_open(#state{socket = Socket, config = Config}) ->
|
||||
send_open(#state{socket = Socket, config = Config0}) ->
|
||||
{ok, Product} = application:get_key(description),
|
||||
{ok, Version} = application:get_key(vsn),
|
||||
Platform = "Erlang/OTP " ++ erlang:system_info(otp_release),
|
||||
Props = {map, [{{symbol, <<"product">>},
|
||||
{utf8, list_to_binary(Product)}},
|
||||
{{symbol, <<"version">>},
|
||||
{utf8, list_to_binary(Version)}},
|
||||
{{symbol, <<"platform">>},
|
||||
{utf8, list_to_binary(Platform)}}
|
||||
]},
|
||||
Props0 = #{<<"product">> => {utf8, list_to_binary(Product)},
|
||||
<<"version">> => {utf8, list_to_binary(Version)},
|
||||
<<"platform">> => {utf8, list_to_binary(Platform)}},
|
||||
Config = maps:update_with(properties,
|
||||
fun(Val) -> maps:merge(Props0, Val) end,
|
||||
Props0,
|
||||
Config0),
|
||||
Props = amqp10_client_types:make_properties(Config),
|
||||
ContainerId = maps:get(container_id, Config, generate_container_id()),
|
||||
IdleTimeOut = maps:get(idle_time_out, Config, 0),
|
||||
IncomingMaxFrameSize = maps:get(max_frame_size, Config),
|
||||
Open0 = #'v1_0.open'{container_id = {utf8, ContainerId},
|
||||
channel_max = {ushort, 100},
|
||||
idle_time_out = {uint, IdleTimeOut},
|
||||
properties = Props},
|
||||
Open1 = case Config of
|
||||
#{max_frame_size := MFSz} ->
|
||||
Open0#'v1_0.open'{max_frame_size = {uint, MFSz}};
|
||||
_ -> Open0
|
||||
end,
|
||||
properties = Props,
|
||||
max_frame_size = {uint, IncomingMaxFrameSize}
|
||||
},
|
||||
Open = case Config of
|
||||
#{hostname := Hostname} ->
|
||||
Open1#'v1_0.open'{hostname = {utf8, Hostname}};
|
||||
_ -> Open1
|
||||
Open0#'v1_0.open'{hostname = {utf8, Hostname}};
|
||||
_ ->
|
||||
Open0
|
||||
end,
|
||||
Encoded = amqp10_framing:encode_bin(Open),
|
||||
Frame = amqp10_binary_generator:build_frame(0, Encoded),
|
||||
|
@ -508,7 +507,7 @@ unpack(V) -> amqp10_client_types:unpack(V).
|
|||
|
||||
-spec generate_container_id() -> binary().
|
||||
generate_container_id() ->
|
||||
Pre = list_to_binary(atom_to_list(node())),
|
||||
Pre = atom_to_binary(node()),
|
||||
Id = bin_to_hex(crypto:strong_rand_bytes(8)),
|
||||
<<Pre/binary, <<"_">>/binary, Id/binary>>.
|
||||
|
||||
|
@ -552,4 +551,5 @@ decrypted_sasl_to_bin(none) -> <<"ANONYMOUS">>.
|
|||
config_defaults() ->
|
||||
#{sasl => none,
|
||||
transfer_limit_margin => 0,
|
||||
max_frame_size => ?MAX_MAX_FRAME_SIZE}.
|
||||
%% 1 MB
|
||||
max_frame_size => 1_048_576}.
|
||||
|
|
|
@ -8,35 +8,31 @@
|
|||
|
||||
-behaviour(supervisor).
|
||||
|
||||
%% Private API.
|
||||
%% API
|
||||
-export([start_link/1]).
|
||||
|
||||
%% Supervisor callbacks.
|
||||
%% Supervisor callbacks
|
||||
-export([init/1]).
|
||||
|
||||
-define(CHILD(Id, Mod, Type, Args), {Id, {Mod, start_link, Args},
|
||||
transient, 5000, Type, [Mod]}).
|
||||
|
||||
%% -------------------------------------------------------------------
|
||||
%% Private API.
|
||||
%% -------------------------------------------------------------------
|
||||
|
||||
-spec start_link(amqp10_client_connection:connection_config()) ->
|
||||
{ok, pid()} | ignore | {error, any()}.
|
||||
start_link(Config) ->
|
||||
supervisor:start_link(?MODULE, [Config]).
|
||||
|
||||
%% -------------------------------------------------------------------
|
||||
%% Supervisor callbacks.
|
||||
%% -------------------------------------------------------------------
|
||||
|
||||
init(Args) ->
|
||||
ReaderSpec = ?CHILD(reader, amqp10_client_frame_reader,
|
||||
worker, [self() | Args]),
|
||||
ConnectionSpec = ?CHILD(connection, amqp10_client_connection,
|
||||
worker, [self() | Args]),
|
||||
SessionsSupSpec = ?CHILD(sessions, amqp10_client_sessions_sup,
|
||||
supervisor, []),
|
||||
{ok, {{one_for_all, 0, 1}, [ConnectionSpec,
|
||||
ReaderSpec,
|
||||
SessionsSupSpec]}}.
|
||||
init(Args0) ->
|
||||
SupFlags = #{strategy => one_for_all,
|
||||
intensity => 0,
|
||||
period => 1},
|
||||
Fun = start_link,
|
||||
Args = [self() | Args0],
|
||||
ConnectionSpec = #{id => connection,
|
||||
start => {amqp10_client_connection, Fun, Args},
|
||||
restart => transient},
|
||||
ReaderSpec = #{id => reader,
|
||||
start => {amqp10_client_frame_reader, Fun, Args},
|
||||
restart => transient},
|
||||
SessionsSupSpec = #{id => sessions,
|
||||
start => {amqp10_client_sessions_sup, Fun, []},
|
||||
restart => transient,
|
||||
type => supervisor},
|
||||
{ok, {SupFlags, [ConnectionSpec,
|
||||
ReaderSpec,
|
||||
SessionsSupSpec]}}.
|
||||
|
|
|
@ -1,38 +0,0 @@
|
|||
%% This Source Code Form is subject to the terms of the Mozilla Public
|
||||
%% License, v. 2.0. If a copy of the MPL was not distributed with this
|
||||
%% file, You can obtain one at https://mozilla.org/MPL/2.0/.
|
||||
%%
|
||||
%% Copyright (c) 2007-2024 Broadcom. All Rights Reserved. The term “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. All rights reserved.
|
||||
%%
|
||||
-module(amqp10_client_connections_sup).
|
||||
|
||||
-behaviour(supervisor).
|
||||
|
||||
%% Private API.
|
||||
-export([start_link/0,
|
||||
stop_child/1]).
|
||||
|
||||
%% Supervisor callbacks.
|
||||
-export([init/1]).
|
||||
|
||||
-define(CHILD(Id, Mod, Type, Args), {Id, {Mod, start_link, Args},
|
||||
temporary, infinity, Type, [Mod]}).
|
||||
|
||||
%% -------------------------------------------------------------------
|
||||
%% Private API.
|
||||
%% -------------------------------------------------------------------
|
||||
|
||||
stop_child(Pid) ->
|
||||
supervisor:terminate_child({local, ?MODULE}, Pid).
|
||||
|
||||
start_link() ->
|
||||
supervisor:start_link({local, ?MODULE}, ?MODULE, []).
|
||||
|
||||
%% -------------------------------------------------------------------
|
||||
%% Supervisor callbacks.
|
||||
%% -------------------------------------------------------------------
|
||||
|
||||
init([]) ->
|
||||
Template = ?CHILD(connection_sup, amqp10_client_connection_sup,
|
||||
supervisor, []),
|
||||
{ok, {{simple_one_for_one, 0, 1}, [Template]}}.
|
File diff suppressed because it is too large
Load Diff
|
@ -8,29 +8,20 @@
|
|||
|
||||
-behaviour(supervisor).
|
||||
|
||||
%% Private API.
|
||||
%% API
|
||||
-export([start_link/0]).
|
||||
|
||||
%% Supervisor callbacks.
|
||||
%% Supervisor callbacks
|
||||
-export([init/1]).
|
||||
|
||||
-define(CHILD(Id, Mod, Type, Args), {Id, {Mod, start_link, Args},
|
||||
transient, 5000, Type, [Mod]}).
|
||||
|
||||
%% -------------------------------------------------------------------
|
||||
%% Private API.
|
||||
%% -------------------------------------------------------------------
|
||||
|
||||
-spec start_link() ->
|
||||
{ok, pid()} | ignore | {error, any()}.
|
||||
|
||||
start_link() ->
|
||||
supervisor:start_link(?MODULE, []).
|
||||
|
||||
%% -------------------------------------------------------------------
|
||||
%% Supervisor callbacks.
|
||||
%% -------------------------------------------------------------------
|
||||
|
||||
init(Args) ->
|
||||
Template = ?CHILD(session, amqp10_client_session, worker, Args),
|
||||
{ok, {{simple_one_for_one, 0, 1}, [Template]}}.
|
||||
init([]) ->
|
||||
SupFlags = #{strategy => simple_one_for_one,
|
||||
intensity => 0,
|
||||
period => 1},
|
||||
ChildSpec = #{id => session,
|
||||
start => {amqp10_client_session, start_link, []},
|
||||
restart => transient},
|
||||
{ok, {SupFlags, [ChildSpec]}}.
|
||||
|
|
|
@ -8,27 +8,21 @@
|
|||
|
||||
-behaviour(supervisor).
|
||||
|
||||
%% Private API.
|
||||
%% API
|
||||
-export([start_link/0]).
|
||||
|
||||
%% Supervisor callbacks.
|
||||
%% Supervisor callbacks
|
||||
-export([init/1]).
|
||||
|
||||
-define(CHILD(Id, Mod, Type, Args), {Id, {Mod, start_link, Args},
|
||||
temporary, infinity, Type, [Mod]}).
|
||||
|
||||
%% -------------------------------------------------------------------
|
||||
%% Private API.
|
||||
%% -------------------------------------------------------------------
|
||||
|
||||
start_link() ->
|
||||
supervisor:start_link({local, ?MODULE}, ?MODULE, []).
|
||||
|
||||
%% -------------------------------------------------------------------
|
||||
%% Supervisor callbacks.
|
||||
%% -------------------------------------------------------------------
|
||||
|
||||
init([]) ->
|
||||
Template = ?CHILD(connection_sup, amqp10_client_connection_sup,
|
||||
supervisor, []),
|
||||
{ok, {{simple_one_for_one, 0, 1}, [Template]}}.
|
||||
SupFlags = #{strategy => simple_one_for_one,
|
||||
intensity => 0,
|
||||
period => 1},
|
||||
ChildSpec = #{id => connection_sup,
|
||||
start => {amqp10_client_connection_sup, start_link, []},
|
||||
restart => temporary,
|
||||
type => supervisor},
|
||||
{ok, {SupFlags, [ChildSpec]}}.
|
||||
|
|
|
@ -10,7 +10,8 @@
|
|||
|
||||
-export([unpack/1,
|
||||
utf8/1,
|
||||
uint/1]).
|
||||
uint/1,
|
||||
make_properties/1]).
|
||||
|
||||
-type amqp10_performative() :: #'v1_0.open'{} | #'v1_0.begin'{} | #'v1_0.attach'{} |
|
||||
#'v1_0.flow'{} | #'v1_0.transfer'{} |
|
||||
|
@ -63,10 +64,13 @@
|
|||
link_event_detail()}.
|
||||
-type amqp10_event() :: {amqp10_event, amqp10_event_detail()}.
|
||||
|
||||
-type properties() :: #{binary() => tuple()}.
|
||||
|
||||
-export_type([amqp10_performative/0, channel/0,
|
||||
source/0, target/0, amqp10_msg_record/0,
|
||||
delivery_state/0, amqp_error/0, connection_error/0,
|
||||
amqp10_event_detail/0, amqp10_event/0]).
|
||||
amqp10_event_detail/0, amqp10_event/0,
|
||||
properties/0]).
|
||||
|
||||
|
||||
unpack(undefined) -> undefined;
|
||||
|
@ -77,3 +81,12 @@ utf8(S) when is_list(S) -> {utf8, list_to_binary(S)};
|
|||
utf8(B) when is_binary(B) -> {utf8, B}.
|
||||
|
||||
uint(N) -> {uint, N}.
|
||||
|
||||
make_properties(#{properties := Props})
|
||||
when is_map(Props) andalso
|
||||
map_size(Props) > 0 ->
|
||||
{map, maps:fold(fun(K, V, L) ->
|
||||
[{{symbol, K}, V} | L]
|
||||
end, [], Props)};
|
||||
make_properties(_) ->
|
||||
undefined.
|
||||
|
|
|
@ -38,7 +38,7 @@
|
|||
|
||||
-include_lib("amqp10_common/include/amqp10_framing.hrl").
|
||||
|
||||
-type maybe(T) :: T | undefined.
|
||||
-type opt(T) :: T | undefined.
|
||||
|
||||
-type delivery_tag() :: binary().
|
||||
-type content_type() :: term(). % TODO: refine
|
||||
|
@ -52,23 +52,23 @@
|
|||
|
||||
-type amqp10_header() :: #{durable => boolean(), % false
|
||||
priority => byte(), % 4
|
||||
ttl => maybe(non_neg_integer()),
|
||||
ttl => opt(non_neg_integer()),
|
||||
first_acquirer => boolean(), % false
|
||||
delivery_count => non_neg_integer()}. % 0
|
||||
|
||||
-type amqp10_properties() :: #{message_id => maybe(any()),
|
||||
user_id => maybe(binary()),
|
||||
to => maybe(any()),
|
||||
subject => maybe(binary()),
|
||||
reply_to => maybe(any()),
|
||||
correlation_id => maybe(any()),
|
||||
content_type => maybe(content_type()),
|
||||
content_encoding => maybe(content_encoding()),
|
||||
absolute_expiry_time => maybe(non_neg_integer()),
|
||||
creation_time => maybe(non_neg_integer()),
|
||||
group_id => maybe(binary()),
|
||||
group_sequence => maybe(non_neg_integer()),
|
||||
reply_to_group_id => maybe(binary())}.
|
||||
-type amqp10_properties() :: #{message_id => opt(any()),
|
||||
user_id => opt(binary()),
|
||||
to => opt(any()),
|
||||
subject => opt(binary()),
|
||||
reply_to => opt(any()),
|
||||
correlation_id => opt(any()),
|
||||
content_type => opt(content_type()),
|
||||
content_encoding => opt(content_encoding()),
|
||||
absolute_expiry_time => opt(non_neg_integer()),
|
||||
creation_time => opt(non_neg_integer()),
|
||||
group_id => opt(binary()),
|
||||
group_sequence => opt(non_neg_integer()),
|
||||
reply_to_group_id => opt(binary())}.
|
||||
|
||||
-type amqp10_body() :: [#'v1_0.data'{}] |
|
||||
[#'v1_0.amqp_sequence'{}] |
|
||||
|
@ -78,13 +78,13 @@
|
|||
|
||||
-record(amqp10_msg,
|
||||
{transfer :: #'v1_0.transfer'{},
|
||||
header :: maybe(#'v1_0.header'{}),
|
||||
delivery_annotations :: maybe(#'v1_0.delivery_annotations'{}),
|
||||
message_annotations :: maybe(#'v1_0.message_annotations'{}),
|
||||
properties :: maybe(#'v1_0.properties'{}),
|
||||
application_properties :: maybe(#'v1_0.application_properties'{}),
|
||||
header :: opt(#'v1_0.header'{}),
|
||||
delivery_annotations :: opt(#'v1_0.delivery_annotations'{}),
|
||||
message_annotations :: opt(#'v1_0.message_annotations'{}),
|
||||
properties :: opt(#'v1_0.properties'{}),
|
||||
application_properties :: opt(#'v1_0.application_properties'{}),
|
||||
body :: amqp10_body() | unset,
|
||||
footer :: maybe(#'v1_0.footer'{})
|
||||
footer :: opt(#'v1_0.footer'{})
|
||||
}).
|
||||
|
||||
-opaque amqp10_msg() :: #amqp10_msg{}.
|
||||
|
@ -142,7 +142,7 @@ settled(#amqp10_msg{transfer = #'v1_0.transfer'{settled = Settled}}) ->
|
|||
% the last 1 octet is the version
|
||||
% See 2.8.11 in the spec
|
||||
-spec message_format(amqp10_msg()) ->
|
||||
maybe({non_neg_integer(), non_neg_integer()}).
|
||||
opt({non_neg_integer(), non_neg_integer()}).
|
||||
message_format(#amqp10_msg{transfer =
|
||||
#'v1_0.transfer'{message_format = undefined}}) ->
|
||||
undefined;
|
||||
|
@ -306,7 +306,7 @@ set_headers(Headers, #amqp10_msg{header = Current} = Msg) ->
|
|||
H = maps:fold(fun(durable, V, Acc) ->
|
||||
Acc#'v1_0.header'{durable = V};
|
||||
(priority, V, Acc) ->
|
||||
Acc#'v1_0.header'{priority = {uint, V}};
|
||||
Acc#'v1_0.header'{priority = {ubyte, V}};
|
||||
(first_acquirer, V, Acc) ->
|
||||
Acc#'v1_0.header'{first_acquirer = V};
|
||||
(ttl, V, Acc) ->
|
||||
|
@ -325,8 +325,8 @@ set_properties(Props, #amqp10_msg{properties = Current} = Msg) ->
|
|||
P = maps:fold(fun(message_id, V, Acc) when is_binary(V) ->
|
||||
% message_id can be any type but we restrict it here
|
||||
Acc#'v1_0.properties'{message_id = utf8(V)};
|
||||
(user_id, V, Acc) ->
|
||||
Acc#'v1_0.properties'{user_id = utf8(V)};
|
||||
(user_id, V, Acc) when is_binary(V) ->
|
||||
Acc#'v1_0.properties'{user_id = {binary, V}};
|
||||
(to, V, Acc) ->
|
||||
Acc#'v1_0.properties'{to = utf8(V)};
|
||||
(subject, V, Acc) ->
|
||||
|
|
|
@ -14,21 +14,10 @@
|
|||
|
||||
-include("src/amqp10_client.hrl").
|
||||
|
||||
-compile(export_all).
|
||||
-compile([export_all, nowarn_export_all]).
|
||||
|
||||
-define(UNAUTHORIZED_USER, <<"test_user_no_perm">>).
|
||||
|
||||
%% The latch constant defines how many processes are spawned in order
|
||||
%% to run certain functionality in parallel. It follows the standard
|
||||
%% countdown latch pattern.
|
||||
-define(LATCH, 100).
|
||||
|
||||
%% The wait constant defines how long a consumer waits before it
|
||||
%% unsubscribes
|
||||
-define(WAIT, 200).
|
||||
|
||||
%% How to long wait for a process to die after an expected failure
|
||||
-define(PROCESS_EXIT_TIMEOUT, 5000).
|
||||
suite() ->
|
||||
[{timetrap, {seconds, 120}}].
|
||||
|
||||
all() ->
|
||||
[
|
||||
|
@ -77,7 +66,8 @@ shared() ->
|
|||
subscribe,
|
||||
subscribe_with_auto_flow,
|
||||
outgoing_heartbeat,
|
||||
roundtrip_large_messages
|
||||
roundtrip_large_messages,
|
||||
transfer_id_vs_delivery_id
|
||||
].
|
||||
|
||||
%% -------------------------------------------------------------------
|
||||
|
@ -112,17 +102,13 @@ stop_amqp10_client_app(Config) ->
|
|||
init_per_group(rabbitmq, Config0) ->
|
||||
Config = rabbit_ct_helpers:set_config(Config0,
|
||||
{sasl, {plain, <<"guest">>, <<"guest">>}}),
|
||||
Config1 = rabbit_ct_helpers:merge_app_env(Config,
|
||||
[{rabbitmq_amqp1_0,
|
||||
[{protocol_strict_mode, true}]}]),
|
||||
rabbit_ct_helpers:run_steps(Config1, rabbit_ct_broker_helpers:setup_steps());
|
||||
rabbit_ct_helpers:run_steps(Config, rabbit_ct_broker_helpers:setup_steps());
|
||||
init_per_group(rabbitmq_strict, Config0) ->
|
||||
Config = rabbit_ct_helpers:set_config(Config0,
|
||||
{sasl, {plain, <<"guest">>, <<"guest">>}}),
|
||||
Config1 = rabbit_ct_helpers:merge_app_env(Config,
|
||||
[{rabbitmq_amqp1_0,
|
||||
[{default_user, none},
|
||||
{protocol_strict_mode, true}]}]),
|
||||
[{rabbit,
|
||||
[{amqp1_0_default_user, none}]}]),
|
||||
rabbit_ct_helpers:run_steps(Config1, rabbit_ct_broker_helpers:setup_steps());
|
||||
init_per_group(activemq, Config0) ->
|
||||
Config = rabbit_ct_helpers:set_config(Config0, {sasl, anon}),
|
||||
|
@@ -309,9 +295,7 @@ roundtrip_large_messages(Config) ->
    Data1Mb = binary:copy(DataKb, 1024),
    roundtrip(OpenConf, Data1Mb),
    roundtrip(OpenConf, binary:copy(Data1Mb, 8)),
    roundtrip(OpenConf, binary:copy(Data1Mb, 64)),
    ok.

    ok = roundtrip(OpenConf, binary:copy(Data1Mb, 64)).

roundtrip(OpenConf) ->
    roundtrip(OpenConf, <<"banana">>).

@@ -319,39 +303,32 @@ roundtrip(OpenConf) ->
roundtrip(OpenConf, Body) ->
    {ok, Connection} = amqp10_client:open_connection(OpenConf),
    {ok, Session} = amqp10_client:begin_session(Connection),
    {ok, Sender} = amqp10_client:attach_sender_link(Session,
                                                    <<"banana-sender">>,
                                                    <<"test1">>,
                                                    settled,
                                                    unsettled_state),
    {ok, Sender} = amqp10_client:attach_sender_link(
                     Session, <<"banana-sender">>, <<"test1">>, settled, unsettled_state),
    await_link(Sender, credited, link_credit_timeout),

    Now = os:system_time(millisecond),
    Props = #{creation_time => Now},
    Msg0 = amqp10_msg:set_properties(Props,
                                     amqp10_msg:new(<<"my-tag">>, Body, true)),
    Msg1 = amqp10_msg:set_application_properties(#{"a_key" => "a_value"}, Msg0),
    Msg = amqp10_msg:set_message_annotations(#{<<"x_key">> => "x_value"}, Msg1),
    % RabbitMQ AMQP 1.0 does not yet support delivery annotations
    % Msg = amqp10_msg:set_delivery_annotations(#{<<"x_key">> => "x_value"}, Msg2),
    Msg0 = amqp10_msg:new(<<"my-tag">>, Body, true),
    Msg1 = amqp10_msg:set_properties(Props, Msg0),
    Msg2 = amqp10_msg:set_application_properties(#{"a_key" => "a_value"}, Msg1),
    Msg3 = amqp10_msg:set_message_annotations(#{<<"x_key">> => "x_value"}, Msg2),
    Msg = amqp10_msg:set_delivery_annotations(#{<<"y_key">> => "y_value"}, Msg3),
    ok = amqp10_client:send_msg(Sender, Msg),
    ok = amqp10_client:detach_link(Sender),
    await_link(Sender, {detached, normal}, link_detach_timeout),

    {error, link_not_found} = amqp10_client:detach_link(Sender),
    {ok, Receiver} = amqp10_client:attach_receiver_link(Session,
                                                        <<"banana-receiver">>,
                                                        <<"test1">>,
                                                        settled,
                                                        unsettled_state),
    {ok, OutMsg} = amqp10_client:get_msg(Receiver, 60000 * 5),
    {ok, Receiver} = amqp10_client:attach_receiver_link(
                       Session, <<"banana-receiver">>, <<"test1">>, settled, unsettled_state),
    {ok, OutMsg} = amqp10_client:get_msg(Receiver, 60_000 * 4),
    ok = amqp10_client:end_session(Session),
    ok = amqp10_client:close_connection(Connection),
    % ct:pal(?LOW_IMPORTANCE, "roundtrip message Out: ~tp~nIn: ~tp~n", [OutMsg, Msg]),
    #{creation_time := Now} = amqp10_msg:properties(OutMsg),
    #{<<"a_key">> := <<"a_value">>} = amqp10_msg:application_properties(OutMsg),
    #{<<"x_key">> := <<"x_value">>} = amqp10_msg:message_annotations(OutMsg),
    % #{<<"x_key">> := <<"x_value">>} = amqp10_msg:delivery_annotations(OutMsg),
    #{<<"y_key">> := <<"y_value">>} = amqp10_msg:delivery_annotations(OutMsg),
    ?assertEqual([Body], amqp10_msg:body(OutMsg)),
    ok.
@@ -379,7 +356,7 @@ filtered_roundtrip(OpenConf, Body) ->
                                                    settled,
                                                    unsettled_state),
    ok = amqp10_client:send_msg(Sender, Msg1),
    {ok, OutMsg1} = amqp10_client:get_msg(DefaultReceiver, 60000 * 5),
    {ok, OutMsg1} = amqp10_client:get_msg(DefaultReceiver, 60_000 * 4),
    ?assertEqual(<<"msg-1-tag">>, amqp10_msg:delivery_tag(OutMsg1)),

    timer:sleep(5 * 1000),

@@ -398,16 +375,52 @@ filtered_roundtrip(OpenConf, Body) ->
                       unsettled_state,
                       #{<<"apache.org:selector-filter:string">> => <<"amqp.annotation.x-opt-enqueuedtimeutc > ", Now2Binary/binary>>}),

    {ok, OutMsg2} = amqp10_client:get_msg(DefaultReceiver, 60000 * 5),
    {ok, OutMsg2} = amqp10_client:get_msg(DefaultReceiver, 60_000 * 4),
    ?assertEqual(<<"msg-2-tag">>, amqp10_msg:delivery_tag(OutMsg2)),

    {ok, OutMsgFiltered} = amqp10_client:get_msg(FilteredReceiver, 60000 * 5),
    {ok, OutMsgFiltered} = amqp10_client:get_msg(FilteredReceiver, 60_000 * 4),
    ?assertEqual(<<"msg-2-tag">>, amqp10_msg:delivery_tag(OutMsgFiltered)),

    ok = amqp10_client:end_session(Session),
    ok = amqp10_client:close_connection(Connection),
    ok.

%% Assert that implementations respect the difference between transfer-id and delivery-id.
transfer_id_vs_delivery_id(Config) ->
    Hostname = ?config(rmq_hostname, Config),
    Port = rabbit_ct_broker_helpers:get_node_config(Config, 0, tcp_port_amqp),
    OpenConf = #{address => Hostname, port => Port, sasl => anon},

    {ok, Connection} = amqp10_client:open_connection(OpenConf),
    {ok, Session} = amqp10_client:begin_session(Connection),
    {ok, Sender} = amqp10_client:attach_sender_link(
                     Session, <<"banana-sender">>, <<"test1">>, settled, unsettled_state),
    await_link(Sender, credited, link_credit_timeout),

    P0 = binary:copy(<<0>>, 8_000_000),
    P1 = <<P0/binary, 1>>,
    P2 = <<P0/binary, 2>>,
    Msg1 = amqp10_msg:new(<<"tag 1">>, P1, true),
    Msg2 = amqp10_msg:new(<<"tag 2">>, P2, true),
    ok = amqp10_client:send_msg(Sender, Msg1),
    ok = amqp10_client:send_msg(Sender, Msg2),
    ok = amqp10_client:detach_link(Sender),
    await_link(Sender, {detached, normal}, link_detach_timeout),

    {ok, Receiver} = amqp10_client:attach_receiver_link(
                       Session, <<"banana-receiver">>, <<"test1">>, settled, unsettled_state),
    {ok, RcvMsg1} = amqp10_client:get_msg(Receiver, 60_000 * 4),
    {ok, RcvMsg2} = amqp10_client:get_msg(Receiver, 60_000 * 4),
    ok = amqp10_client:end_session(Session),
    ok = amqp10_client:close_connection(Connection),

    ?assertEqual([P1], amqp10_msg:body(RcvMsg1)),
    ?assertEqual([P2], amqp10_msg:body(RcvMsg2)),
    %% Despite many transfers, there were only 2 deliveries.
    %% Therefore, delivery-id should have been increased by just 1.
    ?assertEqual(serial_number:add(amqp10_msg:delivery_id(RcvMsg1), 1),
                 amqp10_msg:delivery_id(RcvMsg2)).

% a message is sent before the link attach is guaranteed to
% have completed and link credit granted
% also queue a link detached immediately after transfer
@@ -676,11 +689,13 @@ incoming_heartbeat(Config) ->
              idle_time_out => 1000, notify => self()},
    {ok, Connection} = amqp10_client:open_connection(CConf),
    receive
        {amqp10_event, {connection, Connection,
         {closed, {resource_limit_exceeded, <<"remote idle-time-out">>}}}} ->
        {amqp10_event,
         {connection, Connection0,
          {closed, {resource_limit_exceeded, <<"remote idle-time-out">>}}}}
          when Connection0 =:= Connection ->
            ok
    after 5000 ->
              exit(incoming_heartbeat_assert)
              exit(incoming_heartbeat_assert)
    end,
    demonitor(MockRef).

@@ -704,7 +719,8 @@ publish_messages(Sender, Data, Num) ->

receive_one(Receiver) ->
    receive
        {amqp10_msg, Receiver, Msg} ->
        {amqp10_msg, Receiver0, Msg}
          when Receiver0 =:= Receiver ->
            amqp10_client:accept_msg(Receiver, Msg)
    after 2000 ->
              timeout

@@ -712,7 +728,8 @@ receive_one(Receiver) ->

await_disposition(DeliveryTag) ->
    receive
        {amqp10_disposition, {accepted, DeliveryTag}} -> ok
        {amqp10_disposition, {accepted, DeliveryTag0}}
          when DeliveryTag0 =:= DeliveryTag -> ok
    after 3000 ->
              flush(),
              exit(dispostion_timeout)

@@ -720,9 +737,12 @@ await_disposition(DeliveryTag) ->

await_link(Who, What, Err) ->
    receive
        {amqp10_event, {link, Who, What}} ->
        {amqp10_event, {link, Who0, What0}}
          when Who0 =:= Who andalso
               What0 =:= What ->
            ok;
        {amqp10_event, {link, Who, {detached, Why}}} ->
        {amqp10_event, {link, Who0, {detached, Why}}}
          when Who0 =:= Who ->
            exit(Why)
    after 5000 ->
              flush(),
@ -116,6 +116,11 @@ rabbitmq_suite(
|
|||
name = "binary_parser_SUITE",
|
||||
)
|
||||
|
||||
rabbitmq_suite(
|
||||
name = "serial_number_SUITE",
|
||||
size = "small",
|
||||
)
|
||||
|
||||
assert_suites()
|
||||
|
||||
alias(
|
||||
|
|
|
@ -13,6 +13,7 @@ def all_beam_files(name = "all_beam_files"):
|
|||
"src/amqp10_binary_parser.erl",
|
||||
"src/amqp10_framing.erl",
|
||||
"src/amqp10_framing0.erl",
|
||||
"src/serial_number.erl",
|
||||
],
|
||||
hdrs = [":public_and_private_hdrs"],
|
||||
app_name = "amqp10_common",
|
||||
|
@ -34,6 +35,7 @@ def all_test_beam_files(name = "all_test_beam_files"):
|
|||
"src/amqp10_binary_parser.erl",
|
||||
"src/amqp10_framing.erl",
|
||||
"src/amqp10_framing0.erl",
|
||||
"src/serial_number.erl",
|
||||
],
|
||||
hdrs = [":public_and_private_hdrs"],
|
||||
app_name = "amqp10_common",
|
||||
|
@ -62,11 +64,12 @@ def all_srcs(name = "all_srcs"):
|
|||
"src/amqp10_binary_parser.erl",
|
||||
"src/amqp10_framing.erl",
|
||||
"src/amqp10_framing0.erl",
|
||||
"src/serial_number.erl",
|
||||
],
|
||||
)
|
||||
filegroup(
|
||||
name = "public_hdrs",
|
||||
srcs = ["include/amqp10_framing.hrl"],
|
||||
srcs = ["include/amqp10_framing.hrl", "include/amqp10_types.hrl"],
|
||||
)
|
||||
filegroup(
|
||||
name = "private_hdrs",
|
||||
|
@ -96,3 +99,11 @@ def test_suite_beam_files(name = "test_suite_beam_files"):
|
|||
app_name = "amqp10_common",
|
||||
erlc_opts = "//:test_erlc_opts",
|
||||
)
|
||||
erlang_bytecode(
|
||||
name = "serial_number_SUITE_beam_files",
|
||||
testonly = True,
|
||||
srcs = ["test/serial_number_SUITE.erl"],
|
||||
outs = ["test/serial_number_SUITE.beam"],
|
||||
app_name = "amqp10_common",
|
||||
erlc_opts = "//:test_erlc_opts",
|
||||
)
|
||||
|
|
|
@@ -0,0 +1,12 @@
-define(UINT_MAX, 16#ff_ff_ff_ff).

% [1.6.5]
-type uint() :: 0..?UINT_MAX.
% [2.8.4]
-type link_handle() :: uint().
% [2.8.8]
-type delivery_number() :: sequence_no().
% [2.8.9]
-type transfer_number() :: sequence_no().
% [2.8.10]
-type sequence_no() :: uint().
@@ -117,11 +117,8 @@ parse_compound(UnitSize, Bin) ->

parse_compound1(0, <<>>, List) ->
    lists:reverse(List);
parse_compound1(_Left, <<>>, List) ->
    case application:get_env(rabbitmq_amqp1_0, protocol_strict_mode) of
        {ok, false} -> lists:reverse(List); %% ignore miscount
        {ok, true}  -> throw(compound_datatype_miscount)
    end;
parse_compound1(_Left, <<>>, _List) ->
    throw(compound_datatype_miscount);
parse_compound1(Count, Bin, Acc) ->
    {Value, Rest} = parse(Bin),
    parse_compound1(Count - 1, Rest, [Value | Acc]).
@@ -0,0 +1,118 @@
%% This Source Code Form is subject to the terms of the Mozilla Public
%% License, v. 2.0. If a copy of the MPL was not distributed with this
%% file, You can obtain one at https://mozilla.org/MPL/2.0/.
%%
%% Copyright (c) 2007-2023 VMware, Inc. or its affiliates. All rights reserved.

%% https://www.ietf.org/rfc/rfc1982.txt
-module(serial_number).
-include("amqp10_types.hrl").

-export([add/2,
         compare/2,
         ranges/1,
         diff/2,
         foldl/4]).

-ifdef(TEST).
-export([usort/1]).
-endif.

-type serial_number() :: sequence_no().
-export_type([serial_number/0]).

%% SERIAL_BITS = 32
%% 2 ^ SERIAL_BITS
-define(SERIAL_SPACE, 16#100000000).
%% 2 ^ (SERIAL_BITS - 1) - 1
-define(SERIAL_MAX_ADDEND, 16#7fffffff).

-spec add(serial_number(), non_neg_integer()) ->
    serial_number().
add(S, N)
  when N >= 0 andalso
       N =< ?SERIAL_MAX_ADDEND ->
    (S + N) rem ?SERIAL_SPACE;
add(S, N) ->
    exit({undefined_serial_addition, S, N}).

%% 2 ^ (SERIAL_BITS - 1)
-define(COMPARE, 2_147_483_648).

-spec compare(serial_number(), serial_number()) ->
    equal | less | greater.
compare(A, B) ->
    if A =:= B ->
           equal;
       (A < B andalso B - A < ?COMPARE) orelse
       (A > B andalso A - B > ?COMPARE) ->
           less;
       (A < B andalso B - A > ?COMPARE) orelse
       (A > B andalso A - B < ?COMPARE) ->
           greater;
       true ->
           exit({undefined_serial_comparison, A, B})
    end.

-spec usort([serial_number()]) ->
    [serial_number()].
usort(L) ->
    lists:usort(fun(A, B) ->
                        case compare(A, B) of
                            greater -> false;
                            _ -> true
                        end
                end, L).

%% Takes a list of serial numbers and returns tuples
%% {First, Last} representing contiguous serial numbers.
-spec ranges([serial_number()]) ->
    [{First :: serial_number(), Last :: serial_number()}].
ranges([]) ->
    [];
ranges(SerialNumbers) ->
    [First | Rest] = usort(SerialNumbers),
    ranges0(Rest, [{First, First}]).

ranges0([], Acc) ->
    lists:reverse(Acc);
ranges0([H | Rest], [{First, Last} | AccRest] = Acc0) ->
    case add(Last, 1) of
        H ->
            Acc = [{First, H} | AccRest],
            ranges0(Rest, Acc);
        _ ->
            Acc = [{H, H} | Acc0],
            ranges0(Rest, Acc)
    end.

-define(SERIAL_DIFF_BOUND, 16#80000000).
-spec diff(serial_number(), serial_number()) -> integer().
diff(A, B) ->
    Diff = A - B,
    if Diff > (?SERIAL_DIFF_BOUND) ->
           %% B is actually greater than A
           - (?SERIAL_SPACE - Diff);
       Diff < - (?SERIAL_DIFF_BOUND) ->
           ?SERIAL_SPACE + Diff;
       Diff < ?SERIAL_DIFF_BOUND andalso Diff > -?SERIAL_DIFF_BOUND ->
           Diff;
       true ->
           exit({undefined_serial_diff, A, B})
    end.

-spec foldl(Fun, Acc0, First, Last) -> Acc1 when
      Fun :: fun((serial_number(), AccIn) -> AccOut),
      Acc0 :: term(),
      Acc1 :: term(),
      AccIn :: term(),
      AccOut :: term(),
      First :: serial_number(),
      Last :: serial_number().

foldl(Fun, Acc0, Current, Last) ->
    Acc = Fun(Current, Acc0),
    case compare(Current, Last) of
        less  -> foldl(Fun, Acc, add(Current, 1), Last);
        equal -> Acc
    end.
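The module above follows RFC 1982 serial number arithmetic, so 32-bit delivery-ids and transfer-ids can wrap around without breaking ordering. A short illustration of the wrap-around behaviour (the concrete values are taken from the accompanying `serial_number_SUITE`; the snippet itself is only an illustrative usage sketch):

```erlang
%% addition wraps at 2^32
0 = serial_number:add(16#ffffffff, 1),
1 = serial_number:add(16#ffffffff, 2),
%% a serial number just before the wrap point compares as *less than*
%% a small serial number just after the wrap
less = serial_number:compare(16#ffffffff - 5, 30_000),
%% addends larger than 2^31 - 1 are undefined per RFC 1982 and cause an exit
{'EXIT', {undefined_serial_addition, 0, 16#80000000}} =
    (catch serial_number:add(0, 16#80000000)).
```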
@ -0,0 +1,124 @@
|
|||
%% This Source Code Form is subject to the terms of the Mozilla Public
|
||||
%% License, v. 2.0. If a copy of the MPL was not distributed with this
|
||||
%% file, You can obtain one at https://mozilla.org/MPL/2.0/.
|
||||
%%
|
||||
%% Copyright (c) 2007-2023 VMware, Inc. or its affiliates. All rights reserved.
|
||||
%%
|
||||
|
||||
-module(serial_number_SUITE).
|
||||
-include_lib("eunit/include/eunit.hrl").
|
||||
|
||||
-compile([export_all,
|
||||
nowarn_export_all]).
|
||||
|
||||
-import(serial_number, [add/2,
|
||||
compare/2,
|
||||
usort/1,
|
||||
ranges/1,
|
||||
diff/2,
|
||||
foldl/4]).
|
||||
|
||||
all() -> [test_add,
|
||||
test_compare,
|
||||
test_usort,
|
||||
test_ranges,
|
||||
test_diff,
|
||||
test_foldl].
|
||||
|
||||
test_add(_Config) ->
|
||||
?assertEqual(1, add(0, 1)),
|
||||
%% "Addition of a value outside the range
|
||||
%% [0 .. (2^(SERIAL_BITS - 1) - 1)] is undefined."
|
||||
MaxAddend = round(math:pow(2, 32 - 1) - 1),
|
||||
MinAddend = 0,
|
||||
?assertEqual(MaxAddend, add(0, MaxAddend)),
|
||||
?assertEqual(MinAddend, add(0, MinAddend)),
|
||||
?assertEqual(0, add(16#ffffffff, 1)),
|
||||
?assertEqual(1, add(16#ffffffff, 2)),
|
||||
AddendTooLarge = MaxAddend + 1,
|
||||
?assertExit({undefined_serial_addition, 0, AddendTooLarge},
|
||||
add(0, AddendTooLarge)),
|
||||
AddendTooSmall = MinAddend - 1,
|
||||
?assertExit({undefined_serial_addition, 0, AddendTooSmall},
|
||||
add(0, AddendTooSmall)).
|
||||
|
||||
test_compare(_Config) ->
|
||||
?assertEqual(equal, compare(0, 0)),
|
||||
?assertEqual(equal, compare(16#ffffffff, 16#ffffffff)),
|
||||
?assertEqual(less, compare(0, 1)),
|
||||
?assertEqual(greater, compare(1, 0)),
|
||||
?assertEqual(less, compare(0, 2)),
|
||||
?assertEqual(less, compare(0, round(math:pow(2, 32 - 1)) - 1)),
|
||||
?assertExit({undefined_serial_comparison, 0, _},
|
||||
compare(0, round(math:pow(2, 32 - 1)))),
|
||||
?assertEqual(less, compare(16#ffffffff - 5, 30_000)),
|
||||
?assertEqual(greater, compare(1, 0)),
|
||||
?assertEqual(greater, compare(2147483647, 0)),
|
||||
?assertExit({undefined_serial_comparison, 2147483648, 0},
|
||||
compare(2147483648, 0)).
|
||||
|
||||
test_usort(_Config) ->
|
||||
?assertEqual([],
|
||||
usort([])),
|
||||
?assertEqual([3],
|
||||
usort([3])),
|
||||
?assertEqual([0],
|
||||
usort([0, 0])),
|
||||
?assertEqual([4294967000, 4294967293, 4294967294, 4294967295, 0, 3, 4],
|
||||
usort([3, 4294967295, 4294967295, 4294967293, 4294967000, 4294967294, 0, 4])).
|
||||
|
||||
test_ranges(_Config) ->
|
||||
?assertEqual([],
|
||||
ranges([])),
|
||||
?assertEqual([{0, 0}],
|
||||
ranges([0])),
|
||||
?assertEqual([{0, 1}],
|
||||
ranges([0, 1])),
|
||||
?assertEqual([{0, 1}],
|
||||
ranges([1, 0])),
|
||||
?assertEqual([{0, 0}, {2, 2}],
|
||||
ranges([0, 2])),
|
||||
?assertEqual([{0, 0}, {2, 2}],
|
||||
ranges([2, 0])),
|
||||
%% 2 ^ 32 - 1 = 4294967295
|
||||
?assertEqual([{4294967290, 4294967290}, {4294967295, 4294967295}],
|
||||
ranges([4294967290, 4294967295])),
|
||||
?assertEqual([{4294967290, 4294967290}, {4294967295, 4294967295}],
|
||||
ranges([4294967295, 4294967290])),
|
||||
?assertEqual([{4294967294, 4294967294}, {0, 0}],
|
||||
ranges([4294967294, 0])),
|
||||
?assertEqual([{4294967294, 4294967294}, {0, 0}],
|
||||
ranges([0, 4294967294])),
|
||||
?assertEqual([{4294967295, 0}],
|
||||
ranges([4294967295, 0])),
|
||||
?assertEqual([{4294967294, 1}, {3, 5}, {10, 10}, {18, 19}],
|
||||
ranges([4294967294, 4294967295, 0, 1, 3, 4, 5, 10, 18, 19])),
|
||||
?assertEqual([{4294967294, 1}, {3, 5}, {10, 10}, {18, 19}],
|
||||
ranges([1, 10, 4294967294, 0, 3, 4, 5, 19, 18, 4294967295])).
|
||||
|
||||
test_diff(_Config) ->
|
||||
?assertEqual(0, diff(0, 0)),
|
||||
?assertEqual(0, diff(1, 1)),
|
||||
?assertEqual(0, diff(16#ffffffff, 16#ffffffff)),
|
||||
?assertEqual(1, diff(1, 0)),
|
||||
?assertEqual(2, diff(1, 16#ffffffff)),
|
||||
?assertEqual(6, diff(0, 16#fffffffa)),
|
||||
?assertEqual(206, diff(200, 16#fffffffa)),
|
||||
?assertEqual(-2, diff(16#ffffffff, 1)),
|
||||
?assertExit({undefined_serial_diff, _, _},
|
||||
diff(0, 16#80000000)),
|
||||
?assertExit({undefined_serial_diff, _, _},
|
||||
diff(16#ffffffff, 16#7fffffff)).
|
||||
|
||||
test_foldl(_Config) ->
|
||||
?assertEqual(
|
||||
[16#ffffffff - 1, 16#ffffffff, 0, 1],
|
||||
foldl(fun(S, Acc) ->
|
||||
Acc ++ [S]
|
||||
end, [], 16#ffffffff - 1, 1)),
|
||||
|
||||
?assertEqual(
|
||||
[0],
|
||||
foldl(fun(S, Acc) ->
|
||||
Acc ++ [S]
|
||||
end, [], 0, 0)).
|
|
@ -797,11 +797,6 @@ handle_method_from_server1(#'basic.nack'{} = BasicNack, none,
|
|||
#state{confirm_handler = {CH, _Ref}} = State) ->
|
||||
CH ! BasicNack,
|
||||
{noreply, update_confirm_set(BasicNack, State)};
|
||||
|
||||
handle_method_from_server1(#'basic.credit_drained'{} = CreditDrained, none,
|
||||
#state{consumer = Consumer} = State) ->
|
||||
Consumer ! CreditDrained,
|
||||
{noreply, State};
|
||||
handle_method_from_server1(Method, none, State) ->
|
||||
{noreply, rpc_bottom_half(Method, State)};
|
||||
handle_method_from_server1(Method, Content, State) ->
|
||||
|
|
|
@ -176,10 +176,7 @@ handle_info({'DOWN', _MRef, process, Pid, _Info},
|
|||
_ -> {ok, State} %% unnamed consumer went down
|
||||
%% before receiving consume_ok
|
||||
end
|
||||
end;
|
||||
handle_info(#'basic.credit_drained'{} = Method, State) ->
|
||||
deliver_to_consumer_or_die(Method, Method, State),
|
||||
{ok, State}.
|
||||
end.
|
||||
|
||||
%% @private
|
||||
handle_call({register_default_consumer, Pid}, _From,
|
||||
|
@ -246,8 +243,7 @@ tag(#'basic.consume'{consumer_tag = Tag}) -> Tag;
|
|||
tag(#'basic.consume_ok'{consumer_tag = Tag}) -> Tag;
|
||||
tag(#'basic.cancel'{consumer_tag = Tag}) -> Tag;
|
||||
tag(#'basic.cancel_ok'{consumer_tag = Tag}) -> Tag;
|
||||
tag(#'basic.deliver'{consumer_tag = Tag}) -> Tag;
|
||||
tag(#'basic.credit_drained'{consumer_tag = Tag}) -> Tag.
|
||||
tag(#'basic.deliver'{consumer_tag = Tag}) -> Tag.
|
||||
|
||||
add_to_monitor_dict(Pid, Monitors) ->
|
||||
case maps:find(Pid, Monitors) of
|
||||
|
|
|
@ -9,8 +9,7 @@
|
|||
|
||||
-export([init_state/0, dest_prefixes/0, all_dest_prefixes/0]).
|
||||
-export([ensure_endpoint/4, ensure_endpoint/5, ensure_binding/3]).
|
||||
-export([parse_endpoint/1, parse_endpoint/2]).
|
||||
-export([parse_routing/1, dest_temp_queue/1]).
|
||||
-export([dest_temp_queue/1]).
|
||||
|
||||
-include("amqp_client.hrl").
|
||||
-include("rabbit_routing_prefixes.hrl").
|
||||
|
@ -26,50 +25,6 @@ all_dest_prefixes() -> [?TEMP_QUEUE_PREFIX | dest_prefixes()].
|
|||
|
||||
%% --------------------------------------------------------------------------
|
||||
|
||||
parse_endpoint(Destination) ->
|
||||
parse_endpoint(Destination, false).
|
||||
|
||||
parse_endpoint(undefined, AllowAnonymousQueue) ->
|
||||
parse_endpoint("/queue", AllowAnonymousQueue);
|
||||
|
||||
parse_endpoint(Destination, AllowAnonymousQueue) when is_binary(Destination) ->
|
||||
parse_endpoint(unicode:characters_to_list(Destination),
|
||||
AllowAnonymousQueue);
|
||||
parse_endpoint(Destination, AllowAnonymousQueue) when is_list(Destination) ->
|
||||
case re:split(Destination, "/", [{return, list}]) of
|
||||
[Name] ->
|
||||
{ok, {queue, unescape(Name)}};
|
||||
["", Type | Rest]
|
||||
when Type =:= "exchange" orelse Type =:= "queue" orelse
|
||||
Type =:= "topic" orelse Type =:= "temp-queue" ->
|
||||
parse_endpoint0(atomise(Type), Rest, AllowAnonymousQueue);
|
||||
["", "amq", "queue" | Rest] ->
|
||||
parse_endpoint0(amqqueue, Rest, AllowAnonymousQueue);
|
||||
["", "reply-queue" = Prefix | [_|_]] ->
|
||||
parse_endpoint0(reply_queue,
|
||||
[lists:nthtail(2 + length(Prefix), Destination)],
|
||||
AllowAnonymousQueue);
|
||||
_ ->
|
||||
{error, {unknown_destination, Destination}}
|
||||
end.
|
||||
|
||||
parse_endpoint0(exchange, ["" | _] = Rest, _) ->
|
||||
{error, {invalid_destination, exchange, to_url(Rest)}};
|
||||
parse_endpoint0(exchange, [Name], _) ->
|
||||
{ok, {exchange, {unescape(Name), undefined}}};
|
||||
parse_endpoint0(exchange, [Name, Pattern], _) ->
|
||||
{ok, {exchange, {unescape(Name), unescape(Pattern)}}};
|
||||
parse_endpoint0(queue, [], false) ->
|
||||
{error, {invalid_destination, queue, []}};
|
||||
parse_endpoint0(queue, [], true) ->
|
||||
{ok, {queue, undefined}};
|
||||
parse_endpoint0(Type, [[_|_]] = [Name], _) ->
|
||||
{ok, {Type, unescape(Name)}};
|
||||
parse_endpoint0(Type, Rest, _) ->
|
||||
{error, {invalid_destination, Type, to_url(Rest)}}.
|
||||
|
||||
%% --------------------------------------------------------------------------
|
||||
|
||||
ensure_endpoint(Dir, Channel, Endpoint, State) ->
|
||||
ensure_endpoint(Dir, Channel, Endpoint, [], State).
|
||||
|
||||
|
@ -140,16 +95,6 @@ ensure_binding(Queue, {Exchange, RoutingKey}, Channel) ->
|
|||
|
||||
%% --------------------------------------------------------------------------
|
||||
|
||||
parse_routing({exchange, {Name, undefined}}) ->
|
||||
{Name, ""};
|
||||
parse_routing({exchange, {Name, Pattern}}) ->
|
||||
{Name, Pattern};
|
||||
parse_routing({topic, Name}) ->
|
||||
{"amq.topic", Name};
|
||||
parse_routing({Type, Name})
|
||||
when Type =:= queue orelse Type =:= reply_queue orelse Type =:= amqqueue ->
|
||||
{"", Name}.
|
||||
|
||||
dest_temp_queue({temp_queue, Name}) -> Name;
|
||||
dest_temp_queue(_) -> none.
|
||||
|
||||
|
@ -206,17 +151,3 @@ queue_declare_method(#'queue.declare'{} = Method, Type, Params) ->
|
|||
_ ->
|
||||
Method2
|
||||
end.
|
||||
|
||||
%% --------------------------------------------------------------------------
|
||||
|
||||
to_url([]) -> [];
|
||||
to_url(Lol) -> "/" ++ string:join(Lol, "/").
|
||||
|
||||
atomise(Name) when is_list(Name) ->
|
||||
list_to_atom(re:replace(Name, "-", "_", [{return,list}, global])).
|
||||
|
||||
unescape(Str) -> unescape(Str, []).
|
||||
|
||||
unescape("%2F" ++ Str, Acc) -> unescape(Str, [$/ | Acc]);
|
||||
unescape([C | Str], Acc) -> unescape(Str, [C | Acc]);
|
||||
unescape([], Acc) -> lists:reverse(Acc).
|
||||
|
|
|
@ -1342,9 +1342,9 @@ channel_writer_death(Config) ->
|
|||
Ret = amqp_channel:call(Channel, QoS),
|
||||
throw({unexpected_success, Ret})
|
||||
catch
|
||||
exit:{{function_clause,
|
||||
[{rabbit_channel, check_user_id_header, _, _} | _]}, _}
|
||||
when ConnType =:= direct -> ok;
|
||||
exit:{{{badrecord, <<>>},
|
||||
[{rabbit_channel, _, _, _} | _]}, _}
|
||||
when ConnType =:= direct -> ok;
|
||||
|
||||
exit:{{infrastructure_died, {unknown_properties_record, <<>>}}, _}
|
||||
when ConnType =:= network -> ok
|
||||
|
|
|
@ -322,9 +322,9 @@ route_destination_parsing(_Config) ->
|
|||
ok.
|
||||
|
||||
parse_dest(Destination, Params) ->
|
||||
rabbit_routing_util:parse_endpoint(Destination, Params).
|
||||
rabbit_routing_parser:parse_endpoint(Destination, Params).
|
||||
parse_dest(Destination) ->
|
||||
rabbit_routing_util:parse_endpoint(Destination).
|
||||
rabbit_routing_parser:parse_endpoint(Destination).
|
||||
|
||||
%% -------------------------------------------------------------------
|
||||
%% Topic variable map
|
||||
|
|
|
@ -40,3 +40,6 @@ callgraph.dot*
|
|||
PACKAGES/*
|
||||
|
||||
rabbit-rabbitmq-deps.mk
|
||||
|
||||
[Bb]in/
|
||||
[Oo]bj/
|
||||
|
|
|
@ -59,6 +59,8 @@ _APP_ENV = """[
|
|||
{default_user_tags, [administrator]},
|
||||
{default_vhost, <<"/">>},
|
||||
{default_permissions, [<<".*">>, <<".*">>, <<".*">>]},
|
||||
{amqp1_0_default_user, <<"guest">>},
|
||||
{amqp1_0_default_vhost, <<"/">>},
|
||||
{loopback_users, [<<"guest">>]},
|
||||
{password_hashing_module, rabbit_password_hashing_sha256},
|
||||
{server_properties, []},
|
||||
|
@ -234,6 +236,9 @@ rabbitmq_app(
|
|||
|
||||
xref(
|
||||
name = "xref",
|
||||
additional_libs = [
|
||||
"//deps/rabbitmq_cli:erlang_app", # keep
|
||||
],
|
||||
target = ":erlang_app",
|
||||
)
|
||||
|
||||
|
@ -245,8 +250,10 @@ plt(
|
|||
],
|
||||
for_target = ":erlang_app",
|
||||
ignore_warnings = True,
|
||||
libs = ["//deps/rabbitmq_cli:elixir"], # keep
|
||||
plt = "//:base_plt",
|
||||
deps = [
|
||||
"//deps/rabbitmq_cli:erlang_app", # keep
|
||||
"@looking_glass//:erlang_app", # keep
|
||||
],
|
||||
)
|
||||
|
@ -273,6 +280,7 @@ rabbitmq_home(
|
|||
plugins = [
|
||||
":test_erlang_app",
|
||||
"//deps/rabbitmq_ct_client_helpers:erlang_app",
|
||||
"//deps/rabbitmq_amqp1_0:erlang_app",
|
||||
"@inet_tcp_proxy_dist//:erlang_app",
|
||||
"@meck//:erlang_app",
|
||||
],
|
||||
|
@ -1236,6 +1244,52 @@ rabbitmq_integration_suite(
|
|||
],
|
||||
)
|
||||
|
||||
rabbitmq_integration_suite(
|
||||
name = "amqp_client_SUITE",
|
||||
size = "large",
|
||||
additional_beam = [
|
||||
":test_event_recorder_beam",
|
||||
],
|
||||
shard_count = 3,
|
||||
runtime_deps = [
|
||||
"//deps/amqp10_client:erlang_app",
|
||||
],
|
||||
)
|
||||
|
||||
rabbitmq_integration_suite(
|
||||
name = "amqp_proxy_protocol_SUITE",
|
||||
size = "medium",
|
||||
)
|
||||
|
||||
rabbitmq_integration_suite(
|
||||
name = "amqp_system_SUITE",
|
||||
flaky = True,
|
||||
shard_count = 2,
|
||||
tags = [
|
||||
"dotnet",
|
||||
],
|
||||
test_env = {
|
||||
"TMPDIR": "$TEST_TMPDIR",
|
||||
},
|
||||
)
|
||||
|
||||
rabbitmq_integration_suite(
|
||||
name = "amqp_auth_SUITE",
|
||||
additional_beam = [
|
||||
":test_event_recorder_beam",
|
||||
],
|
||||
runtime_deps = [
|
||||
"//deps/amqp10_client:erlang_app",
|
||||
],
|
||||
)
|
||||
|
||||
rabbitmq_integration_suite(
|
||||
name = "amqp_credit_api_v2_SUITE",
|
||||
runtime_deps = [
|
||||
"//deps/amqp10_client:erlang_app",
|
||||
],
|
||||
)
|
||||
|
||||
assert_suites()
|
||||
|
||||
filegroup(
|
||||
|
@ -1332,6 +1386,7 @@ eunit(
|
|||
":test_test_util_beam",
|
||||
":test_test_rabbit_event_handler_beam",
|
||||
":test_clustering_utils_beam",
|
||||
":test_event_recorder_beam",
|
||||
],
|
||||
target = ":test_erlang_app",
|
||||
test_env = {
|
||||
|
|
|
@ -39,6 +39,8 @@ define PROJECT_ENV
|
|||
{default_user_tags, [administrator]},
|
||||
{default_vhost, <<"/">>},
|
||||
{default_permissions, [<<".*">>, <<".*">>, <<".*">>]},
|
||||
{amqp1_0_default_user, <<"guest">>},
|
||||
{amqp1_0_default_vhost, <<"/">>},
|
||||
{loopback_users, [<<"guest">>]},
|
||||
{password_hashing_module, rabbit_password_hashing_sha256},
|
||||
{server_properties, []},
|
||||
|
@ -133,8 +135,8 @@ endef
|
|||
LOCAL_DEPS = sasl os_mon inets compiler public_key crypto ssl syntax_tools xmerl
|
||||
|
||||
BUILD_DEPS = rabbitmq_cli
|
||||
DEPS = ranch rabbit_common rabbitmq_prelaunch ra sysmon_handler stdout_formatter recon redbug observer_cli osiris amqp10_common syslog systemd seshat khepri khepri_mnesia_migration
|
||||
TEST_DEPS = rabbitmq_ct_helpers rabbitmq_ct_client_helpers amqp_client meck proper
|
||||
DEPS = ranch rabbit_common amqp10_common rabbitmq_prelaunch ra sysmon_handler stdout_formatter recon redbug observer_cli osiris syslog systemd seshat khepri khepri_mnesia_migration
|
||||
TEST_DEPS = rabbitmq_ct_helpers rabbitmq_ct_client_helpers meck proper amqp_client amqp10_client rabbitmq_amqp1_0
|
||||
|
||||
PLT_APPS += mnesia
|
||||
|
||||
|
|
|
@ -24,7 +24,7 @@ def all_beam_files(name = "all_beam_files"):
|
|||
app_name = "rabbit",
|
||||
dest = "ebin",
|
||||
erlc_opts = "//:erlc_opts",
|
||||
deps = ["//deps/rabbit_common:erlang_app"],
|
||||
deps = ["//deps/amqp10_common:erlang_app", "//deps/rabbit_common:erlang_app"],
|
||||
)
|
||||
erlang_bytecode(
|
||||
name = "other_beam",
|
||||
|
@ -46,6 +46,12 @@ def all_beam_files(name = "all_beam_files"):
|
|||
"src/rabbit.erl",
|
||||
"src/rabbit_access_control.erl",
|
||||
"src/rabbit_alarm.erl",
|
||||
"src/rabbit_amqp1_0.erl",
|
||||
"src/rabbit_amqp_reader.erl",
|
||||
"src/rabbit_amqp_session.erl",
|
||||
"src/rabbit_amqp_session_sup.erl",
|
||||
"src/rabbit_amqp_util.erl",
|
||||
"src/rabbit_amqp_writer.erl",
|
||||
"src/rabbit_amqqueue.erl",
|
||||
"src/rabbit_amqqueue_control.erl",
|
||||
"src/rabbit_amqqueue_process.erl",
|
||||
|
@ -286,7 +292,7 @@ def all_test_beam_files(name = "all_test_beam_files"):
|
|||
app_name = "rabbit",
|
||||
dest = "test",
|
||||
erlc_opts = "//:test_erlc_opts",
|
||||
deps = ["//deps/rabbit_common:erlang_app"],
|
||||
deps = ["//deps/amqp10_common:erlang_app", "//deps/rabbit_common:erlang_app"],
|
||||
)
|
||||
erlang_bytecode(
|
||||
name = "test_other_beam",
|
||||
|
@ -309,6 +315,12 @@ def all_test_beam_files(name = "all_test_beam_files"):
|
|||
"src/rabbit.erl",
|
||||
"src/rabbit_access_control.erl",
|
||||
"src/rabbit_alarm.erl",
|
||||
"src/rabbit_amqp1_0.erl",
|
||||
"src/rabbit_amqp_reader.erl",
|
||||
"src/rabbit_amqp_session.erl",
|
||||
"src/rabbit_amqp_session_sup.erl",
|
||||
"src/rabbit_amqp_util.erl",
|
||||
"src/rabbit_amqp_writer.erl",
|
||||
"src/rabbit_amqqueue.erl",
|
||||
"src/rabbit_amqqueue_control.erl",
|
||||
"src/rabbit_amqqueue_process.erl",
|
||||
|
@ -541,6 +553,7 @@ def all_srcs(name = "all_srcs"):
|
|||
"include/gm_specs.hrl",
|
||||
"include/internal_user.hrl",
|
||||
"include/mc.hrl",
|
||||
"include/rabbit_amqp.hrl",
|
||||
"include/rabbit_global_counters.hrl",
|
||||
"include/vhost.hrl",
|
||||
"include/vhost_v2.hrl",
|
||||
|
@ -586,6 +599,12 @@ def all_srcs(name = "all_srcs"):
|
|||
"src/rabbit.erl",
|
||||
"src/rabbit_access_control.erl",
|
||||
"src/rabbit_alarm.erl",
|
||||
"src/rabbit_amqp1_0.erl",
|
||||
"src/rabbit_amqp_reader.erl",
|
||||
"src/rabbit_amqp_session.erl",
|
||||
"src/rabbit_amqp_session_sup.erl",
|
||||
"src/rabbit_amqp_util.erl",
|
||||
"src/rabbit_amqp_writer.erl",
|
||||
"src/rabbit_amqqueue.erl",
|
||||
"src/rabbit_amqqueue_control.erl",
|
||||
"src/rabbit_amqqueue_process.erl",
|
||||
|
@ -2147,3 +2166,57 @@ def test_suite_beam_files(name = "test_suite_beam_files"):
|
|||
erlc_opts = "//:test_erlc_opts",
|
||||
deps = ["//deps/amqp_client:erlang_app"],
|
||||
)
|
||||
|
||||
erlang_bytecode(
|
||||
name = "test_event_recorder_beam",
|
||||
testonly = True,
|
||||
srcs = ["test/event_recorder.erl"],
|
||||
outs = ["test/event_recorder.beam"],
|
||||
app_name = "rabbit",
|
||||
erlc_opts = "//:test_erlc_opts",
|
||||
deps = ["//deps/rabbit_common:erlang_app"],
|
||||
)
|
||||
erlang_bytecode(
|
||||
name = "amqp_auth_SUITE_beam_files",
|
||||
testonly = True,
|
||||
srcs = ["test/amqp_auth_SUITE.erl"],
|
||||
outs = ["test/amqp_auth_SUITE.beam"],
|
||||
app_name = "rabbit",
|
||||
erlc_opts = "//:test_erlc_opts",
|
||||
deps = ["//deps/amqp10_common:erlang_app", "//deps/amqp_client:erlang_app"],
|
||||
)
|
||||
erlang_bytecode(
|
||||
name = "amqp_client_SUITE_beam_files",
|
||||
testonly = True,
|
||||
srcs = ["test/amqp_client_SUITE.erl"],
|
||||
outs = ["test/amqp_client_SUITE.beam"],
|
||||
app_name = "rabbit",
|
||||
erlc_opts = "//:test_erlc_opts",
|
||||
deps = ["//deps/amqp10_common:erlang_app", "//deps/amqp_client:erlang_app"],
|
||||
)
|
||||
erlang_bytecode(
|
||||
name = "amqp_credit_api_v2_SUITE_beam_files",
|
||||
testonly = True,
|
||||
srcs = ["test/amqp_credit_api_v2_SUITE.erl"],
|
||||
outs = ["test/amqp_credit_api_v2_SUITE.beam"],
|
||||
app_name = "rabbit",
|
||||
erlc_opts = "//:test_erlc_opts",
|
||||
deps = ["//deps/amqp_client:erlang_app"],
|
||||
)
|
||||
erlang_bytecode(
|
||||
name = "amqp_proxy_protocol_SUITE_beam_files",
|
||||
testonly = True,
|
||||
srcs = ["test/amqp_proxy_protocol_SUITE.erl"],
|
||||
outs = ["test/amqp_proxy_protocol_SUITE.beam"],
|
||||
app_name = "rabbit",
|
||||
erlc_opts = "//:test_erlc_opts",
|
||||
)
|
||||
erlang_bytecode(
|
||||
name = "amqp_system_SUITE_beam_files",
|
||||
testonly = True,
|
||||
srcs = ["test/amqp_system_SUITE.erl"],
|
||||
outs = ["test/amqp_system_SUITE.beam"],
|
||||
app_name = "rabbit",
|
||||
erlc_opts = "//:test_erlc_opts",
|
||||
deps = ["//deps/rabbit_common:erlang_app"],
|
||||
)
|
||||
|
|
|
@@ -954,10 +954,6 @@
##
# amqp1_0.default_user = guest

## Enable protocol strict mode. See the README for more information.
##
# amqp1_0.protocol_strict_mode = false

## Logging settings.
##
## See https://rabbitmq.com/logging.html for details.
@ -0,0 +1,74 @@
|
|||
%%-define(debug, true).
|
||||
|
||||
-ifdef(debug).
|
||||
-define(DEBUG0(F), ?SAFE(rabbit_log:debug(F, []))).
|
||||
-define(DEBUG(F, A), ?SAFE(rabbit_log:debug(F, A))).
|
||||
-else.
|
||||
-define(DEBUG0(F), ok).
|
||||
-define(DEBUG(F, A), ok).
|
||||
-endif.
|
||||
|
||||
-define(pprint(F), rabbit_log:debug("~p~n",
|
||||
[amqp10_framing:pprint(F)])).
|
||||
|
||||
-define(SAFE(F),
|
||||
((fun() ->
|
||||
try F
|
||||
catch __T:__E:__ST ->
|
||||
rabbit_log:debug("~p:~p thrown debugging~n~p~n",
|
||||
[__T, __E, __ST])
|
||||
end
|
||||
end)())).
|
||||
|
||||
%% General consts
|
||||
|
||||
%% [2.8.19]
|
||||
-define(MIN_MAX_FRAME_1_0_SIZE, 512).
|
||||
|
||||
-define(SEND_ROLE, false).
|
||||
-define(RECV_ROLE, true).
|
||||
|
||||
%% for rabbit_event user_authentication_success and user_authentication_failure
|
||||
-define(AUTH_EVENT_KEYS,
|
||||
[name,
|
||||
host,
|
||||
port,
|
||||
peer_host,
|
||||
peer_port,
|
||||
protocol,
|
||||
auth_mechanism,
|
||||
ssl,
|
||||
ssl_protocol,
|
||||
ssl_key_exchange,
|
||||
ssl_cipher,
|
||||
ssl_hash,
|
||||
peer_cert_issuer,
|
||||
peer_cert_subject,
|
||||
peer_cert_validity]).
|
||||
|
||||
-define(ITEMS,
|
||||
[pid,
|
||||
frame_max,
|
||||
timeout,
|
||||
vhost,
|
||||
user,
|
||||
node
|
||||
] ++ ?AUTH_EVENT_KEYS).
|
||||
|
||||
-define(INFO_ITEMS,
|
||||
[connection_state,
|
||||
recv_oct,
|
||||
recv_cnt,
|
||||
send_oct,
|
||||
send_cnt
|
||||
] ++ ?ITEMS).
|
||||
|
||||
%% for rabbit_event connection_created
|
||||
-define(CONNECTION_EVENT_KEYS,
|
||||
[type,
|
||||
client_properties,
|
||||
connected_at,
|
||||
channel_max
|
||||
] ++ ?ITEMS).
|
||||
|
||||
-include_lib("amqp10_common/include/amqp10_framing.hrl").
|
|
@ -1,5 +1,4 @@
|
|||
-define(NUM_PROTOCOL_COUNTERS, 8).
|
||||
-define(NUM_PROTOCOL_QUEUE_TYPE_COUNTERS, 8).
|
||||
|
||||
%% Dead Letter counters:
|
||||
%%
|
||||
|
|
|
@@ -2586,6 +2586,33 @@ end}.
    end
}.

% ===============================
% AMQP 1.0
% ===============================

%% Connections that skip SASL layer or use SASL mechanism ANONYMOUS will connect as this account.
%% Setting this to a username will allow clients to connect without authenticating.
%% For production environments, set this value to 'none'.
{mapping, "amqp1_0.default_user", "rabbit.amqp1_0_default_user",
    [{datatype, [{enum, [none]}, string]}]}.

{mapping, "amqp1_0.default_vhost", "rabbit.amqp1_0_default_vhost",
    [{datatype, string}]}.

{translation, "rabbit.amqp1_0_default_user",
 fun(Conf) ->
     case cuttlefish:conf_get("amqp1_0.default_user", Conf) of
         none -> none;
         User -> list_to_binary(User)
     end
 end}.

{translation , "rabbit.amqp1_0_default_vhost",
 fun(Conf) ->
     list_to_binary(cuttlefish:conf_get("amqp1_0.default_vhost", Conf))
 end}.


% ===============================
% Validators
% ===============================
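With these cuttlefish mappings in place, the two settings (now owned by the `rabbit` app rather than the old plugin) can be configured in `rabbitmq.conf`. A minimal sketch; the values shown are examples, not the defaults shipped by this commit:

```
# Connections that skip the SASL layer or use SASL mechanism ANONYMOUS
# connect as this account. Recommended for production: disable it.
amqp1_0.default_user = none

# Virtual host used by such connections when a default user is configured.
amqp1_0.default_vhost = /
```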
@ -19,6 +19,7 @@
|
|||
is_persistent/1,
|
||||
ttl/1,
|
||||
correlation_id/1,
|
||||
user_id/1,
|
||||
message_id/1,
|
||||
timestamp/1,
|
||||
priority/1,
|
||||
|
@ -280,6 +281,15 @@ correlation_id(#?MODULE{protocol = Proto,
|
|||
correlation_id(BasicMsg) ->
|
||||
mc_compat:correlation_id(BasicMsg).
|
||||
|
||||
-spec user_id(state()) ->
|
||||
{binary, rabbit_types:username()} |
|
||||
undefined.
|
||||
user_id(#?MODULE{protocol = Proto,
|
||||
data = Data}) ->
|
||||
Proto:property(?FUNCTION_NAME, Data);
|
||||
user_id(BasicMsg) ->
|
||||
mc_compat:user_id(BasicMsg).
|
||||
|
||||
-spec message_id(state()) ->
|
||||
{uuid, binary()} |
|
||||
{utf8, binary()} |
|
||||
|
|
|
@@ -58,14 +58,32 @@
         message_section/0
        ]).

%% mc implementation
%% TODO
%% Up to 3.13 the parsed AMQP 1.0 message is never stored on disk.
%% We want that to hold true for 4.0 as well to save disk space and disk I/O.
%%
%% As the essential annotations, durable, priority, ttl and delivery_count
%% is all we are interested in it isn't necessary to keep hold of the
%% incoming AMQP header inside the state
%%
%% Probably prepare(store, Msg) should serialize the message.
%% mc:prepare(store, Msg) should also be called from rabbit_stream_queue after converting to mc_amqp.
%%
%% When we received the message via AMQP 1.0, our mc_amqp:state() should ideally store a binary of each section.
%% This way, prepare(store, Msg) wouldn't need to serialize anything because there shouldn't be any changes
%% in the sections between receiving via AMQP 1.0 and storing the message in queues.
%%
%% Also, we don't need to parse each section.
%% For example, apart from validation we wouldn't need to parse application properties at all - unless requested by the headers exchange.
%% Ideally the parser could have a validate mode, that validated the section(s) but didn't build up an erlang term representation of the data.
%% Such a validation mode could be used for application properties. Message annotations might not need to be parsed either.
%% So, message annotations and application properties should be parsed lazily, only if needed.
%%
%% Upon sending the message to clients, when converting from AMQP 1.0, the serialized message needs to be parsed into sections.
init(Sections) when is_list(Sections) ->
    Msg = decode(Sections, #msg{}),
    init(Msg);
init(#msg{} = Msg) ->
    %% TODO: as the essential annotations, durable, priority, ttl and delivery_count
    %% is all we are interested in it isn't necessary to keep hold of the
    %% incoming AMQP header inside the state
    Anns = essential_properties(Msg),
    {Msg, Anns}.
@ -95,6 +113,8 @@ property(correlation_id, #msg{properties = #'v1_0.properties'{correlation_id = C
|
|||
Corr;
|
||||
property(message_id, #msg{properties = #'v1_0.properties'{message_id = MsgId}}) ->
|
||||
MsgId;
|
||||
property(user_id, #msg{properties = #'v1_0.properties'{user_id = UserId}}) ->
|
||||
UserId;
|
||||
property(_Prop, #msg{}) ->
|
||||
undefined.
|
||||
|
||||
|
@ -134,7 +154,7 @@ get_property(timestamp, Msg) ->
|
|||
end;
|
||||
get_property(ttl, Msg) ->
|
||||
case Msg of
|
||||
#msg{header = #'v1_0.header'{ttl = {_, Ttl}}} ->
|
||||
#msg{header = #'v1_0.header'{ttl = {uint, Ttl}}} ->
|
||||
Ttl;
|
||||
_ ->
|
||||
%% fallback in case the source protocol was AMQP 0.9.1
|
||||
|
@ -158,6 +178,13 @@ get_property(priority, Msg) ->
|
|||
_ ->
|
||||
undefined
|
||||
end
|
||||
end;
|
||||
get_property(subject, Msg) ->
|
||||
case Msg of
|
||||
#msg{properties = #'v1_0.properties'{subject = {utf8, Subject}}} ->
|
||||
Subject;
|
||||
_ ->
|
||||
undefined
|
||||
end.
|
||||
|
||||
convert_to(?MODULE, Msg, _Env) ->
|
||||
|
@ -170,10 +197,19 @@ convert_to(TargetProto, Msg, Env) ->
|
|||
serialize(Sections) ->
|
||||
encode_bin(Sections).
|
||||
|
||||
protocol_state(Msg, Anns) ->
|
||||
protocol_state(Msg0 = #msg{header = Header0}, Anns) ->
|
||||
Redelivered = maps:get(redelivered, Anns, false),
|
||||
FirstAcquirer = not Redelivered,
|
||||
Header = case Header0 of
|
||||
undefined ->
|
||||
#'v1_0.header'{first_acquirer = FirstAcquirer};
|
||||
#'v1_0.header'{} ->
|
||||
Header0#'v1_0.header'{first_acquirer = FirstAcquirer}
|
||||
end,
|
||||
Msg = Msg0#msg{header = Header},
|
||||
|
||||
#{?ANN_EXCHANGE := Exchange,
|
||||
?ANN_ROUTING_KEYS := [RKey | _]} = Anns,
|
||||
|
||||
%% any x-* annotations get added as message annotations
|
||||
AnnsToAdd = maps:filter(fun (Key, _) -> mc_util:is_x_header(Key) end, Anns),
|
||||
|
||||
|
@ -394,6 +430,10 @@ essential_properties(#msg{message_annotations = MA} = Msg) ->
|
|||
Priority = get_property(priority, Msg),
|
||||
Timestamp = get_property(timestamp, Msg),
|
||||
Ttl = get_property(ttl, Msg),
|
||||
RoutingKeys = case get_property(subject, Msg) of
|
||||
undefined -> undefined;
|
||||
Subject -> [Subject]
|
||||
end,
|
||||
|
||||
Deaths = case message_annotation(<<"x-death">>, Msg, undefined) of
|
||||
{list, DeathMaps} ->
|
||||
|
@ -418,8 +458,10 @@ essential_properties(#msg{message_annotations = MA} = Msg) ->
|
|||
maps_put_truthy(
|
||||
ttl, Ttl,
|
||||
maps_put_truthy(
|
||||
deaths, Deaths,
|
||||
#{}))))),
|
||||
?ANN_ROUTING_KEYS, RoutingKeys,
|
||||
maps_put_truthy(
|
||||
deaths, Deaths,
|
||||
#{})))))),
|
||||
case MA of
|
||||
[] ->
|
||||
Anns;
|
||||
|
|
|
@ -25,7 +25,8 @@
|
|||
message/3,
|
||||
message/4,
|
||||
message/5,
|
||||
from_basic_message/1
|
||||
from_basic_message/1,
|
||||
to_091/2
|
||||
]).
|
||||
|
||||
-import(rabbit_misc,
|
||||
|
|
|
@ -14,6 +14,7 @@
|
|||
is_persistent/1,
|
||||
ttl/1,
|
||||
correlation_id/1,
|
||||
user_id/1,
|
||||
message_id/1,
|
||||
timestamp/1,
|
||||
priority/1,
|
||||
|
@ -106,6 +107,9 @@ timestamp(#basic_message{content = Content}) ->
|
|||
priority(#basic_message{content = Content}) ->
|
||||
get_property(?FUNCTION_NAME, Content).
|
||||
|
||||
user_id(#basic_message{content = Content}) ->
|
||||
get_property(?FUNCTION_NAME, Content).
|
||||
|
||||
correlation_id(#basic_message{content = Content}) ->
|
||||
case get_property(?FUNCTION_NAME, Content) of
|
||||
undefined ->
|
||||
|
@ -384,6 +388,13 @@ get_property(P, #content{properties = none} = Content) ->
|
|||
get_property(durable,
|
||||
#content{properties = #'P_basic'{delivery_mode = Mode}}) ->
|
||||
Mode == 2;
|
||||
get_property(user_id,
|
||||
#content{properties = #'P_basic'{user_id = UserId}}) ->
|
||||
if UserId =:= undefined ->
|
||||
undefined;
|
||||
is_binary(UserId) ->
|
||||
{binary, UserId}
|
||||
end;
|
||||
get_property(ttl, #content{properties = Props}) ->
|
||||
{ok, MsgTTL} = rabbit_basic:parse_expiration(Props),
|
||||
MsgTTL;
|
||||
|
|
|
@ -36,7 +36,7 @@
|
|||
|
||||
%%---------------------------------------------------------------------------
|
||||
%% Boot steps.
|
||||
-export([maybe_insert_default_data/0, boot_delegate/0, recover/0]).
|
||||
-export([maybe_insert_default_data/0, boot_delegate/0, recover/0, pg_local/0]).
|
||||
|
||||
%% for tests
|
||||
-export([validate_msg_store_io_batch_size_and_credit_disc_bound/2]).
|
||||
|
@ -267,6 +267,12 @@
|
|||
{mfa, {logger, debug, ["'networking' boot step skipped and moved to end of startup", [], #{domain => ?RMQLOG_DOMAIN_GLOBAL}]}},
|
||||
{requires, notify_cluster}]}).
|
||||
|
||||
-rabbit_boot_step({pg_local,
|
||||
[{description, "local-only pg scope"},
|
||||
{mfa, {rabbit, pg_local, []}},
|
||||
{requires, kernel_ready},
|
||||
{enables, core_initialized}]}).
|
||||
|
||||
%%---------------------------------------------------------------------------
|
||||
|
||||
-include_lib("rabbit_common/include/rabbit_framing.hrl").
|
||||
|
@ -752,7 +758,7 @@ status() ->
|
|||
true ->
|
||||
[{virtual_host_count, rabbit_vhost:count()},
|
||||
{connection_count,
|
||||
length(rabbit_networking:connections_local()) +
|
||||
length(rabbit_networking:local_connections()) +
|
||||
length(rabbit_networking:local_non_amqp_connections())},
|
||||
{queue_count, total_queue_count()}];
|
||||
false ->
|
||||
|
@ -1098,6 +1104,9 @@ recover() ->
|
|||
ok = rabbit_vhost:recover(),
|
||||
ok.
|
||||
|
||||
pg_local() ->
|
||||
rabbit_sup:start_child(pg, [node()]).
|
||||
|
||||
-spec maybe_insert_default_data() -> 'ok'.
|
||||
|
||||
maybe_insert_default_data() ->
|
||||
|
@ -1690,7 +1699,19 @@ persist_static_configuration() ->
|
|||
classic_queue_store_v2_max_cache_size,
|
||||
classic_queue_store_v2_check_crc32,
|
||||
incoming_message_interceptors
|
||||
]).
|
||||
]),
|
||||
|
||||
%% Disallow 0 as it means unlimited:
|
||||
%% "If this field is zero or unset, there is no maximum
|
||||
%% size imposed by the link endpoint." [AMQP 1.0 §2.7.3]
|
||||
MaxMsgSize = case application:get_env(?MODULE, max_message_size) of
|
||||
{ok, Size}
|
||||
when is_integer(Size) andalso Size > 0 ->
|
||||
erlang:min(Size, ?MAX_MSG_SIZE);
|
||||
_ ->
|
||||
?MAX_MSG_SIZE
|
||||
end,
|
||||
ok = persistent_term:put(max_message_size, MaxMsgSize).
|
||||
|
||||
persist_static_configuration(Params) ->
|
||||
App = ?MODULE,
|
||||
|
|
|
@@ -10,7 +10,8 @@
-include_lib("rabbit_common/include/rabbit.hrl").

-export([check_user_pass_login/2, check_user_login/2, check_user_loopback/2,
         check_vhost_access/4, check_resource_access/4, check_topic_access/4]).
         check_vhost_access/4, check_resource_access/4, check_topic_access/4,
         check_user_id/2]).

-export([permission_cache_can_expire/1, update_state/2, expiry_timestamp/1]).

@@ -222,6 +223,31 @@ check_access(Fun, Module, ErrStr, ErrArgs, ErrName) ->
            rabbit_misc:protocol_error(ErrName, FullErrStr, FullErrArgs)
    end.

-spec check_user_id(mc:state(), rabbit_types:user()) ->
    ok | {refused, string(), [term()]}.
check_user_id(Message, ActualUser) ->
    case mc:user_id(Message) of
        undefined ->
            ok;
        {binary, ClaimedUserName} ->
            check_user_id0(ClaimedUserName, ActualUser)
    end.

check_user_id0(Username, #user{username = Username}) ->
    ok;
check_user_id0(_, #user{authz_backends = [{rabbit_auth_backend_dummy, _}]}) ->
    ok;
check_user_id0(ClaimedUserName, #user{username = ActualUserName,
                                      tags = Tags}) ->
    case lists:member(impersonator, Tags) of
        true ->
            ok;
        false ->
            {refused,
             "user_id property set to '~ts' but authenticated user was '~ts'",
             [ClaimedUserName, ActualUserName]}
    end.

-spec update_state(User :: rabbit_types:user(), NewState :: term()) ->
    {'ok', rabbit_types:auth_user()} |
    {'refused', string()} |
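For context, the value that `check_user_id/2` validates is the bare AMQP 1.0 `user-id` message property supplied by the publisher. A hedged client-side sketch using the `amqp10_client`/`amqp10_msg` API from this repository (the `Sender` link and its target queue are assumed to be set up as in the test suite above):

```erlang
%% The claimed user-id must match the authenticated user,
%% unless that user carries the 'impersonator' tag.
Msg0 = amqp10_msg:new(<<"tag-1">>, <<"payload">>, true),
Msg  = amqp10_msg:set_properties(#{user_id => <<"guest">>}, Msg0),
ok   = amqp10_client:send_msg(Sender, Msg).
```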
@@ -50,7 +50,7 @@
-type resource_alarm() :: {resource_limit, resource_alarm_source(), node()}.
-type alarm() :: local_alarm() | resource_alarm().
-type resource_alert() :: {WasAlarmSetForNode :: boolean(),
                           IsThereAnyAlarmsWithSameSourceInTheCluster :: boolean(),
                           IsThereAnyAlarmWithSameSourceInTheCluster :: boolean(),
                           NodeForWhichAlarmWasSetOrCleared :: node()}.

%%----------------------------------------------------------------------------

@@ -6,11 +6,16 @@
%%
-module(rabbit_amqp1_0).

-define(PROCESS_GROUP_NAME, rabbit_amqp10_connections).

-export([list_local/0,
         register_connection/1]).

%% Below 2 functions are deprecated.
%% They could be called in 3.13 / 4.0 mixed version clusters by the old 3.13 CLI command
%% rabbitmqctl list_amqp10_connections
-export([emit_connection_info_local/3,
         emit_connection_info_all/4,
         list/0,
         register_connection/1,
         unregister_connection/1]).
         emit_connection_info_all/4]).

emit_connection_info_all(Nodes, Items, Ref, AggregatorPid) ->
    Pids = [spawn_link(Node, rabbit_amqp1_0, emit_connection_info_local,

@@ -20,21 +25,19 @@ emit_connection_info_all(Nodes, Items, Ref, AggregatorPid) ->
    ok.

emit_connection_info_local(Items, Ref, AggregatorPid) ->
    ConnectionPids = list_local(),
    rabbit_control_misc:emitting_map_with_exit_handler(
      AggregatorPid, Ref,
      AggregatorPid,
      Ref,
      fun(Pid) ->
              rabbit_amqp1_0_reader:info(Pid, Items)
              rabbit_amqp_reader:info(Pid, Items)
      end,
      list()).
      ConnectionPids).

-spec list() -> [pid()].
list() ->
    pg_local:get_members(rabbit_amqp10_connections).
-spec list_local() -> [pid()].
list_local() ->
    pg:get_local_members(node(), ?PROCESS_GROUP_NAME).

-spec register_connection(pid()) -> ok.
register_connection(Pid) ->
    pg_local:join(rabbit_amqp10_connections, Pid).
-spec unregister_connection(pid()) -> ok.
unregister_connection(Pid) ->
    pg_local:leave(rabbit_amqp10_connections, Pid).
    ok = pg:join(node(), ?PROCESS_GROUP_NAME, Pid).
(Two file diffs not shown: "File diff suppressed because it is too large".)
@@ -0,0 +1,39 @@
%% This Source Code Form is subject to the terms of the Mozilla Public
%% License, v. 2.0. If a copy of the MPL was not distributed with this
%% file, You can obtain one at https://mozilla.org/MPL/2.0/.
%%
%% Copyright (c) 2007-2023 Broadcom. All Rights Reserved. The term “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. All rights reserved.
%%

-module(rabbit_amqp_session_sup).
-behaviour(supervisor).

-include_lib("rabbit_common/include/rabbit.hrl").

%% client API
-export([start_link/1,
         start_session/2]).

%% supervisor callback
-export([init/1]).

-spec start_link(Reader :: pid()) ->
    supervisor:startlink_ret().
start_link(ReaderPid) ->
    supervisor:start_link(?MODULE, ReaderPid).

init(ReaderPid) ->
    SupFlags = #{strategy => simple_one_for_one,
                 intensity => 0,
                 period => 1},
    ChildSpec = #{id => amqp1_0_session,
                  start => {rabbit_amqp_session, start_link, [ReaderPid]},
                  restart => temporary,
                  shutdown => ?WORKER_WAIT,
                  type => worker},
    {ok, {SupFlags, [ChildSpec]}}.

-spec start_session(pid(), list()) ->
    supervisor:startchild_ret().
start_session(SessionSupPid, Args) ->
    supervisor:start_child(SessionSupPid, Args).
@@ -0,0 +1,19 @@
%% This Source Code Form is subject to the terms of the Mozilla Public
%% License, v. 2.0. If a copy of the MPL was not distributed with this
%% file, You can obtain one at https://mozilla.org/MPL/2.0/.
%%
%% Copyright (c) 2007-2023 Broadcom. All Rights Reserved. The term “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. All rights reserved.
%%

-module(rabbit_amqp_util).
-include("rabbit_amqp.hrl").

-export([protocol_error/3]).

-spec protocol_error(term(), io:format(), [term()]) ->
    no_return().
protocol_error(Condition, Msg, Args) ->
    Description = list_to_binary(lists:flatten(io_lib:format(Msg, Args))),
    Reason = #'v1_0.error'{condition = Condition,
                           description = {utf8, Description}},
    exit(Reason).
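A hedged usage sketch of `protocol_error/3`: the condition is an AMQP 1.0 error condition symbol and the remaining arguments are an `io_lib` format string plus its data. The call site, `Action` and `Reason` variables below are purely illustrative, not taken from this commit:

```erlang
%% Terminates the caller with an AMQP 1.0 error whose condition and
%% description are carried back to the client in the resulting frame.
rabbit_amqp_util:protocol_error(
  {symbol, <<"amqp:internal-error">>},
  "failed to ~ts: ~tp",
  [Action, Reason]).
```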
@@ -0,0 +1,218 @@
%% This Source Code Form is subject to the terms of the Mozilla Public
%% License, v. 2.0. If a copy of the MPL was not distributed with this
%% file, You can obtain one at https://mozilla.org/MPL/2.0/.
%%
%% Copyright (c) 2007-2023 Broadcom. All Rights Reserved. The term “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. All rights reserved.
%%

-module(rabbit_amqp_writer).
-behaviour(gen_server).

-include("rabbit_amqp.hrl").

%% client API
-export([start_link/3,
         send_command/3,
         send_command/4,
         send_command_sync/3,
         send_command_and_notify/6,
         internal_send_command/3]).

%% gen_server callbacks
-export([init/1,
         handle_call/3,
         handle_cast/2,
         handle_info/2,
         format_status/1]).

-record(state, {
          sock :: rabbit_net:socket(),
          max_frame_size :: unlimited | pos_integer(),
          reader :: rabbit_types:connection(),
          pending :: iolist(),
          %% This field is just an optimisation to minimize the cost of erlang:iolist_size/1
          pending_size :: non_neg_integer()
         }).

-define(HIBERNATE_AFTER, 6_000).
-define(CALL_TIMEOUT, 300_000).
-define(AMQP_SASL_FRAME_TYPE, 1).

%%%%%%%%%%%%%%%%%%
%%% client API %%%
%%%%%%%%%%%%%%%%%%

-spec start_link (rabbit_net:socket(), non_neg_integer(), pid()) ->
    rabbit_types:ok(pid()).
start_link(Sock, MaxFrame, ReaderPid) ->
    Args = {Sock, MaxFrame, ReaderPid},
    Opts = [{hibernate_after, ?HIBERNATE_AFTER}],
    gen_server:start_link(?MODULE, Args, Opts).

-spec send_command(pid(),
                   rabbit_types:channel_number(),
                   rabbit_framing:amqp_method_record()) -> ok.
send_command(Writer, ChannelNum, MethodRecord) ->
    Request = {send_command, ChannelNum, MethodRecord},
    gen_server:cast(Writer, Request).

-spec send_command(pid(),
                   rabbit_types:channel_number(),
                   rabbit_framing:amqp_method_record(),
                   rabbit_types:content()) -> ok.
send_command(Writer, ChannelNum, MethodRecord, Content) ->
    Request = {send_command, ChannelNum, MethodRecord, Content},
    gen_server:cast(Writer, Request).

-spec send_command_sync(pid(),
                        rabbit_types:channel_number(),
                        rabbit_framing:amqp_method_record()) -> ok.
send_command_sync(Writer, ChannelNum, MethodRecord) ->
    Request = {send_command, ChannelNum, MethodRecord},
    gen_server:call(Writer, Request, ?CALL_TIMEOUT).

-spec send_command_and_notify(pid(),
                              rabbit_types:channel_number(),
                              pid(),
                              pid(),
                              rabbit_framing:amqp_method_record(),
                              rabbit_types:content()) -> ok.
send_command_and_notify(Writer, ChannelNum, QueuePid, SessionPid, MethodRecord, Content) ->
    Request = {send_command_and_notify, ChannelNum, QueuePid, SessionPid, MethodRecord, Content},
    gen_server:cast(Writer, Request).

-spec internal_send_command(rabbit_net:socket(),
                            rabbit_framing:amqp_method_record(),
                            amqp10_framing | rabbit_amqp_sasl) -> ok.
internal_send_command(Sock, MethodRecord, Protocol) ->
    Data = assemble_frame(0, MethodRecord, Protocol),
    ok = tcp_send(Sock, Data).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% gen_server callbacks %%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%

init({Sock, MaxFrame, ReaderPid}) ->
    State = #state{sock = Sock,
                   max_frame_size = MaxFrame,
                   reader = ReaderPid,
                   pending = [],
                   pending_size = 0},
    {ok, State}.

handle_cast({send_command, ChannelNum, MethodRecord}, State0) ->
    State = internal_send_command_async(ChannelNum, MethodRecord, State0),
    no_reply(State);
handle_cast({send_command, ChannelNum, MethodRecord, Content}, State0) ->
    State = internal_send_command_async(ChannelNum, MethodRecord, Content, State0),
    no_reply(State);
handle_cast({send_command_and_notify, ChannelNum, QueuePid, SessionPid, MethodRecord, Content}, State0) ->
    State = internal_send_command_async(ChannelNum, MethodRecord, Content, State0),
    rabbit_amqqueue:notify_sent(QueuePid, SessionPid),
    no_reply(State).

handle_call({send_command, ChannelNum, MethodRecord}, _From, State0) ->
    State1 = internal_send_command_async(ChannelNum, MethodRecord, State0),
    State = flush(State1),
    {reply, ok, State}.

handle_info(timeout, State0) ->
    State = flush(State0),
    {noreply, State};
handle_info({'DOWN', _MRef, process, QueuePid, _Reason}, State) ->
    rabbit_amqqueue:notify_sent_queue_down(QueuePid),
    no_reply(State).

format_status(Status) ->
    maps:update_with(
      state,
      fun(#state{sock = Sock,
                 max_frame_size = MaxFrame,
                 reader = Reader,
                 pending = Pending,
                 pending_size = PendingSize}) ->
              #{socket => Sock,
                max_frame_size => MaxFrame,
                reader => Reader,
                %% Below 2 fields should always have the same value.
                pending => iolist_size(Pending),
                pending_size => PendingSize}
      end,
      Status).

%%%%%%%%%%%%%%%
%%% Helpers %%%
%%%%%%%%%%%%%%%

no_reply(State) ->
    {noreply, State, 0}.

internal_send_command_async(Channel, MethodRecord,
                            State = #state{pending = Pending,
                                           pending_size = PendingSize}) ->
    Frame = assemble_frame(Channel, MethodRecord),
    maybe_flush(State#state{pending = [Frame | Pending],
                            pending_size = PendingSize + iolist_size(Frame)}).

internal_send_command_async(Channel, MethodRecord, Content,
                            State = #state{max_frame_size = MaxFrame,
                                           pending = Pending,
                                           pending_size = PendingSize}) ->
    Frames = assemble_frames(Channel, MethodRecord, Content, MaxFrame),
    maybe_flush(State#state{pending = [Frames | Pending],
                            pending_size = PendingSize + iolist_size(Frames)}).

%% Note: a transfer record can be followed by a number of other
%% records to make a complete frame but unlike 0-9-1 we may have many
%% content records. However, that's already been handled for us, we're
%% just sending a chunk, so from this perspective it's just a binary.

%%TODO respect MaxFrame
assemble_frames(Channel, Performative, Content, _MaxFrame) ->
    ?DEBUG("~s Channel ~tp <-~n~tp~n followed by ~tp bytes of content~n",
           [?MODULE, Channel, amqp10_framing:pprint(Performative),
            iolist_size(Content)]),
    PerfBin = amqp10_framing:encode_bin(Performative),
    amqp10_binary_generator:build_frame(Channel, [PerfBin, Content]).

assemble_frame(Channel, Performative) ->
    assemble_frame(Channel, Performative, amqp10_framing).

assemble_frame(Channel, Performative, amqp10_framing) ->
    ?DEBUG("~s Channel ~tp <-~n~tp~n",
           [?MODULE, Channel, amqp10_framing:pprint(Performative)]),
    PerfBin = amqp10_framing:encode_bin(Performative),
    amqp10_binary_generator:build_frame(Channel, PerfBin);
assemble_frame(Channel, Performative, rabbit_amqp_sasl) ->
    ?DEBUG("~s Channel ~tp <-~n~tp~n",
           [?MODULE, Channel, amqp10_framing:pprint(Performative)]),
    PerfBin = amqp10_framing:encode_bin(Performative),
    amqp10_binary_generator:build_frame(Channel, ?AMQP_SASL_FRAME_TYPE, PerfBin).

tcp_send(Sock, Data) ->
    rabbit_misc:throw_on_error(
      inet_error,
      fun() -> rabbit_net:send(Sock, Data) end).

%% Flush when more than 2.5 * 1460 bytes (TCP over Ethernet MSS) = 3650 bytes of data
|
||||
%% has accumulated. The idea is to get the TCP data sections full (i.e. fill 1460 bytes)
|
||||
%% as often as possible to reduce the overhead of TCP/IP headers.
|
||||
-define(FLUSH_THRESHOLD, 3650).
|
||||
|
||||
maybe_flush(State = #state{pending_size = PendingSize}) ->
|
||||
case PendingSize > ?FLUSH_THRESHOLD of
|
||||
true -> flush(State);
|
||||
false -> State
|
||||
end.
|
||||
|
||||
flush(State = #state{pending = []}) ->
|
||||
State;
|
||||
flush(State = #state{sock = Sock,
|
||||
pending = Pending}) ->
|
||||
case rabbit_net:send(Sock, lists:reverse(Pending)) of
|
||||
ok ->
|
||||
State#state{pending = [],
|
||||
pending_size = 0};
|
||||
{error, Reason} ->
|
||||
exit({writer, send_failed, Reason})
|
||||
end.
|
|
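Aside: the writer above relies on the common OTP trick of returning a timeout of 0 from every callback (`no_reply/1`) so that pending frames are flushed in `handle_info(timeout, ...)` only once the mailbox has been drained. A minimal, self-contained sketch of that pattern follows; the module name and the plain `io:format/2` stand-in for the socket write are illustrative placeholders, not part of this patch.

```
%% Illustrative sketch only (not part of the patch): the "accumulate, then
%% flush on a 0 timeout" gen_server pattern used by the writer above.
-module(batching_writer_sketch).
-behaviour(gen_server).

-export([start_link/0, send/2]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

start_link() ->
    gen_server:start_link(?MODULE, [], []).

%% Queue a chunk of iodata asynchronously.
send(Pid, Data) ->
    gen_server:cast(Pid, {send, Data}).

init([]) ->
    {ok, #{pending => [], pending_size => 0}}.

handle_call(_Request, _From, State) ->
    {reply, ok, State, 0}.

handle_cast({send, Data}, #{pending := Pending, pending_size := Size} = State) ->
    %% Accumulate in reverse order. Returning a 0 timeout means the 'timeout'
    %% message is only delivered once the mailbox is empty, so many casts
    %% arriving back to back get flushed in a single write.
    {noreply, State#{pending := [Data | Pending],
                     pending_size := Size + iolist_size(Data)}, 0}.

handle_info(timeout, #{pending := []} = State) ->
    {noreply, State};
handle_info(timeout, #{pending := Pending} = State) ->
    %% Stand-in for rabbit_net:send/2: flush everything accumulated so far.
    io:format("flushing ~b bytes~n", [iolist_size(lists:reverse(Pending))]),
    {noreply, State#{pending := [], pending_size := 0}}.
```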
@@ -33,7 +33,7 @@
-export([consumers/1, consumers_all/1, emit_consumers_all/4, consumer_info_keys/0]).
-export([basic_get/5, basic_consume/12, basic_cancel/5, notify_decorators/1]).
-export([notify_sent/2, notify_sent_queue_down/1, resume/2]).
-export([notify_down_all/2, notify_down_all/3, activate_limit_all/2, credit/5]).
-export([notify_down_all/2, notify_down_all/3, activate_limit_all/2]).
-export([on_node_up/1, on_node_down/1]).
-export([update/2, store_queue/1, update_decorators/2, policy_changed/2]).
-export([update_mirroring/1, sync_mirrors/1, cancel_sync_mirrors/1]).

@@ -92,7 +92,7 @@
-define(IS_QUORUM(QPid), is_tuple(QPid)).
%%----------------------------------------------------------------------------

-export_type([name/0, qmsg/0, absent_reason/0]).
-export_type([name/0, qmsg/0, msg_id/0, absent_reason/0]).

-type name() :: rabbit_types:r('queue').

@@ -101,7 +101,7 @@
-type qfun(A) :: fun ((amqqueue:amqqueue()) -> A | no_return()).
-type qmsg() :: {name(), pid() | {atom(), pid()}, msg_id(),
                 boolean(), mc:state()}.
-type msg_id() :: non_neg_integer().
-type msg_id() :: undefined | non_neg_integer() | {Priority :: non_neg_integer(), undefined | non_neg_integer()}.
-type ok_or_errors() ::
    'ok' | {'error', [{'error' | 'exit' | 'throw', any()}]}.
-type absent_reason() :: 'nodedown' | 'crashed' | stopped | timeout.

@@ -789,11 +789,13 @@ check_exclusive_access(Q, _ReaderPid, _MatchType) ->
                      [rabbit_misc:rs(QueueName)]).

-spec with_exclusive_access_or_die(name(), pid(), qfun(A)) ->
    A | rabbit_types:channel_exit().

          A | rabbit_types:channel_exit().
with_exclusive_access_or_die(Name, ReaderPid, F) ->
    with_or_die(Name,
                fun (Q) -> check_exclusive_access(Q, ReaderPid), F(Q) end).
                fun (Q) ->
                        check_exclusive_access(Q, ReaderPid),
                        F(Q)
                end).

assert_args_equivalence(Q, NewArgs) ->
    ExistingArgs = amqqueue:get_arguments(Q),

@@ -1731,15 +1733,6 @@ deactivate_limit_all(QRefs, ChPid) ->
    delegate:invoke_no_result(QPids, {gen_server2, cast,
                                      [{deactivate_limit, ChPid}]}).

-spec credit(amqqueue:amqqueue(),
             rabbit_types:ctag(),
             non_neg_integer(),
             boolean(),
             rabbit_queue_type:state()) ->
    {ok, rabbit_queue_type:state(), rabbit_queue_type:actions()}.
credit(Q, CTag, Credit, Drain, QStates) ->
    rabbit_queue_type:credit(Q, CTag, Credit, Drain, QStates).

-spec basic_get(amqqueue:amqqueue(), boolean(), pid(), rabbit_types:ctag(),
                rabbit_queue_type:state()) ->
    {'ok', non_neg_integer(), qmsg(), rabbit_queue_type:state()} |

@@ -1766,7 +1759,7 @@ basic_consume(Q, NoAck, ChPid, LimiterPid,
      channel_pid => ChPid,
      limiter_pid => LimiterPid,
      limiter_active => LimiterActive,
      prefetch_count => ConsumerPrefetchCount,
      mode => {simple_prefetch, ConsumerPrefetchCount},
      consumer_tag => ConsumerTag,
      exclusive_consume => ExclusiveConsume,
      args => Args,
@@ -370,6 +370,13 @@ code_change(_OldVsn, State, _Extra) ->
maybe_notify_decorators(false, State) -> State;
maybe_notify_decorators(true, State) -> notify_decorators(State), State.

notify_decorators_if_became_empty(WasEmpty, State) ->
    case (not WasEmpty) andalso is_empty(State) of
        true -> notify_decorators(State);
        false -> ok
    end,
    State.

notify_decorators(Event, State) ->
    _ = decorator_callback(qname(State), Event, []),
    ok.

@@ -570,14 +577,6 @@ assert_invariant(State = #q{consumers = Consumers, single_active_consumer_on = f

is_empty(#q{backing_queue = BQ, backing_queue_state = BQS}) -> BQ:is_empty(BQS).

maybe_send_drained(WasEmpty, #q{q = Q} = State) ->
    case (not WasEmpty) andalso is_empty(State) of
        true -> notify_decorators(State),
                rabbit_queue_consumers:send_drained(amqqueue:get_name(Q));
        false -> ok
    end,
    State.

confirm_messages([], MTC, _QName) ->
    MTC;
confirm_messages(MsgIds, MTC, QName) ->

@@ -852,7 +851,7 @@ requeue_and_run(AckTags, State = #q{backing_queue = BQ,
    WasEmpty = BQ:is_empty(BQS),
    {_MsgIds, BQS1} = BQ:requeue(AckTags, BQS),
    {_Dropped, State1} = maybe_drop_head(State#q{backing_queue_state = BQS1}),
    run_message_queue(maybe_send_drained(WasEmpty, drop_expired_msgs(State1))).
    run_message_queue(notify_decorators_if_became_empty(WasEmpty, drop_expired_msgs(State1))).

fetch(AckRequired, State = #q{backing_queue = BQ,
                              backing_queue_state = BQS}) ->

@@ -861,7 +860,7 @@ fetch(AckRequired, State = #q{backing_queue = BQ,
    %% we will send expired messages at times.
    {Result, BQS1} = BQ:fetch(AckRequired, BQS),
    State1 = drop_expired_msgs(State#q{backing_queue_state = BQS1}),
    {Result, maybe_send_drained(Result =:= empty, State1)}.
    {Result, notify_decorators_if_became_empty(Result =:= empty, State1)}.

ack(AckTags, ChPid, State) ->
    subtract_acks(ChPid, AckTags, State,

@@ -992,11 +991,6 @@ calculate_msg_expiry(Msg, TTL) ->
            os:system_time(microsecond) + T * 1000
    end.

%% Logically this function should invoke maybe_send_drained/2.
%% However, that is expensive. Since some frequent callers of
%% drop_expired_msgs/1, in particular deliver_or_enqueue/3, cannot
%% possibly cause the queue to become empty, we push the
%% responsibility to the callers. So be cautious when adding new ones.
drop_expired_msgs(State) ->
    case is_empty(State) of
        true -> State;

@@ -1343,9 +1337,8 @@ handle_call({basic_get, ChPid, NoAck, LimiterPid}, _From,
    end;

handle_call({basic_consume, NoAck, ChPid, LimiterPid, LimiterActive,
             PrefetchCount, ConsumerTag, ExclusiveConsume, Args, OkMsg, ActingUser},
            _From, State = #q{q = Q,
                              consumers = Consumers,
             ModeOrPrefetch, ConsumerTag, ExclusiveConsume, Args, OkMsg, ActingUser},
            _From, State = #q{consumers = Consumers,
                              active_consumer = Holder,
                              single_active_consumer_on = SingleActiveConsumerOn}) ->
    ConsumerRegistration = case SingleActiveConsumerOn of

@@ -1355,33 +1348,28 @@ handle_call({basic_consume, NoAck, ChPid, LimiterPid, LimiterActive,
                {error, reply({error, exclusive_consume_unavailable}, State)};
            false ->
                Consumers1 = rabbit_queue_consumers:add(
                               amqqueue:get_name(Q),
                               ChPid, ConsumerTag, NoAck,
                               LimiterPid, LimiterActive,
                               PrefetchCount, Args, is_empty(State),
                               ActingUser, Consumers),

                case Holder of
                    none ->
                        NewConsumer = rabbit_queue_consumers:get(ChPid, ConsumerTag, Consumers1),
                        {state, State#q{consumers = Consumers1,
                                        has_had_consumers = true,
                                        active_consumer = NewConsumer}};
                    _ ->
                        {state, State#q{consumers = Consumers1,
                                        has_had_consumers = true}}
                end
                               LimiterPid, LimiterActive, ModeOrPrefetch,
                               Args, ActingUser, Consumers),
                case Holder of
                    none ->
                        NewConsumer = rabbit_queue_consumers:get(ChPid, ConsumerTag, Consumers1),
                        {state, State#q{consumers = Consumers1,
                                        has_had_consumers = true,
                                        active_consumer = NewConsumer}};
                    _ ->
                        {state, State#q{consumers = Consumers1,
                                        has_had_consumers = true}}
                end
        end;
    false ->
        case check_exclusive_access(Holder, ExclusiveConsume, State) of
            in_use -> {error, reply({error, exclusive_consume_unavailable}, State)};
            ok ->
                Consumers1 = rabbit_queue_consumers:add(
                               amqqueue:get_name(Q),
                               ChPid, ConsumerTag, NoAck,
                               LimiterPid, LimiterActive,
                               PrefetchCount, Args, is_empty(State),
                               ActingUser, Consumers),
                               LimiterPid, LimiterActive, ModeOrPrefetch,
                               Args, ActingUser, Consumers),
                ExclusiveConsumer =
                    if ExclusiveConsume -> {ChPid, ConsumerTag};
                       true -> Holder

@@ -1408,7 +1396,8 @@ handle_call({basic_consume, NoAck, ChPid, LimiterPid, LimiterActive,
                {false, _} ->
                    {true, up}
            end,
            rabbit_core_metrics:consumer_created(
            PrefetchCount = rabbit_queue_consumers:parse_prefetch_count(ModeOrPrefetch),
            rabbit_core_metrics:consumer_created(
              ChPid, ConsumerTag, ExclusiveConsume, AckRequired, QName,
              PrefetchCount, ConsumerIsActive, ActivityStatus, Args),
            emit_consumer_created(ChPid, ConsumerTag, ExclusiveConsume,

@@ -1436,7 +1425,9 @@ handle_call({basic_cancel, ChPid, ConsumerTag, OkMsg, ActingUser}, _From,
            emit_consumer_deleted(ChPid, ConsumerTag, qname(State1), ActingUser),
            notify_decorators(State1),
            case should_auto_delete(State1) of
                false -> reply(ok, ensure_expiry_timer(State1));
                false ->
                    State2 = run_message_queue(Holder =/= Holder1, State1),
                    reply(ok, ensure_expiry_timer(State2));
                true ->
                    log_auto_delete(
                      io_lib:format(

@@ -1467,7 +1458,7 @@ handle_call(purge, _From, State = #q{backing_queue = BQ,
                                     backing_queue_state = BQS}) ->
    {Count, BQS1} = BQ:purge(BQS),
    State1 = State#q{backing_queue_state = BQS1},
    reply({ok, Count}, maybe_send_drained(Count =:= 0, State1));
    reply({ok, Count}, notify_decorators_if_became_empty(Count =:= 0, State1));

handle_call({requeue, AckTags, ChPid}, From, State) ->
    gen_server2:reply(From, ok),

@@ -1638,21 +1629,57 @@ handle_cast(update_mirroring, State = #q{q = Q,
            noreply(update_mirroring(Policy, State1))
    end;

handle_cast({credit, ChPid, CTag, Credit, Drain},
            State = #q{consumers = Consumers,
                       backing_queue = BQ,
                       backing_queue_state = BQS,
                       q = Q}) ->
    Len = BQ:len(BQS),
    rabbit_classic_queue:send_credit_reply(ChPid, amqqueue:get_name(Q), Len),
    noreply(
      case rabbit_queue_consumers:credit(amqqueue:get_name(Q),
                                         Len == 0, Credit, Drain, ChPid, CTag,
                                         Consumers) of
          unchanged -> State;
          {unblocked, Consumers1} -> State1 = State#q{consumers = Consumers1},
                                     run_message_queue(true, State1)
      end);
handle_cast({credit, SessionPid, CTag, Credit, Drain},
            #q{q = Q,
               backing_queue = BQ,
               backing_queue_state = BQS0} = State) ->
    %% Credit API v1.
    %% Delete this function clause when feature flag credit_api_v2 becomes required.
    %% Behave like non-native AMQP 1.0: Send send_credit_reply before deliveries.
    rabbit_classic_queue:send_credit_reply_credit_api_v1(
      SessionPid, amqqueue:get_name(Q), BQ:len(BQS0)),
    handle_cast({credit, SessionPid, CTag, credit_api_v1, Credit, Drain, false}, State);
handle_cast({credit, SessionPid, CTag, DeliveryCountRcv, Credit, Drain, Echo},
            #q{consumers = Consumers0,
               q = Q} = State0) ->
    QName = amqqueue:get_name(Q),
    State = #q{backing_queue_state = PostBQS,
               backing_queue = BQ} = case rabbit_queue_consumers:process_credit(
                                            DeliveryCountRcv, Credit, SessionPid, CTag, Consumers0) of
                                         unchanged ->
                                             State0;
                                         {unblocked, Consumers1} ->
                                             State1 = State0#q{consumers = Consumers1},
                                             run_message_queue(true, State1)
                                     end,
    case rabbit_queue_consumers:get_link_state(SessionPid, CTag) of
        {credit_api_v1, PostCred}
          when Drain andalso
               is_integer(PostCred) andalso PostCred > 0 ->
            %% credit API v1
            rabbit_queue_consumers:drained(credit_api_v1, SessionPid, CTag),
            rabbit_classic_queue:send_drained_credit_api_v1(SessionPid, QName, CTag, PostCred);
        {PostDeliveryCountSnd, PostCred}
          when is_integer(PostDeliveryCountSnd) andalso
               Drain andalso
               is_integer(PostCred) andalso PostCred > 0 ->
            %% credit API v2
            AdvancedDeliveryCount = serial_number:add(PostDeliveryCountSnd, PostCred),
            rabbit_queue_consumers:drained(AdvancedDeliveryCount, SessionPid, CTag),
            Avail = BQ:len(PostBQS),
            rabbit_classic_queue:send_credit_reply(
              SessionPid, QName, CTag, AdvancedDeliveryCount, 0, Avail, Drain);
        {PostDeliveryCountSnd, PostCred}
          when is_integer(PostDeliveryCountSnd) andalso
               Echo ->
            %% credit API v2
            Avail = BQ:len(PostBQS),
            rabbit_classic_queue:send_credit_reply(
              SessionPid, QName, CTag, PostDeliveryCountSnd, PostCred, Avail, Drain);
        _ ->
            ok
    end,
    noreply(State);

% Note: https://www.pivotaltracker.com/story/show/166962656
% This event is necessary for the stats timer to be initialized with

@@ -1731,7 +1758,7 @@ handle_info({maybe_expire, _Vsn}, State) ->
handle_info({drop_expired, Vsn}, State = #q{args_policy_version = Vsn}) ->
    WasEmpty = is_empty(State),
    State1 = drop_expired_msgs(State#q{ttl_timer_ref = undefined}),
    noreply(maybe_send_drained(WasEmpty, State1));
    noreply(notify_decorators_if_became_empty(WasEmpty, State1));

handle_info({drop_expired, _Vsn}, State) ->
    noreply(State);
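Aside: the classic queue clause above advances the AMQP 1.0 delivery-count with `serial_number:add(PostDeliveryCountSnd, PostCred)` rather than plain `+` (and rabbit_fifo further below does the same), because delivery-count is a 32-bit RFC 1982 serial number that wraps around. A minimal sketch of that arithmetic follows; it is an illustration with a hypothetical module name, not RabbitMQ's actual `serial_number` module.

```
%% Illustrative sketch (not part of the patch): RFC 1982 style addition for a
%% 32-bit serial number such as the AMQP 1.0 delivery-count.
-module(serial_add_sketch).
-export([add/2, example/0]).

-define(SERIAL_MODULUS, 1 bsl 32).

%% Add N to a 32-bit serial number, wrapping around at 2^32.
add(Serial, N) when Serial >= 0, Serial < ?SERIAL_MODULUS, N >= 0 ->
    (Serial + N) rem ?SERIAL_MODULUS.

example() ->
    %% Near the wrap-around point plain '+' would leave the 32-bit range,
    %% while serial addition wraps back to a small value.
    4294967295 = add(4294967294, 1),
    0 = add(4294967295, 1),
    ok.
```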
@@ -411,7 +411,7 @@ make_decision(AllPartitions) ->
partition_value(Partition) ->
    Connections = [Res || Node <- Partition,
                          Res <- [rpc:call(Node, rabbit_networking,
                                           connections_local, [])],
                                           local_connections, [])],
                          is_list(Res)],
    {length(lists:append(Connections)), length(Partition)}.
@@ -63,7 +63,7 @@
-export([get_vhost/1, get_user/1]).
%% For testing
-export([build_topic_variable_map/3]).
-export([list_queue_states/1, get_max_message_size/0]).
-export([list_queue_states/1]).

%% Mgmt HTTP API refactor
-export([handle_method/6]).

@@ -87,13 +87,9 @@
             %% same as reader's name, see #v1.name
             %% in rabbit_reader
             conn_name,
             %% channel's originating source e.g. rabbit_reader | rabbit_direct | undefined
             %% or any other channel creating/spawning entity
             source,
             %% same as #v1.user in the reader, used in
             %% authorisation checks
             user,
             %% same as #v1.user in the reader
             virtual_host,
             %% when queue.bind's queue field is empty,
             %% this name will be used instead

@@ -107,15 +103,10 @@
             capabilities,
             trace_state :: rabbit_trace:state(),
             consumer_prefetch,
             %% Message content size limit
             max_message_size,
             consumer_timeout,
             authz_context,
             %% defines how often gc will be executed
             writer_gc_threshold,
             %% true with AMQP 1.0 to include the publishing sequence
             %% in the return callback, false otherwise
             extended_return_callback
             writer_gc_threshold
            }).

-record(pending_ack, {

@@ -513,10 +504,8 @@ init([Channel, ReaderPid, WriterPid, ConnPid, ConnName, Protocol, User, VHost,
    end,
    %% Process dictionary is used here because permission cache already uses it. MK.
    put(permission_cache_can_expire, rabbit_access_control:permission_cache_can_expire(User)),
    MaxMessageSize = get_max_message_size(),
    ConsumerTimeout = get_consumer_timeout(),
    OptionalVariables = extract_variable_map_from_amqp_params(AmqpParams),
    UseExtendedReturnCallback = use_extended_return_callback(AmqpParams),
    {ok, GCThreshold} = application:get_env(rabbit, writer_gc_threshold),
    State = #ch{cfg = #conf{state = starting,
                            protocol = Protocol,

@@ -532,17 +521,14 @@ init([Channel, ReaderPid, WriterPid, ConnPid, ConnName, Protocol, User, VHost,
                            capabilities = Capabilities,
                            trace_state = rabbit_trace:init(VHost),
                            consumer_prefetch = Prefetch,
                            max_message_size = MaxMessageSize,
                            consumer_timeout = ConsumerTimeout,
                            authz_context = OptionalVariables,
                            writer_gc_threshold = GCThreshold,
                            extended_return_callback = UseExtendedReturnCallback
                            writer_gc_threshold = GCThreshold
                           },
                limiter = Limiter,
                tx = none,
                next_tag = 1,
                unacked_message_q = ?QUEUE:new(),
                queue_monitors = pmon:new(),
                consumer_mapping = #{},
                queue_consumers = #{},
                confirm_enabled = false,

@@ -755,8 +741,7 @@ handle_info(emit_stats, State) ->
    {noreply, send_confirms_and_nacks(State1), hibernate};

handle_info({{'DOWN', QName}, _MRef, process, QPid, Reason},
            #ch{queue_states = QStates0,
                queue_monitors = _QMons} = State0) ->
            #ch{queue_states = QStates0} = State0) ->
    credit_flow:peer_down(QPid),
    case rabbit_queue_type:handle_down(QPid, QName, Reason, QStates0) of
        {ok, QState1, Actions} ->

@@ -812,17 +797,17 @@ terminate(_Reason,
          State = #ch{cfg = #conf{user = #user{username = Username}},
                      consumer_mapping = CM,
                      queue_states = QueueCtxs}) ->
    _ = rabbit_queue_type:close(QueueCtxs),
    rabbit_queue_type:close(QueueCtxs),
    {_Res, _State1} = notify_queues(State),
    pg_local:leave(rabbit_channels, self()),
    rabbit_event:if_enabled(State, #ch.stats_timer,
                            fun() -> emit_stats(State) end),
    [delete_stats(Tag) || {Tag, _} <- get()],
    maybe_decrease_global_publishers(State),
    _ = maps:map(
          fun (_, _) ->
                  rabbit_global_counters:consumer_deleted(amqp091)
          end, CM),
    maps:foreach(
      fun (_, _) ->
              rabbit_global_counters:consumer_deleted(amqp091)
      end, CM),
    rabbit_core_metrics:channel_closed(self()),
    rabbit_event:notify(channel_closed, [{pid, self()},
                                         {user_who_performed_action, Username},

@@ -839,16 +824,6 @@ code_change(_OldVsn, State, _Extra) ->

format_message_queue(Opt, MQ) -> rabbit_misc:format_message_queue(Opt, MQ).

-spec get_max_message_size() -> non_neg_integer().

get_max_message_size() ->
    case application:get_env(rabbit, max_message_size) of
        {ok, MS} when is_integer(MS) ->
            erlang:min(MS, ?MAX_MSG_SIZE);
        _ ->
            ?MAX_MSG_SIZE
    end.

get_consumer_timeout() ->
    case application:get_env(rabbit, consumer_timeout) of
        {ok, MS} when is_integer(MS) ->

@@ -954,30 +929,19 @@ check_write_permitted_on_topic(Resource, User, RoutingKey, AuthzContext) ->
check_read_permitted_on_topic(Resource, User, RoutingKey, AuthzContext) ->
    check_topic_authorisation(Resource, User, RoutingKey, AuthzContext, read).

check_user_id_header(#'P_basic'{user_id = undefined}, _) ->
    ok;
check_user_id_header(#'P_basic'{user_id = Username},
                     #ch{cfg = #conf{user = #user{username = Username}}}) ->
    ok;
check_user_id_header(
  #'P_basic'{}, #ch{cfg = #conf{user = #user{authz_backends =
                                             [{rabbit_auth_backend_dummy, _}]}}}) ->
    ok;
check_user_id_header(#'P_basic'{user_id = Claimed},
                     #ch{cfg = #conf{user = #user{username = Actual,
                                                  tags = Tags}}}) ->
    case lists:member(impersonator, Tags) of
        true -> ok;
        false -> rabbit_misc:precondition_failed(
                   "user_id property set to '~ts' but authenticated user was "
                   "'~ts'", [Claimed, Actual])
check_user_id_header(Msg, User) ->
    case rabbit_access_control:check_user_id(Msg, User) of
        ok ->
            ok;
        {refused, Reason, Args} ->
            rabbit_misc:precondition_failed(Reason, Args)
    end.

check_expiration_header(Props) ->
    case rabbit_basic:parse_expiration(Props) of
        {ok, _} -> ok;
        {error, E} -> rabbit_misc:precondition_failed("invalid expiration '~ts': ~tp",
                                                      [Props#'P_basic'.expiration, E])
                                                      [Props#'P_basic'.expiration, E])
    end.

check_internal_exchange(#exchange{name = Name, internal = true}) ->

@@ -1028,28 +992,21 @@ extract_variable_map_from_amqp_params([Value]) ->
extract_variable_map_from_amqp_params(_) ->
    #{}.

%% Use tuple representation of amqp_params to avoid a dependency on amqp_client.
%% Used for AMQP 1.0
use_extended_return_callback({amqp_params_direct,_,_,_,_,
                              {amqp_adapter_info,_,_,_,_,_,{'AMQP',"1.0"},_},
                              _}) ->
    true;
use_extended_return_callback(_) ->
    false.

check_msg_size(Content, MaxMessageSize, GCThreshold) ->
check_msg_size(Content, GCThreshold) ->
    MaxMessageSize = persistent_term:get(max_message_size),
    Size = rabbit_basic:maybe_gc_large_msg(Content, GCThreshold),
    case Size of
        S when S > MaxMessageSize ->
            ErrorMessage = case MaxMessageSize of
                               ?MAX_MSG_SIZE ->
                                   "message size ~B is larger than max size ~B";
                               _ ->
                                   "message size ~B is larger than configured max size ~B"
                           end,
            rabbit_misc:precondition_failed(ErrorMessage,
                                            [Size, MaxMessageSize]);
        _ -> ok
    case Size =< MaxMessageSize of
        true ->
            ok;
        false ->
            Fmt = case MaxMessageSize of
                      ?MAX_MSG_SIZE ->
                          "message size ~B is larger than max size ~B";
                      _ ->
                          "message size ~B is larger than configured max size ~B"
                  end,
            rabbit_misc:precondition_failed(
              Fmt, [Size, MaxMessageSize])
    end.

check_vhost_queue_limit(#resource{name = QueueName}, VHost) ->

@@ -1226,22 +1183,21 @@ handle_method(#'basic.publish'{immediate = true}, _Content, _State) ->
handle_method(#'basic.publish'{exchange = ExchangeNameBin,
                               routing_key = RoutingKey,
                               mandatory = Mandatory},
              Content, State = #ch{cfg = #conf{channel = ChannelNum,
                                               conn_name = ConnName,
                                               virtual_host = VHostPath,
                                               user = #user{username = Username} = User,
                                               trace_state = TraceState,
                                               max_message_size = MaxMessageSize,
                                               authz_context = AuthzContext,
                                               writer_gc_threshold = GCThreshold
                                              },
              Content, State0 = #ch{cfg = #conf{channel = ChannelNum,
                                                conn_name = ConnName,
                                                virtual_host = VHostPath,
                                                user = #user{username = Username} = User,
                                                trace_state = TraceState,
                                                authz_context = AuthzContext,
                                                writer_gc_threshold = GCThreshold
                                               },
                                   tx = Tx,
                                   confirm_enabled = ConfirmEnabled,
                                   delivery_flow = Flow
                                  }) ->
    State0 = maybe_increase_global_publishers(State),
    State1 = maybe_increase_global_publishers(State0),
    rabbit_global_counters:messages_received(amqp091, 1),
    check_msg_size(Content, MaxMessageSize, GCThreshold),
    check_msg_size(Content, GCThreshold),
    ExchangeName = rabbit_misc:r(VHostPath, exchange, ExchangeNameBin),
    check_write_permitted(ExchangeName, User, AuthzContext),
    Exchange = rabbit_exchange:lookup_or_die(ExchangeName),

@@ -1251,19 +1207,19 @@ handle_method(#'basic.publish'{exchange = ExchangeNameBin,
    %% certain to want to look at delivery-mode and priority.
    DecodedContent = #content {properties = Props} =
        maybe_set_fast_reply_to(
          rabbit_binary_parser:ensure_content_decoded(Content), State),
    check_user_id_header(Props, State),
          rabbit_binary_parser:ensure_content_decoded(Content), State1),
    check_expiration_header(Props),
    DoConfirm = Tx =/= none orelse ConfirmEnabled,
    {DeliveryOptions, State1} =
    {DeliveryOptions, State} =
        case DoConfirm of
            false ->
                {maps_put_truthy(flow, Flow, #{mandatory => Mandatory}), State0};
                {maps_put_truthy(flow, Flow, #{mandatory => Mandatory}), State1};
            true ->
                rabbit_global_counters:messages_received_confirm(amqp091, 1),
                SeqNo = State0#ch.publish_seqno,
                Opts = maps_put_truthy(flow, Flow, #{correlation => SeqNo, mandatory => Mandatory}),
                {Opts, State0#ch{publish_seqno = SeqNo + 1}}
                SeqNo = State1#ch.publish_seqno,
                Opts = maps_put_truthy(flow, Flow, #{correlation => SeqNo,
                                                     mandatory => Mandatory}),
                {Opts, State1#ch{publish_seqno = SeqNo + 1}}
        end,

    case mc_amqpl:message(ExchangeName,

@@ -1273,6 +1229,7 @@ handle_method(#'basic.publish'{exchange = ExchangeNameBin,
            rabbit_misc:precondition_failed("invalid message: ~tp", [Reason]);
        {ok, Message0} ->
            Message = rabbit_message_interceptor:intercept(Message0),
            check_user_id_header(Message, User),
            QNames = rabbit_exchange:route(Exchange, Message, #{return_binding_keys => true}),
            [rabbit_channel:deliver_reply(RK, Message) ||
                {virtual_reply_queue, RK} <- QNames],

@@ -1283,10 +1240,10 @@ handle_method(#'basic.publish'{exchange = ExchangeNameBin,
            Delivery = {Message, DeliveryOptions, Queues},
            {noreply, case Tx of
                          none ->
                              deliver_to_queues(ExchangeName, Delivery, State1);
                              deliver_to_queues(ExchangeName, Delivery, State);
                          {Msgs, Acks} ->
                              Msgs1 = ?QUEUE:in(Delivery, Msgs),
                              State1#ch{tx = {Msgs1, Acks}}
                              State#ch{tx = {Msgs1, Acks}}
                      end}
    end;

@@ -1729,19 +1686,6 @@ handle_method(#'channel.flow'{active = true}, _, State) ->
handle_method(#'channel.flow'{active = false}, _, _State) ->
    rabbit_misc:protocol_error(not_implemented, "active=false", []);

handle_method(#'basic.credit'{consumer_tag = CTag,
                              credit = Credit,
                              drain = Drain},
              _, State = #ch{consumer_mapping = Consumers,
                             queue_states = QStates0}) ->
    case maps:find(CTag, Consumers) of
        {ok, {Q, _CParams}} ->
            {ok, QStates, Actions} = rabbit_queue_type:credit(Q, CTag, Credit, Drain, QStates0),
            {noreply, handle_queue_actions(Actions, State#ch{queue_states = QStates})};
        error -> rabbit_misc:precondition_failed(
                   "unknown consumer tag '~ts'", [CTag])
    end;

handle_method(_MethodRecord, _Content, _State) ->
    rabbit_misc:protocol_error(
      command_invalid, "unimplemented method", []).

@@ -2146,10 +2090,10 @@ deliver_to_queues(XName,
        {ok, QueueStates, Actions} ->
            rabbit_global_counters:messages_routed(amqp091, length(Qs)),
            QueueNames = rabbit_amqqueue:queue_names(Qs),
            MsgSeqNo = maps:get(correlation, Options, undefined),
            %% NB: the order here is important since basic.returns must be
            %% sent before confirms.
            ok = process_routing_mandatory(Mandatory, RoutedToQueues, MsgSeqNo, Message, XName, State0),
            ok = process_routing_mandatory(Mandatory, RoutedToQueues, Message, XName, State0),
            MsgSeqNo = maps:get(correlation, Options, undefined),
            State1 = process_routing_confirm(MsgSeqNo, QueueNames, XName, State0),
            %% Actions must be processed after registering confirms as actions may
            %% contain rejections of publishes

@@ -2178,32 +2122,23 @@ deliver_to_queues(XName,

process_routing_mandatory(_Mandatory = true,
                          _RoutedToQs = [],
                          MsgSeqNo,
                          Msg,
                          XName,
                          State = #ch{cfg = #conf{extended_return_callback = ExtRetCallback}}) ->
                          State) ->
    rabbit_global_counters:messages_unroutable_returned(amqp091, 1),
    ?INCR_STATS(exchange_stats, XName, 1, return_unroutable, State),
    Content0 = mc:protocol_state(Msg),
    Content = case ExtRetCallback of
                  true ->
                      %% providing the publishing sequence for AMQP 1.0
                      {MsgSeqNo, Content0};
                  false ->
                      Content0
              end,
    Content = mc:protocol_state(Msg),
    [RoutingKey | _] = mc:routing_keys(Msg),
    ok = basic_return(Content, RoutingKey, XName#resource.name, State, no_route);
process_routing_mandatory(_Mandatory = false,
                          _RoutedToQs = [],
                          _MsgSeqNo,
                          _Msg,
                          XName,
                          State) ->
    rabbit_global_counters:messages_unroutable_dropped(amqp091, 1),
    ?INCR_STATS(exchange_stats, XName, 1, drop_unroutable, State),
    ok;
process_routing_mandatory(_, _, _, _, _, _) ->
process_routing_mandatory(_, _, _, _, _) ->
    ok.

process_routing_confirm(undefined, _, _, State) ->

@@ -2797,12 +2732,11 @@ handle_consumer_timed_out(Timeout,#pending_ack{delivery_tag = DeliveryTag, tag =
                              [Channel, Timeout], none),
    handle_exception(Ex, State).

handle_queue_actions(Actions, #ch{cfg = #conf{writer_pid = WriterPid}} = State0) ->
handle_queue_actions(Actions, State) ->
    lists:foldl(
      fun
          ({settled, QRef, MsgSeqNos}, S0) ->
      fun({settled, QRef, MsgSeqNos}, S0) ->
              confirm(MsgSeqNos, QRef, S0);
          ({rejected, _QRef, MsgSeqNos}, S0) ->
         ({rejected, _QRef, MsgSeqNos}, S0) ->
              {U, Rej} =
                  lists:foldr(
                    fun(SeqNo, {U1, Acc}) ->

@@ -2815,26 +2749,17 @@ handle_queue_actions(Actions, #ch{cfg = #conf{writer_pid = WriterPid}} = State0)
                    end, {S0#ch.unconfirmed, []}, MsgSeqNos),
              S = S0#ch{unconfirmed = U},
              record_rejects(Rej, S);
          ({deliver, CTag, AckRequired, Msgs}, S0) ->
         ({deliver, CTag, AckRequired, Msgs}, S0) ->
              handle_deliver(CTag, AckRequired, Msgs, S0);
          ({queue_down, QRef}, S0) ->
         ({queue_down, QRef}, S0) ->
              handle_consuming_queue_down_or_eol(QRef, S0);
          ({block, QName}, S0) ->
         ({block, QName}, S0) ->
              credit_flow:block(QName),
              S0;
          ({unblock, QName}, S0) ->
         ({unblock, QName}, S0) ->
              credit_flow:unblock(QName),
              S0;
          ({send_credit_reply, Avail}, S0) ->
              ok = rabbit_writer:send_command(WriterPid,
                                              #'basic.credit_ok'{available = Avail}),
              S0;
          ({send_drained, {CTag, Credit}}, S0) ->
              ok = rabbit_writer:send_command(WriterPid,
                                              #'basic.credit_drained'{consumer_tag = CTag,
                                                                      credit_drained = Credit}),
              S0
      end, State0, Actions).
      end, State, Actions).

handle_eol(QName, State0) ->
    State1 = handle_consuming_queue_down_or_eol(QName, State0),
@@ -41,7 +41,8 @@
         handle_event/3,
         deliver/3,
         settle/5,
         credit/5,
         credit_v1/5,
         credit/7,
         dequeue/5,
         info/2,
         state_info/1,

@@ -58,8 +59,9 @@
-export([confirm_to_sender/3,
         send_rejection/3,
         deliver_to_consumer/5,
         send_drained/3,
         send_credit_reply/3]).
         send_credit_reply_credit_api_v1/3,
         send_drained_credit_api_v1/4,
         send_credit_reply/7]).

-spec is_enabled() -> boolean().
is_enabled() -> true.

@@ -237,16 +239,17 @@ consume(Q, Spec, State0) when ?amqqueue_is_classic(Q) ->
      channel_pid := ChPid,
      limiter_pid := LimiterPid,
      limiter_active := LimiterActive,
      prefetch_count := ConsumerPrefetchCount,
      mode := Mode,
      consumer_tag := ConsumerTag,
      exclusive_consume := ExclusiveConsume,
      args := Args,
      args := Args0,
      ok_msg := OkMsg,
      acting_user := ActingUser} = Spec,
    {ModeOrPrefetch, Args} = consume_backwards_compat(Mode, Args0),
    case delegate:invoke(QPid,
                         {gen_server2, call,
                          [{basic_consume, NoAck, ChPid, LimiterPid,
                            LimiterActive, ConsumerPrefetchCount, ConsumerTag,
                            LimiterActive, ModeOrPrefetch, ConsumerTag,
                            ExclusiveConsume, Args, OkMsg, ActingUser},
                           infinity]}) of
        ok ->

@@ -257,6 +260,22 @@ consume(Q, Spec, State0) when ?amqqueue_is_classic(Q) ->
            Err
    end.

%% Delete this function when feature flag credit_api_v2 becomes required.
consume_backwards_compat({simple_prefetch, PrefetchCount} = Mode, Args) ->
    case rabbit_feature_flags:is_enabled(credit_api_v2) of
        true -> {Mode, Args};
        false -> {PrefetchCount, Args}
    end;
consume_backwards_compat({credited, InitialDeliveryCount} = Mode, Args)
  when is_integer(InitialDeliveryCount) ->
    %% credit API v2
    {Mode, Args};
consume_backwards_compat({credited, credit_api_v1}, Args) ->
    %% credit API v1
    {_PrefetchCount = 0,
     [{<<"x-credit">>, table, [{<<"credit">>, long, 0},
                               {<<"drain">>, bool, false}]} | Args]}.

cancel(Q, ConsumerTag, OkMsg, ActingUser, State) ->
    QPid = amqqueue:get_pid(Q),
    case delegate:invoke(QPid, {gen_server2, call,

@@ -282,11 +301,14 @@ settle(_QName, Op, _CTag, MsgIds, State) ->
                                [{reject, Op == requeue, MsgIds, ChPid}]}),
    {State, []}.

credit(_QName, CTag, Credit, Drain, State) ->
    ChPid = self(),
    delegate:invoke_no_result(State#?STATE.pid,
                              {gen_server2, cast,
                               [{credit, ChPid, CTag, Credit, Drain}]}),
credit_v1(_QName, Ctag, LinkCreditSnd, Drain, #?STATE{pid = QPid} = State) ->
    Request = {credit, self(), Ctag, LinkCreditSnd, Drain},
    delegate:invoke_no_result(QPid, {gen_server2, cast, [Request]}),
    {State, []}.

credit(_QName, Ctag, DeliveryCountRcv, LinkCreditRcv, Drain, Echo, #?STATE{pid = QPid} = State) ->
    Request = {credit, self(), Ctag, DeliveryCountRcv, LinkCreditRcv, Drain, Echo},
    delegate:invoke_no_result(QPid, {gen_server2, cast, [Request]}),
    {State, []}.

handle_event(QName, {confirm, MsgSeqNos, Pid}, #?STATE{unconfirmed = U0} = State) ->

@@ -352,9 +374,13 @@ handle_event(QName, {down, Pid, Info}, #?STATE{monitored = Monitored,
            {ok, State#?STATE{unconfirmed = U},
             [{rejected, QName, MsgIds} | Actions0]}
    end;
handle_event(_QName, {send_drained, _} = Action, State) ->
handle_event(_QName, Action, State)
  when element(1, Action) =:= credit_reply ->
    {ok, State, [Action]};
handle_event(_QName, {send_credit_reply, _} = Action, State) ->
handle_event(_QName, {send_drained, {Ctag, Credit}}, State) ->
    %% This function clause should be deleted when feature flag
    %% credit_api_v2 becomes required.
    Action = {credit_reply_v1, Ctag, Credit, _Available = 0, _Drain = true},
    {ok, State, [Action]}.

settlement_action(_Type, _QRef, [], Acc) ->

@@ -610,26 +636,30 @@ ensure_monitor(Pid, QName, State = #?STATE{monitored = Monitored}) ->

%% part of channel <-> queue api
confirm_to_sender(Pid, QName, MsgSeqNos) ->
    Msg = {confirm, MsgSeqNos, self()},
    gen_server:cast(Pid, {queue_event, QName, Msg}).
    Evt = {confirm, MsgSeqNos, self()},
    send_queue_event(Pid, QName, Evt).

send_rejection(Pid, QName, MsgSeqNo) ->
    Msg = {reject_publish, MsgSeqNo, self()},
    gen_server:cast(Pid, {queue_event, QName, Msg}).
    Evt = {reject_publish, MsgSeqNo, self()},
    send_queue_event(Pid, QName, Evt).

deliver_to_consumer(Pid, QName, CTag, AckRequired, Message) ->
    Deliver = {deliver, CTag, AckRequired, [Message]},
    Evt = {queue_event, QName, Deliver},
    gen_server:cast(Pid, Evt).
    Evt = {deliver, CTag, AckRequired, [Message]},
    send_queue_event(Pid, QName, Evt).

send_drained(Pid, QName, CTagCredits) when is_list(CTagCredits) ->
    lists:foreach(fun(CTagCredit) ->
                          send_drained(Pid, QName, CTagCredit)
                  end, CTagCredits);
send_drained(Pid, QName, CTagCredit) when is_tuple(CTagCredit) ->
    gen_server:cast(Pid, {queue_event, QName,
                          {send_drained, CTagCredit}}).
%% Delete this function when feature flag credit_api_v2 becomes required.
send_credit_reply_credit_api_v1(Pid, QName, Available) ->
    Evt = {send_credit_reply, Available},
    send_queue_event(Pid, QName, Evt).

send_credit_reply(Pid, QName, Len) when is_integer(Len) ->
    gen_server:cast(Pid, {queue_event, QName,
                          {send_credit_reply, Len}}).
%% Delete this function when feature flag credit_api_v2 becomes required.
send_drained_credit_api_v1(Pid, QName, Ctag, Credit) ->
    Evt = {send_drained, {Ctag, Credit}},
    send_queue_event(Pid, QName, Evt).

send_credit_reply(Pid, QName, Ctag, DeliveryCount, Credit, Available, Drain) ->
    Evt = {credit_reply, Ctag, DeliveryCount, Credit, Available, Drain},
    send_queue_event(Pid, QName, Evt).

send_queue_event(Pid, QName, Event) ->
    gen_server:cast(Pid, {queue_event, QName, Event}).
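Aside: `send_credit_reply/7` above delivers the new `credit_reply` queue event as a plain `{queue_event, QName, Event}` cast, which the receiving session (or channel) process can handle asynchronously. A toy receiver is sketched below; the module name and the `io:format/2` logging are illustrative assumptions, not the actual session code in this PR.

```
%% Illustrative sketch (not part of the patch): a toy queue client that
%% consumes the asynchronous credit_reply event emitted via send_queue_event/3.
-module(credit_reply_receiver_sketch).
-behaviour(gen_server).

-export([start_link/0]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() ->
    gen_server:start_link(?MODULE, [], []).

init([]) ->
    {ok, #{}}.

handle_call(_Request, _From, State) ->
    {reply, ok, State}.

handle_cast({queue_event, QName,
             {credit_reply, CTag, DeliveryCount, Credit, Available, Drain}},
            State) ->
    %% The consumer tag is part of the event, so replies from several links can
    %% be correlated and processed without a blocking call into the queue.
    io:format("~p: link ~ts delivery-count=~b credit=~b available=~b drain=~p~n",
              [QName, CTag, DeliveryCount, Credit, Available, Drain]),
    {noreply, State};
handle_cast(_Other, State) ->
    {noreply, State}.
```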
@@ -45,7 +45,7 @@ insert(SeqNo, QNames, #resource{kind = exchange} = XName,
  when is_integer(SeqNo)
       andalso is_list(QNames)
       andalso not is_map_key(SeqNo, U0) ->
    U = U0#{SeqNo => {XName, maps:from_list([{Q, ok} || Q <- QNames])}},
    U = U0#{SeqNo => {XName, maps:from_keys(QNames, ok)}},
    S = case S0 of
            undefined -> SeqNo;
            _ -> S0

@@ -58,20 +58,18 @@ insert(SeqNo, QNames, #resource{kind = exchange} = XName,
confirm(SeqNos, QName, #?MODULE{smallest = Smallest0,
                                unconfirmed = U0} = State)
  when is_list(SeqNos) ->
    {Confirmed, U} = lists:foldr(
                       fun (SeqNo, Acc) ->
                               confirm_one(SeqNo, QName, Acc)
                       end, {[], U0}, SeqNos),
    %% check if smallest is in Confirmed
    %% TODO: this can be optimised by checking in the preceding foldr
    Smallest =
        case lists:any(fun ({S, _}) -> S == Smallest0 end, Confirmed) of
            true ->
                %% work out new smallest
                next_smallest(Smallest0, U);
            false ->
                Smallest0
        end,
    {Confirmed, ConfirmedSmallest, U} =
        lists:foldl(
          fun (SeqNo, Acc) ->
                  confirm_one(SeqNo, QName, Smallest0, Acc)
          end, {[], false, U0}, SeqNos),
    Smallest = case ConfirmedSmallest of
                   true ->
                       %% work out new smallest
                       next_smallest(Smallest0, U);
                   false ->
                       Smallest0
               end,
    {Confirmed, State#?MODULE{smallest = Smallest,
                              unconfirmed = U}}.

@@ -124,17 +122,21 @@ is_empty(State) ->

%% INTERNAL

confirm_one(SeqNo, QName, {Acc, U0}) ->
confirm_one(SeqNo, QName, Smallest, {Acc, ConfirmedSmallest0, U0}) ->
    case maps:take(SeqNo, U0) of
        {{XName, QS}, U1}
          when is_map_key(QName, QS)
               andalso map_size(QS) == 1 ->
            %% last queue confirm
            {[{SeqNo, XName} | Acc], U1};
            ConfirmedSmallest = case SeqNo of
                                    Smallest -> true;
                                    _ -> ConfirmedSmallest0
                                end,
            {[{SeqNo, XName} | Acc], ConfirmedSmallest, U1};
        {{XName, QS}, U1} ->
            {Acc, U1#{SeqNo => {XName, maps:remove(QName, QS)}}};
            {Acc, ConfirmedSmallest0, U1#{SeqNo => {XName, maps:remove(QName, QS)}}};
        error ->
            {Acc, U0}
            {Acc, ConfirmedSmallest0, U0}
    end.

next_smallest(_S, U) when map_size(U) == 0 ->
@@ -18,7 +18,7 @@

-behaviour(supervisor).

-export([start_link/0]).
-export([start_link/1]).
-export([
         start_channel_sup_sup/1,
         start_queue_collector/2

@@ -30,10 +30,10 @@

%%----------------------------------------------------------------------------

-spec start_link() -> rabbit_types:ok_pid_or_error().

start_link() ->
    supervisor:start_link(?MODULE, []).
-spec start_link(supervisor:sup_flags()) ->
    supervisor:startlink_ret().
start_link(SupFlags) ->
    supervisor:start_link(?MODULE, SupFlags).

-spec start_channel_sup_sup(pid()) -> rabbit_types:ok_pid_or_error().

@@ -62,10 +62,6 @@ start_queue_collector(SupPid, Identity) ->

%%----------------------------------------------------------------------------

init([]) ->
init(SupFlags) ->
    ?LG_PROCESS_TYPE(connection_helper_sup),
    SupFlags = #{strategy => one_for_one,
                 intensity => 10,
                 period => 10,
                 auto_shutdown => any_significant},
    {ok, {SupFlags, []}}.
@@ -19,7 +19,10 @@
-behaviour(supervisor).
-behaviour(ranch_protocol).

-export([start_link/3, reader/1]).
-export([start_link/3,
         reader/1,
         start_connection_helper_sup/2
        ]).

-export([init/1]).

@@ -27,40 +30,17 @@

%%----------------------------------------------------------------------------

-spec start_link(any(), module(), any()) ->
-spec start_link(ranch:ref(), module(), any()) ->
    {'ok', pid(), pid()}.

start_link(Ref, _Transport, _Opts) ->
    {ok, SupPid} = supervisor:start_link(?MODULE, []),
    %% We need to get channels in the hierarchy here so they get shut
    %% down after the reader, so the reader gets a chance to terminate
    %% them cleanly. But for 1.0 readers we can't start the real
    %% ch_sup_sup (because we don't know if we will be 0-9-1 or 1.0) -
    %% so we add another supervisor into the hierarchy.
    %%
    %% This supervisor also acts as an intermediary for heartbeaters and
    %% the queue collector process, since these must not be siblings of the
    %% reader due to the potential for deadlock if they are added/restarted
    %% whilst the supervision tree is shutting down.
    {ok, HelperSup} =
        supervisor:start_child(
          SupPid,
          #{
            id => helper_sup,
            start => {rabbit_connection_helper_sup, start_link, []},
            restart => transient,
            significant => true,
            shutdown => infinity,
            type => supervisor,
            modules => [rabbit_connection_helper_sup]
           }
         ),
    {ok, ReaderPid} =
        supervisor:start_child(
          SupPid,
          #{
            id => reader,
            start => {rabbit_reader, start_link, [HelperSup, Ref]},
            start => {rabbit_reader, start_link, [Ref]},
            restart => transient,
            significant => true,
            shutdown => ?WORKER_WAIT,

@@ -75,6 +55,20 @@ start_link(Ref, _Transport, _Opts) ->
reader(Pid) ->
    hd(rabbit_misc:find_child(Pid, reader)).

-spec start_connection_helper_sup(pid(), supervisor:sup_flags()) ->
    supervisor:startchild_ret().
start_connection_helper_sup(ConnectionSupPid, ConnectionHelperSupFlags) ->
    supervisor:start_child(
      ConnectionSupPid,
      #{
        id => helper_sup,
        start => {rabbit_connection_helper_sup, start_link, [ConnectionHelperSupFlags]},
        restart => transient,
        significant => true,
        shutdown => infinity,
        type => supervisor
       }).

%%--------------------------------------------------------------------------

init([]) ->
@@ -123,6 +123,8 @@
-rabbit_feature_flag(
   {message_containers,
    #{desc => "Message containers.",
      %%TODO Once lower version node in mixed versions is bumped to 3.13,
      %% make 'required' for upgrading AMQP 1.0 from 3.13 to 4.0
      stability => stable,
      depends_on => [feature_flags_v2]
     }}).

@@ -156,3 +158,9 @@
      stability => stable,
      depends_on => [stream_queue]
     }}).

-rabbit_feature_flag(
   {credit_api_v2,
    #{desc => "Credit API v2 between queue clients and queue processes",
      stability => stable
     }}).
@ -74,6 +74,8 @@
|
|||
chunk_disk_msgs/3]).
|
||||
-endif.
|
||||
|
||||
-import(serial_number, [add/2, diff/2]).
|
||||
|
||||
%% command records representing all the protocol actions that are supported
|
||||
-record(enqueue, {pid :: option(pid()),
|
||||
seq :: option(msg_seqno()),
|
||||
|
@ -95,7 +97,7 @@
|
|||
msg_ids :: [msg_id()]}).
|
||||
-record(credit, {consumer_id :: consumer_id(),
|
||||
credit :: non_neg_integer(),
|
||||
delivery_count :: non_neg_integer(),
|
||||
delivery_count :: rabbit_queue_type:delivery_count(),
|
||||
drain :: boolean()}).
|
||||
-record(purge, {}).
|
||||
-record(purge_nodes, {nodes :: [node()]}).
|
||||
|
@ -130,7 +132,6 @@
|
|||
delivery/0,
|
||||
command/0,
|
||||
credit_mode/0,
|
||||
consumer_tag/0,
|
||||
consumer_meta/0,
|
||||
consumer_id/0,
|
||||
client_msg/0,
|
||||
|
@ -184,8 +185,8 @@ update_config(Conf, State) ->
|
|||
% msg_ids are scoped per consumer
|
||||
% ra_indexes holds all raft indexes for enqueues currently on queue
|
||||
-spec apply(ra_machine:command_meta_data(), command(), state()) ->
|
||||
{state(), Reply :: term(), ra_machine:effects()} |
|
||||
{state(), Reply :: term()}.
|
||||
{state(), ra_machine:reply(), ra_machine:effects() | ra_machine:effect()} |
|
||||
{state(), ra_machine:reply()}.
|
||||
apply(Meta, #enqueue{pid = From, seq = Seq,
|
||||
msg = RawMsg}, State00) ->
|
||||
apply_enqueue(Meta, From, Seq, RawMsg, State00);
|
||||
|
@ -276,59 +277,92 @@ apply(#{index := Idx} = Meta,
|
|||
_ ->
|
||||
{State00, ok, []}
|
||||
end;
|
||||
apply(Meta, #credit{credit = NewCredit, delivery_count = RemoteDelCnt,
|
||||
drain = Drain, consumer_id = ConsumerId},
|
||||
apply(Meta, #credit{credit = LinkCreditRcv, delivery_count = DeliveryCountRcv,
|
||||
drain = Drain, consumer_id = ConsumerId = {CTag, CPid}},
|
||||
#?MODULE{consumers = Cons0,
|
||||
service_queue = ServiceQueue0,
|
||||
waiting_consumers = Waiting0} = State0) ->
|
||||
case Cons0 of
|
||||
#{ConsumerId := #consumer{delivery_count = DelCnt} = Con0} ->
|
||||
%% this can go below 0 when credit is reduced
|
||||
C = max(0, RemoteDelCnt + NewCredit - DelCnt),
|
||||
#{ConsumerId := #consumer{delivery_count = DeliveryCountSnd,
|
||||
cfg = Cfg} = Con0} ->
|
||||
LinkCreditSnd = link_credit_snd(DeliveryCountRcv, LinkCreditRcv, DeliveryCountSnd, Cfg),
|
||||
%% grant the credit
|
||||
Con1 = Con0#consumer{credit = C},
|
||||
ServiceQueue = maybe_queue_consumer(ConsumerId, Con1,
|
||||
ServiceQueue0),
|
||||
Cons = maps:put(ConsumerId, Con1, Cons0),
|
||||
{State1, ok, Effects} =
|
||||
checkout(Meta, State0,
|
||||
State0#?MODULE{service_queue = ServiceQueue,
|
||||
consumers = Cons}, []),
|
||||
Response = {send_credit_reply, messages_ready(State1)},
|
||||
%% by this point all checkouts for the updated credit value
|
||||
%% should be processed so we can evaluate the drain
|
||||
case Drain of
|
||||
false ->
|
||||
%% just return the result of the checkout
|
||||
{State1, Response, Effects};
|
||||
Con1 = Con0#consumer{credit = LinkCreditSnd},
|
||||
ServiceQueue = maybe_queue_consumer(ConsumerId, Con1, ServiceQueue0),
|
||||
State1 = State0#?MODULE{service_queue = ServiceQueue,
|
||||
consumers = maps:update(ConsumerId, Con1, Cons0)},
|
||||
{State2, ok, Effects} = checkout(Meta, State0, State1, []),
|
||||
|
||||
#?MODULE{consumers = Cons1 = #{ConsumerId := Con2}} = State2,
|
||||
#consumer{credit = PostCred,
|
||||
delivery_count = PostDeliveryCount} = Con2,
|
||||
Available = messages_ready(State2),
|
||||
case credit_api_v2(Cfg) of
|
||||
true ->
|
||||
Con = #consumer{credit = PostCred} =
|
||||
maps:get(ConsumerId, State1#?MODULE.consumers),
|
||||
%% add the outstanding credit to the delivery count
|
||||
DeliveryCount = Con#consumer.delivery_count + PostCred,
|
||||
Consumers = maps:put(ConsumerId,
|
||||
Con#consumer{delivery_count = DeliveryCount,
|
||||
credit = 0},
|
||||
State1#?MODULE.consumers),
|
||||
Drained = Con#consumer.credit,
|
||||
{CTag, _} = ConsumerId,
|
||||
{State1#?MODULE{consumers = Consumers},
|
||||
%% returning a multi response with two client actions
|
||||
%% for the channel to execute
|
||||
{multi, [Response, {send_drained, {CTag, Drained}}]},
|
||||
Effects}
|
||||
{Credit, DeliveryCount, State} =
|
||||
case Drain andalso PostCred > 0 of
|
||||
true ->
|
||||
AdvancedDeliveryCount = add(PostDeliveryCount, PostCred),
|
||||
ZeroCredit = 0,
|
||||
Con = Con2#consumer{delivery_count = AdvancedDeliveryCount,
|
||||
credit = ZeroCredit},
|
||||
Cons = maps:update(ConsumerId, Con, Cons1),
|
||||
State3 = State2#?MODULE{consumers = Cons},
|
||||
{ZeroCredit, AdvancedDeliveryCount, State3};
|
||||
false ->
|
||||
{PostCred, PostDeliveryCount, State2}
|
||||
end,
|
||||
%% We must send to queue client delivery effects before credit_reply such
|
||||
%% that session process can send to AMQP 1.0 client TRANSFERs before FLOW.
|
||||
{State, ok, Effects ++ [{send_msg, CPid,
|
||||
{credit_reply, CTag, DeliveryCount, Credit, Available, Drain},
|
||||
?DELIVERY_SEND_MSG_OPTS}]};
|
||||
false ->
|
||||
%% We must always send a send_credit_reply because basic.credit is synchronous.
|
||||
%% Additionally, we keep the bug of credit API v1 that we send to queue client the
|
||||
%% send_drained reply before the delivery effects (resulting in the wrong behaviour
|
||||
%% that the session process sends to AMQP 1.0 client the FLOW before the TRANSFERs).
|
||||
%% We have to keep this bug because old rabbit_fifo_client implementations expect
|
||||
%% a send_drained Ra reply (they can't handle such a Ra effect).
|
||||
CreditReply = {send_credit_reply, Available},
|
||||
case Drain of
|
||||
true ->
|
||||
AdvancedDeliveryCount = PostDeliveryCount + PostCred,
|
||||
Con = Con2#consumer{delivery_count = AdvancedDeliveryCount,
|
||||
credit = 0},
|
||||
Cons = maps:update(ConsumerId, Con, Cons1),
|
||||
State = State2#?MODULE{consumers = Cons},
|
||||
Reply = {multi, [CreditReply, {send_drained, {CTag, PostCred}}]},
|
||||
{State, Reply, Effects};
|
||||
false ->
|
||||
{State2, CreditReply, Effects}
|
||||
end
|
||||
end;
|
||||
_ when Waiting0 /= [] ->
|
||||
%% there are waiting consuemrs
|
||||
%%TODO next time when we bump the machine version:
|
||||
%% 1. Do not put consumer at head of waiting_consumers if NewCredit == 0
|
||||
%% to reduce likelihood of activating a 0 credit consumer.
|
||||
%% 2. Support Drain == true, i.e. advance delivery-count, consuming all link-credit since there
|
||||
%% are no messages available for an inactive consumer and send credit_reply with Drain=true.
|
||||
case lists:keytake(ConsumerId, 1, Waiting0) of
|
||||
{value, {_, Con0 = #consumer{delivery_count = DelCnt}}, Waiting} ->
|
||||
%% the consumer is a waiting one
|
||||
{value, {_, Con0 = #consumer{delivery_count = DeliveryCountSnd,
|
||||
cfg = Cfg}}, Waiting} ->
|
||||
LinkCreditSnd = link_credit_snd(DeliveryCountRcv, LinkCreditRcv, DeliveryCountSnd, Cfg),
|
||||
%% grant the credit
|
||||
C = max(0, RemoteDelCnt + NewCredit - DelCnt),
|
||||
Con = Con0#consumer{credit = C},
|
||||
Con = Con0#consumer{credit = LinkCreditSnd},
|
||||
State = State0#?MODULE{waiting_consumers =
|
||||
[{ConsumerId, Con} | Waiting]},
|
||||
{State, {send_credit_reply, messages_ready(State)}};
|
||||
%% No messages are available for inactive consumers.
|
||||
Available = 0,
|
||||
case credit_api_v2(Cfg) of
|
||||
true ->
|
||||
{State, ok,
|
||||
{send_msg, CPid,
|
||||
{credit_reply, CTag, DeliveryCountSnd, LinkCreditSnd, Available, false},
|
||||
?DELIVERY_SEND_MSG_OPTS}};
|
||||
false ->
|
||||
{State, {send_credit_reply, Available}}
|
||||
end;
|
||||
false ->
|
||||
{State0, ok}
|
||||
end;
|
||||
|
@@ -1240,12 +1274,12 @@ query_consumers(#?MODULE{consumers = Consumers,
FromConsumers =
maps:fold(fun (_, #consumer{status = cancelled}, Acc) ->
Acc;
({Tag, Pid},
(Key = {Tag, Pid},
#consumer{cfg = #consumer_cfg{meta = Meta}} = Consumer,
Acc) ->
{Active, ActivityStatus} =
ActiveActivityStatusFun({Tag, Pid}, Consumer),
maps:put({Tag, Pid},
ActiveActivityStatusFun(Key, Consumer),
maps:put(Key,
{Pid, Tag,
maps:get(ack, Meta, undefined),
maps:get(prefetch, Meta, undefined),
@@ -1258,12 +1292,12 @@ query_consumers(#?MODULE{consumers = Consumers,
FromWaitingConsumers =
lists:foldl(fun ({_, #consumer{status = cancelled}}, Acc) ->
Acc;
({{Tag, Pid},
(Key = {{Tag, Pid},
#consumer{cfg = #consumer_cfg{meta = Meta}} = Consumer},
Acc) ->
{Active, ActivityStatus} =
ActiveActivityStatusFun({Tag, Pid}, Consumer),
maps:put({Tag, Pid},
ActiveActivityStatusFun(Key, Consumer),
maps:put(Key,
{Pid, Tag,
maps:get(ack, Meta, undefined),
maps:get(prefetch, Meta, undefined),
@@ -2032,7 +2066,7 @@ get_next_msg(#?MODULE{returns = Returns0,
delivery_effect({CTag, CPid}, [{MsgId, ?MSG(Idx, Header)}],
#?MODULE{msg_cache = {Idx, RawMsg}}) ->
{send_msg, CPid, {delivery, CTag, [{MsgId, {Header, RawMsg}}]},
[local, ra_event]};
?DELIVERY_SEND_MSG_OPTS};
delivery_effect({CTag, CPid}, Msgs, _State) ->
RaftIdxs = lists:foldr(fun ({_, ?MSG(I, _)}, Acc) ->
[I | Acc]
@@ -2043,7 +2077,7 @@ delivery_effect({CTag, CPid}, Msgs, _State) ->
fun (Cmd, {MsgId, ?MSG(_Idx, Header)}) ->
{MsgId, {Header, get_msg(Cmd)}}
end, Log, Msgs),
[{send_msg, CPid, {delivery, CTag, DelMsgs}, [local, ra_event]}]
[{send_msg, CPid, {delivery, CTag, DelMsgs}, ?DELIVERY_SEND_MSG_OPTS}]
end,
{local, node(CPid)}}.
@@ -2078,21 +2112,25 @@ checkout_one(#{system_time := Ts} = Meta, ExpiredMsg0, InitState0, Effects0) ->
%% recurse without consumer on queue
checkout_one(Meta, ExpiredMsg,
InitState#?MODULE{service_queue = SQ1}, Effects1);
#consumer{status = cancelled} ->
checkout_one(Meta, ExpiredMsg,
InitState#?MODULE{service_queue = SQ1}, Effects1);
#consumer{status = suspected_down} ->
#consumer{status = S}
when S =:= cancelled orelse
S =:= suspected_down ->
checkout_one(Meta, ExpiredMsg,
InitState#?MODULE{service_queue = SQ1}, Effects1);
#consumer{checked_out = Checked0,
next_msg_id = Next,
credit = Credit,
delivery_count = DelCnt} = Con0 ->
delivery_count = DelCnt0,
cfg = Cfg} = Con0 ->
Checked = maps:put(Next, ConsumerMsg, Checked0),
DelCnt = case credit_api_v2(Cfg) of
true -> add(DelCnt0, 1);
false -> DelCnt0 + 1
end,
Con = Con0#consumer{checked_out = Checked,
next_msg_id = Next + 1,
credit = Credit - 1,
delivery_count = DelCnt + 1},
delivery_count = DelCnt},
Size = get_header(size, get_msg_header(ConsumerMsg)),
State = update_or_remove_sub(
Meta, ConsumerId, Con,
@@ -2186,11 +2224,11 @@ update_or_remove_sub(_Meta, ConsumerId,
#?MODULE{consumers = Cons,
service_queue = ServiceQueue} = State) ->
State#?MODULE{consumers = maps:put(ConsumerId, Con, Cons),
service_queue = uniq_queue_in(ConsumerId, Con, ServiceQueue)}.
service_queue = maybe_queue_consumer(ConsumerId, Con, ServiceQueue)}.

uniq_queue_in(Key, #consumer{credit = Credit,
status = up,
cfg = #consumer_cfg{priority = P}}, ServiceQueue)
maybe_queue_consumer(Key, #consumer{credit = Credit,
status = up,
cfg = #consumer_cfg{priority = P}}, ServiceQueue)
when Credit > 0 ->
% TODO: queue:member could surely be quite expensive, however the practical
% number of unique consumers may not be large enough for it to matter
@@ -2200,7 +2238,7 @@ uniq_queue_in(Key, #consumer{credit = Credit,
false ->
priority_queue:in(Key, P, ServiceQueue)
end;
uniq_queue_in(_Key, _Consumer, ServiceQueue) ->
maybe_queue_consumer(_Key, _Consumer, ServiceQueue) ->
ServiceQueue.

update_consumer(Meta, {Tag, Pid} = ConsumerId, ConsumerMeta,
@@ -2218,7 +2256,8 @@ update_consumer(Meta, {Tag, Pid} = ConsumerId, ConsumerMeta,
meta = ConsumerMeta,
priority = Priority,
credit_mode = Mode},
credit = Credit}
credit = Credit,
delivery_count = initial_delivery_count(ConsumerMeta)}
end,
{Consumer, update_or_remove_sub(Meta, ConsumerId, Consumer, State0)};
update_consumer(Meta, {Tag, Pid} = ConsumerId, ConsumerMeta,
@@ -2252,8 +2291,8 @@ update_consumer(Meta, {Tag, Pid} = ConsumerId, ConsumerMeta,
meta = ConsumerMeta,
priority = Priority,
credit_mode = Mode},
credit = Credit},

credit = Credit,
delivery_count = initial_delivery_count(ConsumerMeta)},
{Consumer,
State0#?MODULE{waiting_consumers =
Waiting ++ [{ConsumerId, Consumer}]}}
@@ -2277,16 +2316,6 @@ credit_mode(#{machine_version := Vsn}, Credit, simple_prefetch)
credit_mode(_, _, Mode) ->
Mode.

maybe_queue_consumer(ConsumerId, #consumer{credit = Credit} = Con,
ServiceQueue0) ->
case Credit > 0 of
true ->
% consumer needs service - check if already on service queue
uniq_queue_in(ConsumerId, Con, ServiceQueue0);
false ->
ServiceQueue0
end.

%% creates a dehydrated version of the current state to be cached and
%% potentially used to for a snaphot at a later point
dehydrate_state(#?MODULE{cfg = #cfg{},
@@ -2363,8 +2392,8 @@ make_return(ConsumerId, MsgIds) ->
make_discard(ConsumerId, MsgIds) ->
#discard{consumer_id = ConsumerId, msg_ids = MsgIds}.

-spec make_credit(consumer_id(), non_neg_integer(), non_neg_integer(),
boolean()) -> protocol().
-spec make_credit(consumer_id(), rabbit_queue_type:credit(),
non_neg_integer(), boolean()) -> protocol().
make_credit(ConsumerId, Credit, DeliveryCount, Drain) ->
#credit{consumer_id = ConsumerId,
credit = Credit,
@@ -2563,3 +2592,26 @@ get_msg(#enqueue{msg = M}) ->
M;
get_msg(#requeue{msg = M}) ->
M.

-spec initial_delivery_count(consumer_meta()) ->
rabbit_queue_type:delivery_count().
initial_delivery_count(#{initial_delivery_count := Count}) ->
%% credit API v2
Count;
initial_delivery_count(_) ->
%% credit API v1
0.

-spec credit_api_v2(#consumer_cfg{}) ->
boolean().
credit_api_v2(#consumer_cfg{meta = ConsumerMeta}) ->
maps:is_key(initial_delivery_count, ConsumerMeta).

%% AMQP 1.0 §2.6.7
link_credit_snd(DeliveryCountRcv, LinkCreditRcv, DeliveryCountSnd, ConsumerCfg) ->
C = case credit_api_v2(ConsumerCfg) of
true -> diff(add(DeliveryCountRcv, LinkCreditRcv), DeliveryCountSnd);
false -> DeliveryCountRcv + LinkCreditRcv - DeliveryCountSnd
end,
%% C can be negative when receiver decreases credits while messages are in flight.
max(0, C).
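To make the serial-number arithmetic used by `link_credit_snd/4` above concrete, here is a brief editorial sketch (not part of the diff; the values are made up) of the AMQP 1.0 §2.6.7 calculation near the 32-bit wrap-around, using the `serial_number` helpers this commit relies on elsewhere:

```
%% Editorial example only:
%% link-credit = delivery-count-rcv + link-credit-rcv - delivery-count-snd,
%% computed with RFC 1982 serial number arithmetic so wrap-around is handled.
link_credit_snd_example() ->
    DeliveryCountRcv = 16#FFFFFFFE,  %% receiver's last known delivery-count, about to wrap
    LinkCreditRcv    = 5,            %% credit granted by the incoming FLOW
    DeliveryCountSnd = 1,            %% sender's delivery-count has already wrapped past 0
    C = serial_number:diff(
          serial_number:add(DeliveryCountRcv, LinkCreditRcv), %% 16#FFFFFFFE + 5 wraps to 3
          DeliveryCountSnd),                                  %% 3 - 1 = 2 credits remain
    max(0, C).                                                %% clamp, as in link_credit_snd/4
```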
@@ -17,6 +17,8 @@
is_list(H) orelse
(is_map(H) andalso is_map_key(size, H))).

-define(DELIVERY_SEND_MSG_OPTS, [local, ra_event]).

-type optimised_tuple(A, B) :: nonempty_improper_list(A, B).

-type option(T) :: undefined | T.
@@ -56,14 +58,10 @@
-type delivery_msg() :: {msg_id(), {msg_header(), raw_msg()}}.
%% A tuple consisting of the message id, and the headered message.

-type consumer_tag() :: binary().
%% An arbitrary binary tag used to distinguish between different consumers
%% set up by the same process. See: {@link rabbit_fifo_client:checkout/3.}

-type delivery() :: {delivery, consumer_tag(), [delivery_msg()]}.
-type delivery() :: {delivery, rabbit_types:ctag(), [delivery_msg()]}.
%% Represents the delivery of one or more rabbit_fifo messages.

-type consumer_id() :: {consumer_tag(), pid()}.
-type consumer_id() :: {rabbit_types:ctag(), pid()}.
%% The entity that receives messages. Uniquely identifies a consumer.

-type credit_mode() :: credited |
@@ -81,7 +79,10 @@
-type consumer_meta() :: #{ack => boolean(),
username => binary(),
prefetch => non_neg_integer(),
args => list()}.
args => list(),
%% set if and only if credit API v2 is in use
initial_delivery_count => rabbit_queue_type:delivery_count()
}.
%% static meta data associated with a consumer

-type applied_mfa() :: {module(), atom(), list()}.
@@ -101,7 +102,7 @@
-record(consumer_cfg,
{meta = #{} :: consumer_meta(),
pid :: pid(),
tag :: consumer_tag(),
tag :: rabbit_types:ctag(),
%% the mode of how credit is incremented
%% simple_prefetch: credit is re-filled as deliveries are settled
%% or returned.
@@ -119,9 +120,8 @@
%% max number of messages that can be sent
%% decremented for each delivery
credit = 0 :: non_neg_integer(),
%% total number of checked out messages - ever
%% incremented for each delivery
delivery_count = 0 :: non_neg_integer()
%% AMQP 1.0 §2.6.7
delivery_count :: rabbit_queue_type:delivery_count()
}).

-type consumer() :: #consumer{}.
@@ -200,7 +200,7 @@
dlx = rabbit_fifo_dlx:init() :: rabbit_fifo_dlx:state(),
msg_bytes_enqueue = 0 :: non_neg_integer(),
msg_bytes_checkout = 0 :: non_neg_integer(),
%% waiting consumers, one is picked active consumer is cancelled or dies
%% one is picked if active consumer is cancelled or dies
%% used only when single active consumer is on
waiting_consumers = [] :: [{consumer_id(), consumer()}],
last_active :: option(non_neg_integer()),
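As a quick editorial illustration (not part of the diff; the maps are invented) of how the new `initial_delivery_count` key in `consumer_meta()` doubles as the credit API v2 marker checked by `credit_api_v2/1` above:

```
%% Hypothetical consumer meta maps, for illustration only.
MetaV2 = #{ack => true, username => <<"guest">>, initial_delivery_count => 0},
MetaV1 = #{ack => true, username => <<"guest">>},
true  = maps:is_key(initial_delivery_count, MetaV2),  %% consumer negotiated credit API v2
false = maps:is_key(initial_delivery_count, MetaV1).  %% old client: fall back to credit API v1
```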
@@ -22,7 +22,8 @@
settle/3,
return/3,
discard/3,
credit/4,
credit_v1/4,
credit/6,
handle_ra_event/4,
untracked_enqueue/2,
purge/1,
@@ -39,15 +40,16 @@
-define(COMMAND_TIMEOUT, 30000).

-type seq() :: non_neg_integer().
-type action() :: {send_credit_reply, Available :: non_neg_integer()} |
{send_drained, CTagCredit ::
{rabbit_fifo:consumer_tag(), non_neg_integer()}} |
rabbit_queue_type:action().
-type actions() :: [action()].

-record(consumer, {last_msg_id :: seq() | -1 | undefined,
ack = false :: boolean(),
delivery_count = 0 :: non_neg_integer()}).
%% 'echo' field from latest FLOW, see AMQP 1.0 §2.7.4
%% Quorum queue server will always echo back to us,
%% but we only emit a credit_reply if Echo=true
echo :: boolean(),
%% Remove this field when feature flag credit_api_v2 becomes required.
delivery_count :: {credit_api_v1, rabbit_queue_type:delivery_count()} | credit_api_v2
}).

-record(cfg, {servers = [] :: [ra:server_id()],
soft_limit = ?SOFT_LIMIT :: non_neg_integer(),
@@ -65,18 +67,14 @@
{[seq()], [seq()], [seq()]}},
pending = #{} :: #{seq() =>
{term(), rabbit_fifo:command()}},
consumer_deliveries = #{} :: #{rabbit_fifo:consumer_tag() =>
consumer_deliveries = #{} :: #{rabbit_types:ctag() =>
#consumer{}},
timer_state :: term()
}).

-opaque state() :: #state{}.

-export_type([
state/0,
actions/0
]).

-export_type([state/0]).

%% @doc Create the initial state for a new rabbit_fifo sessions. A state is needed
%% to interact with a rabbit_fifo queue using @module.
@@ -111,7 +109,7 @@ init(Servers, SoftLimit) ->
%% by the {@link handle_ra_event/2. handle_ra_event/2} function.
-spec enqueue(rabbit_amqqueue:name(), Correlation :: term(),
Msg :: term(), State :: state()) ->
{ok, state(), actions()} | {reject_publish, state()}.
{ok, state(), rabbit_queue_type:actions()} | {reject_publish, state()}.
enqueue(QName, Correlation, Msg,
#state{queue_status = undefined,
next_enqueue_seq = 1,
@@ -177,7 +175,7 @@ enqueue(QName, Correlation, Msg,
%% by the {@link handle_ra_event/2. handle_ra_event/2} function.
%%
-spec enqueue(rabbit_amqqueue:name(), Msg :: term(), State :: state()) ->
{ok, state(), actions()} | {reject_publish, state()}.
{ok, state(), rabbit_queue_type:actions()} | {reject_publish, state()}.
enqueue(QName, Msg, State) ->
enqueue(QName, undefined, Msg, State).

@@ -193,7 +191,7 @@ enqueue(QName, Msg, State) ->
%% @param State The {@module} state.
%%
%% @returns `{ok, IdMsg, State}' or `{error | timeout, term()}'
-spec dequeue(rabbit_amqqueue:name(), rabbit_fifo:consumer_tag(),
-spec dequeue(rabbit_amqqueue:name(), rabbit_types:ctag(),
Settlement :: settled | unsettled, state()) ->
{ok, non_neg_integer(), term(), non_neg_integer()}
| {empty, state()} | {error | timeout, term()}.
@@ -239,7 +237,7 @@ add_delivery_count_header(Msg, Count) ->
%% @param MsgIds the message ids received with the {@link rabbit_fifo:delivery/0.}
%% @param State the {@module} state
%%
-spec settle(rabbit_fifo:consumer_tag(), [rabbit_fifo:msg_id()], state()) ->
-spec settle(rabbit_types:ctag(), [rabbit_fifo:msg_id()], state()) ->
{state(), list()}.
settle(ConsumerTag, [_|_] = MsgIds, #state{slow = false} = State0) ->
ServerId = pick_server(State0),
@@ -267,7 +265,7 @@ settle(ConsumerTag, [_|_] = MsgIds,
%% @returns
%% `{State, list()}' if the command was successfully sent.
%%
-spec return(rabbit_fifo:consumer_tag(), [rabbit_fifo:msg_id()], state()) ->
-spec return(rabbit_types:ctag(), [rabbit_fifo:msg_id()], state()) ->
{state(), list()}.
return(ConsumerTag, [_|_] = MsgIds, #state{slow = false} = State0) ->
ServerId = pick_server(State0),
@@ -292,7 +290,7 @@ return(ConsumerTag, [_|_] = MsgIds,
%% @param MsgIds the message ids to discard
%% from {@link rabbit_fifo:delivery/0.}
%% @param State the {@module} state
-spec discard(rabbit_fifo:consumer_tag(), [rabbit_fifo:msg_id()], state()) ->
-spec discard(rabbit_types:ctag(), [rabbit_fifo:msg_id()], state()) ->
{state(), list()}.
discard(ConsumerTag, [_|_] = MsgIds, #state{slow = false} = State0) ->
ServerId = pick_server(State0),
@@ -325,7 +323,7 @@ discard(ConsumerTag, [_|_] = MsgIds,
%% @param State The {@module} state.
%%
%% @returns `{ok, State}' or `{error | timeout, term()}'
-spec checkout(rabbit_fifo:consumer_tag(),
-spec checkout(rabbit_types:ctag(),
NumUnsettled :: non_neg_integer(),
CreditMode :: rabbit_fifo:credit_mode(),
Meta :: rabbit_fifo:consumer_meta(),
@@ -362,10 +360,18 @@ checkout(ConsumerTag, NumUnsettled, CreditMode, Meta,
NextMsgId - 1
end
end,
DeliveryCount = case maps:is_key(initial_delivery_count, Meta) of
true -> credit_api_v2;
false -> {credit_api_v1, 0}
end,
SDels = maps:update_with(
ConsumerTag, fun (C) -> C#consumer{ack = Ack} end,
ConsumerTag,
fun (C) -> C#consumer{ack = Ack} end,
#consumer{last_msg_id = LastMsgId,
ack = Ack}, CDels0),
ack = Ack,
echo = false,
delivery_count = DeliveryCount},
CDels0),
{ok, State0#state{leader = Leader,
consumer_deliveries = SDels}};
Err ->
@@ -385,29 +391,45 @@ query_single_active_consumer(#state{leader = Leader}) ->
Err
end.

-spec credit_v1(rabbit_types:ctag(),
Credit :: non_neg_integer(),
Drain :: boolean(),
state()) ->
{state(), rabbit_queue_type:actions()}.
credit_v1(ConsumerTag, Credit, Drain,
#state{consumer_deliveries = CDels} = State0) ->
ConsumerId = consumer_id(ConsumerTag),
#consumer{delivery_count = {credit_api_v1, Count}} = maps:get(ConsumerTag, CDels),
ServerId = pick_server(State0),
Cmd = rabbit_fifo:make_credit(ConsumerId, Credit, Count, Drain),
{send_command(ServerId, undefined, Cmd, normal, State0), []}.

%% @doc Provide credit to the queue
%%
%% This only has an effect if the consumer uses credit mode: credited
%% @param ConsumerTag a unique tag to identify this particular consumer.
%% @param Credit the amount of credit to provide to theq queue
%% @param Credit the amount of credit to provide to the queue
%% @param Drain tells the queue to use up any credit that cannot be immediately
%% fulfilled. (i.e. there are not enough messages on queue to use up all the
%% provided credit).
-spec credit(rabbit_fifo:consumer_tag(),
Credit :: non_neg_integer(),
%% @param Reply true if the queue client requests a credit_reply queue action
-spec credit(rabbit_types:ctag(),
rabbit_queue_type:delivery_count(),
rabbit_queue_type:credit(),
Drain :: boolean(),
Echo :: boolean(),
state()) ->
{state(), actions()}.
credit(ConsumerTag, Credit, Drain,
#state{consumer_deliveries = CDels} = State0) ->
{state(), rabbit_queue_type:actions()}.
credit(ConsumerTag, DeliveryCount, Credit, Drain, Echo,
#state{consumer_deliveries = CDels0} = State0) ->
ConsumerId = consumer_id(ConsumerTag),
%% the last received msgid provides us with the delivery count if we
%% add one as it is 0 indexed
C = maps:get(ConsumerTag, CDels, #consumer{last_msg_id = -1}),
ServerId = pick_server(State0),
Cmd = rabbit_fifo:make_credit(ConsumerId, Credit,
C#consumer.last_msg_id + 1, Drain),
{send_command(ServerId, undefined, Cmd, normal, State0), []}.
Cmd = rabbit_fifo:make_credit(ConsumerId, Credit, DeliveryCount, Drain),
CDels = maps:update_with(ConsumerTag,
fun(C) -> C#consumer{echo = Echo} end,
CDels0),
State = State0#state{consumer_deliveries = CDels},
{send_command(ServerId, undefined, Cmd, normal, State), []}.

%% @doc Cancels a checkout with the rabbit_fifo queue for the consumer tag
%%
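For orientation, a hedged usage sketch of the new `credit/6` (the tag and numbers are invented; only the arity, argument order, and return shape come from the spec and clause above):

```
%% A session process relays a FLOW for consumer <<"ctag-1">> that carried
%% delivery-count 7 and link-credit 50, with drain=false and echo=true.
{State1, []} = rabbit_fifo_client:credit(<<"ctag-1">>, 7, 50, false, true, State0),
%% The credit command is sent asynchronously; a credit_reply (if echo or drain
%% was requested) surfaces later through handle_ra_event/4 as a queue action.
```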
@@ -418,7 +440,7 @@ credit(ConsumerTag, Credit, Drain,
%% @param State The {@module} state.
%%
%% @returns `{ok, State}' or `{error | timeout, term()}'
-spec cancel_checkout(rabbit_fifo:consumer_tag(), state()) ->
-spec cancel_checkout(rabbit_types:ctag(), state()) ->
{ok, state()} | {error | timeout, term()}.
cancel_checkout(ConsumerTag, #state{consumer_deliveries = CDels} = State0) ->
Servers = sorted_servers(State0),
@@ -521,25 +543,25 @@ update_machine_state(Server, Conf) ->
%% with them.</li>
-spec handle_ra_event(rabbit_amqqueue:name(), ra:server_id(),
ra_server_proc:ra_event_body(), state()) ->
{internal, Correlators :: [term()], actions(), state()} |
{rabbit_fifo:client_msg(), state()} | {eol, actions()}.
{internal, Correlators :: [term()], rabbit_queue_type:actions(), state()} |
{rabbit_fifo:client_msg(), state()} | {eol, rabbit_queue_type:actions()}.
handle_ra_event(QName, From, {applied, Seqs},
#state{cfg = #cfg{soft_limit = SftLmt}} = State0) ->

{Corrs, Actions0, State1} = lists:foldl(fun seq_applied/2,
{[], [], State0#state{leader = From}},
Seqs),
{Corrs, ActionsRev, State1} = lists:foldl(fun seq_applied/2,
{[], [], State0#state{leader = From}},
Seqs),
Actions0 = lists:reverse(ActionsRev),
Actions = case Corrs of
[] ->
lists:reverse(Actions0);
Actions0;
_ ->
%%TODO consider using lists:foldr/3 above because
%% Corrs is returned in the wrong order here.
%% The wrong order does not matter much because the channel sorts the
%% sequence numbers before confirming to the client. But rabbit_fifo_client
%% is sequence number agnostic: it handles any correlation terms.
[{settled, QName, Corrs}
| lists:reverse(Actions0)]
[{settled, QName, Corrs} | Actions0]
end,
case maps:size(State1#state.pending) < SftLmt of
true when State1#state.slow == true ->
@@ -572,6 +594,21 @@ handle_ra_event(QName, From, {applied, Seqs},
end;
handle_ra_event(QName, From, {machine, {delivery, _ConsumerTag, _} = Del}, State0) ->
handle_delivery(QName, From, Del, State0);
handle_ra_event(_QName, _From,
{machine, {credit_reply_v1, _CTag, _Credit, _Available, _Drain = false} = Action},
State) ->
{ok, State, [Action]};
handle_ra_event(_QName, _From,
{machine, {credit_reply, CTag, _DeliveryCount, _Credit, _Available, Drain} = Action},
#state{consumer_deliveries = CDels} = State) ->
Actions = case CDels of
#{CTag := #consumer{echo = Echo}}
when Echo orelse Drain ->
[Action];
_ ->
[]
end,
{ok, State, Actions};
handle_ra_event(_QName, _, {machine, {queue_status, Status}},
#state{} = State) ->
%% just set the queue status
@@ -667,14 +704,12 @@ maybe_add_action({multi, Actions}, Acc0, State0) ->
lists:foldl(fun (Act, {Acc, State}) ->
maybe_add_action(Act, Acc, State)
end, {Acc0, State0}, Actions);
maybe_add_action({send_drained, {Tag, Credit}} = Action, Acc,
#state{consumer_deliveries = CDels} = State) ->
%% add credit to consumer delivery_count
C = maps:get(Tag, CDels),
{[Action | Acc],
State#state{consumer_deliveries =
update_consumer(Tag, C#consumer.last_msg_id,
Credit, C, CDels)}};
maybe_add_action({send_drained, {Tag, Credit}}, Acc, State0) ->
%% This function clause should be deleted when
%% feature flag credit_api_v2 becomes required.
State = add_delivery_count(Credit, Tag, State0),
Action = {credit_reply_v1, Tag, Credit, _Avail = 0, _Drain = true},
{[Action | Acc], State};
maybe_add_action(Action, Acc, State) ->
%% anything else is assumed to be an action
{[Action | Acc], State}.
@@ -785,13 +820,20 @@ transform_msgs(QName, QRef, Msgs) ->
{QName, QRef, MsgId, Redelivered, Msg}
end, Msgs).

update_consumer(Tag, LastId, DelCntIncr,
#consumer{delivery_count = D} = C, Consumers) ->
maps:put(Tag,
C#consumer{last_msg_id = LastId,
delivery_count = D + DelCntIncr},
Consumers).
update_consumer(Tag, LastId, DelCntIncr, Consumer, Consumers) ->
D = case Consumer#consumer.delivery_count of
credit_api_v2 -> credit_api_v2;
{credit_api_v1, Count} -> {credit_api_v1, Count + DelCntIncr}
end,
maps:update(Tag,
Consumer#consumer{last_msg_id = LastId,
delivery_count = D},
Consumers).

add_delivery_count(DelCntIncr, Tag, #state{consumer_deliveries = CDels0} = State) ->
Con = #consumer{last_msg_id = LastMsgId} = maps:get(Tag, CDels0),
CDels = update_consumer(Tag, LastMsgId, DelCntIncr, Con, CDels0),
State#state{consumer_deliveries = CDels}.

get_missing_deliveries(State, From, To, ConsumerTag) ->
%% find local server
@@ -93,7 +93,6 @@
-define(MESSAGES_GET_EMPTY, 6).
-define(MESSAGES_REDELIVERED, 7).
-define(MESSAGES_ACKNOWLEDGED, 8).
%% Note: ?NUM_PROTOCOL_QUEUE_TYPE_COUNTERS needs to be up-to-date. See include/rabbit_global_counters.hrl
-define(PROTOCOL_QUEUE_TYPE_COUNTERS,
[
{
@@ -131,13 +130,15 @@
]).

boot_step() ->
%% Protocol counters
init([{protocol, amqp091}]),
[begin
%% Protocol counters
init([{protocol, Proto}]),

%% Protocol & Queue Type counters
init([{protocol, amqp091}, {queue_type, rabbit_classic_queue}]),
init([{protocol, amqp091}, {queue_type, rabbit_quorum_queue}]),
init([{protocol, amqp091}, {queue_type, rabbit_stream_queue}]),
%% Protocol & Queue Type counters
init([{protocol, Proto}, {queue_type, rabbit_classic_queue}]),
init([{protocol, Proto}, {queue_type, rabbit_quorum_queue}]),
init([{protocol, Proto}, {queue_type, rabbit_stream_queue}])
end || Proto <- [amqp091, amqp10]],

%% Dead Letter counters
%%
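A minimal editorial sketch of what the rewritten comprehension above amounts to (no new APIs assumed; it simply calls the same `init/1` once per protocol and queue type):

```
lists:foreach(
  fun(Proto) ->
          %% per-protocol counters
          init([{protocol, Proto}]),
          %% per-protocol, per-queue-type counters
          [init([{protocol, Proto}, {queue_type, QT}])
           || QT <- [rabbit_classic_queue, rabbit_quorum_queue, rabbit_stream_queue]]
  end, [amqp091, amqp10]).
```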
@ -62,12 +62,11 @@
|
|||
%% that's what the limit_prefetch/3, unlimit_prefetch/1,
|
||||
%% get_prefetch_limit/1 API functions are about. They also tell the
|
||||
%% limiter queue state (via the queue) about consumer credit
|
||||
%% changes and message acknowledgement - that's what credit/5 and
|
||||
%% changes and message acknowledgement - that's what credit/4 and
|
||||
%% ack_from_queue/3 are for.
|
||||
%%
|
||||
%% 2. Queues also tell the limiter queue state about the queue
|
||||
%% becoming empty (via drained/1) and consumers leaving (via
|
||||
%% forget_consumer/2).
|
||||
%% 2. Queues also tell the limiter queue state about consumers leaving
|
||||
%% (via forget_consumer/2).
|
||||
%%
|
||||
%% 3. Queues register with the limiter - this happens as part of
|
||||
%% activate/1.
|
||||
|
@ -120,8 +119,8 @@
|
|||
get_prefetch_limit/1, ack/2, pid/1]).
|
||||
%% queue API
|
||||
-export([client/1, activate/1, can_send/3, resume/1, deactivate/1,
|
||||
is_suspended/1, is_consumer_blocked/2, credit/5, ack_from_queue/3,
|
||||
drained/1, forget_consumer/2]).
|
||||
is_suspended/1, is_consumer_blocked/2, credit/4, ack_from_queue/3,
|
||||
forget_consumer/2]).
|
||||
%% callbacks
|
||||
-export([init/1, terminate/2, code_change/3, handle_call/3, handle_cast/2,
|
||||
handle_info/2, prioritise_call/4]).
|
||||
|
@ -136,7 +135,7 @@
|
|||
-type qstate() :: #qstate{pid :: pid() | none,
|
||||
state :: 'dormant' | 'active' | 'suspended'}.
|
||||
|
||||
-type credit_mode() :: 'manual' | 'drain' | 'auto'.
|
||||
-type credit_mode() :: auto | manual.
|
||||
|
||||
%%----------------------------------------------------------------------------
|
||||
|
||||
|
@ -259,18 +258,11 @@ is_consumer_blocked(#qstate{credits = Credits}, CTag) ->
|
|||
{value, #credit{}} -> true
|
||||
end.
|
||||
|
||||
-spec credit
|
||||
(qstate(), rabbit_types:ctag(), non_neg_integer(), credit_mode(),
|
||||
boolean()) ->
|
||||
{boolean(), qstate()}.
|
||||
|
||||
credit(Limiter = #qstate{credits = Credits}, CTag, Crd, Mode, IsEmpty) ->
|
||||
{Res, Cr} =
|
||||
case IsEmpty andalso Mode =:= drain of
|
||||
true -> {true, #credit{credit = 0, mode = manual}};
|
||||
false -> {false, #credit{credit = Crd, mode = Mode}}
|
||||
end,
|
||||
{Res, Limiter#qstate{credits = enter_credit(CTag, Cr, Credits)}}.
|
||||
-spec credit(qstate(), rabbit_types:ctag(), non_neg_integer(), credit_mode()) ->
|
||||
qstate().
|
||||
credit(Limiter = #qstate{credits = Credits}, CTag, Crd, Mode) ->
|
||||
Cr = #credit{credit = Crd, mode = Mode},
|
||||
Limiter#qstate{credits = enter_credit(CTag, Cr, Credits)}.
|
||||
|
||||
-spec ack_from_queue(qstate(), rabbit_types:ctag(), non_neg_integer()) ->
|
||||
{boolean(), qstate()}.
|
||||
|
@ -286,20 +278,6 @@ ack_from_queue(Limiter = #qstate{credits = Credits}, CTag, Credit) ->
|
|||
end,
|
||||
{Unblocked, Limiter#qstate{credits = Credits1}}.
|
||||
|
||||
-spec drained(qstate()) ->
|
||||
{[{rabbit_types:ctag(), non_neg_integer()}], qstate()}.
|
||||
|
||||
drained(Limiter = #qstate{credits = Credits}) ->
|
||||
Drain = fun(C) -> C#credit{credit = 0, mode = manual} end,
|
||||
{CTagCredits, Credits2} =
|
||||
rabbit_misc:gb_trees_fold(
|
||||
fun (CTag, C = #credit{credit = Crd, mode = drain}, {Acc, Creds0}) ->
|
||||
{[{CTag, Crd} | Acc], update_credit(CTag, Drain(C), Creds0)};
|
||||
(_CTag, #credit{credit = _Crd, mode = _Mode}, {Acc, Creds0}) ->
|
||||
{Acc, Creds0}
|
||||
end, {[], Credits}, Credits),
|
||||
{CTagCredits, Limiter#qstate{credits = Credits2}}.
|
||||
|
||||
-spec forget_consumer(qstate(), rabbit_types:ctag()) -> qstate().
|
||||
|
||||
forget_consumer(Limiter = #qstate{credits = Credits}, CTag) ->
|
||||
|
@ -309,13 +287,6 @@ forget_consumer(Limiter = #qstate{credits = Credits}, CTag) ->
|
|||
%% Queue-local code
|
||||
%%----------------------------------------------------------------------------
|
||||
|
||||
%% We want to do all the AMQP 1.0-ish link level credit calculations
|
||||
%% in the queue (to do them elsewhere introduces a ton of
|
||||
%% races). However, it's a big chunk of code that is conceptually very
|
||||
%% linked to the limiter concept. So we get the queue to hold a bit of
|
||||
%% state for us (#qstate.credits), and maintain a fiction that the
|
||||
%% limiter is making the decisions...
|
||||
|
||||
decrement_credit(CTag, Credits) ->
|
||||
case gb_trees:lookup(CTag, Credits) of
|
||||
{value, C = #credit{credit = Credit}} ->
|
||||
|
@ -325,16 +296,10 @@ decrement_credit(CTag, Credits) ->
|
|||
end.
|
||||
|
||||
enter_credit(CTag, C, Credits) ->
|
||||
gb_trees:enter(CTag, ensure_credit_invariant(C), Credits).
|
||||
gb_trees:enter(CTag, C, Credits).
|
||||
|
||||
update_credit(CTag, C, Credits) ->
|
||||
gb_trees:update(CTag, ensure_credit_invariant(C), Credits).
|
||||
|
||||
ensure_credit_invariant(C = #credit{credit = 0, mode = drain}) ->
|
||||
%% Using up all credit implies no need to send a 'drained' event
|
||||
C#credit{mode = manual};
|
||||
ensure_credit_invariant(C) ->
|
||||
C.
|
||||
gb_trees:update(CTag, C, Credits).
|
||||
|
||||
%%----------------------------------------------------------------------------
|
||||
%% gen_server callbacks
|
||||
|
|
|
@ -49,9 +49,7 @@
|
|||
|
||||
-export([
|
||||
local_connections/0,
|
||||
local_non_amqp_connections/0,
|
||||
%% prefer local_connections/0
|
||||
connections_local/0
|
||||
local_non_amqp_connections/0
|
||||
]).
|
||||
|
||||
-include_lib("rabbit_common/include/rabbit.hrl").
|
||||
|
@ -448,19 +446,15 @@ register_connection(Pid) -> pg_local:join(rabbit_connections, Pid).
|
|||
unregister_connection(Pid) -> pg_local:leave(rabbit_connections, Pid).
|
||||
|
||||
-spec connections() -> [rabbit_types:connection()].
|
||||
|
||||
connections() ->
|
||||
Nodes = rabbit_nodes:list_running(),
|
||||
rabbit_misc:append_rpc_all_nodes(Nodes, rabbit_networking, connections_local, [], ?RPC_TIMEOUT).
|
||||
rabbit_misc:append_rpc_all_nodes(Nodes, rabbit_networking, local_connections, [], ?RPC_TIMEOUT).
|
||||
|
||||
-spec local_connections() -> [rabbit_types:connection()].
|
||||
%% @doc Returns pids of AMQP 0-9-1 and AMQP 1.0 connections local to this node.
|
||||
local_connections() ->
|
||||
connections_local().
|
||||
|
||||
-spec connections_local() -> [rabbit_types:connection()].
|
||||
%% @deprecated Prefer {@link local_connections}
|
||||
connections_local() -> pg_local:get_members(rabbit_connections).
|
||||
Amqp091Pids = pg_local:get_members(rabbit_connections),
|
||||
Amqp10Pids = rabbit_amqp1_0:list_local(),
|
||||
Amqp10Pids ++ Amqp091Pids.
|
||||
|
||||
-spec register_non_amqp_connection(pid()) -> ok.
|
||||
|
||||
|
@ -510,21 +504,16 @@ emit_connection_info_all(Nodes, Items, Ref, AggregatorPid) ->
|
|||
emit_connection_info_local(Items, Ref, AggregatorPid) ->
|
||||
rabbit_control_misc:emitting_map_with_exit_handler(
|
||||
AggregatorPid, Ref, fun(Q) -> connection_info(Q, Items) end,
|
||||
connections_local()).
|
||||
local_connections()).
|
||||
|
||||
-spec close_connection(pid(), string()) -> 'ok'.
|
||||
|
||||
close_connection(Pid, Explanation) ->
|
||||
case lists:member(Pid, connections()) of
|
||||
true ->
|
||||
Res = rabbit_reader:shutdown(Pid, Explanation),
|
||||
rabbit_log:info("Closing connection ~tp because ~tp", [Pid, Explanation]),
|
||||
Res;
|
||||
false ->
|
||||
rabbit_log:warning("Asked to close connection ~tp (reason: ~tp) "
|
||||
"but no running cluster node reported it as an active connection. Was it already closed? ",
|
||||
[Pid, Explanation]),
|
||||
ok
|
||||
rabbit_log:info("Closing connection ~tp because ~tp",
|
||||
[Pid, Explanation]),
|
||||
try rabbit_reader:shutdown(Pid, Explanation)
|
||||
catch exit:{Reason, _Location} ->
|
||||
rabbit_log:warning("Could not close connection ~tp (reason: ~tp): ~p",
|
||||
[Pid, Explanation, Reason])
|
||||
end.
|
||||
|
||||
-spec close_connections([pid()], string()) -> 'ok'.
|
||||
|
|
|
@ -8,12 +8,13 @@
|
|||
-module(rabbit_queue_consumers).
|
||||
|
||||
-export([new/0, max_active_priority/1, inactive/1, all/1, all/3, count/0,
|
||||
unacknowledged_message_count/0, add/11, remove/3, erase_ch/2,
|
||||
send_drained/1, deliver/5, record_ack/3, subtract_acks/3,
|
||||
unacknowledged_message_count/0, add/9, remove/3, erase_ch/2,
|
||||
deliver/5, record_ack/3, subtract_acks/3,
|
||||
possibly_unblock/3,
|
||||
resume_fun/0, notify_sent_fun/1, activate_limit_fun/0,
|
||||
credit/7, utilisation/1, capacity/1, is_same/3, get_consumer/1, get/3,
|
||||
consumer_tag/1, get_infos/1]).
|
||||
drained/3, process_credit/5, get_link_state/2,
|
||||
utilisation/1, capacity/1, is_same/3, get_consumer/1, get/3,
|
||||
consumer_tag/1, get_infos/1, parse_prefetch_count/1]).
|
||||
|
||||
-export([deactivate_limit_fun/0]).
|
||||
|
||||
|
@ -30,7 +31,13 @@
|
|||
|
||||
-record(consumer, {tag, ack_required, prefetch, args, user}).
|
||||
|
||||
%% AMQP 1.0 link flow control state, see §2.6.7
|
||||
%% Delete atom credit_api_v1 when feature flag credit_api_v2 becomes required.
|
||||
-record(link_state, {delivery_count :: rabbit_queue_type:delivery_count() | credit_api_v1,
|
||||
credit :: rabbit_queue_type:credit()}).
|
||||
|
||||
%% These are held in our process dictionary
|
||||
%% channel record
|
||||
-record(cr, {ch_pid,
|
||||
monitor_ref,
|
||||
acktags,
|
||||
|
@ -41,7 +48,9 @@
|
|||
%% The limiter itself
|
||||
limiter,
|
||||
%% Internal flow control for queue -> writer
|
||||
unsent_message_count}).
|
||||
unsent_message_count,
|
||||
link_states :: #{rabbit_types:ctag() => #link_state{}}
|
||||
}).
|
||||
|
||||
%%----------------------------------------------------------------------------
|
||||
|
||||
|
@ -120,33 +129,50 @@ count() -> lists:sum([Count || #cr{consumer_count = Count} <- all_ch_record()]).
|
|||
unacknowledged_message_count() ->
|
||||
lists:sum([?QUEUE:len(C#cr.acktags) || C <- all_ch_record()]).
|
||||
|
||||
-spec add(rabbit_amqqueue:name(), ch(), rabbit_types:ctag(), boolean(), pid() | none, boolean(),
|
||||
non_neg_integer(), rabbit_framing:amqp_table(), boolean(),
|
||||
rabbit_types:username(), state())
|
||||
-> state().
|
||||
-spec add(ch(), rabbit_types:ctag(), boolean(), pid() | none, boolean(),
|
||||
%% credit API v1
|
||||
SimplePrefetch :: non_neg_integer() |
|
||||
%% credit API v2
|
||||
{simple_prefetch, non_neg_integer()} | {credited, rabbit_queue_type:delivery_count()},
|
||||
rabbit_framing:amqp_table(),
|
||||
rabbit_types:username(), state()) ->
|
||||
state().
|
||||
|
||||
add(QName, ChPid, CTag, NoAck, LimiterPid, LimiterActive, Prefetch, Args, IsEmpty,
|
||||
add(ChPid, CTag, NoAck, LimiterPid, LimiterActive,
|
||||
ModeOrPrefetch, Args,
|
||||
Username, State = #state{consumers = Consumers,
|
||||
use = CUInfo}) ->
|
||||
C = #cr{consumer_count = Count,
|
||||
limiter = Limiter} = ch_record(ChPid, LimiterPid),
|
||||
C0 = #cr{consumer_count = Count,
|
||||
limiter = Limiter,
|
||||
link_states = LinkStates} = ch_record(ChPid, LimiterPid),
|
||||
Limiter1 = case LimiterActive of
|
||||
true -> rabbit_limiter:activate(Limiter);
|
||||
false -> Limiter
|
||||
end,
|
||||
C1 = C#cr{consumer_count = Count + 1, limiter = Limiter1},
|
||||
update_ch_record(
|
||||
case parse_credit_args(Prefetch, Args) of
|
||||
{0, auto} -> C1;
|
||||
{_Credit, auto} when NoAck -> C1;
|
||||
{Credit, Mode} -> credit_and_drain(QName,
|
||||
C1, CTag, Credit, Mode, IsEmpty)
|
||||
end),
|
||||
C1 = C0#cr{consumer_count = Count + 1,
|
||||
limiter = Limiter1},
|
||||
C = case parse_credit_mode(ModeOrPrefetch, Args) of
|
||||
{0, auto} ->
|
||||
C1;
|
||||
{Credit, auto = Mode} ->
|
||||
case NoAck of
|
||||
true ->
|
||||
C1;
|
||||
false ->
|
||||
Limiter2 = rabbit_limiter:credit(Limiter1, CTag, Credit, Mode),
|
||||
C1#cr{limiter = Limiter2}
|
||||
end;
|
||||
{InitialDeliveryCount, manual} ->
|
||||
C1#cr{link_states = LinkStates#{CTag => #link_state{
|
||||
credit = 0,
|
||||
delivery_count = InitialDeliveryCount}}}
|
||||
end,
|
||||
update_ch_record(C),
|
||||
Consumer = #consumer{tag = CTag,
|
||||
ack_required = not NoAck,
|
||||
prefetch = Prefetch,
|
||||
prefetch = parse_prefetch_count(ModeOrPrefetch),
|
||||
args = Args,
|
||||
user = Username},
|
||||
user = Username},
|
||||
State#state{consumers = add_consumer({ChPid, Consumer}, Consumers),
|
||||
use = update_use(CUInfo, active)}.
|
||||
|
||||
|
@ -159,7 +185,8 @@ remove(ChPid, CTag, State = #state{consumers = Consumers}) ->
|
|||
not_found;
|
||||
C = #cr{consumer_count = Count,
|
||||
limiter = Limiter,
|
||||
blocked_consumers = Blocked} ->
|
||||
blocked_consumers = Blocked,
|
||||
link_states = LinkStates} ->
|
||||
Blocked1 = remove_consumer(ChPid, CTag, Blocked),
|
||||
Limiter1 = case Count of
|
||||
1 -> rabbit_limiter:deactivate(Limiter);
|
||||
|
@ -168,9 +195,10 @@ remove(ChPid, CTag, State = #state{consumers = Consumers}) ->
|
|||
Limiter2 = rabbit_limiter:forget_consumer(Limiter1, CTag),
|
||||
update_ch_record(C#cr{consumer_count = Count - 1,
|
||||
limiter = Limiter2,
|
||||
blocked_consumers = Blocked1}),
|
||||
blocked_consumers = Blocked1,
|
||||
link_states = maps:remove(CTag, LinkStates)}),
|
||||
State#state{consumers =
|
||||
remove_consumer(ChPid, CTag, Consumers)}
|
||||
remove_consumer(ChPid, CTag, Consumers)}
|
||||
end.
|
||||
|
||||
-spec erase_ch(ch(), state()) ->
|
||||
|
@ -192,11 +220,6 @@ erase_ch(ChPid, State = #state{consumers = Consumers}) ->
|
|||
State#state{consumers = remove_consumers(ChPid, Consumers)}}
|
||||
end.
|
||||
|
||||
-spec send_drained(rabbit_amqqueue:name()) -> 'ok'.
|
||||
send_drained(QName) ->
|
||||
[update_ch_record(send_drained(QName, C)) || C <- all_ch_record()],
|
||||
ok.
|
||||
|
||||
-spec deliver(fun ((boolean()) -> {fetch_result(), T}),
|
||||
rabbit_amqqueue:name(), state(), boolean(),
|
||||
none | {ch(), rabbit_types:ctag()} | {ch(), consumer()}) ->
|
||||
|
@ -252,17 +275,37 @@ deliver_to_consumer(FetchFun, E = {ChPid, Consumer}, QName) ->
|
|||
true ->
|
||||
block_consumer(C, E),
|
||||
undelivered;
|
||||
false -> case rabbit_limiter:can_send(C#cr.limiter,
|
||||
Consumer#consumer.ack_required,
|
||||
Consumer#consumer.tag) of
|
||||
{suspend, Limiter} ->
|
||||
block_consumer(C#cr{limiter = Limiter}, E),
|
||||
undelivered;
|
||||
{continue, Limiter} ->
|
||||
{delivered, deliver_to_consumer(
|
||||
FetchFun, Consumer,
|
||||
C#cr{limiter = Limiter}, QName)}
|
||||
end
|
||||
false ->
|
||||
CTag = Consumer#consumer.tag,
|
||||
LinkStates = C#cr.link_states,
|
||||
case maps:find(CTag, LinkStates) of
|
||||
{ok, #link_state{delivery_count = DeliveryCount0,
|
||||
credit = Credit} = LinkState0}
|
||||
when Credit > 0 ->
|
||||
DeliveryCount = case DeliveryCount0 of
|
||||
credit_api_v1 -> DeliveryCount0;
|
||||
_ -> serial_number:add(DeliveryCount0, 1)
|
||||
end,
|
||||
LinkState = LinkState0#link_state{delivery_count = DeliveryCount,
|
||||
credit = Credit - 1},
|
||||
C1 = C#cr{link_states = maps:update(CTag, LinkState, LinkStates)},
|
||||
{delivered, deliver_to_consumer(FetchFun, Consumer, C1, QName)};
|
||||
{ok, _Exhausted} ->
|
||||
block_consumer(C, E),
|
||||
undelivered;
|
||||
error ->
|
||||
case rabbit_limiter:can_send(C#cr.limiter,
|
||||
Consumer#consumer.ack_required,
|
||||
CTag) of
|
||||
{suspend, Limiter} ->
|
||||
block_consumer(C#cr{limiter = Limiter}, E),
|
||||
undelivered;
|
||||
{continue, Limiter} ->
|
||||
{delivered, deliver_to_consumer(
|
||||
FetchFun, Consumer,
|
||||
C#cr{limiter = Limiter}, QName)}
|
||||
end
|
||||
end
|
||||
end.
|
||||
|
||||
deliver_to_consumer(FetchFun,
|
||||
|
@ -349,11 +392,21 @@ possibly_unblock(Update, ChPid, State) ->
|
|||
end
|
||||
end.
|
||||
|
||||
unblock(C = #cr{blocked_consumers = BlockedQ, limiter = Limiter},
|
||||
unblock(C = #cr{blocked_consumers = BlockedQ,
|
||||
limiter = Limiter,
|
||||
link_states = LinkStates},
|
||||
State = #state{consumers = Consumers, use = Use}) ->
|
||||
case lists:partition(
|
||||
fun({_P, {_ChPid, #consumer{tag = CTag}}}) ->
|
||||
rabbit_limiter:is_consumer_blocked(Limiter, CTag)
|
||||
case maps:find(CTag, LinkStates) of
|
||||
{ok, #link_state{credit = Credits}}
|
||||
when Credits > 0 ->
|
||||
false;
|
||||
{ok, _Exhausted} ->
|
||||
true;
|
||||
error ->
|
||||
rabbit_limiter:is_consumer_blocked(Limiter, CTag)
|
||||
end
|
||||
end, priority_queue:to_list(BlockedQ)) of
|
||||
{_, []} ->
|
||||
update_ch_record(C),
|
||||
|
@ -395,28 +448,63 @@ deactivate_limit_fun() ->
|
|||
C#cr{limiter = rabbit_limiter:deactivate(Limiter)}
|
||||
end.
|
||||
|
||||
-spec credit(rabbit_amqqueue:name(), boolean(), integer(), boolean(), ch(),
|
||||
rabbit_types:ctag(),
|
||||
state()) -> 'unchanged' | {'unblocked', state()}.
|
||||
|
||||
credit(QName, IsEmpty, Credit, Drain, ChPid, CTag, State) ->
|
||||
-spec drained(rabbit_queue_type:delivery_count() | credit_api_v1, ch(), rabbit_types:ctag()) ->
|
||||
ok.
|
||||
drained(AdvancedDeliveryCount, ChPid, CTag) ->
|
||||
case lookup_ch(ChPid) of
|
||||
not_found ->
|
||||
unchanged;
|
||||
#cr{limiter = Limiter} = C ->
|
||||
C1 = #cr{limiter = Limiter1} =
|
||||
credit_and_drain(QName, C, CTag, Credit, drain_mode(Drain), IsEmpty),
|
||||
case is_ch_blocked(C1) orelse
|
||||
(not rabbit_limiter:is_consumer_blocked(Limiter, CTag)) orelse
|
||||
rabbit_limiter:is_consumer_blocked(Limiter1, CTag) of
|
||||
true -> update_ch_record(C1),
|
||||
unchanged;
|
||||
false -> unblock(C1, State)
|
||||
end
|
||||
C0 = #cr{link_states = LinkStates = #{CTag := LinkState0}} ->
|
||||
LinkState = LinkState0#link_state{delivery_count = AdvancedDeliveryCount,
|
||||
credit = 0},
|
||||
C = C0#cr{link_states = maps:update(CTag, LinkState, LinkStates)},
|
||||
update_ch_record(C);
|
||||
_ ->
|
||||
ok
|
||||
end.
|
||||
|
||||
drain_mode(true) -> drain;
|
||||
drain_mode(false) -> manual.
|
||||
-spec process_credit(rabbit_queue_type:delivery_count() | credit_api_v1,
|
||||
rabbit_queue_type:credit(), ch(), rabbit_types:ctag(), state()) ->
|
||||
'unchanged' | {'unblocked', state()}.
|
||||
process_credit(DeliveryCountRcv, LinkCredit, ChPid, CTag, State) ->
|
||||
case lookup_ch(ChPid) of
|
||||
#cr{link_states = LinkStates = #{CTag := LinkState = #link_state{delivery_count = DeliveryCountSnd,
|
||||
credit = OldLinkCreditSnd}},
|
||||
unsent_message_count = Count} = C0 ->
|
||||
LinkCreditSnd = case DeliveryCountSnd of
|
||||
credit_api_v1 ->
|
||||
%% LinkCredit refers to LinkCreditSnd
|
||||
LinkCredit;
|
||||
_ ->
|
||||
%% credit API v2
|
||||
%% LinkCredit refers to LinkCreditRcv
|
||||
%% See AMQP §2.6.7
|
||||
serial_number:diff(
|
||||
serial_number:add(DeliveryCountRcv, LinkCredit),
|
||||
DeliveryCountSnd)
|
||||
end,
|
||||
C = C0#cr{link_states = maps:update(CTag, LinkState#link_state{credit = LinkCreditSnd}, LinkStates)},
|
||||
case Count >= ?UNSENT_MESSAGE_LIMIT orelse
|
||||
OldLinkCreditSnd > 0 orelse
|
||||
LinkCreditSnd < 1 of
|
||||
true ->
|
||||
update_ch_record(C),
|
||||
unchanged;
|
||||
false ->
|
||||
unblock(C, State)
|
||||
end;
|
||||
_ ->
|
||||
unchanged
|
||||
end.
|
||||
|
||||
-spec get_link_state(pid(), rabbit_types:ctag()) ->
|
||||
{rabbit_queue_type:delivery_count() | credit_api_v1, rabbit_queue_type:credit()} | not_found.
|
||||
get_link_state(ChPid, CTag) ->
|
||||
case lookup_ch(ChPid) of
|
||||
#cr{link_states = #{CTag := #link_state{delivery_count = DeliveryCount,
|
||||
credit = Credit}}} ->
|
||||
{DeliveryCount, Credit};
|
||||
_ ->
|
||||
not_found
|
||||
end.
|
||||
|
||||
-spec utilisation(state()) -> ratio().
|
||||
utilisation(State) ->
|
||||
|
@ -465,14 +553,39 @@ consumer_tag(#consumer{tag = CTag}) ->
|
|||
|
||||
%%----------------------------------------------------------------------------
|
||||
|
||||
parse_credit_args(Default, Args) ->
|
||||
%% credit API v2 uses mode
|
||||
parse_prefetch_count({simple_prefetch, Prefetch}) ->
|
||||
Prefetch;
|
||||
parse_prefetch_count({credited, _InitialDeliveryCount}) ->
|
||||
0;
|
||||
%% credit API v1 uses prefetch
|
||||
parse_prefetch_count(Prefetch)
|
||||
when is_integer(Prefetch) ->
|
||||
Prefetch.
|
||||
|
||||
-spec parse_credit_mode(rabbit_queue_type:consume_mode(), rabbit_framing:amqp_table()) ->
|
||||
{Prefetch :: non_neg_integer(), auto | manual}.
|
||||
|
||||
%% credit API v2
|
||||
parse_credit_mode({simple_prefetch, Prefetch}, _Args) ->
|
||||
{Prefetch, auto};
|
||||
parse_credit_mode({credited, InitialDeliveryCount}, _Args) ->
|
||||
{InitialDeliveryCount, manual};
|
||||
%% credit API v1
|
||||
%% i.e. below function clause should be deleted when feature flag credit_api_v2 becomes required:
|
||||
parse_credit_mode(Prefetch, Args)
|
||||
when is_integer(Prefetch) ->
|
||||
case rabbit_misc:table_lookup(Args, <<"x-credit">>) of
|
||||
{table, T} -> case {rabbit_misc:table_lookup(T, <<"credit">>),
|
||||
rabbit_misc:table_lookup(T, <<"drain">>)} of
|
||||
{{long, C}, {bool, D}} -> {C, drain_mode(D)};
|
||||
_ -> {Default, auto}
|
||||
end;
|
||||
undefined -> {Default, auto}
|
||||
{table, T} ->
|
||||
case {rabbit_misc:table_lookup(T, <<"credit">>),
|
||||
rabbit_misc:table_lookup(T, <<"drain">>)} of
|
||||
{{long, 0}, {bool, false}} ->
|
||||
{credit_api_v1, manual};
|
||||
_ ->
|
||||
{Prefetch, auto}
|
||||
end;
|
||||
undefined ->
|
||||
{Prefetch, auto}
|
||||
end.
|
||||
|
||||
lookup_ch(ChPid) ->
|
||||
|
@ -492,7 +605,8 @@ ch_record(ChPid, LimiterPid) ->
|
|||
consumer_count = 0,
|
||||
blocked_consumers = priority_queue:new(),
|
||||
limiter = Limiter,
|
||||
unsent_message_count = 0},
|
||||
unsent_message_count = 0,
|
||||
link_states = #{}},
|
||||
put(Key, C),
|
||||
C;
|
||||
C = #cr{} -> C
|
||||
|
@ -524,31 +638,14 @@ block_consumer(C = #cr{blocked_consumers = Blocked}, QEntry) ->
|
|||
is_ch_blocked(#cr{unsent_message_count = Count, limiter = Limiter}) ->
|
||||
Count >= ?UNSENT_MESSAGE_LIMIT orelse rabbit_limiter:is_suspended(Limiter).
|
||||
|
||||
send_drained(QName, C = #cr{ch_pid = ChPid, limiter = Limiter}) ->
|
||||
case rabbit_limiter:drained(Limiter) of
|
||||
{[], Limiter} -> C;
|
||||
{CTagCredits, Limiter2} ->
|
||||
ok = rabbit_classic_queue:send_drained(ChPid, QName, CTagCredits),
|
||||
C#cr{limiter = Limiter2}
|
||||
end.
|
||||
|
||||
credit_and_drain(QName, C = #cr{ch_pid = ChPid, limiter = Limiter},
|
||||
CTag, Credit, Mode, IsEmpty) ->
|
||||
case rabbit_limiter:credit(Limiter, CTag, Credit, Mode, IsEmpty) of
|
||||
{true, Limiter1} ->
|
||||
ok = rabbit_classic_queue:send_drained(ChPid, QName, [{CTag, Credit}]),
|
||||
C#cr{limiter = Limiter1};
|
||||
{false, Limiter1} -> C#cr{limiter = Limiter1}
|
||||
end.
|
||||
|
||||
tags(CList) -> [CTag || {_P, {_ChPid, #consumer{tag = CTag}}} <- CList].
|
||||
|
||||
add_consumer({ChPid, Consumer = #consumer{args = Args}}, Queue) ->
|
||||
add_consumer(Key = {_ChPid, #consumer{args = Args}}, Queue) ->
|
||||
Priority = case rabbit_misc:table_lookup(Args, <<"x-priority">>) of
|
||||
{_, P} -> P;
|
||||
_ -> 0
|
||||
end,
|
||||
priority_queue:in({ChPid, Consumer}, Priority, Queue).
|
||||
priority_queue:in(Key, Priority, Queue).
|
||||
|
||||
remove_consumer(ChPid, CTag, Queue) ->
|
||||
priority_queue:filter(fun ({CP, #consumer{tag = CT}}) ->
|
||||
|
|
|
@ -11,6 +11,7 @@
|
|||
|
||||
-include("amqqueue.hrl").
|
||||
-include_lib("rabbit_common/include/rabbit.hrl").
|
||||
-include_lib("amqp10_common/include/amqp10_types.hrl").
|
||||
|
||||
-export([
|
||||
init/0,
|
||||
|
@ -43,7 +44,8 @@
|
|||
module/2,
|
||||
deliver/4,
|
||||
settle/5,
|
||||
credit/5,
|
||||
credit_v1/5,
|
||||
credit/7,
|
||||
dequeue/5,
|
||||
fold_state/3,
|
||||
is_policy_applicable/2,
|
||||
|
@ -63,11 +65,14 @@
|
|||
|
||||
-type queue_name() :: rabbit_amqqueue:name().
|
||||
-type queue_state() :: term().
|
||||
-type msg_tag() :: term().
|
||||
%% sequence number typically
|
||||
-type correlation() :: term().
|
||||
-type arguments() :: queue_arguments | consumer_arguments.
|
||||
-type queue_type() :: rabbit_classic_queue | rabbit_quorum_queue | rabbit_stream_queue.
|
||||
|
||||
-export_type([queue_type/0]).
|
||||
%% see AMQP 1.0 §2.6.7
|
||||
-type delivery_count() :: sequence_no().
|
||||
%% Link credit can be negative, see AMQP 1.0 §2.6.7
|
||||
-type credit() :: integer().
|
||||
|
||||
-define(STATE, ?MODULE).
|
||||
|
||||
|
@ -83,9 +88,15 @@
|
|||
-type action() ::
|
||||
%% indicate to the queue type module that a message has been delivered
|
||||
%% fully to the queue
|
||||
{settled, Success :: boolean(), [msg_tag()]} |
|
||||
{settled, queue_name(), [correlation()]} |
|
||||
{deliver, rabbit_types:ctag(), boolean(), [rabbit_amqqueue:qmsg()]} |
|
||||
{block | unblock, QueueName :: term()}.
|
||||
{block | unblock, QueueName :: term()} |
|
||||
%% credit API v2
|
||||
{credit_reply, rabbit_types:ctag(), delivery_count(), credit(),
|
||||
Available :: non_neg_integer(), Drain :: boolean()} |
|
||||
%% credit API v1
|
||||
{credit_reply_v1, rabbit_types:ctag(), credit(),
|
||||
Available :: non_neg_integer(), Drain :: boolean()}.
|
||||
|
||||
-type actions() :: [action()].
|
||||
|
||||
|
@ -94,44 +105,42 @@
|
|||
term().
|
||||
|
||||
-record(ctx, {module :: module(),
|
||||
%% "publisher confirm queue accounting"
|
||||
%% queue type implementation should emit a:
|
||||
%% {settle, Success :: boolean(), msg_tag()}
|
||||
%% to either settle or reject the delivery of a
|
||||
%% message to the queue instance
|
||||
%% The queue type module will then emit a {confirm | reject, [msg_tag()}
|
||||
%% action to the channel or channel like process when a msg_tag
|
||||
%% has reached its conclusion
|
||||
state :: queue_state()}).
|
||||
|
||||
|
||||
-record(?STATE, {ctxs = #{} :: #{queue_name() => #ctx{}}
|
||||
}).
|
||||
|
||||
-opaque state() :: #?STATE{}.
|
||||
|
||||
%% Delete atom 'credit_api_v1' when feature flag credit_api_v2 becomes required.
|
||||
-type consume_mode() :: {simple_prefetch, non_neg_integer()} | {credited, Initial :: delivery_count() | credit_api_v1}.
|
||||
-type consume_spec() :: #{no_ack := boolean(),
|
||||
channel_pid := pid(),
|
||||
limiter_pid => pid() | none,
|
||||
limiter_active => boolean(),
|
||||
prefetch_count => non_neg_integer(),
|
||||
mode := consume_mode(),
|
||||
consumer_tag := rabbit_types:ctag(),
|
||||
exclusive_consume => boolean(),
|
||||
args => rabbit_framing:amqp_table(),
|
||||
ok_msg := term(),
|
||||
acting_user := rabbit_types:username()}.
|
||||
|
||||
-type delivery_options() :: #{correlation => term(), %% sequence no typically
|
||||
-type delivery_options() :: #{correlation => correlation(),
|
||||
atom() => term()}.
|
||||
|
||||
-type settle_op() :: 'complete' | 'requeue' | 'discard'.
|
||||
|
||||
-export_type([state/0,
|
||||
consume_mode/0,
|
||||
consume_spec/0,
|
||||
delivery_options/0,
|
||||
action/0,
|
||||
actions/0,
|
||||
settle_op/0]).
|
||||
settle_op/0,
|
||||
queue_type/0,
|
||||
credit/0,
|
||||
correlation/0,
|
||||
delivery_count/0]).
|
||||
|
||||
-callback is_enabled() -> boolean().
|
||||
|
||||
|
@ -179,7 +188,8 @@
|
|||
-callback consume(amqqueue:amqqueue(),
|
||||
consume_spec(),
|
||||
queue_state()) ->
|
||||
{ok, queue_state(), actions()} | {error, term()} |
|
||||
{ok, queue_state(), actions()} |
|
||||
{error, term()} |
|
||||
{protocol_error, Type :: atom(), Reason :: string(), Args :: term()}.
|
||||
|
||||
-callback cancel(amqqueue:amqqueue(),
|
||||
|
@ -207,8 +217,13 @@
|
|||
{queue_state(), actions()} |
|
||||
{'protocol_error', Type :: atom(), Reason :: string(), Args :: term()}.
|
||||
|
||||
-callback credit(queue_name(), rabbit_types:ctag(),
|
||||
non_neg_integer(), Drain :: boolean(), queue_state()) ->
|
||||
%% Delete this callback when feature flag credit_api_v2 becomes required.
|
||||
-callback credit_v1(queue_name(), rabbit_types:ctag(), credit(), Drain :: boolean(), queue_state()) ->
|
||||
{queue_state(), actions()}.
|
||||
|
||||
%% credit API v2
|
||||
-callback credit(queue_name(), rabbit_types:ctag(), delivery_count(), credit(),
|
||||
Drain :: boolean(), Echo :: boolean(), queue_state()) ->
|
||||
{queue_state(), actions()}.
|
||||
|
||||
-callback dequeue(queue_name(), NoAck :: boolean(), LimiterPid :: pid(),
|
||||
|
@ -414,7 +429,9 @@ new(Q, State) when ?is_amqqueue(Q) ->
|
|||
set_ctx(Q, Ctx, State).
|
||||
|
||||
-spec consume(amqqueue:amqqueue(), consume_spec(), state()) ->
|
||||
{ok, state()} | {error, term()}.
|
||||
{ok, state()} |
|
||||
{error, term()} |
|
||||
{protocol_error, Type :: atom(), Reason :: string(), Args :: term()}.
|
||||
consume(Q, Spec, State) ->
|
||||
#ctx{state = CtxState0} = Ctx = get_ctx(Q, State),
|
||||
Mod = amqqueue:get_type(Q),
|
||||
|
@ -629,15 +646,23 @@ settle(#resource{kind = queue} = QRef, Op, CTag, MsgIds, Ctxs) ->
|
|||
end
|
||||
end.
|
||||
|
||||
-spec credit(amqqueue:amqqueue() | queue_name(),
|
||||
rabbit_types:ctag(), non_neg_integer(),
|
||||
boolean(), state()) -> {ok, state(), actions()}.
|
||||
credit(Q, CTag, Credit, Drain, Ctxs) ->
|
||||
%% Delete this function when feature flag credit_api_v2 becomes required.
|
||||
-spec credit_v1(queue_name(), rabbit_types:ctag(), credit(), boolean(), state()) ->
|
||||
{ok, state(), actions()}.
|
||||
credit_v1(QName, CTag, LinkCreditSnd, Drain, Ctxs) ->
|
||||
#ctx{state = State0,
|
||||
module = Mod} = Ctx = get_ctx(Q, Ctxs),
|
||||
QName = amqqueue:get_name(Q),
|
||||
{State, Actions} = Mod:credit(QName, CTag, Credit, Drain, State0),
|
||||
{ok, set_ctx(Q, Ctx#ctx{state = State}, Ctxs), Actions}.
|
||||
module = Mod} = Ctx = get_ctx(QName, Ctxs),
|
||||
{State, Actions} = Mod:credit_v1(QName, CTag, LinkCreditSnd, Drain, State0),
|
||||
{ok, set_ctx(QName, Ctx#ctx{state = State}, Ctxs), Actions}.
|
||||
|
||||
%% credit API v2
|
||||
-spec credit(queue_name(), rabbit_types:ctag(), delivery_count(), credit(), boolean(), boolean(), state()) ->
|
||||
{ok, state(), actions()}.
|
||||
credit(QName, CTag, DeliveryCount, Credit, Drain, Echo, Ctxs) ->
|
||||
#ctx{state = State0,
|
||||
module = Mod} = Ctx = get_ctx(QName, Ctxs),
|
||||
{State, Actions} = Mod:credit(QName, CTag, DeliveryCount, Credit, Drain, Echo, State0),
|
||||
{ok, set_ctx(QName, Ctx#ctx{state = State}, Ctxs), Actions}.
|
||||
|
||||
-spec dequeue(amqqueue:amqqueue(), boolean(),
|
||||
pid(), rabbit_types:ctag(), state()) ->
|
||||
|
|
|
@ -25,7 +25,7 @@
|
|||
delete_immediately/1]).
|
||||
-export([state_info/1, info/2, stat/1, infos/1, infos/2]).
|
||||
-export([settle/5, dequeue/5, consume/3, cancel/5]).
|
||||
-export([credit/5]).
|
||||
-export([credit_v1/5, credit/7]).
|
||||
-export([purge/1]).
|
||||
-export([stateless_deliver/2, deliver/3]).
|
||||
-export([dead_letter_publish/5]).
|
||||
|
@ -130,6 +130,7 @@
|
|||
-define(DELETE_TIMEOUT, 5000).
|
||||
-define(ADD_MEMBER_TIMEOUT, 5000).
|
||||
-define(SNAPSHOT_INTERVAL, 8192). %% the ra default is 4096
|
||||
-define(UNLIMITED_PREFETCH_COUNT, 2000). %% something large for ra
|
||||
|
||||
%%----------- QQ policies ---------------------------------------------------
|
||||
|
||||
|
@ -477,7 +478,7 @@ capabilities() ->
|
|||
<<"x-single-active-consumer">>, <<"x-queue-type">>,
|
||||
<<"x-quorum-initial-group-size">>, <<"x-delivery-limit">>,
|
||||
<<"x-message-ttl">>, <<"x-queue-leader-locator">>],
|
||||
consumer_arguments => [<<"x-priority">>, <<"x-credit">>],
|
||||
consumer_arguments => [<<"x-priority">>],
|
||||
server_named => false}.
|
||||
|
||||
rpc_delete_metrics(QName) ->
|
||||
|
@ -800,8 +801,11 @@ settle(_QName, requeue, CTag, MsgIds, QState) ->
|
|||
settle(_QName, discard, CTag, MsgIds, QState) ->
|
||||
rabbit_fifo_client:discard(quorum_ctag(CTag), MsgIds, QState).
|
||||
|
||||
credit(_QName, CTag, Credit, Drain, QState) ->
|
||||
rabbit_fifo_client:credit(quorum_ctag(CTag), Credit, Drain, QState).
|
||||
credit_v1(_QName, CTag, Credit, Drain, QState) ->
|
||||
rabbit_fifo_client:credit_v1(quorum_ctag(CTag), Credit, Drain, QState).
|
||||
|
||||
credit(_QName, CTag, DeliveryCount, Credit, Drain, Echo, QState) ->
|
||||
rabbit_fifo_client:credit(quorum_ctag(CTag), DeliveryCount, Credit, Drain, Echo, QState).
|
||||
|
||||
-spec dequeue(rabbit_amqqueue:name(), NoAck :: boolean(), pid(),
|
||||
rabbit_types:ctag(), rabbit_fifo_client:state()) ->
|
||||
|
@ -829,7 +833,7 @@ consume(Q, #{limiter_active := true}, _State)
|
|||
consume(Q, Spec, QState0) when ?amqqueue_is_quorum(Q) ->
|
||||
#{no_ack := NoAck,
|
||||
channel_pid := ChPid,
|
||||
prefetch_count := ConsumerPrefetchCount,
|
||||
mode := Mode,
|
||||
consumer_tag := ConsumerTag0,
|
||||
exclusive_consume := ExclusiveConsume,
|
||||
args := Args,
|
||||
|
@ -840,35 +844,33 @@ consume(Q, Spec, QState0) when ?amqqueue_is_quorum(Q) ->
|
|||
QName = amqqueue:get_name(Q),
|
||||
maybe_send_reply(ChPid, OkMsg),
|
||||
ConsumerTag = quorum_ctag(ConsumerTag0),
|
||||
%% A prefetch count of 0 means no limitation,
|
||||
%% let's make it into something large for ra
|
||||
Prefetch0 = case ConsumerPrefetchCount of
|
||||
0 -> 2000;
|
||||
Other -> Other
|
||||
end,
|
||||
%% consumer info is used to describe the consumer properties
|
||||
AckRequired = not NoAck,
|
||||
ConsumerMeta = #{ack => AckRequired,
|
||||
prefetch => ConsumerPrefetchCount,
|
||||
args => Args,
|
||||
username => ActingUser},
|
||||
|
||||
{CreditMode, Credit, Drain} = parse_credit_args(Prefetch0, Args),
|
||||
%% if the mode is credited we should send a separate credit command
|
||||
%% after checkout and give 0 credits initially
|
||||
Prefetch = case CreditMode of
|
||||
credited -> 0;
|
||||
simple_prefetch -> Prefetch0
|
||||
end,
|
||||
{ok, QState1} = rabbit_fifo_client:checkout(ConsumerTag, Prefetch,
|
||||
CreditMode, ConsumerMeta,
|
||||
QState0),
|
||||
QState = case CreditMode of
|
||||
credited when Credit > 0 ->
|
||||
rabbit_fifo_client:credit(ConsumerTag, Credit, Drain,
|
||||
QState1);
|
||||
_ -> QState1
|
||||
end,
|
||||
{CreditMode, EffectivePrefetch, DeclaredPrefetch, ConsumerMeta0} =
|
||||
case Mode of
|
||||
{credited, C} ->
|
||||
Meta = if C =:= credit_api_v1 ->
|
||||
#{};
|
||||
is_integer(C) ->
|
||||
#{initial_delivery_count => C}
|
||||
end,
|
||||
{credited, 0, 0, Meta};
|
||||
{simple_prefetch = M, Declared} ->
|
||||
Effective = case Declared of
|
||||
0 -> ?UNLIMITED_PREFETCH_COUNT;
|
||||
_ -> Declared
|
||||
end,
|
||||
{M, Effective, Declared, #{}}
|
||||
end,
|
||||
ConsumerMeta = maps:merge(
|
||||
ConsumerMeta0,
|
||||
#{ack => AckRequired,
|
||||
prefetch => DeclaredPrefetch,
|
||||
args => Args,
|
||||
username => ActingUser}),
|
||||
{ok, QState} = rabbit_fifo_client:checkout(ConsumerTag, EffectivePrefetch,
|
||||
CreditMode, ConsumerMeta,
|
||||
QState0),
|
||||
case single_active_consumer_on(Q) of
|
||||
true ->
|
||||
%% get the leader from state
|
||||
|
@ -883,10 +885,10 @@ consume(Q, Spec, QState0) when ?amqqueue_is_quorum(Q) ->
|
|||
rabbit_core_metrics:consumer_created(
|
||||
ChPid, ConsumerTag, ExclusiveConsume,
|
||||
AckRequired, QName,
|
||||
ConsumerPrefetchCount, ActivityStatus == single_active, %% Active
|
||||
DeclaredPrefetch, ActivityStatus == single_active, %% Active
|
||||
ActivityStatus, Args),
|
||||
emit_consumer_created(ChPid, ConsumerTag, ExclusiveConsume,
|
||||
AckRequired, QName, Prefetch,
|
||||
AckRequired, QName, DeclaredPrefetch,
|
||||
Args, none, ActingUser),
|
||||
{ok, QState};
|
||||
{error, Error} ->
|
||||
|
@ -898,10 +900,10 @@ consume(Q, Spec, QState0) when ?amqqueue_is_quorum(Q) ->
|
|||
rabbit_core_metrics:consumer_created(
|
||||
ChPid, ConsumerTag, ExclusiveConsume,
|
||||
AckRequired, QName,
|
||||
ConsumerPrefetchCount, true, %% Active
|
||||
DeclaredPrefetch, true, %% Active
|
||||
up, Args),
|
||||
emit_consumer_created(ChPid, ConsumerTag, ExclusiveConsume,
|
||||
AckRequired, QName, Prefetch,
|
||||
AckRequired, QName, DeclaredPrefetch,
|
||||
Args, none, ActingUser),
|
||||
{ok, QState}
|
||||
end.
|
||||
|
@ -1818,20 +1820,6 @@ overflow(<<"reject-publish-dlx">> = V, Def, QName) ->
|
|||
[V, rabbit_misc:rs(QName)]),
|
||||
Def.
|
||||
|
||||
parse_credit_args(Default, Args) ->
|
||||
case rabbit_misc:table_lookup(Args, <<"x-credit">>) of
|
||||
{table, T} ->
|
||||
case {rabbit_misc:table_lookup(T, <<"credit">>),
|
||||
rabbit_misc:table_lookup(T, <<"drain">>)} of
|
||||
{{long, C}, {bool, D}} ->
|
||||
{credited, C, D};
|
||||
_ ->
|
||||
{simple_prefetch, Default, false}
|
||||
end;
|
||||
undefined ->
|
||||
{simple_prefetch, Default, false}
|
||||
end.
|
||||
|
||||
-spec notify_decorators(amqqueue:amqqueue()) -> 'ok'.
|
||||
notify_decorators(Q) when ?is_amqqueue(Q) ->
|
||||
QName = amqqueue:get_name(Q),
@ -43,12 +43,12 @@
|
|||
-include_lib("rabbit_common/include/rabbit_framing.hrl").
|
||||
-include_lib("rabbit_common/include/rabbit.hrl").
|
||||
|
||||
-export([start_link/2, info_keys/0, info/1, info/2, force_event_refresh/2,
|
||||
-export([start_link/1, info_keys/0, info/1, info/2, force_event_refresh/2,
|
||||
shutdown/2]).
|
||||
|
||||
-export([system_continue/3, system_terminate/4, system_code_change/4]).
|
||||
|
||||
-export([init/3, mainloop/4, recvloop/4]).
|
||||
-export([init/2, mainloop/4, recvloop/4]).
|
||||
|
||||
-export([conserve_resources/3, server_properties/1]).
|
||||
|
||||
|
@ -145,11 +145,10 @@
|
|||
|
||||
%%--------------------------------------------------------------------------
|
||||
|
||||
-spec start_link(pid(), any()) -> rabbit_types:ok(pid()).
|
||||
|
||||
start_link(HelperSup, Ref) ->
|
||||
Pid = proc_lib:spawn_link(?MODULE, init, [self(), HelperSup, Ref]),
|
||||
|
||||
-spec start_link(ranch:ref()) ->
|
||||
rabbit_types:ok(pid()).
|
||||
start_link(Ref) ->
|
||||
Pid = proc_lib:spawn_link(?MODULE, init, [self(), Ref]),
|
||||
{ok, Pid}.
|
||||
|
||||
-spec shutdown(pid(), string()) -> 'ok'.
|
||||
|
@ -157,14 +156,14 @@ start_link(HelperSup, Ref) ->
|
|||
shutdown(Pid, Explanation) ->
|
||||
gen_server:call(Pid, {shutdown, Explanation}, infinity).
|
||||
|
||||
-spec init(pid(), pid(), any()) -> no_return().
|
||||
|
||||
init(Parent, HelperSup, Ref) ->
|
||||
-spec init(pid(), ranch:ref()) ->
|
||||
no_return().
|
||||
init(Parent, Ref) ->
|
||||
?LG_PROCESS_TYPE(reader),
|
||||
{ok, Sock} = rabbit_networking:handshake(Ref,
|
||||
application:get_env(rabbit, proxy_protocol, false)),
|
||||
Deb = sys:debug_options([]),
|
||||
start_connection(Parent, HelperSup, Ref, Deb, Sock).
|
||||
start_connection(Parent, Ref, Deb, Sock).
|
||||
|
||||
-spec system_continue(_,_,{[binary()], non_neg_integer(), #v1{}}) -> any().
|
||||
|
||||
|
@ -291,10 +290,10 @@ socket_op(Sock, Fun) ->
|
|||
exit(normal)
|
||||
end.
|
||||
|
||||
-spec start_connection(pid(), pid(), ranch:ref(), any(), rabbit_net:socket()) ->
|
||||
-spec start_connection(pid(), ranch:ref(), any(), rabbit_net:socket()) ->
|
||||
no_return().
|
||||
|
||||
start_connection(Parent, HelperSup, RanchRef, Deb, Sock) ->
|
||||
start_connection(Parent, RanchRef, Deb, Sock) ->
|
||||
process_flag(trap_exit, true),
|
||||
RealSocket = rabbit_net:unwrap_socket(Sock),
|
||||
Name = case rabbit_net:connection_string(Sock, inbound) of
|
||||
|
@ -337,7 +336,7 @@ start_connection(Parent, HelperSup, RanchRef, Deb, Sock) ->
|
|||
pending_recv = false,
|
||||
connection_state = pre_init,
|
||||
queue_collector = undefined, %% started on tune-ok
|
||||
helper_sup = HelperSup,
|
||||
helper_sup = none,
|
||||
heartbeater = none,
|
||||
channel_sup_sup_pid = none,
|
||||
channel_count = 0,
|
||||
|
@ -356,16 +355,16 @@ start_connection(Parent, HelperSup, RanchRef, Deb, Sock) ->
|
|||
%% connection was closed cleanly by the client
|
||||
#v1{connection = #connection{user = #user{username = Username},
|
||||
vhost = VHost}} ->
|
||||
rabbit_log_connection:info("closing AMQP connection ~tp (~ts, vhost: '~ts', user: '~ts')",
|
||||
[self(), dynamic_connection_name(Name), VHost, Username]);
|
||||
rabbit_log_connection:info("closing AMQP connection (~ts, vhost: '~ts', user: '~ts')",
|
||||
[dynamic_connection_name(Name), VHost, Username]);
|
||||
%% just to be more defensive
|
||||
_ ->
|
||||
rabbit_log_connection:info("closing AMQP connection ~tp (~ts)",
|
||||
[self(), dynamic_connection_name(Name)])
|
||||
rabbit_log_connection:info("closing AMQP connection (~ts)",
|
||||
[dynamic_connection_name(Name)])
|
||||
end
|
||||
catch
|
||||
Ex ->
|
||||
log_connection_exception(dynamic_connection_name(Name), Ex)
|
||||
log_connection_exception(dynamic_connection_name(Name), Ex)
|
||||
after
|
||||
%% We don't call gen_tcp:close/1 here since it waits for
|
||||
%% pending output to be sent, which results in unnecessary
|
||||
|
@ -499,8 +498,8 @@ mainloop(Deb, Buf, BufLen, State = #v1{sock = Sock,
|
|||
%%
|
||||
%% The goal is to not log TCP healthchecks (a connection
|
||||
%% with no data received) unless specified otherwise.
|
||||
Fmt = "accepting AMQP connection ~tp (~ts)",
|
||||
Args = [self(), ConnName],
|
||||
Fmt = "accepting AMQP connection ~ts",
|
||||
Args = [ConnName],
|
||||
case Recv of
|
||||
closed -> _ = rabbit_log_connection:debug(Fmt, Args);
|
||||
_ -> _ = rabbit_log_connection:info(Fmt, Args)
|
||||
|
@ -1078,75 +1077,64 @@ handle_input({frame_payload, Type, Channel, PayloadSize}, Data, State) ->
|
|||
Type, Channel, Payload, State)
|
||||
end;
|
||||
handle_input(handshake, <<"AMQP", A, B, C, D, Rest/binary>>, State) ->
|
||||
{Rest, handshake({A, B, C, D}, State)};
|
||||
{Rest, version_negotiation({A, B, C, D}, State)};
|
||||
handle_input(handshake, <<Other:8/binary, _/binary>>, #v1{sock = Sock}) ->
|
||||
refuse_connection(Sock, {bad_header, Other});
|
||||
handle_input(Callback, Data, _State) ->
|
||||
throw({bad_input, Callback, Data}).
|
||||
|
||||
%% The two rules pertaining to version negotiation:
|
||||
%%
|
||||
%% * If the server cannot support the protocol specified in the
|
||||
%% protocol header, it MUST respond with a valid protocol header and
|
||||
%% then close the socket connection.
|
||||
%%
|
||||
%% * The server MUST provide a protocol version that is lower than or
|
||||
%% equal to that requested by the client in the protocol header.
|
||||
handshake({0, 0, 9, 1}, State) ->
|
||||
start_connection({0, 9, 1}, rabbit_framing_amqp_0_9_1, State);
|
||||
|
||||
%% This is the protocol header for 0-9, which we can safely treat as
|
||||
%% though it were 0-9-1.
|
||||
handshake({1, 1, 0, 9}, State) ->
|
||||
start_connection({0, 9, 0}, rabbit_framing_amqp_0_9_1, State);
|
||||
|
||||
%% This is what most clients send for 0-8. The 0-8 spec, confusingly,
|
||||
%% defines the version as 8-0.
|
||||
handshake({1, 1, 8, 0}, State) ->
|
||||
start_connection({8, 0, 0}, rabbit_framing_amqp_0_8, State);
|
||||
|
||||
%% The 0-8 spec as on the AMQP web site actually has this as the
|
||||
%% protocol header; some libraries e.g., py-amqplib, send it when they
|
||||
%% want 0-8.
|
||||
handshake({1, 1, 9, 1}, State) ->
|
||||
start_connection({8, 0, 0}, rabbit_framing_amqp_0_8, State);
|
||||
|
||||
%% ... and finally, the 1.0 spec is crystal clear!
|
||||
handshake({Id, 1, 0, 0}, State) ->
|
||||
become_1_0(Id, State);
|
||||
|
||||
handshake(Vsn, #v1{sock = Sock}) ->
%% AMQP 1.0 §2.2
version_negotiation({Id, 1, 0, 0}, State) ->
    become_10(Id, State);
version_negotiation({0, 0, 9, 1}, State) ->
    start_091_connection({0, 9, 1}, rabbit_framing_amqp_0_9_1, State);
version_negotiation({1, 1, 0, 9}, State) ->
    %% This is the protocol header for 0-9, which we can safely treat as though it were 0-9-1.
    start_091_connection({0, 9, 0}, rabbit_framing_amqp_0_9_1, State);
version_negotiation(Vsn = {0, 0, Minor, _}, #v1{sock = Sock})
  when Minor >= 9 ->
    refuse_connection(Sock, {bad_version, Vsn}, {0, 0, 9, 1});
version_negotiation(Vsn, #v1{sock = Sock}) ->
    refuse_connection(Sock, {bad_version, Vsn}).

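For reference, these clauses match on the 8-byte protocol header every AMQP client sends first. A purely illustrative summary of the headers accepted or refused above (the byte values come from the clauses themselves; `Sock` is assumed to be the connected socket):

```
%% <<"AMQP",0,0,9,1>>  -> AMQP 0-9-1
%% <<"AMQP",1,1,0,9>>  -> AMQP 0-9, treated as 0-9-1
%% <<"AMQP",0,1,0,0>>  -> AMQP 1.0 without SASL (protocol id 0)
%% <<"AMQP",3,1,0,0>>  -> AMQP 1.0 SASL security layer (protocol id 3)
%% Anything else is refused by sending back the closest supported header
%% before closing the socket, e.g. for an unsupported 0-x version:
ok = rabbit_net:send(Sock, <<"AMQP", 0, 0, 9, 1>>).
```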
%% Offer a protocol version to the client. Connection.start only
|
||||
%% includes a major and minor version number. Luckily, 0-9 and 0-9-1
|
||||
%% are similar enough that clients will be happy with either.
|
||||
start_connection({ProtocolMajor, ProtocolMinor, _ProtocolRevision},
|
||||
Protocol,
|
||||
State = #v1{sock = Sock, connection = Connection}) ->
|
||||
start_091_connection({ProtocolMajor, ProtocolMinor, _ProtocolRevision},
|
||||
Protocol,
|
||||
#v1{parent = Parent,
|
||||
sock = Sock,
|
||||
connection = Connection} = State0) ->
|
||||
ConnectionHelperSupFlags = #{strategy => one_for_one,
|
||||
intensity => 10,
|
||||
period => 10,
|
||||
auto_shutdown => any_significant},
|
||||
{ok, ConnectionHelperSupPid} = rabbit_connection_sup:start_connection_helper_sup(
|
||||
Parent, ConnectionHelperSupFlags),
|
||||
rabbit_networking:register_connection(self()),
|
||||
Start = #'connection.start'{
|
||||
version_major = ProtocolMajor,
|
||||
version_minor = ProtocolMinor,
|
||||
server_properties = server_properties(Protocol),
|
||||
mechanisms = auth_mechanisms_binary(Sock),
|
||||
locales = <<"en_US">> },
|
||||
version_major = ProtocolMajor,
|
||||
version_minor = ProtocolMinor,
|
||||
server_properties = server_properties(Protocol),
|
||||
mechanisms = auth_mechanisms_binary(Sock),
|
||||
locales = <<"en_US">> },
|
||||
ok = send_on_channel0(Sock, Start, Protocol),
|
||||
switch_callback(State#v1{connection = Connection#connection{
|
||||
timeout_sec = ?NORMAL_TIMEOUT,
|
||||
protocol = Protocol},
|
||||
connection_state = starting},
|
||||
frame_header, 7).
|
||||
State = State0#v1{connection = Connection#connection{
|
||||
timeout_sec = ?NORMAL_TIMEOUT,
|
||||
protocol = Protocol},
|
||||
connection_state = starting,
|
||||
helper_sup = ConnectionHelperSupPid},
|
||||
switch_callback(State, frame_header, 7).
|
||||
|
||||
-spec refuse_connection(rabbit_net:socket(), any()) -> no_return().
|
||||
refuse_connection(Sock, Exception) ->
|
||||
refuse_connection(Sock, Exception, {0, 1, 0, 0}).
|
||||
|
||||
-spec refuse_connection(_, _, _) -> no_return().
|
||||
refuse_connection(Sock, Exception, {A, B, C, D}) ->
|
||||
ok = inet_op(fun () -> rabbit_net:send(Sock, <<"AMQP",A,B,C,D>>) end),
|
||||
throw(Exception).
|
||||
|
||||
-spec refuse_connection(rabbit_net:socket(), any()) -> no_return().
|
||||
|
||||
refuse_connection(Sock, Exception) ->
|
||||
refuse_connection(Sock, Exception, {0, 0, 9, 1}).
|
||||
|
||||
ensure_stats_timer(State = #v1{connection_state = running}) ->
|
||||
rabbit_event:ensure_stats_timer(State, #v1.stats_timer, emit_stats);
|
||||
|
@ -1283,9 +1271,8 @@ handle_method0(#'connection.open'{virtual_host = VHost},
|
|||
rabbit_event:notify(connection_created, Infos),
|
||||
maybe_emit_stats(State1),
|
||||
rabbit_log_connection:info(
|
||||
"connection ~tp (~ts): "
|
||||
"user '~ts' authenticated and granted access to vhost '~ts'",
|
||||
[self(), dynamic_connection_name(ConnName), Username, VHost]),
|
||||
"connection ~ts: user '~ts' authenticated and granted access to vhost '~ts'",
|
||||
[dynamic_connection_name(ConnName), Username, VHost]),
|
||||
State1;
|
||||
handle_method0(#'connection.close'{}, State) when ?IS_RUNNING(State) ->
|
||||
lists:foreach(fun rabbit_channel:shutdown/1, all_channels()),
|
||||
|
@ -1309,9 +1296,9 @@ handle_method0(#'connection.update_secret'{new_secret = NewSecret, reason = Reas
|
|||
log_name = ConnName} = Conn,
|
||||
sock = Sock}) when ?IS_RUNNING(State) ->
|
||||
rabbit_log_connection:debug(
|
||||
"connection ~tp (~ts) of user '~ts': "
|
||||
"asked to update secret, reason: ~ts",
|
||||
[self(), dynamic_connection_name(ConnName), Username, Reason]),
|
||||
"connection ~ts of user '~ts': "
|
||||
"asked to update secret, reason: ~ts",
|
||||
[dynamic_connection_name(ConnName), Username, Reason]),
|
||||
case rabbit_access_control:update_state(User, NewSecret) of
|
||||
{ok, User1} ->
|
||||
%% User/auth backend state has been updated. Now we can propagate it to channels
|
||||
|
@ -1326,9 +1313,8 @@ handle_method0(#'connection.update_secret'{new_secret = NewSecret, reason = Reas
|
|||
end, all_channels()),
|
||||
ok = send_on_channel0(Sock, #'connection.update_secret_ok'{}, Protocol),
|
||||
rabbit_log_connection:info(
|
||||
"connection ~tp (~ts): "
|
||||
"user '~ts' updated secret, reason: ~ts",
|
||||
[self(), dynamic_connection_name(ConnName), Username, Reason]),
|
||||
"connection ~ts: user '~ts' updated secret, reason: ~ts",
|
||||
[dynamic_connection_name(ConnName), Username, Reason]),
|
||||
State#v1{connection = Conn#connection{user = User1}};
|
||||
{refused, Message} ->
|
||||
rabbit_log_connection:error("Secret update was refused for user '~ts': ~tp",
|
||||
|
@ -1643,32 +1629,34 @@ emit_stats(State) ->
|
|||
ensure_stats_timer(State1).
|
||||
|
||||
%% 1.0 stub
|
||||
-spec become_1_0(non_neg_integer(), #v1{}) -> no_return().
|
||||
-spec become_10(non_neg_integer(), #v1{}) -> no_return().
|
||||
become_10(Id, State = #v1{sock = Sock}) ->
|
||||
Mode = case Id of
|
||||
0 -> amqp;
|
||||
3 -> sasl;
|
||||
_ -> refuse_connection(
|
||||
Sock, {unsupported_amqp1_0_protocol_id, Id},
|
||||
{3, 1, 0, 0})
|
||||
end,
|
||||
F = fun (_Deb, Buf, BufLen, State0) ->
|
||||
{rabbit_amqp_reader, init,
|
||||
[Mode, pack_for_1_0(Buf, BufLen, State0)]}
|
||||
end,
|
||||
State#v1{connection_state = {become, F}}.
|
||||
|
||||
become_1_0(Id, State = #v1{sock = Sock}) ->
|
||||
case code:is_loaded(rabbit_amqp1_0_reader) of
|
||||
false -> refuse_connection(Sock, amqp1_0_plugin_not_enabled);
|
||||
_ -> Mode = case Id of
|
||||
0 -> amqp;
|
||||
3 -> sasl;
|
||||
_ -> refuse_connection(
|
||||
Sock, {unsupported_amqp1_0_protocol_id, Id},
|
||||
{3, 1, 0, 0})
|
||||
end,
|
||||
F = fun (_Deb, Buf, BufLen, S) ->
|
||||
{rabbit_amqp1_0_reader, init,
|
||||
[Mode, pack_for_1_0(Buf, BufLen, S)]}
|
||||
end,
|
||||
State#v1{connection_state = {become, F}}
|
||||
end.
|
||||
|
||||
pack_for_1_0(Buf, BufLen, #v1{parent = Parent,
|
||||
sock = Sock,
|
||||
pack_for_1_0(Buf, BufLen, #v1{sock = Sock,
|
||||
recv_len = RecvLen,
|
||||
pending_recv = PendingRecv,
|
||||
helper_sup = SupPid,
|
||||
proxy_socket = ProxySocket}) ->
|
||||
{Parent, Sock, RecvLen, PendingRecv, SupPid, Buf, BufLen, ProxySocket}.
|
||||
proxy_socket = ProxySocket,
|
||||
connection = #connection{
|
||||
name = Name,
|
||||
host = Host,
|
||||
peer_host = PeerHost,
|
||||
port = Port,
|
||||
peer_port = PeerPort,
|
||||
connected_at = ConnectedAt}}) ->
|
||||
{Sock, RecvLen, PendingRecv, Buf, BufLen, ProxySocket,
|
||||
Name, Host, PeerHost, Port, PeerPort, ConnectedAt}.
|
||||
|
||||
respond_and_close(State, Channel, Protocol, Reason, LogErr) ->
|
||||
log_hard_error(State, Channel, LogErr),
|
||||
|
@ -1802,7 +1790,8 @@ augment_connection_log_name(#connection{name = Name} = Connection) ->
|
|||
Connection;
|
||||
UserSpecifiedName ->
|
||||
LogName = <<Name/binary, " - ", UserSpecifiedName/binary>>,
|
||||
rabbit_log_connection:info("connection ~tp (~ts) has a client-provided name: ~ts", [self(), Name, UserSpecifiedName]),
|
||||
rabbit_log_connection:info("connection ~ts has a client-provided name: ~ts",
|
||||
[Name, UserSpecifiedName]),
|
||||
?store_proc_name(LogName),
|
||||
Connection#connection{log_name = LogName}
|
||||
end.
@ -23,7 +23,8 @@
|
|||
handle_event/3,
|
||||
deliver/3,
|
||||
settle/5,
|
||||
credit/5,
|
||||
credit_v1/5,
|
||||
credit/7,
|
||||
dequeue/5,
|
||||
info/2,
|
||||
queue_length/1,
|
||||
|
@ -69,25 +70,32 @@
|
|||
|
||||
-type appender_seq() :: non_neg_integer().
|
||||
|
||||
-type msg_id() :: non_neg_integer().
|
||||
-type msg() :: term(). %% TODO: refine
|
||||
|
||||
-record(stream, {credit :: integer(),
|
||||
max :: non_neg_integer(),
|
||||
-record(stream, {mode :: rabbit_queue_type:consume_mode(),
|
||||
delivery_count :: none | rabbit_queue_type:delivery_count(),
|
||||
credit :: rabbit_queue_type:credit(),
|
||||
ack :: boolean(),
|
||||
start_offset = 0 :: non_neg_integer(),
|
||||
listening_offset = 0 :: non_neg_integer(),
|
||||
last_consumed_offset = 0 :: non_neg_integer(),
|
||||
log :: undefined | osiris_log:state(),
|
||||
chunk_iterator :: undefined | osiris_log:chunk_iterator(),
|
||||
%% These messages were already read ahead from the Osiris log,
|
||||
%% were part of an uncompressed sub batch, and are buffered in
|
||||
%% reversed order until the consumer has more credits to consume them.
|
||||
buffer_msgs_rev = [] :: [rabbit_amqqueue:qmsg()],
|
||||
reader_options :: map()}).
|
||||
|
||||
-record(stream_client, {stream_id :: string(),
|
||||
name :: term(),
|
||||
name :: rabbit_amqqueue:name(),
|
||||
leader :: pid(),
|
||||
local_pid :: undefined | pid(),
|
||||
next_seq = 1 :: non_neg_integer(),
|
||||
correlation = #{} :: #{appender_seq() => {msg_id(), msg()}},
|
||||
correlation = #{} :: #{appender_seq() => {rabbit_queue_type:correlation(), msg()}},
|
||||
soft_limit :: non_neg_integer(),
|
||||
slow = false :: boolean(),
|
||||
readers = #{} :: #{term() => #stream{}},
|
||||
readers = #{} :: #{rabbit_types:ctag() => #stream{}},
|
||||
writer_id :: binary(),
|
||||
filtering_supported :: boolean()
|
||||
}).
|
||||
|
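
The reworked `#stream{}` record above keys readers by consumer tag and carries the consume mode. A brief sketch of the two mode shapes threaded through this diff (values are hypothetical; the type presumably lives in `rabbit_queue_type:consume_mode()`):

```
%% Simple prefetch (AMQP 0.9.1 style): the declared prefetch is the credit cap.
Mode091   = {simple_prefetch, 10},
%% Credited (AMQP 1.0): credit starts at zero; the second element is the
%% initial delivery-count, or the atom credit_api_v1 before the
%% credit_api_v2 feature flag is enabled.
Mode10    = {credited, 0},
ModeOldFF = {credited, credit_api_v1}.
```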
@ -264,14 +272,15 @@ format(Q, Ctx) ->
|
|||
{state, down}]
|
||||
end.
|
||||
|
||||
consume(Q, #{prefetch_count := 0}, _)
|
||||
consume(Q, #{mode := {simple_prefetch, 0}}, _)
|
||||
when ?amqqueue_is_stream(Q) ->
|
||||
{protocol_error, precondition_failed, "consumer prefetch count is not set for '~ts'",
|
||||
{protocol_error, precondition_failed, "consumer prefetch count is not set for stream ~ts",
|
||||
[rabbit_misc:rs(amqqueue:get_name(Q))]};
|
||||
consume(Q, #{no_ack := true}, _)
|
||||
consume(Q, #{no_ack := true,
|
||||
mode := {simple_prefetch, _}}, _)
|
||||
when ?amqqueue_is_stream(Q) ->
|
||||
{protocol_error, not_implemented,
|
||||
"automatic acknowledgement not supported by stream queues ~ts",
|
||||
"automatic acknowledgement not supported by stream ~ts",
|
||||
[rabbit_misc:rs(amqqueue:get_name(Q))]};
|
||||
consume(Q, #{limiter_active := true}, _State)
|
||||
when ?amqqueue_is_stream(Q) ->
|
||||
|
@ -284,7 +293,7 @@ consume(Q, Spec,
|
|||
{LocalPid, QState} when is_pid(LocalPid) ->
|
||||
#{no_ack := NoAck,
|
||||
channel_pid := ChPid,
|
||||
prefetch_count := ConsumerPrefetchCount,
|
||||
mode := Mode,
|
||||
consumer_tag := ConsumerTag,
|
||||
exclusive_consume := ExclusiveConsume,
|
||||
args := Args,
|
||||
|
@ -303,22 +312,24 @@ consume(Q, Spec,
|
|||
{protocol_error, precondition_failed,
|
||||
"Filtering is not supported", []};
|
||||
_ ->
|
||||
rabbit_core_metrics:consumer_created(ChPid, ConsumerTag,
|
||||
ExclusiveConsume,
|
||||
not NoAck, QName,
|
||||
ConsumerPrefetchCount,
|
||||
false, up, Args),
|
||||
ConsumerPrefetchCount = case Mode of
|
||||
{simple_prefetch, C} -> C;
|
||||
_ -> 0
|
||||
end,
|
||||
AckRequired = not NoAck,
|
||||
rabbit_core_metrics:consumer_created(
|
||||
ChPid, ConsumerTag, ExclusiveConsume, AckRequired,
|
||||
QName, ConsumerPrefetchCount, false, up, Args),
|
||||
%% reply needs to be sent before the stream
|
||||
%% begins sending
|
||||
maybe_send_reply(ChPid, OkMsg),
|
||||
_ = rabbit_stream_coordinator:register_local_member_listener(Q),
|
||||
begin_stream(QState, ConsumerTag, OffsetSpec,
|
||||
ConsumerPrefetchCount, FilterSpec)
|
||||
begin_stream(QState, ConsumerTag, OffsetSpec, Mode, AckRequired, FilterSpec)
|
||||
end
|
||||
end;
|
||||
{undefined, _} ->
|
||||
{protocol_error, precondition_failed,
|
||||
"queue '~ts' does not have a running replica on the local node",
|
||||
"stream ~ts does not have a running replica on the local node",
|
||||
[rabbit_misc:rs(amqqueue:get_name(Q))]}
|
||||
end.
|
||||
|
||||
|
@ -405,7 +416,7 @@ query_local_pid(#stream_client{stream_id = StreamId} = State) ->
|
|||
begin_stream(#stream_client{name = QName,
|
||||
readers = Readers0,
|
||||
local_pid = LocalPid} = State,
|
||||
Tag, Offset, Max, Options)
|
||||
Tag, Offset, Mode, AckRequired, Options)
|
||||
when is_pid(LocalPid) ->
|
||||
CounterSpec = {{?MODULE, QName, Tag, self()}, []},
|
||||
{ok, Seg0} = osiris:init_reader(LocalPid, Offset, CounterSpec, Options),
|
||||
|
@ -418,14 +429,22 @@ begin_stream(#stream_client{name = QName,
|
|||
{timestamp, _} -> NextOffset;
|
||||
_ -> Offset
|
||||
end,
|
||||
Str0 = #stream{credit = Max,
|
||||
{DeliveryCount, Credit} = case Mode of
|
||||
{simple_prefetch, N} ->
|
||||
{none, N};
|
||||
{credited, InitialDC} ->
|
||||
{InitialDC, 0}
|
||||
end,
|
||||
Str0 = #stream{mode = Mode,
|
||||
delivery_count = DeliveryCount,
|
||||
credit = Credit,
|
||||
ack = AckRequired,
|
||||
start_offset = StartOffset,
|
||||
listening_offset = NextOffset,
|
||||
last_consumed_offset = StartOffset,
|
||||
log = Seg0,
|
||||
max = Max,
|
||||
reader_options = Options},
|
||||
{ok, State#stream_client{local_pid = LocalPid,
|
||||
readers = Readers0#{Tag => Str0}}}.
|
||||
{ok, State#stream_client{readers = Readers0#{Tag => Str0}}}.
|
||||
|
||||
cancel(_Q, ConsumerTag, OkMsg, ActingUser, #stream_client{readers = Readers0,
|
||||
name = QName} = State) ->
|
||||
|
@ -444,34 +463,54 @@ cancel(_Q, ConsumerTag, OkMsg, ActingUser, #stream_client{readers = Readers0,
|
|||
{ok, State}
|
||||
end.
|
||||
|
||||
credit(QName, CTag, Credit, Drain, #stream_client{readers = Readers0,
|
||||
name = Name,
|
||||
local_pid = LocalPid} = State) ->
|
||||
case Readers0 of
|
||||
#{CTag := #stream{credit = Credit0} = Str0} ->
|
||||
Str1 = Str0#stream{credit = Credit0 + Credit},
|
||||
{Str, Msgs} = stream_entries(QName, Name, LocalPid, Str1),
|
||||
Actions = case Msgs of
|
||||
[] ->
|
||||
[{send_credit_reply, 0}];
|
||||
_ ->
|
||||
[{send_credit_reply, length(Msgs)},
|
||||
{deliver, CTag, true, Msgs}]
|
||||
-dialyzer({nowarn_function, credit_v1/5}).
|
||||
credit_v1(_, _, _, _, _) ->
|
||||
erlang:error(credit_v1_unsupported).
|
||||
|
||||
credit(QName, CTag, DeliveryCountRcv, LinkCreditRcv, Drain, Echo,
|
||||
#stream_client{readers = Readers,
|
||||
name = Name,
|
||||
local_pid = LocalPid} = State0) ->
|
||||
case Readers of
|
||||
#{CTag := Str0 = #stream{delivery_count = DeliveryCountSnd}} ->
|
||||
LinkCreditSnd = serial_number:diff(
|
||||
serial_number:add(DeliveryCountRcv, LinkCreditRcv),
|
||||
DeliveryCountSnd),
|
||||
Str1 = Str0#stream{credit = LinkCreditSnd},
|
||||
{Str2 = #stream{delivery_count = DeliveryCount,
|
||||
credit = Credit,
|
||||
ack = Ack}, Msgs} = stream_entries(QName, Name, LocalPid, Str1),
|
||||
DrainedInsufficientMsgs = Drain andalso Credit > 0,
|
||||
Str = case DrainedInsufficientMsgs of
|
||||
true ->
|
||||
Str2#stream{delivery_count = serial_number:add(DeliveryCount, Credit),
|
||||
credit = 0};
|
||||
false ->
|
||||
Str2
|
||||
end,
|
||||
DeliverActions = deliver_actions(CTag, Ack, Msgs),
|
||||
State = State0#stream_client{readers = maps:update(CTag, Str, Readers)},
|
||||
Actions = case Echo orelse DrainedInsufficientMsgs of
|
||||
true ->
|
||||
DeliverActions ++ [{credit_reply,
|
||||
CTag,
|
||||
Str#stream.delivery_count,
|
||||
Str#stream.credit,
|
||||
available_messages(Str),
|
||||
Drain}];
|
||||
false ->
|
||||
DeliverActions
|
||||
end,
|
||||
case Drain of
|
||||
true ->
|
||||
Readers = Readers0#{CTag => Str#stream{credit = 0}},
|
||||
{State#stream_client{readers = Readers},
|
||||
%% send_drained needs to come after deliver
|
||||
Actions ++ [{send_drained, {CTag, Str#stream.credit}}]};
|
||||
false ->
|
||||
Readers = Readers0#{CTag => Str},
|
||||
{State#stream_client{readers = Readers}, Actions}
|
||||
end;
|
||||
{State, Actions};
|
||||
_ ->
|
||||
{State, []}
|
||||
{State0, []}
|
||||
end.
|
||||
|
||||
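The link-credit computation in `credit/7` above follows AMQP 1.0 flow control: the credit left to the queue (the sender) is delivery-count-rcv + link-credit-rcv - delivery-count-snd, evaluated with serial number arithmetic. A small worked sketch with made-up counters, assuming the `serial_number` module implements RFC 1982 style 32-bit arithmetic as its use here suggests:

```
DeliveryCountRcv = 10,  %% delivery-count the receiver reported in its FLOW
LinkCreditRcv    = 50,  %% link-credit the receiver granted in that FLOW
DeliveryCountSnd = 30,  %% messages this queue has sent on the link so far
30 = serial_number:diff(
       serial_number:add(DeliveryCountRcv, LinkCreditRcv),
       DeliveryCountSnd).
```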
%% Returns only an approximation.
|
||||
available_messages(#stream{log = Log,
|
||||
last_consumed_offset = LastConsumedOffset}) ->
|
||||
max(0, osiris_log:committed_offset(Log) - LastConsumedOffset).
|
||||
|
||||
deliver(QSs, Msg, Options) ->
|
||||
lists:foldl(
|
||||
fun({Q, stateless}, {Qs, Actions}) ->
|
||||
|
@ -500,7 +539,7 @@ deliver0(MsgId, Msg,
|
|||
Correlation = case MsgId of
|
||||
undefined ->
|
||||
Correlation0;
|
||||
_ when is_number(MsgId) ->
|
||||
_ ->
|
||||
Correlation0#{Seq => {MsgId, Msg}}
|
||||
end,
|
||||
{Slow, Actions} = case maps:size(Correlation) >= SftLmt of
|
||||
|
@ -513,16 +552,21 @@ deliver0(MsgId, Msg,
|
|||
correlation = Correlation,
|
||||
slow = Slow}, Actions}.
|
||||
|
||||
stream_message(Msg, _FilteringSupported = true) ->
|
||||
MsgData = msg_to_iodata(Msg),
|
||||
case mc:x_header(<<"x-stream-filter-value">>, Msg) of
|
||||
undefined ->
|
||||
MsgData;
|
||||
{utf8, Value} ->
|
||||
{Value, MsgData}
|
||||
end;
|
||||
stream_message(Msg, _FilteringSupported = false) ->
|
||||
msg_to_iodata(Msg).
|
||||
stream_message(Msg, FilteringSupported) ->
|
||||
McAmqp = mc:convert(mc_amqp, Msg),
|
||||
Sections = mc:protocol_state(McAmqp),
|
||||
MsgData = mc_amqp:serialize(Sections),
|
||||
case FilteringSupported of
|
||||
true ->
|
||||
case mc:x_header(<<"x-stream-filter-value">>, McAmqp) of
|
||||
undefined ->
|
||||
MsgData;
|
||||
{utf8, Value} ->
|
||||
{Value, MsgData}
|
||||
end;
|
||||
false ->
|
||||
MsgData
|
||||
end.
|
||||
|
||||
-spec dequeue(_, _, _, _, client()) -> no_return().
|
||||
dequeue(_, _, _, _, #stream_client{name = Name}) ->
|
||||
|
@ -530,43 +574,41 @@ dequeue(_, _, _, _, #stream_client{name = Name}) ->
|
|||
[rabbit_misc:rs(Name)]}.
|
||||
|
||||
handle_event(_QName, {osiris_written, From, _WriterId, Corrs},
|
||||
State = #stream_client{correlation = Correlation0,
|
||||
soft_limit = SftLmt,
|
||||
slow = Slow0,
|
||||
name = Name}) ->
|
||||
State0 = #stream_client{correlation = Correlation0,
|
||||
soft_limit = SftLmt,
|
||||
slow = Slow0,
|
||||
name = Name}) ->
|
||||
MsgIds = lists:sort(maps:fold(
|
||||
fun (_Seq, {I, _M}, Acc) ->
|
||||
[I | Acc]
|
||||
end, [], maps:with(Corrs, Correlation0))),
|
||||
|
||||
Correlation = maps:without(Corrs, Correlation0),
|
||||
{Slow, Actions} = case maps:size(Correlation) < SftLmt of
|
||||
true when Slow0 ->
|
||||
{false, [{unblock, Name}]};
|
||||
_ ->
|
||||
{Slow0, []}
|
||||
end,
|
||||
{ok, State#stream_client{correlation = Correlation,
|
||||
slow = Slow}, [{settled, From, MsgIds} | Actions]};
|
||||
{Slow, Actions0} = case maps:size(Correlation) < SftLmt of
|
||||
true when Slow0 ->
|
||||
{false, [{unblock, Name}]};
|
||||
_ ->
|
||||
{Slow0, []}
|
||||
end,
|
||||
Actions = case MsgIds of
|
||||
[] -> Actions0;
|
||||
[_|_] -> [{settled, From, MsgIds} | Actions0]
|
||||
end,
|
||||
State = State0#stream_client{correlation = Correlation,
|
||||
slow = Slow},
|
||||
{ok, State, Actions};
|
||||
handle_event(QName, {osiris_offset, _From, _Offs},
|
||||
State = #stream_client{local_pid = LocalPid,
|
||||
readers = Readers0,
|
||||
name = Name}) ->
|
||||
Ack = true,
|
||||
%% offset isn't actually needed as we use the atomic to read the
|
||||
%% current committed
|
||||
{Readers, Deliveries} =
|
||||
maps:fold(
|
||||
fun (Tag, Str0, {Acc, TM}) ->
|
||||
case stream_entries(QName, Name, LocalPid, Str0) of
|
||||
{Str, []} ->
|
||||
{Acc#{Tag => Str}, TM};
|
||||
{Str, Msgs} ->
|
||||
{Acc#{Tag => Str},
|
||||
[{deliver, Tag, Ack, Msgs} | TM]}
|
||||
end
|
||||
end, {#{}, []}, Readers0),
|
||||
{ok, State#stream_client{readers = Readers}, Deliveries};
|
||||
{Readers, Actions} = maps:fold(
|
||||
fun (Tag, Str0, {Rds, As}) ->
|
||||
{Str, Msgs} = stream_entries(QName, Name, LocalPid, Str0),
|
||||
{Rds#{Tag => Str}, deliver_actions(Tag, Str#stream.ack, Msgs) ++ As}
|
||||
end, {#{}, []}, Readers0),
|
||||
{ok, State#stream_client{readers = Readers}, Actions};
|
||||
handle_event(_QName, {stream_leader_change, Pid}, State) ->
|
||||
{ok, update_leader_pid(Pid, State), []};
|
||||
handle_event(_QName, {stream_local_member_change, Pid},
|
||||
|
@ -611,19 +653,22 @@ recover(_VHost, Queues) ->
|
|||
end, {[], []}, Queues).
|
||||
|
||||
settle(QName, _, CTag, MsgIds, #stream_client{readers = Readers0,
|
||||
local_pid = LocalPid,
|
||||
name = Name} = State) ->
|
||||
%% all settle reasons will "give credit" to the stream queue
|
||||
Credit = length(MsgIds),
|
||||
{Readers, Msgs} = case Readers0 of
|
||||
#{CTag := #stream{credit = Credit0} = Str0} ->
|
||||
Str1 = Str0#stream{credit = Credit0 + Credit},
|
||||
{Str, Msgs0} = stream_entries(QName, Name, LocalPid, Str1),
|
||||
{Readers0#{CTag => Str}, Msgs0};
|
||||
_ ->
|
||||
{Readers0, []}
|
||||
end,
|
||||
{State#stream_client{readers = Readers}, [{deliver, CTag, true, Msgs}]}.
|
||||
local_pid = LocalPid,
|
||||
name = Name} = State) ->
|
||||
case Readers0 of
|
||||
#{CTag := #stream{mode = {simple_prefetch, _MaxCredit},
|
||||
ack = Ack,
|
||||
credit = Credit0} = Str0} ->
|
||||
%% all settle reasons will "give credit" to the stream queue
|
||||
Credit = length(MsgIds),
|
||||
Str1 = Str0#stream{credit = Credit0 + Credit},
|
||||
{Str, Msgs} = stream_entries(QName, Name, LocalPid, Str1),
|
||||
Readers = maps:update(CTag, Str, Readers0),
|
||||
{State#stream_client{readers = Readers},
|
||||
deliver_actions(CTag, Ack, Msgs)};
|
||||
_ ->
|
||||
{State, []}
|
||||
end.
|
||||
|
||||
info(Q, all_keys) ->
|
||||
info(Q, ?INFO_KEYS);
|
||||
|
@ -1064,72 +1109,164 @@ recover(Q) ->
|
|||
maybe_send_reply(_ChPid, undefined) -> ok;
|
||||
maybe_send_reply(ChPid, Msg) -> ok = rabbit_channel:send_command(ChPid, Msg).
|
||||
|
||||
stream_entries(QName, Name, LocalPid,
|
||||
#stream{chunk_iterator = undefined,
|
||||
credit = Credit} = Str0) ->
|
||||
case Credit > 0 of
|
||||
true ->
|
||||
case chunk_iterator(Str0, LocalPid) of
|
||||
{ok, Str} ->
|
||||
stream_entries(QName, Name, LocalPid, Str);
|
||||
{end_of_stream, Str} ->
|
||||
{Str, []}
|
||||
end;
|
||||
false ->
|
||||
{Str0, []}
|
||||
end;
|
||||
stream_entries(QName, Name, LocalPid,
|
||||
#stream{delivery_count = DC,
|
||||
credit = Credit,
|
||||
buffer_msgs_rev = Buf0,
|
||||
last_consumed_offset = LastOff} = Str0)
|
||||
when Credit > 0 andalso Buf0 =/= [] ->
|
||||
BufLen = length(Buf0),
|
||||
case Credit =< BufLen of
|
||||
true ->
|
||||
%% Entire credit worth of messages can be served from the buffer.
|
||||
{Buf, BufMsgsRev} = lists:split(BufLen - Credit, Buf0),
|
||||
{Str0#stream{delivery_count = delivery_count_add(DC, Credit),
|
||||
credit = 0,
|
||||
buffer_msgs_rev = Buf,
|
||||
last_consumed_offset = LastOff + Credit},
|
||||
lists:reverse(BufMsgsRev)};
|
||||
false ->
|
||||
Str = Str0#stream{delivery_count = delivery_count_add(DC, BufLen),
|
||||
credit = Credit - BufLen,
|
||||
buffer_msgs_rev = [],
|
||||
last_consumed_offset = LastOff + BufLen},
|
||||
stream_entries(QName, Name, LocalPid, Str, Buf0)
|
||||
end;
|
||||
stream_entries(QName, Name, LocalPid, Str) ->
|
||||
stream_entries(QName, Name, LocalPid, Str, []).
|
||||
|
||||
stream_entries(_, _, _, #stream{credit = Credit} = Str, Acc)
|
||||
when Credit < 1 ->
|
||||
{Str, lists:reverse(Acc)};
|
||||
stream_entries(QName, Name, LocalPid,
|
||||
#stream{credit = Credit,
|
||||
start_offset = StartOffs,
|
||||
listening_offset = LOffs,
|
||||
log = Seg0} = Str0, MsgIn)
|
||||
when Credit > 0 ->
|
||||
case osiris_log:read_chunk_parsed(Seg0) of
|
||||
{end_of_stream, Seg} ->
|
||||
NextOffset = osiris_log:next_offset(Seg),
|
||||
case NextOffset > LOffs of
|
||||
true ->
|
||||
osiris:register_offset_listener(LocalPid, NextOffset),
|
||||
{Str0#stream{log = Seg,
|
||||
listening_offset = NextOffset}, MsgIn};
|
||||
false ->
|
||||
{Str0#stream{log = Seg}, MsgIn}
|
||||
#stream{chunk_iterator = Iter0,
|
||||
delivery_count = DC,
|
||||
credit = Credit,
|
||||
start_offset = StartOffset} = Str0, Acc0) ->
|
||||
case osiris_log:iterator_next(Iter0) of
|
||||
end_of_chunk ->
|
||||
case chunk_iterator(Str0, LocalPid) of
|
||||
{ok, Str} ->
|
||||
stream_entries(QName, Name, LocalPid, Str, Acc0);
|
||||
{end_of_stream, Str} ->
|
||||
{Str, lists:reverse(Acc0)}
|
||||
end;
|
||||
{error, Err} ->
|
||||
rabbit_log:debug("stream client: error reading chunk ~w", [Err]),
|
||||
exit(Err);
|
||||
{Records, Seg} ->
|
||||
Msgs = [begin
|
||||
Msg0 = binary_to_msg(QName, B),
|
||||
Msg = mc:set_annotation(<<"x-stream-offset">>, O, Msg0),
|
||||
{Name, LocalPid, O, false, Msg}
|
||||
end || {O, B} <- Records,
|
||||
O >= StartOffs],
|
||||
|
||||
NumMsgs = length(Msgs),
|
||||
|
||||
Str = Str0#stream{credit = Credit - NumMsgs,
|
||||
log = Seg},
|
||||
case Str#stream.credit < 1 of
|
||||
true ->
|
||||
%% we are done here
|
||||
{Str, MsgIn ++ Msgs};
|
||||
false ->
|
||||
%% if there are fewer Msgs than Entries0 it means there were non-events
|
||||
%% in the log and we should recurse and try again
|
||||
stream_entries(QName, Name, LocalPid, Str, MsgIn ++ Msgs)
|
||||
end
|
||||
end;
|
||||
stream_entries(_QName, _Name, _LocalPid, Str, Msgs) ->
|
||||
{Str, Msgs}.
|
||||
|
||||
binary_to_msg(#resource{kind = queue,
|
||||
name = QName}, Data) ->
|
||||
Mc0 = mc:init(mc_amqp, amqp10_framing:decode_bin(Data), #{}),
|
||||
%% If exchange or routing_keys annotation isn't present the data most likely came
|
||||
%% from the rabbitmq-stream plugin so we'll choose defaults that simulate use
|
||||
%% of the direct exchange.
|
||||
Mc = case mc:exchange(Mc0) of
|
||||
undefined -> mc:set_annotation(?ANN_EXCHANGE, <<>>, Mc0);
|
||||
_ -> Mc0
|
||||
end,
|
||||
case mc:routing_keys(Mc) of
|
||||
[] -> mc:set_annotation(?ANN_ROUTING_KEYS, [QName], Mc);
|
||||
_ -> Mc
|
||||
{{Offset, Entry}, Iter} ->
|
||||
{Str, Acc} = case Entry of
|
||||
{batch, _NumRecords, 0, _Len, BatchedEntries} ->
|
||||
{MsgsRev, NumMsgs} = parse_uncompressed_subbatch(
|
||||
BatchedEntries, Offset, StartOffset,
|
||||
QName, Name, LocalPid, {[], 0}),
|
||||
case Credit >= NumMsgs of
|
||||
true ->
|
||||
{Str0#stream{chunk_iterator = Iter,
|
||||
delivery_count = delivery_count_add(DC, NumMsgs),
|
||||
credit = Credit - NumMsgs,
|
||||
last_consumed_offset = Offset + NumMsgs - 1},
|
||||
MsgsRev ++ Acc0};
|
||||
false ->
|
||||
%% Consumer doesn't have sufficient credit.
|
||||
%% Buffer the remaining messages.
|
||||
[] = Str0#stream.buffer_msgs_rev, % assertion
|
||||
{Buf, MsgsRev1} = lists:split(NumMsgs - Credit, MsgsRev),
|
||||
{Str0#stream{chunk_iterator = Iter,
|
||||
delivery_count = delivery_count_add(DC, Credit),
|
||||
credit = 0,
|
||||
buffer_msgs_rev = Buf,
|
||||
last_consumed_offset = Offset + Credit - 1},
|
||||
MsgsRev1 ++ Acc0}
|
||||
end;
|
||||
{batch, _, _CompressionType, _, _} ->
|
||||
%% Skip compressed sub batch.
|
||||
%% It can only be consumed by Stream protocol clients.
|
||||
{Str0#stream{chunk_iterator = Iter}, Acc0};
|
||||
_SimpleEntry ->
|
||||
case Offset >= StartOffset of
|
||||
true ->
|
||||
Msg = entry_to_msg(Entry, Offset, QName, Name, LocalPid),
|
||||
{Str0#stream{chunk_iterator = Iter,
|
||||
delivery_count = delivery_count_add(DC, 1),
|
||||
credit = Credit - 1,
|
||||
last_consumed_offset = Offset},
|
||||
[Msg | Acc0]};
|
||||
false ->
|
||||
{Str0#stream{chunk_iterator = Iter}, Acc0}
|
||||
end
|
||||
end,
|
||||
stream_entries(QName, Name, LocalPid, Str, Acc)
|
||||
end.
|
||||
|
||||
msg_to_iodata(Msg0) ->
|
||||
Sections = mc:protocol_state(mc:convert(mc_amqp, Msg0)),
|
||||
mc_amqp:serialize(Sections).
|
||||
chunk_iterator(#stream{credit = Credit,
|
||||
listening_offset = LOffs,
|
||||
log = Log0} = Str0, LocalPid) ->
|
||||
case osiris_log:chunk_iterator(Log0, Credit) of
|
||||
{ok, _ChunkHeader, Iter, Log} ->
|
||||
{ok, Str0#stream{chunk_iterator = Iter,
|
||||
log = Log}};
|
||||
{end_of_stream, Log} ->
|
||||
NextOffset = osiris_log:next_offset(Log),
|
||||
Str = case NextOffset > LOffs of
|
||||
true ->
|
||||
osiris:register_offset_listener(LocalPid, NextOffset),
|
||||
Str0#stream{log = Log,
|
||||
listening_offset = NextOffset};
|
||||
false ->
|
||||
Str0#stream{log = Log}
|
||||
end,
|
||||
{end_of_stream, Str};
|
||||
{error, Err} ->
|
||||
rabbit_log:info("stream client: failed to create chunk iterator ~p", [Err]),
|
||||
exit(Err)
|
||||
end.
|
||||
|
||||
%% Deliver each record of an uncompressed sub batch individually.
|
||||
parse_uncompressed_subbatch(<<>>, _Offset, _StartOffset, _QName, _Name, _LocalPid, Acc) ->
|
||||
Acc;
|
||||
parse_uncompressed_subbatch(
|
||||
<<0:1, %% simple entry
|
||||
Len:31/unsigned,
|
||||
Entry:Len/binary,
|
||||
Rem/binary>>,
|
||||
Offset, StartOffset, QName, Name, LocalPid, Acc0 = {AccList, AccCount}) ->
|
||||
Acc = case Offset >= StartOffset of
|
||||
true ->
|
||||
Msg = entry_to_msg(Entry, Offset, QName, Name, LocalPid),
|
||||
{[Msg | AccList], AccCount + 1};
|
||||
false ->
|
||||
Acc0
|
||||
end,
|
||||
parse_uncompressed_subbatch(Rem, Offset + 1, StartOffset, QName, Name, LocalPid, Acc).
|
||||
|
||||
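For orientation, the sub-batch binary walked by `parse_uncompressed_subbatch/7` above is a sequence of simple entries, each framed as a 0 bit, a 31-bit length, and the entry payload. A purely illustrative construction; the payloads below are placeholders, not valid AMQP-encoded messages:

```
E1 = <<"hello">>,
E2 = <<"world">>,
SubBatch = <<0:1, (byte_size(E1)):31, E1/binary,
             0:1, (byte_size(E2)):31, E2/binary>>.
```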
entry_to_msg(Entry, Offset, #resource{kind = queue,
|
||||
name = QName}, Name, LocalPid) ->
|
||||
Mc0 = mc:init(mc_amqp, amqp10_framing:decode_bin(Entry), #{}),
|
||||
%% If exchange or routing_keys annotation isn't present the entry most likely came
|
||||
%% from the rabbitmq-stream plugin so we'll choose defaults that simulate use
|
||||
%% of the direct exchange.
|
||||
Mc1 = case mc:exchange(Mc0) of
|
||||
undefined -> mc:set_annotation(?ANN_EXCHANGE, <<>>, Mc0);
|
||||
_ -> Mc0
|
||||
end,
|
||||
Mc2 = case mc:routing_keys(Mc1) of
|
||||
[] -> mc:set_annotation(?ANN_ROUTING_KEYS, [QName], Mc1);
|
||||
_ -> Mc1
|
||||
end,
|
||||
Mc = mc:set_annotation(<<"x-stream-offset">>, Offset, Mc2),
|
||||
{Name, LocalPid, Offset, false, Mc}.
|
||||
|
||||
capabilities() ->
|
||||
#{unsupported_policies => [%% Classic policies
|
||||
|
@ -1146,7 +1283,7 @@ capabilities() ->
|
|||
queue_arguments => [<<"x-max-length-bytes">>, <<"x-queue-type">>,
|
||||
<<"x-max-age">>, <<"x-stream-max-segment-size-bytes">>,
|
||||
<<"x-initial-cluster-size">>, <<"x-queue-leader-locator">>],
|
||||
consumer_arguments => [<<"x-stream-offset">>, <<"x-credit">>],
|
||||
consumer_arguments => [<<"x-stream-offset">>],
|
||||
server_named => false}.
|
||||
|
||||
notify_decorators(Q) when ?is_amqqueue(Q) ->
|
||||
|
@ -1211,3 +1348,13 @@ get_nodes(Q) when ?is_amqqueue(Q) ->
|
|||
is_minority(All, Up) ->
|
||||
MinQuorum = length(All) div 2 + 1,
|
||||
length(Up) < MinQuorum.

deliver_actions(_, _, []) ->
    [];
deliver_actions(CTag, Ack, Msgs) ->
    [{deliver, CTag, Ack, Msgs}].

delivery_count_add(none, _) ->
    none;
delivery_count_add(Count, N) ->
    serial_number:add(Count, N).
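
`delivery_count_add/2` keeps the AMQP 1.0 delivery-count out of the picture for simple-prefetch consumers (`none`) and advances it with serial number arithmetic for credited ones. A tiny sketch, assuming the same 32-bit RFC 1982 wrap-around used elsewhere in this commit:

```
none = delivery_count_add(none, 5),
%% near the top of the 32-bit range the count wraps rather than overflows
4 = delivery_count_add(16#FFFFFFFE, 6).
```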
@ -26,26 +26,24 @@
|
|||
%%----------------------------------------------------------------------------
|
||||
|
||||
-spec init(rabbit_types:vhost()) -> state().
|
||||
|
||||
init(VHost)
|
||||
when is_binary(VHost) ->
|
||||
case enabled(VHost) of
|
||||
false -> none;
|
||||
true -> {ok, X} = rabbit_exchange:lookup(
|
||||
rabbit_misc:r(VHost, exchange, ?XNAME)),
|
||||
X
|
||||
false ->
|
||||
none;
|
||||
true ->
|
||||
{ok, X} = rabbit_exchange:lookup(rabbit_misc:r(VHost, exchange, ?XNAME)),
|
||||
X
|
||||
end.
|
||||
|
||||
-spec enabled(rabbit_types:vhost() | state()) -> boolean().
|
||||
|
||||
enabled(VHost)
|
||||
when is_binary(VHost) ->
|
||||
{ok, VHosts} = application:get_env(rabbit, ?TRACE_VHOSTS),
|
||||
lists:member(VHost, VHosts);
|
||||
enabled(none) ->
|
||||
false;
|
||||
enabled(#exchange{}) ->
|
||||
true.
|
||||
true;
|
||||
enabled(VHost)
|
||||
when is_binary(VHost) ->
|
||||
lists:member(VHost, vhosts_with_tracing_enabled()).
|
||||
|
||||
-spec tap_in(mc:state(), rabbit_exchange:route_return(),
|
||||
binary(), rabbit_types:username(), state()) -> 'ok'.
|
||||
|
@ -55,7 +53,8 @@ tap_in(Msg, QNames, ConnName, Username, State) ->
|
|||
-spec tap_in(mc:state(), rabbit_exchange:route_return(),
|
||||
binary(), rabbit_channel:channel_number(),
|
||||
rabbit_types:username(), state()) -> 'ok'.
|
||||
tap_in(_Msg, _QNames, _ConnName, _ChannelNum, _Username, none) -> ok;
|
||||
tap_in(_Msg, _QNames, _ConnName, _ChannelNum, _Username, none) ->
|
||||
ok;
|
||||
tap_in(Msg, QNames, ConnName, ChannelNum, Username, TraceX) ->
|
||||
XName = mc:exchange(Msg),
|
||||
#exchange{name = #resource{virtual_host = VHost}} = TraceX,
|
||||
|
@ -79,11 +78,15 @@ tap_out(Msg, ConnName, Username, State) ->
|
|||
-spec tap_out(rabbit_amqqueue:qmsg(), binary(),
|
||||
rabbit_channel:channel_number(),
|
||||
rabbit_types:username(), state()) -> 'ok'.
|
||||
tap_out(_Msg, _ConnName, _ChannelNum, _Username, none) -> ok;
|
||||
tap_out(_Msg, _ConnName, _ChannelNum, _Username, none) ->
|
||||
ok;
|
||||
tap_out({#resource{name = QName, virtual_host = VHost},
|
||||
_QPid, _QMsgId, Redelivered, Msg},
|
||||
ConnName, ChannelNum, Username, TraceX) ->
|
||||
RedeliveredNum = case Redelivered of true -> 1; false -> 0 end,
|
||||
RedeliveredNum = case Redelivered of
|
||||
true -> 1;
|
||||
false -> 0
|
||||
end,
|
||||
trace(TraceX, Msg, <<"deliver">>, QName,
|
||||
[{<<"redelivered">>, signedint, RedeliveredNum},
|
||||
{<<"vhost">>, longstr, VHost},
|
||||
|
@ -94,28 +97,24 @@ tap_out({#resource{name = QName, virtual_host = VHost},
|
|||
%%----------------------------------------------------------------------------
|
||||
|
||||
-spec start(rabbit_types:vhost()) -> 'ok'.
|
||||
|
||||
start(VHost)
|
||||
when is_binary(VHost) ->
|
||||
case lists:member(VHost, vhosts_with_tracing_enabled()) of
|
||||
case enabled(VHost) of
|
||||
true ->
|
||||
rabbit_log:info("Tracing is already enabled for vhost '~ts'", [VHost]),
|
||||
ok;
|
||||
false ->
|
||||
rabbit_log:info("Enabling tracing for vhost '~ts'", [VHost]),
|
||||
update_config(fun (VHosts) ->
|
||||
lists:usort([VHost | VHosts])
|
||||
end)
|
||||
update_config(fun(VHosts) -> lists:usort([VHost | VHosts]) end)
|
||||
end.
|
||||
|
||||
-spec stop(rabbit_types:vhost()) -> 'ok'.
|
||||
|
||||
stop(VHost)
|
||||
when is_binary(VHost) ->
|
||||
case lists:member(VHost, vhosts_with_tracing_enabled()) of
|
||||
case enabled(VHost) of
|
||||
true ->
|
||||
rabbit_log:info("Disabling tracing for vhost '~ts'", [VHost]),
|
||||
update_config(fun (VHosts) -> VHosts -- [VHost] end);
|
||||
update_config(fun(VHosts) -> VHosts -- [VHost] end);
|
||||
false ->
|
||||
rabbit_log:info("Tracing is already disabled for vhost '~ts'", [VHost]),
|
||||
ok
|
||||
|
@ -125,17 +124,20 @@ update_config(Fun) ->
|
|||
VHosts0 = vhosts_with_tracing_enabled(),
|
||||
VHosts = Fun(VHosts0),
|
||||
application:set_env(rabbit, ?TRACE_VHOSTS, VHosts),
|
||||
Sessions = rabbit_amqp_session:list_local(),
|
||||
NonAmqpPids = rabbit_networking:local_non_amqp_connections(),
|
||||
rabbit_log:debug("Will now refresh state of channels and of ~b non AMQP 0.9.1 "
|
||||
"connections after virtual host tracing changes",
|
||||
[length(NonAmqpPids)]),
|
||||
lists:foreach(fun(Pid) -> gen_server:cast(Pid, refresh_config) end, NonAmqpPids),
|
||||
{Time, _} = timer:tc(fun rabbit_channel:refresh_config_local/0),
|
||||
rabbit_log:debug("Refreshed channel state in ~fs", [Time/1_000_000]),
|
||||
rabbit_log:debug("Refreshing state of channels, ~b sessions and ~b non "
|
||||
"AMQP 0.9.1 connections after virtual host tracing changes...",
|
||||
[length(Sessions), length(NonAmqpPids)]),
|
||||
Pids = Sessions ++ NonAmqpPids,
|
||||
lists:foreach(fun(Pid) -> gen_server:cast(Pid, refresh_config) end, Pids),
|
||||
{Time, ok} = timer:tc(fun rabbit_channel:refresh_config_local/0),
|
||||
rabbit_log:debug("Refreshed channel states in ~fs", [Time / 1_000_000]),
|
||||
ok.
|
||||
|
||||
vhosts_with_tracing_enabled() ->
|
||||
application:get_env(rabbit, ?TRACE_VHOSTS, []).
|
||||
{ok, Vhosts} = application:get_env(rabbit, ?TRACE_VHOSTS),
|
||||
Vhosts.
|
||||
|
||||
%%----------------------------------------------------------------------------
|
||||
|
||||
|
@ -148,9 +150,7 @@ trace(X, Msg0, RKPrefix, RKSuffix, Extra) ->
|
|||
RoutingKeys = mc:routing_keys(Msg0),
|
||||
%% for now convert into amqp legacy
|
||||
Msg = mc:prepare(read, mc:convert(mc_amqpl, Msg0)),
|
||||
%% check exchange name in case it is same as target
|
||||
#content{properties = Props} = Content0 =
|
||||
mc:protocol_state(Msg),
|
||||
#content{properties = Props} = Content0 = mc:protocol_state(Msg),
|
||||
|
||||
Key = <<RKPrefix/binary, ".", RKSuffix/binary>>,
|
||||
Content = Content0#content{properties =
|
||||
|
@ -159,26 +159,23 @@ trace(X, Msg0, RKPrefix, RKSuffix, Extra) ->
|
|||
properties_bin = none},
|
||||
TargetXName = SourceXName#resource{name = ?XNAME},
|
||||
{ok, TraceMsg} = mc_amqpl:message(TargetXName, Key, Content),
|
||||
ok = rabbit_queue_type:publish_at_most_once(X, TraceMsg),
|
||||
ok
|
||||
ok = rabbit_queue_type:publish_at_most_once(X, TraceMsg)
|
||||
end.
|
||||
|
||||
msg_to_table(XName, RoutingKeys, Props) ->
|
||||
{PropsTable, _Ix} =
|
||||
lists:foldl(fun (K, {L, Ix}) ->
|
||||
lists:foldl(fun(K, {L, Ix}) ->
|
||||
V = element(Ix, Props),
|
||||
NewL = case V of
|
||||
undefined -> L;
|
||||
_ -> [{a2b(K), type(V), V} | L]
|
||||
_ -> [{atom_to_binary(K), type(V), V} | L]
|
||||
end,
|
||||
{NewL, Ix + 1}
|
||||
end, {[], 2}, record_info(fields, 'P_basic')),
|
||||
[{<<"exchange_name">>, longstr, XName},
|
||||
{<<"routing_keys">>, array, [{longstr, K} || K <- RoutingKeys]},
|
||||
{<<"properties">>, table, PropsTable},
|
||||
{<<"node">>, longstr, a2b(node())}].
|
||||
|
||||
a2b(A) -> list_to_binary(atom_to_list(A)).
|
||||
{<<"node">>, longstr, atom_to_binary(node())}].
|
||||
|
||||
type(V) when is_list(V) -> table;
|
||||
type(V) when is_integer(V) -> signedint;
|
||||
|
|
|
@ -0,0 +1,621 @@
|
|||
%% This Source Code Form is subject to the terms of the Mozilla Public
|
||||
%% License, v. 2.0. If a copy of the MPL was not distributed with this
|
||||
%% file, You can obtain one at https://mozilla.org/MPL/2.0/.
|
||||
%%
|
||||
%% Copyright (c) 2007-2023 VMware, Inc. or its affiliates. All rights reserved.
|
||||
|
||||
-module(amqp_auth_SUITE).
|
||||
|
||||
-compile([export_all,
|
||||
nowarn_export_all]).
|
||||
|
||||
-include_lib("common_test/include/ct.hrl").
|
||||
-include_lib("eunit/include/eunit.hrl").
|
||||
-include_lib("amqp_client/include/amqp_client.hrl").
|
||||
-include_lib("amqp10_common/include/amqp10_framing.hrl").
|
||||
|
||||
-import(rabbit_ct_broker_helpers,
|
||||
[rpc/4]).
|
||||
-import(rabbit_ct_helpers,
|
||||
[eventually/1]).
|
||||
-import(event_recorder,
|
||||
[assert_event_type/2,
|
||||
assert_event_prop/2]).
|
||||
|
||||
all() ->
|
||||
[
|
||||
{group, tests}
|
||||
].
|
||||
|
||||
groups() ->
|
||||
[
|
||||
{tests, [shuffle],
|
||||
[
|
||||
attach_target_queue,
|
||||
attach_source_exchange,
|
||||
send_to_topic,
|
||||
send_to_topic_using_subject,
|
||||
attach_source_topic,
|
||||
attach_target_internal_exchange,
|
||||
authn_failure_event,
|
||||
sasl_anonymous_success,
|
||||
sasl_none_success,
|
||||
sasl_plain_success,
|
||||
sasl_anonymous_failure,
|
||||
sasl_none_failure,
|
||||
sasl_plain_failure,
|
||||
vhost_absent,
|
||||
vhost_connection_limit,
|
||||
user_connection_limit,
|
||||
vhost_queue_limit
|
||||
]
|
||||
}
|
||||
].
|
||||
|
||||
init_per_suite(Config) ->
|
||||
application:ensure_all_started(amqp10_client),
|
||||
rabbit_ct_helpers:log_environment(),
|
||||
Config.
|
||||
|
||||
end_per_suite(Config) ->
|
||||
Config.
|
||||
|
||||
init_per_group(_Group, Config0) ->
|
||||
Config = rabbit_ct_helpers:run_setup_steps(
|
||||
Config0,
|
||||
rabbit_ct_broker_helpers:setup_steps() ++
|
||||
rabbit_ct_client_helpers:setup_steps()),
|
||||
Vhost = <<"test vhost">>,
|
||||
User = <<"test user">>,
|
||||
ok = rabbit_ct_broker_helpers:add_vhost(Config, Vhost),
|
||||
ok = rabbit_ct_broker_helpers:add_user(Config, User),
|
||||
[{test_vhost, Vhost},
|
||||
{test_user, User}] ++ Config.
|
||||
|
||||
end_per_group(_Group, Config) ->
|
||||
ok = rabbit_ct_broker_helpers:delete_user(Config, ?config(test_user, Config)),
|
||||
ok = rabbit_ct_broker_helpers:delete_vhost(Config, ?config(test_vhost, Config)),
|
||||
rabbit_ct_helpers:run_teardown_steps(
|
||||
Config,
|
||||
rabbit_ct_client_helpers:teardown_steps() ++
|
||||
rabbit_ct_broker_helpers:teardown_steps()).
|
||||
|
||||
init_per_testcase(Testcase, Config) ->
|
||||
ok = set_permissions(Config, <<>>, <<>>, <<"^some vhost permission">>),
|
||||
rabbit_ct_helpers:testcase_started(Config, Testcase).
|
||||
|
||||
end_per_testcase(Testcase, Config) ->
|
||||
delete_all_queues(Config),
|
||||
ok = clear_permissions(Config),
|
||||
rabbit_ct_helpers:testcase_finished(Config, Testcase).
|
||||
|
||||
attach_target_queue(Config) ->
|
||||
QName = <<"test queue">>,
|
||||
%% This target address means RabbitMQ will create a queue
|
||||
%% requiring configure access on the queue.
|
||||
%% We will also need write access to the default exchange to send to this queue.
|
||||
TargetAddress = <<"/queue/", QName/binary>>,
|
||||
OpnConf = connection_config(Config),
|
||||
{ok, Connection} = amqp10_client:open_connection(OpnConf),
|
||||
{ok, Session1} = amqp10_client:begin_session_sync(Connection),
|
||||
{ok, _Sender1} = amqp10_client:attach_sender_link(
|
||||
Session1, <<"test-sender-1">>, TargetAddress),
|
||||
ExpectedErr1 = error_unauthorized(
|
||||
<<"configure access to queue 'test queue' in vhost "
|
||||
"'test vhost' refused for user 'test user'">>),
|
||||
receive {amqp10_event, {session, Session1, {ended, ExpectedErr1}}} -> ok
|
||||
after 5000 -> flush(missing_ended),
|
||||
ct:fail("did not receive AMQP_ERROR_UNAUTHORIZED_ACCESS")
|
||||
end,
|
||||
|
||||
%% Give the user configure permissions on the queue.
|
||||
ok = set_permissions(Config, QName, <<>>, <<>>),
|
||||
{ok, Session2} = amqp10_client:begin_session_sync(Connection),
|
||||
{ok, _Sender2} = amqp10_client:attach_sender_link(
|
||||
Session2, <<"test-sender-2">>, TargetAddress),
|
||||
ExpectedErr2 = error_unauthorized(
|
||||
<<"write access to exchange 'amq.default' in vhost "
|
||||
"'test vhost' refused for user 'test user'">>),
|
||||
receive {amqp10_event, {session, Session2, {ended, ExpectedErr2}}} -> ok
|
||||
after 5000 -> flush(missing_ended),
|
||||
ct:fail("did not receive AMQP_ERROR_UNAUTHORIZED_ACCESS")
|
||||
end,
|
||||
|
||||
%% Give the user configure permissions on the queue and
|
||||
%% write access to the default exchange.
|
||||
ok = set_permissions(Config, QName, <<"amq\.default">>, <<>>),
|
||||
{ok, Session3} = amqp10_client:begin_session_sync(Connection),
|
||||
{ok, Sender3} = amqp10_client:attach_sender_link(
|
||||
Session3, <<"test-sender-3">>, TargetAddress),
|
||||
receive {amqp10_event, {link, Sender3, attached}} -> ok
|
||||
after 5000 -> flush(missing_attached),
|
||||
ct:fail("missing ATTACH from server")
|
||||
end,
|
||||
|
||||
ok = close_connection_sync(Connection).
|
||||
|
||||
attach_source_exchange(Config) ->
    %% This source address means RabbitMQ will create a queue with a generated name
    %% prefixed with amq.gen requiring configure access on the queue.
    %% The queue is bound to the fanout exchange requiring write access on the queue
    %% and read access on the fanout exchange.
    %% To consume from the queue, we will also need read access on the queue.
    SourceAddress = <<"/exchange/amq.fanout/ignored">>,
    OpnConf = connection_config(Config),
    {ok, Connection} = amqp10_client:open_connection(OpnConf),
    {ok, Session1} = amqp10_client:begin_session_sync(Connection),
    {ok, _Recv1} = amqp10_client:attach_receiver_link(
                     Session1, <<"receiver-1">>, SourceAddress),
    receive
        {amqp10_event,
         {session, Session1,
          {ended,
           #'v1_0.error'{
              condition = ?V_1_0_AMQP_ERROR_UNAUTHORIZED_ACCESS,
              description = {utf8, <<"configure access to queue 'amq.gen", _/binary>>}}}}} -> ok
    after 5000 -> flush(missing_ended),
                  ct:fail("did not receive AMQP_ERROR_UNAUTHORIZED_ACCESS")
    end,

    %% Give the user configure permissions on the queue.
    ok = set_permissions(Config, <<"^amq\.gen">>, <<>>, <<>>),
    {ok, Session2} = amqp10_client:begin_session_sync(Connection),
    {ok, _Recv2} = amqp10_client:attach_receiver_link(
                     Session2, <<"receiver-2">>, SourceAddress),
    receive
        {amqp10_event,
         {session, Session2,
          {ended,
           #'v1_0.error'{
              condition = ?V_1_0_AMQP_ERROR_UNAUTHORIZED_ACCESS,
              description = {utf8, <<"write access to queue 'amq.gen", _/binary>>}}}}} -> ok
    after 5000 -> flush(missing_ended),
                  ct:fail("did not receive AMQP_ERROR_UNAUTHORIZED_ACCESS")
    end,

    %% Give the user configure and write permissions on the queue.
    ok = set_permissions(Config, <<"^amq\.gen">>, <<"^amq\.gen">>, <<>>),
    {ok, Session3} = amqp10_client:begin_session_sync(Connection),
    {ok, _Recv3} = amqp10_client:attach_receiver_link(
                     Session3, <<"receiver-3">>, SourceAddress),
    ExpectedErr1 = error_unauthorized(
                     <<"read access to exchange 'amq.fanout' in vhost "
                       "'test vhost' refused for user 'test user'">>),
    receive {amqp10_event, {session, Session3, {ended, ExpectedErr1}}} -> ok
    after 5000 -> flush(missing_ended),
                  ct:fail("did not receive AMQP_ERROR_UNAUTHORIZED_ACCESS")
    end,

    %% Give the user configure and write permissions on the queue, and read access on the exchange.
    ok = set_permissions(Config, <<"^amq\.gen">>, <<"^amq\.gen">>, <<"amq\.fanout">>),
    {ok, Session4} = amqp10_client:begin_session_sync(Connection),
    {ok, _Recv4} = amqp10_client:attach_receiver_link(
                     Session4, <<"receiver-4">>, SourceAddress),
    receive
        {amqp10_event,
         {session, Session4,
          {ended,
           #'v1_0.error'{
              condition = ?V_1_0_AMQP_ERROR_UNAUTHORIZED_ACCESS,
              description = {utf8, <<"read access to queue 'amq.gen", _/binary>>}}}}} -> ok
    after 5000 -> flush(missing_ended),
                  ct:fail("did not receive AMQP_ERROR_UNAUTHORIZED_ACCESS")
    end,

    %% Give the user configure, write, and read permissions on the queue,
    %% and read access on the exchange.
    ok = set_permissions(Config, <<"^amq\.gen">>, <<"^amq\.gen">>, <<"^(amq\.gen|amq\.fanout)">>),
    {ok, Session5} = amqp10_client:begin_session_sync(Connection),
    {ok, Recv5} = amqp10_client:attach_receiver_link(
                    Session5, <<"receiver-5">>, SourceAddress),
    receive {amqp10_event, {link, Recv5, attached}} -> ok
    after 5000 -> flush(missing_attached),
                  ct:fail("missing ATTACH from server")
    end,

    ok = close_connection_sync(Connection).

send_to_topic(Config) ->
    TargetAddresses = [<<"/topic/test vhost.test user.a.b">>,
                       <<"/exchange/amq.topic/test vhost.test user.a.b">>],
    lists:foreach(fun(Address) ->
                          ok = send_to_topic0(Address, Config)
                  end, TargetAddresses).

send_to_topic0(TargetAddress, Config) ->
    User = ?config(test_user, Config),
    Vhost = ?config(test_vhost, Config),
    ok = rabbit_ct_broker_helpers:set_full_permissions(Config, User, Vhost),
    ok = set_topic_permissions(Config, <<"amq.topic">>, <<"^$">>, <<"^$">>),

    OpnConf = connection_config(Config),
    {ok, Connection} = amqp10_client:open_connection(OpnConf),
    {ok, Session1} = amqp10_client:begin_session_sync(Connection),
    {ok, Sender1} = amqp10_client:attach_sender_link_sync(
                      Session1, <<"sender-1">>, TargetAddress),
    ok = wait_for_credit(Sender1),
    Msg1 = amqp10_msg:new(<<255>>, <<1>>, true),
    ok = amqp10_client:send_msg(Sender1, Msg1),

    ExpectedErr = error_unauthorized(
                    <<"write access to topic 'test vhost.test user.a.b' in exchange "
                      "'amq.topic' in vhost 'test vhost' refused for user 'test user'">>),
    receive {amqp10_event, {session, Session1, {ended, ExpectedErr}}} -> ok
    after 5000 -> flush(missing_ended),
                  ct:fail("did not receive AMQP_ERROR_UNAUTHORIZED_ACCESS")
    end,

    ok = set_topic_permissions(Config, <<"amq.topic">>, <<"^{vhost}\.{username}\.a\.b$">>, <<"^$">>),
    {ok, Session2} = amqp10_client:begin_session_sync(Connection),
    {ok, Sender2} = amqp10_client:attach_sender_link_sync(
                      Session2, <<"sender-2">>, TargetAddress),
    ok = wait_for_credit(Sender2),
    Dtag = <<0, 0>>,
    Msg2 = amqp10_msg:new(Dtag, <<2>>, false),
    ok = amqp10_client:send_msg(Sender2, Msg2),
    %% We expect RELEASED since no queue is bound.
    receive {amqp10_disposition, {released, Dtag}} -> ok
    after 5000 -> ct:fail(released_timeout)
    end,

    ok = amqp10_client:detach_link(Sender2),
    ok = close_connection_sync(Connection).

send_to_topic_using_subject(Config) ->
    TargetAddress = <<"/exchange/amq.topic">>,
    User = ?config(test_user, Config),
    Vhost = ?config(test_vhost, Config),
    ok = rabbit_ct_broker_helpers:set_full_permissions(Config, User, Vhost),
    ok = set_topic_permissions(Config, <<"amq.topic">>, <<"^\.a$">>, <<"^$">>),

    OpnConf = connection_config(Config),
    {ok, Connection} = amqp10_client:open_connection(OpnConf),
    {ok, Session} = amqp10_client:begin_session_sync(Connection),
    {ok, Sender} = amqp10_client:attach_sender_link_sync(
                     Session, <<"sender">>, TargetAddress),
    ok = wait_for_credit(Sender),

    Dtag1 = <<"dtag 1">>,
    Msg1a = amqp10_msg:new(Dtag1, <<"m1">>, false),
    Msg1b = amqp10_msg:set_properties(#{subject => <<".a">>}, Msg1a),
    ok = amqp10_client:send_msg(Sender, Msg1b),
    %% We have sufficient authorization, but expect RELEASED since no queue is bound.
    receive {amqp10_disposition, {released, Dtag1}} -> ok
    after 5000 -> ct:fail(released_timeout)
    end,

    Dtag2 = <<"dtag 2">>,
    Msg2a = amqp10_msg:new(Dtag2, <<"m2">>, false),
    %% We don't have sufficient authorization.
    Msg2b = amqp10_msg:set_properties(#{subject => <<".a.b">>}, Msg2a),
    ok = amqp10_client:send_msg(Sender, Msg2b),
    ExpectedErr = error_unauthorized(
                    <<"write access to topic '.a.b' in exchange 'amq.topic' in "
                      "vhost 'test vhost' refused for user 'test user'">>),
    receive {amqp10_event, {session, Session, {ended, ExpectedErr}}} -> ok
    after 5000 -> flush(missing_ended),
                  ct:fail("did not receive AMQP_ERROR_UNAUTHORIZED_ACCESS")
    end,

    ok = close_connection_sync(Connection).

attach_source_topic(Config) ->
    %% These source addresses mean RabbitMQ will bind a queue to the default topic
    %% exchange with binding key 'test vhost.test user.a.b'.
    %% Therefore, we need read access to that topic.
    %% We also test variable expansion in topic permission patterns.
    SourceAddresses = [<<"/topic/test vhost.test user.a.b">>,
                       <<"/exchange/amq.topic/test vhost.test user.a.b">>],
    lists:foreach(fun(Address) ->
                          ok = attach_source_topic0(Address, Config)
                  end, SourceAddresses).

attach_source_topic0(SourceAddress, Config) ->
    User = ?config(test_user, Config),
    Vhost = ?config(test_vhost, Config),
    ok = rabbit_ct_broker_helpers:set_full_permissions(Config, User, Vhost),
    ok = set_topic_permissions(Config, <<"amq.topic">>, <<"^$">>, <<"^$">>),

    OpnConf = connection_config(Config),
    {ok, Connection} = amqp10_client:open_connection(OpnConf),
    {ok, Session1} = amqp10_client:begin_session_sync(Connection),
    {ok, _Recv1} = amqp10_client:attach_receiver_link(
                     Session1, <<"receiver-1">>, SourceAddress),
    ExpectedErr = error_unauthorized(
                    <<"read access to topic 'test vhost.test user.a.b' in exchange "
                      "'amq.topic' in vhost 'test vhost' refused for user 'test user'">>),
    receive {amqp10_event, {session, Session1, {ended, ExpectedErr}}} -> ok
    after 5000 -> flush(missing_ended),
                  ct:fail("did not receive AMQP_ERROR_UNAUTHORIZED_ACCESS")
    end,

    ok = set_topic_permissions(Config, <<"amq.topic">>, <<"^$">>, <<"^{vhost}\.{username}\.a\.b$">>),
    {ok, Session2} = amqp10_client:begin_session_sync(Connection),
    {ok, Recv2} = amqp10_client:attach_receiver_link(
                    Session2, <<"receiver-2">>, SourceAddress),
    receive {amqp10_event, {link, Recv2, attached}} -> ok
    after 5000 -> flush(missing_attached),
                  ct:fail("missing ATTACH from server")
    end,

    ok = close_connection_sync(Connection).

attach_target_internal_exchange(Config) ->
    XName = <<"test exchange">>,
    Ch = rabbit_ct_client_helpers:open_channel(Config),
    #'exchange.declare_ok'{} = amqp_channel:call(Ch, #'exchange.declare'{internal = true,
                                                                         exchange = XName}),

    OpnConf0 = connection_config(Config, <<"/">>),
    OpnConf = OpnConf0#{sasl := anon},
    {ok, Connection} = amqp10_client:open_connection(OpnConf),
    {ok, Session} = amqp10_client:begin_session_sync(Connection),
    Address = <<"/exchange/", XName/binary, "/some-routing-key">>,
    {ok, _} = amqp10_client:attach_sender_link(
                Session, <<"test-sender">>, Address),
    ExpectedErr = error_unauthorized(
                    <<"attach to internal exchange 'test exchange' in vhost '/' is forbidden">>),
    receive {amqp10_event, {session, Session, {ended, ExpectedErr}}} -> ok
    after 5000 -> flush(missing_ended),
                  ct:fail("did not receive AMQP_ERROR_UNAUTHORIZED_ACCESS")
    end,

    ok = amqp10_client:close_connection(Connection),
    #'exchange.delete_ok'{} = amqp_channel:call(Ch, #'exchange.delete'{exchange = XName}),
    ok = rabbit_ct_client_helpers:close_channel(Ch).

authn_failure_event(Config) ->
    ok = event_recorder:start(Config),

    Host = ?config(rmq_hostname, Config),
    Port = rabbit_ct_broker_helpers:get_node_config(Config, 0, tcp_port_amqp),
    Vhost = ?config(test_vhost, Config),
    User = ?config(test_user, Config),
    OpnConf = #{address => Host,
                port => Port,
                container_id => <<"my container">>,
                sasl => {plain, User, <<"wrong password">>},
                hostname => <<"vhost:", Vhost/binary>>},

    {ok, Connection} = amqp10_client:open_connection(OpnConf),
    receive {amqp10_event, {connection, Connection, {closed, sasl_auth_failure}}} -> ok
    after 5000 -> flush(missing_closed),
                  ct:fail("did not receive sasl_auth_failure")
    end,

    [E | _] = event_recorder:get_events(Config),
    ok = event_recorder:stop(Config),

    assert_event_type(user_authentication_failure, E),
    assert_event_prop([{name, <<"test user">>},
                       {auth_mechanism, <<"PLAIN">>},
                       {ssl, false},
                       {protocol, {1, 0}}],
                      E).

sasl_anonymous_success(Config) ->
    Mechanism = anon,
    ok = sasl_success(Mechanism, Config).

sasl_none_success(Config) ->
    Mechanism = none,
    ok = sasl_success(Mechanism, Config).

sasl_plain_success(Config) ->
    Mechanism = {plain, <<"guest">>, <<"guest">>},
    ok = sasl_success(Mechanism, Config).

sasl_success(Mechanism, Config) ->
    OpnConf0 = connection_config(Config, <<"/">>),
    OpnConf = OpnConf0#{sasl := Mechanism},
    {ok, Connection} = amqp10_client:open_connection(OpnConf),
    receive {amqp10_event, {connection, Connection, opened}} -> ok
    after 5000 -> ct:fail(missing_opened)
    end,
    ok = amqp10_client:close_connection(Connection).

sasl_anonymous_failure(Config) ->
    Mechanism = anon,
    ?assertEqual(
       {sasl_not_supported, Mechanism},
       sasl_failure(Mechanism, Config)).

sasl_none_failure(Config) ->
    Mechanism = none,
    sasl_failure(Mechanism, Config).

sasl_plain_failure(Config) ->
    Mechanism = {plain, <<"guest">>, <<"wrong password">>},
    ?assertEqual(
       sasl_auth_failure,
       sasl_failure(Mechanism, Config)).

sasl_failure(Mechanism, Config) ->
    App = rabbit,
    Par = amqp1_0_default_user,
    {ok, Default} = rpc(Config, application, get_env, [App, Par]),
    ok = rpc(Config, application, set_env, [App, Par, none]),

    OpnConf0 = connection_config(Config, <<"/">>),
    OpnConf = OpnConf0#{sasl := Mechanism},
    {ok, Connection} = amqp10_client:open_connection(OpnConf),
    Reason = receive {amqp10_event, {connection, Connection, {closed, Reason0}}} -> Reason0
             after 5000 -> ct:fail(missing_closed)
             end,

    ok = rpc(Config, application, set_env, [App, Par, Default]),
    Reason.

vhost_absent(Config) ->
    OpnConf = connection_config(Config, <<"vhost does not exist">>),
    {ok, Connection} = amqp10_client:open_connection(OpnConf),
    receive {amqp10_event, {connection, Connection, {closed, _}}} -> ok
    after 5000 -> ct:fail(missing_closed)
    end.

vhost_connection_limit(Config) ->
    Vhost = proplists:get_value(test_vhost, Config),
    ok = rabbit_ct_broker_helpers:set_vhost_limit(Config, 0, Vhost, max_connections, 1),

    OpnConf = connection_config(Config),
    {ok, C1} = amqp10_client:open_connection(OpnConf),
    receive {amqp10_event, {connection, C1, opened}} -> ok
    after 5000 -> ct:fail({missing_event, ?LINE})
    end,
    {ok, C2} = amqp10_client:open_connection(OpnConf),
    receive {amqp10_event, {connection, C2, {closed, _}}} -> ok
    after 5000 -> ct:fail({missing_event, ?LINE})
    end,

    OpnConf0 = connection_config(Config, <<"/">>),
    OpnConf1 = OpnConf0#{sasl := anon},
    {ok, C3} = amqp10_client:open_connection(OpnConf1),
    receive {amqp10_event, {connection, C3, opened}} -> ok
    after 5000 -> ct:fail({missing_event, ?LINE})
    end,
    {ok, C4} = amqp10_client:open_connection(OpnConf1),
    receive {amqp10_event, {connection, C4, opened}} -> ok
    after 5000 -> ct:fail({missing_event, ?LINE})
    end,

    [ok = close_connection_sync(C) || C <- [C1, C3, C4]],
    ok = rabbit_ct_broker_helpers:clear_vhost_limit(Config, 0, Vhost).

user_connection_limit(Config) ->
    DefaultUser = <<"guest">>,
    Limit = max_connections,
    ok = rabbit_ct_broker_helpers:set_user_limits(Config, DefaultUser, #{Limit => 0}),
    OpnConf0 = connection_config(Config, <<"/">>),
    OpnConf = OpnConf0#{sasl := anon},
    {ok, C1} = amqp10_client:open_connection(OpnConf),
    receive {amqp10_event, {connection, C1, {closed, _}}} -> ok
    after 5000 -> ct:fail({missing_event, ?LINE})
    end,

    {ok, C2} = amqp10_client:open_connection(connection_config(Config)),
    receive {amqp10_event, {connection, C2, opened}} -> ok
    after 5000 -> ct:fail({missing_event, ?LINE})
    end,

    ok = close_connection_sync(C2),
    ok = rabbit_ct_broker_helpers:clear_user_limits(Config, DefaultUser, Limit).

vhost_queue_limit(Config) ->
    Vhost = proplists:get_value(test_vhost, Config),
    ok = rabbit_ct_broker_helpers:set_vhost_limit(Config, 0, Vhost, max_queues, 0),
    QName = <<"q1">>,
    ok = set_permissions(Config, QName, <<>>, <<>>),

    OpnConf1 = connection_config(Config),
    {ok, C1} = amqp10_client:open_connection(OpnConf1),
    {ok, Session1} = amqp10_client:begin_session_sync(C1),
    TargetAddress = <<"/queue/", QName/binary>>,
    {ok, _Sender1} = amqp10_client:attach_sender_link(
                       Session1, <<"test-sender-1">>, TargetAddress),
    ExpectedErr = amqp_error(
                    ?V_1_0_AMQP_ERROR_RESOURCE_LIMIT_EXCEEDED,
                    <<"cannot declare queue 'q1' in vhost 'test vhost': vhost queue limit (0) is reached">>),
    receive {amqp10_event, {session, Session1, {ended, ExpectedErr}}} -> ok
    after 5000 -> flush(missing_ended),
                  ct:fail("did not receive expected error")
    end,

    OpnConf2 = connection_config(Config, <<"/">>),
    OpnConf3 = OpnConf2#{sasl := anon},
    {ok, C2} = amqp10_client:open_connection(OpnConf3),
    {ok, Session2} = amqp10_client:begin_session_sync(C2),
    {ok, Sender2} = amqp10_client:attach_sender_link(
                      Session2, <<"test-sender-2">>, TargetAddress),
    receive {amqp10_event, {link, Sender2, attached}} -> ok
    after 5000 -> flush(missing_attached),
                  ct:fail("missing ATTACH from server")
    end,

    ok = close_connection_sync(C1),
    ok = close_connection_sync(C2),
    ok = rabbit_ct_broker_helpers:clear_vhost_limit(Config, 0, Vhost).

connection_config(Config) ->
    Vhost = ?config(test_vhost, Config),
    connection_config(Config, Vhost).

connection_config(Config, Vhost) ->
    Host = ?config(rmq_hostname, Config),
    Port = rabbit_ct_broker_helpers:get_node_config(Config, 0, tcp_port_amqp),
    User = Password = ?config(test_user, Config),
    #{address => Host,
      port => Port,
      container_id => <<"my container">>,
      sasl => {plain, User, Password},
      hostname => <<"vhost:", Vhost/binary>>}.

set_permissions(Config, ConfigurePerm, WritePerm, ReadPerm) ->
    ok = rabbit_ct_broker_helpers:set_permissions(Config,
                                                  ?config(test_user, Config),
                                                  ?config(test_vhost, Config),
                                                  ConfigurePerm,
                                                  WritePerm,
                                                  ReadPerm).

set_topic_permissions(Config, Exchange, WritePat, ReadPat) ->
    ok = rpc(Config,
             rabbit_auth_backend_internal,
             set_topic_permissions,
             [?config(test_user, Config),
              ?config(test_vhost, Config),
              Exchange,
              WritePat,
              ReadPat,
              <<"acting-user">>]).

clear_permissions(Config) ->
    User = ?config(test_user, Config),
    Vhost = ?config(test_vhost, Config),
    ok = rabbit_ct_broker_helpers:clear_permissions(Config, User, Vhost),
    ok = rpc(Config,
             rabbit_auth_backend_internal,
             clear_topic_permissions,
             [User, Vhost, <<"acting-user">>]).

error_unauthorized(Description) ->
    amqp_error(?V_1_0_AMQP_ERROR_UNAUTHORIZED_ACCESS, Description).

amqp_error(Condition, Description)
  when is_binary(Description) ->
    #'v1_0.error'{
       condition = Condition,
       description = {utf8, Description}}.

%% Before we can send messages we have to wait for credit from the server.
wait_for_credit(Sender) ->
    receive
        {amqp10_event, {link, Sender, credited}} ->
            flush(?FUNCTION_NAME),
            ok
    after 5000 ->
              flush("wait_for_credit timed out"),
              ct:fail(credited_timeout)
    end.

flush(Prefix) ->
    receive Msg ->
                ct:pal("~ts flushed: ~p~n", [Prefix, Msg]),
                flush(Prefix)
    after 1 ->
              ok
    end.

delete_all_queues(Config) ->
    Qs = rpc(Config, rabbit_amqqueue, list, []),
    [{ok, _QLen} = rpc(Config, rabbit_amqqueue, delete, [Q, false, false, <<"fake-user">>])
     || Q <- Qs].

close_connection_sync(Connection)
  when is_pid(Connection) ->
    ok = amqp10_client:close_connection(Connection),
    receive {amqp10_event, {connection, Connection, {closed, normal}}} -> ok
    after 5000 -> flush(missing_closed),
                  ct:fail("missing CLOSE from server")
    end.

File diff suppressed because it is too large

@ -0,0 +1,221 @@
%% This Source Code Form is subject to the terms of the Mozilla Public
%% License, v. 2.0. If a copy of the MPL was not distributed with this
%% file, You can obtain one at https://mozilla.org/MPL/2.0/.
%%
%% Copyright (c) 2016-2023 VMware, Inc. or its affiliates. All rights reserved.

-module(amqp_credit_api_v2_SUITE).

-compile([export_all, nowarn_export_all]).

-include_lib("common_test/include/ct.hrl").
-include_lib("eunit/include/eunit.hrl").
-include_lib("amqp_client/include/amqp_client.hrl").

all() ->
    [
     {group, cluster_size_1}
    ].

groups() ->
    [
     {cluster_size_1, [],
      [credit_api_v2]}
    ].

suite() ->
    [
     {timetrap, {minutes, 10}}
    ].

init_per_suite(Config) ->
    {ok, _} = application:ensure_all_started(amqp10_client),
    rabbit_ct_helpers:log_environment(),
    rabbit_ct_helpers:run_setup_steps(Config, []).

end_per_suite(Config) ->
    rabbit_ct_helpers:run_teardown_steps(Config).

init_per_group(_Group, Config0) ->
    Config = rabbit_ct_helpers:merge_app_env(
               Config0, {rabbit, [{forced_feature_flags_on_init, []}]}),
    rabbit_ct_helpers:run_steps(Config,
                                rabbit_ct_broker_helpers:setup_steps() ++
                                rabbit_ct_client_helpers:setup_steps()).

end_per_group(_Group, Config) ->
    rabbit_ct_helpers:run_steps(Config,
                                rabbit_ct_client_helpers:teardown_steps() ++
                                rabbit_ct_broker_helpers:teardown_steps()).

init_per_testcase(TestCase, Config) ->
    case rabbit_ct_broker_helpers:is_feature_flag_supported(Config, TestCase) of
        true ->
            ?assertNot(rabbit_ct_broker_helpers:is_feature_flag_enabled(Config, TestCase)),
            Config;
        false ->
            {skip, io_lib:format("feature flag ~s is unsupported", [TestCase])}
    end.

end_per_testcase(_TestCase, Config) ->
    Config.

credit_api_v2(Config) ->
    CQ = <<"classic queue">>,
    QQ = <<"quorum queue">>,
    CQAddr = <<"/amq/queue/", CQ/binary>>,
    QQAddr = <<"/amq/queue/", QQ/binary>>,

    Ch = rabbit_ct_client_helpers:open_channel(Config),
    #'queue.declare_ok'{} = amqp_channel:call(Ch, #'queue.declare'{queue = CQ}),
    #'queue.declare_ok'{} = amqp_channel:call(
                              Ch, #'queue.declare'{
                                     queue = QQ,
                                     durable = true,
                                     arguments = [{<<"x-queue-type">>, longstr, <<"quorum">>}]}),
    ok = rabbit_ct_client_helpers:close_channel(Ch),

    Host = ?config(rmq_hostname, Config),
    Port = rabbit_ct_broker_helpers:get_node_config(Config, 0, tcp_port_amqp),
    OpnConf = #{address => Host,
                port => Port,
                container_id => <<"my container">>,
                sasl => {plain, <<"guest">>, <<"guest">>}},
    {ok, Connection} = amqp10_client:open_connection(OpnConf),
    {ok, Session} = amqp10_client:begin_session_sync(Connection),

    {ok, CQSender} = amqp10_client:attach_sender_link(Session, <<"cq sender">>, CQAddr),
    {ok, QQSender} = amqp10_client:attach_sender_link(Session, <<"qq sender">>, QQAddr),
    receive {amqp10_event, {link, CQSender, credited}} -> ok
    after 5000 -> ct:fail(credited_timeout)
    end,
    receive {amqp10_event, {link, QQSender, credited}} -> ok
    after 5000 -> ct:fail(credited_timeout)
    end,

    %% Send 40 messages to each queue.
    NumMsgs = 40,
    [begin
         Bin = integer_to_binary(N),
         ok = amqp10_client:send_msg(CQSender, amqp10_msg:new(Bin, Bin, true)),
         ok = amqp10_client:send_msg(QQSender, amqp10_msg:new(Bin, Bin, true))
     end || N <- lists:seq(1, NumMsgs)],
    ok = amqp10_client:detach_link(CQSender),
    ok = amqp10_client:detach_link(QQSender),

    %% Consume with credit API v1.
    CQAttachArgs = #{handle => 300,
                     name => <<"cq receiver 1">>,
                     role => {receiver, #{address => CQAddr,
                                          durable => configuration}, self()},
                     snd_settle_mode => unsettled,
                     rcv_settle_mode => first,
                     filter => #{}},
    {ok, CQReceiver1} = amqp10_client:attach_link(Session, CQAttachArgs),
    QQAttachArgs = #{handle => 400,
                     name => <<"qq receiver 1">>,
                     role => {receiver, #{address => QQAddr,
                                          durable => configuration}, self()},
                     snd_settle_mode => unsettled,
                     rcv_settle_mode => first,
                     filter => #{}},
    {ok, QQReceiver1} = amqp10_client:attach_link(Session, QQAttachArgs),

    ok = consume_and_accept(10, CQReceiver1, Session),
    ok = consume_and_accept(10, QQReceiver1, Session),

    ?assertEqual(ok,
                 rabbit_ct_broker_helpers:enable_feature_flag(Config, ?FUNCTION_NAME)),
    flush(enabled_feature_flag),

    %% Consume with credit API v2.
    {ok, CQReceiver2} = amqp10_client:attach_receiver_link(
                          Session, <<"cq receiver 2">>, CQAddr, unsettled),
    {ok, QQReceiver2} = amqp10_client:attach_receiver_link(
                          Session, <<"qq receiver 2">>, QQAddr, unsettled),
    ok = consume_and_accept(10, CQReceiver2, Session),
    ok = consume_and_accept(10, QQReceiver2, Session),

    %% Consume with credit API v1 again.
    ok = consume_and_accept(10, CQReceiver1, Session),
    ok = consume_and_accept(10, QQReceiver1, Session),

    %% Detach the credit API v1 links and attach with the same output handle.
    ok = detach_sync(CQReceiver1),
    ok = detach_sync(QQReceiver1),
    {ok, CQReceiver3} = amqp10_client:attach_link(Session, CQAttachArgs),
    {ok, QQReceiver3} = amqp10_client:attach_link(Session, QQAttachArgs),

    %% The new links should use credit API v2.
    ok = consume_and_accept(10, CQReceiver3, Session),
    ok = consume_and_accept(10, QQReceiver3, Session),

    flush(pre_drain),
    %% Draining should also work.
    ok = amqp10_client:flow_link_credit(CQReceiver3, 10, never, true),
    receive {amqp10_event, {link, CQReceiver3, credit_exhausted}} -> ok
    after 5000 -> ct:fail({missing_credit_exhausted, ?LINE})
    end,
    receive Unexpected1 -> ct:fail({unexpected, ?LINE, Unexpected1})
    after 20 -> ok
    end,

    ok = amqp10_client:flow_link_credit(QQReceiver3, 10, never, true),
    receive {amqp10_event, {link, QQReceiver3, credit_exhausted}} -> ok
    after 5000 -> ct:fail({missing_credit_exhausted, ?LINE})
    end,
    receive Unexpected2 -> ct:fail({unexpected, ?LINE, Unexpected2})
    after 20 -> ok
    end,

    ok = detach_sync(CQReceiver2),
    ok = detach_sync(QQReceiver2),
    ok = detach_sync(CQReceiver3),
    ok = detach_sync(QQReceiver3),
    ok = amqp10_client:end_session(Session),
    receive {amqp10_event, {session, Session, {ended, _}}} -> ok
    after 5000 -> ct:fail(missing_ended)
    end,
    ok = amqp10_client:close_connection(Connection),
    receive {amqp10_event, {connection, Connection, {closed, normal}}} -> ok
    after 5000 -> ct:fail(missing_closed)
    end.

consume_and_accept(NumMsgs, Receiver, Session) ->
    ok = amqp10_client:flow_link_credit(Receiver, NumMsgs, never),
    Msgs = receive_messages(Receiver, NumMsgs),
    ok = amqp10_client_session:disposition(
           Session,
           receiver,
           amqp10_msg:delivery_id(hd(Msgs)),
           amqp10_msg:delivery_id(lists:last(Msgs)),
           true,
           accepted).

receive_messages(Receiver, N) ->
    receive_messages0(Receiver, N, []).

receive_messages0(_Receiver, 0, Acc) ->
    lists:reverse(Acc);
receive_messages0(Receiver, N, Acc) ->
    receive
        {amqp10_msg, Receiver, Msg} ->
            receive_messages0(Receiver, N - 1, [Msg | Acc])
    after 5000 ->
              exit({timeout, {num_received, length(Acc)}, {num_missing, N}})
    end.

detach_sync(Receiver) ->
    ok = amqp10_client:detach_link(Receiver),
    receive {amqp10_event, {link, Receiver, {detached, normal}}} -> ok
    after 5000 -> ct:fail({missing_detached, Receiver})
    end.

flush(Prefix) ->
    receive
        Msg ->
            ct:pal("~ts flushed: ~p~n", [Prefix, Msg]),
            flush(Prefix)
    after 1 ->
              ok
    end.

@ -5,88 +5,83 @@
|
|||
%% Copyright (c) 2007-2024 Broadcom. All Rights Reserved. The term “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. All rights reserved.
|
||||
%%
|
||||
|
||||
-module(proxy_protocol_SUITE).
|
||||
-module(amqp_proxy_protocol_SUITE).
|
||||
|
||||
-include_lib("common_test/include/ct.hrl").
|
||||
-compile([export_all, nowarn_export_all]).
|
||||
|
||||
-compile(export_all).
|
||||
-include_lib("eunit/include/eunit.hrl").
|
||||
|
||||
-import(rabbit_ct_helpers, [eventually/3]).
|
||||
-import(rabbit_ct_broker_helpers, [rpc/4]).
|
||||
|
||||
-define(TIMEOUT, 5000).
|
||||
|
||||
all() ->
|
||||
[
|
||||
{group, sequential_tests}
|
||||
].
|
||||
[{group, tests}].
|
||||
|
||||
groups() -> [
|
||||
{sequential_tests, [], [
|
||||
proxy_protocol_v1,
|
||||
proxy_protocol_v1_tls,
|
||||
proxy_protocol_v2_local
|
||||
]}
|
||||
groups() ->
|
||||
[{tests, [shuffle],
|
||||
[
|
||||
v1,
|
||||
v1_tls,
|
||||
v2_local
|
||||
]}
|
||||
].
|
||||
|
||||
init_per_suite(Config) ->
|
||||
rabbit_ct_helpers:log_environment(),
|
||||
Config1 = rabbit_ct_helpers:set_config(Config, [
|
||||
{rmq_nodename_suffix, ?MODULE}
|
||||
]),
|
||||
Config2 = rabbit_ct_helpers:merge_app_env(Config1, [
|
||||
{rabbit, [
|
||||
{proxy_protocol, true}
|
||||
]}
|
||||
]),
|
||||
Config3 = rabbit_ct_helpers:set_config(Config2, {rabbitmq_ct_tls_verify, verify_none}),
|
||||
rabbit_ct_helpers:run_setup_steps(Config3,
|
||||
rabbit_ct_broker_helpers:setup_steps() ++
|
||||
rabbit_ct_client_helpers:setup_steps()).
|
||||
Config1 = rabbit_ct_helpers:set_config(
|
||||
Config,
|
||||
[{rmq_nodename_suffix, ?MODULE},
|
||||
{rabbitmq_ct_tls_verify, verify_none}]),
|
||||
Config2 = rabbit_ct_helpers:merge_app_env(
|
||||
Config1,
|
||||
[{rabbit, [{proxy_protocol, true}]}]),
|
||||
rabbit_ct_helpers:run_setup_steps(
|
||||
Config2,
|
||||
rabbit_ct_broker_helpers:setup_steps() ++
|
||||
rabbit_ct_client_helpers:setup_steps()).
|
||||
|
||||
end_per_suite(Config) ->
|
||||
rabbit_ct_helpers:run_teardown_steps(Config,
|
||||
rabbit_ct_client_helpers:teardown_steps() ++
|
||||
rabbit_ct_broker_helpers:teardown_steps()).
|
||||
|
||||
init_per_group(_, Config) -> Config.
|
||||
end_per_group(_, Config) -> Config.
|
||||
|
||||
init_per_testcase(Testcase, Config) ->
|
||||
rabbit_ct_helpers:testcase_started(Config, Testcase).
|
||||
|
||||
end_per_testcase(Testcase, Config) ->
|
||||
eventually(?_assertEqual(0, rpc(Config, ets, info, [connection_created, size])), 1000, 10),
|
||||
rabbit_ct_helpers:testcase_finished(Config, Testcase).
|
||||
|
||||
proxy_protocol_v1(Config) ->
|
||||
v1(Config) ->
|
||||
Port = rabbit_ct_broker_helpers:get_node_config(Config, 0, tcp_port_amqp),
|
||||
{ok, Socket} = gen_tcp:connect({127,0,0,1}, Port,
|
||||
[binary, {active, false}, {packet, raw}]),
|
||||
[binary, {active, false}, {packet, raw}]),
|
||||
ok = inet:send(Socket, "PROXY TCP4 192.168.1.1 192.168.1.2 80 81\r\n"),
|
||||
[ok = inet:send(Socket, amqp_1_0_frame(FrameType))
|
||||
|| FrameType <- [header_sasl, sasl_init, header_amqp, open, 'begin']],
|
||||
|| FrameType <- [header_sasl, sasl_init, header_amqp, open]],
|
||||
{ok, _Packet} = gen_tcp:recv(Socket, 0, ?TIMEOUT),
|
||||
ConnectionName = rabbit_ct_broker_helpers:rpc(Config, 0,
|
||||
?MODULE, connection_name, []),
|
||||
match = re:run(ConnectionName, <<"^192.168.1.1:80 -> 192.168.1.2:81 \\(\\d\\)">>, [{capture, none}]),
|
||||
gen_tcp:close(Socket),
|
||||
ok.
|
||||
ConnectionName = rpc(Config, ?MODULE, connection_name, []),
|
||||
match = re:run(ConnectionName, <<"^192.168.1.1:80 -> 192.168.1.2:81$">>, [{capture, none}]),
|
||||
ok = gen_tcp:close(Socket).
|
||||
|
||||
proxy_protocol_v1_tls(Config) ->
|
||||
v1_tls(Config) ->
|
||||
app_utils:start_applications([asn1, crypto, public_key, ssl]),
|
||||
Port = rabbit_ct_broker_helpers:get_node_config(Config, 0, tcp_port_amqp_tls),
|
||||
{ok, Socket} = gen_tcp:connect({127,0,0,1}, Port,
|
||||
[binary, {active, false}, {packet, raw}]),
|
||||
ok = inet:send(Socket, "PROXY TCP4 192.168.1.1 192.168.1.2 80 81\r\n"),
|
||||
[binary, {active, false}, {packet, raw}]),
|
||||
ok = inet:send(Socket, "PROXY TCP4 192.168.1.1 192.168.1.2 80 82\r\n"),
|
||||
{ok, SslSocket} = ssl:connect(Socket, [{verify, verify_none}], ?TIMEOUT),
|
||||
[ok = ssl:send(SslSocket, amqp_1_0_frame(FrameType))
|
||||
|| FrameType <- [header_sasl, sasl_init, header_amqp, open, 'begin']],
|
||||
|| FrameType <- [header_sasl, sasl_init, header_amqp, open]],
|
||||
{ok, _Packet} = ssl:recv(SslSocket, 0, ?TIMEOUT),
|
||||
timer:sleep(1000),
|
||||
ConnectionName = rabbit_ct_broker_helpers:rpc(Config, 0,
|
||||
?MODULE, connection_name, []),
|
||||
match = re:run(ConnectionName, <<"^192.168.1.1:80 -> 192.168.1.2:81 \\(\\d\\)$">>, [{capture, none}]),
|
||||
gen_tcp:close(Socket),
|
||||
ok.
|
||||
ConnectionName = rpc(Config, ?MODULE, connection_name, []),
|
||||
match = re:run(ConnectionName, <<"^192.168.1.1:80 -> 192.168.1.2:82$">>, [{capture, none}]),
|
||||
ok = gen_tcp:close(Socket).
|
||||
|
||||
proxy_protocol_v2_local(Config) ->
|
||||
v2_local(Config) ->
|
||||
ProxyInfo = #{
|
||||
command => local,
|
||||
version => 2
|
||||
|
@ -96,14 +91,11 @@ proxy_protocol_v2_local(Config) ->
|
|||
[binary, {active, false}, {packet, raw}]),
|
||||
ok = inet:send(Socket, ranch_proxy_header:header(ProxyInfo)),
|
||||
[ok = inet:send(Socket, amqp_1_0_frame(FrameType))
|
||||
|| FrameType <- [header_sasl, sasl_init, header_amqp, open, 'begin']],
|
||||
|| FrameType <- [header_sasl, sasl_init, header_amqp, open]],
|
||||
{ok, _Packet} = gen_tcp:recv(Socket, 0, ?TIMEOUT),
|
||||
ConnectionName = rabbit_ct_broker_helpers:rpc(Config, 0,
|
||||
?MODULE, connection_name, []),
|
||||
match = re:run(ConnectionName, <<"^127.0.0.1:\\d+ -> 127.0.0.1:\\d+ \\(\\d\\)$">>, [{capture, none}]),
|
||||
gen_tcp:close(Socket),
|
||||
ok.
|
||||
|
||||
ConnectionName = rpc(Config, ?MODULE, connection_name, []),
|
||||
match = re:run(ConnectionName, <<"^127.0.0.1:\\d+ -> 127.0.0.1:\\d+$">>, [{capture, none}]),
|
||||
ok = gen_tcp:close(Socket).
|
||||
|
||||
%% hex frames to send to have the connection recorded in RabbitMQ
|
||||
%% use wireshark with one of the Java tests to record those
|
||||
|
@ -114,9 +106,7 @@ amqp_1_0_frame(header_amqp) ->
|
|||
amqp_1_0_frame(sasl_init) ->
|
||||
hex_frame_to_binary("0000001902010000005341c00c01a309414e4f4e594d4f5553");
|
||||
amqp_1_0_frame(open) ->
|
||||
hex_frame_to_binary("0000003f02000000005310c03202a12438306335323662332d653530662d343835352d613564302d336466643738623537633730a1096c6f63616c686f7374");
|
||||
amqp_1_0_frame('begin') ->
|
||||
hex_frame_to_binary("0000002002000000005311c01305405201707fffffff707fffffff700000ffff").
|
||||
hex_frame_to_binary("0000003f02000000005310c03202a12438306335323662332d653530662d343835352d613564302d336466643738623537633730a1096c6f63616c686f7374").
|
||||
|
||||
hex_frame_to_binary(HexsString) ->
|
||||
Hexs = split(HexsString, []),
|
||||
|
@ -135,18 +125,16 @@ connection_name() ->
|
|||
%% hence the retry
|
||||
case retry(fun connection_registered/0, 20) of
|
||||
true ->
|
||||
Connections = ets:tab2list(connection_created),
|
||||
{_Key, Values} = lists:nth(1, Connections),
|
||||
[{_Key, Values}] = ets:tab2list(connection_created),
|
||||
{_, Name} = lists:keyfind(name, 1, Values),
|
||||
Name;
|
||||
false ->
|
||||
error
|
||||
ct:fail("not 1 connection registered")
|
||||
end.
|
||||
|
||||
connection_registered() ->
|
||||
I = ets:info(connection_created),
|
||||
Size = proplists:get_value(size, I),
|
||||
Size > 0.
|
||||
Size = ets:info(connection_created, size),
|
||||
Size =:= 1.
|
||||
|
||||
retry(_Function, 0) ->
|
||||
false;
|
|
@ -5,7 +5,7 @@
|
|||
%% Copyright (c) 2007-2024 Broadcom. All Rights Reserved. The term “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. All rights reserved.
|
||||
%%
|
||||
|
||||
-module(system_SUITE).
|
||||
-module(amqp_system_SUITE).
|
||||
|
||||
-include_lib("common_test/include/ct.hrl").
|
||||
-include_lib("rabbit_common/include/rabbit_framing.hrl").
|
||||
|
@ -58,27 +58,11 @@ init_per_suite(Config) ->
|
|||
end_per_suite(Config) ->
|
||||
Config.
|
||||
|
||||
init_per_group(streams, Config) ->
|
||||
case rabbit_ct_helpers:is_mixed_versions() of
|
||||
false ->
|
||||
Suffix = rabbit_ct_helpers:testcase_absname(Config, "", "-"),
|
||||
Config1 = rabbit_ct_helpers:set_config(Config, [
|
||||
{rmq_nodename_suffix, Suffix},
|
||||
{amqp10_client_library, dotnet}
|
||||
]),
|
||||
rabbit_ct_helpers:run_setup_steps(Config1, [
|
||||
fun build_dotnet_test_project/1
|
||||
] ++
|
||||
rabbit_ct_broker_helpers:setup_steps() ++
|
||||
rabbit_ct_client_helpers:setup_steps());
|
||||
_ ->
|
||||
{skip, "stream tests are skipped in mixed mode"}
|
||||
end;
|
||||
init_per_group(Group, Config) ->
|
||||
Suffix = rabbit_ct_helpers:testcase_absname(Config, "", "-"),
|
||||
Config1 = rabbit_ct_helpers:set_config(Config, [
|
||||
{rmq_nodename_suffix, Suffix},
|
||||
{amqp10_client_library, Group}
|
||||
{amqp_client_library, Group}
|
||||
]),
|
||||
GroupSetupStep = case Group of
|
||||
dotnet -> fun build_dotnet_test_project/1;
|
||||
|
@ -131,76 +115,51 @@ build_maven_test_project(Config) ->
|
|||
%% -------------------------------------------------------------------
|
||||
|
||||
roundtrip(Config) ->
|
||||
run(Config, [
|
||||
{dotnet, "roundtrip"},
|
||||
{java, "RoundTripTest"}
|
||||
]).
|
||||
run(Config, [{dotnet, "roundtrip"},
|
||||
{java, "RoundTripTest"}]).
|
||||
|
||||
streams(Config) ->
|
||||
Ch = rabbit_ct_client_helpers:open_channel(Config, 0),
|
||||
#'queue.declare_ok'{} =
|
||||
amqp_channel:call(Ch, #'queue.declare'{queue = <<"stream_q2">>,
|
||||
durable = true,
|
||||
arguments = [{<<"x-queue-type">>, longstr, "stream"}]}),
|
||||
run(Config, [
|
||||
{dotnet, "streams"}
|
||||
]).
|
||||
Ch = rabbit_ct_client_helpers:open_channel(Config),
|
||||
amqp_channel:call(Ch, #'queue.declare'{queue = <<"stream_q2">>,
|
||||
durable = true,
|
||||
arguments = [{<<"x-queue-type">>, longstr, "stream"}]}),
|
||||
run(Config, [{dotnet, "streams"}]).
|
||||
|
||||
roundtrip_to_amqp_091(Config) ->
|
||||
run(Config, [
|
||||
{dotnet, "roundtrip_to_amqp_091"}
|
||||
]).
|
||||
run(Config, [{dotnet, "roundtrip_to_amqp_091"}]).
|
||||
|
||||
default_outcome(Config) ->
|
||||
run(Config, [
|
||||
{dotnet, "default_outcome"}
|
||||
]).
|
||||
run(Config, [{dotnet, "default_outcome"}]).
|
||||
|
||||
no_routes_is_released(Config) ->
|
||||
Ch = rabbit_ct_client_helpers:open_channel(Config, 0),
|
||||
Ch = rabbit_ct_client_helpers:open_channel(Config),
|
||||
amqp_channel:call(Ch, #'exchange.declare'{exchange = <<"no_routes_is_released">>,
|
||||
durable = true}),
|
||||
run(Config, [
|
||||
{dotnet, "no_routes_is_released"}
|
||||
]).
|
||||
run(Config, [{dotnet, "no_routes_is_released"}]).
|
||||
|
||||
outcomes(Config) ->
|
||||
run(Config, [
|
||||
{dotnet, "outcomes"}
|
||||
]).
|
||||
run(Config, [{dotnet, "outcomes"}]).
|
||||
|
||||
fragmentation(Config) ->
|
||||
run(Config, [
|
||||
{dotnet, "fragmentation"}
|
||||
]).
|
||||
run(Config, [{dotnet, "fragmentation"}]).
|
||||
|
||||
message_annotations(Config) ->
|
||||
run(Config, [
|
||||
{dotnet, "message_annotations"}
|
||||
]).
|
||||
run(Config, [{dotnet, "message_annotations"}]).
|
||||
|
||||
footer(Config) ->
|
||||
run(Config, [
|
||||
{dotnet, "footer"}
|
||||
]).
|
||||
run(Config, [{dotnet, "footer"}]).
|
||||
|
||||
data_types(Config) ->
|
||||
run(Config, [
|
||||
{dotnet, "data_types"}
|
||||
]).
|
||||
run(Config, [{dotnet, "data_types"}]).
|
||||
|
||||
reject(Config) ->
|
||||
run(Config, [
|
||||
{dotnet, "reject"}
|
||||
]).
|
||||
run(Config, [{dotnet, "reject"}]).
|
||||
|
||||
redelivery(Config) ->
|
||||
run(Config, [
|
||||
{dotnet, "redelivery"}
|
||||
]).
|
||||
run(Config, [{dotnet, "redelivery"}]).
|
||||
|
||||
routing(Config) ->
|
||||
Ch = rabbit_ct_client_helpers:open_channel(Config, 0),
|
||||
Ch = rabbit_ct_client_helpers:open_channel(Config),
|
||||
amqp_channel:call(Ch, #'queue.declare'{queue = <<"transient_q">>,
|
||||
durable = false}),
|
||||
amqp_channel:call(Ch, #'queue.declare'{queue = <<"durable_q">>,
|
||||
|
@ -217,18 +176,6 @@ routing(Config) ->
|
|||
{dotnet, "routing"}
|
||||
]).
|
||||
|
||||
%% TODO: this tests doesn't test anything that the standard routing test
|
||||
%% already does. We should test stream specific things here like attaching
|
||||
%% to a given offset
|
||||
stream_interop_basics(Config) ->
|
||||
Ch = rabbit_ct_client_helpers:open_channel(Config, 0),
|
||||
amqp_channel:call(Ch, #'queue.declare'{queue = <<"stream_q">>,
|
||||
durable = true,
|
||||
arguments = [{<<"x-queue-type">>, longstr, <<"stream">>}]}),
|
||||
run(Config, [
|
||||
{dotnet, "routing"}
|
||||
]).
|
||||
|
||||
invalid_routes(Config) ->
|
||||
run(Config, [
|
||||
{dotnet, "invalid_routes"}
|
||||
|
@ -238,7 +185,7 @@ auth_failure(Config) ->
|
|||
run(Config, [ {dotnet, "auth_failure"} ]).
|
||||
|
||||
access_failure(Config) ->
|
||||
User = <<"access_failure">>,
|
||||
User = atom_to_binary(?FUNCTION_NAME),
|
||||
rabbit_ct_broker_helpers:add_user(Config, User, <<"boo">>),
|
||||
rabbit_ct_broker_helpers:set_permissions(Config, User, <<"/">>,
|
||||
<<".*">>, %% configure
|
||||
|
@ -248,12 +195,12 @@ access_failure(Config) ->
|
|||
run(Config, [ {dotnet, "access_failure"} ]).
|
||||
|
||||
access_failure_not_allowed(Config) ->
|
||||
User = <<"access_failure_not_allowed">>,
|
||||
User = atom_to_binary(?FUNCTION_NAME),
|
||||
rabbit_ct_broker_helpers:add_user(Config, User, <<"boo">>),
|
||||
run(Config, [ {dotnet, "access_failure_not_allowed"} ]).
|
||||
|
||||
access_failure_send(Config) ->
|
||||
User = <<"access_failure_send">>,
|
||||
User = atom_to_binary(?FUNCTION_NAME),
|
||||
rabbit_ct_broker_helpers:add_user(Config, User, <<"boo">>),
|
||||
rabbit_ct_broker_helpers:set_permissions(Config, User, <<"/">>,
|
||||
<<".*">>, %% configure
|
||||
|
@ -263,15 +210,13 @@ access_failure_send(Config) ->
|
|||
run(Config, [ {dotnet, "access_failure_send"} ]).
|
||||
|
||||
run(Config, Flavors) ->
|
||||
ClientLibrary = ?config(amqp10_client_library, Config),
|
||||
ClientLibrary = ?config(amqp_client_library, Config),
|
||||
Fun = case ClientLibrary of
|
||||
dotnet -> fun run_dotnet_test/2;
|
||||
java -> fun run_java_test/2
|
||||
end,
|
||||
case proplists:get_value(ClientLibrary, Flavors) of
|
||||
false -> ok;
|
||||
TestName -> Fun(Config, TestName)
|
||||
end.
|
||||
dotnet -> fun run_dotnet_test/2;
|
||||
java -> fun run_java_test/2
|
||||
end,
|
||||
{ClientLibrary, TestName} = proplists:lookup(ClientLibrary, Flavors),
|
||||
Fun(Config, TestName).
|
||||
|
||||
run_dotnet_test(Config, Method) ->
|
||||
TestProjectDir = ?config(dotnet_test_project_dir, Config),
|
|
@ -203,9 +203,11 @@ module Test =
|
|||
receiver.SetCredit(100, true)
|
||||
let rtd = receiver.Receive()
|
||||
assertNotNull rtd
|
||||
assertTrue (rtd.MessageAnnotations.Map.Count = 1)
|
||||
let (result, _) = rtd.MessageAnnotations.Map.TryGetValue("x-stream-offset")
|
||||
assertTrue result
|
||||
assertEqual 3 rtd.MessageAnnotations.Map.Count
|
||||
assertTrue (rtd.MessageAnnotations.Map.ContainsKey(Symbol "x-stream-offset"))
|
||||
assertTrue (rtd.MessageAnnotations.Map.ContainsKey(Symbol "x-exchange"))
|
||||
assertTrue (rtd.MessageAnnotations.Map.ContainsKey(Symbol "x-routing-key"))
|
||||
|
||||
assertEqual body rtd.Body
|
||||
assertEqual rtd.Properties.CorrelationId corr
|
||||
receiver.Close()
|
||||
|
@ -216,7 +218,7 @@ module Test =
|
|||
let roundtrip_to_amqp_091 uri =
|
||||
use c = connect uri
|
||||
let q = "roundtrip-091-q"
|
||||
let corr = "corrlation"
|
||||
let corr = "correlation"
|
||||
let sender = SenderLink(c.Session, q + "-sender" , q)
|
||||
|
||||
new Message("hi"B, Header = Header(),
|
||||
|
@ -300,7 +302,8 @@ module Test =
|
|||
|
||||
assertEqual m.Body m'.Body
|
||||
assertEqual (m.MessageAnnotations.Descriptor) (m'.MessageAnnotations.Descriptor)
|
||||
assertEqual 2 (m'.MessageAnnotations.Map.Count)
|
||||
// our 2 custom annotations + x-exchange + x-routing-key = 4
|
||||
assertEqual 4 (m'.MessageAnnotations.Map.Count)
|
||||
assertTrue (m.MessageAnnotations.[k1] = m'.MessageAnnotations.[k1])
|
||||
assertTrue (m.MessageAnnotations.[k2] = m'.MessageAnnotations.[k2])
|
||||
|
||||
|
@ -312,7 +315,7 @@ module Test =
|
|||
let k2 = Symbol "key2"
|
||||
footer.[Symbol "key1"] <- "value1"
|
||||
footer.[Symbol "key2"] <- "value2"
|
||||
let m = new Message("testing annotations", Footer = footer)
|
||||
let m = new Message("testing footer", Footer = footer)
|
||||
sender.Send m
|
||||
let m' = receive receiver
|
||||
|
||||
|
@ -432,7 +435,7 @@ module Test =
|
|||
receiver.Close()
|
||||
with
|
||||
| :? Amqp.AmqpException as ae ->
|
||||
assertEqual (ae.Error.Condition) (Symbol cond)
|
||||
assertEqual (Symbol cond) (ae.Error.Condition)
|
||||
| _ -> failwith "invalid expection thrown"
|
||||
|
||||
let authFailure uri =
|
||||
|
@ -456,8 +459,6 @@ module Test =
|
|||
))
|
||||
let sender = new SenderLink(ac.Session, "test-sender", dest)
|
||||
sender.Send(new Message "hi", TimeSpan.FromSeconds 15.)
|
||||
|
||||
|
||||
failwith "expected exception not received"
|
||||
with
|
||||
| :? Amqp.AmqpException as ex ->
|
|
@ -8,7 +8,7 @@
|
|||
</ItemGroup>
|
||||
<ItemGroup>
|
||||
<PackageReference Include="RabbitMQ.Client" Version="6.*" />
|
||||
<PackageReference Include="AmqpNetLite" Version="2.4.1" />
|
||||
<PackageReference Include="AmqpNetLite.Serialization" Version="2.4.1" />
|
||||
<PackageReference Include="AmqpNetLite" Version="2.4.8" />
|
||||
<PackageReference Include="AmqpNetLite.Serialization" Version="2.4.8" />
|
||||
</ItemGroup>
|
||||
</Project>
|
|
@ -0,0 +1,71 @@
|
|||
%% This Source Code Form is subject to the terms of the Mozilla Public
|
||||
%% License, v. 2.0. If a copy of the MPL was not distributed with this
|
||||
%% file, You can obtain one at https://mozilla.org/MPL/2.0/.
|
||||
%%
|
||||
%% Copyright (c) 2007-2023 VMware, Inc. or its affiliates. All rights reserved.
|
||||
%%
|
||||
|
||||
-module(event_recorder).
|
||||
-behaviour(gen_event).
|
||||
|
||||
-include_lib("stdlib/include/assert.hrl").
|
||||
-include_lib("rabbit_common/include/rabbit.hrl").
|
||||
|
||||
%% gen_event callbacks
|
||||
-export([init/1,
|
||||
handle_event/2,
|
||||
handle_call/2]).
|
||||
%% client API
|
||||
-export([start/1,
|
||||
stop/1,
|
||||
get_events/1]).
|
||||
-export([assert_event_type/2,
|
||||
assert_event_prop/2]).
|
||||
|
||||
-import(rabbit_ct_broker_helpers,
|
||||
[get_node_config/3]).
|
||||
|
||||
-define(INIT_STATE, []).
|
||||
|
||||
init(_) ->
|
||||
{ok, ?INIT_STATE}.
|
||||
|
||||
handle_event(#event{type = T}, State)
|
||||
when T =:= node_stats orelse
|
||||
T =:= node_node_stats orelse
|
||||
T =:= node_node_deleted ->
|
||||
{ok, State};
|
||||
handle_event(Event, State) ->
|
||||
{ok, [Event | State]}.
|
||||
|
||||
handle_call(take_state, State) ->
|
||||
{ok, lists:reverse(State), ?INIT_STATE}.
|
||||
|
||||
start(Config) ->
|
||||
ok = rabbit_ct_broker_helpers:add_code_path_to_all_nodes(Config, ?MODULE),
|
||||
ok = gen_event:add_handler(event_manager_ref(Config), ?MODULE, []).
|
||||
|
||||
stop(Config) ->
|
||||
ok = gen_event:delete_handler(event_manager_ref(Config), ?MODULE, []).
|
||||
|
||||
get_events(Config) ->
|
||||
%% events are sent and processed asynchronously
|
||||
timer:sleep(500),
|
||||
Result = gen_event:call(event_manager_ref(Config), ?MODULE, take_state),
|
||||
?assert(is_list(Result)),
|
||||
Result.
|
||||
|
||||
event_manager_ref(Config) ->
|
||||
Node = get_node_config(Config, 0, nodename),
|
||||
{rabbit_event, Node}.
|
||||
|
||||
assert_event_type(ExpectedType, #event{type = ActualType}) ->
|
||||
?assertEqual(ExpectedType, ActualType).
|
||||
|
||||
assert_event_prop(ExpectedProp = {Key, _Value}, #event{props = Props}) ->
|
||||
?assertEqual(ExpectedProp, lists:keyfind(Key, 1, Props));
|
||||
assert_event_prop(ExpectedProps, Event)
|
||||
when is_list(ExpectedProps) ->
|
||||
lists:foreach(fun(P) ->
|
||||
assert_event_prop(P, Event)
|
||||
end, ExpectedProps).
|
|
@ -7,29 +7,21 @@
|
|||
|
||||
-module(message_size_limit_SUITE).
|
||||
|
||||
-include_lib("common_test/include/ct.hrl").
|
||||
-include_lib("kernel/include/file.hrl").
|
||||
-compile([export_all, nowarn_export_all]).
|
||||
-include_lib("amqp_client/include/amqp_client.hrl").
|
||||
-include_lib("eunit/include/eunit.hrl").
|
||||
|
||||
-compile(export_all).
|
||||
|
||||
-define(TIMEOUT_LIST_OPS_PASS, 5000).
|
||||
-define(TIMEOUT, 30000).
|
||||
-define(TIMEOUT_CHANNEL_EXCEPTION, 5000).
|
||||
|
||||
-define(CLEANUP_QUEUE_NAME, <<"cleanup-queue">>).
|
||||
|
||||
all() ->
|
||||
[
|
||||
{group, parallel_tests}
|
||||
{group, tests}
|
||||
].
|
||||
|
||||
groups() ->
|
||||
[
|
||||
{parallel_tests, [parallel], [
|
||||
max_message_size
|
||||
]}
|
||||
{tests, [], [
|
||||
max_message_size
|
||||
]}
|
||||
].
|
||||
|
||||
suite() ->
|
||||
|
@ -81,8 +73,7 @@ max_message_size(Config) ->
|
|||
Size2Mb = 1024 * 1024 * 2,
|
||||
Size2Mb = byte_size(Binary2M),
|
||||
|
||||
rabbit_ct_broker_helpers:rpc(Config, 0,
|
||||
application, set_env, [rabbit, max_message_size, 1024 * 1024 * 3]),
|
||||
ok = rabbit_ct_broker_helpers:rpc(Config, persistent_term, put, [max_message_size, 1024 * 1024 * 3]),
|
||||
|
||||
{_, Ch} = rabbit_ct_client_helpers:open_connection_and_channel(Config, 0),
|
||||
|
||||
|
@ -96,8 +87,7 @@ max_message_size(Config) ->
|
|||
assert_channel_fail_max_size(Ch, Monitor),
|
||||
|
||||
%% increase the limit
|
||||
rabbit_ct_broker_helpers:rpc(Config, 0,
|
||||
application, set_env, [rabbit, max_message_size, 1024 * 1024 * 8]),
|
||||
ok = rabbit_ct_broker_helpers:rpc(Config, persistent_term, put, [max_message_size, 1024 * 1024 * 8]),
|
||||
|
||||
{_, Ch1} = rabbit_ct_client_helpers:open_connection_and_channel(Config, 0),
|
||||
|
||||
|
@ -112,15 +102,7 @@ max_message_size(Config) ->
|
|||
|
||||
Monitor1 = monitor(process, Ch1),
|
||||
amqp_channel:call(Ch1, #'basic.publish'{routing_key = <<"none">>}, #amqp_msg{payload = Binary10M}),
|
||||
assert_channel_fail_max_size(Ch1, Monitor1),
|
||||
|
||||
%% increase beyond the hard limit
|
||||
rabbit_ct_broker_helpers:rpc(Config, 0,
|
||||
application, set_env, [rabbit, max_message_size, 1024 * 1024 * 600]),
|
||||
Val = rabbit_ct_broker_helpers:rpc(Config, 0,
|
||||
rabbit_channel, get_max_message_size, []),
|
||||
|
||||
?assertEqual(?MAX_MSG_SIZE, Val).
|
||||
assert_channel_fail_max_size(Ch1, Monitor1).
|
||||
|
||||
%% -------------------------------------------------------------------
|
||||
%% Implementation
|
||||
|
|
|
@ -64,8 +64,8 @@ confirm(_Config) ->
|
|||
?assertEqual(undefined, rabbit_confirms:smallest(U7)),
|
||||
|
||||
U8 = rabbit_confirms:insert(2, [QName], XName, U1),
|
||||
{[{1, XName}, {2, XName}], _U9} = rabbit_confirms:confirm([1, 2], QName, U8),
|
||||
ok.
|
||||
{[{Seq1, XName}, {Seq2, XName}], _U9} = rabbit_confirms:confirm([1, 2], QName, U8),
|
||||
?assertEqual([1, 2], lists:sort([Seq1, Seq2])).
|
||||
|
||||
|
||||
reject(_Config) ->
|
||||
|
@ -94,8 +94,7 @@ reject(_Config) ->
|
|||
{ok, {2, XName}, U5} = rabbit_confirms:reject(2, U3),
|
||||
{error, not_found} = rabbit_confirms:reject(2, U5),
|
||||
?assertEqual(1, rabbit_confirms:size(U5)),
|
||||
?assertEqual(1, rabbit_confirms:smallest(U5)),
|
||||
ok.
|
||||
?assertEqual(1, rabbit_confirms:smallest(U5)).
|
||||
|
||||
remove_queue(_Config) ->
|
||||
XName = rabbit_misc:r(<<"/">>, exchange, <<"X">>),
|
||||
|
@ -114,5 +113,5 @@ remove_queue(_Config) ->
|
|||
|
||||
U5 = rabbit_confirms:insert(1, [QName], XName, U0),
|
||||
U6 = rabbit_confirms:insert(2, [QName], XName, U5),
|
||||
{[{1, XName}, {2, XName}], _U} = rabbit_confirms:remove_queue(QName, U6),
|
||||
ok.
|
||||
{[{Seq1, XName}, {Seq2, XName}], _U} = rabbit_confirms:remove_queue(QName, U6),
|
||||
?assertEqual([1, 2], lists:sort([Seq1, Seq2])).
|
||||
|
|
|
@ -34,17 +34,11 @@ all_tests() ->
|
|||
|
||||
groups() ->
|
||||
[
|
||||
{machine_version_2, [], all_tests()},
|
||||
{machine_version_3, [], all_tests()},
|
||||
{machine_version_conversion, [], [convert_v2_to_v3]}
|
||||
{machine_version_2, [shuffle], all_tests()},
|
||||
{machine_version_3, [shuffle], all_tests()},
|
||||
{machine_version_conversion, [shuffle], [convert_v2_to_v3]}
|
||||
].
|
||||
|
||||
init_per_suite(Config) ->
|
||||
Config.
|
||||
|
||||
end_per_suite(_Config) ->
|
||||
ok.
|
||||
|
||||
init_per_group(machine_version_2, Config) ->
|
||||
[{machine_version, 2} | Config];
|
||||
init_per_group(machine_version_3, Config) ->
|
||||
|
@ -55,12 +49,6 @@ init_per_group(machine_version_conversion, Config) ->
|
|||
end_per_group(_Group, _Config) ->
|
||||
ok.
|
||||
|
||||
init_per_testcase(_TestCase, Config) ->
|
||||
Config.
|
||||
|
||||
end_per_testcase(_TestCase, _Config) ->
|
||||
ok.
|
||||
|
||||
%%%===================================================================
|
||||
%%% Test cases
|
||||
%%%===================================================================
|
||||
|
@ -91,8 +79,7 @@ end_per_testcase(_TestCase, _Config) ->
|
|||
test_init(Name) ->
|
||||
init(#{name => Name,
|
||||
max_in_memory_length => 0,
|
||||
queue_resource => rabbit_misc:r("/", queue,
|
||||
atom_to_binary(Name, utf8)),
|
||||
queue_resource => rabbit_misc:r("/", queue, atom_to_binary(Name)),
|
||||
release_cursor_interval => 0}).
|
||||
|
||||
enq_enq_checkout_test(C) ->
|
||||
|
@ -109,7 +96,7 @@ enq_enq_checkout_test(C) ->
|
|||
?ASSERT_EFF({log, [1,2], _Fun, _Local}, Effects),
|
||||
ok.
|
||||
|
||||
credit_enq_enq_checkout_settled_credit_test(C) ->
|
||||
credit_enq_enq_checkout_settled_credit_v1_test(C) ->
|
||||
Cid = {?FUNCTION_NAME, self()},
|
||||
{State1, _} = enq(C, 1, 1, first, test_init(test)),
|
||||
{State2, _} = enq(C, 2, 2, second, State1),
|
||||
|
@ -122,7 +109,8 @@ credit_enq_enq_checkout_settled_credit_test(C) ->
|
|||
{State4, SettledEffects} = settle(C, Cid, 4, 1, State3),
|
||||
?assertEqual(false, lists:any(fun ({log, _, _, _}) ->
|
||||
true;
|
||||
(_) -> false
|
||||
(_) ->
|
||||
false
|
||||
end, SettledEffects)),
|
||||
%% granting credit (3) should deliver the second msg if the receivers
|
||||
%% delivery count is (1)
|
||||
|
@ -136,8 +124,43 @@ credit_enq_enq_checkout_settled_credit_test(C) ->
|
|||
end, FinalEffects)),
|
||||
ok.
|
||||
|
||||
credit_with_drained_test(C) ->
|
||||
Cid = {?FUNCTION_NAME, self()},
|
||||
credit_enq_enq_checkout_settled_credit_v2_test(C) ->
|
||||
Ctag = ?FUNCTION_NAME,
|
||||
Cid = {Ctag, self()},
|
||||
{State1, _} = enq(C, 1, 1, first, test_init(test)),
|
||||
{State2, _} = enq(C, 2, 2, second, State1),
|
||||
{State3, _, Effects} = apply(meta(C, 3),
|
||||
rabbit_fifo:make_checkout(
|
||||
Cid,
|
||||
{auto, 1, credited},
|
||||
%% denotes that credit API v2 is used
|
||||
#{initial_delivery_count => 16#ff_ff_ff_ff}),
|
||||
State2),
|
||||
?ASSERT_EFF({monitor, _, _}, Effects),
|
||||
?ASSERT_EFF({log, [1], _Fun, _Local}, Effects),
|
||||
%% Settling the delivery should not grant new credit.
|
||||
{State4, SettledEffects} = settle(C, Cid, 4, 1, State3),
|
||||
?assertEqual(false, lists:any(fun ({log, _, _, _}) ->
|
||||
true;
|
||||
(_) ->
|
||||
false
|
||||
end, SettledEffects)),
|
||||
{State5, CreditEffects} = credit(C, Cid, 5, 1, 0, false, State4),
|
||||
?ASSERT_EFF({log, [2], _, _}, CreditEffects),
|
||||
%% The credit_reply should be sent **after** the delivery.
|
||||
?assertEqual({send_msg, self(),
|
||||
{credit_reply, Ctag, _DeliveryCount = 1, _Credit = 0, _Available = 0, _Drain = false},
|
||||
?DELIVERY_SEND_MSG_OPTS},
|
||||
lists:last(CreditEffects)),
|
||||
{_State6, FinalEffects} = enq(C, 6, 3, third, State5),
|
||||
?assertEqual(false, lists:any(fun ({log, _, _, _}) ->
|
||||
true;
|
||||
(_) -> false
|
||||
end, FinalEffects)).
|
||||
|
||||
credit_with_drained_v1_test(C) ->
|
||||
Ctag = ?FUNCTION_NAME,
|
||||
Cid = {Ctag, self()},
|
||||
State0 = test_init(test),
|
||||
%% checkout with a single credit
|
||||
{State1, _, _} =
|
||||
|
@ -147,17 +170,42 @@ credit_with_drained_test(C) ->
|
|||
                                                             delivery_count = 0}}},
                 State1),
    {State, Result, _} =
        apply(meta(C, 3), rabbit_fifo:make_credit(Cid, 0, 5, true), State1),
    ?assertMatch(#rabbit_fifo{consumers = #{Cid := #consumer{credit = 0,
                                                             delivery_count = 5}}},
        apply(meta(C, 3), rabbit_fifo:make_credit(Cid, 5, 0, true), State1),
    ?assertMatch(#rabbit_fifo{consumers = #{Cid := #consumer{credit = 0,
                                                             delivery_count = 5}}},
                 State),
    ?assertEqual({multi, [{send_credit_reply, 0},
                          {send_drained, {?FUNCTION_NAME, 5}}]},
                          {send_drained, {Ctag, 5}}]},
                 Result),
    ok.

credit_and_drain_test(C) ->
    Cid = {?FUNCTION_NAME, self()},
credit_with_drained_v2_test(C) ->
    Ctag = ?FUNCTION_NAME,
    Cid = {Ctag, self()},
    State0 = test_init(test),
    %% checkout with a single credit
    {State1, _, _} = apply(meta(C, 1),
                           rabbit_fifo:make_checkout(
                             Cid,
                             {auto, 1, credited},
                             %% denotes that credit API v2 is used
                             #{initial_delivery_count => 0}),
                           State0),
    ?assertMatch(#rabbit_fifo{consumers = #{Cid := #consumer{credit = 1,
                                                             delivery_count = 0}}},
                 State1),
    {State, ok, Effects} = apply(meta(C, 3), rabbit_fifo:make_credit(Cid, 5, 0, true), State1),
    ?assertMatch(#rabbit_fifo{consumers = #{Cid := #consumer{credit = 0,
                                                             delivery_count = 5}}},
                 State),
    ?assertEqual([{send_msg, self(),
                   {credit_reply, Ctag, _DeliveryCount = 5, _Credit = 0, _Available = 0, _Drain = true},
                   ?DELIVERY_SEND_MSG_OPTS}],
                 Effects).

credit_and_drain_v1_test(C) ->
    Ctag = ?FUNCTION_NAME,
    Cid = {Ctag, self()},
    {State1, _} = enq(C, 1, 1, first, test_init(test)),
    {State2, _} = enq(C, 2, 2, second, State1),
    %% checkout without any initial credit (like AMQP 1.0 would)

@@ -167,7 +215,7 @@ credit_and_drain_test(C) ->

    ?ASSERT_NO_EFF({log, _, _, _}, CheckEffs),
    {State4, {multi, [{send_credit_reply, 0},
                      {send_drained, {?FUNCTION_NAME, 2}}]},
                      {send_drained, {Ctag, 2}}]},
     Effects} = apply(meta(C, 4), rabbit_fifo:make_credit(Cid, 4, 0, true), State3),
    ?assertMatch(#rabbit_fifo{consumers = #{Cid := #consumer{credit = 0,
                                                             delivery_count = 4}}},

@@ -178,7 +226,36 @@ credit_and_drain_test(C) ->
    ?ASSERT_NO_EFF({log, _, _, _}, EnqEffs),
    ok.

credit_and_drain_v2_test(C) ->
    Ctag = ?FUNCTION_NAME,
    Cid = {Ctag, self()},
    {State1, _} = enq(C, 1, 1, first, test_init(test)),
    {State2, _} = enq(C, 2, 2, second, State1),
    {State3, _, CheckEffs} = apply(meta(C, 3),
                                   rabbit_fifo:make_checkout(
                                     Cid,
                                     %% checkout without any initial credit (like AMQP 1.0 would)
                                     {auto, 0, credited},
                                     %% denotes that credit API v2 is used
                                     #{initial_delivery_count => 16#ff_ff_ff_ff - 1}),
                                   State2),
    ?ASSERT_NO_EFF({log, _, _, _}, CheckEffs),

    {State4, ok, Effects} = apply(meta(C, 4),
                                  rabbit_fifo:make_credit(Cid, 4, 16#ff_ff_ff_ff - 1, true),
                                  State3),
    ?assertMatch(#rabbit_fifo{consumers = #{Cid := #consumer{credit = 0,
                                                             delivery_count = 2}}},
                 State4),
    ?ASSERT_EFF({log, [1, 2], _, _}, Effects),
    %% The credit_reply should be sent **after** the deliveries.
    ?assertEqual({send_msg, self(),
                  {credit_reply, Ctag, _DeliveryCount = 2, _Credit = 0, _Available = 0, _Drain = true},
                  ?DELIVERY_SEND_MSG_OPTS},
                 lists:last(Effects)),

    {_State5, EnqEffs} = enq(C, 5, 2, third, State4),
    ?ASSERT_NO_EFF({log, _, _, _}, EnqEffs).
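
The `16#ff_ff_ff_ff - 1` initial delivery-count above exercises the 32-bit serial number arithmetic used for the AMQP 1.0 delivery-count: granting 4 credits from 4294967294 wraps the counter around to 2, which is exactly the `delivery_count = 2` asserted in credit_and_drain_v2_test. A minimal sketch of that arithmetic (RFC 1982 style, 32-bit window); the module and function names are illustrative only, not the helpers rabbit_fifo actually uses:

```
%% Hypothetical sketch of 32-bit serial number arithmetic; not RabbitMQ's
%% real implementation.
-module(serial_number_sketch).
-export([add/2, diff/2]).

-define(MODULUS, (1 bsl 32)).

%% Addition wraps around the 32-bit space.
add(Serial, N) when N >= 0 ->
    (Serial + N) rem ?MODULUS.

%% Signed distance A - B with wrap-around, e.g. diff(2, 16#ff_ff_ff_ff - 1) =:= 4.
diff(A, B) ->
    D = ((A - B) rem ?MODULUS + ?MODULUS) rem ?MODULUS,
    case D >= ?MODULUS div 2 of
        true  -> D - ?MODULUS;
        false -> D
    end.
```

With these definitions, `add(16#ff_ff_ff_ff - 1, 4) =:= 2`, matching the delivery-count the test asserts after 2 deliveries plus 2 drained credits.
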
enq_enq_deq_test(C) ->
    Cid = {?FUNCTION_NAME, self()},

@@ -1402,10 +1479,9 @@ single_active_cancelled_with_unacked_test(C) ->
    ?assertMatch([], rabbit_fifo:query_waiting_consumers(State6)),
    ok.

single_active_with_credited_test(C) ->
single_active_with_credited_v1_test(C) ->
    State0 = init(#{name => ?FUNCTION_NAME,
                    queue_resource => rabbit_misc:r("/", queue,
                                                    atom_to_binary(?FUNCTION_NAME, utf8)),
                    queue_resource => rabbit_misc:r("/", queue, atom_to_binary(?FUNCTION_NAME)),
                    release_cursor_interval => 0,
                    single_active_consumer_on => true}),

@@ -1435,6 +1511,45 @@ single_active_with_credited_test(C) ->
                 rabbit_fifo:query_waiting_consumers(State3)),
    ok.

single_active_with_credited_v2_test(C) ->
    State0 = init(#{name => ?FUNCTION_NAME,
                    queue_resource => rabbit_misc:r("/", queue, atom_to_binary(?FUNCTION_NAME)),
                    release_cursor_interval => 0,
                    single_active_consumer_on => true}),
    C1 = {<<"ctag1">>, self()},
    {State1, _, _} = apply(meta(C, 1),
                           make_checkout(C1,
                                         {auto, 0, credited},
                                         %% denotes that credit API v2 is used
                                         #{initial_delivery_count => 0}),
                           State0),
    C2 = {<<"ctag2">>, self()},
    {State2, _, _} = apply(meta(C, 2),
                           make_checkout(C2,
                                         {auto, 0, credited},
                                         %% denotes that credit API v2 is used
                                         #{initial_delivery_count => 0}),
                           State1),
    %% add some credit
    C1Cred = rabbit_fifo:make_credit(C1, 5, 0, false),
    {State3, ok, Effects1} = apply(meta(C, 3), C1Cred, State2),
    ?assertEqual([{send_msg, self(),
                   {credit_reply, <<"ctag1">>, _DeliveryCount = 0, _Credit = 5, _Available = 0, _Drain = false},
                   ?DELIVERY_SEND_MSG_OPTS}],
                 Effects1),

    C2Cred = rabbit_fifo:make_credit(C2, 4, 0, false),
    {State, ok, Effects2} = apply(meta(C, 4), C2Cred, State3),
    ?assertEqual({send_msg, self(),
                  {credit_reply, <<"ctag2">>, _DeliveryCount = 0, _Credit = 4, _Available = 0, _Drain = false},
                  ?DELIVERY_SEND_MSG_OPTS},
                 Effects2),

    %% both consumers should have credit
    ?assertMatch(#{C1 := #consumer{credit = 5}},
                 State#rabbit_fifo.consumers),
    ?assertMatch([{C2, #consumer{credit = 4}}],
                 rabbit_fifo:query_waiting_consumers(State)).
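
The v1 and v2 variants above assert two different shapes for what is logically the same reply: the old pair of `send_credit_reply` / `send_drained` queue events versus a single `credit_reply` delivered via a `send_msg` effect to the consuming process (self() in these tests). A hedged sketch that simply pattern matches both shapes, purely for illustration; the function name and return tuples are invented here and are not rabbit_fifo's API:

```
%% Illustrative only: the two effect shapes asserted in the tests above.
%% Function name and return values are hypothetical.
describe_credit_effect({multi, [{send_credit_reply, Available},
                                {send_drained, {CTag, Drained}}]}) ->
    %% credit API v1: two separate events; Drained is the amount of remaining
    %% link credit the queue consumed when it drained itself.
    {v1, CTag, Available, Drained};
describe_credit_effect({send_msg, _Pid,
                        {credit_reply, CTag, DeliveryCount, Credit, Available, Drain},
                        _Opts}) ->
    %% credit API v2: one credit_reply carrying the full link flow-control
    %% state, sent via a send_msg effect to the consuming process.
    {v2, CTag, DeliveryCount, Credit, Available, Drain}.
```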

register_enqueuer_test(C) ->
    State0 = init(#{name => ?FUNCTION_NAME,

@@ -32,7 +32,8 @@ all_tests() ->
     discard,
     cancel_checkout,
     lost_delivery,
     credit,
     credit_api_v1,
     credit_api_v2,
     untracked_enqueue,
     flow,
     test_queries,

@@ -42,7 +43,7 @@ all_tests() ->

groups() ->
    [
     {tests, [], all_tests()}
     {tests, [shuffle], all_tests()}
    ].

init_per_group(_, Config) ->

@@ -441,7 +442,7 @@ lost_delivery(Config) ->
    end),
    ok.

credit(Config) ->
credit_api_v1(Config) ->
    ClusterName = ?config(cluster_name, Config),
    ServerId = ?config(node_id, Config),
    ok = start_cluster(ClusterName, [ServerId]),

@@ -450,21 +451,27 @@ credit(Config) ->
    {ok, F2, []} = rabbit_fifo_client:enqueue(ClusterName, m2, F1),
    {_, _, F3} = process_ra_events(receive_ra_events(2, 0), ClusterName, F2),
    %% checkout with 0 prefetch
    {ok, F4} = rabbit_fifo_client:checkout(<<"tag">>, 0, credited, #{}, F3),
    CTag = <<"my-tag">>,
    {ok, F4} = rabbit_fifo_client:checkout(CTag, 0, credited, #{}, F3),
    %% assert no deliveries
    {_, _, F5} = process_ra_events(receive_ra_events(), ClusterName, F4, [], [],
                                   fun
                                       (D, _) -> error({unexpected_delivery, D})
                                   end),
    %% provide some credit
    {F6, []} = rabbit_fifo_client:credit(<<"tag">>, 1, false, F5),
    {[{_, _, _, _, m1}], [{send_credit_reply, _}], F7} =
        process_ra_events(receive_ra_events(1, 1), ClusterName, F6),
    {F6, []} = rabbit_fifo_client:credit_v1(CTag, 1, false, F5),
    {[{_, _, _, _, m1}], [{send_credit_reply, 1}], F7} =
        process_ra_events(receive_ra_events(1, 1), ClusterName, F6),

    %% credit and drain
    {F8, []} = rabbit_fifo_client:credit(<<"tag">>, 4, true, F7),
    {[{_, _, _, _, m2}], [{send_credit_reply, _}, {send_drained, _}], F9} =
        process_ra_events(receive_ra_events(2, 1), ClusterName, F8),
    Drain = true,
    {F8, []} = rabbit_fifo_client:credit_v1(CTag, 4, Drain, F7),
    AvailableAfterCheckout = 0,
    {[{_, _, _, _, m2}],
     [{send_credit_reply, AvailableAfterCheckout},
      {credit_reply_v1, CTag, _CreditAfterCheckout = 3,
       AvailableAfterCheckout, Drain}],
     F9} = process_ra_events(receive_ra_events(2, 1), ClusterName, F8),
    flush(),

    %% enqueue another message - at this point the consumer credit should be

@@ -476,10 +483,78 @@ credit(Config) ->
                                       (D, _) -> error({unexpected_delivery, D})
                                   end),
    %% credit again and receive the last message
    {F12, []} = rabbit_fifo_client:credit(<<"tag">>, 10, false, F11),
    {F12, []} = rabbit_fifo_client:credit_v1(CTag, 10, false, F11),
    {[{_, _, _, _, m3}], _, _} = process_ra_events(receive_ra_events(1, 1), ClusterName, F12),
    ok.

credit_api_v2(Config) ->
    ClusterName = ?config(cluster_name, Config),
    ServerId = ?config(node_id, Config),
    ok = start_cluster(ClusterName, [ServerId]),
    F0 = rabbit_fifo_client:init([ServerId], 4),
    %% Enqueue 2 messages.
    {ok, F1, []} = rabbit_fifo_client:enqueue(ClusterName, m1, F0),
    {ok, F2, []} = rabbit_fifo_client:enqueue(ClusterName, m2, F1),
    {_, _, F3} = process_ra_events(receive_ra_events(2, 0), ClusterName, F2),
    CTag = <<"my-tag">>,
    DC0 = 16#ff_ff_ff_ff,
    DC1 = 0, %% = DC0 + 1 using 32 bit serial number arithmetic
    {ok, F4} = rabbit_fifo_client:checkout(
                 %% initial_delivery_count in consumer meta means credit API v2.
                 CTag, 0, credited, #{initial_delivery_count => DC0}, F3),
    %% assert no deliveries
    {_, _, F5} = process_ra_events(receive_ra_events(), ClusterName, F4, [], [],
                                   fun
                                       (D, _) -> error({unexpected_delivery, D})
                                   end),
    %% Grant 1 credit.
    {F6, []} = rabbit_fifo_client:credit(CTag, DC0, 1, false, _Echo0 = true, F5),
    %% We expect exactly 1 message due to 1 credit being granted.
    {[{_, _, _, _, m1}],
     %% We expect a credit_reply action due to echo=true
     [{credit_reply, CTag, DC1, _Credit0 = 0, _Available0 = 1, _Drain0 = false}],
     F7} = process_ra_events(receive_ra_events(), ClusterName, F6),

    %% Again, grant 1 credit.
    %% However, because we still use the initial delivery count DC0, rabbit_fifo
    %% won't send us a new message since it already sent us m1 for that old delivery-count.
    %% In other words, this credit top up simulates in-flight deliveries.
    {F8, []} = rabbit_fifo_client:credit(CTag, DC0, 1, false, _Echo1 = true, F7),
    {_NoMessages = [],
     %% We still expect a credit_reply action due to echo=true
     [{credit_reply, CTag, DC1, _Credit1 = 0, _Available1 = 1, _Drain1 = false}],
     F9} = process_ra_events(receive_ra_events(), ClusterName, F8),

    %% Grant 4 credits and drain.
    {F10, []} = rabbit_fifo_client:credit(CTag, DC1, 4, true, _Echo2 = false, F9),
    %% rabbit_fifo should advance the delivery-count as much as possible,
    %% consuming all credits due to drain=true and insufficient messages in the queue.
    DC2 = DC1 + 4,
    %% We expect to receive m2 which is the only message in the queue.
    {[{_, _, _, _, m2}],
     %% Even though echo=false, we still expect a credit_reply action due to
     %% drain=true and insufficient messages in the queue.
     [{credit_reply, CTag, DC2, _Credit2 = 0, _Available2 = 0, _Drain2 = true}],
     F11} = process_ra_events(receive_ra_events(), ClusterName, F10),
    flush(),

    %% Enqueue another message.
    %% At this point the consumer credit should be all used up due to the drain.
    {ok, F12, []} = rabbit_fifo_client:enqueue(ClusterName, m3, F11),
    %% assert no deliveries
    {_, _, F13} = process_ra_events(receive_ra_events(), ClusterName, F12, [], [],
                                    fun
                                        (D, _) -> error({unexpected_delivery, D})
                                    end),

    %% Grant 10 credits and receive the last message.
    {F14, []} = rabbit_fifo_client:credit(CTag, DC2, 10, false, _Echo = false, F13),
    ?assertMatch(
       {[{_, _, _, _, m3}],
        %% Due to echo=false, we don't expect a credit_reply action.
        _NoCreditReplyAction = [],
        _F15}, process_ra_events(receive_ra_events(), ClusterName, F14)).
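
The second 1-credit top-up above is issued against the stale delivery-count `DC0`, and the test asserts that no further message arrives and that the echoed credit is 0. That follows the AMQP 1.0 flow-control rule: the credit a queue is left with is the credit carried by the FLOW minus how far the queue's own delivery-count has already advanced past the FLOW's delivery-count. A small sketch, with hypothetical names, that reproduces the numbers asserted above:

```
%% Illustrative only; the function name is hypothetical, not a rabbit_fifo
%% internal. Credit left after applying a FLOW carrying {FlowDC, FlowCredit}
%% when the queue's own delivery-count is QueueDC, using a 32-bit wrap for
%% the delivery-count difference.
remaining_credit(QueueDC, FlowDC, FlowCredit) ->
    InFlight = (QueueDC - FlowDC) band 16#ff_ff_ff_ff,
    max(0, FlowCredit - InFlight).

%% Second top-up in credit_api_v2/1: the queue is already at DC1 = 0 (it has
%% delivered m1), while the FLOW still says DC0 = 16#ff_ff_ff_ff with 1 credit:
%% remaining_credit(0, 16#ff_ff_ff_ff, 1) =:= 0, hence no new delivery and a
%% credit_reply carrying credit 0.
```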

untracked_enqueue(Config) ->
    ClusterName = ?config(cluster_name, Config),
    ServerId = ?config(node_id, Config),

@@ -19,7 +19,7 @@
-define(WAIT, 5000).

suite() ->
    [{timetrap, 15 * 60000}].
    [{timetrap, 15 * 60_000}].

all() ->
    [

@@ -1712,11 +1712,6 @@ consume_from_replica(Config) ->
    rabbit_ct_broker_helpers:rpc(Config, 0, ?MODULE, delete_testcase_queue, [Q]).

consume_credit(Config) ->
    %% Because osiris provides one chunk on every read and we don't want to buffer
    %% messages in the broker to avoid memory penalties, the credit value won't
    %% be strict - we allow it to go into negative values.
    %% We can test that after receiving a chunk, no more messages are delivered until
    %% the credit goes back to a positive value.
    [Server | _] = rabbit_ct_broker_helpers:get_node_configs(Config, nodename),

    Ch = rabbit_ct_client_helpers:open_channel(Config, Server),

@@ -1736,40 +1731,55 @@ consume_credit(Config) ->
    qos(Ch1, Credit, false),
    subscribe(Ch1, Q, false, 0),

    %% Receive everything
    DeliveryTags = receive_batch(),

    %% We receive at least the given credit as we know there are 100 messages in the queue
    ?assert(length(DeliveryTags) >= Credit),

    %% Let's ack as many messages as we can while avoiding a positive credit for new deliveries
    {ToAck, Pending} = lists:split(length(DeliveryTags) - Credit, DeliveryTags),

    [ok = amqp_channel:cast(Ch1, #'basic.ack'{delivery_tag = DeliveryTag,
                                              multiple = false})
     || DeliveryTag <- ToAck],

    %% Nothing here, this is good
    receive
        {#'basic.deliver'{}, _} ->
            exit(unexpected_delivery)
    after 1000 ->
            ok
    %% We expect to receive exactly 2 messages.
    DTag1 = receive {#'basic.deliver'{delivery_tag = Tag1}, _} -> Tag1
            after 5000 -> ct:fail({missing_delivery, ?LINE})
            end,
    _DTag2 = receive {#'basic.deliver'{delivery_tag = Tag2}, _} -> Tag2
             after 5000 -> ct:fail({missing_delivery, ?LINE})
             end,
    receive {#'basic.deliver'{}, _} -> ct:fail({unexpected_delivery, ?LINE})
    after 100 -> ok
    end,

    %% Let's ack one more, we should receive a new chunk
    ok = amqp_channel:cast(Ch1, #'basic.ack'{delivery_tag = hd(Pending),
                                             multiple = false}),

    %% Yeah, here is the new chunk!
    receive
        {#'basic.deliver'{}, _} ->
            ok
    after 5000 ->
            exit(timeout)
    %% When we ack the 1st message, we should receive exactly 1 more message
    ok = amqp_channel:cast(Ch1, #'basic.ack'{delivery_tag = DTag1,
                                             multiple = false}),
    DTag3 = receive {#'basic.deliver'{delivery_tag = Tag3}, _} -> Tag3
            after 5000 -> ct:fail({missing_delivery, ?LINE})
            end,
    receive {#'basic.deliver'{}, _} ->
                ct:fail({unexpected_delivery, ?LINE})
    after 100 -> ok
    end,

    %% Whenever we ack 2 messages, we should receive exactly 2 more messages.
    ok = consume_credit0(Ch1, DTag3),

    rabbit_ct_broker_helpers:rpc(Config, 0, ?MODULE, delete_testcase_queue, [Q]).

consume_credit0(_Ch, DTag)
  when DTag > 50 ->
    %% sufficiently tested
    ok;
consume_credit0(Ch, DTagPrev) ->
    %% Ack 2 messages.
    ok = amqp_channel:cast(Ch, #'basic.ack'{delivery_tag = DTagPrev,
                                            multiple = true}),
    %% Receive 1st message.
    receive {#'basic.deliver'{}, _} -> ok
    after 5000 -> ct:fail({missing_delivery, ?LINE})
    end,
    %% Receive 2nd message.
    DTag = receive {#'basic.deliver'{delivery_tag = T}, _} -> T
           after 5000 -> ct:fail({missing_delivery, ?LINE})
           end,
    %% We shouldn't receive more messages given that AMQP 0.9.1 prefetch count is 2.
    receive {#'basic.deliver'{}, _} -> ct:fail({unexpected_delivery, ?LINE})
    after 10 -> ok
    end,
    consume_credit0(Ch, DTag).
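
The comments at the top of consume_credit/1 explain why stream credit is not strict: osiris hands over a whole chunk per read, the chunk is delivered in full, and the consumer's credit is simply decremented by the chunk size, possibly below zero; nothing more is delivered until acks bring it back above zero. A toy model of that accounting, with hypothetical names rather than the stream queue's real code:

```
%% Illustrative only: chunk-based credit accounting as described in the
%% comments of consume_credit/1 above. Names are hypothetical.
deliver_chunks(Credit, Chunks) ->
    deliver_chunks(Credit, Chunks, []).

%% Deliver whole chunks while credit is positive; credit may go negative
%% because a chunk is never split.
deliver_chunks(Credit, [ChunkSize | Rest], Delivered) when Credit > 0 ->
    deliver_chunks(Credit - ChunkSize, Rest, [ChunkSize | Delivered]);
deliver_chunks(Credit, _Rest, Delivered) ->
    {Credit, lists:reverse(Delivered)}.

%% Example: with a prefetch of 2 and 10-message chunks, the first chunk is
%% delivered whole and credit ends at -8; each ack returns 1 credit, so the
%% consumer must ack enough messages for credit to turn positive before the
%% next chunk is sent:
%% deliver_chunks(2, [10, 10]) =:= {-8, [10]}.
```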

consume_credit_out_of_order_ack(Config) ->
    %% Like consume_credit but acknowledging the messages out of order.
    %% We want to ensure it doesn't behave like multiple, that is if we have

@@ -25,6 +25,7 @@ groups() ->
    {classic_queue, [], [
        all_messages_go_to_one_consumer,
        fallback_to_another_consumer_when_first_one_is_cancelled,
        fallback_to_another_consumer_when_first_one_is_cancelled_qos1,
        fallback_to_another_consumer_when_exclusive_consumer_channel_is_cancelled,
        fallback_to_another_consumer_when_first_one_is_cancelled_manual_acks,
        amqp_exclusive_consume_fails_on_exclusive_consumer_queue

@@ -32,6 +33,7 @@ groups() ->
    {quorum_queue, [], [
        all_messages_go_to_one_consumer,
        fallback_to_another_consumer_when_first_one_is_cancelled,
        fallback_to_another_consumer_when_first_one_is_cancelled_qos1,
        fallback_to_another_consumer_when_exclusive_consumer_channel_is_cancelled,
        fallback_to_another_consumer_when_first_one_is_cancelled_manual_acks,
        basic_get_is_unsupported

@@ -165,6 +167,49 @@ fallback_to_another_consumer_when_first_one_is_cancelled(Config) ->
    amqp_connection:close(C),
    ok.

fallback_to_another_consumer_when_first_one_is_cancelled_qos1(Config) ->
    {C, Ch} = connection_and_channel(Config),
    Q = queue_declare(Ch, Config),
    ?assertEqual(#'basic.qos_ok'{},
                 amqp_channel:call(Ch, #'basic.qos'{prefetch_count = 1})),
    CTag1 = <<"tag1">>,
    CTag2 = <<"tag2">>,
    amqp_channel:subscribe(Ch, #'basic.consume'{queue = Q,
                                                consumer_tag = CTag1}, self()),
    receive #'basic.consume_ok'{consumer_tag = CTag1} -> ok
    after 5000 -> ct:fail(timeout_ctag1)
    end,

    amqp_channel:subscribe(Ch, #'basic.consume'{queue = Q,
                                                consumer_tag = CTag2}, self()),
    receive #'basic.consume_ok'{consumer_tag = CTag2} -> ok
    after 5000 -> ct:fail(timeout_ctag2)
    end,

    Publish = #'basic.publish'{exchange = <<>>, routing_key = Q},
    amqp_channel:cast(Ch, Publish, #amqp_msg{payload = <<"m1">>}),
    amqp_channel:cast(Ch, Publish, #amqp_msg{payload = <<"m2">>}),

    DTag1 = receive {#'basic.deliver'{consumer_tag = CTag1,
                                      delivery_tag = DTag},
                     #amqp_msg{payload = <<"m1">>}} -> DTag
            after 5000 -> ct:fail(timeout_m1)
            end,

    #'basic.cancel_ok'{consumer_tag = CTag1} = amqp_channel:call(Ch, #'basic.cancel'{consumer_tag = CTag1}),
    receive #'basic.cancel_ok'{consumer_tag = CTag1} -> ok
    after 5000 -> ct:fail(missing_cancel)
    end,

    amqp_channel:cast(Ch, #'basic.ack'{delivery_tag = DTag1}),

    receive {#'basic.deliver'{consumer_tag = CTag2},
             #amqp_msg{payload = <<"m2">>}} -> ok;
            Unexpected -> ct:fail({unexpected, Unexpected})
    after 5000 -> ct:fail(timeout_m2)
    end,
    amqp_connection:close(C).

fallback_to_another_consumer_when_first_one_is_cancelled_manual_acks(Config) ->
    %% Let's ensure that although the consumer is cancelled we still keep the unacked
    %% messages and accept acknowledgments on them.

@@ -292,7 +337,7 @@ queue_declare(Channel, Config) ->

consume({Parent, State, 0}) ->
    Parent ! {consumer_done, State};
consume({Parent, {MessagesPerConsumer, MessageCount}, CountDown}) ->
consume({Parent, {MessagesPerConsumer, MessageCount}, CountDown} = Arg) ->
    receive
        #'basic.consume_ok'{consumer_tag = CTag} ->
            consume({Parent, {maps:put(CTag, 0, MessagesPerConsumer), MessageCount}, CountDown});

@@ -307,9 +352,9 @@ consume({Parent, {MessagesPerConsumer, MessageCount}, CountDown}) ->
            consume({Parent, NewState, CountDown - 1});
        #'basic.cancel_ok'{consumer_tag = CTag} ->
            Parent ! {cancel_ok, CTag},
            consume({Parent, {MessagesPerConsumer, MessageCount}, CountDown});
            consume(Arg);
        _ ->
            consume({Parent, {MessagesPerConsumer, MessageCount}, CountDown})
            consume(Arg)
    after ?TIMEOUT ->
            Parent ! {consumer_timeout, {MessagesPerConsumer, MessageCount}},
            flush(),

@@ -8,11 +8,10 @@
-module(unit_access_control_SUITE).

-include_lib("common_test/include/ct.hrl").
-include_lib("kernel/include/file.hrl").
-include_lib("amqp_client/include/amqp_client.hrl").
-include_lib("eunit/include/eunit.hrl").

-compile(export_all).
-compile([export_all, nowarn_export_all]).

all() ->
    [

@@ -24,7 +23,7 @@ groups() ->
    [
     {parallel_tests, [parallel], [
         password_hashing,
         unsupported_connection_refusal
         version_negotiation
     ]},
     {sequential_tests, [], [
         login_with_credentials_but_no_password,

@@ -278,20 +277,37 @@ auth_backend_internal_expand_topic_permission(_Config) ->
        ),
    ok.

unsupported_connection_refusal(Config) ->
    passed = rabbit_ct_broker_helpers:rpc(Config, 0,
                                          ?MODULE, unsupported_connection_refusal1, [Config]).
%% Test AMQP 1.0 §2.2
version_negotiation(Config) ->
    ok = rabbit_ct_broker_helpers:rpc(Config, ?MODULE, version_negotiation1, [Config]).

unsupported_connection_refusal1(Config) ->
version_negotiation1(Config) ->
    H = ?config(rmq_hostname, Config),
    P = rabbit_ct_broker_helpers:get_node_config(Config, 0, tcp_port_amqp),
    [passed = test_unsupported_connection_refusal(H, P, V) ||
        V <- [<<"AMQP",9,9,9,9>>, <<"AMQP",0,1,0,0>>, <<"XXXX",0,0,9,1>>]],
    passed.

test_unsupported_connection_refusal(H, P, Header) ->
    [?assertEqual(<<"AMQP",0,1,0,0>>, version_negotiation2(H, P, Vsn)) ||
        Vsn <- [<<"AMQP",0,1,0,0>>,
                <<"AMQP",0,1,0,1>>,
                <<"AMQP",0,1,1,0>>,
                <<"AMQP",0,9,1,0>>,
                <<"AMQP",0,0,8,0>>,
                <<"XXXX",0,1,0,0>>,
                <<"XXXX",0,0,9,1>>]],

    [?assertEqual(<<"AMQP",3,1,0,0>>, version_negotiation2(H, P, Vsn)) ||
        Vsn <- [<<"AMQP",1,1,0,0>>,
                <<"AMQP",4,1,0,0>>,
                <<"AMQP",9,1,0,0>>]],

    [?assertEqual(<<"AMQP",0,0,9,1>>, version_negotiation2(H, P, Vsn)) ||
        Vsn <- [<<"AMQP",0,0,9,2>>,
                <<"AMQP",0,0,10,0>>,
                <<"AMQP",0,0,10,1>>]],
    ok.

version_negotiation2(H, P, Header) ->
    {ok, C} = gen_tcp:connect(H, P, [binary, {active, false}]),
    ok = gen_tcp:send(C, Header),
    {ok, <<"AMQP",0,0,9,1>>} = gen_tcp:recv(C, 8, 100),
    {ok, ServerVersion} = gen_tcp:recv(C, 8, 100),
    ok = gen_tcp:close(C),
    passed.
    ServerVersion.
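
The expected reply headers above are easier to read once the 8-byte protocol header is decomposed: the literal `AMQP` followed by a protocol id and a three-part version. In the AMQP 1.0 spec, protocol id 0 is plain AMQP, 2 is AMQP within TLS and 3 is AMQP with a SASL security layer, so <<"AMQP",3,1,0,0>> offers SASL for version 1.0.0, while <<"AMQP",0,0,9,1>> is the familiar AMQP 0-9-1 header. A small decoding helper, hypothetical and for illustration only:

```
%% Illustrative only: decompose the 8-byte headers used in the assertions above.
decode_protocol_header(<<"AMQP", ProtocolId, Major, Minor, Revision>>) ->
    Protocol = case ProtocolId of
                   0 -> amqp;   %% plain AMQP (1.0.0 for AMQP 1.0, 0.9.1 for 0-9-1)
                   2 -> tls;    %% AMQP within a TLS security layer
                   3 -> sasl;   %% AMQP with a SASL security layer
                   _ -> {unknown, ProtocolId}
               end,
    {Protocol, {Major, Minor, Revision}}.

%% decode_protocol_header(<<"AMQP",3,1,0,0>>) =:= {sasl, {1,0,0}}
%% decode_protocol_header(<<"AMQP",0,0,9,1>>) =:= {amqp, {0,9,1}}
```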

@@ -73,6 +73,7 @@ def all_beam_files(name = "all_beam_files"):
        "src/rabbit_queue_collector.erl",
        "src/rabbit_registry.erl",
        "src/rabbit_resource_monitor_misc.erl",
        "src/rabbit_routing_parser.erl",
        "src/rabbit_runtime.erl",
        "src/rabbit_runtime_parameter.erl",
        "src/rabbit_semver.erl",

@@ -168,6 +169,7 @@ def all_test_beam_files(name = "all_test_beam_files"):
        "src/rabbit_queue_collector.erl",
        "src/rabbit_registry.erl",
        "src/rabbit_resource_monitor_misc.erl",
        "src/rabbit_routing_parser.erl",
        "src/rabbit_runtime.erl",
        "src/rabbit_runtime_parameter.erl",
        "src/rabbit_semver.erl",

@@ -260,6 +262,7 @@ def all_srcs(name = "all_srcs"):
        "src/rabbit_registry.erl",
        "src/rabbit_registry_class.erl",
        "src/rabbit_resource_monitor_misc.erl",
        "src/rabbit_routing_parser.erl",
        "src/rabbit_runtime.erl",
        "src/rabbit_runtime_parameter.erl",
        "src/rabbit_semver.erl",

@@ -8,7 +8,7 @@
-include("resource.hrl").

%% Passed around most places
-record(user, {username,
-record(user, {username :: rabbit_types:option(rabbit_types:username()),
               tags,
               authz_backends}). %% List of {Module, AuthUserImpl} pairs

@@ -254,7 +254,7 @@
%% Max message size is hard limited to 512 MiB.
%% If a user configures a greater rabbit.max_message_size,
%% this value is used instead.
-define(MAX_MSG_SIZE, 536870912).
-define(MAX_MSG_SIZE, 536_870_912).
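
The only change here is the digit separators; the value itself is unchanged. A quick check of the 512 MiB figure from the comment (plain expressions that can be pasted into an `erl` shell):

```
%% 512 MiB = 512 * 1024 * 1024 bytes, and the underscored literal is the same integer.
true = (536_870_912 =:= 512 * 1024 * 1024),
true = (536_870_912 =:= 536870912).
```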

-define(store_proc_name(N), rabbit_misc:store_proc_name(?MODULE, N)).

Some files were not shown because too many files have changed in this diff.