Commit Graph

4657 Commits

Author SHA1 Message Date
Marcial Rosales 7e5b19b0b3
Refactor more test cases and add new ones 2024-02-29 15:14:36 -05:00
Marcial Rosales c30f3b6989
Refactor unit tests of auth_settings() 2024-02-29 15:14:36 -05:00
Marcial Rosales 6d2292c0cb
Change strategy that checks if an element exists 2024-02-29 15:14:35 -05:00
Marcial Rosales 993720e1b8
Update bazel instructions 2024-02-29 15:14:34 -05:00
Marcial Rosales 34b3b1248e
Create Oauth2 client 2024-02-29 15:14:34 -05:00
Karl Nilsson 5b0faf5d8c
Streams: Soft remove policy configuration of max_segment_size_bytes
This configuration is not guaranteed to be safe to change after a stream has been
declared and thus we'll remove the ability to change it after the initial
declaration. Users should favour the x- queue arg for this config. It will still
be possible to configure it as a policy, but it will only be evaluated at
declaration time.

This means that if a policy is set for a stream that re-configures the
`stream-max-segment-size-bytes` key it will show in the UI as updated but
the pre-existing stream will not use the updated configuration.

The key has been removed from the UI but for backwards compatibility it is still
settable.

NB: this PR adds a new command `update_config` to the stream coordinator state
machine. Strictly speaking this should require a new machine version but we're
bypassing that by relying on the feature flag instead, which avoids this command
being committed before all nodes have the new code version. A new machine version
can lower the availability properties during a rolling cluster upgrade so in
this case it is preferable to avoid that given the simplicity of the change.
2024-02-29 15:14:33 -05:00
Michael Klishin 44d4e584d5
More missed license header updates #9969 2024-02-29 15:14:30 -05:00
Michael Klishin f045662e60
More missed license header updates #9969 2024-02-29 15:14:30 -05:00
Michael Klishin a68eec6a29
Drive-by change: naming 2024-02-29 15:14:26 -05:00
Diana Parra Corbacho 226c45748a
Allow management users to query feature flags and deprecated features
The new banner to warn about not-enabled feature flags requires access
to this endpoint, and it must be visible for all users.
2024-02-29 15:14:25 -05:00
Michael Klishin 5d41ede8cd
An alternative to #10415, closes #10330
Per discussion in #10415, this introduces a new module,
rabbit_mgmt_nodes, which provides a couple of helpers
that can be used to implement Cowboy REST's
resource_exists/2 in the modules that return
information about cluster members.
2024-02-29 15:14:22 -05:00
Michael Klishin 438b5a0700
More missed (c) header updates 2024-02-29 15:14:19 -05:00
Loïc Hoguin b5b6cd7866
Update expected CQ version in tests 2024-02-29 15:14:08 -05:00
Diana Parra Corbacho e0a60c1d3c
Remove FF warning as soon as all features are enabled
The warning in the header needs a full refresh, just updating
the page content will not clear the warning.
2024-02-29 15:14:08 -05:00
Michael Klishin 1d62d8ba49
(c) year bumps 2024-02-29 15:14:01 -05:00
Ariel Otilibili 09640e5f4d
Defined "tags" as list
Typo spotted in #4050
2024-02-29 15:13:59 -05:00
Marcial Rosales 414ad58de5
Use correct user to authenticate
depending on the backend we want to
exercise
2024-02-29 15:13:58 -05:00
Michael Klishin 46b923cc3a
Selenium: update run-suites.sh argument to match #10200 2024-02-29 15:13:57 -05:00
Marcial Rosales 6776594328
Clean up 2024-02-29 15:13:56 -05:00
Marcial Rosales bb23719f74
Propagate all credentials to http backend 2024-02-29 15:13:56 -05:00
Michael Klishin adf59fd8b4
Add a test originally introduced in #10062 2024-02-29 15:13:56 -05:00
Ariel Otilibili f4b09f63e3
Replaced true | false by boolean() 2024-02-29 15:13:55 -05:00
Michael Klishin e6a9be75db
Revert "HTTP API: DELETE /api/queues/{vhost}/{name} use internal API call"
This reverts commit 78f901a224.
2024-02-29 15:13:51 -05:00
Péter Gömöri bbec2dfa02
Prevent formatter crash in mgmt_util
`rabbit_mgmt_util:direct_request/6` is always called with an
`ErrorMsg` which expects one format argument as a string. Convert the
arbitrary reason term into a string to avoid a crash like the below:

```
warning: FORMATTER CRASH: {"Delete exchange error: ~ts",[{'EXIT',{{badmatch,{error,...
```
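
A hedged sketch of the fix pattern in plain Erlang (illustrative function, not the exact patch): render the arbitrary reason term to a flat string first, so a format string that ends in `~ts` always receives a printable argument.

```erlang
%% Sketch only: pre-render Reason with ~p so the outer ~ts never crashes
%% the formatter, whatever shape the term has.
format_error(ErrorMsg, Reason) ->
    ReasonStr = lists:flatten(io_lib:format("~p", [Reason])),
    lists:flatten(io_lib:format(ErrorMsg, [ReasonStr])).
```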
2024-02-29 15:13:51 -05:00
Péter Gömöri 9480486852
Tolerate race condition when starting management db cache process
This prevents the below harmless crash when multiple parallel API
requests arrive soon after starting the node.

```
exception error: no match of right hand side value
                 {error,{already_started,<0.1593.0>}}
  in function  rabbit_mgmt_db_cache:fetch/4 (rabbit_mgmt_db_cache.erl, line 68)
  in call from rabbit_mgmt_db:submit_cached/4 (rabbit_mgmt_db.erl, line 756)
  in call from rabbit_mgmt_util:augment/2 (rabbit_mgmt_util.erl, line 412)
  in call from rabbit_mgmt_util:run_augmentation/2 (rabbit_mgmt_util.erl, line 389)
  in call from rabbit_mgmt_util:augment_resources0/6 (rabbit_mgmt_util.erl, line 378)
  in call from rabbit_mgmt_util:with_valid_pagination/3 (rabbit_mgmt_util.erl, line 302)
  in call from rabbit_mgmt_wm_queues:to_json/2 (rabbit_mgmt_wm_queues.erl, line 44)
  in call from cowboy_rest:call/3 (src/cowboy_rest.erl, line 1583)
```
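
A hedged sketch of the tolerant-start pattern (supervisor and child-spec names are illustrative assumptions, not the actual module): accept `{error, {already_started, Pid}}` as success so concurrent API requests racing to start the cache process don't crash.

```erlang
%% Sketch only: treat an already-started cache process as success.
ensure_cache_started(ChildSpec) ->
    case supervisor:start_child(cache_sup, ChildSpec) of
        {ok, Pid}                       -> {ok, Pid};
        {error, {already_started, Pid}} -> {ok, Pid};
        {error, _} = Err                -> Err
    end.
```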
2024-02-29 15:13:50 -05:00
David Ansari 3a906c407b
Fix crash when closing connection
Avoid the following crash
```
** Reason for termination ==
** {mqtt_unexpected_cast,{shutdown,"Closed via management plugin"}}

  crasher:
    initial call: rabbit_mqtt_reader:init/1
    pid: <0.1096.0>
    registered_name: []
    exception exit: {mqtt_unexpected_cast,
                        {shutdown,"Closed via management plugin"}}
      in function  gen_server:handle_common_reply/8 (gen_server.erl, line 1208)
```
when closing MQTT or Stream connections via HTTP API endpoint
```
/connections/username/:username
```
2024-02-29 15:13:46 -05:00
Diana Parra Corbacho 3583980643
Management: introduce deprecated features API endpoints, UI page and warnings 2024-02-29 15:13:45 -05:00
Diana Parra Corbacho c418a02591
Add experimental/disabled warning in State column 2024-02-29 15:13:44 -05:00
Diana Parra Corbacho a70a48df52
Add a warning banner if any stable feature flag is not enabled
Add an experimental tag on the description to experimental features
2024-02-29 15:13:44 -05:00
Michael Klishin 2adb2e0c5b
Definition import: more logging improvements 2024-02-29 15:13:40 -05:00
Michael Klishin 1849db00a9
Another take at #10068
Scan queues, exchanges and bindings before attempting
to import anything on boot. If they miss the virtual
host field, fail early and log a sensible message.
2024-02-29 15:13:39 -05:00
Johan Rhodin 67a0cb465f
Fix wrong link 2024-02-29 15:13:30 -05:00
Johan Rhodin dc5ce8a73a
fix info item, and behavior->behaviour 2024-02-29 15:13:07 -05:00
Michael Klishin 1f77610525
Fixes #9983 2024-02-29 15:04:02 -05:00
Michael Klishin fa7f402e26
Update (c) according to [1]
1. https://investors.broadcom.com/news-releases/news-release-details/broadcom-and-vmware-intend-close-transaction-november-22-2023
2024-02-29 15:04:00 -05:00
Michael Klishin d85eadcca0
Management UI: link to GitHub Discussions and not the Google group 2024-02-29 15:03:58 -05:00
Michael Klishin 813b2d2b2c
Merge pull request #10624 from rabbitmq/fixes-10612
Fix issue #10612
2024-02-28 22:37:15 -05:00
Johan Rhodin 331c736128 Fix operator policy separators 2024-02-28 16:01:16 -06:00
David Ansari 8cb313d5a1 Support AMQP 1.0 natively
## What

Similar to Native MQTT in #5895, this commits implements Native AMQP 1.0.
By "native", we mean do not proxy via AMQP 0.9.1 anymore.

  ## Why

Native AMQP 1.0 comes with the following major benefits:
1. Similar to Native MQTT, this commit provides better throughput, latency,
   scalability, and resource usage for AMQP 1.0.
   See https://blog.rabbitmq.com/posts/2023/03/native-mqtt for native MQTT improvements.
   See further below for some benchmarks.
2. Since AMQP 1.0 is not limited anymore by the AMQP 0.9.1 protocol,
   this commit allows implementing more AMQP 1.0 features in the future.
   Some features are already implemented in this commit (see next section).
3. Simpler, better understandable, and more maintainable code.

Native AMQP 1.0 as implemented in this commit has the
following major benefits compared to AMQP 0.9.1:
4. Memory and disk alarms will only stop accepting incoming TRANSFER frames.
   New connections can still be created to consume from RabbitMQ, i.e. to empty queues.
5. Due to 4., there is no longer a need for separate connections for publishers and
   consumers, as we currently recommend for AMQP 0.9.1, which potentially
   halves the number of physical TCP connections.
6. When a single connection sends to multiple target queues, a single
   slow target queue won't block the entire connection.
   Publisher can still send data quickly to all other target queues.
7. A publisher can request whether it wants publisher confirmation on a per-message basis.
   In AMQP 0.9.1 publisher confirms are configured per channel only.
8. Consumers can change their "prefetch count" dynamically which isn't
   possible in our AMQP 0.9.1 implementation. See #10174
9. AMQP 1.0 is an extensible protocol

This commit also fixes dozens of bugs present in the AMQP 1.0 plugin in
RabbitMQ 3.x - most of which cannot be backported due to the complexity
and limitations of the old 3.x implementation.

This commit contains breaking changes and is therefore targeted for RabbitMQ 4.0.

 ## Implementation details

1. Breaking change: With Native AMQP, the behaviour of
```
Convert AMQP 0.9.1 message headers to application properties for an AMQP 1.0 consumer
amqp1_0.convert_amqp091_headers_to_app_props = false | true (default false)
Convert AMQP 1.0 Application Properties to AMQP 0.9.1 headers
amqp1_0.convert_app_props_to_amqp091_headers = false | true (default false)
```
will break because we always convert according to the message container conversions.
For example, AMQP 0.9.1 x-headers will go into message-annotations instead of application properties.
Also, `false` won’t be respected since we always convert the headers with message containers.

2. Remove rabbit_queue_collector

rabbit_queue_collector is responsible for synchronously deleting
exclusive queues. Since the AMQP 1.0 plugin never creates exclusive
queues, rabbit_queue_collector doesn't need to be started in the first
place. This will save 1 Erlang process per AMQP 1.0 connection.

3. 7 processes per connection + 1 process per session in this commit instead of
   7 processes per connection + 15 processes per session in 3.x
Supervision hierarchy got re-designed.

4. Use 1 writer process per AMQP 1.0 connection
AMQP 0.9.1 uses a separate rabbit_writer Erlang process per AMQP 0.9.1 channel.
Prior to this commit, AMQP 1.0 used a separate rabbit_amqp1_0_writer process per AMQP 1.0 session.
Advantage of single writer proc per session (prior to this commit):
* High parallelism for serialising packets if multiple sessions within
  a connection write heavily at the same time.

This commit uses a single writer process per AMQP 1.0 connection that is
shared across all AMQP 1.0 sessions.
Advantages of single writer proc per connection (this commit):
* Lower memory usage with hundreds of thousands of AMQP 1.0 sessions
* Less TCP and IP header overhead given that the single writer process
  can accumulate bytes across all sessions before flushing the socket.

In other words, this commit decides that a reader / writer process pair
per AMQP 1.0 connection is good enough for bi-directional TRANSFER flows.
Having a writer per session is too heavy.
We still ensure high throughput by having separate reader, writer, and
session processes.

5. Transform rabbit_amqp1_0_writer into gen_server
Why:
Prior to this commit, when clicking on the AMQP 1.0 writer process in
observer, the process crashed.
Instead of handling all these debug messages of the sys module, it's better
to implement a gen_server.
There is no advantage of using a special OTP process over gen_server
for the AMQP 1.0 writer.
gen_server also provides cleaner format status output.

How:
Message callbacks return a timeout of 0.
After all messages in the inbox are processed, the timeout message is
handled by flushing any pending bytes.
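
As a hedged illustration of that pattern (module and function names below are made up; this is not the actual rabbit_amqp1_0_writer), a gen_server can return a zero timeout from its callbacks and flush its buffer when the `timeout` message arrives, i.e. once the mailbox has drained:

```erlang
-module(buffered_writer).
-behaviour(gen_server).
-export([start_link/1, send/2]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

start_link(Socket) -> gen_server:start_link(?MODULE, Socket, []).

send(Pid, Bytes) -> gen_server:cast(Pid, {send, Bytes}).

init(Socket) -> {ok, #{socket => Socket, pending => []}}.

handle_call(_Req, _From, State) -> {reply, ok, State, 0}.

%% Buffer the bytes and return a zero timeout so a flush runs as soon as
%% the mailbox is empty.
handle_cast({send, Bytes}, #{pending := P} = State) ->
    {noreply, State#{pending := [Bytes | P]}, 0}.

%% The timeout message only arrives once all queued messages were handled;
%% at that point any pending bytes are written to the socket in one call.
handle_info(timeout, #{socket := Sock, pending := P} = State) ->
    ok = gen_tcp:send(Sock, lists:reverse(P)),
    {noreply, State#{pending := []}};
handle_info(_Other, State) ->
    {noreply, State, 0}.
```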

6. Remove stats timer from writer
AMQP 1.0 connections haven't emitted any stats previously.

7. When there are contiguous queue confirmations in the session process
mailbox, batch them. When the confirmations are sent to the publisher, a
single DISPOSITION frame is sent for contiguously confirmed delivery
IDs.
This approach should be good enough. However it's suboptimal in
scenarios where contiguous delivery IDs that need confirmations are rare,
for example:
* There are multiple links in the session with different sender
  settlement modes and sender publishes across these links interleaved.
* sender settlement mode is mixed and sender publishes interleaved settled
  and unsettled TRANSFERs.
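
A hedged sketch of the batching idea in item 7 (illustrative module, not the session code): collapse the confirmed delivery IDs into contiguous ranges so that a single DISPOSITION frame can settle each run.

```erlang
-module(disposition_ranges).
-export([ranges/1]).

%% Given a sorted list of confirmed delivery IDs, e.g.
%%   ranges([1,2,3,7,8,10]) -> [{1,3},{7,8},{10,10}]
%% so three DISPOSITION frames suffice instead of six.
ranges([]) -> [];
ranges([Id | Rest]) -> ranges(Rest, Id, Id, []).

ranges([Id | Rest], First, Last, Acc) when Id =:= Last + 1 ->
    ranges(Rest, First, Id, Acc);
ranges([Id | Rest], First, Last, Acc) ->
    ranges(Rest, Id, Id, [{First, Last} | Acc]);
ranges([], First, Last, Acc) ->
    lists:reverse([{First, Last} | Acc]).
```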

8. Introduce credit API v2
Why:
The AMQP 0.9.1 credit extension which is to be removed in 4.0 was poorly
designed since basic.credit is a synchronous call into the queue process
blocking the entire AMQP 1.0 session process.

How:
Change the interactions between queue clients and queue server
implementations:
* Clients only request a credit reply if the FLOW's `echo` field is set
* Include all link flow control state held by the queue process into a
  new credit_reply queue event:
  * `available` after the queue sends any deliveries
  * `link-credit` after the queue sends any deliveries
  * `drain` which allows us to combine the old queue events
    send_credit_reply and send_drained into a single new queue event
    credit_reply.
* Include the consumer tag into the credit_reply queue event such that
  the AMQP 1.0 session process can process any credit replies
  asynchronously.

Link flow control state `delivery-count` also moves to the queue processes.

The new interactions are hidden behind feature flag credit_api_v2 to
allow for rolling upgrades from 3.13 to 4.0.

9. Use serial number arithmetic in quorum queues and session process.
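
Serial number arithmetic here refers to RFC 1982 style wrap-around arithmetic on 32-bit unsigned counters such as delivery-count and transfer-id. A hedged, self-contained sketch of the addition and comparison rules (the module name is illustrative; RabbitMQ keeps its own implementation in a shared AMQP 1.0 library, see item 29):

```erlang
-module(serial_arith).
-export([add/2, compare/2]).

-define(MODULO, 16#100000000). %% 2^32
-define(HALF,   16#80000000).  %% 2^31

%% Addition wraps around 2^32; the addend must be below 2^31 per RFC 1982.
add(S, N) when N >= 0, N < ?HALF ->
    (S + N) rem ?MODULO.

%% Returns less | equal | greater, or undefined when the two serial numbers
%% are exactly 2^31 apart and the comparison is ambiguous.
compare(A, A) -> equal;
compare(A, B) when (A < B andalso B - A < ?HALF);
                   (A > B andalso A - B > ?HALF) -> less;
compare(A, B) when (A < B andalso B - A > ?HALF);
                   (A > B andalso A - B < ?HALF) -> greater;
compare(_, _) -> undefined.
```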

10. Completely bypass the rabbit_limiter module for AMQP 1.0
flow control. The goal is to eventually remove the rabbit_limiter module
in 4.0 since AMQP 0.9.1 global QoS will be unsupported in 4.0. This
commit lifts the AMQP 1.0 link flow control logic out of rabbit_limiter
into rabbit_queue_consumers.

11. Fix credit bug for streams:
AMQP 1.0 settlements shouldn't top up link credit,
only FLOW frames should top up link credit.

12. Allow sender settle mode unsettled for streams
since AMQP 1.0 acknowledgements to streams are no-ops (currently).

13. Fix AMQP 1.0 client bugs
Auto renewing credits should not be related to settling TRANSFERs.
Remove field link_credit_unsettled as it was wrong and confusing.
Prior to this commit auto renewal did not work when the sender uses
sender settlement mode settled.

14. Fix AMQP 1.0 client bugs
The wrong outdated Link was passed to function auto_flow/2

15. Use osiris chunk iterator
Only hold messages of uncompressed sub batches in memory if consumer
doesn't have sufficient credits.
Compressed sub batches are skipped for non Stream protocol consumers.

16. Fix incoming link flow control
Always use confirms between AMQP 1.0 queue clients and queue servers.
As already done internally by rabbit_fifo_client and
rabbit_stream_queue, use confirms for classic queues as well.

17. Include link handle into correlation when publishing messages to target queues
such that session process can correlate confirms from target queues to
incoming links.

18. Only grant more credits to publishers if the publisher doesn't have sufficient credits
anymore and there are not too many unconfirmed messages on the link.

19. Completely ignore `block` and `unblock` queue actions and RabbitMQ credit flow
between classic queue process and session process.

20. Link flow control is independent between links.
A client can refer to a queue or to an exchange with multiple
dynamically added target queues. Multiple incoming links can also fan
in to the same queue. No matter what the link topology looks like, this
commit ensures that each link is only granted more credits if that link
isn't overloaded.

21. A connection or a session can send to many different queues.
In AMQP 0.9.1, a single slow queue will lead to the entire channel, and
then the entire connection, being blocked.
This commit makes sure that a single slow queue from one link won't slow
down sending on other links.
For example, having link A sending to a local classic queue and
link B sending to 5 replica quorum queue, link B will naturally
grant credits slower than link A. So, despite the quorum queue being
slower in confirming messages, the same AMQP 1.0 connection and session
can still pump data very fast into the classic queue.

22. If a cluster wide memory or disk alarm occurs,
each session sends a FLOW with incoming-window set to 0 to the sending client.
If sending clients don't obey, the client is force disconnected.

If cluster wide memory alarm clears:
Each session resumes with a FLOW defaulting to initial incoming-window.

23. All operations apart from publishing TRANSFERS to RabbitMQ can continue during cluster wide alarms,
specifically, attaching consumers and consuming, i.e. emptying queues.
There is no need for separate AMQP 1.0 connections for publishers and consumers as recommended in our AMQP 0.9.1 implementation.

24. Flow control summary:
* If queue becomes bottleneck, that’s solved by slowing down individual sending links (AMQP 1.0 link flow control).
* If session becomes bottleneck (more unlikely), that’s solved by AMQP 1.0 session flow control.
* If connection becomes bottleneck, it naturally won’t read fast enough from the socket causing TCP backpressure being applied.
Nowhere will RabbitMQ internal credit based flow control (i.e. module credit_flow) be used on the incoming AMQP 1.0 message path.

25. Register AMQP sessions
Prefer local-only pg over our custom pg_local implementation as
pg is a better process group implementation than pg_local.
pg_local was identified as a bottleneck in tests where many MQTT clients were disconnected at once.

26. Start a local-only pg when Rabbit boots:
> A scope can be kept local-only by using a scope name that is unique cluster-wide, e.g. the node name:
> pg:start_link(node()).
Register AMQP 1.0 connections and sessions with pg.

In future we should remove pg_local and instead use the new local-only
pg for all registered processes such as AMQP 0.9.1 connections and channels.
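
A hedged sketch of what node-local pg registration looks like with standard OTP calls (the group name below is an illustrative assumption):

```erlang
%% Using the node name as the pg scope keeps the group local to this node,
%% as described above.
start_pg() ->
    pg:start_link(node()).

register_session(SessionPid) ->
    ok = pg:join(node(), amqp_sessions, SessionPid).

list_sessions() ->
    pg:get_members(node(), amqp_sessions).
```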

27. Requeue messages if link detached
Although the spec allows settling delivery IDs on detached links, RabbitMQ does not respect the 'closed'
field of the DETACH frame and therefore handles every DETACH frame as closed. Since the link is closed,
we expect every outstanding delivery to be requeued.
In addition to consumer cancellation, detaching a link therefore causes in flight deliveries to be requeued.
Note that this behaviour is different from merely consumer cancellation in AMQP 0.9.1:
"After a consumer is cancelled there will be no future deliveries dispatched to it. Note that there can
still be "in flight" deliveries dispatched previously. Cancelling a consumer will neither discard nor requeue them."
[https://www.rabbitmq.com/consumers.html#unsubscribing]
An AMQP receiver can first drain, and then detach, to prevent "in flight" deliveries from being requeued.

28. Init AMQP session with BEGIN frame
Similar to how there can't be an MQTT processor without a CONNECT
frame, there can't be an AMQP session without a BEGIN frame.
This allows having strict dialyzer types for session flow control
fields (i.e. not allowing 'undefined').

29. Move serial_number to AMQP 1.0 common lib
such that it can be used by both AMQP 1.0 server and client

30. Fix AMQP client to do serial number arithmetic.

31. AMQP client: Differentiate between delivery-id and transfer-id for better
understandability.

32. Fix link flow control in classic queues
This commit fixes
```
java -jar target/perf-test.jar -ad false -f persistent -u cq -c 3000 -C 1000000 -y 0
```
followed by
```
./omq -x 0 amqp -T /queue/cq -D 1000000 --amqp-consumer-credits 2
```
Prior to this commit (and on RabbitMQ 3.x), the consuming would halt after around
8 - 10,000 messages.

The bug was that in flight messages from classic queue process to
session process were not taken into account when topping up credit to
the classic queue process.
Fixes #2597

The solution to this bug (and a much cleaner design anyway independent of
this bug) is that queues should hold all link flow control state including
the delivery-count.

Hence, when credit API v2 is used the delivery-count will be held by the
classic queue process, quorum queue process, and stream queue client
instead of managing the delivery-count in the session.

33. The double level crediting between (a) session process and
rabbit_fifo_client, and (b) rabbit_fifo_client and rabbit_fifo was
removed. Therefore, instead of managing 3 separate delivery-counts (i. session,
ii. rabbit_fifo_client, iii. rabbit_fifo), only 1 delivery-count is used
in rabbit_fifo. This is a big simplification.

34. This commit fixes quorum queues without bumping the machine version
nor introducing new rabbit_fifo commands.

Whether credit API v2 is used is solely determined at link attachment time
depending on whether feature flag credit_api_v2 is enabled.

Even when that feature flag will be enabled later on, this link will
keep using credit API v1 until detached (or the node is shut down).

Eventually, after feature flag credit_api_v2 has been enabled and a
subsequent rolling upgrade, all links will use credit API v2.

This approach is safe and simple.

The 2 alternatives to move delivery-count from the session process to the
queue processes would have been:

i. Explicit feature flag credit_api_v2 migration function
* Can use a gen_server:call and only finish migration once all delivery-counts were migrated.
Cons:
* Extra new message format just for migration is required.
* Risky as migration will fail if a target queue doesn’t reply.

ii. Session always includes DeliveryCountSnd when crediting to the queue:
Cons:
* 2 delivery counts will be held simultaneously in session proc and queue proc;
could be solved by deleting the session proc’s delivery-count for credit-reply
* What happens if the receiver doesn’t provide credit for a very long time? Is that a problem?

35. Support stream filtering in AMQP 1.0 (by @acogoluegnes)
Use the x-stream-filter-value message annotation
to carry the filter value in a published message.
Use the rabbitmq:stream-filter and rabbitmq:stream-match-unfiltered
filters when creating a receiver that wants to filter
out messages from a stream.

36. Remove credit extension from AMQP 0.9.1 client

37. Support maintenance mode closing AMQP 1.0 connections.

38. Remove AMQP 0.9.1 client dependency from AMQP 1.0 implementation.

39. Move AMQP 1.0 plugin to the core. AMQP 1.0 is enabled by default.
    The old rabbitmq_amqp1_0 plugin will be kept as a no-op plugin to prevent deployment
    tools that execute the following from failing:
```
rabbitmq-plugins enable rabbitmq_amqp1_0
rabbitmq-plugins disable rabbitmq_amqp1_0
```

40. Breaking change: Remove CLI command `rabbitmqctl list_amqp10_connections`.
Instead, list both AMQP 0.9.1 and AMQP 1.0 connections in `list_connections`:
```
rabbitmqctl list_connections protocol
Listing connections ...
protocol
{1, 0}
{0,9,1}
```

 ## Benchmarks

 ### Throughput & Latency

Setup:
* Single node Ubuntu 22.04
* Erlang 26.1.1

Start RabbitMQ:
```
make run-broker PLUGINS="rabbitmq_management rabbitmq_amqp1_0" FULL=1 RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+S 3"
```

Predeclare durable classic queue cq1, durable quorum queue qq1, durable stream queue sq1.

Start client:
https://github.com/ssorj/quiver
https://hub.docker.com/r/ssorj/quiver/tags (digest 453a2aceda64)
```
docker run -it --rm --add-host host.docker.internal:host-gateway ssorj/quiver:latest
bash-5.1# quiver --version
quiver 0.4.0-SNAPSHOT
```

1. Classic queue
```
quiver //host.docker.internal//amq/queue/cq1 --durable --count 1m --duration 10m --body-size 12 --credit 1000
```

This commit:
```
Count ............................................. 1,000,000 messages
Duration ............................................... 73.8 seconds
Sender rate .......................................... 13,548 messages/s
Receiver rate ........................................ 13,547 messages/s
End-to-end rate ...................................... 13,547 messages/s

Latencies by percentile:

          0% ........ 0 ms       90.00% ........ 9 ms
         25% ........ 2 ms       99.00% ....... 14 ms
         50% ........ 4 ms       99.90% ....... 17 ms
        100% ....... 26 ms       99.99% ....... 24 ms
```

RabbitMQ 3.x (main branch as of 30 January 2024):
```
---------------------- Sender -----------------------  --------------------- Receiver ----------------------  --------
Time [s]      Count [m]  Rate [m/s]  CPU [%]  RSS [M]  Time [s]      Count [m]  Rate [m/s]  CPU [%]  RSS [M]  Lat [ms]
-----------------------------------------------------  -----------------------------------------------------  --------
     2.1        130,814      65,342        6     73.6       2.1          3,217       1,607        0      8.0       511
     4.1        163,580      16,367        2     74.1       4.1          3,217           0        0      8.0         0
     6.1        229,114      32,767        3     74.1       6.1          3,217           0        0      8.0         0
     8.1        261,880      16,367        2     74.1       8.1         67,874      32,296        8      8.2     7,662
    10.1        294,646      16,367        2     74.1      10.1         67,874           0        0      8.2         0
    12.1        360,180      32,734        3     74.1      12.1         67,874           0        0      8.2         0
    14.1        392,946      16,367        3     74.1      14.1         68,604         365        0      8.2    12,147
    16.1        458,480      32,734        3     74.1      16.1         68,604           0        0      8.2         0
    18.1        491,246      16,367        2     74.1      18.1         68,604           0        0      8.2         0
    20.1        556,780      32,767        4     74.1      20.1         68,604           0        0      8.2         0
    22.1        589,546      16,375        2     74.1      22.1         68,604           0        0      8.2         0
receiver timed out
    24.1        622,312      16,367        2     74.1      24.1         68,604           0        0      8.2         0
quiver:  error: PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/cq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-otujr23y' returned non-zero exit status 1.
Traceback (most recent call last):
  File "/usr/local/lib/quiver/python/quiver/pair.py", line 144, in run
    _plano.wait(receiver, check=True)
  File "/usr/local/lib/quiver/python/plano/main.py", line 1243, in wait
    raise PlanoProcessError(proc)
plano.main.PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/cq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-otujr23y' returned non-zero exit status 1.
```

2. Quorum queue:
```
quiver //host.docker.internal//amq/queue/qq1 --durable --count 1m --duration 10m --body-size 12 --credit 1000
```
This commit:
```
Count ............................................. 1,000,000 messages
Duration .............................................. 101.4 seconds
Sender rate ........................................... 9,867 messages/s
Receiver rate ......................................... 9,868 messages/s
End-to-end rate ....................................... 9,865 messages/s

Latencies by percentile:

          0% ....... 11 ms       90.00% ....... 23 ms
         25% ....... 15 ms       99.00% ....... 28 ms
         50% ....... 18 ms       99.90% ....... 33 ms
        100% ....... 49 ms       99.99% ....... 47 ms
```

RabbitMQ 3.x:
```
---------------------- Sender -----------------------  --------------------- Receiver ----------------------  --------
Time [s]      Count [m]  Rate [m/s]  CPU [%]  RSS [M]  Time [s]      Count [m]  Rate [m/s]  CPU [%]  RSS [M]  Lat [ms]
-----------------------------------------------------  -----------------------------------------------------  --------
     2.1        130,814      65,342        9     69.9       2.1         18,430       9,206        5      7.6     1,221
     4.1        163,580      16,375        5     70.2       4.1         18,867         218        0      7.6     2,168
     6.1        229,114      32,767        6     70.2       6.1         18,867           0        0      7.6         0
     8.1        294,648      32,734        7     70.2       8.1         18,867           0        0      7.6         0
    10.1        360,182      32,734        6     70.2      10.1         18,867           0        0      7.6         0
    12.1        425,716      32,767        6     70.2      12.1         18,867           0        0      7.6         0
receiver timed out
    14.1        458,482      16,367        5     70.2      14.1         18,867           0        0      7.6         0
quiver:  error: PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/qq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-b1gcup43' returned non-zero exit status 1.
Traceback (most recent call last):
  File "/usr/local/lib/quiver/python/quiver/pair.py", line 144, in run
    _plano.wait(receiver, check=True)
  File "/usr/local/lib/quiver/python/plano/main.py", line 1243, in wait
    raise PlanoProcessError(proc)
plano.main.PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/qq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-b1gcup43' returned non-zero exit status 1.
```

3. Stream:
```
quiver-arrow send //host.docker.internal//amq/queue/sq1 --durable --count 1m -d 10m --summary --verbose
```

This commit:
```
Count ............................................. 1,000,000 messages
Duration ................................................ 8.7 seconds
Message rate ........................................ 115,154 messages/s
```

RabbitMQ 3.x:
```
Count ............................................. 1,000,000 messages
Duration ............................................... 21.2 seconds
Message rate ......................................... 47,232 messages/s
```

 ### Memory usage

Start RabbitMQ:
```
ERL_MAX_PORTS=3000000 RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+P 3000000 +S 6" make run-broker PLUGINS="rabbitmq_amqp1_0" FULL=1 RABBITMQ_CONFIG_FILE="rabbitmq.conf"
```

```
/bin/cat rabbitmq.conf

tcp_listen_options.sndbuf  = 2048
tcp_listen_options.recbuf  = 2048
vm_memory_high_watermark.relative = 0.95
vm_memory_high_watermark_paging_ratio = 0.95
loopback_users = none
```

Create 50k connections with 2 sessions per connection, i.e. 100k sessions in total:

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/Azure/go-amqp"
)

func main() {
	for i := 0; i < 50000; i++ {
		conn, err := amqp.Dial(context.TODO(), "amqp://nuc", &amqp.ConnOptions{SASLType: amqp.SASLTypeAnonymous()})
		if err != nil {
			log.Fatal("dialing AMQP server:", err)
		}
		_, err = conn.NewSession(context.TODO(), nil)
		if err != nil {
			log.Fatal("creating AMQP session:", err)
		}
		_, err = conn.NewSession(context.TODO(), nil)
		if err != nil {
			log.Fatal("creating AMQP session:", err)
		}
	}
	log.Println("opened all connections")
	time.Sleep(5 * time.Hour)
}
```

This commit:
```
erlang:memory().
[{total,4586376480},
 {processes,4025898504},
 {processes_used,4025871040},
 {system,560477976},
 {atom,1048841},
 {atom_used,1042841},
 {binary,233228608},
 {code,21449982},
 {ets,108560464}]

erlang:system_info(process_count).
450289
```
7 procs per connection + 1 proc per session.
(7 + 2*1) * 50,000 = 450,000 procs

RabbitMQ 3.x:
```
erlang:memory().
[{total,15168232704},
 {processes,14044779256},
 {processes_used,14044755120},
 {system,1123453448},
 {atom,1057033},
 {atom_used,1052587},
 {binary,236381264},
 {code,21790238},
 {ets,391423744}]

erlang:system_info(process_count).
1850309
```
7 procs per connection + 15 per session
(7 + 2*15) * 50,000 = 1,850,000 procs

50k connections + 100k sessions require:
* with this commit: 4.5 GB
* in RabbitMQ 3.x: 15 GB

 ## Future work

1. More efficient parser and serializer
2. TODO in mc_amqp: Do not store the parsed message on disk.
3. Implement both AMQP HTTP extension and AMQP management extension to allow AMQP
clients to create RabbitMQ objects (queues, exchanges, ...).
2024-02-28 14:15:20 +01:00
Marcial Rosales 81fc7d14ef Add selenium test that verifies use of verify-none 2024-02-28 10:04:51 +01:00
Michal Kuratczyk 2c69380acb
Use default sorting when `?sort=` in MGMT API
crash repro:
curl -u guest:guest -v 'http://localhost:15672/api/nodes/?sort='
2024-02-26 15:14:09 +01:00
Marcial Rosales 41237fbb3b Fix gazelle issues around oauth2 dependencies 2024-02-14 18:55:39 +01:00
Marcial Rosales 0d78f931d3 Fix test case 2024-02-12 08:55:48 +01:00
Marcial Rosales 31ac7922da Fix test when idp is down 2024-02-12 07:38:25 +01:00
Marcial Rosales 9d9b2f2134 Do not use tls with uaa
Because uaa is not exposing https
2024-02-10 21:24:45 +01:00
Marcial Rosales 447effd455 Remove noisy log statement 2024-02-10 21:17:57 +01:00
Marcial Rosales e4e0ece31d Fix issue looking up logout button 2024-02-10 20:54:17 +01:00
Marcial Rosales 57358acde6 Fix url of keycloak 2024-02-10 20:12:21 +01:00
Marcial Rosales 22aa5172b9 Fix issue waiting for oauth2 section 2024-02-10 20:12:20 +01:00
Marcial Rosales ad9fc504fb Fix issue retrieve WebElement Text 2024-02-10 20:12:19 +01:00
Marcial Rosales dfb41cb92e Fix url when using tls 2024-02-10 20:12:19 +01:00
Marcial Rosales 5aa59aa992 Fix issue mounting certs and import folders
+
2024-02-10 20:12:19 +01:00
Marcial Rosales 16dbb5d77c Add tests to verify negative case 2024-02-10 20:12:18 +01:00
Marcial Rosales 82d852927d Add missing suite 2024-02-10 20:12:18 +01:00
Marcial Rosales 91089feb7b Warn when some oauth resource is not available 2024-02-10 20:12:18 +01:00
Marcial Rosales ec18b170fc Show warning messages and disable resources
which are not available
2024-02-10 20:12:17 +01:00
Marcial Rosales 7998dbab1b Add ensure-others command
starts only those components which are
down rather than restarting them
2024-02-10 20:12:17 +01:00
Marcial Rosales a1ea410cd1 Add additional scopes 2024-02-10 20:12:14 +01:00
Marcial Rosales 06bef0af41 Use resource's id as label
when label is not configured
2024-02-10 20:12:14 +01:00
Marcial Rosales 868234de3a Fix test that verifies amqp10 with oauth2 2024-02-10 20:12:12 +01:00
Marcial Rosales ebcea3e055 WIP Add selenium tests to
verify oauth with multi providers and resources
against various messaging protocol
2024-02-10 20:12:12 +01:00
Marcial Rosales 9cf1c40d4a Fix issue initializing mock http server 2024-02-10 20:12:12 +01:00
Marcial Rosales fc2d1ae763 Fix multi oauth test cases
And refactor function to
assert options
2024-02-10 20:12:11 +01:00
Marcial Rosales 773b13eb36 Fix issue with multi-oauth
It turns out the rabbitmq url
configured in keycloak was not
rendered but fixed to localhost
2024-02-10 20:12:11 +01:00
Marcial Rosales 1bdfb1f8f9 Apply part of fix from pr #10438
And update test case to ensure that
there are no warning popup shown
after the user logs in and/or visits
all the tabs
2024-02-10 20:12:11 +01:00
Marcial Rosales 5c665141a3 Fix typo 2024-02-10 20:12:11 +01:00
Marcial Rosales def5b37b9c Test multi oauth without basic auth 2024-02-10 20:12:11 +01:00
Marcial Rosales 0e759efdf7 Fix landing.js 2024-02-10 20:12:10 +01:00
Marcial Rosales d69fc0e8b9 Fix unauthorized.js test 2024-02-10 20:12:10 +01:00
Marcial Rosales 16bdeff55a Verify oauth resources are listed 2024-02-10 20:12:10 +01:00
Marcial Rosales 27f3e0b5f2 Fix issue with test
it is not possible to simply check if an element
exists, as it is not rendered right away, hence
we have to wait for it
2024-02-10 20:12:10 +01:00
Marcial Rosales eaba7abb6d Fix bug checking if element was visible 2024-02-10 20:12:10 +01:00
Marcial Rosales e1d5fcaeb7 Add multi-oauth suite 2024-02-10 20:12:09 +01:00
Marcial Rosales 9e20ed835a Fix issue loading user definitions
when running rabbitmq local
2024-02-10 20:12:09 +01:00
Marcial Rosales 2a3c8ec1e9 Create dedicate multi-oauth setup 2024-02-10 20:12:09 +01:00
Marcial Rosales 982e8a237b Fix name issue
It should be oauth_resource_servers
not resource_servers
2024-02-10 20:12:09 +01:00
Marcial Rosales a253a8cc31 Simplify auth_settings
just an array of oauth_resource_servers
regardless whether we have just resource_server_id
or many resource servers
2024-02-10 20:12:09 +01:00
Marcial Rosales fa3653acb1 Fix issue initializing logon_type 2024-02-10 20:12:08 +01:00
Marcial Rosales 87c309ad0b Fix dialyzer warning 2024-02-10 20:12:08 +01:00
Marcial Rosales aad98037bd Configure uaa with Cors and
fix issue initializing client_secret
2024-02-10 20:12:08 +01:00
Marcial Rosales 89c1bff84b Fix schema issue 2024-02-10 20:12:08 +01:00
Marcial Rosales b6ac76a6f3 Add prefix oauth to all resource server settings 2024-02-10 20:12:08 +01:00
Marcial Rosales 0f7d859cc3 Fix test case failures 2024-02-10 20:12:08 +01:00
Marcial Rosales 341529c57a Refactor test case
Extract functions that validate setting
and configure resource server settings
2024-02-10 20:12:07 +01:00
Marcial Rosales c07aa378a6 Complete coverage of authSettings 2024-02-10 20:12:07 +01:00
Marcial Rosales b0f124a5c6 Add more cases 2024-02-10 20:12:07 +01:00
Marcial Rosales 3925c4cbd0 Fix remaining test cases 2024-02-10 20:12:07 +01:00
Marcial Rosales 1a1147c471 Fix more test cases 2024-02-10 20:12:07 +01:00
Marcial Rosales c995fb8867 Reimplement how authSettings is calculated
WIP rename and simplify test cases
2024-02-10 20:12:07 +01:00
Marcial Rosales 8c84d123f6 Add reproducer test 2024-02-10 20:12:07 +01:00
Marcial Rosales 2f431a62a6 Refactor tests 2024-02-10 20:12:06 +01:00
Marcial Rosales ee7fb32e7e Refactor more test cases and add new ones 2024-02-10 20:12:06 +01:00
Marcial Rosales 1f11349060 Refactor unit tests of auth_settings() 2024-02-10 20:12:06 +01:00
Marcial Rosales 68a8de95ad Change strategy that checks if an element exists 2024-02-10 20:12:05 +01:00
Marcial Rosales cc1f9171e9 Update bazel instructions 2024-02-10 20:12:05 +01:00
Marcial Rosales d827b72ce1 Create Oauth2 client 2024-02-10 20:12:04 +01:00
Karl Nilsson 5317f958fb Streams: Soft remove policy configuration of max_segment_size_bytes
This configuration is not guaranteed to be safe to change after a stream has been
declared and thus we'll remove the ability to change it after the initial
declaration. Users should favour the x- queue arg for this config. It will still
be possible to configure it as a policy, but it will only be evaluated at
declaration time.

This means that if a policy is set for a stream that re-configures the
`stream-max-segment-size-bytes` key it will show in the UI as updated but
the pre-existing stream will not use the updated configuration.

The key has been removed from the UI but for backwards compatibility it is still
settable.

NB: this PR adds a new command `update_config` to the stream coordinator state
machine. Strictly speaking this should require a new machine version but we're
bypassing that by relying on the feature flag instead, which avoids this command
being committed before all nodes have the new code version. A new machine version
can lower the availability properties during a rolling cluster upgrade so in
this case it is preferable to avoid that given the simplicity of the change.
2024-02-07 11:06:10 +00:00
Michael Klishin 9c79ad8d55 More missed license header updates #9969 2024-02-05 12:26:25 -05:00
Michael Klishin f414c2d512
More missed license header updates #9969 2024-02-05 11:53:50 -05:00
Michael Klishin f8401df53e
Drive-by change: naming 2024-01-29 12:21:45 -05:00
Diana Parra Corbacho dc3b6fb5bc Allow management users to query feature flags and deprecated features
The new banner to warn about not-enabled feature flags requires access
to this endpoint, and it must be visible for all users.
2024-01-29 15:51:21 +01:00
Michael Klishin 0c0e2ca932 An alternative to #10415, closes #10330
Per discussion in #10415, this introduces a new module,
rabbit_mgmt_nodes, which provides a couple of helpers
that can be used to implement Cowboy REST's
resource_exists/2 in the modules that return
information about cluster members.
2024-01-25 18:41:56 -05:00
Michael Klishin 7b151a7651 More missed (c) header updates 2024-01-22 23:44:47 -05:00
Michael Klishin 838cb93142
Merge pull request #10337 from rabbitmq/loic-revert-cqv2-default
Revert "Default to classic queues v2"
2024-01-15 08:03:05 -05:00
Loïc Hoguin a0c8dab057
Update expected CQ version in tests 2024-01-15 12:08:08 +01:00
Diana Parra Corbacho 5a3584beea Remove FF warning as soon as all features are enabled
The warning in the header needs a full refresh, just updating
the page content will not clear the warning.
2024-01-15 08:41:36 +01:00
Michael Klishin 01092ff31f
(c) year bumps 2024-01-01 22:02:20 -05:00
Ariel Otilibili 0b24d3c0bb Defined "tags" as list
Typo spotted in #4050
2023-12-27 22:47:52 +01:00
Marcial Rosales b9f3771f2d Use correct user to authenticate
depending on the backend we want to
exercise
2023-12-27 14:03:26 +00:00
Michael Klishin 5af5f0cf3d
Selenium: update run-suites.sh argument to match #10200 2023-12-23 20:33:21 -05:00
Michael Klishin a374f40303
Merge pull request #10200 from rabbitmq/propagate_credentials_to_http_backend
Propagate all credentials to http backend
2023-12-23 19:09:48 -05:00
Michael Klishin 920c664fa3
Add a test originally introduced in #10062 2023-12-23 19:09:35 -05:00
Ariel Otilibili e1d09fbba6 Replaced true | false by boolean() 2023-12-22 17:28:31 +01:00
Marcial Rosales f3c4355cfb Clean up 2023-12-22 14:29:53 +00:00
Marcial Rosales 2fc8d2b3ae Propagate all credentials to http backend 2023-12-22 13:54:34 +00:00
Michael Klishin 7ebaae7ef0
Revert "HTTP API: DELETE /api/queues/{vhost}/{name} use internal API call"
This reverts commit 78f901a224.
2023-12-20 04:13:11 -05:00
Péter Gömöri 0a144e7698 Prevent formatter crash in mgmt_util
`rabbit_mgmt_util:direct_request/6` is always called with an
`ErrorMsg` which expects one format argument as a string. Convert the
arbitrary reason term into a string to avoid a crash like the below:

```
warning: FORMATTER CRASH: {"Delete exchange error: ~ts",[{'EXIT',{{badmatch,{error,...
```
2023-12-20 00:11:26 +01:00
Péter Gömöri f4a9edfd2f Tolerate race condition when starting management db cache process
This prevents the below harmless crash when multiple parallel API
requests arrive soon after starting the node.

```
exception error: no match of right hand side value
                 {error,{already_started,<0.1593.0>}}
  in function  rabbit_mgmt_db_cache:fetch/4 (rabbit_mgmt_db_cache.erl, line 68)
  in call from rabbit_mgmt_db:submit_cached/4 (rabbit_mgmt_db.erl, line 756)
  in call from rabbit_mgmt_util:augment/2 (rabbit_mgmt_util.erl, line 412)
  in call from rabbit_mgmt_util:run_augmentation/2 (rabbit_mgmt_util.erl, line 389)
  in call from rabbit_mgmt_util:augment_resources0/6 (rabbit_mgmt_util.erl, line 378)
  in call from rabbit_mgmt_util:with_valid_pagination/3 (rabbit_mgmt_util.erl, line 302)
  in call from rabbit_mgmt_wm_queues:to_json/2 (rabbit_mgmt_wm_queues.erl, line 44)
  in call from cowboy_rest:call/3 (src/cowboy_rest.erl, line 1583)
```
2023-12-19 18:07:22 +01:00
David Ansari f44c851293 Fix crash when closing connection
Avoid the following crash
```
** Reason for termination ==
** {mqtt_unexpected_cast,{shutdown,"Closed via management plugin"}}

  crasher:
    initial call: rabbit_mqtt_reader:init/1
    pid: <0.1096.0>
    registered_name: []
    exception exit: {mqtt_unexpected_cast,
                        {shutdown,"Closed via management plugin"}}
      in function  gen_server:handle_common_reply/8 (gen_server.erl, line 1208)
```
when closing MQTT or Stream connections via HTTP API endpoint
```
/connections/username/:username
```
2023-12-14 12:35:51 +01:00
Diana Parra Corbacho 5aa35e0570 Management: introduce deprecated features API endpoints, UI page and warnings 2023-12-13 07:39:37 +01:00
Diana Parra Corbacho ee84038ef5 Add experimental/disabled warning in State column 2023-12-12 18:07:36 +01:00
Diana Parra Corbacho ada8083d0d Add a warning banner if any stable feature flag is not enabled
Add an experimental tag on the description to experimental features
2023-12-12 18:07:36 +01:00
Michael Klishin 26aa534e40 Definition import: more logging improvements 2023-12-09 20:20:56 -05:00
Michael Klishin 62fffb6634 Another take at #10068
Scan queues, exchanges and bindings before attempting
to import anything on boot. If they miss the virtual
host field, fail early and log a sensible message.
2023-12-08 01:39:47 -05:00
Johan Rhodin 8ea1f8fc49
Merge branch 'rabbitmq:main' into FixInfoItems 2023-11-27 15:59:34 -06:00
Michael Klishin cc3084dfbf Fixes #9983 2023-11-25 18:59:51 -05:00
Michael Klishin 1b642353ca
Update (c) according to [1]
1. https://investors.broadcom.com/news-releases/news-release-details/broadcom-and-vmware-intend-close-transaction-november-22-2023
2023-11-21 23:18:22 -05:00
Michael Klishin 28ad76467e Management UI: link to GitHub Discussions and not the Google group 2023-11-19 19:35:49 -05:00
Johan Rhodin 851fddcad2 Fix wrong link 2023-11-17 16:48:39 -06:00
Johan Rhodin 226e7d138d fix info item, and behavior->behaviour 2023-11-17 11:01:47 -06:00
Ayanda Dube 324debe6cf add a test case for mgmt API deletion of crashed queues 2023-11-15 17:43:47 +00:00
Ayanda Dube 1ce75c7aae handle different queue states on deletion from the mgmt API 2023-11-15 17:43:47 +00:00
Karl Nilsson baff660ab4 use right dummy type 2023-11-07 11:53:57 +00:00
Karl Nilsson ff12d3b6b4 HTTP API /queues optimise resource_exists
There is no need to list all queues to check if the vhost
exists.
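
A hedged sketch of the idea behind this change (helper names are assumptions, not necessarily the exact code): the Cowboy REST resource_exists/2 callback can answer by looking the virtual host up directly instead of listing every queue in it.

```erlang
%% Sketch only: assumes a vhost lookup helper such as rabbit_mgmt_util:vhost/1
%% that returns 'not_found' for unknown virtual hosts.
resource_exists(ReqData, Context) ->
    {rabbit_mgmt_util:vhost(ReqData) =/= not_found, ReqData, Context}.
```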
2023-11-07 11:27:11 +00:00
Karl Nilsson c2cd60b18d Optimise mgmt HTTP API /queues endpoint
Listing queues with the HTTP API when there are many (1000s) of
quorum queues could be excessively slow compared to the same scenario
with classic queues.

This optimises various aspects of HTTP API queue listings.
For QQs it removes the expensive cluster wide rpcs used to get the
"online" status of each quorum queue. This was previously done _before_
paging and thus would perform a cluster-wide query for _each_ quorum queue in
the vhost/system. This accounted for most of the slowness compared to
classic queues.

Secondly the query to separate the running from the down queues
consisted of two separate queries that later were combined when a single
query would have sufficed.

This commit also includes a variety of other improvements and minor
fixes discovered during testing and optimisation.

MINOR BREAKING CHANGE: quorum queues would previously only display one
of two states: running or down. Now there is a new state called minority
which is emitted when the queue has at least one member running but
cannot commit entries due to lack of quorum.

Also the quorum queue may transiently enter the down state when a node
goes down and before it has elected a new leader.
2023-11-06 15:34:26 +00:00
Emerson Almeida 9c87e1902d remove test to idp-initiated 2023-11-01 10:23:02 -03:00
Emerson Almeida 64ded4632d make the baseUrl equal when run with prefix 2023-11-01 09:52:27 -03:00
Duke ca5e7e34d7
Merge branch 'main' into improve-login-exp 2023-10-31 13:35:20 -03:00
Marcial Rosales 7c58649942 Reduce by 5 seconds every oauth2 test
clickToLogin should only check that the
login button exists and not that the
warning message is not visible
2023-10-31 16:31:37 +01:00
Marcial Rosales b409d97927 Bump up chromedriver version 2023-10-31 14:07:38 +01:00
Marcial Rosales c169f6ef50 Fix issue in run-scripts 2023-10-31 12:55:31 +01:00
Duke cd680bc568
move store pref to startWithOAuthLogin 2023-10-30 12:02:29 -03:00
Duke 9060941fc2
Remove overview var from redirection-after-login.js 2023-10-30 11:51:59 -03:00
Duke 0e757f394a
remove unused overview var 2023-10-30 11:50:23 -03:00
Emerson Almeida 7bf452c522 fix pref default 2023-10-29 18:34:42 -03:00
Emerson Almeida 7525ccc236 add tests 2023-10-29 18:34:42 -03:00
Duke 3e0ca9ede7 add oauth-return-to 2023-10-29 18:34:42 -03:00
Diana Parra Corbacho 07196e297b Reduce the number of metrics served by GET /api/queues
Introduce GET /api/queues/detailed endpoint

Just removed garbage_collection, idle_since and any 'null' value

/api/queues with 10k classic queues returns 7.4MB of data
/api/queues/detailed with 10k classic queues returns 11MB of data

This sits behind a new feature flag, required to collect data from
all nodes: detailed_queues_endpoint
2023-10-23 19:49:37 -04:00
Karl Nilsson 1ba62a90f2 Actually nack when using 'Nack message requeue true'
Option in mgmt UI.
2023-10-17 14:12:09 +01:00
Michael Klishin 6009a4973f
Merge pull request #9708 from rabbitmq/mk-limit-max-http-api-payload-size
Introduce a configurable limit to HTTP API request body size
2023-10-16 21:49:50 -04:00
Michael Klishin 087794dded
HTTP API: adapt publishing tests
to take the newly introduced 10 MiB default body size limit
into account.
2023-10-16 19:14:16 -04:00
Rin Kuryloski 558b8d03f4 Remove mnesia from rabbitmq_management_agent deps in bazel
it's not required and is not listed in the LOCAL_DEPS in the Makefile
2023-10-16 17:37:54 +02:00
Michael Klishin c6d0382be4
Reduce default HTTP API request body size limit to 10 MiB
per discussion with the team.

It should be enough to accommodate a definition file with about
100K queues.
2023-10-16 06:48:23 -04:00
Michael Klishin b7b3514bb1
Introduce HTTP request body limit for definition uploads
The default is 20 MiB, which is enough to upload
a definition file with 200K queues, a few virtual hosts
and a few users. In other words, it should accommodate
a lot of environments.
2023-10-14 06:11:01 -04:00
Michael Klishin 8e7e8f9127
Merge branch 'main' into issue-9437-queue-storage-version 2023-10-10 15:03:50 -04:00
Michael Klishin aa0c52093f Add length limit overflow behavior to supported features in the UI 2023-10-05 21:17:56 -04:00
Alex Valiushko 2d569f1701 New quorum queue members join as temporary non-voters
Because both `add_member` and `grow` default to Membership status `promotable`,
new members will have to catch up before they are considered cluster members.
This can be overridden with either `voter` or (permanent) `non_voter` statuses.
The latter one is useless without additional tooling so kept undocumented.

- non-voters do not affect quorum size for election purposes
- `observer_cli` reports their status with lowercase 'f'
- `rabbitmq-queues check_if_node_is_quorum_critical` takes voter status into
account
2023-10-05 20:30:30 -04:00
Diana Parra Corbacho c1a6e5b3e5 Return storage_version as top-level key in queue objects
A previous PR removed backing_queue_status as it is mostly unused,
but classic queue version is still useful. This PR returns version
as a top-level key in queue objects.
2023-10-04 09:29:01 +02:00
Diana Parra Corbacho 5f0981c5a3
Allow to use Khepri database to store metadata instead of Mnesia
[Why]

Mnesia is a very powerful and convenient tool for Erlang applications:
it is a persistent disc-based database, it handles replication across
multiple Erlang nodes and it is available out-of-the-box from the
Erlang/OTP distribution. RabbitMQ relies on Mnesia to manage all its
metadata:

* virtual hosts' properties
* internal users
* queue, exchange and binding declarations (not queues data)
* runtime parameters and policies
* ...

Unfortunately Mnesia makes it difficult to handle network partitions and,
as a consequence, the merge conflicts between Erlang nodes once the
network partition is resolved. RabbitMQ provides several partition
handling strategies but they are not bullet-proof. Users still hit
situations where it is a pain to repair a cluster following a network
partition.

[How]

@kjnilsson created Ra [1], a Raft consensus library that RabbitMQ
already uses successfully to implement quorum queues and streams for
instance. Those queues do not suffer from network partitions.

We created Khepri [2], a new persistent and replicated database engine
based on Ra and we want to use it in place of Mnesia in RabbitMQ to
solve the problems with network partitions.

This patch integrates Khepri as an experimental feature. When enabled,
RabbitMQ will store all its metadata in Khepri instead of Mnesia.

This change comes with behavior changes. While Khepri remains disabled,
you should see no changes to the behavior of RabbitMQ. If there are
changes, it is a bug. After Khepri is enabled, there are significant
changes of behavior that you should be aware of.

Because it is based on the Raft consensus algorithm, when there is a
network partition, only the cluster members that are in the partition
with at least `(Number of nodes in the cluster ÷ 2) + 1` number of nodes
can "make progress". In other words, only those nodes may write to the
Khepri database and read from the database and expect a consistent
result.

For instance in a cluster of 5 RabbitMQ nodes:
* If there are two partitions, one with 3 nodes, one with 2 nodes, only
  the group of 3 nodes will be able to write to the database.
* If there are three partitions, two with 2 nodes, one with 1 node, none
  of the group can write to the database.

Because the Khepri database will be used for all kinds of metadata, it
means that RabbitMQ nodes that can't write to the database will be
unable to perform some operations. A list of operations and what to
expect is documented in the associated pull request and the RabbitMQ
website.

This requirement from Raft also affects the startup of RabbitMQ nodes in
a cluster. Indeed, at least a quorum number of nodes must be started at
once to allow nodes to become ready.

To enable Khepri, you need to enable the `khepri_db` feature flag:

    rabbitmqctl enable_feature_flag khepri_db

When the `khepri_db` feature flag is enabled, the migration code
performs the following two tasks:
1. It synchronizes the Khepri cluster membership from the Mnesia
   cluster. It uses `mnesia_to_khepri:sync_cluster_membership/1` from
   the `khepri_mnesia_migration` application [3].
2. It copies data from relevant Mnesia tables to Khepri, doing some
   conversion if necessary on the way. Again, it uses
   `mnesia_to_khepri:copy_tables/4` from `khepri_mnesia_migration` to do
   it.

This can be performed on a running standalone RabbitMQ node or cluster.
Data will be migrated from Mnesia to Khepri without any service
interruption. Note that during the migration, the performance may
decrease and the memory footprint may go up.

Because this feature flag is considered experimental, it is not enabled
by default even on a brand new RabbitMQ deployment.

More about the implementation details below:

In the past months, all accesses to Mnesia were isolated in a collection
of `rabbit_db*` modules. This is where the integration of Khepri mostly
takes place: we use a function called `rabbit_khepri:handle_fallback/1`
which selects the database and performs the query or the transaction.
Here is an example from `rabbit_db_vhost`:

* Up until RabbitMQ 3.12.x:

        get(VHostName) when is_binary(VHostName) ->
            get_in_mnesia(VHostName).

* Starting with RabbitMQ 3.13.0:

        get(VHostName) when is_binary(VHostName) ->
            rabbit_khepri:handle_fallback(
              #{mnesia => fun() -> get_in_mnesia(VHostName) end,
                khepri => fun() -> get_in_khepri(VHostName) end}).

This `rabbit_khepri:handle_fallback/1` function relies on two things:
1. the fact that the `khepri_db` feature flag is enabled, in which case
   it always executes the Khepri-based variant.
2. the ability or not to read and write to Mnesia tables otherwise.

Before the feature flag is enabled, or during the migration, the
function will try to execute the Mnesia-based variant. If it succeeds,
then it returns the result. If it fails because one or more Mnesia
tables can't be used, it restarts from scratch: this means the feature
flag is in the process of being enabled. Depending on the outcome,
either the Mnesia-based variant succeeds on retry (the feature flag
couldn't be enabled), or the feature flag is marked as enabled and the
Khepri-based variant is called. The meat of this function lives in the
`khepri_mnesia_migration` application [3] and
`rabbit_khepri:handle_fallback/1` is a wrapper on top of it that knows
about the feature flag.
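
A simplified sketch of that fallback loop (not the actual
implementation; the helper names and the exact error matched are
illustrative only):

    handle_fallback(#{mnesia := MnesiaFun, khepri := KhepriFun} = Funs) ->
        case khepri_db_flag_enabled() of
            true ->
                KhepriFun();
            false ->
                try
                    MnesiaFun()
                catch
                    %% The Mnesia tables are gone or read-only: the
                    %% migration is in progress, so wait for its outcome
                    %% and try again.
                    _:{aborted, {no_exists, _}} ->
                        wait_for_migration_outcome(),
                        handle_fallback(Funs)
                end
        end.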

However, some calls to the database do not depend on the existence of
Mnesia tables, such as functions where we need to learn about the
members of a cluster. For those, we can't rely on exceptions from
Mnesia. Therefore, we just look at the state of the feature flag to
determine which database to use. There are two situations though:

* Sometimes, we need the feature flag state query to block because the
  function interested in it can't return a valid answer during the
  migration. Here is an example:

        case rabbit_khepri:is_enabled(RemoteNode) of
            true  -> can_join_using_khepri(RemoteNode);
            false -> can_join_using_mnesia(RemoteNode)
        end

* Sometimes, we need the feature flag state query to NOT block (for
  instance because it would cause a deadlock). Here is an example:

        case rabbit_khepri:get_feature_state() of
            enabled -> members_using_khepri();
            _       -> members_using_mnesia()
        end

Direct accesses to Mnesia still exist. They are limited to code that is
specific to Mnesia, such as classic queue mirroring or network partition
handling strategies.

Now, to discover the Mnesia tables to migrate and how to migrate them,
we use an Erlang module attribute called
`rabbit_mnesia_tables_to_khepri_db` which indicates a list of Mnesia
tables and an associated converter module. Here is an example in the
`rabbitmq_recent_history_exchange` plugin:

    -rabbit_mnesia_tables_to_khepri_db(
       [{?RH_TABLE, rabbit_db_rh_exchange_m2k_converter}]).

The converter module used in this example, `rabbit_db_rh_exchange_m2k_converter`,
is in fact a "sub" converter module called by `rabbit_db_m2k_converter`. See
the documentation of a `mnesia_to_khepri` converter module to learn more
about these modules.

[1] https://github.com/rabbitmq/ra
[2] https://github.com/rabbitmq/khepri
[3] https://github.com/rabbitmq/khepri_mnesia_migration

See #7206.

Co-authored-by: Jean-Sébastien Pédron <jean-sebastien@rabbitmq.com>
Co-authored-by: Diana Parra Corbacho <dparracorbac@vmware.com>
Co-authored-by: Michael Davis <mcarsondavis@gmail.com>
2023-09-29 16:00:11 +02:00
Michael Klishin 8ca0200503 HTTP API docs: be more specific 2023-09-28 05:57:50 -04:00
Diana Parra Corbacho 9d8a537073 HTTP API: document disable_stats and enable_queue_totals
Using GET /api/queues?disable_stats=true&enable_queue_totals=true is far more efficient than the standard GET /api/queues and in many cases will suffice for monitoring and operating purposes.
2023-09-28 09:16:08 +02:00
Michael Klishin 13702b7f04
Merge pull request #9550 from rabbitmq/issue-8758
HTTP API: DELETE /api/queues/{vhost}/{name} use internal API call
2023-09-27 02:55:47 -04:00
Diana Parra Corbacho 78f901a224 HTTP API: DELETE /api/queues/{vhost}/{name} use internal API call
A direct client operation fails if the queue is exclusive. This
API should behave like rabbitmqctl, which can delete the queue
even in that case
2023-09-27 08:18:59 +02:00
Jean-Sébastien Dominique 8c6ba6daca Add Classic Queue version to operator policies 2023-09-26 20:13:52 -04:00
Diana Parra Corbacho cbf479f1a9 mgmt UI admin page: list all operator policies per queue type 2023-09-22 09:01:27 +02:00
Marcial Rosales 54ff3273ab Run full management ui suite by default 2023-09-05 19:50:47 +02:00
Marcial Rosales 7fb55881a4 Refactor suites to shorten pipeline execution
- Separate pure management ui suites from authnz
- Run full management ui suite on every commit to main or
 release branches
 - Run full management ui suite on every change done to
 rabbitmq_management plugin on any PR
2023-09-05 19:50:46 +02:00
Karl Nilsson 119f034406
Message Containers (#5077)
This PR implements an approach for a "protocol (data format) agnostic core" where the format of the message isn't converted at point of reception.

Currently all non-AMQP 0.9.1 originating messages are converted into an AMQP 0.9.1 flavoured basic_message record before being sent to a queue. If the messages are then consumed by the originating protocol they are converted back from AMQP 0.9.1. For some protocols such as MQTT 3.1 this isn't too expensive as MQTT is mostly a fairly easily mapped subset of AMQP 0.9.1, but for others such as AMQP 1.0 the conversions are awkward and in some cases lossy even if consuming from the originating protocol.

This PR instead wraps all incoming messages in their originating form into a generic, extensible message container type (mc). The container module exposes an API to get common message details such as size and various properties (ttl, priority etc) directly from the source data type. Each protocol needs to implement the mc behaviour such that when a message originating from one protocol is consumed by another protocol, we convert it to the target protocol at that point.

The message container also contains annotations, dead letter records and other metadata we need to record during the lifetime of a message. The original protocol message is never modified unless it is consumed.

This includes conversion modules to and from amqp, amqpl (AMQP 0.9.1) and mqtt.
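
As a rough sketch of the intended flow (apart from mc:init/3, which is
referenced later in this history, the function names and argument shapes
below are assumptions; see mc.erl for the real API):

```
%% Illustration only: wrap an incoming AMQP 0.9.1 message in a message
%% container and convert it only when an MQTT consumer receives it.
Mc     = mc:init(mc_amqpl, Content, #{}), %% keep the original data format
Size   = mc:size(Mc),                     %% common details via the mc API
McMqtt = mc:convert(mc_mqtt, Mc),         %% convert at consume time only
```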


COMMIT HISTORY:

* Refactor away from using the delivery{} record

In many places including exchange types. This should make it
easier to move towards using a message container type instead of
basic_message.

Add mc module and move direct replies outside of exchange

Lots of changes incl classic queues

Implement stream support incl amqp conversions

simplify mc state record

move mc.erl

mc dlx stuff

recent history exchange

Make tracking work

But doesn't take a protocol agnostic approach as we just convert
everything into AMQP legacy and back. Might be good enough for now.

Tracing as a whole may want a bit of a re-vamp at some point.

tidy

make quorum queue peek work by legacy conversion

dead lettering fixes

dead lettering fixes

CMQ fixes

rabbit_trace type fixes

fixes

fix

Fix classic queue props

test assertion fix

feature flag and backwards compat

Enable message_container feature flag in some SUITEs

Dialyzer fixes

fixes

fix

test fixes

Various

Manually update a gazelle generated file

until a gazelle enhancement can be made
https://github.com/rabbitmq/rules_erlang/issues/185

Add message_containers_SUITE to bazel

and regen bazel files with gazelle from rules_erlang@main

Simplify essential property access

Such as durable, ttl and priority by extracting them into annotations
at message container init time.

Move type

to remove dependency on amqp10 stuff in mc.erl

mostly because I don't know how to make bazel do the right thing

add more stuff

Refine routing header stuff

wip

Cosmetics

Do not use "maybe" as type name as "maybe" is a keyword since OTP 25
which makes Erlang LS complain.

* Dedup death queue names

* Fix function clause crashes

Fix failing tests in the MQTT shared_SUITE:
A classic queue message ID can be undefined as set in
fbe79ff47b/deps/rabbit/src/rabbit_classic_queue_index_v2.erl (L1048)

Fix failing tests in the MQTT shared_SUITE-mixed:
When feature flag message_containers is disabled, the
message is not an #mc{} record, but a #basic_message{} record.

* Fix is_utf8_no_null crash

Prior to this commit, the function crashed if invalid UTF-8 was
provided, e.g.:
```
1> rabbit_misc:is_valid_shortstr(<<"😇"/utf16>>).
** exception error: no function clause matching rabbit_misc:is_utf8_no_null(<<216,61,222,7>>) (rabbit_misc.erl, line 1481)
```

* Implement mqtt mc behaviour

For now via amqp translation.

This is still work in progress, but the following SUITEs pass:
```
make -C deps/rabbitmq_mqtt ct-shared t=[mqtt,v5,cluster_size_1] FULL=1
make -C deps/rabbitmq_mqtt ct-v5 t=[mqtt,cluster_size_1] FULL=1
```

* Shorten mc file names

Module name length matters because for each persistent message the #mc{}
record is persisted to disk.

```
1> iolist_size(term_to_iovec({mc, rabbit_mc_amqp_legacy})).
30
2> iolist_size(term_to_iovec({mc, mc_amqpl})).
17
```

This commit renames the mc modules:
```
ag -l rabbit_mc_amqp_legacy | xargs sed -i 's/rabbit_mc_amqp_legacy/mc_amqpl/g'
ag -l rabbit_mc_amqp | xargs sed -i 's/rabbit_mc_amqp/mc_amqp/g'
ag -l rabbit_mqtt_mc | xargs sed -i 's/rabbit_mqtt_mc/mc_mqtt/g'
```

* mc: make deaths an annotation + fixes

* Fix mc_mqtt protocol_state callback

* Fix test will_delay_node_restart

```
make -C deps/rabbitmq_mqtt ct-v5 t=[mqtt,cluster_size_3]:will_delay_node_restart FULL=1
```

* Bazel run gazelle

* mix format rabbitmqctl.ex

* Ensure ttl annotation is reflected in amqp legacy protocol state

* Fix id access in message store

* Fix rabbit_message_interceptor_SUITE

* dialyzer fixes

* Fix rabbit:rabbit_message_interceptor_SUITE-mixed

set_annotation/3 should not result in duplicate keys

* Fix MQTT shared_SUITE-mixed

Up to 3.12 non-MQTT publishes were always QoS 1 regardless of delivery_mode.
75a953ce28/deps/rabbitmq_mqtt/src/rabbit_mqtt_processor.erl (L2075-L2076)
From now on, non-MQTT publishes are QoS 1 if durable.
This makes more sense.

The MQTT plugin must send a #basic_message{} to an old node that does
not understand message containers.

* Field content of 'v1_0.data' can be binary

Fix
```
bazel test //deps/rabbitmq_mqtt:shared_SUITE-mixed \
    --test_env FOCUS="-group [mqtt,v4,cluster_size_1] -case trace" \
    -t- --test_sharding_strategy=disabled
```

* Remove route/2 and implement route/3 for all exchange types.

This removes the route/2 callback from rabbit_exchange_type and
makes route/3 mandatory instead. This is a breaking change and
will require all implementations of exchange types to update their
code, however this is necessary anyway for them to correctly handle
the mc type.

stream filtering fixes

* Translate directly from MQTT to AMQP 0.9.1

* handle undecoded properties in mc_compat

amqpl: put clause in right order

recover death details from amqp data

* Replace callback init_amqp with convert_from

* Fix return value of lists:keyfind/3

* Translate directly from AMQP 0.9.1 to MQTT

* Fix MQTT payload size

MQTT payload can be a list when converted from AMQP 0.9.1 for example

First conversions tests

Plus some other conversion related fixes.

bazel

bazel

translate amqp 1.0 null to undefined

mc: property/2 and correlation_id/message_id return type tagged values.

To ensure we can support a variety of types better.

The type tags are AMQP 1.0 flavoured.

fix death recovery

mc_mqtt: impl new api

Add callbacks to allow protocols to compact data before storage

And make it readable when needing to query things repeatedly.

bazel fix

* more decoding

* tracking mixed versions compat

* mc: flip default of `durable` annotation to save some data.

Assuming most messages are durable and that in-memory messages suffer less
from persistence overhead, it makes sense for a non-existent `durable`
annotation to mean durable=true.

* mc conversion tests and tidy up

* mc make x_header unstrict again

* amqpl: death record fixes

* bazel

* amqp -> amqpl conversion test

* Fix crash in mc_amqp:size/1

Body can be a single amqp-value section (instead of
being a list) as shown by test
```
make -C deps/rabbitmq_amqp1_0/ ct-system t=java
```
on branch native-amqp.

* Fix crash in lists:flatten/1

Data can be a single amqp-value section (instead of
being a list) as shown by test
```
make -C deps/rabbitmq_amqp1_0 ct-system t=dotnet:roundtrip_to_amqp_091
```
on branch native-amqp.

* Fix crash in rabbit_writer

Running test
```
make -C deps/rabbitmq_amqp1_0 ct-system t=dotnet:roundtrip_to_amqp_091
```
on branch native-amqp resulted in the following crash:
```
crasher:
  initial call: rabbit_writer:enter_mainloop/2
  pid: <0.711.0>
  registered_name: []
  exception error: bad argument
    in function  size/1
       called as size([<<0>>,<<"Sw">>,[<<160,2>>,<<"hi">>]])
       *** argument 1: not tuple or binary
    in call from rabbit_binary_generator:build_content_frames/7 (rabbit_binary_generator.erl, line 89)
    in call from rabbit_binary_generator:build_simple_content_frames/4 (rabbit_binary_generator.erl, line 61)
    in call from rabbit_writer:assemble_frames/5 (rabbit_writer.erl, line 334)
    in call from rabbit_writer:internal_send_command_async/3 (rabbit_writer.erl, line 365)
    in call from rabbit_writer:handle_message/2 (rabbit_writer.erl, line 265)
    in call from rabbit_writer:handle_message/3 (rabbit_writer.erl, line 232)
    in call from rabbit_writer:mainloop1/2 (rabbit_writer.erl, line 223)
```
because #content.payload_fragments_rev is currently supposed to
be a flat list of binaries instead of being an iolist.

This commit fixes this crash inefficiently by calling
iolist_to_binary/1. A better solution would be to allow AMQP legacy's #content.payload_fragments_rev
to be an iolist.

* Add accidentally deleted line back

* mc: optimise mc_amqp internal format

By removing the outer records for message and delivery annotations
as well as application properties and footers.

* mc: optimise mc_amqp map_add by using upsert

* mc: refactoring and bug fixes

* mc_SUITE routingheader assertions

* mc remove serialize/1 callback as only used by amqp

* mc_amqp: avoid returning a nested list from protocol_state

* test and bug fix

* move infer_type to mc_util

* mc fixes and additional assertions

* Support headers exchange routing for MQTT messages

When a headers exchange is bound to the MQTT topic exchange, routing
will be performed based on both MQTT topic (by the topic exchange) and
MQTT User Property (by the headers exchange).

This combines the best worlds of both MQTT 5.0 and AMQP 0.9.1 and
enables powerful routing topologies.

When the User Property contains the same name multiple times, only the
last name (and value) will be considered by the headers exchange.
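
A minimal sketch of that "last occurrence wins" rule (the helper is
hypothetical and only illustrates the idea):

```
%% Keep only the last value for each User Property name before the
%% pairs are matched by the headers exchange. maps:from_list/1 keeps
%% the right-most value for duplicate keys.
keep_last_occurrence(UserProps) ->
    maps:to_list(maps:from_list(UserProps)).
```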

* Fix crash when sending from stream to amqpl

When publishing a message via the stream protocol and consuming it via
AMQP 0.9.1, the following crash occurred prior to this commit:
```
crasher:
  initial call: rabbit_channel:init/1
  pid: <0.818.0>
  registered_name: []
  exception exit: {{badmatch,undefined},
                   [{rabbit_channel,handle_deliver0,4,
                                    [{file,"rabbit_channel.erl"},
                                     {line,2728}]},
                    {lists,foldl,3,[{file,"lists.erl"},{line,1594}]},
                    {rabbit_channel,handle_cast,2,
                                    [{file,"rabbit_channel.erl"},
                                     {line,728}]},
                    {gen_server2,handle_msg,2,
                                 [{file,"gen_server2.erl"},{line,1056}]},
                    {proc_lib,wake_up,3,
                              [{file,"proc_lib.erl"},{line,251}]}]}
```

This commit first gives `mc:init/3` the chance to set exchange and
routing_keys annotations.
If not set, `rabbit_stream_queue` will set these annotations assuming
the message was originally published via the stream protocol.

* Support consistent hash exchange routing for MQTT 5.0

When a consistent hash exchange is bound to the MQTT topic exchange,
MQTT 5.0 messages can be routed to queues consistently based on the
Correlation-Data in the PUBLISH packet.

* Convert MQTT 5.0 User Property

* to AMQP 0.9.1 headers
* from AMQP 0.9.1 headers
* to AMQP 1.0 application properties and message annotations
* from AMQP 1.0 application properties and message annotations

* Make use of Annotations in mc_mqtt:protocol_state/2

mc_mqtt:protocol_state/2 includes Annotations as parameter.
It's cleaner to make use of these Annotations when computing the
protocol state instead of relying on the caller (rabbitmq_mqtt_processor)
to compute the protocol state.

* Enforce AMQP 0.9.1 field name length limit

The AMQP 0.9.1 spec prohibits field names longer than 128 characters.
Therefore, when converting AMQP 1.0 message annotations, application
properties or MQTT 5.0 User Property to AMQP 0.9.1 headers, drop any
names longer than 128 characters.
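
A minimal sketch of that filtering step (the helper is hypothetical):

```
%% Hypothetical helper: drop AMQP 0.9.1 headers whose names exceed the
%% protocol's 128-character field name limit.
drop_long_header_names(Headers) ->
    [H || {Name, _Type, _Value} = H <- Headers, byte_size(Name) =< 128].
```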

* Fix type specs

Apply feedback from Michael Davis

Co-authored-by: Michael Davis <mcarsondavis@gmail.com>

* Add mc_mqtt unit test suite

Implement mc_mqtt:x_header/2

* Translate indicator that payload is UTF-8 encoded

when converting between MQTT 5.0 and AMQP 1.0

* Translate single amqp-value section from AMQP 1.0 to MQTT

Convert to a text representation, if possible, and indicate to MQTT
client that the payload is UTF-8 encoded. This way, the MQTT client will
be able to parse the payload.

If conversion to text representation is not possible, encode the payload
using the AMQP 1.0 type system and indicate the encoding via Content-Type
message/vnd.rabbitmq.amqp.

This Content-Type is not registered.
Type "message" makes sense since it's a message.
Vendor tree "vnd.rabbitmq.amqp" makes sense since merely subtype "amqp" is not
registered.

* Fix payload conversion

* Translate Response Topic between MQTT and AMQP

Translate MQTT 5.0 Response Topic to AMQP 1.0 reply-to address and vice
versa.

The Response Topic must be a UTF-8 encoded string.

This commit re-uses the already defined RabbitMQ target addresses:
```
"/topic/"     RK        Publish to amq.topic with routing key RK
"/exchange/"  X "/" RK  Publish to exchange X with routing key RK
```

By default, the MQTT topic exchange is configured to be amq.topic using
the 1st target address.

When an operator modifies the mqtt.exchange, the 2nd target address is
used.

* Apply PR feedback

and fix formatting

Co-authored-by: Michael Davis <mcarsondavis@gmail.com>

* tidy up

* Add MQTT message_containers test

* consistent hash exchange: avoid amqp legacy conversion

When hashing on a header value.

* Avoid converting to amqp legacy when using exchange federation

* Fix test flake

* test and dialyzer fixes

* dialyzer fix

* Add MQTT protocol interoperability tests

Test receiving from and sending to MQTT 5.0 and
* AMQP 0.9.1
* AMQP 1.0
* STOMP
* Streams

* Regenerate portions of deps/rabbit/app.bzl with gazelle

I'm not exactly sure how this happened, but gazelle seems to have been
run with an older version of the rules_erlang gazelle extension at
some point. This caused generation of a structure that is no longer
used. This commit updates the structure to the current pattern.

* mc: refactoring

* mc_amqpl: handle delivery annotations

Just in case they are included.

Also use iolist_to_iovec to create flat list of binaries when
converting from amqp with amqp encoded payload.

---------

Co-authored-by: David Ansari <david.ansari@gmx.de>
Co-authored-by: Michael Davis <mcarsondavis@gmail.com>
Co-authored-by: Rin Kuryloski <kuryloskip@vmware.com>
2023-08-31 11:27:13 +01:00
Simon Unge 2d74d24b80 Disable add/delete/shrink/grow QQ operations via HTTP api 2023-08-23 01:03:28 +00:00
Marcial Rosales 1e74490412 Fix wrong config 2023-08-14 12:15:51 +01:00
Marcial Rosales dbffccba9d Fix #9043 2023-08-14 11:51:46 +01:00
Michael Klishin 52d78e018a
Merge pull request #8218 from SimonUnge/eval_membership_stand_alone_process
Reconcile (repair or expand) quorum queue membership periodically
2023-07-13 20:28:34 +04:00
Arnaud Cogoluègnes d0a6efc1c9
Document stream management plugin endpoints
Fixes #8751
2023-07-13 15:41:23 +02:00
Simon Unge 559a83d45f See #7209. Evaluate quorum queue membership periodically. 2023-07-11 13:14:04 -07:00
Karl Nilsson 86479670cf
Make filter size configurable
as a queue arg and policy
2023-07-10 15:21:53 +02:00
Jean-Sébastien Pédron f3be0118c6
Mark management metrics collection as deprecated
[Why]
Management metrics collection will be removed in RabbitMQ 4.0. The
prometheus plugin provides a better and more scalable alternative.

[How]
The management metrics collection is marked as deprecated in the code
using the Deprecated features subsystem (based on feature flags). See
pull request #7390 for a description of that subsystem.

To test RabbitMQ behavior as if the feature was removed, the following
configuration setting can be used:
deprecated_features.permit.management_metrics_collection = false

Management metrics collection can be turned off anytime, there are no
conditions to do that.

Once management metrics collection is turned off, the management API
will not report any metrics and the UI will show empty graphs.

Note that given the marketing calendar, the deprecated feature will go
directly from "permitted by default" to "removed" in RabbitMQ 4.0. It
won't go through the gradual deprecation process.
2023-07-06 11:02:45 +02:00
antsthebul 4ebc3244f0 Set max height value for popup, so as to not conflict with smaller popups 2023-06-30 13:40:34 -04:00
antsthebul b8f65083d1 Adjust CSS on Popup box 2023-06-29 14:47:56 -04:00
Simon Unge 1037d8014d gazelle deps fix! 2023-06-23 14:52:56 -07:00
Simon Unge 8b3ca4c972 See #8605. Add authentication support to prometheus. 2023-06-23 13:54:45 -07:00
Michael Klishin 55442aa914 Replace @rabbitmq.com addresses with rabbitmq-core@groups.vmware.com
Don't ask why we have to do it. Because reasons!
2023-06-20 15:40:13 +04:00
Michael Klishin 0a00526dba More wording, link to the maintenance mode doc section 2023-06-15 22:48:41 +04:00
Michael Klishin f428af75b7 One more UI wording change 2023-06-15 22:39:03 +04:00
Michael Klishin 46561fc9fe Naming changes #8578 2023-06-15 22:37:36 +04:00
Simon Unge 782830f4bd Show nodes in maintenance mode in UI 2023-06-15 22:37:36 +04:00
Marcial Rosales 77ee572467 Fixes #8547 2023-06-14 09:39:03 +02:00
Michael Klishin 99968792fa Basic tests for the endpoints introduced in #8532 2023-06-14 03:14:36 +04:00
Michael Klishin e5759e2b60 Implement {POST, DELETE} /api/queues/quorum/replicas/on/:node/{grow,shrink}
Part of #8532.
2023-06-14 02:01:31 +04:00
Michael Klishin fc895b7212 Return 202 for QQ replica removal 2023-06-14 00:07:06 +04:00
Michael Klishin de75f3bf79 HTTP API: introduce two endpoints for QQ replica management
to match what is offered by the CLI (`rabbitmq-queues`).

Part of #8532.
2023-06-13 01:36:54 +04:00
Michael Klishin ddb9fbd12c
Merge pull request #8520 from rabbitmq/mk-rename-management-ui-tabs
Rename a couple of management UI tabs
2023-06-11 00:14:43 +04:00
Michael Klishin f54b2906bd More Selenium suite updates 2023-06-10 23:21:32 +04:00
Michael Klishin f4aed7a55e Make sure that nav element ids follow a reasonable convention
Instead of using the label, use a snake-case value
without any spaces.

While at it, update Selenium/WebDriver test suites.
2023-06-10 19:38:36 +04:00
Michael Klishin c68d13d071 Don't fail with a 500 when virtual host or user do not exist
when clearing their limits.
2023-06-10 19:28:09 +04:00
Michael Klishin f720338658 Rename a couple of management UI tabs
* Queues => Queues and Streams
 * Stream => Stream Connections

to better reflect what they display in modern versions.

Per discussion with the team.
2023-06-10 18:57:16 +04:00
Simon Unge cfdc4c7991 Fix /api/connections/username bug where wrong datatype was provided to internal function causing internal error 2023-06-05 15:35:23 -07:00
Marcial Rosales da43ccf6c7 Fix member variable for datamodel in Display 2023-05-23 17:03:45 +02:00
Marcial Rosales 02fda919a5 Fix #8276 2023-05-23 16:47:11 +02:00
Iliia Khaprov 00b3a895f1 UI bits for consumer timeout 2023-05-22 11:59:30 +02:00
Marcial Rosales 9736fb6c46 Fix mgt-only exchanges test 2023-05-19 21:08:44 +02:00
Marcial Rosales e155da86e8 Fix test cases 2023-05-19 20:32:44 +02:00
Marcial Rosales 79c206d339 Test only admin users can edit limits 2023-05-19 17:53:01 +02:00
Marcial Rosales 6ca5d026eb Only load users for limits for admin user 2023-05-19 17:24:38 +02:00
Marcial Rosales 93907b38a1 Look up elements rather than clicking on them
This is because when we click on a menu option,
HTML content is refreshed, which confuses
the webdriver
2023-05-19 17:01:15 +02:00
Marcial Rosales 67e04259a0 Test various user tags without vhost permissions 2023-05-19 17:01:15 +02:00
Marcial Rosales 1022f7d197 Do not mount routes to pages
which require vhost access when the
user has no access to any vhost
2023-05-19 17:01:15 +02:00
Marcial Rosales 24fb9afe16 WIP Fix issue 2023-05-19 17:01:15 +02:00
Michael Klishin 29f9e1ceaf
Merge pull request #8236 from rabbitmq/no-more-lazy
Remove "lazy" from Management and lazy-specific tests
2023-05-19 11:51:22 +04:00
Michael Klishin e60a5409ff
Merge pull request #8241 from cloudamqp/queue_storage_version
Show classic queue storage version on Mgmt UI queue page
2023-05-19 11:09:19 +04:00
Péter Gömöri e0f485b1cc Show classic queue storage version on Mgmt UI queue page 2023-05-19 00:07:22 +02:00
Simon Unge 472496b4a3 Add ha-* operator policies to UI shortcuts 2023-05-18 11:24:04 -07:00
Michal Kuratczyk f8a3643d5d
Remove "lazy" from Management and lazy-specific tests 2023-05-18 13:59:50 +02:00
Rin Kuryloski eb94a58bc9 Add a workflow to compare the bazel/erlang.mk output
To catch any drift between the builds
2023-05-15 13:54:14 +02:00
Michael Klishin 65e59f670b Only validate regular expression when the regex box is checked 2023-04-27 13:43:44 +04:00
Michael Klishin a4386db25d Wording 2023-04-27 12:32:39 +04:00
Michael Klishin fe1fbb8264 Add a warning for invalid regular expressions
Warn the user when the filter expression does not compile to a regular
expression.

Part of #8008.
2023-04-27 12:27:19 +04:00
Michael Klishin a93ad3b7f1 First attempt at addressing #8008
When the filter expression is not a valid regexp, send
it as a regular text filter.
2023-04-27 12:06:13 +04:00
Rin Kuryloski a944439fba Replace globs in bazel with explicit lists of files
As this is preferred in rules_erlang 3.9.14
2023-04-25 17:29:12 +02:00
Rin Kuryloski 854d01d9a5 Restore the original -include_lib statements from before #6466
since this broke erlang_ls

requires rules_erlang 3.9.13
2023-04-20 12:40:45 +02:00
Michael Klishin c0ed80c625
Merge pull request #6466 from rabbitmq/gazelle
Use gazelle for some maintenance of bazel BUILD files
2023-04-19 09:33:44 +04:00
Michael Klishin 6e6c1581d4
rabbit_mgmt_wm_auth: rearrange exports 2023-04-18 03:28:58 +04:00
Rin Kuryloski 8de8f59d47 Use gazelle generated bazel files
Bazel build files are now maintained primarily with `bazel run
gazelle`. This will analyze and merge changes into the build files as
necessitated by certain code changes (e.g. the introduction of new
modules).

In some cases there are hints to gazelle in the build files, such as `#
gazelle:erlang...` or `# keep` comments. xref checks on plugins that
depend on the cli are a good example.
2023-04-17 18:13:18 +02:00
Michael Klishin 0f8a5899de
Merge branch 'main' into otp26-compatibility 2023-04-17 14:23:21 +04:00
Michael Klishin 858b74ff19
Merge branch 'main' into rin/ignore-warnings-plts 2023-04-17 14:01:41 +04:00
Marcial Rosales 67638fa535 Use rabbit_json library to produce json representations 2023-04-17 11:12:36 +02:00
Rin Kuryloski 8a7eee6a86 Ignore warnings when building plt files for dependencies
As we don't generally care if a dependency has warnings, only the
target
2023-04-17 10:09:24 +02:00
Michael Klishin fe3d65002d
Revert "Remove an old version of rabbit_mgmt_wm_auth"
This reverts commit 753fa5a191.

Both rabbit_mgmt_oauth_bootstrap and rabbit_mgmt_wm_auth should
be kept for backwards compatibility with certain clients.
2023-04-17 11:48:23 +04:00
Michael Klishin 753fa5a191
Remove an old version of rabbit_mgmt_wm_auth
rabbit_mgmt_oauth_bootstrap is not hooked up to the dispatcher,
and appears to be an older version of what is now rabbit_mgmt_wm_auth

(cherry picked from commit 1209b86671)
2023-04-14 19:23:16 +04:00
Michal Kuratczyk 3c2917b871
Don't rely on implicit list ordering
While at it, refactor `rabbit_misc:plmerge/2` to use the same precedence
as maps:merge and lists:merge (the second argument supersedes the first
one)
2023-04-13 14:37:18 +02:00
Marcial Rosales 1c1e4515f7 Deprecate uaa settings from management plugin 2023-04-13 11:22:05 +02:00
Michael Klishin f1af3bb922
Merge pull request #7365 from rabbitmq/oauth2-login-with-authorization-header
Read JWT token from Authorization header to login to management ui
2023-04-04 18:48:05 +04:00
Loïc Hoguin 1595727a1a
Fix parsing of cookie header in test suite 2023-04-04 10:11:31 +02:00
Marcial Rosales 829d9d9428 Read JWT token from Authorization Header 2023-04-04 12:00:08 +04:00
Michael Klishin 1a3126d72a Management UI footer link updates 2023-04-03 22:23:06 +04:00
Michael Klishin d478fadaa1 Type specs around virtual host addition 2023-04-02 01:04:16 +04:00
Michael Klishin f1a922a17c Virtual host limit: error type naming
vhost_precondition_failed => vhost_limit_exceeded

vhost_limit_exceeded is the error type used by
definition import when a per-vhost limit is exceeded.
It feels appropriate for this case, too.
2023-04-01 23:11:48 +04:00
Simon Unge 574ca55a3f See #7777. Use vhost_max to stop vhost creation in rabbitmq 2023-03-31 12:18:16 -07:00
Michael Klishin bfcbef64b4 HTTP API: rename default queue type key
from defaultqueuetype to default_queue_type.
defaultqueuetype is still used as a fallback for backwards
compatibility.

Closes #7734.
2023-03-25 01:33:22 +04:00
Michael Klishin 7fb32b6ae0
Merge pull request #7675 from cloudamqp/login_handler
Fix return value of mgmt login handler on bad method
2023-03-21 19:48:55 +04:00
Marcial Rosales 67b952c28c Refactor selenium tests 2023-03-21 12:39:28 +01:00
Péter Gömöri 4d21184a12 Fix return value of mgmt login handler on bad method
To match what cowboy_handler expects.
2023-03-20 11:13:12 +01:00
Michal Kuratczyk 0a3136a916
Allow applying policies to specific queue types
Rather than relying on queue name conventions, allow applying policies
based on the queue type. For example, this allows multiple policies that
apply to all queue names (".*") that specify different parameters for
different queue types.
2023-03-13 12:36:48 +01:00
Marcial Rosales 42b821f0e9 Add missing pem file 2023-02-28 14:10:00 +01:00
Marcial Rosales efb1b5bd10 Fix 2549
Allow list of preferred_username_claims in cuttlefish
config style.
Use new config style on two selenium test suites
Test oauth2 backend's config schema and oauth2 management
config schema
2023-02-28 10:38:28 +01:00
Luke Bakken f420487e5e
Add documentation for hashing passwords
Fixes #7432

Adds HTTP API documentation as well as `rabbitmqctl hash_password` docs.

Add `rabbitmqctl` docs
2023-02-26 15:16:38 -08:00
Jean-Sébastien Pédron 42bcd94dce
rabbit_db_cluster: New module on top of databases clustering
This new module sits on top of `rabbit_mnesia` and provide an API with
all cluster-related functions.

`rabbit_mnesia` should be called directly inside Mnesia-specific code
only, `rabbit_mnesia_rename` or classic mirrored queues for instance.
Otherwise, `rabbit_db_cluster` must be used.

Several modules, in particular in `rabbitmq_cli`, continue to call
`rabbit_mnesia` as a fallback option if the `rabbit_db_cluster` module is
unavailable. This will be the case when the CLI interacts with an
older RabbitMQ version.

This will help with the introduction of a new database backend.
2023-02-22 15:28:04 +01:00
Marcial Rosales 20269bf222 Fix issue #7369
Depending on the `disable_stats` value,
search for detailed exchange details or
basic details
2023-02-21 13:11:45 +01:00
Marcial Rosales 89ee77e5ec Improve how to look for elements and wait for them 2023-02-21 13:07:37 +01:00
Marcial Rosales 9ab7dca650 Fix issue 7301 2023-02-15 14:29:55 +01:00
Michael Klishin d0dc951343
Merge pull request #7058 from rabbitmq/add-node-lists-functions-to-clarify-intent
rabbit_nodes: Add list functions to clarify which nodes we are interested in
2023-02-13 23:06:50 -03:00