Commit Graph

2248 Commits

Loïc Hoguin d9d74d0964
make: Remove ct-logs-archive target
Hasn't been used for a long time.
2024-08-29 15:19:14 +02:00
Loïc Hoguin f3d0d4e113
make: Remove sync-gitremote sync-gituser targets
They are not useful for the monorepo.
2024-08-29 15:19:14 +02:00
Loïc Hoguin a5cfb1ea9a
make: Remove show-upstream-git-fetch-url and co
They haven't been necessary for quite some time.
2024-08-29 15:19:14 +02:00
Loïc Hoguin 4e8ad90cd0
make: Remove commits-since-release
This was only relevant before the monorepo.
2024-08-29 15:19:14 +02:00
Loïc Hoguin 31409e86b0
make: Remove show-branch target
Not useful in the monorepo.
2024-08-29 15:19:14 +02:00
Loïc Hoguin b8bcd5c27c
make: Remove sync-gitignore-from-main target
No longer relevant because of the monorepo.
2024-08-29 15:19:14 +02:00
Loïc Hoguin e947e098bd
make: Remove rabbitmq-deps.mk related targets 2024-08-29 15:19:14 +02:00
Loïc Hoguin 7e7e6feb9d
make: Remove rabbitmq-tests.mk
Everything in this file seems to be dead code except
ct-slow/ct-fast, which have been replaced by their
equivalent in the rabbit Makefile.
2024-08-29 15:19:13 +02:00
David Ansari ffefefba0f Run with default wal_sync_method
...which is `datasync`

RA never pre-allocates the WAL anymore unless explicitly configured to.
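
For reference, a hedged sketch of pinning this explicitly in `advanced.config`, assuming Ra's `wal_sync_method` key (which this commit simply leaves at its `datasync` default):

```erlang
%% Hedged advanced.config sketch - spelling out the (now default) value.
[
 {ra, [
     {wal_sync_method, datasync} %% the other documented value is sync
 ]}
].
```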
2024-08-22 16:24:07 +00:00
Michael Davis 0dd26f0c52
rabbit_db_queue: Transactionally delete transient queues from Khepri
The prior code skirted transactions because the filter function might
cause Khepri to call itself. We want to use the same idea as the old
code - get all queues, filter them, then delete them - but we want to
perform the deletion in a transaction and fail the transaction if any
queues changed since we read them.

This fixes a bug - that the call to `delete_in_khepri/2` could return
an error tuple that would be improperly recognized as `Deletions` -
but should also make deleting transient queues atomic and fast.
Each call to `delete_in_khepri/2` needed to wait on Ra to replicate
because the deletion is an individual command sent from one process.
Performing all deletions at once means we only need to wait for one
command to be replicated across the cluster.

We also bubble up any errors from the delete operation now rather than
storing them as deletions. This fixes a crash that occurs on node down when Khepri is
in a minority.
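
A rough sketch of the read-filter-delete pattern described above, with hypothetical Khepri paths and helper names (the actual code lives in `rabbit_db_queue`):

```erlang
%% Hedged sketch; paths and helpers are illustrative only.
delete_transient_queues(FilterFun) ->
    %% 1. Read and filter outside the transaction, so the filter fun
    %%    cannot cause Khepri to call back into itself.
    {ok, Queues} = list_all_queues(),                 %% hypothetical helper
    Doomed = [Q || Q <- Queues, FilterFun(Q)],
    %% 2. Delete everything in ONE transaction; abort if any queue
    %%    record changed since we read it.
    rabbit_khepri:transaction(
      fun() ->
              lists:foreach(
                fun(Q) ->
                        Path = khepri_queue_path(Q),  %% hypothetical helper
                        case khepri_tx:get(Path) of
                            %% Q is bound above: matches only if unchanged.
                            {ok, Q}  -> ok = khepri_tx:delete(Path);
                            _Changed -> khepri_tx:abort(queue_changed)
                        end
                end, Doomed)
      end).
```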
2024-08-13 11:40:18 -04:00
Michael Davis 96c60a2de4
Move 'for_each_while_ok/2' helper to rabbit_misc 2024-07-22 16:02:03 -04:00
Lois Soto Lopez bb93e718c2 Prometheus: some per-exchange/per-queue metrics aggregated per-channel
Add aggregated copies of some per-object metrics that are currently
labeled per-channel, to reduce cardinality. These metrics are valuable
and easier to process when exposed on a per-exchange and per-queue basis.
2024-07-16 14:30:25 +02:00
Michal Kuratczyk f398892bda
Deprecate queue-master-locator (#11565)
* Deprecate queue-master-locator

This should not be a breaking change - all validation should still pass
* CQs can now use `queue-leader-locator`
* `queue-leader-locator` takes precedence over `queue-master-locator` if both are used
* regardless of which name is used, there are effectively only two values: `client-local` (default) or `balanced`
* other values (`min-masters`, `random`, `least-leaders`) are mapped to `balanced` (see the sketch below)
* Management UI no longer shows `master-locator` fields when declaring a queue/policy, but such arguments can still be used manually (unless not permitted)
* exclusive queues are always declared locally, as before
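
A minimal sketch of the value mapping described above (hypothetical function name, not the actual code):

```erlang
%% Hedged sketch: legacy locator values collapse to the two effective ones.
normalize_locator(<<"client-local">>) -> <<"client-local">>;
normalize_locator(<<"balanced">>)     -> <<"balanced">>;
normalize_locator(_Legacy)            -> <<"balanced">>. %% min-masters, random, least-leaders
```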
2024-07-12 13:22:55 +02:00
Michael Klishin 0700e1cdc4 Revert "Provide per-exchange/queue metrics w/out channelID"
This reverts commit 3ed2e30e3a.
2024-07-11 21:34:52 -04:00
Lois Soto Lopez ec5e258825 Provide per-exchange/queue metrics w/out channelID 2024-07-11 17:34:18 -04:00
Loïc Hoguin bbfa066d79
Cleanup .gitignore files for the monorepo
We don't need to duplicate so many patterns in so many
files since we have a monorepo (and want to keep it).

If I managed to miss something or remove something that
should stay, please put it back. Note that monorepo-wide
patterns should go in the top-level .gitignore file.
Other .gitignore files are for application or folder-
specific patterns.
2024-06-28 12:00:52 +02:00
Loïc Hoguin 18f8ee1457
Merge pull request #11549 from rabbitmq/loic-make-cleanups
Various make cleanup/consolidation
2024-06-27 11:42:24 +02:00
Loïc Hoguin af49f5c526
make: Remove FAST_RUN_BROKER; make normal run-broker fast
The DIST step used rsync for copying files; changing this
to using cp/rm provides a noticeable speed boost.

Before this commit the situation was as follows. With
FAST_RUN_BROKER=1 we are pretty fast but don't benefit
from parallel make:

  make -C deps/rabbitmq_management run-broker FAST_RUN_BROKER=1
    2,04s user 1,57s system 90% cpu 4,016 total
  make -C deps/rabbitmq_management run-broker FAST_RUN_BROKER=1 -j8
    2,08s user 1,55s system 89% cpu 4,069 total

With FAST_RUN_BROKER=0 we are slow; on the other hand
we greatly benefit from parallel make:

  make -C deps/rabbitmq_management run-broker FAST_RUN_BROKER=0
    3,29s user 1,93s system 81% cpu 6,425 total
  make -C deps/rabbitmq_management run-broker FAST_RUN_BROKER=0 -j8
    3,36s user 1,90s system 142% cpu 3,695 total

The reason this method achieves such a result is because
the DIST step that takes a lot of time can be run in
parallel. In addition, this method results in only
the necessary plugins being available in the path,
so unrelated plugins are not discovered
during node startup, saving time.

By changing rsync to cp/rm, we get great results even
without parallel make:

  make -C deps/rabbitmq_management run-broker FAST_RUN_BROKER=0
    3,28s user 1,64s system 105% cpu 4,684 total
  make -C deps/rabbitmq_management run-broker FAST_RUN_BROKER=0 -j8
    3,27s user 1,65s system 135% cpu 3,640 total

We are within 1s of FAST_RUN_BROKER=1 by default, and
faster than FAST_RUN_BROKER=1 with parallel make. On
top of that, we greatly benefit when rebuilding as the
DIST files do not need to be rebuilt every time:

  make -C deps/rabbitmq_management run-broker FAST_RUN_BROKER=0
    2,94s user 1,40s system 107% cpu 4,035 total
  make -C deps/rabbitmq_management run-broker FAST_RUN_BROKER=0 -j8
    2,85s user 1,51s system 138% cpu 3,140 total

Therefore it only makes sense to remove FAST_RUN_BROKER,
and instead use the old method, which is more correct
and has more potential for optimisation.
2024-06-26 16:53:07 +02:00
Loïc Hoguin a64d1e67fc
Remove looking_glass
It has largely been superseded by `perf`. It is no longer
generally useful. It can always be added to BUILD_DEPS for
the rare cases it is needed, or installed locally and
pointed to by setting its path to ERL_LIBS.
2024-06-26 09:56:46 +02:00
Loïc Hoguin 2b03233ac1
make: Remove rabbitmq-macros.mk
It hasn't been used for some time. If compare_version
becomes necessary again in the future, it's in the history.
2024-06-25 13:39:38 +02:00
Loïc Hoguin 9f15e978b1
make: Remove xrefr
It is no longer used by Erlang.mk.
2024-06-25 13:08:08 +02:00
Loïc Hoguin 7e9cac3d00
make: Remove Travis-specific targets/config
This should no longer be used.
2024-06-24 14:12:02 +02:00
Loïc Hoguin 881ebc6138
make: Remove ANT variables
This should no longer be used.
2024-06-24 14:06:48 +02:00
Loïc Hoguin 31310d2315
make: Simplify looking for DEPS_DIR
With the monorepo the dependencies are either correct
or are the parent directory (when we are in a rabbit
application in deps/).
2024-06-24 14:06:46 +02:00
Loïc Hoguin 5c8366f753
Remove file_handle_cache_stats module
The stats were not removed from management agent, instead
they are hardcoded to zero in the agent itself.
2024-06-24 12:07:51 +02:00
Loïc Hoguin 6a47eaad22
Zero sockets_used/sockets_limit stats
They are no longer used.

This removes a couple file_handle_cache:info/1 calls.

We are not removing them from the HTTP API to avoid
breaking things unintentionally.
2024-06-24 12:07:51 +02:00
Loïc Hoguin 49bedfc17e
Remove most of the fd related FHC code
Stats were not removed, including management UI stats
relating to FDs.

Web-MQTT and Web-STOMP configuration relating to FHC
were not removed.

The file_handle_cache itself must be kept until we
remove CQv1.
2024-06-24 12:07:51 +02:00
Michal Kuratczyk 41a4d1711d
OTP27 support (#11366)
* "maybe" is now a keyword
* Bump horus to 0.2.5 and switch to hex
* Get rid of some deprecated callbacks/functions
2024-06-18 07:32:58 +02:00
Loïc Hoguin cc73b86e80
CQ: Remove rabbit_msg_store.hrl
It is no longer needed as only a single module uses it.
2024-06-14 11:52:03 +02:00
Loïc Hoguin 41ce4da5ca
CQ: Remove ability to change shared store index module
It will always use the ETS index. This change lets us
do optimisations that would otherwise not be possible,
including 81b2c39834953d9e1bd28938b7a6e472498fdf13.

A small functional change is included in this commit:
we now always use ets:update_counter to update the
ref_count, instead of a mix of update_{counter,fields}.

When upgrading to 4.0, the index will be rebuilt for
all users that were using a custom index module.
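
For illustration, the `ets:update_counter/3` pattern now used for the ref_count, with a made-up table layout (the real index lives in the shared message store):

```erlang
%% Hedged sketch: field #2 plays the role of ref_count here.
%% ets:update_counter/3 mutates it atomically in place.
Tid = ets:new(msg_index_sketch, [set, public]),
true = ets:insert(Tid, {<<"msg-1">>, 0, some_location}),
1 = ets:update_counter(Tid, <<"msg-1">>, {2, +1}),  %% bump ref_count
0 = ets:update_counter(Tid, <<"msg-1">>, {2, -1}).  %% drop ref_count
```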
2024-06-14 11:52:03 +02:00
Loïc Hoguin efb695548a
make: Don't start all plugins when run-broker targets a specific plugin
When FAST_RUN_BROKER=1 was introduced it helped reduce the
time to run the broker, but it mistakenly always started
all plugins even when running run-broker against a specific
plugin, for example `make -C deps/rabbitmq_management run-broker`.

  Starting broker... completed with 36 plugins.

With this commit only the target plugin (and its dependencies)
will be started.

  Starting broker... completed with 3 plugins.

This also has a positive effect on start performance:

  make -C deps/rabbitmq_management run-broker  2,28s user 2,11s system 88% cpu 4,943 total
  make -C deps/rabbitmq_management run-broker  2,00s user 1,61s system 94% cpu 3,807 total
2024-06-13 10:37:28 +02:00
Michal Kuratczyk fde99508db
make restart-cluster (#11427)
Implement a rolling cluster restart similar
to what the Kubernetes Operator does
2024-06-11 12:25:49 +02:00
Loïc Hoguin fb339ebe7f
make: Don't search for rabbit/scripts
With the monorepo I do not believe we need to worry about
where the scripts are. They are always in deps/rabbit/scripts.
So we copy them from there directly.

This does not improve performance but should work better on
older environments.
2024-06-10 09:42:33 +02:00
Loïc Hoguin 3aa829fb9d
make: In install-scripts copy files all at once
Rather than one by one. I do not know why it was done that
way, but since that dates back before the monorepo, it may
no longer be necessary.

This has a small but noticeable speedup when building:

  make -C deps/rabbit  0,44s user 0,11s system 101% cpu 0,546 total
  make -C deps/rabbitmq_management  0,65s user 0,18s system 101% cpu 0,816 total

  make -C deps/rabbit  0,41s user 0,11s system 101% cpu 0,510 total
  make -C deps/rabbitmq_management  0,57s user 0,21s system 101% cpu 0,778 total
2024-06-10 09:42:33 +02:00
Loïc Hoguin 4f1ea93545
make: Restrict install-cli to the current working directory
Commit 5840834fa8 introduced
copying of scripts and escripts to the plugin currently
being worked on. At some point after that, in recent weeks,
the target would run for all applications being compiled,
and not just the plugin we have asked Make to build. This
led to increased build times.

This change ensures the scripts and escripts are only
copied for the directory we have run Make in. So if
we simply do "make", this is the top-level directory.
If we do "make -C deps/rabbit" this is the rabbit
application's directory.

Doing this has a huge impact on performance when
rebuilding with no changes done to the source code:

  make  6,63s user 2,60s system 106% cpu 8,668 total
  make  2,88s user 1,30s system 117% cpu 3,567 total

And a less pronounced impact on specific applications
(results will vary):

  make -C deps/rabbit  0,50s user 0,18s system 100% cpu 0,677 total
  make -C deps/rabbit  0,43s user 0,12s system 101% cpu 0,551 total
2024-06-10 09:42:33 +02:00
Loïc Hoguin de8df83996
make: Easy code reloading for run-broker/start-cluster
This enables code reloading for single nodes and
for clusters. For single nodes using `make run-broker`
at least two terminals must be used: the one with
the broker, and the one doing the reloading.

When only using a single node (`make run-broker`),
the `RELOAD=1` variable can be passed to Make when
recompiling. This will lead to code reloading at
the end of compilation.

  make RELOAD=1

Alternatively the "reload-broker" target can be used:

  make all reload-broker

Special care must be given when the -j flag is used.
In that case it may be needed to separate compilation
and reloading in two separate calls:

  make && make reload-broker

The same considerations apply to clusters, only there
isn't a shorthand variable for them. Instead, the
"reload-cluster" target must be used:

  make all reload-cluster

Or:

  make && make reload-cluster

Either of these will compile and reload the modules
on all nodes of the cluster.

It is also possible to reload modules only on some nodes,
using `make reload-broker` with the variable "RABBITMQ_NODENAME"
set to the node you want to trigger a reload on. This could
be used to test scenarios where the code differs between
the nodes in the cluster.
2024-06-10 09:42:33 +02:00
Loïc Hoguin 769f19b58a
make: Don't build dist for run-broker by default
During development we want `make run-broker` to execute fast,
yet still pick up the changes we made in rabbit applications.
We can already do this by setting the appropriate variables.
This commit makes it so that this is the default. Now instead
of depending on the `dist` target we run plugins from the deps/
directory. And we depend on `all` to pick up changes.

This is equivalent to running
`make run-broker PLUGINS_FROM_DEPS_DIR=1 DIST_TARGET=all`.

It can be disabled by setting `FAST_RUN_BROKER=0`.

It doesn't invalidate the `NOBUILD=1` variable which lets us
run the broker without recompiling (used in tests). It also
doesn't make `NOBUILD=1` faster (or slower).

The difference when running `make run-broker` by default is
roughly half the time of what it was before:

  make run-broker  16,67s user 10,42s system 101% cpu 26,567 total
  make run-broker  8,75s user 4,40s system 102% cpu 12,873 total

And it also applies to `make start-cluster`:

  make start-cluster  26,32s user 15,15s system 141% cpu 29,279 total
  make start-cluster  18,09s user 8,76s system 170% cpu 15,726 total
2024-06-10 09:42:33 +02:00
David Ansari d70e529d9a Introduce outbound RabbitMQ internal AMQP flow control
## What?

Introduce RabbitMQ internal flow control for messages sent to AMQP
clients.

Prior this PR, when an AMQP client granted a large amount of link
credit (e.g. 100k) to the sending queue, the sending queue sent
that amount of messages to the session process no matter what.
This becomes problematic for memory usage when the session process
cannot send out messages fast enough to the AMQP client, especially if
1. The writer proc cannot send fast enough. This can happen when
the AMQP client does not receive fast enough and causes TCP
back-pressure to the server. Or
2. The server session proc is limited by remote-incoming-window.

Both scenarios are now added as test cases.
Tests
* tcp_back_pressure_rabbitmq_internal_flow_quorum_queue
* tcp_back_pressure_rabbitmq_internal_flow_classic_queue
cover scenario 1.

Tests
* incoming_window_closed_rabbitmq_internal_flow_quorum_queue
* incoming_window_closed_rabbitmq_internal_flow_classic_queue
cover scenario 2.

This PR sends messages from queues to AMQP clients in a more controlled
manner.

To illustrate:
```
make run-broker PLUGINS="rabbitmq_management" RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+S 4"
observer_cli:start()
mq
```
where `mq` sorts by message queue length.
Create a stream:
```
deps/rabbitmq_management/bin/rabbitmqadmin declare queue name=s1 queue_type=stream durable=true
```
Next, send and receive from the Stream via AMQP.
Grant a large number of link credit to the sending stream:
```
docker run -it --rm --add-host host.docker.internal:host-gateway ssorj/quiver:latest
bash-5.1# quiver --version
quiver 0.4.0-SNAPSHOT
bash-5.1# quiver //host.docker.internal//queue/s1 --durable -d 30s --credit 100000
```

**Before** this PR:
```
RESULTS

Count ............................................... 100,696 messages
Duration ............................................... 30.0 seconds
Sender rate ......................................... 120,422 messages/s
Receiver rate ......................................... 3,363 messages/s
End-to-end rate ....................................... 3,359 messages/s
```
We observe that all 100k link credit worth of messages are buffered in the
writer proc's mailbox:
```
|No | Pid        | MsgQueue  |Name or Initial Call                 |      Memory | Reductions          |Current Function                  |
|1  |<0.845.0>   |100001     |rabbit_amqp_writer:init/1            |  126.0734 MB| 466633491           |prim_inet:send/5                  |
```

**After** this PR:
```
RESULTS

Count ............................................. 2,973,440 messages
Duration ............................................... 30.0 seconds
Sender rate ......................................... 123,322 messages/s
Receiver rate ........................................ 99,250 messages/s
End-to-end rate ...................................... 99,148 messages/s
```
We observe that the message queue lengths of both writer and session
procs are low.

 ## How?

Our goal is to have queues send out messages in a controlled manner
without overloading RabbitMQ itself.
We want RabbitMQ internal flow control between:
```
AMQP writer proc <--- session proc <--- queue proc
```
A similar concept exists for classic queues sending via AMQP 0.9.1.
We want an approach that applies to AMQP and works generically for all
queue types.

For the interaction between AMQP writer proc and session proc we use a
simple credit based approach reusing module `credit_flow`.
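
As a rough illustration of the `credit_flow` pattern (a hedged sketch; the real wiring lives in the session and writer modules, and the helper names below are made up):

```erlang
-module(credit_flow_sketch).
-export([on_delivery_to_writer/1, on_message_processed/1]).

%% Session proc side: consume one credit per delivery handed to the
%% writer, and stop handing out deliveries while blocked.
on_delivery_to_writer(WriterPid) ->
    credit_flow:send(WriterPid),
    case credit_flow:blocked() of
        true  -> blocked;      %% stash further deliveries for later
        false -> keep_sending
    end.

%% Writer proc side: ack each processed message; credit_flow sends a
%% bump back to the session proc once enough acks have accumulated.
on_message_processed(SessionPid) ->
    credit_flow:ack(SessionPid).
```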

For the interaction between session proc and queue proc, the following options
exist:

 ### Option 1
The session process provides explicit feedback to the queue after it
has sent N messages.
This approach is implemented in
https://github.com/ansd/rabbitmq-server/tree/amqp-flow-control-poc-1
and works well.
A new `rabbit_queue_type:sent/4` API was added which lets the queue proc know
that it can send further messages to the session proc.

Pros:
* Will work equally well for AMQP 0.9.1, e.g. when quorum queues send messages
  in auto ack mode to AMQP 0.9.1 clients.
* Simple for the session proc

Cons:
* Slightly added complexity in every queue type implementation
* Multiple Ra commands (settle, credit, sent) to decide when a quorum
  queue sends more messages.

 ### Option 2
A dual link approach where two AMQP links exist between
```
AMQP client <---link--> session proc <---link---> queue proc
```
When the client grants a large amount of credits, the session proc will
top up credits to the queue proc periodically in smaller batches.

Pros:
* No queue type modifications required.
* Re-uses AMQP link flow control

Cons:
* Significant added complexity in the session proc. A client can
  dynamically decrease or increase credits and dynamically change the drain
  mode while the session tops up credit to the queue.

 ### Option 3
Credit is a 32 bit unsigned integer.
The spec mandates that the receiver independently chooses a credit.
Nothing in the spec prevents the receiver from choosing a credit of 1 billion.
However the credit value is merely a **maximum**:
> The link-credit variable defines the current maximum legal amount that the delivery-count can be increased by.

Therefore, the server is not required to send all available messages to this
receiver.

For delivery-count:
> Only the sender MAY independently modify this field.

"independently" could be interpreted as the sender could add to the delivery-count
irrespective of what the client chose for drain and link-credit.

Option 3: The queue proc could at credit time already consume credit
and advance the delivery-count if credit is too large before checking out any messages.
For example if credit is 100k, but the queue only wants to send 1k, the queue could
consume 99k of credits and advance the delivery-count, and subsequently send maximum 1k messages.
If the queue advanced the delivery-count, RabbitMQ must send a FLOW to the receiver,
otherwise the receiver wouldn’t know that it ran out of link-credit.

Pros:
* Very simple

Cons:
* Possibly unexpected behaviour for receiving AMQP clients
* Possibly poor end-to-end throughput in auto-ack mode because the queue
  would send a batch of messages followed by a FLOW containing the advanced
  delivery-count. Only thereafter will the client learn that it ran out of
  credits and top up again. This feels like synchronously pulling a batch
  of messages. In contrast, option 2 sends out more messages as soon as
  the previous messages left RabbitMQ, without requiring another credit
  top-up from the receiver.
* drain mode with large credits requires the queue to send all available
  messages and only thereafter advance the delivery-count. Therefore,
  drain mode breaks option 3 somewhat.

 ### Option 4
Session proc drops message payload when its outgoing-pending queue gets
too large and re-reads payloads from the queue once the message can be
sent (see `get_checked_out` Ra command for quorum queues).

Cons:
* Would need to be implemented for every queue type, especially classic queues
* Doesn't limit the amount of message metadata in the session proc's
  outgoing-pending queue

 ### Decision: Option 2
This commit implements option 2 to avoid any queue type modification.
At most one credit request is in-flight between session process and
queue process for a given queue consumer.
If the AMQP client sends another FLOW in between, the session proc
stashes the FLOW until it processes the previous credit reply.

A delivery is only sent from the outgoing-pending queue if the
session proc is not blocked by
1. writer proc, or
2. remote-incoming-window

The credit reply is placed into the outgoing-pending queue.
This ensures that the session proc will only top up the next batch of
credits if sufficient messages were sent out to the writer proc.

A future commit could additionally have each queue limit the number of
unacked messages for a given AMQP consumer, or alternatively make use
of session outgoing-window.
2024-06-04 13:11:55 +02:00
David Ansari bd847b8cac Put credit flow config into persistent term
Put configuration credit_flow_default_credit into persistent term such
that the tuple doesn't have to be copied on the hot path.

Also, change persistent term keys from `{rabbit, AtomKey}` to `AtomKey`
so that hashing becomes cheaper.
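
The pattern, sketched (the `{400, 200}` value is illustrative):

```erlang
%% At boot: store once. persistent_term:get/1 returns the term without
%% copying it onto the caller's heap, unlike application:get_env/2.
ok = persistent_term:put(credit_flow_default_credit, {400, 200}),
%% Hot path: a plain atom key hashes cheaper than {rabbit, AtomKey}.
{InitialCredit, MoreCreditAfter} =
    persistent_term:get(credit_flow_default_credit).
```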
2024-05-31 16:20:51 +02:00
Jean-Sébastien Pédron a33768471f
Merge pull request #10088 from rabbitmq/convert-rabbit_writer-to-gen_server
rabbit_writer: Convert to a regular gen_server
2024-05-27 10:24:32 +02:00
Michael Klishin ca094402a9 Make rabbit_misc:which_applications/0 more resilient
In certain shutdown scenarios this function on
Erlang 26 runs into exceptions that stem from
application_controller.

The objective of this function is to be
an exception-safe version of
application:which_applications/1, so let's
handle more cases.

This helps certain test suites avoid exceptions
(process crash reports) logged during shutdown,
which makes CT helpers fail test run even
though there were no exceptions in RabbitMQ
itself, and all the exception indicates is a
certain edge case (during system shutdown)
that application_controller does not care to handle.
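
A minimal sketch of the exception-safe wrapper idea (the actual implementation is `rabbit_misc:which_applications/0` and handles more cases):

```erlang
%% Hedged sketch: swallow application_controller exceptions during
%% shutdown and return an empty list instead of crashing the caller.
which_applications() ->
    try
        application:which_applications()
    catch
        _Class:_Reason -> []
    end.
```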
2024-05-24 19:56:58 -04:00
Jean-Sébastien Pédron 7849f143d8
rabbit_writer: Convert to a regular gen_server
[Why]
This process failed to implement properly the OTP principles. For
instance, the mainloop always kept a reference on the module because it
was not tail-recursive.

This prevents the module from being reloaded at runtime: because the
process always keep that reference on the module, it is killed by the
Code server as part of the code reloading.
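
To illustrate the [Why], a hedged sketch of such a loop (hypothetical `handle/2` and `cleanup/1`; the point is only that the recursive call is not in tail position):

```erlang
%% Because more work follows the recursive call, every iteration leaves
%% a stack frame that references this module's code, so the code server
%% kills the process on the second code load instead of letting it
%% migrate to the new code.
loop(State) ->
    NewState = receive Msg -> handle(Msg, State) end,
    loop(NewState),      %% NOT a tail call because of the line below
    cleanup(NewState).   %% keeps a continuation into the old module
```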
2024-05-24 11:47:25 +02:00
Jean-Sébastien Pédron 643fd53d88
rabbitmq-early-test.mk: Use same Dialyzer options as in Bazel 2024-05-24 11:06:11 +02:00
Michal Kuratczyk cfa3de4b2b
Remove unused imports (thanks elp!) 2024-05-23 16:36:08 +02:00
Rin Kuryloski fef064c906 Reduce inefficiency introduced in b636f41
Use "ERL_LIBS =" instead of "ERL_LIBS :=" so that $(shell ... is not
called so often
2024-05-22 12:31:55 +02:00
Roberto Aloi 04800b9cdf Wrap maybe function in quotes, so it is treated as an atom 2024-05-16 14:04:27 +02:00
Jean-Sébastien Pédron 3147ab7d47
rabbit_peer_discovery: Allow backends to select the node to join themselves
[Why]
Before, the backend would always return a list of nodes and the
subsystem would select one based on their uptimes, the nodes they are
already clustered with, and the readiness of their database.

This works well in general but has some limitations. For instance with
the Consul backend, the discoverability of nodes depends on when each
one registered and in which order. Therefore, the node with the highest
uptime might not be the first that registers. In this case, the one that
registers first will only discover itself and boot as a standalone node.
However, the one with the highest uptime that registered after will
discover both nodes. It will then select itself as the node to join
because it has the highest uptime. In the end both nodes form distinct
clusters.

Another example is the Kubernetes backend. The current solution works
fine but it could be optimized: the backend knows we always want to join
the first node ("$node-0") regardless of the order in which they are
started because picking the first node alphabetically is fine.

Therefore we want to let the backend select the node to join if it
wants to.

[How]
The `list_nodes()` callback can now return the following term:

    {ok, {SelectedNode :: node(), NodeType}}

If the subsystem sees this return value, it will consider that the
returned node is the one to join. It will still query properties because
we want to make sure the node's database is ready before joining it.
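
A hedged sketch of a backend callback using the new return shape (node name and node type below are illustrative only):

```erlang
%% Backend selects the node to join itself, per the contract above.
list_nodes() ->
    %% e.g. a Kubernetes-style backend can always join the first pod:
    {ok, {'rabbit@my-rabbit-server-0', disc}}.
```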
2024-05-14 09:40:44 +02:00
Jean-Sébastien Pédron cb9f0d8a44
rabbit_peer_discovery: Register node before running discovery
[Why]
The two backends that use registration are Consul and etcd. The
discovery process relies on the registered nodes: they return whatever
was previously registered.

With the new checks and failsafes added in peer discovery in RabbitMQ
3.13.0, the fact that registration happens after running discovery
breaks the Consul and etcd backends.

It used to work before because the first node would eventually time out
waiting for a non-empty list of nodes from the backend and proceed as a
standalone node, registering itself on the way. Following nodes would
then discover that first node.

Among the new checks, the node running discovery expects to find itself
in the list of discovered nodes. Because it didn't register yet, it will
never find itself.

[How]
The solution is to register first, then run discovery. The node should
at least get itself in the discovered nodes.
2024-05-14 09:40:44 +02:00
Rin Kuryloski b636f41ef2 Allow adding rabbitmqctl to PLT_APPS 2024-04-29 12:47:10 +02:00
Rin Kuryloski ff6713d13d Fix deps for rabbit_common in make
so that dialyze passes
2024-04-18 19:11:01 +02:00
David Ansari 390d5715a0 Introduce new AMQP 1.0 address format
## What?
Introduce a new address format (let's call it v2) for AMQP 1.0 source and target addresses.

The old format (let's call it v1) is described in
https://github.com/rabbitmq/rabbitmq-server/tree/v3.13.x/deps/rabbitmq_amqp1_0#routing-and-addressing

The only v2 source address format is:
```
/queue/:queue
```

The 4 possible v2 target addresses formats are:
```
/exchange/:exchange/key/:routing-key
/exchange/:exchange
/queue/:queue
<null>
```
where the last AMQP <null> value format requires that each message’s `to` field contains one of:
```
/exchange/:exchange/key/:routing-key
/exchange/:exchange
/queue/:queue
```

 ## Why?

The AMQP address v1 format comes with the following flaws:

1. Obscure address format:

Without reading the documentation, the differences between, for example, source addresses
```
/amq/queue/:queue
/queue/:queue
:queue
```
are unknown to users. Hence, the address format is obscure.

2. Implicit creation of topologies

Some address formats implicitly create queues (and bindings), such as source address
```
/exchange/:exchange/:binding-key
```
or target address
```
/queue/:queue
```
These queues and bindings are never deleted (by the AMQP 1.0 plugin).
Implicit creation of such topologies is also obscure.

3. Redundant address formats

```
/queue/:queue
:queue
```
have the same meaning and are therefore redundant.

4. Properties section must be parsed to determine whether a routing key is present

Target address
```
/exchange/:exchange
```
requires RabbitMQ to parse the properties section in order to check whether the message `subject` is set.
If `subject` is not set, the routing key will default to the empty string.

5. Using `subject` as routing key misuses the purpose of this field.

According to the AMQP spec, the message `subject` field's purpose is:
> A common field for summary information about the message content and purpose.

6. Exchange names, queue names and routing keys must not contain the "/" (slash) character.

The current 3.13 implementation splits on "/", disallowing these
characters in exchange and queue names and in routing keys, which is
unnecessarily prohibitive.

7. Clients must create a separate link per target exchange

While this is a reasonable working assumption, there might be rare use
cases where it could make sense to create many exchanges (e.g. 1
exchange per queue, see
https://github.com/rabbitmq/rabbitmq-server/discussions/10708) and have
a single application publish to all these exchanges.
With the v1 address format, for an application to send to 500 different
exchanges, it needs to create 500 links.

Due to these disadvantages and thanks to #10559 which allows clients to explicitly create topologies,
we can create a simpler, clearer, and better v2 address format.

 ## How?

 ### Design goals

Following the 7 cons from v1, the design goals for v2 are:
1. The address format should be simple so that users have a chance to
   understand the meaning of the address without necessarily consulting the docs.
2. The address format should not implicitly create queues, bindings, or exchanges.
   Instead, topologies should be created either explicitly via the new management node
   prior to link attachment (see #10559), or in future, we might support the `dynamic`
   source or target properties so that RabbitMQ creates queues dynamically.
3. No redundant address formats.
4. The target address format should explicitly state whether the routing key is present, empty,
   or will be provided dynamically in each message.
5. `Subject` should not be used as routing key. Instead, a better
   fitting field should be used.
6. Exchange names, queue names, and routing keys should be allowed to contain
   valid UTF-8 encoded data including the "/" character.
7. Allow both target exchange and routing key to be dynamically provided within each message.

Furthermore
8. v2 must co-exist with v1 for at least some time. Applications should be able to upgrade to
   RabbitMQ 4.0 while continuing to use v1. Examples include AMQP 1.0 shovels and plugins communicating
   between a 4.0 and a 3.13 cluster. Starting with 4.1, we should change the AMQP 1.0 shovel and plugin clients
   to use only the new v2 address format. This will allow AMQP 1.0 and plugins to communicate between a 4.1 and 4.2 cluster.
   We will deprecate v1 in 4.0 and remove support for v1 in a later 4.x version.

 ### Additional Context

The address is usually a String, but can be of any type.

The [AMQP Addressing extension](https://docs.oasis-open.org/amqp/addressing/v1.0/addressing-v1.0.html)
suggests that addresses are URIs and are therefore hierarchical and could even contain query parameters:
> An AMQP address is a URI reference as defined by RFC3986.

> the path expression is a sequence of identifier segments that reflects a path through an
> implementation specific relationship graph of AMQP nodes and their termini.
> The path expression MUST resolve to a node’s terminus in an AMQP container.

The [Using the AMQP Anonymous Terminus for Message Routing Version 1.0](https://docs.oasis-open.org/amqp/anonterm/v1.0/cs01/anonterm-v1.0-cs01.html)
extension allows for the target being `null` and the `To` property to contain the node address.
This corresponds to AMQP 0.9.1 where clients can send each message on the same channel to a different `{exchange, routing-key}` destination.

The following v2 address formats will be used.

 ### v2 addresses

A new deprecated feature flag `amqp_address_v1` will be introduced in 4.0 which is permitted by default.
Starting with 4.1, we should change the AMQP 1.0 shovel and plugin AMQP 1.0 clients to use only the new v2 address format.
However, 4.1 server code must still understand the 4.0 AMQP 1.0 shovel and plugin AMQP 1.0 clients’ v1 address format.
The new deprecated feature flag will therefore be denied by default in 4.2.
This allows AMQP 1.0 shovels and plugins to work between
* 4.0 and 3.13 clusters using v1
* 4.1 and 4.0 clusters using v2 from 4.1 to v4.0 and v1 from 4.0 to 4.1
* 4.2 and 4.1 clusters using v2

without having to support both v1 and v2 at the same time in the AMQP 1.0 shovel and plugin clients.
While supporting both v1 and v2 in these clients is feasible, it's simpler to switch the client code directly from v1 to v2.

 ### v2 source addresses

The source address format is
```
/queue/:queue
```
If the deprecated feature flag `amqp_address_v1` is permitted and the queue does not exist, the queue will be auto-created.
If the deprecated feature flag `amqp_address_v1` is denied, the queue must exist.

 ### v2 target addresses

v1 requires attaching a new link for each destination exchange.
v2 will allow dynamic `{exchange, routing-key}` combinations for a given link.
v2 therefore allows for the rare use cases where a single AMQP 1.0 publisher app needs to send to many different exchanges.
Setting up a link per destination exchange could be cumbersome.
Hence, v2 will support the dynamic `{exchange, routing-key}` combinations of AMQP 0.9.1.
To achieve this, we make use of the "Anonymous Terminus for Message Routing" extension:
The target address will contain the AMQP value null.
The `To` field in each message must be set and contain either address format
```
/exchange/:exchange/key/:routing-key
```
or
```
/exchange/:exchange
```
when using the empty routing key.

The `to` field requires an address type and is better suited than the `subject` field.

Note that each message will contain this `To` value for the anonymous terminus.
Hence, we want to save some bytes being sent across the network and stored on disk.
Using a format
```
/e/:exchange/k/:routing-key
```
saves more bytes, but is too obscure.
However, we use `/key/` instead of `/routing-key/` to save a few bytes.
This also simplifies the format because users don’t have to remember whether to spell it `routing-key`, `routing_key`, or `routingkey`.

The other allowed target address formats are:
```
/exchange/:exchange/key/:routing-key
```
where exchange and routing key are static on the given link.

```
/exchange/:exchange
```
where exchange and routing key are static on the given link, and routing key will be the empty string (useful for example for the fanout exchange).

```
/queue/:queue
```
This provides RabbitMQ beginners the illusion of sending a message directly
to a queue without having to understand what exchanges and routing keys are.
If the deprecated feature flag `amqp_address_v1` is permitted and the queue does not exist, the queue will be auto-created.
If the deprecated feature flag `amqp_address_v1` is denied, the queue must exist.
Besides the additional queue existence check, this queue target is different from
```
/exchange//key/:queue
```
in that queue specific optimisations might be done (in future) by RabbitMQ
(for example different receiving queue types could grant different amounts of link credits to the sending clients).
A write permission check to the amq.default exchange will be performed nevertheless.

v2 will prohibit the v1 static link & dynamic routing-key combination
where the routing key is sent in the message `subject` as that’s also obscure.
For this use case, v2’s new anonymous terminus can be used where both exchange and routing key are defined in the message’s `To` field.

(The bare message must not be modified because it could be signed.)

The alias format
```
/topic/:topic
```
will also be removed.
Sending to topic exchanges is arguably an advanced feature.
Users can directly use the format
```
/exchange/amq.topic/key/:topic
```
which reduces the number of redundant address formats.

 ### v2 address format reference

To sum up (and as stated at the top of this commit message):

The only v2 source address format is:
```
/queue/:queue
```

The 4 possible v2 target addresses formats are:
```
/exchange/:exchange/key/:routing-key
/exchange/:exchange
/queue/:queue
<null>
```
where the last AMQP <null> value format requires that each message’s `to` field contains one of:
```
/exchange/:exchange/key/:routing-key
/exchange/:exchange
/queue/:queue
```

Hence, all 8 listed design goals are reached.
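
For illustration, a naive parser for the v2 target formats might look like this (a hedged sketch, not the actual RabbitMQ parser; it also ignores the edge case of names that themselves contain `/key/`, which the real implementation has to handle):

```erlang
parse_target(undefined) ->
    %% anonymous terminus: the {exchange, routing-key} pair comes from
    %% each message's `to` field instead
    anonymous;
parse_target(<<"/exchange/", Rest/binary>>) ->
    case binary:split(Rest, <<"/key/">>) of
        [Exchange, Key] -> {exchange, Exchange, Key};
        [Exchange]      -> {exchange, Exchange, <<>>} %% empty routing key
    end;
parse_target(<<"/queue/", Queue/binary>>) ->
    {queue, Queue}.
```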
2024-04-05 12:22:02 +02:00
David Ansari 8a3f3c6f34 Enable AMQP 1.0 clients to manage topologies
## What?

* Allow AMQP 1.0 clients to dynamically create and delete RabbitMQ
  topologies (exchanges, queues, bindings).
* Provide an Erlang AMQP 1.0 client that manages topologies.

 ## Why?

Today, RabbitMQ topologies can be created via:
* [Management HTTP API](https://www.rabbitmq.com/docs/management#http-api)
  (including Management UI and
  [messaging-topology-operator](https://github.com/rabbitmq/messaging-topology-operator))
* [Definition Import](https://www.rabbitmq.com/docs/definitions#import)
* AMQP 0.9.1 clients

Up to RabbitMQ 3.13 the RabbitMQ AMQP 1.0 plugin auto creates queues
and bindings depending on the terminus [address
format](https://github.com/rabbitmq/rabbitmq-server/tree/v3.13.x/deps/rabbitmq_amqp1_0#routing-and-addressing).

Such implicit creation of topologies is limiting and obscure.
For some address formats, queues will be created, but not deleted.

Some of RabbitMQ's success is due to its flexible routing topologies
that AMQP 0.9.1 clients can create and delete dynamically.

This commit allows dynamic management of topologies for AMQP 1.0 clients.
This commit builds on top of Native AMQP 1.0 (PR #9022) and will be
available in RabbitMQ 4.0.

 ## How?

This commits adds the following management operations for AMQP 1.0 clients:
* declare queue
* delete queue
* purge queue
* bind queue to exchange
* unbind queue from exchange
* declare exchange
* delete exchange
* bind exchange to exchange
* unbind exchange from exchange

Hence, at least the AMQP 0.9.1 management operations are supported for
AMQP 1.0 clients.

In addition the operation
* get queue

is provided which - similar to `declare queue` - returns queue
information including the current leader and replicas.
This allows clients to publish or consume locally on the node that hosts
the queue.

Compared to AMQP 0.9.1 whose commands and command fields are fixed, the
new AMQP Management API is extensible: New operations and new fields can
easily be added in the future.

There are different design options how management operations could be
supported for AMQP 1.0 clients:
1. Use a special exchange type as done in https://github.com/rabbitmq/rabbitmq-management-exchange
  This has the advantage that any protocol client (e.g. also STOMP clients) could
  dynamically manage topologies. However, a special exchange type is the wrong abstraction.
2. Clients could send "special" messages with special headers that the broker interprets.

This commit opts for a variation of the 2nd option, using a more
standardized way by re-using a subset of the following latest AMQP 1.0 extension
specifications:
* [AMQP Request-Response Messaging with Link Pairing Version 1.0 - Committee Specification 01](https://docs.oasis-open.org/amqp/linkpair/v1.0/cs01/linkpair-v1.0-cs01.html) (February 2021)
* [HTTP Semantics and Content over AMQP Version 1.0 - Working Draft 06](https://groups.oasis-open.org/higherlogic/ws/public/document?document_id=65571) (July 2019)
* [AMQP Management Version 1.0 - Working Draft 16](https://groups.oasis-open.org/higherlogic/ws/public/document?document_id=65575) (July 2019)

An important goal is to keep the interaction between AMQP 1.0 client and RabbitMQ
simple to increase usage, development and adoptability of future RabbitMQ AMQP 1.0
client library wrappers.

The AMQP 1.0 client has to create a link pair to the special `/management` node.
This allows the client to send and receive from the management node.
Similar to AMQP 0.9.1, there is no need for a reply queue since the reply
will be sent directly to the client.

Requests and responses are modelled via HTTP, but sent via AMQP using
the `HTTP Semantics and Content over AMQP` extension (henceforth `HTTP
over AMQP` extension).

This commit tries to follow the `HTTP over AMQP` extension as much as
possible but deviates where this draft spec doesn't make sense.

The projected mode §4.1 is used as opposed to tunneled mode §4.2.
A named relay `/management` is used (§6.3) where the message field `to` is the URL.

Deviations are
* §3.1 mandates that URIs are not encoded in an AMQP message.
  However, we percent encode URIs in the AMQP message. Otherwise there
  would, for example, be no way to distinguish a `/` in a queue name from
  the URI path separator `/` (see the sketch after this list).
* §4.1.4 mandates a data section. This commit uses an amqp-value section
  as it's a better fit given that the content is AMQP encoded data.
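
To make the first deviation concrete, a small sketch using OTP's `uri_string:quote/1` (available since OTP 25; the exact encoding function used by RabbitMQ may differ, and the endpoint shape below is hypothetical):

```erlang
%% A queue literally named "a/b" must not look like two path segments.
Queue   = <<"a/b">>,
Encoded = uri_string:quote(Queue),        %% <<"a%2Fb">>
Path    = <<"/queues/", Encoded/binary>>. %% hypothetical endpoint shape
```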

Using an HTTP API allows for a common well understood interface and future extensibility.
Instead of re-using the current RabbitMQ HTTP API, this commit uses a
new HTTP API (let's call it v2) which could be used as a future API for
plain HTTP clients.

 ### HTTP API v1

The current HTTP API (let's call it v1) is **not** used since v1
comes with a couple of weaknesses:

1. Deep level of nesting becomes confusing and difficult to manage.

Examples of deep nesting in v1:
```
/api/bindings/vhost/e/source/e/destination/props
/api/bindings/vhost/e/exchange/q/queue/props
```

2. Redundant endpoints returning the same resources

v1 has 9 endpoints to list binding(s):
```
/api/exchanges/vhost/name/bindings/source
/api/exchanges/vhost/name/bindings/destination
/api/queues/vhost/name/bindings
/api/bindings
/api/bindings/vhost
/api/bindings/vhost/e/exchange/q/queue
/api/bindings/vhost/e/exchange/q/queue/props
/api/bindings/vhost/e/source/e/destination
/api/bindings/vhost/e/source/e/destination/props
```

3. Verbs in path names
Path names should be nouns instead.
v1 contains verbs:
```
/api/queues/vhost/name/get
/api/exchanges/vhost/name/publish
```

 ### AMQP Management extension

Only few aspects of the AMQP Management extension are used.

The central idea of the AMQP management spec is **dynamic discovery** such that broker independent AMQP 1.0
clients can discover objects, types, operations, and HTTP endpoints of specific brokers.
In fact, clients are only conformant if:
> All request addresses are dynamically discovered starting from the discovery document.
> A requesting container MUST NOT use fixed assumptions about the addressing structure of the management API.

While this is a nice and powerful idea, no AMQP 1.0 client and no AMQP 1.0 server implement the
latest AMQP 1.0 management spec from 2019, partly presumably due to its complexity.
Therefore, the idea of such dynamic discovery has failed to be implemented in practice.

The AMQP management spec mandates that the management endpoint returns a discovery document containing
broker specific collections, types, configuration, and operations including their endpoints.

The API endpoints of the AMQP management spec are therefore all designed around dynamic discovery.

For example, to create either a queue or an exchange, the client has to
```
POST /$management/entities
```
which shows that the entities collection acts as a generic factory, see section 2.2.
The server will then create the resource and reply with a location header containing a URI pointing to the resource.
For RabbitMQ, we don’t need such a generic factory to create queues or exchanges.

To list bindings for a queue Q1, the spec suggests
```
GET /$management/Queues/Q1/$management/entities
```
which again shows the generic entities endpoint as well as a `$management` endpoint under Q1 to
allow a queue to return a discovery document.
For RabbitMQ, we don’t need such generic endpoints and discovery documents.

Given we aim for our own thin RabbitMQ AMQP 1.0 client wrapper libraries which expose
the RabbitMQ model to the developer, we can directly use fixed HTTP endpoint assumptions
in our RabbitMQ specific libraries.

This is by far simpler than using the dynamic endpoints of the management spec.
Simplicity leads to higher adoption and enables more developers to write RabbitMQ AMQP 1.0 client
library wrappers.

The AMQP Management extension also suffers from deep levels of nesting in
paths. Examples:
```
/$management/Queues/Q1/$management/entities
/$management/Queues/Q1/Bindings/Binding1
```
as well as verbs in path names: Section 7.1.4 suggests using verbs in path names,
for example “purge”, due to the dynamic operations discovery document.

 ### HTTP API v2

This commit introduces a new HTTP API v2 following best practices.
It could serve as a future API for plain HTTP clients.

This commit and RabbitMQ 4.0 will only implement a minimal set of
HTTP API v2 endpoints and only for HTTP over AMQP.
In other words, the existing HTTP API v1 Cowboy handlers will continue to be
used for all plain HTTP requests in RabbitMQ 4.0 and will remain untouched for RabbitMQ 4.0.
Over time, after 4.0 shipped, we could ship a pure HTTP API implementation for HTTP API v2.
Hence, the new HTTP API v2 endpoints for HTTP over AMQP should be designed such that they
can be re-used in the future for a pure HTTP implementation.

The minimal set of endpoints for RabbitMQ 4.0 are:

```
GET / PUT / DELETE
/vhosts/:vhost/queues/:queue
```
read, create, delete a queue

```
DELETE
/vhosts/:vhost/queues/:queue/messages
```
purges a queue

```
GET / DELETE
/vhosts/:vhost/bindings/:binding
```
read, delete bindings
where `:binding` is a binding ID encoded as the following path segment:
```
src=e1;dstq=q2;key=my-key;args=
```
The binding arguments field `args` has an empty value by default, i.e. there are no binding arguments.
If the binding includes binding arguments, `args` will be an Erlang portable term hash
provided by the server similar to what’s provided in HTTP API v1 today.
Alternatively, we could use an arguments scheme of:
```
args=k1,utf8,v1&k2,uint,3
```
However, such a scheme leads to long URIs when there are many binding arguments.
Note that it’s perfectly fine for URI producing applications to include URI
reserved characters `=` / `;` / `,` / `$` in a path segment.

To create a binding, the client therefore needs to POST to a bindings factory URI:
```
POST
/vhosts/:vhost/bindings
```

To list all bindings between a source exchange e1 and destination exchange e2 with binding key k1:
```
GET
/vhosts/:vhost/bindings?src=e1&dste=e2&key=k1
```

This endpoint will be called by the RabbitMQ AMQP 1.0 client library to unbind a
binding with non-empty binding arguments to get the binding ID before invoking a
```
DELETE
/vhosts/:vhost/bindings/:binding
```

In future, after RabbitMQ 4.0 shipped, new API endpoints could be added.
The following is up for discussion and is only meant to show the clean and simple design of HTTP API v2.

Bindings endpoint can be queried as follows:

to list all bindings for a given source exchange e1:
```
GET
/vhosts/:vhost/bindings?src=e1
```

to list all bindings for a given destination queue q1:
```
GET
/vhosts/:vhost/bindings?dstq=q1
```

to list all bindings between a source exchange e1 and destination queue q1:
```
GET
/vhosts/:vhost/bindings?src=e1&dstq=q1
```

multiple bindings between source exchange e1 and destination queue q1 could be deleted at once as follows:
```
DELETE /vhosts/:vhost/bindings?src=e1&dstq=q1
```

GET could be supported globally across all vhosts:
```
/exchanges
/queues
/bindings
```

Publish a message:
```
POST
/vhosts/:vhost/queues/:queue/messages
```

Consume or peek a message (depending on query parameters):
```
GET
/vhosts/:vhost/queues/:queue/messages
```

Note that the AMQP 1.0 client omits the `/vhosts/:vhost` path prefix.
Since an AMQP connection belongs to a single vhost, there is no need to
additionally include the vhost in every HTTP request.

Pros of HTTP API v2:

1. Low level of nesting

Queues, exchanges, bindings are top level entities directly under vhosts.
Although the HTTP API doesn’t have to reflect how resources are stored in the database,
v2 does nicely reflect the Khepri tree structure.

2. Nouns instead of verbs
HTTP API v2 is very simple to read and understand as shown by
```
POST    /vhosts/:vhost/queues/:queue/messages	to post messages, i.e. publish to a queue.
GET     /vhosts/:vhost/queues/:queue/messages	to get messages, i.e. consume or peek from a queue.
DELETE  /vhosts/:vhost/queues/:queue/messages	to delete messages, i.e. purge a queue.
```

A separate new HTTP API v2 allows us to ship only handlers for HTTP over AMQP for RabbitMQ 4.0
and therefore move faster while still keeping the option on the table to re-use the new v2 API
for pure HTTP in the future.
In contrast, re-using the HTTP API v1 for HTTP over AMQP is possible, but dirty because separate handlers
(HTTP over AMQP and pure HTTP) replying differently will be needed for the same v1 endpoints.
2024-03-28 11:36:56 +01:00
Rin Kuryloski 70029d42c6 Commit the amqp framing generated sources
These files seem to generate incorrectly on Windows due to recent rules_python changes, and since they change rarely, it seems reasonable to commit them. The Bazel build automatically generates tests to ensure that the files are up to date.
2024-03-20 13:31:34 +01:00
Jean-Sébastien Pédron c298a02574
rabbit_env_SUITE: Wait for application env. to be ready
[Why]
If the testcase runs between the time the node is started and the time
the `rabbit` application environment is loaded, it will fail.

[How]
Instead of waiting for the node to be reachable only, we also wait for
the `rabbit` application environment to be filled.
2024-03-18 16:39:35 +01:00
Jean-Sébastien Pédron c6992d0250
rabbit_env_SUITE: Use a large timeout in `check_values_from_reachable_remote_node()`
[Why]
It seems to fail frequently in CI.
2024-03-12 13:16:46 +01:00
Jean-Sébastien Pédron 418ac04ab8
rabbit_env: Correctly pass `TakeFromRemoteNode` argument to update_context/3
[Why]
The given `TakeFromRemoteNode` argument was unpacked and another tuple
was created to pass to `update_context/3`. However, the constructed
tuple would be:

    {{Node, Timeout}, Timeout}

... which is incorrect.
2024-03-12 13:14:40 +01:00
David Ansari 6ff3e17db5 Fix warning
```
matching on the float 0.0 will no longer also match -0.0 in OTP 27. If you specifically intend to match 0.0 alone, write +0.0 instead
```

such that
```
bazel build //:package-generic-unix --test_build
```
succeeds when building with OTP 26.1.2
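
The class of fix, sketched with a hypothetical function (under OTP 27 semantics, `0.0` in a pattern matches positive zero only):

```erlang
%% Hedged sketch: be explicit about which zero(s) a clause should match.
is_zero(+0.0) -> true;   %% positive zero only
is_zero(-0.0) -> true;   %% match negative zero explicitly if intended
is_zero(_)    -> false.
```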
2024-02-28 15:39:12 +01:00
David Ansari 8cb313d5a1 Support AMQP 1.0 natively
## What

Similar to Native MQTT in #5895, this commit implements Native AMQP 1.0.
By "native", we mean do not proxy via AMQP 0.9.1 anymore.

  ## Why

Native AMQP 1.0 comes with the following major benefits:
1. Similar to Native MQTT, this commit provides better throughput, latency,
   scalability, and resource usage for AMQP 1.0.
   See https://blog.rabbitmq.com/posts/2023/03/native-mqtt for native MQTT improvements.
   See further below for some benchmarks.
2. Since AMQP 1.0 is not limited anymore by the AMQP 0.9.1 protocol,
   this commit allows implementing more AMQP 1.0 features in the future.
   Some features are already implemented in this commit (see next section).
3. Simpler, better understandable, and more maintainable code.

Native AMQP 1.0 as implemented in this commit has the
following major benefits compared to AMQP 0.9.1:
4. Memory and disk alarms will only stop accepting incoming TRANSFER frames.
   New connections can still be created to consume from RabbitMQ to empty queues.
5. Due to 4., there is no need anymore for separate connections for publishers
   and consumers, as we currently recommend for AMQP 0.9.1, which potentially
   halves the number of physical TCP connections.
6. When a single connection sends to multiple target queues, a single
   slow target queue won't block the entire connection.
   Publisher can still send data quickly to all other target queues.
7. A publisher can request whether it wants publisher confirmation on a per-message basis.
   In AMQP 0.9.1 publisher confirms are configured per channel only.
8. Consumers can change their "prefetch count" dynamically which isn't
   possible in our AMQP 0.9.1 implementation. See #10174
9. AMQP 1.0 is an extensible protocol

This commit also fixes dozens of bugs present in the AMQP 1.0 plugin in
RabbitMQ 3.x - most of which cannot be backported due to the complexity
and limitations of the old 3.x implementation.

This commit contains breaking changes and is therefore targeted for RabbitMQ 4.0.

 ## Implementation details

1. Breaking change: With Native AMQP, the behaviour of
```
Convert AMQP 0.9.1 message headers to application properties for an AMQP 1.0 consumer
amqp1_0.convert_amqp091_headers_to_app_props = false | true (default false)
Convert AMQP 1.0 Application Properties to AMQP 0.9.1 headers
amqp1_0.convert_app_props_to_amqp091_headers = false | true (default false)
```
will break because we always convert according to the message container conversions.
For example, AMQP 0.9.1 x-headers will go into message-annotations instead of application properties.
Also, `false` won’t be respected since we always convert the headers with message containers.

2. Remove rabbit_queue_collector

rabbit_queue_collector is responsible for synchronously deleting
exclusive queues. Since the AMQP 1.0 plugin never creates exclusive
queues, rabbit_queue_collector doesn't need to be started in the first
place. This will save 1 Erlang process per AMQP 1.0 connection.

3. 7 processes per connection + 1 process per session in this commit instead of
   7 processes per connection + 15 processes per session in 3.x
Supervision hierarchy got re-designed.

4. Use 1 writer process per AMQP 1.0 connection
AMQP 0.9.1 uses a separate rabbit_writer Erlang process per AMQP 0.9.1 channel.
Prior to this commit, AMQP 1.0 used a separate rabbit_amqp1_0_writer process per AMQP 1.0 session.
Advantage of single writer proc per session (prior to this commit):
* High parallelism for serialising packets if multiple sessions within
  a connection write heavily at the same time.

This commit uses a single writer process per AMQP 1.0 connection that is
shared across all AMQP 1.0 sessions.
Advantages of single writer proc per connection (this commit):
* Lower memory usage with hundreds of thousands of AMQP 1.0 sessions
* Less TCP and IP header overhead given that the single writer process
  can accumulate across all sessions bytes before flushing the socket.

In other words, this commit decides that a reader / writer process pair
per AMQP 1.0 connection is good enough for bi-directional TRANSFER flows.
Having a writer per session is too heavy.
We still ensure high throughput by having separate reader, writer, and
session processes.

5. Transform rabbit_amqp1_0_writer into gen_server
Why:
Prior to this commit, when clicking on the AMQP 1.0 writer process in
observer, the process crashed.
Instead of handling all these debug messages of the sys module, it's better
to implement a gen_server.
There is no advantage of using a special OTP process over gen_server
for the AMQP 1.0 writer.
gen_server also provides cleaner format status output.

How:
Message callbacks return a timeout of 0.
After all messages in the inbox are processed, the timeout message is
handled by flushing any pending bytes.

6. Remove stats timer from writer
AMQP 1.0 connections haven't emitted any stats previously.

7. When there are contiguous queue confirmations in the session process
mailbox, batch them. When the confirmations are sent to the publisher, a
single DISPOSITION frame is sent for contiguously confirmed delivery
IDs.
This approach should be good enough; a sketch of the range coalescing
follows the list below. However, it's suboptimal in scenarios where
contiguous delivery IDs that need confirmations are rare, for example:
* There are multiple links in the session with different sender
  settlement modes and the sender publishes across these links in an
  interleaved fashion.
* The sender settlement mode is mixed and the sender interleaves settled
  and unsettled TRANSFERs.
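
A minimal sketch of the range coalescing, assuming a sorted list of delivery IDs and ignoring serial-number wraparound for brevity (illustrative, not the actual session code):
```erlang
%% ranges([1,2,3,7,8]) -> [{1,3},{7,8}]: one DISPOSITION frame per range.
ranges([]) ->
    [];
ranges([Id | Ids]) ->
    lists:reverse(ranges(Ids, Id, Id, [])).

ranges([Id | Ids], First, Last, Acc) when Id =:= Last + 1 ->
    ranges(Ids, First, Id, Acc);                 %% still contiguous: extend
ranges([Id | Ids], First, Last, Acc) ->
    ranges(Ids, Id, Id, [{First, Last} | Acc]);  %% gap: close current range
ranges([], First, Last, Acc) ->
    [{First, Last} | Acc].
```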

8. Introduce credit API v2
Why:
The AMQP 0.9.1 credit extension which is to be removed in 4.0 was poorly
designed since basic.credit is a synchronous call into the queue process
blocking the entire AMQP 1.0 session process.

How:
Change the interactions between queue clients and queue server
implementations:
* Clients only request a credit reply if the FLOW's `echo` field is set
* Include all link flow control state held by the queue process into a
  new credit_reply queue event:
  * `available` after the queue sends any deliveries
  * `link-credit` after the queue sends any deliveries
  * `drain` which allows us to combine the old queue events
    send_credit_reply and send_drained into a single new queue event
    credit_reply.
* Include the consumer tag into the credit_reply queue event such that
  the AMQP 1.0 session process can process any credit replies
  asynchronously.

Link flow control state `delivery-count` also moves to the queue processes.

The new interactions are hidden behind feature flag credit_api_v2 to
allow for rolling upgrades from 3.13 to 4.0.

9. Use serial number arithmetic in quorum queues and session process; a sketch of such arithmetic follows.
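
A minimal sketch of RFC 1982 serial number arithmetic for 32-bit serial numbers, which AMQP 1.0 mandates for sequence numbers such as delivery-count (illustrative; not the actual helper module):
```erlang
-module(serial_sketch).
-export([add/2, compare/2]).

-define(MOD, 16#100000000).  %% 2^32: serial numbers wrap at this bound

%% Addition wraps around modulo 2^32; N must be below 2^31 per RFC 1982.
add(S, N) when N >= 0, N < ?MOD div 2 ->
    (S + N) rem ?MOD.

%% Comparison per RFC 1982, section 3.2.
compare(S, S) ->
    equal;
compare(S1, S2) ->
    Half = ?MOD div 2,
    case (S1 < S2 andalso S2 - S1 < Half) orelse
         (S1 > S2 andalso S1 - S2 > Half) of
        true  -> less;
        false -> greater
    end.
```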

10. Completely bypass the rabbit_limiter module for AMQP 1.0
flow control. The goal is to eventually remove the rabbit_limiter module
in 4.0 since AMQP 0.9.1 global QoS will be unsupported in 4.0. This
commit lifts the AMQP 1.0 link flow control logic out of rabbit_limiter
into rabbit_queue_consumers.

11. Fix credit bug for streams:
AMQP 1.0 settlements shouldn't top up link credit,
only FLOW frames should top up link credit.

12. Allow sender settle mode unsettled for streams
since AMQP 1.0 acknowledgements to streams are no-ops (currently).

13. Fix AMQP 1.0 client bugs
Auto-renewing credits should not be tied to settling TRANSFERs.
Remove the field link_credit_unsettled as it was wrong and confusing.
Prior to this commit, auto renewal did not work when the sender used
sender settlement mode settled.

14. Fix AMQP 1.0 client bugs
A stale, outdated Link was passed to function auto_flow/2.

15. Use osiris chunk iterator
Only hold messages of uncompressed sub batches in memory if the consumer
doesn't have sufficient credits.
Compressed sub batches are skipped for consumers not using the Stream protocol.

16. Fix incoming link flow control
Always use confirms between AMQP 1.0 queue clients and queue servers.
As already done internally by rabbit_fifo_client and
rabbit_stream_queue, use confirms for classic queues as well.

17. Include link handle into correlation when publishing messages to target queues
such that session process can correlate confirms from target queues to
incoming links.

18. Only grant more credits to a publisher if the publisher no longer has
sufficient credits and there are not too many unconfirmed messages on the
link; see the sketch below.
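
A sketch of this condition; threshold names and values are illustrative, not the actual session code:
```erlang
-define(LINK_CREDIT, 128).      %% illustrative: credits granted per top-up
-define(MAX_UNCONFIRMED, 256).  %% illustrative: unconfirmed-message cap

%% Top up only when the publisher is running low on credits and the
%% number of unconfirmed messages on the link is still acceptable.
maybe_grant(Credit, Unconfirmed)
  when Credit =< ?LINK_CREDIT div 2,
       Unconfirmed < ?MAX_UNCONFIRMED ->
    {grant, ?LINK_CREDIT - Credit};
maybe_grant(_Credit, _Unconfirmed) ->
    no_grant.
```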

19. Completely ignore `block` and `unblock` queue actions and RabbitMQ credit flow
between classic queue process and session process.

20. Link flow control is independent between links.
A client can refer to a queue, or to an exchange with multiple
dynamically added target queues. Multiple incoming links can also fan
in to the same queue. Whatever the link topology looks like, this
commit ensures that each link is only granted more credits if that link
isn't overloaded.

21. A connection or a session can send to many different queues.
In AMQP 0.9.1, a single slow queue leads to the entire channel, and
then the entire connection, being blocked.
This commit makes sure that a single slow queue on one link won't slow
down sending on other links.
For example, with link A sending to a local classic queue and
link B sending to a quorum queue with 5 replicas, credits on link B
will naturally be topped up more slowly than on link A. So, despite the
quorum queue being slower in confirming messages, the same AMQP 1.0
connection and session can still pump data very fast into the classic queue.

22. If a cluster-wide memory or disk alarm occurs,
each session sends a FLOW with incoming-window set to 0 to the sending client.
If a sending client doesn't obey, the client is forcibly disconnected.

When the cluster-wide alarm clears,
each session resumes with a FLOW restoring the initial incoming-window.

23. All operations apart from publishing TRANSFERs to RabbitMQ can continue during cluster-wide alarms;
specifically, attaching consumers and consuming, i.e. emptying queues.
There is no need for separate AMQP 1.0 connections for publishers and consumers, as recommended for our AMQP 0.9.1 implementation.

24. Flow control summary:
* If a queue becomes the bottleneck, that's solved by slowing down individual sending links (AMQP 1.0 link flow control).
* If a session becomes the bottleneck (more unlikely), that's solved by AMQP 1.0 session flow control.
* If the connection becomes the bottleneck, it naturally won't read fast enough from the socket, causing TCP backpressure to be applied.
RabbitMQ's internal credit-based flow control (i.e. module credit_flow) is used nowhere on the incoming AMQP 1.0 message path.

25. Register AMQP sessions
Prefer local-only pg over our custom pg_local implementation, as
pg is a better process group implementation than pg_local.
pg_local was identified as a bottleneck in tests where many MQTT clients were disconnected at once.

26. Start a local-only pg when Rabbit boots:
> A scope can be kept local-only by using a scope name that is unique cluster-wide, e.g. the node name:
> pg:start_link(node()).
Register AMQP 1.0 connections and sessions with pg; see the example below.

In the future, we should remove pg_local and instead use the new local-only
pg for all registered processes such as AMQP 0.9.1 connections and channels.
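
A minimal example of this scope usage (the group name is illustrative):
```erlang
%% Start a pg scope whose name is unique cluster-wide, so its group
%% membership stays on this node only.
{ok, _} = pg:start_link(node()),
%% Register the calling process and look up local members.
ok = pg:join(node(), amqp_sessions, self()),
Members = pg:get_local_members(node(), amqp_sessions).
```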

27. Requeue messages if a link is detached
Although the spec allows settling delivery IDs on detached links, RabbitMQ does not respect the 'closed'
field of the DETACH frame and therefore handles every DETACH frame as closed. Since the link is closed,
we expect every outstanding delivery to be requeued.
In addition to consumer cancellation, detaching a link therefore causes in-flight deliveries to be requeued.
Note that this behaviour differs from mere consumer cancellation in AMQP 0.9.1:
"After a consumer is cancelled there will be no future deliveries dispatched to it. Note that there can
still be "in flight" deliveries dispatched previously. Cancelling a consumer will neither discard nor requeue them."
[https://www.rabbitmq.com/consumers.html#unsubscribing]
An AMQP receiver can first drain, and then detach, to prevent "in flight" deliveries from being requeued.

28. Init AMQP session with BEGIN frame
Similar to how there can't be an MQTT processor without a CONNECT
frame, there can't be an AMQP session without a BEGIN frame.
This allows having strict dialyzer types for session flow control
fields (i.e. not allowing 'undefined').

29. Move serial_number to AMQP 1.0 common lib
such that it can be used by both AMQP 1.0 server and client

30. Fix AMQP client to do serial number arithmetic.

31. AMQP client: Differentiate between delivery-id and transfer-id for better
understandability.

32. Fix link flow control in classic queues
This commit fixes
```
java -jar target/perf-test.jar -ad false -f persistent -u cq -c 3000 -C 1000000 -y 0
```
followed by
```
./omq -x 0 amqp -T /queue/cq -D 1000000 --amqp-consumer-credits 2
```
Prior to this commit (and on RabbitMQ 3.x), consuming would halt after around
8,000 - 10,000 messages.

The bug was that in flight messages from classic queue process to
session process were not taken into account when topping up credit to
the classic queue process.
Fixes #2597

The solution to this bug (and a much cleaner design anyway independent of
this bug) is that queues should hold all link flow control state including
the delivery-count.

Hence, when credit API v2 is used the delivery-count will be held by the
classic queue process, quorum queue process, and stream queue client
instead of managing the delivery-count in the session.

33. The double level crediting between (a) session process and
rabbit_fifo_client, and (b) rabbit_fifo_client and rabbit_fifo was
removed. Therefore, instead of managing 3 separate delivery-counts (i. session,
ii. rabbit_fifo_client, iii. rabbit_fifo), only 1 delivery-count is used
in rabbit_fifo. This is a big simplification.

34. This commit fixes quorum queues without bumping the machine version
nor introducing new rabbit_fifo commands.

Whether credit API v2 is used is solely determined at link attachment time
depending on whether feature flag credit_api_v2 is enabled.

Even when that feature flag is enabled later on, such a link will
keep using credit API v1 until detached (or until the node is shut down).

Eventually, after feature flag credit_api_v2 has been enabled and a
subsequent rolling upgrade, all links will use credit API v2.

This approach is safe and simple.

The 2 alternatives to move delivery-count from the session process to the
queue processes would have been:

i. Explicit feature flag credit_api_v2 migration function
* Can use a gen_server:call and only finish migration once all delivery-counts were migrated.
Cons:
* Extra new message format just for migration is required.
* Risky as migration will fail if a target queue doesn’t reply.

ii. Session always includes DeliveryCountSnd when crediting to the queue:
Cons:
* 2 delivery-counts would be held simultaneously in the session proc and queue proc;
this could be solved by deleting the session proc's delivery-count upon credit-reply.
* What happens if the receiver doesn't provide credit for a very long time? Is that a problem?

35. Support stream filtering in AMQP 1.0 (by @acogoluegnes)
Use the x-stream-filter-value message annotation
to carry the filter value in a published message.
Use the rabbitmq:stream-filter and rabbitmq:stream-match-unfiltered
filters when creating a receiver that wants to filter
out messages from a stream.

36. Remove credit extension from AMQP 0.9.1 client

37. Support maintenance mode closing AMQP 1.0 connections.

38. Remove AMQP 0.9.1 client dependency from AMQP 1.0 implementation.

39. Move AMQP 1.0 plugin to the core. AMQP 1.0 is enabled by default.
    The old rabbitmq_amqp1_0 plugin will be kept as a no-op plugin to prevent
    failures in deployment tools that execute:
```
rabbitmq-plugins enable rabbitmq_amqp1_0
rabbitmq-plugins disable rabbitmq_amqp1_0
```

40. Breaking change: Remove CLI command `rabbitmqctl list_amqp10_connections`.
Instead, list both AMQP 0.9.1 and AMQP 1.0 connections in `list_connections`:
```
rabbitmqctl list_connections protocol
Listing connections ...
protocol
{1, 0}
{0,9,1}
```

 ## Benchmarks

 ### Throughput & Latency

Setup:
* Single node Ubuntu 22.04
* Erlang 26.1.1

Start RabbitMQ:
```
make run-broker PLUGINS="rabbitmq_management rabbitmq_amqp1_0" FULL=1 RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+S 3"
```

Predeclare durable classic queue cq1, durable quorum queue qq1, durable stream queue sq1.

Start client:
https://github.com/ssorj/quiver
https://hub.docker.com/r/ssorj/quiver/tags (digest 453a2aceda64)
```
docker run -it --rm --add-host host.docker.internal:host-gateway ssorj/quiver:latest
bash-5.1# quiver --version
quiver 0.4.0-SNAPSHOT
```

1. Classic queue
```
quiver //host.docker.internal//amq/queue/cq1 --durable --count 1m --duration 10m --body-size 12 --credit 1000
```

This commit:
```
Count ............................................. 1,000,000 messages
Duration ............................................... 73.8 seconds
Sender rate .......................................... 13,548 messages/s
Receiver rate ........................................ 13,547 messages/s
End-to-end rate ...................................... 13,547 messages/s

Latencies by percentile:

          0% ........ 0 ms       90.00% ........ 9 ms
         25% ........ 2 ms       99.00% ....... 14 ms
         50% ........ 4 ms       99.90% ....... 17 ms
        100% ....... 26 ms       99.99% ....... 24 ms
```

RabbitMQ 3.x (main branch as of 30 January 2024):
```
---------------------- Sender -----------------------  --------------------- Receiver ----------------------  --------
Time [s]      Count [m]  Rate [m/s]  CPU [%]  RSS [M]  Time [s]      Count [m]  Rate [m/s]  CPU [%]  RSS [M]  Lat [ms]
-----------------------------------------------------  -----------------------------------------------------  --------
     2.1        130,814      65,342        6     73.6       2.1          3,217       1,607        0      8.0       511
     4.1        163,580      16,367        2     74.1       4.1          3,217           0        0      8.0         0
     6.1        229,114      32,767        3     74.1       6.1          3,217           0        0      8.0         0
     8.1        261,880      16,367        2     74.1       8.1         67,874      32,296        8      8.2     7,662
    10.1        294,646      16,367        2     74.1      10.1         67,874           0        0      8.2         0
    12.1        360,180      32,734        3     74.1      12.1         67,874           0        0      8.2         0
    14.1        392,946      16,367        3     74.1      14.1         68,604         365        0      8.2    12,147
    16.1        458,480      32,734        3     74.1      16.1         68,604           0        0      8.2         0
    18.1        491,246      16,367        2     74.1      18.1         68,604           0        0      8.2         0
    20.1        556,780      32,767        4     74.1      20.1         68,604           0        0      8.2         0
    22.1        589,546      16,375        2     74.1      22.1         68,604           0        0      8.2         0
receiver timed out
    24.1        622,312      16,367        2     74.1      24.1         68,604           0        0      8.2         0
quiver:  error: PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/cq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-otujr23y' returned non-zero exit status 1.
Traceback (most recent call last):
  File "/usr/local/lib/quiver/python/quiver/pair.py", line 144, in run
    _plano.wait(receiver, check=True)
  File "/usr/local/lib/quiver/python/plano/main.py", line 1243, in wait
    raise PlanoProcessError(proc)
plano.main.PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/cq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-otujr23y' returned non-zero exit status 1.
```

2. Quorum queue:
```
quiver //host.docker.internal//amq/queue/qq1 --durable --count 1m --duration 10m --body-size 12 --credit 1000
```
This commit:
```
Count ............................................. 1,000,000 messages
Duration .............................................. 101.4 seconds
Sender rate ........................................... 9,867 messages/s
Receiver rate ......................................... 9,868 messages/s
End-to-end rate ....................................... 9,865 messages/s

Latencies by percentile:

          0% ....... 11 ms       90.00% ....... 23 ms
         25% ....... 15 ms       99.00% ....... 28 ms
         50% ....... 18 ms       99.90% ....... 33 ms
        100% ....... 49 ms       99.99% ....... 47 ms
```

RabbitMQ 3.x:
```
---------------------- Sender -----------------------  --------------------- Receiver ----------------------  --------
Time [s]      Count [m]  Rate [m/s]  CPU [%]  RSS [M]  Time [s]      Count [m]  Rate [m/s]  CPU [%]  RSS [M]  Lat [ms]
-----------------------------------------------------  -----------------------------------------------------  --------
     2.1        130,814      65,342        9     69.9       2.1         18,430       9,206        5      7.6     1,221
     4.1        163,580      16,375        5     70.2       4.1         18,867         218        0      7.6     2,168
     6.1        229,114      32,767        6     70.2       6.1         18,867           0        0      7.6         0
     8.1        294,648      32,734        7     70.2       8.1         18,867           0        0      7.6         0
    10.1        360,182      32,734        6     70.2      10.1         18,867           0        0      7.6         0
    12.1        425,716      32,767        6     70.2      12.1         18,867           0        0      7.6         0
receiver timed out
    14.1        458,482      16,367        5     70.2      14.1         18,867           0        0      7.6         0
quiver:  error: PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/qq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-b1gcup43' returned non-zero exit status 1.
Traceback (most recent call last):
  File "/usr/local/lib/quiver/python/quiver/pair.py", line 144, in run
    _plano.wait(receiver, check=True)
  File "/usr/local/lib/quiver/python/plano/main.py", line 1243, in wait
    raise PlanoProcessError(proc)
plano.main.PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/qq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-b1gcup43' returned non-zero exit status 1.
```

3. Stream:
```
quiver-arrow send //host.docker.internal//amq/queue/sq1 --durable --count 1m -d 10m --summary --verbose
```

This commit:
```
Count ............................................. 1,000,000 messages
Duration ................................................ 8.7 seconds
Message rate ........................................ 115,154 messages/s
```

RabbitMQ 3.x:
```
Count ............................................. 1,000,000 messages
Duration ............................................... 21.2 seconds
Message rate ......................................... 47,232 messages/s
```

 ### Memory usage

Start RabbitMQ:
```
ERL_MAX_PORTS=3000000 RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+P 3000000 +S 6" make run-broker PLUGINS="rabbitmq_amqp1_0" FULL=1 RABBITMQ_CONFIG_FILE="rabbitmq.conf"
```

```
/bin/cat rabbitmq.conf

tcp_listen_options.sndbuf  = 2048
tcp_listen_options.recbuf  = 2048
vm_memory_high_watermark.relative = 0.95
vm_memory_high_watermark_paging_ratio = 0.95
loopback_users = none
```

Create 50k connections with 2 sessions per connection, i.e. 100k sessions in total:

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/Azure/go-amqp"
)

func main() {
	for i := 0; i < 50000; i++ {
		conn, err := amqp.Dial(context.TODO(), "amqp://nuc", &amqp.ConnOptions{SASLType: amqp.SASLTypeAnonymous()})
		if err != nil {
			log.Fatal("dialing AMQP server:", err)
		}
		_, err = conn.NewSession(context.TODO(), nil)
		if err != nil {
			log.Fatal("creating AMQP session:", err)
		}
		_, err = conn.NewSession(context.TODO(), nil)
		if err != nil {
			log.Fatal("creating AMQP session:", err)
		}
	}
	log.Println("opened all connections")
	time.Sleep(5 * time.Hour)
}
```

This commit:
```
erlang:memory().
[{total,4586376480},
 {processes,4025898504},
 {processes_used,4025871040},
 {system,560477976},
 {atom,1048841},
 {atom_used,1042841},
 {binary,233228608},
 {code,21449982},
 {ets,108560464}]

erlang:system_info(process_count).
450289
```
7 procs per connection + 1 proc per session.
(7 + 2*1) * 50,000 = 450,000 procs

RabbitMQ 3.x:
```
erlang:memory().
[{total,15168232704},
 {processes,14044779256},
 {processes_used,14044755120},
 {system,1123453448},
 {atom,1057033},
 {atom_used,1052587},
 {binary,236381264},
 {code,21790238},
 {ets,391423744}]

erlang:system_info(process_count).
1850309
```
7 procs per connection + 15 per session
(7 + 2*15) * 50,000 = 1,850,000 procs

50k connections + 100k sessions require
with this commit: 4.5 GB
in RabbitMQ 3.x: 15 GB

 ## Future work

1. More efficient parser and serializer
2. TODO in mc_amqp: Do not store the parsed message on disk.
3. Implement both AMQP HTTP extension and AMQP management extension to allow AMQP
clients to create RabbitMQ objects (queues, exchanges, ...).
2024-02-28 14:15:20 +01:00
Michael Klishin c1c2cd371d rabbit_pbe: minor API improvements 2024-02-14 18:29:43 -05:00
Michael Klishin 9c79ad8d55 More missed license header updates #9969 2024-02-05 12:26:25 -05:00
Michael Klishin f414c2d512
More missed license header updates #9969 2024-02-05 11:53:50 -05:00
Michael Davis 46ca3269e3
Check whether EPMD is running before starting it
`rabbit_nodes_common:ensure_epmd/0` unconditionally starts EPMD by
spawning `erl` with nodename options. Spawning `erl` can be quite slow
though (around 250ms for me locally), so we should try to avoid it when
we detect that EPMD is already running.

We can relatively cheaply check whether EPMD is already running with
`net_adm:names/0`, a function that asks the daemon on localhost to list
any registered names, the same as `epmd -names` on the command line. If
we can successfully get the list of names from the daemon then we don't
need to start EPMD. `net_adm:names/0` is relatively cheap compared to
running `erl`, costing around 8ms when EPMD is running and less than a
millisecond when it is not.

This improves the CLI's total run time for commands that read the
`enabled_plugins_file` and `plugins_dir` config options. Those options
try to consult a running node and so they start distribution and ensure
that EPMD is running. `rabbitmqctl --help` saves nearly 500ms when EPMD
is already running as it reads both options, since previously it
spawned and blocked on `erl` when reading each option.
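
A sketch of the check, assuming a hypothetical start_epmd_via_erl/0 helper for the old spawn-`erl` path:
```erlang
ensure_epmd() ->
    case net_adm:names() of
        {ok, _NamesAndPorts} ->
            %% EPMD already answers on localhost (~8ms): nothing to do.
            ok;
        {error, _Reason} ->
            %% Hypothetical helper: spawn `erl` with nodename options,
            %% which starts the daemon as a side effect (~250ms).
            start_epmd_via_erl()
    end.
```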
2024-02-01 14:20:07 -05:00
Arnaud Cogoluègnes 1f89ede396
Remove rabbit_authz_backend:state_can_expire/0
Use expiry_timestamp/1 instead, which returns 'never'
if the credentials do not expire.

Fixes #10382
2024-01-24 09:58:59 +01:00
Michael Klishin 7b151a7651 More missed (c) header updates 2024-01-22 23:44:47 -05:00
Arnaud Cogoluègnes 3bd0fdc316
Merge pull request #10299 from rabbitmq/token-expiration-in-stream-connections
Take expiry timestamp into account in stream connections
2024-01-22 10:23:20 +01:00
Michael Klishin 80615755be
Bump (c) year 2024-01-20 14:52:15 -05:00
Michael Klishin fb775afcc8
More (c) source header updates #9969 2024-01-19 19:53:28 -05:00
Arnaud Cogoluègnes 33c64d06ea
Add expiry_timestamp/1 callback to authz backend behavior
Backends return 'never' or the timestamp of the expiry time
of the credentials. Only the OAuth2 backend returns a timestamp,
other RabbitMQ authz backends return 'never'.

Client code uses rabbit_access_control, which now contains
a new expiry_timestamp/1 function that returns the earliest
expiry time of the underlying backends; a sketch follows.
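
A sketch of both sides of the new API, assuming integer timestamps (illustrative, not the actual backend code):
```erlang
%% An authz backend whose credentials never expire returns 'never'.
expiry_timestamp(_AuthUser) ->
    never.

%% Earliest expiry across backends: 'never' only wins if no backend
%% reports a concrete timestamp.
earliest_expiry(Timestamps) ->
    case [T || T <- Timestamps, T =/= never] of
        [] -> never;
        Ts -> lists:min(Ts)
    end.
```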

Fixes #10298
2024-01-19 14:46:47 +01:00
Michael Klishin 48ce1b5ec7 Improve supported information units (Mi, Gi, Ti)
This revisits the information unit conversion,
that is, support for suffixes like GiB, GB.

When configuration values like disk_free_limit.absolute,
vm_memory_high_watermark.absolute are set, the value
can contain an information unit (IU) suffix.

We now support several new suffixes, and the meaning
of a few existing ones changes.

First, the changes:

 * k, K now mean kilobytes and not kibibytes
 * m, M now mean megabytes and not mebibytes
 * g, G now means gigabytes and not gibibytes

This is to match the system used by Kubernetes.
There is no consensus in the industry about how
"k", "m", "g", and similar single letter suffixes
should be treated. Previously it was a power of 2,
now a power of 10 to align with a very popular OSS
project that explicitly documents what suffixes it supports.

Now, the additions:

 * Mi, Gi, Ti (as in MiB, GiB, TiB) are now supported and mean
   mebibytes, gibibytes, and tebibytes, i.e. powers of 2

Finally, the node will now validate these suffixes
at boot time, so an unsupported value will cause
the node to stop with a rabbitmq.conf validation
error.

The message logged will look like this:

````
2024-01-15 22:11:17.829272-05:00 [error] <0.164.0> disk_free_limit.absolute invalid, supported formats: 500MB, 500MiB, 10GB, 10GiB, 2TB, 2TiB, 10000000000
2024-01-15 22:11:17.829376-05:00 [error] <0.164.0> Error preparing configuration in phase validation:
2024-01-15 22:11:17.829387-05:00 [error] <0.164.0>   - disk_free_limit.absolute invalid, supported formats: 500MB, 500MiB, 10GB, 10GiB, 2TB, 2TiB, 10000000000
````
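
A sketch of the decimal-vs-binary multipliers described above (illustrative, not the actual rabbitmq.conf parser):
```erlang
%% Single-letter and SI suffixes are powers of 10; "i" forms are powers of 2.
multiplier("k")  -> 1000;
multiplier("M")  -> 1000 * 1000;
multiplier("G")  -> 1000 * 1000 * 1000;
multiplier("Ki") -> 1024;
multiplier("Mi") -> 1024 * 1024;
multiplier("Gi") -> 1024 * 1024 * 1024.

%% e.g. "10GB"  -> 10 * multiplier("G")  = 10,000,000,000 bytes
%%      "10GiB" -> 10 * multiplier("Gi") = 10,737,418,240 bytes
```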

Closes #10310
2024-01-15 22:11:57 -05:00
Péter Gömöri b6f782fd0d Fix undef internal call in supervisor2
Must be a leftover from the refactoring in commit 0a87aea

This should prevent the below crash that was seen with an exchange
federation link

```
{undef,
    [{supervisor2,try_again_restart,
         [<0.105395.0>,
          {upstream,
              [{encrypted,
                   <<"...">>}],
              <some mirrored supervisor data>},
          {upstream,
              [{encrypted,
                   <<"...">>}],
              <some mirrored supervisor data>}],
         []}]}
```
2024-01-10 13:22:16 +01:00
Michael Klishin 01092ff31f
(c) year bumps 2024-01-01 22:02:20 -05:00
Jean-Sébastien Pédron e1a315e850
rabbit_peer_discovery: Use classic config peer discovery for `make start-cluster`
[Why]
So far, we use the CLI to create the cluster after starting the
individual nodes.

It's faster to use peer discovery and gives more exposure to the
feature. Thus it will be easier to quickly test changes to the peer
discovery subsystem with a simple `make start-cluster`.

[How]
We pass the classic configuration `cluster_nodes` application
environment variable to all nodes' command line.

This is only when the target is `start-cluster`, not `start-brokers`.
2023-12-07 15:51:54 +01:00
Jean-Sébastien Pédron 1e46d334bf
rabbit_peer_discovery: Acquire a lock for the joining and the joined nodes only
[Why]
A lock is acquired to protect against concurrent cluster joins.

Some backends used to use the entire list of discovered nodes and used
`global` as the lock implementation. This was a problem because a side
effect was that all discovered Erlang nodes were connected to each
other. This led to conflicts in the global process name registry and
thus processes were killed randomly.

This was the case with the feature flags controller for instance. Nodes
are running some feature flags operation early in boot before they are
ready to cluster or run the peer discovery code. But if another node was
executing peer discovery, it could make all nodes connected. Feature
flags controller unrelated instances were thus killed because of another
node running peer discovery.

[How]
Acquiring a lock on the joining and the joined nodes only is enough to
achieve the goal of protecting against concurrent joins. This is
possible because of the new core logic which ensures the same node is
used as the "seed node". I.e. all nodes will join the same node.

Therefore the API of `rabbit_peer_discovery_backend:lock/1` is changed
to take a list of nodes (the two nodes mentioned above) instead of one
node (which was the current node, so not that helpful in the first
place).

These backends also used to check if the current node was part of the
discovered nodes. But that's already handled in the generic peer
discovery code.

CAUTION: This brings a breaking change in the peer discovery backend
API. The `Backend:lock/1` callback now takes a list of node names
instead of a single node name. This list will contain the current node
name.
2023-12-07 15:51:54 +01:00
Jean-Sébastien Pédron fdbf0ef2b2
rabbit_runtime: Improve get_erl_path/0
[Why]
The previous implementation assumed that ERTS was in the Erlang root
dir. This is true for a standard Erlang install. However, it's not the
case of an Erlang release that does not include the runtime.

[How]
The bin directory (`bindir`) is already available as an argument to
init. We can simply query it and use the value as is.
2023-12-04 15:16:26 +01:00
Jean-Sébastien Pédron 767e5532a4
gen_server2: Fix return value of system_code_change/4 2023-12-04 14:29:46 +01:00
Michael Klishin f14bd15e4f
rabbit_writer: ignore unknown messages
of a certain structure reported in #9803.

Closes #9991.
2023-11-27 13:59:40 -05:00
Karl Nilsson f67058ac6c
Merge pull request #9830 from rabbitmq/mc-refinements
Message container conversion improvements
2023-11-24 10:22:34 +00:00
Jean-Sébastien Pédron d24315de55
rabbitmq-dist.mk: Install CLI scripts as part of the build
... instead of the `dist` target.

[Why]
We already do that when building tests. Thus it is more consistent to do
the same.

Also, it makes sense to ensure everything is ready before the `dist`
step. For instance, an Erlang release would not depend on the `dist`
target, just the build and it would still need the CLI to be ready.
2023-11-23 12:40:14 +01:00
Michael Klishin c4d54e2c04
Avoid using magic quotes, simplify (c) line printed on node startup 2023-11-22 10:31:41 -05:00
Michael Klishin 1b642353ca
Update (c) according to [1]
1. https://investors.broadcom.com/news-releases/news-release-details/broadcom-and-vmware-intend-close-transaction-november-22-2023
2023-11-21 23:18:22 -05:00
David Ansari 95c5f2ec9e Some small fixes 2023-11-16 12:33:17 +01:00
Iliia Khaprov 728dd57ffc Use application:get_env/3 2023-11-01 10:53:05 +01:00
David Ansari cad067f5fa Remove erlang:port_command/2 hack
as selective receives are efficient in OTP 26:
```
OTP-18431
Application(s):
compiler, stdlib
Related Id(s):
PR-6739
Improved the selective receive optimization, which can now be enabled for references returned from other functions.
This greatly improves the performance of gen_server:send_request/3, gen_server:wait_response/2, and similar functions.
```
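
For illustration, the request/await pattern that benefits; `Server` stands for any gen_server pid or registered name (the gen_server functions below are real OTP APIs):
```erlang
%% send_request/2 returns a request id (a reference); wait_response/2
%% selectively receives the matching reply, which OTP 26 optimizes so the
%% receive can skip mailbox messages that arrived before the request.
ReqId = gen_server:send_request(Server, get_state),
case gen_server:wait_response(ReqId, 5000) of
    {reply, Reply}   -> Reply;
    timeout          -> exit(await_timeout);
    {error, _} = Err -> exit(Err)
end.
```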
2023-10-27 10:37:56 +02:00
Jean-Sébastien Pédron a26f95d26d
rabbitmq-run.mk: No need to {stop,start}_app when clustering
[Why]
`rabbitmqctl join_cluster` now takes care of stopping and restarting
whatever needs to be dealt with.
2023-10-26 11:22:48 +02:00
Michael Klishin 83bfce891c
Merge pull request #9768 from rabbitmq/driveby-opt
Optimise rabbit_registry:binary_to_type/1
2023-10-24 12:32:50 -04:00
Karl Nilsson e7b9a9e40f Optimise rabbit_registry:binary_to_type/1
By not roundtripping binary to list before converting to atom.
2023-10-24 09:41:17 +01:00
David Ansari 512aaa5912 Reduce memory with many SSL connections
By default, hibernate the SSL connection after 6 seconds, reducing its memory footprint.
This reduces memory usage of RabbitMQ by multiple GBs with thousands of
idle SSL connections.

This commit chooses a default value of 6 seconds because that hibernate_after
value is currently hard coded for rabbit_writer.
rabbit_mqtt_reader uses 1 second, rabbit_channel uses 1 - 10 seconds.

This value can be overridden via advanced.config, similar to:
```
[{rabbit, [
    {ssl_options, [
        {hibernate_after, 30000},
        {keyfile, "/etc/.../server_key.pem"},
        {certfile, "/etc/.../server_certificate.pem"},
        {cacertfile, "/etc/.../ca_certificate.pem"},
        {verify,verify_none}
    ]}
]}].
```

See https://www.erlang.org/doc/man/ssl.html#type-hibernate_after
```
When an integer-value is specified, TLS/DTLS-connection goes into
hibernation after the specified number of milliseconds of inactivity,
thus reducing its memory footprint. When undefined is specified
(this is the default), the process never goes into hibernation.
```

Relates
https://github.com/rabbitmq/rabbitmq-server/discussions/5346
https://groups.google.com/g/rabbitmq-users/c/be8qtkkpg5s/m/dHUa-Lh2DwAJ
2023-10-23 13:14:42 +00:00
Luke Bakken c94d22aceb Use pg_local to track AMQP 1.0 connections
Fixes #9371

Since each AMQP 1.0 connection opens several direct AMQP connections, we
must assign each direct connection a unique name to prevent multiple
entries in the `connection_created_stats` table.

Also, use `pg_local` to track AMQP 1.0 connections instead of walking
the supervisor trees.

Nuke authz_backends from connection created event 💥

Fix regex for connection name because UniqueId is part of it now (channel number)
2023-09-15 09:03:43 -07:00
Ayanda Dube c2217e30e8 fix os:system_time/1 time unit in credit_flow trace events, which crashes
messaging components that use flow control when they call credit_flow:block/1,
credit_flow:unblock/1 and/or credit_flow:peer_down/1 and handle_bump_msg/1.
This can crash all messaging components (connections, channels, queues, msg-store)
when trace is enabled and flow is set or cleared (amqp-1.0 included).
2023-09-14 13:12:30 +01:00
Michal Kuratczyk 0f8689be9a Negative zero fix 2023-09-13 09:41:00 -04:00
Michael Klishin 5903ff6c01 Erlang 27 compat: special case 0.0 (positive and negative) w/o using matching
Erlang 27 erlc requires us to match on both but earlier versions
complain that one head of the function is unreachable because the other
one always matches.

So we avoid pattern matching entirely.
2023-09-12 16:31:33 -04:00
Michael Klishin f805364293 Be more explicit here 2023-09-12 16:23:11 -04:00
Michael Klishin 2f8473b602 Try explicitly matching both 0.0 and -0.0
for OTP 27 compatibility.

Without special casing 0.0, we end up with a
math:log10/1 call which is undefined for zero.
2023-09-12 16:21:45 -04:00
Michal Kuratczyk ded944124d
0.0 doesn't require special handling
Without this change, our tests against OTP master started failing since
the negative zero change was introduced, for example:
https://github.com/rabbitmq/rabbitmq-server/actions/runs/6141080013/job/16660857916

The warning (treated as error) is:
```
rabbit_numerical.erl:33:8: matching on the float 0.0 will no longer also match -0.0 in OTP 27. If you specifically intend to match 0.0 alone, write +0.0 instead.
```
2023-09-12 20:26:59 +02:00
David Ansari 0eaaa1972f Avoid unnecessary hibernation in writer procs
Why:
The default rabbit.collect_statistics_interval is 5,000 ms which
collides with the hibernate timeout of 5,000 ms.
It happens that the writer process hibernates and is woken up just a few
microseconds thereafter to emit stats. This is wasteful, as hibernation
incurs costly garbage collections.

Inserting some logs before the writer emits stats and before the writer
hibernates shows the following (when consuming from a queue and sending a
single message to that queue via the Management UI):
```
2023-09-12 09:46:22.820514+02:00 [info] <0.784.0> rabbit_writer:216 hibernating...
2023-09-12 09:46:22.820779+02:00 [info] <0.784.0> rabbit_writer:274 emitting stats...

2023-09-12 09:46:27.821451+02:00 [info] <0.784.0> rabbit_writer:216 hibernating...
```

How:
Change hibernate timeout from 5s to 6s.
2023-09-12 10:02:10 +02:00
Ayanda Dube 779c9e4139 formatting and ensure use sup specs in rabbit_types 2023-09-11 14:13:03 +01:00
Ayanda-D 1d163886fd new rabbit_amqqueue_control for queue related control operations and fix bazel issues raised in MK's review 2023-09-11 14:13:03 +01:00
Ayanda-D 4e8474a5e0 use descriptive macros instead of magic numbers in await_new_pid/3 and await_state/4 2023-09-11 14:13:03 +01:00
Ayanda-D 0674cdf4b2 make rabbit_control_misc:await_state/4 api more consistent on the queue name arg types it accepts 2023-09-11 14:13:03 +01:00
Ayanda-D 7daaee8da0 formatting 2023-09-11 14:13:03 +01:00