This is needed when running tests interactively.
The OS updates the local Chrome binary, so this Node.js library has to
be upgraded too.
(cherry picked from commit 6578c83a0e)
This is mostly the same as the `messages_total` property test but checks
that the Raft indexes in `ra_indexes` are the set of the indexes checked
out by all consumers, unioned with any indexes in the `returns` queue.
This is the intended state of `ra_indexes`, and failing this condition
could cause bugs that would prevent snapshotting.
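A minimal sketch of the invariant being checked (all names here are
assumptions, not the actual property-test code):
```
%% RaIndexes, CheckedOut and Returns are plain lists of Raft indexes.
%% The invariant: ra_indexes holds exactly the indexes checked out by
%% all consumers plus any indexes in the returns queue.
ra_indexes_invariant(RaIndexes, CheckedOut, Returns) ->
    lists:usort(RaIndexes) =:= lists:usort(CheckedOut ++ Returns).
```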
(cherry picked from commit 01b4051b03)
https://github.com/erlang/otp/issues/9739
In OTP 28+, splitting an empty string returns an empty list, not an
empty string (the input).
Additionally, the `street-address` macro was removed in OTP 28, so
replace it with the value it used to expand to.
Lastly, rabbitmq_auth_backend_oauth2 has an MQTT test, so add
rabbitmq_mqtt to TEST_DEPS.
(cherry picked from commit 637a2bc8cc)
Building from source using this command:
```
make RMQ_ERLC_OPTS= FULL=1
```
... then starting RabbitMQ via `make run-broker` allows re-compilation
from the `erl` shell:
```
1> c(rabbit).
Recompiling /home/lbakken/development/rabbitmq/rabbitmq-server/deps/rabbit/src/rabbit.erl
{ok,rabbit}
```
When `+deterministic` is passed to `erlc`, the `compile` data in each
module's information is missing the source path for the module.
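A quick way to observe this from the shell, assuming the module was
built with `+deterministic` (illustrative, not part of the original
commit):
```
1> proplists:get_value(source, rabbit:module_info(compile)).
undefined
```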
Follow-up to #3442
(cherry picked from commit eae657fc38)
`sets` v2 was not yet available when this module was written. Compared
to `gb_sets`, v2 `sets` are faster and more memory efficient:
> List = lists:seq(1, 50_000).
> tprof:profile(sets, from_list, [List, [{version, 2}]], #{type => call_memory}).

****** Process <0.94.0> -- 100.00% of total ***
FUNCTION           CALLS   WORDS   PER CALL  [     %]
maps:from_keys/2       1  184335  184335.00  [100.00]
                           184335             [ 100.0]
ok

> tprof:profile(gb_sets, from_list, [List], #{type => call_memory}).

****** Process <0.97.0> -- 100.00% of total ***
FUNCTION                   CALLS   WORDS   PER CALL  [    %]
lists:rumergel/3               1       2       2.00  [ 0.00]
gb_sets:from_ordset/1          1       3       3.00  [ 0.00]
lists:reverse/2                1  100000  100000.00  [16.76]
lists:usplit_1/5           49999  100002       2.00  [16.76]
gb_sets:balance_list_1/2   65535  396605       6.05  [66.48]
                                   596612             [100.0]
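For reference, opting into v2 is just an option on the `sets`
constructors; a minimal sketch (how the module in question wraps this is
a separate matter):
```
%% Map-based v2 sets; the version option has existed since OTP 24.
S = sets:from_list(lists:seq(1, 50_000), [{version, 2}]),
true = sets:is_element(42, S),
S2 = sets:add_element(50_001, S).
```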
(cherry picked from commit 5a32322778)
The `rabbit_mgmt_gc` gen_server performs garbage collections
periodically. When doing so it can create potentially fairly large
terms, for example by creating a set out of
`rabbit_exchange:list_names/0`. With many exchanges, the process memory
usage can climb steadily, especially when the management agent is mostly
idle, since `rabbit_mgmt_gc` won't hit enough reductions to trigger a
full-sweep GC on itself. Since the process is only active periodically
(once every 2 minutes by default) we can hibernate it to GC the terms it
created.
This can save a moderate amount of memory in situations where there are
very many pieces of metadata (exchanges, vhosts, queues, etc.). For
example, on an idle single-node broker with 50k exchanges,
`rabbit_mgmt_gc` can hover around 50 MB before being naturally GC'd.
With this patch the process memory usage stays at around 1 KB between
`start_gc` timer messages.
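A minimal sketch of the idea, with assumed helper names rather than the
real module's code: returning `hibernate` from the gen_server callback
forces a full-sweep GC once the periodic work is done.
```
handle_info(start_gc, State) ->
    %% gc_* helpers (names assumed) build large temporary terms, e.g. a
    %% set out of rabbit_exchange:list_names/0.
    gc_exchanges(),
    gc_queues(),
    %% Hibernating drops those terms instead of waiting for the process
    %% to accumulate enough reductions for a full-sweep GC on its own.
    {noreply, start_gc_timer(State), hibernate}.
```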
(cherry picked from commit ce5d42a9d6)
Building on push to any branch is wasteful and unnecessary, because most
of the built images are never used. The workflow-dispatch trigger covers
the use case of building an image from the latest commit on a branch.
The use case of validating/QA-ing a PR is now covered by the
pull-request trigger. This trigger has a caveat: PRs from forks won't
produce a Docker image.
Why?
Because PRs from forks do not get the rabbitmq-server secrets injected.
This is a security mechanism of GitHub to protect repository secrets.
With this trigger it is possible to QA/validate PRs from other Core team
members; technically, from anyone with 'write' access to our repo, i.e.
able to push branches.
(cherry picked from commit 4efb3df39e)
# Conflicts:
# .github/workflows/oci-make.yaml
The `below_node_connection_limit_test` and `ready_to_serve_clients_test`
cases could flake because `is_quorum_critical_single_node_test` uses the
channel manager in `rabbit_ct_client_helpers` to open a connection. This
can cause the line
true = lists:all(fun(E) -> is_pid(E) end, Connections),
to fail to match. The last connection could have been rejected if the
channel manager kept its connection open, so instead of being a pid the
element would have been `{error, not_allowed}`.
With `rabbit_ct_client_helpers:close_channels_and_connection/2` we can
reset the channel manager and force it to close its connection.
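A hedged sketch of how the affected cases can use that helper (the
surrounding test body and all helper names other than
`close_channels_and_connection/2` are assumptions):
```
below_node_connection_limit_test(Config) ->
    %% Close the channel manager's shared connection so it cannot take
    %% up one of the limited connection slots.
    rabbit_ct_client_helpers:close_channels_and_connection(Config, 0),
    Connections = open_connections_up_to_limit(Config), %% assumed helper
    true = lists:all(fun erlang:is_pid/1, Connections).
```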
This commit is backported from 314e4261fc
on main.
Returning the connection limit and active count is not really necessary
for these checks. Instead of returning them in the health check
response, we log a warning when the connection limit is exceeded.
(cherry picked from commit 3f53e0172d)
[skip ci] Bump com.google.googlejavaformat:google-java-format from 1.26.0 to 1.27.0 in /deps/rabbit/test/amqp_jms_SUITE_data in the dev-deps group across 1 directory