rabbitmq_cli uses some private rules_erlang APIs that have changed in
the upcoming release.
Additionally:
- Avoid including both standard and test versions of amqp_client in
integration test suites
- Eliminate most of the compilation order hints (explicit first_srcs)
in the Bazel build
- Fix an include statement - in Bazel, an app is not available to
itself as a library at compilation time
In particular:
- io_file_handle_open_attempt
- queue_index_journal_write
Neither has proven to be very useful in recent years,
and with the move to the FHC-less and journal-less v2 index
both will slowly become irrelevant. This should be a
good compromise until we can switch to v2 permanently
or rework the stats module to use counters.
Most of the time, the file_handle_cache_stats ETS table is
used for writing only.
By enabling `write_concurrency` on the table, we allow different values
to be written concurrently without taking a global lock.
The only codepath reading from the ETS table runs on the
`collect_statistics_interval` interval and reads the whole table,
so we can assume we are not blocking any significant amount of concurrent reads.
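For illustration, a minimal sketch of creating such a table with `write_concurrency` enabled (the option set shown here is an assumption, not the exact options used by the file_handle_cache_stats module):

    %% Sketch: named, public table whose writes no longer contend on a
    %% single global lock thanks to write_concurrency.
    init() ->
        ets:new(file_handle_cache_stats,
                [set, public, named_table, {write_concurrency, true}]).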
as an opt-in feature. The goal is to avoid re-importing definitions
from the definition file/directory/source if we know the content
has not changed. Since this feature won't be appropriate for
every environment (sometimes unconditional reimporting is expected),
the feature is opt-in.
This is still a WIP.
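For example, once it lands, enabling the behaviour from rabbitmq.conf might look like this (the exact key name is an assumption while the feature is a WIP):

definitions.skip_if_unchanged = true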
bazel-erlang has been renamed rules_erlang. v2 is a substantial
refactor that brings Windows support. While this alone isn't enough to
run all rabbitmq-server suites on Windows, one can at least now start
the broker (`bazel run broker`) and run the tests that do not start a
background broker process.
This is to address another memory leak on win32 reported here:
https://groups.google.com/g/rabbitmq-users/c/UE-wxXerJl8
"RabbitMQ constant memory increase (binary_alloc) in idle state"
The root cause is the Prometheus plugin making repeated calls to `rabbit_misc:otp_version/0`, which in turn calls `file:read_file/1`, which leaks memory on win32.
See https://github.com/erlang/otp/issues/5527 for the report to the Erlang team.
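The kind of mitigation this implies is to read the version once and cache it, so the scrape path no longer touches the filesystem on every call. A minimal sketch (the function body and cache key are illustrative, not the actual rabbit_misc fix):

    %% Sketch: read the OTP_VERSION file once, cache the result in
    %% persistent_term, and avoid file:read_file/1 on later calls.
    otp_version() ->
        case persistent_term:get({?MODULE, otp_version}, undefined) of
            undefined ->
                Path = filename:join([code:root_dir(), "releases",
                                      erlang:system_info(otp_release),
                                      "OTP_VERSION"]),
                Version = case file:read_file(Path) of
                              {ok, Bin} -> string:trim(binary_to_list(Bin));
                              _         -> erlang:system_info(otp_release)
                          end,
                persistent_term:put({?MODULE, otp_version}, Version),
                Version;
            Version ->
                Version
        end.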
Turn `badmatch` into an actual error
This is copied from https://github.com/rabbitmq/rabbitmq-common/pull/349
If a message is sent to only one queue (the case in most application scenarios), passing it through the 'delegate' process is pointless: it only increases message latency and the likelihood of 'delegate' congestion.
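A simplified sketch of the idea (the real change sits in the delivery path and also handles remote nodes and other call forms; the names below come from modules already involved, but the clause itself is illustrative):

    %% Sketch: with exactly one destination pid, message it directly;
    %% only fan-outs go through the delegate process.
    deliver(Pids, Msg) ->
        case Pids of
            [Pid] -> gen_server2:cast(Pid, Msg);
            _     -> delegate:invoke_no_result(Pids, {gen_server2, cast, [Msg]})
        end.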
Here are some test data:
node1: Pentium(R) Dual-Core CPU E5300 @ 2.60GHz
node2: Pentium(R) Dual-Core CPU E5300 @ 2.60GHz
Join node1 and node2 into a cluster. Create 100 queues on node2 and start 100 consumers to receive messages from these queues.
Start 100 publishers on node1 to send messages to the queues on node2. Each publisher sends 10k messages at a rate of 100/s (10k/s in total, theoretically), so all publishers together send 1 million messages.
Before optimisation:
{1,[{msg_time,812312(=<1ms),177922(=<5ms),9507(=<50ms),221(=<500ms),38(=<1000ms),0,0,0,0,1061,1069,0,0}]}
After optimisation:
{1,[{msg_time,902854(=< 1ms),93993(=<5ms),3038(=<50ms),96(=<500ms),19(=<1000ms),0,0,0,0,1049,1060,0,0}]}
Additional information:
The time counted here is how long a message stays in the cluster, that is, Time(leaving node2) - Time(reaching node1).
"812312(=<1ms)" is the number of messages whose time was less than or equal to 1ms.
Overall, the optimisation is effective.
It already happened automatically for e.g. `make start-cluster`,
but some plugins were not covered by the default generated config, and
running RabbitMQ from two different worktrees was a bit complicated.
A value that is too low will prevent the index from shutting
down in time when there are many queues. This leads to the
process being killed, and on the next RabbitMQ restart a
(potentially very long) dirty recovery is needed.
The value of 10 minutes was chosen to mirror the shutdown
timeout of the message store. Since both the queues and the message
store need to shut down gracefully in order to have
a clean restart, it makes sense to use the same value.
Related: c40c2628a9
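For illustration, this amounts to something like the following supervisor child spec, assuming the timeout is applied as a `shutdown` value (the id and module names below are placeholders, not the actual rabbit supervisor code):

    %% Sketch: give the child up to 10 minutes (in milliseconds) to stop
    %% gracefully before it is killed, which would otherwise force a
    %% dirty recovery on the next start.
    child_spec() ->
        #{id       => queue_index_placeholder,
          start    => {queue_index_placeholder, start_link, []},
          shutdown => 600000}.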
When we fail to parse the name of the cipher suite from the PROXY protocol,
just report that no SSL is used, instead of trying to fill it in
with data from the connection between the proxy and our server.
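A rough sketch of the behaviour (illustrative only; the real code lives in the proxy protocol handling and uses its own data shapes):

    %% Sketch: if the cipher suite cannot be taken from the PROXY header,
    %% report nossl instead of falling back to the TLS details of the
    %% proxy<->server socket.
    ssl_info(ProxyInfo) ->
        case ProxyInfo of
            #{ssl := #{cipher := Cipher}} when Cipher =/= undefined ->
                {ok, [{cipher, Cipher}]};
            _ ->
                nossl
        end.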
A user could already enable single-line logging (the `single_line`
option of `logger_formatter` or RabbitMQ internal formatters) from the
configuration file. For example:
log.console.formatter.single_line = on
With this patch, the option can be enabled from the `$RABBITMQ_LOG`
environment variable as well:
make run-broker RABBITMQ_LOG=+single_line
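For reference, `single_line` is the standard `logger_formatter` option; enabling it directly on an OTP logger handler looks roughly like this (a sketch, not RabbitMQ's own handler setup):

    %% Sketch: switch the default handler's formatter to single-line output.
    logger:update_handler_config(default, formatter,
                                 {logger_formatter, #{single_line => true}}).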
Those environment variables are unset by default. The default values are
set in the `rabbit` application environment and can be configured in the
configuration file. However, if set, each environment variable takes
precedence over the corresponding application environment value.