GC can then be done by deleting all entries in the ETS table,
and total counters per protocol can be kept without individually scanning
all entries.
net_adm:names/1 returns a new value, 'noport', in Erlang 24. Because this
value is absent from the function spec in previous versions of Erlang, we
get a warning from Dialyzer until we start using the yet-to-be-released
Erlang 24 in CI. Therefore we disable this specific warning.
... instead of .ez archives.
The benefits of doing this:
* We can use native code, as is the case for lz4 and zstd bindings in
the Tanzu RabbitMQ version for instance. Indeed, Erlang does not
support loading native code (NIF or port drivers) from .ez archives.
* We can remove custom code to handle .ez archives. We have special
cases in Erlang.mk plugins as well as the `rabbit_plugins` module, in
particular the code to extract .ez archives (even though Erlang knows
how to use them directly).
* Prevent hard-to-debug situations where the .ez archive name does not
match the top-level directory inside the archive. In this case, Erlang
says it can't load the application but doesn't tell much more.
* Debugging and "hot-patching" plugins become easier: we just have to
copy the recompiled .beam file in place of the existing one. There
is no need to unpack the plugin, replace the file and recreate the
archive.
* Release packages can be smaller. gzip, bzip2 and xz, common
compression algorithms for Unix packages, give much better results if
they compress the .beam files directly instead of "compressing" zip
files (the .ez archives are plain zip archives). For instance, the
generic-unix package goes from 15 MiB when using .ez archives to just
12 MiB when using directories.
I would also like to experiment with Erlang releases in the future.
Using directories for Erlang applications instead of .ez archives is
mandatory for this to work according to my latest tests.
Of course, this change doesn't break support for .ez archives (and we
will keep support for this). End users can still download third-party
plugins as .ez archives and drop them in the plugins directory.
On Windows, the current working directory is also searched, which can
lead to problems. Instead, use `init:get_argument(root)` to get the root
of the Erlang release, then we know `bin/erl` will always be present.
In addition to the `rabbitmq-components.mk` existence check, we now
verify that the directory is named `deps`.
This is to increase the chance that, if we find a
`rabbitmq-components.mk` file in the upper directories, this project is
indeed inside a DEPS_DIR.
For instance, in our GitHub Actions workflows, when we prepared the
secondary umbrellas for mixed-version testing, it happened that the
secondary umbrellas were under a clone of rabbitmq-server. Therefore
the first (and only) condition was met and the Makefile erroneously
considered it was inside a DEPS_DIR. As a consequence, dependencies of
the umbrellas were fetched in the wrong place.
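The two-part check can be sketched like this (hypothetical shell helper for illustration; the real logic is written in Makefile syntax):

```shell
# Sketch of the two-part check (hypothetical shell helper; the real
# logic is written in Makefile syntax): the parent directory must both
# contain rabbitmq-components.mk AND be named "deps".
in_deps_dir() {
    parent=$(dirname "$1")
    [ -f "$parent/rabbitmq-components.mk" ] &&
        [ "$(basename "$parent")" = "deps" ]
}

mkdir -p /tmp/demo/deps/rabbit /tmp/demo/clone/umbrella
touch /tmp/demo/deps/rabbitmq-components.mk
touch /tmp/demo/clone/rabbitmq-components.mk

in_deps_dir /tmp/demo/deps/rabbit && echo "inside a DEPS_DIR"
in_deps_dir /tmp/demo/clone/umbrella || echo "not a DEPS_DIR"
```

The second call is the GitHub Actions scenario described above: a `rabbitmq-components.mk` exists in the parent, but the parent is not a `deps` directory.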
and add a VMware copyright notice.
We did not mean to make this code Incompatible with Secondary Licenses
as defined in [1].
1. https://www.mozilla.org/en-US/MPL/2.0/FAQ/
When we source the $CONF_ENV_FILE script, we set a few variables which
this script expects. Those variables are given without their prefix. For
instance, $MNESIA_BASE.
The $CONF_ENV_FILE script can set $RABBITMQ_MNESIA_BASE. Unfortunately
before this patch, the variable would be ignored, in favor of the
default value which was passed to the script ($MNESIA_BASE).
The reason is that variables set by the script are handled in
alphabetical order. Thus $MNESIA_BASE is handled first, then
$RABBITMQ_MNESIA_BASE.
Because the code didn't give any precedence, the first variable set
would "win". This explains why users who set $RABBITMQ_MNESIA_BASE in
$CONF_ENV_FILE, but using RabbitMQ 3.8.4+ (which introduced
`rabbit_env`), unexpectedly had their node use the default Mnesia base
directory.
The patch is rather simple: when we check if a variable is already set,
we give precedence to the $RABBITMQ_* prefixed variables. Therefore, if
the $CONF_ENV_FILE script sets $RABBITMQ_MNESIA_BASE, this value will be
used, regardless of the value of $MNESIA_BASE.
This didn't happen with variables set in the environment (i.e. the
environment of rabbitmq-server(8)) because the prefixed variables
already had precedence.
Fixes rabbitmq/rabbitmq-common#401.
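The precedence rule can be sketched like this (the `effective_value` helper is hypothetical; `rabbit_env` implements the real logic in Erlang):

```shell
# Sketch of the fix (the helper name is hypothetical; rabbit_env
# implements the real logic in Erlang): when both the prefixed and the
# non-prefixed variable are set, the RABBITMQ_* one must win.
effective_value() {
    # $1 is the variable name without the "RABBITMQ_" prefix.
    prefixed=$(eval echo "\"\${RABBITMQ_$1:-}\"")
    plain=$(eval echo "\"\${$1:-}\"")
    if [ -n "$prefixed" ]; then
        echo "$prefixed"
    else
        echo "$plain"
    fi
}

MNESIA_BASE="/default/mnesia"
RABBITMQ_MNESIA_BASE="/custom/mnesia"   # as set by $CONF_ENV_FILE
effective_value MNESIA_BASE             # prints /custom/mnesia
```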
This allows RabbitMQ to configure `rabbit_log` as a Logger handler.
See a related commit in rabbit_prelaunch_early_logging in
rabbitmq-server, where `rabbit_log` is being configured as a Logger
handler. The commit message explains the reason behind this.
The default timeout of 30 seconds was not sufficient to allow graceful
shutdown of a message store with millions of persistent messages. Rather
than increase the timeout in general, introduce a new macro with a
default of 600 seconds.
... instead of the cache action.
The cache action is quite unstable (it fails to download the cached
files). In this commit, we try to use artifacts instead. At this
point, we don't know if it is more reliable, but we'll see with time.
As an added bonus, we can download the archives passed between jobs for
inspection if we need to.
Otherwise, for instance, running Dialyzer in the Erlang client fails with the
following error if it was cloned directly (i.e. outside of the Umbrella):
dialyzer: Bad directory for -pa: .../amqp_client/deps/rabbitmq_cli/_build/dev/lib/rabbitmqctl/ebin
... and their value.
Both prefixed and non-prefixed variables are returned by this function.
While here, fix a conflict between $RABBITMQ_HOME and $HOME in
var_is_used/1: the latter shouldn't be considered as used.
When we generate the workflows, we pick the latest tag of each release
branch. That list of tags is used to clone secondary umbrellas in the
workflows and run the testsuites against each of them.
When generating workflows for `master`, we take the latest tag of each
release branch.
When generating workflows for a release branch, we take the latest tag
of each older release branch, plus the first tag of the same release
branch.
Some examples:
* `master` is tested with 3.8.3 and 3.7.25
* `v3.8.x` is tested with 3.8.0 and 3.7.25
We need a monotonically increasing number for the version used by the
Concourse S3 resource. A Git commit hash does not work because they do
not have this property.
The main entry point is `make github-actions` which generates the
workflows.
Currently, it handles workflows to test the project with different
versions of Erlang.
It generates a file called `$(PROJECT)-rabbitmq-deps.mk` which has a
dependency definition line, in the form expected by Erlang.mk, for each
RabbitMQ component the project depends on.
Each line indicates:
* `git` as the fetch method
* the repository URL
* the Git commit hash the dependency is on
Here is an example for rabbitmq-server:
dep_rabbit_common := git https://github.com/rabbitmq/rabbitmq-common.git d9ccd8d9cdd58310901f318fed676aff59be5afb
dep_rabbitmq_cli := git https://github.com/rabbitmq/rabbitmq-cli.git f6eaae292d27da4ded92b7c1b51a8ddcfefa69c2
dep_rabbitmq_codegen := git https://github.com/rabbitmq/rabbitmq-codegen.git 65da2e86bd65c6b6ecd48478ab092721696bc709
The double-quoting was required in the flock(1)/lockf(1) blocks because
of the use of `sh -c`. However, it's incorrect in the `else` block.
Follow-up to commit 3f32a36e50.
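A generic illustration of why `sh -c` requires the extra level of quoting (the file name is made up):

```shell
# Generic illustration (the file name is made up): inside `sh -c "..."`
# the whole command is already a quoted string, so a variable expansion
# within it needs its own, escaped quotes to survive word splitting.
cd /tmp
FILE="name with spaces"
touch "$FILE"

# Direct invocation: one level of quoting is enough.
ls "$FILE"

# Through `sh -c`: the inner, escaped quotes are required.
sh -c "ls \"$FILE\""
```

Outside of `sh -c`, adding the escaped quotes is at best useless and at worst incorrect, which is what this commit fixes.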
The CLI has a high startup time. To speed up the
`start-background-broker` and `stop-node` recipes, two CLI calls are
replaced by two more basic commands which achieve the same goal.
The problem with the previous approach was that the `$(wildcard ...)`
directives might be evaluated too early: `deps/rabbit` might not be
available yet.
Moving the computation to the body of the recipe fixes the problem
because dependencies are available at this point.
In other words, if instead of cloning the Umbrella, one cloned
rabbitmq-server directly, the `install-cli-scripts` recipe would fail to
copy the scripts because it assumed `rabbit` was under `$(DEPS_DIR)`.
Now expected places are checked and an error is emitted if the recipe
can't find the right one.
dispatch_sync sits in between the behavior of submit and submit_async,
blocking the caller until a worker begins the task, as opposed
to not blocking at all, or blocking until the task has finished.
This is useful when you want to throttle submissions to the pool
from a single process, such that all workers are busy, but there
exists no backlog of work for the pool.
On Darwin, the default tar fails with an unknown `--transform` flag.
FAILS: bsdtar 2.8.3 - libarchive 2.8.3
SUCCEEDS: tar (GNU tar) 1.32
re https://github.com/rabbitmq/rabbitmq-common/pull/364
Signed-off-by: Gerhard Lazu <gerhard@lazu.co.uk>
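A minimal illustration of the GNU-only flag (file and prefix names are made up):

```shell
# The --transform flag is specific to GNU tar; bsdtar (the Darwin
# default) rejects it as unknown. File and prefix names are made up.
mkdir -p /tmp/tar-demo
cd /tmp/tar-demo
touch plugin.beam
# Prepend a directory component to every archived path:
tar -cf demo.tar --transform 's,^,myapp/,' plugin.beam
tar -tf demo.tar   # lists "myapp/plugin.beam" with GNU tar
```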
If there are common_test logs (i.e. `logs` exists), it creates an archive
(compressed with xz(1)) in the top-level directory.
The archive is named `$(PROJECT)-ct-logs-$timestamp.tar.xz` by default.
The name can be changed by setting `$(CT_LOGS_ARCHIVE)`. The file
extension must be `.tar.xz`.
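A sketch of the equivalent archiving command (project name and paths are made up):

```shell
# Sketch of the archiving step (project name and paths are made up):
# tar's -J flag compresses the archive with xz(1).
mkdir -p /tmp/ct-demo/logs
cd /tmp/ct-demo
echo "test run" > logs/run.log
timestamp=$(date +%Y%m%d-%H%M%S)
tar -cJf "myproject-ct-logs-$timestamp.tar.xz" logs
```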
The documentation says we should be able to use ?=, but apparently it
affects the way variables are passed to sub-make.
The issue we had is that using: `make start-cluster RABBITMQ_CONFIG_FILE=...`
didn't work as expected: `$(RABBITMQ_CONFIG_FILE)` made it to the
sub-make but not to the sub-make's recipe.
Using := fixes the problem.
Doing that is ok because assigning `$(RABBITMQ_CONFIG_FILE)` in the
environment or on make(1)'s command line will override the
target-specific variable anyway.
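A minimal demonstration of the precedence rule that makes `:=` safe here (hypothetical Makefile and variable name):

```shell
# Minimal demonstration (hypothetical Makefile and variable name):
# a variable given on make's command line overrides a ':=' assignment
# in the Makefile, so switching from '?=' to ':=' loses nothing.
printf 'FOO := default\nall:\n\t@echo $(FOO)\n' > /tmp/mk-demo.mk
make -f /tmp/mk-demo.mk            # prints "default"
make -f /tmp/mk-demo.mk FOO=bar    # prints "bar"
```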
They were plain by default & are now blue which works really well with
Gruvbox Dark. I couldn't change just the debug color, had to redefine
them all.
cc @dumbbell @lukebakken
Signed-off-by: Gerhard Lazu <gerhard@lazu.co.uk>
When running the broker locally, in dev, this is what most of us want.
To change this, use e.g. RABBITMQ_LOG=info (previous default).
Signed-off-by: Gerhard Lazu <gerhard@lazu.co.uk>
This turns off WAL preallocation and saves 400+ MiB per node directory.
This setting only applies to nodes started with `make run-broker` or
from our testsuites. RabbitMQ default configuration remains unaffected.
Using dependencies seemed sensible in the first place, but there are
also special cases like `rabbit` itself. In the end, it looks simpler to
just list rabbitmq-common and rabbitmq-amqp1.0-common in a blacklist and
install the CLI for everything else.
We want to test PRs such as
https://github.com/deadtrickster/prometheus.erl/pull/102
in RabbitMQ master (3.9.x) so that we can test fixes against other
master components, like OTP 23 (erlang-git).
Signed-off-by: Gerhard Lazu <gerhard@lazu.co.uk>
... between the current project and rabbitmq-common.
Like with `rabbitmq-components.mk`, this avoids using an incorrect copy
if the current project uses a different branch or does not have e.g. a
`v3.8.x` branch (unlike rabbitmq-common).
We need to communicate this information to rabbitmq-components.mk so it
selects the right branch for each dependency.
By default, it would query git(1), but after Travis clones and possibly
merges branches, it does not have access to the information anymore.
Fortunately, the Travis environment has everything we need.
$base_rmq_ref was already set properly in a previous commit.
If Travis is building a tag, $TRAVIS_BRANCH will contain the appropriate
value, so this works in this case as well.
We now also check if `rabbitmq-components.mk` is up-to-date.
To do so, we set the language to Elixir, even though almost all our
projects are written in Erlang. But we need Elixir for the RabbitMQ CLI.
Specifying Elixir as the language in Travis allows us to:
1. make sure Elixir is installed by Travis
2. specify the versions of both Erlang/OTP and Elixir
We also set an explicit install step. Not that we care about `mix
local.hex`, but we need to override the default Travis install step
which assumes this is an Elixir (mix) based project.
We take this opportunity to add Erlang/OTP 22.2 to the build matrix.
While here, we bring two fixes:
* Warnings reported by Travis are solved: the OS is set explicitly and
`sudo` is removed.
* The "git checkout" gymnastic is replaced by simply setting
`$base_rmq_ref`. This is a better solution to make sure the
appropriate dependencies' branch is selected.
Exactly as we previously set the file log level to debug.
Note that it does not enable logging on the console, it only changes the
default log level if the user of `make run-broker` enables console
logging (using `make run-broker RABBITMQ_LOGS=-`).
[#171131596]
The previous value accepted for this behavior was "NONE". But it's more
intuitive to set it to nothing.
`rabbitmq-run.mk` is also updated to allow `$RABBITMQ_ENABLED_PLUGINS`
to be overridden e.g. on the command line.
It guesses the node name type, based on the host part of a node name.
I.e., if it contains at least a `.` character, it's a longname.
This matches the verification `net_kernel` does to make sure the node
name corresponds to the shortnames/longnames option.
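The guessing rule can be sketched like this (hypothetical shell helper; the real check is implemented in Erlang):

```shell
# Sketch of the guessing rule (hypothetical shell helper; the real
# check is implemented in Erlang): a host part containing at least one
# "." makes the node name a longname.
node_name_type() {
    host=${1#*@}
    case $host in
        *.*) echo longname ;;
        *)   echo shortname ;;
    esac
}

node_name_type rabbit@myhost               # shortname
node_name_type rabbit@myhost.example.com   # longname
```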
There are two changes in this patch:
1. In `get_default_plugins_path_from_node()`, we base the search on
`rabbit_common.app` instead of `code:lib_dir(rabbit_common)`.
The latter only works if the application directory is named
`rabbit_common` or `rabbit_common-$version`. This is not the case
with a default Git clone of the repository because the directory will
be named `rabbitmq-common`.
Using `rabbit_common.app` is fine because it is inside the `ebin`
directory, like all modules. It also brings another benefit: it is not
subject to cover-compilation or preloading (which both get rid of the
original module location).
2. The code to determine the plugins directory based on the directory
containing the module (or `rabbit_common.app`) now takes into account
plugin directories (as opposed to .ez archives). In this case, there
is one less path component compared to an .ez archive.
I.e. we record the fact that a particular value:
* is the default value, or
* comes from an environment variable, or
* comes from querying a remote node
This required a significant refactoring of the module, which explains
the large diff.
At the same time, the testsuite was extended to cover more code and
situations.
This work permits us to move remaining environment variables checked by
`rabbit` to this module. They include:
* $RABBITMQ_LOG_FF_REGISTRY
* $RABBITMQ_FEATURE_FLAGS
* $NOTIFY_SOCKET
[#170149339]
Compared to `all_module_attributes/0`, it only scans applications which
are related to RabbitMQ: either a RabbitMQ core application or a plugin
(i.e. an application which depends on `rabbit`).
On my laptop, this significantly reduces the time to query module
attributes in the case of feature flags: it goes from 830 ms to 235 ms
just by skipping all Erlang/OTP applications and third-party
dependencies.
This makes a small improvement to RabbitMQ startup time, which is
visible for developers mainly, not for a production instance.
To be used in branches other than `master`. It will take `.gitignore`
from master and replace the current copy with it.
Like a few other targets, it supports `DO_COMMIT=yes` to commit the
change as well.
When we are running Makefile recipes from an application under
`$(APPS_DIR)`, we want to locate the Umbrella correctly to:
- set `$(DEPS_DIR)` accordingly
- prevent `make distclean` from removing `$(DEPS_DIR)`
Before this change and after `rabbit/apps/rabbitmq_prelaunch` was added,
running `make distclean` in `rabbit` removed everything under
`$(DEPS_DIR)`.
There was one legitimate warning in `get_enabled_plugins()`:
`get_prefixed_env_var()` already takes care of converting an empty
string to false.
The other warning is because `loading_conf_env_file_enabled()` returns a
boolean when compiled for tests, but always true when compiled for
production. Dialyzer only sees the second case and thinks the cases
where the function returns false will never happen.
... instead of `.ez` archives.
The default is still to create `.ez` archives for all RabbitMQ
components & plugins.
However if `$(USE_RABBIT_BOOT_SCRIPT)` is set (experimental and
undocumented for now), they are distributed as directories.
This is handled by the `rabbitmq_prelaunch` application now, based on
the value of `$RABBITMQ_ENABLED_PLUGINS`.
`$(RABBITMQ_ENABLED_PLUGINS_FILE)` depended on `dist`. This dependency
was moved to individual `run-*` and `start-*` targets.
While here, re-use `test-dist` instead of `dist` if the build was
already done for tests.
The testsuites default to run `make test-dist` as a first step.
Therefore later, when it starts a node, it should re-use that instead of
depending on `make dist` which will rebuild the tested project and
remove test dependencies from plugins.
This is useful (and mandatory in fact) now that `rabbit` is packaged
like plugins because, in the case of rabbitmq-erlang-client for
instance, the broker is a `$(TEST_DEPS)`: if starting a node runs `make
dist`, the broker will be removed.
... to the plugin being worked on, instead of locating `rabbit` and
taking the scripts there.
It greatly simplifies the use of RabbitMQ and plugins inside a
development working copy because the layout is closer to what we would
have in a package, i.e. there are far fewer special cases.
The goal is to distribute RabbitMQ core (the `rabbit` Erlang
application) exactly as we distribute plugins. This simplifies the
startup script and CLI tools when we have to setup Erlang code search
path.
... and default values.
It can also query a remote node for some specific values. The use case
is the CLI, which should know exactly what the RabbitMQ node it controls
uses.
It supports several new environment variables:
RABBITMQ_DBG:
Used to setup `dbg` for some simple tracing scenarios.
RABBITMQ_ENABLED_PLUGINS:
Used to list plugins to enable automatically on node startup.
RABBITMQ_KEEP_PID_FILE_ON_EXIT:
Used to indicate if the PID file should be removed or kept when the
node exits.
RABBITMQ_LOG:
Used to configure the global and per-category log levels and enable
ANSI colors.
`ebin/test` is always touch(1)'d by Erlang.mk, which made the list of
dependencies of an .ez archive newer than the archive itself. This caused the
archive to be recreated.
While here, set `TEST_DIR` to something random in the case of `make
test-dist`: this way, rebuilding all testsuites is skipped by Erlang.mk.
Yes, this is a hack.
At least on the Windows Server 2019 AWS EC2 image, the `tasklist`
command is unavailable.
If that's the case, we fall back to using a PowerShell one-liner. It's
not the default, just in case PowerShell is unavailable.
This is now done in xrefr (`mk/xrefr`) and rabbitmq-ct-helpers when
needed.
This has several benefits:
* This fixes `make run-broker` on Windows because the computed
`$ERL_LIBS` was invalid there.
* This saves a lot of Makefile processing time, because elixir(1) is
quite slow to start up. On my laptop, a complete build in
rabbitmq-server-release goes from 8.5 seconds to 3 seconds.
into a list, as the function name implies.
All current call sites use it to call functions that return lists.
However, rabbitmq/rabbitmq-cli#389 breaks this cycle.
* Use `noinput`
* Use `-s erlang halt` to skip small `eval` overhead
* Use `no_dot_erlang` boot file since we do not want user customizations to interfere
These should be taken into account in the limits, but always be granted.
Files must be reserved by the queues themselves using `set_reservation/0` or
`set_reservation/1`. This is an absolute reservation that increases or
decreases the number of files reserved to reach the given amount on every
call.
[#169063174]
... when we wait for a node started in the background.
This helps when the PID is written asynchronously by the Erlang node
instead of by the rabbitmq-server(8) script: in this case, the
`rabbitmqctl wait` command may start to wait earlier in the former
situation than in the latter one, and thus time out earlier.
On Windows, if we pass it a Windows-native path like `C:\User\...` or
even something with forward slashes, rsync(1) will consider that `C`
(before the colon) is a hostname and it should try to connect to it.
Using `cygpath.exe` on Windows converts the Windows path to a Unix-like
one (e.g. `/c/Users/...`).
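A rough approximation of the conversion cygpath.exe performs (illustration only; the real tool handles many more cases):

```shell
# Rough approximation of the conversion cygpath.exe performs
# (illustration only; the real tool handles many more cases). It turns
# a drive-letter path like C:\Users\... into /c/Users/...
to_unix_path() {
    drive=$(printf '%s' "$1" | cut -c1 | tr 'A-Z' 'a-z')
    rest=$(printf '%s' "$1" | cut -c3- | tr '\\' '/')
    printf '/%s%s\n' "$drive" "$rest"
}

to_unix_path 'C:\Users\me\project'   # prints /c/Users/me/project
```

In the converted form, there is no colon left for rsync(1) to misinterpret as a host separator.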
Add metadata to virtual hosts
[#166298298]
rabbit_vhost: use record defaults
The vhost record moved to a versioned record in rabbitmq-server
Co-Authored-By: Michael Klishin <mklishin@pivotal.io>
This saves a lot of time because:
1. we don't spawn a shell each time to compute the same value;
2. elixir(1) has a long startup time.
In my tests, a no-op gmake in `rabbit` goes from 2.5 seconds to 0.9
seconds.
... instead of Unix commands and a one-liner which assumes that `:` is
the path separator. This speeds up the build because we don't spawn a
shell.
While here, also remove `$(APPS_DIR)` from `$(ERL_LIBS)`.
When counting tags, we have to apply the same tag filtering as
git-describe(1) does.
This fixes the following error:
fatal: No names found, cannot describe anything
This issue was hit when there were tags in the project, but they were
all filtered out by git-describe(1).
WIP, the secret is hardcoded, which is obviously not secure. It is
enough though to see if modules/applications manipulating credentials
can use it to prevent those credentials from ending up in logs when the
state of crashed processes is dumped.
[#167070941]
References rabbitmq/rabbitmq-erlang-client#123
... in `commits-since-release`.
Before this change, the script was expecting at least one tag so that
git-describe(1) worked. Without that, it would fail with:
fatal: No names found, cannot describe anything.
Now, if a component has no tag, it will display "New in this release!".
Patch from @dumbbell
The application to "package" as a plugin (an .ez archive) might be under
`$(APPS_DIR)`. Therefore, all the variables and recipes are now created
from the path to the application, not just its name.
With the update of Erlang.mk, dependencies are not rebuilt anymore by
default, except if `FULL=1` is set.
This behavior is not adapted to the work on RabbitMQ where many
components are split into many repositories, and we work on several of
them at the same time.
Therefore, the idea of this commit is to tell Erlang.mk to always visit
dependencies which are RabbitMQ components. Other dependencies are only
built once the first time.
[#166980833]
Unfortunately, the *-on-concourse targets still don't work: fly(1), the
Concourse CLI, looks to have regressed even more: it doesn't upload all
inputs. Half of them are just empty directories.
Obviously, compiling anything fails because of this.
In Erlang 22, the name is now `aes_256_cbc`.
The default cipher/hash/iterations are also set in rabbit's application
default environment. I'm going to remove those default values there
because the code already queries this module if they are missing from
the application environment.
This fixes a crash of the call to `crypto:cipher_info/1` later because
the ciphers list returned by `crypto:supports/0` contains more ciphers:
"old aliases" are added to that list and those aliases are unsupported
by `crypto:cipher_info/1`.
This reverts commit 7c9f170cee.
CLI tools cannot use this function as it logs errors.
`rabbit_resource_monitor_misc:parse_information_unit/1` is a better fit.
Erlang 22 will introduce TLS 1.3, but at the time of this commit, only
the server side is implemented. If the Erlang client requests TLS 1.3, the
server will accept but the client will either hang or crash.
So for now, just blacklist TLS 1.3 to avoid any issues, even on the
server side, just to be safe.
[#165214130]
The `creation` field might not fit into one byte which makes the
`PID_EXT` format unsuitable for this case.
The `NEW_PID_EXT` format is supported since Erlang 19.0, so it is safe
to always use it, no matter the value of `creation`, because RabbitMQ
3.7+ requires at least Erlang 19.3+.
References #313.
In OTP 22, the Creation field has been increased to 32 bits.
For now we only need to handle it when using term_to_binary
and parsing the result manually.
We don't need this anymore, now that the high watermark is bumped
automatically when the log level is set to `debug` in rabbit_lager.
This reverts commit 49956c6423.
The stop-node command is the only make target still using erl_call,
which is prone to breakage (it broke in OTP 21.3) and can readily be
replaced with rabbitmqctl stop.
To fix this, introduce a new helper, `ascii_color/2`, which takes the
same flag (`UseColors`) as its second argument. If that flag is false,
it returns an empty string.
While here, move the `isatty()` function in this module.
In particular, we drop support for Erlang R13B and older in
`rabbit_cert_info`. This fixes an error reported by Dialyzer now that
the list of dependencies (`$(LOCAL_DEPS)`) is more correct.
... in version_minor_equivalent().
Note that we'll need a special case to exclude 3.7.x versions which
don't have the feature flags modules.
At the same time, we introduce the `strict_version_minor_equivalent()`
function which has the behavior the initial function had before. This is
used in the context of plugin compatibility checks: plugins can specify
a `broker_version_requirements` property and at this point, plugins
compatible with 3.7.x should not be considered as compatible with 3.8.x.
[#159298729]
See the corresponding commit in rabbitmq-server for all the
explanations.
Now, all accesses to the #amqqueue{} record are made through the
`amqqueue` module (available in rabbitmq-server). The new type name is
`amqqueue:amqqueue()`.
The `amqqueue.hrl` header also provides some macros to help with pattern
matching and guard expressions.
To help with this, code and modules were moved from rabbitmq-common to
rabbitmq-server.
[#159298729]
* Add single active consumer flag in consumer metrics
* Add function to update consumer metrics when a consumer is promoted
to single active consumer
[#163089472]
References rabbitmq/rabbitmq-management#649
Allow the backing queue implementation to inform the amqqueue process
how to proceed when a message duplicate is encountered.
* {true, drop} the message is a duplicate and should be ignored
* {true, reject} the message is a duplicate and the publisher should
receive a rejection
* false the message is not deemed a duplicate
* true kept for backward compatibility, equivalent to {true, drop}
Signed-off-by: Matteo Cafasso <noxdafox@gmail.com>
The seq command should include a -1 increment to stop nodes in reverse
order. Previously, with NODES=2 for example, it would run `seq 2 1`,
which produces no items to iterate, so the entire stop-node loop did
not execute and the brokers were left running.
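The difference is easy to see with GNU seq:

```shell
# With GNU seq, an increment must be given explicitly to count down;
# otherwise the sequence is empty when first > last:
seq 2 1       # prints nothing: no iterations, nodes never stopped
seq 2 -1 1    # prints 2 then 1: nodes stopped in reverse order
```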
To check whether a value is in the queue or not. Handles both
non-priority and priority queues. Based on lists:member/2, it does not
take into account the priority of the value in the queue when checking
equality.
References rabbitmq/rabbitmq-server#1743
It has been reported that in order to use the Erlang client, the
Erlang/OTP source must be available. This is due to one include
file that rabbit_net required. This dependency has been removed.
Instead of calling is_record(sslsocket) the macro ?IS_SSL will
now perform the same test manually (check that it is a tuple,
that the size is correct and that the first element equals sslsocket).
The tuple has not changed in a very long time so doing this
manually is at least as safe as including this private header
file (it could be removed or moved at any time).
Once Erlang/OTP 22 gets out and we know how sockets will be
represented with the NIF implementation, we could revise this
and check whether the socket is one that gen_tcp accepts
(currently it's a port, but this will probably change when
a NIF is used).
With the quorum queue code, RabbitMQ probably still works with Erlang
20.x, but it is not thoroughly tested. Thus, bump the requirement to
Erlang 21.0.
* Lager: 3.6.4 -> 3.6.5
* Ranch: 1.5.0 -> 1.6.1
* ranch_proxy_protocol: 1.5.0 -> 2.1.0-rc.1
Note that ranch_proxy_protocol 2.1.0-rc.1 is not an official release
from upstream: it was published by us (the RabbitMQ team) because we
don't have feedback from upstream about a pull request to update Ranch
to 1.6.x (heroku/ranch_proxy_protocol#49). Hopefully upstream will merge
the pull request and cut a new official release.
Fixes rabbitmq/rabbitmq-common#269.
[#160270896]
If a resource alarm is triggered during the boot process, it will send
an event via rabbit_event:notify. This can crash the node if rabbit_event
is not started yet: rabbit_event starts after rabbit_alarm, and any
alarms on boot were crashing the node.
Calling gen_event:notify with {rabbit_event, node()} is the same as with
rabbit_event except it does not fail.
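The difference can be sketched as follows (the event payload is made up for the example): sending to a bare registered name raises badarg when nothing is registered under it, while the {Name, Node} form is silently dropped.

```erlang
%% No rabbit_event manager is running here, yet this returns ok
%% instead of crashing:
ok = gen_event:notify({rabbit_event, node()}, {alarm_set, memory}).
```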
erl_call(1) is broken in Erlang 21.0-rc.2 when we use it to evaluate
some code on a remote node. The issue was reported upstream; in the
meantime, we can replace erl_call(1) with `rabbitmqctl eval` to achieve
the same result.
The output is still filtered using sed(1) to ensure it remains
unchanged, in case testsuites expect a particular format.
We continue to use erl_call(1) to stop a node, because we can't achieve
the same behavior using rabbitmqctl(1). This use of erl_call(1) works
with Erlang 21.0-rc.2.
[#157964874]
OTP 21 deprecated erlang:get_stacktrace/0 in favor of a new try/catch
syntax. Unfortunately, adopting it is not realistic for projects that
support multiple Erlang versions (like ours) until OTP 21 becomes the
minimum version requirement. In order to compile, we have to ignore the
warning. The broad compiler option seems to be the most common way to
support compilation on multiple OTP versions with warnings_as_errors.
[#157964874]
The code already verifies that `gen_event:start_link/2` is available
before calling it, and falls back on `gen_event:start_link/1`
appropriately.
Therefore we can add it to the `-ignore_xref()` list to fix the xref
check.
gen_event:start_link/2 is not available before Erlang 20. RabbitMQ
3.7 needs to support Erlang 19.3 and above.
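The guarded call can be sketched like this (the manager name and the option passed to the two-argument variant are illustrative):

```erlang
%% gen_event:start_link/2 appeared in Erlang 20; fall back to the
%% one-argument variant on Erlang 19.3.
start_event_manager() ->
    case erlang:function_exported(gen_event, start_link, 2) of
        true  -> gen_event:start_link({local, my_event_mgr},
                                      [{spawn_opt, [{fullsweep_after, 0}]}]);
        false -> gen_event:start_link({local, my_event_mgr})
    end.
```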
We thought of implementing different garbage collection strategies for
rabbit_event that would be compatible with Erlang 19.3, but failed
because gen_event is a special type of process:
* fullsweep_after cannot be set via process_flag/2; attempting to do so
raises a badarg exception
* collecting on a timer would be weird because all the handlers would
receive the event
* we can't force a full GC via hibernating, because this would need to
run after each event, which would result in terrible performance
Partner-in-crime: @essen
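The first point is easy to reproduce; fullsweep_after is a spawn option, not a process flag:

```erlang
%% process_flag/2 rejects fullsweep_after with badarg:
badarg = try erlang:process_flag(fullsweep_after, 0)
         catch error:badarg -> badarg
         end.
```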
ets:select_replace/2 is only available in Erlang 20 and above. RabbitMQ
3.7 needs to support Erlang 19.3 and above.
We haven't noticed any difference in performance when using one approach
over the other.
To keep the code simple, we decided to not detect which approach to use.
Partner-in-crime: @essen
When a node has to process events generated by creating 100k
connections, 100k channels & 50k queues, it was observed that this
process would use up to 13GB of memory and never release it.
Since we now force a full GC on every garbage collection, this slows the
process down by ~10%, but memory usage is stable & all memory gets
eventually released. On a busy system, this can amount to many GBs.
Partner-in-crime: @essen
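One way to get a full GC on every collection is to start the process with a fullsweep_after of 0; this is a sketch, and whether the commit uses spawn_opt or another mechanism is not shown here:

```erlang
%% With fullsweep_after set to 0, every GC is a fullsweep, so
%% old-heap garbage is reclaimed continuously.
Pid = spawn_opt(fun() -> receive stop -> ok end end,
                [{fullsweep_after, 0}]),
{garbage_collection, Info} = process_info(Pid, garbage_collection),
0 = proplists:get_value(fullsweep_after, Info).
```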
Otherwise, we can trigger a stack overflow in the Erlang VM itself, as
described in ERL-592: https://bugs.erlang.org/browse/ERL-592
The previous code works fine when 50k queues are deleted out of 100k
queues, but fails when there are 150k queues in total & 50k need to be
deleted.
Partner-in-crime: @essen
Rather than using 6 ETS operations per deleted queue, build a match
spec that matches all queues & use ets:select_replace/2 to delete
metrics for all queues in 4 super efficient ETS operations.
Great suggestions @michaelklishin & @hairyhum!
ee0951b1b3 (comments)
Notice that build_match_spec_conditions_to_delete_all_queues/1 is not
tail-recursive, because it's more efficient this way due to the required
return value. The exact details elude me; @essen can answer this better
than I can.
Linked to rabbitmq/rabbitmq-server#1526
For initial context, see #1513
Partner-in-crime: @essen
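The approach can be sketched on a toy table; the layout and names below are illustrative, not the real metrics schema:

```erlang
%% Reset the counter of every queue in the delete set with one
%% table traversal instead of one operation per key.
T = ets:new(queue_metrics, [set]),
true = ets:insert(T, [{q1, 5}, {q2, 7}, {q3, 9}]),
%% Conditions are an 'orelse' over the keys to match, similar in
%% spirit to build_match_spec_conditions_to_delete_all_queues/1:
Cond = {'orelse', {'=:=', '$1', q1}, {'=:=', '$1', q2}},
2 = ets:select_replace(T, [{{'$1', '_'}, [Cond], [{{'$1', 0}}]}]),
[{q1, 0}] = ets:lookup(T, q1),
[{q3, 9}] = ets:lookup(T, q3).
```

Note that select_replace requires the match spec to retain the key; binding it to '$1' in both the pattern and the result satisfies that.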
That way, it doesn't interfere with testcases working with multiple
RabbitMQ nodes. Furthermore, independent sets of nodes won't try to
autoconnect to the common_test node, and thus to unrelated nodes
afterwards.
This is useful for the upcoming VM-based test helpers.
[#153749132]
(cherry picked from commit 47a5bdfff548a5c278af2d947feeeb7836aae0c3)
... while checking if we are connected to the given PID's node. This
fixes an issue where an Erlang client, connected directly to the
RabbitMQ node (as opposed to using a TCP connection), is running on a
hidden Erlang node.
[#153749132]
(cherry picked from commit 48df58b63dc1157a9954fc5413aa027cb9552db8)
That way, it doesn't interfere with testcases working with multiple
RabbitMQ nodes. Furthermore, independent sets of nodes won't try to
autoconnect to the common_test node, and thus to unrelated nodes
afterwards.
This is useful for the upcoming VM-based test helpers.
[#153749132]
(cherry picked from commit 5c0546fb5bf9e7d61e90399373b11962011f548d)
... while checking if we are connected to the given PID's node. This
fixes an issue where an Erlang client, connected directly to the
RabbitMQ node (as opposed to using a TCP connection), is running on a
hidden Erlang node.
[#153749132]
(cherry picked from commit 66f5ae28ea993d761507dbd9034d59491f0d2bc6)
That way, it doesn't interfere with testcases working with multiple
RabbitMQ nodes. Furthermore, independent sets of nodes won't try to
autoconnect to the common_test node, and thus to unrelated nodes
afterwards.
This is useful for the upcoming VM-based test helpers.
[#153749132]
Those functions are currently looking into opaque types from
`ranch_proxy_protocol`. Until this is fixed, we just ignore the warnings
and comment out the specs.
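The silencing can be done with a module-level `-dialyzer` attribute; the function names below are made up:

```erlang
%% Ignore warnings for functions peeking into ranch_proxy_protocol's
%% opaque types (hypothetical function list):
-dialyzer({nowarn_function, [proxy_socket_info/1, proxy_accept/2]}).
```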
[#153850881]
rabbitmq-common shouldn't see this, otherwise it's a reverse dependency
and thus a dependency cycle.
In the future, we should fix this by adding rabbitmq-cli as an explicit
dependency to whatever component needs it.
[#153850881]
While here, silence the same warning for do_multi_call() because it
comes from an anonymous function. We could move the anonymous function
to a regular function and give it a spec. However, we prefer to keep the
diff with upstream small.
[#153850881]
They were returning `true` (the return value of ets:insert() or
ets:delete()), whereas many others were returning `ok`. So make them
return `ok` for consistency's sake. Furthermore, it matches their
specification.
These warnings were reported by Dialyzer.
[#153850881]
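For instance (names are illustrative):

```erlang
%% Discard the boolean from ets:delete/2 so the function matches a
%% `-spec remove(ets:tab(), term()) -> ok.` specification.
remove(Tab, Key) ->
    true = ets:delete(Tab, Key),
    ok.
```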
Thus, use the correct return type of `no_return()`. Even though it's
defined as an alias of `none()` according to the documentation, it
doesn't have the same semantics.
The warning was reported by Dialyzer.
[#153850881]
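A minimal example of the intended spec (names hypothetical):

```erlang
%% A function that always exits should be specced as no_return(),
%% not none(); Dialyzer treats the two differently in return
%% position.
-spec bail(term()) -> no_return().
bail(Reason) ->
    exit(Reason).
```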
Anonymous functions were converted to regular functions so that we could
add `-spec()` directives.
The `spec()` for the `error_handler()` callback was updated to accept
functions throwing exceptions.
This was reported by Dialyzer.
[#153850881]
They can mention functions which we are removing or renaming. Therefore
we need to apply the same changes to the `-dialyzer` attribute(s).
[#154054286]
This way, we are sure that the possibly newer recipe in
`rabbitmq-tools.mk` handles the operation instead of a possibly obsolete
`rabbitmq-components.mk` copy.
[#154761483]
To avoid a copy of `rabbitmq-components.mk` from rabbitmq-common's
`v3.6.x` branch to another component's `master` branch, we want to
compare branch names of both repositories.
This situation may occur when checking out the `v3.6.x` branch in the
Umbrella using `gmake up BRANCH=v3.6.x`, while some plugins don't have
such a branch. In this case, they remain on the previous branch. We
don't want to overwrite their `rabbitmq-components.mk` if this happens.
[#154761483]