It was automatically happening for e.g. `make start-cluster`.
But some plugins were not covered by default generated config, and
running rabbit from 2 different worktrees was a bit complicated.
A value that is too low will prevent the index from shutting
down in time when there are many queues. This leads to the
process being killed and on the next RabbitMQ restart a
(potentially very long) dirty recovery is needed.
The value of 10 minutes was chosen to mirror the shutdown
timeout of the message store. Since both queues and message
store need to have shut down gracefully in order to have
a clean restart, it makes sense to use the same value.
Related: c40c2628a9
When we fail to parse the name of the cipher suite from the PROXY
protocol, just say that no SSL is used, instead of trying to fill it in
with data from the connection between the proxy and our server.
A user could already enable single-line logging (the `single_line`
option of `logger_formatter` or RabbitMQ internal formatters) from the
configuration file. For example:
log.console.formatter.single_line = on
With this patch, the option can be enabled from the `$RABBITMQ_LOG`
environment variable as well:
make run-broker RABBITMQ_LOG=+single_line
Those environment variables are unset by default. The default values are
set in the `rabbit` application environment and can be configured in the
configuration file. However, the environment variables will take
precedence over them respectively if they are set.
They were trying to run `hostname` and `which`, which produced a bunch
of error messages in a hermetic build environment.
And the performance of those `shell` calls is not very important, as
they are called just a few times during script runtime anyway (there is
a hack to make them lazy yet evaluated only once, but it's hardly worth
it).
Unlike pg2, pg in Erlang 24 is eventually consistent. So this
reintroduces some of the same kind of locking mirrored_supervisor
used to rely on implicitly via pg2.
Per discussion with @lhoguin.
Closes #3260.
References #3132, #3154.
This has the unfortunate side effect of causing a rebuild of all
applications every time. I need to figure out another place to build and
install the CLI during build time (instead of as part of the dist
target).
This reverts commit 4322cca66e.
and assume it is a string-like value ("directory string")
because other values would not make much sense in the
username extraction context.
References #2983.
instead of specific ones since they will vary with the payload
(one of them likely indicates UTF string length).
This is still not perfect because we limit the maximum
allowed length but it works fine with identifiers up to 100
characters long, which should be good enough for this
best-effort handling of an obscure SAN type.
References #2983.
The parser didn't handle literals of the form:
'single-quoted'unquoted'single-quoted-again'"or-even-double-quoted"
In particular, the unquoted parsing assumed that nothing else could
follow it. The testsuite is extended with the issue reporter's case.
While here, improve the handling of escaped characters, which
previously were not given any specific parsing at all.
Fixes #2969.
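The accepted grammar can be illustrated with a small sketch (Python for brevity; the actual parser is Erlang code and handles more cases, such as escapes):

```python
def parse_literal(s):
    # Concatenate adjacent single-quoted, double-quoted and unquoted
    # segments into a single value. Illustrative sketch only; the real
    # parser also handles escaped characters.
    out, i = [], 0
    while i < len(s):
        if s[i] in ("'", '"'):
            closing = s.index(s[i], i + 1)   # matching closing quote
            out.append(s[i + 1:closing])
            i = closing + 1
        else:
            start = i
            while i < len(s) and s[i] not in ("'", '"'):
                i += 1
            out.append(s[start:i])
    return "".join(out)
```

With this, the reported input parses as one concatenated literal instead of stopping after the first unquoted run.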
Note that the type by definition contains arbitrary values. According
to the OTP types, they are triplets that effectively represent
a key/value pair. So we assume the pair is a string that needs a bit
of massaging, namely stripping the UTF encoding prefix the OTP
AnotherName decoder leaves in.
Kudos to @Thibi2000 for providing an example value.
Closes #2983.
for usability. It is not any different from when a float value
is used and only exists as a counterpart to '{absolute, N}'.
Also nothing changes for rabbitmq.conf users as that format performs
validation and correct value translation.
See #2694, #2965 for background.
Adds WORKSPACE.bazel, BUILD.bazel & *.bzl files for partial build & test with Bazel. Introduces a build-time dependency on https://github.com/rabbitmq/bazel-erlang
The consolidation of `rabbitmq-components.mk` broke the previous
method by which rabbit components were detected. Now we check
$(RABBITMQ_COMPONENTS) directly.
In kind version 0.10.0, when creating a 5-node RabbitMQ cluster
with the new parallel PodManagementPolicy, we observed that some
pods were restarted. Their logs included:
```
10:10:03.794 [error]
10:10:03.804 [error] BOOT FAILED
10:10:03.805 [error] ===========
BOOT FAILED
10:10:03.805 [error] ERROR: epmd error for host r1-server-0.r1-nodes.rabbitmq-system: nxdomain (non-existing domain)
10:10:03.805 [error]
===========
ERROR: epmd error for host r1-server-0.r1-nodes.rabbitmq-system: nxdomain (non-existing domain)
10:10:04.806 [error] Supervisor rabbit_prelaunch_sup had child prelaunch started with rabbit_prelaunch:run_prelaunch_first_phase() at undefined exit with reason {epmd_error,"r1-server-0.r1-nodes.rabbitmq-system",nxdomain} in context start_error
10:10:04.806 [error] CRASH REPORT Process <0.152.0> with 0 neighbours exited with reason: {{shutdown,{failed_to_start_child,prelaunch,{epmd_error,"r1-server-0.r1-nodes.rabbitmq-system",nxdomain}}},{rabbit_prelaunch_app,start,[normal,[]]}} in application_master:init/4 line 138
```
Eventually, after some pods restarted up to 2 times, all pods were running and ready.
In kind, we observed that during the first couple of seconds, nslookup was failing as well for that domain
with nxdomain.
It took up to 30 seconds until nslookup succeeded.
With this commit, pods don't need to be restarted when creating a fresh
RabbitMQ cluster.
This allows including additional applications or third party
plugins when creating a release, running the broker locally,
or just building from the top-level Makefile.
To include Looking Glass in a release, for example:
$ make package-generic-unix ADDITIONAL_PLUGINS="looking_glass"
A Docker image can then be built using this release and will
contain Looking Glass:
$ make docker-image
Beware macOS users! Applications such as Looking Glass include
NIFs. NIFs must be compiled in the right environment. If you
are building a Docker image then make sure to build the NIF
on Linux! In the two steps above, this corresponds to Step 1.
To run the broker with Looking Glass available:
$ make run-broker ADDITIONAL_PLUGINS="looking_glass"
This commit also moves Looking Glass dependency information
into rabbitmq-components.mk so it is available at all times.
Lager strips trailing newline characters but OTP logger with the default
formatter adds a newline at the end. To avoid unintentional multi-line log
messages we have to revisit most messages logged.
Some log entries are intentionally multiline, others
are printed to stdout directly: newlines are required there
for sensible formatting.
The configuration remains the same for the end-user. The only exception
is the log root directory: it is now set through the `log_root`
application env. variable in `rabbit`. People using the Cuttlefish-based
configuration file are not affected by this exception.
The main change is how the logging facility is configured. It now
happens in `rabbit_prelaunch_logging`. The `rabbit_lager` module is
removed.
The supported outputs remain the same: the console, text files, the
`amq.rabbitmq.log` exchange and syslog.
The message text format changed slightly: the timestamp is more precise
(now to the microsecond) and the level is abbreviated to a fixed
4 characters to align all messages and improve readability. Here is
an example:
an example:
2021-03-03 10:22:30.377392+01:00 [dbug] <0.229.0> == Prelaunch DONE ==
2021-03-03 10:22:30.377860+01:00 [info] <0.229.0>
2021-03-03 10:22:30.377860+01:00 [info] <0.229.0> Starting RabbitMQ 3.8.10+115.g071f3fb on Erlang 23.2.5
2021-03-03 10:22:30.377860+01:00 [info] <0.229.0> Licensed under the MPL 2.0. Website: https://rabbitmq.com
The example above also shows that multiline messages are supported and
each line is prepended with the same prefix (the timestamp, the level
and the Erlang process PID).
JSON is also supported as a message format, now for any output.
Indeed, it is possible to use it with e.g. syslog or the exchange. Here
is an example of a JSON-formatted message sent to syslog:
Mar 3 11:23:06 localhost rabbitmq-server[27908] <0.229.0> - {"time":"2021-03-03T11:23:06.998466+01:00","level":"notice","msg":"Logging: configured log handlers are now ACTIVE","meta":{"domain":"rabbitmq.prelaunch","file":"src/rabbit_prelaunch_logging.erl","gl":"<0.228.0>","line":311,"mfa":["rabbit_prelaunch_logging","configure_logger",1],"pid":"<0.229.0>"}}
For quick testing, the values accepted by the `$RABBITMQ_LOGS`
environment variable were extended:
* `-` still means stdout
* `-stderr` means stderr
* `syslog:` means syslog on localhost
* `exchange:` means logging to `amq.rabbitmq.log`
`$RABBITMQ_LOG` was also extended. It now accepts a `+json` modifier (in
addition to the existing `+color` one). With that modifier, messages are
formatted as JSON instead of plain text.
The `rabbitmqctl rotate_logs` command is deprecated. The reason is
Logger does not expose a function to force log rotation. However, it
will detect when a file was rotated by an external tool.
From a developer point of view, the old `rabbit_log*` API remains
supported, though it is now deprecated. It is implemented as regular
modules: there is no `parse_transform` involved anymore.
In the code, it is recommended to use the new Logger macros. For
instance, `?LOG_INFO(Format, Args)`. If possible, messages should be
augmented with some metadata. For instance (note the map after the
message):
?LOG_NOTICE("Logging: switching to configured handler(s); following "
"messages may not be visible in this log output",
#{domain => ?RMQLOG_DOMAIN_PRELAUNCH}),
Domains in Erlang Logger parlance are the way to categorize messages.
Some predefined domains, matching previous categories, are currently
defined in `rabbit_common/include/logging.hrl` or headers in the
relevant plugins for plugin-specific categories.
At this point, very few messages have been converted from the old
`rabbit_log*` API to the new macros. It can be done gradually when
working on a particular module or on its logging.
The Erlang builtin console/file handler, `logger_std_h`, has been forked
because it lacks date-based file rotation. The configuration of
date-based rotation is identical to Lager. Once the dust has settled for
this feature, the goal is to submit it upstream for inclusion in Erlang.
The forked module is called `rabbit_logger_std_h` and is based on
`logger_std_h` from Erlang 23.0.
Previously, subsequent nodes failed to start because ports were already
in use. This change makes it possible to start multiple nodes locally
with all plugins enabled.
Signed-off-by: Gerhard Lazu <gerhard@lazu.co.uk>
as node names grow.
Prior to this change, direct reply-to consumer channels
were encoded using term_to_binary/1, which means the result
would grow together with node name (since node name
is one of the components of an Erlang pid type).
This means that with long enough hostnames, reply-to
identifiers could overflow the 255 character limit of
message property field type, longstr.
With this change, the encoded value uses a hash of the node name
and then locates the actual node name from a map of
hashes to current cluster members.
In addition, instead of generating non-predictable "secure"
GUIDs, the feature now generates "regular" predictable GUIDs,
which compensates for some of the additional PID pre- and
post-processing outlined above.
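The scheme can be sketched as follows (a Python illustration under assumed names; the actual Erlang encoding differs in its details):

```python
import hashlib

def node_hash(node_name):
    # Short, stable digest standing in for the full node name in the
    # reply-to identifier (hypothetical scheme, not RabbitMQ's exact one).
    return hashlib.sha256(node_name.encode()).hexdigest()[:8]

def resolve_node(h, cluster_members):
    # Rebuild the hash -> node name map from current cluster members
    # and look up the hash to recover the full node name.
    return {node_hash(n): n for n in cluster_members}[h]
```

The encoded identifier stays short regardless of how long the hostname grows, which is the point of the change.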
Now that dependencies are packaged as directories and not .ez
files, the fact that both LG and LZ4 are NIFs is no longer
an issue. And having it as regular dependencies simplifies
REPL-driven profiling.
Per discussion with @dumbbell.
The `set` command in the implementation of `/bin/sh` included in the
official RabbitMQ Docker image returns multi-line variable values
differently than the tested Bourne shell implementations (GNU Bash, dash
and FreeBSD sh).
I don't know what implementation is used by that Docker image, but here
is the output of `set`, for a variable set to "\n'test'":
TEST_VAR='
'"'"'test'"'"
The problem was reported in the following discussion:
https://github.com/rabbitmq/rabbitmq-server/discussions/2458
While here, add a small testcase to check a couple outputs.
... instead of 23.0.
Erlang 23.1 is the version the Concourse pipelines use. We expect the
Concourse pipelines and the GitHub Actions workflow to be on the same
page.
Garbage collection can then be done by deleting all entries in the ETS
table, and total counters per protocol can be kept without individually
scanning all entries.
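The idea can be sketched like this (a Python illustration; the real implementation uses ETS tables):

```python
class ProtocolStats:
    # Per-entry records live in `entries`; running totals per protocol
    # live separately in `totals`. Clearing `entries` (the GC step) does
    # not touch the totals, so no per-entry scan is needed to keep them.
    def __init__(self):
        self.entries = {}
        self.totals = {}

    def record(self, entry_id, protocol):
        self.entries[entry_id] = protocol
        self.totals[protocol] = self.totals.get(protocol, 0) + 1

    def gc(self):
        self.entries.clear()
```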
net_adm:names/1 returns a new value, 'noport', in Erlang 24. This value
being absent from the function spec in previous versions of Erlang, we
get a warning from Dialyzer until we start to use the yet-to-be-released
Erlang 24 in CI. Therefore we disable this specific warning.
... instead of .ez archives.
The benefits of doing this:
* We can use native code, as is the case for lz4 and zstd bindings in
the Tanzu RabbitMQ version for instance. Indeed, Erlang does not
support loading native code (NIF or port drivers) from .ez archives.
* We can remove custom code to handle .ez archives. We have special
cases in Erlang.mk plugins as well as the `rabbit_plugins` module, in
particular the code to extract .ez archives (even though Erlang knows
how to use them directly).
* Prevent hard to debug situations when the .ez archive name does not
match the top-level directory inside the archive. In this case, Erlang
says it can't load the application but doesn't tell much more.
* Debugging and "hot-patching" plugins become easier: we just have to
copy the recompiled .beam file in place of the existing one. There
is no need to unpack the plugin, replace the file and recreate the
archive.
* Release packages can be smaller. gzip, bzip2 and xz, common
compression algorithms for Unix packages, give much better results if
they compress the .beam files directly instead of "compressing" zip
files (the .ez archives are plain zip archives). For instance, the
generic-unix package goes from 15 MiB when using .ez archives to just
12 MiB when using directories.
I would also like to experiment with Erlang releases in the future.
Using directories for Erlang applications instead of .ez archives is
mandatory for this to work according to my latest tests.
Of course, this change doesn't break support for .ez archives (and we
will keep support for this). End users can still download third-party
plugins as .ez archives and drop them in the plugins directory.
On Windows, the current working directory is also searched, which can
lead to problems. Instead, use `init:get_argument(root)` to get the root
of the Erlang release, then we know `bin/erl` will always be present.
In addition to the `rabbitmq-components.mk` existence check, we now
verify that the directory is named `deps`.
This is to increase the chance that, if we find a
`rabbitmq-components.mk` file in the upper directories, this project is
indeed inside a DEPS_DIR.
For instance, in our GitHub Actions workflows, when we prepared the
secondary umbrellas for mixed-version testing, it happened that the
secondary umbrellas were under a clone of rabbitmq-server. Therefore
the first (and only) condition was met and the Makefile erroneously
considered it was inside a DEPS_DIR. As a consequence, dependencies of
the umbrellas were fetched in the wrong place.
and add a VMware copyright notice.
We did not mean to make this code Incompatible with Secondary Licenses
as defined in [1].
1. https://www.mozilla.org/en-US/MPL/2.0/FAQ/
When we source the $CONF_ENV_FILE script, we set a few variables which
this script expects. Those variables are given without their prefix. For
instance, $MNESIA_BASE.
The $CONF_ENV_FILE script can set $RABBITMQ_MNESIA_BASE. Unfortunately
before this patch, the variable would be ignored, in favor of the
default value which was passed to the script ($MNESIA_BASE).
The reason is that variables set by the script are handled in the
alphabetical order. Thus $MNESIA_BASE is handled first, then
$RABBITMQ_MNESIA_BASE.
Because the code didn't give any precedence, the first variable set
would "win". This explains why users who set $RABBITMQ_MNESIA_BASE in
$CONF_ENV_FILE, but using RabbitMQ 3.8.4+ (which introduced
`rabbit_env`), unexpectedly had their node use the default Mnesia base
directory.
The patch is rather simple: when we check if a variable is already set,
we give precedence to the $RABBITMQ_* prefixed variables. Therefore, if
the $CONF_ENV_FILE script sets $RABBITMQ_MNESIA_BASE, this value will be
used, regardless of the value of $MNESIA_BASE.
This didn't happen with variables set in the environment (i.e. the
environment of rabbitmq-server(8)) because the prefixed variables
already had precedence.
Fixes rabbitmq/rabbitmq-common#401.
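The precedence rule boils down to the following lookup (a Python sketch of the logic; the real code lives in `rabbit_env` and the shell scripts):

```python
def effective_value(env, name):
    # Prefer the RABBITMQ_-prefixed variable over the unprefixed one,
    # regardless of which one happens to be processed first.
    prefixed = env.get("RABBITMQ_" + name)
    return prefixed if prefixed is not None else env.get(name)
```

So a `$RABBITMQ_MNESIA_BASE` set in `$CONF_ENV_FILE` wins over the `$MNESIA_BASE` default passed into the script.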
This allows RabbitMQ to configure `rabbit_log` as a Logger handler.
See a related commit in rabbit_prelaunch_early_logging in
rabbitmq-server, where `rabbit_log` is being configured as a Logger
handler. The commit message explains the reason behind this.
The default timeout of 30 seconds was not sufficient to allow graceful
shutdown of a message store with millions of persistent messages.
Rather than increase the timeout in general, introduce a new macro with
a default of 600 seconds.
... instead of the cache action.
The cache action is quite unstable (failing to download the cached
files). In this commit, we try to use the artefacts instead. At this
point, we don't know if it is more reliable, but we'll see with time.
As an added bonus, we can download the archives passed between jobs for
inspection if we need to.
Otherwise, for instance, running Dialyzer in the Erlang client fails with the
following error if it was cloned directly (i.e. outside of the Umbrella):
dialyzer: Bad directory for -pa: .../amqp_client/deps/rabbitmq_cli/_build/dev/lib/rabbitmqctl/ebin
... and their value.
Both prefixed and non-prefixed variables are returned by this function.
While here, fix a conflict between $RABBITMQ_HOME and $HOME in
var_is_used/1: the latter shouldn't be considered as used.
When we generate the workflows, we pick the latest tag of each release
branch. That list of tags is used to clone secondary umbrellas in the
workflows and run the testsuites against each of them.
When generating workflows for `master`, we take the latest tag of each
release branch.
When generating workflows for a release branch, we take the latest tag
of each older release branch, plus the first tag of the same release
branch.
Some examples:
* `master` is tested with 3.8.3 and 3.7.25
* `v3.8.x` is tested with 3.8.0 and 3.7.25
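The selection rules can be sketched as follows (Python, with a hypothetical `release_tags` structure mapping each release branch, oldest first, to its ordered list of tags):

```python
def secondary_umbrella_tags(branch, release_tags):
    # release_tags: ordered dict {branch: [first_tag, ..., latest_tag]}
    branches = list(release_tags)
    if branch == "master":
        # Latest tag of every release branch.
        return [tags[-1] for tags in release_tags.values()]
    older = branches[:branches.index(branch)]
    # Latest tag of each older branch, plus this branch's first tag.
    return [release_tags[b][-1] for b in older] + [release_tags[branch][0]]
```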
We need a monotonically increasing number for the version used by the
Concourse S3 resource. A Git commit hash does not work because they do
not have this property.
The main entry point is `make github-actions` which generates the
workflows.
Currently, it handles workflows to test the project with different
versions of Erlang.
It generates a file called `$(PROJECT)-rabbitmq-deps.mk` which has a
dependency definition line of the form expected by Erlang.mk, for each
RabbitMQ component the project depends on.
Therefore the line indicates:
* `git` as the fetch method
* the repository URL
* the Git commit hash the dependency is on
Here is an example for rabbitmq-server:
dep_rabbit_common := git https://github.com/rabbitmq/rabbitmq-common.git d9ccd8d9cdd58310901f318fed676aff59be5afb
dep_rabbitmq_cli := git https://github.com/rabbitmq/rabbitmq-cli.git f6eaae292d27da4ded92b7c1b51a8ddcfefa69c2
dep_rabbitmq_codegen := git https://github.com/rabbitmq/rabbitmq-codegen.git 65da2e86bd65c6b6ecd48478ab092721696bc709
The double-quoting was required in the flock(1)/lockf(1) blocks because
of the use of `sh -c`. However it's incorrect in the `else` block.
Follow-up to commit 3f32a36e50.
The CLI has a high startup time. To speed up the
`start-background-broker` and `stop-node` recipes, two CLI calls are
replaced by two more basic commands which achieve the same goal.
The problem with the previous approach was that the `$(wildcard ...)`
directives might be evaluated too early: `deps/rabbit` might not be
available yet.
Moving the computation to the body of the recipe fixes the problem
because dependencies are available at this point.
In other words, if instead of cloning the Umbrella, one cloned
rabbitmq-server directly, the `install-cli-scripts` recipe would fail to
copy the scripts because it assumed `rabbit` was under `$(DEPS_DIR)`.
Now expected places are checked and an error is emitted if the recipe
can't find the right one.
dispatch_sync sits in between the behavior of submit and submit_async,
blocking the caller until a worker begins the task, as opposed
to not blocking at all, or blocking until the task has finished.
This is useful when you want to throttle submissions to the pool
from a single process, such that all workers are busy, but there
exists no backlog of work for the pool.
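The three submission modes can be contrasted with a small threaded sketch (Python; the real pool is Erlang's worker pool, and these names only mirror its semantics):

```python
import threading
import queue

class Pool:
    def __init__(self, size=2):
        self._q = queue.Queue()
        for _ in range(size):
            threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            started, fun = self._q.get()
            started.set()      # signal that a worker picked up the task
            fun()

    def submit_async(self, fun):
        # Never blocks the caller.
        self._q.put((threading.Event(), fun))

    def dispatch_sync(self, fun):
        # Blocks only until a worker *begins* the task, so a steady
        # caller keeps workers busy without building up a backlog.
        started = threading.Event()
        self._q.put((started, fun))
        started.wait()
```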
On Darwin, the default tar fails with an unknown `--transform` flag.
FAILS: bsdtar 2.8.3 - libarchive 2.8.3
SUCCEEDS: tar (GNU tar) 1.32
re https://github.com/rabbitmq/rabbitmq-common/pull/364
Signed-off-by: Gerhard Lazu <gerhard@lazu.co.uk>
If there are common_test logs (i.e. `logs` exists), it creates an archive
(compressed with xz(1)) in the top-level directory.
The archive is named `$(PROJECT)-ct-logs-$timestamp.tar.xz` by default.
The name can be changed by setting `$(CT_LOGS_ARCHIVE)`. The file
extension must be `.tar.xz`.
The documentation says we should be able to use ?=, but apparently it
affects the way variables are passed to sub-make.
The issue we had is that using: `make start-cluster RABBITMQ_CONFIG_FILE=...`
didn't work as expected: `$(RABBITMQ_CONFIG_FILE)` made it to the
sub-make but not to the sub-make's recipe.
Using := fixes the problem.
Doing that is ok because assigning `$(RABBITMQ_CONFIG_FILE)` in the
environment or on make(1)'s command line will override the
target-specific variable anyway.
They were plain by default & are now blue which works really well with
Gruvbox Dark. I couldn't change just the debug color, had to redefine
them all.
cc @dumbbell @lukebakken
Signed-off-by: Gerhard Lazu <gerhard@lazu.co.uk>
When running the broker locally, in dev, this is what most of us want.
To change this, use e.g. RABBITMQ_LOG=info (previous default).
Signed-off-by: Gerhard Lazu <gerhard@lazu.co.uk>
This turns off WAL preallocation and saves 400+ MiB per node directory.
This setting only applies to nodes started with `make run-broker` or
from our testsuites. RabbitMQ default configuration remains unaffected.
Using dependencies seemed sensible in the first place, but they are also
special cases like `rabbit` itself. In the end, it looks simpler to just
list rabbitmq-common and rabbitmq-amqp1.0-common in a blacklist and
install CLI for everything else.
We want to test PRs such as
https://github.com/deadtrickster/prometheus.erl/pull/102
in RabbitMQ master (3.9.x) so that we can test fixes against other
master components, like OTP 23 (erlang-git).
Signed-off-by: Gerhard Lazu <gerhard@lazu.co.uk>
... between the current project and rabbitmq-common.
Like with `rabbitmq-components.mk`, this avoids using an incorrect copy
if the current project uses a different branch or does not have e.g. a
`v3.8.x` branch (unlike rabbitmq-common).
We need to communicate this information to rabbitmq-components.mk so it
selects the right branch for each dependency.
By default, it would query git(1), but after Travis clones and possibly
merges branches, it does not have access to the information anymore.
Fortunately, the Travis environment has everything we need.
$base_rmq_ref was already set properly in a previous commit.
If Travis is building a tag, $TRAVIS_BRANCH will contain the appropriate
value, so this works in this case as well.
We now also check if `rabbitmq-components.mk` is up-to-date.
To do so, we set the language to Elixir, even though almost all our
projects are written in Erlang. But we need Elixir for the RabbitMQ CLI.
Specifying Elixir as the language in Travis allows us to:
1. make sure Elixir is installed by Travis
2. specify the versions of both Erlang/OTP and Elixir
We also set an explicit install step. Not that we care about `mix
local.hex`, but we need to override the default Travis install step
which assumes this is an Elixir (mix) based project.
We take this opportunity to add Erlang/OTP 22.2 to the build matrix.
While here, we bring two fixes:
* Warnings reported by Travis are solved: the OS is set explicitly and
`sudo` is removed.
* The "git checkout" gymnastic is replaced by simply setting
`$base_rmq_ref`. This is a better solution to make sure the
appropriate dependencies' branch is selected.
Exactly as we previously set the file log level to debug.
Note that it does not enable logging on the console, it only changes the
default log level if the user of `make run-broker` enables console
logging (using `make run-broker RABBITMQ_LOGS=-`).
[#171131596]
The previous value accepted for this behavior was "NONE". But it's more
intuitive to set it to nothing.
`rabbitmq-run.mk` is also updated to allow `$RABBITMQ_ENABLED_PLUGINS`
to be overridden e.g. on the command line.
It guesses the node name type, based on the host part of a node name.
I.e., if it contains at least a `.` character, it's a longname.
This matches the verification `net_kernel` does to make sure the node
name corresponds to the shortnames/longnames option.
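The guess amounts to the following check (a Python rendering; the actual code is Erlang):

```python
def guess_name_type(node_name):
    # If the host part of name@host contains a dot, treat it as a
    # longname; otherwise it is a shortname. This mirrors the check
    # net_kernel performs on node names.
    _name, _, host = node_name.partition("@")
    return "longnames" if "." in host else "shortnames"
```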
There are two changes in this patch:
1. In `get_default_plugins_path_from_node()`, we base the search on
`rabbit_common.app` instead of `code:lib_dir(rabbit_common)`.
The latter only works if the application directory is named
`rabbit_common` or `rabbit_common-$version`. This is not the case
with a default Git clone of the repository because the directory will
be named `rabbitmq-common`.
Using `rabbit_common.app` is fine because it is inside the `ebin`
directory, as all modules. It also brings another benefit: it is not
subject to cover-compilation or preloading (which both get rid of the
original module location).
2. The code to determine the plugins directory based on the directory
containing the module (or `rabbit_common.app`) now takes into account
plugin directories (as opposed to .ez archives). In this case, there
is one less path component compared to an .ez archive.
I.e. we record the fact that a particular value:
* is the default value, or
* comes from an environment variable, or
* comes from querying a remote node
This required a significant refactoring of the module, which explains
the large diff.
At the same time, the testsuite was extended to cover more code and
situations.
This work permits us to move remaining environment variables checked by
`rabbit` to this module. They include:
* $RABBITMQ_LOG_FF_REGISTRY
* $RABBITMQ_FEATURE_FLAGS
* $NOTIFY_SOCKET
[#170149339]
Compared to `all_module_attributes/0`, it only scans applications which
are related to RabbitMQ: either a RabbitMQ core application or a plugin
(i.e. an application which depends on `rabbit`).
On my laptop, this significantly reduces the time to query module
attributes in the case of feature flags: it goes from 830 ms to 235 ms
just by skipping all Erlang/OTP applications and third-party
dependencies.
This makes a small improvement to RabbitMQ startup time, which is
visible for developers mainly, not for a production instance.
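The filter can be sketched like this (Python, under an assumed `apps` mapping of application name to its dependencies; the real code inspects application metadata, and also keeps RabbitMQ core applications):

```python
def rabbitmq_related_apps(apps):
    # Keep `rabbit` itself and any application that depends on it
    # (i.e. a plugin); skip all other Erlang/OTP and third-party
    # applications. Simplified: core apps are handled separately.
    return sorted(
        name for name, deps in apps.items()
        if name == "rabbit" or "rabbit" in deps
    )
```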
To be used in branches other than `master`. It will take `.gitignore`
from master and replace the current copy with it.
Like a few other targets, it supports `DO_COMMIT=yes` to commit the
change as well.
When we are running Makefile recipes from an application under
`$(APPS_DIR)`, we want to locate the Umbrella correctly to:
- set `$(DEPS_DIR)` accordingly
- prevent `make distclean` from removing `$(DEPS_DIR)`
Before this change and after `rabbit/apps/rabbitmq_prelaunch` was added,
running `make distclean` in `rabbit` removed everything under
`$(DEPS_DIR)`.
There was one legitimate warning in `get_enabled_plugins()`:
`get_prefixed_env_var()` already takes care of converting an empty
string to false.
The other warning is because `loading_conf_env_file_enabled()` returns a
boolean when compiled for tests, but always true when compiled for
production. Dialyzer only sees the second case and thinks the cases
where the function returns false will never happen.
... instead of `.ez` archives.
The default is still to create `.ez` archives for all RabbitMQ
components & plugins.
However if `$(USE_RABBIT_BOOT_SCRIPT)` is set (experimental and
undocumented for now), they are distributed as directories.
This is handled by the `rabbitmq_prelaunch` application now, based on
the value of `$RABBITMQ_ENABLED_PLUGINS`.
`$(RABBITMQ_ENABLED_PLUGINS_FILE)` depended on `dist`. This dependency
was moved to individual `run-*` and `start-*` targets.
While here, re-use `test-dist` instead of `dist` if the build was
already done for tests.
The testsuites default to run `make test-dist` as a first step.
Therefore later, when it starts a node, it should re-use that instead of
depending on `make dist` which will rebuild the tested project and
remove test dependencies from plugins.
This is useful (and mandatory in fact) now that `rabbit` is packaged
like plugins because, in the case of rabbitmq-erlang-client for
instance, the broker is a `$(TEST_DEPS)`: if starting a node runs `make
dist`, the broker will be removed.
... to the plugin being worked on, instead of locating `rabbit` and
taking the scripts there.
It greatly simplifies the use of RabbitMQ and plugins inside a
development working copy because the layout is closer to what we would
have in a package. I.e. there are far less special cases.
The goal is to distribute RabbitMQ core (the `rabbit` Erlang
application) exactly as we distribute plugins. This simplifies the
startup script and CLI tools when we have to setup Erlang code search
path.
... and default values.
It can also query a remote node for some specific values. The use case
is the CLI which should know what the RabbitMQ node it controls uses
exactly.
It supports several new environment variables:
RABBITMQ_DBG:
Used to setup `dbg` for some simple tracing scenarios.
RABBITMQ_ENABLED_PLUGINS:
Used to list plugins to enable automatically on node startup.
RABBITMQ_KEEP_PID_FILE_ON_EXIT:
Used to indicate if the PID file should be removed or kept when the
node exits.
RABBITMQ_LOG:
Used to configure the global and per-category log levels and enable
ANSI colors.
`ebin/test` is always touch(1)'d by Erlang.mk, which made the list of
dependencies of an .ez archive newer than the archive itself. This caused the
archive to be recreated.
While here, set `TEST_DIR` to something random in the case of `make
test-dist`: this way, rebuilding all testsuites is skipped by Erlang.mk.
Yes, this is a hack.
At least on the Windows Server 2019 AWS EC2 image, the `tasklist`
command is unavailable.
If that's the case, we fall back to using a PowerShell one-liner. It's
not the default, just in case PowerShell is unavailable.
This is now done in xrefr (`mk/xrefr`) and rabbitmq-ct-helpers when
needed.
This has several benefits:
* This fixes `make run-broker` on Windows because the computed
`$ERL_LIBS` was invalid there.
* This saves a lot of Makefile processing time, because elixir(1) is
quite slow to start up. On my laptop, a complete build in
rabbitmq-server-release went from 8.5 seconds to 3 seconds.
into a list, as the function implies.
All current call sites use it to call functions that return lists.
However, rabbitmq/rabbitmq-cli#389 breaks this cycle.
* Use `noinput`
* Use `-s erlang halt` to skip small `eval` overhead
* Use `no_dot_erlang` boot file since we do not want user customizations to interfere
These should be taken into account in the limits, but always be granted.
Files must be reserved by the queues themselves using `set_reservation/0` or
`set_reservation/1`. This is an absolute reservation that increases or
decreases the number of files reserved to reach the given amount on every
call.
[#169063174]
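The absolute-reservation semantics can be sketched as follows (Python; `set_reservation` comes from the text above, while the surrounding class and its other names are hypothetical):

```python
class FileHandleTracker:
    def __init__(self, limit):
        self.limit = limit
        self.reserved = 0
        self.open_count = 0

    def set_reservation(self, amount=1):
        # Absolute, not incremental: each call moves the reserved
        # count to exactly `amount`, growing or shrinking as needed.
        self.reserved = amount

    def may_open(self):
        # Reserved handles count against the limit but are always
        # granted; only unreserved opens can be refused.
        return self.open_count + self.reserved < self.limit
```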
... when we wait for a node started in the background.
This helps when the PID file is written asynchronously by the Erlang
node instead of by the rabbitmq-server(8) script: in the former case,
the `rabbitmqctl wait` command may start to wait earlier than in the
latter, and thus time out earlier.
On Windows, if we pass it a Windows-native path like `C:\User\...` or
even something with forward slashes, rsync(1) will consider that `C`
(before the colon) is a hostname and it should try to connect to it.
Using `cygpath.exe` on Windows converts the Windows path to a Unix-like
one (e.g. `/c/Users/...`).
Add metadata to virtual hosts
[#166298298]
rabbit_vhost: use record defaults
The vhost record moved to a versioned record in rabbitmq-server
Co-Authored-By: Michael Klishin <mklishin@pivotal.io>