... instead of Unix commands and a one-liner which assumes that `:` is
the path separator. This speeds up the build because we don't spawn a
shell.
While here, also remove `$(APPS_DIR)` from `$(ERL_LIBS)`.
We have to apply the same tag filtering when counting them as the one
done by git-describe(1).
This fixes the following error:
fatal: No names found, cannot describe anything
This issue was hit when there were tags in the project, but they were
all filtered out by git-describe(1).
WIP: the secret is hardcoded, which is obviously not secure. It is
enough, though, to see if modules/applications manipulating credentials
can use it to prevent those credentials from ending up in logs when the
state of crashed processes is dumped.
[#167070941]
References rabbitmq/rabbitmq-erlang-client#123
... in `commits-since-release`.
Before this change, the script was expecting at least one tag so that
git-describe(1) worked. Without that, it would fail with:
fatal: No names found, cannot describe anything.
Now, if a component has no tag, it will display "New in this release!".
Patch from @dumbbell
The application to "package" as a plugin (an .ez archive) might be under
`$(APPS_DIR)`. Therefore, all the variables and recipes are now created
from the path to the application, not just its name.
With the update of Erlang.mk, dependencies are not rebuilt anymore by
default, except if `FULL=1` is set.
This behavior does not suit the RabbitMQ workflow, where many
components are split across many repositories and we work on several of
them at the same time.
Therefore, the idea of this commit is to tell Erlang.mk to always visit
dependencies which are RabbitMQ components. Other dependencies are only
built once the first time.
[#166980833]
Unfortunately, the *-on-concourse targets still don't work: fly(1), the
Concourse CLI, looks to have regressed even more: it doesn't upload all
inputs. Half of them are just empty directories.
Obviously, compiling anything fails because of this.
In Erlang 22, the name is now `aes_256_cbc`.
The default cipher/hash/iterations are also set in rabbit's application
default environment. I'm going to remove those default values there
because the code already queries this module if they are missing from
the application environment.
This fixes a later crash in the call to `crypto:cipher_info/1`: the
cipher list returned by `crypto:supports/0` contains more ciphers, as
"old aliases" are added to that list, and those aliases are unsupported
by `crypto:cipher_info/1`.
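A possible way to build a cipher list that is safe to pass to
`crypto:cipher_info/1` could look like this (a sketch, not necessarily
how this commit fixes it; it simply skips any name rejected by
`crypto:cipher_info/1`):
    supported_ciphers() ->
        Ciphers = proplists:get_value(ciphers, crypto:supports()),
        [C || C <- Ciphers,
              try crypto:cipher_info(C), true catch error:_ -> false end].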
This reverts commit 7c9f170cee.
CLI tools cannot use this function as it logs errors.
`rabbit_resource_monitor_misc:parse_information_unit/1` is a better fit.
Erlang 22 will introduce TLS 1.3, but at the time of this commit, only
the server side is implemented. If the Erlang client requests TLS 1.3, the
server will accept it, but the client will either hang or crash.
So for now, just blacklist TLS 1.3 to avoid any issues, even on the
server side, just to be safe.
[#165214130]
The `creation` field might not fit into one byte which makes the
`PID_EXT` format unsuitable for this case.
The `NEW_PID_EXT` format is supported since Erlang 19.0, so it is safe
to always use it, no matter the value of `creation`, because RabbitMQ
3.7+ requires at least Erlang 19.3+.
References #313.
In OTP-22 the Creation field has been increased to be 32 bits.
For now we only need to handle it when using term_to_binary
and parsing the result manually.
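A minimal sketch of always emitting the new encoding by hand
(assumption: `NodeBin` already holds the externally encoded node atom;
88 is the `NEW_PID_EXT` tag):
    encode_pid(NodeBin, Id, Serial, Creation) ->
        %% NEW_PID_EXT (tag 88) carries a 32-bit Creation field and is
        %% understood by Erlang 19.0 and later.
        <<88, NodeBin/binary, Id:32, Serial:32, Creation:32>>.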
We don't need this anymore, now that the high watermark is bumped
automatically when the log level is set to `debug` in rabbit_lager.
This reverts commit 49956c6423.
The stop-node command is the only make target still using erl_call,
which is prone to breakage (it is broken in OTP 21.3) and can readily
be replaced with rabbitmqctl stop.
To fix this, introduce a new helper, `ascii_color/2`, which takes the
same flag (`UseColors`) as its second argument. If that flag is false,
it returns an empty string.
While here, move the `isatty()` function into this module.
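A minimal sketch of the helper (the colour names and escape sequences
here are illustrative only):
    ascii_color(_Color, false) -> "";
    ascii_color(red,    true)  -> "\e[31m";
    ascii_color(green,  true)  -> "\e[32m";
    ascii_color(reset,  true)  -> "\e[0m".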
In particular, we drop support for Erlang R13B and older in
`rabbit_cert_info`. This fixes an error reported by Dialyzer now that
the list of dependencies (`$(LOCAL_DEPS)`) is more correct.
... in version_minor_equivalent().
Note that we'll need a special case to exclude 3.7.x versions which
don't have the feature flags modules.
At the same time, we introduce the `strict_version_minor_equivalent()`
function which has the behavior the initial function had before. This is
used in the context of plugin compatibility checks: plugins can specify
a `broker_version_requirements` property and at this point, plugins
compatible with 3.7.x should not be considered as compatible with 3.8.x.
[#159298729]
See the corresponding commit in rabbitmq-server for all the
explanations.
Now, all accesses to the #amqqueue{} record are made through the
`amqqueue` module (available in rabbitmq-server). The new type name is
`amqqueue:amqqueue()`.
The `amqqueue.hrl` header also provides some macros to help with pattern
matching and guard expressions.
To help with this, code and modules were moved from rabbitmq-common to
rabbitmq-server.
[#159298729]
* Add single active consumer flag in consumer metrics
* Add function to update consumer metrics when a consumer is promoted
to single active consumer
[#163089472]
References rabbitmq/rabbitmq-management#649
Allow the backing queue implementation to inform the amqqueue process
how to proceed when a message duplicate is encountered (see the sketch
after the list):
* `{true, drop}`: the message is a duplicate and should be ignored
* `{true, reject}`: the message is a duplicate and the publisher should
receive a rejection
* `false`: the message is not deemed a duplicate
* `true`: kept for backward compatibility, equivalent to `{true, drop}`
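A minimal sketch of that dispatch on the queue process side
(`deliver/1` and `reject_publisher/1` are hypothetical helpers, not the
actual functions):
    handle_duplicate_check(Result, Message) ->
        case Result of
            false          -> deliver(Message);          %% not a duplicate
            {true, drop}   -> ok;                        %% duplicate, silently ignored
            {true, reject} -> reject_publisher(Message); %% duplicate, publisher is nacked
            true           -> ok                         %% legacy, same as {true, drop}
        end.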
Signed-off-by: Matteo Cafasso <noxdafox@gmail.com>
The seq command should include a -1 increment to stop nodes in reverse
order. Previously, with NODES=2 for example, the script would run
`seq 2 1`, which produces no items to iterate over, so the entire
stop-node loop did not execute and the brokers were left running.
To check whether a value is in the queue or not. It handles both
non-priority and priority queues. It is based on lists:member/2 and does
not take the priority of the value into account when checking equality.
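A minimal sketch, assuming the usual `{queue, Rear, Front, Len}` and
`{pqueue, [{Priority, Queue}]}` shapes of the priority_queue
representation:
    member(X, {queue, Rear, Front, _Len}) ->
        lists:member(X, Rear) orelse lists:member(X, Front);
    member(X, {pqueue, Queues}) ->
        %% Priorities are ignored; only equality of the value matters.
        lists:any(fun({_Priority, Q}) -> member(X, Q) end, Queues).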
References rabbitmq/rabbitmq-server#1743
It has been reported that in order to use the Erlang client, the
Erlang/OTP source must be available. This is due to one include
file that rabbit_net required. This dependency has been removed.
Instead of calling is_record(sslsocket), the ?IS_SSL macro will
now perform the same test manually (check that the value is a tuple,
that the size is correct and that the first element equals sslsocket).
The tuple has not changed in a very long time so doing this
manually is at least as safe as including this private header
file (it could be removed or moved at any time).
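A sketch of what such a macro can look like (the exact tuple size of
the sslsocket record is an assumption and depends on the OTP version):
    -define(IS_SSL(Sock), is_tuple(Sock)
                          andalso tuple_size(Sock) =:= 3
                          andalso element(1, Sock) =:= sslsocket).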
Once Erlang/OTP 22 gets out and we know how sockets will be
represented with the NIF implementation, we could revise this
and check whether the socket is one that gen_tcp accepts
(currently it's a port, but this will probably change when
a NIF is used).
With the quorum queue code, RabbitMQ probably still works with Erlang
20.x, but it is not thoroughly tested. Thus, bump the requirement to
Erlang 21.0.
It has been reported that in order to use the Erlang client, the
Erlang/OTP source must be available. This is due to one include
file that rabbit_net required. This dependency has been removed.
Instead of calling is_record(sslsocket), the ?IS_SSL macro will
now perform the same test manually (check that the value is a tuple,
that the size is correct and that the first element equals sslsocket).
The tuple has not changed in a very long time so doing this
manually is at least as safe as including this private header
file (it could be removed or moved at any time).
Once Erlang/OTP 22 gets out and we know how sockets will be
represented with the NIF implementation, we could revise this
and check whether the socket is one that gen_tcp accepts
(currently it's a port, but this will probably change when
a NIF is used).
* Lager: 3.6.4 -> 3.6.5
* Ranch: 1.5.0 -> 1.6.1
* ranch_proxy_protocol: 1.5.0 -> 2.1.0-rc.1
Note that ranch_proxy_protocol 2.1.0-rc.1 is not an official release
from upstream: it was published by us (the RabbitMQ team) because we
don't have feedback from upstream about a pull request to update Ranch
to 1.6.x (heroku/ranch_proxy_protocol#49). Hopefully upstream will merge
the pull request and cut a new official release.
Fixes rabbitmq/rabbitmq-common#269.
[#160270896]
If a resource alarm is triggered during the boot process, it sends an
event via rabbit_event:notify. This can crash the node if rabbit_event
is not started yet: rabbit_event starts after rabbit_alarm, so any alarm
raised during boot was crashing the node.
Calling gen_event:notify with {rabbit_event, node()} is the same as with
rabbit_event except it does not fail.
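A minimal sketch of the safer call (the function name is illustrative
only):
    notify_safely(Event) ->
        %% A node-qualified name makes the send a no-op when nothing is
        %% registered as rabbit_event yet, whereas
        %% gen_event:notify(rabbit_event, Event) would fail.
        gen_event:notify({rabbit_event, node()}, Event).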
erl_call(1) is broken in Erlang 21.0-rc.2 when we use it to evaluate
some code on a remote node. The issue was reported upstream but we can
replace erl_call(1) by `rabbitmqctl eval` to achieve the same result.
The output is still filtered using sed(1) to ensure it remains
unchanged, in case testsuites expect a particular format.
We still continue to use erl_call(1) to stop a node because we can't
achieve the same behavior using rabbitmqctl(1). This use of erl_call(1)
is working with Erlang 21.0-rc.2.
[#157964874]
OTP 21 deprecated erlang:get_stacktrace/0 in favor of a new
try/catch syntax. Unfortunately that's not realistic for projects
that support multiple Erlang versions (like us) until OTP 21 can be
the minimum version requirement. In order to compile we have to ignore
the warning. The broad compiler option seems to be the most common
way to support compilation on multiple OTP versions with warnings_as_errors.
[#157964874]
The code already verifies that `gen_event:start_link/2` is available
before calling it, and falls back on `gen_event:start_link/1`
appropriately.
Therefore we can add it to the `-ignore_xref()` list to fix the xref
check.
gen_event:start_link/2 is not available before Erlang 20. RabbitMQ
3.7 needs to support Erlang 19.3 and above.
We thought of implementing different garbage collection strategies for
rabbit_event that would be compatible with Erlang 19.3, but failed
because gen_event is a special type of process:
* fullsweep_after cannot be set via process flag, it returns a badarg
exception
* collecting on a timer would be weird because all the handlers would
receive the event
* we can't force a full GC via hibernating, because this would need to
run after each event, which would result in terrible performance
Partner-in-crime: @essen
ets:select_replace/2 is only available in Erlang 20 and above. RabbitMQ
3.7 needs to support Erlang 19.3 and above.
We haven't noticed any difference in performance when using one approach
over the other.
To keep the code simple, we decided to not detect which approach to use.
Partner-in-crime: @essen
When a node has to process events generated by creating 100k
connections, 100k channels & 50k queues, it was observed that this
process would use up to 13GB of memory and never release it.
Since we now run a full-sweep GC on every garbage collection, this will
slow the process down by ~10%, but the memory usage is stable & all
memory eventually gets released. On a busy system, this can amount to
many GBs.
Partner-in-crime: @essen
Otherwise, we can trigger a stack overflow in the Erlang VM itself, as
described in ERL-592: https://bugs.erlang.org/browse/ERL-592
The previous code worked fine when 50k queues were being deleted out of
100k queues, but failed when there were 150k queues in total & 50k
needed to be deleted.
Partner-in-crime: @essen
Rather than using 6 ETS operations per deleted queue, build a match
spec that matches all queues & use ets:select_replace/2 to delete the
metrics of all queues in 4 super efficient ETS operations.
Great suggestions @michaelklishin & @hairyhum!
ee0951b1b3 (comments)
Notice that build_match_spec_conditions_to_delete_all_queues/1 is not
tail recursive, because it's more efficient this way due to the required
return value. The exact details elude me; @essen can answer this better
than I can.
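A rough sketch of the match-spec approach (the table layout and field
positions are assumptions; a simple `{Key, Counter}` table is used for
illustration):
    reset_metrics(Table, QNames) ->
        Conditions = [{'=:=', '$1', {const, QName}} || QName <- QNames],
        MatchSpec  = [{{'$1', '_'},                            %% match {Key, Counter}
                       [list_to_tuple(['orelse' | Conditions])],
                       [{{'$1', 0}}]}],                        %% keep Key, zero Counter
        ets:select_replace(Table, MatchSpec).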
Linked to rabbitmq/rabbitmq-server#1526
For initial context, see #1513
Partner-in-crime: @essen
When a node has to process events generated by creating 100k
connections, 100k channels & 50k queues, it was observed that this
process would use up to 13GB of memory and never release it.
Since we now run a full-sweep GC on every garbage collection, this will
slow the process down by ~10%, but the memory usage is stable & all
memory eventually gets released. On a busy system, this can amount to
many GBs.
Partner-in-crime: @essen
Otherwise, we can trigger a stack overflow in the Erlang VM itself, as
described in ERL-592: https://bugs.erlang.org/browse/ERL-592
The previous code worked fine when 50k queues were being deleted out of
100k queues, but failed when there were 150k queues in total & 50k
needed to be deleted.
Partner-in-crime: @essen
Rather than using 6 ETS operations per deleted queue, build a match
spec that matches all queues & use ets:select_replace/2 to delete the
metrics of all queues in 4 super efficient ETS operations.
Great suggestions @michaelklishin & @hairyhum!
ee0951b1b3 (comments)
Notice that build_match_spec_conditions_to_delete_all_queues/1 is not
tail recursive, because it's more efficient this way due to the required
return value. The exact details elude me; @essen can answer this better
than I can.
Linked to rabbitmq/rabbitmq-server#1526
For initial context, see #1513
Partner-in-crime: @essen
That way, it doesn't interfere with testcases working with multiple
RabbitMQ nodes. Furthermore, independent sets of nodes won't try to
autoconnect to the common_test node, and thus to unrelated nodes
afterwards.
This is useful for the upcoming VM-based test helpers.
[#153749132]
(cherry picked from commit 47a5bdfff548a5c278af2d947feeeb7836aae0c3)
... while checking if we are connected to the given PID's node. This
fixes an issue where an Erlang client, connected directly to the
RabbitMQ node (as opposed to using a TCP connection), is running on a
hidden Erlang node.
[#153749132]
(cherry picked from commit 48df58b63dc1157a9954fc5413aa027cb9552db8)
That way, it doesn't interfere with testcases working with multiple
RabbitMQ nodes. Furthermore, independent sets of nodes won't try to
autoconnect to the common_test node, and thus to unrelated nodes
afterwards.
This is useful for the upcoming VM-based test helpers.
[#153749132]
(cherry picked from commit 5c0546fb5bf9e7d61e90399373b11962011f548d)
... while checking if we are connected to the given PID's node. This
fixes an issue where an Erlang client, connected directly to the
RabbitMQ node (as opposed to using a TCP connection), is running on a
hidden Erlang node.
[#153749132]
(cherry picked from commit 66f5ae28ea993d761507dbd9034d59491f0d2bc6)
That way, it doesn't interfere with testcases working with multiple
RabbitMQ nodes. Furthermore, independent sets of nodes won't try to
autoconnect to the common_test node, and thus to unrelated nodes
afterwards.
This is useful for the upcoming VM-based test helpers.
[#153749132]
Those functions are currently looking into opaque types from
`ranch_proxy_protocol`. Until this is fixed, we just ignore the warnings
and comment out the specs.
[#153850881]
rabbitmq-common shouldn't see this, otherwise it's a reverse dependency
and thus a dependency cycle.
In the future, we should fix this by adding rabbitmq-cli as an explicit
dependency to whatever component needs it.
[#153850881]
While here, silence the same warning for do_multi_call() because it
comes from an anonymous function. We could move the anonymous function
to a regular function and give it a spec. However, we prefer to keep the
diff with upstream small.
[#153850881]
They were returning `true` (the return value of ets:insert() or
ets:delete()), whereas many others were returning `ok`. So make them
return `ok` for consistency's sake. Furthermore, this matches their
specifications.
These warnings were reported by Dialyzer.
[#153850881]
Thus, use the correct return type of `no_return()`. Even if it's defined
as `none()` according to the documentation, it doesn't have the same
semantics.
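For illustration (hypothetical function name):
    %% A function that never returns normally should be specced as
    %% no_return(), not none().
    -spec croak(term()) -> no_return().
    croak(Reason) ->
        exit(Reason).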
The warning was reported by Dialyzer.
[#153850881]
Anonymous functions were converted to regular functions so that we could
add `-spec()` directives.
The `spec()` for the `error_handler()` callback was updated to accept
functions throwing exceptions.
This was reported by Dialyzer.
[#153850881]
They can mention functions which we are removing or renaming. Therefore
we need to apply the same changes to the `-dialyzer` attribute(s).
[#154054286]
This way, we are sure that the possibly newer recipe in
`rabbitmq-tools.mk` handles the operation instead of a possibly obsolete
`rabbitmq-components.mk` copy.
[#154761483]
To avoid a copy of `rabbitmq-components.mk` from rabbitmq-common's
`v3.6.x` branch to another component's `master` branch, we want to
compare branch names of both repositories.
This situation may occur when checking out the `v3.6.x` branch in the
Umbrella using `gmake up BRANCH=v3.6.x` while, for instance, some
plugins don't have such a branch. In this case, they remain on the
previous branch. We don't want to overwrite their
`rabbitmq-components.mk` if this happens.
[#154761483]
If the test broker is started using:
gmake run-broker PLUGINS_FROM_DEPS_DIR=1
Then plugins are taken from `$(DEPS_DIR)` even though the `.ez`
archives were created.
This can be handy when e.g. working on the management plugin's static
files: it makes it possible to modify a CSS file and just reload the
page to see the change. There is no need to re-run `gmake run-broker`.
The value of `PLUGINS_FROM_DEPS_DIR` is not verified.
Caveats:
* Irrelevant plugins which happen to be already compiled will
be enabled because rabbitmq-plugins(8) will find them inside
`$(DEPS_DIR)`.
* This feature isn't tested when a plugin is cloned directly and
it probably doesn't work. The developer is expected to use the
Umbrella for now.
[#154435736]
This target is useful to get the list of commits since the last
(pre)release or a specified tag in all RabbitMQ repositories involved
(i.e. the current project and its RabbitMQ dependencies). In particular,
from rabbitmq-server-release, it can help during the review of all
commits, issues and pull requests while preparing a release.
To see an interactive list of commits for each repository:
gmake commits-since-release
To get the same list formatted as Markdown (to publish on GitHub):
gmake commits-since-release MARKDOWN=yes
To get the commits since the last prerelease:
gmake commits-since-release SINCE_TAG=last-prerelease
To get the commits since a specific release:
gmake commits-since-release SINCE_TAG=v3.7.1
[#153087397]
We are about to rename `stable` to `v3.6.x`. That's why we need to
update the script which determines if a development branch was forked
from `v3.6.x` (instead of `stable`) and falls back to `master` if it's
not the case.
While here, handle the case where the `master` branch doesn't exist
locally. This happens when using `git clone -b $branch` or in CI.
(cherry picked from commit 037bab7f799bf9cc180b0dfa2b7899b7a464cf9d)
This file is copied to Elixir-based components (e.g. rabbitmq_cli) when
we create the source archive of RabbitMQ. It allows RabbitMQ to build
offline (which is the case in many package build farms).
References rabbitmq/rabbitmq-server-release#61.
[#153358632]
Namely:
* Depend on ranch_proxy_protocol 1.4.4 (which depends on ranch 1.4.0,
the same as us).
* Fix `check-rabbitmq-components.mk` for rabbitmq-common.
This change was committed to the management-related plugins directly
during the switch to Cowboy 2.0. Unfortunately, it was lost when
rabbitmq-components.mk was updated globally.
Therefore this commit restores the new expected pinning.
Unfortunately, the "skip CI" feature isn't what I would expect: I
expected that such a commit would not trigger a job, but that the commit
would still be picked up by a job triggered for another reason. In fact,
it's entirely ignored: the resource will be stuck at the previous
commit, until another regular commit (i.e. without the "skip CI"
message) is pushed.
This reverts commit c9e636c0b4.
Therefore, after we branch `v3.7.x`, rabbitmq-components.mk will try to
determine if a development branch was forked from `v3.7.x` (instead of
`stable`) and fall back to `master` if it's not the case.
While here, handle the case where the `master` branch doesn't exist
locally. This happens when using `git clone -b $branch` or in CI.
In other words, instead of copying rabbitmq-common's credentials which
won't be valid in another project, keep the existing credentials (if
any).
That said, if CREDS is set, they will be overwritten again.
[#152509619]
Credentials are taken from Concourse pipeline credentials. One has to
set CREDS:
gmake travis-yml CREDS=/path/to/pipeline-credentials.yaml
[#152509619]
It assumes that all projects use the same `.travis.yml` usually (we try
to achieve the same in Concourse). For special cases, like e.g.
rabbitmq-amqp1.0 which needs .NET Core, there is a `.travis.yml.patch`
in the project which is applied to the stock `.travis.yml`.
Like `rabbitmq-components-mk`, it accepts a `DO_COMMIT=yes` variable to
automatically create a commit if there is a change to `.travis.yml`.
There is also an `update-travis-yml` target to update it recursively.
[#152509619]
Escape sequences used to move cursor around are not properly translated
in the HTML output, thus defeating the purpose of the Common Test output
module.
[#152509619]
While here, test on Erlang 20.1. Also, we stop testing on Erlang 19.2:
it's now unsupported.
We take Elixir from Erlang Solutions packages because kiex is affected
by GitHub API rate limiting.
Finally, the sections are reordered so the scripts are at the end. It
may help in the future if we want to template all the `.travis.yml`
files.
[#152509619]
While here, test on Erlang 19.3 (instead of 19.0) and Erlang 20.1. Also,
we stop testing on Erlang 17.5 and 18.3: it helps reduce the number of
jobs in Travis CI and allows it to go through all changes more quickly.
Finally, the sections are reordered so the scripts are at the end. It
may help in the future if we want to template all the `.travis.yml`
files.
[#152509619]
This can be used to handle generic messages that the parent gen_server2 wishes to pass to backing queues. Initially used for the bump_reduce_memory_use message.
Part of rabbitmq/rabbitmq-server#1393
... for `rabbit_common` and `amqp_client`.
This should only be used for testing purposes (e.g. dry-run in CI),
otherwise dependency tracking will break: `amqp_client` depends on a
specific version of `rabbit_common`.
However in CI, we want to be able to do a publish dry-run of
`amqp_client`. As it requires `rabbit_common` to be published, and the
corresponding version of `rabbit_common` is not actually published
during a dry-run, we need to override the version pinning to point to an
already published version of `rabbit_common`. This is ok because nothing
is published in the end.
[#150482173, #150482202]
This is due to a scenario in which the Erlang VM allocator stats report
a huge increase in memory consumption which is only reflected in a VSS
increase, not in RSS.
PT #152081051
Now, when the `rabbit` application stops with an error, the Erlang node
exits.
This restores the behavior introduced by rabbitmq/rabbitmq-server#1216
but removed in commit 7a8f7be7c4. That removal broke the `erlang_config`
testcase in the `clustering_management` testsuite (rabbitmq-server).
When we publish our packages to Hex.pm, we use the simplified
rabbitmq-components.hexpm.mk to replace the regular
rabbitmq-components.mk.
Before commit ba59f969b7,
rabbitmq-components.mk took 3rd-party dependencies from GitHub. Now that
it takes them from Hex.pm, we don't need to override and hard-code them
again in rabbitmq-components.hexpm.mk.
Thus now, we extract them from rabbitmq-components.mk and put them at
the end of rabbitmq-components.hexpm.mk when we publish to Hex.pm.
The other benefit is that we don't have to remember to change version
pinning in both rabbitmq-components.mk and rabbitmq-components.hexpm.mk.
[#150482173]
This way, when we publish our own packages to Hex.pm (rabbitmq-common
and rabbitmq-erlang-client as of now), we are sure that no dependency
will be missing from Hex.pm.
Before that, 3rd-party dependencies were taken from GitHub, and the
source was overridden and set to Hex.pm only when we published there.
This led to bad surprises such as `ranch_proxy_protocol` which was
unavailable on Hex.pm, preventing us from publishing our packages.
While here, remove pinning for mochiweb and webmachine which we are not
using anymore.
[#150482173]
This reverts commit d553e72af9, reversing
changes made to 5b014bae4b.
This needs more work on the server end: recon must be started by the time
vm_memory_monitor runs.
We were using the `master` branch before.
While here, we extract the version of `ranch_proxy_protocol` and store
it in a dedicated variable. This will be used in `rabbitmq-hexpm.mk` to
prepare `rebar.config`.