Commit Graph

33 Commits

Author SHA1 Message Date
Rin Kuryloski 887f215545 Add mnesia to LOCAL_DEPS in rabbitmq_jms_topic_exchange
It is present in the Bazel build, and removing it from the Bazel side
causes :dialyze to fail.
2023-10-16 18:10:50 +02:00
Rin Kuryloski e4eaf0b806 Add khepri to rabbitmq_jms_topic_exchange deps 2023-10-16 16:21:23 +02:00
Diana Parra Corbacho 5f0981c5a3
Allow using the Khepri database to store metadata instead of Mnesia
[Why]

Mnesia is a very powerful and convenient tool for Erlang applications:
it is a persistent disc-based database, it handles replication across
multiple Erlang nodes and it is available out-of-the-box from the
Erlang/OTP distribution. RabbitMQ relies on Mnesia to manage all its
metadata:

* virtual hosts' properties
* internal users
* queue, exchange and binding declarations (not queue data)
* runtime parameters and policies
* ...

Unfortunately Mnesia makes it difficult to handle network partitions and,
as a consequence, the merge conflicts between Erlang nodes once the
network partition is resolved. RabbitMQ provides several partition
handling strategies but they are not bullet-proof. Users still hit
situations where it is a pain to repair a cluster following a network
partition.

[How]

@kjnilsson created Ra [1], a Raft consensus library that RabbitMQ
already uses successfully to implement quorum queues and streams, for
instance. Those queue types do not suffer from the network partition
issues described above.

We created Khepri [2], a new persistent and replicated database engine
based on Ra and we want to use it in place of Mnesia in RabbitMQ to
solve the problems with network partitions.

This patch integrates Khepri as an experimental feature. When enabled,
RabbitMQ will store all its metadata in Khepri instead of Mnesia.

This patch comes with behavior changes. While Khepri remains disabled,
you should see no changes to RabbitMQ's behavior; if you do, it is a
bug. Once Khepri is enabled, there are significant behavior changes you
should be aware of.

Because it is based on the Raft consensus algorithm, when there is a
network partition, only the cluster members that are in the partition
with at least `(Number of nodes in the cluster ÷ 2) + 1` nodes (integer
division; see the small sketch below) can "make progress". In other
words, only those nodes may write to the Khepri database and read from
it and expect a consistent result.

For instance in a cluster of 5 RabbitMQ nodes:
* If there are two partitions, one with 3 nodes, one with 2 nodes, only
  the group of 3 nodes will be able to write to the database.
* If there are three partitions, two with 2 nodes, one with 1 node, none
  of the groups can write to the database.
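
The majority threshold uses integer division. As a tiny illustration
(not code from this patch), in Erlang:

    %% Number of nodes required to "make progress" in an N-node cluster.
    quorum(NumNodes) when is_integer(NumNodes), NumNodes > 0 ->
        (NumNodes div 2) + 1.
    %% quorum(5) =:= 3, quorum(4) =:= 3, quorum(3) =:= 2.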

Because the Khepri database will be used for all kinds of metadata, it
means that RabbitMQ nodes that can't write to the database will be
unable to perform some operations. A list of operations and what to
expect is documented in the associated pull request and the RabbitMQ
website.

This requirement from Raft also affects the startup of RabbitMQ nodes in
a cluster: at least a quorum of nodes must be started at once for the
nodes to become ready.

To enable Khepri, you need to enable the `khepri_db` feature flag:

    rabbitmqctl enable_feature_flag khepri_db

When the `khepri_db` feature flag is enabled, the migration code
performs the following two tasks:
1. It synchronizes the Khepri cluster membership from the Mnesia
   cluster. It uses `mnesia_to_khepri:sync_cluster_membership/1` from
   the `khepri_mnesia_migration` application [3].
2. It copies data from relevant Mnesia tables to Khepri, doing some
   conversion if necessary on the way. Again, it uses
   `mnesia_to_khepri:copy_tables/4` from `khepri_mnesia_migration` to do
   it.

This can be performed on a running standalone RabbitMQ node or cluster.
Data will be migrated from Mnesia to Khepri without any service
interruption. Note that during the migration, the performance may
decrease and the memory footprint may go up.
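
Put together, the two migration tasks above amount to something like the
following sketch. The function, variable names and argument order are
assumptions made for illustration; refer to the `khepri_mnesia_migration`
documentation for the actual API:

    %% Illustrative sketch only; names and argument order are assumptions.
    migrate_metadata(StoreId, MigrationId, Tables, ConverterMod) ->
        %% 1. Make the Khepri cluster members match the Mnesia cluster.
        mnesia_to_khepri:sync_cluster_membership(StoreId),
        %% 2. Copy and convert the relevant Mnesia tables into Khepri.
        mnesia_to_khepri:copy_tables(StoreId, MigrationId, Tables,
                                     ConverterMod).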

Because this feature flag is considered experimental, it is not enabled
by default even on a brand new RabbitMQ deployment.

More about the implementation details below:

In the past months, all accesses to Mnesia were isolated in a collection
of `rabbit_db*` modules. This is where the integration of Khepri mostly
takes place: we use a function called `rabbit_khepri:handle_fallback/1`
which selects the database and performs the query or the transaction.
Here is an example from `rabbit_db_vhost`:

* Up until RabbitMQ 3.12.x:

        get(VHostName) when is_binary(VHostName) ->
            get_in_mnesia(VHostName).

* Starting with RabbitMQ 3.13.0:

        get(VHostName) when is_binary(VHostName) ->
            rabbit_khepri:handle_fallback(
              #{mnesia => fun() -> get_in_mnesia(VHostName) end,
                khepri => fun() -> get_in_khepri(VHostName) end}).

This `rabbit_khepri:handle_fallback/1` function relies on two things:
1. the fact that the `khepri_db` feature flag is enabled, in which case
   it always executes the Khepri-based variant;
2. otherwise, whether the Mnesia tables can still be read from and
   written to.

Before the feature flag is enabled, or during the migration, the
function first tries the Mnesia-based variant. If it succeeds, it
returns the result. If it fails because one or more Mnesia tables can't
be used, it starts over: this means the feature flag is being enabled,
and depending on the outcome, either the Mnesia-based variant will
succeed (the feature flag couldn't be enabled) or the feature flag will
be marked as enabled and the Khepri-based variant will be called (see
the simplified sketch below). The meat of this function really lives in
the `khepri_mnesia_migration` application [3];
`rabbit_khepri:handle_fallback/1` is a wrapper on top of it that knows
about the feature flag.
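
As a simplified, hypothetical sketch of that fallback logic (not the real
`rabbit_khepri` code; `khepri_db_flag_enabled/0` is a made-up helper that
stands in for the feature flag check):

    %% Simplified sketch of the idea behind handle_fallback/1.
    handle_fallback(#{mnesia := MnesiaFun, khepri := KhepriFun} = Funs) ->
        case khepri_db_flag_enabled() of
            true ->
                %% Feature flag enabled: always run the Khepri variant.
                KhepriFun();
            false ->
                try
                    MnesiaFun()
                catch
                    _:_ ->
                        %% The Mnesia tables can no longer be used: the
                        %% feature flag is being enabled, so start over
                        %% and re-check the flag.
                        handle_fallback(Funs)
                end
        end.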

However, some calls to the database do not depend on the existence of
Mnesia tables, such as functions where we need to learn about the
members of a cluster. For those, we can't rely on exceptions from
Mnesia. Therefore, we just look at the state of the feature flag to
determine which database to use. There are two situations though:

* Sometimes, we need the feature flag state query to block because the
  function interested in it can't return a valid answer during the
  migration. Here is an example:

        case rabbit_khepri:is_enabled(RemoteNode) of
            true  -> can_join_using_khepri(RemoteNode);
            false -> can_join_using_mnesia(RemoteNode)
        end

* Sometimes, we need the feature flag state query to NOT block (for
  instance because it would cause a deadlock). Here is an example:

        case rabbit_khepri:get_feature_state() of
            enabled -> members_using_khepri();
            _       -> members_using_mnesia()
        end

Direct accesses to Mnesia still exist. They are limited to code that is
specific to Mnesia, such as classic queue mirroring or the network
partition handling strategies.

Now, to discover the Mnesia tables to migrate and how to migrate them,
we use an Erlang module attribute called
`rabbit_mnesia_tables_to_khepri_db` which indicates a list of Mnesia
tables and an associated converter module. Here is an example in the
`rabbitmq_recent_history_exchange` plugin:

    -rabbit_mnesia_tables_to_khepri_db(
       [{?RH_TABLE, rabbit_db_rh_exchange_m2k_converter}]).

The converter module (`rabbit_db_rh_exchange_m2k_converter` in this
example) is in fact a "sub" converter module called by
`rabbit_db_m2k_converter`. See the documentation of the
`mnesia_to_khepri` converter modules to learn more.
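
For orientation, such a converter module implements the
`mnesia_to_khepri_converter` behaviour. The skeleton below is a
hypothetical illustration; the callback names and arities are
assumptions, so check the `khepri_mnesia_migration` documentation for
the authoritative ones:

    %% Hypothetical skeleton; callback names/arities are assumptions.
    -module(my_table_m2k_converter).
    -behaviour(mnesia_to_khepri_converter).

    -export([init_copy_to_khepri/3,
             copy_to_khepri/3,
             delete_from_khepri/3]).

    init_copy_to_khepri(_StoreId, _MigrationId, _Tables) ->
        %% Prepare any state needed for the copy.
        {ok, no_state}.

    copy_to_khepri(_Table, _Record, State) ->
        %% Convert the Mnesia record and write it to Khepri here.
        {ok, State}.

    delete_from_khepri(_Table, _Key, State) ->
        %% Remove the corresponding entry from Khepri here.
        {ok, State}.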

[1] https://github.com/rabbitmq/ra
[2] https://github.com/rabbitmq/khepri
[3] https://github.com/rabbitmq/khepri_mnesia_migration

See #7206.

Co-authored-by: Jean-Sébastien Pédron <jean-sebastien@rabbitmq.com>
Co-authored-by: Diana Parra Corbacho <dparracorbac@vmware.com>
Co-authored-by: Michael Davis <mcarsondavis@gmail.com>
2023-09-29 16:00:11 +02:00
Rin Kuryloski eb94a58bc9 Add a workflow to compare the bazel/erlang.mk output
To catch any drift between the builds
2023-05-15 13:54:14 +02:00
Loïc Hoguin dc70cbf281
Update Erlang.mk and switch to new xref code 2022-05-31 13:51:12 +02:00
Philip Kuryloski a63f169fcb Remove duplicate rabbitmq-components.mk and erlang.mk files
Also adjust the references in rabbitmq-components.mk to account for
post monorepo locations
2021-03-22 15:40:19 +01:00
Jean-Sébastien Pédron 31aa2d6d6c Makefile: Load the new `rabbitmq-early-plugin.mk` early-stage plugin
See the corresponding commit in rabbitmq-common for an explanation.

[#144697185]
2017-05-16 17:33:35 +02:00
Jean-Sébastien Pédron be397b89bb Move from .app.src to Makefile variables
This is the recommended way with Erlang.mk.

By default, the version is inherited from rabbitmq-server-release when
the source archive is created, or computed from git-describe(1) (see
`rabbitmq-components.mk`). One can override the version from the command
line by setting the `PROJECT_VERSION` variable.

[#130992027]
2016-12-06 16:05:16 +01:00
Jean-Sébastien Pédron 1b8a621be9 Add rabbitmq_ct_client_helpers to TEST_DEPS 2016-11-24 10:19:32 +01:00
Jean-Sébastien Pédron 579b353d98 Makefile: Explicitly list all DEPS
Sync rabbitmq-components.mk with rabbitmq-common to remove automatic
DEPS handling.

[#130086871]
2016-09-19 15:47:01 +02:00
Michael Klishin 862d065e4a List amqp_client in DEPS, not just TEST_DEPS 2016-06-21 16:42:14 +03:00
Michael Klishin 59effb412d Switch test suite to Common Test 2016-06-21 16:15:49 +03:00
Michael Klishin 8c0d487cf1 Begin migration to Common Test 2016-06-21 00:58:26 +03:00
Michael Klishin fc7e54cac7 Migrate from legacy (package.mk) build system to erlang.mk 2016-05-16 16:41:34 +03:00
Steve Powell 43a0e298ae Make pipeline fail in build-all-in-vm if any stage fails; quieten checkouts. 2015-04-18 08:45:23 +01:00
Steve Powell b88d4c367d no message 2015-03-15 16:46:11 +00:00
Steve Powell 2fa2709234 Add version check to plugin; pass version on binding and exchange declaration.
The client needs to put a version on the create/bind calls, otherwise they are rejected by the plugin.
The version is matched exactly in this release (1.2.0) but need not be in future releases.
Durable exchanges and bindings from previous releases are not recovered.

[compat-#65568580]
2014-03-19 11:29:10 +00:00
Steve Powell b8db16c54a Add erlang string parameter to jms-topic binding arguments as alternative to SQL string.
Add function to create erlang term from parameter.
Generate check function from erlang term.
Add unit test for rjms_erlang_selector binding argument presence.

[sql-parsing#62848150]
2014-01-20 16:33:12 +00:00
Steve Powell 281754efae 1.1.1-snapshot release on v1.1.x branch
Includes automation of RJMS version in artefact build

[v1.1.x]
2013-11-07 11:54:48 +00:00
Steve Powell 63be245744 Remove reference to RabbitMQ version in build.
Add selector integration test (in-broker).
Augment exchange plugin callbacks for RabbitMQ [3.1.1, oo).
2013-07-03 17:39:09 +01:00
Steve Powell ea9cbe107b Update default RMQ version. 2013-03-13 11:32:06 +00:00
Steve Powell 260987bbcb Enable unit tests. 2013-02-28 12:44:56 +00:00
Steve Powell 9d4d13083f Start to flesh out topic selector exchange 2013-02-27 17:33:06 +00:00
Steve Powell 23ab43521a Make standalone build/repo. 2013-02-14 18:08:12 +00:00
Steve Powell 06c6064116 Set MAVEN_ARTEFACT filename as a parameter. 2013-02-13 17:31:54 +00:00
Steve Powell 5a3091a3a1 Make RabbitMQ version overridable (and control umbrella building). 2013-02-13 16:37:00 +00:00
Steve Powell 99b70cc2f5 Put RabbitMQ version on exchange plugin. 2013-02-13 16:11:22 +00:00
Steve Powell e0ece8bf74 Upgrade to build against rabbitmq-3.0.1. 2013-02-05 17:42:40 +00:00
Steve Powell 4dc193d6cd Extract plugin to target/plugins directory on package. 2013-02-04 14:15:53 +00:00
Steve Powell bfcaedabcf Include sjx query source into exchange build. 2013-01-30 10:06:07 +00:00
Steve Powell 1c95cddd69 Correct rabbit plugin profile:
  - better announcements in rabbitmq boot phases
  - run-in-broker make target for local hand-testing of plugin
  - removal of bad {mod, …} entry in application descriptor
2013-01-25 15:41:38 +00:00
Steve Powell bcfb07a068 Modify Makefile and build structure to have controlled makes. 2013-01-23 17:35:40 +00:00
Steve Powell 1f5acfe0fd Initial contents 2012-08-24 14:18:40 +01:00