Bump org.springframework.boot:spring-boot-starter-parent from 3.1.4 to 3.1.5 in /deps/rabbitmq_auth_backend_http/examples/rabbitmq_auth_backend_spring_boot_kotlin
Bump org.springframework.boot:spring-boot-starter-parent from 3.1.4 to 3.1.5 in /deps/rabbitmq_auth_backend_http/examples/rabbitmq_auth_backend_spring_boot
This version of rules_erlang adds coverage support.
Bazel has more or less standardized on lcov for coverage output, so
that is the format we use.
Example:
1. `bazel coverage //deps/rabbit:eunit -t-`
2. `genhtml --output genhtml "$(bazel info
output_path)/_coverage/_coverage_report.dat"`
3. `open genhtml/index.html`
Multiple tests can be run with their results aggregated, e.g. `bazel
coverage //deps/rabbit:all -t-`
Running coverage with RBE (remote build execution) has a number of
caveats (see https://bazel.build/configure/coverage#remote-execution),
so the above commands won't work as-is with RBE.
[Why]
So far, the feature states were copied from the cluster after the actual
join. However, the join may have reloaded the feature flags registry,
using the previous on-disk record, defeating the purpose of copying the
cluster's states.
It was done in this order to keep error handling simpler.
[How]
This time, we copy the remote cluster's feature states just after the
reset.
If the join fails, we reset the feature flags again, including the
on-disk states.
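The new ordering can be sketched in pseudocode-style Erlang. All helper
names other than `rabbit_db_cluster:join/2` and `reset/0` are
hypothetical, not the actual API:

```erlang
%% Illustrative sketch only; helper names are hypothetical.
join(RemoteNode, NodeType) ->
    ok = reset_node(),
    %% Copy the remote cluster's feature states right after the reset,
    %% before anything can reload the registry from the previous
    %% on-disk record.
    ok = copy_feature_states_from(RemoteNode),
    case do_join(RemoteNode, NodeType) of
        ok ->
            ok;
        {error, _} = Error ->
            %% The join failed: reset the feature flags again,
            %% including the on-disk states.
            ok = rabbit_feature_flags:reset(),
            Error
    end.
```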
[Why]
Sometimes, we need to reset the in-memory registry only, like when we
restart the `rabbit` application, not the whole Erlang node. However,
sometimes, we also need to delete the feature states on disk. This is
the case when a node joins a cluster.
[How]
We expose a new `reset/0` function which covers both the in-memory and
on-disk states.
This will be used in a follow-up commit to correctly reset the feature
flags states in `rabbit_db_cluster:join/2`.
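A minimal sketch of what such a `reset/0` could look like (the helper
names below are hypothetical):

```erlang
%% Sketch: reset both the in-memory and on-disk feature states.
-spec reset() -> ok.
reset() ->
    %% Clear the in-memory registry first...
    ok = reset_registry(),
    %% ...then remove the on-disk record of the feature states.
    ok = delete_feature_states_file().
```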
[Why]
So far, `reset_registry/0` only reset the in-memory states and left the
on-disk record untouched. This was inconsistent.
[How]
After resetting the in-memory states, we remove the file on disk.
[Why]
When a Khepri-based node joins a Mnesia-based cluster, it is reset and
switches back from Khepri to Mnesia. If there are Mnesia files left in
its data directory, Mnesia will restart with stale/incorrect data and
the operation will fail.
After a migration to Khepri, we need to make sure there are no stale
Mnesia files left behind.
[How]
We use `rabbit_mnesia` to query the Mnesia files and delete them.
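A rough sketch of the cleanup, assuming `rabbit_mnesia:dir/0` returns
the node's Mnesia directory (the exact helpers used may differ):

```erlang
%% Sketch: delete any files left in the Mnesia directory.
remove_stale_mnesia_files() ->
    MnesiaDir = rabbit_mnesia:dir(),
    Files = filelib:wildcard(filename:join(MnesiaDir, "**")),
    [ok = file:delete(F) || F <- Files, filelib:is_regular(F)],
    ok.
```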
The default is 20 MiB, which is enough to upload
a definition file with 200K queues, a few virtual hosts
and a few users. In other words, it should accommodate
a lot of environments.
This contains a fix for a situation where a replica may not discover
the current commit offset until the next entry is written to the
stream.
This should help with a frequent flake in rabbit_stream_queue_SUITE:add_replicas.