Then wait for elections to complete before shutting further
members down.
This should help avoid election storms when enabling maintenance
mode.
Transfer Khepri before queues to ensure the metadata store is
ready to accept pid updates.
Some other state-related tweaks.
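As a rough sketch of that ordering only (everything below is a stub; the real
logic lives in RabbitMQ's maintenance-mode code, and the helper names are
hypothetical):

```erlang
-module(drain_order_sketch).
-export([drain/0]).

%% Stubs standing in for the real Khepri/queue leadership transfer and
%% election-wait steps; only the ordering is the point here.
drain() ->
    ok = transfer_khepri_leadership(), %% metadata store first, so it can
                                       %% accept the pid updates that follow
    ok = transfer_queue_leadership(),
    wait_for_elections().              %% let elections settle before the
                                       %% next member is shut down

transfer_khepri_leadership() -> ok.
transfer_queue_leadership() -> ok.
wait_for_elections() -> ok.
```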
When validation fails for a policy parameter, the resulting popup can't
be read due to an extra binary encoding as well as code that escapes
HTML entities. Since the EJS template uses `<%= %>` for the popup, it will
display the text as-is and not render any HTML.
## What?
This commit displays effective filters of AMQP receivers in the Management UI.
There is a new column `Filters` for outgoing links.
Solves #13429
## Why?
This allows validating whether the desired filters set by the receiver are
actually in place on the server.
In addition, it's convenient for a developer to check any filter values,
including SQL filter expressions.
## How?
The session process stores the formatted and effective filters in
its state.
The Management UI displays a box containing the filter name. This way
the table for the outgoing links is kept concise. Hovering the mouse over
a box additionally shows the descriptor and the actual filter
value/definition.
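As a purely illustrative sketch (the map shape and keys below are assumptions,
not the session's actual state or the AMQP wire format), one stored effective
filter for an outgoing link could look roughly like this in the Erlang shell:

```erlang
%% Illustrative only: one effective filter as it might be kept for an
%% outgoing link and later rendered by the Management UI on hover.
#{name       => <<"my-sql-filter">>,
  descriptor => <<"...">>,          %% the filter's AMQP descriptor
  value      => <<"price > 100">>}. %% e.g. an SQL filter expression
```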
This branch redirects the client to the login page when the cookie
expires. To complete the logout process we should also clear any auth
data stored in local storage, since local storage has no built-in
expiration mechanism.
To test this locally you can use `make run-broker` and set the session
timeout to one minute for quick testing:
application:set_env(rabbitmq_management, login_session_timeout, 1)
Then go to the management page (`http://localhost:15672/#/`), log in with the
default credentials and wait a minute. After this change the local
storage only contains info like `rabbitmq.vhost` and `rabbitmq.version`.
The `below_node_connection_limit_test` and `ready_to_serve_clients_test`
cases could possibly flake because `is_quorum_critical_single_node_test`
uses the channel manager in `rabbit_ct_client_helpers` to open a
connection. This can cause the line
true = lists:all(fun(E) -> is_pid(E) end, Connections),
to fail to match. The last connection could have been rejected if the
channel manager kept its connection open, so instead of being a pid the
element would have been `{error, not_allowed}`.
With `rabbit_ct_client_helpers:close_channels_and_connection/2` we can
reset the channel manager and force it to close its cached connection.
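A minimal sketch of such a reset in a common test suite; the
`init_per_testcase/2` scaffolding is an assumption, only the
`close_channels_and_connection/2` call comes from the change described above:

```erlang
%% Close the channel manager's cached connection on node 0 before the
%% connection-limit assertions run, so a leftover connection cannot consume
%% the limit and turn a pid into {error, not_allowed}.
init_per_testcase(Testcase, Config) ->
    rabbit_ct_client_helpers:close_channels_and_connection(Config, 0),
    rabbit_ct_helpers:testcase_started(Config, Testcase).
```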
Returning the connection limit and active count is not really necessary
for these checks. Instead of returning them in the health check response,
we log a warning when the connection limit is exceeded.
This is useful, for example, for a load balancer to avoid sending new
connections to a node which is running and has listeners bound to TCP ports
but is being drained for maintenance.
This updates the health check for protocol listeners to accept a
comma-separated set of protocols. The check only returns 200 OK when all
requested protocols have active listeners.
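For example, assuming the existing protocol-listener health check path, a
request like `GET /api/health/checks/protocol-listener/amqp,http` would now
return 200 OK only when both an AMQP and an HTTP listener are active on the
target node.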
This is a minor change that avoids a cluster-wide query for active
listeners. The old code called `rabbit_networking:active_listeners/0`
and then filtered the results down to those available on the local node.
This caused an RPC to, and a concatenation of, every other cluster member's
listeners, only for the very next line to filter them back down to the local
node. Equivalently, we can use `rabbit_networking:node_listeners(node())`,
which dumps a local ETS table.
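A minimal sketch of the local check under these assumptions (the module and
function names are illustrative, not the actual handler code, and the
`#listener{}` record is assumed to come from rabbit_common's
include/rabbit.hrl):

```erlang
-module(local_listener_check_sketch).
-include_lib("rabbit_common/include/rabbit.hrl").
-export([all_protocols_active/1]).

%% Returns true when every requested protocol (a list of atoms such as
%% [amqp, http]) has a listener on this node. node_listeners/1 only reads
%% local data, so there is no cluster-wide RPC.
all_protocols_active(Protocols) ->
    Local = [P || #listener{protocol = P} <- rabbit_networking:node_listeners(node())],
    lists:all(fun(P) -> lists:member(P, Local) end, Protocols).
```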
This is not a very impactful change but it's nice to keep the latency of
the health-check handlers low and reduce some unnecessary cluster noise.
This allows restricting access to the /api/index.html and
/cli/index.html pages to authenticated users, should the user really
want to. This can be enabled via advanced.config.
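For illustration, an advanced.config fragment of the kind described; the key
name below is a hypothetical placeholder, not necessarily the setting this
change introduces:

```erlang
%% advanced.config sketch; verify the actual rabbitmq_management key name.
[
 {rabbitmq_management,
  [{restrict_api_and_cli_index_to_authenticated_users, true}]}
].
```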
Ra improvements:
* Don't allow a non-voter to start elections
* Register with ra directory before initialising ra server.
* Trigger tick_timeout immediately after entering leader state.
* Set a configurable segment max size (see the config sketch below).
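For example, a segment size override could go into advanced.config roughly
like this; treat the `segment_max_entries` key and the value as assumptions to
be checked against the Ra documentation:

```erlang
%% advanced.config sketch: raise Ra's per-segment entry cap.
[
 {ra, [{segment_max_entries, 16384}]}
].
```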
This commit also includes a change that turns the quorum queue
'become leader' callback into a no-op and instead relies on
the more prompt tick handler to handle the metadata store
update after a leader election.
This more prompt tick update means there should be a much shorter
gap between the queue metrics being deleted from the old leader
node and them becoming available again on the new node, resulting
in smoother message count metrics.
Fix a test that relied on waiting on too simplistic a property
before asserting.