... instead of the `dist` target.
[Why]
We already do that when building tests. Thus it is more consistent to do
the same.
Also, it makes sense to ensure everything is ready before the `dist`
step. For instance, an Erlang release would not depend on the `dist`
target, just the build and it would still need the CLI to be ready.
[Why]
The initial implementation was a bit too optimistic: it just asserted
that `net_adm:ping/1` returned pong. This led to a crash if it was not
the case.
[How]
It is better to handle an error from `net_adm:ping/1` and return
something appropriate instead of crashing. The rest of the function
already does that.
It improves the integration with peer discovery.
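A minimal sketch of the described handling, where `proceed/1` is a hypothetical stand-in for the rest of the function that already handles errors:
```erlang
%% Sketch only: proceed/1 stands in for the existing error-aware code path.
check_node(Node) ->
    case net_adm:ping(Node) of
        pong -> proceed(Node);
        pang -> {error, {node_unreachable, Node}}
    end.
```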
Add additional tests to refine conversion behaviour
and ensure it matches the documentation.
mc: optionally capture source environment
And pass target environment to mc:convert
This allows environmental data and configuration to be captured and
used to modify and complete conversion logic whilst allowing conversion
code to remain pure and portable.
During a rolling upgrade, all cluster nodes collectively
may (and usually will, due to Shovel migration during node restarts)
contain mirrored_supervisor children with IDs that use two different
parameters (see referenced commits below).
The old format should not trip up node startup, so new
nodes must accept it in a few places, and try to use
these older values during dynamic Shovel spec cleanup.
References ccc22cb86b, 5f0981c5a3, #9785.
See #9894.
The logger exchange needs to declare the exchange during initialisation,
which requires the metadata store to be ready.
Metadata store initialisation happens in a rabbit boot step after
logger initialisation in the second phase of the prelaunch.
The spawned process that declares the exchange should also
wait for the store to be ready. Otherwise it enters a loop
trying to decide which store to use, which generates a huge amount
of log output and delays initialisation:
'Mnesia->Khepri fallback handling: Mnesia function failed because table
`rabbit_vhost` is missing or read-only. Migration could be in progress;
waiting for migration to progress and trying again...'
This commit gives the metadata store 60 seconds to boot,
and only afterwards tries (and retries if needed) to declare the
exchange.
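A minimal sketch of that behaviour, with hypothetical helpers `metadata_store_ready/0` and `declare_exchange/0` standing in for the real calls:
```erlang
%% Sketch: poll for up to 60 seconds, then declare the exchange.
%% metadata_store_ready/0 and declare_exchange/0 are hypothetical helpers.
wait_then_declare() ->
    wait_then_declare(60).

wait_then_declare(0) ->
    {error, metadata_store_not_ready};
wait_then_declare(SecondsLeft) ->
    case metadata_store_ready() of
        true  -> declare_exchange();
        false ->
            timer:sleep(1000),
            wait_then_declare(SecondsLeft - 1)
    end.
```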
The horus dependency broke coverage on the `main` branch.
After this commit, on the `main` branch in the rabbitmq-server root
directory, both of the following show coverage:
1.
```
make -C deps/rabbitmq_mqtt ct-auth t=[v5,limit]:vhost_queue_limit FULL=1 COVER=1
open deps/rabbitmq_mqtt/logs/index.html
```
2.
```
bazel coverage //deps/rabbitmq_mqtt:auth_SUITE -t- --test_sharding_strategy=disabled --test_env FOCUS="-group [v5,limit] -case vhost_queue_limit"
genhtml --output genhtml "$(bazel info output_path)/_coverage/_coverage_report.dat"
open genhtml/index.html
```
where `genhtml` is
https://github.com/linux-test-project/lcov/blob/master/bin/genhtml
Prior to this commit, coverage was broken with both Bazel and Erlang.mk.
On main, the following logs were printed in different outputs:
First:
```
*** CT 2023-11-07 16:40:04.959 *** COVER INFO
Adding nodes to cover test: ['rmq-ct-reader_SUITE-1-21000@localhost']
```
followed by
```
Could not start cover on 'rmq-ct-reader_SUITE-1-21000@localhost': {error,
{already_started,
<20798.286.0>}}
```
followed by
```
*** CT 2023-11-07 16:40:04.960 *** COVER INFO
Successfully added nodes to cover test: []
```
followed by
```
Error in process <0.202.0> on node ct_rabbitmq_mqtt@nuc with exit value:
{{badmatch,{ok,[]}},
[{rabbit_ct_broker_helpers,'-cover_add_node/1-fun-0-',1,
[{file,"rabbit_ct_broker_helpers.erl"},
{line,2211}]},
{rabbit_ct_broker_helpers,query_node,2,
[{file,"rabbit_ct_broker_helpers.erl"},
{line,824}]},
{rabbit_ct_broker_helpers,run_node_steps,4,
[{file,"rabbit_ct_broker_helpers.erl"},
{line,447}]},
{rabbit_ct_broker_helpers,start_rabbitmq_node,4,
[{file,"rabbit_ct_broker_helpers.erl"},
```
It's also worth mentioning that
`make run-broker`
on v3.12.x:
```
Starting broker... completed with 36 plugins.
1> whereis(cover_server).
undefined
```
but on main:
```
Starting broker... completed with 36 plugins.
1> whereis(cover_server).
<0.295.0>
```
So the `cover_server` process runs on main even in non-test code.
Prior to this commit:
1. Start RabbitMQ with MQTT plugin enabled.
2.
```
rabbitmq-diagnostics consume_event_stream
^C
```
3. The logs will print the following warning:
```
[warning] <0.570.0> ** Undefined handle_info in rabbit_mqtt_internal_event_handler
[warning] <0.570.0> ** Unhandled message: {'DOWN',#Ref<0.2410135134.1846280193.145044>,process,
[warning] <0.570.0> <52723.100.0>,noconnection}
[warning] <0.570.0>
```
This is because rabbit_event_consumer:init/1 monitors the CLI process.
Any rabbit_event handler should therefore implement handle_info/2.
It's similar to what's described in the gen_event docs about
add_sup_handler/3:
> Any event handler attached to an event manager which in turn has a
> supervised handler should expect callbacks of the shape
> Module:handle_info({'EXIT', Pid, Reason}, State).
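A hedged sketch of such a handler (the module name and state are illustrative, not the actual plugin code); the extra `handle_info/2` clauses are what silence the warning above:
```erlang
-module(example_internal_event_handler).
-behaviour(gen_event).

-export([init/1, handle_event/2, handle_call/2, handle_info/2, terminate/2]).

init(_) -> {ok, no_state}.

handle_event(_Event, State) -> {ok, State}.

handle_call(_Request, State) -> {ok, ok, State}.

%% Without these clauses, the 'DOWN' message from the monitored CLI process
%% produces the "Undefined handle_info" warning shown above.
handle_info({'DOWN', _MRef, process, _Pid, _Reason}, State) ->
    {ok, State};
handle_info(_Msg, State) ->
    {ok, State}.

terminate(_Arg, _State) -> ok.
```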
Listing queues with the HTTP API when there are many (1000s) of
quorum queues could be excessively slow compared to the same scenario
with classic queues.
This optimises various aspects of HTTP API queue listings.
For QQs it removes the expensive cluster wide rpcs used to get the
"online" status of each quorum queue. This was previously done _before_
paging and thus would perform a cluster-wide query for _each_ quorum queue in
the vhost/system. This accounted for most of the slowness compared to
classic queues.
Secondly the query to separate the running from the down queues
consisted of two separate queries that later were combined when a single
query would have sufficed.
This commit also includes a variety of other improvements and minor
fixes discovered during testing and optimisation.
MINOR BREAKING CHANGE: quorum queues would previously only display one
of two states: running or down. Now there is a new state called minority
which is emitted when the queue has at least one member running but
cannot commit entries due to lack of quorum.
Also, a quorum queue may transiently enter the down state when a node
goes down, before it has elected a new leader.
WHY:
Shovelling from RabbitMQ to Azure Service Bus and Azure Event Hub fails.
Reported in
https://discord.com/channels/1092487794984755311/1092487794984755314/1169894510743011430
Reproduction steps:
1. Follow https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-integrate-with-rabbitmq
2. Publish messages to RabbitMQ:
```
java -jar target/perf-test.jar -x 1 -y 0 -u azure -p -C 100000 -s 1 -c 100000
```
Prior to this commit, after a few seconds and after around 20k messages
arrived in Azure, RabbitMQ errored and logged:
```
{function_clause,
[{amqp10_client_connection,close_sent,
[info,
{'EXIT',<0.949.0>,
{{badmatch,{error,insufficient_credit}},
[{rabbit_amqp10_shovel,forward,4,
[{file,"rabbit_amqp10_shovel.erl"},
{line,334}]},
{rabbit_shovel_worker,handle_info,2,
[{file,"rabbit_shovel_worker.erl"},
{line,101}]},
{gen_server2,handle_msg,2,
[{file,"gen_server2.erl"},{line,1056}]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,241}]}]}},
```
After this commit, all 100k messages get shovelled to Azure Service Bus.
HOW:
1. Fix link credit accounting in Erlang AMQP 1.0 client library. For each
message being published, link credit must be decreased by 1 instead of
being increased by 1.
2. If the shovel plugin runs out of credits, it must wait until the
receiver (Azure Service Bus) grants more credits to RabbitMQ.
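A hedged sketch of the bookkeeping behind these two fixes (the function and state shapes are illustrative, not the actual shovel or AMQP 1.0 client code):
```erlang
%% Decrement link credit on each outgoing transfer; when it reaches zero,
%% stop publishing until the receiver grants more credit via a flow frame.
%% buffer/2, send_transfer/1 and resume/1 are hypothetical helpers.
forward(Msg, #{link_credit := 0} = State) ->
    {paused, buffer(Msg, State)};
forward(Msg, #{link_credit := Credit} = State) ->
    ok = send_transfer(Msg),
    {ok, State#{link_credit := Credit - 1}}.

handle_flow(GrantedCredit, State) ->
    resume(State#{link_credit := GrantedCredit}).
```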
Note that the solution in this commit is rather a naive quick fix for one
obvious bug. AMQP 1.0 integration between RabbitMQ and Azure Service Bus is
not tested and not guaranteed at this point in time.
More work will be needed in the future, some work is done as part of
https://github.com/rabbitmq/rabbitmq-server/pull/9022
Previously, test pubsub was flaky
```
{shared_SUITE,pubsub,766}
{test_case_failed,missing m1}
```
because the binding wasn't present yet on node 0 when publishing to node
0.
[Why]
`rabbit_khepri` relied on undocumented internals of Ra. This made this
code very fragile and not future-proof at all.
[How]
Ra exposes a new `ra:key_metrics/1` API which fulfils the need. This
patch uses it.
As a consequence, we can get rid of the Dialyzer directives to turn off
some warnings.
Prior to this commit, the following test was flaky:
```
bazel test //deps/rabbitmq_mqtt:v5_SUITE -t- --test_sharding_strategy=disabled \
--test_env FOCUS="-group [mqtt,cluster_size_3] -case session_takeover_v3_v5" \
--test_env RABBITMQ_METADATA_STORE=khepri --config=rbe-26 --runs_per_test=20
```
because rabbit_misc:maps_any/2 filtered out a destination queue after routing if
that destination queue wasn't associated with any matched binding key.
This commit makes the test green.
However, the root cause of this issue isn't solved:
MQTT 5.0 requires the topic exchange to return matched binding keys for
destination queues such that feature NoLocal, and Subscription
Identifiers work correctly.
The current MQTT plugin relies on session state to be stored
consistently in the database. When a new client connects, the session
state is retrieved from the database and appropriate actions are taken:
e.g. consume from a queue, modify binding arguments, etc.
With Mnesia this consistency was guaranteed thanks to sync transactions
when updating queues and bindings.
Khepri has only eventual consistency semantics. This is problematic for the
MQTT plugin in the session_takeover_v3_v5 test scenario:
1. Client subscribes on node 1 (with v3). Node 1 returns subscription
success to client.
2. **Thereafter**, another client with the same MQTT client ID connects
to node 0 (with v5). "Proper" session takeover should take place.
However due to eventual consistency, the subscription / binding isn't
present yet on node 0. Therefore the session upgrade from v3 to v5
does not take place and leads to binding keys being absent when
messages are routed to the session's queue.
During metadata store migration, Mnesia might generate events
for Table-Key pairs for in-flight writes.
These are skipped by mnesia-khepri-migration, as they do not contain
enough information to copy the record, but here we ensure that none of
them slips through and gets processed.
The event carrying the Table-Record pair is generated at write time and
is properly processed after the initial copy of the data.
Tests session_reconnect and session_takeover were flaky, specifically
when run under Khepri.
The issue was in the test itself: the connect properties weren't
applied. Therefore, prior to this commit, an exclusive queue got created.
On `main` branch and v3.12.6 feature flag delete_ra_cluster_mqtt_node is
supported. Instead of skipping the entire test if that feature flag is
not enabled, enable the feature flag and run the test.
More generally:
"Instead of verifying if a feature flag is enabled or not, it's best to enable
it and react from the return value (success or failure).
Mixed version testing always turn off all feature flags by default.
So in the future, even though all nodes supports the mentionned
feature flag, the testcase will still be skipped." [JSP]
See commit message 00c77e0a1a for details.
In a multi node mixed version cluster where the lower version is
compiled with a different OTP version, anonymous Ra leader queries will
fail with a badfun error if initiated on the higher version and executed
on the leader on the lower version node.
as selective receives are efficient in OTP 26:
```
OTP-18431
Application(s):
compiler, stdlib
Related Id(s):
PR-6739
Improved the selective receive optimization, which can now be enabled for references returned from other functions.
This greatly improves the performance of gen_server:send_request/3, gen_server:wait_response/2, and similar functions.
```
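For context, the request/response pattern that benefits from this optimisation looks roughly like the following (the server reference and request term are illustrative):
```erlang
%% The reference returned by send_request/2 lets the OTP 26 VM skip
%% unrelated mailbox messages while waiting for the matching reply.
ReqId = gen_server:send_request(ServerRef, get_state),
case gen_server:wait_response(ReqId, 5000) of
    {reply, Reply}             -> Reply;
    timeout                    -> {error, timeout};
    {error, {Reason, _Server}} -> {error, Reason}
end.
```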
Since Erlang/OTP 26:
```
OTP-18445
Application(s):
erts, stdlib
It is no longer necessary to enable a feature in the runtime system in order to load modules that are using it.
It is sufficient to enable the feature in the compiler when compiling it.
That means that to use feature maybe_expr in Erlang/OTP 26, it is sufficient to enable it during compilation.
In Erlang/OTP 27, feature maybe_expr will be enabled by default, but it will be possible to disable it.
```
This test fails when MQTT client ID tracking is performed in Ra, and the
higher version node gets compiled with a different OTP version (26) than
the lower version node (25).
The reason is described in 83eede7ef2
```
An interesting side note learned here is that the compiled file
rabbit_mqtt_collector must not be changed. This commit only modifies
function specs. However as soon as the compiled code is changed, this
module becomes a new version. The new version causes the anonymous ra query
function to fail in mixed clusters: When the old node does a
ra:leader_query where the leader is on the new node, the query function
fails on the new node with `badfun` because the new node does not have
the same module version. For more context, read:
https://web.archive.org/web/20181017104411/http://www.javalimit.com/2010/05/passing-funs-to-other-erlang-nodes.html
```
We shouldn’t use an anonymous function for ra:leader_query or ra:consistent_query.
Instead we should use the {M,F,A} form.
9e5d437a0a/src/ra.erl (L102-L103)
In MQTT the anonymous function is used in bcb95c949d/deps/rabbitmq_mqtt/src/rabbit_mqtt_collector.erl (L50)
This causes the query to return a bad fun error (silently ignored in bcb95c949d/deps/rabbitmq_mqtt/src/rabbit_mqtt_collector.erl (L70-L71) )
when executed on a different node and either:
1.) Any code in file rabbit_mqtt_collector.erl changed, or
2.) The code gets compiled with a different OTP version.
2.) is the reason for a failing mixed version test in https://github.com/rabbitmq/rabbitmq-server/pull/8553 because both higher and lower versions run OTP 26,
but the higher version node got compiled with 26 while the lower version node got compiled with 25.
The same file
compiled with OTP 26.0.1
```
1> rabbit_mqtt_collector:module_info(attributes).
[{vsn,[30045739264236496640687548892374951597]}]
```
compiled with OTP 25.3.2
```
1> rabbit_mqtt_collector:module_info(attributes).
[{vsn,[168144385419873449889532520247510637232]}]
```
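A hedged sketch of the difference (the server id and query module are illustrative, and the exact way Ra combines the {M,F,A} arguments with the machine state is an assumption; see the ra.erl reference above):
```erlang
%% Fragile in mixed clusters: a local anonymous fun carries the sending
%% module's version, so the remote leader fails with badfun when its copy
%% of that module differs.
ra:leader_query(ServerId, fun(MacState) -> maps:get(pids, MacState, #{}) end),

%% Preferred: the {M, F, A} form; the named function only has to exist on
%% the node that executes the query (example_queries is an illustrative
%% helper module, not real code).
ra:leader_query(ServerId, {example_queries, pids, []}).
```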
Because the impact is very low (maintenance mode will not close all MQTT
client connections while feature flag delete_ra_cluster_mqtt_node is
disabled), we skip this test.
At some point Ra stopped updating the ra_open_files_metrics table
in favour of using counters. This change updates the
rabbit_quorum_queue:open_files/1 function to use ra counters instead
of querying the deprecated ETS table.
The handle_aux clause for `oldest_entry_timestamp` did not return
the updated Log structure and thus ended up leaking file descriptors
whenever this function was called and the segment wasn't already open.
[Why]
Up until now, a user had to run the following three commands to expand a
cluster:
1. stop_app
2. join_cluster
3. start_app
Stopping and starting the `rabbit` application and taking care of the
underlying Mnesia application could be handled by `join_cluster`
directly.
[How]
After the call to `can_join/1` and before proceeding with the actual
join, the code remembers the state of `rabbit`, the Feature flags
controller and Mnesia.
After the join, it restarts whatever needs to be restarted. It does
so regardless of the success or failure of the join. One exception is
when the node switched from Mnesia to Khepri as part of that join. In
this case, Mnesia is left stopped.
[Why]
It will be used to check the status of the Feature flags controller in a
future change to `rabbit_db_cluster:join/2`.
[How]
We check if the process is registered locally.
[Why]
In addition to stopping the supervisor child, `rabbit_sup` also takes
care of deleting the child spec from the supervisor.
This makes it possible to re-use `rabbit_sup:start_child/1`. Otherwise, it would
return an `{error, already_present}` error.
A customer could not recover a vhost due to the following error:
```
[error] <0.32570.2> Unable to recover vhost <<"/">> data. Reason {badmatch,{error,not_found}}
[error] <0.32570.2> Stacktrace [{rabbit_binding,recover_semi_durable_route,3,
[error] <0.32570.2> [{file,"rabbit_binding.erl"},{line,113}]},
[error] <0.32570.2> {rabbit_binding,'-recover/2-lc$^1/1-1-',3,
[error] <0.32570.2> [{file,"rabbit_binding.erl"},{line,103}]},
[error] <0.32570.2> {rabbit_binding,recover,2,
[error] <0.32570.2> [{file,"rabbit_binding.erl"},{line,104}]},
```
Reference: https://vmware.slack.com/archives/C0RDGG81Z/p1698140367947859
[Why]
Khepri depends on the fact that classic queue mirroring is turned off
(i.e. the `classic_queue_mirroring` deprecated feature is denied or
removed).
Therefore, there is no need to query the database for something that
won't be there anyway.
[How]
We simply return `false` if Khepri is enabled.
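A minimal sketch of the short-circuit, assuming `rabbit_khepri:is_enabled/0` as the check and a hypothetical helper for the Mnesia path:
```erlang
%% Sketch: classic queue mirroring cannot be in use once Khepri is enabled,
%% so skip the database query entirely. query_mnesia_for_policies/0 is a
%% hypothetical stand-in for the existing Mnesia-based check.
is_feature_used() ->
    case rabbit_khepri:is_enabled() of
        true  -> false;
        false -> query_mnesia_for_policies()
    end.
```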
[Why]
The callback wants to query the `rabbit_runtime_parameters` Mnesia table
to see if there are HA policies configured. However this table may exist but
may be unavailable. This is the case in a cluster if the node running
the callback has to wait for another cluster member before Mnesia tables
can be queried or updated.
[How]
Once we have verified the `rabbit_runtime_parameters` Mnesia table exists, we
wait for its availability before we query it. `rabbit_table:wait/2` will
throw an exception if the wait times out.
In particular this avoids an infinite loop in
`mnesia_to_khepri:handle_fallback()` because it relies on the
availability of the table too to determine if the table is being
migrated or not.
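A hedged sketch of the described guard (assuming `rabbit_table:wait/2` takes the list of table names plus a retry flag, and using a hypothetical query helper):
```erlang
%% Sketch: only query the table once Mnesia knows about it AND it has
%% become available; rabbit_table:wait/2 throws if the wait times out,
%% which is preferable to looping forever in the Mnesia->Khepri fallback.
has_ha_policies() ->
    Tab = rabbit_runtime_parameters,
    case lists:member(Tab, mnesia:system_info(tables)) of
        false ->
            false;
        true ->
            rabbit_table:wait([Tab], _Retry = true),
            query_ha_policies(Tab)  %% hypothetical query helper
    end.
```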
[Why]
The `is_feature_used` callback is used to determine if a deprecated
feature can really be denied. Therefore it only makes sense for
deprecated features in their `permitted_by_default` and
`denied_by_default` phases.
When a deprecated feature enters the `disconnected` or `removed` phases,
the code behind the feature should be gone. Therefore, there is no point
in checking if it is used.
[How]
We get the stability derived from the deprecated feature phase before we
actually run the callback.
While here, if the callback returns an error, we treat it as if it
returned that the feature is used.
As a reminder, a node will refuse to start if a deprecated feature is
denied but the feature is used. This is a slight change compared to the
initial goal of the deprecated features subsystem. Indeed, the goal was
to allow users to experiment with the future behavior of RabbitMQ. However,
a future node would probably still start even if there were left-overs
from a removed feature. However, if a `permitted_by_default` or
`denied_by_default` deprecated feature is denied, the node would refuse
to start. We see that as a strong way to communicate to the user that an
action is required from them.
gen_statem:format_status/1 did not make any difference
for logged crash reports, so we reach for the solution
we already use in multiple places.
References #9763.
Introduce GET /api/queues/detailed endpoint
Just removed garbage_collection, idle_since and any 'null' values
/api/queues with 10k classic queues returns 7.4MB of data
/api/queues/detailed with 10k classic queues returns 11MB of data
This sits behind a new feature flag, required to collect data from
all nodes: detailed_queues_endpoint
To scrub two sensitive state values. Note that this callback
is used/respected by some tools (via 'sys:get_status/1') but
that won't be enough in all cases (namely a process crash logger
would still log the original state).
By default, hibernate the SSL connection after 6 seconds, reducing its memory footprint.
This reduces memory usage of RabbitMQ by multiple GBs with thousands of
idle SSL connections.
This commit chooses a default value of 6 seconds because that hibernate_after
value is currently hard coded for rabbit_writer.
rabbit_mqtt_reader uses 1 second, rabbit_channel uses 1 - 10 seconds.
This value can be overridden via advanced.config, for example:
```
[{rabbit, [
{ssl_options, [
{hibernate_after, 30000},
{keyfile, "/etc/.../server_key.pem"},
{certfile, "/etc/.../server_certificate.pem"},
{cacertfile, "/etc/.../ca_certificate.pem"},
{verify,verify_none}
]}
]}].
```
See https://www.erlang.org/doc/man/ssl.html#type-hibernate_after
```
When an integer-value is specified, TLS/DTLS-connection goes into
hibernation after the specified number of milliseconds of inactivity,
thus reducing its memory footprint. When undefined is specified
(this is the default), the process never goes into hibernation.
```
Relates to
https://github.com/rabbitmq/rabbitmq-server/discussions/5346 and
https://groups.google.com/g/rabbitmq-users/c/be8qtkkpg5s/m/dHUa-Lh2DwAJ
This version of rules_erlang adds coverage support
Bazel has sort of standardized on lcov for coverage, so that is what
we use.
Example:
1. `bazel coverage //deps/rabbit:eunit -t-`
2. `genhtml --output genhtml "$(bazel info
output_path)/_coverage/_coverage_report.dat"`
3. `open genhtml/index.html`
Multiple tests can be run with results aggregated, e.g. `bazel
coverage //deps/rabbit:all -t-`
Running coverage with RBE has a lot of caveats,
https://bazel.build/configure/coverage#remote-execution, so the above
commands won't work as is with RBE.
[Why]
So far, the feature states were copied from the cluster after the actual
join. However, the join may have reloaded the feature flags registry,
using the previous on-disk record, defeating the purpose of copying the
cluster's states.
This was done in this order to keep error handling simpler.
[How]
This time, we copy the remote cluster's feature states just after the
reset.
If the join fails, we reset the feature flags again, including the
on-disk states.
[Why]
Sometimes, we need to reset the in-memory registry only, like when we
restart the `rabbit` application, not the whole Erlang node. However,
sometimes, we also need to delete the feature states on disk. This is
the case when a node joins a cluster.
[How]
We expose a new `reset/0` function which covers both the in-memory and
on-disk states.
This will be used in a follow-up commit to correctly reset the feature
flags states in `rabbit_db_cluster:join/2`.
[Why]
So far, `reset_registry/0` reset only the in-memory states and left the
on-disk record untouched. This is inconsistent.
[How]
After resetting the in-memory states, we remove the file on disk.
[Why]
When a Khepri-based node joins a Mnesia-based cluster, it is reset and
switches back from Khepri to Mnesia. If there are Mnesia files left in
its data directory, Mnesia will restart with stale/incorrect data and
the operation will fail.
After a migration to Khepri, we need to make sure there are no stale
Mnesia files left behind.
[How]
We use `rabbit_mnesia` to query the Mnesia files and delete them.
The default is 20 MiB, which is enough to upload
a definition file with 200K queues, a few virtual hosts
and a few users. In other words, it should accommodate
a lot of environments.
This contains a fix for a situation where a replica may not discover
the current commit offset until the next entry is written to the
stream.
Should help with a frequent flake in rabbit_stream_queue_SUITE:add_replicas
[Why]
The `feature_flags` file is used to record the states of feature flags.
The content is a formatted sorted Erlang list.
This works just fine, but it is harder for a human (a developer) to read
than if we had a single feature name per line. It is also more
difficult to use diff(1) on the file.
This patch ensures that the Erlang list is formatted with an item per
text line.
[How]
The format string enforces a line width of 1 column. The terms are
obviously not truncated, but the lines are wrapped immediately after
each item.
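A minimal sketch of the formatting trick, assuming the sorted list of feature names is written out with `io_lib:format/2` (the variable and file path are illustrative):
```erlang
%% A maximum line width of 1 forces the pretty printer to wrap after every
%% list element, yielding one feature name per line; terms themselves are
%% never truncated.
Content = io_lib:format("~1p.~n", [lists:sort(FeatureNames)]),
ok = file:write_file("feature_flags", Content).
```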
This osiris release contains optimisations and bug fixes:
* Various index scanning operations have been substantially improved
resulting in up to 10x improvement for certain cases.
* A bug which meant stream replication listener would fail if the
TLS version was limited to `tlsv1.3` has been fixed.
* A bug where the log may be incorrectly truncated when filters are
used has been fixed.
* Startup handles one more case where a file has been corrupted after
an unclean shutdown.
Providing a pre-hashed and salted password is
not significantly more secure but satisfies those
who cannot pass clear text passwords on the command
line for regulatory reasons.
Note that the optimal way of seeding users is still
definition import on node boot, not scripting with
CLI tools.
Closes #9166
when nodes are already disconnected, thus preventing
application controller timeouts and node monitor terminations,
which in some cases prevent complete recovery.
When the Web-MQTT connection terminates early because of no supported
subprotocol, `terminate/3` called fhc release although no fhc obtain was
called yet. This was the case even when `use_file_handle_cache` was false,
because `#state.should_use_fhc` was not initialized.
Fixing this avoids the harmless warning below:
```
[debug] error updating ets counter <0.1224.0> in table #Ref<0.2797411137.1366163457.189876>:
[{ets, update_counter,
[#Ref<0.2797411137.1366163457.189876>,
<0.1224.0>, {5, -1}],
...
[warning] FHC: failed to update counter 'obtained_socket', client pid: <0.1224.0>
```
Currently these are not allowed for use with stream queues, which is a
bit too strict. Some client implementations will automatically
nack or reject messages that are pending when an application
requests to stop consuming. Treating all message outcomes the same
makes as much sense as treating them differently.
Because both `add_member` and `grow` default to Membership status `promotable`,
new members will have to catch up before they are considered cluster members.
This can be overridden with either `voter` or (permanent) `non_voter` statuses.
The latter is useless without additional tooling, so it is kept undocumented.
- non-voters do not affect quorum size for election purposes
- `observer_cli` reports their status with lowercase 'f'
- `rabbitmq-queues check_if_node_is_quorum_critical` takes voter status into
account
A previous PR removed backing_queue_status as it is mostly unused,
but classic queue version is still useful. This PR returns version
as a top-level key in queue objects.
The previous timeout was too low and could time out before the
start cluster operation had completed. This would result in
the mnesia record being deleted whilst a member might still be active.
60s should be plenty in most cases. Hopefully...