Allow an offset spec to be used to attach at an appropriate point in the
stream. This is done by specifying a source filter with the key rabbitmq:stream-offset-spec.
The offset is also included as a message annotation with the key x-stream-offset.
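
For illustration, a minimal sketch using the Erlang amqp10_client library; the broker address, credentials, queue name, credit value and the exact attach_receiver_link arity are assumptions and may differ between releases:

    %% Attach a receiver at the start of a stream by passing the
    %% rabbitmq:stream-offset-spec source filter, then read the offset
    %% back from the x-stream-offset message annotation.
    {ok, Conn} = amqp10_client:open_connection(
                   #{address => "localhost",
                     port => 5672,
                     sasl => {plain, <<"guest">>, <<"guest">>}}),
    {ok, Session} = amqp10_client:begin_session(Conn),
    %% The spec can be "first", "last" or "next"; the broker also accepts
    %% an absolute offset or a timestamp.
    Filter = #{<<"rabbitmq:stream-offset-spec">> => <<"first">>},
    {ok, Receiver} = amqp10_client:attach_receiver_link(
                       Session, <<"offset-spec-receiver">>,
                       <<"/amq/queue/stream-q">>,
                       settled, configuration, Filter),
    ok = amqp10_client:flow_link_credit(Receiver, 10, never),
    receive
        {amqp10_msg, Receiver, Msg} ->
            Annotations = amqp10_msg:message_annotations(Msg),
            Offset = maps:get(<<"x-stream-offset">>, Annotations),
            io:format("message at stream offset ~p~n", [Offset])
    after 5000 ->
        timeout
    end.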
When a link is detached, we also issue a basic.cancel on the 0.9.1 channel. If this wasn't done
and you detached and then re-attached a link for the same queue, you'd get a consumer-tag
error from the 0.9.1 channel.
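
For context, this is roughly what that 0.9.1 operation looks like through the Erlang amqp_client library; a standalone sketch rather than the plugin's internal code, with a placeholder module name and the channel assumed to be already open:

    -module(cancel_sketch).
    -include_lib("amqp_client/include/amqp_client.hrl").
    -export([cancel_consumer/2]).

    %% Cancel the 0.9.1 consumer identified by CTag on channel Ch, which
    %% frees the consumer tag so a later re-attach for the same queue
    %% does not clash with the old consumer.
    cancel_consumer(Ch, CTag) ->
        #'basic.cancel_ok'{consumer_tag = CTag} =
            amqp_channel:call(Ch, #'basic.cancel'{consumer_tag = CTag}),
        ok.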
When a consumer reaches the end of a stream, it needs to register an
offset listener with the local stream member so that it can be notified
when new stream messages are committed. The stream queue implementation,
for some reason, registered offset listeners with the leader rather than
the local member.
The standard behavior of the action used in this automation is to
force-push the branch if it already exists. In this case we will just
short-circuit our workflow. It means that the action can no longer
automatically close the PR if the diff has converged to zero, but that
seems like a worthwhile tradeoff.
The suite-level timeout in the .erl, I've learned, is actually per
case. By sharding by testcase, we can better align the Common Test-level
and Bazel-level timeouts, so that we can get logs from remote
test run failures.
The AWS, Kubernetes and Classic peer discovery plugins use list_nodes and
Erlang's global:set_lock to create a mutex lock. To unlock, these plugins
fetch the latest node list with list_nodes and call global:del_lock.
However, if list_nodes within unlock fails, RabbitMQ will throw an
uncaught exception and the lock will not be released until the node
holding the lock is restarted. This prevents new nodes from joining the
cluster.
This failure can be avoided by passing the list of nodes from lock to
unlock. If a node goes away (and comes back) between the lock and unlock
calls, del_lock could still successfully remove the lock. Similarly, if
a new node starts up between the lock and unlock calls, del_lock
wouldn't need to inform the new node.
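
A minimal sketch of that approach, with placeholder module and function names rather than the actual backend code (the real callbacks also return node types and handle errors):

    -module(peer_discovery_lock_sketch).
    -export([lock/1, unlock/1]).

    %% Stand-in for the backend's list_nodes (AWS, Kubernetes or classic
    %% config), which discovers the current cluster members.
    list_nodes() ->
        {ok, [node() | nodes()]}.

    %% Acquire the mutex and return the node list alongside the lock id,
    %% so unlock/1 does not need to call list_nodes() again.
    lock(RequestingNode) ->
        {ok, Nodes} = list_nodes(),
        LockId = {rabbitmq_peer_discovery, RequestingNode},
        case global:set_lock(LockId, Nodes, 10) of
            true  -> {ok, {LockId, Nodes}};
            false -> {error, lock_not_acquired}
        end.

    %% Release the mutex with the node list captured at lock time; a
    %% failing list_nodes() here can no longer leave the lock held.
    unlock({LockId, Nodes}) ->
        global:del_lock(LockId, Nodes),
        ok.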
Use case: allow plain connections over one IP (e.g. an internal IP), and TLS
connections over another IP (e.g. an internet-routable IP). Without this
patch a cluster can only support access over one IP or the other, not
both.
(cherry picked from commit b9e6aad035)