We introduce the OCF_RESKEY_allowed_cluster_nodes parameter, which can be used to specify
which nodes of the cluster rabbitmq is expected to run on. When this variable is not
set, the resource agent assumes that all nodes of the cluster (the output of crm_node -l)
are eligible to run rabbitmq. The use case here is clusters with a large number of
nodes, where only a specific subset is used for rabbitmq (usually this is enforced
with constraints).
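A minimal sketch of the intended fallback, assuming the standard OCF_RESKEY_* naming; the helper name is illustrative, not the agent's actual code:

get_allowed_nodes() {
    if [ -n "${OCF_RESKEY_allowed_cluster_nodes}" ]; then
        # Operator-supplied, space-separated list of eligible nodes.
        echo "${OCF_RESKEY_allowed_cluster_nodes}"
    else
        # Fall back to every node known to Pacemaker; the exact column
        # layout of `crm_node -l` varies between versions, but the node
        # name is usually the second field.
        crm_node -l | awk '{print $2}'
    fi
}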
Tested in a 9-node cluster as follows:
[root@messaging-0 ~]# pcs resource config rabbitmq
Resource: rabbitmq (class=ocf provider=rabbitmq type=rabbitmq-server-ha)
Attributes: allowed_cluster_nodes="messaging-0 messaging-1 messaging-2" avoid_using_iptables=true
Meta Attrs: container-attribute-target=host master-max=3 notify=true ordered=true
Operations: demote interval=0s timeout=30 (rabbitmq-demote-interval-0s)
monitor interval=5 timeout=30 (rabbitmq-monitor-interval-5)
monitor interval=3 role=Master timeout=30 (rabbitmq-monitor-interval-3)
notify interval=0s timeout=20 (rabbitmq-notify-interval-0s)
promote interval=0s timeout=60s (rabbitmq-promote-interval-0s)
start interval=0s timeout=200s (rabbitmq-start-interval-0s)
stop interval=0s timeout=200s (rabbitmq-stop-interval-0s)
[root@messaging-0 ~]# pcs status |grep -e rabbitmq -e messaging
* Online: [ controller-0 controller-1 controller-2 database-0 database-1 database-2 messaging-0 messaging-1 messaging-2 ]
...
* Container bundle set: rabbitmq-bundle [cluster.common.tag/rhosp16-openstack-rabbitmq:pcmklatest]:
* rabbitmq-bundle-0 (ocf::rabbitmq:rabbitmq-server-ha): Master messaging-0
* rabbitmq-bundle-1 (ocf::rabbitmq:rabbitmq-server-ha): Master messaging-1
* rabbitmq-bundle-2 (ocf::rabbitmq:rabbitmq-server-ha): Master messaging-2
Currently the resource agent hard-codes iptables calls to block off
client access before the resource becomes master. This was done
historically because many client libraries were fairly buggy at detecting a
not-yet-functional rabbitmq, so they were helped along by getting a
TCP RST packet, which made them move on and try their next configured
server.
It makes sense to be able to disable this behaviour, both because
most libraries have by now gotten better at detecting timeouts when
talking to rabbit and because when you run rabbitmq inside a bundle
(the pacemaker term for a container with an OCF resource inside) you
normally do not have access to iptables.
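A minimal sketch of how the new knob is meant to gate the firewall call, assuming it surfaces as OCF_RESKEY_avoid_using_iptables and that ocf-shellfuncs is sourced; the function name and the port are illustrative, not the agent's exact code:

block_client_access() {
    if ocf_is_true "${OCF_RESKEY_avoid_using_iptables}"; then
        # Inside a bundle we typically cannot touch iptables, so skip
        # the RST-based blocking entirely.
        return $OCF_SUCCESS
    fi
    # Reject AMQP client connections until this node is promoted, so
    # buggy client libraries fail fast and try their next server.
    iptables -I INPUT -p tcp --dport 5672 -j REJECT --reject-with tcp-reset
}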
Tested by creating a three-node cluster running inside bundles (containers):
Container bundle set: rabbitmq-bundle [cluster.common.tag/rhosp16-openstack-rabbitmq:pcmklatest]
Replica[0]
rabbitmq-bundle-podman-0 (ocf::heartbeat:podman): Started controller-0
rabbitmq-bundle-0 (ocf::pacemaker:remote): Started controller-0
rabbitmq (ocf::rabbitmq:rabbitmq-server-ha): Master rabbitmq-bundle-0
Replica[1]
rabbitmq-bundle-podman-1 (ocf::heartbeat:podman): Started controller-1
rabbitmq-bundle-1 (ocf::pacemaker:remote): Started controller-1
rabbitmq (ocf::rabbitmq:rabbitmq-server-ha): Master rabbitmq-bundle-1
Replica[2]
rabbitmq-bundle-podman-2 (ocf::heartbeat:podman): Started controller-2
rabbitmq-bundle-2 (ocf::pacemaker:remote): Started controller-2
rabbitmq (ocf::rabbitmq:rabbitmq-server-ha): Master rabbitmq-bundle-2
The ocf resource was created inside a bundle with:
pcs resource create rabbitmq ocf:rabbitmq:rabbitmq-server-ha avoid_using_iptables="true" \
meta notify=true container-attribute-target=host master-max=3 ordered=true \
op start timeout=200s stop timeout=200s promote timeout=60s bundle rabbitmq-bundle
Signed-off-by: Michele Baldessari <michele@acksyn.org>
This commit updates URLs to prefer the https protocol. Redirects are not followed, to avoid accidentally expanding intentionally shortened URLs (e.g. when a URL shortener is used).
# Fixed URLs
## Fixed Success
These URLs were switched to an https URL with a 2xx status. While the status was successful, your review is still recommended.
* [ ] http://www.apache.org/licenses/LICENSE-2.0 with 1 occurrences migrated to:
https://www.apache.org/licenses/LICENSE-2.0 ([https](https://www.apache.org/licenses/LICENSE-2.0) result 200).
Instead of calling crm_node directly it is preferable to use the
ocf_attribute_target function. This function returns the same as crm_node -n
as usual, except when run inside a bundle (aka a container in pcmk
language). Inside a bundle it returns the bundle name or, if the
container-attribute-target meta attribute is set to 'host', the
physical node name where the bundle is running.
Typically, when running a rabbitmq cluster inside containers, it is
desirable to set 'container-attribute-target=host' on the rabbit
cluster resource so that the RA is aware of which host it is running on.
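A hedged before/after example (the 'rabbit-start-time' attribute is only illustrative):

# Before: always the local node name, which is wrong from inside a bundle.
node=$(crm_node -n)

# After: the bundle name or, with container-attribute-target=host, the
# physical host the bundle is running on.
node=$(ocf_attribute_target)
crm_attribute --node "$node" --name 'rabbit-start-time' --update "$(date +%s)"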
Tested both on baremetal (without containers):
Master/Slave Set: rabbitmq-master [rabbitmq]
Masters: [ controller-0 controller-1 controller-2 ]
And with bundles as well.
Co-Authored-By: Damien Ciabrini <dciabrin@redhat.com>
In is_clustered_with(), the commands we run to check whether a node is
clustered or partitioned with us may fail. When they fail, that
actually tells us nothing about the remote node.
Until now, we treated such failures as hints that the remote
node is not in a sane state with us. But doing so has a pretty negative
impact, as it can cause rabbitmq to get restarted on the remote node,
causing quite a bit of disruption.
So instead of doing this, ignore the error (it is still logged).
There was a comment in the code wondering what the best behaviour is;
based on experience, I think preferring stability is the slightly more
acceptable poison of the two options.
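A rough sketch of the new behaviour, assuming ocf-shellfuncs is sourced; the helper name and the exact rabbitmqctl query are illustrative rather than the RA's real code:

check_clustered_with() {
    local remote_node="$1"
    local out rc
    # Ask the local rabbit which nodes it currently sees as running.
    out=$(rabbitmqctl eval 'rabbit_mnesia:cluster_nodes(running).' 2>&1)
    rc=$?
    if [ $rc -ne 0 ]; then
        # The query itself failed; that tells us nothing about the remote
        # node, so log it and report success instead of forcing a restart.
        ocf_log warn "cluster check failed (rc=$rc): $out"
        return 0
    fi
    echo "$out" | grep -q "$remote_node"
}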
Right now, every time we get a start notification, all nodes ensure
the rabbitmq app is started. This makes little sense, as nodes that are
already active don't need to do that.
On top of that, this had the side effect of updating the start time for
each of these nodes, which could result in the master moving to another
node.
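A rough illustration of the intended guard in the post-start notification (the variable names and the try_to_start_rmq_app helper are assumptions):

# Nodes already running the rabbit app have nothing to do here; skipping
# them also avoids refreshing their start-time attribute.
my_node=$(ocf_attribute_target)
if ! echo "$active_nodes" | grep -q -w "$my_node"; then
    try_to_start_rmq_app
fi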
If nothing is starting and nothing is active, we end up doing a -z " ",
which does not behave the same as -z "". Instead, just test each set of
nodes for emptiness on its own.
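In shell terms (variable names are illustrative):

starting_nodes=""
active_nodes=""

# Before: the concatenation makes this test -z " " (a single space),
# which is always false, so the empty case is never detected.
[ -z "$starting_nodes $active_nodes" ] || echo "old test: looks non-empty"

# After: test each list for emptiness on its own.
if [ -z "$starting_nodes" ] && [ -z "$active_nodes" ]; then
    echo "no starting and no active nodes"
fi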
It may happen that two nodes have the same start time, and one of them
is the master. When that happens, the other node gets the same score
as the master and can get promoted. There is no reason not to prefer
stability here, so let's keep the same master in that scenario.
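A sketch of the tie-break, assuming the node with the earliest start time is preferred for promotion (all names are illustrative):

# Strictly earlier start times can still take over; on a tie the incumbent
# master keeps its place, so Pacemaker has no reason to move it.
if [ "$candidate_start_time" -lt "$master_start_time" ]; then
    best_node="$candidate"
elif [ "$candidate_start_time" -eq "$master_start_time" ]; then
    best_node="$master_node"
fi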
This enables the cluster to focus on a vhost other than /, in case the
most important vhost is something else.
For reference, other vhosts may exist in the cluster, but they are not
guaranteed to be free of data loss. This patch does not address that
issue.
Closes https://github.com/rabbitmq/rabbitmq-server-release/issues/22
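For example, assuming the new option is exposed as a resource parameter named default_vhost (check the agent metadata for the exact name), it could be set with:

pcs resource update rabbitmq default_vhost='/important_vhost'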
Panicking and returning non-success on stop often leads to the resource
becoming unmanaged on that node.
Previously we called get_status to verify that RabbitMQ is dead, but it
sometimes returns an error even though RabbitMQ is not running. There
is no reason to call it; we will just verify that no beam process is
running.
Related fuel bug - https://bugs.launchpad.net/fuel/+bug/1626933
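A sketch of the simplified check inside the stop action (the user name and process pattern are assumptions):

# Instead of trusting get_status, wait until no beam process owned by
# the rabbitmq user remains before declaring the stop successful.
while pgrep -u rabbitmq -f beam >/dev/null 2>&1; do
    ocf_log info "beam process still running, waiting for it to exit"
    sleep 2
done
return $OCF_SUCCESS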
Partitions reported by `rabbit_node_monitor:partitions/0` are not
symmetric (i.e. node1 can report itself as partitioned from node2, but
not vice versa).
Given that we now have a strong notion of master in the OCF script, we
can check for those fishy situations during the master health check and
order the damaged nodes to restart.
Fuel bug: https://bugs.launchpad.net/fuel/+bug/1628487
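A rough sketch of the master-side check (the rabbitmqctl call matches the API named above; the attribute and variable names are illustrative):

# On the master, collect the partitions this node reports and order any
# node involved in one to restart by clearing its start-time attribute.
partitions=$(rabbitmqctl eval 'rabbit_node_monitor:partitions().' 2>/dev/null)
for node in $slave_nodes; do
    if echo "$partitions" | grep -q "$node"; then
        ocf_log warn "node $node is partitioned from us, ordering a restart"
        crm_attribute --node "$node" --name 'rabbit-start-time' --delete
    fi
done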