Setting up a Percona MySQL multi-master cluster with HA using Red Hat PCS

Matteo Niccoli
3 min read · Nov 11, 2020

This cookbook provides my recipe to install a MySQL multi-master cluster in full HA.

Ingredients:

  • 3 servers (test01, test02, test03) running a RHEL/CentOS family distribution
  • Percona XtraDB Cluster
  • pcs (Pacemaker/Corosync)
  • A virtual IP address (the VIP) reachable from all nodes

Here are the steps to follow:

  1. Install Percona XtraDB Cluster on all 3 nodes (I recommend always using an odd number of nodes to avoid split-brain problems).

I will not tell you how to install and configure Percona: it is a complex procedure, so I suggest you read the official documentation.

  2. Install pcs on all 3 nodes:
# yum install pcs
# systemctl start pcsd
# systemctl enable pcsd
# passwd hacluster (set your password)
# pcs cluster auth test01 test02 test03
  • Be sure that all the nodes can see each other, either via DNS or via the /etc/hosts file.
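A minimal /etc/hosts on each node could look like this (the addresses below are placeholders, adjust them to your network):

172.24.163.31 test01
172.24.163.32 test02
172.24.163.33 test03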
# pcs cluster setup --name VIP test01 test02 test03
  • Now let’s enable the cluster and start it:
# pcs cluster enable --all
# pcs cluster start --all
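Once started, pcs status should report all three nodes as online:

# pcs status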
  • Let's set these two directives (they disable fencing and keep resources running even if quorum is lost):
# pcs property set stonith-enabled=false
# pcs property set no-quorum-policy=ignore
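The current property values can be double-checked at any time with:

# pcs property list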
  • Now that the cluster is active, we have to set a sysctl to allow Percona to bind to IP addresses that are not locally configured on the running server:
# sysctl -w net.ipv4.ip_nonlocal_bind=1

The step above is very important: without that sysctl, mysqld can't bind to the VIP on the servers where the address is not locally configured.
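This is because mysqld itself must be configured to listen on the VIP. A minimal sketch of the relevant my.cnf fragment, assuming you bind mysqld directly to the VIP defined later in this article:

[mysqld]
# Listen on the VIP; net.ipv4.ip_nonlocal_bind=1 lets mysqld bind this
# address even on the nodes that do not currently hold the VIP
bind-address = 172.24.163.37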

  • Also add the following directive to /etc/sysctl.conf, so the setting persists across reboots:
 net.ipv4.ip_nonlocal_bind=1
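You can then reload the file without rebooting:

# sysctl -p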
  • Once this is done, we create the resource called VIP, which is an IP address managed by the cluster:
# pcs resource create VIP ocf:heartbeat:IPaddr2 ip=172.24.163.37  cidr_netmask=32 op monitor interval=5s
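You can confirm that the address is actually up on one of the nodes with something like:

# ip addr show | grep 172.24.163.37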
  • The VIP resource is now ready. With the help of these commands:
# pcs cluster standby test01
# pcs cluster unstandby test01

We ensure that the VIP resource is able to move between nodes.

Repeat the standby/unstandby procedure for all cluster nodes.
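While you do this, it helps to watch the resources move in real time from another terminal:

# watch pcs status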

Once you have verified that the VIP resource moves correctly between the cluster nodes and that connectivity works on all 3 nodes, it is time to configure Percona as a resource in the cluster, so we can add a colocation constraint to the VIP that moves it automatically when the mysql resource is down. Without the next operations, the VIP would move only when a node is unreachable, while we want it to move even when just the Percona resource is down.

# pcs resource create mysqld systemd:mysqld clone --force meta is-managed=false op monitor interval=5

The command above, in order:

  • Creates a resource named mysqld, associating it with the local systemd mysqld service.
  • Clones it on all 3 servers.
  • Sets is-managed=false, so PCS can't stop/start it by itself. Being a DB, I preferred to leave stop and start operations to me or my colleagues.
  • Sets the resource check interval to 5 seconds.
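If you later decide to let PCS manage the service after all, the resource can be put back under cluster control with:

# pcs resource manage mysqld-clone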

At this point, we can check that the resource is seen correctly by the cluster using the command:

# pcs resource show
VIP (ocf::heartbeat:IPaddr2): Started test01
Clone Set: mysqld-clone [mysqld]
    mysqld (systemd:mysqld): Started test01 (unmanaged)
    mysqld (systemd:mysqld): Started test02 (unmanaged)
    mysqld (systemd:mysqld): Started test03 (unmanaged)

Create a colocation constraint between the VIP resource and the mysqld clone:

# pcs constraint colocation add VIP with mysqld-clone INFINITY
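You can verify that the constraint was registered with:

# pcs constraint show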

After this, we can configure our software to connect to the VIP.

To verify this is working correctly, just stop the mysqld service on the server where the VIP is present and verify that the VIP moves to a server where the mysqld resource is up and running.
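For example, assuming the VIP is currently on test01, run on that node:

# systemctl stop mysqld

and then check from any node that the VIP has moved:

# pcs resource show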

That's all. I hope this article is useful for anyone who needs a configuration like this.
