
High availability is one of the cornerstones of the IT industry, and rightfully so. Having a secure, redundant environment is key to protecting any infrastructure from disaster. In this article, I will create a multi-master database cluster in OpenShift Virtualization. The aim is to demonstrate that having a highly available service is just as simple and seamless in OpenShift Virtualization as it is in a bare-metal environment. We'll set up services, pod anti-affinity, monitoring, and metrics.

As with most projects in the IT world, I drew on various sources to create a solution that would be feasible in most companies. Below are the three that were used:

Almog Fassy's blog regarding high availability, located here.

Chandler Wilkerson wrote about node affinity and anti-affinity in this article.

John Herr's blog was used for the custom metrics and monitoring; it can be found here.

Working Environment

OpenShift version 4.12 with the Virtualization Operator version 4.12 was used to create the overall environment for this demonstration. The virtual machines consist of three RHEL 9.1 machines running MariaDB with Galera configured for multi-master clustering, and a Fedora virtual machine that we'll use to connect to the database with the MySQL command-line client.

The OpenShift project/namespace was created by the user "testuser", who is a sudo user on both the RHEL and Fedora machines but is not a cluster admin.

MariaDB has included the Galera clustering software since version 10.1. While configuring MariaDB and Galera is beyond the scope of this blog, the my.cnf file that was used is located here and can be customized for your environment if needed. Both Galera and MariaDB have extensive documentation regarding cluster configuration, and this blog was written following those steps and strategies.
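
For context, the Galera-specific portion of a my.cnf for a three-node cluster typically looks something like the sketch below. This is not the exact file linked above; the node names, addresses, and provider path are placeholders to adapt for your environment.

# Sketch only: adjust names, addresses, and the wsrep_provider path for your install
[galera]
wsrep_on=ON
wsrep_provider=/usr/lib64/galera-4/libgalera_smm.so
wsrep_cluster_name="galera-cluster"
wsrep_cluster_address="gcomm://cnv-galera-db01,cnv-galera-db02,cnv-galera-db03"
wsrep_node_name="cnv-galera-db01"
wsrep_node_address="<this-node-ip>"
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0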

Creating the Associated Services for Replication

A number of ports need to be exposed for replication and access to the MariaDB cluster to work. The ports that were allowed through firewalld when setting up the Galera cluster are the same ports we want to add to our services. These can be added through the UI, or you can create them using the example services file located here. Note that you will have to create a service for each server, and the file will need to be modified according to your server naming convention. After applying/creating the services, verify they exist by running the command below.

$ oc get service

NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)
cnv-galera-db01-ports   ClusterIP   172.30.217.29    <none>        3306/TCP,4567/TCP,4568/TCP,4444/TCP,4567/UDP
cnv-galera-db02-ports   ClusterIP   172.30.130.101   <none>        3306/TCP,4567/TCP,4568/TCP,4444/TCP,4567/UDP
cnv-galera-db03-ports   ClusterIP   172.30.34.148    <none>        3306/TCP,4567/TCP,4568/TCP,4444/TCP,4567/UDP
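
For reference, each of these services follows roughly the shape below. This is a sketch rather than the exact file linked above; the selector assumes the vm.kubevirt.io/name label that KubeVirt places on each VM's virt-launcher pod, so adjust it to match your environment.

apiVersion: v1
kind: Service
metadata:
  name: cnv-galera-db01-ports
spec:
  type: ClusterIP
  selector:
    vm.kubevirt.io/name: cnv-galera-db01   # assumed label; must match your VM pods
  ports:
    - name: mysql
      protocol: TCP
      port: 3306
    - name: galera-repl-tcp
      protocol: TCP
      port: 4567
    - name: galera-ist
      protocol: TCP
      port: 4568
    - name: galera-sst
      protocol: TCP
      port: 4444
    - name: galera-repl-udp
      protocol: UDP
      port: 4567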


Now that the services are running, we can connect to any of the three servers using the mysql client. As you can see from the output above, we only have connectivity from within the cluster, but we have set ourselves up to connect from the outside later using a solution such as the MetalLB Operator that is available in OpenShift. For now, replication and connectivity can be verified in multiple ways, but we'll use a simple SQL statement from the Fedora host. The output below shows that we do indeed have three cluster members replicating with each other. You could also review the mysqld.log file that is created, but this will suffice.
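
To connect from the Fedora VM, point the client at one of the service DNS names. The project name below is a placeholder for your namespace:

$ mysql -u <db-user> -p -h cnv-galera-db01-ports.<your-project>.svc.cluster.local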

MariaDB [(none)]> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
1 row in set (0.003 sec)
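
As an additional sanity check, a write made on one node should be immediately readable from the others. For example, using a hypothetical throwaway database:

-- Run on cnv-galera-db01
MariaDB [(none)]> CREATE DATABASE repl_check;
MariaDB [(none)]> CREATE TABLE repl_check.t (id INT PRIMARY KEY);
MariaDB [(none)]> INSERT INTO repl_check.t VALUES (1);

-- Run on cnv-galera-db02 or cnv-galera-db03; the row written on db01 should appear
MariaDB [(none)]> SELECT * FROM repl_check.t;

-- Clean up from any node
MariaDB [(none)]> DROP DATABASE repl_check;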


Setting Pod Anti-Affinity for the Database Servers

To ensure that our virtual machines are not all created on the same worker node (thus negating the redundancy we've set up), we will set pod anti-affinity for them. This can also be accomplished using node affinity, but that requires node labeling, which someone with cluster admin privileges would have to do. First, we label the virtual machines to identify them as part of our clustered application.

$ oc label vm cnv-galera-db01 app=galera-cluster
virtualmachine.kubevirt.io/cnv-galera-db01 labeled

$ oc label vm cnv-galera-db02 app=galera-cluster
virtualmachine.kubevirt.io/cnv-galera-db02 labeled

$ oc label vm cnv-galera-db03 app=galera-cluster
virtualmachine.kubevirt.io/cnv-galera-db03 labeled
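
You can confirm the labels with a selector query if you like:

$ oc get vm -l app=galera-cluster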


To tie it all together, we then create a pod anti-affinity rule for the VMs. This can be done a couple of ways, but here I've used the UI. The settings shown in the image below are located under "Virtualization > Virtual Machines > "Your-VM-Name" > Virtual Machine Details > Scheduling > Affinity Rules" in the OpenShift Console. I recommend selecting the Preferred During Scheduling option when creating the affinity; otherwise, a VM could be required to come up on a node that is unschedulable for various reasons. The VM will also need to be rebooted for the preference to take effect, which is fine in our example since we already have a functioning database cluster. Create the rule for each virtual machine, then be sure to reboot each one.

[Image: pod anti-affinity rule configuration in the OpenShift console]
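
The same preference can also be expressed directly in each VirtualMachine's spec. The following is a sketch of the relevant section, assuming the app=galera-cluster label applied above is also present on the VM pods:

spec:
  template:
    metadata:
      labels:
        app: galera-cluster              # label must be on the launcher pods for the selector to match
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - galera-cluster
                topologyKey: kubernetes.io/hostname   # prefer spreading the VMs across worker nodes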

We can then check where each pod resides by using 'oc get pods' to verify that the machines are not all on a single node.

$ oc get pod -o wide

NAME                                  READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
virt-launcher-cnv-galera-db01-7658x   1/1     Running   0          23h   10.128.2.233   bwk-0   <none>           1/1
virt-launcher-cnv-galera-db02-wkfrq   1/1     Running   0          23h   10.131.0.109   bwk-1   <none>           1/1
virt-launcher-cnv-galera-db03-ldbv4   1/1     Running   0          22h   10.130.0.65    bwk-2   <none>           1/1


Monitoring and Metrics

The services are running, replication is happening, and the virtual machines are now configured to start on separate worker nodes. We'll now configure some metrics and ways to monitor our database cluster. This is another task that can be done in multiple ways: third-party monitoring software, native alerts, or logging within the OS. We're going to configure it from within OpenShift to keep with our single-pane-of-glass approach.

In his blog about custom metrics, located here, John Herr demonstrates how to use Prometheus monitoring to provide insight into what is happening inside our OpenShift virtual machines. I've used a modified version of that approach to view the MariaDB metrics provided by the mysqld_exporter project in the Prometheus repository on GitHub.

After downloading the latest version of mysqld_exporter from GitHub, I extracted the binary to '/usr/bin' and created a new user named mysqld_exporter. I then created '/etc/.mysqld_exporter.cnf' to hold the credentials. Similar to John, I then created a systemd service for the exporter located at '/etc/systemd/system/mysqld_exporter.service'. You can see an example of the file here. I then enabled it to start on boot and created the services in OpenShift relevant to my demonstration.
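
The credentials file and unit file look roughly like the following sketch; it is not the exact file linked above, and the database user, password, and flags are assumptions to adapt for your environment.

# /etc/.mysqld_exporter.cnf -- credentials the exporter uses to query MariaDB (assumed user)
[client]
user=exporter
password=<exporter-password>

# /etc/systemd/system/mysqld_exporter.service
[Unit]
Description=Prometheus MySQL Exporter
After=network-online.target

[Service]
User=mysqld_exporter
ExecStart=/usr/bin/mysqld_exporter --config.my-cnf=/etc/.mysqld_exporter.cnf
Restart=on-failure

[Install]
WantedBy=multi-user.target

After 'sudo systemctl enable --now mysqld_exporter', the exporter listens on its default port 9104, which is the port the exporter-facing services in OpenShift point at.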

From there, I was able to test from the UI using a custom query for 'mysql_global_status_uptime'. As can be seen in the screenshot below, the monitoring has been successful.

[Image: mysql_global_status_uptime query results in the OpenShift metrics UI]

Conclusion

High availability and data redundancy are wide-ranging and constantly evolving practices. As business and infrastructure needs change, so will your HA strategies. The article above demonstrates OpenShift Virtualization's ability to be as flexible and adaptable as you need it to be.

