
This is a guest post by Matthew Kereczman, Solutions Architect at LINBIT.

OpenShift is a container platform developed by Red Hat, built around Linux containers and Kubernetes. As a result, OpenShift comes with a few things that you won’t get from a vanilla Kubernetes install, such as an image repository, an operator marketplace, a Prometheus monitoring stack, and a management GUI. However, one of the things you don't always get out of the box with OpenShift is a persistent storage provider.

LINSTOR can extend OpenShift's capabilities by providing persistent storage for the applications deployed with OpenShift. While planning LINSTOR's integration into OpenShift, LINSTOR's developers took care to support some of the distribution's major features. For example, LINSTOR is installable from the operator marketplace, exposes Prometheus metrics that OpenShift's Prometheus can scrape, and provides users with a GUI for viewing and managing LINSTOR's various objects.

In this blog post, I'll briefly cover how you can get started using LINSTOR in OpenShift and highlight some of the integration's most interesting features.

Setting Up OpenShift with LINSTOR

You’ll need to know how to get started with OpenShift and LINSTOR, so let’s cover that first. With recent versions of OpenShift (4.10 being the most recent as I’m writing this), you can install the container platform in many different environments, including AWS, Azure, GCP, IBM Cloud, VMware Cloud, Red Hat OpenStack Platform, and bare metal, to name a few. LINSTOR will work in most, if not all, environments that OpenShift supports, including environments running on ARM, x86_64, and s390x (IBM Z) architectures. I will highlight the installation process for AWS simply because that’s where I’m most comfortable, but the concepts apply to all other environments.

Creating an OpenShift Cluster

Whether you’re looking to trial OpenShift with LINSTOR or are ready to install something more permanent, Red Hat’s OpenShift console can guide your installation. The appropriate installation tools and instructions are provided when you select your desired environment. For my trial in AWS, the console presented a few simple steps to prepare for installation.


Those steps can be summarized as follows:

  1. Download the necessary tools:
    1. the openshift-install binary
    2. your pull secret for accessing Red Hat’s container registry
    3. the oc and kubectl binaries
  2. Prepare AWS account access:
    1. create an AWS IAM account with limited access
    2. set your AWS CLI to use that account as its default
    3. run the openshift-install binary
  3. And, of course, learn how to get support for your new cluster.

When you run the installer, you’ll see output like the following, instructing you on how to access your cluster. The entire process takes about 45 minutes to complete:

matt@zeus[]openshift-linstor$ ./openshift-install create cluster
? SSH Public Key /home/matt/.ssh/aws-keypair.pub
? Platform aws
INFO Credentials loaded from the "default" profile in file "/home/matt/.aws/credentials"
? Region us-west-2
? Base Domain example.com
? Cluster Name openshift-demo
? Pull Secret [? for help] ******[...]
INFO Creating infrastructure resources...        
INFO Waiting up to 20m0s (until 7:29AM) for the Kubernetes API at https://api.openshift-demo.example.com:6443...
INFO API v1.23.3+e419edf up                      
INFO Waiting up to 30m0s (until 7:41AM) for bootstrapping to complete...
INFO Destroying the bootstrap resources...      
INFO Waiting up to 40m0s (until 8:03AM) for the cluster at https://api.openshift-demo.example.com:6443 to initialize...
INFO Waiting up to 10m0s (until 7:43AM) for the openshift-console route to be created...
INFO Install complete!                          
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/matt/Projects/openshift-linstor/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.openshift-demo.example.com
INFO Login to the console with user: "kubeadmin", and password: "abcde-fghij-klmno-pqrst"
INFO Time elapsed: 30m12s

Now you can log into your AWS console to view your new EC2 infrastructure.
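
If you prefer the CLI, you can list the new instances and their availability zones with a quick query. This is a sketch; the `openshift-demo-*` Name tag pattern is an assumption based on my cluster name:

$ aws ec2 describe-instances \
    --filters "Name=tag:Name,Values=openshift-demo-*" \
              "Name=instance-state-name,Values=running" \
    --query 'Reservations[].Instances[].[InstanceId,Placement.AvailabilityZone,Tags[?Key==`Name`]|[0].Value]' \
    --output table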

Preparing OpenShift Worker Nodes’ Storage for LINSTOR

Once logged into your AWS console, you’ll see new EC2 instances labeled as either workers or masters, reflecting their role in the OpenShift cluster. For LINSTOR to be able to provision storage from the worker instances, we need to attach new, unused EBS volumes to each worker before installing LINSTOR.

EBS volumes must exist in the same Availability Zone (AZ) as the EC2 instance to which they're attached. There are many ways to identify which AZ an EC2 instance is running in. Luckily for us, the openshift-installer tags each worker's root volume with a name that includes the AZ, making it easy to identify. Therefore, we can create three new EBS volumes, one for each worker, in their respective AZs. I named the new EBS volumes `linstor-vol-<az>`, so it's easy to tell which AZ and worker each belongs to, and then attached each volume to the worker instance in its AZ.

The same procedure of creating block volumes and attaching them within their respective zones applies to other cloud platforms as well. If you need help identifying the correct way to attach these volumes, reach out to LINBIT or your cloud provider for assistance.
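
For AWS, here's a hedged sketch of creating and attaching one such volume with the AWS CLI; the size, volume ID, and instance ID are placeholders, and you'd repeat this once per worker in its AZ:

# Create a volume in the worker's AZ, named so the AZ is obvious
$ aws ec2 create-volume \
    --availability-zone us-west-2a \
    --size 100 --volume-type gp3 \
    --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=linstor-vol-us-west-2a}]'

# Attach it to the worker instance running in that same AZ
$ aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device /dev/sdf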

In AWS, these EBS volumes will show up on the host systems as `/dev/nvme1n1`. These device names are needed for the next step. You can see the names of the attached block devices by running `lsblk` or `cat /proc/partitions` from a shell on the host systems. If you don’t have shell access, you can use a busybox container:

$ oc run bb --image=busybox --command -- cat /proc/partitions
pod/bb created

$ oc logs bb
major minor  #blocks  name
259        0  125829120 nvme0n1
259        1       1024 nvme0n1p1
259        2     130048 nvme0n1p2
259        3     393216 nvme0n1p3
259        4  125303791 nvme0n1p4
259        5  104857600 nvme1n1

$ oc delete pod bb
pod "bb" deleted

Preparing OpenShift Worker Nodes’ Network for LINSTOR

Before installation, LINSTOR-specific ports must be opened in the OpenShift workers' firewall. In AWS, we add these firewall rules to the security group the workers belong to, using the AWS Console. Only inbound rules are needed on the worker nodes' security group, as the default policy for outbound traffic is to allow everything. Also, note that my rules allow traffic from `10.0.0.0/16`, which includes the subnets my hosts are configured to use.


Since LINSTOR and DRBD operate at the host level, the LINSTOR Operator cannot manage these rules, nor are they visible to OpenShift's CNI. If you’re unsure where to add these rules in your environment, reach out to LINBIT or your cloud provider for assistance.
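
As a sketch, here is how the same inbound rules could be added with the AWS CLI. The security group ID is a placeholder, and the port list reflects LINSTOR's and DRBD's documented defaults (controller and satellite traffic on 3366-3367 and 3370-3371, DRBD replication on 7000-7999); verify the exact list against LINBIT's documentation for your version:

# LINSTOR satellite and controller ports (plain and SSL)
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 3366-3367 --cidr 10.0.0.0/16
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 3370-3371 --cidr 10.0.0.0/16

# DRBD replication traffic (LINSTOR's default TCP port range)
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 7000-7999 --cidr 10.0.0.0/16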

Installing the LINSTOR Operator into OpenShift

With the environment provisioned and prepared for LINSTOR, we can finally install it! LINSTOR is listed in the OpenShift Operator marketplace and can be installed from there, but we can also install it using Helm, which is my preferred method as it allows me to store my LINSTOR Operator's configuration values in a local Git repository.

OpenShift has a concept of projects, similar to namespaces in Kubernetes, except that the creation of projects is restricted to administrators by default. Create a project named `storage` to deploy LINSTOR into, then create a secret containing your LINBIT customer (or trial) account credentials, which are used to access LINBIT’s container registry:

$ oc new-project storage
$ oc create secret docker-registry drbdiocred \
     --docker-server=drbd.io --docker-username=jshmoe --docker-password=s3cr3tPW

Create a Helm values override file that will be used to configure the LINSTOR Operator. Notice that `operator.[...].devicePaths` is set to our `/dev/nvme1n1`, which will be used for LINSTOR’s storage pools in the configuration created by the following command:

$ cat << EOF > linstor-op-vals.yaml
global:
  setSecurityContext: false
operator:
  satelliteSet:
    storagePools:
      lvmThinPools:
      - name: lvm-thin
        thinVolume: thinpool
        volumeGroup: ""
        devicePaths:
        - /dev/nvme1n1
    kernelModuleInjectionMode: ShippedModules
    kernelModuleInjectionImage: drbd.io/drbd9-rhel8:v9.1.6
  controller:
    dbConnectionURL: k8s
stork:
  enabled: false
csi:
  enableTopology: true
etcd:
  enabled: false
haController:
  replicas: 3
EOF

Finally, add and/or update the LINSTOR charts repository in Helm, and install the operator using the values file created above:

$ helm repo add linstor https://charts.linstor.io
$ helm repo update
$ helm install linstor-op linstor/linstor --values linstor-op-vals.yaml
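
After the Helm install completes, you can verify the rollout and ask the LINSTOR Controller for its view of the cluster. A hedged sketch: `linstor-op-cs-controller` is the Controller deployment name the `linstor-op` release created in my cluster; run `oc -n storage get deploy` first if yours differs:

$ oc -n storage get pods
$ oc -n storage exec deploy/linstor-op-cs-controller -- linstor node list
$ oc -n storage exec deploy/linstor-op-cs-controller -- linstor storage-pool list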

After a short while, you'll see LINSTOR's pods running in your `storage` project, and we can configure some LINSTOR-backed storageClasses for our users. The storageClass configurations below specify three different replication policies, for 1, 2, and 3 replica LINSTOR volumes, using the `autoPlace` parameter. All available storageClass options can be found in LINBIT’s documentation.

$ cat << EOF > linstor-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "linstor-csi-lvm-thin-r1"
provisioner: linstor.csi.linbit.com
parameters:
  autoPlace: "1"
  storagePool: "lvm-thin"
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "linstor-csi-lvm-thin-r2"
provisioner: linstor.csi.linbit.com
parameters:
  autoPlace: "2"
  storagePool: "lvm-thin"
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "linstor-csi-lvm-thin-r3"
provisioner: linstor.csi.linbit.com
parameters:
  autoPlace: "3"
  storagePool: "lvm-thin"
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
EOF
$ oc apply -f ./linstor-sc.yaml

Finally, you have a working OpenShift cluster with LINSTOR installed and ready to provision storage!
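
As a quick smoke test, here is a minimal sketch of a PVC against one of the new storageClasses, plus a pod to consume it (the names are arbitrary):

$ cat << EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: linstor-test-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: linstor-csi-lvm-thin-r2
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: linstor-test-pod
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: linstor-test-pvc
EOF

Because the storageClasses use `volumeBindingMode: WaitForFirstConsumer`, the PVC stays Pending until the pod is scheduled; once the pod is Running, `oc get pv` should show a LINSTOR-provisioned volume.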

Using LINSTOR’s Features in OpenShift

With LINSTOR ready to use in OpenShift, we can explore some of the features and benefits of coupling the two technologies. Besides the obvious benefit of running applications that require persistent storage in OpenShift, the following sections cover what I believe are the highlights of this integration.

Dynamic Replicated and Highly Available Storage

The most critical feature gained by installing LINSTOR is its DRBD-replicated storage. DRBD replicates block storage synchronously between cluster nodes, ensuring you always have a block-for-block identical replica of your volume on more than one cluster node. In addition, OpenShift, much like Kubernetes, will move pods off an unhealthy cluster node after an eviction timeout is reached. During pod evictions, LINSTOR will ensure your pods' volumes get reattached to a healthy replica, no matter which worker node the pod is assigned to. Even when the pod is assigned to a node without a physical replica of the volume, LINSTOR can attach the volume to the pod “disklessly” over the network.

Browse to the OpenShift marketplace and search for any community Operator that requires persistence (unless you're okay with paying to play with a licensed Operator, in which case pick any you'd like). For example, I went with Redis because I knew it had a community version and that persistence is an option.

Choosing either Redis StandAlone or Redis Cluster, you'll be able to populate a Storage Class Name in the configuration options presented to you. Next, use one of the LINSTOR storage classes we created during LINSTOR's installation, and finally, create the Operator.

After a while, you should see that Redis is running and that LINSTOR has created some persistent volumes by inspecting the output of `oc get pv`. Now that we have an application running on LINSTOR-provisioned storage, we can take a look at enabling LINBIT’s GUI and monitoring features.

Storage Cluster Observability using LINBIT GUI

The LINBIT GUI is packaged alongside the LINSTOR Controller for LINBIT’s customers and only needs to be exposed before it can be used. OpenShift’s route objects can route traffic to an endpoint within our cluster, much like an ingress object in Kubernetes. Let’s create the route, then get its DNS name to plug into our browser:

$ oc create route edge --service linstor-op-cs
$ oc get route

When you first browse to LINBIT's GUI, you'll see the dashboard, which displays an overview of your most essential LINSTOR objects: nodes, resources, and volumes.

When LINSTOR is deployed into OpenShift, it creates a single highly available LINSTOR Controller pod, controlled by a Deployment, and a DaemonSet that runs LINSTOR Satellite pods on each of the OpenShift worker nodes. For example, in my cluster, all four nodes (1x Controller and 3x Satellites) report "Online."

By clicking on any part of the LINBIT GUI referencing “nodes”, we can drill down into the LINSTOR cluster's detailed view of those objects. Furthermore, operations can be performed on these objects by clicking on the "stacked bullet points," also known as the "kebab menu," to the right.

The same is true for all other objects, such as resources and storage pools.

At the time of writing this blog post, the LINBIT GUI is still undergoing rapid development, so expect to spot some differences in your deployment.

Also, don’t forget to control access to your LINBIT GUI if you’re rolling this out in production. This can be done using client-side TLS certificates on the route object created in OpenShift.

Integrating OpenShift Monitoring with LINSTOR

Finally, another great benefit of using OpenShift with LINSTOR is the seamless integration of their respective monitoring capabilities. OpenShift monitors itself using Prometheus out of the box, and LINSTOR exposes Prometheus metrics on an endpoint accessible to the OpenShift cluster by default. Since LINSTOR is responsible for our OpenShift infrastructure's storage, it seems logical that OpenShift’s monitoring be extended to monitor LINSTOR.

OpenShift needs to be configured to monitor user-defined projects via its cluster-monitoring-config configMap object. The following commands enable this monitoring:

$ cat << EOF > oc-monitoring.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
EOF
$ oc apply -n openshift-monitoring -f oc-monitoring.yaml
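
You can confirm that the user workload monitoring stack came up in its standard namespace:

$ oc -n openshift-user-workload-monitoring get pods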

Once applied, you can use the OpenShift GUI to confirm that LINSTOR metrics are now being scraped by OpenShift’s Prometheus stack, under Observe -> Metrics. LINSTOR-related metrics start with either `linstor_` or `drbd_`. Begin typing either prefix into the metrics query input, and you should be presented with all the available metrics.
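
For example, here are two simple queries to try; both metric names appear in my cluster, and the `sum()` aggregation is just an illustration:

# Number of resource definitions known to the LINSTOR Controller
linstor_resource_definition_count

# DRBD resources reported per node, summed across the cluster
sum(drbd_resource_resources)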

I queried `linstor_resource_definition_count` and `drbd_resource_resources` after adding a singleton Redis server to the Redis cluster I had already deployed, and I could see that we're scraping real-time metrics from LINSTOR using OpenShift.


Once you’re ready to take the next step into alerting and graphing these metrics for better observability, you can check out LINBIT’s Grafana dashboard in the Grafana community. Metrics starting with `drbd_` are exposed by LINBIT’s drbd-reactor daemon and are documented in the DRBD Reactor GitHub project. Finally, metrics starting with `linstor_` are exposed by the LINSTOR Controller itself and are documented in the linstor-server GitHub project.

Concluding Thoughts

As I hope this blog post has shown, deploying LINSTOR into OpenShift to satisfy your persistent storage requirements comes with tight integration that highlights each technology's most valuable features. In addition, the LINSTOR Operator is easy to deploy, certified by Red Hat, and supported by LINBIT. Red Hat is there to support any of your OpenShift needs. Therefore, running OpenShift from Red Hat with LINSTOR from LINBIT will establish a fully supported container platform for your enterprise.

LINBIT is committed to its Open Source community and clients. So if you have any, and I mean any, feedback or suggestions for any of LINBIT’s software, we encourage you to reach out.

