With the release of OpenShift 4 at Red Hat Summit in 2019, the world’s best enterprise container management platform unleashed an amazing set of features. OpenShift 4 completely redefined enterprise Kubernetes, combining a new approach to installation, upgrades, and management with advanced “day 2” capabilities and automations.

It sure has been an exciting year watching OpenShift 4 roll out across the globe!

And now, one short year later, we find ourselves celebrating OpenShift again at Red Hat Summit 2020’s virtual experience, and we are excited to see customers using OpenShift 4’s capabilities and benefits across an ever-growing footprint of platforms. With the convenience of multiple installation methods and an increasing pool of available Operators, the options around OpenShift continue to grow.

One particular area where we have seen increasing deployments and support in OpenShift 4 is large-scale private clouds such as Red Hat OpenStack Platform, which addresses on-premises needs by providing an API-driven, programmatically scalable infrastructure. This, it turns out, is a perfect match for OpenShift and its API-driven approach to automation and installation.

With OpenShift 4.2 we released full support for OpenShift on Red Hat OpenStack Platform via the installer-provisioned infrastructure (IPI) method. Installer-provisioned infrastructure deployments are intentionally prescriptive and limit the amount of variance in the install profile. This ensures an easier-to-manage and better-supported deployment for the majority of “OpenShift on OpenStack” use cases.

This integration allows the OpenShift 4 installer to automatically provision all of the Red Hat OpenStack Platform infrastructure simply by employing the native OpenStack APIs. The result is a guided installation, based on best practices, with minimal preparation of configuration files and no requirement to manually prepare infrastructure. Instead, the installer works with the native OpenStack components’ APIs to provision hosts, networking, storage, and more.

But this integration isn’t limited to install-time or “day 0” work. Once running, an OpenShift cluster manages the underlying OpenStack infrastructure programmatically via the OpenStack APIs as well. It does this using the very same mechanism as for public clouds: the Machine API Operator. The Machine API Operator provides an easy way to manage the underlying OpenShift Nodes through the concept of defined Machines and MachineSets. These can then be extended via custom resources such as MachineAutoscalers and the ClusterAutoscaler.
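If you would like to explore these objects on your own cluster, most of them live in the openshift-machine-api namespace (the ClusterAutoscaler itself is cluster-scoped). A quick look, assuming cluster-admin access:

$ oc get clusteroperator machine-api
$ oc get machines -n openshift-machine-api
$ oc get machinesets -n openshift-machine-api
$ oc get clusterautoscaler
$ oc get machineautoscaler -n openshift-machine-api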

With OpenShift 4’s consistent implementation methodology regardless of platform, we can use the same process to dynamically manage, scale, and provision OpenStack resources from within OpenShift, just as we would on the public clouds!

To fully understand “OpenShift on OpenStack” autoscaling, we first need to look at what an IPI-based install looks like. While the full details of the installation process are beyond the scope of this blog, it’s important to understand that the installer creates infrastructure within a Red Hat OpenStack Platform tenant by provisioning OpenStack instances from Nova using an image saved in Glance.
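For context, an IPI installation is driven by a small install-config.yaml file. A minimal sketch for OpenStack might look like the following; the cloud name, flavor, network, and floating IP values are illustrative placeholders for a lab environment, not the exact values used in this deployment:

apiVersion: v1
baseDomain: example.com
metadata:
  name: ocpra
compute:
- name: worker
  replicas: 3
controlPlane:
  name: master
  replicas: 3
platform:
  openstack:
    cloud: mycloud              # entry in clouds.yaml
    computeFlavor: m1.xlarge    # Nova flavor for the instances
    externalNetwork: public     # Neutron external network
    lbFloatingIP: 203.0.113.10  # floating IP for the OpenShift API
pullSecret: '<your pull secret>'
sshKey: '<your ssh public key>'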

Shift on Stack

Prefer to watch a movie? This blog is available as a 5:31 video here:

In the Red Hat OpenStack Platform Horizon console, we can see six OpenStack instances, all with unique but related naming. They have been created from a Red Hat Enterprise Linux CoreOS (RHCOS) image that was uploaded by the installer and have been assigned an OpenStack Flavor to accommodate their requirements. Note: this is a sample deployment; production deployments may require more workers or larger resource allocations.

Above: The OpenStack instances created by the OpenShift Installer

The installer creates all of the infrastructure by utilising the OpenStack APIs, creating not only the instances but also the necessary network infrastructure by interacting with OpenStack’s networking service, Neutron:

Above: View of the IPI-created network subnet, created via OpenStack’s Neutron
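If you prefer the command line to Horizon, the same resources can be inspected with the OpenStack CLI from within the tenant (assuming your credentials are sourced):

$ openstack server list
$ openstack network list
$ openstack subnet list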

Each OpenStack instance is now represented as a Node in OpenShift:

Above: OpenStack instances represented as Nodes in OpenShift
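The equivalent view from the oc CLI, where the INTERNAL-IP column maps each Node back to its OpenStack instance:

$ oc get nodes -o wide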

We can see each Node is represented in the Machine API via a Machine object. A Machine is the fundamental unit that describes the host for a Node. Machine objects are specific to each underlying platform and describe the type of compute node offered by the different cloud providers (AWS, OpenStack, etc.). For OpenShift on OpenStack this uses the Kubernetes Cluster API Provider for OpenStack:

Above: OpenStack instances represented as Machine objects
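Peeking inside one of these Machine objects reveals the OpenStack-specific provider configuration. A trimmed sketch of what you might see follows; the Machine name, flavor, and image values are illustrative:

apiVersion: machine.openshift.io/v1beta1
kind: Machine
metadata:
  name: ocpra-vgx2w-worker-xxxxx    # hypothetical worker Machine name
  namespace: openshift-machine-api
spec:
  providerSpec:
    value:
      apiVersion: openstackproviderconfig.openshift.io/v1alpha1
      kind: OpenstackProviderSpec
      cloudName: openstack          # cloud used for the OpenStack API calls
      flavor: m1.xlarge             # Nova flavor backing this Machine
      image: ocpra-vgx2w-rhcos      # RHCOS image uploaded to Glance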

The installer creates a worker MachineSet out of the available Machines. MachineSets allow collective scaling of many Machines at once, just like ReplicaSets do for Pods!

Above: Worker MachineSet
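It’s worth noting that a MachineSet can also be scaled manually, exactly as you would scale a ReplicaSet; the autoscalers we configure next simply adjust this replica count for us based on demand. For example, using our installer-created MachineSet:

$ oc scale machineset ocpra-vgx2w-worker --replicas=4 -n openshift-machine-api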

Scaling Setup

To set up autoscaling we need to define the overall scaling limits for the cluster. This is done via the ClusterAutoscaler. For machine scaling, this object is an extension of the Machine API that allows us to set cluster-wide scaling limits. The ClusterAutoscaler is set only once per cluster and defines the limits for all projects. It is a simple YAML file, and full details can be found in the documentation. For our example on Red Hat OpenStack Platform, we have set only some basic values to ensure the cluster doesn’t scale too large for our lab-based Red Hat OpenStack Platform install:

Above: editing the ClusterAutoscaler YAML directly in the OpenShift UI
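For reference, a minimal ClusterAutoscaler along the lines of what we used might look like this; the limits shown are illustrative values for a small lab, and the documentation covers the full set of options:

apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:
    maxNodesTotal: 12   # never grow the cluster beyond 12 nodes
    cores:
      min: 8            # cluster-wide CPU core limits
      max: 48
    memory:
      min: 16           # cluster-wide memory limits, in GiB
      max: 128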

Next up is controlling the scaling of the worker nodes specifically. For this we need to create a MachineAutoscaler. Like the ClusterAutoscaler, the MachineAutoscaler allows us to define the state and limits for scaling specific Machine types. There are many options for defining the MachineAutoscaler, so review the documentation to learn what’s possible. For our example we have set a very small scaling footprint, allowing no more than eight Machines and no fewer than two. As you can see below, we are setting these values for the “ocpra-vgx2w-worker” MachineSet, which we saw the installer create for us.

Above: editing the MachineAutoscaler YAML directly in the OpenShift UI
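A MachineAutoscaler equivalent to the one shown above would look something like this, targeting the installer-created worker MachineSet:

apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: ocpra-vgx2w-worker
  namespace: openshift-machine-api
spec:
  minReplicas: 2    # never fewer than two worker Machines
  maxReplicas: 8    # never more than eight
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: ocpra-vgx2w-worker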

Notice we have set “minReplicas” to “2”, which means the MachineSet must always contain at least two Machines. If you’ll recall, we currently have three worker nodes in our deployment. Let’s see what happens to the total after we scale!

And that’s really all that is needed to manage autoscaling!

Scale Up!

Now, we need to generate enough load to make the cluster need to grow. To do this, we are using a simple load-producing job found in the excellent online OpenShift 4 training’s “Scaling an OpenShift 4 Cluster” section. There we find some simple example YAML to “... produce a massive load that the cluster cannot handle, and will force the autoscaler to take action …” Sounds perfect, right?! The actual job is here, but we have changed it slightly for our environment:

apiVersion: batch/v1
kind: Job
metadata:
  generateName: work-queue-
spec:
  template:
    spec:
      containers:
      - name: work
        image: busybox
        command: ["sleep", "300"]
        resources:
          requests:
            memory: 500Mi
            cpu: 500m
      restartPolicy: Never
  backoffLimit: 4
  completions: 84
  parallelism: 84

This job starts up 84 pods in parallel, each requesting 500Mi of memory and half a CPU core, and runs each for five minutes; that’s roughly 42GiB of memory and 42 cores in aggregate, which the cluster will not be able to handle, so autoscaling will be invoked. The pods are set to never restart, so once they complete the load reduces and we can watch the cluster return to its desired state. The job is basic and is run within a dedicated project (autoscale-example):

$ oc create -n autoscale-example -f job.yaml

Once started, it immediately spawns the pods and begins to drive the resource requirements up:

Above: many pods created by the load generating job
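You can follow the job and its backlog of pending pods from the CLI as well:

$ oc get jobs -n autoscale-example
$ oc get pods -n autoscale-example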

As the MachineAutoscaler detects the need for more instances, it will request them from the OpenStack compute service, Nova, via the OpenStack APIs. Nova will then build new instances using the Red Hat Enterprise Linux CoreOS image in Glance with the allocated Flavor:

Above: OpenStack instances being created, as viewed via the OpenStack CLI

As the instances are created, the Machine API tracks them and adds worker machines to the MachineSet:

Above: OpenShift’s UI showing the new OpenStack instances being added as Machine objects
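Watching the Machine API react is as simple as listing the MachineSet and watching its Machines; the -w flag streams updates as the new Machines move into a Running state:

$ oc get machinesets -n openshift-machine-api
$ oc get machines -n openshift-machine-api -w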

All of this is being provisioned from OpenStack automatically, and we can see the cloud’s resources filling up!

Above: OpenStack Horizon’s view of our simple cluster’s resource usage during the scale-up

Scaling will continue to accommodate the increasing load across the infrastructure, creating as many instances as required, up to the MachineAutoscaler limits.

Scale Down!

Once the scaling job’s pods finish their five-minute workload, they terminate and the load on the cluster reduces. The autoscaler tracks this as well, and will scale down the OpenStack instances as the workload subsides. Remember, the autoscaler has both maximum and minimum limits, so it will work its way back down to the minimum when the resource requirements allow it.
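Scale-down behaviour is governed by the ClusterAutoscaler, and if you need to tune how aggressively capacity is reclaimed its spec accepts an optional scaleDown stanza. A sketch with illustrative timings:

apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  scaleDown:
    enabled: true        # allow the cluster to shrink again
    delayAfterAdd: 10m   # wait after a scale-up before considering scale-down
    unneededTime: 5m     # how long a node must be underused before removal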

Once the scale event ends, the MachineAutoscaler will begin to delete unneeded Machine resources from the MachineSet:

Above: OpenShift Machines are removed from service when the load reduces

It then disables and removes the underlying Nodes:

Above: OpenShift nodes are removed when the load reduces

Which in turn are then removed from the underlying OpenStack tenant:

Above: OpenStack instances are removed as the Machines and Nodes are disabled and deleted

This continues until we reach the autoscaler’s desired state for the MachineSet: its minimum of two replicas. As you can see, we now have three masters and only two workers, despite the original install provisioning three, nicely showing that the Machine API and autoscaling functionality can now manage “day 2” scaling seamlessly.

Above: The cluster is now in the autoscaler’s desired state for the MachineSet

Conclusion

With the power and ease of the OpenShift 4 architecture for deploying and managing dynamic, powerful enterprise container solutions, combined with the added support for Red Hat OpenStack Platform in OpenShift 4, it’s now as easy to deploy robust, scalable, enterprise Kubernetes clusters in your private data centre as it is in the public cloud!