How to manage virtual machines using Red Hat OpenShift Virtualization on Red Hat OpenShift Service on AWS

Red Hat OpenShift Virtualization gives you the ability to modernize your applications—without needing to rework all of your virtual machines—on new infrastructure. Learn how to create or migrate VMs using OpenShift Virtualization on Red Hat OpenShift Service on AWS.

Configuring Red Hat OpenShift Virtualization to run on Red Hat OpenShift Service on AWS

10 mins

To use Red Hat® OpenShift® Virtualization on your bare metal nodes, you need to install and configure the OpenShift Virtualization Operator. In this resource, Alan Cowles, Technical Principal Product Manager at Red Hat, walks through the steps to configure OpenShift Virtualization in your environment.

What will you learn?

How to install and configure the OpenShift Virtualization Operator

What you need before starting:

A Red Hat OpenShift Service on AWS (ROSA) cluster, with access to the Red Hat Hybrid Cloud Console and the AWS EC2 console

Configure OpenShift Virtualization on ROSA

Alan Cowles (00:00):
Hello, everybody, my name is Alan Cowles. I'm a Principal Product Manager at Red Hat, and I'd like to introduce you to our demonstration today, where we are going to complete the prerequisites on a Red Hat OpenShift Service on AWS, or ROSA, cluster to enable it to host OpenShift Virtualization virtual machines, which is a feature that we are going to begin supporting with the release of OpenShift 4.14. In order to support virtual machines, a ROSA cluster must have bare metal nodes. Now, let's look at a default deployment of ROSA. I'm going to go ahead and click over to the EC2 console right here.
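If you prefer to verify this from a terminal rather than the EC2 console, you can list the instance type behind each node. This is a minimal sketch and assumes you are already logged in to the cluster with oc; on a default ROSA cluster the extra column shows virtualized types such as m5.xlarge rather than bare metal (*.metal) types.

# Show every node with its EC2 instance type as an extra column.
oc get nodes -L node.kubernetes.io/instance-type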

(00:43):
We are running all virtual nodes in EC2. So in order to add bare metal nodes, we are going to go to our Hybrid Cloud Console, take a look at our machine pools, and create a new machine pool. In this case, I'm going to call it “worker-metal,” and we're going to select a metal instance type. There are a lot of different node types to pick from here, but we're going to scroll down until we see the m5.metal type, with 96 vCPUs and 384 GiB of RAM, and we are also going to enable autoscaling. I want a minimum of two of those nodes in my cluster and, to cut down on costs, let's go ahead and set the maximum for the pool to three. And we're going to create that machine pool.
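The same machine pool can be created with the rosa CLI instead of the Hybrid Cloud Console. The sketch below assumes a cluster named my-rosa-cluster (a placeholder) and a recent version of the rosa CLI.

# Create an autoscaling bare metal machine pool that scales between two and three nodes.
rosa create machinepool \
  --cluster my-rosa-cluster \
  --name worker-metal \
  --instance-type m5.metal \
  --enable-autoscaling \
  --min-replicas 2 \
  --max-replicas 3

# Confirm the new pool, its instance type, and its autoscaling range.
rosa list machinepools --cluster my-rosa-cluster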

(01:40):
We are also going to enable cluster autoscaling. This just makes it easy so that the machine pool goes ahead and creates those instances in AWS for us. Now, if we go back to EC2 and refresh this screen,

(02:00):
we can actually see that we have our m5.metal nodes. One has already passed its initial checks, and the other one is currently initializing. So if we go over to our OpenShift console, we can check on these nodes. Notice that they don't show up as nodes yet; right now, we still see just our original control plane, worker, and infra nodes, which are the virtual types. The new bare metal instances show up as machines, and two of them are currently provisioned. They can take upwards of 30 to 45 minutes to actually provision and be introduced to the cluster as nodes, so this is a process to be patient with along the way.
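You can follow the same provisioning progress from a terminal. This is a small sketch using standard oc commands; the machine names and timing will differ in your cluster.

# List the machines created by the new machine pool; the phase moves from
# Provisioning to Provisioned to Running as each bare metal host comes up.
oc get machines -n openshift-machine-api

# Watch until the bare metal workers join the cluster and report Ready.
oc get nodes -l node.kubernetes.io/instance-type=m5.metal -w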

(02:45):
While that's happening, though, I'm going to go ahead and prepare our environment for running virtual machines by installing the OpenShift Virtualization Operator. So I'm going to click on OperatorHub and take a look at OpenShift Virtualization. You can see the latest version is currently 4.13.4, and we're going to go ahead and install that. This is a very simple install for the deployment we're doing: I'm going to go with the stable channel, have it create the namespace “openshift-cnv” for us, and just click Install. The Operator install will take just a few moments. Once the OpenShift Virtualization Operator install is complete, we have to create the actual deployment of OpenShift Virtualization by clicking Create HyperConverged.
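The same Operator install can be done declaratively with oc instead of OperatorHub. The sketch below assumes the default redhat-operators catalog source and the stable channel; it creates the openshift-cnv namespace, an OperatorGroup, and a Subscription, then waits for the Operator install to finish.

oc apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-cnv
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: kubevirt-hyperconverged-group
  namespace: openshift-cnv
spec:
  targetNamespaces:
    - openshift-cnv
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  channel: stable
EOF

# Watch the ClusterServiceVersion until it reports the Succeeded phase.
oc get csv -n openshift-cnv -w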

(04:09):
Now, there are a couple of options we can configure here. Again, we're going to take the simplest path and just go with the default options, and then create it so that it can begin its installation. We will get status updates under that column as the Operator finishes its install. You can see that as the Operator progresses through its install process, we get notified that there is now a web console update, which we can apply by clicking on this link. We are then given a Virtualization menu on the left. This gives us the ability to take a look at the virtualization features that are available, the catalog of machines that we can deploy, templates if we decide to import any, and the bootable volumes and data sources we have available.
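Creating the HyperConverged deployment from the CLI looks like the sketch below. An empty spec accepts the same defaults chosen in the console; the resource must be named kubevirt-hyperconverged and live in the openshift-cnv namespace.

oc apply -f - <<EOF
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec: {}
EOF

# The deployment is ready when its Available condition reports True.
oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv \
  -o jsonpath='{.status.conditions[?(@.type=="Available")].status}{"\n"}'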

(05:33):
Thank you very much for watching this demonstration of configuring the prerequisites for OpenShift Virtualization on the Red Hat OpenShift Service on AWS, or ROSA. Please join me for future demonstrations where I will talk about deploying virtual machines in this environment, as well as importing virtual machines from existing hypervisors into this environment.

Now that you have successfully configured OpenShift Virtualization on your ROSA environment, you’re ready to deploy virtual machines to it. 


Previous resource
Prerequisites
Next resource
Deploying virtual machines in OpenShift

This learning path is for operations teams or system administrators

Developers may want to check out Developing applications on OpenShift on developers.redhat.com. 
