The OpenShift 4.5 release marks the GA of OpenShift Virtualization. OpenShift Virtualization brings the ability to deploy VMs alongside your Pods, which opens up a world of possibilities for infrastructure and application architectures. It uses the Linux kernel hypervisor (KVM), with libvirt providing the management abstraction layer.

In this first installment I will go through how to install OpenShift Virtualization and how to create and configure VMs; in upcoming posts I will explore more advanced topics around strategies and architectures.

Installation

OpenShift Virtualization is installed via OperatorHub, and you will need cluster-admin permissions to install it.

Online Clusters

For clusters that have an internet connection with unrestricted access to quay.io and registry.redhat.io, follow the standard installation guide. It's as simple as installing the operator from OperatorHub and then creating a HyperConverged Cluster instance.
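
For reference, the HyperConverged instance can be created from the web console or with a minimal manifest like the sketch below. The apiVersion has changed across releases, so check the CRD installed on your cluster:

apiVersion: hco.kubevirt.io/v1beta1  # older releases use hco.kubevirt.io/v1alpha1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec: {}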

Disconnected Clusters

For both disconnected clusters and clusters behind a registry proxy, you will need to create a catalog source. For disconnected clusters you will also need to mirror the related images to your local registry, and there are two ways to do this.

The first is to follow the OCP docs for catalog creation and mirroring. The limitation of this method is that you have to mirror the entire catalog, which can take anywhere between 1 and 5 hours and consume 20 to 30 GB of space. Most of the operators in the catalog cannot be used in disconnected clusters, so it's not an ideal solution for fully disconnected clusters.

The second method is to use the custom catalog creation tool, which lets you create a custom catalog containing only the operators you want. Full instructions are provided in the repo. Once the catalog is up, navigate to OperatorHub and install OpenShift Virtualization using the standard instructions.
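
Either way, the catalog is registered with the cluster through a CatalogSource object. A minimal sketch, where the catalog image path and names are placeholders for your mirrored image:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: MyRegistry:5000/my-operator-catalog:latest  # placeholder mirrored catalog image
  displayName: My Operator Catalog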

Getting your VM image into OpenShift

OpenShift Virtualization supports importing QCOW2 and RAW images. For the purposes of this guide we will use the Fedora QCOW2 image found at https://alt.fedoraproject.org/cloud. I will go through two ways to import images into OpenShift.

Import image to a registry

You can publish the QCOW2 image to your local registry by using Podman to build a simple Dockerfile. Create a Dockerfile with the contents below and copy the QCOW2 image into the same folder.

FROM kubevirt/container-disk-v1alpha
ADD Fedora-Cloud-Base-32-1.6.x86_64.qcow2 /disk

Build and push the image to a registry that is accessible by OpenShift. Substitute “MyRegistry:5000” with your registry URL and port.

podman build -t MyRegistry:5000/fedora:32.1.6 .
podman push MyRegistry:5000/fedora:32.1.6

The Fedora image is now available to be pulled from your registry. This will be demonstrated later in this post.

Import image to a DataVolume

A DataVolume (DV) is an abstraction over Persistent Volume Claims (PVCs). It leverages the Containerized Data Importer (CDI) to import data into PVCs. The virtctl CLI can be used to import a QCOW2 image into your cluster; install it using the instructions in the OCP docs, or get the latest version from the project’s GitHub site. This method requires a Storage Class to be set up so PVs can be provisioned. If your Storage Class requires manual provisioning of PVs, create a PV with a storage size of 20GB first (see the sketch after the commands below). Run the following commands to upload the image to a DV-managed PVC.

oc new-project vm-project
oc project vm-project
virtctl image-upload dv fedora-32-dv --size=20G --storage-class=local-storage --image-path=vm-images/Fedora-Cloud-Base-32-1.6.x86_64.qcow2
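
As noted above, if your Storage Class does not support dynamic provisioning, you would create a PV before running the upload. A minimal sketch for a local-storage PV; the device path and node name are assumptions for illustration:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: fedora-32-pv
spec:
  capacity:
    storage: 20G
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/local-storage/fedora-32  # assumed mount path on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-0  # assumed node name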

Once complete, a PVC containing the QCOW2 image will be present in your selected project. This PVC will be used as a source to clone disks for VM deployments.

Deploy VM using a registry image

If you want to create immutable VMs with ephemeral or persistent storage, creating the VM from a registry image is the easiest way to achieve this. Earlier we created a registry image from a Fedora QCOW2. Use the sample fedora-immutable-vm-pod-network.yaml to create your immutable Fedora VM.

Relevant section:

volumes:
  - containerDisk:
      image: 'MyRegistry:5000/fedora:32.1.6'
    name: rootdisk
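
For context, that volume slots into a complete VirtualMachine manifest roughly like the sketch below; the VM name, CPU, and memory values here are assumptions, so adjust them to your environment:

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: fedora-immutable  # assumed name
  namespace: vm-project
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 1  # assumed sizing
        resources:
          requests:
            memory: 2Gi  # assumed sizing
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
          interfaces:
            - name: nic0
              masquerade: {}
              model: virtio
      networks:
        - name: nic0
          pod: {}
      volumes:
        - name: rootdisk
          containerDisk:
            image: 'MyRegistry:5000/fedora:32.1.6'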

Deploy VM using cloned PVC

It’s equally easy to create a traditional VM with persistent storage. Earlier we used the virtctl CLI to import the Fedora QCOW2 image into a PVC. We can now use that PVC as the source of the cloned disk for our VM instance. See the sample YAML fedora-vm-pod-network.yaml.

Relevant section:

dataVolumeTemplates:
  - apiVersion: cdi.kubevirt.io/v1alpha1
    kind: DataVolume
    metadata:
      name: fedora2-disk-0
    spec:
      pvc:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 120G
        storageClassName: local-storage
        volumeName: fedora2-pv
        volumeMode: Filesystem
      source:
        pvc:
          name: fedora-32-dv
          namespace: vm-project
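
The VM template then points its disk at this DataVolume by name. A minimal sketch of the matching volumes entry; the disk name rootdisk is an assumption:

volumes:
  - dataVolume:
      name: fedora2-disk-0
    name: rootdisk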

The above methods make it very easy to import VM images into your cluster and begin creating VMs. Additionally, you can use a URL endpoint as the disk source (an ISO hosted on a web server), or create a blank VM and use PXE to bootstrap it.
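
For example, a DataVolume that pulls its image from a URL might look like this sketch; the URL is a placeholder:

apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: fedora-url-dv
  namespace: vm-project
spec:
  source:
    http:
      url: 'http://example.com/Fedora-Cloud-Base-32-1.6.x86_64.qcow2'  # placeholder URL
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 20G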

Network Configuration using NMState

Each VM is controlled via a virt-launcher pod that is created alongside it. The default networking type for OpenShift Virtualization VMs is masquerade: the VM is assigned a non-routable IP, and you access the VM using the IP of its virt-launcher pod.

interfaces:
  - masquerade: {}
    model: virtio
    name: nic0
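
To find that pod IP, list the pods in the VM's namespace and look for the virt-launcher pod:

oc get pods -o wide | grep virt-launcher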

Alternatively, you can connect the VM to the host network by creating a bridge interface on the OCP nodes using NMState. The NMState operator is installed with OpenShift Virtualization and provides the Node Network Configuration Policy (NNCP) object to update the host network settings. Here is a sample config to create a bridge called br1 from an interface called eth1 on the OCP worker nodes.

apiVersion: nmstate.io/v1alpha1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
      - name: br1
        description: Linux bridge using eth1 device
        type: linux-bridge
        state: up
        ipv4:
          dhcp: true
          enabled: true
        bridge:
          options:
            stp:
              enabled: false
          port:
            - name: eth1

Once this is applied to your cluster, use the following commands to see the status of the configuration update.

oc get nncp
oc get nnce

NNCP is a cluster-wide configuration. Additionally, for each namespace you have to create a Network Attachment Definition (NAD) to use the bridge connection in your VMs. Since the configuration is JSON embedded in YAML, it is easier to create your initial NAD through the web console and then use that YAML in your future automation.

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  annotations:
    k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br1
  name: br1
  namespace: vm-project
spec:
  config: >-
    {"name":"br1","cniVersion":"0.3.1","plugins":[{"type":"cnv-bridge","bridge":"br1","ipam":{}},{"type":"cnv-tuning"}]}

After applying the NAD YAML, you can use the bridge in your VMs. See the relevant sections from a VM YAML below.

spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - bridge: {}
              model: virtio
              name: nic-0
      networks:
        - multus:
            networkName: br1
          name: nic-0

See the OpenShift docs and the NMState docs for more details.

Cloud Init

You can apply initial VM configuration through cloud-init. In the simple example below we enable password authentication and change the root password.

volumes:
  - cloudInitNoCloud:
      userData: |
        #cloud-config
        ssh_pwauth: True
        chpasswd:
          list: |
            root:password
          expire: False
        hostname: fedora1
    name: cloudinitdisk

This data can be hardcoded like the example above, or you can create a secret with the data and mount the secret instead (recommended).

volumes:
  - cloudInitConfigDrive:
      secretRef:
        name: vminit
    name: cloudinitdisk
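
The secret itself can be created from a local cloud-init file. A sketch, assuming a file named cloud-init.yaml and that the user data is stored under the userdata key:

oc create secret generic vminit --from-file=userdata=cloud-init.yaml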

That’s it for the starter guide. Now go and spin up some VMs and experiment! In upcoming posts I will explore all the cool things virtualization in OCP enables. Stay tuned!