In the past, I have blogged about different ways to create working VirtualMachine (VM) definitions for OpenShift Virtualization in YAML. Whether you choose to create an example VM in the console, or get familiar with the templates and the oc process command, it has always been relatively straightforward to get OpenShift to generate new VMs.

The big drawback is that these VM definitions have always been a bit too large to include comfortably in a blog without forcing the reader to scroll through a large mass of YAML, or without cutting out large pieces of the VM with a placeholder saying:

[ . . . truncated . . . ]

Beyond blog-worthiness, the size presents a challenge for a new user trying to get a solid understanding of what is going on in a particular VM definition. Starting with OpenShift Virtualization version 4.13, this is changing. Instance types and preferences, collectively a new feature from the upstream KubeVirt project, allow much of the VM definition to be replaced by shorthand references. This new feature, paired with the new virtctl create CLI command, can drop the size of the VM by almost a factor of four. The size savings help out not only in your cluster, but in your GitOps repositories, pipelines, and CI scripts as well.

In addition to the size savings, as we will see, using instance types allows you to separate basic size choices from more involved settings like IO and BIOS types that require understanding of OS support and driver installations.
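To make that separation concrete, here is a minimal sketch (not a complete VM definition) of how a VM spec references the two resources independently. The resource names shown are defaults that appear later in this post:

```yaml
# Sketch only: size and OS tuning live in separately referenced resources
spec:
  instancetype:
    name: n1.medium   # CPU and memory allocation only
  preference:
    name: rhel.9      # disk bus, firmware, and other OS-aware settings
```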

Caveat Adminus

As of OpenShift Virtualization version 4.13, instance types and preferences are a Developer Preview feature, though they are slated for an upgrade to Technology Preview in 4.14. This means they are being considered for full production support, but are not there yet in this release, and there is no guarantee that they will eventually be fully supported. In the upstream KubeVirt project, the API is v1alpha2, so it is relatively new and could well be subject to change. As with the Developer Preview to Technology Preview upgrade, by the time of the OpenShift 4.14 release, KubeVirt should have a new release in which this API moves to v1beta1, which promises significantly more stability. Users of this feature are encouraged to provide feedback on their use to help prioritize and/or shape the product offering.

Getting a first look at instance types and preferences

Under the updated "Create new VirtualMachine" wizard, there is a new "Instance Types" tab.

[Screenshot: the "Create new VirtualMachine" wizard with the new "Instance Types" tab]

After you select a base operating system, you are presented with a cloud provider style instance series. These series include N for general purpose workloads, CX for compute-intensive applications, and M for memory-intensive applications. There is also a GN series for clusters with NVIDIA GPU hardware.

[Screenshot: instance type series selection]

Once you select a series, a second selection offers more traditional t-shirt sizes, such as medium, large, and xlarge up through 8xlarge, each with its own set of resource allocations.

[Screenshot: N series t-shirt size selection]

Once you have made your selections, details for the VM are displayed for a final confirmation before it is created on the cluster. As we will see later, the OS selection and instance type choice rely on default sets of preferences, not exposed here in this user interface, but active behind the scenes.

Taking a closer look on the command line

It may not be as obvious from the console UI, but using instance types instead of the previous standard OpenShift Virtualization templates does save a lot of YAML in the VM definition. We can use the CLI to readily demonstrate this difference.

Using virtctl to create VMs

Although it is possible to write the VM definition by hand, we will take advantage of a new feature in the virtctl command-line tool to create a VM based on the instance type.

Virtctl is available as a download from the OpenShift Console. Click on the question mark icon in the top right of the console and select "Command line tools".

[Screenshot: the "Command line tools" page in the OpenShift Console]

Download the appropriate virtctl package for your local workstation's architecture and operating system, extract the binary from the compressed archive, and place it in your shell's path.

Exploring instance types

The following commands require the oc command line client as well. Use oc login to establish a session with your cluster as an appropriate user.

First, we can check out the cluster-scoped VirtualMachineClusterInstancetypes (vmclusterinstancetypes):

$ oc get vmclusterinstancetypes

NAME AGE
cx1.2xlarge 2d21h
cx1.4xlarge 2d21h
cx1.8xlarge 2d21h
cx1.large 2d21h
cx1.medium 2d21h
cx1.xlarge 2d21h
gn1.2xlarge 2d21h
gn1.4xlarge 2d21h
gn1.8xlarge 2d21h
gn1.xlarge 2d21h
highperformance.large 2d21h
highperformance.medium 2d21h
highperformance.small 2d21h
m1.2xlarge 2d21h
m1.4xlarge 2d21h
m1.8xlarge 2d21h
m1.large 2d21h
m1.xlarge 2d21h
n1.2xlarge 2d21h
n1.4xlarge 2d21h
n1.8xlarge 2d21h
n1.large 2d21h
n1.medium 2d21h
n1.xlarge 2d21h
server.large 2d21h
server.medium 2d21h
server.micro 2d21h
server.small 2d21h
server.tiny 2d21h

From this, we can see some instance types do not get mentioned in the UI, namely "server" and "highperformance". For the purpose of comparison to a standard VM template, let's check out the N series, medium size.

$ oc get vmclusterinstancetype n1.medium -o yaml

apiVersion: instancetype.kubevirt.io/v1alpha2
kind: VirtualMachineClusterInstancetype
metadata:
  annotations:
    instancetype.kubevirt.io/class: General Purpose
    instancetype.kubevirt.io/description: |-
      The N Series is quite neutral and provides resources for
      general purpose applications.

      *N* is the abbreviation for "Neutral", hinting at the neutral
      attitude towards workloads.

      VMs of instance types will share physical CPU cores on a
      time-slice basis with other VMs.
    instancetype.kubevirt.io/version: "1"
    operator-sdk/primary-resource: openshift-cnv/ssp-kubevirt-hyperconverged
    operator-sdk/primary-resource-type: SSP.ssp.kubevirt.io
  creationTimestamp: "2023-06-05T21:36:07Z"
  generation: 1
  labels:
    app.kubernetes.io/component: templating
    app.kubernetes.io/managed-by: ssp-operator
    app.kubernetes.io/name: common-instancetypes
    app.kubernetes.io/part-of: hyperconverged-cluster
    app.kubernetes.io/version: 4.13.0
    instancetype.kubevirt.io/vendor: kubevirt.io
  name: n1.medium
  resourceVersion: "1436909"
  uid: 30abc5ac-f07f-4a57-92a1-49717f886d10
spec:
  cpu:
    guest: 1
  memory:
    guest: 4Gi

Here we see that an n1.medium instance has one core and four GiB of RAM. Scripting across the rest of the instance types, we can see the pattern:

for it in n1 cx1 m1; do
  for size in large xlarge 2xlarge 4xlarge 8xlarge; do
    echo
    echo -n "$it $size "
    oc get vmclusterinstancetypes ${it}.${size} -o \
      jsonpath="CPU {.spec.cpu.guest} RAM {.spec.memory.guest}"
  done
done

Actually, let's reformat that as a table for readability:

n1   Size     CPU  RAM
     large      2    8Gi
     xlarge     4   16Gi
     2xlarge    8   32Gi
     4xlarge   16   64Gi
     8xlarge   32  128Gi
cx1  Size     CPU  RAM
     large      2    4Gi
     xlarge     4    8Gi
     2xlarge    8   16Gi
     4xlarge   16   32Gi
     8xlarge   32   64Gi
m1   Size     CPU  RAM
     large      2   16Gi
     xlarge     4   32Gi
     2xlarge    8   64Gi
     4xlarge   16  128Gi
     8xlarge   32  256Gi

All the instance types associate the number of cores with the t-shirt size, doubling the CPU count with each bump in size. RAM per CPU then is a function of instance type, with 2 Gi per CPU in the CX class, 8 Gi per CPU in the M class, and 4 Gi per CPU for N.
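The pattern is regular enough to compute. As a quick illustration (a local helper script of my own, not anything shipped with virtctl or oc), the resources for the n1, cx1, and m1 series can be derived from the size name and a per-series RAM-to-CPU ratio:

```shell
#!/bin/sh
# Illustrative helper (not part of any product): derive CPU/RAM for a
# series+size pair from the pattern above. CPU doubles per size bump;
# RAM is a fixed per-series multiple of the CPU count.
specs_for() {
  series=$1; size=$2
  case $size in
    medium)  cpu=1 ;;  large)   cpu=2 ;;  xlarge) cpu=4 ;;
    2xlarge) cpu=8 ;;  4xlarge) cpu=16 ;; 8xlarge) cpu=32 ;;
    *) echo "unknown size: $size" >&2; return 1 ;;
  esac
  case $series in
    cx1) ratio=2 ;; n1) ratio=4 ;; m1) ratio=8 ;;
    *) echo "unknown series: $series" >&2; return 1 ;;
  esac
  echo "${series}.${size}: CPU ${cpu} RAM $((cpu * ratio))Gi"
}

specs_for m1 2xlarge   # m1.2xlarge: CPU 8 RAM 64Gi
```

This matches the table above, e.g. cx1.large comes out as 2 CPU and 4Gi of RAM.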

It is not obvious from the two pieces of information stored in each Instancetype how the dramatic reduction in YAML size will be achieved. There are some pieces missing.

"But what about the GPU instance types?" you might ask at this point. Before we continue digging into how instance types render their VMs, let's quickly compare an N1 and a GN1 instance type:

$ diff -u <(oc get vmclusterinstancetype n1.xlarge -o yaml) <(oc get vmclusterinstancetype gn1.xlarge -o yaml)

[ . . . Skipping to the spec . . . ]

 spec:
   cpu:
     guest: 4
+  gpus:
+  - deviceName: nvidia.com/A400
+    name: gpu1
   memory:
     guest: 16Gi

As you can see, the GPU version adds a simple stanza to include an NVIDIA GPU.

Exploring DataSources

In the UI, the wizard starts with the OS selection. That's where data sources come into play, so let's check those next.

$ oc -n openshift-virtualization-os-images get datasources

NAME AGE
centos-stream8 2d21h
centos-stream9 2d21h
centos7 2d21h
fedora 2d21h
rhel7 2d21h
rhel8 2d21h
rhel9 2d21h
win10 2d21h
win11 2d21h
win2k12r2 2d21h
win2k16 2d21h
win2k19 2d21h
win2k22 2d21h

Taking a look at the contents of the rhel9 DataSource:

$ oc -n openshift-virtualization-os-images get datasource rhel9 -o yaml

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataSource
metadata:
  annotations:
    operator-sdk/primary-resource: openshift-cnv/ssp-kubevirt-hyperconverged
    operator-sdk/primary-resource-type: SSP.ssp.kubevirt.io
  creationTimestamp: "2023-06-05T21:36:08Z"
  generation: 4
  labels:
    app.kubernetes.io/component: storage
    app.kubernetes.io/managed-by: cdi-controller
    app.kubernetes.io/part-of: hyperconverged-cluster
    app.kubernetes.io/version: 4.13.0
    cdi.kubevirt.io/dataImportCron: rhel9-image-cron
    instancetype.kubevirt.io/default-instancetype: server.medium
    instancetype.kubevirt.io/default-preference: rhel.9
  name: rhel9
  namespace: openshift-virtualization-os-images
  resourceVersion: "1438721"
  uid: d470b294-5e51-4762-8a68-ed6c3102f49a
spec:
  source:
    pvc:
      name: rhel9-d1d2fc222d93
      namespace: openshift-virtualization-os-images
status:
  conditions:
  - lastHeartbeatTime: "2023-06-05T21:36:54Z"
    lastTransitionTime: "2023-06-05T21:36:54Z"
    message: DataSource is ready to be consumed
    reason: Ready
    status: "True"
    type: Ready
  source:
    pvc:
      name: rhel9-d1d2fc222d93
      namespace: openshift-virtualization-os-images

Here, the spec section just points at a PVC with the disk image, but the really important part is in the labels:

    instancetype.kubevirt.io/default-instancetype: server.medium
    instancetype.kubevirt.io/default-preference: rhel.9

So if you pick this DataSource and do not specify an instance type, you will be assigned the default of server.medium. We know that just assigns CPU and memory, so what is tucked away in the Preference called rhel.9?

Exploring Preferences

The cluster-wide VM Preferences are called VirtualMachineClusterPreferences (vmcps).

$ oc get vmcps 

NAME AGE
alpine 2d21h
centos.7 2d21h
centos.7.desktop 2d21h
centos.8.stream 2d21h
centos.8.stream.desktop 2d21h
centos.9.stream 2d21h
centos.9.stream.desktop 2d21h
cirros 2d21h
fedora 2d21h
rhel.7 2d21h
rhel.7.desktop 2d21h
rhel.8 2d21h
rhel.8.desktop 2d21h
rhel.9 2d21h
rhel.9.desktop 2d21h
ubuntu 2d21h
windows.10 2d21h
windows.10.virtio 2d21h
windows.11 2d21h
windows.11.virtio 2d21h
windows.2k12 2d21h
windows.2k12.virtio 2d21h
windows.2k16 2d21h
windows.2k16.virtio 2d21h
windows.2k19 2d21h
windows.2k19.virtio 2d21h
windows.2k22 2d21h
windows.2k22.virtio 2d21h

We can take a look at the rhel.9 VM cluster preference mentioned in the rhel9 DataSource:

$ oc get vmcps rhel.9 -o yaml

apiVersion: instancetype.kubevirt.io/v1alpha2
kind: VirtualMachineClusterPreference
metadata:
  annotations:
    iconClass: icon-rhel
    openshift.io/display-name: Red Hat Enterprise Linux 9
    openshift.io/documentation-url: https://github.com/kubevirt/common-instancetypes
    openshift.io/provider-display-name: KubeVirt
    openshift.io/support-url: https://github.com/kubevirt/common-instancetypes/issues
    operator-sdk/primary-resource: openshift-cnv/ssp-kubevirt-hyperconverged
    operator-sdk/primary-resource-type: SSP.ssp.kubevirt.io
    tags: hidden,kubevirt,linux,rhel
  creationTimestamp: "2023-06-05T21:36:07Z"
  generation: 1
  labels:
    app.kubernetes.io/component: templating
    app.kubernetes.io/managed-by: ssp-operator
    app.kubernetes.io/name: common-instancetypes
    app.kubernetes.io/part-of: hyperconverged-cluster
    app.kubernetes.io/version: 4.13.0
    instancetype.kubevirt.io/os-type: linux
    instancetype.kubevirt.io/vendor: kubevirt.io
  name: rhel.9
  resourceVersion: "1436930"
  uid: 5fc40847-5576-47be-8481-06c47bf3d4c3
spec:
  devices:
    preferredDiskBus: virtio
    preferredDiskDedicatedIoThread: true
    preferredInterfaceModel: virtio
    preferredRng: {}
  features:
    preferredSmm: {}
  firmware:
    preferredUseEfi: true
    preferredUseSecureBoot: true

This is a much richer source of data than the InstanceType alone! By associating with this vmcp, the rhel9 OS DataSource gets useful UI formatting data like icon, display-name, and tags, but the most important parts are filled out in the spec. Disks and network adapters should use virtio drivers, while dedicated IO threads, EFI, and SecureBoot features are all enabled. Using this shorthand saves a lot of space in the VM definition.
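Preferences are also something you can author yourself. As a hypothetical sketch (the field names mirror the rhel.9 example above; the resource name is made up, and a namespaced VirtualMachinePreference kind also exists upstream for per-project use), a minimal custom preference might look like:

```yaml
# Hypothetical minimal preference; fields mirror the rhel.9 example above
apiVersion: instancetype.kubevirt.io/v1alpha2
kind: VirtualMachineClusterPreference
metadata:
  name: my-linux-server
spec:
  devices:
    preferredDiskBus: virtio
    preferredInterfaceModel: virtio
  firmware:
    preferredUseEfi: true
```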

Let's see that in action next.

Creating a VM

Start by checking the options for the new create vm command:

$ virtctl create vm --help

Create a VirtualMachine manifest.

Please note that volumes currently have the following fixed boot order:
Containerdisk > DataSource > Clone PVC > PVC

Usage:
virtctl create vm [flags]

Examples:

There are a lot of examples, so I cut them out here to save space. I'll leave it as an exercise for the reader to check them out, and instead focus on the flags:

Flags:
      --name string                       Specify the name of the VM. (default "vm-mhs8d")
      --run-strategy string               Specify the RunStrategy of the VM. (default "Always")
      --termination-grace-period int      Specify the termination grace period of the VM. (default 180)
      --instancetype string               Specify the Instance Type of the VM.
      --infer-instancetype                Specify that the Instance Type of the VM is inferred from the booted volume.
      --preference string                 Specify the Preference of the VM.
      --infer-preference                  Specify that the Preference of the VM is inferred from the booted volume.
      --volume-containerdisk stringArray  Specify a containerdisk to be used by the VM. Can be provided multiple times.
                                          Supported parameters: name:string,src:string
      --volume-datasource stringArray     Specify a DataSource to be cloned by the VM. Can be provided multiple times.
                                          Supported parameters: name:string,src:string,size:resource.Quantity
      --volume-clone-pvc stringArray      Specify a PVC to be cloned by the VM. Can be provided multiple times.
                                          Supported parameters: name:string,src:string,size:resource.Quantity
      --volume-pvc stringArray            Specify a PVC to be used by the VM. Can be provided multiple times.
                                          Supported parameters: name:string,src:string
      --volume-blank stringArray          Specify a blank volume to be used by the VM. Can be provided multiple times.
                                          Supported parameters: name:string,size:resource.Quantity
      --cloud-init-user-data string       Specify the base64 encoded cloud-init user data of the VM.
      --cloud-init-network-data string    Specify the base64 encoded cloud-init network data of the VM.
  -h, --help                              help for vm

Use "virtctl options" for a list of global command-line options (applies to all commands).
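A quick aside on the two cloud-init flags: they expect base64-encoded data, so you need to encode your cloud-config before passing it. One way to do that is sketched below; the cloud-config content itself is a made-up example:

```shell
# Prepare base64-encoded user data for --cloud-init-user-data.
# The cloud-config body here is a made-up example.
userdata=$(base64 -w0 <<'EOF'
#cloud-config
user: cloud-user
password: changeme
chpasswd: { expire: False }
EOF
)

# Sanity check: decode and show the first line
printf '%s' "$userdata" | base64 -d | head -n 1
```

The resulting value can then be passed as --cloud-init-user-data "$userdata" when creating the VM.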

Following that guide, we want to include our instance type (n1.medium) and our DataSource (rhel9 in the openshift-virtualization-os-images Namespace), and use the default Preference associated with the DataSource by telling virtctl to infer it:

virtctl create vm \
  --name rhel9-instancetype \
  --instancetype n1.medium \
  --infer-preference \
  --volume-datasource name:root,src:openshift-virtualization-os-images/rhel9,size:30Gi \
  > rhel9-instancetype.yaml

Before we check the output, let's compare it to the currently supported method, using OpenShift Templates:

With a template:

oc process -n openshift rhel9-server-medium -pNAME=rhel9-template -o yaml > rhel9-template.yaml

We can quickly compare the sizes:

$ ls -lh

total 12K
-rw-rw-r--. 1 kni kni 2.5K Jun 1 20:25 rhel9-template.yaml
-rw-rw-r--. 1 kni kni 754 Jun 1 20:24 rhel9-instancetype.yaml

That's almost a fourfold improvement in size: going by the listing above, the instance type VM is roughly 30% the size of the template VM.

Let's take a look at the VM we created:

$ cat rhel9-instancetype.yaml 

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  creationTimestamp: null
  name: rhel9-instancetype
spec:
  dataVolumeTemplates:
  - metadata:
      creationTimestamp: null
      name: root
    spec:
      sourceRef:
        kind: DataSource
        name: rhel9
        namespace: openshift-virtualization-os-images
      storage:
        resources:
          requests:
            storage: 30Gi
  instancetype:
    name: n1.medium
  preference:
    inferFromVolume: root
  runStrategy: Always
  template:
    metadata:
      creationTimestamp: null
    spec:
      domain:
        devices: {}
        resources: {}
      terminationGracePeriodSeconds: 180
      volumes:
      - dataVolume:
          name: root
        name: root
status: {}

The first thing you might notice is that there are no annotations or labels in either the VM definition or its spec.template section. All the annotations are found in the rhel.9 Preference instead.

Let's create a VM from this definition and see how it looks on cluster:

$ oc create -f rhel9-instancetype.yaml 

virtualmachine.kubevirt.io/rhel9-instancetype created

$ oc get vms

NAME AGE STATUS READY
rhel9-instancetype 54s Running True

To see instance types and preferences at work, let's compare the VM before and after it is applied to the cluster. Here is a slightly edited unified diff of the generated YAML (-) and the running VM (+):

--- rhel9-instancetype.yaml	2023-06-16 17:33:43.280784215 +0000
+++ running-instancetype.yaml	2023-06-16 17:40:47.799306473 +0000
@@ -1,8 +1,19 @@
 apiVersion: kubevirt.io/v1
 kind: VirtualMachine
 metadata:
+  annotations:
+    kubevirt.io/latest-observed-api-version: v1
+    kubevirt.io/storage-observed-api-version: v1alpha3
-  creationTimestamp: null
+  creationTimestamp: "2023-06-16T17:39:59Z"
+  finalizers:
+  - kubevirt.io/virtualMachineControllerFinalize
+  generation: 2
   name: rhel9-instancetype
+  namespace: default
+  resourceVersion: "15174447"
+  uid: ce5efb5c-c1b1-4d22-991b-ab27fcffa638
 spec:
   dataVolumeTemplates:
   - metadata:
@@ -18,9 +29,13 @@
         requests:
           storage: 30Gi
   instancetype:
+    kind: virtualmachineclusterinstancetype
     name: n1.medium
+    revisionName: rhel9-instancetype-n1.medium-30abc5ac-f07f-4a57-92a1-49717f886d10-1
   preference:
-    inferFromVolume: root
+    kind: virtualmachineclusterpreference
+    name: rhel.9
+    revisionName: rhel9-instancetype-rhel.9-5fc40847-5576-47be-8481-06c47bf3d4c3-1
   runStrategy: Always
   template:
     metadata:
@@ -28,10 +43,31 @@
     spec:
       domain:
         devices: {}
+        machine:
+          type: pc-q35-rhel9.2.0
         resources: {}
       terminationGracePeriodSeconds: 180
       volumes:
       - dataVolume:
           name: root
         name: root
-status: {}
+status:
+  conditions:
+  - lastProbeTime: null
+    lastTransitionTime: "2023-06-16T17:40:09Z"
+    status: "True"
+    type: Ready
+  - lastProbeTime: null
+    lastTransitionTime: null
+    status: "True"
+    type: LiveMigratable
+  - lastProbeTime: "2023-06-16T17:40:29Z"
+    lastTransitionTime: null
+    status: "True"
+    type: AgentConnected
+  created: true
+  printableStatus: Running
+  ready: true
+  volumeSnapshotStatuses:
+  - enabled: true
+    name: root

The OpenShift Virtualization operator adds a couple of useful lines that allow the VM to be migrated and/or restarted successfully if necessary. The first of these is filling in the rhel.9 Preference and removing the inference rule. Next, note the addition of revisionName for both the instance type and the preference. With this, even if the referenced n1.medium instance type or rhel.9 Preference on the cluster is updated, the original revisions this VM was created from will be saved.

Another important addition is the machine type pc-q35-rhel9.2.0. If the cluster has a mix of different node types, this helps prevent the VM from ending up on an incompatible node. If this generated VM were to be used in a GitOps repository, it would be a good idea to add in the instancetype: kind line as well as the machine: type to avoid drift.
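For example, the stored definition could pin those fields explicitly. The following is a sketch based on the diff above; the machine type value will differ per cluster:

```yaml
# Fields worth pinning in a GitOps-stored VM definition (values from the diff above)
spec:
  instancetype:
    kind: virtualmachineclusterinstancetype
    name: n1.medium
  template:
    spec:
      domain:
        machine:
          type: pc-q35-rhel9.2.0
```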

Finally, to see everything put together on the system, let's take a look at the running VMI compared to its VM definition:

@@ -1,57 +1,75 @@
 apiVersion: kubevirt.io/v1
-kind: VirtualMachine
+kind: VirtualMachineInstance
 metadata:
   annotations:
+    kubevirt.io/cluster-instancetype-name: n1.medium
+    kubevirt.io/cluster-preference-name: rhel.9
     kubevirt.io/latest-observed-api-version: v1
     kubevirt.io/storage-observed-api-version: v1alpha3
-  creationTimestamp: "2023-06-16T17:39:59Z"
+  creationTimestamp: "2023-06-16T17:40:01Z"
   finalizers:
-  - kubevirt.io/virtualMachineControllerFinalize
-  generation: 2
+  - foregroundDeleteVirtualMachine
+  generation: 11
+  labels:
+    kubevirt.io/nodeName: wkr2
   name: rhel9-instancetype
   namespace: default
-  resourceVersion: "15174447"
-  uid: ce5efb5c-c1b1-4d22-991b-ab27fcffa638
+  ownerReferences:
+  - apiVersion: kubevirt.io/v1
+    blockOwnerDeletion: true
+    controller: true
+    kind: VirtualMachine
+    name: rhel9-instancetype
+    uid: ce5efb5c-c1b1-4d22-991b-ab27fcffa638
+  resourceVersion: "15174449"
+  uid: c09876a7-a0b3-4482-900d-7c297666a161
 spec:
-  dataVolumeTemplates:
-  - metadata:
-      creationTimestamp: null
-      name: root
-    spec:
-      sourceRef:
-        kind: DataSource
-        name: rhel9
-        namespace: openshift-virtualization-os-images
-      storage:
-        resources:
-          requests:
-            storage: 30Gi
-  instancetype:
-    kind: virtualmachineclusterinstancetype
-    name: n1.medium
-    revisionName: rhel9-instancetype-n1.medium-30abc5ac-f07f-4a57-92a1-49717f886d10-1
-  preference:
-    kind: virtualmachineclusterpreference
-    name: rhel.9
-    revisionName: rhel9-instancetype-rhel.9-5fc40847-5576-47be-8481-06c47bf3d4c3-1
-  runStrategy: Always
-  template:
-    metadata:
-      creationTimestamp: null
-    spec:
-      domain:
-        devices: {}
-        machine:
-          type: pc-q35-rhel9.2.0
-        resources: {}
-      terminationGracePeriodSeconds: 180
-      volumes:
-      - dataVolume:
-          name: root
+  domain:
+    cpu:
+      cores: 1
+      model: host-model
+      sockets: 1
+      threads: 1
+    devices:
+      disks:
+      - dedicatedIOThread: true
+        disk:
+          bus: virtio
         name: root
+      interfaces:
+      - masquerade: {}
+        model: virtio
+        name: default
+      rng: {}
+    features:
+      acpi:
+        enabled: true
+      smm:
+        enabled: true
+    firmware:
+      bootloader:
+        efi:
+          secureBoot: true
+      uuid: 07e1c90a-9683-5a71-8794-64b24a0b5df9
+    machine:
+      type: pc-q35-rhel9.2.0
+    memory:
+      guest: 4Gi
+    resources:
+      requests:
+        memory: 4Gi
+  networks:
+  - name: default
+    pod: {}
+  terminationGracePeriodSeconds: 180
+  volumes:
+  - dataVolume:
+      name: root
+    name: root

[ . . . Status left out to save space . . . ]

Conclusion

In the spirit of the hybrid cloud, OpenShift Virtualization continually improves the experience of virtual machine admins. Whether one spins up test VMs in the OpenShift Console or uses GitOps to manage every facet of a complicated infrastructure, the new instance types and preferences comprise a great new feature that enhances virtual machine creation in OpenShift.

