Get basic instructions for deploying KubeVirt on top of a running OpenShift instance. I'm assuming you have already deployed your OpenShift cluster, for instance using oc cluster up.

What is KubeVirt?

The high-level goal of the project is to build a Kubernetes add-on that enables management of (libvirt) virtual machines. With KubeVirt, you basically declare your virtual machines (called VMs from now on) the same way you declare pods.

Installation Instructions

The following commands are used to:

  • Disable SELinux (I know, I know).
  • Install dependencies for spice consoles.
  • Add the service account kubevirt-infra to the relevant scc (privileged and hostmount-anyuid).
  • Download and instantiate the KubeVirt and spice-proxy templates.
  • Download and instantiate an additional iscsi helper template designed to provide virtual disks to the VMs, using paths on the local node (you wouldn't do that in production).
  • Expose the haproxy deployment and service (used to access the consoles of the VMs) on port 8184.
  • Install the auxiliary tool virtctl, used to access the consoles of the VMs.

Also note that we are logged in as an admin user, so you will need equivalent privileges in your setup.
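If you deployed with oc cluster up, one way to get such a context is the following (a minimal sketch; adjust to however authentication is configured in your cluster):

oc login -u system:admin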

VERSION="v0.1.0"
yum -y install xorg-x11-xauth virt-viewer  # virt-viewer provides the remote-viewer binary used for spice consoles
sed -i "s/SELINUX=enforcing/SELINUX=permissive/" /etc/selinux/config
setenforce 0
oc project kube-system
oc adm policy add-scc-to-user privileged -z kubevirt-infra
oc adm policy add-scc-to-user hostmount-anyuid -z kubevirt-infra
wget https://github.com/kubevirt/kubevirt/releases/download/$VERSION/kubevirt.yaml
wget https://github.com/kubevirt/kubevirt/releases/download/$VERSION/spice-proxy.yaml
wget https://raw.githubusercontent.com/karmab/kcli/master/plans/openshift/iscsi-demo-target.yaml
oc create -f kubevirt.yaml
oc create -f spice-proxy.yaml
oc expose deploy haproxy --port=8184
oc expose svc haproxy
wget https://github.com/kubevirt/kubevirt/releases/download/$VERSION/virtctl-$VERSION-linux-amd64
mv virtctl-$VERSION-linux-amd64 /usr/bin/virtctl
chmod u+x /usr/bin/virtctl
oc create -f iscsi-demo-target.yaml

Check that KubeVirt is Up

You can use the following to check that the pods are running properly:

oc get pod
NAME                                     READY     STATUS    RESTARTS   AGE
haproxy-474153518-lb6cz                  1/1       Running   0          38m
iscsi-demo-target-tgtd-679296829-qksvm   1/1       Running   0          38m
libvirt-t82wm                            2/2       Running   0          38m
spice-proxy-3374246092-ttktm             1/1       Running   0          38m
virt-api-903161196-zg89r                 1/1       Running   0          38m
virt-controller-1724409353-ptnmz         0/1       Running   0          38m
virt-controller-1724409353-sbf7r         1/1       Running   0          38m
virt-handler-9jzzt                       1/1       Running   3          38m

Quick Review of the Components

You can see there are different pods (coming from several deployments) with different functions:

  • virt-api is the API other components communicate with. The end user doesn't really need to access this API, as the declarative syntax of Kubernetes is used instead (under the hood, custom resource definitions for VMs and migrations are used).
  • virt-controller monitors the custom resource definitions representing the VMs and manages the pods associated with them.
  • virt-handler runs on each node, typically deployed as a daemon set. Think of it as the kubelet within KubeVirt. It communicates with the libvirtd instance to define VMs.
  • libvirt encapsulates our beloved libvirt functionality into a single pod. Note that you can oc rsh into the corresponding pod and run virsh commands (see the example after this list).
  • haproxy proxies connections to the consoles of the VMs.
  • spice-proxy acts as a proxy for spice connections.
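As an example of the latter, here is one way to run a virsh command against that pod from the outside (a sketch; it assumes the pod carries the kubevirt.io=libvirt label, the same selector used later in this post):

oc exec -it `oc get pod -l kubevirt.io=libvirt -o jsonpath='{.items[0].metadata.name}'` -- virsh list --all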

Additionally, when you create a VM, you should see an additional pod launched, called virt-launcher-$YOUR_VM, which continuously reconciles the real state of the machine with the one declared as a Kubernetes object.
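For example, once a VM named testvm exists, you could spot its launcher pod like this (a sketch; the exact suffix of the pod name is generated):

oc get pod | grep virt-launcher-testvm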

Use KubeVirt

Create a VM

To create a VM, we simply use a yaml file with its definition. For instance, to do it in the default project:

oc project default
oc create -f vm.yaml

Use the following definition (we rely on our iscsi deployment to provide the disk):

apiVersion: kubevirt.io/v1alpha1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  terminationGracePeriodSeconds: 0
  domain:
    devices:
      graphics:
      - type: spice
      interfaces:
      - type: network
        source:
          network: default
      video:
      - type: qxl
      disks:
      - type: network
        snapshot: external
        device: disk
        driver:
          name: qemu
          type: raw
          cache: none
        source:
          host:
            name: iscsi-demo-target.kube-system
            port: "3260"
          protocol: iscsi
          name: iqn.2017-01.io.kubevirt:sn.42/2
        target:
          dev: vda
      consoles:
      - type: pty
    memory:
      unit: MB
      value: 64
    os:
      type:
        os: hvm
    type: qemu

Very shortly, you should see not only the virt-launcher pod but also a qemu process representing your VM! You can also check your VM definition with oc get vm testvm -o yaml and have a look at some relevant fields under status:

  • nodeName, indicating on which host the VM is currently running. The same information appears under the label kubevirt.io/nodeName.
  • phase, which should show "Running".
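A quick way to pull just those two fields is a jsonpath query (a sketch; the field paths match the yaml output above):

oc get vm testvm -o jsonpath='{.status.nodeName}{" "}{.status.phase}{"\n"}'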

Accessing the VM

The VM serial console can be accessed in different ways (adjust the namespace to the one where your VM was created).

Using virtctl Utility

HAPROXY_URL=`oc get route haproxy -n kube-system -o jsonpath={.spec.host}`
virtctl console -s http://$HAPROXY_URL testvm -n default

Using the libvirt Pod

oc exec -it `oc get pod -l kubevirt.io=libvirt -o jsonpath='{.items[0].metadata.name}'` -- virsh console default_testvm

Delete a VM

oc delete vm testvm
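After deletion, both the vm object and its virt-launcher pod should be gone; a quick sanity check (expect a NotFound error on the first command):

oc get vm testvm
oc get pod | grep virt-launcher-testvm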

Migrations

To migrate a VM (provided you deployed KubeVirt with several nodes), you create a yaml file with a Migration object like the one that follows and launch it with oc create -f:

apiVersion: kubevirt.io/v1alpha1
kind: Migration
metadata:
  name: testvm
spec:
  selector:
    name: testvm
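Assuming you saved that definition as migration.yaml, launching and tracking the migration looks like this (migrations are backed by a custom resource definition, so oc can query them directly):

oc create -f migration.yaml
oc get migration testvm -o yaml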

Conclusion

If you need VMs, KubeVirt provides a nice way to make use of your k8s/OpenShift cluster to host them. This project is still young, and things like storage and networking still need work, but development is moving fast and the community is growing!