OpenShift Virtualization provides a great solution for non-containerized applications, but it does introduce some challenges compared to legacy virtualization products and bare-metal systems. One such challenge involves interacting with virtual machines (VMs). OpenShift is geared toward containerized applications that do not usually need incoming connections to configure and manage them, at least not the same type of connections as a VM would need for management or use.

This blog discusses several methods to access VMs running in an OpenShift Virtualization environment. Here is a brief summary of these methods.

  • The OpenShift User Interface (UI)

    VNC connections through the UI provide direct access to a VM's console and are provided by OpenShift Virtualization. Serial connections through the UI do not require any configuration when using images provided by Red Hat. These connection methods are useful for troubleshooting issues with a VM.

  • The virtctl command

    The virtctl command uses WebSockets to make connections to a VM. It provides VNC console, serial console, and SSH access into the VM. VNC console and serial console access are provided by OpenShift Virtualization, as in the UI. VNC console access requires a VNC client on the machine running the virtctl command. Serial console access requires the same VM configuration as serial console access through the UI. SSH access requires the VM's OS to be configured for SSH access; see the documentation for the VM's image for its SSH requirements.

  • The Pod network

    Exposing a port using a Service allows network connections into a VM. Any port on a VM can be exposed using a Service. Common ports are 22 (SSH), 5900+ (VNC), and 3389 (RDP). Three different types of Services are shown in this blog.

    • ClusterIP

      A ClusterIP Service exposes a VM's port internally to the cluster. This allows VMs to communicate with each other but does not allow connections from outside the cluster.

    • NodePort

      A NodePort Service exposes a VM's port outside the cluster through the cluster nodes. The VM's port is mapped to a port on the nodes. The port on the nodes is usually not the same as the port on the VM. The VM is accessed by connecting to a node's IP address and the appropriate port number.

    • LoadBalancer (LB)

      An LB Service exposes a VM's port externally to the cluster. The LB provides a pool of IP addresses for Services to use; an IP address obtained from, or specified within, this address pool is used to connect to the VM.

  • A Layer 2 interface

    A network interface on the cluster's nodes can be configured as a bridge to allow L2 connectivity to a VM. A VM's interface connects to the bridge using a NetworkAttachmentDefinition. This bypasses the cluster's network stack and exposes the VM's interface directly to the bridged network. By bypassing the cluster's network stack, it also bypasses the cluster's built-in security. The VM should be secured the same as a physical server connected to a network.

A little about the cluster

The cluster used in this blog is called wd and is in the example.org domain. It consists of three bare-metal control plane nodes (wcp-0, wcp-1, wcp-2) and three bare-metal worker nodes (wwk-0, wwk-1, wwk-2). These nodes are on the cluster's primary network of 10.19.3.0/24.

Node    Role            IP           FQDN
wcp-0   control plane   10.19.3.95   wcp-0.wd.example.org
wcp-1   control plane   10.19.3.94   wcp-1.wd.example.org
wcp-2   control plane   10.19.3.93   wcp-2.wd.example.org
wwk-0   worker          10.19.3.92   wwk-0.wd.example.org
wwk-1   worker          10.19.3.91   wwk-1.wd.example.org
wwk-2   worker          10.19.3.90   wwk-2.wd.example.org

MetalLB is configured to provide four IP addresses (10.19.3.112-115) from this network to LoadBalancer Services.
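
For reference, a MetalLB pool like this is defined with an IPAddressPool and, for layer 2 mode, an L2Advertisement resource. The following is a minimal sketch; the resource names and the metallb-system namespace are assumptions about this cluster's setup.

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: blog-pool            # name is an assumption
  namespace: metallb-system
spec:
  addresses:
  - 10.19.3.112-10.19.3.115
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: blog-l2              # name is an assumption
  namespace: metallb-system
spec:
  ipAddressPools:
  - blog-pool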

The cluster nodes have a secondary network interface on the 10.19.136.0/24 network. This secondary network has a DHCP server that provides IP addresses.

The cluster has the following Operators installed. All the operators are provided by Red Hat, Inc.

Operator                      Why is it installed
Kubernetes NMState Operator   Used to configure the second interface on the nodes
OpenShift Virtualization      Provides the mechanisms to run VMs
Local Storage                 Needed by the OpenShift Data Foundation Operator when using local HDDs
MetalLB Operator              Provides the LoadBalancer Service type used in this blog
OpenShift Data Foundation     Provides storage for the cluster. The storage is created using a second HDD on the nodes.

There are a few VMs running on the cluster. The VMs run in the blog namespace.

  • A Fedora 38 VM called fedora
  • A Red Hat Enterprise Linux 9 (RHEL9) VM called rhel9
  • A Windows 11 VM called win11

Connecting through the UI

There are various tabs presented when viewing a VM through the UI. Each provides ways to view or configure various aspects of the VM. One tab in particular is the Console tab. This tab provides three methods to connect to the VM: VNC Console, Serial Console, or Desktop viewer (RDP). The RDP method is only displayed for VMs running Microsoft Windows.

VNC console

The VNC console is always available for any VM. The VNC service is provided by OpenShift Virtualization and does not require any configuration of the VM's operating system (OS). It just works.

[Screenshot: VNC console of the win11 VM]

Serial console

The serial console requires configuration within the VM's OS. If the OS is not configured to output to the VM's serial port, then this connection method does not work. The VM images provided by Red Hat are configured to output boot information to the serial port and provide a login prompt when the VM has finished its boot process. 

[Screenshot: rhel9 serial console in the web console]
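
If a guest image is not already configured for serial output, it can usually be enabled from inside the OS. As a rough example for a RHEL-based guest (kernel arguments may differ for other images), the following directs the console to the first serial port and enables a login prompt on it:

$ sudo grubby --update-kernel=ALL --args="console=ttyS0,115200n8"
$ sudo systemctl enable --now serial-getty@ttyS0.service
$ sudo reboot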

Desktop viewer

This connection requires that a Remote Desktop (RDP) service is installed and running on the VM. When choosing to connect using RDP in the Console tab, the system will indicate that there is not currently an RDP service for the VM and will provide an option to create one. Selecting this option produces a popup window with a checkbox to Expose RDP Service. Checking this box creates a Service for the VM that allows RDP connections.

[Screenshot: Expose RDP Service checkbox in the pop-up window]

After the service is created, the Console tab provides the information for the RDP connection. 

[Screenshot: RDP connection information in the Console tab]

A button to Launch Remote Desktop is also provided. Selecting this button downloads a file called console.rdp. If the browser is configured to open .rdp files, it should open the console.rdp file in an RDP client.

Connecting using the virtctl command

The virtctl command provides VNC, Serial console, and SSH access into the VM using a network tunnel over the WebSocket protocol.

  • The user running the virtctl command needs to be logged in to the cluster from the command line.
  • If the user is not in the same namespace as the VM, the --namespace option needs to be specified.

The correct version of the virtctl and other clients can be downloaded from your cluster from a URL similar to https://console-openshift-console.apps.CLUSTERNAME.CLUSTERDOMAIN/command-line-tools. It can also be downloaded by clicking the question mark icon at the top of the UI and selecting Command line tools.
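
As an example, after downloading the Linux archive (the file and install paths here are just illustrative), installing virtctl is a matter of extracting it and placing it somewhere in the PATH:

$ tar -xzf virtctl.tar.gz
$ sudo install virtctl /usr/local/bin/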

VNC console

The virtctl command connects to the VNC server provided by OpenShift Virtualization. The system running virtctl needs a VNC client installed in addition to the virtctl command itself.

Opening a VNC connection is done simply by running the virtctl vnc command. Information about the connection is displayed in the terminal, and a new VNC console session opens.
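
For example, the following opens a VNC session to the win11 VM; the --namespace option is only needed if the current project differs from the VM's namespace:

$ virtctl vnc win11 --namespace blog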

[Screenshot: VNC console session to the win11 VM opened by virtctl]

Serial console

Connecting to the serial console using the virtctl command is done by running virtctl console. If the VM is configured to output to its serial port, as discussed earlier, the output from the boot process or a login prompt should appear.

$ virtctl console rhel9
Successfully connected to rhel9 console. The escape sequence is ^]


[ 8.463919] cloud-init[1145]: Cloud-init v. 22.1-7.el9_1 running 'modules:config' at Wed, 05 Apr 2023 19:05:38 +0000. Up 8.41 seconds.
[ OK ] Finished Apply the settings specified in cloud-config.
Starting Execute cloud user/final scripts...
[ 8.898813] cloud-init[1228]: Cloud-init v. 22.1-7.el9_1 running 'modules:final' at Wed, 05 Apr 2023 19:05:38 +0000. Up 8.82 seconds.
[ 8.960342] cloud-init[1228]: Cloud-init v. 22.1-7.el9_1 finished at Wed, 05 Apr 2023 19:05:38 +0000. Datasource DataSourceNoCloud [seed=/dev/vdb][dsmode=net]. Up 8.95 seconds
[ OK ] Finished Execute cloud user/final scripts.
[ OK ] Reached target Cloud-init target.
[ OK ] Finished Crash recovery kernel arming.

Red Hat Enterprise Linux 9.1 (Plow)
Kernel 5.14.0-162.18.1.el9_1.x86_64 on an x86_64

Activate the web console with: systemctl enable --now cockpit.socket

rhel9 login: cloud-user
Password:
Last login: Wed Apr 5 15:05:15 on ttyS0
[cloud-user@rhel9 ~]$

SSH

The ssh client is invoked using the virtctl ssh command. The -i option to this command allows the user to specify a private key to use.

$ virtctl ssh cloud-user@rhel9-one -i ~/.ssh/id_rsa_cloud-user
Last login: Wed May 3 16:06:41 2023

[cloud-user@rhel9-one ~]$

There is also the virtctl scp command that can be used to transfer files to a VM. I mention it here because it works similarly to the virtctl ssh command.
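
As a quick illustration (the file name is just a placeholder, and the same -i identity option as virtctl ssh is assumed), copying a file to the VM looks like this:

$ virtctl scp -i ~/.ssh/id_rsa_cloud-user ./notes.txt cloud-user@rhel9-one:/tmp/notes.txt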

Port forwarding

The virtctl command can also forward traffic from a user's local ports to a port on the VM. See the OpenShift documentation for information on how this works.

One use for this is to connect to the VM with your local OpenSSH client, which is more robust, instead of using the built-in ssh client of the virtctl command. See the KubeVirt documentation for an example of doing this.
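
A minimal sketch of that approach, modeled on the KubeVirt documentation, uses an SSH ProxyCommand so the local OpenSSH client tunnels its traffic through virtctl (the VM name and user are assumptions):

$ ssh -o 'ProxyCommand=virtctl port-forward --stdio=true %h %p' cloud-user@vm/rhel9-one.blog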

Another use is to connect to a service on a VM when you do not want to create an OpenShift Service to expose the port.

For instance, I have a VM called fedora-proxy with the NGINX webserver installed. A custom script on the VM writes some statistics to a file called process-status.out. I am the only person interested in the file's contents, but I would like to view this file throughout the day. I can use the virtctl port-forward command to forward a local port on my laptop or desktop to port 80 of the VM. I can write a short script that can gather the data whenever I want it.

#! /bin/bash

# Create a tunnel
virtctl port-forward vm/fedora-proxy 22080:80 &

# Need to give a little time for the tunnel to come up
sleep 1

# Get the data
curl http://localhost:22080/process-status.out

# Stop the tunnel
pkill -P $$

Running the script gets me the data I want and cleans up after itself.

$ gather_stats.sh 
{"component":"","level":"info","msg":"forwarding tcp 127.0.0.1:22080 to 80","pos":"portforwarder.go:23","timestamp":"2023-05-04T14:27:54.670662Z"}
{"component":"","level":"info","msg":"opening new tcp tunnel to 80","pos":"tcp.go:34","timestamp":"2023-05-04T14:27:55.659832Z"}
{"component":"","level":"info","msg":"handling tcp connection for 22080","pos":"tcp.go:47","timestamp":"2023-05-04T14:27:55.689706Z"}

Test Process One Status: Bad
Test Process One Misses: 125

Test Process Two Status: Good
Test Process Two Misses: 23

Connecting through an exposed port on the Pod network (Services)

Services

Services in OpenShift are used to expose the ports of a VM for incoming traffic. This incoming traffic could be from other VMs and Pods, or it can be from a source external to the cluster.

This blog shows how to create three types of Services: ClusterIP, NodePort, and LoadBalancer. The ClusterIP Service type does not allow external access to the VMs. All three types provide internal access between VMs and Pods; this is the preferred method for VMs within the cluster to communicate with each other. The following table lists the three Service types and their scope of accessibility.

Type           Internal Scope from the Cluster's Internal DNS   External Scope
ClusterIP      <service-name>.<namespace>.svc.cluster.local     None
NodePort       <service-name>.<namespace>.svc.cluster.local     IP address of a cluster node
LoadBalancer   <service-name>.<namespace>.svc.cluster.local     External IP address from the LoadBalancer's IPAddressPools

Services can be created using the virtctl expose command or by defining them in YAML. Creating a Service using YAML can be done from either the command line or the UI.

First, let's define a Service using the virtctl command.

Creating a Service using the virtctl Command

When using the virtctl command, the user needs to be logged into the cluster. If the user is not in the same namespace as the VM, then the --namespace option can be used to specify the namespace the VM is in.

The virtctl expose vm command creates a service that can be used to expose a VM's port. The following are common options used with the virtctl expose command when creating a Service.

   
--name          Name of the Service to create.
--type          The type of Service to create: ClusterIP, NodePort, or LoadBalancer.
--port          The port number on which the Service listens for traffic.
--target-port   Optional. The VM's port to expose. If unspecified, it is the same as --port.
--protocol      Optional. The protocol the Service should listen for. Defaults to TCP.

The following command creates a Service for ssh access into the VM called rhel9.

$ virtctl expose vm rhel9 --name rhel9-ssh --type NodePort --port 22

View the service to determine the port to use to access the VM from outside the cluster.

$ oc get service

NAME        TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
rhel9-ssh   NodePort   172.30.244.228   <none>        22:32317/TCP   3s
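
The VM can then be reached over SSH by connecting to any node's IP address on the NodePort shown above, for example:

$ ssh cloud-user@10.19.3.95 -p 32317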

Let's delete the Service for now.

$ oc delete service rhel9-ssh
service "rhel9-ssh" deleted

Creating a Service using YAML

Creating a Service using YAML can be done from the command line using the oc create -f command or by using an editor in the UI. Either method works and each has its own advantages. The command line is easier to script, but the UI provides help with the schema used to define a Service.

First let's discuss the YAML file since it is the same for both methods.

A single Service definition can expose a single port or multiple ports. The YAML file below is an example Service definition that exposes two ports, one for ssh traffic and one for VNC traffic. The ports are exposed as a NodePort. Explanations of the key items are listed after the YAML.

apiVersion: v1
kind: Service
metadata:
  name: rhel-services
  namespace: blog
spec:
  ports:
  - name: ssh
    protocol: TCP
    nodePort: 31798
    port: 22000
    targetPort: 22
  - name: vnc
    protocol: TCP
    nodePort: 31799
    port: 22900
    targetPort: 5900
  type: NodePort
  selector:
    kubevirt.io/domain: rhel9

Here are a few settings to note in the file:

   
metadata.name           The name of the Service; it is unique within its namespace.
metadata.namespace      The namespace the Service is in.
spec.ports.name         A name for the port being defined.
spec.ports.protocol     The protocol of the network traffic, TCP or UDP.
spec.ports.nodePort     The port that is exposed outside the cluster. It is unique within the cluster.
spec.ports.port         A port used internally within the cluster's network.
spec.ports.targetPort   The port exposed by the VM; multiple VMs can expose the same port.
spec.type               The type of Service to create. We are using NodePort.
spec.selector           A selector used to bind the Service to a VM. The example binds to a VM called rhel9.

Create a Service from the command line

Let's create the two services in the YAML file from the command line. The command to use is oc create -f.

$ oc create -f service-two.yaml 
service/rhel-services created

$ oc get services
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                           AGE
rhel-services   NodePort   172.30.11.97   <none>        22000:31798/TCP,22900:31799/TCP   4s

We can see that two ports are exposed in the single service. Now let's remove the Service using the oc delete service command.

$ oc delete service rhel-services
service "rhel-services" deleted

Create a Service from the UI

Let's create the same Service using the UI. To create a Service using the UI, navigate to Networking -> Services and select Create Service. An editor opens with a prepopulated Service definition and a reference to the Schema. Paste the YAML from above into the editor and select Create to create a Service.

[Screenshot: Service YAML editor in the UI]

After selecting Create, the details of the Service are shown. 

[Screenshot: details of the rhel-services Service]

The Services attached to a VM can also be seen on the VM's Details tab or from the command line using the oc get service command as before. We will remove the Service as we did before.

$ oc get services
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                           AGE
rhel-services   NodePort   172.30.11.97   <none>        22000:31798/TCP,22900:31799/TCP   4s

$ oc delete service rhel-services
service "rhel-services" deleted

Creating SSH and RDP Services the easy way

The UI provides simple point and click methods to create SSH and RDP Services on VMs.

To enable SSH easily, there is an SSH service type drop-down on the Details tab of the VM. The drop-down also allows the easy creation of either a NodePort or a LoadBalancer Service. 

[Screenshot: SSH service type drop-down on the Details tab]

Once the Service type is selected, the Service is created. The UI displays a command that can be used to connect to the VM and shows the Service it created. 

[Screenshot: connection command and the created LoadBalancer Service]

Enabling RDP is done through the Console tab of the VM. If the VM is a Windows-based VM, Desktop viewer becomes an option in the console drop-down. 

[Screenshot: Desktop viewer option in the console drop-down]

Once selected, an option to Create RDP Service appears. 

[Screenshot: Create RDP Service option]

Selecting the option provides a pop-up to Expose RDP Service. 

[Screenshot: Expose RDP Service pop-up]

Once the Service is created, the Console tab shows the connection information. 

[Screenshot: RDP connection information in the Console tab]

Example connection using a ClusterIP Service

Services of type ClusterIP allow VMs to connect to each other within the cluster. This is useful if one VM provides a service to other VMs, such as a database instance. Instead of configuring a database on a VM, let's just expose SSH on the Fedora VM using a ClusterIP.

Let's create a YAML file that creates a Service that exposes the SSH port of the Fedora VM internally to the cluster.

apiVersion: v1
kind: Service
metadata:
  name: fedora-internal-ssh
  namespace: blog
spec:
  ports:
  - protocol: TCP
    port: 22
  selector:
    kubevirt.io/domain: fedora
  type: ClusterIP

Let's apply the configuration.

$ oc create -f service-fedora-ssh-clusterip.yaml 
service/fedora-internal-ssh created

$ oc get service
NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
fedora-internal-ssh   ClusterIP   172.30.202.64   <none>        22/TCP    7s

From the rhel9 VM, we can verify that we can connect to the Fedora VM using SSH.

$ virtctl console rhel9
Successfully connected to rhel9 console. The escape sequence is ^]

rhel9 login: cloud-user
Password:
Last login: Wed May 10 10:20:23 on ttyS0

[cloud-user@rhel9 ~]$ ssh fedora@fedora-internal-ssh.blog.svc.cluster.local
The authenticity of host 'fedora-internal-ssh.blog.svc.cluster.local (172.30.202.64)' can't be established.
ED25519 key fingerprint is SHA256:ianF/CVuQ4kxg6kYyS0ITGqGfh6Vik5ikoqhCPrIlqM.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'fedora-internal-ssh.blog.svc.cluster.local' (ED25519) to the list of known hosts.
Last login: Wed May 10 14:25:15 2023
[fedora@fedora ~]$

Example connection using a NodePort Service

For this example, let's expose RDP from the Windows 11 VM (win11) using a NodePort so we can connect to its desktop for a better experience than the Console tab provides. This connection is intended for trusted users, since they need to know the IPs of the cluster nodes.

A Note about OVNKubernetes

The latest version of the OpenShift installer defaults to using the OVNKubernetes network stack. If the cluster is running the OVNKubernetes network stack and a NodePort Service is used, then egress traffic from the VMs will not work until routingViaHost is enabled.

A simple patch to the cluster enables egress traffic when using a NodePort Service.

$ oc patch network.operator cluster -p '{"spec": {"defaultNetwork": {"ovnKubernetesConfig": {"gatewayConfig": {"routingViaHost": true}}}}}' --type merge

$ oc get network.operator cluster -o yaml
apiVersion: operator.openshift.io/v1
kind: Network
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      gatewayConfig:
        routingViaHost: true
...

This patch is not needed if the cluster uses the OpenShiftSDN network stack or if a LoadBalancer Service provided by MetalLB is used.

Let's create the NodePort Service by first defining it in a YAML file.

apiVersion: v1
kind: Service
metadata:
  name: win11-rdp-np
  namespace: blog
spec:
  ports:
  - name: rdp
    protocol: TCP
    nodePort: 32389
    port: 22389
    targetPort: 3389
  type: NodePort
  selector:
    kubevirt.io/domain: windows11

Create the Service.

$ oc create -f service-windows11-rdp-nodeport.yaml 
service/win11-rdp-np created

$ oc get service
NAME           TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
win11-rdp-np   NodePort   172.30.245.211   <none>        22389:32389/TCP   5s

Since this is a NodePort Service, we can connect to it using the IP of any Node. The oc get nodes command shows the IP addresses of the nodes.

$ oc get nodes -o=custom-columns=Name:.metadata.name,IP:status.addresses[0].address
Name    IP
wcp-0   10.19.3.95
wcp-1   10.19.3.94
wcp-2   10.19.3.93
wwk-0   10.19.3.92
wwk-1   10.19.3.91
wwk-2   10.19.3.90

The xfreerdp program is one client program that can be used for RDP connections. We will tell it to connect to the wcp-0 node using the RDP port exposed on the cluster.

$ xfreerdp /v:10.19.3.95:32389 /u:cnvuser /p:hiddenpass

[14:32:43:813] [19743:19744] [WARN][com.freerdp.crypto] - Certificate verification failure 'self-signed certificate (18)' at stack position 0
[14:32:43:813] [19743:19744] [WARN][com.freerdp.crypto] - CN = DESKTOP-FCUALC4
[14:32:44:118] [19743:19744] [INFO][com.freerdp.gdi] - Local framebuffer format PIXEL_FORMAT_BGRX32
[14:32:44:118] [19743:19744] [INFO][com.freerdp.gdi] - Remote framebuffer format PIXEL_FORMAT_BGRA32
[14:32:44:130] [19743:19744] [INFO][com.freerdp.channels.rdpsnd.client] - [static] Loaded fake backend for rdpsnd
[14:32:44:130] [19743:19744] [INFO][com.freerdp.channels.drdynvc.client] - Loading Dynamic Virtual Channel rdpgfx
[14:32:45:209] [19743:19744] [WARN][com.freerdp.core.rdp] - pduType PDU_TYPE_DATA not properly parsed, 562 bytes remaining unhandled. Skipping.

We have a connection to the VM. 

[Screenshot: RDP session to the win11 VM through the NodePort Service]

Example connection using a LoadBalancer Service

Let's create the LoadBalancer Service by first defining it in a YAML file. We will use the Windows VM and expose RDP.

apiVersion: v1
kind: Service
metadata:
  name: win11-rdp-lb
  namespace: blog
spec:
  ports:
  - name: rdp
    protocol: TCP
    port: 3389
    targetPort: 3389
  type: LoadBalancer
  selector:
    kubevirt.io/domain: windows11

Create the Service. We see that it automatically gets an external IP address.

$ oc create -f service-windows11-rdp-loadbalancer.yaml 
service/win11-rdp-lb created

$ oc get service
NAME           TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
win11-rdp-lb   LoadBalancer   172.30.125.205   10.19.3.112   3389:31258/TCP   3s

We connect to the EXTERNAL-IP assigned to the Service on the standard RDP port of 3389 that we exposed. The output of the xfreerdp command shows the connection was successful.

$ xfreerdp /v:10.19.3.112 /u:cnvuser /p:changeme
[15:51:21:333] [25201:25202] [WARN][com.freerdp.crypto] - Certificate verification failure 'self-signed certificate (18)' at stack position 0
[15:51:21:333] [25201:25202] [WARN][com.freerdp.crypto] - CN = DESKTOP-FCUALC4
[15:51:23:739] [25201:25202] [INFO][com.freerdp.gdi] - Local framebuffer format PIXEL_FORMAT_BGRX32
[15:51:23:739] [25201:25202] [INFO][com.freerdp.gdi] - Remote framebuffer format PIXEL_FORMAT_BGRA32
[15:51:23:752] [25201:25202] [INFO][com.freerdp.channels.rdpsnd.client] - [static] Loaded fake backend for rdpsnd
[15:51:23:752] [25201:25202] [INFO][com.freerdp.channels.drdynvc.client] - Loading Dynamic Virtual Channel rdpgfx
[15:51:24:922] [25201:25202] [WARN][com.freerdp.core.rdp] - pduType PDU_TYPE_DATA not properly parsed, 562 bytes remaining unhandled. Skipping.

No screenshot is attached since it is the same screenshot as above.

Connecting using a Layer 2 interface

If the VM's interface is to be used internally and does not need to be exposed publicly, then connecting using a NetworkAttachmentDefinition and a bridged interface on the nodes can be a good choice. This method bypasses the cluster's network stack; since the cluster does not need to process each packet of data, this can improve network performance.

This method does have some drawbacks: the VMs are exposed directly to a network and are not protected by any of the cluster's security. If a VM is compromised, an intruder could gain access to the network(s) the VM is connected to. Care should be taken to provide the appropriate security within the VM's operating system when using this method.

NMState

The NMState operator provided by Red Hat can be used to configure physical interfaces on the Nodes after the cluster is deployed. Various configurations can be applied including bridges, VLANs, bonds, and more. We will use it to configure a bridge on an unused interface on each Node in the cluster. See the OpenShift documentation for more information on using the NMState Operator.

Let's configure a simple bridge on an unused interface on the Nodes. The interface is attached to a network that provides DHCP and hands out addresses on the 10.19.142.0 network. The following YAML creates a bridge called brint on the ens5f1 network interface.

---
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: brint-ens5f1
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
    - name: brint
      description: Internal Network Bridge
      type: linux-bridge
      state: up
      ipv4:
        enabled: false
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: ens5f1

Apply the YAML file to create the bridge on the worker nodes.

$ oc create -f brint.yaml 
nodenetworkconfigurationpolicy.nmstate.io/brint-ens5f1 created

Use the oc get nncp command to view the state of the NodeNetworkConfigurationPolicy. Use the oc get nnce command to view the state of each node's configuration. Once the configuration is applied, the STATUS from both commands shows Available and the REASON shows SuccessfullyConfigured.

$ oc get nncp
NAME           STATUS        REASON
brint-ens5f1   Progressing   ConfigurationProgressing

$ oc get nnce
NAME                 STATUS        REASON
wwk-0.brint-ens5f1   Pending       MaxUnavailableLimitReached
wwk-1.brint-ens5f1   Available     SuccessfullyConfigured
wwk-2.brint-ens5f1   Progressing   ConfigurationProgressing
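
If you want to block until the policy finishes applying across the nodes, a rough one-liner is to wait on the Available condition shown above (the timeout value is arbitrary):

$ oc wait nncp brint-ens5f1 --for=condition=Available --timeout=600s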

NetworkAttachmentDefinition

VMs cannot attach directly to the bridge we created, but they can attach to a NetworkAttachmentDefinition (NAD). The following creates a NAD called nad-brint that attaches to the brint bridge created on the Node. See the OpenShift documentation for an explanation on how to create the NAD.

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: nad-brint
  annotations:
    k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/brint
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "nad-brint",
    "type": "cnv-bridge",
    "bridge": "brint",
    "macspoofchk": true
  }'

After applying the YAML, the NAD can be viewed using the oc get network-attachment-definition command.

$ oc create -f brint-nad.yaml 
networkattachmentdefinition.k8s.cni.cncf.io/nad-brint created

$ oc get network-attachment-definition
NAME        AGE
nad-brint   19s

The NAD can also be created from the UI by navigating to Networking -> NetworkAttachmentDefinitions.

Example connection using a Layer 2 interface

With the NAD created, a network interface can be added to the VM, or an existing interface can be modified to use it. Let's add a new interface by navigating to the VM's details and selecting the Network interfaces tab. There is an option to Add network interface; choose this. An existing interface can be modified by selecting the kebab menu next to it. 

[Screenshot: adding a network interface that uses the NAD]
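
The same attachment can also be made by editing the VM definition directly. The following is a sketch of the relevant fragment of the VirtualMachine spec; the interface name nic-brint is an assumption, while nad-brint is the NAD created above.

# Fragment of the VirtualMachine spec (for example, via oc edit vm rhel9)
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
          - name: nic-brint
            bridge: {}                 # connect the interface with a bridge binding
      networks:
      - name: nic-brint
        multus:
          networkName: nad-brint       # the NetworkAttachmentDefinition created earlier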

After restarting the VM, the Overview tab of the VM's details shows the IP address received from DHCP. 

[Screenshot: DHCP-assigned IP address on the VM's Overview tab]

We can now connect to the VM using the IP address it acquired from a DHCP server on the bridged interface.

$ ssh cloud-user@10.19.142.213 -i ~/.ssh/id_rsa_cloud-user

The authenticity of host '10.19.142.213 (10.19.142.213)' can't be established.
ECDSA key fingerprint is SHA256:0YNVhGjHmqOTL02mURjleMtk9lW5cfviJ3ubTc5j0Dg.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.19.142.213' (ECDSA) to the list of known hosts.

Last login: Wed May 17 11:12:37 2023
[cloud-user@rhel9 ~]$

The SSH traffic passes from the VM through the bridge and out the physical network interface. The traffic bypasses the pod network and appears to be on the network the bridged interface resides on. The VM is not protected by any cluster firewall when connected in this manner; all of the VM's ports are accessible, including those used for SSH and VNC.
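
Because no cluster-level filtering applies to this traffic, host-level controls inside the guest matter. As a simple illustration on a RHEL-based guest using firewalld (the services allowed or removed here are only examples), access could be limited to SSH:

$ sudo firewall-cmd --permanent --add-service=ssh
$ sudo firewall-cmd --permanent --remove-service=cockpit
$ sudo firewall-cmd --reload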

Closing

We have seen various methods of connecting to VMs running inside OpenShift Virtualization. These methods provide ways to troubleshoot VMs that are not working properly as well as to connect to them for day-to-day use. The VMs can interact with each other locally within the cluster, and systems external to the cluster can access them either directly or through the cluster's pod network. This eases the transition of moving physical systems and VMs from other platforms to OpenShift Virtualization.