This is part 6 of a tutorial that demonstrates how to add OpenShift Virtualization 2.5 to an existing OpenShift 4.6 cluster and start a Fedora Linux VM inside that cluster.

Please refer to “Your First VM with OpenShift Virtualization Using the Web Console” for the introduction to this tutorial and for links to all of its parts, and refer to “Peeking Inside an OpenShift Virtualization VM Using the Web Console” for part 5 of this tutorial.

Because this tutorial performs all actions using the OpenShift Web Console, you could follow it from any machine you use as a personal workstation, such as a Windows laptop. You do not require a shell prompt to type oc or kubectl commands.

Your qcow2 image may provide a default, well-known user account for console access, but you would probably prefer to disable that user and access your VM using SSH. Your VM’s purpose is likely to accept network connections of some sort, and this final part of the tutorial shows how to access your VM using OpenShift networking features.

Accessing Your VM from Inside Your Cluster

The first approach described here might not appeal to you if you want direct access to your VM from your computer or from any server outside of your OpenShift cluster. But it has the advantage of working independently of your cluster’s networking settings because it relies solely on basic Kubernetes networking.

This approach would work well for a small-scale production scenario where you run everything, including your new and legacy applications, CI/CD pipelines, and Ansible automation, from inside a single OpenShift cluster.

The example here requires a pod that runs an SSH client inside your OpenShift cluster, and it also requires the private SSH key authorized to access your VM. I could create a custom container image with an SSH client, run that container using a Kubernetes Job, and take that key from a Kubernetes Secret. But for simplicity I will just run a “pet” pod and make changes to it interactively.

You have to perform these instructions as a cluster administrator because your “pet” pod needs to be able to run as root to install RPM packages. If you create it as a regular OpenShift user, you will not be able to run as root inside the container. Creating (or reusing) a custom container image with an SSH client pre-installed would allow using a regular user account.

Please rely on the written instructions more than on the screen captures. They are here mostly to provide a visual aid and assurance that you are on the correct page for each step.

1. Create an interactive, unmanaged “pet” pod using the UBI image.

Log in to your cluster’s web console as a cluster administrator and make sure that the myvms project is selected as the current project in the web console before proceeding.

Unfortunately, the OpenShift Web Console does not provide a simple equivalent of the oc run command for creating unmanaged pods, and the only alternative is creating that “pet” pod from a small YAML file. You may end up enjoying the way the OpenShift web console handles raw Kubernetes manifests as YAML files.

Click Workloads → Pods and click Create Pod, then type the definition of our “pet” pod as in the following screen capture. The source code for this Pod and other resources from this blog series are available from GitHub, so you can cut and paste if you prefer.

You do not need labels or exposed ports; just give names to the pod and its single container, use the UBI image from Red Hat’s public registry, and configure your container for interactive input.

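If you prefer to type the manifest rather than fetch it from GitHub, a minimal sketch along these lines should work. The pod and container names match the ubi pod used in the following steps, and the image is the UBI 8 base image from Red Hat’s public registry; the exact file in the GitHub repository may differ slightly.

apiVersion: v1
kind: Pod
metadata:
  name: ubi                # "pet" pod name used in the following steps
  namespace: myvms
spec:
  containers:
  - name: ubi
    image: registry.access.redhat.com/ubi8/ubi
    command: ["/bin/bash"]
    stdin: true            # keep an interactive shell running
    tty: true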

Click Create and wait until your new pod is up and running.

2. Install an SSH client in your “pet” pod.

Click the Terminal tab of the Pod Details page of your ubi pod and run a yum command to install the openssh-clients package inside the single container of your pod.

I am sorry that I promised “no shell commands,” but I was not able to avoid a few of them inside the throwaway pod. Anyway, it is good to know that you do not need to switch to a shell prompt and can continue using the web console.

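For example, the following command in the pod’s terminal installs the SSH client (assuming the pod runs as root, as it does when created by a cluster administrator):

$ yum -y install openssh-clients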

3. Add the SSH private key to your “pet” pod.

Still in the Terminal tab of the ubi pod, use the vi command to create the /tmp/id_rsa file inside the container. On your local desktop, open your private SSH key file (by default ~/.ssh/id_rsa on Linux and Mac) and copy its contents. Paste that content into the Terminal tab of your ubi pod.

Save the new contents of the /tmp/id_rsa file inside the container and change its permissions to grant access to its owner only.
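The commands in the pod’s terminal would look something like this (the key material itself comes from your clipboard):

$ vi /tmp/id_rsa
(paste the contents of your private key, then save the file with :wq)
$ chmod 600 /tmp/id_rsa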

4. Use the SSH client and key in the “pet” pod to access your VM.

You need the IP address of your VM, which you already saw in the previous part of this series. If you did not make a note of it, switch to Workloads → Virtualization to get the address, which in my example is 10.8.0.28. You can also select and copy the IP address to your system’s clipboard and later paste it into your “pet” pod’s terminal.

Switch back to Workloads → Pods, enter the ubi pod, and in the Terminal tab use the ssh command to access your Fedora VM.
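Assuming the VM IP address from the example above (10.8.0.28) and the default fedora user of the Fedora cloud image, the command would be similar to:

$ ssh -i /tmp/id_rsa fedora@10.8.0.28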

Now that you proved you can access your VM using SSH from inside your OpenShift cluster, you can delete the ubi pod from the myvms project.

Accessing Your VM From Outside of Your Cluster

The approach described here allows direct access to your VMs from your workstation and any other machine outside of your OpenShift cluster, with two caveats:

1. You must be able to live with nonstandard TCP ports.

2. Your machines must have network access to all of your OpenShift cluster nodes (or at least to all nodes that are enabled to run VM workloads).

The idea involves creating a node port service to expose your VM on a TCP port of your cluster nodes. That kind of service exposes the same TCP port on all cluster nodes, so with this approach you cannot have two VMs exposing the same port.

1. Create a node port service to expose your VM.

Log in to your cluster’s web console using a regular account. You do not need cluster administrator privileges to create a node port service that exposes a VMI.

The current OpenShift Web Console does not provide an action menu to expose a VMI as a service, but you can create the required service as a minimal YAML file.

Click Networking → Services and click Create Service. Name the service test-ssh and make it a type: NodePort service. Expose port 22 and configure a selector on the kubevirt.io/domain label with the name of your VM as its value, as in the following screenshot.

The source code for this Service and other resources from this blog series is available from GitHub, so you can cut and paste if you prefer.

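A minimal manifest matching that description might look like the following sketch; it assumes the VM (and therefore its VMI) is named testvm, as in the earlier parts of this series:

apiVersion: v1
kind: Service
metadata:
  name: test-ssh
  namespace: myvms
spec:
  type: NodePort
  selector:
    kubevirt.io/domain: testvm   # matches the VMI of the testvm virtual machine
  ports:
  - protocol: TCP
    port: 22
    targetPort: 22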

Click Create and see that the Service Details page shows the Node port assigned by OpenShift to the service, which in the example is 30232.

In case you are wondering where I got the selector label kubevirt.io/domain, remember that your selectors target VMI instances instead of VM instances, just like selectors target pods instead of their deployments.

If you want to see the available labels, here is again how to find a VMI resource: click Home → Search and select VirtualMachineInstance in the Resource field.

Click testvm and select the Details tab to see the list of labels attached to your VMI instance.

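The list should include the kubevirt.io/domain label used by the service selector. As an illustration only, the VMI metadata might contain something like the following (the exact set of labels depends on how the VM was created and on your OpenShift Virtualization version):

metadata:
  name: testvm
  labels:
    kubevirt.io/domain: testvm       # label targeted by the test-ssh service
    kubevirt.io/nodeName: master02   # added once the VMI is scheduled to a node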

Note that the IP address that you saw previously in the VM instance actually comes from its corresponding VMI instance.

2. Find the IP address of a cluster node.

You could use the address of any node, but if you pick the node that is running your VM, you will avoid a network hop. A previous screen capture in this post shows that the VM is running on master02.

Click Compute → Nodes and click master02 to enter its Node Details page. Click the Details tab to see the IP address of the node. In the example it is 192.168.50.11.

3. Use a local SSH client to connect to your VM.

Now you can combine the information you got from the node and from the service to open an SSH connection from your workstation machine to the VM. Do not forget to use the SSH key that you provided to your VM in part 3 of this series.

The following example uses a command-line SSH client that you would run from a Mac or Linux workstation, but you could use a GUI client such as PuTTY for Windows.

$ ssh -i .ssh/id_rsa -p 30232 fedora@192.168.50.11
Warning: Permanently added '[192.168.50.11]:30232' (ECDSA) to the list of known hosts.
Last login: Thu Nov 26 15:42:46 2020 from 10.9.2.20
[fedora@testvm ~]$ cat /etc/fedora-release
Fedora release 32 (Thirty Two)
[fedora@testvm ~]$ logout
Connection to 192.168.50.11 closed.

A node port service is not the best general-purpose way of exposing OpenShift VMs to the outside world. It is just the simplest one if you have network access to your cluster nodes. There are alternatives, including connecting your VMs to external networks by using Multus, that are outside the scope of this blog series.

Other Ways of Accessing Your VMs

Both alternatives demonstrated in this tutorial rely on standard Kubernetes networking, the same networking you would use for accessing containers inside your pods. If your VMs expose only HTTP services, you could use OpenShift Routes and Kubernetes Ingress resources to expose them for external access.
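As an illustration of that idea, the following sketch pairs a regular ClusterIP service that selects the VMI with a Route that publishes it. It assumes a web server listening on port 80 inside the testvm VM, and the testvm-http names are hypothetical:

apiVersion: v1
kind: Service
metadata:
  name: testvm-http        # hypothetical name
  namespace: myvms
spec:
  selector:
    kubevirt.io/domain: testvm
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: testvm-http        # hypothetical name
  namespace: myvms
spec:
  to:
    kind: Service
    name: testvm-http
  port:
    targetPort: 80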

If you need more traditional VM access, from outside of your OpenShift cluster, to non-HTTP services, you require some advanced knowledge of OpenShift networking and also some advance planning of your cluster architecture.

If you want to give your VM a dedicated IP address, from outside of the OpenShift node and pod networks (which are strictly internal to the cluster), you have two main alternatives:

1. Configure a pool of external IPs on your cluster, and allocate IPs from that pool to your VMs.

2. Configure extra networks using Multus and add NICs connected to these networks to your VMs.

These are somewhat advanced OpenShift administration topics and outside the scope of this blog series. The Red Hat OpenShift Container Platform product documentation has the information you need. Check also the Virtualization and Networking topics of the OpenShift blog.

Next Steps

Now that you know how to open SSH sessions to your VMs, you can return to the start page for the conclusion of this tutorial.


About the author

Fernando lives in Rio de Janeiro, Brazil, and works on Red Hat's certification training for OpenShift, containers, and DevOps.
