Deploying Single Node OpenShift on an Apple M1 Virtual Machine with Red Hat Advanced Cluster Management

The Apple M1 processor has received a lot of attention in the press since its release. Based on a system on chip (SoC) design, the M1 integrates several components, including the Arm CPU cores, GPU, unified memory architecture, SSD controller, image signal processor, Thunderbolt controller, and more, all of which power the features in MacOS. Given the M1's Arm cores, I started to wonder whether it was possible to install OpenShift on a virtual machine in MacOS. The rest of this blog details that experience, which as of this writing is very much experimental and requires special images, but is worth documenting at this early stage.

First, let's discuss MacOS and the Apple Virtualization Framework in MacOS Ventura Beta 8. The framework was introduced in MacOS Big Sur, but the Ventura release adds features that make running Linux in a virtual machine practical. One of those features is support for an EFI boot loader, which can discover any bootable virtual device attached to the virtual machine. This makes it possible to boot our virtual machine from a Linux ISO image built for the Arm architecture. Another feature enables VirtioGPU 2D, giving Linux virtual machines a graphical interface. Together with the capabilities carried over from Big Sur, these features make running Arm-based Linux in a virtual machine on MacOS a real possibility.

Recently we experimented with installing Red Hat Enterprise Linux 9 for Arm as a virtual machine on a MacBook Pro with an M1 processor. The takeaway from that experience was a question: if Red Hat Enterprise Linux can run in an M1 virtual machine, could Single Node OpenShift (SNO) run in that virtual environment as well? That led to the next experiment, where we recorded a demonstration of using the local Podman version of the Assisted Installer to install SNO on the virtual machine on MacOS. The link to the video is below:

After those experiences I began to wonder if we could use Red Hat Advanced Cluster Management for Kubernetes (RHACM) to deploy SNO on the same MacOS M1 virtual machine. The rest of this blog covers, in detail, how to get there.

Lab Environment

The following lab environment was created to test this experiment:

SNO-OCP-M1

On the MacOS side we pre-created a virtual machine using UTM to ensure we could take advantage of the Virtualization Framework built into MacOS.

In the OpenShift environment we have configured the Central Infrastructure Management (CIM) service, which is a component of RHACM. A few items in the CIM configuration deserve special attention. First, I enabled a cluster image set for the pre-release OpenShift version I was using:

$ oc get clusterimageset openshift-v4.12.0-ec.3
NAME                     RELEASE
openshift-v4.12.0-ec.3   registry.ci.openshift.org/rhcos-devel/ocp-4.12-9.0-aarch64:4.12.0-ec.3
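For reference, a pre-release image set like the one above can be registered with a short manifest. The following is a sketch (the file path is illustrative; the name and release image are taken from the output above):

```shell
# Sketch: register the pre-release cluster image set shown above.
# ClusterImageSet is cluster-scoped, so no namespace is set.
cat << 'EOF' > /tmp/clusterimageset-412-ec3.yaml
apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  name: openshift-v4.12.0-ec.3
spec:
  releaseImage: registry.ci.openshift.org/rhcos-devel/ocp-4.12-9.0-aarch64:4.12.0-ec.3
EOF
# Then apply it to the hub cluster:
#   oc apply -f /tmp/clusterimageset-412-ec3.yaml
```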

Second, I configured the agent service configuration with an Arm architecture entry under OS images, pointing it at a specially built Red Hat CoreOS (RHCOS) live ISO:

$ oc get agentserviceconfig agent -o yaml
apiVersion: agent-install.openshift.io/v1beta1
kind: AgentServiceConfig
metadata:
  annotations:
    unsupported.agent-install.openshift.io/assisted-service-configmap: full-iso-assisted-service-config
  creationTimestamp: "2022-09-08T18:21:04Z"
  finalizers:
  - agentserviceconfig.agent-install.openshift.io/ai-deprovision
  generation: 2
  name: agent
  resourceVersion: "313151372"
  uid: 3eb4fe25-c7f7-405f-8304-709b3640c720
spec:
  databaseStorage:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 20Gi
  filesystemStorage:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 20Gi
  iPXEHTTPRoute: enabled
  osImages:
  - cpuArchitecture: x86_64
    openshiftVersion: "4.11"
    rootFSUrl: https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.11/4.11.2/rhcos-4.11.2-x86_64-live-rootfs.x86_64.img
    url: https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.11/4.11.2/rhcos-4.11.2-x86_64-live.x86_64.iso
    version: 411.86.202208191320-0
  - cpuArchitecture: arm64
    openshiftVersion: 4.12.0-ec.3
    rootFSUrl: http://192.168.0.29/images/rhcos-412.90.202209211533-0-live-rootfs.aarch64.img
    url: http://192.168.0.29/images/rhcos-412.90.202209211533-0-live.aarch64.iso
    version: 412.90.202209211533-0
status:
  conditions:
  - lastTransitionTime: "2022-09-28T12:46:39Z"
    message: AgentServiceConfig reconcile completed without error.
    reason: ReconcileSucceeded
    status: "True"
    type: ReconcileCompleted
  - lastTransitionTime: "2022-09-29T12:52:11Z"
    message: All the deployments managed by Infrastructure-operator are healthy.
    reason: DeploymentSucceeded
    status: "True"
    type: DeploymentsHealthy

We also made sure to configure a ConfigMap so that the CIM would provide full ISOs instead of minimal ISOs when our infrastructure environment requested them:

$ oc get configmap -n multicluster-engine full-iso-assisted-service-config -o yaml
apiVersion: v1
data:
  ISO_IMAGE_TYPE: full-iso
kind: ConfigMap
metadata:
  creationTimestamp: "2022-09-29T20:44:41Z"
  name: full-iso-assisted-service-config
  namespace: multicluster-engine
  resourceVersion: "314329579"
  uid: f57570b1-2407-47c0-91cc-418ab78e83d9
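If you need to create this ConfigMap from scratch, a sketch along these lines should work (the file path is illustrative):

```shell
# Sketch: ConfigMap telling the assisted service to generate full ISOs.
cat << 'EOF' > /tmp/full-iso-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: full-iso-assisted-service-config
  namespace: multicluster-engine
data:
  ISO_IMAGE_TYPE: full-iso
EOF
# Then apply it to the hub cluster:
#   oc apply -f /tmp/full-iso-cm.yaml
```

Note that the annotation on the AgentServiceConfig shown earlier is what points the assisted service at this ConfigMap.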

With those configurations in place we can proceed to discovery and deployment of our Single Node OpenShift.

Create Infrastructure Environment

Now that we have a good idea of what our lab looks like let's go ahead and discover and deploy our SNO node. The first step for that will be to create a new infrastructure environment. Let's go ahead and create the namespace for it first:

$ oc create namespace m1
namespace/m1 created

Next, let's create a pull-secret for our namespace:

$ oc create secret generic pull-secret -n m1 --from-file=.dockerconfigjson=pull-secret-nospace.json --type=kubernetes.io/dockerconfigjson
secret/pull-secret created
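Before creating the secret it can be worth confirming the pull secret file is valid JSON, since a malformed file tends to surface much later as hard-to-trace image pull failures. A small helper (the function name is illustrative) could look like:

```shell
# Sketch: validate that a pull secret file parses as JSON.
check_pull_secret() {
  python3 -m json.tool < "$1" > /dev/null 2>&1 && echo "valid JSON" || echo "invalid JSON"
}
# Usage: check_pull_secret pull-secret-nospace.json
```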

Now let's create an nmstate configuration for the host that will become the SNO node. This nmstate configuration will contain the static IP address and other networking configurations for the SNO node so that we do not have to rely on DHCP. We can create the YAML and then apply it to the cluster:

$ cat << EOF > ~/m1-nmstate-config.yaml
apiVersion: agent-install.openshift.io/v1beta1
kind: NMStateConfig
metadata:
  name: m1
  namespace: m1
  labels:
    sno-cluster-m1: m1
spec:
  config:
    interfaces:
    - name: enp0s1
      type: ethernet
      state: up
      ipv4:
        enabled: true
        address:
        - ip: 10.0.0.25
          prefix-length: 24
        dhcp: false
    dns-resolver:
      config:
        server:
        - 192.168.0.10
    routes:
      config:
      - destination: 0.0.0.0/0
        next-hop-address: 10.0.0.1
        next-hop-interface: enp0s1
        table-id: 254
  interfaces:
  - name: "enp0s1"
    macAddress: 86:C7:EE:FE:D8:E1
EOF

$ oc create -f m1-nmstate-config.yaml
nmstateconfig.agent-install.openshift.io/m1 created

Finally we are ready to create the infrastructure environment custom resource YAML and apply it to the cluster:

$ cat << EOF > ~/arm-infraenv.yaml
---
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: m1
  namespace: m1
spec:
  agentLabels:
    project: m1
  cpuArchitecture: arm64
  sshAuthorizedKey: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCoy2/8SC8K+9PDNOqeNady8xck4AgXqQkf0uusYfDJ8IS4pFh178AVkz2sz3GSbU41CMxO6IhyQS4Rga3Ft/VlW6ZAW7icz3mw6IrLRacAAeY1BlfxfupQL/yHjKSZRze9vDjfQ9UDqlHF/II779Kz5yRKYqXsCt+wYESU7DzdPuGgbEKXrwi9GrxuXqbRZOz5994dQW7bHRTwuRmF9KzU7gMtMCah+RskLzE46fc2e4zD1AKaQFaEm4aGbJjQkELfcekrE/VH3i35cBUDacGcUYmUEaco3c/+phkNP4Iblz4AiDcN/TpjlhbU3Mbx8ln6W4aaYIyC4EVMfgvkRVS1xzXcHexs1fox724J07M1nhy+YxvaOYorQLvXMGhcBc9Z2Au2GA5qAr5hr96AHgu3600qeji0nMM/0HoiEVbxNWfkj4kAegbItUEVBAWjjpkncbe5Ph9nF2DsBrrg4TsJIplYQ+lGewzLTm/cZ1DnIMZvTY/Vnimh7qa9aRrpMB0= bschmaus@provisioning"
  pullSecretRef:
    name: pull-secret
  nmStateConfigLabelSelector:
    matchLabels:
      sno-cluster-m1: m1
EOF

$ oc create -f arm-infraenv.yaml
infraenv.agent-install.openshift.io/m1 created

Discovering and Deploying SNO on an M1 Virtual Machine

With our infrastructure environment created we can now discover our node, in this case the virtual machine on MacOS. To do this we need to get the ISO served by the CIM service onto MacOS, which the following command accomplishes:

$ oc get infraenv -n m1 m1 -o jsonpath='{.status.isoDownloadURL}' | xargs curl -kLo ~/discovery-m1.iso
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  812M  100  812M    0     0  56.2M      0  0:00:14  0:00:14 --:--:-- 63.4M
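Note that the isoDownloadURL field is only populated once the CIM service has generated the image, so the one-liner above can race an InfraEnv that was just created. A small polling helper, sketched here with illustrative names, avoids that:

```shell
# Sketch: poll an InfraEnv until its discovery ISO URL is published, then print it.
get_iso_url() {
  local ns=$1 name=$2 tries=${3:-30}
  local url
  for _ in $(seq 1 "$tries"); do
    url=$(oc get infraenv -n "$ns" "$name" -o jsonpath='{.status.isoDownloadURL}')
    [ -n "$url" ] && { echo "$url"; return 0; }
    sleep 10
  done
  return 1
}
# Usage: curl -kLo ~/discovery-m1.iso "$(get_iso_url m1 m1)"
```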

Using the UTM interface we will map this ISO to the virtual machine we had pre-created and then boot the virtual machine.

After a few minutes the node should have booted up and should then appear under the agents in the "m1" namespace:

$ oc get agent -n m1
NAME                                   CLUSTER   APPROVED   ROLE          STAGE
a52a0164-d39c-a049-a8ad-f2ed18a806f4             false      auto-assign

Because the hostname comes up as localhost.localdomain, we need to patch it before the host can be consumed as a usable node:

$ oc -n m1 patch -p '{"spec":{"hostname":"m1"}}' --type merge agent a52a0164-d39c-a049-a8ad-f2ed18a806f4
agent.agent-install.openshift.io/a52a0164-d39c-a049-a8ad-f2ed18a806f4 patched
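If several hosts had been discovered, each would come up as localhost.localdomain and need the same treatment. A loop like the following, sketched with illustrative names, would rename them in one pass:

```shell
# Sketch: patch every discovered agent in a namespace to a given hostname.
# (With multiple hosts you would normally assign unique names instead.)
rename_agents() {
  local ns=$1 hostname=$2
  for agent in $(oc get agent -n "$ns" -o name); do
    oc -n "$ns" patch "$agent" --type merge -p "{\"spec\":{\"hostname\":\"$hostname\"}}"
  done
}
# Usage: rename_agents m1 m1
```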

With the host discovered we can now create our cluster deployment manifest. We will generate it in sections so we can discuss each portion. The first portion is the AgentClusterInstall resource, which holds the configuration for the cluster we are about to provision; it is also the resource we will watch when debugging issues.

We need to adjust the following fields accordingly:

  • imageSetRef - This is the ClusterImageSet that will be used (OpenShift version)
  • cpuArchitecture - The architecture in use, which needs to be set since we are using arm64
  • clusterNetwork.cidr - Networking CIDR for the Kubernetes pods
  • clusterNetwork.hostPrefix - Network prefix that will determine how many IPs are reserved for each node
  • serviceNetwork - The CIDR that will be used for Kubernetes services
  • controlPlaneAgents - The number of control plane nodes we will be provisioning
  • workerAgents - Number of worker nodes we are provisioning now
  • sshPublicKey - SSH public key that will be added to core username's authorized_keys in every node (for troubleshooting)
$ cat << EOF > ~/m1-sno-cluster.yaml
---
apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
metadata:
  name: m1
  namespace: m1
spec:
  clusterDeploymentRef:
    name: m1
  imageSetRef:
    name: openshift-v4.12.0-ec.3
  cpuArchitecture: arm64
  networking:
    clusterNetwork:
    - cidr: "10.128.0.0/14"
      hostPrefix: 23
    serviceNetwork:
    - "172.30.0.0/16"
  provisionRequirements:
    controlPlaneAgents: 1
    workerAgents: 0
  sshPublicKey: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCoy2/8SC8K+9PDNOqeNady8xck4AgXqQkf0uusYfDJ8IS4pFh178AVkz2sz3GSbU41CMxO6IhyQS4Rga3Ft/VlW6ZAW7icz3mw6IrLRacAAeY1BlfxfupQL/yHjKSZRze9vDjfQ9UDqlHF/II779Kz5yRKYqXsCt+wYESU7DzdPuGgbEKXrwi9GrxuXqbRZOz5994dQW7bHRTwuRmF9KzU7gMtMCah+RskLzE46fc2e4zD1AKaQFaEm4aGbJjQkELfcekrE/VH3i35cBUDacGcUYmUEaco3c/+phkNP4Iblz4AiDcN/TpjlhbU3Mbx8ln6W4aaYIyC4EVMfgvkRVS1xzXcHexs1fox724J07M1nhy+YxvaOYorQLvXMGhcBc9Z2Au2GA5qAr5hr96AHgu3600qeji0nMM/0HoiEVbxNWfkj4kAegbItUEVBAWjjpkncbe5Ph9nF2DsBrrg4TsJIplYQ+lGewzLTm/cZ1DnIMZvTY/Vnimh7qa9aRrpMB0= bschmaus@provisioning"
EOF
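As an aside, the clusterNetwork values above are worth unpacking: with a /14 cluster network and a hostPrefix of 23, each node is carved out a /23 slice for its pods. The arithmetic:

```shell
# Pod IPs available per node with hostPrefix 23:
echo $(( 2 ** (32 - 23) ))   # 512
# Maximum number of /23 slices (i.e. nodes) that fit in the /14 clusterNetwork:
echo $(( 2 ** (23 - 14) ))   # 512
```

For a single-node cluster this is far more capacity than needed, but the defaults are harmless.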

Next, in the ClusterDeployment resource we need to define the following attributes:

  • baseDomain - the base DNS domain of the cluster we want to deploy
  • clusterName - the unique name of the cluster (prefixed to the baseDomain)
  • The name of the AgentClusterInstall resource associated with this cluster
  • An agentSelector used to match nodes tagged (in our case all nodes in the "m1" namespace)
  • The pullSecretRef containing our pull secret - used to authenticate/authorize the pulling of images
$ cat << EOF >> ~/m1-sno-cluster.yaml
---
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: m1
  namespace: m1
spec:
  baseDomain: schmaustech.com
  clusterName: m1
  controlPlaneConfig:
    servingCertificates: {}
  installed: false
  clusterInstallRef:
    group: extensions.hive.openshift.io
    kind: AgentClusterInstall
    name: m1
    version: v1beta1
  platform:
    agentBareMetal:
      agentSelector:
        matchLabels:
          project: m1
  pullSecretRef:
    name: pull-secret
EOF

The rest of the resources only need to be adjusted to match the proper "m1" naming and namespace; what we are doing here is configuring RHACM to manage and integrate this new SNO deployment just like any other cluster:

$ cat << EOF >> ~/m1-sno-cluster.yaml
---
apiVersion: agent.open-cluster-management.io/v1
kind: KlusterletAddonConfig
metadata:
  name: m1
  namespace: m1
spec:
  clusterName: m1
  clusterNamespace: m1
  clusterLabels:
    cloud: auto-detect
    vendor: auto-detect
  applicationManager:
    enabled: false
  certPolicyController:
    enabled: false
  iamPolicyController:
    enabled: false
  policyController:
    enabled: false
  searchCollector:
    enabled: false
---
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: m1
  namespace: m1
spec:
  hubAcceptsClient: true
EOF

Now that the cluster deployment manifest is created we can apply it to the hub cluster and this will initiate the deployment of the SNO node:

$ oc create -f  ~/m1-sno-cluster.yaml
agentclusterinstall.extensions.hive.openshift.io/m1 created

We can validate the deployment exists by looking at the cluster deployment for "m1" in the namespace of "m1":

$ oc get clusterdeployment -n m1
NAME   INFRAID   PLATFORM          REGION   VERSION   CLUSTERTYPE   PROVISIONSTATUS   POWERSTATE   AGE
m1               agent-baremetal                                    Initialized                    7s

In a previous step we used an agentSelector to allow the cluster deployment to consume nodes in our "m1" namespace, but we still need to bind the agent node (our M1 virtual machine) to the cluster we're trying to create:

$ oc -n m1 patch -p '{"spec":{"clusterDeploymentName":{"name": "m1", "namespace": "m1"}}}' --type merge agent a52a0164-d39c-a049-a8ad-f2ed18a806f4
agent.agent-install.openshift.io/a52a0164-d39c-a049-a8ad-f2ed18a806f4 patched

The next step is to approve the agent node so it can be used by cluster "m1". By default this is not the case: an administrator must approve the node before it can become part of the cluster deployment:

$ oc -n m1 patch -p '{"spec":{"approved":true}}' --type merge agent a52a0164-d39c-a049-a8ad-f2ed18a806f4
agent.agent-install.openshift.io/a52a0164-d39c-a049-a8ad-f2ed18a806f4 patched

We can validate the agent was approved and assigned to the cluster by looking at the agent in the "m1" namespace:

$ oc get agent -n m1
NAME                                   CLUSTER   APPROVED   ROLE          STAGE
a52a0164-d39c-a049-a8ad-f2ed18a806f4   m1        true       auto-assign

Now let's look at the agentClusterInstall state. We should see it transitioning through various states during the deployment:

$ oc get agentClusterInstall -n m1
NAME   CLUSTER   STATE
m1     m1        preparing-for-installation

$ oc get agentClusterInstall -n m1
NAME   CLUSTER   STATE
m1     m1        installing

$ oc get agentClusterInstall -n m1
NAME   CLUSTER   STATE
m1     m1        finalizing
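Rather than re-running that command by hand, a small helper can block until the install reaches a given phase. This is a sketch with illustrative names; it assumes the state string is exposed at .status.debugInfo.state, which is what the STATE column reflects:

```shell
# Sketch: wait until an AgentClusterInstall reports the desired state.
wait_for_state() {
  local ns=$1 name=$2 want=$3 tries=${4:-90}
  local state
  for _ in $(seq 1 "$tries"); do
    state=$(oc get agentclusterinstall -n "$ns" "$name" -o jsonpath='{.status.debugInfo.state}')
    [ "$state" = "$want" ] && { echo "reached $want"; return 0; }
    sleep 30
  done
  echo "timed out waiting for $want" >&2
  return 1
}
# Usage: wait_for_state m1 m1 adding-hosts
```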

After about 45 minutes the installation should have completed; the cluster deployment, agentClusterInstall, and agent output should look similar to the following:

$ oc get clusterdeployment -n m1
NAMESPACE   NAME   INFRAID                                PLATFORM          REGION   VERSION       CLUSTERTYPE   PROVISIONSTATUS   POWERSTATE   AGE
m1          m1     c6e42d16-76d0-4cb9-9f0b-a4ec6801cd34   agent-baremetal            4.12.0-ec.3                 Provisioned                    55m

$ oc get agentClusterInstall -n m1
NAME   CLUSTER   STATE
m1     m1        adding-hosts

$ oc get agent -n m1
NAME                                   CLUSTER   APPROVED   ROLE     STAGE
a52a0164-d39c-a049-a8ad-f2ed18a806f4   m1        true       master   Done

Now let's go ahead and validate the state of our SNO node to confirm it is up, ready, and operational. To do this we first need to extract the kubeconfig from the hub cluster with the following command:

$ oc get secret -n m1 m1-admin-kubeconfig  -ojsonpath='{.data.kubeconfig}'| base64 -d > m1-kubeconfig

Now we can use the extracted kubeconfig and run a few commands to confirm that our SNO cluster is indeed running and fully functional:

$ KUBECONFIG=m1-kubeconfig oc get node -o wide
NAME   STATUS   ROLES                         AGE   VERSION           INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                                                       KERNEL-VERSION                 CONTAINER-RUNTIME
m1     Ready    control-plane,master,worker   13h   v1.24.0+8c7c967   10.0.0.25     <none>        Red Hat Enterprise Linux CoreOS 412.90.202209211533-0 (Plow)   5.14.0-70.26.1.el9_0.aarch64   cri-o://1.25.0-53.rhaos4.12.git7629206.el8

$ KUBECONFIG=m1-kubeconfig oc get clusteroperators
NAME                                       VERSION       AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.12.0-ec.3   True        False         False      12h
baremetal                                  4.12.0-ec.3   True        False         False      12h
cloud-controller-manager                   4.12.0-ec.3   True        False         False      12h
cloud-credential                           4.12.0-ec.3   True        False         False      12h
cluster-autoscaler                         4.12.0-ec.3   True        False         False      12h
config-operator                            4.12.0-ec.3   True        False         False      12h
console                                    4.12.0-ec.3   True        False         False      12h
control-plane-machine-set                  4.12.0-ec.3   True        False         False      12h
csi-snapshot-controller                    4.12.0-ec.3   True        False         False      12h
dns                                        4.12.0-ec.3   True        False         False      12h
etcd                                       4.12.0-ec.3   True        False         False      12h
image-registry                             4.12.0-ec.3   True        False         False      12h
ingress                                    4.12.0-ec.3   True        False         False      12h
insights                                   4.12.0-ec.3   True        False         False      12h
kube-apiserver                             4.12.0-ec.3   True        False         False      12h
kube-controller-manager                    4.12.0-ec.3   True        False         False      12h
kube-scheduler                             4.12.0-ec.3   True        False         False      12h
kube-storage-version-migrator              4.12.0-ec.3   True        False         False      12h
machine-api                                4.12.0-ec.3   True        False         False      12h
machine-approver                           4.12.0-ec.3   True        False         False      12h
machine-config                             4.12.0-ec.3   True        False         False      12h
marketplace                                4.12.0-ec.3   True        False         False      12h
monitoring                                 4.12.0-ec.3   True        False         False      12h
network                                    4.12.0-ec.3   True        False         False      12h
node-tuning                                4.12.0-ec.3   True        False         False      12h
openshift-apiserver                        4.12.0-ec.3   True        False         False      12h
openshift-controller-manager               4.12.0-ec.3   True        False         False      12h
openshift-samples                          4.12.0-ec.3   True        False         False      12h
operator-lifecycle-manager                 4.12.0-ec.3   True        False         False      12h
operator-lifecycle-manager-catalog         4.12.0-ec.3   True        False         False      12h
operator-lifecycle-manager-packageserver   4.12.0-ec.3   True        False         False      12h
service-ca                                 4.12.0-ec.3   True        False         False      12h
storage                                    4.12.0-ec.3   True        False         False      12h

$ KUBECONFIG=m1-kubeconfig oc get clusterversion
NAME      VERSION       AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.12.0-ec.3   True        False         12h     Cluster version is 4.12.0-ec.3

As we can see, the SNO cluster on the MacOS M1 virtual machine is fully operational and ready to take on development workloads. Let's look at one more item, though, just to show it really is running on a MacOS M1 virtual machine. To do that, let's SSH into the SNO node and run dmidecode:

$ ssh core@10.0.0.25
Red Hat Enterprise Linux CoreOS 412.90.202209211533-0
Part of OpenShift 4.12, RHCOS is a Kubernetes native operating system
managed by the Machine Config Operator (`clusteroperator/machine-config`).

WARNING: Direct SSH access to machines is not recommended; instead,
make configuration changes via `machineconfig` objects:
https://docs.openshift.com/container-platform/4.12/architecture/architecture-rhcos.html

---
Last login: Fri Sep 30 13:55:36 2022 from 192.168.0.29
[systemd]
Failed Units: 1
  systemd-network-generator.service

[core@m1 ~]$ uname -a
Linux m1 5.14.0-70.26.1.el9_0.aarch64 #1 SMP Fri Sep 2 15:56:00 EDT 2022 aarch64 aarch64 aarch64 GNU/Linux

[core@m1 ~]$ sudo dmidecode|more
# dmidecode 3.3
Getting SMBIOS data from sysfs.
SMBIOS 3.3.0 present.
Table at 0x64BDE9000.

Handle 0x0000, DMI type 1, 27 bytes
System Information
	Manufacturer: Apple Inc.
	Product Name: Apple Virtualization Generic Platform
	Version: 1
	Serial Number: Virtualization-64012aa5-9cd3-49a0-a8ad-f2ed18a806f4
	UUID: a52a0164-d39c-a049-a8ad-f2ed18a806f4
	Wake-up Type: Power Switch
	SKU Number: Not Specified
	Family: Not Specified
(...)

There we go: the virtual machine reports its platform as "Apple Virtualization Generic Platform", further confirming that we are fully operational on an M1 Mac. Hopefully this blog gave some insight into what the future might hold for running Single Node OpenShift in a virtual machine on a MacBook Pro with an M1 processor (or future generations of that architecture). This kind of configuration could let developers run small OpenShift environments on the same laptop they use for everyday work, giving them the freedom to experiment and the efficiency to test new ideas before taking them to a production cluster.

