Purpose

Single Node OpenShift (SNO) is a configuration of a standard OpenShift cluster that consists of a single control plane node configured to also run workloads. The node provides both control plane and worker functionality, letting users deploy a smaller OpenShift environment with minimal to no dependence on a centralized management cluster.

By deploying SNO, users can experience the benefits of OpenShift in a more compact environment that requires fewer resources. It also provides a simple and efficient way to test new features or applications in a controlled environment. Note that SNO lacks high availability, so it may not be suitable for mission-critical workloads that require constant uptime. For edge sites, or for scenarios where an OpenShift cluster is required but high availability is not critical, Single Node OpenShift can be an appropriate solution.

Adding workers to a running SNO gives users the flexibility to run additional workloads.

In this blog we will show you how to add workers to a SNO. The same procedure can be used to add a worker to any kind of OCP cluster.

Prerequisites

  • Existing bare-metal SNO running OCP >= 4.12.1
  • Be sure that all the required DNS records exist (for simplicity we used DHCP)
  • You have access to the cluster as a user with the cluster-admin role.
  • coreos-installer CLI installed
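The coreos-installer CLI is packaged for RHEL 8+ and Fedora; a quick way to make sure it is present on your jump host (the dnf path is an assumption about your host's distribution):

```shell
# Install the coreos-installer CLI if it is missing (dnf-based hosts only)
if ! command -v coreos-installer >/dev/null 2>&1; then
  if command -v dnf >/dev/null 2>&1; then
    sudo dnf install -y coreos-installer
  else
    echo "coreos-installer not found; install it from your distribution or the project's releases"
  fi
fi
```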

In this demo, we are using a DHCP setup for the cluster.

Initial setup

$ oc get nodes
NAME       STATUS   ROLES                         AGE     VERSION
master-0   Ready    control-plane,master,worker   5h17m   v1.25.8+27e744f

Prepare RHCOS live image

Download the RHCOS live image matching your cluster's RHCOS version; here we are using 4.12:

$ wget https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.12/latest/rhcos-4.12.10-x86_64-live.x86_64.iso
Generate an Ignition config that allows SSH access to the node with the current user's public key:
$ SSH_PUB=$(cat ~/.ssh/id_rsa.pub)

$ cat <<EOF > ssh.ign
{
  "ignition": {
    "version": "3.1.0"
  },
  "passwd": {
    "users": [
      {"groups":["sudo"],"name":"core","passwordHash":"!","sshAuthorizedKeys":["${SSH_PUB}"]}
    ]
  }
}
EOF
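An Ignition mistake only surfaces at first boot, so it is worth a quick parse check before embedding. A minimal sketch that builds a sample config with a placeholder key (so it does not touch your real ssh.ign) and validates it:

```shell
# Build a sample config with a placeholder key; the real ssh.ign embeds
# the contents of ~/.ssh/id_rsa.pub as shown above
SSH_PUB="ssh-ed25519 AAAAC3NzaExample core@example"
cat <<EOF > ssh-sample.ign
{
  "ignition": { "version": "3.1.0" },
  "passwd": {
    "users": [
      {"groups":["sudo"],"name":"core","passwordHash":"!","sshAuthorizedKeys":["${SSH_PUB}"]}
    ]
  }
}
EOF

# A parse or schema failure here would otherwise only show up at first boot
python3 - ssh-sample.ign <<'PY'
import json, sys
cfg = json.load(open(sys.argv[1]))
assert cfg["ignition"]["version"] == "3.1.0"
assert cfg["passwd"]["users"][0]["name"] == "core"
print(sys.argv[1], "looks valid")
PY
```

Swap ssh-sample.ign for ssh.ign to check the file you actually generated.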
Embed the SSH Ignition config into the ISO:
$ coreos-installer iso ignition embed -fi ssh.ign rhcos-4.12.10-x86_64-live.x86_64.iso

Get the worker Ignition config:
$ oc extract -n openshift-machine-api secret/worker-user-data-managed --to=/tmp/ --confirm
$ mv /tmp/userData worker.ign

We will copy this worker.ign file to our web server. Alternatively, you could simply scp it to the new worker node once it has booted the live CoreOS image.
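If you don't have a web server handy, python3's built-in http.server is enough for this one file. A sketch, run from the directory holding worker.ign (port 9000 matches the URL used in the install step; the curl check mimics how the installer will fetch the file):

```shell
# Serve the current directory (containing worker.ign) on port 9000
python3 -m http.server 9000 --directory . >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1

# Confirm the server answers the way coreos-installer will query it
curl -fsS -o /dev/null http://localhost:9000/ && echo "HTTP server reachable on :9000"

kill "$SERVER_PID"
```

For a real deployment you would leave the server running (or use an existing web server) until the worker has fetched its config.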

Now boot the server from the embedded ISO.

Once the server has booted into the live CoreOS image, SSH in and use the worker Ignition config to kick off the installation.

$ ssh core@192.168.24.107   # IP of the worker node you are adding
[core@worker-0 ~]$ sudo coreos-installer install /dev/sdb --ignition-url http://192.168.24.80:9000/worker.ign --insecure-ignition
Installing Red Hat Enterprise Linux CoreOS 412.86.202305030814-0 (Ootpa) x86_64 (512-byte sectors)
> Read disk 4.1 GiB/4.1 GiB (100%)
Writing Ignition config
  • /dev/sdb is the drive that will be used as the installation disk
  • --ignition-url points to the worker Ignition config we generated earlier (here we host it on a web server)
Note: Eject the virtual media prior to rebooting the server.

Reboot the server so the installation can continue

[core@worker-0 ~]$ sudo reboot 

The server will reboot one more time before sending CSRs to join the cluster.

Watch for CSRs, then approve them so the worker can join the SNO:

$ oc get csr -w
NAME        AGE   SIGNERNAME                                    REQUESTOR                                                                   REQUESTEDDURATION   CONDITION
csr-2bpwg   31m   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   <none>              Approved,Issued
csr-6mmlw   16s   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   <none>              Approved,Issued
csr-g47q5   3s    kubernetes.io/kubelet-serving                 system:node:worker-0                                                        <none>              Pending

Approve the pending CSR

$ oc get csr | grep Pending | awk '{print $1}' | xargs oc adm certificate approve
certificatesigningrequest.certificates.k8s.io/csr-g47q5 approved

$ oc get csr -w
NAME        AGE   SIGNERNAME                                    REQUESTOR                                                                   REQUESTEDDURATION   CONDITION
csr-2bpwg   32m   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   <none>              Approved,Issued
csr-6mmlw   51s   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   <none>              Approved,Issued
csr-g47q5   38s   kubernetes.io/kubelet-serving                 system:node:worker-0                                                        <none>              Approved,Issued

After the CSRs are approved, the worker joins the cluster:

$ oc get nodes
NAME       STATUS   ROLES                         AGE     VERSION
master-0   Ready    control-plane,master,worker   3h59m   v1.25.8+27e744f
worker-0   Ready    worker                        32m     v1.25.8+27e744f
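Instead of polling oc get nodes, you can block until the node reports Ready. A sketch, guarded so it exits cleanly when run without cluster access:

```shell
# Wait up to 10 minutes for the new worker to report Ready.
# The guard lets the sketch degrade gracefully outside a cluster context.
if command -v oc >/dev/null 2>&1 && oc whoami >/dev/null 2>&1; then
  oc wait --for=condition=Ready node/worker-0 --timeout=10m
else
  echo "no cluster reachable; skipping"
fi
```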

By repeating the same process, we were able to add five more workers to the SNO:

$ oc get nodes
NAME       STATUS   ROLES                         AGE     VERSION
master-0   Ready    control-plane,master,worker   9h      v1.25.8+27e744f
worker-0   Ready    worker                        5h55m   v1.25.8+27e744f
worker-1   Ready    worker                        5h24m   v1.25.8+27e744f
worker-2   Ready    worker                        4h59m   v1.25.8+27e744f
worker-3   Ready    worker                        4h23m   v1.25.8+27e744f
worker-4   Ready    worker                        4h8m    v1.25.8+27e744f
worker-5   Ready    worker                        4h6m    v1.25.8+27e744f
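Once the first worker checks out, the per-node steps are easy to script. A dry-run sketch (the IPs are hypothetical examples for additional live-booted workers; remove the echo to actually execute the commands over SSH):

```shell
# Hypothetical IPs of additional live-booted workers; adjust for your lab
WORKERS="192.168.24.108 192.168.24.109 192.168.24.110"
IGN_URL="http://192.168.24.80:9000/worker.ign"

# Print each node's install command for review; drop the echo to run them
for ip in $WORKERS; do
  echo "ssh core@${ip} sudo coreos-installer install /dev/sdb --ignition-url ${IGN_URL} --insecure-ignition"
done
```

Each node still needs its virtual media ejected and a reboot after the install, and its CSRs approved, exactly as shown above.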

$ oc get clusterversions.config.openshift.io
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.12.14   True        False         8h      Cluster version is 4.12.14

$ oc get co
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.12.14   True        False         False      9h
baremetal                                  4.12.14   True        False         False      9h
cloud-controller-manager                   4.12.14   True        False         False      9h
cloud-credential                           4.12.14   True        False         False      9h
cluster-autoscaler                         4.12.14   True        False         False      9h
config-operator                            4.12.14   True        False         False      9h
console                                    4.12.14   True        False         False      9h
control-plane-machine-set                  4.12.14   True        False         False      9h
csi-snapshot-controller                    4.12.14   True        False         False      9h
dns                                        4.12.14   True        False         False      9h
etcd                                       4.12.14   True        False         False      9h
image-registry                             4.12.14   True        False         False      9h
ingress                                    4.12.14   True        False         False      9h
insights                                   4.12.14   True        False         False      9h
kube-apiserver                             4.12.14   True        False         False      9h
kube-controller-manager                    4.12.14   True        False         False      9h
kube-scheduler                             4.12.14   True        False         False      9h
kube-storage-version-migrator              4.12.14   True        False         False      9h
machine-api                                4.12.14   True        False         False      9h
machine-approver                           4.12.14   True        False         False      9h
machine-config                             4.12.14   True        False         False      9h
marketplace                                4.12.14   True        False         False      9h
monitoring                                 4.12.14   True        False         False      9h
network                                    4.12.14   True        False         False      9h
node-tuning                                4.12.14   True        False         False      9h
openshift-apiserver                        4.12.14   True        False         False      9h
openshift-controller-manager               4.12.14   True        False         False      9h
openshift-samples                          4.12.14   True        False         False      9h
operator-lifecycle-manager                 4.12.14   True        False         False      9h
operator-lifecycle-manager-catalog         4.12.14   True        False         False      9h
operator-lifecycle-manager-packageserver   4.12.14   True        False         False      9h
service-ca                                 4.12.14   True        False         False      9h
storage                                    4.12.14   True        False         False      9h