In a previous blog, How to Build Bare Metal Hosted Clusters on Red Hat Advanced Cluster Management for Kubernetes, we discussed how to deploy a hosted cluster. That blog outlined the benefits of running hosted clusters, including minimal deployment time and cost savings from running the control plane on an existing OpenShift cluster. We then continued the journey in Workloads on Bare Metal Hosted Clusters Deployed From Red Hat Advanced Cluster Management for Kubernetes, where we deployed the Local Storage Operator, OpenShift Data Foundation and OpenShift Virtualization so we could run a workload virtual machine on our hosted cluster. Today, however, we want to move on to another day-two activity: adding a worker node to our hosted cluster by scaling its node pool.

Lab Environment

First, let's review the lab environment so we are familiar with how the hosted cluster was deployed. We originally had a three-node compact hub cluster called kni20 running Red Hat Advanced Cluster Management for Kubernetes 2.6 on OpenShift 4.11.2. We used that hub cluster to deploy a hosted cluster called kni21 running OpenShift 4.11.2, where the control plane runs as containers on the hub cluster and three bare metal worker nodes host our workloads. The high-level architecture looks like the diagram below.

[Figure: Hosted-Node-Pool1 (high-level architecture diagram)]
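
As a quick way to confirm this layout on the hub cluster, we can list the HostedCluster resource and the hosted control plane pods. This is just a minimal sanity check, assuming (as in the previous blogs) that the HostedCluster named kni21 lives in the kni21 namespace, which gives it the kni21-kni21 control plane namespace:

$ oc get hostedcluster -n kni21
$ oc get pods -n kni21-kni21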

Currently we have three agent hosts assigned to cluster kni21 as worker nodes.

$ oc get agent -n kni21
NAME                                   CLUSTER   APPROVED   ROLE     STAGE
304d2046-5a32-4cec-8c08-96a2b76a6537   kni21     true       worker   Done
4ca57efe-0940-4fd6-afcb-7db41b8c918b   kni21     true       worker   Done
65f41daa-7ea8-4637-a7c6-f2cde634404a   kni21     true       worker   Done

If we also look at the nodepool, we can see that it is currently set to three desired nodes and reports three current nodes.

$ oc get nodepool -n kni21
NAME               CLUSTER   DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION   UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
nodepool-kni21-1   kni21     3               3               False         False        4.11.2

Now that we have an idea of how the environment is configured, let's turn our attention to discovering and adding an additional worker.

Discovering and Adding a Worker

The process of adding a new worker node to our hosted cluster first involves discovering a new available node through the Central Infrastructure Management component of Red Hat Advanced Cluster Management for Kubernetes. We can use the existing infrastructure environment from the previous blogs and boot our soon-to-be worker node from the discovery ISO. Within a few minutes we should see a new agent host appear in our list of agents.
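
If we need the discovery ISO URL again, it is published in the InfraEnv status. The example below is a sketch that assumes the infrastructure environment from the previous blogs is named kni21 and lives in the kni21 namespace:

$ oc get infraenv kni21 -n kni21 -o jsonpath='{.status.isoDownloadURL}{"\n"}'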

$ oc get agent -n kni21
NAME                                   CLUSTER   APPROVED   ROLE          STAGE
1e37f102-4aab-4d38-9d52-45f35268f190             false      auto-assign
304d2046-5a32-4cec-8c08-96a2b76a6537   kni21     true       worker        Done
4ca57efe-0940-4fd6-afcb-7db41b8c918b   kni21     true       worker        Done
65f41daa-7ea8-4637-a7c6-f2cde634404a   kni21     true       worker        Done

If we follow along with our diagram, we should now have a newly discovered host.

[Figure: Hosted-Node-Pool2 (new host discovered)]

In order to consume the host in our nodepool, we first must approve it. Approval lets us control which nodes can be added during the scaling phase. We can approve the agent by patching it and setting approved to true in its spec.

$ oc -n kni21 patch -p '{"spec":{"approved":true}}' --type merge agent 1e37f102-4aab-4d38-9d52-45f35268f190
agent.agent-install.openshift.io/1e37f102-4aab-4d38-9d52-45f35268f190 patched
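
As a quick check that the patch took effect, we can read the approved flag back out of the agent spec:

$ oc get agent 1e37f102-4aab-4d38-9d52-45f35268f190 -n kni21 -o jsonpath='{.spec.approved}{"\n"}'
true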

Then we can scale the nodepool by increasing its replica count, from 3 to 4 in this example.

$ oc scale nodepool -n kni21 nodepool-kni21-1 --replicas=4
nodepool.hypershift.openshift.io/nodepool-kni21-1 scaled

Once the nodepool is scaled, the desired node count shows the new number of nodes, but the current node count still shows the previous count. Behind the scenes, however, the controllers are working to pull in the agent host we approved in the previous steps.

$ oc get nodepool -n kni21
NAME               CLUSTER   DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION   UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
nodepool-kni21-1   kni21     4               3               False         False        4.11.2
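
Rather than re-running the command until the counts converge, we can simply watch the nodepool (or the agents) for changes:

$ oc get nodepool nodepool-kni21-1 -n kni21 -w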

If we follow along with the diagram, the new node is now entering the provisioning stage of the process.

[Figure: Hosted-Node-Pool3 (node provisioning)]

We can clearly see the changes taking place if we look at the agents again: the new agent has now been assigned to the cluster.

$ oc get agent -n kni21
NAME                                   CLUSTER   APPROVED   ROLE          STAGE
1e37f102-4aab-4d38-9d52-45f35268f190   kni21     true       auto-assign
304d2046-5a32-4cec-8c08-96a2b76a6537   kni21     true       worker        Done
4ca57efe-0940-4fd6-afcb-7db41b8c918b   kni21     true       worker        Done
65f41daa-7ea8-4637-a7c6-f2cde634404a   kni21     true       worker        Done
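
Beyond the table view, the binding can also be read directly from the agent object. This is a sketch that assumes the Agent resource records the assignment under spec.clusterDeploymentName:

$ oc get agent 1e37f102-4aab-4d38-9d52-45f35268f190 -n kni21 -o jsonpath='{.spec.clusterDeploymentName.name}{"\n"}'
kni21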

If we look at the machines, we can also see that a new machine is being provisioned to satisfy the desired replica count.

$ oc get machines -n kni21-kni21
NAME                     CLUSTER   NODENAME                        PROVIDERID                                     PHASE          AGE    VERSION
nodepool-kni21-1-9gpcf   kni21     asus3-vm3.kni.schmaustech.com   agent://4ca57efe-0940-4fd6-afcb-7db41b8c918b   Running        20d    4.11.2
nodepool-kni21-1-gjdcg   kni21     asus3-vm2.kni.schmaustech.com   agent://304d2046-5a32-4cec-8c08-96a2b76a6537   Running        20d    4.11.2
nodepool-kni21-1-vc6f2   kni21     asus3-vm1.kni.schmaustech.com   agent://65f41daa-7ea8-4637-a7c6-f2cde634404a   Running        20d    4.11.2
nodepool-kni21-1-vl98g   kni21                                                                                    Provisioning   105s   4.11.2

After waiting a few more minutes, we can see that the agent ID of the host we approved has been assigned to the newest member of the nodepool and that the provisioning portion has completed.

$ oc get machines -n kni21-kni21
NAME                     CLUSTER   NODENAME                        PROVIDERID                                     PHASE         AGE     VERSION
nodepool-kni21-1-9gpcf   kni21     asus3-vm3.kni.schmaustech.com   agent://4ca57efe-0940-4fd6-afcb-7db41b8c918b   Running       20d     4.11.2
nodepool-kni21-1-gjdcg   kni21     asus3-vm2.kni.schmaustech.com   agent://304d2046-5a32-4cec-8c08-96a2b76a6537   Running       20d     4.11.2
nodepool-kni21-1-vc6f2   kni21     asus3-vm1.kni.schmaustech.com   agent://65f41daa-7ea8-4637-a7c6-f2cde634404a   Running       20d     4.11.2
nodepool-kni21-1-vl98g   kni21                                     agent://1e37f102-4aab-4d38-9d52-45f35268f190   Provisioned   5m59s   4.11.2

At this point we can see that the node has been provisioned, as reflected in the diagram. However, the node is still being added to the hosted cluster as a worker.

[Figure: Hosted-Node-Pool4 (node provisioned)]

Checking the agents again, we can now see that the worker role has been assigned to the discovered agent node.

$ oc get agent -n kni21
NAME                                   CLUSTER   APPROVED   ROLE     STAGE
1e37f102-4aab-4d38-9d52-45f35268f190   kni21     true       worker   Done
304d2046-5a32-4cec-8c08-96a2b76a6537   kni21     true       worker   Done
4ca57efe-0940-4fd6-afcb-7db41b8c918b   kni21     true       worker   Done
65f41daa-7ea8-4637-a7c6-f2cde634404a   kni21     true       worker   Done
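
The same value shown in the STAGE column can also be pulled straight from the agent's install progress; this sketch assumes the stage is surfaced under status.progress.currentStage:

$ oc get agent 1e37f102-4aab-4d38-9d52-45f35268f190 -n kni21 -o jsonpath='{.status.progress.currentStage}{"\n"}'
Done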

Further, if we check the machine output again, we can see that the additional node that was in the provisioning/provisioned state is now in a running state, indicating that it has joined the hosted cluster.

$ oc get machines -n kni21-kni21
NAME                     CLUSTER   NODENAME                        PROVIDERID                                     PHASE     AGE   VERSION
nodepool-kni21-1-9gpcf   kni21     asus3-vm3.kni.schmaustech.com   agent://4ca57efe-0940-4fd6-afcb-7db41b8c918b   Running   20d   4.11.2
nodepool-kni21-1-gjdcg   kni21     asus3-vm2.kni.schmaustech.com   agent://304d2046-5a32-4cec-8c08-96a2b76a6537   Running   20d   4.11.2
nodepool-kni21-1-vc6f2   kni21     asus3-vm1.kni.schmaustech.com   agent://65f41daa-7ea8-4637-a7c6-f2cde634404a   Running   20d   4.11.2
nodepool-kni21-1-vl98g   kni21     asus3-vm4.kni.schmaustech.com   agent://1e37f102-4aab-4d38-9d52-45f35268f190   Running   10m   4.11.2

[Figure: Hosted-Node-Pool5 (node joined as worker)]

We can validate that the worker node was added by pointing the KUBECONFIG variable at the hosted cluster's kubeconfig and listing its nodes.
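
If the hosted cluster kubeconfig is not already saved locally, it can typically be extracted from the admin kubeconfig secret that HyperShift creates for the HostedCluster. The example below is a sketch that assumes the HostedCluster kni21 lives in the kni21 namespace and follows the usual <name>-admin-kubeconfig secret naming:

$ oc extract -n kni21 secret/kni21-admin-kubeconfig --to=- > /home/bschmaus/kubeconfig-kni21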

$ KUBECONFIG=/home/bschmaus/kubeconfig-kni21 oc get nodes
NAME                            STATUS   ROLES    AGE     VERSION
asus3-vm1.kni.schmaustech.com   Ready    worker   20d     v1.24.0+b62823b
asus3-vm2.kni.schmaustech.com   Ready    worker   20d     v1.24.0+b62823b
asus3-vm3.kni.schmaustech.com   Ready    worker   20d     v1.24.0+b62823b
asus3-vm4.kni.schmaustech.com   Ready    worker   2m52s   v1.24.0+b62823b

With that confirmation, we can conclude that we successfully scaled our nodepool to expand our hosted cluster. Hopefully this was valuable in explaining and demonstrating the process of scaling a nodepool!

