How to use workload partitioning with Red Hat OpenShift Container Platform
Workload partitioning is a feature in Red Hat® OpenShift® Container Platform that provides CPU core isolation between critical OpenShift system components and user-deployed workloads. This isolation is particularly beneficial in resource-constrained environments such as single-node OpenShift (SNO) or three-node compact clusters, where system services and user applications run on the same hosts and compete for CPU resources.
By dedicating specific CPU cores to the system and others to workloads, you can achieve more predictable performance and prevent user applications from impacting the stability and responsiveness of the cluster's control plane.
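In practice, this split is typically expressed in two places: the cluster's install-time configuration, which turns workload partitioning on, and a performance profile that divides the host CPUs into a reserved set (for platform pods) and an isolated set (for user workloads). The snippet below is a minimal, illustrative sketch only; the profile name, CPU ranges, and node selector are placeholder values, and the steps later in this learning path and the product documentation remain the authoritative reference.

```yaml
# install-config.yaml (excerpt): workload partitioning is enabled at install time.
cpuPartitioningMode: AllNodes
---
# PerformanceProfile (sketch): splits host CPUs into reserved and isolated sets.
# The name, CPU ranges, and nodeSelector below are example values only.
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: example-workload-partitioning
spec:
  cpu:
    reserved: "0-1"   # cores dedicated to OpenShift platform and management pods
    isolated: "2-7"   # cores left for user-deployed workloads
  nodeSelector:
    node-role.kubernetes.io/master: ""
```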
This learning path shows how to configure workload partitioning and verify that it is working. We will deploy sample pods and a virtual machine to observe how CPU isolation is enforced.
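As a preview of the kind of check used in the testing sections, you can inspect which CPUs a running container is actually allowed to use. The pod and namespace names below are placeholders, not values from this walkthrough:

```bash
# Show the CPU affinity of a container's main process (placeholder pod name).
oc exec <sample-pod> -- grep Cpus_allowed_list /proc/1/status

# Platform pods pinned to the reserved CPUs carry workload annotations;
# grepping a system component's pod manifest makes them visible.
oc get pod <dns-pod-name> -n openshift-dns -o yaml | grep workload.openshift.io
```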
Note: This is an example walkthrough of the feature. Please refer to the Red Hat OpenShift Container Platform documentation for more information.
What do you need before starting?
- Red Hat account
- Access to the Red Hat Hybrid Cloud Console
- Red Hat OpenShift Container Platform 4.18+
What is included in this learning path?
- Enabling workload partitioning
- Testing workload partitioning
- Testing workload isolation on a virtual machine
- Behind-the-scenes explanation of the feature
What will you get?
- Understanding of how workload partitioning functions
- Experience enabling this feature on a cluster and testing it with pods and a virtual machine
- Breakdown of testing options and output comparisons
This learning path is for operations teams and system administrators.
Developers may want to check out Network observability with eBPF on developers.redhat.com.