How to use workload partitioning with Red Hat OpenShift Container Platform

Learn how to use workload partitioning with Red Hat® OpenShift® Container Platform to benefit resource-constrained environments.

Enabling workload partitioning on Red Hat OpenShift Container Platform

1 hr

Before you can use workload partitioning, the feature must be enabled by modifying the installation configuration file. This can only be done while creating a new cluster in your environment; it cannot be turned on after installation.

What will you learn?

  • Enabling partitioning during cluster installation
  • Applying performance profiles
  • Verifying affinity

What do you need before starting?

  • A new installation of Red Hat OpenShift Container Platform (workload partitioning can only be enabled during cluster installation)
  • Access to the cluster's install-config.yaml file before installation
  • The oc command-line tool with cluster-admin access
  • SSH or debug access to the cluster's master nodes

Steps for enabling workload partitioning

Workload partitioning must be enabled during the initial cluster installation. This is done by adding the cpuPartitioningMode parameter to your install-config.yaml file.

# install-config.yaml
...
cpuPartitioningMode: AllNodes
...

After the cluster is installed with this setting, you can apply a PerformanceProfile to define the CPU allocation.
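
Once the installation completes, one way to confirm that partitioning is active is to look for the management cores resource in the node capacity. This is a minimal sketch, assuming a node named master-0 and that the cluster exposes the management.workload.openshift.io/cores extended resource:

# Nodes in a partitioned cluster should advertise a management cores capacity
oc describe node master-0 | grep management.workload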

Applying the PerformanceProfile

In this example, we have a 3-node compact cluster where each node has 24 CPU cores. For demonstration purposes, we will reserve the first 20 cores (0-19) for OpenShift system components and leave the remaining 4 cores (20-23) for user workloads. For actual production environments, the number of reserved CPU cores for OpenShift components depends on the specific deployment. A minimum of 4 CPU cores is required, but 8 cores per node are recommended for a 3-node compact cluster.

Create and apply the following PerformanceProfile manifest:

# Note: Ensure $BASE_DIR is set to your working directory
tee $BASE_DIR/data/install/performance-profile.yaml << 'EOF'
---
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: openshift-node-performance-profile
spec:
  cpu:
    # Cores reserved for user workloads
    isolated: "20-23" 
    # Cores reserved for OpenShift system components and the OS
    reserved: "0-19"
  machineConfigPoolSelector:
    pools.operator.machineconfiguration.openshift.io/master: ""
  nodeSelector:
    node-role.kubernetes.io/master: ''
  numa:
    # "restricted" policy enhances CPU affinity
    topologyPolicy: "restricted"
  realTimeKernel:
    enabled: false
  workloadHints:
    realTime: false
    highPowerConsumption: false
    perPodPowerManagement: false
EOF

oc apply -f $BASE_DIR/data/install/performance-profile.yaml

Verifying system component CPU affinity

Once the PerformanceProfile is applied, the nodes reboot to pick up the new configuration. We can then verify that critical system processes, such as etcd, are correctly pinned to the reserved CPU cores.
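
You can monitor the rollout while it happens; a minimal sketch, watching the master machine config pool (the pool targeted by the machineConfigPoolSelector above):

# Watch the master pool until it reports UPDATED=True again
oc get mcp master -w

# Confirm all nodes return to Ready after the reboots
oc get nodes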

First, log in to a master node; if you do not have direct SSH access, a debug pod is one option (see the sketch below).
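
A minimal sketch, assuming a control-plane node named master-0 (substitute a node name from oc get nodes):

# Start a debug pod on the node and switch into the host's root filesystem
oc debug node/master-0
chroot /host

With a shell on the node, use the following script to check the CPU affinity of all etcd processes: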

# Find all PIDs for etcd processes
ETCD_PIDS=$(ps -ef | grep "etcd " | grep -v grep | awk '{print $2}')
# Iterate through each PID and check its CPU affinity
for pid in $ETCD_PIDS; do
    echo "----------------------------------------"
    echo "Checking PID: ${pid}"
    
    COMMAND=$(ps -o args= -p "$pid")
    echo "Command: ${COMMAND}"
    
    echo -n "CPU affinity (Cpuset): "
    taskset -c -p "$pid"
done

The output should confirm that the etcd processes are running on the reserved cores (0-19).

# Expected Output
----------------------------------------
Checking PID: 4332
Command: etcd --logger=zap ...
CPU affinity (Cpuset): pid 4332's current affinity list: 0-19
----------------------------------------
Checking PID: 4369
Command: etcd grpc-proxy start ...
CPU affinity (Cpuset): pid 4369's current affinity list: 0-19

Now that the profile has been successfully applied and the nodes have rebooted, we are ready to test workload isolation.

Previous resource: Prerequisites
Next resource: Testing on VMs

