Background

During installation of OpenShift (OCP), only one disk partition is created; it contains the operating system (CoreOS) and all the other OCP component binaries, including the container images. This means that by default, all binaries, configurations, and logs live under the sysroot partition.

For better performance and security, a separate partition can be created for the /var/lib/containers folder at installation time using a MachineConfig (MC) Custom Resource (CR). You may create additional partitions, but they are not guaranteed to work, and at the very least they will not be monitored by OCP (for disk usage, etc.). As of this writing, only two partitions are monitored by OCP CoreOS (OCP 4.11 and earlier).

There are many ways of installing OCP: UPI, IPI, aicli, and ACM.
In this blog we describe the aicli procedure, which is command-line based. This can be an advantage when installing on hardware with limited access to GUIs (from inside a Linux jump server, for example). The same MC YAML file that describes the creation of this partition in our installation method (aicli) can also be included in the manifests subfolder of the installation folder for IPI, UPI, or any other method; however, only aicli and IPI were tested and are proven to work. Most of the installation methods are described in the OpenShift online documentation, including IPI (which can be found here).

For IPI, the MC contents provided in this document can be copied as-is (after making sure the disk name and partition sizes are correct) into the folder called manifests, created by running the command: openshift-install create manifests --dir=./
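For example, a minimal sketch of the IPI flow would look like this (98_var_partition.yaml is the MC file created later in this document):

$ openshift-install create manifests --dir=./
$ cp 98_var_partition.yaml ./manifests/
$ openshift-install create cluster --dir=./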

Prerequisites

Many OCP installation methods exist nowadays, including the Assisted Installer, which provides a command-line interface to Red Hat's cloud console: it creates the boot images used to boot the hardware, and the node(s) then communicate with the cloud console to continue the installation. The aicli command-line interface is a tool that can be installed as a Python 3 pip module or run using podman. Here I provide the steps on how to use aicli as a Python module, so python3 and pip3 must be installed before starting. podman can also be used by aliasing the command with some options; podman is the only container runtime engine validated for aicli, as this method has not been tested with Docker or other container runtime engines.
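For reference, a typical podman alias looks like the following sketch; the image location and mounted paths are assumptions based on the aicli documentation, so adjust them to your environment:

$ alias aicli='podman run --net host -it --rm -e AI_OFFLINETOKEN="$OFFLINETOKEN" -v "$PWD":/workdir quay.io/karmab/aicli'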

Configuring aicli

The aicli command-line tool enables us to inject configuration into the ISO image that will be used to launch the installation. aicli communicates with Red Hat's cloud console to generate this ISO, which also contains pointers to all the OCP images and to the agent installer that kicks off the installation when the ISO is booted. The following steps install the aicli command-line modules.
The procedure can also be found here.

# sudo pip3 install -U aicli
# sudo pip3 install -U assisted-service-client

Now, to communicate with the cloud console API, you need to download a token so that you are not required to log in to your account on the cloud console each time. To download the token, go to https://cloud.redhat.com/openshift/token, click “Load token”, and copy the token from the clipboard.

Once the token is copied, go to the terminal of the server where aicli has been installed and define this environment variable:

# export OFFLINETOKEN="eyJhbGciOiJIUzI1NiIsInR5cCIgOi..."
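It is also recommended to save the token into a file for future use, for example (the file name here is just an illustration):

$ echo "$OFFLINETOKEN" > ~/.ocp-offline-token
$ export OFFLINETOKEN="$(cat ~/.ocp-offline-token)"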

A pull secret is needed to download the OCP container images during installation. The pull secret is composed of a set of tokens, account user names, and registry links used to pull the OCP images from the different online repositories. To download the pull secret, go to https://console.redhat.com/openshift/install/pull-secret and click “Download pull secret”. The file will be saved as pull_secret.txt; you can rename it or keep it as-is, but remember to adjust the commands accordingly each time the file name is used. In this document, we refer to it as openshift_pull.json.

The next step is to define the aicli alias as follows:

$ alias aicli="aicli --offlinetoken $OFFLINETOKEN"
$ aicli list cluster
+---------+----+--------+------------+
| Cluster | Id | Status | Dns Domain |
+---------+----+--------+------------+
+---------+----+--------+------------+
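To make the alias survive new shell sessions, you can append it to your shell profile (this assumes bash, and that OFFLINETOKEN is exported at login, for example from the file saved earlier):

$ echo 'alias aicli="aicli --offlinetoken $OFFLINETOKEN"' >> ~/.bashrc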

SNO Deployment

Note: This procedure has been tested on SNO and on a 3-node compact cluster, with the label set to master. In a multi-node cluster, the MC applies to all master nodes. If you want to do the same thing on worker nodes of a multi-node cluster, you must create a second file in the same directory, give the MachineConfig a different name, and set the label to worker (machineconfiguration.openshift.io/role: worker; name: 98-var-partition-worker), as shown below.
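For illustration, only the metadata of the worker variant changes relative to the master MC defined later in this document; the spec stays the same:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-var-partition-worker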

In this section, we describe the steps to configure the /var/lib/containers partition as part of the installation, so there will be no need to configure it again later. The following steps install the SNO using aicli and add the MC manifest that creates the partition during installation. The MC is based on the instructions provided in the OCP documentation found here.

$ mkdir test-aicli
$ cd test-aicli/

Create the aicli parameters file as follows. Remember to update all the fields as required:

$ cat aicli-parameters.yaml

ssh_public_key: 'ssh-rsa AAAAB3NzaC...QahoPlBzwy48U' # replace
base_dns_domain: mylab.mydomain.com # replace
openshift_version: 4.9.17 # set the correct OCP version
image_type: minimal-iso
static_network_config:
- interfaces:
  - name: eno1
    type: ethernet
    state: up
    ipv4:
      address:
      - ip: 192.168.2.90 # replace
        prefix-length: 25 # replace
      enabled: true
    mac-address: b8:ce:f6:56:4x:yx # replace
  dns-resolver:
    config:
      server:
      - 192.168.2.80 # replace
  routes:
    config:
    - destination: 0.0.0.0/0
      next-hop-address: 192.168.2.1
      next-hop-interface: eno1
      table-id: 254
$ pwd
${HOME}/test-aicli
$ ls -lrt
total 8
-rw-r--r--. 1 username username 1128 Aug 22 15:28 aicli-parameters.yaml
-rw-r--r--. 1 username username 2771 Aug 22 15:32 openshift_pull.json
$ cat openshift_pull.json 
{"auths":{"cloud.openshift.com":{"auth":"b3BlbnNoaWZ0LXJlbGVhc2UtZGV2K.....0VGSFU1OFNDRTlGR1gyOERLNQ==","email":"username@redhat.com"},"quay.io":{"auth":"b3BlbnNo...","email":"username@redhat.com"},"

Creating the cluster using the values from the parameters file:

$ aicli create cluster sno-21 -P pull_secret=./openshift_pull.json -P sno=true --paramfile aicli-parameters.yaml

Creating cluster sno-21
Forcing network_type to OVNKubernetes
$ aicli list cluster 
+---------+--------------------------------------+-------------------+--------------------+
| Cluster | Id                                   | Status            | Dns Domain         |
+---------+--------------------------------------+-------------------+--------------------+
| sno-21  | 45bd2c66-6257-47c4-b2cd-2ea366965d21 | pending-for-input | mylab.mydomain.com |
+---------+--------------------------------------+-------------------+--------------------+

At this step, before creating the ISO, you need to include the manifest files that create additional configurations. In this case, I am adding the MC manifest for the creation of the additional /var/lib/containers partition.

$ mkdir ai-manifests
$ cd ai-manifests
$ cat 98_var_partition.yaml


apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 98-var-partition
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      disks:
      - device: /dev/sdb
        partitions:
        - label: varlibcontainers
          sizeMiB: 150000 # size of the partition
          startMiB: 220000 # space left for the CoreOS partition
      filesystems:
      - device: /dev/disk/by-partlabel/varlibcontainers
        format: xfs
        mountOptions:
        - defaults
        - prjquota
        path: /var/lib/containers
    systemd:
      units:
      - contents: |-
          # Generated by Butane
          [Unit]
          Before=local-fs.target
          Requires=systemd-fsck@dev-disk-by\x2dpartlabel-varlibcontainers.service
          After=systemd-fsck@dev-disk-by\x2dpartlabel-varlibcontainers.service

          [Mount]
          Where=/var/lib/containers
          What=/dev/disk/by-partlabel/varlibcontainers
          Type=xfs
          Options=defaults,prjquota

          [Install]
          RequiredBy=local-fs.target
        enabled: true
        name: var-lib-containers.mount
$  cd ..
$ aicli create manifests --dir ./ai-manifests/ sno-21
Uploading manifests for Cluster sno-21
uploading file 98_var_partition.yaml
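As an aside, the "# Generated by Butane" comment in the mount unit hints that this MC can be generated from a Butane config rather than written by hand. The following is a minimal sketch, assuming the same device and sizes as above; render it with butane 98_var_partition.bu -o 98_var_partition.yaml:

variant: openshift
version: 4.9.0
metadata:
  name: 98-var-partition
  labels:
    machineconfiguration.openshift.io/role: master
storage:
  disks:
  - device: /dev/sdb # assumed device, same as the MC above
    partitions:
    - label: varlibcontainers
      size_mib: 150000
      start_mib: 220000
  filesystems:
  - device: /dev/disk/by-partlabel/varlibcontainers
    format: xfs
    mount_options:
    - defaults
    - prjquota
    path: /var/lib/containers
    with_mount_unit: true # generates the var-lib-containers.mount unit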

As a validation, we can list the manifests applied to the cluster:

$ aicli list manifest sno-21
Retrieving manifests for Cluster sno-21
+-----------------------+-----------+
| File                  | Folder    |
+-----------------------+-----------+
| 98_var_partition.yaml | manifests |
+-----------------------+-----------+

Creating the ISO:

$ aicli create iso sno-21
This api call is deprecated # ignore this warning
Getting Iso url for infraenv sno-21
https://api.openshift.com/api/assisted-images/images/24b00efc-abd8-4a73-9027-7a3452be3113?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NjEyMTE1MzQsInN1YiI6IjI0YjAwZWZjLWFiZDgtNGE3My05MDI3LTdhMzQ1MmJlMzExMyJ9.IyIKz9yDnui5eeBZoxofyX8U7S9AevHpKTq3jJyBFHY&type=minimal-iso&version=4.9

Downloading the ISO

$ aicli download iso -p . sno-21
Downloading Iso for infraenv sno-21 in .
$ ls -lrt
total 106244
-rw-r--r--. 1 username username 1128 Aug 22 15:28 aicli-parameters.yaml
-rw-r--r--. 1 username username 2771 Aug 22 15:32 openshift_pull.json
-rw-r--r--. 1 username username 108785664 Aug 22 15:42 sno-21.iso
drwxr-x---. 1 username username      1024 Aug 22 15:42 ai-manifests

Boot the server from the ISO, then check the status until the node gets discovered:

$ aicli list cluster 
+---------+--------------------------------------+-------------------+--------------------+
| Cluster | Id                                   | Status            | Dns Domain         |
+---------+--------------------------------------+-------------------+--------------------+
| sno-21  | 45bd2c66-6257-47c4-b2cd-2ea366965d21 | pending-for-input | mylab.mydomain.com |
+---------+--------------------------------------+-------------------+--------------------+
$ aicli list cluster
+---------+--------------------------------------+--------------+--------------------+
| Cluster | Id                                   | Status       | Dns Domain         |
+---------+--------------------------------------+--------------+--------------------+
| sno-21  | 45bd2c66-6257-47c4-b2cd-2ea366965d21 | insufficient | mylab.mydomain.com |
+---------+--------------------------------------+--------------+--------------------+

Keep checking the cluster until its status is ready:

$ aicli list cluster 
+---------+--------------------------------------+--------+--------------------+
| Cluster | Id                                   | Status | Dns Domain         |
+---------+--------------------------------------+--------+--------------------+
| sno-21  | 45bd2c66-6257-47c4-b2cd-2ea366965d21 | ready  | mylab.mydomain.com |
+---------+--------------------------------------+--------+--------------------+

Now we are ready to start the SNO installation:

$ aicli start cluster sno-21
Starting cluster sno-21
$ aicli list cluster
+---------+--------------------------------------+----------------------------+--------------------+
| Cluster | Id                                   | Status                     | Dns Domain         |
+---------+--------------------------------------+----------------------------+--------------------+
| sno-21  | 45bd2c66-6257-47c4-b2cd-2ea366965d21 | preparing-for-installation | mylab.mydomain.com |
+---------+--------------------------------------+----------------------------+--------------------+

Keep checking the cluster installation until the status is installed:

$ aicli list cluster 
+---------+--------------------------------------+-------------------+--------------------+
| Cluster | Id                                   | Status            | Dns Domain         |
+---------+--------------------------------------+-------------------+--------------------+
| sno-21  | 45bd2c66-6257-47c4-b2cd-2ea366965d21 | pending-for-input | mylab.mydomain.com |
+---------+--------------------------------------+-------------------+--------------------+
$ aicli list cluster 
+---------+--------------------------------------+------------+--------------------+
| Cluster | Id                                   | Status     | Dns Domain         |
+---------+--------------------------------------+------------+--------------------+
| sno-21  | 45bd2c66-6257-47c4-b2cd-2ea366965d21 | installing | mylab.mydomain.com |
+---------+--------------------------------------+------------+--------------------+
$ aicli list cluster 
+---------+--------------------------------------+------------+--------------------+
| Cluster | Id                                   | Status     | Dns Domain         |
+---------+--------------------------------------+------------+--------------------+
| sno-21  | 45bd2c66-6257-47c4-b2cd-2ea366965d21 | installing | mylab.mydomain.com |
+---------+--------------------------------------+------------+--------------------+

You can download the kubeconfig to interact with the cluster while the installation is ongoing:

$ aicli download kubeconfig sno-21
Downloading Kubeconfig for Cluster sno-21 in ./kubeconfig.sno-21
$ ls -lrt
total 106256
-rw-r--r--. 1 username username 1128 Aug 22 15:28 aicli-parameters.yaml
-rw-r--r--. 1 username username 2771 Aug 22 15:32 openshift_pull.json
-rw-r--r--. 1 username username 108785664 Aug 22 15:42 sno-21.iso
-rw-r--r--. 1 username username 8997 Aug 22 16:05 kubeconfig.sno-21
$ oc get nodes --kubeconfig kubeconfig.sno-21
NAME                                 STATUS     ROLES    AGE   VERSION
master-0.sno-21.mylab.mydomain.com   NotReady   master   12s   v1.22.3+e790d7f

$ oc get nodes --kubeconfig kubeconfig.sno-21
NAME                                 STATUS   ROLES           AGE     VERSION
master-0.sno-21.mylab.mydomain.com   Ready    master,worker   5m35s   v1.22.3+e790d7f
$ oc get clusterversion --kubeconfig kubeconfig.sno-21
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version             False       True          19m     Working towards 4.9.17: 454 of 738 done (61% complete)
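While waiting, you can also watch the resource instead of polling it manually (a standard oc flag, not specific to this setup):

$ oc get clusterversion --kubeconfig kubeconfig.sno-21 -w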

You can also check that the installation is complete using aicli commands:

$ aicli list cluster 
+---------+--------------------------------------+------------+--------------------+
| Cluster | Id                                   | Status     | Dns Domain         |
+---------+--------------------------------------+------------+--------------------+
| sno-21  | 45bd2c66-6257-47c4-b2cd-2ea366965d21 | installing | mylab.mydomain.com |
+---------+--------------------------------------+------------+--------------------+

$ aicli list cluster
+---------+--------------------------------------+-----------+--------------------+
| Cluster | Id                                   | Status    | Dns Domain         |
+---------+--------------------------------------+-----------+--------------------+
| sno-21  | 45bd2c66-6257-47c4-b2cd-2ea366965d21 | installed | mylab.mydomain.com |
+---------+--------------------------------------+-----------+--------------------+

Cluster validation

$ oc get co --kubeconfig kubeconfig.sno-21
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.9.17    True        False         False      8h
baremetal                                  4.9.17    True        False         False      10h
cloud-controller-manager                   4.9.17    True        False         False      10h
cloud-credential                           4.9.17    True        False         False      10h
cluster-autoscaler                         4.9.17    True        False         False      10h
config-operator                            4.9.17    True        False         False      10h
console                                    4.9.17    True        False         False      8h
csi-snapshot-controller                    4.9.17    True        False         False      10h
dns                                        4.9.17    True        False         False      10h
etcd                                       4.9.17    True        False         False      10h
image-registry                             4.9.17    True        False         False      9h
ingress                                    4.9.17    True        False         False      10h
insights                                   4.9.17    True        False         False      9h
kube-apiserver                             4.9.17    True        False         False      9h
kube-controller-manager                    4.9.17    True        False         False      10h
kube-scheduler                             4.9.17    True        False         False      10h
kube-storage-version-migrator              4.9.17    True        False         False      10h
machine-api                                4.9.17    True        False         False      10h
machine-approver                           4.9.17    True        False         False      10h
machine-config                             4.9.17    True        False         False      7h37m
marketplace                                4.9.17    True        False         False      10h
monitoring                                 4.9.17    True        False         False      9h
network                                    4.9.17    True        False         False      10h
node-tuning                                4.9.17    True        False         False      10h
openshift-apiserver                        4.9.17    True        False         False      8h
openshift-controller-manager               4.9.17    True        False         False      9h
openshift-samples                          4.9.17    True        False         False      10h
operator-lifecycle-manager                 4.9.17    True        False         False      10h
operator-lifecycle-manager-catalog         4.9.17    True        False         False      10h
operator-lifecycle-manager-packageserver   4.9.17    True        False         False      10h
service-ca                                 4.9.17    True        False         False      10h
storage                                    4.9.17    True        False         False      10h

$ oc get clusterversions.config.openshift.io --kubeconfig kubeconfig.sno-21
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.17    True        False         9h      Cluster version is 4.9.17

Once the installation is complete, you can log in to the node and verify the existence of the partition:

$ oc debug node/master-0.sno-21.mylab.mydomain.com --kubeconfig kubeconfig.sno-21
Starting pod/master-0.sno-21.mylab.mydomain.com-debug ...
To use host binaries, run `chroot /host`
Pod IP: 192.168.24.91
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-4.4# df -h
Filesystem      Size  Used  Avail  Use%  Mounted on
devtmpfs         94G     0    94G    0%  /dev
tmpfs            94G  168K    94G    1%  /dev/shm
tmpfs            94G   70M    94G    1%  /run
tmpfs            94G     0    94G    0%  /sys/fs/cgroup
/dev/sdb4       215G  5.9G   209G    3%  /sysroot
tmpfs            94G   12K    94G    1%  /tmp
/dev/sdb5       147G   12G   136G    8%  /var/lib/containers
/dev/sdb3       364M  194M   147M   57%  /boot
overlay          94G   70M    94G    1%  /etc/NetworkManager/systemConnectionsMerged
tmpfs            19G  8.0K    19G    1%  /run/user/1000
sh-4.4# lsblk
NAME     MAJ:MIN  RM    SIZE  RO  TYPE  MOUNTPOINT
sdb        8:16    0  447.1G   0  disk
├─sdb1     8:17    0      1M   0  part
├─sdb2     8:18    0    127M   0  part
├─sdb3     8:19    0    384M   0  part  /boot
├─sdb4     8:20    0  214.4G   0  part  /sysroot
└─sdb5     8:21    0  146.5G   0  part  /var/lib/containers
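You can also confirm that the systemd mount unit defined in the MC is active; the unit name, var-lib-containers.mount, comes from the MC above:

sh-4.4# systemctl status var-lib-containers.mount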

Conclusion

In this publication, we have shown a method to create an additional partition on OCP CoreOS during installation. This approach removes the burden of creating the partition later as a day-1 or day-2 task and frees time for other important operational work.