Recently, I have been working on the openshift-auto-upi project, which automates UPI deployments of OpenShift. I was looking for a way to configure OpenShift nodes with static IP addresses. After several failed attempts, I found a working approach that can be easily automated. If you prefer using static IPs over the default DHCP provisioning, please read on as I share my approach with you.

Current OpenShift installation methods assume that network configuration of OpenShift nodes is done via DHCP. If you are not interested in using DHCP, official installation guides won't help you any further. After searching on the Internet, I found several approaches that accomplish static IP configuration:

  • Manual approach. During the machine startup, go to the GRUB menu and modify kernel command-line options to achieve static IP provisioning.
  • Occasional DHCP. For the initial boot, use a DHCP server to allow the machines to download their ignition configs. After the initial boot, machines can be configured to use static IPs. Unfortunately, a typical reason for using static IPs is to avoid the need for a DHCP server. This approach defeats this purpose. Note that with this approach, you will need to turn the DHCP server back on any time you are adding nodes to the cluster.
  • Bare metal installer. Utilize coreos-iso-maker to create a set of customized bare metal installer images. Deploy OpenShift with static IPs using the bare metal installation method.

The coreos-iso-maker approach appealed to me as it allows for full automation of the installation process. What I was less fond of, though, was using the bare metal installation process on virtualization platforms like vSphere or RHEV. What's the problem? While bare metal installation works on vSphere and RHEV, it doesn't leverage the capabilities of these platforms to their full extent. For example, when installing on vSphere, you upload the CoreOS machine image into the vSphere datastore only once. From there, the machine image is accessible to the hypervisors, which heavily leverage memory sharing and caching to preserve hardware resources. In contrast, if you deploy OpenShift on vSphere using the bare metal installation method, you end up with independent machine images, and these optimizations are no longer possible.

So, how can we achieve static IPs while leveraging the existing installation methods? In a nutshell, the approach is:

  1. Extract the GRUB configuration from the original CoreOS machine image
  2. Customize the GRUB configuration to use static IPs
  3. Create a bootable ISO image that includes the custom GRUB configuration
  4. Create an OpenShift node using the original CoreOS machine image and boot it from the ISO

At the beginning of the boot process, the custom GRUB configuration takes effect and overrides the configuration stored on the original CoreOS image. After that, the boot process continues using the original OpenShift CoreOS machine image.

In the following sections, I am going to walk you through the steps outlined above. We will boot an OpenShift CoreOS image on vSphere using static IP configuration. Note that I used a RHEL8 machine to run the commands in this blog post.

Extracting GRUB script from VMware disk image

Let's start off by obtaining an OpenShift CoreOS image. Visit the OpenShift 4 image repo and download the image that you want to use for your OpenShift deployment. I am going to install OpenShift 4.3 on vSphere, so I downloaded the latest VMware image for OpenShift 4.3, called rhcos-4.3.8-x86_64-vmware.x86_64.ova.

An OVA file is a tar archive that contains an XML descriptor (desc.ovf) and a virtual machine disk image (disk.vmdk). You can untar the OVA file with:

$ tar xfv rhcos-4.3.8-x86_64-vmware.x86_64.ova

Let's install the qemu-img tool, which can convert disk images between different formats:

$ yum install qemu-img

Next, convert the VMware disk image to a raw binary image:

$ qemu-img convert -f vmdk -O raw disk.vmdk disk.raw

The previous command creates a file disk.raw in the current working directory. This file is a raw binary image of the virtual machine disk. We can explore it using the fdisk command:

$ fdisk -l disk.raw
Disk disk.raw: 16 GiB, 17179869184 bytes, 33554432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 00000000-0000-4000-A000-000000000001

Device       Start     End Sectors  Size Type
disk.raw1     2048  788479  786432  384M Linux filesystem
disk.raw2   788480 1048575  260096  127M EFI System
disk.raw3  1048576 1050623    2048    1M BIOS boot
disk.raw4  1050624 6580223 5529600  2.7G Linux filesystem

The output of the fdisk command shows that the disk contains four partitions. The first partition is of interest to us as it holds the GRUB bootloader configuration. On the disk, the boot partition starts at sector 2048. Given that the sector size is 512 bytes, we can tell that the boot partition starts at offset 1048576 (2048 * 512) bytes from the start of the disk image. Knowing the offset of the boot partition, let's loop-mount this partition under /mnt:

$ mount -o loop,offset=1048576 disk.raw /mnt
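
Rather than hard-coding the offset, you can compute it in the shell from the numbers that fdisk reported:

```shell
# Byte offset of the boot partition: its start sector (from `fdisk -l disk.raw`)
# multiplied by the 512-byte sector size
start_sector=2048
sector_size=512
offset=$((start_sector * sector_size))
echo "$offset"   # prints 1048576
```

Alternatively, `losetup -P` can map each partition of the image to its own loop device (e.g. /dev/loop0p1), which avoids the offset arithmetic entirely.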

You can now inspect the GRUB bootloader script used by the Red Hat CoreOS OpenShift image. In the following sample output, I am showing only the tail end of the script, which we will discuss further. This part is responsible for the initial network configuration:

$ cat /mnt/grub2/grub.cfg
... removed for brevity ...

set ignition_firstboot=""
if [ -f "/ignition.firstboot" ]; then
    # default to dhcp networking parameters to be used with ignition
    set ignition_network_kcmdline='rd.neednet=1 ip=dhcp,dhcp6'

    # source in the `ignition.firstboot` file which could override the
    # above $ignition_network_kcmdline with static networking config.
    # This override feature is primarily used by coreos-installer to
    # persist static networking config provided during install to the
    # first boot of the machine.
    source "/ignition.firstboot"

    set ignition_firstboot="ignition.firstboot ${ignition_network_kcmdline}"
fi

How is the network configured during the machine startup? During the first boot, network interfaces are configured by dracut. Dracut reads the rd.neednet=1 ip=dhcp,dhcp6 parameters set in the above GRUB script, generates the respective network configuration, and writes it to /etc/sysconfig/network-scripts. After the configuration has been written, dracut brings the network interfaces up. Note that the network is configured by dracut during the first boot only. On subsequent boots, the above GRUB script skips setting the dracut network parameters, and hence dracut doesn't touch the network configuration anymore. Instead, network interfaces are initialized in the standard way by systemd, using the network configuration written previously.

GRUB is a very powerful bootloader that can be configured in many ways. You can refer to the GNU GRUB Manual for further information on GRUB configuration.

Creating bootable ISO image

In this section, we will create a bootable ISO image that will include a customized GRUB script. First, create the required directory structure:

$ mkdir -p iso/boot/grub

Next, copy the original GRUB script found on the OpenShift image to the ISO image:

$ cp /mnt/grub2/grub.cfg iso/boot/grub

After we copied the GRUB script out of the original OpenShift image, we can unmount the OpenShift disk partition. It won't be needed anymore:

$ umount /mnt

We are now ready to customize the GRUB configuration:

$ vi iso/boot/grub/grub.cfg

This is the network configuration from the original grub.cfg script:

set ignition_network_kcmdline='rd.neednet=1 ip=dhcp,dhcp6'

In order to provision the ens192 network interface with a static IP, change the above line to use dracut's static ip= syntax, substituting your own values for the placeholders in angle brackets:

set ignition_network_kcmdline='rd.neednet=1 ip=<ip-address>::<gateway>:<netmask>:<hostname>:ens192:none nameserver=<dns-server>'

Based on the above network settings, dracut will generate a network configuration file /etc/sysconfig/network-scripts/ifcfg-ens192. The generated file begins with the comment line below and contains the static addressing that you supplied on the kernel command line:

# Generated by dracut initrd

Note that the name of the first network interface depends on the virtual hardware configuration. On vSphere using the default settings, the first network interface is called ens192. If you are running QEMU/KVM hypervisor and using virtio network drivers, your first network interface is likely called ens3. To find out the name of the network interface on your virtual machines, I think it's best to just boot up a temporary virtual machine and take a look.

The above configuration uses dracut to statically configure a network interface. You can refer to dracut.cmdline man page for a complete list of dracut kernel command line options.
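
For illustration, here is the network line populated with hypothetical addresses: a node at 192.168.1.10/24 named master-0, with 192.168.1.1 acting as both gateway and DNS server:

```
set ignition_network_kcmdline='rd.neednet=1 ip=192.168.1.10::192.168.1.1:255.255.255.0:master-0:ens192:none nameserver=192.168.1.1'
```

The ip= value follows dracut's <client-IP>::<gateway-IP>:<netmask>:<client-hostname>:<interface>:{none|dhcp|...} format; the trailing none disables DHCP on the interface.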

After we've finished the changes to the GRUB script, we can include it on a bootable ISO image. As the original Red Hat CoreOS image is capable of booting on machines with BIOS as well as EFI firmware, we are going to create an ISO image that supports both types of firmware.

First, install GRUB modules for BIOS and EFI:

$ yum install grub2-pc-modules grub2-efi-x64-modules

Install the xorriso and mtools RPMs, which are used by grub2-mkrescue when creating the ISO image:

$ yum install xorriso mtools

Finally, generate a bootable ISO image using the command:

$ grub2-mkrescue -o staticip.iso iso

An ISO image staticip.iso of about 13 MB in size should be sitting in your working directory now.

Booting Red Hat CoreOS using custom ISO image

In order to boot a Red Hat CoreOS virtual machine using the custom ISO image, you can follow these steps:

  1. Upload the custom ISO image staticip.iso into a datastore in vSphere.
  2. Create a virtual machine using the original OpenShift CoreOS VMware image as its disk image.
  3. Add a new CD/DVD drive to the virtual machine with staticip.iso inserted.
  4. Boot the virtual machine using CD-ROM as a boot device instead of the first hard drive.
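
If you drive vSphere from the command line, the steps above can be scripted with VMware's govc tool. This is only a sketch: the VM name, datastore paths, and ISO location are hypothetical, and it assumes govc is already configured through the GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD environment variables:

```shell
# Step 1: upload the custom ISO to the datastore (path is illustrative)
govc datastore.upload staticip.iso iso/staticip.iso

# Step 2: create the VM from the original RHCOS OVA
govc import.ova -name=master-0 rhcos-4.3.8-x86_64-vmware.x86_64.ova

# Step 3: add a CD/DVD drive, insert the uploaded ISO, and connect the drive
dev=$(govc device.cdrom.add -vm master-0)
govc device.cdrom.insert -vm master-0 -device "$dev" iso/staticip.iso
govc device.connect -vm master-0 "$dev"

# Step 4: put CD-ROM first in the boot order, then power the machine on
govc device.boot -vm master-0 -order cdrom,disk
govc vm.power -on master-0
```

After the first boot completes, `govc device.boot -vm master-0 -order disk` switches the machine back to booting from its hard drive.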

After the virtual machine has started booting, you can switch to the machine console. When the GRUB menu appears on the screen, press 'e' to display the details of the currently selected item and verify that your network parameters have been included in the kernel command line.


If the virtual machine boots up using DHCP instead of a static IP, double check that the machine is really booting from CD-ROM. It was tricky for me to make the virtual machine boot from CD-ROM. After several failed attempts, I found a way that worked for me. I used the option to force the virtual machine to enter the BIOS setup screen during the next boot. Upon entering the BIOS, I manually changed the boot device order so that CD-ROM appeared on top.

After the machine boots up successfully and the ignition tool completes the first-boot setup, you can configure the virtual machine to go back to booting from the original Red Hat CoreOS disk image. You can change the device boot order and place the first hard-drive back on top. Alternatively, you can disconnect the CD-ROM drive from the virtual machine altogether.


Conclusion

In this blog post, we reviewed existing approaches to configuring OpenShift nodes with static IP addresses. We then demonstrated an alternative approach that leverages the existing OpenShift installation methods and requires only minimal modifications.

The openshift-auto-upi project is hosted on GitHub. If you are interested in installing OpenShift on bare metal, libvirt, RHEV, or vSphere using static IPs, I recommend checking out openshift-auto-upi. The static IP configuration described in this blog post will be implemented in one of the future releases of the project.


