[Image: A chart explaining what is included in MicroShift]

Red Hat Device Edge is the new product offering from Red Hat that unites the performance, security, and reliability of Red Hat Enterprise Linux with a lightweight version of OpenShift called MicroShift (currently a developer preview). This union provides versatility and flexibility for a variety of operations at the edge. Whether it is a small packaged application, a container run via Podman, or a Kubernetes experience without the CPU and memory requirements of Single Node OpenShift, Red Hat Device Edge can meet those needs. In the following blog we will demonstrate the workflow of building a zero touch Red Hat Device Edge image with MicroShift and deploying it on a simulated edge device.

Initial Prerequisites

To demonstrate Red Hat Device Edge and the zero touch workflow, we first need a fresh install of Red Hat Enterprise Linux 8.7 on either a physical host or a virtual machine. I am using the latter for this demonstration, and it will act as our builder host. On this host we need to fulfill a few prerequisites, one of which is enabling the following repositories.

$ sudo yum repolist
Updating Subscription Management repositories.
repo id repo name
fast-datapath-for-rhel-8-x86_64-rpms Fast Datapath for RHEL 8 x86_64 (RPMs)
rhel-8-for-x86_64-appstream-rpms Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs)
rhel-8-for-x86_64-baseos-rpms Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs)
rhocp-4.12-for-rhel-8-x86_64-rpms Red Hat OpenShift Container Platform 4.12 for RHEL 8 x86_64 (RPMs)
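
If any of these repositories are missing, they can typically be enabled with subscription-manager, assuming the builder host is already registered with a subscription that provides them. For example:

$ sudo subscription-manager repos \
    --enable=rhel-8-for-x86_64-baseos-rpms \
    --enable=rhel-8-for-x86_64-appstream-rpms \
    --enable=rhocp-4.12-for-rhel-8-x86_64-rpms \
    --enable=fast-datapath-for-rhel-8-x86_64-rpms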


Next we need to make sure the following packages are installed on the host, as they will all be needed for the image compose process and for building the custom ISO.

$ sudo dnf -y install createrepo yum-utils lorax skopeo composer-cli cockpit-composer podman genisoimage syslinux isomd5sum


Once the required packages are installed, we need to enable and start the cockpit and osbuild-composer sockets.

$ sudo systemctl enable --now cockpit.socket
Created symlink /etc/systemd/system/sockets.target.wants/cockpit.socket → /usr/lib/systemd/system/cockpit.socket.

$ sudo systemctl enable --now osbuild-composer.socket
Created symlink /etc/systemd/system/sockets.target.wants/osbuild-composer.socket → /usr/lib/systemd/system/osbuild-composer.socket.
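
With the sockets enabled, we can optionally confirm that the osbuild-composer API is responding before moving on. One simple check is to query its status with composer-cli:

$ sudo composer-cli status show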


Image Building Process

Now that we have our prerequisites in place, let us move on to building our Red Hat Device Edge image that will also contain MicroShift. Since we need MicroShift, we have to download the RPMs from the Red Hat OpenShift Container Platform and Fast Datapath repositories, where MicroShift and its dependency RPMs reside. To do this we will create a directory location for the RPMs and then use reposync to sync them down.

$ sudo mkdir -p /var/repos/microshift-local
$ sudo reposync --arch=$(uname -i) --arch=noarch --gpgcheck --download-path /var/repos/microshift-local --repo=rhocp-4.12-for-rhel-8-x86_64-rpms --repo=fast-datapath-for-rhel-8-x86_64-rpms
Updating Subscription Management repositories.
Red Hat OpenShift Container Platform 4.12 for RHEL 8 x86_64 (RPMs) 23 kB/s | 4.0 kB 00:00
Fast Datapath for RHEL 8 x86_64 (RPMs) 29 kB/s | 4.0 kB 00:00
(1/196): ansible-runner-1.4.6-2.el8ar.noarch.rpm 36 kB/s | 8.1 kB 00:00
(...)
(195/196): openshift-hyperkube-4.12.0-202301102025.p0.ga34b9e9.assembly.stream.el8.x86_64.rpm 4.4 MB/s | 77 MB 00:17
(196/196): microshift-4.12.1-202301261525.p0.g3db9e81.assembly.4.12.1.el8.x86_64.rpm 3.4 MB/s | 26 MB 00:07
(1/994): tuned-profiles-realtime-2.18.0-1.2.20220511git9fa66f19.el8fdp.noarch.rpm 159 kB/s | 40 kB 00:00
(2/994): tuned-utils-2.18.0-3.1.20220714git70732a57.el8fdp.noarch.rpm
(...)
(992/994): openvswitch2.17-test-2.17.0-67.el8fdp.noarch.rpm 1.3 MB/s | 122 kB 00:00
(993/994): openvswitch2.16-2.16.0-108.el8fdp.x86_64.rpm 3.2 MB/s | 16 MB 00:04
(994/994): openvswitch2.17-2.17.0-67.el8fdp.x86_64.rpm 6.7 MB/s | 17 MB 00:02


Next we need to remove any CoreOS packages that might cause conflicts.

$ sudo find /var/repos/microshift-local -name \*coreos\* -print -exec rm -f {} \;
/var/repos/microshift-local/rhocp-4.12-for-rhel-8-x86_64-rpms/Packages/c/coreos-installer-0.16.1-1.rhaos4.12.el8.x86_64.rpm


Now we can use the createrepo command to create a local repository from the RPMs we just synced.

$ sudo createrepo /var/repos/microshift-local
Directory walk started
Directory walk done - 1189 packages
Temporary output repo path: /var/repos/microshift-local/.repodata/
Preparing sqlite DBs
Pool started (with 5 workers)
Pool finished


With the repository created, we now need to build a repository TOML (Tom's Obvious, Minimal Language) file that defines the package source.

$ cat << EOF | sudo tee /var/repos/microshift-local/microshift.toml > /dev/null
id = "microshift-local"
name = "MicroShift local repo"
type = "yum-baseurl"
url = "file:///var/repos/microshift-local/"
check_gpg = false
check_ssl = false
system = false
EOF


Take the toml file we created above and add it to the osbuild-composer environment as a package source. Once it has been added, we can validate which sources are available by listing them.

$ sudo composer-cli sources add /var/repos/microshift-local/microshift.toml

$ sudo composer-cli sources list
appstream
baseos
microshift-local
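
If we want to double check how osbuild-composer recorded the new source, we can also dump its definition back out with composer-cli, which should match the toml we created above:

$ sudo composer-cli sources info microshift-local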


Now that we have all the package sources set up for our Red Hat Device Edge MicroShift image, we can begin constructing the blueprint toml file that will define our image. In the blueprint we define a version, the packages to be included in the image, and the services to be enabled.

$ cat << EOF > ~/rhde-microshift.toml
name = "rhde-microshift"
description = "RHDE Microshift Image"
version = "1.0.0"
modules = []
groups = []

[[packages]]
name = "microshift"
version = "*"

[[packages]]
name = "openshift-clients"
version = "*"

[[packages]]
name = "git"
version = "*"

[[packages]]
name = "iputils"
version = "*"

[[packages]]
name = "bind-utils"
version = "*"

[[packages]]
name = "net-tools"
version = "*"

[[packages]]
name = "iotop"
version = "*"

[[packages]]
name = "redhat-release"
version = "*"

[customizations]

[customizations.services]
enabled = ["microshift"]
EOF


The blueprint toml we created above can now be pushed into osbuild-composer, and we can validate it is there by listing the available blueprints.

$ sudo composer-cli blueprints push ~/rhde-microshift.toml

$ sudo composer-cli blueprints list
rhde-microshift
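
Before starting a compose, we can optionally ask osbuild-composer to depsolve the blueprint. This confirms that every package we listed, including microshift, resolves cleanly against the sources we added:

$ sudo composer-cli blueprints depsolve rhde-microshift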


At this point we are ready to compose our image by issuing the composer-cli compose start-ostree command. In our case the compose will use the rhde-microshift blueprint and build a rhel-edge-container image.

$ sudo composer-cli compose start-ostree rhde-microshift rhel-edge-container
Compose d5d57d57-8da5-487f-81a5-162691d2e912 added to the queue


The process of building the image can take some time depending on the system it runs on. We can follow the progress either by running the composer-cli compose status command repeatedly or by placing watch in front of it.

$ sudo composer-cli compose status
d5d57d57-8da5-487f-81a5-162691d2e912 RUNNING Sat Feb 4 14:03:28 2023 rhde-microshift 1.0.0 edge-container

$ watch sudo composer-cli compose status
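
If the status line alone is not enough, composer-cli can also fetch the build log for the compose, either while it is running or after it finishes:

$ sudo composer-cli compose log d5d57d57-8da5-487f-81a5-162691d2e912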


Once the image has finished building, we should see a status like the one below.

$ sudo composer-cli compose status
d5d57d57-8da5-487f-81a5-162691d2e912 FINISHED Sat Feb 4 14:11:00 2023 rhde-microshift 1.0.0 edge-container


Now we need to pull down a local copy of the image so we can work with it. We do this using composer-cli compose image.

$ sudo composer-cli compose image d5d57d57-8da5-487f-81a5-162691d2e912
d5d57d57-8da5-487f-81a5-162691d2e912-container.tar


Once the image file is downloaded, we next need to copy it into the local container storage of our host and tag it accordingly. We can validate it is there by running a podman images command.

$ sudo skopeo copy oci-archive:d5d57d57-8da5-487f-81a5-162691d2e912-container.tar containers-storage:localhost/rhde-microshift:latest
INFO[0000] Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled
INFO[0000] Image operating system mismatch: image uses OS "linux"+architecture "x86_64", expecting one of "linux+amd64"
Getting image source signatures
Copying blob c0727d0291f3 done
Copying config 47da294161 done
Writing manifest to image destination
Storing signatures

$ sudo podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
localhost/rhde-microshift latest 47da294161b1 20 hours ago 1.64 GB


Now we will start the container locally with podman. We need to do this because we want to extract the ostree repository contents from the container image.

$ sudo podman run --rm -p 8000:8080 rhde-microshift:latest &
[1] 126963

$ sudo podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
aa515e9d9cd2 localhost/rhde-microshift:latest nginx -c /etc/ngi... 5 seconds ago Up 5 seconds ago 0.0.0.0:8000->8080/tcp gallant_mahavira
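
Since the edge container serves its ostree content with nginx, which we mapped to host port 8000, a quick curl is a simple sanity check that the repository is reachable. The /repo path below is an assumption based on where we copy the content from in the next step:

$ curl http://localhost:8000/repo/config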


Create Zero Touch Provisioning ISO For Red Hat Device Edge

With our Red Hat Device Edge container image running, we now need to create a directory structure to hold the artifacts required to generate a complete zero touch Red Hat Device Edge bootable ISO image. First we will create the generate-iso directory with an ostree subdirectory inside it, then copy the repo directory from the running container into that ostree subdirectory. Once the copy is complete we can stop the container, as it is no longer needed. We can also validate that the contents of the ostree/repo directory look like the listing below.

$ mkdir -p ~/generate-iso/ostree

$ sudo podman cp aa515e9d9cd2:/usr/share/nginx/html/repo ~/generate-iso/ostree

$ sudo podman stop aa515e9d9cd2
aa515e9d9cd2

$ sudo ls -l ~/generate-iso/ostree/repo
total 16
-rw-r--r--. 1 root root 38 Feb 4 14:10 config
drwxr-xr-x. 2 root root 6 Feb 4 14:10 extensions
drwxr-xr-x. 258 root root 8192 Feb 4 14:10 objects
drwxr-xr-x. 5 root root 49 Feb 4 14:10 refs
drwxr-xr-x. 2 root root 6 Feb 4 14:10 state
drwxr-xr-x. 3 root root 19 Feb 4 14:10 tmp
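
If the ostree command line tools are installed on the builder host, we can also confirm that the repo contains the rhel/8/x86_64/edge ref our kickstart will point ostreesetup at later:

$ sudo ostree refs --repo ~/generate-iso/ostree/repo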


Another artifact we need to create is a new isolinux.cfg file in the generate-iso directory. This file references our custom ks.cfg kickstart and relabels the boot menu entries for zero touch provisioning.

$ cat << EOF > ~/generate-iso/isolinux.cfg 
default vesamenu.c32
timeout 600

display boot.msg

# Clear the screen when exiting the menu, instead of leaving the menu displayed.
# For vesamenu, this means the graphical background is still displayed without
# the menu itself for as long as the screen remains in graphics mode.
menu clear
menu background splash.png
menu title Red Hat Enterprise Linux 8.7
menu vshift 8
menu rows 18
menu margin 8
#menu hidden
menu helpmsgrow 15
menu tabmsgrow 13

# Border Area
menu color border * #00000000 #00000000 none

# Selected item
menu color sel 0 #ffffffff #00000000 none

# Title bar
menu color title 0 #ff7ba3d0 #00000000 none

# Press [Tab] message
menu color tabmsg 0 #ff3a6496 #00000000 none

# Unselected menu item
menu color unsel 0 #84b8ffff #00000000 none

# Selected hotkey
menu color hotsel 0 #84b8ffff #00000000 none

# Unselected hotkey
menu color hotkey 0 #ffffffff #00000000 none

# Help text
menu color help 0 #ffffffff #00000000 none

# A scrollbar of some type? Not sure.
menu color scrollbar 0 #ffffffff #ff355594 none

# Timeout msg
menu color timeout 0 #ffffffff #00000000 none
menu color timeout_msg 0 #ffffffff #00000000 none

# Command prompt text
menu color cmdmark 0 #84b8ffff #00000000 none
menu color cmdline 0 #ffffffff #00000000 none

# Do not display the actual menu unless the user presses a key. All that is displayed is a timeout message.

menu tabmsg Press Tab for full configuration options on menu items.

menu separator # insert an empty line
menu separator # insert an empty line

label install
menu label ^Zero Touch Provision Red Hat Device Edge
kernel vmlinuz
append initrd=initrd.img inst.stage2=hd:LABEL=RHEL-8-7-0-BaseOS-x86_64 inst.ks=hd:LABEL=RHEL-8-7-0-BaseOS-x86_64:/ks.cfg quiet

label check
menu label Test this ^media & Zero Touch Provision Red Hat Device Edge
menu default
kernel vmlinuz
append initrd=initrd.img inst.stage2=hd:LABEL=RHEL-8-7-0-BaseOS-x86_64 rd.live.check inst.ks=hd:LABEL=RHEL-8-7-0-BaseOS-x86_64:/ks.cfg quiet

menu separator # insert an empty line

# utilities submenu
menu begin ^Troubleshooting
menu title Troubleshooting

label vesa
menu indent count 5
menu label Install Red Hat Enterprise Linux 8.7 in ^basic graphics mode
text help
Try this option out if you're having trouble installing
Red Hat Enterprise Linux 8.7.
endtext
kernel vmlinuz
append initrd=initrd.img inst.stage2=hd:LABEL=RHEL-8-7-0-BaseOS-x86_64 nomodeset quiet

label rescue
menu indent count 5
menu label ^Rescue a Red Hat Enterprise Linux system
text help
If the system will not boot, this lets you access files
and edit config files to try to get it booting again.
endtext
kernel vmlinuz
append initrd=initrd.img inst.stage2=hd:LABEL=RHEL-8-7-0-BaseOS-x86_64 inst.rescue quiet

label memtest
menu label Run a ^memory test
text help
If your system is having issues, a problem with your
system's memory may be the cause. Use this utility to
see if the memory is working correctly.
endtext
kernel memtest

menu separator # insert an empty line

label local
menu label Boot from ^local drive
localboot 0xffff

menu separator # insert an empty line
menu separator # insert an empty line

label returntomain
menu label Return to ^main menu
menu exit

menu end
EOF


We also need to create a grub.cfg with the same modifications we made to isolinux.cfg and place it into the generate-iso directory.

$ cat << EOF > ~/generate-iso/grub.cfg
set default="1"

function load_video {
insmod efi_gop
insmod efi_uga
insmod video_bochs
insmod video_cirrus
insmod all_video
}

load_video
set gfxpayload=keep
insmod gzio
insmod part_gpt
insmod ext2

set timeout=10
### END /etc/grub.d/00_header ###

search --no-floppy --set=root -l 'RHEL-8-7-0-BaseOS-x86_64'

### BEGIN /etc/grub.d/10_linux ###
menuentry 'Zero Touch Provision Red Hat Device Edge' --class fedora --class gnu-linux --class gnu --class os {
linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-8-7-0-BaseOS-x86_64 inst.ks=hd:LABEL=RHEL-8-7-0-BaseOS-x86_64:/ks.cfg quiet
initrdefi /images/pxeboot/initrd.img
}
menuentry 'Test this media & Zero Touch Provision Red Hat Device Edge' --class fedora --class gnu-linux --class gnu --class os {
linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-8-7-0-BaseOS-x86_64 rd.live.check inst.ks=hd:LABEL=RHEL-8-7-0-BaseOS-x86_64:/ks.cfg quiet
initrdefi /images/pxeboot/initrd.img
}
submenu 'Troubleshooting -->' {
menuentry 'Install Red Hat Enterprise Linux 8.7 in basic graphics mode' --class fedora --class gnu-linux --class gnu --class os {
linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-8-7-0-BaseOS-x86_64 nomodeset quiet
initrdefi /images/pxeboot/initrd.img
}
menuentry 'Rescue a Red Hat Enterprise Linux system' --class fedora --class gnu-linux --class gnu --class os {
linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-8-7-0-BaseOS-x86_64 inst.rescue quiet
initrdefi /images/pxeboot/initrd.img
}
}
EOF


For our zero touch provisioning workflow we also need a kickstart file to automate the installation process. The kickstart below is a straightforward example, however I want to point out a few things of interest:

  • We are defining ostreesetup to consume the ostree content that will be embedded into the ISO image we create.
  • We are updating /etc/ostree/remotes.d/edge.conf to point to a remote location for ostree updates.
  • We are enabling the MicroShift firewall rules needed for access.
  • We need to define a pull-secret so we can pull down the additional images when MicroShift starts.
  • We are setting the volume group name for our partitions to rhel, which is also the default that LVMS will use in MicroShift.

Note that the heredoc delimiter below is quoted ('EOF') so the $ characters in the encrypted password hashes are written out literally rather than being expanded by the shell.

$ cat << 'EOF' > ~/generate-iso/ks.cfg
network --bootproto=dhcp --device=enp1s0 --onboot=off --ipv6=auto
network --bootproto=dhcp --device=enp2s0 --onboot=on --ipv6=auto --activate
keyboard --xlayouts='us'
lang en_US.UTF-8
timezone America/Chicago --isUtc
zerombr
ignoredisk --only-use=sda
clearpart --all --initlabel
part /boot/efi --size 512 --asprimary --fstype=vfat --ondrive=sda
part /boot --size 1024 --asprimary --fstype=xfs --ondrive=sda
part pv.2 --size 1 --grow --fstype=xfs --ondrive=sda
volgroup rhel --pesize=32768 pv.2
logvol / --fstype xfs --vgname rhel --size=98304 --name=root
reboot
text
rootpw --iscrypted $6$JDSTC2MKBr35O1SM$ZKHoLV29XAoOITHKj00HnKSJ9AUFDgCeM9UY4907dBsy9ICZNTYjsYLf/VjGmOvE422gBwQZwevN/1vB6mBSl1
user --groups=wheel --name=bschmaus --password=$6$taTBd76gNB99NquE$RON0n03WXShKLYc1eLIYZUuWln0H9Q0MadBkEDtCKFGs.gA8SnimPwK03YkkNMYDXVJOL.7jIfUd7Mg6mtyaD0 --iscrypted --gecos="bschmaus"
services --enabled=ostree-remount
ostreesetup --nogpg --url=file:///run/install/repo/ostree/repo --osname=rhel --ref=rhel/8/x86_64/edge

%post --log=/var/log/anaconda/post-install.log --erroronfail

echo -e 'bschmaus\tALL=(ALL)\tNOPASSWD: ALL' >> /etc/sudoers

echo -e 'url=http://remoteupdates.schmaustech.com:8000/repo' >> /etc/ostree/remotes.d/edge.conf

mkdir -p /etc/crio
cat > /etc/crio/openshift-pull-secret << PULLSECRETEOF
***PUT YOUR PULL-SECRET HERE***
PULLSECRETEOF
chmod 600 /etc/crio/openshift-pull-secret

firewall-offline-cmd --zone=trusted --add-source=10.42.0.0/16
firewall-offline-cmd --zone=trusted --add-source=169.254.169.1
firewall-offline-cmd --zone=public --add-port=6443/tcp
firewall-offline-cmd --zone=public --add-port=80/tcp
firewall-offline-cmd --zone=public --add-port=443/tcp

%end
EOF
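
Before baking the kickstart into the ISO it does not hurt to run a quick syntax check against it. If the pykickstart package is installed (it provides ksvalidator), and after substituting a real pull-secret, something like this will catch obvious mistakes:

$ ksvalidator ~/generate-iso/ks.cfg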


Next we need to pull in a Red Hat Enterprise Linux 8.7 boot ISO from Red Hat. I am pulling my ISO from a location within my lab.

$ scp root@192.168.0.22:/var/lib/libvirt/images/rhel-8.7-x86_64-boot.iso ~/generate-iso
root@192.168.0.22's password:
rhel-8.7-x86_64-boot.iso 100% 861MB 397.8MB/s 00:02


Finally we need to create the recook script. This script does the dirty work of creating our zero touch provisioning ISO and packing it with our kickstart and the Red Hat Device Edge image we composed. Note that the variables in the script have been escaped so it can be copied from the blog into a file without the variables being interpreted.

$ cat << EOF > ~/generate-iso/recook.sh
#!/bin/bash
# Ensure this script is run as root
if [ "\$EUID" != "0" ]; then
echo "Please run as root" >&2
exit 1
fi

# Set a few bash options
cd "\$(dirname "\$(realpath "\$0")")"
set -ex

# Create a temp dir
tmp=\$(mktemp -d)
mkdir "\$tmp/iso"

# Mount the boot iso into our temp dir
mount rhel-8.7-x86_64-boot.iso "\$tmp/iso"

# Create a directory for our new ISO
mkdir "\$tmp/new"

# Copy the contents of the boot ISO to our new directory
cp -a "\$tmp/iso/" "\$tmp/new/"

# Unmount the boot ISO
umount "\$tmp/iso"

# Copy our customized files into the new ISO directory
cp ks.cfg "\$tmp/new/iso/"
cp isolinux.cfg "\$tmp/new/iso/isolinux/"
cp grub.cfg "\$tmp/new/iso/EFI/BOOT/"
cp -r ostree "\$tmp/new/iso/"

# Push directory of new ISO for later commands
pushd "\$tmp/new/iso"

# Create our new ISO
mkisofs -o ../rhde-ztp.iso -b isolinux/isolinux.bin -J -R -l -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -eltorito-alt-boot -e images/efiboot.img -no-emul-boot -graft-points -joliet-long -V "RHEL-8-7-0-BaseOS-x86_64" .
isohybrid --uefi ../rhde-ztp.iso
implantisomd5 ../rhde-ztp.iso

# Return to previous directory
popd

# Cleanup and give user ownership of ISO
mv "\$tmp/new/rhde-ztp.iso" ./
rm -rf "\$tmp"
chown \$(stat -c '%U:%G' .) ./rhde-ztp.iso
EOF

$ chmod 755 ~/generate-iso/recook.sh
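
Because the heredoc above relies on escaped variables, a quick non-executing syntax check is a cheap way to confirm nothing was mangled while copying the script:

$ bash -n ~/generate-iso/recook.sh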


Let's now confirm that our directory structure looks correct. We should have our configuration files, the recook.sh script, the ostree directory with the image contents, and the Red Hat Enterprise Linux 8.7 boot ISO.

$ cd ~/generate-iso
$ ls -l
total 881676
-rw-rw-r--. 1 bschmaus bschmaus 3257 Feb 4 15:17 isolinux.cfg
-rw-rw-r--. 1 bschmaus bschmaus 3673 Feb 4 15:21 ks.cfg
-rw-rw-r--. 1 bschmaus bschmaus 1281 Feb 4 15:27 recook.sh
drwxrwxr-x. 3 bschmaus bschmaus 55 Feb 4 15:14 ostree
-rw-r--r--. 1 bschmaus bschmaus 902823936 Feb 4 15:24 rhel-8.7-x86_64-boot.iso


At this point, if the directory structure layout looks good, we should be able to generate our zero touch Red Hat Device Edge ISO using the recook script we created above.

$ sudo ./recook.sh 
++ mktemp -d
+ tmp=/tmp/tmp.RqMxf2Zz0S
+ mkdir /tmp/tmp.RqMxf2Zz0S/iso
+ mount rhel-8.7-x86_64-boot.iso /tmp/tmp.RqMxf2Zz0S/iso
mount: /tmp/tmp.RqMxf2Zz0S/iso: WARNING: device write-protected, mounted read-only.
+ mkdir /tmp/tmp.RqMxf2Zz0S/new
+ cp -a /tmp/tmp.RqMxf2Zz0S/iso/ /tmp/tmp.RqMxf2Zz0S/new/
+ umount /tmp/tmp.RqMxf2Zz0S/iso
+ cp ks.cfg /tmp/tmp.RqMxf2Zz0S/new/iso/
+ cp isolinux.cfg /tmp/tmp.RqMxf2Zz0S/new/iso/isolinux/
+ cp grub.cfg /tmp/tmp.RqMxf2Zz0S/new/iso/EFI/BOOT/
+ cp -r ostree /tmp/tmp.RqMxf2Zz0S/new/iso/
+ pushd /tmp/tmp.RqMxf2Zz0S/new/iso
/tmp/tmp.RqMxf2Zz0S/new/iso /home/bschmaus/generate-iso
+ mkisofs -o ../rhde-ztp.iso -b isolinux/isolinux.bin -J -R -l -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -eltorito-alt-boot -e images/efiboot.img -no-emul-boot -graft-points -joliet-long -V RHEL-8-7-0-BaseOS-x86_64 .
I: -input-charset not specified, using utf-8 (detected in locale settings)
Size of boot image is 4 sectors -> No emulation
Size of boot image is 19468 sectors -> No emulation
1.00% done, estimate finish Sun Feb 5 10:32:14 2023
1.50% done, estimate finish Sun Feb 5 10:32:14 2023
2.00% done, estimate finish Sun Feb 5 10:32:14 2023
(...)
98.83% done, estimate finish Sun Feb 5 10:32:15 2023
99.33% done, estimate finish Sun Feb 5 10:32:15 2023
99.83% done, estimate finish Sun Feb 5 10:32:15 2023
Total translation table size: 2048
Total rockridge attributes bytes: 4877712
Total directory bytes: 8169472
Path table size(bytes): 2848
Max brk space used 22ae000
1001693 extents written (1956 MB)
+ isohybrid --uefi ../rhde-ztp.iso
isohybrid: Warning: more than 1024 cylinders: 1957
isohybrid: Not all BIOSes will be able to boot this device
+ implantisomd5 ../rhde-ztp.iso
Inserting md5sum into iso image...
md5 = 86f9b86c942baaec504cd90e76a71878
Inserting fragment md5sums into iso image...
fragmd5 = 8521e8de2e14ea0ac55d850c55f4ab29283a8c524b661da17c2ba1ddf1d3
frags = 20
Setting supported flag to 0
+ popd
/home/bschmaus/generate-iso
+ mv /tmp/tmp.RqMxf2Zz0S/new/rhde-ztp.iso ./
+ rm -rf /tmp/tmp.RqMxf2Zz0S
++ stat -c %U:%G .
+ chown bschmaus:bschmaus ./rhde-ztp.iso


Once the script completes we should have a rhde-ztp.iso in our directory.

$ ls -l rhde-ztp.iso 
-rw-r--r--. 1 bschmaus bschmaus 2052063232 Feb 5 10:32 rhde-ztp.iso
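
Because the recook script ran implantisomd5 against the image, we can verify the media checksum straight from the builder host before ever booting it. checkisomd5 comes from the isomd5sum package we installed earlier:

$ checkisomd5 rhde-ztp.iso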


Boot Zero Touch Provisioning ISO for Red Hat Device Edge with MicroShift

Take the ISO and either write it onto a USB drive or copy it to a hypervisor where a virtual machine can consume it. I am doing the latter for this demonstration. The video below shows the automated kickstart process and, finally, MicroShift running.
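
If you go the USB route, something like dd can be used to write the image to the drive. The device name below (/dev/sdX) is just a placeholder; make sure it points at your USB stick and not a system disk:

$ sudo dd if=rhde-ztp.iso of=/dev/sdX bs=4M status=progress oflag=sync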

[Video: Automated kickstart installation from the zero touch provisioning ISO, ending with MicroShift running]

Now that we have kickstarted the edge device host, we should be able to log in to the host and confirm MicroShift is fully operational.

$ ssh bschmaus@192.168.0.118
The authenticity of host '192.168.0.118 (192.168.0.118)' can't be established.
ECDSA key fingerprint is SHA256:VeFkIOJUFZ68PaeJ6Qwhu+8YjV3VuzWGpOgIs3BFPmc.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.0.118' (ECDSA) to the list of known hosts.
bschmaus@192.168.0.118's password:
Script '01_update_platforms_check.sh' FAILURE (exit code '1'). Continuing...
Boot Status is GREEN - Health Check SUCCESS

$ mkdir -p ~/.kube/

$ sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig > ~/.kube/config

$ oc get all -A
NAMESPACE NAME READY STATUS RESTARTS AGE
openshift-dns pod/dns-default-pcjft 2/2 Running 0 6m26s
openshift-dns pod/node-resolver-nmr8d 1/1 Running 0 7m48s
openshift-ingress pod/router-default-d6cc9845f-8njpd 1/1 Running 0 7m42s
openshift-ovn-kubernetes pod/ovnkube-master-8prkg 4/4 Running 0 7m48s
openshift-ovn-kubernetes pod/ovnkube-node-7mw2n 1/1 Running 0 7m48s
openshift-service-ca pod/service-ca-5d96446959-zfmdw 1/1 Running 0 7m42s
openshift-storage pod/topolvm-controller-677cd8d9df-jjdtb 4/4 Running 0 7m48s
openshift-storage pod/topolvm-node-fmwrn 4/4 Running 0 6m26s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 8m13s
openshift-dns service/dns-default ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9154/TCP 7m48s
openshift-ingress service/router-internal-default ClusterIP 10.43.45.145 <none> 80/TCP,443/TCP,1936/TCP 7m48s

NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
openshift-dns daemonset.apps/dns-default 1 1 1 1 1 kubernetes.io/os=linux 7m48s
openshift-dns daemonset.apps/node-resolver 1 1 1 1 1 kubernetes.io/os=linux 7m48s
openshift-ovn-kubernetes daemonset.apps/ovnkube-master 1 1 1 1 1 kubernetes.io/os=linux 7m48s
openshift-ovn-kubernetes daemonset.apps/ovnkube-node 1 1 1 1 1 kubernetes.io/os=linux 7m48s
openshift-storage daemonset.apps/topolvm-node 1 1 1 1 1 <none> 7m48s

NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
openshift-ingress deployment.apps/router-default 1/1 1 1 7m48s
openshift-service-ca deployment.apps/service-ca 1/1 1 1 7m48s
openshift-storage deployment.apps/topolvm-controller 1/1 1 1 7m48s

NAMESPACE NAME DESIRED CURRENT READY AGE
openshift-ingress replicaset.apps/router-default-d6cc9845f 1 1 1 7m48s
openshift-service-ca replicaset.apps/service-ca-5d96446959 1 1 1 7m48s
openshift-storage replicaset.apps/topolvm-controller-677cd8d9df 1 1 1 7m48s


We have confirmed that MicroShift is fully functional and ready to deploy workloads. Hopefully this blog provides an idea of what the workflow looks like with Red Hat Device Edge and gives a basic overview of the concepts required to go from an idea of what we want to run on an edge device to actually deploying that edge device. It should come as no surprise that all of this can be automated using Ansible Automation Platform and the Red Hat Device Edge collections, but I will save that for another day.

Reference Links: