Why OpenShift on OpenStack

Nowadays, almost every organization is moving toward containers because of the benefits they provide: lower resource usage, portability, efficiency, better application development, and easier scaling. But container technology needs an orchestration framework to manage the containers. This is where Kubernetes, or Red Hat OpenShift (a platform built on top of Kubernetes with many add-on features), comes into the picture. Container orchestrators in turn need a highly scalable, automated infrastructure as a service (IaaS) underneath them. Most people choose a public cloud for this and end up spending a lot of money.

While container technology is the current trend, if you spin the clock back a little, OpenStack was the go-to option for private cloud management. Organizations setting up on-premises infrastructure chose OpenStack as their infrastructure management software. Even as we move to container technology today, we still need IaaS software to manage the bare-metal nodes, and OpenStack remains one of the best choices for private cloud management.

This is how OpenShift on OpenStack came into the picture. It offers many advantages: an on-premises container cloud, cost effectiveness, a completely open source datacenter, and architectures that complement each other so that OpenStack and OpenShift work very well together.

Types of OpenShift Installers

The rest of this article shows how to deploy OpenShift on OpenStack and which plug-ins and options need to be enabled on OpenStack to host OpenShift on top of it.

In OpenShift, two types of installers are available:

  1. Installer-provisioned infrastructure (IPI)
  2. User-provisioned infrastructure (UPI)

The main difference between these two installers is that IPI creates and configures all the underlying infrastructure needed for OpenShift, while UPI requires you to create those resources manually.

For example, IPI creates the required VMs, networks, and load balancers all by itself and deploys OpenShift in a documented way. It does not offer many configuration options, but it keeps the deployment simple. UPI is the choice only when the default deployment configuration does not meet your needs or when the infrastructure does not support creation of the underlying resources by the installer; an example would be bare-metal nodes with no way to create load balancers.

To begin, we will discuss the IPI method, as it is easy to operate and OpenStack supports creation of all the underlying dependencies by the installer.

Configuration Before Installing OpenStack

Before getting into the OpenShift Container Platform (OCP) deployment, let's take a glance at the plug-ins and services that need to be enabled on OpenStack to host OCP on it.

OpenShift API services use a load balancer to route incoming application requests, so the Octavia service, which provides Load Balancer as a Service (LBaaS), needs to be enabled on the OpenStack Platform (OSP).

Octavia can be enabled by passing the https://github.com/openstack/tripleo-heat-templates/blob/master/environments/services/octavia.yaml template while deploying with TripleO (or OSP director).
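
As a rough sketch of how that looks with OSP director, the Octavia environment file is referenced at deploy time. The path below assumes the default tripleo-heat-templates location on the undercloud, and the other environment files depend entirely on your deployment:

#!/bin/bash
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/octavia.yaml \
  -e <your_other_environment_files>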

Configuration After Installing OpenStack

Now OpenStack is deployed with all the necessary services, and the stack is ready for you to create the OpenStack resources needed for the OpenShift deployment.

OpenShift creates different OpenStack resources based on the configuration. If OpenStack has quota limitations for the given tenant, resource creation will fail and lead to OpenShift deployment failure. So it is good to set the quotas to unlimited for learning and testing. If you want to set the exact amount of resources required, refer to the installation guide. The command below sets the resource quotas to unlimited:

#!/bin/bash
openstack quota set --properties -1 --server-groups -1 --ram -1 \
  --key-pairs -1 --instances -1 --cores -1 \
  --per-volume-gigabytes -1 --gigabytes -1 --backup-gigabytes -1 \
  --snapshots -1 --volumes -1 --backups -1 \
  --subnetpools -1 --ports -1 --subnets -1 --networks -1 \
  --floating-ips -1 --secgroup-rules -1 --secgroups -1 \
  --routers -1 --rbac-policies -1 <overcloud_tenant_id>

The OpenShift installer will create VMs to host the OpenShift master and worker nodes. The flavors required for these VMs need to be created before kick-starting the installer. Master and worker nodes can use the same or different flavors. To create a flavor, use:

#!/bin/bash
openstack flavor create --ram <1024> --disk <80> --vcpus <4> --public <master_flavor>

To host the API and apps (Ingress traffic) endpoints on OpenShift, you need to create a unique floating IP for each. OpenStack networks and subnets are required before a floating IP can be created. Below are the commands to create them.

Create a flat or VLAN network based on your deployment:

#!/bin/bash
openstack network create --external --provider-network-type <flat> --provider-physical-network <datacentre> <my_network>

To create a subnet on the network:

#!/bin/bash
openstack subnet create --ip-version 4 --gateway <gateway_addr> --allocation-pool start=<>,end=<> --no-dhcp --subnet-range <range-in-cidr> --network <my_network> <my_subnet>

To create a floating IP:

#!/bin/bash
openstack floating ip create --floating-ip-address <ip> <my_network>

Note: If public IPs are configured in the DNS server, those can be provided while creating the floating IPs. Otherwise, the generated IPs need to be added to `/etc/hosts` so that the endpoints are accessible from the local network.
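
As a rough sketch, assuming a cluster named ostest under the base domain example.com (both placeholders), the `/etc/hosts` entries would look something like this; `/etc/hosts` has no wildcard support, so the apps hostnames are listed individually:

#!/bin/bash
sudo tee -a /etc/hosts <<EOF
<api_floating_ip> api.ostest.example.com
<apps_floating_ip> console-openshift-console.apps.ostest.example.com
<apps_floating_ip> oauth-openshift.apps.ostest.example.com
EOF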

Now the OpenStack deployment is ready to host OpenShift.

Deploy OpenShift 

Below is the link to download the OpenShift installer and client:

https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp

From that endpoint, you can pick the required version of OpenShift for the target OS. Installers and clients are available for Linux, Mac, and Windows.

To run the installer, the required install configuration data can be provided either at the installer's interactive prompts or in an `install-config.yaml` file. With the latter, the installer fetches all the details from `install-config.yaml` and runs in non-interactive mode.
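
For example, once the installer binary (downloaded below) is in place, the interactive route generates the file for you; the ostest directory name simply matches the install directory used later in this article:

#!/bin/bash
./openshift-install create install-config --dir=ostest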

The installer needs a `pull-secret`, which is unique to each registered user and can be downloaded from the link below:

https://cloud.redhat.com/openshift/install/openstack/installer-provisioned

Apart from the pull secret, `install-config.yaml` needs some data such as the SSH public key, the Ingress and API endpoint IPs, and master and worker node details like node counts, flavors, the external network, and region details. `install-config.yaml` has many more configuration options to customize the OCP cluster; refer to the OpenShift installation documentation for more information.
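
To give a rough idea of the shape of the file, below is a minimal sketch of an `install-config.yaml` for an OpenStack IPI install, written out with a heredoc. The cluster name, domain, flavor, network, and IP values are placeholders, and the exact field names under `platform.openstack` (for example, the key for the API floating IP) vary between OpenShift releases, so check the installation documentation for your version:

#!/bin/bash
cat > ~/install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: example.com
metadata:
  name: ostest
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 3
platform:
  openstack:
    cloud: mycloud                  # entry from your clouds.yaml
    computeFlavor: <master_flavor>
    externalNetwork: <my_network>
    lbFloatingIP: <api_floating_ip>
    ingressFloatingIP: <apps_floating_ip>
    externalDNS:
      - <dns_server_ip>
pullSecret: '<pull_secret>'
sshKey: '<ssh_public_key>'
EOF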

Below are the step-by-step instructions to deploy OCP on OSP.

Create the installation directory:

#!/bin/bash
mkdir ostest

Download and untar the installer:

#!/bin/bash
wget https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/4.7.12/openshift-install-linux.tar.gz
tar -xvzf openshift-install-linux.tar.gz

Copy the install-config.yaml:

#!/bin/bash
cp ~/install-config.yaml ~/ostest/.

Run the installer:

#!/bin/bash
./openshift-install create cluster --dir ~/ostest

The installer will run for around 45 minutes to complete the installation.
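
If you want to follow the progress, the installer writes a detailed log into the install directory; assuming the ~/ostest directory used above:

#!/bin/bash
tail -f ~/ostest/.openshift_install.log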

Set Up the OpenShift Client

Download the OCP client:

#!/bin/bash
wget https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/4.7.12/openshift-client-linux.tar.gz
tar -xvzf openshift-client-linux.tar.gz

Copy the `oc` binary to the bin directory so it can be run from any path:

#!/bin/bash
cp oc /usr/local/bin/oc
chmod +x /usr/local/bin/oc

The installer-generated kubeconfig file is available in the install directory you created. It can be copied to the default location, or the path to the file can be exported, to access the OCP cluster:

#!/bin/bash
mkdir -p ~/.kube
cp ~/ostest/auth/kubeconfig ~/.kube/config
# Alternatively, export the path instead of copying the file:
# export KUBECONFIG=~/ostest/auth/kubeconfig

Now the client is ready to access the cluster, which can be validated by running the `oc` command in the terminal.

Here is the sample output of the `oc` command to list all the nodes:

#!/bin/bash
$ oc get nodes
NAME                           STATUS   ROLES      AGE     VERSION
vlan608-w6g88-master-0         Ready    master     7d23h   v1.20.0+87cc9a4
vlan608-w6g88-master-1         Ready    master     7d23h   v1.20.0+87cc9a4
vlan608-w6g88-master-2         Ready    master     7d23h   v1.20.0+87cc9a4
vlan608-w6g88-worker-0-28bts   Ready    worker     7d22h   v1.20.0+87cc9a4
vlan608-w6g88-worker-0-4s26c   Ready    worker     7d22h   v1.20.0+87cc9a4
vlan608-w6g88-worker-0-s7g2q   Ready    worker     7d22h   v1.20.0+87cc9a4

About the author

Masco Kaliyamoorthy has been at Red Hat since 2016 and is a part of the OpenStack performance and scale teams. Prior to his current role, he was an upstream contributor to various OpenStack projects and a core member of Skydive, a network analysis and debugging tool. Before Red Hat, Kaliyamoorthy worked at Cisco on digital TV and STB middleware development.
