The journey to get a network function onto Red Hat OpenShift can start by building its container images from source using Red Hat OpenShift Pipelines and Red Hat Universal Base Image (UBI). There are two approaches that can be followed, and they lead to different outcomes:

  1. Train developers to create containers on Kubernetes: Long learning curve with a small number of developers who will become experts. Prone to delaying delivery of the CNF-based solution.
  2. Make containers and Kubernetes “invisible” to the developers: Short learning curve. Expert developers can develop directly on OpenShift. Containers can be built on code save (before a commit) for an immediate feedback loop.

As development teams mature, the typical path is to start with option one and eventually reach option two. But this journey must be driven by well-defined KPIs that measure the velocity of application rollout and developer efficiency at producing new features.

In this blog series, we are sharing the journey using Intel’s Access Network Dataplanes as an example.

Red Hat Universal Base Image

UBI is designed to be a foundation for cloud-native and web application use cases developed in containers. All UBI content is a subset of RHEL: the packages in UBI come from RHEL channels and are supported like RHEL when run on a Red Hat-supported platform such as OpenShift or RHEL.

Do note that UBI images are freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products.

By using UBI, you or your users could decide at any time to become Red Hat customers, which entitles you to Red Hat's support services without rebuilding your container images or redeploying your applications to become RHEL-based.

With Red Hat Universal Base Image (UBI), you can take advantage of the greater reliability, security, and performance of official Red Hat container images anywhere OCI-compliant Linux containers run.

There are three base images from which you can choose:

  • Minimal: designed for applications that contain their own dependencies
  • Platform: for applications that run on RHEL
  • Multiservice: eases running multiple services in a single container

In our case, we will use the Platform image, which includes useful basic OS tools (tar, gzip, vi, and others) along with the full YUM stack, making it more suitable for general applications. It is also referred to as the UBI standard image.

By default, the UBI image comes with preconfigured UBI repositories from which you can pull dependencies, but by design, the available packages are fairly limited. If you are running UBI on top of a subscribed RHEL operating system, you can pull packages from other repositories.

If you are building a container using UBI on top of a non-RHEL operating system, you will not be able to pull any RHEL dependencies, as the underlying package manager cannot validate whether the host has a valid subscription. In that case, the approach is to pull packages from the CentOS or Fedora mirrors. RPM packages installed from outside the official Red Hat repositories are not supported.

In order to pull packages correctly in this situation, you have to pass the following option to the package manager: --disableplugin=subscription-manager

One of the goals when building a container image is to ensure its resulting size is as small as possible for multiple reasons:

  • Reduce the attack surface
  • Push and pull it quickly
  • Distribute it across container registries

As such, one of the options to pass to the dnf install command is: --setopt=tsflags=nodocs

This downloads packages without their documentation, which is not needed in an immutable runtime environment.
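As a sketch, a UBI-based Containerfile layer might combine both options (the package names here are illustrative, not the actual vCMTS dependencies):

```dockerfile
FROM registry.access.redhat.com/ubi8/ubi

# Install without documentation and without the subscription-manager
# plugin, then clean the dnf cache in the same layer to keep the
# resulting image as small as possible.
RUN dnf install -y \
        --disableplugin=subscription-manager \
        --setopt=tsflags=nodocs \
        tar gzip \
    && dnf clean all
```

Running dnf clean all in the same RUN instruction matters: each instruction creates a layer, so cache files deleted in a later layer would still contribute to the image size.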

Another option is to use a build container and copy the resulting binaries over the final container using a multistage build (we will detail our approach in the following section).

Self-contained build

Building a container image on your laptop is great, and probably where everything starts. But the environment used to build the image will contain dependencies you might not think about, so when a colleague reproduces that same build, it might not work. To avoid this, you should strive for a self-contained build.

We briefly alluded to multistage builds in the previous section, and they are really the key to self-contained builds. A multistage build allows you to express different stages in your build: for instance, the first stage could pull build dependencies and build your software, and the second stage could pull runtime dependencies and copy in the binaries produced in the previous stage.

Not only does it allow you to reduce the final image size, but it also allows you to create a builder stage containing all the dependencies required to build, making the build easily reproducible.

To achieve this, one of the UBI-based images Red Hat provides in its catalog is buildah. Buildah is a tool for building Open Container Initiative (OCI)-compliant container images. It can serve as the base environment for a build, from which you can trigger another build.
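As a sketch, a multistage Containerfile might look like the following (the application name, build command, and paths are illustrative, not taken from the actual vCMTS build):

```dockerfile
# Stage 1: builder image with compilers and build-time dependencies.
FROM registry.access.redhat.com/ubi8/ubi AS builder
RUN dnf install -y --disableplugin=subscription-manager \
        --setopt=tsflags=nodocs gcc make \
    && dnf clean all
COPY . /src
WORKDIR /src
RUN make            # assumed to produce /src/bin/app

# Stage 2: slim runtime image; only the binaries are copied over,
# so build tools never end up in the final image.
FROM registry.access.redhat.com/ubi8/ubi-minimal
COPY --from=builder /src/bin/app /usr/local/bin/app
USER 1001
ENTRYPOINT ["/usr/local/bin/app"]
```

Because the first stage pins every build dependency inside the builder image, any machine that can run the build container produces the same result.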

OpenShift Pipelines

Built with reproducible builds in mind, OpenShift Pipelines is a cloud-native continuous integration and continuous delivery (CI/CD) solution based on Kubernetes resources. It uses Tekton building blocks to automate the build and deployment of applications across multiple platforms by abstracting away the underlying implementation details.

At a very high level, a pipeline is composed of tasks that can be executed in parallel and/or in sequence. To share data between Tasks, a Workspace can be created, which requires a persistent volume.

Looking at our vCMTS example, the tasks defined are as follows:

  1. Clone the code repository containing the build scripts and deployment manifests.
  2. Fetch the vCMTS archive containing the code.
  3. Start parallel tasks to build various vCMTS applications.
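The task sequence above can be sketched as a Tekton Pipeline. The git-clone task comes from the Tekton catalog; the other task names, the workspace name, and the two build tasks are illustrative placeholders:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: vcmts-build
spec:
  workspaces:
    - name: shared-data            # backed by a PersistentVolumeClaim
  tasks:
    - name: clone-repo             # 1. clone build scripts and manifests
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: shared-data
    - name: fetch-archive          # 2. fetch the vCMTS source archive
      runAfter: ["clone-repo"]
      taskRef:
        name: fetch-vcmts-archive
      workspaces:
        - name: source
          workspace: shared-data
    - name: build-app-a            # 3. build tasks run in parallel,
      runAfter: ["fetch-archive"]  #    since both depend only on
      taskRef:                     #    fetch-archive
        name: buildah-build
      workspaces:
        - name: source
          workspace: shared-data
    - name: build-app-b
      runAfter: ["fetch-archive"]
      taskRef:
        name: buildah-build
      workspaces:
        - name: source
          workspace: shared-data
```

Tasks that list the same runAfter dependency and nothing else are scheduled concurrently, which is how the parallel builds in step three are expressed.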

As you might have gathered, each build task is based on the buildah UBI image and performs the following steps:

  • Pulls required build time dependencies
  • Copies the code / artifacts required to perform the build
  • Builds and generates the application binaries
  • Builds and pushes the container image
    • Pulls required runtime dependencies
    • Copies application binaries
    • Sets user and work paths

Given that OpenShift has an internal registry, the build can push the resulting container image to that registry from within the OpenShift Pipeline, making the application immediately usable by the platform.

What’s next

We have reviewed the building blocks and the process to go from source to container image. Moving forward, we will explain how this build pipeline can be triggered from a “code save” action. And finally, we will demonstrate how to continuously deploy the applications to sanity-check and validate the new code.

