This post describes the proposed behaviors of a class of container I'll call "flexible containers." I'll cover a few aspects of the container image itself, but also the behavior of the running container. The flexible container concept focuses on building container images in such a way that customization and configuration of software components is enabled and well documented. A flexible container behaves the same way (and behaves well), no matter which of your teams contributed it to the company's internal ecosystem. Flexible containers are the building blocks of an efficient software supply chain: if DevOps teams provide them within the company's internal ecosystem, they enable customization, fast adoption, and reuse of software components throughout the company. For the purposes of this article we will look at one example: the Jenkins 2 container image provided by Red Hat. You may wonder what the aspects of a flexible container are. Build time? Extension? Runtime? Configuration? You'll see it's one image with many uses, and many images with many configurations.

Motivation: Why Think About Behaviors of Containers?

For most DevOps teams, containers are a means to increase productivity or to start enabling DevOps in general. Those familiar with containers already know how crucial it is to provide effective container images. In most companies, container images are brought to production (and modified) by a set of teams using a common platform. To bring even more peace in our time to Dev and Ops, let's have a look at some characteristics of software components that have always been a point of contention between Dev and Ops:

  • How to define a contract between Dev and Ops on application runtime configuration?
  • How to define a process and tool chain to customize existing application components?
  • How do teams benefit from these conventions?

There are four behaviors that define flexible containers. Let's start with an exploration of two aspects of a flexible container: adding runtime configuration and adding additional software components.

Runtime Configuration

Adding runtime configuration could be an easy job: we just bake the needed config files into a container image layer on top of the image containing the software. But what if we run hundreds of different configurations? Will we really build hundreds of container images? OpenShift provides two objects for supplying runtime configuration to containers: the ConfigMap and the Secret. To quote from the documentation, "The ConfigMap object provides mechanisms to inject containers with configuration data while keeping containers agnostic of OpenShift Container Platform." Configuration and secret information can be exposed inside a container as:

  • Environment variables containing configuration or secret information
  • Command-line arguments
  • Configuration files in a volume
  • File in a volume containing the secret
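As a minimal sketch of the first and third options, the following shows a hypothetical ConfigMap consumed both as an environment variable and as a file in a volume (all names, keys, and the image reference are made up for illustration):

```yaml
# Hypothetical ConfigMap holding configuration data
kind: ConfigMap
apiVersion: v1
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  app.properties: |
    feature.enabled=true
---
# Pod snippet consuming the ConfigMap as an env var and as files in a volume
kind: Pod
apiVersion: v1
metadata:
  name: app
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
    env:
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: LOG_LEVEL
    volumeMounts:
    - name: config
      mountPath: /etc/app   # app.properties appears here as a file
  volumes:
  - name: config
    configMap:
      name: app-config
```

A Secret is consumed the same way, with `secretKeyRef` and a `secret` volume in place of the ConfigMap references.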

This is the most flexible way to provide runtime configuration to software, assuming that the software we run is not itself subject to changes.

This leads us to behavior 1: A flexible container should use ConfigMaps and Secrets to provide runtime configuration to the software components running in containers. This is in line with the Twelve-Factor App paradigm of storing configuration in the environment. One problem you might face is providing different configurations to test, staging, or production environments. Baking the configuration into a container image may make transporting a software component from test to prod easier, as all config is included. But the explosion in the number of container images, and different images running in test and prod, are strong arguments against this pattern. What about something like a default configuration? It could be provided via a ConfigMap, but we could also put it as a file in a container image layer. Why? Read on!
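The per-environment pattern above can be sketched with the oc CLI; the project names, ConfigMap name, and directory layout here are hypothetical:

```shell
# One image, three environments: each project gets the same ConfigMap name,
# populated from a different directory, so the same deployment spec works everywhere.
oc create configmap app-config --from-file=config/test/  -n acme-test
oc create configmap app-config --from-file=config/stage/ -n acme-stage
oc create configmap app-config --from-file=config/prod/  -n acme-prod
```

Because the deployment only references the ConfigMap by name, promoting the identical image from test to prod requires no rebuild.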

Providing Sane Default Configuration and Additional Software Components

Providing additional software components and default configuration is another major focus area of flexible containers. How do our DevOps teams deliver the feature set required by their consumers? How do we build a container image that can be reused as often as possible? If we move from the environment layer of an application to the application definition layer, we see that some configuration or wiring is not part of the runtime environment of a container; it is more like "the customization or pre-configuration of an application." For this kind of pattern, I recommend using Source-to-Image builds to create a repeatable build that adds configuration to pre-existing software.

This brings us to behavior 2: A flexible container should be customized using Source-to-Image and an OpenShift based build tool chain.

Customization could be something like a default configuration of a Jenkins plug-in; this default might be specific to the teams using that Jenkins. Customization might also be adding some plug-ins to Jenkins.

Adding Low-Level Software Components

Customizing a component to the requirements of a new consumer will most often involve more than just putting configuration files in place. It may require adding software, for example, a JDBC driver used by a Jenkins plug-in. Most OpenShift platform operators disable the docker build strategy, as it runs in a privileged container and has access to the docker daemon. So there is no way to do a docker build and add that JDBC driver via yum install. Providing base images containing these low-level software components is not part of the flexible container use case; these components should be layered into the base image instead. It may be necessary to build these base images outside of OpenShift and its S2I builds.
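As a sketch of such a base-image build performed outside of OpenShift, the Dockerfile below layers a JDBC driver into a Jenkins base image; the package choice is an assumption for illustration:

```dockerfile
# Built outside the OpenShift cluster, e.g. with a plain `docker build`,
# because the privileged docker build strategy is disabled on the platform.
FROM openshift/jenkins-2-centos7

USER root
# Layer the low-level component (here: a JDBC driver) into the base image
RUN yum install -y postgresql-jdbc && yum clean all

# Drop back to the unprivileged runtime user
USER 1001
```

Teams then point their S2I builds at this enriched base image rather than replacing low-level components in the flexible container itself.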

Hence, behavior 3: A flexible container does not replace low-level software components, as it's not a base container image.


Initially I wrote about a contract between Dev and Ops and how to create a tool chain that everyone loves to work with. This contract needs documentation so that all parties can understand what to expect. Doing so requires behavior 4: A flexible container documents:

  • Environment variables required or used by the entry point
  • Command-line arguments it accepts
  • Where configuration files live
  • The artifacts used by the Source-to-Image assemble scripts

For a good example of documenting a flexible container, have a look at the GitHub repository of the Jenkins image used with OpenShift.


These statements about what we call a flexible container are not mandatory requirements; they are meant as good practices for designing containers and planning your application design. Using ConfigMaps and Secrets for runtime configuration, S2I builds for defaults, and, most importantly, very descriptive documentation are the recommendations for a flexible container. Work continues, as we need to get a better understanding of how to mix feature sets such as PostgreSQL, PostGIS, PostgreSQL+HA, and PostgreSQL+HA plus PostGIS.

Bonus Track - Let’s Walk Through an Example

Let’s assume our DevOps engineer Dan is providing a Jenkins instance to his team and he would like to use OpenShift for OAuth and add the Mattermost plug-in so that he can send out notifications from pipeline build steps. As the Red Hat Jenkins container image for OpenShift is a flexible container image, Dan will use:

  • Source-to-Image to add the plug-in
  • Source-to-Image to add default Mattermost configuration
  • ConfigMap to set environment variable to use OAuth

To easily follow along with this example, I have created a repository containing all the files you need.

Adding a Plug-in to Jenkins

To customize Red Hat's Jenkins container image, a Source-to-Image build is used. Its source is a relatively simple Git repository: it just needs to contain a file called plugins.txt, which must list one pluginId:pluginVersion entry per line. The BuildConfig itself is very simple:

kind: BuildConfig
apiVersion: v1
metadata:
  labels:
    build: jenkins-acme
  name: jenkins-acme
spec:
  nodeSelector: null
  output:
    to:
      kind: ImageStreamTag
      name: jenkins-acme:latest
  postCommit: {}
  resources: {}
  runPolicy: Serial
  source:
    git:
      uri: # the Git repository containing plugins.txt
    type: Git
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: jenkins:2
        namespace: openshift
    type: Source
  triggers:
  - type: ConfigChange
  - imageChange:
      lastTriggeredImageID: openshift/jenkins-2-centos7
    type: ImageChange

I used oc new-build openshift/jenkins~ --name=jenkins-acme to create it (the Git repository URL goes after the ~). So whenever the base container image provided by Red Hat changes, or the BuildConfig itself changes, our ACME Corp Jenkins flexible container image is rebuilt.
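For illustration, the plugins.txt for this build might look as follows; the version number is a hypothetical example, not a tested pin:

```
mattermost:2.6.2
```

Each line follows the pluginId:pluginVersion convention mentioned above, and the S2I build installs the listed plug-ins into the resulting image.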

Setting Defaults for a Plug-in

Now that we have a Jenkins including the Mattermost plug-in, we need to set a default configuration for it, so we will provide an XML configuration file and make it part of the ACME Corp Jenkins image. The configuration file can be placed in the Git repository, and the Jenkins Source-to-Image assemble script will pick it up and place it in ACME Corp's Jenkins image. Starting a new build will result in an ACME Jenkins with the Mattermost plug-in and a default configuration: oc start-build jenkins-acme
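As a sketch, such a default configuration file might look like the following; the XML element names depend on the Mattermost plug-in version, and the endpoint URL and room name are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical default Mattermost notifier settings baked into the image -->
<jenkins.plugins.mattermost.MattermostNotifier>
  <endpoint>https://mattermost.example.com/hooks/acme</endpoint>
  <room>#builds</room>
</jenkins.plugins.mattermost.MattermostNotifier>
```

Because the file lives in the same Git repository as plugins.txt, the default configuration is versioned and rebuilt together with the plug-in list.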

Setting Runtime Configuration

The final step is to set runtime configuration. I want to use OpenShift as an OAuth provider for Jenkins. To do so, we need to set the OPENSHIFT_ENABLE_OAUTH environment variable for each container running Jenkins. We will use a ConfigMap to do so. ConfigMaps can be created from files or directories; to do this quickly, I create one from a command-line argument: oc create configmap jenkins-config --from-literal=OAUTH=True. As always, see the OpenShift documentation for detailed instructions. Now that we've provided the runtime configuration, we need to tell each deployment of Jenkins to populate the environment variable with the value from the ConfigMap. This can be achieved by changing the container spec, a snippet of which follows:

        env:
        - name: OPENSHIFT_ENABLE_OAUTH
          valueFrom:
            configMapKeyRef:
              name: jenkins-config
              key: OAUTH

If you are using Jenkins from a template, make sure that:

  1. The ACME Corp Jenkins Flexible Container image is used.
  2. Environment variables are set from the ConfigMap.

Final Conclusion

Building, providing, and using flexible containers should be a goal for every DevOps-minded organization. The techniques applied are good practices within the container/OpenShift domain, and these good practices will enable more efficient product delivery. Have fun!


