Continuous Integration and Continuous Deployment have been hot topics in our industry for the last few years. From the initial release of OpenShift 3.0, we’ve included features that let you build automated workflows to consume changes and redeploy applications. For example, builds can run automatically when source code changes or when an underlying image changes. Deployments can be triggered when the image they are based on changes. Tests can be executed against newly built images before they are pushed to a registry. Together, these features enable relatively simple pipelines that take you from modified source code to a tested, redeployed application with confidence.

However, we recognize that the ecosystem of CI/CD tools is extremely broad and sophisticated. Rather than reinvent those capabilities in OpenShift, we’ve been working to integrate those tools.

Open APIs

The first step towards enabling OpenShift as a CI/CD platform was to provide a robust, open API. Any action that can be performed via the ‘oc’ CLI tooling or the web console can also be performed via a REST API with the appropriate authentication token; in fact, the CLI and web console are built on those same APIs. Existing CI/CD tools can therefore integrate with OpenShift by calling those APIs to trigger actions within the platform or to check the status of running components. For example, you can take an existing CI/CD workflow that builds your application, add calls to OpenShift that package the application into an image and deploy it, and then continue with the existing workflow steps to test the deployed application.
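
To make this concrete, here is a minimal sketch of driving the API from a shell. The project name (myproject) and master hostname (master.example.com) are placeholders; the token comes from the standard oc whoami -t command:

# Obtain a bearer token for the currently logged-in user
TOKEN=$(oc whoami -t)

# List the builds in the project via the REST API -- the same call the CLI makes
curl -k -H "Authorization: Bearer $TOKEN" \
  https://master.example.com:8443/oapi/v1/namespaces/myproject/builds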

Integrating Jenkins

Having open APIs also made it easy for us to provide first-class integration with Jenkins via the OpenShift Pipeline Jenkins plugin, which allows Jenkins jobs to include OpenShift actions as standard build steps. For example, a Jenkins job can now trigger an OpenShift build or application deployment, perform a scale up/down action, or tag images. This makes it easy to tie into an existing Jenkins-based workflow: code can be built by an existing Jenkins job, then deployed on OpenShift and tested using easily provisioned cloud resources.

The full set of possible build steps includes:

  • Trigger a build in OpenShift
  • Verify a build succeeded
  • Trigger a deployment
  • Scale a deployment up/down
  • Verify a deployment succeeded
  • Verify a service is accessible
  • Tag an image
  • Create any resource from YAML/JSON
  • Delete any resource

Many of these steps include further customization such as whether to wait for completion, whether to stream back logs from the action, and whether to verify success.

Figure 1: Configuring the “trigger a build” step

In addition to standard build steps, the plugin also offers SCM-style image polling, meaning you can configure a Jenkins job to be run whenever an ImageStream changes in OpenShift.

Finally, the plugin offers a few post-build actions for cleaning up:

  • Cancel a build in OpenShift
  • Cancel a deployment
  • Scale a deployment up/down

OpenShift Pipelines

To take the OpenShift CI/CD story to the next level, we wanted to integrate the ability to run truly complex workflows on the platform, while still avoiding reinventing the wheel. To that end, we looked at the traction the Jenkins Pipeline plugin has in the community and decided to leverage it as our pipeline execution engine. Of course, thanks to our existing integration points, using Jenkins Pipelines with OpenShift was something users could already do, but we wanted to create a seamless experience. This led to a number of enhancements:

  • A Domain Specific Language (DSL) for the OpenShift Jenkins plugin
  • A new ‘Pipeline’ BuildConfig strategy
  • Synchronization between Jenkins and OpenShift
  • New web console views
  • In-the-box Jenkins images
  • Jenkins auto-provisioning

Let’s go through how each of these pieces contributes to exposing the power of Jenkins Pipelines in OpenShift.

DSL for the OpenShift Jenkins Plugin

The Jenkinsfile used by Jenkins Pipelines is free-form Groovy, so it can call into any existing plugin’s Java code, but writing a workflow definition as raw Java invocations is tedious and error-prone. Many plugins offer a simpler DSL that can be used with the Pipeline plugin, and the OpenShift plugin is no exception. You can write simple statements like:

node('maven') {
  stage 'build'
  openshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true')
  stage 'deploy'
  openshiftDeploy(deploymentConfig: 'frontend')
}

This basic example starts a pipeline stage named “build”, triggers a build in OpenShift, waits for it to complete, then starts a stage named “deploy” and triggers a deployment (of the newly built image) in OpenShift. The full DSL is documented on the plugin page.
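
To give a flavor of the broader step set, the sketch below chains build, verification, deployment, and scaling steps. The step names follow the plugin’s DSL as listed above; the resource names (‘ruby-sample-build’, ‘frontend’) and replica counts are illustrative, and exact parameter spellings should be checked against the plugin page:

node('maven') {
  stage 'build'
  // Trigger the OpenShift build and stream its logs into the Jenkins console
  openshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true')
  // Confirm that the latest build of this BuildConfig succeeded
  openshiftVerifyBuild(buildConfig: 'ruby-sample-build')
  stage 'deploy'
  openshiftDeploy(deploymentConfig: 'frontend')
  // Check that the expected number of replicas is running
  openshiftVerifyDeployment(deploymentConfig: 'frontend', replicaCount: '1', verifyReplicaCount: 'true')
  stage 'scale'
  openshiftScale(deploymentConfig: 'frontend', replicaCount: '3')
}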

Pipeline BuildConfig Strategy

In order to represent pipelines within OpenShift, we created a new build configuration strategy type. The existing types (source, docker, custom) represent build actions that take place within the cluster. The new type, ‘pipeline’, represents a build in which the actions are executed by a Jenkins server. In addition, rather than providing typical source code inputs, this new type of build takes a Jenkinsfile definition as the primary input. Here is an example of a BuildConfig definition for this new strategy:

kind: BuildConfig
apiVersion: v1
metadata:
  name: sample-pipeline
  labels:
    name: sample-pipeline
spec:
  triggers:
  - type: GitHub
    github:
      secret: secret101
  - type: Generic
    generic:
      secret: secret101
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        node('maven') {
          stage 'build'
          openshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true')
          stage 'deploy'
          openshiftDeploy(deploymentConfig: 'frontend')
        }

Notice that other than the strategy section, the BuildConfig is defined in the same way as any other BuildConfig, including webhook or image change triggers.

In this particular example, the Jenkinsfile is defined inline with the BuildConfig definition. It is also possible to reference a git repository that contains a Jenkinsfile. In that case, the repository is referenced in the same manner as in any other BuildConfig, with a source reference; the Jenkinsfile is expected to reside in the root directory of the repository, or a contextDir must be specified.
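
As a hedged sketch, a git-backed pipeline BuildConfig might look like the following; the repository URI and contextDir are hypothetical:

kind: BuildConfig
apiVersion: v1
metadata:
  name: sample-pipeline-from-git
spec:
  source:
    type: Git
    git:
      uri: "https://github.com/example/pipeline-repo.git"  # hypothetical repository containing a Jenkinsfile
    contextDir: "ci"  # optional; omit if the Jenkinsfile sits at the repository root
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy: {}  # no inline jenkinsfile; it is read from the source repository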

Now that the pipeline is represented as a BuildConfig definition in OpenShift, it is possible to manage it like any other BuildConfig. That is, it can be started via “oc start-build”, its builds show up in the web console, and so on.
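
For instance, assuming the inline definition above is saved as sample-pipeline.yaml:

# Create the pipeline BuildConfig, kick off a pipeline build, and watch its status
oc create -f sample-pipeline.yaml
oc start-build sample-pipeline
oc get builds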

Synchronization between Jenkins and OpenShift

Since the build can be started and managed from OpenShift, you may be wondering how Jenkins knows when to execute the pipeline or how OpenShift knows the state of the pipeline execution. This is accomplished via an additional Jenkins plugin called the OpenShift Sync plugin. As the name implies, this plugin is responsible for synchronizing state between OpenShift and Jenkins.

When a Pipeline BuildConfig is created, the sync plugin will create a new job in Jenkins to represent the BuildConfig. When a build is started from the BuildConfig, the sync plugin will start the job in Jenkins to perform the execution. And as the job executes within Jenkins, the sync plugin will update the build object in OpenShift with annotation metadata about the state of the execution. This metadata enables the OpenShift web console to display the state of the execution including what pipeline stage is being executed, success or failure of the stage, and how long it has been running.
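
This synchronized state is visible on the build object itself. A quick way to inspect it (the build name here is illustrative):

# The annotations on a pipeline build carry the Jenkins stage and status metadata
oc get build sample-pipeline-1 -o yaml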

This plugin is included in the Jenkins image provided with OpenShift, but it can also be installed on existing Jenkins servers to enable integration with external Jenkins deployments.

Web Console Pipeline Visualization

Pipeline builds show up in the web console like any other build, but they are visualized differently. The stages of the pipeline are shown along with information about which stage is currently executing and the duration of each stage:

Figure 2: OpenShift Console Pipeline

The console also shows the pipeline definition itself when the Jenkinsfile is embedded in the BuildConfig. Finally, the view provides links to the Jenkins job instance that is executing the pipeline. These links lead to the Jenkins server, where the detailed logs and other information about the executing job are available.

New Jenkins Images and Plugins

The Jenkins image provided with OpenShift includes a rich set of plugins that enable the full pipeline flow. In addition to the aforementioned OpenShift plugin and the Sync plugin, the image also includes the Kubernetes plugin which enables Jenkins to launch slave executors as pods on a Kubernetes or OpenShift cluster.

The Kubernetes plugin is configured by defining pod templates, each describing the pod that will be created when a new slave is required. Of foremost importance in the pod specification is the image the pod should run. This image must include the Jenkins client logic so the slave can connect back to the master, and it must include any build tools required by the job steps that will execute on the slave.
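
Pod templates are typically configured through the Jenkins UI (Figures 3 and 4 below), but the Kubernetes plugin also lets a pipeline declare one inline. Here is a minimal sketch using the plugin’s podTemplate/containerTemplate DSL, assuming a Kubernetes cloud named ‘openshift’ has been configured; the label is arbitrary:

podTemplate(label: 'nodejs-slave', cloud: 'openshift', containers: [
  // 'jnlp' is the conventional name for the container that connects back to the Jenkins master
  containerTemplate(name: 'jnlp', image: 'openshift/jenkins-slave-nodejs-centos7')
]) {
  node('nodejs-slave') {
    // These steps run inside the slave pod, which includes node, npm, oc, and git
    sh 'node --version'
  }
}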

 

Figure 3: Kubernetes plugin configuration - cluster connection information

Figure 4: Kubernetes plugin configuration - slave pod template definition

OpenShift provides four slave images that can be used as is or extended:

  • openshift/jenkins-slave-maven-rhel7
  • openshift/jenkins-slave-maven-centos7
  • openshift/jenkins-slave-nodejs-rhel7
  • openshift/jenkins-slave-nodejs-centos7

The Maven images provide a JDK and Maven tooling, and the Node.js images provide a Node.js and npm distribution and associated tooling. All of these images also include the OpenShift command line tool (oc) and git.

The RHEL7 and CentOS7 image variants are identical other than being based on RHEL and CentOS base images, respectively, and using the corresponding RHEL or CentOS distributions of the installed tools.

Additional slave images for other frameworks can easily be constructed by starting from the slave base image or by extending one of the existing slave images to install additional packages.
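
As a hedged illustration, a Dockerfile that layers an extra toolchain (Go, purely as an example) onto the CentOS slave base image might look like:

# Start from the slave base image mentioned above
FROM openshift/jenkins-slave-base-centos7

# Package installation requires root; switch back to the expected non-root UID afterwards
USER root
RUN yum install -y golang && yum clean all
USER 1001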

Jenkins Auto-provisioning

The piece that makes all of these pipeline components transparent to the user is the auto-provisioning logic. Disabled by default, this feature, when enabled, automatically deploys a Jenkins server in a user’s project when a pipeline build configuration is first defined within that project.

The Jenkins server will run as a pod inside the user’s project and includes all credentials and configuration needed to synchronize pipeline builds and launch slave pods to execute build steps. The Jenkins server and related resources are defined in a standard OpenShift Template which is instantiated within the user’s project as needed. The template is customizable by the cluster administrator to control the Jenkins experience users will see.

To configure and enable auto-provisioning, see the OpenShift documentation on the topic.
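
As a rough sketch, the relevant stanza lives in the master configuration; the field names below reflect the 3.x master-config.yaml, so check the documentation for your version:

jenkinsPipelineConfig:
  autoProvisionEnabled: true        # deploy Jenkins when a project gets its first pipeline BuildConfig
  templateNamespace: openshift      # namespace containing the Jenkins template
  templateName: jenkins-ephemeral   # template to instantiate
  serviceName: jenkins              # service name the provisioning logic looks for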

Summary

With the OpenShift Pipeline feature, you can define and execute Jenkins Pipelines which provide powerful CI/CD capabilities for managing your application lifecycle. The pipeline will be executed on a Jenkins server that runs in your OpenShift environment, using slave executors that also run on your cluster. You can launch and monitor the pipeline execution from within OpenShift and using OpenShift tooling, with no need to interact with Jenkins directly, though of course it is available for power users.

To see an end-to-end demonstration of a pipeline BuildConfig being created, Jenkins auto-provisioning, pipeline execution, and web console visualization, check out this video:
