
OpenShift Container Platform has provided CI/CD services since its early days, allowing developers to easily build and deploy their applications, from code to containers.

As CI/CD in the Kubernetes space evolved, OpenShift stayed at the leading edge by providing services like OpenShift Pipelines (based on the upstream Tekton project), a more Kubernetes-native way of doing continuous delivery.

To further empower developers wanting to embrace Tekton on OpenShift, we have added a new feature (currently in Dev Preview) called “Pipelines-as-code”, which leverages the Tekton building blocks while bootstrapping much of the plumbing needed to make them work, and applies GitOps principles to pipelines.

What is this new feature called “Pipelines-as-code” in OpenShift?

Pipelines-as-code (referred to below as PAC) allows developers to ship their CI/CD pipelines in the same Git repository as their application, making it easier to keep both in sync in terms of release updates.

By providing the pipeline as code (YAML) in a specific folder within the application repository, the developer can automatically trigger that pipeline on OpenShift, without having to worry about creating webhooks, EventListeners, and the other resources that normally have to be set up for Tekton.
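To make this concrete, here is a minimal sketch of what such a pipeline file could look like. The contents are illustrative, not taken from the product: the on-event/on-target-branch annotations and the {{ }} dynamic variables follow Pipelines-as-code conventions, but the file name, the task, and the exact syntax are assumptions and may evolve while the feature is in Dev Preview.

```yaml
# .tekton/pull-request.yaml (hypothetical example)
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: my-app-pull-request
  annotations:
    # Run this PipelineRun on pull-request events targeting main
    pipelinesascode.tekton.dev/on-event: "[pull_request]"
    pipelinesascode.tekton.dev/on-target-branch: "[main]"
spec:
  params:
    # The {{ }} placeholders are expanded by Pipelines-as-code
    # with values taken from the incoming Git event
    - name: repo_url
      value: "{{ repo_url }}"
    - name: revision
      value: "{{ revision }}"
  pipelineSpec:
    params:
      - name: repo_url
      - name: revision
    tasks:
      - name: unit-tests
        params:
          - name: revision
            value: $(params.revision)
        taskSpec:
          params:
            - name: revision
          steps:
            - name: run-tests
              image: registry.access.redhat.com/ubi8/ubi-minimal
              script: |
                echo "Running tests for revision $(params.revision)"
```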

Figure 1: Pipelines-as-code overview

Here is the general workflow for using this feature, assuming it has been enabled on the cluster:

  1. The developer initializes the PAC feature for their project, using the tkn-pac CLI (see the sketch after this list).
  2. The developer chooses which Git events will trigger the pipeline, and scaffolds a sample pipeline file matching the selected events (commit and pull-request for the moment).
  3. They then add a .tekton folder at the root of the application’s Git repository, containing the pipeline YAML file.
  4. They configure the pipeline content in the YAML file by adding the required steps.
  5. When the targeted event (pull-request or commit) happens in the Git repository, PAC on OpenShift intercepts the event, then creates and runs the pipeline in the desired namespace.
  6. When running with GitHub, the bi-directional integration lets users follow the pipeline execution status directly in the GitHub UI, and links to OpenShift for more details.
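As a rough illustration of steps 1 and 2, the CLI flow could look like the following. The tkn-pac subcommands shown here are based on the upstream pipelines-as-code project and may differ in the Dev Preview, so treat them as an assumption rather than a reference:

```bash
# One-time setup (cluster admin): install and configure
# Pipelines-as-code, including the GitHub App integration
tkn pac bootstrap

# From the application's repository: create a Repository custom
# resource binding the Git repository URL to a target namespace,
# and scaffold a sample PipelineRun under .tekton/
tkn pac create repository
```

The Repository custom resource produced in the second step is what lets PAC know, when an event arrives, in which namespace the pipeline for that Git repository should be created and run (step 5 above).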

What’s new and great about this feature, compared to “just using” OpenShift Pipelines (Tekton) on OpenShift?

If you have already used Tekton on Kubernetes, you probably know that the learning curve can be steep: it is still a project that lays the foundations of Kubernetes-native CI/CD, and the syntactic sugar required to make it truly user-friendly is still in the making.

With OpenShift Pipelines, we already make Tekton much easier to use by providing several user-friendly capabilities, such as a UI to create pipelines, a visual representation to follow their execution, and tools to help troubleshoot them when there are issues.

Nonetheless, developers had to do some preparation before their pipelines could be triggered, and the pipelines were not tracked in a Git repository; instead, they were instantiated directly as “static” Kubernetes resources that needed a manual update whenever the pipeline had to change, for a new release for example.

Ship code and pipelines, all in the same repository

Now, by allowing the developer to ship the pipeline alongside the application’s code, and letting OpenShift take care of updating and running the pipeline, we truly help the developer focus on the core, which is to write useful code, both for the application and for its CI/CD pipeline.

Figure 2: Pipelines can be stored within the .tekton folder in the application’s repository
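For example, a repository adopting this layout might look like the following (a hypothetical structure; file names are illustrative):

```
my-app/
├── .tekton/
│   ├── pull-request.yaml   # runs on pull-request events
│   └── push.yaml           # runs on commit (push) events
├── src/
├── Dockerfile
└── README.md
```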


UI sweetness, and user happiness

In addition to that, native integration with GitHub and Bitbucket (with GitLab soon to come) lets the developer track the pipeline’s execution history directly in the GitHub UI, with direct links to the execution logs within OpenShift.

Figure 3: OpenShift updates GitHub with the pipeline execution status


Modern GitHub workflows

When the developer is working on a pull request (PR) and wants the pipeline to run again without pushing a new commit, a simple comment such as “/retest” on the GitHub PR triggers a new execution of the pipeline and captures the results.

Figure 4: Triggering a pipeline run on OpenShift with a “/retest” command in GitHub

That’s great, now let’s see it in action: demo

The following demonstration shows the “Pipelines-as-code” feature in action:

Video: The Level Up Hour (E53) | Pipeline as code
