
As part of a transition to cloud-native, container-based applications, organizations are looking at alternatives to traditional continuous integration and continuous delivery processes. Tekton is an innovative way to run container build and test processes directly on the OpenShift platform.

Many organizations follow a GitOps model and store the source code and configuration for applications in a Git repository. To make efficient use of the repositories, some teams store the source code for multiple microservices in the same Git repository. This simplifies repository management and helps developers quickly switch from one microservice to another. Additionally, many teams use multiple branches to segregate different streams of development.

This raises a question: Do all builds need to be executed for each change in the repository? Since changes are isolated by directory and branch, there is efficiency to be gained by filtering the conditions that will trigger each build operation for different microservices in the same Git repository.

Tekton tasks run within pods and only run when a build is required, so no engine is constantly consuming CPU and memory on the cluster. In this respect it follows a serverless mode of operation. Red Hat provides a supported version of Tekton as OpenShift Pipelines that runs on the OpenShift platform and is included within the OpenShift subscription. Tekton is an efficient way to operate build pipelines. With some planning, and the use of the Tekton trigger interceptor, you can reduce the noise of build and test processes that are not required. This article will show you how to use the Tekton trigger interceptor process to make sure that the right builds run under the right circumstances.

For background reading, see this guide to OpenShift pipelines, which covers the construction of tasks, steps, and pipelines to create a complete CI system that delivers the automation required for a repeatable and predictable container build process. Part 6 of the guide includes information on the triggering process for pipeline execution.

A note on Tekton versions

This article has several links to the Tekton documentation at tekton.dev/docs. The links will often go to the latest version of the upstream Tekton documentation, which may be ahead of what is delivered in the OpenShift operator. To check the version of Tekton components in the operator deployed on your cluster, go to the OpenShift web user interface and navigate to Administrator View → Operators → Installed Operators → Red Hat OpenShift Pipelines → Details View. Scroll down to see the version of the specific components. If you do not have permission to perform the above steps, ask a cluster administrator to do it for you.

Once you know the version of the Tekton components, you can switch the tekton.dev/docs pages to the matching version using the Latest Versions drop-down menu on the right side of the menu bar.

When to perform a build?

Initiating a build process should be as simple as possible for a developer, such that all they need to do is commit new changes and push to the GitHub repository. A webhook configured in the GitHub repository can then be fired to initiate the pipeline execution. This article will refer to GitHub and will use GitHub for examples; however, similar processes are available with GitLab and other source code management systems.

Pipelines may be triggered by various actions on a Git repository via the webhook process, such as pushing commits or creating and merging pull requests. Tekton triggers receive the webhook request from the Git repository, then initiate the execution of a specific pipeline process. Tekton triggers require a continuously running process, in a pod, that listens for the webhook request to start a pipeline. When a webhook fires, the Tekton trigger receives a webhook payload of information that includes the GitHub repository, references to any relevant commits, the branch on which the commits were made, and various other fields of information that differ slightly depending on the action within GitHub that triggered the webhook event.

The decision of when to build is complex and is often based on a multidimensional consideration of factors, including:

    • The directory of the content changed

    • The branch on which the change was made

    • The GitHub action performed

Directory-based filtering

As noted above, many organizations store the source code for multiple microservices in the same Git repository. Figure 1 shows an example of a simple structure in which three applications are stored in the same GitHub repository.

Diagram of multi-app directory structure

Figure 1: Example multi-app directory structure in a GitHub repository

Each application has a source code directory and a pipeline directory containing the Tekton pipeline resource, together with any custom tasks used exclusively for that application. A common tasks directory is used to store Tekton tasks that are used by multiple applications.

A change to the content of the source code directory of app-01 should result in the execution of only the pipeline for app-01. A change to the content of the pipelines directory of app-01 should not result in rebuilding the containerized application. However, a change to the content of the pipelines directory should initiate a different process involving the use of OpenShift GitOps (ArgoCD) to redeploy the Tekton resources. This introduction to GitOps provides further information on the process. With a basic webhook and Tekton trigger process, each application build process is triggered by any change to the content of the GitHub repository, which can result in unnecessary operations on the cluster and unnecessary post-build actions such as container image moves, testing cycles, and application deployments.

Branch-based filtering

Most teams will use different branches for specific tasks within the development lifecycle. It is fair to assume that not all events on each branch should trigger the same pipeline execution. For example, a commit to a feature branch may require a simple build and test process, whereas a commit to a release branch may require a full build, test, vulnerability analysis, deployment to QA, and full QA testing cycle.

Additionally, specific pipeline processes may be required based on other events, such as creating or merging a pull request. Depending on the branch to which a change has been committed, different build, container image move, and test operations are necessary.

The diagram in Figure 2 shows a simple branching model. A main branch is used to release the application for production use, and a development branch is used to isolate development team changes. The red dotted line indicates a merge of work back to the main branch, at which point a release is labeled. From the labeled release points, two short-term fix branches are created to isolate fix work from the development efforts. When the release 1 fix work has been completed, the changes are merged to the main branch for a fix release (REL-1.1), and those changes are also merged to the development branch to prevent the fix from being regressed later.

Diagram depicting branched development work in GitHub

Figure 2: Branched development work

Tekton trigger filtering

The Tekton trigger process requires a filtering mechanism to allow a specific directory, action, branch, or other condition to be used to select whether a particular Tekton pipeline is triggered.

The main Tekton triggers documentation covers the use of event listeners, trigger templates, and trigger bindings. Tekton trigger documentation also covers the use of interceptors, which can be used to filter the webhook payload. Interceptors can also verify the identity of the triggering repository using a secret and transform and extract specific content from the webhook payload.

The interceptor analyzes and transforms the webhook data, which is filtered by the trigger binding, and specific fields are then forwarded to the trigger template. This part of the process effectively filters the large block of webhook payload data down to just the fields that are required by the pipeline. The diagram in Figure 3 shows the event listener object and how it interacts with the trigger binding and trigger template to eventually call the pipeline. The interceptor process takes place within the event listener.

Diagram of event listener and triggering object relationship

Figure 3: Event listener and triggering object relationship

An example of a trigger binding is included in Figure 4, and the webhook payload from which it is extracting data fields is shown in Figure 5.

apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
 name: simple-pipeline-binding
spec:
 params:
 - name: gitrepository.url
   value: $(body.repository.clone_url)
 - name: gitrevision
   value: $(body.head_commit.id)

Figure 4: Example trigger binding

The example trigger binding in Figure 4 shows references to specific fields:

  • body.repository.clone_url
  • body.head_commit.id

Figure 5 shows an abbreviated example webhook payload for a push event. The entire block of payload information is referred to as the body of the payload, hence each field reference shown above begins with body.

{
  "ref": "refs/heads/main",
  "before": "c8e7e83f25819c1a28973ee65dd5330d16b804ca",
  "after": "3ac79d5c3a1d415351a12edbf68c1a8cbca2bcbf",
  "repository": {
    "name": "simple-apps",
    "full_name": "marrober/simple-apps",
    "private": false,
    "owner": {
      "name": "marrober",
      "email": "marrober@redhat.com",
      "url": "https://api.github.com/users/marrober",
      "site_admin": false
    },
    "html_url": "https://github.com/marrober/simple-apps",
    "clone_url": "https://github.com/marrober/simple-apps.git"
  },
  "head_commit": {
    "id": "3ac79d5c3a1d415351a12edbf68c1a8cbca2bcbf"
  }
}

Figure 5: Abbreviated example webhook payload

These are the values that would be passed to the trigger in the named fields:

gitrepository.url = https://github.com/marrober/simple-apps.git

gitrevision       = 3ac79d5c3a1d415351a12edbf68c1a8cbca2bcbf
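The field extraction performed by the trigger binding can be illustrated with a short Python sketch. The dictionary below is a stand-in for the parsed webhook body from Figure 5; it is an illustration of the lookup semantics, not Tekton code:

```python
import json

# Abbreviated webhook body, mirroring the payload in Figure 5
payload = json.loads("""
{
  "ref": "refs/heads/main",
  "repository": {
    "name": "simple-apps",
    "clone_url": "https://github.com/marrober/simple-apps.git"
  },
  "head_commit": {
    "id": "3ac79d5c3a1d415351a12edbf68c1a8cbca2bcbf"
  }
}
""")

# $(body.repository.clone_url) resolves to the nested clone_url field
gitrepository_url = payload["repository"]["clone_url"]

# $(body.head_commit.id) resolves to the head commit SHA
gitrevision = payload["head_commit"]["id"]
```

Each dotted reference in the binding simply walks one level deeper into the JSON body, which is why the references all begin with body.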

 

Event filtering with interceptors

To filter the fields of information within the body of the webhook payload, add the interceptor section to the event listener. Specific interception capabilities and syntax are available, depending on whether you use GitHub, GitLab, or Bitbucket. Tekton provides detailed documentation for each software configuration management system. This article will demonstrate with GitHub.

Interceptor definition: branch filtering

The requirement for branch-based filtering was described above. This section will explain the steps required to add a branch-based pipeline execution filter to the Tekton trigger process.

The example event listener in Figure 6 shows the inclusion of the interceptor, in which two fields from the webhook payload and a header are examined. The full range of syntax that can be used is described on the Tekton reference page for the Common Expression Language (CEL).

apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
 name: pipeline-test-listener-interceptor
spec:
 serviceAccountName: pipeline
 triggers:
 - name: github-listener
   interceptors:
   - ref:
       name: cel
     params:
     - name: filter
        value: 'body["ref"].contains("main") && body["repository"]["name"] ==
"simple-apps" && header.match("X-GitHub-Event", "push")'
     - name: overlays
       value:
       - key: branch
         expression: body.ref.split('/')[2]
   bindings:
   - ref: pipeline-test-binding
   template:
     ref: pipeline-test-trigger-template

Figure 6: Event listener with interceptor

The filtering action

The filter parameter takes three distinct filter expressions and combines them with the AND (&&) operator. The individual parts include filters on the following:

  • body["ref"].contains("main"): the presence of the word “main” in the branch name contained within the ref field

  • body["repository"]["name"] == "simple-apps": the value of the repository.name field

  • header.match("X-GitHub-Event", "push"): the GitHub push event

It is possible to select a subset of GitHub actions that will initiate a webhook from within GitHub. However, the filtering approach described here allows different pipelines to be triggered by different events in a simplified way, with just a single event listener object and, potentially, a single webhook.
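The combined filter behaves like the plain-Python analogue below. This is an illustration of the CEL logic only, not Tekton code, and CEL's case-insensitive header.match is simplified here to a plain dictionary lookup:

```python
def should_trigger(body: dict, headers: dict) -> bool:
    """Plain-Python analogue of the three CEL filter expressions
    combined with the && operator."""
    return (
        "main" in body["ref"]                            # ref contains "main"
        and body["repository"]["name"] == "simple-apps"  # repository name matches
        and headers.get("X-GitHub-Event") == "push"      # push events only
    )

push_body = {"ref": "refs/heads/main", "repository": {"name": "simple-apps"}}
```

With this body, a push event triggers the pipeline, while a pull_request event, or a push to a branch whose name does not contain “main”, does not.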

Extending the event listener: new fields

The event listener shown above includes the overlays section repeated below in Figure 7:

     - name: overlays
       value:
       - key: branch
         expression: body.ref.split('/')[2]

Figure 7: Overlay to create a new field

The YAML in Figure 7 will generate a new field called branch, containing the third element of the body.ref field when it is split on the / character. Since the body.ref field contains the text refs/heads/main in the example webhook payload shown in Figure 5, the branch parameter will hold the value main.

Passing new fields to the pipeline

Newly generated fields, such as branch above, need to be added to the pipeline so they can be used in steps. This is done by adding an entry to the trigger binding object, as shown in the final two lines of Figure 8.

References to fields added by the interceptor overlay all begin with the term “extensions.” The extracted branch field will be forwarded to the trigger template under the name ext-branch.

apiVersion: triggers.tekton.dev/v1alpha1 
kind: TriggerBinding
metadata:
  name: app-pipeline-binding
spec:
  params:
  - name: gitrepository.url
    value: $(body.repository.clone_url)
  - name: gitrevision
    value: $(body.after)
  - name: ext-branch
    value: $(extensions.branch)

Figure 8: Trigger binding definition with extension parameter

Interceptor definition: directory filtering

The requirement for directory-based filtering was described above. This section will explain the steps required to add a directory-based pipeline execution filter to the Tekton trigger process. It will also illustrate how multiple pipelines can be managed with a single event listener, so that the number of constantly running processes is kept to the minimum while supporting the flexibility required.

The example event listener in Figure 9 shows the inclusion of the interceptors required for directory-based filtering. In this example, two triggers are added to the same event listener: one for app-1 and another for app-2. Each trigger has the two interceptors required for the operation of the directory filtering.

The first interceptor, called github, is a GitHub cluster interceptor that gathers data using a parameter called addChangedFiles. This parameter has an enabled value of true to switch on the gathering of the set of files that have been added, removed, or changed within the commits associated with the GitHub event. The set of files is available to the second interceptor in a field called extensions.changed_files.

The second interceptor, called cel, is a filter interceptor that uses the field from the previous interceptor to match changed file path names with a path of interest for the application. The filter operator extensions.changed_files.matches("app-1/src/") makes sure the pipeline is only triggered when the path of changed files includes the path app-1/src. The trigger also includes the overlay capability to provide the branch name on which the change was made.

The trigger content is repeated for the second instance, in which pipeline execution is triggered for file changes in app-2. Each trigger shares a common binding that pulls specific referenced fields from the webhook content, although different bindings may be used if necessary. Each trigger must have a separate trigger template reference, since the trigger template selects the specific pipeline to be executed.
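Assuming the changed file list arrives as a single delimited string of paths, the path matching behaves like the Python analogue below, where re.search stands in for CEL's regular-expression matches operator. This is an illustration of the matching logic, not Tekton code:

```python
import re

def path_triggers_app(changed_files: str, app_path: str) -> bool:
    """Return True when any changed file path matches the application path.
    CEL's matches() performs a regular-expression search, mirrored here
    with re.search."""
    return re.search(app_path, changed_files) is not None

# Hypothetical list of changed files from a commit touching only app-1
changed = "app-1/src/server.js app-1/src/package.json"
```

A commit that only touches files under app-1/src/ matches the app-1 trigger but not the app-2 trigger, so only the app-1 pipeline is started.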

apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
 name: multi-app-event-listener
spec:
 serviceAccountName: pipeline
 triggers:
   - name: github-listener-app-1
     interceptors:
     - ref:
         name: "github"
         kind: ClusterInterceptor
         apiVersion: triggers.tekton.dev
       params:
       - name: "addChangedFiles"
         value:
           enabled: true
     - ref:
         name: cel
       params:
       - name: "filter"
         value: 'header.match("X-GitHub-Event", "push") &&
extensions.changed_files.matches("app-1/src/")'
       - name: "overlays"
         value:
         - key: branch
           expression: "body.ref.split('/')[2]"
     bindings:
       - ref: app-pipeline-binding
     template:
       ref: app-1-pipeline-template
   - name: github-listener-app-2
     interceptors:
     - ref:
         name: "github"
         kind: ClusterInterceptor
         apiVersion: triggers.tekton.dev
       params:
       - name: "addChangedFiles"
         value:
           enabled: true
     - ref:
         name: cel
       params:
       - name: "filter"
         value: 'header.match("X-GitHub-Event", "push") &&
extensions.changed_files.matches("app-2/src/")'
       - name: "overlays"
         value:
         - key: branch
           expression: "body.ref.split('/')[2]"
     bindings:
       - ref: app-pipeline-binding
     template:
       ref: app-2-pipeline-template

Figure 9: Event listener for multiple pipelines and directory-based filtering

The relationship of the various entities explained above is shown in the diagram in Figure 10.

Diagram of event listener and trigger interceptors

Figure 10: Event listener, trigger interceptors, and relationship to applications

The right side of Figure 10 shows a number of executions of the pipeline. The first execution of the pipeline is driven by the “pipeline run” Tekton resource, which executes the pipeline manually. Subsequent executions of the pipeline are triggered by a GitHub operation that matches the header, branch, and file match interceptor filters.

Trigger template object

An example trigger template for app-1, which consumes the fields supplied to it by the trigger binding, is shown below in Figure 11. Note the use of $(tt.params.<property-name>) to refer to properties supplied by the trigger process.

apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: app-1-pipeline-template
spec:
  params:
  - name: gitrepository.url
  - name: gitrevision
  - name: branch
  resourcetemplates:
  - apiVersion: tekton.dev/v1beta1
    kind: PipelineRun
    metadata:
      generateName: pipeline-app-1-pr-tr-
    spec:
      serviceAccountName: pipeline
      pipelineRef:
        name: pipeline-app-1
      params:
        - name: git-url
          value: $(tt.params.gitrepository.url)
        - name: git-revision
          value: $(tt.params.gitrevision)
        - name: branch
          value: $(tt.params.branch)
        - name: app-dir
          value: node/multi-apps/app-1/src
        - name: imagestream-url
          value: image-registry. . .svc:5000/simple-pipeline/app-1
      workspaces:
      - name: resources
        volumeClaimTemplate:
          spec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi

Figure 11: Trigger template with references to fields from the webhook payload

Triggers and filters in operation

When triggered by a user operation, such as a git push, the filter within the event listener will process the incoming data and will then either call the pipeline or not. When the pipeline is not called, the event listener log will show that the combined filter block did not return true, as shown in the example log snippet in Figure 12.

FailedPrecondition desc = expression body[\"ref\"].contains(\"main\") && body[\"repository\"][\"name\"] == \"pipeline-test-gogs\" did not return true,

Figure 12: Example of a log entry for a pipeline not being executed

Custom event processors

An event listener can call a REST-based application running on the OpenShift cluster to give greater flexibility for the analysis of the webhook payload and for the manipulation of the payload fields. This approach is described in the Tekton documentation. The maintenance overhead of operating an additional microservice for the purpose of filtering must be considered when selecting the appropriate filtering process. In most cases, every effort should be made to accommodate the customization within the event listener configuration, as shown in the examples in this article.
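As a sketch of the decision logic such a service implements, the Python function below receives a request containing the webhook body and returns a response indicating whether triggering should continue. The request and response field names here are simplified assumptions for illustration, not the exact Tekton interceptor request/response schema:

```python
import json

def handle_interceptor_request(request_body: str) -> str:
    """Decide whether pipeline triggering should continue, based on the
    webhook payload embedded in the interceptor request.
    Field names are simplified assumptions, not the exact Tekton schema."""
    request = json.loads(request_body)
    body = request.get("body", {})

    # Example rule: only continue for refs whose name contains "main"
    proceed = "main" in body.get("ref", "")

    response = {
        "continue": proceed,
        # Extra fields can be returned, similar to an overlay extension
        "extensions": {"branch": body.get("ref", "").split("/")[-1]},
    }
    return json.dumps(response)
```

A production interceptor would wrap this logic in an HTTP service; the point here is only that the service inspects the payload and answers continue or stop.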

Summary

The use of filters within Tekton event listeners can help teams to reduce the noise of unwanted builds when events occur on specific branches, directories, or events of a specific type.

Example trigger process

The trigger processes described in this article are available at the following locations:

  • GitHub repository
  • Branch-based filtering is in the directory pipelines/simplePipeline-1/
  • Directory-based filtering, including a multi-application example, is in the directory node/multi-apps

The only prerequisite for using the above examples is to import the Node.js base container image to an image stream within the project so that the dockerfile can use it. The command to import the image is below:

oc import-image node-js --from=registry.redhat.io/rhel8/nodejs-16 --confirm

 

The dockerfile used in the container build examples begins:

FROM image-registry.openshift-image-registry.svc:5000/simple-pipeline/node-js

 

The simple-pipeline segment in the line above refers to the OpenShift namespace in which the work is taking place. If you use a different namespace in your experiments, you will need to change this line in each of the dockerfiles and then commit the change to the Git repository.

The dockerfile containing the above lines can be found here:

  • Branch-based filtering example: <repo-location>/node/layers/dockerfile
  • Directory-based filtering: <repo-location>/node/multi-apps/app-1/src/dockerfile and <repo-location>/node/multi-apps/app-2/src/dockerfile

The pipelineRun objects also have two specific references that will need to be updated if you choose to try out the process. These are the git-url and imagestream-url properties.

