This article is part of a five part series.

Introduction

OpenShift Pipelines is a Continuous Integration / Continuous Delivery (CI/CD) solution based on the open source Tekton project. The previous article in the series showed how Tekton can be used to drive the OpenShift Source-to-Image (S2I) process to create an image containing the source code, builder tools and the deployable runtime executable. This article will show how to create a runtime image that contains assets harvested from the builder image created in the previous article, and it will also show how to move the images around ready for testing and formal release.

Access to the source content

All assets required to create your own instance of the resources described in this article can be found in the GitHub repository here. To save space and make the article more readable, only the important aspects of the Tekton resources are included within the text. Readers who want to see the context of a section of YAML should clone or download the Git repository and refer to the full version of the appropriate file.

Image creation in Tekton

Creating the runtime image

At this stage the intermediate builder image contains the built asset, which is a WAR file compiled from the Java source. Figure 1 shows the progression of the assets through this stage of the process. As a result of the execution of the build task, the builder image has been taken from the image registry, the source code has been pulled into the builder image, and the builder image has become the intermediate image containing the source code, tools, and deliverables (steps 1 to 3 in figure 1).


Figure 1 - Creation of the runtime image

The Tekton task for this stage of the process is in the file /build/tasks/createRuntimeImage.yaml.

This task takes as an input the intermediate image and produces as an output the runtime image, which is to be stored and executed. A step is used to create a new Dockerfile simply by echoing the commands and piping them to a file.
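A minimal sketch of what such a step could look like is shown below; the step name and the Dockerfile path are illustrative assumptions, so refer to createRuntimeImage.yaml for the actual version:

- name: create-dockerfile
  command:
    - /bin/sh
    - '-c'
  args:
    - |-
      # Tekton substitutes the $(resources...) reference before the shell runs
      echo "FROM $(resources.inputs.intermediate-image.url) as intermediate-image" > Dockerfile.runtime
      echo "FROM docker.io/openliberty/open-liberty as runtime-image" >> Dockerfile.runtime
      echo "COPY --from=intermediate-image /tmp/src/target/liberty-rest-app.war /config/apps/liberty-rest-app.war" >> Dockerfile.runtime
      echo "COPY --from=intermediate-image /tmp/src/src/main/liberty/config/server.xml /config/server.xml" >> Dockerfile.runtime
  image: registry.redhat.io/rhel8/buildah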

The Dockerfile that is created is shown below:

FROM $(resources.inputs.intermediate-image.url) as intermediate-image
FROM docker.io/openliberty/open-liberty as runtime-image
COPY --from=intermediate-image \
/tmp/src/target/liberty-rest-app.war /config/apps/liberty-rest-app.war
COPY --from=intermediate-image \
/tmp/src/src/main/liberty/config/server.xml /config/server.xml

The Dockerfile uses two FROM statements to open the intermediate image and the new runtime image (an Open Liberty image taken from docker.io - step 4 in figure 1). The instructions in the Dockerfile then copy the liberty-rest-app.war file from the location /tmp/src/target in the intermediate image to the location /config/apps in the runtime image (step 5 in figure 1). This process is repeated for the server.xml file so that the required files are in the correct locations in the runtime image.
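Building the image from this Dockerfile uses the same Buildah process as the earlier build task. A minimal sketch of such a build step, assuming the Dockerfile was written to Dockerfile.runtime as in the earlier sketch, is:

- name: build-runtime-image
  command:
    - buildah
    - bud
    - '--tls-verify=$(params.TLSVERIFY)'
    - '-f'
    - Dockerfile.runtime
    - '-t'
    - $(resources.outputs.runtime-image.url)
    - .
  image: registry.redhat.io/rhel8/buildah
  securityContext:
    privileged: true
  volumeMounts:
    - name: pipeline-cache
      mountPath: /var/lib/containers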

The resulting image is stored in the local Buildah repository, identified by the runtime image resource which is (repeated here):

apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: liberty-rest-app
  namespace: liberty-rest
spec:
  params:
    - name: url
      value: [image registry url]:5000/liberty-rest/liberty-rest-app
  type: image

The URL value, which has been truncated here for readability, is the URL of an OpenShift image stream. However, at this point it is simply a tag that has been applied to an image in the Buildah repository; nothing has been copied into the OpenShift image registry yet.

The next stage is to push the image to the OpenShift image registry (step 6 in figure 1) using the Tekton step shown below:

- name: push-image-to-openshift
  command:
    - buildah
    - push
    - '--tls-verify=$(params.TLSVERIFY)'
    - $(resources.outputs.runtime-image.url)
    - 'docker://$(resources.outputs.runtime-image.url)'
  image: registry.redhat.io/rhel8/buildah
  resources: {}
  securityContext:
    privileged: true
  volumeMounts:
    - name: pipeline-cache
      mountPath: /var/lib/containers

The penultimate argument identifies the image in the local repository (accessible through mounting the pipeline-cache persistent volume claim at /var/lib/containers) and the final argument, prefixed docker://, is the identifier of the image in the OpenShift registry.

If you are following along with the Tekton YAML files from the Git repository, you will see that the tasks also contain commands to list the content of the local Buildah repository, as sketched below.
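Such a listing step only needs to run the buildah images command against the same mounted storage; the step name here is illustrative:

- name: list-buildah-images
  command:
    - buildah
    - images
  image: registry.redhat.io/rhel8/buildah
  securityContext:
    privileged: true
  volumeMounts:
    - name: pipeline-cache
      mountPath: /var/lib/containers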

Clear existing application resources

When quickly moving through the build-deploy-build-deploy cycle of development it is necessary to ensure that the runtime environment is ready to accept a new deployment of the application. To ensure the environment is not impacted by the assets or configuration of a prior build it is easier simply to remove the existing application.

Dependent tasks in Tekton

In a situation where a build fails, it is sensible to leave the previously deployed assets in place so that further testing can continue against a running application. As a result, the task that removes the application resources does not start until the build task has completed successfully. This is achieved by adding a runAfter directive to the pipeline that orchestrates the whole process, which is described later.
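The fragment below is a minimal sketch of the shape of that ordering constraint within a pipeline definition; the task names are illustrative and the actual pipeline is covered in the next article:

tasks:
  - name: build-image
    taskRef:
      name: build-image
  - name: clear-resources
    taskRef:
      name: clear-resources
    runAfter:
      - build-image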

Clear Resources task

The clear resources task needs to execute a number of ‘oc’ command line operations, so it is sensible to put them into a shell script. An image exists within the OpenShift cluster that delivers the oc command line interface, so that image can be used for this task. To view all of the images that are available, use the command ‘oc get is -A’, which lists all image streams visible to the current user. Any of these images can be used within a step if appropriate; however, the URL listed in the output of the command is the public URL for the image, whereas steps must use the internal image registry address for OpenShift hosted images.

Figure 2 shows an image stream in OpenShift: the URL reported by ‘oc get is’ is the second URL, whereas the URL required in the step is the first URL, beginning image-registry.

Figure 2 - image stream URLs presented by OpenShift

The clear resources step, taken from the file build/tasks/clearResources.yaml, is shown below; a shell script is created and executed within the same step.

- name: clear-resources
  command:
    - /bin/sh
    - '-c'
  args:
    - |-
      echo "------------------------------------------------------------"
      echo "echo $(params.appName)"
      echo "oc get all -l app=$(params.appName)" > clear-cmd.sh
      echo "oc delete all -l app=$(params.appName)" >> clear-cmd.sh
      echo "oc get all -l app=$(params.appName)" >> clear-cmd.sh
      echo "Generated script to clear resources ready for new deployment"
      cat clear-cmd.sh
      echo "------------------------------------------------------------"
      chmod u+x clear-cmd.sh
      ./clear-cmd.sh
  image: [image registry url]:5000/openshift/cli:latest
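For example, if the appName parameter were set to liberty-rest-app, the generated clear-cmd.sh script would contain:

oc get all -l app=liberty-rest-app
oc delete all -l app=liberty-rest-app
oc get all -l app=liberty-rest-app

The surrounding get commands simply show the labelled resources before and after the delete, so that the task log records what was removed.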

Push image to quay.io

The task that pushes the image to quay.io takes a number of input parameters to define the quay.io account name, the repository within the account to which the image should be pushed, and the image tag (version) for the image. The task also has an input resource, which is the image to be pushed. This makes the task generic enough for any user and any image.

The push image task is in the file build/tasks/pushImageToQuay.yaml.

Any access to quay.io, like many other repositories, requires authentication. It may appear reasonable to use one step to authenticate and a second step to perform an action as the authenticated user. However, since each step runs in an isolated container, the authentication would be forgotten as soon as the login step completed, leaving the action step unauthenticated. The answer when pushing images is that the buildah command can take an authentication file as an argument. The authentication information is not stored in a parameter or in a pipeline run object; instead it is stored in an OpenShift secret.

Instructions for creating the secret from quay.io are in the section on using the example build process later.

Once the secret file has been created and downloaded from quay.io, create the secret in OpenShift using the command:

oc create -f <filename>
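To confirm that the secret exists, it can be inspected with the following command (assuming it was created with the name quay-auth-secret, the name referenced by the task later in this article):

oc get secret quay-auth-secret -o yaml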

The secret data is stored as shown in the example below:

{
    "apiVersion": "v1",
    "data": {
        ".dockerconfigjson": "ab ... n0="
    },
    "kind": "Secret",
    ...
}

When mounted into the container for the step, the data is accessible from a file called .dockerconfigjson.

The Tekton task uses the secret by mounting it as a volume, as shown below:

volumes:
  - name: quay-auth-secret
    secret:
      secretName: quay-auth-secret

Content in the Buildah repository

Before pushing the image to quay.io, the images in the local Buildah repository are listed. The output will be similar to the following:

(Output of the ‘buildah images’ command, listing the five images discussed below.)

The content has been cut down on the first and last lines to avoid wrapping:

<A> = image-registry.openshift-image-registry.svc:5000/liberty-rest

<B> = registry.access.redhat.com/redhat-openjdk-18

Note that the images listed by Buildah are simply in the local Buildah repository, in a similar manner to using docker build to create a local image. The fact that the repository identifier shows the OpenShift registry or the quay.io registry does not mean that the image is in that registry; it must be pushed to actually appear there.

Image 1 is tagged for the repository on the OpenShift cluster, to which the image is pushed ready to be deployed for testing.

Image 2 is the tagged runtime image ready to be pushed to quay.io.

Note that the Image ID is the same for images 1 and 2, confirming that what is being executed on OpenShift is the same as what is to be pushed to quay.io.

Image 3 is the intermediate builder image.

Image 4 is the runtime image pulled from docker.io before it has had the WAR file and server.xml files added to it.

Image 5 is the builder image before it has performed the build operation, indicating that the difference of 28 MB between image 5 and image 3 is the source code and the built WAR file.

Push to quay.io

The step to push the image to quay.io is shown below:

- name: push-image-to-quay
  command:
    - buildah
    - push
    - '--authfile'
    - /etc/secret-volume/.dockerconfigjson
    - quay.io/$(params.quay-io-account)/$(params.quay-io-repository):$(params.quay-io-image-tag-name)
  image: registry.redhat.io/rhel8/buildah
  securityContext:
    privileged: true
  volumeMounts:
    - name: quay-auth-secret
      mountPath: /etc/secret-volume
      readOnly: true
    - name: pipeline-cache
      mountPath: /var/lib/containers

The authentication secret is mounted read-only at /etc/secret-volume, therefore the secret data is accessible from the path /etc/secret-volume/.dockerconfigjson and can be used directly with the buildah push command.

The image has previously been tagged ready for pushing to quay.io, so the image name and tag can simply be used in the push command.
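As a sketch, that earlier tagging could be performed by a step such as the following, assuming the task's input image resource is named runtime-image (refer to pushImageToQuay.yaml for the actual version):

- name: tag-image-for-quay
  command:
    - buildah
    - tag
    - $(resources.inputs.runtime-image.url)
    - quay.io/$(params.quay-io-account)/$(params.quay-io-repository):$(params.quay-io-image-tag-name)
  image: registry.redhat.io/rhel8/buildah
  securityContext:
    privileged: true
  volumeMounts:
    - name: pipeline-cache
      mountPath: /var/lib/containers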

Summary

This article showed how Tekton can be used for image management. The built executables were extracted from the builder image and put into a specific runtime image, which was then stored in both the OpenShift registry and quay.io. The immutable image is ready to run from the OpenShift registry for initial testing, and the image stored in quay.io is ready for use in other clusters (such as QA, pre-production and production) as part of a wider application release process.

What’s next

The next article in this series covers the overall orchestration process of Tekton pipelines, bringing together the tasks explained so far into a single seamless process that executes all of the required steps. It also shows how parameters can feed into the process when the pipeline is executed.