Introduction

A GitOps approach to continuous delivery enables teams to deploy microservice-based applications using a set of YAML files held within a Git repository. Red Hat OpenShift GitOps facilitates the consistent and automated deployment of Git-based resources to a selection of environments on Kubernetes platforms as content progresses from development to production.

Subtle differences exist between the content used in each environment on the route-to-live. For example, the number of replicas used will probably be higher in a production environment compared to a quality assurance environment. Additionally, resource constraints on pods may differ between environments based on the amount of CPU and memory required for the configuration of the application in each environment. There are obvious differences in configuration between environments too, such as the database connection string used to configure an application or the integration service to which an application should connect.

This article describes an approach to managing environmental differences, ensuring consistency where required and enabling variance where necessary.

Specifically, this article will show how the open source tool Kustomize, a component of OpenShift GitOps, can be used to manage the set of resources used to deploy an application to a variety of environments. Additionally, this article will show how OpenShift GitOps can be used to automate the delivery of content to target environments based on the assets manipulated by Kustomize. A previous article in this series introduced OpenShift GitOps, which is based on the ArgoCD open source project. That article explains the core function of ArgoCD within an overall continuous integration and continuous delivery process.

Another article in this series introduced the use of Kustomize for the specific function of updating the image tag that is to be used in a deployment. This avoids the chicken-and-egg scenario in which a deployment needs to use a variable (the image tag) but doesn’t know what that image tag is when the automated process starts.

Avoiding Duplication

Duplication of resources across environments is always to be discouraged. Many organisations have a large number of environments between development and production, and duplicating the entire set of deployment assets could result in common changes not being applied to all environments by mistake. Missing one environment could result in confusion or chaos, depending on the result. For example, consider the change of a label selector used to identify the deployment with which a service is to be associated.

Extracts of the deployment and service yaml files are shown below in figure 1:

Deployment yaml:

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: myapp
  name: myapp
  namespace: myapp-development

Service yaml:

apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
  namespace: myapp-development
spec:
  ports:
  - name: http
    port: 9080
  selector:
    app: myapp-update

Figure 1: Misconfigured deployment and service selectors

The service selector has been updated but the corresponding change has not been applied to the deployment yaml file. This will break the deployment of the application, and if the change is made in this manner in the production copy of the deployment resources then the issue will only be discovered during the production deployment. Eliminating duplication, and using a common set of assets, lowers the volume of change in each environment. This reduces the amount of effort required to validate changes and also lowers the risk of errors.

A Common Core set of Assets

The approach to maintaining a common core set of assets suggests the use of a ‘base’ set of resource files. This set of files will typically be enough to deploy the application to a development environment with little variance. To apply the variance required for further environments, a set of ‘overlay’ changes is created. If supported by the source code management system, it is possible to apply fine-grained permission control such that specific groups or individuals are authorised to make configuration changes for each environment. All changes to the content should be made using GitOps principles, which require the use of branches for changes and pull requests to approve and merge changes. The pull request process is explained in detail in a subsequent article in this series.

The simple directory structure shown in figure 2 is suggested to separate the content for each environment within the deployment assets Git repository:

Figure 2: Environment specific and base directories
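The directory structure in figure 2 can be sketched along the following lines; the repository name is illustrative, and the base directory holds the resource files referenced later in figure 3:

```
deployment-assets/          # illustrative repository name
├── base/
│   ├── kustomization.yaml
│   ├── namespace.yaml
│   ├── myapp-deployment.yaml
│   ├── myapp-service.yaml
│   └── myapp-route.yaml
├── 01-development/
│   └── kustomization.yaml
├── 02-qa/
│   └── kustomization.yaml
└── 03-production/
    └── kustomization.yaml
```

Each environment directory holds only a kustomization.yaml file describing its variance from the base.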

As much content as possible will be stored in the base directory, with the variance required for each environment contained within each specific environment directory.

Patch or replace

A simple replacement process could be used to modify the base resources for a specific environment. However, this would still result in duplication with the accompanying problems described above. An alternative approach is to document a series of patches that are to be applied to the resources in the base directory. This will create a set of assets that can be used in a specific environment. The mechanism used to create and apply the patches is the open source utility called Kustomize.

Introducing Kustomize

The basic principle of Kustomize is that it enables teams to create a common set of files that are modified at runtime for a deployment to a specific environment. To be absolutely transparent, not everyone is a fan of this approach. One of the reasons for this is that you cannot look at the yaml assets in the Git repository and state with certainty that the files, in that exact format, will be applied to the OpenShift environment. The reason is that Kustomize may apply a patch to the files to change them at deployment time. While this may appear to add a further layer of complexity, the avoidance of duplication and the flexibility of the patching process make it worthwhile.

To assist teams creating such assets, a process for testing the patches is shown later in this article.

Detailed information about the myriad capabilities of Kustomize can be found at the open source project’s web site. This article will focus on a specific set of capabilities, on the basis that you rarely need to know everything, but you can quickly understand enough to make something useful.

A comparison to Helm

A key difference between the use of Kustomize and Helm charts is that the base deployment assets used with Kustomize are in a format that can be directly deployed. There is no requirement to translate resource files to templates. Any change required to fields when using Kustomize is implemented by a patch, described below. In contrast, Helm charts have placeholders within the yaml templates which are replaced with specific values from a Helm values file at deployment time. For any Kubernetes resource attribute to be updated during a Helm deployment, the current value must be replaced by a variable in a template file, and the value must be added to an environment-specific Helm values file.
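As an illustration of the Helm side of this comparison, the extract below shows a replica count externalised into a placeholder; the file names and values are illustrative:

```
# templates/deployment.yaml (Helm template extract - illustrative)
spec:
  replicas: {{ .Values.replicaCount }}

# values-production.yaml (the matching environment-specific values file)
replicaCount: 4
```

With Kustomize, by contrast, the base deployment file simply states replicas: 1 and an overlay patches the value where a different count is needed.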

The Kustomize File

A fundamental aspect of the Kustomize process is to organise files into a managed structure that clearly identifies the base set of assets and the patches that are to be applied for a specific environment. This is done through the identification of a base set of assets and a series of overlays containing patches. This section begins with the organisational structure of the content and then continues to the actual patching process.

Resource inclusion

The Kustomize file acts as a reference to the resources that are to be deployed. Such resources can be drawn from a number of directories, allowing teams to organise assets into any logical structure. For example, a large application consisting of many microservices can benefit from separate directories for the resources of each microservice. To further extend this approach, different microservices could be defined in separate Git repositories if that level of separation of control is required.

Base, Overlay and Resource Directives

Kustomize has a number of terms related to the organisation of Kubernetes resources that are to be deployed.

Resource - A resource is a simple identification of a file that is to be deployed using Kustomize. Resource references appear in Kustomization.yaml files as shown in figure 3:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- myapp-deployment.yaml
- myapp-service.yaml
- myapp-route.yaml

Figure 3: Kustomize file showing the use of resource directives

Base - A base is a directory that contains a kustomization.yaml file. The kustomization file contains a set of references to resources that are to be deployed, together with any patches that are required. Good practices for bases are:

  • Resource references should only include content directly within this directory.
  • There should be no patches in the base, so that the content can be considered an untainted set of files to which patches can be applied by overlays.
  • Namespaces should not be directly referenced in base assets, such that the content can be applied to any namespace.

In the sample directory structure shown in figure 4, the base directory has a kustomization file that refers to the resources within the same directory that are to be deployed. The content of the kustomization file in figure 4 is shown in figure 3.

Figure 4: Base directory with content

Overlay - An overlay is a directory that contains a kustomization.yaml file. The Kustomization file contains a set of references to other kustomization directories as bases. Each Kustomization file that is referenced will either have another overlay or a base definition as shown in figure 5.

Figure 5: An overlay reference to a location holding another kustomization.yaml file

In the example in figure 5, above, the Kustomization file in the 01-development directory refers to a second kustomization file (the base) in the ../base directory. The use of overlays enables teams to build a structure of deployment assets from a series of other asset collections that are themselves managed by a kustomization file. This allows common sets of content to be created and shared as much as possible, such as microservice deployment assets, role-based access control, service account configuration and so on.
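As a minimal sketch, the kustomization.yaml file in the 01-development overlay directory can contain little more than a reference to the base; patch entries, described later in this article, are then added alongside it. (Newer versions of Kustomize accept the same reference in the resources list instead of the deprecated bases field.)

```
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../base
```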

Relative paths are always used when referring to overlays such that the parent location, on a local machine or cloned as a set of assets during a continuous deployment process, is not relevant.

Patching Process

The patching mechanism included in Kustomize enables a set of core assets to be updated for a specific context. Any resource that is not identified by a patch target is deployed untouched.

In its infancy Kustomize had a number of different patching constructs available. However, since version 5 of Kustomize was released in February 2023, two methods, PatchesJson6902 and PatchesStrategicMerge, have been deprecated, leaving the single ‘patches’ mechanism. The patch process involves two elements: the identification of the resource to be updated, and the update that is to be applied.
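The patch operations themselves follow the JSON Patch (RFC 6902) style of ‘op’, ‘path’ and ‘value’ fields. As an illustrative sketch only, not Kustomize’s actual implementation, the effect of replace operations on a parsed resource can be modelled in a few lines of Python:

```python
# Illustrative sketch of RFC 6902 style patch operations as used in
# Kustomize patch directives. This models the effect on a parsed
# resource; it is NOT Kustomize's actual implementation.

def apply_op(resource, op):
    """Apply a single 'replace' or 'remove' operation to a nested structure."""
    parts = op["path"].strip("/").split("/")
    parent = resource
    for key in parts[:-1]:
        parent = parent[int(key)] if isinstance(parent, list) else parent[key]
    last = int(parts[-1]) if isinstance(parent, list) else parts[-1]
    if op["op"] == "replace":
        parent[last] = op["value"]
    elif op["op"] == "remove":
        del parent[last]
    return resource

# A minimal deployment resource, as parsed from yaml
deployment = {
    "kind": "Deployment",
    "metadata": {"name": "myapp"},
    "spec": {"replicas": 1},
}

# op: replace, path: /metadata/name, value: myapp-development
apply_op(deployment, {"op": "replace", "path": "/metadata/name",
                      "value": "myapp-development"})
# op: replace, path: /spec/replicas, value: 4
apply_op(deployment, {"op": "replace", "path": "/spec/replicas", "value": 4})

print(deployment["metadata"]["name"])   # myapp-development
print(deployment["spec"]["replicas"])   # 4
```

The same two operations appear in the real Kustomize examples in the figures that follow.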

Updating a yaml value

Requirement - Update the value of the metadata/name field in a resource of kind ‘Deployment’, called ‘myapp’, within the location indicated by the set of ‘bases’. The Kustomization.yaml file in figure 6, below, will identify any resource that matches the target specified and replace the metadata/name field with the new value.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patches:
- patch: |-
    - op: replace
      path: /metadata/name
      value: myapp-development
  target:
    kind: Deployment
    name: myapp
bases:
- ../base/main

Figure 6: Kustomization.yaml file to update a value of a deployment yaml file

Extracts of the original file and the patched variant are shown in figure 7:

Original resource file:

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  replicas: 1
. . .

Patched resource file:

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: myapp
  name: myapp-development
spec:
  replicas: 1
. . .

Figure 7: Unpatched and patched deployment yaml file

Modifying the number of replicas

The original base deployment file in figure 7, above, shows that only a single replica is used. To patch the number of replicas requires the patch shown in figure 8:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patches:
- patch: |-
    - op: replace
      path: /metadata/name
      value: myapp-prod
  target:
    kind: Deployment
    name: myapp
- patch: |-
    - op: replace
      path: /spec/replicas
      value: 4
  target:
    kind: Deployment
    name: myapp

Figure 8 : Patch for the number of replicas and the deployment name

Two patch directives are included in the above file, and the second will update the number of replicas to 4. However, there is something important to highlight regarding the patching sequence. Both patches identify the same target - a deployment resource with the name of ‘myapp’. The first directive is responsible for changing the name of the deployment object within the yaml file, in the same manner as shown in figure 7. It might be expected that the second patch would now fail, because it is looking for a deployment resource called ‘myapp’ that no longer exists after the first patch has been performed. The good news is that this does not matter: Kustomize will find renamed resources under both the new and the old names.

It is possible to combine the above patches into a single instance of a patch which would have a single target selector and two patch directives as shown in figure 9.

- patch: |-
    - op: replace
      path: /metadata/name
      value: myapp-production
    - op: replace
      path: /spec/replicas
      value: 4
  target:
    kind: Deployment
    name: myapp

Figure 9 : Combined two patch directives into a single patch

Updating array objects

Kustomize has the capability to update array based objects within a resource yaml file. The example in figure 10 shows a role binding resource.

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: myapp-development
  namespace: myapp-ci
subjects:
- kind: ServiceAccount
  name: default
  namespace: myapp-development
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: 'system:image-puller'

Figure 10: Role binding resource to be updated

The ‘subjects’ field can have multiple objects defined within it. To patch the namespace entry in one of these objects, use a numeric index in the patch directive as shown in figure 11. A challenge does exist with respect to ordering when using indexes, and care must be taken to ensure that the correct array item is updated. Fixed index numbers are not ideal, and a label-based mechanism to select the correct array item would be better, but unfortunately no such mechanism currently exists.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patches:
- patch: |-
    - op: replace
      path: /subjects/0/namespace
      value: myapp-qa
  target:
    kind: RoleBinding
    name: myapp-development

Figure 11: Patch to update the namespace in the first subject item

Removing content

The remove directive can be used to remove a specific field from the resource definition or to remove an entire array object definition. The patch directives shown below, in figures 12 and 13, each refer to the role binding definition shown in figure 10 above.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patches:
- patch: |-
    - op: remove
      path: /subjects/0/namespace
  target:
    kind: RoleBinding
    name: myapp-development

Figure 12: Patch to remove the namespace field from the first subject

The patch directive shown above, in figure 12, will simply remove the namespace field from the service account subject of the role binding.

The patch directive shown below, in figure 13, will remove the complete service account subject field, leaving the role binding as shown in figure 14.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patches:
- patch: |-
    - op: remove
      path: /subjects/0
  target:
    kind: RoleBinding
    name: myapp-development

Figure 13: Patch to remove the entire subject entry

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: myapp-development
  namespace: myapp-ci
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: 'system:image-puller'
subjects: []

Figure 14 : Role binding with the subject removed
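The effect of this remove operation on the array can be modelled with the same RFC 6902 semantics; the Python below is an illustrative sketch only, not Kustomize’s implementation:

```python
# Illustrative sketch: removing index 0 from the 'subjects' array, as the
# patch in figure 13 does. Not Kustomize's actual implementation.

rolebinding = {
    "kind": "RoleBinding",
    "metadata": {"name": "myapp-development", "namespace": "myapp-ci"},
    "subjects": [
        {"kind": "ServiceAccount", "name": "default",
         "namespace": "myapp-development"},
    ],
}

# op: remove, path: /subjects/0
del rolebinding["subjects"][0]

print(rolebinding["subjects"])  # []
```

Because the role binding held a single subject, removing index 0 leaves the empty array shown in figure 14.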

Patching with new yaml content

A further way to patch yaml files is to use a file containing the elements to be replaced. In the example presented here the container image location is being updated. The original deployment file is shown below in figure 15, in which the container image is taken from an OpenShift image stream (abbreviated below).

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: myapp
    app.kubernetes.io/part-of: liberty
  name: myapp
  namespace: myapp-development
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: image-registry. . . .svc:5000/myapp-ci/myapp-runtime
        imagePullPolicy: Always
        ports:
        - containerPort: 9080
          name: http
          protocol: TCP

Figure 15 : Deployment resource referencing a container image from an image stream

The requirement is to replace the container image reference with a container image from quay.io. This could be achieved with a replace directive as shown in the prior examples, but in this example a patch file will be used that contains enough information to identify the correct deployment resource together with the required change. The patch file is shown first in figure 16, followed by the relevant section of the kustomization.yaml file that refers to the patch.

patch.yaml:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
      - name: myapp
        image: quay.io/marrober/myapp

kustomization.yaml (extract):

patches:
- path: patch.yaml
  target:
    kind: Deployment
    name: myapp

Figure 16: Patch file and kustomization file referring to the patch file

The advantage of using the patch file mechanism is that the patches are generally easy to produce, as they are a subset of the yaml file to which they apply, with the corrected values added. A possible negative is that the patches become fragmented across a number of files, which could lead to confusion.

Patch Testing

To assist teams creating Kustomize assets, it is possible to generate the set of assets that will be applied to an environment using the Kustomize command line. This capability is often used during pipeline processes to generate a set of assets that will ultimately be applied to an environment, such that those assets can be evaluated for security and policy compliance.

The patching process should always be tested to ensure that the patches as designed deliver the required result. The simplest way to do this is to execute the command below:

kustomize build

The command should be executed in the directory containing the kustomization.yaml file that you wish to test. This will generate a result on screen containing all processed yaml files in a single contiguous stream, with the documents separated by ‘---’ characters. To generate a file containing the content, simply pipe the output to a file as shown below:

kustomize build > new-resources.yaml

A second way to test the Kustomize patches is to use the command shown below. This can be particularly useful on a system that doesn’t have the kustomize command line utility installed since the OpenShift ‘oc’ command can process Kustomize files too.

oc apply -k . --dry-run="client"

The above command will show the resources that would be created in the cluster if the command was executed without the dry run option.

When things go wrong

If you have errors in the Kustomize file then, depending on the type of error, you will either see something obvious or you may have to dig a little deeper. Errors in the target selection will typically not show up, so the use of ‘kustomize build’ is very important to ensure that your patches are being applied to the right objects. A spelling mistake in the kind or the named object within the target block will simply result in the file you intended to patch being skipped, and kustomize will return a zero return code.

If you have an error in the patch directive then the ‘kustomize build’ command and the ‘oc apply’ command shown above will display an error message, and you will get a return code of 1 from the commands too.

Kustomize and ArgoCD

Kustomize and ArgoCD work together very well. ArgoCD joins together a specific directory of a Git repository and a namespace on a Kubernetes cluster. If the Git location referred to by the ArgoCD application contains a kustomization.yaml file then it will be processed to generate the Kubernetes resources to be applied to the cluster. This tight coupling of Kustomize and ArgoCD is why support for Kustomize is included in the Red Hat OpenShift GitOps operator. Full details of the exact versions of ArgoCD and Kustomize supported by the OpenShift GitOps operator on an OpenShift platform can be found in the OpenShift documentation in the section: “CI/CD -> GitOps -> OpenShift GitOps release notes”.
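As a sketch, an ArgoCD Application resource that couples a Kustomize overlay directory to a namespace looks like the following; the repository URL, path and namespace values are illustrative:

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-development
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: 'https://github.com/<org>/deployment-assets.git'   # illustrative
    targetRevision: main
    path: 01-development        # directory containing kustomization.yaml
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: myapp-development
  syncPolicy:
    automated: {}
```

Because the source path contains a kustomization.yaml file, ArgoCD runs the Kustomize build process automatically and applies the generated resources to the destination namespace.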

If you introduce a patch error into the Kustomization.yaml file and commit it to the Git repository, ArgoCD will attempt to use the damaged file to update the OpenShift environment. This will fail, and the ArgoCD application screen will display an error similar to that shown in figure 17.

Figure 17: ArgoCD error with a damaged kustomization.yaml file

The events for the application will show a little more detail as indicated in figure 18 below.

Sync operation to failed: ComparisonError: rpc error: code = Unknown desc = Manifest generation error (cached): `kustomize build .env/03-production` failed exit status 1: Error: Unexpected kind: replaces (retried 1 times).

Figure 18: ArgoCD Application event showing error in the kustomization.yaml file

Summary

Kustomize is an excellent utility for the configuration of Kubernetes resources to be used in specific environments. Kustomize enables teams to separate configuration data into a core set with overlays for each environment, avoiding duplication and the inherent risk of a missed update.