As outlined in this blog post, Red Hat Advanced Cluster Management for Kubernetes (RHACM) governance provides a powerful policy framework for driving configuration across your fleet of managed clusters to a desired state that reflects best practices. A ConfigurationPolicy object, where the desired configuration is specified to the policy framework, was previously static: the same definition was applied to every managed cluster the policy was deployed to. With the introduction of the templates feature, you can now define flexible configuration policies with variable content. Templatization enables policy-based governance at scale, since a single policy is customized per cluster rather than requiring a separate policy for each managed cluster that needs slightly different behavior.

In this post, I showcase the benefits of templates and the usage of various template functions through examples.

Templates Overview

Templates in policy definitions must conform to the Go text template specification. You can specify whether templates are processed on the managed cluster or on the hub cluster.

  • Introduced in RHACM 2.3, templates delimited by {{ ... }} are processed on the target managed cluster by the configuration policy controller just before the configuration is enforced or monitored.
  • Added in RHACM 2.4, templates delimited by {{hub … hub}} are processed on the hub cluster by the policy controller just before the policy is propagated to the managed clusters for enforcement.
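To illustrate the difference, here is a sketch of an object definition that mixes both delimiter styles; the ConfigMap name and keys are hypothetical:

```yaml
# Sketch: the hub template ({{hub ... hub}}) is resolved on the hub before
# propagation; the managed cluster template ({{ ... }}) is resolved on the
# managed cluster just before enforcement.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-info
  namespace: default
data:
  # resolved on the hub: the name of the cluster this policy is propagated to
  clusterName: '{{hub .ManagedClusterName hub}}'
  # resolved on the managed cluster: the number of nodes observed locally
  nodeCount: '{{ printf "%d" (len (lookup "v1" "Node" "" "").items) }}'
```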

Along with support for Go templates, the policy framework provides several custom template functions for use in policy definitions:

  • Resource-specific functions, such as fromConfigMap(ns, name, key), reference the contents of specific resource objects on the cluster.
  • A generic lookup function references any object on the cluster.
  • Utility functions, such as base64enc and base64dec, transform values.
  • Objects referenced in custom functions are retrieved from the cluster where the template is processed.
  • Managed cluster templates can reference any object on the managed cluster. On the hub, for access control, templates in a Policy object can only reference objects in the same namespace as the Policy object.
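As an illustration, a single object definition can combine all three kinds of functions. This is a sketch; the ConfigMap and Secret names are hypothetical:

```yaml
# Sketch combining a resource-specific function, the generic lookup function,
# and a utility function in one object-templates entry.
object-templates:
  - complianceType: musthave
    objectDefinition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: app-settings
        namespace: default
      data:
        # resource-specific function: copy a field from another ConfigMap
        logLevel: '{{ fromConfigMap "default" "log-config" "level" }}'
        # generic lookup: read an arbitrary field from any object on the cluster
        ingressDomain: '{{ (lookup "config.openshift.io/v1" "Ingress" "" "cluster").spec.domain }}'
        # utility function: decode a base64-encoded value from a Secret
        dbUser: '{{ fromSecret "default" "db-creds" "db_username" | base64dec }}'
```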

For more information about the policy templates feature and the complete list of custom functions, see the product documentation.

Customizing Configuration to the Target Managed Cluster

Problem Description

Consider an example scenario where a Deployment object for nginx pods must be enforced on the managed cluster. The contents of the Deployment, however, depend on the target managed cluster configuration:

  • The nginx container image version depends on the Red Hat OpenShift version of the managed cluster.
  • The number of replicas to configure in the Deployment depends on the environment type of the cluster, such as production, dev, or test.

Without templates, this requires (a) knowing each managed cluster's configuration upfront, (b) hardcoding the container image and replica values in the policy definition, and (c) defining a separate policy for each target managed cluster with a different configuration.

Solution with templates

Typically, the Red Hat OpenShift version and environment type are available as labels on the ManagedCluster resource object. Each such value is also exposed as a ClusterClaim object (API group cluster.open-cluster-management.io) on the managed cluster. Using templates, the policy for the nginx Deployment can dynamically pick up the managed cluster specific values as follows:

  • Use managed cluster templates ({{ ... }}) to access the ClusterClaims, as these objects live on the managed cluster.

  • Use the fromClusterClaim "version.openshift.io" custom template function to get the OpenShift version of the target managed cluster.

  • Use the fromClusterClaim "env" custom template function to get the environment type of the target managed cluster.

  • If/else template constructs support choosing between custom and default values.
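For reference, a sketch of the ClusterClaim object that backs the version lookup on the managed cluster (the value shown is illustrative):

```yaml
# ClusterClaim exposing the OpenShift version on the managed cluster;
# fromClusterClaim "version.openshift.io" reads spec.value from this object
apiVersion: cluster.open-cluster-management.io/v1alpha1
kind: ClusterClaim
metadata:
  name: version.openshift.io
spec:
  value: "4.9.13"
```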


    apiVersion: policy.open-cluster-management.io/v1
    kind: ConfigurationPolicy
    metadata:
      name: policy-nginx-deployment
    spec:
      remediationAction: enforce
      object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: apps/v1
            kind: Deployment
            metadata:
              name: nginx-deployment-templatized
            spec:
              # if the env claim is prod, set replicas to 3, else 1
              replicas: '{{ ternary "3" "1" (eq (fromClusterClaim "env") "prod") | toInt }}'
              selector:
                matchLabels:
                  app: nginx
              template:
                metadata:
                  labels:
                    app: nginx
                spec:
                  containers:
                    - name: nginx
                      # if the OpenShift version is 4.9.13, use nginx 1.21.4, else 1.20.2
                      image: '{{- if eq (fromClusterClaim "version.openshift.io") "4.9.13" -}} nginx:1.21.4 {{- else -}} nginx:1.20.2 {{- end -}}'
                      ports:
                        - containerPort: 80

As the previous example shows, templates avoid hardcoding values and produce policy definitions customized to the target cluster. The complete policy can be found here. Another example of this pattern is a policy that configures the OpenShift log forwarder, where managed cluster specific information is likewise filled in with templates.
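A sketch of that log forwarder pattern follows; the output names, endpoints, and the env claim value are assumptions for illustration:

```yaml
# Hypothetical sketch: forward logs to a per-environment endpoint, where the
# output URL is chosen from the cluster's ClusterClaims at enforcement time
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - name: remote-elasticsearch
      type: elasticsearch
      # pick the log sink based on the managed cluster's environment claim
      url: '{{- if eq (fromClusterClaim "env") "prod" -}} https://es-prod.example.com:9200 {{- else -}} https://es-dev.example.com:9200 {{- end -}}'
  pipelines:
    - name: app-logs
      inputRefs:
        - application
      outputRefs:
        - remote-elasticsearch
```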

Improving Scalability When Managing Configuration of a Large Fleet of Managed Clusters

Problem Description

Consider the scenario where configuration of a network resource must be managed on a large fleet of 1000 managed clusters. The configuration is mostly the same across the fleet, except for a few settings whose values vary for each cluster:

  • A SriovNetwork object must be enforced across the entire fleet of clusters.
  • Network VLAN IDs and resource names vary for each managed cluster.

Without templates, achieving this requires creating a separate policy for each managed cluster with its own VLAN ID and resource name values. For 1000 managed clusters, that means creating 1000 Policy objects plus 1000 additional placement resources on the hub, all propagated to the managed clusters, just to enforce a single SriovNetwork object.

Solution with templates

This can be achieved through a single policy by storing variable configuration values of VLAN ID and resource name on the hub cluster and including hub templates in the policy definition to retrieve the values per managed cluster at runtime.

  • On the hub cluster, create a ConfigMap with all the variable data for each target cluster:

    # ConfigMap with the values of the variable data for each target cluster
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: site-config
      namespace: sites-sub
    data:
      cluster0001-interface: "ens5f0"
      cluster0001-resourceName: "du_fh"
      cluster0001-vlan: "3620"
      cluster0002-interface: "ens5f0"
      cluster0002-resourceName: "du_mh"
      cluster0002-vlan: "3621"

  • Define the policy by including hub templates that retrieve the managed cluster specific VLAN ID and resource name values from the preceding ConfigMap:

    vlan: '{{hub fromConfigMap "" "site-config" (printf "%s-vlan" .ManagedClusterName) | toInt hub}}'
  • .ManagedClusterName is a template context variable that is set at runtime to the name of the target managed cluster for which the hub cluster controller is processing the policy for propagation.

  • Create the Policy object in the same namespace as the ConfigMap, due to access control restrictions on hub templates:

    # policy that enforces the network resource
    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      name: policy-site-nw-templatized
      namespace: sites-sub
    spec:
      remediationAction: enforce
      policy-templates:
        - objectDefinition:
            apiVersion: policy.open-cluster-management.io/v1
            kind: ConfigurationPolicy
            metadata:
              name: policy-site-nw-templatized
            spec:
              remediationAction: enforce
              object-templates:
                - complianceType: musthave
                  objectDefinition:
                    apiVersion: sriovnetwork.openshift.io/v1
                    kind: SriovNetwork
                    metadata:
                      name: sriov-nw
                      namespace: openshift-sriov-network-operator
                    spec:
                      networkNamespace: openshift-sriov-network-operator
                      # hub templates with the fromConfigMap custom function retrieve values from the site-config ConfigMap
                      resourceName: '{{hub fromConfigMap "" "site-config" (printf "%s-resourceName" .ManagedClusterName) hub}}'
                      vlan: '{{hub fromConfigMap "" "site-config" (printf "%s-vlan" .ManagedClusterName) | toInt hub}}'
                      ...

As shown in the previous sample, templates drastically reduce the number of resource objects needed to manage this configuration: from 1000 Policy objects to one, and from 1000 placement objects to one. This makes the policies far easier to create and maintain and greatly reduces the load on the system. The complete policy definition can be found here; another example of this scenario is the policy-autoscaler-templatized.yaml sample policy, which demonstrates configuring the ClusterAutoscaler.
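To make the hub-side lookup concrete, here is a small Python simulation (not the actual controller) of how the key built by (printf "%s-vlan" .ManagedClusterName) is resolved against the ConfigMap data for each target cluster:

```python
# Illustrative simulation of hub-side template resolution: for each target
# managed cluster, the key "<cluster>-vlan" is built and looked up in the
# ConfigMap data, mirroring:
#   fromConfigMap "" "site-config" (printf "%s-vlan" .ManagedClusterName) | toInt
configmap_data = {
    "cluster0001-vlan": "3620",
    "cluster0002-vlan": "3621",
}

def resolve_vlan(managed_cluster_name: str) -> int:
    key = f"{managed_cluster_name}-vlan"   # printf "%s-vlan" .ManagedClusterName
    return int(configmap_data[key])        # fromConfigMap ... | toInt

print(resolve_vlan("cluster0001"))  # 3620
print(resolve_vlan("cluster0002"))  # 3621
```

Each propagated copy of the single Policy thus carries its own resolved VLAN ID, without any per-cluster Policy objects.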

Securely Managing and Distributing Sensitive Data

Problem Description

Consider a scenario where configuration of some sensitive data must be managed on the remote clusters, for example:

  • Database credentials are stored externally in a secret management system like HashiCorp Vault.
  • These credentials must be distributed to your fleet of managed clusters for use by an application workload running on the clusters.

Without templates, such sensitive information cannot be safely deployed to managed clusters using policies: Policy objects are not encrypted in etcd, and storing sensitive values in plain text in policies kept in a Git repository and deployed through GitOps is not recommended. Moreover, typical security best practice is to retrieve sensitive information dynamically from a secrets management system.

Solution with templates

This can be achieved by making the sensitive data available on the hub cluster as a Kubernetes Secret object and using hub cluster templates in the policy definition to retrieve the values at runtime and securely propagate secrets to the managed clusters.

The flow works as follows:

  • Using an operator like external-secrets, retrieve the database credentials from HashiCorp Vault and store them on the hub as a Secret resource object.
  • Use the fromSecret template function to reference the values of the database credentials Secret object from the previous step.
  • At runtime, the hub cluster retrieves the database credentials from the Secret resource object, encrypts the sensitive data string, replaces the template with the encrypted string, and propagates the updated Policy definition to the managed cluster.
  • On the managed cluster, sensitive data in the Policy definition is decrypted before being enforced locally.
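The first step, syncing the credentials from Vault to a hub Secret, could be done with an external-secrets resource along these lines (a sketch; the store name, namespace, and Vault path are assumptions):

```yaml
# Hypothetical sketch: external-secrets pulls the db credentials from Vault
# and materializes them as the db-creds Secret on the hub cluster
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-creds
  namespace: policies          # assumption: namespace of the Policy object
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend        # assumption: a configured SecretStore for Vault
    kind: SecretStore
  target:
    name: db-creds             # Secret created on the hub
  data:
    - secretKey: db_username
      remoteRef:
        key: secret/data/db    # assumption: Vault path for the credentials
        property: username
    - secretKey: db_password
      remoteRef:
        key: secret/data/db
        property: password
```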

View the following Secret example:

# example Secret resource that needs to be delivered to the remote managed cluster
apiVersion: v1
kind: Secret
metadata:
  name: db-creds
type: Opaque
data:
  db_username: XXX
  db_password: YYY


# policy that delivers and enforces the Secret on the remote managed cluster
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-securesecret
spec:
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: policy-securesecret
        spec:
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: v1
                kind: Secret
                metadata:
                  name: cloudconnectioncreds
                  namespace: default
                type: Opaque
                data:
                  # fromSecret returns the base64-encoded value of the key in the hub Secret
                  db_USER: '{{hub fromSecret "" "db-creds" "db_username" hub}}'
                  db_PASS: '{{hub fromSecret "" "db-creds" "db_password" hub}}'

The previous example shows how to securely distribute secrets from an external secret management system, synced with the hub cluster, to the managed clusters. Application workloads on those clusters can then use the secret to access the database. Note that once the Policy object is created, changes to the sensitive data synced from the external secret management system are not automatically refreshed on the managed clusters until the Policy object itself is updated on the hub cluster.

Conclusion

As this post shows, templates provide various benefits: they prevent hardcoding of values in Policy definitions, customize policies to the target managed cluster, and securely reference and distribute sensitive data in your policies. In all of these cases, templates reduce the number of Policy objects needed to manage configuration across a large fleet of managed clusters, improving ease of use and enabling policy-based governance at scale.

Check out the policy collection repository for more examples of policies that use the templates feature.

