Introduction

Red Hat Advanced Cluster Management for Kubernetes (RHACM) Governance provides an extensible framework for enterprises to introduce their own security and configuration policies that can be applied to managed OpenShift or Kubernetes clusters. For more information on RHACM policies, I recommend that you read the Comply to standards using policy based governance and Implement Policy-based Governance Using Configuration Management blogs.

In order to assist in the creation and management of RHACM policies, use the policy generator tool. This tool, along with GitOps, greatly simplifies the distribution of Kubernetes resource objects to managed OpenShift or Kubernetes clusters through RHACM policies. In particular, the policy generator supports the following functionalities:

  • Convert any Kubernetes manifest files to RHACM configuration policies.
  • Patch the input Kubernetes manifests before they are inserted into a generated RHACM policy.
  • Generate additional configuration policies that report on Gatekeeper and Kyverno policy violations through RHACM.
  • Be run locally so that generated RHACM policies can be stored directly in Git and applied using GitOps.
  • Be run by GitOps tools so that only the manifests and the policy generator configuration need to be stored in Git.

The policy generator is implemented as a Kustomize generator plugin. This allows for any Kustomize configuration (e.g. patches, common labels) to be applied to the generated policies. This also means that the policy generator can be integrated into existing GitOps processes that support Kustomize, such as RHACM Application Lifecycle Management and OpenShift GitOps (i.e. ArgoCD).
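For example, a kustomization.yaml can combine the generator with standard Kustomize fields, which are then applied to the generated policies; the commonLabels value here is purely illustrative:

generators:
- policy-generator-config.yaml
commonLabels:
  app.kubernetes.io/part-of: governance-demo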

In this blog, I show you how to use the policy generator locally and also how to integrate it with GitOps.

How it works

The policy generator is implemented as a Kustomize generator plugin. This means that it must be specified in the generators section in a kustomization.yaml file. View the following example:

generators:
- policy-generator-config.yaml

The policy-generator-config.yaml file referenced in the previous example is a YAML file with the instructions of the policies to generate. In order for Kustomize to know which generator plugin this file is for, the following values must be present in the file:

apiVersion: policy.open-cluster-management.io/v1
kind: PolicyGenerator
metadata:
  # This is used to uniquely identify the configuration file. Replace this with a meaningful value.
  name: my-awesome-policies

In addition, there are two main sections in the policy generator configuration file. The first is policies and the second is policyDefaults.

The policies section is an array of policies that you would like to generate. Each policy entry is an object with at least two keys: name, which is the name of the generated policy, and manifests, which is an array of objects with at least the path key set to a Kubernetes resource object manifest file or a flat directory of such files. Note that a single file can contain multiple manifests. The remaining optional keys for a policy entry are overrides to the default settings and customizations to the input manifests.

The policyDefaults section is an object that overrides the default values that the policy generator uses across all the policies in the file. Its namespace key is required; this is the OpenShift or Kubernetes namespace that the generated policies are created in. This is also the only policyDefaults value that cannot be overridden by an entry in the policies section.

With all that in mind, a simple policy generator configuration file might be similar to the following example:

apiVersion: policy.open-cluster-management.io/v1
kind: PolicyGenerator
metadata:
  name: config-data-policies
policyDefaults:
  namespace: policies
policies:
- name: config-data
  manifests:
  - path: configmap.yaml

In the previous example, the resulting policy is called config-data in the policies namespace and verifies that the ConfigMap in configmap.yaml is set on all managed OpenShift or Kubernetes clusters. Note that the manifests can be any Kubernetes resource object manifest, but this example just uses a single ConfigMap for simplicity. To distribute the ConfigMap to all managed clusters, the remediationAction can be set to enforce (defaults to inform) as shown in the following example:

policies:
- name: config-data
  manifests:
  - path: configmap.yaml
  remediationAction: enforce

Let's imagine that you want to limit the distribution of this ConfigMap to just OpenShift clusters; you can specify a clusterSelectors entry as shown in the following example. For more information on cluster selectors, see the Placement rule documentation:

policies:
- name: config-data
  manifests:
  - path: configmap.yaml
  remediationAction: enforce
  placement:
    clusterSelectors:
      vendor: "OpenShift"

Note: Both of these approaches can be specified in the policyDefaults section instead.
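For example, the equivalent configuration with these settings promoted to policyDefaults might look like the following sketch:

apiVersion: policy.open-cluster-management.io/v1
kind: PolicyGenerator
metadata:
  name: config-data-policies
policyDefaults:
  namespace: policies
  placement:
    clusterSelectors:
      vendor: "OpenShift"
  remediationAction: enforce
policies:
- name: config-data
  manifests:
  - path: configmap.yaml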

For details on all the configuration possibilities, refer to the configuration reference file.

Installing and running the generator locally

Installation

To install the policy generator, the compiled binary must be in the proper directory structure that is defined in Kustomize. The following example shows how to install the v1.8.0 release on a Linux system with the x86-64 (amd64) architecture. Other precompiled binaries are available for download on the GitHub release page. Instructions for installing the policy generator on other operating systems or compiling it on your own, are available in the policy generator README.

First, start by creating the proper directory structure with the following command:

mkdir -p ${HOME}/.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator

Then download and install the policy generator with the following commands:

curl -L \
-o ${HOME}/.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator/PolicyGenerator \
https://github.com/stolostron/policy-generator-plugin/releases/download/v1.8.0/linux-amd64-PolicyGenerator
chmod +x ${HOME}/.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator/PolicyGenerator
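To verify the installation, confirm that the binary is in place and executable:

ls -l ${HOME}/.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator/PolicyGenerator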

Running the policy generator locally

Now that the policy generator is installed, let's try generating the policy with the simple policy generator configuration from the How it works section. To recap, the following files are required:

  • kustomization.yaml

    generators:
    - policy-generator-config.yaml
  • policy-generator-config.yaml

    apiVersion: policy.open-cluster-management.io/v1
    kind: PolicyGenerator
    metadata:
      name: config-data-policies
    policyDefaults:
      namespace: policies
    policies:
    - name: config-data
      manifests:
      - path: configmap.yaml
      remediationAction: enforce
      placement:
        clusterSelectors:
          vendor: "OpenShift"
  • configmap.yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: game-config
      namespace: default
    data:
      game.properties: |
        enemy=jabba-the-hutt
        weapon=lightsaber
      ui.properties: |
        color.bad=red

Once the files are created in the same directory, run the following command to generate the policy:

kustomize build --enable-alpha-plugins

The generated policy and related manifests should resemble the following example:

apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-config-data
  namespace: policies
spec:
  clusterConditions:
  - status: "True"
    type: ManagedClusterConditionAvailable
  clusterSelector:
    matchExpressions:
    - key: vendor
      operator: In
      values:
      - OpenShift
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-config-data
  namespace: policies
placementRef:
  apiGroup: apps.open-cluster-management.io
  kind: PlacementRule
  name: placement-config-data
subjects:
- apiGroup: policy.open-cluster-management.io
  kind: Policy
  name: config-data
---
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  annotations:
    policy.open-cluster-management.io/categories: CM Configuration Management
    policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    policy.open-cluster-management.io/standards: NIST SP 800-53
  name: config-data
  namespace: policies
spec:
  disabled: false
  policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: config-data
      spec:
        object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: v1
            data:
              game.properties: |
                enemy=jabba-the-hutt
                weapon=lightsaber
              ui.properties: |
                color.bad=red
            kind: ConfigMap
            metadata:
              name: game-config
              namespace: default
        remediationAction: enforce
        severity: low

Analyzing the output

When you look at the output from the Running the policy generator locally section, you might notice that there are three manifests:

  • The first is a PlacementRule manifest, which is used to determine the target managed OpenShift or Kubernetes clusters that this policy applies to. In this case, since the policy generator configuration file had a clusterSelectors entry of vendor: OpenShift, the PlacementRule is applied to all OpenShift managed clusters.

  • The second manifest is a PlacementBinding manifest. This is used to bind the PlacementRule to the generated policy. Without this, the PlacementRule does not take effect.

  • Lastly, the third manifest is a Policy manifest. This is the actual RHACM policy that distributes the game-config ConfigMap in the default namespace on all managed OpenShift clusters.
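If you want to apply the generated manifests directly instead of committing them to Git, you can pipe the Kustomize output to oc, assuming you are logged in to the RHACM hub cluster and the policies namespace exists:

kustomize build --enable-alpha-plugins | oc apply -f -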

Adding another policy

Let's modify the previous example so that there are two policies that are generated:

  1. Start by creating a second ConfigMap manifest file called configmap2.yaml with the following content:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: game-config2
      namespace: default
    data:
      game.properties: |
        hero=mace-windu
        weapon=lightsaber
      ui.properties: |
        color.bad=red
  2. Modify policy-generator-config.yaml to have the following content:

    apiVersion: policy.open-cluster-management.io/v1
    kind: PolicyGenerator
    metadata:
      name: config-data-policies
    placementBindingDefaults:
      name: binding-config-data
    policyDefaults:
      namespace: policies
      placement:
        name: placement-config-data
        clusterSelectors:
          vendor: "OpenShift"
      remediationAction: enforce
    policies:
    - name: config-data
      manifests:
      - path: configmap.yaml
    - name: config-data2
      manifests:
      - path: configmap2.yaml

Notice that much of the configuration was moved to the policyDefaults section in order to avoid duplicating it for the second policy entry. Additionally, the placementBindingDefaults object is set so that the policy generator can use a single PlacementBinding for all the generated policies rather than generate one for each policy. There is also a name value in the placement object so that the policy generator can use a single PlacementRule for all policies with a matching clusterSelectors configuration, rather than generating one for each policy. Lastly, a second policy called config-data2 is generated, which distributes the ConfigMap in the configmap2.yaml file on all managed OpenShift clusters.

  3. Run the kustomize build --enable-alpha-plugins command again to generate the following output:

    apiVersion: apps.open-cluster-management.io/v1
    kind: PlacementRule
    metadata:
      name: placement-config-data
      namespace: policies
    spec:
      clusterConditions:
      - status: "True"
        type: ManagedClusterConditionAvailable
      clusterSelector:
        matchExpressions:
        - key: vendor
          operator: In
          values:
          - OpenShift
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-config-data
      namespace: policies
    placementRef:
      apiGroup: apps.open-cluster-management.io
      kind: PlacementRule
      name: placement-config-data
    subjects:
    - apiGroup: policy.open-cluster-management.io
      kind: Policy
      name: config-data
    - apiGroup: policy.open-cluster-management.io
      kind: Policy
      name: config-data2
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      annotations:
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
        policy.open-cluster-management.io/standards: NIST SP 800-53
      name: config-data
      namespace: policies
    spec:
      disabled: false
      policy-templates:
      - objectDefinition:
          apiVersion: policy.open-cluster-management.io/v1
          kind: ConfigurationPolicy
          metadata:
            name: config-data
          spec:
            object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: v1
                data:
                  game.properties: |
                    enemy=jabba-the-hutt
                    weapon=lightsaber
                  ui.properties: |
                    color.bad=red
                kind: ConfigMap
                metadata:
                  name: game-config
                  namespace: default
            remediationAction: enforce
            severity: low
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      annotations:
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
        policy.open-cluster-management.io/standards: NIST SP 800-53
      name: config-data2
      namespace: policies
    spec:
      disabled: false
      policy-templates:
      - objectDefinition:
          apiVersion: policy.open-cluster-management.io/v1
          kind: ConfigurationPolicy
          metadata:
            name: config-data2
          spec:
            object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: v1
                data:
                  game.properties: |
                    hero=mace-windu
                    weapon=lightsaber
                  ui.properties: |
                    color.bad=red
                kind: ConfigMap
                metadata:
                  name: game-config2
                  namespace: default
            remediationAction: enforce
            severity: low

Generating a policy to install an operator

A common use for RHACM policies is to install an operator on one or more managed OpenShift clusters. See the Operator documentation for details. Two examples are provided in this blog, each covering a different installation mode.

A policy to install OpenShift GitOps

This example describes how to generate a policy that installs OpenShift GitOps using the policy generator. The OpenShift GitOps operator uses the all namespaces installation mode.

  • First, start by creating a Subscription manifest file called openshift-gitops-subscription.yaml. Your file should be similar to the following example:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: openshift-gitops-operator
      namespace: openshift-operators
    spec:
      channel: stable
      name: openshift-gitops-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace

If you want to pin to a specific version of the operator, you can set the spec.startingCSV value to openshift-gitops-operator.v1.2.1 (replacing v1.2.1 with your preferred version).
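For instance, a version-pinned Subscription might resemble the following sketch (v1.2.1 is only an example version):

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-operators
spec:
  channel: stable
  name: openshift-gitops-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  # Pin the operator to a specific version; replace with your preferred CSV.
  startingCSV: openshift-gitops-operator.v1.2.1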

  • Next, create a policy generator configuration file called policy-generator-config.yaml. The following example shows a single policy that installs OpenShift GitOps on all OpenShift managed clusters:

    apiVersion: policy.open-cluster-management.io/v1
    kind: PolicyGenerator
    metadata:
      name: install-openshift-gitops
    policyDefaults:
      namespace: policies
      placement:
        clusterSelectors:
          vendor: "OpenShift"
      remediationAction: enforce
    policies:
    - name: install-openshift-gitops
      manifests:
      - path: openshift-gitops-subscription.yaml
  • The last file that is required is the kustomization.yaml file. Your file should be similar to the following example:

    generators:
    - policy-generator-config.yaml

    With all those files in place, run kustomize build --enable-alpha-plugins to view a similar output as the following content:

    apiVersion: apps.open-cluster-management.io/v1
    kind: PlacementRule
    metadata:
      name: placement-install-openshift-gitops
      namespace: policies
    spec:
      clusterConditions:
      - status: "True"
        type: ManagedClusterConditionAvailable
      clusterSelector:
        matchExpressions:
        - key: vendor
          operator: In
          values:
          - OpenShift
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-install-openshift-gitops
      namespace: policies
    placementRef:
      apiGroup: apps.open-cluster-management.io
      kind: PlacementRule
      name: placement-install-openshift-gitops
    subjects:
    - apiGroup: policy.open-cluster-management.io
      kind: Policy
      name: install-openshift-gitops
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      annotations:
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
        policy.open-cluster-management.io/standards: NIST SP 800-53
      name: install-openshift-gitops
      namespace: policies
    spec:
      disabled: false
      policy-templates:
      - objectDefinition:
          apiVersion: policy.open-cluster-management.io/v1
          kind: ConfigurationPolicy
          metadata:
            name: install-openshift-gitops
          spec:
            object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: operators.coreos.com/v1alpha1
                kind: Subscription
                metadata:
                  name: openshift-gitops-operator
                  namespace: openshift-operators
                spec:
                  channel: stable
                  name: openshift-gitops-operator
                  source: redhat-operators
                  sourceNamespace: openshift-marketplace
            remediationAction: enforce
            severity: low

A policy to install the compliance operator

For an operator that has a namespaced installation mode, such as the compliance operator, an OperatorGroup manifest is also required. This example explores generating a policy to install the compliance operator.

First, start by creating a YAML file with a Namespace, a Subscription, and an OperatorGroup manifest called compliance-operator.yaml. The following example installs the operator in the openshift-compliance namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-compliance
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: compliance-operator
  namespace: openshift-compliance
spec:
  channel: release-0.1
  name: compliance-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: compliance-operator
  namespace: openshift-compliance
spec:
  targetNamespaces:
  - openshift-compliance

Next, create a policy generator configuration file called policy-generator-config.yaml. The following example shows a single policy that installs the compliance operator on all OpenShift managed clusters:

apiVersion: policy.open-cluster-management.io/v1
kind: PolicyGenerator
metadata:
  name: install-compliance-operator
policyDefaults:
  namespace: policies
  placement:
    clusterSelectors:
      vendor: "OpenShift"
  remediationAction: enforce
policies:
- name: install-compliance-operator
  manifests:
  - path: compliance-operator.yaml

The last file that is required is the kustomization.yaml file. Your file should resemble the following example:

generators:
- policy-generator-config.yaml

With all those files in place, run kustomize build --enable-alpha-plugins to get the following output:

apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-install-compliance-operator
  namespace: policies
spec:
  clusterConditions:
  - status: "True"
    type: ManagedClusterConditionAvailable
  clusterSelector:
    matchExpressions:
    - key: vendor
      operator: In
      values:
      - OpenShift
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-install-compliance-operator
  namespace: policies
placementRef:
  apiGroup: apps.open-cluster-management.io
  kind: PlacementRule
  name: placement-install-compliance-operator
subjects:
- apiGroup: policy.open-cluster-management.io
  kind: Policy
  name: install-compliance-operator
---
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  annotations:
    policy.open-cluster-management.io/categories: CM Configuration Management
    policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    policy.open-cluster-management.io/standards: NIST SP 800-53
  name: install-compliance-operator
  namespace: policies
spec:
  disabled: false
  policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: install-compliance-operator
      spec:
        object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: v1
            kind: Namespace
            metadata:
              name: openshift-compliance
        - complianceType: musthave
          objectDefinition:
            apiVersion: operators.coreos.com/v1alpha1
            kind: Subscription
            metadata:
              name: compliance-operator
              namespace: openshift-compliance
            spec:
              channel: release-0.1
              name: compliance-operator
              source: redhat-operators
              sourceNamespace: openshift-marketplace
        - complianceType: musthave
          objectDefinition:
            apiVersion: operators.coreos.com/v1
            kind: OperatorGroup
            metadata:
              name: compliance-operator
              namespace: openshift-compliance
            spec:
              targetNamespaces:
              - openshift-compliance
        remediationAction: enforce
        severity: low

Policy expanders

Policy expanders generate additional policies based on the kinds of the input manifests. They apply to RHACM policies that utilize other policy engines, and they provide a complete picture of violations and status through RHACM policies. Currently, there is an expander for Kyverno and one for Gatekeeper. These expanders are enabled by default, but each can be disabled individually by setting the flag associated with that expander in the policy generator configuration file, either in policyDefaults or for a particular policy in the policies array. See the configuration reference file for the flag applicable to each expander.
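For example, to disable the Kyverno expander for every generated policy, set its flag in policyDefaults; the flag names, such as informKyvernoPolicies shown here, are listed in the configuration reference:

policyDefaults:
  namespace: policies
  informKyvernoPolicies: false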

Let's take a look at an example of generating a RHACM policy to distribute a Kyverno policy and report its status back into RHACM. First, create a file called kyverno.yaml in the same directory as the kustomization.yaml file. The contents should be similar to the following example:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-labels
spec:
  validationFailureAction: audit
  rules:
  - name: check-for-labels
    match:
      resources:
        kinds:
        - Namespace
    validate:
      message: "The label `purpose` is required."
      pattern:
        metadata:
          labels:
            purpose: "?*"

This Kyverno policy verifies that all namespaces (i.e. OpenShift projects) have the purpose label set. If not, a violation is generated by Kyverno. Next, modify the policy-generator-config.yaml file to have the following content:

apiVersion: policy.open-cluster-management.io/v1
kind: PolicyGenerator
metadata:
  name: config-data-policies
policyDefaults:
  namespace: policies
  remediationAction: enforce
policies:
- name: namespace-purpose-required
  manifests:
  - path: kyverno.yaml

Now run kustomize build --enable-alpha-plugins to generate the policy. The following content is displayed; however, the PlacementRule and PlacementBinding are omitted for brevity:

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  annotations:
    policy.open-cluster-management.io/categories: CM Configuration Management
    policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    policy.open-cluster-management.io/standards: NIST SP 800-53
  name: namespace-purpose-required
  namespace: policies
spec:
  disabled: false
  policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: namespace-purpose-required
      spec:
        object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: kyverno.io/v1
            kind: ClusterPolicy
            metadata:
              name: require-labels
            spec:
              rules:
              - match:
                  resources:
                    kinds:
                    - Namespace
                name: check-for-labels
                validate:
                  message: The label `purpose` is required.
                  pattern:
                    metadata:
                      labels:
                        purpose: ?*
              validationFailureAction: audit
        remediationAction: enforce
        severity: low
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: inform-kyverno-require-labels
      spec:
        namespaceSelector:
          exclude:
          - kube-*
          include:
          - "*"
        object-templates:
        - complianceType: mustnothave
          objectDefinition:
            apiVersion: wgpolicyk8s.io/v1alpha2
            kind: ClusterPolicyReport
            results:
            - policy: require-labels
              result: fail
        - complianceType: mustnothave
          objectDefinition:
            apiVersion: wgpolicyk8s.io/v1alpha2
            kind: PolicyReport
            results:
            - policy: require-labels
              result: fail
        remediationAction: inform
        severity: low

When you examine the generated policy, you might notice that there is a second entry in the policy-templates array with a ConfigurationPolicy called inform-kyverno-require-labels. This additional entry was generated by the Kyverno expander. Depending on what the Kyverno policy is checking, a ClusterPolicyReport or a PolicyReport Kubernetes resource object exists, detailing whether the Kyverno policy is compliant. The automatically generated inform-kyverno-require-labels ConfigurationPolicy reports the status of the Kyverno policy for the managed OpenShift or Kubernetes clusters that the Kyverno policy was distributed to.

GitOps

Because the policy generator is implemented as a Kustomize plugin, many GitOps processing tools support the policy generator after it is installed in the container image or system that runs Kustomize. In RHACM 2.4, this integration comes out-of-the-box. OpenShift GitOps, based on ArgoCD, is another popular GitOps tool that requires some configuration changes which are described in the upcoming section.

Integrating with RHACM GitOps

Since the policy generator is preinstalled in RHACM 2.4 Application Lifecycle Management, no further configuration is necessary for it to be enabled. Configuring GitOps for generating policies is thus the same process as for a Git repository that uses Kustomize without the policy generator.

To illustrate this, a policy is generated that propagates a simple Apache web server deployment to all the managed OpenShift or Kubernetes clusters. To follow along, you must be logged in as a subscription administrator (subscription-admin) and create a namespace called policy-generator-blog along with an Application object, which allows for visualization in the RHACM console.

Then a Channel is created, which specifies the Git repository that the Kustomize policy generator configuration is in. In this case, the Git repository is https://github.com/stolostron/grc-policy-generator-blog.git. A Subscription is also created, which describes how to utilize the channel. In this case, it says to use the main branch and the kustomize directory within the Git repository.

Lastly, a PlacementRule is created, which declares that the generated policy should only be deployed on the RHACM hub cluster. Note that this does not affect the placement of the Apache web server deployment; this is only for the generated policy itself.
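A minimal sketch of what these Channel, Subscription, and PlacementRule objects might look like follows; the object names and namespace layout are illustrative assumptions:

apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: policy-generator-blog
  namespace: policy-generator-blog
spec:
  type: Git
  pathname: https://github.com/stolostron/grc-policy-generator-blog.git
---
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: policy-generator-blog
  namespace: policy-generator-blog
  annotations:
    # Use the main branch and the kustomize directory of the repository.
    apps.open-cluster-management.io/git-branch: main
    apps.open-cluster-management.io/git-path: kustomize
spec:
  channel: policy-generator-blog/policy-generator-blog
  placement:
    placementRef:
      kind: PlacementRule
      name: placement-policy-generator-blog
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-policy-generator-blog
  namespace: policy-generator-blog
spec:
  # Deploy the generated policy only to the RHACM hub cluster (local-cluster).
  clusterSelector:
    matchLabels:
      local-cluster: "true"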

Now, after you navigate to the RHACM Application Lifecycle Management page, the following application entry is displayed in the console:

[Image: application entry in the RHACM console]

After you select the application name, the Overview tab displays the following topology. This can take a few minutes to properly appear:

[Image: application topology in the Overview tab]

From the Overview section, notice that there are three resource objects deployed to the hub cluster (i.e. local-cluster). Those resource objects are the Policy, PlacementRule, and PlacementBinding.

At this point, take a look at the RHACM Governance page to see the deployed policy. The policy-generator-blog-app policy is displayed in the table as shown in the following image. It can take a few minutes for the policies to get created and show up on this page:

[Image: policy-generator-blog-app policy in the Governance table]

In this case, the policy is compliant, and the following resource objects were created:

  • A namespace of policy-generator-blog-app

  • A Deployment which creates a Pod that contains a single Apache web server container

  • A Service to expose port 8080 in the container within the namespace

  • A Route to expose the Apache web server outside of the namespace. To verify that this is set up properly, you can get the URL of the route with the following bash command:

    echo "https://$(oc get -n policy-generator-blog-app route policy-generator-blog -o=jsonpath='{@.spec.host}')"

After you open the URL in a web browser, the following web page is displayed:

[Image: Apache web server default page]

Clean up the created resource objects by running the following command on the RHACM hub cluster:

oc delete namespace policy-generator-blog

Then on every managed OpenShift or Kubernetes cluster, run the following command:

oc delete namespace policy-generator-blog-app

Integrating with OpenShift GitOps (ArgoCD)

OpenShift GitOps, based on ArgoCD, can also be used to generate policies with the policy generator through GitOps. Since the policy generator does not come preinstalled in the OpenShift GitOps container image, some customization must take place. To follow along, you need the OpenShift GitOps operator installed on the RHACM hub cluster, and you must be logged in to the hub cluster.

In order for OpenShift GitOps to have access to the policy generator when you run Kustomize, an Init Container is required to copy the policy generator binary from the RHACM Application Subscription container image to the OpenShift GitOps container that runs Kustomize. Additionally, OpenShift GitOps must be configured to provide the --enable-alpha-plugins flag when you run Kustomize. Run the following command to configure OpenShift GitOps:

oc -n openshift-gitops patch argocd openshift-gitops --type merge --patch "$(curl https://raw.githubusercontent.com/stolostron/grc-policy-generator-blog/main/openshift-gitops/argocd-patch.yaml)"

Now that OpenShift GitOps can use the policy generator, we must give OpenShift GitOps access to create policies on the RHACM hub cluster. To do that, run the following command to create a ClusterRole called openshift-gitops-policy-admin with access to create, read, update, and delete policies. Additionally, a ClusterRoleBinding is created to grant the OpenShift GitOps service account the openshift-gitops-policy-admin ClusterRole:

oc apply -f https://raw.githubusercontent.com/open-cluster-management/grc-policy-generator-blog/main/openshift-gitops/cluster-role.yaml

It's finally time to generate the policy using OpenShift GitOps:

  1. Start by creating the policy-generator-blog namespace, which contains the generated policy, on the RHACM hub cluster, as shown below.
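    For example, you can create it with the oc CLI:

    oc create namespace policy-generator-blog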

  2. Continue by creating the actual ArgoCD Application object that configures OpenShift GitOps to apply the Kustomize configuration in the grc-policy-generator-blog.git Git repository, as sketched below.
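    A minimal Application sketch, assuming the default openshift-gitops ArgoCD instance and the kustomize directory referenced earlier, might resemble the following; the object name is illustrative:

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: policy-generator-blog
      namespace: openshift-gitops
    spec:
      destination:
        # Create the generated policy in the namespace from the previous step.
        namespace: policy-generator-blog
        server: https://kubernetes.default.svc
      project: default
      source:
        path: kustomize
        repoURL: https://github.com/stolostron/grc-policy-generator-blog.git
        targetRevision: main
      syncPolicy:
        automated: {}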

  3. Navigate to the ArgoCD console. If you need help doing so, read the Logging in to the Argo CD instance documentation. After ArgoCD is done syncing the Git repository, you should see something similar to the following image:

    As shown in the image, the policy is successfully generated and created on the RHACM hub cluster. Notice how there is an additional policy called policy-generator-blog.policy-generator-blog-app. This shows that the policy is successfully distributed to a managed cluster. In the case of the previous screenshot, this is the local-cluster managed cluster (i.e. RHACM hub cluster). Note that in order for ArgoCD to not constantly delete this distributed policy or show that the ArgoCD Application is out of sync, the following parameter value was added to the kustomization.yaml file to set the IgnoreExtraneous option on the policy:

    commonAnnotations:
      argocd.argoproj.io/compare-options: IgnoreExtraneous
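    In context, the repository's kustomization.yaml might resemble the following sketch:

    generators:
    - policy-generator-config.yaml
    commonAnnotations:
      argocd.argoproj.io/compare-options: IgnoreExtraneous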
  4. Clean up the created resource objects and revert the OpenShift GitOps configuration by running the following commands on the RHACM hub cluster:

    oc delete namespace policy-generator-blog
    oc delete -f https://raw.githubusercontent.com/stolostron/grc-policy-generator-blog/main/openshift-gitops/cluster-role.yaml
    oc -n openshift-gitops patch argocd openshift-gitops --type merge --patch '{"spec": {"kustomizeBuildOptions": null, "repo": {"env": null, "initContainers": null, "volumeMounts": null, "volumes": null}}}'
  5. Run the following command on every managed OpenShift or Kubernetes cluster to delete the application:

    oc delete namespace policy-generator-blog-app

Conclusion

In this blog post, we explored the policy generator tool. Because it is a Kustomize generator plugin, it enables an organization to leverage the power of RHACM policies, with less complexity, through a multitude of workflows. An organization can generate policies that are committed directly to a Git repository that GitOps is configured for, or it can leverage the policy generator integration with GitOps tools without ever needing to maintain the actual Policy manifest files. This flexibility allows RHACM users with a wide range of skill sets and roles to leverage RHACM policies.

For more advanced features of the policy generator, refer to the configuration reference file.

