Overview

The policy framework in Red Hat Advanced Cluster Management for Kubernetes (RHACM) is a powerful feature that helps you govern your configurations across multiple clusters. You can enforce specific configuration settings with these policies, and you can also monitor the configurations in your clusters from a security perspective. In other words, security policies can carry important configurations for your clusters, and if one of them is tampered with by an unauthorized person, credentials might be compromised or your cluster might be used for cyberattacks.

Continue reading this blog for a guide on how to protect your security policies and prevent unauthorized changes to them with signature protection. For this protection, you can use the K8s Integrity Shield, which verifies requests to create or update resources on your clusters against an attached signature. When you want to protect a Kubernetes resource, it is recommended to sign the resource YAML file beforehand and then define a protection rule for the resource. When you deploy the resource, the K8s Integrity Shield on the cluster automatically verifies the attached signature. The protection rule for policies is already defined if you installed K8s Integrity Shield with a policy.

With the signature protection, your policies are managed in the lifecycle as displayed in the following diagram:



The diagram shows the policy lifecycle with and without signature protection. The beginning of the diagram shows the normal policy lifecycle, where the git pull command is run to get the latest policies from the GitHub repository, and the policies are updated. Then the git push command is run to upload the policies to the remote repository, which initiates the GitOps sync to your target clusters.

On the other hand, the later part of the diagram shows what changes after you enable signature protection. A signer, who is responsible for approving your updates, reviews your changes and signs the updated policies if they are acceptable. In some cases, you can be the signer yourself. When the sync happens on the target clusters, the K8s Integrity Shield verifies the signatures and decides whether the update request should be allowed. If signature verification fails, the request is denied.

Getting started

In this section, you can follow the end-to-end steps of policy protection. First, generate your own signing and verification key. Set up the verification key on your clusters, and then enable signature protection. Finally, update the protected policies with your signature.

Prerequisites

For this blog, the following prerequisites must be ready on your clusters or your local machine:

  • Hub cluster and managed clusters

  • Policies that are already synced between the GitHub repository and clusters (see Contributing and deploying community policies with Red Hat Advanced Cluster Management and GitOps for more information)

  • Install the gpg, jq, yq commands

    Run the following command to install the previously mentioned commands:

    # RHEL/CentOS
    $ yum install gnupg jq yq

    View the following diagram that illustrates the prerequisites for the policies and GitOps. The policy repository is cloned on your local machine, GitOps is configured on your hub cluster, and policy sync is already set up for the target clusters. This is the starting point for enabling the policy protection:



1. Verification Key Setup

Goal of Step 1: Deploy your verification key on all the target clusters

View the following image that displays the verification key setup:



In this step, create your own signing key and verification key first. Then enable the auto-sync mechanism between the hub cluster and managed clusters.

1.1 Create Your Own Key

First, let's prepare your PGP signing key and verification key pair. The verification key is used for signature verification later in this blog. You can create your own signing key and verification key with the gpg command. In this guide, signer@enterprise.com is used as an example email address for the signer.
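
The following is a minimal sketch of generating such a key pair with the gpg command (requires GnuPG 2.1 or later); the rsa4096 algorithm and sign usage are illustrative choices, so adjust them to your organization's standards:

# Generate a signing key pair for the example signer
$ gpg --quick-generate-key "signer@enterprise.com" rsa4096 sign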

Next, let's export your public key as a .gpg file, which is converted into a Kubernetes secret in the next step:

# Export public key file
$ gpg --export signer@enterprise.com > /tmp/pubring.gpg

# Check if the file is created
$ ls -l /tmp/pubring.gpg
-rw-r--r--@ 1 user wheel 5959 Feb 17 14:06 /tmp/pubring.gpg

The public key is ready, so let's move on to the key setup.

1.2 Sync Verification Key Between the Hub Cluster and Target Clusters

Your public key is synced between your hub cluster and target clusters at this step. For more details about the sync mechanism, refer to this documentation.

First, prepare for the sync by creating an integrity-shield-operator-system namespace on your hub cluster. This namespace is only used for verification key sync. Run the following commands:

# Create a namespace for key setup 
$ oc create ns integrity-shield-operator-system
namespace/integrity-shield-operator-system created

# Check if the namespace has been created
$ oc get namespace integrity-shield-operator-system
NAME                               STATUS   AGE
integrity-shield-operator-system   Active   8s

Then, let's check which labels are on your target cluster. When you deploy the verification key, you can specify a cluster selector label that matches some of your managed clusters. You can check the labels of your managed clusters from the RHACM web console. In this guide, the environment=dev label is used as the target cluster selector. Verify that the environment=dev label is on your target cluster, as shown in the following image:


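If you prefer the command line, you can also check the labels from the hub cluster. The following is a quick alternative to the console, assuming you are logged in to the hub:

# List managed clusters with their labels
$ oc get managedclusters --show-labels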

Next, set up the key sync with the following commands, using the cluster label. The commands create a Secret, Channel, PlacementRule, and Subscription on the hub cluster for key sync:

# Set URL of sync setup script
$ export KEY_SYNC_SCRIPT_URL=https://raw.githubusercontent.com/open-cluster-management/integrity-shield/master/scripts/ACM/acm-verification-key-setup.sh

# Setup key secret sync with script
$ curl -s $KEY_SYNC_SCRIPT_URL | bash -s - --path /tmp/pubring.gpg --label environment=dev | oc apply -f -

secret/keyring-secret configured
channel.apps.open-cluster-management.io/keyring-secret-deployments configured
placementrule.apps.open-cluster-management.io/secret-placement configured
subscription.apps.open-cluster-management.io/keyring-secret configured

Sync should be configured now on your cluster. Let's check it by running the following command:

# Check if a channel is created
$ oc get channel -n integrity-shield-operator-system
NAME                         TYPE        PATHNAME                           AGE
keyring-secret-deployments   Namespace   integrity-shield-operator-system   20s

At this point, you may not find the key secret on your target clusters yet. However, once the integrity-shield-operator-system namespace is created there, the secret is automatically applied in that namespace. The integrity-shield-operator-system namespace on your target clusters is created automatically at step 3, so let's move on to the next step.
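
Once that namespace exists on a managed cluster, you can verify that the key arrived with a command like the following, run against the target cluster after step 3 completes:

# On a target cluster: confirm the verification key secret was synced
$ oc get secret keyring-secret -n integrity-shield-operator-system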

2. Change the policy-integrity-shield and Sign All Policies

Goal of Step 2: Sign all of your policies

View the following image that displays the workflow for this step:



Confirm that the policy that installs K8s Integrity Shield is configured correctly, and then sign all of the policies for K8s Integrity Shield protection.

2.1 Change policy-integrity-shield to Install It

You can update the policy-integrity-shield policy to install K8s Integrity Shield. From your local repository, find policy-integrity-shield.yaml in the community/CM-Configuration-Management/ directory. Let's update the remediationAction and PlacementRule fields in the policy to install K8s Integrity Shield.

First, update the remediationAction to enforce, so that the policy automatically installs K8s Integrity Shield if it is not installed already:

spec:
  remediationAction: enforce # CHANGEHERE; `inform` is set here by default
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: policy-integrity-shield-namespace
        spec:

Then, verify that the PlacementRule is using the correct target cluster label:

---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-policy-integrity-shield
spec:
  clusterConditions:
    - status: "True"
      type: ManagedClusterConditionAvailable
  clusterSelector:
    matchExpressions:
      - {key: environment, operator: In, values: ["dev"]} # CHANGEHERE; make sure this is your own target cluster label

When these changes are applied correctly, the policy is ready to be signed.

2.2 Sign All Policies

Once you sign the policies and K8s Integrity Shield is running, no one can change your policies without a signature.

To sign a specific policy, run the following commands. For this example, the policy-integrity-shield.yaml policy is used and signer@enterprise.com is used as the example signer:

# Set URL of policy signing script
$ export POLICY_SIGNING_SCRIPT_URL=https://raw.githubusercontent.com/open-cluster-management/integrity-shield/master/scripts/gpg-annotation-sign.sh

# Sign a specific policy yaml
$ curl -s $POLICY_SIGNING_SCRIPT_URL | bash -s signer@enterprise.com community/CM-Configuration-Management/policy-integrity-shield.yaml

Note that you need to sign all of your policies at this step for signature protection. If you miss signing a policy, it might not be synced correctly afterwards. As an example, the following command signs all policies at once, but be careful with this kind of batch processing:

$ find community -name \*.yaml | xargs -I% bash -c "curl -s $POLICY_SIGNING_SCRIPT_URL | bash -s signer@enterprise.com %"
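
To confirm that no policy was missed, you can list any YAML files that still lack a signature annotation; an empty result means everything is signed:

# List YAML files without a signature annotation (should print nothing)
$ find community -name \*.yaml | xargs grep -L "integrityshield.io/signature"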

Then you can check signature annotations in the signed policy YAML with the following command:

# Check that `integrityshield.io/signature` and `integrityshield.io/message` are attached as annotations.
$ less community/CM-Configuration-Management/policy-integrity-shield.yaml
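
The annotation block in the signed YAML should look roughly like the following; the values are encoded blobs generated by the signing script, so only placeholders are shown here:

metadata:
  annotations:
    integrityshield.io/message: <encoded policy content>
    integrityshield.io/signature: <encoded signature>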

Now, all the setup steps for policies are finished!

3. Deploy K8s Integrity Shield

Goal of Step 3: Install K8s Integrity Shield using a Policy resource



Let's move on to the installation step of K8s Integrity Shield and other policies, using GitOps.

First, commit and push your local policy changes to the remote repository:

# Commit and push changes to the remote repo
$ git commit -am "Sign policies"
$ git push

Based on the prerequisites of this blog, you should have GitOps for the community policies set up on your clusters. When you push your changes, the policy-integrity-shield policy initiates the installation of K8s Integrity Shield on your target clusters, and K8s Integrity Shield starts running after a few minutes. Once all of the components are deployed, you should see a Compliant status in the RHACM console. View the following image:


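You can also confirm from the command line that the K8s Integrity Shield components are running on a target cluster; this is a quick check, and the exact pod names vary by version:

# On a target cluster: check that the K8s Integrity Shield pods are running
$ oc get pods -n integrity-shield-operator-system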

4. Work with Signed Policies

Once the K8s Integrity Shield is deployed, none of your policies can be changed without a signature. When you want to change some attributes in a policy, sign the updated YAML as described in Step 2.
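
To see the protection in action, you can try an unsigned change against a protected policy on a target cluster; the request should be denied by the admission webhook. This is a hypothetical probe with placeholder names, and the exact error message depends on your version:

# On a target cluster: an unsigned edit to a protected policy should be denied
$ oc annotate policy <your-policy-name> -n <your-namespace> test=unsigned-change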

Disabling Signature Protection

To disable signature protection, you must uninstall the K8s Integrity Shield policy. However, there are a few unique steps to disable policy protection. Continue reading to learn more.

First, change all of the complianceType values from musthave to mustnothave in the policy-integrity-shield policy. There are four complianceType fields in the policy; change all of them to mustnothave. Then, sign the policy and push the change to the remote repository, as explained in the previous section. The updated policy-integrity-shield policy is synced to your target clusters, and K8s Integrity Shield should be uninstalled successfully.
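
For example, each of the four fields changes as follows; this is a sketch showing only the relevant lines, and the surrounding structure in your file stays as-is:

object-templates:
  - complianceType: mustnothave # CHANGEHERE; `musthave` is set here by default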

Conclusion

Throughout this blog, I have explained how to create your own signing key and set it up to enable signature protection on your policies. This powerful protection helps create a tamper-proof environment and reduces unauthorized modifications to your clusters.
