In this blog, we introduce the new tech-preview component, K8s Integrity Shield, which integrates with the policy framework of Red Hat Advanced Cluster Management for Kubernetes® version 2.1 and later to protect the integrity of resources on a cluster by signing those resources. A custom policy automates deployment of this new capability.

This integrity protection can be enabled with a custom policy that deploys K8s Integrity Shield on target clusters through the K8s Integrity Shield operator.

K8s Integrity Shield is a tech-preview capability, so it is not supported as a product by Red Hat.

Integrity for Kubernetes resources

Integrity Shield protects the configuration integrity of applications deployed on a cluster, which is an important part of several compliance and audit requirements. For example, NIST800-53 CM-5 and NIST800-53 CM-6 require the following:

  • "The information system prevents the installation of organization-defined software without verification that the component has been digitally signed using a certificate that is recognized and approved by the organization (NIST800-53 CM-5)."

  • "The organization employs automated mechanisms to centrally manage, apply, and verify configuration settings for organization-defined information system components (NIST800-53 CM-6)."

Kubernetes resources are represented as YAML files, which are applied to clusters when you create or update a resource. The YAML content is designed carefully to achieve the application's desired state and should not be tampered with. If the YAML content is modified maliciously or accidentally, and applied to a cluster without notice, the cluster moves to an unexpected state.

For example, some YAML artifacts might be modified to inject malicious scripts and configurations in a stealthy manner. As a result, administrators risk deploying them without knowing about the falsification.

A digital signature provides cryptographic assurance that data has not been altered, so a signature protects the integrity of the resource. The YAML content is signed and an encoded signature value is attached to the resource, as you can see in the following ConfigMap annotations:

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: sample-app-config
  name: sample-app-config
  annotations:
    integrityshield.io/signature: "sSDjkqf7EZw5C...ELQ==..."
    integrityshield.io/message: "YXBpVmVyc2lvbjo...Igo=..."
data:
  audit_enabled: "true"

Resource signing ensures that the resource is not substituted with something malicious before you apply it to the cluster. The YAML file is signed by a legitimate signer, and the signature is verified on the cluster when the request comes in.
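As a rough sketch (not necessarily the exact procedure used by the sign script introduced later in this blog), and assuming the message annotation carries the base64-encoded YAML content and the signature annotation carries a base64-encoded detached GPG signature over it, the values could be produced like this on the signing host; the file name sample-app-config.yaml is illustrative:

# Assumption: integrityshield.io/message is the base64-encoded YAML content and
# integrityshield.io/signature is a base64-encoded detached GPG signature over it.
gpg -u signer@enterprise.com --detach-sign --output sample-app-config.sig sample-app-config.yaml
base64 -w 0 sample-app-config.yaml   # value for the integrityshield.io/message annotation
base64 -w 0 sample-app-config.sig    # value for the integrityshield.io/signature annotation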

K8s Integrity Shield provides preventive control by enforcing signature verification for any request to create or update resources. Without a valid signature, resources cannot be created in the cluster, and changes to existing resources on a cluster are also prevented unless a new signature for the changed resource is verified.

Use cases include the following:

  • A quality assurance reviewer signs the resources before they can be applied.
  • A DevSecOps pipeline signs the resources after review and validation.
  • In GitOps, the resources (YAML) are managed in a Git repository. Resource signing validates that what is sent to the cluster comes from the valid Git repository.
  • Many Kubernetes applications are installed with an installer in YAML form. The integrity of the installer and the authenticity of its provider must be ensured before installation.

How K8s Integrity Shield works

K8s Integrity Shield works as an admission controller that handles all incoming admission requests. The following diagram shows how admission requests are inspected and blocked by K8s Integrity Shield. Every request is checked against a profile (Resource Signing Profile, RSP) that specifies which resources must be signed. If the resource must be signed, K8s Integrity Shield verifies whether a valid signature is supplied for the resource, using the preconfigured verification keys. If a valid signature is found, the request is applied as usual. If not, the request is blocked.

[Diagram: k8s-ishield — admission requests inspected and blocked by K8s Integrity Shield]

The K8s Integrity Shield operator, which is available from OperatorHub, deploys and manages all components required for enabling protection on the cluster, so it can be enabled on any Red Hat OpenShift cluster. It is also integrated as a custom policy with Red Hat Advanced Cluster Management for Kubernetes, which supports integrating new controls with its governance framework. K8s Integrity Shield can be deployed to target clusters automatically by using this custom policy and the K8s Integrity Shield operator.

Getting Started

Let’s take a look at how to enable K8s Integrity Shield by using custom policy with Red Hat Advanced Cluster Management.

Prerequisite

You need a GPG key pair for signing resources and verifying signatures. If you do not have one, see the key setup instructions to set up your key pair.
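For reference, a minimal sketch of creating such a key pair with GnuPG looks like the following; the signer identity signer@enterprise.com is the example used throughout this blog:

# Generate a key pair interactively for the signer identity
gpg --full-generate-key

# Confirm the key is available on the signing host
gpg --list-keys signer@enterprise.com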

Step 1. Set up the verification key on target clusters

A verification key needs to be configured on clusters before installing K8s Integrity Shield. The private key for signing is stored on the signing host and is never stored in the cluster.

First, export the verification (public) key to the file /tmp/pubring.gpg by running the following command:

gpg --export signer@enterprise.com > /tmp/pubring.gpg

Next, create the integrity-shield-operator-system namespace on the hub cluster:

oc create namespace integrity-shield-operator-system

Then, run the script to set up the verification key. The script is available from the Git repository. See the following example to register the public verification key from your signing host.

For example, run the following script to deploy the key as a keyring-secret secret resource in the integrity-shield-operator-system namespace on all dev clusters:

curl -s  https://raw.githubusercontent.com/open-cluster-management/integrity-shield/master/scripts/ACM/acm-verification-key-setup.sh | bash -s - \
--namespace integrity-shield-operator-system \
--secret keyring-secret \
--path /tmp/pubring.gpg \
--label environment=dev | oc apply -f -

The custom policy already includes a signer configuration with the file name pubring.gpg and the secret name keyring-secret. You can define other names in the CR if you want. See the signer configuration guide for more information.
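As an optional sanity check (not part of the documented procedure), you can confirm that the keyring-secret secret exists on a target dev cluster once it has been distributed:

# On a target (dev) cluster, verify that the verification key secret was created
oc get secret keyring-secret -n integrity-shield-operator-system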

Step 2. Enable custom policy on target clusters

The custom policy policy-integrity-shield.yaml for deploying K8s Integrity Shield can be found in the policy-collection repository on GitHub. The best practice is to use the contributed policies with GitOps, which is an automated way to track, manage, and control your policies with a Git repository such as GitHub. See the GitOps blog on how to deploy community policies with GitOps.

By default, the policy is deployed in inform mode, so K8s Integrity Shield is not deployed immediately after you apply the default policy. Configure the following settings to enable K8s Integrity Shield on target clusters:

  • Set remediationAction in the following specification to enforce (changed from inform):
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-integrity-shield
  annotations:
    policy.open-cluster-management.io/standards: NIST SP 800-53
    policy.open-cluster-management.io/categories: CM Configuration Management
    policy.open-cluster-management.io/controls: CM-5 Access Restrictions for Change
spec:
  remediationAction: inform # CHANGE THIS WHEN ENABLED
  disabled: false
  • Set the target clusters in the PlacementRule:
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-policy-integrity-shield
spec:
  clusterConditions:
  - status: "True"
    type: ManagedClusterConditionAvailable
  clusterSelector:
    matchExpressions:
    - {key: environment, operator: In, values: ["dev"]} # SPECIFY TARGET CLUSTERS

After these changes are applied, K8s Integrity Shield blocks any unsigned request to create or update the resources specified in the profiles, so a signature must be supplied for further changes.
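As an optional check (not part of the documented procedure), you can confirm on a target cluster that the shield components are running and that an admission webhook is registered; the exact pod and webhook names depend on the operator version:

# Components deployed by the K8s Integrity Shield operator
oc get pods -n integrity-shield-operator-system

# Admission webhook registration (the name depends on the version)
oc get validatingwebhookconfigurations,mutatingwebhookconfigurations | grep -i shield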

The default profile deployed with this policy is configured to protect all policy resources on target clusters. The following diagram shows how a signature protects the integrity of policies managed in GitOps. Let's assume the v1 policy is already signed with a v1 signature. If it is modified to a v2 policy but not re-signed, the attached signature is still v1, so K8s Integrity Shield blocks admission of the v2 policy because a v2 signature is not supplied.

[Diagram: k8s-ishield-policy-gitops — a signature protects the integrity of policies managed in GitOps]

For example, the following error is reported when a change to the policy is requested directly on the cluster, because the existing signature is not valid for the new policy content:

> admission webhook "ac-server.integrity-shield-operator-system.svc" denied the request: Signature verification is required for this request, but no signature is found. Please attach a valid signature to the annotation or by a ResourceSignature. (Request: {"kind":"Policy","name":"policy-community.policy-namespace","namespace":"ma4dev7","operation":"UPDATE","request.uid":"d884c010-5d32-4d87-8ab9-16004fd6b206","scope":"Namespaced","userName":"kube:admin"})

The error is also reported as a Kubernetes event, so the administrator is notified of the change that was attempted without a valid signature. See the following event. The administrator can fix the issue or supply a new signature after the necessary reviews and approvals:

$ oc get event --field-selector type=IntegrityShield
LAST SEEN TYPE REASON OBJECT MESSAGE
3m31s IntegrityShield no-signature policy/policy-community.policy-namespace [IntegrityShieldEvent] Result: deny, Reason: "Signature verification is required for this request, but no signature is found. Please attach a valid signature to the annotation or by a ResourceSignature.", Request: {"kind":"Policy","name":"policy-community.policy-namespace","namespace":"ma4dev7","operation":"UPDATE","request.uid":"d884c010-5d32-4d87-8ab9-16004fd6b206","scope":"Namespaced","userName":"kube:admin"}

You need to create a new signature whenever you change a policy and apply it to clusters. Otherwise, the change is blocked and not applied.

A new signature can be added to the YAML file by running a sign script on the signer's host. The yq command is required to run the script.

The following example signs the policy file policy-xxxxx.yaml with the key of the signer signer@enterprise.com. Then, apply the signed YAML file as shown after the script:

# CAUTION: The specified YAML file is modified with the new signature
curl -s https://raw.githubusercontent.com/open-cluster-management/integrity-shield/master/scripts/gpg-annotation-sign.sh | bash -s \
signer@enterprise.com \
policy-xxxxx.yaml
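After the script completes, the file should contain the signature annotations shown earlier in this blog; a quick way to confirm this before applying the file is the following:

# Confirm the signature annotations were added by the sign script
grep "integrityshield.io/" policy-xxxxx.yaml

# Apply the signed policy as usual
oc apply -f policy-xxxxx.yaml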

Customize profiles

K8s Integrity Shield allows you to protect resources other than policies with signatures. You can define your own profile to protect specific resources in a specific namespace.

For example, the following profile is designed to protect some resources in the secure-ns namespace. Note that the ResourceSigningProfile is created in the same namespace as the protected resources:

apiVersion: apis.integrityshield.io/v1alpha1
kind: ResourceSigningProfile
metadata:
  name: sample-rsp
  namespace: secure-ns
spec:
  protectRules:
  - match:
    - kind: ConfigMap
    - kind: Secret
    exclude:
    - kind: ConfigMap
      name: unprotected-cm
  - match:
    - apiGroup: rbac.authorization.k8s.io
  ignoreRules:
  - username: system:serviceaccount:secure-ns:secure-operator
  ignoreAttrs:
  - match:
    - name: protected-cm
      kind: ConfigMap
    attrs:
    - data.comment1

After this profile is created, the following protections are added:

  • Any ConfigMap or Secret must be signed, except the ConfigMap named unprotected-cm.
  • All resources in the rbac.authorization.k8s.io API group must be signed.
  • Any request by the secure-operator service account is allowed without a signature.
  • Any change to the attribute data.comment1 in the ConfigMap protected-cm is allowed without a signature.

You can flexibly customize which requests are allowed and which are blocked by defining profiles. For example, you can define a profile that protects a Deployment resource while allowing changes only to the replica count for scaling, as sketched below. See the ResourceSigningProfile documentation for further details.
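A sketch of such a profile, following the same schema as the sample profile above, might look like the following; the Deployment name sample-app and the profile name deployment-rsp are illustrative:

apiVersion: apis.integrityshield.io/v1alpha1
kind: ResourceSigningProfile
metadata:
  name: deployment-rsp
  namespace: secure-ns
spec:
  protectRules:
  - match:
    - kind: Deployment
      name: sample-app
  ignoreAttrs:
  - match:
    - kind: Deployment
      name: sample-app
    attrs:
    - spec.replicas   # allow unsigned changes to the replica count only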

Resources

Try the K8s Integrity Shield tech preview to enable integrity protection of the resources on your cluster. For more information, see the K8s Integrity Shield documentation and the following references: