Overview
As outlined in the blog, Policy-based governance of Red Hat Advanced Cluster Management, Red Hat Advanced Cluster Management can be used to comply with enterprise and industry standards for aspects such as security and regulatory compliance, resiliency, and software engineering. Deviation from the defined values for such standards represents configuration drift, which can be detected using the built-in configuration policy controller of Red Hat Advanced Cluster Management.
Typically, technical capabilities for Red Hat OpenShift clusters are delivered and deployed as operators. Red Hat Advanced Cluster Management for Kubernetes policies can be used to deploy such operators by defining the appropriate Kubernetes resources. Many of these operators report their operational status, along with information on the security controls that they check, through Kubernetes resources. Processing this operational status can be accomplished by using the configuration policy controller.
In this blog, we explain how to use Red Hat Advanced Cluster Management built-in configuration policy controller to complete the following actions:
- Use best practices to configure Kubernetes resources that ensure various security aspects, such as access control and encryption.
- Deploy operators, check that they are running and configured properly, and receive status results from them.
Note: You can accomplish these actions by writing declarative policies, which reduces the effort required to roll out changes rapidly, conform to standards, and apply best practices at scale across a fleet of Kubernetes clusters that are managed by Red Hat Advanced Cluster Management. No programmatic code is required.
Best practices
While policies can be configured using the Red Hat Advanced Cluster Management console and CLI, the preferred approach is to use GitOps. In the GitOps approach, policies are applied with a Red Hat Advanced Cluster Management subscription, which retrieves policies from a folder within a Git repository. You can limit which users have access to create subscriptions and assign users the view role for policies in the Red Hat Advanced Cluster Management console.
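To illustrate the GitOps approach, the following is a minimal sketch of a channel and subscription that pull policies from a Git repository onto the hub cluster. The namespace, resource names, target folder, and the exact channel type and annotation key are assumptions for illustration and can differ between Red Hat Advanced Cluster Management versions:

```yaml
# Channel pointing at a Git repository that contains policies
apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: policies-channel
  namespace: policies        # assumed namespace for the policy resources
spec:
  type: Git
  pathname: https://github.com/ch-stark/policies-demo-blog
---
# Subscription that pulls one folder from the channel onto the hub cluster
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: policies-subscription
  namespace: policies
  annotations:
    # Selects a sub-folder of the repository; the annotation key has changed
    # between releases (github-path vs. git-path), so verify it for your version.
    apps.open-cluster-management.io/github-path: policies
spec:
  channel: policies/policies-channel
  placement:
    local: true              # deploy the retrieved resources on the hub cluster
```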
Red Hat Advanced Cluster Management can be used to configure all managed clusters from the hub cluster to restrict any direct activities on managed clusters. It is not mandatory, but it is recommended to manage all clusters from that central point. When you follow this approach, you gain answers to the following questions as you manage your clusters:
- How can I detect configuration drift? Are all my clusters configured in the same uniform way?
- Do all clusters have the same permissions?
- How can I find out why permissions have been changed on a certain cluster?
- A role has unexpectedly high permissions; how can that happen, and how can we review why it happened?
- How can I deploy my operators consistently among several clusters?
- How do I ensure that security policies are enforced across all my clusters?
- How do I update and manage all my policies across clusters?
Setting up the examples
The open-cluster-management/policy-collection repo provides a place for collaborative development of policies, and it contains policies for several use cases. In the following sections, some examples from the policy-collection repo are detailed to illustrate how Red Hat Advanced Cluster Management configuration management can be used to achieve policy-based governance and compliance with enterprise and industry standards, for aspects such as security and regulatory compliance, resiliency, and software engineering.
For our setup, we are using an OpenShift 4.6.6 cluster on AWS with Red Hat Advanced Cluster Management 2.1 installed. Since you can self-manage Red Hat Advanced Cluster Management 2.1, let's apply the tests directly on the hub cluster. Complete the following steps:
- Use the following script to configure the size for the workers, and use spot instances to benefit from cheaper prices:

  export MACHINESETS=$(oc get machineset -n openshift-machine-api -o json | jq '.items[]|.metadata.name' -r)
  for ms in $MACHINESETS
  do
    oc scale machineset $ms --replicas=0 -n openshift-machine-api
    oc patch machineset $ms -p='{"spec":{"template":{"spec":{"providerSpec":{"value":{"spotMarketOptions":{"maxPrice":0.40}}}}}}}' --type=merge -n openshift-machine-api
    oc patch machineset $ms -p='{"spec":{"template":{"spec":{"providerSpec":{"value":{"instanceType":"m5.xlarge"}}}}}}' --type=merge -n openshift-machine-api
    oc scale machineset $ms --replicas=1 -n openshift-machine-api
  done

- Install Red Hat Advanced Cluster Management as outlined in the product documentation.
- Apply the examples to the subscription folder with the following commands:

  git clone https://github.com/ch-stark/policies-demo-blog
  cd policies-demo-blog
  oc apply -k policy-subscription

  Afterwards, an application is displayed in the Red Hat Advanced Cluster Management console showing all the policies, which are referenced by the channel and subscription.
- If you want to set all the policies to enforce, run the following commands:

  # Replace inform with enforce
  find . -type f -name "*.*" -print0 | xargs -0 sed -i "s/inform/enforce/g"
  git add .
  git commit -am 'set to enforce'
  git push origin master

Now, if all policies are set to enforce, every Kubernetes object that is defined in a policy but not yet present on the clusters is created.
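For reference, the remediationAction field that the sed command toggles appears both on the Policy and on the nested ConfigurationPolicy. The following is a minimal sketch with illustrative names, not one of the policies from the repository:

```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-example
  namespace: policies            # assumed namespace
spec:
  remediationAction: enforce     # switch to inform to only report violations
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: policy-example-config
        spec:
          remediationAction: enforce
          severity: low
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  name: example-namespace
```

A Policy also needs a PlacementRule and PlacementBinding to be distributed to clusters; these are omitted here for brevity.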
View examples for how Red Hat Advanced Cluster Management can be used to comply with enterprise and industry standards for the following aspects:
Security and regulatory compliance
Now let’s learn about some of the policies for Security & Regulatory Compliance. View the following table for a description and instructions for these policies:
| Policy | Description | Instructions |
| --- | --- | --- |
| Red Hat OpenShift Compliance Operator (policy-compliance-operator) | The Compliance Operator is an optional OpenShift Operator for OpenShift 4.6 that allows an administrator to run compliance scans and provide remediations for the issues that are found. The operator leverages OpenSCAP to perform the scans. Be sure that all namespaced resources, such as the deployment or the custom resources that the operator consumes, are created in the namespace where the operator is installed. However, it is possible for the operator to be deployed in other namespaces. A primary object of the Compliance Operator is the ComplianceSuite, which represents a set of scans. The ComplianceSuite can be defined either manually or with the ScanSetting and ScanSettingBinding objects. | Access the Compliance Operator YAML from the open-cluster-management/policy-collection repo. Install and configure the Compliance Operator to create a ScanSettingBinding that the Compliance Operator uses to scan the cluster for compliance with the E8 benchmark. See the Compliance Operator E8 scan policy. |
| ETCD encryption (policy-etcdencryption) | You can enable etcd encryption to provide an additional layer of data security. See the OpenShift documentation for more details. This policy can help protect against the loss of sensitive data if an etcd backup is exposed to the incorrect parties. | See Red Hat Advanced Cluster Management ETCD encryption for more details. View an example of an etcd policy from the stable folder of the policy-collection repo. You can enhance the policy if you would like to view the progress of etcd encryption by monitoring the status fields. |
| Removal of kubeadmin (policy-security-remove-kubeadmin.yaml) | Use this policy to remove the kubeadmin user from managed clusters. How does the communication between the managed and managing cluster work when kubeadmin has been removed? Each add-on or policy controller uses its own service account to ensure the principle of least privilege. This ServiceAccount handles the policy synchronization between the managed and managing cluster. In this case, the ServiceAccount used is klusterlet-addon-policyctrl, which has the same permissions as the standard cluster-admin cluster role. It is used by the klusterlet-addon-policyctrl-framework pod. | You can disable the kubeadmin user and add temporary administrator users to a managed cluster. You can also pause a subscription with the timeWindow parameter, which disables the removal of the admin-user policy for some time while you manually add it. For more details, see the Subscription YAML structure in the product documentation. |
| Role-based access control (RBAC) (policy-role.yaml) | Use the RBAC policy to assign roles or cluster roles to groups and users. | You can add some users to the cluster-admin ClusterRole, and users and groups to the self-provisioner ClusterRole. See the policy-role example. |
| Restrict the self-provisioner ClusterRole (policy-security-configure-self-provisioner-crb.yaml) | Red Hat Advanced Cluster Management shines in this use case. Use the policy-security-configure-self-provisioner-crb.yaml policy to restrict the default OpenShift behavior that allows any authenticated user to provision new projects. | Ensure that the self-provisioner restriction is applied to all clusters in a multicluster environment. Update the rbac.authorization.kubernetes.io/autoupdate value to false to prevent the role binding from being automatically reset to its default state. For more details, see Disabling project self-provisioning. |
| SecurityContextConstraints (policy-security-scc.yaml) | Red Hat OpenShift provides security context constraints (SCCs), a security mechanism that restricts access to resources, but not to operations, in OpenShift. | View the policy-scc example for some example configuration. |
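As an illustration of how such a policy asserts cluster configuration, the following is a minimal sketch of an object-templates entry for etcd encryption; it is not necessarily identical to the policy in the policy-collection repo:

```yaml
object-templates:
  - complianceType: musthave
    objectDefinition:
      apiVersion: config.openshift.io/v1
      kind: APIServer
      metadata:
        name: cluster
      spec:
        encryption:
          type: aescbc   # enables aescbc encryption of etcd data
```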
Resiliency
Let's view use cases for resiliency in the following sections. The goal of resiliency is to configure OpenShift clusters in a way that lets us easily monitor and audit the environment. Now, we are going to install the logging operator using policies.
Install and configure the Logging Operator
You can install the logging operator using the policy-resiliency-setup-logging.yaml policy. For more information about installing the logging operator, see Installing cluster logging using the CLI.
Log-forwarding configuration is a dynamic feature of the logging stack and can be applied using a policy. View the policy-resiliency-enableclusterlogforwarder.yaml file, where the cluster is configured to send logs to the following Kafka topics: app-topic, infra-topic, and audit-topic.
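The resource that such a policy creates looks roughly like the following sketch of a ClusterLogForwarder; the broker address is a placeholder, and the exact output settings in the policy file may differ:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - name: app-out
      type: kafka
      url: tls://kafka-bootstrap.example.svc:9093/app-topic    # placeholder broker
    - name: infra-out
      type: kafka
      url: tls://kafka-bootstrap.example.svc:9093/infra-topic
    - name: audit-out
      type: kafka
      url: tls://kafka-bootstrap.example.svc:9093/audit-topic
  pipelines:
    - name: application-logs
      inputRefs: [application]
      outputRefs: [app-out]
    - name: infrastructure-logs
      inputRefs: [infrastructure]
      outputRefs: [infra-out]
    - name: audit-logs
      inputRefs: [audit]
      outputRefs: [audit-out]
```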
Check the health of the Cluster Operators
Check the health of the Cluster Operators with the ClusterOperator resource. This policy checks that no Cluster Operator is unavailable or degraded by making an implicit query over all ClusterOperator resources:
- complianceType: mustnothave
  objectDefinition:
    apiVersion: config.openshift.io/v1
    kind: ClusterOperator
    status:
      conditions:
        - status: 'False'
          type: Available
- complianceType: mustnothave
  objectDefinition:
    apiVersion: config.openshift.io/v1
    kind: ClusterOperator
    status:
      conditions:
        - status: 'True'
          type: Degraded
Install and use Gatekeeper policies
Gatekeeper is a customizable admission webhook for Kubernetes that enforces policies that are run with the Open Policy Agent (OPA), a policy engine for cloud-native environments. You can find all Gatekeeper-related policies in the community folder.
You can install Gatekeeper to create a Gatekeeper policy on your managed cluster. View the following steps to install and configure a Gatekeeper policy:
- Use the Gatekeeper operator policy to install the community version of Gatekeeper on your managed cluster.
- Use the Gatekeeper policy to enforce that containers in deployable resources do not use images with the latest tag, and that pods define a liveness probe or readiness probe. A simplified sketch of such a constraint follows this list.
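The following is a simplified sketch of what a Gatekeeper constraint against the latest tag can look like; it is not the exact policy from the community folder, and the template and constraint names are illustrative:

```yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sdisallowlatesttag
spec:
  crd:
    spec:
      names:
        kind: K8sDisallowLatestTag
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdisallowlatesttag
        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          endswith(container.image, ":latest")
          msg := sprintf("container <%v> must not use the latest tag", [container.name])
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDisallowLatestTag
metadata:
  name: containers-must-not-use-latest
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
```

Note that this simple check only catches images that explicitly end with :latest; images without any tag are not detected by this sketch.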
Configure audit log policies
OpenShift 4.6 supports configuring the audit log policy profile, where you can set the desired audit logging level. View the policy-resilience-audit-logging-sample.yaml file for the example; a minimal sketch of the underlying configuration follows the list of profiles below.
You can set profile to one of the following values:
- Default: Logs only metadata for read and write requests; does not log request bodies. This is the default policy.
- WriteRequestBodies: In addition to logging metadata for all requests, logs request bodies for every write request to the API servers (create, update, patch). This profile has more resource overhead than the Default profile.
- AllRequestBodies: In addition to logging metadata for all requests, logs request bodies for every read and write request to the API servers (get, list, create, update, patch). This profile has the most resource overhead.
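The profile is set on the cluster-wide APIServer resource. A minimal object-templates sketch, not necessarily identical to the sample policy, looks like this:

```yaml
object-templates:
  - complianceType: musthave
    objectDefinition:
      apiVersion: config.openshift.io/v1
      kind: APIServer
      metadata:
        name: cluster
      spec:
        audit:
          profile: WriteRequestBodies   # or Default / AllRequestBodies
```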
Configure and monitor the image pruning objects
An important use case for Red Hat Advanced Cluster Management resiliency is configuring image pruning objects from a centralized console. As described in the OpenShift documentation, you can automatically prune images that are no longer required by the system due to age, status, or because they exceed limits. For more information, see Pruning objects to reclaim resources.
Use the policy-resiliency-image-pruner.yaml file to configure automatic pruning.
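Automatic pruning is driven by the cluster-wide ImagePruner custom resource. The following is a minimal sketch with illustrative schedule and retention values, which may differ from the policy file:

```yaml
object-templates:
  - complianceType: musthave
    objectDefinition:
      apiVersion: imageregistry.operator.openshift.io/v1
      kind: ImagePruner
      metadata:
        name: cluster
      spec:
        schedule: "0 0 * * *"   # run the pruning job every day at midnight
        suspend: false
        keepTagRevisions: 3     # keep the three most recent revisions per tag
```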
Set up and configure the Cluster Resource Override Operator
An OpenShift Container Platform administrator can control the level of overcommit and manage container density on nodes. Master nodes can be configured to override the ratio between requests and limits that are set on developer containers. In conjunction with a per-project LimitRange specifying limits and defaults, this adjusts the container limit and request to achieve the desired level of overcommit.
OpenShift Container Platform 4.4 and later provide the Cluster Resource Override Operator, which is an admission webhook that allows you to control the level of overcommit and manage container density across all the nodes in your cluster. See Cluster-level overcommit using the Cluster Resource Override Operator for information about configuring the ClusterResourceOverride resource.
View the cluster-resource-overwrite.yaml file to configure the ClusterResourceOverride resource.
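The custom resource itself looks roughly like the following sketch; the override percentages are the illustrative values used in the OpenShift documentation, and namespaces still have to opt in with the corresponding label after the operator is installed:

```yaml
apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
  name: cluster
spec:
  podResourceOverride:
    spec:
      memoryRequestToLimitPercent: 50   # memory request becomes 50% of the limit
      cpuRequestToLimitPercent: 25      # CPU request becomes 25% of the limit
      limitCPUToMemoryPercent: 200      # CPU limit derived from the memory limit
```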
Configure Prometheus using policies
Learn how to configure the OpenShift monitoring stack consistently across clusters with applied policies. View the following descriptions of the policies that you can use; a minimal sketch of the underlying ConfigMaps follows this list:
- Change retention: By default, the OpenShift Container Platform monitoring stack configures the retention time for Prometheus data to be 15 days. You can modify the retention time to change how soon the data is deleted. View the policy-resiliency-prometheus-changeretentionuserworkload.yaml file for an example.
- Monitor custom namespaces: You can enable the OpenShift Container Platform 4.6 feature to monitor custom namespaces. For more information, see Enabling monitoring for user-defined projects. View the policy-resiliency-prometheus-enableuserworkload.yaml file for an example.
- Enable remote write: You can configure Prometheus to forward its monitoring data to a Kafka topic. View the policy-resiliency-prometheus-remote-rewrite.yaml file for an example. Note that this policy is not supported in OpenShift Container Platform.
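The user-workload policies above ultimately manage two ConfigMaps, sketched below with illustrative values; the actual policies in the repository may configure additional settings:

```yaml
# Enable monitoring of user-defined projects
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
---
# Set a custom retention time for the user-workload Prometheus
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      retention: 24h
```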
Software engineering
Now, let's view some use cases for Software engineering in the following sections.
Install and configure the Kafka (AMQ-Streams) Operator
Let's deploy the Kafka (AMQ Streams) operator using the policy-engineering-kafka-operator.yaml file.
Then configure the operator and create a Kafka cluster with four topics to send messages from the monitoring and logging stacks to Kafka topics. To achieve this, use the policy-engineering-kafka-config.yaml file.
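Installing the operator through a policy boils down to enforcing an OLM Subscription similar to the following sketch; the channel name is an assumption and should be checked against the channels available in your OperatorHub catalog:

```yaml
object-templates:
  - complianceType: musthave
    objectDefinition:
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: amq-streams
        namespace: openshift-operators
      spec:
        name: amq-streams              # operator package name in the catalog
        channel: stable                # assumed channel; verify for your catalog
        source: redhat-operators
        sourceNamespace: openshift-marketplace
```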
Other policies regarding best practices for Software engineering
View the following policy examples for best practices to comply with industry standards and controls for software engineering:
- Use policy-engineering-configmap-sample.yaml to configure ConfigMaps with Red Hat Advanced Cluster Management to keep environment-specific configuration outside of the container image.
- Use the following policies to set quotas and limit ranges at either the project or cluster level, as outlined in the OpenShift documentation, Resource quotas across multiple projects:
  - policy-engineering-clusterresourcequota.yaml
  - policy-engineering-resource-quota.yaml
  - policy-engineering-limit-ranges.yaml
- Use the policy-engineering-pod-disruption-budget.yaml policy to protect applications (for example, during updates) by using pod disruption budgets. See Specifying the number of pods that must be up with pod disruption budgets for more information. A minimal sketch of such a pod disruption budget follows this list.
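A pod disruption budget created by such a policy can look like the following minimal sketch; the namespace, selector, and minAvailable value are illustrative:

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
  namespace: my-app               # assumed application namespace
spec:
  minAvailable: 2                 # keep at least two pods running during disruptions
  selector:
    matchLabels:
      app: my-app
```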
Again, the value of the approach is that you can apply the necessary customization of a cluster from a centralized environment.
Label clusters using GitOps
There are use cases where you might need to label clusters using GitOps. For example, you can add labels to a cluster if you want to integrate some custom logic. View the policy-engineering-label-cluster.yaml file. For instance, if you add the label profile: demo to the local-cluster, your policy might resemble the following excerpt:
object-templates:
  - complianceType: musthave
    objectDefinition:
      apiVersion: cluster.open-cluster-management.io/v1
      kind: ManagedCluster
      metadata:
        labels:
          profile: demo
        name: local-cluster
      spec:
        hubAcceptsClient: true
Conclusion
Throughout the blog, we outlined best practices to create declarative policies for aspects of security and regulatory compliance, resiliency, and software engineering, all of which can be put in place without programming. In this manner, day-2 operational aspects can be rapidly rolled out on a fleet of OpenShift clusters on an ongoing basis, using Red Hat Advanced Cluster Management configuration management. We are continuously adding more policies to the open-cluster-management/policy-collection repo, encourage feedback, and welcome additional policy contributions.