A year ago, in The Path to Improving the Experience With RHEL Entitlements on OpenShift,
I described the OpenShift Engineering team’s efforts to improve the use of RHEL Entitlements in Kubernetes-based workflows. Our goal has been to make the user experience on OpenShift 4.x comparable to OpenShift 3.x.
We’ve made a great deal of progress since March of 2021. First, I’ll demonstrate what we have accomplished over the past 11 months and then detail the changes that made it possible.
What can we demonstrate?
Previously, accessing RHEL entitlements from OpenShift Builds in OpenShift 4.9 and earlier required you to create a secret with your subscription credentials in the same Namespace as your Build. Now, in OpenShift 4.10 and later, you can instead securely access Secrets in one Namespace from a separate Namespace.
We demonstrated this new approach on a 4.10 Cluster with “TechPreviewNoUpgrade” enabled (see Next Steps for details on the path for this new feature moving from Tech Preview to Full Support).
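For reference, here is a minimal sketch of enabling that feature set; note that enabling “TechPreviewNoUpgrade” turns on all Tech Preview features and cannot be undone on that cluster:

```yaml
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster
spec:
  # Enables all Tech Preview features; this cluster can no longer be upgraded
  # between minor versions.
  featureSet: TechPreviewNoUpgrade
```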
In this example, there are a few actors of note:
- An administrator with cluster-level permissions to access multiple Namespaces, create cluster-scoped objects, and create Roles and RoleBindings around those cluster-scoped objects, both for discovery and use.
- Optionally, an organization might also employ Namespace administrators, who may own some of the necessary tasks in this scenario, such as creating Roles and RoleBindings for users or ServiceAccounts.
- General users (‘developers’ or ‘testers’) of the cluster, who are typically restricted to a single Namespace, where they build images with OpenShift Builds. To use RHEL Entitlements in those Builds, administrators give them permission to discover the existence of SharedSecrets and/or SharedConfigMaps so that they can determine which one to reference in their BuildConfigs. In addition, the “builder” ServiceAccount in their Namespace is permitted to mount the specific SharedSecret as a Shared Resource CSI Volume in OpenShift Build Pods.
First, we create the following API Objects:
- A Namespace to host your BuildConfig, Builds, etc. Either a cluster or namespace administrator can do this.
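A minimal sketch of such a Namespace (the name “my-csi-app-namespace” is just an example used throughout this post):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-csi-app-namespace
```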
- A SharedSecret that references the Secret where the Insights Operator stores the Subscription Credentials for a cluster created and registered at https://console.redhat.com/. A cluster administrator does this.
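A sketch of such a SharedSecret, assuming the Insights Operator stores the entitlement Secret as “etc-pki-entitlement” in the “openshift-config-managed” Namespace (verify the exact location for your release); the share name “my-share” is just an example:

```yaml
apiVersion: sharedresource.openshift.io/v1alpha1
kind: SharedSecret
metadata:
  name: my-share
spec:
  secretRef:
    # The Secret holding the entitlement certificates managed by the
    # Insights Operator (location assumed; verify on your cluster).
    name: etc-pki-entitlement
    namespace: openshift-config-managed
```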
- A Role and RoleBinding, so that the “builder” service account in our Namespace can access the SharedSecret we created. A cluster or namespace administrator does this.
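A sketch of the Role and RoleBinding, assuming the “my-share” SharedSecret and the “my-csi-app-namespace” Namespace from above; note the “use” verb, which is discussed later in this post:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: use-my-share
  namespace: my-csi-app-namespace
rules:
  - apiGroups:
      - sharedresource.openshift.io
    resources:
      - sharedsecrets
    resourceNames:
      - my-share
    verbs:
      - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: use-my-share
  namespace: my-csi-app-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: use-my-share
subjects:
  # The "builder" ServiceAccount runs OpenShift Build Pods in this Namespace.
  - kind: ServiceAccount
    name: builder
    namespace: my-csi-app-namespace
```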
- And a BuildConfig that accesses RHEL Entitlements. A user / developer in a given Namespace does this.
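A sketch of such a BuildConfig, referencing the “my-share” SharedSecret from above; the Dockerfile content, package, and mount path are illustrative:

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-csi-bc
  namespace: my-csi-app-namespace
spec:
  source:
    type: Dockerfile
    dockerfile: |
      FROM registry.redhat.io/ubi8/ubi:latest
      # With the entitlement certificates mounted at /etc/pki/entitlement,
      # dnf can install content from entitled RHEL repositories.
      RUN dnf install -y kernel-devel && dnf clean all
  strategy:
    type: Docker
    dockerStrategy:
      volumes:
        - name: etc-pki-entitlement
          mounts:
            - destinationPath: /etc/pki/entitlement
          source:
            type: CSI
            csi:
              driver: csi.sharedresource.openshift.io
              readOnly: true
              volumeAttributes:
                sharedSecret: my-share
```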
Once these API objects are in place, you can start a Build from the BuildConfig and follow the logs with the `oc` command. Here is an example invocation:
$ oc start-build my-csi-bc -F
And voila, an OpenShift Build in one Namespace securely utilizes RHEL Entitlements stored in a Secret in another Namespace.
Next Steps
Some items need to be completed for Shared Resources to be promoted from Tech Preview to Full Support.
Fine-tuning “read only” enforcement
Currently, the CSI Driver is the sole gatekeeper for the restriction that any Shared Resource Volume must be read-only. Therefore, a user’s attempt to provision a Pod with a read-write Shared Resource Volume must still go through the kubelet flow, including any necessary pulling of Pod images, before an error is flagged.
Additionally, the error is not reported when you create the Pod; you only see it when Pod creation eventually fails, and you have to inspect the Pod status or Events to discover that the mount was rejected.
In upcoming releases, Kubernetes Validating Admission Controls will be put into place so that errors are flagged on Pod creation, and you’ll avoid all that extra execution.
Name reservation
Similar to how upstream Kubernetes and OpenShift reserve certain Project/Namespace names, we’ll also be adding Kubernetes admission control over the names of SharedSecrets and SharedConfigMaps, most likely reserving names that start with “openshift-*”.
This change will allow us to safely pre-populate Clusters with well-known SharedSecrets and SharedConfigMaps without the risk of conflicting with instances that you, the user, may have created.
SIDE NOTE: We’ll also be looking into whether any Roles and RoleBindings around viewing or using SharedSecrets and SharedConfigMaps make sense. Feel free to provide input and opinions here.
Trusted CSI Drivers
Work still needs to be done to fine-tune the story around SecurityContextConstraints, PodSecurity admission control, and which in-line CSI Volume providers are trusted to be used by unrestricted users. Both upstream and downstream conversations around this topic were in progress when this blog post was published.
What happened with the “Next Steps” from the prior blog post?
The short answer is that none of the “Next Steps” manifested as predicted. The prototype described in the blog post evolved significantly, even as the core concept of securely sharing Secrets and ConfigMaps across multiple Namespaces stayed the same.
If you are curious about some of the details and rationale in how this project evolved, continue reading.
Which core concepts stayed the same?
- Custom resource definitions (CRDs) declare which Secrets and ConfigMaps can be shared across namespaces.
- A new component still exists that is both an In-Line Ephemeral Container Storage Interface (CSI) Driver and a Kubernetes Controller that watches specific Secrets and ConfigMaps for updates.
- Kubernetes RBAC, including Roles, RoleBindings, and SubjectAccessReviews, ensures that the ServiceAccount for a Pod has been given permission to use a shared Secret or ConfigMap. The shared Secret or ConfigMap is consumed by declaring a Volume in the Pod Spec that uses our new CSI Driver to mount the data associated with that CRD instance (see the sketch after this list).
- Even after a Pod has been provisioned with a CSI Volume the new driver provides, permissions are periodically re-validated to make sure that the Pod can STILL access the data.
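To make the last two points concrete, here is a minimal sketch of a Pod that mounts a SharedSecret through the driver, reusing the “my-share” example from the demonstration above (depending on your cluster’s policy, the Pod may also need a SecurityContextConstraint that permits CSI volumes; see Trusted CSI Drivers above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-csi-app
  namespace: my-csi-app-namespace
spec:
  # This ServiceAccount must be granted the "use" verb on the SharedSecret.
  serviceAccountName: builder
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi-minimal:latest
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: shared-entitlement
          mountPath: /etc/pki/entitlement
          readOnly: true
  volumes:
    - name: shared-entitlement
      csi:
        driver: csi.sharedresource.openshift.io
        readOnly: true  # the driver only accepts read-only mounts
        volumeAttributes:
          sharedSecret: my-share
```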
If CSI Volumes and Drivers are new to you, check out the overview and diagrams in the OpenShift Documentation.
What has changed?
Quite a bit, actually. But rest assured that it is all for the better.
Names and API
We reached consensus and renamed the CSI Driver to the “Shared Resource CSI Driver.”
Instead of a single Custom Resource, we created separate types for sharing Secrets (SharedSecret) and ConfigMaps (SharedConfigMap). The CRDs and associated declarations were incorporated into the official core OpenShift API at https://github.com/openshift/api/tree/release-4.10/sharedresource, which prompted a change to the installation procedure and delivery.
Install / Delivery
After much debate and discussion, we decided to deliver this driver with OpenShift itself rather than as an operator installed with OLM. In the end, RHEL Entitlements are still central to how Red Hat enables customers to use supported software, so features facilitating that model should be available out of the box.
RBAC
The SubjectAccessReview semantics were slightly adjusted. Instead of employing the “get” verb, the Shared Resources Driver now employs the “use” verb when determining if a Pod (through its ServiceAccount) is allowed to access a SharedSecret or SharedConfigMap.
This approach better aligns with the discoverability concepts promoted in Kubernetes API design. In particular, a cluster administrator can separate rules for who can discover the presence of SharedSecrets and SharedConfigMaps vs. those who can mount them into the Pods.
The SharedSecret contains only the name and Namespace of the Secret it references; it does not let you “view” that Secret’s contents. Separate permissions are needed to access the Namespace where the Secret resides.
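As a hypothetical illustration of that separation, a cluster administrator could define one ClusterRole that only allows discovering SharedSecrets and another that allows mounting a specific one (names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: shared-secret-discoverer
rules:
  # Can see which SharedSecrets exist, but cannot mount them.
  - apiGroups: ["sharedresource.openshift.io"]
    resources: ["sharedsecrets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: shared-secret-my-share-user
rules:
  # Can mount the "my-share" SharedSecret into Pods via the CSI Driver.
  - apiGroups: ["sharedresource.openshift.io"]
    resources: ["sharedsecrets"]
    resourceNames: ["my-share"]
    verbs: ["use"]
```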
SELinux
It turns out there is a known issue with “read-only” CSI in-line Volumes: if the user requests “read-only” and the CSI Driver allocates a read-only filesystem in conjunction with that request, CRI-O cannot update the filesystem with the correct SELinux labels.
There are currently upstream Kubernetes proposals for addressing this that center around the CSI driver allowing/supporting only “read-only” Volumes, and providing a read-write filesystem for that Volume so that CRI-O can update the SELinux labels. Then, the Kubelet and CRI-O can construct the necessary labeling and filesystem layering so that what the consuming Pod gets is still read-only, and no updates to the Volume can be made from the Pod.
However, distinguishing between read-only and read-write volumes becomes difficult with this technique. So, as a compromise, only read-only is supported in this scenario.
Links to the current upstream activity are here and here. Some upstream drivers already employ this technique, such as the Secrets Store CSI Driver.
In anticipation of these KEPs being accepted, and following the precedent set by those upstream CSI drivers, the Shared Resource CSI Driver also employs this approach.
As such, the Shared Resource CSI Driver only allows volume mounts where “readOnly” is set to true, whereas the earlier prototypes did allow read-write Volumes. But the SELinux labels are now correctly set on any Volumes and their underlying filesystems that the driver provisions.
Injection
We didn’t use the Subscription Injection Enhancement Proposal, which was cited in the first blog post as a dependency for enabling access to RHEL Entitlements from OpenShift Builds. It has not been implemented as of yet (and quite possibly never will be at this point).
Instead, the OpenShift Build API was extended to allow explicit declaration of Secret, ConfigMap, and CSI Volumes in suitable BuildStrategy types. And to be clear, with this change, OpenShift Builds can consume any CSI Volumes, not just those provided by the Shared Resource CSI Driver.
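For instance, a sketch of a build volume backed directly by a Secret rather than by a CSI Driver (the Secret and mount path names are illustrative):

```yaml
strategy:
  type: Docker
  dockerStrategy:
    volumes:
      - name: my-secret-volume
        mounts:
          - destinationPath: /var/run/secrets/my-secret
        source:
          type: Secret
          secret:
            secretName: my-secret
```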
Subscription Content Access
However, several elements of the Subscription Content Access Enhancement Proposal have been implemented since the first blog post. Namely, the Insights Operator and its RHEL Simple Content Access feature give your cluster access to your RHEL Entitlements subscription once the cluster is registered at https://console.redhat.com/. The operator stores the entitlement in a well-known Secret in the cluster, activates it, and keeps it activated.
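If Simple Content Access is enabled for your organization, you can look for that Secret on your cluster; in current releases it is expected to live at “etc-pki-entitlement” in the “openshift-config-managed” Namespace (verify the exact location in the documentation for your release):

```
$ oc get secret etc-pki-entitlement -n openshift-config-managed
```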
And as of 4.10, that feature has been promoted out of Tech Preview into general availability with full support.
As you saw in the demonstration, all the hacks and hand waving that occurred in the "Get Your Credentials and Make Them Accessible From Your OpenShift Cluster" section of the first blog post are replaced with a solution that is on par with what was available with OpenShift 3.x.
Troubleshooting
Please see this KCS article for details on problem determination with Shared Resources.