A year ago, in The Path to Improving the Experience With RHEL Entitlements on OpenShift, I described the OpenShift Engineering team’s efforts to improve the use of RHEL Entitlements in Kubernetes-based workflows. Our goal has been to make the user experience on OpenShift 4.x comparable to OpenShift 3.x.

We’ve made a great deal of progress since March of 2021. First, I’ll demonstrate where things stand today, and then detail the changes over the past 11 months that got us there.

What can we demonstrate?

In OpenShift 4.9 and earlier, accessing RHEL Entitlements from OpenShift Builds meant creating a Secret with your subscription credentials in the same Namespace as your Build. In OpenShift 4.10 and later, that is no longer necessary: you can securely access Secrets in one Namespace from a separate Namespace.

We demonstrated this new approach on a 4.10 Cluster with the “TechPreviewNoUpgrade” feature set enabled (see Next Steps for details on the path for this new feature moving from Tech Preview to Full Support).
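For reference, enabling the Tech Preview feature set looks roughly like the following minimal sketch. Note that “TechPreviewNoUpgrade” cannot be reverted and blocks cluster upgrades, so apply it only to disposable test clusters:

apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster                        # the cluster-wide FeatureGate singleton
spec:
  featureSet: TechPreviewNoUpgrade     # irreversible; enables Tech Preview features such as Shared Resources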

In this example, there are a few actors of note:

  • An administrator with cluster-level permissions to access multiple Namespaces, create cluster-scoped objects, and create Roles and RoleBindings around those cluster-scoped objects, both for discovery and use.
  • Optionally, an organization might also employ Namespace administrators, who may own some of the necessary tasks for this scenario, like creating Roles and RoleBindings for users or ServiceAccounts.
  • General users (“developers” or “testers”) of the cluster, who are generally restricted to a single Namespace in which they build images with OpenShift Builds. To use RHEL Entitlements in those Builds, administrators give them access to discover the existence of SharedSecrets and/or SharedConfigMaps so that they can determine which one to reference in their BuildConfigs. In addition, their Namespace’s “builder” ServiceAccount is permitted to mount the specific SharedSecret as a Shared Resource CSI Volume in OpenShift Build Pods.

First, we create the following API Objects:

  • A Namespace to host your BuildConfig, Builds, etc. Either a cluster or namespace administrator can do this.

apiVersion: v1
kind: Namespace
metadata:
  name: my-csi-app-namespace

  • A SharedSecret that references the Secret where the Insights Operator stores the Subscription Credentials for a cluster created and registered at https://console.redhat.com/. A cluster administrator does this.

apiVersion: sharedresource.openshift.io/v1alpha1
kind: SharedSecret
metadata:
  name: my-share-bc
spec:
  secretRef:
    name: etc-pki-entitlement
    namespace: openshift-config-managed

  • A Role and RoleBinding, so that the “builder” service account in our Namespace can access the SharedSecret we created. A cluster or namespace administrator does this.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: shared-resource-my-share-bc
  namespace: my-csi-app-namespace
rules:
  - apiGroups:
      - sharedresource.openshift.io
    resources:
      - sharedsecrets
    resourceNames:
      - my-share-bc
    verbs:
      - use

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: shared-resource-my-share-bc
  namespace: my-csi-app-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: shared-resource-my-share-bc
subjects:
  - kind: ServiceAccount
    name: builder
    namespace: my-csi-app-namespace

  • And a BuildConfig that accesses RHEL Entitlements. A user / developer in a given Namespace does this.

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-csi-bc
  namespace: my-csi-app-namespace
spec:
  runPolicy: Serial
  source:
    dockerfile: |
      FROM registry.redhat.io/ubi8/ubi:latest
      RUN ls -la /etc/pki/entitlement
      RUN rm /etc/rhsm-host
      RUN yum repolist --disablerepo=*
      RUN subscription-manager repos --enable rhocp-4.9-for-rhel-8-x86_64-rpms
      RUN yum -y update
      RUN yum install -y openshift-clients.x86_64
  strategy:
    type: Docker
    dockerStrategy:
      volumes:
        - name: my-csi-shared-secret
          mounts:
            - destinationPath: "/etc/pki/entitlement"
          source:
            type: CSI
            csi:
              driver: csi.sharedresource.openshift.io
              readOnly: true
              volumeAttributes:
                sharedSecret: my-share-bc

Once these API objects are in place, you can start a Build from the BuildConfig and follow the logs with the `oc` command. Here is an example invocation and its log output, condensed for brevity (where “...” replaces lines of actual output):

$ oc start-build my-csi-bc -F
build.build.openshift.io/my-csi-bc-1 started
Caching blobs under "/var/cache/blobs".

Pulling image registry.redhat.io/ubi8/ubi:latest ...
Trying to pull registry.redhat.io/ubi8/ubi:latest...
Getting image source signatures
Copying blob sha256:5dcbdc60ea6b60326f98e2b49d6ebcb7771df4b70c6297ddf2d7dede6692df6e
Copying blob sha256:8671113e1c57d3106acaef2383f9bbfe1c45a26eacb03ec82786a494e15956c3
Copying config sha256:b81e86a2cb9a001916dc4697d7ed4777a60f757f0b8dcc2c4d8df42f2f7edb3a
Writing manifest to image destination
Storing signatures
Adding transient rw bind mount for /run/secrets/rhsm
STEP 1/9: FROM registry.redhat.io/ubi8/ubi:latest
STEP 2/9: RUN ls -la /etc/pki/entitlement
total 360
drwxrwxrwt. 2 root root         80 Feb  3 20:28 .
drwxr-xr-x. 10 root root        154 Jan 27 15:53 ..
-rw-r--r--. 1 root root   3243 Feb  3 20:28 entitlement-key.pem
-rw-r--r--. 1 root root 362540 Feb  3 20:28 entitlement.pem
time="2022-02-03T20:28:32Z" level=warning msg="Adding metacopy option, configured globally"
--> 1ef7c6d8c1a
STEP 3/9: RUN rm /etc/rhsm-host
time="2022-02-03T20:28:33Z" level=warning msg="Adding metacopy option, configured globally"
--> b1c61f88b39
STEP 4/9: RUN yum repolist --disablerepo=*
Updating Subscription Management repositories.


...

--> b067f1d63eb
STEP 5/9: RUN subscription-manager repos --enable rhocp-4.9-for-rhel-8-x86_64-rpms
Repository 'rhocp-4.9-for-rhel-8-x86_64-rpms' is enabled for this system.
time="2022-02-03T20:28:40Z" level=warning msg="Adding metacopy option, configured globally"
--> 03927607ebd
STEP 6/9: RUN yum -y update
Updating Subscription Management repositories.

...

Upgraded:
 systemd-239-51.el8_5.3.x86_64              systemd-libs-239-51.el8_5.3.x86_64        
 systemd-pam-239-51.el8_5.3.x86_64        
Installed:
 diffutils-3.6-6.el8.x86_64                   libxkbcommon-0.9.1-1.el8.x86_64          
 xkeyboard-config-2.28-1.el8.noarch          

Complete!
time="2022-02-03T20:29:05Z" level=warning msg="Adding metacopy option, configured globally"
--> db57e92ff63
STEP 7/9: RUN yum install -y openshift-clients.x86_64
Updating Subscription Management repositories.

...

Installed:
 bash-completion-1:2.7-5.el8.noarch                                                
 libpkgconf-1.4.2-1.el8.x86_64                                                    
 openshift-clients-4.9.0-202201211735.p0.g3f16530.assembly.stream.el8.x86_64  
 pkgconf-1.4.2-1.el8.x86_64                                                        
 pkgconf-m4-1.4.2-1.el8.noarch                                                    
 pkgconf-pkg-config-1.4.2-1.el8.x86_64                                            

Complete!
time="2022-02-03T20:29:19Z" level=warning msg="Adding metacopy option, configured globally"
--> 609507b059e
STEP 8/9: ENV "OPENSHIFT_BUILD_NAME"="my-csi-bc-1" "OPENSHIFT_BUILD_NAMESPACE"="my-csi-app-namespace"
--> cab2da3efc4
STEP 9/9: LABEL "io.openshift.build.name"="my-csi-bc-1" "io.openshift.build.namespace"="my-csi-app-namespace"
COMMIT temp.builder.openshift.io/my-csi-app-namespace/my-csi-bc-1:edfe12ca
--> 821b582320b
Successfully tagged temp.builder.openshift.io/my-csi-app-namespace/my-csi-bc-1:edfe12ca
821b582320b41f1d7bab4001395133f86fa9cc99cc0b2b64c5a53f2b6750db91
Build complete, no image push requested

And voila, an OpenShift Build in one Namespace securely utilizes RHEL Entitlements stored in a Secret in another Namespace.

Next Steps

Some items need to be completed for SharedResources to be promoted from Tech Preview to Fully Supported.

Fine-tuning “read only” enforcement

Currently, the CSI Driver is the sole gatekeeper enforcing that any Shared Resource Volume is read-only. Therefore, a user’s attempt to provision a Pod with a read-write Shared Resource Volume must still go through the kubelet flow, including any necessary pulling of Pod images, before an error is flagged.

Additionally, notification of the error does not occur when you create the Pod. You see it only after the Pod creation eventually fails, and you have to inspect the Pod status or Events to learn that the mount was rejected.

In upcoming releases, Kubernetes Validating Admission Controls will be put into place so that errors are flagged on Pod creation, and you’ll avoid all that extra execution.

Name reservation

Similar to how upstream Kubernetes and OpenShift already reserve certain Project/Namespace names, we’ll also be adding Kubernetes Admission control over the names of SharedSecrets and SharedConfigMaps, where most likely we’ll reserve names that start with “openshift-*”.

This change will allow us to safely pre-populate Clusters with well-known SharedSecrets and SharedConfigMaps without the risk of conflicting with instances that you, the user, may have created.

SIDE NOTE: We’ll also be looking into whether any Roles and RoleBindings around viewing or using SharedSecrets and SharedConfigMaps make sense. Feel free to provide input and opinions here.

Trusted CSI Drivers

Work still needs to be done to fine-tune the story around SecurityContextConstraints, PodSecurity admission control, and which in-line CSI Volume providers are trusted to be used by unrestricted users. Both upstream and downstream conversations around this topic were in progress when this blog post was published.

What happened with the “Next Steps” from the prior blog post?

The short answer is that none of the “Next Steps” manifested as predicted. The prototype described in the blog post evolved significantly, even as the core concept of securely sharing Secrets and ConfigMaps across multiple Namespaces stayed the same.

If you are curious about some of the details and rationale in how this project evolved, continue reading.

What from the core concepts stayed the same?

  1. Custom resource definitions (CRDs) declare which Secrets and ConfigMaps can be shared across namespaces.
  2. A new component still exists that is both an In-Line Ephemeral Container Storage Interface (CSI) Driver and a Kubernetes Controller that watches specific Secrets and ConfigMaps for updates.
  3. Kubernetes RBAC, including Roles, RoleBindings, and SubjectAccessReviews, ensures that the ServiceAccount for a Pod has been given permission to use a shared Secret or ConfigMap. A Pod consumes the shared Secret or ConfigMap by declaring a Volume in its Pod Spec that uses our new CSI Driver to mount the data associated with that CRD instance (see the sketch after this list).
  4. Even after a Pod has been provisioned with a CSI Volume that the new driver provides, permissions are periodically re-validated to make sure that the Pod can STILL access the data.
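To make item 3 concrete, here is a minimal sketch of a plain Pod that mounts a SharedSecret through the Shared Resource CSI Driver. The Pod and Volume names are hypothetical; it assumes the SharedSecret my-share-bc and the “builder” ServiceAccount RBAC from the demonstration above:

apiVersion: v1
kind: Pod
metadata:
  name: my-csi-app                     # hypothetical Pod name
  namespace: my-csi-app-namespace
spec:
  serviceAccountName: builder          # must have "use" permission on the SharedSecret
  containers:
    - name: app
      image: registry.redhat.io/ubi8/ubi:latest
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: shared-entitlement
          mountPath: /etc/pki/entitlement
          readOnly: true
  volumes:
    - name: shared-entitlement         # inline ephemeral CSI Volume backed by the SharedSecret
      csi:
        driver: csi.sharedresource.openshift.io
        readOnly: true                 # only read-only mounts are allowed (see SELinux below)
        volumeAttributes:
          sharedSecret: my-share-bc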

If CSI Volumes and Drivers are new to you, check out the overview and diagrams in the OpenShift Documentation.

What has changed?

Quite a bit, actually. But rest assured that it is all for the better.

Names and API

We reached consensus and renamed the CSI Driver the “Shared Resource CSI Driver.”

Instead of a single Custom Resource, we created separate types for sharing Secrets (SharedSecret) and ConfigMaps (SharedConfigMap). The CRDs and associated declarations were incorporated into the official core OpenShift API at https://github.com/openshift/api/tree/release-4.10/sharedresource, which prompted a change to the installation procedure and delivery.
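For completeness, a SharedConfigMap mirrors the SharedSecret shown earlier. Here is a minimal sketch with a hypothetical ConfigMap name and source Namespace:

apiVersion: sharedresource.openshift.io/v1alpha1
kind: SharedConfigMap
metadata:
  name: my-share-cm                  # hypothetical SharedConfigMap name
spec:
  configMapRef:
    name: my-config                  # hypothetical ConfigMap to share
    namespace: my-source-namespace   # Namespace where that ConfigMap lives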

Install / Delivery

After much debate and discussion, we decided to deliver this driver with OpenShift itself rather than as an operator installed with OLM. In the end, RHEL Entitlements are still central to how Red Hat enables customers to use supported software, so features facilitating that model should be available out of the box.

RBAC

The SubjectAccessReview semantics were slightly adjusted. Instead of employing the “get” verb, the Shared Resources Driver now employs the “use” verb when determining if a Pod (through its ServiceAccount) is allowed to access a SharedSecret or SharedConfigMap.

This approach better aligns with the discoverability concepts promoted in Kubernetes API design. In particular, a cluster administrator can separate rules for who can discover the presence of SharedSecrets and SharedConfigMaps vs. those who can mount them into the Pods.
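To illustrate that separation, a cluster administrator might grant broad, read-only discovery of the cluster-scoped objects with a ClusterRole like the following sketch (hypothetical name), while reserving the namespaced “use” Role shown earlier for the ServiceAccounts that actually mount them:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: shared-resource-viewer       # hypothetical name
rules:
  - apiGroups:
      - sharedresource.openshift.io
    resources:
      - sharedsecrets
      - sharedconfigmaps
    verbs:                           # discovery only; "use" is granted separately per Namespace
      - get
      - list
      - watch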

The SharedSecret contains only the name and Namespace of the Secret it references; viewing the SharedSecret does not reveal the contents of that Secret. Separate permissions are needed to access the Namespace where the Secret resides.

SELinux

It turns out there is a known “situation” with “read-only” CSI in-line Volumes: if the user requests “read-only” and the CSI Driver allocates a read-only filesystem in conjunction with that request, CRI-O cannot update the filesystem with the correct SELinux labels.

There are currently upstream Kubernetes proposals for addressing this that center around the CSI driver allowing/supporting only “read-only” Volumes, and providing a read-write filesystem for that Volume so that CRI-O can update the SELinux labels. Then, the Kubelet and CRI-O can construct the necessary labeling and filesystem layering so that what the consuming Pod gets is still read-only, and no updates to the Volume can be made from the Pod.

However, distinguishing between read-only and read-write volumes becomes difficult with this technique. So, as a compromise, only read-only is supported in this scenario.

Links to the current upstream activity are here and here. Some upstream drivers already employ this technique, such as the Secrets Store CSI Driver.

In anticipation of these KEPs being accepted, and following the precedent set by those upstream CSI drivers, the Shared Resource CSI Driver also employs this approach.

As such, the Shared Resource CSI Driver only allows volume mounts where “readOnly” is set to true, whereas the earlier prototypes did allow read-write Volumes. But the SELinux labels are now correctly set on any Volumes and their underlying filesystems that the driver provisions.

Injection

We didn’t use the Subscription Injection Enhancement Proposal, which was cited in the first blog post as a dependency for enabling access to RHEL Entitlements from OpenShift Builds; as of yet it has not been implemented (and quite possibly never will be at this point).

Instead, the OpenShift Build API was extended to allow the declaration of explicit Secret, ConfigMap, and CSI Volumes in suitable BuildStrategy types. And to be clear, with this change, OpenShift Builds can consume any CSI Volumes, not just those provided by the Shared Resource CSI Driver.
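For instance, a Docker-strategy Build can now also mount an ordinary Secret from its own Namespace as a build Volume. This is a minimal sketch with hypothetical Volume, Secret, and path names:

strategy:
  type: Docker
  dockerStrategy:
    volumes:
      - name: my-secret-volume              # hypothetical volume name
        mounts:
          - destinationPath: /var/run/my-secret
        source:
          type: Secret
          secret:
            secretName: my-app-secret       # hypothetical Secret in the Build's Namespace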

Subscription Content Access

Several elements of the Subscription Content Access Enhancement Proposal, however, have been implemented since the first blog post. Namely, the Insights Operator and its RHEL Simple Content Access feature give your cluster access to your RHEL Entitlements through the registration of your cluster at https://console.redhat.com/. That operator stores the entitlement in a well-known Secret in the cluster, activates it, and keeps it activated.

And as of 4.10, that feature has been promoted out of Tech Preview to general availability and full support.

As you saw in the demonstration, all the hacks and hand waving that occurred in the "Get Your Credentials and Make Them Accessible From Your OpenShift Cluster" section of the first blog post are replaced with a solution that is on par with what was available with OpenShift 3.x.

Troubleshooting

Please see this KCS article for details on problem determination with Shared Resources.


About the author

Starting in June 2015 at Red Hat, I have worked either on core OpenShift or various add-ons to the OpenShift ecosystem. Builds, Jenkins, Tekton, CLIs, Templates, Samples, and RHTAP are some of the broader areas I have either led or contributed to during my time at the company.
