
Folks who remember installing OpenShift 3.x and using their Red Hat entitlements when building new images will also remember a seamless experience from a developer’s perspective. In other words, it was “magic.” The cluster admin would set it all up and make it available to everyone as part of the install. The owner of a 'BuildConfig' in a given Project/Namespace could simply leverage the entitlements provided during install.

Then OpenShift 4.x Happened …

For all the value provided by 4.x’s new installation experience, in part because of the move to Red Hat Enterprise Linux CoreOS (RHCOS), one of the changes is that a set of subscription entitlements is no longer required to set up a cluster.

That means there is no longer, by default, a set of entitlements stored on the node/host that is accessible to everyone.

In the cluster administrator’s eyes, that could be viewed as a security improvement, among other things. It became apparent in customer interactions that a cluster administrator might not want those credentials available to anyone who could create a 'BuildConfig.'

But for the developers working on that cluster, the path to leveraging entitlements became more complicated. And that complication circled back to the cluster administrator, which was not all good news for that person either: a proliferation of credential copies could manifest as various developers in various namespaces attempted to get their entitlement-requiring 'BuildConfigs' functional on 4.x.

The Initial Forays for Mitigating the Change

Depending on the precise nature of your entitlement needs, there were a few solutions to consider:

Cluster-wide: If your needs include kernel modules, you have to deal with install-time and Machine Config Operator actions. We started documenting that approach in 4.3. Pods and any of their subtypes can subsequently consume the results of those actions.
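
To give a flavor of that cluster-wide approach, the documented pattern amounts to a 'MachineConfig' that writes the entitlement PEM onto each node via Ignition. The following is only an illustrative sketch (the file name, role, and Ignition version vary by release; consult the official documentation for the exact yaml):

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 50-entitlements
  labels:
    # target the MachineConfigPool(s) that need the entitlements
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      - path: /etc/pki/entitlement/entitlement.pem
        # 420 decimal == 0644 octal
        mode: 420
        contents:
          # base64-encode your entitlement PEM and embed it in the data URL
          source: data:text/plain;charset=utf-8;base64,<BASE64_ENCODED_PEM>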

Per BuildConfig: For less intrusive entitlements not requiring install-time manipulation, OpenShift Builds could be modified on a per-Project basis to consume entitlements. This assumed a cluster admin had previously logged onto your cluster’s nodes to set up the entitlements and then either created Kubernetes Secrets for you in your Namespace or supplied you the PEM files so that you could create the Secrets in the Project in question. That process has been documented since 4.1.
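
To give a flavor of that per-Project flow: once the admin hands you the PEM files, creating the Secret is a one-liner (the file names here are placeholders):

$ oc create secret generic etc-pki-entitlement \
    --from-file=entitlement.pem \
    --from-file=entitlement-key.pem \
    -n <your-project>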

Processing the Feedback From These Mitigations

Extensive feedback was provided on these supported mitigations. To summarize:

  • The process for injecting the credentials was tedious.
  • The process for injecting the credentials did not easily allow for applying permission rules to consume the credentials.

There are various initiatives underway to improve the experience. This article will not attempt to dive into all of them. Instead, it covers the first part of these initiatives: automating the exposure of Red Hat Enterprise Linux (RHEL) entitlements inside builds and pods.

This article will dive into our efforts to facilitate the following subset of requirements from the feedback.

  • Inject a given set of entitlement information into a Kubernetes Secret and/or ConfigMap ONCE AND ONLY ONCE and share it among multiple namespaces as the cluster administrator deems appropriate.
  • Use the standard Kubernetes constructs Roles / RoleBindings / ClusterRoles / ClusterRoleBindings to control which namespaces (and specifically which ServiceAccounts in those namespaces) have access to the entitlement(s) in question.
  • Access entitlement information from a broad array of Kubernetes and OpenShift object types.
  • Avoid using Kubernetes HostPath volumes and privileged Pods to access entitlements, as those are not desirable in many customer clusters.

Where Are We?

The following OpenShift Enhancement Proposals were accepted and merged to simplify accessing RHEL entitlements in builds and pods.

  1. How we would mount shared Secrets and ConfigMaps into Pods was captured in Share Secrets And ConfigMaps Across Namespaces via a CSI Driver.  Subsequently, an initial implementation for that proposal is up at this git repository.
  2. The approach to mutate the multitude of API objects that are ultimately converted into Pods was captured in Subscription Injection Operator.
  3. And the higher-level description of how the various efforts are coordinated into a holistic solution was captured in Subscription Content Access.

And What Can We Demonstrate Now?

The Projected Resource API and CSI Driver are at a point where we can access entitlement content in a Pod.

In illustrating this, we’ll first take a step back and revisit how you can leverage entitlements in OpenShift Builds today. Then, after showing what can be done with the new Projected Resource function, the “before” and “after” pictures will be clear enough that the path to improvement will be discernible.


Get Your Credentials and Make Them Accessible From Your OpenShift Cluster

As this is an OpenShift-related article, we will assume you know how to log in or “register” with ‘subscription-manager’.

Just in case, though, https://access.redhat.com/labs/registrationassistant/ can help you get access to your subscriptions if you need it.

The only other detail worth noting from our testing is that your ‘subscription-manager register’ session needs to remain active for the entitlement PEM files it creates (which we subsequently store in OpenShift ‘Secrets’) to work.
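
To make that concrete, a registration session and the resulting PEM files look something like this (‘subscription-manager’ prompts for a password, and the <ID> prefix in the file names is generated per registration):

$ subscription-manager register --username=<your-red-hat-login>
$ ls /etc/pki/entitlement
<ID>.pem  <ID>-key.pem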

So, from wherever you ran ‘subscription-manager register’, you can then examine ‘/etc/yum.repos.d/redhat.repo’ for the various content you have access to. As it happens, the host where we established our ‘subscription-manager’ session did not have the 'oc' binary, which is the client access command for OpenShift. Since we need that to log into our cluster and inject our subscription credentials into a Secret, we’ll use its RPM as the “entitled content” for the rest of our demonstration! But certainly replace the RPM/repo examples with any repos and RPMs you have access to and interest in.

So, finding the repo for OpenShift, we go through the standard means of enabling that repo and then installing the RPM:

$ subscription-manager repos --enable \
    rhocp-4.6-for-rhel-8-x86_64-rpms
# we found rhocp-4.6-for-rhel-8-x86_64-rpms by examining '/etc/yum.repos.d/redhat.repo'
$ yum --disablerepo="*" \
    --enablerepo="rhocp-4.6-for-rhel-8-x86_64-rpms" list available
$ yum install openshift-clients.x86_64
# we found openshift-clients.x86_64 by examining the 'yum list available' output

Now that we have 'oc,' we can log into our cluster using our OpenShift credentials:

$ oc login --token=<your token>  --server=<your api server's URL>

You can now inject your subscription credentials (.pem files) into a Secret. For our demonstration, we are going to create two Secrets: one in the “per developer” namespace that will be used by OpenShift Builds, and the other in a “shared” namespace, presumably established by an administrator, who uses it to give access to that Secret to multiple developers operating in namespaces separate from the “shared” namespace.

So here is the yaml the administrator will use to create the namespaces for our demonstration via an 'oc apply -f' (it does not matter which host the administrator does this from, as long as the administrator is logged into the cluster in question):

apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
  name: my-csi-app-namespace

---

apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
  name: shared-secrets-configmaps

Now from where we registered with Subscription Manager and logged into our OpenShift cluster, we are going to create those secrets:

# ID will change between shutdown / restart of a UBI 8 Container and subsequent subscription-manager register invocations
$ oc create secret generic etc-pki-entitlement --from-file \
/etc/pki/entitlement/<ID>.pem --from-file \
/etc/pki/entitlement/<ID>-key.pem -n my-csi-app-namespace
$ oc create secret generic etc-pki-entitlement --from-file \
/etc/pki/entitlement/<ID>.pem --from-file \
/etc/pki/entitlement/<ID>-key.pem -n shared-secrets-configmaps

With that, we have sufficiently injected our Red Hat Subscription Credentials for use in our cluster. Let’s now visit ways to use them.

Revisit Using the Credentials in an OpenShift Build

Switch into the 'my-csi-app-namespace' namespace we created in the last section. We’ll use it as our “developer” namespace, both for revisiting how OpenShift Builds consume your credentials and for trying the new Projected Resource Share API in the next section.

You should see the secret with our credentials here:

$ oc project my-csi-app-namespace
$ oc get secret etc-pki-entitlement
NAME                  TYPE     DATA   AGE
etc-pki-entitlement   Opaque   2      39s
$

Now, if you need proof that we cannot install entitled content without this Secret, we can get it quickly. Create a 'BuildConfig' based on this yaml:

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: verify-entitlements-fail-without-creds
spec:
  runPolicy: Serial
  source:
    dockerfile: |-
      FROM registry.redhat.io/ubi8/ubi:latest
      USER root
      #COPY ./etc-pki-entitlement /etc/pki/entitlement
      #COPY ./rhsm-conf /etc/rhsm
      #COPY ./rhsm-ca /etc/rhsm/ca
      RUN rm /etc/rhsm-host && \
      yum repolist --disablerepo=* && \
      subscription-manager repos --enable rhocp-4.6-for-rhel-8-x86_64-rpms && \
      yum -y update && \
      yum install -y openshift-clients.x86_64 && \
      #rm -rf /etc/pki-entitlement && \
      #rm -rf /etc/rhsm
      USER 1001
      ENTRYPOINT ["/bin/bash"]
    type: Dockerfile
  strategy:
    dockerStrategy:
      #env:
      #- name: BUILD_LOGLEVEL
      #  value: "10"
    type: Docker

Notice our credential secret is not included in the 'BuildConfig' definition. Start a build:

$ oc start-build verify-entitlements-fail-without-creds
build.build.openshift.io/verify-entitlements-fail-without-creds-1 started
$

And you’ll see from the results and log we needed our credentials:

$ oc get builds
NAME                                       TYPE     FROM         STATUS                       STARTED         DURATION
verify-entitlements-fail-without-creds-1   Docker   Dockerfile   Failed (DockerBuildFailed)   2 minutes ago   19s
$


$ oc logs bc/verify-entitlements-fail-without-creds
...
This system has no repositories available through subscriptions.
error: build error: error building at STEP "RUN rm /etc/rhsm-host && yum repolist --disablerepo=* && subscription-manager repos --enable rhocp-4.6-for-rhel-8-x86_64-rpms && yum -y update && yum install -y openshift-clients.x86_64 && USER 1001": error while running runtime: exit status 1
$

Now, consider if instead we had chosen a 'BuildConfig' that declared that our credential Secret be mounted into the 'Build' 'Pod':

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: verify-entitlements-work-with-creds
spec:
  runPolicy: Serial
  source:
    secrets:
      - secret:
          name: etc-pki-entitlement
        destinationDir: etc-pki-entitlement
    dockerfile: |-
      FROM registry.redhat.io/ubi8/ubi:latest
      USER root
      COPY ./etc-pki-entitlement /etc/pki/entitlement
      #COPY ./rhsm-conf /etc/rhsm
      #COPY ./rhsm-ca /etc/rhsm/ca
      RUN rm /etc/rhsm-host && \
      yum repolist --disablerepo=* && \
      subscription-manager repos --enable rhocp-4.6-for-rhel-8-x86_64-rpms && \
      yum -y update && \
      yum install -y openshift-clients.x86_64 && \
      rm -rf /etc/pki-entitlement #&& \
      #rm -rf /etc/rhsm
      USER 1001
      ENTRYPOINT ["/bin/bash"]
    type: Dockerfile
  strategy:
    dockerStrategy:
      #env:
      #- name: BUILD_LOGLEVEL
      #  value: "10"
    type: Docker

Notice the secret mount. And notice that the COPY in the Dockerfile uses the destination of the secret mount as the source for getting the entitlement credentials into '/etc/pki/entitlement'. The build now works:

$ oc start-build verify-entitlements-work-with-creds
build.build.openshift.io/verify-entitlements-work-with-creds-1 started
$

$ oc get builds
NAME                                    TYPE     FROM         STATUS     STARTED              DURATION
verify-entitlements-work-with-creds-1   Docker   Dockerfile   Complete   About a minute ago   1m1s
$

$ oc logs bc/verify-entitlements-work-with-creds
....
Installed:
  bash-completion-1:2.7-5.el8.noarch                                      
  libpkgconf-1.4.2-1.el8.x86_64                                           
  openshift-clients-4.6.0-202101160934.p0.git.3808.a1bca2f.el8.x86_64     
  pkgconf-1.4.2-1.el8.x86_64                                              
  pkgconf-m4-1.4.2-1.el8.noarch                                           
  pkgconf-pkg-config-1.4.2-1.el8.x86_64                                   

Complete!
--> 51f0aacc487
STEP 5: ENTRYPOINT ["/bin/bash"]
--> cee9d554f2f
STEP 6: ENV "OPENSHIFT_BUILD_NAME"="verify-entitlements-work-with-creds-1" "OPENSHIFT_BUILD_NAMESPACE"="my-csi-app-namespace"
--> 974a7c9edb0
STEP 7: LABEL "io.openshift.build.name"="verify-entitlements-work-with-creds-1" "io.openshift.build.namespace"="my-csi-app-namespace"
STEP 8: COMMIT temp.builder.openshift.io/my-csi-app-namespace/verify-entitlements-work-with-creds-1:385debb0
--> 21e3bee864e
21e3bee864ef73dcffb6656efc2d096a70d99569b09d95e18ef41d640755ebae
Build complete, no image push requested

$

With that baseline established, let’s move on to the Projected Resource API and its associated driver and operator, and how they start to address the concerns that, as noted earlier, have existed in this space since OpenShift 4.x arrived.

Projected Resource … What Can You Do Today?

As of the writing of this article, installation via OLM and the Operator Hub is still in progress.

If possible, we will post an update to this article, or supply an update in the comment section, when the Operator is available. Or, if you are adventurous, we currently believe the name to search for in Operator Hub will be "Shared Resources Operator".
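
For reference, once the Operator is published, installing it through OLM should follow the usual pattern of creating a 'Subscription'. Here is a sketch, where the package name and channel are our guesses rather than final published values:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: shared-resources-operator
  namespace: openshift-operators
spec:
  # channel and name are hypothetical until the operator is published
  channel: alpha
  name: shared-resources-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace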

But until then, you have to check out the code and leverage the developer deploy scripts that are included. You do not have to build the code. The deploy scripts and associated yaml files, at the time a branch/tag of the repository was cut for this blog post, leverage the image ‘quay.io/redhat-developer/origin-csi-driver-projected-resource:blog.post’, which was a copy of the image 'quay.io/openshift/origin-csi-driver-projected-resource:4.8.0' at the time this article was published. The intent is to provide a stable image for the steps articulated below while the team continues development that results in updates to the ‘quay.io/openshift/origin-csi-driver-projected-resource’ image.

So the repo again is here.

Clone it via your preferred 'git' interactions. Be sure to use the ‘blog-post-dev-preview’ branch of the repository if you want the same level we used to verify all the steps that follow. Most likely ‘master’ will still hold up, at least for a while, but there are no guarantees: as the repository evolves, we may change install procedures or make API changes that no longer match what is articulated throughout the rest of this article.

Change directories into your clone.
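
Concretely, that amounts to something like the following (the clone URL here is the repository linked above; adjust if it has moved):

$ git clone https://github.com/openshift/csi-driver-projected-resource.git
$ cd csi-driver-projected-resource
$ git checkout blog-post-dev-preview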

Then, while logged into your OpenShift cluster as kube:admin, run the deploy script, which should produce output similar to the following:

$ ./deploy/deploy.sh

deploying hostpath components
  ./deploy/0000_10_projectedresource.crd.yaml
oc apply -f ./deploy/0000_10_projectedresource.crd.yaml
customresourcedefinition.apiextensions.k8s.io/shares.projectedresource.storage.openshift.io created
  ./deploy/00-namespace.yaml
oc apply -f ./deploy/00-namespace.yaml
namespace/csi-driver-projected-resource created
  ./deploy/01-service-account.yaml
oc apply -f ./deploy/01-service-account.yaml
serviceaccount/csi-driver-projected-resource-plugin created
  ./deploy/02-cluster-role.yaml
oc apply -f ./deploy/02-cluster-role.yaml
clusterrole.rbac.authorization.k8s.io/projected-resource-secret-configmap-share-watch-sar-create created
  ./deploy/03-cluster-role-binding.yaml
oc apply -f ./deploy/03-cluster-role-binding.yaml
clusterrolebinding.rbac.authorization.k8s.io/projected-resource-privileged created
clusterrolebinding.rbac.authorization.k8s.io/projected-resource-secret-configmap-share-watch-sar-create created
  ./deploy/csi-hostpath-driverinfo.yaml
oc apply -f ./deploy/csi-hostpath-driverinfo.yaml
csidriver.storage.k8s.io/csi-driver-projected-resource.openshift.io created
  ./deploy/csi-hostpath-plugin.yaml
oc apply -f ./deploy/csi-hostpath-plugin.yaml
service/csi-hostpathplugin created
daemonset.apps/csi-hostpathplugin created
08:56:31 waiting for hostpath deployment to complete, attempt #0
08:56:42 waiting for hostpath deployment to complete, attempt #1

$
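
Before moving on, you can optionally confirm the driver’s DaemonSet Pods reached the Running state in the namespace the deploy script created:

$ oc get pods -n csi-driver-projected-resource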

Let’s make sure the “shared” secret we created with our entitlements is there:

$ oc get secret etc-pki-entitlement -n shared-secrets-configmaps
NAME                  TYPE     DATA   AGE
etc-pki-entitlement   Opaque   2      132m
$

Also, you no longer need the secret in the 'my-csi-app-namespace' namespace. You can delete that one now.
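
Concretely:

$ oc delete secret etc-pki-entitlement -n my-csi-app-namespace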

Let’s Show Some Successful Access to Entitled Content Using Projected Resources

We are now ready to start on the Projected Resource pieces. First, we are going to create an instance of the new CRD, 'Share', which provides a cluster-scoped reference to our “shared” secret in the 'shared-secrets-configmaps' namespace. Here is the yaml:

apiVersion: projectedresource.storage.openshift.io/v1alpha1
kind: Share
metadata:
  name: my-share
spec:
  backingResource:
    kind: Secret
    apiVersion: v1
    name: etc-pki-entitlement
    namespace: shared-secrets-configmaps

We then create a 'ClusterRole' and 'ClusterRoleBinding' so that the 'ServiceAccount' associated with our 'Pod' will have permission to access our new 'Share':

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: projected-resource-my-share
rules:
- apiGroups:
    - projectedresource.storage.openshift.io
  resources:
    - shares
  resourceNames:
    - my-share
  verbs:
    - get
    - list
    - watch

And:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: projected-resource-my-share
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: projected-resource-my-share
subjects:
- kind: ServiceAccount
  name: default
  namespace: my-csi-app-namespace
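
As with the earlier yaml, create all three objects with 'oc apply -f'. If you want to double-check the RBAC before creating the 'Pod', 'oc auth can-i' can impersonate the 'ServiceAccount' (the file names below are simply whatever you saved the yaml snippets as):

$ oc apply -f share.yaml -f clusterrole.yaml -f clusterrolebinding.yaml
$ oc auth can-i get shares.projectedresource.storage.openshift.io/my-share \
    --as=system:serviceaccount:my-csi-app-namespace:default
yes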

Finally, how does the Pod access our 'Share'? Here is the yaml:

kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
  namespace: my-csi-app-namespace
spec:
  serviceAccountName: default
  containers:
    - name: my-frontend
      image: registry.redhat.io/ubi8/ubi:latest
      volumeMounts:
        - mountPath: /etc/pki/entitlement
          name: my-csi-volume
      command:
        - sh
        - -c
        - |
          ls -la /etc/pki/entitlement
          rm /etc/rhsm-host
          echo "Repo enablement"
          yum repolist --disablerepo=*
          subscription-manager repos --enable rhocp-4.6-for-rhel-8-x86_64-rpms
          echo "Install entitled content"
          yum -y update
          yum install -y openshift-clients.x86_64
  volumes:
    - name: my-csi-volume
      csi:
        driver: csi-driver-projected-resource.openshift.io
        volumeAttributes:
          share: my-share

The command of the 'Pod' mimics the steps from our 'BuildConfig’s' Dockerfile. A deeper dive into that yaml:

  • The 'volumes' and 'volumeMounts' sections in the yaml leverage the existing Kubernetes CSI volume type, which our new driver implements.
  • The 'volumeAttributes' map allows us to specify the cluster-scoped 'Share' instance to use for mounting; in this case, the 'Share' named 'my-share'.
  • And the 'volumeMounts' section says to mount it at '/etc/pki/entitlement', the location where our 'Pod’s' command (and the subscription management tooling it invokes) expects to find the entitlement data. Within that mount, the subdirectory 'secrets' corresponds to the fact that our 'Share' maps to a Secret (where the other supported type is ConfigMap); the next subdirectory is a namespace:name tuple for our Secret, and there the .pem files we injected into the Secret are made available.
  • And you’ll notice the 'privileged' flag is not set on this pod. :-)
  • With your credentials in the expected spot, the subscription management functionality employed during your 'yum install' can find them, and your 'yum install' will succeed:
$ oc logs my-csi-app
...
Installed:
  bash-completion-1:2.7-5.el8.noarch                                      
  libpkgconf-1.4.2-1.el8.x86_64                                           
  openshift-clients-4.6.0-202101160934.p0.git.3808.a1bca2f.el8.x86_64     
  pkgconf-1.4.2-1.el8.x86_64                                              
  pkgconf-m4-1.4.2-1.el8.noarch                                           
  pkgconf-pkg-config-1.4.2-1.el8.x86_64                                   

Complete!

$

Next Steps

If you have more detailed questions on the new driver’s behavior when different events occur, check out this section in the repository's README.

Otherwise, as previously mentioned, at the time of writing this article, work was underway so that the Projected Resource driver could be installed via an OLM Operator that could be accessed via OpenShift Operator Hub.

Once that is ready, we’ll claim “Developer Preview” status for the functionality.

After that, the team will move on to the Subscription Injection Operator, which should allow us to annotate other API objects, like an OpenShift 'BuildConfig', with references to Projected Resource 'Shares', so that any 'Builds' generated from that 'BuildConfig' would have the Secret or ConfigMap associated with that 'Share' available for use in the 'Build'. This would replace the approach documented today, where per-namespace Secrets are specified in a 'BuildConfig'.


