Storage classes are a key aspect of Kubernetes storage user experience. They make dynamic storage provisioning possible, and they allow administrators to abstract storage consumption by exposing classes to end-users. This way administrators choose what kind of storage they want to offer, while users can choose the right storage option based on their application requirements.

With OpenShift 4.13, we introduce new features in how storage classes are managed, and this blog walks you through the details of these changes.

Manage operator’s storage class creation

When deploying OpenShift on top of cloud providers such as AWS, Azure, or vSphere, the Container Storage Interface (CSI) driver Operator automatically creates and maintains a default storage class for you, so that users are ready to deploy stateful workloads without further administrative intervention. 

However, this is not always the desired behavior: some administrators prefer to manage their storage classes themselves. In OpenShift 4.13, we add more flexibility in the way operators manage their storage classes. The most common reasons for this use case are as follows:

  • The storage class does not meet the business or technical requirements.
  • The storage class naming convention is not compliant; for example, administrators may want the same default storage class name across environments.
  • Removing the storage classes disables dynamic provisioning in limited-access environments.

With OpenShift 4.13, we introduce new storage class states that you can set to define how operators manage their storage classes.

  • Managed (Default): The CSI operator actively manages its default storage class; that is, the default storage class is continuously reconciled should an administrator try to manually modify or remove it.
  • Unmanaged: The administrator can modify and tune the default storage class to meet their needs; the CSI operator does not actively reconcile it.
  • Removed: The CSI operator deletes the default storage class, and the administrator needs to create a new one from scratch.

So how does it work? To change the storage class policy, update the ClusterCSIDriver object, either through the web console or the CLI, and set the storageClassState parameter to one of the options listed above.

To edit the ClusterCSIDriver object, list the storage classes to identify the provisioner, then use the oc edit command:

$ oc get sc
$ oc edit clustercsidriver <name_of_the_provisioner>

For example, on a cluster deployed on AWS, the procedure looks like this:

$ oc get sc
NAME                PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2-csi             ebs.csi.aws.com   Delete          WaitForFirstConsumer   true                   114m
gp3-csi (default)   ebs.csi.aws.com   Delete          WaitForFirstConsumer   true                   114m

The provisioner name is ebs.csi.aws.com. Edit the ClusterCSIDriver object and set the storageClassState parameter under the spec section.

$ oc edit clustercsidriver ebs.csi.aws.com

apiVersion: operator.openshift.io/v1
kind: ClusterCSIDriver
metadata:
   name: ebs.csi.aws.com
(...)
spec:
(...)
 storageClassState: Unmanaged   # Add parameter here

With the storageClassState parameter set to Unmanaged, the OpenShift EBS CSI operator no longer reconciles changes to the storage class, and the administrator can safely update it to fit their needs.
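For example, with the class now Unmanaged, an administrator could clear its default annotation without the operator reverting the change. This sketch assumes the gp3-csi class from the AWS example above; the is-default-class annotation is the standard Kubernetes marker for a default storage class:

```shell
# Remove the default marker from gp3-csi; with
# storageClassState: Unmanaged, the CSI operator
# will not reconcile this change away.
oc patch storageclass gp3-csi -p \
  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
```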

You can also perform the same action with the oc patch command:

$ oc patch clustercsidriver ebs.csi.aws.com --type=merge -p '{"spec":{"storageClassState":"Unmanaged"}}'

This new feature is available with the following CSI drivers:

  • vSphere
  • AWS EBS
  • Azure Disk
  • Azure File
  • GCP PD
  • IBM VPC Block
  • AliCloud Disk
  • RH-OSP Cinder
  • oVirt
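Regardless of the driver, you can confirm the configured state by reading the field back with a JSONPath query (shown here for the AWS EBS driver from the example above):

```shell
# Print the current storageClassState of the ClusterCSIDriver object
oc get clustercsidriver ebs.csi.aws.com -o jsonpath='{.spec.storageClassState}'
```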

Support for multiple default storage classes

The default storage class is a convenient feature that applies a storage class to every PVC that does not explicitly set one. It can be really helpful for workload portability across clusters, as storage class names can differ between them.

Before OpenShift 4.13, only one default storage class could be set per cluster. If multiple default storage classes were set, any new PVC request relying on the default would fail:

$ oc get sc
NAME                 PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2-csi              ebs.csi.aws.com   Delete          WaitForFirstConsumer   true                   161m
gp3-csi (default)    ebs.csi.aws.com   Delete          WaitForFirstConsumer   true                   161m
gp3-test (default)   ebs.csi.aws.com   Delete          WaitForFirstConsumer   true                   48s

$ oc create -f pvc.yaml
Error from server (Forbidden): error when creating "pvc.yaml": persistentvolumeclaims "pvc_test" is forbidden: Internal error occurred: 2 default StorageClasses were found

A cluster shouldn't have multiple default storage classes; however, this can happen, for example through human error, or transiently while an administrator changes the default storage class and two default classes briefly coexist. We want to prevent user requests from failing as much as possible, so starting with OpenShift 4.13, the user experience is different when multiple default classes are detected.

If multiple default classes are detected, the one with the newest metadata.creationTimestamp is used; in other words, the most recently created storage class wins. Note that the cluster looks at the storage class creation time, not the time the class was marked as default.
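To see which class would win, you can sort storage classes by creation time; the --sort-by option is standard oc/kubectl behavior:

```shell
# List storage classes ordered by metadata.creationTimestamp;
# with multiple defaults, the newest one (last line) is used.
oc get sc --sort-by=.metadata.creationTimestamp
```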

This new feature increases flexibility by limiting the number of user-facing errors. However, since a cluster shouldn't have more than one default storage class, an alert is raised to notify administrators when this situation occurs.

Retroactive storage class assignment (Tech Preview)

We just covered how OpenShift manages multiple default storage classes; now what happens if there is no default storage class set? Currently, the PVC stays in the Pending state forever, and users have to delete and recreate it with an explicit storage class name.

With this new feature, the PVC remains in the Pending state until a default storage class is created or an existing storage class is set as the default. As soon as a new default storage class is set, the PVC is retroactively assigned to it.

This new behavior can be useful in various situations:

  • Administrators forget to set a default storage class: users won't need to recreate their PVCs, as the default storage class is retroactively set on pending PVCs.
  • During a cluster’s install, components could request a PVC while the storage operator didn’t set a default storage class yet.
  • When switching default storage classes, during a short period of time, no default class exists in the cluster.
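The retroactive behavior can be sketched end to end. The PVC manifest and the gp3-csi class name are illustrative; the is-default-class annotation is the standard Kubernetes mechanism for marking a default:

```shell
# 1. Create a PVC with no storageClassName while the cluster
#    has no default storage class; the PVC stays Pending.
oc create -f pvc.yaml

# 2. Mark an existing class as the default; the pending PVC is
#    then retroactively assigned to it and can be provisioned.
oc patch storageclass gp3-csi -p \
  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```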

It is worth noting that this feature currently ships as Technology Preview and sits behind a feature gate. The feature is disabled by default, which means that PVCs won't be retroactively assigned unless the TechPreviewNoUpgrade feature set is enabled.

We encourage everyone to try this feature and provide feedback; however, please make sure you only do so in a sandbox environment: enabling the TechPreviewNoUpgrade feature set enables all other Technology Preview features and makes your cluster non-upgradeable.

You can use the following YAML to enable the TechPreviewNoUpgrade feature set:

apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
 name: cluster
spec:
 featureSet: TechPreviewNoUpgrade
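The same feature set can also be applied from the CLI; this is a one-liner equivalent of the YAML above:

```shell
# Enable the TechPreviewNoUpgrade feature set. Warning: this
# cannot be undone and makes the cluster non-upgradeable.
oc patch featuregate cluster --type=merge \
  -p '{"spec":{"featureSet":"TechPreviewNoUpgrade"}}'
```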

About the author

Gregory Charot is a Principal Technical Product Manager at Red Hat covering OpenStack Storage, Ceph integration, Edge computing as well as OpenShift on OpenStack storage. His primary mission is to define product strategy and design features based on customers and market demands, as well as driving the overall productization to deliver production-ready solutions to the market. OpenSource and Linux-passionate since 1998, Charot worked as a production engineer and system architect for eight years before joining Red Hat—first as an Architect, then as a Field Product Manager prior to his current role as a Technical Product Manager for the Cloud Platform Business Unit.
