Use the Azure Blob storage Container Storage Interface (CSI) driver on an ARO cluster
This content is authored by Red Hat experts, but has not yet been tested on every supported configuration. This guide has been validated on OpenShift 4.20. Operator CRD names, API versions, and console paths may differ on other versions.
The Azure Blob Storage Container Storage Interface (CSI) is a CSI-compliant driver that can be installed on an Azure Red Hat OpenShift (ARO) cluster to provision and mount Azure Blob storage for Kubernetes workloads.
Mounting Azure Blob storage into a pod through this CSI driver lets workloads read and write large amounts of unstructured data directly from blob storage.
You can also refer to the upstream driver documentation for more details.
The Azure Blob CSI driver supports two common dynamic provisioning models:
- Driver-managed path: the driver can select or create a suitable storage account when one is not explicitly specified in the StorageClass.
- Bring your own (BYO) storage account path: the StorageClass is tied to an existing storage account that you create and manage.
Scope of validation
- The steps below were validated on ARO using a dynamic provisioning path with BlobFuse2 and a manually specified storage account.
- The driver-managed storage account creation path was outside the scope of this update.
- During validation, additional ARO-specific controller credential wiring was required:
  - the `azure-cred-file` ConfigMap in `kube-system`
  - the `azure-cloud-provider` secret in `kube-system`
  - an explicit BlobFuse2 working directory in the StorageClass mount options
- Static provisioning with Azure Blob CSI was also validated separately in lab testing, but it is outside the scope of this walkthrough.
Prerequisites
- An ARO cluster up and running.
- The `helm` command-line utility.
- The `oc` command-line utility.
- The `jq` command-line utility.
- The Azure CLI, logged in to the correct subscription.
- Permissions to create or access an Azure Storage Account for the test workflow.
- Permissions to create a service principal and assign Azure RBAC roles.
- Set the environment variables related to your cluster environment. Update the `LOCATION`, `CLUSTER_NAME`, `RG_NAME`, and `VNET_NAME` variables in the snippet below to match your cluster details.
- Set environment variables for the project and secret names used to install the driver, and for the testing project where a pod will use the configured storage.
- Set additional environment variables for the storage resources used by the test workflow, including the storage account and blob container names.
The storage account name must be globally unique in Azure. Update STORAGE_ACCOUNT_NAME if the sample name is already in use.
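As a concrete starting point, here is one possible sketch of these variables. All values are placeholders to adapt, and the variable names beyond `LOCATION`, `CLUSTER_NAME`, `RG_NAME`, `VNET_NAME`, `CSI_BLOB_SECRET`, and `STORAGE_ACCOUNT_NAME` are names chosen here for illustration:

```shell
# Cluster details -- replace with your own values.
export LOCATION="eastus"
export CLUSTER_NAME="my-aro-cluster"
export RG_NAME="my-aro-rg"
export VNET_NAME="my-aro-vnet"

# Project and secret names used by the driver install and the test workload
# (hypothetical names used throughout this walkthrough).
export CSI_BLOB_PROJECT="csi-azure-blob"
export CSI_BLOB_SECRET="csi-azure-blob-sp"
export TEST_PROJECT="csi-blob-test"

# Storage resources for the test workflow; the account name must be globally unique.
export STORAGE_ACCOUNT_NAME="aroblobcsi$RANDOM"
export CONTAINER_NAME="aroblobcsitest"
```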
Create an identity for the CSI Driver to access the Blob Storage
The Azure Blob CSI driver needs Azure credentials so it can access the blob storage resources used by this walkthrough.
For the validated ARO path used here, this includes:
- a service principal with the required Azure permissions
- a Kubernetes secret for the driver
- additional ARO-specific controller cloud configuration in `kube-system`
- Create a service principal for the Blob CSI driver:

  Example output:

  Note that the service principal name based on `http://$CSI_BLOB_SECRET` must be unique enough to avoid colliding with an existing Azure AD application.

- Export the values from the command output:
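A hedged sketch of these two steps, assuming the service principal output is captured into hypothetical `SP_CLIENT_ID` and `SP_CLIENT_SECRET` variables (names chosen here for illustration):

```shell
# Create the service principal; the name is derived from CSI_BLOB_SECRET.
az ad sp create-for-rbac --name "http://$CSI_BLOB_SECRET"

# Export the appId and password fields from the JSON output above.
export SP_CLIENT_ID="<appId from the output>"
export SP_CLIENT_SECRET="<password from the output>"
export TENANT_ID="$(az account show --query tenantId -o tsv)"
```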
- Assign the required roles to the service principal. For the validated ARO path in this article, the service principal required:

  - `Contributor` on the relevant resource group
  - `Storage Account Contributor` on the target storage account, after that account is created

  Assign `Contributor` on the resource group:
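One way to assign this role, assuming the service principal's application ID was exported to a hypothetical `SP_CLIENT_ID` variable:

```shell
az role assignment create \
  --assignee "$SP_CLIENT_ID" \
  --role "Contributor" \
  --scope "$(az group show --name "$RG_NAME" --query id -o tsv)"
```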
- Create the `azure-cred-file` ConfigMap in `kube-system` so the controller can locate the host cloud configuration:
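A sketch of this step; the `path` value shown is the one commonly used for OpenShift hosts in the upstream CSI driver documentation, but verify it matches your cluster:

```shell
oc -n kube-system create configmap azure-cred-file \
  --from-literal=path="/etc/kubernetes/cloud.conf"
```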
- Create the `azure-cloud-provider` secret in `kube-system` with the Azure cloud configuration used by the controller:
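A minimal sketch of this step. The `cloud-config` key name and the JSON fields follow common Azure cloud provider conventions, and the `SP_CLIENT_ID`/`SP_CLIENT_SECRET` variables are assumptions from the earlier service principal step; adapt both to your environment:

```shell
# Build a minimal azure.json-style cloud config for the controller.
cat <<EOF > cloud-config.json
{
  "cloud": "AzurePublicCloud",
  "tenantId": "$TENANT_ID",
  "subscriptionId": "$(az account show --query id -o tsv)",
  "aadClientId": "$SP_CLIENT_ID",
  "aadClientSecret": "$SP_CLIENT_SECRET",
  "resourceGroup": "$RG_NAME",
  "location": "$LOCATION"
}
EOF

oc -n kube-system create secret generic azure-cloud-provider \
  --from-file=cloud-config=cloud-config.json
```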
Driver installation
After creating the identity and required Azure configuration, install the Azure Blob CSI driver on the cluster.
On ARO, the default Blob CSI Helm install needs an additional workaround. During validation on ARO 4.20, the default chart left the node pods stuck in `Init:CreateContainerError` because the init container tries to mount `/usr/local`, which is a symlink on RHCOS. To work around this, disable those components at install time and remove the init container from the node DaemonSet.
- Add the Blob CSI driver Helm repository:
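The upstream chart repository can be added as follows:

```shell
helm repo add blob-csi-driver https://raw.githubusercontent.com/kubernetes-sigs/blob-csi-driver/master/charts
helm repo update
```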
- Install the Blob CSI driver chart with the ARO-specific settings:
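One plausible shape for this install, assuming the hypothetical `CSI_BLOB_PROJECT` namespace from the environment setup; the `node.enableBlobfuseProxy=false` value is an assumption about which chart setting disables the problematic components, so confirm it against the chart's values for your driver version:

```shell
oc new-project "$CSI_BLOB_PROJECT" || oc project "$CSI_BLOB_PROJECT"

helm install blob-csi-driver blob-csi-driver/blob-csi-driver \
  --namespace "$CSI_BLOB_PROJECT" \
  --set node.enableBlobfuseProxy=false
```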
- Confirm the Blob CSI node DaemonSet name:

  In this case, the node DaemonSet was `csi-blob-node`.
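The DaemonSets in the driver namespace can be listed with, for example:

```shell
oc get daemonset -n "$CSI_BLOB_PROJECT"
```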
- Remove the init container from the Blob CSI node DaemonSet:
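A sketch of this patch, assuming the `csi-blob-node` DaemonSet name confirmed above; it removes the entire `initContainers` list from the pod template:

```shell
oc -n "$CSI_BLOB_PROJECT" patch daemonset csi-blob-node --type=json \
  -p '[{"op": "remove", "path": "/spec/template/spec/initContainers"}]'
```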
- Verify that the Blob CSI driver pods are running:

  Expected output should include the Blob CSI controller and node pods in a `Running` state after applying the ARO-specific workaround above.
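The pod status can be checked with, for example:

```shell
oc get pods -n "$CSI_BLOB_PROJECT"
```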
- If you created or updated the ARO-specific controller configuration in the previous section, restart the controller so it picks up the latest configuration:
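A sketch of the restart, assuming the upstream controller Deployment name `csi-blob-controller`:

```shell
oc -n "$CSI_BLOB_PROJECT" rollout restart deployment csi-blob-controller
```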
- Confirm that the controller is healthy before continuing:
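For example, assuming the upstream `csi-blob-controller` Deployment name and its `blob` container, check the pods and recent logs for cloud configuration errors:

```shell
oc -n "$CSI_BLOB_PROJECT" get pods
oc -n "$CSI_BLOB_PROJECT" logs deployment/csi-blob-controller -c blob --tail=20
```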
At this point, the Blob CSI controller should be running without missing cloud configuration errors and ready for the StorageClass and PVC workflow used in the next section.
Test the CSI driver is working
To test the Blob CSI driver, create the required storage resources, then create a StorageClass, PersistentVolumeClaim, and a pod that mounts the provisioned storage.
- Create the storage account:
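A sketch of the storage account creation; the SKU and kind shown are reasonable defaults, not values mandated by the driver:

```shell
az storage account create \
  --name "$STORAGE_ACCOUNT_NAME" \
  --resource-group "$RG_NAME" \
  --location "$LOCATION" \
  --sku Standard_LRS \
  --kind StorageV2
```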
  Assign `Storage Account Contributor` on the storage account:
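For example, again assuming the hypothetical `SP_CLIENT_ID` variable holds the service principal's application ID:

```shell
az role assignment create \
  --assignee "$SP_CLIENT_ID" \
  --role "Storage Account Contributor" \
  --scope "$(az storage account show --name "$STORAGE_ACCOUNT_NAME" \
      --resource-group "$RG_NAME" --query id -o tsv)"
```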
- Create the blob container:
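A sketch of the container creation, using the hypothetical `CONTAINER_NAME` variable from the environment setup:

```shell
az storage container create \
  --name "$CONTAINER_NAME" \
  --account-name "$STORAGE_ACCOUNT_NAME" \
  --auth-mode login
```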
- Create the test project:
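For example, using the hypothetical `TEST_PROJECT` variable:

```shell
oc new-project "$TEST_PROJECT"
```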
- Create a secret in the test project containing the storage account credentials:
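A sketch of this step. The secret name `azure-blob-secret` is a name chosen here for illustration; the `azurestorageaccountname`/`azurestorageaccountkey` keys are the ones the Azure Blob CSI driver expects:

```shell
export STORAGE_ACCOUNT_KEY="$(az storage account keys list \
  --account-name "$STORAGE_ACCOUNT_NAME" --resource-group "$RG_NAME" \
  --query '[0].value' -o tsv)"

oc -n "$TEST_PROJECT" create secret generic azure-blob-secret \
  --from-literal=azurestorageaccountname="$STORAGE_ACCOUNT_NAME" \
  --from-literal=azurestorageaccountkey="$STORAGE_ACCOUNT_KEY"
```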
- Create the StorageClass:

  When using BlobFuse2 on ARO, adding `--default-working-dir=/tmp/blobfuse2` avoids mount failures caused by the default `/.blobfuse2` path being read-only.
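A minimal sketch of what this StorageClass can look like for the BYO storage account path. The `azure-blob-fuse2` and `azure-blob-secret` names are hypothetical, and the placeholders must be replaced with your actual values; parameter names follow the upstream `blob.csi.azure.com` driver:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azure-blob-fuse2                   # hypothetical name
provisioner: blob.csi.azure.com
parameters:
  protocol: fuse2                          # use BlobFuse2 for mounting
  storageAccount: <STORAGE_ACCOUNT_NAME>   # the BYO storage account
  containerName: <CONTAINER_NAME>          # the existing blob container
  csi.storage.k8s.io/node-stage-secret-name: azure-blob-secret
  csi.storage.k8s.io/node-stage-secret-namespace: <TEST_PROJECT>
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - --default-working-dir=/tmp/blobfuse2   # avoids the read-only /.blobfuse2 default
```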
- Create the PersistentVolumeClaim:
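A minimal claim referencing the hypothetical StorageClass name used above; the size is arbitrary for a test:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-blob-pvc          # hypothetical name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: azure-blob-fuse2
```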
- Create a test pod that mounts the claim:
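A minimal test pod sketch that mounts the claim at `/mnt/blob`; the pod name and image are illustrative choices:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: blob-test-pod           # hypothetical name
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi9/ubi-minimal
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: blob
          mountPath: /mnt/blob
  volumes:
    - name: blob
      persistentVolumeClaim:
        claimName: azure-blob-pvc
```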
- Verify that the pod is running:
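For example, using the hypothetical pod name from above:

```shell
oc -n "$TEST_PROJECT" get pod blob-test-pod
```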
- Verify that the volume is mounted successfully:
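One way to check, again assuming the hypothetical pod name; the second command writes and reads back a test file through the mount:

```shell
oc -n "$TEST_PROJECT" exec blob-test-pod -- df -h /mnt/blob
oc -n "$TEST_PROJECT" exec blob-test-pod -- \
  sh -c 'echo hello > /mnt/blob/test.txt && cat /mnt/blob/test.txt'
```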
Expected result:
- the pod is in `Running` state
- the mount is present at `/mnt/blob`
- the filesystem shows `blobfuse2`
Troubleshooting
Blob CSI controller is not healthy after installation
If the controller pods are not running, check their status and logs:
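For example, assuming the upstream `csi-blob-controller` Deployment name and its `blob` container:

```shell
oc -n "$CSI_BLOB_PROJECT" get pods
oc -n "$CSI_BLOB_PROJECT" logs deployment/csi-blob-controller -c blob --tail=50
```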
During validation on ARO, dynamic provisioning required additional controller-side Azure configuration in kube-system:
- the `azure-cred-file` ConfigMap
- the `azure-cloud-provider` secret
If those resources are missing or incomplete, the controller may fail to initialize correctly.
You can recreate them and then restart the controller:
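After recreating those resources as shown in the identity section, the controller can be restarted with, for example:

```shell
oc -n "$CSI_BLOB_PROJECT" rollout restart deployment csi-blob-controller
```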
PVC stays in Pending state
If the PersistentVolumeClaim does not bind, describe the PVC and review the related events:
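For example, using the hypothetical PVC name from the test workflow:

```shell
oc -n "$TEST_PROJECT" describe pvc azure-blob-pvc
```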
Also verify:
- the `StorageClass` name matches the PVC
- the storage account name and key are correct in the secret
- the blob container exists
- the controller has the required Azure permissions
Pod is stuck in ContainerCreating
If the pod does not start, describe the pod and review the events:
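For example, using the hypothetical pod name from the test workflow:

```shell
oc -n "$TEST_PROJECT" describe pod blob-test-pod
```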
During validation with BlobFuse2 on ARO, one observed failure was a read-only filesystem error for the default BlobFuse2 working directory.
To avoid this, include the following mount option in the StorageClass:
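In the StorageClass, that option appears under `mountOptions`:

```yaml
mountOptions:
  - --default-working-dir=/tmp/blobfuse2
```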
Verify the mount inside the pod
To confirm the mount is working:
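For example, assuming the hypothetical pod name used earlier:

```shell
oc -n "$TEST_PROJECT" exec blob-test-pod -- df -h /mnt/blob
```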
Expected result:
- the pod is in `Running` state
- the mount is present at `/mnt/blob`
- the filesystem shows `blobfuse2`
Clean up
After testing is complete, remove the test resources created for the Blob CSI validation.
- Delete the test pod, PersistentVolumeClaim, and StorageClass:
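For example, using the hypothetical names from the test workflow:

```shell
oc -n "$TEST_PROJECT" delete pod blob-test-pod
oc -n "$TEST_PROJECT" delete pvc azure-blob-pvc
oc delete storageclass azure-blob-fuse2
```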
- Delete the secret from the test project:
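For example, assuming the hypothetical secret name used earlier:

```shell
oc -n "$TEST_PROJECT" delete secret azure-blob-secret
```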
- Delete the test project:
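For example:

```shell
oc delete project "$TEST_PROJECT"
```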
- Uninstall the Blob CSI driver Helm chart:
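Assuming the release name and namespace used at install time:

```shell
helm uninstall blob-csi-driver -n "$CSI_BLOB_PROJECT"
```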
- If you created the ARO-specific controller configuration for this validation and no longer need it, remove it from `kube-system`:
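For example:

```shell
oc -n kube-system delete configmap azure-cred-file
oc -n kube-system delete secret azure-cloud-provider
```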
- If no longer needed, delete the service principal created for the Blob CSI driver:
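For example, assuming the hypothetical `SP_CLIENT_ID` variable from the identity section:

```shell
az ad sp delete --id "$SP_CLIENT_ID"
```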
- If you created a temporary storage account and blob container specifically for this test, delete them:
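For example; note that deleting the storage account also removes the containers within it:

```shell
az storage container delete \
  --name "$CONTAINER_NAME" \
  --account-name "$STORAGE_ACCOUNT_NAME" \
  --auth-mode login

az storage account delete \
  --name "$STORAGE_ACCOUNT_NAME" \
  --resource-group "$RG_NAME" \
  --yes
```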