Trident operator setup for Azure NetApp Files on ARO
This content is authored by Red Hat experts, but has not yet been tested on every supported configuration. This guide has been validated on OpenShift 4.20. Operator CRD names, API versions, and console paths may differ on other versions.
Prerequisites
- An Azure Red Hat OpenShift cluster installed with Service Principal role/credentials.
- The `oc` CLI
Please review the current NetApp Trident documentation for Azure NetApp Files prerequisites and required permissions.
In this guide, you will need service principal and region details. Please have these handy.
- Azure subscriptionID
- Azure tenantID
- Azure clientID (Service Principal)
- Azure clientSecret (Service Principal Secret)
- Azure Region
If you do not want to reuse the existing ARO service principal, you can create a separate service principal and grant it the permissions required to manage the Azure NetApp Files resources used by Trident.
Important Concepts
PersistentVolumeClaims are namespaced objects. Pods can only mount a claim, including RWX/ROX claims, from within the same namespace.
Azure NetApp Files must have a delegated subnet within your ARO VNet, and that subnet must be delegated to Microsoft.NetApp/volumes.
Configure Azure
You must first register the Microsoft.NetApp provider and create an Azure NetApp Files account before you can use Azure NetApp Files.
Register Azure NetApp Files
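The provider can be registered with the Azure CLI. This is a one-time operation per subscription:

```shell
# Register the Microsoft.NetApp resource provider and wait for completion
az provider register --namespace Microsoft.NetApp --wait

# Confirm registration succeeded (should print "Registered")
az provider show --namespace Microsoft.NetApp --query registrationState
```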
Create Azure NetApp Files account
For brevity, this guide reuses the same RESOURCE_GROUP and service principal that the cluster was created with.
Or with the Azure CLI:
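A sketch of the CLI command, assuming the RESOURCE_GROUP and LOCATION environment variables are set and using `anf-account` as an example account name:

```shell
# Create the Azure NetApp Files account in the cluster's resource group
az netappfiles account create \
  --resource-group $RESOURCE_GROUP \
  --location $LOCATION \
  --name anf-account
```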
Create capacity pool
This guide creates a single pool for now. A common pattern is to expose all three service levels (Standard, Premium, and Ultra) by creating one pool per level, each with a distinct name.
Or with the Azure CLI:
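A sketch of the CLI command, assuming the `anf-account` account from the previous step and `anf-pool-premium` as an example pool name:

```shell
# Create a 4 TiB capacity pool at the Premium service level
az netappfiles pool create \
  --resource-group $RESOURCE_GROUP \
  --location $LOCATION \
  --account-name anf-account \
  --name anf-pool-premium \
  --service-level Premium \
  --size 4
```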
Delegate subnet to ARO
Login to the Azure console, find the VNet used by your ARO cluster, and add a delegated subnet for Azure NetApp Files. Make sure the backend configuration later in this guide references the exact subnet name/path you created.
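The same delegation can be sketched with the Azure CLI. The subnet name and address prefix below are examples; the VNet name must match your ARO cluster's VNet, and the prefix must not overlap existing subnets:

```shell
# Create a subnet in the ARO VNet delegated to Microsoft.NetApp/volumes
az network vnet subnet create \
  --resource-group $RESOURCE_GROUP \
  --vnet-name <aro-vnet-name> \
  --name anf-subnet \
  --address-prefixes 10.0.100.0/28 \
  --delegations Microsoft.NetApp/volumes
```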
Install Trident Operator from OperatorHub/Software Catalog
Login to your ARO cluster and install NetApp Trident from OperatorHub (or Software Catalog) using the certified operator.
- In the OpenShift console, go to OperatorHub.
- Search for NetApp Trident.
- Select the most recent available operator version.
- Install the operator in the default recommended configuration.
- Create a `TridentOrchestrator` instance.
Example:
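A minimal `TridentOrchestrator` manifest; the `trident` install namespace is the common default:

```yaml
apiVersion: trident.netapp.io/v1
kind: TridentOrchestrator
metadata:
  name: trident
spec:
  debug: false
  namespace: trident
```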
Apply and verify:
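Assuming the `TridentOrchestrator` manifest was saved to a file such as `torc.yaml`:

```shell
oc create -f torc.yaml

# The status should eventually report Installed
oc get tridentorchestrator trident -o jsonpath='{.status.status}'
```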
Create Trident backend
Create the backend using a Kubernetes Secret and a TridentBackendConfig custom resource.
Create the credentials secret first:
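A sketch using `oc create secret`; the secret name is an example, and the placeholder values come from the service principal details gathered earlier:

```shell
oc create secret generic backend-tbc-anf-secret \
  --namespace trident \
  --from-literal clientID=<clientID> \
  --from-literal clientSecret=<clientSecret>
```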
Create the backend definition:
Add the following snippet:
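A minimal `TridentBackendConfig` sketch. The resource names here (`backend-anf`, the `backend-tbc-anf-secret` secret, and the VNet, subnet, and pool names) are examples and must exactly match your Azure resources:

```yaml
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: backend-anf
  namespace: trident
spec:
  version: 1
  storageDriverName: azure-netapp-files
  subscriptionID: <subscriptionID>
  tenantID: <tenantID>
  location: <region>
  # Inline credentials were rejected during validation; reference the Secret instead
  credentials:
    name: backend-tbc-anf-secret
  virtualNetwork: <aro-vnet-name>
  subnet: anf-subnet
  capacityPools:
    - anf-pool-premium
  serviceLevel: Premium
```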
Apply it:
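Assuming the backend definition was saved to a file such as `backend-anf.yaml`:

```shell
oc apply -f backend-anf.yaml
oc get tridentbackendconfig -n trident
```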
Example successful output:
If backend creation fails, review the Trident controller logs:
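For example (the controller deployment is named `trident-controller` in recent Trident releases and `trident-csi` in older ones):

```shell
oc logs deploy/trident-controller -n trident
```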
Create storage class
Example of StorageClass:
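A minimal sketch; the `anf-sc` name matches the claim examples later in this guide, and note that `backendType` (not `backendName`) is the parameter that selects the driver:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: anf-sc
provisioner: csi.trident.netapp.io
parameters:
  backendType: "azure-netapp-files"
allowVolumeExpansion: true
```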
Output:
Troubleshooting notes
If the backend does not initialize successfully, PVC creation can later fail with errors such as no available backends for storage class ... or remain in Pending.
Common Azure resource discovery symptoms include:
- `Subnet query returned no data`
- `Resource group referenced in pool not found`
- `Virtual network referenced in pool not found`
- `Subnet referenced in pool not found`
- `no capacity pools found for storage pool <pool-name>`
These usually indicate one or more of the following:
- the resource group, virtual network, subnet, or capacity pool name does not exactly match the Azure resource
- the subnet is not delegated to `Microsoft.NetApp/volumes`
- the service principal role assignment scope is too narrow
- the service principal cannot read the VNet/subnet resources required for backend discovery
During ARO 4.20 validation, two additional Trident-specific issues were observed:
- inline backend credentials were rejected and had to be moved to a Kubernetes Secret referenced by `spec.credentials`
- using `backendName` as a StorageClass parameter was rejected; `backendType: "azure-netapp-files"` worked
Useful validation commands:
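For example, to confirm the delegation and pool visibility (resource names are the examples used earlier in this guide):

```shell
# The delegation list should include Microsoft.NetApp/volumes
az network vnet subnet show \
  --resource-group $RESOURCE_GROUP \
  --vnet-name <aro-vnet-name> \
  --name anf-subnet \
  --query "delegations[].serviceName"

# The capacity pool referenced by the backend must appear here
az netappfiles pool list \
  --resource-group $RESOURCE_GROUP \
  --account-name anf-account \
  --output table
```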
Provision volume
Create a new project and set up a persistent volume claim. PersistentVolumeClaims are namespaced objects, so create the claim in the namespace where it will be used. In this example, we use the project netappdemo.
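Create the project first:

```shell
oc new-project netappdemo
```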
Now create the PVC:
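A claim sketch requesting a ReadWriteMany volume from the `anf-sc` class; the 100Gi size is an example:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: anf-pvc
  namespace: netappdemo
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: anf-sc
```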
Output:
Verify that the claim binds successfully:
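For example:

```shell
oc get pvc anf-pvc -n netappdemo
```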
Verify
Verify that the StorageClass and PersistentVolumeClaim were created successfully.
Verify with CLI
Check the StorageClass:
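For example:

```shell
oc get storageclass anf-sc
```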
Example output:
Check the PersistentVolumeClaim:
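For example:

```shell
oc get pvc -n netappdemo
```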
Example output:
Check the PersistentVolumes:
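For example:

```shell
oc get pv
```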
Example output:
Verify in OpenShift Console
Login to the cluster as cluster-admin and confirm that:
- the `anf-sc` StorageClass is present
- the `anf-pvc` claim in the `netappdemo` project is `Bound`
- a dynamically provisioned PersistentVolume was created for the claim
Create Pods to test Azure NetApp
Create two pods to validate the Azure NetApp file mount. One pod writes data to the shared volume, and the second pod reads the same data back to confirm ReadWriteMany access is working correctly.
Writer Pod
This pod writes `hello netapp` to the shared mount backed by the `anf-pvc` claim.
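A pod sketch; the container image and file path are examples:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: writer
  namespace: netappdemo
spec:
  containers:
    - name: writer
      image: registry.access.redhat.com/ubi9/ubi-minimal
      command: ["/bin/sh", "-c"]
      # Write the test string once, then stay running so the file can be inspected
      args: ["echo 'hello netapp' > /mnt/data/test.txt && sleep 3600"]
      volumeMounts:
        - name: anf-volume
          mountPath: /mnt/data
  volumes:
    - name: anf-volume
      persistentVolumeClaim:
        claimName: anf-pvc
```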
Watch for the pod to become ready:
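For example:

```shell
oc wait pod/writer -n netappdemo --for=condition=Ready --timeout=300s
```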
Verify the file was written:
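Assuming the example file path used in the writer pod definition:

```shell
oc exec -n netappdemo writer -- cat /mnt/data/test.txt
```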
Expected output:
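```
hello netapp
```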
Reader Pod
This pod reads the same file from the shared mount.
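A matching reader sketch; it prints the file to its logs so `oc logs` can also verify the data. The image and file path are examples:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: reader
  namespace: netappdemo
spec:
  containers:
    - name: reader
      image: registry.access.redhat.com/ubi9/ubi-minimal
      command: ["/bin/sh", "-c"]
      # Print the shared file once to the logs, then stay running
      args: ["cat /mnt/data/test.txt && sleep 3600"]
      volumeMounts:
        - name: anf-volume
          mountPath: /mnt/data
  volumes:
    - name: anf-volume
      persistentVolumeClaim:
        claimName: anf-pvc
```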
Wait for the pod to be ready:
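For example:

```shell
oc wait pod/reader -n netappdemo --for=condition=Ready --timeout=300s
```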
Verify the reader pod can access the shared file:
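Check both the pod logs and a direct read of the file:

```shell
oc logs reader -n netappdemo
oc exec -n netappdemo reader -- cat /mnt/data/test.txt
```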
Expected output:
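```
hello netapp
hello netapp
```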
The first `hello netapp` is from the pod logs, and the second is from the `oc exec` command. This confirms that both pods successfully accessed the same Azure NetApp Files-backed ReadWriteMany volume.