Cloud Experts Documentation

Trident operator setup for Azure NetApp Files on ARO

This content is authored by Red Hat experts, but has not yet been tested on every supported configuration. This guide has been validated on OpenShift 4.20. Operator CRD names, API versions, and console paths may differ on other versions.

This guide is a simple "happy path" that shows a minimal-friction way to use Azure NetApp Files with Azure Red Hat OpenShift. It is intended for demonstration purposes and may not reflect best practices for production systems.

Prerequisites

  • An Azure Red Hat OpenShift cluster installed with Service Principal role/credentials.
  • The oc CLI

Please review the current NetApp Trident documentation for Azure NetApp Files prerequisites and required permissions.

In this guide, you will need the following service principal and region details; have them handy.

  • Azure subscriptionID
  • Azure tenantID
  • Azure clientID (Service Principal)
  • Azure clientSecret (Service Principal Secret)
  • Azure Region

If you do not want to reuse the existing ARO service principal, you can create a separate service principal and grant it the permissions required to manage the Azure NetApp Files resources used by Trident.

Important Concepts

Persistent Volume Claims are namespaced objects. Mounting RWX/ROX is only possible within the same namespace.

Azure NetApp Files must have a delegated subnet within your ARO VNet, and that subnet must be delegated to Microsoft.NetApp/volumes.

Configure Azure

You must first register the Microsoft.NetApp provider and create an Azure NetApp Files account before you can use Azure NetApp Files.

Register Azure NetApp Files

You can register the provider in the Azure Console, or with the Azure CLI.
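With the CLI, registration is a one-time, subscription-level operation:

```bash
# Register the Azure NetApp Files resource provider and wait for completion
az provider register --namespace Microsoft.NetApp --wait
```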

Create Azure NetApp Files account

Again, for brevity, this guide reuses the same RESOURCE_GROUP and service principal that the cluster was created with.

You can create the account in the Azure Console.

Or with the Azure CLI:
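A sketch of the account creation; the RESOURCE_GROUP, AZ_LOCATION, and ANF_ACCOUNT variables are placeholders for your own values:

```bash
RESOURCE_GROUP=<your-resource-group>
AZ_LOCATION=<your-region>        # e.g. eastus
ANF_ACCOUNT=<your-anf-account>

# Create the Azure NetApp Files account in the cluster's resource group and region
az netappfiles account create \
  --resource-group "${RESOURCE_GROUP}" \
  --location "${AZ_LOCATION}" \
  --account-name "${ANF_ACCOUNT}"
```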

Create capacity pool

This guide creates a single capacity pool. The common pattern is to create one pool per service level (Standard, Premium, Ultra), each with a unique name reflecting its service level.

You can create the pool in the Azure Console.

Or with the Azure CLI:
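A sketch of a single Premium pool; the pool name anfpool is an assumption used again later in the backend configuration:

```bash
az netappfiles pool create \
  --resource-group "${RESOURCE_GROUP}" \
  --account-name "${ANF_ACCOUNT}" \
  --pool-name anfpool \
  --location "${AZ_LOCATION}" \
  --service-level Premium \
  --size 4        # pool size in TiB; 4 TiB is the usual minimum
```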

Delegate subnet to ARO

Login to the Azure console, find the VNet used by your ARO cluster, and add a delegated subnet for Azure NetApp Files. Make sure the backend configuration later in this guide references the exact subnet name/path you created.
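This can also be scripted; the subnet name anf-subnet and the address prefix below are assumptions, and the prefix must not overlap the cluster's existing subnets. Use the resource group that actually contains the VNet:

```bash
VNET_NAME=<your-aro-vnet>

# Create a subnet in the ARO VNet delegated to Azure NetApp Files
az network vnet subnet create \
  --resource-group "${RESOURCE_GROUP}" \
  --vnet-name "${VNET_NAME}" \
  --name anf-subnet \
  --address-prefixes 10.0.8.0/28 \
  --delegations Microsoft.NetApp/volumes
```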

Install Trident Operator from OperatorHub/Software Catalog

Login to your ARO cluster and install NetApp Trident from OperatorHub (or Software Catalog) using the certified operator.

  1. In the OpenShift console, go to OperatorHub.
  2. Search for NetApp Trident.
  3. Select the most recent available operator version.
  4. Install the operator in the default recommended configuration.
  5. Create a TridentOrchestrator instance.

Example:
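A minimal TridentOrchestrator sketch, following the upstream Trident examples (the trident namespace is the conventional choice):

```yaml
apiVersion: trident.netapp.io/v1
kind: TridentOrchestrator
metadata:
  name: trident
spec:
  debug: false
  namespace: trident
```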

Apply and verify:
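Assuming the TridentOrchestrator manifest was saved as torc.yaml:

```bash
oc apply -f torc.yaml

# The orchestrator status should eventually report Installed
oc get tridentorchestrator trident -o jsonpath='{.status.status}{"\n"}'

# All Trident pods should reach Running
oc get pods -n trident
```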

Create Trident backend

Create the backend using a Kubernetes Secret and a TridentBackendConfig custom resource.

Create the credentials secret first:
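A sketch using oc create secret; the secret name backend-tbc-anf-secret is an assumption, and the clientID/clientSecret key names are what the Trident azure-netapp-files driver expects:

```bash
oc create secret generic backend-tbc-anf-secret \
  --namespace trident \
  --from-literal clientID="<clientID>" \
  --from-literal clientSecret="<clientSecret>"
```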

  • Ensure the service principal has the required Azure permissions for Azure NetApp Files resources.
  • If permissions are missing, you may see an error similar to: `capacity pool query returned no data; no capacity pools found for storage pool`.

Create the backend definition:

Add the following snippet:
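A minimal TridentBackendConfig sketch; replace the angle-bracket placeholders with your own values, and make sure location, capacityPools, virtualNetwork, and subnet exactly match the Azure resources created earlier:

```yaml
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: backend-tbc-anf
  namespace: trident
spec:
  version: 1
  storageDriverName: azure-netapp-files
  subscriptionID: <subscriptionID>
  tenantID: <tenantID>
  location: <region>              # e.g. eastus
  serviceLevel: Premium
  capacityPools: ["anfpool"]
  virtualNetwork: <aro-vnet-name>
  subnet: <delegated-subnet-name>
  credentials:
    name: backend-tbc-anf-secret
```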

Apply it:
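Assuming the backend manifest was saved as backend.yaml:

```bash
oc apply -f backend.yaml
```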

Example successful output:
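Check the backend with oc get; the exact columns vary by Trident version, but the Phase should be Bound and the Status Success, similar to:

```bash
oc get tridentbackendconfig -n trident
```

```
NAME              BACKEND NAME      BACKEND UUID   PHASE   STATUS
backend-tbc-anf   backend-tbc-anf   <uuid>         Bound   Success
```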

If backend creation fails, review the Trident controller logs:
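For example (the controller pod label below is the one current Trident releases use; adjust if your version differs):

```bash
oc logs -n trident -l app=controller.csi.trident.netapp.io --tail=100
```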

Create storage class

Example of StorageClass:
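A minimal sketch; the class name anf-sc is an assumption used throughout the rest of this guide:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: anf-sc
provisioner: csi.trident.netapp.io
parameters:
  backendType: "azure-netapp-files"
allowVolumeExpansion: true
```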

Output:
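Assuming the manifest was saved as anf-sc.yaml:

```bash
oc create -f anf-sc.yaml
```

```
storageclass.storage.k8s.io/anf-sc created
```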

Troubleshooting notes

If the backend does not initialize successfully, PVC creation can later fail with errors such as no available backends for storage class ..., or the claim may remain in Pending.

Common Azure resource discovery symptoms include:

  • Subnet query returned no data
  • Resource group referenced in pool not found
  • Virtual network referenced in pool not found
  • Subnet referenced in pool not found
  • no capacity pools found for storage pool <pool-name>

These usually indicate one or more of the following:

  • the resource group, virtual network, subnet, or capacity pool name does not exactly match the Azure resource
  • the subnet is not delegated to Microsoft.NetApp/volumes
  • the service principal role assignment scope is too narrow
  • the service principal cannot read the VNet/subnet resources required for backend discovery

During ARO 4.20 validation, two additional Trident-specific issues were observed:

  • inline backend credentials were rejected and had to be moved to a Kubernetes Secret referenced by spec.credentials
  • using backendName as a StorageClass parameter was rejected; backendType: "azure-netapp-files" worked

Useful validation commands:
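A few commands that help narrow down backend problems (resource names are the assumptions used earlier in this guide):

```bash
# Backend status as seen by Trident
oc get tridentbackendconfig -n trident
oc describe tridentbackendconfig backend-tbc-anf -n trident

# Confirm the Azure side matches the backend definition
az netappfiles pool list \
  --resource-group "${RESOURCE_GROUP}" --account-name "${ANF_ACCOUNT}" -o table
az network vnet subnet show \
  --resource-group "${RESOURCE_GROUP}" --vnet-name "${VNET_NAME}" \
  --name anf-subnet --query delegations
```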

Provision volume

Create a new project and set up a persistent volume claim. PersistentVolumeClaims are namespaced objects, so create the claim in the namespace where it will be used. In this example, we use the project netappdemo.
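Create the project:

```bash
oc new-project netappdemo
```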

Now create the PVC:
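A sketch of the claim; the name anf-pvc is an assumption, and 100Gi is used because Azure NetApp Files volumes have a relatively large minimum size:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: anf-pvc
  namespace: netappdemo
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: anf-sc
  resources:
    requests:
      storage: 100Gi
```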

Output:
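Assuming the claim manifest was saved as anf-pvc.yaml:

```bash
oc apply -f anf-pvc.yaml
```

```
persistentvolumeclaim/anf-pvc created
```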

Verify that the claim binds successfully:
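Assuming the claim is named anf-pvc:

```bash
oc get pvc anf-pvc -n netappdemo
```

The STATUS column should show Bound once Trident has provisioned the volume.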

Verify

Verify that the StorageClass and PersistentVolumeClaim were created successfully.

Verify with CLI

Check the StorageClass:
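Assuming the class was named anf-sc:

```bash
oc get storageclass anf-sc
```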

Example output:
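Columns vary slightly by cluster version; the output should look similar to:

```
NAME     PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
anf-sc   csi.trident.netapp.io   Delete          Immediate           true                   5m
```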

Check the PersistentVolumeClaim:
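```bash
oc get pvc -n netappdemo
```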

Example output:
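The volume name is generated by the provisioner, so yours will differ; the output should look similar to:

```
NAME      STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
anf-pvc   Bound    pvc-<uuid>   100Gi      RWX            anf-sc         2m
```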

Check the PersistentVolumes:
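```bash
oc get pv
```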

Example output:
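Again the generated volume name will differ; the output should look similar to:

```
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   AGE
pvc-<uuid>   100Gi      RWX            Delete           Bound    netappdemo/anf-pvc   anf-sc         2m
```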

Verify in OpenShift Console

Login to the cluster as cluster-admin and confirm that:

  • the anf-sc StorageClass is present
  • the anf-pvc claim in the netappdemo project is Bound
  • a dynamically provisioned PersistentVolume was created for the claim

Create Pods to test Azure NetApp

Create two pods to validate the Azure NetApp file mount. One pod writes data to the shared volume, and the second pod reads the same data back to confirm ReadWriteMany access is working correctly.

On current OpenShift clusters, these simple demo pods may emit Pod Security warnings unless restricted-compatible `securityContext` settings are added. The sample still works for basic validation.

Writer Pod

This pod writes hello netapp to the shared mount backed by the anf-pvc claim.
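A sketch of the writer pod; the pod name anf-writer, the UBI minimal image, and the /mnt/anf mount path are assumptions, and the securityContext settings are there to satisfy the restricted Pod Security profile:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: anf-writer
  namespace: netappdemo
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: writer
      image: registry.access.redhat.com/ubi9/ubi-minimal:latest
      command: ["/bin/sh", "-c"]
      # Write a marker file to the shared volume, then stay running
      args: ["echo 'hello netapp' > /mnt/anf/hello.txt && sleep infinity"]
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
      volumeMounts:
        - name: anf-vol
          mountPath: /mnt/anf
  volumes:
    - name: anf-vol
      persistentVolumeClaim:
        claimName: anf-pvc
```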

Watch for the pod to become ready:
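Assuming the pod is named anf-writer:

```bash
oc wait pod/anf-writer -n netappdemo --for=condition=Ready --timeout=300s
```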

Verify the file was written:
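Assuming the writer pod is named anf-writer and mounts the claim at /mnt/anf:

```bash
oc exec anf-writer -n netappdemo -- cat /mnt/anf/hello.txt
```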

Expected output:
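```
hello netapp
```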

Reader Pod

This pod reads the same file from the shared mount.
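A matching reader pod sketch (same assumed names and security settings as the writer); it prints the shared file once at startup, so the text also appears in the pod logs:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: anf-reader
  namespace: netappdemo
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: reader
      image: registry.access.redhat.com/ubi9/ubi-minimal:latest
      command: ["/bin/sh", "-c"]
      # Print the shared file to the logs, then stay running
      args: ["cat /mnt/anf/hello.txt && sleep infinity"]
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
      volumeMounts:
        - name: anf-vol
          mountPath: /mnt/anf
  volumes:
    - name: anf-vol
      persistentVolumeClaim:
        claimName: anf-pvc
```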

Wait for the pod to be ready:
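Assuming the pod is named anf-reader:

```bash
oc wait pod/anf-reader -n netappdemo --for=condition=Ready --timeout=300s
```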

Verify the reader pod can access the shared file:
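Assuming the reader pod is named anf-reader; the first command shows the log line from the pod's startup cat, and the second reads the file directly:

```bash
oc logs anf-reader -n netappdemo
oc exec anf-reader -n netappdemo -- cat /mnt/anf/hello.txt
```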

Expected output:
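```
hello netapp
hello netapp
```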

The first hello netapp is from the pod logs, and the second is from the oc exec command. This confirms that both pods successfully accessed the same Azure NetApp-backed ReadWriteMany volume.
