Cloud Experts Documentation

Trident NetApp operator setup for Azure NetApp files

This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.

Note: This guide follows a simple “happy path” to show the path of least friction for using Azure NetApp Files with Azure Red Hat OpenShift. It may not be the best configuration for any system beyond demonstration purposes.

Prerequisites

In this guide, you will need your service principal and region details. Please have these handy.

  • Azure subscriptionID
  • Azure tenantID
  • Azure clientID (Service Principal)
  • Azure clientSecret (Service Principal Secret)
  • Azure Region

If you don’t have your existing ARO service principal credentials, you can create your own service principal and grant it the Contributor role so it can manage the required resources. Please review the official Trident documentation regarding Azure NetApp Files and the required permissions.

Important Concepts

Persistent Volume Claims are namespaced objects. Mounting RWX/ROX is only possible within the same namespace.

NetApp Files volumes must have a delegated subnet within your ARO VNet, and you must assign that subnet to the Microsoft.Netapp/volumes service.

Configure Azure

You must first register the Microsoft.NetApp provider and create a NetApp account on Azure before you can use Azure NetApp Files.

Register NetApp files

Azure Console
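If you prefer the CLI, the provider can be registered with a single az call (a sketch, assuming the Azure CLI is installed and you have already run `az login`):

```shell
# Register the Microsoft.NetApp resource provider (one-time, per subscription).
# --wait blocks until registration completes.
az provider register --namespace Microsoft.NetApp --wait
```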

Create storage account

Again, for brevity, I am using the same RESOURCE_GROUP and service principal that the cluster was created with.

Azure Console

or az cli
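A sketch of the az CLI equivalent; the resource group, region, and account name below are placeholders for your own values:

```shell
# Placeholder values -- substitute your own resource group, region, and account name
RESOURCE_GROUP=myresourcegroup
LOCATION=eastus
ANF_ACCOUNT=anf-account

# Create the NetApp account
az netappfiles account create \
  --resource-group "$RESOURCE_GROUP" \
  --location "$LOCATION" \
  --account-name "$ANF_ACCOUNT"
```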

Create capacity pool

We’ll create one pool for now. The common pattern is to expose all three service levels, with unique pool names corresponding to each service level.

Azure Console

or az cli:
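A hedged example using the same placeholder names as above; note that `--size` is specified in TiB, and 4 TiB is the minimum capacity pool size:

```shell
# Create a Standard-tier capacity pool (size is in TiB)
az netappfiles pool create \
  --resource-group "$RESOURCE_GROUP" \
  --location "$LOCATION" \
  --account-name "$ANF_ACCOUNT" \
  --pool-name anf-pool \
  --size 4 \
  --service-level Standard
```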

Delegate subnet to ARO

Log in to the Azure console, find the subnets for your ARO cluster, and click Add subnet. We need to call this subnet anf.subnet, since that is the name we refer to in later configuration.

delegate subnet
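The same delegation can be sketched with the az CLI; the VNet name and address prefix below are assumptions you must adjust to match your ARO network:

```shell
# Create a subnet named anf.subnet in the ARO VNet and delegate it to NetApp Files.
# "aro-vnet" and the address prefix are placeholders -- use your own values.
az network vnet subnet create \
  --resource-group "$RESOURCE_GROUP" \
  --vnet-name aro-vnet \
  --name anf.subnet \
  --address-prefixes 10.0.100.0/28 \
  --delegations "Microsoft.Netapp/volumes"
```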

Install Trident Operator

Login/Authenticate to ARO

Log in to your ARO cluster. You can create a token to log in via the CLI straight from the web GUI.

get openshift oc credentials
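The copied login command looks roughly like this (the token and server URL are placeholders you get from the “Copy login command” page in the console):

```shell
# Paste the command shown by the web console; token and API URL are placeholders
oc login --token=<token> --server=https://api.<cluster-domain>:6443
```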

Helm Install

Download latest Trident package

Extract tar.gz into working directory

cd into installer

Helm install
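The four steps above can be sketched as follows. The Trident version shown is illustrative; check NetApp’s GitHub releases page for the latest release and adjust the version accordingly:

```shell
# Download and extract the Trident installer (version is illustrative)
TRIDENT_VERSION=23.01.0
curl -LO "https://github.com/NetApp/trident/releases/download/v${TRIDENT_VERSION}/trident-installer-${TRIDENT_VERSION}.tar.gz"
tar -xf "trident-installer-${TRIDENT_VERSION}.tar.gz"

# The tarball extracts to trident-installer/; the Helm chart lives under helm/
cd trident-installer/helm

# Install the Trident operator into its own namespace
helm install trident "trident-operator-${TRIDENT_VERSION}.tgz" \
  --create-namespace --namespace trident
```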

Example output from installation:

Validate

Note for Mac users: take a look at the extras/macos/bin directory to find the proper tridentctl binary for macOS.

Install tridentctl

I put all my CLIs in /usr/local/bin.
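A minimal sketch, assuming you are still in (or one level above) the extracted trident-installer directory:

```shell
# Copy tridentctl somewhere on your PATH and verify it can reach the operator
sudo cp trident-installer/tridentctl /usr/local/bin/
sudo chmod +x /usr/local/bin/tridentctl
tridentctl version -n trident
```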

example output:

Create trident backend

FYI - sample files for review are in the sample-input/backends-samples/azure-netapp-files directory of the Trident tgz we extracted earlier.

  1. Replace clientID with your service principal ID
  2. Replace clientSecret with your service principal secret
  3. Replace tenantID with your account tenant ID
  4. Replace subscriptionID with your Azure subscription ID
  5. Ensure location matches your Azure region

Notes:

  • In case you don’t have the service principal secret, you can create a new secret within the credentials pane of the app registration in AAD.
  • I have used nfsv3 for basic compatibility. You can remove that line and use NetApp files defaults.
  • For further steps, you must ensure the service principal has the required privileges in place. Otherwise you will face an error like this: “error initializing azure-netapp-files SDK client. capacity pool query returned no data; no capacity pools found for storage pool”. One way to avoid this is to create a new custom role (Subscription -> IAM -> Create a custom role) with all the privileges listed in the official documentation (NetApp Files for Azure) and assign this new role to the cluster’s service principal.

Add the following snippet:
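A hedged sketch of the backend definition, based on Trident’s azure-netapp-files sample; the file name and all bracketed values are placeholders you fill in per steps 1–5 above:

```shell
# Hypothetical backend file -- substitute the bracketed values with your own IDs
cat > anf-backend.json <<'EOF'
{
  "version": 1,
  "storageDriverName": "azure-netapp-files",
  "backendName": "anf-backend",
  "subscriptionID": "<subscriptionID>",
  "tenantID": "<tenantID>",
  "clientID": "<clientID>",
  "clientSecret": "<clientSecret>",
  "location": "<region>",
  "serviceLevel": "Standard",
  "nfsMountOptions": "vers=3"
}
EOF
```

The `nfsMountOptions` line pins NFSv3 for basic compatibility; remove it to fall back to the NetApp Files defaults, as noted above.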

run
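Assuming the backend definition was saved as anf-backend.json (the name is illustrative):

```shell
# Create the Trident backend in the trident namespace
tridentctl create backend -f anf-backend.json -n trident
```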

example output:

If you get a failure here, you can run the following command to view log output that may help steer you in the right direction.
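For example:

```shell
# Dump Trident's logs for troubleshooting backend creation
tridentctl logs -n trident
```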

Storage Class

Create storage class
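A minimal sketch of a storage class backed by the Trident CSI driver; the class name is an assumption, but the provisioner string is Trident’s standard CSI provisioner:

```shell
# Create a storage class that provisions volumes via the Trident CSI driver
cat <<'EOF' | oc apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azure-netapp-files
provisioner: csi.trident.netapp.io
parameters:
  backendType: "azure-netapp-files"
allowVolumeExpansion: true
EOF
```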

output:

Provision volume

Let’s create a new project and set up a persistent volume claim. Remember that PV claims are namespaced objects, and you must create the PVC in the namespace where it will be allocated. I’ll use the project “netappdemo”.

Now we’ll create a PV claim in the “netappdemo” project we just created.
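A sketch of both steps; the PVC name is a placeholder, and the 100Gi request reflects the Azure NetApp Files minimum volume size:

```shell
# Create the demo project
oc new-project netappdemo

# Create an RWX claim against the storage class created earlier
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-netapp
  namespace: netappdemo
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: azure-netapp-files
EOF
```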

output:

Verify

Quick verification of storage, volumes and services.

Verify Kubectl
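A few quick checks from the CLI, assuming the names used earlier in this guide:

```shell
# Confirm the storage class exists
oc get storageclass azure-netapp-files

# Confirm the claim is Bound and a PV was provisioned
oc get pvc,pv -n netappdemo
```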

Verify OpenShift

Login to your cluster as cluster-admin and verify your storage classes and persistent volumes.

Storage Class


Persistent Volumes

Create Pods to test Azure NetApp

We’ll create two pods here to exercise the Azure NetApp file mount: one to write data and another to read data, to show that the volume is mounted as “read write many” and working correctly.

Writer Pod

This pod will write “hello netapp” to a shared NetApp mount.
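A minimal sketch of such a pod; the pod name, image, and mount path are assumptions, and the PVC name matches the claim created earlier:

```shell
# Writer pod: writes "hello netapp" to the shared volume, then stays alive
cat <<'EOF' | oc apply -n netappdemo -f -
apiVersion: v1
kind: Pod
metadata:
  name: netapp-writer
spec:
  containers:
    - name: writer
      image: registry.access.redhat.com/ubi8/ubi-minimal
      command: ["/bin/sh", "-c", "echo 'hello netapp' > /mnt/netapp/hello.txt && sleep infinity"]
      volumeMounts:
        - name: netapp-volume
          mountPath: /mnt/netapp
  volumes:
    - name: netapp-volume
      persistentVolumeClaim:
        claimName: pvc-netapp
EOF
```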

You can watch for this container to be ready:
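For example (assuming the pod name used above):

```shell
# Watch until the writer pod reports Running / 1/1 Ready
oc get pod netapp-writer -n netappdemo -w
```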

Or view it in the OpenShift Pod console for the netappdemo project.

Netapp Trident Demo Container

Reader Pod

This pod will read back the data from the shared NetApp mount.
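A companion sketch to the writer pod above, mounting the same claim and catting the file the writer created (names are the same assumptions as before):

```shell
# Reader pod: reads the file written by the writer pod from the shared volume
cat <<'EOF' | oc apply -n netappdemo -f -
apiVersion: v1
kind: Pod
metadata:
  name: netapp-reader
spec:
  containers:
    - name: reader
      image: registry.access.redhat.com/ubi8/ubi-minimal
      command: ["/bin/sh", "-c", "cat /mnt/netapp/hello.txt && sleep infinity"]
      volumeMounts:
        - name: netapp-volume
          mountPath: /mnt/netapp
  volumes:
    - name: netapp-volume
      persistentVolumeClaim:
        claimName: pvc-netapp
EOF
```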

Now let’s verify the pod is reading from the shared volume.
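If the RWX mount is working, the reader’s logs should contain the “hello netapp” line written by the writer pod:

```shell
# The reader pod cat'ed the shared file on startup, so it appears in its logs
oc logs netapp-reader -n netappdemo
```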

You can also see the pod details in OpenShift for the reader:

OpenShift Netapp Reader Pod

Interested in contributing to these docs?

Collaboration drives progress. Help improve our documentation The Red Hat Way.
