How to provision better storage and enhance cluster deployments on Red Hat OpenShift Service on AWS with Amazon FSx for NetApp ONTAP

Learn how to create container applications and virtual machines on Red Hat® OpenShift® Service on AWS clusters using Internet Small Computer System Interface (iSCSI) volumes with Amazon FSx for NetApp ONTAP. Provision better storage and enhance your cluster deployments.

Authors: Mayur Shetty, Principal Ecosystem Solution Architect, Red Hat and Banu Sundhar, Senior Technical Marketing Engineer, NetApp

Setting up Amazon FSx for NetApp ONTAP for Red Hat OpenShift Service on AWS (ROSA) with iSCSI storage integration


Red Hat® OpenShift® Service on AWS (ROSA) integrates with Amazon FSx for NetApp ONTAP (FSxN)—a fully managed, scalable, shared storage service built on NetApp's renowned ONTAP file system. Integration with NetApp Trident, a dynamic Container Storage Interface (CSI) driver, facilitates the management of Kubernetes Persistent Volume Claims (PVCs) for storage disks. NetApp Trident automates on-demand provisioning of storage volumes across diverse deployment environments, making it easier to scale and protect data for applications.

NetApp customers using the Red Hat OpenShift Container Platform to run their modern application or virtual machine workloads have an easier way to install NetApp Trident (starting with version 25.02) on their Red Hat OpenShift clusters. The Trident 25.02 operator can optionally prepare the worker nodes for iSCSI, significantly reducing the effort of worker node preparation. This enhancement eliminates the need for manual preparation and streamlines the process of provisioning persistent volumes for various workloads.

What will you learn?

  • How to add FSxN with iSCSI integration to your ROSA environment
  • How to verify iSCSI status on ROSA cluster nodes

What you need before starting:

Provisioning FSx for NetApp ONTAP

To begin, you must create a multi-Availability Zone (AZ) FSx for NetApp ONTAP file system in the same virtual private cloud (VPC) as the ROSA cluster. There are several ways to do this, but for this learning path we will use a CloudFormation stack.

  1. Clone the GitHub repository.

    # git clone https://github.com/aws-samples/rosa-fsx-netapp-ontap.git
  2. Run the CloudFormation Stack.

    Run the command below by replacing the parameter values with your own values:

    # cd rosa-fsx-netapp-ontap/fsx
    # aws cloudformation create-stack \
        --stack-name ROSA-FSXONTAP \
        --template-body file://./FSxONTAP.yaml \
        --region <region-name> \
        --parameters \
        ParameterKey=Subnet1ID,ParameterValue=[subnet1_ID] \
        ParameterKey=Subnet2ID,ParameterValue=[subnet2_ID] \
        ParameterKey=myVpc,ParameterValue=[VPC_ID] \
        ParameterKey=FSxONTAPRouteTable,ParameterValue=[routetable1_ID,routetable2_ID] \
        ParameterKey=FileSystemName,ParameterValue=ROSA-myFSxONTAP \
        ParameterKey=ThroughputCapacity,ParameterValue=1024 \
        ParameterKey=FSxAllowedCIDR,ParameterValue=[your_allowed_CIDR] \
        ParameterKey=FsxAdminPassword,ParameterValue=[Define Admin password] \
        ParameterKey=SvmAdminPassword,ParameterValue=[Define SVM password] \
        --capabilities CAPABILITY_NAMED_IAM

    Where:

    region-name: the same region where the ROSA cluster is deployed

    subnet1_ID: ID of the preferred subnet for FSxN

    subnet2_ID: ID of the standby subnet for FSxN

    VPC_ID: ID of the VPC where the ROSA cluster is deployed

    routetable1_ID, routetable2_ID: IDs of the route tables associated with the subnets chosen above

    your_allowed_CIDR: allowed CIDR range for the FSx for ONTAP security group ingress rules to control access. You can use 0.0.0.0/0 or any appropriate CIDR to allow all traffic to access the specific ports of FSx for ONTAP.

    Define Admin password: a password to log in to FSxN

    Define SVM password: a password to log in to the SVM that will be created

  3. Verify that your file system and storage virtual machine (SVM) have been created using the Amazon FSx console, shown below. (A CLI alternative is sketched after the figure.)

The Amazon console page displaying the newly created OntapFileSystem and its availability.

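If you prefer the command line to the console, the following AWS CLI sketch waits for the stack and then checks the lifecycle state of the file system and SVM. It is a minimal example, assuming the stack name ROSA-FSXONTAP used above and a single ONTAP file system in the region; adjust the region and queries for your environment.

    # aws cloudformation wait stack-create-complete --stack-name ROSA-FSXONTAP --region <region-name>
    # aws fsx describe-file-systems --region <region-name> \
        --query 'FileSystems[?FileSystemType==`ONTAP`].[FileSystemId,Lifecycle]' --output table
    # aws fsx describe-storage-virtual-machines --region <region-name> \
        --query 'StorageVirtualMachines[].[StorageVirtualMachineId,Name,Lifecycle]' --output table

The first command blocks until the stack reaches CREATE_COMPLETE; the file system should then report a Lifecycle of AVAILABLE and the SVM a Lifecycle of CREATED.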

Installing and configuring Trident using the Trident Operator

  1. Before installing Trident 25.02, verify that no Trident objects are present on the cluster.
    Figure 1: image shows the command # oc get pods -n trident and the result "no resources found in trident namespace"
  2. From the OperatorHub, locate the Red Hat certified Trident Operator and install it (clicking on the blue install button, shown below, will install the Trident Operator in all the namespaces on the cluster).
    Figure 2: image shows "NetApp Trident" box selected and highlighted in the OperatorHub menu

    Figure 3: image shows the successful install notice for NetApp Trident

    Figure 4: image shows the "Install Operator" form under "OperatorHub." Under "Installation mode," the default selection ("All namespaces on the cluster") is selected

    Figure 5: image shows the success screen for installation and a "view operator" button
     
  3. From the “Installed Operators” selection, create the Trident Orchestrator instance. When you click on create using the form view, you will install Trident using default options (step a). However, if you want to set any custom values or turn on iSCSI node prep during installation, use the YAML view (step b).

    Figure 6: image shows the Installed Operators form that lists the Trident Orchestrator and Trident Configurator APIs
     

    a. To create via the form view, select “create instance” then “form view” 
    Figure 7: image shows the form view for creating the Trident Orchestrator, which includes the required name "trident" and a blue "create" button
     

    b. To set custom values or turn on iSCSI node prep during installation, use the YAML view (example below). The image below shows that IPv6 support has been set to false and the debug option has been set to true for the TridentOrchestrator in the trident namespace; a YAML sketch with these options appears after this list.
    Figure 8: image shows the YAML view for creating Trident Orchestrator, including the name trident, nodeprep: -iscsi, and namespace trident
     

    Figure 9: image shows the confirmation screen that the TridentOrchestrator has been installed successfully under the "status" column
     

  4. Verify that the Trident objects are installed.
    Figure 10: command # oc get pods -n trident results with ready, status, restarts, and age. "Status" shows Trident objects are "running."
     
  5. Verify that the worker nodes have iscsid and multipathd enabled, and that the /etc/multipath.conf file has the entries shown in Figure 13.
    1. Log in to each of the worker nodes and verify that iscsid and multipathd are running. See an example command below:
      # oc debug node/ip-10-0-0-17.us-west-2.compute.internal
    2. Execute the following commands:
      # chroot /host
      # systemctl status iscsid
      # systemctl status multipathd
      Figure 11: return result for command shows iscsid.service is loaded (enabled) and active (running)

      Figure 12: return result for command shows multipathd.service is loaded (enabled) and active (running)
       
  6. Verify that the multipath.conf file has the following entries (a reconstructed sketch of these entries appears after this list):
    Figure 13: cat command used to show contents of multipath.conf file, verifying that defaults for find_multipaths is no, blacklist vendor and product entries are .* and blacklist_exceptions for vendor are NETAPP and for product are LUN
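For reference, here is a minimal sketch of the TridentOrchestrator YAML described in Figure 8, with iSCSI node preparation turned on, debug enabled, and IPv6 support disabled. It reflects the options shown in the figure; adjust the values for your environment.

    apiVersion: trident.netapp.io/v1
    kind: TridentOrchestrator
    metadata:
      name: trident
    spec:
      debug: true
      namespace: trident
      IPv6: false
      nodePrep:
      - iscsi

Likewise, a reconstruction of the /etc/multipath.conf entries described in Figure 13; the exact file written on your worker nodes may differ slightly.

    defaults {
        find_multipaths no
    }
    blacklist {
        device {
            vendor ".*"
            product ".*"
        }
    }
    blacklist_exceptions {
        device {
            vendor "NETAPP"
            product "LUN"
        }
    }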

Configure the Trident CSI backend to use FSx for NetApp ONTAP (ONTAP SAN for iSCSI)

The Trident backend configuration tells Trident how to communicate with the storage system (in this case, FSxN). To create the backend, we provide the credentials used to connect to the file system, along with the cluster management LIF and the storage virtual machine (SVM) to use for storage provisioning. We will use the ontap-san driver to provision storage volumes in the FSxN file system.

  1. Create a secret that securely stores the login credentials for the backend. Review the file with the command # cat tbc-fsx-san.yaml; it should contain a Secret like the following:

    a. apiVersion: v1
       kind: Secret
       metadata:
         name: tbc-fsx-san-secret
       type: Opaque
       stringData:
         username: fsxadmin
         password: <fsxadmin password>

    b. Note: you can also retrieve the SVM password created for FSxN from AWS Secrets Manager, as shown below.

    Figure 14: image shows "secrets" screen under "AWS Secrets Manager", which lists secret name and description


    Figure 15: AWS secrets manager page with ROSA-FSXONTAP-FsxAdminPassword selected, showing secret details, description, value, and resource permissions. There is an Actions dropdown menu in the top right corner.
     

  2. Configure Trident using the following yaml: 
    1.  apiVersion: trident.netapp.io/v1
      kind: TridentBackendConfig
      metadata:
        name: tbc-fsx-san
      spec:
        version: 1
        storageDriverName: ontap-san
        managementLIF: <management LIF of the file system in AWS>
        backendName: tbc-fsx-san
        svm: <SVM name that is created in the file system>
        defaults:
          storagePrefix: demo
          nameTemplate: "{{ .config.StoragePrefix }}_{{ .volume.Namespace }}_{{ .volume.RequestName }}"
        credentials:
          name: tbc-fsx-san-secret
    2. Note: you can get the Management LIF and the storage virtual machine (SVM) name from the Amazon FSx Console, as shown below. (A CLI alternative is sketched after this list.)
    3. Figure 16: the Amazon FSx Console shows the fs-0d82135cccceb074a summary screen including file system ID, lifecycle state, file system and deployment type, and storage and throughput capacity. The "Administration" tab is highlighted, showing the ONTAP administration DNS names for the management endpoint and inter-cluster endpoint.
       
  3. Enter the following command to deploy the configuration: # oc apply -f tbc-fsx-san.yaml
  4. Verify that the backend object has been created, that "Phase" shows "Bound," and that "Status" is "Success."
    Figure 17: Command # oc create -f tbc-fsx-san.yaml -n trident showing return name, backend name and UUID, phase (bound) and status (success).
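As an alternative to the console, the management LIF DNS name and the SVM name can also be retrieved with the AWS CLI. This is a minimal sketch assuming a single ONTAP file system in the region:

    # aws fsx describe-file-systems --region <region-name> \
        --query 'FileSystems[?FileSystemType==`ONTAP`].OntapConfiguration.Endpoints.Management.DNSName' --output text
    # aws fsx describe-storage-virtual-machines --region <region-name> \
        --query 'StorageVirtualMachines[].Name' --output text

Use the returned DNS name for managementLIF and the SVM name for svm in tbc-fsx-san.yaml.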

Create Storage Class for iSCSI

Now that the Trident backend is configured, you can create a Kubernetes storage class to use the backend. A storage class is a resource object made available to the cluster. It describes and classifies the type of storage that you can request for an application.

  1. Enter the command # cat sc-fsx-san.yaml to review the file.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: sc-fsx-san
    provisioner: csi.trident.netapp.io
    parameters:
      backendType: "ontap-san"
      media: "ssd"
      provisioningType: "thin"
      fsType: ext4
      snapshots: "true"
      storagePools: "tbc-fsx-san:.*"
    allowVolumeExpansion: true
  2. Create the storage class in the ROSA cluster with the command # oc create -f sc-fsx-san.yaml and verify that the sc-fsx-san storage class has been created. (An example claim that uses this class appears after this list.)
    Figure 18: Return result in command line showing storage class is created and listing name, provisioner, reclaimpolicy, volumebindingmode, allowvolumeexpansions, and age.
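To see the storage class in action, here is a minimal example PersistentVolumeClaim that requests a 10 GiB iSCSI volume from the sc-fsx-san class. The claim name and size are hypothetical; the workloads later in this learning path define their own claims.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-fsx-san
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: sc-fsx-san

After creating it with # oc create -f <file>, the command # oc get pvc should show the claim as Bound, with Trident provisioning the backing volume in the FSxN SVM.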

Create a Snapshot Class in Trident so that CSI snapshots can be taken

Snapshots are point-in-time copies of your volumes that keep your data safe should the original ever be accidentally removed.

  1. Use the command # cat snapshotclass.yaml to view the contents of your snapshot blueprint. 

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: trident-snapshotclass
    driver: csi.trident.netapp.io
    deletionPolicy: Retain
  2. Use the command # oc create -f snapshotclass.yaml to make the snapshot class available to your system and confirm that it is created. (An example VolumeSnapshot that uses this class appears after this list.)
    Figure 19: result stating snapshotclass has been created and listing name, driver, deletionpolicy, and age.
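With the snapshot class in place, a CSI snapshot of a bound claim can be requested with a VolumeSnapshot object such as the sketch below. The names reference the hypothetical PVC from the previous section; substitute your own claim name.

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: pvc-fsx-san-snapshot
    spec:
      volumeSnapshotClassName: trident-snapshotclass
      source:
        persistentVolumeClaimName: pvc-fsx-san

Check readiness with # oc get volumesnapshot. Because the deletionPolicy is Retain, the underlying snapshot content is kept even if the VolumeSnapshot object is deleted.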

Optional: Verify iSCSI status on the ROSA Cluster nodes

This optional step shows you how to verify that ROSA worker nodes are not yet prepared for iSCSI workloads. 

  1. Use the command # oc get nodes to verify that all the nodes are in a "Ready" state.
  2. Log in to a ROSA cluster worker node.

    a. Run the command # oc debug node/ip-10-0-0-17.us-west-2.compute.internal

    b. The output Starting pod/ip-10-0-0-17us-west-2.computeinternal-debug-94whw confirms that the debug pod is being created.

    c. To use host binaries, enter the command chroot /host

    d. The output Pod IP: 10.0.0.17 confirms the temporary pod is launched.

    e. Note: If you don’t see a command prompt, try pressing enter.

  3. View the status of the iSCSI daemon, the multipathing daemon, and the contents of the multipath.conf file; on an unprepared node this should indicate that iSCSI and multipathing are not configured (see Figure 20, below). Without this configuration, the volumes created in ONTAP cannot be mounted by the application pods using Trident. The output of the commands below should show “loaded” and “inactive.” You will also see that /etc/multipath.conf does not exist and all devices are blacklisted.
    sh-5.1# chroot /host
    sh-5.1# systemctl status iscsid
    sh-5.1# systemctl status multipathd

    Figure 20: command results highlighting the following: [root@localhost fsx]# oc debug node/ip-10-0-0-17.us-west-2.compute.internal; sh-5.1# chroot /host; sh-5.1# systemctl status iscsid reporting iscsid.service as loaded but inactive; sh-5.1# systemctl status multipathd reporting multipathd.service as loaded but inactive; /etc/multipath.conf does not exist, blacklisting all devices.

    This completes the installation of the Trident CSI driver and its connectivity to the FSxN file system using iSCSI. You are now ready to use iSCSI storage for container apps and virtual machines in ROSA.


