How to set up Amazon FSx for NetApp ONTAP to use with Red Hat OpenShift Service on AWS

Learn how to implement Amazon FSx for NetApp ONTAP as a persistent layer storage to use with your Red Hat OpenShift Service on AWS clusters. Explore key strategies for your application data, as well as how to import your existing virtual machines.

Setting up Amazon FSx for NetApp ONTAP for Red Hat OpenShift Service on AWS (ROSA) with Hosted Control Plane

2 hrs

Red Hat® OpenShift® Service on AWS (ROSA) integrates seamlessly with Amazon FSx for NetApp ONTAP (FSxN), a fully managed, scalable shared storage service built on NetApp's renowned ONTAP file system.

The integration with the NetApp Trident driver—a dynamic Container Storage Interface (CSI)—facilitates the management of Kubernetes Persistent Volume Claims (PVCs) on storage disks. This driver automates the on-demand provisioning of storage volumes across diverse deployment environments, making it simpler to scale and protect data for your applications.

What will you learn?

  • How to integrate FSxN with ROSA

Provisioning FSx for NetApp ONTAP

To begin, you must create a multi-availability zone (AZ) FSx for NetApp ONTAP file system in the same virtual private cloud (VPC) as the ROSA cluster. There are several ways to do this; for the purposes of this learning path, we will use a CloudFormation stack.

  1. Clone the GitHub repository.

    # git clone https://github.com/aws-samples/rosa-fsx-netapp-ontap.git
  2. Run the CloudFormation Stack

    Run the command below, replacing the parameter values with your own values:

    # cd rosa-fsx-netapp-ontap/fsx
    # aws cloudformation create-stack \
        --stack-name ROSA-FSXONTAP \
        --template-body file://./FSxONTAP.yaml \
        --region <region-name> \
        --parameters \
        ParameterKey=Subnet1ID,ParameterValue=[subnet1_ID] \
        ParameterKey=Subnet2ID,ParameterValue=[subnet2_ID] \
        ParameterKey=myVpc,ParameterValue=[VPC_ID] \
        ParameterKey=FSxONTAPRouteTable,ParameterValue=[routetable1_ID,routetable2_ID] \
        ParameterKey=FileSystemName,ParameterValue=ROSA-myFSxONTAP \
        ParameterKey=ThroughputCapacity,ParameterValue=1024 \
        ParameterKey=FSxAllowedCIDR,ParameterValue=[your_allowed_CIDR] \
        ParameterKey=FsxAdminPassword,ParameterValue=[Define Admin password] \
        ParameterKey=SvmAdminPassword,ParameterValue=[Define SVM password] \
        --capabilities CAPABILITY_NAMED_IAM

    Where:

      region-name: same as the region where the ROSA cluster is deployed
      subnet1_ID: ID of the preferred subnet for FSxN
      subnet2_ID: ID of the standby subnet for FSxN
      VPC_ID: ID of the VPC where the ROSA cluster is deployed
      routetable1_ID, routetable2_ID: IDs of the route tables associated with the subnets chosen above
      your_allowed_CIDR: allowed CIDR range for the FSx for ONTAP security group ingress rules to control access. You can use 0.0.0.0/0 or any appropriate CIDR to allow all traffic to access the specific ports of FSx for ONTAP.
      Define Admin password: a password to log in to FSxN
      Define SVM password: a password to log in to the SVM that will be created

  3. Verify that your file system and storage virtual machine (SVM) have been created using the Amazon FSx console, shown below:

     The Amazon FSx console page displaying the newly created OntapFileSystem and its availability.
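     If you prefer the command line, you can also track progress from the terminal. The following is a sketch using standard AWS CLI commands; the stack name and the <region-name> placeholder match the create-stack command above, and file system creation can take a while:

     # aws cloudformation wait stack-create-complete --stack-name ROSA-FSXONTAP --region <region-name>
     # aws fsx describe-file-systems --region <region-name> --query "FileSystems[].[FileSystemId,Lifecycle]" --output table

     The file system is ready when its Lifecycle state is AVAILABLE.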

Installing and configuring Trident CSI driver for the ROSA cluster

  1. Add the NetApp Trident Helm repository.

    # helm repo add netapp-trident https://netapp.github.io/trident-helm-chart
  2. Install Trident using Helm.
    1. Note that depending on the version you install, the version parameter will need to be changed in the following command (refer to the documentation for the correct version number):
    2. # helm install trident netapp-trident/trident-operator --version 100.2406.0 --create-namespace --namespace trident
    3. For the correct installation method for the specific Trident version you want to install, refer to the Trident documentation. A version-check sketch follows this list.
  3. Verify that all Trident pods are in the running state.
    1. # oc get pods -n trident
       NAME                                READY   STATUS    RESTARTS   AGE
       trident-controller-f5f6796f-vd2sk   6/6     Running   0          19h
       trident-node-linux-4svgz            2/2     Running   0          19h
       trident-node-linux-dj9j4            2/2     Running   0          19h
       trident-node-linux-jlshh            2/2     Running   0          19h
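
       Before installing, you can list the chart versions available in the repository, and after installation confirm which Trident version the operator deployed; a quick sketch using the repo alias and namespace from the steps above:

       # helm repo update
       # helm search repo netapp-trident/trident-operator --versions
       # oc get tridentversions -n trident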
                      

Configure the Trident CSI backend to use FSx for NetApp ONTAP (ONTAP NAS)

The Trident backend configuration tells Trident how to communicate with the storage system (in this case, FSxN). To create the backend, we will provide the credentials of the SVM to connect to, along with the cluster management and NFS data interfaces. We will use the ontap-nas driver to provision storage volumes in the FSx file system.
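
For reference, a representative TridentBackendConfig for the ontap-nas driver is sketched below. The field values are placeholders, and the backend-ontap-nas.yaml file in the cloned repository may differ in details such as the backend name; it references the SVM credentials secret created in the next step.

    apiVersion: trident.netapp.io/v1
    kind: TridentBackendConfig
    metadata:
      name: backend-fsx-ontap-nas
      namespace: trident
    spec:
      version: 1
      storageDriverName: ontap-nas
      backendName: fsx-ontap
      managementLIF: <Management DNS name of the SVM>
      dataLIF: <NFS DNS name of the SVM>
      svm: <SVM name>
      credentials:
        name: backend-fsx-ontap-nas-secret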

  1. Create a secret for the SVM credentials using the following yaml.
    1. apiVersion: v1
       kind: Secret
       metadata:
         name: backend-fsx-ontap-nas-secret
         namespace: trident
       type: Opaque
       stringData:
         username: vsadmin
         password: <value provided for Define SVM password as a parameter to the CloudFormation Stack>
    2. Note: You can also retrieve the SVM password created for FSxN from AWS Secrets Manager, as shown below.

       AWS Secrets Manager menu displaying current secrets available.

       AWS Secrets Manager page with the Actions dropdown menu highlighted.

  2. Add the secret for the SVM credentials to the ROSA cluster using the following command:
    1. # oc apply -f svm_secret.yaml
    2. You can verify that the secret has been added in the trident namespace using the following command: 
    3. # oc get secrets -n trident |grep backend-fsx-ontap-nas-secret
  3. Create the backend object.
    1. To do this, move into the fsx directory of your cloned Git repository and open the file backend-ontap-nas.yaml (see the representative manifest above). Replace the following:
    2. managementLIF with the Management DNS name,
    3. dataLIF with the NFS DNS name of the Amazon FSx SVM, and
    4. svm with the SVM name. Then create the backend object using the following command.
    5. # oc apply -f backend-ontap-nas.yaml
    6. Note: You can get the Management DNS name, NFS DNS name, and SVM name from the Amazon FSx console, as shown in the screenshot below:

       The summary screen of the Amazon FSx console with the SVM ID, NFS DNS name, and SVM name highlighted.

       

  4. Verify that the backend object has been created and that Phase shows Bound and Status shows Success.

     Returned command line results showing the backend-fsx-ontap-nas object as Bound and Success.
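
     The verification command itself appears only in the screenshot. Assuming the backend was created as a TridentBackendConfig (as sketched earlier), it is typically:

     # oc get tbc -n trident

     (tbc is the short name for tridentbackendconfig; the output includes the PHASE and STATUS columns mentioned above.)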

     

Create Storage Class

Now that the Trident backend is configured, you can create a Kubernetes storage class that uses it. A storage class is a resource object made available to the cluster; it describes and classifies the type of storage that applications can request.

  1. Review the file storage-class-csi-nas.yaml in the fsx folder.
    1. apiVersion: storage.k8s.io/v1
       kind: StorageClass
       metadata:
         name: trident-csi
       provisioner: csi.trident.netapp.io
       parameters:
         backendType: "ontap-nas"
         fsType: "ext4"
       allowVolumeExpansion: True
       reclaimPolicy: Retain
  2. Create the Storage Class in the ROSA cluster and verify that the trident-csi storage class has been created.
    1. # oc apply -f storage-class-csi-nas.yaml

       Command line results showing the names, provisioners, reclaim policy, volume binding mode, allow volume expansion, and age of the created storage classes.

       This completes the installation of the Trident CSI driver and its connectivity to the FSx for ONTAP file system. Now you can deploy a sample Postgresql stateful application on ROSA using file volumes on FSx for ONTAP.

  3. Verify that there are no PVCs and PVs created using the trident-csi storage class.

     Command line results from the "oc get pvc -A" command showing existing PVCs across multiple namespaces.

  4. Verify that applications can create a PV using Trident CSI.
    1. Create a PVC using the pvc-trident.yaml file provided in the fsx folder.
    2. pvc-trident.yaml
       kind: PersistentVolumeClaim
       apiVersion: v1
       metadata:
         name: basic
       spec:
         accessModes:
           - ReadWriteMany
         resources:
           requests:
             storage: 10Gi
         storageClassName: trident-csi
    3. You can issue the following commands to create a PVC and verify that it has been created:
    4. # oc create -f pvc-trident.yaml -n trident
       # oc get pvc -n trident

       Command line results from the "oc get pvc -n trident" command showing the created PVCs.
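
       Optionally, you can also confirm that the claim is Bound and that Trident dynamically provisioned a backing PV; a quick check:

       # oc get pvc basic -n trident
       # oc get pv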

Deploy a sample Postgresql stateful application
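
This example installs PostgreSQL from the Bitnami Helm chart. If the bitnami repository has not yet been added to your Helm client, add it first (a minimal sketch):

    # helm repo add bitnami https://charts.bitnami.com/bitnami
    # helm repo update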

  1. Use Helm to install postgresql.
    1. # helm install postgresql bitnami/postgresql -n postgresql --create-namespace

       Command line readout from the Helm installation.
  2. Verify that the application pod is running, and that a PVC and a PV have been created for the application.
    1. # oc get pods -n postgresql

       Command line result showing pods running.

       Command line result showing the PVC readout.

       Command line result showing the created PV.

  3. Use the following command to get the password for the postgresql server that was installed.
    1. # export POSTGRES_PASSWORD=$(kubectl get secret --namespace postgresql postgresql -o jsonpath="{.data.postgres-password}" | base64 -d)
  4. Use the following command to run a postgresql client and connect to the server using the password.
    1. # kubectl run postgresql-client --rm --tty -i --restart='Never' --namespace postgresql --image docker.io/bitnami/postgresql:16.2.0-debian-11-r1 --env="PGPASSWORD=$POSTGRES_PASSWORD" \
         --command -- psql --host postgresql -U postgres -d postgres -p 5432

       Command line readout after using the command to run the postgresql client.

       

  5. Create a database and a table. Create a schema for the table and insert two rows of data into the table (the statements themselves are shown in the screenshots below; see the psql sketch after this list).

     Example create database and table commands.

     Table example showing a single row with an id of 1, belonging to John Doe.

     Example database with two rows: one for John Doe (id=1) and one for Jane Scott (id=2).

  6. Create a Snapshot of the app volume.
    1. First, create a VolumeSnapshotClass. Save the following manifest in a file called volume-snapshot-class.yaml.
    2. apiVersion: snapshot.storage.k8s.io/v1
       kind: VolumeSnapshotClass
       metadata:
         name: fsx-snapclass
       driver: csi.trident.netapp.io
       deletionPolicy: Delete
    3. Create the VolumeSnapshotClass by applying the above manifest (oc create -f volume-snapshot-class.yaml).

       Command line readout from executing the command.
  7. Create a snapshot.
    1. Create a snapshot of the existing PVC by creating a VolumeSnapshot to take a point-in-time copy of your Postgresql data. This creates an FSx snapshot that takes almost no space in the filesystem backend. Save the following manifest in a file called volume-snapshot.yaml:
    2. apiVersion: snapshot.storage.k8s.io/v1
       kind: VolumeSnapshot
       metadata:
         name: postgresql-volume-snap-01
       spec:
         volumeSnapshotClassName: fsx-snapclass
         source:
           persistentVolumeClaimName: data-postgresql-0
  8. Create the volume snapshot and confirm that it is created:
    1. # oc create -f volume-snapshot.yaml -n postgresql
    2. # oc get volumesnapshot -n postgresql

       Command line readout retrieving the volume snapshot.

  9. Delete the database to simulate the loss of data (data loss can happen for a variety of reasons; here we simply simulate it by deleting the database). See the psql sketch after this list for representative commands.

     Previously created database being shown.

     The database being dropped via the command line.
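
     The exact SQL statements in steps 5 and 9 appear only in the screenshots above. A minimal psql sketch consistent with the captions is shown below; the database and table names are illustrative, not taken from the original:

     -- At the psql prompt opened in step 4
     CREATE DATABASE testdb;
     \c testdb
     CREATE TABLE persons (id SERIAL PRIMARY KEY, name VARCHAR(50));
     INSERT INTO persons (name) VALUES ('John Doe'), ('Jane Scott');
     SELECT * FROM persons;

     -- Later, to simulate the data loss in step 9, reconnect to the default database and drop it
     \c postgres
     DROP DATABASE testdb;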

Restore from Snapshot

To restore the volume to its previous state, you must create a new PVC based on the data in the snapshot you took. 

  1. Create a volume clone from the snapshot. Save the following manifest in a file named pvc-clone.yaml.
    1. apiVersion: v1
       kind: PersistentVolumeClaim
       metadata:
         name: postgresql-volume-clone
       spec:
         accessModes:
           - ReadWriteOnce
         storageClassName: trident-csi
         resources:
           requests:
             storage: 8Gi
         dataSource:
           name: postgresql-volume-snap-01
           kind: VolumeSnapshot
           apiGroup: snapshot.storage.k8s.io
    2. This creates a clone of the volume by provisioning a new PVC that uses the snapshot as its data source.
  2. Apply the manifest and ensure that the clone is created.
    1. # oc create -f pvc-clone.yaml -n postgresql

       Command line readout after applying the manifest.

       

  3. Delete the original postgresql installation.
    1. # helm uninstall postgresql -n postgresql
      Command line readout for deletion of helm.

       

  4. Create a new postgresql application using the new clone PVC.
    1. # helm install postgresql bitnami/postgresql --set primary.persistence.enabled=true --set primary.persistence.existingClaim=postgresql-volume-clone -n postgresql
      Command line showing the post helm install.

       

  5. Verify that the application pod is in the running state.

    Command line showing running status for postgresql-0

     

  6. Verify that the pod uses the clone as its PVC.
    1. # oc describe pod/postgresql-0 -n postgresql

      Command line showing input of describe pod command.

       

      Command line readout of ready containers

       

  7. To validate that the database has been restored as expected, go back to the container console and show the existing databases.

    Container console showing restored database.
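
     The verification itself is only shown in the screenshot. From a psql client session started the same way as in the deployment section, checks along these lines, using the illustrative names from the earlier sketch:

     \l                      -- the restored database should be listed again
     \c testdb
     SELECT * FROM persons;  -- both rows inserted earlier should be returned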

     

Congrats! You have successfully primed your environment with FSxN and deployed a containerized database to it. Now that you have a ROSA HCP cluster with FSxN attached, you can look into live migration of virtual machines with OpenShift Virtualization on the ROSA cluster.


This learning path is for operations teams or system administrators. Developers may want to check out Transitioning to ROSA HCP on developers.redhat.com.
