Setting up Amazon FSx for NetApp ONTAP for Red Hat OpenShift Service on AWS (ROSA) with iSCSI storage integration
Red Hat® OpenShift® Service on AWS (ROSA) integrates with Amazon FSx for NetApp ONTAP (FSxN)—a fully managed, scalable, shared storage service built on NetApp's renowned ONTAP file system. The integration with NetApp Trident—a dynamic Container Storage Interface (CSI)—facilitates the management of Kubernetes Persistent Volume Claims (PVCs) for storage disks. NetApp Trident automates on-demand provisioning of storage volumes across diverse deployment environments, making it easier to scale and protect data for applications.
NetApp customers using the Red Hat OpenShift Container Platform to run their modern application or virtual machine workloads have an easier way to install NetApp Trident (starting from version 25.02 and onwards) on their Red Hat OpenShift clusters. The Trident 25.02 operator offers the optional benefit of preparing the worker nodes for iSCSI, significantly mitigating the challenge of worker node preparations. This enhancement eliminates the need for manual preparation and streamlines the process of provisioning persistent volumes for various workloads.
What will you learn?
- How to add FSxN with iSCSI integration to your ROSA environment
- How to verify iSCSI status on ROSA cluster nodes
What you need before starting:
- Red Hat account
- Access to Red Hat OpenShift on the Red Hat Hybrid Cloud Console
- Amazon Web Services account
- IAM user credential with appropriate permissions to create and access ROSA cluster
- AWS command-line interface (aws)
- ROSA command-line interface (rosa)
- OpenShift command-line interface (oc)
- Helm 3
- An HCP ROSA cluster
Provisioning FSx for NetApp ONTAP
To begin, you must create a multi-availability zone (AZ) FSx for NetApp ONTAP file system in the same virtual private cloud (VPC) as the ROSA cluster. There are several ways to do this, but for the purposes of this learning path we will use a CloudFormation stack.
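The stack in the next step needs the VPC, subnet, and route-table IDs of your ROSA cluster's network. A minimal sketch of how you might look these up; the cluster name my-rosa-cluster and <VPC_ID> are placeholders, and the commands are printed rather than executed so you can fill in your own values first:

```shell
#!/bin/sh
# Sketch: compose lookups for the network IDs the CloudFormation stack needs.
# "my-rosa-cluster" and <VPC_ID> are placeholders, not values from this guide.
CLUSTER_NAME="my-rosa-cluster"

# The cluster details include its VPC and subnets
echo "rosa describe cluster -c ${CLUSTER_NAME}"

# List the subnets and their availability zones in the cluster's VPC
echo "aws ec2 describe-subnets --filters Name=vpc-id,Values=<VPC_ID> --query 'Subnets[].{Id:SubnetId,Az:AvailabilityZone}' --output table"
```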
Clone the GitHub repository.
# git clone https://github.com/aws-samples/rosa-fsx-netapp-ontap.git
Run the CloudFormation Stack.
Run the command below, replacing the parameter values with your own:
# cd rosa-fsx-netapp-ontap/fsx
# aws cloudformation create-stack \
  --stack-name ROSA-FSXONTAP \
  --template-body file://./FSxONTAP.yaml \
  --region <region-name> \
  --parameters \
  ParameterKey=Subnet1ID,ParameterValue=[subnet1_ID] \
  ParameterKey=Subnet2ID,ParameterValue=[subnet2_ID] \
  ParameterKey=myVpc,ParameterValue=[VPC_ID] \
  ParameterKey=FSxONTAPRouteTable,ParameterValue=[routetable1_ID,routetable2_ID] \
  ParameterKey=FileSystemName,ParameterValue=ROSA-myFSxONTAP \
  ParameterKey=ThroughputCapacity,ParameterValue=1024 \
  ParameterKey=FSxAllowedCIDR,ParameterValue=[your_allowed_CIDR] \
  ParameterKey=FsxAdminPassword,ParameterValue=[Define Admin password] \
  ParameterKey=SvmAdminPassword,ParameterValue=[Define SVM password] \
  --capabilities CAPABILITY_NAMED_IAM
Where:
- region-name: the region where the ROSA cluster is deployed
- subnet1_ID: ID of the preferred subnet for FSxN
- subnet2_ID: ID of the standby subnet for FSxN
- VPC_ID: ID of the VPC where the ROSA cluster is deployed
- routetable1_ID, routetable2_ID: IDs of the route tables associated with the subnets chosen above
- your_allowed_CIDR: allowed CIDR range for the FSx for ONTAP security group ingress rules to control access. You can use 0.0.0.0/0 or any appropriate CIDR to allow all traffic to access the specific ports of FSx for ONTAP.
- Define Admin password: a password to log in to FSxN
- Define SVM password: a password to log in to the SVM that will be created
Verify that your file system and storage virtual machine (SVM) have been created using the Amazon FSx console, shown below:
The Amazon console page displaying the newly created OntapFileSystem and its availability.
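If you prefer the command line, the same check can be done with the AWS CLI. A minimal sketch, assuming the stack name ROSA-FSXONTAP used above; the commands are printed for review rather than executed:

```shell
#!/bin/sh
# Sketch: CLI checks for the CloudFormation stack and the FSx file system.
STACK_NAME="ROSA-FSXONTAP"

# Block until stack creation finishes (FSxN provisioning can take a while)
echo "aws cloudformation wait stack-create-complete --stack-name ${STACK_NAME}"

# Confirm the stack status and the file system lifecycle state
echo "aws cloudformation describe-stacks --stack-name ${STACK_NAME} --query 'Stacks[0].StackStatus' --output text"
echo "aws fsx describe-file-systems --query 'FileSystems[].{Id:FileSystemId,State:Lifecycle}' --output table"
```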
Installing and configuring Trident using the Trident Operator
- Before installing Trident 25.02, verify that no Trident objects are present on the cluster.
- From the OperatorHub, locate the Red Hat certified Trident Operator and install it (clicking on the blue install button, shown below, will install the Trident Operator in all the namespaces on the cluster).
From the “Installed Operators” page, create the TridentOrchestrator instance. If you create it using the form view, Trident is installed with the default options (step a). If you want to set custom values or enable iSCSI node prep during installation, use the YAML view (step b).
a. To create via the form view, select “create instance” then “form view”
b. To set custom values or enable iSCSI node prep during installation, use the YAML view (example below). The image below shows IPv6 support set to false and the debug option set to true for the TridentOrchestrator in the trident namespace.
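As an illustration of what the YAML view might contain, the sketch below uses field names from the TridentOrchestrator custom resource; the nodePrep entry is the Trident 25.02 option that has the operator prepare worker nodes for iSCSI. Treat this as an assumption to verify against the Trident documentation for your version:

```yaml
apiVersion: trident.netapp.io/v1
kind: TridentOrchestrator
metadata:
  name: trident
spec:
  debug: true        # verbose logging, as described above
  IPv6: false        # IPv6 support disabled
  namespace: trident # namespace where Trident is installed
  nodePrep:
    - iscsi          # 25.02+: have the operator prepare worker nodes for iSCSI
```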
- Verify that the Trident objects are installed.
- Verify that the worker nodes have iscsid and multipathd enabled and that the /etc/multipath.conf file has the entries shown in Figure 11.
- Log back in to each of the worker nodes and verify that iscsid and multipathd are running. See an example command below:
# oc debug node/ip-10-0-0-17.us-west-2.compute.internal
- Execute the following commands:
# chroot /host
# systemctl status iscsid
# systemctl status multipathd
- Verify that the multipath.conf file has the entry shown below:
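The per-node checks above can be scripted. A minimal sketch, shown here against canned systemctl-style output so the matching logic is clear (the sample text is illustrative, not captured from a real node):

```shell
#!/bin/sh
# Hypothetical helper: succeed if a `systemctl status` dump shows the unit
# as active and running.
is_active() {
  printf '%s\n' "$1" | grep -q 'Active: active (running)'
}

# Canned sample resembling `systemctl status iscsid` on a prepared node
sample='iscsid.service - Open-iSCSI
   Loaded: loaded (/usr/lib/systemd/system/iscsid.service; enabled)
   Active: active (running) since Mon 2025-01-01 00:00:00 UTC'

if is_active "$sample"; then
  echo "iscsid: running"
else
  echo "iscsid: not running"
fi
```

Against the canned sample this prints "iscsid: running"; on a node you would feed it the real output of systemctl status iscsid and systemctl status multipathd.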
Configure the Trident CSI backend to use FSx for NetApp ONTAP (ONTAP SAN for iSCSI)
The Trident backend configuration tells Trident how to communicate with the storage system (in this case, FSxN). To create the backend, we will provide the credentials to connect with, along with the cluster management LIF and the storage virtual machine (SVM) to use for storage provisioning. We will use the ontap-san driver to provision storage volumes in the FSxN file system.
Create a secret to securely store the login credentials. Review the file with the command:
# cat tbc-fsx-san.yaml
Then use the following yaml:
apiVersion: v1
kind: Secret
metadata:
  name: tbc-fsx-san-secret
type: Opaque
stringData:
  username: fsxadmin
  password:
Note: you can also retrieve the SVM password created for FSxN from AWS Secrets Manager, as shown below.
- Configure Trident using the following yaml:
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: tbc-fsx-san
spec:
  version: 1
  storageDriverName: ontap-san
  managementLIF: <management LIF of the file system in AWS>
  backendName: tbc-fsx-san
  svm: <SVM name that is created in the file system>
  defaults:
    storagePrefix: demo
    nameTemplate: "{{ .config.StoragePrefix }}_{{ .volume.Namespace }}_{{ .volume.RequestName }}"
  credentials:
    name: tbc-fsx-san-secret
- Note: you can get the management LIF and the storage virtual machine (SVM) name from the Amazon FSx console, as shown below:
- Enter the following command to deploy the configuration:
# oc apply -f tbc-fsx-san.yaml
- Verify the backend object has been created and "Phase" is showing "Bound" and "Status" is "Success"
Create Storage Class for iSCSI
Now that the Trident backend is configured, you can create a Kubernetes storage class that uses it. A storage class is a resource object made available to the cluster; it describes and classifies the type of storage that applications can request.
Enter the command below to review the file:
# cat sc-fsx-san.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-fsx-san
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-san"
  media: "ssd"
  provisioningType: "thin"
  fsType: ext4
  snapshots: "true"
  storagePools: "tbc-fsx-san:.*"
allowVolumeExpansion: true
- Create the storage class in the ROSA cluster with the command
# oc create -f sc-fsx-san.yaml
and verify that the sc-fsx-san storage class has been created.
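To see the storage class in use, a hypothetical persistent volume claim that requests a volume from it might look like the sketch below; the claim name and size are placeholders, not values from this guide:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-iscsi-pvc      # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce         # iSCSI block volumes are typically RWO
  resources:
    requests:
      storage: 10Gi         # placeholder size
  storageClassName: sc-fsx-san
```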
Create a Snapshot Class in Trident so that CSI snapshots can be taken
Snapshots are point-in-time copies of your data that let you recover it should the original ever be accidentally removed.
Use the command below to view the contents of your snapshot class definition:
# cat snapshotclass.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: trident-snapshotclass
driver: csi.trident.netapp.io
deletionPolicy: Retain
- Use the command below to make the snapshot class available to your system, and confirm that it is created:
# oc create -f snapshotclass.yaml
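Once the snapshot class exists, a CSI snapshot of an existing PVC can be requested with a VolumeSnapshot object. A sketch, with placeholder names following the document's <...> convention:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: demo-pvc-snapshot   # hypothetical name
spec:
  volumeSnapshotClassName: trident-snapshotclass
  source:
    persistentVolumeClaimName: <existing PVC name in the same namespace>
```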
Optional: Verify iSCSI status on the ROSA Cluster nodes
This optional step shows you how to verify that ROSA worker nodes are not yet prepared for iSCSI workloads.
- Use the command
# oc get nodes
to verify that all the nodes are in a "Ready" state, then log in to a ROSA cluster worker node.
a. Run the command
# oc debug node/ip-10-0-0-17.us-west-2.compute.internal
b. A debug pod is created:
Starting pod/ip-10-0-0-17us-west-2computeinternal-debug-94whw ...
c. To use host binaries, enter the command
chroot /host
d. Pod IP: 10.0.0.17 confirms the temporary pod is launched.
e. Note: If you don’t see a command prompt, try pressing enter.
View the status of the iSCSI daemon, the multipathing daemon, and the contents of the multipath.conf file—this should indicate that iSCSI and multipathing are not configured (see Figure 20, below). Without this configuration, the volumes created in ONTAP cannot be mounted by the application pods using Trident. The output of the commands below should show “loaded” and “inactive.” You will also see that /etc/multipath.conf does not exist and all devices are blacklisted.
sh-5.1# chroot /host
sh-5.1# systemctl status iscsid
sh-5.1# systemctl status multipathd
This completes the installation of the Trident CSI driver and its connection to the FSxN file system using iSCSI. You are now ready to use iSCSI storage for container apps and virtual machines in ROSA.