Container platforms have become an integral part of enterprise modernization journeys. Containers provide scalability, resiliency, and performance to modern applications, and containerized applications can scale up or down to meet changing user demand.

There are a few things to consider when designing applications for Red Hat OpenShift Service on AWS (ROSA). First, running resilient containerized applications requires scaling pods across multiple hosts spread across multiple Availability Zones (AZs). The challenge is ensuring these container workloads have access to common shared data as they move from one host or AZ to another. Second, as data grows, customers need to move away from provisioned storage and take advantage of dynamically scaling shared storage. These challenges can be addressed by using Amazon Elastic File System (Amazon EFS), which provides a simple, scalable, fully managed, elastic NFS file system for use with AWS Cloud services. ROSA can be integrated with Amazon EFS by using the EFS CSI driver. More details on the EFS CSI driver can be found here.

Overview:

This post demonstrates how to integrate an Amazon EFS file system with a ROSA cluster: creating the EFS file system, provisioning a persistent volume backed by it, then deploying a writer pod that writes to the EFS file system and a reader pod that reads from it. We will then scale the reader pods across multiple AZs using the EFS file system as shared storage.

Pre-requisites:

You will need the following resources:

  • AWS account
  • IAM user with appropriate permission to create and access ROSA cluster
  • A ROSA cluster created
  • Access to Red Hat OpenShift web console
  • Note down the VPC ID of the ROSA EC2 worker nodes: from the AWS Management Console, search for the EC2 service and select any EC2 worker node that belongs to your ROSA cluster. The VPC ID is shown on the EC2 details page.
  • Note down the security group ID of the ROSA EC2 worker nodes: from the EC2 details page, select the Security tab and note the security group ID under Security groups.
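If you prefer the CLI, the VPC ID and security group IDs can also be looked up with the AWS CLI. This is a sketch: `<cluster-name>` is a placeholder, and the exact Name tag on your worker nodes may differ, so adjust the filter to match your cluster.

```shell
# Look up the VPC ID and security groups of a running ROSA worker node.
# <cluster-name> is a placeholder -- adjust the tag filter for your cluster.
aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=<cluster-name>*worker*" \
            "Name=instance-state-name,Values=running" \
  --query 'Reservations[0].Instances[0].{VpcId:VpcId,SecurityGroups:SecurityGroups[*].GroupId}' \
  --output json
```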

Walkthrough:

  1. Create Amazon EFS filesystem on ROSA cluster’s AWS account
  2. Install AWS EFS operator in OpenShift web console
  3. Create Shared volume
  4. Deploy a pod to write to EFS file system
  5. Deploy a pod to read from EFS file system
  6. Scale the reader pods across multiple AZs

Let’s go ahead and execute these steps.

  1. Create Amazon EFS filesystem on cluster’s AWS Account.

a. Log in to the AWS account and search for EFS in the search bar. From the top right corner, select the same AWS Region as your ROSA cluster's Region.

b. In the AWS EFS page, select Create file system.


c. Enter a name for your file system, such as rosa-filesystem, select the same VPC as your ROSA cluster, and select Create.


d. Once the file system is created, open its details page and select the Network tab.


e. Select the Manage button and edit the security groups to include the ROSA worker node EC2 security group. Select Save.


f. Select Access points from the EFS details page.

g. Select Create Access point and enter the following details:

  • Name: rosa-efs-data
  • Root directory path: /efs-mount
  • POSIX user: optional; leave blank


Configure the following details, which allow application pods to access the EFS file system with the appropriate owner ID and group ID. The example below grants read-write-execute access to all users; this should be changed to restrict access to only valid applications/users as appropriate.

  • Owner user ID: 65534
  • Owner group ID: 65534
  • POSIX permissions to apply to the root directory path: 777
  • Select Create access point
  • Note down the file system ID (fs-xxxxxxxx) and access point ID (fsap-xxxxxxxxxxxxxxx). We will need this information to create the Shared Volume in later steps.
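For reference, the file system and access point above can also be created with the AWS CLI. This is a sketch mirroring the console steps: the file system ID is a placeholder for the one returned by the first command, and the permissive 777 mode matches the demo settings above (restrict it in production).

```shell
# Create the EFS file system (tag name matches this walkthrough)
aws efs create-file-system --tags Key=Name,Value=rosa-filesystem

# Create an access point rooted at /efs-mount, owned by 65534:65534 with
# mode 777 (permissive, demo only). fs-xxxxxxxx is a placeholder for the
# file system ID returned above.
aws efs create-access-point \
  --file-system-id fs-xxxxxxxx \
  --root-directory 'Path=/efs-mount,CreationInfo={OwnerUid=65534,OwnerGid=65534,Permissions=777}'
```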


  2. Install AWS EFS operator in OpenShift web console.
    a. Log in to the Red Hat OpenShift web console using your credentials.
    b. From the drop-down in the left pane, select Administrator, then select Operators → OperatorHub.


  c. Type efs in the Filter by keyword field.

d. Select AWS EFS Operator. The AWS EFS Operator installed here is a Community operator. Red Hat does not provide official support for the EFS operator itself, as it is supported by the community; this does not impact the support customers receive from Red Hat on their ROSA clusters. Though ROSA is a fully managed offering, customers are responsible for backing up and restoring their data, including data stored on EFS, to protect against possible outages or data loss. For bugs or issues related to this driver, Red Hat will provide commercially reasonable support without an SLA. Refer to Support for AWS EFS on Red Hat OpenShift Container Platform for additional information.


e. Accept all default fields and select Install.


  3. Create Shared Volume

Create a Shared Volume in each project where your application pods require EFS access, using the file system ID and access point ID that we noted earlier. For additional shared volumes, create additional access points on the EFS file system.

a. Create a new project: from the left navigation pane, select Project → Create Project.


b. Enter efs-app in the Name and Display name fields and select Create.


c. Navigate to Operators → Installed Operators. Select AWS EFS Operator, select the Shared Volume tab, and select Create instance.


d. Select YAML view, edit the YAML to include your file system ID and access point ID, and select Create. This creates the Persistent Volume pv-efs-app-sv1 and the Persistent Volume Claim pvc-sv1 in the project efs-app.
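The SharedVolume manifest will look roughly like the following sketch. The API group shown matches the community AWS EFS Operator at the time of writing, and the IDs are placeholders for the values you noted earlier:

```yaml
apiVersion: aws-efs.managed.openshift.io/v1alpha1
kind: SharedVolume
metadata:
  name: sv1
  namespace: efs-app
spec:
  accessPointID: fsap-xxxxxxxxxxxxxxx
  fileSystemID: fs-xxxxxxxx
```

The operator derives the PV and PVC names from the SharedVolume name, which is why the claim referenced later is pvc-sv1.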


e. Verify that a Persistent Volume Claim was created under Storage.


  4. Now let's deploy the efs-writer pod in the efs-app project to write to the EFS file system. The efs-writer pod writes the string “hello efs” to the EFS file system every 30 seconds.

a. Create a YAML file for the writer pod and save it as efs-writer.yaml:

---
apiVersion: v1
kind: Pod
metadata:
  name: efs-writer
spec:
  volumes:
    - name: efs-storage-vol
      persistentVolumeClaim:
        claimName: pvc-sv1
  containers:
    - name: efs-writer
      image: centos:latest
      command: [ "/bin/bash", "-c", "--" ]
      args: [ "while true; do echo 'hello efs' >> /mnt/efs-data/verify-efs && echo 'hello efs' && sleep 30; done;" ]
      volumeMounts:
        - mountPath: "/mnt/efs-data"
          name: efs-storage-vol
b. From the CLI terminal, log in to the OpenShift cluster using the oc command-line tool with a valid token. If you do not have a token, follow the instructions printed by the command below.

$ oc login

c. Switch to the efs-app project:

$ oc project efs-app

d. Deploy the efs-writer pod using the efs-writer.yaml file:

$ oc apply -f efs-writer.yaml
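To confirm the writer is working, you can tail its logs from the CLI; each loop iteration echoes "hello efs" to stdout as well as appending it to the shared file:

```shell
# Check the writer pod is Running, then view its recent output
oc get pod efs-writer -n efs-app
oc logs efs-writer -n efs-app --tail=5
```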
  5. Deploy the efs-reader pod to read from the same EFS file system.
    a. Create a YAML file for the reader pod and save it as efs-reader.yaml:
---
apiVersion: v1
kind: Pod
metadata:
  name: efs-reader
spec:
  volumes:
    - name: efs-storage-vol
      persistentVolumeClaim:
        claimName: pvc-sv1
  containers:
    - name: efs-reader
      image: centos:latest
      command: [ "/bin/bash", "-c", "--" ]
      args: [ "while true; do cat /mnt/efs-data/verify-efs && sleep 30; done;" ]
      volumeMounts:
        - mountPath: "/mnt/efs-data"
          name: efs-storage-vol

b. Deploy the efs-reader pod using the efs-reader.yaml file:

$oc apply -f efs-reader.yaml

c. From the OpenShift web console left pane, navigate to Workloads → Pods and select the efs-reader pod.


d. Navigate to the Terminal tab to log in to the efs-reader pod. Navigate to the folder /mnt/efs-data and view the verify-efs file.

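If you prefer the CLI over the web console terminal, the same check can be done by executing a command inside the reader pod:

```shell
# Read the last few lines the writer appended to the shared EFS file
oc exec efs-reader -n efs-app -- tail -n 5 /mnt/efs-data/verify-efs
```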

e. Create a deployment YAML with a replica count of 3 efs-reader pods and save it as efs-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
  namespace: efs-app
spec:
  selector:
    matchLabels:
      app: efs-reader
  replicas: 3
  template:
    metadata:
      labels:
        app: efs-reader
    spec:
      volumes:
        - name: efs-storage-vol
          persistentVolumeClaim:
            claimName: pvc-sv1
      containers:
        - name: efs-reader
          image: centos:latest
          command: [ "/bin/bash", "-c", "--" ]
          args: [ "while true; do cat /mnt/efs-data/verify-efs && sleep 30; done;" ]
          volumeMounts:
            - mountPath: "/mnt/efs-data"
              name: efs-storage-vol

f. Deploy the efs-reader deployment using the efs-deployment.yaml file:

$ oc apply -f efs-deployment.yaml

Navigate to Deployments in the left pane and open the Pods tab of the example deployment that was just created. Note that 3 pods are already deployed.


g. Now select the Details tab, increase the number of pods to 5, and verify under the Pods tab that the pod count scaled to 5.


Verify the pods are running on different worker nodes across multiple AZs by navigating to Workloads → ReplicaSets in the left navigation pane. Under the Node column, you will see the EC2 instances (named by their IP addresses) spread across multiple AZs.
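Alternatively, the scaling and node placement can be done and verified from the CLI; `-o wide` adds a NODE column showing where each pod is scheduled:

```shell
# Scale the example deployment to 5 replicas
oc scale deployment example --replicas=5 -n efs-app

# List the reader pods with their nodes; the NODE column should show
# worker nodes spread across multiple Availability Zones
oc get pods -n efs-app -l app=efs-reader -o wide
```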


Conclusion:

In this post, we covered how to install the EFS CSI driver through the AWS EFS Operator on a ROSA cluster and how to deploy pods that write to and read from the same file system. We then scaled pods horizontally across multiple Availability Zones using the EFS file system as shared storage. This example demonstrated how to architect a ROSA cluster to provide better resiliency to applications. It is important to note that it is the customer's responsibility to back up and restore EFS storage in the event of storage issues or data loss. Instructions for taking EFS backups are covered here.


Categories

Storage, How-tos, cloud scale, massive scale, AWS
