
Configure ARO with OpenShift Data Foundation

This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.

NOTE: This guide demonstrates how to set up and configure self-managed OpenShift Data Foundation in Internal Mode on an ARO cluster and test it out.

Prerequisites

  * An ARO cluster
  * The oc command-line tool, logged in to the cluster with cluster-admin privileges

Install compute nodes for ODF

A best practice for optimal performance is to run ODF on dedicated nodes, with a minimum of one per zone. In this guide, we will provision 3 additional compute nodes, one per zone. Follow the steps below to create the additional nodes; a sketch of the commands is shown after the list.

  1. Log into your ARO Cluster

  2. Create the new compute nodes

  3. Wait for the compute nodes to be up and running. It takes just a couple of minutes for new nodes to provision.

  4. Label new compute nodes

    Check if the nodes are ready:

    Expected output:

    Once you see the three new nodes, the next step is to label and taint them. This ensures that OpenShift Data Foundation is installed on these nodes and that no other workloads are scheduled on them.

    Check the node labels. Listing nodes filtered by the label we just applied should show all three ODF storage nodes, as in the sketch below.
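
Below is a minimal sketch of these steps, assuming the standard ODF node label and taint. The login URL, credentials, MachineSet changes, and node names are placeholders specific to your cluster:

    # 1. Log in to your ARO cluster (placeholder URL and credentials)
    oc login <api-server-url> -u kubeadmin -p <password>

    # 2. Create the new compute nodes, one per zone. A common approach is to
    #    copy an existing worker MachineSet for each zone and adjust its name,
    #    availability zone, VM size, and replica count before applying it.
    oc get machinesets -n openshift-machine-api

    # 3. Wait for the new machines and nodes to become Ready
    oc get machines -n openshift-machine-api
    oc get nodes

    # 4. Label and taint the new nodes so that only ODF is scheduled on them
    #    (<node-1> ... <node-3> are placeholders for the three new workers)
    for node in <node-1> <node-2> <node-3>; do
      oc label node "$node" cluster.ocs.openshift.io/openshift-storage=""
      oc adm taint node "$node" node.ocs.openshift.io/storage="true":NoSchedule
    done

    # List all three ODF storage nodes, filtered by the label we just applied
    oc get nodes -l cluster.ocs.openshift.io/openshift-storage=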

Deploy OpenShift Data Foundation

Next, we will install OpenShift Data Foundation via an Operator.

  1. Create the openshift-storage namespace
  2. Create the Operator Group for openshift-storage
  3. Subscribe to the ocs-operator
  4. Subscribe to the odf-operator
  5. Install the Console Plugin if needed. This gives you a dedicated tile in the OpenShift console to manage your ODF Storage Cluster. After running this command, you will see the OpenShift console refresh itself, as the console pods must restart to pick up the new configuration.
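
A consolidated sketch of these five steps follows; the subscription channel shown is an assumption and should match the ODF version for your cluster:

    # 1. Create the openshift-storage namespace
    cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-storage
      labels:
        openshift.io/cluster-monitoring: "true"
    EOF

    # 2. Create the Operator Group for openshift-storage
    cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: openshift-storage-operatorgroup
      namespace: openshift-storage
    spec:
      targetNamespaces:
      - openshift-storage
    EOF

    # 3. and 4. Subscribe to the ocs-operator and the odf-operator
    for op in ocs-operator odf-operator; do
    cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: ${op}
      namespace: openshift-storage
    spec:
      channel: stable-4.12              # assumption: use the channel matching your ODF version
      name: ${op}
      source: redhat-operators
      sourceNamespace: openshift-marketplace
    EOF
    done

    # 5. Enable the ODF console plugin; the console pods restart to pick this up.
    #    If spec.plugins already has entries, append "odf-console" to that list instead.
    oc patch console.operator cluster --type json \
      -p '[{"op": "add", "path": "/spec/plugins", "value": ["odf-console"]}]'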

After installing the plugin, enable it through Console > Installed Operators > OpenShift Data Foundation > Details. Change the Console plugin option from Disabled to Enabled, then Save. After a few minutes you will see new items under the Storage menu, including the Data Foundation option.

  6. Create a Storage Cluster
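
Here is a sketch of a minimal StorageCluster, assuming ARO's managed-csi storage class for the backing volumes and 2Ti per device; both values are assumptions, so adjust them to your requirements:

    cat <<EOF | oc apply -f -
    apiVersion: ocs.openshift.io/v1
    kind: StorageCluster
    metadata:
      name: ocs-storagecluster
      namespace: openshift-storage
    spec:
      storageDeviceSets:
      - name: ocs-deviceset
        count: 1                            # one device set ...
        replica: 3                          # ... replicated across the three zones
        portable: true
        dataPVCTemplate:
          spec:
            storageClassName: managed-csi   # assumption: ARO's default CSI storage class
            accessModes:
            - ReadWriteOnce
            volumeMode: Block
            resources:
              requests:
                storage: 2Ti                # assumption: adjust the capacity to your needs
    EOF

The ODF components tolerate the node.ocs.openshift.io/storage taint by default, so they can be scheduled onto the dedicated storage nodes labeled earlier.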

Validate the install

  1. List the cluster service version for the ODF operators

    Verify that the ODF operators report a phase of Succeeded.

  2. Check that the storage cluster is ready

  3. Check that the OCS storage classes have been created

    Note: this can take around 5 minutes
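
A sketch of the validation commands, assuming the default resource names used above:

    # 1. List the cluster service versions; the ODF operators should report
    #    a phase of Succeeded
    oc get csv -n openshift-storage

    # 2. Check that the storage cluster is ready (its phase should be Ready)
    oc get storagecluster ocs-storagecluster -n openshift-storage

    # 3. Check that the ocs storage classes have been created
    #    (this can take around 5 minutes)
    oc get storageclass | grep ocs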

Here is an example of what ODF looks like in the console with a working cluster:

ODF Dashboard

You can access it after enabling the Console plugin by navigating to Storage > Data Foundation > Storage Systems > ocs-storagecluster-storagesystem.

Test it out

To test out ODF, we will create 'writer' pods on each node across all zones, and then a reader pod to read the data that is written. This will prove that both regional storage and the ReadWriteMany ("read write many") access mode are working correctly.

  1. Create a new project

  2. Create a RWX Persistent Volume Claim for ODF

  3. Check the PVC and PV status. Both should be Bound.

  4. Create writer pods via a DaemonSet. Using a DaemonSet ensures that we have a 'writer pod' on each worker node, and it also proves that we correctly set a taint on the ODF workers, where we do not want other workloads to be scheduled.

    The writer pods will write out which worker node the pod is running on, the data, and a hello message.

  5. Check that the writer pods are running.

    Note: there should be one pod per non-ODF worker node

    Expected output

  6. Create a reader pod. The reader pod will simply log the data written by the writer pods.

  7. Now let's verify that the pod is reading from the shared volume.

    Expected output

    Notice that pods in different zones are writing to the same PVC, which is managed by ODF.
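
A consolidated sketch of the test, assuming a project named odf-demo, the ocs-storagecluster-cephfs storage class for the RWX volume, and a UBI minimal image for the pods; the project name, image, and mount path are illustrative:

    # 1. Create a new project
    oc new-project odf-demo

    # 2. Create a RWX Persistent Volume Claim backed by ODF (CephFS)
    cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-data
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: ocs-storagecluster-cephfs
      resources:
        requests:
          storage: 1Gi
    EOF

    # 3. Check the PVC and PV status; both should be Bound
    oc get pvc,pv

    # 4. Create writer pods via a DaemonSet; each pod appends its node name,
    #    the date, and a hello message to a file on the shared volume.
    cat <<EOF | oc apply -f -
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: writers
    spec:
      selector:
        matchLabels: {app: writer}
      template:
        metadata:
          labels: {app: writer}
        spec:
          containers:
          - name: writer
            image: registry.access.redhat.com/ubi9/ubi-minimal   # assumption: any small image with a shell works
            command: ["/bin/sh", "-c"]
            args:
            - while true; do echo "hello from \$NODE_NAME at \$(date)" >> /data/shared.log; sleep 10; done
            env:
            - name: NODE_NAME
              valueFrom: {fieldRef: {fieldPath: spec.nodeName}}
            volumeMounts:
            - {name: shared, mountPath: /data}
          volumes:
          - name: shared
            persistentVolumeClaim: {claimName: shared-data}
    EOF

    # 5. Check the writer pods; expect one per non-ODF worker node
    oc get pods -o wide

    # 6. Create a reader pod that follows the shared file
    cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: reader
    spec:
      containers:
      - name: reader
        image: registry.access.redhat.com/ubi9/ubi-minimal
        command: ["/bin/sh", "-c", "tail -F /data/shared.log"]
        volumeMounts:
        - {name: shared, mountPath: /data}
      volumes:
      - name: shared
        persistentVolumeClaim: {claimName: shared-data}
    EOF

    # 7. Verify the reader pod is reading from the shared volume; the log should
    #    show lines written by writer pods on nodes in different zones
    oc logs reader

Because the DaemonSet does not tolerate the ODF storage taint, no writer pod lands on the dedicated storage nodes, which is exactly what the taint is meant to enforce.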

