Cloud Experts Documentation

Deploying Advanced Cluster Management and OpenShift Data Foundation for ARO Disaster Recovery

This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.

A guide to deploying Advanced Cluster Management (ACM) and OpenShift Data Foundation (ODF) for Azure Red Hat OpenShift (ARO) Disaster Recovery

Overview

VolSync is not supported for ARO in ACM (see https://access.redhat.com/articles/7006295). If you run into issues and file a support ticket, you will be told that this configuration is not supported on ARO.

In today’s fast-paced and data-driven world, ensuring the resilience and availability of your applications and data has never been more critical. The unexpected can happen at any moment, and the ability to recover quickly and efficiently is paramount. That’s where OpenShift Advanced Cluster Management (ACM) and OpenShift Data Foundation (ODF) come into play. In this guide, we will explore the deployment of ACM and ODF for disaster recovery (DR) purposes, empowering you to safeguard your applications and data across multiple clusters.

Sample Architecture

Hub Cluster (East US Region):

  • This is the central control and management cluster of your multi-cluster environment.
  • It hosts Red Hat Advanced Cluster Management (ACM), which is a powerful tool for managing and orchestrating multiple OpenShift clusters.
  • Within the Hub Cluster, you have MultiClusterHub, which is a component of ACM that facilitates the management of multiple OpenShift clusters from a single control point.
  • Additionally, you have OpenShift Data Foundation (ODF) Multicluster Orchestrator in the Hub Cluster. ODF provides data storage, management, and services across clusters.
  • The Hub Cluster shares the same Virtual Network (VNET) with the Primary Cluster, but they use different subnets within that VNET.
  • VNET peering is established between the Hub Cluster’s VNET and the Secondary Cluster’s dedicated VNET in the Central US region. This allows communication between the clusters.

Primary Cluster (East US Region):

  • This cluster serves as the primary application deployment cluster.
  • It has the Submariner Add-On, which is a component that enables network connectivity and service discovery between clusters.
  • ODF is also deployed in the Primary Cluster, providing storage and data services to applications running in this cluster.
  • By using Submariner and ODF in the Primary Cluster, you enhance the availability and data management capabilities of your applications.

Secondary Cluster (Central US Region):

  • This cluster functions as a secondary or backup cluster for disaster recovery (DR) purposes.
  • Similar to the Primary Cluster, it has the Submariner Add-On to establish network connectivity.
  • ODF is deployed here as well, ensuring that data can be replicated and managed across clusters.
  • The Secondary Cluster resides in its own dedicated VNET in the Central US region.

In summary, this multi-cluster topology is designed for high availability and disaster recovery. The Hub Cluster with ACM and ODF Multicluster Orchestrator serves as the central control point for managing and orchestrating the Primary and Secondary Clusters. The use of Submariner and ODF in both the Primary and Secondary Clusters ensures that applications can seamlessly failover to the Secondary Cluster in the event of a disaster, while data remains accessible and consistent across all clusters. The VNET peering between clusters enables secure communication and data replication between regions.

Prerequisites

Azure Account

  1. Log into the Azure CLI by running the following and then authorizing through your Web Browser

  2. Make sure you have enough Quota (change the location if you’re not using East US)

    See Addendum - Adding Quota to ARO account if you have fewer than 36 vCPUs of quota remaining for Total Regional vCPUs.

  3. Register resource providers
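
The three steps above can be sketched as follows (the region is an example; the provider list matches what ARO requires):

```shell
# 1. Log in to Azure (opens a browser for authentication)
az login

# 2. Check regional vCPU usage and limits (change --location if not using East US)
az vm list-usage --location "East US" \
  --query "[?contains(name.value, 'cores')]" -o table

# 3. Register the resource providers used by ARO
az provider register -n Microsoft.RedHatOpenShift --wait
az provider register -n Microsoft.Compute --wait
az provider register -n Microsoft.Storage --wait
az provider register -n Microsoft.Authorization --wait
```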

Red Hat pull secret

  1. Log into https://cloud.redhat.com
  2. Browse to https://cloud.redhat.com/openshift/install/azure/aro-provisioned
  3. Click the Download pull secret button and remember where you saved it; you'll reference it later.

Manage Multiple Logins

  1. To manage several clusters, we will use a dedicated kubeconfig file to store the logins and switch quickly from one context to another
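
For example (the file path here is an arbitrary choice, not a requirement):

```shell
# Keep the DR clusters' logins in a dedicated kubeconfig file so the
# hub, primary, and secondary contexts stay separate from your default one
export KUBECONFIG="$HOME/aro-dr.kubeconfig"
touch "$KUBECONFIG"
```

Every oc login from here on will write its context into this file, and oc config use-context switches between them.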

Create clusters

  1. Set environment variables

  2. Create environment variables for hub cluster

  3. Set environment variables for primary cluster

  4. Set environment variables for secondary cluster

    Note: Pod and Service CIDRs CANNOT overlap between the primary and secondary clusters (because we are using Submariner), so we will use the "--pod-cidr" and "--service-cidr" parameters to avoid the default ranges. Details about Pod and Service CIDRs are available in the OpenShift networking documentation.
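
A possible set of environment variables for all four steps is sketched below. All names, regions, and CIDR ranges are examples; the only hard requirement is that the primary and secondary Pod/Service CIDRs do not overlap:

```shell
# Shared settings (all values here are examples -- adjust to your environment)
export AZR_RESOURCE_LOCATION=eastus
export AZR_SECONDARY_LOCATION=centralus
export AZR_RESOURCE_GROUP=aro-dr-east
export AZR_SECONDARY_RESOURCE_GROUP=aro-dr-central
export AZR_PULL_SECRET="$HOME/Downloads/pull-secret.txt"

# Hub cluster
export HUB_CLUSTER=hub-cluster

# Primary cluster -- non-default CIDRs so Submariner can route between clusters
export PRIMARY_CLUSTER=primary-cluster
export PRIMARY_POD_CIDR=10.132.0.0/14
export PRIMARY_SERVICE_CIDR=172.31.0.0/16

# Secondary cluster -- ranges must not overlap with the primary's
export SECONDARY_CLUSTER=secondary-cluster
export SECONDARY_POD_CIDR=10.136.0.0/14
export SECONDARY_SERVICE_CIDR=172.29.0.0/16
```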

Deploying the Hub Cluster

  1. Create an Azure resource group

  2. Create virtual network

  3. Create control plane subnet

  4. Create worker subnet

  5. Create the cluster

    This will take between 30 and 45 minutes
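
A minimal sketch of the five steps, assuming the environment variables set earlier; the VNET/subnet names and address ranges are examples:

```shell
# Resource group and a /20 VNET shared by the hub and primary clusters
az group create --name $AZR_RESOURCE_GROUP --location $AZR_RESOURCE_LOCATION

az network vnet create \
  --resource-group $AZR_RESOURCE_GROUP \
  --name aro-dr-east-vnet \
  --address-prefixes 10.0.0.0/20

# Separate subnets for the hub's control plane and workers
az network vnet subnet create \
  --resource-group $AZR_RESOURCE_GROUP \
  --vnet-name aro-dr-east-vnet \
  --name hub-control-subnet \
  --address-prefixes 10.0.0.0/24

az network vnet subnet create \
  --resource-group $AZR_RESOURCE_GROUP \
  --vnet-name aro-dr-east-vnet \
  --name hub-worker-subnet \
  --address-prefixes 10.0.1.0/24

# Create the private hub cluster (expect 30-45 minutes)
az aro create \
  --resource-group $AZR_RESOURCE_GROUP \
  --name $HUB_CLUSTER \
  --vnet aro-dr-east-vnet \
  --master-subnet hub-control-subnet \
  --worker-subnet hub-worker-subnet \
  --apiserver-visibility Private \
  --ingress-visibility Private \
  --pull-secret @$AZR_PULL_SECRET
```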

Deploying the Primary cluster

  1. Create control plane subnet

  2. Create worker subnet

  3. Create the cluster

    This will take between 30 and 45 minutes
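
The primary shares the hub's VNET but gets its own subnets and the non-default CIDRs chosen earlier. A sketch, with example names:

```shell
# Subnets for the primary cluster inside the existing East US VNET
az network vnet subnet create \
  --resource-group $AZR_RESOURCE_GROUP \
  --vnet-name aro-dr-east-vnet \
  --name primary-control-subnet \
  --address-prefixes 10.0.2.0/24

az network vnet subnet create \
  --resource-group $AZR_RESOURCE_GROUP \
  --vnet-name aro-dr-east-vnet \
  --name primary-worker-subnet \
  --address-prefixes 10.0.3.0/24

# Create the primary cluster with custom Pod/Service CIDRs for Submariner
az aro create \
  --resource-group $AZR_RESOURCE_GROUP \
  --name $PRIMARY_CLUSTER \
  --vnet aro-dr-east-vnet \
  --master-subnet primary-control-subnet \
  --worker-subnet primary-worker-subnet \
  --pod-cidr $PRIMARY_POD_CIDR \
  --service-cidr $PRIMARY_SERVICE_CIDR \
  --apiserver-visibility Private \
  --ingress-visibility Private \
  --pull-secret @$AZR_PULL_SECRET
```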

Connect to Hub and Primary Clusters

Since the clusters are on a private network, we will create a jump host in order to connect to them.

  1. Create the jump subnet

  2. Create a jump host

  3. Save the jump host public IP address

  4. Use sshuttle to create an SSH VPN via the jump host (use a separate terminal session)

    Run this command in a second terminal

    Replace the IP with the IP of the jump box from the previous step

  5. Get OpenShift API routes

  6. Get OpenShift credentials

  7. Log into Hub and configure context

  8. Log into Primary and configure context

    You can now switch between the hub and primary clusters with oc config use-context
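
The steps above can be sketched as follows. The subnet range, VM image, and context names are examples; the az aro show/list-credentials queries are the standard way to recover a cluster's API URL and kubeadmin password:

```shell
# Jump subnet and host inside the East US VNET
az network vnet subnet create \
  --resource-group $AZR_RESOURCE_GROUP \
  --vnet-name aro-dr-east-vnet \
  --name jump-subnet \
  --address-prefixes 10.0.4.0/24

az vm create \
  --resource-group $AZR_RESOURCE_GROUP \
  --name jumphost \
  --vnet-name aro-dr-east-vnet \
  --subnet jump-subnet \
  --image Ubuntu2204 \
  --admin-username aro \
  --generate-ssh-keys \
  --public-ip-sku Standard

EAST_JUMP_IP=$(az vm list-ip-addresses -g $AZR_RESOURCE_GROUP -n jumphost \
  --query "[0].virtualMachine.network.publicIpAddresses[0].ipAddress" -o tsv)

# In a second terminal: tunnel the whole VNET range through the jump host
sshuttle --dns -NHr "aro@${EAST_JUMP_IP}" 10.0.0.0/20

# Back in the first terminal: fetch API URL and credentials, log in,
# and give the context a memorable name (repeat for the primary cluster)
HUB_API=$(az aro show -g $AZR_RESOURCE_GROUP -n $HUB_CLUSTER \
  --query apiserverProfile.url -o tsv)
HUB_PASS=$(az aro list-credentials -g $AZR_RESOURCE_GROUP -n $HUB_CLUSTER \
  --query kubeadminPassword -o tsv)
oc login "$HUB_API" -u kubeadmin -p "$HUB_PASS"
oc config rename-context $(oc config current-context) hub
```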

Deploying the Secondary Cluster

  1. Create an Azure resource group

  2. Create virtual network

  3. Create control plane subnet

  4. Create worker subnet

  5. Create the cluster

    This will take between 30 and 45 minutes

VNet Peering

  1. Create a peering between both VNETs (Hub Cluster in EastUS and Secondary Cluster in Central US)
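
Peering must be created in both directions, each side referencing the remote VNET's resource ID. A sketch, assuming the secondary VNET was created as aro-dr-central-vnet (both VNET names are examples):

```shell
# Hub/primary VNET (East US) -> secondary VNET (Central US)
az network vnet peering create \
  --name hub-to-secondary \
  --resource-group $AZR_RESOURCE_GROUP \
  --vnet-name aro-dr-east-vnet \
  --remote-vnet $(az network vnet show -g $AZR_SECONDARY_RESOURCE_GROUP \
      -n aro-dr-central-vnet --query id -o tsv) \
  --allow-vnet-access

# Secondary VNET (Central US) -> hub/primary VNET (East US)
az network vnet peering create \
  --name secondary-to-hub \
  --resource-group $AZR_SECONDARY_RESOURCE_GROUP \
  --vnet-name aro-dr-central-vnet \
  --remote-vnet $(az network vnet show -g $AZR_RESOURCE_GROUP \
      -n aro-dr-east-vnet --query id -o tsv) \
  --allow-vnet-access
```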

Connect to Secondary cluster

Since this cluster will reside in a different virtual network, we should create another jump host.

  1. Create the jump subnet

  2. Create a jump host

  3. Save the jump host public IP address

  4. Use sshuttle to create an SSH VPN via the jump host

    Run this command in a second terminal

    Replace the IP with the IP of the jump box from the previous step

  5. Get OpenShift API routes

  6. Get OpenShift credentials

  7. Log into Secondary and configure context

    You can switch to the secondary cluster with oc config use-context

Setup Hub Cluster

  • Ensure you are in the right context

Configure ACM

  1. Create ACM namespace

  2. Create ACM Operator Group

  3. Install ACM version 2.8

  4. Check if installation succeeded

    If you get the following error, the installation hasn't completed yet. Wait 3-5 minutes and run the last command again.

    A successful output should be similar to:

  5. Install MultiClusterHub instance in the ACM namespace

  6. Check that the MultiClusterHub is installed and running properly
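
The six steps above can be sketched as one manifest plus a status check. The namespace, OperatorGroup, and Subscription names are conventional choices; release-2.8 is the channel for ACM 2.8:

```shell
oc config use-context hub

cat <<EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: open-cluster-management
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: acm-operator-group
  namespace: open-cluster-management
spec:
  targetNamespaces:
    - open-cluster-management
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: advanced-cluster-management
  namespace: open-cluster-management
spec:
  channel: release-2.8
  name: advanced-cluster-management
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF

# Once the operator's CSV reports Succeeded, create the MultiClusterHub
oc -n open-cluster-management get csv

cat <<EOF | oc apply -f -
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec: {}
EOF

# "Running" in the status phase means the hub is healthy
oc -n open-cluster-management get multiclusterhub multiclusterhub \
  -o jsonpath='{.status.phase}'
```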

Configure ODF Multicluster Orchestrator

  1. Install the ODF Multicluster Orchestrator version 4.12

  2. Check if installation succeeded

    If you get the following error, the installation hasn't completed yet. Wait 3-5 minutes and run the last command again.

    A successful output should be similar to:
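
The subscription itself can be sketched as follows; the operator installs into openshift-operators, and stable-4.12 matches the ODF version used in this guide:

```shell
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: odf-multicluster-orchestrator
  namespace: openshift-operators
spec:
  channel: stable-4.12
  name: odf-multicluster-orchestrator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF

# The CSV should eventually report phase Succeeded
oc -n openshift-operators get csv
```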

Import Clusters into ACM

  1. Create a Managed Cluster Set

    Make sure you are running sshuttle --dns -NHr "aro@${EAST_JUMP_IP}" $HUB_VIRTUAL_NETWORK in a second terminal
  2. Retrieve token and server from primary cluster

  3. Retrieve token and server from secondary cluster

    Make sure you are running sshuttle --dns -NHr "aro@${CENTRAL_JUMP_IP}" $SECONDARY_VIRTUAL_NETWORK in a second terminal
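
A sketch of the three steps; the cluster set name is an example, and the token/server pairs feed the auto-import secrets created in the next sections:

```shell
# On the hub: a ManagedClusterSet grouping both DR clusters
oc config use-context hub
cat <<EOF | oc apply -f -
apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSet
metadata:
  name: aro-dr-clusters
EOF

# On each managed cluster: capture the API token and server URL
oc config use-context primary
PRIMARY_TOKEN=$(oc whoami -t)
PRIMARY_SERVER=$(oc whoami --show-server)

oc config use-context secondary
SECONDARY_TOKEN=$(oc whoami -t)
SECONDARY_SERVER=$(oc whoami --show-server)
```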

Import Primary Cluster

  1. Ensure you are in the right context

    Make sure you are running sshuttle --dns -NHr "aro@${EAST_JUMP_IP}" $HUB_VIRTUAL_NETWORK in a second terminal
  2. Create Managed Cluster

  3. Create auto-import-secret.yaml secret

  4. Create addon config for cluster

  5. Check if cluster imported

Import Secondary Cluster

  1. Create Managed Cluster

  2. Create auto-import-secret.yaml secret

  3. Create addon config for cluster

  4. Check if cluster imported

Configure Submariner Add-On

  1. Create Broker configuration

  2. Deploy Submariner config to Primary cluster

  3. Deploy Submariner to Primary cluster

  4. Deploy Submariner config to Secondary cluster

  5. Deploy Submariner to Secondary cluster

  6. Check connection status for primary cluster (wait a few minutes)

    Look for the connection established status, which indicates the connection is healthy and not degraded.

  7. Check connection status for secondary cluster

    Look for the connection established status, which indicates the connection is healthy and not degraded.
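
A sketch of the Submariner setup on the hub. The broker namespace follows the clusterset-broker naming convention, and loadBalancerEnable is turned on because ARO nodes have no public IPs; all names here assume the example cluster set from earlier:

```shell
oc config use-context hub

# The broker lives on the hub, in the cluster set's broker namespace
oc create namespace aro-dr-clusters-broker
cat <<EOF | oc apply -f -
apiVersion: submariner.io/v1alpha1
kind: Broker
metadata:
  name: submariner-broker
  namespace: aro-dr-clusters-broker
spec:
  globalnetEnabled: false
EOF

# One SubmarinerConfig + ManagedClusterAddOn per managed cluster
for cluster in primary secondary; do
cat <<EOF | oc apply -f -
apiVersion: submarineraddon.open-cluster-management.io/v1alpha1
kind: SubmarinerConfig
metadata:
  name: submariner
  namespace: ${cluster}
spec:
  IPSecNATTPort: 4500
  NATTEnable: true
  cableDriver: libreswan
  loadBalancerEnable: true
---
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: submariner
  namespace: ${cluster}
spec:
  installNamespace: submariner-operator
EOF
done

# After a few minutes, check the add-on conditions for "Connection established"
oc -n primary get managedclusteraddons submariner -o yaml
oc -n secondary get managedclusteraddons submariner -o yaml
```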

Install ODF

Please note that when you subscribe to the ocs-operator and the odf-operator, you should change the channel from channel: stable-4.11 to channel: stable-4.12, since we are using version 4.12 in this example.

Primary Cluster

  1. Switch the context to the primary cluster

  2. Follow these steps to deploy ODF into the Primary Cluster: https://cloud.redhat.com/experts/aro/odf/

Secondary Cluster

  1. Switch the context to the secondary cluster

  2. Follow these steps to deploy ODF into the Secondary Cluster: https://cloud.redhat.com/experts/aro/odf/

Finishing the setup of the disaster recovery solution

Creating Disaster Recovery Policy on Hub cluster

  1. Switch the context to the hub cluster

  2. Create a DR policy to enable replication between primary and secondary cluster

  3. Wait for DR policy to be validated

    This can take up to 10 minutes

    You should see

  4. Two DRClusters are also created

    You should see
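
A minimal sketch of the DR policy. The policy name, cluster names, and 5-minute replication interval are examples; the DRClusters are created by the Multicluster Orchestrator alongside the policy:

```shell
oc config use-context hub

cat <<EOF | oc apply -f -
apiVersion: ramendr.openshift.io/v1alpha1
kind: DRPolicy
metadata:
  name: odr-policy-5m
spec:
  drClusters:
    - primary
    - secondary
  schedulingInterval: 5m
EOF

# Wait (up to ~10 minutes) for the Validated condition to become True
oc get drpolicy odr-policy-5m \
  -o jsonpath='{.status.conditions[?(@.type=="Validated")].status}'

# One DRCluster per managed cluster should also exist
oc get drclusters
```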

Creating the Namespace, the Custom Resource Definition, and the PlacementRule

  1. First, log into the Hub Cluster and create a namespace for the application:

    Now, still logged into the Hub Cluster, create a Custom Resource Definition (CRD) for the PlacementRule in the busybox-sample namespace. You can do this by applying the CRD YAML file before creating the PlacementRule. Here are the steps:

  2. Install the CRD for PlacementRule

  3. Create the PlacementRule
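
A sketch of the namespace and PlacementRule; the rule name and labels are examples, and the PlacementRule CRD ships with ACM's application lifecycle tooling:

```shell
oc config use-context hub

oc create namespace busybox-sample

cat <<EOF | oc apply -f -
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: busybox-placement
  namespace: busybox-sample
  labels:
    app: busybox-sample
spec:
  clusterConditions:
    - status: "True"
      type: ManagedClusterConditionAvailable
  clusterReplicas: 1
EOF
```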

Create application and failover

  1. Create an application with ACM

  2. Associate the DR policy to the application

  3. Failover sample application to secondary cluster

  4. Verify application runs in secondary cluster

    Make sure you are running sshuttle --dns -NHr "aro@${CENTRAL_JUMP_IP}" $SECONDARY_VIRTUAL_NETWORK in a second terminal
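
The failover itself is driven by the DRPlacementControl created when the DR policy was associated with the application. A sketch, where busybox-drpc is an example name for that resource:

```shell
oc config use-context hub

# Trigger failover by setting the action and target cluster on the DRPC
oc -n busybox-sample patch drpc busybox-drpc --type merge \
  -p '{"spec":{"action":"Failover","failoverCluster":"secondary"}}'

# Verify the workload is now running on the secondary
oc config use-context secondary
oc -n busybox-sample get pods,pvc
```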

Cleanup

Once you’re done, it’s a good idea to delete the clusters to ensure that you don’t get a surprise bill.

Delete the clusters and resources
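
Assuming the resource group layout from earlier in this guide, cleanup can be sketched as:

```shell
# Delete all three clusters, then remove the resource groups
# (which also removes the VNETs, peerings, and jump hosts)
az aro delete -g $AZR_RESOURCE_GROUP -n $HUB_CLUSTER --yes --no-wait
az aro delete -g $AZR_RESOURCE_GROUP -n $PRIMARY_CLUSTER --yes --no-wait
az aro delete -g $AZR_SECONDARY_RESOURCE_GROUP -n $SECONDARY_CLUSTER --yes --no-wait

az group delete --name $AZR_RESOURCE_GROUP --yes
az group delete --name $AZR_SECONDARY_RESOURCE_GROUP --yes
```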

