
Background

Backup is defined as the process of creating copies of data and storing them in separate locations or mediums, while restore is defined as the process of retrieving the backed-up data and returning it to its original location or system, or to a new one. In other words, backup is akin to data preservation, and restore is in essence data retrieval.

In this article, I will discuss several considerations in the backup and restore processes for both Red Hat OpenShift Service on AWS (ROSA) and Azure Red Hat OpenShift (ARO) clusters. I will also discuss what to back up, and lastly, what tools can be used to support backup and restore processes. The objective of this article is to share best practices for backup and restore for your ROSA and ARO clusters at a high level.

Considerations

First and foremost, I recommend that you perform backup and restore at the application level rather than at the cluster level. Note that this applies to all the following considerations.

  • Disaster Recovery (DR) plan
    • High Availability (HA) vs Fault Tolerant (FT)

      Both HA and FT are important considerations in any DR plan. Designing applications with HA in mind will result in minimal downtime, such as during failover or maintenance; thus the focus is to reduce this downtime to an acceptable level.

      On the other hand, designing FT systems will ensure no downtime and the overall systems will continue operating correctly even when some parts are failing. As such, these systems are typically able to recover in real-time without user intervention.

      In short, the former design is commonly used for applications that can tolerate downtime to some extent, while the latter is for critical applications where downtime is unacceptable.

    • Recovery Point Objective (RPO) vs Recovery Time Objective (RTO)

      Other critical considerations in the DR plan are the organization’s RPO and RTO. RPO refers to the acceptable amount of data loss in the event of disaster, while RTO is defined as the acceptable downtime before the systems recover.

      Let's say your RPO is one hour and your RTO is also one hour. This means that if a disaster happens at 12 noon, you could only tolerate losing data created after 11am, and your systems need to resume normal operations by 1pm.

Figure 1. Illustration of DR scenario
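The arithmetic in this scenario can be sketched in a few lines of shell (the timestamps and the one-hour RPO are just the example values above; `date -d` assumes GNU date):

```shell
#!/bin/sh
# Disaster strikes at 12 noon; the last successful backup ran at 11am.
rpo_seconds=3600                                  # one-hour RPO
last_backup=$(date -u -d "2024-01-01 11:00" +%s)  # GNU date -> epoch seconds
disaster=$(date -u -d "2024-01-01 12:00" +%s)
data_loss=$(( disaster - last_backup ))           # data at risk, in seconds

if [ "$data_loss" -le "$rpo_seconds" ]; then
  echo "RPO met: ${data_loss}s of data at risk"
else
  echo "RPO missed: ${data_loss}s of data at risk"
fi
```

Here the loss window is exactly 3,600 seconds, so the one-hour RPO is just barely met; any longer gap between backups would miss it.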

How to Back Up

  • Frequency
    • As discussed above, your RPO and RTO determine how frequently you need to back up. Using the previous example, since your RPO is one hour, you will need to back up at least every hour to meet the RPO.
  • Location
    • To ensure that your systems remain available during an outage, you might consider distributing your clusters to multiple availability zones (AZs) and/or regions. In other words, if you currently have a single cluster in a single AZ, you might consider having your cluster replicated in multiple AZs instead. Similarly, you might want to ensure that your cluster can failover to another region in the event of a catastrophe affecting the region.
  • Security
    • Security is another important consideration since you don't want anyone without proper authentication or authorization to access your backup data. Therefore, restrict access to this data, store the backup in a secure location, and encrypt it both at rest and in transit.
  • Automation
    • You might want to automate the backup and restore processes. I will discuss which tools can help you with the process below.
  • Test and validation
    • Last but not least, you want to ensure that all plans with the considerations above are tested, validated, documented, and maintained. This will ensure that the backup and restore processes are functioning properly in disaster recovery scenarios.
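To make the frequency and automation points concrete, here is one possible sketch: a Velero Schedule custom resource (the API that OADP installs) that takes an hourly backup of a hypothetical myapp namespace. The resource name, namespace names, and retention period are assumptions; adjust them to your environment.

```shell
# Hypothetical hourly backup schedule for the "myapp" namespace.
# Assumes OADP/Velero is installed in the openshift-adp namespace.
cat <<'EOF' | oc apply -f -
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: myapp-hourly
  namespace: openshift-adp
spec:
  schedule: "0 * * * *"          # every hour, matching a one-hour RPO
  template:
    includedNamespaces:
      - myapp
    ttl: 72h0m0s                 # keep each backup for three days
EOF
```

Scheduling the backup in-cluster like this removes the need for an external cron job, and the CR itself can live in Git alongside your other manifests.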

What to Back Up

When it comes to what you need to back up, consider this at the application level. The first rule: it is not advisable to back up etcd, since it is highly unlikely that your new cluster will be an exact copy of the one you are backing up. So do not back up etcd!

  • Namespaces
    • Determine which namespaces are critical to your applications.
  • Custom Resources (CRs)/Custom Resource Definitions (CRDs)
    • Along with your namespaces, select which CRs and CRDs are relevant to your cluster.
  • YAML Manifests
    • Select which YAML manifests you want to prioritize and back up.
  • Persistent Volumes (PVs)
    • If your cluster is using PVs, see where those volumes reside (i.e., inside or outside the cluster) and consider how to back up these volumes.
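Assuming Velero (via OADP) as the backup tool, a one-off backup scoped to the items above might look like this sketch; the "myapp" namespace and the resource list are placeholders for your own workloads:

```shell
# Back up only the critical namespace, selected resource types, and PV data.
velero backup create myapp-backup \
  --include-namespaces myapp \
  --include-resources deployments,services,configmaps,persistentvolumeclaims \
  --snapshot-volumes

# YAML manifests can also be exported directly for a Git-tracked copy:
oc get deployment,service,configmap -n myapp -o yaml > myapp-manifests.yaml
```

Note that `--snapshot-volumes` relies on a volume snapshotter being configured for wherever your PVs reside, which is why it matters whether those volumes live inside or outside the cluster.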

Tools

Finally, let's discuss several tools that can help you with backup and restore processes.

  • GitOps and CI/CD
    • Automation is an important consideration in the backup and restore processes. GitOps helps in tracking changes and can be beneficial in identifying the cause of data loss during backup and restore. CI/CD pipelines can also be extended to include deployment of the backup and restore processes themselves, allowing you to perform and test those processes regularly.
  • Red Hat OpenShift API for Data Protection (OADP)
    • OADP is an API that allows external backup and recovery tools to interact with OpenShift clusters, letting you use your preferred data protection solutions while ensuring the availability and recoverability of application data in the event of a catastrophe. Refer to the OADP documentation for more details.
  • Migration Toolkit for Containers (MTC)
    • MTC, on the other hand, provides tools and resources for migrating applications within the same Red Hat OpenShift cluster or between clusters. Refer to the MTC documentation for more details.
  • Third-party tools
    • Other third-party tools we recommend include Trilio, Konveyor, Velero, Kasten K10, and Portworx.
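To close the loop on the Velero/OADP workflow sketched above, restoring from a backup is a single command; as before, the backup name is a placeholder:

```shell
# Restore everything captured in the "myapp-backup" backup into the cluster.
velero restore create --from-backup myapp-backup

# Check progress; a completed restore reports the Completed phase.
velero restore get
```

Running a restore like this into a scratch cluster on a regular cadence is also a practical way to satisfy the test-and-validation consideration discussed earlier.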

