Learn how to bring workloads closer to your users by deploying OpenShift on a remote OpenStack edge site



Introduction

Edge computing means bringing data processing as close to the user as possible. You accomplish this by deploying workloads to remote locations near users, typically much smaller than a traditional data center, rather than centralizing everything in a few large data centers.

You can use Red Hat OpenStack Platform to deploy your edge computing infrastructure with Distributed Compute Nodes (DCN). Compute nodes are deployed to remote locations, called “edge sites,” and are defined by their availability zone. Each edge site has its own compute, storage, and networking capabilities.


The central site hosts the OpenStack control plane, which consists of at least three controllers. This site can optionally have compute and storage services.

The Red Hat OpenStack guide will help you deploy your OpenStack cloud with an edge architecture. By following this post, you will learn how to deploy OpenShift on top of this platform.

Use Cases

There are many possibilities when it comes to deploying OpenShift on top of an OpenStack cloud, but today we will cover what will be delivered as a Tech Preview in a future release of OpenShift.

Let’s discuss use cases first, then requirements, and then propose a role profile that would meet these needs.

Let’s say we are using OpenStack infrastructure in a distributed fashion across multiple physical areas, or edge sites. Each edge site contains between 5 and 20 machines, and we would like to deploy one or more OpenShift clusters at each one.

These clusters will run our applications with these benefits:

  • low latency to the end user
  • a smaller footprint
  • a confined fault domain
  • a connection to the central site for control applications or disaster recovery

Requirements

Our solution must:

  • Meet OpenStack requirements for storage and networking.
  • Run OpenShift machines (virtual servers in OpenStack) within the same edge site, also referred to as an availability zone.
  • Maintain storage for OpenShift within the same site. For OpenShift, we require Cinder (block storage) to be deployed at the edge site. We do not recommend using Swift (object storage), because it is only available from the central site; keep in mind that the same applies to Manila (shared file systems).
  • Have network connectivity that is decentralized and delegated to each edge site. We recommend routed provider networks, which give network administrators more flexibility: each site can have its own overlay identifiers and protocols. A minimal example of creating such a network is shown after this list.

    You can learn more about using provider networks in the Red Hat OpenStack Networking guide.
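
To make this requirement concrete, here is a minimal sketch of creating a routed provider network and its subnet for one edge site with the OpenStack CLI. The network name (edge0-net), subnet name (edge0-subnet), physical network (datacentre), VLAN segment (101), and availability zone (az0) are placeholders for this example; use the values your network administrators define for each site, and note that availability-zone hints are only honored if your Neutron deployment supports them.

$ openstack network create \
    --provider-network-type vlan \
    --provider-physical-network datacentre \
    --provider-segment 101 \
    --availability-zone-hint az0 \
    edge0-net
$ openstack subnet create \
    --network edge0-net \
    --subnet-range 192.168.26.0/24 \
    --dhcp \
    edge0-subnet

The subnet created here is the one we will later reference from install-config.yaml through machineNetwork and machinesSubnet.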

Role Profile

This is an example of a combined role profile between OpenStack DCN and OpenShift running at each edge site that could be used for a Provider scenario.

In this particular scenario, virtualized network functions (VNF) are hosted by OpenStack Nova, while the cloud-native network functions (CNF) live in the OpenShift clusters. Both are in the same availability zone; they can share the same compute, network, and storage resources.

Known Limitations

This role profile is fairly new and is planned to be delivered as a Tech Preview with a number of known limitations. The following list is non-exhaustive, but it gives a good idea of what cannot be done today:

  • Due to the DCN limitations of OSP13, we are only targeting OSP16 at the moment.
  • Only Hyper-Converged Infrastructure (HCI)/Ceph is supported at the edge for 16.1.4.
  • Non-HCI/Ceph at the edge will be supported for 16.2.
  • Networks must be pre-created (Bring Your Own Network), either as tenant or provider networks. They have to be scheduled in the right availability zone.

You should also check the limitations of the distributed compute node (DCN) architecture in OSP 16.1. A few of these limitations will be addressed in a future OSP 16.x z-stream release.

Enough talking: Let’s Deploy

Once your infrastructure has met all the requirements, you are ready to deploy your OpenShift cluster on an edge site.

In our particular use case, our goal is to keep the OpenShift Image Registry as close to our workloads as possible. Because Swift is only available at the central site, and the installer backs the registry with Swift whenever the deploying user holds the swiftoperator role, we first disable Swift access so the registry falls back to Cinder block storage at the edge:

$ openstack role remove --user <your-user> --project <your-project> swiftoperator

You can verify that your user no longer has Swift access by loading your OpenStack credentials and running the following command:

$ openstack container create test
Forbidden (HTTP 403)

As you can see, the command returned an error, which confirms that this user cannot use Swift.
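
If you prefer to inspect the role assignments directly rather than relying on the 403 error, the OpenStack CLI can list them; the user and project names below are placeholders:

$ openstack role assignment list --user <your-user> --project <your-project> --names

The swiftoperator role should no longer appear in the output.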

Now, let’s have a look at the availability zones (AZs). An AZ is a physical location where our remote compute nodes live; in a DCN deployment, each edge site provides both compute and storage services.

We can list the available zones with this command:

$ openstack availability zone list --compute
+-----------+-------------+
| Zone Name | Zone Status |
+-----------+-------------+
| central   | available   |
| az0       | available   |
| az1       | available   |
+-----------+-------------+
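
Since each edge site also provides block storage, you can list the Cinder (volume) availability zones the same way. Assuming every edge site runs its own storage backend, as described earlier, the output would be expected to mirror the compute zones:

$ openstack availability zone list --volume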

Prior to the OpenShift deployment, the cloud administrator should upload the RHCOS image to OpenStack Glance.

The image can be uploaded at the central location and then distributed to the remote edge sites, or it can directly be uploaded into a specific site.

To learn more about this process, visit the official manual.
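
As a minimal sketch of the direct upload path, the image can be created in Glance with the OpenStack CLI. The file name below is a placeholder for the RHCOS OpenStack QCOW2 image you downloaded, and the image name (rhcos-4.8) is the one we will reference later in install-config.yaml:

$ openstack image create \
    --disk-format qcow2 \
    --container-format bare \
    --file rhcos-openstack.x86_64.qcow2 \
    rhcos-4.8

Distributing the image to the Glance stores of the remote edge sites is covered by the DCN image management workflow described in the manual linked above.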

We are almost ready to deploy. Next, let’s look at an example of the install-config.yaml:

apiVersion: v1
baseDomain: shiftstack.com
compute:
- name: worker
  platform:
    openstack:
      type: m1.large
      zones:
      - az0
      rootVolume:
        size: 30
        type: tripleo
        zones:
        - az0
  replicas: 3
controlPlane:
  name: master
  platform:
    openstack:
      type: m1.xlarge
      zones:
      - az0
      rootVolume:
        size: 30
        type: tripleo
        zones:
        - az0
  replicas: 3
metadata:
  name: my-cluster
networking:
  machineNetwork:
  - cidr: 192.168.26.0/24
  networkType: OpenShiftSDN
platform:
  openstack:
    cloud: openstack
    computeFlavor: m1.xlarge
    clusterOSImage: rhcos-4.8
    apiVIP: 192.168.26.214
    ingressVIP: 192.168.26.111
    machinesSubnet: 072fefda-df2d-4a84-97cd-7c911048f402
    defaultMachinePlatform:
      type: m1.xlarge
pullSecret: <redacted>
sshKey: <redacted>
  • For each machine pool, we configure the zones parameter, which tells the installer where to deploy the machines. For now, only one zone is supported, since the cluster lives within a single edge site.
  • Optionally, you can deploy with rootVolume (documented here) and then specify a zone (again, only one zone is supported for now). The zone must be the same as the one where the machine is deployed.
  • machineNetwork specifies the CIDR of the Provider Network subnet that is used to deploy OpenShift machines, and machinesSubnet is the Neutron UUID of that subnet (see the lookup example after this list). The subnet is unique per edge site and tied to the Provider Network that the network administrator creates. To learn more about the BYON options, have a look at the official manual.
  • clusterOSImage defines which Glance image is used for RHCOS.
  • The rest of the parameters are not edge-specific and are documented in the OpenShift manual.
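
As referenced in the notes above, a quick way to retrieve the Neutron UUID for machinesSubnet is to query the subnet created for the site. The subnet name edge0-subnet is the placeholder from the earlier network example:

$ openstack subnet show edge0-subnet -f value -c id
072fefda-df2d-4a84-97cd-7c911048f402

The apiVIP and ingressVIP addresses must be unused IPs inside that subnet’s CIDR that are not already allocated to a port or handed out by DHCP.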

You can now deploy your OpenShift cluster that will live at the edge.
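
Assuming the install-config.yaml above sits in a directory called edge-cluster (a placeholder name), the installation is launched with the standard installer command:

$ openshift-install create cluster --dir edge-cluster --log-level info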

Once your cluster is deployed, you should be able to see that the virtual machines and volumes have been created in the right availability zone.
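
A quick way to confirm this is to check the availability zone reported by Nova and Cinder for the resources the installer created; the server and volume names below are placeholders:

$ openstack server show <server-name> -c OS-EXT-AZ:availability_zone
$ openstack volume show <volume-id> -c availability_zone

Each OpenShift machine and each root volume should report the zone you configured in install-config.yaml (az0 in our example).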


What’s on the Roadmap

This is only the beginning of how we can deploy OpenShift at the edge with OpenStack. This first iteration allows us to cover the most common use cases, but more work is in the pipeline:

  • After OpenStack Octavia is supported at the edge, we will be able to schedule load balancers on specific edge sites.
  • After OpenStack Manila is supported at the edge, workloads will be able to consume shares within the same physical location.
  • Kuryr for CNI with AZ support is a work in progress.
  • We will support non-BYON deployments at installation time.
  • Stretched OpenShift clusters are on the roadmap.
  • Deployments with distributed, heterogeneous compute architectures are planned.
  • We will offer specific guides and tutorials for advanced use cases.

Conclusion

Combining OpenStack DCN with OpenShift Container Platform gives our customers the power to deploy their workloads where they belong: close to their users.

Whether it is for Enterprise, IoT, or Providers, we provide the feature sets to enable these types of role profiles.


About the author

Emilien is a French citizen living in Canada (QC) who has been contributing to OpenStack since 2011, when it was still a young project. While his major focus has been the installer, his work has helped customers have a better experience when deploying, upgrading, and operating OpenStack at large scale. A technical and team leader at Red Hat, he is developing leadership skills with a passion for teamwork and technical challenges. He loves sharing his knowledge and often gives talks at conferences.
