This article provides the steps to run OpenShift on an edge computing solution offered by AWS. This combination provides a unique value proposition for applications that want to remain agnostic toward the underlying infrastructure and run on any public cloud provider's solution.

AWS Wavelength

AWS Wavelength is an AWS infrastructure offering optimized for mobile edge computing applications. Wavelength Zones are AWS infrastructure deployments that embed AWS compute and storage services within communications service providers' (CSP) data centers at the edge of the 5G network. This allows application traffic from 5G devices to reach application servers running in Wavelength Zones without leaving the telecommunications network. This avoids the latency that would result from application traffic having to traverse multiple hops across the internet to reach its destination, enabling customers to take full advantage of the latency and bandwidth benefits offered by modern 5G networks. AWS Wavelength Zones are available in 10 U.S. cities with Verizon, in Tokyo, Japan, with KDDI, and in Daejeon, South Korea, with SKT.

Wavelength Concepts

The following are the key concepts for Wavelength:

  • Wavelength — A new type of AWS infrastructure designed to run workloads that require ultra-low latency over mobile networks.
  • Wavelength Zone (WZ) — A zone in the carrier location where the Wavelength infrastructure is deployed. Wavelength Zones are associated with an AWS Region. A Wavelength Zone is a logical extension of the region, and is managed by the control plane in the region.
  • VPC — A customer virtual private cloud (VPC) that spans Availability Zones, Local Zones, and Wavelength Zones, and has deployed resources such as Amazon EC2 instances in the subnets that are associated with the zones.
  • Subnet — A subnet that you create in a Wavelength Zone. You can create one or more subnets, and then run and manage AWS services, such as Amazon EC2 instances, in the subnet.
  • Carrier Gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location and allows outbound traffic to the carrier network and internet.
  • Network Border Group — A unique set of Availability Zones, Local Zones, or Wavelength Zones from which AWS advertises IP addresses.
  • Wavelength Application — An application that you run on an AWS resource in a Wavelength Zone.

How Wavelength Works

  • Extend the Amazon VPC to include a Wavelength Zone, then create AWS resources such as Amazon EC2 instances in the desired subnets
  • Deploy the portions of an application that require ultra-low latency in a Wavelength Zone, then seamlessly connect back to the rest of the application and the full range of cloud services running in the AWS Region
  • Application traffic reaches application servers running in Wavelength Zones without leaving the mobile network

Deploying OpenShift

The above diagram is a high-level overview of the deployment architecture for OCP in AWS with Wavelength. We first deployed Red Hat OpenShift using the installer-provisioned infrastructure (IPI) deployment method on an existing VPC with predefined subnets. We did this because if the installer is left to create the underlying infrastructure, it uses the entire VPC CIDR, leaving us no room to create additional subnets for the Wavelength Zone. Now let us see in detail how this was implemented.

Creating VPC and Subnets

We will start by creating the VPC and subnets in the AWS region. For this, we will use a CloudFormation template to deploy the required VPC and subnets. We can refer to this guide for information on creating a VPC and subnets for OCP using CloudFormation templates. Keep these in mind while creating the VPC and subnets:

  1. Subnet bits (the netmask size of each subnet)
  2. Availability Zone count

Following is the template I used for deploying the VPC and subnets for this test:
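The original template was not preserved in this copy of the article; the following is a minimal sketch of what such a CloudFormation template looks like. The stack structure, names, and CIDR ranges are illustrative assumptions, not the exact values used in the test:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal VPC with one subnet for an OCP deployment (illustrative sketch)
Parameters:
  VpcCidr:
    Type: String
    Default: 10.0.0.0/16
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: !Ref VpcCidr
      EnableDnsSupport: true
      EnableDnsHostnames: true
  SubnetA:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [0, !GetAZs ""]
      # Keep the subnet smaller than the VPC CIDR so there is room left
      # for the additional Wavelength subnet created later
      CidrBlock: 10.0.0.0/20
```

The important point, as noted above, is choosing subnet bits small enough that the subnets do not consume the whole VPC CIDR.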



Create the VPC and subnets using the AWS CLI:
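The exact command was not preserved; a typical invocation, assuming the template is saved as vpc.yaml (the stack name and parameter values are illustrative), looks like:

```shell
aws cloudformation create-stack \
  --stack-name ocp-vpc \
  --template-body file://vpc.yaml \
  --parameters ParameterKey=VpcCidr,ParameterValue=10.0.0.0/16

# Wait until the stack reaches CREATE_COMPLETE
aws cloudformation wait stack-create-complete --stack-name ocp-vpc
```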


On successful completion of the above command, you will have the VPC and subnets:


Subnets (names were given later manually)

Installer-Provisioned Infrastructure (IPI) Installation

Once the required VPC and subnets are created, we will proceed to install the OpenShift cluster using the IPI installation method. For an IPI installation into an existing VPC, please follow the Installing a cluster on AWS into an existing VPC document. The installation takes around 35 to 40 minutes to complete.
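For reference, the part of install-config.yaml that points the installer at the existing VPC is the subnets list under platform.aws; a sketch with illustrative subnet IDs (replace them with the IDs produced by the CloudFormation stack):

```yaml
platform:
  aws:
    region: us-east-1
    subnets:
    - subnet-<id-of-subnet-a>   # subnets created by the CloudFormation template
    - subnet-<id-of-subnet-b>
```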


Once the installation is complete, log in to the web console to verify that the cluster is ready and operational.

Creating a Subnet in the VPC for AWS Wavelength

Log in to the AWS web console, go to Services > VPC, and click on Subnets:

Create the subnet by providing the VPC ID, subnet name, Availability Zone, and CIDR for the subnet:

Make sure that the selected Availability Zone is a Wavelength Zone. Also, create a Carrier Gateway for the Wavelength subnet.
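The same can be done from the CLI; a sketch, with the VPC ID and CIDR as placeholders you would replace with your own values:

```shell
# Create a subnet in the Wavelength Zone (zone name shown is the NYC Verizon zone)
aws ec2 create-subnet \
  --vpc-id vpc-<your-vpc-id> \
  --cidr-block 10.0.128.0/24 \
  --availability-zone us-east-1-wl1-nyc-wlz-1

# Create a carrier gateway so the Wavelength subnet can reach the carrier network
aws ec2 create-carrier-gateway --vpc-id vpc-<your-vpc-id>
```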

Creating MachineSets for Wavelength Zone

Now we will create a MachineSet for the AWS Wavelength Zone. (A MachineSet is a group of machines. MachineSets are to machines what ReplicaSets are to Pods. A MachineSet has a template for the machines' specifications. For more information on MachineSets, please refer to this document.)


We will use an existing MachineSet as a template and edit it to create the Wavelength MachineSet:


In the YAML file, you will need to edit the following attributes:


  1. Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. Note: if you have copied the MachineSet from an existing AZ, you don't have to modify the infrastructure ID or cluster ID.
    # oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
  2. Specify the infrastructure ID, node label, and zone
  3. Specify the node label to add
  4. The instance type has to be one that Wavelength supports for edge workloads: t3.medium, t3.xlarge, or r5.2xlarge. Note that we have tested only with t3.xlarge; we believe r5.2xlarge should also work fine.
  5. The availability zone has to be the Wavelength Zone
  6. The subnet ID of the subnet created for the Wavelength Zone
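Putting those edits together, a trimmed-down sketch of the resulting MachineSet; the infrastructure ID, subnet ID, and zone name are placeholders for your cluster's values, and fields not relevant to the Wavelength changes are omitted:

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <infrastructure_id>-worker-us-east-1-wl1-nyc-wlz-1
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id>
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-worker-us-east-1-wl1-nyc-wlz-1
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-worker-us-east-1-wl1-nyc-wlz-1
    spec:
      providerSpec:
        value:
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          kind: AWSMachineProviderConfig
          instanceType: t3.xlarge          # Wavelength supports t3.medium, t3.xlarge, r5.2xlarge
          placement:
            region: us-east-1
            availabilityZone: us-east-1-wl1-nyc-wlz-1   # the Wavelength Zone
          subnet:
            id: subnet-<wavelength_subnet_id>           # subnet created for the Wavelength Zone
```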

Testing Communication

Now we will test the communication between pods running on a Wavelength node and a normal node. We can use iperf for this. To run this test, clone the repository containing the iperf manifests, log in to the OpenShift cluster, and run the following command:
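The manifests from that repository were not preserved here; a minimal sketch of the same idea — an iperf3 server pod pinned to the Wavelength node and a client pod on a regular worker. The image and node name are illustrative assumptions:

```shell
# Server pod pinned to the Wavelength node
oc run iperf-server --image=networkstatic/iperf3 --restart=Never \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"ip-10-0-99-126.ec2.internal"}}' \
  -- iperf3 -s

# Client pod on a regular worker, pointed at the server pod IP
SERVER_IP=$(oc get pod iperf-server -o jsonpath='{.status.podIP}')
oc run iperf-client --rm -it --restart=Never --image=networkstatic/iperf3 -- iperf3 -c "$SERVER_IP"
```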


The output should look similar to this:


The highlighted result is from a Wavelength node. We can observe that the pods in the Wavelength Zone are unable to communicate with pods running in normal zones. The reason is the MTU: the default MTU in Wavelength is 1300, whereas in a normal zone it is 1500. When a packet larger than the MTU is transmitted over plain HTTP, routers along the path can fragment it into multiple packets to deliver the data. When traffic is sent over HTTPS, however, packets typically carry the don't-fragment bit, so a router that receives a packet larger than the MTU is forced to drop it.

Verifying Maximum Transmission Unit (MTU) Issue

To verify the MTU issue, we can use the following procedure: try to connect to the internal URL of an application running on OpenShift. In this case, we use the internal registry for the test.
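A sketch of one way to observe the problem: small packets between a Wavelength pod and a registry pod get through, while packets near the normal 1500-byte MTU with the don't-fragment bit set do not. Pod names and IPs are placeholders:

```shell
# Find the IP of a registry pod running in a regular zone
oc -n openshift-image-registry get pods -o wide

# From a pod on the Wavelength node, small don't-fragment pings succeed...
oc rsh <wavelength-pod> ping -M do -s 1200 -c 3 <registry-pod-ip>

# ...but larger ones get no reply once they exceed the ~1300-byte path MTU
oc rsh <wavelength-pod> ping -M do -s 1400 -c 3 <registry-pod-ip>
```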



Changing MTU for OpenShift SDN

To overcome the MTU issue, we have to reduce the MTU of the SDN so that the overall packet size, including the SDN encapsulation overhead, does not exceed 1300, which is the MTU for nodes in the Wavelength Zone. The procedure to change the MTU for OpenShift SDN is as follows:

  • The SDN MTU is configured in the network object named cluster, and the current configuration can be retrieved by:

  • Edit the network object to configure the new MTU:
  • Go to the key spec.defaultNetwork, set the desired MTU, and save the changes:
  • The network operator maintains a configmap with the current network configuration applied to the cluster. Delete it so the operator re-reads the modified network configuration and recreates the configuration:
  • Check that the configmap is created with the new MTU:
  • The object data should look like the following (notice the field ‘mtu: 1188’):
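A sketch of the commands behind those steps, using the MTU value from this test; the configmap name assumes the network operator stores its applied configuration in applied-cluster, as it did at the time of this test:

```shell
# Retrieve the current network configuration
oc get networks.operator.openshift.io cluster -o yaml

# Edit the network object and set the MTU under spec.defaultNetwork:
oc edit networks.operator.openshift.io cluster
#   spec:
#     defaultNetwork:
#       openshiftSDNConfig:
#         mtu: 1188

# Delete the operator's configmap so it re-reads the modified configuration
oc -n openshift-network-operator delete configmap applied-cluster

# Check that the configmap is recreated with the new MTU
oc -n openshift-network-operator get configmap applied-cluster -o yaml | grep mtu
```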

Check Connectivity After MTU Change

We will use iperf again to test the communication between pods:
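Re-running the client from the earlier sketch (same illustrative pod names and image):

```shell
SERVER_IP=$(oc get pod iperf-server -o jsonpath='{.status.podIP}')
oc run iperf-client --rm -it --restart=Never --image=networkstatic/iperf3 -- iperf3 -c "$SERVER_IP"
```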


As we can see from the output, the pods can now communicate. We will now be able to deploy applications on a worker node running in the Wavelength Zone.

Deploying an Application

To test further, we can deploy an application on the worker node in the Wavelength Zone.

Preparing the Wavelength Node

To make sure that an application intended for the Wavelength Zone only runs on nodes in Wavelength Zones, we will label the node:
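A sketch of the labeling command; the node name is the Wavelength worker from this cluster, and the label key/value is our own choice, not a predefined one:

```shell
oc label node ip-10-0-99-126.ec2.internal zone=wavelength
```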



Once the node is labeled, we will create a project and annotate it with a node selector for the wavelength label so that any application deployed in this project runs on a Wavelength node:
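A sketch of these two steps, using the project name from the test and the label applied above:

```shell
oc new-project test-wvlz
oc annotate namespace test-wvlz openshift.io/node-selector=zone=wavelength
```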


Now we will deploy a sample application in this namespace using the OpenShift web UI.

  1. Go to the Developer perspective of the project test-wvlz and click on +Add:
  2. Click on Samples, select HTTP, and click on Create:
  3. The application will be deployed:
  4. Click on the pod to see which node it is running on. It should be running on ip-10-0-99-126.ec2.internal, which is our Wavelength node:
  5. Usage details:
    1. We can observe the resource utilization of the whole cluster:
    2. We can also observe the resource utilization of the worker node on Wavelength:



The Installer-Provisioned Infrastructure deployment method can be used to install a stretched OpenShift cluster with centralized masters running in regular AWS zones (us-east-1[a,b,c]) and worker nodes running in an AWS Wavelength Zone (us-east-1-wl1-nyc-wlz-1). This architecture significantly reduces the footprint of an OpenShift cluster in edge locations, allowing edge compute infrastructure to devote more resources to applications.

Based on specific use case requirements, the deployment can be further configured to move additional OpenShift components, such as load balancers, into Wavelength Zones, or to open up node ports to allow direct communication with workloads running in Wavelength Zones.


How-tos, OpenShift 4, Edge, AWS
