Red Hat OpenShift Service on AWS (ROSA) explained

Learn about specific use cases, detailed deep dives, and specialized strategies to get the most out of Red Hat OpenShift Service on AWS for your business needs through this series of videos. 

You can also watch this interactive demonstration on how to install ROSA, from creating an account to deploying applications.

Red Hat OpenShift Service on AWS (ROSA) 101

8 mins

Charlotte Fung, from the Red Hat Managed OpenShift Black Belt team, discusses the underlying architecture of Red Hat OpenShift Service on AWS (ROSA), focusing on node functions, the control and data planes, and AWS VPC constructs.

To view this video within our in-depth learning path, please visit the Getting started with Red Hat OpenShift Service on AWS (ROSA) page.

Charlotte Fung (00:00):
Hi, my name is Charlotte Fung, and I am a Cloud Services Black Belt at Red Hat. Today I'll be talking to you about Red Hat OpenShift Service on AWS, commonly known as ROSA. So what is ROSA? ROSA is a fully managed, jointly engineered product by Red Hat and AWS, which gives you a Kubernetes-ready platform where you can run your applications.

Fundamental architecture


(00:31):
But for today, I'll be talking about the fundamental architecture of ROSA, which is the simplest architecture you can think of. Because ROSA runs in AWS, your cluster will run in a VPC. The fundamental, simplest deployment of a ROSA cluster is a Single-AZ deployment. And I just want to point out, this is the simplest default architecture for a new user who is trying to understand what ROSA is and get started. For the fundamental default deployment, we need two subnets: one will be a public subnet, and the second will be a private subnet.
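To make this concrete, here is a minimal sketch of creating that default Single-AZ cluster with the rosa CLI and then listing the subnets it provisioned. The cluster name, region, and VPC ID below are placeholders, and prerequisites vary by account setup.

```bash
# Log in with an offline token from console.redhat.com/openshift/token/rosa
rosa login --token="<your-offline-token>"

# Depending on your account setup, account-wide IAM roles may be needed first:
# rosa create account-roles --mode auto

# Create the default cluster: Single-AZ, one public and one private subnet.
# "my-rosa-cluster" and the region are placeholder values.
rosa create cluster --cluster-name my-rosa-cluster --region us-east-1

# Check installation progress and cluster details
rosa describe cluster --cluster my-rosa-cluster

# Once installed, list the subnets in the cluster's VPC (VPC ID is a placeholder)
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=vpc-0123456789abcdef0" \
  --query "Subnets[].{ID:SubnetId,AZ:AvailabilityZone,Public:MapPublicIpOnLaunch}"
```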


(02:06):
So what happens is, all your cluster resources will be in your private subnet, and in the public subnet you're going to have your ingress and egress resources. And each cluster is deployed with three control plane nodes.


(02:29):
At least three, and this is to account for resiliency and high availability. Each control plane node comes with an API server, etcd, and a controller. We also get two infrastructure nodes, a minimum of two, and again this is to account for resiliency. Each infrastructure node contains a built-in registry, a router layer, and a monitoring server. And for each cluster, you get at least two worker nodes, which is where all your applications will run.
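Those per-node components live in standard OpenShift namespaces, so you can sketch a quick check of them on a running cluster like this (output will vary by cluster and version):

```bash
# Control plane components
oc -n openshift-etcd get pods              # etcd members
oc -n openshift-kube-apiserver get pods    # API server instances

# Infrastructure node components
oc -n openshift-image-registry get pods    # built-in registry
oc -n openshift-ingress get pods           # router layer
oc -n openshift-monitoring get pods        # monitoring stack
```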


(03:56):
So worker nodes times two, infra nodes times two, and control plane nodes times three.
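To verify those counts on your own cluster, one hedged approach is to create a temporary admin, log in, and filter nodes by the standard role labels. The API URL and password below are placeholders.

```bash
# Create a temporary cluster-admin user (rosa prints the oc login command)
rosa create admin --cluster my-rosa-cluster

# Log in with the printed credentials (placeholder URL and password shown)
oc login https://api.my-rosa-cluster.example.com:6443 \
  --username cluster-admin --password "<generated-password>"

# Count nodes by role: expect 3 control plane, 2 infra, and 2 worker nodes
# in the fundamental Single-AZ deployment. (Depending on the OpenShift
# version, control plane nodes may be labeled "control-plane" instead of
# "master", and infra nodes may also carry the worker label.)
oc get nodes -l node-role.kubernetes.io/master=
oc get nodes -l node-role.kubernetes.io/infra=
oc get nodes -l node-role.kubernetes.io/worker=
```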

Accessing the cluster


(04:15):
So this, basically, is your OpenShift, your ROSA, resources, which will be located in your private subnet. For these resources to communicate with the internet, you can make use of a NAT gateway, which sits in your public subnet. So now you may wonder, how do you get access to the cluster in the private subnet? ROSA comes with pre-built load balancers, and I'm going to be talking about them right now.
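As a small sketch, you can confirm that NAT gateway from the AWS CLI; the VPC ID is again a placeholder (note this subcommand takes the singular --filter option):

```bash
# List NAT gateways in the cluster's VPC; the default Single-AZ deployment
# places one in the public subnet to give private-subnet nodes outbound access
aws ec2 describe-nat-gateways \
  --filter "Name=vpc-id,Values=vpc-0123456789abcdef0" \
  --query "NatGateways[].{ID:NatGatewayId,Subnet:SubnetId,State:State}"
```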


(04:57):
So let's assume this is a developer, or even an SRE, a site reliability engineer, who supports ROSA and manages the cluster for you. For the default deployment, both will access your cluster through the internet, and they'll make use of a series of load balancers that come pre-built with your cluster. The first load balancer is your external/internal API Network Load Balancer, which gives access to your control plane. The second load balancer we call the SRE API Elastic Load Balancer, and this is what our SRE team uses to manage your cluster. For end-user application access, we make use of an external/internal Elastic Load Balancer, and this talks directly to your router layer on the infrastructure nodes.

(07:09):
And we also have a fourth load balancer, for SRE console access, which SREs likewise use to manage your cluster. Internally, your cluster nodes communicate with each other using the internal Network Load Balancer.
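One way to see those load balancers on a live cluster, sketched here with illustrative names: list them from AWS, and inspect the default router service that fronts application traffic inside OpenShift.

```bash
# Network/application load balancers provisioned for the cluster
aws elbv2 describe-load-balancers \
  --query "LoadBalancers[].{Name:LoadBalancerName,Type:Type,Scheme:Scheme}"

# Classic Elastic Load Balancers, if the cluster uses them
aws elb describe-load-balancers \
  --query "LoadBalancerDescriptions[].{Name:LoadBalancerName,Scheme:Scheme}"

# The default router lives in openshift-ingress as a LoadBalancer-type service
oc -n openshift-ingress get svc router-default
```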


(07:46):
So from a fundamental perspective, this is what your cluster will look like. We do not recommend this for a production-grade deployment. For production, we highly recommend that you use a Multi-AZ deployment, where you're going to have three control plane nodes, one in each AZ; three infra nodes, one in each AZ; and at least three worker nodes, one in each AZ. This helps you make use of the high availability and resiliency of the AWS cloud.
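A minimal sketch of that recommended Multi-AZ deployment with the rosa CLI follows; the cluster name and region are placeholders, and flag names can vary slightly between CLI versions.

```bash
# Multi-AZ: control plane and infra nodes are spread one per AZ, and the
# worker (compute) count must be a multiple of three, at least one per AZ
rosa create cluster \
  --cluster-name my-prod-cluster \
  --region us-east-1 \
  --multi-az \
  --compute-nodes 3
```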