Red Hat OpenShift Service on AWS (ROSA) explained

Learn about specific use cases, detailed deep dives, and specialized strategies to get the most out of Red Hat OpenShift Service on AWS for your business needs through this series of videos. 

You can also watch this interactive demonstration on how to install ROSA, from creating an account to deploying applications.


App modernization for administrators through Red Hat OpenShift GitOps

8 mins

Ryan Niksch (AWS) and Thatcher Hubbard (Red Hat) discuss OpenShift GitOps, which provides administrators and infrastructure teams the same benefits as application owners.

Feel like trying out some configuration options yourself? Visit our Developer Sandbox, where you can experiment with containerization and cloud-native development with a 30-day, no-cost trial.

 

Ryan Niksch (00:01):
Greetings. My name is Ryan Niksch. I am a Principal Solutions Architect with Amazon Web Services. Joining me today is Thatcher from Red Hat. Thatcher, say hi.


Thatcher Hubbard (00:10):
Hi, I'm Thatcher Hubbard. I'm with the Managed OpenShift Black Belt team at Red Hat.


Ryan Niksch (00:15):
Thatcher, I've been working quite a bit with OpenShift customers on AWS, more specifically managed OpenShift. So if you look at something like the Red Hat OpenShift Service on AWS (ROSA), many of my customers are managing their application workloads, their container-based solutions, through a CI/CD process: pipelines, automated delivery. There is a growing trend to handle operational configurations, so cluster configurations, configuration of add-ons, tracking state, through a similar process. What is Red Hat bringing to customers to help in that sort of space?

 

What is GitOps?


Thatcher Hubbard (01:00):
Okay. Well it starts with the GitOps operator, the Red Hat GitOps operator, which is an add-on itself. And I want to note, it is an operator, which means it instantiates other instances of the workload it manages, typically tied to a namespace. But yeah, so we start with the GitOps operator. Well, let's start with kind of your original use case for the GitOps operator, which is-


Ryan Niksch (01:31):
Well, before you get there, under the hood of this operator, the upstream magic of this with some secret sauce added to it is Argo.


Thatcher Hubbard (01:39):
It is Argo. It is very much Argo-


Ryan Niksch (01:42):
But this is not you managing your own cut of Argo. This is a packaged up, simplified-


Thatcher Hubbard (01:48):
It's Argo with the integration work done by Red Hat and supported by Red Hat. So yes, thank you for calling that out. So let's talk about your first use case, which is?

 

Application development

 

Ryan Niksch (02:01):
Applications.


Thatcher Hubbard (02:01):
Yeah, application development.


Ryan Niksch (02:03):
And I think this is the bit that a lot of people are already familiar with or doing something similar to this.


Thatcher Hubbard (02:07):
But I just want to set the stage and make sure folks get an opportunity to know precisely what it is we're talking about. So we got a Git repo over here. I'm okay at drawing that icon and we've got a developer, or most likely a lot of developers, and they push code to this repo, push it to branches, they PR it into a specific branch. And when that happens-


Ryan Niksch (02:32):
Argo is going to be monitoring this and responding to that. So whenever there's a change in the repo, and I'm assuming Argo would connect to OpenShift's API endpoint and push those configurations.


Thatcher Hubbard (02:45):
Right, Argo watches the repo, it pulls it actually, watches the repo, and when a change happens... that repo just holds OpenShift manifests. It's all about manipulating OpenShift resources through the API, so it will just apply it. And so if we have a little deployment over here with a few pods in it, if something changed in the deployment resource in that repo, Argo would go ahead, talk to the API and redeploy it just like you would manually. Except you don't have to do it; it's an automated process that's managed by Argo. So yeah, that's kind of our original use case. But yes, moving on to your second use case, which is managing the cluster configuration itself. And there are a lot of things that, I guess I would say, go under Day 2 operations, things that are very important, services that serve your developers on the cluster. Logging being kind of first and foremost. Thank you, Ryan. Logging's a very important one. There's a variety of things that come directly from Red Hat; the CSI Secrets operator is a very important one.
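The flow Thatcher describes, an Argo instance watching a repo of manifests and applying changes through the API, can be sketched as a minimal Argo CD Application resource. The repo URL, paths, and names here are hypothetical placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                    # hypothetical application name
  namespace: openshift-gitops     # namespace managed by the GitOps operator
spec:
  project: default
  source:
    repoURL: https://git.example.com/team/my-app-manifests.git  # hypothetical repo
    targetRevision: main          # the branch developers PR into
    path: deploy                  # directory holding the Deployment and Service manifests
  destination:
    server: https://kubernetes.default.svc   # the cluster Argo runs on
    namespace: my-app
  syncPolicy:
    automated:                    # apply changes as soon as the repo changes
      prune: true                 # remove resources deleted from the repo
      selfHeal: true              # revert manual drift back to the repo's state
```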

 

Logging


Ryan Niksch (03:58):
If you're talking CSIs, there's a very broad collection of operators available. Third-party solutions: monitoring, logging, security.


Thatcher Hubbard (04:10):
Right. External DNS being one example, something that's commonly deployed on an OpenShift cluster to ease management.


Ryan Niksch (04:16):
This could be pretty much anything that you're able to script. Any configuration change that you're doing on the cluster or the add-ons onto the clusters. I'm going to add add-ons over there. So if you could script that, you could then manage that configuration through-

 

Cluster state


Thatcher Hubbard (04:37):
Anything you can manipulate via the API. Because again, what's held in these Git repos, and I'm going to draw another one in here just to represent this. This would be your cluster state. It's your source of truth for what's configured on the cluster at any time. And the same thing happens. You'd have an Argo here; these would both be instantiated and managed by the operator. Argo would pull that repo and if there was a change to one of the resources inside of it, say the configuration of the logging and monitoring stack, it would go ahead and reconcile that and it would get updated. So if you are adding or changing something, it's as simple as-
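The same mechanism works for cluster state: a second, operator-managed Argo instance watches a configuration repo that acts as the source of truth. A minimal sketch, with a hypothetical repo URL and path:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-config            # hypothetical name for the cluster-state app
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/cluster-config.git  # hypothetical source-of-truth repo
    targetRevision: main
    path: config                  # holds logging, monitoring, operator subscriptions, etc.
  destination:
    server: https://kubernetes.default.svc
    namespace: openshift-gitops
  syncPolicy:
    automated:
      selfHeal: true              # drift from the repo is reverted automatically
```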


Ryan Niksch (05:12):
So two things are standing out for me over here. The first is, this is not upstream Argo; Red Hat has taken this, made it fully supported, and added a little secret sauce to it. The second thing is, for me personally, it's that perspective of desired state. That I have a single source of truth. This is what my configurations are. But where this becomes more exciting to me is none of the customers I'm working with have one OpenShift cluster. They have non-production, they've got production, they are hybrid, so they have on-premises, they're on the cloud. And they may in many cases be hybrid cloud users where they're using more than one cloud. So how do we take this and blanket apply it to say, a fleet of OpenShift implementations that may include self-managed OpenShift as well as-

 

Advanced Cluster Manager


Thatcher Hubbard (06:06):
The managed OpenShift products? Well, Red Hat Advanced Cluster Manager is the answer to that question, or the best answer to that question in my opinion. And Advanced Cluster Manager essentially extends the functionality that we're talking about here at a cluster level, where you can manage the services that run on it, not necessarily the individual apps. That's up to you, too, if you want to do that. You may have a reason to keep that different based on what's on the cluster. But for cluster-wide operators and configuration, when a cluster gets added to Advanced Cluster Manager, you can have it set up so that Advanced Cluster Manager will instruct the cluster to install the GitOps operator and then use the same repo, because this does support a certain level of templating.


(06:53):
The same repo to do that sort of Day 2 configuration. So say you have consistent logging and monitoring: you're on AWS, you're forwarding logs to CloudWatch. Well, your clusters may be in different regions, and so that's something that you'd want to have templated. But you do want a consistent naming scheme for the logs that get pushed to CloudWatch. That configuration would be in your repo, you would template it. When a cluster was added to Red Hat ACM, it would automatically get that configuration.
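The templated CloudWatch forwarding described here could live in the config repo as something like the following ClusterLogForwarder. This is a sketch based on the OpenShift logging operator; the group prefix, secret name, and region are hypothetical, and the region is the value you would template per cluster:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - name: cw
      type: cloudwatch
      cloudwatch:
        groupBy: logType
        groupPrefix: prod-fleet       # hypothetical: the consistent naming scheme for log groups
        region: us-east-1             # per-cluster value you would template via ACM
      secret:
        name: cw-credentials          # hypothetical secret holding AWS credentials
  pipelines:
    - name: to-cloudwatch
      inputRefs: [application, infrastructure]
      outputRefs: [cw]
```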


Ryan Niksch (07:23):
There's another thing I want to call out here. If you have a net new cluster, so you've just gone through a provisioning process, you can configure Advanced Cluster Manager to actually install and configure the GitOps operator and then push configuration down. But you can also take a cluster with an existing GitOps operator and-


Thatcher Hubbard (07:41):
Join it.


Ryan Niksch (07:41):
Register it, and then the RHACM environment actually becomes authoritative over it?


Thatcher Hubbard (07:48):
Right. It is fleet management. That is the intent behind ACM: it is fleet management. And that's a really key part. We talk about policy management. Policies, like anything else on Red Hat OCP, are just YAML that the API understands. So.
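As a sketch of "policies are just YAML," an ACM Policy that enforces installation of the GitOps operator on managed clusters might look like the following. The names, namespace, and channel are hypothetical assumptions, not values from this discussion:

```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: install-gitops-operator     # hypothetical policy name
  namespace: rhacm-policies         # hypothetical policy namespace on the hub
spec:
  remediationAction: enforce        # make it so, rather than just report
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: gitops-subscription
        spec:
          remediationAction: enforce
          severity: medium
          object-templates:
            - complianceType: musthave   # the resource must exist on each managed cluster
              objectDefinition:
                apiVersion: operators.coreos.com/v1alpha1
                kind: Subscription
                metadata:
                  name: openshift-gitops-operator
                  namespace: openshift-operators
                spec:
                  channel: latest        # hypothetical channel
                  name: openshift-gitops-operator
                  source: redhat-operators
                  sourceNamespace: openshift-marketplace
```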


Ryan Niksch (08:04):
I get all of the benefits that developers have had with managing their application fleets for my administrative operational side of managing OpenShift across a fleet, irrespective of which flavor of OpenShift as well as where it exists.

 

Learn more about modernizing applications by reading our downloadable ebook.
