Red Hat OpenShift Service on AWS (ROSA) explained

Learn about specific use cases, detailed deep dives, and specialized strategies to get the most out of Red Hat OpenShift Service on AWS for your business needs through this series of videos. 

You can also watch this interactive demonstration on how to install ROSA, from creating an account to deploying applications.

Adding extra security with AWS WAF, CloudFront and ROSA

10 mins

In this video, Red Hat OpenShift Black Belt member Paul Czarkowski and Amazon Web Services Principal Solutions Architect Ryan Niksch discuss how customers can combine AWS CloudFront, AWS WAF, and Red Hat OpenShift Service on AWS (ROSA) to add extra layers of security in front of their application workloads on OpenShift.

To learn more about applying Red Hat solutions to your business, please visit our Learning Hub.

 

Ryan Niksch (00:00):
Greetings. My name is Ryan Niksch. I am a principal solutions architect with Amazon Web Services. Joining me here today is Paul, who is from Red Hat and is a member of the managed OpenShift Black Belt team. Paul, say hi and give us a very brief description of your role at Red Hat.
 

Paul Czarkowski (00:20):
Hi. Yeah. My name is Paul Czarkowski. I focus on helping our customers with cloud services, specifically in this case Red Hat OpenShift Service on AWS, or ROSA.
 

Ryan Niksch (00:32):
Now, ROSA is managed OpenShift. So, it's everyday normal OpenShift, but there is an SRE team managing it for the customer.
 

Paul Czarkowski (00:42):
Yep.

AWS services integration with layers
 

Ryan Niksch (00:42):
Typically, when we deploy ROSA, it caters for ingress from a customer perspective to the API, the control plane, and your application workloads. What if I had a customer who wanted to use something like a custom domain name, or wanted to put something like a security or a caching layer in front of that? How would they go about facilitating the integration with other AWS services?
 

Paul Czarkowski (01:11):
Got it. So let's draw up a couple of AWS services. So we'll do AWS CloudFront and AWS WAF.
 

Ryan Niksch (01:23):
Okay. So CloudFront being our CDN, a caching service. Anything that comes through CloudFront gets cached there, so customers get a performance benefit for those objects that can be cached. WAF being a typical web application firewall, so really just adding an additional layer of security over and above what they're getting from OpenShift itself.
 

Paul Czarkowski (01:49):
Exactly.
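
For reference, here is a minimal sketch of that WAF layer using boto3: a WAFv2 web ACL scoped for CloudFront, with an AWS-managed rule group as a starting point. The ACL name, metric names, and rule group choice are assumptions for illustration, not values from the video.

```python
# Minimal sketch: create a WAFv2 web ACL that a CloudFront distribution can use.
# CLOUDFRONT-scoped web ACLs must be created in the us-east-1 region.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

response = wafv2.create_web_acl(
    Name="rosa-apps-waf",                 # placeholder name
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},          # allow by default; rules decide what to block
    Rules=[
        {
            "Name": "aws-common-rules",
            "Priority": 0,
            # AWS-managed baseline rules, used here purely as an example
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesCommonRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "aws-common-rules",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "rosa-apps-waf",
    },
)

# The web ACL's ARN is what gets associated with the CloudFront distribution later.
web_acl_arn = response["Summary"]["ARN"]
```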

Creating custom domains
 

Ryan Niksch (01:49):
All right. These services, you deploy them into AWS, but they're going to require a different set of load balancers and a different set of naming conventions. How do you create a custom domain? Is there anything we need to add to the OpenShift cluster to facilitate that?
 

Paul Czarkowski (02:07):
Yeah, that's a great question. So we have an aptly named custom domain operator right here. And so you feed it a few things.
 

Ryan Niksch (02:18):
That's an add-on into OpenShift? I'm assuming from within OpenShift you just go into “add-ons” and select “operator”.
 

Paul Czarkowski (02:24):
So it's actually part of ROSA itself. So you don't have to add this in, it's just in there by default.
 

Ryan Niksch (02:29):
Oh, okay. Great. And what do we need to pass it? I'm assuming at the very least some certificates and a scope of sorts.
 

Paul Czarkowski (02:37):
Yeah, exactly. There are three things. You have a wildcard DNS, so we'll just show that, and you usually configure that in Route 53. You have a TLS certificate that secures that wildcard DNS. And then you do have a scope, which can be either internal or external; because we're exposing this to the internet, you would do external.
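
For readers following along, a minimal sketch of what those three inputs might look like as a CustomDomain resource created with the Kubernetes Python client. The domain, namespace, and secret names are placeholder assumptions; check the ROSA documentation for the exact schema your cluster version supports.

```python
# Minimal sketch: a CustomDomain resource for the ROSA Custom Domain Operator.
# The wildcard domain, TLS secret, and namespace below are placeholders.
from kubernetes import client, config

config.load_kube_config()  # assumes you are already logged in to the ROSA cluster

custom_domain = {
    "apiVersion": "managed.openshift.io/v1alpha1",
    "kind": "CustomDomain",
    "metadata": {"name": "apps-example"},
    "spec": {
        # 1. The wildcard DNS you will point at the new ingress (configured in Route 53)
        "domain": "apps.example.com",
        # 2. A TLS secret holding the certificate that secures that wildcard domain
        "certificate": {"name": "apps-example-tls", "namespace": "my-project"},
        # 3. The scope: External, because this ingress is exposed to the internet
        "scope": "External",
    },
}

client.CustomObjectsApi().create_cluster_custom_object(
    group="managed.openshift.io",
    version="v1alpha1",
    plural="customdomains",
    body=custom_domain,
)
```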
 

Ryan Niksch (03:07):
Assuming that's going to generate another load balancer.
 

Paul Czarkowski (03:09):
Exactly.
 

Ryan Niksch (03:10):
So this is part of OpenShift's integration with AWS. You don't have to manually create the load balancers. The OpenShift API, machine sets, and installer working in the background will create those objects for you.
 

Paul Czarkowski (03:22):
Yeah.
 

Ryan Niksch (03:23):
And then these certificates and things are already added to that AWS Elastic Load Balancer, so you don't have to manually import them.
 

Paul Czarkowski (03:31):
Exactly.
 

Ryan Niksch (03:32):
Okay.
 

Paul Czarkowski (03:32):
Right. So now what we need to do is wire everything together. So this here will have a host name, and in CloudFront here, you use that host name as the destination endpoint.
 

Ryan Niksch (03:45):
As if you had a CloudFront origin over here. And you have to fill in all of those elements.
 

Paul Czarkowski (03:49):
Exactly. There are a few things. You have to make sure you pass host headers through, and a few other settings, to make sure that the DNS name you're going to use to access your apps is preserved the whole way through. And then you also tell CloudFront to use AWS WAF.
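
A minimal sketch of that CloudFront wiring with boto3 is below. The origin hostname, alias, certificate ARN, and web ACL ARN are all placeholder assumptions; the important parts are forwarding the Host header and attaching the WAF web ACL.

```python
# Minimal sketch: a CloudFront distribution whose origin is the load balancer
# created by the Custom Domain Operator, with Host-header forwarding and a
# WAFv2 web ACL attached. All names, hostnames, and ARNs are placeholders.
import time
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),
        "Comment": "Fronts the ROSA custom domain ingress",
        "Enabled": True,
        "Aliases": {"Quantity": 1, "Items": ["app.apps.example.com"]},
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "rosa-custom-domain",
                    # Host name reported by the CustomDomain resource's load balancer
                    "DomainName": "abc123.elb.us-east-1.amazonaws.com",
                    "CustomOriginConfig": {
                        "HTTPPort": 80,
                        "HTTPSPort": 443,
                        "OriginProtocolPolicy": "https-only",
                    },
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "rosa-custom-domain",
            "ViewerProtocolPolicy": "redirect-to-https",
            # Forward the Host header so the OpenShift router sees the app's DNS name
            "ForwardedValues": {
                "QueryString": True,
                "Cookies": {"Forward": "all"},
                "Headers": {"Quantity": 1, "Items": ["Host"]},
            },
            "MinTTL": 0,
        },
        # TLS certificate for the viewer-facing domain, held in ACM in us-east-1
        "ViewerCertificate": {
            "ACMCertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/EXAMPLE",
            "SSLSupportMethod": "sni-only",
            "MinimumProtocolVersion": "TLSv1.2_2021",
        },
        # Attach the WAFv2 web ACL so requests are inspected before they reach the origin
        "WebACLId": "arn:aws:wafv2:us-east-1:123456789012:global/webacl/rosa-apps-waf/EXAMPLE",
    }
)
```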
 

Ryan Niksch (04:03):
So this is interesting, because there are multiple layers of security here. I'm assuming that end users' connections are not going to go directly to the Load Balancer; they're going to be forced to go through these different layers. They'll only be able to talk to WAF, WAF will only be able to hit CloudFront, and only CloudFront will be able to interact with that Load Balancer, as degrees of separation.
 

Paul Czarkowski (04:24):
That's right.
 

Ryan Niksch (04:24):
Okay.
 

Paul Czarkowski (04:24):
That's right. So you take the TLS certificate you provided here and you also provide it to CloudFront and you use the Amazon Certificate Manager to do that.
 

Ryan Niksch (04:34):
Okay. So we've got Amazon Certificate Manager or ACM here, not to be confused with Red Hat’s Advanced Cluster Manager. And we are adding that as a TLS endpoint thing.
 

Paul Czarkowski (04:45):
Right. So now that we have that, we can start actually wiring things together. And so we have in-
 

Ryan Niksch (04:50):
You did mention Route 53 as well for the DNS records, so I think it makes sense to add that in.
 

Paul Czarkowski (04:58):
So we add that.
 

Ryan Niksch (04:59):
Route 53 hosted zone. There are technically two hosted zones here: there's the hosted zone that ROSA creates itself when you provision ROSA, and then this is the customer's own company domain.
 

Paul Czarkowski (05:12):
Correct.
 

Ryan Niksch (05:13):
We're talking about a custom domain here.
 

Paul Czarkowski (05:13):
Right.
 

Ryan Niksch (05:15):
And we've got the AWS Certificate Manager potentially coming into the picture here. But I don't think customers strictly need to use AWS Certificate Manager; there are other good options here.
 

Paul Czarkowski (05:28):
That's right. They can create certificates using whatever certificate system they like to use. They just need to make sure they put it in here and then also here, and you put it in here by adding it to ACM.
 

Ryan Niksch (05:40):
Okay. So this could be Amazon Certificate Manager. They could use their own sort of public key infrastructure if they've got something like a Microsoft Active Directory.
 

Paul Czarkowski (05:51):
Right. Exactly.
 

Ryan Niksch (05:52):
Certificate driven platform.
 

Paul Czarkowski (05:54):
Let's Encrypt.
 

Ryan Niksch (05:55):
Third party, Let's Encrypt.
 

Paul Czarkowski (05:56):
Let's Encrypt is one I use a lot just because it's really easy to do and you still get the secure public certificates.
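
As an aside, a certificate issued outside of ACM, such as a Let's Encrypt certificate, can be imported into ACM so that CloudFront can use it. A minimal boto3 sketch, with placeholder file names following certbot's usual layout:

```python
# Minimal sketch: import an externally issued certificate into ACM for CloudFront.
# CloudFront requires the certificate to live in us-east-1; file names are placeholders.
import boto3

acm = boto3.client("acm", region_name="us-east-1")

with open("cert.pem", "rb") as cert, open("privkey.pem", "rb") as key, open("chain.pem", "rb") as chain:
    response = acm.import_certificate(
        Certificate=cert.read(),
        PrivateKey=key.read(),
        CertificateChain=chain.read(),
    )

# Reference this ARN in the CloudFront distribution's ViewerCertificate settings.
print(response["CertificateArn"])
```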
 

Ryan Niksch (06:03):
One thing to note with ACM: you need to export the public and private certificates to import them into this operator. So ACM tends to work better for external or public-facing implementations.
 

Paul Czarkowski (06:19):
Exactly.
 

Ryan Niksch (06:20):
In private environments, you're more likely to see Let's Encrypt or a PKI.
 

Paul Czarkowski (06:25):
That's right.

Getting application workload inside OpenShift
 

Ryan Niksch (06:26):
What is the flow for an end user going through all of this to get into their application workload inside OpenShift?
 

Paul Czarkowski (06:35):
Right, so let's just kind of show that. So you have your end user here who is the spitting image of you.
 

Ryan Niksch (06:35):
Thank you.
 

Paul Czarkowski (06:42):
And they'll try to access something via their web browser. So it'll make a request to DNS. And that DNS has a CNAME pointing to the CloudFront. So they'll then send their traffic over to CloudFront. CloudFront knows to send it to WAF. So it will send it to WAF. The AWS WAF will inspect it and, if everything is okay, it will then send it back. And then CloudFront will send it down to the ELB down here. And this is now where it's starting to hit your ROSA cluster itself.
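
The DNS step in that flow might look something like the following boto3 call; the hosted zone ID, record name, and CloudFront domain name are placeholders:

```python
# Minimal sketch: a CNAME in the customer's own Route 53 hosted zone that points
# the application host name at the CloudFront distribution.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE",  # the customer's hosted zone, not the one ROSA creates
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.apps.example.com",
                    "Type": "CNAME",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": "d111111abcdef8.cloudfront.net"}],
                },
            }
        ]
    },
)
```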
 

Ryan Niksch (07:15):
That's a whole new ingress controller inside OpenShift.
 

Paul Czarkowski (07:17):
That's right.
 

Ryan Niksch (07:18):
So this AWS Load Balancer is forwarding to a router layer inside OpenShift itself, and that in turn is forwarding to the actual pods of the workload.
 

Paul Czarkowski (07:27):
Right. So in the cluster you'll have a service, and that service will route to the set of pods that are set up for your application.
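
For completeness, a minimal sketch of that in-cluster Service using the Kubernetes Python client; the namespace, name, label selector, and port are placeholder assumptions:

```python
# Minimal sketch: the Service the OpenShift router forwards to, selecting the
# application pods by label. All names, labels, and ports are placeholders.
from kubernetes import client, config

config.load_kube_config()

client.CoreV1Api().create_namespaced_service(
    namespace="my-project",
    body=client.V1Service(
        metadata=client.V1ObjectMeta(name="my-app"),
        spec=client.V1ServiceSpec(
            selector={"app": "my-app"},          # matches the pods' labels
            ports=[client.V1ServicePort(port=8080, target_port=8080)],
        ),
    ),
)
```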
 

Ryan Niksch (07:36):
Essentially that's building a bridge between the networking inside AWS, the VPC itself, and the SDN, the software-defined network internally in OpenShift, correct?
 

Paul Czarkowski (07:47):
That's exactly right, yes.
 

Ryan Niksch (07:50):
Okay. Do we need to facilitate any sort of return here? It's essentially the same path. Anything that's accessed there goes here, gets cached in CloudFront, and then if a customer had to request the same information again, they would just feed from that cache.
 

Paul Czarkowski (08:06):
Exactly. So the next request, if it's the same, it's just that.
 

Ryan Niksch (08:10):
Yeah, so we get a performance break here, we get additional security, and we are stitching together a lot of the benefits of AWS with OpenShift. We're talking about ROSA here. So if it's managed OpenShift, is this significantly different from other OpenShift implementations such as self-managed OCP, for argument's sake?
 

Paul Czarkowski (08:31):
It's roughly the same. You don't get the Custom Domain Operator in self-managed OpenShift, so you would create an ingress controller directly. But because ROSA wants to help you with your uptime and your resilience, the ROSA tooling manages the ingress via the Custom Domain Operator so that the SREs and the automation services can keep that ingress healthy.
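
On self-managed OpenShift, that would mean creating an IngressController resource directly. A minimal sketch with the Kubernetes Python client, using placeholder names; verify the fields against the OpenShift documentation for your version:

```python
# Minimal sketch: a custom IngressController on self-managed OpenShift, playing
# the role the Custom Domain Operator automates on ROSA. Names are placeholders.
from kubernetes import client, config

config.load_kube_config()

ingress_controller = {
    "apiVersion": "operator.openshift.io/v1",
    "kind": "IngressController",
    "metadata": {"name": "custom-apps", "namespace": "openshift-ingress-operator"},
    "spec": {
        "domain": "apps.example.com",                        # wildcard apps domain
        "defaultCertificate": {"name": "apps-example-tls"},  # TLS secret in openshift-ingress
        "endpointPublishingStrategy": {
            "type": "LoadBalancerService",
            "loadBalancer": {"scope": "External"},
        },
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="operator.openshift.io",
    version="v1",
    namespace="openshift-ingress-operator",
    plural="ingresscontrollers",
    body=ingress_controller,
)
```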
 

Ryan Niksch (09:01):
Okay, so same building blocks, just a little bit more of a manual process to get there.
 

Paul Czarkowski (09:05):
Right.
 

Ryan Niksch (09:06):
It's not changing the requirements of what needs to be put into it.
 

Paul Czarkowski (09:11):
That's right.
 

Ryan Niksch (09:11):
From an SRE perspective, none of this really touches the SREs managing it because they're not accessing the applications.
 

Paul Czarkowski (09:22):
Exactly.
 

Ryan Niksch (09:23):
They're still coming in and hitting the API endpoint of that cluster for their day-to-day operational aspects. They're monitoring all of those break-fix elements.
 

Paul Czarkowski (09:33):
Exactly. That's right.
 

Ryan Niksch (09:35):
Paul, thank you very much for this. As always, it is great having you here and thank you for joining us.
 

Paul Czarkowski (09:42):
Thanks a lot.
