Further modernizing application workloads with ROSA and ACK
In this video, Red Hat Manager of OpenShift Black Belt Thatcher Hubbard and Amazon Web Services Principal Solutions Architect Ryan Niksch give insight into how customers can achieve greater agility with their application workloads by utilizing AWS Controllers for Kubernetes (ACK) integrated with Red Hat OpenShift Service on AWS (ROSA).
Feel like trying out some configuration options yourself? Visit our Developer Sandbox, where you can experiment with containerization and cloud-native development with a 30-day, no-cost trial.
Ryan Niksch (00:00):
Greetings. My name is Ryan Niksch. I'm a Principal Solutions Architect with Amazon Web Services. Joining me here again is Thatcher from Red Hat. Thatcher, say hi.
Thatcher Hubbard (00:10):
Hi. I'm Thatcher, I'm a manager of OpenShift Black Belt with Red Hat. It's my job to help customers figure out how to transition from legacy on-premises OpenShift to a managed service in the cloud.
Ryan Niksch (00:25):
Right. You said “legacy”. So that ties in very nicely and neatly with my problem today. For the last eight years, I’ve been talking to customers. Everybody has this desire for greater agility, to move faster. They're modernizing their businesses. One piece of that modernization is moving towards adopting a container solution, changing their development strategy. And then they move their workloads into that environment. It's typically OpenShift. Nowadays, it's modernizing to a more managed approach: managed OpenShift on AWS, which is Red Hat OpenShift Service on AWS, called ROSA. But then customers come to me and they take that next step of evolution. How do I take native AWS services, the AWS services that are built from the ground up to be scalable, resilient, secure, and managed for me? And how do I take advantage of those to complement the workloads I already have in OpenShift? And immediately I'm thinking of things like databases, queuing mechanisms, NoSQL environments.
Thatcher Hubbard (01:33):
Dynamo, even S3.
Ryan Niksch (01:34):
Here's the spanner in the works. The one thing that everybody's asking for: "How do I do that without needing to context-switch between something like OpenShift and an AWS console? How do I stop logging in and out of different environments? How do I-"
Mapping a solution
Thatcher Hubbard (01:51):
Having to contact maybe a second team that's responsible for provisioning those things? Yeah, reducing the friction for adopting those cloud native services that are available on the platform.
Ryan Niksch (02:01):
It's agile. How do I take that shift to the left and really enable app owners that are using OpenShift? There are some industry buzzwords in there, I'll concede that.
Thatcher Hubbard (02:09):
Okay. Well, let's start by drawing ourselves a notional ROSA cluster.
Ryan Niksch (02:16):
So this is managed OpenShift. It's OpenShift, the only difference is you've got a Red Hat SRE team managing this for you.
Thatcher Hubbard (02:22):
It's low-drama OpenShift.
Ryan Niksch (02:23):
And we would typically get an app team. This is a group of developers, maybe some operator admins interacting with that OpenShift cluster or the CI/CD process around it.
Thatcher Hubbard (02:43):
They’re doing deployments. They're deploying the products they're building, or products that they support, on-cluster.
Ryan Niksch (02:47):
They're typically interacting with either Kubernetes directly, or they're using some sort of manifest file that they push into that OpenShift environment. One way to do this would be to have this person go into an AWS console, stand up an AWS service, RDS for example, then come back to OpenShift and create services and bindings. And it's a very clunky stitching together.
Thatcher Hubbard (03:17):
Right. And maybe after a little bit, they write themselves some bash scripts that they run against the AWS CLI to do it. There's still a lot of friction there. So there is an approach for this that keeps the developer entirely interacting with ROSA through the control plane. And what I'm going to go ahead and do here is just draw the Kubernetes API as a piece of this. Because of course, that's kind of the core of OpenShift. And what that API allows you to do is install custom resource definitions.
AWS Controllers for Kubernetes
Ryan Niksch (03:51):
I think I see where you're going. You're going to recommend AWS Controllers for Kubernetes.
Thatcher Hubbard (03:56):
I am. I am.
Ryan Niksch (03:56):
Okay. So we got ACK over here, and this is an operator framework that allows you to define AWS services and basically control and manage them from within OpenShift. Now, it's not a single product, it's not that you're installing one ACK controller, you're kind of installing a collection of different...
Thatcher Hubbard (04:17):
No, it's more of a gateway that allows you to interact with the AWS API, but using sort of this Kubernetes-native YAML that you're already used to. Most people who use OCP or Kubernetes are familiar with a Deployment, capital D. After installing ACK, the correct controllers, you could refer to a Bucket, capital B, that's an S3 bucket, or a Table, capital T, which is a DynamoDB table.
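To make that concrete, a minimal sketch of what such a manifest might look like with the ACK S3 controller installed (the object names and namespace here are illustrative, not from the discussion):

```yaml
# Hypothetical sketch: an S3 bucket declared as a Kubernetes object
# via the ACK S3 controller. Names are illustrative.
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: example-app-bucket
  namespace: example-app
spec:
  # The name the bucket will have in the AWS account
  name: example-app-bucket-demo
```

Applying this with `oc apply -f` asks the controller to create the bucket in AWS; deleting the object asks it to remove the bucket.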
Ryan Niksch (04:44):
What does this practically look like? You're going into OpenShift, you're going to go to OperatorHub, and you'll see an ACK controller for RDS, the relational database service.
Thatcher Hubbard (04:54):
Right. And each ACK controller is specific to an AWS product.
Ryan Niksch (05:00):
What would they typically be seeing? Some sort of queuing mechanism?
Thatcher Hubbard (05:02):
Oh, yes. S3, Amazon MQ, DynamoDB of course, a very common one. There are others. I know API Gateway is in there. Yeah, there's quite a number.
Ryan Niksch (05:17):
Once the operator is installed, you can continuously invoke it or interact with it to provision as well as bind that service back to your application running here in OpenShift?
Thatcher Hubbard (05:28):
Right. Right. So as an example, let's use RDS, a very common one. You would post a chunk of YAML that defines a DBInstance object to the Kubernetes API. The API itself would validate that request and then hand it to the controller that's running. And it's the controller's job to translate that into calls to the AWS API, so native AWS calls. Assuming that succeeds, that the correct permissions are there, and we'll talk about that in a minute here. The controller will then inject a secret back into the requesting namespace that contains the connection string: hostname, endpoint, and credentials to talk to that RDS instance. That allows you to consume that information in other pieces of the solution you've built, whether it's a deployment or a service, to connect those up.
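A rough sketch of the DBInstance manifest Thatcher describes posting to the API might look like this (field values and the referenced password secret are illustrative; consult the ACK RDS controller documentation for the full schema):

```yaml
# Hypothetical sketch: an RDS database declared through the ACK RDS
# controller. Values are illustrative.
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: example-db
  namespace: example-app
spec:
  dbInstanceIdentifier: example-db
  dbInstanceClass: db.t3.micro
  engine: postgres
  allocatedStorage: 20
  masterUsername: exampleadmin
  masterUserPassword:
    # Reference to a pre-created Secret holding the master password
    namespace: example-app
    name: example-db-password
    key: password
```

The controller reconciles this object into a real RDS instance in the AWS account and reports provisioning progress back in the object's status.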
Ryan Niksch (06:26):
So there's a couple of things that you mentioned over there. Inside OpenShift, there's a few constructs. There are the built-in secrets inside OpenShift. We're going to take the information, and if you're talking about RDS, we're talking about the RDS database endpoint. That would be one piece of information. The IAM credentials needed to authenticate to that. So those would be stored inside OpenShift as secrets and my applications would be able to… is binding the correct word?
Thatcher Hubbard (06:56):
I think that's fair. Yeah. People often use that term to describe binding a secret to a workload.
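Binding that injected secret to a workload might look like the following sketch, where an app reads the connection details as environment variables (the secret name and keys are illustrative, since the exact shape depends on the controller):

```yaml
# Hypothetical sketch: a Deployment consuming the connection Secret
# injected into the namespace. Secret name and keys are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
  namespace: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: registry.example.com/example-app:latest
          env:
            - name: DB_HOST
              valueFrom:
                secretKeyRef:
                  name: example-db-conn
                  key: host
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: example-db-conn
                  key: password
```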
Ryan Niksch (07:01):
We also have this construct of a Kubernetes service account or an OpenShift service account. I typically use them for identities or credentials that I'm linking to my applications. So if my app needed to interact with RDS, I may create a service account for that. And these are internal to OpenShift, whether it's managed or self-managed.
Thatcher Hubbard (07:24):
Right, they're native to the API.
Ryan Niksch (07:26):
Yeah. So the ACK platform ultimately will manifest an RDS instance or a-
Thatcher Hubbard (07:36):
Or a bunch of them.
Ryan Niksch (07:38):
A bunch of S3 buckets. And-
Thatcher Hubbard (07:40):
A DynamoDB table, a message queue.
Ryan Niksch (07:44):
... DB table. And these objects exist in the AWS account. They don't exist inside OpenShift. We're connecting from my application workload on OpenShift to those systems there. That has an interesting benefit. If I look at it, what we've done is we've now taken all of the storage that used to be persistent storage here, and we've made that...
Thatcher Hubbard (08:08):
Moved it into managed services. So again, that sort of second evolution that often happens as organizations move into the cloud, which is, they move the workloads that they have, they get them working, and then they start to adapt their architecture to take advantage of how easy it is to use these managed services to their best effect.
Ryan Niksch (08:28):
Very recently, well not very recently, ROSA gained something called STS support, where we can use the AWS Security Token Service to get least privilege as well as temporary credentials. So credential cycling.
Thatcher Hubbard (08:44):
More like a managed credential, rather than a long lived username and password.
Ryan Niksch (08:48):
Recently that support has been extended to ACK as well.
Thatcher Hubbard (08:52):
And it works.
Ryan Niksch (08:52):
So we now have STS capabilities on each of these controllers. The ACK operator for RDS now gets a policy specific to RDS.
Thatcher Hubbard (08:52):
True. Oh yes.
Ryan Niksch (09:04):
So again, least privilege. And we get that credential rotation really taking the benefit of what managed OpenShift has had for a little while now and bringing that into each of these operators. And I think as we see more and more operators being created, whether for AWS services or for other components of OpenShift, we're going to continue to see that least privilege, that temporary credential cycling manifest.
Thatcher Hubbard (09:29):
So less and less is required from a security perspective. Yeah. Worth noting too, regarding the policies to be attached: each of these ACK controllers has its own GitHub repository with documentation. One of the things required to be in that documentation is an example IAM policy to get you started, if you do want to adopt ACK and start using it.
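In STS mode, wiring an IAM role to a controller might be sketched like this: the controller's service account carries an annotation pointing at a role whose policy is scoped to that one service, so the pod receives short-lived credentials rather than a long-lived key. The names, namespace, and role ARN below are illustrative assumptions, not values from the discussion:

```yaml
# Hypothetical sketch: associating a least-privilege IAM role with the
# ACK RDS controller's service account so it uses STS temporary
# credentials. Role ARN, names, and namespace are illustrative.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ack-rds-controller
  namespace: ack-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/ack-rds-controller-role
```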
Ryan Niksch (09:56):
This is a really great way for customers to take that next step of evolution and really cut down on undifferentiated heavy lifting, not having to solve scaling, security, and resilience issues themselves.
Thatcher Hubbard (10:14):
Or even provisioning issues. Really. If a developer needs an RDS instance, they can get one.
Ryan Niksch (10:23):
Keep me honest: ACK does not support every single AWS service yet. It's ever-growing.
Thatcher Hubbard (10:29):
It does not. I know one that is near and dear to my own heart, that I know is not on the list is Kinesis. I'm half-tempted to write it myself at this point.
Ryan Niksch (10:39):
It's a very valid point. So ACK does provide a framework; there's nothing stopping customers from writing their own controllers. But what AWS and Red Hat are doing is gradually adding in more and more service support, and as it lands in ACK, these appear inside OpenShift, inside OperatorHub, as a growing list of AWS services to complement your workloads. Thatcher, it’s always fantastic having you here. Fun discussions, and-
Thatcher Hubbard (11:09):
Great to be here with you. Thank you.
Ryan Niksch (11:10):
... Thank you all for watching. Yeah.