Red Hat OpenShift Service on AWS (ROSA) explained

Learn about specific use cases, detailed deep dives, and specialized strategies to get the most out of Red Hat OpenShift Service on AWS for your business needs through this series of videos. 

You can also watch this interactive demonstration on how to install ROSA, from creating an account to deploying applications.


ROSA logging options

14 mins

Ryan Niksch (AWS) and Paul Czarkowski (Red Hat) explore the various options for logging and log forwarding that customers can make use of in Red Hat OpenShift Service on Amazon Web Services (ROSA). This will touch on built-in options, third-party solutions, and AWS CloudWatch.

To view this video within an in-depth learning path, please visit the Deploying an Application on ROSA page. 

 

Ryan Niksch (00:01):
Greetings. My name is Ryan Niksch. I'm a principal solutions architect with Amazon Web Services. I am joined here today by Paul from Red Hat. Paul, give us a shout out on your role and say hi.

Paul Czarkowski (00:12):
Yeah, I'm Paul Czarkowski. I'm a Managed OpenShift Black Belt at Red Hat and we focus on some of our cloud services. Specifically here, we're talking about Red Hat OpenShift Service on AWS, or ROSA.

ROSA logging options

 

Ryan Niksch (00:26):
ROSA is the managed version of OpenShift. It is the OpenShift that customers are investing in and love, but there is an SRE team managing it, reducing the undifferentiated heavy lifting. Paul, today I have an interesting challenge for you, and it's not ROSA-specific. It applies to managed as well as self-managed OpenShift, and that is logging. What are the different kinds of logs? How can we get that logging environment up and running? What are the different options available to customers? Maybe we can flesh some things out here. Now, with OpenShift, I think it makes sense to have a look at what logs are available that could be exposed.

Paul Czarkowski (01:07):
Yes, so OpenShift generates three types of logs and those are audit, infrastructure, and application.
 

Ryan Niksch (01:26):
Audit, any action that is being done by anybody with privileges. That could be, in the case of ROSA, the customer themselves, but it would also encapsulate SRE team members doing management on the customer's behalf?
 

Paul Czarkowski (01:41):
Exactly. That's correct.
 

Ryan Niksch (01:42):
Infrastructure is any sort of logging that pertains to the cluster itself as it scales, health of the operators, those sort of functions.
 

Paul Czarkowski (01:51):
Yes, that's right.
 

Ryan Niksch (01:52):
App, I'm assuming the actual application workloads?
 

Paul Czarkowski (01:57):
Right. As you start running applications in the cluster, it will write those application logs so that your app developers can see their logs when they want to troubleshoot issues.
 

Ryan Niksch (02:07):
That's a bit more conscious. That is, the app developer would need to put something into the app to, say, generate logging, and if that's there, it would collect that?
 

Paul Czarkowski (02:16):
Yes. As long as the application is logging to standard out, it will get captured by the OpenShift logging system.
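
A quick illustration of that point: a container that writes to standard out needs no extra instrumentation for its logs to be collected. A minimal sketch, where the pod name, namespace, and message are purely illustrative, not from the video:

apiVersion: v1
kind: Pod
metadata:
  name: stdout-demo          # hypothetical name
  namespace: my-app          # hypothetical namespace
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal
    # Anything written to stdout/stderr is picked up by the cluster's
    # node-level log collector with no changes to the application.
    command: ["/bin/sh", "-c", "while true; do echo hello from stdout; sleep 5; done"]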
 

Ryan Niksch (02:23):
Okay.
 

(02:23):
All right, cool. Do Prometheus, Grafana, those sort of things overlap the app log or do they complement that app log?
 

Paul Czarkowski (02:33):
They complement it. Prometheus and Grafana are for the metrics side of things, whereas the logging system is for actual logs and events.
 

Ryan Niksch (02:42):
Okay, right. Cool. The reason I ask is, a lot of customers have a bit of a misconception that they could replace the one with the other, and really that's not the case, so you need both.

Paul Czarkowski (02:51):
Yeah, definitely.
 

Ryan Niksch (02:52):
Once you have logging enabled, OpenShift collects these logs. Historically, OpenShift would have a built-in logging and monitoring environment that allowed you to visualize the logs and allowed you to do analytics on that. There was some Elasticsearch under the hood. There was some Kibana under the hood. That's still there in a modern-day context.

Cluster log forwarding
 

Paul Czarkowski (03:15):
That's still there. By default, when you spin up a cluster, the logs exist, but they're not in a viewable format. You can't actually get to them unless you're logging into the machines directly, which you shouldn't be doing. The first thing you do is use the OpenShift logging operator. Let me draw that up. We'll just do "OLO" for short, and that exposes two new resources, cluster logging and cluster log forwarding; we'll abbreviate those for short. Cluster logging will then spin up an Elasticsearch cluster, so Elasticsearch plus Kibana. It can also spin up a Loki cluster, which is–
 

Ryan Niksch (04:09):
So we've got Elasticsearch, we've got Kibana, and Loki? Or is it 'or Loki'?
 

Paul Czarkowski (04:18):
And/or Loki. So you can do both. Usually you make a choice between one or the other. So Elastic is, I won't quite say legacy, but it's been there for quite some time.
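
To make the whiteboard concrete: with the logging operator installed, the in-cluster stack is declared through a ClusterLogging resource. A hedged sketch of what that might look like; the node count, storage class, and size are illustrative, not prescriptive:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance               # the operator watches for this name
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch        # the in-cluster Elasticsearch
    elasticsearch:
      nodeCount: 3             # illustrative sizing
      storage:
        storageClassName: gp3  # illustrative storage class
        size: 200G
      redundancyPolicy: SingleRedundancy
  visualization:
    type: kibana               # the Kibana visualization layer
    kibana:
      replicas: 1
  collection:
    logs:
      type: fluentd            # node-level log collector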
 

Ryan Niksch (04:31):
And that's a search filtering-
 

Paul Czarkowski (04:33):
Exactly.
 

Ryan Niksch (04:33):
-mechanism where you can really define what logs are interesting to me, and control who can access them to some degree.
 

Paul Czarkowski (04:41):
Right.
 

Ryan Niksch (04:41):
Kibana is really the visualization layer of that.
 

Paul Czarkowski (04:43):
Exactly.
 

Ryan Niksch (04:45):
Loki's a little bit different, and we'll come back to Loki in a second because there's a lot of fun with Loki in modern-day logging. Historically, I didn't always see customers enable the Elasticsearch and Kibana-type logging inside OpenShift. Many OpenShift 3 customers would take an external approach with their own Elasticsearch environment outside of OpenShift. It's the same building blocks, just external to the cluster. And in that case, I think the log forwarding comes in and forwards to those-
 

Paul Czarkowski (05:19):
Right.
 

Ryan Niksch (05:19):
External resources.
 

Paul Czarkowski (05:20):
So in any case, you actually use the cluster log forwarding to set up your forwarding to either the Elastic that you've created internally, or to an Elastic you have off-board. I always say to customers when I'm talking to them, whatever logging stack you currently use for the rest of your infrastructure, you can use that with OpenShift as well. So don't just, like, adopt what OpenShift's default is; bring what you do and we'll have a way to integrate with that. And usually-
 

Ryan Niksch (05:48):
Find what's most meaningful for your business.
 

Paul Czarkowski (05:50):
Exactly. And usually that integration happens at the cluster log forwarding. The other thing is, you mentioned bringing your own Elasticsearch. You can also bring in, say, Datadog or Splunk or Curator.
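
In practice, the forwarding Paul describes is declared in a ClusterLogForwarder resource. A rough sketch for an off-board Elasticsearch; the endpoint and secret name are hypothetical:

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: external-es
    type: elasticsearch
    url: https://elasticsearch.example.com:9200  # hypothetical external endpoint
    secret:
      name: es-credentials                       # hypothetical secret with auth/TLS material
  pipelines:
  - name: apps-to-external
    inputRefs:
    - application        # one of the three log types discussed earlier
    outputRefs:
    - external-es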
 

Ryan Niksch (06:04):
Third parties.
 

Paul Czarkowski (06:05):
And a lot of those are not through the cluster log forwarding, but are done by an operator that's maintained by that third party. So Datadog has an operator, Splunk has an operator, and I think IBM has an operator for Curator. And so you would bypass cluster log forwarding, deploy the operator through OperatorHub, and get your logs that way.

Third-party options
 

Ryan Niksch (06:31):
So I mean, there are a lot of third-party options here. I've listed some of the ones that you've mentioned: Datadog; Splunk, which has been around for a very long time and is very popular; likewise Curator.
 

(06:42):
IBM's Instana is a growing product with a lot of AI analytics backing it up, so it's not just a SIEM product as we see with some of the others there. There's a lot more capability there. And I think generally in the third-party space, there is the capability to filter the logs, and the ability to analyze those logs and visualize them. And what we're seeing in that third-party space is a growing trend of an analytics AI platform to get better intelligence out of the logs than what we had previously. So I see a lot of value in this. Again, that's that log forwarding from the OpenShift logging operator to that third party, so an endpoint for the third-party platform. That said, being on AWS, a lot of customers that I work with that are bringing OpenShift to AWS are also taking advantage of AWS for their logging environments.
 

Paul Czarkowski (07:46):
That's right.
 

Ryan Niksch (07:47):
Not just OpenShift, in a more general sense. So something like AWS CloudWatch.
 

Paul Czarkowski (07:56):
Yep, that's right.
 

Ryan Niksch (07:57):
So log forwarding from your logging environment into CloudWatch, and then from within AWS, you could add something like SageMaker for that AI analytics of the data, or you could use something like Athena, or something like QuickSight to actually visualize it. Either way, here you're going to have to decide which AWS building blocks you want to aggregate together to meet your requirements, whereas with a third party you're potentially paying for a prebuilt solution, that "buy-versus-build" decision. What are we missing?
 

Paul Czarkowski (08:48):
What we're missing is how we actually get from OpenShift to CloudWatch. So you have the cluster log forwarding, and that basically lets you define a set of pipelines. So you define an input, which is one or more of these, and then you define an output, which could be CloudWatch or it could be any of these up here. And then you basically set up a pipeline that says this input goes to this output.
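
Expressed as a resource, the pipeline Paul sketches might look something like the following, using the cloudwatch output type; the region, grouping, and secret name are illustrative:

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: cw
    type: cloudwatch
    cloudwatch:
      groupBy: logType       # one CloudWatch log group per log type
      region: us-east-1      # illustrative region
    secret:
      name: cw-credentials   # AWS credentials; see the STS discussion at the end
  pipelines:
  - name: all-logs-to-cw
    inputRefs:               # the inputs: the three log types
    - infrastructure
    - audit
    - application
    outputRefs:              # the output: CloudWatch
    - cw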
 

Ryan Niksch (09:17):
And you can define multiple.
 

Paul Czarkowski (09:19):
Absolutely.

Multiple options
 

Ryan Niksch (09:19):
So if you had a requirement for, whatever, a larger business use case where my marketing team wanted analytics, but my infrastructure monitoring team has already invested in something like Curator or Splunk, you could do a pipeline to each of those.
 

Paul Czarkowski (09:34):
So a good example is, a lot of the time security teams are using Splunk to do their security audits, to look for intrusions and things like that. So you can say, "Okay, send my audit logs to Splunk via syslog." Your application developers may want to use the in-cluster Elasticsearch and Kibana, just to keep everything in the OpenShift cluster and within the OpenShift authentication. For example, if you push your application logs to Elasticsearch and then, as a developer, you log into Kibana, you'll only see the applications that are in your namespaces. So there's some security put in there so that teams can't see each other's logs, which is often very important. When you have a separation of duties and you have multi-tenant clusters, you need to be able to separate out who has permission to access what. And one of the things OpenShift does is bring that multi-tenancy into the other components it includes, like the Elasticsearch and Kibana stack.
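
That separation of duties can be expressed in a single ClusterLogForwarder: audit logs go to a hypothetical Splunk syslog receiver, while application logs stay with the in-cluster store via the built-in default output. A sketch, with the Splunk endpoint made up for illustration:

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: splunk-syslog
    type: syslog
    url: tls://splunk.example.com:6514   # hypothetical Splunk syslog endpoint
    syslog:
      facility: local0
      rfc: RFC5424
  pipelines:
  - name: audit-to-splunk                # the security team's feed
    inputRefs:
    - audit
    outputRefs:
    - splunk-syslog
  - name: apps-to-internal               # developers keep the in-cluster Kibana view
    inputRefs:
    - application
    outputRefs:
    - default                            # "default" routes to the internal log store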
 

Ryan Niksch (10:35):
I want to throw a spanner in the works.
 

Paul Czarkowski (10:36):
Of course.
 

Ryan Niksch (10:39):
We are talking about applications that scale here. Logging, auditing, it's a tremendous amount of data, and that adds overhead in terms of compute and storage.
 

Paul Czarkowski (10:48):
That's right.
 

Ryan Niksch (10:49):
So I want to zoom in on two things here. If I'm pushing log information out to AWS CloudWatch, that scales very effectively for me, and it manages the compute for me, so there are no servers to manage. Talk to me about Loki. Loki does something fun as well.
 

Paul Czarkowski (11:03):
Yeah, so Loki. It runs in the cluster, but the actual storage for the logs itself is in S3 and that means you get Amazon's unlimited (as long as you have a credit card on file) storage of your data.
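
The S3-backed setup Paul describes is driven by a LokiStack resource from the Loki operator. A hedged sketch; the size, storage class, and secret name are illustrative, and the referenced secret would hold the bucket name, region, and credentials:

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  size: 1x.small             # illustrative t-shirt sizing
  storage:
    secret:
      name: logging-loki-s3  # hypothetical secret pointing at the S3 bucket
      type: s3               # log data itself lands in S3
  storageClassName: gp3      # illustrative; used for local cache volumes
  tenants:
    mode: openshift-logging  # ties Loki into OpenShift's multi-tenancy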
 

Ryan Niksch (11:19):
And it dynamically scales in terms of performance as well.
 

Paul Czarkowski (11:19):
Exactly.
 

Ryan Niksch (11:22):
Once it's in S3, you can also take advantage of any number of other applications on AWS for what you want to do with that data. So there's a lot of things around that. I think, in summation, customers have a choice. You decide what is most meaningful to your business: take advantage of something built-in because you don't already have a logging investment; use a third party if you have an existing logging investment in a hybrid context or an existing infrastructure space; or, if you're moving directly into the cloud, take advantage of the AWS options, or combinations of the above. And OpenShift is really that flexible. ROSA, for a little while, had a separate operator to facilitate things like CloudWatch. Very recently there's been an update to the logging operator for OpenShift in general.
 

Paul Czarkowski (12:19):
That's right.
 

Ryan Niksch (12:20):
And am I correct in saying that OpenShift now directly supports CloudWatch without an additional operator? It's the standard logging approach.
 

Paul Czarkowski (12:28):
That's correct. The standard cluster log forwarding operator fully supports CloudWatch now.
 

Ryan Niksch (12:34):
And that's as of 4.11?
 

Paul Czarkowski (12:36):
That's a very specific version of the cluster logging stack, and you've asked me the one question I don't know the answer to.
 

Ryan Niksch (12:46):
Okay. So I think it is-
 

Paul Czarkowski (12:50):
But I think 4.11 for sure has it. It will probably get backported to 4.10 as well, but I don't know how far back it will get backported. That means that the cluster logging add-on is no longer necessary. And so it's not exactly deprecated yet, but the requirement for it has disappeared.
 

Ryan Niksch (13:09):
So long and short of it is, the approach for all of these is consistent. Whether you are using managed OpenShift or self-managed OpenShift, you've absolutely got the same options.
 

Paul Czarkowski (13:18):
The same options.
 

Ryan Niksch (13:19):
I think there's a couple of things that we haven't mentioned, but I think we got the bulk of what we're seeing across enterprise customers here.
 

Paul Czarkowski (13:26):
I think the one thing we didn't mention is, if you are logging to CloudWatch, or if you're logging to Loki backed by S3, there is obviously the need to get credentials into the cluster to have permission to do that. And you can do that either via a user IAM and injecting the service key credentials, or you can use STS and pod identity.
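
The two approaches differ only in the secret that the CloudWatch (or Loki) output references. A hedged sketch of both, with made-up names and ARNs:

# Option 1: long-lived IAM user keys
apiVersion: v1
kind: Secret
metadata:
  name: cw-credentials
  namespace: openshift-logging
stringData:
  aws_access_key_id: AKIA...            # placeholder, not a real key
  aws_secret_access_key: "..."          # placeholder, not a real key
---
# Option 2: STS, referencing a least-privilege role that issues temporary credentials
apiVersion: v1
kind: Secret
metadata:
  name: cw-sts-credentials
  namespace: openshift-logging
stringData:
  role_arn: arn:aws:iam::123456789012:role/rosa-cloudwatch-logger   # hypothetical role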
 

Ryan Niksch (13:49):
Either making use of keys, which I wouldn't recommend, or alternatively using an STS implementation, and that STS is going to give you least privilege as well as temporary credentials, which I think from a security perspective are both super attractive. Paul, as always, thank you very much for joining me.
 

Paul Czarkowski (14:17):
Of course.
 

Ryan Niksch (14:18):
It's a pleasure having you here.
 

Paul Czarkowski (14:19):
Thank you so much.


Ryan Niksch (14:20):
And thank you for joining us.
