
Configuring OpenShift Logging 6 on ROSA HCP

This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.

ROSA HCP clusters support only OpenShift Logging 6.x and above. This guide walks step by step through implementing Logging 6.x on ROSA HCP, setting up a Loki log store backed by S3 and/or forwarding logs to AWS CloudWatch.

For ROSA Classic, refer to the LokiStack on ROSA article.

Components of the Logging Subsystem

The OpenShift logging subsystem is designed to collect, store, and visualize logs from various sources within the cluster, including node system logs, application container logs, and infrastructure logs. It comprises several key components that work together to provide log aggregation and management.

The collector, which runs on each node in the OpenShift cluster, is responsible for gathering logs. Historically the primary collector implementation was Fluentd, but the newer Vector collector has been adopted for its performance and features and is the collector used in Logging 6.x. The collector gathers system logs from journald and container logs from /var/log/containers/*.log, and it can additionally be configured to collect audit logs from /var/log/audit/audit.log. It is deployed and managed as a DaemonSet, ensuring that a collector pod runs on every node in the cluster.

The aggregated logs are then written to a log store. The default log store for OpenShift Logging has traditionally been Elasticsearch, but Loki is now offered as a performant alternative, and ROSA HCP environments now default to the Loki Operator. Log visualization on ROSA HCP is provided by the Cluster Observability Operator's (COO) Logging UI Plugin.

Refer to the official OpenShift Logging documentation and the 6.x quick start guide for more details.

ROSA HCP with Logging 6 requires the following operators:

  1. Loki Operator (log store)
  2. Red Hat OpenShift Logging Operator
  3. Cluster Observability Operator (log visualization)

Prerequisites

  1. A ROSA HCP cluster, logged in with cluster-admin permissions
  2. OpenShift CLI (oc)
  3. Access to AWS resources, i.e. IAM, S3, and CloudWatch

Note: The OpenShift Logging stack requires significant resources; you will need at least 32 vCPUs in your cluster.

Create environment variables

  1. Create environment variables:
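For example, a minimal sketch; the variable names (CLUSTER_NAME, S3_BUCKET, etc.) are placeholders of our own choosing, reused in the later snippets. Adjust the values for your cluster and region:

```bash
export CLUSTER_NAME="my-hcp-cluster"      # your ROSA HCP cluster name
export REGION="us-east-2"                 # your AWS region
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
# OIDC provider of the cluster, without the https:// prefix
export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster \
  -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||')
export S3_BUCKET="${CLUSTER_NAME}-loki"   # bucket name for the Loki log store
```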

Install the Loki Operator

  1. Create an S3 bucket for the LokiStack Operator
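A sketch using the variables defined above:

```bash
aws s3api create-bucket \
  --bucket "${S3_BUCKET}" \
  --region "${REGION}" \
  --create-bucket-configuration LocationConstraint="${REGION}"
# note: for us-east-1, omit --create-bucket-configuration
```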
  1. Create an S3 IAM policy document for the Loki Operator
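For example, a policy document scoped to the bucket created above (the file name s3-policy.json is arbitrary):

```bash
cat << EOF > s3-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::${S3_BUCKET}",
        "arn:aws:s3:::${S3_BUCKET}/*"
      ]
    }
  ]
}
EOF
```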
  1. Create an S3 IAM policy for LokiStack access
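A sketch that captures the policy ARN for the attach step below (the policy name is our own):

```bash
POLICY_ARN=$(aws iam create-policy \
  --policy-name "${CLUSTER_NAME}-loki-s3" \
  --policy-document file://s3-policy.json \
  --query Policy.Arn --output text)
echo "${POLICY_ARN}"
```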
  1. Create an IAM Role trust policy
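A sketch of a trust policy binding the role to the cluster's OIDC provider and the collector service account described in the note below:

```bash
cat << EOF > trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_ENDPOINT}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_ENDPOINT}:sub": "system:serviceaccount:openshift-logging:logging-collector"
        }
      }
    }
  ]
}
EOF
```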

Note: logging-collector is the name of the OpenShift service account used by the log collector.

  1. Create an IAM Role and link the trust policy
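For example, capturing the role ARN mentioned in the note below (the role name is our own):

```bash
ROLE_ARN=$(aws iam create-role \
  --role-name "${CLUSTER_NAME}-loki-role" \
  --assume-role-policy-document file://trust-policy.json \
  --query Role.Arn --output text)
echo "${ROLE_ARN}"
```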

Note: Save this role_arn; it is needed when configuring the LokiStack later.

  1. Attach the S3 IAM policy for LokiStack access to the above role
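For example:

```bash
aws iam attach-role-policy \
  --role-name "${CLUSTER_NAME}-loki-role" \
  --policy-arn "${POLICY_ARN}"
```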
  1. Create an OpenShift project for the Loki Operator

Note: ROSA HCP clusters have a built-in openshift-operators-redhat project. Make sure it has the openshift.io/cluster-monitoring: "true" label.

If the openshift-operators-redhat project does not exist, create it:
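For example:

```bash
oc create namespace openshift-operators-redhat
oc label namespace openshift-operators-redhat openshift.io/cluster-monitoring="true" --overwrite
```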

  1. Create an OperatorGroup
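A minimal sketch:

```bash
oc apply -f - << EOF
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-operators-redhat
  namespace: openshift-operators-redhat
spec:
  upgradeStrategy: Default
EOF
```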
  1. Create a Subscription for the Loki Operator
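For example (verify the channel name against the note below before applying):

```bash
oc apply -f - << EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: loki-operator
  namespace: openshift-operators-redhat
spec:
  channel: stable-6.3        # validate the current stable channel first
  installPlanApproval: Automatic
  name: loki-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
```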

Note: Make sure to validate the current stable channel version, e.g. 6.3.

  1. Verify Operator Installation
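For example, check the ClusterServiceVersion (CSV); the PHASE column should eventually show Succeeded:

```bash
oc -n openshift-operators-redhat get csv
```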

Note: This can take up to a minute

Example Output

  1. Label the openshift-logging namespace to deploy the LokiStack:

Note: ROSA HCP clusters have a built-in openshift-logging project. Make sure it has the openshift.io/cluster-monitoring: "true" label. If not, add it using the following command:
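```bash
oc label namespace openshift-logging openshift.io/cluster-monitoring="true" --overwrite
```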

  1. Create a secret with the above role for the LokiStack to access the S3 bucket.
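A sketch assuming the secret name logging-loki-aws (our own choice) and the ROLE_ARN captured earlier:

```bash
oc -n openshift-logging create secret generic logging-loki-aws \
  --from-literal=bucketnames="${S3_BUCKET}" \
  --from-literal=region="${REGION}" \
  --from-literal=endpoint="https://s3.${REGION}.amazonaws.com" \
  --from-literal=role_arn="${ROLE_ARN}"
```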

Note: Make sure the endpoint uses the correct S3 region for your environment.

  1. Create a LokiStack Custom Resource
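A minimal sketch; the name logging-loki, the gp3-csi storage class, and the schema effectiveDate are assumptions to adapt to your environment:

```bash
oc apply -f - << EOF
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  size: 1x.small              # see the sizing note below
  storage:
    schemas:
      - version: v13
        effectiveDate: "2024-01-01"
    secret:
      name: logging-loki-aws
      type: s3
      credentialMode: token   # short-lived STS credentials via the role above
  storageClassName: gp3-csi
  tenants:
    mode: openshift-logging
EOF
```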

Note: Specify the deployment size. Supported size options for production instances of Loki are 1x.extra-small, 1x.small, or 1x.medium. Additionally, 1x.pico is supported starting with Logging 6.1. See Loki deployment sizing.

  1. Verify LokiStack Installation
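For example, watch the LokiStack component pods come up:

```bash
oc -n openshift-logging get pods
```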

Note: If you see pods in a Pending state, confirm that you have sufficient resources in the cluster to run a LokiStack. You can configure your ROSA machine pool to autoscale, or create a new machine pool with the following command:
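A sketch of the machine pool command; the pool name, replica count, and instance type are illustrative:

```bash
rosa create machinepool --cluster "${CLUSTER_NAME}" \
  --name logging --replicas 3 --instance-type m5.2xlarge
```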

Install the OpenShift Cluster Logging Operator

  1. Create an OperatorGroup object
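A minimal sketch:

```bash
oc apply -f - << EOF
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  targetNamespaces:
    - openshift-logging
  upgradeStrategy: Default
EOF
```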
  1. Create a Subscription object for the Red Hat OpenShift Logging Operator
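For example (verify the channel name against the note below before applying):

```bash
oc apply -f - << EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: stable-6.2        # validate the current stable channel first
  installPlanApproval: Automatic
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
```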

Note: Make sure to select the latest stable channel, e.g. 6.2.

  1. Verify the Operator installation; the PHASE should be Succeeded
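For example:

```bash
oc -n openshift-logging get csv
```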

Example Output

  1. Create a service account to be used by the log collector:
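For example:

```bash
oc -n openshift-logging create serviceaccount logging-collector
```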

Note: The SA name should match the service account name used in the S3 access trust policy above, i.e. logging-collector.

  1. Grant the necessary permissions to the service account so it can collect and forward logs. In this example, the collector is granted permission to collect infrastructure, audit, and application logs.
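For example:

```bash
oc adm policy add-cluster-role-to-user collect-application-logs \
  system:serviceaccount:openshift-logging:logging-collector
oc adm policy add-cluster-role-to-user collect-infrastructure-logs \
  system:serviceaccount:openshift-logging:logging-collector
oc adm policy add-cluster-role-to-user collect-audit-logs \
  system:serviceaccount:openshift-logging:logging-collector
```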
  1. Create a ClusterLogForwarder CR to store logs in S3
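A sketch of a ClusterLogForwarder that sends all log types to the LokiStack created above; the output and pipeline names are our own:

```bash
oc apply -f - << EOF
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  serviceAccount:
    name: logging-collector
  outputs:
    - name: default-lokistack
      type: lokiStack
      lokiStack:
        authentication:
          token:
            from: serviceAccount
        target:
          name: logging-loki
          namespace: openshift-logging
      tls:
        ca:
          key: service-ca.crt
          configMapName: openshift-service-ca.crt
  pipelines:
    - name: default-logstore
      inputRefs:
        - application
        - infrastructure
        - audit
      outputRefs:
        - default-lokistack
EOF
```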
  1. Confirm you can see the collector pods (named "instance") starting up using the following command. There should be one per node.
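For example:

```bash
oc -n openshift-logging get pods | grep '^instance'
```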

Example output:

Wait until all instances show Running.

  1. Verify your S3 bucket
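For example, list a few of the objects Loki has written:

```bash
aws s3 ls "s3://${S3_BUCKET}/" --recursive | head
```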

Configuring log forwarding to CloudWatch

The ClusterLogForwarder (CLF) allows users to configure forwarding of logs to various destinations (e.g. AWS CloudWatch) in addition to the cluster logging storage system (i.e. the LokiStack).

Prerequisites

  1. Created a serviceAccount in the same namespace as the ClusterLogForwarder CR (we'll use the same SA as the LokiStack setup, i.e. logging-collector)

  2. Assigned the collect-audit-logs, collect-application-logs and collect-infrastructure-logs cluster roles to the serviceAccount.

  1. Create a CW IAM policy document for the CLF
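For example (the file name cw-policy.json is arbitrary; the actions listed are typical for log delivery to CloudWatch):

```bash
cat << EOF > cw-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:PutLogEvents",
        "logs:PutRetentionPolicy"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
EOF
```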

  1. Create the CW IAM Policy for CLF’s access
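For example (the policy name is our own):

```bash
CW_POLICY_ARN=$(aws iam create-policy \
  --policy-name "${CLUSTER_NAME}-cw-logs" \
  --policy-document file://cw-policy.json \
  --query Policy.Arn --output text)
echo "${CW_POLICY_ARN}"
```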
  1. Create an IAM Role trust policy document
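A sketch analogous to the Loki trust policy, using the service account from the note below:

```bash
cat << EOF > cw-trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_ENDPOINT}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_ENDPOINT}:sub": "system:serviceaccount:openshift-logging:logging-collector"
        }
      }
    }
  ]
}
EOF
```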

Note: logging-collector is the name of the OpenShift service account used by the log collector.

  1. Create an IAM Role and link the trust policy
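For example:

```bash
CW_ROLE_ARN=$(aws iam create-role \
  --role-name "${CLUSTER_NAME}-cw-role" \
  --assume-role-policy-document file://cw-trust-policy.json \
  --query Role.Arn --output text)
echo "${CW_ROLE_ARN}"
```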

Note: Save this role_arn; it is needed when configuring the cluster log forwarder (CLF) later.

  1. Attach the CW IAM policy to the above role
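For example:

```bash
aws iam attach-role-policy \
  --role-name "${CLUSTER_NAME}-cw-role" \
  --policy-arn "${CW_POLICY_ARN}"
```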
  1. Create a secret with the above role for the CLF to access CloudWatch
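A sketch assuming the secret name cloudwatch-credentials (our own choice):

```bash
oc -n openshift-logging create secret generic cloudwatch-credentials \
  --from-literal=role_arn="${CW_ROLE_ARN}"
```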
  1. Create a ClusterLogForwarder CR to forward logs to AWS CloudWatch
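A sketch that extends the forwarder created earlier with a CloudWatch output; the output name, pipeline name, and group-name template are illustrative:

```bash
oc apply -f - << EOF
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  serviceAccount:
    name: logging-collector
  outputs:
    - name: default-lokistack          # unchanged from the previous step
      type: lokiStack
      lokiStack:
        authentication:
          token:
            from: serviceAccount
        target:
          name: logging-loki
          namespace: openshift-logging
      tls:
        ca:
          key: service-ca.crt
          configMapName: openshift-service-ca.crt
    - name: cw-output
      type: cloudwatch
      cloudwatch:
        groupName: ${CLUSTER_NAME}-{.log_type||"unknown"}
        region: ${REGION}
        authentication:
          type: iamRole
          iamRole:
            roleARN:
              key: role_arn
              secretName: cloudwatch-credentials
            token:
              from: serviceAccount
  pipelines:
    - name: forward-logs
      inputRefs:
        - application
        - infrastructure
        - audit
      outputRefs:
        - default-lokistack
        - cw-output
EOF
```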

Note: Make sure to format the group name and set the correct AWS region.

This example selects all application, infrastructure, and audit logs and forwards them to CloudWatch. Refer to the OpenShift Logging documentation for more configuration options such as log formatting, filtering, etc.

  1. Verify the CloudWatch log groups
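For example:

```bash
aws logs describe-log-groups \
  --log-group-name-prefix "${CLUSTER_NAME}" \
  --query 'logGroups[].logGroupName' --output table
```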

Log visualization in the OpenShift console

Visualization for logging is provided by deploying the Logging UI Plugin of the Cluster Observability Operator (COO). Follow the detailed instructions for installing the Cluster Observability Operator.

  1. Create an OpenShift project for COO
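For example (the namespace name is an assumption; adjust to your convention):

```bash
oc create namespace openshift-cluster-observability-operator
```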
  1. Create an OperatorGroup object for COO
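A minimal sketch:

```bash
oc apply -f - << EOF
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-cluster-observability-operator
  namespace: openshift-cluster-observability-operator
spec:
  upgradeStrategy: Default
EOF
```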
  1. Create a Subscription object for the Cluster Observability Operator
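For example (validate the channel before applying):

```bash
oc apply -f - << EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-observability-operator
  namespace: openshift-cluster-observability-operator
spec:
  channel: stable
  installPlanApproval: Automatic
  name: cluster-observability-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
```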
  1. Verify the Cluster Observability Operator installation
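For example:

```bash
oc -n openshift-cluster-observability-operator get csv
```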

Wait until the Cluster Observability Operator shows Succeeded

  1. Create a Cluster Observability Operator Logging UI plugin CR
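A minimal sketch; the plugin must reference the LokiStack name from the note below:

```bash
oc apply -f - << EOF
apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging
spec:
  type: Logging
  logging:
    lokiStack:
      name: logging-loki
EOF
```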

Note: Make sure to provide the correct LokiStack name configured above (i.e. logging-loki).

  1. Verify the Logging UI plugin. Wait until you see the OpenShift web console refresh request. Once the console has refreshed, expand Observe on the left-hand side of the OpenShift console and go to the Logs tab.

(Screenshot: logs in the OpenShift web console)

Cleanup

  1. Remove the COO UIPlugin
  1. Remove Cluster Observability Operator
  1. Remove the ClusterLogForwarder Instance:
  1. Remove the LokiStack Instance:
  1. Remove the Cluster Logging Operator:
  1. Remove the LokiStack Operator:
  1. Cleanup your AWS Bucket
  1. Cleanup your AWS Policies and roles
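A consolidated sketch of the cleanup steps above, assuming the resource names used throughout this guide:

```bash
# 1-2. Remove the UIPlugin and the Cluster Observability Operator subscription
oc delete uiplugin logging
oc -n openshift-cluster-observability-operator delete subscription cluster-observability-operator

# 3-4. Remove the ClusterLogForwarder and LokiStack instances
oc -n openshift-logging delete clusterlogforwarder instance
oc -n openshift-logging delete lokistack logging-loki

# 5-6. Remove the operator subscriptions (CSVs can be deleted similarly for a full uninstall)
oc -n openshift-logging delete subscription cluster-logging
oc -n openshift-operators-redhat delete subscription loki-operator

# 7. Empty and delete the S3 bucket
aws s3 rm "s3://${S3_BUCKET}" --recursive
aws s3api delete-bucket --bucket "${S3_BUCKET}"

# 8. Detach and delete the IAM roles and policies
aws iam detach-role-policy --role-name "${CLUSTER_NAME}-loki-role" --policy-arn "${POLICY_ARN}"
aws iam detach-role-policy --role-name "${CLUSTER_NAME}-cw-role" --policy-arn "${CW_POLICY_ARN}"
aws iam delete-role --role-name "${CLUSTER_NAME}-loki-role"
aws iam delete-role --role-name "${CLUSTER_NAME}-cw-role"
aws iam delete-policy --policy-arn "${POLICY_ARN}"
aws iam delete-policy --policy-arn "${CW_POLICY_ARN}"
```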

Interested in contributing to these docs?

Collaboration drives progress. Help improve our documentation The Red Hat Way.
