Cloud Experts Documentation

OpenShift Logging

This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.

A guide to shipping logs and metrics on OpenShift

Prerequisites

  1. OpenShift CLI (oc)
  2. Rights to install operators on the cluster

Set Up OpenShift Logging

This guide sets up centralized logging on OpenShift using the OSS edition of Elasticsearch. It largely follows the process outlined in the official OpenShift documentation; retention and storage considerations are reviewed in more depth in Red Hat's primary source documentation.

This setup prioritizes simplicity and basic log searching. Consequently, it is not suited to long-lived retention or advanced visualization of logs. For more advanced observability setups, see Forwarding Logs to Third Party Systems.

  1. Create a namespace for the OpenShift Elasticsearch Operator.

    This is necessary to avoid potential conflicts with community operators that could send similarly named metrics/logs into the stack.
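    A sketch of that namespace, following the upstream OpenShift Logging installation docs (the name openshift-operators-redhat and the labels shown are the values those docs use; verify against your cluster version):

    ```shell
    # Namespace for the OpenShift Elasticsearch Operator.
    # The cluster-monitoring label lets Prometheus scrape metrics here.
    oc create -f - <<EOF
    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-operators-redhat
      annotations:
        openshift.io/node-selector: ""
      labels:
        openshift.io/cluster-monitoring: "true"
    EOF
    ```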

  2. Create a namespace for the OpenShift Logging Operator
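    A sketch of the logging namespace, again following the upstream install docs (the openshift-logging name is required by the operator; the labels shown are assumptions based on those docs):

    ```shell
    # Namespace for the Red Hat OpenShift Logging Operator.
    oc create -f - <<EOF
    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-logging
      annotations:
        openshift.io/node-selector: ""
      labels:
        openshift.io/cluster-monitoring: "true"
    EOF
    ```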

  3. Install the OpenShift Elasticsearch Operator by creating the following objects:

    1. Operator Group for OpenShift Elasticsearch Operator
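      A minimal sketch of that OperatorGroup, assuming the openshift-operators-redhat namespace from step 1 (an empty spec means the operator watches all namespaces):

      ```shell
      # OperatorGroup for the OpenShift Elasticsearch Operator.
      oc create -f - <<EOF
      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: openshift-operators-redhat
        namespace: openshift-operators-redhat
      spec: {}
      EOF
      ```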

    2. Subscription object to subscribe a Namespace to the OpenShift Elasticsearch Operator
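      A sketch of that Subscription; the channel name varies by OpenShift Logging release (stable here is an assumption, and some releases use versioned channels such as stable-5.x), so check the channels available in your catalog first:

      ```shell
      # Subscribe openshift-operators-redhat to the Elasticsearch Operator
      # from the redhat-operators catalog source.
      oc create -f - <<EOF
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: elasticsearch-operator
        namespace: openshift-operators-redhat
      spec:
        channel: stable
        installPlanApproval: Automatic
        name: elasticsearch-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace
      EOF
      ```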

    3. Verify Operator Installation

      Example Output
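      One way to verify, assuming the operator was subscribed into openshift-operators-redhat as above:

      ```shell
      # Look for the elasticsearch-operator CSV with PHASE "Succeeded".
      oc get csv -n openshift-operators-redhat
      ```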

  4. Install the Red Hat OpenShift Logging Operator by creating the following objects:

    1. The Cluster Logging OperatorGroup
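      A sketch of that OperatorGroup, scoped to the openshift-logging namespace per the upstream install docs:

      ```shell
      # OperatorGroup for the Red Hat OpenShift Logging Operator,
      # targeting only the openshift-logging namespace.
      oc create -f - <<EOF
      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: cluster-logging
        namespace: openshift-logging
      spec:
        targetNamespaces:
        - openshift-logging
      EOF
      ```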

    2. Subscription Object to subscribe a Namespace to the Red Hat OpenShift Logging Operator
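      A sketch of that Subscription; as with the Elasticsearch Operator, the stable channel name is an assumption to verify against the catalogs on your cluster:

      ```shell
      # Subscribe openshift-logging to the Red Hat OpenShift Logging Operator.
      oc create -f - <<EOF
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: cluster-logging
        namespace: openshift-logging
      spec:
        channel: stable
        installPlanApproval: Automatic
        name: cluster-logging
        source: redhat-operators
        sourceNamespace: openshift-marketplace
      EOF
      ```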

    3. Verify the Operator installation; the PHASE column should read Succeeded

    Example Output
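    One way to check, assuming the operator was installed into openshift-logging as above:

    ```shell
    # Look for the cluster-logging CSV with PHASE "Succeeded".
    oc get csv -n openshift-logging
    ```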

  5. Create an OpenShift Logging instance:

    NOTE: For the storageClassName below, you will need to adjust for the platform on which you’re running OpenShift. managed-premium as listed below is for Azure Red Hat OpenShift (ARO). You can verify your available storage classes with oc get storageclasses
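    A sketch of a ClusterLogging instance modeled on the upstream example; the node count, retention ages, storage size, and redundancy policy shown are illustrative assumptions you should size for your own cluster (and storageClassName must match one of your storage classes, per the note above):

    ```shell
    # ClusterLogging instance: Elasticsearch log store, Kibana
    # visualization, and Fluentd collectors.
    oc create -f - <<EOF
    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      managementState: Managed
      logStore:
        type: elasticsearch
        retentionPolicy:
          application:
            maxAge: 1d
          infra:
            maxAge: 7d
          audit:
            maxAge: 7d
        elasticsearch:
          nodeCount: 3
          storage:
            storageClassName: managed-premium
            size: 200G
          redundancyPolicy: SingleRedundancy
      visualization:
        type: kibana
        kibana:
          replicas: 1
      collection:
        logs:
          type: fluentd
          fluentd: {}
    EOF
    ```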

  6. It will take a few minutes for everything to start up. You can monitor this progress by watching the pods.
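    One way to watch the rollout:

    ```shell
    # Watch the logging pods come up; all should eventually be Running.
    oc get pods -n openshift-logging -w
    ```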

  7. Your logging instances are now configured and receiving logs. To view them, log in to your Kibana instance and create the appropriate index patterns. For more information on index patterns, see the Kibana documentation.

    NOTE: The following restrictions and notes apply to index patterns:

    • All users can view the app-* logs for namespaces they have access to
    • Only cluster-admins can view the infra-* and audit-* logs
    • For best accuracy, use the @timestamp field for determining chronology
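    The Logging Operator exposes Kibana via a route in openshift-logging; assuming the default route name kibana, you can find its URL with:

    ```shell
    # Print the Kibana hostname; open it in a browser and log in
    # with your OpenShift credentials.
    oc get route kibana -n openshift-logging -o jsonpath='{.spec.host}'
    ```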

