Cloud Experts Documentation

Observability

Deploying Grafana on OpenShift 4

OpenShift users often want access to a Grafana interface in order to build custom dashboards for their cluster and application workloads. The Grafana that shipped with OpenShift was read-only; it was deprecated in OpenShift 4.11 and removed in OpenShift 4.12. Since OpenShift uses Prometheus for both cluster and user workload metrics, it's fairly straightforward to deploy a Grafana instance using the Grafana Operator, connect it to those metrics, and create custom dashboards.
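The guide installs Grafana through the Grafana Operator. As a rough sketch (not the exact steps from the guide, and assuming the community grafana-operator package and its v5 channel are available in the community-operators catalog), the operator subscription might look like this:

# Illustrative only: create a project and subscribe to the community Grafana Operator.
oc new-project grafana-operator

cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: grafana-operator
  namespace: grafana-operator
spec:
  targetNamespaces:
    - grafana-operator
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: grafana-operator
  namespace: grafana-operator
spec:
  channel: v5                           # assumed channel; check the catalog for the current one
  name: grafana-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
EOF

Once the operator is running, the usual pattern is a Grafana custom resource plus a data source pointing at the cluster's Thanos Querier, which gives Grafana access to the Prometheus metrics.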

Advanced Cluster Management Observability on ROSA

This document will take you through deploying ACM Observability on a ROSA cluster. See here for the original documentation.

Prerequisites

An existing ROSA cluster
An Advanced Cluster Management (ACM) deployment

Set up environment

Set environment variables:

export CLUSTER_NAME=my-cluster
export S3_BUCKET=$CLUSTER_NAME-acm-observability
export REGION=us-east-2
export NAMESPACE=open-cluster-management-observability
export SA=tbd
export SCRATCH_DIR=/tmp/scratch
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export AWS_PAGER=""

rm -rf $SCRATCH_DIR
mkdir -p $SCRATCH_DIR

Prepare AWS Account

Create an S3 bucket
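The excerpt stops at bucket creation; a minimal sketch of that step with the AWS CLI (the full guide may add tags, a bucket policy, or an IAM role on top of this) is:

# Create the S3 bucket that ACM Observability will use for metric storage.
# REGION is us-east-2 here; for us-east-1 the create-bucket-configuration flag must be omitted.
aws s3api create-bucket \
  --bucket $S3_BUCKET \
  --region $REGION \
  --create-bucket-configuration LocationConstraint=$REGION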

Configuring OpenShift Logging using LokiStack on ROSA and (soon) ARO

A guide to shipping logs and metrics on OpenShift using the new LokiStack setup. Recently, the default logging stack shipped with OpenShift switched from Elasticsearch/Fluentd/Kibana to one based on LokiStack/Vector/OCP Console. LokiStack requires an object store in order to function, and this guide walks the user through the steps required to set that up.

Overview of the components of OpenShift Cluster Logging

Prerequisites

OpenShift CLI (oc)
Rights to install operators on the cluster
Access to create S3 buckets (AWS/ROSA), a Blob Storage Container (Azure), or a Storage Bucket (GCP)

Setting up your environment for ROSA

Create environment variables to use later in this process by running the following commands:

$ export REGION=$(oc get infrastructure cluster -o=jsonpath="{.
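The last command above is cut off in this excerpt; a sketch of what that environment setup typically looks like on ROSA (the bucket variable and name below are assumptions, not necessarily the ones used in the guide) is:

# Read the cluster's AWS region from the infrastructure resource.
export REGION=$(oc get infrastructure cluster -o=jsonpath='{.status.platformStatus.aws.region}')

# LokiStack needs an object store; on ROSA that is an S3 bucket.
export LOKISTACK_BUCKET=my-cluster-lokistack   # illustrative bucket name
aws s3api create-bucket \
  --bucket $LOKISTACK_BUCKET \
  --region $REGION \
  --create-bucket-configuration LocationConstraint=$REGION   # omit this flag for us-east-1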

OpenShift Logging

A guide to shipping logs and metrics on OpenShift.

Prerequisites

OpenShift CLI (oc)
Rights to install operators on the cluster

Setup OpenShift Logging

This is for setting up centralized logging on OpenShift making use of the Elasticsearch OSS edition. It largely follows the process outlined in the OpenShift documentation here. Retention and storage considerations are reviewed in Red Hat's primary source documentation. This setup is primarily concerned with simplicity and basic log searching.
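As an illustration of where this guide ends up (node count, retention, and storage size below are placeholders, not the guide's recommended values), a minimal ClusterLogging instance backed by Elasticsearch looks roughly like this:

# Rough sketch of a ClusterLogging instance, assuming the Cluster Logging and
# Elasticsearch operators are already installed in openshift-logging.
cat << EOF | oc apply -f -
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    retentionPolicy:
      application:
        maxAge: 1d
    elasticsearch:
      nodeCount: 3
      storage:
        size: 200G
      redundancyPolicy: SingleRedundancy
  visualization:
    type: kibana
    kibana:
      replicas: 1
  collection:
    logs:
      type: fluentd
      fluentd: {}
EOF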

Shipping logs to Azure Log Analytics

This document follows the steps outlined by Microsoft in their documentation.

Follow the docs. Step 4 needs an additional command:

az resource list --resource-type Microsoft.RedHatOpenShift/OpenShiftClusters -o json

to capture the resource ID of the ARO cluster as well, which is needed for the export in step 6:

bash enable-monitoring.sh --resource-id $azureAroV4ClusterResourceId --workspace-id $logAnalyticsWorkspaceResourceId

This runs successfully, and you can verify the monitoring pods starting. Verify that logs are flowing and that the container solutions show up in the Log Analytics workbook. Configure Prometheus metric scraping by following the steps outlined here: https://docs.
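Putting steps 4 and 6 above together, a rough sketch of the flow (resource group, cluster, and workspace names below are placeholders, not values from the Microsoft doc) might look like:

# Illustrative only: capture the ARO cluster and Log Analytics workspace resource IDs.
azureAroV4ClusterResourceId=$(az resource list \
  --resource-type Microsoft.RedHatOpenShift/OpenShiftClusters \
  --query "[?name=='my-aro-cluster'].id" -o tsv)

logAnalyticsWorkspaceResourceId=$(az monitor log-analytics workspace show \
  --resource-group my-rg --workspace-name my-workspace --query id -o tsv)

# Step 6: enable monitoring with the script from the Microsoft documentation.
bash enable-monitoring.sh \
  --resource-id "$azureAroV4ClusterResourceId" \
  --workspace-id "$logAnalyticsWorkspaceResourceId"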

Interested in contributing to these docs?

Collaboration drives progress. Help improve our documentation, The Red Hat Way.
