
Shipping logs and metrics to Azure Blob storage

This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.

Azure Red Hat OpenShift clusters have built-in metrics and logs that can be viewed by both administrators and developers via the OpenShift Console. But there are many reasons you might want to store and view these metrics and logs from outside of the cluster.

The OpenShift developers have anticipated this need and provide ways to ship both metrics and logs outside of the cluster. In Azure, the Azure Blob Storage service is well suited to storing this data.

In this guide we'll be setting up Thanos and Grafana Agent to forward cluster and user workload metrics to Azure Blob, as well as the Cluster Logging Operator to forward logs to Loki, which stores the logs in Azure Blob.

Prerequisites

This guide uses the following CLI tools, all invoked in later steps:

  * az (the Azure CLI)
  * oc (the OpenShift CLI)
  * helm (version 3)
  * terraform
  * git

Preparation

Note: This guide was written on Fedora Linux (using the zsh shell) running inside Windows 11 WSL2. You may need to modify these instructions slightly to suit your Operating System / Shell of choice.

  1. Create some environment variables to be reused throughout this guide

    Modify these values to suit your environment; in particular, the storage account name must be globally unique.
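
    A minimal sketch (USERNAME and SCRATCHDIR are referenced later in this guide; the AZR_* names are placeholders of our own, reused in the snippets below):

    ```bash
    export USERNAME="$(whoami)"                    # cluster will be named aro-${USERNAME}
    export SCRATCHDIR="$(mktemp -d)"               # scratch space for downloaded files
    export AZR_LOCATION="eastus"                   # Azure region
    export AZR_RESOURCE_GROUP="aro-${USERNAME}-rg" # should match the RG the Terraform stack creates
    export AZR_STORAGE_ACCOUNT="aroobs${RANDOM}"   # 3-24 lowercase alphanumerics, globally unique
    ```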

  2. Log in to the Azure CLI
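
    A standard interactive login; this opens a browser window to authenticate:

    ```bash
    az login
    ```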

Create ARO Cluster

You can skip this step if you already have a cluster, or if you want to create it another way.

This will create a default ARO cluster named aro-${USERNAME}. You can modify the Terraform variables or Makefile to change settings; just update the environment variables set above to match.

  1. Clone down the Black Belt ARO Terraform repo
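
    Assuming the repo lives in the rh-mobb GitHub organization (substitute the canonical URL if it has moved):

    ```bash
    git clone https://github.com/rh-mobb/terraform-aro.git
    cd terraform-aro
    ```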

  2. Initialize, create a plan, and apply it
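
    The repo's Makefile may wrap these steps; the plain Terraform invocation looks like:

    ```bash
    terraform init
    terraform plan -out aro.plan
    terraform apply aro.plan
    ```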

    This should take about 35 minutes, and the final lines of the output should report that the apply completed, followed by the Terraform outputs.

  3. Save, display the ARO credentials, and login
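
    A sketch using standard az aro calls (CONSOLE, OCP_USER, and OCP_PASS are the names echoed in the next step; the cluster and resource group names assume the defaults above):

    ```bash
    export CONSOLE="$(az aro show -n "aro-${USERNAME}" -g "${AZR_RESOURCE_GROUP}" --query consoleProfile.url -o tsv)"
    export OCP_USER="kubeadmin"
    export OCP_PASS="$(az aro list-credentials -n "aro-${USERNAME}" -g "${AZR_RESOURCE_GROUP}" --query kubeadminPassword -o tsv)"
    echo "Login to ${CONSOLE} as ${OCP_USER} with password ${OCP_PASS}"
    oc login "$(az aro show -n "aro-${USERNAME}" -g "${AZR_RESOURCE_GROUP}" --query apiserverProfile.url -o tsv)" \
      -u "${OCP_USER}" -p "${OCP_PASS}"
    ```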

  4. Now would be a good time to use the output of this command to log into the OCP Console. You can run echo "Login to ${CONSOLE} as ${OCP_USER} with password ${OCP_PASS}" at any time to remind yourself of the URL and credentials.

Configure additional Azure resources

These steps create the Storage Account (and two storage containers) in the same Resource Group as the ARO cluster to make cleanup easier. You may want to use a separate Resource Group instead, especially if you plan to host metrics and logs for multiple clusters in the one Storage Account.

  1. Create Azure Storage Account and Storage Containers
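
    The container names (aro-metrics and aro-logs) are placeholders of our own, reused in the Helm values below:

    ```bash
    az storage account create \
      --name "${AZR_STORAGE_ACCOUNT}" \
      --resource-group "${AZR_RESOURCE_GROUP}" \
      --location "${AZR_LOCATION}" \
      --sku Standard_LRS

    az storage container create --name aro-metrics --account-name "${AZR_STORAGE_ACCOUNT}"
    az storage container create --name aro-logs --account-name "${AZR_STORAGE_ACCOUNT}"
    ```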

Configure MOBB Helm Repository

Helm charts do a lot of the heavy lifting for us and reduce the need to copy/paste a pile of YAML. The Managed OpenShift Black Belt (MOBB) team maintains these charts.

  1. Add the MOBB chart repository to your Helm
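
    Assuming the charts are published at the rh-mobb GitHub Pages URL:

    ```bash
    helm repo add mobb https://rh-mobb.github.io/helm-charts/
    ```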

  2. Update your Helm repositories
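
    Refresh the local chart index so the newly added repository's charts are visible:

    ```bash
    helm repo update
    ```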

  3. Create a namespace to use
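
    mobb-aro-obs is the namespace the rest of this guide refers to (it appears again in the Grafana queries later):

    ```bash
    oc new-project mobb-aro-obs
    ```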

Update the Pull Secret and enable OperatorHub

This provides the cluster with credentials to pull various Red Hat images, which is required to enable and configure OperatorHub.

  1. Download a Pull Secret from the Red Hat Cloud Console (console.redhat.com/openshift/install/pull-secret) and save it to ${SCRATCHDIR}/pullsecret.txt

  2. Update the cluster's pull secret using the mobb/aro-pull-secret Helm Chart
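
    A sketch; the value key name is an assumption, so check the chart's values (helm show values mobb/aro-pull-secret):

    ```bash
    helm upgrade --install pull-secret mobb/aro-pull-secret \
      -n mobb-aro-obs \
      --set-file pullSecret="${SCRATCHDIR}/pullsecret.txt"
    ```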

  3. Wait for OperatorHub pods to be ready
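
    OperatorHub's catalog pods live in the openshift-marketplace namespace:

    ```bash
    oc -n openshift-marketplace wait --for=condition=Ready pods --all --timeout=300s
    ```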

Configure Metrics Federation to Azure Blob Storage

Next we can configure metrics federation to Azure Blob Storage. This is done by deploying the Grafana Operator (to install Grafana so we can view the metrics later) and the Resource Locker Operator (to configure the User Workload Metrics), and then the mobb/aro-thanos-af Helm Chart to deploy and configure Thanos and Grafana Agent, which store and retrieve the metrics in Azure Blob.

Grafana Operator

  1. Deploy the Grafana Operator
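
    One way to install it, sketched as a plain OLM Subscription from the community catalog (the guide's exact method may differ; the OperatorGroup is needed once per namespace and is reused by the next operator):

    ```bash
    cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: mobb-aro-obs
      namespace: mobb-aro-obs
    spec:
      targetNamespaces:
        - mobb-aro-obs
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: grafana-operator
      namespace: mobb-aro-obs
    spec:
      name: grafana-operator
      source: community-operators
      sourceNamespace: openshift-marketplace
    EOF
    ```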

    After a few minutes the Grafana Operator pods should be Running (check with oc -n mobb-aro-obs get pods).

Resource Locker Operator

  1. Deploy the Resource Locker Operator
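
    This reuses the OperatorGroup created above; the package name is an assumption worth verifying with oc get packagemanifests | grep resource-locker:

    ```bash
    cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: resource-locker-operator
      namespace: mobb-aro-obs
    spec:
      name: resource-locker-operator
      source: community-operators
      sourceNamespace: openshift-marketplace
    EOF
    ```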

    After a few minutes the Resource Locker Operator pods should be Running (check with oc -n mobb-aro-obs get pods).

Deploy OpenShift Patch Operator

  1. Use Helm to deploy the OpenShift Patch Operator
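
    The chart name here is a guess; list the available charts with helm search repo mobb to find the one that installs the Patch Operator:

    ```bash
    helm upgrade --install patch-operator mobb/openshift-patch-operator -n mobb-aro-obs
    ```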

  2. Wait for the Operator to be ready
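
    Assuming the operator was installed into mobb-aro-obs as above, waiting on all Deployments in the namespace also covers the operators installed earlier:

    ```bash
    oc -n mobb-aro-obs wait --for=condition=Available deployment --all --timeout=300s
    ```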

Configure Metrics Federation

  1. Deploy mobb/aro-thanos-af Helm Chart to configure metrics federation
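
    The chart name comes from the guide, but the value keys here are assumptions; inspect them with helm show values mobb/aro-thanos-af:

    ```bash
    helm upgrade --install aro-thanos-af mobb/aro-thanos-af -n mobb-aro-obs \
      --set "aro.storageAccount=${AZR_STORAGE_ACCOUNT}" \
      --set "aro.storageAccountKey=$(az storage account keys list -g "${AZR_RESOURCE_GROUP}" -n "${AZR_STORAGE_ACCOUNT}" --query '[0].value' -o tsv)" \
      --set "aro.storageContainer=aro-metrics" \
      --set "enableUserWorkloadMetrics=true"
    ```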

Deploy and Configure Cluster Logging and Loki

  1. Configure the Loki stack to log to Azure Blob

    Note: Only Infrastructure and Application logs are configured to forward by default, to reduce storage and traffic. You can add the argument --set clf.audit=true to also forward audit logs.
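
    A sketch in the same shape as the metrics chart above; the chart name and azure.* value keys are assumptions (the clf.* values match the note), so verify them against the MOBB chart repo:

    ```bash
    helm upgrade --install clf-blob mobb/aro-clf-blob -n mobb-aro-obs \
      --set "azure.storageAccount=${AZR_STORAGE_ACCOUNT}" \
      --set "azure.storageAccountKey=$(az storage account keys list -g "${AZR_RESOURCE_GROUP}" -n "${AZR_STORAGE_ACCOUNT}" --query '[0].value' -o tsv)" \
      --set "azure.storageContainer=aro-logs"
    ```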

  2. Wait for the logging stack to come online
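
    Cluster Logging components land in the openshift-logging namespace; re-run this until the collector and Loki pods are Running:

    ```bash
    oc -n openshift-logging get pods
    ```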

  3. Sometimes the log collector needs to be restarted for logs to flow correctly into Loki. Wait a few minutes, then run the following
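
    The label is an assumption: recent Cluster Logging releases label the collector pods component=collector (older releases use component=fluentd):

    ```bash
    oc -n openshift-logging delete pods -l component=collector
    ```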

Validate Metrics and Logs

Now that metrics and log forwarding are set up, we can view them in Grafana.

  1. Fetch the Route for Grafana
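
    grafana-route is the name the Grafana Operator typically gives the route; fall back to listing all routes in the namespace if it differs:

    ```bash
    oc -n mobb-aro-obs get route grafana-route -o jsonpath='{.spec.host}{"\n"}'
    ```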

  2. Browse to the provided route address and log in using your OpenShift credentials (username kubeadmin, password from echo $OCP_PASS).

  3. View an existing dashboard such as mobb-aro-obs -> Node Exporter -> USE Method -> Cluster.

    screenshot showing federated metrics
  4. Click the Explore (compass) icon in the left-hand menu, select "Loki (Application)" in the dropdown, and search for {kubernetes_namespace_name="mobb-aro-obs"}

    screenshot showing logs

Debugging Loki

If you don’t see logs in Grafana you can validate that Loki is correctly storing them by querying it directly like so.

  1. Port forward to the Loki Service
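
    The service name and namespace are assumptions; find yours with oc get svc -A | grep -i loki. Loki's default HTTP port is 3100:

    ```bash
    oc -n mobb-aro-obs port-forward svc/loki 3100:3100
    ```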

  2. Make sure you can curl the Loki service and get a list of labels

    You can get the bearer token from the copy login command screen in the OCP Console, or by running oc whoami -t
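
    The labels endpoint is part of Loki's standard HTTP API; port 3100 matches the port-forward above:

    ```bash
    TOKEN="$(oc whoami -t)"
    curl -s -H "Authorization: Bearer ${TOKEN}" http://localhost:3100/loki/api/v1/labels
    ```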

  3. You can also use the Loki CLI (logcli)
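
    Pointed at the same port-forward; the label selector matches the Grafana query used earlier:

    ```bash
    logcli --addr=http://localhost:3100 labels
    logcli --addr=http://localhost:3100 query '{kubernetes_namespace_name="mobb-aro-obs"}'
    ```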

Cleanup

Assuming you didn't deviate from the guide, you created everything in the Resource Group of the ARO cluster, so you can simply destroy your Terraform stack and everything will be cleaned up.

  1. Delete the ARO cluster
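
    Run this from the terraform-aro working directory used earlier:

    ```bash
    terraform destroy
    ```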

