Shipping logs and metrics to Azure Blob storage
This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.
Azure Red Hat OpenShift (ARO) clusters have built-in metrics and logs that both administrators and developers can view in the OpenShift Console. But there are many reasons you might want to store and view these metrics and logs from outside the cluster.
The OpenShift developers have anticipated this need and provide ways to ship both metrics and logs outside of the cluster. In Azure, the Azure Blob storage service is a natural fit for storing this data.
In this guide we'll set up Thanos and Grafana Agent to forward cluster and user workload metrics to Azure Blob, as well as the Cluster Logging Operator to forward logs to Loki, which stores them in Azure Blob.
Prerequisites
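Based on the commands used throughout this guide, you'll need at least the following CLI tools installed: `az` (Azure CLI), `oc` (OpenShift CLI), `helm` (v3), `terraform`, and `git`.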
Preparation
Note: This guide was written on Fedora Linux (using the zsh shell) running inside Windows 11 WSL2. You may need to modify these instructions slightly to suit your Operating System / Shell of choice.
Create some environment variables to be reused throughout this guide.
Modify these values to suit your environment, especially the storage account name which must be globally unique.
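A minimal sketch of such variables; the names here are assumptions based on values referenced later in this guide, so adjust them to match your environment:

```bash
export USERNAME="$(whoami)"                       # used in the cluster name aro-${USERNAME}
export AZR_RESOURCE_GROUP="aro-${USERNAME}-rg"    # hypothetical resource group name
export AZR_STORAGE_ACCOUNT="arologs${USERNAME}"   # lowercase alphanumeric, must be globally unique
export AZR_METRICS_CONTAINER="metrics"
export AZR_LOGS_CONTAINER="logs"
export NAMESPACE="mobb-aro-obs"                   # namespace used later in the guide
export SCRATCHDIR="$(mktemp -d)"                  # referenced when saving the pull secret
```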
Log into Azure CLI
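For example:

```bash
az login
```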
Create ARO Cluster
You can skip this step if you already have a cluster, or if you want to create it another way.
This will create a default ARO cluster named `aro-${USERNAME}`. You can modify the Terraform variables/Makefile to change settings; just update the environment variables loaded above to suit.
Clone down the Black Belt ARO Terraform repo.
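The repository URL below is an assumption; substitute the Black Belt ARO Terraform repo you intend to use:

```bash
# Repository URL is an assumption
git clone https://github.com/rh-mobb/terraform-aro.git
cd terraform-aro
```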
Initialize, create a plan, and apply it.
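A standard Terraform workflow for this looks like:

```bash
terraform init
terraform plan -out aro.plan
terraform apply aro.plan
```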
This should take about 35 minutes and the final lines of the output should look like
Save and display the ARO credentials, then log in.
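One way to do this with the `az` CLI, assuming the cluster name and resource group variables defined earlier:

```bash
export CONSOLE="$(az aro show -n "aro-${USERNAME}" -g "${AZR_RESOURCE_GROUP}" \
  --query consoleProfile.url -o tsv)"
export API_URL="$(az aro show -n "aro-${USERNAME}" -g "${AZR_RESOURCE_GROUP}" \
  --query apiserverProfile.url -o tsv)"
export OCP_USER="kubeadmin"
export OCP_PASS="$(az aro list-credentials -n "aro-${USERNAME}" -g "${AZR_RESOURCE_GROUP}" \
  --query kubeadminPassword -o tsv)"
oc login "${API_URL}" --username "${OCP_USER}" --password "${OCP_PASS}"
echo "Login to ${CONSOLE} as ${OCP_USER} with password ${OCP_PASS}"
```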
Now would be a good time to use the output of this command to log into the OCP Console. You can always run `echo "Login to ${CONSOLE} as ${OCP_USER} with password ${OCP_PASS}"` at any time to remind yourself of the URL and credentials.
Configure additional Azure resources
These steps create the Storage Account (and two storage containers) in the same Resource Group as the ARO cluster to make cleanup easier. You may want to change this, especially if you plan to host metrics and logs for multiple clusters in a single Storage Account.
Create Azure Storage Account and Storage Containers
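For example, using the variables defined earlier:

```bash
az storage account create -n "${AZR_STORAGE_ACCOUNT}" -g "${AZR_RESOURCE_GROUP}" --sku Standard_LRS

# Grab an access key so we can create containers now and hand it to Thanos/Loki later
AZR_STORAGE_KEY="$(az storage account keys list -n "${AZR_STORAGE_ACCOUNT}" \
  -g "${AZR_RESOURCE_GROUP}" --query '[0].value' -o tsv)"

az storage container create -n "${AZR_METRICS_CONTAINER}" \
  --account-name "${AZR_STORAGE_ACCOUNT}" --account-key "${AZR_STORAGE_KEY}"
az storage container create -n "${AZR_LOGS_CONTAINER}" \
  --account-name "${AZR_STORAGE_ACCOUNT}" --account-key "${AZR_STORAGE_KEY}"
```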
Configure MOBB Helm Repository
Helm charts do a lot of the heavy lifting for us and reduce the need to copy/paste a pile of YAML. The Managed OpenShift Black Belt (MOBB) team maintains these charts.
Add the MOBB chart repository to your Helm
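The repository URL below is the MOBB team's published chart repo; verify it matches the charts referenced in this guide:

```bash
helm repo add mobb https://rh-mobb.github.io/helm-charts/
```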
Update your Helm repositories
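For example:

```bash
helm repo update
```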
Create a namespace to use
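Using the namespace referenced later in this guide:

```bash
oc new-project "${NAMESPACE}"   # mobb-aro-obs, per the variables defined earlier
```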
Update the Pull Secret and enable OperatorHub
This is required to provide credentials to the cluster to pull various Red Hat images in order to enable and configure the Operator Hub.
Download a pull secret from the Red Hat Cloud Console and save it as `${SCRATCHDIR}/pullsecret.txt`.
Update the cluster's pull secret using the `mobb/aro-pull-secret` Helm chart.
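A sketch of the chart invocation; the value key for the pull secret file is an assumption, so check the chart's values.yaml for the real interface:

```bash
helm upgrade --install pull-secret mobb/aro-pull-secret \
  -n "${NAMESPACE}" \
  --set-file pullSecret="${SCRATCHDIR}/pullsecret.txt"   # value key is an assumption
```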
Wait for OperatorHub pods to be ready
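The OperatorHub catalog pods run in `openshift-marketplace`:

```bash
watch oc get pods -n openshift-marketplace
```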
Configure Metrics Federation to Azure Blob Storage
Next we can configure metrics federation to Azure Blob Storage. This is done by deploying the Grafana Operator (to install Grafana for viewing the metrics later) and the Resource Locker Operator (to configure the user workload metrics), and then the `mobb/aro-thanos-af` Helm chart, which deploys and configures Thanos and Grafana Agent to store and retrieve the metrics in Azure Blob.
Grafana Operator
Deploy the Grafana Operator
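A minimal OLM Subscription sketch; the guide may use a Helm chart instead, and the channel is an assumption, so check the catalog:

```bash
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: ${NAMESPACE}
  namespace: ${NAMESPACE}
spec:
  targetNamespaces:
    - ${NAMESPACE}
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: grafana-operator
  namespace: ${NAMESPACE}
spec:
  channel: v4            # channel is an assumption; check the catalog
  name: grafana-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
EOF
```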
After a few minutes you should see the following
Resource Locker Operator
Deploy the Resource Locker Operator
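The same Subscription pattern works here (the channel is again an assumption):

```bash
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: resource-locker-operator
  namespace: ${NAMESPACE}
spec:
  channel: alpha         # channel is an assumption; check the catalog
  name: resource-locker-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
EOF
```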
After a few minutes you should see the following
Deploy the OpenShift Patch Operator
Use Helm to deploy the OpenShift Patch Operator.
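A sketch; the chart name is an assumption, so confirm it against the MOBB chart repository:

```bash
helm upgrade --install patch-operator mobb/patch-operator -n "${NAMESPACE}"
```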
Wait for the Operator to be ready
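One way to wait, assuming the operator deploys into the same namespace:

```bash
oc wait --for=condition=Available deployment --all -n "${NAMESPACE}" --timeout=300s
```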
Configure Metrics Federation
Deploy the `mobb/aro-thanos-af` Helm chart to configure metrics federation.
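A sketch of the chart invocation; the value keys below are assumptions, so see the chart's values.yaml for the real interface. The storage key grants Thanos access to the metrics container:

```bash
helm upgrade --install aro-thanos-af mobb/aro-thanos-af \
  -n "${NAMESPACE}" \
  --set "azure.storageAccount=${AZR_STORAGE_ACCOUNT}" \
  --set "azure.storageAccountKey=${AZR_STORAGE_KEY}" \
  --set "azure.storageContainer=${AZR_METRICS_CONTAINER}" \
  --set "enableUserWorkloadMetrics=true"   # value keys are assumptions
```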
Deploy and Configure Cluster Logging and Loki
Configure the Loki stack to log to Azure Blob.
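A sketch; the chart name (`mobb/aro-clf-blob`) and value keys are assumptions based on the `clf.*` flag mentioned in the note below, so check the MOBB chart repository:

```bash
helm upgrade --install aro-clf-blob mobb/aro-clf-blob \
  -n "${NAMESPACE}" \
  --set "azure.storageAccount=${AZR_STORAGE_ACCOUNT}" \
  --set "azure.storageAccountKey=${AZR_STORAGE_KEY}" \
  --set "azure.storageContainer=${AZR_LOGS_CONTAINER}"
```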
Note: Only infrastructure and application logs are configured to forward by default, to reduce storage and traffic. You can add the argument `--set clf.audit=true` to also forward audit logs.
Wait for the logging stack to come online.
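For example:

```bash
# Cluster Logging components run in openshift-logging; Loki itself may land in
# the chart's target namespace instead.
watch oc get pods -n openshift-logging
```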
Sometimes the log collector needs to be restarted for logs to flow correctly into Loki. Wait a few minutes then run the following
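One way to restart the collector, assuming the default Cluster Logging collector label:

```bash
oc delete pods -n openshift-logging -l component=collector   # label is an assumption
```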
Validate Metrics and Logs
Now that the Metrics and Log forwarding is set up we can view them in Grafana.
Fetch the Route for Grafana (see the sketch after these steps).
Browse to the route address and log in using your OpenShift credentials (username `kubeadmin`, password shown by `echo $OCP_PASS`).
View an existing dashboard such as mobb-aro-obs -> Node Exporter -> USE Method -> Cluster.
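A simple way to find the route (the exact route name depends on how Grafana was deployed):

```bash
oc get routes -n "${NAMESPACE}"
```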

Click the Explore (compass) icon in the left-hand menu, select "Loki (Application)" in the dropdown, and search for
{kubernetes_namespace_name="mobb-aro-obs"}
Debugging Loki
If you don’t see logs in Grafana you can validate that Loki is correctly storing them by querying it directly like so.
Port forward to the Loki Service
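Assuming the Loki service is named `loki` and listens on its default port 3100:

```bash
oc port-forward -n "${NAMESPACE}" svc/loki 3100:3100
```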
Make sure you can curl the Loki service and get a list of labels
You can get the bearer token from the login command screen in the OCP Dashboard
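The labels endpoint is part of Loki's standard HTTP API; the token placeholder below is for you to fill in:

```bash
TOKEN="sha256~..."   # copy from the "Copy login command" screen in the OCP console
curl -s -H "Authorization: Bearer ${TOKEN}" \
  "http://localhost:3100/loki/api/v1/labels"
```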
You can also use the Loki CLI
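For example, using `logcli` against the same port-forward (the `--bearer-token` flag is available in recent logcli releases):

```bash
logcli labels --addr="http://localhost:3100" --bearer-token="${TOKEN}"
```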
Cleanup
Assuming you didn't deviate from the guide, you created everything in the Resource Group of the ARO cluster, so you can simply destroy the Terraform stack and everything will be cleaned up.
Delete the ARO cluster
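From the Terraform repo directory:

```bash
terraform destroy
```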