
Configure an ARO cluster with Azure Files using a private endpoint

This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.

Effectively securing your Azure Storage Account requires more than just basic access controls. Azure Private Endpoints provide a powerful layer of protection by establishing a direct, private connection between your virtual network and storage resources—completely bypassing the public internet. This approach not only minimizes your attack surface and the risk of data exfiltration, but also enhances performance through reduced latency, simplifies network architecture, supports compliance efforts, and enables secure hybrid connectivity. It’s a comprehensive solution for protecting your critical cloud data.

Configuring private endpoint access to an Azure Storage Account involves three key steps:

  1. Create the storage account

  2. Create the private endpoint

  3. Define a new storage class for Azure Red Hat OpenShift (ARO)

Note: In many environments, Azure administrators use automation to streamline steps 1 and 2. This typically ensures the storage account is provisioned according to organizational policies—such as encryption and security configurations—along with the automatic creation of the associated private endpoint.

Warning: This approach does not work on FIPS-enabled clusters, because the CIFS/SMB protocol used by Azure Files is largely non-compliant with FIPS cryptographic requirements.

Prerequisites

  • An ARO cluster that you are logged in to
  • The oc CLI
  • The az CLI

Set Environment Variables

Set the following variables to match your ARO cluster and Azure storage account naming.
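
This guide does not prescribe specific variable names; the following is a minimal sketch using hypothetical names (AZR_RESOURCE_GROUP, AZR_CLUSTER, and so on) that the later snippets on this page reuse. Adjust the values to your environment.

    # Example values - replace with your own names
    export AZR_RESOURCE_GROUP=my-aro-rg            # resource group containing the ARO cluster and VNet
    export AZR_CLUSTER=my-aro-cluster              # name of the ARO cluster
    export AZR_VNET_NAME=my-aro-vnet               # VNet the ARO cluster is deployed into
    export AZR_STORAGE_ACCOUNT_NAME=myarofiles001  # globally unique, 3-24 lowercase letters and numbers
    export AZR_PRIVATE_ENDPOINT_NAME=my-aro-files-pe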

Dynamically get the region the ARO cluster is in
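
For example, assuming the variables above, the cluster's region can be read from the ARO resource with the az CLI:

    export AZR_REGION=$(az aro show \
      --name "$AZR_CLUSTER" \
      --resource-group "$AZR_RESOURCE_GROUP" \
      --query location -o tsv)
    echo "$AZR_REGION"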

The Azure private endpoint needs to be placed in a subnet. The general best practice is to place private endpoints in their own subnet; often, however, this is not possible due to the VNet design, and the private endpoint will need to be placed in the worker node subnet.

Option 1: Retrieve the worker node subnet in which the private endpoint will be created.
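
One way to do this (a sketch that assumes the first worker profile's subnet is the one you want) is to read the subnet ID from the cluster's worker profile:

    export PRIVATE_ENDPOINT_SUBNET_ID=$(az aro show \
      --name "$AZR_CLUSTER" \
      --resource-group "$AZR_RESOURCE_GROUP" \
      --query 'workerProfiles[0].subnetId' -o tsv)
    echo "$PRIVATE_ENDPOINT_SUBNET_ID"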

Option 2: Manually specify the private endpoint subnet and VNet you would like to use.
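
Alternatively, look up the subnet ID by name; the subnet name below is a placeholder for your dedicated private endpoint subnet:

    export PRIVATE_ENDPOINT_SUBNET_ID=$(az network vnet subnet show \
      --resource-group "$AZR_RESOURCE_GROUP" \
      --vnet-name "$AZR_VNET_NAME" \
      --name private-endpoints \
      --query id -o tsv)
    echo "$PRIVATE_ENDPOINT_SUBNET_ID"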

Self-Provision a Storage Account

  1. Create the storage account that the private endpoint will be attached to
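
A sketch of the storage account creation. The kind and SKU shown here are illustrative defaults, and the --public-network-access flag (which requires a reasonably recent az CLI) locks the account down so that traffic flows only through the private endpoint:

    az storage account create \
      --name "$AZR_STORAGE_ACCOUNT_NAME" \
      --resource-group "$AZR_RESOURCE_GROUP" \
      --location "$AZR_REGION" \
      --kind StorageV2 \
      --sku Standard_LRS \
      --public-network-access Disabled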

Create/Configure the Private Endpoint

  1. Create private endpoint
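
A sketch of the private endpoint creation, targeting the file sub-resource of the storage account created above and reusing the variables defined earlier:

    STORAGE_ACCOUNT_ID=$(az storage account show \
      --name "$AZR_STORAGE_ACCOUNT_NAME" \
      --resource-group "$AZR_RESOURCE_GROUP" \
      --query id -o tsv)

    az network private-endpoint create \
      --name "$AZR_PRIVATE_ENDPOINT_NAME" \
      --resource-group "$AZR_RESOURCE_GROUP" \
      --subnet "$PRIVATE_ENDPOINT_SUBNET_ID" \
      --private-connection-resource-id "$STORAGE_ACCOUNT_ID" \
      --group-id file \
      --connection-name "${AZR_PRIVATE_ENDPOINT_NAME}-connection"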

DNS Resolution for Private Connection

  1. Configure the private DNS zone for the private link connection

In order to use the private endpoint connection, you will need to create a private DNS zone. If DNS is not configured correctly, connections will resolve the storage account's public name (file.core.windows.net) instead of the private connection's domain, which is prefixed with 'privatelink' (privatelink.file.core.windows.net).

If you are using a custom or on-premises DNS server on your network, clients must be able to resolve the FQDN of the storage account endpoint (for example, StorageAccountA.privatelink.file.core.windows.net) to the private endpoint IP address. You can do this either by delegating the privatelink subdomain to the private DNS zone for the VNet, or by configuring the zone on your own DNS server and adding the appropriate A records.

Note: For Microsoft Azure Government (MAG) customers, refer to the Azure Government documentation on Private Endpoint DNS and custom DNS configuration.
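
A sketch of creating the private DNS zone and linking it to the cluster's VNet. The zone name below is the public-cloud value; Azure Government uses a different suffix. If the VNet lives in a different resource group, pass its full resource ID to --virtual-network.

    az network private-dns zone create \
      --resource-group "$AZR_RESOURCE_GROUP" \
      --name privatelink.file.core.windows.net

    az network private-dns link vnet create \
      --resource-group "$AZR_RESOURCE_GROUP" \
      --zone-name privatelink.file.core.windows.net \
      --name "${AZR_CLUSTER}-files-dns-link" \
      --virtual-network "$AZR_VNET_NAME" \
      --registration-enabled false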

  2. Retrieve the private IP from the private link connection.

  3. Create the DNS records for the private link connection.

  4. Test private endpoint connectivity from a VM or an OpenShift worker node; the lookup should return the private endpoint IP.

A sketch covering these three steps is shown below.
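
The following assumes the private endpoint has a single network interface and reuses the variables defined earlier.

    # 2. Retrieve the private IP assigned to the private endpoint's NIC
    PE_NIC_ID=$(az network private-endpoint show \
      --name "$AZR_PRIVATE_ENDPOINT_NAME" \
      --resource-group "$AZR_RESOURCE_GROUP" \
      --query 'networkInterfaces[0].id' -o tsv)
    PE_PRIVATE_IP=$(az network nic show --ids "$PE_NIC_ID" \
      --query 'ipConfigurations[0].privateIPAddress' -o tsv)

    # 3. Create an A record for the storage account in the privatelink zone
    az network private-dns record-set a create \
      --name "$AZR_STORAGE_ACCOUNT_NAME" \
      --zone-name privatelink.file.core.windows.net \
      --resource-group "$AZR_RESOURCE_GROUP"
    az network private-dns record-set a add-record \
      --record-set-name "$AZR_STORAGE_ACCOUNT_NAME" \
      --zone-name privatelink.file.core.windows.net \
      --resource-group "$AZR_RESOURCE_GROUP" \
      --ipv4-address "$PE_PRIVATE_IP"

    # 4. From a VM or worker node inside the VNet, the public FQDN should now
    #    resolve (via the privatelink CNAME) to the private endpoint IP
    nslookup "${AZR_STORAGE_ACCOUNT_NAME}.file.core.windows.net"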

Configure ARO Storage Resources

  1. Log in to your cluster
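
For example, using the kubeadmin credentials and API server URL from the az CLI (a sketch; you can also log in with a token from the OpenShift console):

    API_SERVER=$(az aro show \
      --name "$AZR_CLUSTER" \
      --resource-group "$AZR_RESOURCE_GROUP" \
      --query apiserverProfile.url -o tsv)
    KUBEADMIN_PASSWORD=$(az aro list-credentials \
      --name "$AZR_CLUSTER" \
      --resource-group "$AZR_RESOURCE_GROUP" \
      --query kubeadminPassword -o tsv)

    oc login "$API_SERVER" --username kubeadmin --password "$KUBEADMIN_PASSWORD"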

  2. Set ARO Cluster permissions
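
The exact permissions depend on the provisioner you use. As a sketch, one common pattern is to allow the persistent-volume-binder service account in kube-system to create and read the secrets that hold the storage account key:

    oc create clusterrole azure-secret-reader \
      --verb=create,get \
      --resource=secrets
    oc adm policy add-cluster-role-to-user azure-secret-reader \
      system:serviceaccount:kube-system:persistent-volume-binder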

  3. Create a storage class
  • Using an existing storage account
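
A sketch of a StorageClass that points the Azure File CSI driver at the existing storage account. The class name, skuName, and secretNamespace values are assumptions; adjust them to your environment, and replace the two placeholders with the resource group and storage account name used earlier.

    # Save as azurefile-private-sc.yaml, then: oc apply -f azurefile-private-sc.yaml
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: azurefile-private
    provisioner: file.csi.azure.com
    parameters:
      resourceGroup: my-aro-rg         # replace with the value of AZR_RESOURCE_GROUP
      storageAccount: myarofiles001    # replace with the value of AZR_STORAGE_ACCOUNT_NAME
      secretNamespace: kube-system
      skuName: Standard_LRS
    reclaimPolicy: Delete
    volumeBindingMode: Immediate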

Test it out

  1. Create a PVC
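
For example (the claim name and size are arbitrary choices for this test):

    # Save as azurefile-pvc.yaml and apply in a test project: oc apply -f azurefile-pvc.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-azurefile
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5Gi
      storageClassName: azurefile-private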

  2. Create a Pod to write to the Azure Files Volume
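
A sketch of a writer pod; the image, mount path, and file name are placeholder choices. It appends a line to a file on the share every few seconds.

    # Save as azurefile-writer-pod.yaml, then: oc apply -f azurefile-writer-pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-azurefile-writer
    spec:
      containers:
        - name: writer
          image: registry.access.redhat.com/ubi9/ubi-minimal
          command: ["/bin/sh", "-c"]
          # Append a line to the shared file every few seconds
          args: ["while true; do echo 'hello azure files' >> /mnt/azurefile/out.txt; sleep 5; done"]
          volumeMounts:
            - name: azurefile
              mountPath: /mnt/azurefile
      volumes:
        - name: azurefile
          persistentVolumeClaim:
            claimName: pvc-azurefile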

    It may take a few minutes for the pod to be ready.

  3. Wait for the Pod to be ready
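
For example:

    oc wait pod/test-azurefile-writer --for=condition=Ready --timeout=300s
    oc get pod test-azurefile-writer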

  4. Create a Pod to read from the Azure Files Volume
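
A sketch of a reader pod that follows the same file (again, the image and paths are placeholders):

    # Save as azurefile-reader-pod.yaml, then: oc apply -f azurefile-reader-pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-azurefile-reader
    spec:
      containers:
        - name: reader
          image: registry.access.redhat.com/ubi9/ubi-minimal
          command: ["/bin/sh", "-c"]
          # Create the file if it does not exist yet, then follow it
          args: ["touch /mnt/azurefile/out.txt; tail -f /mnt/azurefile/out.txt"]
          volumeMounts:
            - name: azurefile
              mountPath: /mnt/azurefile
      volumes:
        - name: azurefile
          persistentVolumeClaim:
            claimName: pvc-azurefile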

  5. Verify the second Pod can read the Azure Files Volume
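
For example, by streaming the reader pod's logs:

    oc logs -f test-azurefile-reader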

    You should see a stream of “hello azure files”
