Configure an ARO cluster with Azure Files using a private endpoint
This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.
Effectively securing your Azure Storage Account requires more than just basic access controls. Azure Private Endpoints provide a powerful layer of protection by establishing a direct, private connection between your virtual network and storage resources—completely bypassing the public internet. This approach not only minimizes your attack surface and the risk of data exfiltration, but also enhances performance through reduced latency, simplifies network architecture, supports compliance efforts, and enables secure hybrid connectivity. It’s a comprehensive solution for protecting your critical cloud data.
Configuring private endpoint access to an Azure Storage Account involves three key steps:
Create the storage account
Create the private endpoint
Define a new storage class for Azure Red Hat OpenShift (ARO)
Note: In many environments, Azure administrators use automation to streamline steps 1 and 2. This typically ensures the storage account is provisioned according to organizational policies—such as encryption and security configurations—along with the automatic creation of the associated private endpoint.
WARNING: This approach does not work on FIPS-enabled clusters, because the CIFS protocol is largely non-compliant with FIPS cryptographic requirements. See the following for more information:
Prerequisites
- An ARO cluster that you are logged in to
- The oc CLI
Set Environment Variables
Set the following variables to match your ARO cluster and Azure storage account naming.
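For example, the variables might look like the following. All names here are illustrative placeholders, not values from your environment; adjust them to match your cluster and naming conventions.

```shell
# Example values -- replace with your own cluster and account names.
export AZR_CLUSTER_NAME="my-aro-cluster"              # name of the ARO cluster
export AZR_RESOURCE_GROUP="my-aro-rg"                 # resource group the cluster was created in
export AZR_STORAGE_ACCOUNT_NAME="arofilespe$RANDOM"   # must be globally unique, 3-24 lowercase alphanumerics
export PRIVATE_ENDPOINT_NAME="${AZR_STORAGE_ACCOUNT_NAME}-pe"
```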
Dynamically get the region the ARO cluster is in
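One way to do this, assuming the example variables above, is to query the cluster resource for its location:

```shell
# Query the cluster's Azure region so later commands can reuse it.
AZR_REGION=$(az aro show \
  --name "$AZR_CLUSTER_NAME" \
  --resource-group "$AZR_RESOURCE_GROUP" \
  --query location -o tsv)
echo "$AZR_REGION"
```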
The Azure private endpoint must be placed in a subnet. As a general best practice, private endpoints get their own dedicated subnet. Often, however, the vnet design makes this impossible, and the private endpoint will need to be placed in the worker node subnet.
Option 1: Retrieve the worker node subnet in which the private endpoint will be created.
Option 2: Manually specify the subnet and vnet you would like to use for the private endpoint.
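The two options above might look like the following sketch, assuming the example variables defined earlier. The subnet and vnet names in Option 2 are placeholders.

```shell
# Option 1: reuse the worker node subnet (full resource ID) from the cluster.
PRIVATE_ENDPOINT_SUBNET_ID=$(az aro show \
  --name "$AZR_CLUSTER_NAME" \
  --resource-group "$AZR_RESOURCE_GROUP" \
  --query 'workerProfiles[0].subnetId' -o tsv)

# Option 2: point at a dedicated private-endpoint subnet instead
# (uncomment and set these example names to match your vnet design).
# PRIVATE_ENDPOINT_VNET="my-vnet"
# PRIVATE_ENDPOINT_SUBNET_ID="/subscriptions/<sub>/resourceGroups/${AZR_RESOURCE_GROUP}/providers/Microsoft.Network/virtualNetworks/${PRIVATE_ENDPOINT_VNET}/subnets/private-endpoints"
```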
Self-Provision a Storage Account
- Create the storage account and attach the private endpoint to it
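A minimal sketch of the account creation follows, using the example variables from earlier. The SKU and network settings shown are a common baseline for Azure Files, not a complete hardening guide; follow your organization's policies.

```shell
# Create a Premium FileStorage account for Azure Files shares.
az storage account create \
  --name "$AZR_STORAGE_ACCOUNT_NAME" \
  --resource-group "$AZR_RESOURCE_GROUP" \
  --location "$AZR_REGION" \
  --sku Premium_LRS \
  --kind FileStorage

# Deny public network traffic by default; access will flow through
# the private endpoint created in the next step.
az storage account update \
  --name "$AZR_STORAGE_ACCOUNT_NAME" \
  --resource-group "$AZR_RESOURCE_GROUP" \
  --default-action Deny
```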
Create/Configure the Private Endpoint
- Create private endpoint
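Assuming the subnet ID retrieved earlier, the private endpoint can be created against the storage account's `file` sub-resource, along these lines:

```shell
# Look up the storage account's resource ID, then attach a private
# endpoint for the "file" sub-resource to the chosen subnet.
STORAGE_ACCOUNT_ID=$(az storage account show \
  --name "$AZR_STORAGE_ACCOUNT_NAME" \
  --resource-group "$AZR_RESOURCE_GROUP" \
  --query id -o tsv)

az network private-endpoint create \
  --name "$PRIVATE_ENDPOINT_NAME" \
  --resource-group "$AZR_RESOURCE_GROUP" \
  --subnet "$PRIVATE_ENDPOINT_SUBNET_ID" \
  --private-connection-resource-id "$STORAGE_ACCOUNT_ID" \
  --group-id file \
  --connection-name "${AZR_STORAGE_ACCOUNT_NAME}-connection"
```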
DNS Resolution for Private Connection
- Configure the private DNS zone for the private link connection
To use the private endpoint connection, you must create a private DNS zone. If DNS is not configured correctly, clients will resolve the storage account's public name (file.core.windows.net) instead of the private connection's domain, which is prefixed with 'privatelink'.
If you are using a custom or on-premises DNS server, clients must be able to resolve the FQDN for the storage account endpoint to the private endpoint IP address. Configure your DNS server to delegate the privatelink subdomain to the private DNS zone for the vnet, or add an A record for StorageAccountA.privatelink.file.core.windows.net pointing to the private endpoint IP address.
Note: Microsoft Azure Government (MAG) customers should follow GOV Private Endpoint DNS Custom DNS Config.
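A sketch of the zone creation and vnet link follows, assuming the example variables from earlier; `PRIVATE_ENDPOINT_VNET` is a placeholder for the vnet containing your private endpoint subnet.

```shell
# Create the privatelink zone for Azure Files.
az network private-dns zone create \
  --resource-group "$AZR_RESOURCE_GROUP" \
  --name "privatelink.file.core.windows.net"

# Link the zone to the vnet so cluster nodes resolve through it.
az network private-dns link vnet create \
  --resource-group "$AZR_RESOURCE_GROUP" \
  --zone-name "privatelink.file.core.windows.net" \
  --name "${AZR_CLUSTER_NAME}-dns-link" \
  --virtual-network "$PRIVATE_ENDPOINT_VNET" \
  --registration-enabled false
```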
- Retrieve the private IP from the private link connection:
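One way to do this, assuming the endpoint name set earlier, is to read the IP from the endpoint's custom DNS configuration:

```shell
# The private endpoint's DNS config carries the allocated private IP.
PRIVATE_IP=$(az network private-endpoint show \
  --name "$PRIVATE_ENDPOINT_NAME" \
  --resource-group "$AZR_RESOURCE_GROUP" \
  --query 'customDnsConfigs[0].ipAddresses[0]' -o tsv)
echo "$PRIVATE_IP"
```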
- Create the DNS records for the private link connection:
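Assuming the private IP retrieved above, an A record mapping the storage account name to that IP might be created like so:

```shell
# Map <account>.privatelink.file.core.windows.net to the private IP.
az network private-dns record-set a create \
  --resource-group "$AZR_RESOURCE_GROUP" \
  --zone-name "privatelink.file.core.windows.net" \
  --name "$AZR_STORAGE_ACCOUNT_NAME"

az network private-dns record-set a add-record \
  --resource-group "$AZR_RESOURCE_GROUP" \
  --zone-name "privatelink.file.core.windows.net" \
  --record-set-name "$AZR_STORAGE_ACCOUNT_NAME" \
  --ipv4-address "$PRIVATE_IP"
```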
- Test private endpoint connectivity
- On a VM or OpenShift worker node
- Should return:
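For example, a DNS lookup from inside the vnet should resolve through the privatelink zone to a private address rather than a public one (the IP below is a placeholder):

```shell
nslookup "${AZR_STORAGE_ACCOUNT_NAME}.file.core.windows.net"

# Expected shape of the answer (addresses will differ):
#   Name:    <account>.privatelink.file.core.windows.net
#   Address: 10.x.x.x
```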
Configure ARO Storage Resources
Login to your cluster
Set ARO Cluster permissions
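A common pattern for Azure Files provisioning is to let the persistent-volume-binder service account manage the secret that holds the storage account key. The role and binding below are a sketch of that pattern; the role name is illustrative.

```shell
# Allow the persistent-volume-binder service account to create and
# read secrets (used to store the storage account key).
cat <<EOF | oc apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: azure-secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create", "get"]
EOF

oc adm policy add-cluster-role-to-user azure-secret-reader \
  system:serviceaccount:kube-system:persistent-volume-binder
```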
- Create a storage class
- Using an existing storage account
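A minimal StorageClass pointing the Azure File CSI driver at the existing account might look like the following. The class name and mount options are illustrative; the `resourceGroup` and `storageAccount` parameters follow the file.csi.azure.com driver.

```shell
cat <<EOF | oc apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azure-file-private
provisioner: file.csi.azure.com
parameters:
  resourceGroup: ${AZR_RESOURCE_GROUP}
  storageAccount: ${AZR_STORAGE_ACCOUNT_NAME}
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - mfsymlinks
  - nobrl
EOF
```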
Test it out
Create a PVC
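For example, a claim against the storage class sketched above (names and size are illustrative):

```shell
cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-file-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: azure-file-private
EOF
```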
Create a Pod to write to the Azure Files Volume
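A simple writer pod, assuming the example PVC name above, could loop and append to a file on the mounted share:

```shell
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: azure-file-writer
spec:
  containers:
  - name: writer
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["/bin/sh", "-c"]
    args: ["while true; do echo 'hello azure files' >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: azure-file
      mountPath: /data
  volumes:
  - name: azure-file
    persistentVolumeClaim:
      claimName: azure-file-pvc
EOF
```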
It may take a few minutes for the pod to be ready.
Wait for the Pod to be ready
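Assuming the example pod name above:

```shell
oc wait --for=condition=Ready pod/azure-file-writer --timeout=300s
```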
Create a Pod to read from the Azure Files Volume
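A matching reader pod, assuming the same example PVC, could tail the file the writer appends to:

```shell
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: azure-file-reader
spec:
  containers:
  - name: reader
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["/bin/sh", "-c"]
    args: ["tail -f /data/out.txt"]
    volumeMounts:
    - name: azure-file
      mountPath: /data
  volumes:
  - name: azure-file
    persistentVolumeClaim:
      claimName: azure-file-pvc
EOF
```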
Verify the second Pod can read the Azure Files volume
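Assuming the example reader pod name above:

```shell
oc logs -f azure-file-reader
```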
You should see a stream of “hello azure files”