This is a guest blog by Matt Sarrel, Director of Technical Marketing, MinIO.

Object storage is a foundational component of cloud-native architectures. It provides the framework for developing and operating microservices and other container-based workloads across disparate infrastructure and environments. As a result, DevOps and DataOps teams are hungry for object storage on OpenShift and other Kubernetes distributions. These fast-moving teams often provision object storage for emerging applications on the public cloud - but in doing so they create challenges for IT teams tasked with maintaining security and compliance while keeping costs low.

In MinIO, IT teams now have a way to build their own large-scale multitenant object storage as a service on Red Hat OpenShift and operate it across multiple public and private clouds. MinIO is built to take full advantage of the OpenShift architecture to simplify deploying and managing multitenant object storage.    

Together, MinIO and OpenShift enable organizations to realize hybrid cloud initiatives while avoiding cloud lock-in. Enterprises can create and control a private cloud wherever they run OpenShift, with Kubernetes providing compute infrastructure and MinIO providing the object storage. Combining the dependable OpenShift platform with the performance, reliability, and scalability of MinIO Kubernetes-native object storage gives enterprise IT the power to consolidate disparate storage silos and securely expose them to DevOps teams and their applications.  

The MinIO Kubernetes Operator and Operator Console provide familiar and intuitive management within OpenShift for the fastest and most widely implemented cloud-native object storage on the planet. MinIO is everywhere - with over 620 million Docker pulls and thousands of production deployments. There are more than 550,000 hosts running MinIO in AWS, Azure, and GCP. Fully S3-compatible, MinIO serves as the primary storage tier for a diverse set of workloads that are critical components of today’s application stack, including Apache Kafka, Apache Spark, TensorFlow, KubeFlow, Presto/Starburst, and more.

The MinIO Operator and the MinIO oc plug-in simplify the deployment and management of MinIO object storage on OpenShift. The result is object storage that easily integrates into your existing IT management and DevOps tool sets. MinIO tenants can be deployed on demand and fully managed using either the CLI or GUI.

MinIO Tenant Architecture

 MinIO is built for multitenancy on OpenShift. Since the server binary is fast and lightweight, MinIO's operator is able to densely co-locate tenants and use resources efficiently. Tenants are fully isolated from one another. Each tenant is its own namespace, forming a logical cluster of independent server pools. Tenants can have different storage capacity, CPU and memory resources, and number of pods, as well as separate configurations for identity providers, encryption, and versions. MinIO tenants scale independently while isolation protects them from disruption and potential downtime due to another tenant’s upgrades, updates, and configuration changes. 

The following diagram describes the architecture of a MinIO tenant deployed into Kubernetes:

Tenant Architecture

Getting Started with MinIO on OpenShift

There are a few ways to install the MinIO Operator on OpenShift, and you are free to choose the one that best suits your requirements.

Prerequisites

Red Hat OpenShift 4.7 or later

The cluster must be registered for Red Hat Marketplace with the necessary namespace. See Register OpenShift cluster with Red Hat Marketplace for complete instructions.

You must log into OpenShift with an account that has cluster-admin privileges to install Operators from the Red Hat Marketplace.

You should also install the OpenShift CLI (oc) for optional command-line access.
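
For example, once the oc CLI is installed, logging in might look like the following; the API URL and user name are placeholders for your own cluster:

# Log into the cluster with a cluster-admin account (placeholder URL and user).
oc login https://api.<CLUSTER-DOMAIN>:6443 -u <ADMIN-USER>

# Confirm that the client can reach the cluster and that you are logged in.
oc whoami
oc version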

The easiest way to get started is to install the MinIO Operator using Red Hat Marketplace. Our entry contains detailed walk-throughs, documentation, and best practices for running MinIO on OpenShift. 

Procedure

Step 1: Purchase MinIO Operator from Red Hat Marketplace

Open https://marketplace.redhat.com/ and type "MinIO Hybrid Cloud Object Storage" into the search box.

From the MinIO page, click Purchase to purchase the MinIO Operator.

Step 2: Operator Installation from Red Hat Marketplace

  1. Log into your Red Hat Marketplace account at https://marketplace.redhat.com.
  2. From the main menu, click Workspace > My Software > MinIO Hybrid Cloud Object Storage > Install Operator.
  3. In the Update Channel section, select an update channel.
  4. In the Approval Strategy section, select either Automatic or Manual. The approval strategy determines how operator upgrades are processed.
  5. For Installation Mode, select All namespaces on the cluster.
    1. For OpenShift consoles managing multiple clusters, under Target Clusters, select the checkbox for each cluster on which you want to install the MinIO Operator. Ensure the Namespace Scope is set for All namespaces.
  6. Click Install. It may take several minutes for installation to complete.
  7. Once installation is complete, the status changes from Installing to Up to date.
  8. For further information, see the Red Hat Marketplace Operator documentation.

Step 3: Verification of operator installation

  1. Once the status changes to Up to date, click the vertical ellipsis and select Cluster Console.
  2. Open the cluster where you installed the product.
  3. Go to Operators > Installed Operators.
  4. For the Project dropdown, select openshift-operators.
  5. The list of operators should include a row for MinIO Operator. The Status column for MinIO Operator reads Succeeded once installation completes.
  6. Click MinIO Operator to open the Operator Details page.

Congratulations, you’ve installed MinIO Operator for OpenShift.

The next step is to create your first tenant. You can create a MinIO tenant using either the Command Line Interface (CLI) or the OpenShift Operator Hub tools.

Create a MinIO Tenant Using the CLI

Prerequisites

Local Persistent Volumes and Associated Storage Class

MinIO strongly recommends using locally attached persistent volumes as the storage for MinIO tenants; Local Persistent Volumes give MinIO direct access to locally attached storage for the best performance. The following example YAML describes a local persistent volume that meets the stated requirements:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: <PV-NAME>
spec:
  capacity:
    storage: 1Ti
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: </PATH/TO/DISK>
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <NODE-NAME.DOMAIN.TLD>

Replace values surrounded by angle brackets <VALUE> with the appropriate values for each node's locally attached disk. Create one PV with the necessary capacity for each volume the tenant requires. For example, a MinIO tenant using 16 disks requires 16 Persistent Volumes.

Create a storage class for the local volumes associated with the nodes on which you deploy MinIO. MinIO generates Persistent Volume Claims with the specified storage class and binds only to Persistent Volumes within that class. The storage class must have volumeBindingMode set to WaitForFirstConsumer, and its name must match the storage class applied to each persistent volume that the MinIO tenant uses. The following example YAML describes a storage class that meets the stated requirements:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
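
Assuming you saved the manifests above to local files (the file names here are only examples), you can apply them and verify that the volumes register as Available:

# Apply the Persistent Volume and StorageClass manifests (example file names).
oc apply -f local-pv.yaml
oc apply -f local-storageclass.yaml

# Verify: each PV should show STATUS Available with STORAGECLASS local-storage.
oc get pv
oc get storageclass local-storage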

Create a Namespace

MinIO supports deploying no more than one MinIO tenant per Kubernetes namespace. Create the namespace before creating the tenant.
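
For example, to create a namespace for the tenant used throughout this walk-through:

# Create a dedicated namespace for the tenant.
oc create namespace minio-tenant-1

# On OpenShift you can equivalently create it as a project:
# oc new-project minio-tenant-1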

Check Security Context Constraints

The MinIO Operator deploys pods using the following default Security Context per pod:

securityContext:
 runAsUser: 1000
 runAsGroup: 1000
 runAsNonRoot: true
 fsGroup: 1000

Certain OpenShift Security Context Constraints limit the allowed UID or GID for a pod such that MinIO cannot deploy the tenant successfully. Ensure that the project in which the operator deploys the tenant has sufficient SCC settings that allow the default pod security context. The following command returns the optimal value for the securityContext:

oc get namespace <namespace> -o=jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.supplemental-groups}{"\n"}'

The command returns output similar to the following:

1056560000/10000

Take note of this value before the slash for use in this procedure.
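
As a small sketch, you can capture that leading value in a shell variable for later use; the variable name and the namespace are examples:

# Keep only the start of the allowed range (the value before the slash),
# for example 1056560000, to reuse when overriding the tenant securityContext.
SCC_UID=$(oc get namespace minio-tenant-1 \
  -o=jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.supplemental-groups}' | cut -d/ -f1)
echo "${SCC_UID}"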

Procedure

Use the MinIO Plugin to Create the Tenant

After you have the MinIO Operator installed (using the OperatorHub or the oc plug-in), you can create your MinIO tenant. You can use either the GUI or the CLI; the result is the same.

The MinIO documentation Deploy a MinIO Tenant using the MinIO Plugin has complete instructions. The following provides an example of tenant creation using the CLI:

The following oc minio command creates a MinIO Tenant named minio-tenant-1:

  oc minio tenant create minio-tenant-1 \
    --servers 4                         \
    --volumes 16                        \
    --capacity 16Ti                     \
    --namespace minio-tenant-1          \
    --storage-class local-storage       \
    --output > tenant.yaml

The arguments are:

  • --servers: The number of MinIO servers to deploy in the tenant.
  • --volumes: The total number of volumes to provision for the cluster. The MinIO Operator generates one Persistent Volume Claim per volume. MinIO allocates volumes to each server using the formula volumes / servers = volumes per server. The example above requires 16 Persistent Volumes.
  • --capacity: The total capacity of the MinIO tenant. Supports standard Kubernetes units of quantity such as Pi (Pebibyte), Ti (Tebibyte), or Gi (Gibibyte). MinIO requests storage for each generated Persistent Volume Claim using the formula capacity / volumes = request per volume. The example above requests 1Ti per volume.
  • --namespace: The Kubernetes namespace in which the Operator deploys the tenant. The namespace must exist and have no other MinIO tenants.
  • --storage-class: The Kubernetes storage class to associate with each generated Persistent Volume Claim. The storage class must exist and have sufficient associated Persistent Volumes to satisfy the generated Persistent Volume Claims for the tenant.
  • --output: Outputs the YAML for generating the MinIO tenant. This option is required if the OpenShift cluster Security Context Constraints restrict pods to specific values; you can omit it if your OpenShift cluster has more permissive SCC settings.

To modify the YAML produced by the MinIO Kubernetes plug-in, use the --output flag and redirect the raw YAML to a file (tenant.yaml in the example above). Modify the spec.pools[n].securityContext and spec.console.securityContext settings to use a supported UID based on the SCC of your OpenShift cluster, then use oc apply -f (or kubectl apply -f) to apply the modified YAML object.
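
As a rough sketch, assuming the SCC range returned earlier starts at 1056560000, the relevant fragments of tenant.yaml would look something like the following; only the securityContext fields are shown, and the remaining generated fields are left untouched:

# Fragment of the generated tenant.yaml (other generated fields omitted).
# Replace the default 1000 with a UID/GID allowed by the namespace SCC,
# for example the value captured earlier (1056560000).
spec:
  pools:
    - securityContext:        # spec.pools[n].securityContext
        runAsUser: 1056560000
        runAsGroup: 1056560000
        runAsNonRoot: true
        fsGroup: 1056560000
      # ...remaining pool fields generated by the plug-in...
  console:
    securityContext:          # spec.console.securityContext
      runAsUser: 1056560000
      runAsGroup: 1056560000
      runAsNonRoot: true
      fsGroup: 1056560000

After editing, apply the file with oc apply -f tenant.yaml.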

Save the Tenant Credentials

MinIO outputs credentials for connecting to the MinIO tenant as part of the creation process:

Tenant 'minio-tenant-1' created in 'minio-tenant-1' Namespace
  Username: admin
  Password: dbc978c2-bfbe-41bf-9dc6-699c76bafcd0
+-------------+------------------------+----------------+--------------+-----------------+
| APPLICATION | SERVICE NAME           | NAMESPACE      | SERVICE TYPE | SERVICE PORT(S) |
+-------------+------------------------+----------------+--------------+-----------------+
| MinIO       | minio                  | minio-tenant-1 | ClusterIP    | 443             |
| Console     | minio-tenant-1-console | minio-tenant-1 | ClusterIP    | 9090,9443       |
+-------------+------------------------+----------------+--------------+-----------------+

Copy the credentials to a secure location, such as a password-protected key manager. MinIO does not display these credentials again.

MinIO tenants deploy with TLS enabled by default, where the MinIO Operator uses the Kubernetes certificates.k8s.io API to generate the required X.509 certificates. Each certificate is signed using the Kubernetes Certificate Authority (CA) configured during cluster deployment. While Kubernetes mounts this CA on Pods in the cluster, Pods do not trust it by default. To enable validation of the MinIO TLS certificates, copy the CA to a directory where the update-ca-certificates utility can find it and add it to the system trust store:

cp /var/run/secrets/kubernetes.io/serviceaccount/ca.crt /usr/local/share/ca-certificates/
update-ca-certificates

Connect to the Tenant

For applications internal to the Kubernetes cluster, you can connect directly to the MinIO service created by the operator. Use oc get svc --namespace NAMESPACE to retrieve the services for the tenant.

For applications external to the Kubernetes cluster, you must configure Ingress or a Load Balancer to expose the MinIO tenant services. Alternatively, you can use the oc port-forward command to temporarily forward traffic from the local host to the MinIO tenant.

  • The minio service provides access to MinIO Object Storage operations.
  • The *-console service provides access to the MinIO Console. The MinIO Console supports GUI administration of the MinIO Tenant.
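
A minimal sketch of temporary local access using port-forwarding, assuming the service names shown in the creation output above; the local port numbers are arbitrary, and each command runs in its own terminal:

# Forward the MinIO S3 API service to local port 8443.
oc port-forward service/minio 8443:443 --namespace minio-tenant-1

# Forward the MinIO Console service to local port 9443 (run in a second terminal).
oc port-forward service/minio-tenant-1-console 9443:9443 --namespace minio-tenant-1

The MinIO API is then reachable on local port 8443 and the Console on local port 9443.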

Create a MinIO Tenant with OpenShift OperatorHub

Prerequisites

Local Persistent Volumes and Associated Storage Class

MinIO strongly recommends using locally attached persistent volumes as the storage for MinIO tenants; Local Persistent Volumes give MinIO direct access to locally attached storage for the best performance. The following example YAML describes a local persistent volume that meets the stated requirements:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: <PV-NAME>
spec:
  capacity:
    storage: 1Ti
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: </PATH/TO/DISK>
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <NODE-NAME.DOMAIN.TLD>

Replace values surrounded by angle brackets <VALUE> with the appropriate values for each node's locally attached disk. Create one PV with the necessary capacity for each volume the tenant requires. For example, a MinIO tenant using 16 disks requires 16 Persistent Volumes.

Create a storage class for the local volumes associated with the nodes on which you deploy MinIO. MinIO generates Persistent Volume Claims with the specified storage class and only binds to Persistent Volumes within that class. The storage class must have volumeBindingMode set to WaitForFirstConsumer. The following example YAML describes a storage class that meets the stated requirements:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

The name of the storage class must match the storage class applied to each persistent volume which the MinIO tenant uses.

Create a Namespace

MinIO supports deploying no more than one MinIO tenant per Kubernetes namespace. Create the namespace before creating the tenant.

Create Kubernetes Secrets

The MinIO Operator looks for two Kubernetes opaque secrets to support creating the MinIO Tenant.

MinIO Root User Secret

Create an opaque secret with two data keys, where all values are Base64 encoded. The MinIO Operator uses this secret for setting the root user permissions. The name of the secret must match the value specified to the spec.credsSecret.name key in the tenant object specification.

  • accesskey: The access key for the root user.
  • secretkey: The corresponding secret key for the root user.

The value for both data keys should be a string that is long, secure, and unique. The following example YAML describes a secret that meets the stated requirements. The name minio-creds-secret assumes the tenant YAML has spec.credsSecret.name set to a matching value. Consider using the tenant name as a prefix to the secret name to ensure each tenant has its own secret (for example, minio-tenant-1 should have its MinIO secret named minio-tenant-1-creds-secret).

apiVersion: v1
kind: Secret
metadata:
 name: minio-creds-secret
type: Opaque
data:
 accesskey: bWluaW8=
 secretkey: bWluaW8xMjM=
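
As an alternative to hand-encoding Base64 values, the same secret can be created from literal values and oc performs the encoding; the values below decode to the example values in the YAML above and should be replaced with long, unique strings:

# Creates the root user secret; oc handles the Base64 encoding.
# Replace the literal values with long, secure, unique strings.
oc create secret generic minio-creds-secret \
  --namespace minio-tenant-1 \
  --from-literal=accesskey=minio \
  --from-literal=secretkey=minio123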

MinIO Console User Secret

Create an opaque secret with four data keys, where all values are Base64 encoded. The MinIO Operator uses this secret to configure the MinIO Console's access to the MinIO tenant. The name of the secret must match the value specified to the console.consoleSecret.name key in the tenant object specification.

  • CONSOLE_PBKDF_PASSPHRASE: The passphrase used by the MinIO Console to encode generated authentication tokens.
  • CONSOLE_PBKDF_SALT: The salt used by the MinIO Console to encode generated authentication tokens.
  • CONSOLE_ACCESS_KEY: The access key for the MinIO Console administrative user.
  • CONSOLE_SECRET_KEY: The corresponding secret key for the MinIO Console administrative user.

The value for all data keys should be a string that is long, secure, and unique. The following example YAML describes a secret that meets the stated requirements. The name console-secret assumes the tenant YAML has console.consoleSecret.name set to a matching value. Consider using the tenant name as a prefix to the secret name to ensure each tenant has its own secret (for example, minio-tenant-1 should have its MinIO Console secret named minio-tenant-1-console-secret).

apiVersion: v1
kind: Secret
metadata:
 name: console-secret
type: Opaque
data:
 CONSOLE_PBKDF_PASSPHRASE: U0VDUkVU
 CONSOLE_PBKDF_SALT: U0VDUkVU
 CONSOLE_ACCESS_KEY: WU9VUkNPTlNPTEVBQ0NFU1M=
 CONSOLE_SECRET_KEY: WU9VUkNPTlNPTEVTRUNSRVQ=
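
A similar sketch for the console secret, this time generating random values with openssl rather than hand-encoding Base64; the access key shown is the same placeholder as in the example YAML and should be replaced:

# Creates the console secret; oc handles the Base64 encoding.
# The openssl-generated strings and the access key are placeholders.
oc create secret generic console-secret \
  --namespace minio-tenant-1 \
  --from-literal=CONSOLE_PBKDF_PASSPHRASE="$(openssl rand -hex 32)" \
  --from-literal=CONSOLE_PBKDF_SALT="$(openssl rand -hex 32)" \
  --from-literal=CONSOLE_ACCESS_KEY=YOURCONSOLEACCESS \
  --from-literal=CONSOLE_SECRET_KEY="$(openssl rand -hex 32)"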

Check Security Context Constraints

The MinIO Operator deploys pods using the following default Security Context per pod:

securityContext:
 runAsUser: 1000
 runAsGroup: 1000
 runAsNonRoot: true
 fsGroup: 1000

Certain OpenShift Security Context Constraints limit the allowed UID or GID for a pod such that MinIO cannot deploy the tenant successfully. Ensure that the project in which the operator deploys the tenant has sufficient SCC settings that allow the default pod security context. The following command returns the optimal value for the securityContext:

oc get namespace <namespace> -o=jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.supplemental-groups}{"\n"}'

The command returns output similar to the following:

1056560000/10000

Take note of this value before the slash for use in this procedure.

Procedure

Access the MinIO Operator through OpenShift

After you have the MinIO Operator installed (using the OperatorHub or oc plug-in), you can create your MinIO tenant using the OperatorHub.

From the OpenShift web console, go to Operators > Installed Operators. From the Project dropdown, select openshift-operators. Select MinIO Operator from the list of installed operators.

Create the Tenant using the Form or YAML View

From the MinIO Operator detail page, click Create Tenant. Enter all required information into the Form view.

  • Ensure the Tenant Secret -> Name is set to the name of the MinIO Root User Kubernetes Secret created as part of the prerequisites.
  • Ensure the Console -> Console Secret -> Name is set to the name of the MinIO Console Kubernetes Secret created as part of the prerequisites.

You can also use the YAML view to perform more granular configuration of the MinIO tenant. Refer to the MinIO Custom Resource Definition Documentation for guidance on setting specific fields. MinIO also publishes examples for additional guidance in creating custom tenant YAML objects. Note that the OperatorHub YAML view supports creating only the MinIO tenant object. Do not specify any other objects as part of the YAML input.

If your OpenShift cluster Security Context Constraints restrict the supported pod security contexts, open the YAML View and locate the spec.pools[n].securityContext and spec.console.securityContext objects. Modify the securityContext settings to use a supported UID based on the SCC of your OpenShift cluster before creating the tenant.

Click Create to create the MinIO Tenant using the specified configuration. Use the credentials specified as part of the MinIO Root User secret to access the MinIO server.

Connect to the Tenant

For applications internal to the Kubernetes cluster, you can connect directly to the MinIO service created by the Operator. Use oc get svc --namespace NAMESPACE to retrieve the services for the tenant.

For applications external to the Kubernetes cluster, you must configure Ingress or a Load Balancer to expose the MinIO tenant services. Alternatively, you can use the oc port-forward command to temporarily forward traffic from the local host to the MinIO Tenant.

  • The minio service provides access to MinIO Object Storage operations.
  • The *-console service provides access to the MinIO Console. The MinIO Console supports GUI administration of the MinIO Tenant.

Build Cloud Object Storage as a Service with MinIO on Red Hat OpenShift

With MinIO and OpenShift, enterprise IT teams can quickly and easily provision multitenant object storage as a service across a wide variety of cloud architectures - public, private, multi, hybrid - and grow without downtime. DevOps and DataOps teams can have all the object storage they need for their most demanding workloads, within the guardrails IT establishes to meet performance, availability and security requirements.

The MinIO plug-in, Operator, and Console integrate with the OpenShift toolchain, making it easy to use MinIO within existing workflows on this enterprise-grade container platform. MinIO on OpenShift puts hybrid cloud object storage a mere click away, so get started today by installing from Red Hat Marketplace or the Red Hat Ecosystem Catalog.

MinIO and Red Hat Marketplace Resources

To learn more about MinIO hybrid cloud object storage and Red Hat Marketplace, check out the following resources:

