Red Hat OpenShift 4.13, based on Kubernetes 1.26 and CRI-O 1.26, focuses on three core themes to accelerate modern application development and delivery anywhere: enhanced security, hybrid cloud infrastructure flexibility, and scalability. We highlighted those core features in the What’s New in Red Hat OpenShift 4.13 blog. In this blog, we dive deeper into the notable innovations that make up OpenShift 4.13. A comprehensive list of the OpenShift 4.13 changes can be found in the OpenShift 4.13 Release Notes.

33 Customer-Requested Enhancements Delivered

In this latest release, we introduce 33 customer-requested product enhancements (RFEs). Our primary focus for this release is quality, stability, scale, and security. Among the most requested enhancements were zone awareness for OpenShift in VMware vSphere, cluster network expansion, the ability to log into nodes via the Red Hat Enterprise Linux CoreOS console, support for Azure user-defined tags, and the ability to install into a shared virtual private cloud (VPC) in Google Cloud Platform. In this blog, we review each of these highly sought-after features.

Manage OpenShift clusters at scale with hosted control planes (Public Preview)

Hosted control planes for OpenShift give you an option to optimize multicluster deployments at scale for efficient resource utilization and faster provisioning times. Hosted control planes is an OpenShift topology that separates platform management from workload management, enabling hybrid cloud operations at scale.

Hosted control planes are available for preview on Red Hat OpenShift Service on AWS (ROSA). You can also try hosted control planes for self-managed OpenShift on the following providers, all available as Technology Preview: bare metal, AWS, and OpenShift Virtualization. To get started with hosted control planes on self-managed OpenShift, install and enable the multicluster engine for Kubernetes operator version 2.3.

OpenShift based on Red Hat Enterprise Linux CoreOS 9.2

OpenShift 4.13 is based on Red Hat Enterprise Linux CoreOS (RHCOS), which now uses Red Hat Enterprise Linux (RHEL) 9.2 packages. This gives you the latest fixes, features, and enhancements, as well as the latest hardware support and driver updates. Note these three considerations when you plan your upgrade to OpenShift 4.13:

  1. Some component configuration options and services may have changed between RHEL 8.6 and RHEL 9.2, which means existing machine configuration files may no longer be valid.
  2. RHEL 6 base image containers are not supported on RHCOS container hosts, but are supported on RHEL 8 worker nodes as detailed in Red Hat Container Compatibility Matrix.
  3. Some device drivers are deprecated. Refer to the RHEL documentation for more information.

Zone aware OpenShift in VMware vSphere now Generally Available

You can deploy an OpenShift cluster to multiple VMware vSphere datacenters and vSphere clusters that run in a single VMware vCenter. You associate each vSphere datacenter with a region and each vSphere cluster with a zone to define logical failure domains. This configuration reduces the risk of a hardware failure or network outage causing your cluster to fail. This feature is available for new OpenShift 4.13 clusters.
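
As a rough sketch, the failure domains are declared in the vSphere platform section of install-config.yaml; the names, datacenter, cluster, and datastore paths below are hypothetical placeholders:

```yaml
platform:
  vsphere:
    failureDomains:
    - name: us-east-1a                # hypothetical failure domain name
      region: us-east                 # maps to a vSphere datacenter
      zone: us-east-1a                # maps to a vSphere cluster
      server: vcenter.example.com
      topology:
        datacenter: dc-east
        computeCluster: /dc-east/host/cluster-1a
        networks:
        - VM_Network
        datastore: /dc-east/datastore/datastore-1a
```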

Deploy OpenShift on encrypted VMs and encrypted storage in vSphere

With OpenShift 4.13, you can deploy OpenShift on top of encrypted vSphere virtual machines (VMs) and encrypt persistent volumes (PVs) provisioned by the vSphere CSI driver to comply with your corporate security policies or regulatory mandates. This allows traffic between the hypervisor and the storage backend to be encrypted. This feature requires a certified key provider (key management server, or KMS) in vSphere to securely manage the encryption keys. Review the requirements for encrypting virtual machines in order to take advantage of vSphere encryption.
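
For the storage side, a minimal sketch of a StorageClass that provisions encrypted PVs through the vSphere CSI driver, assuming a hypothetical vSphere storage policy named ocp-encryption-policy with encryption enabled:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: thin-csi-encrypted
provisioner: csi.vsphere.vmware.com
parameters:
  storagePolicyName: ocp-encryption-policy   # hypothetical encryption-enabled policy
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```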

VMware vSphere version 8 support added

OpenShift 4.13 now supports VMware vSphere version 8.0. You can also install OpenShift 4.12 on VMware vSphere version 8.0. Refer to VMware vSphere infrastructure requirements for additional information.

OpenShift on VMware Cloud Verified clouds

We often receive inquiries about running OpenShift on various vSphere clouds. To clarify, you can host OpenShift on a VMware vSphere infrastructure on-premises or on a VMware Cloud Verified provider that meets the requirements outlined in the VMware vSphere infrastructure requirements.

More topology options for edge computing and regions for public clouds

OpenShift 4.13 continues to add to OpenShift’s flexibility by allowing more workloads in more places. Deploying a compact three-node cluster is now supported on Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and VMware vSphere. Three-node clusters contain only three control plane nodes, so the control plane and cluster workloads run together. This provides smaller, more resource-efficient clusters in resource-constrained environments for cluster administrators and developers to use for development, testing, and production purposes. Refer to Installing a three-node cluster on AWS, Installing a three-node cluster on Azure, Installing a three-node cluster on GCP, and Installing a three-node cluster on vSphere for more information.
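
A minimal install-config.yaml sketch of the compact topology: the control plane machine pool is set to three replicas and the compute pool to zero, so workloads schedule onto the control plane nodes (the domain and cluster name are illustrative):

```yaml
apiVersion: v1
baseDomain: example.com        # hypothetical base domain
metadata:
  name: compact-cluster
controlPlane:
  name: master
  replicas: 3                  # control plane nodes also run workloads
compute:
- name: worker
  replicas: 0                  # no dedicated worker nodes
```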

In addition, you can now deploy single node OpenShift (SNO) on x86-based instances in AWS to provide a simple, economical development topology option for testing new applications before rolling them out to single nodes operating in remote locations.  

Support for single node Arm-based bare metal deployments is also new. This paves the way for a vast array of highly efficient edge deployment configurations. Combining high performance with low power consumption means new applications can run close to their data sources, thus delivering insights quickly and locally.  

We’ve also added support for several new regions for the following two cloud providers:

  • AWS: ap-south-2 (Hyderabad, India), ap-southeast-4 (Melbourne, Australia), eu-central-2 (Zurich, Switzerland), eu-south-2 (Zaragoza, Spain), and me-central-1 (United Arab Emirates)
  • GCP: europe-southwest1 (Madrid, Spain), europe-west8 (Milan, Italy), europe-west9 (Paris, France), europe-west12 (Turin, Italy), me-west1 (Tel Aviv, Israel), southamerica-west1 (Santiago, Chile), us-east5 (Columbus, Ohio, USA), and us-south1 (Dallas, Texas, USA)

Expand cluster network

In OpenShift 4.13, the cluster administrator can change the CIDR mask of the cluster network to increase the number of available IP addresses and add more nodes to the cluster. For example, if you previously deployed a cluster network with 10.128.0.0/19 and hostPrefix: 23, your cluster could only accommodate 16 cluster nodes. To expand the cluster, you would change the CIDR mask to /14, which allows up to 510 nodes.
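
As a rough sketch, the change is made on the cluster-scoped Network configuration resource (for example, by patching network.config.openshift.io/cluster); the values below widen the example above from /19 to /14 while keeping the same network base and host prefix:

```yaml
apiVersion: config.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14   # widened from /19; each node still receives a /23
    hostPrefix: 23
```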

Single click control plane scaling on Azure and Google Cloud Platform

You can now scale control plane nodes automatically in an OpenShift cluster on Azure and Google Cloud Platform, just as you would your worker nodes. This leverages control plane machine sets, introduced in OpenShift 4.12, to manage the cluster’s control plane machines and builds additional automation on top of existing Machine API concepts. This operational flexibility is especially useful for handling growth or a control plane node failure.

New OpenShift 4.13 clusters on Azure have a control plane machine set that is active by default. For existing OpenShift on Azure clusters that upgrade to version 4.13, an inactive custom resource (CR) is generated, which you can activate after you verify that the CR values are correct for your control plane machines. Refer to Getting started with the Control Plane Machine Set Operator for additional information.
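
As a sketch, activation comes down to flipping the state field on the generated ControlPlaneMachineSet resource once you have verified its values (the resource is named cluster in the openshift-machine-api namespace; other fields are omitted here):

```yaml
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
  state: Active     # switch from Inactive after verifying the generated values
  replicas: 3
```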

Automatically scale applications based on custom metrics

The Custom Metrics Autoscaler operator, based on Kubernetes Event Driven Autoscaler (KEDA), is generally available in OpenShift 4.13. This autoscaler enables developers to horizontally scale the number of pods for their application workloads based on resource utilization metrics (CPU and memory usage), events, and custom metrics.
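
A minimal ScaledObject sketch, assuming a hypothetical Deployment named frontend and scaling on CPU utilization; Prometheus, memory, and Kafka triggers follow the same pattern:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: frontend-scaler
spec:
  scaleTargetRef:
    name: frontend          # hypothetical Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
  - type: cpu
    metricType: Utilization
    metadata:
      value: "60"           # target average CPU utilization in percent
```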

NUMA-aware scheduling with the NUMA Resources Operator is Generally Available

NUMA-aware scheduling with the NUMA Resources Operator, previously introduced as a Technology Preview in OpenShift 4.10, is now Generally Available. The NUMA Resources Operator deploys a NUMA-aware secondary scheduler that makes scheduling decisions for workloads based on a complete picture of available NUMA zones in clusters. This enhanced NUMA-aware scheduling makes sure that latency-sensitive workloads are processed in a single NUMA zone for maximum efficiency and performance. This update adds fine-tuning of API polling for NUMA resource reports, and provides configuration options at the node group level for the node topology exporter.
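
Once the operator and its secondary scheduler are in place, a workload opts in by naming that scheduler in its pod spec. The sketch below assumes a secondary scheduler named topo-aware-scheduler and a hypothetical image; the equal requests and limits give the pod the Guaranteed QoS class that NUMA alignment expects:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: numa-sensitive-app
spec:
  schedulerName: topo-aware-scheduler          # hypothetical secondary scheduler name
  containers:
  - name: app
    image: registry.example.com/app:latest     # hypothetical image
    resources:
      requests:
        cpu: "4"
        memory: 4Gi
      limits:
        cpu: "4"
        memory: 4Gi
```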


Manage Azure cloud-based resources with user-defined tags (Technology Preview)

You can configure user-defined tags in Azure to group resources and to manage resource access and cost. In addition to user-defined tags, OpenShift adds its own tags to all resources for internal use. You define the tags on the Azure resources in the install-config.yaml file, and only during cluster creation. Support for Azure user-defined tags is only available for resources created in the Azure Public Cloud and is a Technology Preview feature in OpenShift 4.13.
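
A sketch of the relevant install-config.yaml fragment, with hypothetical tag keys and values:

```yaml
platform:
  azure:
    userTags:
      createdBy: openshift-team   # hypothetical tag
      costCenter: "1234"          # hypothetical tag
```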

Install in Google Cloud Platform into a shared VPC

You can now deploy OpenShift clusters into a shared Virtual Private Cloud (VPC) in Google Cloud Platform (GCP) using Installer Provisioned Infrastructure (IPI). This feature is Generally Available in OpenShift 4.13. With this installation method, you configure the cluster to use an existing shared VPC from a different GCP project.
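
A minimal sketch of the install-config.yaml platform section for a shared VPC install; the project IDs, network, and subnet names are hypothetical:

```yaml
platform:
  gcp:
    projectID: service-project-id       # project that hosts the cluster
    networkProjectID: host-project-id   # project that owns the shared VPC
    region: us-central1
    network: shared-vpc-network
    controlPlaneSubnet: control-plane-subnet
    computeSubnet: compute-subnet
```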

Confidential Computing and Shielded VMs in Google Cloud

In this new release, OpenShift introduces support for Google Cloud Platform (GCP) instances with the Confidential VM service enabled, taking advantage of its isolation capability so your data stays encrypted while in use. In future releases, we will add support for attestation and high-performance features. This Confidential Computing feature is available as a Technology Preview in this release.

In conjunction, you can also use OpenShift in Shielded VMs, hardened virtual machines on Google Cloud, to protect your workloads running on these VMs from threats like remote attacks, privilege escalation, and malicious insiders. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. Refer to Enabling Shielded VMs for more information.
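
A sketch of how both options might be enabled for all machines at install time through install-config.yaml; Confidential VMs require host maintenance to be set to Terminate because live migration is not supported for them:

```yaml
platform:
  gcp:
    defaultMachinePlatform:
      confidentialCompute: Enabled   # Confidential VM (Technology Preview)
      onHostMaintenance: Terminate   # required when Confidential VMs are enabled
      secureBoot: Enabled            # Shielded VM secure boot
```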

Cert-manager is Generally Available

The cert-manager operator is now Generally Available as a cluster-wide service that provides application certificate lifecycle management. Cert-manager allows you to integrate with external certificate authorities and provides certificate provisioning, renewal, and retirement. Cert-manager introduces certificate authorities and certificates as resource types in the Kubernetes API, which makes it possible to provide certificates on demand to developers working within the cluster.
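
A minimal sketch of the two resource types: a self-signed Issuer and a Certificate that requests a TLS secret from it. The names, namespace, and DNS entry are hypothetical; a production setup would typically point the issuerRef at an external CA such as ACME or Vault:

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: my-app              # hypothetical namespace
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-app-cert
  namespace: my-app
spec:
  secretName: my-app-tls         # secret where the issued key pair is stored
  dnsNames:
  - my-app.apps.example.com      # hypothetical route hostname
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer
```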

Pod security admission restricted enforcement (Technology Preview)

With this release, pod security admission restricted enforcement is available as a Technology Preview feature by enabling the TechPreviewNoUpgrade feature set. If you enable the TechPreviewNoUpgrade feature set, pods are rejected if they violate pod security standards, instead of only logging a warning.
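
Enabling the feature set is done on the cluster-scoped FeatureGate resource; note that TechPreviewNoUpgrade cannot be undone and blocks minor version updates:

```yaml
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster
spec:
  featureSet: TechPreviewNoUpgrade   # irreversible; enables Technology Preview features
```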

Encrypt etcd with AES-GCM ciphers

Organizations concerned with security will be pleased to know they can now use AES-GCM ciphers to encrypt etcd in order to meet compliance requirements for cryptographic standards. This configuration enables the use of AES-GCM with random nonce and a 32-byte key to perform encryption. Encryption keys are rotated once per week.    
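
A sketch of the change on the cluster APIServer resource; setting the encryption type to aesgcm triggers encryption of etcd data with AES-GCM:

```yaml
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  encryption:
    type: aesgcm   # AES-GCM with random nonce and 32-byte key; keys rotate weekly
```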

Storage updates

In OpenShift 4.13, Logical Volume Manager (LVM) Storage support is added for dual-stack for IPv4 and IPv6 network environments. Additionally, multiple storage classes that take advantage of HDD and NVMe disks are now supported with LVM Storage.

With the new release, the OpenShift Administrator can now change the way CSI storage operators manage the default storage class. They can create their own preferred default storage class and customize it accordingly.
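
As a sketch, marking your own storage class as the default comes down to the standard Kubernetes annotation; the class name and provisioner below are hypothetical examples:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                 # hypothetical custom class
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com     # example CSI provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```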

User managed key to encrypt storage on AWS, Azure, and GCP

Prior to OpenShift 4.13, when user-managed encryption keys were provided at installation time, only the root volumes of nodes were encrypted with those keys. With OpenShift 4.13, the default storage class on AWS, Azure, and GCP automatically applies the same user-managed encryption keys to all block Container Storage Interface (CSI) provisioned volumes, without requiring additional post-installation configuration. Refer to CSI drivers supported by OpenShift for the list of supported CSI drivers.

Automatic migration of in-tree volumes to Container Storage Interface

In OpenShift 4.13, new OpenShift clusters have CSI migration enabled by default for both the vSphere and Azure File drivers. Existing clusters that use the Azure File in-tree storage driver will automatically migrate to the equivalent CSI driver and require no manual intervention. Automatic migration of existing vSphere in-tree volumes to the equivalent vSphere CSI driver is planned for the next OpenShift release; however, vSphere users can opt in to CSI migration in OpenShift 4.13.

CSI inline ephemeral volumes General Availability

CSI inline ephemeral volumes were introduced in OpenShift 4.5 as a Technology Preview feature. In OpenShift 4.13, this feature is now Generally Available. It allows defining a pod spec that creates inline ephemeral volumes when a pod is deployed and deletes them when the pod is destroyed. CSI inline ephemeral volumes are only available for drivers that support this feature.
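
A minimal pod sketch using a CSI inline ephemeral volume; the driver name and volume attributes are hypothetical and must match a driver that supports the Ephemeral volume lifecycle mode:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: inline-volume-demo
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # hypothetical image
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    csi:
      driver: inline.storage.example.com      # hypothetical CSI driver
      volumeAttributes:
        size: 1Gi                              # driver-specific attribute
```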

crun and Linux Control Group version 2 General Availability

The crun container runtime is now Generally Available in OpenShift 4.13. Users can switch between the crun container runtime and the default container runtime as needed by using a ContainerRuntimeConfig custom resource (CR) as detailed in About the container engine and container runtime.
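
A sketch of a ContainerRuntimeConfig that switches worker nodes to crun; the pool selector assumes the default worker machine config pool:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: enable-crun-worker
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  containerRuntimeConfig:
    defaultRuntime: crun   # switch back to runc by changing this value
```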

cgroups v2, the next version of the kernel control groups, is also Generally Available in this release. cgroups v2 offers multiple improvements, including a unified hierarchy, safer sub-tree delegation, and new features such as Pressure Stall Information, as well as enhanced resource management and isolation. It also includes unified accounting for different types of memory allocations, such as network memory and kernel memory, and accounting for non-immediate resource changes, such as page cache writebacks. cgroups v2 provides better control, performance, and stability for nodes where out-of-memory (OOM) kill conditions occur, especially where workloads consume more than the allocated memory.
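
As a sketch, the cgroup version is controlled through the cluster-scoped Node configuration resource:

```yaml
apiVersion: config.openshift.io/v1
kind: Node
metadata:
  name: cluster
spec:
  cgroupMode: "v2"   # "v1" remains available for workloads not yet ready for cgroups v2
```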

RunOnceDuration Operator 

OpenShift Container Platform relies on run-once pods to perform tasks such as deploying a pod or performing a build. Run-once pods are pods that have a RestartPolicy of Never or OnFailure.

In OpenShift 4.13, the cluster administrator uses the RunOnceDuration Operator to force a limit on the time those run-once pods can be active. Once the time limit expires, the cluster tries to actively terminate those pods. Use this operator to prevent tasks such as builds from running for an excessive period of time.
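
A sketch of the operator’s custom resource under the assumption that a one-hour limit is desired; the value is illustrative:

```yaml
apiVersion: operator.openshift.io/v1
kind: RunOnceDurationOverride
metadata:
  name: cluster
spec:
  runOnceDurationOverride:
    spec:
      activeDeadlineSeconds: 3600   # run-once pods are terminated after one hour
```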

Pull images from mirrored registry using tags

You can now pull images from a mirrored registry by using image tags in addition to digest specifications. As part of this change, the ImageContentSourcePolicy (ICSP) object is deprecated. You now use an ImageDigestMirrorSet (IDMS) object to pull images by digest or an ImageTagMirrorSet (ITMS) object to pull images by tag.

If you have existing YAML files that create ICSP objects, use oc adm migrate icsp to convert them to IDMS YAML files. Refer to Converting ImageContentSourcePolicy (ICSP) files for image registry repository mirroring for details on converting existing ICSP YAML files to IDMS YAML files.
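
A minimal ImageTagMirrorSet sketch, with hypothetical source and mirror registries; an ImageDigestMirrorSet uses the same shape with an imageDigestMirrors list:

```yaml
apiVersion: config.openshift.io/v1
kind: ImageTagMirrorSet
metadata:
  name: tag-mirror-example
spec:
  imageTagMirrors:
  - source: registry.example.com/team/app   # original repository
    mirrors:
    - mirror.example.net/team/app           # tried first when a tag is requested
```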

Log into node through RHCOS system console

In OpenShift 4.13, you can now log into a node through the RHCOS system console. This is especially useful for troubleshooting when the kubelet is down and the node isn’t reachable via SSH or the OpenShift API. To access the RHCOS system console, you set the password for the “core” user account with a MachineConfig.
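
A sketch of such a MachineConfig for worker nodes; the password hash is a placeholder you would generate yourself (for example, with mkpasswd):

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: set-core-user-password
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    passwd:
      users:
      - name: core
        passwordHash: <password_hash>   # placeholder; supply your own hashed password
```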

Add third party and custom content to RHCOS

You can now use RHCOS image layering to add third-party and custom content to cluster nodes. RHCOS image layering lets you extend the functionality of your base RHCOS image by layering additional content onto RHCOS, without modifying the base image. Instead, it creates a custom layered image that includes all RHCOS functionality and adds additional functionality to groups of nodes in the cluster.
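
As a rough sketch, after you build and push a custom layered image (a Containerfile that starts FROM the cluster’s base RHCOS image), you point a machine config pool at it through the osImageURL field; the image reference below is hypothetical:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: os-layer-custom
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  osImageURL: quay.io/example/custom-rhcos@sha256:<digest>   # hypothetical custom layered image
```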

Install on IBM Power or install on IBM zSystems and IBM LinuxONE with RHEL KVM via Assisted Installer

Prior to OpenShift 4.13, you had to manually install OpenShift Container Platform on IBM Power, or manually install on IBM zSystems and IBM LinuxONE with RHEL KVM. With OpenShift 4.13 and Assisted Installer, you can now provision new bare metal nodes and create OpenShift clusters on these platforms easily with the web-based guided experience.

Build and deploy with OpenShift Serverless 1.28

OpenShift Serverless 1.28, based on Knative version 1.7, is now Generally Available. This release adds Generally Available support for the Node.js and TypeScript runtimes for Serverless functions. With Serverless functions, you bring your code and deploy it to the cloud in two steps with easy-to-start Quarkus, Node.js, and TypeScript templates. As Technology Preview features, we’ve added support for a Python runtime in Serverless functions and multi-container support so you can deploy a multi-container pod using a single Knative service. An upgraded Developer Preview of Serverless logic is also available with this release. Serverless logic offers low-code/no-code orchestration of services and functions for event-driven applications.

Web console enhancements

In the Developer perspective of the web console, developers can now perform the following actions:

  • Create a Serverless function by using the Import from Git flow or the Create Serverless Function flow available on the Add page
  • Select pipeline-as-code as an option in the Import from Git workflow
  • View which pods are receiving traffic
  • Customize the timeout period or provide your own image when instantiating a Web Terminal

Additionally, administrators can now set default resources to be pre-pinned in the Developer perspective navigation for all users.

Cloud economics insights with Red Hat Insights Cost Management

OpenShift 4.13 brings you more insights into your cloud costs with Red Hat Insights Cost Management.  Insights Cost Management is included in every OpenShift subscription, and it continuously analyzes platforms and applications to predict risk, recommend actions, and track costs so enterprises can better manage hybrid cloud environments. In this release, we add support for AWS Savings Plans, AWS Cost Categories, new AWS regions, as well as Oracle Cloud Infrastructure (OCI). The cost of running the OpenShift control plane and unallocated capacity are now reported and can be distributed to user workloads.

Red Hat Advanced Cluster Management for Kubernetes 2.8

Following closely on the heels of OpenShift 4.13 is Red Hat Advanced Cluster Management for Kubernetes (RHACM) 2.8. Key highlights of RHACM 2.8 include the PolicySet for Red Hat OpenShift Platform Plus and regional stateful application replication with Red Hat OpenShift Data Foundation (ODF) 4.13. The PolicySet for OpenShift Platform Plus (OPP) is promoted to stable. A PolicySet is a Kubernetes Custom Resource Definition (CRD) that contains a set of policies for managing Kubernetes resources across multiple clusters. With the OPP PolicySet used to enforce security requirements, provide compliance with regulatory standards, and automate the deployment of OPP resources at the hub, OpenShift fleet management, ODF, and Red Hat Advanced Cluster Security for Kubernetes (ACS) can be stood up quickly with best practices and safeguards in place.

Regional stateful application replication with ODF 4.13 allows you to replicate stateful applications across multiple regions or data centers. This provides high availability and disaster recovery for mission-critical applications that require persistent storage. With regional stateful application replication, data is replicated synchronously or asynchronously between two or more regions, providing near-real-time data replication and failover capabilities. This allows for automatic failover to a secondary region in case of a primary region outage or failure.

Try Red Hat OpenShift 4.13 Today

Beyond the new features mentioned in this blog, check out the following resources to learn more about the new OpenShift 4.13 release:

Thank you for reading about what’s new in OpenShift 4.13. Please comment or send us feedback either through your usual Red Hat contacts or as an issue on OpenShift on GitHub.


About the author

Ju Lim works on the core Red Hat OpenShift Container Platform for hybrid and multi-cloud environments to enable customers to run Red Hat OpenShift anywhere. Ju leads the product management teams responsible for installation, updates, provider integration, and cloud infrastructure.
