Today, I am pleased to announce the general availability of Red Hat OpenShift 4.9. While this blog provides an overview of the features, functions, and benefits, the OpenShift 4.9 press release, which covers single node OpenShift and our developer experience enhancements, is a companion read if you are interested in use cases and in how our customers and partners are using OpenShift.

Based on Kubernetes 1.22 (and CRI-O 1.22), OpenShift 4.9 has a number of exciting new enhancements and new features for developers of cloud-native apps, for cloud-native DevOps practitioners, and for cluster and cloud administrators. 

For the developer: 

With OpenShift 4.9, we have automated and streamlined access to Red Hat Enterprise Linux (RHEL) entitlements on OpenShift, making the entire RHEL content ecosystem easily accessible to developers when building images. The Red Hat Insights Operator pulls RHEL entitlements from console.redhat.com and stores them as a secret on the cluster. Developers can mount the RHEL entitlement secret into Pods, Tekton Pipelines, and BuildConfigs, giving them access to the rich content available in RHEL repositories. Furthermore, mounting volumes in BuildConfigs is not limited to RHEL entitlements: it can be used to mount any secret or ConfigMap (for example, Git credentials) that may be required during image builds.
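
As a minimal sketch of the pattern, here is a BuildConfig that mounts a secret as a build volume; the Git URL, output image, and secret name are placeholders (the entitlement secret name in your cluster may differ):

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: entitled-build
spec:
  source:
    git:
      uri: https://github.com/example/app.git       # placeholder repository
  strategy:
    dockerStrategy:
      volumes:
      - name: etc-pki-entitlement
        mounts:
        - destinationPath: /etc/pki/entitlement     # where the entitlement certs appear during the build
        source:
          type: Secret
          secret:
            secretName: etc-pki-entitlement         # assumed name of the entitlement secret in this namespace
  output:
    to:
      kind: ImageStreamTag
      name: app:latest
```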

Developers on OpenShift 4.9 can now have multiple logins to the same registry in a single pull secret. Previously, a pull secret could hold only one login per registry, so deployments with components spread across several registry namespaces often required many pull secrets. A single secret can now contain multiple logins for the same registry, scoped either per registry namespace or per image, which lets users access a variety of content with one pull secret and simplifies secret management (one secret versus multiple secrets).
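
As a sketch (registry paths and credentials are illustrative), a single pull secret can now carry separate logins for different namespaces of the same registry:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: combined-pull-secret
type: kubernetes.io/dockerconfigjson
stringData:
  .dockerconfigjson: |
    {
      "auths": {
        "quay.io/team-a": { "auth": "<base64 credentials for team-a>" },
        "quay.io/team-b": { "auth": "<base64 credentials for team-b>" },
        "quay.io/team-c/private-app": { "auth": "<base64 credentials for a single image>" }
      }
    }
```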

OpenShift Service Mesh 2.1 (shipping after OpenShift 4.9 GA), based on Istio 1.9, will introduce new resources for federating service meshes across multiple OpenShift clusters. This allows administrators to securely connect service meshes across OpenShift clusters anywhere in the hybrid cloud, in a manner that respects OpenShift Service Mesh's multitenant deployment model, and lets developers access remote services as if they were local services.
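
As a hedged sketch of what this may look like (the federation API introduces resources such as ServiceMeshPeer, ExportedServiceSet, and ImportedServiceSet; the names below are illustrative and the exact fields may differ in the final release), one mesh can export selected services to a peer mesh:

```yaml
apiVersion: federation.maistra.io/v1
kind: ExportedServiceSet
metadata:
  name: west-mesh                 # the peer mesh this export applies to (illustrative)
  namespace: east-mesh-system     # control plane namespace of the local mesh (illustrative)
spec:
  exportRules:
  - type: NameSelector
    nameSelector:
      namespace: bookinfo         # share only the ratings service from this namespace
      name: ratings
```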

OpenShift Pipelines 1.6 enhances auto-pruning by allowing per-namespace configuration, which enables development teams to adjust the clean-up process to their requirements. In addition, triggers, which are used for creating webhooks for pipelines, have reached GA status. This release also adds Bitbucket to the list of supported Git providers for pipelines-as-code and brings a guided experience for getting started with pipelines-as-code on GitHub.
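
For example (the annotation values are illustrative), per-namespace pruning is configured with annotations on the namespace itself:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  annotations:
    operator.tekton.dev/prune.keep: "5"                        # keep the five most recent runs
    operator.tekton.dev/prune.resources: "pipelinerun,taskrun" # prune both resource types
    operator.tekton.dev/prune.schedule: "0 2 * * *"            # run the pruner nightly at 02:00
```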

OpenShift GitOps 1.3 includes several improvements to the Argo CD default security settings and provides a more seamless authentication and authorization experience for Argo CD by taking advantage of OpenShift credentials and user groups. Furthermore, Argo CD now supports the use of external TLS certificate managers.
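
A minimal sketch of wiring Argo CD to OpenShift logins through the operator's ArgoCD resource (the group name in the RBAC policy is a placeholder):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: openshift-gitops
spec:
  dex:
    openShiftOAuth: true              # log in to Argo CD with OpenShift credentials
  rbac:
    defaultPolicy: role:readonly
    scopes: "[groups]"                # evaluate RBAC against OpenShift user groups
    policy: |
      g, cluster-admins, role:admin
```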

The need for compute-intensive workloads such as artificial intelligence and machine learning (AI/ML), as well as the need for advanced security closer to the hardware, has fueled the growth and adoption of specialized hardware accelerators. We introduce the Special Resource Operator (SRO), a template for exposing and managing accelerator cards in a Kubernetes cluster. It manages the hardware end to end, from bootstrapping through updates and upgrades. Developers in this ecosystem can use the SRO to seamlessly integrate third-party devices into Kubernetes and OpenShift.
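
As a sketch of the pattern (modeled on the simple-kmod example; the chart coordinates and Git repository are illustrative), a SpecialResource resource describes the driver the operator should build and manage:

```yaml
apiVersion: sro.openshift.io/v1beta1
kind: SpecialResource
metadata:
  name: simple-kmod
spec:
  namespace: simple-kmod            # namespace where the driver container will run
  chart:
    name: simple-kmod               # Helm chart holding the recipe (illustrative coordinates)
    version: 0.0.1
    repository:
      name: example
      url: cm://example/charts      # chart served from a ConfigMap
  driverContainer:
    source:
      git:
        ref: main
        uri: https://github.com/openshift-psap/kvc-simple-kmod.git   # illustrative driver source
```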

There are important changes to the Kubernetes API that developers and administrators should pay attention to. Kubernetes 1.22 deprecates and removes a number of v1beta1 APIs, including Custom Resource Definitions (CRDs) and others, and this also impacts Kubernetes Operators. You can find more information here. Developers should refactor their applications to account for these changes. Administrators must evaluate their clusters for any impacted Operators and applications and plan to modify them before upgrading to OpenShift 4.9. To help with this evaluation, a review step has been added to the cluster upgrade process in the console, where the Operator Lifecycle Manager informs cluster administrators about Operators that need to be updated in order to continue to work on OpenShift 4.9.
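
The CRD API is the most visible example: in apiextensions.k8s.io/v1, which replaces the removed v1beta1 version, each served version must declare a structural schema. A minimal sketch (the CRD itself is illustrative):

```yaml
# Before: apiVersion: apiextensions.k8s.io/v1beta1 (removed in Kubernetes 1.22)
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com          # illustrative CRD
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:                          # a structural schema is mandatory in v1
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              size:
                type: integer
```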

There are other improvements, including a better developer experience in the OpenShift console for Certified Helm charts, GitOps Pipelines, and Custom Domains for OpenShift Serverless. 

For cluster, cloud, and infrastructure administrators: 

Administrators get a powerful new way to deploy OpenShift at the edge with Single Node OpenShift (SNO) in production. SNO is an all-in-one OpenShift node on a single physical or virtual server (minimum 8 cores and 32GB of memory, of which approximately 2 cores and 16GB are used by the OpenShift control plane, leaving the rest available for user workloads). We have also simplified deploying SNO: with Red Hat Advanced Cluster Management (ACM) 2.4, you can deploy SNO to edge locations without an additional bootstrap node, using a mechanism we call Zero Touch Provisioning (ZTP).

Microsoft has been extending its Azure cloud into private datacenters with Azure Stack Hub. With OpenShift 4.9, we support installing OpenShift on Azure Stack Hub (with UPI), allowing administrators to run their OpenShift and Kubernetes workloads both in the public cloud and on premises within the Microsoft ecosystem, furthering the open hybrid cloud.
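
A hedged install-config.yaml sketch for targeting Azure Stack Hub (the endpoint, region, and domain values are placeholders for your environment):

```yaml
apiVersion: v1
baseDomain: example.com
metadata:
  name: ash-cluster
platform:
  azure:
    cloudName: AzureStackCloud                                  # target Azure Stack Hub instead of public Azure
    armEndpoint: https://management.local.azurestack.external   # placeholder Azure Resource Manager endpoint
    region: local
pullSecret: '{"auths": ...}'                                    # your pull secret
sshKey: ssh-ed25519 AAAA...                                     # your public SSH key
```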

The ability to “Bring Your Own” Microsoft Windows nodes to OpenShift is now generally available. This capability is aimed at customers that have dedicated Windows Server instances in their datacenters which they regularly update, patch, and manage. Admins can now add these custom Windows nodes to an OpenShift cluster.
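
Bring-your-own instances are registered with the Windows Machine Config Operator through a ConfigMap; a sketch (the addresses and usernames are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: windows-instances
  namespace: openshift-windows-machine-config-operator
data:
  10.1.42.1: |-                   # reachable address of a pre-provisioned Windows instance (placeholder)
    username=Administrator
  instance.example.com: |-        # DNS names work as well (placeholder)
    username=core
```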

Additionally, with OpenShift 4.9, we have a number of new incremental capabilities, such as support for OpenShift on AWS China, OpenShift on IBM Cloud Bare Metal (with IPI), a streamlined experience for remote worker nodes (on bare metal), and support for Red Hat Enterprise Linux 8 (RHEL 8) worker nodes. Please also note that RHEL 7 worker nodes are deprecated and will be removed in the next minor release (4.10). These capabilities further our hybrid cloud journey, bringing OpenShift to a variety of cloud, datacenter, and edge footprints and allowing apps to be truly portable across these environments.

With OpenShift 4.9, we introduce a number of new features and capabilities in the control plane. The OpenShift scheduler can be customized to fit the workloads that administrators run on the cluster. Three pre-built profiles are provided out of the box, and admins can also build their own scheduling profiles based on their workload needs.
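
Profiles are selected on the cluster-wide Scheduler resource; a minimal example:

```yaml
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  profile: HighNodeUtilization   # other built-in profiles: LowNodeUtilization (default), NoScoring
```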

The default route names for certain OpenShift cluster components (such as the OAuth server, console, and downloads) can now be customized to fit customers' environments. You can configure an audit log policy that defines custom rules, specifying multiple groups and the audit profile to use for each group. You can also disable audit logging for OpenShift Container Platform. Improvements to etcd include TLS cipher customization, automatic certificate rotation, and automated defragmentation.
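
For instance, a minimal audit policy on the cluster APIServer resource that maps specific groups to profiles (the second group is a placeholder example):

```yaml
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  audit:
    customRules:
    - group: system:authenticated:oauth   # interactively logged-in users
      profile: WriteRequestBodies
    - group: system:serviceaccounts       # example group (placeholder)
      profile: AllRequestBodies
    profile: Default                      # fallback profile; None disables audit logging
```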

For bare metal environments inside the customer datacenter, we introduce support for a new load balancer called MetalLB. MetalLB allows you to create Kubernetes Services of type LoadBalancer for your application traffic. Layer 2 mode is supported in OpenShift 4.9, and BGP mode is on the roadmap. The administrator provides a pre-configured range of external IP addresses; MetalLB assigns an address from that range to an OpenShift Service, load balances the traffic within the cluster, and uses standard address discovery protocols (ARP for IPv4, NDP for IPv6) to make the address reachable on the network.
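
A minimal layer 2 sketch (the address range is a placeholder for a range routable on your network):

```yaml
apiVersion: metallb.io/v1beta1
kind: AddressPool
metadata:
  name: example-pool
  namespace: metallb-system        # namespace where the MetalLB Operator runs (may differ per install)
spec:
  protocol: layer2
  addresses:
  - 192.168.10.1-192.168.10.30     # placeholder range the admin reserves for Services
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer               # MetalLB assigns an external IP from the pool
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```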

Important changes to the OpenShift 4 minor version lifecycle:

OpenShift 4 will move from the current version-based lifecycle policy to a time-based lifecycle of 18 months for all minor releases of OpenShift 4. This change takes effect with Red Hat OpenShift Container Platform version 4.7 and higher. Furthermore, even-numbered releases (for example, 4.8, 4.10, and so forth) will be designated as Extended Update Service (EUS) releases; note that EUS releases also have an 18-month lifecycle. The product will provide an upgrade path from one EUS release to the next, for example from 4.8 directly to 4.10. Upgrades between non-EUS releases (for example, 4.7 to 4.9) still require two steps (from 4.7 to 4.8, and then to 4.9). You can find more details of the new lifecycle here.

The full list of improvements in Red Hat OpenShift 4.9 can be found in the full release notes, including details on all the new Technology Preview features and on deprecations. In addition, a number of companion blogs delve into select details of What's New in 4.9; these will be published over the coming days, here and elsewhere.

My fellow product managers delivered an amazing "What's New" presentation with details of what you see here and more. If you have time to watch all of these changes explained in person, you can check out this deep dive into the changes in 4.9, which was originally broadcast on OpenShift.tv.

If you want to try out Red Hat OpenShift 4.9, there are several ways to do it, from online learning tutorials and demos on your laptop to how-to’s for the public cloud or your own data center.