In the modern mobile, web and cloud-based world, the term “411” may be a little out of date. It used to be the number one dialed on the phone to get “Information,” otherwise known as asking the operator to find a phone number. Despite its age, we decided to dust the term off and give you the 411 on Red Hat OpenShift 4.11, which is generally available today.

Based on Kubernetes 1.24, OpenShift 4.11 is ready to be at the center of your information ecosystem, providing new features, updates and fixes for developers and administrators alike. This blog highlights some of the notable additions and latest innovations we’re introducing. A complete list of the 4.11 changes can be seen in the OpenShift 4.11 Release Notes.

43 Customer-Requested Enhancements Delivered

OpenShift is used worldwide by organizations in every industry and vertical, including Audi, BP, Deutsche Bank, GE, HCA Healthcare, Kohl’s, NASA, Sabre, TIAA, Turkcell, Verizon, the UK Department for Work and Pensions, and U.S. Department of Energy laboratories. These customers depend on OpenShift to accelerate and fuel innovation within their organizations, creating new applications and products and transforming the way we live. For example, Argentina’s Ministry of Health uses OpenShift to offer universal healthcare to all citizens and residents; during the first month of the COVID-19 pandemic, the ministry absorbed and responded quickly to a 1,500% increase in patient transaction volume. At Lawrence Livermore National Laboratory, OpenShift is a significant part of the software ecosystem driving the convergence of high performance computing (HPC) and the cloud. In banking, Riyad Bank in the Kingdom of Saudi Arabia (KSA) has implemented a hybrid cloud strategy built on Red Hat’s portfolio of open hybrid cloud technologies, including Red Hat OpenShift, as a keystone of its drive to speed up innovation and the time-to-market of digital services and products.

OpenShift 4.11 delivers 43 customer requests for enhancement (RFEs), on par with the 45 delivered in OpenShift 4.10. Of the 43 RFEs, the most requested was the ability to run multiple routers on the same node on different ports, and that is now possible. Exposing different ports to the Ingress Operator lets customers run multiple router deployments on the same node, which cuts the infrastructure costs currently incurred when scaling ingress. In response to other RFEs, we've made Kerberos packages part of the RHEL CoreOS extensions functionality, enhanced router sharding, and added a way to customize the interval between subsequent liveness checks on HAProxy backends when HAProxy is used for load balancing.
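To illustrate the most requested item, here is a minimal sketch of a second router bound to non-default host ports so it can share a node with the default router. The httpPort, httpsPort, and statsPort fields follow the 4.11 ingress API as we understand it, and the names and values are illustrative, so verify them against the release notes:

# Illustrative sketch: a second IngressController on non-default host
# ports, allowing it to coexist with the default router on the same node.
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: internal-apps                    # hypothetical second router
  namespace: openshift-ingress-operator
spec:
  domain: internal.apps.example.com      # hypothetical domain
  endpointPublishingStrategy:
    type: HostNetwork
    hostNetwork:
      httpPort: 8080     # instead of 80
      httpsPort: 8443    # instead of 443
      statsPort: 8936    # instead of 1936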

Get and Pay for OpenShift directly from AWS and Azure Marketplaces

Customers can now buy and pay for OpenShift Container Platform, OpenShift Platform Plus, and OpenShift Kubernetes Engine directly from the AWS and Azure Marketplaces. This complements existing managed OpenShift offerings like Microsoft Azure Red Hat OpenShift (ARO) and Red Hat OpenShift Service on AWS (ROSA) with self-managed options, which customers can use for custom deployments on these cloud providers. Customers can now benefit from hourly or yearly upfront billing per vCPU of worker nodes with their cloud provider account. Billing is facilitated by the cloud provider and is based on the usage of specific VM images (VHDs on Azure, AMIs on AWS) the marketplaces provide.

With OpenShift 4.11, we are providing self-managed OpenShift on AWS in North America and AWS GovCloud, and we are adding the European regions in a few weeks. OpenShift is also available in the Azure Marketplace for customers in North America and Europe, as well as in the Microsoft Azure Government Marketplace. We will be enabling the Google Cloud Platform Marketplace, which will be transactable worldwide, later this year.

Reduce Operational Cost and Simplify Fleet Management with Hosted Control Planes

In OpenShift 4.11, we added hosted control planes, a much-requested feature. Built on the HyperShift project, clusters with hosted control planes are now available on Amazon Web Services (AWS) as a Technology Preview feature, enabled through version 2.0 of the multicluster engine for Kubernetes operator.

Deploying OpenShift with a hosted control plane decouples the control plane from the data plane (workers), separates network domains for control plane and user workloads, and provides a shared interface through which administrators and Site Reliability Engineers (SREs) can easily operate a fleet of clusters. Now, the control plane acts and behaves like any other workload. The same rich stack used to monitor, secure, and operate your applications is re-used for managing the control plane.

Hosted control planes simplify fleet management by providing a consistent management and operational experience. With hosted control planes, deploying a control plane from infrastructure to readiness is more than twice as fast. The feature also reduces infrastructure costs by as much as 3x, because multiple cluster control planes run as workloads that share the hosting cluster's nodes. Hosted control planes also provide stronger separation between user workloads and the control plane than regular clusters do.
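As a rough sketch of how this is declared, a hosted control plane is described by a HostedCluster resource on the hosting cluster. The HyperShift v1alpha1 resource has many more fields than shown here, and the names and values below are hypothetical, so consult the HyperShift project documentation before using them:

# Minimal, illustrative HostedCluster (HyperShift v1alpha1 API).
# Real deployments also require networking and service-publishing config.
apiVersion: hypershift.openshift.io/v1alpha1
kind: HostedCluster
metadata:
  name: example
  namespace: clusters
spec:
  release:
    image: quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64
  pullSecret:
    name: example-pull-secret   # Secret containing your pull secret
  sshKey:
    name: example-ssh-key       # Secret containing an SSH public key
  platform:
    type: AWS
    aws:
      region: us-east-1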

Pod Security Admission Integration

The Kubernetes API has been changing: the PodSecurityPolicy API is deprecated and will no longer be served as of Kubernetes 1.25. It is replaced by a new built-in admission controller (KEP-2579: Pod Security Admission Control) that allows cluster administrators to enforce the Pod Security Standards with namespace labels.

OpenShift APIs have also been changing to address these needs: starting in OpenShift 4.11, a new security context constraint (SCC), restricted-v2, is introduced to comply with the Kubernetes changes. Read more about these changes in Pod Security Admission in OpenShift 4.11.
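For example, the built-in admission controller is driven entirely by namespace labels. A namespace opting into the restricted standard might look like the following sketch (the label keys are the upstream Pod Security Standards labels; the namespace name and version pin are illustrative):

# Enforce the "restricted" Pod Security Standard in a namespace,
# while also warning and auditing at the same level.
apiVersion: v1
kind: Namespace
metadata:
  name: my-app   # hypothetical namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.24
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted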

Agent-based Installer for Disconnected OpenShift Deployments

With 4.11, we’re making it easier for customers to install OpenShift in disconnected or air-gapped environments with the agent-based installer, which is available as a Developer Preview. Initially, the agent-based installer focuses on bare metal environments.

Using the agent-based installer, customers can deploy all supported OpenShift topologies including single node clusters, three-node compact clusters, or standard high availability clusters. With the agent-based installer, an in-place bootstrap is provided, so there is no need for a dedicated provisioning node. Customers can optionally automate the agent-based installer workflow with their preferred automation tooling for a complete hands-off deployment.

The agent-based installer runs as a new command of openshift-install. Customers begin by specifying cluster configuration details, such as pull secrets, and host network configurations, such as static IPs, bonds, and VLANs. They then generate a cluster-specific bootable image, which builds the cluster.

❯ openshift-install
Creates OpenShift clusters

Usage:
  openshift-install [command]

Available Commands:
  agent       Commands for supporting cluster installation using agent installer
  analyze     Analyze debugging data for a given installation failure
  completion  Outputs shell completions for the openshift-install command
  coreos      Commands for operating on CoreOS boot images
  create      Create part of an OpenShift cluster
  destroy     Destroy part of an OpenShift cluster
  explain     List the fields for supported InstallConfig versions
  gather      Gather debugging data for a given installation failure
  graph       Outputs the internal dependency graph for installer
  help        Help about any command
  migrate     Do a migration
  version     Print version information
  wait-for    Wait for install-time events

Flags:
      --dir string         assets directory (default ".")
  -h, --help               help for openshift-install
      --log-level string   log level (e.g. "debug | info | warn | error") (default "info")

Use "openshift-install [command] --help" for more information about a command.

Bring Your Own DNS for OpenShift in Cloud Providers

Customers have been asking for the flexibility to leverage their own custom managed ingress DNS solutions for OpenShift. In 4.11, we added the External DNS Operator, which allows customers to control DNS records dynamically via Kubernetes resources in a DNS provider-agnostic way, and which we use to synchronize exposed OpenShift Services and Routes with DNS providers. The External DNS Operator is generally available for AWS Route 53, Google Cloud DNS, Azure DNS, and Infoblox, with Technology Preview support for BlueCat. To begin using the External DNS Operator, customers can install it via OperatorHub.
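To sketch the idea, an ExternalDNS resource publishing OpenShift route records to an AWS Route 53 hosted zone might look like this. The resource follows the operator's v1alpha1 API as we understand it, and the zone ID and names are placeholders, so check the operator documentation for the exact schema:

# Illustrative ExternalDNS resource for AWS Route 53.
apiVersion: externaldns.olm.openshift.io/v1alpha1
kind: ExternalDNS
metadata:
  name: sample-aws
spec:
  provider:
    type: AWS
  zones:
    - "Z1234567890ABC"          # hypothetical hosted zone ID
  source:
    type: OpenShiftRoute
    openshiftRouteOptions:
      routerName: default       # publish records for the default router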

Deploy OpenShift on Nutanix AOS

With OpenShift 4.11, you can now deploy OpenShift clusters on Nutanix AOS using installer-provisioned infrastructure, where the installer controls all areas of the installation, including infrastructure provisioning, with an opinionated best-practices deployment of OpenShift. OpenShift deployments are supported on both LTS (Long Term Support) and STS (Short Term Support) Nutanix AOS releases.

With the OpenShift on Nutanix AOS integration, the Cloud Credential Operator (CCO) supports Manual mode for the credentials integration with the Nutanix platform, and the CSI integration is configured after cluster deployment. In the future, we plan to configure CSI automatically in the installation workflow.

Make OpenShift Install More Flexible

Customers want to move away from "one size fits all" cluster installations and toward flexibility over what should and should not exist in a new cluster out of the box. This can be seen in efforts such as hosted control planes, single node OpenShift, and Red Hat OpenShift Local. While each of these efforts makes installation more flexible, we’re moving toward making OpenShift more composable by providing a mechanism for a cluster administrator to exclude one or more optional components from the installation, which in turn determines which payload components are installed in the cluster. OpenShift 4.11 allows you to disable the baremetal operator, marketplace, and the openshift-samples content stored in the openshift namespace. You can disable these capabilities by setting the baselineCapabilitySet and additionalEnabledCapabilities parameters in the install-config.yaml configuration file prior to installation. If you disable any of these capabilities during installation, you can enable them after the cluster is installed.
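For example, an install-config.yaml excerpt that starts from an empty baseline and adds back only the marketplace capability might look like this sketch (domain and cluster name are placeholders; see the installation documentation for the capability names valid in 4.11):

# Excerpt of install-config.yaml: disable all optional capabilities,
# then re-enable only marketplace. baremetal and openshift-samples
# remain disabled.
apiVersion: v1
baseDomain: example.com
metadata:
  name: my-cluster
capabilities:
  baselineCapabilitySet: None
  additionalEnabledCapabilities:
    - marketplace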

FedRAMP High for Compliance Operator

The Compliance Operator has been part of OpenShift since 4.6. This Operator allows customers to scan their infrastructure for compliance issues and remediate them based on industry standards. Today, more than 1000 customers use the Compliance Operator to scan their infrastructure daily. In 4.11, we expanded the Federal Risk and Authorization Management Program (FedRAMP) profile to support the High Impact Level. This profile enables our customers to achieve authorizations to support U.S. government agencies.
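Once the profile content is installed, scanning against the new High profiles is a matter of binding them to a scan setting. A sketch follows; the profile names use the Compliance Operator's ocp4/rhcos4 naming convention, but verify them against the profiles actually available on your cluster:

# Illustrative ScanSettingBinding: scan the platform and nodes against
# the FedRAMP High profiles on the default scan schedule.
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: fedramp-high
  namespace: openshift-compliance
profiles:
  - name: ocp4-high
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
  - name: rhcos4-high
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1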

Streamlined Disconnected Mirroring Workflow

Disconnected mirroring is important to all customers who run behind a corporate firewall or are otherwise not directly connected to redhat.com services. The new oc mirror tool, which becomes generally available in 4.11, provides a major simplification of this process. Whereas before there were different tools for mirroring the various parts of OpenShift content, and every customer had to invest in their own automation for multiple clusters and regular mirroring, there is now a single command that provides a single entry point for disconnected content management.

oc mirror takes a declarative, file-based configuration approach to describing the content you want to mirror. It automatically detects newer OpenShift releases and operator content, figures out which intermediate OpenShift releases or operators need to be downloaded to reach a target or the newest release, and resolves OLM operator dependencies.

In 4.11, customers can specify a minimum and maximum version of OpenShift and operator releases to download, so as to move forward at their own pace. To avoid running out of registry storage, customers can adjust the minimum or maximum version range, and oc mirror automatically detects if previously mirrored OpenShift or operator releases fall out of that range, so older content can be deleted from the registry. In addition, customers can perform dry runs of their mirroring jobs to get the list of images that would be mirrored, thus allowing customers to use their own tools and processes to carry out the image download and transfer.
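A sketch of that declarative configuration follows: an ImageSetConfiguration that mirrors a bounded range of OpenShift releases plus one operator catalog. The registry URL, channel, and version bounds are illustrative:

# Illustrative ImageSetConfiguration for oc mirror.
apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
storageConfig:
  registry:
    imageURL: registry.example.com/mirror/metadata   # hypothetical registry
mirror:
  platform:
    channels:
      - name: stable-4.11
        minVersion: 4.11.0     # lower bound of releases to mirror
        maxVersion: 4.11.1     # upper bound of releases to mirror
  operators:
    - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.11

A configuration like this would then be passed to oc mirror with the --config flag, pointing at the target registry; adding --dry-run reports the set of images that would be mirrored without transferring them.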

Automatic fail-forward updates for failed operator installations

The Operator Lifecycle Manager (OLM) gains the ability in 4.11 to automatically move forward from failed upgrades. Previously, a user had to remove and reinstall the operator if an update failed. By default, OLM cannot guarantee that moving forward to the next version will not break things even more, despite the many checks it performs, because it is impossible to know whether the failed operator instance started a data migration and then failed midway through. Resolving a failed update therefore typically requires manual cleanup.

When customers subscribe to a channel with only patch updates that do not contain any disruptive changes necessitating complex migrations, there’s a new option in OLM to automatically recover from failed updates when a newer version of the affected operator has been published. This is useful for managed service providers and SREs who regularly release newer patch versions of their operators across a large fleet of clusters and want to recover quickly from a failed update. When this feature is enabled, OLM automatically triggers an update as soon as a newer version of an operator appears in the catalog, regardless of the state of the previous update. This results in significantly less manual cleanup work for SREs across the OpenShift fleet they manage.
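As a sketch, the opt-in lives on the OperatorGroup of the affected namespace. The field name below matches the technology preview API as we understand it, so verify it against the OLM documentation:

# Illustrative OperatorGroup opting a namespace into fail-forward
# upgrade recovery (Technology Preview behavior).
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: example-operatorgroup   # hypothetical name
  namespace: example-namespace
spec:
  upgradeStrategy: TechPreviewUnsafeFailForward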

Application autoscaling made simple

Customers can now use our newly introduced custom metrics autoscaler, based on the KEDA project, to scale their application workloads. The custom metrics autoscaler is in Technology Preview in OpenShift 4.11. To take advantage of this new feature, customers use the ScaledObject custom resource to define how an application should scale and what the metric-based triggers are. Behind the scenes, it uses the Horizontal Pod Autoscaler (HPA) to scale the application pods. With the custom metrics autoscaler, customers can now scale their pods down to zero.
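For example, a ScaledObject that scales a deployment on a Prometheus metric and allows scale-to-zero might look like the sketch below. The trigger metadata is illustrative (server address, query, and names are placeholders), and authenticated triggers are configured separately via a TriggerAuthentication resource:

# Illustrative ScaledObject: scale my-app between 0 and 10 replicas
# based on a Prometheus request-rate query.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scaler
  namespace: my-app
spec:
  scaleTargetRef:
    name: my-app               # Deployment to scale
  minReplicaCount: 0           # scale to zero when idle
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.example.com:9090   # hypothetical
        metricName: http_requests
        query: sum(rate(http_requests_total{app="my-app"}[2m]))
        threshold: "50"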

Customize VPA recommenders for Different Workloads

The existing Vertical Pod Autoscaler (VPA) recommends CPU and memory reservations based on a single default recommender to right-size application workloads. This has been challenging for customers who want to define their own customized policies to handle different workloads. In OpenShift 4.11, users and developers can configure different VPA recommenders for each of their application workloads to support distinct resource usage behaviors.
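A sketch of how that looks follows. The spec.recommenders field is the upstream VPA mechanism for selecting an alternate recommender; the recommender itself must be deployed separately, and the name here is hypothetical:

# Illustrative VPA that asks a custom recommender, rather than the
# default one, for CPU/memory recommendations for a deployment.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: frontend-vpa
spec:
  recommenders:
    - name: my-custom-recommender   # hypothetical alternate recommender
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  updatePolicy:
    updateMode: "Auto"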

Adapt to Network Conditions with Worker Latency Profiles

By default, OpenShift ships with default reaction times for different events. However, this may not be ideal for every scenario, as there are cases when the reaction time is either too fast or too slow. For example, whenever network latency increases between control plane and worker nodes, the Kube Controller Manager waits 40 seconds before declaring a worker node unreachable. In certain scenarios, 40 seconds is too short a reaction time and may cause unnecessary churn in the infrastructure. Similarly, when a node is deemed unhealthy, it gets tainted, and pods that are part of a deployment are re-scheduled elsewhere based on their replica count. By default, it takes 300 seconds before the scheduler re-schedules a pod, and for certain use cases 300 seconds may be too long a wait to start the application. To address the need for varying event reaction times, we introduced worker latency profiles in OpenShift 4.11. Customers can now choose between two additional profiles, Medium Update Average Reaction and Low Update Slow Reaction, in addition to the Default profile, based on their clusters’ network conditions and application needs.
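A sketch of the switch follows. The profile is set on the cluster-scoped Node configuration resource; verify the exact field and profile names against the 4.11 documentation:

# Illustrative cluster Node configuration selecting a latency profile
# tuned for a moderately slow network between control plane and workers.
apiVersion: config.openshift.io/v1
kind: Node
metadata:
  name: cluster
spec:
  workerLatencyProfile: MediumUpdateAverageReaction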

Container Storage Interface Updates

On the storage front, the Azure File Container Storage Interface (CSI) driver is now generally available, allowing OpenShift clusters on Azure to consume file storage over CIFS and to consume RWX persistent volumes (PVs).

OpenShift 4.11 is the first release where CSI migration is enabled for some drivers, including Azure Disk and OpenStack Cinder. CSI migration is transparent, is enabled by default, and does not require any user or administrative action. It’s worth noting that CSI migration does not perform any data migration; it works by translating in-tree PV object calls to CSI in memory, not on disk.

We also added a new option that lets Kubernetes consume ephemeral data through CSI with generic ephemeral volumes. As the name implies, this capability is not driver dependent; rather, it is supported by all CSI drivers that support dynamic provisioning, and because it is backed by CSI, users benefit from features such as network-attached backends, snapshots, expansion, and cloning.
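For example, a pod can request scratch space through a generic ephemeral volume with an inline claim template, as in this sketch (image and storage class name are placeholders):

# Pod with a generic ephemeral volume: the PVC is created from the
# inline template when the pod starts and is deleted with the pod.
apiVersion: v1
kind: Pod
metadata:
  name: scratch-example
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi-minimal
      command: ["sleep", "infinity"]
      volumeMounts:
        - mountPath: /scratch
          name: scratch
  volumes:
    - name: scratch
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: my-csi-sc   # hypothetical CSI storage class
            resources:
              requests:
                storage: 1Gi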

Make Data Scientists' Lives Easier and Run MLOps Platforms at Scale

With OpenShift 4.11, we added new features to make data scientists' lives easier and to run machine learning operations (MLOps) platforms at scale. Earlier this year, we introduced NVIDIA AI Enterprise, a single end-to-end cloud-native suite of AI and data analytics software. The new NVIDIA AI Enterprise 2.1 suite with OpenShift is supported on AWS, Azure, and Google Cloud, so customers can easily run supported GPU-accelerated frameworks, such as RAPIDS, TensorFlow, or Triton, in public clouds.

In this 4.11 release, we enabled four new features in the NVIDIA GPU Operator: (1) GPU time-sharing, which adds GPU-sharing capabilities alongside MIG on specific Ampere GPUs, where OpenShift administrators define a set of replicas for a GPU and users can simply run multiple pods per GPU, (2) a new GPU dashboard, so GPU utilization and GPU quotas can be monitored directly from the console, (3) OpenShift Virtualization vGPU enablement with the NVIDIA GPU Operator, which is in Technology Preview, and (4) GPU enablement for OpenShift on Arm (Technology Preview).

Update Improvements in the OpenShift Console

In the OpenShift Console, we have two main enhancements focused on cluster updates. The first is the ability to do partial updates, which allows users to update just the control plane, or the control plane plus selected machine pools. Cluster administrators pause and unpause the update of each defined machine pool to minimize disruption to their applications. Customers must complete their update within 60 days of when the update is started.

The second enhancement is conditional updates, which give cluster administrators added transparency into why certain versions are not recommended or are blocked in the Console. Cluster administrators are warned when an update is not recommended because a risk might apply, and they can override the recommendation and proceed with the update if they deem the risk acceptable.

Another exciting enhancement in the Console is the ability to manage pod disruption budgets (PDBs). With PDBs, application owners can state the minimum number of replicas of a deployment that must be available at any given time, protecting their applications and critical workloads from voluntary disruptions.
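For example, a minimal PDB keeping at least two replicas of an app available during voluntary disruptions looks like this (names and labels are illustrative):

# Illustrative PodDisruptionBudget: voluntary evictions are blocked
# whenever fewer than 2 pods matching the selector would remain.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app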

In OpenShift 4.11, users can see all PDBs in a central list, create PDBs from our new form-based experience, select a PDB and see the affected pods, or select any workload (deployment, statefulset, daemonset, replicaset) to view whether a PDB has been attached to that workload.

In every release, we like to tackle some of the most requested feature enhancements from our customers. In this release, we added a dark mode, so users can now select their preferred theme.

Observability Improvements in the OpenShift Console 

With OpenShift 4.11, a series of major observability features has been added. On the monitoring front, Red Hat continues to invest in improving both the Administrator and Developer user experience in the Observe section of the OpenShift web console. The Metrics page now shows function and metric suggestions to users as they type in the query browser, while the Dashboard page allows users to benefit from a higher data sampling rate. Both the Prometheus UI and the Grafana UI have been deprecated and removed; starting with OpenShift 4.11, the OpenShift Console replaces the dashboard visualization, functionality, and management features that were previously accessible via Grafana.

With the Cluster Monitoring Operator, customers can configure remote write with all authentication methods supported by community Prometheus. By configuring the retention size for their metrics, cluster administrators define how much data to retain on the persistent volume. Users can also benefit from fully supported user-defined Alertmanager alerts, and from the new metrics federation for user-defined monitoring via a Prometheus endpoint.
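To sketch these options together, an excerpt of the cluster-monitoring-config ConfigMap might look like this. The remote write URL, secret names, and retention size are placeholders:

# Illustrative cluster monitoring configuration: remote write with
# basic auth plus a retention size cap on the platform Prometheus.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retentionSize: 10GiB      # cap on-disk metrics data
      remoteWrite:
        - url: https://metrics.example.com/api/v1/write   # hypothetical
          basicAuth:
            username:
              name: remote-credentials   # Secret holding the credentials
              key: username
            password:
              name: remote-credentials
              key: password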

For logging, there’s more emphasis on Vector as an alternate collector and Loki as an alternate log store. With Vector as the log collector, customers can send logs to an Amazon CloudWatch destination for further analysis. Additionally, customers can now assemble multi-line stack traces into a single log entry using JSON format.
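A sketch of the CloudWatch forwarding follows, using a ClusterLogForwarder from the logging subsystem. The region, group-by setting, and secret name are placeholders, so verify the fields against the logging documentation:

# Illustrative ClusterLogForwarder sending application logs to
# Amazon CloudWatch, grouped by log type.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - name: cw
      type: cloudwatch
      cloudwatch:
        groupBy: logType
        region: us-east-1
      secret:
        name: cw-credentials   # hypothetical AWS credentials secret
  pipelines:
    - name: to-cloudwatch
      inputRefs: [application]
      outputRefs: [cw]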

The Logs view in the Console, dedicated to log exploration, lets users dig into the root cause of their system’s problems. This new Logs view displays the original log entries and lets users filter them by severity, as well as zoom in for further investigation.

These updates provide better product support and a better out of the box experience within the OpenShift Console across Red Hat Private, Hybrid, and Managed deployments going forward.

OpenShift Developer Experience

For developers, the OpenShift Console now features an updated Developer Perspective. odo v3, the OpenShift developer CLI, is now in beta 1 with improved developer flows. There are also new container tooling initiatives, including Podman Desktop and a Docker Desktop extension for OpenShift.

OpenShift Dev Spaces 3.0, formerly CodeReady Workspaces, has been rewritten around an OpenShift operator. The new DevWorkspace Operator provides the following benefits to both administrators and developers: (1) scalability and high availability, (2) simplified authentication using OpenShift OAuth, (3) support for both Devfile v1 and v2, (4) Technology Preview support for Visual Studio Code as an IDE (in addition to Eclipse Theia and JetBrains IDEs), and (5) faster-loading workspaces with fewer containers per workspace.

OpenShift Serverless updated to Version 1.24

OpenShift Serverless has been updated to version 1.24, based on upstream Knative version 1.3. New features include the ability to integrate Serverless applications with Cost Management Services, so users can track costs for clouds and containers. Support for init containers, specialized containers that implement initialization logic for the application, is generally available. Knative Kafka Broker, Knative Kafka Sink, and Serverless Functions/Knative Functions are upgraded to Technology Preview.

For the developer experience, we added a form-driven experience for creating Event Sinks in the Dev Console when the Camel K operator is installed, and two new Serverless dashboards for the Developer Perspective. We also introduce Serverless Logic as a Developer Preview: workflow models that orchestrate functions for event-driven applications. Refer to the OpenShift Serverless 1.24.0 release notes for more details, as well as more features from this and previous releases.

Try out OpenShift 4.11 Today!

Beyond what’s highlighted in this blog, check out the following resources to learn more about the new OpenShift 4.11 release:

Thank you for reading about OpenShift 4.11. Please comment or send us feedback, either through your usual Red Hat contacts or as an issue on OpenShift on GitHub.


About the author

Ju Lim works on the core Red Hat OpenShift Container Platform for hybrid and multi-cloud environments to enable customers to run Red Hat OpenShift anywhere. Ju leads the product management teams responsible for installation, updates, provider integration, and cloud infrastructure.
