As Kubernetes and the Red Hat OpenShift platform evolve, they are becoming more important to the daily operations of enterprises around the world. OpenShift 4.5 incorporates Kubernetes 1.18, which brought a great deal of fit-and-finish work to improve stability for high-scale operations. The underlying infrastructure for any open hybrid cloud has to span all manner of infrastructures, from cloud providers to virtual machines to bare metal. Indeed, the very idea of the open hybrid cloud implies multiple clusters running in multiple datacenters owned by multiple partners.
With OpenShift 4.5, we are pleased to introduce support for OpenShift on vSphere with the full-stack automation experience, also known as installer-provisioned infrastructure (IPI). This is in addition to our existing support for vSphere with the user-provisioned infrastructure (UPI) experience. OpenShift 4.5 also introduces support for running OpenShift clusters on Google Cloud using the pre-existing infrastructure installation model on a shared VPC.
For customers looking to deploy OpenShift 4 into resource-constrained environments such as edge locations, OpenShift 4.5 adds support for compact three-node clusters. This configuration is supported in bare metal deployments on OpenShift 4.5; take a look at this document on running a three-node cluster.
There are a lot of quality of life improvements in OpenShift 4.5, and the full release notes, including details on all the new technologies in tech preview and on deprecations, can be found here.
Below, we’ve included a listing of most of the new features not already mentioned above:
New features and enhancements
This release adds improvements related to the following components and concepts.
Nodes
New descheduler strategy is available (Technology Preview)
The descheduler now allows you to configure the RemovePodsHavingTooManyRestarts strategy. This strategy ensures that Pods that have been restarted too many times are removed from nodes.
See Descheduler strategies for more information.
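As a sketch, the strategy is enabled through a KubeDescheduler custom resource; the parameter values below are illustrative, and the exact field names follow the upstream descheduler strategy parameters, which may vary by release:

```yaml
apiVersion: operator.openshift.io/v1
kind: KubeDescheduler
metadata:
  name: cluster
  namespace: openshift-kube-descheduler-operator
spec:
  deschedulingIntervalSeconds: 3600
  strategies:
    - name: "RemovePodsHavingTooManyRestarts"
      params:
        - name: "podRestartThreshold"      # evict Pods restarted more than this many times
          value: "10"
        - name: "includingInitContainers"  # count init container restarts as well
          value: "true"
```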
Node pull secrets
You can import and use images from any registry configured during or after the cluster installation by sharing the node’s pull secret credentials with the openshift-api, builder, and image-registry Pods.
Vertical Pod Autoscaler (Technology Preview)
OpenShift Container Platform 4.5 introduces the Vertical Pod Autoscaler (VPA). The VPA reviews the historic and current CPU and memory resources for containers in Pods and can update the resource limits and requests based on the usage values it learns. You can configure the VPA to update the Pods associated with a workload object, such as a Deployment, DeploymentConfig, StatefulSet, Job, DaemonSet, ReplicaSet, or ReplicationController. The VPA can optimize the CPU and memory allocation for applications and can automatically maintain Pod resources throughout the Pod lifecycle.
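A minimal VerticalPodAutoscaler manifest targeting a Deployment might look like the following sketch; the workload name frontend is hypothetical:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: frontend-vpa
spec:
  targetRef:                # the workload object whose Pods the VPA manages
    apiVersion: apps/v1
    kind: Deployment
    name: frontend          # hypothetical Deployment name
  updatePolicy:
    updateMode: "Auto"      # use "Off" to receive recommendations without updates
```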
Web console
New Infrastructure Features filters for Operators in OperatorHub
You can now filter Operators by Infrastructure Features in OperatorHub. For example, select Disconnected to see Operators that work in disconnected environments.
Networking
Migrating from the OpenShift SDN default CNI network provider (Technology Preview)
You can now migrate to the OVN-Kubernetes default Container Network Interface (CNI) network provider from the OpenShift SDN default CNI network provider.
For more information, see Migrate from the OpenShift SDN default CNI network provider.
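Before planning a migration, you can confirm which default CNI network provider a cluster is currently running; the migration itself is a multi-step procedure covered in the linked documentation, so this only shows the initial check:

```shell
# Show the cluster's current default CNI network provider
# (for example, OpenShiftSDN or OVNKubernetes).
oc get Network.config cluster -o jsonpath='{.status.networkType}'
```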
Ingress enhancements
There are two noteworthy Ingress enhancements introduced in OpenShift Container Platform 4.5:
- You can enable access logs for the Ingress Controller.
- You can specify a wildcard route policy through the Ingress Controller.
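As an illustrative sketch, both enhancements are surfaced on the IngressController resource; the following enables access logs to a container destination and allows wildcard routes:

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  logging:
    access:
      destination:
        type: Container            # send access logs to a sidecar container
  routeAdmission:
    wildcardPolicy: WildcardsAllowed   # admit routes with a wildcard policy
```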
Developer experience
oc new-app now produces Deployment resources
The oc new-app command now produces Deployment resources instead of DeploymentConfig resources by default. If you prefer to create DeploymentConfig resources, you can pass the --as-deployment-config flag when invoking oc new-app. For more information, see Understanding Deployments and DeploymentConfigs.
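A quick illustration of the new default and the opt-out flag; the application names and image are placeholders:

```shell
# Generates a Deployment by default in OpenShift 4.5.
oc new-app --name hello nginx

# Opts back into generating a DeploymentConfig instead.
oc new-app --name hello-dc nginx --as-deployment-config
```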
Disaster recovery
Automatic control plane certificate recovery
OpenShift Container Platform can now automatically recover from expired control plane certificates. The exception is that you must manually approve pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates.
See Recovering from expired control plane certificates for more information.
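The manual approval step can be sketched as follows, where <csr_name> stands in for each pending node-bootstrapper request:

```shell
# List certificate signing requests and approve the pending
# node-bootstrapper CSRs so kubelet certificates can be reissued.
oc get csr
oc adm certificate approve <csr_name>
```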
Shutting down the cluster gracefully
Documentation is now available on the process to gracefully shut down and later restart an OpenShift cluster.
Storage
Persistent storage using the AWS EBS CSI Driver Operator (Technology Preview)
You can now use the Container Storage Interface (CSI) to deploy the CSI driver you need for provisioning AWS Elastic Block Store (EBS) persistent storage. This Operator is in Technology Preview.
Persistent storage using the OpenStack Manila CSI Driver Operator
You can now use CSI to provision a PersistentVolume using the CSI driver for the OpenStack Manila shared file system service.
Persistent storage using CSI inline ephemeral volumes (Technology Preview)
You can now use CSI to specify volumes directly in the Pod specification, rather than in a PersistentVolume. This feature is in Technology Preview and is available by default when using CSI drivers. For more information, see CSI inline ephemeral volumes.
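As a sketch, an inline CSI volume is declared directly in the Pod specification under volumes.csi; the driver name and image below are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: csi-inline-demo
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest   # hypothetical image
      volumeMounts:
        - name: scratch
          mountPath: /data
  volumes:
    - name: scratch
      csi:                                     # inline ephemeral volume, no PersistentVolume
        driver: inline.storage.example.com     # hypothetical CSI driver name
        volumeAttributes:
          size: 1Gi
```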
Persistent storage using CSI volume cloning
Volume cloning using CSI, previously in Technology Preview, is now fully supported in OpenShift Container Platform 4.5. For more information, see CSI volume cloning.
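Cloning is requested by pointing a new PersistentVolumeClaim's dataSource at an existing claim backed by a CSI driver; the storage class and claim names below are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-clone
spec:
  storageClassName: csi-sc          # hypothetical CSI storage class
  dataSource:                       # clone the contents of an existing claim
    kind: PersistentVolumeClaim
    name: pvc-source                # hypothetical source claim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```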
Operator Lifecycle Manager
v1 CRD support
Operator Lifecycle Manager (OLM) now supports Operators using v1 Custom Resource Definitions (CRDs) when loading Operators into catalogs and deploying them on cluster. Previously, OLM only supported v1beta1 CRDs; OLM now manages both v1 and v1beta1 CRDs in the same way.
Notable technical changes
OpenShift Container Platform 4.5 introduces the following notable technical changes.
Operator SDK v0.17.1
OpenShift Container Platform 4.5 supports Operator SDK v0.17.1, which introduces the following notable technical changes:
- The --crd-version flag was added to the new, add api, add crd, and generate crds commands so that users can opt in to v1 CRDs. The default setting is v1beta1.
Ansible-based Operator enhancements include:
- Support for relative Ansible roles and playbooks paths in the Ansible-based Operator Watches files.
- Event statistics output to the Operator logs.
Helm-based Operator enhancements include:
- Support for Prometheus metrics.
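The v1 CRD opt-in described above can be sketched as follows; the API group and kind are hypothetical:

```shell
# Scaffold a new API, opting in to apiextensions.k8s.io/v1 CRDs
# (the default remains v1beta1).
operator-sdk add api --api-version=cache.example.com/v1alpha1 \
  --kind=Memcached --crd-version=v1
```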
terminationGracePeriod parameter support
OpenShift Container Platform now properly supports the terminationGracePeriodSeconds parameter with the CRI-O container runtime.
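For reference, the parameter is set on the Pod spec and defines how long the runtime waits between SIGTERM and SIGKILL during Pod shutdown; the image below is hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-shutdown-demo
spec:
  terminationGracePeriodSeconds: 30          # CRI-O now honors this shutdown window
  containers:
    - name: app
      image: registry.example.com/app:latest # hypothetical image
```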