
Today, I am pleased to announce the general availability of Red Hat OpenShift 4.8. While this blog provides an overview of the features, functions, and benefits, the OpenShift 4.8 press release is a companion read if you are interested in some use cases and how our customers and partners are using OpenShift. 

Based on Kubernetes 1.21 (and CRI-O 1.21), OpenShift 4.8 delivers a number of exciting enhancements and new features for developers of cloud-native apps, for cloud-native DevOps practitioners, and for cluster and cloud administrators.

For the Developer: 

We introduce OpenShift Serverless Functions as a technology preview, building on the previously available OpenShift Serverless Eventing and Serving. Functions provide a simplified programming model that lets developers consume events through function-based APIs. These functions can be written in the language and framework of your choice, such as Quarkus, Node.js, Python, Go, and Spring Boot. Serverless also includes a Kafka Channel/Event Source alongside built-in event sources (Kubernetes APIs, Ping, Kafka, and ContainerSource) and several other event sources, powered by Camel-K (TP), to build serverless applications. These functions can be triggered by CloudEvents or plain HTTP requests and automatically scale up (and down) in response to incoming demand.
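To make the programming model concrete, here is a minimal sketch of what an event-driven function might look like in Python. The handler name and the shape of the event payload are illustrative assumptions; the exact signature depends on the language template your functions runtime generates.

```python
# Hypothetical sketch of a function-style CloudEvent handler.
# The handler name and payload shape are illustrative; the real
# OpenShift Serverless Functions signature comes from the language template.

def handle(event: dict) -> dict:
    """Process a CloudEvent-like payload and return a response body."""
    # CloudEvents carry context attributes such as "type" and "source"
    # alongside the data payload.
    event_type = event.get("type", "unknown")
    data = event.get("data", {})
    return {
        "received_type": event_type,
        "echo": data,
    }

if __name__ == "__main__":
    # Simulate an incoming CloudEvent delivered over HTTP.
    sample = {
        "type": "com.example.order.created",  # hypothetical event type
        "source": "/orders",
        "data": {"id": 42},
    }
    print(handle(sample))
```

In a real deployment, the platform invokes the handler for each incoming CloudEvent or HTTP request and scales the function's pods with demand, so the code only needs to express the per-event logic.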

OpenShift sandboxed containers, based on the Kata Containers open source project, provide an optional OCI-compliant runtime for running containerized workloads in lightweight virtual machines. While the vast majority of applications and services are well served by the strong security features of Linux containers, OpenShift sandboxed containers can provide an additional layer of isolation ideal for highly sensitive tasks, such as privileged workloads or running untrusted code. Under the hood, the new OpenShift sandboxed containers operator operationalizes the tasks required to install and manage the lifecycle of Kata Containers on an OpenShift cluster, including the installation of the Kata Containers RPMs, the configuration of CRI-O runtime handlers, the installation of QEMU (the underlying VMM), and the creation of the required `RuntimeClass`.
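From a workload's point of view, opting in is a one-line change. The following is an illustrative sketch, assuming the operator has created a `RuntimeClass` named `kata` (the handler and image names below are examples only):

```yaml
# Illustrative RuntimeClass for Kata Containers; the sandboxed
# containers operator creates an equivalent object for you.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata            # CRI-O runtime handler configured by the operator
---
# A pod opts in to the sandboxed runtime by naming the runtime class:
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-example
spec:
  runtimeClassName: kata
  containers:
  - name: app
    image: registry.example.com/untrusted-app:latest  # hypothetical image
```

Pods without `runtimeClassName` keep running on the default runc runtime, so sandboxing can be adopted selectively for the workloads that need it.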

NVIDIA Multi-Instance GPU (MIG) support is now available in the OpenShift NVIDIA GPU operator. MIG speeds both development and serving of AI models by giving up to seven data scientists simultaneous access to what feels like a dedicated GPU, so they can work in parallel.

OpenShift Logging adds support for JSON logs. Users gain fine-grained control over container logs in JSON format and can label common JSON schemas so that a log management system (either OpenShift’s out-of-the-box Elasticsearch or a third party) knows how to handle these logs. Most notably, developers can investigate quickly by querying logs by specific fields.
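As a sketch of how this fits together, a `ClusterLogForwarder` pipeline can be told to parse container output as JSON and to route structured logs by a common schema label. The label key below is an assumption for illustration:

```yaml
# Illustrative ClusterLogForwarder snippet that keeps container logs
# structured as JSON instead of a flat message string.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputDefaults:
    elasticsearch:
      # Group logs sharing a schema by a pod label (example key)
      structuredTypeKey: kubernetes.labels.logFormat
  pipelines:
  - name: parse-json
    inputRefs: [application]
    outputRefs: [default]
    parse: json          # parse each record as JSON, preserving its fields
```

With fields preserved, queries like "all records where `level=error` and `orderId=42`" become possible, rather than grepping a text blob.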

OpenShift Container Platform 4.8 introduces two new alerts that fire when an API that will be removed in the next release is in use: `APIRemovedInNextReleaseInUse` and `APIRemovedInNextEUSReleaseInUse`. Developers and administrators can use the new `APIRequestCount` API to track deprecated APIs, which allows you to plan for upgrades.

For the DevOps Practitioner: 

OpenShift Pipelines is generally available with OpenShift Pipelines 1.5 on OpenShift 4.8. Auto-pruning allows admins to configure automatic removal of PipelineRuns and TaskRuns, offloading from the cluster previous pipeline executions that are not needed in the short term. Pipeline as code, released as a Developer Preview, enables developers to treat Git as the single source of truth for their Tekton pipelines. On Git events, which are customizable by the developer, the pipeline definition and related tasks are fetched from the Git repository and executed on the OpenShift cluster.
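Auto-pruning is configured declaratively on the operator's `TektonConfig` resource. A minimal sketch, with the retention count and schedule chosen purely as examples:

```yaml
# Illustrative pruner settings in the TektonConfig resource managed
# by the OpenShift Pipelines operator.
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pruner:
    resources:
    - pipelinerun
    - taskrun
    keep: 5                 # retain only the five most recent runs (example)
    schedule: "0 2 * * *"   # prune nightly at 02:00 (example cron)
```

This keeps recent runs available for debugging while preventing old PipelineRun and TaskRun objects from accumulating on the cluster.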

OpenShift GitOps 1.2 is new and brings two major highlights. Argo CD authentication is integrated out of the box with Red Hat Single Sign-On (SSO), allowing users to log in to Argo CD using their OpenShift credentials. Argo CD privilege configuration is simplified through the use of appropriate labels and annotations in the OpenShift GitOps operator.

It is a challenge to allocate resources to an application appropriately as demand changes. Over-provisioning leads to workloads not being scheduled even when there is available capacity on the workers, and under-provisioning can impact the application’s service levels. Now generally available, Vertical Pod Autoscaling (VPA) allows DevOps practitioners to “right size” workloads; in “Off” mode, VPA recommends an appropriate size for the workload based on historical CPU and memory usage without applying the changes automatically.
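A recommendation-only setup can be sketched as follows (the workload name is hypothetical):

```yaml
# Illustrative VerticalPodAutoscaler in "Off" mode: the recommender
# reports suggested requests without evicting or resizing pods.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: myapp-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp          # hypothetical workload to observe
  updatePolicy:
    updateMode: "Off"    # recommend only; inspect status.recommendation
```

The recommended CPU and memory requests appear in the object's status, so teams can review them before adjusting the Deployment, or later switch the update mode to let VPA apply changes itself.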

For Cluster, Cloud, and Infrastructure Administrators: 

As Kubernetes and OpenShift grow in popularity and serve ever more users, use cases, and workloads for our customers (see our press release on this), multicluster deployment and management are becoming critically important. Zero-touch provisioning takes this further by automating the deployment and configuration of clusters at scale, particularly at the edge, with minimal manual intervention.

OpenShift 4.8 introduces support for IPv6. IPv6 single stack and IPv4/IPv6 dual stack are generally available via the OVN-Kubernetes network provider. This allows hosts outside the cluster to connect to pods in the cluster, and vice versa, over IPv4 or IPv6. This adds critical functionality for a number of telco vendors and service providers, and extends to governmental agencies, with more industries and organizations being added to that list.
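In an install-config, dual stack amounts to listing an IPv4 and an IPv6 CIDR for both the cluster and service networks. A sketch, with example CIDRs only:

```yaml
# Illustrative install-config networking stanza for IPv4/IPv6 dual
# stack with OVN-Kubernetes; all CIDRs here are examples.
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14    # IPv4 pod network
    hostPrefix: 23
  - cidr: fd01::/48        # IPv6 pod network
    hostPrefix: 64
  serviceNetwork:
  - 172.30.0.0/16
  - fd02::/112
```

Pods and services then receive addresses from both families, so clients on either protocol can reach the cluster.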

OpenShift ingress/egress adds a number of customer-driven enhancements: an HAProxy upgrade to 2.2 LTS (with 2048-bit certificates and storage), enhancements that allow users to customize HAProxy, IP failover support with Keepalived, and an EgressIP load-balancing enhancement for OpenShift SDN that spreads traffic across cluster nodes.

In addition, we are very excited to introduce a developer preview of the Gateway API. This unifying ingress technology uses Contour as the primary ingress controller for Gateway API traffic alongside HAProxy, with improved integration with Envoy and Service Mesh. Keep an eye on this topic as we work upstream in the community to continue to enhance and stabilize it, and to bring it to you, our customers, as a fully supported capability in the future.

OpenShift on bare metal has been of significant interest to many of our large customers, especially in industries such as financial services and the public sector. OpenShift 4.8 improves the bare metal experience with UEFI Secure Boot for the IPI installer, PXE boot support for additional nodes, and scheduling pods based on hardware attributes.

Butane, formerly known as FCCT (the Fedora CoreOS Config Transpiler), now ships with OpenShift. Butane helps administrators more easily create RHEL CoreOS MachineConfigs and Ignition configurations. It streamlines the process of passing individual configuration files, or even a tree of files, onto CoreOS nodes, and it helps catch MachineConfig spec errors.
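The workflow is: write a short, human-readable Butane config, transpile it with the `butane` CLI, and apply the resulting MachineConfig. A minimal sketch, with file path and contents chosen purely as examples:

```yaml
# Illustrative Butane config (e.g. saved as custom.bu) that writes a
# file onto worker nodes; transpiling it yields a MachineConfig.
variant: openshift
version: 4.8.0
metadata:
  name: 99-worker-custom-file
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  files:
  - path: /etc/example/app.conf   # hypothetical file
    mode: 0644
    contents:
      inline: |
        setting=value
```

Running something like `butane custom.bu > 99-worker-custom-file.yaml` produces the MachineConfig, which can then be applied with `oc apply -f`; malformed fields are caught at transpile time rather than on the node.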

Improvements to the control plane include service serving certificates for headless StatefulSets, which provide automatic certificate generation and rotation for direct pod-to-pod communication (similar to the service-serving-certificates operator), and support for the URI scheme in the subject claim of OpenID Connect IdPs, which enables users of identity providers that use a URI scheme in their `sub` claims to log in to OpenShift.
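For the headless case, the mechanism is the existing serving-cert annotation on a Service; a sketch with hypothetical names:

```yaml
# Illustrative headless Service annotated for serving-certificate
# generation; the resulting secret is mounted by the StatefulSet's pods.
apiVersion: v1
kind: Service
metadata:
  name: myapp-headless          # hypothetical service name
  annotations:
    service.beta.openshift.io/serving-cert-secret-name: myapp-tls
spec:
  clusterIP: None               # headless: each StatefulSet pod gets a DNS name
  selector:
    app: myapp
  ports:
  - port: 8443
```

The platform generates and rotates a TLS certificate in the named secret, so pods behind the headless Service can speak TLS to each other without hand-managed certificates.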

The full list of improvements in Red Hat OpenShift 4.8 can be found in the full release notes, including details on all the new technologies in tech preview and on deprecations. In addition, there are a number of companion blogs that delve into select details of What’s New in 4.8. These will be published over the coming days, here and elsewhere.

My fellow product managers delivered an amazing "What's New" presentation with details of what you see here and more. If you have time to sit through all of these changes explained in person, you can check out this deep dive into the changes in 4.8, which was originally broadcast on OpenShift.tv.

If you want to try out Red Hat OpenShift 4.8, there are several ways to do it, from online learning tutorials and demos on your laptop to how-to’s for the public cloud or your own data center.