One of the things I hear while visiting customers is how much they love the fact that we continue to release new software features in OpenShift at the pace of one release every quarter. OpenShift Container Platform 3.5 is now our 6th "minor" release of OpenShift, with countless errata releases (on average about every 3 weeks) since 2015. What you might not have noticed is that all our OpenShift and RHEL engineers are pulling double duty during releases. While we were up late at night getting OpenShift 3.5 ready to release, they were also finishing up Kubernetes 1.6. That pace of innovation and passion is only possible by working in an open community. Every Red Hat engineer working on Kubernetes does so with the pride of knowing they are but a small piece of the largest and most feature-rich cloud-native container open source project available in the industry today. If you are working on public cloud platforms, hybrid architectures, microservices, CI/CD, DevOps, security, containers, container networking, next-generation storage, metrics and logging, Ansible, developer experiences, authentication, and many other bedrock technologies in our industry today...this is the place to be.

If you look at the 20 special interest groups (SIGs) in Kubernetes, Red Hat is leading and contributing serious code investments to about half of them, and we are involved in some way in just about all of them.

Red Hat takes an active role in the container orchestration, cluster management, and container communities so that we can deliver our customers the best fit-for-purpose experience for developers and operators consuming an application platform running anywhere. We invest in order to give our customers a voice. Our direction is very simple...it's your direction. We move forward together.

In this release, OpenShift 3.5 leverages Kubernetes 1.5.2. There were great improvements in the following areas:

  • Diversity of Application Workloads
  • Stability of Experience
  • Security Enhancements
  • Storage Enhancements


Diversity of Application Workloads

One large area of feature work is StatefulSets. OpenShift has had support for persistent storage commonly used by non-cloud-native applications for years...since the OpenShift 2.x days. We continued that investment as we moved into OpenShift 3.x because we have always understood our customers need persistence from the platform to allow them to run a wider range of application services. StatefulSets build on that premise and offer a path to take on even more application requirements. Think of StatefulSets as a way to describe the ordinal or sequencing requirements of your application. Some applications have nuances in how additional instances are added or removed. There is a sequence that has to be followed at deployment time. There are complexities around how each instance's network name has to be established. You might not be able to take action without first checking to see if other requirements have been met within the application (e.g., is the instance that started right before me still up before you remove me?). The way storage is mounted, and whether it can or cannot be shared across new and replaced instances, must be controlled. These are all use cases that are now possible in the technology preview release of StatefulSets.
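To make that concrete, here is a minimal sketch of what a StatefulSet declaration looks like in this preview. The apps/v1beta1 API group matches the upstream Kubernetes 1.5 beta; the image, names, and storage sizes are purely illustrative.

$ oc create -f - <<'EOF'
apiVersion: apps/v1beta1          # beta API group for StatefulSets in Kubernetes 1.5
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db                 # headless service that gives each pod a stable network name
  replicas: 3                     # pods are created and removed in order: db-0, db-1, db-2
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: registry.example.com/db:latest   # hypothetical image
        volumeMounts:
        - name: data
          mountPath: /var/lib/db
  volumeClaimTemplates:           # each ordinal pod gets its own persistent volume claim
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
EOF

Each replica keeps its ordinal identity (db-0, db-1, db-2) and its own claim across restarts, which is what makes the sequencing and storage requirements described above possible.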

As you begin to design your application deployment, you might find the need to run a separate process or daemon within the same pod that houses your primary application service. It might handle housekeeping tasks or other work. In such cases, it is common to want to share memory (/dev/shm) between just those two containers (that are in the same pod). We now have that ability in OpenShift 3.5.
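As a minimal sketch, the two containers below land in the same pod, so with this release one can write to /dev/shm and the other can read it. The images and file name are hypothetical.

$ oc create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: shared-shm-demo
spec:
  containers:
  - name: app                                   # primary application container
    image: registry.example.com/app:latest      # hypothetical image
    command: ["sh", "-c", "echo ready > /dev/shm/handshake && sleep 3600"]
  - name: housekeeper                           # sidecar doing housekeeping work
    image: registry.example.com/housekeeper:latest
    command: ["sh", "-c", "sleep 5; cat /dev/shm/handshake && sleep 3600"]
EOF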

The resource management SIG in Kubernetes is working diligently on a solution that will ultimately offer platform-level awareness of node-level configurations for PCI, NUMA, GPU, and many other hardware-level design elements commonly leveraged by applications with latency concerns. The technology preview release of opaque integer resources in OpenShift 3.5 is a first step in that direction. This feature puts the API structure in place that allows a node to advertise an integer resource that the scheduler can leverage for targeting, while the cluster keeps track of how many are in use and how many remain for future consumption by other workloads.
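As a sketch of the upstream Kubernetes 1.5 alpha convention, a node advertises an opaque integer resource by patching its status, and a pod then requests it by name. The resource name "foo", the node name, and the API server URL are illustrative, and authentication headers are omitted.

Advertise 5 units of the resource on a node (note the ~1 escaping of "/" in the JSON patch path):

$ curl --header "Content-Type: application/json-patch+json" \
    --request PATCH \
    --data '[{"op": "add", "path": "/status/capacity/pod.alpha.kubernetes.io~1opaque-int-resource-foo", "value": "5"}]' \
    https://<master>:8443/api/v1/nodes/<node-name>/status

A pod can then request the resource just like CPU or memory:

    resources:
      requests:
        pod.alpha.kubernetes.io/opaque-int-resource-foo: 1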


Lastly, I'm happy to announce that when tenants declare service endpoints for consumption by their application (such as an Oracle database that lives outside of the OpenShift cluster), they will no longer need to use the IP address of that external service. They can now declare a new type called ExternalName in their service declaration and provide the FQDN.
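A minimal sketch of such a service follows; the service name and FQDN are illustrative. In-cluster clients resolve oracle-db and are pointed at the external hostname via DNS.

$ oc create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: oracle-db                     # name tenants use inside the cluster
spec:
  type: ExternalName
  externalName: oracle.example.com    # FQDN of the external service
EOF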


Stability of Experience

With every release, we want to make sure we are attacking any areas of concern at the implementation level of a feature, now that the product has been shipping for almost 2 years. In this release, we see big improvements in horizontal pod autoscaling. The selection algorithm is more aware of the running state of the pod, so containers that are still booting up do not mistakenly cause the deployment to autoscale due to their startup spike in CPU usage. At the same time, pods that are not registering any usage metrics are handled in a more appropriate manner when the API evaluates automated scale-up and scale-down actions.
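For context, a horizontal pod autoscaler is typically declared against a deployment configuration like this; the deployment name and thresholds are illustrative.

$ oc autoscale dc/frontend --min=2 --max=10 --cpu-percent=80

The improvements above affect how the autoscaler interprets the CPU metrics behind that 80% target, not how you declare it.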

One of the most powerful and extensible areas of Kubernetes is the controller concept. Replication controllers, ReplicaSets, DaemonSets, deployments, pod disruption budgets, jobs, etc. They all leverage the controller primitives in OpenShift. As these controllers grew up across different versions of OpenShift, some developed different heuristics for how they back off and retry when an API response is not received. With this release, we offer consistency across all of them in this regard. The result is a more predictable experience in how the platform APIs react when orchestrating pods and application services on the platform.

The ability to declare security contexts and policies that govern what tenants and pods are allowed to see and do on OpenShift is a feature operators love. But if a tenant has no idea what type of experience they have been limited to, it can become frustrating. OpenShift 3.5 introduces the 'can-i' and 'scc-review' verbs in the oc client. Tenants can discover how their SCC/policy has been set up, to help them understand why something might not be working.

As a project admin, list which permissions a particular user or group has in the current project:

$ oc policy can-i --list --user=<user>
$ oc policy can-i --list --groups=<group>

As the cluster admin, list which permissions a particular user or group has in a specific project:

$ oc policy can-i --list --user=<user> -n <project>
$ oc policy can-i --list --groups=<group> -n <project>

Determine whether a user or group can perform a specific verb on a specific resource:

$ oc policy can-i <verb> <resource> --user=<user>

Test the check with scopes:

$ oc policy can-i <verb> <resource> --user=<user> --scopes=user:info

Test the check while ignoring scopes:

$ oc policy can-i <verb> <resource> --user=<user> --ignore-scopes=true

A lower-privileged user cannot list the permissions of a project admin or the system admin:

$ oc policy can-i --list --user=<project-admin-user>
$ oc policy can-i --list --user=system:admin

Check whether a user or a ServiceAccount can create a Pod

$ oc policy scc-subject-review -f examples/hello-openshift/hello-pod.json
RESOURCE              ALLOWED BY
Pod/hello-openshift   restricted

One area that took the most time this release was installation and upgrades. We took a serious look at how we were handling idempotency throughout our Ansible playbooks and decided to go back in and clean them all up. We created pre/post action hooks for master upgrades to allow customers to check the status of related services before and after taking action. We also converted some free-form YAML tasks into reusable Ansible modules to drive more consistency in how we call repeated tasks.
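As a sketch of how those hooks can be wired up from the Ansible inventory, the variable names and playbook paths below are assumptions based on the openshift-ansible hook mechanism; check the installer documentation for the exact names in your version.

$ cat >> /etc/ansible/hosts <<'EOF'
# Hypothetical hook wiring (variable names and paths assumed): each points at a custom playbook run around the master upgrade
openshift_master_upgrade_pre_hook=/usr/share/custom/pre_master.yml
openshift_master_upgrade_post_hook=/usr/share/custom/post_master.yml
EOF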

Security Enhancements

SELinux has been an amazing defense layer for container runtimes to leverage. Mandatory access controls have been able to blunt some userland exploits of the daemon implementation, and we are glad we have it turned on by default. In this release, we further lock down the kubelet by making sure it and the containers share no SELinux access control labels in common. More layers of protection are always a "good thing".

Objects like configMaps, secrets, and emptyDir are exposed to containers via volumes. These memory-backed volumes used to be left around after pod termination in case you wanted to recreate the deployment. We have decided to increase our security protection and automatically delete these memory-backed volumes at pod termination.

Tenants on the platform that need certificates for their application services can ask for one from OpenShift's service.alpha.openshift.io technology preview endpoint. New in OpenShift 3.5 is OpenShift's ability to watch the expiry of these application certificates via the new service.alpha.openshift.io/expiry annotation on your secret. OpenShift also encrypts all of its framework traffic (kubelet, etcd, master controller, master API server, etc.) with TLS. These framework services leverage certificates generated at platform installation. We have enhanced the Ansible playbooks that help you administer these certificates so that they conduct a rolling upgrade when the certificates expire: we now keep the old certificate in place while we install the new certificate across the cluster, and then we remove the old one. We also introduced a new oadm command for the platform administrator called 'ca create-master-certs', which allows operators to declare their own expiry date for framework certificates. In a future release, we will offer Ansible playbooks to leverage this command across the cluster components.

# oadm ca create-master-certs --hostnames=example.org --signer-expire-days=$[365*2+1]
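For the application-facing certificates mentioned above, requesting and inspecting a serving certificate is a short exercise; the service and secret names here are illustrative, and the serving-cert-secret-name annotation shown is based on the service.alpha.openshift.io tech preview described earlier.

$ oc annotate service my-app service.alpha.openshift.io/serving-cert-secret-name=my-app-tls
$ oc get secret my-app-tls -o yaml | grep service.alpha.openshift.io/expiry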

OpenShift 3.5 now offers more granular control over how secrets are mounted into containers. You can imagine situations wherein applications require their configuration files to be readable only by the owner in order for their process to start. This feature gives you control over the file permissions of secret, downward API, and configMap content at the volume mount point.
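A minimal sketch of a secret volume with restricted file modes follows; the pod, image, and secret names are illustrative.

$ oc create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-perms-demo
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest    # hypothetical image
    volumeMounts:
    - name: config
      mountPath: /etc/app
      readOnly: true
  volumes:
  - name: config
    secret:
      secretName: app-config                   # pre-existing secret (illustrative)
      defaultMode: 0400                        # every file owner-readable only
      items:
      - key: app.conf
        path: app.conf
        mode: 0400                             # per-file override
EOF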

Storage Enhancements

There are three storage enhancements I would like to bring to your attention. The first has to do with Azure. In the previous release of OpenShift (3.4), we delivered dynamic storage provisioning for Ceph, GlusterFS, Google Persistent Disk, AWS EBS, Cinder, and CNS. We round out that set by adding dynamic provisioning for Azure block storage.
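A dynamic provisioner is selected through a StorageClass; a minimal sketch for Azure disks follows. The class name, SKU, and location values are illustrative, and the exact parameter keys should be checked against the Azure provisioner documentation for your version.

$ oc create -f - <<'EOF'
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1       # v1beta1 API group in Kubernetes 1.5
metadata:
  name: azure-standard
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS                  # illustrative Azure storage SKU
  location: eastus                       # illustrative Azure region
EOF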

The next feature is an exciting new implementation of dynamic provisioning called the external dynamic provisioner. By qualifying this interface on OpenShift 3.5, we expanded dynamic storage provisioning to 3rd-party storage vendors such as NetApp. NetApp Trident will now work on OpenShift, offering direct API consumption of popular NetApp storage devices and arrays.
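The external interface means a StorageClass can simply name a provisioner that runs outside the Kubernetes codebase. The sketch below uses a deliberately hypothetical provisioner name; a real vendor integration such as Trident documents its own provisioner string and parameters.

$ oc create -f - <<'EOF'
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: vendor-gold
provisioner: example.com/my-provisioner   # hypothetical external provisioner
EOF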

Lastly, I'll draw your attention to container native storage (CNS). This is an innovative approach to GlusterFS where you run your Gluster brick topology in pods on Kubernetes in order to converge infrastructure, drive proximity to compute resources, and take advantage of the automation primitives in Kubernetes. OpenShift 3.5 adds an easy-to-use experience for leveraging CNS as the backing store for the internal OpenShift container registry. We have also increased the number of automated admin tasks for the Gluster topology while offering more options for geo-replication and GlusterFS snapshotting.

Conclusion

As I take a step back and look over these features, I can see where the community is taking OpenShift. Production users of OpenShift are driving more application architectures onto the platform while asking us to keep delivering stability, security, and storage enhancements. The result is a solution they can leverage across multiple IT strategies, from 'lift and shift' to cloud-native refactoring, elastic hybrid clouds, and digital transformation. One platform for people to drive their company forward into the next stage of transformation.

Please be sure to check out the other blogs about OpenShift 3.5. There are impressive enhancements in networking and developer user experience!