Infrastructure teams managing Red Hat OpenShift often ask me how to effectively onboard applications in production. OpenShift packs many capabilities into a single product, and it is easy to imagine an OpenShift administrator struggling to figure out what conversations their team must have with an application team before that team can successfully run an application on OpenShift.

In this article, I suggest a few topics that administrators can use to actively engage with application teams during the onboarding process. I have had several conversations with customers on these topics and observed that this approach has really helped them. By no means are these topics exhaustive, but they are sufficient to kick-start the necessary and relevant conversations. Over time, I expect administrators to have broader conversations with application teams about application onboarding.

1. Application Resource Requirements

Administrators need to make sure that OpenShift clusters are sized correctly in order to host applications. Therefore, application resource requirements are a key topic for discussion. There are a few important controls that an administrator must consider when a new application is onboarded. 

Administrators should clearly define quotas for the team in order to control the resources their application can consume; this is an app deployment-time requirement. Secondly, an administrator must make sure that application teams have defined minimum resource requirements for their application pods. This specification helps the OpenShift scheduler find the right nodes to run the application pods. This is an app scheduling-time requirement, and it is governed by the Requests functionality in OpenShift.
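
As a minimal sketch of the deployment-time control, a project-level quota might look like the following; the name, namespace, and numbers are placeholders to be agreed with the application team:

```yaml
# Hedged sketch: resource names and numbers are placeholders, not recommendations.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: app-team-quota          # hypothetical quota name
  namespace: app-team-project   # hypothetical project/namespace
spec:
  hard:
    pods: "10"                  # maximum number of pods in the project
    requests.cpu: "4"           # total CPU the project's pods may request
    requests.memory: 8Gi        # total memory the project's pods may request
    limits.cpu: "8"             # ceiling on the sum of CPU limits
    limits.memory: 16Gi         # ceiling on the sum of memory limits
```

A LimitRange on the same project can additionally supply default requests and limits for pods that do not declare their own.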

Lastly, administrators have to make sure that application pods do not consume more than their allocated resources; otherwise, they could cause performance issues for other application pods. Administrators should make sure that application teams have defined Limits for their application pods. This is an app runtime requirement.
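
Together, the scheduling-time and runtime controls show up as Requests and Limits on each container. A hedged example of that fragment of a deployment resource file (the container name, image, and values are illustrative only):

```yaml
# Illustrative fragment of a pod template in a deployment resource file.
spec:
  containers:
  - name: my-app                                   # hypothetical container name
    image: image-registry.example.com/my-app:1.2   # hypothetical image reference
    resources:
      requests:          # scheduling-time: minimum the scheduler reserves on a node
        cpu: 250m
        memory: 256Mi
      limits:            # runtime: ceiling enforced against the running container
        cpu: 500m
        memory: 512Mi
```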

Administrators should also have a conversation about the storage requirements (storage type, size, and so on) for the application. Administrators can then create appropriate persistent volumes, which the application references in its deployment configuration files through persistent volume claims.
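
For instance, a persistent volume claim of the agreed size in the application's deployment resources might look like the sketch below; the claim name, size, and storage class are assumptions:

```yaml
# Hedged sketch: claim name, size, and storage class are assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data               # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce                 # single-node read/write access
  resources:
    requests:
      storage: 10Gi               # size agreed with the application team
  storageClassName: standard      # hypothetical storage class backing the PV
```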

The table below highlights the questions an administrator should ask an application team, the administrator's action items, and links to documentation and blogs:

 

Question: How many instances/pods will be spun up for the application?
Administrator's action items:
  • Set a quota on the project/namespace
  • Check the replica count in the deployment resource files
Helpful links: Deployment Configuration, Capacity Management on OpenShift (blog)

Question: How many millicores, how much memory, and how many persistent volumes (PVs) are needed per instance?
Administrator's action items:
  • Check the limits/resources in the deployment resource files
  • Check whether a LimitRange is applied on the project/namespace
Helpful links: Limits

Question: What is the size of each PV?
Administrator's action items:
  • Create the necessary PVs and verify the PV names in the deployment resource files
Helpful links: Using Persistent Volumes

2. Container Image Creation and Tagging Process

OpenShift automates the process of creating a container image from source code or compiled binaries via its Source-to-Image (s2i) functionality. Not all organizations use this functionality, and a few might have their own process (outside of OpenShift) to create and manage container images.

In cases where an organization is indeed using s2i, an administrator needs to provide one or more base images per app runtime. Red Hat recently released the Red Hat Universal Base Image (UBI) for RHEL versions 7 and 8. With the release of the UBI, administrators can take advantage of the greater reliability, security, and performance of official Red Hat container images wherever OCI-compliant Linux containers run. An administrator can use the UBI to build a base image for a specific app runtime (such as Java or Node.js) and expose it to the s2i process. The s2i process then uses that base image to create the final container image, which includes the application code. Administrators periodically upload new versions of these base images to the container registry.
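
For illustration, a build configuration wired to such an admin-provided base image might look roughly like this; the builder image name, namespace, Git URL, and output tag are all assumptions:

```yaml
# Hedged sketch: the builder image, namespace, Git URL, and output tag are assumptions.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-app
spec:
  source:
    git:
      uri: https://example.com/org/my-app.git   # hypothetical application repository
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: openjdk-11-ubi8:1.3   # hypothetical UBI-based builder image published by the admin
        namespace: openshift        # shared namespace where admins expose base images
  output:
    to:
      kind: ImageStreamTag
      name: my-app:1.0              # output image name and tag agreed with the app team
```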

Administrators must create an image tagging strategy and make sure application teams are aware of it. Proper image tagging lets administrators streamline the image upgrade process. Otherwise, the build process (usually defined by the application team) may pick up a new version of the base image when it wasn't intended to, or may pick up an old version of the base image when it was supposed to pick up the new one.
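
As one possible convention (the stream, registry, and version numbers here are assumptions), an administrator might publish immutable major.minor tags plus a floating major tag that gets repointed at each new minor release:

```yaml
# Hedged sketch of one possible tagging convention; names and versions are assumptions.
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: openjdk-11-ubi8             # hypothetical base image stream
  namespace: openshift
spec:
  tags:
  - name: "1.1"                     # immutable minor release
    from:
      kind: DockerImage
      name: registry.example.com/base-images/openjdk-11-ubi8:1.1
  - name: "1"                       # floating major tag the admin repoints at each new minor
    from:
      kind: ImageStreamTag
      name: "1.1"                   # currently tracks 1.1 within this stream
```

App teams that reference the floating "1" tag pick up minor updates automatically; teams that need strict reproducibility pin to a specific minor tag such as "1.1".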

Discussing the following topics will align the administrator and the application team on the image build and tagging strategy:

Question: Is source-to-image (s2i) used to build the container image? If so, which base image is referenced in the s2i process?
Administrator's action items:
  • Provide the list of base images available in OpenShift to the developer teams
  • Check that the appropriate base image (with the right tag) is referenced in the deployment resource files
  • Help the app team understand the implications of using the "latest" tag when referencing a base image. I recommend that an image carry a major and a minor version in its tag, and that the app team reference the latest minor version of a specific major version.
Helpful links: Source to Image Build Process

Question: What name and tag are given to the container image created after the build process completes?
Administrator's action items:
  • Check the tag applied to the output container image (imagestream)
  • Check that the appropriate image and tag are referenced in the deployment resource file
Helpful links: ImageStreams

3. Application Health Checks

OpenShift exposes ways to monitor an application's health and then take the desired action. Readiness and liveness probes are well-documented application health checks. Administrators should make sure that developers have defined relevant health checks for their applications. Furthermore, administrators should make an attempt to understand what a particular health check means. This becomes immensely helpful when administrators have to dig in and resolve issues that cause OpenShift to restart pods in production.
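
As an illustration, probes in a container spec might look like the following; the endpoints, port, and timings are assumptions and should come from the developers, who know what "healthy" means for their application:

```yaml
# Illustrative probes for a container; endpoints, port, and timings are assumptions.
livenessProbe:
  httpGet:
    path: /healthz                 # hypothetical "is the process still able to serve?" endpoint
    port: 8080
  initialDelaySeconds: 30          # give the app time to start before checking liveness
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready                   # hypothetical "can this pod take traffic right now?" endpoint
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
```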

Another conversation to have with developers on this topic is whether they have written their code to handle graceful shutdown. Administrators may set up automated scale-out upon increased load, which serves well during scale-up. When scale-down happens, OpenShift needs to delete pods, and it sends a TERM signal to a pod before deleting it. It is critical that the application captures this signal and that an appropriate time limit is set so that active transactions within a pod can complete before OpenShift terminates it. Red Hat JBoss EAP middleware on OpenShift handles this out of the box; if developers are using any other middleware or application framework, they must take care of handling the signals OpenShift sends for graceful shutdown.
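
The pod-level settings that pair with in-application signal handling are sketched below; the 60-second grace period and the preStop pause are assumptions that depend on how long the application's transactions typically run:

```yaml
# Hedged sketch: the grace period and preStop pause are assumptions.
spec:
  terminationGracePeriodSeconds: 60    # how long OpenShift waits after TERM before sending KILL
  containers:
  - name: my-app
    image: image-registry.example.com/my-app:1.2   # hypothetical image
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 5"]   # optional pause so in-flight requests drain first
```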

The table below highlights the various discussion points and the administrator's action items, with links to useful documentation and blogs:

Question: Have developers defined liveness and readiness probes?
Administrator's action items:
  • Check for these probes in the deployment resource files
  • If they are not defined, ask for the reason
Helpful links: Health Check

Question: Have developers coded for graceful shutdown?
Administrator's action items:
  • If not, ask them to (otherwise, scaling down a pod could terminate in-flight transactions)
Helpful links: Graceful Termination

 

4. Application Communication

Not every application deployed in OpenShift needs to be exposed to the outside world. It is good to determine early on which applications need to be exposed so that administrators can guide application teams to the right hostname for their application. Having this conversation with the application teams prevents the problem of an application route's hostname already being taken by another team (the OpenShift cluster could be shared by many teams, and it becomes a challenge to keep track of the active route hostnames in the cluster).

Secondly, SSL offloading for an application can happen in multiple places. OpenShift supports three types of SSL termination:

  1. Edge termination - Incoming SSL traffic is decrypted at OpenShift's router component
  2. Passthrough termination - Incoming SSL traffic is not decrypted at the router, but is decrypted by the application pods
  3. Re-encryption termination - Incoming SSL traffic is decrypted at the router, and the router then re-encrypts the traffic with another certificate provided by the application team

As you can infer, each of the three strategies a developer might choose requires different router settings and different certificate sharing. This necessitates a conversation between developers and administrators.
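
For example, a route using edge termination might look like the sketch below; the hostname and service name are assumptions, and the hostname must be agreed with the administrator so it stays unique across the cluster:

```yaml
# Hedged sketch of an edge-terminated route; hostname and service name are assumptions.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
spec:
  host: my-app.apps.example.com    # must be unique across the cluster; agree on it up front
  to:
    kind: Service
    name: my-app                   # hypothetical service backing the route
  tls:
    termination: edge              # decrypt at the router; passthrough/reencrypt change this stanza
```

With passthrough termination the router never sees the plaintext, and with re-encryption the application team must also provide the certificate material the router needs to trust the pods.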

Lastly, there may be egress and ingress requirements for security purposes. For example, an application database hosted outside of OpenShift may be firewalled by a set of IP addresses. In such cases, a dialogue between the administrator and the developers is needed to work out egress.
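
One of the controls in this area, when the cluster runs the openshift-sdn plugin, is an egress firewall (EgressNetworkPolicy) that limits which external endpoints a project may reach; the CIDRs below are placeholders:

```yaml
# Hedged sketch: the allowed CIDR is a placeholder for the external database address.
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: 192.0.2.10/32   # hypothetical external database
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0       # deny all other external traffic from this project
```

Giving the project a predictable source IP that the external firewall can allow (egress IPs or the egress router) is a separate configuration covered in the linked egress documentation.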

The table below highlights the questions that an administrator should ask an application team, the administrator's action items, and links to documentation and blogs:

Question: Is a route required to expose this microservice outside of OpenShift? If so, what will the route hostname be, and where is SSL terminated?
Administrator's action items:
  • Check the route definition in the deployment resource files and make sure the hostname is unique
  • Discuss where the developer team wants SSL terminated and why, and plan the router configuration and certificates accordingly
Helpful links: Routes Termination

Question: Is egress needed for the application (that is, does the application connect to an external service, such as a database, that is protected by an IP-level firewall)?
Administrator's action items:
  • Discuss with the developer team and understand the need for egress
Helpful links: Controlling Egress Traffic

Question: Is there a specific requirement on ingress?
Administrator's action items:
  • Discuss with the developer team and understand the need for ingress
Helpful links: Controlling Ingress Traffic

5. Application Access Controls

An OpenShift cluster provides multi-tenancy, allowing numerous applications owned by multiple teams to be hosted on the same infrastructure. In such situations, it is critical to maintain application-to-application access controls. OpenShift provides the multi-tenant and network policy plugins, which give administrators the ability to control application-to-application traffic. Therefore, administrators should have a conversation with developer teams about their application architecture and understand which application-to-application communication (within OpenShift) is allowed. Based on that information, the administrator should update the necessary policy files in OpenShift.
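
A minimal sketch for the network policy plugin is shown below; the "frontend" and "backend" labels and the port are assumptions standing in for the real application labels:

```yaml
# Hedged sketch: the "frontend"/"backend" labels and the port are assumptions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend                 # the pods this policy protects
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend            # only pods with this label may connect
    ports:
    - protocol: TCP
      port: 8080
```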

The table below highlights the questions that an administrator should ask an application team, the administrator's action items, and links to documentation and blogs:

Question: Which existing microservices does this microservice access?
Administrator's action items:
  • Define the relevant network policy YAML file
Helpful links: Network Policy

Question: Which existing microservices will call this microservice?
Administrator's action items:
  • Define or update the relevant network policy YAML file

Over time, administrators gain experience managing OpenShift, and they can then drive a variety of further conversations with application teams (such as access controls, service mesh, service catalog, credentials management, router sharding, dedicated worker nodes for specific apps, the app upgrade process, and so on). The list goes on, but the topics discussed here are essential and necessary. You can also download the consolidated topics (XL format) and manage and enhance the list while working with application teams.

 

