What are Hybrid Cloud Patterns?

Hybrid Cloud Patterns are a natural progression from reference architectures, with additional value.

This effort is focused on customer solutions that involve multiple Red Hat products. The patterns include one or more applications that are based on successfully deployed customer examples. Example application code is provided as a demonstration, along with the various open source projects and Red Hat products required for the deployment to work. Users can then modify the pattern for their own specific applications.

How do we select and produce a pattern? We look for novel customer use cases, obtain an open source demonstration of the use case, validate the pattern and its components with the relevant product engineering teams, and create GitOps-based automation to make the patterns easily repeatable and extendable.

The automation also enables the solution to be added to Continuous Integration (CI), with triggers for new product versions (including betas), so that we can proactively find and fix breakage and avoid bit-rot.

Who should use these patterns?

It is recommended that architects or advanced developers with knowledge of Kubernetes and Red Hat OpenShift Container Platform use these patterns. Advanced cloud-native concepts and projects are deployed as part of the pattern framework. These include, but are not limited to, OpenShift GitOps (Argo CD), Advanced Cluster Management (Open Cluster Management), and OpenShift Pipelines (Tekton).

Pattern Workflow

The Medical Diagnosis Pattern can be found here.

The purpose of this pattern is to show how medical facilities can take full advantage of trained AI/ML models to identify anomalies, such as pneumonia, in medical images. From the medical personnel's point of view, the workflow starts when medical imaging equipment submits an X-ray image to the application.

The image is uploaded to an S3-compatible object store. The upload triggers a storage event, “a new image has been uploaded”, that is sent to a Kafka topic. This topic is consumed by a Knative Eventing listener, which triggers the launch of a Knative Serving instance. That instance is a container image with the AI/ML model and the required processing functions. Based on the information in the event it received, the container retrieves the image from the object store, pre-processes it, predicts the risk of pneumonia using the AI/ML model, and saves the result. A notification of the results is also sent to the medical staff.
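The event-driven flow above can be sketched in a few lines of Python. This is a minimal stand-in, not the pattern's actual code: the dictionaries and function names below are assumptions that take the place of the real S3 bucket, Kafka topic, and trained model.

```python
# In-memory stand-ins for the pattern's real infrastructure:
object_store = {}    # plays the role of the S3-compatible bucket
notifications = []   # plays the role of staff notifications

def predict_pneumonia_risk(image_bytes):
    """Stand-in for the trained AI/ML model; returns a risk score in [0, 1]."""
    return 0.87 if b"opacity" in image_bytes else 0.05

def handle_upload_event(event):
    """What the Knative Serving container does for one 'new image' event."""
    key = event["object_key"]
    image = object_store[key]               # retrieve the image from the store
    risk = predict_pneumonia_risk(image)    # run inference
    object_store[key + ".result"] = risk    # save the result back
    notifications.append((key, risk))       # notify the medical staff
    return risk

# Simulate an upload triggering the workflow end to end:
object_store["xray-001.png"] = b"...opacity..."
risk = handle_upload_event({"object_key": "xray-001.png"})
```

In the deployed pattern, the hand-off between the upload and `handle_upload_event` is done by the storage notification, the Kafka topic, and Knative Eventing rather than a direct function call.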


For a recorded demo deploying the pattern and seeing the dashboards available to the user, check out our docs page!

Pattern Deployment

To deploy this pattern, follow the instructions outlined on the getting-started page.

What’s happening?

During the bootstrapping of the pattern, the initial OpenShift GitOps operator is deployed, along with the custom resource definitions and custom resources needed to deploy the datacenter-<validated-pattern> application with references to the appropriate Git repository and branch. Once the Argo CD application deploys, it creates all of the common resources, which include Advanced Cluster Management, Vault, and OpenShift GitOps. The pattern deployment begins with Argo CD applying the Helm templates to the cluster, ultimately resulting in all resources being deployed and the xraylab dashboard becoming available via its route.
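An Argo CD Application that references a Git repository and branch, as described above, looks roughly like this. This is an illustrative sketch only; the names, repository URL, and paths are assumptions, not the exact resources the pattern creates.

```yaml
# Sketch of an Argo CD Application pointing at a pattern fork (values assumed)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: datacenter
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/<your-fork>/medical-diagnosis.git  # your fork of the pattern
    targetRevision: main          # the branch Argo CD tracks
    path: charts/datacenter       # Helm charts applied to the cluster
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      prune: true       # remove resources deleted from Git
      selfHeal: true    # revert out-of-band changes on the cluster
```

With `automated` sync enabled, Argo CD continuously reconciles the cluster against the charts in the referenced repository and branch.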

The charts for the pattern deployment are located in $GIT_REPO_DIR/charts/datacenter/.

Pattern Deployed Technology

Operator                           Upstream Project
OpenShift Data Foundation (ODF)    Ceph, Rook, NooBaa
OpenShift GitOps                   Argo CD
OpenShift Serverless               Knative
AMQ Streams                        Kafka
Open Data Hub                      Open Data Hub
Grafana                            Grafana


Because the pattern depends on originating content that was written imperatively, some resources did not align 1:1 with a declarative model and had to be adapted. For example, a number of tasks interrogate the cluster for information, transform it into a variable, and finally apply that variable to some resource. As you can imagine, this is challenging when you are declaring the state of your cluster. To work around these imperative actions, we took what we could and created OpenShift jobs to execute the tasks.
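A job that wraps such an imperative action might look like the following sketch: it interrogates the cluster for a value and applies it to a resource, while the Job itself can still be delivered declaratively through Argo CD. The image, commands, namespaces, and resource names are illustrative assumptions, not the pattern's actual jobs.

```yaml
# Sketch of an imperative task wrapped in a Job (names and namespaces assumed)
apiVersion: batch/v1
kind: Job
metadata:
  name: set-s3-endpoint
spec:
  template:
    spec:
      serviceAccountName: pattern-job-sa   # needs RBAC to read routes and patch deployments
      restartPolicy: Never
      containers:
        - name: set-s3-endpoint
          image: registry.redhat.io/openshift4/ose-cli
          command:
            - /bin/bash
            - -c
            - |
              # Interrogate the cluster for information...
              ENDPOINT=$(oc get route s3 -n openshift-storage -o jsonpath='{.spec.host}')
              # ...then apply the resulting variable to a resource.
              oc set env deployment/image-server -n xraylab-1 S3_ENDPOINT="https://${ENDPOINT}"
```

The Job runs to completion once, which keeps the one-off imperative step out of the declarative resources themselves.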


The full pattern details are available here. Speed, accuracy, and efficiency all come to mind when considering what this pattern provides. Patients get the treatment they need, when they need it, because we are able to use technology to quickly and accurately diagnose anomalies detected in X-rays. The validated patterns framework enables administrators to quickly meet their users' demands by providing solutions that only require them to bring their own data to complete the last 20-25% of the architecture.


AI/ML, How-tos, GitOps, Machine Learning, OpenShift Pipelines, workloads, Hybrid Cloud Patterns
