We created the blueprint with a few key goals in mind:
First, we wanted to show how Red Hat's comprehensive portfolio can be used to address an edge use case (in this case, the industrial edge).
Second, we use solution blueprints internally in our CI to verify that integrations which work today keep working over time. This is an area where many of our customers have asked us for help, and thanks to the work done in bringing this blueprint together, we can now identify integration issues earlier in the release cycle of our products.
Third, we wanted to go beyond the traditional reference architecture, which is usually a long list of "printed" instructions. The GitOps model lets us deliver the blueprint as code that is readily available to our customers and partners. It also makes it easy for you to use the blueprint for a proof of concept (POC), modify it to fit a particular need, and hopefully transform it into a real deployment.
Finally, our blueprints are open source, so anyone can suggest improvements, contribute to them, or fork them to do something else.
For this particular solution blueprint, we demonstrate how OpenShift, ACM, AMQ Streams, OpenDataHub, and other Red Hat products come together to address an edge computing use case commonly found in manufacturing: ML inference-based anomaly detection on time-series sensor metrics at the edge, with a central data lake and ML model retraining. You can watch a full demonstration of this, which we recorded for an OpenShift Commons briefing.
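To make the edge-side inference step concrete, here is a minimal sketch of anomaly detection on a stream of sensor metrics. It uses a simple rolling z-score rather than a trained model, so the window size, threshold, and scoring method are illustrative assumptions only; in the blueprint itself, models are trained centrally (for example, via OpenDataHub) and pushed to the edge, with sensor data flowing through AMQ Streams.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=5, threshold=3.0):
    """Flag indices whose value deviates from a rolling window's mean
    by more than `threshold` standard deviations (a rolling z-score).

    This stands in for the blueprint's trained ML model; it is an
    illustrative baseline, not the blueprint's implementation.
    """
    history = deque(maxlen=window)  # most recent `window` readings
    flagged = []
    for i, value in enumerate(samples):
        if len(history) == window:
            mu = mean(history)
            sigma = stdev(history)
            # Skip scoring when the window is flat (sigma == 0).
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                flagged.append(i)
        history.append(value)
    return flagged

# A spike at the end of an otherwise steady metric stream is flagged:
readings = [10.0, 11.0, 10.0, 11.0, 10.0, 11.0, 10.0, 11.0, 10.0, 50.0]
print(detect_anomalies(readings))  # → [9]
```

In a real deployment, the same loop shape applies: a consumer reads sensor readings from a Kafka topic, scores each one against the current model, and publishes flagged events for central aggregation and retraining.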