Introduction

On April 18th, Norris Sam Osarenkhoe (DevOps Architect) from SVA System Vertrieb Alexander GmbH joined Red Hatters at the OpenShift Commons gathering in Amsterdam, a co-located event of KubeCon + CloudNativeCon Europe 2023. In the above presentation, Norris showcased how SVA, a Red Hat partner, turned to OpenShift Serverless to kickstart cloud-native adoption and patterns in a regulated environment, which we will discuss in detail in this case study blog.

Watch the full recording of the presentation:


About SVA System Vertrieb Alexander GmbH

SVA System Vertrieb Alexander GmbH is a leading German company that specializes in professional services, server migrations, and managed services. SVA is a Red Hat premium partner and a three-time winner of the “Best Managed Service Provider” prize awarded by ChannelPartner and Computerwoche. With a vast portfolio of services, SVA has established itself as a go-to partner for containers, OpenShift, and distributed systems in the public sector.

Challenge

The public sector poses unique challenges due to its complex structures and regulatory requirements. For example, one particular customer, a German government organisation, is divided into departments, groups, units, and application teams, each with specific functional responsibilities. Some applications have to adhere to specific laws, which limits innovation and leaves application lifecycles stagnant. With the gradual introduction of microservices and service-oriented architecture, the number of applications connecting to each other grew, and this “mesh of applications gets more and more complicated because everyone is calling each other and soon you will have a mothball of applications you cannot really manage anymore or you don't even know who is connecting to who anymore”, as Norris explained in his talk.

Solution

Specific needs for the “One” solution

One significant use case SVA identified is applications getting notified of state and data changes. In this scenario, the customer has a big data system, and most of the applications have their own data approval processes to ensure data separation. However, this can lead to complications when changes occur that need to be communicated across multiple applications. Propagating state and data changes among hundreds or even thousands of applications calling each other through REST creates a complex web of interactions.

To find the right solution to meet this challenge, SVA drew up the following list of requirements for selecting a platform:

  • Central hosting/deployment
  • Kubernetes native
  • Support for CloudEvents
  • Support for HTTP-based Binding
  • Horizontally scalable
  • Well-written interfaces
  • Support for Kafka
  • Support for Observability, e.g. Tracing
  • Enterprise Support available

As Norris explained, the platform had to be "HTTP based because most of the apps are already REST based, it has to be event driven, it has to use maybe an open standard because what's important is that the standard outlives the product."
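That open standard is CloudEvents, and its HTTP protocol binding in binary mode maps an event's context attributes to `ce-`-prefixed HTTP headers while carrying the payload in the request body. A minimal sketch in Python of that mapping (the event type, source, and payload here are hypothetical illustrations, not taken from SVA's system):

```python
import json
import uuid
from datetime import datetime, timezone

def to_binary_http(event_type, source, data):
    """Map a CloudEvent to binary-mode HTTP: required context
    attributes become ce-* headers, the event data becomes the body."""
    headers = {
        "ce-specversion": "1.0",
        "ce-id": str(uuid.uuid4()),
        "ce-type": event_type,
        "ce-source": source,
        "ce-time": datetime.now(timezone.utc).isoformat(),
        "Content-Type": "application/json",
    }
    body = json.dumps(data).encode("utf-8")
    return headers, body

# Hypothetical event: a record approval in the big data system
headers, body = to_binary_http(
    "com.example.record.updated",   # hypothetical event type
    "/bigdata/approvals",           # hypothetical source URI
    {"recordId": 42, "state": "approved"},
)
```

Because the binding is plain HTTP, any REST-based application can produce or consume such events without new client libraries.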


Additionally, the platform had to be highly available and adhere to cloud-native best practices. The solution also needed to be opt-in and provide a quality developer experience, so that developers themselves would want to use it because it helped them achieve better results. The ultimate solution was "one platform to rule all these requirements at once."

Perfect fit with OpenShift Serverless

Knative was a perfect match. After a field and stress test on bare-metal Kubernetes managed with Rancher, the team transitioned to a custom installation of OpenShift with the OpenShift Serverless add-on, which met all of their needs.

Norris likened this process to finding the "lord of the ring." OpenShift Serverless integrates tightly with OpenShift and provides a “nice interface, nice visualization for the developers”. As he stated, "The developer should feel like they're using a cloud-ready product as if they are on AWS or Azure on the on-prem system." Knative allowed the team to centralize their system, scale, audit, and even select events while enforcing policies and simplifying the architecture. The Knative Eventing developer experience, which lets teams write simple HTTP servers instead of complex Apache Kafka consumers, made adopting new cloud-native patterns approachable for developers.
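That developer experience can be illustrated with a sketch: instead of writing a Kafka consumer, an application team exposes a plain HTTP endpoint to which Knative Eventing delivers CloudEvents. This is an illustrative example, not SVA's actual code; the port and event handling are assumptions:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class EventHandler(BaseHTTPRequestHandler):
    """Receives CloudEvents as plain HTTP POSTs from a Knative Trigger;
    no Kafka client library is needed in the application."""

    def do_POST(self):
        # Binary-mode CloudEvent: attributes arrive as ce-* headers,
        # the event data arrives as the request body.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        event_type = self.headers.get("ce-type", "unknown")
        print(f"received {event_type}: {payload}")
        self.send_response(200)  # a 2xx acknowledges the event; errors cause redelivery
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the example's output limited to received events

def serve(port: int = 8080) -> None:
    """Block and serve events on the given port."""
    HTTPServer(("", port), EventHandler).serve_forever()
```

A Trigger pointing at a service running this handler would then deliver only the event types that the team has subscribed to.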

The Architecture

The team focused on configuring Knative Eventing to adapt to the needs of the customer. The platform is hosted on the OpenShift environment and divided among three departmental units: one responsible for configuration and customization, another for the OpenShift platform, and a third for the Kafka systems.


Blue Box and the Stack

The blue box represents the central platform. Within it we find the hypervisor, OpenShift, Apache Kafka, and OpenShift Serverless; on top of that sit the consumers and producers that interact with the system through REST APIs and other functional interfaces.

Data flow


The data flow of the blue box begins with traffic coming into the ingress, or the Istio sidecar, moving on to the Broker Ingress and then to a Kafka receiver, where events are persisted in case of any outages. The Dispatcher consumes and polls for events and feeds them back into the OpenShift system, and the Broker Filter then evaluates the different subscriptions that trigger specific events.

The control plane components make this eventing system easy to use in OpenShift. They include Custom Resources and two main interfaces: project namespaces and deployed workloads. A Broker serves as a project-based ingress point, and Triggers register subscriptions to receive specific events. The project namespace hosts the deployments and Triggers that point to the relevant services inside or outside the cluster.
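In Knative Eventing terms, the project-based ingress point and the subscriptions described above correspond to Broker and Trigger custom resources. A hedged sketch of what such resources might look like (the namespace, event type, and service names are hypothetical; the broker class annotation assumes the Knative Kafka broker implementation):

```yaml
# A Kafka-backed Broker acting as the project's ingress point
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: team-a                      # hypothetical project namespace
  annotations:
    eventing.knative.dev/broker.class: Kafka
---
# A Trigger registering a subscription for one event type
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: record-updated
  namespace: team-a
spec:
  broker: default
  filter:
    attributes:
      type: com.example.record.updated   # hypothetical event type
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: record-consumer              # hypothetical consumer service
```

Application teams only deploy their workload and a Trigger; the central platform team owns the Broker and the Kafka systems behind it.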

The architecture uses multiple OpenShift clusters for each stage, with a producer publishing an event that gets persisted behind a global broker URL. Each cluster is an isolated environment designed to receive events from the broker, and due to delayed polling, everything remains in sync. The system utilizes Kafka because it can handle high data volumes and supports multiple consumers. In this flow, the producer sends an event to the Broker Ingress, which writes it to the Kafka topic that receives the messages. A deployed Trigger creates a Kafka consumer group; the event then travels from the channel Dispatcher to the Broker Filter, which pushes the message to the consumer.
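The producer side of this flow can be sketched as a single HTTP POST to the Broker Ingress. The in-cluster ingress URL pattern shown is the standard Knative one; the namespace, broker name, and event attributes are hypothetical:

```python
import json
import urllib.request

# Standard in-cluster Broker ingress URL pattern:
#   http://broker-ingress.knative-eventing.svc.cluster.local/<namespace>/<broker>
BROKER_URL = "http://broker-ingress.knative-eventing.svc.cluster.local/team-a/default"

def publish(broker_url: str, event_id: str, event_type: str, source: str, data: dict):
    """POST a binary-mode CloudEvent to a Broker ingress; a Kafka-backed
    broker persists the event to its topic before acknowledging."""
    req = urllib.request.Request(
        broker_url,
        data=json.dumps(data).encode("utf-8"),
        headers={
            "ce-specversion": "1.0",
            "ce-id": event_id,
            "ce-type": event_type,
            "ce-source": source,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    return urllib.request.urlopen(req)
```

Because the broker acknowledges only after the event is persisted to Kafka, the producer does not need to know who the consumers are, which is exactly what untangles the "mothball" of point-to-point REST calls.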

All of the deployed components are configured using Argo CD because, as applications scale up, the growing number of custom resources can become overwhelming to manage. Monitoring and observability also become increasingly important: SVA uses the Jaeger integration for tracing and Grafana dashboards for monitoring.

Results

The implementation of OpenShift Serverless has proved to be a success. With its customizations, SVA is successfully running the solution in production, and it has been well-received by different units and application teams. SVA has seen a significant increase in engagement and interest since the implementation, indicating that OpenShift Serverless has provided a solution that meets customer requirements.

In the presentation, Norris also expressed his appreciation to Red Hat for the excellent support and communication throughout the implementation process. In his own words, “Any issues we faced were promptly addressed, and we are grateful to Naina and her team at Red Hat for their assistance.”

Key Takeaways

One of the key takeaways from SVA’s experience is the importance of creating blueprints for application teams to facilitate and enable correct usage. Norris emphasized the need to invest significant time in figuring out usage and design patterns, such as idempotency, at-least-once delivery, error handling, and useful Enterprise Integration Patterns.

Norris also advised starting with one to three application teams, collaborating closely with them, and automating everything to reduce friction as much as possible. He recommended freezing the OpenShift Serverless version in use and spending time improving the observability of the system.

For an “air-gapped” environment, Norris suggested considering how to access the operator images and using a private CA internally. He also recommended planning the communication between the multiple OpenShift clusters per stage and doing some "back-of-the-envelope" math to estimate the throughput per onboarded application team.

Learn more

If this post inspires you, please reach out to your accounts team to learn more about how OpenShift Serverless can help. You can also browse the OpenShift Serverless documentation to see what features are available, read our blogs on the Red Hat Developers site, and download our Knative Cookbook. You can also try OpenShift Serverless yourself free of charge for 30 days through the Red Hat Developers Sandbox.


About the author

Naina Singh joined Red Hat in 2018 and is currently the Principal Product Manager for Red Hat OpenShift Serverless.

