Routes, Ingress, and Gateway API - Oh, My! Customer Choice for the Win, Without the Vendor Lock-in

Customers selecting a Kubernetes distribution today (such as Red Hat OpenShift Container Platform) have a lot to consider. One priority we have heard repeatedly is the desire to avoid vendor lock-in when choosing how a Kubernetes cluster handles inbound traffic. After all, one hallmark promise of Kubernetes has always been to provide an abstraction layer above the infrastructure provider, whether that is a public cloud, bare metal, or an on-premises hypervisor cluster, making it easier to migrate your applications. To reach a truly hybrid-cloud nirvana, you will want a Kubernetes distribution that can deliver in all of these places, while offering productive abstractions and the freedom to choose what is right for you, so you can focus on work that delivers real business value, fast. Peter Drucker said it best: “There is nothing so useless as doing efficiently that which should not be done at all.”

As we work through this topic, we will inject input from someone far more authoritative on the subject than either of us, in a question-and-answer format. We had a conversation with Daneyon Hansen on this topic, and his thoughtful answers add some diversity of perspective to the points we will be making. He spent over 15 years at Cisco, designing open source cloud computing solutions as an engineer and architect. In that time, he worked with customers on their use cases, with internal Cisco teams on their products, and on open source cloud-native networking technologies in Kubernetes. He joined Red Hat in 2019, working with us on the Network Edge Team, where he spent just over two years. He focused primarily on ingress and egress, with special emphasis on the design and implementation of the Kubernetes Gateway API. We will talk about that a bit later. Today he is a principal software engineer at Tetrate, focused on advancing the capabilities of the Istio service mesh project.

Q: Daneyon, what has your involvement been in the Kubernetes code-base?

A: I have been a contributor or maintainer of Kubernetes since the early days of the project. I have primarily been involved in SIG-Network projects such as IPv6 and Gateway API.

Ingress

Kubernetes was deliberately designed to support a pluggable and extensible architecture, with interfaces for various types of components. The core mission of the Kubernetes platform is to orchestrate workloads, so the details of many things (the container runtime, storage, the software-defined networking stack) are delegated to controllers that live outside the core Kubernetes code base. Ingress is one of those well-defined interfaces: it provides a method for a cluster to proxy external traffic to applications. Think of Ingress as the front door to your applications. Implementations vary greatly depending on who deployed the cluster, which Kubernetes distribution was selected, which Ingress controllers a cluster administrator chose to deploy for a given environment, and which set of features is in use. For an Ingress resource to be handled when you create it, your cluster must have a separate Ingress controller deployed. The Ingress controller watches the Kubernetes API for any Ingress resources that have been requested and wires up the communication into the application. There are nearly two dozen Ingress controllers listed directly on the Kubernetes Ingress documentation page today, with only three of them maintained by the Kubernetes project.
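
For reference, a minimal Ingress resource looks something like this; the host and backend names here are placeholders, and the ingressClassName field is how you select which deployed controller should handle the resource:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
spec:
  ingressClassName: nginx            # selects which Ingress controller handles this
  rules:
  - host: hello.example.com          # placeholder host name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-world        # an existing Service in the same namespace
            port:
              number: 8000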

Q: Do you believe that Kubernetes distributions, or Kubernetes-based application platforms, should provide an Ingress implementation out of the box? Why or why not?

A: I believe one of the key values of a Kubernetes distribution is to provide ease of use. Ingress is required by nearly all clusters and should be a “batteries-included” feature for distributions.

Like most Kubernetes distributions, OpenShift makes an opinionated choice about which Ingress controller it ships and supports out of the box. The interesting thing to know is that OpenShift Routes existed long before the Kubernetes project created the Ingress API. The OpenShift engineering team recognized the need for a mechanism to provide traffic ingress to a cluster in a robust, enterprise-supported way before the community had settled on a design. To fill this gap, the OpenShift Route API (https://docs.openshift.com/container-platform/4.9/rest_api/network_apis/route-route-openshift-io-v1.html) was created as an extension of the Kubernetes API. Furthermore, because of the strong focus on long-term stability and enterprise support at Red Hat, the Route API still exists in OpenShift today and will for some time to come. That does not mean you are forced to use it. Red Hat OpenShift Container Platform’s Route API is backed by an HAProxy-based solution that is managed as a component of the platform. If you want to deploy your own Ingress controller to run alongside the Route API, you can do so rather easily. A cluster can have multiple Ingress controllers if there is a desire to do so. Perhaps you wish to deploy your applications behind an NGINX-based (community or supported) Ingress controller, but keep your OpenShift management Routes (set up by default when the platform is installed and configured) as platform-managed pieces. Good news: OpenShift supports this. To pull on this thread a bit more, check out this blog post that highlights a similar use case: https://www.redhat.com/en/blog/using-nginx-ingress-controller-red-hat-openshift.

It also helps to understand this from the perspective of a migration from another Kubernetes distribution to OpenShift. Perhaps your application has a set of Ingress API objects intended for some non-specific Ingress controller. You can take that same Ingress object YAML and apply it to your OpenShift cluster. OpenShift will, in turn, create an OpenShift Route object for you and yield the expected ingress path for traffic to your application. The Kubernetes Ingress API is still there, meaning you can use it exactly as you have on any other cluster. The only delta is that if your clusters do not run the exact same Ingress controllers, the annotations that specify more advanced routing behavior will be ignored: those annotations are interpreted uniquely by each Ingress controller. The Kubernetes Ingress API does not specify how annotations are to be used, so those behaviors are functionally extensions of the Ingress API by the controllers that implement them.

Routes

As mentioned previously, the OpenShift Route API predates the creation of the Kubernetes Ingress and IngressClass APIs. In fact, the Ingress codebase was heavily influenced by design decisions made by Red Hat in the creation of the Route API (see this thread on the Kubernetes project GitHub page for some interesting history: https://github.com/kubernetes/kubernetes/pull/13947). Kubernetes Ingress officially graduated out of beta in Kubernetes v1.19, meaning that until August 2020, users of the Kubernetes project were not guaranteed a stable Ingress API. As an example of what might have happened had Red Hat embraced Ingress before it graduated to stable, consider PodSecurityPolicies (PSPs): first introduced as an alpha API in Kubernetes 1.3, they never graduated to stable. The latest version is still tagged v1beta1, and the API was deprecated in Kubernetes 1.21. PSPs are expected to be removed entirely in Kubernetes 1.25, with no stable replacement API expected to be available by then. The OpenShift implementation that provides the same functionality, SecurityContextConstraints (SCCs), is still stable and available in OpenShift clusters today. That is of note because security and stability remain very much important, regardless of upstream Kubernetes changes. Let’s look at some of the implementation decisions Red Hat made around Routes that have enabled this stability for OpenShift customers.

OpenShift Routes are implemented the same way on any infrastructure: a Router pod is deployed to your nodes, and when traffic reaches that Router, it is treated the same way it would be on any other infrastructure. The Router pod watches for Route objects in the Kubernetes API and responds to them by ensuring that the HAProxy configuration in the pod handles traffic according to the defined spec.

There are four main features that OpenShift Routes support that are not in the Ingress spec (although several Ingress implementations add some of these features as well). They are as follows (a sketch of a Route using the first two follows the list):

  1. Weighted backends: You can expose a single Route from OpenShift and load balance, with weights, across many backends, not all of which need to be Kubernetes Services.
  2. Simple TLS configurations: Beyond simply providing the certificates to use, the TLS configuration lives directly in the Route spec, so you are not forced to micromanage TLS termination mechanisms based on infrastructure. The Edge, Passthrough, and Re-encrypt options on the Route let you deploy TLS for your applications in the way that makes the most sense per component, including the internal TLS configurations that some applications require.
  3. Simple interface: While GitOps and YAML definitions or templates in repositories are fully supported (and, indeed, recommended), exposing your first Route is as easy as typing oc expose svc/my-service.
  4. Stability: Most importantly, while Ingress only came out of beta a year or so ago, Routes have maintained a stable API that customers have been able to count on for years.
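
To make the first two features concrete, here is a sketch of a Route that terminates TLS at the edge and splits traffic across two weighted backends; the Service names, host, and weights are for illustration only:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-world
spec:
  host: hello.apps.example.com   # hypothetical host name
  port:
    targetPort: 8000
  tls:
    termination: edge            # TLS terminates at the Router
  to:
    kind: Service
    name: hello-world
    weight: 80                   # primary backend receives 80% of traffic
  alternateBackends:             # additional weighted backends
  - kind: Service
    name: hello-world-canary
    weight: 20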

Although some Ingress implementations have features that OpenShift Routes do not, anything in the Kubernetes Ingress specification can be supported with an OpenShift Route. Implementation-specific features are often applied to Ingress resources via annotations on the Kubernetes object, and there is no strict API-level versioning for how they should be interpreted. This leads to a fractured ecosystem of features and implementations that can force a Kubernetes user into choosing an Ingress implementation on which to standardize.
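
For contrast, here is how controller-specific behavior rides on an Ingress. The rewrite annotation below is understood only by the NGINX Ingress controller; apply the same manifest to a cluster running a different controller, and the annotation is silently ignored (the names and host here are hypothetical):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /   # ingress-nginx only; not part of the Ingress API
spec:
  ingressClassName: nginx
  rules:
  - host: rewrite.example.com
    http:
      paths:
      - path: /app
        pathType: Prefix
        backend:
          service:
            name: hello-world
            port:
              number: 8000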

Routes versus Ingress - a practical exploration

Let’s look at how some different Route and Ingress implementations compare in practice. Note that the Ingress-specific examples here will work similarly on any Kubernetes distribution; the behavior of NGINX Ingress is not unique to OpenShift. Code for the following examples is available at https://github.com/RedHatGov/ingress-route-examples. Although we will be using the OpenShift command-line client, you can use regular kubectl if you prefer; we are not taking advantage of any of the extra features provided by oc.

Deploy a simple application with a Deployment and Service (to load balance across the Deployment replicas):

$ oc apply -f https://raw.githubusercontent.com/RedHatGov/ingress-route-examples/main/00-demo-application.yml
namespace/helloworld created
deployment.apps/hello-world created
service/hello-world created
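
We are not reproducing the repository’s manifest verbatim here, but based on the objects it creates, its shape is roughly the following (the container image is a placeholder, and the replica count is inferred from the varying pod names in the output below):

apiVersion: v1
kind: Namespace
metadata:
  name: helloworld
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: helloworld
spec:
  replicas: 3                      # several replicas, so responses rotate pod names
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: quay.io/example/hello-world:latest   # placeholder image reference
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: helloworld
spec:
  selector:
    app: hello-world
  ports:
  - port: 8000
    targetPort: 8000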

Validate that you can reach the application with a simple port-forward hitting the Service:

$ oc port-forward service/hello-world -n helloworld 8000:8000 &
[1] 1094138
Forwarding from 127.0.0.1:8000 -> 8000
Forwarding from [::1]:8000 -> 8000
$ curl http://localhost:8000
Handling connection for 8000
Hello, world, from hello-world-56cd647dcf-rznfm!
$ curl http://localhost:8000/hello/Red%20Hatter
Handling connection for 8000
Hello, Red Hatter, from hello-world-56cd647dcf-rznfm!

Clean up your port forward to get ready for the application of a proper Ingress mechanism:

$ kill %1

Deploy an OpenShift Route to your Service, with Edge TLS termination using the default certificate from your OpenShift Router, by running the following command:

$ oc apply -f https://raw.githubusercontent.com/RedHatGov/ingress-route-examples/main/01-route.yml
route.route.openshift.io/hello-world created
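
Again, this is a sketch rather than the repository file itself. A Route with edge termination and no explicit host (letting the Router generate one from the Route name, namespace, and cluster ingress domain) looks roughly like this:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-world
  namespace: helloworld
spec:
  port:
    targetPort: 8000        # forward to the Service's port 8000
  tls:
    termination: edge       # the Router terminates TLS with its default certificate
  to:
    kind: Service
    name: hello-world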

Validate that you can reach the application through the HTTPS Route that you have created:

$ curl https://$(oc get route -n helloworld hello-world -ojsonpath='{.status.ingress[0].host}')
Hello, world, from hello-world-56cd647dcf-s4srg!

Note that, in this case, I have an HTTPS certificate deployed in my OpenShift Router that is trusted by most clients (from Let’s Encrypt). If you do not, you will have to add -k to your curl options to accept untrusted certificates.

Now that we have that working, let’s deploy an NGINX Ingress. My cluster happens to be on AWS, so I will use the standard ingress-nginx deployment designed for Kubernetes running on AWS.

The standard deployment for NGINX on Kubernetes does not take the default security posture of OpenShift into account, so it is not allowed to run with the level of permissions it assumes it will have. There is a fully supported NGINX operator for OpenShift that handles all of this configuration for you, but here we are sticking as close to the upstream NGINX Ingress deployment as possible to demonstrate its portability. We can use the procedure from the OpenShift documentation (see: https://docs.openshift.com/container-platform/4.9/authentication/managing-security-context-constraints.html#security-context-constraints-creating_configuring-internal-oauth) to add the capabilities and UID constraints required by NGINX Ingress in a targeted way (rather than simply opening up the namespace to allow anything) via a simple manifest. Let’s apply that now:

$ oc apply -f https://raw.githubusercontent.com/RedHatGov/ingress-route-examples/main/02-nginx-ingress-scc.yml
namespace/ingress-nginx created
securitycontextconstraints.security.openshift.io/nginx created
role.rbac.authorization.k8s.io/ingress-nginx-scc created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-scc created

Then we will apply the stock upstream NGINX deployment:

$ oc apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/aws/deploy.yaml
namespace/ingress-nginx unchanged
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created

We should be able to watch our Deployment come online and show Ready at this point:

$ oc get deploy -n ingress-nginx -w
NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
ingress-nginx-controller   1/1     1            1           8m

Now that we have NGINX set up on the cluster, there are a few paths we could take to configure access to the NGINX Ingress. Because I am on AWS, I could configure a Route 53 CNAME record in one of my Hosted Zones to point to the NLB address. Or, if I did not have access to the DNS provider but did not need the DNS to work publicly, I could look up the IP address of the NLB and set /etc/hosts entries on my local machine to point to it. This works because NGINX uses either the SNI header in TLS packets (see: https://en.wikipedia.org/wiki/Server_Name_Indication) or the Host reference in the HTTP headers (see: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Host) to decide where to send incoming traffic.
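
If you wanted to take the /etc/hosts approach instead, a minimal sketch (assuming dig is available, and using hello.nginx.example.com as a hypothetical host name) might look like this, where ${ingress_nlb} is the NLB host name we capture in the next step:

$ echo "$(dig +short "${ingress_nlb}" | head -1) hello.nginx.example.com" | sudo tee -a /etc/hosts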

To keep things simple here, we are going to just use the DNS name of the NLB in our Ingress specification for this service. To get the NLB DNS name for the NGINX Ingress Controller, run the following:

$ ingress_nlb=$(oc get service -n ingress-nginx ingress-nginx-controller -ojsonpath='{.status.loadBalancer.ingress[0].hostname}')

Then, to change our Ingress specification to use it and apply the Ingress, run the following:

$ curl -s https://raw.githubusercontent.com/RedHatGov/ingress-route-examples/main/03-nginx-ingress.yml | sed 's/hello\.nginx\.example\.com/'"${ingress_nlb}/" | oc apply -f -
ingress.networking.k8s.io/hello-world created
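
If you would like to confirm that the NGINX controller has admitted the Ingress before curling it, you can watch for the ADDRESS column to populate:

$ oc get ingress -n helloworld hello-world
# once admitted, the ADDRESS column is populated with the NLB host name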

To verify that the Ingress worked (after giving NGINX a few minutes to deploy and configure, perhaps), let’s curl it:

$ curl http://${ingress_nlb}
Hello, world, from hello-world-56cd647dcf-s8g9w!

In the previous example, we demonstrated a simple application and routed traffic to it via an OpenShift Route. We then deployed an additional Kubernetes Ingress controller, in this case NGINX, and used it to route traffic into our original demo application. The original OpenShift Route could safely be deleted at this point, leaving the NGINX Ingress path available to route traffic, demonstrating the flexibility that comes from being able to use Routes and Ingress objects interchangeably to achieve the same result.

One other capability, and one that really speaks to the portability of using Ingress with OpenShift, is that we can create an Ingress without setting the ingressClassName field in the spec. OpenShift registers the OpenShift IngressController, aka the OpenShift Router, as the default IngressClass, so it handles such an Ingress resource by creating a Route and then serving it as normal.
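
You can inspect the IngressClasses registered on your cluster yourself; this is a hedged sketch of what to expect rather than exact output, and the class name can vary by OpenShift version:

$ oc get ingressclass
# expect a class (openshift-default on recent versions) whose CONTROLLER is
# openshift.io/ingress-to-route; unclassed Ingresses land there and become Routes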

Q: Can you name some valid use cases for installing multiple Ingress controllers on a single cluster?

A: Application owners may prefer to manage the ingress infrastructure, for example, an NGINX Ingress controller, for their applications instead of the infrastructure team doing so. Multiple Ingress controllers may also be required to support the ingress throughput needs of an application.

Let’s take a look at what happens when we apply a more generic Ingress resource, with the host name aligning with our OpenShift router instead of the NGINX Ingress:

$ router_nlb=$(oc get service router-default -n openshift-ingress -ojsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ curl -s https://raw.githubusercontent.com/RedHatGov/ingress-route-examples/main/04-agnostic-ingress.yml | sed 's/hello\.agnostic\.example\.com/'"${router_nlb}/" | oc apply -f -
ingress.networking.k8s.io/agnostic created
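
For completeness, the agnostic manifest differs from the NGINX-specific one in a single meaningful way: it sets no ingressClassName at all. A sketch (the host placeholder is replaced by the sed above):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: agnostic
  namespace: helloworld
spec:
  # No ingressClassName: the cluster's default IngressClass claims this resource
  rules:
  - host: hello.agnostic.example.com   # replaced with the Router NLB host name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-world
            port:
              number: 8000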

Something interesting happens when we create this Ingress. As mentioned above, instead of NGINX recognizing it and handling the configuration, it gets assigned to our default IngressClass. You can see NGINX ignoring it, and the OpenShift Router picking it up, by running the following:

$ oc logs deploy/ingress-nginx-controller -n ingress-nginx | tail -2
I1005 20:36:41.802965       7 main.go:101] "successfully validated configuration, accepting" ingress="agnostic/helloworld"
I1005 20:36:41.811462       7 store.go:361] "Ignoring ingress because of error while validating ingress class" ingress="helloworld/agnostic" error="ingress does not contain a valid IngressClass"
$ oc get route -n helloworld
NAME             HOST/PORT                                                                PATH   SERVICES      PORT    TERMINATION   WILDCARD
agnostic-52wv8   a578d2c79f3e5432aaf2d980bb333e6f-284865141.us-east-2.elb.amazonaws.com   /      hello-world   <all>                 None
hello-world      hello-world-helloworld.apps.openshift.jharmison.net                             hello-world   8000    edge          None

And, finally, as before, we can validate it by hitting the host from our Ingress (or our Route):

$ curl http://$router_nlb
Hello, world, from hello-world-56cd647dcf-s4srg!
$ curl http://$(oc get ingress -n helloworld agnostic -ojsonpath='{.spec.rules[0].host}')/hello/Red%20Hatters
Hello, Red Hatters, from hello-world-56cd647dcf-s8g9w!

Gateway API - Ingress.Next

The conversation comparing Routes to the Ingress implementations of other Kubernetes vendors, or to independent Ingress implementations, is topical today, but before long it may become irrelevant. The Kubernetes community has decided that it did not accomplish all that it wanted to with the Ingress specification. Part of what led to Red Hat’s hesitancy to fully adopt Ingress and replace Routes entirely was the knowledge that Ingress did not meet the needs of our customers.

Still in alpha (but graduating to beta soon), the Gateway API specification is set to evolve the Kubernetes service networking functionality of Ingress into a more fully featured set of native capabilities. The minimal set of Ingress features has not been meeting the needs of all of its users, and the fractured ecosystem of annotation-driven controllers has not been a good experience. By establishing a more expressive, extensible, and role-oriented API, the Gateway API project aims to remove the drift that exists across implementations of the Ingress spec, without the unnecessary limitations that encouraged that fracturing in the first place. You can find more information about Gateway API at the SIG landing page: https://gateway-api.sigs.k8s.io/.
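
To make that concrete, here is a sketch of what the resources look like in the v1alpha2 API current at the time of writing. The GatewayClass name and host are hypothetical, and the role-oriented design shows in the split: a cluster operator would typically own the Gateway, while an application owner would own the HTTPRoute:

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: Gateway
metadata:
  name: demo-gateway
  namespace: helloworld
spec:
  gatewayClassName: example-class   # hypothetical; provided by a Gateway API implementation
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: Same                  # only HTTPRoutes in this namespace may attach
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
  name: hello-world
  namespace: helloworld
spec:
  parentRefs:
  - name: demo-gateway              # attach this route to the Gateway above
  hostnames:
  - hello.gateway.example.com       # hypothetical host name
  rules:
  - backendRefs:
    - name: hello-world             # the same demo Service from earlier
      port: 8000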

Q: Why did the community choose to create the Gateway API project as opposed to continuing with the Ingress direction?

A: Rather than simply targeting Ingress shortcomings, the community decided that a new set of APIs was needed to solve the wide range of Kubernetes service networking challenges.

Red Hat, as the world’s leading provider of enterprise open source solutions, has been involved in driving those discussions and shaping that design to meet the needs of our customers. The Gateway API spec is being developed in the open within the Kubernetes project’s Networking Special Interest Group. A Red Hatter serves as one of the three chairs of that group (membership is public: https://github.com/kubernetes/community/tree/394b58e2f8de029b7cdeb9902fb6a5cabc98fcab/sig-network#chairs), and several others work within the group on the design and on prototype reference implementations. The meetings are held in public, and the minutes and recordings are saved and publicly available for those who have input on the design. You can check out the meeting minutes here: https://docs.google.com/document/d/1eg-YjOHaQ7UD28htdNxBR3zufebozXKyI28cl2E11tU/edit. Several implementations are maturing, with Red Hatters working on more than one, and we expect that as they mature, one or more will be suitable for consumption by our enterprise customers. You can keep tabs on those implementations here: https://gateway-api.sigs.k8s.io/implementations/.

Conclusion - Do Routes remove choice for the OpenShift user?

No more than any other vendor-provided Ingress implementation does. You can use the default implementation we provide with OpenShift Routes, especially if its features make sense for you. Or, if you are looking for a standardized way to deploy Ingress across multiple Kubernetes distributions with different feature sets, you can deploy any industry-standard Ingress implementation and use that for your applications just fine. In our experience, people are most productive with their time when they use the features provided by their chosen Kubernetes distribution to accelerate delivering real value to their customers.

Q: Does an Ingress implementation exist that leaves you with no work when moving to another Ingress implementation?

A: Most Ingress implementations expose advanced functionality through annotations. These annotations are not portable, requiring users to map annotations to the new Ingress implementation or forgo the advanced functionality altogether.


About the authors

A Red Hatter since 2019, James has a background in IT infrastructure engineering, cybersecurity, and traditional system administration. He built infrastructure as code and ran tooling in Linux containers for several years, spent two more years as an incident responder and threat hunter on DoD networks, and loves to tinker with whatever lives at the nexus of performance and security. His focus on security, including the use of modern technologies to enable defense-in-depth in IT infrastructure, is likely to shine through in his writing.
