Technical Overview

The default Ingress Controller in OpenShift Container Platform 4.x is based on HAProxy. If you aren’t familiar with it, HAProxy is free, open source software that provides a highly available load balancer and proxy server for TCP- and HTTP-based applications, distributing requests across multiple servers.

In the upstream HAProxy community project, the Native HTTP Representation (HTX) engine was added in the HAProxy 1.9 release in late 2018. It was developed to enhance HAProxy’s ability to parse and modify HTTP messages. In HTX mode, HAProxy can more easily manipulate any representation of the HTTP protocol, which allows users to get end-to-end HTTP/2 connectivity and to adopt newer versions of HTTP-based technologies and protocols at a rapid pace. With the release of version 1.9.2, HAProxy fully supports gRPC traffic. The gRPC protocol allows your application services to communicate with very low latency; HAProxy supports it by enabling bidirectional streaming of data, parsing and inspecting HTTP headers, and logging gRPC traffic.

OpenShift Container Platform 4.5 and later provide end-to-end proxying of HTTP/2 traffic with the help of HAProxy, the default Ingress Controller in the cluster. You can create gRPC-enabled routes to secure and route gRPC traffic over HTTP/2. This capability allows application developers and teams to leverage HTTP/2 protocol features, including a single multiplexed connection, header compression, binary framing, and more.

HTTP/2 connectivity can be enabled for an individual Ingress Controller (by default, HAProxy) in OpenShift or for the entire OpenShift cluster. To enable the use of HTTP/2 for the connection from the client to HAProxy, an OpenShift route must specify a custom certificate, which can be generated using OpenSSL or procured from a trusted certificate authority. A route that uses the default certificate cannot use HTTP/2 connectivity. This restriction is necessary to prevent problems from connection coalescing, where the client reuses a connection for different routes that use the same certificate.

The connection from HAProxy to the application pod can use HTTP/2 only for re-encrypt or passthrough routes, not for edge-terminated or insecure routes. This restriction is due to HAProxy’s use of a TLS extension called Application-Layer Protocol Negotiation (ALPN) to negotiate the use of HTTP/2 between itself and the back-end application service. This means that end-to-end HTTP/2 connectivity is possible only with passthrough and re-encrypt routes, not with insecure or edge-terminated routes.

Let me describe the reason for this restriction further, in case you are not familiar with ALPN. When using TLS with the HTTP/1.1 protocol, the convention is to listen on port 443 by default. Using a different port for HTTP/2 would only have made things more complex, so sticking with the same port was the final decision. However, there was still a need to define which version of the HTTP protocol the server and client would use to communicate. An entirely separate handshake could have negotiated the protocol, but in the end it made more sense to encode this information into the TLS handshake itself, saving a whole round trip and improving latency. The Application-Layer Protocol Negotiation (ALPN) extension was developed to update TLS so that a client and server can negotiate the application protocol while establishing a secure connection. It was created primarily to enable support for HTTP/2, but it can be leveraged for any other application protocol that might need to be negotiated in the future.
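To make this concrete, here is a minimal client-side sketch of ALPN using Python’s standard ssl module. The protocol identifiers “h2” and “http/1.1” are the registered ALPN names; the connection code in the comment is illustrative only (no real host is assumed):

```python
import ssl

# Build a client-side TLS context that offers both HTTP/2 ("h2") and
# HTTP/1.1 to the server via the ALPN extension during the handshake.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])

# After wrapping a socket and completing the handshake, the protocol the
# server selected is available, e.g.:
#
#   with socket.create_connection((host, 443)) as sock:
#       with ctx.wrap_socket(sock, server_hostname=host) as tls:
#           tls.selected_alpn_protocol()  # "h2" if the server agreed
```

The server picks one protocol from the client’s list (or rejects the handshake), so both sides know the application protocol before any application data is exchanged.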

Known Limitation

For re-encrypt routes, the Ingress Controller negotiates its connection to the application independently of the connection from the client. This means a client may connect to the HAProxy Ingress Controller and negotiate HTTP/1.1, and the Ingress Controller may then connect to the application service, negotiate HTTP/2, and forward the request from the client’s HTTP/1.1 connection over the HTTP/2 connection to the application. This poses a problem if the client subsequently tries to upgrade its connection from HTTP/1.1 to the WebSocket protocol, because the Ingress Controller cannot forward WebSocket over HTTP/2 and cannot upgrade its HTTP/2 connection to WebSocket. Consequently, if you have an application that is intended to accept WebSocket connections, it must not allow negotiating the HTTP/2 protocol, or else clients will fail to upgrade to the WebSocket protocol.
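For an application that terminates TLS itself, one way to “not allow negotiating HTTP/2” is simply to leave “h2” out of the server’s advertised ALPN list. This is a generic sketch with Python’s standard ssl module, not something taken from the sample application; the certificate paths in the comment are the ones used later in this walkthrough:

```python
import ssl

# Server-side TLS context that refuses to negotiate HTTP/2: only
# "http/1.1" is advertised via ALPN, so a client can still send an
# HTTP/1.1 "Upgrade: websocket" request after the handshake completes.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_alpn_protocols(["http/1.1"])
# ctx.load_cert_chain("tls/tls.crt", "tls/tls.key")  # cert pair from step 5
```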

There is already a reported issue for this limitation in the HAProxy GitHub repo. You can track it if it’s a blocker for you and you want to check the progress of its fix: GitHub Issue

Sample Python gRPC application testing in OpenShift

  1. As a prerequisite, you need an OpenShift Container Platform 4.5 or later cluster up and running, with a bastion node running the RHEL operating system. You also need Go and the gRPCurl toolkit installed on the bastion node.

  2. Enable HTTP/2 on a single Ingress Controller. To enable HTTP/2 on an Ingress Controller, enter the oc annotate command:


$ oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=true

  • Replace <ingresscontroller_name> with the name of the Ingress Controller to annotate.

  3. Alternatively, to enable HTTP/2 for the entire cluster (in which case you can skip step 2), enter the oc annotate command against the cluster Ingress configuration:

$ oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true

  4. Clone the sample gRPC Python application using the below command:

$ git clone https://github.com/kesseract/ssl_grpc_example.git

$ cd ssl_grpc_example/

  5. Generate a certificate for the server using the below command:

make gen_key

It uses OpenSSL to generate the certificate. For the CN record, use the *.apps.<cluster_name>.<base_domain> wildcard hostname for OpenShift. You can get this hostname using the below command:


$ oc get ingresses.config/cluster -o jsonpath='{.spec.domain}'

Example output: apps.cluster-d3f6.d3f6.example.opentlc.com
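As a side note, when you later create a route without specifying a host, OpenShift generates one following the <route-name>-<namespace>.<ingress-domain> convention, which is why a wildcard certificate on the ingress domain covers it. A small sketch (the "demo" namespace below is illustrative):

```python
def default_route_host(route_name: str, namespace: str, ingress_domain: str) -> str:
    """Host that OpenShift generates for a route when none is specified:
    <route-name>-<namespace>.<ingress domain>."""
    return f"{route_name}-{namespace}.{ingress_domain}"

# With the ingress domain from the command above:
# default_route_host("grpc", "demo", "apps.cluster-d3f6.d3f6.example.opentlc.com")
# -> "grpc-demo.apps.cluster-d3f6.d3f6.example.opentlc.com"
```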

  6. Copy the generated certificate, named “tls.crt”, and key, named “tls.key”, into an empty folder called “tls”.

  7. Create the server by deploying the gRPC Python application in the OpenShift cluster, in any project. You can either build the image using S2I or deploy a prebuilt image from quay.io, using one of the below commands:


$ oc new-app https://github.com/tsailiming/ssl_grpc_example --name=grpc

$ oc new-app --docker-image=quay.io/ltsai/python-grpc-demo --name=grpc

  8. Now secure this application with a passthrough route. A passthrough route offers a secure alternative to re-encrypt routes because the application itself presents its TLS certificate, so traffic stays encrypted between the client and the backend application service. To create a passthrough route, you need the certificate (which you already generated in step 5) and a URL at which to access your application. The recommended way to get the certificate into the pod is an OpenShift TLS secret; secrets are exposed via a volume mount point into the container. The below commands complete all of these configurations:

$ oc create secret tls tls-secret --cert=tls/tls.crt --key=tls/tls.key

$ oc set volume deployment/grpc --name=tls-secret --type=secret --secret-name=tls-secret --mount-path=/opt/app-root/src/tls --add

$ oc create route passthrough grpc --service=grpc
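Inside the pod, the server can then read the mounted pair from the path used in the oc set volume command above. A minimal sketch (the helper name is mine; a real gRPC server would hand these bytes to its TLS credentials API, e.g. grpc.ssl_server_credentials):

```python
import pathlib

def load_tls_pair(tls_dir: str = "/opt/app-root/src/tls"):
    """Read the key/certificate that the tls-secret volume mounts into the
    container; "tls.key" and "tls.crt" are the standard keys of an
    OpenShift TLS secret."""
    d = pathlib.Path(tls_dir)
    return (d / "tls.key").read_bytes(), (d / "tls.crt").read_bytes()
```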

  9. Run the client using gRPCurl:

$ HOSTNAME=`oc get route grpc -o jsonpath='{.spec.host}'`

$ grpcurl -import-path . -proto service.proto -insecure -cert tls/tls.crt -key tls/tls.key $HOSTNAME:443 Server.Foo

{
  "message": "Hello! Current time is Thu Sep 24 14:47:30 2020"
}

If you see a message similar to the above output, then you have successfully tested end-to-end HTTP/2 connectivity in OpenShift Container Platform.

References