
IPv6 Single Stack is generally available (GA) in OpenShift Service Mesh (OSSM) starting from version 2.4. OpenShift Service Mesh is based on the Istio and Kiali projects and is part of all Red Hat OpenShift subscription levels. This blog post covers what you must consider when migrating your current applications to IPv6.

By default, in OSSM installations from version 2.4, all service mesh control plane resources work with IPv6, and the Envoy sidecar is enabled for IPv6. However, the application container you intend to deploy must itself support IPv6 so it can communicate with the Envoy proxy and handle both incoming and outgoing network traffic over the IPv6 protocol.

The Envoy proxy intercepts inbound and outbound traffic for the application container. It acts as a network intermediary, handling all network communication for the container. The Envoy proxy will receive IPv6 traffic. All incoming communication will be passed to the application container using IPv6 by default. Therefore, the container should be bound to IPv6 to communicate with the Envoy proxy and allow all incoming and outgoing network traffic in IPv6.

To illustrate this, consider the example demonstrated in this article.

Prerequisites

  • OpenShift Container Platform (OCP) 4.10 or higher. OCP must use OVN-Kubernetes as the network provider to support IPv6. For more information about the OVN-Kubernetes network plugin, see the documentation. A quick way to verify the cluster network configuration is shown after this list.
  • OSSM 2.4 or later operator installed.
  • Service Mesh Control Plane (SMCP) 2.4 or later deployed.
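
As a quick sanity check for the first prerequisite, you can confirm that the cluster uses OVN-Kubernetes and that the cluster and service networks are IPv6 CIDRs. This is a minimal sketch; the exact values depend on your cluster configuration:

# Hedged check: the network type should be OVNKubernetes and the CIDRs should be IPv6 prefixes (for example, fd01::/48 and fd02::/112)
oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'
oc get network.config/cluster -o jsonpath='{.spec.clusterNetwork[*].cidr}{"\n"}'
oc get network.config/cluster -o jsonpath='{.spec.serviceNetwork[*]}{"\n"}'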

The following steps demonstrate installing a basic httpbin pod to work with IPv6 in Red Hat OpenShift Service Mesh on the OpenShift Container Platform. Additionally, it explains how to install the bookinfo application to test the newly created httpbin service's accessibility.

1. Create a ServiceMeshMemberRoll

You will need to create a ServiceMeshMemberRoll (SMMR) with the target namespace bookinfo. Refer to the documentation on how to add projects to the SMMR using member selectors.
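
If you do not already have an SMMR, a minimal example looks like the following. It assumes the Service Mesh control plane was deployed in the istio-system namespace; adjust the namespace and the members list to match your environment:

oc apply -f - <<EOF
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system
spec:
  members:
  - bookinfo
EOF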

2. Create a bookinfo project

Next, create the bookinfo project using the following command:

oc new-project bookinfo

 

3. Deploy bookinfo

Deploy the bookinfo application from the Maistra samples with this command:

oc apply -f https://raw.githubusercontent.com/maistra/istio/maistra-2.4/samples/bookinfo/platform/kube/bookinfo.yaml -n bookinfo

 

By default, bookinfo works with IPv6, so no changes are required.
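
To confirm that the bookinfo pods started with their sidecars and received IPv6 addresses, you can list them with wide output (pod names and IP addresses will differ in your cluster):

# Each pod should report 2/2 READY (application container plus istio-proxy) and show an IPv6 pod IP
oc get pods -n bookinfo -o wide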

4. Deploy the httpbin pod

Deploy the httpbin pod in the bookinfo namespace and test traffic.

httpbin is a simple HTTP Request & Response Service and is a good example for explaining the simple changes needed to bind an application to an IPv6 address.

You can deploy httpbin with its default start command, ["gunicorn", "--access-logfile", "-", "httpbin:app"]. Use the YAML file below to deploy httpbin in your OCP cluster with this command:

oc apply -f <fileName.yaml> -n bookinfo

 

Here is the YAML file:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: httpbin
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
    service: httpbin
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 8000
  selector:
    app: httpbin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v1
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
      labels:
        app: httpbin
        version: v1
    spec:
      serviceAccountName: httpbin
      containers:
      - name: httpbin
        image: {{ image "httpbin" }}
        command: ["gunicorn", "--access-logfile", "-", "httpbin:app"]
        ports:
        - containerPort: 8000

 

By default, the httpbin container is bound to the IPv4 loopback address 127.0.0.1:8000, as mentioned in the Gunicorn community documentation. You can see the full list of listening ports below to confirm this behavior:

sh-5.1$ netstat -tulpn | grep LISTEN
tcp    0  0 127.0.0.1:15004   0.0.0.0:*   LISTEN   -
tcp    0  0 127.0.0.1:8000    0.0.0.0:*   LISTEN   1/python3
tcp6   0  0 :::15020          :::*        LISTEN   -
tcp6   0  0 ::1:15053         :::*        LISTEN   -
tcp6   0  0 :::15021          :::*        LISTEN   -
tcp6   0  0 :::15090          :::*        LISTEN   -
tcp6   0  0 ::1:15000         :::*        LISTEN   -
tcp6   0  0 :::15001          :::*        LISTEN   -
tcp6   0  0 :::15006          :::*        LISTEN   -
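
If you want to reproduce this check yourself, one option is to run netstat inside the httpbin container. This is a sketch that assumes the netstat tool is available in the image:

# Run netstat inside the httpbin container of the running pod
oc exec -n bookinfo -c httpbin $(oc get pod -n bookinfo -l app=httpbin -o name) -- netstat -tulpn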

 

Test communications

Test the incoming and outgoing communication with a simple curl command.
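
One way to run these checks is with oc exec against the bookinfo pods. This is a sketch, and it assumes the curl binary is available in the reviews and httpbin container images:

# Incoming: call the httpbin service from one of the reviews pods
oc exec -n bookinfo -c reviews $(oc get pod -n bookinfo -l app=reviews -o name | head -n 1) -- curl -sI httpbin:8000

# Outgoing: call the productpage service from the httpbin pod
oc exec -n bookinfo -c httpbin $(oc get pod -n bookinfo -l app=httpbin -o name) -- curl -s productpage:9080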

Test incoming traffic to the httpbin container.

From one of the reviews containers:

$ curl -I httpbin:8000
HTTP/1.1 503 Service Unavailable
content-length: 145
content-type: text/plain
date: Wed, 14 Jun 2023 15:11:59 GMT
server: envoy
x-envoy-upstream-service-time: 17

 

To test outgoing traffic to the productpage container:

sh-5.1$ curl productpage:9080
<!DOCTYPE html>
<html>
 <head>
  <title>Simple Bookstore App</title>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">

<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="static/bootstrap/css/bootstrap.min.css">

<!-- Optional theme -->
<link rel="stylesheet" href="static/bootstrap/css/bootstrap-theme.min.css">

 </head>
  <body>



<p>
  <h3>Hello! This is a simple bookstore application consisting of three services as shown below</h3>
</p>
<table class="table table-condensed table-bordered table-hover"><tr><th>name</th><td>http://details:9080</td></tr><tr><th>endpoint</th><td>details</td></tr><tr><th>children</th><td><table class="table table-condensed table-bordered table-hover"><tr><th>name</th><th>endpoint</th><th>children</th></tr><tr><td>http://details:9080</td><td>details</td><td></td></tr><tr><td>http://reviews:9080</td><td>reviews</td><td><table class="table table-condensed table-bordered table-hover"><tr><th>name</th><th>endpoint</th><th>children</th></tr><tr><td>http://ratings:9080</td><td>ratings</td><td></td></tr></table></td></tr></table></td></tr></table>

<p>

  <h4>Click on one of the links below to auto-generate a request to the backend as a real user or a tester
  </h4>
</p>

<p><a href="/productpage?u=normal">Normal user</a></p>
<p><a href="/productpage?u=test">Test user</a></p>



<!-- Latest compiled and minified JavaScript -->
<script src="static/jquery.min.js"></script>

<!-- Latest compiled and minified JavaScript -->
<script src="static/bootstrap/js/bootstrap.min.js"></script>

 </body>
</html>
sh-5.1$

 

The incoming requests are failing, but the outgoing requests from the httpbin pod are working. The incoming requests fail because the Envoy sidecar passes inbound traffic to the application over IPv6, while httpbin is listening only on the IPv4 loopback address.

5. Bind the httpbin pod to an IPv6 address

Set the bind address to the IPv6 wildcard [::]:8000, as described in the Gunicorn documentation, by changing the start command to ["gunicorn", "--access-logfile", "-", "-b", "[::]:8000", "httpbin:app"]. Modify the YAML file with the following content:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: httpbin
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
    service: httpbin
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 8000
  selector:
    app: httpbin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v1
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
      labels:
        app: httpbin
        version: v1
    spec:
      serviceAccountName: httpbin
      containers:
      - name: httpbin
        image: {{ image "httpbin" }}
        command: ["gunicorn", "--access-logfile", "-", "-b", "[::]:8000", "httpbin:app"]
        ports:
        - containerPort: 8000
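
Apply the updated manifest and wait for the new httpbin pod to roll out before re-testing, for example:

oc apply -f <fileName.yaml> -n bookinfo
oc rollout status deploy/httpbin -n bookinfo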

 

Test communications again

Next, check the traffic in both directions again.

Outgoing traffic to the productpage pod still works fine, as seen below:

sh-5.1$ curl productpage:9080
<!DOCTYPE html>
<html>
 <head>
  <title>Simple Bookstore App</title>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">

<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="static/bootstrap/css/bootstrap.min.css">

<!-- Optional theme -->
<link rel="stylesheet" href="static/bootstrap/css/bootstrap-theme.min.css">

 </head>
  <body>

<p>
  <h3>Hello! This is a simple bookstore application consisting of three services as shown below</h3>
</p>

<table class="table table-condensed table-bordered table-hover"><tr><th>name</th><td>http://details:9080</td></tr><tr><th>endpoint</th><td>details</td></tr><tr><th>children</th><td><table class="table table-condensed table-bordered table-hover"><tr><th>name</th><th>endpoint</th><th>children</th></tr><tr><td>http://details:9080</td><td>details</td><td></td></tr><tr><td>http://reviews:9080</td><td>reviews</td><td><table class="table table-condensed table-bordered table-hover"><tr><th>name</th><th>endpoint</th><th>children</th></tr><tr><td>http://ratings:9080</td><td>ratings</td><td></td></tr></table></td></tr></table></td></tr></table>

<p>
  <h4>Click on one of the links below to auto-generate a request to the backend as a real user or a tester
  </h4>
</p>
<p><a href="/productpage?u=normal">Normal user</a></p>
<p><a href="/productpage?u=test">Test user</a></p>


<!-- Latest compiled and minified JavaScript -->

<script src="static/jquery.min.js"></script>


<!-- Latest compiled and minified JavaScript -->
<script src="static/bootstrap/js/bootstrap.min.js"></script>

 </body>
</html>
sh-5.1$

 

Incoming traffic from the reviews pod to the httpbin pod is now successful:

$ curl -I httpbin:8000
HTTP/1.1 200 OK
server: envoy
date: Wed, 14 Jun 2023 15:10:57 GMT
content-type: text/html; charset=utf-8
content-length: 11279
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 5
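
As an optional check, repeating the netstat command inside the httpbin container should now show the gunicorn process listening on the IPv6 wildcard :::8000 instead of 127.0.0.1:8000:

sh-5.1$ netstat -tulpn | grep 8000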

 

Another example: MongoDB

Consider another example of binding addresses for a commonly used application across clusters: MongoDB.

According to the MongoDB documentation, some parameters must be set to bind to an IPv6 address. Here are the options provided:

  • To bind to all IPv4 addresses, you can specify the bind IP address as 0.0.0.0.
  • To bind to all IPv4 and IPv6 addresses, you can specify the bind IP address as "::,0.0.0.0". Alternatively, you can use the net.bindIpAll setting or the command-line option --bind_ip_all.

So, if you run MongoDB, you need to set those parameters to make it work in an IPv6 environment, as sketched below.
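
As a minimal sketch (assuming a recent MongoDB release), the equivalent command-line invocation looks like this. Note that the MongoDB documentation also requires IPv6 support to be enabled explicitly with net.ipv6 or --ipv6:

# Enable IPv6 support and bind to all IPv4 and IPv6 addresses
mongod --ipv6 --bind_ip_all

# Or list the bind addresses explicitly
mongod --ipv6 --bind_ip "::,0.0.0.0"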

Wrap up

As you can see in the examples, when migrating applications from IPv4 to IPv6 in an OpenShift Service Mesh (OSSM) environment, it is essential to ensure that the applications are correctly bound to IPv6 addresses. By default, in OSSM installations from version 2.4, the service mesh control plane resources work with IPv6. However, the application container itself must also support IPv6 to communicate with the Envoy proxy and enable incoming and outgoing network traffic in IPv6.

To illustrate the importance of correct binding, we examined an example using the httpbin container. We initially observed that incoming requests failed while outgoing requests worked. However, by changing the bind address to IPv6 [::]:8000, we successfully established communication in both directions. Similarly, when working with MongoDB, the correct configuration of bind IP addresses, including support for both IPv4 and IPv6, is crucial for seamless operation in an IPv6 environment.

In summary, ensuring proper binding of applications is vital to maintain their functionality within an IPv6 OpenShift Container Platform (OCP) cluster. Neglecting this aspect may lead to communication issues between deployed services. Refer to the relevant documentation of your application and configure the applications accordingly to facilitate a smooth migration to IPv6.

To learn more about OpenShift Service Mesh, visit: https://www.redhat.com/en/technologies/cloud-computing/openshift/what-is-openshift-service-mesh

