[Figure: full-flow-graph — the complete bookinfo request flow]

OpenShift Virtualization is already established as a great tool for lift-and-shift migrations. Developers can prioritize rewriting or porting the individual components where they get the most reward for their efforts. Components that run on legacy virtual machines can be brought into the OpenShift cluster mostly unaltered until they can be migrated to containers.

Service meshes provide a way to manage the complexity of applications built on a number of individual microservices. It may seem like the whole porting process should be finished before adding a potentially complicated infrastructure layer, but developers need not wait until all legacy services are replaced to enjoy the benefits of a service mesh.

Today we look into the case where an older application, built from a number of virtualized services, is ported to a microservice-based architecture while simultaneously being managed by OpenShift Service Mesh.

Versions

This article is based on OpenShift Service Mesh 2.1.2 and OpenShift Virtualization 4.10.0 on an OpenShift 4.10.11 cluster.

As of this writing, the OpenShift cluster network must be OpenShiftSDN, not OVNKubernetes, for VirtualMachines to work with OpenShift Service Mesh.
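
You can check which network type your cluster uses before proceeding:

oc get network.config cluster -o jsonpath='{.status.networkType}'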

Install OpenShift Service Mesh

Following the OpenShift Service Mesh Installation instructions, install the set of operators that make up OpenShift Service Mesh:

  • OpenShift Elasticsearch Operator
  • Red Hat OpenShift distributed tracing platform
  • Kiali Operator (provided by Red Hat)
  • Red Hat OpenShift Service Mesh

The Bookinfo Application

Bookinfo is a sample application designed to demonstrate Istio's features. (Istio is the upstream community project on which OpenShift Service Mesh is largely based.)

Bookinfo is composed of four microservices written in various languages, which together form a website that provides information about a book. The top-level service is productpage, which calls details to display attributes of the book, and reviews, which provides a pair of reviews of the book. The reviews microservice in turn calls the ratings microservice to provide a one-to-five-star rating of the book from the viewpoint of each reviewer. The reviews and ratings services each come in multiple versions that behave differently depending on how traffic is directed through the application, allowing demonstration of Istio's traffic routing features.

The easiest part of the bookinfo app to peel off is the ratings service. This is a small Node.js program designed to interface with a database back-end. When no database is defined, as in the default bookinfo deployment example, it fabricates two entries: 5 stars from Reviewer1 and 4 stars from Reviewer2.
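
For example, once the application is deployed, querying the service from inside the bookinfo namespace (the path and response shape come from the ratings source) returns something like:

curl -s http://ratings:9080/ratings/0
{"id":0,"ratings":{"Reviewer1":5,"Reviewer2":4}}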

For this article, we will work backwards from the usual migration path and introduce virtual machines to replace the ratings microservice. To get started, install Bookinfo in a new project called bookinfo.

Follow the instructions under "Adding workloads to a service mesh" to install the Bookinfo example application. This will guide you through the following steps:

  • Create the bookinfo and istio-system Namespaces.
  • Create a Service Mesh Control Plane (SMCP).
  • Create a Service Mesh Member Roll (SMMR).
  • Create the Deployments that make up the bookinfo microservices.
  • Create an ingress Gateway and extract the URL to the application.
  • Add a basic set of DestinationRules to enable OpenShift Service Mesh to present the application.
  • Check that the application is reachable (a quick check is shown below).
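
For that last check, once GATEWAY_URL is set, a quick curl against the product page should return its title:

curl -s "http://$GATEWAY_URL/productpage" | grep -o "<title>.*</title>"

<title>Simple Bookstore App</title>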

At this point, you can review the Deployments that make up bookinfo:

oc get deployment

NAME             READY   UP-TO-DATE   AVAILABLE   AGE
details-v1       1/1     1            1           7d22h
productpage-v1   1/1     1            1           7d22h
ratings-v1       1/1     1            1           7d22h
reviews-v1       1/1     1            1           7d22h
reviews-v2       1/1     1            1           7d22h
reviews-v3       1/1     1            1           7d22h

There are also DestinationRules that define subsets of each service, tagging service versions so OpenShift Service Mesh can guide traffic through the mesh.

oc get destinationrule reviews -o yaml

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
  namespace: bookinfo
spec:
  host: reviews
  subsets:
  - labels:
      version: v1
    name: v1
  - labels:
      version: v2
    name: v2
  - labels:
      version: v3
    name: v3

and for the ratings microservice:

oc get destinationrule ratings -o yaml

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: ratings
  namespace: bookinfo
spec:
  host: ratings
  subsets:
  - labels:
      version: v1
    name: v1
  - labels:
      version: v2
    name: v2
  - labels:
      version: v2-mysql
    name: v2-mysql
  - labels:
      version: v2-mysql-vm
    name: v2-mysql-vm

As you can see, reviews and ratings both have multiple subsets defined, and there are multiple deployments for the reviews service. Looking into the labels on the deployments, we see:

oc get deployment --show-labels

NAME             READY   UP-TO-DATE   AVAILABLE   AGE     LABELS
details-v1       1/1     1            1           7d22h   app=details,version=v1
productpage-v1   1/1     1            1           7d22h   app=productpage,version=v1
ratings-v1       1/1     1            1           7d22h   app=ratings,version=v1
reviews-v1       1/1     1            1           7d22h   app=reviews,version=v1
reviews-v2       1/1     1            1           7d22h   app=reviews,version=v2
reviews-v3       1/1     1            1           7d22h   app=reviews,version=v3

Note that each version of the reviews deployment is marked by app and version labels. Version 1 of the reviews service does not display star ratings at all, so to demonstrate VMs running ratings, we need to disable reviews-v1. We can do this with a VirtualService that routes requests to either v2 or v3. We will choose v3 because it displays the stars in red, making them stand out more. Create the following YAML and apply it to your bookinfo namespace:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
  namespace: bookinfo
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v3
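
Save this as, say, reviews-v3-vs.yaml and apply it:

oc apply -n bookinfo -f reviews-v3-vs.yaml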

Now if you refresh the bookinfo URL, you should see only the version with red stars, showing a five-star and a four-star rating.

[Figure: v3v1 — reviews v3 showing the five- and four-star ratings from ratings-v1]

Virtual Machines in the Service Mesh

The first virtual machine we add runs version v2 of the ratings service. To distinguish the VM-based service from the original, we alter the code to produce a default rating of three stars from both reviewers.

From the command line it is possible to discern which version of the ratings service is running by counting the instances of glyphicon-star vs glyphicon-star-empty.

curl -s $GATEWAY_URL/productpage | grep -c 'glyphicon-star\"'

By default, this will count nine stars: five from Reviewer1 plus four from Reviewer2.

Start by installing a Fedora VM.

OpenShift Virtualization 4.10 pre-populates boot source images for certain template OSes such as RHEL, CentOS, and Fedora. If your cluster has a default storage class when you install the OpenShift Virtualization operator, you should be able to clone a Fedora VM with a few clicks in the OpenShift console. Name the VM ratings-v2 and provide it with an SSH public key to make access easier. Make sure to uncheck the "Start this virtual machine after creation" option so annotations and labels can be added before the OS starts. Once you have created the virtual machine, either edit the YAML directly in the OpenShift Console or use oc patch to add the sidecar injection annotation to the spec.template.metadata stanza. Together with the labels we will add later, the completed stanza looks like this:

...
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
      labels:
        app: ratings
        version: v2

Using the command-line:

oc patch vm ratings-v2 --type merge --patch='{"spec": {"template": {"metadata": {"annotations": {"sidecar.istio.io/inject": "true"}}}}}'

You may now start the virtual machine. Once it starts, you can log in via ssh. Due to Service Mesh's default NetworkPolicies, however, you cannot reach the VM directly in the usual NodePort manner; instead, connect through the host in the GATEWAY_URL environment variable, which points to the IngressGateway route. Setting this variable is covered in the bookinfo installation instructions mentioned earlier, but for convenience:

export GATEWAY_URL=$(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')

Find the NodePort of the VM's ssh service, then connect:

oc get service ratings-v2-ssh-service

NAME                     TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)
ratings-v2-ssh-service   NodePort   172.30.59.219   <none>        22000:32219/TCP

ssh fedora@${GATEWAY_URL} -p 32219

What follows is a bash script to run on the VM; it sets up the Fedora VM to run the ratings Node.js application on port 9080 and alters the default rating so it returns three stars for both reviewers:

#!/bin/bash

sudo dnf -y install git nodejs vim-enhanced patch

cd
git clone https://github.com/istio/istio

# Change the default ratings from 5 and 4 stars to 3 and 3
patch -d istio -p1 <<'PATCH'
diff --git a/samples/bookinfo/src/ratings/ratings.js b/samples/bookinfo/src/ratings/ratings.js
index eda45eca5c..cddf4932ac 100644
--- a/samples/bookinfo/src/ratings/ratings.js
+++ b/samples/bookinfo/src/ratings/ratings.js
@@ -243,8 +243,8 @@ function getLocalReviews (productId) {
   return {
     id: productId,
     ratings: {
-      'Reviewer1': 5,
-      'Reviewer2': 4
+      'Reviewer1': 3,
+      'Reviewer2': 3
     }
   }
 }
PATCH

# Install the application (and its package.json) and its dependencies
sudo mkdir -p /usr/share/node
sudo cp istio/samples/bookinfo/src/ratings/*.js* /usr/share/node
pushd /usr/share/node
sudo npm install
popd

# Run the ratings application as a systemd service on port 9080
sudo tee /etc/systemd/system/ratings.service <<END
[Unit]
Description=Run NodeJS App Ratings

[Service]
ExecStart=/usr/bin/node /usr/share/node/ratings.js 9080

[Install]
WantedBy=multi-user.target
END

sudo systemctl daemon-reload
sudo systemctl enable --now ratings.service
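
After the script completes, you can sanity-check the service from inside the VM (port and path as configured above):

curl -s http://localhost:9080/ratings/0

This should return {"id":0,"ratings":{"Reviewer1":3,"Reviewer2":3}}.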

When the service is confirmed working, label the VM spec.template to designate it as an instance of the ratings service with version v2:

oc patch vm ratings-v2 --type merge --patch='{"spec": {"template": {"metadata": {"labels": {"app": "ratings", "version": "v2"}}}}}'

Restart the VM using the OpenShift console or the virtctl tool so the labels take effect.
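
For example, using virtctl:

virtctl restart ratings-v2 -n bookinfo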

NOTE: Rebooting the OS from within the VM will not recreate the VMI with the appropriate labels.

Once the VM comes back up, try the star-counting curl command again:

curl -s $GATEWAY_URL/productpage | grep -c 'glyphicon-star\"'

Now you should see a different result on successive requests: 9 for the original service and 6 for the new VM.
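
Running the count in a short loop makes the alternation easy to see:

for i in $(seq 1 4); do
  curl -s $GATEWAY_URL/productpage | grep -c 'glyphicon-star\"'
done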

[Figure: v3v2 — reviews v3 showing three red stars from each reviewer]

Virtual Machine as a Database

Recall the list of DestinationRule subsets for the ratings service. In addition to v1 and v2, there are v2-mysql and v2-mysql-vm. These two demonstrate using a separate database back-end for a microservice. The first, v2-mysql, uses a Pod called mysqldb in the same bookinfo namespace. The second, v2-mysql-vm, connects to a virtual machine for the database. Originally, the bookinfo application connected to a VM outside the Kubernetes cluster, made reachable by IP address through a ServiceEntry. Instead, we will set up a VM called mysqldb in a new namespace, vm, and configure it to run a MariaDB database server.

First create a new project called vm. Next, edit the ServiceMeshMemberRoll in the istio-system namespace and add the new namespace to the members list. It should read:

  spec:
    members:
    - bookinfo
    - vm
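
Equivalently, from the command line (the installation instructions create an SMMR named default):

oc -n istio-system patch smmr default --type merge --patch '{"spec": {"members": ["bookinfo", "vm"]}}'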

Create a new Fedora virtual machine in the vm namespace called mysqldb. As before, give it an ssh public key and set it not to start automatically on creation, since we want to annotate it for OpenShift Service Mesh before the first boot:

oc -n vm patch vm mysqldb --type merge --patch='{"spec": {"template": {"metadata": {"annotations": {"sidecar.istio.io/inject": "true"}}}}}'

Now start the VM and find its ssh-service port:

oc -n vm get svc

NAME                  TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)
mysqldb-ssh-service   NodePort   172.30.219.150   <none>        22000:31569/TCP

ssh fedora@${GATEWAY_URL} -p 31569

The following bash script installs the MariaDB server and sets it up to return two-star ratings to any service connecting to it.

#!/bin/bash

sudo dnf -y install mariadb-server vim-enhanced

sudo sed -i '/bind-address/c\bind-address = 0.0.0.0' /etc/my.cnf.d/mariadb-server.cnf
sudo systemctl enable --now mariadb

cat <<EOF | sudo mysql
# Grant access to root
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY 'password' WITH GRANT OPTION;
# Grant root access to other IPs
CREATE USER 'root'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
EOF

sudo systemctl restart mariadb
curl -sL https://raw.githubusercontent.com/istio/istio/release-1.13/samples/bookinfo/src/mysql/mysqldb-init.sql | mysql -u root -ppassword

# Set both ratings to 2
mysql -u root -ppassword test -e "update ratings set rating=2;select * from ratings;"

Now label the mysqldb VM and restart it:

oc -n vm patch vm mysqldb --type merge --patch='{"spec": {"template": {"metadata": {"labels": {"app": "mysqldb", "version": "v1"}}}}}'
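
Then restart it, for example with virtctl:

virtctl restart mysqldb -n vm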

In order for the service mesh to discover the VM and add it to the mesh, we need to create a Service in the vm namespace:

apiVersion: v1
kind: Service
metadata:
  name: mysqldb
  labels:
    app: mysqldb
spec:
  ports:
  - port: 3306
    name: tcp
  selector:
    app: mysqldb
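
Save this as, say, mysqldb-service.yaml and create it in the vm namespace:

oc -n vm apply -f mysqldb-service.yaml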

Finally, we need a ratings microservice running in bookinfo that connects to the database VM.
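
The upstream bookinfo samples provide a ratings-v2-mysql-vm Deployment for exactly this case. The sketch below is based on the release-1.13 sample (the image tag and environment variables come from that sample; treat them as assumptions and adjust to your installation), with the sidecar injection annotation added for OpenShift Service Mesh:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratings-v2-mysql-vm
  labels:
    app: ratings
    version: v2-mysql-vm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ratings
      version: v2-mysql-vm
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
      labels:
        app: ratings
        version: v2-mysql-vm
    spec:
      containers:
      - name: ratings
        image: docker.io/istio/examples-bookinfo-ratings-v2:1.16.2
        env:
        - name: DB_TYPE
          value: mysql
        - name: MYSQL_DB_HOST
          value: mysqldb.vm.svc.cluster.local
        - name: MYSQL_DB_PORT
          value: "3306"
        - name: MYSQL_DB_USER
          value: root
        - name: MYSQL_DB_PASSWORD
          value: password
        ports:
        - containerPort: 9080

Apply it to the bookinfo namespace:

oc -n bookinfo apply -f ratings-v2-mysql-vm.yaml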

Now if you retry the curl command, you should see a count of 4 (two stars from each reviewer) rotating in alongside the 9 and 6 results from the v1 and v2 services.

[Figure: v3v2-mysql — reviews v3 showing two red stars from each reviewer]

Conclusion

As you can see with this moderately complex application, it is relatively easy to use virtual machines from OpenShift Virtualization as drop-in replacements for microservices in an OpenShift Service Mesh. Hopefully this article encourages you to give OpenShift Service Mesh a try even as you work to migrate applications from traditional VM-based patterns to microservice-based ones.