This post is a step-by-step guide to explain certain aspects of deploying a custom app on Istio, going beyond the commonly found BookInfo sample app tutorials.
We'll start with a high-level overview of what OpenShift currently supports when it comes to routing and traffic management, and then dive deeper into Istio by installing an example app and explaining what's happening in detail.
With that being said, it's important to clarify that OpenShift does not officially support Istio, so this post is for technical evaluation purposes only.
Routing and Traffic Management Overview
OpenShift currently supports state-of-the-art routing and traffic management capabilities via HAProxy, its default router, and via F5 Router plug-ins running inside containers. Ingress traffic is proxied to the Kubernetes service associated with a given route, and from there to the endpoints listening inside the containers.
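For reference, a minimal Route object of the kind the HAProxy router serves might look like the following sketch; the route name, hostname, service name, and port here are placeholders, not taken from this post's deployment:

apiVersion: v1
kind: Route
metadata:
  name: myapp                       # placeholder route name
spec:
  host: myapp.apps.example.com      # external hostname answered by the router
  to:
    kind: Service
    name: myapp                     # Kubernetes service whose endpoints receive the traffic
  port:
    targetPort: 8080                # port on the backing service/pods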
Although F5 appliances are not natively able to run as OpenShift nodes, they can generally scale independently of the cluster, as long as the load balancer pool is correctly managed and there are one or more ramp nodes available to tunnel traffic into the internal SDN. In this scenario, router pods watch the Kube API for new routes via labels and selectors and then pass this information to the load balancer API.
However, customers often come up with certain deployment use cases which can add complexity to our recommended architectures for external load balancers: http://playbooks-rhtconsulting.rhcloud.com/playbooks/installation/load_balancing.html
Deciding on whether to use ramp nodes or native integration also depends on the specific OpenShift and F5 BIG-IP version combination. This isn't necessarily a big problem, but in certain environments it might add complexity and cost, and potentially raise concerns related to high availability and security.
About Istio Pilot: Envoy
In an attempt to unify load balancing pools and traffic management while minimizing operational overhead, Envoy enters the picture: an API-driven, protocol-agnostic data plane proxy deployed as a microservices mesh agent within the Istio project. One of the main components of this project, Istio Pilot, is a service mesh orchestrator responsible for managing and propagating configuration to the individual components. In this post I'm focusing on the default proxy agents, whose deployment is based on Envoy and Mixer filters.
Those proxy agents form the mesh: a software-defined, message-passing channel for a distributed system composed of different upstream and downstream services and applications. As one of the main components of Istio, Envoy has an extensive list of features, although I'll be focusing on its transparent proxying and routing capabilities within OpenShift.
Hopefully, it is now easier to picture OpenShift as a good match for this kind of deployment, since it already registers where every service is running and provides APIs to access that information. On top of that, Envoy provides a Service Discovery Service (SDS) to dynamically configure where services can be found upstream. The premise is that applications shouldn't be managing their own load balancing details, logic, and/or service discovery.
Furthermore, OpenShift takes care of automatically recovering, re-balancing, or rescheduling Istio pods when nodes fail or undergo maintenance work. What follows is how to deploy a custom app on Istio on top of OpenShift.
Step 1: Istio Deployment
While writing this post, I used Ansible to deploy Istio on top of an OpenShift Container Platform cluster running on AWS. For a local development environment on your laptop, feel free to use https://github.com/rflorenc/Istio_OpenShift_demo anywhere and in any way, except in production. The playbooks support Fedora-based systems only. You'll find all the deployments and services under the examples directory.
$ ansible-playbook setup_istio_local.yml
Environment setup
$ oc version
oc v3.7.0-0.143.2
kubernetes v1.7.0+80709908fd
features: Basic-Auth GSSAPI Kerberos SPNEGO
openshift v3.7.0-0.143.2
kubernetes v1.7.0+80709908fd
After running ansible-playbook setup_istio_local.yml, we end up with a setup similar to the one below:

Above we can see the control/data plane API pods: Mixer, Pilot, and Ingress/Egress.
The Mixer pod talks to every istio-proxy sidecar container and is responsible for insulating Envoy from specific environment or back-end details.
$ oc get pods
NAME READY STATUS RESTARTS AGE
grafana-2894277879-3x251 1/1 Running 0 5h
istio-ca-172649916-gqdzm 1/1 Running 0 5h
istio-egress-3074077857-cx0pg 1/1 Running 0 5h
istio-ingress-4019532693-w3w1r 1/1 Running 0 5h
istio-mixer-113835218-76n57 2/2 Running 0 5h
istio-pilot-401116135-vz9hv 1/1 Running 0 5h
prometheus-4086688911-5z9l5 1/1 Running 0 5h
servicegraph-3770494623-c58w2 1/1 Running 0 5h
skydive-agent-0nxjg 1/1 Running 0 3h
skydive-agent-c42w5 1/1 Running 0 3h
skydive-analyzer-1-t6zlt 2/2 Running 0 3h
zipkin-3660596538-8r363 1/1 Running 0 5h
Step 2: Diving Deeper
As mentioned before, Istio Pilot is a control component of the mesh, converting routing rules into Envoy-specific configurations and propagating them to the sidecars at runtime.
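Purely as an illustration of what such a rule looks like in the v0.2-era config.istio.io/v1alpha2 API (this rule is not used anywhere in this walkthrough, and the names are placeholders), Pilot consumes objects of roughly this shape:

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: composer-default          # hypothetical rule name
spec:
  destination:
    name: composer                # mesh service the rule applies to
  precedence: 1
  route:
  - labels:
      version: v1                 # send all matching traffic to pods labeled version=v1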
It is worth mentioning that the Istio proxy currently (v0.2.7) gets deployed in Kubernetes via init containers and a sidecar container that is kube-injected through a Kubernetes Extensible Admission Controller. This admission controller is also called an Initializer, because it only operates at pod creation time.
$ istioctl kube-inject -f app_deployment.yaml -o injected_app_deployment.yaml
The above command alters the spec file to include the proxy sidecar.
If you’re curious about how this is accomplished and about what is added to the spec in more detail, have a look here: https://github.com/istio/pilot/blob/master/platform/kube/inject/inject.go
The init container adds the necessary iptables rules inside the pod by running https://github.com/istio/pilot/blob/master/docker/prepare_proxy.sh
Looking at the code, we can see that all traffic gets redirected to the sidecar proxy attached to our service. Envoy will then handle only intra-cluster traffic. For a visual difference between a regular spec file and one that has been "kube-injected", click here.
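As a rough simplification (an illustration of the effect, not the script's actual contents), the rules amount to redirecting the pod's inbound and outbound TCP traffic to the Envoy listener port:

# Illustration only: send TCP traffic entering or leaving the pod
# to the Envoy sidecar listener (15001 is Istio's default proxy port).
iptables -t nat -A PREROUTING -p tcp -j REDIRECT --to-port 15001
iptables -t nat -A OUTPUT -p tcp -j REDIRECT --to-port 15001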
This abstraction is useful because the application itself doesn't need to know about certain details of authenticating one service to another, like mutual auth or cert chains. At the time of writing, kube-inject doesn't consider OpenShift's DeploymentConfig resource type, so our spec should use plain Kubernetes objects:
apiVersion: extensions/v1beta1
kind: Deployment
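Fleshing that out, a minimal Deployment for the composer service might look like the following sketch before injection; the image name is a placeholder, and only the app: composer label is relied on later, in step 2.1:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: composer
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: composer                  # label later used to find the injected pod
    spec:
      containers:
      - name: composer
        image: <your-composer-image>   # placeholder image reference
        ports:
        - containerPort: 8080          # port exposed by the composer service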
When an application tries to communicate with another service inside the Envoy mesh, it actually first connects to the locally running Envoy instance inside the pod so that traffic is then forwarded to the target service.
Istio-proxy sidecars keep a representation of the configured, "discoverable" services and clusters. We can see the service registered by the Route Discovery Service (RDS) API by querying localhost:15000/routes.
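The same Envoy admin interface exposes other read-only endpoints as well, for example /clusters for the upstream cluster view, which can be queried in the same way once the pod name has been captured as in step 2.1 below:

$ oc exec $injected_pod -c istio-proxy -- curl -s localhost:15000/clusters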
2.1
$ oc apply -f examples/composer.yaml
$ injected_pod=`oc get pods -l app=composer -o jsonpath={.items..metadata.name}`
$ oc exec $injected_pod -c istio-proxy -- curl -s localhost:15000/routes | jq
{
"version_info": "hash_bbfd053f4074d403",
"route_config_name": "8080",
"cluster_name": "rds",
"route_table_dump": {
"name": "8080",
"virtual_hosts": [
{
"name": "composer.istio-system.svc.cluster.local|http",
"domains": [
"composer:8080",
"composer",
"composer.istio-system:8080",
"composer.istio-system",
"composer.istio-system.svc:8080",
"composer.istio-system.svc",
"composer.istio-system.svc.cluster:8080",
"composer.istio-system.svc.cluster",
"composer.istio-system.svc.cluster.local:8080",
"composer.istio-system.svc.cluster.local",
"172.27.213.147:8080",
"172.27.213.147"
],
"routes": [
{
"match": {
"prefix": "/"
},
"route": {
"cluster": "out.ec0366219152fbf81716c7003fb03b310968130e"
}
}
]
},
(Optional)
$ oc expose svc composer
If you’re accessing the service from outside the cluster, say from your laptop, first expose the route and then add an entry for it to your /etc/hosts file, pointing to a reachable OpenShift router IP address.
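For example, assuming 203.0.113.10 is one of your router node IPs (a placeholder address), the entry would look like:

# /etc/hosts on your laptop (placeholder router IP)
203.0.113.10   composer-istio-system.apps.rhcloud.com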

Step 3. Visualizing the Service Mesh
At this point we should be able to do a few verification steps. Is the composer service receiving and reporting back traffic to the Mixer pod?
3.1
$ oc logs istio-mixer-podname -c mixer | grep -i composer
destination.service : composer.istio-system.svc.cluster.local
destination.uid : kubernetes://composer-2709687814-3rkk1.istio-system
request.headers : map[x-b3-sampled:1 x-b3-spanid:00008c6be65ecd8c :authority:composer-istio-system.apps.rhcloud.com user-agent:curl/7.29.0 x-forwarded-host:composer-istio-system.apps.rhcloud.com :method:GET accept:*/* x-ot-span-context:00008c6be65ecd8c;00008c6be65ecd8c;0000000000000000 forwarded:for=54.191.235.23;host=composer-istio-system.apps.rhcloud.com;proto=http x-forwarded-for:54.191.235.23 :path:/login x-request-id:4766f3fe-2b8b-9423-803d-a93e5994563d x-forwarded-port:80 x-b3-traceid:00008c6be65ecd8c x-forwarded-proto:http]
request.host : composer-istio-system.apps.rhcloud.com
destination.labels : map[app:composer pod-template-hash:2709687814]
destination.service : composer.istio-system.svc.cluster.local
Next, generate some traffic to the service endpoint by running:
3.2
$ curl composer-istio-system.apps.rhcloud.com/login
Is the added service visible in the service mesh?
3.3
$ curl servicegraph-istio-system.apps.rhcloud.com/graph
{"nodes":{"composer.istio-system (unknown)":{},"unknown (unknown)":{}},"edges":[{"source":"unknown (unknown)","target":"composer.istio-system (unknown)","labels":{"reqs/sec":"0.030508"}}]}

This visual servicegraph is accessible at http://<servicegraph_url>/dotviz
All of the above is purely API driven. Using the previous pattern of deploying sidecar-injected Kubernetes deployments and services, we'll now add a peer, a member service, and an ingress gateway to our deployment.
3.4
$ oc apply -f examples/{vp0.yaml,membersrvc.yaml,ingress_hyperledger.yaml}
service "vp0" configured
deployment "vp0" configured
service "membersrvc" configured
deployment "membersrvc" configured
ingress "gateway" configured$ oc describe ingress gateway
Name: gateway
Namespace: istio-system
Address: af86a3923af1f11e7a5800244344b7a3-1404950415.us-west-2.elb.amazonaws.com
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
*
/login composer:8080 (<none>)
/chain/blocks/0 vp0:7050 ()
3.5
$ oc logs vp0-2099924350-5393f -c vp0
2017-10-12 08:32:04.927 UTC [nodeCmd] initSysCCs -> INFO 189 Deployed system chaincodess
2017-10-12 08:32:04.927 UTC [nodeCmd] serve -> INFO 18a Starting peer with ID=[name:"vp0" ], network ID=[dev], address=[172.21.0.18:7051]
2017-10-12 08:32:04.928 UTC [nodeCmd] serve -> INFO 18b Started peer with ID=[name:"vp0" ], network ID=[dev], address=[172.21.0.18:7051]
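For reference, a sketch of what the gateway ingress applied in 3.4 could contain, reconstructed from the oc describe output above rather than copied verbatim from examples/ingress_hyperledger.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway
  annotations:
    kubernetes.io/ingress.class: istio   # handled by the Istio ingress controller
spec:
  rules:
  - http:
      paths:
      - path: /login
        backend:
          serviceName: composer          # UI traffic goes to the composer service
          servicePort: 8080
      - path: /chain/blocks/0
        backend:
          serviceName: vp0               # peer API traffic goes to the vp0 service
          servicePort: 7050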
After some time, refreshing the servicegraph page in the browser displays the added microservices:

Prometheus is deployed as an add-on to Istio and is required for servicegraph to work correctly; otherwise, pod metrics won't be scraped.
spec:
containers:
- name: servicegraph
image: docker.io/istio/servicegraph:0.2.7
ports:
- containerPort: 8088
args:
- --prometheusAddr=http://prometheus:9090
You can also query all kinds of metrics in the Prometheus /graph UI. For example, request_count:
request_count{destination_service="composer.istio-system.svc.cluster.local",destination_version="unknown",instance="istio-mixer.istio-system:42422",job="istio-mesh",response_code="200",source_service="unknown",source_version="unknown"}
Hyperledger Composer is the UI and uses mostly HTTP traffic, while the Hyperledger Fabric peer generally uses both HTTP and gRPC endpoints.
Summary
The purpose here was to describe and visualize how the service mesh is created and connected, without being concerned about the details of the individual components or even about how they operate internally.
You may also be interested in a very simple Hyperledger-based blockchain playground environment, running on a protocol-agnostic service mesh, which we recently deployed. The full deployment is described in a file called blockchain.yaml.