For some time I've been hearing about Helm and have been asked by people how they could deploy Charts, the format Helm uses to package an application, into OpenShift.

One of the really nice features that Minishift >= 1.2.0 introduced was the concept of an addon, a way to provide additional capabilities to your local Minishift environment. As this feature is really interesting, and evolving nicely, I have developed some addons that let me extend my Minishift capabilities by issuing a single command.

In this post I will describe how to deploy Helm into Minishift’s OpenShift, and then I will deploy a sample application using a Helm Chart.

Note that this method is not supported and exists for the sole purpose of demonstrating the work that has been done around Minishift addons. If you want to use what is described here, it’s at your own risk.

Helm in a Handbasket

This description from the Helm documentation perfectly introduces Helm in a few sentences:

Helm is a tool that streamlines installing and managing Kubernetes applications. Think of it like apt/yum/homebrew for Kubernetes.

  • Helm has two parts: a client (helm) and a server (tiller)
  • Tiller runs inside of your Kubernetes cluster, and manages releases (installations) of your charts.
  • Helm runs on your laptop, CI/CD, or wherever you want it to run.
  • Charts are Helm packages that contain at least two things:
      • A description of the package (Chart.yaml)
      • One or more templates, which contain Kubernetes manifest files
  • Charts can be stored on disk, or fetched from remote chart repositories (like Debian or RedHat packages)

Install

As Helm consists of two parts, a client (helm) and a server (tiller), you will definitely need to install the client on your laptop. To find the latest client, go to the GitHub releases page and download the binary that suits your operating system.

Unpack the helm binary, add it to your PATH, and you are good to go!
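As a sketch of that step, here’s a small shell helper that unpacks a downloaded release archive and drops the client into a directory of your choice. The archive name in the comment is only an example; use the one you actually downloaded for your OS.

```shell
# Sketch: unpack a downloaded helm release archive and install the client.
# Official archives contain a single <os>-<arch>/ directory with the binary.
install_helm_client() {
  archive="$1"               # e.g. helm-v2.5.0-linux-amd64.tar.gz
  dest="${2:-$HOME/bin}"     # where to put the helm binary
  tmp="$(mktemp -d)"
  mkdir -p "$dest"
  tar -xzf "$archive" -C "$tmp"
  mv "$tmp"/*/helm "$dest/helm"
  chmod +x "$dest/helm"
  rm -rf "$tmp"
}
```

After running it, make sure the destination directory (here $HOME/bin by default) is on your PATH.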

The server part, tiller, will be installed in Minishift via an addon.

Start Minishift (Use VirtualBox)

There is already a guide on how to install Minishift, so I will assume that you have followed it and that you already have Minishift working.

I will also assume that you're using the latest Minishift version available today (1.3.0), or a newer one if there is one available when you read this, because this version is required for the addon I created.

You’ll need to create a fresh Minishift VM with a specific setup to continue with the steps in this post. Also, note that if you’re an experienced Minishift user, you might have already followed some of these steps.

First, I will install the default addons that ship with Minishift, and then enable an addon that creates an admin user, so I can easily log into the Minishift OpenShift web UI as an administrator of the platform.

$ minishift addons install --defaults
$ minishift addons enable admin-user

Enabling the addon instructs Minishift to apply it to every instance created from this point forward, so this is a one-time step.

Now, I will create my Minishift instance. I'm using the latest OpenShift version available at the time of writing, but you could just use the default shipped with Minishift. I'm using VirtualBox as the virtualization technology, but you can use whichever of the available technologies you prefer for your operating system. Finally, I like to give the VM enough CPU and memory so that I can work comfortably.

$ minishift start --vm-driver=virtualbox --openshift-version=v3.6.0-rc.0 --cpus=2 --memory=8192
Starting local OpenShift cluster using 'virtualbox' hypervisor...
Downloading ISO 'https://github.com/minishift/minishift-b2d-iso/releases/download/v1.0.2/minishift-b2d.iso'
40.00 MiB / 40.00 MiB [===================================================================================================================================================================================================] 100.00% 0s
Downloading OpenShift binary 'oc' version 'v3.6.0-rc.0'
33.74 MiB / 33.74 MiB [===================================================================================================================================================================================================] 100.00% 0s
Starting OpenShift using openshift/origin:v3.6.0-rc.0
Pulling image openshift/origin:v3.6.0-rc.0
Pulled 1/4 layers, 26% complete
Pulled 2/4 layers, 64% complete
Pulled 3/4 layers, 77% complete
Pulled 4/4 layers, 100% complete
Extracting
Image pull complete
OpenShift server started.

The server is accessible via web console at:
https://192.168.99.100:8443

You are logged in as:
User: developer
Password: [any value]

To login as administrator:
oc login -u system:admin

-- Applying addon 'admin-user':..

Install Tiller (Server Side)

Now that we have Minishift up and running, we can install Helm's server part, tiller. For this, I have created an addon that simplifies the installation.

The process is as simple as installing my addon and applying the addon, so that Helm tiller will be provisioned one time on this machine. Note that I use apply instead of enable, as I just want this install to happen for the current Minishift instance and not every time I create a new Minishift instance.

$ cd /tmp
$ git clone https://github.com/jorgemoralespou/minishift-addons
$ cd minishift-addons
$ minishift addons install helm
$ minishift addons apply helm
-- Applying addon 'helm':......
Get Tiller host URL by running these commands in the shell:
export TILLER_HOST="192.168.99.100:$(oc get svc/tiller -o jsonpath='{.spec.ports[0].nodePort}' -n kube-system --as=system:admin)"

Initialize the `helm` client, if not done already

e.g.
helm init -c

Search for an application:

e.g.
helm search

And now deploy an application

e.g.
helm install <APP> --host $TILLER_HOST --kube-context default/192-168-99-100:8443/system:admin

Now that we have installed tiller, we can log into the Minishift OpenShift web UI as admin. Remember that we enabled the admin-user addon, so there is an admin user (with password admin) to log in to the web UI.

$ minishift console

This will open the web UI in our browser.

And once we log in with the admin credentials, we will be able to see tiller deployed in the kube-system namespace.

tiller_overview

As you’ve probably noticed, it's the "#2" deployment. This is mostly because the original Helm deployment has been altered to use a dedicated serviceaccount, helm, which is granted the required cluster-admin permissions. As I like to do, I tried to minimize who gets these escalated permissions, limiting them to just the serviceaccount tiller will use.

NOTE: Helm currently has a shortcoming when it comes to working nicely in multitenant environments. Tiller requires the cluster-admin role to work properly if you want helm to deploy your applications to any namespace. It’s not possible to install it in an unprivileged manner in your own project/namespace just to deploy applications there; if you try this option, it will still require at least the cluster-reader role.

Also, tiller is exposed through a nodePort that we will use later, so we create an environment variable to refer to it.

Why do I expose a nodePort? Because otherwise helm creates a tunnel from the host to the tiller pod, which introduces dependencies on some libraries on the host.

$ export TILLER_HOST="$(minishift ip):$(oc get svc/tiller -o jsonpath='{.spec.ports[0].nodePort}' -n kube-system --as=system:admin)"

$ echo $TILLER_HOST
192.168.99.100:30609

Install Helm (Client Side)

It is time to configure our helm client to use tiller in Minishift. I presume you have already installed the helm binary and added it to your PATH, so you can use the helm client.

To verify it:

$ helm version
Client: &version.Version{SemVer:"v2.5.0", GitCommit:"012cb0ac1a1b2f888144ef5a67b8dab6c2d45be6", GitTreeState:"clean"}
Error: cannot connect to Tiller

Obviously it cannot connect to tiller. So let's configure our helm client instance:

$ helm init -c
Creating /Users/jmorales/.helm
Creating /Users/jmorales/.helm/repository
Creating /Users/jmorales/.helm/repository/cache
Creating /Users/jmorales/.helm/repository/local
Creating /Users/jmorales/.helm/plugins
Creating /Users/jmorales/.helm/starters
Creating /Users/jmorales/.helm/cache/archive
Creating /Users/jmorales/.helm/repository/repositories.yaml
$HELM_HOME has been configured at /Users/jmorales/.helm.
Not installing tiller due to 'client-only' flag having been set
Happy Helming!

Now, there are a few caveats we need to take into account:

  • helm uses the HELM_HOST environment variable, or you need to pass the --host flag on every command.
  • helm requires a kube context with admin privileges; sudoer accounts are not an option. There is no environment variable to specify this, so helm uses the current-context defined in $KUBECONFIG unless another is specified on the command line.

In Minishift, the context for the admin user account is by default named default/192-168-99-100:8443/system:admin. Note that the IP might differ depending on your install.
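To avoid typing --host on every command, you can export HELM_HOST. The value below is just an example; use your own $TILLER_HOST as set earlier.

```shell
# With HELM_HOST set, the helm client targets this tiller endpoint and
# the --host flag can be dropped. Example value; use your own $TILLER_HOST.
export HELM_HOST="192.168.99.100:30609"
```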

Deploy a Sample Application

Now it's time to deploy an application, as this has really been the goal of what we've done so far.

I will use chronograf as the sample application (more details on this at the end of the post), and I will create an OpenShift project for it.

$ oc new-project chronograf
$ helm install stable/chronograf --host $TILLER_HOST --kube-context default/192-168-99-100:8443/system:admin -n chronograf --namespace chronograf
NAME: chronograf
LAST DEPLOYED: Wed Jul 19 14:29:17 2017
NAMESPACE: chronograf
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
chronograf-chronograf 172.30.177.119 none 80/TCP 1s

==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
chronograf-chronograf 1 1 1 0 1s

NOTES:
Chronograf can be accessed via port 80 on the following DNS name from within your cluster:

- http://chronograf-chronograf.chronograf

You can easily connect to the remote instance from your browser. Forward the webserver port to localhost:8888

- kubectl port-forward --namespace chronograf $(kubectl get pods --namespace chronograf -l app=chronograf-chronograf -o jsonpath='{ .items[0].metadata.name }') 8888

You can also connect to the container running Chronograf. To open a shell session in the pod run the following:

- kubectl exec -i -t --namespace chronograf $(kubectl get pods --namespace chronograf -l app=chronograf-chronograf -o jsonpath='{.items[0].metadata.name}') /bin/sh

To trail the logs for the Chronograf pod run the following:

- kubectl logs -f --namespace chronograf $(kubectl get pods --namespace chronograf -l app=chronograf-chronograf -o jsonpath='{ .items[0].metadata.name }')

And you can see it deployed through the Minishift OpenShift web UI.

sample-app

For convenience, you can wrap the helm command line in a script that abstracts away the --host and --kube-context parameters. But that’s up to you.
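Such a wrapper could be as simple as a shell function in your profile. This is a sketch: the function name is my own invention, the context name is from my install, and TILLER_HOST is assumed to be set as shown earlier.

```shell
# Sketch of a wrapper: inject --host and --kube-context into every helm call.
# "helms" is a hypothetical name; the context name below is from my install.
helms() {
  helm "$@" \
    --host "${TILLER_HOST:?set TILLER_HOST first}" \
    --kube-context "default/192-168-99-100:8443/system:admin"
}
```

With this in place, the chronograf install above becomes just `helms install stable/chronograf -n chronograf --namespace chronograf`.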

Summary

In this post I have shown you how to get Helm up and running and how to deploy applications packaged as Charts. As mentioned earlier, I used chronograf as the sample application mainly because many of the applications packaged as Helm charts and shipped in their repositories suffer from security concerns: many are required to run as privileged, or with a specific user id. Some others use Kubernetes alpha annotations not supported on the latest OpenShift version I was using, so I recommend changing to the supported beta annotation (for example, the mysql pvc, which uses the alpha version of the annotation if you don't explicitly specify a storageClass).

There is a wide range of applications packaged as Helm Charts available on the internet, so now you can easily take advantage of them.

As a developer, you now have access to more technology to use. But remember, more is not always better.

I would like to hear from you! If you have ideas or comments, please do not hesitate to tweet them to me.