An application operator is a rather new and Kubernetes-specific role you might be taking on. As an application operator, you will be installing, updating and maintaining apps in a Kubernetes cluster.

Good practices are still sparsely documented and tribal knowledge dominates this realm. I'd like to equip aspiring Kubernetes application operators with a set of basic good practices for managing Kubernetes apps and share some tips and tricks to make your life and the life of your users easier.

Let's first have a look at how to deal with different kinds of apps, from legacy apps to Kubernetes-native apps, and then discuss good practices you, as an application operator, should at least be aware of.

  • Monolithic legacy apps. While I'm not a big fan of the term legacy apps, it sort of accurately describes the class of apps we're talking about here. Take, for example, a stock application like WordPress, or think of a closed-source, binary-only app you need to deploy in your Kubernetes cluster. These kinds of apps will typically not be aware of Kubernetes at all, and you'll likely try your best to configure them in a way that they don't have to be. In this context, it helps to remember that pods were modelled to functionally resemble a machine: apps running in the containers of a pod see each other on localhost, and you can use volumes to share larger quantities of data in an efficient manner.
  • Containerized microservices apps. These kinds of apps are typically 12-factor apps, written in a microservices fashion, and are mostly stateless. There are a couple of things you can do to provide for a clean setup, including using config maps rather than environment variables or leveraging the downward API to query a range of things, from pod labels to resource limits (see the sketch after this list).
  • Kubernetes-native apps. This class of apps has not only been written with Kubernetes in mind but also acts as a first-class citizen, usually employing custom resources and talking to the API server for a range of tasks, from watches to actively changing the cluster state.
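
To illustrate the downward API mentioned above, here's a minimal sketch of a pod that mounts its own labels as a file; the pod name, image, and mount path are all illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
  labels:
    app: demo
spec:
  containers:
  - name: main
    image: busybox
    # print the labels exposed via the downward API, then stay around
    command: ["sh", "-c", "cat /etc/podinfo/labels && sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels

Inside the container, /etc/podinfo/labels then contains the pod's labels as key="value" pairs, without the app having to know anything about Kubernetes.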

To better understand the typical interactions, let's have a concrete look at how a Kubernetes-native app would access the API server. We're using a manual approach via shell commands here; obviously, an app would do this programmatically, using one of the many libraries, such as client-go or the Python client lib.

As preparation, create a namespace and a plain pod. All that matters here is that the pod has curl available, so feel free to replace my container image with whatever you prefer:

$ kubectl create ns native

$ kubectl -n=native run -it --rm jumpod \
--restart=Never --image=quay.io/mhausenblas/jump:0.2

Now you're in the pod and can start exploring:

/ $ ls /var/run/secrets/kubernetes.io/serviceaccount/
ca.crt namespace service-ca.crt token
/ $ cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
native

So Kubernetes was kind enough to provide us with a starter kit to talk to its API server. The ca.crt you see above allows us to verify the identity of the API server and thereby avoid falling victim to a person-in-the-middle attack. There's also a file called namespace that conveniently tells us in which Kubernetes namespace we're running.

We want to access the API server, right? For that, we need two things: to tell curl how to make sure we're talking with the actual API server (using the $CURL_CA_BUNDLE environment variable below), and to provide a JSON Web Token (JWT) bearer token in the Authorization header, which is also part of the starter kit:

/ $ export CURL_CA_BUNDLE=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
/ $ APISERVERTOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
/ $ curl -H "Authorization: Bearer $APISERVERTOKEN" https://kubernetes.default
{
  "paths": [
    "/api",
    "/api/v1",
    ...
    "/swaggerapi",
    "/version",
    "/version/openshift"
  ]
}
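
With that starter kit you can go further; for example, the following lists the pods in our own namespace. Note that, depending on your cluster's authorization setup, the default service account may well lack the permissions for this call:

/ $ curl -H "Authorization: Bearer $APISERVERTOKEN" \
https://kubernetes.default/api/v1/namespaces/native/pods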

OK, that was some fun. Note that the above setup uses the default service account that you get "for free" on a per-namespace basis. However, this is not good form, and there are a couple of other things we can improve as well.

So let's talk about good practices around deploying and maintaining apps on Kubernetes from an application operator's point of view, with the above scenario as a starting point.

Using namespaces

You should always create a dedicated namespace (or: project, in OpenShift terminology) for your apps and not use the default namespace. A namespace is a great device to group environments (dev/prod), applications, or teams. Another, sometimes orthogonally used, strategy is to create multiple clusters for different environments or teams. Namespaces represent the basic unit of management, enabling you to set defaults for resource limits, enforce resource quotas, define network policies to control communication between apps, and, most importantly, to define proper access control for all your apps and users.
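
As a sketch of what such per-namespace guardrails can look like, here's an illustrative ResourceQuota that caps the aggregate compute resources of everything running in a namespace (the name and values are made up):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi

You'd apply it to a namespace with, say, kubectl -n myapp apply -f quota.yaml.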

While using namespaces to deploy apps is strongly encouraged, don't hard-code them in your YAML manifests; if you're using a templating system such as Helm, make the namespace a variable there. Rather, provide the namespace at runtime, like so: kubectl -n xxx. Tip: if you put the namespace as the first argument, it's easier to re-use commands in the shell.
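
The payoff is that one and the same manifest can be rolled out into any environment; the namespace and file names here are just for illustration:

$ kubectl -n dev apply -f myapp.yaml
$ kubectl -n prod apply -f myapp.yaml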

Another nice feature of namespaces is that they enable cascaded deletion, which is a fancy way of saying that when you do kubectl delete ns XXX, all the objects in the namespace XXX, from pods to services, will be deleted.
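
For example, deleting the namespace we created at the beginning removes the jump pod along with everything else in it:

$ kubectl delete ns native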

Writing good manifests

Let's look at a few good practices for writing reusable, secure, and flexible Kubernetes manifests (a manifest sketch pulling several of them together follows the list):

  • Just like with namespaces, create and use a dedicated service account per app and do not use the default service account. If you have a dedicated service account, one low-hanging fruit is that you can simplify pulling images from a private container registry: rather than setting imagePullSecrets on a per-pod basis, the pods inherit this property from the service account they're using. More importantly, a service account is the basis for providing fine-grained access control via RBAC, enabling you to implement the least-privilege principle throughout.
  • Do group related objects (for example: config maps, deployments, services) into a single YAML file, separated with ---. Also, do pay attention to the order, because not all objects are fully decoupled; for example, create a service before its corresponding workloads, such as a deployment, so that environment-variable-based service discovery is possible.
  • For production, always provide a tag for your container image and avoid using the :latest tag; for the development phase, using no tag is OK, but in this case do use imagePullPolicy: Always.
  • Make sure containers are not running as root and use pod security policies to enforce this.
  • Use init containers to set up your application container; this can range from simple 3rd-party API checks to database schema migrations.
  • Always set resource requests and limits, as these are too important to ignore; note that only when requests and limits are in place (and set equal for every container) does your pod get the Guaranteed QoS class. To automate the process of determining resource consumption, I suggest you keep an eye on the work we're carrying out in SIG Autoscaling around the Vertical Pod Autoscaler.
  • Always configure liveness and readiness probes, as they play an essential role in pod life-cycle management as well as for services and deployments.
  • Put all sensitive data such as API keys or passwords in Kubernetes secrets; those are encrypted on the wire, can be encrypted at rest in the control plane, and are made available in the pod through a tmpfs mount. Don't pass sensitive data via environment variables, since an app might inadvertently log it and leave traces of the sensitive data on disk.
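
To make these points concrete, here's a sketch of a manifest that pulls several of them together: a dedicated service account, the service defined before the deployment, a pinned image tag, a non-root security context, resource requests and limits, and both probes. All names, the image, and the health endpoints are illustrative:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      serviceAccountName: myapp      # dedicated service account, not default
      containers:
      - name: myapp
        image: quay.io/example/myapp:1.2.3   # pinned tag, not :latest
        securityContext:
          runAsNonRoot: true
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8080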

Now that we know how solid Kubernetes manifests should look and behave, let's move to the last topic in this post: the act of rolling out the app.

Changing the state

Kubernetes has a powerful mechanism to change the cluster state: kubectl apply, which makes sure that if the resources don't exist yet they will be created, and if anything has changed, they will be updated. The apply command uses a nifty three-way diff between the previous configuration, your input, and the current configuration of the resource in order to determine what to update. Tip: when doing a kubectl apply, use the --record flag, which causes the actual command to be captured as an annotation in the resource. This comes in handy, for example, if you do a kubectl rollout history later, since this information will be displayed in the CHANGE-CAUSE column. Don't ask me why it's not captured by default.
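
In practice, that looks like the following; the manifest file and deployment name are illustrative:

$ kubectl -n myapp apply -f deployment.yaml --record
$ kubectl -n myapp rollout history deployment/myapp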

Also, in this context, consider using higher-level abstractions, such as Helm, to manage your apps. While you might need to invest a bit of time to find the right application life-cycle management tool for your needs, and you typically have to learn a few new concepts and commands, it will pay off quickly.
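
For example, here's a minimal sketch using Helm 2's CLI, where the release name blog and the stable/wordpress chart are just examples; the entire install-upgrade-rollback life cycle boils down to three commands:

$ helm install stable/wordpress --name blog
$ helm upgrade blog stable/wordpress
$ helm rollback blog 1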

Last but not least, have a strategy for when something goes wrong: by that I mean your cluster catching fire or even you simply fat-fingering a command. Two schools of thought exist here: one camp believes that the entire state (code, config, credentials, etc.) should live in a repository and hence can be replayed from there. If that is not what you can or want to do, have a look at backup solutions such as ARK or ReShifter.

With this, I hope you were able to pick up a good practice or two, and may your job as a Kubernetes application operator never be boring!