This is a guest post by Michaël Morello, principal software engineer at Elastic. 

Now you can run Elasticsearch, Kibana, or the entire Elastic Stack on Red Hat OpenShift with Elastic Cloud on Kubernetes (ECK). It’s the easiest way to get started with the official offering from Elastic. Let’s explore how you can get up and running quickly, as well as how to use ECK for some of the most common use cases.

Introduction

Now it’s even easier to get the Elastic operator running on OpenShift and integrate it into your OpenShift ecosystem. First, we’ll install the operator through the OpenShift OperatorHub web interface. Then, we’ll see how to leverage the service serving certificates to encrypt the HTTP traffic between the Elastic Stack and the OpenShift components. We'll also see how this makes it easy to create re-encrypt routes to expose Elastic services outside of your OpenShift cluster.

OpenShift comes with preinstalled monitoring components. We’ll show you how to deploy a Metricbeat instance to grab OpenShift cluster metrics, store them in Elasticsearch, and visualize them in Kibana.

Prerequisites

To follow these instructions, you must first:

  • Deploy an OpenShift 4.6 cluster with the monitoring stack enabled.
  • Log in as an administrator.
  • Have a dedicated namespace or OpenShift project to hold the Elastic components.

Throughout this post, we'll use a project named elastic-monitoring. Create it with:

oc new-project elastic-monitoring

Deploy the Elasticsearch (ECK) operator on OpenShift

The certified Elastic operator is available in the OperatorHub. It only takes a few clicks to install it through the OpenShift console:

  • In the OpenShift web console, go to the left pane and select Administrator in the dropdown menu.
  • Select Operators, then OperatorHub, and search for "Elasticsearch (ECK) Operator".
  • Click the tile for the certified operator (skip the community version). Click Install, leave the default selection, and click Install again.

Congratulations, the operator is now running on your OpenShift cluster!

The operator is deployed in the openshift-operators namespace. To get its status from the command line, run the following command:

$ oc get pods -n openshift-operators -l control-plane=elastic-operator
NAME                               READY   STATUS    RESTARTS   AGE
elastic-operator-bc7bbd885-j2sth   1/1     Running   0          53m

To get the operator logs, run this command:

$ oc logs -l control-plane=elastic-operator -n openshift-operators -f
{"log.level":"info","@timestamp":"2020-11-16T09:10:57.231Z","log.logger":"association.kb-es-association-controller","message":"Starting reconciliation run","service.version":"1.3.0+6db1914b","service.type":"eck","ecs.version":"1.4.0","iteration":10,"namespace":"openshift-monitoring","kb_name":"kibana"}
...
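
The operator logs are structured JSON, so a quick way to make them easier to scan is to pipe them through jq (assuming jq is available on your workstation):

oc logs -l control-plane=elastic-operator -n openshift-operators --tail=50 | jq -r '.message'
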
Deploy an Elasticsearch cluster and Kibana

We want to deploy Elasticsearch to collect metrics from your OpenShift cluster and use Kibana to visualize them.

Let's deploy an Elasticsearch cluster with three data nodes, with settings that allow it to handle at least 100GB of data. Apply the following manifest:

cat <<EOF | oc apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: elastic-monitoring
spec:
  version: 7.10.0
  nodeSets:
    - name: default
      count: 3
      podTemplate:
        spec:
          containers:
            - name: elasticsearch
              env:
                - name: ES_JAVA_OPTS
                  value: -Xms4g -Xmx4g
              resources:
                requests:
                  memory: 8Gi
                  cpu: 1
                limits:
                  memory: 8Gi
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 100Gi
            storageClassName: standard
      config:
        node.roles: [ "master", "data" ]
        node.store.allow_mmap: false
EOF
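
While the cluster bootstraps, you can watch the Elasticsearch pods come up using the cluster-name label that ECK applies to them:

oc get pods -n elastic-monitoring -l elasticsearch.k8s.elastic.co/cluster-name=elasticsearch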

If you want more information on how to customize the volume claim or the podTemplate, see the documentation.

To visualize your metrics through dashboards, deploy a Kibana instance associated with the Elasticsearch cluster you just created:

cat <<EOF | oc apply -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  namespace: elastic-monitoring
spec:
  version: 7.10.0
  count: 1
  elasticsearchRef:
    name: elasticsearch
EOF

With the elasticsearchRef parameter, an encrypted connection between Kibana and Elasticsearch is automatically established. Make sure that the status for both Elasticsearch and Kibana is green:

$ oc get es,kb -n elastic-monitoring
NAME                                                       HEALTH   NODES   VERSION   PHASE   AGE
elasticsearch.elasticsearch.k8s.elastic.co/elasticsearch   green    3       7.10.0    Ready   52m

NAME                                  HEALTH   NODES   VERSION   AGE
kibana.kibana.k8s.elastic.co/kibana   green    1       7.10.0    48m

In the next section, we’ll see how to expose Kibana and encrypt the traffic from your browser to Kibana using a re-encrypt route.

Securing traffic with the service serving certificates

We now want to access Kibana with a web browser. Using a re-encrypt route is a common solution on OpenShift. Re-encrypt routes allow you to manage potentially sensitive public certificates at the router level, while still relying on a custom and private certificate authority at the pod level.

Let's see how to create a re-encrypt route and establish a trust relationship between the router and Kibana.

The OpenShift Service CA Operator, installed on OpenShift by default, helps secure communications between services in the cluster. Certificates issued by the Service CA Operator are trusted by other OpenShift components, which makes it easy to encrypt traffic to components like routers or the Prometheus server, as we'll see later in this blog post. By default, the Elastic operator deploys its own certificate authority to encrypt HTTP traffic, but it can also delegate that task and load the TLS key and certificate from any Secret.
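
As an aside, if one of your own workloads needs to trust certificates issued by the Service CA, OpenShift can also inject the CA bundle into a ConfigMap. Here is a minimal sketch (the ConfigMap name is arbitrary); OpenShift populates it with a service-ca.crt key that your pods can mount:

cat <<EOF | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: service-ca-bundle
  namespace: elastic-monitoring
  annotations:
    service.beta.openshift.io/inject-cabundle: "true"
EOF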

First, we need to set the right annotation on the Kibana service generated by the operator, so the OpenShift Service CA Operator knows we want a TLS certificate for that service. Then we need to update the Kibana spec to use that certificate. Both changes can be made in the Kibana manifest:

cat <<EOF | oc apply -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  namespace: elastic-monitoring
spec:
  version: 7.10.0
  count: 1
  elasticsearchRef:
    name: elasticsearch
  http:
    service:
      metadata:
        annotations:
          # request ECK to create the Kibana service with the following annotation
          service.beta.openshift.io/serving-cert-secret-name: "kibana-openshift-tls"
    tls:
      certificate:
        # use the previously created Secret on the Kibana endpoint
        secretName: kibana-openshift-tls
EOF
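
You can confirm that the Service CA Operator created the Secret (it can take a few seconds to appear):

oc get secret kibana-openshift-tls -n elastic-monitoring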

The Kibana service is now using a certificate signed by the OpenShift certificate authority. To check it, run the following command in the Kibana pod:

bash-4.4$ curl --insecure -vvI https://127.0.0.1:5601
* Server certificate:
*  subject: CN=kibana-kb-http.elastic-monitoring.svc
*  start date: Nov 16 10:08:01 2020 GMT
*  expire date: Nov 16 10:08:02 2022 GMT
*  issuer: CN=openshift-service-serving-signer@1605510277

Create the re-encrypt route and use the public certificate of your choice:

cat <<EOF | oc apply -f -
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: kibana
  namespace: elastic-monitoring
spec:
  host: <your public hostname here>
  to:
    kind: Service
    name: kibana-kb-http
    weight: 100
  port:
    targetPort: https
  tls:
    termination: reencrypt
    certificate: |
      -----BEGIN CERTIFICATE-----
      <public certificate from your certification authority>
      -----END CERTIFICATE-----
    key: |
      -----BEGIN RSA PRIVATE KEY-----
      <private key of the public certificate>
      -----END RSA PRIVATE KEY-----
    insecureEdgeTerminationPolicy: Redirect
  wildcardPolicy: None
EOF
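
To confirm that the route was admitted and see its public host, run:

oc get route kibana -n elastic-monitoring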

That's it! The connection from your browser to Kibana is now fully trusted and encrypted, not only from the browser to the OpenShift router but also inside the OpenShift cluster itself. You don't need to worry about internal certificate rotation, as the private certificate is renewed automatically by the OpenShift Service CA Operator.

Log in as the elastic user; you can retrieve its password with the following command:

oc get secrets/elasticsearch-es-elastic-user -n elastic-monitoring --template='{{.data.elastic | base64decode }}'
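
If you want to query Elasticsearch directly, here is a quick sketch using a local port-forward. Note that -k is needed because, unlike Kibana, Elasticsearch still serves a certificate signed by ECK's internal certificate authority:

oc port-forward service/elasticsearch-es-http 9200 -n elastic-monitoring &
PASSWORD=$(oc get secrets/elasticsearch-es-elastic-user -n elastic-monitoring --template='{{.data.elastic | base64decode }}')
curl -u "elastic:${PASSWORD}" -k "https://localhost:9200/_cluster/health?pretty"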

Collect and store cluster metrics

OpenShift comes with a lot of helpful components to monitor your cluster. For example, kube-state-metrics is already deployed and a Prometheus instance is already installed. By leveraging solutions like index lifecycle management (ILM) or searchable snapshots, Elasticsearch can help you create a long-term storage solution for those metrics.

Furthermore, since ECK also supports Beats, you can deploy Metricbeat to grab all those metrics, store them in Elasticsearch, and visualize them on the prebuilt dashboards that Metricbeat creates automatically.

Capture OpenShift cluster metrics with Metricbeat

Metricbeat can fetch metrics from various components. Let's see how to configure Metricbeat to get metrics from your hosts and from several core components of your OpenShift cluster. Before we go any further, we need to allow the Metricbeat pods to run with the privileged Security Context Constraint (SCC), which is required to collect some system metrics:

oc adm policy add-scc-to-user -z metricbeat -n elastic-monitoring privileged
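
This command assumes a metricbeat ServiceAccount exists in the elastic-monitoring project. The full manifest linked below should create it, but if you are assembling your own configuration, it boils down to:

oc create serviceaccount metricbeat -n elastic-monitoring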

The whole configuration for OpenShift 4.6 is available here. If you want to try it, apply the manifest with the following command:

oc apply -f https://ela.st/eck-ocp-blog-metricbeat

After a few moments, you can see that the Metricbeat health is green:

$ oc get beats -n elastic-monitoring
NAME                                             HEALTH   AVAILABLE   EXPECTED   TYPE         VERSION
beat.beat.k8s.elastic.co/metricbeat-hosts        green    6           6          metricbeat   7.10.0

Setting aside the authorization objects (Roles and RoleBindings), let's take a closer look at this configuration to understand what happens behind the scenes. For example, let's see how Metricbeat collects metrics from the controller manager, an important component of the Kubernetes control plane that runs core controllers like the DaemonSet controller, the StatefulSet controller, the Kubernetes garbage collector, and more.

The controller manager is running as a Pod and exposes metrics that you may want to grab to monitor your cluster. To collect metrics from the controller manager, we use the following configuration:

config:
  metricbeat:
    autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          templates:
            - condition:
                contains:
                  kubernetes.labels.app: kube-controller-manager
                  kubernetes.namespace: openshift-kube-controller-manager
              config:
                - module: kubernetes
                  enabled: true
                  metricsets:
                    - controllermanager
                  hosts: [ "https://${data.host}:10257" ]
                  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

In this code extract, Metricbeat discovers pods that run in the namespace openshift-kube-controller-manager and have the app label set to kube-controller-manager; these are the pods running the controller manager. Metricbeat authenticates with its ServiceAccount, using a token file mounted in the Metricbeat pod itself.
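
To spot-check that these metrics are arriving in Elasticsearch, you can reuse the port-forward and PASSWORD variable from the earlier sketch and query the default metricbeat-* indices:

curl -u "elastic:${PASSWORD}" -k "https://localhost:9200/metricbeat-*/_search?size=1&q=metricset.name:controllermanager&pretty"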

Once deployed, go to Kibana to visualize the controller manager metrics in the dashboard "[Metricbeat Kubernetes] Controller Manager dashboard".

Metricbeat ships five Kubernetes dashboards in Kibana, including one that gives an overview of your whole cluster.

There is also a dashboard dedicated to CoreDNS monitoring.

Collect OpenShift-specific metrics with the Prometheus federation API

The Prometheus instance installed by default on OpenShift grabs some OpenShift-specific metrics. For example, you may want to collect the cluster operator metrics already collected by Prometheus. Using the Prometheus federation API is a great starting point because it helps collect these metrics without configuring a Metricbeat module for each new cluster operator. Apply the following manifest to deploy a dedicated Metricbeat instance:

cat <<EOF | oc apply -f -
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: metricbeat-federate
  namespace: elastic-monitoring
spec:
  type: metricbeat
  version: 7.10.0
  elasticsearchRef:
    name: elasticsearch
  config:
    metricbeat:
      modules:
        - module: prometheus
          hosts: ["https://prometheus-k8s.openshift-monitoring.svc:9091"]
          metrics_path: '/federate'
          query:
            'match[]': '{job=~"cluster-.*"}'
          # Use service account based authorization:
          bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          ssl.certificate_authorities:
            - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
  deployment:
    podTemplate:
      spec:
        serviceAccountName: metricbeat
        automountServiceAccountToken: true
        containers:
          - args: ["-e", "-c", "/etc/beat.yml"]
            name: metricbeat
            env:
              - name: NODE_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: spec.nodeName
        volumes:
          - emptyDir: {}
            name: beat-data
EOF
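
As before, check that the new Beat reports a green health status:

oc get beats metricbeat-federate -n elastic-monitoring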

Those metrics are now safely stored in Elasticsearch and can be queried with Kibana.

Where to go next?

The certified Elastic Cloud on Kubernetes Operator is now available in your OpenShift web console; give it a try by following the instructions in this blog post. We focused on Metricbeat, but additional Beats such as Auditbeat or Packetbeat can help you observe your OpenShift cluster even further.

To understand why Elasticsearch 7.10 is a great place to store your metrics, check out our blog post on saving space and money with improved storage efficiency in Elasticsearch 7.10. Also, with version 7.10 Elasticsearch allows you to search data stored on object stores like S3 (beta feature in 7.10), opening new possibilities for high-volume observability-related data. Find out more in our Elasticsearch searchable snapshots blog post.

