In this article, I will walk you through the deployment of Keycloak, a user authentication and authorization tool, and how to integrate it with any Kubernetes web application without touching a single line of your app's code.

First, we will run Keycloak and configure it with some users and groups, then deploy a simple web application to your Kubernetes cluster (we will deploy a small Kubernetes cluster too). Finally, we will add the authentication layer to the app and look at the differences between authenticated and unauthenticated resources.

This way, you will have an infrastructure-provided tool to control user access with near-infinite configuration options.

I recommend reading the Keycloak site and documentation for best practices and configuration options. Here I show you a simple way to add authentication to applications, but no security scans or validations have been performed to check for holes or vulnerabilities. Talk to your Information Security team about any solution you plan to use in your environment.

Keycloak

Keycloak is an open-source identity and access management application that uses open protocols and is easily integrated with other providers. It is the upstream open-source project for Red Hat Single Sign-On.

Deploying Keycloak

The easiest way to deploy Keycloak is by using a container image. You can deploy it into your existing Kubernetes or OpenShift cluster, or standalone on a host with Docker or Podman.

Keycloak requires persistent storage, which can be a PV from Kubernetes or a local directory mapped into the container. In this article, I deployed Keycloak on a Linux VM using Docker.

docker run -d \
  --name keycloak \
  -p 8080:8080 \
  -p 8443:8443 \
  -e KEYCLOAK_USER=admin \
  -e KEYCLOAK_PASSWORD=admin \
  -e PROXY_ADDRESS_FORWARDING=true \
  -v $(pwd)/keycloak-db:/opt/jboss/keycloak/standalone/data \
  carlosedp/keycloak:v9.0.0

 

The image I used was built by me for both AMD64 and ARM64 architectures with multi-arch manifests. There is an official image, for AMD64 only, at jboss/keycloak.

Configuring Keycloak

Log in to the Keycloak web console at https://[host-IP]:8443/auth/admin, or use the nip.io service, which turns your URL into, for example, https://keycloak.192.168.164.1.nip.io:8443. This is easy to remember, and applications can use it to parse the headers. Use the administrator account created through the deployment environment variables (admin/admin).
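nip.io works without any DNS setup on your side because the target IP address is embedded in the hostname itself; the wildcard DNS service simply answers with that IP. This illustrative Python snippet (not part of the deployment) shows how the mapping works:

```python
import re

def nip_io_target(hostname: str) -> str:
    """Return the IP address a nip.io hostname resolves to.

    nip.io is a wildcard DNS service: any name of the form
    <anything>.<ip>.nip.io resolves to <ip>, so no DNS records
    need to be created for each application.
    """
    match = re.fullmatch(r"(?:[\w-]+\.)*(\d+\.\d+\.\d+\.\d+)\.nip\.io", hostname)
    if match is None:
        raise ValueError(f"not a nip.io hostname: {hostname}")
    return match.group(1)

print(nip_io_target("keycloak.192.168.164.1.nip.io"))   # 192.168.164.1
print(nip_io_target("nginx.192.168.164.130.nip.io"))    # 192.168.164.130
```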

Hover your cursor over the realm namespace (default is Master) at the top of the sidebar and click Add Realm.

Enter a realm name (this example uses "local") and click Create.

 

Configure an OpenID-Connect Client

With the new realm created, let's create a client, which is an application (or group of applications) that will authenticate against this realm.

  • Click Clients in the Sidebar and then click the Create button.
  • Enter the Client ID. We will use “gatekeeper”.
  • Select the Client Protocol “openid-connect” from the drop-down menu and click Save. You will be taken to the configuration Settings page of the “gatekeeper” client.
  • From the Access Type drop-down menu, select confidential. This is the access type for server-side applications.
  • In the Valid Redirect URIs box, you can add multiple URLs that are valid redirect targets after authentication. If this gatekeeper client will be used for multiple applications on your cluster, you can add a wildcard like https://your.domain.com/*. In my configuration, I added “http://*” and “https://*”.

 

 

Next, create mappers that add the “Groups” and “Audience” fields to the generated token. "Audience" is required by Gatekeeper to be able to authenticate users. The “Groups” field is optional, but it allows you to restrict access to your application to certain groups of users.

Go to the “Mappers” tab and click “Create”. Select “Audience” as the Mapper Type, name it “audience”, and in Included Client Audience, select the “gatekeeper” client you created. You need to type the client's initial letter here for it to show up.

 

Next, create the groups field mapping in a similar way. Click “Create”, select “Group Membership” on Mapper Type, name it “groups” and in the Token Claim Name, use “groups”. Turn off Full group path.

If you want to use different fields in the Gatekeeper configuration, you might need to add more field mappings to the token.
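To see what these mappers actually do, it helps to look at a decoded token payload. The sketch below decodes a JWT payload segment in Python; the payload shown is a hypothetical example of the claims the "audience" and "groups" mappers add, not a real Keycloak token:

```python
import base64
import json

# Hypothetical (unsigned, illustrative) token payload similar to what
# Keycloak issues once the "audience" and "groups" mappers are configured.
payload = {
    "iss": "https://keycloak.192.168.164.1.nip.io:8443/auth/realms/local",
    "aud": "gatekeeper",           # added by the "Audience" mapper
    "preferred_username": "testuser2",
    "groups": ["my-app"],          # added by the "Group Membership" mapper
}

def decode_jwt_part(part: str) -> dict:
    """Base64url-decode one dot-separated JWT segment into a dict."""
    padded = part + "=" * (-len(part) % 4)  # restore stripped '=' padding
    return json.loads(base64.urlsafe_b64decode(padded))

# Encode the payload the way it appears as the middle segment of a JWT
# (base64url without padding), then decode it back as a consumer would.
encoded = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode().rstrip("=")
claims = decode_jwt_part(encoded)
print(claims["aud"], claims["groups"])  # gatekeeper ['my-app']
```

Gatekeeper checks the `aud` claim against its configured client ID and, if you add a `groups` rule to its resources section, checks membership against the `groups` claim.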

Finally, go to the “Credentials” tab to get the Secret. It is needed to configure the Gatekeeper proxy sidecar container for your application.

Adding users and groups

Let's create two test users, one that is a member of a group that will have access to your application and one that is not a member of this group.

  • Click Users in the Manage sidebar to view the user information for the current realm (local).
  • Click Add User.
  • Enter a valid Username (this example uses testuser1) and any additional information (optional) and click Save.
  • Click the Credentials tab for this user and enter a password. Ensure the Temporary option is set to Off so that it does not prompt for a password change later on, and click Set Password. A pop-up window prompts for additional confirmation.

 

 

Now create a group. Click Groups in the sidebar then click New. Name it as you want (my-app in this example) and click Save.

 

 

Now create another user (for example, testuser2) the same way as the first, setting its password. When finished, click the Groups tab on the user page. On the right side, select the my-app group and click Join.

 

 

Instead of managing user creation inside Keycloak, you can integrate it with many authentication providers such as Google, GitHub, Facebook and more. There is a section at the end of this article on how to integrate with GitHub.

Deploying your application on Kubernetes

Now we will deploy a simple NGINX web server to demonstrate a front-end Web application that will be protected behind Keycloak.

You can also use Minikube, Minishift or CodeReady Containers, the local development version of OpenShift 4. Make sure your cluster has an Ingress Controller or Router to manage external access to published applications via URLs.

Creating the application

 

 

Here is a simple YAML file composed of a Deployment, a Service and an Ingress. With these three resources you can test it easily. Just copy the contents to a file (nginx.yaml, for example).
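If you are building the file from scratch, a minimal manifest matching this description could look like the following. This is a sketch consistent with the authenticated version later in the article; adjust the host IP to your environment:

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: service    # naming the port makes the auth migration easier
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: service    # points to the named container port
  selector:
    app: nginx
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: nginx.192.168.164.130.nip.io   # replace with your host IP
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
```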

Now get your Kubernetes host IP address and replace the IP in the line containing “- host:” in the Ingress resource above with the host IP, keeping the “nginx.” and “.nip.io” parts. Note that I'm naming the exposed port "service". This makes migrating to the authenticated model easier.

Apply the manifest into your cluster and check that NGINX is running.

kubectl apply -f nginx.yaml

kubectl get pods --all-namespaces

You can see that the NGINX pod is ready! Now you can access this application through http://nginx.[host-IP].nip.io/, for example http://nginx.192.168.164.130.nip.io.

 

Great! Now let’s add authentication to this page.

Adding authentication

The process is not too complex: Gatekeeper will run as a sidecar proxy to your container, meaning it runs as a container in the same pod as your application container, intercepting and proxying all web traffic to it.

On the first request, it redirects the browser to Keycloak for authentication. If authentication succeeds, Keycloak redirects back to Gatekeeper, where resource rules can be applied, such as only allowing access to certain URL paths or certain user groups (remember we added groups to the token). Then all traffic flows through the proxy to your app until the token expires, at which point a new authentication is required.

First, we need to change your application deployment to add the sidecar container. You can edit the original nginx.yaml with the changes I describe, or create a new file pasting all the resources below.

Deployment

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      - name: gatekeeper
        image: carlosedp/keycloak-gatekeeper:latest
        args:
        - --config=/etc/keycloak-gatekeeper.conf
        ports:
        - containerPort: 3000
          name: service
        volumeMounts:
        - name: gatekeeper-config
          mountPath: /etc/keycloak-gatekeeper.conf
          subPath: keycloak-gatekeeper.conf
        - name: gatekeeper-files
          mountPath: /html
      volumes:
      - name: gatekeeper-config
        configMap:
          name: gatekeeper-config
      - name: gatekeeper-files
        configMap:
          name: gatekeeper-files
What changed here from the original Deployment: we removed the ports: containerPort section from the NGINX container, since it no longer needs to be exposed; the proxy will be exposed instead. Then we added the new container, starting at the - name: gatekeeper line. It has more parameters, since it requires some ConfigMaps mounted as volumes. We also exposed its port (3000) and named it "service", just like the NGINX port was named.

Service

If you named the port in your container like I did, the Service needs no change, since it already points to a port called "service". If you used port numbers instead, adjust the Service resource to point to port 3000 from Gatekeeper instead of port 80 in your pod (which was exposed by NGINX). That's the targetPort line. The rest remains the same.

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: service
  selector:
    app: nginx
  type: ClusterIP

 

There is no change to the Ingress, but if you are creating a new file, paste the content below.

---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: nginx.192.168.164.130.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80

The ingress still points to the Service we created that uses port 80.

In comparison, the network flow is:

 

Before (without authentication): Ingress(port 80) -> Service(port 80) -> NGINX Pod(port 80)

After (with authentication): Ingress(port 80) -> Service(port 80) -> Gatekeeper container(port 3000) -> NGINX container(port 80)

Finally, we create two ConfigMaps: one holds the Gatekeeper configuration, and the other a web page shown when the user is forbidden access to the application.

Forbidden page

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: gatekeeper-files
  namespace: default
data:
  access-forbidden.html: |+
    <html lang="en"><head> <title>Access Forbidden</title><style>*{font-family: "Courier", "Courier New", "sans-serif"; margin:0; padding: 0;}body{background: #233142;}.whistle{width: 20%; fill: #f95959; margin: 100px 40%; text-align: left; transform: translate(-50%, -50%); transform: rotate(0); transform-origin: 80% 30%; animation: wiggle .2s infinite;}@keyframes wiggle{0%{transform: rotate(3deg);}50%{transform: rotate(0deg);}100%{transform: rotate(3deg);}}h1{margin-top: -100px; margin-bottom: 20px; color: #facf5a; text-align: center; font-size: 90px; font-weight: 800;}h2, a{color: #455d7a; text-align: center; font-size: 30px; text-transform: uppercase;}</style> </head><body> <use> <svg version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px" viewBox="0 0 1000 1000" enable-background="new 0 0 1000 1000" xml:space="preserve" class="whistle"><g><g transform="translate(0.000000,511.000000) scale(0.100000,-0.100000)"><path d="M4295.8,3963.2c-113-57.4-122.5-107.2-116.8-622.3l5.7-461.4l63.2-55.5c72.8-65.1,178.1-74.7,250.8-24.9c86.2,61.3,97.6,128.3,97.6,584c0,474.8-11.5,526.5-124.5,580.1C4393.4,4001.5,4372.4,4001.5,4295.8,3963.2z"/><path d="M3053.1,3134.2c-68.9-42.1-111-143.6-93.8-216.4c7.7-26.8,216.4-250.8,476.8-509.3c417.4-417.4,469.1-463.4,526.5-463.4c128.3,0,212.5,88.1,212.5,224c0,67-26.8,97.6-434.6,509.3c-241.2,241.2-459.5,449.9-488.2,465.3C3181.4,3180.1,3124,3178.2,3053.1,3134.2z"/><path d="M2653,1529.7C1644,1445.4,765.1,850,345.8-32.7C62.4-628.2,22.2-1317.4,234.8-1960.8C451.1-2621.3,947-3186.2,1584.6-3500.2c1018.6-501.6,2228.7-296.8,3040.5,515.1c317.8,317.8,561,723.7,670.1,1120.1c101.5,369.5,158.9,455.7,360,553.3c114.9,57.4,170.4,65.1,1487.7,229.8c752.5,93.8,1392,181.9,1420.7,193.4C8628.7-857.9,9900,1250.1,9900,1328.6c0,84.3-67,172.3-147.4,195.3c-51.7,15.3-790.8,19.1-2558,15.3l-2487.2-5.7l-55.5-63.2l-55.5-61.3v-344.6V719.8h-411.7h-411.7v325.5c0,509.3,11.5,499.7-616.5,494C2921,1537.3,2695.1,1533.5,2653,1529.7z"/></g></g></svg></use><h1>403</h1><h2>Not this time, access forbidden!</h2><h2><a href="/oauth/logout?redirect=https://google.com">Logout</h2></body></html>

 

Gatekeeper Configuration

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: gatekeeper-config
  namespace: default
data:
  keycloak-gatekeeper.conf: |+
    discovery-url: https://keycloak.192.168.164.1.nip.io:8443/auth/realms/local
    skip-openid-provider-tls-verify: true
    client-id: gatekeeper
    client-secret: 3d87097b-9f31-4457-89b3-a6578d21f759
    listen: :3000
    enable-refresh-tokens: true
    tls-cert:
    tls-private-key:
    redirection-url: http://nginx.192.168.164.130.nip.io
    secure-cookie: false
    encryption-key: vGcLt8ZUdPX5fXhtLZaPHZkGWHZrT6aa
    upstream-url: http://127.0.0.1:80/
    forbidden-page: /html/access-forbidden.html
    resources:
    - uri: /*
      groups:
      - my-app
Here I removed the configuration file comments but a fully commented one can be found here. The important parts are:

  • discovery-url is the URL of your Keycloak server with /auth/realms/[realm_name] at the end. I used the nip.io service here too.
  • skip-openid-provider-tls-verify is set to true since Keycloak has no valid certificate.
  • client-id is the client ID we set when creating the "gatekeeper" client on Keycloak.
  • client-secret is the Secret we obtained from the Credentials tab of the "gatekeeper" client on Keycloak.
  • redirection-url is the URL used by this application, the same one configured in the Ingress resource above.
  • secure-cookie is set to false since our exposed application (in redirection-url) uses HTTP instead of HTTPS.
  • upstream-url is the URL Gatekeeper forwards the traffic to. It's IP 127.0.0.1 because the NGINX container is in the same pod (so localhost), and port 80 because NGINX listens on port 80 by default. Your application might use a different port, but still 127.0.0.1.
  • The resources part is optional (you can remove it and the lines below it). Here I tell Gatekeeper to only allow members of the group "my-app" to access any page (/*) in this application. There are many rules available; check the documentation or the sample config.
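As a sketch of what richer rules can look like, the resources section accepts multiple entries. The paths and the "my-app-admins" group below are hypothetical, not from my config; check the Gatekeeper documentation for the full rule syntax:

```yaml
resources:
# Public paths: no authentication required at all
- uri: /health
  white-listed: true
- uri: /assets/*
  white-listed: true
# Restrict an admin area to a dedicated group and specific methods
- uri: /admin/*
  groups:
  - my-app-admins
  methods:
  - GET
  - POST
# Everything else requires membership in my-app
- uri: /*
  groups:
  - my-app
```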

The complete file can be downloaded from https://gist.github.com/carlosedp/80ea54104cc6303f04b3755033f9c4fe.

Apply everything with sudo kubectl apply -f nginx-auth.yaml if you pasted it all into the same file or downloaded it from the Gist.

You can see that the nginx pod has 2/2 containers ready: one for NGINX and one for Gatekeeper.

Now open a new browser window and test using the same URL as before: http://nginx.[host-IP].nip.io/, for example http://nginx.192.168.164.130.nip.io.

 

You can customize the Keycloak login page with colors, a logo and more. Log in with the user you created that is in the "my-app" group.


There you go: Keycloak redirected back to the application and NGINX shows the page. If you look at the logs, you can see Gatekeeper redirecting the request for authentication, followed by NGINX's own logs.

If you are curious about the logging tool, it's called "stern".

And what if we log in with the user that is not in the "my-app" group? Open a new browser window (or a private one, because of cookies), type the URL, log in, and:


 

There you go, no access. You can customize this page in the ConfigMap created previously.

More details and configuration options can be seen on Keycloak and Keycloak Gatekeeper documentation.

The Gatekeeper container image used in these manifests was built by me and is hosted on Docker Hub. I did this because the official images only support AMD64, while this one can be used on both AMD64 and ARM64 architectures.

Conclusion

As you can see, from the moment you have Keycloak deployed and an application running on your cluster, the changes that are required to add the authentication are minimal.

It's just a matter of deploying the sidecar container by adjusting your Deployment, changing the port in the Service (if needed) and creating the ConfigMaps, and you instantly have authentication. Then create your users, assign them to groups and adjust as required.

Plan ahead on how you will manage your application authentication strategy, the amount of realms and clients, if you will share the same realm/client for multiple applications and group permissions per app.

Soon I'll work on a tool to automatically inject and adjust this, similar to what Istio does. Stay tuned.

Integrating with external Identity Providers

As mentioned, and as you probably saw on my login screen, I integrated Keycloak with GitHub as an external identity provider. This way, when users choose this option on the Keycloak screen, they are redirected to GitHub (or another provider), and when successfully authenticated, Keycloak creates a new user internally. You can then assign this user to groups. You can also merge this identity with an already existing user in Keycloak.

To configure it, click Identity Providers in the sidebar; you can see all supported providers in the drop-down. Select GitHub.

 

Then go to your GitHub account, open Settings, select Developer Settings on the left, then OAuth Apps. Click New OAuth App.

Name it (Keycloak, for example), add your local Keycloak URL (GitHub doesn't need access to it, nor does it need to be exposed to the internet) with /auth at the end, and the Authorization callback URL, which is the same Keycloak address with /auth/realms/[your_realm]/broker/github/endpoint.

Then grab the Client ID and Client Secret and return to Keycloak. Add these parameters and keep the defaults.

There you go, on the login page, click the GitHub button on the right and login with your credentials.
