
KCP is a prototype of a multi-tenant Kubernetes control plane for workloads on many clusters.

It provides a generic CustomResourceDefinition (CRD) apiserver that is divided into multiple logical clusters that enable multitenancy of cluster-scoped resources such as CRDs and Namespaces.

Each of these logical clusters is fully isolated from the others, allowing different teams, workloads, and use cases to live side by side.

Source: the KCP project README.

The project is currently under heavy development and evolving rapidly. If you're reading this after May 2022, it's very likely that parts of this content are outdated.

Why this blog?

Over the last few weeks, our team at Red Hat has been playing around with KCP; we wanted to understand how KCP works and what use cases it can solve. During our spike on the KCP technology we wanted to run a transparent multi-cluster demo, but that was not ready at the time. Instead, we tried an approach where different teams inside the same organization get their own workspaces and clusters connected to KCP, so they can deploy their applications transparently.

Terminology

The complete terminology can be found in the KCP documentation; check it for the most up-to-date definitions.

| Term | Description | Comparable in Kube |
| ---- | ----------- | ------------------ |
| Workspace | Used to provide multi-tenancy; every Workspace has its own API resources and API endpoint. Some workspaces (Organizational) can contain other workspaces (Universal). | A cluster's API endpoint |
| Workload Cluster | A "real" Kubernetes cluster: one that can run Kubernetes workloads and accepts standard Kubernetes API objects. | A cluster |

We can think of Workspaces as the way to provide isolation between different users inside a KCP cluster. One potential hierarchy could be:

  • Organization 1
    • Application 1
    • Application 2
  • Organization 2
    • Application 3

In the above organization, we can have different teams taking care of different apps on the same KCP server, but they will be fully isolated from each other.
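As a rough sketch, that hierarchy could be built with the kubectl ws plugin, using the same commands this demo relies on later (the org-1/app-1 names are illustrative only):

    kubectl ws use root
    kubectl ws create org-1 --type Organization
    kubectl ws use org-1
    kubectl ws create app-1
    kubectl ws create app-2
    kubectl ws ..
    kubectl ws create org-2 --type Organization
    kubectl ws use org-2
    kubectl ws create app-3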

Compute and Workspaces Demo

The demo showcases how a KCP admin can create different workspaces and provide access to physical clusters to run workloads from KCP. We will have workspaces for two different teams; each team will have access to its own physical cluster under the hood, but both teams will consume the KCP API to create their workloads.

NOTE: This demo was tested with commit decced4.

Starting KCP

In this first part of the demo, we will see how to start KCP and look at the different Workspaces that come preconfigured out of the box.

  1. Clone the KCP repository and build KCP.

    git clone https://github.com/kcp-dev/kcp.git
    cd kcp/
    git checkout decced4
    make
    export PATH=${PATH}:${PWD}/bin
  2. Start KCP.

    NOTE: You can use the --bind-address flag to force KCP to listen on a specific IP on your node. e.g: kcp start --bind-address 10.19.3.4. Otherwise, it will bind to all interfaces.

    kcp start
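    NOTE2: If you want to keep the KCP logs around for later inspection, plain shell redirection works; nothing here is KCP-specific. e.g: kcp start --bind-address 10.19.3.4 2>&1 | tee kcp.log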
  3. Export the kubeconfig to connect to KCP as admin.

    export KUBECONFIG=.kcp/admin.kubeconfig
  4. Move to the root workspace.

    NOTE: By default, KCP creates two workspaces: root and, inside root, default.

    kubectl ws use root
  5. We can list the workspaces inside root; we will see default.

    kubectl get workspaces
    NAME      TYPE           PHASE   URL
    default   Organization   Ready   https://10.19.3.4:6443/clusters/root:default
  6. As we mentioned earlier, a workspace is like having your own Kubernetes API server with its own API resources. You can actually query this API like you would a regular Kubernetes API server.

    curl -k https://10.19.3.4:6443/clusters/root:default/

    NOTE: Since we’re not sending any bearer token/x509 cert with our request, the request won’t be authenticated/authorized.

    {
      "kind": "Status",
      "apiVersion": "v1",
      "metadata": {},
      "status": "Failure",
      "message": "forbidden: User \"system:anonymous\" cannot get path \"/\": \"root:default\" workspace access not permitted",
      "reason": "Forbidden",
      "details": {},
      "code": 403
    }
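    As a sketch, you can authenticate the same request by reusing the admin credentials, assuming the admin kubeconfig carries a bearer token (its exact layout may vary by commit):

    TOKEN=$(kubectl config view --raw -o jsonpath='{.users[0].user.token}')
    curl -k -H "Authorization: Bearer ${TOKEN}" https://10.19.3.4:6443/clusters/root:default/api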

Create our custom Organization

This part of the demo will guide us through the creation of custom Workspaces for our organization and also for our teams.

  1. Let’s create the Organizational Workspace for out TelcOps organization.

    kubectl ws create telcops --type Organization
  2. It will show up as a new workspace.

    kubectl get workspaces
    NAME      TYPE           PHASE   URL
    default   Organization   Ready   https://10.19.3.4:6443/clusters/root:default
    telcops   Organization   Ready   https://10.19.3.4:6443/clusters/root:telcops
  3. We will create a Universal Workspace for team-a inside the telcops Organizational Workspace.

    kubectl ws use telcops
    kubectl ws create team-a
  4. Again, we can list the workspaces, but this time we will only see the workspaces inside the telcops Organizational Workspace.

    kubectl get workspaces
    NAME     TYPE        PHASE   URL
    team-a   Universal   Ready   https://10.19.3.4:6443/clusters/root:telcops:team-a
  5. Let’s use this new workspace and list the API resources

    kubectl ws use team-a
    kubectl api-resources
    NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
    configmaps                        cm           v1                                     true         ConfigMap
    <output_omitted>
    workloadclusters                               workload.kcp.dev/v1alpha1              false        WorkloadCluster
  6. KCP knows nothing about Deployments.

    kubectl api-resources | grep -i deployment
    kubectl get deployments
    error: the server doesn't have a resource type "deployments"
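    To see what the fresh workspace does serve, we can also list the available API groups with standard kubectl (the exact output will vary by commit):

    kubectl api-versions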

Learning about new API resources

Now that we have our Workspaces ready, we need KCP to learn about the API resources we will be using to deploy our workloads. This part of the demo will guide us through the process of importing API resources from real Kubernetes clusters.

  1. Next, we need KCP to learn about the new API resources we want to use in this workspace, like Deployments. The command below creates a WorkloadCluster object in our workspace and outputs a YAML manifest to deploy the Syncer.

    NOTE: In order to learn these new types, we're going to use a component called the Syncer. The Syncer can run on the KCP cluster and use a push pattern, or on the physical cluster (from which the types will be learned) and use a pull pattern. In this case we're using pull, so the Syncer will be running on the physical cluster.

    NOTE2: The resources that will be synced by the Syncer are defined as parameters in the deployment (inside the YAML) using the --resources flag.

    NOTE3: The command below uses a Syncer image built at the time of this writing; you may want to build your own by following the steps in this guide.

    kubectl kcp workload sync ocp-sno --syncer-image quay.io/mavazque/kcp-syncer:latest > syncer-ocp-sno.yaml
    cat syncer-ocp-sno.yaml | grep '\--resource'

    NOTE: The Syncer will take care of syncing the following resources.

    - --resources=configmaps
    - --resources=deployments.apps
    - --resources=secrets
    - --resources=serviceaccounts
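    If you needed the Syncer to handle an extra resource type, say Services, one sketch is to add one more flag to the generated Deployment's args before applying the manifest (services here is an illustrative assumption, not part of this demo):

    - --resources=services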
  2. Now it’s time to get the syncer deployed in our physical cluster, an OpenShift SNO in this case.

    kubectl --kubeconfig /root/ztp-sno-cluster/kubeconfig apply -f syncer-ocp-sno.yaml
  3. Once the Syncer is started, we will have the deployments API resource in our KCP workspace.

    kubectl api-resources | grep -i deployment
    deployments                       deploy       apps/v1                                true         Deployment
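    Before deploying anything, we can also confirm the WorkloadCluster object exists in the workspace (the workloadclusters resource appeared in the earlier api-resources listing; output columns may vary by commit):

    kubectl get workloadclusters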

Deploying our Workloads

At this point, our KCP Workspace knows about deployments, so we will go ahead and deploy our application.

  1. Let's start by creating a new namespace for our application.

    kubectl create namespace reverse-words

    NOTE: The namespace will get a WorkloadCluster assigned; if we had more than one, the first one would be assigned. At this point only one WorkloadCluster can be assigned to a namespace; in the future, more options will be available.

    kubectl get namespace reverse-words -o jsonpath='{.metadata.labels.workloads\.kcp\.dev/cluster}'

    We can see the WorkloadCluster we got assigned is ocp-sno.

    ocp-sno
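    If you prefer to see all of the namespace's labels rather than a single jsonpath query, standard kubectl works too:

    kubectl get namespace reverse-words --show-labels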
  2. Let’s create our application’s deployment.

    kubectl -n reverse-words create deployment reversewords --image quay.io/mavazque/reversewords:latest
  3. If we try to get the pods, we will see that our KCP workspace doesn't know about them.

    kubectl get pods
    error: the server doesn't have a resource type "pods"
  4. But the Syncer will push the status of our deployment on the WorkloadCluster back to the KCP workspace.

    kubectl -n reverse-words describe deployment reversewords
    Name:               reversewords
    Namespace:          reverse-words
    CreationTimestamp:  Thu, 05 May 2022 16:05:08 +0000
    Labels:             app=reversewords
                        workloads.kcp.dev/cluster=ocp-sno
    Annotations:        <none>
    Selector:           app=reversewords
    Replicas:           1 desired | 1 updated | 1 total | 1 available | 0 unavailable
    StrategyType:
    MinReadySeconds:    0
    Pod Template:
      Labels:  app=reversewords
      Containers:
       reversewords:
        Image:        quay.io/mavazque/reversewords:latest
        Port:         <none>
        Host Port:    <none>
        Environment:  <none>
        Mounts:       <none>
      Volumes:        <none>
    Conditions:
      Type           Status  Reason
      ----           ------  ------
      Available      True    MinimumReplicasAvailable
      Progressing    True    NewReplicaSetAvailable
    Events:          <none>
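    If you want to block until the synced status reports the deployment as available, kubectl wait should work against the KCP workspace as well, since it simply polls the resource's status client-side (the timeout value is arbitrary):

    kubectl -n reverse-words wait deployment/reversewords --for=condition=Available --timeout=120s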
  5. In the WorkloadCluster we will have our application running.

    kubectl --kubeconfig /root/ztp-sno-cluster/kubeconfig get pods -A -l app=reversewords
    NAMESPACE                                                     NAME                            READY   STATUS    RESTARTS   AGE
    kcp3afd937aa61b43734af460119cd405930ccbdeade88b713fc804a0d6   reversewords-6776c6fccc-kr6vw   1/1     Running   0          7m2s

    Note how the Syncer has mapped our KCP namespace to a uniquely named, kcp-prefixed namespace on the physical cluster.

Onboarding team-b

This final part will show the required steps to onboard the team-b in their own Workspace.

  1. Now that we have our application running in this workspace, let's create a new workspace for team-b and repeat the same steps to get a new WorkloadCluster added and the workload running.

    kubectl ws ..
    kubectl ws create team-b --enter
  2. Since this is a new workspace, it has its own API resources; we haven't added any additional ones yet, so it knows nothing about deployments:

    kubectl api-resources | grep -i deployment
    kubectl get deployments
    error: the server doesn't have a resource type "deployments"
  3. Let’s add the WorkloadCluster and get the deployments resource in the new workspace.

    kubectl kcp workload sync ocp-ztp --syncer-image quay.io/mavazque/kcp-syncer:latest > syncer-ocp-ztp.yaml
    kubectl --kubeconfig /root/ztp-virtual-cluster/kubeconfig apply -f syncer-ocp-ztp.yaml
    kubectl api-resources | grep -i deployment
    deployments                       deploy       apps/v1                                true         Deployment
  4. Now that our KCP workspace knows about deployments, we can go ahead and deploy our application. Let's start by creating a new namespace.

    kubectl create namespace reverse-words
    kubectl get namespace reverse-words -o jsonpath='{.metadata.labels.workloads\.kcp\.dev/cluster}'

    We can see the WorkloadCluster we got assigned is ocp-ztp.

    ocp-ztp
  5. Let’s create our application’s deployment.

    kubectl -n reverse-words create deployment reversewords --image quay.io/mavazque/reversewords:latest
  6. In the WorkloadCluster we will have our application running.

    kubectl --kubeconfig /root/ztp-virtual-cluster/kubeconfig get pods -A -l app=reversewords
    NAMESPACE                                                     NAME                            READY   STATUS    RESTARTS   AGE
    kcpeed022296263aa537060763c3934139fb84185e2a39a8dcf8695c89e   reversewords-85d7b5b76c-mwtr8   1/1     Running   0          50s

Cleanup

Once we’re done with the demo, we can cleanup the different resources we created following the steps below.

  1. Remove the Syncer from the WorkloadClusters.

    kubectl --kubeconfig /root/ztp-sno-cluster/kubeconfig delete -f syncer-ocp-sno.yaml
    kubectl --kubeconfig /root/ztp-virtual-cluster/kubeconfig delete -f syncer-ocp-ztp.yaml
  2. Remove the objects created by the syncer.

    NOTE: Objects created by the Syncer won't be removed automatically. This will change in the future, and the user will likely be able to choose what happens to objects deployed on the WorkloadClusters.

    kubectl --kubeconfig /root/ztp-sno-cluster/kubeconfig get ns -o name | grep 'namespace/kcp' | xargs kubectl --kubeconfig /root/ztp-sno-cluster/kubeconfig delete
    kubectl --kubeconfig /root/ztp-virtual-cluster/kubeconfig get ns -o name | grep 'namespace/kcp' | xargs kubectl --kubeconfig /root/ztp-virtual-cluster/kubeconfig delete
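    Optionally, we can also delete the WorkloadCluster objects from each KCP workspace before stopping KCP. As a sketch, navigating between workspaces with kubectl ws as earlier (the resource name comes from the api-resources listing above):

    kubectl delete workloadcluster ocp-sno   # run from the team-a workspace
    kubectl delete workloadcluster ocp-ztp   # run from the team-b workspace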
  3. Stop KCP and remove its storage.

    ctrl+c 
    rm -rf .kcp/

Next Steps

KCP is under heavy development, and we want to keep an eye on the project over the coming months. One of the next steps on our side will be testing transparent multi-cluster with global ingress. New features will be landing in upcoming releases; we will try them as well and see what use cases we can come up with around them.
