
This article highlights the benefits of using the pull model for managing multiple Kubernetes clusters in a continuous delivery (CD) system like Argo CD. It explains how to enhance Argo CD without significant changes to your existing applications. The blog focuses on Red Hat Advanced Cluster Management (RHACM) 2.8, where the pull model feature is introduced as a Technology Preview.


Introducing the pull model

Argo CD is a CNCF-graduated project that uses a GitOps approach to continuously manage and deploy applications on Kubernetes clusters. RHACM, on the other hand, is based on the CNCF Sandbox project Open Cluster Management and focuses on managing a fleet of Kubernetes clusters at scale.

With RHACM, users can now enable the optional Argo CD pull model architecture, which offers flexibility that may be better suited to certain scenarios. One of the main use cases for the optional pull model is network scenarios where the centralized cluster cannot reach out to remote clusters, while the remote clusters can communicate with the centralized cluster. In such scenarios, the push model is not easily feasible.

Argo CD currently uses a push model architecture, where the workload is pushed from a centralized cluster to remote clusters, requiring a connection from the centralized cluster to the remote destinations.

In the pull model, the Argo CD Application CR is distributed from the centralized cluster to the remote clusters. Each remote cluster independently reconciles and deploys the application using the received CR. Subsequently, the application status is reported back from the remote clusters to the centralized cluster, resulting in a user experience (UX) that is similar to the push model.

Another advantage of the pull model is decentralized control, where each cluster has its own copy of the configuration and is responsible for pulling updates independently. The hub-managed architecture using Argo CD and the pull model can reduce the need for a centralized system to manage the configurations of all target clusters, making the system more scalable and easier to manage. However, note that the hub cluster itself still represents a potential single point of failure, which you should address through redundancy or other means.

Additionally, the pull model provides more flexibility, allowing clusters to pull updates on their own schedule and reducing the risk of conflicts or disruptions.

Architecture and dependencies

The Argo CD pull model controller on the hub cluster creates ManifestWork objects that wrap Application objects as their payload. ManifestWork is a central Open Cluster Management concept for delivering workloads to the managed clusters; see the Open Cluster Management documentation for more information about ManifestWork.

The RHACM agent on each managed cluster watches for the ManifestWork on the hub cluster and pulls the Application from there.
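Once an application has been propagated (as shown later in this article), you can list the ManifestWork objects created for a managed cluster directly from the hub. This is a quick check, assuming a managed cluster named cluster2 to match the samples below:

# On the hub cluster: ManifestWork objects live in the managed cluster's namespace
oc get manifestwork -n cluster2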

See an overview of the architecture of the new pull model:

[Figure: architecture overview of the Argo CD pull model]

Prerequisites:

  • A running RHACM 2.8 instance
  • One or more managed cluster(s) in RHACM

Setting up the pull model

  1. Install the OpenShift GitOps operator (version 1.9.0 or above) on the hub cluster and all the target managed clusters, in the openshift-gitops namespace.
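A minimal Subscription sketch for this step, assuming the operator is installed through Operator Lifecycle Manager (verify that the channel name matches what is available in your catalog):

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-operators
spec:
  channel: latest        # assumption: confirm the channel available in your catalog
  installPlanApproval: Automatic
  name: openshift-gitops-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace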
  2. Create a GitOpsCluster resource that contains a reference to a Placement resource. The Placement resource selects all the managed clusters that need to support the pull model. As a result, managed cluster secrets are created in the Argo CD server namespace. Every managed cluster needs a cluster secret in the Argo CD server namespace on the hub cluster; the Argo CD ApplicationSet controller requires it to propagate the Argo CD application template for a managed cluster. See the following example:
apiVersion: apps.open-cluster-management.io/v1beta1
kind: GitOpsCluster
metadata:
  name: gitops-cluster
  namespace: openshift-gitops
spec:
  argoServer:
    cluster: local-cluster
    argoNamespace: openshift-gitops
  placementRef:
    kind: Placement
    apiVersion: cluster.open-cluster-management.io/v1beta1
    name: gitops-placement       # the placement can select all clusters
    namespace: openshift-gitops

  3. Create a Placement resource for the GitOpsCluster resource created in step 2. The following example assumes that all the target managed clusters are added to the default clusterset:
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: gitops-placement
  namespace: openshift-gitops
spec:
  clusterSets:
  - default
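To verify the setup, you can confirm on the hub that the placement produced a decision and that a cluster secret was created for each selected cluster (a quick sanity check, not an official procedure):

# Confirm the placement selected the managed clusters
oc get placementdecisions -n openshift-gitops

# Confirm a cluster secret exists in the Argo CD server namespace for each selected cluster
oc get secrets -n openshift-gitops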

 

Deploying a pull model application

The Argo CD ApplicationSet CRD is used to deploy applications on the managed clusters using the push or pull model. It uses a Placement resource in the generator field to get a list of managed clusters. The template field supports parameter substitution of specifications for the application. The Argo CD ApplicationSet controller on the hub cluster manages the creation of the application for each target cluster.

For the pull model, the destination for the application must be the default local Kubernetes server (https://kubernetes.default.svc), since the application is deployed locally by the application controller on the managed cluster. By default, the push model is used to deploy the application, unless the apps.open-cluster-management.io/ocm-managed-cluster annotation and the apps.open-cluster-management.io/pull-to-ocm-managed-cluster label are added to the template section of the ApplicationSet.

For deploying applications using the pull model, the Argo CD application controllers must ignore these Application resources on the hub cluster. The solution is to add the argocd.argoproj.io/skip-reconcile annotation to the template section of the ApplicationSet. The ApplicationSet is still processed on the hub to enable the dynamic placement of applications on managed clusters using clusterDecisionResource, but the resulting Argo CD Applications are only reconciled on the managed clusters, thus supporting the pull model.

The following is a sample ApplicationSet YAML that uses the pull model:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: helloworld-allclusters-app-set
  namespace: openshift-gitops
spec:
  generators:
  - clusterDecisionResource:
      configMapRef: acm-placement
      labelSelector:
        matchLabels:
          cluster.open-cluster-management.io/placement: helloworld-placement
      requeueAfterSeconds: 30
  template:
    metadata:
      annotations:
        apps.open-cluster-management.io/ocm-managed-cluster: '{{name}}'
        apps.open-cluster-management.io/ocm-managed-cluster-app-namespace: openshift-gitops
        argocd.argoproj.io/skip-reconcile: "true"
      labels:
        apps.open-cluster-management.io/pull-to-ocm-managed-cluster: "true"
      name: '{{name}}-helloworld-app'
    spec:
      destination:
        namespace: helloworld
        server: https://kubernetes.default.svc
      project: default
      source:
        path: helloworld
        repoURL: https://github.com/stolostron/application-samples.git
      syncPolicy:
        automated: {}
        syncOptions:
        - CreateNamespace=true

Sample Placement YAML referenced by the ApplicationSet above:

apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: helloworld-placement
  namespace: openshift-gitops
spec:
  clusterSets:
  - default
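With both resources defined, apply them on the hub cluster and watch the application appear on a managed cluster. The file names below are illustrative:

# On the hub cluster
oc apply -f helloworld-appset.yaml
oc apply -f helloworld-placement.yaml

# On a managed cluster: the Application is created and reconciled locally
oc get applications.argoproj.io -n openshift-gitops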

 

Controller architecture

There are two sets of controllers on the hub cluster watching the ApplicationSet resources:

  • The existing Argo CD application controllers
  • The new propagation controller

Annotations in the Application resource determine which controller reconciles and deploys the application. The Argo CD application controllers, which are used for the push model, ignore applications that contain the argocd.argoproj.io/skip-reconcile annotation. The propagation controller, which supports the pull model, only reconciles applications that contain the apps.open-cluster-management.io/ocm-managed-cluster annotation.

The propagation controller generates a ManifestWork to deliver the application to the managed cluster, which is determined by the value of the ocm-managed-cluster annotation.

The following is a sample ManifestWork YAML that the propagation controller generates to create the helloworld application on the managed cluster cluster2:

apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  annotations:
    apps.open-cluster-management.io/hosting-applicationset: openshift-gitops/helloworld-allclusters-app-set
  name: cluster2-helloworld-app-4a491
  namespace: cluster2
spec:
  manifestConfigs:
  - feedbackRules:
    - jsonPaths:
      - name: healthStatus
        path: .status.health.status
      type: JSONPaths
    - jsonPaths:
      - name: syncStatus
        path: .status.sync.status
      type: JSONPaths
    resourceIdentifier:
      group: argoproj.io
      name: cluster2-helloworld-app
      namespace: openshift-gitops
      resource: applications
  workload:
    manifests:
    - apiVersion: argoproj.io/v1alpha1
      kind: Application
      metadata:
        annotations:
          apps.open-cluster-management.io/hosting-applicationset: openshift-gitops/helloworld-allclusters-app-set
        finalizers:
        - resources-finalizer.argocd.argoproj.io
        labels:
          apps.open-cluster-management.io/application-set: "true"
        name: cluster2-helloworld-app
        namespace: openshift-gitops
      spec:
        destination:
          namespace: helloworld
          server: https://kubernetes.default.svc
        project: default
        source:
          path: helloworld
          repoURL: https://github.com/stolostron/application-samples.git
        syncPolicy:
          automated: {}
          syncOptions:
          - CreateNamespace=true

As a result of the feedback rules specified in manifestConfigs, the health status and the sync status from the status of the Argo CD application are synced into the ManifestWork's status feedback.
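You can read this feedback directly from the ManifestWork on the hub. The following is a quick sketch using the sample ManifestWork name above; the exact status field layout may vary across versions:

oc get manifestwork cluster2-helloworld-app-4a491 -n cluster2 \
  -o jsonpath='{.status.resourceStatus.manifests[*].statusFeedbacks}'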

Managed cluster application deployment flow

After the Argo CD application is created on the managed cluster through ManifestWorks, the local Argo CD controllers reconcile to deploy the application. The controllers deploy the application through this sequence of operations:

  • Connect and pull resources from the specified Git/Helm repository
  • Deploy the resources on the local managed cluster
  • Generate the Argo CD application status

Multicluster application report: aggregating application status from the managed clusters

A new multicluster ApplicationSet report CRD is introduced to provide an aggregate status of the ApplicationSet on the hub cluster. The report is only created for ApplicationSets that are deployed using the new pull model. It includes the list of resources and the overall status of the application from each managed cluster. A separate MulticlusterApplicationSetReport resource is created for each Argo CD ApplicationSet resource, in the same namespace as the ApplicationSet. The MulticlusterApplicationSetReport includes:

  • List of resources for the Argo CD application
  • Overall sync and health status for the Argo CD application deployed to each managed cluster
  • Error message for each cluster where the overall status is out of sync or unhealthy
  • Summary status of the overall application status from all the managed clusters

To support the generation of the MulticlusterApplicationSetReport, two new controllers have been added to the hub cluster: the resource sync controller and the aggregation controller.

The resource sync controller runs every 10 seconds, and its purpose is to query the RHACM search V2 component on each managed cluster to retrieve the resource list and any error messages for each Argo CD application.

It then produces an intermediate report for each ApplicationSet, which the aggregation controller uses to generate the final MulticlusterApplicationSetReport.

The aggregation controller also runs every 10 seconds, and it uses the report generated by the resource sync controller to add the health and sync status of the application on each managed cluster. The status for each application is retrieved from the statusFeedback in the ManifestWork for the application. Once the aggregation is complete, the final MulticlusterApplicationSetReport is saved in the same namespace as the Argo CD ApplicationSet, with the same name as the ApplicationSet.
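Once aggregation completes, the report can be retrieved from the hub like any other namespaced resource:

oc get multiclusterapplicationsetreport helloworld-allclusters-app-set -n openshift-gitops -o yaml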

The two new controllers, along with the propagation controller, all run in separate containers in the same multicluster-integrations pod, as shown in the example below:

NAMESPACE               NAME                                       READY   STATUS  
open-cluster-management multicluster-integrations-7c46498d9-fqbq4  3/3     Running 

 

The following is a sample MulticlusterApplicationSetReport YAML for the helloworld ApplicationSet:

apiVersion: apps.open-cluster-management.io/v1alpha1
kind: MulticlusterApplicationSetReport
metadata:
  labels:
    apps.open-cluster-management.io/hosting-applicationset: openshift-gitops.helloworld-allclusters-app-set
  name: helloworld-allclusters-app-set
  namespace: openshift-gitops
statuses:
  clusterConditions:
  - cluster: cluster1
    conditions:
    - message: 'Failed sync attempt to 53e28ff20cc530b9ada2173fbbd64d48338583ba: one or more objects failed to apply, reason: services is forbidden: User "system:serviceaccount:openshift-gitops:openshift-gitops-argocd-application-controller" cannot create resource "services" in API group "" in the namespace "helloworld",deployments.apps is forbidden: User "system:serviceaccount:openshift-gitops:openshift-gitops-argocd-application-controller" cannot create resource "deployments" in API group "apps" in the namespace "guestboo...'
      type: SyncError
    healthStatus: Missing
    syncStatus: OutOfSync
  - cluster: cluster1
    healthStatus: Progressing
    syncStatus: Synced
  - cluster: cluster2
    healthStatus: Progressing
    syncStatus: Synced
  summary:
    clusters: "3"
    healthy: "0"
    inProgress: "2"
    notHealthy: "3"
    notSynced: "1"
    synced: "2"

All the resources listed in the MulticlusterApplicationSetReport are actually deployed on the managed cluster(s). If a resource fails to deploy, it is not included in the resource list, but the error message indicates why the deployment failed.

Pull model vs RHACM policies

The RHACM policy framework not only helps harden cluster security, but its powerful templating feature can also be used to automate processes on managed clusters. A full discussion of RHACM policies is outside the scope of this article; this section briefly touches on how a pull model can also be achieved using policies and on the differences between the two methods.

Using the policy templating feature, users can automate the installation of the OpenShift GitOps operator whenever a new managed cluster is created or imported in RHACM. As part of this automation, users can also deliver an Argo CD Application resource to the managed cluster, which is what the propagation controller does in the pull model. All of these steps make complete sense to a solution architect, but they can be daunting for new RHACM users, who not only need to know how the GitOps operator works but also how the policy generator and templating engine work. With the pull model, new users only need to know the concepts covered in this article.

The other issue with using policies is that the application deployment results are scattered across all the target managed clusters. CLI-only users would have a hard time gathering all the results from the individual clusters. RHACM console users can see the Argo CD applications on the applications page, but still need to click on each individual application for more detailed information. The MulticlusterApplicationSetReport resource in the pull model makes this easier for CLI users by putting all the results in one place. When console support is added for the pull model (planned for RHACM 2.9), users will be able to click the ApplicationSet on the applications page and view detailed information for all deployed applications across the managed clusters on a single page.

Limitations

  1. Resources are only deployed on the managed cluster(s).
  2. If a resource fails to deploy, it is not included in the MulticlusterApplicationSetReport.
  3. In the pull model, the local-cluster is excluded as a target managed cluster.
  4. There might be use cases where the managed clusters cannot reach the Git server. In that case, the push model is the only solution.

Conclusion

This article explored the process of setting up an Argo CD application pull model environment and deploying a pull model application. The pull model provides an efficient way for users to deploy applications in environments where the centralized cluster cannot easily reach remote clusters but the remote clusters can communicate with the centralized cluster.

 

