Cloud Experts Documentation

Cross-Cloud Workload Identity - Connecting OpenShift Dedicated to Azure Services

This content is authored by Red Hat experts, but has not yet been tested on every supported configuration. This guide has been validated on OpenShift 4.20. Operator CRD names, API versions, and console paths may differ on other versions.

This guide demonstrates how to use Google Workload Identity Federation to enable workloads running on OpenShift Dedicated to authenticate to Azure Cosmos DB without static credentials.

Architecture Overview

The authentication flow chains multiple identity providers: an OpenShift service account token is exchanged for a GCP access token through Workload Identity Federation, that access token is used to mint a Google identity token, and Azure then exchanges the Google identity token for an Azure AD access token scoped to Cosmos DB.

Key Point: We use the existing OpenShift OIDC provider in the Google Workload Identity Pool to authenticate. We do NOT need to create a second provider. The Google identity token (needed for Azure) is generated by the Google service account’s generateIdToken API, which changes the issuer from https://openshift.com to https://accounts.google.com.

This approach enables:

  • No static credentials stored in clusters or applications
  • Short-lived tokens with automatic rotation
  • Cloud-native identity federation across GCP and Azure
  • Fine-grained access control at each layer

Prerequisites

  • OpenShift Dedicated cluster running on Google Cloud with Workload Identity Federation (WIF) enabled
  • Google project with Workload Identity enabled
  • Azure subscription with access to create managed identities
  • oc CLI logged into your OSD cluster
  • gcloud CLI configured for your Google project
  • az CLI configured for your Azure subscription
  • ocm CLI for OpenShift Cluster Manager

Set up a working directory for this guide:
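For example (the path below is only a suggestion; any scratch directory works):

```shell
# Hypothetical working directory for the files created in this guide
export SCRATCH_DIR="${HOME}/scratch/osd-azure-wif"
mkdir -p "${SCRATCH_DIR}"
cd "${SCRATCH_DIR}"
```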

Step 1: Gather OSD Cluster Information

Set your cluster name and OIDC issuer:
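For example (the cluster name is a placeholder; the issuer value is the fixed OpenShift Dedicated issuer noted below):

```shell
# Replace with your own cluster name
export CLUSTER_NAME="my-osd-cluster"
export OIDC_ISSUER="https://openshift.com"
```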

Note: OpenShift Dedicated clusters on GCP use https://openshift.com as the OIDC issuer.

Get your GCP project details:
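For example, reading the active project from your gcloud configuration:

```shell
export PROJECT_ID=$(gcloud config get-value project)
export PROJECT_NUMBER=$(gcloud projects describe "${PROJECT_ID}" \
  --format="value(projectNumber)")
echo "Project: ${PROJECT_ID} (${PROJECT_NUMBER})"
```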

Step 2: Set Up GCP Workload Identity Pool

The Workload Identity Pool is created by OpenShift Cluster Manager (OCM) during cluster setup. Verify that it exists:
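For example (the pool name varies per cluster, so capture the ID reported by the list command):

```shell
gcloud iam workload-identity-pools list --location="global"

# Set this to the pool ID shown in the output above
export POOL_ID="<your-pool-id>"
```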

Step 3: Verify Existing OIDC Provider

OpenShift Dedicated clusters on GCP already have an OIDC provider configured by OpenShift Cluster Manager. Verify it exists:
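For example, assuming POOL_ID was set in the previous step:

```shell
gcloud iam workload-identity-pools providers list \
  --workload-identity-pool="${POOL_ID}" \
  --location="global"
```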

You should see a provider (typically named “oidc”) with:

  • Issuer: https://openshift.com
  • Allowed audiences: openshift

Set the provider name:
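For example, using the typical name noted above (adjust if your provider is named differently):

```shell
export PROVIDER_NAME="oidc"
```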

View the provider details:
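For example:

```shell
gcloud iam workload-identity-pools providers describe "${PROVIDER_NAME}" \
  --workload-identity-pool="${POOL_ID}" \
  --location="global"
```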

Note: We use this existing OpenShift provider to exchange Kubernetes tokens for GCP tokens. The Google identity token (needed for Azure) is generated later by the GCP service account itself, not by a second WIF provider.

Step 4: Create GCP Service Account

Create a service account that will generate Google identity tokens:
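For example (the service account name is a placeholder):

```shell
export GSA_NAME="osd-azure-federation"
gcloud iam service-accounts create "${GSA_NAME}" \
  --display-name="OSD to Azure federation"
export GSA_EMAIL="${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
```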

Get the service account’s unique ID (needed for Azure federation):
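For example:

```shell
export GSA_UNIQUE_ID=$(gcloud iam service-accounts describe "${GSA_EMAIL}" \
  --format="value(uniqueId)")
echo "${GSA_UNIQUE_ID}"
```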

Step 5: Configure OpenShift Service Account

Create a namespace and service account for your application:
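For example (namespace and service account names are placeholders):

```shell
export NAMESPACE="cosmos-demo"
export K8S_SA="cosmos-demo-sa"
oc new-project "${NAMESPACE}"
oc create serviceaccount "${K8S_SA}" -n "${NAMESPACE}"
```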

Step 6: Bind OpenShift SA to GCP SA

Allow the OpenShift service account to impersonate the GCP service account:
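A sketch of the bindings, assuming the variables exported in the preceding steps. Two roles are granted: roles/iam.workloadIdentityUser for impersonation, and roles/iam.serviceAccountOpenIdTokenCreator so the federated identity can call generateIdToken later:

```shell
export WIF_POOL_PATH="projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/${POOL_ID}"
export WIF_PROVIDER_PATH="//iam.googleapis.com/${WIF_POOL_PATH}/providers/${PROVIDER_NAME}"
export SUBJECT="system:serviceaccount:${NAMESPACE}:${K8S_SA}"

# Allow the federated OpenShift identity to impersonate the GCP service account
gcloud iam service-accounts add-iam-policy-binding "${GSA_EMAIL}" \
  --role="roles/iam.workloadIdentityUser" \
  --member="principal://iam.googleapis.com/${WIF_POOL_PATH}/subject/${SUBJECT}"

# Allow it to mint Google identity tokens via generateIdToken
gcloud iam service-accounts add-iam-policy-binding "${GSA_EMAIL}" \
  --role="roles/iam.serviceAccountOpenIdTokenCreator" \
  --member="principal://iam.googleapis.com/${WIF_POOL_PATH}/subject/${SUBJECT}"
```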

Important Note on Path Formats:

GCP Workload Identity uses two different path formats depending on the context:

  1. IAM Policy Bindings (above): Use WIF_POOL_PATH (pool only, no provider)

    • Format: principal://iam.googleapis.com/projects/.../workloadIdentityPools/POOL_ID/subject/...
    • The provider is implicitly determined by validating the token’s issuer
  2. Application STS Token Exchange (in app.py): Use WIF_PROVIDER_PATH (includes provider)

    • Format: //iam.googleapis.com/projects/.../workloadIdentityPools/POOL_ID/providers/PROVIDER_NAME
    • The provider is explicitly specified in the audience parameter

This is why we set both variables in this step.

Step 7: Create Azure Cosmos DB

Create a resource group and Cosmos DB account:
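For example (resource group name, account name, and region are placeholders; the Cosmos DB account name must be globally unique):

```shell
export AZURE_RG="cosmos-wif-demo"
export COSMOS_ACCOUNT="cosmoswifdemo${RANDOM}"   # must be globally unique
az group create --name "${AZURE_RG}" --location eastus
az cosmosdb create --name "${COSMOS_ACCOUNT}" --resource-group "${AZURE_RG}"
```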

Create a database and container:
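For example (database, container, and partition key are placeholders; the partition key must match what the application writes):

```shell
export COSMOS_DB="demodb"
export COSMOS_CONTAINER="items"
az cosmosdb sql database create \
  --account-name "${COSMOS_ACCOUNT}" --resource-group "${AZURE_RG}" \
  --name "${COSMOS_DB}"
az cosmosdb sql container create \
  --account-name "${COSMOS_ACCOUNT}" --resource-group "${AZURE_RG}" \
  --database-name "${COSMOS_DB}" --name "${COSMOS_CONTAINER}" \
  --partition-key-path "/id"
```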

Get the Cosmos DB endpoint:
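For example:

```shell
export COSMOS_ENDPOINT=$(az cosmosdb show \
  --name "${COSMOS_ACCOUNT}" --resource-group "${AZURE_RG}" \
  --query documentEndpoint -o tsv)
echo "${COSMOS_ENDPOINT}"
```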

Step 8: Create Azure Managed Identity

Create a managed identity for the application:
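For example (the identity name is a placeholder):

```shell
export IDENTITY_NAME="cosmos-demo-identity"
az identity create --name "${IDENTITY_NAME}" --resource-group "${AZURE_RG}"
```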

Get the identity details:
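For example, capturing the client ID (used by the application), the principal ID (used for role assignment), and the tenant ID:

```shell
export IDENTITY_CLIENT_ID=$(az identity show --name "${IDENTITY_NAME}" \
  --resource-group "${AZURE_RG}" --query clientId -o tsv)
export IDENTITY_PRINCIPAL_ID=$(az identity show --name "${IDENTITY_NAME}" \
  --resource-group "${AZURE_RG}" --query principalId -o tsv)
export TENANT_ID=$(az account show --query tenantId -o tsv)
```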

Step 9: Configure Azure Federated Identity Credential

Create a federated credential that trusts Google identity tokens:
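A sketch, assuming the variables from earlier steps (the credential name is a placeholder; note the subject is the GCP service account's unique ID, per the warning below):

```shell
az identity federated-credential create \
  --name "google-federation" \
  --identity-name "${IDENTITY_NAME}" \
  --resource-group "${AZURE_RG}" \
  --issuer "https://accounts.google.com" \
  --subject "${GSA_UNIQUE_ID}" \
  --audiences "api://AzureADTokenExchange"
```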

Important: The subject must be the GCP service account’s unique ID, not the email address.

Step 10: Grant Cosmos DB Access

Assign the Cosmos DB Data Contributor role to the managed identity:
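For example, scoped to the specific database in line with the least-privilege guidance later in this guide (00000000-0000-0000-0000-000000000002 is the built-in Data Contributor role definition ID):

```shell
az cosmosdb sql role assignment create \
  --account-name "${COSMOS_ACCOUNT}" \
  --resource-group "${AZURE_RG}" \
  --role-definition-id "00000000-0000-0000-0000-000000000002" \
  --principal-id "${IDENTITY_PRINCIPAL_ID}" \
  --scope "/dbs/${COSMOS_DB}"
```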

Step 11: Create Application Code

Create the application that performs the token exchange in your scratch directory:
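The original listing is not reproduced here; the following is a minimal, illustrative sketch of app.py implementing the token chain described in the FAQ (OpenShift token → GCP STS → generateIdToken → Azure AD → Cosmos DB). All environment variable names and the token mount path are assumptions that must match the deployment manifest in Step 13:

```shell
cat << 'EOF' > app.py
"""Sketch: exchange an OpenShift SA token for Azure access, then write to Cosmos DB."""
import os
import time
import requests
from azure.core.credentials import AccessToken
from azure.cosmos import CosmosClient

# Path where the projected service account token is mounted (see Step 13)
TOKEN_FILE = "/var/run/secrets/openshift/serviceaccount/token"

def gcp_access_token():
    """Exchange the projected OpenShift token for a GCP access token via STS."""
    with open(TOKEN_FILE) as f:
        k8s_token = f.read().strip()
    resp = requests.post("https://sts.googleapis.com/v1/token", json={
        "grantType": "urn:ietf:params:oauth:grant-type:token-exchange",
        "audience": os.environ["WIF_PROVIDER_PATH"],  # includes the provider
        "scope": "https://www.googleapis.com/auth/cloud-platform",
        "requestedTokenType": "urn:ietf:params:oauth:token-type:access_token",
        "subjectToken": k8s_token,
        "subjectTokenType": "urn:ietf:params:oauth:token-type:jwt",
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

def google_id_token(access_token):
    """Mint a Google identity token (iss: accounts.google.com) for Azure."""
    sa = os.environ["GSA_EMAIL"]
    url = ("https://iamcredentials.googleapis.com/v1/"
           f"projects/-/serviceAccounts/{sa}:generateIdToken")
    resp = requests.post(url,
        headers={"Authorization": f"Bearer {access_token}"},
        json={"audience": "api://AzureADTokenExchange", "includeEmail": True})
    resp.raise_for_status()
    return resp.json()["token"]

def azure_token(id_token):
    """Exchange the Google identity token via the Azure federated credential."""
    url = (f"https://login.microsoftonline.com/{os.environ['TENANT_ID']}"
           "/oauth2/v2.0/token")
    resp = requests.post(url, data={
        "grant_type": "client_credentials",
        "client_id": os.environ["IDENTITY_CLIENT_ID"],
        "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
        "client_assertion": id_token,
        "scope": "https://cosmos.azure.com/.default",
    })
    resp.raise_for_status()
    return resp.json()

class StaticCredential:
    """Minimal TokenCredential wrapper around an already-acquired token."""
    def __init__(self, token, expires_on):
        self._token, self._expires_on = token, expires_on
    def get_token(self, *scopes, **kwargs):
        return AccessToken(self._token, self._expires_on)

def main():
    while True:
        tok = azure_token(google_id_token(gcp_access_token()))
        cred = StaticCredential(tok["access_token"],
                                int(time.time()) + int(tok["expires_in"]))
        client = CosmosClient(os.environ["COSMOS_ENDPOINT"], credential=cred)
        container = (client.get_database_client(os.environ["COSMOS_DB"])
                           .get_container_client(os.environ["COSMOS_CONTAINER"]))
        item = {"id": str(int(time.time())), "source": "openshift-wif"}
        container.upsert_item(item)
        print(f"Wrote item {item['id']} to Cosmos DB", flush=True)
        time.sleep(60)

if __name__ == "__main__":
    main()
EOF
```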

Create the Dockerfile:
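A minimal example, assuming a UBI Python base image and the two third-party libraries the sketch above uses:

```shell
cat << 'EOF' > Dockerfile
FROM registry.access.redhat.com/ubi9/python-311
COPY app.py .
RUN pip install --no-cache-dir requests azure-cosmos
CMD ["python", "app.py"]
EOF
```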

Step 12: Build and Push Container Image

Build the container image and push to OpenShift’s internal registry:
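One way to do this without external registry credentials is a binary Docker-strategy build, which pushes the result to the internal registry automatically (the build name "cosmos-demo" is a placeholder):

```shell
oc new-build --name=cosmos-demo --binary --strategy=docker -n "${NAMESPACE}"
oc start-build cosmos-demo --from-dir=. --follow -n "${NAMESPACE}"
```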

Step 13: Deploy to OpenShift

Deploy the application using an inline manifest:
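A sketch of the manifest, assuming the variables from earlier steps. The projected token's audience must match the provider's allowed audience (openshift, per Step 3), and the mount path must match TOKEN_FILE in app.py:

```shell
cat << EOF | oc apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cosmos-demo
  namespace: ${NAMESPACE}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cosmos-demo
  template:
    metadata:
      labels:
        app: cosmos-demo
    spec:
      serviceAccountName: ${K8S_SA}
      containers:
      - name: app
        image: image-registry.openshift-image-registry.svc:5000/${NAMESPACE}/cosmos-demo:latest
        env:
        - name: WIF_PROVIDER_PATH
          value: "${WIF_PROVIDER_PATH}"
        - name: GSA_EMAIL
          value: "${GSA_EMAIL}"
        - name: TENANT_ID
          value: "${TENANT_ID}"
        - name: IDENTITY_CLIENT_ID
          value: "${IDENTITY_CLIENT_ID}"
        - name: COSMOS_ENDPOINT
          value: "${COSMOS_ENDPOINT}"
        - name: COSMOS_DB
          value: "${COSMOS_DB}"
        - name: COSMOS_CONTAINER
          value: "${COSMOS_CONTAINER}"
        volumeMounts:
        - name: openshift-token
          mountPath: /var/run/secrets/openshift/serviceaccount
          readOnly: true
      volumes:
      - name: openshift-token
        projected:
          sources:
          - serviceAccountToken:
              audience: openshift
              path: token
              expirationSeconds: 3600
EOF
```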

Step 14: Verify the Deployment

Check pod status:
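For example:

```shell
oc get pods -n "${NAMESPACE}" -l app=cosmos-demo
```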

Expected behavior: Pods may initially show Error or CrashLoopBackOff status for the first 1-2 minutes. This is normal and occurs because:

  • IAM policy changes can take 60-120 seconds to propagate in GCP
  • Azure federated credential validation may have slight delays
  • The application retries on failure with exponential backoff

Wait for pods to reach Running status:
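For example, blocking until the pod reports Ready (the timeout allows for IAM propagation):

```shell
oc wait pod -l app=cosmos-demo -n "${NAMESPACE}" \
  --for=condition=Ready --timeout=300s
```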

Once pods are running successfully, view logs to confirm the connection:
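For example:

```shell
oc logs -n "${NAMESPACE}" deployment/cosmos-demo --tail=20
```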

Expected output: log lines reporting a successful token exchange and items written to Cosmos DB.

Verify data in Cosmos DB:

The Azure CLI doesn’t support querying Cosmos DB data directly. Use one of these methods instead:

Option 1: Check pod logs (easiest)

Option 2: Azure Portal

  1. Navigate to Azure Portal → Cosmos DB account
  2. Go to Data Explorer
  3. Expand database → container → Items
  4. Run a query such as SELECT * FROM c to list the inserted items

Frequently Asked Questions

Do I need to create a second OIDC provider in the GCP Workload Identity Pool?

No. You only need the existing OpenShift OIDC provider that was created when the OpenShift Dedicated cluster was set up. The confusion arises because two different identity providers are involved:

  1. GCP Workload Identity Provider (issuer: https://openshift.com): This is the existing provider that validates OpenShift service account tokens
  2. Google Identity Tokens (issuer: https://accounts.google.com): These are generated by the GCP service account using the generateIdToken API

Azure trusts Google identity tokens (issuer: https://accounts.google.com) through the federated credential, but those tokens are created by the GCP service account, not by a Workload Identity Pool provider.

Why does the token issuer change from OpenShift to Google?

The token transformation happens in multiple steps:

  1. OpenShift pod has a token with iss: https://openshift.com
  2. GCP WIF validates this and issues a GCP access token
  3. Using that access token, we call generateIdToken on the GCP service account
  4. The resulting Google identity token has iss: https://accounts.google.com and sub: <GCP SA unique ID>
  5. Azure validates this Google identity token

This is why the Azure federated credential subject must be the GCP service account’s unique ID, not an OpenShift service account identifier.

Security Considerations

  1. Token Lifetime: Service account tokens are short-lived (default 3600 seconds) and automatically rotated
  2. Least Privilege: Grant only the minimum required roles:
    • GCP: iam.serviceAccountOpenIdTokenCreator
    • Azure: Cosmos DB Built-in Data Contributor scoped to specific database
  3. Network Security: Consider using Private Endpoints for Cosmos DB in production
  4. Audit Logging: Enable audit logs in both GCP and Azure to track token exchanges
  5. Subject Validation: Azure validates the exact subject claim, preventing token substitution

Architecture Benefits

This pattern provides several advantages:

  • Zero Static Credentials: No keys, passwords, or connection strings stored anywhere
  • Cloud Native: Uses native identity services from both cloud providers
  • Automatic Rotation: Tokens expire and regenerate automatically
  • Fine-Grained Control: Identity bindings at the service account level
  • Multi-Cloud: Demonstrates practical federated identity across GCP and Azure
  • Scalable: Works across multiple pods and replicas without coordination
