Large enterprises commonly configure a Shared VPC in a Google Cloud environment: all networking components are created in a project called the "Host Project," and the projects that consume that infrastructure are called "Service Projects." A VPC created in the Host Project can be shared with a Service Project so that Service Projects do not have to manage their own VPCs.

In this document, the "Network VPC" refers to the host VPC created under the Host Project (the "Network Project"), which is shared with our OpenShift Project acting as a Service Project.

Setting up GCP Environment

Resources that need to be created across both projects:

  • API Services: OpenShift Project and Network Project
  • Service Account: OpenShift Project and Network Project
  • Managed Domain: OpenShift Project (public domain); Network Project (a private domain is created by the deployment template)
  • RHCOS Image: OpenShift Project
  • VPC: Network Project (shared with the OpenShift Project)
  • Master and Worker Subnets: Network Project
  • Image Registry Bucket: OpenShift Project
  • RHCOS Image Bucket: OpenShift Project
  • Firewall Rules: Network Project
  • Load Balancers: OpenShift Project (GCP allows the load balancer to be created under either project when a VPC is shared, but it is a best practice to create it under the Service Project)
  • Instances: OpenShift Project
  • IAM Roles: OpenShift Project
  • NAT Router: Network Project


GCP account limits

Increase the following quotas for your region (for example, us-east1):

  • Compute Engine CPUs: 30
  • Compute Engine static IP addresses: 4 (default is 8)
  • Compute Engine persistent disk SSD storage: 900 GB
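
The current usage and limits for these quotas can be checked before requesting an increase. A quick sketch using gcloud and grep (CPUS, STATIC_ADDRESSES, and SSD_TOTAL_GB are the GCP region quota metric names; substitute your own project and region):

gcloud compute regions describe us-east1 --project=ocp43-1 --format=json | grep -B1 -A1 -E '"metric": "(CPUS|STATIC_ADDRESSES|SSD_TOTAL_GB)"'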

 

Major Differences Between IPI and UPI (Shared VPC) and Workarounds

  • The ingress controller cannot create a load balancer through the Google API during installation when a shared VPC is used, so an ingress controller of type HostNetwork is used instead (a sketch of the resulting manifest is shown after this list).
  • A LoadBalancer-type ingress controller can still be created later, using cloud credentials, after the cluster is installed.
  • Separate infra nodes were not used in this document. Each worker node should be added to a target pool so that the ingress router health check can identify which nodes are running router pods and forward traffic to them.
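
For reference, after the sed replacement in the Installation Files section below, the generated manifests/cluster-ingress-default-ingresscontroller.yaml ends up looking roughly like this (a sketch; the endpointPublishingStrategy type is the relevant change):

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    type: HostNetwork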

Project

  • Create a new project from the Web Console where the OpenShift cluster will be installed.
  • Make sure the project is created under an Organization.
  • The project must have billing activated; otherwise, the API services will not work.
  • An existing Network Project with two subnets shared with the OpenShift Project is assumed.
  • From the Web Console, go to Menu: Network Services > Cloud DNS.
  • Create a domain/subdomain zone and forward an NS record for it to the Google Cloud DNS name servers from the upstream/corporate/hosted DNS (example CLI commands follow this list).
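
If the project and the public DNS zone need to be created from the CLI instead, a rough sketch (the organization ID, billing account ID, and zone name below are placeholders, not values from this environment):

gcloud projects create ocp43-1 --organization=<ORG_ID>
gcloud beta billing projects link ocp43-1 --billing-account=<BILLING_ACCOUNT_ID>
gcloud dns managed-zones create ocp431-example-com --dns-name="ocp431.example.com." --description="Public zone for OpenShift" --project=ocp43-1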

Service Account

  • Create a Service Account from Menu: IAM & Admin > Service Accounts.
  • Assign the "Owner" role for the OpenShift Project.
  • Assign the "Admin" role for Compute and IAM resources in the Network Project. If this is not possible, use a separate service account that can create firewall rules and DNS entries.
  • Create a JSON key and store it on the bastion host.
  • Log in with the key from the bastion host as shown below (an optional CLI sketch for creating the service account and key follows this list).
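
If the service account and key have not been created yet, they can also be created from the CLI. A sketch using the names from this document (the roles mirror the broad Owner/Admin assignments described above; narrow them to your policy):

gcloud iam service-accounts create sa-ocp43-1 --display-name="sa-ocp43-1" --project=ocp43-1
gcloud projects add-iam-policy-binding ocp43-1 --member="serviceAccount:sa-ocp43-1@ocp43-1.iam.gserviceaccount.com" --role="roles/owner"
gcloud projects add-iam-policy-binding network-vpc-269503 --member="serviceAccount:sa-ocp43-1@ocp43-1.iam.gserviceaccount.com" --role="roles/compute.admin"
gcloud projects add-iam-policy-binding network-vpc-269503 --member="serviceAccount:sa-ocp43-1@ocp43-1.iam.gserviceaccount.com" --role="roles/dns.admin"
gcloud iam service-accounts keys create /root/ocp43-1-gcp/sa-ocp43-1-key.json --iam-account=sa-ocp43-1@ocp43-1.iam.gserviceaccount.com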

 

gcloud auth list

rm -f .gcp/osServiceAccount.json

Set these using the project ID, not the project name:


export OPENSHIFT_PROJECT=ocp43-1

export NETWORK_PROJECT=network-vpc-269503

gcloud auth activate-service-account sa-ocp43-1@ocp43-1.iam.gserviceaccount.com --key-file=/root/ocp43-1-gcp/sa-ocp43-1-key.json --project=$OPENSHIFT_PROJECT

gcloud config set project $OPENSHIFT_PROJECT

 

API Services

Activate the following API Services:

  • Compute Engine API
  • Google Cloud APIs
  • Cloud Resource Manager API
  • Google DNS API
  • IAM Service Account Credentials API
  • Identity and Access Management (IAM) API
  • Service Management API
  • Service Usage API
  • Google Cloud Storage JSON API
  • Cloud Storage
gcloud services enable cloudresourcemanager.googleapis.com  --project=$OPENSHIFT_PROJECT
gcloud services enable compute.googleapis.com  --project=$OPENSHIFT_PROJECT
gcloud services enable cloudapis.googleapis.com  --project=$OPENSHIFT_PROJECT
gcloud services enable dns.googleapis.com  --project=$OPENSHIFT_PROJECT
gcloud services enable iamcredentials.googleapis.com  --project=$OPENSHIFT_PROJECT
gcloud services enable iam.googleapis.com  --project=$OPENSHIFT_PROJECT
gcloud services enable servicemanagement.googleapis.com  --project=$OPENSHIFT_PROJECT
gcloud services enable serviceusage.googleapis.com  --project=$OPENSHIFT_PROJECT
gcloud services enable storage-api.googleapis.com  --project=$OPENSHIFT_PROJECT
gcloud services enable storage-component.googleapis.com  --project=$OPENSHIFT_PROJECT
gcloud services enable deploymentmanager.googleapis.com --project=$OPENSHIFT_PROJECT
gcloud services enable compute.googleapis.com  --project=$NETWORK_PROJECT
gcloud services enable cloudapis.googleapis.com  --project=$NETWORK_PROJECT
gcloud services enable dns.googleapis.com  --project=$NETWORK_PROJECT
gcloud services enable cloudresourcemanager.googleapis.com --project=$NETWORK_PROJECT
gcloud services enable deploymentmanager.googleapis.com --project=${NETWORK_PROJECT}
gcloud services enable networkmanagement.googleapis.com --project=$NETWORK_PROJECT
gcloud services list --project=$OPENSHIFT_PROJECT
gcloud services list --project=$NETWORK_PROJECT

Managed Domain

Verify that the managed zone for the domain exists:

gcloud dns managed-zones list --project=$OPENSHIFT_PROJECT
ZONE_NAME=`gcloud dns managed-zones list --project=$OPENSHIFT_PROJECT | grep -v NAME | awk {' print $1 '}`
gcloud dns managed-zones describe $ZONE_NAME --project=$OPENSHIFT_PROJECT
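
To confirm that NS delegation from the upstream DNS is working, one option is to compare the zone's name servers with what public DNS returns (a sketch; assumes dig is available on the bastion host):

gcloud dns managed-zones describe $ZONE_NAME --project=$OPENSHIFT_PROJECT --format="value(nameServers)"
dig +short NS ocp431.example.com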

Custom Image with RHCOS

  • The RHCOS image will be created through a temporary GCS bucket.
  • Download the following RHCOS image to the bastion host for later use:
wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.3/latest/rhcos-4.3.0-x86_64-gcp.tar.gz

Subnets

  • Identify the two subnets in the Network Project that are shared with the OpenShift Project (see the listing command below).
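
The shared subnets visible to the service account can be listed with the same command used later to derive the subnet CIDRs:

gcloud compute networks subnets list-usable --project=$NETWORK_PROJECT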

JQ

Install the jq tool, which will be used to extract JSON fields later:

wget -O /usr/local/sbin/jq https://github.com/stedolan/jq/releases/download/jq-1.5/jq-linux64
chmod +x /usr/local/sbin/jq

Installation Files

Generate or prepare the install-config, create the manifests, make the control plane unschedulable, switch the ingress controller to HostNetwork, and generate the Ignition files:

mkdir ${HOME}/ocp43-1-gcp

export INSTALL_DIR=${HOME}/ocp43-1-gcp
openshift-install create install-config --dir=${INSTALL_DIR}

Or use the install-config.yaml below, changing the environment-specific parameters (base domain, cluster name, project ID, region, pull secret, and SSH key):

apiVersion: v1
baseDomain: ocp431.example.com
compute:
- hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  platform: {}
  replicas: 3
metadata:
  creationTimestamp: null
  name: lab
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineCIDR: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  gcp:
    projectID: ocp43-1
    region: us-east1
publish: Internal
pullSecret: '{"auths":{"cloud.openshift.com":{"auth":"b3lVUg==","email":"szobair@redhat.com"},"registry.connect.redhat.com":{"auth":"NTEQ==","email":"szobair@redhat.com"},"registry.redhat.io":{"auth":"NTE==","email":"szobair@redhat.com"}}}'
sshKey: |
  ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCkoQVSP4mJVO4CkxZjP+blohvFGPBXJOZC2Yrw== ocp@bastion.ocp4.example.com

openshift-install create manifests --dir=${INSTALL_DIR}
cd ${INSTALL_DIR}
rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml
rm -f openshift/99_openshift-cluster-api_worker-machineset-*.yaml
sed -i "s/  mastersSchedulable: true/  mastersSchedulable: False/g" manifests/cluster-scheduler-02-config.yml

sed -i s/LoadBalancerService/HostNetwork/g manifests/cluster-ingress-default-ingresscontroller.yaml
openshift-install create ignition-configs --dir=${INSTALL_DIR}

Set necessary variables for GCP deployment templates:

export NETWORK_PROJECT=network-vpc-269503
export NETWORK_VPC=network-vpc
export PROJECT_REGION=us-east1
export MASTER_SUBNET=ocp43-1-master
export WORKER_SUBNET=ocp43-1-worker

export OPENSHIFT_PROJECT=ocp43-1

export SERVICE_ACCOUNT_KEY_FILE=$INSTALL_DIR/sa-ocp43-1-key.json
export BASE_DOMAIN='ocp431.example.com'

export NETWORK_CIDR='10.0.0.0/16'
#export MASTER_SUBNET_CIDR='10.0.0.0/19'
#export WORKER_SUBNET_CIDR='10.0.32.0/19'
export BASE_DOMAIN_ZONE_NAME=`gcloud dns managed-zones list | grep -v NAME | awk {' print $1 '}`

export CLUSTER_NETWORK=`gcloud compute networks describe $NETWORK_VPC --project $NETWORK_PROJECT --format json | jq -r .selfLink`

export DEFAULT_NETWORK=`gcloud compute networks describe default --project $OPENSHIFT_PROJECT --format json | jq -r .selfLink`

export MASTER_SUBNET_LINK=`gcloud compute networks subnets describe $MASTER_SUBNET --project $NETWORK_PROJECT --region ${PROJECT_REGION} --format json | jq -r .selfLink`

export WORKER_SUBNET_LINK=`gcloud compute networks subnets describe $WORKER_SUBNET --project $NETWORK_PROJECT --region ${PROJECT_REGION} --format json | jq -r .selfLink`

export MASTER_SUBNET_CIDR=`gcloud compute networks subnets list-usable --project $NETWORK_PROJECT | grep $MASTER_SUBNET | awk {' print $5 '}`

export WORKER_SUBNET_CIDR=`gcloud compute networks subnets list-usable --project $NETWORK_PROJECT | grep $WORKER_SUBNET | awk {' print $5 '}`
export KUBECONFIG=$INSTALL_DIR/auth/kubeconfig
export CLUSTER_NAME=`jq -r .clusterName $INSTALL_DIR/metadata.json`
export INFRA_ID=`jq -r .infraID $INSTALL_DIR/metadata.json`
export PROJECT_NAME=`jq -r .gcp.projectID $INSTALL_DIR/metadata.json`
export REGION=`jq -r .gcp.region $INSTALL_DIR/metadata.json`

GCP Deployment

All the steps described below are extracted from: https://docs.openshift.com/container-platform/4.3/installing/installing_gcp/installing-gcp-user-infra.html#installation-creating-gcp-dns_installing-gcp-user-infra. Please go through the official documentation and the GCP guides (https://cloud.google.com/dns/docs/how-to) to better understand the flow; this document is intended only as a quick reference.

All deployment templates can be found here: https://github.com/shah-zobair/gcp-upi-shared-vpc.

All the *.py deployment templates can be used as they are. All YAML files need to be regenerated after defining the necessary variables in each section below.
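
Before generating the YAML files, it can help to confirm that the exported variables are populated; a small bash sketch (the variable list matches the exports above):

for v in OPENSHIFT_PROJECT NETWORK_PROJECT INFRA_ID CLUSTER_NAME BASE_DOMAIN REGION NETWORK_CIDR CLUSTER_NETWORK DEFAULT_NETWORK MASTER_SUBNET_LINK WORKER_SUBNET_LINK MASTER_SUBNET_CIDR WORKER_SUBNET_CIDR BASE_DOMAIN_ZONE_NAME; do
  if [ -z "${!v}" ]; then echo "WARNING: $v is not set"; else echo "$v=${!v}"; fi
done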

Create NAT Routers:

cat <<EOF >01_nat-router.yaml
imports:
- path: 01_nat-router.py

resources:
- name: cluster-nat-router
  type: 01_nat-router.py
  properties:
    infra_id: '${INFRA_ID}'
    region: '${REGION}'
    master-subnet: '${MASTER_SUBNET_LINK}'
    worker-subnet: '${WORKER_SUBNET_LINK}'
    master_subnet_cidr: '${MASTER_SUBNET_CIDR}'
    worker_subnet_cidr: '${WORKER_SUBNET_CIDR}'
    network: '${CLUSTER_NETWORK}'
EOF
cat <<EOF >01_nat-router.py
def GenerateConfig(context):

   resources = [{
       'name': context.properties['infra_id'] + '-master-nat-ip',
       'type': 'compute.v1.address',
       'properties': {
           'region': context.properties['region']
       }
   }, {
       'name': context.properties['infra_id'] + '-worker-nat-ip',
       'type': 'compute.v1.address',
       'properties': {
           'region': context.properties['region']
       }
   }, {
       'name': context.properties['infra_id'] + '-router',
       'type': 'compute.v1.router',
       'properties': {
           'region': context.properties['region'],
           'network': context.properties['network'],
           'nats': [{
               'name': context.properties['infra_id'] + '-nat-master',
               'natIpAllocateOption': 'MANUAL_ONLY',
               'natIps': ['\$(ref.' + context.properties['infra_id'] + '-master-nat-ip.selfLink)'],
               'minPortsPerVm': 7168,
               'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS',
               'subnetworks': [{
                   'name': context.properties['master-subnet'],
                   'sourceIpRangesToNat': ['ALL_IP_RANGES']
               }]
           }, {
               'name': context.properties['infra_id'] + '-nat-worker',
               'natIpAllocateOption': 'MANUAL_ONLY',
               'natIps': ['\$(ref.' + context.properties['infra_id'] + '-worker-nat-ip.selfLink)'],
               'minPortsPerVm': 128,
               'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS',
               'subnetworks': [{
                   'name': context.properties['worker-subnet'],
                   'sourceIpRangesToNat': ['ALL_IP_RANGES']
               }]
           }]
       }
   }]

   return {'resources': resources}

EOF
gcloud deployment-manager deployments create ${INFRA_ID}-nat-router --config 01_nat-router.yaml --project $NETWORK_PROJECT

Create the API load balancers and the private DNS zone:

export CLUSTER_NETWORK=`gcloud compute networks describe $NETWORK_VPC --project $NETWORK_PROJECT --format json | jq -r .selfLink`

export DEFAULT_NETWORK=`gcloud compute networks describe default --project $OPENSHIFT_PROJECT --format json | jq -r .selfLink`
cat <<EOF >02_infra-lb.yaml
imports:
- path: 02_infra-lb.py

resources:
- name: cluster-infra
  type: 02_infra-lb.py
  properties:
    infra_id: '${INFRA_ID}'
    region: '${REGION}'
    cluster_domain: '${CLUSTER_NAME}.${BASE_DOMAIN}'
    cluster_network: '${CLUSTER_NETWORK}'
EOF
Create 02_infra-lb.py from https://docs.openshift.com/container-platform/4.3/installing/installing_gcp/installing-gcp-user-infra.html#installation-deployment-manager-dns_installing-gcp-user-infra, then delete the dns managedZone section from that file (02_infra-lb.py); the private DNS zone is created separately in the Network Project using 02_infra-dns.py below.
gcloud deployment-manager deployments create ${INFRA_ID}-infra-lb --config 02_infra-lb.yaml --project $OPENSHIFT_PROJECT
cat <<EOF >02_infra-dns.py
def GenerateConfig(context):

    resources = [{
        'name': context.properties['infra_id'] + '-private-zone',
        'type': 'dns.v1.managedZone',
        'properties': {
            'description': '',
            'dnsName': context.properties['cluster_domain'] + '.',
            'visibility': 'private',
            'privateVisibilityConfig': {
                'networks': [{
                    'networkUrl': context.properties['cluster_network']
                }]
            }
        }
    }]

    return {'resources': resources}
EOF
cat <<EOF >02_infra-dns.yaml
imports:
- path: 02_infra-dns.py

resources:
- name: cluster-infra
  type: 02_infra-dns.py
  properties:
    infra_id: '${INFRA_ID}'
    region: '${REGION}'
    cluster_domain: '${CLUSTER_NAME}.${BASE_DOMAIN}'
    cluster_network: '${CLUSTER_NETWORK}'
EOF
gcloud deployment-manager deployments create ${INFRA_ID}-infra-dns --config 02_infra-dns.yaml --project $NETWORK_PROJECT
export CLUSTER_IP=`gcloud compute addresses describe ${INFRA_ID}-cluster-public-ip --region=${REGION} --format json | jq -r .address`
if [ -f transaction.yaml ]; then rm transaction.yaml; fi

gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME} --project $OPENSHIFT_PROJECT

gcloud dns record-sets transaction add ${CLUSTER_IP} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${BASE_DOMAIN_ZONE_NAME} --project $OPENSHIFT_PROJECT

gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME} --project $OPENSHIFT_PROJECT
if [ -f transaction.yaml ]; then rm transaction.yaml; fi

gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone --project $NETWORK_PROJECT

gcloud dns record-sets transaction add ${CLUSTER_IP} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone --project $NETWORK_PROJECT

gcloud dns record-sets transaction add ${CLUSTER_IP} --name api-int.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone --project $NETWORK_PROJECT




gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone --project $NETWORK_PROJECT

Create Firewall Rules and IAM Resources:

Create 03_security_network_vpc.py from https://docs.openshift.com/container-platform/4.3/installing/installing_gcp/installing-gcp-user-infra.html#installation-deployment-manager-security_installing-gcp-user-infra, then delete the two iam.v1.serviceAccount sections at the end of the file; the node service accounts are created separately under the OpenShift Project using 03_iam_sa.py below.
export MASTER_NAT_IP=`gcloud compute addresses describe ${INFRA_ID}-master-nat-ip --region ${REGION} --project $NETWORK_PROJECT --format json | jq -r .address`

export WORKER_NAT_IP=`gcloud compute addresses describe ${INFRA_ID}-worker-nat-ip --region ${REGION} --project $NETWORK_PROJECT --format json | jq -r .address`
cat <<EOF >03_security_network_vpc.yaml
imports:
- path: 03_security_network_vpc.py

resources:
- name: cluster-security
  type: 03_security_network_vpc.py
  properties:
    infra_id: '${INFRA_ID}'
    region: '${REGION}'
    cluster_network: '${CLUSTER_NETWORK}'
    network_cidr: '${NETWORK_CIDR}'
    master_nat_ip: '${MASTER_NAT_IP}'
    worker_nat_ip: '${WORKER_NAT_IP}'
EOF
cat <<EOF >03_iam_sa.yaml
imports:
- path: 03_iam_sa.py

resources:
- name: iam-sa
  type: 03_iam_sa.py
  properties:
    infra_id: '${INFRA_ID}'
    region: '${REGION}'
    cluster_network: '${CLUSTER_NETWORK}'
    network_cidr: '${NETWORK_CIDR}'
    master_nat_ip: '${MASTER_NAT_IP}'
    worker_nat_ip: '${WORKER_NAT_IP}'
EOF
cat <<EOF >03_iam_sa.py
def GenerateConfig(context):

    resources = [{
       'name': context.properties['infra_id'] + '-master-node-sa',
       'type': 'iam.v1.serviceAccount',
       'properties': {
           'accountId': context.properties['infra_id'] + '-m',
           'displayName': context.properties['infra_id'] + '-master-node'
       }
   }, {
       'name': context.properties['infra_id'] + '-worker-node-sa',
       'type': 'iam.v1.serviceAccount',
       'properties': {
           'accountId': context.properties['infra_id'] + '-w',
           'displayName': context.properties['infra_id'] + '-worker-node'
       }
   }]

    return {'resources': resources}
EOF
gcloud deployment-manager deployments create ${INFRA_ID}-security-network-vpc --config 03_security_network_vpc.yaml --project ${NETWORK_PROJECT}

gcloud deployment-manager deployments create ${INFRA_ID}-iam-sa --config 03_iam_sa.yaml --project $OPENSHIFT_PROJECT
# These are to be created under $OPENSHIFT_PROJECT

export MASTER_SA=${INFRA_ID}-m@${PROJECT_NAME}.iam.gserviceaccount.com

gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SA}" --role "roles/compute.instanceAdmin"

gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SA}" --role "roles/compute.networkAdmin"

gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SA}" --role "roles/compute.securityAdmin"

gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SA}" --role "roles/iam.serviceAccountUser"

gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SA}" --role "roles/storage.admin"

export WORKER_SA=${INFRA_ID}-w@${PROJECT_NAME}.iam.gserviceaccount.com

gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${WORKER_SA}" --role "roles/compute.viewer"

gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${WORKER_SA}" --role "roles/storage.admin"

Create RHCOS Cluster Image:

mkdir $INSTALL_DIR/image; cd $INSTALL_DIR/image

wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.3/latest/rhcos-4.3.8-x86_64-gcp.x86_64.tar.gz
export IMAGE_SOURCE=$INSTALL_DIR/image/rhcos-4.3.8-x86_64-gcp.x86_64.tar.gz

export BUCKET_NAME=rhcos-430
gsutil mb gs://$BUCKET_NAME/
gsutil cp $IMAGE_SOURCE gs://$BUCKET_NAME/
gsutil ls
gsutil du -s gs://$BUCKET_NAME/
gsutil ls -r gs://$BUCKET_NAME/**
gcloud compute images create "${INFRA_ID}-rhcos-image" --source-uri="gs://${BUCKET_NAME}/rhcos-4.3.8-x86_64-gcp.x86_64.tar.gz"

Create the bootstrap machine:

export CONTROL_SUBNET=`gcloud compute networks subnets describe ${MASTER_SUBNET} --region=${REGION} --project $NETWORK_PROJECT --format json | jq -r .selfLink`

export CLUSTER_IMAGE=`gcloud compute images describe ${INFRA_ID}-rhcos-image --format json | jq -r .selfLink`

export ZONE_0=`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[0] | cut -d "/" -f9`
export ZONE_1=`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[1] | cut -d "/" -f9`
export ZONE_2=`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[2] | cut -d "/" -f9`
gsutil mb gs://${INFRA_ID}-bootstrap-ignition
gsutil cp bootstrap.ign gs://${INFRA_ID}-bootstrap-ignition/
export BOOTSTRAP_IGN=`gsutil signurl -d 24h $SERVICE_ACCOUNT_KEY_FILE gs://${INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep "^gs:" | awk '{print $5}'`
Create 04_bootstrap.py from https://docs.openshift.com/container-platform/4.3/installing/installing_gcp/installing-gcp-user-infra.html#installation-deployment-manager-bootstrap_installing-gcp-user-infra, then delete the firewall section from that file; the bootstrap SSH firewall rule is created separately in the Network Project using 04_bootstrap_firewall.py below.
cat <<EOF >04_bootstrap.yaml
imports:
- path: 04_bootstrap.py

resources:
- name: cluster-bootstrap
  type: 04_bootstrap.py
  properties:
    infra_id: '${INFRA_ID}'
    region: '${REGION}'
    zone: '${ZONE_0}'
    cluster_network: '${CLUSTER_NETWORK}'
    default_network: '${DEFAULT_NETWORK}'
    control_subnet: '${CONTROL_SUBNET}'
    image: '${CLUSTER_IMAGE}'
    machine_type: 'n1-standard-4'
    root_volume_size: '128'
    bootstrap_ign: '${BOOTSTRAP_IGN}'
EOF
cat <<EOF >04_bootstrap-firewall.yaml
imports:
- path: 04_bootstrap_firewall.py

resources:
- name: cluster-bootstrap
  type: 04_bootstrap_firewall.py
  properties:
    infra_id: '${INFRA_ID}'
    region: '${REGION}'
    zone: '${ZONE_0}'
    cluster_network: '${CLUSTER_NETWORK}'
    default_network: '${DEFAULT_NETWORK}'
EOF
cat <<EOF >04_bootstrap_firewall.py
def GenerateConfig(context):

    resources = [{
       'name': context.properties['infra_id'] + '-bootstrap-in-ssh',
       'type': 'compute.v1.firewall',
       'properties': {
           'network': context.properties['cluster_network'],
           'allowed': [{
               'IPProtocol': 'tcp',
               'ports': ['22']
           }],
           'sourceRanges':  ['0.0.0.0/0'],
           'targetTags': [context.properties['infra_id'] + '-bootstrap']
       }
   }]

    return {'resources': resources}
EOF
gcloud deployment-manager deployments create ${INFRA_ID}-bootstrap-firewall --config 04_bootstrap-firewall.yaml --project ${NETWORK_PROJECT}

gcloud deployment-manager deployments create ${INFRA_ID}-bootstrap --config 04_bootstrap.yaml --project ${OPENSHIFT_PROJECT}
gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-bootstrap --project ${OPENSHIFT_PROJECT}

gcloud compute target-pools add-instances ${INFRA_ID}-ign-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-bootstrap --project ${OPENSHIFT_PROJECT}

Create Control Plane Machines:

export MASTER_SERVICE_ACCOUNT_EMAIL=`gcloud iam service-accounts list --project ${OPENSHIFT_PROJECT} | grep "^${INFRA_ID}-master-node " | awk '{print $2}'`

export MASTER_IGNITION=`cat master.ign`
Create 05_control_plane.py from https://docs.openshift.com/container-platform/4.3/installing/installing_gcp/installing-gcp-user-infra.html#installation-deployment-manager-control-plane_installing-gcp-user-infra
cat <<EOF >05_control_plane.yaml
imports:
- path: 05_control_plane.py

resources:
- name: cluster-control-plane
  type: 05_control_plane.py
  properties:
    infra_id: '${INFRA_ID}'
    region: '${REGION}'
    zones:
    - '${ZONE_0}'
    - '${ZONE_1}'
    - '${ZONE_2}'
    control_subnet: '${CONTROL_SUBNET}'
    image: '${CLUSTER_IMAGE}'
    machine_type: 'n1-standard-4'
    root_volume_size: '128'
    service_account_email: '${MASTER_SERVICE_ACCOUNT_EMAIL}'
    ignition: '${MASTER_IGNITION}'
EOF
gcloud deployment-manager deployments create ${INFRA_ID}-control-plane --config 05_control_plane.yaml --project ${OPENSHIFT_PROJECT}
export MASTER0_IP=`gcloud compute instances describe ${INFRA_ID}-m-0 --zone ${ZONE_0} --format json | jq -r .networkInterfaces[0].networkIP`

export MASTER1_IP=`gcloud compute instances describe ${INFRA_ID}-m-1 --zone ${ZONE_1} --format json | jq -r .networkInterfaces[0].networkIP`

export MASTER2_IP=`gcloud compute instances describe ${INFRA_ID}-m-2 --zone ${ZONE_2} --format json | jq -r .networkInterfaces[0].networkIP`
if [ -f transaction.yaml ]; then rm transaction.yaml; fi
gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone --project ${NETWORK_PROJECT}

gcloud dns record-sets transaction add ${MASTER0_IP} --name etcd-0.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone --project ${NETWORK_PROJECT}

gcloud dns record-sets transaction add ${MASTER1_IP} --name etcd-1.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone --project ${NETWORK_PROJECT}

gcloud dns record-sets transaction add ${MASTER2_IP} --name etcd-2.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone --project ${NETWORK_PROJECT}

gcloud dns record-sets transaction add \
 "0 10 2380 etcd-0.${CLUSTER_NAME}.${BASE_DOMAIN}." \
 "0 10 2380 etcd-1.${CLUSTER_NAME}.${BASE_DOMAIN}." \
 "0 10 2380 etcd-2.${CLUSTER_NAME}.${BASE_DOMAIN}." \
 --name _etcd-server-ssl._tcp.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type SRV --zone ${INFRA_ID}-private-zone --project ${NETWORK_PROJECT}

gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone --project ${NETWORK_PROJECT}
# Create under ${OPENSHIFT_PROJECT}

gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-m-0

gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_1}" --instances=${INFRA_ID}-m-1

gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_2}" --instances=${INFRA_ID}-m-2

gcloud compute target-pools add-instances ${INFRA_ID}-ign-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-m-0

gcloud compute target-pools add-instances ${INFRA_ID}-ign-target-pool --instances-zone="${ZONE_1}" --instances=${INFRA_ID}-m-1

gcloud compute target-pools add-instances ${INFRA_ID}-ign-target-pool --instances-zone="${ZONE_2}" --instances=${INFRA_ID}-m-2

Delete the bootstrap machine after the bootstrap process completes:

Log in to the bootstrap machine and wait for bootkube.service to complete, or run the following from the bastion host:

openshift-install wait-for bootstrap-complete --dir=$INSTALL_DIR --log-level info
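
To follow the bootstrap progress directly, one option (a sketch; assumes the bastion host can reach the bootstrap node's internal IP over SSH with the installer's key) is:

export BOOTSTRAP_IP=`gcloud compute instances describe ${INFRA_ID}-bootstrap --zone ${ZONE_0} --project ${OPENSHIFT_PROJECT} --format json | jq -r .networkInterfaces[0].networkIP`
ssh core@${BOOTSTRAP_IP} journalctl -b -f -u bootkube.service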

gcloud compute target-pools remove-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-bootstrap --project ${OPENSHIFT_PROJECT}

gcloud compute target-pools remove-instances ${INFRA_ID}-ign-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-bootstrap --project ${OPENSHIFT_PROJECT}
gsutil rm gs://${INFRA_ID}-bootstrap-ignition/bootstrap.ign
gsutil rb gs://${INFRA_ID}-bootstrap-ignition

gcloud deployment-manager deployments delete ${INFRA_ID}-bootstrap --project ${OPENSHIFT_PROJECT} -q

gcloud deployment-manager deployments delete ${INFRA_ID}-bootstrap-firewall --project ${NETWORK_PROJECT} -q

Deploy Worker Nodes:

export COMPUTE_SUBNET=`gcloud compute networks subnets describe ${WORKER_SUBNET} --region=${REGION} --project $NETWORK_PROJECT --format json | jq -r .selfLink`

export WORKER_SERVICE_ACCOUNT_EMAIL=`gcloud iam service-accounts list | grep "^${INFRA_ID}-worker-node " | awk '{print $2}'`

export WORKER_IGNITION=`cat worker.ign`
Create 06_worker.py from https://docs.openshift.com/container-platform/4.3/installing/installing_gcp/installing-gcp-user-infra.html#installation-deployment-manager-worker_installing-gcp-user-infra
For two worker nodes (in two zones, ZONE_0 and ZONE_1):

cat <<EOF >06_worker.yaml
imports:
- path: 06_worker.py

resources:
- name: 'w-a-0'
  type: 06_worker.py
  properties:
    infra_id: '${INFRA_ID}'
    region: '${REGION}'
    zone: '${ZONE_0}'
    compute_subnet: '${COMPUTE_SUBNET}'
    image: '${CLUSTER_IMAGE}'
    machine_type: 'n1-standard-4'
    root_volume_size: '128'
    service_account_email: '${WORKER_SERVICE_ACCOUNT_EMAIL}'
    ignition: '${WORKER_IGNITION}'

- name: 'w-b-0'
  type: 06_worker.py
  properties:
    infra_id: '${INFRA_ID}'
    region: '${REGION}'
    zone: '${ZONE_1}'
    compute_subnet: '${COMPUTE_SUBNET}'
    image: '${CLUSTER_IMAGE}'
    machine_type: 'n1-standard-4'
    root_volume_size: '128'
    service_account_email: '${WORKER_SERVICE_ACCOUNT_EMAIL}'
    ignition: '${WORKER_IGNITION}'
EOF
Or, for a single worker node:

cat <<EOF >06_worker.yaml
imports:
- path: 06_worker.py

resources:
- name: 'w-a-0'
  type: 06_worker.py
  properties:
    infra_id: '${INFRA_ID}'
    region: '${REGION}'
    zone: '${ZONE_0}'
    compute_subnet: '${COMPUTE_SUBNET}'
    image: '${CLUSTER_IMAGE}'
    machine_type: 'n1-standard-4'
    root_volume_size: '128'
    service_account_email: '${WORKER_SERVICE_ACCOUNT_EMAIL}'
    ignition: '${WORKER_IGNITION}'
EOF
gcloud deployment-manager deployments create ${INFRA_ID}-worker --config 06_worker.yaml --project ${OPENSHIFT_PROJECT}
Approve the node CSRs (see the next section) to add these worker nodes to the cluster.

Download the OpenShift Client (oc) and log in to the cluster using kubeconfig:

curl -L https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz \
 | tar -C /usr/local/bin -xzf - oc
export KUBECONFIG=$INSTALL_DIR/auth/kubeconfig
oc completion bash > ~/.oc_completion.sh
echo "source ~/.oc_completion.sh" >> ~/.bashrc
source ~/.oc_completion.sh
oc get nodes
oc get csr
oc get csr -ojson | jq -r '.items[] | select(.status == {} ) | .metadata.name' | xargs oc adm certificate approve
watch -n5 oc get clusteroperators
oc get clusterversion
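
Once all cluster operators become available, the installation can be confirmed from the bastion host; this also prints the web console URL and the kubeadmin credentials:

openshift-install wait-for install-complete --dir=${INSTALL_DIR} --log-level info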

Add the Ingress Router Load Balancer and Firewall Rules:

cat <<EOF >06_worker-lb.py
def GenerateConfig(context):

    resources = [{
        'name': context.properties['infra_id'] + '-ingress-public-ip',
        'type': 'compute.v1.address',
        'properties': {
            'region': context.properties['region']
        }
    }, {
        'name': context.properties['infra_id'] + '-ingress-http-health-check',
        'type': 'compute.v1.httpHealthCheck',
        'properties': {
            'port': 32410,
            'requestPath': '/healthz'
        }
    }, {
        'name': context.properties['infra_id'] + '-ingress-target-pool',
        'type': 'compute.v1.targetPool',
        'properties': {
            'region': context.properties['region'],
            'healthChecks': ['\$(ref.' + context.properties['infra_id'] + '-ingress-http-health-check.selfLink)'],
            'instances': []
        }
    }, {
        'name': context.properties['infra_id'] + '-ingress-forwarding-rule',
        'type': 'compute.v1.forwardingRule',
        'properties': {
            'region': context.properties['region'],
            'IPAddress': '\$(ref.' + context.properties['infra_id'] + '-ingress-public-ip.selfLink)',
            'target': '\$(ref.' + context.properties['infra_id'] + '-ingress-target-pool.selfLink)',
            'portRange': '80-443'
        }
    }]

    return {'resources': resources}

EOF
cat <<EOF >06_worker-lb.yaml
imports:
- path: 06_worker-lb.py

resources:
- name: 'worker-lb'
  type: 06_worker-lb.py
  properties:
    infra_id: '${INFRA_ID}'
    region: '${REGION}'
EOF
gcloud deployment-manager deployments create ${INFRA_ID}-worker-lb --config 06_worker-lb.yaml --project ${OPENSHIFT_PROJECT}
Add each worker node to the ingress target pool:

gcloud compute target-pools add-instances ${INFRA_ID}-ingress-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-w-a-0

gcloud compute target-pools add-instances ${INFRA_ID}-ingress-target-pool --instances-zone="${ZONE_1}" --instances=${INFRA_ID}-w-b-0
cat <<EOF >06_ingress-firewall.yaml
imports:
- path: 06_ingress-firewall.py

resources:
- name: cluster-security
  type: 06_ingress-firewall.py
  properties:
    infra_id: '${INFRA_ID}'
    region: '${REGION}'
    cluster_network: '${CLUSTER_NETWORK}'
    network_cidr: '${NETWORK_CIDR}'
EOF
cat <<EOF >06_ingress-firewall.py
def GenerateConfig(context):

    resources = [{
       'name': context.properties['infra_id'] + '-ingress-health-checks',
       'type': 'compute.v1.firewall',
       'properties': {
           'network': context.properties['cluster_network'],
           'allowed': [{
               'IPProtocol': 'tcp',
               'ports': ['32410', '1936']
           }],
           'sourceRanges':  ['35.191.0.0/16', '209.85.152.0/22', '209.85.204.0/22', '130.211.0.0/22', '35.243.0.0/22'],
            'sourceTags': [context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker'],
            'targetTags': [context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker']
       }
   }, {
       'name': context.properties['infra_id'] + '-ingress-router',
       'type': 'compute.v1.firewall',
       'properties': {
           'network': context.properties['cluster_network'],
           'allowed': [{
               'IPProtocol': 'tcp',
               'ports': ['80', '443']
           }],
           'sourceRanges':  ['0.0.0.0/0'],
            'targetTags': [context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker']
       }
   }]

    return {'resources': resources}
EOF
Make sure the external source IP ranges used by the GCP ingress health checks are allowed in 06_ingress-firewall.py.

gcloud deployment-manager deployments create ${INFRA_ID}-ingress-firewall --config 06_ingress-firewall.yaml --project ${NETWORK_PROJECT}
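
To verify that the ingress firewall rules landed in the Network Project with the expected source ranges, a quick check:

gcloud compute firewall-rules list --project=${NETWORK_PROJECT} --filter="name~${INFRA_ID}-ingress"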

Set up the DNS records for *.apps:

export INGRESS_IP=`gcloud compute addresses describe ${INFRA_ID}-ingress-public-ip --region=${REGION} --format json | jq -r .address`
if [ -f transaction.yaml ]; then rm transaction.yaml; fi

gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME} --project ${OPENSHIFT_PROJECT}

gcloud dns record-sets transaction add ${INGRESS_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${BASE_DOMAIN_ZONE_NAME} --project ${OPENSHIFT_PROJECT}

gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME} --project ${OPENSHIFT_PROJECT}

if [ -f transaction.yaml ]; then rm transaction.yaml; fi

gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone --project ${NETWORK_PROJECT}

gcloud dns record-sets transaction add ${INGRESS_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${INFRA_ID}-private-zone --project ${NETWORK_PROJECT}

gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone --project ${NETWORK_PROJECT}

Post-Install (Day 2) Operations

Node Scale Up

Create a new worker node, approve the CSR and add it to the target-pool:

Export all required variables for the template (see the Installation Files section above), then:
cat <<EOF >06_worker.yaml
imports:
- path: 06_worker.py

resources:
- name: 'w-c-0'
  type: 06_worker.py
  properties:
    infra_id: '${INFRA_ID}'
    region: '${REGION}'
    zone: '${ZONE_2}'
    compute_subnet: '${COMPUTE_SUBNET}'
    image: '${CLUSTER_IMAGE}'
    machine_type: 'n1-standard-4'
    root_volume_size: '128'
    service_account_email: '${WORKER_SERVICE_ACCOUNT_EMAIL}'
    ignition: '${WORKER_IGNITION}'
EOF
gcloud deployment-manager deployments create ${INFRA_ID}-worker --config 06_worker.yaml --project ${OPENSHIFT_PROJECT}
oc get csr

oc get csr -ojson | jq -r '.items[] | select(.status == {} ) | .metadata.name' | xargs oc adm certificate approve
gcloud compute target-pools add-instances ${INFRA_ID}-ingress-target-pool --instances-zone="${ZONE_2}" --instances=${INFRA_ID}-w-c-0

Uninstall the Cluster

The following is provided only as a reference for deleting a deployed cluster:

export INSTALL_DIR=${HOME}/ocp43-1-gcp
export INFRA_ID=`jq -r .infraID $INSTALL_DIR/metadata.json`

export NETWORK_PROJECT=network-vpc-269503
export OPENSHIFT_PROJECT=ocp43-1

#export CLUSTER_NAME=`jq -r .clusterName $INSTALL_DIR/metadata.json`
#export BASE_DOMAIN='ocp431.example.com'
gcloud deployment-manager deployments list --project $OPENSHIFT_PROJECT
gcloud deployment-manager deployments list --project $NETWORK_PROJECT
echo $INFRA_ID    # Make sure it matches the deployment name prefixes in the output above
gcloud dns record-sets import --zone=${INFRA_ID}-private-zone --delete-all-existing /dev/null --project $NETWORK_PROJECT
gcloud deployment-manager deployments delete $INFRA_ID-worker $INFRA_ID-worker-lb --project $OPENSHIFT_PROJECT -q

gcloud deployment-manager deployments delete $INFRA_ID-control-plane --project $OPENSHIFT_PROJECT -q

gcloud deployment-manager deployments delete $INFRA_ID-infra-lb $INFRA_ID-iam-sa --project $OPENSHIFT_PROJECT -q
export LB_IP=`gcloud dns record-sets list --zone ${BASE_DOMAIN_ZONE_NAME} --project $OPENSHIFT_PROJECT | grep api | awk {' print $4 '}`

if [ -f transaction.yaml ]; then rm transaction.yaml; fi

gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME} --project $OPENSHIFT_PROJECT

gcloud dns record-sets transaction remove $LB_IP --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${BASE_DOMAIN_ZONE_NAME} --project $OPENSHIFT_PROJECT
gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME} --project $OPENSHIFT_PROJECT
gcloud deployment-manager deployments delete $INFRA_ID-security-network-vpc $INFRA_ID-infra-dns $INFRA_ID-nat-router $INFRA_ID-ingress-firewall --project $NETWORK_PROJECT -q
gcloud compute images delete "${INFRA_ID}-rhcos-image"
gsutil rb `gsutil ls | grep $INFRA_ID-image-registry`