
Deploying ROSA PrivateLink Cluster with Ansible

This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.

Background

This guide shows an example of how to deploy a classic Red Hat OpenShift Service on AWS (ROSA) cluster with PrivateLink and STS enabled, using an Ansible playbook from our MOBB GitHub repository and a makefile to run it. Note that this is an unofficial Red Hat guide and your implementation may vary.

This guide is broken down into two main sections: the architectural landscape of this deployment scenario, including the AWS services and open source products and services that we will be using, and the implementation steps, which include the prerequisites needed to run the deployment itself.

Architecture

In this section, we will first discuss the architecture of the cluster we are going to create at a high level. Afterward, we will talk about the components that we are utilizing in this deployment, from the Git repository where this deployment will be cloned from, to the makefile that wraps the commands needed to run the deployment, to the Ansible commands that the makefile invokes.

PrivateLink allows you to securely access AWS services over private network connections, without exposing your traffic to the public internet. In this scenario, we will be using a Transit Gateway (TGW), which allows inter-VPC and VPC-to-on-premises communication by providing a scalable and efficient way to handle traffic between these networks.

To help with DNS resolution, we will be using a DNS forwarder to forward queries to a Route 53 Inbound Resolver, allowing the cluster to accept incoming DNS queries from external sources and thus establish the desired connection without exposing the underlying infrastructure.

Figure: ROSA with PrivateLink and Transit Gateway

In addition, an Egress VPC will be provisioned, serving as a dedicated network component for managing outbound traffic from the cluster. A NAT Gateway will be created within the public subnet of the Egress VPC, along with a Squid-based proxy to restrict egress traffic from the cluster to only the permitted endpoints or destinations.

We will also be using VPC Endpoints to privately access AWS resources, e.g. a gateway endpoint for S3 and interface endpoints for STS, EC2, and so on.

Finally, once the cluster is created, we will access it by establishing a secure SSH connection through a jump host that is set up within the Egress VPC, and to do so we will be using sshuttle.

Git

Git is a version control system that tracks changes to files and enables collaboration, while GitHub is a web-based hosting service for Git repositories. In this scenario, the deployment will be based on the Ansible playbook from the MOBB GitHub repository at https://github.com/rh-mobb/ansible-rosa.

We specify the default variables in ./roles/_vars/defaults/main.yml, and these defaults are then overridden by scenario-specific variables located in ./environment/*/group_vars/all.yaml.

For now, let’s take a look at what these default variables are. Below are the snippets from ./roles/_vars/defaults/main.yml:
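
The exact contents of this file change as the repository evolves, so the block below is only an illustrative sketch rather than a verbatim copy; variable names such as rosa_private_link, rosa_tgw_enabled, and enable_jumphost are the ones discussed later in this guide, while the remaining names and values are assumptions to verify against ./roles/_vars/defaults/main.yml in your clone.

    # Illustrative sketch only -- verify against ./roles/_vars/defaults/main.yml
    cluster_name: "ans-{{ lookup('env', 'USER') }}"  # default name; the makefile also sets this (see the Makefile section)
    rosa_sts: true                   # deploy with STS enabled
    rosa_private_link: false         # PrivateLink is off by default
    rosa_tgw_enabled: false          # no Transit Gateway by default
    rosa_egress_vpc_enabled: false   # no dedicated Egress VPC by default
    enable_jumphost: false           # no jump host by default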

As mentioned previously, we are going to override the above default variables with ones that are relevant to our scenario, in this case ROSA with PrivateLink and Transit Gateway. To do so, we will use the variables specified in ./environment/transit-gateway-egress/group_vars/all.yaml instead:
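
Again as an illustrative sketch (check the actual file in your clone), the scenario file flips on the switches that correspond to the architecture described earlier, along the lines of:

    # Illustrative sketch only -- verify against
    # ./environment/transit-gateway-egress/group_vars/all.yaml
    rosa_private_link: true        # deploy the cluster as PrivateLink
    rosa_tgw_enabled: true         # create and attach a Transit Gateway
    rosa_egress_vpc_enabled: true  # provision the Egress VPC with NAT Gateway and Squid proxy
    enable_jumphost: true          # create the jump host used for sshuttle access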

Next, we will talk about what a makefile is and how it helps drive the deployment in this scenario.

Makefile

Make is a build automation tool to manage the compilation and execution of programs. It reads a file called a makefile that contains a set of rules and dependencies, allowing developers to define how source code files should be compiled, linked, and executed.

In this scenario, the makefile can be found in the root directory of the GitHub repository. Below is the snippet where the cluster name is set, along with the virtualenv target that runs when we execute make virtualenv:
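
As an illustrative sketch (the real Makefile may differ in detail; the virtual environment location and the packages installed are assumptions based on the prerequisites list below), the relevant portion looks roughly like this:

    # Illustrative sketch only -- verify against the Makefile in the repository root
    CLUSTER_NAME ?= ans-${USER}    # i.e. ans-<your username>

    virtualenv:
    	python3 -m venv ./virtualenv
    	./virtualenv/bin/pip install ansible boto3 botocore   # Python packages from the prerequisites list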

As you can see above, the cluster_name variable is hardcoded in the makefile to be ans-${username}.

And below is what the makefile runs when we execute make create.tgw and make delete.tgw for this scenario.
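
Illustratively (the delete playbook name is an assumption; check the Makefile itself), these targets simply invoke ansible-playbook with the scenario's inventory file:

    # Illustrative sketch only -- verify against the Makefile in the repository root
    create.tgw:
    	ansible-playbook -i ./environment/transit-gateway-egress/hosts create-cluster.yaml

    delete.tgw:
    	ansible-playbook -i ./environment/transit-gateway-egress/hosts delete-cluster.yaml   # assumed playbook name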

Here we see the Ansible commands that trigger the deployment. So next, let’s discuss what Ansible is and how it helps build the cluster in this scenario.

Ansible

Ansible is an open-source automation tool that simplifies system management and configuration. It uses a declarative approach, allowing users to define desired states using YAML-based Playbooks. With an agentless architecture and a vast library of modules, Ansible enables automation of tasks such as configuration management, package installation, and user management.

Recall from the Makefile section the following code snippet, which is run by the make create.tgw command:
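
That is, the sketch shown earlier:

    create.tgw:
    	ansible-playbook -i ./environment/transit-gateway-egress/hosts create-cluster.yaml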

In this case, we run the ansible-playbook command, executing a playbook called create-cluster.yaml and specifying ./environment/transit-gateway-egress/hosts as the inventory file.

Let’s take a quick look at the create-cluster.yaml playbook, which can be found in the repository’s root folder:
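
Since the full playbook is longer than what is useful to reproduce here, the following is a trimmed, illustrative sketch based on the description below; the role names and the jumphost group name are assumptions, while the conditions on rosa_tgw_enabled, rosa_private_link, and enable_jumphost reflect how the plays are described in this guide.

    # Illustrative, trimmed sketch of create-cluster.yaml -- not the verbatim playbook
    - hosts: all
      connection: local
      gather_facts: false
      roles:
        - role: tgw_create                 # ./roles/tgw_create
          when: rosa_tgw_enabled | bool
        # ...additional roles selected per scenario...

    - hosts: jumphost                      # assumed inventory group for the jump host (SSH connection)
      roles:
        - role: post_install               # ./roles/post_install
          when: (rosa_private_link | bool) and (enable_jumphost | bool)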

As you can see above, the playbook consists of two plays targeting different hosts: the first targets all hosts and the second targets the jump host. Within each play, different tasks are specified. The first play’s tasks are executed locally, using roles that are selected depending on the scenario; in this case, it will execute the tasks in ./roles/tgw_create because rosa_tgw_enabled evaluates to true. In a similar vein, the second play’s tasks require an SSH connection to the host, and in this case it will execute the tasks in ./roles/post_install since both rosa_private_link and enable_jumphost evaluate to true.

Implementation

Now that we understand, at a high level, the architecture of the cluster that we are going to create in this scenario, along with the components and the commands needed to run the deployment, we can start preparing to build the cluster itself.

Prerequisites

  • AWS CLI
  • ROSA CLI >= 1.2.22
  • ansible >= 2.15.0
  • python >= 3.6
  • boto3 >= 1.22.0
  • botocore >= 1.25.0
  • make
  • sshuttle

Deployment

Once you have all of the prerequisites installed, clone our repository and go to the ansible-rosa directory.
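
For example:

    git clone https://github.com/rh-mobb/ansible-rosa.git
    cd ansible-rosa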

Then, run the following command to create a Python virtual environment.
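
This uses the virtualenv target discussed in the Makefile section:

    make virtualenv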

Next, run the following command to have the Ansible playbook create the cluster.
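
This invokes the create.tgw target, which runs the ansible-playbook command shown earlier:

    make create.tgw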

Note that the cluster setup may take up to one hour. Once the cluster is successfully deployed, you will see a snippet like the following (note that this is an example):
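
The exact output format depends on the playbook version, so the block below only illustrates the kind of values to look for at the end of the run (the example values here are the ones referenced in the rest of this section):

    # Illustrative summary of the values printed at the end of the run
    console_url: https://console-openshift-console.apps.ans-dianasari.caxr.p1.openshiftapps.com
    username:    cluster-admin
    password:    Rosa1234password67890
    # ...plus a suggested sshuttle command for reaching the cluster through the jump host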

Next, let’s connect to the jump host and log in to the cluster using the credentials provided by Ansible upon completion of the creation task, as seen above. In this example, we will be using sshuttle (note that this is just an example, so please refer to the one provided by your Ansible deployment):
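
A typical invocation looks something like the following; the jump host address and the CIDR range are placeholders to replace with the values printed by your own Ansible run:

    # Placeholders: substitute the jump host address and CIDR from your deployment output
    sshuttle --dns -r ec2-user@<jump-host-public-ip> 10.0.0.0/8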

You may need to enter your password for local sudo access, and answer yes when asked whether you want to continue connecting. Once your terminal shows that it is connected, open your browser and go to the console URL provided. In the example above, the URL was https://console-openshift-console.apps.ans-dianasari.caxr.p1.openshiftapps.com (note that this is just an example, so please refer to the one provided by your Ansible deployment).

You will then be asked to log in using htpasswd, so click that button and use the username and password provided by the Ansible deployment. In this case, as you can see from the snippet above, the username is cluster-admin and the password is Rosa1234password67890. Finally, click the login button.

Note that the cluster-admin username and its password are pre-created in this scenario, so you might want to modify them for your own deployment.

Once you are done with the cluster, run the following command to delete it.
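
This runs the delete.tgw target described in the Makefile section:

    make delete.tgw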
