Cloud Experts Documentation

OCP

How to deploy Jupyter Notebook

Retrieve the login command

If you are not already logged in via the CLI, access your cluster through the web console, then:

1. Click the dropdown arrow next to your name in the top-right and select Copy Login Command.
2. A new tab will open; select the authentication method you are using (in our case it's GitHub).
3. Click Display Token.
4. Copy the command shown under “Log in with this token”.
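The copied command has the following shape; the token and server URL below are placeholders, not values from the original guide:

oc login --token=sha256~<your-token> --server=https://api.<cluster-domain>:6443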

Installing the Open Data Hub Operator

The Open Data Hub operator is available for deployment in the OpenShift OperatorHub as a Community Operator. You can install it from the OpenShift web console:

1. From the OpenShift web console, log in as a user with cluster-admin privileges. For a developer installation from try.openshift.com, including AWS and CRC, the kubeadmin user will work.
2. Create a new project named ‘jph-demo’ for your installation of Open Data Hub.
3. Find Open Data Hub in the OperatorHub catalog.
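If you prefer the CLI for the project-creation step, the equivalent oc command is:

oc new-project jph-demo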

Jupyter Notebooks

You will need the following prerequisites in order to run a basic Jupyter notebook with a GPU on OpenShift:

1. An OpenShift cluster. This guide assumes you have already provisioned an OpenShift cluster successfully and are able to use it. You will need to log in as cluster-admin to deploy the GPU Operator.
2. The OpenShift command line interface. Please see the OpenShift Command Line section for more information on installing it.
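As a quick sanity check before deploying the GPU Operator, you can confirm your current user has cluster-admin rights; this check is a convenience added here, not a step from the original guide:

oc auth can-i '*' '*' --all-namespaces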

Installing the HashiCorp Vault Secret CSI Driver

The HashiCorp Vault Secret CSI Driver allows you to access secrets stored in HashiCorp Vault as Kubernetes volumes.

Prerequisites

- An OpenShift cluster (ROSA, ARO, OSD, and OCP 4.x all work)
- oc
- helm v3

Installing the Kubernetes Secret Store CSI

1. Create an OpenShift project to deploy the CSI into:

oc new-project k8s-secrets-store-csi

2. Set SecurityContextConstraints to allow the CSI driver to run (otherwise the DaemonSet will not be able to create pods), as in the sketch below.
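A minimal sketch of that SCC step, assuming the driver runs under a service account named secrets-store-csi-driver (the account name is an assumption; check what the chart actually creates):

oc adm policy add-scc-to-user privileged \
    system:serviceaccount:k8s-secrets-store-csi:secrets-store-csi-driver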

Installing the Kubernetes Secret Store CSI on OpenShift

The Kubernetes Secret Store CSI is a storage driver that allows you to mount secrets from external secret management systems like HashiCorp Vault and AWS Secrets Manager. It comes in two parts: the Secret Store CSI itself and a secret provider driver. This document covers just the CSI.

Prerequisites

- An OpenShift cluster (ROSA, ARO, OSD, and OCP 4.x all work)
- kubectl
- helm v3

Installing the Kubernetes Secret Store CSI

1. Create an OpenShift project to deploy the CSI into, then install the chart as sketched below.
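A minimal sketch of the install itself, assuming the upstream kubernetes-sigs Helm chart (the repo URL and chart name come from the upstream project, not from this excerpt; verify against the full guide):

helm repo add secrets-store-csi-driver \
    https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
helm install -n k8s-secrets-store-csi csi-secrets-store \
    secrets-store-csi-driver/secrets-store-csi-driver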

Configuring OpenShift Logging using LokiStack on ROSA and (soon) ARO

A guide to shipping logs and metrics on OpenShift using the new LokiStack setup. Recently, the default logging stack on OpenShift changed from ElasticSearch/FluentD/Kibana to one based on LokiStack/Vector/OCP Console. LokiStack requires an object store in order to function, and this guide walks you through the steps required to set one up.

Overview of the components of OpenShift Cluster Logging

Prerequisites

- OpenShift CLI (oc)
- Rights to install operators on the cluster
- Access to create S3 buckets (AWS/ROSA), a Blob Storage container (Azure), or a Storage Bucket (GCP)

Setting up your environment for ROSA

Create environment variables to use later in this process by running the following commands:

$ export REGION=$(oc get infrastructure cluster -o=jsonpath="{.
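The excerpt cuts off mid-command. A plausible completion, assuming the region is read from the standard Infrastructure resource (the jsonpath is an assumption; confirm against the full guide):

$ export REGION=$(oc get infrastructure cluster -o=jsonpath="{.status.platformStatus.aws.region}")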

OpenShift Logging

A guide to shipping logs and metrics on OpenShift.

Prerequisites

- OpenShift CLI (oc)
- Rights to install operators on the cluster

Setup OpenShift Logging

This sets up centralized logging on OpenShift making use of the Elasticsearch OSS edition. It largely follows the process outlined in the OpenShift documentation. Retention and storage considerations are reviewed in Red Hat’s primary source documentation. This setup is primarily concerned with simplicity and basic log searching.
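Installing the operator typically comes down to creating a Subscription. A minimal sketch, assuming the Red Hat Cluster Logging operator and an openshift-logging namespace with an OperatorGroup already in place (the channel, namespace, and catalog source are assumptions, not taken from this excerpt):

oc create -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: stable
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF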

OpenShift - Sharing Common images

In OpenShift, images stored in the in-cluster registry are protected by Kubernetes RBAC, and by default only the namespace in which an image was built can access it. For example, if you build an image in project-a, only project-a can use that image or build from it. If you wanted the default service account in project-b to have access to the images in project-a, you would run the following:

oc policy add-role-to-user \
    system:image-puller system:serviceaccount:project-b:default \
    --namespace=project-a

However, if you had to do this for every namespace it could become quite cumbersome.
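One way to avoid repeating that grant per service account (an illustration; not necessarily the approach the full guide takes) is to grant the role to a group instead, for example every service account in project-b:

oc policy add-role-to-group \
    system:image-puller system:serviceaccounts:project-b \
    --namespace=project-a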

Examples of using a WAF in front of ROSA / OSD on AWS / OCP on AWS

Problem Statement

- The operator requires a WAF (Web Application Firewall) in front of their workloads running on OpenShift (ROSA).
- The operator does not want the WAF running on OpenShift, to ensure that OCP resources do not experience denial of service from handling WAF traffic.

Quick introduction by Paul Czarkowski and Ryan Niksch on YouTube.

Solutions

CloudFront -> WAF -> CustomDomain -> $APP. This is the preferred method and can also work with most third-party WAF systems that act as a reverse proxy.
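For the CustomDomain hop in that chain, ROSA and OSD expose a CustomDomain custom resource. A minimal sketch, where the domain, TLS secret, and namespace are hypothetical placeholders (the CRD shape is an assumption; confirm against the Custom Domains Operator documentation):

oc create -f - <<EOF
apiVersion: managed.openshift.io/v1alpha1
kind: CustomDomain
metadata:
  name: acme-apps
spec:
  domain: apps.acme.example.com
  certificate:
    name: acme-tls
    namespace: acme
EOF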

Interested in contributing to these docs?

Collaboration drives progress. Help improve our documentation, the Red Hat way.
