
OpenShift Dedicated Service Definition

Table of Contents

  1. Account Management
  2. Logging
  3. Monitoring
  4. Networking
  5. Storage
  6. Platform
  7. Security

Account Management

Billing

Each Red Hat OpenShift Dedicated (OSD) cluster requires a minimum annual base cluster purchase, but there are two billing options available for each cluster: Standard and Customer Cloud Subscription (CCS; previously known as Bring-Your-Own-Cloud or BYOC).

Standard OpenShift Dedicated clusters are deployed into their own cloud infrastructure accounts, each owned by Red Hat. Red Hat is responsible for these accounts, and cloud infrastructure costs are paid directly by Red Hat; the customer pays only the Red Hat subscription costs.

In the Customer Cloud Subscription model, the customer pays the cloud infrastructure provider directly for cloud costs and the cloud infrastructure account will be part of a customer’s Organization, with specific access granted to Red Hat. In this model, the customer will pay Red Hat for the CCS subscription and will pay the cloud provider for the cloud costs. It is the customer's responsibility to pre-purchase or provide Reserved Instance (RI) compute instances to ensure lower cloud infrastructure costs.

Additional resources may be purchased for an OpenShift Dedicated Cluster, including:

  • Additional Nodes (can be different types and sizes through the use of Machinepools)
  • Middleware (JBoss EAP, JBoss Fuse, etc.) - additional pricing based on specific middleware component
  • Additional Storage in increments of 500GB (non-CCS only; 100GB included)
  • Additional 12 TiB Network I/O (non-CCS only; 12 TiB included)
  • Load Balancers for Services are available in bundles of 4, enabling non-HTTP/SNI traffic or non-standard ports (non-CCS only)

Cluster Self-service

Customers can create, scale, and delete their clusters from OpenShift Cluster Manager (OCM), provided they have pre-purchased the necessary subscriptions.

Actions available in OpenShift Cluster Manager must not be performed directly from within the cluster, as this may cause adverse effects, including, but not limited to, having all actions automatically reverted.

Cloud Providers

OpenShift Dedicated offers OpenShift Container Platform clusters as a managed service on the following cloud providers:

  • Amazon Web Services (AWS)
  • Google Cloud

Compute

Single-AZ clusters require a minimum of 2 worker nodes for Customer Cloud Subscription (CCS) clusters, deployed to a single availability zone. Non-CCS clusters require a minimum of 4 worker nodes, which are included in the base subscription.

Multi-AZ clusters require a minimum of 3 worker nodes for Customer Cloud Subscription (CCS) clusters, 1 deployed to each of three availability zones. A minimum of 9 worker nodes is required for non-CCS clusters. These 9 worker nodes are included in the base subscription, and additional nodes must be purchased in multiples of three in order to maintain proper node distribution.

All OpenShift Dedicated clusters support a maximum of 180 worker nodes.

Note: The default Machine Pool node type/size cannot be changed once the cluster has been created.

Master and infrastructure nodes are also provided by Red Hat. There are at least 3 master nodes that handle etcd and API related workloads. There are at least 2 infrastructure nodes that handle metrics, routing, web console and other workloads. Master and infrastructure nodes are strictly for Red Hat workloads to operate the service, and customer workloads are not permitted to be deployed on these nodes.

Note: Approximately 1 vCPU core and 1 GiB of memory are reserved on each worker node and removed from allocatable resources. This is necessary to run processes required by the underlying platform, including, but not limited to, system daemons such as udev, kubelet, and the container runtime, and it also accounts for kernel reservations. OpenShift core systems such as audit log aggregation, metrics collection, DNS, image registry, and SDN may consume additional allocatable resources to maintain the stability and maintainability of the cluster. The additional resources consumed may vary based on usage.

Compute Types - AWS

OpenShift Dedicated offers the following worker node types and sizes:

General purpose

  • M5.xlarge (4 vCPU, 16 GiB)
  • M5.2xlarge (8 vCPU, 32 GiB)
  • M5.4xlarge (16 vCPU, 64 GiB)
  • M5.8xlarge (32 vCPU, 128 GiB)
  • M5.12xlarge (48 vCPU, 192 GiB)
  • M5.16xlarge (64 vCPU, 256 GiB)
  • M5.24xlarge (96 vCPU, 384 GiB)

Memory-optimized

  • R5.xlarge (4 vCPU, 32 GiB)
  • R5.2xlarge (8 vCPU, 64 GiB)
  • R5.4xlarge (16 vCPU, 128 GiB)
  • R5.8xlarge (32 vCPU, 256 GiB)
  • R5.12xlarge (48 vCPU, 384 GiB)
  • R5.16xlarge (64 vCPU, 512 GiB)
  • R5.24xlarge (96 vCPU, 768 GiB)

Compute-optimized

  • C5.2xlarge (8 vCPU, 16 GiB)
  • C5.4xlarge (16 vCPU, 32 GiB)
  • C5.9xlarge (36 vCPU, 72 GiB)
  • C5.12xlarge (48 vCPU, 96 GiB)
  • C5.18xlarge (72 vCPU, 144 GiB)
  • C5.24xlarge (96 vCPU, 192 GiB)

Compute Types - Google Cloud

OpenShift Dedicated offers the following worker node types and sizes on Google Cloud, chosen to match the CPU and memory capacity of the instance types offered on other supported clouds:

General purpose

  • custom-4-16384 (4 vCPU, 16 GiB)
  • custom-8-32768 (8 vCPU, 32 GiB)
  • custom-16-65536 (16 vCPU, 64 GiB)
  • custom-32-131072 (32 vCPU, 128 GiB)
  • custom-48-196608 (48 vCPU, 192 GiB)
  • custom-64-262144 (64 vCPU, 256 GiB)
  • custom-96-393216 (96 vCPU, 384 GiB)

Memory-optimized

  • custom-4-32768-ext (4 vCPU, 32 GiB)
  • custom-8-65536-ext (8 vCPU, 64 GiB)
  • custom-16-131072-ext (16 vCPU, 128 GiB)
  • custom-32-262144 (32 vCPU, 256 GiB)
  • custom-48-393216 (48 vCPU, 384 GiB)
  • custom-64-524288 (64 vCPU, 512 GiB)
  • custom-96-786432 (96 vCPU, 768 GiB)

Compute-optimized

  • custom-8-16384 (8 vCPU, 16 GiB)
  • custom-16-32768 (16 vCPU, 32 GiB)
  • custom-36-73728 (36 vCPU, 72 GiB)
  • custom-48-98304 (48 vCPU, 96 GiB)
  • custom-72-147456 (72 vCPU, 144 GiB)
  • custom-96-196608 (96 vCPU, 192 GiB)

Regions and Availability Zones

The following AWS regions are currently supported:

  • af-south-1 (Cape Town, AWS opt-in required)
  • ap-east-1 (Hong Kong, AWS opt-in required)
  • ap-northeast-1 (Tokyo)
  • ap-northeast-2 (Seoul)
  • ap-south-1 (Mumbai)
  • ap-southeast-1 (Singapore)
  • ap-southeast-2 (Sydney)
  • ca-central-1 (Central Canada)
  • eu-central-1 (Frankfurt)
  • eu-north-1 (Stockholm)
  • eu-south-1 (Milan, AWS opt-in required)
  • eu-west-1 (Ireland)
  • eu-west-2 (London)
  • eu-west-3 (Paris)
  • me-south-1 (Bahrain, AWS opt-in required)
  • sa-east-1 (São Paulo)
  • us-east-1 (N. Virginia)
  • us-east-2 (Ohio)
  • us-west-1 (N. California)
  • us-west-2 (Oregon)

The following Google Cloud regions are currently supported:

  • asia-east1, Changhua County, Taiwan
  • asia-east2, Hong Kong
  • asia-northeast1, Tokyo, Japan
  • asia-northeast2, Osaka, Japan
  • asia-northeast3, Seoul, Korea
  • asia-south1, Mumbai, India
  • asia-southeast1, Jurong West, Singapore
  • asia-southeast2, Jakarta, Indonesia
  • europe-north1, Hamina, Finland
  • europe-west1, St. Ghislain, Belgium
  • europe-west2, London, England, UK
  • europe-west3, Frankfurt, Germany
  • europe-west4, Eemshaven, Netherlands
  • europe-west6, Zürich, Switzerland
  • northamerica-northeast1, Montréal, Québec, Canada
  • southamerica-east1, Osasco (São Paulo), Brazil
  • us-central1, Council Bluffs, Iowa, USA
  • us-east1, Moncks Corner, South Carolina, USA
  • us-east4, Ashburn, Northern Virginia, USA
  • us-west1, The Dalles, Oregon, USA
  • us-west2, Los Angeles, California, USA
  • us-west3, Salt Lake City, Utah, USA
  • us-west4, Las Vegas, Nevada, USA

Multi-AZ clusters can only be deployed in regions with at least 3 AZs (see AWS and Google Cloud).

Each new OSD cluster is installed within a dedicated Virtual Private Cloud (VPC) in a single Region, with the option to deploy into a single Availability Zone (Single-AZ) or across multiple Availability Zones (Multi-AZ). This provides cluster-level network and resource isolation, and enables cloud-provider VPC settings, such as VPN connections and VPC Peering. Persistent volumes are backed by cloud block storage and are specific to the AZ in which they are provisioned. Persistent volume claims do not bind to a volume until the associated pod is scheduled into a specific AZ, which prevents unschedulable pods. AZ-specific resources are only usable by resources in the same AZ.

Note: The region and the choice of single or multi AZ cannot be changed once a cluster has been deployed.

Service Level Agreement (SLA)

Any SLAs for the service itself are defined in Appendix 4 (Online Subscription Services) of the Red Hat Enterprise Agreement.

Limited Support Status

You may not remove or replace any native OpenShift Dedicated components or any other component installed and managed by Red Hat. If cluster administration rights are used, Red Hat is not responsible for any actions taken by you or any of your authorized users, including, but not limited to, actions that affect infrastructure services or service availability, or that cause data loss.

If any actions that affect infrastructure services or service availability, or that cause data loss, are detected, Red Hat will notify the customer and request either that the action be reverted or that a support case be created to work with Red Hat to remedy the issue.

Support

OpenShift Dedicated includes Red Hat Premium Support, which can be accessed by using the Red Hat Customer Portal.

Please see the Scope of Coverage page for more details on what is covered by the included support offering for OpenShift Dedicated.

OpenShift Dedicated support SLAs can be found here.

Logging

Red Hat OpenShift Dedicated provides optional integrated log forwarding to Amazon (AWS) CloudWatch.

Cluster Audit Logging

Cluster audit logs are available through AWS CloudWatch, if the integration is enabled. If the integration is not enabled, you can request the audit logs by opening a support case. Audit log requests must specify a date and time range not to exceed 21 days. When requesting audit logs, customers should be aware that audit logs are many GB per day in size.

Application Logging

Application logs sent to STDOUT are collected by Fluentd and forwarded to AWS CloudWatch through the cluster logging stack, if it is installed.
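
As an illustration, a minimal ClusterLogForwarder sketch for forwarding application logs to CloudWatch might look like the following. This assumes the cluster logging stack is installed and that an AWS credentials Secret (named cw-secret here as a placeholder) exists in the openshift-logging namespace; the region is also a placeholder.

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - name: cw                  # CloudWatch output definition
      type: cloudwatch
      cloudwatch:
        groupBy: logType        # group log streams by log type
        region: us-east-1       # placeholder AWS region
      secret:
        name: cw-secret         # placeholder Secret holding AWS credentials
  pipelines:
    - name: to-cloudwatch
      inputRefs:
        - application           # forward application (STDOUT) logs
      outputRefs:
        - cw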

Monitoring

Cluster Metrics

OpenShift Dedicated clusters come with an integrated Prometheus/Grafana stack for cluster monitoring including CPU, memory, and network-based metrics. This is accessible via the web console and can also be used to view cluster-level status and capacity/usage through a Grafana dashboard. These metrics also allow for horizontal pod autoscaling based on CPU or memory metrics provided by an OpenShift Dedicated user.
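
For example, assuming a Deployment named frontend (a placeholder), an OpenShift Dedicated user can enable CPU-based horizontal pod autoscaling with:

oc autoscale deployment/frontend --min=2 --max=10 --cpu-percent=75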

Cluster Status Notification

Red Hat communicates the health and status of OSD clusters through a combination of a cluster dashboard available in the OpenShift Cluster Manager, and email notifications sent to the email address of the contact that originally deployed the cluster.

Networking

Custom Domains for Applications

To use a custom hostname for a route, you must update your DNS provider by creating a canonical name (CNAME) record that maps your custom domain to the OpenShift canonical router hostname. The OpenShift canonical router hostname is shown on the Route Details page after a route is created. Alternatively, a wildcard CNAME record can be created once to route all subdomains for a given hostname to the cluster's router.
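
For example, assuming a custom domain www.example.com and a Service named frontend (both placeholders), you would create a CNAME record at your DNS provider that points www.example.com at the canonical router hostname shown on the Route Details page, and then create a route that uses the custom hostname:

oc create route edge --service=frontend --hostname=www.example.com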

Custom Domains for Cluster services

Custom domains and subdomains are not available for the platform service routes, e.g., the API or web console routes, or for the default application routes.

Domain Validated Certificates

OpenShift Dedicated includes TLS security certificates needed for both internal and external services on the cluster. For external routes, two separate TLS wildcard certificates are provided and installed on each cluster: one for the web console and route default hostnames, and a second for the API endpoint. Let’s Encrypt is the certificate authority used for certificates. Routes within the cluster, e.g., the internal API endpoint, use TLS certificates signed by the cluster's built-in certificate authority and require the CA bundle available in every pod for trusting the TLS certificate.

Custom Certificate Authorities for Builds

OpenShift Dedicated supports the use of custom certificate authorities to be trusted by builds when pulling images from an image registry.
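
A hedged sketch of the underlying OpenShift mechanism follows; registry.example.com..5000 and the certificate path are placeholders, and applying this cluster-scoped change on OpenShift Dedicated may require cluster-admin access or a support request.

# ConfigMap keyed by registry hostname (".." separates the hostname from an optional port)
oc create configmap registry-cas -n openshift-config \
    --from-file=registry.example.com..5000=/path/to/ca.crt

# Reference the ConfigMap from the cluster image configuration
oc patch image.config.openshift.io/cluster --type=merge \
    --patch '{"spec":{"additionalTrustedCA":{"name":"registry-cas"}}}'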

Load Balancers

OSD uses up to five different load balancers: an internal master load balancer, an external master load balancer, an external master load balancer that is only accessible from Red Hat-owned, whitelisted bastion hosts, one external router load balancer, and one internal router load balancer. Optional service-level load balancers may also be purchased to enable non-HTTP/SNI traffic and non-standard ports for services.

  1. Internal Master Load Balancer: This load balancer is internal to the cluster and is used to balance traffic for internal cluster communications.
  2. External Master Load Balancer: This load balancer is used for accessing the OpenShift and Kubernetes APIs. This load balancer can be disabled in OCM. If this load balancer is disabled, Red Hat reconfigures the API DNS to point to the internal master load balancer.
  3. External Master Load Balancer for Red Hat: This load balancer is reserved for cluster management by Red Hat. Access is strictly controlled, and communication is only possible from whitelisted bastion hosts.
  4. Default Router/Ingress Load Balancer: This is the default application load balancer, denoted by apps in the URL. The default load balancer can be configured in OCM to be either publicly accessible over the Internet, or only privately accessible over a pre-existing private connection. All application routes on the cluster are exposed on this default router load balancer, including cluster services such as the logging UI, metrics API, and registry.
  5. Optional Secondary Router/Ingress Load Balancer: This is a secondary application load balancer, denoted by apps2 in the URL. The secondary load balancer can be configured in OCM to be either publicly accessible over the Internet, or only privately accessible over a pre-existing private connection. If a "Label match" is configured for this router load balancer, then only application routes matching this label will be exposed on this router load balancer; otherwise, all application routes are also exposed on this router load balancer.
  6. Optional Load balancers for Services: These can be mapped to a service running on OSD to enable advanced ingress features, such as non-HTTP/SNI traffic or the use of non-standard ports. They can be purchased in groups of 4 for non-CCS clusters or provisioned without charge in CCS clusters; however, each AWS account has a quota that limits the number of Classic Load Balancers that can be used within each cluster. See: Exposing TCP services. A minimal sketch follows this list.
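
A minimal sketch of such a service-level load balancer follows; the service name, selector label, and port are placeholders.

apiVersion: v1
kind: Service
metadata:
  name: my-tcp-service          # placeholder name
spec:
  type: LoadBalancer            # provisions a cloud load balancer for this service
  selector:
    app: my-app                 # placeholder pod label selector
  ports:
    - protocol: TCP
      port: 5432                # non-HTTP/non-standard port exposed by the load balancer
      targetPort: 5432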

Network Usage

For non-CCS OSD clusters, network usage is measured based on data transfer between inbound, VPC peering, VPN, and AZ traffic. On a non-CCS OSD base cluster, 12 TB of network I/O is provided. Additional network I/O can be purchased in 12 TiB increments. For CCS OSD clusters, network usage is not monitored and is billed directly by the cloud provider.

Cluster Ingress

Project admins can add route annotations for many different purposes, including ingress control via IP whitelisting.
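
For example, assuming a route named my-route (a placeholder), ingress can be restricted to specific source addresses with:

oc annotate route my-route haproxy.router.openshift.io/ip_whitelist="192.168.1.0/24 10.0.0.5"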

Ingress policies can also be changed by using NetworkPolicy objects, which leverage the ovs-networkpolicy plugin. This allows for full control over ingress network policy down to the pod level, including between pods on the same cluster and even in the same namespace.
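
As a sketch, the following NetworkPolicy allows ingress only from pods in the same namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace    # placeholder name
spec:
  podSelector: {}               # applies to all pods in the namespace
  ingress:
    - from:
        - podSelector: {}       # allow traffic only from pods in this namespace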

All cluster ingress traffic will go through the defined load balancers. Direct access to all nodes is blocked by cloud configuration.

Cluster Egress

Pod egress traffic control via EgressNetworkPolicy objects can be used to prevent or limit outbound traffic in OpenShift Dedicated.
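
A minimal EgressNetworkPolicy sketch follows; registry.example.com is a placeholder destination.

apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default
spec:
  egress:
    - type: Allow
      to:
        dnsName: registry.example.com   # allow egress to this hostname (placeholder)
    - type: Deny
      to:
        cidrSelector: 0.0.0.0/0         # deny all other external egress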

Public outbound traffic from the master and infrastructure nodes is required and is necessary to maintain cluster image security and cluster monitoring. This requires the 0.0.0.0/0 route to belong only to the internet gateway; it is not possible to route this range over private connections.

OpenShift 4 clusters use NAT Gateways to present a public, static IP for any public outbound traffic leaving the cluster. Each subnet a cluster is deployed into receives a distinct NAT Gateway. For clusters deployed on AWS with multiple availability zones, up to 3 unique static IP addresses can exist for cluster egress traffic. For clusters deployed on Google Cloud, regardless of availability zone topology, there will be 1 static IP address for worker node egress traffic. Any traffic that remains inside the cluster or does not go out to the public internet will not pass through the NAT Gateway and will have a source IP address belonging to the node that the traffic originated from. Node IP addresses are dynamic, and therefore customers should not rely on whitelisting individual IP addresses when accessing private resources.

Customers can determine their public, static IP address(es) by running a pod on the cluster and then querying an external service. For example:

oc run ip-lookup --image=busybox -i -t --restart=Never --rm -- /bin/sh -c "/bin/nslookup -type=a myip.opendns.com resolver1.opendns.com | grep -E 'Address: [0-9.]+'"

Cloud Network Configuration

OpenShift Dedicated allows for the configuration of private network connections through several cloud-provider-managed technologies:

  • VPN connections
  • AWS VPC peering
  • AWS Transit Gateway
  • AWS Direct Connect
  • Google Cloud VPC Network peering
  • Google Cloud Classic VPN
  • Google Cloud HA VPN

No monitoring of these private network connections is provided by Red Hat SRE. Monitoring these connections is the responsibility of the customer.

DNS Forwarding

For OpenShift Dedicated clusters that have a private cloud network configuration, a customer may specify internal DNS server(s) available on that private connection that should be queried for explicitly provided domains.
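
The shape of the resulting configuration, using the DNS operator's forwarding support, is sketched below; example.corp and 10.0.0.10 are placeholders, and on OpenShift Dedicated this is typically configured through OpenShift Cluster Manager or a support request rather than edited directly.

apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  name: default
spec:
  servers:
    - name: corp-dns            # placeholder name for the forwarding rule
      zones:
        - example.corp          # queries for this domain are forwarded
      forwardPlugin:
        upstreams:
          - 10.0.0.10           # placeholder internal DNS server on the private connection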

Storage

Encrypted-at-rest OS/node storage

Master nodes use encrypted-at-rest EBS storage.

Encrypted-at-rest PV

EBS volumes used for persistent volumes are encrypted-at-rest by default.

Block Storage (RWO)

Persistent volumes are backed by block storage (AWS EBS and Google Cloud persistent disk), which is Read-Write-Once. On a non-CCS OSD base cluster, 100GB of block storage is provided for persistent volumes, which is dynamically provisioned and recycled based on application requests. Additional persistent storage can be purchased in 500GB increments.
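
For example, a claim like the following dynamically provisions a block-storage volume; the claim name is a placeholder and gp2 is assumed to be the default EBS-backed storage class on AWS.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data                 # placeholder claim name
spec:
  accessModes:
    - ReadWriteOnce             # RWO: attachable to a single node at a time
  resources:
    requests:
      storage: 10Gi
  storageClassName: gp2         # assumed default storage class on AWS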

Persistent volumes can only be attached to a single node at a time and are specific to the availability zone in which they were provisioned, but they can be attached to any node in the availability zone.

Each cloud provider has its own limits for how many PVs can be attached to a single node. See AWS Instance Type Limits or Google Cloud custom machine types for details.

Shared Storage (RWX)

The AWS CSI Driver can be used to provide RWX support for OpenShift Dedicated on AWS. A community operator is provided to simplify setup.
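
Once the driver and its storage class are installed, an RWX volume is requested through a normal claim; efs-sc below is a placeholder storage class name.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data             # placeholder claim name
spec:
  accessModes:
    - ReadWriteMany             # RWX: mountable by pods on multiple nodes
  resources:
    requests:
      storage: 50Gi
  storageClassName: efs-sc      # placeholder; use the class provided by the operator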

Platform

Cluster Backup Policy

⚠️ It is critical that customers have a backup plan for their applications and application data.

Application and application data backups are not a part of the OpenShift Dedicated service.
All Kubernetes objects in each OpenShift Dedicated cluster are backed up to facilitate a prompt recovery in the unlikely event that a cluster becomes irreparably inoperable.

The backups are stored in a secure object storage (Multi Availability Zone) bucket in the same account as the cluster.
Node root volumes are not backed up as Red Hat Enterprise Linux CoreOS is fully managed by the OpenShift Container Platform cluster and no stateful data should be stored on a node's root volume.

The following table shows the frequency of backups:

Component                | Snapshot Frequency                  | Retention | Notes
Full object store backup | Daily at 0100 UTC                   | 7 days    | This is a full backup of all Kubernetes objects. No PVs are backed up in this backup schedule.
Full object store backup | Weekly on Mondays at 0200 UTC       | 30 days   | This is a full backup of all Kubernetes objects. No PVs are backed up in this backup schedule.
Full object store backup | Hourly at 17 minutes past the hour  | 24 hours  | This is a full backup of all Kubernetes objects. No PVs are backed up in this backup schedule.

Auto Scaling

Node autoscaling is not available on OpenShift Dedicated at this time.

DaemonSets

Customers may create and run DaemonSets on OpenShift Dedicated. In order to restrict DaemonSets to only running on worker nodes, use the following nodeSelector:

...
spec:
  nodeSelector:
    role: worker
...

Multi-AZ

In a multiple availability zone cluster, master nodes are distributed across availability zones, and at least one worker node is required in each availability zone.

Node Labels

Custom node labels are created by Red Hat during node creation and cannot be changed on OpenShift Dedicated clusters at this time.

OpenShift Version

Upgrades

Refer to OpenShift Dedicated Life Cycle for more information on the upgrade policy and procedures.

Windows containers

Windows containers are not available on OpenShift Dedicated at this time.

Container Engine

OpenShift Dedicated runs on OpenShift 4 and uses CRI-O as the only available container engine.

Operating System

OpenShift Dedicated runs on OpenShift 4 and uses Red Hat Enterprise Linux CoreOS as the operating system for all master and worker nodes.

Kubernetes Operator Support

All Operators listed in the Operator Hub marketplace should be available for installation. Operators installed from OperatorHub, including Red Hat operators, are not SRE managed as part of the OpenShift Dedicated service. Refer to the Red Hat Customer Portal for more information on the supportability of a given operator.

Security

Authentication Provider

Authentication for the cluster is configured as part of the OpenShift Cluster Manager cluster creation process. OpenShift is not an identity provider, and all access to the cluster must be managed by the customer as part of their integrated solution. Provisioning multiple identity providers at the same time is supported. The following identity providers are supported:

  • OpenID Connect
  • Google OAuth
  • GitHub OAuth
  • GitLab OAuth
  • LDAP

Privileged Containers

Privileged containers are not available by default on OSD. The "anyuid" and "nonroot" Security Context Constraints are available for dedicated-admins, and should address many use cases. Privileged containers are only available for cluster-admin users.
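
For example, a member of the dedicated-admins group can grant the anyuid SCC to a workload's service account; the service account and project names below are placeholders.

oc adm policy add-scc-to-user anyuid -z my-serviceaccount -n my-project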

Customer Admin User

In addition to normal users, OpenShift Dedicated provides access to an OSD-specific Group called dedicated-admins. Any users on the cluster that are members of the dedicated-admins group:

  • Have admin access to all customer-created projects on the cluster
  • Can manage resource quotas and limits on the cluster
  • Can add/manage NetworkPolicy objects
  • Are able to view information about specific nodes and PVs in the cluster, including scheduler information
  • Can access the reserved ‘dedicated-admin’ project on the cluster, which allows for the creation of ServiceAccounts with elevated privileges and gives the ability to update default limits and quotas for projects on the cluster.

For more specific information on the dedicated-admin role, please see https://docs.openshift.com/dedicated/4/administering_a_cluster/dedicated-admin-role.html.

Cluster Admin Role

As an administrator of an OpenShift Dedicated cluster with Customer Cloud Subscriptions (CCS), you have access to the cluster-admin role. While logged in to an account with the cluster-admin role, users have mostly unrestricted access to control and configure the cluster. Some configurations are blocked with webhooks, either to prevent destabilizing the cluster or because they are managed in OpenShift Cluster Manager (OCM) and any in-cluster changes would be overwritten.

For more information on the cluster-admin role, please see https://docs.openshift.com/dedicated/4/administering_a_cluster/cluster-admin-role.html.

Project Self-service

All users, by default, have the ability to create, update, and delete their projects. This can be restricted if a member of the dedicated-admins group removes the self-provisioner role from authenticated users:

oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth

This can be reverted by applying:

oc adm policy add-cluster-role-to-group self-provisioner system:authenticated:oauth

Regulatory Compliance

Refer to OpenShift Dedicated Process and Security Overview for the latest compliance information.

Network Security

With OSD on AWS, AWS provides standard DDoS protection, called AWS Shield, on all load balancers. This provides 95% protection against the most commonly used layer 3 and 4 attacks on all of the public-facing load balancers used for OpenShift Dedicated. A 10-second timeout is also applied to HTTP requests coming to the HAProxy router; if a response is not received within that time, the connection is closed, providing additional protection.