The Red Hat Quay team is excited to announce the general availability of Red Hat Quay 3.6 today. This version brings a large set of stability improvements and over 80 bug fixes along with features that are in high demand by customers. Users of older versions of Quay can also leverage a simplified update path to Red Hat Quay 3.6.

Quay Operator improvements

Running Quay on top of OpenShift leverages the built-in scheduling, load-balancing, and recovery mechanisms of the underlying Kubernetes platform. In combination with the Kubernetes Operator pattern, all the deployment and update steps that would otherwise be sequenced manually, along with the repetitive work of distributing configuration throughout a scale-out Quay deployment, are automated. Hence, the Quay Operator is the preferred method of running a central, highly available, and elastic Red Hat Quay deployment in a landscape with many Kubernetes clusters and container clients.

Simplified TLS management

In Quay 3.6, a lot of customer feedback is now reflected in the way the operator installs and maintains a Quay deployment on top of OpenShift. As a result, customers will be able to leverage OpenShift’s built-in certificate management system, which is part of the OpenShift Routes API, to provide HTTPS ingress traffic to the registry. Where previously the Quay administrator was required to inject manually generated TLS certificates into Quay, this can now be delegated to OpenShift via a new control component in the operator’s CustomResource interface called “tls”:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: route
      managed: true
    - kind: tls
      managed: true

This will cause the operator to create an OpenShift Route of type “edge,” which means Quay is exposed via HTTPS and the encryption is terminated at the ingress point of the cluster, which is managed by OpenShift. The advantage of this method is that the Quay admin does not have to explicitly generate a TLS certificate and inject it into the Quay configuration. Instead, OpenShift dynamically generates a TLS certificate and attaches it to the Route. The certificate is signed by the Certificate Authority that has been configured in the OpenShift cluster, which is often already trusted throughout enterprise environments. The Quay admin also does not need to worry about rotating the certificate in time before it expires, as OpenShift rotates these certificates automatically.

In cases where it is desirable to have the ingress traffic be encrypted all the way into the Quay pods, customers can opt out of the TLS management via OpenShift by setting the “tls” component to “managed: false” and supplying custom TLS certificates to Quay using its configuration bundle.
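A minimal QuayRegistry sketch of this opt-out could look like the following; the configBundleSecret name is an example, and the referenced Secret is assumed to hold the custom TLS certificate and key for Quay:

```yaml
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  # Example name; this Secret is assumed to contain the
  # custom TLS certificate and key (ssl.cert / ssl.key).
  configBundleSecret: config-bundle-secret
  components:
    - kind: route
      managed: true
    - kind: tls
      managed: false
```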

In both cases, the Route configuration will be managed by the Quay operator, which allows the Route’s host name to be in sync with the hostname that Quay is configured with via its own configuration bundle. The Route management of the operator can also be disabled for even more flexibility in configurations using Ingress objects, for example.

More reliable deployments

In the 3.6 version of the Red Hat Quay Operator, by default all stateless components will now be deployed in a highly available and elastic fashion. This is achieved by using HorizontalPodAutoscaler configurations not only for Quay, but also for the image scanning engine (Clair) and the Quay repository mirror workers. The default minimum replica set size has been adjusted to 2 for all components, including Quay, to help with transparent failover in case cluster nodes become unavailable or are put into maintenance mode. In addition, database failover and restarts have been made more resilient.

The Quay deployment process and status can also be observed in more detail now. A new section in the status block of the QuayRegistry custom resource allows an administrator to understand which components are still in the process of deployment or whether they are degraded and why.

Overall deployment speed has been increased significantly by optimizing the way database schema upgrades are run at initial deployment and during updates to newer versions of the Quay registry.

Simplified Quay Bridge Operator setup

The Quay Bridge Operator has served as a glue component between Quay and the OpenShift ImageStream and Build APIs since Red Hat Quay 3.4. It allows customers to leverage these APIs in the very same way they are used when images are referenced and pushed to the OpenShift internal registry, except that the images are stored in a central Quay deployment that can be shared among many OpenShift clusters. The same automations remain available, for example, automatic rolling updates of Deployments when a new image is built via an OpenShift BuildConfig and pushed to Quay.

The Quay Bridge Operator (QBO) uses Kubernetes Mutating Admission Webhooks to link into OpenShift’s Build and ImageStream APIs and requires the OpenShift API server to trust QBO’s HTTPS webhook endpoints. Previously, this was configured with a manual script that had to be run before installing QBO, followed by manually retrieving a Kubernetes Secret that would then be referenced by the QBO deployment. Thanks to advancements in the Operator Lifecycle Manager (OLM), this is no longer necessary. As of Quay 3.6, QBO can simply be installed via the OperatorHub UI or OLM APIs in OpenShift like any operator, without any manual prerequisites. The trust management between the operator and OpenShift is handled transparently by OLM.
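As a sketch, installing QBO via the OLM APIs boils down to a single Subscription object; the package name, channel, and catalog source shown here are assumptions and may differ in your cluster:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: quay-bridge-operator
  namespace: openshift-operators
spec:
  name: quay-bridge-operator        # package name (assumed)
  channel: stable-3.6               # channel name (assumed)
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

OLM resolves the Subscription, installs the operator, and manages the webhook certificates on its behalf.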

Direct migrations from Red Hat Quay 3.3 Operator

Customers running the previous 3.3 version of the Quay Operator (sometimes referred to as the Quay Setup Operator) enjoy a simplified update path. While Quay updates normally have to be applied serially, in this release, we spent extra testing cycles on direct updates from the Red Hat Quay 3.3 Operator to the Red Hat Quay 3.6 Operator. This is valuable because the 3.6 version can directly migrate existing 3.3 setups that already leverage an edge route and retain that setup, whereas the 3.4 and 3.5 versions of the Operator did not migrate these and would leave Routes unmanaged. The Quay deployment under management will subsequently be updated to Red Hat Quay 3.6 as well. This direct update is also supported for non-operator deployments, such as on Red Hat Enterprise Linux using podman.

Quay Registry improvements

Nested Repository support

A popular request from Quay users and customers was to support nested repositories. This functionality is introduced as of Red Hat Quay 3.6. Traditionally, container images were referenced via repository names in container registries that had two path components, namespace / repository, making a generic image pull specification look like this: <registry-hostname>:<port>/<namespace>/<repository>:<tag>. A specific example would be quay.io/jetstack/cert-manager-controller:v1.5.4, with jetstack/cert-manager-controller being the repository name.

Nesting repositories means that a repository name can have more than two path components. In Quay 3.6, this specifically means that the <repository> portion can contain multiple path components separated by a forward slash “/”. While the first path component of a repository name will always be used to determine a Quay organization (a collection of container images, users, policies, and permissions), the rest of the string allows for nested paths, like the following examples:

quay.io/jetstack/operator/cert-manager-controller:v1.5.4
quay.io/jetstack/istio/cert-manager-istio-agent:v1.5.4
quay.io/jetstack/servicemesh/cert-manager-istio-agent:v1.5.4

This allows for content organization in Quay that helps avoid naming conflicts. As a result, in the above example, all three repository names are completely independent items in Quay, even though the last two appear to have the same image name.

This is especially helpful when using Quay to ingest content from multiple external repositories, and it is desired to keep the repository name from the external source and simply nest it below a common organization in Quay.

It should be noted that organizing content this way primarily aims at separating content inside a single Quay organization. This can be compared to how blobs can be further organized in folders in object storage buckets. As in S3, there is no tree structure with inheritance behind this, where permissions of a path or folder would be applied to lower-level components. Even when repository names have some of their path components in common, they remain independent entities in Quay’s permission model.
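To illustrate how such a pull specification decomposes, here is a minimal Python sketch (not Quay code; real image reference parsing handles more edge cases, such as digests and default registries):

```python
def parse_pull_spec(spec):
    """Split an image pull spec into registry, organization,
    nested repository path, and tag (illustrative only)."""
    name, _, tag = spec.rpartition(":")          # split off the tag
    registry, _, repo = name.partition("/")      # first component is the registry
    org, _, nested = repo.partition("/")         # first repo component is the organization
    return registry, org, nested, tag

print(parse_pull_spec("quay.io/jetstack/operator/cert-manager-controller:v1.5.4"))
# → ('quay.io', 'jetstack', 'operator/cert-manager-controller', 'v1.5.4')
```

Only the organization (`jetstack`) carries meaning for Quay's permission model; the nested remainder is a flat, opaque name.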

Generic OCI artifact type support

OCI artifacts mark a new step towards unification of how content and software are distributed in a cloud-native world and the role registries play in it. In essence, the OCI artifact spec encourages a standard way in which OCI compatible registries (like Quay) can support additional artifact types. That is, they can store content other than container filesystem images. This builds on top of the OCI distribution spec, which defines how registries like Quay distribute OCI compatible content of any type.

As a result of this, a lot of new ideas for additional content types, and thus use cases for registries, have spun up. These include distributing Helm charts or trained machine learning models, as examples of content unrelated to container images, as well as use cases related to container images, such as encrypted images or alternative compression algorithms like “zstd” (which allows seeking into, and thus running, a container image that has only been partially downloaded). All of this is built on top of custom OCI types (for example, “application/vnd.cncf.helm.config.v1+json”) that the registry has to accept during push and serve via pulls. A registry can also optionally build additional behavior and functionality around them.

In Red Hat Quay 3.6, administrators will be able to specify custom OCI artifact types as part of the registry-wide configuration file. An example configuration could look like this:

FEATURE_GENERAL_OCI_SUPPORT = True
ALLOWED_OCI_ARTIFACT_TYPES = {
    "application/vnd.oci.image.config.v1+json": [
        "application/vnd.dev.cosign.simplesigning.v1+json"
    ],
    "application/vnd.cncf.helm.config.v1+json": [
        "application/tar+gzip"
    ]
}

The above configuration would allow uploading Helm charts as OCI images as well as storing image signatures that have been generated with cosign. The first directive generally enables OCI support in Quay that is continuously validated against the official OCI conformance test suites. The second directive is a listing of the supported types. The configuration shown is the default of Quay 3.6.

With generic OCI artifact type support, users can add new types to Quay as new use cases emerge. This is usually the case when new tools want to leverage Quay as storage for artifacts they manage or produce and need to define a new OCI artifact type for them. Artifacts not listed in the above configuration directive are rejected upon push by Quay with an appropriate error message. The core docker image types (for example, application/vnd.docker.distribution.manifest.v2+json) as well as gzip and zstd compression schemes for OCI containers are supported out of the box in Quay 3.6.

The existing OCI artifact support for Helm remains, but the associated configuration bundle directive FEATURE_HELM_OCI_SUPPORT is now considered deprecated in favor of the more generic approach shown above.

Cosign support

One exciting new use case that is enabled with generic OCI artifact support is image signing using the SigStore toolchain. This project marks a new approach to image and artifact signing without some of the drawbacks of traditional key management (although regular key management systems and local key pairs are supported as well). We encourage you to explore the SigStore project more deeply; it aims to democratize signing of container images and cloud-native artifacts and is co-sponsored by Red Hat.

The cosign utility of this project is used to carry out the image signing itself. In this process, it creates a signature that is stored alongside the image itself in the registry. For example, an image quay.io/dmesser/helloworld:v1.0 will have its signature live right alongside in quay.io/dmesser/helloworld:sha256-fdcd62c5bb2aa7b9d3d178da9115d49e48c9abfe93e52432f4987c728b0680fd.sig. The latter is an OCI artifact of the type application/vnd.dev.cosign.simplesigning.v1+json and is stored like any other OCI image in Quay from where it can be pulled by any OCI compatible client.
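The tag naming convention for these signature artifacts can be sketched in a few lines of Python (illustrative only; cosign derives this tag itself from the image digest):

```python
def signature_tag(digest):
    """Derive the tag under which a cosign signature is stored,
    following the sha256-<hex>.sig convention described above."""
    algo, _, hexdigest = digest.partition(":")
    return f"{algo}-{hexdigest}.sig"

digest = "sha256:fdcd62c5bb2aa7b9d3d178da9115d49e48c9abfe93e52432f4987c728b0680fd"
# The signature lives right next to the signed image in the same repository:
print("quay.io/dmesser/helloworld:" + signature_tag(digest))
```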

Publicizing the signature like this allows cosign to verify the image by simply pulling the signature artifact and verifying it against a public signature key, or in the case of keyless signing, a signing identity such as a GitHub login. As the signature is stored along with the image in the same Quay registry, no separate storage mechanism is required, and the signature is always available where the image is stored. This leverages the very scalable serving layer of Quay and alleviates the need for a special signature store that operations teams would need to manage.

cosign is also designed to sign generic blobs, like application binaries, or text artifacts, such as Helm charts or Kubernetes manifests. Attestations to audit how and where an artifact was built are also beginning to be supported. We strongly encourage you to follow this upstream project, which aims to directly address software supply chain security in the cloud-native and open source world.

Beyond just serving signed images, we have further plans for Quay, including proper representation of signatures in the UI and automatically signing images that are built directly in Quay.

Better vulnerability ratings

When security vulnerabilities (CVEs) are detected by the Clair scanning engine in Quay, up until Quay 3.4, the severity of these vulnerabilities was interpreted by pulling information from the National Vulnerability Database (NVD), which is owned by the National Institute of Standards and Technology (NIST). While this is a generally accepted practice, the rating of a particular vulnerability is sometimes disputed. This may happen when the distributor of an operating system or a particular application puts the vulnerability into the perspective of how the affected code is used by their software. This adds additional burden on the user side, including developers storing their images in Quay or InfoSec teams that have to carry out risk acceptance reviews.

Starting with Red Hat Quay 3.4, vulnerability ratings were, for that reason, primarily taken from the distributor's vulnerability database (for example, Red Hat's OVAL v2 feeds for UBI images and Red Hat software). However, not all OS distributions supported by Clair have rating information in their feeds. While a CVE would be detected by Clair and reported by Quay, the user would not have any information about the severity of the vulnerability.

With Red Hat Quay 3.6, Clair combines severity information from the NVD with that from the security feeds of OS distributions or language package managers. In cases where there is a stark difference in the rating level (two levels or more), the rating of the vendor is preferred over that of the NVD, and the CVE is marked as reclassified in Quay.
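The described selection rule can be sketched as follows; this is an illustration of the rule as stated above, not Clair's actual implementation, and the severity scale used here is an assumption:

```python
# Assumed ordering of severity levels, lowest to highest.
SEVERITY_LEVELS = ["Unknown", "Negligible", "Low", "Medium", "High", "Critical"]

def effective_severity(nvd, vendor):
    """Prefer the vendor/distribution rating when it differs from the
    NVD rating by two or more levels; otherwise keep the NVD rating."""
    diff = abs(SEVERITY_LEVELS.index(nvd) - SEVERITY_LEVELS.index(vendor))
    return vendor if diff >= 2 else nvd

print(effective_severity("Critical", "Low"))   # vendor wins: reclassified
print(effective_severity("High", "Medium"))    # NVD rating kept
```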

With this, users have comprehensive information about the impact of certain CVEs and can act accordingly, prioritizing responses in relation to the threat level. Starting with this release, the structure in which rating information is supplied has also been updated to CVSS (Common Vulnerability Scoring System) v3. It rates CVEs across a variety of criteria, giving users further context, such as which attack vector is used to penetrate a system or whether an attacker requires elevated privileges in that system to carry out the attack.

Automation

Several handy automations have been added to Red Hat Quay 3.6, targeting customers who are looking to automate the setup of Red Hat Quay outside of a Red Hat OpenShift cluster.

They can now use a configuration directive called FEATURE_USER_INITIALIZE to have the Quay API create a first user right after a fresh deployment and retrieve an API token for further automation.

During push operations, by default, Red Hat Quay creates repositories if they do not already exist. The default visibility of these repositories is private. A new configuration directive called CREATE_PRIVATE_REPO_ON_PUSH can be set to “false” in case the default should be public. This is a registry-wide setting; a similar policy at the organization level will be added in a future release.

In contrast, when pushing a container image to Quay, by default, the organization (the first-path component of a repository name, such as quay.io/organization/image:v1.0) has to exist. With the new configuration directive CREATE_ORGANIZATION_ON_PUSH, Quay can be instructed to automatically create the organization on the first push attempt.
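Taken together, a hypothetical config.yaml snippet enabling all three automation behaviors could look like this (the chosen values are examples, not recommendations):

```yaml
FEATURE_USER_INITIALIZE: true       # allow creating the first user via the API
CREATE_PRIVATE_REPO_ON_PUSH: false  # repositories created on push are public
CREATE_ORGANIZATION_ON_PUSH: true   # auto-create a missing organization on first push
```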

More details on all those three options can be found in the Red Hat Quay documentation.

Deprecations

With the Red Hat Quay 3.6 release, using MySQL/MariaDB as the database backend for Quay is deprecated; support will be removed in Red Hat Quay 3.8. Until then, MySQL is still supported as per the support matrix but will not receive additional features or explicit testing coverage. The Red Hat Quay Operator supports only PostgreSQL as a managed database since Red Hat Quay 3.4. External MySQL/MariaDB databases can still be leveraged (setting the database to “unmanaged” in the Operator) up until Red Hat Quay 3.8.

How to get Red Hat Quay 3.6

Like all Red Hat Quay releases, the images can be retrieved from the Red Hat Container Catalog. The Quay Operator can be installed from within the OperatorHub in OpenShift. Customers running an earlier version of the Quay Operator can simply use the Operator Lifecycle Manager UI to switch the channel of the Subscription to “stable-3.6” and trigger an update of both the Quay Operator and the Quay deployments under management.

