
Red Hat Quay 3.7 marks another important feature release of Red Hat’s central container registry platform for modern multi-cluster Kubernetes landscapes across cloud and on-premises infrastructures. This version focuses on major new functionality that helps enterprise customers run large, multi-tenant, and resilient container registry deployments that are integrated with external registries. Alongside these net-new capabilities, the release further refines the lifecycle of Red Hat Quay on Red Hat OpenShift Container Platform via the operator.

Storage Quota Management

Administrators of central Quay deployments now have the ability to limit the storage consumption of tenant organizations in their registry. This is important for environments where storage capacity is either limited or storage usage is growing rapidly due to the use of automated systems that generate new content in the registry, like CI pipelines or build farms.

To control growth and limit individual storage usage in a tenant organization, a super user in Quay can set a storage quota. All images in all repositories in that organization count against the quota. A policy can be configured to control what happens when quota utilization exceeds certain thresholds. Warning thresholds can be set at custom capacity levels to inform the owners that their organization has crossed a certain utilization level.
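
As an illustration, the sketch below sets a 10 GiB quota on an organization and adds a warning limit at 80% through the Quay API. The endpoint paths, payload fields, token, and quota id are assumptions for illustration and should be checked against the Red Hat Quay 3.7 API documentation.

# Create a 10 GiB quota for the "project" organization (requires a token with super user permissions)
$ curl -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
    -d '{"limit_bytes": 10737418240}' \
    https://quay.corp/api/v1/organization/project/quota

# Add a warning limit at 80% of that quota (quota id "1" is a placeholder)
$ curl -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
    -d '{"type": "Warning", "threshold_percent": 80}' \
    https://quay.corp/api/v1/organization/project/quota/1/limit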

Furthermore, a reject policy can be implemented when the quota is used up entirely, which will cause Quay to block new images from being created in the organization. Clients like podman or docker will report an appropriate error message while attempting to push in these scenarios.

$ podman push quay.corp/project/quay:latest
Getting image source signatures
Copying blob 30b9dc741809 [---------------------] 8.0b / 938.5KiB
Copying blob 4736f5768d00 [---------------------] 8.0b / 2.5KiB
Copying blob c86122b5e4d3 [---------------------] 8.0b / 20.0KiB
Copying blob ac4be186f3aa [---------------------] 8.0b / 7.5KiB
Copying blob a89cb3db02dc [---------------------] 8.0b / 9.3MiB
Copying blob fed401980efe [---------------------] 8.0b / 34.8MiB
Copying blob 5412f5ec3c9c [---------------------] 8.0b / 15.7MiB
Copying blob 2f73501002ed [---------------------] 8.0b / 171.6MiB
Copying blob eb23d494575f [---------------------] 8.0b / 13.3MiB
Copying blob b53bdf62f50c [---------------------] 8.0b / 24.1MiB
Copying blob 3813924f3fa4 [---------------------] 8.0b / 223.9MiB
Copying blob 4d7f3ec9cc55 [---------------------] 8.0b / 2.5KiB
Copying blob e4747c4913fe [---------------------] 8.0b / 278.3MiB
Copying blob 3bd4e393e703 [---------------------] 8.0b / 237.3MiB
Copying blob aec00070eb92 [---------------------] 8.0b / 265.0MiB
Error: writing blob: initiating layer upload to /v2/project/quay/blobs/uploads/ in quay.corp: denied: Quota has been exceeded on namespace

While the reject policy is optional, it is a viable instrument to force organization owners to delete old content that may no longer be needed, or to contact the administrator of the registry to ask for a quota increase along with a business justification. This puts the onus of staying within certain bounds on the content owners rather than the Quay administrators, who are in turn often asked by infrastructure and procurement teams to control the storage consumption of the registry.

To that end, a Quay administrator can now also see the storage utilization of all tenant organizations in the registry from the super user panel, to easily spot the biggest storage consumers in the system and also identify organizations which have exceeded their quota but don’t have a reject policy in place.

The super user is the single persona in the Quay system that can add, edit, or remove a storage quota for an individual organization. They can also define a system-wide default quota for all existing and new organizations that do not already have an explicit quota configured. This further helps control growth of the system when tenants create new organizations. A default quota always has a reject policy that denies pushes once storage usage hits 100% of the allotted quota.
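
For reference, a minimal config.yaml sketch that enables quota management and sets a system-wide default quota of roughly 100 GiB could look like the following; the option names reflect the 3.7 configuration documentation as we understand it and should be verified there.

# Enable the storage quota management feature
FEATURE_QUOTA_MANAGEMENT: true
# Assumed option: system-wide default quota in bytes (100 GiB), enforced with a reject policy at 100%
DEFAULT_SYSTEM_REJECT_QUOTA_BYTES: 107374182400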

With an individual configuration for an organization, a super user can define a quota that has no reject policy and can thus theoretically exceed 100% usage. This is useful when the storage requirements of a new project aren’t well known in the beginning: usage can be tracked against an initial estimate without enforcement, rather than over-provisioning the quota up front. It also serves as a relief valve when a project quickly needs to continue producing new content but the quota should not be adjusted long-term.

To see a demo of the storage quota management feature, click here.

Proxy Pull-Through Caching (Tech Preview)

When relying on third-party, externally hosted registries, users have to bring a certain level of trust to the table, as such a registry represents a dependency they have little control over. The external registry needs to be available at all times, fast at all times, and must serve trusted content. This can make it difficult to integrate external registries into production environments.

Proxy pull-through caching in Quay 3.7 allows for easier integration of external registries in a way that is transparent to clients. With this new feature, Quay can serve as a cache for an entire external registry or only specific portions of it. Clients communicate with Quay instead of the external registry; Quay transparently streams the desired content from the external registry back to the client while also storing it for future use for a limited, configurable time. The next time a client requests the same content, it is served directly from Quay, which greatly improves pull performance.

The vehicle for enabling caching in Quay is an organization. Any user can configure an organization to be a cache of an OCI or Docker v2s2 compatible container registry, at which point the organization becomes read-only for clients, i.e. they cannot push content into it. Beyond this, the organization works exactly like a regular organization in Quay with respect to access management, authentication, content scanning, etc. It is also possible to configure credentials for the external registry should it require authentication. Any desired number of cache organizations can be created to cache different registries or different portions of the same external registry, for example quay.io/redhat and quay.io/openshift-release-dev separately.
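
The feature itself is switched on in the registry configuration, while the upstream registry, optional credentials, and expiration time are configured per cache organization. A minimal config.yaml sketch, assuming the flag name from the 3.7 configuration documentation:

# Enable proxy pull-through caching (Tech Preview); per-organization proxy settings are configured in the organization itself
FEATURE_PROXY_CACHE: true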

Clients direct their pull requests to the cache, which in turn translates them to the appropriate location in the upstream registry. For example, if an organization called “cache” is configured to proxy the docker.io/library project, the following command would pull the upstream postgres image through your private Quay deployment at quay.corp:

podman pull quay.corp/cache/postgres:latest

In turn, if the “cache” organization is configured to cache the entire DockerHub registry (i.e. docker.io), the pull command would look like this:

podman pull quay.corp/cache/library/postgres:latest

Here’s a tip: you can configure your podman client to transparently redirect pull requests for certain registries to a mirror. Assuming the following configuration in your $HOME/.config/containers/registries.conf file:

[[registry]]
prefix = "docker.io/library"
insecure = false
blocked = false
location = "quay.corp/cache"

The pull command can now be executed as follows:

podman pull docker.io/library/postgres:latest

or even in the short form:

podman pull docker.io/postgres:latest

or when docker.io is in the default set of search registries:

podman pull postgres:latest

The podman client will transparently pull the requested content from the cache at quay.corp/cache/postgres:latest, and the developer can freely access any image in the DockerHub library project as they are used to, without even being aware that it is going through the cache.

This particular example shows three major highlights of the caching feature:

First, you can selectively give access to external registries instead of trusting them entirely. In the example above, a Quay administrator may decide to give developers access to popular open source projects in the DockerHub library that are frequently updated to remove security vulnerabilities but not the rest of DockerHub. This could be further enforced by blocking direct access from clusters and individuals to the docker.io domain at the firewall.

Second, Quay transparently manages cache coherence and automatically pulls in new content if the cached image is outdated. This is particularly useful when pulling images via tags, which can be updated and moved at any time in the external registry. Quay automatically detects this and updates the cache when needed. This way, the cache is never behind the upstream registry.

There is a special nuance in the implementation here that provides an additional safety net in case the external registry is temporarily unavailable. If the image in the cache has not yet expired but the external registry reports an abnormal status or is completely unavailable, the image will continue to be served from the cache until the expiration timer kicks in. This provides an additional buffer for intermittent outages of the external registry and prevents workloads in Kubernetes and OpenShift clusters from failing quickly as a result of such an outage. It does not apply to images that are not yet in the cache, images that have actually been removed from the upstream registry, or cases where access is denied due to incorrect credentials.

Third, besides the immediate performance advantage of repeated pulls of the same content, this cache provides an effective way to manage pull-rate limits that external registries might impose. This often impacts users behind a corporate network who, from the outside perspective, all access the external registry with the same public IP address. Since pull-rate limiting is often applied per unique IP address, the rate limit is quickly hit, and workloads in clusters as well as build pipelines start failing. When using Quay as a cache, there is only one system accessing the external registry, significantly lowering the pull rate originating from that public IP address. And because the cache is kept coherent as explained above, it only ever pulls new images from the upstream registry when actually needed, thus effectively staying below the pull-rate limit.

The proxy pull-through cache feature will be in Tech Preview in Red Hat Quay 3.7, which means we encourage experimentation and exploration of this feature and will respond to support tickets, but we do not yet recommend or support production use. There is one particular reason for this: the lack of cache eviction and quota integration. In Red Hat Quay 3.7, if a cache organization reaches its quota and the quota has a reject policy enabled (such as the default system-wide quota), the cache will refuse to pull in new content and return an error to the client.

To experience the pull-through proxy caching capability in action, check out this demo.

Geo-replication with the Red Hat Quay Operator

Geo-replication is a unique feature of Red Hat Quay which allows users to deploy Quay instances around the globe but present them to clients and clusters as a single federated registry with a single URL. In this setup the images are transparently replicated to individual storage buckets in the various regions, greatly increasing pull and push performance of clients while maintaining data integrity in a global registry.

While this capability is not new, it was not well supported with the Red Hat Quay operator so far. The main limitation up until now was that the operator did not support a custom configuration of the image security scanner, Clair, and would consequently always deploy it with a local database. In geo-replication, however, an external, shared database is required for Clair. In the 3.7 version of the Red Hat Quay operator, this deployment topology is now natively supported.

The operator now manages the Clair security scanner and its database separately. Consequently, in a geo-replication setup the operator can be instructed not to manage the Clair database; an external, shared database is used instead. Quay and Clair support a variety of PostgreSQL providers and vendors for this, as outlined in our test matrix. In addition, the operator now supports injecting a custom Clair configuration into the deployment, which allows users to configure the managed Clair instance with the connection credentials for the external database. This completes the requirements for geo-replication, which is now a supported deployment topology of Quay on OpenShift clusters managed by the operator.
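
As a sketch of what this can look like (component and field names per the 3.7 operator as we understand them; the secret name is a placeholder), the QuayRegistry resource marks the Clair database as unmanaged and references a config bundle secret that carries both config.yaml and a custom clair-config.yaml pointing at the external, shared database:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: registry
spec:
  # Placeholder secret containing config.yaml and, optionally, a custom clair-config.yaml
  configBundleSecret: quay-config-bundle
  components:
    - kind: clair
      managed: true        # the operator still deploys Clair itself
    - kind: clairpostgres
      managed: false       # use an external, shared PostgreSQL database instead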

Updating such deployments can also be done via the operator, but since two or more clusters in two or more separate sites are involved, the user needs to serialize the process when Quay is updated from one minor version to another (e.g. 3.6 -> 3.7). The process looks like this: pick a cluster to start the update and scale down the Quay / Clair deployments to 0 in all other sites/clusters. Then update the operator on the selected cluster, which in turn updates the Quay deployment. Finally, update the operators in all remaining sites and scale the deployments back up. When moving between patch versions of Quay (e.g. 3.7.0 -> 3.7.1), sites can be updated in a rolling fashion in any order without scaling down any site first.
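
For example, scaling down the other sites before updating the first one could look like this; the deployment names are placeholders that depend on the QuayRegistry name (here "registry"), and the exact procedure is described in the geo-replication documentation.

# In every site that is not updated first, scale the Quay and Clair deployments to 0
$ oc -n quay scale deployment/registry-quay-app --replicas=0
$ oc -n quay scale deployment/registry-clair-app --replicas=0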

This also opens up the possibility to further customize the Clair configuration. Popular use cases are enabling or disabling certain updaters, or enabling air-gap mode to prevent Clair from trying to reach public URLs to download security vulnerability data and side-loading that data instead.
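
A hedged sketch of such a custom clair-config.yaml fragment, assuming Clair v4's option names for disabling updaters and selecting updater sets; the connection strings are placeholders:

indexer:
  connstring: host=clair-db.corp port=5432 dbname=clair user=clair password=... sslmode=disable
matcher:
  connstring: host=clair-db.corp port=5432 dbname=clair user=clair password=... sslmode=disable
  # Air-gapped operation: do not reach out to public vulnerability databases
  disable_updaters: true
updaters:
  # Alternatively, restrict updaters to a specific set instead of disabling them entirely
  sets:
    - rhel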

Image builds on top of non-bare-metal Red Hat OpenShift Container Platform

For quite some time now, Red Hat Quay has supported building container images right inside the registry by integrating with source code management systems like GitHub, GitLab, or BitBucket. This provides an easy path to continuous integration and simplifies existing pipelines, which normally need to build container images themselves and hold credentials to push them to and pull them from a registry.

With Quay builders users can trigger image builds based on git commits with the resulting images being stored in Quay, alleviating the need to distribute credentials with write access to the registry.

So far, this feature was only supported when Quay ran on OpenShift installed on bare-metal servers, since the actual build would be executed with podman build in a containerized virtual machine using qemu. This was a security measure, because image builds in the past needed root access. Using a virtual machine provides a high degree of security when dealing with untrusted tenants but also requires bare-metal infrastructure, since OpenShift worker nodes do not support nested virtualization.

In enterprise environments, tenants are usually not entirely untrusted, and at the same time the buildah project has evolved significantly to support creating images without root privileges. Red Hat Quay takes advantage of that and in version 3.7 offers the ability to run builds in rootless, containerized buildah instances on Red Hat OpenShift. This eliminates the requirement of OpenShift running on bare metal, effectively enabling Quay builders to run on any OpenShift cluster, starting with version 4.10.
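
Builders are still enabled through Quay's config.yaml via the build manager. The heavily abridged sketch below shows what selecting the new unprivileged executor might look like; the executor name, field layout, and host names are assumptions to be checked against the 3.7 builders documentation.

FEATURE_BUILD_SUPPORT: true
BUILD_MANAGER:
  - ephemeral
  - ORCHESTRATOR:
      REDIS_HOST: quay-redis.corp    # placeholder Redis used to coordinate build workers
    EXECUTORS:
      # Assumed executor name for rootless, containerized buildah builds on OpenShift 4.10+
      - EXECUTOR: kubernetesPodman
        K8S_API_SERVER: api.openshift.corp:6443
        BUILDER_NAMESPACE: quay-builders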

Other improvements and future outlook

With every Quay release, there are also numerous improvements to existing capabilities and components. With Red Hat Quay 3.7, the Microsoft Azure Government storage service is now fully supported. The Red Hat Quay operator automatically detects whether the OpenShift cluster has a cluster-wide HTTP(S) proxy configured and passes the configuration down to all Quay and Clair deployments under management. Beyond that, it supports adding custom environment variables to all managed components, which is useful to quickly change logging levels, toggle options useful for troubleshooting, or set a custom HTTP(S) proxy different from the cluster configuration. Lastly, support for evaluating Red Hat Quay in a proof-of-concept on small clusters has been improved: the operator now supports deploying components without Kubernetes resource requests on OpenShift, which disables resource reservation and helps with placement of the various Quay components on smaller clusters.
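
For example, injecting an environment variable into the managed Quay pods via a component override could look like the sketch below; the overrides field follows the 3.7 operator as we understand it, and DEBUGLOG is used here as an illustrative variable to raise the log level.

spec:
  components:
    - kind: quay
      managed: true
      overrides:
        env:
          - name: DEBUGLOG
            value: "true"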

Red Hat Quay will have another feature release towards the end of the third quarter of this year that will introduce exciting new features like a preview of the new Red Hat Quay user interface, IPv6 single-stack support on all supported platforms, as well as a new type of user in the system that only has access to content that was explicitly shared with them.

How to get it

Like all Red Hat Quay releases, the container images for Red Hat Quay 3.7 can be retrieved from the Red Hat Container Catalog. The Quay Operator can be installed from within the Operator Hub in OpenShift. Users running an earlier version of the Quay Operator can simply use the Operator Lifecycle Manager UI to switch the channel of the Subscription to “stable-3.7” and trigger an update of both the Quay Operator and the Quay deployments under management on the cluster.
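
The same channel switch can also be done from the command line; in this sketch the namespace and subscription name are placeholders for your installation.

$ oc -n openshift-operators patch subscription quay-operator \
    --type merge -p '{"spec": {"channel": "stable-3.7"}}'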

