When you are using large numbers of containers, a stable of images becomes the basis of your environments, your build processes and everything your developers use day to day. Because an open hybrid cloud needs to keep those container images somewhere, in a place that can handle security scans and verify the integrity of those images, a container registry is a wise addition to any cloud stack.


While the open hybrid cloud can work with almost any container registry you can find, we’ve had our own solution here at Red Hat for some time. Quay.io has been one of those components of the open hybrid cloud stack that performs its duties reliably and dependably, and it’s done so since 2013. This managed container registry service, however, just hit a big milestone, and we wanted to celebrate in some small way with a blog post.

Quay had 100% uptime for August

The foremost metric that the Quay.io SRE team tracks is site availability. Our team is happy to report that Quay had 100% uptime for the month of August and we hope to see that continue.

Last month Red Hat shared a post-mortem about recent Quay.io downtime. The post discussed the root causes, which stemmed from inefficient code that interacted with Quay’s database, causing it to hit its connection limit. What we failed to mention in that post was that the Quay.io registry service was reaching record numbers of users and container pull requests: numbers far exceeding anything we could reproduce in a test environment.

This makes our 100% uptime celebration for August that much more exciting: it means we not only fixed a difficult bug, but also met those enormous demands of scale consistently for an entire month after patching.

And that brings us to our next point:

Quay hits 1 billion container pulls per month

August also saw the first month of over a billion container pulls from Quay.io to users around the world. Quay users run across all clouds, from their own datacenters to Amazon, Microsoft, Google and bare metal hosting platforms around the globe. They depend on Quay to provide containers to all of their applications, and because they control their own images, nothing vanishes overnight or is unexpectedly delisted or overwritten. Quay’s billion container pulls is even more of an accomplishment when you realize that each and every one of those containers is constantly scanned for security vulnerabilities.

With all those users has come innovation on both sides of the equation. On our side, Quay.io features robot accounts, which help you securely separate credentials between your different applications. This helps improve the reliability, security and auditability of your containers by removing human-run accounts from the automated processes you construct around Quay. On average, Quay.io users have more robot accounts than human accounts configured, which we love to see.
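As a sketch of how that credential separation works in practice, here is what authenticating a pipeline with a robot account rather than a personal login might look like. The organization, robot name and repository below are hypothetical placeholders; the `namespace+robotname` username format and the token-based login are how Quay robot accounts are used.

```shell
#!/bin/sh
# Hypothetical names: substitute your own organization, robot and repository.
# A Quay robot account's username takes the form "<namespace>+<robotname>".
ROBOT_USER='example_org+ci_builder'
ROBOT_TOKEN="$QUAY_ROBOT_TOKEN"   # token generated for the robot in the Quay UI

# Log in to Quay.io with the robot's credentials instead of a human account,
# reading the token from stdin so it never appears in the process list.
echo "$ROBOT_TOKEN" | docker login quay.io -u "$ROBOT_USER" --password-stdin

# The robot can now pull only the images it has been granted access to.
docker pull quay.io/example_org/example_app:latest
```

Because the robot’s permissions are scoped per repository, revoking or rotating one pipeline’s token never disturbs another application, which is what makes the audit trail cleaner than shared human logins.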

That means we’re helping increase the amount of automation inside your organizations. And that’s the goal for the Quay team here at Red Hat: to remove or automate as many as possible of the daily concerns a container registry introduces. Because this system is at the heart of your environments, and provides the seeds for every new container you create, we understand just how important 100% uptime and integrated security features are to your ability to automate your IT infrastructure.

Quay.io improves Red Hat Quay

All this work we’ve poured into Quay.io also goes into Red Hat Quay. Because they share the open source project behind Quay.io, the managed service and the server software you can run inside your own Kubernetes clusters are built from the same code base. That means the same code we run to serve 1 billion container image pulls in a month can be run inside your open hybrid cloud, too.

And it’s also been tested at scale. Enormous scale.

Try out Quay.io

Your first 30 days are free on Quay. Create an account, configure your favorite CI/CD system and start building containers!
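To give a feel for that first build-and-push loop, here is a minimal sketch of the commands a CI/CD job might run. The namespace, repository and tag are hypothetical placeholders for your own.

```shell
#!/bin/sh
# Hypothetical image reference; replace with your own Quay namespace and repo.
IMAGE='quay.io/example_org/example_app:v1.0.0'

# Build the image from the Dockerfile in the current directory,
# tagging it directly with its Quay.io destination.
docker build -t "$IMAGE" .

# Push it to Quay.io, where it will be scanned for vulnerabilities.
docker push "$IMAGE"
```

From there, any host that can log in to your namespace can pull the image with `docker pull quay.io/example_org/example_app:v1.0.0`.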
