You may have seen that AWS and Red Hat have now added support for Spot Instances to the ROSA worker plane. This is a fantastic extension of ROSA's functionality, going beyond the standard On-Demand EC2 worker nodes that ROSA has always offered and giving customers great flexibility when architecting their service deployments on ROSA.
The Spot markets came about because the hyperscale cloud providers cannot offer their standard services without commissioning a large amount of spare capacity into their infrastructures to cope with the vagaries of customer demand. Left idle, that capacity becomes a burden of unproductive investment.
To counter this issue and monetize this resource pool, AWS started offering its spare capacity in a Spot marketplace in late 2009.
The marketplace allows customers to buy Spot Instances at remarkable cost savings, in some cases up to 90% off the On-Demand EC2 price. But the opportunity does come with a health warning: if AWS needs to reclaim those resources for a customer who is willing to pay the standard rate, these nodes will suffer a Spot Instance interruption.
Processes have been created to warn customers of any interruption of the Spot Instances they consume: an alert is automatically issued two minutes before the service interruption happens, allowing customers to automate a correct response. This risk must be understood [Fig 3], but the alert does allow ROSA to manage this volatility of nodes effectively.
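As a minimal sketch of how that two-minute notice surfaces on the instance itself, the EC2 instance metadata service exposes a `spot/instance-action` endpoint that returns 404 until an interruption is scheduled. (This is standard EC2 behavior, not ROSA-specific; ROSA's machine management reacts to interruptions for you.)

```shell
# Request an IMDSv2 session token, then poll for a pending Spot
# interruption notice. The endpoint returns 404 until AWS schedules
# an interruption for this instance.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 60")

curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/spot/instance-action"
# Once an interruption is scheduled, the response is JSON describing
# the action ("terminate", "stop" or "hibernate") and the time it
# will take effect.
```

This only works from inside a running EC2 instance; it is shown here to illustrate what the two-minute warning looks like to automation.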
To use Spot Instances with ROSA, you will need to add a new machine pool to an existing cluster. The cluster will have a default worker machine pool, which can be examined through either the ROSA CLI client or the ROSA portal available at cloud.redhat.com.
Fig 1: ROSA Command line detail of default machine pool
Fig 2: ROSA machine pool Managed through OpenShift Cluster Manager (OCM)
Customers can then use these tools to add a new machine pool.
Fig 3: Output from the help provided in ROSA CLI
Fig 4: OCM Add Machine pool window
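From the CLI, the flow looks something like the following sketch. The cluster name, pool name, instance type, and price are placeholders; check `rosa create machinepool --help` (as in Fig 3) for the exact options available in your CLI version.

```shell
# Examine the machine pools on an existing cluster (cf. Fig 1):
rosa list machinepools --cluster=my-cluster

# Add a new machine pool backed by Spot Instances. Leaving out
# --spot-max-price caps what you pay at the On-Demand price.
rosa create machinepool --cluster=my-cluster \
  --name=spot-pool \
  --replicas=3 \
  --instance-type=m5.xlarge \
  --use-spot-instances \
  --spot-max-price=0.10
```

The same pool can be created through the OCM "Add machine pool" window shown in Fig 4.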
Before we decide to deploy resources to a Spot worker node, however, it is essential to make sure the services we intend to deliver there are well designed to run on volatile Spot Instances.
AWS has some good advice for customers on the workloads that work well on Spot. Please be cautious, though: Red Hat has found that not all customers deliver a containerized service in the same way, and the approach taken can determine whether a workload suits Spot Instances.
Looking at Red Hat's advice in this area, the best services to deploy onto a Spot worker tier are refactored (rearchitected) applications. Applications that were migrated to OpenShift using the rehost or replatform approaches may not be ideal for a Spot Instance tier, and may be better suited to a standard worker node tier using On-Demand EC2 instances in ROSA.
Once you are confident that you have a service well-suited to operate in Spot environments, the next step is to make sure the pods that run your Spot optimized service will be deployed onto the machine pool that has Spot Instances available.
Within ROSA, this is done by adding labels and taints to your machine pools. This metadata allows you to orchestrate the behavior of your service's pods in Kubernetes in fine detail: labels are key-value pairs that steer pods toward matching nodes, while taints repel any pods that do not carry a matching toleration.
When these features are applied to a new Spot Instance machine pool, the services that will best run on Spot Instances can then be forced to run on those nodes.
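As an illustrative sketch (label and taint names are examples, not a prescribed convention), the pool can be created with a label and a taint, and a deployment then opts in by selecting the label and tolerating the taint:

```shell
# Create the Spot pool with a label and a taint (values are examples):
rosa create machinepool --cluster=my-cluster --name=spot-pool \
  --replicas=3 --use-spot-instances \
  --labels=node-type=spot \
  --taints=spot=true:NoSchedule

# A deployment intended for the Spot tier selects the label via
# nodeSelector and tolerates the taint, so only Spot-suitable
# workloads land on those nodes:
oc apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spot-friendly-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: spot-friendly-app
  template:
    metadata:
      labels:
        app: spot-friendly-app
    spec:
      nodeSelector:
        node-type: spot
      tolerations:
      - key: spot
        value: "true"
        effect: NoSchedule
      containers:
      - name: app
        image: registry.access.redhat.com/ubi9/ubi-minimal
        command: ["sleep", "infinity"]
EOF
```

The taint keeps pods that have not been designed for interruption off the Spot nodes, while the nodeSelector keeps the Spot-optimized service from drifting onto the On-Demand tier.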
The ability to define a pool of Spot Instances worker resources in OpenShift is a fantastic additional capability that can help organizations tune and streamline their infrastructure usage when using ROSA. But like all great innovations, it works best for specific use cases, such as replatformed and refactored services that are decoupled. For the best results when designing your solution, work with your AWS and Red Hat teams to understand the nature of the workloads you are planning to deliver onto ROSA, and then decide where this technology can help accelerate your transformation to Hyperscale-based service delivery.