Innovation, and the ability to deliver exciting new features and capabilities to customers rapidly and continuously, is bolstered by a hybrid approach to application development. Developers want to use a Kubernetes platform that best suits their needs and to use the resources and services that suit their applications in order to roll out innovative applications. So whenever a developer or a Kubernetes cluster administrator builds an application on Kubernetes clusters, it is likely that they are using some AWS services, such as a database, an S3 bucket, or ElastiCache, as part of the overall solution architecture, and they need to manage those AWS services. So far, developers and cluster admins have had to manage the containerized application and the associated AWS services separately.

AWS Controllers for Kubernetes (ACK)

To solve this complexity, the AWS team has now made it possible to centrally manage both the application and the AWS services through the Kubernetes CLI with the ACK project. AWS Controllers for Kubernetes (ACK) is an open-source project that defines a framework for building custom controllers (or Kubernetes Operators) for AWS services. These custom controllers enable developers to define, create, update, delete, and manage AWS services directly from inside their Kubernetes clusters. So, instead of separately managing the containerized applications on Kubernetes clusters and the associated AWS services (e.g., an S3 bucket, a database, ElastiCache), ACK controllers now allow centralized management through the Kubernetes CLI. Developers can now manage both Kubernetes-native applications and the resources those applications depend on, using the same Kubernetes API.

Here’s an example of a PostgreSQL RDS database instance. Notice that the password is sourced from a Kubernetes Secret named “rds-postgresql-user-creds” in the “production” namespace. An application running in the same namespace can import this Secret to connect to the database.

apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: example-database
spec:
  allocatedStorage: 20
  dbInstanceClass: db.t3.micro
  dbInstanceIdentifier: example-database
  engine: postgres
  engineVersion: "14"
  masterUsername: myusername
  masterUserPassword:
    namespace: production
    name: rds-postgresql-user-creds
    key: password
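For example, a Deployment in the same production namespace could surface that same Secret to the application as an environment variable. The following is a minimal sketch; the application name and container image are hypothetical placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical application name
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:latest   # hypothetical image
        env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: rds-postgresql-user-creds       # same Secret referenced by the DBInstance
              key: password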

ACK controllers help save time and provide a hybrid app development experience 

These ACK controllers help developers save time, as they can now provision and manage AWS resources directly from inside their Kubernetes cluster. This allows them to focus on code and delivering business value quickly, instead of spending time managing Kubernetes and AWS resources separately. Being able to access and manage resources from inside the Kubernetes cluster also lends itself well to applying GitOps principles to mixed container and cloud resources, as sketched below. It also gives developers visibility into the entire application stack and a smoother experience overall.
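For instance, the ACK S3 controller lets an S3 bucket be declared as a Bucket custom resource and versioned in the same Git repository as the application manifests, so a GitOps tool can reconcile both together. A minimal sketch, assuming the ACK S3 controller is installed (the resource and bucket names are hypothetical):

apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: my-app-assets             # hypothetical Kubernetes resource name
  namespace: production
spec:
  name: my-app-assets-bucket      # hypothetical name of the bucket created in AWS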

The cool part is that these ACK Controllers are now available in Red Hat OpenShift and Red Hat OpenShift Service on AWS (ROSA), as a result of the deep collaboration between AWS and Red Hat.

Consistently consume applications and AWS services from inside OpenShift

OpenShift and ROSA customers who have an AWS account and use AWS resources (S3 buckets, ElastiCache, DynamoDB, etc.) can now access these ACK controllers and manage those services from inside their OpenShift and ROSA clusters! They can install the AWS service-specific controller into the cluster and start managing existing resources, or create new ones, right in the cluster, making their OpenShift experience very consistent, regardless of the task. These SaaS Operators further open up new ways to consume OpenShift and the variety of services that applications need from cloud providers. This helps provide developers a truly hybrid experience.
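As an illustration, once the ACK controller for DynamoDB is installed in the cluster, a table can be declared right next to the application that uses it. A minimal sketch, with hypothetical table and attribute names:

apiVersion: dynamodb.services.k8s.aws/v1alpha1
kind: Table
metadata:
  name: example-table
  namespace: production
spec:
  tableName: example-table
  billingMode: PAY_PER_REQUEST      # on-demand capacity, no provisioned throughput
  attributeDefinitions:
  - attributeName: id
    attributeType: S                # string attribute
  keySchema:
  - attributeName: id
    keyType: HASH                   # partition key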

Red Hat and AWS engineers helped automate the generation of many of these Operators, which are now visible in OpenShift and ROSA.

Red Hat and AWS collaborate and do wonders!

The ACK controllers can be installed in any Kubernetes cluster and are distributed through Helm charts; the project was not initially targeted at OpenShift or Operator Lifecycle Manager (OLM)-based clusters. Because of customer demand for leveraging AWS resources directly in OpenShift as part of application development and management, the Red Hat and AWS teams took the existing processes the ACK team had developed to build the Operators and made them OLM-compatible, so that the Operators could be listed on OperatorHub.io, the place where the Kubernetes community lists and finds Operators. Finding and installing these service controllers in OpenShift clusters can now be done in a few steps, staying very much in line with our recommended best practices, and developers can start managing their applications and AWS services from OpenShift in no time, as shown in the sketch below.
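For example, on an OLM-based cluster a service controller can be installed declaratively with a Subscription. A minimal sketch, assuming the ACK S3 controller is published to the cluster's community catalog under the package name ack-s3-controller (the channel and catalog names may differ in your cluster):

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ack-s3-controller
  namespace: openshift-operators
spec:
  channel: alpha                        # assumed channel name
  name: ack-s3-controller               # assumed package name
  source: community-operators           # assumed catalog source on OpenShift
  sourceNamespace: openshift-marketplace

With the default automatic approval, OLM installs the controller and keeps it updated as new versions land in the catalog.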

Additionally, the Red Hat team has built automation that covers existing and future controllers so they can be packaged for OperatorHub. So every time AWS releases a new controller for one of its services, it will show up in OperatorHub, allowing the entire community to benefit from it.

The collaboration between the Red Hat and AWS engineers working on the ACK project was based on open-source principles! The ACK team maintains the controllers, and the Red Hat team files issues, implements code fixes, and so on. The two teams collaborate closely, sharing ideas in the weekly community meetings, putting in requests, and implementing new ideas together.

The Red Hat team also greatly enjoyed working with the core ACK team, which was very open to taking feedback from the community and implementing it. Because the ACK team has to write Operators for such a huge number of services, working with them gave the Red Hat team an interesting outlook and encouraged them to look at problems from a fresh perspective, too.

Where do we go from here?

The ACK team aims to make a controller available in Kubernetes for each AWS service. Since the Red Hat team has established an automated process that grows dynamically along with the set of ACK controllers, going forward, as soon as the ACK team releases an Operator for another AWS service, the automation will raise a pull request to OperatorHub.io so that the new Operator gets listed on OperatorHub.

Jay Pipes, Principal Engineer, Kubernetes team at AWS

"The ACK core team could not be happier with the collaboration and contribution we've received from the OpenShift team. Red Hatters Jose R. Gonzalez, Adam Cornett and Tayler Geiger have contributed code and mechanisms to allow for the many ACK controller artifacts to be published to OperatorHub.io. They have explained the inner workings of the Operator Lifecycle Manager (OLM) toolkit and built automation to generate OLM manifests and keep those manifests updated as we release new versions of the controllers and core ACK runtime. I'm personally looking forward to even greater collaboration and coordination in 2022 and beyond."

Rob Szumski, Director of Product Management for OpenShift

“Developers are most productive using streamlined workflows. Managing all of an app’s components, including cloud services via the Kubernetes API allows for self-service by app teams without opening a ticket or getting an admin involved. More advanced use-cases like GitOps and create-test-destroy CI/CD jobs don’t have to get any more complex as cloud services are integrated. I’m excited to continue our collaboration with the AWS team as more AWS services are brought to Kubernetes with the ACK model.”

Watch a demo on how to use AWS Controllers for Kubernetes with Red Hat OpenShift here!


About the author

Jehlum is on the Red Hat OpenShift Product Marketing team and is passionate about learning how customers are adopting OpenShift and helping them do more with it!
