The Turbonomic container image is Red Hat Certified and listed in the Red Hat Container Catalog.

Cloud Native and the Rise of Microservices

Cloud native is on track to become the standard for digital transformation. The business wants better services faster; developers need rapid, iterative application development. Containers and microservices make that possible. Every microservice has just one job, reducing complexity for the developer. But an application can have 10 to 20 microservices, and for bleeding-edge adopters it’s upwards of 100. What does that mean for the infrastructure folks? More dynamic change, more complexity.

Monolithic (VM-based) applications were complicated enough; cloud-native containerized applications have more moving parts, greater density, and more layers to manage. It’s no longer about a single application running statically on a VM. It’s about multiple pieces of an application operating dynamically across heterogeneous infrastructure, on premises and in the cloud.

Containers in Production. Easy, Right?

When we talked with customers about their container adoption, a few challenges came up. What we came to realize is that while the pieces and roles have changed to some extent, the core problems aren’t all that new; they’re just more complicated.

Is “DevOps vs. Infrastructure/Ops” the New “Dev vs. Ops”?

For many of our customers, the classic “Dev vs. Ops” conflict has taken on a new guise: “DevOps vs. Infrastructure/Ops.” Where in the past developers and application owners would demand the biggest VM, today DevOps engineers ask for more and more cluster capacity. What hasn’t changed? The conflicting views of the environment.

When deploying containers, DevOps engineers abide by the application’s config file, which declares, for example, that a container needs 8GB of memory. Container schedulers do the same, ignoring real-time resource consumption. So when a DevOps engineer asks for more cluster capacity, the request is based on what has been allocated, not on what is actually being used.

Meanwhile, the infrastructure engineer has no visibility into the container layer but can see, via the hypervisor, the true utilization of the infrastructure, and it isn’t much: 20%, say. Does this sound familiar?
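To make the disconnect concrete, here is a minimal sketch of the kind of pod spec a DevOps engineer might ship; the names, image, and numbers are hypothetical. Kubernetes, and therefore OpenShift, schedules on the declared request, not on what the container actually consumes.

```yaml
# Hypothetical pod spec. The scheduler reserves the full declared
# request (8Gi of memory here) on a node, whether or not the
# container ever uses it.
apiVersion: v1
kind: Pod
metadata:
  name: example-app          # illustrative name
spec:
  containers:
  - name: app
    image: example/app:1.0   # placeholder image
    resources:
      requests:
        memory: "8Gi"        # what the scheduler sets aside
        cpu: "2"
      limits:
        memory: "8Gi"        # hard ceiling enforced at runtime
```

Multiply that reserved-but-idle capacity across dozens of microservices and you get the picture above: on paper the cluster is full, while the hypervisor reports 20% utilization.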

Shared Environments Make for Complex Tradeoffs

Whether you’re a DevOps engineer or managing the infrastructure, you have to navigate the complex tradeoffs inherent in shared virtualized environments.

  • How do I avoid “noisy neighbor” congestion due to containers peaking together on the same node?
  • How do I avoid CPU starvation due to resource congestion in the underlying infrastructure?
  • Do I provision for the worst-case scenario? That’s expensive.
  • Or, do I provision for the average? That’s risky.

At the end of the day, the burden of resource allocation decisions falls on engineers, and just try doing that in real time. What’s worse, you’re making those decisions in silos, depending on which part of the stack you own, and that’s where the contention between teams typically starts. Cloud-native technologies and microservices enable rapid development and workload mobility, but they also make it more urgent to take people out of the business of workload management and resource allocation. That’s best left to software. Innovation, on the other hand, is best left to people.

Turbonomic & OpenShift

OpenShift abstracts away the messy problems of the underlying heterogeneous infrastructure for DevOps engineers, letting them build consistent, infrastructure-agnostic cloud-native applications on top of it. But that messy infrastructure below still exists; now it’s the infrastructure team that has to deal with it. And compared to a virtualized IT environment, a cloud-native environment is highly dynamic, with greater density and more layers to manage (especially if the underlying cluster runs on VMs).

With cloud-native technologies, including OpenShift, the artifacts have changed, but the core challenge is the same, only more complex: how do you ensure workloads get the resources they need, when they need them?

Turbonomic takes human beings out of the business of workload management and into the business of creativity and innovation. The platform continuously analyzes application consumption and automatically allocates resources in real time. It assures application performance while maximizing efficiency by giving workloads the resources they need, when they need them. For OpenShift, we built Kubeturbo to leverage the Turbonomic decision engine, assuring the performance of microservices running in OpenShift while maximizing the efficiency of the underlying infrastructure.
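For reference, Kubeturbo runs as a pod inside the cluster and connects back to a Turbonomic server. The sketch below is a minimal, hypothetical deployment: the image tag, server address, and credentials are placeholders, and the exact config keys can vary between Kubeturbo releases, so consult the project’s own deployment files for your version.

```yaml
# Hypothetical Kubeturbo deployment sketch; every value in angle
# brackets is a placeholder. A ConfigMap holds the connection
# details, and the Deployment mounts it into the kubeturbo pod.
apiVersion: v1
kind: ConfigMap
metadata:
  name: turbo-config
data:
  turbo.config: |
    {
      "communicationConfig": {
        "serverMeta": {
          "turboServer": "https://<turbonomic-server-address>"
        },
        "restAPIConfig": {
          "opsManagerUserName": "<username>",
          "opsManagerPassword": "<password>"
        }
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubeturbo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubeturbo
  template:
    metadata:
      labels:
        app: kubeturbo
    spec:
      containers:
      - name: kubeturbo
        image: turbonomic/kubeturbo:<version>   # placeholder tag
        args:
        - --turboconfig=/etc/kubeturbo/turbo.config
        volumeMounts:
        - name: turbo-config-volume
          mountPath: /etc/kubeturbo
      volumes:
      - name: turbo-config-volume
        configMap:
          name: turbo-config
```

Kubeturbo also needs a service account with enough cluster permissions to discover nodes, pods, and services and to act on them; the project’s deployment files cover that setup.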

Watch the demo to see how Turbonomic delivers:

  • Full-stack visibility and real-time monitoring
  • Continuous placement for pods running on OpenShift
  • Vertical scaling of pods
  • Full-stack control and automation

Conclusion

In short, the practice of people making real-time resource decisions in today’s complex cloud environments must go the way of the buffalo. With Turbonomic and OpenShift, your cloud-native deployments manage themselves.