
One of the great benefits of container-based infrastructure is the ability to provision huge swaths of servers at once. This translates into a capability you could call dynamic environment provisioning: spinning up large sets of nodes, all preconfigured to work together on a single task like a Hadoop job, a message queue or an Elasticsearch cluster. Just as with software in other realms, the job of provisioning these complex application environments and making them available is becoming easier and faster over time, here thanks to Kubernetes, Istio and Linux containers.
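
To make that concrete, here's a minimal sketch of what provisioning one of those single-task environments can look like with the official Kubernetes Python client: a parallel batch Job whose pods all share a single preconfigured template. The namespace, image name, resource figures and eight-way parallelism are illustrative placeholders, and it assumes you have a kubeconfig with permission to create Jobs.

```python
from kubernetes import client, config

# Assumes a local kubeconfig; inside a pod you would call
# config.load_incluster_config() instead.
config.load_kube_config()

# A parallel batch Job: eight identical, preconfigured workers that
# the scheduler places across whatever nodes are available.
job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="model-training"),
    spec=client.V1JobSpec(
        parallelism=8,   # run eight pods at once
        completions=8,   # the Job is done when eight have succeeded
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="trainer",
                        image="quay.io/example/trainer:latest",  # placeholder image
                        resources=client.V1ResourceRequirements(
                            requests={"cpu": "4", "memory": "8Gi"}
                        ),
                    )
                ],
            )
        ),
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="ml-jobs", body=job)
```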

AI/ML jobs often require lots of servers to process large amounts of data and produce a model, after which those servers sit idle until the next job. You don't want to pay for them when they're not in use, but you also need tremendous scale to produce results in a timely fashion. It's a scenario that would have choked a team of 20 IT professionals and filled a datacenter in the early 2000s, but one that a single individual can now handle, given a properly instrumented, secure and open hybrid cloud filled with the necessary data.
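
Continuing the hypothetical sketch above, one way to handle the "scale up, then go idle" pattern is to wait for the Job to finish and then delete it, so a cluster autoscaler (if one is configured) can reclaim the now-idle nodes. The job name, namespace and completion count carry over from the previous example and are equally illustrative.

```python
import time
from kubernetes import client, config

config.load_kube_config()
batch = client.BatchV1Api()

# Poll the hypothetical training Job until it completes. A Job with a
# backoffLimit may retry failed pods, so treating any failure as fatal
# is a simplification for this sketch.
while True:
    status = batch.read_namespaced_job_status("model-training", "ml-jobs").status
    if status.succeeded and status.succeeded >= 8:
        break  # all eight completions succeeded
    if status.failed:
        raise RuntimeError("training job failed")
    time.sleep(30)

# Delete the Job (and, via Foreground propagation, its pods) so the
# cluster autoscaler can scale the idle worker nodes back down.
batch.delete_namespaced_job(
    name="model-training",
    namespace="ml-jobs",
    propagation_policy="Foreground",
)
```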

We've produced an ebook outlining exactly what to consider when building out a production-ready AI/ML environment without breaking the bank or wasting your data scientists' time. With proper planning, monitoring and pipelines, your AI/ML teams can increase their velocity and productivity through automated provisioning of environments.

