New eBook: Considerations for building a production-ready AI/ML environment
July 24, 2020
One of the great benefits of container-based infrastructure is the ability to provision huge swaths of servers at once. This translates into a capability you could call dynamic environment provisioning: spinning up large sets of nodes, all preconfigured to work together on a single task such as a Hadoop job, a message queue or an Elasticsearch cluster. Just as with software in other realms, the job of provisioning and making available these complex application environments is becoming easier and faster over time, here thanks to Kubernetes, Istio and Linux containers.
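As a rough sketch of what dynamic environment provisioning looks like in practice (all names, images and sizes below are hypothetical, not from the eBook), a single Kubernetes Job manifest can declare an entire fleet of preconfigured workers, which the scheduler then spins up and tears down as one unit:

```yaml
# Hypothetical example: launch 50 identical, preconfigured worker pods
# for one batch task. Kubernetes schedules them across the cluster and
# the Job tracks them as a single unit of work.
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-analytics                  # hypothetical job name
spec:
  parallelism: 50                        # run 50 pods at once
  completions: 50                        # done after 50 successful pod runs
  template:
    spec:
      containers:
      - name: worker
        image: registry.example.com/analytics-worker:latest  # hypothetical image
        resources:
          requests:
            cpu: "4"                     # each worker asks for 4 CPUs
            memory: 8Gi                  # and 8 GiB of memory
      restartPolicy: OnFailure           # retry a pod if it fails
```

The point is that the entire environment is a single declarative object: one `kubectl apply -f job.yaml` provisions the whole set, and deleting the Job removes it.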
AI/ML jobs often require lots of servers to process large amounts of data and produce a model, after which those servers sit idle until the next job. You don't want to pay for them when they're not in use, but you also need tremendous scale to produce results in a timely fashion. It's a scenario that would have choked a team of 20 IT professionals and filled a datacenter in the early 2000s, but which can now be handled by a single individual, given a properly instrumented, secure and open hybrid cloud filled with the necessary data.
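The "don't pay for idle servers" half of that scenario can be sketched with a standard Kubernetes Job field (the names here are hypothetical, and this assumes a cluster autoscaler is enabled so empty nodes are actually released):

```yaml
# Hypothetical example: a training Job that cleans itself up after it
# finishes, so the cluster autoscaler can scale the node pool back down.
apiVersion: batch/v1
kind: Job
metadata:
  name: train-model                      # hypothetical job name
spec:
  ttlSecondsAfterFinished: 300           # delete the Job and its pods 5 minutes after completion
  template:
    spec:
      containers:
      - name: trainer
        image: registry.example.com/ml-trainer:v1  # hypothetical image
      restartPolicy: Never               # a failed run is rescheduled as a new pod by the Job
```

Once the finished pods are deleted, nothing is requesting the capacity, so an autoscaler can remove the now-empty nodes and the bill drops back to baseline until the next job arrives.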
We've produced an eBook outlining exactly what to take into consideration when building out a production-ready AI/ML environment without breaking the bank or wasting the time of your data scientists. With proper planning, monitoring and pipelines, your AI/ML teams can increase their velocity and productivity through automated provisioning of environments.