OpenShift sandboxed containers, based on the Kata Containers open source project, provides an additional layer of isolation for applications with stringent security requirements, and strongly insulates untrusted workloads and third-party applications.
To run workloads in sandboxed containers, all you need to do after installing the OpenShift sandboxed containers operator is explicitly set the 'RuntimeClass' in the YAML file of your workload resource (for example, a pod, deployment, or stateful set) to 'kata'. That's it: you have a sandboxed workload!
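For example, a minimal pod manifest that opts into the sandboxed runtime could look like the following sketch (the pod name, container name, and image are illustrative; 'runtimeClassName: kata' is the only sandboxing-specific line):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-example          # illustrative name
spec:
  runtimeClassName: kata           # run this pod inside a Kata sandbox
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal  # any workload image works
    command: ["sleep", "infinity"]
```

Everything else about the workload definition stays exactly as it would for a regular pod.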
If this is your first time reading about OpenShift sandboxed containers, don't fret: we have been hard at work to provide you with the resources you need to learn and use it. A good place to start is our landing page.
What’s New in OpenShift sandboxed containers 1.2
OpenShift sandboxed containers simplifies the onboarding of an OCI-compliant runtime environment that uses a virtualization stack as a backend. From a usability perspective, there is almost no difference between running sandboxed workloads and regular workloads. However, the machinery behind the sandboxing process has more places where things can go wrong. This release therefore introduces the following new features, which make it easier for cluster administrators operating sandboxed containers to swiftly identify and report errors.
Pre-install checks for Node eligibility to run sandboxed containers
OpenShift sandboxed containers rely on hardware virtualization to achieve sandboxing. The end result is a lightweight virtual machine in which your workload runs seamlessly.
If you enable OpenShift sandboxed containers on nodes that do not support hardware virtualization, or on nodes that are otherwise not eligible to run sandboxed containers, the installation is considered faulty and the process terminates with errors. This release adds a pre-install check that verifies node eligibility before the runtime is deployed, so such problems surface early.
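Concretely, the eligibility check can be enabled on the KataConfig custom resource when you install the runtime. A minimal sketch, assuming the 1.2 KataConfig schema (the resource name is illustrative):

```yaml
apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig       # illustrative name
spec:
  checkNodeEligibility: true     # verify hardware virtualization support before installing
```

With the check enabled, only nodes that pass the eligibility test receive the Kata runtime.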
In this release, the focus is on taking a step forward towards better observability, not only in monitoring but also in logging. We provide access to detailed logs about the Virtual Machine Monitor (VMM) QEMU, the Kata runtime environment, 'virtiofsd' (the daemon that allows QEMU to perform filesystem sharing), and the Kata agent (the process responsible for setting up the container environment).
The additional logs reduce the time needed to identify the root cause of a problem, which in turn contributes to a better user experience overall.
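As a sketch of where to look, assuming you have cluster-admin access and know which node is running the sandboxed workload, the component logs can typically be read from that node's system journal (the exact units and syslog identifiers can vary by release):

```
# Open a debug shell on the worker node and enter the host namespace
oc debug node/<node-name>
chroot /host

# CRI-O, the Kata runtime, QEMU, and virtiofsd write to the journal;
# filter by unit or identifier, for example:
journalctl -u crio
journalctl -t kata
```

Filtering by identifier keeps the sandboxing components' messages separate from the rest of the node's journal output.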
In previous releases, OpenShift sandboxed containers were only available in on-premises bare-metal environments. Based on feedback from our users, we are relaxing this restriction a bit in this release and offering OpenShift sandboxed containers on OpenShift clusters with AWS bare-metal nodes. This capability ships as a Technology Preview while we gather and validate more feedback.
While working on a Red Hat OpenStack related engagement with one of our customers, we did a Proof of Concept (PoC) with them where the scope was to set up Red Hat OpenStack in their environment and ...