Red Hat just announced Red Hat Device Edge, which delivers an enterprise-ready and supported distribution of Kubernetes named MicroShift, combined with an edge-optimized OS built from Red Hat Enterprise Linux. It expands where users of Red Hat’s platforms can run edge computing workloads to the space of field-deployed devices such as IoT gateways, point-of-sale terminals, robots, and drones. Let me unbox that for you.

My edge computing use case is not your edge computing use case.

Would you use a heavy truck for a downtown pizza delivery service or for taking your family on vacation? In transportation, it is fairly obvious that one (vehicle) size does not fit all. At the same time, there are clear advantages in drivers being able to rely on the same familiar user interfaces, proven technologies “under the hood”, and shared road and refueling infrastructure across vehicle types.

The same holds true for edge computing. Today, use cases ranging from drive simulations for autonomous vehicles on GPU-accelerated server racks in the lab to 5G radio base stations on single servers in roof-top cabinets all run on Red Hat OpenShift. Some use cases may require special deployment topologies like 3-node compact clusters or single-node clusters. They may require different types of hardware acceleration, or tuning the system for batch throughput or real-time determinism. Either way, OpenShift, with Red Hat Enterprise Linux CoreOS under its hood, provides the same workload behavior and operations experience, whether in the cloud or on a single edge server.

Use cases with field-deployed devices like the aforementioned IoT gateways and drones present very different technical and operational challenges, though. In part, this is due to differences between device and server hardware:

  • Devices embed compute units like single-board computers or systems-on-chips. These have application-optimized I/O interfaces and hardware accelerators (e.g. GPUs, tensor cores, video encoding/decoding accelerators) already on board, but they are rather resource-constrained (in CPU, memory, and network I/O) and far less extensible than servers.
  • Arm is a widespread architecture in devices, in part due to Arm’s IP core licensing model and high performance per watt. Yet, implementers do not yet provide the same level of standardized hardware and firmware features for operating systems to rely on as in the server world.
  • Devices bring new engineering constraints in terms of power (batteries or Power over Ethernet), thermal design (no active cooling), network connectivity (intermittent, slow, or charged per megabyte transferred), and security (weak physical access controls).
  • To scale, provisioning needs to be dead simple, requiring neither technical expertise nor equipment on site. The on-site process should resemble unboxing any device, mounting it, and powering it up, from where it boots its factory image.
  • No out-of-band management. Unlike servers, which can almost always be remotely recovered (via a dedicated management network to the baseboard management controller, a cloud API, or a nearby technician), devices that become unbootable or permanently lose network connectivity may be lost or require expensive truck rolls.

Deployment in the field (along an oil pipeline, on a boat, or on a shop floor) rather than in a controlled environment like a server room has implications of its own, too.

Organizational challenges exist as well. For example, device manufacturers with teams used to bespoke hardware and custom Linux OSes built from toolchains such as Yocto or buildroot may experience high friction in developing services jointly with teams used to container- and Kubernetes-native tools and methods. Lowering the bar to adopting general-purpose, off-the-shelf hardware, Linux distributions, and IT-centric processes (a characteristic of edge computing) would be a good first stepping stone.

Meet the operating system for the device edge.

To get this out of the way: The foundation for Red Hat Device Edge is Red Hat Enterprise Linux.

This means that all the goodness you’re used to from Red Hat Enterprise Linux, including its hardware enablement, security controls, system tunables for optimizing specific workloads (e.g. for performance, determinism, or energy efficiency), and user space tools, is available and familiar. Users can continue to deploy the same content, whether Red Hat-curated, partner-provided, or their own, and whether RPM-packaged or containerized.

What’s new is that Red Hat Device Edge adds several capabilities and tools developed to address the aforementioned challenges, which make it particularly compelling for use cases at the device edge:

  • System image-based model: Like on your phone, update devices by downloading a new system image rather than individual software packages. Composing a "golden image" containing the operating system, drivers, and core application workloads means you can version, build, test, and roll out system updates to devices as a unit. This provides high predictability and simplifies troubleshooting when deploying at scale. rpm-ostree is the technology that enables this model while preserving users’ ability to compose from existing RPM and container image content.
  • Custom OS composes: System builders need the ability to heavily customize their OS images, for example by adding low-level drivers or their own device management agents. ImageBuilder is the tooling with which they can compose system images from blueprints stored in version control (see the blueprint sketch below).
  • A/B deployments: Network connectivity and power supply can be notoriously unreliable. Imagine either failing during a system update. rpm-ostree allows downloading and staging the new system version in parallel to the running system. The actual update is then a mere reboot taking seconds.
  • Auto-rollback: Pushing an over-the-air update with faulty software or configuration could render devices unbootable or unmanageable. To limit this risk, the greenboot service can detect faulty updates based on Red Hat- or user-provided business logic (see the health check sketch below). When such tests fail, greenboot automatically reverts the system to the previous image version.
  • Delta-updates: Devices may be connected through networks that are slow or charged by volume. With delta-updates, each device just downloads those parts of an rpm-ostree system image that have changed since the previous update.
  • Arm device enablement: Red Hat Enterprise Linux added support for Arm SystemReady IR (e.g. DeviceTree hardware descriptors), and Red Hat is working with hardware vendors to support their respective hardware platforms, for example NXP’s i.MX 8 systems-on-module.
  • FIDO Device Onboarding (FDO): FDO is an industry specification for secure device onboarding with late binding of devices to owners. When FDO-enabled devices are first powered on, they look up their owner from a registry, securely onboard to their management system based on a hardware root of trust, and receive initial security credentials and configuration. Red Hat created a production-grade open source implementation of this standard.
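
To make the image-based model a bit more concrete, here is a minimal sketch of an ImageBuilder blueprint for such a "golden image". The blueprint name, package selection, and enabled service are hypothetical placeholders:

    # edge-device.toml: hypothetical blueprint; package names and versions are placeholders
    name = "edge-device"
    description = "Golden image for field-deployed devices"
    version = "0.0.1"

    [[packages]]
    name = "greenboot"
    version = "*"

    [customizations.services]
    enabled = ["podman"]

Pushing the blueprint and starting an rpm-ostree ("edge") compose then looks roughly like this (the exact composer-cli subcommands may vary between Red Hat Enterprise Linux versions):

    composer-cli blueprints push edge-device.toml
    composer-cli compose start-ostree edge-device edge-commit

The resulting ostree commit can be versioned, tested, and rolled out to devices as a unit.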
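Similarly, here is a minimal sketch of a user-provided greenboot health check. The script path follows greenboot's convention for required checks (which may vary by version), and the probed endpoint is a hypothetical local application; a non-zero exit code marks the boot as failed and, after the configured retries, triggers a rollback to the previous image:

    #!/bin/bash
    # /etc/greenboot/check/required.d/50-app-healthy.sh (hypothetical check)
    # Fail the boot if the local application's health endpoint does not respond.
    curl --fail --silent --max-time 10 http://127.0.0.1:8080/healthz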

Growing into Kubernetes? Red Hat Device Edge has you covered, too.

Choose the workload model that best suits your needs: Build systems from RPM packages, containers running on Podman, or both. Run VMs on KVM. Bake container workloads into the system image for fast start-up and network resilience (edge appliance model), or deploy and update them at runtime (container host model).
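
As a rough illustration of the container host model, a workload pulled at runtime can be wired into systemd so it is restarted across reboots. The image and container names below are placeholders, and depending on the Podman version there are other ways to achieve the same result:

    # Create the container (image name is a placeholder).
    podman create --name sensor-gateway --restart=on-failure \
        registry.example.com/acme/sensor-gateway:1.2

    # Generate a systemd unit for it and enable it to start on boot.
    podman generate systemd --new --files --name sensor-gateway
    mv container-sensor-gateway.service /etc/systemd/system/
    systemctl daemon-reload
    systemctl enable --now container-sensor-gateway.service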

As you successfully roll out more services, you will sooner or later find you need orchestration. Rather than rolling your own, a more sustainable solution would be to adopt Kubernetes for container orchestration. Or maybe you would like to use off-the-shelf Kubernetes services. Or you would like the same Kubernetes development and operational model at the device edge as on the cloud.

Red Hat Device Edge has you covered there, too. It includes MicroShift, a new small form-factor OpenShift-derived Kubernetes distribution, built specifically for the device edge.

MicroShift comes as an RPM package that you can add to the blueprint of your system images when needed. Include your Kubernetes workloads, too, if you want; they will be deployed the next time you roll out updates to your devices. Red Hat Device Edge with MicroShift runs on Intel and Arm systems with as little as 2 CPU cores and 2GB of RAM.
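
As a rough sketch of how an embedded workload might look: MicroShift can apply Kubernetes manifests found on the filesystem at startup (for example under /etc/microshift/manifests, loaded via kustomize; the exact paths and mechanism may vary by version). The deployment below is a hypothetical example that would normally be added to the image through the blueprint rather than copied onto a device by hand:

    # /etc/microshift/manifests/kustomization.yaml
    resources:
      - sensor-gateway.yaml

    # /etc/microshift/manifests/sensor-gateway.yaml (hypothetical workload)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sensor-gateway
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: sensor-gateway
      template:
        metadata:
          labels:
            app: sensor-gateway
        spec:
          containers:
          - name: sensor-gateway
            image: registry.example.com/acme/sensor-gateway:1.2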

We want you to be able to develop your services on the cloud with OpenShift and roll them out to your production edge devices on MicroShift, benefiting from OpenShift’s features such as a strong security posture. Therefore, we build MicroShift from OpenShift’s source code for binary compatibility. We use the same proven software components as in OpenShift, such as the CRI-O container runtime and OVN-Kubernetes networking.

MicroShift also provides OpenShift’s APIs for security context constraints and routes, but to reduce footprint we’ve removed APIs that are only useful on build clusters or clusters with multi-user interactive access. We’ve also removed the Operators responsible for managing operating system updates and configuration or for orchestrating control plane components, as they are not needed in the MicroShift model.

Summary and call to action

Red Hat Device Edge offers Red Hat Enterprise Linux with new capabilities and tools built for the device edge, such as a system image-centric model. With MicroShift, the new small form factor OpenShift Kubernetes runtime, users have the choice to build systems from traditional Linux-native software packages, Docker application containers, or Kubernetes workloads.

Learn more about Red Hat Device Edge from the press release and the collaborations with Lockheed Martin and ABB. If you want to get hands-on with MicroShift, start at microshift.io.


About the author

Frank Zdarsky is Senior Principal Software Engineer in Red Hat’s Office of the CTO responsible for Edge Computing. He is also a member of Red Hat’s Edge Computing leadership team. Zdarsky’s team of seasoned engineers is developing advanced Edge Computing technologies, working closely with Red Hat’s business, engineering, and field teams as well as contributing to related open source community projects. He is serving on the Linux Foundation Edge’s technical advisory council and is a TSC member of the Akraino project.

Prior to this, Zdarsky led telco/NFV technology in the Office of the CTO and built a forward-deployed engineering team working with Red Hat’s most strategic partners and customers on enabling OpenStack and Kubernetes for telco/NFV use cases. He has also been an active contributor to open source projects such as OpenStack, OpenAirInterface, ONAP, and Akraino.
