    Hybrid cloud blog

    August 26, 2021

    Using NVIDIA A100’s Multi-Instance GPU to Run Multiple Workloads in Parallel on a Single GPU

    The new Multi-Instance GPU (MIG) feature lets GPUs based on the NVIDIA Ampere architecture run multiple GPU-accelerated CUDA applications in parallel in a fully isolated way. The compute units of the ...

    Kevin Pouget

    March 23, 2021

    Using the NVIDIA GPU Operator to Run Distributed TensorFlow 2.4 GPU Benchmarks in OpenShift 4

    Motivation: In certain cases, OpenShift 4.x users may want to run large, distributed workloads on the latest version of OpenShift, especially if their cluster's bare-metal machines' hardware is ...

    Courtney Pacheco