NVIDIA DGX™-Ready Software

NVIDIA DGX System

NVIDIA DGX servers and workstations use general-purpose GPU computing (GPGPU) to accelerate deep learning applications. The systems combine a rackmount or workstation chassis with high-performance x86 server CPUs: Intel Xeon in most models, and AMD EPYC in the DGX A100 and DGX Station A100. The main component is a set of 4 to 16 NVIDIA Tesla GPU modules mounted on an independent system board and integrated through a version of the SXM socket. DGX systems include powerful cooling to dissipate thousands of watts of thermal output.
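
As a quick illustration (not part of the DGX documentation), the sketch below uses the pynvml package, which is assumed to be installed, to enumerate the GPU modules visible on such a node; device names, counts, and memory sizes depend on the actual system.

    import pynvml

    pynvml.nvmlInit()
    try:
        count = pynvml.nvmlDeviceGetCount()
        for i in range(count):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            name = pynvml.nvmlDeviceGetName(handle)
            if isinstance(name, bytes):   # older pynvml versions return bytes
                name = name.decode()
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
            print(f"GPU {i}: {name}, {mem.total / 2**30:.0f} GiB")
    finally:
        pynvml.nvmlShutdown()

On an 8-GPU DGX system, this prints one line per GPU module.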

Backend.AI is validated as the first NVIDIA DGX™-Ready Software in the Asia Pacific region.

Backend.AI on the DGX Family

Complements to NVIDIA Container Runtime

  • GPU sharing for multi-user support (see the sketch after this list)
  • Features for machine learning pipelines
  • Scheduling aware of CPU/GPU topology
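
GPU sharing means that several users' sessions can be packed onto the same physical GPUs as fractional shares. The snippet below is a minimal, self-contained sketch of that idea under simplified assumptions (one whole GPU equals a share of 1.0, first-fit placement); the names and numbers are hypothetical and do not describe Backend.AI's actual scheduler.

    from dataclasses import dataclass

    @dataclass
    class PhysicalGPU:
        index: int              # device index on the node
        capacity: float = 1.0   # one whole GPU expressed as a share of 1.0
        allocated: float = 0.0  # sum of shares already granted

        def free(self) -> float:
            return self.capacity - self.allocated

    def allocate(gpus, request):
        """Place a fractional request (e.g. 0.25 of a GPU) on the first
        device with enough headroom; return its index, or None if full."""
        for gpu in gpus:
            if gpu.free() >= request:
                gpu.allocated += request
                return gpu.index
        return None

    if __name__ == "__main__":
        node = [PhysicalGPU(i) for i in range(8)]   # e.g. an 8-GPU DGX node
        for user, share in [("alice", 0.5), ("bob", 0.5), ("carol", 0.25)]:
            print(f"{user}: {share} GPU share -> device {allocate(node, share)}")

In practice, a scheduler would also weigh CPU/GPU topology (NUMA locality, NVLink connectivity) when choosing a device, which is what the third bullet above refers to.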
