GPU Virtualization

Container-level Fractional GPU Scaling

Assigning slices of GPU compute (SMs) and memory to containers

Shared GPUs: inference & education workloads 

Multiple GPUs: model training workloads 

With a proprietary CUDA virtualization layer

* Patents registered in Korea, the US, and Japan
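The fractional-scaling idea above can be sketched as a toy allocator: a container requests a fraction of a GPU (or more than one whole GPU), and the scheduler fills the request from slices of one or more physical devices. This is an illustrative sketch only; the class and function names are hypothetical and do not reflect Lablup's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class GPU:
    """A physical GPU with fractional capacity (1.0 == whole device)."""
    name: str
    free: float = 1.0

def allocate(gpus, request):
    """Greedy toy allocator: satisfy a container's fractional-GPU
    request using slices from one or more physical devices."""
    grants, remaining = [], request
    for gpu in gpus:
        if remaining <= 1e-9:
            break
        take = min(gpu.free, remaining)
        if take > 0:
            gpu.free -= take
            remaining -= take
            grants.append((gpu.name, round(take, 3)))
    if remaining > 1e-9:
        # Roll back partial grants if the cluster cannot satisfy the request.
        for name, take in grants:
            next(g for g in gpus if g.name == name).free += take
        raise RuntimeError("insufficient GPU capacity")
    return grants

gpus = [GPU("gpu0"), GPU("gpu1")]
print(allocate(gpus, 0.5))   # shared-GPU inference: half of gpu0
print(allocate(gpus, 1.25))  # training: rest of gpu0 plus a slice of gpu1
```

The two calls mirror the workloads listed above: small fractions let several inference or education containers share one GPU, while a request larger than 1.0 spans multiple devices for training.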

Proprietary CUDA virtualization layer

  • Supports all GPU models for CUDA 8 through 12 (desktop / workstation / datacenter)
  • No code changes required for user programs
  • No customization or rebuild required for DL frameworks
  • Not limited to TensorFlow/PyTorch: any GPU-accelerated workload works
  • Multi-GPU support for a single container, combining fractions from different GPUs
  • Reproducible R&D environments for faster experiment cycles
  • On-demand resource provisioning on top of bare metal, VMs, and containers
  • Optimized for clusters of high-end nodes with many CPUs and accelerators


Headquarters & HPC Lab

KR Office: 8F, 577, Seolleung-ro, Gangnam-gu, Seoul, Republic of Korea
US Office: 3003 N First St, Suite 221, San Jose, CA 95134

© Lablup Inc. All rights reserved.
