
HPC Optimization

Backend.AI uses a proprietary GPU-centric orchestrator and scheduler to achieve optimal resource placement and multi-node workload distribution for AI and high-performance computing. It also incorporates a storage proxy that parallelizes data I/O, further improving how efficiently computing resources are managed and utilized.

  • Distributed processing of multiple containers across multiple nodes

    • Built-in, scheduler-wide multi-container resource placement and relocation
    • Supports high-speed networking technologies such as RDMA
  • GPU optimization technology

    • Optimized CUDA implementations built on the NVIDIA partnership
    • The industry's only container-based multi-GPU and partial-GPU sharing (fractional GPU™ scaling)
  • Enforces container resource constraints in HPC libraries (e.g., BLAS)

    • Powerful system resource control through libc overriding (CPU core count correction, etc.; see the shim sketch after this list)
  • Dynamic sandboxing: a programmable, rewritable syscall filter

    • Supports richer, programmable policies than AppArmor/seccomp (a static seccomp filter is sketched below for contrast)
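
To make the libc-overriding idea above concrete, here is a minimal, hypothetical sketch of the general technique: an LD_PRELOAD shim that intercepts the libc calls BLAS-style libraries use to size their thread pools, so they see the container's CPU quota instead of the host's physical core count. The `CONTAINER_CPU_LIMIT` variable and the shim itself are illustrative assumptions, not Backend.AI's actual implementation.

```c
/* Hypothetical LD_PRELOAD shim (not Backend.AI's actual code):
 * overrides the libc calls that OpenBLAS, MKL, etc. use to size
 * thread pools, so they see the container's CPU quota instead of
 * the host's physical core count. */
#define _GNU_SOURCE
#include <stdlib.h>
#include <unistd.h>
#include <dlfcn.h>
#include <sys/sysinfo.h>

/* CONTAINER_CPU_LIMIT is an assumed variable set by the runtime. */
static long allowed_cpus(void) {
    const char *s = getenv("CONTAINER_CPU_LIMIT");
    long n = s ? atol(s) : 1;
    return n > 0 ? n : 1;
}

/* Interpose sysconf() so _SC_NPROCESSORS_* report the quota. */
long sysconf(int name) {
    static long (*real_sysconf)(int);
    if (!real_sysconf)
        real_sysconf = dlsym(RTLD_NEXT, "sysconf");
    if (name == _SC_NPROCESSORS_ONLN || name == _SC_NPROCESSORS_CONF)
        return allowed_cpus();
    return real_sysconf(name);
}

/* glibc's get_nprocs() is another common probe; clamp it too. */
int get_nprocs(void) {
    return (int)allowed_cpus();
}
```

Built as a shared object (`gcc -shared -fPIC -o libcpufix.so cpufix.c -ldl`) and injected with `LD_PRELOAD=./libcpufix.so`, such a shim caps a BLAS library's automatic thread-count detection without modifying the application.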
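
For comparison with the programmable syscall filtering described above, the following is a minimal sketch of the conventional baseline: a static seccomp filter built with libseccomp, whose policy is fixed once loaded. This illustrates the standard seccomp approach only; it is not Backend.AI's sandboxing implementation.

```c
/* Minimal static seccomp filter using libseccomp (link with -lseccomp).
 * Shown only as the conventional baseline: once seccomp_load() runs,
 * the policy cannot be rewritten for the lifetime of the process. */
#include <errno.h>
#include <stdio.h>
#include <seccomp.h>

int main(void) {
    /* Allow everything by default, then statically deny mount(2). */
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);
    if (!ctx)
        return 1;
    if (seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(mount), 0) < 0 ||
        seccomp_load(ctx) < 0) {
        seccomp_release(ctx);
        return 1;
    }
    seccomp_release(ctx);
    puts("static filter loaded; mount(2) now fails with EPERM");
    return 0;
}
```

A programmable, rewritable filter, by contrast, can adjust which syscalls are permitted per session or per policy update rather than fixing the rules at process start.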
