HPC Optimization

Backend.AI uses a proprietary GPU-centric orchestrator and scheduler to ensure optimal resource placement and multi-node workload distribution for AI and high-performance computing. It also incorporates a storage proxy that parallelizes data I/O, further improving how efficiently computing resources are managed and unlocking their full potential.

  • Distributed processing of multiple containers across multiple nodes (see the MPI sketch after this list)
    • Built-in, scheduler-wide multi-container resource placement and relocation
    • Supports high-speed networking technologies such as RDMA
  • GPU optimization technology
    • CUDA optimizations implemented through the NVIDIA partnership
    • The industry's only container-based multi-/partial-GPU sharing (fractional GPU™ scaling; see the packing sketch below)
  • Enforces container resource constraints in HPC libraries (e.g., BLAS)
    • Powerful system resource control through libc overriding (CPU core count correction, etc.; see the snippet below)
  • Dynamic sandboxing: a programmable, rewritable syscall filter
    • Supports richer programmable policies than AppArmor/seccomp (see the policy sketch below)
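
To make the multi-node item concrete, here is a minimal mpi4py sketch of the kind of distributed workload such a scheduler spreads across containers on several nodes. It uses only standard MPI calls; nothing here is Backend.AI-specific, and the interconnect (RDMA or otherwise) is whatever the MPI fabric selects.

```python
# Illustrative only: a minimal mpi4py job of the kind a multi-node,
# multi-container scheduler would place across nodes. All calls below are
# standard MPI, not Backend.AI-specific APIs.
import socket
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # index of this process across all containers
size = comm.Get_size()   # total processes spread over the nodes

# Each rank contributes a partial result; allreduce aggregates it over the
# (potentially RDMA-backed) interconnect chosen by the MPI fabric.
partial = rank * rank
total = comm.allreduce(partial, op=MPI.SUM)

if rank == 0:
    print(f"{size} ranks, incl. host {socket.gethostname()}: sum = {total}")
```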
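
The fractional GPU™ scaling item refers to splitting one physical GPU across several containers. The sketch below shows only the bookkeeping idea behind such packing, with shares expressed as fractions of one device; the first-fit logic and container names are illustrative assumptions, not Backend.AI's actual scheduler.

```python
# A minimal sketch of the bookkeeping behind fractional GPU scaling:
# several containers each request a fraction of one physical GPU and the
# placer packs them so the shares never exceed device capacity.
# The packing strategy and names here are illustrative only.
from typing import Dict, List

def pack_fractional_gpus(requests: Dict[str, float],
                         gpus: List[float]) -> Dict[str, int]:
    """First-fit placement of fractional GPU shares onto physical GPUs.

    requests: container name -> requested share of one GPU (e.g. 0.25)
    gpus:     remaining capacity per physical device (1.0 == whole GPU)
    Returns a mapping container -> GPU index, or raises if it cannot fit.
    """
    placement: Dict[str, int] = {}
    for name, share in sorted(requests.items(), key=lambda kv: -kv[1]):
        for idx, free in enumerate(gpus):
            if share <= free + 1e-9:
                gpus[idx] = free - share
                placement[name] = idx
                break
        else:
            raise RuntimeError(f"no GPU has {share} free for {name}")
    return placement

# Four containers sharing two physical GPUs via fractional shares.
print(pack_fractional_gpus(
    {"train-a": 0.5, "train-b": 0.5, "infer-c": 0.75, "infer-d": 0.25},
    gpus=[1.0, 1.0],
))
```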
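
The libc-overriding item addresses a common container pitfall: CPU-count queries report the host's cores, so BLAS and OpenMP spawn too many threads. The snippet below only demonstrates the discrepancy and a user-space mitigation via environment variables; Backend.AI's actual correction happens at the libc level inside the container.

```python
# Why "CPU core count correction" matters: inside a CPU-limited container,
# naive core detection still reports the host's cores, so HPC libraries
# (OpenMP, BLAS) over-subscribe threads. This snippet only *shows* the
# discrepancy and a user-space workaround; it is not the libc override.
import os

reported = os.cpu_count()               # what naive core detection reports
allowed = len(os.sched_getaffinity(0))  # cores actually usable (Linux-only)

print(f"cpu_count() reports {reported}, affinity allows {allowed}")

# User-space mitigation: cap common threading knobs to the real allowance.
for var in ("OMP_NUM_THREADS", "OPENBLAS_NUM_THREADS", "MKL_NUM_THREADS"):
    os.environ.setdefault(var, str(allowed))
```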
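
For the dynamic sandboxing item, the following sketch illustrates what a programmable, argument-aware syscall policy looks like compared to a static allow/deny profile. The Syscall record and policy functions are hypothetical and run in user space purely for illustration; a real sandbox enforces such decisions at the kernel boundary.

```python
# Illustrative only: a "programmable" syscall policy routes each syscall to a
# function that can inspect its arguments, instead of a fixed allow/deny list.
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass
class Syscall:
    name: str
    args: Tuple

Policy = Callable[[Syscall], bool]   # True = allow, False = deny

def allow(_: Syscall) -> bool:
    return True

def deny(_: Syscall) -> bool:
    return False

def open_readonly_tmp(call: Syscall) -> bool:
    # Conditional rule: openat() is allowed only under /tmp and read-only.
    path, flags = call.args
    return path.startswith("/tmp/") and flags == "O_RDONLY"

POLICIES: Dict[str, Policy] = {
    "read": allow,
    "write": allow,
    "openat": open_readonly_tmp,   # argument-dependent decision
    "ptrace": deny,                # hard-blocked
}

def filter_syscall(call: Syscall) -> bool:
    """Default-deny dispatcher: unknown syscalls are rejected."""
    return POLICIES.get(call.name, deny)(call)

print(filter_syscall(Syscall("openat", ("/tmp/data.csv", "O_RDONLY"))))  # True
print(filter_syscall(Syscall("openat", ("/etc/passwd", "O_RDONLY"))))    # False
```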
