On-premise
Our Enterprise pricing model
is based on a yearly subscription.
- Open-source (Ground): Backend.AI Ground is open-source software that will remain free forever. Use your computational resources efficiently, escape the management hassle, and focus on what matters. Use and hack it as needed!
- Enterprise
Are you building an in-house GPU cluster farm? Having difficulty assigning resources across multiple organizations and users?
Are your GPU resources underutilized?
Save your GPU clusters with the Backend.AI on-premise Enterprise solution.
Cloud
Use the Cloud and pay
only for what you consume.
- Basic
Ideal if you are new to machine learning or just want to run a simple test. You can switch to a paid account whenever you want.
- Essential
Good for modeling, machine-learning study, and training. You can also use Backend.AI on your own computers via the cloud console (beta), meaning you can use your own resources without paying for them, from anywhere.
- Pro
Want to train a larger model with more resources? You can also use Backend.AI on your own computers via the cloud console (beta), meaning you can use your own resources without paying for them, from anywhere.
- Enterprise
Do you use a lot of resources at your institution? You can reserve resources in advance and use them at a lower cost.
Enterprise Cloud also provides organization management features, such as user management, group management, and resource allocation.
Compare editions
The feature reference table
is based on Backend.AI (24.03) / Backend.AI Enterprise (R2).
- Container-level Multi GPU
- NVIDIA CUDA GPU
- AMD ROCm GPU
- Google Cloud TPU
- GraphCore IPU
- Container-level Fractional GPU sharing
- NVLink-optimized GPU plugin architecture
- On-premise installation on both bare-metal / VM
- Hybrid cloud (on-premise + cloud)
- Polycloud (multi-cloud federation)
- Attaching multiple network planes to containers for data transfers and GPU Direct Connect / GPU Direct Storage in distributed workloads
- Vendor-specific storage acceleration plugins (RedHat CephFS, PureStorage FlashBlade, NetApp ONTAP, Dell EMC PowerStore and Weka.io)
- Automatic scaling integrated with cloud APIs
- Unified scheduling & monitoring with GUI and CLI admin
- Resource allocation per user or user group
- Multi-container batch execution and monitoring
- Availability-slot based scheduling with heuristic FIFO
- Customizable batch job scheduler
- Detection and auto-blocking of cryptocurrency mining workloads
- Automatic policy-based reclamation of idle resources
- Multi-tenancy
- Sandboxing via Hypervisor/Container
- Programmable Sandboxing
- Syscall-level logging
- Administrator monitoring
- High-availability (HA) configuration with active-active management nodes
- On-the-fly addition and removal of compute nodes
- Desktop application (Windows 10+, macOS 10.12+, Linux)
- In-browser/in-app access to container applications
- Control panel and dashboard for enterprise administrators
- Data up-/download and sharing via shared folder
- Large file transfers via scalable, standalone storage proxy
- EFS, NFS, SMB and distributed file systems (CephFS, GlusterFS, HDFS)
- Access control and sharing data/models with users and group projects
- Local acceleration cache (SSD, memory)
- Universal programming language support (17+ languages including Python, C/C++, R, Java, MATLAB, etc.)
- IDE plugins: VS Code, IntelliJ, PyCharm
- Interactive shell & terminal support
- GUI-based custom container image builder
- GUI tools in user application (Jupyter, TensorBoard, etc.)
- GUI tools in web console (Jupyter, TensorBoard, etc.)
- NGC (NVIDIA GPU Cloud) platform integration
- Fully compatible with major machine learning libraries (TensorFlow, PyTorch, CNTK, MXNet, etc.)
- Concurrent execution of multiple versions of libraries
- Automatic update of ML libraries
- DL model as a function
- Serving user-written models
- Model versioning
- GUI-based per-node customizable MLOps/AIOps tool
- Offline installer packages for on-premise setups
- Backend.AI Reservoir: a private package repository to serve PyPI, CRAN and Ubuntu
- System administrator dedicated dashboard
- Administrator dedicated control panel
- Compute node setting control
- Compute node system setting control
- System statistics
- Monitoring solution interlock
- On-site installation (Bare metal / VM)
- Configuration support (on-premise + cloud)
- On-site admin/user training
- Managed version upgrade
- Priority development and escalation
- Custom kernel images and managed repository
- High-availability (HA) installation
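To give a feel for the "Availability-slot based scheduling with heuristic FIFO" feature above, here is a minimal illustrative sketch. This is not Backend.AI's actual implementation; the agent names, slot keys (`gpu`, `mem`), and function are hypothetical. It shows the core idea: agents advertise free resource slots (including fractional GPU shares), and queued jobs are placed first-in-first-out, with jobs that do not yet fit any agent held back for retry without losing their queue order.

```python
# Illustrative sketch only: a minimal availability-slot FIFO scheduler.
# NOT Backend.AI's implementation; names and slot keys are assumptions.
from collections import deque

def schedule_fifo(agents, queue):
    """agents: {name: {"gpu": float, "mem": int}} of currently free slots.
    queue: deque of (job_id, {"gpu": float, "mem": int}) requests.
    Returns a list of (job_id, agent) placements; unplaceable jobs
    are re-queued in their original order."""
    placements = []
    pending = deque()
    while queue:
        job_id, req = queue.popleft()
        # Heuristic: pick the first agent whose free slots cover the request.
        target = next(
            (name for name, free in agents.items()
             if all(free.get(key, 0) >= need for key, need in req.items())),
            None,
        )
        if target is None:
            pending.append((job_id, req))  # keep FIFO order for the next pass
            continue
        for key, need in req.items():
            agents[target][key] -= need  # reserve the slots on that agent
        placements.append((job_id, target))
    queue.extend(pending)
    return placements
```

With fractional slot values (e.g. two 0.5-GPU jobs on an agent exposing 1.0 GPU), the same loop also illustrates how container-level fractional GPU sharing interacts with the scheduler.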
- On-prem: Open-source (Ground)
- On-prem: Enterprise
- Cloud: Basic
- Cloud: Essential
- Cloud: Pro
- Cloud: Enterprise