# Backends

-Backends allow `dstack` to provision fleets across cloud providers or Kubernetes clusters.
+Backends allow `dstack` to provision fleets across GPU clouds or Kubernetes clusters.

`dstack` supports two types of backends:

* [VM-based](#vm-based) – use `dstack`'s native integration with cloud providers to provision VMs, manage clusters, and orchestrate container-based runs.
* [Container-based](#container-based) – use either `dstack`'s native integration with cloud providers or Kubernetes to orchestrate container-based runs; provisioning in this case is delegated to the cloud provider or Kubernetes.

-??? info "SSH fleets"
+!!! info "SSH fleets"
    When using `dstack` with on-prem servers, backend configuration isn’t required. Simply create [SSH fleets](../concepts/fleets.md#ssh-fleets) once the servers are up (see the sketch below).

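An SSH fleet definition is roughly the sketch below; the fleet name, host addresses, user, and key path are placeholders.

```yaml
type: fleet
name: my-ssh-fleet               # placeholder name
ssh_config:
  user: ubuntu                   # placeholder SSH user on the on-prem servers
  identity_file: ~/.ssh/id_rsa   # placeholder private key
  hosts:
    - 192.168.100.1              # placeholder host addresses
    - 192.168.100.2
```
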
Backends can be configured via `~/.dstack/server/config.yml` or through the [project settings page](../concepts/projects.md#backends) in the UI. See the examples of backend configuration below.

+> If you update `~/.dstack/server/config.yml`, you must restart the server for the changes to take effect.
+
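A minimal `~/.dstack/server/config.yml` follows the structure sketched below. The `aws` backend with access-key credentials is only one illustrative option; see the per-backend examples below for the exact fields each backend expects.

```yaml
projects:
  - name: main                          # project the backends belong to
    backends:
      - type: aws                       # illustrative; any supported backend type goes here
        creds:
          type: access_key
          access_key: <YOUR_ACCESS_KEY> # placeholder credentials
          secret_key: <YOUR_SECRET_KEY>
```
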
## VM-based

-VM-based backends allow `dstack` users to manage clusters and orchestrate container-based runs across a wide range of cloud providers.
-Under the hood, `dstack` uses native integrations with these providers to provision clusters on demand.
+VM-based backends allow `dstack` users to manage clusters and orchestrate container-based runs across a wide range of cloud providers. Under the hood, `dstack` uses native integrations with these providers to provision clusters on demand.

Compared to [container-based](#container-based) backends, this approach offers finer-grained, simpler control over cluster provisioning and eliminates the dependency on a Kubernetes layer.

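As a sketch of what on-demand provisioning looks like with a VM-based backend, a cloud [fleet](../concepts/fleets.md) definition along these lines asks `dstack` to provision two interconnected instances; the name and resource spec are placeholders.

```yaml
type: fleet
name: my-cluster       # placeholder name
nodes: 2               # provision two instances
placement: cluster     # place instances close together for fast interconnect
resources:
  gpu: 24GB            # placeholder resource requirements
```
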
@@ -1036,9 +1037,13 @@ projects: |

No additional setup is required — `dstack` configures and manages the proxy automatically.

-??? info "NVIDIA GPU Operator"
-    For `dstack` to correctly detect GPUs in your Kubernetes cluster, the cluster must have the
-    [NVIDIA GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/index.html) pre-installed.
+??? info "Required operators"
+    === "NVIDIA"
+        For `dstack` to correctly detect GPUs in your Kubernetes cluster, the cluster must have the
+        [NVIDIA GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/index.html) pre-installed.
+    === "AMD"
+        For `dstack` to correctly detect GPUs in your Kubernetes cluster, the cluster must have the
+        [AMD GPU Operator](https://github.com/ROCm/gpu-operator) pre-installed.

<!-- ??? info "Managed Kubernetes"
    While `dstack` supports both managed and on-prem Kubernetes clusters, it can only run on pre-provisioned nodes.
@@ -1071,7 +1076,7 @@ projects: |

Ensure you've created a ClusterRoleBinding to grant the role to the user or the service account you're using.

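For example, a ClusterRoleBinding along these lines grants the role to a service account; the binding name, role name, service account, and namespace are placeholders, so substitute the ones you actually use.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dstack-role-binding   # placeholder binding name
subjects:
  - kind: ServiceAccount      # use kind: User for a user account instead
    name: dstack              # placeholder service account
    namespace: default        # placeholder namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: dstack-role           # placeholder; use the role you created
```
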
-> To learn more, see the [Kubernetes](../guides/kubernetes.md) guide.
+> To learn more, see the [Lambda](../../examples/clusters/lambda/#kubernetes) and [Crusoe](../../examples/clusters/crusoe/#kubernetes) examples.

### RunPod
