---
myst:
html_meta:
"description lang=en": "Automated Workspaces session scaling for Docker agents and fixed infrastructure for cloud deployments."
"keywords": "Kasm, Autoscaling, Cloud, AWS, OCI, Digital Ocean, Oracle Cloud, Azure, GCP, Google Compute Engine, Kubernetes, KubeVirt, Harvester, Google Kubernetes Engine, GKE"
"property=og:locale": "en_US"
---
# VM Provider Configs
```{note}
AutoScaling is available in the Community and Enterprise editions only. For more information on licensing please visit: [Licensing](/license).
```
```{figure} /images/compute/vm_create_new.webp
:align: center
**Create New Provider**
```
```{eval-rst}
.. table:: VM Provider Settings
:widths: 200
+-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| **Name** | **Description** |
+-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| **VM Provider Configs** | Select an existing config or create a new config. If selecting an existing config and changing any of the details, those details will be changed for anything using the same VM Provider config. |
+-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| **Provider** | Select a provider from AWS, Azure, Digital Ocean, Google Cloud or Oracle Cloud. If selecting an existing provider this will be selected automatically. |
+-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
```
## AWS Settings
A number of settings are required to be defined to use this functionality.
```{include} /guide/compute/vm_providers/aws.md
```
## Azure Settings
A number of settings are required to be defined to use this functionality. The Azure settings appear in the
Deployment Zone configuration when the feature is licensed.
```{figure} /images/compute/vm_azure.webp
:align: center
**Azure Settings**
```
## Register Azure app
An API key must be created for Kasm to use when interfacing with Azure. Azure calls these registered apps, and the example below walks through registering one along with the required permissions.
1. Register an app by going to the Azure Active Directory service in the Azure portal.
```{figure} /images/autoscaling/azure/azure_active_directory.png
:align: center
**Azure Active Directory**
```
2. From the **Add** dropdown select **App Registration**
```{figure} /images/autoscaling/azure/app_registration.png
:align: center
**App Registration**
```
3. Give this app a human-readable name such as **Kasm Workspaces**
```{figure} /images/autoscaling/azure/app_registration_name.png
:align: center
**App Registration**
```
4. Go to **Resource Groups** and select the **Resource Group** that Kasm will autoscale in.
```{figure} /images/autoscaling/azure/azure_resource_groups.png
:align: center
**Azure Resource Groups**
```
5. Select **Access Control (IAM)**
```{figure} /images/autoscaling/azure/resource_group_access_control.png
:align: center
**Access Control**
```
6. From the **Add** drop down select **Add role assignment**
```{figure} /images/autoscaling/azure/add_role_assignment.png
:align: center
**Add Role Assignment**
```
7. The app created in Azure will need several roles. First select the *Virtual Machine Contributor* role, then on the next page select the app by typing in its name, e.g. **Kasm Workspaces**
```{figure} /images/autoscaling/azure/select_virtual_machine_contributor.png
:align: center
**Virtual Machine Contributor**
```
```{figure} /images/autoscaling/azure/virtual_machine_contributor_assign_app.png
:align: center
**Assign Contributor**
```
8. Go through this process again to add the *Network Contributor* and the *DNS Zone Contributor* roles
```{figure} /images/autoscaling/azure/assign_network_contributor.png
:align: center
**Network Contributor**
```
```{figure} /images/autoscaling/azure/assign_dns_zone_contributor.png
:align: center
**DNS Zone Contributor**
```
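The registration and role assignments above can also be scripted with the Azure CLI. Below is a minimal sketch, assuming placeholder subscription and resource group values and the app name **Kasm Workspaces**:
```Bash
# Placeholder values - replace with your subscription ID and the Resource Group Kasm will autoscale in
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP="kasm-autoscale-rg"
SCOPE="/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RESOURCE_GROUP}"

# Register the app, create a client secret, and grant the Virtual Machine Contributor role
az ad sp create-for-rbac --name "Kasm Workspaces" \
  --role "Virtual Machine Contributor" --scopes "${SCOPE}"

# Grant the remaining roles to the same app (use the appId returned by the previous command)
APP_ID="<appId from the previous command>"
az role assignment create --assignee "${APP_ID}" --role "Network Contributor" --scope "${SCOPE}"
az role assignment create --assignee "${APP_ID}" --role "DNS Zone Contributor" --scope "${SCOPE}"
```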
## Azure VM Settings
A number of settings are required to be defined to use this functionality.
The Azure settings appear in the Pool configuration when the feature is licensed.
```{include} /guide/compute/vm_providers/azure.md
```
## Digital Ocean Settings
```{note}
A detailed guide on Digital Ocean AutoScale configuration is available [Here](/how_to/infrastructure_components/autoscale_providers/digitalocean)
```
```{warning}
Please review [Tag Does Not Exist Error](#tag-does-not-exist-error) for known issues and workarounds
```
```{include} /guide/compute/vm_providers/digital_ocean.md
```
### Tag Does Not Exist Error
Upon first testing AutoScaling with Digital Ocean, an error similar to the following may be presented:
```{code-block} bash
:emphasize-lines: 1
Future generated an exception: tag zone:abc123 does not exist
traceback:
..
File "digitalocean/Firewall.py", line 225, in add_tags
File "digitalocean/baseapi.py", line 196, in get_data
digitalocean.DataReadError: tag zone:abc123 does not exist
process: manager_api_server
```
This error occurs when Kasm Workspaces tries to assign a unique tag, based on the Zone Id, to the Digital Ocean Firewall.
If that tag does not already exist in Digital Ocean, the operation will fail and present the error.
To work around the issue, manually create a tag matching the one specified in the error (e.g. `zone:abc123`).
This can be done via the Digital Ocean API or by simply creating the tag on a temporary Droplet.
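For example, the tag from the error message can be created directly through the Digital Ocean API using a personal access token (the token value below is a placeholder):
```Bash
# Placeholder token - substitute a valid Digital Ocean API token and the tag name from the error
DO_TOKEN="your_digital_ocean_api_token"
curl -X POST "https://api.digitalocean.com/v2/tags" \
  -H "Authorization: Bearer ${DO_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"name": "zone:abc123"}'
```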
## Google Cloud (GCP) Settings
```{note}
A detailed guide on GCP AutoScale configuration is available [Here](/how_to/infrastructure_components/autoscale_providers/gcp)
```
```{include} /guide/compute/vm_providers/gcp.md
```
### Note on Updating Existing Google Cloud Providers (GCP)
Please review the settings for all existing Google Cloud Providers (GCP). Two new fields were added; `VM Installed OS Type`
which defaults to `Linux`, and `Startup Script Type` which defaults to `Bash Script`. If the existing provider is configured
with a Windows VM it will not successfully launch the startup script without changing these values.
## Harvester Settings
```{note}
A detailed guide on Harvester AutoScale configuration is available [Here](/how_to/infrastructure_components/autoscale_providers/harvester)
```
```{include} /guide/compute/vm_providers/harvester.md
```
## Oracle Cloud (OCI) Settings
```{note}
A detailed guide on OCI AutoScale configuration is available [Here](/how_to/infrastructure_components/autoscale_providers/oci)
```
```{include} /guide/compute/vm_providers/oci.md
```
You can find the OCI Image ID for the desired operating system version and region by navigating to the [OCI Image page](https://docs.oracle.com/en-us/iaas/images/).
### OCI Config Override Examples
Below are some OCI autoscale configurations that utilize the OCI Config Override.
```{eval-rst}
.. dropdown:: Disable Legacy Instance Metadata Service
:animate: fade-in
Disables instance metadata service v2 for additional security.
.. code-block:: json
{
"launch_instance_details": {
"instance_options": {
"OCI_MODEL_NAME": "InstanceOptions",
"are_legacy_imds_endpoints_disabled": true
}
}
}
.. dropdown:: Enable Instance Agent Plugins
:animate: fade-in
A list of available plugins can be retrieved by navigating to an existing instance's "Oracle Cloud Agent" config page.
This example enables the "Vulnerability Scanning" plugin.
.. code-block:: json
{
"launch_instance_details": {
"agent_config": {
"OCI_MODEL_NAME": "LaunchInstanceAgentConfigDetails",
"is_monitoring_disabled": false,
"is_management_disabled": false,
"are_all_plugins_disabled": false,
"plugins_config": [{
"OCI_MODEL_NAME": "InstanceAgentPluginConfigDetails",
"name": "Vulnerability Scanning",
"desired_state": "ENABLED"
}]
}
}
}
```
## Nutanix Settings
```{note}
A detailed guide on Nutanix AutoScale configuration is available [Here](/how_to/infrastructure_components/autoscale_providers/nutanix)
```
```{include} /guide/compute/vm_providers/nutanix.md
```
## Proxmox Settings
```{note}
A detailed guide on Proxmox AutoScale configuration is available [Here](/how_to/infrastructure_components/autoscale_providers/proxmox)
```
```{include} /guide/compute/vm_providers/proxmox.md
```
## VMware vSphere Settings
```{note}
A detailed guide on vSphere AutoScale configuration is available [Here](/how_to/infrastructure_components/autoscale_providers/vmware)
```
```{include} /guide/compute/vm_providers/vmware.md
```
### Permissions for vCenter service account
These are the minimum permissions that your service account requires in vCenter based on a default configuration. The account might require additional privileges depending on specific features and configurations you have in place. We advise creating a dedicated service account for Kasm Workspaces autoscaling with these permissions to enhance security and minimize potential risks.
* Datastore
* Allocate space
* Browse datastore
* Global
* Cancel task
* Network
* Assign network
* Resource
* Assign virtual machine to resource pool
* Virtual machine
* Change Configuration
* Change CPU count
* Change Memory
* Set annotation
* Edit Inventory
* Create from existing
* Create new
* Remove
* Unregister
* Guest operations
* Guest operation modifications
* Guest operation program execution
* Guest operation queries
* Interaction
* Power off
* Power on
* Provisioning
* Deploy template
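The privileges above map to vSphere privilege IDs, so the role can also be created with the `govc` CLI. Below is a sketch, assuming a role named `kasm-autoscale`; assign the role to the service account at the appropriate inventory level afterwards:
```Bash
# Create a vCenter role containing the minimum privileges listed above
govc role.create kasm-autoscale \
  Datastore.AllocateSpace Datastore.Browse \
  Global.CancelTask \
  Network.Assign \
  Resource.AssignVMToPool \
  VirtualMachine.Config.CPUCount VirtualMachine.Config.Memory VirtualMachine.Config.Annotation \
  VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Create \
  VirtualMachine.Inventory.Delete VirtualMachine.Inventory.Unregister \
  VirtualMachine.GuestOperations.Modify VirtualMachine.GuestOperations.Execute VirtualMachine.GuestOperations.Query \
  VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn \
  VirtualMachine.Provisioning.DeployTemplate
```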
### Network Connectivity
The agent startup scripts utilize VMware's guest script execution via VMware Tools. This functionality requires direct HTTPS connectivity between the Kasm Workspace Manager and the ESXi host(s) running the agent VMs.
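Connectivity can be spot-checked from the manager host with a TLS request to each ESXi host on port 443 (the hostname below is a placeholder):
```Bash
# The handshake should complete even though the ESXi certificate may be untrusted (-k)
curl -vk --max-time 5 https://esxi01.example.local/
```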
### Notes on vSphere Datastore Storage
When configuring VMware vSphere with Kasm Workspaces, one important item to keep in mind is datastore storage. When clones are created, VMware will attempt to satisfy the clone operation; if the datastore runs out of space, any VMs running on that datastore will be paused until space is available. Kasm Workspaces recommends that critical management VMs, such as the vCenter Server VM and cluster management VMs, are kept on separate datastores that are not used for Kasm autoscaling.
## OpenStack Settings
A number of settings are required to be defined to use this functionality.
The OpenStack settings appear in the Pool configuration when the feature is licensed.
The appropriate OpenStack configuration options can be found by using the "API Access" page of the OpenStack UI and downloading the "OpenStack RC File".
```{include} /guide/compute/vm_providers/openstack.md
```
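For reference, the downloaded RC file is a shell script that exports values similar to the following; the exact entries vary by deployment, and the values below are placeholders:
```Bash
# Example values only - use the entries from your own OpenStack RC file
export OS_AUTH_URL=https://openstack.example.com:5000/v3
export OS_PROJECT_ID=00000000000000000000000000000000
export OS_PROJECT_NAME="kasm"
export OS_USER_DOMAIN_NAME="Default"
export OS_USERNAME="kasm-autoscale"
export OS_REGION_NAME="RegionOne"
export OS_INTERFACE=public
export OS_IDENTITY_API_VERSION=3
```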
### OpenStack Notes
The OpenStack provider requires that OpenStack endpoints present trusted, signed TLS certificates. This can be done through an API gateway that presents a valid certificate or
through configuring valid certificates on each individual service (Reference: [Openstack Docs](https://docs.openstack.org/charm-guide/latest/admin/security/tls.html)).
```{eval-rst}
.. dropdown:: Application Credential Access Rules
:animate: fade-in
Openstack Application credentials allow for administrators to specify Access Rules to restrict the permissions of an application credential further than a role might allow.
Below is an example of the minimum set of permissions that Kasm Workspaces requires in an Application Credential
.. code-block:: Bash
- service: volumev3
method: POST
path: /v3/*/volumes
- service: volumev3
method: DELETE
path: /v3/*/volumes/*
- service: volumev3
method: GET
path: /v3/*/volumes
- service: volumev3
method: GET
path: /v3/*/volumes/*
- service: volumev3
method: GET
path: /v3/*/volumes/detail
- service: compute
method: GET
path: /v2.1/servers/detail
- service: compute
method: GET
path: /v2.1/servers
- service: compute
method: GET
path: /v2.1/flavors
- service: compute
method: GET
path: /v2.1/flavors/*
- service: compute
method: GET
path: /v2.1/servers/*/os-volume_attachments
- service: compute
method: GET
path: /v2.1/servers/*
- service: compute
method: GET
path: /v2.1/servers/*/os-interface
- service: compute
method: POST
path: /v2.1/servers
- service: compute
method: DELETE
path: /v2.1/servers/*
- service: image
method: GET
path: /v2/images/*
- service: image
method: GET
path: /v2/schemas/image
```
## KubeVirt Enabled Providers
### Overview
KASM supports autoscaling in Kubernetes environments that are running KubeVirt. This includes generic k8s installations as well as GKE and Harvester deployments.
### Updated Startup Scripts
We have released updated startup scripts that include KubeVirt support; the most important change is the inclusion of the qemu-agent.
```
https://github.com/kasmtech/workspaces-autoscale-startup-scripts/blob/develop/latest/docker_agents/ubuntu.sh
```
The qemu-agent installation snippet is commented out by default in the startup script, so to use the script with KubeVirt you must first uncomment it.
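For reference, the uncommented section is roughly equivalent to installing and enabling the agent on an Ubuntu image (a sketch, not the exact script contents):
```Bash
# Install and enable the qemu guest agent so the hypervisor can query the VM's IP address and state
apt-get update
apt-get install -y qemu-guest-agent
systemctl enable --now qemu-guest-agent
```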
### Config Overrides
KASM generates VMs using a Kubernetes yaml manifest described by this API specification:
```
https://kubevirt.io/api-reference/main/definitions.html#_v1_virtualmachine
```
In the event that KASM providers do not expose a required feature, the provider configuration may be overridden.
In order to do this, the entire manifest must be stored in the provider `config_override`.
KASM will parse the manifest and attempt to update certain fields; the `metadata` will be updated so that the `name` field
contains a unique name, the `namespace` matches the namespace in the provider config, and the `labels` are updated to contain
various labels required for autoscale functionality. All other values will be preserved.
The `runStrategy` will be set to `Always` and the `hostname` will be set to match the unique name.
In order to support startup scripts, a `disk` with the following settings will be appended to the `disks`:
```
- name: config-drive-disk
cdrom:
bus: sata
readonly: true
```
This points to a `volume` that will be appended to the `volumes` with the following settings:
```
- name: config-drive-disk
cloudInitConfigDrive:
secretRef:
name: f'{name}-secret'
```
The manifest will be used to spawn multiple VMs, so unique names are necessary for certain resources such as PVCs.
To support this, the provider will replace any instance of `$KASM_NAME` with a unique name. To use this for multiple different
types of resources, append a suffix to the name, as in this suggested PVC example:
```
volumes:
- name: disk-0
persistentVolumeClaim:
claimName: $KASM_NAME-pvc
```
Again, because the manifest will be used to spawn multiple VMs, it is necessary to use a disk cloning method
such as the `dataVolume` feature of the Containerized Data Importer extension created by KubeVirt.
### Caveats
The k8s namespace for KASM resources is configured on the provider; it should not be updated while the provider is in use.
Doing so can result in unpredictable behavior and orphaned resources. If it is necessary to change the k8s namespace, a
new AutoScale config and provider should be created with the new namespace, and the old AutoScale config should be updated
to set the standby cores, GPUs, and memory to 0. This should allow new resources to transition to the new provider.
It is possible for orphaned k8s objects to exist for various reasons, such as power loss of the KASM server during VM creation.
Currently, these objects must be cleaned up manually.
The k8s objects that KASM creates are: virtualmachines, secrets and PVCs.
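Orphaned objects can be located and removed manually with `kubectl`; a sketch, assuming the provider namespace is `kasm`:
```Bash
# List the object types that KASM creates in the provider namespace
kubectl get virtualmachines,secrets,pvc -n kasm

# Delete a specific orphaned object once it is confirmed to be unused (the name is a placeholder)
kubectl delete virtualmachine <orphaned-vm-name> -n kasm
```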
The KASM kubevirt provider does not work out of the box with the following Kubernetes deployments:
- KIND: the default KIND deployment uses the local-path provisioner for storage, which does not support CDI cloning.
### KubeVirt Settings
A number of settings are required to be defined to use this functionality.
The KubeVirt settings appear in the Pool configuration when the feature is licensed.
The appropriate Kubernetes configuration options can be found by downloading the KubeConfig file provided by your Kubernetes installation.
```{include} /guide/compute/vm_providers/kubevirt.md
```
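The individual values the provider needs can be pulled from a KubeConfig file with `kubectl`; a sketch, assuming the first cluster and user entries in the file are the ones to use:
```Bash
# API server URL for the Host setting
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.server}'

# Cluster CA certificate for the SSL Certificate setting (base64 decoded)
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d

# Bearer token for the API Token setting (only present for token-based users)
kubectl config view --raw -o jsonpath='{.users[0].user.token}'
```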
### KubeVirt GKE Setup Example
This example assumes you have a GKE account, a Linux development environment, and an existing KASM deployment ([ref](../../install/single_server_install)).
The example will assume the following variables:
- cluster name `kasm`
- region `us-central1`
- zone `us-central1-c`
- machine-type `c3-standard-8`
- namespace `kasm`
- storage class name `kasm-storage`
- pvc name `kasm-ubuntu-focal`
- pvc size `25GiB`
- pvc image `focal-server-cloudimg-amd64.img`
These should be replaced with values more appropriate to your installation.
#### Ensure GKE is configured
- Install the gcloud console ([ref](https://cloud.google.com/sdk/docs/install)):
```Bash
curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-linux-x86_64.tar.gz
tar -xf google-cloud-cli-linux-x86_64.tar.gz
./google-cloud-sdk/install.sh -q --path-update true --command-completion true
. ~/.profile
```
- Initialize the gcloud console ([ref](https://cloud.google.com/sdk/docs/initializing)):
```Bash
gcloud init --no-launch-browser
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-c
```
- Enable the GKE engine API ([ref](https://cloud.google.com/kubernetes-engine/docs/how-to/nested-virtualization#before_you_begin)):
```Bash
gcloud services enable container.googleapis.com
```
- Create a cluster with nested virtualization support ([ref](https://cloud.google.com/kubernetes-engine/docs/how-to/nested-virtualization#enable-nested-virt)):
```Bash
gcloud container clusters create kasm \
--enable-nested-virtualization \
--node-labels=nested-virtualization=enabled \
--machine-type=c3-standard-8
```
- Install the kubectl gcloud component ([ref](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#install_kubectl)):
```Bash
gcloud components install kubectl
```
- Configure GKE kubectl authentication ([ref](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#install_plugin)):
```Bash
gcloud components install gke-gcloud-auth-plugin
gcloud container clusters get-credentials kasm \
    --zone=us-central1-c
```
- Create the KASM namespace:
```Bash
kubectl create namespace kasm
```
#### Install KubeVirt
```{note}
The current v1.3 release of KubeVirt introduced a bug preventing GKE support. You must install the v1.2.2 release.
```
- Install KubeVirt ([ref](https://kubevirt.io/user-guide/cluster_admin/installation/)):
```Bash
#export RELEASE=$(curl https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt)
export RELEASE=v1.2.2
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr.yaml
```
- Wait for KubeVirt to be ready. This command may time out multiple (2-3) times before returning successfully:
```Bash
kubectl -n kubevirt wait kv kubevirt --for condition=Available
```
#### Install the Containerized Data Importer extension
In order to support efficient cloning, KubeVirt requires the Containerized Data Importer (CDI) extension ([ref](https://github.com/kubevirt/containerized-data-importer)).
- Install the CDI extension:
```Bash
export VERSION=$(curl -s https://api.github.com/repos/kubevirt/containerized-data-importer/releases/latest | grep '"tag_name":' | sed -E 's/.*"([^"]+)".*/\1/')
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml
```
- Create a new storage class that uses the GKE CSI driver and has the `Immediate` volume binding mode, for example:
```Bash
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kasm-storage
provisioner: pd.csi.storage.gke.io
volumeBindingMode: Immediate
EOF
```
- Build a local kubeconfig for Kasm to use. This assumes the cluster CA certificate has been saved to `tmp.deploy.ca.crt` and that `KUBE_API_EP`, `KUBE_SA_NAME`, and `KUBE_API_TOKEN` hold the cluster API endpoint, the service account name, and an API token for that service account:
```Bash
touch $HOME/local.cfg
export KUBECONFIG=$HOME/local.cfg
kubectl config set-cluster local --server=https://$KUBE_API_EP --certificate-authority=tmp.deploy.ca.crt --embed-certs=true
kubectl config set-credentials $KUBE_SA_NAME --token=$KUBE_API_TOKEN
kubectl config set-context local --cluster local --user $KUBE_SA_NAME
kubectl config use-context local
```
- Validate your kubeconfig works
```Bash
kubectl version
```
It should display both the client and server versions. If it does not, retrieve the current config used by kubectl to ensure it is using the correct config:
```Bash
kubectl config view
```
Ensure that it is using the local settings you generated and not an existing GKE configuration.
#### Upload a PVC
The `virtctl` tool can be used to upload a VM image. Both the `raw` and `qcow2` formats are supported. The image should be cloud-ready, with cloud-init configured.
- Download and install the `virtctl` tool:
```Bash
VERSION=$(kubectl get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.observedKubeVirtVersion}")
ARCH=$(uname -s | tr A-Z a-z)-$(uname -m | sed 's/x86_64/amd64/') || windows-amd64.exe
echo ${ARCH}
curl -L -o virtctl https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-${ARCH}
chmod +x virtctl
sudo install virtctl /usr/local/bin
```
- Expose the CDI Upload Proxy by executing the following command in another terminal:
```Bash
kubectl -n cdi port-forward service/cdi-uploadproxy 8443:443
```
- Use the `virtctl` tool to upload the VM image:
```Bash
virtctl image-upload pvc kasm-ubuntu-focal --uploadproxy-url=https://localhost:8443 --size=25Gi --image-path=./focal-server-cloudimg-amd64.img --insecure -n kasm
```
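- Once the upload completes, confirm the PVC was created and is bound:
```Bash
kubectl get pvc kasm-ubuntu-focal -n kasm
```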
#### Ensure KASM is configured
- Configure KASM
- Add a license
- Set the default zone upstream address to the address of the KASM host
- Add a Pool
- `Name` KubeVirt Pool
- `Type` Docker Agent
- Add an Auto-Scale config
- `Name` KubeVirt AutoScale
- `AutoScale Type` Docker Agent
- `Pool` KubeVirt Pool
- `Deployment Zone` default
- `Standby Cores` 4
- `Standby GPUs` 1
- `Standby Memory` 4000
- `Downscale Backoff` 600
- `Agent Cores Override` 4
- `Agent GPUs Override` 1
- `Agent Memory Override` 4
- Create a new VM Provider
- `Provider` KubeVirt
- `Name` KubeVirt Provider
- `Max Instances` 10
- `Host` paste server URI from kubeconfig
  - `SSL Certificate` paste certificate-authority-data from kubeconfig
- `API Token` paste token from kubeconfig
- `VM Namespace` kasm
- `VM Public SSH Key` paste user public ssh key
- `Cores` 4
- `Memory` 4
- `Disk Source` kasm-ubuntu-focal
- `Disk Size` 30
- `Interface Type` bridge
- `Network Name` default
- `Network Type` pod
- `Startup Script` paste ubuntu docker agent startup script