Connection to Session Hangs - Requesting Kasm¶
When troubleshooting this issue, always test by creating new sessions after making changes. Resuming a session may not have the changes applied.
If the Kasm Workspaces deployment is internet accessible, consider testing access via a browser using the Kasm Live Demo to help rule out client-side issues. You can sign up for the free demo at https://kasmweb.com
Disable Browser Extensions
Some browser extensions have been known to cause conflicts when making connections to Kasm sessions. Disable any browser extensions/add-ons that may be enabled. Avoid using “incognito” or “private-browsing” modes while troubleshooting.
Corporate Proxies - WebSockets
Some corporate proxies/firewalls and other security appliances may not support proxying WebSockets, which is needed to establish a connection to Kasm sessions. You can verify your environment supports WebSockets by visiting: https://www.websocket.org/echo.html .
You may see the UI cycle between “Requesting Kasm…” and “Connecting…” repeatedly. The following errors will repeat in the Console tab of the browser’s Developer Tools.
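As a quick check from the command line, you can send a WebSocket upgrade request with curl and watch whether the `Upgrade` headers survive the proxy. This is a general sketch, not a Kasm-specific diagnostic: replace the placeholder address with your own deployment, and note that a `101 Switching Protocols` response is only expected on paths that actually speak WebSocket. The key signal is whether the request times out or is rewritten by an intermediary.

```shell
# Probe a WebSocket endpoint through the corporate proxy.
# The Sec-WebSocket-Key below is the example key from RFC 6455.
curl -i -N --max-time 10 \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  https://<your.kasm.server>/
```

If the response never arrives, or the `Upgrade`/`Connection` headers are stripped, the proxy or security appliance in the path is the likely cause.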
Compatible Docker Version
Ensure the system is running compatible versions of Docker and Docker Compose. See System Requirements for more details.
Reverse Proxy Issues
If running Kasm behind a reverse proxy such as NGINX, please consult the Reverse Proxy Guide. The reverse proxy used must support WebSockets, and the Zone configuration must also be updated accordingly.
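For NGINX specifically, the location block proxying to Kasm must forward the WebSocket upgrade headers. Below is a minimal sketch of the WebSocket-relevant directives only; the upstream address is a placeholder, and the Reverse Proxy Guide remains the authoritative source for the full recommended configuration.

```nginx
location / {
    proxy_pass https://<kasm.server.ip>:443;

    # Required for WebSocket proxying
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    # Preserve the original host and client address
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;

    # Kasm sessions hold long-lived connections; raise the read timeout
    proxy_read_timeout 1800s;
}
```

Without `proxy_http_version 1.1` and the `Upgrade`/`Connection` headers, NGINX will silently downgrade the connection and the session will hang at “Connecting…”.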
Name Resolution and General Connectivity
If accessing Kasm Workspaces via DNS name, ensure the Kasm Workspaces services can properly resolve the same address. The Kasm services must also be able to connect to user-facing addresses. During installation, docker will create several local sub interfaces and Kasm services are assigned addresses on those bridged networks. Ensure firewalls or security groups are not blocking access.
Conduct the following tests from the Kasm Workspaces server to ensure name resolution and general connectivity are working properly. If these tests fail, correct the DNS, networking, or firewall issues.
```shell
# Test using the URL used to connect to the Kasm UI
sudo docker exec -it kasm_proxy wget https://<your.kasm.server>:443

# Test using the IP of the Kasm Workspaces server if your deployment is
# using a reverse proxy. If Kasm Workspaces was installed using a
# non-standard port, specify that port here.
sudo docker exec -it kasm_proxy wget https://<your.kasm.server.ip>:443
```
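If the tests above fail, it can help to check name resolution in isolation from inside the proxy container. This is a general Docker troubleshooting step, not a Kasm-specific command; the hostname is a placeholder, and which lookup tool is available depends on what the container image ships.

```shell
# Resolve the Kasm hostname from inside the kasm_proxy container.
# If this fails while resolution works on the host, the container's
# DNS configuration (handed down by Docker) is the likely culprit.
sudo docker exec -it kasm_proxy nslookup <your.kasm.server>
```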
If there is a NAT / DNS issue that will prevent the Kasm Workspaces services from ever contacting the user-facing address, then the address used by the services can be updated with the Kasm Workspaces server IP address.
This can be done by navigating to the Zones page in the Kasm Admin UI, editing the Default zone with the pencil icon, and setting the Upstream Auth Address to the IP of the Kasm Workspaces server.
UFW on Ubuntu and Debian Systems
Although Docker adds iptables rules allowing external connections to the ‘kasm_proxy’ container, UFW’s default rules will block connections from ‘kasm_default_network’ to the ‘kasm_proxy’ container.
Ensure that there is a firewall rule added to allow these connections.
```shell
# Test to see if UFW is installed and enabled
sudo ufw status

# If UFW is active then allow https connections
sudo ufw allow https
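If you would rather scope the rule to the Docker bridge than open https globally, you can look up the subnet Docker assigned to kasm_default_network and allow only that range. This is a sketch assuming a default installation (network name `kasm_default_network`, port 443); adjust both if your deployment differs.

```shell
# Find the subnet Docker assigned to the Kasm bridge network
SUBNET=$(sudo docker network inspect kasm_default_network \
  --format '{{ (index .IPAM.Config 0).Subnet }}')

# Allow connections from that subnet to the proxy port
sudo ufw allow from "$SUBNET" to any port 443 proto tcp
```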
Cloud Security Groups
If the security group rule in the AWS / Azure / OCI console that allows https into the webapp restricts connections by IP address (the source IP is not “0.0.0.0/0”), then an additional rule must be added to allow the server to connect to itself.
This is done by creating an additional rule to the security group allowing https connections from the same security group.
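With the AWS CLI, for example, a self-referencing rule can be added by using the security group’s own ID as the source. The group ID below is a placeholder; Azure NSGs and OCI security lists have equivalent constructs in their own tooling.

```shell
# Allow members of the security group (including the Kasm server itself)
# to reach https on other members of the same group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --source-group sg-0123456789abcdef0
```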
GPU Passthrough Support¶
Agents table shows 0 GPUs
Workspaces requires that the NVIDIA Container Toolkit and GPU drivers be installed and functional. Be sure to restart the Docker daemon after installing the NVIDIA container runtime. If the Agents view (first screenshot below) shows 0 GPUs but the details list all GPUs (second screenshot below), the agent detected the GPU hardware but did not detect that the NVIDIA container runtime was installed.
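One way to confirm whether the Docker daemon itself sees the NVIDIA runtime (as opposed to the GPU hardware) is to inspect its registered runtimes. This is a general Docker check, not Kasm-specific:

```shell
# List the runtimes the Docker daemon knows about.
# "nvidia" should appear alongside "runc" if the NVIDIA container
# runtime is registered.
sudo docker info --format '{{ json .Runtimes }}'

# If it is missing, (re)install the NVIDIA Container Toolkit and
# restart the daemon so the new runtime is picked up.
sudo systemctl restart docker
```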
No resources are available
When users attempt to provision a container they get the following error message.
There are a number of reasons for this error message to occur. Generally, it means that the image had requirements that could not be fulfilled by any of the available agents. The requirements include CPUs, RAM, GPUs, Docker networks, and zone restrictions. Check the Logging view under the Admin panel to find the cause. The example below shows that the image was set to require 6 GPUs, but no agent with at least 6 GPUs was available.
nvidia-smi errors
The nvidia-smi tool should automatically be available inside the container. Before troubleshooting anything with Workspaces, ensure nvidia-smi works on the host directly. Next, ensure that you can manually run a container on the host with the NVIDIA runtime set and successfully run the nvidia-smi command from within it. If nvidia-smi works both on the host and in a manually run container, but not in Kasm sessions, open a support issue with Kasm.
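The two host-level checks described above can be run as follows. The CUDA image tag is only an example; pick any tag compatible with your installed driver version.

```shell
# 1. Verify the driver works on the host
nvidia-smi

# 2. Verify a container can reach the GPU through the NVIDIA runtime
sudo docker run --rm --runtime=nvidia --gpus all \
  nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

If step 2 fails while step 1 succeeds, the problem lies in the NVIDIA Container Toolkit installation rather than the driver or Workspaces.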