
Maintenance

ehfd committed 4 years ago, commit 5ad497583e
3 changed files with 32 additions and 18 deletions
  1. Dockerfile (+1 -1)
  2. README.md (+24 -11)
  3. bootstrap.sh (+7 -6)

+ 1 - 1
Dockerfile

@@ -80,7 +80,7 @@ RUN apt-get install -y \
 # Driver version must be equal to the host
 #ARG BASE_URL=https://us.download.nvidia.com/tesla
 ARG BASE_URL=http://us.download.nvidia.com/XFree86/Linux-x86_64
-ENV DRIVER_VERSION 450.80.02
+ENV DRIVER_VERSION 450.66
 RUN cd /tmp && \
     curl -fSsl -O $BASE_URL/$DRIVER_VERSION/NVIDIA-Linux-x86_64-$DRIVER_VERSION.run && \
     sh NVIDIA-Linux-x86_64-$DRIVER_VERSION.run -x && \
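
The driver version baked into the image must match the host driver exactly. A minimal check to run on the host (assuming the NVIDIA driver and `nvidia-smi` are installed there) before editing `DRIVER_VERSION` and rebuilding:

```
# Print the host's NVIDIA driver version; DRIVER_VERSION above must be set to exactly this value
nvidia-smi --query-gpu=driver_version --format=csv,noheader
```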

+ 24 - 11
README.md

@@ -1,16 +1,29 @@
 # docker-nvidia-glx-desktop
 
-MATE Desktop container supporting GLX/Vulkan for NVIDIA GPUs by spawning its own X Server and noVNC WebSocket interface instead of using the host X server. Does not require `/tmp/.X11-unix` host sockets or any non-conventional/dangerous host setup.
-
-Use [docker-nvidia-egl-desktop](https://github.com/ehfd/docker-nvidia-egl-desktop) for a MATE Desktop container that directly accesses NVIDIA GPUs without using an X Server (without Vulkan support).
-
-**Change the NVIDIA GPU driver version inside the container to be equal to the host and build your own Dockerfile.** Change **bootstrap.sh** if you are using a headless GPU like Tesla. Corresponding container toolkit on the host for allocating GPUs should also be set up.
-
-Connect to the spawned noVNC WebSocket instance with a browser in port 5901, no VNC client required (password for the default user is 'vncpasswd').
-
-Note: Requires access to at least one **/dev/ttyX** device. Check out [k8s-hostdev-plugin](https://github.com/bluebeach/k8s-hostdev-plugin) for provisioning this in Kubernetes clusters without privileged access.
-
-For Docker this configuration is tested to work but the container will have potentially dangerous privileged access:
+MATE Desktop container supporting GLX/Vulkan for NVIDIA GPUs by spawning its own
+X Server and noVNC WebSocket interface instead of using the host X server. Does
+not require `/tmp/.X11-unix` host sockets or any non-conventional/dangerous host
+setup.
+
+Use
+[docker-nvidia-egl-desktop](https://github.com/ehfd/docker-nvidia-egl-desktop)
+for a MATE Desktop container that directly accesses NVIDIA GPUs without using an
+X Server (without Vulkan support).
+
+**Change the NVIDIA GPU driver version inside the container to be equal to the
+host's and build your own Dockerfile.** Also change **bootstrap.sh** if you are
+using a headless GPU such as a Tesla. The corresponding container toolkit for
+allocating GPUs should also be set up on the host.
+
+Connect to the spawned noVNC WebSocket instance with a browser on port 5901; no
+VNC client is required (the password for the default user is 'vncpasswd').
+
+Note: Requires access to at least one **/dev/ttyX** device. Check out
+[k8s-hostdev-plugin](https://github.com/bluebeach/k8s-hostdev-plugin) for
+provisioning this in Kubernetes clusters without privileged access.
+
+For Docker, this configuration is tested to work, but the container will have
+potentially unsafe privileged access:
 
 ```
 docker run --gpus 1 --privileged -it -e SIZEW=1920 -e SIZEH=1080 -e SHARED=TRUE -e VNCPASS=vncpasswd -p 5901:5901 ehfd/nvidia-glx-desktop:latest

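After setting `DRIVER_VERSION` in the Dockerfile to match the host, a local build and run could look like the sketch below (the image tag `nvidia-glx-desktop:local` is just an example name):

```
# Build the image with the host-matching driver version, then run it as in the command above
docker build -t nvidia-glx-desktop:local .
docker run --gpus 1 --privileged -it -e SIZEW=1920 -e SIZEH=1080 -e SHARED=TRUE -e VNCPASS=vncpasswd -p 5901:5901 nvidia-glx-desktop:local
```
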
+ 7 - 6
bootstrap.sh

@@ -10,8 +10,8 @@ HEX_ID=$(sudo nvidia-smi --query-gpu=pci.bus_id --id="$(echo "$NVIDIA_VISIBLE_DE
 IFS=":." ARR_ID=("$HEX_ID")
 unset IFS
 BUS_ID=PCI:$((16#${ARR_ID[1]})):$((16#${ARR_ID[2]})):$((16#${ARR_ID[3]}))
-# Leave out --use-display-device=None if GPU is headless such as Tesla and download links of such GPU drivers in Dockerfile should also be changed
-sudo nvidia-xconfig --virtual="${SIZEW}x$SIZEH" --allow-empty-initial-configuration --enable-all-gpus --no-use-edid-dpi --busid="$BUS_ID" --use-display-device=None
+# Leave out --use-display-device=None if the GPU is headless (such as a Tesla); the driver download links in the Dockerfile should also be changed for such GPUs
+sudo nvidia-xconfig --virtual="${SIZEW}x${SIZEH}" --allow-empty-initial-configuration --enable-all-gpus --no-use-edid-dpi --busid="$BUS_ID" --use-display-device=None
 
 if [ "x$SHARED" == "xTRUE" ]; then
   export SHARESTRING="-shared"
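
For illustration, here is the intended BUS_ID conversion with a made-up PCI address: `nvidia-smi` reports `pci.bus_id` as hex `domain:bus:device.function`, while the Xorg `BusID` wants the decimal form.

```
# Hypothetical pci.bus_id reported by nvidia-smi for the first visible GPU (hex fields)
HEX_ID="00000000:2D:00.0"
# Splitting on ':' and '.' and converting each hex field to decimal gives the Xorg BusID
echo "PCI:$((16#2D)):$((16#00)):$((16#0))"   # -> PCI:45:0:0
```
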
@@ -20,7 +20,7 @@ fi
 shopt -s extglob
 for TTY in /dev/tty+([0-9]); do
   if [ -w "$TTY" ]; then
-    /usr/bin/X tty"$(echo "$TTY" | grep -Eo '[0-9]+$')" :0 &
+    Xorg tty"$(echo "$TTY" | grep -Eo '[0-9]+$')" :0 &
     break
   fi
 done
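
The loop above starts Xorg on the first writable virtual terminal it finds. A minimal sketch for checking which ttys are writable inside the container (useful when wiring up k8s-hostdev-plugin or the `--privileged` run):

```
# List the virtual terminals the container user can write to; the X server needs at least one
for TTY in /dev/tty[0-9]*; do
  [ -w "$TTY" ] && echo "writable: $TTY"
done
```
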
@@ -33,10 +33,11 @@ sleep 1
 sleep 1
 
 export DISPLAY=:0
-if vulkaninfo | grep "$(echo "$NVIDIA_VISIBLE_DEVICES" | cut -d ',' -f1 | cut -c 5-)" | grep -q ^; then
+UUID_CUT=$(sudo nvidia-smi --query-gpu=uuid --id="$(echo "$NVIDIA_VISIBLE_DEVICES" | cut -d ',' -f1)" --format=csv | sed -n 2p | cut -c 5-)
+if vulkaninfo | grep "$UUID_CUT" | grep -q ^; then
   VK=0
   while true; do
-    if ENABLE_DEVICE_CHOOSER_LAYER=1 VULKAN_DEVICE_INDEX=$VK vulkaninfo | grep "$(echo "$NVIDIA_VISIBLE_DEVICES" | cut -d ',' -f1 | cut -c 5-)" | grep -q ^; then
+    if ENABLE_DEVICE_CHOOSER_LAYER=1 VULKAN_DEVICE_INDEX=$VK vulkaninfo | grep "$UUID_CUT" | grep -q ^; then
       export ENABLE_DEVICE_CHOOSER_LAYER=1
       export VULKAN_DEVICE_INDEX="$VK"
       break
@@ -44,7 +45,7 @@ if vulkaninfo | grep "$(echo "$NVIDIA_VISIBLE_DEVICES" | cut -d ',' -f1 | cut -c
     VK=$((VK + 1))
   done
 else
-  echo "Vulkan not available for current GPU."
+  echo "Vulkan not available for the current GPU."
 fi
 
 mate-session &
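
The Vulkan block above greps `vulkaninfo` output for the GPU's UUID as reported by `nvidia-smi`, minus its `GPU-` prefix, and increments `VULKAN_DEVICE_INDEX` until the matching device is found. A small sketch of the UUID trimming with a made-up UUID (the script assumes `vulkaninfo` prints device UUIDs in this same form):

```
# Hypothetical value row from `nvidia-smi --query-gpu=uuid --format=csv` (the line after the header)
UUID="GPU-5d3b7a4e-1234-5678-9abc-def012345678"
# cut -c 5- drops the leading "GPU-", leaving the string the script greps for in vulkaninfo output
echo "$UUID" | cut -c 5-    # -> 5d3b7a4e-1234-5678-9abc-def012345678
```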