
Maintenance

ehfd committed 4 years ago
parent commit
113533f1c7
4 changed files with 24 additions and 17 deletions
  1. Dockerfile (+1 -2)
  2. README.md (+5 -5)
  3. bootstrap.sh (+16 -10)
  4. xgl.yaml (+2 -0)

+ 1 - 2
Dockerfile

@@ -77,11 +77,10 @@ RUN apt-get install -y \
         rm -rf /var/lib/apt/lists/*
 
 # Install NVIDIA drivers, including X graphic drivers by omitting --x-{prefix,module-path,library-path,sysconfig-path}
-# Driver version must be equal to host's driver
+# Driver version must match the host's driver version
 #ARG BASE_URL=https://us.download.nvidia.com/tesla
 ARG BASE_URL=http://us.download.nvidia.com/XFree86/Linux-x86_64
 ENV DRIVER_VERSION 450.66
-
 RUN cd /tmp && \
     curl -fSsl -O $BASE_URL/$DRIVER_VERSION/NVIDIA-Linux-x86_64-$DRIVER_VERSION.run && \
     sh NVIDIA-Linux-x86_64-$DRIVER_VERSION.run -x && \
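The download URL assembled from `BASE_URL` and `DRIVER_VERSION` in the `RUN` step above can be sanity-checked outside the build. A minimal sketch using the values hard-coded in this Dockerfile (the commented-out Tesla `BASE_URL` would be substituted for headless data-center GPUs):

```shell
# Reconstruct the driver download URL exactly as the Dockerfile's RUN step does.
# Values are the ones hard-coded in this Dockerfile; swap in the Tesla BASE_URL
# (https://us.download.nvidia.com/tesla) for headless data-center GPUs.
BASE_URL=http://us.download.nvidia.com/XFree86/Linux-x86_64
DRIVER_VERSION=450.66
echo "$BASE_URL/$DRIVER_VERSION/NVIDIA-Linux-x86_64-$DRIVER_VERSION.run"
```

Printing the URL first (or fetching it with `curl -fI`) is a quick way to confirm the chosen `DRIVER_VERSION` actually exists upstream before starting a build.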

+ 5 - 5
README.md

@@ -1,10 +1,10 @@
 # docker-nvidia-glx-desktop
 
-MATE Desktop container with GLX support for NVIDIA GPUs by spawning its own X Server and noVNC WebSocket interface instead of using the host X server. Does not require `/tmp/.X11-unix` host sockets or any non-conventional/dangerous host setup.
+MATE Desktop container supporting GLX/Vulkan for NVIDIA GPUs by spawning its own X Server and noVNC WebSocket interface instead of using the host X server. Does not require `/tmp/.X11-unix` host sockets or any non-conventional/dangerous host setup.
 
-Use [docker-nvidia-egl-desktop](https://github.com/ehfd/docker-nvidia-egl-desktop) for a more stable MATE Desktop container that directly accesses NVIDIA GPUs without using an X Server.
+Use [docker-nvidia-egl-desktop](https://github.com/ehfd/docker-nvidia-egl-desktop) for a more stable MATE Desktop container that directly accesses NVIDIA GPUs without using an X Server (but without Vulkan support).
 
-**Change the NVIDIA GPU driver version to be same as the host and build your own Dockerfile.** Change **bootstrap.sh** if you are using a headless GPU like Tesla. Corresponding container toolkit on the host for allocating GPUs should also be set up.
+**Change the NVIDIA GPU driver version inside the container to match the host's, then build your own Dockerfile.** Change **bootstrap.sh** if you are using a headless GPU such as Tesla. The corresponding NVIDIA container toolkit for allocating GPUs should also be set up on the host.
 
 Connect to the spawned noVNC WebSocket instance with a browser in port 5901, no VNC client required (password for the default user is 'vncpasswd').
 
@@ -13,11 +13,11 @@ Note: Requires access to at least one **/dev/ttyX** device. Check out [k8s-hostd
 For Docker this configuration is tested to work but the container will have potentially dangerous privileged access:
 
 ```
-docker run --gpus 1 --privileged -it -e SIZEW=1920 -e SIZEH=1080 -e VNCPASS=vncpasswd -p 5901:5901 ehfd/nvidia-glx-desktop:latest
+docker run --gpus 1 --privileged -it -e SIZEW=1920 -e SIZEH=1080 -e SHARED=TRUE -e VNCPASS=vncpasswd -p 5901:5901 ehfd/nvidia-glx-desktop:latest
 ```
 
 The below may also work without privileged access but is untested and may be buggy:
 
 ```
-docker run --gpus 1 --device=/dev/tty0:rw -it -e SIZEW=1920 -e SIZEH=1080 -e VNCPASS=vncpasswd -p 5901:5901 ehfd/nvidia-glx-desktop:latest
+docker run --gpus 1 --device=/dev/tty0:rw -it -e SIZEW=1920 -e SIZEH=1080 -e SHARED=TRUE -e VNCPASS=vncpasswd -p 5901:5901 ehfd/nvidia-glx-desktop:latest
 ```
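The `SHARED=TRUE` variable added to both commands controls whether several VNC clients may attach to the display at once. A standalone sketch of the mapping this commit adds to bootstrap.sh, from the environment variable to x11vnc's `-shared` flag:

```shell
# Sketch of the SHARED handling introduced in bootstrap.sh by this commit:
# SHARED=TRUE maps to x11vnc's -shared flag, which lets multiple VNC
# clients stay connected to the same display simultaneously.
SHARED=TRUE
SHARESTRING=""
if [ "x${SHARED}" = "xTRUE" ]; then
    SHARESTRING="-shared"
fi
echo "extra x11vnc flags: ${SHARESTRING}"
```

Leaving `SHARED` unset keeps `SHARESTRING` empty, so x11vnc falls back to its default behavior of disconnecting an existing viewer when a new one attaches.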

+ 16 - 10
bootstrap.sh

@@ -5,12 +5,18 @@ trap "echo TRAPed signal" HUP INT QUIT KILL TERM
 
 echo "user:${VNCPASS}" | sudo chpasswd
 
-# NVIDIA driver inside the container must be same version as host.
-HEX_ID=$(sudo nvidia-smi --query-gpu=pci.bus_id --id=${NVIDIA_VISIBLE_DEVICES} --format=csv | tail -n1)
-IFS=":." ARR_ID=($HEX_ID); unset IFS
+# NVIDIA driver version inside the container (set in the Dockerfile) must match the host's
+HEX_ID=$(sudo nvidia-smi --query-gpu=pci.bus_id --id=$(echo $NVIDIA_VISIBLE_DEVICES | cut -d ',' -f1) --format=csv | sed -n 2p)
+IFS=":." ARR_ID=($HEX_ID)
+unset IFS
 BUS_ID=PCI:$((16#${ARR_ID[1]})):$((16#${ARR_ID[2]})):$((16#${ARR_ID[3]}))
-# Leave out --use-display-device=None if GPU is headless such as Tesla and download links of such GPU drivers in Dockerfile should also be different
-sudo nvidia-xconfig -a --virtual=${SIZEW}x${SIZEH} --allow-empty-initial-configuration --enable-all-gpus --busid=$BUS_ID --use-display-device=None
+# Leave out --use-display-device=None if the GPU is headless (such as Tesla); the driver download links in the Dockerfile should also be changed accordingly
+sudo nvidia-xconfig --virtual=${SIZEW}x${SIZEH} --allow-empty-initial-configuration --enable-all-gpus --no-use-edid-dpi --busid=$BUS_ID --use-display-device=None
+
+if [ "x${SHARED}" == "xTRUE" ]
+then
+    export SHARESTRING="-shared"
+fi
 
 shopt -s extglob
 for TTY in /dev/tty+([0-9])
@@ -23,14 +29,14 @@ fi
 done
 sleep 1
 
-x11vnc -display :0 -passwd $VNCPASS -forever -rfbport 5900 &
-sleep 2
-
 pulseaudio --start
-sleep 2
+sleep 1
+
+x11vnc -display :0 -passwd $VNCPASS -forever -ncache 10 -ncache_cr -xkb -rfbport 5900 $SHARESTRING &
+sleep 1
 
 /opt/noVNC/utils/launch.sh --vnc localhost:5900 --listen 5901 &
-sleep 2
+sleep 1
 
 export DISPLAY=:0
 mate-session &
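The bus-ID handling near the top of this hunk converts the hexadecimal `domain:bus:device.function` string reported by nvidia-smi into the decimal `PCI:bus:device:function` form that `nvidia-xconfig --busid` expects. A standalone sketch with a sample bus ID (the sample value is hypothetical; the real script reads it from nvidia-smi):

```shell
# Convert an nvidia-smi PCI bus ID (hex domain:bus:device.function) into the
# decimal PCI:bus:device:function string expected by nvidia-xconfig --busid.
# The sample value below is hypothetical; bootstrap.sh queries nvidia-smi.
HEX_ID="00000000:0B:00.0"
IFS=":." read -r -a ARR_ID <<< "$HEX_ID"   # split on ':' and '.' into an array
BUS_ID="PCI:$((16#${ARR_ID[1]})):$((16#${ARR_ID[2]})):$((16#${ARR_ID[3]}))"
echo "$BUS_ID"   # → PCI:11:0:0
```

The `16#` prefix in bash arithmetic performs the hex-to-decimal conversion, so bus `0B` becomes `11` without an external tool.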

+ 2 - 0
xgl.yaml

@@ -23,6 +23,8 @@ spec:
           value: "1920"
         - name: SIZEH
           value: "1080"
+        - name: SHARED
+          value: "TRUE"
         - name: VNCPASS
           value: "vncpasswd"
 #          valueFrom: