
Minor improvements

ehfd 4 years ago
parent
commit
cad675fd56
4 changed files with 20 additions and 15 deletions
  1. Dockerfile (+3 −1)
  2. README.md (+9 −7)
  3. bootstrap.sh (+5 −1)
  4. xgl.yaml (+3 −6)

+ 3 - 1
Dockerfile

@@ -9,6 +9,7 @@ ARG DEBIAN_FRONTEND=noninteractive
 ENV NVIDIA_DRIVER_CAPABILITIES all
 
 # Default options (password is 'vncpasswd')
+ENV TZ UTC
 ENV VNCPASS vncpasswd
 ENV SIZEW 1920
 ENV SIZEH 1080
@@ -129,7 +130,8 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
     usermod -a -G adm,audio,cdrom,dialout,dip,fax,floppy,input,lp,lpadmin,netdev,plugdev,render,scanner,ssh,sudo,tape,tty,video,voice user && \
     echo "user ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers && \
     chown -R user:user /home/user && \
-    echo "user:${VNCPASS}" | chpasswd
+    echo "user:${VNCPASS}" | chpasswd && \
+    ln -snf "/usr/share/zoneinfo/$TZ" /etc/localtime && echo "$TZ" | tee /etc/timezone > /dev/null
 
 COPY bootstrap.sh /etc/bootstrap.sh
 RUN chmod 755 /etc/bootstrap.sh
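As a usage sketch (the zone name is illustrative), the new **TZ** default baked into the image can be overridden when the container is started, since bootstrap.sh re-applies the /etc/localtime link from the runtime value:

```
# Hypothetical run: any IANA zone name under /usr/share/zoneinfo is valid.
docker run --gpus 1 --device=/dev/tty63:rw -it \
  -e TZ=Asia/Seoul \
  -p 5901:5901 ehfd/nvidia-glx-desktop:latest
```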

+ 9 - 7
README.md

@@ -2,21 +2,23 @@
 
 MATE Desktop container supporting GLX/Vulkan for NVIDIA GPUs by spawning its own X Server and noVNC WebSocket interface instead of using the host X server. Does not require `/tmp/.X11-unix` host sockets or host configuration. Designed for Kubernetes.
 
-Use [docker-nvidia-egl-desktop](https://github.com/ehfd/docker-nvidia-egl-desktop) for a MATE Desktop container that directly accesses NVIDIA GPUs without using an X11 Server, and is also compatible with Kubernetes (but without Vulkan support unlike this container).
+Use [docker-nvidia-egl-desktop](https://github.com/ehfd/docker-nvidia-egl-desktop) for a MATE Desktop container that directly accesses NVIDIA GPUs without using an X11 Server (but without Vulkan support, unlike this container).
 
-Requires reasonably recent NVIDIA GPU drivers and corresponding container toolkits to be set up on the host for allocating GPUs. GPUs should have one or more DVI/HDMI/DisplayPort digital video ports instead of having only analog video ports (very ancient GPUs), although the ports to be used are recommended NOT to be connected with an actual monitor (manually specify a video port that is not connected to a monitor in **VIDEO_PORT**). Since this container fakes the driver to simulate being plugged in to a monitor while it actually does not, make sure the resolutions specified with the environment variables **SIZEW** and **SIZEH** are within the maximum supported by the GPU (1920 x 1200 at 60 hz is the maximum that should work on default configuration without DisplayPort for any recent enough GPUs, and the sizes between this and the GPU maximum size will be functional if the port is set to DisplayPort). The environment variable **VIDEO_PORT** can override which video port is used (defaults to DFP, the first unoccupied port detected in the driver), and overriding **VIDEO_PORT** to an unplugged DisplayPort (for example numbered like DP-0, DP-1, and so on) is recommended for resolutions above 1920 x 1200, because of some driver restrictions applied when the default is a DVI or HDMI port. If all your GPUs are not connected to any monitors and have DisplayPort, simply setting **VIDEO_PORT** to DP-0 is recommended (but not set as default because of legacy compatibility reasons).
+Requires reasonably recent NVIDIA GPU drivers and the corresponding container toolkit to be set up on the host for allocating GPUs. GPUs should have one or more DVI-D/HDMI/DisplayPort digital video ports instead of only analog video ports (which means very ancient GPUs). However, the ports to be used are recommended NOT to be connected to an actual monitor, unless the user wants the remote desktop screen to be shown on that monitor. If you need to connect a real monitor to the X server session spawned by the container, set **VIDEO_PORT** to the video port connected to the monitor; otherwise, manually specify a video port that is not connected to a monitor. **VIDEO_PORT** identifiers and their connection states can be obtained by typing `xrandr -q` when the `$DISPLAY` environment variable is set. **Do not start more than one X server for one GPU. Use a separate GPU (or none at all) for the host X server, and do not make it available to the containers.**
 
-The Quadro M4000 (Maxwell) was the earliest GPU with physical video ports to be tested. At least Maxwell or after generation GPUs are confirmed to work, perhaps even earlier ones as long as a supported driver is installed.
+Since this container fakes the driver into simulating that a monitor is plugged in while none actually is, make sure the resolutions specified with the environment variables **SIZEW** and **SIZEH** are within the maximum size supported by the GPU. The environment variable **VIDEO_PORT** can override which video port is used (it defaults to DFP, the first port detected in the driver). Overriding **VIDEO_PORT** to an unplugged DisplayPort (numbered like DP-0, DP-1, and so on) is recommended for resolutions above 1920 x 1200, because of driver restrictions that apply when the default is a DVI-D or HDMI port. The maximum size that should work in all cases is 1920 x 1200 at 60 Hz, mainly when the default DFP **VIDEO_PORT** identifier does not resolve to DisplayPort. Resolutions between 1920 x 1200 and the maximum size each port supports per the GPU specifications are possible if the port is set to DisplayPort, or when a real monitor or dummy plug is connected to any other type of display port, including DVI-D and HDMI. If all GPUs have DisplayPort and are not connected to any monitors, simply setting **VIDEO_PORT** to DP-0 is recommended (not set as the default for legacy compatibility reasons).
 
-Tesla GPUs seem to only support resolutions of up to around 2560 x 1600 (**VIDEO_PORT** has to be kept to DFP instead of changing to DP-0). The Tesla K40 (Kepler) GPU did not support RandR (required for some graphical applications using SDL). Other Kepler Tesla GPUs except maybe the GRID K1 and K2 GPUs are also unlikely to support RandR, while the desktop itself can start up without other issues. RandR support probably starts from Maxwell Tesla GPUs. Other tested Tesla GPUs (V100, T4, A40, A100) supported applications that consumer GPUs supported. However, the performances were not better than consumer GPUs that usually cost a fraction of Tesla GPUs, and the maximum supported resolutions were even lower.
+The Quadro M4000 (Maxwell) was the earliest GPU with physical video ports to be tested. GPUs of the Maxwell generation or later are confirmed to work, and perhaps even earlier ones do as well, as long as a supported driver is installed.
 
-Container startup could take some time at first startup as it automatically installs NVIDIA drivers with the same version as the host.
+Datacenter GPUs (Tesla) seem to only support resolutions of up to around 2560 x 1600 (**VIDEO_PORT** has to be kept at DFP instead of changing to DP-0 or other DisplayPorts). The K40 (Kepler) GPU did not support RandR (required for some graphical applications using SDL). Other Kepler-generation Datacenter GPUs (except perhaps the GRID K1 and K2 GPUs with vGPU capabilities) are also unlikely to support RandR, while the remote desktop itself is otherwise functional. RandR support probably starts with Maxwell Datacenter GPUs. Other tested Datacenter GPUs (V100, T4, A40, A100) likely support all graphical applications that consumer GPUs support. However, the performance was no better than that of consumer GPUs, which usually cost a fraction of Datacenter GPUs, and the maximum supported resolutions were even lower.
 
-Connect to the spawned noVNC WebSocket instance with a browser in port 5901, no VNC client required (password for the default user is 'vncpasswd').
+Container startup could take some time at first launch as it automatically installs NVIDIA drivers with the same version as the host.
+
+Connect to the spawned noVNC WebSocket instance with a browser on port 5901; no installed VNC client is required (the password for the default user is 'vncpasswd').
 
 Wine and Winetricks are bundled by default; comment out the installation section in **Dockerfile** if the user wants to remove them from the container.
 
-This container should not be used in privileged mode, and requires **/dev/ttyN** (N >= 8) virtual terminal device provisioning in unprivileged mode. All containers in a single node should be provisioned with the exact same virtual terminal device. Check out [smarter-device-manager](https://gitlab.com/arm-research/smarter/smarter-device-manager) or [k8s-hostdev-plugin](https://github.com/bluebeach/k8s-hostdev-plugin) for provisioning this in Kubernetes clusters without privileged access.
+This container should not be used in privileged mode, and requires one **/dev/ttyN** (N >= 8) virtual terminal device to be provisioned in unprivileged mode. All containers on a single node should be provisioned with the exact same virtual terminal device. Check out [smarter-device-manager](https://gitlab.com/arm-research/smarter/smarter-device-manager) or [k8s-hostdev-plugin](https://github.com/bluebeach/k8s-hostdev-plugin) for provisioning this in Kubernetes clusters without privileged access.
 
```
docker run --gpus 1 --device=/dev/tty63:rw -it -e SIZEW=1920 -e SIZEH=1080 -e SHARED=TRUE -e VNCPASS=vncpasswd -e VIDEO_PORT=DFP -p 5901:5901 ehfd/nvidia-glx-desktop:latest
```
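Since the README now points at `xrandr -q` for discovering port identifiers, a query inside the container might look like this (port names and connection states vary by GPU and driver; treat this output as illustrative only):

```
$ DISPLAY=:0 xrandr -q
Screen 0: minimum 8 x 8, current 1920 x 1080, maximum 32767 x 32767
DVI-D-0 disconnected (normal left inverted right x axis y axis)
HDMI-0 disconnected (normal left inverted right x axis y axis)
DP-0 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 600mm x 340mm
DP-1 disconnected (normal left inverted right x axis y axis)
```

A `disconnected` DisplayPort entry, such as DP-1 here, would be the recommended **VIDEO_PORT** value for resolutions above 1920 x 1200.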

+ 5 - 1
bootstrap.sh

@@ -4,6 +4,7 @@ set -e
 trap "echo TRAPed signal" HUP INT QUIT KILL TERM
 
 echo "user:$VNCPASS" | sudo chpasswd
+sudo ln -snf "/usr/share/zoneinfo/$TZ" /etc/localtime && echo "$TZ" | sudo tee /etc/timezone > /dev/null
 
 sudo /etc/init.d/dbus start
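An aside on the `tee` idiom in the added line above (a generic shell sketch, not part of this commit): a plain redirection cannot write a root-owned file, because the redirection is performed by the unprivileged shell before `sudo` ever runs:

```
# Fails with "Permission denied": sudo elevates echo, but the shell
# performing the > redirection still runs as the unprivileged user.
sudo echo "$TZ" > /etc/timezone

# Works: tee runs under sudo and performs the write itself;
# > /dev/null suppresses tee's copy of the input on stdout.
echo "$TZ" | sudo tee /etc/timezone > /dev/null
```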
 
@@ -26,10 +27,13 @@ if ! command -v nvidia-xconfig &> /dev/null; then
                     --no-libglx-indirect \
                     --no-install-libglvnd
   sudo rm -rf /tmp/NVIDIA*
-  sudo sed -i "s/allowed_users=console/allowed_users=anybody/;$ a needs_root_rights=yes" /etc/X11/Xwrapper.config
   cd ~
 fi
 
+if grep -Fxq "allowed_users=console" /etc/X11/Xwrapper.config; then
+  sudo sed -i "s/allowed_users=console/allowed_users=anybody/;$ a needs_root_rights=yes" /etc/X11/Xwrapper.config
+fi
+
 if [ -f "/etc/X11/xorg.conf" ]; then
   sudo rm /etc/X11/xorg.conf
 fi
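The relocated `sed` is now guarded so it runs at every container start but fires only once: `grep -F` matches a fixed string, `-x` requires the whole line to match, and `-q` suppresses output. Assuming Ubuntu's stock Xwrapper.config ships with the line `allowed_users=console`, the file after the first start should read (an illustrative check; on later starts the exact-line match fails and the `sed`, including its `needs_root_rights=yes` append, is skipped):

```
$ cat /etc/X11/Xwrapper.config
allowed_users=anybody
needs_root_rights=yes
```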

+ 3 - 6
xgl.yaml

@@ -45,6 +45,9 @@ spec:
             memory: 64Gi
             cpu: "16"
             nvidia.com/gpu: 1
+# Deploy either https://gitlab.com/arm-research/smarter/smarter-device-manager or https://github.com/bluebeach/k8s-hostdev-plugin and increase the allocatable resource count of one identical virtual terminal device to at least the number of GPUs
+#            smarter-devices/tty63: 1
+#            hostdev.k8s.io/dev_tty63: 1
           requests:
             memory: 100Mi
             cpu: 100m
@@ -55,8 +58,6 @@ spec:
           name: xgl-root-vol
         - mountPath: /dev/shm
           name: dshm
-        - mountPath: /dev/tty63
-          name: tty
       volumes:
       - name: xgl-cache-vol
         emptyDir: {}
@@ -69,7 +70,3 @@ spec:
       - name: dshm
         emptyDir:
           medium: Memory
-# Replace this and the above mountPath with the cluster's device plugin entry that provisions the virtual terminal device
-      - name: tty
-        hostPath:
-          path: /dev/tty63
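Once one of the device plugins named in the new comment is deployed, the virtual terminal surfaces as a node-level resource instead of the removed hostPath mount. A hypothetical check (node and resource names depend on the plugin chosen):

```
# smarter-device-manager exposes e.g. smarter-devices/tty63 as an
# allocatable resource; k8s-hostdev-plugin uses hostdev.k8s.io/dev_tty63.
kubectl describe node <node-name> | grep tty63
```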