2.1. Using remote VirtualGL desktop
SMITER GUI is a graphically intensive application that for large cases needs a graphics card with OpenGL acceleration. VirtualGL technology allows a remote client without such capabilities to still use a hardware-accelerated display over a remote desktop connection.
Without VirtualGL, only small cases are comfortable to use remotely. The best solution, of course, is to run SMITER on a Linux workstation with a good graphics card and plenty of system memory.
The following remote desktop connections are possible:
- Using an NX graphical desktop without VirtualGL is useful for running graphically non-intensive cases.
- Using an NX desktop and an X11 connection to a GPU-accelerated workstation that provides a VNC desktop and client. This allows handling large meshes interactively, with the whole VNC desktop being image-compressed by the NX desktop.
- Direct remote connection with a TurboVNC client running on Windows, Linux, or OSX. Such a direct connection between client and server obviously works best, but it only works when there are no firewalls in the middle to block the communication. Note that a firewall can also be active on the workstation itself.
- Tunneled TurboVNC connection through SSH. This can pass single or multiple gateways or login nodes, but also adds some latency to the communication.
2.1.1. NX desktop without VirtualGL
To connect to the FreeNX remote desktop one needs to install the older version 5 (or version 3 for Windows 7) of the NoMachine client, which is available only from some sites on the internet. For example, use the attachments at http://hpc.fs.uni-lj.si/access.
From the FreeNX desktop provided on hpc-login[4-8].iter.org one can log in to hpc-login.iter.org (or another hpc-login0[1-5] node) with
[kosl@hpc-login5 ~]$ ssh -Y hpc-login02.iter.org
[kosl@hpc-login02 ~]$ module load SMITER
[kosl@hpc-login02 ~]$ smiter_mesa
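To check which OpenGL renderer such a plain X11 connection offers (and hence why the software-rendered smiter_mesa variant is used here), one can run (a sketch):
$ glxinfo | grep "OpenGL renderer"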
2.1.2. Preparing TurboVNC desktop
We need to create a startup file and then start the VNC server on io-ls-titan.iter.org, which will be protected by a password and accessible through SSH tunneling from outside, and directly only from the “internal” iter.org network.
Note
To get access to io-ls-titan.iter.org one needs to request access through the ITER Jira Service Desk. Having an account on hpc-login.iter.org is a prerequisite for io-ls-titan.iter.org!
When logged in with SSH on io-ls-titan.iter.org one can then run:
$ module load SMITER
The SMITER module file provides useful aliases for creating a VNC desktop with VirtualGL support. The following aliases are available:
- vncserver-mwm: Creates a Motif window manager controlled desktop with a DELL E2216H compatible resolution.
- vncserver-mwm-nogl: As above, but without VirtualGL. Use vglrun -d :0.2 smiter to run SMITER (see the sketch after this list).
- vncserver-tde: Creates a resizable Trinity desktop environment.
- vncserver-list: Lists running desktops. There should be just one.
- vncserver-kill: Kills the virtual VNC desktop. Specify the :display number.
- vncserver-passwd: Changes the desktop and view-only passwords.
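A session with the non-VirtualGL desktop might then look as follows (a sketch; GPU display :0.2 is only an example, see the notes on GPU selection below):
$ vncserver-mwm-nogl              # desktop without the VirtualGL interposer
Then, in an X terminal inside that desktop:
$ module load SMITER
$ vglrun -d :0.2 smiter           # run only SMITER under VirtualGL, on GPU #2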
To run SMITER use one of the following commands (a short example follows the list):
- smiter: Normal run. Use -help for options.
- smiter_doc: Read the SMITER documentation in a web browser.
- smiter_mesa: Software rendering without VirtualGL.
- smiter_vgl: Use VirtualGL with the recommended GPU.
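For example, inside an X terminal on the VNC desktop one might run (a sketch):
$ module load SMITER
$ smiter_vgl        # hardware-accelerated rendering through VirtualGL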
To set up SMITER for the TurboVNC client one can simply use the PuTTY client or:
$ ssh hpc-login.iter.org
$ ssh io-ls-titan.iter.org
$ module load SMITER
$ vncserver-passwd
$ vncserver-mwm # or vncserver-tde
to create the desktop and then use the TurboVNC client as described in Installing TurboVNC client to connect to the remote desktop.
Note
VNC display numbers are allocated sequentially, taking into account other users’ display numbers. Once a VNC desktop with a display number is allocated, it can only be killed by the user who created it. The remote desktop will remain active even if we disconnect or log out from the terminal that created the server.
To list and kill the desktop use:
$ vncserver-list
$ vncserver-kill :display#
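A typical sequence might produce output similar to the following (display number and process ID are illustrative):
$ vncserver-list

TurboVNC sessions:

X DISPLAY #	PROCESS ID
:2		24507
$ vncserver-kill :2
Killing Xvnc process ID 24507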
To manually create a VNC server use:
$ ssh hpc-login.iter.org
$ ssh io-ls-titan.iter.org
$ export TVNC_VGLRUN="vglrun +wm -d :0.4"
$ export PATH=/opt/TurboVNC/bin:${PATH} TVNC_WM=mwm
$ vncpasswd # Generating password for accessing the desktop
$ vncserver -geometry 1920x1080
$ vncserver -list
The TVNC_VGLRUN setting specifies the Xorg server with NVIDIA GPU #4 for display acceleration. The complete window manager will run under the VirtualGL interposer libraries. The machine io-ls-titan.iter.org contains 8 GPUs, and one can use any of them by changing -d :0.4 to -d :0.x, where x is in the range 0 to 7. GPU memory usage can be obtained with the command
$ nvidia-smi
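For example, if nvidia-smi shows GPU #4 busy, the desktop can be recreated on GPU #2 (a sketch following the commands above; :1 stands for your own display#):
$ vncserver -kill :1
$ export TVNC_VGLRUN="vglrun +wm -d :0.2"
$ vncserver -geometry 1920x1080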
We have created a desktop with the “legacy” Motif Window Manager, where we can open an X terminal by right-clicking and choosing New Window, and set the background with
$ xsetroot -grey
If a fully fledged window manager is desired, one can use the Trinity desktop environment instead with
$ vncserver -kill :display#
$ export TVNC_VGLRUN="vglrun +wm -d :0.3"
$ export PATH=/opt/trinity/bin:${PATH} TVNC_WM=starttde
$ vncserver -geometry 1280x720
Note that once the VNC desktop is started it will remain running until it is killed. The user can then connect to the desktop with the VNC client at any time.
Note
The Trinity desktop needs personalisation with kpersonalizer; for comfortable remote work select Slow Processor (fewer effects) in Step 3 of the wizard. Furthermore, due to the bug Undefined color: "WINDOW_FOREGROUND", one needs to launch the Trinity Control Center and then disable the corresponding checkbox.
2.1.3. Installing TurboVNC client
Usually the TurboVNC client is available for download from the SourceForge site. However, administrator privileges are required to install the client on MS Windows. For convenience, a portable archive is provided as $SALOME_ARCHIVES/TurboVNC.zip, whose location is shown by:
$ module load SMITER
$ ls -l ${SALOME_ARCHIVES}/TurboVNC.zip
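Assuming an OpenSSH client on the local machine and that environment modules are initialised in login shells, the archive can then be copied out as follows (a sketch; substitute the path printed by the first command):
$ ssh hpc-login.iter.org 'bash -lc "module load SMITER && ls \$SALOME_ARCHIVES/TurboVNC.zip"'
$ scp hpc-login.iter.org:<path printed above> .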
The $SALOME_ARCHIVES/TurboVNC.zip archive can simply be unpacked to the Windows desktop; vncviewer.exe and putty.exe for SSH access are provided therein.
2.1.4. VNC client inside NX desktop
After establishing an NX desktop on one of the login nodes we can run
$ ssh -Y io-ls-titan
$ /opt/TurboVNC/bin/vncviewer localhost:display#
where display# is the display number reported at vncserver creation. Once connected, we can simply open a terminal and run
$ glxinfo | grep VirtualGL
$ module load SMITER
$ smiter
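When VirtualGL is active, the GLX vendor strings typically report it (illustrative output):
server glx vendor string: VirtualGL
client glx vendor string: VirtualGL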
2.1.5. Connecting with Windows TurboVNC client
A direct connection from the TurboVNC client is possible only from the iter.org internal network, by simply specifying the name of the machine with the display number created above. For example, VNC server: io-ls-titan:1 will connect to display :1 on the io-ls-titan GPU workstation directly from the iter.org “intranet”, but not from the “Guest” WiFi network.
To connect to io-ls-titan.iter.org from the internet one needs to establish an SSH tunnel with the PuTTY terminal emulator that comes with the TurboVNC client on Windows. The VNC server listening port starts at 5900 + display#. In the PuTTY Tunnels panel set Source port: 5900 and Destination: io-ls-titan.iter.org:5901 if your display# is :1 and the VNC server is io-ls-titan.iter.org, then press Add. Then open the SSH session by specifying hpc-login.iter.org as the host.
After a successful login to hpc-login.iter.org leave the PuTTY terminal open and use the TurboVNC client with VNC Server: localhost:0, then press Connect.
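The same tunnel can also be created from the Windows command line with plink.exe, the command-line companion of putty.exe (a sketch; plink.exe is assumed to be available, and kosl stands for your own user name):
> plink.exe -ssh -L 5900:io-ls-titan.iter.org:5901 kosl@hpc-login.iter.org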
See Remote access to compute node with VNC on Microsoft Windows on how to set up the client through multiple tunnels, and for an alternative way of starting the TurboVNC server without a window manager under VirtualGL, using the vglrun command instead.
2.1.6. Connection with Linux TurboVNC client
Linux and OSX clients can specify the SSH tunnel in Options… as shown in the following screenshot, and specify VNC Server: io-ls-titan:1 if display :1 is the target display on io-ls-titan.iter.org.
Besides the direct connection, one can use the TurboVNC client with an SSH tunnel (see Remote desktop with VirtualGL for more info) by specifying VNC_VIA_CMD as in the following example for user kosl on display io-ls-titan:5
$ export VNC_VIA_CMD='/usr/bin/ssh -l kosl -f -L %L:%H:%R %G sleep 2'
$ /opt/TurboVNC/bin/vncviewer -via hpc-login.iter.org io-ls-titan:5
Instead of the above commands we could establish a single tunnel with
$ ssh -L 5905:io-ls-titan:5905 kosl@hpc-login.iter.org
and then run the client locally:
$ /opt/TurboVNC/bin/vncviewer localhost:5
Double tunnels or establishing a separate tunnel are also possible, similar to what is described in Section 2.1.7, where we create a tunnel with SSH manually while logging in.
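With a reasonably recent OpenSSH (7.3 or later) the two hops can also be combined into one command using the -J (ProxyJump) option (a sketch for user kosl and display :5):
$ ssh -J kosl@hpc-login.iter.org -L 5905:localhost:5905 kosl@io-ls-titan
$ /opt/TurboVNC/bin/vncviewer localhost:5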
For frequent connections the following function can be added to ~/.bashrc (note: the remote user kosl is hardcoded in the function):
titan () {
    # Default to display 0 when no argument is given
    if [ $# -eq 0 ]; then display=0; else display=$1; fi
    echo "Connecting to io-ls-titan:${display} via hpc-login.iter.org"
    VNC_VIA_CMD='/usr/bin/ssh -l kosl -f -L %L:%H:%R %G sleep 2' \
        /opt/TurboVNC/bin/vncviewer -via hpc-login.iter.org io-ls-titan:${display}
}
and then called with titan 2 for connecting to io-ls-titan:2 via hpc-login.iter.org.
2.1.7. Connection with OSX TurboVNC client
We need to create a tunnel for the VNC client through an SSH secure line in order to connect to the VNC server. From outside the ITER network one needs first to connect to one of the login nodes and from there create another tunnel to the io-ls-titan.iter.org machine where we have a running TurboVNC server.
OSX is a BSD variant of Unix with slightly different command switches than Linux. OSX clients can specify the SSH tunnel in Options… as shown in the following screenshot, and specify VNC Server: io-ls-titan:1 if display :1 is the target display on io-ls-titan.iter.org.
Alternatively, the dual tunnel can be created in a Terminal:
$ ssh -L 5901:localhost:5901 kosl@hpc-login5.iter.org
Then from the login node create another tunnel
$ ssh -g -L 5901:io-ls-titan:5901 io-ls-titan
After that, specifying localhost:1 in the TurboVNC client will open the VNC desktop.
Instead of the dual tunnel we could establish a single tunnel, similarly to the Linux client: in a terminal we establish an SSH tunnel from localhost:1 to the io-ls-titan:5 display through one of the login nodes with
$ ssh -L 5901:io-ls-titan:5905 kosl@hpc-login.iter.org
and then within the OSX TurboVNC client use localhost:1 to connect.
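For repeated use, the whole route can also be captured in ~/.ssh/config so that a single command opens the tunnel (a sketch; the titan-vnc alias is illustrative and assumes OpenSSH 7.3 or later):
Host titan-vnc
    HostName io-ls-titan
    User kosl
    ProxyJump kosl@hpc-login.iter.org
    LocalForward 5901 localhost:5901
After running $ ssh titan-vnc, the TurboVNC client again connects with localhost:1.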