SMITER AppImage
===============

.. highlight:: sh

SMITER can be distributed as an `AppImage `_ to be run on a variety of
Linux distributions. It packages all necessary system dependencies for a
non-development Linux system, so no prerequisites beyond a Linux graphics
desktop are needed. The recommended Linux distribution is CentOS 8.2 or
higher. The Linux AppImage should also run under Windows WSLg, provided
the libraries normally found on a Linux X11 workstation are available.

To run the AppImage provided, just set its executable bit and start it::

    wget http://static.iter.org/imas/assets/smiter/appimage/SMITER-1.6.4-glibc2.28-x86_64.AppImage
    chmod +x SMITER-*.AppImage
    ./SMITER-1.6.4-glibc2.28-x86_64.AppImage

Study examples (12+ GB) are downloaded only once. They are then relinked
from the user's assets directory to :file:`/tmp/${USER}/study-1.6.4`,
which is also available as a study shortcut when starting the SMITER
module in the GUI and opening a study. For example::

    ./SMITER-1.6.4-glibc2.28-x86_64.AppImage trial /tmp/${USER}/study-1.6.4/deck/Test-EQ3-inrshad1-inres1.hdf

opens the "inres1" study provided. To run the case, activate the "Smiter"
module, right-click on the "inres1" case in the study and select
:guilabel:`Compute case`, or use :menuselection:`S&miter-->C&ase-->Compute`.
IMAS example data is installed under ``~/public/imasdb/smiter/``. Use Help
for the rest of the tutorials provided.

The benchmark case [NF55]_ is an example where parallelisation speeds up
computation if :guilabel:`Enable MPI` is checked in
:menuselection:`F&ile --> Pref&erences --> Smiter`. Load the study from
the command line with::

    ./SMITER-1.6.4-glibc2.28-x86_64.AppImage trial /tmp/${USER}/study-1.6.4/iter-nf55/nf_55_033019_2015.hd

.. note:: The first time the AppImage is run it downloads all studies
   provided and saves them under the :file:`${HOME}/assets/MD5` directory
   as files whose MD5 sum equals the filename; they are symlinked at
   every start of the AppImage.
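Because cached assets are named by their MD5 sum, a stale or interrupted
download can be detected by recomputing the sum of each file. A minimal
sketch, assuming the :file:`${HOME}/assets/MD5` layout described above
(the ``verify_assets`` helper is illustrative, not part of SMITER):

```shell
# Report any cached asset whose filename no longer matches its MD5 sum,
# so it can be removed and re-downloaded at the next AppImage start.
verify_assets() {
    dir="$1"
    for f in "$dir"/*; do
        [ -f "$f" ] || continue
        sum=$(md5sum "$f" | cut -d' ' -f1)
        [ "$sum" = "$(basename "$f")" ] || echo "corrupted: $f (md5 is $sum)"
    done
}

verify_assets "${HOME}/assets/MD5"
```

Deleting a reported file is safe: the AppImage re-downloads any missing
asset on its next start.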
Downloading several gigabytes of studies can take some time, which is why
downloading the remaining examples is delayed for 20 minutes to reduce
confusion for first-time users. The startup option ``trial`` disables
downloading the studies completely, except for the *inres1* and *nf55*
examples. Studies appear (symlinked) under the :file:`/tmp/${USER}/study`
directory unless set by a system "modulefile" to :file:`${HOME}/study`.

If the first argument to the AppImage is "mesa", then software rendering
is used instead, for machines with no graphics card::

    ./SMITER-1.6.4-glibc2.28-x86_64.AppImage mesa /tmp/${USER}/study/deck/Test-EQ3-inrshad1-inres1.hdf

.. note:: If running the AppImage with "mesa" under VirtualGL
   acceleration, then the GL "faker" libs should be disabled with::

       env --unset LD_PRELOAD ./SMITER*.AppImage mesa

   If running under NoMachine without a VirtualGL desktop (EUROfusion
   Gateway), then the start needs to be wrapped with::

       /usr/NX/scripts/vgl/vglrun ./SMITER*.AppImage

Parallel computation is provided with MPI when enabled with
:menuselection:`F&ile-->Pref&erences-->Smiter-->MPI Options-->Use MPI`.
The system default :guilabel:`MPI run command` is listed first in the
dropdown box, enabling users to :guilabel:`Apply` their own MPI
preferences.

.. note:: Released AppImages under the
   :file:`/work/imas/shared/external/assets/smiter/appimage/` directory
   on the ITER cluster are mirrored and publicly downloadable with the
   URL prefix https://static.iter.org/imas/assets/smiter/appimage and are
   listed on the `SMITER page at ITER `_.

Issues and customizing startup
------------------------------

CentOS might need the *fuse libs* installed in order to run an AppImage.
Use the following command to install the missing :file:`libfuse.so`::

    sudo yum -y install fuse-libs

The AppImage is intended to run on a multicore Linux workstation or
remotely within a graphically capable (`VirtualGL `_) login node.
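Whether the FUSE library mentioned above is already available can be
checked before launching the AppImage; a sketch (probing via
``ldconfig`` is an illustrative assumption, not a SMITER feature):

```shell
# Check for the FUSE 2 runtime library that the AppImage needs to
# self-mount; if it is missing, install fuse-libs as shown above.
if ldconfig -p 2>/dev/null | grep -q 'libfuse\.so\.2'; then
    fuse_status=present
else
    fuse_status=missing
fi
echo "libfuse.so.2: $fuse_status"
```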
On a compute cluster it is possible to move MPI computing to the compute
nodes by copying the :file:`exec` and :file:`openmpi` subdirectories from
the mounted AppImage to a shared place and exporting the ``SMITER_DIR``,
``PATH``, ``LD_LIBRARY_PATH`` and ``OPAL_PREFIX`` environment variables.

Environment variables used if defined before starting the AppImage::

    export SMITER_DIR=\${SMITER_DIR:-\${DIR}}
    export SMITER_STUDY_EXAMPLES_DIR=\${SMITER_STUDY_EXAMPLES_DIR:-/tmp/\${USER}}
    export SMITER_MPI_DEFAULT_COMMAND=\${SMITER_MPI_DEFAULT_COMMAND:-${MPI_COMMAND}}
    export OPAL_PREFIX=\${OPAL_PREFIX:-${OPAL_PREFIX}}
    export QT_QPA_FONTDIR=\${QT_QPA_FONTDIR:-\${DIR}/fonts/dejavu}
    export MAKE_STUDY_SLEEP_MINUTES=\${MAKE_STUDY_SLEEP_MINUTES:-20}

where ``${DIR}`` is the AppImage mount directory under ``/tmp``.

Assets (example studies and CAD models) are downloaded into::

    ASSETDIR ?= ${HOME}/assets/MD5

with ``ASSETDIR_GROUP`` ownership if defined. System administrators may
point the ``ASSETDIR`` directory to a location shared by a group of users
with a modulefile, and all users will find the filenames relinked there
from their home directories or /tmp.

SLURM Cluster integration
-------------------------

The OpenMPI provided with the AppImage is compiled with SLURM and
Infiniband (OpenIB) libraries, so that on a single node
:command:`env --unset=OMPI_MCA_ess mpirun` is usually sufficient for most
of the studies provided. If the AppImage is started with the argument
``slurm`` and with the ``SMITER_COMPUTE_DIRECTORY`` environment variable
exported, then the AppImage installs the SMARDDA runtime libraries under
that directory and uses them to submit jobs. For example::

    export SMITER_COMPUTE_DIRECTORY=${HOME}/smiter-compute
    ./SMITER*.AppImage mesa slurm

or just::

    env SMITER_COMPUTE_DIRECTORY=${HOME}/smiter-compute \
        ./SMITER-1.6.4-glibc2.28-x86_64.AppImage slurm

will install and set ``SMITER_DIR`` to ``SMITER_COMPUTE_DIRECTORY``.
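Copying the runtime out of the mounted image, as described above, can be
scripted; a sketch, assuming ``$DIR`` is the AppImage mount point under
``/tmp`` (the ``stage_runtime`` helper and the example target path are
illustrative, and the ``PATH``/``LD_LIBRARY_PATH`` entries assume the
binaries and libraries live directly under :file:`exec`):

```shell
# Copy the exec/ and openmpi/ subdirectories from the mounted AppImage
# to a shared filesystem visible to the compute nodes, then point the
# environment variables listed above at the copy.
stage_runtime() {
    src="$1"    # AppImage mount directory, e.g. ${DIR} under /tmp
    dest="$2"   # shared location visible to the compute nodes
    mkdir -p "$dest"
    cp -a "$src/exec" "$src/openmpi" "$dest/"
    export SMITER_DIR="$dest"
    export OPAL_PREFIX="$dest/openmpi"
    export PATH="$dest/exec:$PATH"
    export LD_LIBRARY_PATH="$dest/exec${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
}

# Example: stage_runtime "$DIR" /pfs/scratch/$USER/smiter-compute
```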
The :guilabel:`MPI run command` will be set to the default::

    env --unset=LD_PRELOAD --unset=PMIX_SERVER_URI2 --unset=OMPI_MCA_pmix --unset=PMIX_SERVER_URI21 --unset=PMIX_NAMESPACE --unset=OMPI_MCA_orte_ess_num_procs --unset=OMPI_MCA_ess srun -n 12 --mpi=pmi2 --tasks-per-node=12 --kill-on-bad-exit

unless overridden with the ``SMITER_MPI_DEFAULT_COMMAND`` environment
variable. ParaView compiled with MPI can introduce additional variables
into the environment that need to be cleared with::

    env --unset=ORTE_SCHIZO_DETECTION --unset=PMIX_SYSTEM_TMPDIR --unset=OMPI_COMMAND --unset=HFI_NO_BACKTRACE --unset=OMPI_MCA_orte_precondition_transports --unset=PMIX_RANK --unset=OMPI_ARGV --unset=PWD --unset=PMIX_SERVER_TMPDIR --unset=PMIX_PTL_MODULE --unset=IPATH_NO_BACKTRACE --unset=OMPI_MCA_orte_launch --unset=PMIX_INSTALL_PREFIX --unset=OMPI_APP_CTX_NUM_PROCS --unset=PMIX_BFROP_BUFFER_TYPE --unset=PMIX_SERVER_URI2 --unset=OMPI_MCA_pmix --unset=PMIX_SERVER_URI21 --unset=PMIX_SECURITY_MODE --unset=OMPI_MCA_ess --unset=PMIX_NAMESPACE --unset=OMPI_MCA_orte_ess_num_procs --unset=PMIX_GDS_MODULE ...

In cases where the Infiniband fabric is incompatible, the openib *BTL*
and the unusable out-of-band component can be disabled by setting OMPI
environment variables, as in the following :guilabel:`MPI run command`
parameters used on the EUROfusion Gateway cluster::

    env --unset=LD_PRELOAD --unset=OMPI_MCA_ess OMPI_MCA_oob=^ud OMPI_MCA_btl=^openib srun -n 48 --mpi=pmi2 --tasks-per-node=48 --kill-on-bad-exit

In principle, tuning the MPI parameters should result in no warnings or
errors when running cases. Usually it is enough to run MPI on a single
node!
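Instead of maintaining the long ``--unset`` lists above by hand, the
stray OpenMPI/PMIx variables can be collected from the current
environment; a sketch (the ``ompi_unsets`` helper is illustrative):

```shell
# Emit an `--unset=VAR` option for every exported OMPI_*/PMIX_*/ORTE_*
# variable, so a clean MPI command line can be built from a shell whose
# environment was polluted by a previous MPI-linked process.
ompi_unsets() {
    env | grep -E '^(OMPI_|PMIX_|ORTE_)' | cut -d= -f1 | \
        sed 's/^/--unset=/' | tr '\n' ' '
}

echo "env $(ompi_unsets)srun -n 12 --mpi=pmi2 --tasks-per-node=12 --kill-on-bad-exit"
```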
Since MPI warnings also occur when running serially without MPI, it is
better to set the environment variables before running the AppImage::

    setenv OMPI_MCA_oob ^ud
    setenv OMPI_MCA_btl ^openib
    /usr/NX/scripts/vgl/vglrun ./SMITER*.AppImage slurm

and then the :guilabel:`MPI run command` can be just::

    srun -n 48 --mpi=pmi2 --tasks-per-node=48 --kill-on-bad-exit

System administrators can prepare a single modulefile and copy the
AppImage to an arbitrary location to provide SMITER to all users. For
example, on the *Gateway* cluster the modulefile (without help text)
looks like::

    #%Module
    set app_dir /afs/eufus.eu/g2itmdev/user/g2kosl
    set appimage SMITER-1.6.4.RC1-82-ga5984700-glibc2.28-x86_64.AppImage
    set vglrun /usr/NX/scripts/vgl/vglrun
    setenv SMITER_GUI_DIR $app_dir
    setenv SMITER_STUDY_EXAMPLES_DIR $::env(HOME)/study
    setenv SMITER_COMPUTE_DIRECTORY /pfs/scratch/$::env(USER)/smiter-compute
    setenv SMITER_MPI_DEFAULT_COMMAND "env --unset=OMPI_MCA_ess srun -n 48 --mpi=pmi2 --tasks-per-node=48 --kill-on-bad-exit"
    setenv OMPI_MCA_oob ^ud
    setenv OMPI_MCA_btl ^openib
    setenv MAKE_STUDY_SLEEP_MINUTES 5
    set-alias smiter "env SMITER_MPI_DEFAULT_COMMAND=\"env --unset=OMPI_MCA_ess OMPI_MCA_ess mpirun -np 6\" $vglrun $app_dir/$appimage"
    set-alias smiter-slurm "$vglrun $app_dir/$appimage slurm"
    set-alias smiter_mesa "env --unset=LD_PRELOAD $app_dir/$appimage mesa"
    set-alias smiter_doc "firefox https://static.iter.org/imas/assets/smiter/html/index.html"

A system-wide assets cache can be set up with ``ASSETDIR`` and
``ASSETDIR_GROUP`` by adding::

    setenv ASSETDIR /pfs/scratch/g2kosl/assets/smiter/MD5
    setenv ASSETDIR_GROUP g2

to the modulefile.
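An administrator can prepare the shared cache directory referenced by
``ASSETDIR`` once; a sketch (the ``setup_assetdir`` helper is
illustrative; the setgid bit keeps downloads group-owned):

```shell
# Create a group-owned, setgid assets cache so downloads made by any
# user remain accessible to the whole group.
setup_assetdir() {
    dir="$1"; group="$2"
    mkdir -p "$dir"
    chgrp "$group" "$dir"
    chmod 2775 "$dir"   # rwxrwsr-x: new files inherit the group
}

# Example with the Gateway values from the modulefile above:
# setup_assetdir /pfs/scratch/g2kosl/assets/smiter/MD5 g2
```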