...
Name | Version | Location / Module | Category |
---|---|---|---|
Apptainer | 1.2.5-1.el9 | (loaded automatically on all compute nodes) | Run Docker-like containers |
cmake | 3.20.2 | (loaded automatically on all compute nodes) | |
cuda | 12.3 | (loaded automatically on GPU nodes) | GPU Computing |
gcc | 11.4.1 | (loaded automatically on all compute nodes) | Compiler - C / C++ |
gdal, gdal-devel libraries | 3.4.3 | (loaded automatically on all compute nodes) | |
gsl/gsl-devel libraries | 2.6-7 | (loaded automatically on all compute nodes) | GNU Scientific Library |
gurobi | 10.0.3 | module load gurobi/10.0.3 | Prescriptive analytics and decision-making platform |
hdf5/hdf5-devel | 1.12.1-7 | (loaded automatically on all compute nodes) | Hierarchical Data Format version 5 |
hdf5p | 1.10.7 and 1.14.3 | module load hdf5p | Hierarchical Data Format version 5, PARALLEL version |
Intel oneAPI toolkit | various | module load intel-oneAPI-toolkit <library> | Core set of tools and libraries for developing high-performance, data-centric applications across diverse architectures. |
julia | 1.5.3 | module load julia/1.5.3 | Programming Language |
knitro | 13.2.0 | module load knitro/13.2.0 | Software package for solving large scale nonlinear mathematical optimization problems; short for "Nonlinear Interior point Trust Region Optimization" |
make | 4.3 | (loaded automatically on all compute nodes) | |
Mathematica | 14.0 | (loaded automatically on all compute nodes) | Numerical Computing |
MATLAB | R2023b | module load MATLAB/2023b | Numerical Computing |
openmpi | 5.0.2 | module load openmpi/gcc/64/4.1.5a1 | OpenMPI Compiler (provided by Nvidia/Mellanox) |
Python (Incl many libraries such as numpy, torch, Tensorflow, scipy, and more) | 3.9.18 | (loaded automatically on all compute nodes) | Python for Scientific Computing |
Qt 5 | 5.15.9-1 | (loaded automatically on all compute nodes) | |
R | 4.3.2 | (loaded automatically on all compute nodes) | Programming Language |
Schrodinger | 2024-1 | module load schrodinger | A collection of software for chemical and biochemical use. It offers various tools that facilitate the investigation of the structures, reactivity and properties of chemical systems. |
Singularity (now called Apptainer; see above) | | | |
Visual Studio Code Server | | Not a module | A server-side Integrated Development Environment hosted on Insomnia compute nodes |
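For the rows above that list a `module load` command, a minimal workflow looks like this sketch (the MATLAB version shown is taken from the table; run `module avail` to confirm what is actually installed on your node):

```shell
module avail                 # list all modules available on this node
module load MATLAB/2023b     # load a specific version from the table above
module list                  # show what is currently loaded
module unload MATLAB/2023b   # unload when finished
```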
...
The first time you launch Mathematica, you will need to provide the host details of the MathLM (license) server. Following the Activating Mathematica guide, click 'Other ways to activate', choose 'Connect to a Network License Server', and enter the IP address 128.59.30.140.
OpenMPI Settings
...
The default OpenMPI on Insomnia is openmpi-5.0.2, which is provided by Nvidia Mellanox and optimized for the MOFED stack. You will receive the following warnings when using mpirun/mpiexec:
...
You can pass the following options to select UCX, which is the default as of version 3.x:
--mca pml ucx --mca btl '^openib'
To help with the following warning:
Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
You can also add:
--mca orte_base_help_aggregate 0
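Combining the two settings above, a full launch line might look like the following sketch; the executable name `my_app` and the rank count are placeholders:

```shell
# Select UCX, disable the openib BTL, and show all help/error messages
mpirun -np 4 \
  --mca pml ucx --mca btl '^openib' \
  --mca orte_base_help_aggregate 0 \
  ./my_app
```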
If you choose to use the openmpi/gcc/64/4.1.1_cuda_11.0.3_aware module, note that this version expects a GPU and will print the following warning on non-GPU nodes:
The library attempted to open the following supporting CUDA libraries, but each of them failed. CUDA-aware support is disabled.
libcuda.so.1: cannot open shared object file: No such file or directory
libcuda.dylib: cannot open shared object file: No such file or directory
/usr/lib64/libcuda.so.1: cannot open shared object file: No such file or directory
/usr/lib64/libcuda.dylib: cannot open shared object file: No such file or directory
If you are not interested in CUDA-aware support, then run with --mca opal_warn_on_missing_libcuda 0
to suppress this message. If you are interested in CUDA-aware support, then try setting LD_LIBRARY_PATH
to the location of libcuda.so.1 to get passed this issue.
You can pass this option:
...
Insomnia has a few MPI options loadable as modules in addition to Intel oneAPI/hpctoolkit/mpi/2021.11:
• openmpi5/5.0.2
• mpi/mpich-x86_64/4.1.1
• mpi/openmpi-x86_64/4.1.1
If you find that an mpirun job hangs or does not complete, try adding the following option:
-mca coll ^hcoll
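For example (again, the executable name `my_app` and the rank count are placeholders):

```shell
# Disable the hcoll collectives component, which can cause hangs
mpirun -np 4 -mca coll ^hcoll ./my_app
```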
RStudio in an Apptainer container
...
```
apptainer pull --name rstudio.simg docker://rocker/rstudio:4.3.1
```
In order for RStudio to start in a browser via an interactive session you will need the IP address of the compute node. Note that the IP below will likely be different for you:
```
$ hostname -i
10.197.16.39
```
(Remember, this is only an example IP. Yours will likely be different.)
In RStudio 4.2 and later, added security features require binding a locally created database file to the database file inside the container. Don't forget to change the password.
```
mkdir -p run var-lib-rstudio-server
printf 'provider=sqlite\ndirectory=/var/lib/rstudio-server\n' > database.conf
PASSWORD='CHANGEME' singularity exec \
  --bind run:/run,var-lib-rstudio-server:/var/lib/rstudio-server,database.conf:/etc/rstudio/database.conf \
  rstudio.simg \
  /usr/lib/rstudio-server/bin/rserver --auth-none=0 --auth-pam-helper-path=pam-helper --server-user=$USER
```
This will run rserver in an Apptainer (Singularity) container. Now open another Terminal and start the RStudio rserver session, using port forwarding to connect a local port on your computer to a remote one on Insomnia.
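The port-forwarding step can be sketched as follows. This is a hedged example: `<uni>` and `<login-node-address>` are placeholders you must replace with your own values, the compute-node IP is the example from above, and 8787 is the default port the rocker/rstudio rserver listens on.

```shell
# Forward local port 8787 to port 8787 on the compute node running rserver.
# Replace <uni> and <login-node-address> with your own values; 10.197.16.39
# is only the example compute-node IP from earlier.
ssh -L 8787:10.197.16.39:8787 <uni>@<login-node-address>
```

Once the tunnel is up, point your browser at http://localhost:8787 and log in with your username and the password you set in place of CHANGEME.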
...
Visual Studio Code Server
Note |
---|
A pre-existing GitHub account is now required to use the instructions below. |
Visual Studio Code is an Integrated Development Environment that many people use on their laptops. If you are familiar with it, the HPC offers a server-side version hosted on the compute nodes (NOT the login nodes), which lets you connect the local VS Code application on your laptop to Insomnia and open files from your Insomnia folder directly. To use it, do the following:
...
When you use the device login URL, you will first see a page asking you to log in with your GitHub credentials.
After logging in with GitHub, you will see a page asking you to enter the device code given to you above (represented as <###-###> here).
Next you will see a page requesting that you authorize Visual Studio Code's access permissions.
After that, when you open your local VS Code application on your computer, you will see a running SSH tunnel listed. Double-click to connect to it; this can take a moment to finish.
Once done, you will be able to open files in your Insomnia folder just as you do files on your local computer.