Table of Contents
...
This program will print "Hello World!" when run on a GPU server, or "Hello Hello" when no GPU module is found.
Singularity
Singularity is a software tool that brings Docker-like containers and reproducibility to scientific computing and HPC. Singularity has Docker container support and enables users to easily run different flavors of Linux with different software stacks. These containers provide a single universal on-ramp from the laptop, to HPC, to cloud.
Users can run Singularity containers just as they run any other program on our HPC clusters. Example usage of Singularity is listed below. For additional details on how to use Singularity, please contact us or refer to the Singularity User Guide.
Downloading Pre-Built Containers
Singularity makes it easy to quickly deploy and use software stacks or new versions of software. Since Singularity has Docker support, users can simply pull existing Docker images from Docker Hub or download Docker images directly from software repositories that increasingly support the Docker format. The Singularity Container Library also provides a number of additional containers.
You can use the pull command to download pre-built images from an external resource into your current working directory. The docker:// URI reference can be used to pull Docker images. Pulled Docker images will be automatically converted to the Singularity container format.
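For example, a minimal sketch of pulling a Docker image from Docker Hub (the image name here is only an illustration):
Code Block
$ singularity pull docker://ubuntu:18.04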
...
Here's an example of pulling the latest stable release of the TensorFlow Docker image and running it with Singularity. (Note: these pre-built versions may not be optimized for use with our CPUs.)
...
Singularity: Interactive Shell
The shell command allows you to spawn a new shell within your container and interact with it as though it were a small virtual machine:
...
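For example, assuming the TensorFlow image has been pulled to the current directory as tensorflow.simg:
Code Block
$ singularity shell tensorflow.simg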
Code Block
Singularity tensorflow.simg:~> python
>>> import tensorflow as tf
>>> print(tf.__version__)
1.13.1
>>> exit()
When done, you may exit the Singularity interactive shell with the "exit" command.
Singularity tensorflow.simg:~> exit
Singularity: Executing Commands
The exec command allows you to execute a custom command within a container by specifying the image file. This is the way to invoke commands in your job submission script.
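For example, a sketch using the same tensorflow.simg image as above:
Code Block
$ singularity exec tensorflow.simg python -c 'import tensorflow as tf; print(tf.__version__)'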
...
Singularity: Running a Batch Job
Below is an example of a job submission script, submit.sh, that runs Singularity. Note that you may need to specify the full path to the Singularity image you wish to run.
Code Block
#!/bin/bash
# Singularity example submit script for Slurm.
#
# Replace <ACCOUNT> with your account name before submitting.
#
#SBATCH -A <ACCOUNT>            # Set Account name
#SBATCH --job-name=tensorflow   # The job name
#SBATCH -c 1                    # Number of cores
#SBATCH -t 0-0:30               # Runtime in D-HH:MM
#SBATCH --mem-per-cpu=4gb       # Memory per cpu core

module load singularity
singularity exec tensorflow.simg python -c 'import tensorflow as tf; print(tf.__version__)'
Then submit the job to the scheduler. This example prints out the TensorFlow version.
$ sbatch submit.sh
For additional details on how to use Singularity, please contact us or refer to the Singularity User Guide.
Swak4FOAM in a Singularity container
Swak4FOAM (SWiss Army Knife for Foam) can be run inside a container. Using this Docker container as inspiration, here is a sample tutorial.
...
Since R will know where to look for libraries, a call to library(sm) will succeed. (This line is not strictly necessary for the install.packages(...) call itself, since the directory is already specified there.)
...
MATLAB
...
MATLAB (single thread)
The file linked below is a MATLAB M-file containing a single function, simPoissGLM, that takes one argument (lambda).
...
No Format
#!/bin/sh
#
# Simple MATLAB submit script for Slurm.
#
#SBATCH -A astro                # The account name for the job.
#SBATCH -J SimpleMLJob          # The job name.
#SBATCH -t 1:00                 # The time the job will take to run.
#SBATCH --mem-per-cpu=1gb       # The memory the job will use per cpu core.

module load matlab

echo
echo "Launching a MATLAB run"
date

# Define parameter lambda
LAMBDA=10

# Command to execute MATLAB code
matlab -nosplash -nodisplay -nodesktop -r "simPoissGLM($LAMBDA)" # > matoutfile

# End of script
...
This program will leave several files in the output directory: slurm-<jobid>.out, out.mat, and matoutfile.
MATLAB (multi-threading)
MATLAB has built-in implicit multi-threading (even without its Parallel Computing Toolbox, PCT), which causes it to use several cores on the node it is running on. It consumes the number of cores assigned by Slurm. The user can activate explicit (PCT) multi-threading by also specifying the desired number of cores in the MATLAB program.
The Slurm submit script (simpoiss.sh) should contain the following line:
No Format
#SBATCH -c 6
The -c flag determines the number of cores (up to 24 are allowed).
For explicit multi-threading, users must include the following statement in their MATLAB program:
No Format
parpool('local', 6)
The second argument passed to parpool must equal the number of cores specified with the -c flag. Users who are acquainted with commands like parfor need to enable explicit multi-threading with the parpool command above.
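As an illustration (a sketch, not part of the original example), parpool can be paired with parfor, assuming 6 cores were requested with -c 6:
No Format
% Minimal explicit multi-threading sketch using the Parallel Computing Toolbox.
% The pool size should match the number of cores requested with #SBATCH -c 6.
parpool('local', 6);
parfor i = 1:100
    a(i) = max(abs(eig(rand(300))));   % independent iterations run on pool workers
end
delete(gcp('nocreate'));               % shut down the pool when finished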
Note: maxNumCompThreads() is being deprecated by MathWorks; it is being replaced by parpool.
The command to execute MATLAB code remains unchanged from the single-thread example above.
Important note: On Yeti, where MATLAB was single-threaded by default, more recent versions of MATLAB appeared to take the liberty of grabbing all the cores on a node even when fewer cores (or only one) were specified as above. On Terremoto, we believe this has been addressed by a system mechanism that enforces proper usage of the specified number of cores.
MATLAB with Parallel Server
MATLAB 2020b and 2022b on Terremoto now have access to Parallel Server, and the toolbox is installed. The first time you run MATLAB, it can take a few minutes to fully open, especially over WiFi. In order to use Parallel Server, a Cluster Profile needs to be created that uses the Slurm job scheduler. You will also need to request the desired number of nodes, and you may need to increase the amount of memory requested. Start with an interactive job requesting two nodes:
...
srun --pty -t 0-04:00 --nodes=2 --mem=10gb -A <your-account> /bin/bash
Step One
Using the Configure for Slurm MathWorks tutorial as a guide:
- On the Home tab, in the Environment area, select Parallel > Create and Manage Clusters. Click OK on the dialog box Product Required: MATLAB Parallel Server.
- Create a new profile in the Cluster Profile Manager by selecting Add Cluster Profile > Slurm.
- With the new profile selected in the list, click Rename and edit the profile name to something informative for future use, e.g., InstallTest. Press Enter.
- In the Properties tab, provide settings for the following fields:
- Set the Description field to something informative, e.g., For testing installation.
- Set the JobStorageLocation to the location where you want job and task data to be stored, e.g., /moto/home/<your-directory>.
- Note: JobStorageLocation should not be shared by parallel computing products running different versions; each version on your cluster should have its own JobStorageLocation.
- Set the NumWorkers field to the number of workers you want to run the validation tests on. This should not be more than the number specified by --nodes= in the interactive srun job request.
- Set the ClusterMatlabRoot to the installation location of the MATLAB version, i.e., /moto/opt/matlab/R2020b or /moto/opt/matlab/R2022b.
- Within ADDITIONAL SLURM PROPERTIES add
- Click Done to save your cluster profile. (A short check is sketched after this list.)
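As a quick check from the MATLAB command window (a sketch; InstallTest is the example profile name used above), you can retrieve the saved profile and confirm its worker count:
>> c = parcluster('InstallTest');
>> c.NumWorkers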
Step Two: Validate the Cluster Profile
In this step you verify your cluster profile, and thereby your installation. You can specify the number of workers to use when validating your profile. If you do not specify the number of workers in the Validation tab, then the validation will attempt to use as many workers as the value specified by the NumWorkers property on the Properties tab. You can specify a smaller number of workers to validate your configuration without occupying the whole cluster.
...
MATLAB with Parallel Server
Running MATLAB via X11 Forwarding
MATLAB Parallel Server is now configured on Terremoto for R2020b and R2022b. Note that MATLAB 2023a and greater cannot be installed due to kernel requirements and the cluster's minimum Red Hat 7.9 operating system. X11 forwarding is available; XQuartz is recommended for Apple Mac computers and MobaXterm for Windows. The first time you run MATLAB via X11, it can take a few minutes to fully open, especially over WiFi. You can run one simple command to enable the Toolbox:
>> configCluster
You should see:
Must set AccountName before submitting jobs to TERREMOTO. E.g.
>> c = parcluster;
>> c.AdditionalProperties.AccountName = 'group-account-name';
>> c.saveProfile
Complete. Default cluster profile set to "Terremoto".
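For reference, a minimal sketch of an X11-forwarded session (the ssh -Y flag, host name, and matlab module name here are assumptions drawn from examples elsewhere on this page):
Code Block
# Connect with X11 forwarding enabled (XQuartz on macOS, MobaXterm on Windows)
$ ssh -Y UNI@moto.rcs.columbia.edu
# Load the MATLAB environment module and launch MATLAB
$ module load matlab
$ matlab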
Running MATLAB From Your Desktop/Laptop
You can also install MATLAB on your laptop/desktop by downloading it from the MathWorks Columbia page, where students can download it for free; currently only 2022b and 2020b are supported. You will need to download a zip file which contains all the necessary integration scripts, including the license. You will also need to be on the Columbia WiFi or VPN and copy the network.lic file into your device's MATLAB directory. On a Mac, use Finder > Applications > MATLAB, Ctrl-click the mouse, select Show Package Contents, then licenses. Alternatively, you can run the userpath command. In MATLAB, navigate to the Columbia-University.Desktop folder. In the Command Window, type configCluster. You will be prompted for Ginsburg and Terremoto; select 2 for Terremoto. Enter your UNI (without @columbia.edu). You should see:
>> c = parcluster;
>> c.AdditionalProperties.AccountName = 'group-account-name';
>> c.saveProfile
Complete. Default cluster profile set to "Terremoto".
Inside the zip file is a Getting Started tutorial in a Word document. You can start with getting a handle to the cluster:
>> c = parcluster;
Submission to the remote cluster requires SSH credentials. You will be prompted for your SSH username and password or identity file (private key). The username and location of the private key will be stored in MATLAB for future sessions. Jobs will now default to the cluster rather than submit to the local machine.
Configuring Jobs
Prior to submitting the job, we can specify various parameters to pass to our jobs, such as queue, e-mail, walltime, etc. See AdditionalProperties for the complete list. AccountName and MemPerCPU are the only mandatory fields.
>> % Specify the account
>> c.AdditionalProperties.AccountName = 'group-account-name';
>> % Specify memory to use, per core (default: 4gb)
>> c.AdditionalProperties.MemPerCPU = '6gb';
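With the account and memory set, you can submit work through the cluster handle. The following is a minimal sketch, not an official example; the rand call is just a placeholder workload:
>> % Submit a batch job that evaluates rand(3) on one worker of the cluster
>> j = batch(c, @rand, 1, {3});
>> wait(j);               % block until the job finishes
>> out = fetchOutputs(j)  % cell array containing the 3x3 result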
Python and Julia
To use Python, you need to use:
...
Code Block
$ srun --pty -t 0-02:00:00 --gres=gpu:1 -A <group_name> /bin/bash
Then load the Singularity environment module and run the TensorFlow container, which was built from the TensorFlow Docker image. You can start an interactive Singularity shell and specify the --nv flag, which instructs Singularity to use the NVIDIA GPU driver.
Code Block
$ module load singularity
$ singularity shell --nv /moto/opt/singularity/tensorflow-1.13-gpu-py3-moto.simg
Singularity tensorflow-1.13-gpu-py3-moto.simg:~> python
Python 3.5.2 (default, Nov 12 2018, 13:43:14)
[GCC 5.4.0 20160609] on linux
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
..
>>> exit()
You may type "exit" to exit when you're done with the Singularity shell.
Singularity tensorflow-1.13-gpu-py3-moto.simg:~> exit
Below is an example of a job submission script, submit.sh, that runs TensorFlow with GPU support using Singularity.
Code Block
#!/bin/bash
# Tensorflow with GPU support example submit script for Slurm.
#
# Replace <ACCOUNT> with your account name before submitting.
#
#SBATCH -A <ACCOUNT>            # Set Account name
#SBATCH --job-name=tensorflow   # The job name
#SBATCH -c 1                    # Number of cores
#SBATCH -t 0-0:30               # Runtime in D-HH:MM
#SBATCH --gres=gpu:1            # Request a gpu module

module load singularity
singularity exec --nv /moto/opt/singularity/tensorflow-1.13-gpu-py3-moto.simg python -c 'import tensorflow as tf; print(tf.__version__)'
Then submit the job to the scheduler. This example prints out the TensorFlow version.
$ sbatch submit.sh
For additional details on how to use Singularity, please contact us, see our Singularity documentation, or refer to the Singularity User Guide.
Another option:
Please note that you should not work on our head node.
...
This is one way to set up and run a Jupyter notebook on Terremoto. As your notebook will listen on a port that will be accessible to anyone logged in on the submit node, you should first create a password (as shown below).
Creating a Password
The following steps can be run on the submit node or in an interactive job.
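As a rough sketch of what those steps typically look like (the module name is taken from the Jupyter example below; jupyter notebook password stores a hashed password in your Jupyter configuration):
Code Block
$ module load anaconda/3-2019.10
$ jupyter notebook password
Enter password:
Verify password: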
...
Running a Jupyter Notebook
1. Log in to the submit node and start an interactive job.
Code Block
$ srun --pty -t 0-01:00 -A <ACCOUNT> /bin/bash

OR, if you want the notebook to run on a GPU node:

$ srun --pty -t 0-01:00 --gres=gpu:1 -A <ACCOUNT> /bin/bash
Please note that the example above specifies a time limit of one hour only. That can be set to a much higher value; in fact, the default (i.e., if not specified at all) is as long as 5 days.
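For example, to request a two-day interactive session instead (the -t format is D-HH:MM, as in the submit scripts above):
Code Block
$ srun --pty -t 2-00:00 -A <ACCOUNT> /bin/bash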
2. Unset the XDG_RUNTIME_DIR environment variable.
Code Block
$ unset XDG_RUNTIME_DIR
3. Load the anaconda environment module.
Code Block
$ module load anaconda/3-2019.10
4. Look up the IP of the node your interactive job is running on.
Code Block
$ hostname -i
10.43.4.206
5. Start the Jupyter notebook, specifying the node IP.
Code Block
$ jupyter notebook --no-browser --ip=10.43.4.206
6. Look for the following line in the startup output to get the port number.
Code Block
The Jupyter Notebook is running at: http://10.43.4.206:8888/
7. From your local system, open a second connection to Terremoto that forwards a local port to the remote node and port. Replace UNI below with your UNI.
Code Block
$ ssh -L 8080:10.43.4.206:8888 UNI@moto.rcs.columbia.edu
8. Open a browser session on your desktop and enter the URL 'localhost:8080' (i.e., the string within the single quotes) into its address bar. You should now see the notebook.