...
...
...
...
Getting Access
Access to the cluster is subject to formal approval by selected members of the participating research groups. See the HPC Service Webpage for more information on access options.
...
If you're not familiar with basic Linux commands and usage, or if you need a refresher on these topics, please refer to the following resources from our workshop series:
Intro to Shell Scripting - Slides and Video Recording
Intro to High Performance Computing - Slides and Video Recording
For a list of recorded trainings and upcoming research computing workshops and events, please see:
https://rcfoundations.research.columbia.edu
https://www.cuit.columbia.edu/rcs/training
Logging In
You will need to use SSH (Secure Shell) in order to access the cluster. Windows users can use PuTTY or Cygwin. MacOS users can use the built-in Terminal application.
...
Users log in to the cluster's submit node, insomnia.rcs.columbia.edu, or use the shorter form som.rcs.columbia.edu. If logging in from a command line, type:
Code Block
$ ssh <UNI>@insomnia.rcs.columbia.edu
OR
$ ssh <UNI>@som.rcs.columbia.edu
where <UNI> is your Columbia UNI. Please make sure not to include the angle brackets ('<' and '>') in your command; they are only there to mark UNI as a placeholder.
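For example, with a made-up UNI of ab1234 (substitute your own), the login command would look like this:
Code Block
$ ssh ab1234@insomnia.rcs.columbia.edu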
...
You must specify your account whenever you submit a job to the cluster. You can use the following table to identify the account name to use; a short example of passing the account name to Slurm follows the table.
Account | Full Name |
---|---|
| Anastassiou Lab |
5sigma | Biostatistics |
asenjo | lab of Ana Asenjo Garcia, Dept of Physics |
astro | Columbia Astrophysics Lab |
berkelbach | Chemistry |
cboyce | Chemical Engineering (Christopher Boyce) |
cklab | IEOR |
crislab | Chemical Engineering (Venkat Venkatasubramanian) |
db | Computer Science |
delmore_lab | Ecology, Evolution and Environmental Biology |
dr_beast | lab of Dr. Nikhil Sharma, Molecular Pharmacology |
e3b | Ecology, Evolution and Environmental Biology |
esma | SIPA-CGEP |
exposomics | MSPH Exposomics |
free | special group for Free Tier users with limited run times on the cluster |
friesner | Dept of Chemistry |
hill | Physics (Columbia Astrophysics Laboratory) |
hilsha | Lamont Climate School - Steckler lab |
houlab | Laboratory of Wenpin Hou |
ieortang | Industrial Engineering and Operations Research (IEOR) |
iicd | Irving Institute for Cancer Dynamics |
abernathey | Ocean Climate Physics: Abernathey |
intelseedfree | Special group with access to a non-NVIDIA GPU seed node from Intel. Email hpc-support@columbia.edu if interested in details. |
mcilvain | Grace McIlvain Lab |
mmsci | Multimessenger Science |
morpheus | Bianca Dumitrascu Lab |
msph | MSPH IT |
neuralctrl | Laboratory for Neural Engineering and Control |
ntar_lab | Biomedical Engineering (Morrison) |
pas_lab | Biological Sciences |
ueil | Biomedical Engineering (Konofagou) |
qmech | Quantum Mechanics/Applied Physics and Applied Math: Marianetti |
sscc | Social Science Computing Committee (ISERP, Econ, and CPRC) |
tekle_smith | Chemistry Dept - Tekle Smith group |
xulab | Earth and Environmental Engineering |
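For example, a member of one of the groups above would pass their account name to sbatch (or srun) with the --account flag. Here sscc is used purely as an illustration, and myjob.sh is a placeholder script name:
Code Block
$ sbatch --account=sscc myjob.sh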
Your First Cluster Job
While this is best practice on all Columbia HPC group clusters, on Insomnia it is particularly important to move from the initial login node to a compute node before doing most work. For example:
Code Block
srun --pty -t 0-2:00 -A <ACCOUNT> /bin/bash
You have now moved from the login node to one of the cluster's compute nodes. Simple tasks like editing a file or making new folders can be done on either a login node or a compute node, but as you run more complicated jobs on Insomnia, some things simply will not work unless you are on a compute node first.
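To confirm that you have landed on a compute node, you can print the node's hostname; typing exit ends the interactive session and returns you to the login node. A minimal sketch:
Code Block
$ hostname   # prints the name of the compute node you were assigned
$ exit       # ends the interactive session and returns to the login node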
An Example Submit Script
This script will print "Hello World", sleep for 10 seconds, and then print the time and date. The output will be written to a file in your current directory.
...
For this example to work, you need to replace ACCOUNT with your group account name. If you don't know your account name, the table in the previous section may help.
Code Block
#!/bin/sh
#
# Simple "Hello World" submit script for Slurm.
#
# Replace ACCOUNT with your account name before submitting.
#
#SBATCH --account=ACCOUNT # Replace ACCOUNT with your group account name
#SBATCH --job-name=HelloWorld # The job name
#SBATCH -N 1 # The number of nodes to request
#SBATCH -c 1 # The number of cpu cores to use (up to 32 cores per server)
#SBATCH --time=0-0:30 # The time the job will take to run in D-HH:MM
#SBATCH --mem-per-cpu=5G # The memory the job will use per cpu core
echo "Hello World"
sleep 10
date
# End of script
Job Submission
If this script is saved as helloworld.sh you can submit it to the cluster with:
Code Block
$ sbatch helloworld.sh
This job will create one output file named slurm-####.out, where #### is the job ID assigned by Slurm. If all goes well, the file will contain the words "Hello World" and the current date and time.
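While the job is waiting or running, you can check its state with squeue, and once it finishes you can view the output file with cat. The job ID below (12345) is only a placeholder for whatever number sbatch reports:
Code Block
$ squeue -u $USER        # list your pending and running jobs
$ cat slurm-12345.out    # view the job's output once it has finished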
See our further documentation about submitting jobs. For a more in-depth introduction to using the Slurm scheduler, see the Slurm Quick Start Guide on the web.