Getting Access
Access to the cluster is subject to formal approval by selected members of the participating research groups. See the HPC service catalog for more information on access options.
Introductory Linux Training and Resources
If you're not familiar with basic Linux commands and usage, or if you need a refresher on these topics, please refer to the following resources from our workshop series:
- Intro to Linux - Slides
- Intro to Linux - Video
- Intro to Shell Scripting - Slides
- Intro to Shell Scripting - Video
- Intro to High Performance Computing - Slides
- Intro to High Performance Computing - Video
For a list of upcoming research computing workshops and events, please see:
https://rcfoundations.research.columbia.edu/
Logging In
You will need to use SSH (Secure Shell) to access the cluster. Windows users can use PuTTY or Cygwin. macOS users can use the built-in Terminal application.
Users log in to the cluster's submit node, located at habanero.rcs.columbia.edu. If logging in from a command line, type:
$ ssh <UNI>@habanero.rcs.columbia.edu
where <UNI> is your Columbia UNI. Please make sure not to include the angle brackets ('<' and '>') in your command; they merely indicate that UNI is a placeholder.
Once prompted, you need to provide your usual Columbia password.
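If you log in frequently, you can optionally define a host alias in your SSH client configuration so you don't have to type the full hostname each time. A minimal sketch (the alias name habanero is just an example; substitute your own UNI):

```
# ~/.ssh/config -- "habanero" is an example alias; pick any name you like
Host habanero
    HostName habanero.rcs.columbia.edu
    User <UNI>
```

With this in place, `ssh habanero` is equivalent to the full command above.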
Submit Account
You must specify your account whenever you submit a job to the cluster. Use the following table to identify the account name to use.
Account | Full Name |
---|---|
action | Costa Lab |
apam | Applied Physics and Applied Mathematics |
astro | Astronomy and Astrophysics |
bpx | Biology at Physical Extremes |
ccl | Center for Climate and Life |
cheme | Chemical Engineering |
cmt | Condensed Matter Theory |
cwc | Columbia Water Center |
dsi | Data Science Institute |
dslab | Shohamy Lab |
dsp | Peterka Lab |
edu | Education Users |
elsa | Polar Group |
emlab | Key EM Lab |
fcs | Frontiers in Computing Systems |
free | Free Users |
geco | Columbia Experimental Gravity Group |
glab | Gentine Lab |
gsb | Graduate School of Business |
hblab | Bussemaker Lab |
heat | Heat Lab |
issa | Issa Lab |
jalab | Austermann Lab |
katt | Hirschberg Speech Lab |
ldeo | Lamont-Doherty Earth Observatory |
mfplab | Przeworski Lab |
mphys | Materials Physics |
nklab | Kriegeskorte Lab |
ocp | Ocean and Climate Physics |
pimri | Psychiatric Institute |
psych | Psychology |
qmech | Quantum Mechanics |
rent<UNI> | Renters |
seasdean | School of Engineering and Applied Science |
sipa | School of International and Public Affairs |
spice | Simulations Pertaining to or Involving Compact-object Explosions |
sscc | Social Science Computing Committee |
stats | Statistics |
stock | Stockwell Lab |
sun | Sun Lab |
theory | Theoretical Neuroscience |
ton | Ton Dieker Lab |
tosches | Tosches Lab |
tzsts | Tian Zheng Statistics |
xray | X Ray Lab |
zi | Zuckerman Institute |
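If you are unsure which account(s) you belong to, you may be able to query Slurm's accounting database directly from the submit node (this assumes the sacctmgr client is installed and accessible there):

```
$ sacctmgr show associations user=$USER format=Account,User
```

The Account column of the output lists the account name(s) you can submit under.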
Your First Cluster Job
Submit Script
This script will print "Hello World", sleep for 10 seconds, and then print the time and date. The output will be written to a file in your current directory.
In order for this example to work you need to replace <ACCOUNT> with your account name. If you don't know your account name, consult the table in the previous section.
```
#!/bin/sh
#
# Simple "Hello World" submit script for Slurm.
#
# Replace <ACCOUNT> with your account name before submitting.
#
#SBATCH --account=<ACCOUNT>      # The account name for the job.
#SBATCH --job-name=HelloWorld    # The job name.
#SBATCH -c 1                     # The number of cpu cores to use.
#SBATCH --time=1:00              # The time the job will take to run (here, 1 min)
#SBATCH --mem-per-cpu=1gb        # The memory the job will use per cpu core.

echo "Hello World"
sleep 10
date

# End of script
```
Job Submission
If this script is saved as helloworld.sh you can submit it to the cluster with:
$ sbatch helloworld.sh
This job will create one output file named slurm-####.out, where #### is replaced by the job ID assigned by Slurm. If all goes well, the file will contain the words "Hello World" and the current date and time.
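While the job is queued or running you can check on it, and once it finishes you can view its output. A short sketch (the job ID 12345 below is a placeholder; use the ID sbatch printed when you submitted):

```
$ squeue -u $USER        # list your pending and running jobs
$ scancel 12345          # cancel a job by ID, if needed
$ cat slurm-12345.out    # view the output after the job completes
```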
See the Slurm Quick Start Guide for a more in-depth introduction on using the Slurm scheduler.
Free Tier for Yeti Users
Experienced Yeti users will find that Free Tier differs significantly from the environment they are used to. Here are some of the most important differences to keep in mind.
- Free Tier uses Slurm to schedule jobs, while Yeti uses Torque/Moab. This means that your submit files from Yeti will not work on Free Tier. See the Slurm Quick Start Guide for an introduction to using the Slurm scheduler.
- Your home directories are different on the two clusters. Any files in your home directory on Yeti that you wish to use on Free Tier will need to be transferred over. See Transferring Files for more information.
- On Free Tier the path to your scratch space begins with /rigel. On Yeti it begins with /vega. (Rigel and Vega are the names of the storage devices where your scratch space is located.)
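As a starting point for translating your Yeti submit files, the most common Torque/Moab commands and directives have close Slurm equivalents. The mapping below is approximate and not exhaustive; consult the Slurm Quick Start Guide for the full details:

```
qsub job.sh                ->  sbatch job.sh            # submit a job
qstat -u $USER             ->  squeue -u $USER          # check job status
qdel <jobid>               ->  scancel <jobid>          # cancel a job
#PBS -N myjob              ->  #SBATCH --job-name=myjob
#PBS -l walltime=1:00:00   ->  #SBATCH --time=1:00:00
#PBS -l mem=1gb            ->  #SBATCH --mem=1gb
```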