
Getting Access


Access to the cluster is subject to formal approval by selected members of the participating research groups. See the HPC service catalog for more information on access options.

Introductory Linux Training and Resources

If you're not familiar with basic Linux commands and usage, or if you need a refresher on these topics, please refer to the materials from our workshop series.

For a list of recorded trainings and upcoming research computing workshops and events, please see:

https://www.cuit.columbia.edu/rcs/training


Logging In


You will need to use SSH (Secure Shell) to access the cluster. Windows users can use PuTTY or Cygwin; macOS users can use the built-in Terminal application.


Users log in to the cluster's submit node, located at terremoto.rcs.columbia.edu.  If logging in from a command line, type:


$ ssh <UNI>@terremoto.rcs.columbia.edu


where <UNI> is your Columbia UNI. Do not include the angle brackets ('<' and '>') in your command; they simply mark UNI as a placeholder.


When prompted, enter your usual Columbia password.
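
If you log in frequently, you may optionally add a host alias to your local SSH configuration so that a shorter command suffices. This is a convenience sketch only; the alias name "terremoto" below is arbitrary, and <UNI> should again be replaced with your Columbia UNI.

# Add to ~/.ssh/config on your local machine.
# Replace <UNI> with your Columbia UNI.
Host terremoto
    HostName terremoto.rcs.columbia.edu
    User <UNI>

With this in place, typing "ssh terremoto" is equivalent to the full command shown above.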


Submit Account


You must specify your account whenever you submit a job to the cluster. Use the following table to identify the account name that applies to you.


Account       Full Name
action        Costa Lab
apam          Applied Physics and Applied Mathematics
astro         Astronomy and Astrophysics
bpx           Biology at Physical Extremes
ccl           Center for Climate and Life
cheme         Chemical Engineering
cmt           Condensed Matter Theory
cwc           Columbia Water Center
dsi           Data Science Institute
dslab         Shohamy Lab
dsp           Peterka Lab
edu           Education Users
elsa          Polar Group
emlab         Key EM Lab
fcs           Frontiers in Computing Systems
free          Free Users
geco          Columbia Experimental Gravity Group
glab          Gentine Lab
gsb           Graduate School of Business
hblab         Bussemaker Lab
heat          Heat Lab
issa          Issa Lab
jalab         Austermann Lab
katt          Hirschberg Speech Lab
ldeo          Lamont-Doherty Earth Observatory
mfplab        Przeworski Lab
mphys         Materials Physics
nklab         Kriegeskorte Lab
ocp           Ocean and Climate Physics
pimri         Psychiatric Institute
psych         Psychology
qmech         Quantum Mechanics
rent<UNI>     Renters
seasdean      School of Engineering and Applied Science
sipa          School of International and Public Affairs
spice         Simulations Pertaining to or Involving Compact-object Explosions
sscc          Social Science Computing Committee
stats         Statistics
stock         Stockwell Lab
sun           Sun Lab
theory        Theoretical Neuroscience
ton           Ton Dieker Lab
tosches       Tosches Lab
tzsts         Tian Zheng Statistics
xray          X Ray Lab
zi            Zuckerman Institute
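
If you are unsure which account(s) your UNI is associated with, you can usually query Slurm's accounting database from the submit node. This assumes the sacctmgr utility is available to regular users; the format fields shown are just one reasonable choice.

$ sacctmgr show associations user=<UNI> format=Account,User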


Your First Cluster Job


Submit Script


This script will print "Hello World", sleep for 10 seconds, and then print the time and date. The output will be written to a file in your current directory.


For this example to work, you need to replace <ACCOUNT> with your account name. If you don't know your account name, the table in the previous section may help.


#!/bin/sh
#
# Simple "Hello World" submit script for Slurm.
#
# Replace <ACCOUNT> with your account name before submitting.
#
#SBATCH --account=<ACCOUNT>      # The account name for the job.
#SBATCH --job-name=HelloWorld    # The job name.
#SBATCH -c 1                     # The number of cpu cores to use.
#SBATCH --time=1:00              # The time limit for the job (here, 1 minute).
#SBATCH --mem-per-cpu=1gb        # The memory the job will use per cpu core.

echo "Hello World"
sleep 10
date

# End of script


Job Submission


If this script is saved as helloworld.sh you can submit it to the cluster with:


$ sbatch helloworld.sh


This job will create one output file named slurm-####.out, where #### is replaced by the job ID assigned by Slurm. If all goes well, the file will contain the words "Hello World" and the current date and time.
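
While the job is waiting or running, you can check its status with squeue, and you can view the output file once it finishes. Replace <UNI> with your UNI and #### with the job ID reported by sbatch.

$ squeue -u <UNI>        # list your queued and running jobs
$ cat slurm-####.out     # view the job's output after it completes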


See the Slurm Quick Start Guide for a more in-depth introduction to using the Slurm scheduler.

