Ginsburg: Getting Started

Getting Access

Access to the cluster is subject to formal approval by selected members of the participating research groups. See the HPC Service Webpage for more information on access options.

Introductory HPC Training and Resources

If you're not familiar with basic Linux commands and usage, or if you need a refresher on these topics, please refer to the following resources from our workshop series:

  1. Intro to Linux: workshop recording

  2. Intro to Bash Shell Scripting: workshop recording

  3. Intro to Python for HPC: workshop recording

  4. Intro to High Performance Computing: workshop recording

 

Logging In



You will need to use SSH (Secure Shell) to access the cluster. Windows users can use PuTTY or Cygwin; macOS users can use the built-in Terminal application.



Users log in to the cluster's submit node at ginsburg.rcs.columbia.edu (or the shorter form burg.rcs.columbia.edu). If logging in from a command line, type:



$ ssh <UNI>@ginsburg.rcs.columbia.edu
  or
$ ssh <UNI>@burg.rcs.columbia.edu



where <UNI> is your Columbia UNI. Please make sure not to include the angle brackets ('<' and '>') in your command; they only indicate that UNI is a placeholder.



When prompted, enter your usual Columbia password.
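
If you connect frequently, you can optionally define a host alias in your local OpenSSH client configuration. The sketch below is an assumption about your local setup (OpenSSH on macOS or Linux); the alias name "ginsburg" is arbitrary, and <UNI> should be replaced with your own UNI:

# ~/.ssh/config
Host ginsburg
    HostName ginsburg.rcs.columbia.edu
    User <UNI>

With this entry in place, typing "ssh ginsburg" is equivalent to the full command shown above.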



Submit Account



You must specify your account whenever you submit a job to the cluster. You can use the following table to identify the account name to use.



Account               Full Name
anastassiou           Anastassiou Lab
apam                  Applied Physics and Applied Math
asenjo                Asenjo-Garcia Lab
astro                 Astronomy and Astrophysics
berkelbach            Berkelbach Group
biostats              Biostats
ccce                  Columbia Center for Computational Electrochemistry
cgl                   Biomedical Engineering
dslab                 Shohamy Lab
dsi                   Data Science Institute
edru                  Karin Foerde
e3b                   Department of E3B
ehsmsph               Environmental Health Sciences Mailman School of Public Health
emlab                 Electromagnetic Geophysics Laboratory
gsb                   Business School
hblab                 Harmen Bussemaker Lab
iicd                  Irving Institute for Cancer Dynamics
jalab                 Austermann Lab
jhucbsr               Jianhua Hu Biostatistics
kellylab              Shaina Kelly Lab
katt3                 Computer Science
millis                Physics
myers                 Myers Lab
mjlab                 Biological Sciences
morphogenomics-lab    Bianca M. Dumitrascu
ntar_lab              Neurotrauma and Repair Lab (Morrison)
abernathey            Ocean Climate Physics: Abernathey
camargo               Ocean Climate Physics: Camargo
fiore                 Ocean Climate Physics: Fiore
glab                  Ocean Climate Physics: Gentine
mckinley              Ocean Climate Physics: McKinley
oshaughnessy          Ben O'Shaughnessy, Dept. Chemical Engineering
seager                Ocean Climate Physics: Seager
sobel                 Ocean Climate Physics: Sobel
ting                  Ocean Climate Physics: Ting
wu                    Ocean Climate Physics: Wu
qmech                 Quantum Mechanics: Marianetti
rent                  Rent
sail                  Schiminovich Astronomy & Instrumentation Lab
seasdean              School of Engineering and Applied Science
sscc                  Social Science Computing Committee
stats                 Statistics
stock                 Stockwell Lab
thea                  Sironi / Beloborodov
theory                Theoretical Neuroscience: Abbott Lab
tosches               Tosches Lab
urbangroup            Urban Group
vedula                Vijay Vedula
zi                    Zuckerman Institute
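
Once you know your account name from the table above, you pass it to Slurm with the --account (or -A) option. A minimal sketch, using "stats" purely as an illustrative account name and "my_job.sh" as a hypothetical script (substitute your own values):

$ sbatch --account=stats my_job.sh     # specify the account on the command line

or, inside the submit script itself:

#SBATCH --account=stats                # specify the account as a Slurm directive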



Your First Cluster Job

When you first log in to Ginsburg, you are on a login node. Login nodes are not intended for actual work beyond simple tasks such as editing a file or creating new folders.

Instead, it is important to move from the initial login node to a compute node before doing most work. Example:

srun --pty -t 0-2:00 -A <ACCOUNT> /bin/bash

Now you have moved from the login node to one of the cluster's compute nodes. The simple tasks mentioned above can also be done here, but more importantly, this is where you should run your work and submit scripts for processing.
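
For example, you can confirm that you have landed on a compute node and then return to the login node when you are done (the account name "stats" below is purely illustrative; use your own account from the table above):

$ srun --pty -t 0-2:00 -A stats /bin/bash
$ hostname      # prints the name of the compute node you are now on
$ exit          # leave the compute node and return to the login node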

If the HPC group notices jobs being run on a login node, such jobs will be terminated and the user notified.

Submit Scripts



The script below prints "Hello World", sleeps for 10 seconds, and then prints the time and date. The output is written to a file in your current directory.



For this example to work, you need to replace ACCOUNT with your group account name. If you don't know your account name, see the table in the previous section.



#!/bin/sh
#
# Simple "Hello World" submit script for Slurm.
#
# Replace ACCOUNT with your account name before submitting.
#
#SBATCH --account=ACCOUNT        # Replace ACCOUNT with your group account name
#SBATCH --job-name=HelloWorld    # The job name
#SBATCH -N 1                     # The number of nodes to request
#SBATCH -c 1                     # The number of cpu cores to use (up to 32 cores per server)
#SBATCH --time=0-0:30            # The time the job will take to run in D-HH:MM
#SBATCH --mem-per-cpu=5G         # The memory the job will use per cpu core

echo "Hello World"
sleep 10
date
# End of script



Job Submission



If this script is saved as helloworld.sh, you can submit it to the cluster with:



$ sbatch helloworld.sh



This job will create one output file named slurm-####.out, where the #'s are replaced by the job ID assigned by Slurm. If all goes well, the file will contain the words "Hello World" and the current date and time.
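
A typical submission and follow-up looks like the sketch below; the job ID 123456 is purely illustrative, and yours will differ:

$ sbatch helloworld.sh
Submitted batch job 123456
$ squeue -u $USER            # check whether the job is pending or running
$ cat slurm-123456.out       # view the output once the job has finished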



See the Slurm Quick Start Guide for a more in-depth introduction to using the Slurm scheduler.