
Getting Access

Access to the cluster is subject to formal approval by selected members of the participating research groups. See the HPC Service Webpage for more information on access options.


Introductory Linux Training and Resources

If you're not familiar with basic Linux commands and usage, or if you need a refresher on these topics, please refer to the resources from our workshop series.

For a list of recorded trainings and upcoming research computing workshops and events, please see:

https://www.cuit.columbia.edu/rcs/training

Logging In


You will need to use SSH (Secure Shell) to access the cluster. Windows users can use PuTTY or Cygwin. macOS users can use the built-in Terminal application.


Users log in to the cluster's submit node at insomnia.rcs.columbia.edu (or the shorter form som.rcs.columbia.edu). To log in from a command line, type:


$ ssh <UNI>@insomnia.rcs.columbia.edu



OR


$ ssh <UNI>@som.rcs.columbia.edu


where <UNI> is your Columbia UNI. Please make sure not to include the angle brackets ('<' and '>') in your command; they only indicate that UNI is a placeholder.


When prompted, enter your usual Columbia password.


Submit Account


You must specify your account whenever you submit a job to the cluster. You can use the following table to identify the account name to use.


Account         Full Name
5sigma          Biostatistics
astro           Columbia Astrophysics Lab
berkelbach      Chemistry
cboyce          Chemical Engineering
cklab           IEOR
db              Computer Science
e3b             Ecology, Evolution and Environmental Biology
esma            SIPA-CGEP
exposomics      MSPH Exposomics
hill            Physics (Columbia Astrophysics Laboratory)
ieortang        Industrial Engineering and Operations Research (IEOR)
iicd            Irving Institute for Cancer Dynamics
intelseedfree   Special group with access to a non-NVIDIA GPU seed node from Intel. Email hpc-support@columbia.edu if interested in details.
mcilvain        Grace McIlvain Lab
mmsci           Astrophysics - Luca Comisso lab
morpheus        Bianca Dumitrascu Lab
msph            MSPH IT
neuralctrl      Laboratory for Neural Engineering and Control
ntar_lab        Biomedical Engineering (Morrison)
pas_lab         Biological Sciences
ueil            Biomedical Engineering (Konofagou)
qmech           Quantum Mechanics/Applied Physics and Applied Math: Marianetti
sscc            Social Science Computing Committee (ISERP, Econ, and CPRC)
tekle_smith     Chemistry Dept - Tekle Smith group
xulab           Earth and Environmental Engineering
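

For example, the account can be supplied either on the sbatch command line or as a directive inside your submit script. The sketch below assumes a hypothetical account name "astro" and script name "myjob.sh"; substitute your own account from the table above and your own script name:

$ sbatch --account=astro myjob.sh

or, inside the script itself:

#SBATCH --account=astro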


Your First Cluster Job

While this is best practice on all Columbia HPC group clusters, it is particularly important on Insomnia to move from the initial login node to a compute node before doing most work. For example:

$ srun --pty -t 0-2:00 -A <ACCOUNT> /bin/bash

After running this command, you have moved from the login node to one of the cluster's compute nodes. Simple tasks such as editing a file or creating directories can be done on either, but as you run more complicated jobs on Insomnia, some things simply will not work unless you are on a compute node first.
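
To confirm that you are on a compute node, and to return to the login node when you are done, you can use standard commands such as the following (a brief sketch):

$ hostname    # prints the name of the node you are currently on
$ exit        # ends the interactive session and returns you to the login node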

An Example Submit Script


This script will print "Hello World", sleep for 10 seconds, and then print the time and date. The output will be written to a file in your current directory.


For this example to work, you need to replace ACCOUNT with your group account name. If you don't know your account name, the table in the previous section will help.


#!/bin/sh
#
# Simple "Hello World" submit script for Slurm.
#
# Replace ACCOUNT with your account name before submitting.
#
#SBATCH --account=ACCOUNT        # Replace ACCOUNT with your group account name 
#SBATCH --job-name=HelloWorld    # The job name
#SBATCH -N 1                     # The number of nodes to request
#SBATCH -c 1                     # The number of cpu cores to use (up to 32 cores per server)
#SBATCH --time=0-0:30            # The time the job will take to run in D-HH:MM
#SBATCH --mem-per-cpu=5G         # The memory the job will use per cpu core

echo "Hello World"
sleep 10
date

# End of script


Job Submission


If this script is saved as helloworld.sh, you can submit it to the cluster with:


$ sbatch helloworld.sh


This job will create one output file named slurm-####.out, where the #'s will be replaced by the job ID assigned by Slurm. If all goes well, the file will contain the words "Hello World" followed by the current date and time.
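
While the job is queued or running you can check on it with squeue, and once it has finished you can inspect the output file with cat. In the sketch below, 123456 stands for whatever job ID sbatch reported, and <UNI> is your Columbia UNI:

$ squeue -u <UNI>          # show your pending and running jobs
$ cat slurm-123456.out     # view the output once the job has completed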


See our further documentation on submitting jobs. For much more in-depth information, there is a Slurm Quick Start Guide on the web.
