About HPC Cluster

The new HPC cluster at C2B2 is a Linux-based (Rocky 9.4) compute cluster consisting of 62 Dell servers, 2 head nodes, a virtualized pool of login (submit) nodes, and 8 Weka storage nodes. It is designed with the goal of running compute-intensive AI workloads.

...

Info

This HPC cluster exclusively accepts MC credentials for authentication. However, to access the cluster, you also need an active HPC account with C2B2. If you don't have an account, please reach out to dsbit_help@cumc.columbia.edu to request one.

Getting Access

In order to get access to this HPC cluster, every research group needs to establish a PI account using an MoU-SLA agreement, which can be downloaded here: DSBIT-MOU-SLA.pdf. This document provides further details about modalities, rights and responsibilities, charges, etc.

Logging In

You will need to use SSH in order to access the cluster. Windows users can use PuTTY, Cygwin, or MobaXterm; macOS users can use the built-in Terminal application.

...

$ ssh <UNI>@hpc.c2b2.columbia.edu
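
For convenience, you can optionally add a host alias to your local ~/.ssh/config so that a short name expands to the full cluster address. This is a minimal sketch using standard OpenSSH client options; the alias name c2b2 is only an example, and <UNI> is your UNI as above.

Host c2b2
    HostName hpc.c2b2.columbia.edu
    User <UNI>

With this entry in place, you can connect with:

$ ssh c2b2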

Interactive login to Compute Node

All users access the HPC resources via a login node. These nodes are meant for basic tasks like editing files or creating new directories, not for heavy workloads. If you need to perform heavy-duty tasks in an interactive mode, you must open an interactive shell session on a compute node using SLURM's srun command, as in the example below. See the SLURM User Guide link in the side navigation to learn more about SLURM.

srun --pty -t 1:00:00 /bin/bash
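
Once the session starts, your shell is running on a compute node rather than the login node. As a quick sanity check, you can print the node's hostname and list your jobs with SLURM's squeue:

hostname
squeue -u $USER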

Interactive login on GPU node

srun -p gpu --gres=gpu:L40S:1 --mem=8G --pty /bin/bash
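
Inside the GPU session, you can confirm that a GPU was actually allocated with NVIDIA's nvidia-smi utility (assuming the NVIDIA drivers and tools are available on the compute node's default path):

nvidia-smi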

Interactive login on GPU node with memory and time limit

srun -n 1 --time=01:00:00 -p gpu --gres=gpu:L40S:1 --mem=10G --pty /bin/bash
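
The same resources can also be requested non-interactively by wrapping your commands in a batch script and submitting it with SLURM's sbatch. The script below is a minimal sketch, reusing the gpu partition and L40S GPU type from the examples above; the job name, script name, and python command are placeholders for your own workload.

#!/bin/bash
#SBATCH --job-name=gpu_example       # placeholder job name
#SBATCH -p gpu                       # GPU partition, as in the srun examples above
#SBATCH --gres=gpu:L40S:1            # request one L40S GPU
#SBATCH --mem=10G                    # memory limit
#SBATCH --time=01:00:00              # time limit

# Replace with your actual workload
python my_training_script.py

Submit the script and monitor it with:

sbatch gpu_example.sh
squeue -u $USER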