HPC Overview
The primary, general-purpose compute cluster at C2B2 is now called "Ganesha." This HPC cluster is a Linux-based (Rocky 9.4) compute cluster consisting of 62 Dell servers, 2 head nodes, a virtualized pool of login (submit) nodes, and 8 Weka nodes. The nodes fit in a dense configuration in 9 high-density racks and are cooled by dedicated rack refrigeration systems.
The cluster comprises:
20 compute nodes, each with 192-core processors and 768 GB of memory
2 nodes with 192 cores and 1.5 TB of memory
40 GPU nodes, each featuring 2 NVIDIA L40S GPU cards, 192-core processors, and 768 GB of memory
1 GPU node with a GH200 Superchip (ARM architecture), 1 GPU, and 570 GB of memory
Each node has a 25 Gbps Ethernet connection and 100 Gbps HDR InfiniBand. Additionally, a set of login nodes running on Proxmox virtualization provides a pool of virtual login nodes for user access to this and other systems.
Like our previous clusters, this system is controlled by SLURM. Storage for the cluster is provided exclusively by our Weka parallel filesystem, with over 1 PB of total capacity.
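Since the cluster is controlled by SLURM, work is submitted as batch scripts. A minimal sketch of such a script is shown below; the job name, resource requests, and time limit are illustrative assumptions, not site defaults, so adjust them to your workload (and check `sinfo` for the partitions actually configured):

```shell
#!/bin/bash
#SBATCH --job-name=example        # job name shown in squeue (illustrative)
#SBATCH --ntasks=1                # run a single task
#SBATCH --cpus-per-task=4         # CPU cores for that task
#SBATCH --mem=8G                  # memory for the job
#SBATCH --time=01:00:00           # walltime limit (HH:MM:SS)

# Commands below run on the allocated compute node.
echo "Job running on $(hostname)"
```

Submit with `sbatch script.sh` and monitor with `squeue -u $USER`. A GPU job on the L40S nodes would additionally request a device, for example with `--gres=gpu:1`.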
If you're experiencing issues with the cluster, please reach out to dsbit-help@cumc.columbia.edu for support. To facilitate a quick and precise response, be sure to include the following in your email:
Your Columbia University ID (UNI)
Job ID numbers (if your issue is related to a specific job)