
The primary, general-purpose compute cluster at C2B2 is now called "Ganesha." This HPC cluster is a Linux-based (Rocky 9.4) compute cluster consisting of 62 Dell servers, 2 head nodes, a virtualized pool of login (submit) nodes, and 8 Weka nodes. The nodes fit in a dense configuration in 9 high-density racks and are cooled by dedicated rack refrigeration systems.

The cluster comprises:

  • 20 compute nodes, each with 192 cores and 768 GB of memory

  • 2 nodes with 192 cores and 1.5 TB of memory

  • 40 GPU nodes, each featuring 2 NVIDIA L40S GPU cards, 192 cores, and 768 GB of memory

  • 1 GPU node with an NVIDIA GH200 Grace Hopper Superchip (ARM architecture), 1 GPU, and 570 GB of memory

Each node has a 25 Gbps Ethernet connection and 100 Gbps HDR InfiniBand. Additionally, a pool of virtual login nodes running on Proxmox virtualization provides user access to this and other systems.

Like our previous clusters, this system is controlled by SLURM. Storage for the cluster is provided exclusively by our Weka parallel filesystem, with over 1 PB of total capacity.
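As a minimal sketch of submitting work under SLURM (the job name, resource amounts, and script name below are illustrative placeholders, not confirmed Ganesha settings), a batch job script might look like this:

    #!/bin/bash
    #SBATCH --job-name=example            # name shown in squeue output
    #SBATCH --ntasks=1                    # one task
    #SBATCH --cpus-per-task=4             # 4 of a node's 192 cores
    #SBATCH --mem=16G                     # memory for the job
    #SBATCH --time=01:00:00               # one-hour walltime limit
    ##SBATCH --gres=gpu:1                 # uncomment to request a GPU (e.g., an L40S node)

    srun python my_script.py              # my_script.py is a placeholder for your own program

Submit the script with "sbatch myjob.sh" and check its status with "squeue -u $USER". See the Submitting Jobs and Job Examples pages linked below for the cluster's actual partitions and recommended options.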

For assistance with cluster-related issues, please email dsbit-help@cumc.columbia.edu, including the following details in your message:

  • Your Columbia University Network ID (UNI)

  • Job ID numbers, if your inquiry pertains to a specific job issue

This information will help ensure a prompt and accurate response to your cluster-related questions.
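If you do not have a job ID at hand, the standard SLURM commands will report it (a quick sketch; the accounting query assumes SLURM job accounting is enabled on the cluster):

    squeue -u $USER        # job IDs of your queued and running jobs
    sacct -u $USER         # job IDs of jobs that already ran today (accounting records)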

Getting Started

Job Examples

Research Products

Available software

Storage

Submitting Jobs

Technical Information

Working on Ginsburg
