CUIT High Performance Computing (HPC)


Introduction

Welcome to the home page for Columbia University's High-Performance Computing (HPC) resources!

The CUIT High-Performance Computing service provides powerful cluster resources that support data-intensive and computationally demanding research across numerous departments and groups at the University. The clusters are administered and supported by the CUIT HPC team and governed by the faculty-led Shared Research Computing Policy Advisory Committee (SRCPAC).

NOTE: Sensitive data is not allowed on any of our clusters. Please reference the University Data Classification Policy.


Getting Access

Access to the cluster is subject to formal approval by selected members of the participating research groups. See the CUIT HPC webpage under “Make a Request” for more information on access options.

Cluster User Documentation

Workshop & Training

Spring 2026 Training Series

Register for our revamped training series to get the most out of your time on the cluster:

Training Series Recordings

If you're not familiar with basic Linux commands and usage, or if you need a refresher on these topics, please refer to the following resources from our workshop series:

  1. Intro to Linux (workshop recording)

  2. Intro to Bash Shell Scripting (workshop recording)

  3. Intro to Python for HPC (workshop recording)

  4. Intro to High Performance Computing (workshop recording)

Other resource links

Featured Links

Getting Help

Interactive Job Policy

Maintenance Schedule

New Queue System

Available Resources

Insomnia

Insomnia went live in February 2024 and was initially a joint purchase by 21 research groups and departments. Unlike its predecessors, this high performance computing design allows researchers to purchase not only a full node but also half or a quarter of a node on the cluster.

Insomnia is faculty-governed by the cross-disciplinary SRCPAC and is administered and supported by CUIT’s High Performance Computing team.

Insomnia uses a new design intended to allow indefinite expansion of Columbia's shared high performance computing cluster, with new hardware and capabilities added as needed. It is a perpetual cluster: individual hardware will be retired after five (5) years.

All of Insomnia's high performance computing servers are equipped with Dual Intel Xeon Platinum 8460Y processors (2 GHz):

  • Total nodes: 90

    • Standard nodes: 41

    • High-memory nodes: 19

    • Total GPU nodes: 30

      • A6000 x 8 GPU nodes: 13

      • A6000 x 4 GPU nodes: 2

      • H100 x 2 GPU nodes: 3

      • L40 x 2 GPU nodes: 3

      • L40S x 2 GPU nodes: 9

  • Total storage: 

    • 2.282 PB GPFS filesystem

  • CPU resources:

    • Physical cores: 7,144

    • Logical CPUs (including hyperthreads): 14,288

  • Total memory: 

    • ~58.3 TiB

  • HDR InfiniBand

  • Red Hat Enterprise Linux 9.3

  • Slurm job scheduler

See the Insomnia cluster documentation for more information.
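
Since Insomnia uses the Slurm job scheduler, work is normally submitted as a batch script. The sketch below is a minimal, illustrative example of requesting a single GPU; the account name, resource values, and program name are placeholders (not taken from this page), so consult the cluster user documentation for the settings that apply to your group.

    #!/bin/bash
    #SBATCH --job-name=gpu_test        # job name shown in the queue
    #SBATCH --account=hpcdemo          # placeholder: replace with your group's Slurm account
    #SBATCH --gres=gpu:1               # request one GPU on a GPU node
    #SBATCH --cpus-per-task=4          # CPU cores for the task
    #SBATCH --mem=16G                  # memory for the job
    #SBATCH --time=01:00:00            # wall-clock limit of one hour

    # Show which GPU was assigned, then run the (placeholder) program.
    nvidia-smi
    ./my_gpu_program

Save the script as, for example, gpu_test.sh, submit it with "sbatch gpu_test.sh", and check its status with "squeue -u $USER".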

Ginsburg

The Ginsburg high performance computing cluster went live in February 2021 and was a joint purchase by 33 research groups and departments.

The cluster is faculty-governed by the cross-disciplinary SRCPAC and is administered and supported by CUIT’s High Performance Computing team.

Tentative retirement dates

  • Ginsburg Phase 1 retirement: February 2026

  • Ginsburg Phase 2 retirement: March 2027

  • Ginsburg Phase 3 retirement: December 2027

All of Ginsburg's high performance computing servers are equipped with Dual Intel Xeon Gold 6226R processors (2.9 GHz):

  • Total nodes: 286

    • Standard nodes: 191

    • High-memory nodes: 56

    • Total GPU nodes: 39

      • RTX 8000 x 2 GPU nodes: 18

      • V100S x 2 GPU nodes: 4

      • A100 x 2 GPU nodes: 8

      • A40 x 2 GPU nodes: 9

  • Total storage:

    • 1 PB of DDN ES7790 Lustre storage

  • CPU resources:

    • Physical cores: 4,576

    • Logical CPUs (including hyperthreads): 9,152

  • Total memory:

    • ~77.10 TB

  • HDR InfiniBand

  • Red Hat Enterprise Linux 8

  • Slurm job scheduler

See the Ginsburg cluster documentation for more information.
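
Both clusters also support interactive sessions through Slurm's srun command, subject to the Interactive Job Policy linked above. The lines below are an illustrative sketch only; the account name and resource values are placeholders rather than cluster defaults.

    # Request an interactive shell with 4 cores and 8 GB of memory for 30 minutes.
    # "hpcdemo" is a placeholder account name; use your own group's Slurm account.
    srun --account=hpcdemo --cpus-per-task=4 --mem=8G --time=00:30:00 --pty /bin/bash

    # Once the session starts, confirm which compute node the shell is running on.
    hostname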