Note: This guide provides an introduction to the SLURM job scheduler and its use on the c2b2 clusters, which comprise:
8 compute nodes, most with 20-core processors and 128 GB of memory
Some nodes have 192 cores and 1.5 TB of memory
1 GPU node featuring 2 NVIDIA L40S GPU cards
1 GPU node with an NVIDIA GH200 Grace Hopper Superchip (ARM architecture), 1 GPU, and 570 GB of memory
This guide will help you get started with using SLURM on these clusters.
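As a first step, a minimal batch script like the sketch below can be submitted with `sbatch`. This is an illustrative example only: the partition name and the program being run are placeholders, not actual c2b2 settings; check `sinfo` and the cluster documentation for the real partition names and resource limits.

```bash
#!/bin/bash
# Minimal SLURM batch script (illustrative sketch; partition name is a placeholder)
#SBATCH --job-name=hello          # job name shown in the queue
#SBATCH --partition=<partition>   # replace with a partition listed by `sinfo`
#SBATCH --nodes=1                 # run on a single node
#SBATCH --ntasks=1                # one task (process)
#SBATCH --cpus-per-task=1         # one CPU core for that task
#SBATCH --mem=4G                  # 4 GB of memory for the job
#SBATCH --time=00:10:00           # wall-clock limit of 10 minutes
#SBATCH --output=%x_%j.out        # output file named from job name and job ID

# Print basic information about where the job ran
echo "Running on $(hostname)"
srun echo "Hello from SLURM job $SLURM_JOB_ID"
```

Save the script (for example as `hello.slurm`), submit it with `sbatch hello.slurm`, check its status with `squeue -u $USER`, and list available partitions and node resources with `sinfo`.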