Note: This guide provides an introduction to the SLURM job scheduler and its use on the c2b2 clusters. The clusters comprise:

  • 8 compute nodes with 20-core processors and 128 GB of memory

  • Some nodes with 192 cores and 1.5 TB of memory

  • 1 GPU node featuring 2 NVIDIA L40S GPU cards

  • 1 GPU node with an NVIDIA GH200 Grace Hopper Superchip (ARM architecture), 1 GPU, and 570 GB of memory

This guide will help you get started using SLURM on these clusters.
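
As a first taste, below is a minimal SLURM batch script that requests a few cores and some memory on a compute node. The partition name is an assumption for illustration; check the partitions actually available with sinfo and substitute accordingly:

    #!/bin/bash
    #SBATCH --job-name=hello          # job name shown in the queue
    #SBATCH --partition=compute       # assumed partition name; verify with sinfo
    #SBATCH --nodes=1                 # run on a single node
    #SBATCH --ntasks=1                # one task
    #SBATCH --cpus-per-task=4         # 4 cores out of a node's 20
    #SBATCH --mem=8G                  # 8 GB of a node's 128 GB
    #SBATCH --time=00:10:00           # wall-clock limit (HH:MM:SS)

    echo "Running on $(hostname)"

Save this as hello.sh, submit it with sbatch hello.sh, and check its status with squeue -u $USER. To target a GPU node (for example the L40S node), add a line such as #SBATCH --gres=gpu:1; the exact GRES name depends on how the cluster is configured.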
