Hardware

Terremoto has 110 total execute nodes, consisting of:

  • 92 Standard Nodes
  • 10 High Memory Nodes
  • 8 GPU Nodes

Standard Nodes

Terremoto has 92 Standard Nodes with the following specifications:


Model: Dell C6420
CPU: Intel Xeon Gold 6126, 2.6 GHz
Number of CPUs: 2
Cores per CPU: 12
Total cores: 24
Memory: 192 GB
Network: EDR InfiniBand


High Memory Nodes


Terremoto's 10 high memory nodes have 768 GB of memory each. They are otherwise identical to the standard nodes.
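One common way to land on a high memory node is simply to request more memory than a standard node can provide. The directives below are an illustrative sketch; the account name and memory amount are placeholders, and the exact flags your group should use depend on the site configuration.

```shell
#!/bin/sh
# Illustrative Slurm directives for a high-memory job.
# "myaccount" is a hypothetical account name.
#SBATCH --account=myaccount
#SBATCH --mem=500G          # more than the 192 GB on a standard node
#SBATCH --time=12:00:00

./my_large_memory_program   # hypothetical executable
```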


GPU Nodes


Terremoto has 8 GPU nodes, each with two Nvidia V100 GPU modules. They are otherwise identical to the standard nodes.
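A GPU is typically requested through Slurm's generic resource (GRES) mechanism. The sketch below assumes a standard `--gres=gpu:N` configuration; the account name and module name are placeholders.

```shell
#!/bin/sh
# Illustrative Slurm script requesting one of a node's two V100 GPUs.
# "myaccount" is a hypothetical account name.
#SBATCH --account=myaccount
#SBATCH --gres=gpu:1        # request 1 GPU (each GPU node has 2)
#SBATCH --time=04:00:00

module load cuda            # module name is an assumption; check "module avail"
nvidia-smi                  # report the GPU assigned to this job
```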

Storage


430 TB of GPFS parallel file system storage is used for scratch space and home directories.

Network

The login and transfer nodes connect to the Internet over 10 Gb/s Ethernet. Compute nodes are interconnected with 100 Gb/s EDR InfiniBand.

Scheduler


Terremoto uses the Slurm scheduler to manage jobs.
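A minimal Slurm batch script looks like the following. The account name is a placeholder; resource requests (cores, memory, wall time) feed directly into the scheduling decisions described below.

```shell
#!/bin/sh
# Minimal Slurm batch script; "myaccount" is a hypothetical account name.
#SBATCH --account=myaccount
#SBATCH --job-name=test
#SBATCH --cpus-per-task=1
#SBATCH --time=00:10:00
#SBATCH --mem-per-cpu=4G

echo "Running on $(hostname)"
```

Submit the script with `sbatch myjob.sh` and monitor it with `squeue -u $USER`.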

Fair share

Resource allocation on our cluster is based on each group's contribution of compute cores. The Slurm scheduler uses fair share targets and historical resource utilization to determine when jobs are scheduled to run. Within a group, priority is also based on historical usage, so heavier users have lower priority than lighter users. Slurm uses all of a job's attributes, such as wall time, resource constraints, and group membership, to determine the order in which jobs are run.

Using job data such as requested wall time and resources, the backfill scheduler can start other, lower-priority jobs as long as they do not delay the highest-priority jobs. Because backfill works by essentially filling in holes in node space, it tends to favor smaller, shorter-running jobs over larger, longer-running ones.
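Standard Slurm commands can show how fair share and backfill are affecting your jobs; output columns vary with the Slurm version and site configuration.

```shell
# Inspect job priority and fair-share standing (Slurm commands;
# these must be run on the cluster's login nodes).
sprio -l                  # per-job priority factors, including fair share
sshare -U                 # your user's fair-share usage and target
squeue -u $USER --start   # estimated start times for your pending jobs
```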

There is no preemption in the current system; a job in the queue will never interrupt or stop a running job.
