
Hardware

A list of the currently available hardware.

If you are looking for information on maximum resource requests, see Job Limits.

Compute Nodes

Your jobs will land on appropriately sized nodes automatically, based on your CPU-to-memory ratio. For example, in the Genoa partition:

  • A job requesting ≤ 2 GB/core will run on a 2 GB/core node, or if full, a 4 GB/core node.
  • A job requesting ≤ 4 GB/core will run on a 4 GB/core node, or if full, an 8 GB/core node.

And so on. You will always get the amount of memory you requested, even if running on a node with a higher ratio.
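The placement rule above can be sketched as a small script. This is only an illustration: the tier values (2, 4 and 8 GB/core) match the Genoa example, and the real scheduler's placement logic, including the fall-through to a higher tier when nodes are full, is more involved.

```shell
#!/bin/bash
# pick_tier: given a job's requested GB per core (whole number),
# print the smallest node tier (GB/core) that covers the request.
# Tier values are illustrative, taken from the Genoa example above.
pick_tier() {
  local request=$1
  for tier in 2 4 8; do
    if (( request <= tier )); then
      echo "$tier"
      return 0
    fi
  done
  echo "none"   # request exceeds every tier; no node ratio fits
}

pick_tier 2   # prints 2 (fits the 2 GB/core tier exactly)
pick_tier 3   # prints 4 (too big for 2 GB/core, fits 4 GB/core)
```

A request at or below a tier's ratio lands on that tier; anything larger moves up to the next one, which mirrors the bullet points above.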

| Architecture | Cores | Memory | GPGPU | Nodes |
| --- | --- | --- | --- | --- |
| 2 x AMD Milan 7713 CPU (8 chiplets x 8 cores each) | 126 | 512GB (4GB/core) | - | 54 |
| | | 1024GB (8GB/core) | - | 8 |
| | | | 4 x NVIDIA HGX A100 | 4 |
| 2 x AMD Genoa 9634 CPU (12 chiplets x 7 cores each) | 166 | 358GB (2GB/core) | - | 44 |
| | | 716GB (4GB/core) | 2 x NVIDIA A100 | 4 |
| | | 1432GB (8GB/core) | - | 8 |
| | | | 2 x NVIDIA H100 | 4 |
| | | | 4 x NVIDIA L4 | 4 |

GPGPUs

NeSI has a range of Graphics Processing Units (GPUs) to accelerate compute-intensive research and support more analysis at scale.

Depending on the type of GPU, you can access them in different ways, such as via the batch scheduler (Slurm) or via Virtual Machines (VMs).

For information about how to request these GPUs in a Slurm job, see Using GPUs.
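As a rough illustration, a GPU request in a Slurm batch script looks like the following. The GPU type string (`A100` here) and the resource values are assumptions for the sketch; see Using GPUs for the syntax supported on each partition.

```sh
#!/bin/bash -e
#SBATCH --job-name=gpu-job        # illustrative job name
#SBATCH --gpus-per-node=A100:1    # request one A100; type string is an assumption
#SBATCH --time=00:10:00
#SBATCH --mem=8G

# Print details of the GPU allocated to the job
nvidia-smi
```

Submitting this with `sbatch` queues the job on a GPU-equipped node matching the requested type.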

| Architecture | Purpose/Note | VRAM | GPUs on Node | Node | Nodes |
| --- | --- | --- | --- | --- | --- |
| NVIDIA A100 | Machine Learning | 80GB | 4 | Milan | 4 |
| NVIDIA A100 | Machine Learning | 40GB | 2 | Genoa | 4 |
| NVIDIA H100 | Large-scale Machine Learning | 96GB | 2 | Genoa | 4 |
| NVIDIA L4 | No fp64 double precision | 24GB | 4 | Genoa | 4 |
| NVIDIA A40 | Teaching / training | 48GB | - | RDC | 4 |

If you have any questions about the hardware or the status of anything listed in the tables above, contact our Support Team.