Available Resources
Compute Nodes
Your jobs will automatically land on appropriately sized nodes based on their CPU-to-memory ratio. For example, in the genoa partition:
- A job which requests ≤ 2 GB/core will run on the 44 Genoa nodes which have 2 GB/core or, if those are full, on the 4 GB/core nodes.
- A job which requests ≤ 4 GB/core will run on the 4 Genoa nodes which have 4 GB/core or, if those are full, on the 8 GB/core nodes.
- A job which requests > 4 GB/core will run on the 16 Genoa nodes which have 8 GB/core.
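The placement rules above are driven by the job's `--mem-per-cpu` request. The following is a minimal sketch of a Slurm batch script: the partition name comes from the text above, but the job name, core count, and program are placeholders.

```shell
#!/bin/bash -e
# Sketch of a Slurm batch script whose memory request (2 GB per core)
# makes it eligible for the 2 GB/core Genoa nodes first.
#SBATCH --job-name=ratio-demo     # placeholder name
#SBATCH --partition=genoa         # partition discussed above
#SBATCH --cpus-per-task=8         # placeholder core count
#SBATCH --mem-per-cpu=2G          # CPU-to-memory ratio drives node selection

srun ./my_program                 # placeholder for the real workload
```

Requesting `--mem-per-cpu=3G` instead would make the job eligible for the 4 GB/core nodes, since 3 GB/core exceeds what the 2 GB/core nodes provide.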
| Architecture | Cores | Memory | GPGPU | Nodes |
| --- | --- | --- | --- | --- |
| 2 x AMD Milan 7713 CPU<br>└ 8 x Chiplets<br>&nbsp;&nbsp;└ 8 x Cores | 126 | 512GB (2GB / Core) | - | 48 |
| | | 512GB (2GB / Core) | 1 x NVIDIA A100 | 4 |
| | | 512GB (2GB / Core) | 2 x NVIDIA A100 | 2 |
| | | 512GB (2GB / Core) | 4 x NVIDIA HGX A100 | 4 |
| | | 1024GB (4GB / Core) | - | 8 |
| 2 x AMD Genoa 9634 CPU<br>└ 12 x Chiplets<br>&nbsp;&nbsp;└ 7 x Cores | 166 | 358GB (1GB / Core) | - | 28 |
| | | 358GB (1GB / Core) | 2 x NVIDIA H100 | 4 |
| | | 358GB (1GB / Core) | 4 x NVIDIA L4 | 4 |
| | | 1024GB (2GB / Core) | - | 4 |
| | | 1432GB (4GB / Core) | - | 16 |
GPGPUs
NeSI has a range of Graphics Processing Units (GPUs) to accelerate compute-intensive research and support more analysis at scale. Depending on the type of GPU, you can access them in different ways: via the batch scheduler (Slurm), interactively (using Jupyter on NeSI), or through Virtual Machines (VMs).
The table below outlines the different types of GPUs, who can access them and how, and whether they are currently available or on the future roadmap.
If you have any questions about GPUs on NeSI or the status of anything listed in the table, contact our Support Team.
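As a sketch of Slurm-based GPU access (the GPU type string `A100`, the resource counts, and the program name are assumptions; the exact GRES type names depend on the cluster configuration):

```shell
#!/bin/bash -e
# Sketch of a Slurm batch script requesting one GPU.
# "A100" is an assumed type name; check your cluster's GRES configuration.
#SBATCH --job-name=gpu-demo       # placeholder name
#SBATCH --gpus-per-node=A100:1    # one A100 GPU (type string assumed)
#SBATCH --cpus-per-task=4         # placeholder core count
#SBATCH --mem-per-cpu=4G          # placeholder memory request

srun ./my_gpu_program             # placeholder for the real workload
```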
| Architecture | Purpose/Note | VRAM | SLURM | Quantity |
| --- | --- | --- | --- | --- |
| NVIDIA A100 PCIe cards | Machine Learning (ML) applications | 40GB | Milan, 1 x A100 | 4 |
| | | | Milan, 2 x A100 | 2 |
| NVIDIA HGX A100 | Large-scale Machine Learning applications | 80GB | Milan, 4 x HGX A100 | 4 |
| NVIDIA H100 | Large-scale Machine Learning (ML) applications | 96GB | Genoa, 2 x H100 | 8 |
| NVIDIA L4 | No fp64 double precision | 24GB | Genoa, 4 x L4 | 16 |
| NVIDIA A40 | Teaching / training | 48GB | Flexible HPC (not accessible by Slurm) | 4 |