Māui

Māui is a Cray XC50 supercomputer featuring Skylake Xeon nodes, Aries interconnect and IBM ESS Spectrum Scale Storage. NeSI has access to 316 compute nodes on Māui.

Māui is designed as a capability high-performance computing resource for simulations and calculations that require large numbers of CPUs working in a tightly-coupled parallel fashion, as well as for interactive data analysis. To support workflows that are primarily single-core jobs, for example pre- and post-processing work, and to provide virtual lab services, we offer a small number of Māui ancillary nodes.

Tips

The computing capacity of the Māui ancillary nodes is limited. If you expect to need large amounts of computing power for small jobs in addition to the large jobs that run on Māui, please contact our Support Team about getting an allocation on Mahuika, our high-throughput computing cluster.

The login or build nodes maui01 and maui02 provide access to the full Cray Programming Environment (e.g. editors, compilers, linkers, debug tools). Typically, users will access these nodes via SSH from the NeSI lander node. Jobs can be submitted to the HPC from these nodes.
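
For example, a typical session might look like the following sketch. The lander hostname, partition name, and project code are illustrative assumptions and may differ for your account.

```bash
# Reach the Māui build nodes via the NeSI lander node (two SSH hops;
# the hostnames shown here are illustrative).
ssh your_username@lander.nesi.org.nz
ssh login.maui.nesi.org.nz        # lands on maui01 or maui02

# Write a minimal Slurm batch script. The partition name and project
# code are placeholders, not confirmed values.
cat > job.sl <<'EOF'
#!/bin/bash -e
#SBATCH --job-name=example
#SBATCH --account=nesi99999
#SBATCH --partition=nesi_research
#SBATCH --ntasks=80
#SBATCH --time=01:00:00
srun ./my_mpi_program
EOF

# Submit from the build node and check the queue.
sbatch job.sl
squeue --user="$USER"
```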

Important Notes

  1. The Cray Programming Environment on the XC50 (supercomputer) differs from that on Mahuika and the Māui Ancillary nodes (see the build sketch after this list).
  2. The /home, /nesi/project, and /nesi/nobackup file systems are mounted on Māui.
  3. The I/O subsystem on the XC50 can provide high bandwidth to disk (large amounts of data), but it is not designed for large numbers of separate read or write operations. If your code performs many individual disk reads or writes, run it on either the Māui ancillary nodes or Mahuika.
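
As a sketch of point 1, code intended for the XC50 compute nodes is built on maui01/maui02 with the Cray compiler wrappers, which pick up whichever PrgEnv module is currently loaded. The module names below are standard Cray Programming Environment modules; the source file names are placeholders.

```bash
# Swap the programming environment if needed (PrgEnv-cray is
# typically the default on a Cray XC50).
module swap PrgEnv-cray PrgEnv-gnu

# The Cray wrappers cc/CC/ftn drive the underlying C/C++/Fortran
# compilers and link MPI and other Cray libraries automatically.
cc  -O2 -o hello_c   hello.c
ftn -O2 -o hello_f90 hello.f90
```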

All Māui supercomputer resources are listed below; the Māui Ancillary Node resources are described on their own page.

Māui Supercomputer (Cray XC50)

Login nodes (also known as eLogin nodes): 80 cores in 2 × Skylake (Gold 6148, 2.4 GHz, dual-socket, 20 cores per socket) nodes
Compute nodes: 18,560 cores in 464 × Skylake (Gold 6148, 2.4 GHz, dual-socket, 20 cores per socket) nodes
Hyperthreading: enabled, so Slurm sees 37,120 logical cores (see the sketch after this list)
Theoretical peak performance: 1.425 PFLOPS
Memory capacity per compute node: 232 nodes have 96 GB each; the remaining 232 have 192 GB each
Memory capacity per login (build) node: 768 GB
Total system memory: 66.8 TB
Interconnect: Cray Aries, Dragonfly topology
Workload manager: Slurm (multi-cluster)
Operating system: Cray Linux Environment CLE7.0UP04; SUSE Linux Enterprise Server 15 SP3
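
Two practical consequences of the hyperthreading and workload manager entries above are sketched below: Slurm counts logical CPUs (two per physical core), so job scripts often bind two CPUs per task or restrict scheduling to physical cores; and because Slurm runs in multi-cluster mode, the target cluster can be named explicitly on the command line. The cluster name "maui" and the specific directives shown are illustrative assumptions.

```bash
# Directives for a hyperthreading-aware job: Slurm sees 2 logical
# CPUs per physical core, so either bind 2 CPUs to each task ...
#SBATCH --ntasks=40
#SBATCH --cpus-per-task=2
# ... or restrict scheduling to physical cores only:
##SBATCH --hint=nomultithread

# Multi-cluster mode: name the target cluster when submitting or
# querying (the cluster name "maui" is an assumption here).
sbatch --clusters=maui job.sl
squeue --clusters=maui --user="$USER"
```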

Storage (IBM ESS)

Scratch storage (accessible from all Māui, Mahuika, and Ancillary nodes): 4,412 TB (IBM Spectrum Scale, version 5.0); total I/O bandwidth to disk is 130 GB/s
Persistent storage (accessible from all Māui, Mahuika, and Ancillary nodes): 1,765 TB (IBM Spectrum Scale, version 5.0) shared storage, i.e. the /home and /nesi/project filesystems; total I/O bandwidth to disk is 65 GB/s
Offline storage (accessible from all Māui, Mahuika, and Ancillary nodes): of the order of 100 PB (compressed)