Preparing your code for use on NeSI's new HPC platform
Background¶
Since 2018, NeSI and its collaborators (University of Auckland, NIWA, University of Otago, Manaaki Whenua) have operated the current national HPC platform and underlying infrastructure, best known as Mahuika and Māui.
NeSI is refreshing its platform, and we anticipate migrating users in a staggered manner starting in July 2024. We will take every care to manage any changes that affect you and your work on NeSI’s HPC.
We anticipate teams might require assistance with getting ready, so we’re providing wrap-around support. This page provides an overview of how to familiarise yourself with infrastructure similar to the new environment in advance. We explain the ways it will differ from Māui and Mahuika's Broadwell nodes, and actions you may need to take to prepare your project for migration.
Below is a quick overview of some of the changes you need to be aware of when porting code from Māui to Mahuika:
Māui | Mahuika | Comments |
---|---|---|
NA | `module purge` | |
`module avail -S X` | `module spider X` | search for module X |
`module load PrgEnv-cray/6.0.10` | NA | no Cray compiler on Mahuika Milan nodes |
`module load craype-hugepages*M` | NA | |
`module load PrgEnv-intel` | `module load intel` | Intel MPI and Intel compilers |
`module load PrgEnv-gnu` | `module load gimkl` | Intel MPI and GNU compilers |
`ftn` | `ifort`, `gfortran`, `mpiifort` or `mpif90` | Fortran compiler; use `mpiifort` (Intel) or `mpif90` (GNU) if your code has MPI |
`CC` | `icpc`, `g++`, `mpiicpc` or `mpicxx` | C++ compiler; use `mpiicpc` (Intel) or `mpicxx` (GNU) if your code has MPI |
`cc` | `icc`, `gcc`, `mpiicc` or `mpicc` | C compiler; use `mpiicc` (Intel) or `mpicc` (GNU) if your code has MPI |
Test your code on Mahuika¶
The platform NeSI has selected to replace Mahuika is more similar to the Mahuika AMD Milan compute nodes than to nodes in other partitions. So, we'll be using the Milan nodes to identify any issues in advance, mitigating the risks of your subsequent migration to the new platform.
Some projects on Māui will move to the new NeSI hardware. These projects have been notified and given a small allocation on Mahuika, which the Māui users can use to validate that the software they need is available (or can be built) on the AMD Milan nodes and works as expected. All members of the project can use this Mahuika allocation.
To access Mahuika's AMD Milan nodes and submit jobs from any NeSI lander node or Māui login node, add `--partition=milan` to your Slurm script or `srun` command.
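For example, a minimal batch script might look like the sketch below; the job name, project code, resource requests, and program name are placeholders to replace with your own values.

```bash
#!/bin/bash -e
#SBATCH --job-name=milan-test     # placeholder job name
#SBATCH --account=nesi12345       # placeholder project code
#SBATCH --partition=milan         # target the AMD Milan nodes
#SBATCH --ntasks=8
#SBATCH --time=00:10:00

srun ./my_program                 # placeholder executable
```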
At any point, if you don't see what you need or something isn’t working, Contact our Support Team. We’re keen to ensure this early-stage validation process is as quick and painless as possible.
Porting your batch scripts¶
Environment Modules¶
The `module` command works much the same way on Mahuika, though it happens to be a different implementation ("Lmod") with a few extra features. You will probably find its extra search command `module spider` to be faster and more useful than the familiar `module avail`.
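For example, to search for a package and then see how to load a particular version (the package name and version here are only an illustration):

```bash
# List all available versions of a package
module spider OpenMPI

# Show what, if anything, must be loaded before a specific version
module spider OpenMPI/4.1.5
```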
If you currently use software on Māui that we have provided via environment modules, then please check to see if we have it installed on Mahuika (note that it is unlikely to be the same version) and let us know about anything that you can't find. If you compile your own software, then see below.
Slurm options¶
Partitions¶
There are several partitions available to NeSI jobs on Mahuika; however, for the purposes of migrating from Māui and future-proofing, we recommend the `milan` partition. As its name suggests, that partition has AMD Milan (Zen3) CPUs, while the rest of Mahuika has Intel Broadwell CPUs.
If for any reason you want to use any of the other Mahuika partitions, see Mahuika Slurm Partitions for an overview and Milan Compute Nodes for the differences between them and `milan`.
Shared nodes¶
Māui is scheduled by node while Mahuika is scheduled by core, so small jobs can share Mahuika nodes, while on Māui nodes are exclusively occupied by a single job at a time.
When submitting an MPI job you have (at least) three options:
- Request a number of tasks without worrying about which nodes they land on. That is OK for quick tests, but probably not optimal for real work: such jobs end up much more scattered across nodes than they would on Māui, which both increases dependence on the interconnect and fragments node resources.
- Request a number of tasks and a number (or range) of nodes.
- Request a number of nodes and a number of tasks per node. This is appropriate for most Māui-sized jobs and, by requesting all of the CPUs on a node, better isolates the job from contention with other jobs over socket-level or node-level resources such as memory bandwidth or the GPFS client.
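For the third option, a minimal sketch of the relevant Slurm directives is shown below; the node count is a placeholder, and the tasks-per-node value assumes whole 128-core Milan nodes.

```bash
#SBATCH --partition=milan
#SBATCH --nodes=2                # placeholder: number of whole nodes
#SBATCH --ntasks-per-node=128    # one MPI task per core on a 128-core Milan node
```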
Node sizes¶
Since most ex-Māui jobs will want to take whole nodes, it is important to be aware of the size of those nodes:
 | Māui | Mahuika (`milan` partition) |
---|---|---|
cores | 40 | 128 |
CPUs | 80 | 256 |
RAM | 90 or 180 GB | 460 or 960 GB |
Temporary files¶
In Mahuika batch scripts, please replace any mention of `/tmp` with `$TMPDIR`.
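For instance, within a batch script (the file and program names below are only placeholders):

```bash
# Use the per-job temporary directory instead of /tmp
cp input.dat "$TMPDIR/"
./my_program "$TMPDIR/input.dat" > results.out
```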
Porting your software¶
Compiling without the Cray compiler wrappers¶
If you have been compiling software on Māui you will be familiar with the CPE, the "Cray Programming Environment" compiler wrappers (`ftn`, `cc` and their underlying infrastructure) which allow you to switch between the GCC, Intel, and Cray compilers while using the same command lines. CPE is not supported on Mahuika, and so it will be necessary to use a compiler directly, for example `gfortran` or `gcc`.
We have GCC and Intel compilers (but not the Cray compiler) available on Mahuika via environment modules, recent examples being:
- `GCC/12.3.0` - `gfortran`, `gcc`, and `g++`
- `intel-compilers/2022.0.2` - `ifort`, `icc`, and `icpc`
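As a sketch of compiling directly, without the Cray wrappers (the source file name and optimisation flag are placeholders, not a NeSI-specific recipe):

```bash
module load GCC/12.3.0

# Compile a serial Fortran program with gfortran instead of ftn
gfortran -O2 -o my_prog my_prog.f90
```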
If you also require MPI or any of the libraries BLAS, LAPACK, ScaLAPACK, or FFTW then you will be best off loading one of our EasyBuild "toolchain" environment modules such as:
- `foss/2023a` - GCC, FFTW, FlexiBLAS, OpenBLAS, OpenMPI
- `intel/2022a` - Intel compilers, Intel MKL with its FFTW wrappers, Intel MPI
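For example, with the `foss` toolchain loaded, an MPI build might look like the following sketch (the source file and library flag are placeholders):

```bash
module load foss/2023a

# mpif90 wraps gfortran and adds the OpenMPI include and link flags
mpif90 -O2 -o my_mpi_prog my_mpi_prog.f90 -lfftw3
```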
For more on this topic, please see Compiling software on Mahuika.
Since an increasing proportion of NeSI CPUs are AMD ones, good performance of Intel's MKL library should not be assumed - other BLAS/LAPACK implementations will sometimes perform much better on AMD CPUs. So far we provide the vendor-neutral OpenBLAS and BLIS, and may also add AMD's own AOCL libraries.
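If your program is linked against FlexiBLAS (as in the `foss` toolchain above), the BLAS backend can be chosen at run time; a minimal sketch, assuming the `flexiblas` command-line tool is on your path and the program name is a placeholder:

```bash
# Show the backends this FlexiBLAS installation knows about
flexiblas list

# Run a program with a specific backend (available names depend on the installation)
FLEXIBLAS=OPENBLAS ./my_prog
```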
Microarchitecture¶
All of the current Mahuika CPUs have AVX2 instructions, but lack the AVX512 instructions found on Māui's Skylake CPUs. The next tranche of NeSI hardware will have AMD Zen4 CPUs, which will have AVX512.
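If you want to check what a given node supports, or build for it explicitly, a rough sketch (the source file name is a placeholder):

```bash
# Count mentions of the AVX2 and AVX-512 foundation flags on the current node
grep -o -w -e avx2 -e avx512f /proc/cpuinfo | sort | uniq -c

# Target the CPU of the node you are compiling on; a binary built this way on a
# Milan node will use AVX2 but will not require AVX512
gcc -O3 -march=native -o my_prog my_prog.c
```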
Questions?¶
If you have any questions or need any help, Contact our Support Team or pop in to one of our weekly Online Office Hours to chat with Support staff one-to-one.
No question is too small - don't hesitate to reach out.