# ABAQUS
Finite Element Analysis software for modeling, visualization and best-in-class implicit and explicit dynamics FEA.
Warning
ABAQUS is proprietary software. Make sure you meet the requirements for its usage.
Tip
For a list of ABAQUS commands, type:

```bash
abaqus help
```
## Available Modules
```bash
module load ABAQUS/2020
```
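Other versions may be available; to list them all (assuming the Lmod module system, which provides the `spider` subcommand):

```bash
# Search the module tree for every installed ABAQUS version.
module spider ABAQUS
```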
## Licences
The following network licence servers can be accessed from the NeSI cluster.
| Institution | Faculty | Uptime |
| --- | --- | --- |
| University of Auckland | Faculty of Engineering | 98% |
| University of Waikato | | 96% |
If you do not have access, or would like another licence server connected, contact our Support Team.
You can force ABAQUS to use a specific licence type by setting the parameter `academic=TEACHING` or `academic=RESEARCH` in a relevant environment file, for example:
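A minimal sketch that requests research tokens for a single job, following the job-local environment file pattern described later on this page:

```bash
# Write a job-local environment file selecting the research licence type.
echo "academic=RESEARCH" > abaqus_v6.env
```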
Tip

The number of ABAQUS licence tokens required can be estimated with the formula ⌊5 × N^0.422⌋, where N is the number of CPUs (see the sketch after this tip).
Hyperthreading can provide a significant speedup to your computations; however, hyperthreaded CPUs will use twice the number of licence tokens. It may be worth adding `#SBATCH --hint nomultithread` to your Slurm script if licence tokens are your main limiting factor.
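A minimal sketch of the token formula, using a hypothetical `tokens` shell helper (not part of ABAQUS):

```bash
# Estimate required ABAQUS licence tokens: floor(5 * N^0.422),
# where N is the number of CPUs.
tokens() { awk -v n="$1" 'BEGIN { printf "%d\n", int(5 * n ^ 0.422) }'; }

tokens 1    # 5
tokens 4    # 8
tokens 16   # 16
```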
## Solver Compatibility
Not all solvers are compatible with all types of parallelisation.
| | Element operations | Iterative | Direct | Lanczos |
| --- | --- | --- | --- | --- |
| `mp_mode=threads` | ✖ | ✔ | ✔ | ✔ |
| `mp_mode=mpi` | ✔ | ✔ | ✖ | ✖ |
Warning
If your input files were created using an older version of ABAQUS, you will need to update them using one of the following commands:

```bash
abaqus -upgrade -job new_job_name -odb old.odb
abaqus -upgrade -job new_job_name -inp old.inp
```
## Examples
When only one CPU is required, generally as part of a job array:
```bash
#!/bin/bash -e
#SBATCH --job-name ABAQUS-serial
#SBATCH --time 00:05:00       # Walltime
#SBATCH --cpus-per-task 1
#SBATCH --mem 1500            # total mem

module load ABAQUS/2020
abaqus job="propeller_s4rs_c3d8r" verbose=2 interactive
```
`mp_mode=threads`

Uses a node's shared memory for communication. May give a small speedup compared to MPI when using a low number of CPUs, but scales poorly. Needs significantly less memory than MPI. Hyperthreading may be enabled when using shared memory, but it is not recommended.
```bash
#!/bin/bash -e
#SBATCH --job-name ABAQUS-Shared
#SBATCH --time 00:05:00       # Walltime
#SBATCH --cpus-per-task 4
#SBATCH --mem 2G              # total mem

module load ABAQUS/2020
abaqus job="propeller_s4rs_c3d8r" cpus=${SLURM_CPUS_PER_TASK} \
    mp_mode=threads verbose=2 interactive
```
Shared memory run with a user defined function (Fortran or C). The function will be compiled at the start of the run. You may need to change the function suffix if you usually compile on Windows.
```bash
#!/bin/bash -e
#SBATCH --job-name ABAQUS-SharedUDF
#SBATCH --time 00:05:00       # Walltime
#SBATCH --cpus-per-task 4
#SBATCH --mem 2G              # total mem

module load imkl
module load ABAQUS/2020
abaqus job="propeller_s4rs_c3d8r" user=my_udf.f90 \
    cpus=${SLURM_CPUS_PER_TASK} mp_mode=threads verbose=2 interactive
```
`mp_mode=mpi`

Multiple processes, each with a single thread; not limited to one node. The model will be segmented into `-np` pieces, which should be equal to `--ntasks`. Each task could be running on a different node, leading to increased communication overhead. Jobs can be limited to a single node by adding `--nodes=1`, however this will increase your time in the queue, as contiguous CPUs are harder to schedule. This is the default method if `mp_mode` is left unspecified.
```bash
#!/bin/bash -e
#SBATCH --job-name ABAQUS-Distributed
#SBATCH --time 00:05:00       # Walltime
#SBATCH --ntasks 8
#SBATCH --mem-per-cpu 1500    # Each CPU needs its own.
#SBATCH --nodes 1

module load ABAQUS/2020
abaqus job="propeller_s4rs_c3d8r" cpus=${SLURM_NTASKS} mp_mode=mpi \
    verbose=2 interactive
```
The GPU nodes are limited to 16 CPUs. In order for the GPUs to be worthwhile, you should see a speedup equivalent to 56 CPUs per GPU used. GPU nodes will generally have less memory and fewer CPUs.
```bash
#!/bin/bash -e
#SBATCH --job-name ABAQUS-gpu
#SBATCH --time 00:05:00       # Walltime
#SBATCH --cpus-per-task 4
#SBATCH --mem 4G              # total mem
#SBATCH --gpus-per-node 1

module load ABAQUS/2020
module load CUDA
abaqus job="propeller_s4rs_c3d8r" cpus=${SLURM_CPUS_PER_TASK} \
    gpus=${SLURM_GPUS_PER_NODE} mp_mode=threads \
    verbose=2 interactive
```
## User Defined Functions
User defined functions (UDFs) can be included on the command line with the argument `user=<filename>`, where `<filename>` is the C or Fortran source code.
Extra compiler options can be set in your local `abaqus_v6.env` file. The default compile commands are for `imkl`; other compilers can be loaded with `module load`, in which case you may have to change the compile commands in your local `.env` file, as sketched below.
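A hedged sketch of such an override: `compile_fortran` is a standard ABAQUS environment file parameter, but the flag appended here is only an illustration; check the site file for the actual defaults.

```bash
# Append an extra flag to the default Fortran compile command in a
# job-local abaqus_v6.env (assumes compile_fortran is defined as a
# list of strings by the site environment file).
echo "compile_fortran = compile_fortran + ['-qopenmp']" >> abaqus_v6.env
```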
## Environment file
The ABAQUS environment file contains a number of parameters that define how your job will run; some of these you may wish to change. These parameters are read in the following order of preference:
1. `../ABAQUS/SMA/site/abaqus_v6.env`: set by NeSI and cannot be changed.
2. `~/abaqus_v6.env` (your home directory): if it exists, it will be used in all jobs submitted by you.
3. `<working directory>/abaqus_v6.env`: if it exists, it will be used in this job only.
You may want to include this short snippet when making changes specific to a job.
```bash
# Before starting abaqus.
echo "parameter=value
parameter=value
parameter=value" > "abaqus_v6.env"

# After the job is finished.
rm "abaqus_v6.env"
```
## Performance
Note: Hyperthreading off; testing was done on a small mechanical FEA model. Results are highly model dependent, so run your own tests.
## Common Issues
### Unable to create temporary directory
This may be caused by using a path for the `job` parameter, e.g.

```bash
abaqus job="/nesi/project/nesi99999/my_input.inp"
```

ABAQUS cannot create subdirectories, leading to the error message about permissions.
This can be fixed by using the `input` parameter instead, e.g.

```bash
abaqus input="/nesi/project/nesi99999/my_input.inp" job="my_input"
```