
Slurm Interactive Sessions

A Slurm interactive session reserves resources on compute nodes, allowing you to use them interactively as you would the login node.

There are two main commands that can be used to make a session, srun and salloc, both of which use most of the same options available to sbatch (see our Slurm Reference Sheet).

Warning

Once it starts, an interactive session will consume the entire requested block of CPU time and other resources until it ends, even if they sit idle. To avoid unnecessary charges to your project, don't forget to exit an interactive session once you are finished.

Using srun --pty bash

srun will add your resource request to the queue. When the allocation starts, a new bash session will start up on one of the granted nodes.

For example:

srun --account nesi12345 --job-name "InteractiveJob" --cpus-per-task 8 --mem-per-cpu 1500 --time 24:00:00 --pty bash

You will receive a message:

srun: job 10256812 queued and waiting for resources

And when the job starts:

srun: job 10256812 has been allocated resources
[wbn079 ~ SUCCESS ]$

Note the host name in the prompt has changed to the compute node wbn079.

For a full description of srun and its options, see the schedmd documentation.

Using salloc

salloc functions similarly to srun --pty bash in that it will add your resource request to the queue. However, when the allocation starts, the new bash session will start on the login node rather than on a compute node. This is useful for running a GUI on the login node while your processes run on the compute nodes.

For example:

salloc --account nesi12345 --job-name "InteractiveJob" --cpus-per-task 8 --mem-per-cpu 1500 --time 24:00:00

You will receive a message:

salloc: Pending job allocation 10256925
salloc: job 10256925 queued and waiting for resources

And when the job starts:

salloc: job 10256925 has been allocated resources
salloc: Granted job allocation 10256925 
[mahuika01~ SUCCESS ]$

Note that you are still on the login node mahuika01; however, you now have permission to ssh to any node on which you have a session.
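From within the salloc shell you can place work onto the allocated compute nodes while remaining on the login node. A minimal sketch (the node name wbn079 is an example; these commands only work inside an active allocation on the cluster):

```shell
# Inside the salloc session on the login node:
srun hostname    # runs the command on the allocation's compute node(s)
ssh wbn079       # or ssh directly to a node in your allocation (example name)
```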

For a full description of salloc and its options, see the schedmd documentation.

Requesting a postponed start

salloc lets you specify that a job is not to start before a specified time, however the job may still be delayed if requested resources are not available. You can request a start time using the --begin flag.

The --begin flag takes either absolute or relative times as values.

Warning

If you specify absolute dates and/or times, Slurm will interpret those according to your environment's current time zone. Ensure that you know what time zone your environment is using, for example by running date in the same terminal session.

  • --begin=16:00 means start the job no earlier than 4 p.m. today. (Seconds are optional, but the time must be given in 24-hour format.)
  • --begin=11/05/20 means start the job on (or after) 5 November 2020. Note that Slurm uses American date formats. --begin=2020-11-05 is another Slurm-acceptable way of saying the same thing, and possibly easier for a New Zealander.
  • --begin=2020-11-05T16:00:00 means start the job on (or after) 4 p.m. on 5 November 2020.
  • --begin=now+1hour means wait at least one hour before starting the job.
  • --begin=now+60 means wait at least one minute before starting the job.

If no --begin argument is given, the default behaviour is to start as soon as possible.
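Because the relative and absolute forms are easy to mix up, it can help to compute the absolute timestamp first and inspect it before submitting. A sketch assuming GNU date (as on Linux login nodes); the target time "tomorrow 09:30" and the srun options are examples only:

```shell
# Build an absolute --begin value in the current time zone (GNU date syntax).
BEGIN=$(date -d 'tomorrow 09:30' +%Y-%m-%dT%H:%M:%S)
echo "$BEGIN"    # e.g. 2020-11-06T09:30:00

# Then use it when requesting the session, for example:
# srun --account nesi12345 --begin="$BEGIN" --time 02:00:00 --pty bash
```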

While you wait

It's quite common to have to wait for some time before your interactive session starts, even if you specified, expressly or by implication, that the job is to start as soon as possible.

While you're waiting, you will not have use of that shell prompt. Do not use Ctrl-C to get the prompt back, as doing so will cancel the job. If you need a shell prompt, detach your tmux or screen session, or switch to (or open) another terminal session to the same cluster's login node.

In the same way, before logging out (for example, if you choose to shut down your workstation at the end of the working day), be sure to detach the tmux or screen session. In fact, we recommend detaching whenever you leave your workstation unattended for a while, in case your computer turns off or goes to sleep or its connection to the internet is disrupted while you're away.

Running Python+JupyterLab in Interactive Mode

Warning

If you are using a Windows computer, this method has currently been tested in VS Code, WSL PowerShell, and WSL Ubuntu. We have not yet tested it in PuTTY or MobaXterm.

To run Python+JupyterLab in interactive mode, first start your interactive session:

srun --account nesi12345 --job-name "InteractiveJob" --cpus-per-task 2 --mem 8G --time 24:00:00 --pty bash

Then load Python, install JupyterLab if you don't have it yet, and obtain the hostname and the port:

# Load Python
module load Python

# Create and activate a Python virtual environment (or activate your
# existing virtual environment).
python3 -m venv venv
source venv/bin/activate

# Install JupyterLab
pip3 install JupyterLab

# Select a random port
PORT=$(shuf -i8000-9999 -n1)

# Check the hostname and port - we will need this later, you can also 
# see it at the start of your prompt
hostname | cut -d'.' -f1 # <-- This is the hostname
echo $PORT               # <-- This is the port
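Optionally, you can check that the randomly chosen port is not already in use on the node before starting JupyterLab. A hedged sketch, assuming the ss utility (from iproute2) is available on the node:

```shell
# Re-draw the port until ss reports no listener on it. If ss is not
# available, the loop simply accepts the first port drawn.
PORT=$(shuf -i8000-9999 -n1)
while ss -tln 2>/dev/null | grep -q ":$PORT "; do
    PORT=$(shuf -i8000-9999 -n1)
done
echo $PORT
```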

Make a note of the hostname and the port, given by the hostname | cut -d'.' -f1 and echo $PORT commands. Then, we need to start up JupyterLab:

# Start Jupyter. This might take a minute
jupyter lab --no-browser --ip=0.0.0.0 --port=$PORT

Make a note of the second URL given by JupyterLab once it launches. For instance:

[C 2025-11-03 14:34:31.797 ServerApp] 

    To access the server, open this file in a browser:
        file:///home/john.doe/.local/share/jupyter/runtime/jpserver-2965439-open.html
    Or copy and paste one of these URLs:
        http://c003.hpc.nesi.org.nz:9339/lab?token=e6ff816a27867d88311bcc9f04141402590af48c2fd5f117
        http://127.0.0.1:9339/lab?token=e6ff816a27867d88311bcc9f04141402590af48c2fd5f117

In this case, http://127.0.0.1:9339/lab?token=e6ff816a27867d88311bcc9f04141402590af48c2fd5f117 is the URL we will use to open JupyterLab.

In a second terminal on your local machine (or a second screen in tmux or screen), type the following:

ssh -L PORT:HOSTNAME:PORT mahuika

#For example:
#ssh -L 9339:c003:9339 mahuika

Then, in your browser, enter the URL from before:

http://127.0.0.1:PORT/lab?token=TOKEN

# For example:
# http://127.0.0.1:9339/lab?token=e6ff816a27867d88311bcc9f04141402590af48c2fd5f117

You will now be able to see and work with Python+JupyterLab in your web browser.

Running Julia+Pluto.jl in Interactive Mode

Warning

If you are using a Windows computer, this method has currently been tested in VS Code, WSL PowerShell, and WSL Ubuntu. We have not yet tested it in PuTTY or MobaXterm.

To run Julia+Pluto.jl in interactive mode, first start your interactive session:

srun --account nesi12345 --job-name "InteractiveJob" --cpus-per-task 2 --mem 8G --time 24:00:00 --pty bash

Then load Julia and obtain the hostname and the port:

# Load Julia
module load Julia 

# Select a random port
PORT=$(shuf -i8000-9999 -n1)

# Check the hostname and port - we will need this later, you can also 
# see it at the start of your prompt
hostname | cut -d'.' -f1 # <-- This is the hostname
echo $PORT               # <-- This is the port

# Export port to a variable name
export pluto_port=${PORT}

Make a note of the hostname and the port, given by the hostname | cut -d'.' -f1 and echo $PORT commands. Then start Julia, and install and run Pluto.jl:

# Start Julia
julia

# Install Pluto.jl. This might take a minute
import Pkg; Pkg.add("Pluto")

# Start Pluto. This might take a minute
using Pluto
Pluto.run(host="0.0.0.0",port=parse(Int, ENV["pluto_port"]),launch_browser=false)

Take note of the URL that Pluto prints:

[ Info: Loading...
┌ Info:  Go to http://0.0.0.0:9627/?secret=mXmq6659 in your browser to start writing ~ have fun!
└ 

Here, we will be using http://0.0.0.0:9627/?secret=mXmq6659 to access Pluto.

Next, open up a second terminal on your local machine (or a second screen in tmux or screen), and type the following:

ssh -L PORT:HOSTNAME:PORT mahuika

#For example:
#ssh -L 9627:mc081:9627 mahuika

Then, in your browser, enter the URL from before:

http://0.0.0.0:PORT/?secret=SECRET

# For example:
# http://0.0.0.0:9627/?secret=mXmq6659

You will now be able to see and work with Julia+Pluto in your web browser.

Setting up a detachable terminal

Warning

If you don't request your interactive session from within a detachable terminal, any interruption to the controlling terminal, for example by your computer going to sleep or losing its connection to the internet, will permanently cancel that interactive session and remove it from the queue, whether it has started or not.

  1. Connect to a login node.
  2. Start up tmux or screen.
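For step 2, a minimal tmux sketch (the session name "interactive" is arbitrary; these commands are interactive, so run them on the login node):

```shell
# Start a named tmux session, or re-attach to it if it already exists:
tmux new -A -s interactive

# Detach with Ctrl-b then d; later, re-attach with:
#   tmux attach -t interactive
```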

Modifying an existing interactive session

Whether your interactive session is already running or is still waiting in the queue, you can make a range of changes to it using the scontrol command. Some changes are off limits for ordinary users, such as increasing the maximum permitted wall time, or unsafe, like decreasing the memory request. But many other changes are allowed.

Postponing the start of an interactive job

Suppose you submitted an interactive job just after lunch, and it's already 4 p.m. and you're leaving in an hour. You decide that even if the job starts now, you won't have time to do everything you need to do before the office shuts and you have to leave. Even worse, the job might start at 11 p.m. after you've gone to bed, and you'll get to work at 9:00 the next morning and find that it has wasted ten wall-hours of time.

Slurm offers an easy solution: Identify the job, and use scontrol to postpone its start time.

Note

Job IDs are unique to each cluster but not across the whole of NeSI. Therefore, scontrol must be run on a node belonging to the cluster where the job is queued.

The following command will delay the start of the job with numeric ID 12345678 until (at the earliest) 9:30 a.m. the next day:

scontrol update jobid=12345678 StartTime=tomorrowT09:30:00

This variation, if run on a Friday, will delay the start of the same job until (at the earliest) 9:30 a.m. on Monday:

scontrol update jobid=12345678 StartTime=now+3daysT09:30:00

Warning

Don't just set StartTime=tomorrow with no time specification unless you like the idea of your interactive session starting at midnight or in the wee hours of the morning.

Bringing forward the start of an interactive job

In the same way, you can use scontrol to set a job's start time to earlier than its current value. A likely application is to allow a job to start immediately even though it stood postponed to a later time:

scontrol update jobid=12345678 StartTime=now

Other changes using scontrol

There are many other changes you can make by means of scontrol. For further information, please see the scontrol documentation.

Modifying multiple interactive sessions at once

If you have several interactive sessions waiting to start on the same cluster, you might want to postpone them all using a single command. To do so, you will first need to identify them, hence the earlier suggestion to include something specific to interactive jobs in the job name.

For example, if all your interactive job names start with the text "InteractiveJob", you could do this:

# -u $(whoami) restricts the search to my jobs only.
# The --states=PD option restricts the search to pending jobs only.
#
squeue -u $(whoami) --states=PD -o "%A %j" | grep "InteractiveJob"

The above command will return a list of your jobs whose names start with the text "InteractiveJob". In this respect, it's more flexible than the -n option to squeue, which requires the entire job name string in order to identify a match.

In order to use scontrol, we need only the job ID from each line, so let's use awk to extract it and send the output to scontrol via xargs:

squeue -u $(whoami) --states=PD -o "%A %j" | grep "InteractiveJob" | \
awk '{print $1}' | \
xargs -I {} scontrol update jobid={} StartTime=tomorrowT09:30:00
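To see what the grep and awk stages select before any job IDs reach scontrol, you can run them on simulated squeue output (the job IDs and names below are made up for illustration):

```shell
# Three fake "jobid jobname" lines standing in for squeue output:
printf '%s\n' \
    '10256812 InteractiveJob' \
    '10256900 batch_analysis' \
    '10257001 InteractiveJob_2' |
    grep 'InteractiveJob' |
    awk '{print $1}'
# Prints 10256812 and 10257001; only those IDs would be passed on by xargs.
```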

Cancelling an interactive session

You can cancel a pending interactive session by attaching to the relevant session, putting the job in the foreground (if necessary) and pressing Ctrl-C on your keyboard.

To cancel all your queued interactive sessions on a cluster in one fell swoop, a command like the following should do the trick:

squeue -u $(whoami) --states=PD -o "%A %j" | grep "InteractiveJob" | \
awk '{print $1}' | \
xargs -I {} scancel {}

To cancel all your running interactive sessions on a cluster in one fell swoop, a command like the following should do the trick:

squeue -u $(whoami) --states=R -o "%A %j" | grep "InteractiveJob" | \
awk '{print $1}' | \
xargs -I {} scancel {}

If you frequently use interactive jobs, we recommend doing this before you go away on leave or fieldwork or other lengthy absence.