...
srun can also be used without salloc, but you then need to specify the SLURM options for each srun invocation:
```
ebuchlin@cluster-head:~$ srun -p flarecast -n 2 hostname
cluster-r730-1
cluster-r730-1
ebuchlin@cluster-head:~$
```
Again, please use screen if you plan to log out after launching the job.
...
A batch job can be launched using
```
ebuchlin@cluster-head:~$ sbatch script.sh
```
where script.sh is a shell script including one or more lines with #SBATCH followed by SLURM options.
For example, for 10 independent tasks, script.sh can be:
```
#!/bin/bash
#SBATCH -n 10 -p flarecast
cd some_directory
srun ./my_executable
```
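Each of the 10 copies started by srun runs the same executable; to make them work on different data, each task can read the SLURM_PROCID environment variable (0 to 9 here), which SLURM sets per task. A minimal sketch, simulating the variable outside a job (the value 3 below is made up for illustration):

```shell
#!/bin/bash
# SLURM_PROCID is set by SLURM inside a real job; the value here is hypothetical.
SLURM_PROCID=3
# Each task picks its own work item based on its rank.
echo "task $SLURM_PROCID processing chunk $SLURM_PROCID"
# prints: task 3 processing chunk 3
```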
For MPI parallelization on 12 processors:
```
#!/bin/bash
#SBATCH --job-name=my_job_name
#SBATCH -n 12
echo "$SLURM_NNODES nodes: $SLURM_NODELIST"
cd my_directory
mpirun ./my_executable
```
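The echo line relies on variables that SLURM exports into the job environment; outside a job they are unset. A minimal sketch with made-up values, just to show the output format one would see in the job log:

```shell
#!/bin/bash
# Hypothetical values; inside a real job SLURM sets these automatically.
SLURM_NNODES=2
SLURM_NODELIST="cluster-r730-[1-2]"
echo "$SLURM_NNODES nodes: $SLURM_NODELIST"
# prints: 2 nodes: cluster-r730-[1-2]
```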
For an IDL job (using the full node; otherwise please change !cpu.tpool_nthreads):
```
#!/bin/bash
#SBATCH -N 1 -p flarecast
cat > idlscript.pro << EOF
my_idl_command1
my_idl_command2
EOF
idl idlscript.pro
rm idlscript.pro
```
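The cat > idlscript.pro << EOF construct is a shell heredoc: when the job runs, everything up to the EOF marker is written into idlscript.pro, which is then passed to IDL and deleted afterwards. A standalone sketch of the same pattern (the file name demo.pro and its contents are arbitrary):

```shell
#!/bin/bash
# Write a short script file via a heredoc, as the batch script above does.
cat > ./demo.pro << EOF
print, 'hello'
EOF
# Show what was written, then clean up, mirroring the idl/rm steps.
contents=$(cat ./demo.pro)
echo "$contents"
rm ./demo.pro
# prints: print, 'hello'
```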