Writing scripts to submit jobs on Comet using the slurm queue manager

Comet is a huge cluster of thousands of computing nodes, and the queue manager software, called "slurm," handles all the requests, directs each job to one or more specific nodes, and then lets you know when it's done. In a prior post I showed the basic slurm commands to submit a job and check the queue.
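As a quick refresher, those look something like this (gaussian.sb here is just a placeholder name for the kind of script described below):

sbatch gaussian.sb      # submit your job script to the queue
squeue -u $USER         # list your jobs and their status
scancel 1234567         # cancel a job, using the job ID that squeue reports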

You also need to write a special Linux bash script that contains a bunch of slurm configuration settings, along with the Linux commands that actually run your calculation.  This is easiest to explain by example, so I'll give two:  one for a Gaussian job, and another for an AMBER job.


#!/bin/bash
#SBATCH -t 10:00:00
#SBATCH --job-name="gaussian"
#SBATCH --output="gaussian.%j.%N.out"
#SBATCH --partition=batch
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --export=ALL


nprocshared=1          # number of cores; must match %nprocshared in your .gjf if you increase it
jobfile=YOURINPUTFILE  # stem of your input filename; the .gjf extension is added below
jobfile=$jobfile.gjf
outputfile=$jobfile.out   # the output will be named YOURINPUTFILE.gjf.out

export GAUSS_SCRDIR=/scratch/$USER/$SLURM_JOBID
. /etc/profile.d/modules.sh
module load gaussian
exe=`which g09`
export OMP_NUM_THREADS=$nprocshared
/usr/bin/time $exe < $jobfile > $outputfile

If you copy this file exactly and then modify just the important parts, you can submit your own Gaussian jobs quickly.   But it's worth knowing what these commands do.  All the SBATCH lines are slurm settings.  The most important is the line with the "-t" flag, which sets the wall time you think the job will require.  ("Wall time" means the same thing as "real time," as if you were watching the clock on your wall.)  If your job winds up running longer than the wall time you set, slurm will automatically cancel it even if it hasn't finished, so don't underestimate.

The other important slurm flags are "nodes" and "ntasks-per-node," which set how many nodes you want the job to use and how many processors (cores) you want on each node, respectively.  For Gaussian jobs, you always want nodes=1.  You can start with ntasks-per-node=1 and then try increasing it, up to 12 (the max for Comet), to see whether your calculation can take advantage of Gaussian's parallel processing algorithms.  (You would also need to add a matching %nprocshared line to the top of the .gjf file, as sketched below.  Parallelization doesn't always help that much, and if it doesn't, you're simply wasting processing time that we have to pay for.)
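For instance, to try eight cores (the number here is just for illustration), the slurm script and your Gaussian input file need matching settings. In the slurm script:

#SBATCH --ntasks-per-node=8
nprocshared=8

And at the top of YOURINPUTFILE.gjf:

%nprocshared=8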

The commands at the bottom are ordinary Linux bash commands that set up a few important variables, including the stem of the filename for your Gaussian input file and the scratch directory Gaussian will use.  The script then loads the Gaussian module and, with the last line, actually runs Gaussian on your input file.
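To make that concrete: if your input file were named benzene.gjf (a made-up example), you would set jobfile=benzene near the top, and the last line would effectively run

/usr/bin/time g09 < benzene.gjf > benzene.gjf.out

leaving your Gaussian output in benzene.gjf.out.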

If you’re using AMBER for molecular dynamics simulations, here’s a simple slurm script you can copy:

#!/bin/bash -l
#SBATCH -t 01:10:00
#SBATCH --job-name="amber"
#SBATCH --output="oamber.%j"
#SBATCH --error="eamber.%j"
#SBATCH --partition="compute"
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --no-requeue
#SBATCH --mail-type=END
#SBATCH --mail-user=username@skidmore.edu

module load amber

ibrun sander.MPI -O -i equil4.in -p 2mlt.top -c 2mlt_equil3.crd -r 2mlt_equil4.crd \
      -ref 2mlt_equil3.crd -x 2mlt_equil4.trj -o mdout

The SBATCH commands do the same job of configuring slurm, though the "mail-type" and "mail-user" options show how you can have slurm email you when a job finishes. The final ibrun line is what actually runs AMBER (a sander job); its flags name the input file (-i), the topology (-p), the starting coordinates (-c), the restart file to write (-r), the reference coordinates (-ref), the trajectory (-x), and the output file (-o). Those filenames must be changed for each new job, as in the sketch below.
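For example, a hypothetical next stage of this equilibration (an equil5.in input file continuing from the equil4 restart) might look like:

ibrun sander.MPI -O -i equil5.in -p 2mlt.top -c 2mlt_equil4.crd -r 2mlt_equil5.crd \
      -ref 2mlt_equil4.crd -x 2mlt_equil5.trj -o mdout5

The topology file stays the same, but the coordinate, restart, trajectory, and output names all move forward one step.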

Here is some specific information and examples for Comet at the San Diego Supercomputer Center, as well as a complete list of all the slurm options you can use.
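If you'd rather look things up directly on the cluster, the same options are documented in slurm's own manual pages:

man sbatch

(man squeue and man scancel do the same for the other commands.)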
