Hi all,
I'm quite new to bioinformatics. Right now I'm trying to do my first run on my university's cluster, which uses Slurm to manage the queue. After submitting a job to 4 nodes I noticed that the run was using only 1% CPU and that I was not getting any output files in my working directory. After some googling I realized that I had not defined a scratch directory, so I adapted my submission script; it now looks something like this:
#!/bin/bash
#SBATCH -n 48
#SBATCH --mem=0
#SBATCH -o %j.o
#SBATCH -e %j.e
# Run for 7 days
#SBATCH -t 07-00:00:00
#SBATCH --exclusive
#SBATCH --job-name=2py
echo "Starting at `date`"
echo "Running on hosts: $SLURM_NODELIST"
echo "Running on $SLURM_NNODES nodes."
echo "Running on $SLURM_NPROCS processors."
echo "Current working directory is `pwd`"
SCRATCHDIR=/scratch/$USER/$SLURM_JOB_ID
mkdir -p $SCRATCHDIR
module load CP2K/6.1-foss-2019a
mpirun srun cp2k.popt -i cp2k.inp -o cp2k.out
EXITCODE=$?
cp -r $SCRATCHDIR .
rm -rf $SCRATCHDIR
echo "Program finished with exit code $EXITCODE at: `date`"
Do you guys have any advice on how to improve it? Also, this will now copy all the files back to my working directory, right? If you have any resources that could help me learn about this, that would be very helpful.
Thanks
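On the copy-back question: `cp -r $SCRATCHDIR .` copies the scratch directory itself into the working directory (creating a subdirectory named after the job), not its contents. A minimal sketch illustrating the difference, using throwaway `mktemp` paths and a made-up job ID `12345` rather than the real cluster scratch:

```shell
#!/bin/bash
# Demonstrate the difference between copying a directory and its contents.
set -eu

WORKDIR=$(mktemp -d)            # stand-in for the submission directory
SCRATCHDIR=$(mktemp -d)/12345   # stand-in for /scratch/$USER/$SLURM_JOB_ID
mkdir -p "$SCRATCHDIR"
echo "output" > "$SCRATCHDIR/cp2k.out"

cd "$WORKDIR"

# As in the script: this creates ./12345/cp2k.out, i.e. a subdirectory.
cp -r "$SCRATCHDIR" .
ls 12345/cp2k.out

# To copy only the contents into the current directory instead:
cp -r "$SCRATCHDIR"/. .
ls cp2k.out
```

Either form preserves the results; the first just leaves them one directory level down.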
Thanks for the answer. So I've removed the
--mem=0
and
--exclusive
flags. The program I'm using is CP2K, which runs (or can run) using MPI. I thought it would be helpful to define a scratch directory, as when running on several nodes I'm not able to see the output files being generated.

Are you certain the following method of parallel job submission is correct?
The OpenMPI site seems to indicate a slightly different way.
Try
then submit the script file by doing
Some like to use srun in their sbatch scripts; I prefer not to. But genomax is right either way, it should look like this.
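For reference, a corrected script along the lines suggested above might look like the following. This is a sketch, not a drop-in solution: the scratch path, module name, and input file are taken from the original post, and whether to use srun or mpirun (one launcher, never both chained together) depends on how your site built CP2K and Open MPI, so check your cluster's documentation.

```shell
#!/bin/bash
#SBATCH -n 48
#SBATCH -o %j.o
#SBATCH -e %j.e
# Run for 7 days
#SBATCH -t 07-00:00:00
#SBATCH --job-name=2py

echo "Starting at $(date)"
echo "Running on hosts: $SLURM_NODELIST"
echo "Current working directory is $(pwd)"

# $SLURM_JOB_ID is set by Slurm; a bare $JOB would be empty.
SCRATCHDIR=/scratch/$USER/$SLURM_JOB_ID
mkdir -p "$SCRATCHDIR"

module load CP2K/6.1-foss-2019a

# Stage the input into scratch and run there, so output lands in scratch.
cp cp2k.inp "$SCRATCHDIR"/
cd "$SCRATCHDIR"

# One launcher only; srun is the usual choice under Slurm.
srun cp2k.popt -i cp2k.inp -o cp2k.out
EXITCODE=$?

# Copy the scratch contents back to the submission directory, then clean up.
cp -r "$SCRATCHDIR"/. "$SLURM_SUBMIT_DIR"/
rm -rf "$SCRATCHDIR"
echo "Program finished with exit code $EXITCODE at: $(date)"
```

Note that the exit code is captured immediately after the CP2K run; in the original placement, $? reported the status of rm rather than of the simulation.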