SLURM job ran out of time
4.0 years ago

Hello! I'm extremely new to SLURM and bioinformatics, so apologies for this rather basic question, and feel free to correct me if I use any terminology wrong!

Basically I ran a SLURM job, but it surpassed the allocated 24 hours on the HPC cluster I'm using and ran out of time...

Here are some of the allocated inputs I used:

--ntasks=1

--ntasks-per-node=1

--cpus-per-task=28

--mem=110

OMP_NUM_THREADS=28

My question is... what do I need to change from these inputs to get the analysis complete before 24 hours?

I'm not really sure about the terminology and what constitutes a "task" or what a "node" is, etc.

Thank you so much

Tags: SLURM • software error • parallel • multithreading
4.0 years ago
GenoMax 141k

This question is impossible to answer with the information provided: we have no idea what software you are running, what the inputs are, etc. With that in mind, you can simply ask for more time with the -t option (e.g. -t 2-0 for two days). If your cluster caps the maximum time allocation at 24 h, then you would need to find a workaround.

If you look at the man sbatch page you will find plenty of information about the options you listed above.
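If you want to check what your cluster actually allows before guessing, SLURM can tell you each partition's walltime cap. A quick sketch (partition names vary by cluster; "batch" below is just an illustration):

```shell
# Show each partition's maximum walltime (the %l field is TIMELIMIT)
sinfo -o "%P %l"

# Same, but for one specific partition, e.g. one hypothetically named "batch"
sinfo -p batch -o "%P %l"

# Request two days at submission time (only honored if the partition allows it)
sbatch -t 2-0 my_script
```

If the TIMELIMIT column shows 1-00:00:00 everywhere, the 24 h cap is real and you will need the workaround route instead.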


Thank you so much for your answer; my cluster does unfortunately have a maximum time allocation of 24 hours...

Basically I'm running OrthoFinder on the protein FASTA files of 5 species... I was wondering if changing the ntasks / ntasks-per-node / cpus-per-task parameters would allow the job to complete faster?


How are you running OrthoFinder? Are you using the -t and -a options to use multiple threads?


From a SLURM shell script.

It doesn't have the -t and -a parameters, but it has:

#SBATCH --ntasks=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=28
#SBATCH --time=24:00:00

(I've tried extending this, but the maximum allocation time is 24 hours.)

I'm not really sure what to change, or how many tasks or tasks per node to give it, since this is all really new to me, including the terminology.


I am asking about the OrthoFinder command-line options. Do you have the above options on that command line? See the parallelising OrthoFinder section in the manual. E.g., if you are asking for 28 CPUs in your SLURM script (--cpus-per-task=28), there needs to be a corresponding -t 28 on your OrthoFinder command line to take advantage of them (just an example).


OHHH! I'm so sorry, I didn't see the parallelising section in the manual. So in my run script I have:

srun --export=ALL orthofinder -f species/

but it should actually be srun --export=ALL orthofinder -f species/ -t 28?


No worries. Yes, something along those lines. Take a look at the manual and decide on the optimal options you need to use. You may also want to see whether you want to run the BLAST/DIAMOND analyses separately (if that 24 h limit is a hard cap).
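For the hard-cap case, the OrthoFinder manual describes stop/resume options that let you split the slow sequence-search stage out of the single 24 h window. A rough sketch (the Results directory name below is illustrative; OrthoFinder prints the actual working-directory path when stage 1 finishes):

```shell
# Stage 1: prepare the input and print the DIAMOND/BLAST search commands
# without running them (this part finishes quickly)
orthofinder -f species/ -op

# Run the printed search commands yourself, split across as many
# separate <24 h SLURM jobs as needed.

# Stage 2: resume from the precomputed search results, pointing -b at the
# working directory created in stage 1 (path shown here is hypothetical)
orthofinder -b species/OrthoFinder/Results_*/WorkingDirectory/
```

With only 5 species the all-vs-all search may well fit in 24 h with -t 28 anyway, so try the simple route first.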


Thank you so much for all your help, I really appreciate it! :)


If you are using a script like the one above, you could also do something like this:

#SBATCH --ntasks=1
#SBATCH -p partition
#SBATCH --mem=Ng
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=28
#SBATCH --time=24:00:00 

orthofinder -f species/ -t 28

Save the contents in a file, e.g. my_script, and then just run sbatch my_script to submit the job.
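Once the job is submitted, it is worth watching whether it actually uses the time and memory you asked for. A small sketch of the usual follow-up commands (replace the job ID with the one sbatch prints):

```shell
# Submit the script; sbatch prints a line like "Submitted batch job 123456"
sbatch my_script

# Watch your queued and running jobs
squeue -u $USER

# After the job finishes, compare elapsed time and peak memory
# against the requested limits (123456 is a placeholder job ID)
sacct -j 123456 --format=JobID,Elapsed,State,MaxRSS
```

If MaxRSS is far below what you requested, you can lower --mem on the next run, which often helps jobs get scheduled sooner.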

