Question: snakemake on slurm cluster - jobs not updating/submitting after checkpoints? (Error submitting jobscript (exit code 1):)
Berghopper wrote, 7 days ago (Kampen, Netherlands):

Dear Biostars,

I have a pretty complicated pipeline I need to run on a slurm cluster, but am not able to get it to work.

For some reason, the pipeline works for smaller jobs, but as soon as I add more input files for more rigorous testing, it doesn't want to finish correctly.

I don't have a minimal example (yet) as it's the end of my workday, I will add one if this question isn't easily resolved.

So, what happens is the following:

I submit my main snakemake "daemon" job with `sbatch ../`, i.e. the following script:

#!/usr/bin/env bash

# Jobname
# Project
#SBATCH --account=nn3556k
# Wall clock limit
#SBATCH --time=24:00:00
# Max memory usage:
#SBATCH --mem-per-cpu=16G

## set up job environment
source /usit/abel/u1/caspercp/Software/snek/bin/activate
module purge   # clear any inherited modules
#set -o errexit # exit on errors (turned off, so all jobs are cancelled in event of crash)

## copy input files
cp -R /usit/abel/u1/caspercp/nobackup/DATA/ $SCRATCH
cp -R /usit/abel/u1/caspercp/lncrna_thesis_prj/src/snakemake_pipeline/ $SCRATCH

## Do some work:
cd $SCRATCH/snakemake_pipeline
echo $(date) >> ../bash_tims.txt
# run pipeline
snakemake --snakefile start.snakefile -pr --runtime-profile ../timings.txt --cluster "sbatch -A nn3556k --time=24:00:00 --mem-per-cpu=4G -d after:"$SLURM_JOB_ID -j 349 --restart-times 1
echo $(date) >> ../bash_tims.txt

## Make sure the results are copied back to the submit directory:
cp -R $SCRATCH/snakemake_pipeline/.snakemake/ $SUBMITDIR
mkdir $SUBMITDIR/child_logs/
cp $SCRATCH/snakemake_pipeline/slurm-*.out $SUBMITDIR/child_logs/
cp $SCRATCH/timings.txt $SUBMITDIR
cp $SCRATCH/bash_tims.txt $SUBMITDIR

# CANCEL ALL JOBS IN EVENT OF CRASH (or on exit, but it should not matter at that point.)
scancel -u caspercp

I am using the Abel cluster, in case you want specifics.

This is where I feel the whole thing falls apart. For some reason, when the checkpoints are finished, snakemake can't submit new jobs. I get the following error (subset of the snakemake output):

[Thu Mar 14 17:46:27 2019]
checkpoint split_up_genes_each_sample_lnc:
    input: ../OUTPUTS/prepped_datasets/expression_table_GSEA_Stopsack-HALLMARK_IL6_JAK_STAT3_SIGNALING.txt
    output: ../OUTPUTS/control_txts/custom_anno/expression_table_GSEA_Stopsack-HALLMARK_IL6_JAK_STAT3_SIGNALING-human-BP/
    jobid: 835
    reason: Missing output files: ../OUTPUTS/control_txts/custom_anno/expression_table_GSEA_Stopsack-HALLMARK_IL6_JAK_STAT3_SIGNALING-human-BP/; Input files updated by another job: ../OUTPUTS/prepped_datasets/expression_table_GSEA_Stopsack-HALLMARK_IL6_JAK_STAT3_SIGNALING.txt
    wildcards: expset=expression_table_GSEA_Stopsack, geneset=HALLMARK_IL6_JAK_STAT3_SIGNALING, organism=human, ontology=BP
Downstream jobs will be updated after completion.

Error submitting jobscript (exit code 1):

Updating job 655.
[Thu Mar 14 17:46:43 2019]
Finished job 896.
95 of 1018 steps (9%) done
Updating job 539.
[Thu Mar 14 17:47:24 2019]
Finished job 780.
96 of 1022 steps (9%) done
Updating job 643.
[Thu Mar 14 17:51:35 2019]
Finished job 964.
203 of 1451 steps (14%) done
Updating job 677.
[Thu Mar 14 17:51:46 2019]
Finished job 918.
204 of 1455 steps (14%) done
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: /work/jobs/26276509.d/snakemake_pipeline/.snakemake/log/2019-03-14T172923.764021.snakemake.log

Roughly speaking, the checkpoint splits an output txt with genes (i.e. a geneset file) into separate files called {gene}.txt for each sample, so I can feed them to my analysis algorithms.
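For anyone following along, the splitting step can be sketched like this; a minimal standalone version, with `geneset.txt` and `split_genes` as made-up stand-ins for the real file names:

```shell
#!/usr/bin/env bash
# Hedged sketch of what the checkpoint does: split a geneset file
# (one gene name per line) into one {gene}.txt file per gene.
# "geneset.txt" and "split_genes" are illustrative names, not the
# pipeline's real paths.
set -euo pipefail

printf 'TP53\nBRCA1\n' > geneset.txt   # toy input standing in for the real file

mkdir -p split_genes
while IFS= read -r gene; do
  [ -n "$gene" ] || continue                       # skip blank lines
  printf '%s\n' "$gene" > "split_genes/${gene}.txt"
done < geneset.txt
```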

But I am really confused by this error, "Error submitting jobscript (exit code 1):"; it doesn't give a clear direction for troubleshooting.

Thanks in advance for any input!

extra info:

  • The pipeline runs fine outside of the cluster.
  • I suspect I have to group my jobs in a specific way, although I am not sure.
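On the grouping point, a hedged sketch, assuming a sufficiently recent Snakemake (the `group` directive, and the `--group-components` flag in newer releases, may postdate the version used here); the rule name and group name are made up:

```shell
# In the Snakefile, tag the per-gene fan-out rules with a shared group:
#
#   rule analyse_gene:
#       group: "per_gene"
#       ...
#
# Then ask Snakemake to collapse e.g. 50 group members into a single
# cluster job, so far fewer sbatch submissions are made:
snakemake --snakefile start.snakefile \
  --cluster "sbatch -A nn3556k --time=24:00:00 --mem-per-cpu=4G" \
  --group-components per_gene=50 -j 40 -pr
```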

I am using the following snakemake setup:

(snek) -bash-4.1$ pip freeze --local
— modified 7 days ago • written 7 days ago by Berghopper

What happens if you remove `-d after:$SLURM_JOB_ID`? Usually it's most convenient to let Snakemake handle starting jobs.
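For reference, the command from the script above with only the `-d after:$SLURM_JOB_ID` dependency removed, and all other flags unchanged, would be:

```shell
snakemake --snakefile start.snakefile -pr \
  --runtime-profile ../timings.txt \
  --cluster "sbatch -A nn3556k --time=24:00:00 --mem-per-cpu=4G" \
  -j 349 --restart-times 1
```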

— written 7 days ago by Devon Ryan

I don't know yet; I added this more as a safety feature in case the main "daemon" job terminates while the child jobs are still pending. I'll try removing it and report back.

I also added another comment, you may be on the right track actually.

— modified 7 days ago • written 7 days ago by Berghopper

I think I resolved my issue, see my latest comment.

— written 7 days ago by Berghopper


It was actually not a mistake in the documentation...

— written 1 day ago by Berghopper
Berghopper wrote, 7 days ago (Kampen, Netherlands):

The reason this was happening was Slurm's job scheduler... Sadly, UiO's documentation stated that you can utilize a maximum of ~400 jobs. When I went out and measured it, though, it was only 40 at most!

This creates a bit of a problem for pipelines that rely on running many rules concurrently for their parallelization; I might have to tweak things a bit...

Edit: This was not a mistake in the documentation. I contacted the Slurm admins and they verified that you actually CAN run 400 jobs on the cluster. This makes me wonder whether this is a Slurm or Snakemake bug...
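One way to check what the scheduler itself enforces, rather than what the documentation says, is to query the limits directly. A hedged sketch with read-only commands; which limits are actually in force, and the exact field names, vary per site and Slurm version:

```shell
# Cluster-wide cap on jobs in the queue:
scontrol show config | grep -i MaxJobCount

# Per-association (user/account) limits; QOS-level per-user limits can
# also apply and are visible via `sacctmgr show qos`:
sacctmgr show assoc where user=$USER format=User,Account,MaxJobs,MaxSubmitJobs
```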

— modified 1 day ago • written 7 days ago by Berghopper

As OP, can you mark your own answer as accepted? Your question, though very well written and with many appreciated details, is quite a bit of text to read, and it took me a moment to realize it's solved.

— written 7 days ago by Carambakaracho

Actually, it is sadly still not solved. I have contacted the Slurm cluster admins and they state that I CAN use 400 jobs after all. So this is either a Slurm bug or Snakemake failing to handle this many jobs.

Either way, as a workaround for now I'll probably make a separate intermediary script that handles detailed multithreading per node.
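The intermediary-script idea can be sketched as below: one sbatch job that fans the per-gene work out over a single node's CPUs, instead of one cluster job per gene. `genes.txt` and the `echo` are placeholders for the real input list and analysis command:

```shell
#!/usr/bin/env bash
# Hedged sketch of an intermediary script: run one analysis process per
# gene, N at a time on a single node, rather than submitting one Slurm
# job per gene. "genes.txt" and the echo stand in for real inputs/commands.
set -euo pipefail

printf 'TP53\nBRCA1\nMYC\n' > genes.txt   # toy input for the sketch
rm -f results.log

njobs="${SLURM_CPUS_ON_NODE:-4}"          # fall back to 4 outside Slurm

# Run up to $njobs per-gene processes in parallel; swap the echo for the
# real per-gene analysis command.
xargs -a genes.txt -P "$njobs" -I{} sh -c 'echo "analysed {}" >> results.log'
```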

— written 1 day ago by Berghopper
Berghopper wrote, 7 days ago (Kampen, Netherlands):

Ok, update time:

What I've noticed is that if I dial down the -j option (the number of jobs submitted at the same time), the pipeline is a lot more stable. I don't know why, but I imagine this is a bug or quirk on the Slurm side rather than in Snakemake...

— written 7 days ago by Berghopper