snakemake: pipeline failed with MissingOutputException
19 months ago
DY Chen ▴ 10

To keep things simple I have updated the post. The situation is that the pipeline initially ran well on a local machine but failed when submitted to the cluster. After posting the question, I found that my snakemake version was 3.13.3, so I updated to v5.7.3, only to find that the pipeline then failed on both the local machine and the cluster. I am now struggling to figure out what's wrong with my Snakefile or anything else.
The error message:

Waiting at most 5 seconds for missing files.
MissingOutputException in line 24 of /work/path/rna_seq_pipeline/Snakefile:
Missing files after 5 seconds:
bam/A2_Aligned.toTranscriptome.out.bam
This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: /work/path/rna_seq_pipeline/.snakemake/log/2019-11-07T153434.327966.snakemake.log

Here is my Snakefile:

# config file 
configfile: "config.yaml"

shell.prefix("source ~/.bash_profile")

# determine which genome reference you would like to use
# here we are using GRCm38
# depending on the freeze, the appropriate references and data files will be chosen from the config
freeze = config['freeze']

# read list of samples, one per line
with open(config['samples']) as f:
    SAMPLES = f.read().splitlines()

rule all:
    input:
        starindex = config['reference']['stargenomedir'][freeze] + "/" + "SAindex",
        rsemindex = config['reference']['rsemgenomedir'][freeze] + ".n2g.idx.fa",
        fastqs = expand("fastq/{file}_{rep}_paired.fq.gz", file = SAMPLES, rep = ['1','2']),
        bams = expand("bam/{file}_Aligned.toTranscriptome.out.bam", file = SAMPLES),
        quant = expand("quant/{file}.genes.results", file = SAMPLES)

# align using STAR
rule star_align:
    input:
        f1 = "fastq/" + "{file}_1_paired.fq.gz",
        f2 = "fastq/" + "{file}_2_paired.fq.gz"
    output:
        out = "bam/" + "{file}_Aligned.toTranscriptome.out.bam"
    params:
        star = config['tools']['star'],
        genomedir = config['reference']['stargenomedir'][freeze],
        prefix = "bam/" + "{file}_"
    threads: 12
    shell:  
        """
        {params.star} \
        --runThreadN {threads} \
        --genomeDir {params.genomedir} \
        --readFilesIn {input.f1} {input.f2} \
        --readFilesCommand zcat \
        --outFileNamePrefix {params.prefix} \
        --outSAMtype BAM SortedByCoordinate \
        --outSAMunmapped Within \
        --quantMode TranscriptomeSAM \
        --outSAMattributes NH HI AS NM MD \
        --outFilterType BySJout \
        --outFilterMultimapNmax 20 \
        --outFilterMismatchNmax 999 \
        --outFilterMismatchNoverReadLmax 0.04 \
        --alignIntronMin 20 \
        --alignIntronMax 1000000 \
        --alignMatesGapMax 1000000 \
        --alignSJoverhangMin 8 \
        --alignSJDBoverhangMin 1 \
        --sjdbScore 1 \
        --limitBAMsortRAM 50000000000
        """

# quantify expression using RSEM
rule rsem_quant:
    input:
        bam = "bam/" + "{file}_Aligned.toTranscriptome.out.bam"
    output:
        quant = "quant/" + "{file}.genes.results"
    params:
        calcexp = config['tools']['rsem']['calcexp'],
        genomedir = config['reference']['rsemgenomedir'][freeze],
        prefix =  "quant/" + "{file}"
    threads: 12
    shell:
        """
        {params.calcexp} \
        --paired-end \
        --no-bam-output \
        --quiet \
        --no-qualities \
        -p {threads} \
        --forward-prob 0.5 \
        --seed-length 21 \
        --fragment-length-mean -1.0 \
        --bam {input.bam} {params.genomedir} {params.prefix}
        """

And my config.yaml:

freeze: grcm38

# samples file
samples:
    samples.txt

# software, binaries or tools
tools:
    fastqdump: fastq-dump
    star: STAR
    rsem: 
        calcexp: rsem-calculate-expression
        prepref: rsem-prepare-reference

# reference files, genome indices and data
reference:
    stargenomedir: 
        grch38: /work/path/reference/STAR/GRCh38
        grcm38: /work/path/reference/STAR/GRCm38
    rsemgenomedir: 
        grch38: /work/path/reference/RSEM/GRCh38/GRCh38
        grcm38: /work/path/reference/RSEM/GRCm38/GRCm38
    fasta: 
        grch38: /work/path/GRCh38/Homo_sapiens.GRCh38.dna.primary_assembly.fa
        grcm38: /work/path/reference/GRCm38/Mus_musculus.GRCm38.dna.primary_assembly.fa
    gtf: 
        grch38: /work/path/reference/GRCh38/Homo_sapiens.GRCh38.98.gtf
        grcm38: /work/path/reference/GRCm38/Mus_musculus.GRCm38.98.gtf

And finally, samples.txt:

A1
A2

ps: adapted from the pipeline https://github.com/komalsrathi/rnaseq-star-rsem-pipeline/blob/master/Snakefile

snakemake RNA-Seq • 2.7k views

The way to debug anything is to start with something that works and then make small changes until it doesn't. I am not totally convinced your worker nodes have access to /work. Maybe drop the absolute directories, try some hello-world Snakefiles with qsub, and work your way up to the full pipeline.
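A hello-world of that kind could look like the following (a minimal sketch; the rule names and the output file are invented for illustration):

```python
# Minimal Snakefile: the job only writes the hostname of the node it ran on.
# If this fails under qsub but works locally, the worker nodes probably
# cannot see or write to the submit directory.
rule all:
    input:
        "hello.txt"

rule hello:
    output:
        "hello.txt"
    shell:
        "hostname > {output}"
```

Submitted with something like `snakemake --cluster "qsub -cwd" -j 1`, this quickly shows whether the filesystem is shared between the submit host and the workers, before any STAR/RSEM complexity gets involved.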


What version of snakemake are you running?


3.13.3... I didn't realize it was such an old version. I followed the instructions of the snakemake tutorial and do not know why it is not the latest. I will try the latest version to see whether the problem is solved.


Yeah, that's like a 4-year-old version!

I think if you update, the issue will be fixed. Check out this issue from 2015 with similar symptoms: https://bitbucket.org/snakemake/snakemake/issues/273/cannot-touch-temporary-snakemake-file

Can you link to the snakemake tutorial you are following?


The official one here: https://snakemake.readthedocs.io/en/stable/getting_started/installation.html (it seems I forgot to use -c conda-forge at the time; anyway, it works now).

However, after upgrading to v5.7.4, I still suffer from the problem. As I have two pairs of raw paired-end fastq files, here is the first error file returned by the cluster:

MissingOutputException in line 24 of /work/path/rna_seq_pipeline/Snakefile:
Missing files after 5 seconds:
bam/A1_Aligned.toTranscriptome.out.bam
This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
 Shutting down, this might take some time.
 Exiting because a job execution failed. Look above for error message
 touch: cannot touch `/work/path/rna_seq_pipeline/.snakemake/tmp.jvdud4yy/1.jobfailed': No such file or directory

and another one:

Waiting at most 5 seconds for missing files.
Missing files after 5 seconds:
/work/path/rna_seq_pipeline/.snakemake/tmp.jvdud4yy
touch: cannot touch `/work/path/rna_seq_pipeline/.snakemake/tmp.jvdud4yy/2.jobfailed': No such file or directory

Sorry, out of my depth. I hope you get an answer. If you find one, please come back and post it.


This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.

Did you read this part of the error message, and did you try the suggestion?

The problem could also be in the way you've written your pipeline, so showing your Snakefile could be useful.


I saw that and tried adding the parameter `--latency-wait 180`, but it just took more time to see the same error message. The weird thing is that after upgrading snakemake, running it on a local machine also failed:

MissingOutputException in line 24 of 
/backup/home/xuheping_Group/chendianyu/project/rna_seq_pipeline-5/Snakefile:
Missing files after 180 seconds:
bam/A2_Aligned.toTranscriptome.out.bam
This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.

I will add my Snakefile to the original post.


Most of the time that error means a rule cannot find a certain output file. If it is not too much work for you, it always helps to execute all the steps manually from the command line and double-check the filenames of the output files.

In your case it looks like it cannot find the file bam/A2_Aligned.toTranscriptome.out.bam. Is it created? Is the path right?


Yes, snakemake created the bam/ directory but not the file bam/A2_Aligned.toTranscriptome.out.bam. I used snakemake -np to print the shell commands and ran them manually. The commands ran successfully, except that I needed to create the bam/ and quant/ directories myself; but snakemake can create directories automatically, can't it?


Not if it only appears as an input. Your bam/ has to be there because it is in the input part of one rule; since it is the output of another, I would start there.
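For what it's worth, snakemake does create the parent directories of declared outputs before a job runs, so bam/ and quant/ should not need a manual mkdir when the files are produced through the rules; the manual mkdir was only needed because the commands were run by hand outside snakemake. A minimal sketch (rule and file names invented):

```python
# snakemake creates sub/ automatically because sub/out.txt is a declared output;
# running the same echo command by hand would fail without a prior mkdir.
rule make_output:
    output:
        "sub/out.txt"
    shell:
        "echo done > {output}"
```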


Are you sure SortedByCoordinate and TranscriptomeSAM generate *_Aligned.toTranscriptome.out.bam?


Never mind. It apparently generates both Aligned.sortedByCoord.out.bam and Aligned.toTranscriptome.out.bam.


What versions of STAR/RSEM are you using?

Also, can you run tree (or otherwise show the directory structure) in your working directory? I see you have hard-coded the paths bam/, fastq/ and quant/; I just want to make sure these directories exist.

19 months ago
DY Chen ▴ 10

At least the problem is solved, but being totally new to snakemake, I still need to spend time learning and figuring out the mechanisms behind it.

  • After updating to snakemake v5.7.3, the pipeline failed on the local machine because of shell.prefix("source ~/.bash_profile") in the Snakefile. I actually modify my PATH in ~/.bashrc. After commenting this line out, the pipeline runs well on my local machine. (I still need to figure out what shell.prefix does exactly, as I thought it would help modify $PATH so I could call STAR directly without an absolute path.)
  • When submitting the pipeline to the cluster, I used the command snakemake -p --cluster "qsub -cwd -V" -j 6. I used -V hoping to call STAR directly without an absolute path, but the error Error submitting jobscript (exit code -7): occurred. So I added absolute paths for STAR and RSEM in my config.yaml and used snakemake -p --cluster "qsub -cwd" -j 6. Now the pipeline runs well on the cluster. (Any idea why -V does not work?)
  • Now I am using absolute paths in my config.yaml to call STAR and RSEM, but I think that is less portable. Is there a better way?
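One common answer to the portability question is snakemake's per-rule conda integration (available in the 5.x series): declare an environment file for each rule and run with --use-conda, so tools are resolved inside a managed environment rather than via absolute paths. A hedged sketch; the environment file name and the idea of pinning a STAR version there are assumptions:

```python
# In the Snakefile: point the rule at a conda environment file
rule star_align:
    input:
        f1 = "fastq/{file}_1_paired.fq.gz",
        f2 = "fastq/{file}_2_paired.fq.gz"
    output:
        "bam/{file}_Aligned.toTranscriptome.out.bam"
    conda:
        "envs/star.yaml"   # hypothetical file listing bioconda/conda-forge channels and a pinned star version
    threads: 12
    shell:
        "STAR --runThreadN {threads} ..."  # STAR resolved from the env, no absolute path needed
```

Invoked as `snakemake --use-conda ...`, snakemake builds the environment once and activates it for each job, which also works together with --cluster.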

Thanks to all of you sincerely!
BTW: life is always hard, especially when you are the only one doing bioinformatics in your lab.

19 months ago
caggtaagtat ★ 1.5k

Hi there,

Since you want to submit jobs to the cluster, I would like to share a nice way to do so. You can write the STAR command in a shell script, let's call it STAR_pass1.sh, and then submit it to the cluster like this:

import os

rule align_with_STAR_pass1:
    input:
        "{samples}.fq"
    params:
        current_path = os.getcwd()
    output:
        temp("{samples}.star1.done.txt")
    shell:
        """
        echo "{params.current_path}/scripts/STAR_pass1.sh -i {input}" | qsub -l select=1:ncpus=8:mem=50gb -l walltime=70:59:59 -N STAR1 -A FoKo2017-Epithel -j oe
        """

What it does is take your command to execute the shell script, echo it, and pipe it to qsub.

Then you call snakemake like this:

snakemake --latency-wait 400000 -j 24

Are you sure? I am absolutely no expert, but I want to learn, so let me know if I misunderstand. The idea of a workflow manager is also that you run jobs in parallel in an easy way. If you have 100 samples and a cluster of 100 nodes, you want to run all samples at the same time. But if you do it like this and execute the Snakefile from a computer or master with only 4 cores, it only runs 4 samples at a time. So it runs at most 4 samples, while the actual analysis is done on a 100-node cluster.


Please excuse me, I was under the impression that you wanted to submit the jobs to a high-performance cluster. I just re-read your question and saw that you want to execute it on your local machine.


I am not the OP... and that person still wants to execute it on a cluster...


I see. You can simulate a "cluster" on your machine; that's what I got confused about. You are right, the OP wants to execute it on a regular cluster, so my answer is still relevant.

Coming back to your question above: when you have 100 samples, this snakemake script would try to submit 100 jobs to the cluster, each using 8 CPUs, 50 GB RAM and 70 h of walltime, as stated in the second part of the shell command select=1:ncpus=8:mem=50gb -l walltime=70:59:59. So the pipe takes the command to execute the script from the echo and gives it to the qsub command.
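For comparison, the same resource request can be passed through snakemake's built-in --cluster mode (which the OP also used above), letting snakemake submit and track each job itself instead of echoing into qsub inside a rule. The exact qsub flags depend on your scheduler, so treat this as a sketch:

```
# snakemake submits one qsub job per rule instance, up to 100 in flight;
# {threads} is filled in from each rule's threads directive
snakemake -p -j 100 \
    --cluster "qsub -l select=1:ncpus={threads}:mem=50gb -l walltime=70:59:59" \
    --latency-wait 60
```

This way the rule's shell block stays a plain STAR/RSEM command, and the cluster-specific submission details live only on the command line.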


Sorry for the confusion. At the beginning the pipeline worked well on a local machine but failed when submitted to the cluster. After posting the question I found my snakemake version was 3.13.3, so I updated to 5.7.3, and then found that the pipeline failed on both the local machine and the cluster. Thus, I am now struggling to figure out what's wrong with my Snakefile or anything else.


OK, then I think an alternative solution could be to submit the jobs in a similar way to this, depending on the job scheduler on your cluster.


Are you able to run STAR alone, i.e. without snakemake, with just a good old bash command? If not: the pipeline you mentioned has an additional step prior to the mapping that generates the needed STAR index, which you are skipping in your workflow.


Yes, I can run STAR alone, I can run the code generated by snakemake -np (after creating the bam/ and quant/ directories manually), and I could even run the same pipeline successfully, at least on my local machine, with snakemake v3.13.3. But now, with v5.7.3, the pipeline fails even on the local machine.
