Variant Calling Pipeline Parallelization: "sbatch: not found, Error submitting jobscript (exit code 127)"
3 months ago

I developed a pipeline in Snakemake for genome variant calling, based on the GATK toolkit. I am now trying to parallelize it so it can run on an HPC cluster with multiple nodes.

I have a configuration file (YAML) where the file paths are defined.

When I execute the pipeline with the command:

shifter --volume=/home/ubuntu/vcall_docker_gatk4_bottle/:/mnt/  \
--image=docker:ray2g/vcall_biodata:1.5.1 \
snakemake --snakefile /mnt/vcall-pipe3_cluster.snake \
-p /mnt/genome/resources_broad_hg38_v0_Homo_sapiens_assembly38.fasta.sa \
-j 24 \
--cluster 'sbatch -p {params.partition} --mem {resources.mem_mb}mb --cpus-per-task {resources.cpus}' \
--forceall

I get this error:

Building DAG of jobs...
Using shell: /bin/bash
Provided cluster nodes: 24
Unlimited resources: cpus, mem_mb
Job counts:
        count   jobs
        1       bwa_index
        1

[Wed Jan 27 14:30:40 2021]
Job 0: Building index -> /mnt/genome/resources_broad_hg38_v0_Homo_sapiens_assembly38.fasta.sa

bwa index /mnt/genome/resources_broad_hg38_v0_Homo_sapiens_assembly38.fasta
/bin/sh: 1: sbatch: not found
Error submitting jobscript (exit code 127):Shutting down, this might take some time.
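For context, exit status 127 is the conventional POSIX-shell status for "command not found", which suggests the shell Snakemake uses to submit jobs (here running inside the shifter container) cannot find `sbatch` on its PATH. A minimal illustration of that status (the command name is a made-up placeholder):

```shell
# Exit status 127 is what POSIX shells return when a command
# cannot be found on PATH, matching the error above.
sh -c 'some_nonexistent_command_for_demo' 2>/dev/null
echo "exit status: $?"   # prints "exit status: 127"
```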

Could anyone help me?

snakemake variant calling pipeline gatk hpc
Comment:

HPC cluster with multiple nodes.

What job scheduler (workload manager) does your HPC use? SGE? Slurm? ...

Reply:

Thank you for the reply. The HPC uses Slurm as its workload manager.
