Question: bwa parallel on a SLURM cluster
little_more wrote (5 months ago):

I'm working on a SLURM cluster with NGS data. I trimmed raw reads and was thinking of the best way to align them to the reference genome. I have pairs of reads for a few samples. I wrote a script for parallel bwa:

#SBATCH --cpus-per-task=1
#SBATCH --ntasks=10
#SBATCH --nodes=1

# align with bwa & convert to bam
bwatosam() {
  id=$1
  index=$2
  output=$3/"$id".bam
  fq1=$4/"$id".R1.fq.gz
  fq2=$4/"$id".R2.fq.gz

  # double quotes so $id expands; with single quotes the read group would contain a literal "$id"
  bwa mem -t 16 -R "@RG\tID:${id}\tSM:${id}\tPL:ILLUMINA\tLB:${id}_exome" -v 3 -M "$index" "$fq1" "$fq2" |
    samtools view -bo "$output" -
}
export -f bwatosam

# run bwatosam in parallel
ls trimmed/*.R1.fq.gz |
 xargs -n 1 basename |
  awk -F ".R1" '{print $1 | "sort -u"}' |
   parallel -j $SLURM_NTASKS "bwatosam {} index.fa alns trimmed"

But I'm not sure I'm using the right #SBATCH parameters for the job, because if I run it without -j:

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=5

# run bwatosam in parallel
ls trimmed/*.R1.fq.gz |
  xargs -n 1 basename |
   awk -F ".R1" '{print $1 | "sort -u"}' |
    parallel "bwatosam {} index.fa alns trimmed"

it runs 10 times faster. What number of nodes/CPUs/threads should I use?


Have you tried submitting jobs directly to SLURM, without the additional complexity of using parallel? On a cluster, parallel adds complexity for no good reason as far as I can see.

(genomax, 5 months ago)
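As an illustration of that suggestion, here is a minimal per-sample submission loop without parallel. This is a dry-run sketch: the echo prints the sbatch commands instead of submitting them, and the --mem and --cpus-per-task values are assumptions, not cluster-verified settings.

```shell
# Dry-run sketch: one SLURM job per sample instead of GNU parallel.
# Remove the leading "echo" to actually submit; 17 CPUs = 16 bwa threads + 1 samtools.
shopt -s nullglob
for fq1 in trimmed/*.R1.fq.gz; do
  id=$(basename "$fq1" .R1.fq.gz)   # trimmed/sampleA.R1.fq.gz -> sampleA
  echo sbatch --nodes=1 --ntasks=1 --cpus-per-task=17 --mem=30G \
    --wrap "bwa mem -t 16 -M index.fa trimmed/$id.R1.fq.gz trimmed/$id.R2.fq.gz | samtools view -bo alns/$id.bam -"
done
```

Each sample then becomes an independent job that the scheduler can place wherever resources are free, with no shared-node contention to reason about.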
ATpoint (Germany) wrote (5 months ago):

Depends on the node. I typically run alignments with basically this kind of script (sorry Pierre Lindenbaum, no snakemake yet) on a 72-core node with 192GB RAM, and then use:

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=72
#SBATCH --partition=normal

In this case I would use 4 parallel processes with 16 bwa threads each. It depends on how much memory your node has; can you give some details? When using parallel I recommend booking the entire node, to make sure you are not interfering with processes from other users.

=> Note that I always book the entire node when running things in parallel, so I essentially do not have to care about RAM consumption etc. as long as the node can handle it. If you share the node with others, it might be a good idea to ask your admin beforehand whether using parallel is allowed on your cluster nodes.

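For concreteness, that setup could look roughly like the following full job script on a 72-core node, reusing the bwatosam function and file layout from the question. The partition name is illustrative.

```shell
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=72
#SBATCH --partition=normal

# 4 concurrent jobs x (16 bwa threads + 1 samtools) = 68 of the 72 cores in use
ls trimmed/*.R1.fq.gz |
  xargs -n 1 basename |
    awk -F ".R1" '{print $1 | "sort -u"}' |
      parallel -j 4 "bwatosam {} index.fa alns trimmed"
```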

ATpoint, thanks for answering my question without endlessly referring to snakemake! :)

cluster specifications:

376 nodes: each with 2 processors (8 cores each) & 64 GB RAM

144 nodes: each with 2 processors (12 cores each) & 64 GB RAM

Do you specify how much memory you need with #SBATCH --mem=...? And do you use -j 4 to get 4 parallel processes? Because, as I understand it, by default parallel runs the maximum number of parallel processes (which depends on the number of CPUs on the node).

(little_more, 5 months ago)

Yes, we have to set #SBATCH --mem=... on our cluster, as the batch system kills processes that use more than the specified amount. I have it at 80GB by default. For your 64GB nodes I would run 2 or 3 jobs in parallel, probably 2, since bwa sometimes uses a lot of memory when aligning batches of reads that are somewhat difficult (repetitive), from what I understand. That way you should avoid running out of memory.

(ATpoint, 5 months ago)
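On the 64 GB nodes listed above, that advice might translate into a header like this. The 60G value, leaving headroom under the 64 GB of physical RAM, is my assumption rather than a tested figure.

```shell
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=34   # 2 x (16 bwa threads + 1 samtools)
#SBATCH --mem=60G              # stay under the node's 64 GB physical RAM

# cap parallel at 2 concurrent bwatosam jobs to avoid running out of memory
ls trimmed/*.R1.fq.gz |
  xargs -n 1 basename |
    awk -F ".R1" '{print $1 | "sort -u"}' |
      parallel -j 2 "bwatosam {} index.fa alns trimmed"
```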

so you suggest using:

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
parallel -j 2...

right? Sorry for the many questions; I just started working with this cluster, and I was surprised that parallel with -j ran longer than plain parallel.

(little_more, 5 months ago)

No worries. --ntasks-per-node=8 would need to be 34, as this is the total number of threads you are going to use: 2x16 for bwa plus 2x1 for the samtools view. Without -j, parallel launches jobs on all files at once and will grab every available resource on the node, which is maybe not a good idea when sharing nodes with others.

I always book the entire node, so my advice here might not be adequate when sharing nodes with others; keep that in mind.

(ATpoint, 5 months ago)
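The arithmetic behind the 34, spelled out as a sketch (thread counts taken from the bwatosam script in the question):

```shell
# total SLURM tasks = concurrent jobs x (bwa threads + samtools threads)
JOBS=2
BWA_THREADS=16
SAMTOOLS_THREADS=1
NTASKS=$(( JOBS * (BWA_THREADS + SAMTOOLS_THREADS) ))
echo "$NTASKS"   # prints 34
```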

I see. So every task will run on a separate CPU, and the node needs to have 34 CPUs?

(little_more, 5 months ago)

This is how I understand things.

(ATpoint, 5 months ago)

I see. Thanks very much for your help!

(little_more, 5 months ago)