Question: bbduk fails to output histograms for random files
heso (Sweden) wrote, 4 months ago:

Hi,

I'm having a problem running bbduk on the server across a range of 24 unpaired fastq.gz files: 4 seemingly random files fail to produce the histogram outputs. I do get the fastq filtering output files (quality trimming + length filtering), but even then I can't be sure they are correct, since the Slurm output is missing the Input, QTrimmed, Low quality discards, etc. summary lines.

Code for bbduk:

    bbduk.sh in=$file out="${file%_UMIextr.fa.gz}_filt.fa.gz" qtrim=rl trimq=10 maq=10 minlen=17 \
        bhist=$file"_bhist.txt" qhist=$file"_qhist.txt" gchist=$file"_gchist.txt" \
        aqhist=$file"_aqhist.txt" lhist=$file"_lhist.txt" gcbins=auto
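
For context, each file runs as its own Slurm job, roughly along these lines (a sketch; the #SBATCH values and array layout here are illustrative, not the actual submission script):

    #!/bin/bash
    #SBATCH --array=0-23        # one task per input file (illustrative)
    #SBATCH --mem=26G           # memory request (illustrative)

    # pick this task's input file (same file pattern as above)
    files=(*_UMIextr.fa.gz)
    file=${files[$SLURM_ARRAY_TASK_ID]}

    bbduk.sh in=$file ...       # the full command shown above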

Looking at the Slurm output of the "faulty" files, it always ends with "line 380: ** Killed".

Example:

    Initial: Memory: max=26366m, total=26366m, free=25929m, used=437m

    Input is being processed as unpaired
    Started output streams: 0.034 seconds.
    /home/h/user/Public/easybuild/software/Compiler/GCC/8.2.0-2.31.1/BBMap/38.50b/bbduk.sh: line 380: 429343 Killed

Whereas for the remaining 20 files I do get the full output summary:

    Initial: Memory: max=26350m, total=26350m, free=25914m, used=436m

    Input is being processed as unpaired
    Started output streams: 0.042 seconds.
    Processing time:        25.337 seconds.

    Input:                  10139222 reads   217509511 bases.
    QTrimmed:               3842 reads (0.04%)       21566 bases (0.01%)
    Low quality discards:   1495899 reads (14.75%)   15555801 bases (7.15%)
    Total Removed:          1496658 reads (14.76%)   15577367 bases (7.16%)
    Result:                 8642564 reads (85.24%)   201932144 bases (92.84%)

    Time:                   25.380 seconds.
    Reads Processed:        10139k   399.49k reads/sec
    Bases Processed:        217m     8.57m bases/sec

Can somebody please tell me what could be the problem and how to fix this?

Tags: bbtools, bbduk
lieven.sterck (VIB, Ghent, Belgium) wrote, 4 months ago:

Might this be a memory issue? Can you check what the memory usage for the failed jobs was?
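
For example, with Slurm's accounting tools (assuming job accounting is enabled on your cluster; <jobid> is a placeholder):

    # per-step memory accounting for a finished job
    sacct -j <jobid> --format=JobID,JobName,State,ReqMem,MaxRSS,Elapsed

    # or, if the seff utility is installed, a one-page efficiency summary
    seff <jobid>

A MaxRSS at or near ReqMem on the failed jobs would point to the job being killed for exceeding its allocation.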

heso replied:

It was indeed a memory issue. I didn't suspect this because the details Slurm gave didn't point to a memory shortage, but the problem is fixed now.
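
In case it helps others, the fix comes down to two knobs (a sketch; the values here are illustrative, not the exact ones used):

    #SBATCH --mem=32G    # ask Slurm for more memory, and/or ...

    # ... cap the Java heap below the Slurm allocation so the job
    # isn't killed; BBTools scripts pass -Xmx through to the JVM
    bbduk.sh -Xmx26g in=$file ...   # same options as before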

Thanks!


lieven.sterck replied:

Good to hear the issue is resolved.

A small educational note: if an answer was helpful you should upvote it; if the answer resolved your question you should mark it as accepted. (You can accept multiple answers if need be.)

