metaSPAdes can't allocate memory? Crashes when requesting 45 GB on a 128 GB RHEL 6 machine
6.3 years ago
jnowacki ▴ 100

metaSPAdes can't seem to request memory on a 128 GB Red Hat 6 machine. Any ideas? Lowering the CPU and memory flag limits doesn't help. This is from the developers themselves:

Here is what happened: SPAdes used approx 15 Gb of RAM and tried to allocate 45 Gb of RAM more. However, your OS failed to fulfil SPAdes' request to do so... Unfortunately, we do not have any workaround for this - we can only pass this error to you.

Actual log file:

done. Total clusters: 240865095
1:04:06.059    15G / 68G   INFO   K-mer Counting           (kmer_data.cpp             : 381)   Collecting K-mer information, this takes a while.
<jemalloc>: Error in malloc(): out of memory. Requested: 47492581872, active: 16651386880

== Error ==  system call for: "['/data1/nimblegen-pipeline/nimblegen-software/SPAdes/SPAdes-3.10.1-Linux/bin/hammer', '/data2/Metagenomics/CAMI_Standard_Simple/metaSpadesResults_CPU_40_RAM_120_postReboot/corrected/configs/']" finished abnormally, err code: -6

======= SPAdes pipeline finished abnormally and WITH WARNINGS!

=== Error correction and assembling warnings:
 * 0:25:15.725    18G / 41G   WARN   K-mer Index Building     (kmer_index_builder.hpp    : 451)   Number of threads was limited down to 19 in order to fit the memory limits during the index construction
======= Warnings saved to /data2/Metagenomics/CAMI_Standard_Simple/metaSpadesResults_CPU_40_RAM_120_postReboot/warnings.log

=== ERRORs:
 * system call for: "['/data1/nimblegen-pipeline/nimblegen-software/SPAdes/SPAdes-3.10.1-Linux/bin/hammer', '/data2/Metagenomics/CAMI_Standard_Simple/metaSpadesResults_CPU_40_RAM_120_postReboot/corrected/configs/']" finished abnormally, err code: -6

In case you have troubles running SPAdes, you can write to
Please provide us with params.txt and spades.log files from the output directory.

Could you provide some information on earlier experiences with SPAdes? How many reads are you trying to assemble? I cannot think of why the OS would throw that error, other than the malloc() call trying to claim too much memory in one request, or that specific implementation having too much overhead.

Maybe you can try an older version of SPAdes? In a previous question, the version proved to cause a somewhat similar issue.


I solved this. It's a combination of two things:

1) Red Hat Enterprise Linux 6 does not work, but Ubuntu 16.04 LTS server does

2) Ubuntu server works ONLY if there are at least 3 gigabytes of memory per core (according to the software developers). I throttled the assembly down to 6 gigabytes per thread to be safe, and the assembly completed without issue. It would crash fairly early with 2 gigabytes per thread. My server has 128 threads and 256 GB of memory, so that is more than enough. Apparently certain stages of metaSPAdes have a minimum memory footprint per thread regardless of data size.
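A minimal sketch of how that per-thread budget translates into a metaspades.py invocation, using the real `-t` (threads) and `-m` (memory limit in GB) flags; the input filenames and output directory here are placeholders, not the original poster's paths:

```shell
#!/bin/sh
# Memory budget for SPAdes, in GB (passed to -m).
MEM_GB=120
# Per-thread headroom; 6 GB/thread worked for the poster, 2 GB/thread crashed.
GB_PER_THREAD=6
# Derive the thread count so memory/threads stays at the chosen ratio.
THREADS=$(( MEM_GB / GB_PER_THREAD ))   # 120 / 6 = 20 threads
echo "Running with $THREADS threads and ${MEM_GB} GB"

# Placeholder reads and output directory; substitute your own.
metaspades.py -1 reads_1.fastq.gz -2 reads_2.fastq.gz \
    -t "$THREADS" -m "$MEM_GB" -o metaspades_out
```

The point is simply that the thread count is derived from the memory limit, rather than setting `-t` to all available cores and letting the per-thread footprint exceed what the OS will grant.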


I have the same problem with jemalloc and the amount of memory used by SPAdes 3.10.1 and other, older versions.

How did you specify that SPAdes should use 3 or 6 gigabytes of memory per thread on your machine? Should that be an amount of RAM?

