Question: Different Problems with Velvet
stu111392 wrote, 5.8 years ago:

Hello Folks,

Let's start from the beginning so you get an idea of what I'm doing. My input files are paired-end reads, which I put through a module that clips and merges them (it's an ancient DNA sample). After that I filtered them with DeconSeq and got one FASTQ file of "clean" reads, which is about 3 GB and contains 14,639,466 reads.
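If you want to sanity-check that read count yourself, it can be derived from the line count of the FASTQ file (the file name `clean.fastq` here is just a placeholder for your deconseq output):

```shell
# Each FASTQ record is exactly 4 lines (header, sequence, '+', qualities),
# so total lines divided by 4 gives the number of reads.
reads=$(( $(wc -l < clean.fastq) / 4 ))
echo "$reads"
```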

Now I want to do a metagenomic assembly from the clean FASTQ file. The read lengths range from 25 to 192. So I compiled Velvet with the following parameters:


I used this k-mer length because I wanted to include all the reads in my file, and OPENMP to enable multithreading. I am not entirely sure what the CATEGORIES option does, but from what I gather from the manual it seems OK to set it to one.

Then I executed velveth with this command line:

./velveth out-dir 17 -fastq /path/to/the/file......

And it created the initial files. After that I borrowed a lot of memory (100 GB) and tried to execute velvetg with the following parameters:

./velvetg out-dir -exp_cov auto

Then the program starts, and so far I've seen two possible outcomes. One is this strange error I'm not understanding:

Scanning pre-graph file out-dir/PreGraph for k-mers
velvetg: Key length 24 greater than max allowed value (17) Recompile Velvet to deal with this word length.: No such file or directory

Why is there suddenly a greater key length? I thought the only thing velveth does is construct a hash table. So I recompiled with 'MAXKMERLENGTH=27' and ran it again.
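For reference, the recompile-and-rerun sequence would look roughly like this (the OPENMP and CATEGORIES flags are carried over from the earlier compile; note that a stale PreGraph left in `out-dir` by an earlier run with a different k may also trigger this key-length error, so clearing the directory before rerunning is a reasonable precaution):

```shell
# Rebuild Velvet with a larger maximum word size.
make clean
make 'MAXKMERLENGTH=27' 'OPENMP=1' 'CATEGORIES=1'

# Rerun both stages with the freshly built binaries, on a clean
# output directory, so all intermediate files match the new binary.
rm -rf out-dir
./velveth out-dir 17 -fastq /path/to/the/file
./velvetg out-dir -exp_cov auto
```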

Then I encountered the second outcome: PBS killed the job ("Memory Limit exceeded").

Any thoughts on this? Anybody with experience in metagenomic assembly? How much memory do you think I need for the task, or am I doing something fundamentally wrong? I would ask somebody in our group, but I am the only one here working with Velvet, so I have no one to ask.
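On the PBS side, a minimal job-script sketch that explicitly requests the borrowed memory might look like this (the resource syntax and the 8-core count are assumptions; sites configure their schedulers differently):

```shell
#!/bin/bash
#PBS -l mem=100gb        # the memory limit PBS enforces on the job
#PBS -l nodes=1:ppn=8    # cores for the OpenMP-enabled build
cd "$PBS_O_WORKDIR"
export OMP_NUM_THREADS=8 # Velvet's OpenMP threading honours this variable
./velvetg out-dir -exp_cov auto
```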


As always with kind regards and thanks in advance,


Tags: assembly • software error
modified 4.5 years ago by rafael.d.macho • written 5.8 years ago by stu111392

IIRC, Velvet is a crazy memory hog. I remember allocating 244 GB to run velvetg. Quick question: have you tried using digital normalization before assembly? That makes the task computationally less expensive. Check it out.
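If you do try it, a typical invocation via the khmer package's normalize-by-median.py script looks roughly like this (the k size and coverage cutoff `-C` are tunable; treat the exact flags as a sketch rather than gospel, as they vary between khmer versions):

```shell
# Discard reads whose k-mers are already well covered, capping the
# median k-mer coverage at ~20 before assembly.
normalize-by-median.py -k 20 -C 20 -o clean.keep.fastq clean.fastq
```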

written 5.8 years ago by _r_am

No, I have not, but I'll check it out. Thank you for the hint.

written 5.8 years ago by stu111392
Vivek Todur wrote, 5.8 years ago:


Velvet is designed for assembling a single genome. If you really want to stick with Velvet, try MetaVelvet, which was developed for this purpose. Otherwise, there is an even better pipeline, MOCAT, for assembly, gene prediction and classification.


Good Luck,

written 5.8 years ago by Vivek Todur

Thanks for the advice, but the MetaVelvet manual clearly states that you have to run velveth and velvetg from "normal" Velvet before you can use MetaVelvet. As far as I understood the process, you take the initial de Bruijn graph built by velvetg, and MetaVelvet divides this graph into subgraphs for each organism in your sample. So I think the workflow is right. But I'll have a look at MOCAT.

written 5.8 years ago by stu111392
stu111392 wrote, 5.8 years ago:

I looked at the principle of digital normalization, as suggested by RamRS, for my data, but when I read the paper another question came up. What digital normalization basically does is trim your k-mer coverage to a certain value; in the process, errors are removed and redundant data is discarded. But the MetaVelvet paper says that the division of the Velvet graph is done by looking at k-mer frequencies. If you normalize beforehand, you no longer have big differences in those frequencies.

As far as I can tell, using normalization before de novo assembly of a metagenomic sample is not a good idea. Any thoughts on this?

written 5.8 years ago by stu111392

You should ditch Velvet for something better, like IDBA-UD.

written 5.8 years ago by 5heikki
djuna.gulliver (United States) wrote, 5.8 years ago:

You can make this work with Velvet. I struggled too (and asked many questions on Biostars), but I managed to assemble my metagenome with various k-mer lengths of 21, 31, and 41.

Might I suggest also adding "-short" to the velveth command? So it would be:

velveth out-dir 17 -short -fastq /path/to/the/file

And then run the velvetg command as before.

Stay strong.

written 5.8 years ago by djuna.gulliver

Hey, thank you for your suggestion. I'll try it out, but at the moment it seems I'll use SPAdes for further assemblies. By the way, I asked in the MetaVelvet Google Group how to assign my reads, and they should be declared as short paired. Over the next few days I will get access to a cluster with much, much more power, and if I find the time I will try assembling with Velvet as well.

Best wishes

written 5.8 years ago by stu111392
rafael.d.macho wrote, 4.5 years ago:

I had the same problem (a "core dump" or similar), and for me the fix was that the sequences need to be located in the same folder as the software to make it run, even though I had written the full path to the file wherever I had left it.

Here is my example: Computer@Computer:~/Descargas/velvet_1.2.10$ ./velveth velvet5 31 -fastq.gz -shortPaired ~/Descargas/SRR292770_1.fastq.gz ~/Descargas/SRR292770_2.fastq.gz

written 4.5 years ago by rafael.d.macho