Can you break up your fasta file containing all metagenomic reads into smaller, separate files and submit these smaller files separately to an assembly program? I am working with 454 data on Genovo assembler and it doesn't seem to want to take files with more than about 300 reads in them.
Maybe your problem is that you have sequences longer than 1000 bp? From the FAQ:
Algorithm cannot handle reads with length > 1000.
In any case, you can try the approach of splitting the initial FASTA into many smaller files, running smaller assemblies, and continuing each run where the previous one stopped using the following command line:
assemble <fasta_file> N <dump_file>
This will run Genovo for N iterations, loading the initial state from <dump_file>, which is the <fasta_file>.dump.best file produced by the previous run. You can repeat this until all reads have been incorporated into the assembly.
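For the splitting step itself, here is a minimal sketch in plain Python (no external dependencies) that breaks a FASTA file into chunks of at most a given number of reads. The function name `split_fasta`, the 300-read default, and the output naming scheme are my own choices for illustration, not anything Genovo requires:

```python
def split_fasta(path, reads_per_file=300, out_prefix="chunk"):
    """Split a FASTA file into smaller files with at most
    reads_per_file records each. Returns the list of files written."""
    out_files = []
    count = 0      # number of records seen so far
    part = 0       # index of the current output chunk
    out = None
    with open(path) as fh:
        for line in fh:
            if line.startswith(">"):
                # Start a new chunk every reads_per_file headers.
                if count % reads_per_file == 0:
                    if out:
                        out.close()
                    part += 1
                    name = f"{out_prefix}_{part}.fasta"
                    out_files.append(name)
                    out = open(name, "w")
                count += 1
            if out:
                out.write(line)
    if out:
        out.close()
    return out_files
```

You could then run `split_fasta("reads.fasta", 300)` and feed each resulting chunk to `assemble` in turn, passing the `.dump.best` file from one run as the starting state for the next. Note this simple version assumes the file starts with a `>` header line, as a well-formed FASTA does.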