Forum: Binning the rare biosphere
Konstantin ▴ 20 · 4 months ago

Hello everyone. I would like to start a discussion here.

Recently, I responded to a question on SEQanswers about binning rare microbes from shotgun metagenomes (https://www.seqanswers.com/forum/applications-forums/metagenomics/324515-software-recommendation-for-metagenome-assembly-for-low-abundance-bacteria). I suggested a pipeline for the task, but unfortunately it did not gain much attention. Nevertheless, I have tried implementing the idea, and I will share it here for further discussion.

Let's assume that we have several metagenomes representing a specific area. We expect that all our low-abundance taxa of interest are uniformly distributed across the area, along with their corresponding DNA and reads.

The first step is to assemble contigs, for which we could use a tool like metaSPAdes. From there, we construct MAGs (metagenome-assembled genomes) from each sample individually. It is preferable to use three or four binners, such as CONCOCT, MetaBAT, and MaxBin, and then use DAS Tool to obtain consensus bins from the results of all of them.
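
For concreteness, here is a minimal sketch of this per-sample assembly-and-binning step, driving the tools from Python via subprocess. All file names, sample names and thread counts are placeholders, and only MetaBAT (the metabat2 binary) is shown; CONCOCT, MaxBin and DAS Tool would be chained on in the same way.

```python
# Sketch of per-sample assembly + binning (paths and sample names are placeholders).
# Assumes spades.py, bowtie2, samtools, jgi_summarize_bam_contig_depths and
# metabat2 are on PATH; CONCOCT/MaxBin and DAS Tool would be run analogously.
import os
import subprocess

def run(cmd, **kwargs):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True, **kwargs)

def assemble_and_bin(sample, r1, r2, threads=16):
    asm_dir = f"{sample}_metaspades"
    contigs = f"{asm_dir}/contigs.fasta"

    # 1) metagenomic assembly with metaSPAdes
    run(["spades.py", "--meta", "-1", r1, "-2", r2, "-t", str(threads), "-o", asm_dir])

    # 2) map the sample's own reads back to its contigs to get per-contig coverage
    run(["bowtie2-build", contigs, f"{sample}_contigs"])
    with open(f"{sample}.sam", "w") as sam:
        run(["bowtie2", "-x", f"{sample}_contigs", "-1", r1, "-2", r2,
             "-p", str(threads)], stdout=sam)
    run(["samtools", "sort", "-@", str(threads), "-o", f"{sample}.bam", f"{sample}.sam"])
    run(["samtools", "index", f"{sample}.bam"])

    # 3) coverage table + one binner (MetaBAT shown); other binners take the same inputs
    run(["jgi_summarize_bam_contig_depths", "--outputDepth", f"{sample}_depth.txt",
         f"{sample}.bam"])
    os.makedirs(f"{sample}_bins", exist_ok=True)
    run(["metabat2", "-i", contigs, "-a", f"{sample}_depth.txt",
         "-o", f"{sample}_bins/bin"])

assemble_and_bin("sampleA", "sampleA_R1.fastq.gz", "sampleA_R2.fastq.gz")
```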

Now that we have MAGs built from the most abundant (and least diverse) DNA in the samples, the next step is to pool all the raw metagenomic reads into one file and to merge all the constructed MAGs into another. The merged-MAG file is used to build an index, for example with Bowtie2. We then map the pooled raw reads against this index and keep only the reads that do not align, i.e. the reads that were not incorporated into the contigs and bins. Since this leftover pool contains DNA from all the initial samples except that of the most abundant taxa, we expect the number of reads belonging to low-abundance taxa to be sufficient to assemble contigs and cluster them into bins.
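
To make the subtraction step concrete, here is a minimal sketch (again Python driving Bowtie2 and samtools via subprocess; all file names are placeholders). It builds a Bowtie2 index from the merged MAGs, maps the pooled reads against it, and keeps only pairs in which neither mate aligned (flag 12 = read unmapped and mate unmapped; -F 256 drops secondary alignments).

```python
# Sketch of the read-subtraction step: keep only read pairs in which neither
# mate maps to any of the merged MAGs. File names are placeholders.
import subprocess

def run(cmd, **kwargs):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True, **kwargs)

threads = "16"

# 1) index the concatenated MAGs
run(["bowtie2-build", "all_MAGs.fasta", "all_MAGs"])

# 2) map the pooled raw reads against the MAG index
with open("pooled_vs_MAGs.sam", "w") as sam:
    run(["bowtie2", "-x", "all_MAGs",
         "-1", "pooled_R1.fastq.gz", "-2", "pooled_R2.fastq.gz",
         "-p", threads], stdout=sam)

# 3) name-sort so that samtools fastq can write proper pairs
run(["samtools", "sort", "-n", "-@", threads,
     "-o", "pooled_vs_MAGs.bam", "pooled_vs_MAGs.sam"])

# 4) extract pairs where both mates are unmapped (-f 12), dropping secondary
#    alignments (-F 256); these leftover reads feed the second assembly round
run(["samtools", "fastq", "-f", "12", "-F", "256",
     "-1", "leftover_R1.fastq.gz", "-2", "leftover_R2.fastq.gz",
     "pooled_vs_MAGs.bam"])
```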

The next step is therefore to run an assembler and a binner on this dataset enriched in less abundant DNA. The hypothesis is that this will yield more or less complete MAGs of rare microbes.

Mapping the raw reads onto the abundant MAGs is not the only option: we could also map them onto the initial contigs to obtain the reads that were not incorporated into the assembly. However, in that case more data would be lost, since a portion of the contigs has already been discarded by the binning program before the MAGs were constructed. Using contigs instead of MAGs would therefore leave us with only the reads that were used by neither the assembler nor the binner.

I have performed exactly what was described above.

A brief background: my study focuses on a soil in which I suspect high diversity and abundance of a specific microbial group, sulfate-reducing bacteria. Sulfate reducers are known for their low absolute abundance in soils. However, I have 1) culture-based evidence, 2) 16S amplicon profiles, 3) shotgun taxonomic profiles, and 4) soil metadata that all strongly indicate the presence of sulfate reducers. Despite this, the results of MAG assembly are contradictory: none of the typical sulfate reducers seen in the profiles were recovered with the usual pipeline. Interestingly, classification of the dropped reads (those discarded at the contig-assembly stage) shows many sulfate reducers, sometimes even outnumbering those among the assembled reads.

Unfortunately, assembly and binning of the dropped reads also failed to recover the sulfate reducers. It should be noted that this approach did produce new bins of moderate quality, and some of the microbes assembled this way belong to phyla that were absent from the first set of bins obtained with the usual approach. I take the latter as an indication that the pipeline is generally sound, since it yields at least a few new bins. I should also note that several shotgun samples were used and quality-controlled during these attempts, and binning through the usual pipeline yielded many excellent-quality MAGs. So, again, I believe the problem is not the data themselves but how they are being used.

My questions are:

  1. Is there a possibility that any of the suggested stages were performed incorrectly or are fundamentally flawed?
  2. What are your opinions and experiences regarding binning rare microbes?
  3. How does the depth of sequencing relate to the ability to bin rare taxa? And what depth is considered optimal?
  4. Will the core idea (removing the most abundant and least diverse DNA from a sample and then assembling what is left) be helpful for binning rare species from samples with modest sequencing depth?

Please note that I am a beginner in environmental metagenomics and lack extensive experience. I would appreciate any suggestions or thoughts.

Respectfully, Konstantin.

metagenomics soil assembly techniques binning
Mensur Dlakic ★ 27k · 4 months ago

I don't think there is anything conceptually wrong with your approach. If that is right, there are at least two explanations for your results: 1) something is wrong in your execution; 2) the microbes you want are in such low abundance that they can't be assembled. There are all kinds of things one can do wrong even when one knows how to describe the procedure in general terms, so I will focus on the second possibility.

Generally speaking, nothing special is needed to assemble low-abundance microbes. I have a dataset of ~100 bins, of which 17 individually have abundance < 0.2% (~2.2% combined). Yet 13 of them are assembled at > 70% completeness - even 3 that have 0.09% abundance. Their average sequencing depth is 8-22x. This is to say that in many cases low-abundance MAGs can be assembled without any special treatment.

Now, for all I know you have 300 different microbes in the sample and the ones you are interested in are at 0.001% or so. I suggest you try khmer and its digital normalization. The idea is to lower the depth of your reads selectively, so that low-abundance organisms are not affected. For example, if you normalize to 60x, any k-mers present at < 60x will not be affected, while more abundant ones will be brought down to 60x. This is different from random read sampling, and it may allow you to assemble MAGs globally without any read subtraction. It may be worth trying to down-sample to 20x, 40x, 60x and 80x.
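
A rough sketch of how that might look with khmer's scripts, again driven from Python (file names, k-mer size and the memory limit are placeholders; I am assuming paired-end reads that first need to be interleaved):

```python
# Rough sketch of digital normalization with khmer; file names are placeholders.
# Assumes the khmer scripts interleave-reads.py and normalize-by-median.py are installed.
import subprocess

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# normalize-by-median.py expects interleaved paired-end reads
run(["interleave-reads.py", "pooled_R1.fastq.gz", "pooled_R2.fastq.gz",
     "-o", "pooled_interleaved.fastq"])

# try several coverage cutoffs; k-mers already below the cutoff are left untouched
for cutoff in ("20", "40", "60", "80"):
    run(["normalize-by-median.py", "-p", "-k", "20", "-C", cutoff, "-M", "8e9",
         "-o", f"normalized_C{cutoff}.fastq", "pooled_interleaved.fastq"])
```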

Konstantin ▴ 20

Do you suggest doing such normalization so that 1) only the reads present at < Nx are used for assembly and the downstream steps, or 2) they are preserved for later binning while the others (more abundant) are discarded (or used for constructing the high-abundance bins)?

Regardless of the answer, I want to elaborate on the following. A genome is represented by a number of reads. Even if the genome is not abundant (the corresponding microbial species is rare), isn't it natural to expect that some of its reads are more abundant than others? In that case, picking a specific depth cutoff would discard some of that genome's reads. Of course, we can try a number of threshold values and choose the one that yields the most rare bins of good quality. But that also means the threshold search has to be repeated for every new dataset, doesn't it? Because we don't know how the taxa are distributed along the list of reads sorted from most to least abundant. Or do we expect this distribution to be the same for the same object and the same DNA extraction and sequencing protocol?

Sorry if my formulations are a bit cumbersome. Thank you very much for the suggestion, I will definitely try it.

Mensur Dlakic ★ 27k

Digital normalization will not "delete" any bin. If you do it at 60x, any bin with average sequencing depth < 60x should not be affected at all. Those with depth > 60x will be down-sampled to 60x, but they will still assemble. As to "some parts" of the genome being sequenced at a greater or smaller depth, that shouldn't be an issue because they will still be preserved at their original depth or at 60x, whichever is smaller.

I am not suggesting this normalization should be done for all datasets, but it might help your current case. I have seen a CPR-like MAG that doesn't show up when assembling a full dataset, but pops out when the data is digitally down-sampled.

Beware that abundance can't be reliably estimated from a down-sampled dataset.

