Question: UMI Tools Dedup
rajpal22288 wrote (5 months ago):

I am using the dedup function in UMI-tools, but there seems to be an issue of excessive memory usage while the output is being generated. Kindly let me know if there is an alternative tool or way to remove the duplicates and count the UMIs.

modified 8 weeks ago by viktorcheberachko • written 5 months ago by rajpal22288

Your post lacks the details needed to reproduce the (what I assume is an) error or problem. What are the command lines, which errors or warnings came up, how much memory do you have and how much was consumed, and what are the input files? Please edit your question accordingly: Brief Reminder On How To Ask A Good Question.

modified 5 months ago • written 5 months ago by ATpoint

I have added the link where many people have reported the same issue.

written 5 months ago by rajpal22288

It may still help to note what your criterion for "excessive" is.

This may be an alternate option to try.

written 5 months ago by genomax

And the conclusion of that link seemed to be that, for the most part, this is just how the software works. It has to remember all the reads and their indices that it comes across, which is inherently memory-intensive.

written 5 months ago by swbarnes2
i.sudbery wrote (5 months ago, Sheffield, UK):

If you are doing Drop-seq or 10x Chromium, I highly recommend alevin. Other tools that can handle UMI deduplication are STARsolo, umis and Picard MarkDuplicatesWithCigar. Note that the last two do not do error correction.

Three things might cause excessive memory usage:

  1. Many reads whose pairs are on a different contig - here there is no solution unless you are willing to drop these reads - no other tool is going to do any better.
  2. Analysing single-cell RNA seq without using the --per-cell option.
  3. Extreme read depth, with an appreciable % saturation of the space of possible UMIs.

The general advice to reduce memory usage is to not use the --output-stats option.

If you are struggling with high read depth and UMI-space saturation, you can switch to --method=unique. The downside is that you lose UMI-tools' error-correcting deduplication; we have shown that skipping error correction introduces bias, especially at high depth. The upside is that this makes UMI-tools behave effectively the same as any other UMI-aware tool. Only umi_tools, alevin and STARsolo really perform error correction on UMIs. Otherwise, deduplication is just an intrinsically memory-hungry task.
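To make the trade-off concrete, here is a minimal Python sketch (not UMI-tools' actual implementation) of what --method=unique amounts to: keep one read per (position, exact UMI) pair, with no error correction, so a single sequencing error in a UMI is counted as a new molecule.

```python
def dedup_unique(reads):
    """Keep the first read seen for each (position, exact UMI) pair.

    This mimics the low-memory behaviour of --method=unique: only a
    set of keys is held in memory, and no UMI error correction is
    attempted, so a one-base error in a UMI creates a spurious
    'new' molecule (the bias error-correcting methods avoid).
    """
    seen = set()
    kept = []
    for pos, umi, read_id in reads:
        key = (pos, umi)
        if key not in seen:
            seen.add(key)
            kept.append(read_id)
    return kept

reads = [
    (100, "ACGT", "r1"),
    (100, "ACGT", "r2"),  # exact duplicate of r1 -> removed
    (100, "ACGA", "r3"),  # 1-mismatch UMI -> kept here, but an
                          # error-correcting method would merge it
    (200, "ACGT", "r4"),  # different position -> kept
]
print(dedup_unique(reads))  # ['r1', 'r3', 'r4']
```

The directional method instead builds a graph over UMIs at each position and collapses near-identical ones, which is why it needs to hold much more state in memory at high saturation.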

written 5 months ago by i.sudbery

My sequences are not from single cells, so I guess I have to stick with either UMI-tools or Picard. I have tried running samples without --output-stats, but the problem appears to be the same. --method=unique works for me. I have observed that the duplicates have been removed, but I am not sure how to see the number of UMIs extracted.

Before dedup :

@SN526:357:CCAUDACXX:1:1104:2387:1878_CCAAGACCAACC 1:N:0:ATCACG
@SN526:357:CCAUDACXX:1:1104:2744:1836_ACTATGTCAACT 1:N:0:ATCACG
@SN526:357:CCAUDACXX:1:1104:2683:1842_GCCTCCGCGGGG 1:N:0:ATCACG
@SN526:357:CCAUDACXX:1:1104:2676:1990_GTGCTACTTGGG 1:N:0:ATCACG

After dedup:

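To count the UMIs yourself, you can parse them out of the read names: with the UMI-tools extract convention shown above, the UMI is appended to the read name after the last underscore. A small Python sketch using the example headers (assumes that naming convention holds for all reads):

```python
# Example FASTQ header lines in the UMI-tools naming convention,
# where the UMI is appended to the read name after the last '_'.
headers = [
    "@SN526:357:CCAUDACXX:1:1104:2387:1878_CCAAGACCAACC 1:N:0:ATCACG",
    "@SN526:357:CCAUDACXX:1:1104:2744:1836_ACTATGTCAACT 1:N:0:ATCACG",
    "@SN526:357:CCAUDACXX:1:1104:2683:1842_GCCTCCGCGGGG 1:N:0:ATCACG",
    "@SN526:357:CCAUDACXX:1:1104:2676:1990_GTGCTACTTGGG 1:N:0:ATCACG",
]

def umi_of(header):
    # The read name is the first whitespace-separated field;
    # the UMI is everything after its last underscore.
    return header.split()[0].rsplit("_", 1)[1]

umis = {umi_of(h) for h in headers}
print(len(umis))  # 4 distinct UMIs in this example
```

For a whole FASTQ file you would apply `umi_of` to every fourth line (the header lines); comparing the count before and after dedup shows how many distinct UMIs survived.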
modified 5 months ago by genomax • written 5 months ago by rajpal22288
Lior Pachter wrote (8 weeks ago, United States):

The kallisto | bustools workflow has a very low memory footprint.

written 8 weeks ago by Lior Pachter