I am using the dedup function in UMI-tools, but there still seems to be an issue of excessive memory usage while the output is being generated. Kindly let me know if there is an alternative tool or way to remove the duplicates and count the UMIs.
Your post lacks any details that would allow reproduction of the (what I assume is an) error or problem. What are the command lines, which errors/warnings came up, how much memory do you have and how much was consumed, and what are the input files? Please edit your question accordingly: Brief Reminder On How To Ask A Good Question
And the conclusion of the link seemed to be that, for the most part, that's just how the software is. It has to remember all the reads and their indices that it comes across; this is going to be memory-intensive.
If you are doing Drop-seq or 10x Chromium, I highly recommend alevin. Other tools that can handle UMI deduplication are STARsolo, umis and Picard MarkDuplicatesWithMateCigar. Note that the last two do not do error correction.
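A minimal sketch of an alevin run for 10x Chromium data (the index path, FASTQ names and transcript-to-gene map here are placeholders):

    salmon alevin -l ISR --chromium \
        -i salmon_index \
        -1 sample_R1.fastq.gz \
        -2 sample_R2.fastq.gz \
        --tgMap txp2gene.tsv \
        -p 8 -o alevin_out

alevin handles cell-barcode demultiplexing and UMI deduplication, with UMI error correction, as part of quantification.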
Three things might cause excessive memory usage:
1. Many reads whose pairs are on a different contig. Here there is no solution unless you are willing to drop these reads; no other tool is going to do any better.
2. Analysing single-cell RNA-seq without using the --per-cell option (see the sketch after this list).
3. Extreme read depth, with an appreciable percentage saturation of the space of possible UMIs.
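For single-cell data, the per-cell run might look like this (a sketch; it assumes the cell barcode and UMI were already moved into the read names with umi_tools extract, and the BAM names are placeholders):

    umi_tools dedup --per-cell -I input.bam -S deduped_per_cell.bam

Without --per-cell, reads from all cells are pooled into a single UMI network, which inflates memory use.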
The general advice to reduce memory usage is to not use the --output-stats option.
If you are struggling with high read depth and UMI-space saturation, you can switch to --method=unique. The downside of this is that you lose UMI-tools' error-correcting deduplication; we have shown that skipping error correction introduces bias, especially at high depth. The upside is that this makes UMI-tools function effectively the same as any other UMI-aware tool. Really, only umi_tools, alevin and STARsolo use error correction on UMIs. Otherwise, it is just an intrinsically memory-hungry task.
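For example, a low-memory invocation (a sketch; the BAM file names are placeholders) that drops the stats output and uses the unique method:

    umi_tools dedup --method=unique -I input.bam -S deduped.bam

Omitting --output-stats avoids holding the per-position statistics in memory, and --method=unique skips building the UMI error-correction network altogether.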
My sequences are not from a single cell, so I guess I have to stick with either UMI-tools or Picard.
I have tried running samples without --output-stats, but the problem appears to be the same.
The --method=unique option works for me. I have observed that the duplicate reads have been removed, but I am not sure how to see the number of UMIs extracted?
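One way to inspect this (a rough sketch, assuming the default behaviour where umi_tools extract appends the UMI to the read name after an underscore; deduped.bam is a placeholder):

    samtools view deduped.bam | cut -f1 | awk -F'_' '{print $NF}' | sort -u | wc -l

This pulls the UMI off the end of each read name and counts the distinct UMIs remaining after deduplication.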
I have added the link where many people have reported the same issue.
It may still help to note what your criterion for "excessive" is. This may be an alternate option to try.