I am planning to perform a scRNA-seq analysis of a few million cells collected from various experiments. I need to estimate the approximate computational resources required to process the BAM and FASTQ files all the way through to DEGs, UMAPs, etc.
I know this is a multidimensional problem, but I am looking for rough estimates or any information that would help me plan (disk size, RAM, number of cores, etc.).
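For scale, here is the kind of back-of-envelope estimate I have in mind, using hypothetical numbers (3 million cells, ~3,000 expressed genes per cell) for just the in-memory sparse count matrix:

```r
## Rough memory estimate for raw counts held in memory as a sparse matrix.
## All numbers below are assumptions; plug in your own dataset's values.
n_cells  <- 3e6    # hypothetical total cell count
nnz_cell <- 3e3    # hypothetical nonzero genes per cell (dataset-dependent)
bytes_nz <- 12     # dgCMatrix: 8-byte double value + 4-byte integer row index
gib <- n_cells * nnz_cell * bytes_nz / 1024^3
gib                # ~100 GiB for the raw counts alone, before any derived matrices
## Caveat: 3e6 * 3e3 = 9e9 nonzeros exceeds the ~2.1e9 entry limit of R's
## integer-indexed dgCMatrix, so out-of-core backends (e.g. HDF5) or a
## chunked/per-tissue strategy would likely be needed at this scale.
```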
Do you plan to have all these millions of cells in the same analysis, or is it some sort of meta-analysis approach? Please add some details. Is it going to be R or Python?
Thank you for your reply. It's hard to tell at this point exactly what strategy I will adopt; some significant integration will take place for sure, for example by tissue. Regarding R vs. Python, I don't think the choice matters much for computing power, but let's assume that Seurat, Harmony, and MAST will be employed.
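To make the workload concrete, a minimal sketch of the kind of pipeline I mean (object and metadata column names are hypothetical; the dense matrix created by ScaleData is often the peak-RAM step):

```r
library(Seurat)
library(harmony)  # provides RunHarmony()

## Assumes 'counts' (sparse gene x cell matrix) and 'meta' (per-cell metadata,
## including an "experiment" column) are already loaded -- hypothetical names.
obj <- CreateSeuratObject(counts = counts, meta.data = meta)
obj <- NormalizeData(obj)
obj <- FindVariableFeatures(obj)
obj <- ScaleData(obj)    # dense variable-features x cells matrix; RAM-heavy
obj <- RunPCA(obj)
obj <- RunHarmony(obj, group.by.vars = "experiment")  # batch/tissue integration
obj <- FindNeighbors(obj, reduction = "harmony", dims = 1:30)
obj <- FindClusters(obj)
obj <- RunUMAP(obj, reduction = "harmony", dims = 1:30)

## DEGs via MAST (requires the MAST Bioconductor package to be installed);
## "0" is a placeholder cluster identity.
markers <- FindMarkers(obj, ident.1 = "0", test.use = "MAST")
```

Every step up to and including ScaleData/RunPCA touches the full matrix, which is why the per-cell memory arithmetic above dominates the hardware estimate; the downstream graph and UMAP steps are comparatively cheap.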