I have RNA-seq data from two treatment groups (F and NF) at two time points (T1 and T2). Mapping was done with the STAR aligner and quantification with featureCounts. I ran differential expression with both DESeq2 and limma-voom, and the number of DEGs differs dramatically between the two methods. In the comparison F: T2 vs T1 I get 40 DEGs with DESeq2 but 3302 DEGs with limma, using the same covariates.
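For context, the two analyses were set up roughly along these lines (a minimal sketch, not my exact code; `counts`, `coldata`, the column/coefficient names, and the 0.05 FDR cutoff are illustrative):

```r
library(DESeq2)
library(limma)
library(edgeR)

# counts: featureCounts matrix; coldata has factors treatment (F/NF) and time (T1/T2)
design <- model.matrix(~ treatment + time, data = coldata)

# DESeq2 with the same covariates
dds <- DESeqDataSetFromMatrix(countData = counts, colData = coldata,
                              design = ~ treatment + time)
dds <- DESeq(dds)
res <- results(dds, name = "time_T2_vs_T1", alpha = 0.05)

# limma-voom with the same design matrix
dge <- calcNormFactors(DGEList(counts = counts))
v <- voom(dge, design)
fit <- eBayes(lmFit(v, design))
tt <- topTable(fit, coef = "timeT2", number = Inf, p.value = 0.05)
```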
The same issue appears in the comparison NF: T2 vs T1, where DESeq2 gives 30 DEGs and limma gives 3844. From a biological perspective we anticipate more DEGs in the F: T2 vs T1 comparison. DESeq2 follows this trend, although with very few DEGs, whereas limma returns far more hits in the NF: T2 vs T1 comparison.
Do you think this huge difference in the number of DEGs is explained by the different algorithms/methods the two tools apply, or could something be wrong with the input data?