In my DESeq2 analysis, around 81% of my genes are being flagged as low counts. My concern is that most genes are filtered out at a mean count < 319. I have tried different pre-filtering techniques, but this mean-count threshold does not decrease, and I think I am losing a lot of data there. Even setting independentFiltering = FALSE does not help. I want the filtering to happen, but at a lower mean count.
```
out of 10429 with nonzero total read count
adjusted p-value < 0.1
LFC > 0 (up)     : 24, 0.23%
LFC < 0 (down)   : 14, 0.13%
outliers [1]     : 0, 0%
low counts [2]   : 8492, 81%
(mean count < 319)
[1] see 'cooksCutoff' argument of ?results
[2] see 'independentFiltering' argument of ?results
```
This is the expected, correct behavior of DESeq2's independent filtering: it is a feature designed to maximize your statistical power, not a sign of data loss.
The algorithm scans candidate mean-count thresholds and picks the one (319 in your case) that maximizes the number of significant genes, removing genes that have essentially no power to be detected. Filtering these uninformative genes before p-value adjustment reduces the number of tests, and because each Benjamini-Hochberg adjusted p-value scales with the number of tests, this directly increases your ability to detect true positives at a given FDR.
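To see why fewer tests means more discoveries, here is a minimal Python sketch (simulated p-values; all counts and distributions are illustrative, not your data) comparing Benjamini-Hochberg rejections before and after dropping underpowered genes:

```python
import random

def bh_rejections(pvals, alpha=0.1):
    """Count Benjamini-Hochberg rejections at FDR level alpha:
    the largest i such that the i-th smallest p-value <= alpha * i / m."""
    m = len(pvals)
    k = 0
    for i, p in enumerate(sorted(pvals), start=1):
        if p <= alpha * i / m:
            k = i
    return k

random.seed(1)
# Illustrative mix: 50 genes with real signal buried among 9950 genes whose
# p-values are roughly uniform (true nulls plus underpowered low-count genes).
signal = [random.uniform(1e-6, 1e-3) for _ in range(50)]
noise = [random.random() for _ in range(9950)]

unfiltered = signal + noise
filtered = signal + noise[:1450]  # as if ~8500 low-count genes were filtered

print("rejections without filtering:", bh_rejections(unfiltered))
print("rejections with filtering:   ", bh_rejections(filtered))
```

The same 50 signal genes are present in both sets; only the size of the multiple-testing burden changes, and the filtered set yields more discoveries at the same FDR.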
Manually overriding this process or forcing a lower threshold is statistically unsound and counterproductive. Note also that no data are actually discarded: the filtered genes remain in the results table, with their adjusted p-values set to NA. The low number of DEGs you are observing reflects the underlying biological signal and sequencing depth of your experiment, not a technical artifact of the filtering.
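The threshold itself is not arbitrary: it is chosen because it maximizes discoveries, so any lower cutoff gives you fewer significant genes, not more. A sketch of that scan in Python, on made-up data (gene counts, distributions, and the 40/4960 split are all illustrative assumptions):

```python
import random

def bh_rejections(pvals, alpha=0.1):
    """Count Benjamini-Hochberg rejections at FDR level alpha."""
    m = len(pvals)
    k = 0
    for i, p in enumerate(sorted(pvals), start=1):
        if p <= alpha * i / m:
            k = i
    return k

random.seed(2)
# Illustrative data: 40 truly DE, well-covered genes buried among 4960
# null or underpowered genes with skewed coverage.
genes = [(random.uniform(300, 2000), random.uniform(1e-5, 2e-3))
         for _ in range(40)]
genes += [(random.expovariate(1 / 200), random.random())
          for _ in range(4960)]

# Scan candidate thresholds at quantiles of the mean counts and keep the
# one that maximizes BH rejections -- the idea behind independent filtering.
counts = sorted(mc for mc, _ in genes)
results = []
for q in range(0, 96, 5):
    theta = counts[int(q / 100 * (len(counts) - 1))]
    kept = [p for mc, p in genes if mc >= theta]
    results.append((bh_rejections(kept), theta))

best_k, best_theta = max(results)
print(f"best threshold ~{best_theta:.0f} gives {best_k} discoveries")
print(f"no filtering (lowest threshold) gives {results[0][0]} discoveries")
```

In DESeq2 itself you can inspect this scan directly: `metadata(res)$filterThreshold` holds the chosen cutoff and `metadata(res)$filterNumRej` the number of rejections at each candidate quantile.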