False discovery rates for moderated t-statistic
3.4 years ago

Here is a duplicate of my question on Bioconductor support, which has not been answered so far. Since I have no experience with empirical Bayes methods, I would be grateful for any feedback. Thanks!

I use the DEP package for label-free proteomics analysis. The differential expression results (test_diff) provide p-values (p.val) and FDR values (p.adj); the latter are calculated by fdrtool using the moderated t-statistics from empirical Bayes (the eBayes function in limma) as input. What is the reason to use the moderated t-statistic, rather than the p-value, to compute FDRs? The relation between the t-statistic-derived FDRs and the FDRs obtained by adjusting the p-values with the BH method (via p.adjust(method = "BH") or fdrtool::fdrtool(statistic = "pvalue")) seems to depend on the contrast of interest: for some comparisons the t-statistic-based FDR yields more differentially expressed proteins, whereas for others the p-value-based FDR gives the lower cut-off (see the figure for 4 different contrasts in this post). I would highly appreciate some feedback on these differences. Are both procedures valid for FDR calculation?

Tags: R • DEP • FDR • p-value • empirical Bayes

What is a reason to use the moderated t-statistic, not p-value, to compute FDRs?

Not sure what you mean by this, but the moderated t-statistic is used to compute p-values, which can then be adjusted in the usual way using the FDR procedure. Maybe this online course can help shed light on the use of the moderated t-statistic.
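For reference, "adjusted in the usual way" here means the Benjamini-Hochberg step-up procedure, which is what R's p.adjust(method = "BH") implements. A minimal pure-Python sketch of that procedure (the function name bh_adjust is mine, not from any of the packages discussed):

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values, mirroring R's p.adjust(method = "BH").

    For the i-th smallest p-value p_(i) out of m tests, the adjusted value is
    min over j >= i of (m / j) * p_(j), capped at 1.
    """
    m = len(pvals)
    # Walk from the largest p-value down, keeping a running minimum of p * m / rank.
    order = sorted(range(m), key=lambda i: pvals[i], reverse=True)
    adjusted = [0.0] * m
    running_min = 1.0
    for k, i in enumerate(order):
        rank = m - k  # rank of pvals[i] in ascending order (1-based)
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

# Example: with four equally spaced p-values, all adjusted values
# collapse to the largest one, as p.adjust(method = "BH") also gives.
print(bh_adjust([0.01, 0.02, 0.03, 0.04]))
```

The key point for this thread is that the input to this procedure is the vector of p-values computed from the moderated t-statistics, not the t-statistics themselves.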


Thanks for your prompt response, Jean-Karim! I fully agree; this is how I initially understood it to work. The eBayes function (limma package) outputs moderated t-statistics and the corresponding p-values (details here). So it would be logical to adjust those p-values, but the test_diff function in DEP takes the moderated t-statistics as input for the adjustment. Using the p-values (rather than the t-statistics) for the FDR calculation obviously gives quite different results. I am wondering why it is done that way, and whether it is fine to adjust the p-values in the regular way, as you also suggested.


I see. This is because the FDR approach can be extended to various test statistics beyond p-values, and test_diff calls fdrtool, which accepts several types of input besides p-values. See "A unified approach to false discovery rate estimation", the paper that fdrtool is based on.
