Proteomic Statistics - FDR vs p-value
13 months ago
bhumm ▴ 110

Hi,

We have just finished a blood-based aptamer proteomics screen targeting ~7000 proteins. In the initial analysis, statistical significance between control and disease conditions was determined by FDR. We found some interesting hits, but some proteins previously shown to be involved in the disease were absent. We then extracted the columns containing our most suspected proteins and ran t-tests on them individually. Some of these targets came out significantly different. My question is: how valid is this approach? In other words, is it fair to subset a proteomics data set and perform t-tests when there are far fewer data points? I don't want to cherry-pick data through post hoc analysis, but some of these observations recapitulate what has previously been reported in the field. I also suspect I have an outlier problem that may reconcile some of these differences, but that is a separate issue.

Thanks in advance!

FDR proteomics t-test p-value • 1.4k views

You could say something like: "DE analysis was performed, and protein X was not found to be differentially abundant. The unadjusted p-value for protein X was <= 0.05, however, suggesting a trend towards up-/downregulation."

The real question is: why are you pushing things to be true that the data do not seem to support? It may be time to revise your hypothesis!
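To make the multiple-testing point concrete, here is a minimal sketch (assuming numpy and scipy; the data are simulated, with no true differences) of how many raw p-values fall below 0.05 in a 7000-protein screen versus how many survive a Benjamini-Hochberg adjustment:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated null screen: 7000 proteins, 10 vs 10 samples, no true differences.
n_prot, n = 7000, 10
control = rng.normal(size=(n_prot, n))
disease = rng.normal(size=(n_prot, n))

# Per-protein t-tests, as in the post hoc approach described above.
pvals = stats.ttest_ind(control, disease, axis=1).pvalue

# Benjamini-Hochberg FDR adjustment.
order = np.argsort(pvals)
m = len(pvals)
stepped = pvals[order] * m / np.arange(1, m + 1)
padj = np.empty(m)
padj[order] = np.minimum(np.minimum.accumulate(stepped[::-1])[::-1], 1.0)

# Roughly 5% of null proteins pass raw p < 0.05 by chance alone;
# essentially none survive the FDR adjustment.
print("raw p < 0.05:", int(np.sum(pvals < 0.05)))
print("BH-adjusted < 0.05:", int(np.sum(padj < 0.05)))
```

This is why a hit that is "significant" only on an unadjusted, subsetted t-test needs independent support before it can be trusted.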


Thanks for your reply. I agree with your point about not torturing the data into what we want or expect. There are other orthogonal methods we have used whose results support the "post hoc" analysis, which is why we are digging deeper into the proteomics. I appreciate your answer!


Dear bhumm,

I see no problem in using the p-value, as long as you add a log2 fold-change (log2FC) cutoff to select the differentially abundant proteins.

I have seen several papers that used only the p-value and log2FC and were published in good journals.

You can add information from public transcriptomics and/or proteomics datasets as validation for your findings.


Thanks for your response! That leads to another question. I have not found any clear guidance on what counts as a relevant log2 fold-change. For example, some of my significant DE proteins have rather small fold changes, sometimes less than 1-2-fold, while other publications report fold changes of 5-10 or greater. Do you have any advice or literature that could help me make sense of this? Or should I just rely on the raw effect size? I have been using Cohen's d to determine effect size, and it seems substantial, but that is not necessarily reflected after log transformation. Thanks!
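For what it's worth, on log2-transformed data the group mean difference is the log2FC, and Cohen's d divides that same difference by the pooled standard deviation, so a modest fold change with low within-group variability can still give a large d. A minimal sketch (numpy assumed; the abundance values are hypothetical):

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d: mean difference over the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

# Hypothetical log2 abundances for one protein.
disease = np.array([10.5, 10.3, 10.6, 10.2, 10.4, 10.5])
control = np.array([10.1, 9.9, 10.0, 10.2, 9.8, 10.0])

log2fc = np.mean(disease) - np.mean(control)  # on log2 data this IS the log2FC
d = cohens_d(disease, control)
print(f"log2FC = {log2fc:.2f} (~{2**log2fc:.2f}-fold), Cohen's d = {d:.2f}")
# A ~1.3-fold change, yet a very large standardized effect (d ~ 2.9),
# because the within-group variability is small.
```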


Dear bhumm,

I don't think you will find an exact metric that everyone uses... It depends a lot on your goal. If you are looking to profile a broader change, you can use, for example:

log2FC < −0.26 or > 0.26 (~1.2-fold), or log2FC < −0.37 or > 0.37 (~1.3-fold)
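Applied to a results table, a cutoff like that is just a joint filter on |log2FC| and the adjusted p-value; a minimal sketch (pandas assumed; the table is hypothetical, and |log2FC| > 0.26 corresponds to roughly a 1.2-fold change):

```python
import pandas as pd

# Hypothetical results table: one row per protein, with log2FC and BH-adjusted p.
res = pd.DataFrame({
    "protein": ["P1", "P2", "P3", "P4"],
    "log2fc": [0.30, -0.10, -0.45, 0.05],
    "padj":   [0.01, 0.20, 0.03, 0.04],
})

cutoff = 0.26  # |log2FC| > 0.26, i.e. ~1.2-fold
deps = res[(res["log2fc"].abs() > cutoff) & (res["padj"] < 0.05)]
print(deps["protein"].tolist())  # ['P1', 'P3']
```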

You can take these proteins, build a protein-protein interaction network, and search for modules in the network...

Modules of proteins that share a function, even with small log2FCs, may be more important for the disease than a single protein with a large log2FC!

Another idea: you can feed your normalized protein abundance matrix into GSEA and look for altered pathways and processes; this is independent of the DEP list.
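If you go the preranked-GSEA route, one common ranking metric combines the log2FC sign with −log10(p); a minimal sketch of building a .rnk-style table (numpy/pandas assumed; gene names and statistics are hypothetical):

```python
import numpy as np
import pandas as pd

# Hypothetical per-protein statistics from the DE analysis.
res = pd.DataFrame({
    "gene":   ["TP53", "EGFR", "MYC", "GAPDH"],
    "log2fc": [0.40, -0.30, 0.10, 0.02],
    "pval":   [0.001, 0.01, 0.30, 0.90],
})

# Signed -log10(p): strongly up-regulated genes get large positive scores,
# strongly down-regulated genes large negative scores.
res["rank_score"] = np.sign(res["log2fc"]) * -np.log10(res["pval"])
rnk = res.sort_values("rank_score", ascending=False)[["gene", "rank_score"]]

# Two-column, tab-separated, no header: the usual .rnk layout.
rnk.to_csv("de_proteins.rnk", sep="\t", header=False, index=False)
print(rnk["gene"].tolist())  # most up-regulated first
```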


I really appreciate the response. I will mull this over a bit and try some of the recommended strategies. Thanks again for your insight!


I've written this here often enough, but p-values, adjusted or not, are not related to biological relevance. Consider whether the magnitude of the effect is relevant to the biological question at hand. Also, if there is other evidence supporting the involvement of some proteins, just test those; you don't need a statistical test for all proteins in the screen if you already know which ones you are interested in. You mention an outlier problem, so I suggest you deal with that first, because the t-test, for example, doesn't behave well if the distribution is too far from normal. Before running any test, check that its assumptions are satisfied.
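As a concrete illustration of that last point, a minimal sketch (scipy assumed; abundances simulated, with one planted outlier) of checking normality and comparing a t-test with a rank-based alternative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated log2 abundances for one protein; one extreme outlier in the disease group.
control = rng.normal(10.0, 0.3, size=12)
disease = np.append(rng.normal(10.4, 0.3, size=11), 14.0)  # planted outlier

# Shapiro-Wilk flags the outlier-contaminated group as non-normal (tiny p-value).
print("Shapiro-Wilk p (disease):", stats.shapiro(disease).pvalue)

# Welch's t-test (no equal-variance assumption) vs the rank-based Mann-Whitney U,
# which is far less sensitive to the outlier.
t_p = stats.ttest_ind(control, disease, equal_var=False).pvalue
u_p = stats.mannwhitneyu(control, disease).pvalue
print(f"Welch t-test p = {t_p:.3f}, Mann-Whitney U p = {u_p:.3f}")
```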


This is all great advice, thanks for your response. I have been checking the normality of the data with Q-Q plots and the effect size with Cohen's d, so I believe I am addressing most of these considerations. I will be more diligent about outlier analysis and removal before drawing conclusions. Thank you for your insightful answer!
