Few Significantly Regulated Genes Found After Correction For Multiple Testing
Asked 13.2 years ago by Bdv ▴ 320

In a microarray experiment with two conditions and two replicates each, almost no genes survive the correction for multiple testing of the t-test results (FDR calculation).

How can I solve this issue?

Correction for multiple testing has to be done, but I also want to have a reasonable number of genes left ... any suggestions?

Thank you a lot!

Answered 13.2 years ago by Mmorine ▴ 280

That's a well-recognized difficulty in the field. The problem is that the correction becomes severe with such a large number of statistical tests. Even when using one of the more lenient correction algorithms (e.g., Benjamini & Hochberg, 1995) you end up with low statistical power. Since a) the severity of the correction is partially dependent on the number of tests performed, and b) many of the genes/tests can be seen as noise (i.e., either unexpressed or otherwise uninteresting genes), a useful approach can be to first filter your dataset. By doing this, you're removing these uninteresting genes and improving your statistical power. There are many approaches to microarray dataset filtering, but here's a recent paper that reviews the problem in detail and presents a solution.
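The benefit of filtering is easy to demonstrate on simulated data. The sketch below (Python with numpy/scipy; all numbers are invented, and it uses five replicates per group rather than two so the effect is visible) applies a hand-rolled Benjamini-Hochberg procedure once to all genes and once to a label-blind, variance-filtered subset:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5                                   # replicates per group (illustrative)
# 2,000 expressed genes (the first 200 truly changed) plus 8,000 flat,
# low-variance genes that only add to the multiple-testing burden
expressed = rng.normal(8, 1, size=(2_000, 2 * n))
expressed[:200, n:] += 3                # true differential expression
unexpressed = rng.normal(4, 0.2, size=(8_000, 2 * n))
data = np.vstack([expressed, unexpressed])

def bh_rejections(pvals, q=0.05):
    """Benjamini-Hochberg step-up: number of rejections at FDR level q."""
    p = np.sort(pvals)
    below = p <= q * np.arange(1, p.size + 1) / p.size
    return int(below.nonzero()[0][-1]) + 1 if below.any() else 0

def gene_pvals(mat):
    return stats.ttest_ind(mat[:, :n], mat[:, n:], axis=1).pvalue

hits_all = bh_rejections(gene_pvals(data))       # BH over all 10,000 tests
keep = np.argsort(data.var(axis=1))[-2_000:]     # label-blind variance filter
hits_filtered = bh_rejections(gene_pvals(data[keep]))
print(hits_all, hits_filtered)
```

Because the filter is blind to the condition labels, it does not bias the subsequent tests; it simply removes tests that could never have been interesting, which relaxes the BH thresholds for the genes that remain.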

Answered 13.2 years ago

I've been in the same position, and I sympathize with your situation. It is natural to want to see differentially expressed genes, but wanting your experimental design to have statistical power and detectable biological differences does not mean that it has them. The suggestions above are very good, but with two samples per condition you have very low power to detect a difference even when performing a single test; when you compare thousands of genes under those conditions, it is almost miraculous when anything comes out significant.
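To put numbers on this, the exact power of a two-sided two-sample t-test can be computed from the noncentral t distribution. A sketch in Python/scipy; the 2-SD effect size and the Bonferroni-style threshold for 10,000 genes are illustrative assumptions, not from the question:

```python
from scipy import stats

def two_sample_power(d, n, alpha):
    """Exact power of a two-sided two-sample t-test with n samples per
    group and standardized effect size d (mean difference / common SD)."""
    df = 2 * n - 2
    nc = d * (n / 2) ** 0.5                   # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(t_crit, df, nc)) + stats.nct.cdf(-t_crit, df, nc)

# Even a huge 2-SD effect with n = 2 per group is underpowered at alpha = 0.05...
p_single = two_sample_power(d=2.0, n=2, alpha=0.05)
# ...and essentially undetectable at a per-gene threshold corrected for 10,000 tests
p_corrected = two_sample_power(d=2.0, n=2, alpha=0.05 / 10_000)
print(p_single, p_corrected)
```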

Comment:

Agreed. It's a bit of a side-step, but another approach is to use pathway or GO analysis. The greater statistical power of these methods can often reveal significant changes that aren't apparent at the single-gene level, and they can also provide good functional targets for follow-up analysis.
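The standard over-representation test behind most GO/pathway tools is a one-sided hypergeometric (Fisher) test. A minimal sketch with scipy; the gene counts are hypothetical:

```python
from scipy import stats

# Hypothetical numbers: 10,000 genes on the array, 40 of them annotated to
# one pathway; a relaxed gene list of 300 genes contains 8 pathway members.
M, K, n, k = 10_000, 40, 300, 8

# Expected overlap by chance is only n * K / M = 1.2 genes, so observing 8
# is strong evidence of enrichment:
p_enrich = stats.hypergeom.sf(k - 1, M, K, n)   # P(overlap >= k)
print(p_enrich)
```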

Comment:

The flip side is that if you can confirm any findings with additional experimental evidence, the experiment is not a waste of time. Cost is frequently a factor in experiments of this nature, and even a small sample size can provide useful pointers that can then be verified by other means in a larger sample.

Comment:

Pathway or GO analysis offers extra power because it is statistically unlikely that a large fraction of false-positive genes will end up in one specific pathway. You can therefore accept more false positives (applying no or less stringent FDR correction, or using lower fold-change cutoffs) and still find significant pathways or GO classes.
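This intuition can be quantified with a binomial tail probability (the counts below are hypothetical): if false positives scatter uniformly across the genome, the chance that several of them pile up in one small pathway is tiny.

```python
from scipy import stats

# Hypothetical: a lenient cutoff yields a gene list in which ~250 genes may be
# false positives, scattered over a 10,000-gene background; one pathway holds
# 40 genes, so each false positive lands in it with probability 0.004.
n_false, p_in_pathway = 250, 40 / 10_000

# Chance that 5 or more of those false positives fall in that single pathway
# (expected number by chance is only 250 * 0.004 = 1)
p_cluster = stats.binom.sf(4, n_false, p_in_pathway)
print(p_cluster)
```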

Answered 13.2 years ago by User 59 13k

Mmorine makes some good points - I favour aggressive filtering. However, your sample size is small, with few replicates, so a single problematic chip could introduce a lot of noise into the experiment.

There are other ways of looking for differentially expressed genes than a t-test with Benjamini-Hochberg MTC (which is the gentlest MTC you will get!). I have had a great deal of success with the RankProd package in Bioconductor for small/noisy datasets.

Have a look at the page for the software here. Quoting from the page:

"Rank Products are a new test statistic that has been developed specifically for this purpose. They are particularly powerful for small and noisy datasets (with few replicates but many genes), which are typical of many microarray experiments. In that case they can often perform better than more traditional approaches (t-test, Wilcoxon rank sums, SAM)."

Answered 13.2 years ago by Michael 54k

I have seen this situation a lot, having been asked for advice on how to salvage such an 'unexpected' result in a microarray experiment. So here are my 50 ct.:

As the other answers already suggest, this outcome is far from unexpected, as the power of a t-test depends solely on these factors:

  1. The number of replicates
  2. Alpha (the maximum p-value / type I error you allow)
  3. The variance of the data
  4. The minimum expression change you wish to detect

There are some very simple approaches, each addressing one of the points above:

  1. Increase the number of replicates. If you can, by any means, repeat the experiment and add hybridizations until the power is sufficient.

  2. Lower your standards. If you cannot get anything at a 5% false discovery rate, try allowing 10%. It depends on what you want to do with the gene lists afterwards: if this experiment was just a pre-screening, it might be OK. On the other hand, if you want to publish the results, this is not likely to work out.

    Points 3 and 4 are hard to address. If the second option feels wrong to you, that clearly indicates you should invest more in replication, although there is no guarantee that this will solve the problem.
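For the first option, a rough idea of how many replicates would be needed can be obtained by inverting the t-test power calculation. A sketch assuming a hypothetical 2-SD effect and an 80% power target; the per-gene alpha of 0.05/10,000 is a crude Bonferroni-style stand-in for a genome-wide threshold:

```python
from scipy import stats

def power(d, n, alpha):
    """Power of a two-sided two-sample t-test with n samples per group
    and standardized effect size d."""
    df, nc = 2 * n - 2, d * (n / 2) ** 0.5
    tc = stats.t.ppf(1 - alpha / 2, df)
    return 1 - stats.nct.cdf(tc, df, nc) + stats.nct.cdf(-tc, df, nc)

def replicates_needed(d, alpha, target=0.8):
    """Smallest per-group n reaching the target power."""
    n = 2
    while power(d, n, alpha) < target:
        n += 1
    return n

n_nominal = replicates_needed(2.0, alpha=0.05)           # single-test 5% level
n_corrected = replicates_needed(2.0, alpha=0.05 / 10_000)  # genome-wide level
print(n_nominal, n_corrected)
```

The gap between the two numbers is the multiple-testing penalty expressed in hybridizations rather than p-values, which is often the more useful currency when planning the follow-up experiment.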

I believe it would not be honest to 'massage' the data by searching for a 'statistical method' that gives you the results you wish to obtain, would it? Essentially, you would need a test that in general gives lower p-values on the same data, and an increase in power often comes at a price: more false discoveries. However, there is some evidence, e.g. here, that variance-stabilizing methods such as SAM, limma or Cyber-T are potentially better suited to very small sample sizes (e.g. 2, as in your case).
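The idea shared by these methods is variance moderation: per-gene variances estimated from two replicates are extremely unstable, so they are shrunk toward a common value before computing the t-statistic. A crude sketch of that idea; note that limma estimates the prior variance and its degrees of freedom d0 by empirical Bayes, whereas both are fixed by hand here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def moderated_t(a, b, d0=4.0):
    """Simplified limma-style moderated t-test for each row (gene).
    Per-gene pooled variances are shrunk toward their overall mean,
    with d0 pseudo-degrees of freedom given to that prior."""
    n1, n2 = a.shape[1], b.shape[1]
    df = n1 + n2 - 2
    sp2 = ((n1 - 1) * a.var(axis=1, ddof=1)
           + (n2 - 1) * b.var(axis=1, ddof=1)) / df
    s0_2 = sp2.mean()                              # crude prior variance
    s_tilde2 = (d0 * s0_2 + df * sp2) / (d0 + df)  # shrunken variance
    t = (a.mean(axis=1) - b.mean(axis=1)) / np.sqrt(s_tilde2 * (1/n1 + 1/n2))
    p = 2 * stats.t.sf(np.abs(t), df + d0)         # prior adds degrees of freedom
    return t, p

a = rng.normal(0, 1, size=(1000, 2))   # two replicates per condition, as in the question
b = rng.normal(0, 1, size=(1000, 2))
b[:50] += 2.5                          # 50 genes truly shifted
t, p = moderated_t(a, b)
print(p[:50].mean(), p[50:].mean())
```

With only two replicates, the shrinkage prevents genes with accidentally tiny sample variances from producing huge t-statistics, which is the main source of false positives in unmoderated t-tests at this sample size.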

