Almost no differentially expressed genes on microarray - what now?
1
0
Entering edit mode
6.6 years ago
n.lu ▴ 50

Hi, I did my analysis on the GeneChip Mouse Gene ST 2.0 Array (Affymetrix). I compared expression profiles of 4 experimental groups and got some disappointing and surprising results. At best 2 genes are differentially expressed between groups (limma, adjusted p < 0.05, BH correction), and none at all between some. Biologically, it doesn't make much sense to get no difference (we are comparing inflammatory cells in joints of arthritic mice and control mice that don't have arthritis), but I guess the changes are not necessarily at the transcription level.

Now I find myself with all those microarrays and expression values, but the analysis pipeline sort of stops here, since further pathway/functional analyses are all based on comparisons between experimental groups. Is there a way to find out anything about the cells we analyzed in general, not in the sense of a between-group comparison? For example, can the expression values tell us whether they are activated inflammatory cells or not? Is it possible to do some enrichment for gene sets that have high expression values even though they are not different between groups? Is it possible to compare values within one array (gene of interest vs. housekeeping gene)? From what I know about microarrays, the analysis is based on comparing experimental groups, and expression values by themselves don't mean anything at all, so I realize these questions probably don't make sense, but I wanted to double check whether there is some way to put this enormous set of expression data to good use.

Thanks

microarray no differentially expressed genes • 3.6k views
ADD COMMENT
0
Entering edit mode

Yes, synovial samples, but murine synovium - cells were released from the knee joint by digestion with collagenase, and the myeloid (CD11b+ Gr-1+) population was sorted by FACS and used for the microarray analysis.

I included 14,414 genes in the limma analysis. Only main probes were included, and I removed those without an Entrez ID. I have attached the R code below:

library("affycoretools")
library("mogene20sttranscriptcluster.db")
eset <- getMainProbes(eset)
eset <- annotateEset(eset, mogene20sttranscriptcluster.db)
annotation(eset) <- "mogene20sttranscriptcluster.db"

library(genefilter)
eset <- featureFilter(eset, require.entrez=TRUE, require.GOBP=FALSE, require.GOCC=FALSE, require.GOMF=FALSE, require.CytoBand=FALSE, remove.dupEntrez=TRUE, feature.exclude="^AFFX")

I also excluded probes with low variability, based on the row standard deviations (SDs):

library(genefilter)
sds <- rowSds(exprs(eset))          # per-probe standard deviation across arrays
sh <- shorth(sds)                   # shorth of the SD distribution, used as the cut-off
filtered_eset <- eset[sds >= sh, ]  # keep probes with SD at or above the shorth
# filtered eset = 14,414 genes

I also tried independent hypothesis weighting (IHW, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4930141/) instead of prefiltering the dataset by row SDs, but it gave the same result.
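For reference, this is roughly how IHW can be combined with the limma output (a sketch only; tt is assumed to be the full topTable data frame, with AveExpr used as the informative covariate):

library(IHW)

# IHW with average expression as the independent covariate
ihw_res <- ihw(P.Value ~ AveExpr, data = tt, alpha = 0.05)
tt$adj.P.IHW <- adj_pvalues(ihw_res)   # IHW-adjusted p-values
sum(tt$adj.P.IHW < 0.05)               # number of genes passing 5% FDR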

Adjusted p-values in my most relevant group comparison for the top 10 genes (lowest to highest): 4.60E-06, 4.60E-06, 0.137408915, 0.164786542, 0.226554087, 0.226554087, 0.304023382, 0.432896157, 0.432896157, 0.572612477. So a 15% FDR would add just one more gene. Mostly it looks like the groups are not different from each other at all.

Would you recommend another strategy for filtering the genes?

ADD REPLY
2
Entering edit mode

n.lu : Please use ADD COMMENT/ADD REPLY when responding to existing posts to keep threads logically organized.

This should have been a comment against @Kevin's answer.

ADD REPLY
0
Entering edit mode

This is a silly question, but what are the top genes that have passed FDR? They are obviously not what you would expect(?)

There is a big gap between the top 2 genes and the others, so it does not look right. How did the normalised data look in a boxplot and histogram?
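In case it helps, a minimal sketch of that check, assuming eset is the normalised ExpressionSet:

library(Biobase)   # for exprs()
expr <- exprs(eset)

# Per-array boxplots and a pooled histogram of the normalised intensities
boxplot(expr, las = 2, main = "Normalised intensities per array")
hist(expr, breaks = 100, xlab = "log2 intensity", main = "Pooled normalised intensities")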

ADD REPLY
0
Entering edit mode

We have a knockout for a gene involved in apoptosis, and those mice don't develop experimental arthritis like wild-type mice do. So those were my 4 groups: WT control (no arthritis), WT with arthritis, knockout with arthritis, knockout control. WT control vs. WT arthritis shows no differentially expressed genes at all (which is very strange). The comparison we were most interested in was WT arthritis vs. knockout arthritis, and that's where the two genes mentioned above come up: Mid1 and Erdr1. They have been described in inflammatory conditions, Erdr1 even in murine arthritis, but I still found it weird that no other genes acting together with these two would be changed. (The knockout gene has an adjusted p-value of 0.13 and a logFC of 1, but is not very highly expressed in these cells.) If I try hierarchical clustering / PCA / NMF, the samples don't group by experimental group at all (and not by batch effect either).
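For reference, the clustering/PCA check was done roughly along these lines (a sketch; eset is the normalised ExpressionSet and group is a factor with the four experimental groups):

expr <- exprs(eset)

# Hierarchical clustering of samples on correlation distance
hc <- hclust(as.dist(1 - cor(expr)), method = "average")
plot(hc, labels = as.character(group), main = "Sample clustering")

# PCA of the samples (samples as rows after transposing)
pca <- prcomp(t(expr), scale. = FALSE)
plot(pca$x[, 1], pca$x[, 2], col = as.integer(group), pch = 19,
     xlab = "PC1", ylab = "PC2")
summary(pca)$importance[, 1:2]   # proportion of variance for PC1/PC2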

When the hybridization of the chips was performed, there was a problem with the scanner, so the chips were stored (properly) for 3-5 days after hybridization and before scanning. My first thought was that this was the problem, but the QC looked fine - I have attached boxplots and histograms from before and after normalization (4 images).

ADD REPLY
2
Entering edit mode

A couple more things to check:

  • How does a PCA bi-plot of your data look?
  • Have you adjusted for any batch effects? (see the sketch after this list)
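If you do need to handle a batch, two common options (a sketch; scan_day is an assumed factor recording the scanning batch):

library(limma)

# 1) Include the batch in the design so it is accounted for during testing
design_b <- model.matrix(~ 0 + group + scan_day)

# 2) Remove the batch only for visualisation (PCA/clustering); do not feed this into lmFit
expr_bc <- removeBatchEffect(exprs(eset), batch = scan_day)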
ADD REPLY
0
Entering edit mode

I didn't adjust for batch effects. PCA and clustering didn't show any grouping by batch (half of the arrays were scanned on a different day). I also tried the algorithm for discovering unknown batches in the sva package, and it found 0 surrogate variables (although this is probably methodologically wrong, since I know my batches).
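The SVA check was run roughly like this (a sketch; group is the factor with my four experimental groups):

library(sva)

mod  <- model.matrix(~ group)      # full model with the variable of interest
mod0 <- model.matrix(~ 1)          # null model (intercept only)
n.sv <- num.sv(exprs(eset), mod)   # estimated number of surrogate variables (0 in my case)
if (n.sv > 0) svobj <- sva(exprs(eset), mod, mod0, n.sv = n.sv)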

I am attaching the PCA plots below (groups 1-4: WT_CTRL, WT_arthritis, KO_CTRL, KO_arthritis). The second plot is the same, I just marked the samples by the day they were scanned to check for a batch effect.

[attached: PCA plot by experimental group; PCA plot by scan day]

Thank you very much for helping me with this!

ADD REPLY
2
Entering edit mode

No problem - interesting data.

What is the explained variance along PC1 and PC2? Outliers can affect the derived stats, too!

Also, what if you tried the contrasts separately for just your samples of interest? Sometimes I worry about complex design formulae when used with a dataset of low numbers of samples.

Here is how I do it with limma (this is for a recent Affymetrix GeneChip HuGene ST 2.0 array):

In my data, my groups are defined in targetinfo$Group

design <- factor(targetinfo$Group)           # group labels as a factor
comparisonmodel <- model.matrix(~0+design)   # one column per group, no intercept
colnames(comparisonmodel) <- levels(design)

#Fit the linear model on the study's data
#lmFit fits a linear model to each probe across the defined sample groups;
#the fitted coefficients are then used for statistical inference
project.fitmodel <- lmFit(MyExpressionData, comparisonmodel)

#Applying the empirical Bayes method moderates the probe-wise variances,
#shrinking them towards a common value and increasing the effective degrees of freedom for the tests
project.fitmodel.eBayes <- eBayes(project.fitmodel)
names(project.fitmodel.eBayes)

#results <- decideTests(project.fitmodel.eBayes)
#vennDiagram(project.fitmodel.eBayes)

#Make individual contrasts and fit the new model
Q1_contrastmodel <- makeContrasts(Q1="(Group1)-(Group2)", levels=comparisonmodel)
Q1_contrastmodel.fitmodel <- contrasts.fit(project.fitmodel.eBayes, Q1_contrastmodel)
Q1_contrastmodel.fitmodel.eBayes <- eBayes(Q1_contrastmodel.fitmodel)

topTable(Q1_contrastmodel.fitmodel.eBayes, adjust="fdr", coef="Q1", number=99999, p.value=1)

That's more or less the same as yours, but slightly different too.

ADD REPLY
0
Entering edit mode

Hi, thank you very much for sharing this script (and sorry for the late reply). The results are pretty much the same (the same 2 genes are significant, and then the FDR jumps to 13% for the next gene, and then to 50%).

For principal components:

Importance of components:
                          PC1    PC2
Standard deviation     5.1804 4.2105
Proportion of Variance 0.1688 0.1115
Cumulative Proportion  0.1688 0.2803

Does that give you the info about explained variances?

ADD REPLY
1
Entering edit mode

Hey, yes, that gives the information regarding the proportion of variance. For PC1, it's 16.88%, which is common for this type of data. For example, in one of my array experiments (3 conditions), in which I found many differentially expressed genes, PC1 was 13%.

The only other thing that I can think of is to group your samples into 2 groups and just compare those. Your sample numbers are very low at the moment, and the model you have been trying is too complex for such low sample numbers.
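Something along these lines (a sketch; targetinfo$Group is assumed to hold your four group labels):

library(limma)

# Collapse the four groups into arthritis vs control (labels are illustrative)
two_group <- factor(ifelse(grepl("arthritis", targetinfo$Group), "arthritis", "control"))
design2 <- model.matrix(~ two_group)                  # intercept + two-group difference
fit2g <- eBayes(lmFit(eset, design2))
topTable(fit2g, coef = 2, adjust = "BH", number = 20) # coef 2 tests the two-group difference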

If that does nothing, then you may have to consider relaxing the FDR threshold to 20 or 25%, even though that will not return many extra genes. Using nominal P values will probably mean that you will struggle to publish the study, and/or you may have to present it as a purely hypothesis-generating study that requires major validation in other datasets.

Apart from that, check again to ensure that nothing went wrong during the wet-lab preparation. However, this can be tricky because what can happen is that you may enter a 'battle' of the wet-lab blaming the analyst, and vice-versa.

ADD REPLY
0
Entering edit mode

Thanks for the explanation.

I tried grouping them by genotype (knockout vs. wild type) and by arthritis vs. control. The first comparison gives 5 genes, the second none. One other thing I did was group them into two groups based on the way they cluster in hierarchical clustering. The groups don't really make sense biologically - the samples seem randomly mixed - I just wanted to check what makes them cluster this way. It turns out this gives 800 differentially expressed genes, mostly involved in cell cycle regulation and the like. So maybe these cells really don't differ that much between groups, but have some other, group-independent differences. I guess we will try to confirm the 2 genes that we did find by qRT-PCR, and then explore their role in our arthritis model if the results hold up.

The only part of the wet lab I was worried about is the delay between hybridization and scanning. Would you say there is a way to see whether the fluorescent signal was poor?

Thank you very much for your help

ADD REPLY
1
Entering edit mode

Yes, there are some diagnostic methods, but I rarely have to use them because everything usually goes smoothly! The last time that I used these was on a very old Affymetrix chip (but they may still work!). I think that you may need to load the affy and gcrma packages in order to use these.

Chip images

The first basic thing to check is an image of each chip. Take a look here: Microarray image explanation. This can be done with:

#Perform some diagnostics on the arrays
#Generate chip images to diagnose spatial artifacts for each array and a merged boxplot of intensities
pdf("ChipImages.pdf")
  image(RawData)
  boxplot(RawData, col="red", las=2)
dev.off()

If there is degradation of probes due to the assumed wet-lab issue, then you should see very faint signals of light on some chips compared to others.

Compare perfect match (PM) and mismatch (MM) probes

(this may not work at all with the latest chips, because they no longer use mismatch probes)

par(mfrow=c(2,1))
  hist(log2(pm(RawData)), breaks=100, col="blue", main="PM", xlim=c(4,14))
  hist(log2(mm(RawData)), breaks=100, col="red", main="MM", xlim=c(4,14))
dev.off()

#Compare for just 1 sample
par(mfrow=c(2,1))
  hist(log2(pm(RawData[,1])), breaks=100, col="blue", main="PM", xlim=c(4,14))
  hist(log2(mm(RawData[,1])), breaks=100, col="red", main="MM", xlim=c(4,14))
dev.off()

Use arrayQualityMetrics

require("arrayQualityMetrics")
arrayQualityMetrics(RawData, outdir="AQMRaw", force=FALSE, do.logtransform=FALSE, intgroup=NULL, reporttitle=paste("Array Quality Metric for", deparse(substitute(RawData))))

This outputs a whole set of files into a new directory and adds an index.html file that summarises everything.

ADD REPLY
1
Entering edit mode

Thanks, I'll check those as well. Thank you very much for your suggestions and comments!!!

ADD REPLY
1
Entering edit mode

I just realised that the other thread on chip images was with you, too! Your chip images appeared to be fine, but maybe arrayQualityMetrics will reveal something.

Unfortunately, as we move forward with NGS and other technologies, I find that microarrays are less and less common.

ADD REPLY
1
Entering edit mode

Hi! Thank you for sharing the information.

Yes, it is very worrying that wild type control vs arthritis showed nothing. I would expect the greatest differences here, particularly in the synovium.

Which model formula did you use for limma? Sometimes that can be the issue and result in very few statistically significant genes.

If something occurred with the scanner (or before that point), then there is not much that we can do as analysts...

ADD REPLY
0
Entering edit mode

Limma:

design <- model.matrix(~ 0+factor(c(1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4)))
colnames(design) <- c("WT_CTRL", "WT_arthritis", "KO_CTRL", "KO_arthritis")
rownames(design) <- sampleNames(eset) 

design
              WT_CTRL WT_arthritis KO_CTRL KO_arthritis
WT_CTRL1            1            0       0            0
WT_CTRL2            1            0       0            0
WT_CTRL3            1            0       0            0
WT_CTRL4            1            0       0            0
WT_arthritis1       0            1       0            0
WT_arthritis2       0            1       0            0
WT_arthritis3       0            1       0            0
WT_arthritis4       0            1       0            0
KO_CTRL1            0            0       1            0
KO_CTRL2            0            0       1            0
KO_CTRL3            0            0       1            0
KO_CTRL4            0            0       1            0
KO_arthritis1       0            0       0            1
KO_arthritis2       0            0       0            1
KO_arthritis3       0            0       0            1
KO_arthritis4       0            0       0            1
attr(,"assign")
[1] 1 1 1 1
attr(,"contrasts")
attr(,"contrasts")$`factor(c(1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4))`
[1] "contr.treatment"

contrast.matrix <- makeContrasts(WT_CTRL-WT_arthritis, WT_CTRL-KO_CTRL, WT_CTRL-KO_arthritis, WT_arthritis-KO_arthritis, WT_arthritis-KO_CTRL, KO_CTRL-KO_arthritis, levels=design)

contrast.matrix
              Contrasts
Levels         WT_CTRL - WT_arthritis WT_CTRL - KO_CTRL WT_CTRL - KO_arthritis WT_arthritis - KO_arthritis WT_arthritis - KO_CTRL KO_CTRL - KO_arthritis
  WT_CTRL                           1                 1                      1                           0                      0                      0
  WT_arthritis                     -1                 0                      0                           1                      1                      0
  KO_CTRL                           0                -1                      0                           0                     -1                      1
  KO_arthritis                      0                 0                     -1                          -1                      0                     -1

fit1 <- lmFit(eset, design)
fit2 <- contrasts.fit(fit1, contrast.matrix)
fit2 <- eBayes(fit2) 
table <- topTable(fit2, adjust="BH", number=Inf)
results_WT_CTRLvsWT_AIA <- topTable(fit2, coef=1, number=Inf, sort.by="P") #and like this for each contrast
ADD REPLY
1
Entering edit mode
6.6 years ago

I think that I know the answer, but are your samples from the synovium? - they do appear to be synovial biopsies.

I think that if you consider these points, then you will achieve a better result:

  • Are you including too many probes ('genes') from your chip? You should consider removing anything in which you have no interest. The GeneChip ST 2.0 Array by Affymetrix is very dense, and the number of genes that you include will affect the FDR adjustment. If you are not interested in pseudogenes or non-coding RNAs, then remove them. Take a look at the annotation file in greater depth to see what the chip is actually targeting. Here and Here are direct links to the transcript- and probeset-level annotation for your data (source: http://www.affymetrix.com/support/technical/byproduct.affx?product=mogene-1_0-st-v1).
  • Ensure that you remove control probes after their values have been used during normalisation (these are also labelled in the annotation CSV file)
  • What happens if you relax the FDR threshold to 10% or 15%? (see the sketch after this list)
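For points 2 and 3, something like this (a sketch; fit2 is assumed to be your eBayes fit and eset the annotated ExpressionSet):

# Drop Affymetrix control probes (probe set IDs starting with AFFX)
eset_clean <- eset[!grepl("^AFFX", featureNames(eset)), ]

# Re-check the top table at a relaxed FDR threshold (here 15%)
topTable(fit2, coef = 1, adjust = "BH", number = Inf, p.value = 0.15)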

All is not lost! You just need to work with it a bit more in order to achieve ideal results. Every study/analysis is unique.

Kevin

ADD COMMENT
