
7.3 years ago

lessismore

Hello,

I am checking SVA for estimating batches in an expression atlas.

This expression atlas is composed of multiple experiments (run on different dates) covering multiple tissues from wild-type samples.

Contrary to the usual aim of such an analysis, I am not interested in performing a DE analysis; rather, I want to normalize these data for a subsequent network analysis.

For this reason I'll consider the two methods:

- SVA : identifying and estimating surrogate variables for unknown sources of variation.
- ComBat: directly removing known batch effects using ComBat in SVA.

Potentially I have neither adjustment variables nor variables of interest. For the first method, what should the full model matrix be?

Hope it is clear,

Thanks in advance.

It depends on what you are interested in. For ComBat, if you already know the confounders, you put them directly in your model matrix and pass them through ComBat to reduce the effect of the known confounders or batches. With SVA you are instead trying to identify unknown confounding variables that you can then use to model your data. Note, however, that not all of the surrogate variables will give the best-fit adjustment: some may overfit, and some will not have a proper differential estimate, so you may have to try different numbers of SVs to see which gives the best fit while preserving the order of your hypothesis. I can give sample code, but first we need to understand your design.
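
To make the two routes concrete, a minimal sketch in R. The object names (`edata` for the genes-by-samples expression matrix, `pheno` for the sample annotation with `batch` and `tissue` columns) are placeholders, not from the original post.

```r
library(sva)

## Route 1: known batches -> ComBat. The model matrix protects the
## biology you want to keep (here the tissue factor of the atlas).
mod <- model.matrix(~ tissue, data = pheno)
combat_edata <- ComBat(dat = edata, batch = pheno$batch, mod = mod,
                       par.prior = TRUE, prior.plots = FALSE)

## Route 2: unknown confounders -> sva. The full model contains the
## variable(s) of interest, the null model only the intercept.
mod0  <- model.matrix(~ 1, data = pheno)
n.sv  <- num.sv(edata, mod, method = "leek")
svobj <- sva(edata, mod, mod0, n.sv = n.sv)
head(svobj$sv)   # one column per estimated surrogate variable
```

If you truly have no experimental variable of interest, tissue can play that role here, since it is the biology the atlas is built around and should not be regressed away.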

Thank you very much for your comment. I would like to test both approaches. I merged a number of NGS experiments (basically on the same tissues) carried out on several dates. Preprocessing has already been done: the background signal was removed and all data were quantile-normalized. I know there are batch effects and I know the number of batches (so I am ready for the ComBat analysis), but I also want to run SVA to check for other possible confounders that I might miss using only the first approach. I will then probably correct for the SVA results because, as I mentioned, I am only interested in adjusting for the technical sources of variation, not the biological ones. Is it clearer now? What would you suggest? Thanks in advance!

SVA will only give you a number of surrogate variables; it will not tell you which actual attributes these confounders correspond to. You have to adjust for each of them and plot a PCA to see how the adjustment changes the data and what is reduced by it, in order to (i) avoid overfitting and (ii) account for exactly the confounders of your choice. Otherwise SVA just gives you a number, you model everything around it, and you never learn what each confounder's effect actually is. I hope that is clear now?
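
A sketch of that trial-and-adjust loop, assuming placeholder objects: a genes-by-samples matrix `edata`, a batch factor `pheno$batch`, and an sva result `svobj`. `removeBatchEffect()` is from limma and accepts the SVs through its `covariates` argument.

```r
library(limma)

## Adjust for an increasing number of surrogate variables and inspect
## the PCA each time: the batch structure should fade without the
## biology collapsing (a sign of over-correction / overfitting).
for (k in seq_len(ncol(svobj$sv))) {
  adj <- removeBatchEffect(edata,
                           covariates = svobj$sv[, 1:k, drop = FALSE])
  pca <- prcomp(t(adj), scale. = TRUE)
  plot(pca$x[, 1:2], col = factor(pheno$batch), pch = 19,
       main = paste("Adjusted for", k, "surrogate variable(s)"))
}
```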

Thanks, I calculated my variables.

Now, I was expecting 6 surrogate variables. When you say "adjusting for each attribute", do you mean reducing the dataset to check (with PCA) which attributes (observations) were causing this? A second question: I don't understand what you wrote about `svaBatchCor`:

There is no `svaBatchCor` function from what I have seen. Sorry for pestering; I am following the tutorial (https://www.bioconductor.org/packages/devel/bioc/vignettes/sva/inst/doc/sva.pdf) and this is a completely new topic for me.

Ah, my bad, it's a small function. You can find it in other posts, but I am providing it here.
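
The code block itself appears to have been lost from this post. A reconstruction of the small wrapper, as it circulates in similar posts; the exact argument names and defaults are my assumption. It runs sva and then regresses the estimated SVs out of the matrix by ordinary least squares:

```r
library(sva)

## Reconstruction (placeholder argument names): dat is genes x samples,
## mmi/mm0 are the full and null model matrices for sva.
svaBatchCor <- function(dat, mmi, mm0, n.sv = NULL) {
  dat <- as.matrix(dat)
  Y <- t(dat)                                  # samples x genes
  if (is.null(n.sv)) n.sv <- num.sv(dat, mmi, method = "leek")
  o <- sva(dat, mmi, mm0, n.sv = n.sv)
  W <- o$sv                                    # samples x n.sv
  alpha <- solve(t(W) %*% W) %*% t(W) %*% Y    # OLS coefficients of the SVs
  o$corrected <- t(Y - W %*% alpha)            # genes x samples, SVs removed
  o
}
```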

Usage of the function is as you specified and as I mentioned. So if you have 7 SVs, you can start by correcting for 3 or 4 and inspect the PCA. I hope you also know how to include the SVA output in the model matrix if you are using DESeq2 or edgeR, since you will need that when you build your new model. Good luck.

Thanks for your answer; things are getting clearer. Could you please tell me if your function is equivalent to "Remove Surrogate Variables and Batch"? From what I understood at that link, they remove all surrogate variables from the matrix. Is that not an advisable approach? As I mentioned, my purpose is not DE analysis but gene co-expression network (GCN) analysis. Could you please link some relevant papers/tutorials/guidelines I can read on this? Thanks in advance.

If it's a gene co-expression network, then take a look at WGCNA; that is a better way to find co-expressed modules. I have not used surrogate variable analysis with WGCNA, so I am not well informed there, but since WGCNA works on normalized expression values, you can always use SVA to create a corrected normalized expression matrix and feed that to WGCNA. Take a look at the WGCNA tutorials and their treatment of batch effects or surrogate variables; you will definitely find something. Good luck.
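
A minimal sketch of that hand-off, assuming a batch-corrected genes-by-samples matrix `combat_edata` from an earlier ComBat/SVA step (a placeholder name); the parameter values shown are illustrative defaults, not recommendations:

```r
library(WGCNA)

datExpr <- t(combat_edata)            # WGCNA expects samples x genes
sft <- pickSoftThreshold(datExpr)     # choose a soft-thresholding power
net <- blockwiseModules(datExpr,
                        power         = sft$powerEstimate,
                        TOMType       = "unsigned",
                        minModuleSize = 30,
                        numericLabels = TRUE)
table(net$colors)                     # module assignments and sizes
```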

Hey @vchris_ngs, having settled on ComBat as the best method to correct the batch-effect-biased matrix (I read some papers about it), I decided to proceed like this:

I referred to http://bioconductor.org/packages/release/bioc/html/BatchQC.html. The density and Q-Q plots comparing the mean and variance under the normal parametric assumption with the empirical distribution led me to opt for parametric ComBat. Could you give me your opinion? And could you give me an example of overfitting?

I will not go into the debate over which method is better for batch correction. If you want to remove the effect of a batch inside a linear-model framework, you can specify the design matrix (without the batch factor) as well as the batch factor when calling removeBatchEffect() from the limma package; that will also do the job. You can do the same with ComBat. All of these work if you know a priori which factors contribute to your batches. If you do not know the confounding factors, then you use SVA; that is the distinction. Since SVA at times produces many surrogate variables, modelling around all of them can give a classical case of overfitting, so it is not always advisable to include every variable SVA returns. With ComBat you model only the batches you are sure about. If you want to know more about overfitting, try this link. If I remember correctly you are using WGCNA and want batch-effect reduction there; according to the WGCNA authors you can use ComBat, since that is what they use most of the time.
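
For reference, the removeBatchEffect() route sketched in R, with placeholder objects: a genes-by-samples matrix `edata` and a sample annotation `pheno` with `batch` and `tissue` columns.

```r
library(limma)

## Design matrix WITHOUT the batch factor: the biology to preserve.
design <- model.matrix(~ tissue, data = pheno)

## The batch factor is passed separately and its effect removed.
corrected <- removeBatchEffect(edata, batch = pheno$batch, design = design)
```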

Excerpts from WGCNA:

I think these explanations are good enough to decide whether and how you want to use ComBat. It is available in the sva package, so follow some examples and run it. One more thing to understand: if you make a PCA plot coloured by batch and do not see variability due to batches, there is no point correcting, even if you know batches exist; doing so will overfit.

I had already come across this good explanation in the WGCNA manual; it was quite clear, as was your answer. Thanks a lot!

Could you please explain why one should start correcting with 3 or 4 SVs when there are 7? Thanks.

Hi ivivek_ngs:

I've used your function and found that it gives the same result as taking the residuals from the linear model Y ~ SVs.

However, I think this solution may not take into account the influence of the SVs on our variable of interest.

For example, I found in this paper: https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-015-0808-5 that the function they use to regress SVs out of their data is (see the GitHub code):

If I modify your function this way it gives the same result as in that paper:

I am far from being an expert in this matter. Could you clarify the difference between the two approaches?

thanks
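
The difference between the two adjustments can be demonstrated with a toy base-R example (simulated data; a single surrogate variable `sv` and a binary `group` stand in for tumor/control):

```r
set.seed(1)
n     <- 20
group <- rep(0:1, each = n / 2)
sv    <- rnorm(n) + 0.5 * group      # SV partly correlated with group
y     <- 2 * group + 1.5 * sv + rnorm(n)

## Approach 1: residuals of y ~ sv (ignores the variable of interest).
y_adj1 <- residuals(lm(y ~ sv))

## Approach 2: fit the full model y ~ group + sv, then subtract only
## the SV component, keeping the group effect in the data.
fit    <- lm(y ~ group + sv)
y_adj2 <- y - coef(fit)["sv"] * sv

## With approach 2 the group difference in the adjusted data equals the
## model's group coefficient (up to floating point); with approach 1 it
## generally does not, because part of the group effect is absorbed.
mean(y_adj2[group == 1]) - mean(y_adj2[group == 0])
coef(fit)["group"]
```

The equality in approach 2 follows from the OLS residuals being orthogonal to both the intercept and the group indicator.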

Apologies for the delayed response. The code you showed should also work. The code I showed estimates your SVs and uses the residuals to fit a batch correction, but that does not mean you are performing differential analysis; it just applies the residual adjustment to the normalized dataset so that it can be used for visual inspection. Including the SVs in a model matrix for downstream differential analysis with regression should be done via a multi-factorial design, which I did not show here. That may be why you do not see it.

Code snippet 1, as far as I understand, uses a beta prior to cleanse the effect only nominally, so that the residuals do not remove non-linear or secondary effects (this is what I can gather from a rough reading of the paper you cited). Code snippet 2 is more or less along the lines of what I am doing: it directly returns a fully corrected normalized matrix. But again, neither tells you which residual should optimally be used to regress your data while still not removing non-linearity or secondary effects. To my knowledge, bulk RNA-seq does not involve that many non-linear effects, given that the data are high-dimensional. However, batch-effect correction for visualization is different from using SV residuals as factors in a differential analysis; one should use the same residual factors for both, for consistency. I suspect you are mostly accounting for the probability of Type I error, but there is not much of a difference. In any case, neither approach tells you which residual gives the best-fit visualization without running into overfitting. This is my hunch; I might not be entirely correct. Let me know if I was clear enough.

Thanks for the detailed and thorough response.

Yes, I think my confusion relates to what you say here: "batch effect correction for visualization is different from using batch SV residuals as factors for differential analysis".

In my case I was playing with a linear model:

Y ~ tumor/control + sva1 + sva2

to detect differentially expressed genes (DEGs) between control and tumor. I fit the model and detect some significant DEGs. Thus, when writing a report on the results and making plots of the DEGs, the coefficients in the model do NOT match the differences in gene means in the original expression data (of course, because the model includes SVAs). However, they also do NOT match the differences in means of the expression corrected for SVAs, as returned by the function you showed (which returns the residuals from Y ~ sva1 + sva2).

They DO match the values corrected by the other proposed function, which is the same as calculating the residuals from Y ~ sva1 + sva2 but substituting the SVA coefficients with those from Y ~ tumor/control + sva1 + sva2. This is why I think the last solution may be more appropriate in this case: what I want to represent is "the expression values that are being evaluated in my final model" (the real differences in means being tested across the coefficients).

Can you show some plots for this scenario? It sounds interesting. However, I would not expect a linear model that uses residual multifactors to regress the data for differential expression to exactly match batch-corrected values based on mean expression for the same genes; I have doubts there, but I might be mistaken. We use priors or shrinkage in the differential model (if you are using DESeq2), so you may not get a direct 1:1 relationship between the mean values of DEGs and the mean value of the same gene pulled from a normalized, batch-corrected expression matrix. What is the coefficient correlation? Which differential model are you using: voom, edgeR, or DESeq2? They differ somewhat in how the final differential value for a gene is reached.

I'm using limma for the tests and such.

Yes, that's exactly what I was trying to observe: to see whether I got the same results from limma when doing

Y ~ tumor/control + sva1 + sva2

versus fitting the model with Y and tumor/control pre-adjusted for the linear influence of the SVAs:

Y.adj ~ tumor/control.adj

What I find is that I get the same coefficients for the two models, and these coefficients also match the differences in means of the adjusted values used for visualization when adjusted with the second function. I'm actually using DNA methylation data; I talked about expression because I didn't want the message to be too confusing. So I use "raw" limma (not the voom method). I think limma does little more than moderate the variance estimates, so in the absence of covariates the model coefficients should be the differences between the means of the compared groups. Incidentally, I was unable to find the correlation coefficients in the limma output.

I could send you the code plus the example data if you are interested.

Well, limma is used for DNA methylation, so that should not be an issue; it is just without the voom transformation. With voom you would see a difference, since voom is for RNA-seq, whereas traditionally limma is for array-based work, so DNA methylation fits the bill; in that case your query is valid. The only thing I would check is the data type: are you using beta values or M-values? There are dedicated workflows to take care of this. As for removing batch effects, the best guidance is to first look at a PCA of your data to understand what is captured in PC1; if it is something other than biology, then SVA/ComBat should come into play, along with whatever you use for normalization and the downstream differential analysis.

A cross-package Bioconductor workflow for analysing methylation array data

The underlying premise: either fit the surrogate variables as covariates in your model matrix when doing the differential analysis, or regress them out of your data a priori and then perform the differential analysis. Both should give similar results as long as there are not many SVs and not much transformation (as in voom) or prior shrinkage.
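
The "covariates in the model matrix" route in limma, as a sketch; `M` (the M-value matrix, features x samples), a `group` factor, and an sva result `svobj` are placeholder names:

```r
library(limma)

## Include the surrogate variables as covariates alongside the factor
## of interest instead of pre-adjusting the data.
design <- model.matrix(~ group + svobj$sv)
fit <- eBayes(lmFit(M, design))
topTable(fit, coef = 2)   # the group coefficient, adjusted for the SVs
```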

I'm using M-values, as recommended, because they are more Gaussian. Indeed, I was looking at the PCA before and after adjustment to see how the variance shrank in dimensions related (or not) to known variables.

Regarding applying the coefficients before the differential analysis: I think I will incorporate them into the model and keep the pre-adjustment only for visualization. Jeff Leek did not recommend pre-adjusting in this answer, as it would change the degrees of freedom: answer on cleaning data before differential testing.

But yes, as you said, I checked, and there are usually few changes in the number of detected significant features between pre-adjusting and adjusting inside the model.

Thanks for your comprehensive advice.

Glad it worked out now at your end.

I should have pointed out that part of Jeff Leek's advice (I was indicating the same thing, but was not as comprehensive there): pre-adjustment is only advisable from a visualization perspective, while for the model the SVs should be incorporated as covariates so as not to affect the degrees of freedom. I guess I mentioned the term "residuals" but should have been clearer.

Hi @ivivek_ngs,

Excuse me, I have two questions. 1. You write "plPCA(co)", but I can't seem to find the function plPCA anywhere. Is it "patch-based local PCA"? Could you please tell me which package the plPCA function is from? 2. Could you please link some relevant papers where I can read more about your method? Thanks in advance.

Hi, Apologies for the late reply.

`plPCA()` was basically a wrapper around a PCA function developed in my former lab. Any standard PCA function or package should work; there are currently multiple exploratory packages in Bioconductor and CRAN for PCA and other standard dimension reductions that can be used for that step.

Thank you very much!!
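
For completeness on the plPCA question above: any standard PCA does the job of the in-house wrapper. A minimal base-R sketch, with `co` standing for the corrected matrix (genes x samples) mentioned in the question:

```r
## Base-R PCA on the corrected matrix; prcomp expects samples in rows,
## hence the transpose.
pca <- prcomp(t(co), scale. = TRUE)
summary(pca)   # variance explained per component
plot(pca$x[, 1], pca$x[, 2],
     xlab = "PC1", ylab = "PC2", main = "PCA of corrected matrix")
```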