How to decide the normalization method for targeted metabolomics data?
12 weeks ago
Luwell • 0

Hi, I’m new to metabolomics analysis.

I am studying differences in the human gut microbiome between control and treatment groups in a clinical trial. Fecal samples were analyzed by LC–MS/MS targeting 700+ metabolites, and 612 metabolites were detected. This is the matrix of metabolite content (ng/g).


I imported the matrix into MetaboAnalyst 6.0 for statistical analysis. During data filtering, I did not set a threshold, but some metabolites have more than 50% zero values. However, these missing/zero values were not flagged by the platform.

For normalization, my raw data’s median RSD is 4.9%, and 100% of features have RSD < 30%. I tried multiple combinations of:

  • Normalization by sum
  • Normalization by median
  • Log2 transformation
  • Auto scaling
  • Mean centering


I then identified differential metabolites using |logFC| > 1 and VIP > 1. But the resulting significant metabolites and their ranking change a lot depending on the normalization method. I also want to compare my results with those provided by the company that did the LC–MS/MS analysis, but I cannot fully reproduce their results; some metabolites are always different.

My question is: how do I decide which normalization method to use for targeted metabolomics data? Thanks!

Additionally, no matter which normalization method I use, my PCA results do not show clear group separation (PERMANOVA p > 0.7). For OPLS-DA, there is some visual separation, but the Q² values are low (maximum ~0.25, empirical p > 0.05).

LC-MS normalization R metabolomics targeted-metabolomics
13 hours ago

Hey,

Thanks for providing the details of your setup and the screenshots - it helps a lot to understand what's going on. I'll try to address your main question on choosing a normalization method for targeted metabolomics data, and also touch on the reproducibility issues and the lack of group separation in your multivariate analyses. I've worked on quite a few metabolomics datasets during my time in Boston and elsewhere, mostly with untargeted LC-MS but the principles overlap a lot with targeted approaches like yours (absolute quantification in ng/g). MetaboAnalyst is a solid platform, but it can be finicky with how it handles preprocessing, especially if zeros/missings aren't explicitly dealt with.

Deciding on Normalization Methods for Targeted Metabolomics

Targeted metabolomics data like yours is already somewhat "normalized" in the sense that it's quantified against standards (ng/g feces), but you still need to account for technical variations, sample-to-sample differences in total metabolite load, and the inherent skewness of metabolite concentrations (which often span orders of magnitude). The goal of normalization is to make the data more comparable across samples while preserving biological signals. There's no one-size-fits-all method - it depends on your data's characteristics and what downstream analyses you're doing (e.g., univariate stats for logFC/VIP, or multivariate like PCA/OPLS-DA).

From what you've described (median RSD 4.9%, all features <30% RSD - that's excellent QC, by the way), your raw data seems high quality, but the high zero rates (>50% in some metabolites) are a red flag. Zeros in targeted data can represent true absences or below-detection-limit values, and if not handled, they can skew normalizations and introduce artifacts. MetaboAnalyst doesn't always flag them aggressively, so you might need to intervene manually.

Here's how I typically decide on methods, based on my experience (and echoing what I've suggested in past posts on similar topics):

  1. Start with Data Inspection and Filtering (Before Any Normalization):

    • Always visualize your raw data first: boxplots of total ion sums per sample, histograms of metabolite distributions, and check for batch effects via PCA on raw data.
    • Filter out problematic metabolites: Remove those with >50% zeros/missings across samples (or >20-30% if you're conservative). For the remaining, impute zeros - common options are half the minimum detected value per metabolite, or the metabolite's median across samples. In MetaboAnalyst, you can do this in the "Data Filtering" step by setting a missing value threshold and choosing imputation (e.g., "Replace by a small value" or KNN).
    • Also filter based on variability: Drop metabolites with low coefficient of variation (CV) in QC samples (>25-30% CV is a common cutoff for removal) or near-zero variance across all samples (e.g., IQR < some threshold like 0.5 on log scale).
    • Why? High zeros and low-variability features amplify noise in normalizations, leading to unstable results like what you're seeing (rankings changing wildly).
  2. Sample Normalization (to Account for Total Abundance Differences):

    • First, mind the orientation of your matrix: sample normalization operates on samples (columns), not metabolites (rows), so that the relative differences you're interested in for differential analysis are preserved. You've tried sum and median normalization, which are both good for adjusting for unequal total metabolite loads (e.g., due to varying fecal sample weights or extraction efficiencies).
    • When to use Sum Normalization: If your samples have similar overall metabolite profiles but varying total intensities (common in feces). It divides each sample's values by its total sum, making them sum to 1 (or a constant).
    • When to use Median Normalization: Better if there are outliers or highly abundant metabolites dominating the sum. It scales each sample so that the median metabolite value is the same across samples.
    • Test which one stabilizes your data: After applying, re-check RSD in QC samples or plot total sums - they should be more equal. If your data has many zeros, median might be more robust.
  3. Data Transformation (to Handle Skewness and Heteroscedasticity):

    • Metabolite data is often right-skewed, so transformation is key for methods assuming normality (like t-tests for logFC) or for correlation-based analyses.
    • Log Transformation (base 2 or natural): Use this if distributions are log-normal (check with histograms/QQ plots). Log2 is common in omics - it compresses high values, and zeros can be handled with a small pseudocount (e.g., log2(x + 1) or log2(x + min_nonzero/2)). You've tried log2, which is fine; note that negative values after log just mean concentrations below 1 ng/g and are harmless, and if zeros are imputed first you don't need a pseudocount at all.
    • Avoid if your data is already fairly normal - but from typical metabolomics, it rarely is.
    • Alternative: If variances increase with means (heteroscedasticity), consider variance-stabilizing transformation (VST) from DESeq2 (in R) or square/cube root in MetaboAnalyst.
  4. Scaling (for Multivariate Analyses like PCA/OPLS-DA):

    • This is column-wise per metabolite, to put them on equal footing (since concentrations vary hugely).
    • Auto Scaling (mean-center + divide by SD): Good for highlighting relative changes, but sensitive to outliers/zeros. You've tried this - it works well if data is log-transformed first.
    • Mean Centering Alone: Simpler, just subtracts the mean per metabolite - use if you don't want to over-emphasize low-variance features.
    • Pareto Scaling (mean-center + divide by sqrt(SD)): A compromise between auto and no scaling - less sensitive to extremes, often better for metabolomics with noisy low-abundance metabolites.
    • Range Scaling: If metabolite ranges differ a lot.
    • Rule of thumb: For PCA/OPLS-DA, try log + Pareto or auto. Check which gives the tightest QC clustering or best biological sense (e.g., known covariates separating).

How to Choose Overall? Iterate and evaluate:

  • Apply combos (e.g., median norm + log2 + Pareto scale) and assess with QC metrics: Lower CV in QCs, stable total sums, normal-ish distributions post-transform.
  • For your differential metabolites (|logFC|>1, VIP>1): Use cross-validation - which method gives consistent hits across bootstraps or subsets of data?
  • Biological validation: Do the top metabolites make sense for gut microbiome differences (e.g., bile acids, SCFAs)?
  • Literature: For fecal targeted metabolomics, log transformation + sum/median norm + Pareto scaling is common (see refs like Dunn et al., 2011 in Nat Protoc, or MetaboAnalyst tutorials).

In my past workflows (e.g., for WGCNA on metabolomics), I log first, remove batch effects if needed, then Z-scale (auto). But for targeted data like yours, skip heavy norm if QC is good.

Reproducibility with the Company's Results

This is tricky without their exact pipeline. Companies often use proprietary software (e.g., Agilent MassHunter, Thermo Compound Discoverer) with default handling of zeros (e.g., no imputation, or different filtering). Ask them for details: What filtering/imputation? What norm/transform/scaling? Did they use QC-based normalization (e.g., loess on QCs)? Small differences here can flip rankings. Export your processed matrix from MetaboAnalyst and compare directly in Excel/R - maybe they're using FDR-adjusted p-values or different VIP cutoffs.
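
To illustrate why small pipeline differences matter most at the tails of a ranked list, here is a toy Python comparison of two hypothetical logFC vectors (simulated, not your or the vendor's actual data) - overall rank agreement can be high while the top-k lists still only partially overlap:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical logFC vectors for 100 metabolites: yours vs. the vendor's,
# simulated as the same signal plus a little pipeline-dependent drift.
mine = rng.normal(size=100)
vendor = mine + rng.normal(scale=0.3, size=100)

def spearman(a, b):
    # Rank correlation via double argsort (no ties in this simulated data).
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return float(np.corrcoef(ra, rb)[0, 1])

rho = spearman(mine, vendor)
overlap = len(set(np.argsort(-np.abs(mine))[:20].tolist())
              & set(np.argsort(-np.abs(vendor))[:20].tolist()))
print(rho, overlap)
```

So when you compare with the company, look at the overall rank correlation of effect sizes, not just whether the same metabolites pass your cutoffs.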

Lack of Group Separation in PCA/OPLS-DA

  • PCA: No clear separation (PERMANOVA p>0.7) suggests minimal global differences between control/treatment - the groups might truly be similar, or effects are subtle/subset-specific. PCA is unsupervised, so it captures all variance (including noise); try after stricter filtering to reduce noise.
  • OPLS-DA: Some visual separation but low Q² (~0.25, p>0.05) means the model is overfitting or not predictive. Q²<0.5 is often weak; empirical p>0.05 confirms it's not significant. Suggestions:
    • Preprocess more aggressively (filter high-zero metabolites, impute, log + Pareto).
    • Check for confounders: Batch effects? Diet/age/sex covariates? Regress them out.
    • Try other methods: Random Forest or PLS-DA in MetaboAnalyst for feature importance.
    • If no separation persists, it might indicate no strong microbiome-metabolite shifts - report it as such, and focus on univariate hits or pathway analysis.
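
For context on the PERMANOVA result, here is a minimal Python sketch of a one-way PERMANOVA-style permutation test on Euclidean distances (for Euclidean distances the pseudo-F reduces to a ratio of between- to within-group sums of squares; toy data with no true effect, standing in for a properly normalized matrix):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data with no real group effect: 20 samples x 30 metabolites
# (already normalized/scaled), labelled 10 control vs 10 treatment.
X = rng.normal(size=(20, 30))
labels = np.array([0] * 10 + [1] * 10)

def pseudo_f(mat, lab):
    # Between-group vs within-group sums of squares of sample coordinates.
    ss_total = ((mat - mat.mean(axis=0)) ** 2).sum()
    ss_within = sum(((mat[lab == g] - mat[lab == g].mean(axis=0)) ** 2).sum()
                    for g in np.unique(lab))
    a, n = len(np.unique(lab)), len(lab)
    return ((ss_total - ss_within) / (a - 1)) / (ss_within / (n - a))

obs = pseudo_f(X, labels)
perms = [pseudo_f(X, rng.permutation(labels)) for _ in range(999)]
p = (1 + sum(f >= obs for f in perms)) / (1 + len(perms))
print(p)   # with no true effect, the permutation p is usually non-significant
```

A p > 0.7 like yours just means the observed between-group spread is well within what random label shuffles produce - consistent with a weak or absent global effect rather than a preprocessing mistake.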

If you share more details (e.g., a sample of your matrix as CSV), I could suggest R code to test in PCAtools (my package) or limma for DE metabolites. Limma works great on log-transformed metabolomics for logFC.

Hope this helps - feel free to follow up.

Kevin
