The smaller the counts of a gene (or whatever you measure), the more unreliable they are and the more prone they are to show large fold changes.
Let's look at an example:
A gene has 10 counts in sample A and 2 counts in sample B. That makes a fold change of 5, right?
Say another gene has 1000 counts in A and 200 in B, also FC = 5.
Which is more reliable? I would say the second one.
Imagine the counts fluctuate a little because of the inherent uncertainty / error rate of sequencing and the quantification method.
Say the first gene now has only 5 counts in A and 4 in B (a shift of -5 and +2 counts): the FC is now 1.25 instead of 5.
If the second gene has the same absolute fluctuation, so 995 in A and 202 in B, the FC is now about 4.93 (995/202), still very close to 5. High counts are more resistant to small fluctuations. => If the mean (the average count of a gene) is low, the fold changes tend to be large (but unreliable). As far as I know this holds for every kind of experiment in which quantities are measured.
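To make this concrete, here is a minimal Python sketch (the helper `fold_change` and the shift of -5/+2 counts are just taken from the toy example above, not from any particular tool) that applies the same absolute fluctuation to both genes:

```python
# Apply the same absolute fluctuation (-5 counts in A, +2 counts in B)
# to a low-count and a high-count gene and compare the fold changes.

def fold_change(a, b):
    """Simple ratio of counts in sample A over counts in sample B."""
    return a / b

# (counts in A, counts in B) before the fluctuation
genes = {"low": (10, 2), "high": (1000, 200)}

for name, (a, b) in genes.items():
    fc_before = fold_change(a, b)
    fc_after = fold_change(a - 5, b + 2)  # same absolute wobble for both
    print(f"{name}-count gene: FC {fc_before:.2f} -> {fc_after:.2f}")

# low-count gene:  FC 5.00 -> 1.25  (the estimate collapses)
# high-count gene: FC 5.00 -> 4.93  (barely moves)
```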
Long story short: low counts tend to show artificially high (and often false) fold changes, so the confidence in them is low and p-values tend to be large. You would need more replicates to have the power to detect differential expression for low-count genes compared to high-count genes. That is why statistical power is inherently greater for highly expressed than for lowly expressed genes.
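If you want to see this beyond a single hand-picked fluctuation, a quick simulation shows the same trend. This is only a sketch assuming Poisson-distributed counts (real RNA-seq counts are overdispersed, which is why tools like DESeq2 and edgeR model them as negative binomial), but it illustrates how fold change estimates spread out at low means:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000  # number of simulated genes per mean level

# Simulate genes with a true fold change of 5 at a low and a high mean
for mean_b, label in [(2, "low"), (200, "high")]:
    a = rng.poisson(5 * mean_b, size=n)  # sample A, true FC = 5
    b = rng.poisson(mean_b, size=n)      # sample B
    keep = b > 0                         # avoid division by zero
    fc = a[keep] / b[keep]
    print(f"{label}-count genes: observed FC spans "
          f"{np.percentile(fc, 5):.2f} to {np.percentile(fc, 95):.2f} "
          f"(true FC = 5)")
```

The low-mean genes show a wide range of observed fold changes around the true value of 5, while the high-mean genes stay tightly clustered around it.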