I'm trying to design an experiment comparing the effects of varying expression levels of Gene X between Sample A and Sample B. If measurements of Gene X expression come from RNA-seq in Sample A but from qPCR in Sample B, what assumptions must be made when comparing the effects of expression on a trait between these two samples?
Expression levels of Gene X in Sample A can only be measured by RNA-seq (standard Illumina high-throughput). Expression levels of Gene X in Sample B can only be measured by qPCR – a standard protocol in which RNA is extracted and reverse-transcribed into single-stranded cDNA, from which the target gene is amplified and quantified on a real-time quantitative PCR machine. In both cases, expression of Gene X is standardized against expression of the same reference gene.
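For concreteness, the normalization I have in mind is relative quantification in each platform: the delta-Ct method for qPCR and a simple ratio of counts for RNA-seq. A minimal sketch (all numeric values are hypothetical, and the qPCR calculation assumes ~100% amplification efficiency for both target and reference):

```python
# qPCR: relative expression of Gene X via the delta-Ct method.
# Hypothetical Ct values for one Sample B individual.
ct_gene_x = 22.0     # cycle threshold for Gene X
ct_reference = 18.0  # cycle threshold for the reference gene
delta_ct = ct_gene_x - ct_reference
# Assumes perfect doubling per cycle (100% efficiency) for both amplicons.
qpcr_relative_expression = 2 ** (-delta_ct)

# RNA-seq: relative expression as a ratio of read counts.
# Hypothetical counts for one Sample A individual.
counts_gene_x = 850.0
counts_reference = 12000.0
rnaseq_relative_expression = counts_gene_x / counts_reference
```

Note that the two quantities live on different scales unless the efficiency assumption holds and read counts scale linearly with transcript abundance, which is part of what the question below is probing.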
The reasons for using a different method for each sample are:
- There are too many SNPs in Sample A to design qPCR primers that would work for enough samples
- RNA-seq data is already available for Sample A, so cost is not a factor
- RNA-seq of Sample B would be too costly and is unnecessary since qPCR primers will work for all samples
I wish to build linear models of the effect of expression on the trait for each sample. In assessing whether this is a valid approach, my concern is possible bias introduced by the two different measurement methods. Where can bias arise in qPCR or RNA-seq? How can either method be more or less accurate in measuring gene expression levels?
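The analysis I intend is an ordinary least-squares fit of trait on expression within each sample, followed by a comparison of the slopes. A sketch using simulated stand-in data (all values hypothetical; the true slope of 0.8 is just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-ins for the real measurements (all values hypothetical).
expr_a = rng.uniform(0.5, 2.0, 30)                        # RNA-seq relative expression, Sample A
trait_a = 1.0 + 0.8 * expr_a + rng.normal(0, 0.1, 30)     # trait values

expr_b = rng.uniform(0.5, 2.0, 30)                        # qPCR relative expression, Sample B
trait_b = 1.0 + 0.8 * expr_b + rng.normal(0, 0.1, 30)

def fit_slope(expr, trait):
    """OLS fit of trait ~ intercept + slope * expression; returns the slope."""
    design = np.column_stack([np.ones_like(expr), expr])
    coef, *_ = np.linalg.lstsq(design, trait, rcond=None)
    return coef[1]

slope_a = fit_slope(expr_a, trait_a)
slope_b = fit_slope(expr_b, trait_b)

# Comparing slope_a to slope_b is only meaningful if the two expression
# scales are comparable: any method-specific multiplicative scaling,
# offset, or nonlinearity changes the slope without any biological difference.
```

This makes the concern concrete: if qPCR and RNA-seq report expression on different effective scales, the two slopes differ even when the underlying biology is identical.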