So I have a set of publicly available ChIP-chip data generated on the Affymetrix Human Tiling 2.0R Set, which really means that each individual replicate is spread across seven separate arrays.
I am trying to use the Starr package from Bioconductor to analyse these data, and while I can read the data into distinct ExpressionSets, I am not sure how best to combine these arrays and normalise between them.
I thought that I would be able to read the data one replicate at a time, normalise across all seven arrays using a set of control probes, and then normalise across all replicates afterwards:
- normalise all arrays from replicate X
- combine all arrays from replicate X into a single Eset A
- normalise all arrays from replicate Y
- combine all arrays from replicate Y into a single Eset B
- normalise all arrays from replicate Z
- combine all arrays from replicate Z into a single Eset C
- normalise across A, B and C
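For reference, here is a rough sketch of the workflow I have in mind for one replicate. The file names are placeholders, and I am assuming that Starr's readCelFile and normalize.Probes are the right tools for these steps:

```r
library(Starr)  # provides readCelFile() and normalize.Probes()

## Hypothetical file names -- substitute the real BPMAP and CEL files
bpmap <- "Hs35b_P01R_v02.bpmap"
cels  <- paste0("repX_chip", 1:7, ".CEL")

## Read the seven arrays of replicate X into a single ExpressionSet
esetX <- readCelFile(bpmap, cel_files = cels,
                     names = cels, type = rep("IP", 7))

## Normalise within the replicate ("quantile" is just one possible method)
esetX.norm <- normalize.Probes(esetX, method = "quantile")

## ... repeat for replicates Y and Z; the open question is how to
## normalise across the resulting sets.
```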
.... but the ExpressionSet I read in using "readCelFile" does not appear to contain any control probes that would let me normalise between the different chips of the same replicate, and the normalisation functions available in the Starr package do not seem to offer this functionality.
Is there a way to handle this in Bioconductor? Or is it acceptable to skip normalising between the arrays of the same replicate, and to just normalise after concatenating them together?
Thanks in advance