How to make lumi accept .txt data?
7.8 years ago


I am trying to do background correction and quantile normalization on a .txt file of expression values (the .CEL data is unavailable to me). GSE_data is a 47,000 x 73 data frame of expression values, with a header row. The data were gathered on an Illumina microarray.

I'm trying to use lumi like so:

testMat <- as.matrix(GSE_data)
testMat <- testMat[, -1]  # to get rid of probe IDs in first column

# background correction/RMA normalization
lumiObj <- lumiB(eset, method = 'bgAdjust', verbose = TRUE)

error in evaluating the argument 'object' in selecting a method for function 'exprs': Error in bgAdjust(x.lumi, ...) : The object should be class "LumiBatch"!

Can anyone see what I'm doing wrong? What's the best way to do background correction/quantile normalization on .txt data?
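A minimal sketch of one way to do this, assuming GSE_data looks as described (first column = probe IDs, remaining columns = numeric expression values); toy stand-in data are used below, and the lumi calls are guarded so the base-R part runs on its own:

```r
## Toy stand-in for GSE_data (3 probes x 2 samples):
GSE_data <- data.frame(ProbeID = c("ILMN_1", "ILMN_2", "ILMN_3"),
                       S1 = c(5.1, 6.2, 7.3),
                       S2 = c(5.0, 6.5, 7.1))

## Drop the ID column *before* as.matrix(), otherwise the whole matrix
## is coerced to character; keep the IDs as rownames so the LumiBatch
## gets proper featureNames.
exprMat <- as.matrix(GSE_data[, -1])
rownames(exprMat) <- as.character(GSE_data[[1]])
storage.mode(exprMat) <- "double"

if (requireNamespace("lumi", quietly = TRUE)) {
  library(lumi)
  ## lumiB() insists on a LumiBatch, so wrap the matrix first:
  eset <- new("LumiBatch", exprs = exprMat)
  ## Note: 'bgAdjust' relies on Illumina control-probe data, which a bare
  ## expression matrix does not carry; 'forcePositive' (the lumiB default)
  ## works without it.
  eset <- lumiB(eset, method = "forcePositive")
  eset <- lumiN(eset, method = "quantile")  # quantile normalization
}
```

The key point is that the object handed to lumiB()/lumiN() must be a LumiBatch, not a bare matrix, which is what the quoted error is complaining about.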

Thank you so much!

EDIT: When I define eset like this, I get an error:

eset <- new('LumiBatch', exprs = test)

Error in `featureNames<-`(`*tmp*`, value = c("1", "2", "3", "4", "5",  : 
  'value' length (47308) must equal feature number in AssayData (47308)
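One thing worth checking here (a guess based on the code shown, not a confirmed diagnosis): new('LumiBatch', exprs = ...) expects a numeric matrix, and if the probe-ID column is still in the data frame when as.matrix() runs, the result is silently coerced to a character matrix; without rownames, the constructor also falls back to the "1", "2", "3", ... feature names visible in the error. A base-R check, with toy data standing in for GSE_data:

```r
## Toy stand-in for GSE_data: probe IDs in column 1, values after.
GSE_data <- data.frame(ID = c("p1", "p2"), A = c(1.5, 2.5), B = c(3.5, 4.5))

bad  <- as.matrix(GSE_data)        # ID column coerces everything to character
good <- as.matrix(GSE_data[, -1])  # numeric matrix of expression values
rownames(good) <- as.character(GSE_data$ID)  # probe IDs become featureNames

is.numeric(bad)    # FALSE
is.numeric(good)   # TRUE
```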

EDIT 2: Using lumiR doesn't seem to work either. Inputting this (importme is a file in my path, a .txt file of the expression values only):

try.lumi <- lumiR('importme')

Error in gregexpr("\t", dataLine1)[[1]] : subscript out of bounds
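lumiR() is written to parse Illumina BeadStudio/GenomeStudio exports, which carry header sections and per-probe signal columns; on a bare expression matrix its header probing can fail with exactly this kind of subscript error. (It is also worth checking that 'importme' names the actual file on disk; if importme is an R variable holding the path, it should be passed unquoted.) A hedged alternative, assuming a tab-delimited file with probe IDs in the first column ("importme.txt" below is a stand-in name), is to read the file yourself and wrap the matrix:

```r
## Write a tiny tab-delimited file so the sketch is self-contained;
## in practice this would be the existing expression-values .txt.
writeLines(c("ProbeID\tS1\tS2",
             "ILMN_1\t5.1\t5.0",
             "ILMN_2\t6.2\t6.5"),
           "importme.txt")

dat <- read.delim("importme.txt", row.names = 1, check.names = FALSE)
mat <- as.matrix(dat)  # numeric matrix, probe IDs as rownames

if (requireNamespace("lumi", quietly = TRUE)) {
  library(lumi)
  eset <- new("LumiBatch", exprs = mat)  # then lumiB()/lumiN() as usual
}
```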
