Some general questions...
It appears that with most classification learners one must come up with a set of quantifiable features to associate with each observation. These might include mean and standard-deviation intensity values, texture measures, and Zernike polynomial moments. Does anyone have other suggestions for quantifying features of 3-dimensional fluorescence pixel data?
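To make the question concrete, here is a minimal sketch of per-object feature extraction from a 3D fluorescence volume. The volume, the labels, and the particular features (mean, spread, size, z-position) are synthetic stand-ins of my own choosing, not a recommendation; in practice the data would come from the microscope and a segmentation step.

```python
# Minimal sketch: per-object features from a labelled 3D volume.
# All data here are synthetic placeholders.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
volume = rng.random((16, 64, 64))          # z, y, x intensity stack
labels = np.zeros_like(volume, dtype=int)  # 0 = background
labels[4:10, 10:30, 10:30] = 1             # fake "cell" 1
labels[6:12, 35:55, 35:55] = 2             # fake "cell" 2

def object_features(volume, labels, index):
    """Return a small feature vector for one labelled object."""
    mask = labels == index
    voxels = volume[mask]
    return np.array([
        voxels.mean(),                    # mean intensity
        voxels.std(),                     # intensity spread
        mask.sum(),                       # object size in voxels
        ndimage.center_of_mass(mask)[0],  # z position of the object
    ])

# One row of features per segmented object.
features = np.stack([object_features(volume, labels, i) for i in (1, 2)])
print(features.shape)  # → (2, 4)
```

Texture and Zernike measures would simply add more columns to each row.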
Once appropriate quantifications have been acquired for a given class, one can then train a classifier. My next question: does one then have to segment the test images and build a matrix of the same feature variables from each segmented image in order to use the classifier, or can one use the raw image itself?
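My current understanding of the workflow I am asking about is sketched below: the classifier is fit on a feature matrix, so a test image would have to pass through the identical segment-and-extract pipeline before prediction. The data and the choice of random forest are placeholders, not something from a real pipeline.

```python
# Sketch of the feature-matrix workflow, with synthetic placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Training set: n objects x k features (mean, std, texture, Zernike, ...).
X_train = rng.random((40, 4))
y_train = rng.integers(0, 2, size=40)   # two phenotype classes

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# A "test image" must go through the identical pipeline:
# segment -> extract the same 4 features per object -> predict.
X_test = rng.random((5, 4))             # 5 segmented objects
predictions = clf.predict(X_test)
print(predictions.shape)                # one class label per object
```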
What supervised machine learning techniques will evaluate the raw image data itself, either for training or for testing? Can manually annotated data sets or clustered data sets serve directly as training data? That is to say, are there supervised machine learning algorithms that accept pure pixel data for training?
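For what it's worth, the simplest version of "accepts pure pixel data" that I can picture is sketched below: flatten each (cropped, aligned) patch into a vector and classify by nearest class centroid. Everything here is a synthetic toy of my own construction, just to clarify what I mean by the question.

```python
# Sketch: a classifier consuming raw pixels directly.
# Each patch is flattened and assigned to the nearest class centroid.
# All data below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)

# 20 training patches per class, 8x8x8 voxels each, flattened.
class0 = rng.normal(0.3, 0.1, size=(20, 8 * 8 * 8))
class1 = rng.normal(0.7, 0.1, size=(20, 8 * 8 * 8))

centroids = np.stack([class0.mean(axis=0), class1.mean(axis=0)])

def predict(patch):
    """Assign a flattened patch to the nearest class centroid."""
    distances = np.linalg.norm(centroids - patch.ravel(), axis=1)
    return int(distances.argmin())

test_patch = rng.normal(0.7, 0.1, size=(8, 8, 8))  # resembles class 1
print(predict(test_patch))  # → 1
```

I assume the answers I am looking for (e.g. convolutional networks) follow this same raw-pixel-in, label-out shape, just with a far more capable model in the middle.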
Are there reliable pre-trained classifiers already implemented for these types of fluorescence data sets?
Any resources or advice would be greatly appreciated.
Thanks in advance for any input!