When a lab performs an investigation to sequence a population of individuals, is it typically the case that every individual is sequenced on the same platform (e.g. SOLiD, Illumina)? I am wondering whether cases exist where some individuals in an investigation are sequenced on a different platform.
I am assuming all individuals would be sequenced on the same instrument for consistency. But then, for consistency, don't you also have to assume that all post-sequencing analysis is the same (e.g. the reads are assembled with the same pipeline)?
My reason for asking is that I am interested in whether it is considered 'acceptable' to treat the individuals in a population differently, given how that affects subsequent calculations such as allele and genotype frequencies.
I've never worked directly on a sequencing project; I've only been given the data to analyse after all this work has been done. The prior steps affect data quality (e.g. some pipelines are considered noisier than others), so they must affect the quality of the subsequent calculations. Or are the figures for allele and genotype frequencies just too imprecise for this to matter?
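To make concrete what I mean about pipeline noise propagating into the frequency estimates, here is a minimal sketch with made-up numbers: it computes an allele frequency from genotype counts, then shows how a hypothetical platform-dependent miscall (a few heterozygotes reported as homozygous reference, sometimes called "het dropout") shifts the estimate. The counts and error pattern are purely illustrative, not from any real dataset.

```python
def allele_freq(n_aa, n_ab, n_bb):
    """Frequency of the A allele from genotype counts (2 alleles per diploid individual)."""
    total_alleles = 2 * (n_aa + n_ab + n_bb)
    return (2 * n_aa + n_ab) / total_alleles

# Hypothetical true genotype counts in a sample of 100 individuals
true_freq = allele_freq(49, 42, 9)            # (98 + 42) / 200 = 0.70

# Suppose a noisier pipeline miscalls 5 of the AB individuals as AA
biased_freq = allele_freq(49 + 5, 42 - 5, 9)  # (108 + 37) / 200 = 0.725

print(true_freq, biased_freq)
```

Even this modest error rate moves the estimate by 0.025, which is the kind of systematic shift I worry about if some individuals came from a noisier platform or pipeline than others.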
So to summarise: how can you assess the quality of a variation study if all the data wasn't gathered using the same protocol? Are there guidelines for this?
Thank you for your time.
edit: Thanks for your answers. With regard to guidelines, I was also wondering whether there are guidelines for how to describe the pipeline used to sequence the data, and the associated parameters/thresholds that may vary between pipelines, or is it just a case of documenting it? Is there a need in the community for such guidelines if they don't exist, or is general documentation sufficient?