Summary: Every paper published with sequencing data is supposed to deposit that data in the SRA. These data are generated with your tax money, but the frustrating part is that everyone uses their own pipeline to produce results, which makes the data almost impossible to reuse. So one day I decided to run >400,000 raw sequencing runs through the same pipeline and generate an -omic matrix for each data layer, which anyone can query to go from raw data to common publication figures in a minute. I am posting here to get feedback from the community on which data layers people want to see. The project is still in an early phase; your comments will be deeply valued and will shape where it goes. Currently, the project offers only transcript counts and allelic read counts.
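To make the "query an -omic matrix" idea concrete, here is a minimal sketch of what such a query looks like. The run accessions, gene names, and values below are hypothetical stand-ins (the real matrices cover >400,000 runs); the point is only that a merged genes-by-runs table makes cross-study lookups a one-liner.

```python
import pandas as pd

# Hypothetical stand-in for an -omic matrix: rows are genes, columns are
# sequencing runs (SRA run accessions). The real Skymap matrices are the
# same shape, just vastly wider.
expression = pd.DataFrame(
    {"SRR000001": [5.2, 0.0, 13.1],
     "SRR000002": [4.8, 1.7, 12.9]},
    index=["GAPDH", "CD19", "ACTB"],  # gene identifiers
)

# Pulling one gene's expression across every run is a single row lookup,
# which is what makes "data to figure in a minute" plausible.
gapdh_across_runs = expression.loc["GAPDH"]
print(gapdh_across_runs.mean())  # mean GAPDH expression across runs
```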
Related blog posts:
Overview of the project (Intro): A preview of the Skymap project: Extracting allelic read counts and expression profiles of >400,000 public sequencing runs and merging them into simple -omic matrices that can fit into your hard drive
Design rationale of the computational infrastructure (Method): How can a Jupyter notebook extract the expression levels or allelic read counts from >200,000 sequencing runs in seconds?