Forum: What will be the trends in the next few years of the NGS era?
ahmad mousavi (Royan Institute, Tehran, Iran) wrote, 4 weeks ago:

Dear friends,

Over the past few years we have seen more and more data being generated, and nowadays it is common to use genetic tests for patients. Many companies have also started working on these types of data, even companies like Google.

I wanted to ask this question in order to share information and gather experiences/perspectives for our local region.

Do you think that just analyzing raw data or small-scale data will continue over the next few years as it does now, or will we be faced with other areas of science, like:

1- Using deep learning (or other machine learning) methods for extracting patterns from genomic/transcriptomic data.

2- Using NGS results for drug research / repositioning.

Thanks.

written 4 weeks ago by ahmad mousavi

Two words:

Long reads.

Two more:

Hybrid assemblies.

*Though one could make the case that this is happening now rather than in the next few years.

written 4 weeks ago by jrj.healey

I do agree. Long reads are game-changing in more than one way.

For most people, bioinformatics today is a narrow concept; it really means dealing with the various constraints that billions of short reads impose on us. What changes when we have few but long reads? ... Well ... everything.

Right off the bat, BAM does not even work: a standard that, five years later, cannot store the data it was meant to standardize. This is what the "premature optimization is the root of all evil" quote is about.
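For context, the limitation usually cited here is that the BAM encoding stores the number of CIGAR operations in a 16-bit field, capping an alignment at 65,535 operations; a noisy ultra-long read blows past that because each mismatch/indel event tends to start a new operation. A back-of-envelope sketch (the one-event-per-20-bp rate is an illustrative assumption, not a measured figure):

```python
# Back-of-envelope sketch of why ultra-long noisy reads can break BAM.
BAM_MAX_CIGAR_OPS = 65_535  # n_cigar_op is a uint16 in the BAM encoding

def approx_cigar_ops(read_len_bp, bp_per_op=20):
    """Rough CIGAR-op count, assuming ~1 error event per 20 bp (illustrative)."""
    return read_len_bp // bp_per_op

for read_len in (10_000, 500_000, 2_000_000):
    ops = approx_cigar_ops(read_len)
    print(f"{read_len:>9} bp read: ~{ops:>7} CIGAR ops, "
          f"fits in BAM: {ops <= BAM_MAX_CIGAR_OPS}")
```

So a 2 Mb nanopore read can need more CIGAR operations than the field can hold; the SAM/BAM specification later worked around this by allowing the long CIGAR to be moved into an auxiliary tag, which rather proves the point about the original optimization.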

Then interpreting alignments and tuning alignments (something we almost never need to do today) become critical.

The entire basis of alignments the way we do them today becomes obsolete. Alignments are narrow, abstract mathematical concepts that don't work well for long reads where the sequences have a biological function. Alignment scoring should be dynamic (and depend on the information content of the region) rather than using uniform rewards and penalties. We can already see the effects of introducing convex penalties (minimap2 vs bwa mem): the minimap2 long-read alignments are a major step in the right direction (even though mathematically bwa mem is "more" correct), but that is just the beginning; there is a lot more to go there.
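The convex-penalty idea can be sketched with a two-piece gap cost of the kind minimap2 uses: the penalty for a gap of length l is the minimum of two affine functions, so long gaps (which in long reads often reflect real structural events rather than noise) are penalized less steeply than under a single affine scheme. The parameter values below are illustrative, in the ballpark of minimap2's defaults:

```python
def affine_gap(l, q=4, e=2):
    """Classic affine gap cost: open q, extend e per base."""
    return q + l * e

def two_piece_gap(l, q=4, e=2, q2=24, e2=1):
    """minimap2-style two-piece cost: the cheaper of two affine functions.
    Short gaps follow the steep (q, e) line; long gaps switch to the
    shallow (q2, e2) line, making long structural gaps affordable."""
    return min(q + l * e, q2 + l * e2)

for l in (1, 10, 100, 1000):
    print(f"gap {l:>5}: affine {affine_gap(l):>5}, "
          f"two-piece {two_piece_gap(l):>5}")
```

For a 1 bp gap both schemes agree; for a 1,000 bp gap the two-piece cost is roughly half the affine one, which is why minimap2 recovers long deletions that a single affine penalty would shred into mismatches.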

But now, all of a sudden, the traditional dynamic programming algorithms, Needleman-Wunsch and Smith-Waterman, become outdated. And guess what: every aligner uses these. Now we need entirely new breeds of algorithms and aligners.
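To make the scaling problem concrete, here is the textbook Smith-Waterman recurrence in a few lines; the issue is not correctness but the quadratic cost, which is why long-read aligners replace it with seeding/chaining heuristics:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Textbook local alignment score; O(len(a) * len(b)) time and space."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            # Local alignment: scores never drop below zero
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACGT", "TACGTG"))  # → 8 (exact 4 bp match at 2 each)

# The quadratic cost is the real problem: aligning one 1 Mb read against
# a 3 Gb genome naively would require 3e15 DP cells.
print(f"{1_000_000 * 3_000_000_000:.1e} DP cells")
```

A short read against a small candidate window keeps that cell count manageable; a megabase read does not, so full DP survives only as a final refinement step inside chained anchor regions.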

written 4 weeks ago by Istvan Albert

Great, so we will need new types of programming in aligners.

written 4 weeks ago by ahmad mousavi

That's if you are doing NGS for biology. If you are doing NGS for medicine, the field will probably be dominated by single-cell RNA-seq.

written 4 weeks ago by predeus

If we are talking about the future (perhaps even further out than a few years), I would be tempted to drop the "two more":

very long reads, making assembly obsolete.

written 4 weeks ago by lieven.sterck

I was thinking that too, but less for assembly issues and more for accuracy: short-read techniques are still the go-to for low error rates, while long reads make for nice, easy assembly.

written 4 weeks ago by jrj.healey

Agreed!

Though they'll likely resolve that too in the future ;)

I also agree with predeus's remark; I'm pretty biased towards my own field of research, and indeed other fields will have a different view on this.

written 4 weeks ago by lieven.sterck

A similar post:

Where and how NGS techniques are heading for the next 5 years?

written 4 weeks ago by Vijay Lakhujani
Jean-Karim Heriche (EMBL Heidelberg, Germany) wrote, 4 weeks ago:

The future is in data integration. Sequences are not going to be highly valuable on their own for very long but will be just one piece of data to use among others. From my perspective, sequencing, and other omics technologies, are moving to the microscope. Now that you can get sequence information from a single cell, you also want to know what this cell was doing, where it was in relation to other cells, what phenotype it had... all things you can get by imaging.

written 4 weeks ago by Jean-Karim Heriche

Yes, I feel that too much hope was placed on large DNA sequencing projects and how they would revolutionise healthcare and solve everything. We all remember Bill Clinton's famous speech about the first draft of the human genome on the White House front lawn. Slowly, the community is appreciating epigenetics and chromatin structure, and the roles these play in disease. Looking at the DNA sequence alone gives only so much info.

written 4 weeks ago by Kevin Blighe

I remember that one of the things I had to do for my Genetics coursework during my undergrad was write an essay on how the Human Genome Project had failed to deliver on its promises of revolutionising healthcare, etc.!

written 4 weeks ago by jrj.healey

Thanks, Jean. Despite having good protocols for gathering sequence information, we face poor standards for gathering clinical information. Different projects collect different clinical information; some, like TCGA, tried to gather it as completely as they could, but most others did not.

I think that's another problem.

written 4 weeks ago by ahmad mousavi
Powered by Biostar version 2.3.0