Forum: Where and how are NGS techniques heading over the next 5 years?
6.7 years ago

Hello everyone,

I wanted to open up a discussion on new developments and advancements in sequencing techniques and computational analysis methods over the next few years; maybe the next five.

How is NGS evolving across the globe? What are the new plans, additions, and major changes in terms of sequencing methods and downstream processing? On the sequencing side, as far as I know, the most recent advancement is Oxford Nanopore; or is there anything else someone is working on?

And what about computational methods, e.g. programming? Shell, Python/Perl, and C have been preferred so far, and R is of course playing a key role. What other emerging tools and languages should one start looking into and learning?

Tags: next-gen • NGS

I think NGS machines, reagents, and the machines to analyze the output data will get cheaper (as cheap as PCR machines) in the near future. NGS will become standard lab apparatus, like HPLC/PCR/FPLC.


Agreed! But in that case, it requires expertise to handle NGS data. People who worked with microarrays earlier are gradually transitioning to NGS for obvious reasons, but again, it will take time. A lot of workshops, trainings, and seminars are being conducted across the globe, which will surely help.

However, in the majority of developing countries, researchers still depend on outsourcing both the sequencing and the data analysis, mainly because of expensive sequencers and the wet-lab and bioinformatics expertise required to analyse the data. I am hopeful that the scene will change very soon.

6.7 years ago

Long reads and low cell numbers.

Nanopore and PacBio are increasingly popular, and I don't see that waning any time soon. The biggest issue these have at the moment is low throughput, so it's tough to compete with the core of Illumina's business. I expect that will change eventually. A lot of issues go away with longer reads (e.g., phasing and splicing), so once people can do human/mouse experiments at scale and on budget, they'll transition over. Of course, a lot of tools aren't geared toward this, so that'll require some change.

We're increasingly doing single-cell sequencing of various types. This doesn't really require new tools (for the most part), but it can require some retooling at various levels since eventually you're dealing with thousands of samples.


Another major problem with current long-read technologies is that they are unable to call SNPs from diploid genomes. In theory, PacBio can probably achieve this, but no available tools match the accuracy of Illumina data yet; Nanopore can't so far, even in theory. Without solving this problem, we would need both short and long reads even if long reads were cheaper to produce.


I'm hopeful that nanopore will get there eventually :)


Regarding nanopore human genome sequencing, this is an interesting preprint (including SNP calling, SV calling, assembly, and methylation):

Nanopore sequencing and assembly of a human genome with ultra-long reads


+1 for that. Thanks Devon, that was indeed useful. Couldn't the "throughput" issue be compensated by read "length"? I mean, instead of a lot of reads, won't a few high-quality reads from Nanopore/PacBio do the job? For bacterial and other smaller genomes, PacBio data has yielded nice assemblies in a single scaffold. I think that's amazing, as it can bypass some of the inherent problems with de Bruijn graph-based assemblers.
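To make the de Bruijn graph issue concrete, here is a minimal, illustrative sketch (my own toy example, not a real assembler; the genome and k are made up): nodes are (k-1)-mers and edges are k-mers, so any repeat longer than k creates a branching node that makes the walk through the graph ambiguous, while a single long read spanning the repeat avoids the problem entirely.

```python
from collections import defaultdict

def de_bruijn(reads, k):
    """Build a de Bruijn graph: (k-1)-mer nodes, k-mer edges."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

# The segment "GATTACA" occurs twice, so with k=4 the graph branches
# and several different genomes are consistent with the k-mers.
g = de_bruijn(["ATGGATTACACCGATTACATTT"], k=4)
branching = sorted(n for n, succs in g.items() if len(set(succs)) > 1)
print(branching)  # ['ACA', 'ATT'] -- ambiguity a read spanning the repeat resolves
```

A longer k (i.e., longer reads) makes the repeat unique again, which is exactly why long reads collapse bacterial assemblies into single scaffolds.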


For the majority of the sequencing we do, depth is more important than length. Long reads would be "great to have", but they're not essential. Once the price comes down and the capacity scales up, we and others will transition over.

6.7 years ago
Charles Plessy ★ 2.9k

Regarding sequencing techniques, I think that next-generation protein sequencing (for instance with nanopores) will arrive and start to displace an increasingly large share of the applications of next-generation transcriptome sequencing.


Does that point to a surge in proteomics or a related field?


And this news from NASA is pretty interesting and motivating for the NGS journey:


Not any time soon, I'm afraid. Sequencing proteins is the holy grail, but so far it seems very, very hard.

6.7 years ago
GenoMax 139k

The majority of NGS sequencing will become a commodity, outsourced to the lowest-cost provider, just as happened with Sanger sequencing. Whether people like it or not, there will be more commercial analysis software packages and cloud data-analysis service providers out there, especially in the clinical sequencing area.

6.7 years ago

Another dimension is DNA as a digital data storage medium. Microsoft and others have already used synthesis and NGS to encode and decode digital data in DNA, so the future could also head in this direction.
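As a toy illustration of the basic idea (my own sketch, not the scheme any of these groups actually used): map each byte to four bases at 2 bits per base, and reverse the mapping to decode. Real systems additionally add error-correcting codes, avoid homopolymer runs, and split the payload into indexed fragments.

```python
BASES = "ACGT"  # 2 bits per base: A=00, C=01, G=10, T=11

def encode(data: bytes) -> str:
    """Map each byte to 4 bases, most significant bits first."""
    return "".join(BASES[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0))

def decode(dna: str) -> bytes:
    """Reverse the mapping: every 4 bases back to 1 byte."""
    out = bytearray()
    for i in range(0, len(dna), 4):
        byte = 0
        for base in dna[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

dna = encode(b"NGS")
print(dna)  # CATGCACTCCAT
assert decode(dna) == b"NGS"  # lossless round trip
```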


It's depressing that someone green-lighted funding for a project to encode and read DNA using current-day technology. Might as well set the money on fire. They both just made an encoding, synthesized a ton of short fragments with indexes like disk inodes, and called it a day. Very disappointing that anyone would throw such a large amount of money at something that could have been published on the back of a napkin.

(edit: but thank you for bringing it to my attention!)


Views may vary; I think there is no harm in trying new stuff. How useful it actually is, is another debatable topic.


+1 , thanks for sharing this!

6.7 years ago
h.mon 35k

My far-fetched 2 pennies:

1) On the technology side: more and more patents, leading to meaner and more abundant patent wars. All informed bioinformaticians/molecular biologists will have to learn legalese.

2) On the analysis side: integration of sequencing and automated analysis. The user selects some analyses, the sequencer output is analysed continuously until sequencing finishes, and the sequences and analysis results are sent to the user simultaneously.
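Point 2 can be sketched as a streaming loop (hypothetical; the generator stands in for reads coming off the instrument in real time): running statistics are updated per read as data arrives, so interim results can be pushed to the user before the run finishes rather than after.

```python
def read_stream():
    # Stand-in for reads arriving from the sequencer during the run.
    yield from ["ACGTACGT", "GGGTTTAA", "ACGT"]

n_reads = total_bases = gc_bases = 0
for read in read_stream():
    n_reads += 1
    total_bases += len(read)
    gc_bases += sum(read.count(b) for b in "GC")
    # Interim results could be reported to the user here, mid-run.
print(n_reads, total_bases, gc_bases / total_bases)  # 3 20 0.45
```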

6.7 years ago

In terms of data integration and analysis: multi-omics, proteogenomics, and combined metagenomics-metaproteomics (present trends) will become routine.
