Question: PacBio sequencing for variant calling?
borealis wrote, 20 months ago:

Hello BioStars!

I am new to variant calling and would appreciate your feedback, pointers in the right direction, or literature recommendations. My question is about the practicality of using PacBio sequencing for variant calling between two plants, e.g. one with a normal phenotype and one diseased (allopolyploid, >50% repetitive elements). The disease is likely the result of a disruption (insertion/deletion) in a gene or a transcription factor in a biosynthetic pathway. A reference genome is available, assembled with a hybrid PacBio/Illumina approach. My initial plan was to do Illumina sequencing (15x coverage) and map to the reference genome. Now there is an opportunity to do PacBio sequencing, and I am debating whether it will help me answer my biological question. My preliminary idea is to use 10x PacBio coverage to close the gaps from paired-end reads and the 15x Illumina data to correct the errors... Thank you for reading.
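The short-read half of this plan is a standard map-then-call workflow. A minimal sketch with placeholder file names (ref.fa, reads_R1/R2.fastq.gz), assuming bwa, samtools, and bcftools from Bioconda; note that bcftools' default diploid assumption would need adjusting for an allopolyploid:

```shell
# Sketch: map 15x Illumina paired-end reads to the reference, then call small variants.
# Guarded so it degrades gracefully when the tools or inputs are missing.
if command -v bwa >/dev/null 2>&1 && command -v samtools >/dev/null 2>&1 \
    && command -v bcftools >/dev/null 2>&1 && [ -f ref.fa ]; then
    bwa index ref.fa
    bwa mem -t 4 ref.fa reads_R1.fastq.gz reads_R2.fastq.gz |
        samtools sort -o aln.sorted.bam -
    samtools index aln.sorted.bam
    # Small-variant calling; ploidy matters for an allopolyploid
    bcftools mpileup -f ref.fa aln.sorted.bam | bcftools call -mv -Oz -o variants.vcf.gz
    echo "done" > map_status.log
else
    echo "tools or inputs missing (try: conda install -c bioconda bwa samtools bcftools)" > map_status.log
fi
cat map_status.log
```

This is a sketch, not a recipe; duplicate marking, base-quality recalibration, and filtering are deliberately omitted.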

sequencing • 2.0k views
modified 20 months ago by tjduncan • written 20 months ago by borealis

While someone else may come along with a proper answer, these two things are going to make this a tough task:

  • plant, allopolyploid, >50% repetitive elements
  • the coverage you expect to have (are you going to do CCS-type reads rather than long reads?)

If you suspect you know the likely cause (gene), then going after that selectively may be better.

written 20 months ago by genomax

Thank you! Going selectively for the genes involved in the pathway is good advice.

written 20 months ago by borealis
Rox (France / Toulouse / GeT-Plage) wrote, 20 months ago:

Hello Borealis !

I think this paper may help you (the free bioRxiv version here ;) ):

It explains how long-read sequencing (such as ONT and PacBio methods) can help find larger structural variants, and it also presents their NGMLR + Sniffles pipeline for detecting structural variants from long-read data.
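For concreteness, that pipeline is roughly the following. A sketch with placeholder file names, assuming Sniffles v1-era command-line flags (check the current documentation, as flags have changed across versions):

```shell
# Sketch of the NGMLR + Sniffles SV pipeline; guarded so it skips where tools are absent.
if command -v ngmlr >/dev/null 2>&1 && command -v sniffles >/dev/null 2>&1 \
    && command -v samtools >/dev/null 2>&1 && [ -f ref.fa ]; then
    # 1) Map long reads with an SV-aware aligner (-x pacbio selects the PacBio error model)
    ngmlr -t 4 -x pacbio -r ref.fa -q pacbio_reads.fastq -o aln.sam
    # 2) Sort and index for the SV caller
    samtools sort -o aln.sorted.bam aln.sam
    samtools index aln.sorted.bam
    # 3) Call SVs from split-read and alignment signatures
    sniffles -m aln.sorted.bam -v structural_variants.vcf
    echo "done" > sv_status.log
else
    echo "ngmlr/sniffles not available (try: conda install -c bioconda ngmlr sniffles samtools)" > sv_status.log
fi
cat sv_status.log
```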

I already tried this method with PacBio data and it worked pretty well for me (it wasn't plant data, but a highly diploid Drosophila species). But I had enormous PacBio coverage (100x+...), so I don't know if only 10x would be enough for that analysis!

Bonus: you can then try to visualize your data using Ribbon ( ).

Hope this guides you a little bit! Don't hesitate to ask further questions.

modified 20 months ago by genomax • written 20 months ago by Rox

Hello Roxanne, thanks for your answer and the link to the paper! Yes, it seems from the other answers and this paper ( - "We generated ~72× sequencing coverage of the Oropetium genome using 32 SMRT cells on the PacBio RS II platform") that more coverage would be needed... I also found that the reference genome of my plant was constructed using 100 SMRT cells, so that would be much more coverage than 10x.
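Those cell counts can be sanity-checked with back-of-envelope arithmetic: cells ≈ coverage × genome size / yield per cell. The numbers below are illustrative assumptions (the ~245 Mb Oropetium genome from the quoted paper; a 1.5 Gb target genome and 40x goal are placeholders, not this plant's actual figures):

```shell
# Back-calculate the per-cell yield implied by "72x of Oropetium (~245 Mb) from 32 SMRT cells",
# then estimate cells needed for an assumed 40x of an assumed 1.5 Gb plant genome.
awk 'BEGIN {
    genome = 245e6; cov = 72; cells = 32
    yield = cov * genome / cells                # implied yield per RS II cell (bp)
    printf "implied yield per cell: ~%.2f Gb\n", yield / 1e9
    target_genome = 1.5e9; target_cov = 40      # placeholder genome size and coverage goal
    printf "cells for %dx of a %.1f Gb genome: ~%.0f\n", \
        target_cov, target_genome / 1e9, target_cov * target_genome / yield
}'
# prints: implied yield per cell: ~0.55 Gb
#         cells for 40x of a 1.5 Gb genome: ~109
```

Per-cell yield varies a lot by platform and library, so real planning numbers should come from your sequencing provider.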

modified 20 months ago • written 20 months ago by borealis
tjduncan (Indianapolis, IN) wrote, 20 months ago:

Hi All,

What the others have said is generally accurate. Illumina short-read data is typically great for SNPs, while long-read data (PacBio or ONT) is used (though this is changing) primarily for de novo assemblies or structural variation. If you truly have a good reference genome, your idea of mapping Illumina data from your samples for SNVs and using ~10x PacBio coverage for SV detection may work just fine.

Once you have PacBio data (~10x coverage for SV detection), you can use their pbsv structural variant calling pipeline to call SVs. It is available on Bioconda (along with other key PacBio bioinformatics tools), or you can download the entire SMRT Link software suite from their website.
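A sketch of that pipeline, with placeholder file names and assuming the Bioconda-installed pbmm2/pbsv command-line interfaces:

```shell
# Sketch of PacBio's SV workflow: pbmm2 align -> pbsv discover -> pbsv call.
# Guarded so it skips cleanly where the tools or inputs are absent.
if command -v pbmm2 >/dev/null 2>&1 && command -v pbsv >/dev/null 2>&1 && [ -f ref.fa ]; then
    # 1) Align PacBio reads with pbmm2 (PacBio's minimap2 wrapper), sorted BAM out
    pbmm2 align ref.fa movie.subreads.bam aln.bam --sort
    # 2) Scan the alignments for SV signatures
    pbsv discover aln.bam sample.svsig.gz
    # 3) Call and genotype structural variants into a VCF
    pbsv call ref.fa sample.svsig.gz sample.var.vcf
    echo "done" > pbsv_status.log
else
    echo "pbmm2/pbsv not available (try: conda install -c bioconda pbmm2 pbsv)" > pbsv_status.log
fi
cat pbsv_status.log
```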

A really interesting and cutting-edge approach is what rrbutlerii mentioned. With the newest Sequel 6.0 chemistry release you can generate PacBio CCS (circular consensus sequencing) data. Using this method you size-select fragments of up to ~10 kb and then sequence. Because each unique molecule is sequenced multiple times, you get a high-quality consensus accuracy for each molecule. With a 10 kb fragment, more than half of your data will be at a single-molecule accuracy of >Q20 (1 in 1000 error). Because you then have a good chunk of data with an error profile more like Illumina data (>Q30, 1:10000 error), many of the existing short-read variant-calling tools (GATK) designed for Illumina data will work with PacBio CCS data. You can see this approach in the following poster.

Building on top of that, here is a preprint released yesterday that further breaks down this approach. It is pretty cool because you get all types of genomic variation (SVs, SNPs, haplotypes if relevant to your sample) and good de novo genomes from only one technology.

At the end of the day, I am sure you just want meaningful answers to your biological question. At this point in time there is no clear-cut way of identifying all types of genomic variation (SNVs are relatively easy, while larger structural events are the wild west), so it is best to jump in with the highest-quality data sets you can afford to generate and see what you find.

Hope some of this info helps!

written 20 months ago by tjduncan

minor comment: Q20 is 1 in 100; likewise, Q30 is 1 in 1000
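This correction follows directly from the Phred scale, where the error probability is p = 10^(-Q/10), i.e. one error in 10^(Q/10) bases:

```shell
# Phred score Q -> expected error rate: p = 10^(-Q/10)
for q in 20 30; do
    awk -v q="$q" 'BEGIN { printf "Q%d -> 1 error in %d bases\n", q, 10^(q/10) }'
done
# prints: Q20 -> 1 error in 100 bases
#         Q30 -> 1 error in 1000 bases
```

So 1:1000 corresponds to Q30, not Q20.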

written 20 months ago by jean.elbers

Hi tjduncan,

Thanks for the reply! - this paper is really interesting. Yes, I think I also need to consider how much coverage/sequencing I can afford.

written 20 months ago by borealis
rrbutleriii (US, Chicago) wrote, 20 months ago:

tl;dr: The question to ask is what type of variation you are expecting to encounter; a non-systematic approach to be sure, but there is no one-size-fits-all solution. In my personal experience (TruSeq/Nextera Illumina with several paired-end types, de novo and reference-based; PacBio RS II p4c2/p5c3/p6c4 de novo; Nanopore R7/R9.4 de novo), I would go with PacBio (>40x coverage, ideally >80x; 15-20 kb fragment size) combined with Illumina (10-15x coverage). PacBio alone will work for SVs as long as you are not specifically calling SNPs, since homopolymer frameshifts and read accuracy are potential confounders (not insurmountable, just something to be aware of).

Longer version: As Roxane said, there are numerous advantages to long reads for structural variation (also see this paper). That said, they are noisy reads, so 10x depth will not be enough for majority SNP basecalls, hence the typical hybrid-sequencing route. Comparatively, Illumina cannot call the same types of SVs without additional technologies (e.g. combining with mate-pair runs, as in human genomics), and even then I am personally skeptical of its ability to call things other than large-scale insertions, deletions, and inversions. Tandem insertions of multiple copies of a gene (as in the ultra-long nanopore read paper above) would fall under this category, and could be relevant to an allopolyploid with differential gene copy number in said pathway.

With PacBio, you can choose your fragment size and overcome particular obstacles. There was a recent PR blurb by PacBio for Sequel 6.0, basically that their circular consensus (CCS) reads could achieve >99% accuracy in high throughput, the drawback being that this is on 1 kb and 2 kb fragments; I imagine that is the result of five or more passes over the target sequence in each CCS read. With the RS II, even without CCS reads, the majority of basecalling errors we saw were stochastic, so higher-depth coverage (>80x) was sufficient for consensus even with homopolymers of 3-6 bases, something that typically stymied 454 sequencing. Larger homopolymers will still give you frameshift trouble, though I haven't tried the Sequel yet, so I don't know if they have gotten better at resolving those.

The last time I spoke with a PacBio rep, their suggestion was to run a 2 kb library for CCS and a 15-20 kb library for scaffolding (taken with a grain of 'please buy my product' salt). If cheap Illumina is available to you, though, it is still the better option for checking individual base accuracy and homopolymers, although Illumina itself is not immune to that issue. In our experience with bacterial genomes, PacBio 15 kb long reads at 80x were sufficient to no longer need Illumina for checking. Given eukaryotic genome complexity and the size of typical plant genomes, the hybrid approach is still your best bet; however, you should probably scale up the PacBio long-read coverage.

written 20 months ago by rrbutleriii

Hello rrbutleriii, thanks for the really informative answer - lots of things to look into. I will read up on the PacBio Sequel; I think our lab hasn't used it before. Another thing I was thinking of looking for is transposable elements in the biosynthetic pathway.

written 20 months ago by borealis
Powered by Biostar version 2.3.0