Question: Very different results using different tools for fusion transcript detection
Evan80 wrote, 19 months ago:

Hello, I'm currently analyzing RNA-seq data to detect fusion transcripts. For this I'm trying to use all the available tools and compare their performance and results. The thing is, my data aren't ideal for fusion detection: single-end reads of ~70 bp. I didn't sequence this myself, but I have to work with it.

So, I've tested 3 tools at the moment :

I'm looking to test other tools (TopHat-Fusion, GFusion, FusionMap), but the results are already very different. Here is a Venn diagram to give a better view of my results (entire dataset, ~50 samples):

The results for each sample are very different: few transcripts are shared between the tools, and not across the whole dataset. For me, that is not sufficient to interpret them correctly.

I've tested the 3 tools on a control sequence containing 17 known fusion transcripts (I got it from the FusionCatcher GitHub repository; it was paired-end, so I concatenated the two reads into one to simulate single-end data). The results from the 3 tools are quite similar:

Maybe the results are biased for this control sequence because it was created by hand; since it contains known fusion transcripts, I think the tools are more accurate on it.

I would like to get your point of view and advice on my situation: if you have experience with fusion detection, what would you do? At the moment I want to get as many results as possible and keep only the fusion transcripts found by two or more tools.

Thank you in advance.

modified 9 weeks ago by michael.j.apostolides10 • written 19 months ago by Evan80
guillaume.rbt830 wrote, 19 months ago:

I've been working on fusion detection, and I encountered the same reproducibility issues.

As you have already started to do, we eventually decided to run several fusion detection tools (deFuse, FusionCatcher and TopHat-Fusion in our case) and merge the results of the 3 tools.

We kept every detected fusion, annotated with the number of tools that detected it, to evaluate the quality of the calls; fusions detected by all 3 tools are the most reliable.
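That kind of annotation can be sketched in a few lines of Python. This is a minimal illustration, not our actual pipeline: the fusion calls are made-up placeholders, and here a fusion is identified only by its gene pair.

```python
from collections import defaultdict

# Hypothetical per-tool call sets; each fusion is identified by its gene pair.
calls = {
    "defuse":        {("BCR", "ABL1"), ("EML4", "ALK"), ("FOO", "BAR")},
    "fusioncatcher": {("BCR", "ABL1"), ("EML4", "ALK")},
    "tophat-fusion": {("BCR", "ABL1"), ("BAZ", "QUX")},
}

# Take the union of all calls and annotate each fusion with its supporting tools.
support = defaultdict(set)
for tool, fusions in calls.items():
    for fusion in fusions:
        support[fusion].add(tool)

# Rank: fusions called by more tools are considered more reliable.
for (gene5, gene3), tools in sorted(support.items(), key=lambda kv: -len(kv[1])):
    print(f"{gene5}--{gene3}: {len(tools)} tool(s) ({', '.join(sorted(tools))})")
```

In this toy input, BCR--ABL1 would come out on top with 3 supporting tools, while the tool-specific calls keep a support count of 1 instead of being discarded.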

If you want more details on our methodology, you can check our publication:

written 19 months ago by guillaume.rbt830

Thank you for your answer, I'm relieved to know it's not that simple to get good results on the first try. I took a look at your publication and it's very interesting; with all those common fusion transcripts, it was the best way to make sure the results are pertinent. In my case I will try to adjust parameters to detect more fusions, and if I get a sufficient number I will try to build something similar to your pipeline to interpret my data correctly. Your methodology will help me for sure :) !

written 19 months ago by Evan80

You're welcome! When you compare fusions detected by different tools, be careful to set a window when crossing fusion positions on the genome. (For the same fusion, the predicted positions can vary by several bases across tools.)
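A windowed comparison can be sketched like this (a toy example: the coordinates, tool names and the 10 bp window are invented for illustration, and a real pipeline would also have to handle strand and gene order):

```python
WINDOW = 10  # tolerance in bases; tune to how much the tools disagree

def same_breakpoint(a, b, window=WINDOW):
    """True if two (chrom, pos) breakpoints fall within `window` bases."""
    return a[0] == b[0] and abs(a[1] - b[1]) <= window

def same_fusion(f1, f2, window=WINDOW):
    """Match two fusions (each a pair of breakpoints) allowing positional slop."""
    return (same_breakpoint(f1[0], f2[0], window)
            and same_breakpoint(f1[1], f2[1], window))

# Example: two tools report the same event a few bases apart.
defuse_call = (("chr22", 23632600), ("chr9", 133729450))
fc_call     = (("chr22", 23632604), ("chr9", 133729447))
print(same_fusion(defuse_call, fc_call))  # True within a 10 bp window
```

With an exact-position comparison these two calls would be counted as different fusions, which is exactly the pitfall described above.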

written 19 months ago by guillaume.rbt830

Well, I will remember that. Thank you very much!

written 19 months ago by Evan80
enxxx23240 wrote, 19 months ago (European Union):

Different tools use different versions/sources of gene annotations and different genome releases/patches (e.g. hg18/hg19). Therefore, as one would expect, one gets different results. I would say that the differences in the Venn diagram can be explained mostly by differences in the gene annotations used (e.g. Ensembl vs. Gencode vs. ..., and also differences between versions of Gencode, etc.).

For example, a "known" fusion transcript might involve transcripts which exist in Gencode BUT are annotated incorrectly in Ensembl; a tool that uses Ensembl will therefore have a much harder job finding that fusion transcript.

Regarding taking into consideration only the fusions found simultaneously by a set of fusion finders, I do not think it is a good idea, BECAUSE some tools have their "niche" where they perform much better than the others. For example, TopHat-Fusion is able to find circular fusion genes while others are not, FusionCatcher is the only one able to detect IGH or DUX4 fusions, some fusion finders perform better on real samples than on simulated data, etc.

So in the end it is complicated, and it depends on what type of samples you have and what fusions you expect to find.

written 19 months ago by enxxx23240

Thank you for your answer, I like your point of view! I will pay attention to the genome annotation version; I hadn't taken seriously how much it could impact my results. I agree that each tool has substantial differences, and keeping only the fusions found simultaneously is risky in the sense that we could lose a crucial fusion that isn't detected by the other tools. In my case, as I'm in the primary step of my fusion research, it is still interesting to see the bias between tools. Because of the low number of fusions detected, I may have to interpret my data with only one tool, even if, for me, it's harder to justify a result on the basis of one tool when the fusion transcript isn't found by the others.

written 19 months ago by Evan80

One way to get more information about a fusion transcript found by only one tool is to google (or search PubMed or Google Scholar for) the fusion gene and see where that fusion has been reported previously and in what types of cancer. If that fusion has been published previously in articles/databases, then the confidence is higher that it is really there in your sample. That might help in building confidence in that fusion transcript. Of course, the ultimate validation would be to validate the fusion transcript in the wet lab using RT-PCR.

Here is an example of why the union of all fusions found by the different tools might be better than their intersection:

Panagopoulos et al., "The 'Grep' Command But Not FusionMap, FusionFinder or ChimeraScan Captures the CIC-DUX4 Fusion Gene from Whole Transcriptome Sequencing Data on a Small Round Cell Tumor with t(4;19)(q35;q13)", 2014
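The union-vs-intersection point can be made with a toy set comparison. These call sets are invented; CIC--DUX4 stands in for a fusion that only a single approach catches, as in the paper above:

```python
# Invented per-tool call sets; only one approach catches CIC--DUX4.
fusionmap    = {"BCR--ABL1", "EML4--ALK"}
fusionfinder = {"BCR--ABL1"}
grep_like    = {"BCR--ABL1", "CIC--DUX4"}

intersection = fusionmap & fusionfinder & grep_like
union = fusionmap | fusionfinder | grep_like

print(sorted(intersection))  # only the fusion every tool agrees on
print(sorted(union))         # keeps CIC--DUX4, which one approach found
```

The intersection silently drops the clinically relevant call; the union keeps it and lets you rank it by tool support afterwards.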

written 19 months ago by enxxx23240

Thank you for this. I read the article, very interesting; it will help me a lot to build confidence in my results.

written 19 months ago by Evan80
michael.j.apostolides10 wrote, 9 weeks ago:


We have recently written a manuscript benchmarking 7 tools and their combined results. Perhaps it will help you gather more information about your questions.


written 9 weeks ago by michael.j.apostolides10
Powered by Biostar version 2.3.0