Question: gffutils very slow at creating a database file. Any idea why?
Kirill wrote (3.7 years ago):

Hi guys, 

I've been exploring gffutils lately. All is well when I work with small GFF files (less than a MB in size). However, I'm trying to make a database file from the Mus_musculus.GRCm38.81 GFF/GTF files and it takes a very long time. The GTF file is around a GB in size. I left it running and it was still going after a day (about 25 hours), so I Ctrl-C'd it. Right now I'm trying to make a database file from the GFF file instead, and it's been running for a few hours so far. The GFF is around 300 MB in size.

Here is the command I'm running

`db = gffutils.create_db(myGFF, dbfn='Mus_musculus_GFF.db', force=True, keep_order=True, merge_strategy='merge', sort_attribute_values=True)`

Am I doing something wrong, or can it really take days to make a database file?

By the way, my PC specs are: i7-4600U CPU @ 2.10GHz with 16 GB of RAM.

Also, can I multi-thread this, and would it help?




Tags: gffutils, gff
modified 3.7 years ago by Ryan Dale • written 3.7 years ago by Kirill

It really shouldn't take longer than an hour. For example, database creation for the full human GENCODE GTF takes 20 mins.

Can you post a link to the file(s) you're using?

written 3.7 years ago by Ryan Dale

@Ryan, you can pull the exact files from here

I actually got those files from Ensembl.

P.S. Those links are temporary; I'll remove them at some point.

modified 3.7 years ago • written 3.7 years ago by Kirill

@Ryan, just a quick update. The database file generation has been running for about 5-6 hours now and hasn't finished yet. The current database file is ~150 MB. I think the database file ends up roughly double the size of the original GFF (that was the case with the previous GFFs I tried). If so, since the original GFF is 250 MB, I'd expect a db file of around 500 MB, so there's still a long way to go. I'll wait until tomorrow morning and might kill it then. Either way, if a GTF/GFF file is a GB in size, at this rate it'll take far too long to make the db file.


written 3.7 years ago by Kirill
Ryan Dale (Bethesda, MD) wrote (3.7 years ago):

Using the script below, called as

python Mus_musculus.GRCm38.81.gtf.gz GRCm38.81.db

it took 12 minutes to create a database out of the 1.5M features in the GTF file.

Turns out Ensembl's GTF files don't follow the GTF spec, so gffutils needs some hints to avoid spending a ton of time merging features due to name collisions. I've commented the script pretty heavily to illustrate my reasoning for the options I used.
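The script itself isn't reproduced in this thread; a minimal sketch of the approach, based only on the options discussed in the comments here (the `disable_infer_*` flags and a hint for unique IDs), might look like this. The exact option values are my assumption, not a copy of the original:

```python
# Sketch only -- the original script isn't shown here. Option names
# assume gffutils >= 0.8.4rc1 (older versions used infer_gene_extent=False
# instead of the two disable_infer_* flags).
import sys


def build_db(gtf, dbfn):
    import gffutils  # imported here so the sketch can be read without gffutils installed

    return gffutils.create_db(
        gtf,
        dbfn=dbfn,
        force=True,
        # Ensembl GTFs already contain gene and transcript lines, so the
        # expensive step of inferring them from exons can be skipped:
        disable_infer_genes=True,
        disable_infer_transcripts=True,
        # Tell gffutils which attributes identify genes/transcripts, and
        # give duplicate IDs (e.g. exons sharing a gene_id) unique suffixes
        # instead of spending time merging them:
        id_spec={'gene': 'gene_id', 'transcript': 'transcript_id'},
        merge_strategy='create_unique',
        keep_order=True,
        verbose=True,
    )


if __name__ == '__main__':
    build_db(sys.argv[1], sys.argv[2])
```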

written 3.7 years ago by Ryan Dale

@Ryan, thanks for the code and effort, I really do appreciate it, but firstly I really don't think the GTF format is the problem; I think Ensembl does follow the GTF file standards. Secondly, though, what are the standards? We sort of had this discussion previously here: A: Ensembl GTF format: isn't the tag "transcript_id" mandatory? I've been doing a bit of research, and here are the most common sites that people refer to as GFF/GTF specifications:







From what I understand, the only major difference between GFF 1/2/3 and GTF is the ninth-column formatting: the tag-value pairs. It's worth mentioning that feature types can also differ (though I don't think that's very important; I've actually never seen start_codon and stop_codon features). According to the Sanger site, tag names must be standard identifiers ([A-Za-z][A-Za-z0-9_]*). My understanding is that this allows any such text, at least for the GFF3 format, whereas according to GENCODE there is a predefined set of tags that you must have; that applies to the GTF format only.

I've noticed that Ensembl only released a GFF3 format in the latest release (release 81). I looked back a few releases, and they had only ever released GTF files until now.

The reason I'm writing all this out here is that GFF/GTF files are a pain to work with, but unavoidable. You did mention once before writing to GA4GH in order to standardise the format; I'm all for it.

I git-cloned your code snippet, but it doesn't work. Here is the command I'm running:

python ~/ref-files/Mammalian/Mus_musculus/Mus_musculus.GRCm38.80.gtf t.db

Here is the error I'm getting:

Traceback (most recent call last):
  File "", line 111, in <module>
TypeError: create_db() got an unexpected keyword argument 'disable_infer_genes'

I don't know if I'm supposed to do this, but if I comment out lines 100 and 101 (both cause the error), the script runs fine for 10-15 minutes (I actually see the t.db file grow in size very quickly to 1.4 GB), but then it sits at this point forever:

2015-07-31 08:58:05,082 - INFO - Committing changes: 1524000 features  
2015-07-31 08:58:10,510 - INFO - Populating features table and first-order relations: 1524099 features  
2015-07-31 08:58:10,510 - INFO - Creating relations(parent) index  
2015-07-31 08:58:41,345 - INFO - Creating relations(child) index  
2015-07-31 08:59:12,226 - INFO - Inferring gene and transcript extents, and writing to tempfile  
2015-07-31 09:01:49,873 - INFO - Importing inferred features into db  
0 of 433220 (0%)  

The ultimate goal for me is to have one reliable annotation format that I can easily parse to get what I need. Right now I need to parse a GFF so that I grab all features belonging to a gene and then do some quick calculations and merging of different exons. My script works OK, but I'd much rather use the gffutils tool; I really like the database idea central to gffutils.
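(For what it's worth, the per-gene traversal described above is roughly the following in gffutils; a sketch assuming a database has already been built, with a hypothetical database file name:)

```python
def exon_spans_by_gene(dbfn):
    # Collect the (start, end) span of every exon under each gene.
    # dbfn is a database previously built with gffutils.create_db().
    import gffutils  # imported here so the sketch parses without gffutils installed

    db = gffutils.FeatureDB(dbfn)
    spans = {}
    for gene in db.features_of_type('gene'):
        spans[gene.id] = [
            (exon.start, exon.end)
            for exon in db.children(gene, featuretype='exon', order_by='start')
        ]
    return spans
```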


Here are a couple of somewhat related links for others' reference on the GFF/GTF matter:

1. Yet another randomly formatted GFF file: A: RNASeqQC - Low number of genes detected on run

2. A good discussion about large GFF files and how to get quick feature access: Light-Weight Script-Based Random Access To Large Gff3 Files Using Biopython

modified 3.7 years ago • written 3.7 years ago by Kirill

The `disable_infer_genes` and `disable_infer_transcripts` arguments are new features in the latest version (0.8.4rc1) to accommodate GTF files that already had only gene or only transcript features included, but not both. See for details.

Updating should fix the issue. If you don't want to update your version of gffutils, replace `disable_infer_genes=True, disable_infer_transcripts=True` with `infer_gene_extent=False` in my example to get the equivalent behavior for your file.
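A version-tolerant way to express that substitution (a sketch, not from the original post; it simply tries the newer keyword names and falls back to the older one):

```python
def create_db_compat(data, dbfn, **kwargs):
    # Skip gene/transcript inference on any gffutils version: newer
    # releases (>= 0.8.4rc1) take two disable_infer_* flags, while older
    # ones take a single infer_gene_extent flag.
    import gffutils  # imported here so the sketch parses without gffutils installed

    try:
        return gffutils.create_db(data, dbfn=dbfn,
                                  disable_infer_genes=True,
                                  disable_infer_transcripts=True,
                                  **kwargs)
    except TypeError:
        # Older gffutils: unknown keywords raise TypeError, so retry
        # with the legacy flag.
        return gffutils.create_db(data, dbfn=dbfn,
                                  infer_gene_extent=False,
                                  **kwargs)
```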

As for the larger point about GTF and GFF:

It's true that the syntax of GTF and GFF only differ by the 9th field. But semantically they can be quite different. The biggest difference is that GFF doesn't guarantee that gene ID is specified in each exon (requiring traversing the exon-transcript-gene DAG to figure it out); GTF doesn't guarantee that gene and transcript features are included in the file. Actual contents in actual GTF or GFF files vary wildly, and supporting all of those various flavors is what gffutils is for.
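To make the syntactic difference concrete, here is a toy parser for each ninth-column style (the attribute strings are illustrative, not taken from a real annotation):

```python
import re


def parse_gtf_attrs(col9):
    # GTF ninth column: semicolon-separated 'key "value"' pairs
    return dict(re.findall(r'(\w+) "([^"]*)"', col9))


def parse_gff3_attrs(col9):
    # GFF3 ninth column: semicolon-separated key=value pairs
    return dict(field.split('=', 1) for field in col9.split(';') if field)


# A GTF exon carries its gene_id directly on the line...
gtf_exon = parse_gtf_attrs('gene_id "ENSMUSG00000001"; transcript_id "ENSMUST00000001";')

# ...while a GFF3 exon only names its Parent transcript, so recovering the
# gene means walking the exon -> transcript -> gene relations:
gff3_exon = parse_gff3_attrs('ID=exon-1;Parent=ENSMUST00000001')
```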

The specs you link to are actually a mix of GTF and GFF. At the time gffutils was written, people had converged on the second of your links as the GTF spec, which does not include genes and transcripts. One of gffutils' original jobs was to infer the full extent of transcripts and genes from such files, which only provided exons.

Recently, providers like Ensembl have started to include gene and transcript features. That's convenient, but since it's not the canonical GTF format, gffutils' assumptions are wrong.

The extent to which Ensembl becomes the de facto GTF format is the extent to which gffutils should change the assumptions about GTF format. I could be convinced to change the assumptions.

However, my goal for gffutils is to maintain the flexibility to handle any sort of semantics you can cram into a GTF or GFF file.  For example, when creating a database we need unique primary keys. But the Ensembl GTF files don't provide unique IDs for exons. gffutils was trying to use the "gene_id" as the exon ID, but since so many exons share the same gene ID it was spending a lot of time reconciling those features.  The fix in this example script was to provide hints to create a unique exon ID, drastically speeding things up.
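For example, such a hint can be given through `create_db`'s `id_spec` argument, which also accepts a callable that builds the primary key per feature. This is a sketch; the coordinate-based exon key is my own illustration, not necessarily what the linked script used:

```python
def exon_aware_id(feature):
    # Give exons a unique primary key built from their coordinates, since
    # Ensembl GTF exons all share their gene's gene_id.
    if feature.featuretype == 'exon':
        return 'exon-{0}:{1}-{2}'.format(feature.seqid, feature.start, feature.end)
    # Returning None lets gffutils fall back to its default ID logic
    # for other feature types.


# Usage (hypothetical file names):
# db = gffutils.create_db('mm.gtf', dbfn='mm.db', id_spec=exon_aware_id,
#                         disable_infer_genes=True,
#                         disable_infer_transcripts=True)
```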

If you'd like more guidance or have suggestions, I'd encourage you to submit an issue in the GitHub repo; it might be better suited for discussion than the comments of a Biostars post!

modified 3.7 years ago • written 3.7 years ago by Ryan Dale

@Ryan, thanks, the latest version works much better. I'm sure you know, but you don't get the latest version through a plain `pip install`.

written 3.7 years ago by Kirill

Yep, the latest version has an "rc1" at the end of the version string, which won't be automatically installed by pip. But it's been like that for a while; thanks for reminding me to upload a new final release.

I've just uploaded v0.8.5 to PyPI (no changes from 0.8.4rc1); you should be able to `pip install --upgrade` now.

written 3.7 years ago by Ryan Dale
Matt Shirley (Cambridge, MA) wrote (3.7 years ago):

For what it's worth, several people have reported a similar experience. Maybe you want to add to the issue?

written 3.7 years ago by Matt Shirley

Great, I'll consider this.


written 3.7 years ago by Kirill