Question: GATK4 local computing workflow
moxu wrote, 20 months ago:

We have a large number of samples for WGS variant calling, and we are now assessing different pipeline options. One of the pipelines we are investigating is the GATK4 pipeline. Broad provides a workflow defined in WDL + JSON, but it relies on cloud computing (the reference files live in Google Cloud Storage). This does not work for us because, for privacy and security reasons, we are not allowed to use cloud services in any form. Besides, the Google-Cloud-based pipeline is hard to run -- I always get errors related to Google Cloud Storage.

That being said, I am wondering whether anyone has run the GATK4 pipeline using only local files with the Broad-recommended workflow, and would be willing to share it (as .wdl, .cwl, or something similar).

Your help would be highly appreciated!

(P.S. This is a not-for-profit project that would greatly benefit the research community and the general public, so your contribution would go toward all of humankind, not just a small group of people.)

vdauwera (Cambridge, MA) wrote, 20 months ago:

Assuming you're talking about the germline short variant discovery pipeline, we have several different versions; aside from the cloud-optimized pipeline we also have one that is optimized for local execution. They are summarized in our documentation, where you can also find the WDL for the locally optimized version. Note also that the pipelines listed as "universal" can be run anywhere; you just need to download the files and update the paths accordingly.
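As an illustration of the "download the files and update the paths" step, here is a minimal sketch (the bundle directory, file names, and input keys are placeholders, not Broad's actual names) that rewrites the gs:// URIs in a cloud inputs JSON into local paths:

```python
import json
from pathlib import Path

def _localize(value, local_root):
    """Map one gs:// URI to a path under local_root; recurse into lists."""
    if isinstance(value, str) and value.startswith("gs://"):
        # gs://bucket/dir/file -> <local_root>/dir/file (bucket name dropped)
        bucket_and_object = value[len("gs://"):]
        _, _, obj = bucket_and_object.partition("/")
        return str(Path(local_root) / obj)
    if isinstance(value, list):
        return [_localize(v, local_root) for v in value]
    return value

def localize_inputs(inputs, local_root="/data/gatk-bundle"):
    """Rewrite every gs:// URI in a Cromwell inputs dict to a local path."""
    return {key: _localize(value, local_root) for key, value in inputs.items()}

# Usage sketch (file names are hypothetical):
#   with open("germline.cloud.inputs.json") as fh:
#       cloud = json.load(fh)
#   with open("germline.local.inputs.json", "w") as fh:
#       json.dump(localize_inputs(cloud), fh, indent=2)
```

You would then point your local Cromwell run at the rewritten inputs file, after copying the referenced bundle files to the matching local directory.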

We'd be happy to help you further over on the GATK support forum.


You are the best!

I did post my questions on the GATK forum, but haven't received any answers recently.

A quick question: what do 2T, 56T, 20k, HDD, on-prem, throughput, and FPGA refer to?

A 2-cent suggestion: it would be nice to have an option to automatically download all auxiliary datasets into the directories designated in the WDL/JSON inputs file, or alternatively to pre-bundle everything.
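One way the auto-download idea could work is sketched below (the FTP URL, input keys, and destination directory are illustrative only, not actual GATK bundle locations): build a download plan from the remote URIs in an inputs dict, fetch anything missing, and return a localized inputs dict.

```python
import urllib.request
from pathlib import Path

REMOTE_SCHEMES = ("http://", "https://", "ftp://")

def plan_downloads(inputs, dest_root="/data/gatk-bundle"):
    """For each remote URI in a Cromwell inputs dict, choose a destination
    that mirrors the remote path under dest_root. Returns (plan, localized):
    plan is a list of (url, local_path) pairs; localized is the inputs dict
    rewritten to point at the local copies."""
    plan, localized = [], {}
    for key, value in inputs.items():
        if isinstance(value, str) and value.startswith(REMOTE_SCHEMES):
            # strip "scheme://host/" and keep the remote directory layout
            rel = value.split("://", 1)[1].split("/", 1)[1]
            dest = str(Path(dest_root) / rel)
            plan.append((value, dest))
            localized[key] = dest
        else:
            localized[key] = value
    return plan, localized

def fetch(plan):
    """Download every planned file, skipping ones already on disk."""
    for url, dest in plan:
        path = Path(dest)
        path.parent.mkdir(parents=True, exist_ok=True)
        if not path.exists():
            urllib.request.urlretrieve(url, path)
```

With something like this, `plan_downloads` can first be used as a dry run to review what would be fetched and where, before `fetch` touches the network.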

Thanks so much!



Powered by Biostar version 2.3.0