Tutorial: Fast download of FASTQ files from the European Nucleotide Archive (ENA)
ATpoint wrote, 19 months ago (Germany):

Published sequencing data are commonly stored at NCBI, and questions about how to download these data efficiently are posted frequently on Biostars. While NCBI relies on its SRA format to share the data, it is possible to download them directly in FASTQ format from the European Nucleotide Archive (ENA), which mirrors NCBI.

In this tutorial, we will download an entire dataset of ChIP-seq and ATAC-seq data as an example, requiring minimal preprocessing work. We will use the Aspera client for download rates from several tens of Mb/s up to a few hundred Mb/s (depending on the connection, I/O capacity and distance to the download location). The example code should work on Linux and Mac.


-- last modified 16.1.20: explicitly recommend sra-explorer to browse data.


Step-1: Get the Aspera client

Go to https://downloads.asperasoft.com/en/downloads/8?list and get the most recent installer for your system. For Linux, it is a tarball (use tar zxvf to unpack) containing an installer shell script; for Mac, a standard disk image.

After installation, these executables/files will be in their default locations:

Linux:

$HOME/.aspera/connect/bin/ascp --- the executable

$HOME/.aspera/connect/etc/asperaweb_id_dsa.openssh --- openssh file that we'll need later

Mac:

$HOME/Applications/Aspera\ Connect.app/Contents/Resources/ascp --- the executable

$HOME/Applications/Aspera\ Connect.app/Contents/Resources/asperaweb_id_dsa.openssh --- openssh file that we'll need later

In any case, make sure you add the folder containing the ascp executable to your PATH. If PATH is a new word to you, please google it ;-)
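A minimal sketch for Linux, assuming the default install location listed above (adapt the path for Mac or a custom install):

```shell
## add the Aspera bin folder to PATH for the current session (default Linux location)
export PATH="$HOME/.aspera/connect/bin:$PATH"

## to make this permanent, append the same line to your shell startup file, e.g.:
## echo 'export PATH="$HOME/.aspera/connect/bin:$PATH"' >> ~/.bashrc
```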


Step-2: Choose your dataset

We have multiple options here.

Use sra-explorer: The sra-explorer is a convenient GUI to browse NCBI for datasets. In our case, we enter the accession number PRJNA288801 into the search field, select the desired samples, add them to the cart and copy the Aspera download links it produces. I will not add further details here as the tool is self-explanatory. It includes many (download) options, check it out. Thanks to Phil Ewels for this awesome tool!

Alternatively, query ENA / NCBI manually to find datasets: Once you know which data you want to download, check whether they are mirrored on the ENA, which is true for most unrestricted data. For this tutorial, we will download the entire dataset from the ChIPmentation paper of 2015. When you check the paper for the NCBI accession, you'll find GSE70482. Following this link, you find the BioProject accession number PRJNA288801. So you go to the ENA, enter PRJNA288801 in the search field and reach a summary page with all data available for download. Scrolling down a bit, you see a table with accession numbers and all kinds of metadata. As we typically do not need most of these metadata, we use the field Select columns to keep only what is essential for the download: Study Accession, FASTQ files (FTP) and Experiment title. After selecting these, and unselecting everything else, press TEXT and save the file as accessions.txt in your project folder.
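If you prefer the command line over clicking through the website, the same TSV report can be fetched from the ENA Portal API. Note that the endpoint and field names below are my assumptions based on the ENA Portal API documentation and should be double-checked:

```shell
## build the ENA Portal API URL for a run report of a given project accession
## (endpoint and field names assumed, verify against current ENA docs)
ena_report_url () {
  echo "https://www.ebi.ac.uk/ena/portal/api/filereport?accession=${1}&result=read_run&fields=study_accession,fastq_ftp,experiment_title&format=tsv"
}

## then, e.g.:
## curl -s "$(ena_report_url PRJNA288801)" > accessions.txt
```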


[Screenshot: the ENA "Select columns" dialog]

[Screenshot: the resulting accessions.txt]

Step-3: Download the data

As you'll see in accessions.txt, the download paths point to the ENA FTP server, which is rather slow. We want to download with the Aspera client instead (up to 200 Mb/s at my workplace). Therefore, we awk around a bit to change the download paths to the era-fasp server. In the case of paired-end data, the paths to the two mate FASTQ files in accessions.txt are separated by a semicolon, which we take into account. The output of this snippet is download.txt.

Linux:

awk 'BEGIN{FS=OFS="\t"} { gsub("ftp.sra.ebi.ac.uk", "era-fasp@fasp.sra.ebi.ac.uk:"); print }' accessions.txt | cut -f3 | awk -F ";" '{print $1 "\n" $2}' | awk NF | awk 'NR > 1 {print "ascp -QT -l 300m -P33001 -i $HOME/.aspera/connect/etc/asperaweb_id_dsa.openssh " $1 " ."}' > download.txt

Mac:

awk 'BEGIN{FS=OFS="\t"} { gsub("ftp.sra.ebi.ac.uk", "era-fasp@fasp.sra.ebi.ac.uk:"); print }' accessions.txt | cut -f3 | awk -F ";" '{print $1 "\n" $2}' | awk NF | awk 'NR > 1 {print "ascp -QT -l 300m -P33001 -i $HOME/Applications/Aspera\\ Connect.app/Contents/Resources/asperaweb_id_dsa.openssh " $1 " ."}' > download.txt

The output is a simple list of download commands using ascp.

[Screenshot: example lines of download.txt]
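To see what the snippet does, here it is applied to a minimal mock accessions.txt (the FASTQ paths are made up for illustration; column 3 holds the FASTQ FTP field as above):

```shell
## a minimal mock accessions.txt (tab-separated; the paths below are made up)
printf 'study_accession\texperiment_title\tfastq_ftp\n' > accessions_demo.txt
printf 'PRJNA288801\tdemo ChIP-seq\tftp.sra.ebi.ac.uk/vol1/fastq/SRR000/SRR000001/SRR000001_1.fastq.gz;ftp.sra.ebi.ac.uk/vol1/fastq/SRR000/SRR000001/SRR000001_2.fastq.gz\n' >> accessions_demo.txt

## same pipeline as above: swap the FTP host for the era-fasp server, keep
## column 3, split paired-end paths at the semicolon, drop blanks and the header
awk 'BEGIN{FS=OFS="\t"} { gsub("ftp.sra.ebi.ac.uk", "era-fasp@fasp.sra.ebi.ac.uk:"); print }' accessions_demo.txt \
  | cut -f3 \
  | awk -F ";" '{print $1 "\n" $2}' \
  | awk NF \
  | awk 'NR > 1 {print "ascp -QT -l 300m -P33001 -i $HOME/.aspera/connect/etc/asperaweb_id_dsa.openssh " $1 " ."}' > download_demo.txt

cat download_demo.txt
## each output line is one ascp command per FASTQ file
```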

That's it. Now, we only have to run the download commands.

Edit (23.07.18): The download paths always look like era-fasp@fasp.sra.ebi.ac.uk:/vol1(...). I point this out because of a recent post (328182) where OP accidentally forgot the ":" after the .ac.uk and used fasp@ instead of era-fasp@.

Let's download:

## Either with a simple loop (eval, so that $HOME inside the commands gets expanded):
while read -r CMD; do
  eval "$CMD"
done < download.txt

## or with GNU parallel to run several downloads at once:
cat download.txt | parallel "{}"

Once the download is complete, one can use accessions.txt to rename the files, e.g. with information from the Experiment title field (column 2) or other metadata retrieved from ENA.
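As a hypothetical sketch of such a renaming (assuming the column selection from Step-2: column 2 = experiment title, column 3 = FASTQ paths; the mock file and paths below are made up):

```shell
## mock accessions.txt for illustration (tab-separated; paths made up)
printf 'study_accession\texperiment_title\tfastq_ftp\n' > accessions_demo.txt
printf 'PRJNA288801\tdemo ChIP seq\tftp.sra.ebi.ac.uk/vol1/SRR000001_1.fastq.gz;ftp.sra.ebi.ac.uk/vol1/SRR000001_2.fastq.gz\n' >> accessions_demo.txt

## generate "mv <fastq> <title>_<fastq>" commands; review rename.sh before running it
awk 'BEGIN{FS="\t"} NR > 1 {
  title = $2
  gsub(/[^A-Za-z0-9._-]/, "_", title)       # sanitize the title for use in filenames
  n = split($3, paths, ";")
  for (i = 1; i <= n; i++) {
    if (paths[i] == "") continue
    m = split(paths[i], parts, "/")         # take the basename of each download path
    print "mv", parts[m], title"_"parts[m]
  }
}' accessions_demo.txt > rename.sh

cat rename.sh
```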


Edit 28.2.19: For completeness, I also add a suggestion on how to get the same data from NCBI using prefetch and parallel-fastq-dump, a wrapper around fastq-dump from Renan Valieris for parallelized FASTQ conversion of SRA files. Say one has a file IDs.txt which contains the SRA run IDs like:

SRRXXXXXX1
SRRXXXXXX2
(...)
SRRXXXXXXn

one can use this simple function to download SRA files via prefetch (please see the NCBI documentation on how to use Aspera with prefetch to avoid slow FTP downloads), followed by FASTQ conversion with parallel-fastq-dump.

function LoadDump {
  prefetch -O ./ -X 999999999 "$1"

  ## newer prefetch versions place the file in a subdirectory named after the run:
  if [[ -e "${1}/${1}.sra" ]]; then mv "${1}/${1}.sra" ./ && rmdir "${1}"; fi

  if [[ -e "${1}.sra" ]]; then
    parallel-fastq-dump -s "${1}.sra" -t 8 -O ./ --tmpdir ./ --split-3 --gzip && rm "${1}.sra"
  else
    echo '[ERROR]' "$1" 'apparently not successfully loaded' && exit 1
  fi
}; export -f LoadDump

cat IDs.txt | parallel -j 2 "LoadDump {}"

This would use 8 threads for fastq conversion and run two SRA files at a time via GNU parallel, hence requiring 16 threads. As always, scale up or down based on the available resources and potential I/O bottlenecks on your system.

modified 5 weeks ago • written 19 months ago by ATpoint

I was recently downloading data from the CCLE experiment and it was taking ages (also crashing more than once) with sra-toolkit and fastq-dump. I used your approach, slightly modifying it, and it worked wonders! Thanks a lot!

My modification below:

esearch -db sra -query PRJNA523380 | efetch --format runinfo |  grep ${tissue_of_interest} | grep ${experiment} | cut -f1 -d',' | xargs | sed 's/ / OR /g' | xclip -selection c

where ${tissue_of_interest} and ${experiment} were variables I set up for my specific needs (i.e. CERVIX, RNA-seq). I copied the output into sra-explorer. The project has too many files to search sra-explorer directly with its ID.

modified 7 months ago • written 7 months ago by kzkedzierska

Glad to hear it is used productively :)

written 7 months ago by ATpoint

Good work! I once had the pleasure of using fastq-dump on whole-genome data. I was cursing in multiple languages! :D

modified 19 months ago • written 19 months ago by Eric Lim

Good work dude. Will use this next time I need to get data from ENA.

written 19 months ago by Kevin Blighe

Thank you Sir, glad to help!

written 19 months ago by ATpoint

Hi, I'm installing your LoadDump script. I find that it will prefetch the *.sra files, but parallel-fastq-dump is not processing them. Where is the --split-3 portion of the parallel-fastq-dump command on line 7 coming from? Thanks!

written 10 weeks ago by madraghrua16900

Please be more specific. What exactly is not working? --split-3 is an option of fastq-dump itself that splits paired-end reads into R1 and R2 (so two files if the SRA contains PE information). Use fastq-dump --help for more info.

written 10 weeks ago by ATpoint

Thanks for your reply. I've copied the LoadDump script on your page into a file, made it executable and am running it with a list of SRA IDs as you describe in your text. I have installed the sra toolkit, parallel, prefetch and parallel-fastq-dump and added their locations to my $PATH environment. While running the script, I find that it pulls down the .sra files and puts them into their own directories. However, the script generates the following messages:

2019-12-11T22:29:28 prefetch.2.10.0: 1) Downloading 'DRR168516'...
2019-12-11T22:29:28 prefetch.2.10.0:  Downloading via https...
2019-12-11T22:29:33 prefetch.2.10.0:  https download succeed
2019-12-11T22:29:33 prefetch.2.10.0: 1) 'DRR168516' was downloaded successfully
[ERROR] DRR168516 apparently not successfully loaded

I am wondering where the error message is coming from and I was trying to understand if there was an executable called split-3 that was missing from my environment. Now I understand that this is coming from fastq-dump and I can see that this is in my $PATH.

Thanks for your help.

written 10 weeks ago by madraghrua16900

Please show the full command you used. It works fine on my machine. It seems that the same accession is being loaded multiple times.

modified 10 weeks ago • written 10 weeks ago by ATpoint
./loadDump.sh

I simply name my ID file IDs.txt and put it into the same directory as the script.

I just figured it out - it was a $PATH problem into the sra bin directory after all. All appears to be working fine now. Thank you very much for your excellent script!

written 10 weeks ago by madraghrua16900


This is asking for my password, yet I've never made a password. How can I get around this? EDIT: If you use this https://anaconda.org/rpetit3/aspera-connect it requires a password for some reason.

modified 4 weeks ago • written 4 weeks ago by O.rka

I do not know. You can install it from the source that is linked which does not require root permissions. No need for conda here.

written 4 weeks ago by ATpoint

I finally got this to work. Do you know why I could be getting variable download speeds? The first time I ran this, I was getting speeds of ~230 MB/s and now I'm getting 1.7 MB/s at best. Is there an option I can choose to increase the speed? What do you think could be affecting this?

written 4 weeks ago by O.rka
benjwoodcroft wrote, 6 months ago:

Hi,

This is a very helpful post - thanks a lot for writing it. I wrote a simple Python script based on this which automates things so you only need to provide a run identifier as an argument and it works out the rest - hopefully someone will find it useful.

https://github.com/wwood/ena-fast-download

written 6 months ago by benjwoodcroft

Cool, very useful if you want to batch-query many accession numbers. You might want to add options like --linux and --osx to output the correct path to the default Aspera openssh file, as the default paths differ between the two operating systems. If you also want to search NCBI, check out sra-explorer (find SRA and FastQ download URLs in a couple of clicks) from Phil Ewels, which has an option to print ENA links directly.

written 6 months ago by ATpoint

Good idea - I added an --ssh_key option along those lines. I've not had a chance to test it on OSX though - if that is straightforward for you would you mind giving it a crack please?

written 6 months ago by benjwoodcroft

Works like a charm! Just a small thing: maybe add checks along the lines of exists(--ssh_key) and isinPATH(ascp) (no clue what the commands in Python are :-D) that stop the run right away if the key does not exist. Everything else works really nicely!

modified 6 months ago • written 6 months ago by ATpoint

Amazing! Thank you so much Ben - your script "just works"! Now instead of variable download speeds of a few Mb/sec at best from wget using the direct ftp links, I'm getting steady, tens of Mb/sec downloads. Much obliged!

written 4 weeks ago by jcmcnch