The bulk of the fastq.gz files are stored on the server, but I need to download them to my work folder/laptop. However, I only have an 800GB external hard disk and a laptop with a 500GB HDD, and I have never handled such big data before. According to my colleagues, the previous user used her laptop to run this crazy 2.6TB of data, which sounds impossible to me... Does anyone have any suggestions?
What you could do, and I have not thoroughly tested this at all (I just had the idea), is to process the data on your machine (if it has the memory and CPU power for it) while reading the data from scp via stdin:
sshpass -p 'password' scp firstname.lastname@example.org:/scratch/tmp/input.fastq.gz /dev/stdout | bwa mem -p idx /dev/stdin | (...)
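If your reads are in separate R1/R2 files rather than interleaved, the same idea should work with process substitution. This is only a sketch I haven't run; the R1/R2 paths are placeholders, and it assumes key-based ssh so you can drop sshpass:

# stream both mates from the server over ssh (bash process substitution)
bwa mem idx \
    <(ssh firstname.lastname@example.org cat /scratch/tmp/input_R1.fastq.gz) \
    <(ssh firstname.lastname@example.org cat /scratch/tmp/input_R2.fastq.gz) | (...)

bwa reads gzipped input transparently, so there is no need to decompress on either side.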
I gave this a quick test with an interleaved paired-end fastq file on our HPC server, processing it on my local machine, and it worked (and by "worked" I mean it gave the expected alignments for the ~30 seconds I let it run). I don't know whether scp is stable enough to run for several hours, but maybe this is an option for you. Using the highest compression levels before writing to disk should also help, but it will of course increase the processing time.
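For the compression part, one option (again just a sketch; it assumes you have samtools installed locally, and the output name and thread count are placeholders) is to never write SAM at all and let samtools sort compress the BAM at the highest level straight off the pipe:

# stream from the server, align, and write only a maximally compressed, sorted BAM locally
sshpass -p 'password' scp firstname.lastname@example.org:/scratch/tmp/input.fastq.gz /dev/stdout \
    | bwa mem -p idx /dev/stdin \
    | samtools sort -l 9 -@ 4 -o aln.sorted.bam -    # -l 9 = highest BAM compression level

That way the only thing that ever touches your 500GB/800GB disks is the compressed BAM, not the raw fastq or an intermediate SAM.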