I've read a few answers about transferring large amounts of data quickly and reliably across the internet, but I've not been able to find a tool which combines all of these features. Does anyone know whether a combination of options in a commonly used tool achieves this?
- Multiple TCP streams or UDP for very fast transfer of bulk data
- Similarly sensible handling of disk writes, threads, poll/select, and copying, so data gets onto disk quickly
- Checking of ownership and permissions as well as checksumming at both ends
- Handles multiple small files as efficiently as very large files
- Can run "rsync-style", only transferring diffs when appropriate
- Has good integrity and authenticity guarantees (secrecy not required)
- Good-quality Linux server and client with robust error detection and reporting
I've had a look at rsync, fdt, bbcp, unison, aspera, udt/udr, etc., but all seem to offer only a subset of these features.
Obviously I could achieve this with a combination of existing tools and a load of glue scripts, but before going to that effort: if it's just a magic combination of parameters, do let me know!
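For what it's worth, the glue-script approach can be sketched fairly compactly. The following is a minimal illustration, not a finished tool: it moves a file in several parallel chunks (standing in for multiple TCP streams; a real version would push each chunk over its own network connection) and checksums both ends afterwards. The chunk size, worker count, and the local copy standing in for the network hop are all assumptions for illustration.

```python
# Sketch only: parallel chunked "transfer" with end-to-end checksumming.
# In a real tool, copy_chunk would be an scp/HTTP-range/UDT fetch per chunk.
import hashlib
import os
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4 * 1024 * 1024  # 4 MiB per "stream" (illustrative)

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def copy_chunk(src, dst, offset, length):
    # Each worker reads and writes only its own slice of the file.
    with open(src, "rb") as fin, open(dst, "r+b") as fout:
        fin.seek(offset)
        fout.seek(offset)
        fout.write(fin.read(length))

def parallel_copy(src, dst, workers=4):
    size = os.path.getsize(src)
    # Pre-size the destination so every worker can seek into it.
    with open(dst, "wb") as f:
        f.truncate(size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [
            pool.submit(copy_chunk, src, dst, off, min(CHUNK, size - off))
            for off in range(0, size, CHUNK)
        ]
        for fut in futures:
            fut.result()  # surface worker errors instead of losing them
    # Checksum at both ends, as the question asks for.
    src_sum, dst_sum = sha256(src), sha256(dst)
    if src_sum != dst_sum:
        raise IOError("checksum mismatch after transfer")
    return dst_sum
```

Even this toy version shows why the glue gets bulky fast: error propagation, resumption, and the small-files case all still need handling.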
All - Since multiple people commented that it was hard to find the Aspera Linux client and Unix-appropriate docs, we've posted a self-extracting installer for our 'ascp' Linux command-line binary, with a man page.
Extract the contents and run "man ascp" for full details of usage.
We will find a more permanent home on our web site (www.asperasoft.com) soon. Hopefully this is helpful; if you have any questions or feedback, feel free to write to us at email@example.com.
We have also added a permanent home for the ascp installer on our web site:
Current Release : http://download.asperasoft.com/download/sw/ascp-client/3.5.4/ascp-install-126.96.36.199989-linux-64.sh
General Download Page: http://downloads.asperasoft.com/en/downloads/50
The man page ("man ascp") gives full usage details.
If any other platforms (OS X, Win, Solaris, etc.) are needed please let us know. We support them but don't get as many requests for the standalone CLI.
That was my experience, too, Matt. Sadly, the main reason we're moving off the existing solution is its terrible error reporting: we discover far too late, around release time, that large jobs have failed in subtle ways, causing delays. If people don't need that, then I guess Aspera is probably the way to go (if you can afford it).
BitTorrent isn't something I'd considered. I'll check that out. Assuming you can create a partitioned network away from the Wild West, I don't think the main network's nefarious uses should pose any issues. Might have to warn our network guys, though, or it would scare them to death :-).
I just wrote a post benchmarking BitTorrent vs scp. I'm not going to benchmark against Aspera, since I don't have a server license, but in terms of throughput I'd expect: aspera, unison, udt > BitTorrent > scp, netcat, http, ftp. The main benefits of using BitTorrent are lightweight infrastructure and good, stable tools, as well as scalable distribution if you are sending data to more than one collaborator.
Another benefit of BitTorrent is that the data sources can be distributed across multiple locations. In realistic scenarios the download speed is often capped at the source, beyond one's reach; simultaneous downloads from multiple sources are often substantially faster.
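That multi-source safety comes from the metainfo (.torrent) format itself: the metainfo carries a SHA-1 hash of every piece, so a client can verify each chunk on arrival regardless of which peer supplied it, which also covers the question's checksumming requirement. A rough sketch of what a metainfo file contains (the tracker URL and piece size below are placeholder assumptions, and this is a minimal bencoder, not a full implementation):

```python
# Sketch: build a single-file .torrent metainfo by hand, to show where the
# per-piece SHA-1 hashes live. Tracker URL and piece size are placeholders.
import hashlib

def bencode(obj):
    # Minimal bencoder for the types a metainfo file needs.
    if isinstance(obj, int):
        return b"i%de" % obj
    if isinstance(obj, bytes):
        return b"%d:%s" % (len(obj), obj)
    if isinstance(obj, str):
        return bencode(obj.encode())
    if isinstance(obj, list):
        return b"l" + b"".join(bencode(x) for x in obj) + b"e"
    if isinstance(obj, dict):
        # Bencoded dict keys must be sorted byte strings.
        items = sorted(
            (k.encode() if isinstance(k, str) else k, v)
            for k, v in obj.items()
        )
        return b"d" + b"".join(bencode(k) + bencode(v) for k, v in items) + b"e"
    raise TypeError(type(obj))

def make_metainfo(data, name, piece_len=256 * 1024):
    # One 20-byte SHA-1 digest per piece, concatenated.
    pieces = b"".join(
        hashlib.sha1(data[i:i + piece_len]).digest()
        for i in range(0, len(data), piece_len)
    )
    return bencode({
        "announce": "http://tracker.example/announce",  # placeholder URL
        "info": {
            "name": name,
            "length": len(data),
            "piece length": piece_len,
            "pieces": pieces,
        },
    })
```

A downloading client recomputes each piece's SHA-1 as it arrives and discards any piece that doesn't match, so corrupt or malicious peers can't poison the transfer.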
I would include bbcp in there as well. We have had pretty good performance from it, and it is almost a drop-in replacement for scp.
That's very interesting. Does bbcp have to be installed on both the source and the sink, or is it a "drop-in replacement" in the sense that it only needs an ssh server on the receiving end?
It needs to be installed on both ends, but by "installed" I mean only that the executable needs to be in the user's path. For some useful details, see:
Have you had a look at BitTorrent Sync? (http://labs.bittorrent.com/experiments/sync.html) One-way or two-way secure encrypted synchronisation. I've never tried it but really like the concept.
Yes, although not in the context of rsync-style folder synchronization on a server. It seems promising.