I have a hard time believing that there are 400K Salmonella genomes deposited. But even if that's the case, there is bound to be considerable redundancy among them. Say you can download them at one minute per dataset — do you really think downloading will be your main problem? What about storing that much data? Or processing and analyzing 400K datasets? Even if you had the bandwidth and computational power to do everything in parallel, I think it would be wildly optimistic to assume an average of 10 minutes per dataset for all of this. And 10 minutes times 400K is more than 7 years.
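The back-of-the-envelope arithmetic above checks out; here it is spelled out (all numbers taken from the estimate in the answer, not measured):

```python
# Sanity check: 400K datasets at an average of 10 minutes each,
# processed end to end, converted into calendar years.
datasets = 400_000
minutes_per_dataset = 10

total_minutes = datasets * minutes_per_dataset          # 4,000,000 minutes
years = total_minutes / (60 * 24 * 365)                 # minutes in a year

print(round(years, 1))  # -> 7.6
```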
Rather than trying to find a way to download 400K sets of files, it may be better to intelligently select a smaller subset that still serves your purpose and is feasible to work with.
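As a minimal sketch of that subsetting idea: given a list of accessions, draw a reproducible random sample instead of fetching everything. The file name `accessions.txt` and the helper below are hypothetical — substitute your own accession list and selection criteria (e.g. one representative per serotype):

```python
import random

def sample_accessions(accessions, k, seed=42):
    """Return k accessions chosen uniformly at random, reproducibly."""
    rng = random.Random(seed)
    return rng.sample(accessions, k)

# Toy list for illustration; with a real file you would instead do:
#   accessions = [line.strip() for line in open("accessions.txt")]
accessions = [f"SRR{i:06d}" for i in range(400)]
subset = sample_accessions(accessions, k=10)
print(len(subset))  # -> 10
```

A fixed seed keeps the subset stable across reruns, so the analysis stays reproducible while you work at a feasible scale.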
Using the Aspera CLI will dramatically speed up your downloads. I have heard that you only get full speed if your terminal is located within the USA — and I did test this myself: connecting through a VPN node in the USA gave me a severalfold increase in download speed.
In some cases the database provides a dedicated Aspera link so you can download directly. In other cases, you can rewrite the FTP URL into an Aspera address.
For instance, an FTP URL like ftp://ftp.sra.ebi.ac.uk/vol1/..... can be replaced by email@example.com:/vol1/...
Likewise, an FTP URL like ftp://ftp.ebi.ac.uk/databases/.... can be replaced by firstname.lastname@example.org:databases/...
At least in my experience, these are the two address prefixes I always use to download FASTQ files via Aspera. Hope this helps.
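The URL rewriting described above is a plain prefix swap, sketched below. The `<aspera-user>@<aspera-host>` placeholders are assumptions standing in for the Aspera credentials your archive documents (the addresses in this answer appear redacted), and the commented `ascp` invocation assumes the key file location used by a typical `aspera-cli` install:

```shell
#!/bin/sh
# Sketch: turn an ENA-style FTP URL into an Aspera-style address by replacing
# the scheme+host prefix with <user>@<host>: (placeholders, not real values).
ftp_url="ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR000/SRR000001/SRR000001.fastq.gz"

aspera_addr=$(printf '%s\n' "$ftp_url" \
  | sed 's|^ftp://ftp\.sra\.ebi\.ac\.uk|<aspera-user>@<aspera-host>:|')

echo "$aspera_addr"
# -> <aspera-user>@<aspera-host>:/vol1/fastq/SRR000/SRR000001/SRR000001.fastq.gz

# With real credentials you would then download via ascp, e.g.:
#   ascp -QT -l 300m -P33001 \
#        -i "$HOME/.aspera/cli/etc/asperaweb_id_dsa.openssh" \
#        "$aspera_addr" ./downloads/
# (-Q fair transfer policy, -T no encryption, -l max rate, -P SSH port)
```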