I've been looking at putting together a local Galaxy instance so that lab members have a platform for viewing data once our NGS samples start rolling in. I have a single server for this right now, which would obviously be a major bottleneck for an end-to-end analysis of a larger dataset. We do have cluster computing facilities available as a core resource, however, so I started thinking about the possibility of using a local Galaxy instance to initiate compute tasks on a remote system, either in our computing center or in the cloud. As far as I know this isn't available in the core Galaxy codebase at the moment, but it seems like something that could exist somewhere.
To clarify a bit: I can't actually host Galaxy on the cluster, and hosting it full-time in the cloud would probably cost too much. Our existing server is probably powerful enough for most tasks. But when something would take a month to run on that box, it would be great to use the same front-end instance to kick off processing on a remote system, e.g. to spin up EC2 nodes and handle sending and receiving the data.
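Roughly the kind of glue I'm picturing, as a Python sketch. To be clear, every function name here is a hypothetical placeholder, not a real Galaxy or AWS API; in practice the provisioning step would be something like boto and the transfers would be rsync/scp:

```python
# Sketch of the orchestration I have in mind: the local Galaxy box stays
# the front end, and oversized jobs get shipped to a remote node.
# All of these functions are placeholders, not real Galaxy or EC2 calls.

def provision_node():
    """Stand-in for spinning up an EC2 instance (e.g. via boto)."""
    return "remote-node-1"  # would return a hostname or IP

def push_inputs(node, inputs):
    """Stand-in for rsync/scp of input files up to the node."""
    return {name: f"{node}:/scratch/{name}" for name in inputs}

def run_job(node, remote_inputs, command):
    """Stand-in for launching the actual tool on the node over SSH."""
    return f"{node}:/scratch/output.bam"  # path to the finished result

def pull_results(remote_path):
    """Stand-in for copying results back and tearing the node down."""
    host, path = remote_path.split(":", 1)
    return path  # local copy of the result

def dispatch(inputs, command):
    """End-to-end: provision, stage data, run, retrieve."""
    node = provision_node()
    staged = push_inputs(node, inputs)
    remote_out = run_job(node, staged, command)
    return pull_results(remote_out)

print(dispatch(["sample1.fastq"], "bwa mem ref.fa sample1.fastq"))
# -> /scratch/output.bam
```

The point is just that Galaxy's front end would only ever see `dispatch()`; whether the work lands on our local box, the campus cluster, or EC2 would be hidden behind those four steps.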
The main motivation is to keep a local system that maintains our own sample database and workflow management, while being able to leverage larger computing systems for the bigger jobs.
I don't really expect something like this to already exist for Galaxy, but it seemed worth asking. You never know what resources are out there that Google somehow missed.