An option is to use eHive, which is free and open source:
Job processing can range from a very simple list of commands to very complex pipelining, like the pipelines used in Ensembl and other projects. A simple example of piping a list of commands into a queueing system, with fault tolerance and resource management (number of CPUs, memory, etc.), all in one script, is here:
Also have a look at the InputFile_SystemCmd pipeline:
init_pipeline.pl Bio::EnsEMBL::Hive::PipeConfig::InputFile_SystemCmd_conf -ensembl_cvs_root_dir $HOME $dbdetails -inputfile very_long_list_of_blast_jobs.txt
beekeeper.pl -url $dburl -loop
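For InputFile_SystemCmd, the input file is simply one system command per line, and each line becomes one job. A sketch of how such a file might be generated (the file name matches the example above, but the blastp arguments and chunk names are illustrative placeholders, not from the original post):

```shell
# Sketch only: write one system command per line, as expected by
# InputFile_SystemCmd. The blastp arguments and chunk names are
# hypothetical examples.
for chunk in chunk_01 chunk_02 chunk_03; do
    echo "blastp -query ${chunk}.fa -db uniprot_sprot -out ${chunk}.out"
done > very_long_list_of_blast_jobs.txt
```

eHive then tracks each job individually, so failed lines can be retried without rerunning the ones that succeeded.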
There are a few Perl dependencies to get it working. The backend can be a no-frills SQLite database, which works fine for tens to a few hundred concurrent jobs, or a MySQL backend, which usually works well for hundreds up to about a thousand concurrent jobs.
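Switching backend is mostly a matter of which connection URL you pass around (the $dburl in the beekeeper.pl call above). A hedged sketch, where the file, host, user, and database names are all placeholders:

```shell
# Placeholder connection URLs; names and hosts are illustrative.
# SQLite: a single local file, no database server needed.
dburl="sqlite:///my_pipeline.db"

# MySQL: for larger numbers of concurrent workers hitting the database.
# dburl="mysql://hive_user:hive_pass@db_host:3306/my_pipeline_db"

echo "$dburl"
```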
LSF support comes out of the box in eHive, and there is also support for some other queueing systems, such as SGE. The same script that you run on your farm can first be tested on your workstation, without any queueing system, just by using the '-local' option.