I have been attempting to run MrBayes on my university's cluster, which has NVIDIA GPUs. I compiled MrBayes with the BEAGLE library according to the manual so that my analysis could run on a GPU, and the HPC team verified that it was installed and compiled correctly on the cluster.
My amino acid dataset consists of 179 taxa and 90,350 sites, and I am running a partitioned analysis in MrBayes. When I run it on a GPU, the GPU memory fills up instantly, right after the model partitioning step at the start of the run.
Has anyone had any luck running a large dataset in MrBayes with GPU(s)? I can run my dataset on CPUs, but the run takes about 26 days to complete.
Additional MrBayes details: ngen = 1000000, nruns = 4, burnin = 1000, samplefreq = 1000, and nchains = 4
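For reference, here is roughly what my MrBayes block looks like with those settings (the beagle options are my understanding of how GPU use is enabled per the manual; burnin is applied at the summary stage, so treat this as a sketch rather than my exact input file):

```
begin mrbayes;
    [ ask BEAGLE for a GPU device; single precision halves memory use ]
    set usebeagle=yes beagledevice=gpu beagleprecision=single;

    [ MCMC settings as listed above ]
    mcmc ngen=1000000 nruns=4 nchains=4 samplefreq=1000;

    [ burnin of 1000 applied when summarizing ]
    sump burnin=1000;
    sumt burnin=1000;
end;
```

If anyone sees a setting here that would explain the GPU memory blowing up (e.g., precision, number of chains, or partitioning), I'd appreciate a pointer.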
I appreciate any and all help. If more info is needed, I'm happy to share it.