Which GPU for local ColabFold?
17 months ago

Hi, I could use some help setting up a local install of ColabFold. I just can't seem to get clear information from all the posts online.

We have it up and running. It runs on a Ryzen 5 with a Kepler K40. We do see short bursts of GPU activity, but the temperature never exceeds 60 °C, so it is not working very hard. As far as I understand, the K40 only supports older CUDA versions, whereas ColabFold uses 11.4; I'm not sure what that means in practice. Is ColabFold running on the CPU only, or is it running on the GPU but with poor communication?

It is also often stated that Kepler cards lack RAM, yet other GPUs with the same 8 GB of RAM, such as the RTX2040 or 3060, are apparently fine.

We want to run 1000 aa sequences, and the local install does indeed run out of memory. Would the same happen with, for instance, an RTX3060, or would the RAM not fill up because the card is faster?

Does ColabFold work with AMD GPUs (these appear to have more RAM onboard)?

Thanks!

12 months ago

Hi Arjen,

This was a little while ago now, so maybe this isn't that helpful, but...

You should be able to tell if the GPU is being seen because the predictions will be really fast! For a protein that size, I reckon 1-5 minutes per iteration (at least on my slightly newer, speedier GPUs; although they have 16GB RAM; see below).

CUDA version =/= CUDA compute capability (this is confusing, but it's just the way it is). From the looks of things, your GPU should be compatible with CUDA 11.1, so it ought to work. Have you tried making sure that TensorFlow and PyTorch are seeing the GPU? If both of those indicate that the GPU is being detected, then your problem is elsewhere.
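To check GPU visibility from both frameworks, something like the sketch below should work (it uses the standard `torch.cuda.is_available()` and `tf.config.list_physical_devices()` calls; it reports "not installed" instead of crashing if either framework is missing from the environment):

```python
def check_gpu_visibility():
    """Report whether PyTorch and TensorFlow can each see a CUDA GPU.

    Returns a dict mapping framework name to True (GPU visible),
    False (no GPU visible), or None (framework not installed).
    """
    results = {}
    try:
        import torch
        results["pytorch"] = torch.cuda.is_available()
    except ImportError:
        results["pytorch"] = None
    try:
        import tensorflow as tf
        results["tensorflow"] = len(tf.config.list_physical_devices("GPU")) > 0
    except ImportError:
        results["tensorflow"] = None
    return results

if __name__ == "__main__":
    for framework, seen in check_gpu_visibility().items():
        if seen is None:
            status = "not installed"
        else:
            status = "GPU visible" if seen else "no GPU visible"
        print(f"{framework}: {status}")
```

If PyTorch sees the GPU but TensorFlow doesn't (or vice versa), that usually points at a CUDA/driver version mismatch for that specific framework rather than a hardware problem.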

8 GB of GPU RAM might not be enough for 1000 residues, so it is possible that this is what is causing your problem. The most recent prediction I ran was 1500 residues, but as a homotrimer, and it peaked at 7.5 GB of GPU RAM usage; I'm not sure exactly what drives this, but in the case of a homomultimer I suspect lower RAM usage. Does your CPU usage spike when ColabFold is making predictions? How many cores are typically in use? When I run local ColabFold on the GPU, I have just one CPU core in use; if you're using multiple, the chances are your predictions are running on the CPU.
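One way to watch whether GPU RAM actually fills up during a prediction is to poll `nvidia-smi` while ColabFold runs. A minimal sketch (the `--query-gpu` flags are standard `nvidia-smi` options; the function returns an empty list if no NVIDIA driver is present):

```python
import subprocess

def parse_memory_line(line):
    """Parse one 'used, total' CSV line from nvidia-smi into (used_mib, total_mib)."""
    used, total = (int(field.strip()) for field in line.split(","))
    return used, total

def query_gpu_memory():
    """Return a list of (used_mib, total_mib) tuples, one per GPU.

    Returns [] if nvidia-smi is unavailable (no NVIDIA driver installed).
    """
    cmd = [
        "nvidia-smi",
        "--query-gpu=memory.used,memory.total",
        "--format=csv,noheader,nounits",
    ]
    try:
        output = subprocess.check_output(cmd, text=True)
    except (FileNotFoundError, subprocess.CalledProcessError):
        return []
    return [parse_memory_line(line) for line in output.strip().splitlines()]

if __name__ == "__main__":
    for idx, (used, total) in enumerate(query_gpu_memory()):
        print(f"GPU {idx}: {used} / {total} MiB used")
```

If the used figure stays near zero throughout a prediction while CPU cores are pegged, that's another strong sign the job is running on the CPU.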

AMD GPUs:

GPU acceleration in Python (which is the nuts and bolts of ColabFold) relies on CUDA, which afaik doesn't really work on AMD GPUs. There is a workaround from AMD (called ROCm), although I have never tried it. See this forum post for more info.


Another giveaway is that when you hit run, it will say either "Running on GPU" or "Running on CPU" in the log output...

[screenshot of the ColabFold log output]
