I'm trying to run a FASTA file containing a sequence of 3643 residues. The MSA part completed, but the inference part tried to allocate 80 GB of VRAM on the GPU, which I don't have; my graphics cards are NVIDIA Tesla V100 16 GB. Now I'm trying to run inference on the CPU, which is very slow, and the job keeps using more and more RAM as time passes. Can I limit the RAM usage somehow? Or can I run inference on more graphics cards, maybe as a parallel process?
You can: use CUDA unified/shared memory to pool memory from multiple GPU cards.
You can't: limit the memory usage, and adding more GPUs will not accelerate prediction, since inference does not parallelize across devices.
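For reference, AlphaFold's Docker launcher enables CUDA unified memory through environment variables so that GPU allocations can overflow into host RAM for long targets. Below is a minimal sketch of the same settings for a non-Docker run; the 4.0 fraction matches the value used in the official run_docker.py, but you may need to tune it to your available host RAM:

```python
# Sketch: set these before JAX/TensorFlow are imported (e.g. at the top of
# run_alphafold.py, or export them in the shell). Assumes the standard
# AlphaFold environment variables for unified memory.
import os

# Let CUDA allocations fall back to host RAM instead of failing on GPU OOM.
os.environ['TF_FORCE_UNIFIED_MEMORY'] = '1'

# Allow the XLA client to reserve up to 4x the GPU's memory; the excess is
# backed by (slower) host RAM via unified memory.
os.environ['XLA_PYTHON_CLIENT_MEM_FRACTION'] = '4.0'
```

Running with unified memory on a 16 GB V100 will be slower than fitting everything in VRAM, but it can avoid the 80 GB allocation failure without falling back to CPU-only inference.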
I should add that I tested AlphaFold on an NVIDIA Tesla V100 32 GB, and it was still able to predict when I submitted a sequence of more than 3000 aa. Perhaps you don't need that much memory; it may also depend on the size of the MSA.