
Limit RAM usage #12

Closed

hrzolix opened this issue Dec 30, 2021 · 1 comment

Comments

@hrzolix commented Dec 30, 2021

I'm trying to run a FASTA file with a sequence 3,643 residues long. The MSA part finished, but the inference part tried to allocate 80 GB of VRAM on the GPU, which I don't have: the graphics cards are NVIDIA Tesla V100 16 GB. Now I'm trying to run inference on the CPU, which is a very slow process, and the job keeps using a lot of RAM, with usage growing as time passes. Can I limit the RAM usage somehow? Or could I run inference on more graphics cards, perhaps as a parallel process?

@Zuricho (Owner) commented Dec 31, 2021

You can: use CUDA unified memory to pool memory across multiple GPU cards.
You can't: limit the memory usage, and more GPUs will not accelerate the prediction process through parallelism.

I should add that I tested AlphaFold on an NVIDIA Tesla V100 32 GB, and it was still able to predict when I submitted a sequence of more than 3,000 aa. Perhaps you don't need that much memory; it may also depend on the size of the MSA.
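For reference, here is a minimal sketch of how unified memory is typically enabled for long-sequence AlphaFold runs. It assumes the `TF_FORCE_UNIFIED_MEMORY` and `XLA_PYTHON_CLIENT_MEM_FRACTION` environment variables documented in the official AlphaFold setup; the exact knobs for this repository may differ.

```python
import os

# Assumed setup, following the official AlphaFold docs: let JAX/XLA spill
# GPU allocations into host RAM through CUDA unified memory, so a model
# that needs more than one card's 16 GB can still run (slower, but it runs).
os.environ["TF_FORCE_UNIFIED_MEMORY"] = "1"

# Allow the XLA client to allocate up to 4x the physical GPU memory;
# the overflow is paged between device and host by unified memory.
os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"] = "4.0"

# These must be set before JAX initializes its GPU backend, i.e. before
# importing the AlphaFold model modules.
```

Note that unified memory trades speed for capacity: pages migrate between host and device on demand, so very long sequences will still run much slower than when everything fits in VRAM.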

@hrzolix closed this as completed Jan 7, 2022