LST_AI installation on MAC OS #3
Comments
Hey @sbajaj1, and thanks for checking out LST-AI! Step 6.1 downloads and installs a pre-compiled version of greedy (which we compiled on Ubuntu), so I am unsure whether that pre-compiled version will work on macOS. I would recommend just giving it a try, i.e. take a T1w and a FLAIR image and run lst. Please report back if that works for you 👍 If you encounter any problems, we might use Step 6.2 (compiling greedy on macOS) instead of Step 6.1. However, I have not tested this on a Mac (and do not have one), so we might need to troubleshoot together if you run into any issues.
Hi Julian, thanks for your quick response. I am having serious trouble installing this toolbox (maybe because the instructions are not valid for Mac). Please see my steps below after starting from scratch. I would greatly appreciate your help resolving this:
However, pip install -e . gives me the following error, which I am unable to resolve:
I then updated my Python to 3.9, but I still get a similar error about tensorflow: ERROR: Could not find a version that satisfies the requirement tensorflow<2.12.0 (from lst-ai) (from versions: 2.13.0rc0, 2.13.0rc1, 2.13.0rc2, 2.13.0, 2.13.1, 2.14.0rc0, 2.14.0rc1, 2.14.0, 2.14.1, 2.15.0rc0, 2.15.0rc1, 2.15.0)
Thank you for the debugging output. Can you try relaxing the version restriction for tensorflow in setup.py? From:
To e.g.
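The before/after snippets from this comment are not reproduced above. As a rough, hypothetical illustration only (the file layout and other dependencies are assumptions, not the repository's actual setup.py): the error message shows the requirement pinned as `tensorflow<2.12.0`, for which pip found no matching wheel on this platform, so relaxing the pin could look like this:

```python
# Hedged sketch of the suggested change - not the exact setup.py from the repository.
# The reported error pins "tensorflow<2.12.0", for which no wheel exists on this
# macOS/Python combination; relaxing the pin lets pip resolve a 2.13+ release.
from setuptools import setup, find_packages

setup(
    name="lst-ai",                 # package name as reported by pip in the error
    packages=find_packages(),
    install_requires=[
        # before: "tensorflow<2.12.0"
        "tensorflow>=2.13.0,<3.0",  # relaxed constraint (illustrative)
        # ... other dependencies unchanged ...
    ],
)
```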
Hi Julian, that has partially resolved the issue - thank you so much. After I edited the setup.py file as you suggested, I get the following warning when I check whether lst is installed correctly:

/Users/sbajaj/lib/python3.9/site-packages/tensorflow_addons/utils/tfa_eol_msg.py:23: UserWarning: TensorFlow Addons (TFA) has ended development and introduction of new features. For more information see: tensorflow/addons#2807
warnings.warn(
Thank you for using LST-AI. If you publish your results, please cite our paper:
usage: lst [-h] --t1 T1 --flair FLAIR --output OUTPUT [--existing_seg EXISTING_SEG] [--temp TEMP] [--segment_only] [--annotate_only] [--stripped] [--threshold THRESHOLD] [--fast-mode] [--device DEVICE]

If I ignore the above warning and run the lst command as follows, I get an error:

$ lst --t1 /Users/sbajaj/BRAINIX_NIFTI_T1.nii.gz --flair /Users/sbajaj/BRAINIX_NIFTI_FLAIR.nii.gz --output /Users/sbajaj/check_output
/Users/sbajaj/lib/python3.9/site-packages/tensorflow_addons/utils/tfa_eol_msg.py:23: UserWarning: TensorFlow Addons (TFA) has ended development and introduction of new features. For more information see: tensorflow/addons#2807
warnings.warn(
Thank you for using LST-AI. If you publish your results, please cite our paper:
Looking for model weights in /Users/sbajaj/bin.

I do not get an error while installing greedy, but even though the lst command itself shows up fine in my terminal, the analysis command is showing the above error.
Good morning @sbajaj1, thanks for doing all of the debugging - that is super helpful for troubleshooting. And thank you for being so patient with the installation process; we did not foresee Mac users installing LST-AI natively this soon, and developed LST-AI for Linux platforms. In general - and perhaps I should have mentioned this earlier, as in retrospect it may not have been clear (if so, please let me know and I will add it to the README.md; sincere apologies!) - you should always be able to use the Docker containers (provided in CPU and GPU variants) to build and run dockerized versions of LST-AI instead of installing it natively on your system. For macOS, I would use the CPU version, as Nvidia GPUs are likely not supported. If you want to continue with the native installation of LST-AI on macOS, I have the following advice: most likely, the installed version of greedy (which we compiled on Ubuntu) does not work on macOS. To test that, could you please open a new terminal and call
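The exact command referenced here is not preserved in this thread. As a stand-in, here is a small, hypothetical Python check of whether the pre-compiled greedy binary from step 6.1 can be executed at all on this machine; an "Exec format error" would indicate that the Linux build is incompatible with macOS:

```python
import shutil
import subprocess

# Locate the greedy binary installed in step 6.1 (assumes it was placed on the PATH).
greedy_path = shutil.which("greedy")
print("greedy found at:", greedy_path)

if greedy_path is not None:
    try:
        # Invoking greedy without arguments normally just prints its usage text;
        # if the Linux-compiled binary cannot run on macOS, this raises OSError
        # (typically "Exec format error").
        result = subprocess.run([greedy_path], capture_output=True, text=True)
        print((result.stdout or result.stderr)[:500])
    except OSError as err:
        print("greedy could not be executed on this platform:", err)
```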
If that is not the case, then you will need to download a different version of greedy, which fortunately is available for macOS, so you can download and install that package instead. You would then replace the binary you placed in step 6.1 (II) with the binary you just downloaded. Did that help? Cheers,
Hi Julian, thanks for the details about the next steps. When I try Docker, I am getting the following error:
Dockerfile:2018 | RUN git clone https://github.com/CompImg/LST-AI/
@sbajaj1 Oh no, we are running into a "chicken or the egg" problem - i.e., if you use macOS to build the Docker container, then the installation needs to work on macOS, which is why we turned to Docker in the first place. However, we can bypass this problem if I share the pre-built Docker container with you. I have just built the newest version of the (CPU) Docker container on our Ubuntu server and am currently uploading it to Docker Hub (which unfortunately takes some time, as it is about 12 GB). The next steps will be:
I will give you the exact instructions once the upload has completed. I am deeply sorry that we did not provide the Docker container in the first place; I did not consider that you would need a working native setup on macOS just to build the container. Thank you very much for your patience - I am confident we will get the installation finished and you will have a working LST-AI setup on your Mac laptop. I will send more instructions soon, once the upload is done! 🙂 Have a nice evening,
Great - thank you so much, Julian. I will wait for the newest version and the next steps. Thank you so much for all your help!!
Thanks for your patience @sbajaj1 - the next steps should hopefully fix the issues and allow you to run LST-AI on your Mac. First things first: I compiled a CPU Docker container, so it will not use your Mac's GPU, as we do not support that (yet). Also, CPU containers are much easier to handle, since we cannot really foresee which GPU hardware our user base is going to use (we would even need different GPU Docker versions for different Nvidia GPU / CUDA combinations). Consequently, segmentation will take significantly longer (in the range of minutes on a CPU rather than seconds on a GPU); including registration and skull-stripping, I expect it to take more than 10 minutes. However, the advantage should be that it actually works 🙂 - and I will work on a GPU Mac version if I can get my hands on such a machine any time soon (e.g. from a colleague). That being said, and without further ado, here is the workflow for installing the CPU Docker image on your system:
docker pull jqmcginnis/lst-ai_cpu:latest - this retrieves the pre-built Docker image from Docker Hub.
Some more notes:
Hope this helps - please let me know how it goes; we will update the README.md according to your experience once we have a working solution for macOS. Thanks again,
Hi Julian, OK, so I ran the two commands you wrote. The first one was showing some error, so I installed the Docker app on my Mac Pro and ran jqmcginnis/lst-ai_cpu:latest. It seems Docker is running lst-ai_cpu fine - please see the attached screenshot. Then I ran the following command in the terminal:

docker run -v lst_in:/custom_apps/lst_input -v lst_out:/custom_apps/lst_output -v lst_temp:/custom_apps/lst_temp jqmcginnis/lst-ai_cpu:latest --t1 /Users/sbajaj/sample_test/BRAINIX_NIFTI_T1.nii.gz --flair /Users/sbajaj/sample_test/BRAINIX_NIFTI_FLAIR.nii.gz --output /Users/sbajaj/lst_output --temp /Users/sbajaj/lst_temp --device cpu

But this gives me the following warning (however, adding --platform linux/amd64 to the command got rid of it), and then it shows the output in the attached screenshot. And now it has been running forever and I do not see any activity in my terminal - please see the attached screenshot; a Docker status screenshot is also attached. I am not sure whether I am running the docker run command correctly?
Hi @sbajaj1, thanks for staying with us here, and I am sorry that the process of getting this up and running on macOS is so painful. I expected it to be a lot easier with Docker, as tensorflow and pytorch (and other Python libraries) are cross-platform and work out of the box in many projects; however, in our specific case it seems to be particularly hard. I dug deeper, and it looks like the issue centers around tensorflow / tensorflow-addons incompatibilities with macOS. This GitHub issue includes a rather lengthy thread on the problems we are also running into. I tried building the Docker container on my Ubuntu laptop with the
Moreover, I have found that tensorflow-addons in particular is difficult to get up and running on macOS, as documented here. While this makes me hopeful that it is possible, I would likely need access to a macOS laptop to test and get everything running myself. With this in mind, I have reached out to a collaborator (who works on macOS and is very experienced with Docker) for help, and we will meet on Friday. Thus, I would propose that we do some more debugging on our end and report back to you once I know more. In the meantime, please let me know if I can assist with the segmentation - e.g., if you have a tight schedule and need segmentations now, I would be willing to run LST-AI for you, provided you are able to share data (naturally over a private channel and not via GitHub).
Hi Julian, that really explains the issues. Thank you so much once again for your willingness and hard work in getting this resolved. Also, thanks for offering help with the segmentation on your end. The good news is that I can definitely wait until next week (or longer if needed), as I do not have a specific deadline; I will just put this project on a brief hold. So please take your time! I am looking forward to the next update and to further using the LST-AI method. Best,
Hi @sbajaj1, thank you very much for your patience. With the help of Florian Kofler (Helmholtz), I have been able to narrow the problem down to the unavailability of tensorflow-addons for the arm64 platform, which, frankly speaking, is very surprising to me. Apparently, tensorflow-addons has never been supported on ARM architectures, for either Linux or macOS. However, we are not giving up, and I will check in the upcoming days whether we can (1) replace the functionality we use from this library with our own implementation, or (2) compile tensorflow-addons for arm64 platforms ourselves. I just wanted to give you a quick update and let you know that we are (still) working, with the highest priority, on a solution to make LST-AI work on your, and potentially other, arm64 platforms. Have a nice day, Julian
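Purely as an illustration of option (1), and under the assumption (not confirmed in this thread) that the tensorflow-addons functionality in question is something like an InstanceNormalization layer: recent TensorFlow/Keras releases (2.11+) ship GroupNormalization natively, and with one group per channel it is equivalent to instance normalization, which would remove the tensorflow-addons dependency on arm64:

```python
import tensorflow as tf

# Hypothetical drop-in replacement: tfa.layers.InstanceNormalization is equivalent to
# group normalization with one group per channel, which tf.keras provides natively
# from TF 2.11 onwards (groups=-1 means "one group per channel").
def instance_norm(epsilon: float = 1e-5) -> tf.keras.layers.Layer:
    return tf.keras.layers.GroupNormalization(groups=-1, epsilon=epsilon)

# Example usage inside a small (illustrative) 3D model definition.
inputs = tf.keras.Input(shape=(64, 64, 64, 1))
x = tf.keras.layers.Conv3D(8, 3, padding="same")(inputs)
x = instance_norm()(x)
model = tf.keras.Model(inputs, x)
model.summary()
```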
Yes, we could reproduce your issue and are working on a fix. Currently, there seem to be three options:
I am curious which solution will win in the end :)
Great - thank you so much, Julian @jqmcginnis and Florian @neuronflow, for the update and for working on this. That sounds like a plan. I am looking forward to the next updates.
@sbajaj1 It seems that, just in time for Christmas, we might have a little 🎁 For the last two weeks, @neuronflow and I have been debugging multiple issues arising with macOS / arm64 Dockerfiles, and I am happy to share that we have made quite some progress:
To obtain the new LST-AI image, please do the following:

docker pull jqmcginnis/lst-ai_cpu:latest

On your macOS platform, this should automatically fetch the arm64 version (linux/amd64 is available as well, but is not suited for your platform). Then you can finally run:

docker run -v /Users/nf/projects/lst/input:/custom_apps/lst_input -v /Users/nf/projects/lst/output:/custom_apps/lst_output -v /Users/nf/projects/lst/temp:/custom_apps/lst_temp jqmcginnis/lst-ai_cpu:latest --t1 /custom_apps/lst_input/T1W.nii.gz --flair /custom_apps/lst_input/FLAIR.nii.gz --output /custom_apps/lst_output --temp /custom_apps/lst_temp --device cpu

Please let us know if this works for you. We will update the Dockerfile and the release soon, once we have your feedback. Cheers,
Hi Julian, thank you so much for working on this. Sorry for the late response (I was on Christmas break). The docker pull command ran fine and the download was successful, but the docker run command gives the following error. It seems there is some minor issue (maybe on my side):

sudo docker run -v /Users/nf/projects/lst/input:/custom_apps/lst_input -v /Users/nf/projects/lst/output:/custom_apps/lst_output -v /Users/nf/projects/lst/temp:/custom_apps/lst_temp jqmcginnis/lst-ai_cpu:latest --t1 /Users/sbajaj/sample_test/BRAINIX_NIFTI_T1.nii.gz --flair /Users/sbajaj/sample_test/BRAINIX_NIFTI_FLAIR.nii.gz --output /custom_apps/lst_output --temp /custom_apps/lst_temp --device cpu
Does the same happen when you run without sudo?
Do this folder and the others you mount already exist? What are their permissions?
Hi @neuronflow, yes, I get the same error without sudo as well. No, these folders do not already exist. And permissions should be fine too, as I am the only admin. Maybe something is wrong in the command. Just to reiterate, I am doing the following (my T1w and FLAIR images are at /Users/sbajaj/sample_test/....nii.gz):

MDACM0CL9004107:~ sbajaj$ docker pull jqmcginnis/lst-ai_cpu:latest
latest: Pulling from jqmcginnis/lst-ai_cpu

MDACM0CL9004107:~ sbajaj$ docker run -v /Users/nf/projects/lst/input:/custom_apps/lst_input -v /Users/nf/projects/lst/output:/custom_apps/lst_output -v /Users/nf/projects/lst/temp:/custom_apps/lst_temp jqmcginnis/lst-ai_cpu:latest --t1 /Users/sbajaj/sample_test/BRAINIX_NIFTI_T1.nii.gz --flair /Users/sbajaj/sample_test/BRAINIX_NIFTI_FLAIR.nii.gz --output /Users/sbajaj/lst_output --temp /Users/sbajaj/lst_temp --device cpu
docker: Error response from daemon: error while creating mount source path '/host_mnt/Users/nf/projects/lst/input': mkdir /host_mnt/Users/nf: permission denied.
Hey @sbajaj1 - Happy New Year ✨ Thank you for the helpful debugging. I think there is a misunderstanding with the volume mounting process, i.e. the paths are mounted incorrectly (
You should replace the
Let me know if this works 👍 Cheers,
Happy New Year too! 🚀
Can you copy-paste your console output, please? The computation should take a while. The final results should appear in the output folder. The temp folder should contain intermediate steps generated on the way toward populating the output folder. PS: lesion size should be trivial to implement by counting voxels? |
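As a minimal sketch of that idea (this is not LST-AI's own compute_stats.py, and the mask path is illustrative), lesion count and total lesion volume can be derived from a binary mask by labelling connected components and multiplying the voxel count by the voxel volume:

```python
import nibabel as nib
import numpy as np
from scipy import ndimage

# Load the binary lesion mask produced by LST-AI (path is illustrative).
mask_img = nib.load("lst_output/lesion_mask.nii.gz")
mask = np.asarray(mask_img.dataobj) > 0

# Voxel volume in mm^3 from the header zooms.
voxel_volume = float(np.prod(mask_img.header.get_zooms()[:3]))

# Total lesion volume = number of lesion voxels * voxel volume (converted to ml).
total_volume_ml = mask.sum() * voxel_volume / 1000.0

# Lesion count via connected-component labelling.
_, n_lesions = ndimage.label(mask)

print(f"Lesions: {n_lesions}, total lesion volume: {total_volume_ml:.2f} ml")
```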
Hi @neuronflow, here you go (I just noticed this error in the output: vnl_lbfgs: Error. Netlib routine lbfgs failed.):

MDACM0CL9004107:~ sbajaj$ docker run -v /Users/sbajaj/sample_test/:/custom_apps/lst_input -v /Users/sbajaj/lst_output:/custom_apps/lst_output -v /Users/sbajaj/lst_temp:/custom_apps/lst_temp jqmcginnis/lst-ai_cpu:latest --t1 /custom_apps/lst_input/BRAINIX_NIFTI_T1.nii.gz --flair /custom_apps/lst_input/BRAINIX_NIFTI_FLAIR.nii.gz --output /custom_apps/lst_output --temp /custom_apps/lst_temp --device cpu

N=7 NUMBER OF CORRECTIONS=5 INITIAL VALUES F= -10792.4 GNORM= 16.6195 I NFN FUNC GNORM STEPLENGTH
N=7 NUMBER OF CORRECTIONS=5 INITIAL VALUES F= -10778.8 GNORM= 5.16266 I NFN FUNC GNORM STEPLENGTH
N=7 NUMBER OF CORRECTIONS=5 INITIAL VALUES F= -10701.9 GNORM= 4.15881 I NFN FUNC GNORM STEPLENGTH
N=7 NUMBER OF CORRECTIONS=5 INITIAL VALUES F= -10768.3 GNORM= 36.2245 I NFN FUNC GNORM STEPLENGTH
N=7 NUMBER OF CORRECTIONS=5 INITIAL VALUES F= -11372.3 GNORM= 15.1961 I NFN FUNC GNORM STEPLENGTH
N=7 NUMBER OF CORRECTIONS=5 INITIAL VALUES F= -11463.6 GNORM= 16.9441 I NFN FUNC GNORM STEPLENGTH

########################
File: /custom_apps/lst_temp/sub-X_ses-Y_space-mni_T1w.nii.gz
@sbajaj1 - great news 🚀 - happy that you have obtained your first segmentations using LST-AI. That is also great feedback for us regarding macOS, and we will work on merging this to master in the upcoming days. Regarding your questions: (1) Output files:
The class labels for the annotated segmentation are:
(2) Temp files: LST-AI performs segmentation and annotation in MNI152 template space. For users who would like to keep the registered T1w and FLAIR images and their corresponding segmentation masks, these files are kept if the user provides a temp directory. If you skip the
(3) You are absolutely correct: LST-AI currently does not gather statistics the way the previous LST/SPM12 version did. To compute your stats, you can run the script using the following commands:

Stats for binary mask
Example Output:
Stats for multi-class mask
Example Output:
Let me know if this is what you would expect, and whether you think we should also integrate this into the LST-AI package. Cheers,
Thanks for the detailed information. Somehow, the temp folder I got has 14 files in it (not 20 as you described), as can be seen in the screenshot I shared previously, and 0 files in the output folder (not 2 as you described). I also got the following error in the terminal console: vnl_lbfgs: Error. Netlib routine lbfgs failed. It seems something minor still needs to be fixed; please let me know otherwise. I will next try computing the stats for the binary mask/lesions. Thanks,
@sbajaj1 This might be an issue with your input data. @jqmcginnis @CompImg can we (here meaning you) supply example data to check the functionality? This would allow us to pinpoint whether it is a data issue. @sbajaj1 can you describe your input data a bit, please? Resolution, etc.
Hi @neuronflow, I am using a standard dataset from here to test LST-AI: https://www.kaggle.com/datasets/ilknuricke/neurohackinginrimages. This is open-access, public data. I double-checked and this input data seems OK. Please find the data info below (voxel-to-RAS determinants from the headers; the full voxel-to-RAS and RAS-to-voxel transform matrices are not reproduced here):

T1w (BRAINIX_NIFTI_T1.nii.gz): voxel-to-RAS determinant -1.31836
FLAIR: voxel-to-RAS determinant -3.82668
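For reference, the same kind of header information (matrix size, voxel sizes, and the voxel-to-RAS determinant quoted above) can be checked quickly with nibabel; the file paths are illustrative:

```python
import nibabel as nib
import numpy as np

# Inspect the input volumes: shape, voxel size in mm, and voxel-to-RAS determinant.
for path in ["BRAINIX_NIFTI_T1.nii.gz", "BRAINIX_NIFTI_FLAIR.nii.gz"]:
    img = nib.load(path)
    zooms = img.header.get_zooms()[:3]         # voxel size in mm
    det = np.linalg.det(img.affine[:3, :3])    # voxel-to-RAS determinant
    print(f"{path}: shape={img.shape}, zooms={zooms}, det={det:.5f}")
```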
@sbajaj1 Apologies - I misread and misunderstood your previous comment; after seeing the many temp files, I somehow assumed it had finally worked. I am sorry about that. Please let me know if I overlooked anything else! I tested a couple of things on my end:

(1) I used the BRAINIX_NIFTI files from Kaggle, and on my end they seem to work fine, i.e. I get all temp and output files. However, I think it might not be the best sample dataset, as the scan does not appear to feature MS lesions and the data is highly anisotropic. Perhaps you would like to switch to MSLUB and choose one of the patients under Raw Data? 🙂

(2) Regarding the vnl_lbfgs warning, according to the developer of greedy: "This might not be a problem, unless the registration is doing something nonsensical. Registration is an optimization problem, and sometimes the tolerances for the optimization scheme are not quite right for the problem. Here it looks like it ran for some iterations and reduced the objective, so it might be ok. If you see it printing this error after just a couple of iterations (NFN), then something is wrong." I get the same warning as well. However, the registration (also in your case) seems to work, as the files are populated in your temp folder. I think we can ignore it 👍

(3) Checking the debug output, it seems that the script stops after the registration / after warping the images, and before entering the LST segmentation routine (I am missing the "Running LST Segmentation" print):
Based on this, my feeling is that the segmentation is still ongoing (and the prints are lagging behind). Could you please provide additional information about how long the operation has been running inside the Docker container? In our experience, especially when running Docker on @neuronflow's M2 system, there can be a delay in the appearance of Docker's print output, so it is conceivable that the LST-AI process is still running. Could you run it again and wait for a longer time? Thank you very much for your help, and sorry once again for my misunderstanding. Julian
My MacBook has an M1 Max; otherwise, I agree :) @sbajaj1, which resources did you assign to Docker on your system? My successful tests were with the following settings, though I am quite sure one could get away with much less:
So, finally, it seems everything is working fine now. Two things I noticed:
Also, the stats command compute_stats.py is working fine. Thank you so much for your hard work resolving all the issues. Best,
I just downloaded the T1 (pre) and FLAIR from subject #1. During processing, I noticed that greedy shows the following warnings:
Also I see:
However, it still generates segmentations that look fine to my layman's eyes:

What happens if you increase the resources available to Docker?
Everything is working fine with all the subjects I have tested now (there was something wrong on my end while downloading the sample data for pat 1 vs. pat 15). Thank you so much for all your consistent help and support with this. Best,
Sweet, thank you for reporting back so quickly :)
@jqmcginnis I believe this issue can be closed? |
@neuronflow I am currently working on a PR for all changes related to this issue, in order to push them to the main branch of this repo. To be able to reference the PR and the problems addressed in this issue, I would keep it open until I have created and closed the PR? :)
The improvements implemented in this issue have been merged into the main branch and have now been released in v1.1.0 🎈
Greetings,
I am trying to install LST-AI on a MacBook Pro running Ventura 13.6.2 (chip: Apple M2 Max).
I believe I have been successful installing everything up to step 6.1. I am not sure whether, after 6.1, I still need to do step 6.2?
If yes, step 6.2 gives me the following error:
Warning: No available formula with the name "build-essential".
==> Searching for similarly named formulae and casks...
Error: No formulae or casks found for build-essential.
If no, then I am unable to run the lst command following step 6.1.
I would greatly appreciate any help installing LST-AI on my system.
Thanks,
Sahil