Multimodal LLM Mural Assistant with Ollama
MurAI is a multimodal LLM assistant designed to analyze and extract information from Mural.co screenshots, helping users quickly understand and interact with Mural content.
A general-purpose multimodal LLM assistant would help the office navigate large amounts of information more quickly and efficiently; this prototype aims to lay the groundwork for future LLM projects at CMS.
- Clone the repository:

  ```sh
  git clone https://github.com/DSACMS/mural-ollama
  cd mural-ollama
  ```
- Install the required dependencies (ideally in a virtual environment):

  ```sh
  pip install -r requirements.txt
  ```
- Ensure Ollama is installed and the required model (in this case, llava:13b) is available:

  ```sh
  ollama pull llava:13b
  ```
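Once the model is pulled, a quick way to confirm everything works end to end is to send a screenshot to the model yourself. The sketch below is illustrative rather than part of the MurAI codebase: it assumes the official `ollama` Python package is installed (`pip install ollama`), and `mural_screenshot.png` is a placeholder name for any exported Mural.co board image on disk.

```python
import ollama

# Minimal sanity check: ask llava:13b to describe a Mural board screenshot.
# "mural_screenshot.png" is a placeholder; point it at any exported
# Mural.co board image on disk.
response = ollama.chat(
    model="llava:13b",
    messages=[
        {
            "role": "user",
            "content": (
                "List the sticky notes visible in this Mural board "
                "and summarize the main themes."
            ),
            "images": ["mural_screenshot.png"],
        }
    ],
)

# The model's reply is plain text describing the board contents.
print(response["message"]["content"])
```

If the call returns a coherent description of the board, then Ollama, the model, and the Python bindings are all wired up correctly.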
A full list of contributors can be found at https://github.com/DSACMS/mural-ollama/graphs/contributors.
Thank you for considering contributing to an Open Source project of the US Government! For more information about our contribution guidelines, see CONTRIBUTING.md.
We adhere to the CMS Open Source Policy. If you have any questions, please send us an email.
Submit a vulnerability: Unfortunately, we cannot accept secure submissions via email or via GitHub Issues. Please use our website to submit vulnerabilities at https://hhs.responsibledisclosure.com. HHS maintains an acknowledgements page to recognize your efforts on behalf of the American public, but you are also welcome to submit anonymously.
For more information about our Security, Vulnerability, and Responsible Disclosure Policies, see SECURITY.md.
This project is in the public domain within the United States, and copyright and related rights in the work worldwide are waived through the CC0 1.0 Universal public domain dedication as indicated in LICENSE.
All contributions to this project will be released under the CC0 dedication. By submitting a pull request or issue, you are agreeing to comply with this waiver of copyright interest.