
epic: Improving Ollama integration #2998

Open
eckartal opened this issue Jun 5, 2024 · 17 comments
Labels
P2: nice to have · type: feature request

Comments

@eckartal
Contributor

eckartal commented Jun 5, 2024

Problem
Integrating Ollama with Jan using the single OpenAI endpoint feels challenging. It’s also a hassle to ‘download’ the model.

Success Criteria

  • Make it easier to add Ollama endpoints.
  • Automatically find available Ollama models and settings.
  • Allow multiple Ollama instances (e.g., local for small models, server/cloud for larger models).

Additional context
Related Reddit comment to be updated: https://www.reddit.com/r/LocalLLaMA/comments/1d8n9wr/comment/l77ifd1/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
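For context, Ollama already exposes an OpenAI-compatible API on its default port, which is what the current single-endpoint setup points at. A minimal sketch of such a request, assuming a local server on the default port 11434 and using llama3.1 purely as an example model name:

```sh
# Chat completion against Ollama's OpenAI-compatible endpoint
# (localhost:11434 is Ollama's default port; "llama3.1" is just an example model name)
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.1",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```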

@eckartal added the type: feature request label on Jun 5, 2024
@Van-QA assigned imtuyethan and unassigned Van-QA on Jun 6, 2024
@ShravanSunder

Yes please! This is my biggest blocker with Jan. I don't want multiple redundant model file locations. I'd like my Ollama models to be easily used.

@richardstevenhack

I second this. I looked at the docs about "Ollama integration", but all that does is set up the server endpoint. You can't select an Ollama model that's already downloaded where Ollama stores its models, and I don't think you can upload the model. On my openSUSE Tumbleweed system, Ollama stores its models in /var/lib/ollama/.ollama/models/ rather than the default Ollama location, and the Import file selection dialog can't even see the directories below /var/lib/ollama.

@imtuyethan removed their assignment on Aug 28, 2024
@0xSage added the P2: nice to have and wontfix labels on Sep 5, 2024
@0xSage
Contributor

0xSage commented Sep 5, 2024

You can import already-downloaded models (via a local symlink) directly from the Hub.

(screenshots)

We likely won't support a direct integration for a while, as we already integrate with Hugging Face.

@0xSage closed this as completed on Sep 5, 2024
@sammcj

sammcj commented Sep 5, 2024

You shouldn't have to import a model, though. If you look at how other tools do it, they provide a list of available models from the API.

@sean-public

That's right: a call to the model listing endpoint, and then allowing one of the returned models to be selected for use, is what we're talking about, at least to start. I think 0xSage is a little mixed up about what we're all asking for, maybe? This is holding back anyone who has Ollama running somewhere with Modelfiles already configured that they want to use.
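For reference, a sketch of what that model-listing call looks like against a stock Ollama server (default port assumed); either the native endpoint or its OpenAI-compatible equivalent could populate a model dropdown:

```sh
# Native Ollama API: lists the locally available models (names, sizes, digests)
curl http://localhost:11434/api/tags

# OpenAI-compatible equivalent, if Jan prefers to stay on the OpenAI schema
curl http://localhost:11434/v1/models
```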

@0xSage reopened this on Sep 5, 2024
@0xSage
Contributor

0xSage commented Sep 5, 2024

Ah, I see what you guys are saying. We can add a dropdown option here that pings your existing running server:
(screenshot)
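For what it's worth, a cheap reachability check against a running Ollama server could back that dropdown (default port assumed); this is only a sketch of the idea:

```sh
# Quick liveness probe for a local Ollama server (default port 11434)
curl -sf http://localhost:11434/             # plain-text "Ollama is running"
curl -sf http://localhost:11434/api/version  # JSON like {"version":"..."}
```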

@0xSage removed the wontfix label on Sep 5, 2024
@sammcj

sammcj commented Sep 5, 2024

Yeah! That’d be fantastic!

@richardstevenhack

Yup, that's the idea.

@ShravanSunder

yes please

@dan-homebrew changed the title from "feat: Improving Ollama integration" to "epic: Improving Ollama integration" on Sep 11, 2024
@mrtysn

mrtysn commented Sep 17, 2024

my local models downloaded via ollama are curious to test Jan 👀
I was able to symlink ollama models to Jan using https://github.com/sammcj/gollama

@richardstevenhack

" I was able to symlink ollama models to Jan using https://github.com/sammcj/gollama"

I use gollama to link to LMStudio. How did you use it to link to Jan? Did you put the Jan directory into the LMStudio files path in Gollama?

@sammcj

sammcj commented Sep 17, 2024

@richardstevenhack I could probably update Gollama to add Jan linking support, but I think it would make more sense for Jan to just support Ollama as an LLM provider; that way you'd get all the nice Ollama API features and wouldn't have to load models in multiple places.

@richardstevenhack

I agree. If everyone rallied around Ollama as the main AI server for PCs, and other programs concentrated on the UI and additional features on top, things would be easier. Until then, it would be nice to have the ability to link Jan to Ollama.

@khromov

khromov commented Sep 20, 2024

Would also love to see Ollama models in a dropdown in Jan via a model provider! Running models in both Ollama and Jan simultaneously can bring most computers to their knees!

@mrtysn

mrtysn commented Sep 20, 2024

" I was able to symlink ollama models to Jan using https://github.com/sammcj/gollama"

I use gollama to link to LMStudio. How did you use it to link to Jan? Did you put the Jan directory into the LMStudio files path in Gollama?

Should have included the steps in the original comment:

  1. Install ollama and have at least one model downloaded.
  2. Install LMStudio (I assume this step is optional, you should be able to manually create the default folder path for the blobs and skip this). The default folder path is: ~/.cache/lm-studio/models.
  3. Install gollama and make it create symlinks in the default path: gollama -L. Again, you can skip (2) if gollama adds/has support for custom folder paths.
  4. Install Jan and manually import the GGUF file, e.g. ~/.cache/lm-studio/models/llama3.1/llama3.1-latest-GGUF/llama3.1-latest.gguf. (A rough command sequence for these steps is sketched below.)
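A rough shell equivalent of steps 1-3, assuming the default paths above and using llama3.1 purely as an example model:

```sh
# Step 1: pull at least one model with Ollama
ollama pull llama3.1

# Step 2 (skippable if LM Studio is installed): create LM Studio's default models path
mkdir -p ~/.cache/lm-studio/models

# Step 3: have gollama create named symlinks from the Ollama store into that path
gollama -L

# Step 4 happens in Jan's UI: import the resulting .gguf from ~/.cache/lm-studio/models/...
```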

Feedback:

  • It would be good to be able to specify a folder path so you don't have to manually add every model.
  • It would be good if Jan scanned such folders by default.
  • It would be great if no symlinking were necessary and Jan could already see the ollama installation (a sketch of what that could mean follows this list).
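On the last point, the model names and tags are readable straight from Ollama's manifest tree, so "seeing the ollama installation" could be as simple as scanning it. A sketch, assuming the default user-level store (system installs may use /var/lib/ollama/.ollama/models instead, as noted earlier in the thread):

```sh
# List the models/tags Ollama knows about by walking its manifest directory
find ~/.ollama/models/manifests -type f
# e.g. ~/.ollama/models/manifests/registry.ollama.ai/library/llama3.1/latest
```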

@richardstevenhack

I followed the above advice, and I now see that Jan has added an option when importing:

"Keep Original Files & Symlink: You maintain your model files outside of Jan. Keeping your files where they are, and Jan will create a smart link to them."

Very nice!

@abdessalaam

abdessalaam commented Sep 25, 2024

Very nice. Just an observation: it worked for me when I selected the folder to be imported (containing a GGUF file). When I tried to select the symlink to the actual model file, I got an error saying that only GGUF files are supported.

Projects
Status: Scheduled
Development

No branches or pull requests