epic: Better Design abstraction for Remote AI (and Engineering abstraction) #994
Comments
Archived original comment from @0xSage
Success Criteria
Additional context
I looked through the UI mockups and I can see one thing is missing: allowing users to specify their own URL for OpenAI-compatible models. For instance, I have a model running on my server via oobabooga (which provides an OpenAI-compatible API out of the box) or via ollama/LiteLLM. I want to be able to chat with that model from Jan, and for that I need to be able to add a remotely inferenced model with an OpenAI-compatible API at a custom URL.
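The request above boils down to letting the client keep the standard OpenAI request shape while swapping only the base URL. A minimal sketch of what that implies (the base URL and model name below are illustrative placeholders, not Jan defaults):

```python
import json
from urllib.parse import urljoin

def build_chat_request(base_url: str, model: str, prompt: str) -> tuple[str, str]:
    """Return the endpoint URL and JSON body for an OpenAI-compatible
    /v1/chat/completions call against an arbitrary server."""
    # Normalize the base URL so path joining works whether or not it
    # ends with a trailing slash.
    url = urljoin(base_url.rstrip("/") + "/", "v1/chat/completions")
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, body

# e.g. a local oobabooga or LiteLLM server exposing the OpenAI API
url, body = build_chat_request("http://localhost:5000", "local-model", "Hello")
print(url)  # http://localhost:5000/v1/chat/completions
```

Because servers like oobabooga, LiteLLM, and Ollama all expose this same route, a single "custom base URL" field (plus an optional API key) is enough to cover them.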
Thanks for flagging this @kha84. I'll add this user story as well!
I am sunsetting this issue in favor of a more holistic redesign of the "Provider" abstraction:
Problem
Success Criteria
API key needed
Additional context
Add any other context or screenshots about the feature request here.