Getting this error: KeyError: 'model'
#3634
-
Not sure why this is closed. This almost looks like a problem with config.py when setting environment variables, so models can't be pulled. In my case, I have ollama running as a separate container on my localhost, to which an open-webui container connects:

```
docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
```

When reaching out, open-webui should be using host.docker.internal, not localhost or 127.0.0.1. And yet that is exactly what seems to be happening with the "health" calls to some random port. The line that raises is:

```python
app.state.MODELS = {model["model"]: model for model in models["models"]}
```
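For anyone else debugging this: the `KeyError: 'model'` fires when an entry in the Ollama `/api/tags` response lacks a `"model"` key (older Ollama builds only returned `"name"`). Here is a minimal diagnostic sketch, assuming the standard `/api/tags` endpoint; the base URL is an assumption you should adapt to your setup:

```python
import requests

# Assumed base URL; change to wherever your ollama actually listens.
OLLAMA_BASE_URL = "http://host.docker.internal:11434"

resp = requests.get(f"{OLLAMA_BASE_URL}/api/tags", timeout=5)
models = resp.json()

# Inspect which keys each model entry actually carries. An entry without
# "model" is exactly what makes the dict comprehension above blow up.
for entry in models.get("models", []):
    print(sorted(entry.keys()))
    if "model" not in entry:
        print(f"-> entry {entry.get('name', '?')} has no 'model' key; "
              "this is what triggers KeyError: 'model'")
```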
-
Same issue here. I found that ollama 0.3.0 doesn't have […]. I've fixed it by filling in […].
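If the gap above refers to the missing `"model"` field in the tags response, one defensive workaround is to fall back to `"name"` when `"model"` is absent. A sketch of that idea only; this is not the upstream open-webui patch, and `build_model_map` is a hypothetical helper:

```python
def build_model_map(tags_response: dict) -> dict:
    """Key entries by "model", falling back to "name" when "model"
    is absent, so the lookup never raises KeyError: 'model'."""
    return {
        entry.get("model", entry.get("name")): entry
        for entry in tags_response.get("models", [])
    }

# Usage in place of the failing line:
# app.state.MODELS = build_model_map(models)
```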
-
I have been looking for quite some time now, yelling at my screen, and finally just found out what was wrong for me. I had set up ollama using https://github.com/ollama/ollama/blob/main/docs/linux.md in WSL2 and had forgotten about it. Along the way I also set up ollama using the Windows executable. So lately what I had been updating was the Windows executable, not the Linux binary: I was stuck at version 0.1.x, which I was able to confirm using http://192.168.1.xxx:11434/api/version. I decided to uninstall both the Linux version and the Windows version and run a docker container using a compose file. Now it works.
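If you suspect a stale install is answering (as with the forgotten WSL2 binary here), probing `/api/version` on each candidate address makes it obvious which ollama instance you are actually talking to. A small sketch; the candidate list is an assumption to adapt:

```python
import requests

# Hypothetical candidates: fill in the addresses your setup might resolve to.
candidates = [
    "http://localhost:11434",
    "http://host.docker.internal:11434",
    "http://192.168.1.xxx:11434",  # placeholder, as in the comment above
]

for base in candidates:
    try:
        version = requests.get(f"{base}/api/version", timeout=2).json()
        print(f"{base} -> ollama {version.get('version')}")
    except requests.RequestException as exc:
        print(f"{base} -> unreachable ({exc.__class__.__name__})")
```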
-
Getting this error on running the latest docker image: