This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →


I'm using open-webui as Ollama's web front end, but would like to get a faster response time. #3099

Closed
BA7LWN opened this issue Jun 12, 2024 · 0 comments

Comments


BA7LWN commented Jun 12, 2024

To preload the mistral model using the generate endpoint, use:

curl http://localhost:11434/api/generate -d '{"model": "mistral"}'

To preload it using the chat completions endpoint, use:

curl http://localhost:11434/api/chat -d '{"model": "mistral"}'

I want to know which one I should choose to get a faster response time.
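One way to answer this empirically is to time a bare preload request against each endpoint and compare. The sketch below assumes a local Ollama server on the default port 11434 with the `mistral` model already pulled; with no server running, the reported times are near zero.

```shell
#!/bin/sh
# Compare preload latency of /api/generate vs /api/chat.
endpoint_time() {
  # Print curl's total request time in seconds for the given API path.
  # "|| true" keeps the script going even if the server is unreachable.
  curl -s -o /dev/null -w '%{time_total}\n' \
    "http://localhost:11434/api/$1" -d '{"model": "mistral"}' || true
}

echo "generate: $(endpoint_time generate)s"
echo "chat:     $(endpoint_time chat)s"
```

Running this a few times (the first call pays the model-load cost, later calls hit the already-loaded model) shows whether the two endpoints differ meaningfully for your setup.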

@open-webui open-webui locked and limited conversation to collaborators Jun 12, 2024
@tjbck tjbck converted this issue into discussion #3100 Jun 12, 2024

