
This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →


/ollama/api/generate api for customGPTs #3609

Closed
veya2ztn opened this issue Jul 3, 2024 · 0 comments


veya2ztn commented Jul 3, 2024

Open WebUI supports API usage like [OWEB]/ollama/api/generate for base models loaded from Ollama. However, if I want to wrap a base model (for example, the qwen2 model) with a certain

  • system prompt
  • knowledge base for RAG

then [OWEB]/ollama/api/generate returns status code 400.

Below is a sample that gets a proper result via [OWEB]/ollama/api/generate, using the model named qwen2:72b-instruct-q4_0:
[screenshot: successful /ollama/api/generate request against the base model]
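For reference, a minimal sketch of such a request, assuming a local Open WebUI instance at http://localhost:3000 (the host, API key, and prompt are placeholders, not from this report; the payload fields follow Ollama's /api/generate API):

```python
import json
from urllib import request

OWEB = "http://localhost:3000"  # hypothetical Open WebUI host
API_KEY = "YOUR_API_KEY"        # placeholder

def build_generate_payload(model: str, prompt: str) -> dict:
    """Payload shape for Ollama's /api/generate, as proxied by Open WebUI."""
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_generate_payload("qwen2:72b-instruct-q4_0", "def fib(n):")

# Sending requires a running Open WebUI instance, so it is commented out:
# req = request.Request(
#     f"{OWEB}/ollama/api/generate",
#     data=json.dumps(payload).encode(),
#     headers={"Authorization": f"Bearer {API_KEY}",
#              "Content-Type": "application/json"},
# )
# print(request.urlopen(req).read().decode())
```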

If I wrap it in a custom GPT, even with no extra settings at all (we name it qwentest), the same request fails:
[screenshot: /ollama/api/generate request against the custom model returning status code 400]
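For comparison, a sketch of the failing case: the identical payload with only the model name changed to the custom GPT. Per this report the server responds with 400 (host and prompt are placeholders):

```python
import json
from urllib import request, error

OWEB = "http://localhost:3000"  # hypothetical Open WebUI host

# Same payload shape as the working call, but targeting the custom model.
payload = {"model": "qwentest", "prompt": "def fib(n):", "stream": False}

# Against a live instance this reportedly raises HTTP 400:
# try:
#     req = request.Request(
#         f"{OWEB}/ollama/api/generate",
#         data=json.dumps(payload).encode(),
#         headers={"Content-Type": "application/json"},
#     )
#     request.urlopen(req)
# except error.HTTPError as e:
#     print(e.code)  # 400, per this issue
```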

The reason for this suggestion:

The [OWEB]/ollama/api/generate endpoint is usually used for code completion tasks. Currently, it relies only on the ability of the base model, such as starcoder2 or deepseek-coder. Enabling a preset prompt could help produce much more robust code. Moreover, I also wish code completion could eventually become general text completion, for writing in a Copilot-like way.

@justinh-rahb justinh-rahb converted this issue into discussion #4061 Jul 23, 2024
@open-webui open-webui deleted a comment from 812781385 Jul 24, 2024

