feat: whole/full document mode #3129
Comments
Full support. With 32-128k context models now being more common, sometimes it's just easier and more failsafe to pass the whole document into the context if you want a summary.
Yes, I like this. For summarization I have a Modelfile set up just for summarizing, so I can paste in my entire document (text) and get back just the summary.
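The actual Modelfile isn't shown in the thread, but a minimal summarize-only Modelfile for Ollama along those lines could look like this (the base model, window size, and prompt are illustrative):

```
FROM llama3.1
# Enlarge the context window so long pasted documents fit in one shot.
PARAMETER num_ctx 32768
# Constrain the model to summarization only.
SYSTEM """You are a summarizer. For any text you receive, return only a concise summary of it."""
```

It would be built with `ollama create summarizer -f Modelfile` and used with `ollama run summarizer`.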
That would be really useful for several edge cases. I recently ran into this, not with summaries but with some analysis: it was not about the content but patterns in it (so the entire context needed to survive at once). I ended up copying and pasting a lot, and eventually throwing it into Claude because this got tedious quickly. RAG is amazing, but there are cases where a simple 1:1 is all you need.
With llama3.1 supporting 128k context, this would be an important feature to have.
I think a lot of people are going to soon realize that Ollama doesn't actually run models at their full context size by default, and that if you wanted to run even 8B at 128K you'd need over 100GB of VRAM...
Just because some can't fit it doesn't mean others can't. num_ctx is not exactly hidden or advanced to understand, and you don't have to max it out just because you can. On top of that, not everything uses attention that scales quadratically with sequence length. Open WebUI even has it more clearly worded as context length in the model file.
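For reference, `num_ctx` can also be raised per request through Ollama's REST API instead of being baked into a Modelfile; a quick sketch (model name and window size are just examples):

```python
import requests

# Ask Ollama for a summary with an explicitly enlarged context window.
# A num_ctx larger than your hardware can hold will spill into system RAM or fail.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",
        "prompt": "Summarize the following document:\n\n" + open("doc.txt").read(),
        "stream": False,
        "options": {"num_ctx": 32768},  # per-request context length override
    },
)
print(response.json()["response"])
```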
I wasn't arguing against it @rvkwi; you can see that I did thumbs-up the OP. You and I may know how …
Is your feature request related to a problem? Please describe.
Sometimes it may be ideal to pass the whole document to the LLM for tasks like summarization. Currently, if you upload a document and ask the LLM to summarize it, retrieval will most likely return no relevant results. Since the user can't turn off the retrieval step, the LLM becomes basically useless in this rather common use case. Passing the full document is also useful for tasks like translation or sentiment analysis.
Describe the solution you'd like
When using an uploaded document or fetched webpage, there should be a checkbox allowing the user to pass the entire ingested document as the context. This should be a relatively easy change: skip retrieval and pass the full document straight into the query.
Ideally, there should be a warning when the content is larger than the LLM's context size, so the user knows that truncation or degraded output may occur.
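A rough sketch of that bypass, to make the idea concrete. The function and parameter names are hypothetical, not Open WebUI's actual internals, and the token estimate is a crude ~4 characters/token heuristic:

```python
def retrieve_context(document: str, query: str) -> str:
    """Placeholder for the existing RAG retrieval step."""
    ...

def build_context(document: str, query: str, use_full_document: bool,
                  model_ctx_tokens: int) -> tuple[str, str | None]:
    """Return (context, warning): the whole document when the user opts in,
    otherwise whatever the normal retrieval path returns."""
    if use_full_document:
        approx_tokens = len(document) // 4  # rough heuristic: ~4 chars/token
        warning = None
        if approx_tokens > model_ctx_tokens:
            warning = (f"Document is roughly {approx_tokens} tokens but the "
                       f"model's context window is {model_ctx_tokens}; the "
                       "output may be truncated or degraded.")
        return document, warning
    return retrieve_context(document, query), None
```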
Describe alternatives you've considered
Additional context
This feature is also present in some other chat UIs, e.g., “document pinning” in AnythingLLM.
I'm not sure how this will interact with pipelines, since pipelines can basically do anything. Such a flag could also be passed to compatible RAG pipelines to tell them whether the user wants context retrieval or a full-context insertion; see the sketch below.
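On the pipeline side, that could look something like this. The `full_context` flag and the `documents` body field are hypothetical here; the `pipe` signature follows the Pipelines examples:

```python
from typing import List

class Pipeline:
    def pipe(self, user_message: str, model_id: str,
             messages: List[dict], body: dict) -> str:
        # Hypothetical flag: the UI checkbox would set this on the request body.
        if body.get("full_context"):
            # Skip retrieval entirely and inline the ingested documents verbatim.
            docs = "\n\n".join(doc["content"] for doc in body.get("documents", []))
            return f"{docs}\n\n{user_message}"
        # ...otherwise fall through to the pipeline's normal retrieval logic...
        return user_message
```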