Add back history and reset subcommand in magics #997
Conversation
Thank you for opening this PR and for re-implementing message history in the magics! I've left suggestions about 1) bounding the chat history to 2 exchanges at most, and 2) avoiding the pseudo-XML syntax being used for non-chat providers. This is a good stopgap solution for users who want to use history in AI magics as soon as possible.
There are better ways to pass message history in LangChain, however. In the future, we will definitely want to rework this logic to use the new LCEL syntax and the `RunnableWithMessageHistory` class from `langchain_core.runnables.history`; see #392.
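The stopgap approach suggested above (bounding the transcript passed to the model while keeping the full history in memory) can be sketched roughly as follows. This is an illustrative sketch, not the PR's actual code; the function and variable names are assumptions:

```python
# Sketch: bound the context sent to the model to the last `max_history`
# human/AI exchanges (2 messages per exchange). The full transcript is
# kept untouched, so raising max_history mid-session re-includes older
# exchanges in later interactions.
def bounded_context(transcript, max_history):
    """Return the last `max_history` exchanges from the transcript."""
    if max_history <= 0:
        return []
    return transcript[-2 * max_history:]

transcript = [
    ("human", "What is 1+1?"), ("ai", "2"),
    ("human", "And 2+2?"), ("ai", "4"),
    ("human", "And 3+3?"), ("ai", "6"),
]
context = bounded_context(transcript, max_history=2)
```

With `max_history=2`, only the two most recent exchanges (four messages) reach the model; `max_history=0` sends the new prompt with no context at all.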
Thank you for your contribution! It's been a week since we've last heard from you, so I'm closing this PR due to inactivity. Please feel free to re-open this PR after addressing the review feedback.
I don't seem to have permission to re-open. Could you do that?
Yeah, that's the problem with closing PRs on GitHub.
Update: I solved the below question myself, see the next message. May I ask for advice, @dlqqq and @krassowski? What might be the reason
I asked:
Oh, I seem to have forgotten to add `config=True` to the traitlet!
Based on my manual testing… Interestingly, if I use
I fixed the behavior when
@krassowski & @dlqqq would it make sense to include message history in the context also when invoking custom chains? I haven't touched that part, so it still only sends the prompt entered in the cell:

```python
if args.model_id in self.custom_model_registry and isinstance(
    self.custom_model_registry[args.model_id], LLMChain
):
    # Get the output, either as raw text or as the contents of the 'text' key of a dict
    invoke_output = self.custom_model_registry[args.model_id].invoke(prompt)
```
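For context, one way history *could* be folded into a custom-chain invocation is to render prior exchanges into the prompt string before calling `invoke`. This is a hypothetical sketch of that idea, not the PR's implementation; the transcript shape and the `role: text` rendering are assumptions:

```python
# Hypothetical sketch: prepend prior exchanges to the new prompt before
# invoking a custom chain. Whether custom chains should receive history
# at all is the open question posed above.
def prompt_with_history(transcript, prompt):
    """Render (role, text) pairs plus the new prompt as one string."""
    lines = [f"{role}: {text}" for role, text in transcript]
    lines.append(f"Human: {prompt}")
    return "\n".join(lines)
```

The chain would then be invoked with `prompt_with_history(transcript, prompt)` instead of the bare `prompt`.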
@akaihola Awesome work! Thank you for including unit test coverage as well. I've left some additional minor feedback below. Once that's addressed, this is ready to be merged. 👍
This time it's not specific to OpenAI.
All history is still kept, to allow configuring a higher `max_history` mid-session and have older exchanges be included.
- `context` instead of `self.transcript` as if clause condition - less confusing model context input variable
@dlqqq I believe I've now addressed your fine review feedback!
@akaihola Awesome work, thank you! 🎉
* Add back history and reset subcommand in magics

  This time it's not specific to OpenAI.
* Document `%ai reset`
* Add the `max_history` magic command option
* Respect `max_history` when calling the model

  All history is still kept, to allow configuring a higher `max_history` mid-session and have older exchanges be included.
* Use `<role>:` prefix instead of pseudo-XML for non-chat models
* Document the `max_history` setting for the magic commands
* fix: make `max_history` a configuration option
* fix: `max_history=0` to generate with no context
* Add unit test for `max_history` magic option
* Add unit test for `%ai reset` magic command
* Test that transcript is updated with new prompt
* Review corrections
  - `context` instead of `self.transcript` as if clause condition
  - less confusing model context input variable
#551 removed the history associated uniquely with the `openai-chat` provider in magic commands. It also removed the "reset" command to delete said history. Docs were updated to remove mention of the history and the reset command.

This PR adds back the history in magic commands. It also adds the `%ai reset` subcommand to delete said history. A mention of the history and the reset command are added to docs.

The history transcript maintains the distinction between human and AI messages by wrapping the prompts and responses in `HumanMessage` and `AIMessage` objects.

The maximum number of Human/AI message exchanges to include before the new prompt can be configured using `%config AiMagics.max_history = <n>`, defaulting to 2. The whole history is still kept in memory, so raising this option value will start including older messages in subsequent interactions.

For non-chat providers, human messages are ~~wrapped in pseudo XML `<HUMAN>...</HUMAN>` tags~~ prepended with `AI:` or `Human:` unless there is nothing but the first prompt. This is yet untested.