This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →


enh: many model chat ui #2461

Closed
chrisoutwright opened this issue May 21, 2024 · 4 comments

Comments


Is your feature request related to a problem? Please describe.
I'm always frustrated when adding new models on top compresses the viewable chat canvas vertically, making the chat harder to read and interact with. This reduces usability and the overall user experience, especially when multiple models are involved.

Describe the solution you'd like
I would like a toggle that either displays the set of models on a single line or places them in a separate panel to the side. This way, the chat canvas height remains unaffected, ensuring a consistent and readable chat interface. Ideally, this toggle could be implemented as a button or a dropdown that lets users easily switch between compact and expanded views.

Describe alternatives you've considered

Horizontal Scroll: Allow the models to be displayed in a single horizontal line with a scroll bar if the models exceed the screen width.
Collapsible Menu: Implement a collapsible menu for the models that users can expand or collapse as needed.
Separate Section: Place the models in a separate section on the side of the chat interface, allowing the chat canvas to maintain its full height.
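The "collapse below a minimum width" behavior from these alternatives could be sketched as a small layout helper. This is only an illustration of the decision logic, not Open WebUI's actual frontend code; the function name, mode names, and the 320px threshold are all hypothetical:

```python
def layout_mode(container_width: int, n_models: int, min_col_width: int = 320) -> str:
    """Decide how to render the model response columns.

    Returns one of (names illustrative, not Open WebUI's):
      - "side-by-side": every model gets a column at least min_col_width wide
      - "scroll":       keep one row, but add a horizontal scroll bar
      - "collapsed":    collapse columns behind an expand arrow
    """
    if n_models <= 0:
        raise ValueError("need at least one model")
    per_column = container_width // n_models
    if per_column >= min_col_width:
        return "side-by-side"
    # If a single full-width column would still fit, horizontal
    # scrolling keeps each response readable instead of squeezing it.
    if container_width >= min_col_width:
        return "scroll"
    return "collapsed"
```

For example, a 900px-wide container with three models would fall back to the scrolling mode rather than rendering three 300px columns.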

@tjbck tjbck changed the title Chat Canvas Preservation with Model List Toggle enh: many model chat ui May 21, 2024

kwekewk commented May 26, 2024

I suggest something similar to Gemini's drafts, along with the ability to regenerate responses.

@dexwenway

This feature is very helpful for interacting with multiple models in a single conversation, and I hope it can be improved further in the future.


Maralai commented Jun 15, 2024

I would also like to share some thoughts on the current implementation of running multiple models side by side in the interface. While I appreciate the effort behind this feature, I have a few suggestions and observations based on my experience.

  1. User Interface Layout:

    • Current Side-by-Side Layout: I find the current narrower side-by-side layout less intuitive compared to when each model's response filled the entire screen width. The latter provided a cleaner and more focused experience per model.
    • Single Thread per Model: Previously, each model would respond in a separate thread (horizontal response), allowing users to develop divergent branches of thought specific to each model. This approach felt more intuitive and was better suited to exploring the different reasoning patterns offered by different models.
  2. Prompt Targeting:

    • Issue with Subsequent Prompts: In the current setup, subsequent prompts are sent to all models, which is problematic when models provide disparate responses. Follow-up prompts are generally intended for a specific model's response rather than all at once.
    • Desired Functionality: It would be more effective if the interface allowed users to direct prompts to specific models. This could be achieved by:
      • Implementing a graphical indicator (e.g., a color border) to show which model's response a user is engaging with.
      • Providing an option to select one or multiple models for subsequent prompts.
      • Displaying a contextual label above the prompt text box to indicate which models will receive the next prompt.
      • Including a menu to select whether the prompt should be sent to one, some, or all models.
  3. Model Interaction Enhancement:

    • Incorporating a way to pass each selected model's response along with the original message could facilitate a “mixture of agents” approach, enhancing the capability to combine insights from multiple models effectively.
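The prompt-targeting and "mixture of agents" ideas above could be sketched roughly as follows. Everything here is hypothetical: the `ask` callable stands in for whatever backend call actually queries a model, and the aggregation prompt format is just one possible choice:

```python
from typing import Callable, Dict, Iterable


def fan_out(prompt: str,
            targets: Iterable[str],
            ask: Callable[[str, str], str]) -> Dict[str, str]:
    """Send a follow-up prompt only to the user-selected models,
    rather than broadcasting it to every model in the chat."""
    return {model: ask(model, prompt) for model in targets}


def aggregate(original: str, responses: Dict[str, str]) -> str:
    """Build a 'mixture of agents' prompt that passes each selected
    model's response along with the original message, so a combiner
    model can merge their insights."""
    parts = [f"Original message: {original}"]
    for model, reply in sorted(responses.items()):
        parts.append(f"Response from {model}: {reply}")
    parts.append("Combine the insights from the responses above.")
    return "\n".join(parts)
```

The point of splitting the two steps is that `fan_out` covers the targeting request (send to one, some, or all models), while `aggregate` covers the mixture-of-agents request independently.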

Ultimately, the current multi-model feature seems geared more towards zero-shot evaluation than iterative prompting. It would be immensely helpful if there were a setting to revert to the earlier interface behavior, enabling more targeted, model-specific interactions.

Thank you for considering these suggestions. I’m hopeful they can enhance the usability and effectiveness of the multi-model feature.


chrisoutwright commented Jun 23, 2024

What I originally meant is that even before we got the side-by-side layout (but still with the top cutoff), the vertical cutoff was not ideal:
[screenshot: chat canvas compressed vertically by the model list]

Now, with side-by-side, I wish there were an option to specify the minimum column width below which the output gets collapsed behind an arrow. At the moment the window needs to be quite slim before that happens:

[screenshot: narrow side-by-side model columns]

This is quite difficult to read (note how the third column ends up with nearly one token per line).

@open-webui open-webui locked and limited conversation to collaborators Aug 19, 2024
@tjbck tjbck converted this issue into discussion #4727 Aug 19, 2024



4 participants