


Model responds with "you didn't provide an image" or equivalent #3282

Closed · 2 of 4 tasks

Rudd-O opened this issue Jun 19, 2024 · 0 comments

Comments

Rudd-O commented Jun 19, 2024

Bug Report

Various vision-capable models I've tried (Llama3, phi3) ignore the images parameter of the /chat API call. They say things like:

As mentioned earlier, without an actual visual provided for "[img-0]" or "this one," it's impossible for me to give a description. However, if these are placeholders for images that should have been included with your message, please make sure the images are attached properly when you resubmit your query. Once they are visible, I can certainly assist in interpreting and describing them!

This was working previously; now it no longer works.

There are no relevant log messages from Ollama or Open WebUI.
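For reference, a chat request with an attached image passes the image as a base64-encoded string in the images field of the user message. The sketch below builds such a request body; the model name "llava" and the image path are placeholders, not values from this report:

```python
import base64
import json

def build_chat_request(model, prompt, image_path):
    """Build a JSON body for a /chat-style request with one attached image.

    The images field is expected to hold base64-encoded image data,
    not a URL or a file path.
    """
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return json.dumps({
        "model": model,
        "messages": [
            {"role": "user", "content": prompt, "images": [image_b64]}
        ],
        "stream": False,
    })
```

If the bug described here is in play, a body shaped like this reaches the model without the images entry being honored, producing the "you didn't provide an image" reply quoted above.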

Description

Bug Summary:

Chat messages with uploaded images are not recognized as having images.

Steps to Reproduce:

Select an image (phone or desktop, it doesn't matter), type a question about the image, and receive the error above.

Expected Behavior:

A description of the image, as I used to get.

Actual Behavior:

See above.

Environment

  • Open WebUI Version: ghcr.io/open-webui/open-webui:main at the time of this bug report

  • Ollama (if applicable): 0.1.44 (installed today)

  • Operating System: Linux (Fedora 40)

  • Browser (if applicable): Chromium on mobile, Firefox on desktop.

Reproduction Details

Confirmation:

  • I have read and followed all the instructions provided in the README.md.
  • I am on the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.

Logs and Screenshots

Browser Console Logs:
[Include relevant browser console logs, if applicable]

Docker Container Logs:
[Include relevant Docker container logs, if applicable]

Screenshots (if applicable):
[Attach any relevant screenshots to help illustrate the issue]

Installation Method

Docker.

Additional Information

None.

Note

If the bug report is incomplete or does not follow the provided instructions, it may not be addressed. Please ensure that you have followed the steps outlined in the README.md and troubleshooting.md documents, and provide all necessary information for us to reproduce and address the issue. Thank you!

@open-webui open-webui locked and limited conversation to collaborators Jun 19, 2024
@tjbck tjbck converted this issue into discussion #3283 Jun 19, 2024

