When starting the backend with ./start.sh, there seems to be a problem with the BertTokenizerFast tokenizer, causing a segfault:
Description
Bug Summary:
Starting the backend via ./start.sh aborts with a segmentation fault while loading the BertTokenizerFast tokenizer for the sentence-transformers/all-MiniLM-L6-v2 embedding model from the local Hugging Face cache.
Steps to Reproduce:
pip install -r requirements.txt
./start.sh
Expected Behavior:
The backend starts up and listens on a port.
Actual Behavior:
The backend crashes with a segmentation fault:
OSError: Can't load tokenizer for '/Users/wschrep/.cache/huggingface/hub/models--sentence-transformers--all-MiniLM-L6-v2/snapshots/8b3219a92973c328a8e22fadcfa821b5dc75636a'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '/Users/wschrep/.cache/huggingface/hub/models--sentence-transformers--all-MiniLM-L6-v2/snapshots/8b3219a92973c328a8e22fadcfa821b5dc75636a' is the correct path to a directory containing all relevant files for a BertTokenizerFast tokenizer.
[1] 33382 segmentation fault ./start.sh
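Before filing deeper, it may help to check whether the cached snapshot the error points at actually contains the tokenizer files. A minimal sketch, assuming the typical file layout for a BertTokenizerFast model (the exact required file set is an assumption, not taken from the error output):

```python
import os

# Files a BertTokenizerFast snapshot typically needs; this set is an
# assumption based on the usual sentence-transformers/all-MiniLM-L6-v2 layout.
REQUIRED_FILES = {"tokenizer_config.json", "tokenizer.json", "config.json"}

def missing_tokenizer_files(snapshot_dir):
    """Return which expected tokenizer files are absent from a cached snapshot."""
    present = set(os.listdir(snapshot_dir)) if os.path.isdir(snapshot_dir) else set()
    return REQUIRED_FILES - present

# Example: point this at the snapshot path from the error message above, e.g.
# missing_tokenizer_files(os.path.expanduser(
#     "~/.cache/huggingface/hub/models--sentence-transformers--all-MiniLM-L6-v2"
#     "/snapshots/8b3219a92973c328a8e22fadcfa821b5dc75636a"))
```

If the returned set is non-empty, the cached download is likely incomplete or corrupt, which would match the OSError above.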
Environment
Open WebUI Version: latest git checkout
Ollama (if applicable): 0.1.44
Operating System: macOS Sonoma
Reproduction Details
Confirmation:
I have read and followed all the instructions provided in the README.md.
I am on the latest version of both Open WebUI and Ollama.
I have included the browser console logs.
I have included the Docker container logs.
Logs and Screenshots
File "/Users/wschrep/llmWork/open-webui/backend/main.py", line 43, in <module>
from apps.rag.main import app as rag_app
File "/Users/wschrep/llmWork/open-webui/backend/apps/rag/main.py", line 210, in <module>
update_embedding_model(
File "/Users/wschrep/llmWork/open-webui/backend/apps/rag/main.py", line 187, in update_embedding_model
app.state.sentence_transformer_ef = sentence_transformers.SentenceTransformer(
File "/Users/wschrep/llmWork/open-webui/backend/python_env/lib/python3.9/site-packages/sentence_transformers/SentenceTransformer.py", line 197, in __init__
modules = self._load_sbert_model(
File "/Users/wschrep/llmWork/open-webui/backend/python_env/lib/python3.9/site-packages/sentence_transformers/SentenceTransformer.py", line 1296, in _load_sbert_model
module = Transformer(model_name_or_path, cache_dir=cache_folder, **kwargs)
File "/Users/wschrep/llmWork/open-webui/backend/python_env/lib/python3.9/site-packages/sentence_transformers/models/Transformer.py", line 38, in __init__
self.tokenizer = AutoTokenizer.from_pretrained(
File "/Users/wschrep/llmWork/open-webui/backend/python_env/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 899, in from_pretrained
return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/Users/wschrep/llmWork/open-webui/backend/python_env/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 2094, in from_pretrained
OSError: Can't load tokenizer for '/Users/wschrep/.cache/huggingface/hub/models--sentence-transformers--all-MiniLM-L6-v2/snapshots/8b3219a92973c328a8e22fadcfa821b5dc75636a'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '/Users/wschrep/.cache/huggingface/hub/models--sentence-transformers--all-MiniLM-L6-v2/snapshots/8b3219a92973c328a8e22fadcfa821b5dc75636a' is the correct path to a directory containing all relevant files for a BertTokenizerFast tokenizer.
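Since the traceback points at a tokenizer the backend cannot load from the local Hugging Face cache, one possible workaround (assuming the cached snapshot is corrupt or incomplete, which is not confirmed) is to delete that snapshot so sentence-transformers re-downloads it on the next start:

```shell
# Possible workaround, assuming cache corruption: remove the cached model
# so it is re-downloaded on the next backend start.
# HF_HOME defaults to ~/.cache/huggingface when unset.
CACHE_DIR="${HF_HOME:-$HOME/.cache/huggingface}/hub"
MODEL_DIR="$CACHE_DIR/models--sentence-transformers--all-MiniLM-L6-v2"
rm -rf "$MODEL_DIR"
# then restart the backend: ./start.sh
```

This only removes one cached model, not the whole Hugging Face cache, so other downloads are unaffected.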
Browser Console Logs:
[Include relevant browser console logs, if applicable]
Docker Container Logs:
[Include relevant Docker container logs, if applicable]
Screenshots (if applicable):
[Attach any relevant screenshots to help illustrate the issue]
Installation Method
[Describe the method you used to install the project, e.g., manual installation, Docker, package manager, etc.]
Additional Information
[Include any additional details that may help in understanding and reproducing the issue. This could include specific configurations, error messages, or anything else relevant to the bug.]
Note
If the bug report is incomplete or does not follow the provided instructions, it may not be addressed. Please ensure that you have followed the steps outlined in the README.md and troubleshooting.md documents, and provide all necessary information for us to reproduce and address the issue. Thank you!