
docker build: directory /app/backend/data/ does not exist #3311

Closed
tclxyang-guan opened this issue Jun 20, 2024 · 1 comment

Comments

@tclxyang-guan

Bug Report

Description

Bug Summary:
The build fails because the directory `/app/backend/data/` does not exist when the final `chown -R $UID:$GID /app/backend/data/` step runs.

Steps to Reproduce:

```shell
docker build -t open-webui:v1 .
```
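For context, `chown -R` fails outright when its target directory does not exist, which is the error class this build hits at its final step. A quick sketch outside Docker (hypothetical temp path):

```shell
#!/bin/sh
# chown -R on a path that does not exist fails with "No such file or directory",
# mirroring the failing final step of the Dockerfile below.
tmp=$(mktemp -d)
if chown -R "$(id -u):$(id -g)" "$tmp/backend/data/" 2>/dev/null; then
  echo "unexpected: chown succeeded"
else
  echo "chown failed: $tmp/backend/data/ does not exist"
fi
rm -rf "$tmp"
```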

Reproduction Details

The Dockerfile used for the build:

```dockerfile
######## WebUI backend ########
FROM python:3.11-slim-bookworm as base

# Use args
ARG USE_CUDA
ARG USE_OLLAMA
ARG USE_CUDA_VER
ARG USE_EMBEDDING_MODEL
ARG USE_RERANKING_MODEL
ARG UID
ARG GID

# Basis
ENV ENV=prod \
    PORT=8080 \
    # pass build args to the build
    USE_OLLAMA_DOCKER=${USE_OLLAMA} \
    USE_CUDA_DOCKER=${USE_CUDA} \
    USE_CUDA_DOCKER_VER=${USE_CUDA_VER} \
    USE_EMBEDDING_MODEL_DOCKER=${USE_EMBEDDING_MODEL} \
    USE_RERANKING_MODEL_DOCKER=${USE_RERANKING_MODEL}

# Basis URL Config
ENV OLLAMA_BASE_URL="/ollama" \
    OPENAI_API_BASE_URL=""

# API Key and Security Config
ENV OPENAI_API_KEY="" \
    WEBUI_SECRET_KEY="" \
    SCARF_NO_ANALYTICS=true \
    DO_NOT_TRACK=true \
    ANONYMIZED_TELEMETRY=false

# Other models
# whisper TTS model settings
ENV WHISPER_MODEL="base" \
    WHISPER_MODEL_DIR="/app/backend/data/cache/whisper/models"

# RAG Embedding model settings
ENV RAG_EMBEDDING_MODEL="$USE_EMBEDDING_MODEL_DOCKER" \
    RAG_RERANKING_MODEL="$USE_RERANKING_MODEL_DOCKER" \
    SENTENCE_TRANSFORMERS_HOME="/app/backend/data/cache/embedding/models"

# Hugging Face download cache
ENV HF_HOME="/app/backend/data/cache/embedding/models"
# Other models

WORKDIR /app/backend

ENV HOME /root

# Create user and group if not root
RUN if [ $UID -ne 0 ]; then \
    if [ $GID -ne 0 ]; then \
    addgroup --gid $GID app; \
    fi; \
    adduser --uid $UID --gid $GID --home $HOME --disabled-password --no-create-home app; \
    fi

RUN mkdir -p $HOME/.cache/chroma
RUN echo -n 00000000-0000-0000-0000-000000000000 > $HOME/.cache/chroma/telemetry_user_id

# Make sure the user has access to the app and root directory
RUN chown -R $UID:$GID /app $HOME

RUN if [ "$USE_OLLAMA" = "true" ]; then \
    apt-get update && \
    # Install pandoc and netcat
    apt-get install -y --no-install-recommends pandoc netcat-openbsd curl && \
    # for RAG OCR
    apt-get install -y --no-install-recommends ffmpeg libsm6 libxext6 && \
    # install helper tools
    apt-get install -y --no-install-recommends curl jq && \
    # install ollama
    curl -fsSL https://ollama.com/install.sh | sh && \
    # cleanup
    rm -rf /var/lib/apt/lists/*; \
    else \
    apt-get update && \
    # Install pandoc and netcat
    apt-get install -y --no-install-recommends pandoc netcat-openbsd curl jq && \
    # for RAG OCR
    apt-get install -y --no-install-recommends ffmpeg libsm6 libxext6 && \
    # cleanup
    rm -rf /var/lib/apt/lists/*; \
    fi

# install python dependencies
COPY ./backend /app
COPY --chown=$UID:$GID ./backend/requirements.txt ./requirements.txt

RUN pip3 install uv && \
    if [ "$USE_CUDA" = "true" ]; then \
    # If you use CUDA the whisper and embedding model will be downloaded on first use
    pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/$USE_CUDA_DOCKER_VER --no-cache-dir && \
    uv pip install --system -r requirements.txt --no-cache-dir && \
    python -c "import os; from sentence_transformers import SentenceTransformer; SentenceTransformer(os.environ['RAG_EMBEDDING_MODEL'], device='cpu')" && \
    python -c "import os; from faster_whisper import WhisperModel; WhisperModel(os.environ['WHISPER_MODEL'], device='cpu', compute_type='int8', download_root=os.environ['WHISPER_MODEL_DIR'])"; \
    else \
    pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu --no-cache-dir && \
    uv pip install --system -r requirements.txt --no-cache-dir && \
    python -c "import os; from sentence_transformers import SentenceTransformer; SentenceTransformer(os.environ['RAG_EMBEDDING_MODEL'], device='cpu')" && \
    python -c "import os; from faster_whisper import WhisperModel; WhisperModel(os.environ['WHISPER_MODEL'], device='cpu', compute_type='int8', download_root=os.environ['WHISPER_MODEL_DIR'])"; \
    fi; \
    chown -R $UID:$GID /app/backend/data/
```
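The failure comes from that final `chown -R $UID:$GID /app/backend/data/`: nothing earlier in this Dockerfile creates `/app/backend/data/`, because `COPY ./backend /app` places the backend contents directly under `/app` rather than `/app/backend`. A minimal workaround, assuming the data directory is indeed meant to live at that path, is to create it before changing ownership:

```dockerfile
# Hypothetical workaround: ensure the directory exists before chown runs.
RUN mkdir -p /app/backend/data/ && \
    chown -R $UID:$GID /app/backend/data/
```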

@tclxyang-guan (Author)

```dockerfile
COPY ./backend /app
```
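If the intent of that line was to mirror the upstream layout, where the backend code (including its `data/` directory) lives under `/app/backend`, then the copy destination is the likely culprit. A sketch of the corrected line, under that assumption:

```dockerfile
# Copy the backend into /app/backend so that /app/backend/data/ exists
# by the time the final chown runs (assumes ./backend contains data/).
COPY --chown=$UID:$GID ./backend /app/backend
```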

@justinh-rahb justinh-rahb converted this issue into discussion #3313 Jun 20, 2024
