open-webui
Open WebUI is an extensible, feature-rich, and user-friendly self-hosted AI platform designed to operate entirely offline. It supports various LLM runners such as Ollama and OpenAI-compatible APIs, with a built-in inference engine for RAG, making it a powerful AI deployment solution.
Installation with Default Configuration
If Ollama is on your computer, use this command:
```bash
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```

If Ollama is on a different server, use this command:
To connect to Ollama on another server, change the `OLLAMA_BASE_URL` to the server's URL:

```bash
docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```

To run Open WebUI with Nvidia GPU support, use this command:
```bash
docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
```
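The commands above differ only in the image tag and in the GPU-related flags. As a minimal sketch, the choice could be scripted; the assumption here (mine, not from the docs) is that having `nvidia-smi` on the PATH means a usable NVIDIA GPU:

```shell
#!/bin/sh
# Sketch: pick the Open WebUI image tag and GPU flag based on the host.
# Assumption: nvidia-smi being on the PATH implies an NVIDIA GPU is available.
if command -v nvidia-smi >/dev/null 2>&1; then
  TAG=cuda
  GPU_FLAG="--gpus all"
else
  TAG=main
  GPU_FLAG=""
fi
CMD="docker run -d -p 3000:8080 $GPU_FLAG --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:$TAG"
echo "$CMD"   # print the command for review before running it
```

Printing the command first rather than executing it directly lets you confirm the tag and flags match your hardware.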
Installation for OpenAI API Usage Only
If you're only using the OpenAI API, use this command:
```bash
docker run -d -p 3000:8080 -e OPENAI_API_KEY=your_secret_key -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
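Typing the key inline leaves it in your shell history. A small sketch that takes it from an environment variable instead; the `your_secret_key` fallback is just a placeholder, as in the command above:

```shell
#!/bin/sh
# Sketch: pass the OpenAI key from the host environment rather than hard-coding it.
# The default below is a placeholder for illustration only.
: "${OPENAI_API_KEY:=your_secret_key}"
CMD="docker run -d -p 3000:8080 -e OPENAI_API_KEY=$OPENAI_API_KEY -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main"
echo "$CMD"   # review the command, then run it
```

Export `OPENAI_API_KEY` in your shell profile (or a secrets manager) so the key never appears in the command you type.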
Installing Open WebUI with Bundled Ollama Support
This installation method uses a single container image that bundles Open WebUI with Ollama, allowing for a streamlined setup via a single command. Choose the appropriate command based on your hardware setup:
With GPU Support: Utilize GPU resources by running the following command:
```bash
docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
```

For CPU Only: If you're not using a GPU, use this command instead:
```bash
docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
```
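The same bundled setup can be written as a Compose file so the flags live in version control. A sketch of the CPU-only variant (the file and service names are my choice, not from the docs; GPU access would additionally need a device reservation in the service definition):

```yaml
# docker-compose.yaml — sketch of the CPU-only bundled command above
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:ollama
    ports:
      - "3000:8080"          # host port 3000 -> container port 8080
    volumes:
      - ollama:/root/.ollama           # Ollama models, as in -v ollama:/root/.ollama
      - open-webui:/app/backend/data   # Open WebUI data, as in -v open-webui:/app/backend/data
    restart: always

volumes:
  ollama:
  open-webui:
```

Start it with `docker compose up -d`; each named volume and port mapping corresponds one-to-one to a flag in the `docker run` command above.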