Open WebUI — Multi-User ChatGPT Clone
Deploy a private AI chat interface with role-based access control for your entire team.
Completed Part 1 (Ollama running), Docker installed, domain name (optional)
25–35 minutes
4GB RAM ($20/mo) minimum; 8GB ($40/mo) recommended for teams of 5+
Looking for a quick-start guide? Check out our standalone Open WebUI Deployment Guide for a streamlined setup walkthrough.
Introduction
Ollama's CLI is powerful but not team-friendly. Open WebUI provides a polished ChatGPT-like experience with conversation history, model switching, file uploads, and — critically — role-based access control so you can manage who accesses what.
💰 Cost comparison: ChatGPT Team costs $25/user/month ($125/mo for 5 users). Open WebUI with Ollama on your RamNode VPS costs $0 per user: unlimited users on a single $20–40/mo plan.
Installing Docker
If you don't already have Docker installed:
```bash
# Install Docker
curl -fsSL https://get.docker.com | sh

# Add your user to the docker group
sudo usermod -aG docker $USER

# Install the Docker Compose plugin
sudo apt install -y docker-compose-plugin

# Verify installation
docker --version
docker compose version
```

Log out and back in for group changes to take effect.
Deploying Open WebUI
Create a project directory and Docker Compose file:
```bash
mkdir -p ~/ai-stack/open-webui && cd ~/ai-stack/open-webui
```

Create `docker-compose.yml`:

```yaml
version: "3.8"

services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    restart: unless-stopped
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://host.docker.internal:11434
      - WEBUI_SECRET_KEY=your-secret-key-change-this
      - ENABLE_SIGNUP=true
    volumes:
      - open-webui-data:/app/backend/data
    extra_hosts:
      - "host.docker.internal:host-gateway"

volumes:
  open-webui-data:
```

Start the stack:

```bash
docker compose up -d
```

Open WebUI is now available at `http://your-server-ip:3000`.
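Before opening a browser, it is worth confirming from the shell that the container came up cleanly. A quick check, using the `container_name` set in the Compose file:

```bash
# Check container status and recent logs
docker compose ps
docker compose logs --tail 20 open-webui

# Confirm the web UI answers locally (a 200 means it is up)
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000
```

If the status shows the container restarting, the logs usually name the cause (most often a bad `OLLAMA_BASE_URL` or a port already in use).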
Initial Setup & Admin Account
The first user to register becomes the admin. Navigate to your instance and create your admin account immediately.
⚠️ Important: Create your admin account before sharing the URL with anyone. The first registration always gets admin privileges.
After logging in, verify Ollama connectivity: go to Settings → Connections and confirm the Ollama URL shows a green connection indicator. Your models from Part 1 should appear in the model selector.
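If the indicator is red, test the same path Open WebUI uses. A troubleshooting sketch, assuming Ollama is on its default port 11434 and that curl is available inside the image:

```bash
# From the host: should return a JSON list of installed models
curl -s http://localhost:11434/api/tags

# From inside the container: verifies the host-gateway alias resolves
docker exec open-webui curl -s http://host.docker.internal:11434/api/tags
```

If the first check succeeds but the second fails, Ollama is likely bound to 127.0.0.1 only; setting `OLLAMA_HOST=0.0.0.0` in Ollama's service environment and restarting it is the usual fix.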
Configuring RBAC
This is where this guide goes beyond most Open WebUI tutorials. Set up proper role-based access control:
User Roles
| Role | Capabilities | Use Case |
|---|---|---|
| Admin | Full access, user management, settings | System administrators |
| User | Chat, file upload, approved models | Team members |
| Pending | No access until approved | New registrations |
Registration Policies
Navigate to Admin Panel → Settings → General and configure:
- Open Registration: Anyone can sign up and immediately use the system
- Approval Required: Users register but must be approved by an admin (recommended for teams)
- Invite Only: Disable registration; admin creates accounts manually (most secure)
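These policies can also be pinned in the Compose file so they survive a redeployment. A sketch, assuming the Open WebUI environment variables `ENABLE_SIGNUP` (already in the Compose file above) and `DEFAULT_USER_ROLE`, which controls the role new registrations start with:

```yaml
    environment:
      # Invite-only: disable self-registration entirely
      - ENABLE_SIGNUP=false
      # Approval-required: allow signup but hold new users as "pending"
      # - ENABLE_SIGNUP=true
      # - DEFAULT_USER_ROLE=pending
```

Whichever policy you choose, keep it consistent between the admin panel and the Compose file so a container rebuild does not silently reopen registration.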
Model Permissions
Restrict which models users can access. For example, reserve larger resource-intensive models for power users while giving everyone access to smaller, faster models.
Customizing the Interface
Create purpose-built model presets for different team workflows:
Code Assistant
Model: codegemma or deepseek-coder. System prompt: "You are an expert programmer. Provide clean, well-documented code with explanations."
Writing Helper
Model: mistral. System prompt: "You are a professional writer. Help with drafting, editing, and improving content. Match the user's tone."
Research Analyst
Model: llama3.1:8b. System prompt: "You are a research analyst. Provide thorough, evidence-based analysis with structured outputs."
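If you prefer to bake a preset into the model itself, Ollama's Modelfile format can embed the system prompt directly. A sketch, using the Writing Helper preset above; the model name `writing-helper` is just an example:

```bash
# Create a derivative model with a built-in system prompt
cat > Modelfile <<'EOF'
FROM mistral
SYSTEM "You are a professional writer. Help with drafting, editing, and improving content. Match the user's tone."
EOF

ollama create writing-helper -f Modelfile
```

The new model then shows up in Open WebUI's model selector like any other, so the prompt applies no matter which client calls it.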
Document Upload & RAG Preview
Open WebUI includes built-in document upload — drag a PDF or text file into a conversation and ask questions about it. This is great for quick, ad-hoc document Q&A.
However, for production document ingestion — processing entire document libraries, managing embeddings at scale, and providing reliable retrieval — you'll want a dedicated RAG pipeline. That's what we'll build in Part 3.
Securing the Deployment
Lock down access to your Open WebUI instance:
```bash
# Allow Open WebUI only from your IP (replace with your IP)
sudo ufw allow from YOUR_IP to any port 3000

# Or restrict to a local network
sudo ufw allow from 10.0.0.0/8 to any port 3000

# Deny public access
sudo ufw deny 3000
```

For HTTPS with a domain name, see Part 8: Production Hardening for the full reverse proxy setup with Let's Encrypt.
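UFW evaluates rules in order, so the allow rules must sit above the deny for your IP to get through. A quick way to confirm:

```bash
# List rules with their positions; allow entries should precede the deny
sudo ufw status numbered
```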
Environment Variable Security
Update the `WEBUI_SECRET_KEY` in your Docker Compose file:

```bash
# Generate a secure key
openssl rand -hex 32
```

Paste the output in place of `your-secret-key-change-this`, then run `docker compose up -d` again to recreate the container with the new key.

What's Next?
Your team now has a private ChatGPT — no data leaving your infrastructure, no per-seat licensing. In Part 3: RAG Pipeline, we'll build a production document intelligence system so your AI can answer from your actual documents:
- Deploy Qdrant vector database
- Build a document ingestion pipeline
- Connect RAG to Open WebUI
- Optimize retrieval quality
