Prerequisites
Before starting, ensure you have:
Server Requirements
- RamNode VPS (2GB+ RAM recommended)
- Ubuntu 20.04/22.04 or Debian 11+
- 1+ CPU cores
- 10GB+ disk space
- SSH access to your VPS
Knowledge Requirements
- Basic Linux command line
- Understanding of Docker
- Basic networking knowledge
- Domain name (optional)
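You can confirm the server meets these requirements with a few standard commands (exact output varies by plan):

```shell
# Quick sanity check of the VPS resources
nproc                                    # number of CPU cores
free -h || grep MemTotal /proc/meminfo   # RAM (fallback if free is unavailable)
df -h /                                  # free disk space on the root filesystem
```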
Initial Server Setup
Connect to your RamNode VPS and prepare the environment:
ssh root@your-server-ip
apt update && apt upgrade -y
adduser openwebui
usermod -aG sudo openwebui
su - openwebui

💡 Security Tip: Running OpenWebUI as a dedicated user improves security by limiting permissions and isolating the application.
Install Docker
OpenWebUI runs in Docker containers for easy deployment and management:
sudo apt install -y apt-transport-https ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
sudo systemctl start docker
sudo systemctl enable docker
sudo usermod -aG docker $USER

Log out and back in (or run newgrp docker) so the group change takes effect, then verify:

docker --version
docker compose version

✅ Docker is now installed and ready for OpenWebUI deployment!
Deploy OpenWebUI
Deploy OpenWebUI using Docker:
mkdir -p ~/openwebui
cd ~/openwebui
docker run -d \
  --name openwebui \
  -p 3000:8080 \
  -v openwebui:/app/backend/data \
  --restart unless-stopped \
  ghcr.io/open-webui/open-webui:main
docker ps
docker logs openwebui

What is OpenWebUI?
OpenWebUI is a feature-rich, self-hosted web interface for large language models. It provides a ChatGPT-like experience while giving you complete control over your data, conversations, and AI models. It supports multiple models, user management, and extensive customization options.
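If you prefer Docker Compose, the same deployment can be sketched as a docker-compose.yml in ~/openwebui (an untested equivalent of the docker run command above; the external volume reuses the one that command creates):

```yaml
services:
  openwebui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: openwebui
    ports:
      - "3000:8080"
    volumes:
      - openwebui:/app/backend/data
    restart: unless-stopped

volumes:
  openwebui:
    external: true  # reuse the volume created by the docker run command above
```

Start it with docker compose up -d and tear it down with docker compose down.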
Basic Configuration
Configure OpenWebUI settings and environment:
docker stop openwebui
nano ~/openwebui/.env

Add these environment variables:
# Basic Configuration
WEBUI_SECRET_KEY=your-secret-key-here
WEBUI_NAME="Your OpenWebUI Instance"
# Security Settings
ENABLE_SIGNUP=false
DEFAULT_USER_ROLE=pending
# Model Settings
OLLAMA_BASE_URL=http://localhost:11434
# Optional: OpenAI API Integration
OPENAI_API_KEY=your-openai-key-here
OPENAI_API_BASE_URL=https://api.openai.com/v1

Remove the old container and start a new one that loads the environment file:

docker rm openwebui
docker run -d \
  --name openwebui \
  -p 3000:8080 \
  -v openwebui:/app/backend/data \
  --env-file ~/openwebui/.env \
  --restart unless-stopped \
  ghcr.io/open-webui/open-webui:main

🔐 Security: Replace "your-secret-key-here" with a strong, unique secret key. Keep your environment file secure!
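One way to generate a strong value for WEBUI_SECRET_KEY (assumes openssl is installed, which it is on stock Ubuntu/Debian):

```shell
# Generate a random 32-byte key, printed as 64 hex characters
KEY=$(openssl rand -hex 32)
echo "WEBUI_SECRET_KEY=$KEY"
```

Paste the printed line into ~/openwebui/.env in place of the placeholder.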
Install Ollama (Local Models)
Install Ollama to run local AI models with OpenWebUI:
curl -fsSL https://ollama.com/install.sh | sh
sudo systemctl start ollama
sudo systemctl enable ollama
ollama pull llama2
ollama list
ollama run llama2 "Hello, how are you?"

Recreate the OpenWebUI container with host networking so it can reach Ollama at localhost:11434 (the OLLAMA_BASE_URL set earlier). Note that with --network=host the port mapping no longer applies: OpenWebUI listens directly on port 8080.

docker stop openwebui
docker rm openwebui
docker run -d \
  --name openwebui \
  --network=host \
  -v openwebui:/app/backend/data \
  --env-file ~/openwebui/.env \
  --restart unless-stopped \
  ghcr.io/open-webui/open-webui:main

🧠 Local AI: You now have local AI models running on your server! Ollama provides privacy and control over your AI interactions.
Set Up Nginx Reverse Proxy
Configure Nginx as a reverse proxy for OpenWebUI:
sudo apt install nginx -y
sudo nano /etc/nginx/sites-available/openwebui

Add the following Nginx configuration:
server {
    listen 80;
    server_name your-domain.com;  # Replace with your domain
    client_max_body_size 100M;

    location / {
        proxy_pass http://localhost:8080;  # 8080 with --network=host; use 3000 if you kept the -p 3000:8080 mapping
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 86400;
        proxy_send_timeout 86400;
    }
}

sudo ln -s /etc/nginx/sites-available/openwebui /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl restart nginx

Set Up SSL with Let's Encrypt
Secure your OpenWebUI installation with SSL certificate:
sudo apt install certbot python3-certbot-nginx -y
sudo certbot --nginx -d your-domain.com
sudo certbot renew --dry-run

🔒 SSL Enabled: Your OpenWebUI instance is now secured with HTTPS!
User Management
Configure user access and permissions in OpenWebUI:
First-Time Setup
Access your OpenWebUI instance:
1. Open your browser and go to https://your-domain.com
2. Create your admin account (first user is automatically admin)
3. Sign in with your new account
4. Go to Settings → Admin Settings
User Management Options
- Disable Signup: Prevent new user registrations
- User Roles: Set default roles for new users
- Model Access: Control which models users can access
- Chat History: Enable/disable conversation persistence
Then lock down the firewall so only SSH and web traffic reach the server:

sudo ufw allow ssh
sudo ufw allow 80
sudo ufw allow 443
sudo ufw --force enable

Model Management
Manage AI models in your OpenWebUI installation:
Popular Ollama Models
# Lightweight models (good for 2-4GB RAM)
ollama pull llama2:7b
ollama pull mistral:7b
ollama pull codellama:7b
# Larger models (requires 8GB+ RAM)
ollama pull llama2:13b
ollama pull vicuna:13b
# Specialized models
ollama pull deepseek-coder:6.7b # Code generation
ollama pull dolphin-mixtral:8x7b # Advanced reasoning

Model Management Commands
# List installed models
ollama list
# Remove unused models
ollama rm model-name
# Update a model
ollama pull model-name
# Check model info
ollama show model-name

💡 Resource Tip: Start with smaller 7B models and upgrade to larger ones based on your VPS capacity and performance needs.
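Models also consume disk space, not just RAM. This small helper (a sketch; the default path assumes the standard Linux systemd install of Ollama) reports how much space the model store uses:

```shell
# Report total disk usage of the Ollama model store.
# Default path assumes the standard Linux systemd install.
model_disk_usage() {
    local dir="${1:-/usr/share/ollama/.ollama/models}"
    if [ -d "$dir" ]; then
        du -sh "$dir" | cut -f1
    else
        echo "not found: $dir" >&2
        return 1
    fi
}

model_disk_usage || true
```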
Performance Optimization
Optimize OpenWebUI and Ollama for your RamNode VPS:
System Optimization
echo "* soft nofile 65536" | sudo tee -a /etc/security/limits.conf
echo "* hard nofile 65536" | sudo tee -a /etc/security/limits.conf
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

Docker Resource Limits
docker stop openwebui
docker rm openwebui
docker run -d \
  --name openwebui \
  --network=host \
  -v openwebui:/app/backend/data \
  --env-file ~/openwebui/.env \
  --memory="1g" \
  --cpus="1.0" \
  --restart unless-stopped \
  ghcr.io/open-webui/open-webui:main

Ollama Configuration
# Create Ollama configuration
sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo nano /etc/systemd/system/ollama.service.d/override.conf

[Service]
Environment="OLLAMA_MAX_LOADED_MODELS=2"
Environment="OLLAMA_NUM_PARALLEL=2"
Environment="OLLAMA_FLASH_ATTENTION=1"

sudo systemctl daemon-reload
sudo systemctl restart ollama

⚡ Performance Tip: Monitor resource usage with htop and docker stats to fine-tune these settings.
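When sizing your VPS, a common rule of thumb is that a Q4-quantized model needs roughly half a gigabyte of RAM per billion parameters, plus overhead for context and runtime. A quick estimate in shell arithmetic (an approximation, not an exact figure):

```shell
# Rough RAM estimate for a Q4-quantized model:
# params (billions) * 4 bits / 8 bits-per-byte ~= GB for weights,
# plus ~1 GB overhead for context and runtime.
estimate_ram_gb() {
    local params_b=$1
    echo $(( params_b * 4 / 8 + 1 ))
}

estimate_ram_gb 7    # 7B model  -> about 4 GB
estimate_ram_gb 13   # 13B model -> about 7 GB
```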
Backup Strategy
Implement automated backups for your OpenWebUI installation:
mkdir -p ~/backups
chmod 755 ~/backups
nano ~/backup-openwebui.sh

#!/bin/bash
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/home/openwebui/backups"
# Create backup directory
mkdir -p $BACKUP_DIR
# Backup OpenWebUI data
docker run --rm -v openwebui:/source -v $BACKUP_DIR:/backup alpine tar czf /backup/openwebui_data_$DATE.tar.gz -C /source .
# Backup Ollama models
tar -czf $BACKUP_DIR/ollama_models_$DATE.tar.gz /usr/share/ollama/.ollama/models
# Backup configuration
cp ~/openwebui/.env $BACKUP_DIR/env_$DATE.bak
cp /etc/nginx/sites-available/openwebui $BACKUP_DIR/nginx_$DATE.conf
# Clean old backups (keep 7 days)
find $BACKUP_DIR -name "openwebui_data_*.tar.gz" -mtime +7 -delete
find $BACKUP_DIR -name "ollama_models_*.tar.gz" -mtime +7 -delete
find $BACKUP_DIR -name "env_*.bak" -mtime +7 -delete
find $BACKUP_DIR -name "nginx_*.conf" -mtime +7 -delete
echo "Backup completed: $DATE"

chmod +x ~/backup-openwebui.sh
crontab -e
# Add: 0 2 * * * /home/openwebui/backup-openwebui.sh >> /home/openwebui/backups/backup.log 2>&1

💾 Backup Tip: Test your backup restoration process periodically to ensure your backups are working correctly.
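The find -mtime +7 retention logic in the script can be sanity-checked on a throwaway directory before trusting it with real backups (assumes GNU touch, as shipped on Ubuntu/Debian):

```shell
# Demonstrate the 7-day retention rule on dummy files
tmp=$(mktemp -d)
touch "$tmp/openwebui_data_recent.tar.gz"                  # just created
touch -d "10 days ago" "$tmp/openwebui_data_stale.tar.gz"  # past the 7-day window
find "$tmp" -name "openwebui_data_*.tar.gz" -mtime +7 -delete
ls "$tmp"   # only openwebui_data_recent.tar.gz remains
```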
Troubleshooting
Common issues and solutions:
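Most failures show up in container status, service state, or listening ports. A guarded triage script (a sketch; adjust container names and ports to your setup) checks the usual suspects without erroring when a component is absent:

```shell
# Quick health triage for the OpenWebUI stack.
# Each check degrades gracefully when a component is absent.
triage() {
    if command -v docker >/dev/null 2>&1; then
        docker ps --filter name=openwebui --format '{{.Names}}: {{.Status}}' 2>&1
        docker logs --tail 20 openwebui 2>&1
    else
        echo "docker: not installed"
    fi

    if command -v systemctl >/dev/null 2>&1; then
        systemctl is-active ollama 2>/dev/null || echo "ollama: not active"
        systemctl is-active nginx  2>/dev/null || echo "nginx: not active"
    else
        echo "systemctl: not available"
    fi

    # Anything listening on the expected ports?
    ss -tln 2>/dev/null | grep -E ':(3000|8080|11434)' || echo "no listener on 3000/8080/11434"
}

triage
```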
🛠️ Support: For additional help, check the OpenWebUI GitHub repository, Discord community, or contact RamNode support for VPS-specific issues.
🎉 OpenWebUI Successfully Deployed!
Your private AI chat interface is now running on RamNode VPS. You have full control over your conversations, data, and AI models with a ChatGPT-like experience.
