AI Web Interface

    Deploy OpenWebUI on RamNode

    Set up OpenWebUI on your RamNode VPS hosting. Create your own private, self-hosted ChatGPT-like interface with full control over your data and conversations.

    Ubuntu/Debian
    Docker + Python
    ⏱️ 20-30 minutes

    Step 1: Prerequisites

    Before starting, ensure you have:

    Server Requirements

    • RamNode VPS (2GB+ RAM recommended)
    • Ubuntu 20.04/22.04 or Debian 11+
    • 1+ CPU cores
    • 10GB+ disk space
    • SSH access to your VPS

    Knowledge Requirements

    • Basic Linux command line
    • Understanding of Docker
    • Basic networking knowledge
    • Domain name (optional)

    Step 2: Initial Server Setup

    Connect to your RamNode VPS and prepare the environment:

    Connect via SSH
    ssh root@your-server-ip
    Update System Packages
    apt update && apt upgrade -y
    Create OpenWebUI User for Security
    adduser openwebui
    usermod -aG sudo openwebui
    su - openwebui

    💡 Security Tip: Running OpenWebUI as a dedicated user improves security by limiting permissions and isolating the application.

    Step 3: Install Docker

    OpenWebUI runs in Docker containers for easy deployment and management:

    Install Docker Dependencies
    sudo apt install -y apt-transport-https ca-certificates curl gnupg lsb-release
    Add Docker Repository
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    Install Docker
    sudo apt update
    sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
    Configure Docker for User
    sudo systemctl start docker
    sudo systemctl enable docker
    sudo usermod -aG docker $USER
    # Log out and back in (or run: newgrp docker) for the group change to apply
    Verify Docker Installation
    docker --version
    docker compose version

    ✅ Docker is now installed and ready for OpenWebUI deployment!

    Step 4: Deploy OpenWebUI

    Deploy OpenWebUI using Docker:

    Create OpenWebUI Directory
    mkdir -p ~/openwebui
    cd ~/openwebui
    Pull and Run OpenWebUI
    docker run -d \
      --name openwebui \
      -p 3000:8080 \
      -v openwebui:/app/backend/data \
      --restart unless-stopped \
      ghcr.io/open-webui/open-webui:main
    Verify OpenWebUI is Running
    docker ps
    docker logs openwebui

    What is OpenWebUI?

    OpenWebUI is a feature-rich, self-hosted web interface for large language models. It provides a ChatGPT-like experience while giving you complete control over your data, conversations, and AI models. It supports multiple models, user management, and extensive customization options.

    Step 5: Basic Configuration

    Configure OpenWebUI settings and environment:

    Stop and Remove the OpenWebUI Container (your data persists in the openwebui volume)
    docker stop openwebui
    docker rm openwebui
    Create Environment File
    nano ~/openwebui/.env

    Add these environment variables:

    OpenWebUI Environment Configuration
    # Basic Configuration
    WEBUI_SECRET_KEY=your-secret-key-here
    # Note: docker --env-file does not strip quotes, so leave values unquoted
    WEBUI_NAME=Your OpenWebUI Instance
    
    # Security Settings
    ENABLE_SIGNUP=false
    DEFAULT_USER_ROLE=pending
    
    # Model Settings (localhost only resolves to the host once the
    # container uses host networking, as configured in the Ollama step)
    OLLAMA_BASE_URL=http://localhost:11434
    
    # Optional: OpenAI API Integration
    OPENAI_API_KEY=your-openai-key-here
    OPENAI_API_BASE_URL=https://api.openai.com/v1
    Restart OpenWebUI with Environment
    docker run -d \
      --name openwebui \
      -p 3000:8080 \
      -v openwebui:/app/backend/data \
      --env-file ~/openwebui/.env \
      --restart unless-stopped \
      ghcr.io/open-webui/open-webui:main

    🔐 Security: Replace "your-secret-key-here" with a strong, unique secret key. Keep your environment file secure!
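    One easy way to produce a suitable secret is a sketch using openssl, which ships with Ubuntu/Debian:

    ```shell
    # Print a random 64-character hex string; paste it into .env as
    # WEBUI_SECRET_KEY in place of "your-secret-key-here".
    openssl rand -hex 32
    ```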

    Step 6: Install Ollama (Local Models)

    Install Ollama to run local AI models with OpenWebUI:

    Install Ollama
    curl -fsSL https://ollama.com/install.sh | sh
    Start Ollama Service
    sudo systemctl start ollama
    sudo systemctl enable ollama
    Download Your First Model
    ollama pull llama2
    Test Ollama Installation
    ollama list
    ollama run llama2 "Hello, how are you?"
    Connect OpenWebUI to Ollama
    docker stop openwebui
    docker rm openwebui
    docker run -d \
      --name openwebui \
      --network=host \
      -v openwebui:/app/backend/data \
      --env-file ~/openwebui/.env \
      --restart unless-stopped \
      ghcr.io/open-webui/open-webui:main

    🧠 Local AI: You now have local AI models running on your server! Ollama provides privacy and control over your AI interactions. Note that --network=host bypasses the earlier 3000:8080 port mapping, so OpenWebUI now listens directly on port 8080.
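    To confirm the wiring, you can hit both services from the VPS itself; with host networking, Ollama's HTTP API is on port 11434 and OpenWebUI is on 8080:

    ```shell
    # Ollama's REST API should list the models you've pulled:
    curl -s http://localhost:11434/api/tags
    # OpenWebUI itself, now directly on port 8080 (no port mapping):
    curl -sI http://localhost:8080 | head -n 1
    ```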

    Step 7: Set Up Nginx Reverse Proxy

    Configure Nginx as a reverse proxy for OpenWebUI:

    Install Nginx
    sudo apt install nginx -y
    Create Nginx Configuration
    sudo nano /etc/nginx/sites-available/openwebui

    Add the following Nginx configuration:

    Nginx Configuration for OpenWebUI
    server {
        listen 80;
        server_name your-domain.com;  # Replace with your domain
    
        client_max_body_size 100M;
    
        location / {
            proxy_pass http://localhost:8080;  # OpenWebUI listens on 8080 with host networking
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_cache_bypass $http_upgrade;
            proxy_read_timeout 86400;
            proxy_send_timeout 86400;
        }
    }
    Enable the Site
    sudo ln -s /etc/nginx/sites-available/openwebui /etc/nginx/sites-enabled/
    sudo nginx -t
    sudo systemctl restart nginx
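    Before (or instead of) pointing DNS at the server, you can exercise the proxy locally by faking the Host header; your-domain.com is the same placeholder used in the config above:

    ```shell
    # Ask Nginx for the site as if the request came for your domain:
    curl -sI -H "Host: your-domain.com" http://localhost | head -n 1
    ```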

    Step 8: Set Up SSL with Let's Encrypt

    Secure your OpenWebUI installation with SSL certificate:

    Install Certbot
    sudo apt install certbot python3-certbot-nginx -y
    Obtain SSL Certificate
    sudo certbot --nginx -d your-domain.com
    Test Auto-Renewal
    sudo certbot renew --dry-run

    🔒 SSL Enabled: Your OpenWebUI instance is now secured with HTTPS!
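    A couple of follow-up checks; on Ubuntu/Debian, the certbot package schedules renewals via a systemd timer:

    ```shell
    # Show issued certificates and their expiry dates:
    sudo certbot certificates
    # Confirm the renewal timer is scheduled:
    systemctl list-timers certbot.timer --no-pager
    ```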

    Step 9: User Management

    Configure user access and permissions in OpenWebUI:

    First-Time Setup

    Access your OpenWebUI instance:

    1. Open your browser and go to https://your-domain.com
    2. Create your admin account (first user is automatically admin)
    3. Sign in with your new account
    4. Go to Settings → Admin Settings

    User Management Options

    • Disable Signup: Prevent new user registrations
    • User Roles: Set default roles for new users
    • Model Access: Control which models users can access
    • Chat History: Enable/disable conversation persistence
    Configure Firewall
    sudo ufw allow ssh
    sudo ufw allow 80
    sudo ufw allow 443
    sudo ufw --force enable

    Step 10: Model Management

    Manage AI models in your OpenWebUI installation:

    Popular Ollama Models

    Download Recommended Models
    # Smaller 7B models (expect roughly 4-8GB of RAM in use)
    ollama pull llama2:7b
    ollama pull mistral:7b
    ollama pull codellama:7b
    
    # Larger 13B models (require 8GB+ RAM)
    ollama pull llama2:13b
    ollama pull vicuna:13b
    
    # Specialized models
    ollama pull deepseek-coder:6.7b  # Code generation
    ollama pull dolphin-mixtral:8x7b  # Advanced reasoning (needs far more RAM, ~26GB+)

    Model Management Commands

    Ollama Management
    # List installed models
    ollama list
    
    # Remove unused models
    ollama rm model-name
    
    # Update a model
    ollama pull model-name
    
    # Check model info
    ollama show model-name

    💡 Resource Tip: Start with smaller 7B models and upgrade to larger ones based on your VPS capacity and performance needs.
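    Two more commands help when judging capacity; `ollama ps` exists in recent Ollama releases, and the model-store path below is the default for the systemd install:

    ```shell
    # Models currently loaded into memory, with size and time-to-unload:
    ollama ps
    # Disk space used by downloaded models (default systemd install path):
    sudo du -sh /usr/share/ollama/.ollama/models
    ```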

    Step 11: Performance Optimization

    Optimize OpenWebUI and Ollama for your RamNode VPS:

    System Optimization

    Increase System Limits
    echo "* soft nofile 65536" | sudo tee -a /etc/security/limits.conf
    echo "* hard nofile 65536" | sudo tee -a /etc/security/limits.conf
    echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
    sudo sysctl -p
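    You can verify both changes took effect; the open-files limit only applies to new login sessions, while the sysctl change is immediate:

    ```shell
    # Open-files limit for the current shell (65536 after a fresh login):
    ulimit -n
    # Kernel setting applied by sysctl -p:
    cat /proc/sys/vm/max_map_count
    ```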

    Docker Resource Limits

    Update OpenWebUI with Resource Limits
    docker stop openwebui
    docker rm openwebui
    docker run -d \
      --name openwebui \
      --network=host \
      -v openwebui:/app/backend/data \
      --env-file ~/openwebui/.env \
      --memory="1g" \
      --cpus="1.0" \
      --restart unless-stopped \
      ghcr.io/open-webui/open-webui:main

    Ollama Configuration

    Optimize Ollama Settings
    # Create Ollama configuration
    sudo mkdir -p /etc/systemd/system/ollama.service.d
    sudo nano /etc/systemd/system/ollama.service.d/override.conf
    Ollama Service Override
    [Service]
    Environment="OLLAMA_MAX_LOADED_MODELS=2"
    Environment="OLLAMA_NUM_PARALLEL=2"
    Environment="OLLAMA_FLASH_ATTENTION=1"
    Restart Ollama with New Settings
    sudo systemctl daemon-reload
    sudo systemctl restart ollama

    Performance Tip: Monitor resource usage with htop and docker stats to fine-tune these settings.
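    Concretely, the monitoring commands the tip refers to:

    ```shell
    # One-off snapshot of the container's CPU/memory usage:
    docker stats --no-stream openwebui
    # Overall memory picture on the VPS (watch the "available" column):
    free -h
    ```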

    Step 12: Backup Strategy

    Implement automated backups for your OpenWebUI installation:

    Create Backup Directory
    mkdir -p ~/backups/openwebui
    chmod 755 ~/backups
    Create Backup Script
    nano ~/backup-openwebui.sh
    OpenWebUI Backup Script
    #!/bin/bash
    DATE=$(date +%Y%m%d_%H%M%S)
    BACKUP_DIR="$HOME/backups/openwebui"
    
    # Create backup directory
    mkdir -p $BACKUP_DIR
    
    # Backup OpenWebUI data
    docker run --rm -v openwebui:/source -v $BACKUP_DIR:/backup alpine tar czf /backup/openwebui_data_$DATE.tar.gz -C /source .
    
    # Backup Ollama models
    tar -czf $BACKUP_DIR/ollama_models_$DATE.tar.gz /usr/share/ollama/.ollama/models
    
    # Backup configuration
    cp ~/openwebui/.env $BACKUP_DIR/env_$DATE.bak
    cp /etc/nginx/sites-available/openwebui $BACKUP_DIR/nginx_$DATE.conf
    
    # Clean old backups (keep 7 days)
    find $BACKUP_DIR -name "openwebui_data_*.tar.gz" -mtime +7 -delete
    find $BACKUP_DIR -name "ollama_models_*.tar.gz" -mtime +7 -delete
    find $BACKUP_DIR -name "env_*.bak" -mtime +7 -delete
    find $BACKUP_DIR -name "nginx_*.conf" -mtime +7 -delete
    
    echo "Backup completed: $DATE"
    Make Script Executable and Schedule
    chmod +x ~/backup-openwebui.sh
    crontab -e
    # Add: 0 2 * * * /home/openwebui/backup-openwebui.sh >> /home/openwebui/backup.log 2>&1

    💾 Backup Tip: Test your backup restoration process periodically to ensure your backups are working correctly.
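    A restore is essentially the backup in reverse. This is a sketch: it assumes a data archive produced by the script above (substitute the real timestamped filename for DATE, and adjust BACKUP_DIR to wherever your backups land):

    ```shell
    BACKUP_DIR=/home/openwebui/backups   # adjust to your backup location
    # Stop the container so nothing writes to the volume mid-restore:
    docker stop openwebui
    # Unpack the archive back into the named volume, replacing its contents:
    docker run --rm \
      -v openwebui:/target \
      -v "$BACKUP_DIR":/backup \
      alpine sh -c "rm -rf /target/* && tar xzf /backup/openwebui_data_DATE.tar.gz -C /target"
    docker start openwebui
    ```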

    Step 13: Troubleshooting

    Common issues and solutions:
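    The usual first diagnostic steps, assuming the setup above:

    ```shell
    # Is the container running, and what do its recent logs say?
    docker ps -a --filter name=openwebui
    docker logs --tail 50 openwebui
    # Is Ollama up?
    sudo systemctl status ollama --no-pager
    # Does the Nginx config still parse, and what do its errors say?
    sudo nginx -t
    sudo tail -n 20 /var/log/nginx/error.log
    # Are the expected ports actually listening?
    sudo ss -tlnp | grep -E ':8080|:11434'
    ```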

    🛠️ Support: For additional help, check the OpenWebUI GitHub repository, Discord community, or contact RamNode support for VPS-specific issues.

    🎉 OpenWebUI Successfully Deployed!

    Your private AI chat interface is now running on RamNode VPS. You have full control over your conversations, data, and AI models with a ChatGPT-like experience.

    Self-Hosted AI
    Private & Secure
    Local Models