AI Development Guide

    Deploying Flowise

    Flowise is an open-source, low-code platform for building AI agents, chatbots, and LLM-powered workflows through a visual drag-and-drop interface. Built on LangChain and LangGraph, it runs well on RamNode's reliable VPS hosting.


    1. Prerequisites

    RamNode VPS Requirements

    Flowise is a Node.js application that is relatively lightweight but benefits from adequate RAM, especially when running multiple AI workflows or connecting to embedding models.

    Plan         | vCPUs   | RAM  | Storage     | Best For
    Standard 2GB | 1 vCPU  | 2 GB | 30 GB NVMe  | Testing & light workflows
    Standard 4GB | 2 vCPUs | 4 GB | 60 GB NVMe  | Production use, multiple flows
    Standard 8GB | 4 vCPUs | 8 GB | 120 GB NVMe | Heavy RAG workflows, multi-agent

    💡 Recommendation: For most Flowise deployments, the Standard 4GB plan provides the best balance of performance and cost. If you plan to use local embedding models or heavy document processing, consider the 8GB plan.

    Software Requirements

    • A RamNode VPS running Ubuntu 22.04 or 24.04 LTS
    • A registered domain name with DNS pointed to your VPS IP address
    • SSH access to your server (root or sudo user)
    • Basic familiarity with the Linux command line

    2. Initial Server Setup

    Connect and update system
    ssh root@your-server-ip
    apt update && apt upgrade -y

    Create a Non-Root User

    Create dedicated user
    adduser flowise
    usermod -aG sudo flowise
    su - flowise
    Configure firewall
    sudo ufw allow OpenSSH
    sudo ufw allow 80/tcp
    sudo ufw allow 443/tcp
    sudo ufw enable
    sudo ufw status

    3. Install Docker & Docker Compose

    Install prerequisites and Docker GPG key
    # Install prerequisites
    sudo apt install -y ca-certificates curl gnupg lsb-release
    
    # Add Docker's official GPG key
    sudo install -m 0755 -d /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
      sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    sudo chmod a+r /etc/apt/keyrings/docker.gpg
    
    # Add the Docker repository
    echo "deb [arch=$(dpkg --print-architecture) \
      signed-by=/etc/apt/keyrings/docker.gpg] \
      https://download.docker.com/linux/ubuntu \
      $(lsb_release -cs) stable" | \
      sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    Install Docker
    sudo apt update
    sudo apt install -y docker-ce docker-ce-cli \
      containerd.io docker-compose-plugin
    
    # Configure Docker for non-root user
    sudo usermod -aG docker flowise
    newgrp docker
    
    # Verify installation
    docker --version
    docker compose version

    4. Deploy Flowise with Docker Compose

    Create project directory
    mkdir -p ~/flowise && cd ~/flowise

    Create the Environment File

    Create a .env file with your configuration. Replace placeholder values with your own secure credentials:

    ~/flowise/.env
    # ── Flowise Configuration ──
    PORT=3000
    FLOWISE_USERNAME=admin
    FLOWISE_PASSWORD=your-secure-password-here
    FLOWISE_SECRETKEY_OVERWRITE=your-secret-key-here
    APIKEY_PATH=/root/.flowise
    SECRETKEY_PATH=/root/.flowise
    LOG_PATH=/root/.flowise/logs
    BLOB_STORAGE_PATH=/root/.flowise/storage
    
    # ── Database Configuration (PostgreSQL) ──
    DATABASE_TYPE=postgres
    DATABASE_PORT=5432
    DATABASE_HOST=postgres
    DATABASE_NAME=flowise
    DATABASE_USER=flowise
    DATABASE_PASSWORD=your-db-password-here
    
    # ── Postgres Container Settings ──
    POSTGRES_USER=flowise
    POSTGRES_PASSWORD=your-db-password-here
    POSTGRES_DB=flowise

    Security Notice: Always use strong, unique passwords. Never commit the .env file to version control. The FLOWISE_SECRETKEY_OVERWRITE is used to encrypt stored API keys — if lost, all saved credentials become unrecoverable.
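    One quick way to generate strong values for these fields is openssl, which ships with Ubuntu. A minimal sketch; paste the output into the matching .env entries:

    ```shell
    # Generate a random admin password and a random encryption key
    FLOWISE_PASSWORD=$(openssl rand -base64 24)
    SECRET_KEY=$(openssl rand -hex 32)

    echo "FLOWISE_PASSWORD=$FLOWISE_PASSWORD"
    echo "FLOWISE_SECRETKEY_OVERWRITE=$SECRET_KEY"
    ```

    Store the generated secret key somewhere safe outside the server as well, since losing it makes saved credentials unrecoverable.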

    Create Docker Compose File

    ~/flowise/docker-compose.yml
    version: "3.8"
    
    services:
      flowise:
        image: flowiseai/flowise:latest
        container_name: flowise
        restart: always
        env_file:
          - .env
        ports:
          - "3000:3000"
        volumes:
          - flowise_data:/root/.flowise
        depends_on:
          postgres:
            condition: service_healthy
        healthcheck:
          test: ["CMD", "wget", "--spider", "-q", "http://localhost:3000"]
          interval: 30s
          timeout: 10s
          retries: 3
    
      postgres:
        image: postgres:16-alpine
        container_name: flowise-postgres
        restart: always
        environment:
          POSTGRES_USER: ${POSTGRES_USER}
          POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
          POSTGRES_DB: ${POSTGRES_DB}
        volumes:
          - postgres_data:/var/lib/postgresql/data
        healthcheck:
          test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
          interval: 10s
          timeout: 5s
          retries: 5
    
    volumes:
      flowise_data:
      postgres_data:
    Launch the stack
    docker compose up -d
    
    # Verify both containers are running
    docker compose ps
    docker compose logs -f flowise

    Flowise should now be accessible at http://your-server-ip:3000. You'll see the login screen prompting for the credentials you set in the .env file.

    5. Configure Nginx & SSL

    Install Nginx and Certbot
    sudo apt install -y nginx certbot python3-certbot-nginx
    /etc/nginx/sites-available/flowise
    server {
        listen 80;
        server_name yourdomain.com;
    
        location / {
            proxy_pass http://localhost:3000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_cache_bypass $http_upgrade;
    
            # Increase timeouts for long-running AI inference requests
            proxy_read_timeout 300s;
            proxy_send_timeout 300s;
    
            # Increase body size for file uploads
            client_max_body_size 50M;
        }
    }
    Enable site and obtain SSL
    # Enable the site
    sudo ln -s /etc/nginx/sites-available/flowise /etc/nginx/sites-enabled/
    sudo rm -f /etc/nginx/sites-enabled/default
    
    # Test configuration
    sudo nginx -t
    
    # Reload Nginx
    sudo systemctl reload nginx
    
    # Obtain and install SSL certificate
    sudo certbot --nginx -d yourdomain.com

    💡 Auto-Renewal: Let's Encrypt certificates expire every 90 days. Certbot installs a systemd timer that handles renewal automatically. Verify it's active with: sudo systemctl status certbot.timer

    6. Post-Deployment Configuration

    Accessing the Dashboard

    Navigate to https://yourdomain.com in your browser. Log in using the FLOWISE_USERNAME and FLOWISE_PASSWORD credentials you set in the .env file.

    Adding API Keys

    1. Click the Settings icon in the sidebar
    2. Navigate to "Credentials" and click "Add New"
    3. Select your provider (OpenAI, Anthropic, Google, etc.)
    4. Enter your API key and save

    Flowise encrypts stored credentials using the FLOWISE_SECRETKEY_OVERWRITE value from your .env file.

    Building Your First Flow

    1. Click "Chatflows" in the sidebar, then "Add New"
    2. Drag a Chat Model node (e.g., ChatOpenAI) onto the canvas
    3. Drag a Chain node (e.g., Conversation Chain) and connect them
    4. Add a Memory node (e.g., Buffer Memory) for conversation context
    5. Click the chat icon to test your flow inline
    6. Click "Save" and use the "Share Chatbot" button to embed it
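    A saved chatflow can also be called over plain HTTP. A minimal sketch of the REST call, with a placeholder domain and chatflow ID (the ID appears in the Flowise UI; add an Authorization: Bearer header if you created a Flowise API key):

    ```shell
    # Placeholders: replace with your domain and the ID of a saved chatflow
    FLOWISE_URL="https://yourdomain.com"
    CHATFLOW_ID="your-chatflow-id"

    # POST a question to the chatflow's prediction endpoint and print the JSON reply
    ask_flow() {
      curl -s -X POST "$FLOWISE_URL/api/v1/prediction/$CHATFLOW_ID" \
        -H "Content-Type: application/json" \
        -d "{\"question\": \"$1\"}"
    }

    # Example: ask_flow "What can you do?"
    ```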

    7. Environment Variables Reference

    Flowise supports a comprehensive set of environment variables for fine-tuning your deployment:

    Variable                   | Description                                            | Default
    FLOWISE_FILE_SIZE_LIMIT    | Max upload file size in MB                             | 50
    NUMBER_OF_PROXIES          | Number of reverse proxies in front of Flowise (needed for accurate rate limiting) | —
    CORS_ORIGINS               | Allowed CORS origins (comma-separated)                 | *
    IFRAME_ORIGINS             | Allowed iframe embedding origins                       | *
    TOOL_FUNCTION_BUILTIN_DEP  | Allowed built-in Node.js modules for custom tools      | —
    TOOL_FUNCTION_EXTERNAL_DEP | Allowed npm packages for custom tools                  | —
    LOG_LEVEL                  | Logging verbosity (error, info, verbose, debug)        | info
    DEBUG                      | Enable debug mode                                      | false
    DISABLE_CHATFLOW_REUSE     | Prevent chatflow instance reuse                        | false

    💡 External Dependencies: If your custom tool functions require npm packages (like cheerio or axios), add them to TOOL_FUNCTION_EXTERNAL_DEP.
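    For example, to let custom tool functions import a couple of npm packages and built-in modules, you might add lines like these to your .env file (the package names here are just illustrations; the comma-separated format follows the table above):

    ```
    # Allow these npm packages in custom tool functions
    TOOL_FUNCTION_EXTERNAL_DEP=cheerio,axios

    # Allow these built-in Node.js modules
    TOOL_FUNCTION_BUILTIN_DEP=crypto,fs
    ```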

    8. Maintenance & Operations
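    One routine worth automating is a daily database backup (the security checklist below suggests cron plus pg_dump). A minimal sketch that writes a backup script; the paths, retention count, and schedule are assumptions to adapt:

    ```shell
    # Write a backup script that dumps the Flowise Postgres database
    # from the running container (paths and retention are assumptions)
    mkdir -p ~/flowise/backups
    cat > ~/flowise/backup.sh <<'EOF'
    #!/bin/sh
    BACKUP_DIR="$HOME/flowise/backups"
    STAMP=$(date +%Y%m%d-%H%M%S)
    docker compose -f "$HOME/flowise/docker-compose.yml" exec -T postgres \
      pg_dump -U flowise flowise | gzip > "$BACKUP_DIR/flowise-$STAMP.sql.gz"
    # Keep only the 7 most recent dumps
    ls -1t "$BACKUP_DIR"/flowise-*.sql.gz | tail -n +8 | xargs -r rm --
    EOF
    chmod +x ~/flowise/backup.sh

    # Suggested cron entry (run "crontab -e" and add):
    #   0 3 * * * $HOME/flowise/backup.sh
    ```

    Routine updates follow the usual Compose pattern: docker compose pull followed by docker compose up -d from the project directory.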

    9. Security Hardening

    Restrict Direct Port Access

    After configuring Nginx, stop exposing port 3000 to the public internet so all traffic goes through the reverse proxy. Note that Docker publishes ports through its own iptables rules, which bypass UFW, so a firewall rule alone is not enough; binding the port to localhost is the reliable fix.

    Block direct port access (defense in depth)
    sudo ufw deny 3000

    Then update docker-compose.yml to bind port 3000 only to localhost:

    Update docker-compose.yml ports
    # Change this line in docker-compose.yml:
    ports:
      - "127.0.0.1:3000:3000"

    # Recreate the container to apply the change
    docker compose up -d

    Enable Rate Limiting in Nginx

    Nginx rate limiting
    # Add to the http block in /etc/nginx/nginx.conf
    limit_req_zone $binary_remote_addr zone=flowise:10m rate=10r/s;
    
    # Add inside the location block
    limit_req zone=flowise burst=20 nodelay;

    Set Up Fail2Ban

    Install Fail2Ban
    sudo apt install -y fail2ban
    sudo systemctl enable fail2ban
    sudo systemctl start fail2ban

    Security Checklist

    Item                   | Status      | Action
    SSH key authentication | Required    | Disable password auth in sshd_config
    Firewall (UFW)         | Required    | Allow only 22, 80, 443
    HTTPS (Let's Encrypt)  | Required    | Certbot auto-renewal enabled
    Flowise auth enabled   | Required    | Set FLOWISE_USERNAME & PASSWORD
    Port 3000 blocked      | Recommended | Bind to localhost only
    Fail2Ban active        | Recommended | Auto-block brute force
    Rate limiting          | Recommended | Nginx limit_req configured
    Automatic backups      | Recommended | Cron + pg_dump daily

    10. Troubleshooting

    Flowise Deployed Successfully!

    Your self-hosted AI agent builder is now running. Here are some next steps:

    • Build RAG chatbots with document loaders and vector stores (Pinecone, Qdrant, Chroma)
    • Create multi-agent workflows for complex tasks
    • Embed the Flowise chat widget on your website
    • Expose chatflows as REST API endpoints
    • Set up queue mode with Redis and BullMQ for high-traffic deployments
    • Explore MCP integration for connecting agents to external tools
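
    For the queue-mode item, the rough shape is: the main instance switches into queue mode and pushes jobs to Redis, while one or more worker containers consume them. The variable names below follow recent Flowise releases, but verify them against the Flowise documentation for your version before relying on them:

    ```
    # Added to .env on the main instance (names per recent Flowise releases; verify first)
    MODE=queue
    REDIS_URL=redis://redis:6379
    WORKER_CONCURRENCY=5
    ```

    You would also add a redis service and worker containers to docker-compose.yml alongside the existing flowise and postgres services.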