
    Deploy Temporal on a VPS

    A single-node Temporal cluster on Docker Compose — PostgreSQL persistence, optional Elasticsearch visibility, Web UI behind Nginx with Basic Auth, and the gRPC frontend locked down to localhost.

    At a Glance

    Project: Temporal Server + Web UI
    License: MIT
    Recommended Plan: Standard 4 GB (no ES); Premium 8 GB (with Elasticsearch)
    OS: Ubuntu 24.04 LTS (AlmaLinux 9 also fine)
    Persistence: PostgreSQL 16
    Estimated Setup Time: 45–60 minutes

    Sizing

    • Solo dev / no ES: 2 GB / 2 vCPU / 40 GB SSD (add 2 GB swap)
    • Internal tooling, light prod: 4 GB / 2 vCPU / 60 GB SSD
    • Prod with Elasticsearch visibility: 8 GB / 4 vCPU / 80 GB SSD
    • Higher volume / heavy retention: 16 GB / 4+ vCPU / 160 GB+ SSD

    Workflow histories accumulate based on retention — pick a plan with NVMe and headroom.

    1

    Initial Server Setup

    Update + sudo user + baseline
    apt update && apt upgrade -y
    
    adduser deploy
    usermod -aG sudo deploy
    rsync --archive --chown=deploy:deploy ~/.ssh /home/deploy
    
    apt install -y ufw curl wget gnupg ca-certificates lsb-release \
      htop tmux fail2ban unattended-upgrades apache2-utils
    Harden SSH (/etc/ssh/sshd_config)
    PermitRootLogin no
    PasswordAuthentication no
    systemctl reload ssh
    Firewall — note we do NOT open 7233
    
    ufw default deny incoming
    ufw default allow outgoing
    ufw allow OpenSSH
    ufw allow 80/tcp
    ufw allow 443/tcp
    ufw enable
    2 GB tier? add swap first
    fallocate -l 2G /swapfile
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile
    echo '/swapfile none swap sw 0 0' >> /etc/fstab
    echo 'vm.swappiness=10' >> /etc/sysctl.conf
    sysctl -p
    2

    Install Docker + Compose v2

    Official Docker repo (Compose v2 plugin included)
    sudo install -m 0755 -d /etc/apt/keyrings
    sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
      -o /etc/apt/keyrings/docker.asc
    sudo chmod a+r /etc/apt/keyrings/docker.asc
    
    echo \
      "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
      https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
      sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    
    sudo apt update
    sudo apt install -y docker-ce docker-ce-cli containerd.io \
      docker-buildx-plugin docker-compose-plugin
    
    sudo usermod -aG docker deploy
    # log out and back in for the group change
    
    docker version && docker compose version
    3

    Lay Out the Temporal Directory

    Everything lives under /opt/temporal
    sudo mkdir -p /opt/temporal/{config,dynamicconfig}
    sudo chown -R deploy:deploy /opt/temporal
    cd /opt/temporal
    /opt/temporal/.env (use openssl rand -base64 24 for the password)
    POSTGRES_USER=temporal
    POSTGRES_PASSWORD=REPLACE_WITH_STRONG_PASSWORD
    POSTGRES_DB=temporal
    
    TEMPORAL_VERSION=1.30.2
    TEMPORAL_UI_VERSION=2.40.0
    TEMPORAL_ADMINTOOLS_VERSION=1.30.2
    
    ES_VERSION=7.17.27
    Lock the env file
    chmod 600 /opt/temporal/.env

    Pin tags rather than chasing latest. Temporal releases are frequent — predictable upgrades matter.

    4

    Create the Dynamic Config

    /opt/temporal/dynamicconfig/development-sql.yaml
    limit.maxIDLength:
      - value: 255
        constraints: {}
    system.forceSearchAttributesCacheRefreshOnRead:
      - value: true
        constraints: {}

    A minimal file gives you a place to drop overrides later. Swap to a different config when you enable Elasticsearch's advanced visibility.
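    As a concrete illustration of an override you might drop in later: `frontend.rps` is a dynamic config key in recent Temporal releases that caps request rate per frontend host; confirm the key name and sensible value against your server version before relying on it.

```yaml
# Illustrative override (value is an assumption, not a recommendation):
frontend.rps:
  - value: 1200
    constraints: {}
```

    The file-based dynamic config client polls this file at runtime, so edits typically take effect without a restart.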

    5

    Write the Compose File

    /opt/temporal/docker-compose.yml
    services:
      postgresql:
        image: postgres:16-alpine
        container_name: temporal-postgresql
        restart: unless-stopped
        environment:
          POSTGRES_USER: ${POSTGRES_USER}
          POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
          POSTGRES_DB: ${POSTGRES_DB}
        volumes:
          - postgres-data:/var/lib/postgresql/data
        networks: [temporal-net]
        healthcheck:
          test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
          interval: 10s
          timeout: 5s
          retries: 10
    
      temporal:
        image: temporalio/auto-setup:${TEMPORAL_VERSION}
        container_name: temporal
        restart: unless-stopped
        depends_on:
          postgresql: { condition: service_healthy }
        environment:
          DB: postgres12
          DB_PORT: 5432
          POSTGRES_USER: ${POSTGRES_USER}
          POSTGRES_PWD: ${POSTGRES_PASSWORD}
          POSTGRES_SEEDS: postgresql
          DYNAMIC_CONFIG_FILE_PATH: config/dynamicconfig/development-sql.yaml
          DEFAULT_NAMESPACE: default
          DEFAULT_NAMESPACE_RETENTION: 72h
        volumes:
          - ./dynamicconfig:/etc/temporal/config/dynamicconfig
        networks: [temporal-net]
        ports:
          # gRPC bound to localhost — workers reach it via tunnel/mTLS/private net
          - "127.0.0.1:7233:7233"
    
      temporal-admin-tools:
        image: temporalio/admin-tools:${TEMPORAL_ADMINTOOLS_VERSION}
        container_name: temporal-admin-tools
        restart: unless-stopped
        depends_on: [temporal]
        environment:
          TEMPORAL_ADDRESS: temporal:7233
          TEMPORAL_CLI_ADDRESS: temporal:7233
        networks: [temporal-net]
        stdin_open: true
        tty: true
    
      temporal-ui:
        image: temporalio/ui:${TEMPORAL_UI_VERSION}
        container_name: temporal-ui
        restart: unless-stopped
        depends_on: [temporal]
        environment:
          TEMPORAL_ADDRESS: temporal:7233
          TEMPORAL_CORS_ORIGINS: https://temporal.example.com
        networks: [temporal-net]
        ports:
          - "127.0.0.1:8080:8080"
    
    volumes:
      postgres-data:
    
    networks:
      temporal-net:
        driver: bridge

    Three deliberate choices: gRPC + UI bound to 127.0.0.1 only, dynamic config mounted from host (edit without rebuilding), 72h retention on default namespace (bump for prod, but it drives storage growth).
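    Before starting anything, it's worth confirming that the `.env` values interpolate cleanly into the compose file; `docker compose config` validates the rendered configuration without launching containers.

```shell
cd /opt/temporal
# --quiet validates the interpolated config and prints nothing on success
docker compose config --quiet && echo "compose config OK"
```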

    6

    First Boot

    Bring it up + watch logs
    cd /opt/temporal
    docker compose up -d
    docker compose logs -f temporal
    # Ctrl+C once Frontend/History/Matching/Worker all report started
    
    curl -I http://127.0.0.1:8080

    Auto-setup creates schema and registers the default namespace. The UI is up locally but not yet exposed publicly.
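    You can also verify the cluster from inside the admin-tools container rather than trusting the logs; the commands below use the bundled `temporal` CLI (exact flag shapes can shift slightly between CLI versions, so check `--help` if one is rejected).

```shell
cd /opt/temporal
# Should report SERVING once the frontend is up
docker compose exec temporal-admin-tools \
  temporal operator cluster health --address temporal:7233
# Confirm auto-setup registered the default namespace
docker compose exec temporal-admin-tools \
  temporal operator namespace describe --namespace default --address temporal:7233
```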

    7

    Move Off auto-setup for Production

    auto-setup runs schema setup on every start (risky during upgrades) and auto-registers namespaces. Once schema exists, swap to the production image:

    Edit docker-compose.yml
      temporal:
        image: temporalio/server:${TEMPORAL_VERSION}
        # remove DEFAULT_NAMESPACE and DEFAULT_NAMESPACE_RETENTION env vars
    Reapply
    docker compose up -d
    docker compose ps

    From here on, schema upgrades are explicit through admin-tools, not on every restart.
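    A sketch of what an explicit schema upgrade looks like with `temporal-sql-tool` from the admin container. The flag names and schema path follow the conventions shipped in the official admin-tools image, but verify both against the release notes for your target version before running.

```shell
cd /opt/temporal
# Load DB credentials from the env file; they expand on the host
# before the command line is passed into the container
set -a; . ./.env; set +a
docker compose exec temporal-admin-tools temporal-sql-tool \
  --plugin postgres12 --ep postgresql -p 5432 \
  -u "$POSTGRES_USER" --pw "$POSTGRES_PASSWORD" --db temporal \
  update-schema -d /etc/temporal/schema/postgresql/v12/temporal/versioned
# Repeat with --db temporal_visibility and the visibility schema dir
# if you are on SQL visibility
```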

    8

    Nginx + Let's Encrypt + Basic Auth

    The Web UI has no built-in auth. Anyone reaching port 8080 sees every workflow, signal, and search attribute. Nginx with TLS and Basic Auth fixes it.

    Install + htpasswd
    sudo apt install -y nginx certbot python3-certbot-nginx
    sudo htpasswd -c /etc/nginx/.temporal-htpasswd admin
    # enter a strong password
    /etc/nginx/sites-available/temporal
    server {
        listen 80;
        server_name temporal.example.com;
        location / { return 301 https://$host$request_uri; }
    }
    
    server {
        listen 443 ssl http2;
        server_name temporal.example.com;
    
        ssl_certificate /etc/letsencrypt/live/temporal.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/temporal.example.com/privkey.pem;
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    
        auth_basic "Temporal";
        auth_basic_user_file /etc/nginx/.temporal-htpasswd;
    
        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_read_timeout 300s;
        }
    }
    Enable + cert (nginx -t will fail on the missing cert paths, so comment out the 443 server block until certbot has run)
    sudo ln -s /etc/nginx/sites-available/temporal /etc/nginx/sites-enabled/
    sudo nginx -t && sudo systemctl reload nginx
    # certonly obtains the cert via the port-80 block without rewriting your config
    sudo certbot certonly --nginx -d temporal.example.com \
      --agree-tos -m you@example.com --no-eff-email
    # Now restore the 443 block, re-test, reload
    sudo nginx -t
    sudo systemctl reload nginx
    9

    Lock Down the gRPC Frontend

    Port 7233 is the API workers and clients use to start workflows and send signals. The OSS server has no auth by default — anyone who reaches it can do anything.

    • Option A — workers on this VPS: they connect to 127.0.0.1:7233 (or temporal:7233 from inside the network). Firewall blocks 7233 entirely. Right answer for most small self-hosted deployments.
    • Option B — WireGuard for remote workers: bind the frontend to the WG interface address; keep 7233 blocked publicly.
    • Option C — mTLS: TLS with a private CA + client certs. The right production answer (this is effectively what Temporal Cloud does), but meaningful work to set up and rotate.

    Whatever you pick, never expose 7233 to the public internet: there is no rate limiting, no auth challenge, and the API surface is comprehensive.
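    For occasional remote access under Option A (say, a worker on a dev laptop), an SSH tunnel is the lightest-weight bridge; the user and hostname below are placeholders from this guide.

```shell
# Run from the developer machine; keep it open while the worker runs.
# Forwards local port 7233 to the frontend bound on the VPS loopback.
ssh -N -L 7233:127.0.0.1:7233 deploy@temporal.example.com
# The worker then targets 127.0.0.1:7233 as if Temporal were local.
```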

    10

    Temporal CLI + Backups

    Install the temporal CLI on the host
    curl -sSf https://temporal.download/cli.sh | sh
    sudo mv $HOME/.temporalio/bin/temporal /usr/local/bin/
    temporal --version
    Create a real production namespace (don't use 'default')
    temporal operator namespace create \
      --address 127.0.0.1:7233 \
      --retention 7d \
      production
    Daily Postgres dump
    docker compose exec -T postgresql \
      pg_dump -U temporal temporal | gzip \
      > /var/backups/temporal-$(date +%F).sql.gz

    Cron the dump, ship to off-host storage with restic. Retention is the storage cost lever — start at 7d and adjust based on what you actually query.

    Common Issues

    • UI loads but shows "no namespaces": CORS — set TEMPORAL_CORS_ORIGINS to your real HTTPS hostname
    • "connection refused" from worker: 7233 is bound to 127.0.0.1 by design — use a tunnel or run the worker on the same host
    • OOM kill of Elasticsearch: JVM heap not configured for the box — skip ES on 4 GB tiers, or set ES_JAVA_OPTS="-Xms1g -Xmx1g"
    • Schema mismatch after upgrade: you swapped to server but skipped the admin-tools migration — run temporal-sql-tool from the admin container