Deploy Temporal on a VPS
A single-node Temporal cluster on Docker Compose — PostgreSQL persistence, optional Elasticsearch visibility, Web UI behind Nginx with Basic Auth, and the gRPC frontend locked down to localhost.
At a Glance
| Item | Details |
| --- | --- |
| Project | Temporal Server + Web UI |
| License | MIT |
| Recommended Plan | Standard 4 GB (no ES); Premium 8 GB (with Elasticsearch) |
| OS | Ubuntu 24.04 LTS (AlmaLinux 9 also fine) |
| Persistence | PostgreSQL 16 |
| Estimated Setup Time | 45–60 minutes |
Sizing
- Solo dev / no ES: 2 GB / 2 vCPU / 40 GB SSD (add 2 GB swap)
- Internal tooling, light prod: 4 GB / 2 vCPU / 60 GB SSD
- Prod with Elasticsearch visibility: 8 GB / 4 vCPU / 80 GB SSD
- Higher volume / heavy retention: 16 GB / 4+ vCPU / 160 GB+ SSD
Workflow histories accumulate based on retention — pick a plan with NVMe and headroom.
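To sanity-check a plan choice, a back-of-envelope estimate of live history storage helps. All three inputs below are hypothetical placeholders; substitute your own workflow rate, average history size, and retention:

```shell
#!/bin/sh
# Back-of-envelope estimate of live workflow history storage.
WORKFLOWS_PER_DAY=5000   # assumed workflow completion rate
AVG_HISTORY_KB=50        # assumed average history size per workflow
RETENTION_DAYS=3         # matches the 72h retention used later in this guide
TOTAL_MB=$(( WORKFLOWS_PER_DAY * AVG_HISTORY_KB * RETENTION_DAYS / 1024 ))
echo "~${TOTAL_MB} MB of live history, before indexes and WAL overhead"
```

With these placeholder numbers that comes to roughly 732 MB, comfortably inside the 40 GB tier; scale the inputs up and the disk requirement moves linearly.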
Initial Server Setup
```bash
apt update && apt upgrade -y
adduser deploy
usermod -aG sudo deploy
rsync --archive --chown=deploy:deploy ~/.ssh /home/deploy
apt install -y ufw curl wget gnupg ca-certificates lsb-release \
  htop tmux fail2ban unattended-upgrades apache2-utils
```

Harden SSH in `/etc/ssh/sshd_config`:

```
PermitRootLogin no
PasswordAuthentication no
```

```bash
systemctl reload ssh
```

Firewall:

```bash
ufw default deny incoming
ufw default allow outgoing
ufw allow OpenSSH
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
```

Add swap (2 GB covers the no-ES tiers):

```bash
fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab
echo 'vm.swappiness=10' >> /etc/sysctl.conf
sysctl -p
```

Install Docker + Compose v2
```bash
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
  -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
  https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io \
  docker-buildx-plugin docker-compose-plugin
sudo usermod -aG docker deploy
# log out and back in for the group change
docker version && docker compose version
```

Lay Out the Temporal Directory
```bash
sudo mkdir -p /opt/temporal/{config,dynamicconfig}
sudo chown -R deploy:deploy /opt/temporal
cd /opt/temporal
```

Create `/opt/temporal/.env`:

```
POSTGRES_USER=temporal
POSTGRES_PASSWORD=REPLACE_WITH_STRONG_PASSWORD
POSTGRES_DB=temporal
TEMPORAL_VERSION=1.30.2
TEMPORAL_UI_VERSION=2.40.0
TEMPORAL_ADMINTOOLS_VERSION=1.30.2
ES_VERSION=7.17.27
```

```bash
chmod 600 /opt/temporal/.env
```

Pin tags rather than chasing `latest`. Temporal releases are frequent — predictable upgrades matter.
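Since the compose stack reads its credentials from `.env`, one easy footgun is deploying with the placeholder password still in place. A small pre-flight check can catch that; the filename and placeholder string match this guide, so adapt them if yours differ:

```shell
#!/bin/sh
# Pre-flight check: refuse to continue while .env still holds the
# placeholder password used in this guide.
ENV_FILE="${1:-.env}"
if grep -q 'REPLACE_WITH_STRONG_PASSWORD' "$ENV_FILE"; then
  echo "error: set a real POSTGRES_PASSWORD in $ENV_FILE first" >&2
  exit 1
fi
echo "ok: $ENV_FILE has no placeholder password"
```

Run it before `docker compose up` in any deploy script.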
Create the Dynamic Config
`/opt/temporal/dynamicconfig/development-sql.yaml`:

```yaml
limit.maxIDLength:
  - value: 255
    constraints: {}
system.forceSearchAttributesCacheRefreshOnRead:
  - value: true
    constraints: {}
```

A minimal file gives you a place to drop overrides later. Swap to a different config when you enable Elasticsearch's advanced visibility.
Write the Compose File
`/opt/temporal/docker-compose.yml`:

```yaml
services:
  postgresql:
    image: postgres:16-alpine
    container_name: temporal-postgresql
    restart: unless-stopped
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks: [temporal-net]
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
      interval: 10s
      timeout: 5s
      retries: 10

  temporal:
    image: temporalio/auto-setup:${TEMPORAL_VERSION}
    container_name: temporal
    restart: unless-stopped
    depends_on:
      postgresql: { condition: service_healthy }
    environment:
      DB: postgres12
      DB_PORT: 5432
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PWD: ${POSTGRES_PASSWORD}
      POSTGRES_SEEDS: postgresql
      DYNAMIC_CONFIG_FILE_PATH: config/dynamicconfig/development-sql.yaml
      DEFAULT_NAMESPACE: default
      DEFAULT_NAMESPACE_RETENTION: 72h
    volumes:
      - ./dynamicconfig:/etc/temporal/config/dynamicconfig
    networks: [temporal-net]
    ports:
      # gRPC bound to localhost — workers reach it via tunnel/mTLS/private net
      - "127.0.0.1:7233:7233"

  temporal-admin-tools:
    image: temporalio/admin-tools:${TEMPORAL_ADMINTOOLS_VERSION}
    container_name: temporal-admin-tools
    restart: unless-stopped
    depends_on: [temporal]
    environment:
      TEMPORAL_ADDRESS: temporal:7233
      TEMPORAL_CLI_ADDRESS: temporal:7233
    networks: [temporal-net]
    stdin_open: true
    tty: true

  temporal-ui:
    image: temporalio/ui:${TEMPORAL_UI_VERSION}
    container_name: temporal-ui
    restart: unless-stopped
    depends_on: [temporal]
    environment:
      TEMPORAL_ADDRESS: temporal:7233
      TEMPORAL_CORS_ORIGINS: https://temporal.example.com
    networks: [temporal-net]
    ports:
      - "127.0.0.1:8080:8080"

volumes:
  postgres-data:

networks:
  temporal-net:
    driver: bridge
```

Three deliberate choices: gRPC + UI bound to 127.0.0.1 only, dynamic config mounted from host (edit without rebuilding), 72h retention on the default namespace (bump for prod, but it drives storage growth).
First Boot
```bash
cd /opt/temporal
docker compose up -d
docker compose logs -f temporal
# Ctrl+C once Frontend/History/Matching/Worker all report started
curl -I http://127.0.0.1:8080
```

Auto-setup creates schema and registers the default namespace. The UI is up locally but not yet exposed publicly.
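Scripts that drive this boot sequence (CI deploys, restore drills) shouldn't tail logs by hand. A small retry helper is enough; the probe command shown in the usage comment is just an example using the UI port from the compose file:

```shell
#!/bin/sh
# Retry helper: run a probe command until it succeeds or attempts run out.
wait_for() {
  tries=$1; shift
  i=1
  while [ "$i" -le "$tries" ]; do
    "$@" >/dev/null 2>&1 && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# usage: wait_for 30 curl -fsI http://127.0.0.1:8080
```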
Move Off auto-setup for Production
auto-setup runs schema setup on every start (risky during upgrades) and auto-registers namespaces. Once schema exists, swap to the production image:
In `docker-compose.yml`, change the temporal service:

```yaml
  temporal:
    image: temporalio/server:${TEMPORAL_VERSION}
    # remove DEFAULT_NAMESPACE and DEFAULT_NAMESPACE_RETENTION env vars
```

```bash
docker compose up -d
docker compose ps
```

From here on, schema upgrades are explicit through admin-tools, not run on every restart.
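When a new server release does ship a schema change, the upgrade runs from the admin-tools container. The invocation below is a sketch of that flow; the flag spellings and schema directory are assumptions that vary between versions, so confirm with `temporal-sql-tool --help` inside your container before relying on it.

```shell
#!/bin/sh
# Sketch of an explicit schema upgrade via admin-tools. Flag names and the
# schema path are assumptions -- verify against your admin-tools version.
docker compose exec temporal-admin-tools temporal-sql-tool \
  --ep postgresql --port 5432 \
  --user temporal --password "$POSTGRES_PASSWORD" \
  --plugin postgres12 --db temporal \
  update-schema --schema-dir /etc/temporal/schema/postgresql/v12/temporal/versioned
```

Take a PostgreSQL dump before running any schema migration.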
Nginx + Let's Encrypt + Basic Auth
The Web UI has no built-in auth. Anyone reaching port 8080 sees every workflow, signal, and search attribute. Nginx with TLS and Basic Auth fixes it.
```bash
sudo apt install -y nginx certbot python3-certbot-nginx
sudo htpasswd -c /etc/nginx/.temporal-htpasswd admin
# enter a strong password
```

Create `/etc/nginx/sites-available/temporal`:

```nginx
server {
    listen 80;
    server_name temporal.example.com;
    location / { return 301 https://$host$request_uri; }
}

server {
    listen 443 ssl http2;
    server_name temporal.example.com;

    ssl_certificate /etc/letsencrypt/live/temporal.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/temporal.example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    auth_basic "Temporal";
    auth_basic_user_file /etc/nginx/.temporal-htpasswd;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 300s;
    }
}
```

Enable the site and obtain a certificate:

```bash
sudo ln -s /etc/nginx/sites-available/temporal /etc/nginx/sites-enabled/
sudo nginx -t
sudo certbot --nginx -d temporal.example.com \
  --redirect --agree-tos -m you@example.com --no-eff-email
sudo systemctl reload nginx
```

Note that `nginx -t` fails while the Let's Encrypt files don't exist yet. If so, temporarily comment out the 443 server block, run certbot, then restore it and reload.

Lock Down the gRPC Frontend
Port 7233 is the API workers and clients use to start workflows and send signals. The OSS server has no auth by default — anyone who reaches it can do anything.
- Option A — workers on this VPS: they connect to `127.0.0.1:7233` (or `temporal:7233` from inside the Docker network). The firewall blocks 7233 entirely. The right answer for most small self-hosted deployments.
- Option B — WireGuard for remote workers: bind the frontend to the WireGuard interface address; keep 7233 blocked publicly.
- Option C — mTLS: TLS with a private CA + client certs. The right production answer (this is effectively what Temporal Cloud does), but meaningful work to set up and rotate.
Whatever you pick, never just open 7233 publicly. No rate limit, no auth challenge, comprehensive API.
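Whichever option you choose, verify the bind after every compose change. This helper scans listener output (e.g. from `ss -tln`) and fails if 7233 is reachable on anything but loopback; the function name is ours, not a standard tool:

```shell
#!/bin/sh
# Read `ss -tln`-style output on stdin; fail if any listener on :7233
# is bound to something other than 127.0.0.1.
grpc_loopback_only() {
  ! grep ':7233' | grep -vq '127\.0\.0\.1:7233'
}

# usage: ss -tln | grpc_loopback_only || echo "7233 is exposed!"
```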
Temporal CLI + Backups
```bash
curl -sSf https://temporal.download/cli.sh | sh
sudo mv $HOME/.temporalio/bin/temporal /usr/local/bin/
temporal --version
```

Create a production namespace with explicit retention:

```bash
temporal operator namespace create \
  --address 127.0.0.1:7233 \
  --retention 7d \
  production
```

Back up PostgreSQL:

```bash
docker compose exec -T postgresql \
  pg_dump -U temporal temporal | gzip \
  > /var/backups/temporal-$(date +%F).sql.gz
```

Cron the dump, ship to off-host storage with restic. Retention is the storage cost lever — start at 7d and adjust based on what you actually query.
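The dump command above grows `/var/backups` forever. Local rotation is one `find` away; the 14-day window, directory, and filename pattern are our choices here, not Temporal defaults, and off-host restic copies still carry the long-term history:

```shell
#!/bin/sh
# Delete local dumps older than 14 days. Directory, pattern, and window
# are assumptions -- match them to your backup job.
BACKUP_DIR=/var/backups
find "$BACKUP_DIR" -name 'temporal-*.sql.gz' -mtime +14 -delete
```

Append it to the same cron entry as the dump so rotation can't drift out of sync.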
Common Issues
- UI loads but shows "no namespaces": CORS — set `TEMPORAL_CORS_ORIGINS` to your real HTTPS hostname
- "connection refused" from a worker: 7233 is bound to 127.0.0.1 by design — use a tunnel or run the worker on the same host
- OOM kill of Elasticsearch: JVM heap not configured for the box — skip ES on 4 GB tiers, or set `ES_JAVA_OPTS=-Xms1g -Xmx1g`
- Schema mismatch after upgrade: you swapped to `server` but skipped the admin-tools migration — run `temporal-sql-tool` from the admin container
