Deploy Pulumi with a Self-Hosted State Backend
Run Pulumi against your own MinIO state backend on a RamNode VPS — keep stack state under your control, drop the Pulumi Cloud seat caps, and back it all up off-host.
At a Glance
| | |
| --- | --- |
| Project | Pulumi (CLI) + MinIO state backend |
| License | Pulumi: Apache 2.0 · MinIO: AGPL v3 |
| Recommended Plan | Cloud VPS 2 vCPU / 2 GB+ RAM |
| OS | Ubuntu 24.04 LTS |
| Languages Supported | TypeScript, Python, Go, .NET, Java, YAML |
| Estimated Setup Time | 45–60 minutes |
| Estimated Setup Time | 45–60 minutes |
When self-hosting makes sense
- You want stacks owned by your team without inviting members to a third-party service
- You already run object storage (MinIO, Ceph, Garage) and want Pulumi state alongside other artifacts
- State must live inside a specific jurisdiction or network boundary
- Prerequisites: an A or AAAA record (e.g. `pulumi-state.example.com`) pointed at the VPS, a sudo user, and key-only SSH
Initial Server Hardening
```shell
sudo apt update && sudo apt -y upgrade
sudo apt -y install ufw fail2ban ca-certificates curl gnupg
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
sudo systemctl enable --now fail2ban
```

Confirm `PasswordAuthentication no` and `PermitRootLogin no` in `sshd_config` before you log out.
Install the Pulumi CLI
```shell
curl -fsSL https://get.pulumi.com | sh
echo 'export PATH="$HOME/.pulumi/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
pulumi version
```

For a system-wide install, the script accepts `--install-root`. The upstream installer keeps future upgrades to a single command.
Install a Language Runtime
```shell
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt -y install nodejs
node --version && npm --version
```

For Python instead:

```shell
sudo apt -y install python3 python3-venv python3-pip
```

The remainder of this guide uses TypeScript examples, but the backend setup is language-agnostic.
Choose a State Backend
- Local filesystem (`file://~`): JSON state under `~/.pulumi`. Fine for a single operator on a single host. Skip ahead to step 8 and run `pulumi login --local`.
- S3-compatible (`s3://bucket?endpoint=...`): state in MinIO/Garage. Multiple operators, locking via the storage layer, reachable from CI. Continue with step 5.
Deploy MinIO
```shell
sudo useradd -r -s /sbin/nologin minio-user
sudo mkdir -p /var/lib/minio /etc/minio
sudo chown -R minio-user:minio-user /var/lib/minio /etc/minio
wget https://dl.min.io/server/minio/release/linux-amd64/minio -O /tmp/minio
sudo install -o root -g root -m 0755 /tmp/minio /usr/local/bin/minio
```

Create `/etc/default/minio` with the root credentials and bind addresses:

```shell
MINIO_ROOT_USER=changeme-root
MINIO_ROOT_PASSWORD=replace-with-32-char-random-string
MINIO_VOLUMES="/var/lib/minio"
MINIO_OPTS="--address 127.0.0.1:9000 --console-address 127.0.0.1:9001"
```

```shell
sudo chmod 0640 /etc/default/minio
sudo chown root:minio-user /etc/default/minio
```

Save the unit file as `/etc/systemd/system/minio.service`:

```ini
[Unit]
Description=MinIO Object Storage
After=network-online.target
Wants=network-online.target

[Service]
User=minio-user
Group=minio-user
EnvironmentFile=/etc/default/minio
ExecStart=/usr/local/bin/minio server $MINIO_OPTS $MINIO_VOLUMES
Restart=always
LimitNOFILE=65536
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/minio
NoNewPrivileges=true

[Install]
WantedBy=multi-user.target
```

```shell
sudo systemctl daemon-reload
sudo systemctl enable --now minio
sudo systemctl status minio --no-pager
```

Binding to localhost ensures only the reverse proxy can reach MinIO directly.
Front MinIO with Nginx and Let's Encrypt
```shell
sudo apt -y install nginx certbot python3-certbot-nginx
```

Create `/etc/nginx/sites-available/pulumi-state.conf`:

```nginx
server {
    listen 80;
    server_name pulumi-state.example.com;
    client_max_body_size 1G;

    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 60s;
        proxy_send_timeout 300s;
        proxy_read_timeout 300s;
        proxy_buffering off;
        proxy_request_buffering off;
        chunked_transfer_encoding off;
    }
}
```

```shell
sudo ln -s /etc/nginx/sites-available/pulumi-state.conf /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
sudo certbot --nginx -d pulumi-state.example.com \
  --redirect --non-interactive --agree-tos -m you@example.com
```

Create the Bucket and Service Account
```shell
wget https://dl.min.io/client/mc/release/linux-amd64/mc -O /tmp/mc
sudo install -o root -g root -m 0755 /tmp/mc /usr/local/bin/mc
mc alias set local https://pulumi-state.example.com \
  changeme-root replace-with-32-char-random-string
mc mb local/pulumi-state
mc admin user svcacct add local changeme-root
```

The `svcacct add` command prints an Access Key and Secret Key — save these for the next step. For tighter scoping, restrict the service account to the `pulumi-state` bucket: pass `--policy` to `svcacct add`, or attach a restricted policy to a dedicated user via `mc admin policy attach`.
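A scoped policy could look like the sketch below. The file name and path are illustrative, and the `mc` invocation is shown as a comment because it needs the live server from the previous step:

```shell
# Illustrative policy: read/write on the pulumi-state bucket only.
cat > /tmp/pulumi-state-rw.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": ["arn:aws:s3:::pulumi-state"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::pulumi-state/*"]
    }
  ]
}
EOF

# Then create the service account with the policy attached inline:
#   mc admin user svcacct add local changeme-root --policy /tmp/pulumi-state-rw.json
```

An inline policy travels with the service account, so revoking the account revokes the grant in one step.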
Point Pulumi at the Backend
```shell
export AWS_ACCESS_KEY_ID="your-minio-access-key"
export AWS_SECRET_ACCESS_KEY="your-minio-secret-key"
export AWS_REGION="us-east-1"  # placeholder — MinIO ignores it

pulumi login 's3://pulumi-state?endpoint=pulumi-state.example.com&s3ForcePathStyle=true'
pulumi whoami --verbose
```

Secrets and a First Project
Self-hosted backends encrypt stack secrets with a passphrase. Generate a strong one and export it before running Pulumi commands:
```shell
openssl rand -base64 32
export PULUMI_CONFIG_PASSPHRASE="paste-the-generated-value"
```

For higher assurance, point at AWS KMS or self-hosted Vault: `--secrets-provider="awskms://alias/pulumi"` or `--secrets-provider="hashivault://pulumi"` on `pulumi stack init`.
```shell
mkdir -p ~/pulumi/demo && cd ~/pulumi/demo
pulumi new aws-typescript --name demo --stack dev \
  --description "self-hosted backend test" --yes
```

Replace `index.ts` with:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const bucket = new aws.s3.BucketV2("demo-bucket", {
    tags: { managedBy: "pulumi" },
});

export const bucketName = bucket.id;
```

```shell
pulumi config set aws:region us-east-1
pulumi preview
pulumi up
mc ls --recursive local/pulumi-state
# expect: .pulumi/stacks/demo/dev.json (encrypted)
pulumi destroy
pulumi stack rm dev --yes
```

Backups and Operations
Pulumi state is the source of truth — lose it and you face manual reconciliation against the live cloud. Treat the MinIO bucket as a tier-zero backup target.
```shell
sudo apt -y install restic
sudo restic -r sftp:backup@backup.example.com:/restic/pulumi init
```

Create a backup script (e.g. `/usr/local/bin/backup-pulumi-state.sh`):

```shell
#!/usr/bin/env bash
set -euo pipefail
export RESTIC_REPOSITORY="sftp:backup@backup.example.com:/restic/pulumi"
export RESTIC_PASSWORD_FILE="/etc/restic.pass"
systemctl is-active --quiet minio || exit 1
restic backup /var/lib/minio --tag pulumi-state
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
```

Make it executable, store the repo password in `/etc/restic.pass` with mode 0600, and run from a systemd timer or cron. For an additional layer, add a weekly `mc mirror` from the bucket to a second object store.
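The systemd timer could be sketched as the following pair of units; the unit names and script path are illustrative, not prescribed:

```ini
# /etc/systemd/system/restic-pulumi.service
[Unit]
Description=Restic backup of Pulumi state

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup-pulumi-state.sh
```

```ini
# /etc/systemd/system/restic-pulumi.timer
[Unit]
Description=Nightly Pulumi state backup

[Timer]
OnCalendar=daily
RandomizedDelaySec=30m
Persistent=true

[Install]
WantedBy=timers.target
```

Enable with `sudo systemctl enable --now restic-pulumi.timer`; `systemctl list-timers` shows the next scheduled run.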
- Upgrades: re-run the installer to pull a new CLI release; pin the version in CI
- Multiple operators: per-user MinIO service accounts; Pulumi takes a per-stack lock through the backend
- CI runners: store keys + passphrase in your CI secret store; the same login command works anywhere
- Observability: nginx access logs cover backend traffic; MinIO exposes Prometheus metrics at `/minio/v2/metrics/cluster`
- State growth: stack files are typically a few hundred KB — one VPS comfortably hosts thousands
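MinIO's metrics endpoint is authenticated by default; `mc admin prometheus generate local` prints a ready-made scrape config. What lands in `prometheus.yml` looks roughly like this sketch (job name and token are placeholders):

```yaml
scrape_configs:
  - job_name: minio-cluster
    metrics_path: /minio/v2/metrics/cluster
    scheme: https
    bearer_token: <token printed by mc admin prometheus generate>
    static_configs:
      - targets: ["pulumi-state.example.com"]
```

Scraping through the nginx vhost keeps the localhost-only binding intact.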
Common Issues
- Login fails with "InvalidEndpoint": missing `s3ForcePathStyle=true` — MinIO uses path-style URLs, not virtual-hosted
- Empty `AWS_REGION` error: the AWS SDK requires something non-empty even for MinIO
- "could not decrypt secret" on a teammate's machine: `PULUMI_CONFIG_PASSPHRASE` not exported — share it via your password manager
- Pulumi state lock stuck: a previous run was killed; run `pulumi cancel` on the stack
