## Why Multi-Server?
A single VPS can handle a surprising amount of traffic, but eventually you'll need more, whether for redundancy, geographic distribution, or raw capacity.
### Redundancy
A single server is a single point of failure. Hardware dies, datacenters have outages. Running across multiple servers means your applications stay online.
### Geographic Distribution
Users in Amsterdam shouldn't wait for packets to travel to Los Angeles. Deploy in multiple RamNode regions (NYC, Atlanta, LA, Seattle, Netherlands).
### Resource Isolation
Production on dedicated hardware, staging on smaller instances, resource-intensive tasks (AI, video) on specialized servers.
### Horizontal Scaling
Web applications behind a load balancer can handle virtually unlimited traffic by adding more nodes.
## Architecture Overview
In a multi-server Coolify setup, your main Coolify instance connects to remote servers via SSH. It deploys containers, monitors health, and manages the entire fleet.
```
┌──────────────────────────────────────────────────────────┐
│                    Coolify Dashboard                     │
│                   (Management Server)                    │
│                    RamNode NYC - 4GB                     │
└─────────────────────┬────────────────────────────────────┘
                      │ SSH + Docker API
       ┌──────────────┼──────────────┐
       │              │              │
       ▼              ▼              ▼
 ┌───────────┐  ┌───────────┐  ┌───────────┐
 │ Worker 1  │  │ Worker 2  │  │ Worker 3  │
 │ NYC 8GB   │  │  LA 8GB   │  │  NL 8GB   │
 │ (US-East) │  │ (US-West) │  │ (Europe)  │
 └───────────┘  └───────────┘  └───────────┘
```

Your applications run on the worker nodes, not the management server. This separation keeps your Coolify dashboard responsive and secure.
## Adding a Remote Server
### Step 1: Provision the Server
Order a new VPS from RamNode:
- Plan: Standard 4GB or 8GB depending on workload
- Location: Different region than your Coolify server for redundancy
- OS: Ubuntu 22.04 or 24.04
### Step 2: Prepare the Remote Server
```bash
ssh root@new-server-ip

# Update system
apt update && apt upgrade -y

# Install Docker (Coolify needs this)
curl -fsSL https://get.docker.com | sh

# Verify Docker is running
systemctl status docker
```

### Step 3: Configure SSH Access
On your Coolify management server:
```bash
# Generate a dedicated key if you don't have one
ssh-keygen -t ed25519 -C "coolify-deploy" -f ~/.ssh/coolify_deploy

# Copy the public key to the remote server
ssh-copy-id -i ~/.ssh/coolify_deploy.pub root@new-server-ip

# Test the connection
ssh -i ~/.ssh/coolify_deploy root@new-server-ip "echo 'Connection successful'"
```

### Step 4: Add Server in Coolify
- Go to Servers → Add Server
- Fill in the details:
| Setting | Value |
|---|---|
| Name | worker-la (descriptive name) |
| Description | US West production worker |
| IP Address | Your new server's IP |
| Port | 22 |
| User | root |
| Private Key | Paste your ~/.ssh/coolify_deploy private key |
### Step 5: Create a Destination
Destinations are Docker networks where containers run:
- Click on your new server → Destinations → Add Destination
- Name: `production`, Network: `coolify`
- Save
## Deploying to Remote Servers
### Select Destination During Deployment
- Add Resource → Select your application type
- In the configuration, find Server or Destination
- Choose your remote server's destination
The application deploys to the remote server, not your management node.
### Moving Existing Applications
To migrate an application to a different server:
- Go to the resource → Settings
- Change the Destination to your remote server
- Redeploy
Coolify builds and deploys on the new server. Update DNS if the server IP changed.
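After the DNS change, a quick sanity check from your workstation; the domain here is a placeholder:

```bash
# Confirm the A record now resolves to the new server's IP
dig +short app.yourdomain.com

# Confirm the application answers from the new box
curl -I https://app.yourdomain.com
```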
## Multi-Region Deployment Strategies
### Strategy 1: Active-Passive (Failover)
```
Primary (NYC)  ←── Active, handles all traffic
      │
      └── DNS points here

Standby (LA)   ←── Deployed, idle, ready to activate
```
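With active-passive, failover is the part you must automate. One low-tech approach is a watchdog script, run from cron on a box that isn't the primary, that probes the primary's health endpoint and repoints DNS at the standby when it stops answering. A minimal sketch assuming Cloudflare DNS; the IPs, zone/record IDs, token, and health path are all placeholders:

```bash
#!/usr/bin/env bash
# Watchdog sketch: repoint an A record at the standby if the primary is down.
# All values below are placeholders -- substitute your own.
PRIMARY_IP="worker-nyc-ip"
STANDBY_IP="worker-la-ip"
ZONE_ID="your-cloudflare-zone-id"
RECORD_ID="your-dns-record-id"

if ! curl -fsS --max-time 5 "http://${PRIMARY_IP}/health" > /dev/null; then
  # Primary unreachable: update the record via Cloudflare's v4 API
  curl -fsS -X PATCH \
    "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${RECORD_ID}" \
    -H "Authorization: Bearer ${CF_API_TOKEN}" \
    -H "Content-Type: application/json" \
    --data "{\"content\": \"${STANDBY_IP}\"}"
fi
```

Keep the record's TTL low (60-120 seconds), or failover will sit waiting on DNS caches.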
### Strategy 2: Active-Active (Load Balanced)

```
                          ┌──→ Worker NYC
                          │
User ──→ Load Balancer ───┤
                          │
                          └──→ Worker LA
```
### Strategy 3: Geographic Routing

```
US Users   ──────→ Worker NYC
EU Users   ──────→ Worker NL
APAC Users ──────→ Worker (or closest available)
```

## Setting Up Load Balancing
### Option 1: Cloudflare Load Balancing
If you're already using Cloudflare: add both server IPs as origins, create a load balancer with health checks, configure traffic steering.
Pros: No infrastructure to manage, global anycast, DDoS protection. Pricing: $5/month for basic.
### Option 2: Dedicated HAProxy Node
Run HAProxy on its own small VPS in front of the workers. `docker-compose.yml`:

```yaml
version: "3.8"
services:
  haproxy:
    image: haproxy:2.9
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
      - ./certs:/etc/ssl/certs:ro
```

And `haproxy.cfg`:

```
global
    log stdout format raw local0

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    option httpchk GET /health

frontend http
    bind *:80
    redirect scheme https code 301 if !{ ssl_fc }

frontend https
    bind *:443 ssl crt /etc/ssl/certs/combined.pem
    default_backend app_servers

backend app_servers
    balance roundrobin
    option httpchk GET /health
    server nyc worker-nyc-ip:80 check
    server la worker-la-ip:80 check
```

### Option 3: Coolify's Built-in Proxy
Coolify uses Traefik as its reverse proxy. For multi-server setups, you can configure Traefik on your management node to route to remote servers.
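The built-in proxy isn't a turnkey multi-server load balancer, but Traefik's file provider can describe remote backends directly. A minimal sketch of a dynamic-config file for Traefik v2; the domain, worker IPs, and entrypoint names are assumptions that must match your own proxy setup:

```yaml
# Dynamic config picked up by Traefik's file provider (placeholder values)
http:
  routers:
    api-remote:
      rule: "Host(`api.yourdomain.com`)"
      entryPoints:
        - https
      tls: {}
      service: api-remote
  services:
    api-remote:
      loadBalancer:
        healthCheck:
          path: /health      # same /health route used by the HAProxy config
          interval: "10s"
        servers:
          - url: "http://worker-nyc-ip:3000"
          - url: "http://worker-la-ip:3000"
```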
## Shared State: Databases

Every worker should talk to the same database rather than running its own copy: host it on one node (the management server or a dedicated VPS) and give every application instance the same connection string.
### Session Management
If your application uses sessions, they need to be shared across servers.
#### Redis for Sessions
```yaml
# On your database server or a dedicated node
services:
  redis-sessions:
    image: redis:7-alpine
    restart: always
    ports:
      - "6379:6379"
    command: redis-server --requirepass ${REDIS_PASSWORD}
    volumes:
      - redis-data:/data

volumes:
  redis-data:
```

Then point your application's session store at it. In Node.js (Express with `connect-redis`):

```js
const session = require('express-session');
const RedisStore = require('connect-redis').default;
const redis = require('redis');

const redisClient = redis.createClient({
  url: `redis://:${process.env.REDIS_PASSWORD}@redis-server-ip:6379`
});
redisClient.connect().catch(console.error);

app.use(session({
  store: new RedisStore({ client: redisClient }),
  secret: process.env.SESSION_SECRET,
  resave: false,
  saveUninitialized: false
}));
```

Or, for a Laravel application, in `.env`:

```
SESSION_DRIVER=redis
REDIS_HOST=redis-server-ip
REDIS_PASSWORD=your-password
REDIS_PORT=6379
```

#### JWT Tokens (Stateless)
Alternatively, use stateless authentication: user authenticates, receives a signed JWT, JWT is verified on each request without server-side state.
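A minimal sketch using the `jsonwebtoken` package; the payload shape and expiry are illustrative, and `JWT_SECRET` must be identical on every server:

```js
// npm install jsonwebtoken
const jwt = require('jsonwebtoken');

// On login: issue a signed token (payload and expiry are illustrative)
function issueToken(user) {
  return jwt.sign({ sub: user.id }, process.env.JWT_SECRET, { expiresIn: '1h' });
}

// On every request: verify the signature -- no shared session store needed
function requireAuth(req, res, next) {
  const token = (req.headers.authorization || '').replace('Bearer ', '');
  try {
    req.user = jwt.verify(token, process.env.JWT_SECRET);
    next();
  } catch (err) {
    res.status(401).json({ error: 'Invalid or expired token' });
  }
}

app.get('/me', requireAuth, (req, res) => res.json({ id: req.user.sub }));
```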
## File Storage
If your application handles uploads, they need to be accessible from all servers.
### S3-Compatible Object Storage
```js
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({
  region: 'us-east-1',
  endpoint: 'https://s3.us-east-1.amazonaws.com', // or Backblaze, Wasabi, etc.
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY,
    secretAccessKey: process.env.S3_SECRET_KEY
  }
});

async function uploadFile(buffer, key) {
  await s3.send(new PutObjectCommand({
    Bucket: process.env.S3_BUCKET,
    Key: key,
    Body: buffer
  }));
}
```
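For context, a hypothetical Express route that feeds uploads into `uploadFile()`; it assumes `multer` with memory storage parses the multipart body:

```js
const multer = require('multer');
const upload = multer({ storage: multer.memoryStorage() });

app.post('/upload', upload.single('file'), async (req, res) => {
  const key = `uploads/${Date.now()}-${req.file.originalname}`;
  await uploadFile(req.file.buffer, key);
  res.json({ key });
});
```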
### Cost-Effective Options

- Backblaze B2: $0.006/GB storage
- Wasabi: $0.0069/GB, no egress fees
- Cloudflare R2: No egress fees
### NFS Shared Storage
For applications that must use filesystem paths: set up an NFS server on one node, mount on all workers. Note: NFS server becomes a single point of failure.
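A minimal setup sketch for Ubuntu; the export path, subnet, and IPs are placeholders:

```bash
# On the node acting as the NFS server
apt install -y nfs-kernel-server
mkdir -p /srv/uploads
# Allow workers on your private network to read/write (placeholder subnet)
echo "/srv/uploads 10.0.0.0/24(rw,sync,no_subtree_check)" >> /etc/exports
exportfs -ra

# On each worker
apt install -y nfs-common
mkdir -p /mnt/uploads
mount nfs-server-ip:/srv/uploads /mnt/uploads
# Persist the mount across reboots
echo "nfs-server-ip:/srv/uploads /mnt/uploads nfs defaults 0 0" >> /etc/fstab
```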
## Monitoring Multi-Server Deployments
### Centralized Logging with Loki + Grafana
Run Loki and Grafana on your management server:

```yaml
version: "3.8"
services:
  loki:
    image: grafana/loki:2.9.0
    ports:
      - "3100:3100"
    volumes:
      - loki-data:/loki
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD}
    volumes:
      - grafana-data:/var/lib/grafana

volumes:
  loki-data:
  grafana-data:
```

Then, on each worker, install the Loki logging plugin (`docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions`) and point Docker's log driver at Loki in `/etc/docker/daemon.json`:

```json
{
  "log-driver": "loki",
  "log-opts": {
    "loki-url": "http://management-server-ip:3100/loki/api/v1/push",
    "loki-batch-size": "400"
  }
}
```

Restart Docker afterwards: `systemctl restart docker`
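Once logs flow in, add Loki as a data source in Grafana and filter by server in the Explore view. A sample LogQL query; the `host` label is the Loki Docker driver's default, so adjust if you've customized the labels:

```
{host="worker-nyc"} |= "error"
```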
### Health Monitoring
Deploy Uptime Kuma (from Part 4) and monitor:
- Each worker server's health endpoint
- Individual application endpoints on each server
- Database connectivity
- Load balancer health
### Resource Monitoring with Netdata
```bash
bash <(curl -Ss https://get.netdata.cloud/kickstart.sh)
```

Access each server's Netdata at `http://server-ip:19999`, or connect the nodes to Netdata Cloud for a unified dashboard.
## Scaling Checklist
Before going multi-server, ensure your application is ready:
| Requirement | Solution |
|---|---|
| Stateless application | Move sessions to Redis or use JWTs |
| Shared database | Centralized DB or replication |
| File storage | Object storage (S3-compatible) |
| Environment parity | Same env vars, same configs across servers |
| Health endpoints | /health route for load balancer checks |
| Graceful shutdown | Handle SIGTERM, drain connections |
| Database migrations | Run once, not per-instance (see the sketch below) |
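For that last row, one approach is a Postgres advisory lock: whichever instance starts first runs the migrations and the rest skip them. A sketch assuming the `pg` package and a `DATABASE_URL` env var; `migrateOnce` is a hypothetical helper, and the lock key is arbitrary but must match across instances:

```js
const { Client } = require('pg');

// Run migrations at most once across all instances.
// runMigrations is whatever your migration tool exposes.
async function migrateOnce(runMigrations) {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  // Arbitrary app-wide lock key; use the same number on every instance
  const { rows } = await client.query('SELECT pg_try_advisory_lock(42) AS locked');
  if (rows[0].locked) {
    await runMigrations();
    await client.query('SELECT pg_advisory_unlock(42)');
  }
  await client.end();
}
```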
## Example: Scaling a Node.js API
### 1. Prepare the Application
```js
// Use Redis for sessions
const RedisStore = require('connect-redis').default;
app.use(session({
  store: new RedisStore({ client: redisClient }),
  // ...
}));

// Health endpoint for load balancer
app.get('/health', (req, res) => {
  res.json({ status: 'healthy', server: process.env.SERVER_ID });
});

// Graceful shutdown
process.on('SIGTERM', () => {
  console.log('SIGTERM received, shutting down gracefully');
  server.close(() => {
    console.log('Server closed');
    process.exit(0);
  });
});
```

### 2. Deploy to Multiple Servers
In Coolify: Deploy to worker-nyc destination, then deploy again to worker-la destination (same repo, same config).
### 3. Configure Load Balancer
```
backend api_servers
    balance roundrobin
    option httpchk GET /health
    server nyc worker-nyc-ip:3000 check
    server la worker-la-ip:3000 check
```

### 4. Verify
```bash
# Hit the endpoint multiple times
for i in {1..10}; do
  curl -s https://api.yourdomain.com/health | jq .server
done

# Should alternate between servers
"nyc"
"la"
"nyc"
"la"
...
```

## What's Next
Your infrastructure now spans multiple servers and regions. You've learned how to add remote servers, deploy across them, handle shared state, and monitor the fleet.
In Part 6, we'll harden everything for production: wildcard SSL, CI/CD automation, security best practices, and advanced troubleshooting techniques.
