    Part 5 of 6

    Multi-Server & Scaling

    Add remote servers to Coolify, deploy applications across regions, and build infrastructure that survives server failures.

    Multi-Server
    Geographic Distribution
    โฑ๏ธ 25 min
    1

    Why Multi-Server?

    A single VPS can handle a surprising amount of traffic, but eventually you'll need more, whether for redundancy, geographic distribution, or raw capacity.

    🛡️ Redundancy

    A single server is a single point of failure: hardware dies, and datacenters have outages. Running across multiple servers means your applications stay online.

    🌍 Geographic Distribution

    Users in Amsterdam shouldn't wait for packets to travel to Los Angeles. Deploy in multiple RamNode regions (NYC, Atlanta, LA, Seattle, Netherlands).

    🔒 Resource Isolation

    Production on dedicated hardware, staging on smaller instances, resource-intensive tasks (AI, video) on specialized servers.

    📈 Horizontal Scaling

    Web applications behind a load balancer can keep pace with growing traffic simply by adding more nodes.

    2. Architecture Overview

    In a multi-server Coolify setup, your main Coolify instance connects to remote servers via SSH. It deploys containers, monitors health, and manages the entire fleet.

    Multi-Server Architecture
    ┌──────────────────────────────────────────────────────┐
    │                  Coolify Dashboard                   │
    │                 (Management Server)                  │
    │                  RamNode NYC - 4GB                   │
    └────────────────────┬─────────────────────────────────┘
                         │ SSH + Docker API
          ┌──────────────┼──────────────┐
          │              │              │
          ▼              ▼              ▼
    ┌───────────┐  ┌───────────┐  ┌───────────┐
    │  Worker 1 │  │  Worker 2 │  │  Worker 3 │
    │  NYC 8GB  │  │  LA 8GB   │  │  NL 8GB   │
    │ (US-East) │  │ (US-West) │  │ (Europe)  │
    └───────────┘  └───────────┘  └───────────┘

    Your applications run on the worker nodes, not the management server. This separation keeps your Coolify dashboard responsive and secure.
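
    Before wiring this up in the dashboard, it helps to confirm the management server can reach a worker over SSH and drive its Docker Engine, since that is exactly the path Coolify uses. A quick check, with worker-ip as a placeholder:

    Verify SSH + Docker from the management server
    ssh root@worker-ip "docker info --format '{{.ServerVersion}}'"
    # Prints the worker's Docker version if both SSH and Docker are working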

    3. Adding a Remote Server

    Step 1: Provision the Server

    Order a new VPS from RamNode:

    • Plan: Standard 4GB or 8GB depending on workload
    • Location: Different region than your Coolify server for redundancy
    • OS: Ubuntu 22.04 or 24.04

    Step 2: Prepare the Remote Server

    SSH into the new server and set it up
    ssh root@new-server-ip
    
    # Update system
    apt update && apt upgrade -y
    
    # Install Docker (Coolify needs this)
    curl -fsSL https://get.docker.com | sh
    
    # Verify Docker is running
    systemctl status docker

    Step 3: Configure SSH Access

    On your Coolify management server:

    Generate and copy SSH key
    # Generate a dedicated key if you don't have one
    ssh-keygen -t ed25519 -C "coolify-deploy" -f ~/.ssh/coolify_deploy
    
    # Copy the public key to the remote server
    ssh-copy-id -i ~/.ssh/coolify_deploy.pub root@new-server-ip
    
    # Test the connection
    ssh -i ~/.ssh/coolify_deploy root@new-server-ip "echo 'Connection successful'"

    Step 4: Add Server in Coolify

    1. Go to Servers → Add Server
    2. Fill in the details:
    Setting        Value
    Name           worker-la (descriptive name)
    Description    US West production worker
    IP Address     Your new server's IP
    Port           22
    User           root
    Private Key    Paste your ~/.ssh/coolify_deploy private key

    Step 5: Create a Destination

    Destinations are Docker networks where containers run:

    1. Click on your new server → Destinations → Add Destination
    2. Name: production, Network: coolify
    3. Save
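
    To confirm the destination was created, list the Docker networks on the worker; the coolify network should now exist.

    Verify the destination network (on the worker)
    docker network ls --filter name=coolify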

    4. Deploying to Remote Servers

    Select Destination During Deployment

    1. Add Resource → Select your application type
    2. In the configuration, find Server or Destination
    3. Choose your remote server's destination

    The application deploys to the remote server, not your management node.

    Moving Existing Applications

    To migrate an application to a different server:

    1. Go to the resource → Settings
    2. Change the Destination to your remote server
    3. Redeploy

    Coolify builds and deploys on the new server. Update DNS if the server IP changed.
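
    If the application's public IP changed with the move, confirm DNS has propagated before retiring the old server. A quick check, substituting your real hostname:

    Check DNS after migration
    dig +short app.yourdomain.com
    # Should print the new server's IP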

    5. Multi-Region Deployment Strategies

    Strategy 1: Active-Passive (Failover)

    Active-Passive Setup
    Primary (NYC) ─── Active, handles all traffic
        │
        └── DNS points here

    Standby (LA) ─── Deployed, idle, ready to activate
    ✓ Simple, low resource usage
    ✗ Manual failover, brief downtime during DNS propagation
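
    Failover in this model is a DNS update. If your DNS is hosted on Cloudflare, the flip can be scripted against its v4 API; a rough sketch, where ZONE_ID, RECORD_ID, CF_API_TOKEN, and the standby IP are placeholders you'd fill in:

    Failover sketch: repoint DNS to the standby
    curl -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
      -H "Authorization: Bearer $CF_API_TOKEN" \
      -H "Content-Type: application/json" \
      --data '{"type":"A","name":"app.yourdomain.com","content":"standby-la-ip","ttl":60}'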

    Strategy 2: Active-Active (Load Balanced)

    Active-Active Setup
                              ┌─── Worker NYC
    User ─── Load Balancer ───┤
                              └─── Worker LA
    ✓ Higher availability, zero-downtime failover
    ✗ More complex; requires shared state

    Strategy 3: Geographic Routing

    Geographic Routing
    US Users ─────── Worker NYC
    EU Users ─────── Worker NL
    APAC Users ───── Worker (or closest available)
    ✓ Lowest latency for users
    ✗ Data synchronization complexity

    6. Setting Up Load Balancing

    Option 1: Cloudflare Load Balancing

    If you're already using Cloudflare: add both server IPs as origins, create a load balancer with health checks, configure traffic steering.

    Pros: No infrastructure to manage, global anycast, DDoS protection. Pricing: $5/month for basic.

    Option 2: Dedicated HAProxy Node

    docker-compose.yml
    version: "3.8"
    
    services:
      haproxy:
        image: haproxy:2.9
        restart: always
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
          - ./certs:/etc/ssl/certs:ro
    haproxy.cfg
    global
        log stdout format raw local0
    
    defaults
        mode http
        timeout connect 5s
        timeout client 30s
        timeout server 30s
        option httpchk GET /health
    
    frontend http
        bind *:80
        redirect scheme https code 301 if !{ ssl_fc }
    
    frontend https
        bind *:443 ssl crt /etc/ssl/certs/combined.pem
        default_backend app_servers
    
    backend app_servers
        balance roundrobin
        option httpchk GET /health
        server nyc worker-nyc-ip:80 check
        server la worker-la-ip:80 check
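
    HAProxy expects the certificate chain and private key concatenated into a single file. Assuming Let's Encrypt-issued certificates, combined.pem can be built like this:

    Build combined.pem for HAProxy
    cat /etc/letsencrypt/live/yourdomain.com/fullchain.pem \
        /etc/letsencrypt/live/yourdomain.com/privkey.pem \
        > ./certs/combined.pem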

    Option 3: Coolify's Built-in Proxy

    Coolify uses Traefik as its reverse proxy. For multi-server setups, you can configure Traefik on your management node to route to remote servers.
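
    A sketch of that approach using Traefik's file provider, written here as a heredoc; the config path assumes Coolify's default proxy directory, and the hostname, port, and worker IPs are placeholders:

    Traefik dynamic config sketch
    cat > /data/coolify/proxy/dynamic/multi-server.yml <<'EOF'
    http:
      routers:
        app:
          rule: Host(`app.yourdomain.com`)
          service: app-workers
      services:
        app-workers:
          loadBalancer:
            servers:
              - url: "http://worker-nyc-ip:3000"
              - url: "http://worker-la-ip:3000"
    EOF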

    7. Shared State: Databases

    Centralized Database

    Centralized Database Architecture
    Worker NYC ───┐
                  ├──── Database Server (NYC)
    Worker LA ────┘

    All application instances connect to a single database server. Simple and consistent, but the database becomes a single point of failure.
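
    Each worker needs network access to the database server. A quick reachability check from a worker, with placeholder credentials:

    Test database connectivity from a worker
    psql "postgresql://app_user:app_password@db-server-ip:5432/app_db" -c "SELECT 1;"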

    Database Replication

    Primary-Replica Architecture
    Primary DB (NYC) ───── Replica DB (LA)
           │                      │
       Worker NYC             Worker LA
       (read/write)           (read-only)
    PostgreSQL replication config
    # On primary - create the replication role first (via psql):
    # CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'strong-password';

    # On primary - postgresql.conf
    wal_level = replica
    max_wal_senders = 3

    # On primary - pg_hba.conf
    host replication replicator replica-ip/32 scram-sha-256
    Set up replica
    # On replica (run against an empty data directory)
    pg_basebackup -h primary-ip -D /var/lib/postgresql/data -U replicator -P -R
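
    Once the replica is running, replication can be verified from both sides with standard PostgreSQL queries:

    Verify replication status
    # On primary: should list the replica with state 'streaming'
    psql -c "SELECT client_addr, state FROM pg_stat_replication;"

    # On replica: returns 't' while the server is replaying WAL
    psql -c "SELECT pg_is_in_recovery();"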

    Managed Database Services

    Consider offloading database management:

    • PlanetScale (MySQL): Global edge caching, automatic failover
    • Neon (PostgreSQL): Serverless, branching, auto-scaling
    • Supabase (PostgreSQL): Hosted Postgres with extras
    • Turso (SQLite): Edge-replicated SQLite

    8. Session Management

    If your application uses sessions, they need to be shared across servers.

    Redis for Sessions

    docker-compose.yml
    # On your database server or dedicated node
    services:
      redis-sessions:
        image: redis:7-alpine
        restart: always
        ports:
          - "6379:6379"
        command: redis-server --requirepass ${REDIS_PASSWORD}
        volumes:
          - redis-data:/data
    Node.js (Express) session config
    const session = require('express-session');
    const RedisStore = require('connect-redis').default;
    const redis = require('redis');

    const redisClient = redis.createClient({
      url: `redis://:${process.env.REDIS_PASSWORD}@redis-server-ip:6379`
    });
    redisClient.connect().catch(console.error); // node-redis v4: connect explicitly

    app.use(session({
      store: new RedisStore({ client: redisClient }),
      secret: process.env.SESSION_SECRET,
      resave: false,
      saveUninitialized: false
    }));
    Laravel .env
    SESSION_DRIVER=redis
    REDIS_HOST=redis-server-ip
    REDIS_PASSWORD=your-password
    REDIS_PORT=6379
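
    Before redeploying, confirm each worker can actually reach Redis (and firewall port 6379 so only your servers can connect):

    Test Redis connectivity from a worker
    redis-cli -h redis-server-ip -p 6379 -a "$REDIS_PASSWORD" ping
    # Expected reply: PONG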

    JWT Tokens (Stateless)

    Alternatively, use stateless authentication: the user authenticates once and receives a signed JWT, which is then verified on each request without any server-side state.

    ✓ Simpler multi-server setup, no Redis dependency
    ✗ Can't invalidate tokens instantly

    9. File Storage

    If your application handles uploads, they need to be accessible from all servers.

    S3-Compatible Object Storage

    Node.js with AWS SDK
    const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');
    
    const s3 = new S3Client({
      region: 'us-east-1',
      endpoint: 'https://s3.us-east-1.amazonaws.com', // or Backblaze, Wasabi, etc.
      credentials: {
        accessKeyId: process.env.S3_ACCESS_KEY,
        secretAccessKey: process.env.S3_SECRET_KEY
      }
    });
    
    async function uploadFile(buffer, key) {
      await s3.send(new PutObjectCommand({
        Bucket: process.env.S3_BUCKET,
        Key: key,
        Body: buffer
      }));
    }

    Cost-Effective Options

    • Backblaze B2: $0.006/GB/month storage
    • Wasabi: $0.0069/GB/month, no egress fees
    • Cloudflare R2: No egress fees

    NFS Shared Storage

    For applications that must use filesystem paths: set up an NFS server on one node and mount it on all workers, as sketched below. Note: the NFS server becomes a single point of failure.
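
    A minimal sketch of that setup, assuming Ubuntu on all nodes and /srv/uploads as the export (adjust paths and IPs for your environment):

    NFS server and client setup
    # On the storage node
    apt install -y nfs-kernel-server
    mkdir -p /srv/uploads
    echo "/srv/uploads worker-nyc-ip(rw,sync,no_subtree_check) worker-la-ip(rw,sync,no_subtree_check)" >> /etc/exports
    exportfs -ra

    # On each worker
    apt install -y nfs-common
    mkdir -p /mnt/uploads
    mount -t nfs storage-node-ip:/srv/uploads /mnt/uploads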

    10. Monitoring Multi-Server Deployments

    Centralized Logging with Loki + Grafana

    docker-compose.yml on management server
    version: "3.8"
    
    services:
      loki:
        image: grafana/loki:2.9.0
        ports:
          - "3100:3100"
        volumes:
          - loki-data:/loki
    
      grafana:
        image: grafana/grafana:latest
        ports:
          - "3000:3000"
        environment:
          - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD}
        volumes:
          - grafana-data:/var/lib/grafana
    
    volumes:
      loki-data:
      grafana-data:
    /etc/docker/daemon.json on each worker
    {
      "log-driver": "loki",
      "log-opts": {
        "loki-url": "http://management-server-ip:3100/loki/api/v1/push",
        "loki-batch-size": "400"
      }
    }
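
    The loki log driver is not built into Docker; it ships as a plugin that must be installed on each worker before restarting Docker with this config:

    Install the Loki Docker driver plugin
    docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions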

    Restart Docker: systemctl restart docker

    Health Monitoring

    Deploy Uptime Kuma (from Part 4) and monitor:

    • Each worker server's health endpoint
    • Individual application endpoints on each server
    • Database connectivity
    • Load balancer health

    Resource Monitoring with Netdata

    Install on each server
    bash <(curl -Ss https://get.netdata.cloud/kickstart.sh)

    Access each server's Netdata at http://server-ip:19999, or connect them to Netdata Cloud for a unified dashboard.

    11. Scaling Checklist

    Before going multi-server, ensure your application is ready:

    Requirement              Solution
    Stateless application    Move sessions to Redis or use JWTs
    Shared database          Centralized DB or replication
    File storage             Object storage (S3-compatible)
    Environment parity       Same env vars, same configs across servers
    Health endpoints         /health route for load balancer checks
    Graceful shutdown        Handle SIGTERM, drain connections
    Database migrations      Run once, not per-instance

    12. Example: Scaling a Node.js API

    1. Prepare the Application

    Ensure statelessness
    // Use Redis for sessions
    const RedisStore = require('connect-redis').default;
    app.use(session({
      store: new RedisStore({ client: redisClient }),
      // ...
    }));
    
    // Health endpoint for load balancer
    app.get('/health', (req, res) => {
      res.json({ status: 'healthy', server: process.env.SERVER_ID });
    });
    
    // Graceful shutdown
    process.on('SIGTERM', () => {
      console.log('SIGTERM received, shutting down gracefully');
      server.close(() => {
        console.log('Server closed');
        process.exit(0);
      });
    });

    2. Deploy to Multiple Servers

    In Coolify: Deploy to worker-nyc destination, then deploy again to worker-la destination (same repo, same config).

    3. Configure Load Balancer

    HAProxy backend config
    backend api_servers
        balance roundrobin
        option httpchk GET /health
        server nyc worker-nyc-ip:3000 check
        server la worker-la-ip:3000 check

    4. Verify

    Test load balancing
    # Hit the endpoint multiple times
    for i in {1..10}; do
      curl -s https://api.yourdomain.com/health | jq .server
    done
    
    # Should alternate between servers
    "nyc"
    "la"
    "nyc"
    "la"
    ...

    What's Next

    Your infrastructure now spans multiple servers and regions. You've learned how to add remote servers, deploy across them, handle shared state, and monitor the fleet.

    In Part 6, we'll harden everything for production: wildcard SSL, CI/CD automation, security best practices, and advanced troubleshooting techniques.