Orchestration Guide

    Deploy HashiCorp Nomad

    Nomad is a lightweight, flexible workload orchestrator that deploys and manages containers, non-containerized applications, and batch jobs. It is simpler to operate than Kubernetes while still providing powerful scheduling, which makes it a good fit for RamNode's VPS hosting.

    • Container Orchestration
    • Consul Integration
    • ACL Security
    • Multi-Datacenter

    Architecture Overview

    This guide deploys a single-server Nomad cluster suitable for development, staging, or small production workloads. The architecture consists of a Nomad server (scheduler and state store), a co-located Nomad client (task executor), Consul for service discovery, and Docker as the container runtime.

    For high-availability production environments, this can be extended to a 3- or 5-server cluster across multiple RamNode VPS instances.

    1. Prerequisites

    Recommended VPS Specifications

    Component    Minimum             Recommended
    CPU          1 vCPU              2+ vCPU
    RAM          2 GB                4 GB+
    Storage      20 GB NVMe SSD      40 GB+ NVMe SSD
    OS           Ubuntu 24.04 LTS    Ubuntu 24.04 LTS
    Network      1 Gbps              1 Gbps

    Before You Begin

    • A RamNode VPS provisioned with Ubuntu 24.04 LTS
    • Root or sudo SSH access to the VPS
    • A domain name or static IP for accessing the Nomad UI (optional but recommended)
    • Basic familiarity with Linux command-line administration

    2. Initial Server Setup

    Update system and install base packages
    sudo apt update && sudo apt upgrade -y
    sudo apt install -y curl gnupg software-properties-common \
      apt-transport-https ca-certificates unzip jq

    Configure Firewall

    Configure UFW for Nomad and Consul
    sudo ufw allow 22/tcp       # SSH
    sudo ufw allow 4646/tcp     # Nomad HTTP API
    sudo ufw allow 4647/tcp     # Nomad RPC
    sudo ufw allow 4648/tcp     # Nomad Serf (WAN gossip)
    sudo ufw allow 4648/udp     # Nomad Serf (WAN gossip)
    sudo ufw allow 8500/tcp     # Consul HTTP API
    sudo ufw allow 8301/tcp     # Consul Serf LAN
    sudo ufw allow 8301/udp     # Consul Serf LAN
    sudo ufw allow 8300/tcp     # Consul Server RPC
    sudo ufw --force enable

    Security Note: In production, restrict ports 4646, 4647, 4648, 8500, 8300, and 8301 to your private network or VPN CIDR range. Only SSH (22) and application ports should be publicly accessible.

    Set hostname and timezone
    sudo hostnamectl set-hostname nomad-server-01
    sudo timedatectl set-timezone UTC

    Create Nomad System User

    Create dedicated system user
    sudo useradd --system --home /etc/nomad.d --shell /bin/false nomad
    sudo mkdir -p /opt/nomad/data /etc/nomad.d
    sudo chown -R nomad:nomad /opt/nomad /etc/nomad.d

    3. Install Docker

    Nomad uses Docker as its primary task driver for running containerized workloads.

    Install Docker Engine
    # Add Docker's official GPG key
    sudo install -m 0755 -d /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
      | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    sudo chmod a+r /etc/apt/keyrings/docker.gpg
    
    # Add Docker repository
    echo "deb [arch=$(dpkg --print-architecture) \
      signed-by=/etc/apt/keyrings/docker.gpg] \
      https://download.docker.com/linux/ubuntu \
      $(lsb_release -cs) stable" | \
      sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    
    # Install Docker Engine
    sudo apt update
    sudo apt install -y docker-ce docker-ce-cli containerd.io
    sudo systemctl enable --now docker
    
    # Verify the installation
    sudo docker run hello-world

    4. Install Consul

    Consul provides service discovery and health checking for Nomad-scheduled workloads. While optional, it is strongly recommended for production deployments.

    Add HashiCorp repository and install Consul
    # Add HashiCorp GPG key and repository
    wget -O- https://apt.releases.hashicorp.com/gpg | \
      sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
    echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
      https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
      sudo tee /etc/apt/sources.list.d/hashicorp.list
    
    sudo apt update && sudo apt install -y consul

    Configure Consul

    Create /etc/consul.d/consul.hcl
    datacenter   = "dc1"
    data_dir     = "/opt/consul"
    log_level    = "INFO"
    
    server           = true
    bootstrap_expect = 1
    
    bind_addr   = "0.0.0.0"
    client_addr = "0.0.0.0"
    
    ui_config {
      enabled = true
    }
    
    connect {
      enabled = true
    }
    Enable and start Consul
    sudo systemctl enable consul
    sudo systemctl start consul
    consul members         # Verify Consul is running

    5. Install Nomad

    Install Nomad
    # HashiCorp repository already configured from Consul step
    sudo apt install -y nomad
    nomad version   # Verify installation

    Server Configuration

    Create /etc/nomad.d/nomad.hcl
    datacenter = "dc1"
    data_dir   = "/opt/nomad/data"
    bind_addr  = "0.0.0.0"
    
    server {
      enabled          = true
      bootstrap_expect = 1
    }
    
    client {
      enabled = true
    }
    
    plugin "docker" {
      config {
        allow_privileged = false
        volumes {
          enabled = true
        }
      }
    }
    
    consul {
      address = "127.0.0.1:8500"
    }

    Create Systemd Service

    Create /etc/systemd/system/nomad.service
    [Unit]
    Description=HashiCorp Nomad
    Documentation=https://nomadproject.io/docs/
    Wants=network-online.target
    After=network-online.target consul.service
    
    [Service]
    ExecReload=/bin/kill -HUP $MAINPID
    ExecStart=/usr/bin/nomad agent -config /etc/nomad.d/
    KillMode=process
    KillSignal=SIGINT
    LimitNOFILE=65536
    LimitNPROC=infinity
    Restart=on-failure
    RestartSec=2
    TasksMax=infinity
    OOMScoreAdjust=-1000
    
    [Install]
    WantedBy=multi-user.target
    Enable and start Nomad
    sudo systemctl daemon-reload
    sudo systemctl enable nomad
    sudo systemctl start nomad

    Verify the Cluster

    Verify Nomad is running
    nomad server members
    nomad node status
    nomad status

    💡 Web UI Access: The Nomad UI is available at http://YOUR_VPS_IP:4646/ui. For secure remote access, set up an SSH tunnel: ssh -L 4646:localhost:4646 root@YOUR_VPS_IP

    6. Enable ACL Security

    Access Control Lists (ACLs) are essential for production Nomad deployments. They control who can submit jobs, view logs, and administer the cluster.

    Add ACL configuration and bootstrap
    # Add ACL block to Nomad config
    sudo tee -a /etc/nomad.d/nomad.hcl > /dev/null <<'EOF'
    acl {
      enabled = true
    }
    EOF
    
    sudo systemctl restart nomad
    sleep 5
    
    # Bootstrap the ACL system (save the output!)
    nomad acl bootstrap

    Critical: Save the Secret ID from the bootstrap output immediately. This is your management token and cannot be retrieved again. Store it in a secure location such as a password manager or HashiCorp Vault.

    Set management token
    export NOMAD_TOKEN="<your-bootstrap-secret-id>"
    
    # Optionally persist in your shell profile:
    echo 'export NOMAD_TOKEN="<your-bootstrap-secret-id>"' >> ~/.bashrc
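
    With ACLs enabled, day-to-day access should use scoped tokens rather than the management token. As a sketch, a hypothetical read-only policy (the name `readonly` and its scope are illustrative, not required) could look like this:

    ```hcl
    # readonly.policy.hcl - read access to jobs and nodes, nothing else
    namespace "default" {
      policy = "read"
    }

    node {
      policy = "read"
    }
    ```

    Apply it with `nomad acl policy apply readonly readonly.policy.hcl`, then issue a token via `nomad acl token create -name="readonly-user" -policy=readonly` (both commands require NOMAD_TOKEN set to the management token).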

    7. Deploy Your First Job

    Create hello-web.nomad.hcl
    job "hello-web" {
      datacenters = ["dc1"]
      type        = "service"
    
      group "web" {
        count = 2
    
        network {
          port "http" {
            to = 80
          }
        }
    
        service {
          name     = "hello-web"
          port     = "http"
          provider = "consul"
    
          check {
            type     = "http"
            path     = "/"
            interval = "10s"
            timeout  = "2s"
          }
        }
    
        task "nginx" {
          driver = "docker"
    
          config {
            image = "nginx:alpine"
            ports = ["http"]
          }
    
          resources {
            cpu    = 100
            memory = 128
          }
        }
      }
    }
    Plan and run the job
    # Preview the deployment plan
    nomad job plan ~/hello-web.nomad.hcl
    
    # Deploy the job
    nomad job run ~/hello-web.nomad.hcl
    
    # Check job status
    nomad job status hello-web
    
    # View allocation details
    nomad alloc status <alloc-id>

    💡 Consul Integration: With Consul running, your Nginx instances are automatically registered as services. Check http://YOUR_VPS_IP:8500/ui to see them in the Consul service catalog with health checks.
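
    By default a re-run of the job replaces allocations without health gating. For rolling updates, a group-level `update` stanza can be added to the job file; the values below are a sketch using common defaults, not tuned recommendations:

    ```hcl
    update {
      max_parallel     = 1      # replace one allocation at a time
      min_healthy_time = "10s"  # an allocation must stay healthy this long
      healthy_deadline = "3m"   # fail the deployment if not healthy by then
      auto_revert      = true   # roll back to the last good version on failure
    }
    ```

    Place it inside the `group "web"` block and re-run `nomad job run`; `nomad job status hello-web` will then show the deployment replacing allocations one at a time.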

    8. Production Hardening

    TLS Encryption

    Enable TLS to encrypt all Nomad RPC and HTTP communication:

    Generate and configure TLS certificates
    # Generate a CA and server certificates
    nomad tls ca create
    nomad tls cert create -server -region global
    nomad tls cert create -client
    
    # Move certificates to the config directory
    sudo mkdir -p /etc/nomad.d/tls
    sudo mv *.pem /etc/nomad.d/tls/
    sudo chown -R nomad:nomad /etc/nomad.d/tls
    Add TLS block to nomad.hcl
    tls {
      http = true
      rpc  = true
    
      ca_file   = "/etc/nomad.d/tls/nomad-agent-ca.pem"
      cert_file = "/etc/nomad.d/tls/global-server-nomad.pem"
      key_file  = "/etc/nomad.d/tls/global-server-nomad-key.pem"
    
      verify_server_hostname = true
      verify_https_client    = false
    }
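
    Once TLS is enabled, plain `nomad` CLI calls to http://localhost:4646 stop working. A minimal sketch of the environment the CLI then needs, assuming the certificate layout created above (with `verify_https_client = false`, client certificate variables can be omitted):

    ```shell
    # Point the CLI at the TLS-enabled API and trust the cluster CA
    export NOMAD_ADDR=https://localhost:4646
    export NOMAD_CACERT=/etc/nomad.d/tls/nomad-agent-ca.pem
    ```

    Add these to `~/.bashrc` alongside NOMAD_TOKEN so `nomad server members` and friends keep working after the switch.
    
    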

    Resource Limits

    Configure system resource limits
    echo "nomad soft nofile 65536" | sudo tee -a /etc/security/limits.conf
    echo "nomad hard nofile 65536" | sudo tee -a /etc/security/limits.conf
    echo "nomad soft nproc  65536" | sudo tee -a /etc/security/limits.conf
    echo "nomad hard nproc  65536" | sudo tee -a /etc/security/limits.conf

    Log Rotation

    Configure journald log retention
    sudo mkdir -p /etc/systemd/journald.conf.d
    sudo tee /etc/systemd/journald.conf.d/nomad.conf > /dev/null <<'EOF'
    [Journal]
    SystemMaxUse=500M
    SystemMaxFileSize=50M
    MaxRetentionSec=30day
    EOF
    
    sudo systemctl restart systemd-journald

    9. Monitoring & Observability

    Nomad exposes Prometheus-compatible metrics at /v1/metrics.

    Quick health checks
    # Cluster health
    nomad server members
    nomad node status
    
    # Job-level monitoring
    nomad job status hello-web
    nomad alloc logs <alloc-id>
    
    # System metrics
    curl -s http://localhost:4646/v1/metrics | \
      jq '.Gauges[] | select(.Name | contains("nomad"))'
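
    By default `/v1/metrics` returns JSON. To scrape the endpoint with Prometheus, a `telemetry` stanza along these lines can be added to `nomad.hcl` (the interval shown is illustrative):

    ```hcl
    telemetry {
      collection_interval        = "10s"
      prometheus_metrics         = true
      publish_allocation_metrics = true
      publish_node_metrics       = true
    }
    ```

    After restarting Nomad, `curl -s 'http://localhost:4646/v1/metrics?format=prometheus'` returns the same data in Prometheus exposition format.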

    Key Metrics to Watch

    Metric                              Description                    Alert Threshold
    nomad.client.allocs.running         Active allocations per node    Varies by capacity
    nomad.client.host.cpu.idle          Available CPU percentage       < 15%
    nomad.client.host.memory.avail      Free memory in bytes           < 256 MB
    nomad.client.host.disk.available    Available disk space           < 10%
    nomad.nomad.rpc.query               RPC query rate                 Sudden spikes

    10. Scaling Your Cluster

    Adding Client Nodes

    On each new RamNode VPS, install Nomad and Docker, then use this client-only configuration:

    Client node /etc/nomad.d/nomad.hcl
    datacenter = "dc1"
    data_dir   = "/opt/nomad/data"
    bind_addr  = "0.0.0.0"
    
    client {
      enabled = true
      servers = ["<nomad_server_ip>:4647"]
    }
    
    plugin "docker" {
      config {
        allow_privileged = false
        volumes { enabled = true }
      }
    }
    
    consul {
      address = "<consul_server_ip>:8500"
    }
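
    If client nodes differ (for example, some are larger or dedicated to a workload type), each client can be tagged with a `node_class` and targeted from jobs with a constraint. The class name `web` below is an example, not a requirement:

    ```hcl
    # On the client: tag the node
    client {
      enabled    = true
      servers    = ["<nomad_server_ip>:4647"]
      node_class = "web"
    }

    # In a job spec: only schedule on nodes of that class
    constraint {
      attribute = "${node.class}"
      value     = "web"
    }
    ```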

    Multi-Server HA Cluster

    For high-availability, provision 3 or 5 RamNode VPS instances as Nomad servers with bootstrap_expect set to the total server count. Nomad uses Raft consensus and can tolerate (N-1)/2 server failures.

    Cluster Size    Fault Tolerance       RamNode Config
    1 server        None (dev/staging)    1× Premium KVM 4GB ($24/mo)
    3 servers       1 server failure      3× Premium KVM 4GB ($72/mo)
    5 servers       2 server failures     5× Premium KVM 4GB ($120/mo)
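
    Concretely, each server in a 3-node cluster would carry a server stanza like the following, using `server_join` so the servers find each other on boot (the IPs are placeholders for your private RamNode addresses):

    ```hcl
    server {
      enabled          = true
      bootstrap_expect = 3

      server_join {
        retry_join = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
      }
    }
    ```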

    11. Troubleshooting

    Issue                   Diagnosis                        Solution
    Nomad won't start       journalctl -u nomad -f           Check config syntax: nomad config validate /etc/nomad.d/
    No Docker driver        nomad node status -self          Ensure Docker is running: systemctl status docker
    Job stuck in pending    nomad job status <job>           Check resource constraints and node eligibility
    Consul unavailable      consul members                   Verify Consul is running and ports are open
    ACL bootstrap fails     Check if already bootstrapped    Write the reset index from the error message to <data_dir>/acl-bootstrap-reset, then re-run nomad acl bootstrap

    Nomad Deployed Successfully!

    Your RamNode VPS is now running HashiCorp Nomad with Docker container orchestration, Consul service discovery, ACL security, and TLS encryption. Scale horizontally by adding client nodes or promote to a multi-server HA configuration for fault tolerance.