Part 5 of 7

    Deploying Kubernetes on RamNode

    Deploy a production-capable k3s cluster with Ingress routing, automatic SSL certificates, and persistent storage.

    Tags: k3s, Ingress, cert-manager, Longhorn

    Theory is useful, but you learn Kubernetes by running it. This guide walks through deploying a production-capable Kubernetes cluster on RamNode VPS instances using k3s—a lightweight distribution that runs well on modest hardware without sacrificing functionality.

    We'll cover single-node setups for getting started, multi-node clusters for high availability, ingress configuration for routing traffic, and automatic SSL certificates with Let's Encrypt.

    1. Why k3s?

    k3s is a certified Kubernetes distribution designed for resource-constrained environments. Rancher Labs built it for edge computing, IoT, and situations where you can't dedicate 4GB of RAM just to run the control plane.

    The "k3s" name is a play on k8s (Kubernetes)—it's "half the size" (5 letters vs 10).

    What makes k3s ideal for VPS deployments:

    • Single binary under 100MB
    • Control plane runs in ~512MB RAM
    • Bundles essential components (containerd, Traefik, CoreDNS, local-path storage)
    • Fully compatible with standard Kubernetes APIs
    • Easy single-command installation
    • Production-ready with proper configuration

    2. Planning Your Cluster

    Before provisioning, decide on your architecture.

    Single-Node Setup

    Best for learning, development, or running lightweight production workloads where high availability isn't critical.

    Component                         Specs                       Monthly Cost
    Server (control plane + worker)   2GB RAM, 1 vCPU, 30GB SSD   ~$8

    Multi-Node High-Availability Setup

    For production workloads requiring redundancy.

    Component      Specs             Count   Purpose
    Server nodes   2GB RAM, 1 vCPU   3       Control plane (HA)
    Agent nodes    4GB RAM, 2 vCPU   2+      Run workloads

    Budget-Conscious Production Setup

    A practical middle ground for self-hosters: start with a single 4GB node, then add agent nodes as you grow.

    3. Prerequisites

    For each VPS:

    • Ubuntu 22.04 or 24.04 LTS (Debian 11/12 also works)
    • SSH access with sudo privileges
    • A domain name pointed to your server IPs (for Ingress/SSL)
    • Firewall access to required ports

    Required Ports

    For single-node:

    • 80/443: HTTP/HTTPS traffic
    • 6443: Kubernetes API (only needed for remote kubectl access)

    For multi-node clusters:

    • 6443: Kubernetes API
    • 2379-2380: etcd (server nodes only)
    • 10250: kubelet
    • 8472/UDP: Flannel VXLAN
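
    If ufw is your firewall, a minimal sketch of the multi-node rules might look like this (NODE_SUBNET is a placeholder for the network your nodes share; your provider's firewall may need equivalent rules):

    Example: open the ports with ufw
    # Inter-node traffic (restrict to your nodes' subnet)
    sudo ufw allow from NODE_SUBNET to any port 6443 proto tcp
    sudo ufw allow from NODE_SUBNET to any port 2379:2380 proto tcp
    sudo ufw allow from NODE_SUBNET to any port 10250 proto tcp
    sudo ufw allow from NODE_SUBNET to any port 8472 proto udp

    # Public web traffic
    sudo ufw allow 80/tcp
    sudo ufw allow 443/tcp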

    4. Single-Node Deployment

    Let's start with a single-node cluster—the fastest path to a working Kubernetes environment.

    Step 1: Prepare the Server

    Update system and set hostname
    sudo apt update && sudo apt upgrade -y
    sudo hostnamectl set-hostname k3s-server

    Step 2: Install k3s

    The official installation script handles everything:

    Install k3s
    curl -sfL https://get.k3s.io | sh -
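
    The script also accepts environment variables for repeatable installs; for example, INSTALL_K3S_VERSION pins a specific release (the version shown is illustrative):

    Optional: pin a version
    curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.28.5+k3s1 sh -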

    That's it. Within a minute, you have a running Kubernetes cluster. Verify:

    Verify installation
    sudo kubectl get nodes
    Expected output
    NAME         STATUS   ROLES                  AGE   VERSION
    k3s-server   Ready    control-plane,master   30s   v1.28.5+k3s1

    Step 3: Configure kubectl Access

    By default, the kubeconfig at /etc/rancher/k3s/k3s.yaml is readable only by root, so kubectl requires sudo. To use it as a regular user:

    Setup user access
    mkdir -p ~/.kube
    sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
    sudo chown $USER:$USER ~/.kube/config
    chmod 600 ~/.kube/config

    Now you can run kubectl without sudo:

    Verify access
    kubectl get nodes
    kubectl get pods -A
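
    Optionally, enable shell completion (a small quality-of-life improvement, assuming you use bash):

    Optional: kubectl bash completion
    echo 'source <(kubectl completion bash)' >> ~/.bashrc
    source ~/.bashrc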

    5. Multi-Node Cluster Deployment

    For high availability or additional capacity, add more nodes.

    Cluster architecture
    ┌─────────────────────────────────────────────────────────────┐
    │                    Load Balancer / DNS                       │
    │                  (points to all server IPs)                  │
    └─────────────────────────────────────────────────────────────┘
                                  │
            ┌─────────────────────┼─────────────────────┐
            ▼                     ▼                     ▼
    ┌──────────────┐      ┌──────────────┐      ┌──────────────┐
    │   Server 1   │◄────►│   Server 2   │◄────►│   Server 3   │
    │  k3s server  │      │  k3s server  │      │  k3s server  │
    │  etcd        │      │  etcd        │      │  etcd        │
    └──────────────┘      └──────────────┘      └──────────────┘
            │                     │                     │
            └─────────────────────┼─────────────────────┘
                                  │
            ┌─────────────────────┼─────────────────────┐
            ▼                     ▼                     ▼
    ┌──────────────┐      ┌──────────────┐      ┌──────────────┐
    │   Agent 1    │      │   Agent 2    │      │   Agent N    │
    │  k3s agent   │      │  k3s agent   │      │  k3s agent   │
    │  workloads   │      │  workloads   │      │  workloads   │
    └──────────────┘      └──────────────┘      └──────────────┘

    Setting Up the First Server Node

    Initialize first server (k3s-server-1)
    # Set hostname
    sudo hostnamectl set-hostname k3s-server-1
    
    # Install k3s with cluster-init to enable embedded etcd
    curl -sfL https://get.k3s.io | sh -s - server \
      --cluster-init \
      --tls-san=your-load-balancer-ip-or-domain \
      --tls-san=k3s-server-1-public-ip

    Retrieve the node token (needed to join other nodes):

    Get node token
    sudo cat /var/lib/rancher/k3s/server/node-token

    Adding Additional Server Nodes

    Join additional servers (k3s-server-2, k3s-server-3)
    # Set hostname
    sudo hostnamectl set-hostname k3s-server-2  # or k3s-server-3
    
    # Join the cluster as a server
    curl -sfL https://get.k3s.io | sh -s - server \
      --server https://k3s-server-1-ip:6443 \
      --token YOUR_NODE_TOKEN \
      --tls-san=your-load-balancer-ip-or-domain \
      --tls-san=this-server-public-ip
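
    Once all three servers have joined, each should report the etcd and control-plane roles. Expect output along these lines (ages and versions will differ):

    Verify the server quorum
    kubectl get nodes
    # NAME           STATUS   ROLES                       AGE   VERSION
    # k3s-server-1   Ready    control-plane,etcd,master   10m   v1.28.5+k3s1
    # k3s-server-2   Ready    control-plane,etcd,master   3m    v1.28.5+k3s1
    # k3s-server-3   Ready    control-plane,etcd,master   1m    v1.28.5+k3s1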

    Adding Agent (Worker) Nodes

    Agent nodes only run workloads, not the control plane:

    Join as agent
    # Set hostname
    sudo hostnamectl set-hostname k3s-agent-1
    
    # Join as an agent
    curl -sfL https://get.k3s.io | sh -s - agent \
      --server https://k3s-server-1-ip:6443 \
      --token YOUR_NODE_TOKEN
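
    Agents join with an empty ROLES column in kubectl get nodes. If you want them labeled as workers (purely cosmetic), add the role label yourself:

    Optional: label agent nodes
    kubectl label node k3s-agent-1 node-role.kubernetes.io/worker=worker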

    6. Configuring Remote kubectl Access

    To manage your cluster from your local machine:

    On the Server

    Copy kubeconfig
    sudo cat /etc/rancher/k3s/k3s.yaml

    On Your Local Machine

    Save the output to ~/.kube/config and replace 127.0.0.1 with your server's public IP:

    ~/.kube/config (snippet)
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: <base64-cert>
        server: https://YOUR-SERVER-IP:6443  # Change this
      name: default
    Secure the file and test
    chmod 600 ~/.kube/config
    kubectl get nodes

    Firewall Considerations

    Allow only your IP (or use SSH tunneling)
    # On the server, allow only your IP
    sudo ufw allow from YOUR_LOCAL_IP to any port 6443
    
    # Alternative: SSH tunnel from your local machine
    ssh -L 6443:127.0.0.1:6443 user@your-server-ip
    # Then kubectl connects to localhost:6443

    7. Ingress Controller Setup

    k3s includes Traefik as the default ingress controller. It's already running and ready to route traffic.

    Verify Traefik is running
    kubectl get pods -n kube-system | grep traefik

    Basic Ingress Example

    test-app.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-world
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: hello-world
      template:
        metadata:
          labels:
            app: hello-world
        spec:
          containers:
            - name: hello
              image: nginxdemos/hello
              ports:
                - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: hello-world
    spec:
      selector:
        app: hello-world
      ports:
        - port: 80
          targetPort: 80
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: hello-world
    spec:
      rules:
        - host: hello.yourdomain.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: hello-world
                    port:
                      number: 80
    Apply and test
    kubectl apply -f test-app.yaml
    # Point hello.yourdomain.com to your server's IP, then visit it
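
    Before DNS propagates, you can exercise the Ingress rule directly by overriding the Host header (substitute your server's IP):

    Test without waiting for DNS
    curl -H "Host: hello.yourdomain.com" http://YOUR-SERVER-IP/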

    Using Nginx Ingress Instead of Traefik

    Disable Traefik and install nginx-ingress
    # During k3s installation
    curl -sfL https://get.k3s.io | sh -s - --disable traefik
    
    # Install nginx-ingress
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.5/deploy/static/provider/baremetal/deploy.yaml
    
    # Expose via external IP
    kubectl patch svc ingress-nginx-controller -n ingress-nginx \
      -p '{"spec": {"type": "LoadBalancer", "externalIPs": ["YOUR-SERVER-IP"]}}'

    8. Automatic SSL with cert-manager

    cert-manager automates obtaining and renewing TLS certificates from Let's Encrypt.

    Install cert-manager

    Install and wait for pods
    kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.2/cert-manager.yaml
    
    # Wait for pods to be ready
    kubectl get pods -n cert-manager -w

    Configure Let's Encrypt Issuer

    letsencrypt-issuer.yaml
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: your-email@example.com  # Change this
        privateKeySecretRef:
          name: letsencrypt-prod-account-key
        solvers:
          - http01:
              ingress:
                class: traefik  # or "nginx" if using nginx-ingress
    Apply issuer
    kubectl apply -f letsencrypt-issuer.yaml
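
    Let's Encrypt rate-limits its production endpoint, so it's worth validating your setup against the staging environment first. This is the same manifest with the staging server URL (the name letsencrypt-staging is our choice):

    letsencrypt-staging-issuer.yaml (optional, for testing)
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-staging
    spec:
      acme:
        server: https://acme-staging-v02.api.letsencrypt.org/directory
        email: your-email@example.com  # Change this
        privateKeySecretRef:
          name: letsencrypt-staging-account-key
        solvers:
          - http01:
              ingress:
                class: traefik  # or "nginx" if using nginx-ingress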

    Enable TLS on Ingress

    Ingress with TLS
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: hello-world
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod
    spec:
      tls:
        - hosts:
            - hello.yourdomain.com
          secretName: hello-world-tls
      rules:
        - host: hello.yourdomain.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: hello-world
                    port:
                      number: 80

    cert-manager automatically detects the annotation, creates a Certificate resource, completes the HTTP-01 challenge, stores the certificate, and renews before expiration.

    Check certificate status
    kubectl get certificates
    kubectl describe certificate hello-world-tls
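
    If a certificate sits in a pending state, the ACME Challenge resources usually say why (for example, a DNS record that doesn't resolve yet):

    Troubleshoot a stuck certificate
    kubectl get challenges -A
    kubectl describe challenge CHALLENGE_NAME -n NAMESPACE  # names from the previous command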

    9. Persistent Storage

    k3s includes local-path-provisioner, which creates storage on the node's filesystem. It's fine for single-node setups, but the data is tied to a specific node: a pod rescheduled elsewhere loses access to it.

    Using Local-Path Storage

    Simple PVC
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-data
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: local-path
      resources:
        requests:
          storage: 5Gi

    Data is stored in /var/lib/rancher/k3s/storage/ on the node.
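
    To see the claim in action, here's a throwaway pod that mounts it (busybox used purely for illustration). Note the PVC only binds once a pod actually schedules, because local-path uses WaitForFirstConsumer binding:

    pvc-test.yaml (sketch)
    apiVersion: v1
    kind: Pod
    metadata:
      name: pvc-test
    spec:
      containers:
        - name: shell
          image: busybox
          command: ["sh", "-c", "echo hello > /data/test.txt && sleep 3600"]
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-data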

    Distributed Storage with Longhorn

    For multi-node clusters, Longhorn provides replicated storage across nodes.
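
    Longhorn requires open-iscsi on every node; install it before applying the manifest:

    Install Longhorn prerequisite (each node)
    sudo apt install -y open-iscsi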

    Install Longhorn
    kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.0/deploy/longhorn.yaml
    
    # Wait for pods
    kubectl get pods -n longhorn-system -w
    Set Longhorn as default storage class
    # Remove default from local-path
    kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
    
    # Set Longhorn as default
    kubectl patch storageclass longhorn -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
    
    # Access Longhorn UI
    kubectl port-forward -n longhorn-system svc/longhorn-frontend 8080:80

    10. Deploying a Real Application

    Let's deploy WordPress with MySQL—a practical test of Deployments, Services, Ingress, persistent storage, and secrets.

    Create Namespace and Secrets

    Setup namespace and secrets
    kubectl create namespace wordpress
    
    kubectl create secret generic mysql-secret \
      --from-literal=mysql-root-password=your-root-password \
      --from-literal=mysql-password=your-wp-password \
      -n wordpress
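
    Rather than typing literal passwords (which also land in your shell history), you can generate them inline; a sketch using openssl:

    Generate the passwords inline
    kubectl create secret generic mysql-secret \
      --from-literal=mysql-root-password="$(openssl rand -base64 24)" \
      --from-literal=mysql-password="$(openssl rand -base64 24)" \
      -n wordpress

    # Retrieve a value later if needed
    kubectl get secret mysql-secret -n wordpress \
      -o jsonpath='{.data.mysql-password}' | base64 -d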

    Deploy MySQL

    mysql.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mysql
      namespace: wordpress
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: mysql
      template:
        metadata:
          labels:
            app: mysql
        spec:
          containers:
            - name: mysql
              image: mysql:8.0
              env:
                - name: MYSQL_ROOT_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: mysql-secret
                      key: mysql-root-password
                - name: MYSQL_DATABASE
                  value: wordpress
                - name: MYSQL_USER
                  value: wordpress
                - name: MYSQL_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: mysql-secret
                      key: mysql-password
              ports:
                - containerPort: 3306
              volumeMounts:
                - name: mysql-data
                  mountPath: /var/lib/mysql
              resources:
                limits:
                  memory: "512Mi"
                  cpu: "500m"
          volumes:
            - name: mysql-data
              persistentVolumeClaim:
                claimName: mysql-pvc
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql
      namespace: wordpress
    spec:
      selector:
        app: mysql
      ports:
        - port: 3306
      clusterIP: None
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mysql-pvc
      namespace: wordpress
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi

    Deploy WordPress

    wordpress.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: wordpress
      namespace: wordpress
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: wordpress
      template:
        metadata:
          labels:
            app: wordpress
        spec:
          containers:
            - name: wordpress
              image: wordpress:6.4-php8.2-apache
              env:
                - name: WORDPRESS_DB_HOST
                  value: mysql
                - name: WORDPRESS_DB_USER
                  value: wordpress
                - name: WORDPRESS_DB_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: mysql-secret
                      key: mysql-password
                - name: WORDPRESS_DB_NAME
                  value: wordpress
              ports:
                - containerPort: 80
              volumeMounts:
                - name: wordpress-data
                  mountPath: /var/www/html
              resources:
                limits:
                  memory: "256Mi"
                  cpu: "250m"
          volumes:
            - name: wordpress-data
              persistentVolumeClaim:
                claimName: wordpress-pvc
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: wordpress
      namespace: wordpress
    spec:
      selector:
        app: wordpress
      ports:
        - port: 80
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: wordpress-pvc
      namespace: wordpress
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: wordpress
      namespace: wordpress
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod
    spec:
      tls:
        - hosts:
            - blog.yourdomain.com
          secretName: wordpress-tls
      rules:
        - host: blog.yourdomain.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: wordpress
                    port:
                      number: 80
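
    One caveat: wordpress-pvc is ReadWriteOnce, so with replicas: 2 both pods can mount it only if they land on the same node. That works on a single-node cluster; on a multi-node cluster, either drop to one replica or use a storage class that supports ReadWriteMany.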

    Apply and Verify

    Deploy everything
    kubectl apply -f mysql.yaml
    kubectl apply -f wordpress.yaml
    
    # Verify
    kubectl get pods -n wordpress
    kubectl get svc -n wordpress
    kubectl get ingress -n wordpress
    kubectl get certificate -n wordpress
    
    # Watch logs if needed
    kubectl logs -f deployment/wordpress -n wordpress

    11. Maintenance Commands

    Updating k3s

    Re-running the install script upgrades k3s in place. Use the same flags and environment variables as the original install (so an agent isn't accidentally reconfigured as a server), and upgrade server nodes before agents.

    Update k3s on each node
    curl -sfL https://get.k3s.io | sh -
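
    You can also pin upgrades to a release channel with INSTALL_K3S_CHANNEL (stable is the default, shown here for explicitness):

    Optional: pin to a release channel
    curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=stable sh -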

    Backup etcd (HA clusters)

    Create etcd snapshot
    # On a server node
    sudo k3s etcd-snapshot save --name my-backup
    
    # Snapshots stored in /var/lib/rancher/k3s/server/db/snapshots/
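
    Restoring is disruptive but straightforward in shape: stop k3s on a server node, then run it once with the reset flags (the snapshot path is illustrative; actual snapshot names include a hostname and timestamp suffix):

    Restore from a snapshot (sketch)
    sudo systemctl stop k3s
    sudo k3s server \
      --cluster-reset \
      --cluster-reset-restore-path=/var/lib/rancher/k3s/server/db/snapshots/my-backup
    # Then start the service again
    sudo systemctl start k3s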

    Draining a Node for Maintenance

    Drain and uncordon
    # Prevent new pods, evict existing ones
    kubectl drain node-name --ignore-daemonsets --delete-emptydir-data
    
    # Perform maintenance...
    
    # Allow scheduling again
    kubectl uncordon node-name

    Removing a Node

    Remove node from cluster
    # From the control plane
    kubectl delete node node-name
    
    # On the node itself
    sudo /usr/local/bin/k3s-uninstall.sh      # server
    sudo /usr/local/bin/k3s-agent-uninstall.sh # agent

    What's Next

    You now have a functional Kubernetes cluster running on RamNode. You've configured ingress routing, automatic SSL, persistent storage, and deployed a real multi-tier application.

    In Part 6, we'll cover production operations: monitoring your cluster with Prometheus and Grafana, setting up alerts, implementing horizontal pod autoscaling, backup strategies, and disaster recovery planning.