Deployment Guide

    Deploy Varnish Cache

    Varnish Cache is a powerful HTTP accelerator designed for content-heavy dynamic websites and APIs. This guide covers deploying Varnish on RamNode's reliable VPS hosting to reduce server load by up to 90% and deliver sub-millisecond response times.

    Ubuntu 22.04/24.04
    HTTP Accelerator
    ⏱️ 30-45 minutes

    Introduction

    Varnish Cache is a powerful HTTP accelerator designed for content-heavy dynamic websites and APIs. By caching HTTP responses in memory, Varnish can reduce server load by up to 90% and dramatically improve response times for your visitors.

    Why Use Varnish?

    • Memory-based caching delivers sub-millisecond response times for cached content
    • Reduces backend server load by serving cached responses directly
    • Highly configurable with Varnish Configuration Language (VCL)
    • Handles traffic spikes gracefully without overwhelming your origin server
    • Open-source with an active community and extensive documentation

    Prerequisites

    Before beginning the installation, ensure you have the following:

    Requirement         Details
    Operating System    Ubuntu 22.04/24.04 LTS or Debian 11/12
    Access Level        Root or sudo access
    Web Server          Apache or Nginx already installed and running
    Memory              Minimum 1GB RAM (2GB+ recommended for production)
    Skills              Basic Linux command line familiarity

    Architecture Overview

    In a typical Varnish deployment, the cache sits in front of your web server and intercepts all incoming HTTP requests. The traffic flow works as follows:

    Client Request → Varnish (Port 80) → Backend Web Server (Port 8080)

    When a request arrives, Varnish checks its cache. If a valid cached response exists (cache hit), Varnish serves it directly without contacting the backend. If no cached response exists (cache miss), Varnish forwards the request to your web server, caches the response, and returns it to the client.
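
    Once the deployment below is complete, you can observe this split directly by sending the same request to each layer (the ports are the ones used throughout this guide):

    Compare the two layers
    curl -sI http://127.0.0.1:80/      # answered by Varnish, possibly from cache
    curl -sI http://127.0.0.1:8080/    # answered directly by the backend web server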


    Installation

    Update System Packages

    Start by updating your package index and upgrading existing packages:

    Update system
    sudo apt update
    sudo apt upgrade -y

    Install Varnish Cache

    Install Varnish from the official Ubuntu/Debian repositories:

    Install Varnish
    sudo apt install varnish -y

    Verify the installation by checking the Varnish version:

    Verify installation
    varnishd -V

    Install Latest Version (Optional)

    For the latest Varnish version, install from the official Varnish repository:

    Install from Packagecloud
    curl -s https://packagecloud.io/install/repositories/varnishcache/varnish75/script.deb.sh | sudo bash
    sudo apt install varnish -y

    Configure Backend Port

    Before configuring Varnish, you need to move your web server to a different port (typically 8080) so Varnish can listen on port 80.
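
    The exact steps depend on your web server. As a rough sketch (assuming the default Debian/Ubuntu file locations; adjust the file names to match your own site configuration), you might change the ports like this:

    Move Apache to port 8080 (example paths)
    sudo sed -i 's/^Listen 80$/Listen 8080/' /etc/apache2/ports.conf
    sudo sed -i 's/<VirtualHost \*:80>/<VirtualHost *:8080>/' /etc/apache2/sites-available/000-default.conf
    sudo systemctl restart apache2

    Move Nginx to port 8080 (example paths)
    sudo sed -i 's/listen 80/listen 8080/; s/listen \[::\]:80/listen [::]:8080/' /etc/nginx/sites-available/default
    sudo nginx -t && sudo systemctl restart nginx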


    Configure Varnish Service

    Configure Varnish to listen on port 80. Edit the systemd service override:

    Edit Varnish service
    sudo systemctl edit varnish

    Add the following configuration to set Varnish to listen on port 80 with an appropriate cache size (the second listener on localhost:8443 accepts PROXY-protocol connections from a TLS terminator and is unused if you follow the Nginx SSL setup later in this guide):

    Varnish systemd override
    [Service]
    ExecStart=
    ExecStart=/usr/sbin/varnishd \
      -a :80 \
      -a localhost:8443,PROXY \
      -f /etc/varnish/default.vcl \
      -s malloc,256m

    💡 Memory Allocation: The -s malloc,256m parameter allocates 256MB of RAM for caching. Adjust this based on your VPS memory. A general rule is to allocate 50-75% of available RAM to Varnish cache.
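
    To see how much RAM you actually have to work with before choosing a value, check the totals first; as a rough example of the 50-75% rule, a 1GB VPS would get roughly 512m-768m:

    Check available memory
    free -h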


    VCL Configuration

    The VCL (Varnish Configuration Language) file defines how Varnish handles requests. Edit the default VCL:

    Edit VCL file
    sudo nano /etc/varnish/default.vcl

    Replace the contents with this optimized configuration:

    /etc/varnish/default.vcl
    vcl 4.1;
    
    # Backend definition
    backend default {
        .host = "127.0.0.1";
        .port = "8080";
        .connect_timeout = 5s;
        .first_byte_timeout = 60s;
        .between_bytes_timeout = 10s;
    }
    
    # ACL for purge requests
    acl purge {
        "localhost";
        "127.0.0.1";
    }
    
    sub vcl_recv {
        # Handle PURGE requests
        if (req.method == "PURGE") {
            if (!client.ip ~ purge) {
                return (synth(405, "Not allowed."));
            }
            return (purge);
        }
    
        # Normalize the host header
        if (req.http.host ~ "^www\.") {
            set req.http.host = regsub(req.http.host, "^www\.", "");
        }
    
        # Remove tracking parameters for better cache hit rate
        if (req.url ~ "(\?|&)(utm_|gclid|fbclid)") {
            set req.url = regsuball(req.url, "(\?|&)(utm_[a-z]+|gclid|fbclid)=[^&]*", "");
        }
    
        # Don't cache POST requests
        if (req.method == "POST") {
            return (pass);
        }
    
        # Don't cache authenticated sessions
        if (req.http.Authorization || req.http.Cookie ~ "(session|login|auth)") {
            return (pass);
        }
    
        # Remove cookies for static files
        if (req.url ~ "\.(css|js|jpg|jpeg|png|gif|ico|woff|woff2|ttf|svg)$") {
            unset req.http.Cookie;
            return (hash);
        }
    
        return (hash);
    }
    
    sub vcl_backend_response {
        # Set default TTL
        if (beresp.ttl <= 0s) {
            set beresp.ttl = 1h;
            set beresp.uncacheable = false;
        }
    
        # Cache static files for longer
        if (bereq.url ~ "\.(css|js|jpg|jpeg|png|gif|ico|woff|woff2|ttf|svg)$") {
            set beresp.ttl = 7d;
            unset beresp.http.Set-Cookie;
        }
    
        # Don't cache error responses
        if (beresp.status >= 400) {
            set beresp.ttl = 0s;
            set beresp.uncacheable = true;
        }
    
        return (deliver);
    }
    
    sub vcl_deliver {
        # Add debug headers (remove in production)
        if (obj.hits > 0) {
            set resp.http.X-Cache = "HIT";
            set resp.http.X-Cache-Hits = obj.hits;
        } else {
            set resp.http.X-Cache = "MISS";
        }
    
        # Remove server identification
        unset resp.http.X-Powered-By;
        unset resp.http.Server;
        unset resp.http.X-Varnish;
        unset resp.http.Via;
    
        return (deliver);
    }
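
    Before restarting the service, you can have varnishd compile the file to catch syntax errors early (on success it prints the generated C code, which can be discarded):

    Check VCL syntax
    sudo varnishd -C -f /etc/varnish/default.vcl > /dev/null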

    Starting Varnish

    Start Varnish Service

    Reload systemd and start Varnish:

    Start Varnish
    sudo systemctl daemon-reload
    sudo systemctl restart varnish
    sudo systemctl enable varnish

    Verify Varnish is running:

    Check status
    sudo systemctl status varnish

    Verify Configuration

    Check that Varnish is listening on port 80:

    Check port
    sudo ss -tlnp | grep varnish

    Test with curl to verify the X-Cache header:

    Test caching
    curl -I http://localhost

    ✅ Expected Result: The first request should show X-Cache: MISS. Subsequent requests for the same resource should show X-Cache: HIT.
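
    To watch the transition happen, request the same URL twice and look only at the cache headers:

    Watch MISS turn into HIT
    curl -sI http://localhost/ | grep -i X-Cache
    curl -sI http://localhost/ | grep -i X-Cache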


    SSL/HTTPS Setup

    Varnish does not handle SSL/TLS directly. For HTTPS, you need an SSL termination proxy in front of Varnish. The recommended approach uses Nginx for SSL termination.

    Nginx as SSL Termination Proxy

    Install Nginx (if not already installed) and configure it as an SSL termination proxy:

    Install Nginx
    sudo apt install nginx -y

    Create an SSL termination configuration:

    Create config
    sudo nano /etc/nginx/sites-available/ssl-termination

    Add the following configuration:

    /etc/nginx/sites-available/ssl-termination
    server {
        listen 443 ssl http2;
        server_name yourdomain.com;
    
        ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
    
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
        ssl_prefer_server_ciphers off;
    
        location / {
            proxy_pass http://127.0.0.1:80;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
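
    The configuration above assumes Let's Encrypt certificates already exist at the standard paths. If you still need to obtain one, a common approach is certbot in webroot mode; the webroot path below is only an example and must be a directory your backend serves, so that the challenge files are reachable through Varnish on port 80:

    Obtain a certificate (example)
    sudo apt install certbot -y
    sudo certbot certonly --webroot -w /var/www/html -d yourdomain.com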

    Enable the site and restart Nginx:

    Enable site
    sudo ln -s /etc/nginx/sites-available/ssl-termination /etc/nginx/sites-enabled/
    sudo nginx -t
    sudo systemctl restart nginx
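
    You can then test the full HTTPS → Nginx → Varnish → backend chain (replace yourdomain.com with your actual domain):

    Test HTTPS through Varnish
    curl -sI https://yourdomain.com/ | grep -iE 'HTTP/|X-Cache'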

    Monitoring & Statistics

    Real-time Statistics with varnishstat

    View real-time cache statistics:

    View stats
    varnishstat

    Key metrics to monitor include cache_hit (successful cache hits), cache_miss (requests sent to backend), client_req (total client requests), and n_object (number of cached objects).
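
    These counters live under the MAIN. namespace; to print just them once instead of opening the interactive view, use the one-shot mode with field filters:

    One-shot counters
    varnishstat -1 -f MAIN.cache_hit -f MAIN.cache_miss -f MAIN.client_req -f MAIN.n_object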

    Request Logging with varnishlog

    View detailed request logs:

    View logs
    varnishlog

    Filter logs for specific patterns:

    Filter logs
    # Show only cache misses
    varnishlog -q 'VCL_call eq "MISS"'
    
    # Show requests for specific URL pattern
    varnishlog -q 'ReqURL ~ "/api/"'

    Top Requests with varnishtop

    Identify the most requested URLs:

    Top requests
    varnishtop -i ReqURL
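
    To see which URLs most often bypass the cache and reach the backend, look at the backend request URLs instead:

    Top backend requests (misses and passes)
    varnishtop -i BereqURL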

    Cache Management

    Purging Cached Content

    To purge a specific URL from cache:

    Purge URL
    curl -X PURGE http://localhost/path/to/page

    To purge all cached content, restart the Varnish service:

    Purge all
    sudo systemctl restart varnish

    Ban Expressions

    For more complex cache invalidation, use ban expressions via varnishadm:

    Ban expressions
    varnishadm ban "req.url ~ ^/api/"
    varnishadm ban "req.http.host == example.com"
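
    Bans stay on an internal list until every matching cached object has been checked or has expired; you can inspect the currently active bans with:

    List active bans
    varnishadm ban.list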

    WordPress Integration

    For WordPress sites, install a Varnish-compatible caching plugin such as Proxy Cache Purge (formerly Varnish HTTP Purge) for automatic cache invalidation:

    1. Install the Proxy Cache Purge plugin from the WordPress plugin repository
    2. Configure the plugin with your Varnish server IP (usually 127.0.0.1)
    3. The plugin will automatically send PURGE requests when posts are updated (equivalent to the manual request shown below)
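
    The PURGE requests the plugin sends are equivalent to something like the following, run from the VPS itself so the request matches the purge ACL defined earlier (the hostname and path are only examples):

    Manual purge example
    curl -X PURGE -H "Host: yourdomain.com" http://127.0.0.1/sample-post/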

    Performance Tuning

    Memory Allocation

    Adjust cache size based on your VPS resources. For a 4GB RAM VPS, you might allocate 2-3GB to Varnish:

    Memory parameter
    -s malloc,2g

    Thread Pool Tuning

    For high-traffic sites, adjust thread pool parameters in your systemd override:

    Thread pool config
    ExecStart=/usr/sbin/varnishd \
      -a :80 \
      -f /etc/varnish/default.vcl \
      -s malloc,2g \
      -p thread_pool_min=50 \
      -p thread_pool_max=1000 \
      -p thread_pool_timeout=120
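
    After reloading systemd and restarting Varnish, you can confirm the runtime parameters took effect:

    Verify thread pool parameters
    varnishadm param.show thread_pool_min
    varnishadm param.show thread_pool_max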

    RamNode VPS Recommendations

    VPS Plan    Cache Size     Thread Pool Min    Thread Pool Max
    1GB RAM     256m - 512m    25                 500
    2GB RAM     1g             50                 1000
    4GB RAM     2g - 3g        100                2000
    8GB+ RAM    4g - 6g        200                4000


    Security Considerations

    • Restrict PURGE requests to localhost and trusted IPs using the ACL configuration
    • Never cache responses containing sensitive user data or authentication tokens
    • Remove debug headers (X-Cache, X-Cache-Hits) in production environments
    • Keep Varnish updated to patch security vulnerabilities
    • Use firewall rules to ensure only the SSL termination proxy can reach Varnish (see the sketch after this list)
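
    A minimal sketch with UFW, assuming public traffic should arrive only over HTTPS via the Nginx proxy and that SSH must stay reachable (review before enabling: denying port 80 also blocks plain-HTTP visitors and HTTP-01 certificate renewals):

    Example UFW rules
    sudo ufw allow 22/tcp     # keep SSH reachable
    sudo ufw allow 443/tcp    # HTTPS terminated by Nginx
    sudo ufw deny 80/tcp      # block direct external access to Varnish
    sudo ufw enable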

    🎉 Congratulations!

    Varnish Cache is now deployed on your RamNode VPS! With proper configuration, you can achieve significant reductions in server load and response times. Monitor your cache hit rates and adjust your VCL configuration as needed to maximize caching efficiency for your specific workload.

    For additional support, consult the official Varnish documentation or reach out to RamNode's technical support team.