Monitoring Guide

    Deploying the ELK Stack

    The Elastic Stack (ELK) is a powerful open-source platform for centralized log management, full-text search, data analytics, and observability. Deploy Elasticsearch, Logstash, and Kibana on RamNode's reliable VPS hosting.

    Elasticsearch
    Logstash
    Kibana
    Filebeat

    Architecture Overview

    Single-server architecture: Filebeat agents ship logs → Logstash processes them → Elasticsearch indexes → Kibana visualizes.

    Component      Default Port    Role
    Elasticsearch  9200, 9300      Search & indexing engine
    Logstash       5044            Data processing pipeline
    Kibana         5601            Visualization & UI
    Filebeat       N/A (shipper)   Lightweight log forwarder

    Prerequisites & Recommended Plans

    Minimum Requirements

    Resource   Minimum            Recommended
    RAM        4 GB               8 GB+
    CPU        2 vCPUs            4 vCPUs
    Storage    40 GB SSD          80 GB+ NVMe
    OS         Ubuntu 22.04 LTS   Ubuntu 24.04 LTS

    💡 Recommendation: For production workloads processing more than a few GB of logs per day, opt for an 8 GB RAM plan or higher. RamNode's NVMe storage is ideal for Elasticsearch's I/O-heavy indexing.

    Software Prerequisites

    • Ubuntu 22.04 or 24.04 LTS (fresh installation)
    • A non-root sudo user
    • A registered domain name (for HTTPS access to Kibana)
    • DNS A record pointing your domain to the VPS IP address
    • Java: no separate install needed (Elasticsearch 8.x bundles its own JDK)

    Initial Server Setup

    Update system and install base packages
    sudo apt update && sudo apt upgrade -y
    sudo apt install -y apt-transport-https curl gnupg2 wget software-properties-common

    Configure Swap (If RAM < 8 GB)

    Add swap space
    sudo fallocate -l 4G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    echo "/swapfile none swap sw 0 0" | sudo tee -a /etc/fstab
    
    # Minimize swap usage for Elasticsearch
    echo "vm.swappiness=1" | sudo tee -a /etc/sysctl.conf && sudo sysctl -p

    Set System Limits

    Configure /etc/security/limits.conf
    elasticsearch  -  nofile            65535
    elasticsearch  -  nproc             4096
    elasticsearch  -  memlock           unlimited
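Note that systemd services do not read /etc/security/limits.conf, and the packaged Elasticsearch runs under systemd. A drop-in override (sketched below for the stock elasticsearch unit) raises the memlock limit for the service itself:

```shell
# systemd ignores limits.conf for services, so raise the memlock
# limit with a drop-in override for the elasticsearch unit
sudo mkdir -p /etc/systemd/system/elasticsearch.service.d
sudo tee /etc/systemd/system/elasticsearch.service.d/override.conf <<'EOF'
[Service]
LimitMEMLOCK=infinity
EOF
sudo systemctl daemon-reload
```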
    Set virtual memory map count
    echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
    sudo sysctl -p
    Configure firewall
    sudo ufw allow OpenSSH
    sudo ufw allow 80/tcp
    sudo ufw allow 443/tcp
    sudo ufw enable

    Important: Ports 9200, 9300, 5044, and 5601 should NOT be exposed publicly. Kibana will be accessed through the Nginx reverse proxy on port 443.

    Add the Elastic APT Repository

    Import GPG key and add repository
    wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | \
      sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
    
    echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] \
      https://artifacts.elastic.co/packages/8.x/apt stable main" | \
      sudo tee /etc/apt/sources.list.d/elastic-8.x.list
    
    sudo apt update

    Installing Elasticsearch

    Install Elasticsearch
    sudo apt install -y elasticsearch

    Save the output! The installation generates a superuser password and enrollment token. You will need these for Kibana enrollment and API access.

    Configure Elasticsearch

    /etc/elasticsearch/elasticsearch.yml
    # ── Cluster ──
    cluster.name: elk-ramnode
    node.name: elk-node-1
    
    # ── Paths ──
    path.data: /var/lib/elasticsearch
    path.logs: /var/log/elasticsearch
    
    # ── Network ──
    network.host: 127.0.0.1
    http.port: 9200
    
    # ── Discovery (single-node) ──
    discovery.type: single-node
    
    # ── Security (enabled by default in 8.x) ──
    xpack.security.enabled: true
    xpack.security.enrollment.enabled: true
    
    xpack.security.http.ssl:
      enabled: true
      keystore.path: certs/http.p12
    
    xpack.security.transport.ssl:
      enabled: true
      verification_mode: certificate
      keystore.path: certs/transport.p12
      truststore.path: certs/transport.p12

    Configure JVM Heap Size

    Set JVM heap to no more than 50% of available RAM (never exceed 31 GB). For a 4 GB VPS, allocate 2 GB; for 8 GB, allocate 4 GB.
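As a concrete example for a 4 GB VPS, the heap can be pinned with a drop-in file; Elasticsearch reads any *.options file under jvm.options.d/, so the stock jvm.options can stay untouched:

```shell
# Fixed 2 GB heap for a 4 GB VPS; Xms and Xmx must be equal
sudo tee /etc/elasticsearch/jvm.options.d/heap.options <<'EOF'
-Xms2g
-Xmx2g
EOF
```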

    Start and verify Elasticsearch
    sudo systemctl daemon-reload
    sudo systemctl enable elasticsearch
    sudo systemctl start elasticsearch
    
    # Verify cluster health (enter elastic password when prompted)
    curl -k -u elastic https://localhost:9200
    curl -k -u elastic https://localhost:9200/_cluster/health?pretty

    You should see a JSON response with cluster status "green" (or "yellow" for single-node, which is expected).


    Installing Logstash

    Install Logstash
    sudo apt install -y logstash

    Beats Input Pipeline

    /etc/logstash/conf.d/02-beats-input.conf
    input {
      beats {
        port => 5044
        ssl_enabled => true
        ssl_certificate_authorities => ["/etc/logstash/certs/ca.crt"]
        ssl_certificate => "/etc/logstash/certs/logstash.crt"
        ssl_key => "/etc/logstash/certs/logstash.key"
      }
    }

    Syslog Filter Pipeline

    /etc/logstash/conf.d/10-syslog-filter.conf
    filter {
      if [fileset][module] == "system" {
        if [fileset][name] == "syslog" {
          grok {
            match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
          }
          date {
            match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
          }
        }
      }
    }

    Elasticsearch Output Pipeline

    /etc/logstash/conf.d/30-elasticsearch-output.conf
    output {
      elasticsearch {
        hosts => ["https://localhost:9200"]
        index => "logstash-%{+YYYY.MM.dd}"
        user => "logstash_internal"
        password => "YOUR_LOGSTASH_PASSWORD"
        ssl_enabled => true
        ssl_certificate_authorities => ["/etc/logstash/certs/ca.crt"]
      }
    }

    Create Logstash User in Elasticsearch

    Create dedicated Logstash user (the logstash_writer role it references is created in the Security & TLS section below)
    curl -k -u elastic -X POST "https://localhost:9200/_security/user/logstash_internal" \
      -H 'Content-Type: application/json' -d '{
      "password": "YOUR_LOGSTASH_PASSWORD",
      "roles": ["logstash_writer"],
      "full_name": "Logstash Internal User"
    }'
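Before wiring the credentials into the output pipeline, it is worth a quick sanity check that the new user authenticates (enter YOUR_LOGSTASH_PASSWORD when prompted):

```shell
# A successful login returns the cluster banner JSON
curl -k -u logstash_internal https://localhost:9200
```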
    Start Logstash
    sudo systemctl enable logstash
    sudo systemctl start logstash
    
    # Verify
    sudo systemctl status logstash
    sudo tail -f /var/log/logstash/logstash-plain.log

    Installing Kibana

    Install and enroll Kibana
    sudo apt install -y kibana
    
    # Generate enrollment token (if the original expired)
    sudo /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
    
    # Run the Kibana setup tool
    sudo /usr/share/kibana/bin/kibana-setup --enrollment-token <YOUR_TOKEN>
    /etc/kibana/kibana.yml
    server.port: 5601
    server.host: "127.0.0.1"
    server.publicBaseUrl: "https://elk.yourdomain.com"
    # elasticsearch.hosts set automatically by enrollment
    # elasticsearch.ssl.certificateAuthorities set automatically
    Start Kibana
    sudo systemctl enable kibana
    sudo systemctl start kibana
    
    # Verify Kibana is listening
    curl -s http://localhost:5601/api/status | head -c 200

    Configuring Filebeat

    Filebeat is a lightweight log shipper. Install it on each server you want to monitor.
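The filebeat package comes from the same Elastic APT repository configured earlier, so repeat that setup on each remote host first:

```shell
# Same repository setup as on the ELK server
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | \
  sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] \
  https://artifacts.elastic.co/packages/8.x/apt stable main" | \
  sudo tee /etc/apt/sources.list.d/elastic-8.x.list
sudo apt update
```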

    Install Filebeat (on remote hosts)
    sudo apt install -y filebeat
    /etc/filebeat/filebeat.yml
    # ── Filebeat Inputs ──
    filebeat.inputs:
      - type: filestream
        id: syslog
        enabled: true
        paths:
          - /var/log/syslog
          - /var/log/auth.log
    
    # ── Output to Logstash ──
    output.logstash:
      hosts: ["YOUR_ELK_SERVER_IP:5044"]
      ssl.certificate_authorities:
        - "/etc/filebeat/certs/ca.crt"
      ssl.certificate: "/etc/filebeat/certs/filebeat.crt"
      ssl.key: "/etc/filebeat/certs/filebeat.key"
    
    # ── Modules ──
    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: false
    Enable system module and start
    sudo filebeat modules enable system
    
    # Note: 9200 is bound to localhost on the ELK server, so run this
    # one-time setup through an SSH tunnel or a temporary firewall exception
    sudo filebeat setup --index-management \
      -E output.logstash.enabled=false \
      -E 'output.elasticsearch.hosts=["https://YOUR_ELK_SERVER_IP:9200"]' \
      -E 'output.elasticsearch.username="elastic"' \
      -E 'output.elasticsearch.password="YOUR_PASSWORD"' \
      -E 'output.elasticsearch.ssl.certificate_authorities=["/etc/filebeat/certs/ca.crt"]'
    
    sudo systemctl enable filebeat
    sudo systemctl start filebeat
    
    # Verify
    sudo filebeat test output

    Security & TLS Configuration

    Generate Certificates for Logstash & Filebeat

    Generate certificates
    sudo /usr/share/elasticsearch/bin/elasticsearch-certutil cert \
      --ca-cert /etc/elasticsearch/certs/http_ca.crt \
      --ca-key /etc/elasticsearch/certs/http_ca.key \
      --name logstash --pem \
      --out /tmp/logstash-certs.zip
    
    sudo unzip /tmp/logstash-certs.zip -d /etc/logstash/certs/
    sudo chown -R logstash:logstash /etc/logstash/certs/
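Filebeat hosts also need the CA certificate referenced in filebeat.yml. One way to distribute Elasticsearch's auto-generated http_ca.crt (youruser is a placeholder for your sudo account on the remote host):

```shell
# Stage a world-readable copy of the CA cert, then push it to the remote host
sudo cp /etc/elasticsearch/certs/http_ca.crt /tmp/ca.crt
sudo chmod 644 /tmp/ca.crt
scp /tmp/ca.crt youruser@REMOTE_SERVER_IP:/tmp/ca.crt
ssh youruser@REMOTE_SERVER_IP \
  'sudo mkdir -p /etc/filebeat/certs && sudo mv /tmp/ca.crt /etc/filebeat/certs/ca.crt'
```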

    Reset Passwords (If Needed)

    Reset Elasticsearch passwords
    sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
    sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u kibana_system

    Create Dedicated Roles & Users

    Create logstash_writer role
    curl -k -u elastic -X POST "https://localhost:9200/_security/role/logstash_writer" \
      -H 'Content-Type: application/json' -d '{
      "cluster": ["manage_index_templates", "monitor", "manage_ilm"],
      "indices": [{
        "names": ["logstash-*"],
        "privileges": ["write", "create_index", "manage", "auto_configure"]
      }]
    }'
    Create analyst role (read-only)
    curl -k -u elastic -X POST "https://localhost:9200/_security/role/analyst" \
      -H 'Content-Type: application/json' -d '{
      "indices": [{
        "names": ["logstash-*", "filebeat-*"],
        "privileges": ["read", "view_index_metadata"]
      }],
      "applications": [{
        "application": "kibana-.kibana",
        "privileges": ["feature_discover.read", "feature_dashboard.read"],
        "resources": ["*"]
      }]
    }'
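A read-only account can then be attached to the role; the username and password below are placeholders:

```shell
# Create a user holding only the analyst role
curl -k -u elastic -X POST "https://localhost:9200/_security/user/log_analyst" \
  -H 'Content-Type: application/json' -d '{
  "password": "A_STRONG_PASSWORD",
  "roles": ["analyst"],
  "full_name": "Log Analyst"
}'
```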

    Firewall Hardening

    Allow Filebeat from specific IPs only
    sudo ufw allow from REMOTE_SERVER_IP to any port 5044 proto tcp

    Warning: Never expose Elasticsearch port 9200 to the public internet. Always keep it bound to 127.0.0.1 or use SSH tunneling for remote access.
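For occasional remote API access without opening 9200, an SSH tunnel is a simple option (the username is an assumption):

```shell
# Forward local 9200 to the ELK server's loopback-bound Elasticsearch
ssh -N -L 9200:127.0.0.1:9200 youruser@YOUR_ELK_SERVER_IP

# Then, in another terminal, query it as if it were local
curl -k -u elastic https://localhost:9200/_cluster/health?pretty
```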


    Nginx Reverse Proxy for Kibana

    Install Nginx and Certbot
    sudo apt install -y nginx certbot python3-certbot-nginx
    /etc/nginx/sites-available/kibana
    server {
        listen 80;
        server_name elk.yourdomain.com;
    
        location / {
            proxy_pass http://127.0.0.1:5601;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_cache_bypass $http_upgrade;
            proxy_read_timeout 90;
        }
    }
    Enable site and obtain SSL
    sudo ln -s /etc/nginx/sites-available/kibana /etc/nginx/sites-enabled/
    sudo nginx -t
    sudo systemctl reload nginx
    
    sudo certbot --nginx -d elk.yourdomain.com
    
    # Verify auto-renewal
    sudo certbot renew --dry-run

    Performance Tuning

    Elasticsearch Tuning

    Setting                            Value   Purpose
    indices.memory.index_buffer_size   15%     Indexing buffer (default 10%)
    thread_pool.write.queue_size       1000    Handles burst write loads
    indices.queries.cache.size         15%     Query cache allocation
    bootstrap.memory_lock              true    Prevents heap swap-out
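These are static node settings, so one way to apply them (values mirror the table above) is to append them to elasticsearch.yml and restart:

```shell
# Append the tuning settings, then restart to pick them up
sudo tee -a /etc/elasticsearch/elasticsearch.yml <<'EOF'
indices.memory.index_buffer_size: 15%
thread_pool.write.queue_size: 1000
indices.queries.cache.size: 15%
bootstrap.memory_lock: true
EOF
sudo systemctl restart elasticsearch
```

bootstrap.memory_lock only succeeds if the memlock limit was raised during initial server setup.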

    Index Lifecycle Management (ILM)

    Automatically manage log retention and prevent disk exhaustion. This policy rolls over indices daily or at 50 GB, and deletes old data after 30 days.

    Create ILM policy
    curl -k -u elastic -X PUT "https://localhost:9200/_ilm/policy/logs-policy" \
      -H 'Content-Type: application/json' -d '{
      "policy": {
        "phases": {
          "hot":    { "actions": { "rollover": { "max_age": "1d", "max_size": "50gb" } } },
          "warm":   { "min_age": "7d", "actions": { "shrink": { "number_of_shards": 1 } } },
          "delete": { "min_age": "30d", "actions": { "delete": {} } }
        }
      }
    }'
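The policy only takes effect once indices reference it, typically via an index template; a minimal sketch (the template name logs-template is an assumption):

```shell
# Point future logstash-* indices at the ILM policy
curl -k -u elastic -X PUT "https://localhost:9200/_index_template/logs-template" \
  -H 'Content-Type: application/json' -d '{
  "index_patterns": ["logstash-*"],
  "template": {
    "settings": { "index.lifecycle.name": "logs-policy" }
  }
}'
```

With the date-stamped index names produced by the Logstash output above, the delete phase does most of the work; the rollover action only applies when writing through a rollover alias.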

    Logstash Tuning

    /etc/logstash/logstash.yml
    pipeline.workers: 2          # Match CPU cores
    pipeline.batch.size: 250     # Increase for higher throughput
    pipeline.batch.delay: 50     # ms to wait for batch to fill
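Optionally, a disk-backed queue lets Logstash buffer events while Elasticsearch is briefly unavailable; a sketch (the 1 GB cap is an arbitrary starting point):

```shell
# Optional: persistent queue so in-flight events survive restarts
sudo tee -a /etc/logstash/logstash.yml <<'EOF'
queue.type: persisted
queue.max_bytes: 1gb
EOF
sudo systemctl restart logstash
```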

    Monitoring & Maintenance

    Check cluster health, index growth, and disk usage on a regular schedule; the diagnostic commands in the Troubleshooting section below cover the day-to-day checks.


    Troubleshooting

    Common Issues & Solutions

    Symptom                        Cause                                   Fix
    ES won't start                 Insufficient heap or file descriptors   Check jvm.options heap; verify limits.conf
    Cluster status: red            Unassigned primary shards               Check _cluster/allocation/explain
    Kibana: "server not ready"     ES connection failure                   Verify ES is running; check SSL config
    Logstash pipeline error        Grok pattern mismatch                   Test at grokdebugger.com; check logs
    Filebeat: connection refused   Firewall blocking 5044                  Add UFW rule for source IP
    High memory / OOM kills        Heap too large or no swap               Reduce JVM heap; add swap; upgrade plan

    Key Log File Locations

    Component      Log Path
    Elasticsearch  /var/log/elasticsearch/elk-ramnode.log
    Logstash       /var/log/logstash/logstash-plain.log
    Kibana         journalctl -u kibana -f
    Filebeat       /var/log/filebeat/filebeat

    Useful Diagnostic Commands

    Diagnostic commands
    # Check cluster health and node stats
    curl -k -u elastic https://localhost:9200/_cluster/health?pretty
    curl -k -u elastic https://localhost:9200/_nodes/stats?pretty
    
    # Check shard allocation issues
    curl -k -u elastic https://localhost:9200/_cluster/allocation/explain?pretty
    
    # List all indices with size and doc count
    curl -k -u elastic 'https://localhost:9200/_cat/indices?v&s=store.size:desc'
    
    # Test Logstash configuration
    sudo /usr/share/logstash/bin/logstash --config.test_and_exit \
      -f /etc/logstash/conf.d/
    
    # Verify Filebeat connectivity
    sudo filebeat test config
    sudo filebeat test output

    ELK Stack Deployed Successfully!

    Your centralized log management and observability platform is now running. Access Kibana at your domain to explore logs, create dashboards, and set up alerts.