SaltStack on Your VPS Series
    Part 6 of 6

    Pillars, Grains, Orchestration & Hardening

    Production-ready Salt: GPG-encrypted secrets, advanced targeting, orchestrated deployments, and a full security hardening baseline.

    50 minutes

    Pillars — Secrets & Per-Environment Variables

    Pillars store data separately from states. They serve two purposes: secrets management and per-minion configuration. A minion never receives pillar data destined for another minion.

    /srv/pillar/top.sls
    base:
      '*':
        - common
        - users
      'web-*':
        - lamp
        - docker
        - secrets
      'db-*':
        - lamp
        - secrets
      'lamp-single':
        - lamp
        - docker
        - secrets
    /srv/pillar/secrets.sls
    database:
      root_password: 'very-strong-root-pw'
      app_password: 'app-db-password'
    
    redis:
      password: 'redis-auth-token'

    Using Pillars in States

    mariadb_root_password:
      mysql_user.present:
        - name: root
        - host: localhost
        - password: {{ salt['pillar.get']('database:root_password', 'CHANGEME') }}
    sudo salt 'web-01' pillar.items
    sudo salt 'web-01' pillar.get database:root_password

    GPG-Encrypted Pillars

    # Generate a GPG key in Salt's default keydir on the master
    mkdir -p /etc/salt/gpgkeys
    chmod 0700 /etc/salt/gpgkeys
    gpg --homedir /etc/salt/gpgkeys --batch --gen-key <<EOF
    %no-protection
    Key-Type: RSA
    Key-Length: 4096
    Name-Real: Salt Master
    Name-Email: salt@example.com
    Expire-Date: 0
    %commit
    EOF
    
    # Encrypt a secret (--trust-model always avoids the interactive trust prompt)
    echo -n "my-secret-password" | gpg --homedir /etc/salt/gpgkeys --armor --batch --trust-model always --encrypt --recipient salt@example.com
    /etc/salt/master (GPG config)
    decrypt_pillar:
      - 'database:root_password': gpg
      - 'ssl:key': gpg
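
    With decrypt_pillar in place, the ciphertext itself lives in the pillar file: paste the armored output of the gpg command above as a multi-line YAML value. The block below is a placeholder, not real ciphertext; decryption happens on the master during pillar compilation, so minions only ever see the plaintext value they are entitled to.

    ```yaml
    # /srv/pillar/secrets.sls (encrypted form)
    database:
      root_password: |
        -----BEGIN PGP MESSAGE-----

        (paste the full armored block emitted by gpg here)
        -----END PGP MESSAGE-----
    ```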

    Advanced Grains Usage

    Custom grains via state
    set_role_grain:
      grains.present:
        - name: role
        - value: webserver
    
    set_environment_grain:
      grains.present:
        - name: environment
        - value: {{ pillar.get('environment', 'staging') }}
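
    Once the role grain exists, top files can match on it as well as on minion IDs. A sketch using Salt's grain matcher — the match: grain line tells Salt to interpret the key as a grain instead of an ID glob:

    ```yaml
    # /srv/salt/top.sls (grain-based targeting sketch)
    base:
      'role:webserver':
        - match: grain
        - lamp.webserver
      'environment:production':
        - match: grain
        - hardening
    ```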

    Grains in Jinja Templates

    {# grains['mem_total'] is reported in MiB #}
    {% set mem_gb = grains['mem_total'] // 1024 %}
    
    mariadb_config:
      file.managed:
        - name: /etc/mysql/mariadb.conf.d/50-server.cnf
        - contents: |
            [mysqld]
            innodb_buffer_pool_size = {{ (mem_gb * 0.7) | int }}G
            max_connections = {{ [150, mem_gb * 50] | min }}

    Compound Targeting

    # All production webservers
    salt -C 'G@role:webserver and G@environment:production' state.highstate
    
    # Web or LB servers in production
    salt -C 'G@environment:production and ( G@role:webserver or G@role:loadbalancer )' test.ping
    
    # By OS and role
    salt -C 'G@os:Ubuntu and G@role:database' state.apply lamp.database
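
    Compound expressions you use repeatedly can be given names as nodegroups in the master config (group names below are examples; restart salt-master after adding them):

    ```yaml
    # /etc/salt/master
    nodegroups:
      prod-web: 'G@environment:production and G@role:webserver'
      all-lamp: 'web-* or db-* or lamp-single'

    # usage:
    #   salt -N prod-web state.highstate
    ```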

    Orchestration Runner

    /srv/salt/orch/deploy.sls
    apply_common:
      salt.state:
        - tgt: '*'
        - sls: common
    
    run_database_migrations:
      salt.state:
        - tgt: 'db-primary'
        - sls: lamp.database
        - require:
          - salt: apply_common
    
    deploy_app_code:
      salt.state:
        - tgt: 'web-*'
        - sls: app.deploy
        - require:
          - salt: run_database_migrations
    
    reload_nginx:
      salt.function:
        - name: service.reload
        - tgt: 'web-*'
        - arg:
          - nginx
        - require:
          - salt: deploy_app_code
    sudo salt-run state.orch orch.deploy

    Rolling Deployment

    /srv/salt/orch/rolling-deploy.sls
    {% for minion in ['web-01', 'web-02', 'web-03'] %}
    # NOTE: the reload only drains traffic if the upstream config on lb-01
    # has already marked {{ minion }} as down — that edit is not shown here
    remove_{{ minion }}_from_lb:
      salt.function:
        - name: cmd.run
        - tgt: lb-01
        - arg:
          - "nginx -s reload"
    
    deploy_{{ minion }}:
      salt.state:
        - tgt: {{ minion }}
        - sls: app.deploy
        - require:
          - salt: remove_{{ minion }}_from_lb
    
    healthcheck_{{ minion }}:
      salt.function:
        - name: cmd.run
        - tgt: {{ minion }}
        - arg:
          - curl -sf http://localhost/health
        - require:
          - salt: deploy_{{ minion }}
    
    readd_{{ minion }}_to_lb:
      salt.function:
        - name: cmd.run
        - tgt: lb-01
        - arg:
          - "nginx -s reload"
        - require:
          - salt: healthcheck_{{ minion }}
    {% endfor %}
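
    The two nginx -s reload steps assume something actually flips the upstream entry between them. A minimal sketch of that missing drain step, using the file.replace execution module — the upstream file path and server line are hypothetical, and the step would be wired in with a require before the reload:

    ```yaml
    drain_{{ minion }}:
      salt.function:
        - name: file.replace
        - tgt: lb-01
        - arg:
          - /etc/nginx/conf.d/upstream.conf   # hypothetical path
        - kwarg:
            pattern: 'server {{ minion }}:80;'
            repl: 'server {{ minion }}:80 down;'
    ```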

    SSH Hardening

    /srv/salt/hardening/files/sshd_config (key settings)
    Port 22
    # 'Protocol 2' is implicit in modern OpenSSH (protocol 1 was removed in 7.6)
    PermitRootLogin no
    PasswordAuthentication no
    PubkeyAuthentication yes
    MaxAuthTries 3
    MaxSessions 4
    X11Forwarding no
    AllowAgentForwarding no
    AllowTcpForwarding no
    ClientAliveInterval 300
    ClientAliveCountMax 2
    KexAlgorithms curve25519-sha256,diffie-hellman-group14-sha256
    Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com
    MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com
    LogLevel VERBOSE
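
    A bad sshd_config can lock you out, so the state that ships this file should validate the candidate before replacing the live config. A sketch using file.managed's check_cmd, which runs the command with the path of a temp file holding the new contents appended as its final argument:

    ```yaml
    # /srv/salt/hardening/ssh.sls
    sshd_config:
      file.managed:
        - name: /etc/ssh/sshd_config
        - source: salt://hardening/files/sshd_config
        - user: root
        - group: root
        - mode: '0644'
        - check_cmd: /usr/sbin/sshd -t -f

    sshd_service:
      service.running:
        - name: ssh        # 'sshd' on RHEL-family systems
        - enable: True
        - watch:
          - file: sshd_config
    ```

    If sshd -t rejects the rendered file, the state fails without touching the running daemon's config.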

    User Management from Pillar

    /srv/salt/hardening/users.sls
    {% set admin_users = pillar.get('admin_users', []) %}
    
    {% for user in admin_users %}
    create_user_{{ user.username }}:
      user.present:
        - name: {{ user.username }}
        - shell: /bin/bash
        - groups:
          - sudo
    
    ssh_key_{{ user.username }}:
      ssh_auth.present:
        - user: {{ user.username }}
        - name: {{ user.ssh_key }}
    {% endfor %}
    
    lock_root_password:
      cmd.run:
        - name: passwd -l root
        - unless: passwd -S root | grep -q ' L '
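
    The state above expects an admin_users pillar whose entries carry username and ssh_key keys; a sketch of the matching pillar data (the names and key material are placeholders):

    ```yaml
    # /srv/pillar/users.sls
    admin_users:
      - username: alice
        ssh_key: ssh-ed25519 AAAAC3Nza...placeholder... alice@laptop
      - username: bob
        ssh_key: ssh-ed25519 AAAAC3Nza...placeholder... bob@laptop
    ```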

    Kernel Security Parameters

    /srv/salt/hardening/kernel.sls
    sysctl_security:
      sysctl.present:
        - name: kernel.dmesg_restrict
        - value: 1
    
    sysctl_rp_filter:
      sysctl.present:
        - name: net.ipv4.conf.all.rp_filter
        - value: 1
    
    sysctl_no_accept_redirects:
      sysctl.present:
        - name: net.ipv4.conf.all.accept_redirects
        - value: 0
    
    sysctl_syn_cookies:
      sysctl.present:
        - name: net.ipv4.tcp_syncookies
        - value: 1
    
    sysctl_randomize_va_space:
      sysctl.present:
        - name: kernel.randomize_va_space
        - value: 2
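
    The five near-identical blocks can be collapsed with a Jinja loop — same result, less repetition. sysctl.present also persists each value to /etc/sysctl.conf by default, so the settings survive a reboot:

    ```yaml
    {% set sysctl_params = {
        'kernel.dmesg_restrict': 1,
        'net.ipv4.conf.all.rp_filter': 1,
        'net.ipv4.conf.all.accept_redirects': 0,
        'net.ipv4.tcp_syncookies': 1,
        'kernel.randomize_va_space': 2,
    } %}

    {% for name, value in sysctl_params.items() %}
    sysctl_{{ name | replace('.', '_') }}:
      sysctl.present:
        - name: {{ name }}
        - value: {{ value }}
    {% endfor %}
    ```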

    UFW Default Policy

    /srv/salt/hardening/firewall.sls
    ufw_default_deny_incoming:
      cmd.run:
        - name: ufw default deny incoming
        - unless: ufw status verbose | grep "Default: deny (incoming)"
    
    ufw_default_allow_outgoing:
      cmd.run:
        - name: ufw default allow outgoing
        - unless: ufw status verbose | grep "Default: allow (outgoing)"
    
    allow_ssh_hardened:
      cmd.run:
        - name: ufw allow OpenSSH
        - unless: ufw status | grep -q 'OpenSSH.*ALLOW'
    
    ufw_enable:
      cmd.run:
        - name: ufw --force enable
        - unless: ufw status | grep -q 'Status: active'
        - require:
          - cmd: allow_ssh_hardened

    Fail2ban

    /srv/salt/hardening/files/jail.local
    [DEFAULT]
    bantime  = 3600
    findtime = 600
    maxretry = 5
    backend  = systemd
    
    [sshd]
    enabled  = true
    port     = ssh
    logpath  = %(sshd_log)s
    maxretry = 3
    bantime  = 86400
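
    The jail file still needs a state to install fail2ban, place the file, and restart the service on change — a sketch consistent with the series' conventions:

    ```yaml
    # /srv/salt/hardening/fail2ban.sls
    fail2ban:
      pkg.installed: []

    fail2ban_jail:
      file.managed:
        - name: /etc/fail2ban/jail.local
        - source: salt://hardening/files/jail.local
        - require:
          - pkg: fail2ban

    fail2ban_service:
      service.running:
        - name: fail2ban
        - enable: True
        - watch:
          - file: fail2ban_jail
    ```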

    The Full top.sls

    /srv/salt/top.sls
    base:
      '*':
        - common
        - hardening
    
      'web-*':
        - lamp.webserver
        - lamp.php
        - lamp.vhost
        - docker
    
      'db-*':
        - lamp.database
    
      'lamp-single':
        - lamp
        - docker
    # Test first
    sudo salt '*' state.highstate test=True
    
    # Apply in batches
    sudo salt '*' state.highstate --batch 5
    
    # Staging first, then production
    sudo salt -C 'G@environment:staging' state.highstate
    sudo salt -C 'G@environment:production' state.highstate --batch 3

    Full Deployment Workflow

    With everything from this series in place, provisioning a new server looks like this:

    # 1. Provision via Salt Cloud (Part 3)
    sudo salt-cloud -p web-small web-04
    
    # 2. Minion auto-connects and key is pre-accepted
    
    # 3. Set role grain
    sudo salt 'web-04' grains.setval role webserver
    
    # 4. Apply full highstate
    sudo salt 'web-04' state.highstate
    
    # 5. Verify
    sudo salt 'web-04' test.ping
    sudo salt 'web-04' service.status nginx
    sudo salt 'web-04' service.status php8.1-fpm
    sudo salt 'web-04' service.status docker

    That is the full lifecycle — from a bare VM to a hardened, configured, production-ready server — driven entirely by code you can review, version control, and reproduce.