DevOps Automation
Server monitoring alerts, deployment triggers, log analysis, and infrastructure health checks — your VPS starts managing itself.
Prerequisites: Completed Part 3, running n8n instance
Time: ~40 minutes
You'll build: 5 DevOps automation workflows
Your Automation Platform Is Also Your Operations Center
Up to this point we've built general-purpose workflows and AI-powered processing. Now we're turning n8n into a DevOps tool — one that monitors your infrastructure, responds to events, and handles the kind of repetitive operational tasks that eat hours every week.
The workflows in this guide are especially valuable if you're running multiple services on RamNode VPS instances. Instead of logging into each server to check on things, n8n centralizes monitoring, alerting, and response into automated pipelines.
Workflow 1: Server Resource Monitor with Intelligent Alerts
Basic monitoring tells you when CPU hits 90%. Intelligent monitoring tells you when CPU has been climbing steadily for 3 hours and is likely to cause problems. Let's build the intelligent version.
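The workflow outline's final false branch ("store metrics for trend analysis") needs somewhere to put those metrics; n8n's workflow static data works well as a small rolling buffer. A minimal sketch of the bookkeeping, written as plain functions so the logic can be tested outside an n8n Code node (the 288-sample cap, 24 hours of 5-minute samples, is an arbitrary choice):

```javascript
// Rolling metrics buffer for trend analysis. Inside an n8n Code node you
// would load and save it with workflow static data, roughly:
//   const data = $getWorkflowStaticData('global');
//   data.history = updateHistory(data.history ?? [], metrics);
function updateHistory(history, sample, maxSamples = 288) {
  // Append the newest sample and drop the oldest entries beyond the cap
  const next = [...history, sample];
  return next.length > maxSamples ? next.slice(next.length - maxSamples) : next;
}

function averageCpu(history) {
  // Mean CPU over the stored window; safe on an empty buffer
  if (history.length === 0) return 0;
  return history.reduce((sum, s) => sum + s.cpu, 0) / history.length;
}
```

With a buffer like this in place, the trend check later in the workflow can compare the latest sample against the window average instead of relying only on load averages.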
Schedule Trigger (every 5 min)
→ Execute Command (collect metrics)
→ Code (calculate trends)
→ IF (threshold exceeded OR trend is concerning)
→ True: Slack/Email alert with context
→ False: Store metrics for trend analysis
Collect Server Metrics
Add a Schedule Trigger set to every 5 minutes, then an Execute Command node:
echo "{
\"cpu\": $(top -bn1 | grep 'Cpu(s)' | awk '{print $2}' | cut -d. -f1),
\"memory_used\": $(free -m | awk '/Mem:/ {print $3}'),
\"memory_total\": $(free -m | awk '/Mem:/ {print $2}'),
\"disk_percent\": $(df -h / | awk 'NR==2 {print $5}' | tr -d '%'),
\"load_1m\": $(cat /proc/loadavg | awk '{print $1}'),
\"load_5m\": $(cat /proc/loadavg | awk '{print $2}'),
\"load_15m\": $(cat /proc/loadavg | awk '{print $3}'),
\"open_files\": $(cat /proc/sys/fs/file-nr | awk '{print $1}'),
\"tcp_connections\": $(ss -s | grep 'TCP:' | awk '{print $2}'),
\"uptime_seconds\": $(cat /proc/uptime | awk '{print $1}' | cut -d. -f1),
\"timestamp\": \"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"
}"
Parse and Analyze
const metrics = JSON.parse($input.first().json.stdout);
const alerts = [];
// Static thresholds
if (metrics.cpu > 85) alerts.push(`🔴 CPU at ${metrics.cpu}%`);
if (metrics.disk_percent > 85) alerts.push(`🔴 Disk at ${metrics.disk_percent}%`);
const memPercent = Math.round((metrics.memory_used / metrics.memory_total) * 100);
if (memPercent > 90) alerts.push(`🔴 Memory at ${memPercent}%`);
// Load average (should be below number of CPU cores)
const cpuCores = require('os').cpus().length; // requires NODE_FUNCTION_ALLOW_BUILTIN=os (or *) in n8n's environment
if (parseFloat(metrics.load_5m) > cpuCores * 1.5) {
alerts.push(`🟡 Load average ${metrics.load_5m} exceeds ${cpuCores * 1.5} (1.5x cores)`);
}
// Trend detection: load increasing over time
if (parseFloat(metrics.load_15m) < parseFloat(metrics.load_5m) &&
parseFloat(metrics.load_5m) < parseFloat(metrics.load_1m)) {
alerts.push(`📈 Load trending upward: ${metrics.load_15m} → ${metrics.load_5m} → ${metrics.load_1m}`);
}
return [{
json: {
...metrics,
memPercent,
hasAlerts: alerts.length > 0,
alertMessages: alerts,
alertSummary: alerts.join('\n')
}
}];
Route and Alert
Add an IF node checking {{ $json.hasAlerts }}, then a Slack node:
🖥️ *Server Alert — {{ $json.timestamp }}*
{{ $json.alertSummary }}
*Current Metrics:*
CPU: {{ $json.cpu }}% | Memory: {{ $json.memPercent }}% | Disk: {{ $json.disk_percent }}%
Load: {{ $json.load_1m }} / {{ $json.load_5m }} / {{ $json.load_15m }}
TCP Connections: {{ $json.tcp_connections }}
Avoid Alert Fatigue
Add a Code node with cooldown logic using n8n's static data (note that static data persists across production executions, not manual test runs):
const staticData = $getWorkflowStaticData('global');
const now = Date.now();
const cooldownMs = 15 * 60 * 1000; // 15 minutes
if (staticData.lastAlertTime && (now - staticData.lastAlertTime) < cooldownMs) {
// Still in cooldown, suppress alert
return [];
}
staticData.lastAlertTime = now;
return $input.all();
Workflow 2: Git Push Deployment Trigger
Automatically deploy when you push to your main branch. Replaces basic CI/CD setups for small projects and staging environments.
Webhook (GitHub push event)
→ IF (branch is main?)
→ True:
→ Execute Command (pull & deploy)
→ IF (deploy succeeded?)
→ True: Slack success notification
→ False: Slack failure alert + rollback
→ False: Ignore
Configure the Webhook
- Add a Webhook node in n8n — note the URL
- In GitHub: Settings → Webhooks → Add webhook
- Set Payload URL to your n8n webhook
- Content type: application/json
- Set a strong secret for HMAC verification
- Select "Just the push event"
Verify and Filter
const crypto = require('crypto'); // requires NODE_FUNCTION_ALLOW_BUILTIN=crypto (or *) in n8n's environment
const secret = 'your-github-webhook-secret'; // better: load this from an environment variable
const signature = $input.first().json.headers['x-hub-signature-256'];
// GitHub signs the raw request bytes; if this check fails, enable the webhook
// node's "Raw Body" option and sign that string instead of re-serializing JSON
const body = JSON.stringify($input.first().json.body);
const expected = 'sha256=' + crypto.createHmac('sha256', secret)
.update(body).digest('hex');
if (signature !== expected) {
throw new Error('Invalid GitHub signature');
}
const payload = $input.first().json.body;
const branch = payload.ref?.replace('refs/heads/', '') || '';
const commits = payload.commits || [];
const commitMessages = commits.map(c => `• ${c.message}`).join('\n');
return [{
json: {
branch,
isMain: branch === 'main',
repository: payload.repository?.full_name,
pusher: payload.pusher?.name,
commitCount: commits.length,
commitMessages,
headCommit: payload.head_commit?.id?.substring(0, 7)
}
}];
Deploy
Add an IF node checking {{ $json.isMain }}, then an Execute Command node:
cd /var/www/your-project && \
git pull origin main 2>&1 && \
docker compose build --no-cache 2>&1 && \
docker compose up -d 2>&1 && \
echo "DEPLOY_SUCCESS"
Result Notifications
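The "deploy succeeded?" IF node that routes between these two notifications can key off the DEPLOY_SUCCESS marker echoed at the end of the deploy command: because every step is chained with &&, the marker only prints on a clean run. A sketch of the check as a plain function (in the IF node itself this could be the expression {{ $json.stdout.includes('DEPLOY_SUCCESS') }}):

```javascript
// Returns true only when the deploy chain ran to completion.
// The Execute Command node exposes the command's output as $json.stdout.
function deploySucceeded(stdout) {
  return typeof stdout === 'string' && stdout.includes('DEPLOY_SUCCESS');
}
```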
✅ *Deployment Successful*
Repository: {{ $('Code').first().json.repository }}
Branch: main ({{ $('Code').first().json.headCommit }})
Pushed by: {{ $('Code').first().json.pusher }}
Commits: {{ $('Code').first().json.commitCount }}
{{ $('Code').first().json.commitMessages }}
❌ *Deployment Failed*
Repository: {{ $('Code').first().json.repository }}
Error output attached. Manual intervention required.
Workflow 3: SSL Certificate Expiry Monitor
Even with Caddy's auto-renewal, it's good practice to monitor certificate expiry across all your domains. This workflow checks multiple domains and alerts 14 days before any certificate expires.
Schedule Trigger: Daily at 9 AM
Generate Domain List
const domains = [
'n8n.yourdomain.com',
'app.yourdomain.com',
'api.yourdomain.com'
];
const results = [];
for (const domain of domains) {
results.push({ json: { domain } });
}
return results;
Check Certificate with OpenSSL
echo | openssl s_client -servername {{ $json.domain }} -connect {{ $json.domain }}:443 2>/dev/null | openssl x509 -noout -enddate -subject 2>/dev/null
Calculate Days Until Expiry
// The Execute Command node outputs only stdout/stderr/exitCode, so read the
// domain from the earlier Code node (adjust the node name to match yours)
const domain = $('Generate Domain List').item.json.domain;
const output = $input.first().json.stdout;
const expiryMatch = output.match(/notAfter=(.+)/);
if (!expiryMatch) return [{ json: { domain, error: 'Could not read cert' } }];
const expiryDate = new Date(expiryMatch[1]);
const daysLeft = Math.floor((expiryDate - new Date()) / (1000 * 60 * 60 * 24));
return [{
json: {
domain,
expiryDate: expiryDate.toISOString().split('T')[0],
daysLeft,
isExpiring: daysLeft < 14
}
}];
Add an IF node on {{ $json.isExpiring }} → Alert via Slack/Email with days remaining.
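openssl prints the expiry as a line like notAfter=Jun  1 12:00:00 2026 GMT, and the daysLeft arithmetic above relies on JavaScript's Date constructor parsing that format. A quick standalone check of both steps (the sample date is invented):

```javascript
// Extract and parse an openssl -enddate line the same way the Code node does
function parseNotAfter(output) {
  const match = output.match(/notAfter=(.+)/);
  if (!match) return null;
  const date = new Date(match[1]);
  return Number.isNaN(date.getTime()) ? null : date;
}

// Whole days between now and the given date, rounded down
function daysUntil(date, now = new Date()) {
  return Math.floor((date - now) / (1000 * 60 * 60 * 24));
}
```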
Workflow 4: Docker Container Health Monitor
Monitor all running Docker containers and alert when any go unhealthy or stop.
Schedule Trigger: Every 10 minutes
docker ps -a --format '{"name":"{{.Names}}","status":"{{.Status}}","state":"{{.State}}","image":"{{.Image}}"}'
(If n8n tries to evaluate the Go-template {{ }} braces as expressions, move this command into a small shell script and call the script instead.)
const lines = $input.first().json.stdout.trim().split('\n');
const containers = lines.map(line => JSON.parse(line));
const issues = containers.filter(c =>
c.state !== 'running' ||
c.status.includes('unhealthy')
);
if (issues.length === 0) return [];
return [{
json: {
issueCount: issues.length,
issues: issues.map(c => `${c.name} (${c.state}): ${c.status}`).join('\n'),
allContainers: containers.length
}
}];
🐳 *Docker Container Alert*
{{ $json.issueCount }} of {{ $json.allContainers }} containers have issues:
{{ $json.issues }}
Workflow 5: Automated Backup Verification
Backups that aren't verified aren't backups. This workflow checks that your automated backups are actually running and producing valid files.
Schedule Trigger: Daily at 6 AM (after your backup cron runs)
BACKUP_DIR="/backups"
LATEST=$(ls -t "$BACKUP_DIR"/*.tar.gz 2>/dev/null | head -1)
if [ -z "$LATEST" ]; then
echo '{"status":"MISSING","message":"No backup files found"}'
exit 0
fi
SIZE=$(stat -f%z "$LATEST" 2>/dev/null || stat -c%s "$LATEST")
AGE_HOURS=$(( ($(date +%s) - $(stat -c%Y "$LATEST")) / 3600 ))
INTEGRITY=$(tar -tzf "$LATEST" > /dev/null 2>&1 && echo "OK" || echo "CORRUPT")
echo "{\"status\":\"$INTEGRITY\",\"file\":\"$(basename "$LATEST")\",\"size_mb\":$((SIZE/1048576)),\"age_hours\":$AGE_HOURS}"
const backup = JSON.parse($input.first().json.stdout);
const alerts = [];
if (backup.status === 'MISSING') alerts.push('🔴 No backup files found!');
if (backup.status === 'CORRUPT') alerts.push(`🔴 Latest backup is corrupt: ${backup.file}`);
if (backup.age_hours > 26) alerts.push(`🟡 Latest backup is ${backup.age_hours} hours old`);
if (backup.size_mb < 1) alerts.push(`🟡 Backup suspiciously small: ${backup.size_mb} MB`);
return [{
json: {
...backup,
hasAlerts: alerts.length > 0,
alertSummary: alerts.join('\n')
}
}];
Alert on failures, log successful verifications.
Organizing Your DevOps Workflows
Naming Convention: Prefix all DevOps workflows with [DevOps] — e.g., [DevOps] Server Resource Monitor. This makes them easy to find and filter.
Tags: Use n8n's tagging feature to categorize: monitoring, deployment, backup, security.
Shared Error Handler: All DevOps workflows should point to the Global Error Handler you built in Part 2. Operational workflows failing silently is worse than having no monitoring at all.
Execution Retention: Set DevOps workflows to retain execution data for at least 7 days. When debugging an alert, you'll want to see the raw data from the execution that triggered it.
What's Next?
In Part 5, we're building multi-service integrations — connecting n8n to GitHub, Slack, email, databases, and external APIs to create cross-platform workflows that would cost hundreds per month on Zapier.
We'll build a complete project management pipeline that ties multiple services together into a cohesive system.
