Architecture Overview
GlusterFS provides scalable network-attached storage with replication and high availability across multiple nodes.
Requirements
- 3+ Storage VPS instances
- Root/sudo access on all nodes
- Private networking recommended
- Ubuntu 22.04 or CentOS 8+
Recommended Setup
- Minimum 3 nodes (replica 3)
- Dedicated storage partitions
- Low-latency network
- Separate from root partition
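A quick way to sanity-check this setup on each node (the 192.168.1.x addresses are the example private IPs used throughout this guide):
# Confirm a dedicated disk/partition is available for bricks
lsblk
# Confirm low latency to the other nodes (expect ~1 ms or less on a private LAN)
ping -c 3 192.168.1.11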
Prepare All Nodes
Perform these steps on every node in your cluster:
# Ubuntu/Debian
sudo apt update && sudo apt upgrade -y
# CentOS/Rocky Linux
sudo dnf update -y
Next, configure hostname resolution by adding every node to /etc/hosts on each node:
192.168.1.10 gluster1.yourdomain.com gluster1
192.168.1.11 gluster2.yourdomain.com gluster2
192.168.1.12 gluster3.yourdomain.com gluster3
⚠️ Important: Replace IP addresses with your actual RamNode VPS IPs (private IPs recommended).
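Before moving on, confirm every node can resolve and reach its peers; a quick loop over the hostnames defined above:
# Run on each node; every peer should answer
for host in gluster1 gluster2 gluster3; do
  ping -c 1 "$host" > /dev/null && echo "$host OK" || echo "$host FAILED"
done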
Open the GlusterFS ports between cluster nodes (24007-24008 for management, 49152+ for bricks):
# For Ubuntu/Debian (UFW)
sudo ufw allow from 192.168.1.0/24 to any port 24007:24008 proto tcp
sudo ufw allow from 192.168.1.0/24 to any port 49152:49251 proto tcp
sudo ufw reload
# For CentOS/Rocky Linux (firewalld)
sudo firewall-cmd --permanent --add-service=glusterfs
sudo firewall-cmd --reload
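With the rules loaded, it is worth verifying the management port is actually reachable between nodes (this assumes netcat is installed):
# From gluster1, check glusterd's management port on the peers
nc -zv gluster2 24007
nc -zv gluster3 24007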
Install GlusterFS
Ubuntu/Debian Installation
sudo apt install software-properties-common -y
sudo add-apt-repository ppa:gluster/glusterfs-11 -y
sudo apt update
sudo apt install glusterfs-server -y
CentOS/Rocky Linux Installation
sudo dnf install centos-release-gluster11 -y
sudo dnf install glusterfs-server -y
Start and enable the GlusterFS daemon on every node:
sudo systemctl start glusterd
sudo systemctl enable glusterd
sudo systemctl status glusterd
✅ GlusterFS daemon is now running on all nodes!
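All nodes should run the same GlusterFS release; mismatched versions are a common cause of peer-probe failures. A quick check on each node:
# Confirm version and that the daemon is active
gluster --version | head -n 1
sudo systemctl is-active glusterd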
Prepare Storage
Create dedicated storage for GlusterFS bricks (recommended: separate partition/disk):
# Format the partition (adjust device as needed)
sudo mkfs.xfs /dev/sdb1
# Create mount point
sudo mkdir -p /data/glusterfs
# Mount the partition
sudo mount /dev/sdb1 /data/glusterfs
# Make persistent in /etc/fstab
echo '/dev/sdb1 /data/glusterfs xfs defaults 0 0' | sudo tee -a /etc/fstab
# Create brick directory
sudo mkdir -p /data/glusterfs/brick1
💡 Tip: Using XFS is recommended for GlusterFS. Repeat this on all nodes.
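Before continuing, verify the brick filesystem is mounted and the fstab entry is valid:
# Confirm the mount and filesystem type
findmnt /data/glusterfs
# Re-run fstab mounts to catch typos before the next reboot
sudo mount -a && echo "fstab OK"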
Create Trusted Storage Pool
Run from one node only (e.g., gluster1):
# Probe other nodes to form the cluster
sudo gluster peer probe gluster2
sudo gluster peer probe gluster3
# Verify peer status
sudo gluster peer status
Expected output:
Number of Peers: 2
Hostname: gluster2
Uuid: ...
State: Peer in Cluster (Connected)
Hostname: gluster3
Uuid: ...
State: Peer in Cluster (Connected)
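The whole pool can also be listed in one command; every node, including the local one, should show Connected:
# Compact view of all pool members
sudo gluster pool list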
Create GlusterFS Volume
Create a replicated volume (recommended for redundancy):
sudo gluster volume create gvol0 replica 3 \
gluster1:/data/glusterfs/brick1/gvol0 \
gluster2:/data/glusterfs/brick1/gvol0 \
gluster3:/data/glusterfs/brick1/gvol0 \
force
Alternative: Distributed Volume (No Redundancy)
sudo gluster volume create gvol0 \
gluster1:/data/glusterfs/brick1/gvol0 \
gluster2:/data/glusterfs/brick1/gvol0 \
gluster3:/data/glusterfs/brick1/gvol0 \
force
Start the volume:
sudo gluster volume start gvol0
# Verify volume information
sudo gluster volume info gvol0
Configure Volume Options
Optional but recommended optimizations:
# Enable performance optimizations
sudo gluster volume set gvol0 performance.cache-size 1GB
sudo gluster volume set gvol0 performance.write-behind-window-size 4MB
sudo gluster volume set gvol0 network.ping-timeout 10
# Enable client authentication
sudo gluster volume set gvol0 auth.allow 192.168.1.*
# Enable quota (optional)
sudo gluster volume quota gvol0 enable
sudo gluster volume quota gvol0 limit-usage / 500GB
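To confirm the quota took effect:
# Show configured limits and current usage
sudo gluster volume quota gvol0 list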
Mount GlusterFS on Clients
Install the GlusterFS client and mount the volume:
# Ubuntu/Debian
sudo apt install glusterfs-client -y
# CentOS/Rocky Linux
sudo dnf install glusterfs-client -y
# Create mount point
sudo mkdir -p /mnt/glusterfs
# Mount using native client
sudo mount -t glusterfs gluster1:/gvol0 /mnt/glusterfs
# Verify mount
df -h /mnt/glusterfs
To mount automatically at boot, add the volume to /etc/fstab:
gluster1:/gvol0 /mnt/glusterfs glusterfs defaults,_netdev 0 0
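Mounting through a single hostname makes gluster1 a soft single point of failure at mount time. The native client accepts fallback volfile servers; a sketch using the backup-volfile-servers mount option:
# Fall back to other nodes if gluster1 is down when mounting
sudo mount -t glusterfs -o backup-volfile-servers=gluster2:gluster3 \
gluster1:/gvol0 /mnt/glusterfs
# fstab equivalent
gluster1:/gvol0 /mnt/glusterfs glusterfs defaults,_netdev,backup-volfile-servers=gluster2:gluster3 0 0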
Alternative: NFS Mount
# Enable NFS on the volume (from any cluster node)
# Note: the built-in gNFS server is deprecated in recent GlusterFS releases;
# NFS-Ganesha is the recommended gateway for new deployments
sudo gluster volume set gvol0 nfs.disable off
# Mount via NFS from the client (gNFS serves NFSv3 only, so pin the version)
sudo mount -t nfs -o vers=3 gluster1:/gvol0 /mnt/glusterfs
Testing & Verification
Test write/read operations and verify replication:
# Write test file
echo "GlusterFS test" | sudo tee /mnt/glusterfs/test.txt
# Verify file exists on all bricks
ssh gluster1 "cat /data/glusterfs/brick1/gvol0/test.txt"
ssh gluster2 "cat /data/glusterfs/brick1/gvol0/test.txt"
ssh gluster3 "cat /data/glusterfs/brick1/gvol0/test.txt"
# Check volume status
sudo gluster volume status gvol0
# Check heal status (for replicated volumes)
sudo gluster volume heal gvol0 info
# Monitor performance
sudo gluster volume profile gvol0 start
sudo gluster volume profile gvol0 info
# Stop profiling when finished (it adds overhead)
sudo gluster volume profile gvol0 stop
Maintenance Operations
Add Additional Bricks (Expansion)
sudo gluster peer probe gluster4
sudo gluster volume add-brick gvol0 replica 4 \
gluster4:/data/glusterfs/brick1/gvol0 force
# Populate the new replica (rebalance only applies when adding
# distribute subvolumes, not when raising the replica count)
sudo gluster volume heal gvol0 full
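For distributed and distributed-replicated volumes, expansion does move data, so rebalance applies there; a sketch assuming gvol0 were a plain distributed volume:
# Add a brick to a distributed volume, then spread existing files onto it
sudo gluster volume add-brick gvol0 gluster4:/data/glusterfs/brick1/gvol0
sudo gluster volume rebalance gvol0 start
sudo gluster volume rebalance gvol0 status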
Remove a Brick
# On a replicated volume, reducing the replica count requires force;
# the remaining replicas already hold full copies of the data
sudo gluster volume remove-brick gvol0 replica 2 \
gluster3:/data/glusterfs/brick1/gvol0 force
# On a distributed volume, removal must migrate data first: start, then commit
sudo gluster volume remove-brick gvol0 \
gluster3:/data/glusterfs/brick1/gvol0 start
# Check status
sudo gluster volume remove-brick gvol0 \
gluster3:/data/glusterfs/brick1/gvol0 status
# Commit removal when migration complete
sudo gluster volume remove-brick gvol0 \
gluster3:/data/glusterfs/brick1/gvol0 commit
Replace Failed Node
# Remove failed brick
sudo gluster volume remove-brick gvol0 replica 2 \
failed-node:/data/glusterfs/brick1/gvol0 force
# Add replacement brick
sudo gluster volume add-brick gvol0 replica 3 \
new-node:/data/glusterfs/brick1/gvol0 force
# Trigger heal
sudo gluster volume heal gvol0 full
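If the replacement brick is already prepared, the remove/add pair can be collapsed into a single replace-brick call, which also triggers healing onto the new brick (hostnames here are the same placeholders as above):
# One-step brick swap
sudo gluster volume replace-brick gvol0 \
failed-node:/data/glusterfs/brick1/gvol0 \
new-node:/data/glusterfs/brick1/gvol0 commit force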
Backup and Recovery
# Stop volume before backup
sudo gluster volume stop gvol0
# Backup brick data from each node (GlusterFS stores metadata in
# trusted.* extended attributes, so preserve them)
sudo tar --xattrs --xattrs-include='trusted.*' -czf \
/backup/gluster-brick-backup.tar.gz /data/glusterfs/brick1/gvol0
# Restart volume
sudo gluster volume start gvol0
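Recovery is the reverse; a minimal sketch, assuming the tarball created above and the same brick layout on the affected node:
# Restore brick data (preserving trusted.* xattrs), then restart and heal
sudo gluster volume stop gvol0
sudo tar --xattrs --xattrs-include='trusted.*' -xzf /backup/gluster-brick-backup.tar.gz -C /
sudo gluster volume start gvol0
sudo gluster volume heal gvol0 full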
Troubleshooting
Peer Connection Issues
# If a peer shows Disconnected, confirm glusterd is running and port 24007
# is reachable, then detach and re-probe (detach requires the peer to hold no bricks)
sudo gluster peer detach gluster2
sudo gluster peer probe gluster2
Split-Brain Resolution
# Identify split-brain files
sudo gluster volume heal gvol0 info split-brain
# Resolve by choosing source brick
sudo gluster volume heal gvol0 split-brain source-brick \
gluster1:/data/glusterfs/brick1/gvol0
Performance Issues
# Check volume performance settings
sudo gluster volume get gvol0 all
# Optimize for your workload
sudo gluster volume set gvol0 performance.readdir-ahead on
sudo gluster volume set gvol0 performance.stat-prefetch on
Check Logs
tail -f /var/log/glusterfs/glusterd.log
tail -f /var/log/glusterfs/bricks/data-glusterfs-brick1-gvol0.log
Security Best Practices
- Use Private Networking: Configure RamNode VPS with private IPs for cluster communication
- Enable SSL/TLS: Configure GlusterFS to use encrypted connections (see the sketch after this list)
- Implement Access Controls: Use auth.allow to restrict client access
- Regular Updates: Keep GlusterFS packages updated
- Backup Strategy: Implement regular backup procedures
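A hedged sketch of the SSL/TLS piece, assuming certificates are already provisioned at GlusterFS's default paths (/etc/ssl/glusterfs.pem, glusterfs.key, and glusterfs.ca on every node):
# Enable TLS for the management path (run on all nodes, then restart glusterd)
sudo touch /var/lib/glusterd/secure-access
sudo systemctl restart glusterd
# Enable TLS for volume I/O
sudo gluster volume set gvol0 client.ssl on
sudo gluster volume set gvol0 server.ssl on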
