Why CephFS on RamNode VPS?
CephFS provides enterprise-grade distributed storage that's particularly valuable for:
- Development Teams: Shared storage across multiple environments
- High Availability: Applications requiring fault tolerance
- No Vendor Lock-in: Avoid cloud storage dependencies
- Scalable Storage: Grow storage with demand
Prerequisites and Planning
Before starting, ensure you have:
Server Requirements
- At least 3 RamNode VPS instances
- Minimum 2GB RAM per VPS
- At least 20GB storage per VPS
- Ubuntu 20.04 LTS or newer
- Root access to all instances
Example 5-Node Cluster
- 3 nodes as Ceph monitors + OSDs
- 2 additional nodes as pure OSDs
- All nodes can be CephFS clients
- High availability configuration
Initial VPS Setup
Update all RamNode VPS instances and install essential packages:
apt update && apt upgrade -y
apt install -y python3 python3-pip curl wget gnupg2
Configure hostnames and update /etc/hosts on all nodes:
192.168.1.10 ceph-mon1
192.168.1.11 ceph-mon2
192.168.1.12 ceph-mon3
192.168.1.13 ceph-osd1
192.168.1.14 ceph-osd2
💡 Tip: Set up SSH key authentication between all nodes to simplify cluster management.
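One way to set that up, assuming you run everything from the node that will bootstrap the cluster (ceph-mon1 here) and that the hostnames above resolve:
# Generate a key pair without a passphrase and push it to every other node
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
for host in ceph-mon2 ceph-mon3 ceph-osd1 ceph-osd2; do
  ssh-copy-id root@"$host"
done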
Install Ceph
Install Docker on all nodes (required for cephadm):
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
systemctl enable docker
systemctl start docker
Install cephadm on your first monitor node:
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
chmod +x cephadm
./cephadm add-repo --release quincy
./cephadm install
Bootstrap Ceph Cluster
Initialize your Ceph cluster on the first monitor node:
cephadm bootstrap --mon-ip 192.168.1.10 --cluster-network 192.168.1.0/24
This command will:
- Deploy the first Ceph monitor
- Deploy a Ceph manager
- Generate cluster keys and configuration
- Start the Ceph dashboard on port 8443
🔑 Important: Save the dashboard credentials displayed after bootstrap completion!
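At this point you can sanity-check the new cluster from the bootstrap node; a quick way, assuming cephadm finished without errors, is to run a one-off command inside the cephadm shell:
# Run ceph status inside a containerized shell that has the cluster config and keyring
cephadm shell -- ceph -s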
Add Additional Nodes
Copy the cluster's public key to other nodes and add them:
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-mon2
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-mon3
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-osd1
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-osd2
Then register each host with the orchestrator:
ceph orch host add ceph-mon2 192.168.1.11
ceph orch host add ceph-mon3 192.168.1.12
ceph orch host add ceph-osd1 192.168.1.13
ceph orch host add ceph-osd2 192.168.1.14
Expand the monitor quorum to three nodes:
ceph orch apply mon "ceph-mon1,ceph-mon2,ceph-mon3"
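Before moving on, it is worth confirming that the orchestrator sees all five hosts and that three monitors are running:
# List hosts known to the orchestrator
ceph orch host ls
# Show the monitor daemons and where they are placed
ceph orch ps --daemon-type mon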
Configure OSDs (Object Storage Daemons)
OSDs are the storage workhorses of your Ceph cluster.
List the devices Ceph can see, then let it consume all unused disks automatically:
ceph orch device ls
ceph orch apply osd --all-available-devices
For VPS instances with only the root filesystem, create file-based OSDs:
# Create directories for OSD data on each node
mkdir -p /var/lib/ceph/osd-data
# Add file-based OSDs
ceph orch daemon add osd ceph-mon1:/var/lib/ceph/osd-data
ceph orch daemon add osd ceph-mon2:/var/lib/ceph/osd-data
ceph orch daemon add osd ceph-mon3:/var/lib/ceph/osd-data
ceph orch daemon add osd ceph-osd1:/var/lib/ceph/osd-data
ceph orch daemon add osd ceph-osd2:/var/lib/ceph/osd-data
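Give the OSDs a minute to start, then confirm they are all up and in (exact output depends on your layout):
# Show the CRUSH tree with each OSD's status
ceph osd tree
# Show per-OSD utilization
ceph osd df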
Deploy MDS (Metadata Servers)
CephFS requires metadata servers to manage filesystem metadata. Deploy at least two for redundancy:
ceph orch apply mds cephfs --placement="ceph-mon1,ceph-mon2,ceph-mon3"
📊 High Availability: Multiple MDS servers ensure metadata operations continue even if one server fails.
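You can confirm where the MDS daemons landed with the orchestrator:
# List the metadata server daemons and their hosts
ceph orch ps --daemon-type mds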
Create CephFS Filesystem
Create the required pools and filesystem:
ceph osd pool create cephfs-data 64 64
ceph osd pool create cephfs-metadata 64 64
Then create the filesystem and check that it comes up active:
ceph fs new cephfs cephfs-metadata cephfs-data
ceph fs status
✅ Your CephFS filesystem is now created and ready to be mounted!
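The PG counts above are a reasonable starting point for a cluster this size; if you would rather not tune them by hand, you can leave it to the PG autoscaler (enabled by default on recent releases, so this may be a no-op):
# Let Ceph adjust placement group counts as the pools grow
ceph osd pool set cephfs-data pg_autoscale_mode on
ceph osd pool set cephfs-metadata pg_autoscale_mode on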
Set Up CephFS Clients
Install the CephFS client on all nodes that need to mount the filesystem:
apt install -y ceph-fuse
Create a client keyring for mounting:
# Create client key
ceph auth get-or-create client.cephfs mon 'allow r' mds 'allow rw' osd 'allow rw pool=cephfs-data, allow rw pool=cephfs-metadata' > /etc/ceph/ceph.client.cephfs.keyring
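Client nodes also need the cluster configuration and key material. One way to hand them out, using ceph-osd1 as an example client and the default paths from above:
# Copy the cluster config and the client keyring to the client node
scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.cephfs.keyring root@ceph-osd1:/etc/ceph/
# Extract just the secret for kernel-client mounts
ceph auth get-key client.cephfs > /etc/ceph/cephfs.secret
chmod 600 /etc/ceph/cephfs.secret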
Mount CephFS
You can mount CephFS using either the kernel client or the FUSE client. The kernel client generally offers better performance:
mkdir -p /mnt/cephfs
# Kernel client mount (replace the secret with your client.cephfs key)
mount -t ceph ceph-mon1:6789,ceph-mon2:6789,ceph-mon3:6789:/ /mnt/cephfs -o name=cephfs,secret=AQC...==
# FUSE client alternative
ceph-fuse /mnt/cephfs --name client.cephfs
For persistent mounting, add an entry to /etc/fstab:
ceph-mon1:6789,ceph-mon2:6789,ceph-mon3:6789:/ /mnt/cephfs ceph name=cephfs,secret=AQC...==,_netdev 0 2
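If you prefer not to keep the raw secret in /etc/fstab, you can reference the secret file created earlier instead (assuming /etc/ceph/cephfs.secret exists on the client):
ceph-mon1:6789,ceph-mon2:6789,ceph-mon3:6789:/ /mnt/cephfs ceph name=cephfs,secretfile=/etc/ceph/cephfs.secret,_netdev 0 2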
Performance Optimization
Network Optimization
# Edit /etc/ceph/ceph.conf on all nodes
[global]
public_network = 192.168.1.0/24
cluster_network = 192.168.1.0/24
ms_bind_port_min = 6800
ms_bind_port_max = 7300
OSD Optimization
# Cap OSD memory use to fit small VPS instances (1 GB here)
ceph config set osd osd_memory_target 1073741824
# Throttle recovery and backfill so client I/O stays responsive
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1
Client-Side Caching
# Kernel client: raise the readahead limit for large sequential reads (value in bytes)
mount -t ceph ceph-mon1:6789:/ /mnt/cephfs -o name=cephfs,secretfile=/etc/ceph/cephfs.secret,rasize=4194304
Monitoring and Maintenance
Using the Ceph Dashboard
Access the web-based dashboard at https://your-monitor-ip:8443 to monitor cluster health, performance metrics, and manage resources.
Command-Line Monitoring
# Check overall cluster health
ceph status
# Monitor OSD status
ceph osd status
# Check filesystem status
ceph fs status
# Monitor cluster performance
ceph -w
Regular Maintenance Tasks
# Enable automatic OSD scrubbing
ceph config set osd osd_scrub_auto_repair true
# Configure deep scrub intervals (weekly)
ceph config set osd osd_deep_scrub_interval 604800
Backup and Recovery Strategies
Snapshot Management
CephFS supports snapshots for point-in-time recovery:
# Create a snapshot
mkdir /mnt/cephfs/.snap/backup-$(date +%Y%m%d)
# List snapshots
ls /mnt/cephfs/.snap/
# Restore from snapshot
cp -r /mnt/cephfs/.snap/backup-20231215/* /mnt/cephfs/restored/
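Snapshots are enabled by default on current releases; if snapshot creation is refused on your version, you can enable them explicitly on the filesystem:
# Allow snapshot creation on the cephfs filesystem
ceph fs set cephfs allow_new_snaps true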
External Backups
rsync -av --progress /mnt/cephfs/ /backup/cephfs-$(date +%Y%m%d)/
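To run that copy on a schedule, a simple cron entry is enough (paths and timing are illustrative):
# /etc/cron.d/cephfs-backup: nightly rsync of the CephFS mount at 02:00
0 2 * * * root rsync -a --delete /mnt/cephfs/ /backup/cephfs-latest/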
Scaling Your CephFS Cluster
Adding More OSDs
# Add new VPS node
ceph orch host add ceph-osd3 192.168.1.15
# Deploy OSD on new node
ceph orch daemon add osd ceph-osd3:/var/lib/ceph/osd-data
Adding More MDS Servers
# Scale MDS daemons for better metadata performance
ceph orch apply mds cephfs --placement="5"
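Extra MDS daemons join as standbys by default; to actually run more than one active metadata server, raise max_mds as well (whether that helps depends on your metadata workload):
# Allow two active MDS daemons; the rest remain standbys
ceph fs set cephfs max_mds 2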
Conclusion
Deploying CephFS on RamNode VPS instances provides a cost-effective way to build enterprise-grade distributed storage. While the initial setup requires careful planning and configuration, the result is a robust, scalable filesystem that can grow with your needs.
The combination of Ceph's proven technology and RamNode's reliable VPS infrastructure creates an excellent foundation for applications requiring distributed storage. With proper monitoring, maintenance, and security practices, your CephFS cluster will provide years of reliable service.
Remember that distributed storage systems require ongoing attention and maintenance. Regular monitoring, timely updates, and proper backup procedures are essential for maintaining a healthy cluster. Start small, understand the system thoroughly, and scale gradually as your requirements grow.
Whether you're building a development environment, supporting a growing application, or creating a resilient storage solution, CephFS on RamNode VPS offers the flexibility and reliability needed for modern storage requirements.
