Distributed Storage Guide

    Deploying CephFS on RamNode VPS

    Building a distributed filesystem across multiple virtual private servers can seem daunting, but with the right approach, it becomes a powerful solution for scalable storage. CephFS is a POSIX-compliant distributed filesystem that runs on top of Ceph's object storage system. Combined with RamNode's reliable and affordable VPS hosting, you get a cost-effective way to build enterprise-grade distributed storage.

    Ubuntu 20.04+
    Ceph Quincy
    ⏱️ 45-60 minutes

    Why CephFS on RamNode VPS?

    CephFS provides enterprise-grade distributed storage that's particularly valuable for:

    • Development Teams: shared storage across multiple environments
    • High Availability: applications requiring fault tolerance
    • No Vendor Lock-in: avoid cloud storage dependencies
    • Scalable Storage: grow storage with demand

    Prerequisites and Planning

    Before starting, ensure you have:

    Server Requirements

    • At least 3 RamNode VPS instances
    • Minimum 2GB RAM per VPS
    • At least 20GB storage per VPS
    • Ubuntu 20.04 LTS or newer
    • Root access to all instances

    Example 5-Node Cluster

    • 3 nodes as Ceph monitors + OSDs
    • 2 additional nodes as pure OSDs
    • All nodes can be CephFS clients
    • High availability configuration

    Initial VPS Setup

    Update all RamNode VPS instances and install essential packages:

    Update System on All Nodes
    apt update && apt upgrade -y
    apt install -y python3 python3-pip curl wget gnupg2

    Configure hostnames and update /etc/hosts on all nodes:

    Example /etc/hosts Configuration
    192.168.1.10 ceph-mon1
    192.168.1.11 ceph-mon2
    192.168.1.12 ceph-mon3
    192.168.1.13 ceph-osd1
    192.168.1.14 ceph-osd2

    💡 Tip: Set up SSH key authentication between all nodes to simplify cluster management.
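
    For example, from ceph-mon1 you could generate a root key and push it to the other nodes for day-to-day administration (a minimal sketch; cephadm distributes its own cluster key separately in a later step):

    Set Up SSH Key Authentication
    # Generate a key pair on ceph-mon1 and copy it to every other node
    ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
    for host in ceph-mon2 ceph-mon3 ceph-osd1 ceph-osd2; do
        ssh-copy-id root@$host
    done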

    Install Ceph

    Install Docker on all nodes (required for cephadm):

    Install Docker
    curl -fsSL https://get.docker.com -o get-docker.sh
    sh get-docker.sh
    systemctl enable docker
    systemctl start docker

    Install cephadm on your first monitor node:

    Install Cephadm
    curl --silent --remote-name --location https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
    chmod +x cephadm
    ./cephadm add-repo --release quincy
    ./cephadm install

    Bootstrap Ceph Cluster

    Initialize your Ceph cluster on the first monitor node:

    Bootstrap Cluster
    cephadm bootstrap --mon-ip 192.168.1.10 --cluster-network 192.168.1.0/24

    This command will:

    • Deploy the first Ceph monitor
    • Deploy a Ceph manager
    • Generate cluster keys and configuration
    • Start the Ceph dashboard on port 8443

    🔑 Important: Save the dashboard credentials displayed after bootstrap completion!
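
    Once the bootstrap completes, you can check the young cluster from a containerized shell; cephadm bundles the ceph CLI, so nothing extra needs to be installed on the host:

    Verify the Bootstrap
    # Open a shell with the ceph CLI and admin keyring, then check cluster status
    cephadm shell -- ceph -s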

    Add Additional Nodes

    Copy the cluster's public key to other nodes and add them:

    Copy the Cluster's SSH Key
    ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-mon2
    ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-mon3
    ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-osd1
    ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-osd2
    Add Nodes to Cluster
    ceph orch host add ceph-mon2 192.168.1.11
    ceph orch host add ceph-mon3 192.168.1.12
    ceph orch host add ceph-osd1 192.168.1.13
    ceph orch host add ceph-osd2 192.168.1.14
    Deploy Additional Monitors
    ceph orch apply mon "ceph-mon1,ceph-mon2,ceph-mon3"
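
    Before continuing, it's worth confirming that all five hosts have joined the cluster:

    Verify Cluster Hosts
    # All five nodes should appear in the host list
    ceph orch host ls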

    Configure OSDs (Object Storage Daemons)

    OSDs are the storage workhorses of your Ceph cluster.

    List Available Devices
    ceph orch device ls
    Add All Available Devices as OSDs
    ceph orch apply osd --all-available-devices

    Bluestore OSDs need a raw block device, so for VPS instances with only the root filesystem, a common workaround is to back each OSD with a loop device created from a file (recent Ceph releases accept loop devices as OSD backing devices):

    Create File-Backed OSDs (Loop Devices)
    # On each node, create a backing file and attach it as a loop device
    mkdir -p /var/lib/ceph/osd-files
    truncate -s 10G /var/lib/ceph/osd-files/osd.img
    losetup /dev/loop0 /var/lib/ceph/osd-files/osd.img

    # Then add each node's loop device as an OSD
    ceph orch daemon add osd ceph-mon1:/dev/loop0
    ceph orch daemon add osd ceph-mon2:/dev/loop0
    ceph orch daemon add osd ceph-mon3:/dev/loop0
    ceph orch daemon add osd ceph-osd1:/dev/loop0
    ceph orch daemon add osd ceph-osd2:/dev/loop0

    Note that loop devices do not persist across reboots, so plan to reattach them at boot (for example with a systemd unit).
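
    Once the OSDs are created, confirm that they all came up and joined the cluster:

    Verify OSD Deployment
    # Each node should show one OSD in the "up" state
    ceph osd tree
    ceph -s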

    Deploy MDS (Metadata Servers)

    CephFS requires metadata servers to manage filesystem metadata. Deploy at least two for redundancy:

    Deploy Metadata Servers
    ceph orch apply mds cephfs --placement="ceph-mon1,ceph-mon2,ceph-mon3"

    📊 High Availability: Multiple MDS servers ensure metadata operations continue even if one server fails.
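
    You can confirm the MDS daemons were deployed; until a filesystem exists, they will all report as standby:

    Verify MDS Daemons
    ceph orch ps --daemon-type mds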

    Create CephFS Filesystem

    Create the required pools and filesystem:

    Create Data and Metadata Pools
    ceph osd pool create cephfs-data 64 64
    ceph osd pool create cephfs-metadata 64 64
    Create Filesystem
    ceph fs new cephfs cephfs-metadata cephfs-data
    Verify Filesystem Creation
    ceph fs status

    ✅ Your CephFS filesystem is now created and ready to be mounted!

    Set Up CephFS Clients

    Install the CephFS client on all nodes that need to mount the filesystem:

    Install CephFS Client
    # ceph-common provides the kernel mount helper; ceph-fuse provides the FUSE client
    apt install -y ceph-common ceph-fuse

    Create a client keyring for mounting:

    Create Client Authentication Key
    # Create a client key scoped to the cephfs filesystem (read/write from the root)
    ceph fs authorize cephfs client.cephfs / rw > /etc/ceph/ceph.client.cephfs.keyring
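
    The kernel mount in the next step needs the client's secret key. One way to keep it handy is to save just the key to a file (the path /etc/ceph/cephfs.secret is only an example, reused in the fstab variant below):

    Extract the Client Secret Key
    # Save only the base64 key for use with the kernel mount
    ceph auth get-key client.cephfs > /etc/ceph/cephfs.secret
    chmod 600 /etc/ceph/cephfs.secret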

    Mount CephFS

    You can mount CephFS using either the kernel client or FUSE client. The kernel client generally offers better performance:

    Create Mount Point
    mkdir -p /mnt/cephfs
    Mount Using Kernel Client
    mount -t ceph ceph-mon1:6789,ceph-mon2:6789,ceph-mon3:6789:/ /mnt/cephfs -o name=cephfs,secret=AQC...==
    Or Mount Using FUSE Client
    ceph-fuse /mnt/cephfs --name client.cephfs

    For persistent mounting, add an entry to /etc/fstab:

    Add to /etc/fstab
    ceph-mon1:6789,ceph-mon2:6789,ceph-mon3:6789:/ /mnt/cephfs ceph name=cephfs,secret=AQC...==,_netdev 0 2
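
    If you'd rather not keep the raw secret in /etc/fstab, mount.ceph also accepts a secretfile option; a variant assuming the key was saved to /etc/ceph/cephfs.secret as shown earlier:

    Add to /etc/fstab (Using a Secret File)
    ceph-mon1:6789,ceph-mon2:6789,ceph-mon3:6789:/ /mnt/cephfs ceph name=cephfs,secretfile=/etc/ceph/cephfs.secret,_netdev 0 2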

    Performance Optimization

    Network Optimization

    Configure Network Settings
    # With cephadm, daemons read the cluster configuration database,
    # so set these centrally instead of editing ceph.conf on each node
    ceph config set global public_network 192.168.1.0/24
    ceph config set global cluster_network 192.168.1.0/24
    ceph config set global ms_bind_port_min 6800
    ceph config set global ms_bind_port_max 7300

    OSD Optimization

    Tune OSD Settings for VPS
    # Lower the per-OSD memory target for low-RAM VPS instances (default is 4GB)
    ceph config set osd osd_memory_target 1073741824

    # Throttle recovery and backfill so client I/O stays responsive
    ceph config set osd osd_max_backfills 1
    ceph config set osd osd_recovery_max_active 1

    Client-Side Caching

    The kernel client already caches file data in the page cache; for large sequential workloads you can also raise the readahead window:

    Mount with a Larger Readahead Window
    mount -t ceph ceph-mon1:6789:/ /mnt/cephfs -o name=cephfs,secret=AQC...==,rasize=16777216

    Monitoring and Maintenance

    Using the Ceph Dashboard

    Access the web-based dashboard at https://your-monitor-ip:8443 to monitor cluster health, performance metrics, and manage resources.
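
    If you're not sure which node currently serves the dashboard (the active manager can move between nodes), you can ask the cluster directly:

    Find the Dashboard URL
    # Lists active manager services, including the dashboard endpoint
    ceph mgr services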

    Command-Line Monitoring

    Essential Monitoring Commands
    # Check overall cluster health
    ceph status
    
    # Monitor OSD status
    ceph osd status
    
    # Check filesystem status
    ceph fs status
    
    # Monitor cluster performance
    ceph -w

    Regular Maintenance Tasks

    Configure Automated Maintenance
    # Enable automatic OSD scrubbing
    ceph config set osd osd_scrub_auto_repair true
    
    # Configure deep scrub intervals (weekly)
    ceph config set osd osd_deep_scrub_interval 604800
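
    Beyond scrubbing, a simple scheduled health check is worth having; a minimal cron sketch (the log path is just an example):

    Example Daily Health Check (/etc/cron.d/ceph-health)
    # Log cluster health every morning at 06:00
    0 6 * * * root ceph health detail >> /var/log/ceph-health.log 2>&1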

    Backup and Recovery Strategies

    Snapshot Management

    CephFS supports snapshots for point-in-time recovery:

    Snapshot Operations
    # Create a snapshot
    mkdir /mnt/cephfs/.snap/backup-$(date +%Y%m%d)
    
    # List snapshots
    ls /mnt/cephfs/.snap/
    
    # Restore from snapshot
    cp -r /mnt/cephfs/.snap/backup-20231215/* /mnt/cephfs/restored/
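
    Recent Ceph releases also include a snap_schedule manager module that can take these snapshots automatically; a brief sketch:

    Schedule Automatic Snapshots
    # Enable the scheduler and snapshot the filesystem root every 24 hours
    ceph mgr module enable snap_schedule
    ceph fs snap-schedule add / 24h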

    External Backups

    Example Backup Script
    rsync -av --progress /mnt/cephfs/ /backup/cephfs-$(date +%Y%m%d)/

    Scaling Your CephFS Cluster

    Adding More OSDs

    Add New VPS Node
    # Copy the cluster's SSH key, then add the new VPS node
    ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-osd3
    ceph orch host add ceph-osd3 192.168.1.15

    # Deploy an OSD on the new node (file-backed loop device, prepared as before)
    ceph orch daemon add osd ceph-osd3:/dev/loop0

    Adding More MDS Servers

    Scale MDS Daemons
    # Deploy more MDS daemons for better metadata performance
    ceph orch apply mds cephfs --placement="5"

    # Allow up to two active MDS ranks; the remaining daemons stay on standby
    ceph fs set cephfs max_mds 2

    Conclusion

    Deploying CephFS on RamNode VPS instances provides a cost-effective way to build enterprise-grade distributed storage. While the initial setup requires careful planning and configuration, the result is a robust, scalable filesystem that can grow with your needs.

    The combination of Ceph's proven technology and RamNode's reliable VPS infrastructure creates an excellent foundation for applications requiring distributed storage. With proper monitoring, maintenance, and security practices, your CephFS cluster will provide years of reliable service.

    Remember that distributed storage systems require ongoing attention and maintenance. Regular monitoring, timely updates, and proper backup procedures are essential for maintaining a healthy cluster. Start small, understand the system thoroughly, and scale gradually as your requirements grow.

    Whether you're building a development environment, supporting a growing application, or creating a resilient storage solution, CephFS on RamNode VPS offers the flexibility and reliability needed for modern storage requirements.