Part 7 of 7

    Deploying Applications and Databases

    Put your infrastructure to work with real-world deployments—applications and production databases that power self-hosted services.

    Nextcloud
    Gitea
    PostgreSQL
    MariaDB
    Redis

    You've built the foundation: Docker for containers, Compose for multi-service stacks, Kubernetes for orchestration at scale. Now let's put that infrastructure to work with real-world deployments—the applications and databases that power self-hosted services.

    This guide covers deploying popular self-hosted applications, production database configurations, high-availability patterns, and the operational practices that keep data safe.

    1

    Application Deployment Patterns

    Most self-hosted applications follow a similar architecture:

    Standard application stack
    ┌─────────────────┐
    │  Reverse Proxy  │  ← SSL termination, routing
    │  (Nginx/Traefik)│
    └────────┬────────┘
             │
    ┌────────▼────────┐
    │   Application   │  ← Business logic
    │   (Web + API)   │
    └────────┬────────┘
             │
         ┌───┴────────┐
         │            │
    ┌────▼─────┐  ┌───▼───┐
    │    DB    │  │ Cache │  ← Data layer
    │(Postgres)│  │(Redis)│
    └──────────┘  └───────┘

    Environment Configuration

    Keep configuration separate from deployment files:

    .env (never commit this)
    DB_PASSWORD=generated-secure-password
    ADMIN_EMAIL=admin@yourdomain.com
    DOMAIN=app.yourdomain.com
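
    Docker Compose loads .env from the project directory automatically. Before deploying, lock the file down and render the fully substituted configuration to confirm no variable is missing:

    Lock down and verify .env
    chmod 600 .env
    docker compose config | grep -E 'PASSWORD|DOMAIN'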

    Persistent Data Strategy

    Data Type        Storage Approach                       Backup Priority
    Database files   Named volume                           Critical
    User uploads     Named volume or bind mount             High
    Configuration    Bind mount (version controlled)        Medium
    Cache            tmpfs or ephemeral volume              None
    Logs             Log driver or volume with rotation     Low
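
    Named volumes live under Docker's own storage area. A simple way to back one up is to mount it read-only into a throwaway container and archive it; the volume and path names below are examples, and Compose prefixes volume names with the project name, so check docker volume ls first:

    Back up a named volume to a tarball
    docker run --rm \
      -v appdata:/source:ro \
      -v /opt/backups:/backup \
      alpine tar czf /backup/appdata-$(date +%Y%m%d).tar.gz -C /source .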
    2

    Deploying Nextcloud

    Nextcloud is a self-hosted productivity platform covering file sync, calendars, contacts, and collaboration tools, and it is among the most popular self-hosted applications.

    nextcloud/docker-compose.yml
    services:
      nextcloud:
        image: nextcloud:28-apache
        restart: unless-stopped
        ports:
          - "8080:80"
        volumes:
          - nextcloud-html:/var/www/html
          - nextcloud-data:/var/www/html/data
          - nextcloud-config:/var/www/html/config
        environment:
          - POSTGRES_HOST=db
          - POSTGRES_DB=nextcloud
          - POSTGRES_USER=nextcloud
          - POSTGRES_PASSWORD=${DB_PASSWORD}
          - REDIS_HOST=redis
          - NEXTCLOUD_ADMIN_USER=${ADMIN_USER}
          - NEXTCLOUD_ADMIN_PASSWORD=${ADMIN_PASSWORD}
          - NEXTCLOUD_TRUSTED_DOMAINS=${DOMAIN}
          - OVERWRITEPROTOCOL=https
        depends_on:
          db:
            condition: service_healthy
          redis:
            condition: service_started
    
      db:
        image: postgres:16-alpine
        restart: unless-stopped
        volumes:
          - postgres-data:/var/lib/postgresql/data
        environment:
          - POSTGRES_DB=nextcloud
          - POSTGRES_USER=nextcloud
          - POSTGRES_PASSWORD=${DB_PASSWORD}
        healthcheck:
          test: ["CMD-SHELL", "pg_isready -U nextcloud"]
          interval: 10s
          timeout: 5s
          retries: 5
    
      redis:
        image: redis:7-alpine
        restart: unless-stopped
        volumes:
          - redis-data:/data
        command: redis-server --appendonly yes
    
      cron:
        image: nextcloud:28-apache
        restart: unless-stopped
        volumes:
          - nextcloud-html:/var/www/html
          - nextcloud-data:/var/www/html/data
          - nextcloud-config:/var/www/html/config
        entrypoint: /cron.sh
        depends_on:
          - nextcloud
    
    volumes:
      nextcloud-html:
      nextcloud-data:
      nextcloud-config:
      postgres-data:
      redis-data:
    Generate secure passwords
    DB_PASSWORD=$(openssl rand -hex 24)
    ADMIN_USER=admin
    ADMIN_PASSWORD=$(openssl rand -hex 16)
    DOMAIN=cloud.yourdomain.com
    
    # Save these somewhere safe
    echo "Admin password: $ADMIN_PASSWORD"
    3

    Deploying Gitea

    Gitea is a lightweight, self-hosted Git service and a popular GitHub alternative; it runs well on modest hardware (roughly 200 MB of RAM at idle).

    gitea/docker-compose.yml
    services:
      gitea:
        image: gitea/gitea:latest
        restart: unless-stopped
        ports:
          - "3000:3000"
          - "2222:22"
        volumes:
          - gitea-data:/data
          - /etc/timezone:/etc/timezone:ro
          - /etc/localtime:/etc/localtime:ro
        environment:
          - USER_UID=1000
          - USER_GID=1000
          - GITEA__database__DB_TYPE=postgres
          - GITEA__database__HOST=db:5432
          - GITEA__database__NAME=gitea
          - GITEA__database__USER=gitea
          - GITEA__database__PASSWD=${DB_PASSWORD}
          - GITEA__server__ROOT_URL=https://git.yourdomain.com/
          - GITEA__server__SSH_DOMAIN=git.yourdomain.com
          - GITEA__server__SSH_PORT=2222
          - GITEA__service__DISABLE_REGISTRATION=true
        depends_on:
          db:
            condition: service_healthy
    
      db:
        image: postgres:16-alpine
        restart: unless-stopped
        volumes:
          - postgres-data:/var/lib/postgresql/data
        environment:
          - POSTGRES_DB=gitea
          - POSTGRES_USER=gitea
          - POSTGRES_PASSWORD=${DB_PASSWORD}
        healthcheck:
          test: ["CMD-SHELL", "pg_isready -U gitea"]
          interval: 10s
          timeout: 5s
          retries: 5
    
    volumes:
      gitea-data:
      postgres-data:
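
    Host port 2222 maps to the container's SSH daemon, so Git operations over SSH name that port explicitly (the repository path below is a placeholder):

    Clone over the mapped SSH port
    git clone ssh://git@git.yourdomain.com:2222/myorg/myrepo.git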

    Gitea CI/CD with Actions

    Add runner for CI/CD
    services:
      gitea:
        # ... existing config
        environment:
          # ... existing vars
          - GITEA__actions__ENABLED=true
    
      runner:
        image: gitea/act_runner:latest
        restart: unless-stopped
        volumes:
          - runner-data:/data
          - /var/run/docker.sock:/var/run/docker.sock
        environment:
          - GITEA_INSTANCE_URL=http://gitea:3000
          - GITEA_RUNNER_REGISTRATION_TOKEN=${RUNNER_TOKEN}
        depends_on:
          - gitea
    
    volumes:
      runner-data:
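
    The runner registers itself on first start using RUNNER_TOKEN. You can copy a registration token from Gitea's web UI under Site Administration, Actions, Runners; recent Gitea releases can also print one from the CLI (shown as a sketch, verify the subcommand against your Gitea version):

    Obtain a runner registration token
    docker compose exec -u git gitea gitea actions generate-runner-token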
    4

    Wikis and Documentation

    BookStack

    BookStack provides a structured wiki with books, chapters, and pages:

    bookstack/docker-compose.yml
    services:
      bookstack:
        image: lscr.io/linuxserver/bookstack:latest
        restart: unless-stopped
        ports:
          - "6875:80"
        volumes:
          - bookstack-config:/config
        environment:
          - PUID=1000
          - PGID=1000
          - APP_URL=https://wiki.yourdomain.com
          - DB_HOST=db
          - DB_PORT=3306
          - DB_USER=bookstack
          - DB_PASS=${DB_PASSWORD}
          - DB_DATABASE=bookstack
        depends_on:
          db:
            condition: service_healthy
    
      db:
        image: mariadb:11
        restart: unless-stopped
        volumes:
          - mariadb-data:/var/lib/mysql
        environment:
          - MYSQL_ROOT_PASSWORD=${DB_ROOT_PASSWORD}
          - MYSQL_DATABASE=bookstack
          - MYSQL_USER=bookstack
          - MYSQL_PASSWORD=${DB_PASSWORD}
        healthcheck:
          test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
          interval: 10s
          timeout: 5s
          retries: 5
    
    volumes:
      bookstack-config:
      mariadb-data:
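
    As with the other stacks, generate the two database passwords into .env before the first start. BookStack's documented default login is admin@admin.com with the password "password"; change it immediately after the first sign-in.

    Create .env with generated passwords
    echo "DB_ROOT_PASSWORD=$(openssl rand -hex 24)" > .env
    echo "DB_PASSWORD=$(openssl rand -hex 24)" >> .env
    chmod 600 .env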
    5

    Database Deployments

    Databases are the foundation of most applications. Getting them right is critical.

    PostgreSQL Production Setup

    postgres/docker-compose.yml
    services:
      postgres:
        image: postgres:16-alpine
        restart: unless-stopped
        ports:
          - "127.0.0.1:5432:5432"  # Only local access
        volumes:
          - postgres-data:/var/lib/postgresql/data
          - ./init:/docker-entrypoint-initdb.d:ro
          - ./postgresql.conf:/etc/postgresql/postgresql.conf:ro
        environment:
          - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
          - POSTGRES_INITDB_ARGS=--encoding=UTF8 --locale=en_US.UTF-8
        command: postgres -c config_file=/etc/postgresql/postgresql.conf
        healthcheck:
          test: ["CMD-SHELL", "pg_isready -U postgres"]
          interval: 10s
          timeout: 5s
          retries: 5
        deploy:
          resources:
            limits:
              memory: 2G
    
    volumes:
      postgres-data:
    postgresql.conf (tuned for 4GB RAM VPS)
    # Memory
    shared_buffers = 1GB
    effective_cache_size = 3GB
    work_mem = 32MB
    maintenance_work_mem = 256MB
    
    # Connections
    max_connections = 100
    
    # WAL
    wal_buffers = 64MB
    checkpoint_completion_target = 0.9
    max_wal_size = 2GB
    min_wal_size = 1GB
    
    # Query Planning
    random_page_cost = 1.1  # SSD storage
    effective_io_concurrency = 200
    
    # Logging
    log_min_duration_statement = 1000  # Log queries over 1s
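
    After (re)starting the container, confirm the custom configuration file is actually being loaded and that the settings took effect:

    Verify the running configuration
    docker compose exec postgres psql -U postgres -c "SHOW config_file;"
    docker compose exec postgres psql -U postgres -c "SHOW shared_buffers;"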

    PostgreSQL Backup Script

    /opt/scripts/pg-backup.sh
    #!/bin/bash
    set -euo pipefail  # fail fast on errors, unset variables, and broken pipelines
    BACKUP_DIR="/opt/backups/postgres"
    RETENTION_DAYS=7
    DATE=$(date +%Y%m%d_%H%M%S)
    CONTAINER="postgres"
    
    mkdir -p "$BACKUP_DIR"
    
    # Dump all databases
    docker exec $CONTAINER pg_dumpall -U postgres | gzip > "$BACKUP_DIR/all_databases_$DATE.sql.gz"
    
    # Individual database dumps
    for db in $(docker exec $CONTAINER psql -U postgres -t -c "SELECT datname FROM pg_database WHERE datistemplate = false AND datname != 'postgres'"); do
        docker exec $CONTAINER pg_dump -U postgres -Fc "$db" > "$BACKUP_DIR/${db}_$DATE.dump"
    done
    
    # Clean old backups
    find "$BACKUP_DIR" -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete
    find "$BACKUP_DIR" -name "*.dump" -mtime +$RETENTION_DAYS -delete

    Redis for Caching and Sessions

    redis/docker-compose.yml
    services:
      redis:
        image: redis:7-alpine
        restart: unless-stopped
        ports:
          - "127.0.0.1:6379:6379"
        volumes:
          - redis-data:/data
        command: >
          redis-server
          --appendonly yes
          --maxmemory 512mb
          --maxmemory-policy allkeys-lru
          --save 900 1
          --save 300 10
          --save 60 10000
        healthcheck:
          test: ["CMD", "redis-cli", "ping"]
          interval: 10s
          timeout: 5s
          retries: 5
    
    volumes:
      redis-data:
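
    Once the container is up, confirm the memory cap and eviction policy are in effect:

    Check effective Redis settings
    docker compose exec redis redis-cli config get maxmemory-policy
    docker compose exec redis redis-cli info memory | grep maxmemory_human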
    6

    Database High Availability

    For critical applications, single database instances are a risk. Here are patterns for redundancy.

    PostgreSQL Streaming Replication

    Primary server additions to postgresql.conf
    wal_level = replica
    max_wal_senders = 3
    wal_keep_size = 1GB
    hot_standby = on
    pg_hba.conf for replication (tighten 0.0.0.0/0 to the replica's address or subnet in production)
    host replication replicator 0.0.0.0/0 scram-sha-256
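
    Replication also needs a dedicated role on the primary and an initial copy of the primary's data directory on the replica before it first starts. A minimal sketch, where the host name and password are placeholders:

    Create the replication role and seed the replica
    # On the primary
    docker exec -it postgres psql -U postgres -c "CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'change-me';"

    # On the replica host, into an empty data directory
    pg_basebackup -h primary.internal -U replicator -D /var/lib/postgresql/data -R -X stream -P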

    Redis Sentinel for HA

    redis-ha/docker-compose.yml
    services:
      redis-master:
        image: redis:7-alpine
        command: redis-server --appendonly yes
        volumes:
          - redis-master-data:/data
    
      redis-replica:
        image: redis:7-alpine
        command: redis-server --appendonly yes --replicaof redis-master 6379
        volumes:
          - redis-replica-data:/data
        depends_on:
          - redis-master
    
      sentinel:
        image: redis:7-alpine
        command: redis-sentinel /etc/redis/sentinel.conf
        volumes:
          - ./sentinel.conf:/etc/redis/sentinel.conf
        depends_on:
          - redis-master
          - redis-replica
    
    volumes:
      redis-master-data:
      redis-replica-data:
    sentinel.conf
    sentinel monitor mymaster redis-master 6379 2
    sentinel resolve-hostnames yes
    sentinel down-after-milliseconds mymaster 5000
    sentinel failover-timeout mymaster 60000
    sentinel parallel-syncs mymaster 1

    Sentinel only accepts a hostname like redis-master when resolve-hostnames is enabled (available since Redis 6.2). Also note that a quorum of 2 means two Sentinels must agree before a failover, so run at least three Sentinel instances, ideally on separate hosts, for this setup to be meaningfully highly available.
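
    To check which node Sentinel currently reports as the master (and to watch a failover after stopping redis-master), query Sentinel on its default port 26379:

    Query Sentinel for the current master
    docker compose exec sentinel redis-cli -p 26379 sentinel get-master-addr-by-name mymaster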
    7

    Kubernetes Database Deployments

    For Kubernetes, consider whether you need to run databases in the cluster or use external managed services.

    Good fit for K8s databases:

    • Development/staging environments
    • Stateless apps with external database
    • Tight integration with cluster services

    Consider external databases:

    • Production requiring high availability
    • Special storage needs (high IOPS)
    • Compliance requirements

    PostgreSQL on Kubernetes with CloudNativePG

    Install CloudNativePG operator
    kubectl apply -f https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.22/releases/cnpg-1.22.0.yaml
    PostgreSQL Cluster resource
    apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    metadata:
      name: postgres-cluster
    spec:
      instances: 3
      primaryUpdateStrategy: unsupervised
      
      storage:
        size: 20Gi
        storageClass: longhorn
      
      resources:
        requests:
          memory: "1Gi"
          cpu: "500m"
        limits:
          memory: "2Gi"
          cpu: "2"
      
      postgresql:
        parameters:
          shared_buffers: "256MB"
          effective_cache_size: "1GB"
      
      bootstrap:
        initdb:
          database: app
          owner: app
          secret:
            name: postgres-credentials
      
      backup:
        barmanObjectStore:
          destinationPath: s3://your-bucket/postgres-backups
          s3Credentials:
            accessKeyId:
              name: s3-creds
              key: ACCESS_KEY_ID
            secretAccessKey:
              name: s3-creds
              key: SECRET_ACCESS_KEY
        retentionPolicy: "30d"
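
    With the operator installed, create the bootstrap credentials (CloudNativePG expects a basic-auth style secret whose username and password keys match the initdb owner), apply the manifest, and watch the cluster come up; the manifest file name below is an example:

    Create credentials and deploy the cluster
    kubectl create secret generic postgres-credentials \
      --type=kubernetes.io/basic-auth \
      --from-literal=username=app \
      --from-literal=password="$(openssl rand -hex 24)"

    kubectl apply -f postgres-cluster.yaml
    kubectl get cluster postgres-cluster -w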
    8

    Monitoring Databases

    PostgreSQL Metrics with postgres_exporter

    Add to docker-compose.yml
    services:
      postgres-exporter:
        image: prometheuscommunity/postgres-exporter
        restart: unless-stopped
        ports:
          - "9187:9187"
        environment:
          - DATA_SOURCE_NAME=postgresql://postgres:${POSTGRES_PASSWORD}@postgres:5432/postgres?sslmode=disable

    Key metrics to alert on:

    • pg_up: Database reachability
    • pg_stat_activity_count: Active connections
    • pg_stat_database_tup_fetched: Query throughput
    • pg_replication_lag: Replica delay (seconds)

    Redis Metrics with redis_exporter

    Add Redis exporter
    services:
      redis-exporter:
        image: oliver006/redis_exporter
        restart: unless-stopped
        ports:
          - "9121:9121"
        environment:
          - REDIS_ADDR=redis://redis:6379
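
    Both exporters serve a /metrics endpoint for Prometheus to scrape; a quick sanity check from the host:

    Confirm the exporters are serving metrics
    curl -s http://localhost:9187/metrics | grep '^pg_up'
    curl -s http://localhost:9121/metrics | grep '^redis_up'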
    9

    Production Checklist for Data Services

    Reliability

    • Automated backups configured and tested
    • Backup restoration procedure documented
    • Monitoring and alerting in place
    • Resource limits set to prevent OOM
    • Health checks configured

    Security

    • Database ports not exposed publicly
    • Strong passwords for all accounts
    • Minimal privileges for app users
    • Encryption at rest (if required)
    • SSL/TLS for connections

    Performance

    • Configuration tuned for resources
    • Connection pooling for high-traffic workloads
    • Slow query logging enabled
    • Indexes reviewed for common queries

    Operations

    • Runbook for common issues
    • Log rotation configured
    • Disk space monitoring
    • Upgrade procedure documented

    Series Complete

    You've now seen how to deploy real applications and production databases using Docker and Kubernetes. The patterns repeat across different tools: reverse proxy for SSL and routing, persistent volumes for data, health checks for reliability, and proper configuration for performance.

    The skills from this series—containerization, orchestration, monitoring, and operational best practices—apply whether you're running a personal Nextcloud instance or a multi-node database cluster. Start simple, add complexity only when needed, and always have tested backups.