    Part 4 of 6

    Docker Compose & Self-Hosted Apps

    Deploy complete application stacks with Docker Compose—from analytics platforms to automation tools to AI infrastructure—all managed through your Coolify dashboard.


    Docker Compose in Coolify

    Docker Compose lets you define multi-container applications in a single file. A typical stack might include:

    • Application container
    • Database container
    • Redis for caching
    • Reverse proxy or worker processes

    Coolify reads your docker-compose.yml, creates the containers, sets up networking, and manages the lifecycle—start, stop, restart, update—all from the UI.

    Adding a Docker Compose Resource

    1. Go to Projects → Select your project
    2. Add Resource → Docker Compose
    3. Choose your source:
      • Git Repository: Coolify pulls the compose file from your repo
      • Raw Docker Compose: Paste the YAML directly into Coolify
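    If you choose the raw option, a minimal compose file is enough to get started (the image and port here are placeholders — swap in whatever you actually want to run):

    services:
      app:
        image: nginx:alpine   # any image Coolify should run
        ports:
          - "8080:80"         # host:container

    Coolify parses the services, creates the containers, and lets you attach domains and environment variables per service from the UI.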

    One-Click Services vs Custom Compose

    🚀 One-Click Services

    Pre-configured templates under Add Resource → Service:

    • Plausible Analytics
    • Umami, Uptime Kuma
    • n8n, Gitea, Ghost
    • WordPress, and dozens more

    Great for getting started quickly.

    📝 Custom Docker Compose

    For apps not in the library or when you need specific configs:

    • Image versions and tags
    • Environment variables
    • Volume mounts
    • Network configuration
    • Resource limits

    Analytics: Plausible Analytics

    Plausible is a privacy-focused, lightweight alternative to Google Analytics, ideal when you want full ownership of your analytics data.

    Deploy via One-Click

    1. Add Resource → Service → search for "Plausible"
    2. Configure Base URL, Admin Email, Admin Password
    3. Deploy — Coolify provisions PostgreSQL and ClickHouse automatically

    Deploy via Custom Compose

    docker-compose.yml
    version: "3.8"
    
    services:
      plausible:
        image: ghcr.io/plausible/community-edition:v2.1
        restart: always
        command: sh -c "sleep 10 && /entrypoint.sh db createdb && /entrypoint.sh db migrate && /entrypoint.sh run"
        ports:
          - "8000:8000"
        environment:
          - BASE_URL=${BASE_URL}
          - SECRET_KEY_BASE=${SECRET_KEY_BASE}
          - DATABASE_URL=postgres://postgres:postgres@plausible-db:5432/plausible
          - CLICKHOUSE_DATABASE_URL=http://plausible-events-db:8123/plausible_events
        depends_on:
          - plausible-db
          - plausible-events-db
    
      plausible-db:
        image: postgres:16-alpine
        restart: always
        volumes:
          - plausible-db-data:/var/lib/postgresql/data
        environment:
          - POSTGRES_PASSWORD=postgres
    
      plausible-events-db:
        image: clickhouse/clickhouse-server:24.3-alpine
        restart: always
        volumes:
          - plausible-events-data:/var/lib/clickhouse
        ulimits:
          nofile:
            soft: 262144
            hard: 262144
    
    volumes:
      plausible-db-data:
      plausible-events-data:
    Generate secret key
    openssl rand -base64 64 | tr -d '\n'

    Add Tracking Script

    Add to your website's <head>
    <script defer data-domain="yourdomain.com" src="https://analytics.yourdomain.com/js/script.js"></script>

    Monitoring: Uptime Kuma

    Uptime Kuma monitors your websites, APIs, and services with a beautiful dashboard and flexible alerting.

    docker-compose.yml
    version: "3.8"
    
    services:
      uptime-kuma:
        image: louislam/uptime-kuma:1
        restart: always
        ports:
          - "3001:3001"
        volumes:
          - uptime-kuma-data:/app/data
    
    volumes:
      uptime-kuma-data:
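    You can also let Docker (and therefore Coolify) track the monitor's own health. Recent Uptime Kuma images bundle a probe script at extra/healthcheck — that path is an assumption about your image tag; substitute any HTTP check if yours lacks it:

    services:
      uptime-kuma:
        # ...existing config...
        healthcheck:
          test: ["CMD", "extra/healthcheck"]  # bundled probe (assumed present in 1.x images)
          interval: 60s
          retries: 3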

    Monitor Types

    Monitor Type     | Use Case
    HTTP(s)          | Websites, APIs
    TCP Port         | Database connectivity
    Ping             | Server availability
    DNS              | DNS resolution checks
    Docker Container | Container health

    Supports 90+ notification integrations: Email, Slack, Discord, Telegram, PagerDuty, and more.


    Automation: n8n

    n8n is a workflow automation platform—like Zapier, but self-hosted and infinitely more flexible.

    docker-compose.yml (with PostgreSQL)
    version: "3.8"
    
    services:
      n8n:
        image: docker.n8n.io/n8nio/n8n:latest
        restart: always
        ports:
          - "5678:5678"
        environment:
          - N8N_HOST=${N8N_HOST}
          - N8N_PORT=5678
          - N8N_PROTOCOL=https
          - NODE_ENV=production
          - WEBHOOK_URL=https://${N8N_HOST}/
          - GENERIC_TIMEZONE=${TIMEZONE}
          - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
          - DB_TYPE=postgresdb
          - DB_POSTGRESDB_HOST=n8n-db
          - DB_POSTGRESDB_PORT=5432
          - DB_POSTGRESDB_DATABASE=n8n
          - DB_POSTGRESDB_USER=n8n
          - DB_POSTGRESDB_PASSWORD=${DB_PASSWORD}
        volumes:
          - n8n-data:/home/node/.n8n
        depends_on:
          - n8n-db
    
      n8n-db:
        image: postgres:16-alpine
        restart: always
        environment:
          - POSTGRES_USER=n8n
          - POSTGRES_PASSWORD=${DB_PASSWORD}
          - POSTGRES_DB=n8n
        volumes:
          - n8n-db-data:/var/lib/postgresql/data
    
    volumes:
      n8n-data:
      n8n-db-data:
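    The N8N_ENCRYPTION_KEY above is used to encrypt stored credentials, so generate a random value once and keep it stable across upgrades — changing it later makes saved credentials unreadable. One way to generate it:

    ```shell
    # Generate a random encryption key (any sufficiently long random string works)
    N8N_ENCRYPTION_KEY=$(openssl rand -base64 32)
    echo "$N8N_ENCRYPTION_KEY"
    ```

    Store the value in Coolify's environment variables for the resource rather than hard-coding it in the compose file.
    
    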

    n8n Use Cases

    DevOps

    Deploy notifications, incident alerts, automated responses

    Marketing

    Lead capture, email sequences, social posting

    Data

    ETL pipelines, database syncs, report generation

    AI

    Chain LLM calls, process documents, build agents


    AI Infrastructure: Ollama

    Ollama runs large language models locally. Private AI inference without sending data to third parties.

    Hardware Considerations

    Model Size                     | Minimum RAM | Recommended Plan
    7B models (Llama 3.2, Mistral) | 8GB         | Standard 8GB
    13B models                     | 16GB        | Standard 16GB
    70B models                     | 64GB+       | Premium with GPU

    Ollama + Open WebUI
    version: "3.8"
    
    services:
      ollama:
        image: ollama/ollama:latest
        restart: always
        volumes:
          - ollama-data:/root/.ollama
        deploy:
          resources:
            limits:
              memory: 8G
    
      open-webui:
        image: ghcr.io/open-webui/open-webui:main
        restart: always
        ports:
          - "3000:8080"
        environment:
          - OLLAMA_BASE_URL=http://ollama:11434
        volumes:
          - open-webui-data:/app/backend/data
        depends_on:
          - ollama
    
    volumes:
      ollama-data:
      open-webui-data:
    Pull models after deployment
    docker exec -it <container-id> ollama pull llama3.2
    docker exec -it <container-id> ollama pull mistral
    docker exec -it <container-id> ollama pull codellama
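    On a GPU plan, note that the 8G memory limit above does nothing to expose the GPU to the container. With the NVIDIA Container Toolkit installed on the host, the standard Compose device-reservation syntax hands the GPU to Ollama — a sketch:

    services:
      ollama:
        # ...existing config...
        deploy:
          resources:
            reservations:
              devices:
                - driver: nvidia
                  count: all
                  capabilities: [gpu]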

    AI Workflows: Dify

    Dify is an LLM application development platform—build AI chatbots, agents, and workflows with a visual interface.

    docker-compose.yml
    version: "3.8"
    
    services:
      api:
        image: langgenius/dify-api:latest
        restart: always
        environment:
          - MODE=api
          - SECRET_KEY=${SECRET_KEY}
          - CONSOLE_WEB_URL=https://${DIFY_HOST}
          - SERVICE_API_URL=https://${DIFY_HOST}
          - APP_WEB_URL=https://${DIFY_HOST}
          - DB_USERNAME=dify
          - DB_PASSWORD=${DB_PASSWORD}
          - DB_HOST=db
          - DB_PORT=5432
          - DB_DATABASE=dify
          - REDIS_HOST=redis
          - REDIS_PORT=6379
          - CELERY_BROKER_URL=redis://redis:6379/1
          - STORAGE_TYPE=local
          - STORAGE_LOCAL_PATH=/app/api/storage
        volumes:
          - dify-storage:/app/api/storage
        depends_on:
          - db
          - redis
    
      worker:
        image: langgenius/dify-api:latest
        restart: always
        environment:
          - MODE=worker
          - SECRET_KEY=${SECRET_KEY}
          - DB_USERNAME=dify
          - DB_PASSWORD=${DB_PASSWORD}
          - DB_HOST=db
          - DB_DATABASE=dify
          - REDIS_HOST=redis
          - CELERY_BROKER_URL=redis://redis:6379/1
        volumes:
          - dify-storage:/app/api/storage
        depends_on:
          - db
          - redis
    
      web:
        image: langgenius/dify-web:latest
        restart: always
        ports:
          - "3000:3000"
        environment:
          - CONSOLE_API_URL=https://${DIFY_HOST}
          - APP_API_URL=https://${DIFY_HOST}
    
      db:
        image: postgres:16-alpine
        restart: always
        environment:
          - POSTGRES_USER=dify
          - POSTGRES_PASSWORD=${DB_PASSWORD}
          - POSTGRES_DB=dify
        volumes:
          - dify-db:/var/lib/postgresql/data
    
      redis:
        image: redis:7-alpine
        restart: always
        volumes:
          - dify-redis:/data
    
    volumes:
      dify-storage:
      dify-db:
      dify-redis:

    Connect to Ollama: In Dify, go to Settings → Model Providers → Add Ollama with endpoint http://ollama:11434
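    That http://ollama:11434 endpoint only resolves if the Dify and Ollama containers share a Docker network. If you deployed the two stacks separately, one approach is to declare a common external network in both compose files (the network name is illustrative; create it once on the host with docker network create ai-shared):

    services:
      api:               # repeat for each service that must reach Ollama
        networks:
          - ai-shared

    networks:
      ai-shared:
        external: true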


    Code Hosting: Gitea

    Gitea is a lightweight Git server—host your private repositories without GitHub or GitLab.

    docker-compose.yml
    version: "3.8"
    
    services:
      gitea:
        image: gitea/gitea:latest
        restart: always
        ports:
          - "3000:3000"
          - "2222:22"
        environment:
          - USER_UID=1000
          - USER_GID=1000
          - GITEA__database__DB_TYPE=postgres
          - GITEA__database__HOST=gitea-db:5432
          - GITEA__database__NAME=gitea
          - GITEA__database__USER=gitea
          - GITEA__database__PASSWD=${DB_PASSWORD}
        volumes:
          - gitea-data:/data
          - /etc/timezone:/etc/timezone:ro
          - /etc/localtime:/etc/localtime:ro
        depends_on:
          - gitea-db
    
      gitea-db:
        image: postgres:16-alpine
        restart: always
        environment:
          - POSTGRES_USER=gitea
          - POSTGRES_PASSWORD=${DB_PASSWORD}
          - POSTGRES_DB=gitea
        volumes:
          - gitea-db-data:/var/lib/postgresql/data
    
    volumes:
      gitea-data:
      gitea-db-data:

    Gitea + Coolify Integration: Add Gitea as a source in Coolify (Sources → Add Source → Gitea) to deploy directly from your self-hosted repositories.
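    Because the compose file maps Gitea's SSH to host port 2222, Git clients need to know about the non-standard port. An SSH config entry handles this once (the hostname is a placeholder for your Gitea domain):

    ~/.ssh/config
    Host git.example.com
        Port 2222
        User git

    After that, git clone git@git.example.com:user/repo.git works without spelling out the port each time.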


    Document Management: Paperless-ngx

    Paperless-ngx scans, indexes, and organizes your documents with OCR and full-text search.

    docker-compose.yml
    version: "3.8"
    
    services:
      paperless:
        image: ghcr.io/paperless-ngx/paperless-ngx:latest
        restart: always
        ports:
          - "8000:8000"
        environment:
          - PAPERLESS_REDIS=redis://redis:6379
          - PAPERLESS_DBHOST=db
          - PAPERLESS_DBUSER=paperless
          - PAPERLESS_DBPASS=${DB_PASSWORD}
          - PAPERLESS_DBNAME=paperless
          - PAPERLESS_SECRET_KEY=${SECRET_KEY}
          - PAPERLESS_URL=https://${PAPERLESS_HOST}
          - PAPERLESS_ADMIN_USER=${ADMIN_USER}
          - PAPERLESS_ADMIN_PASSWORD=${ADMIN_PASSWORD}
          - PAPERLESS_OCR_LANGUAGE=eng
        volumes:
          - paperless-data:/usr/src/paperless/data
          - paperless-media:/usr/src/paperless/media
          - paperless-consume:/usr/src/paperless/consume
        depends_on:
          - db
          - redis
    
      db:
        image: postgres:16-alpine
        restart: always
        environment:
          - POSTGRES_USER=paperless
          - POSTGRES_PASSWORD=${DB_PASSWORD}
          - POSTGRES_DB=paperless
        volumes:
          - paperless-db:/var/lib/postgresql/data
    
      redis:
        image: redis:7-alpine
        restart: always
    
    volumes:
      paperless-data:
      paperless-media:
      paperless-consume:
      paperless-db:
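    PAPERLESS_OCR_LANGUAGE above sets the default OCR language. For multilingual archives, Paperless-ngx also reads PAPERLESS_OCR_LANGUAGES to install additional Tesseract language packs at startup — a sketch adding German and French alongside English (ISO 639-2 codes; the install list is space-separated, the OCR setting is "+"-separated):

        environment:
          - PAPERLESS_OCR_LANGUAGES=deu fra
          - PAPERLESS_OCR_LANGUAGE=eng+deu+fra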

    Managing Persistent Data

    All these applications store data in Docker volumes. Understanding volume management is crucial.

    📁 Where Data Lives

    Coolify stores volumes at /data/coolify/services/<service-id>/. Each named volume maps to a directory there.

    💾 Backup Strategy

    Backup volumes
    # Stop the service first for consistency
    cd /data/coolify/services/<service-id>
    tar -czvf backup-$(date +%Y%m%d).tar.gz volumes/
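    Restoring is the reverse: stop the service, then extract the archive back into the service directory. The round trip looks like this, demonstrated on throwaway paths in /tmp (substitute the real /data/coolify/services/<service-id> directory):

    ```shell
    # Create a dummy volume directory standing in for a Coolify service
    mkdir -p /tmp/voldemo/volumes
    echo "important data" > /tmp/voldemo/volumes/app.db

    # Back up, wipe, and restore
    tar -czf /tmp/voldemo-backup.tar.gz -C /tmp/voldemo volumes/
    rm -rf /tmp/voldemo/volumes
    tar -xzf /tmp/voldemo-backup.tar.gz -C /tmp/voldemo

    cat /tmp/voldemo/volumes/app.db   # the data is back
    ```

    Test a restore at least once before you need it; an unverified backup is a hope, not a strategy.
    
    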

    🔐 Volume Permissions

    Some containers run as non-root users. If you see permission errors:

    Fix permissions
    # Find the container's user ID
    docker exec <container-id> id
    
    # Adjust volume ownership
    chown -R 1000:1000 /data/coolify/services/<service-id>/volumes/<volume-name>

    Resource Management

    Running multiple applications requires careful resource allocation.

    Memory Planning for 4GB VPS

    Service        | Memory
    Coolify itself | ~500MB
    PostgreSQL     | 512MB-1GB
    Redis          | 128MB
    Small web apps | 256MB each
    n8n            | 512MB
    Plausible      | 1GB (ClickHouse is hungry)

    Setting Resource Limits

    Add to compose files
    services:
      myapp:
        image: myapp:latest
        deploy:
          resources:
            limits:
              cpus: '1'
              memory: 512M
            reservations:
              memory: 256M

    This prevents any single container from consuming all resources and crashing other services.

    What's Next

    You now have a toolkit for deploying virtually any self-hosted application. From analytics to AI, your Coolify instance handles it all with consistent management, automatic SSL, and integrated backups.

    In Part 5, we'll scale beyond a single server—adding multiple RamNode VPS instances as deployment targets, load balancing across regions, and building redundant infrastructure.