Your apps are deployed. Now they need data. This guide covers creating databases in Dokploy, connecting applications, running migrations safely, and setting up automated backups you can actually restore from.
Dokploy's Built-in Databases
Dokploy can deploy databases as managed services with a few clicks. Supported engines:
- PostgreSQL — The default choice for most applications
- MySQL / MariaDB — WordPress, Laravel legacy, specific compatibility needs
- MongoDB — Document storage, flexible schemas
- Redis — Caching, sessions, queues
Each runs as a Docker container with persistent volumes. Dokploy handles networking so your apps can reach them by hostname.
Creating a PostgreSQL Database
Via Dashboard
- Open your project in Dokploy
- Click Create Service → Database → PostgreSQL
- Configure:
  - Name: `main-db` (becomes the hostname)
  - Database Name: `myapp`
  - Username: `myapp`
  - Password: Generate a strong one
  - Version: `16` (latest stable)
- Click Create
Dokploy spins up the container and creates a persistent volume for `/var/lib/postgresql/data`.
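If you want to confirm what was created, you can check from the host. The exact container and volume names depend on your Dokploy version and project (they're derived from the service name), so treat the filters below as a sketch:

```bash
# Find the database container and its volume (names vary by install)
docker ps --filter "name=main-db"
docker volume ls | grep main-db
```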
Connection Details

```
Host: main-db (internal hostname)
Port: 5432
Database: myapp
Username: myapp
Password: (what you set)
```

Connection string:

```
postgresql://myapp:yourpassword@main-db:5432/myapp
```
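To sanity-check the credentials, you can run a query inside the container. The `dokploy-main-db` container name is an assumption based on the naming pattern used in the backup examples later in this guide; confirm it with `docker ps`:

```bash
docker exec dokploy-main-db psql -U myapp -d myapp -c "SELECT version();"
```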
Creating MySQL/MariaDB

Same process, different engine:
- Create Service → Database → MySQL (or MariaDB)
- Configure:
  - Name: `mysql-db`
  - Root Password: Generate one
  - Database Name: `myapp`
  - Username: `myapp`
  - Password: Generate one
  - Version: `8.0`

Connection string:

```
mysql://myapp:yourpassword@mysql-db:3306/myapp
```
MySQL vs MariaDB

Use MariaDB unless you specifically need MySQL-only features. It's a drop-in replacement for most MySQL clients and workloads, often faster for common workloads, and truly open source.
Creating Redis
Redis is essential for caching, sessions, and background job queues.
- Create Service → Database → Redis
- Configure:
  - Name: `cache`
  - Password: Set one (optional but recommended)
  - Version: `7`

Connection string:

```
redis://:yourpassword@cache:6379/0
```
Redis for Multiple Purposes

Use different database numbers to separate concerns:

```
# Caching
CACHE_URL=redis://cache:6379/0
# Sessions
SESSION_URL=redis://cache:6379/1
# Job queues
QUEUE_URL=redis://cache:6379/2
```

Database numbers 0-15 are available by default.
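You can poke at a specific database with `redis-cli -n` (the container name and password follow the earlier examples and are placeholders):

```bash
# Count keys in the sessions database (db 1)
docker exec dokploy-cache redis-cli -a yourpassword -n 1 DBSIZE
```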
Connecting Applications to Databases
In your application's Environment tab, add the connection details:
```
DATABASE_URL=postgresql://myapp:yourpassword@main-db:5432/myapp

# Or split out (some frameworks prefer this):
DB_HOST=main-db
DB_PORT=5432
DB_NAME=myapp
DB_USER=myapp
DB_PASSWORD=yourpassword
```
Framework-Specific Examples

Laravel (`.env`):

```
DB_CONNECTION=pgsql
DB_HOST=main-db
DB_PORT=5432
DB_DATABASE=myapp
DB_USERNAME=myapp
DB_PASSWORD=yourpassword

CACHE_DRIVER=redis
QUEUE_CONNECTION=redis
SESSION_DRIVER=redis
REDIS_HOST=cache
REDIS_PASSWORD=yourpassword
REDIS_PORT=6379
```
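For a Node app using Prisma, the single connection-string form is usually all you need. A sketch, assuming your Prisma schema reads `env("DATABASE_URL")`; `REDIS_URL` is just a common convention, not something any framework requires:

```
# Prisma picks this up via env("DATABASE_URL") in schema.prisma
DATABASE_URL=postgresql://myapp:yourpassword@main-db:5432/myapp
# e.g. for an ioredis/BullMQ connection
REDIS_URL=redis://:yourpassword@cache:6379/0
```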
Running Migrations

Migrations need to run once per deployment, not per container. Here are three approaches:
Option 1: Manual via Dokploy Terminal
After deployment, go to your application → Terminal tab → run:
```bash
# Laravel
php artisan migrate --force

# Django
python manage.py migrate

# Prisma (Next.js, etc.)
npx prisma migrate deploy

# Rails
rails db:migrate
```

Good for: Initial setup, debugging, one-off changes.
Option 2: Startup Script
Add migrations to your container startup with an `entrypoint.sh`:

```sh
#!/bin/sh
set -e

echo "Running migrations..."
npx prisma migrate deploy

echo "Starting application..."
exec node server.js
```

Good for: Simple apps, development environments. Warning: this runs on every container start.
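For the script to run at all, it has to be wired in as the image's entrypoint. A minimal sketch, assuming a Node app and that `entrypoint.sh` sits at your repo root (adjust paths to your build):

```dockerfile
# Hypothetical Dockerfile excerpt: copy the script, make it executable, use it as the entrypoint
COPY entrypoint.sh /app/entrypoint.sh
RUN chmod +x /app/entrypoint.sh
ENTRYPOINT ["/app/entrypoint.sh"]
```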
Option 3: Deployment Hook (Recommended)
Use Dokploy's Advanced settings to run a command after build, before traffic switches:
- Go to your application → Advanced tab
- Find Post-deployment command
- Add your migration command
Good for: Production deployments, zero-downtime migrations.
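The field takes the same command you'd otherwise run in the terminal; for a Django app, for instance:

```bash
python manage.py migrate --noinput
```

You can chain steps with `&&`, e.g. `python manage.py migrate --noinput && python manage.py collectstatic --noinput`.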
Database Backups
Dokploy doesn't have built-in backup scheduling (yet). Set it up yourself — it takes 10 minutes and will save you someday.
Manual Backup Commands
```bash
docker exec dokploy-main-db pg_dump -U myapp myapp > backup-$(date +%Y%m%d-%H%M%S).sql
```
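If you're running the MySQL/MariaDB service from earlier instead, the equivalent is `mysqldump` (the container name is an assumption; check `docker ps`):

```bash
docker exec dokploy-mysql-db mysqldump -u myapp -pyourpassword myapp > backup-$(date +%Y%m%d-%H%M%S).sql
```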
Automated Backup Script

Create `/opt/dokploy-backups/backup.sh`:

```bash
#!/bin/bash
set -e

BACKUP_DIR="/opt/dokploy-backups"
RETENTION_DAYS=14
DATE=$(date +%Y%m%d-%H%M%S)

# Create backup directory
mkdir -p "$BACKUP_DIR/daily"

# PostgreSQL backup
echo "Backing up PostgreSQL..."
docker exec dokploy-main-db pg_dump -U myapp myapp | gzip > "$BACKUP_DIR/daily/postgres-$DATE.sql.gz"

# Redis backup
echo "Backing up Redis..."
docker exec dokploy-cache redis-cli -a yourpassword BGSAVE
sleep 5  # Wait for the background save to complete
docker cp dokploy-cache:/data/dump.rdb "$BACKUP_DIR/daily/redis-$DATE.rdb"

# Clean up old backups
echo "Cleaning up backups older than $RETENTION_DAYS days..."
find "$BACKUP_DIR/daily" -type f -mtime +$RETENTION_DAYS -delete

echo "Backup complete!"
```
Schedule with Cron

```bash
crontab -e

# Add this line for a daily backup at 3 AM:
0 3 * * * /opt/dokploy-backups/backup.sh >> /var/log/dokploy-backup.log 2>&1
```
Offsite Backups with Restic

Local backups aren't enough: if the server dies, so do your backups. Use Restic to sync to S3-compatible storage.

```bash
# After the local backup completes
echo "Syncing to offsite storage..."
restic -r s3:s3.us-east-1.amazonaws.com/your-bucket backup "$BACKUP_DIR/daily"
restic -r s3:s3.us-east-1.amazonaws.com/your-bucket forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
```
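Restic needs a one-time repository init and credentials in the environment before these commands will work. A sketch, assuming S3; the keys and password below are placeholders you'd export at the top of the backup script or from a root-only env file:

```bash
# One-time setup — keep these values out of version control
export AWS_ACCESS_KEY_ID=your-access-key
export AWS_SECRET_ACCESS_KEY=your-secret-key
export RESTIC_PASSWORD=your-repository-password  # encrypts the repo; losing it means losing the backups
restic -r s3:s3.us-east-1.amazonaws.com/your-bucket init
```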
Restoring from Backup

```bash
# Stop your application first to prevent writes
# Then restore:
gunzip -c backup-20240115-030000.sql.gz | docker exec -i dokploy-main-db psql -U myapp myapp

# Or drop and recreate for a clean restore:
docker exec dokploy-main-db psql -U postgres -c "DROP DATABASE myapp;"
docker exec dokploy-main-db psql -U postgres -c "CREATE DATABASE myapp OWNER myapp;"
gunzip -c backup-20240115-030000.sql.gz | docker exec -i dokploy-main-db psql -U myapp myapp
```
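Restoring Redis is a matter of putting the dump file back before the server starts. A sketch, reusing the container and file names from the backup script above:

```bash
# Stop Redis, swap in the dump, then start so it loads the restored data
docker stop dokploy-cache
docker cp /opt/dokploy-backups/daily/redis-20240115-030000.rdb dokploy-cache:/data/dump.rdb
docker start dokploy-cache
```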
Performance Tuning

For a 4GB RAM server, these PostgreSQL settings are a sensible starting point. Apply them via environment variables or the config file, depending on what your image supports (see the sketch below for applying them directly):

```
POSTGRES_SHARED_BUFFERS=1GB
POSTGRES_EFFECTIVE_CACHE_SIZE=3GB
POSTGRES_WORK_MEM=16MB
POSTGRES_MAINTENANCE_WORK_MEM=256MB
```
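If your Postgres image doesn't translate those environment variables into config, you can set the underlying parameters from inside the container. A sketch, reusing the container name and credentials from earlier and assuming the `myapp` user has superuser rights (the default for the user the official image creates):

```bash
# These map to shared_buffers, effective_cache_size, work_mem, maintenance_work_mem
docker exec dokploy-main-db psql -U myapp -d myapp -c "ALTER SYSTEM SET shared_buffers = '1GB';"
docker exec dokploy-main-db psql -U myapp -d myapp -c "ALTER SYSTEM SET effective_cache_size = '3GB';"
docker exec dokploy-main-db psql -U myapp -d myapp -c "ALTER SYSTEM SET work_mem = '16MB';"
docker exec dokploy-main-db psql -U myapp -d myapp -c "ALTER SYSTEM SET maintenance_work_mem = '256MB';"
docker restart dokploy-main-db  # shared_buffers only takes effect after a restart
```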
Monitoring Database Health

Quick health checks to add to your monitoring:

```bash
# PostgreSQL
docker exec dokploy-main-db pg_isready -U myapp

# MySQL
docker exec dokploy-mysql-db mysqladmin -u myapp -pyourpassword ping

# Redis
docker exec dokploy-cache redis-cli -a yourpassword ping
```

Part 5 covers integrating these with Prometheus and Grafana for proper observability.
Common Issues
"Connection refused" errors
- Is the database container running? Check `docker ps`
- Are you using the correct hostname? Use the service name, not `localhost`
- Is the port correct? PostgreSQL=5432, MySQL=3306, Redis=6379

"Authentication failed"

- Double-check the password in your environment variables
- Ensure there's no trailing whitespace in your env values
- For PostgreSQL, verify the username matches the one you created for the database

"Database does not exist"

- The database name in your connection string must match what you created
- For PostgreSQL: `docker exec dokploy-main-db psql -U postgres -c "\l"` to list databases

Slow queries

- Check that you have indexes on frequently queried columns
- For PostgreSQL: `EXPLAIN ANALYZE your_query` to see the query plan (see the sketch below)
- Consider connection pooling for high-traffic apps (PgBouncer)
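As a concrete sketch (the table and column names here are made up; substitute your own schema):

```bash
# Look for "Seq Scan" in the plan; if that column is filtered often, add an index
docker exec dokploy-main-db psql -U myapp -d myapp -c "EXPLAIN ANALYZE SELECT * FROM orders WHERE user_id = 42;"
docker exec dokploy-main-db psql -U myapp -d myapp -c "CREATE INDEX CONCURRENTLY idx_orders_user_id ON orders (user_id);"
```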
What's Next
Your data layer is solid — persistent storage, automated backups, tested restores. Part 4 adds deployment automation:
CI/CD & Git Workflows
Branch-based deployments, staging environments, GitHub/GitLab webhooks, and rollback strategies.
