Part 1 got you a working Dokploy instance. Now let's deploy applications the way you'd actually run them in production — with proper Dockerfiles, environment handling, and build optimization.
This guide covers four common stacks. Each section is standalone — jump to what you need.
Next.js (App Router)
Next.js needs a multi-stage build to keep the production image small, plus standalone output so the runtime image can ship without the full node_modules tree.
Dockerfile
# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
# Install dependencies first (better caching)
COPY package.json package-lock.json* ./
RUN npm ci
# Copy source and build
COPY . .
# Required for standalone output
ENV NEXT_TELEMETRY_DISABLED=1
RUN npm run build
# Production stage
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1
# Create non-root user
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# Copy built assets
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME="0.0.0.0"
CMD ["node", "server.js"]next.config.js
Enable standalone output — this is required for the Dockerfile above:
/** @type {import('next').NextConfig} */
const nextConfig = {
output: 'standalone',
}
module.exports = nextConfig
Environment Variables in Dokploy
In your application settings → Environment tab:
# Runtime variables (available in API routes and server components)
DATABASE_URL=postgresql://user:pass@dokploy-postgres:5432/mydb
NEXTAUTH_SECRET=your-secret-here
NEXTAUTH_URL=https://app.yourdomain.com
# Build-time variables (prefix with NEXT_PUBLIC_ for client access)
NEXT_PUBLIC_API_URL=https://api.yourdomain.com
Variables without the NEXT_PUBLIC_ prefix are only available server-side. Variables with the prefix are baked into the client bundle at build time, so they must be set before the build runs.
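A quick illustration of the difference (both file paths are hypothetical):
// app/api/status/route.ts — server-side, reads the runtime value on each request
export async function GET() {
  return Response.json({ databaseConfigured: Boolean(process.env.DATABASE_URL) })
}

// components/ApiLink.tsx — client component, the NEXT_PUBLIC_ value is inlined at build time
'use client'
export function ApiLink() {
  return <a href={process.env.NEXT_PUBLIC_API_URL}>API</a>
}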
Dokploy Settings
- Build Command: Leave empty (Dockerfile handles it)
- Port: 3000
- Health Check Path: /api/health
Quick Health Check Endpoint
Create app/api/health/route.ts:
export async function GET() {
return Response.json({ status: 'ok' })
}
Laravel (with Queue Workers)
Laravel deployments need PHP-FPM, Nginx, and typically a queue worker. We'll use a single container with Supervisor to manage both processes.
Dockerfile
FROM php:8.3-fpm-alpine
# Install system dependencies
RUN apk add --no-cache \
nginx \
supervisor \
libpq-dev \
libzip-dev \
zip \
unzip \
git
# Install PHP extensions (add the build toolchain only for the compile step)
RUN apk add --no-cache --virtual .build-deps $PHPIZE_DEPS \
    && docker-php-ext-install pdo pdo_pgsql pdo_mysql zip opcache \
    && apk del .build-deps
# Install Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
WORKDIR /var/www/html
# Copy composer files first (better caching)
COPY composer.json composer.lock ./
RUN composer install --no-dev --no-scripts --no-autoloader
# Copy application
COPY . .
# Generate autoloader and run scripts
RUN composer dump-autoload --optimize
# Cache routes and views at build time. Run `php artisan config:cache` at
# container start or right after deploy instead: env vars are not available
# during the image build and would be cached as empty values.
RUN php artisan route:cache
RUN php artisan view:cache
# Set permissions
RUN chown -R www-data:www-data /var/www/html/storage /var/www/html/bootstrap/cache
# Copy config files
COPY docker/nginx.conf /etc/nginx/nginx.conf
COPY docker/supervisord.conf /etc/supervisord.conf
COPY docker/php.ini /usr/local/etc/php/conf.d/custom.ini
EXPOSE 80
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]docker/nginx.conf
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
sendfile on;
keepalive_timeout 65;
server {
listen 80;
server_name _;
root /var/www/html/public;
index index.php;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location ~ \.php$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
location ~ /\.(?!well-known).* {
deny all;
}
}
}
docker/supervisord.conf
[supervisord]
nodaemon=true
logfile=/var/log/supervisord.log
pidfile=/var/run/supervisord.pid
[program:php-fpm]
command=/usr/local/sbin/php-fpm -F
autostart=true
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
autostart=true
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
; --max-time restarts each worker after an hour so it releases memory and picks up newly deployed code
command=php /var/www/html/artisan queue:work --sleep=3 --tries=3 --max-time=3600
autostart=true
autorestart=true
numprocs=2
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
Environment Variables
APP_NAME=MyApp
APP_ENV=production
APP_KEY=base64:your-key-here
APP_DEBUG=false
APP_URL=https://app.yourdomain.com
DB_CONNECTION=pgsql
DB_HOST=dokploy-postgres
DB_PORT=5432
DB_DATABASE=laravel
DB_USERNAME=laravel
DB_PASSWORD=your-password
CACHE_DRIVER=redis
QUEUE_CONNECTION=redis
SESSION_DRIVER=redis
REDIS_HOST=dokploy-redis
REDIS_PORT=6379
Dokploy Settings
- Port: 80
- Health Check Path: /up (Laravel 11+) or create a /health route
Running Migrations
After deployment, run migrations via Dokploy's terminal or SSH:
docker exec -it <container-id> php artisan migrate --force
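If you're working over SSH, you can list the container first (assuming your application's name appears in the container name, which Dokploy-generated names typically do):
docker ps --filter "name=<your-app-name>"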
Django (with Celery)
Django with Celery follows a similar pattern — Gunicorn for the web process, Celery for background tasks, both managed by Supervisor.
Dockerfile
FROM python:3.12-slim
# Install system dependencies
RUN apt-get update && apt-get install -y \
supervisor \
libpq-dev \
gcc \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application
COPY . .
# Collect static files (if your settings require env vars such as SECRET_KEY,
# pass placeholder values for this build-only step)
RUN python manage.py collectstatic --noinput
# Copy supervisor config
COPY docker/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# Create non-root user
RUN useradd -m appuser && chown -R appuser:appuser /app
USER appuser
EXPOSE 8000
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]requirements.txt
Django>=5.0
gunicorn
psycopg2-binary
celery[redis]
redis
whitenoise
docker/supervisord.conf
[supervisord]
nodaemon=true
logfile=/tmp/supervisord.log
pidfile=/tmp/supervisord.pid
[program:gunicorn]
command=gunicorn myproject.wsgi:application --bind 0.0.0.0:8000 --workers 3
directory=/app
autostart=true
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:celery]
command=celery -A myproject worker --loglevel=info --concurrency=2
directory=/app
autostart=true
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
settings.py (Production Additions)
import os
DEBUG = False
ALLOWED_HOSTS = os.environ.get('ALLOWED_HOSTS', '').split(',')
# Database
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': os.environ.get('DB_NAME', 'django'),
'USER': os.environ.get('DB_USER', 'django'),
'PASSWORD': os.environ.get('DB_PASSWORD', ''),
'HOST': os.environ.get('DB_HOST', 'localhost'),
'PORT': os.environ.get('DB_PORT', '5432'),
}
}
# Static files with WhiteNoise
STATIC_ROOT = '/app/staticfiles'
# Django 5.1 removed STATICFILES_STORAGE; configure WhiteNoise via STORAGES instead
STORAGES = {
    'default': {'BACKEND': 'django.core.files.storage.FileSystemStorage'},
    'staticfiles': {'BACKEND': 'whitenoise.storage.CompressedManifestStaticFilesStorage'},
}
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'whitenoise.middleware.WhiteNoiseMiddleware', # Add after SecurityMiddleware
# ... rest of middleware
]
# Celery
CELERY_BROKER_URL = os.environ.get('REDIS_URL', 'redis://localhost:6379/0')
CELERY_RESULT_BACKEND = CELERY_BROKER_URL
Environment Variables
DJANGO_SETTINGS_MODULE=myproject.settings
SECRET_KEY=your-secret-key
ALLOWED_HOSTS=app.yourdomain.com,www.yourdomain.com
DB_HOST=dokploy-postgres
DB_PORT=5432
DB_NAME=django
DB_USER=django
DB_PASSWORD=your-password
REDIS_URL=redis://dokploy-redis:6379/0
Dokploy Settings
- Port: 8000
- Health Check Path: /health/ (create a simple view)
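Running Migrations
As with Laravel, run migrations after the first deploy via Dokploy's terminal or SSH:
docker exec -it <container-id> python manage.py migrate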
Static Sites (Vite, Astro, Hugo)
Static sites are the simplest — build once, serve with Nginx. The key is a proper multi-stage build.
# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json* ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM nginx:alpine
# Vite and Astro output to dist/ by default; Hugo builds to public/, so adjust this path (and the npm build stage) to match your generator
COPY --from=builder /app/dist /usr/share/nginx/html
COPY docker/nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]docker/nginx.conf (for all static sites)
server {
listen 80;
server_name _;
root /usr/share/nginx/html;
index index.html;
# Gzip compression
gzip on;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml;
gzip_min_length 1000;
# Cache static assets
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2)$ {
expires 1y;
add_header Cache-Control "public, immutable";
}
# SPA routing - serve index.html for all routes
location / {
try_files $uri $uri/ /index.html;
}
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
}
Build-Time Environment Variables
For static sites, environment variables must be available at build time. In Dokploy:
- Go to Environment tab
- Add your variables
- Check "Available at build time" for each
VITE_API_URL=https://api.yourdomain.com
VITE_APP_NAME=MyApp
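Vite inlines these values wherever import.meta.env is referenced, so a variable missing at build time ships as undefined. A minimal sketch (module path is hypothetical):
// src/lib/api.ts — VITE_ variables are replaced with literal strings during `npm run build`
export const API_BASE = import.meta.env.VITE_API_URL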
Build Caching Tips
Docker layer caching dramatically speeds up rebuilds. The key principle: copy dependency files before source code.
✓ Good Pattern
# Dependencies change less often than source
COPY package.json package-lock.json ./
RUN npm ci
# Source changes frequently
COPY . .
RUN npm run build
✗ Bad Pattern
# Every code change invalidates npm install
COPY . .
RUN npm ci
RUN npm run build
Dokploy Build Cache
Dokploy preserves Docker build cache between deployments by default. If builds are slow:
- Check your Dockerfile layer order
- Use .dockerignore to exclude unnecessary files
- Consider using BuildKit cache mounts for package managers:
RUN --mount=type=cache,target=/root/.npm npm ci
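The same approach works for the other stacks in this guide. A sketch, assuming BuildKit is enabled, default cache locations, and dropping pip's --no-cache-dir flag:
# Python / pip
RUN --mount=type=cache,target=/root/.cache/pip pip install -r requirements.txt
# PHP / Composer (point Composer's cache at the mounted directory)
ENV COMPOSER_CACHE_DIR=/tmp/composer-cache
RUN --mount=type=cache,target=/tmp/composer-cache composer install --no-dev --no-scripts --no-autoloader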
.dockerignore Template
Every project should have one:
node_modules
.git
.gitignore
*.md
.env*
.DS_Store
Dockerfile
docker-compose*.yml
.dockerignore
# Framework-specific
.next
.nuxt
dist
build
coverage
Quick Debugging
Build fails?
# Check build logs in Dokploy dashboard
# Or manually test locally:
docker build -t test-build .
Container crashes on start?
# View container logs
docker logs <container-id>
# Shell into a running container
docker exec -it <container-id> /bin/sh
App runs but returns 502?
- Check that the port in Dokploy matches what your app listens on
- Verify the app binds to 0.0.0.0, not 127.0.0.1
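If both check out, confirm the app answers from inside the container. A quick sketch, assuming the image ships wget (swap in curl if that's what you have) and the app listens on port 3000; adjust to your stack:
docker exec -it <container-id> wget -qO- http://127.0.0.1:3000/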
What's Next
Your apps are deployed with production-ready configurations. Part 3 adds persistent data:
Database Management
Set up PostgreSQL, MySQL, and Redis. Learn connection strings, migrations, automated backups, and disaster recovery.
