What Is Docker Compose?
Docker Compose is a tool for defining and running multi-container Docker applications. Instead of managing each container individually with long docker run commands, you describe your entire application stack in a single docker-compose.yml (or compose.yaml) file. With one command — docker compose up — you can start all your services, networks, and volumes together.
Docker Compose is essential for local development environments, CI/CD pipelines, and staging servers. It eliminates the "works on my machine" problem by ensuring every developer runs the exact same stack. This tutorial covers everything from basic syntax to production-ready configurations with real-world examples.
Generate docker-compose.yml files visually with our Docker Compose Generator.
Docker Compose File Structure
A docker-compose.yml file uses YAML syntax and consists of several top-level keys. The most important are services, networks, and volumes.
# docker-compose.yml basic structure
version: "3.9" # optional in newer Docker Compose

services:
  web:                # service name
    image: nginx:latest
    ports:
      - "8080:80"
  api:
    build: ./backend
    ports:
      - "3000:3000"
    depends_on:
      - db
  db:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:            # named volume declaration

networks:
  default:            # customize the default network
    driver: bridge

Services: Defining Your Containers
Each entry under services defines a container. Services can use a pre-built image or build from a Dockerfile. Here are the most important service configuration options:
Using an Image
services:
  redis:
    image: redis:7-alpine              # official image with tag
  postgres:
    image: postgres:16                 # specific version
  app:
    image: myregistry.com/myapp:latest # custom registry

Building from Dockerfile
services:
  api:
    # Simple build from the current directory
    build: .
  frontend:
    # Build with a custom Dockerfile and context
    build:
      context: ./frontend
      dockerfile: Dockerfile.prod
      args:
        NODE_ENV: production
        API_URL: https://api.example.com
      target: production # multi-stage build target
  backend:
    # Build and also tag the image
    build:
      context: ./backend
      dockerfile: Dockerfile
    image: myapp-backend:latest # tag for the built image

Ports: Exposing Services
The ports key maps container ports to host ports, making services accessible from outside Docker.
services:
  web:
    image: nginx
    ports:
      - "8080:80" # HOST:CONTAINER
      - "443:443" # HTTPS
  api:
    build: .
    ports:
      - "3000:3000"           # same port on host and container
      - "127.0.0.1:9229:9229" # bind to localhost only (debugging)
  db:
    image: postgres:16
    ports:
      - "5432:5432" # publish the database to the host
    # Or use 'expose' instead for inter-container access only (no host binding):
    # expose:
    #   - "5432"

Environment Variables
Environment variables configure your services without hardcoding values. Docker Compose supports inline variables, .env files, and external env files.
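The `${VAR:-default}` substitution form shown in the example below follows POSIX shell parameter-expansion rules, so you can preview what Compose will substitute directly in any shell:

```shell
# ${VAR:-default} expands to the default when VAR is unset or empty
unset NODE_ENV
echo "NODE_ENV=${NODE_ENV:-development}"   # prints NODE_ENV=development

NODE_ENV=production
echo "NODE_ENV=${NODE_ENV:-development}"   # prints NODE_ENV=production
```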
services:
  api:
    build: .
    # Method 1: Inline key-value pairs
    environment:
      NODE_ENV: production
      DATABASE_URL: postgres://user:pass@db:5432/mydb
      REDIS_URL: redis://redis:6379
      JWT_SECRET: my-super-secret-key
    # Method 2: Load from env files (later files override earlier ones)
    env_file:
      - .env
      - .env.local # override with local values
    # Method 3: Pass host environment variables (list syntax).
    # Note: a service may have only one 'environment' key, so this
    # form cannot be combined with Method 1 on the same service:
    # environment:
    #   - API_KEY                           # passes $API_KEY from the host
    #   - NODE_ENV=${NODE_ENV:-development} # with a default value

# .env file example
POSTGRES_USER=myapp
POSTGRES_PASSWORD=secretpassword123
POSTGRES_DB=myapp_production
REDIS_PASSWORD=redispass
JWT_SECRET=change-this-in-production

Volumes: Persist Data and Share Files
Volumes ensure data survives container restarts and allow sharing files between the host and containers. There are two types: named volumes (managed by Docker) and bind mounts (direct host path mapping).
services:
  db:
    image: postgres:16
    volumes:
      # Named volume (persistent, Docker-managed)
      - db-data:/var/lib/postgresql/data
  api:
    build: .
    volumes:
      # Bind mounts (for development - live code reload)
      - ./src:/app/src
      - ./package.json:/app/package.json
      # Anonymous volume (prevents the host mount from hiding container files)
      - /app/node_modules
      # Read-only bind mount
      - ./config/nginx.conf:/etc/nginx/nginx.conf:ro

# Declare named volumes at the top level
volumes:
  db-data:
    driver: local
  redis-data:
    external: true # external volume (must already exist)

Networks: Service Communication
Docker Compose creates a default network where all services can communicate using their service name as the hostname. You can also define custom networks for isolation.
services:
  frontend:
    build: ./frontend
    networks:
      - frontend-net
  api:
    build: ./backend
    networks:
      - frontend-net # reachable by frontend
      - backend-net  # reachable by the database
  db:
    image: postgres:16
    networks:
      - backend-net  # NOT reachable by frontend (isolated)

networks:
  frontend-net:
    driver: bridge
  backend-net:
    driver: bridge
    internal: true   # internal network (no external access)

depends_on: Service Startup Order
The depends_on key controls startup order. By default it only waits for the container to start, not for the service to be ready. Use health checks for readiness-based ordering.
services:
  api:
    build: .
    depends_on:
      # Short form: wait only for the containers to start
      - db
      - redis
  worker:
    build: .
    depends_on:
      # Long form with conditions: wait until services are healthy
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
      api:
        condition: service_started
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
      start_period: 10s
  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5

Healthcheck: Monitor Service Health
Health checks tell Docker whether a container is functioning correctly. Docker periodically runs the specified command and marks the container as healthy, unhealthy, or starting.
services:
  api:
    build: .
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s     # check every 30 seconds
      timeout: 10s      # fail if no response within 10s
      retries: 3        # mark unhealthy after 3 consecutive failures
      start_period: 40s # grace period during startup
  db:
    image: mysql:8
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5

Restart Policies
services:
  api:
    build: .
    restart: always         # always restart; comes back on daemon restart even if manually stopped
  worker:
    build: .
    restart: on-failure     # restart only on a non-zero exit code
  cron:
    image: alpine
    restart: unless-stopped # like always, but stays down after a manual stop
  migration:
    build: .
    restart: "no"           # never restart (run once)

Resource Limits
services:
  api:
    build: .
    deploy:
      resources:
        limits:
          cpus: "0.50" # max 50% of one CPU
          memory: 512M # max 512MB RAM
        reservations:
          cpus: "0.25" # reserve 25% of one CPU
          memory: 256M # reserve 256MB RAM

Real-World Example 1: Node.js + PostgreSQL + Redis
A typical modern web application stack with a Node.js API, PostgreSQL database, and Redis cache:
# docker-compose.yml - Node.js + PostgreSQL + Redis
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      NODE_ENV: development
      DATABASE_URL: postgres://appuser:apppass@db:5432/myapp
      REDIS_URL: redis://:redispass@redis:6379 # password matches --requirepass below
      JWT_SECRET: dev-secret-change-in-prod
    volumes:
      - ./src:/app/src    # live reload
      - /app/node_modules # prevent overwrite
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: unless-stopped
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: apppass
      POSTGRES_DB: myapp
    ports:
      - "5432:5432"
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U appuser -d myapp"]
      interval: 5s
      timeout: 5s
      retries: 5
    restart: unless-stopped
  redis:
    image: redis:7-alpine
    command: redis-server --requirepass redispass
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "redispass", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
    restart: unless-stopped

volumes:
  postgres-data:
  redis-data:

Real-World Example 2: Python Flask + Redis + Celery
A Python application with an API server, Redis message broker, and Celery worker for background tasks:
# docker-compose.yml - Python + Redis + Celery
services:
  web:
    build: .
    command: gunicorn --bind 0.0.0.0:5000 "app:create_app()"
    ports:
      - "5000:5000"
    environment:
      FLASK_ENV: development
      CELERY_BROKER_URL: redis://redis:6379/0
      CELERY_RESULT_BACKEND: redis://redis:6379/0
    volumes:
      - .:/app
    depends_on:
      redis:
        condition: service_healthy
    restart: unless-stopped
  worker:
    build: .
    command: celery -A app.celery worker --loglevel=info
    environment:
      CELERY_BROKER_URL: redis://redis:6379/0
      CELERY_RESULT_BACKEND: redis://redis:6379/0
    volumes:
      - .:/app
    depends_on:
      redis:
        condition: service_healthy
    restart: unless-stopped
  beat:
    build: .
    command: celery -A app.celery beat --loglevel=info
    environment:
      CELERY_BROKER_URL: redis://redis:6379/0
    volumes:
      - .:/app
    depends_on:
      redis:
        condition: service_healthy
    restart: unless-stopped
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
    restart: unless-stopped

Real-World Example 3: WordPress + MySQL + phpMyAdmin
A complete WordPress development environment with MySQL database and phpMyAdmin for database management:
# docker-compose.yml - WordPress + MySQL + phpMyAdmin
services:
  wordpress:
    image: wordpress:latest
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: mysql:3306
      WORDPRESS_DB_USER: wpuser
      WORDPRESS_DB_PASSWORD: wppass
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wp-content:/var/www/html/wp-content
      - ./themes:/var/www/html/wp-content/themes
      - ./plugins:/var/www/html/wp-content/plugins
    depends_on:
      mysql:
        condition: service_healthy
    restart: unless-stopped
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: rootpass
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wpuser
      MYSQL_PASSWORD: wppass
    volumes:
      - mysql-data:/var/lib/mysql
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped
  phpmyadmin:
    image: phpmyadmin:latest
    ports:
      - "8081:80"
    environment:
      PMA_HOST: mysql
      PMA_USER: root
      PMA_PASSWORD: rootpass
    depends_on:
      - mysql
    restart: unless-stopped

volumes:
  wp-content:
  mysql-data:

Essential Docker Compose Commands
# Start all services (foreground)
docker compose up
# Start in detached mode (background)
docker compose up -d
# Start specific services only
docker compose up -d api db
# Rebuild images before starting
docker compose up -d --build
# Stop all services
docker compose down
# Stop and remove volumes (DESTROYS DATA)
docker compose down -v
# Stop and remove images
docker compose down --rmi all
# View running containers
docker compose ps
# View logs
docker compose logs
docker compose logs -f api # follow specific service
docker compose logs --tail 100 api # last 100 lines
# Execute command in running container
docker compose exec api sh
docker compose exec db psql -U appuser -d myapp
# Run a one-off command
docker compose run --rm api npm test
# Scale a service
docker compose up -d --scale worker=3
# Pull latest images
docker compose pull
# View processes running inside containers
docker compose top
# (for live CPU/memory usage, use: docker stats)

Multi-Stage Builds with Compose
Multi-stage Dockerfiles reduce image size by separating build dependencies from runtime. Use the target key in Compose to select which stage to build:
# Dockerfile (multi-stage)
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Production
FROM node:20-alpine AS production
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "dist/server.js"]

# docker-compose.yml
services:
  api-dev:
    build:
      context: .
      target: builder    # use the build stage for development
    volumes:
      - ./src:/app/src
    command: npm run dev
  api-prod:
    build:
      context: .
      target: production # use the production stage
    ports:
      - "3000:3000"

Docker Compose Best Practices
- Use .env files for secrets: Never hardcode passwords or API keys in docker-compose.yml. Put them in .env files and add those to .gitignore.
- Pin image versions: Use specific tags like postgres:16-alpine instead of postgres:latest to ensure reproducible builds.
- Use health checks: Always add health checks to databases and critical services, and pair depends_on with condition: service_healthy for reliable startup ordering.
- Named volumes for data: Always use named volumes for database storage. Bind mounts are for development source code only.
- Use Alpine images: Prefer -alpine image variants to minimize image size and attack surface.
- Separate dev and prod configs: Use docker-compose.override.yml for development-specific settings that merge automatically with the base file.
- Set resource limits: Use deploy.resources to prevent a runaway container from consuming all system resources.
- Use restart policies: Set restart: unless-stopped on production services so they recover from crashes.
Frequently Asked Questions
What is the difference between docker-compose and docker compose?
docker-compose (with hyphen) is the older standalone Python tool (V1). docker compose (with space) is the newer Go-based plugin integrated into the Docker CLI (V2). V2 is now the default and recommended version; the syntax and features are nearly identical, but V2 is faster and better maintained. For new projects, always use docker compose (V2).
How do I access one service from another?
Services on the same Docker network can reach each other using the service name as the hostname. For example, if your docker-compose.yml has a service named db, your API can connect to it at db:5432. Docker's built-in DNS resolves service names to container IP addresses automatically.
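As a minimal illustration (the service and credential names here are made up), the API below reaches PostgreSQL simply by using the service name db as the hostname:

```yaml
services:
  api:
    build: .
    environment:
      # "db" resolves to the postgres container via Docker's internal DNS
      DATABASE_URL: postgres://appuser:apppass@db:5432/myapp
  db:
    image: postgres:16
```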
How do I persist database data?
Use named volumes. For PostgreSQL: volumes: ["pgdata:/var/lib/postgresql/data"]. Declare the volume at the top level. Named volumes survive docker compose down but are destroyed by docker compose down -v. Never use bind mounts for database data in production.
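Spelled out as a full fragment (the volume name is illustrative):

```yaml
services:
  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata: # survives 'docker compose down'; removed only by 'down -v'
```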
How do I run database migrations with Compose?
Use a one-off run command: docker compose run --rm api npm run migrate. Or create a separate migration service with restart: "no" that runs once and exits. You can also use docker-entrypoint-initdb.d/ for initial SQL scripts in PostgreSQL and MySQL containers.
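A sketch of the separate-service approach (the migrate script name is an assumption about your project):

```yaml
services:
  migrate:
    build: .
    command: npm run migrate # hypothetical migration script
    restart: "no"            # run once and exit
    depends_on:
      db:
        condition: service_healthy
```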
How do I use different configs for dev and production?
Use override files. Docker Compose automatically loads docker-compose.yml and docker-compose.override.yml. Put production config in the base file and development overrides (bind mounts, debug ports) in the override file. For explicit control, use docker compose -f docker-compose.yml -f docker-compose.prod.yml up.
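A sketch of the split (paths and ports are illustrative):

```yaml
# docker-compose.yml (base, production-ready)
services:
  api:
    build: .
    restart: unless-stopped

# docker-compose.override.yml (dev-only, merged automatically by 'docker compose up')
services:
  api:
    volumes:
      - ./src:/app/src # live reload
    ports:
      - "9229:9229"    # debug port
```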
How do I rebuild a single service without stopping others?
Run docker compose up -d --build api to rebuild and restart only the "api" service. Other running services are not affected. Add --no-deps to skip restarting dependent services.