TL;DR
Docker packages your application and dependencies into portable containers. Master docker run/build/exec/logs for daily use, write efficient Dockerfiles with multi-stage builds to minimize image size, use Docker Compose for multi-container dev environments, and apply security hardening (non-root user, read-only filesystem, secret management) before going to production. Use our Docker Compose generator to scaffold compose files instantly.
Core Docker Commands – pull, run, stop, rm, ps and Key Flags
These are the commands you will use every single day. Understanding their flags is the foundation of working productively with Docker.
# Pull an image from a registry (Docker Hub by default)
docker pull nginx:latest
docker pull node:20-alpine # alpine = minimal base image
# Run a container
docker run nginx # foreground, blocking
docker run -d nginx # -d = detached (background)
docker run -d -p 8080:80 nginx # -p HOST:CONTAINER port mapping
docker run -d -p 8080:80 --name web nginx # --name for easy reference
# Common flags:
# -d run in background (detached)
# -p 8080:80 map host port 8080 → container port 80
# -v ./data:/data mount host directory into container
# -e KEY=VAL set environment variable
# --name assign a name
# --rm auto-remove container when it exits
# --network connect to a specific network
# -it interactive + TTY (for shells)
# Container lifecycle
docker ps # list running containers
docker ps -a # list all containers (including stopped)
docker stop web # graceful stop (SIGTERM → SIGKILL after 10s)
docker kill web # immediate stop (SIGKILL)
docker start web # restart stopped container
docker restart web # stop + start
docker rm web # remove stopped container
docker rm -f web # force remove running container
# Images
docker images # list local images
docker rmi nginx:latest # remove image
docker image prune # remove all dangling images
docker system prune -a # remove all unused images, stopped containers, and networks (add --volumes to include volumes)
# Quick one-liners
docker run --rm -it ubuntu:22.04 bash # throwaway interactive container
docker run --rm -v $(pwd):/work -w /work node:20 npm test # run tests in container
Use --rm for short-lived containers (running a command, testing, building) to avoid accumulating stopped containers. Use named containers (--name) for services you start and stop repeatedly.
Docker Images – Dockerfile Instructions Explained
A Dockerfile is a recipe for building an image. Each instruction creates a new layer. Layers are cached, so order matters: put rarely-changing instructions early (base image, system deps) and frequently-changing instructions late (application code).
# syntax=docker/dockerfile:1
# FROM – base image (always first)
FROM node:20-alpine AS base
# LABEL – metadata (optional but good practice)
LABEL org.opencontainers.image.source="https://github.com/org/repo"
# ENV – environment variables (available at build and runtime)
ENV NODE_ENV=production
ENV PORT=3000
# ARG – build-time variables only (not available at runtime)
ARG BUILD_DATE
ARG GIT_COMMIT
# WORKDIR – set working directory (creates it if not exists)
WORKDIR /app
# COPY – copy files from build context into image
COPY package.json package-lock.json ./ # copy dependency files first
RUN npm ci --only=production # install (cached until package.json changes)
COPY . . # copy source code (invalidates cache on any change)
# RUN – execute commands during build
RUN npm run build && rm -rf src node_modules/.cache # chain commands to reduce layers
# EXPOSE – document which port the app uses (informational, doesn't publish)
EXPOSE 3000
# USER – switch to non-root user for security
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
# VOLUME – declare mount points for external volumes
VOLUME ["/app/data"]
# HEALTHCHECK – Docker health check command
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 CMD wget -qO- http://localhost:3000/health || exit 1
# ENTRYPOINT – main executable (hard to override)
ENTRYPOINT ["node"]
# CMD – default arguments to ENTRYPOINT (easy to override)
CMD ["dist/server.js"]
# Build the image
docker build -t myapp:latest .
docker build -t myapp:1.0.0 -f Dockerfile.prod . # custom Dockerfile
docker build --build-arg BUILD_DATE=$(date -u +%Y-%m-%dT%H:%M:%SZ) -t myapp .
# Tag for registry
docker tag myapp:latest registry.example.com/org/myapp:1.0.0
# Push
docker push registry.example.com/org/myapp:1.0.0
Multi-stage Builds – Minimize Image Size, Builder Pattern, .dockerignore
Multi-stage builds are the single most impactful optimization for Docker images. Compile or bundle your code in a fat builder image, then copy only the runtime artifacts into a minimal final image.
Node.js Multi-stage Example
# Stage 1: Install all dependencies and build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci # install dev + prod deps
COPY . .
RUN npm run build # compile TypeScript, bundle assets
# Stage 2: Production runtime (minimal)
FROM node:20-alpine AS runner
ENV NODE_ENV=production
WORKDIR /app
# Copy only what's needed for runtime
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
RUN adduser -D appuser && chown -R appuser /app
USER appuser
EXPOSE 3000
CMD ["node", "dist/index.js"]
# Result: builder = ~400 MB, runner = ~80 MB
Go Multi-stage with scratch (10 MB final image)
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-s -w" -o server ./cmd/server
# Scratch = zero-byte base image (no OS, no shell)
FROM scratch AS runner
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /app/server /server
EXPOSE 8080
ENTRYPOINT ["/server"]
# Alternative: use distroless for a shell-less but slightly larger image
# FROM gcr.io/distroless/base-debian12 AS runner
.dockerignore – Exclude Files from Build Context
# .dockerignore โ add to every Docker project
node_modules
.git
.gitignore
*.md
.env
.env.*
dist
.next
coverage
*.test.ts
*.spec.ts
Dockerfile*
docker-compose*
.dockerignore
A missing .dockerignore can accidentally send hundreds of megabytes (like node_modules or .git) to the Docker daemon as build context, massively slowing every build.
Volumes and Bind Mounts – Persistent Storage, Backup, Restore
Container filesystems are ephemeral – all data is lost when a container is removed. Use volumes or bind mounts to persist data outside the container lifecycle.
# Named volume (managed by Docker, stored in /var/lib/docker/volumes/)
docker volume create pgdata
docker run -d --name postgres \
  -v pgdata:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=secret postgres:16
# Bind mount (maps host path into container)
docker run -d --name app \
  -v $(pwd)/src:/app/src \
  -v $(pwd)/.env:/app/.env \
  node:20 npm run dev # directory + single-file bind mounts for dev with live reload
# Read-only bind mount (security: prevent container from modifying)
docker run -d -v $(pwd)/config:/app/config:ro nginx
# tmpfs mount (in-memory, not persisted โ for secrets, temp data)
docker run -d --tmpfs /app/tmp:rw,size=100m myapp
# Volume management commands
docker volume ls # list volumes
docker volume inspect pgdata # details and mount point
docker volume rm pgdata # remove volume (data loss!)
docker volume prune # remove unused volumes
# Backup a volume
docker run --rm -v pgdata:/source:ro -v $(pwd)/backups:/backup alpine tar czf /backup/pgdata-$(date +%Y%m%d).tar.gz -C /source .
# Restore from backup
docker run --rm -v pgdata:/dest -v $(pwd)/backups:/backup:ro alpine tar xzf /backup/pgdata-20240101.tar.gz -C /dest
Docker Networking – Bridge, Host, Overlay, Container DNS
Docker networking controls how containers communicate with each other and the outside world. Understanding the network drivers and DNS resolution prevents connectivity bugs in multi-container applications.
# List networks
docker network ls
# NETWORK ID   NAME     DRIVER   SCOPE
# 0abc...      bridge   bridge   local    – default network
# 0abc...      host     host     local
# 0abc...      none     null     local
# Create a custom bridge network
docker network create --driver bridge myapp-net
# Run containers on the same network – they resolve each other by name
docker run -d --name postgres --network myapp-net postgres:16
docker run -d --name api --network myapp-net myapi:latest
# The api container can reach postgres at: postgres:5432
# Connect a running container to an additional network
docker network connect myapp-net existing-container
# Disconnect
docker network disconnect myapp-net container-name
# Inspect network (see all connected containers + IP addresses)
docker network inspect myapp-net
# Remove unused networks
docker network prune
# Host network mode – container shares host's network stack (Linux only)
docker run -d --network host nginx
# nginx binds directly to host port 80, no -p mapping needed
# Port publishing options
docker run -d -p 8080:80 nginx # host 8080 → container 80
docker run -d -p 127.0.0.1:8080:80 nginx # bind only to localhost
docker run -d -p 80 nginx # random host port → 80 (see with docker port)
docker port container_name # show all port mappings

| Driver | Use Case | Container DNS |
|---|---|---|
| bridge (default) | Single-host multi-container apps | By name on custom bridge only |
| host | Performance-critical, Linux only | Same as host |
| overlay | Docker Swarm multi-host | By service name across hosts |
| none | Isolated, no network access | None |
Docker Compose – Services, Networks, Volumes, Healthchecks, Override Files
Docker Compose defines multi-container applications in a single docker-compose.yml file. It handles service startup order, shared networks, named volumes, and environment variable management automatically.
# docker-compose.yml
version: "3.9"
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        BUILD_ENV: production
    image: myapp/api:latest
    container_name: myapp-api
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://app:secret@postgres:5432/appdb
    env_file:
      - .env.production # load from .env file
    depends_on:
      postgres:
        condition: service_healthy # wait for postgres healthcheck
      redis:
        condition: service_started
    networks:
      - backend
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:3000/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s
  postgres:
    image: postgres:16-alpine
    container_name: myapp-postgres
    restart: unless-stopped
    environment:
      POSTGRES_DB: appdb
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    networks:
      - backend
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d appdb"]
      interval: 10s
      timeout: 5s
      retries: 5
  redis:
    image: redis:7-alpine
    container_name: myapp-redis
    restart: unless-stopped
    command: redis-server --requirepass secret --maxmemory 256mb --maxmemory-policy allkeys-lru
    volumes:
      - redisdata:/data
    networks:
      - backend
  nginx:
    image: nginx:1.25-alpine
    container_name: myapp-nginx
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ./ssl:/etc/nginx/ssl:ro
    depends_on:
      - api
    networks:
      - backend
volumes:
  pgdata:
  redisdata:
networks:
  backend:
    driver: bridge
# Common Compose commands
docker compose up -d # start all services in background
docker compose up -d api # start only the api service
docker compose down # stop and remove containers + networks
docker compose down -v # also remove volumes (data loss!)
docker compose ps # show service status
docker compose logs -f api # follow logs for api service
docker compose exec api bash # shell into running api container
docker compose build api # rebuild only the api image
docker compose pull # pull latest images
docker compose restart api # restart a service
docker compose up -d --scale api=3 # run 3 replicas (Compose v2; remove container_name to allow replicas)
# Override files for dev vs prod:
# docker-compose.yml โ base config
# docker-compose.override.yml โ auto-merged for local dev
# docker-compose.prod.yml โ production overrides
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
Docker Exec and Debugging – logs, inspect, stats, top, cp
Knowing how to inspect and debug running containers is essential for diagnosing production issues. These commands let you look inside containers without modifying them.
# Shell access
docker exec -it CONTAINER bash # bash shell (use sh for Alpine)
docker exec -it CONTAINER sh # sh shell (always available)
docker exec CONTAINER cat /etc/hosts # run single command
# Logs
docker logs CONTAINER # all logs
docker logs -f CONTAINER # follow (stream) logs
docker logs --tail 100 CONTAINER # last 100 lines
docker logs --since 10m CONTAINER # logs from last 10 minutes
docker logs --since 2024-01-01T00:00:00 CONTAINER
# Inspect – low-level container/image info (JSON)
docker inspect CONTAINER # everything about a container
docker inspect --format '{{.State.Status}}' CONTAINER # specific field
docker inspect --format '{{.NetworkSettings.IPAddress}}' CONTAINER
docker inspect --format '{{range .Mounts}}{{.Source}} → {{.Destination}}{{println}}{{end}}' CONTAINER
# Resource usage
docker stats # live CPU/RAM/network/disk for all containers
docker stats CONTAINER --no-stream # single snapshot (no streaming)
docker top CONTAINER # processes running inside container (like ps aux)
# Copy files between host and container
docker cp CONTAINER:/app/logs/error.log ./error.log # container → host
docker cp ./config.json CONTAINER:/app/config.json # host → container
# Diff – show filesystem changes since container start
docker diff CONTAINER
# A = added, C = changed, D = deleted
# Image history – see each layer size
docker history myimage:latest
docker history --no-trunc myimage:latest # full commands
# Dive tool for interactive layer exploration (if installed)
# dive myimage:latest
For containers that crash immediately on start (non-zero exit code), use docker run --entrypoint sh IMAGE -c "cat /app/crash.log" or override the entrypoint to keep the container alive: docker run -it --entrypoint sh IMAGE.
Registry and Publishing – Docker Hub, GitHub Container Registry (GHCR), Multi-arch Buildx
Docker registries store and distribute images. Docker Hub is the default public registry. GitHub Container Registry (GHCR) is popular for open-source projects integrated with GitHub Actions.
# Authenticate
docker login # Docker Hub (interactive)
echo $TOKEN | docker login --username USER --password-stdin # CI/CD
# Docker Hub tag convention: USERNAME/REPOSITORY:TAG
docker tag myapp:latest johndoe/myapp:1.0.0
docker tag myapp:latest johndoe/myapp:latest
docker push johndoe/myapp:1.0.0
docker push johndoe/myapp:latest
# GitHub Container Registry (GHCR)
echo $GITHUB_TOKEN | docker login ghcr.io --username $GITHUB_ACTOR --password-stdin
docker tag myapp:latest ghcr.io/org/myapp:1.0.0
docker push ghcr.io/org/myapp:1.0.0
# Private registry
docker login registry.example.com
docker tag myapp:latest registry.example.com/team/myapp:1.0.0
docker push registry.example.com/team/myapp:1.0.0
# Multi-architecture builds with Docker Buildx (AMD64 + ARM64)
docker buildx create --use --name multiarch
docker buildx inspect --bootstrap
docker buildx build --platform linux/amd64,linux/arm64 --push -t johndoe/myapp:1.0.0 -t johndoe/myapp:latest .
# GitHub Actions workflow for auto-publish on push
# (See: .github/workflows/docker-publish.yml)
# - uses: docker/setup-qemu-action@v3 # QEMU for ARM emulation
# - uses: docker/setup-buildx-action@v3
# - uses: docker/login-action@v3
# - uses: docker/build-push-action@v5
# with:
# platforms: linux/amd64,linux/arm64
# push: true
# tags: ghcr.io/${{ github.repository }}:latest
Docker Security – Non-root USER, Read-only Filesystem, Secrets, Image Scanning
Containers running as root with writable filesystems are a significant security risk. Apply these hardening practices before deploying to production.
Non-root User
# In Dockerfile: create and switch to non-root user
FROM node:20-alpine
WORKDIR /app
COPY --chown=node:node package*.json ./
RUN npm ci --only=production
COPY --chown=node:node . .
USER node # switch to built-in 'node' user (uid=1000)
# Or create a custom user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
# Verify: container should NOT be running as root
docker exec CONTAINER id
# uid=1000(node) gid=1000(node)
Read-only Filesystem and Capability Dropping
# Read-only root filesystem (container can't write to image layers)
docker run -d --read-only \
  --tmpfs /tmp \
  --tmpfs /app/cache \
  myapp # writable tmpfs for /tmp and cache; everything else read-only
# Drop all Linux capabilities, add only what's needed
docker run -d --cap-drop=ALL --cap-add=NET_BIND_SERVICE myapp
# NET_BIND_SERVICE = allow binding to ports < 1024
# No new privileges – prevent privilege escalation
docker run -d --security-opt no-new-privileges myapp
# Seccomp profile – restrict syscalls
docker run -d --security-opt seccomp=./seccomp-profile.json myapp
Secrets Management (not env vars)
# ❌ Bad: secrets in environment variables (visible in docker inspect)
docker run -e DATABASE_PASSWORD=mysecret myapp
# ✅ Better: Docker secrets (Swarm mode or Compose secrets)
# docker-compose.yml with secrets:
services:
  api:
    image: myapp
    secrets:
      - db_password
    environment:
      - DB_PASSWORD_FILE=/run/secrets/db_password # read from file
secrets:
  db_password:
    file: ./secrets/db_password.txt # or external: true for Swarm
# In the app: read from file
const password = fs.readFileSync(process.env.DB_PASSWORD_FILE, 'utf8').trim();
# BuildKit secrets – for credentials used only during build
docker buildx build --secret id=npm_token,src=$HOME/.npmrc -t myapp .
# In Dockerfile:
RUN --mount=type=secret,id=npm_token,target=/root/.npmrc npm ci
Image Vulnerability Scanning with Trivy
# Install Trivy (macOS)
brew install trivy
# Scan an image for vulnerabilities
trivy image myapp:latest
trivy image --severity HIGH,CRITICAL myapp:latest # only high/critical
# Scan in CI/CD (fail if critical vulnerabilities found)
trivy image --exit-code 1 --severity CRITICAL myapp:latest
# Scan a Dockerfile for misconfigurations
trivy config ./Dockerfile
Production Patterns – Restart Policies, Resource Limits, Health Checks, Rolling Updates
Running Docker in production requires restart policies, resource constraints, health monitoring, and update strategies to minimize downtime.
Restart Policies
# Restart policies:
# no (default)    – never restart
# on-failure      – restart if exit code != 0
# on-failure:5    – restart up to 5 times
# always          – always restart (even after the daemon restarts)
# unless-stopped  – restart unless manually stopped (survives daemon restart)
docker run -d --restart unless-stopped myapp
# In docker-compose.yml:
services:
  api:
    restart: unless-stopped
  worker:
    restart: on-failure:3
Resource Limits
# Hard limits (container killed if exceeded)
# --memory=512m       512 MB RAM limit
# --memory-swap=512m  disable swap (swap = memory-swap - memory)
# --cpus=1.5          1.5 CPU cores
# --pids-limit=100    max 100 processes (prevents fork bombs)
docker run -d --memory=512m --memory-swap=512m --cpus=1.5 --pids-limit=100 myapp
# In docker-compose.yml (v3 with deploy):
services:
  api:
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: 256M
        reservations: # guaranteed minimum
          cpus: "0.25"
          memory: 128M
Zero-downtime Rolling Updates
# Manual rolling update pattern (with Compose):
# 1. Pull new image
docker pull myapp:2.0.0
# 2. Update docker-compose.yml image tag to 2.0.0
# 3. Recreate only the changed service with no downtime
docker compose up -d --no-deps --build api
# Docker Swarm rolling update
docker service update --image myapp:2.0.0 \
  --update-parallelism 1 \
  --update-delay 30s \
  --update-failure-action rollback \
  myapp_api
# --update-parallelism 1            update 1 replica at a time
# --update-delay 30s                wait 30s between replicas
# --update-failure-action rollback  auto-rollback on failure
# Rollback to previous version
docker service rollback myapp_api
Generate Docker Compose Files Instantly
Use our Docker Compose generator to scaffold production-ready docker-compose.yml files with services, volumes, networks, and healthchecks – no boilerplate to remember.
Frequently Asked Questions
How do I reduce Docker image build time in CI/CD?
Cache Docker layers in CI. In GitHub Actions, use cache-from and cache-to with a registry cache, or actions/cache for the local buildx cache. Structure your Dockerfile so dependency installation (slow, rarely changing) comes before the code copy (fast, changes every commit): put COPY package*.json ./ and RUN npm ci before COPY . . so the install layer stays cached. For monorepos, rebuild only images whose context has changed using path-based triggers.
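Sketched as a GitHub Actions step using the build-push-action from the registry section – the image names and the buildcache tag are placeholders:

```yaml
# Hypothetical CI step: registry-backed BuildKit layer cache
- uses: docker/build-push-action@v5
  with:
    push: true
    tags: ghcr.io/org/myapp:latest
    cache-from: type=registry,ref=ghcr.io/org/myapp:buildcache
    cache-to: type=registry,ref=ghcr.io/org/myapp:buildcache,mode=max
```

mode=max exports all intermediate layers to the cache, not just the final ones, which maximizes cache hits across branches.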
What is the difference between docker-compose v2 and v3?
Compose file format v3 introduced deploy keys designed for Docker Swarm; in standalone Compose (non-Swarm), the deploy block is ignored. Format v2 has direct mem_limit and cpu_shares keys that work in standalone mode. For modern usage, the Compose Specification (no version number) unifies both – the Docker Compose v2 CLI (the docker compose plugin, not the old docker-compose binary) uses the Compose Spec and supports resource limits via deploy.resources even without Swarm.
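For example, under the Compose Spec the following limits apply in standalone mode – a minimal sketch, with no version: key required:

```yaml
# compose.yaml – Compose Spec; the docker compose plugin applies these limits without Swarm
services:
  api:
    image: myapp:latest
    deploy:
      resources:
        limits:
          cpus: "0.50"
          memory: 256M
```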
How do I see why a container keeps restarting?
Use docker ps to see the restart count, then docker logs CONTAINER to see the crash output. If the container exits too quickly for logs, use docker inspect CONTAINER and check State.ExitCode and State.Error. Override the entrypoint to keep the container alive for investigation: docker run -it --entrypoint sh IMAGE. Common causes: missing environment variables, wrong file paths, port already in use, out-of-memory (check State.OOMKilled), or a missing dependency service (use depends_on with healthchecks).
Key Takeaways
- Always use .dockerignore: Omitting it sends node_modules or .git as build context, massively slowing builds.
- Multi-stage builds are the single best way to shrink image size – compile in a fat image, copy only the binary to a minimal one.
- Order Dockerfile layers by change frequency: dependency install before code copy to maximize cache hits.
- Use named volumes for data persistence, bind mounts for development live-reload.
- Custom networks required for DNS: container name resolution only works on user-defined bridge networks, not the default bridge.
- Run as non-root user in all production images โ add a USER instruction before CMD.
- Never put secrets in ENV variables: use Docker secrets or read from files mounted via tmpfs.
- Set resource limits (--memory, --cpus) to prevent runaway containers from taking down the host.
- Use healthchecks in Compose to ensure dependent services wait until truly ready, not just started.
- Scan images with Trivy or similar tools in CI before pushing to production registries.