
Docker Security Best Practices: Container Hardening Guide

13 min read · by DevToolBox

Docker Container Security: A Comprehensive Hardening Guide

Docker containers are the backbone of modern deployment pipelines, but their convenience can create a false sense of security. A misconfigured container can expose your host system, leak secrets, or give an attacker a foothold into your infrastructure. This guide walks through Docker security best practices from image building to runtime configuration, addressing the most critical areas you need to harden your containers for production.

Whether you are running containers on a single server, Kubernetes, or a managed service like AWS ECS, these practices apply universally and will significantly reduce your attack surface.

Image Security: Building Secure Foundations

Security starts with the container image. A vulnerable base image, unnecessary packages, or leaked secrets in image layers can compromise your entire deployment before a single container starts running.

Use Minimal Base Images

Every package in your image is a potential attack vector. Minimize your attack surface by choosing the smallest base image that meets your requirements.

# BAD: Full Ubuntu image (~77MB, hundreds of packages)
FROM ubuntu:24.04
RUN apt-get update && apt-get install -y nodejs npm

# BETTER: Slim variant (~52MB, fewer packages)
FROM node:22-slim

# BEST: Alpine-based (~50MB, musl libc, minimal packages)
FROM node:22-alpine

# MOST SECURE: Distroless (no shell, no package manager)
FROM node:22-alpine AS builder
WORKDIR /app
COPY package*.json ./
# Install all dependencies (the build step may need devDependencies)
RUN npm ci
COPY . .
RUN npm run build
# Drop devDependencies so only runtime packages reach the final stage
RUN npm prune --production

FROM gcr.io/distroless/nodejs22-debian12
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["dist/server.js"]
Base Image            Size    CVEs (typical)   Shell Access   Use Case
ubuntu:24.04          ~77MB   20-50            Yes            Development only
node:22-slim          ~52MB   10-30            Yes            General purpose
node:22-alpine        ~50MB   5-15             Yes            Production
distroless/nodejs22   ~40MB   0-5              No             High security
scratch               ~0MB    0                No             Static binaries (Go, Rust)

Multi-Stage Builds

Multi-stage builds are essential for security. They ensure build tools, source code, and development dependencies never reach your production image.

# Stage 1: Dependencies
FROM node:22-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --ignore-scripts

# Stage 2: Build
FROM node:22-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Remove any .env files that might have been copied
RUN rm -f .env .env.local .env.production
RUN npm run build
# Prune dev dependencies
RUN npm prune --production

# Stage 3: Production (minimal runtime)
FROM node:22-alpine AS runner
WORKDIR /app

# Security: Create non-root user
RUN addgroup --system --gid 1001 appgroup && \
    adduser --system --uid 1001 --ingroup appgroup appuser

# Install dumb-init for proper signal handling and clear the apk cache
RUN apk --no-cache add dumb-init && \
    rm -rf /var/cache/apk/* /tmp/*

# Copy only production artifacts
COPY --from=builder --chown=appuser:appgroup /app/dist ./dist
COPY --from=builder --chown=appuser:appgroup /app/node_modules ./node_modules
COPY --from=builder --chown=appuser:appgroup /app/package.json ./

# Security: Run as non-root user
USER appuser

# Security: Use dumb-init for proper signal handling
ENTRYPOINT ["dumb-init", "--"]
CMD ["node", "dist/server.js"]

Never Run as Root

Running containers as root is the single most common Docker security mistake. If an attacker exploits a vulnerability in your application, running as root gives them full control over the container -- and potentially the host system if other misconfigurations exist.

# Create a non-root user in your Dockerfile
FROM node:22-alpine

# Create system user and group
RUN addgroup --system --gid 1001 appgroup && \
    adduser --system --uid 1001 --ingroup appgroup appuser

# Set ownership of application directory
WORKDIR /app
COPY --chown=appuser:appgroup . .
RUN npm ci --production

# Switch to non-root user BEFORE CMD
USER appuser

# Verify: This should print "appuser" not "root"
# docker exec <container> whoami

CMD ["node", "server.js"]
# docker-compose.yml - Enforce non-root
services:
  api:
    build: .
    user: "1001:1001"
    security_opt:
      - no-new-privileges:true
    read_only: true
    tmpfs:
      - /tmp:noexec,nosuid,size=64m
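
As defense in depth, the application itself can refuse to start when it detects it is running as UID 0. A minimal sketch (the function name `assertNotRoot` is illustrative, not part of any library):

```typescript
// Sketch: fail fast if the process was started as root (UID 0).
// Note: process.getuid is only available on POSIX platforms.
export function assertNotRoot(uid: number): void {
  if (uid === 0) {
    throw new Error('Refusing to run as root; set USER in the Dockerfile');
  }
}

// At startup (POSIX only):
//   if (typeof process.getuid === 'function') assertNotRoot(process.getuid());
```

This catches the case where someone overrides the image's USER directive with `docker run --user root`.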

Secrets Management

Container secrets (API keys, database passwords, certificates) are a frequent source of breaches. Never bake secrets into images, pass them as build arguments, or store them in environment variables visible through docker inspect.

What NOT to Do

# NEVER: Hardcode secrets in Dockerfile
ENV DATABASE_URL=postgres://admin:password123@db:5432/myapp
ENV API_KEY=sk-live-abc123xyz

# NEVER: Use build args for secrets (they persist in image layers)
ARG DB_PASSWORD
ENV DATABASE_URL=postgres://admin:${DB_PASSWORD}@db:5432/myapp

# NEVER: Copy secret files into the image
COPY .env /app/.env
COPY credentials.json /app/credentials.json

Proper Secrets Management

# Docker Compose with secrets
services:
  api:
    build: .
    secrets:
      - db_password
      - api_key
    environment:
      # Point the app at the secret file, never at the secret value itself
      DB_PASSWORD_FILE: /run/secrets/db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt  # For development
    # external: true                  # For production (Docker Swarm)
  api_key:
    file: ./secrets/api_key.txt
// Read secrets from files (Docker secrets pattern)
import { readFileSync, existsSync } from 'fs';

function getSecret(name: string): string {
  // Docker secrets are mounted as files
  const secretPath = `/run/secrets/${name}`;
  if (existsSync(secretPath)) {
    return readFileSync(secretPath, 'utf-8').trim();
  }

  // Fallback to environment variable for local development
  const envValue = process.env[name.toUpperCase()];
  if (envValue) {
    return envValue;
  }

  throw new Error(`Secret '${name}' not found`);
}

// Usage
const dbPassword = getSecret('db_password');
const apiKey = getSecret('api_key');
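
For secrets needed only at build time (for example a private registry token), BuildKit secret mounts keep the value out of every image layer. A sketch; the secret id `npm_token` and the source file path are assumptions:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:22-alpine
WORKDIR /app
COPY package*.json ./
# The secret is mounted only for this RUN step and is never written to a layer
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN="$(cat /run/secrets/npm_token)" npm ci
```

Build with `docker build --secret id=npm_token,src=./npm_token.txt .` -- unlike ARG, the value does not appear in the image history.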

Network Security

Docker networking defaults can expose containers to unnecessary communication. Apply the principle of least privilege to your network configuration.

Isolate Container Networks

# docker-compose.yml - Network isolation
services:
  frontend:
    build: ./frontend
    networks:
      - frontend-net
    ports:
      - "443:3000"  # Only frontend is exposed

  api:
    build: ./api
    networks:
      - frontend-net   # Can communicate with frontend
      - backend-net    # Can communicate with database
    # No ports exposed to host!

  database:
    image: postgres:16-alpine
    networks:
      - backend-net    # Only accessible from backend network
    # No ports exposed to host!
    volumes:
      - pgdata:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine
    networks:
      - backend-net
    command: redis-server --requirepass ${REDIS_PASSWORD}

networks:
  frontend-net:
    driver: bridge
  backend-net:
    driver: bridge
    internal: true   # No external access at all

volumes:
  pgdata:

Restrict Container Capabilities

# Drop ALL capabilities, add back only what's needed
services:
  api:
    build: .
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE  # Only if binding to ports < 1024
    security_opt:
      - no-new-privileges:true
    # Prevent container from gaining additional privileges

  # For even stricter security, use read-only filesystem
  worker:
    build: .
    cap_drop:
      - ALL
    read_only: true
    tmpfs:
      - /tmp:noexec,nosuid,size=100m
      - /var/run:noexec,nosuid,size=10m

Image Scanning and Vulnerability Management

Scanning images for known vulnerabilities should be part of your CI/CD pipeline. Multiple tools are available, and you should use at least one.

Trivy Scanner (Recommended)

# Scan a local image
trivy image myapp:latest

# Scan with severity filter
trivy image --severity HIGH,CRITICAL myapp:latest

# Scan and fail CI if critical vulnerabilities found
trivy image --exit-code 1 --severity CRITICAL myapp:latest

# Scan a Dockerfile for misconfigurations
trivy config Dockerfile

# Scan filesystem for secrets
trivy fs --scanners secret .

# Generate SBOM (Software Bill of Materials)
trivy image --format spdx-json --output sbom.json myapp:latest
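
Accepted or unfixable findings can be suppressed with a `.trivyignore` file in the scan root, so the pipeline fails only on new issues. The CVE ID below is a placeholder, not a real recommendation:

```
# .trivyignore - one vulnerability ID per line; document why each is accepted
# Accepted: vulnerable code path is not reachable in our deployment
CVE-2024-00000
```

Review this file during security audits so suppressions do not outlive their justification.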

CI/CD Integration

# .github/workflows/docker-security.yml
name: Docker Security Scan

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build image
        run: docker build -t myapp:scan .

      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: 'myapp:scan'
          format: 'sarif'
          output: 'trivy-results.sarif'
          severity: 'CRITICAL,HIGH'
          exit-code: '1'

      - name: Upload scan results
        uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: 'trivy-results.sarif'

      - name: Run Dockle (Dockerfile best practices)
        uses: erzz/dockle-action@v1
        with:
          image: 'myapp:scan'
          exit-code: '1'
          failure-threshold: 'WARN'

Runtime Security

Even with a secure image, runtime configuration can introduce vulnerabilities. These settings control what a container can do once it is running.

Resource Limits

# docker-compose.yml - Resource limits prevent DoS
services:
  api:
    build: .
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 512M
          pids: 100        # Prevent fork bombs
        reservations:
          cpus: '0.5'
          memory: 256M

    # Restart policy
    restart: unless-stopped

    # Health check
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

    # Logging limits (prevent disk exhaustion)
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
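
The healthcheck above expects the application to answer on /health. A minimal endpoint sketch (the port 3000 and the route match the compose example; adapt to your app):

```typescript
import { createServer, Server } from 'http';

// Build an HTTP server exposing a /health liveness route (sketch)
export function createHealthServer(): Server {
  return createServer((req, res) => {
    if (req.url === '/health') {
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ status: 'ok' }));
    } else {
      res.writeHead(404);
      res.end();
    }
  });
}

// In your entrypoint:
//   createHealthServer().listen(3000);
```

Keep the health route free of authentication and heavy dependencies so a failing database does not mark an otherwise recoverable container as dead, unless that is exactly the behavior you want.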

Seccomp and AppArmor Profiles

Seccomp profiles restrict which system calls a container may make. The example profile below denies every syscall by default (SCMP_ACT_ERRNO) and allows only a baseline set commonly needed by server applications:

{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": [
        "read", "write", "open", "close", "stat", "fstat",
        "mmap", "mprotect", "munmap", "brk", "ioctl",
        "access", "pipe", "select", "sched_yield",
        "clone", "execve", "exit", "wait4", "kill",
        "fcntl", "flock", "fsync", "fdatasync",
        "getpid", "getuid", "getgid", "geteuid", "getegid",
        "socket", "connect", "accept", "sendto", "recvfrom",
        "bind", "listen", "setsockopt", "getsockopt",
        "epoll_create", "epoll_ctl", "epoll_wait",
        "futex", "nanosleep", "clock_gettime"
      ],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
# Run with custom seccomp profile
docker run --security-opt seccomp=./seccomp-profile.json myapp:latest

# Run with no-new-privileges (prevents privilege escalation)
docker run --security-opt no-new-privileges myapp:latest
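
On Linux hosts with AppArmor enabled, Docker applies the docker-default profile automatically; a custom profile can be selected the same way as seccomp. The profile name below is illustrative and must first be loaded on the host with apparmor_parser:

```
# Apply the default AppArmor profile explicitly
docker run --security-opt apparmor=docker-default myapp:latest

# Apply a custom profile (previously loaded with apparmor_parser)
docker run --security-opt apparmor=my-custom-profile myapp:latest
```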

Docker Content Trust and Image Signing

Image signing ensures that the images you deploy are exactly what you built, with no tampering in transit. Docker Content Trust (DCT) uses Notary to sign and verify images.

# Enable Docker Content Trust
export DOCKER_CONTENT_TRUST=1

# Push a signed image
docker push myregistry/myapp:latest
# This will prompt for signing keys on first use

# Pull only signed images
docker pull myregistry/myapp:latest
# Fails if the image is not signed

# Verify image signatures
docker trust inspect --pretty myregistry/myapp:latest

# Using cosign (modern alternative, OCI-native)
# Sign an image
cosign sign --key cosign.key myregistry/myapp@sha256:abc123

# Verify an image
cosign verify --key cosign.pub myregistry/myapp@sha256:abc123

Dockerfile Security Checklist

Use this checklist when reviewing Dockerfiles for security issues. Every item addresses a common misconfiguration that has led to real-world breaches.

Docker Security Checklist:

IMAGE BUILDING
[ ] Using minimal base image (alpine, slim, or distroless)
[ ] Multi-stage build separates build and runtime
[ ] Base image pinned to specific digest (not just tag)
[ ] No secrets in Dockerfile or build args
[ ] .dockerignore excludes .env, .git, node_modules, secrets
[ ] Image scanned for CVEs in CI/CD pipeline
[ ] COPY used instead of ADD (ADD has URL/tar extraction risks)

USER & PERMISSIONS
[ ] Container runs as non-root user (USER directive)
[ ] File permissions are restrictive (no world-writable files)
[ ] no-new-privileges security option enabled

RUNTIME
[ ] All capabilities dropped, only required ones added back
[ ] Read-only filesystem where possible
[ ] Resource limits set (CPU, memory, PIDs)
[ ] Health check configured
[ ] Logging limits configured
[ ] No --privileged flag used

NETWORK
[ ] Only necessary ports exposed
[ ] Internal networks for backend services
[ ] Host networking avoided (--network=host)

SECRETS
[ ] Secrets injected at runtime (not build time)
[ ] Docker secrets or vault used for sensitive data
[ ] No secrets in environment variables visible via inspect

MONITORING
[ ] Container logs collected and monitored
[ ] Image vulnerabilities tracked over time
[ ] Runtime anomaly detection in place

Monitoring and Incident Response

Security does not end at deployment. Continuous monitoring of your running containers is essential for detecting breaches, configuration drift, and anomalous behavior. Implement centralized logging with tools like Fluentd or the ELK stack, and set up alerts for suspicious activities such as unexpected network connections, file system modifications in read-only containers, or processes spawning shells.
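
On the logging side, one low-effort safeguard is redacting known secret values before messages reach the log pipeline. A sketch (the function name and behavior are illustrative, not a specific library API):

```typescript
// Replace every occurrence of each known secret value with a marker
// before the message is handed to the log pipeline (sketch)
export function redactSecrets(message: string, secrets: string[]): string {
  return secrets.reduce(
    (msg, secret) => (secret ? msg.split(secret).join('[REDACTED]') : msg),
    message,
  );
}

// Example: redactSecrets('connect postgres://admin:s3cret@db', ['s3cret'])
```

Combined with the file-based getSecret pattern above, this keeps secrets out of both docker inspect and your log aggregator.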

Runtime security tools like Falco can monitor system calls inside containers and alert on policy violations in real time. For example, Falco can detect when a container opens a shell, reads sensitive files like /etc/shadow, or makes outbound connections to unexpected IP addresses. Integrating these alerts into your incident response workflow ensures that even if a container is compromised, the breach is detected quickly and contained before it spreads to other parts of your infrastructure.
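
Falco rules are plain YAML. A minimal rule in the spirit of the shell-detection example might look like the following sketch; it assumes the `spawned_process` and `container` macros from Falco's default ruleset are loaded, and is not a copy of the bundled rules:

```yaml
- rule: Shell Spawned in Container
  desc: A shell process was started inside a container
  condition: >
    spawned_process and container and proc.name in (bash, sh, zsh)
  output: >
    Shell spawned in container
    (user=%user.name container=%container.name cmdline=%proc.cmdline)
  priority: WARNING
  tags: [container, shell]
```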

Regularly audit your container configurations with tools like Docker Bench for Security, which runs automated checks based on the CIS Docker Benchmark. Schedule these audits as part of your monthly security review process to catch configuration drift and newly discovered best practices.

Common Docker Security Anti-Patterns

  • Running as root: The default and the most dangerous. Always create and use a non-root user.
  • Using latest tag: Tags are mutable. Pin to a specific SHA256 digest for reproducibility: FROM node@sha256:abc123...
  • Exposing Docker socket: Mounting /var/run/docker.sock gives the container full control over the host Docker daemon.
  • Using --privileged: This disables all security features. There is almost never a valid reason to use it in production.
  • Storing secrets in layers: Even if you delete a secret in a later layer, it exists in the image history. Use multi-stage builds or BuildKit secrets.
  • Ignoring image updates: Base images receive security patches regularly. Rebuild and redeploy images at least monthly.
  • No resource limits: A container without limits can consume all host resources, enabling denial-of-service attacks.
  • Host network mode: Using --network=host removes network isolation entirely.
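
Digest pinning from the list above looks like this in practice. The digest is resolved from your registry (for example with `docker buildx imagetools inspect node:22-alpine`); the value below is a placeholder:

```dockerfile
# Pin the base image to an immutable digest instead of a mutable tag
FROM node:22-alpine@sha256:<digest-from-your-registry>
```

Tags like `22-alpine` can be repointed at any time; a digest cannot, so your build is reproducible and tamper-evident.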

Conclusion

Docker security is not a single configuration change but a set of practices applied across the entire container lifecycle -- from image building to runtime monitoring. The most impactful changes you can make are: using minimal base images, running as non-root, managing secrets properly, scanning for vulnerabilities, and applying the principle of least privilege to capabilities and networking.

Start by auditing your existing Dockerfiles against the checklist above, integrate image scanning into your CI/CD pipeline, and gradually adopt more advanced protections like seccomp profiles and image signing. Every layer of security you add makes exploitation significantly harder for attackers.

Use our Hash Generator for verifying image digests, or check out the Git Branching Strategies guide for setting up secure development workflows.
