
Docker vs Kubernetes

12 min read · by DevToolBox

Docker and Kubernetes are two of the most important technologies in modern software deployment, but they solve fundamentally different problems. Docker packages your application into portable containers, while Kubernetes orchestrates those containers at scale. This guide breaks down when to use each, how they complement each other, and how to decide the right approach for your project in 2026.

What Is Docker?

Docker is a containerization platform that packages applications and their dependencies into lightweight, portable units called containers. Each container includes everything the application needs to run: code, runtime, system libraries, and configuration files. Containers share the host OS kernel, making them far more efficient than traditional virtual machines.

Docker solved a fundamental problem in software development: the "it works on my machine" syndrome. By packaging the entire runtime environment, Docker ensures that an application behaves identically in development, staging, and production. This consistency dramatically reduces deployment failures and simplifies debugging.

Docker in Practice: Building an Image

# Dockerfile for a Node.js application
FROM node:20-alpine AS builder

WORKDIR /app
COPY package*.json ./
# Install ALL dependencies here: the build step typically needs devDependencies
# (compilers, bundlers), so a production-only install would break "npm run build"
RUN npm ci

COPY . .
RUN npm run build

# Production stage
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
# Install production dependencies only (--omit=dev replaces the deprecated --only=production)
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist

EXPOSE 3000
USER node
CMD ["node", "dist/server.js"]
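Building and running the image locally can be sketched as follows (the image name `web-app` and tag are illustrative):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t web-app:1.0.0 .

# Run it, mapping host port 3000 to the container's exposed port
docker run --rm -p 3000:3000 web-app:1.0.0

# Inspect the final image size to verify the multi-stage build paid off
docker image ls web-app
```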

Docker Compose for Multi-Container Apps

Docker Compose lets you define and run multi-container applications using a single YAML file. It is ideal for local development environments where you need a database, cache, and application server running together.

# docker-compose.yml
# Note: the top-level "version" key is obsolete in the Compose Specification
# and ignored by Compose v2, so it is omitted here.
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/myapp
      - REDIS_URL=redis://cache:6379
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user"]
      interval: 5s
      timeout: 5s
      retries: 5

  cache:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  pgdata:
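With the file above saved as docker-compose.yml, the entire stack is managed with a handful of commands:

```shell
# Start all services in the background; --build picks up Dockerfile changes
docker compose up -d --build

# Tail application logs and check service status
docker compose logs -f app
docker compose ps

# Tear everything down (add -v to also delete the pgdata volume)
docker compose down
```

Because of the `depends_on` health condition, Compose waits for Postgres to pass its healthcheck before starting the app container.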

What Is Kubernetes?

Kubernetes (often abbreviated K8s) is a container orchestration platform originally developed by Google. It automates the deployment, scaling, and management of containerized applications across clusters of machines. While Docker runs containers on a single host, Kubernetes coordinates containers across many hosts.

Kubernetes provides self-healing capabilities: if a container crashes, Kubernetes automatically restarts it. If a node fails, Kubernetes reschedules containers to healthy nodes. It handles service discovery, load balancing, rolling updates, and secret management out of the box.

Kubernetes Architecture

A Kubernetes cluster consists of a control plane and worker nodes. The control plane manages the cluster state through the API server, scheduler, controller manager, and etcd (the distributed key-value store). Worker nodes run the kubelet agent and container runtime, executing the actual workloads in units called Pods.

# deployment.yaml — Kubernetes Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: myregistry/web-app:v2.1.0
          ports:
            - containerPort: 3000
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
          livenessProbe:
            httpGet:
              path: /healthz
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 15
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: database-url
---
apiVersion: v1
kind: Service
metadata:
  name: web-app-service
spec:
  selector:
    app: web-app
  ports:
    - port: 80
      targetPort: 3000
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: app-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app-service
                port:
                  number: 80
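Applying these manifests and watching the rollout can be sketched as:

```shell
# Apply the Deployment, Service, and Ingress in one shot
kubectl apply -f deployment.yaml

# Wait for the rolling update to complete
kubectl rollout status deployment/web-app

# Verify that all three replicas are Ready
kubectl get pods -l app=web-app

# Roll back to the previous revision if something goes wrong
kubectl rollout undo deployment/web-app
```

With `maxUnavailable: 0` in the strategy above, the rollout only replaces pods after their readiness probes pass, giving zero-downtime updates.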

Docker vs Kubernetes: Feature Comparison

Understanding the fundamental differences helps you choose the right tool for each scenario:

| Feature | Docker | Kubernetes |
|---|---|---|
| Purpose | Build & run containers | Orchestrate containers at scale |
| Scope | Single host | Multi-node cluster |
| Scaling | Manual | Automatic (HPA, VPA) |
| Self-healing | Restart policies only | Full (reschedule, replace, restart) |
| Networking | Bridge / host networks | Cluster-wide SDN, services, ingress |
| Load Balancing | Manual / external | Built-in service load balancing |
| Rolling Updates | Not built-in | Native rolling update strategy |
| Learning Curve | Low | High |
| Operational Cost | Minimal | Significant |
| Best For | Dev, CI/CD, small apps | Production microservices at scale |

When to Use Docker Alone

Docker without Kubernetes is the right choice in many common scenarios. Not every application needs container orchestration, and adding Kubernetes too early introduces unnecessary complexity.

  • Local development environments where you need consistent, reproducible setups across team members.
  • Small applications with 1-5 containers that can comfortably run on a single server.
  • CI/CD pipelines where Docker provides isolated, clean build environments for testing and artifact generation.
  • Monolithic applications that are being gradually containerized but do not yet need orchestration.
  • Personal projects, prototypes, and MVPs where operational overhead must be minimal.
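For the single-server scenarios above, Docker's built-in restart policies provide basic resilience without an orchestrator. A minimal sketch (image name illustrative):

```shell
# Restart the container automatically on crash or host reboot,
# but not when it is stopped manually
docker run -d \
  --name web-app \
  --restart unless-stopped \
  -p 3000:3000 \
  web-app:1.0.0
```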

When to Use Kubernetes

Kubernetes becomes essential when your application needs to scale reliably across multiple machines, handle traffic spikes automatically, or maintain high availability.

  • Microservices architectures with 10+ services that need independent scaling, deployment, and service discovery.
  • Production workloads requiring zero-downtime deployments, automatic rollbacks, and self-healing.
  • Multi-team organizations where different teams deploy independently to shared infrastructure.
  • Applications with variable traffic patterns that benefit from horizontal pod autoscaling.
  • Compliance-heavy environments where you need fine-grained access controls, network policies, and audit logging.

Alternatives to Full Kubernetes

Running a full Kubernetes cluster is operationally expensive. In 2026, several lighter alternatives have matured:

Docker Swarm

Built into Docker Engine, so there is nothing extra to install. It is simpler than Kubernetes but offers a smaller feature set (no autoscaling, limited ecosystem). A good fit for small teams that need basic multi-node orchestration without the Kubernetes learning curve.
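Getting started with Swarm is a few commands on top of a plain Docker install (service name and image are illustrative):

```shell
# Turn the current Docker host into a single-node swarm
docker swarm init

# Run three replicas of a service with a published port
docker service create --name web-app --replicas 3 -p 3000:3000 web-app:1.0.0

# Scale up without downtime
docker service scale web-app=5
```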

K3s / K0s

Lightweight Kubernetes distributions that strip out cloud-provider-specific features. K3s uses about 512MB of RAM and is ideal for edge computing, IoT, and resource-constrained environments.
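A single-node K3s install is typically a one-liner using the install script documented by the K3s project (as always, review scripts before piping them to a shell):

```shell
# Install K3s as a systemd service and start a single-node cluster
curl -sfL https://get.k3s.io | sh -

# The bundled kubectl talks to the local cluster
sudo k3s kubectl get nodes
```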

Managed Kubernetes (EKS, GKE, AKS)

Cloud providers manage the control plane for you, reducing operational burden significantly. You only manage worker nodes and your application workloads. This is the most common Kubernetes deployment model in production.

Container-as-a-Service (Cloud Run, Fargate, Fly.io)

Serverless container platforms that run your Docker containers without any cluster management. You push a container image and the platform handles scaling, networking, and infrastructure. Ideal when you want container portability without orchestration complexity.

Kubernetes Autoscaling

Horizontal Pod Autoscaler (HPA) automatically adjusts the number of pod replicas based on CPU usage, memory consumption, or custom metrics. This ensures your application scales to meet demand without over-provisioning.

# hpa.yaml — Horizontal Pod Autoscaler
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 60
      policies:
        - type: Pods
          value: 4
          periodSeconds: 60
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 10
          periodSeconds: 60
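For quick experiments, a CPU-only autoscaler roughly equivalent to the spec above (minus the memory metric and the scaling behavior rules) can also be created imperatively:

```shell
# Creates an HPA targeting 70% average CPU utilization
kubectl autoscale deployment web-app --min=2 --max=20 --cpu-percent=70

# Watch current vs. target utilization and replica count
kubectl get hpa web-app --watch
```

Note that resource-based HPA only works if the pods declare CPU/memory requests, as the Deployment above does.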

Decision Framework: Docker or Kubernetes?

Use this framework to guide your decision based on your project needs:

| Scenario | Recommendation |
|---|---|
| Solo dev, 1-3 services | Docker + Docker Compose |
| Small team, < 5 services | Docker + Docker Compose or Docker Swarm |
| Need auto-scaling | Kubernetes (managed) or CaaS |
| Microservices, 10+ services | Kubernetes (managed: EKS / GKE / AKS) |
| Edge / IoT deployment | Docker + K3s |
| Serverless containers | Cloud Run / Fargate / Fly.io |
| Enterprise, multi-team | Kubernetes with namespaces + RBAC |
| Learning / prototyping | Docker Desktop with Compose |

Best Practices for Both

  • Use multi-stage Docker builds to keep production images small. A typical Node.js image can go from 1GB to 150MB with proper multi-stage configuration.
  • Never run containers as root. Always specify a non-root USER in your Dockerfile and set security contexts in Kubernetes.
  • Pin specific image tags instead of using "latest." Immutable tags (using image digests) provide the strongest guarantee of reproducibility.
  • Set resource requests and limits in Kubernetes. Without them, a single misbehaving pod can starve the entire node.
  • Implement health checks (liveness and readiness probes in Kubernetes, HEALTHCHECK in Docker). They enable self-healing and prevent traffic from reaching unhealthy instances.
  • Store secrets in dedicated secret management systems (Kubernetes Secrets with encryption at rest, HashiCorp Vault, or cloud-native solutions). Never bake secrets into container images.
  • Use namespaces in Kubernetes to isolate environments (dev, staging, production) and teams. Apply resource quotas and network policies per namespace.
  • Scan container images for vulnerabilities in your CI pipeline using tools like Trivy, Snyk, or Grype. Block images with critical CVEs from reaching production.
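Several of the image-related practices above fit in a few Dockerfile lines. A sketch, with the digest and healthcheck endpoint as illustrative placeholders:

```dockerfile
# Pin the base image by digest for full reproducibility (digest is a placeholder)
FROM node:20-alpine@sha256:<digest>

# Drop root privileges
USER node

# Let Docker probe the app; repeated failures mark the container unhealthy
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD wget -qO- http://localhost:3000/healthz || exit 1
```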


FAQ

Can I use Docker without Kubernetes?

Absolutely. Docker is a standalone tool that works perfectly on its own. Docker Compose handles multi-container setups on a single host. Many successful production applications run on Docker without any orchestrator, especially when deployed to container-as-a-service platforms like Cloud Run, Fly.io, or Railway.

Can I use Kubernetes without Docker?

Yes. Kubernetes removed its built-in Docker runtime support (the dockershim) in version 1.24, and most production clusters now use containerd or CRI-O as the container runtime. Kubernetes works with any OCI-compliant runtime, so you can still build images with Docker and run them unchanged on a containerd-backed cluster.

How many containers do I need before Kubernetes makes sense?

There is no magic number, but generally Kubernetes starts making sense when you have 10+ microservices, need auto-scaling, require zero-downtime deployments, or have multiple teams deploying independently. For fewer than 5 services, Docker Compose or a container-as-a-service platform is usually sufficient.

Is Kubernetes overkill for small projects?

For small projects, yes. Kubernetes has significant operational overhead: you need to understand networking, storage, RBAC, monitoring, and cluster maintenance. Managed Kubernetes (EKS, GKE) reduces this but still has a learning curve. For small projects, use Docker with a PaaS like Fly.io, Railway, or Render.

What is the cost difference between Docker and Kubernetes?

Docker itself is free and open source. Kubernetes is also free, but running a cluster has costs: compute for worker nodes, managed control plane fees ($70-200/month on cloud providers), monitoring, logging, and engineering time. A minimal production Kubernetes setup on AWS EKS costs roughly $200-500/month before workload costs. Docker on a single VPS can cost as little as $5-20/month.

