Docker containers are ephemeral by design — when a container is removed, all data inside it is lost. Docker provides three storage mechanisms to persist data beyond the container lifecycle: volumes, bind mounts, and tmpfs mounts. This guide covers Docker storage from basic volume commands through driver configurations, backup strategies, and performance tuning.
Storage Types Overview
Docker offers three primary storage options for containers. Understanding the differences is critical for choosing the right approach for each use case.
| Storage Type | Managed By | Host Location | Best For |
|---|---|---|---|
| Volume | Docker | /var/lib/docker/volumes/ | Production data, databases |
| Bind Mount | User | Any host path | Development, config files |
| tmpfs | Docker | Memory (RAM) | Secrets, temp data (Linux only) |
# Comparison of storage types in docker run
# Named volume
docker run -v mydata:/app/data nginx
# Bind mount
docker run -v /home/user/data:/app/data nginx
# tmpfs mount
docker run --tmpfs /app/temp nginx
# Using --mount flag (more explicit)
docker run --mount type=volume,source=mydata,target=/app/data nginx
docker run --mount type=bind,source=/home/user/data,target=/app/data nginx
docker run --mount type=tmpfs,target=/app/temp nginx
Named Volumes
Named volumes are the most common and recommended way to persist data in Docker. Docker manages the entire lifecycle of the volume, including creation, storage location, and cleanup.
Creating and Using Named Volumes
# Create a named volume
docker volume create mydata
# Create with specific driver and options
docker volume create --driver local \
--opt type=none \
--opt device=/path/to/data \
--opt o=bind \
my-bind-vol
# Use a named volume in docker run
docker run -d \
--name postgres-db \
-v pgdata:/var/lib/postgresql/data \
-e POSTGRES_PASSWORD=secret \
postgres:16
# Volume is auto-created if it doesn't exist
docker run -d -v mydata:/data alpine
# Use --mount for explicit volume creation
docker run -d \
--mount type=volume,source=mydata,target=/data \
alpine
# Inspect volume details
docker volume inspect mydata
# Output:
# [
# {
# "CreatedAt": "2025-01-15T10:30:00Z",
# "Driver": "local",
# "Labels": {},
# "Mountpoint": "/var/lib/docker/volumes/mydata/_data",
# "Name": "mydata",
# "Options": {},
# "Scope": "local"
# }
# ]
Volume Lifecycle
Named volumes persist independently of containers. They survive container stops, restarts, and removals. The only way to remove a named volume is to explicitly delete it with docker volume rm or docker volume prune.
# Create container with volume
docker run -d --name app1 -v shared-data:/data alpine sleep 3600
# Write data inside the volume
docker exec app1 sh -c 'echo "Hello from app1" > /data/message.txt'
# Stop and remove the container
docker stop app1 && docker rm app1
# Data persists! Create new container with same volume
docker run --rm -v shared-data:/data alpine cat /data/message.txt
# Output: Hello from app1
# Multiple containers can share the same volume
docker run -d --name writer -v shared-data:/data alpine sh -c \
'while true; do date >> /data/log.txt; sleep 5; done'
docker run -d --name reader -v shared-data:/data:ro alpine tail -f /data/log.txt
Bind Mounts
Bind mounts map a specific host directory or file directly into a container. The host path must be an absolute path. Bind mounts are ideal for development workflows where you need live code reloading.
Bind Mount Syntax
# Short syntax: -v host_path:container_path
docker run -v /home/user/project:/app node:20
# Short syntax with read-only
docker run -v /home/user/config:/etc/app/config:ro nginx
# Long syntax: --mount (recommended for clarity)
docker run --mount type=bind,source=/home/user/project,target=/app node:20
# --mount with read-only
docker run \
--mount type=bind,source=/home/user/config,target=/etc/app/config,readonly \
nginx
# Key difference: -v creates the host dir if missing
# --mount fails with an error if host dir doesn't exist
docker run -v /nonexistent/path:/data alpine # Creates /nonexistent/path
docker run --mount type=bind,source=/nonexistent/path,target=/data alpine # ERROR
Development Workflow with Bind Mounts
Bind mounts shine in development — edit code on your host, and changes appear immediately inside the container without rebuilding the image.
# React development with hot reload
docker run -d \
--name react-dev \
-v $(pwd)/src:/app/src \
-v $(pwd)/public:/app/public \
-p 3000:3000 \
-e CHOKIDAR_USEPOLLING=true \
my-react-app
# Python Flask development
docker run -d \
--name flask-dev \
-v $(pwd):/app \
-p 5000:5000 \
-e FLASK_DEBUG=1 \
my-flask-app
# Go development with air (live reload)
docker run -d \
--name go-dev \
-v $(pwd):/app \
-p 8080:8080 \
cosmtrek/air
Docker Compose Volumes
Docker Compose provides a declarative way to define volumes alongside your services. Volumes can be defined using short syntax or long syntax for more control.
Short Syntax
services:
  db:
    image: postgres:16
    volumes:
      # Named volume
      - pgdata:/var/lib/postgresql/data
      # Bind mount (relative path)
      - ./init-scripts:/docker-entrypoint-initdb.d
      # Bind mount (absolute path)
      - /var/log/postgres:/var/log/postgresql
      # Read-only bind mount
      - ./config/postgresql.conf:/etc/postgresql/postgresql.conf:ro
      # Anonymous volume
      - /var/lib/postgresql/temp
volumes:
  pgdata: # Named volume declaration
    driver: local
Long Syntax
The long syntax provides more configuration options including read-only mode, volume sub-paths, and bind propagation.
services:
  web:
    image: nginx:alpine
    volumes:
      # Long syntax - named volume
      - type: volume
        source: web-data
        target: /usr/share/nginx/html
        volume:
          nocopy: true # Don't copy container data into volume
      # Long syntax - bind mount
      - type: bind
        source: ./nginx.conf
        target: /etc/nginx/nginx.conf
        read_only: true
      # Long syntax - tmpfs
      - type: tmpfs
        target: /tmp
        tmpfs:
          size: 100000000 # 100MB limit
volumes:
  web-data:
External and Shared Volumes
Use external volumes to share data between multiple Compose projects, or reference volumes created outside of Compose.
# Project A: creates the volume
# docker-compose.yml (Project A)
services:
  api:
    image: my-api
    volumes:
      - shared-data:/data
volumes:
  shared-data:
    name: my-shared-data # Explicit name (not prefixed with project name)
---
# Project B: uses the same volume
# docker-compose.yml (Project B)
services:
  worker:
    image: my-worker
    volumes:
      - shared-data:/data
volumes:
  shared-data:
    external: true
    name: my-shared-data # Must match the volume name from Project A
---
# Pre-create a volume externally
docker volume create --driver local my-shared-data
Volume Drivers
Docker supports pluggable volume drivers that allow you to store data on remote hosts or cloud providers. The default driver is "local", which stores data on the Docker host.
Local Driver with Options
# Local driver with tmpfs backend (RAM-based volume)
docker volume create --driver local \
--opt type=tmpfs \
--opt device=tmpfs \
--opt o=size=256m \
ramdisk
# Local driver with ext4 filesystem on a specific device
docker volume create --driver local \
--opt type=ext4 \
--opt device=/dev/sdb1 \
fast-storage
# Local driver binding to a specific directory
docker volume create --driver local \
--opt type=none \
--opt device=/mnt/data/app \
--opt o=bind \
app-data
NFS Volume Driver
# NFS volume using the local driver
docker volume create --driver local \
--opt type=nfs \
--opt o=addr=192.168.1.100,rw,nfsvers=4 \
--opt device=:/exports/data \
nfs-data
# NFS volume in Docker Compose
services:
  app:
    image: my-app
    volumes:
      - nfs-data:/app/data
volumes:
  nfs-data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.100,rw,nfsvers=4
      device: ":/exports/data"
# CIFS/SMB volume (Windows shares)
docker volume create --driver local \
--opt type=cifs \
--opt device=//server/share \
--opt o=addr=server,username=user,password=pass \
smb-data
Cloud Volume Drivers
For production workloads in the cloud, use provider-specific volume drivers to mount cloud storage directly into containers.
# AWS EFS volume (using amazon-ecs-volume-plugin or docker plugin)
# First install the plugin
docker plugin install rexray/efs
# Create volume backed by EFS
docker volume create --driver rexray/efs \
--opt filesystem=fs-12345678 \
efs-data
# Azure Files in Docker Compose
volumes:
  azure-data:
    driver: azure_file
    driver_opts:
      share_name: myshare
      storage_account_name: mystorageaccount
# GCP Filestore / persistent disk (via CSI or plugin)
# Typically used with Kubernetes rather than standalone Docker
Read-Only Volumes
The :ro flag (or read_only option in long syntax) mounts a volume as read-only inside the container. This is a critical security practice — if a container is compromised, it cannot modify the mounted data.
# Read-only with -v flag
docker run -v myconfig:/etc/app/config:ro nginx
# Read-only with --mount
docker run \
--mount type=volume,source=myconfig,target=/etc/app/config,readonly \
nginx
# Read-only bind mount
docker run -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro nginx
# Docker Compose - short syntax
services:
  web:
    volumes:
      - ./config:/etc/app/config:ro
      - certs:/etc/ssl/certs:ro
# Docker Compose - long syntax
services:
  web:
    volumes:
      - type: bind
        source: ./config
        target: /etc/app/config
        read_only: true
# Entire container filesystem as read-only (--read-only flag)
docker run --read-only \
--tmpfs /tmp \
--tmpfs /run \
-v data:/app/data \
my-app
Security Benefits of Read-Only Mounts
- Prevents container from modifying configuration files
- Protects host filesystem from compromised containers
- Enforces immutable infrastructure patterns
- Reduces blast radius of container escape vulnerabilities
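The locked-down `docker run --read-only` setup above can also be expressed declaratively. A minimal Compose sketch, assuming a hypothetical `my-app` image (Compose supports `read_only: true` and a `tmpfs:` list on services):

```yaml
services:
  app:
    image: my-app        # hypothetical image
    read_only: true      # mount the container's root filesystem read-only
    tmpfs:
      - /tmp             # writable scratch space in RAM
      - /run
    volumes:
      - data:/app/data   # the only writable, persistent path
volumes:
  data:
```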
Data Persistence
Understanding when data persists and when it is lost is essential for reliable container deployments.
| Scenario | Volume Data | Bind Mount Data | Container Data |
|---|---|---|---|
| Container stopped | Preserved | Preserved | Preserved |
| Container restarted | Preserved | Preserved | Preserved |
| Container removed | Preserved | Preserved | Lost |
| docker compose down | Preserved | Preserved | Lost |
| docker compose down -v | Lost | Preserved | Lost |
| docker volume prune | Removed (unused) | Preserved | N/A |
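One way to guard a critical volume against an accidental docker compose down -v is to declare it external, since Compose never removes volumes it did not create. A sketch, assuming the volume was pre-created with docker volume create pgdata:

```yaml
services:
  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
    external: true   # pre-created volume; down -v will not delete it
```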
Backup & Restore
Docker does not provide built-in backup commands for volumes. The standard approach is to use a temporary container that mounts the volume and creates a tar archive.
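The helper container is only there to reach the volume's files; the archive step itself is ordinary tar. A quick local demo of the same round trip (no Docker required), using a throwaway /tmp/vol-demo directory as a stand-in for the volume mountpoint:

```shell
# Simulate the volume backup/restore round trip on plain directories
rm -rf /tmp/vol-demo
mkdir -p /tmp/vol-demo/source /tmp/vol-demo/backup /tmp/vol-demo/restore
echo "hello from the volume" > /tmp/vol-demo/source/message.txt

# "Backup": the same tar invocation the helper container runs against /source
tar czf /tmp/vol-demo/backup/demo.tar.gz -C /tmp/vol-demo/source .

# "Restore": unpack the archive into a fresh directory
tar xzf /tmp/vol-demo/backup/demo.tar.gz -C /tmp/vol-demo/restore
cat /tmp/vol-demo/restore/message.txt   # prints: hello from the volume
```

The docker run wrappers below do exactly this, with the volume mounted at /source and a host directory at /backup.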
Backup a Volume
# Backup a named volume to a tar.gz file
docker run --rm \
-v mydata:/source:ro \
-v $(pwd)/backups:/backup \
alpine tar czf /backup/mydata-$(date +%Y%m%d).tar.gz -C /source .
# Backup PostgreSQL volume (prefer pg_dump for consistency)
docker exec postgres-db pg_dump -U postgres mydb > backup.sql
# Backup MySQL volume
docker exec mysql-db mysqldump -u root -p mydb > backup.sql
# Backup with compression and timestamp
docker run --rm \
-v pgdata:/source:ro \
-v $(pwd)/backups:/backup \
alpine sh -c 'tar czf /backup/pgdata-$(date +%Y%m%d-%H%M%S).tar.gz -C /source .'
Restore a Volume
# Restore a volume from backup
docker volume create mydata-restored
docker run --rm \
-v mydata-restored:/target \
-v $(pwd)/backups:/backup:ro \
alpine sh -c 'cd /target && tar xzf /backup/mydata-20250115.tar.gz'
# Restore PostgreSQL from SQL dump
docker exec -i postgres-db psql -U postgres mydb < backup.sql
# Clone a volume (backup + restore in one step)
docker run --rm \
-v source-vol:/source:ro \
-v target-vol:/target \
alpine sh -c 'cd /source && tar cf - . | (cd /target && tar xf -)'
Backup Using --volumes-from
The --volumes-from flag mounts all volumes from another container, which is useful when you don't know the exact volume mount paths.
# Backup all volumes from a running container
docker run --rm \
--volumes-from my-running-container \
-v $(pwd)/backups:/backup \
alpine tar czf /backup/all-volumes.tar.gz \
/var/lib/postgresql/data \
/etc/app/config
# This is useful when containers have multiple volumes
# and you want a single backup of all data
Automated Backup Script
#!/bin/bash
# backup-volumes.sh - Automated Docker volume backup
BACKUP_DIR="/backups/docker"
RETENTION_DAYS=30
DATE=$(date +%Y%m%d-%H%M%S)
# Backup each named volume
for vol in $(docker volume ls -q); do
echo "Backing up volume: $vol"
docker run --rm \
-v "$vol":/source:ro \
-v "$BACKUP_DIR":/backup \
alpine tar czf "/backup/$vol-$DATE.tar.gz" -C /source .
done
# Clean up old backups
find "$BACKUP_DIR" -name "*.tar.gz" -mtime +$RETENTION_DAYS -delete
echo "Backup complete: $(ls -1 "$BACKUP_DIR"/*"$DATE"* | wc -l) volumes backed up"
# Add to crontab for daily backups at 2 AM:
# 0 2 * * * /path/to/backup-volumes.sh >> /var/log/docker-backup.log 2>&1
Volume Management Commands
Docker provides a complete set of CLI commands for managing volumes throughout their lifecycle.
| Command | Description |
|---|---|
| docker volume create mydata | Create a named volume |
| docker volume ls | List all volumes |
| docker volume ls -f dangling=true | List volumes not used by any container |
| docker volume inspect mydata | Show detailed volume information |
| docker volume rm mydata | Remove a volume (must be unused) |
| docker volume rm $(docker volume ls -q) | Remove ALL volumes (dangerous!) |
| docker volume prune | Remove all unused volumes |
| docker volume prune -f | Force remove unused volumes (no prompt) |
| docker system df -v | Show volume disk usage |
| docker inspect -f '{{.Mounts}}' container | View container mount information |
Performance Considerations
Volume performance varies significantly across operating systems. On Linux, both volumes and bind mounts achieve near-native performance. On macOS and Windows, there are important differences.
Linux
Both volumes and bind mounts use the native filesystem. Performance is identical to direct host access. This is the recommended platform for production Docker workloads.
macOS and Windows
Docker Desktop runs Linux containers inside a lightweight VM. File sharing between the host and the VM introduces I/O overhead, especially for bind mounts with many small files (like node_modules).
# Problem: node_modules in bind mount is extremely slow on macOS
# BAD - entire project as bind mount (slow npm install, slow builds)
docker run -v $(pwd):/app node:20 npm install
# This syncs thousands of node_modules files between host and VM
# GOOD - use named volume for node_modules
docker run \
-v $(pwd):/app \
-v node_modules:/app/node_modules \
node:20 npm install
# node_modules stays inside the VM, only source code is synced
Consistency Flags (macOS)
Docker on macOS supports consistency flags to tune the tradeoff between performance and data consistency. Note: these flags are deprecated in recent Docker Desktop versions, which use VirtioFS by default.
| Flag | Behavior | Use Case |
|---|---|---|
| consistent | Full consistency (default), host and container always see same view | When strong consistency needed |
| cached | Host writes are eventually visible in container (delay allowed) | Source code (host edits, container reads) |
| delegated | Container writes are eventually visible on host (delay allowed) | Build output (container writes, host reads) |
# Legacy consistency flags (deprecated with VirtioFS)
docker run -v $(pwd)/src:/app/src:cached node:20
docker run -v $(pwd)/dist:/app/dist:delegated node:20
# Modern Docker Desktop (4.15+) uses VirtioFS by default
# which provides near-native performance without flags
# Check: Docker Desktop → Settings → General → Virtual file sharing
Performance Tips
- Use named volumes for node_modules to avoid cross-filesystem sync
- On macOS/Windows, use VirtioFS (default in Docker Desktop 4.15+) for best bind mount performance
- Exclude large dependency directories from bind mounts using a named volume overlay
- For database volumes, always use named volumes instead of bind mounts
Common Patterns
Here are battle-tested patterns for common Docker storage scenarios.
Database Data Persistence
# PostgreSQL with persistent data
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    restart: unless-stopped
  mysql:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: myapp
    volumes:
      - mysqldata:/var/lib/mysql
    restart: unless-stopped
  mongo:
    image: mongo:7
    volumes:
      - mongodata:/data/db
      - mongoconfig:/data/configdb
    restart: unless-stopped
  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes
    volumes:
      - redisdata:/data
    restart: unless-stopped
volumes:
  pgdata:
  mysqldata:
  mongodata:
  mongoconfig:
  redisdata:
Shared Configuration Files
# Share config files across multiple services
services:
  web:
    image: nginx:alpine
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/certs:/etc/nginx/certs:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - web-logs:/var/log/nginx
  app:
    build: .
    volumes:
      - ./config/app.yml:/app/config/app.yml:ro
      - app-logs:/app/logs
  # Log aggregator reads logs from both services
  logrotate:
    image: alpine
    volumes:
      - web-logs:/logs/nginx:ro
      - app-logs:/logs/app:ro
volumes:
  web-logs:
  app-logs:
Development Hot-Reload with Dependency Isolation
This pattern bind-mounts your source code for live reloading while using a named volume for node_modules to avoid performance issues and platform-specific binary mismatches.
# Development setup with hot-reload + fast dependencies
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.dev
    volumes:
      # Bind mount source code for hot-reload
      - ./src:/app/src
      - ./public:/app/public
      - ./package.json:/app/package.json
      # Named volume for node_modules (performance + isolation)
      - node_modules:/app/node_modules
      # Exclude build artifacts
      - dist:/app/dist
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
volumes:
  node_modules: # Stays in Docker VM, no host sync overhead
  dist: # Build output stays fast
# After changing package.json, rebuild:
# docker compose run --rm app npm install
# docker compose up
Build Artifact Output
Use a named volume to share build artifacts between a builder container and a web server container.
# Multi-stage build with artifact sharing
services:
  builder:
    build:
      context: .
      target: builder
    volumes:
      - build-output:/app/dist
  web:
    image: nginx:alpine
    volumes:
      - build-output:/usr/share/nginx/html:ro
    ports:
      - "80:80"
    depends_on:
      builder:
        condition: service_completed_successfully
volumes:
  build-output:
Related Tool
Generate Docker Compose configurations visually with our Docker Compose Generator tool.
FAQ
What is the difference between Docker volumes and bind mounts?
Volumes are managed by Docker and stored in a Docker-controlled area of the host filesystem (/var/lib/docker/volumes/). Bind mounts map a specific host path directly into the container. Volumes are more portable, easier to back up, and work consistently across platforms. Bind mounts depend on the host filesystem structure and are primarily used for development workflows.
Do Docker volumes persist after container removal?
Yes. Named volumes persist independently of containers. Removing a container with docker rm does not remove its volumes. Volumes are only removed when explicitly deleted with docker volume rm, docker volume prune, or docker compose down -v. Anonymous volumes (created without a name) may be removed with docker rm -v or docker volume prune.
How do I backup a Docker volume?
Use a temporary container to create a tar archive of the volume contents: docker run --rm -v myvolume:/data -v $(pwd):/backup alpine tar czf /backup/myvolume-backup.tar.gz -C /data . This mounts the volume and your current directory, then creates a compressed archive. For databases, prefer using the database's native dump tools (pg_dump, mysqldump) for consistent backups.
Why are Docker volumes slow on macOS?
Docker Desktop on macOS runs Linux containers inside a lightweight VM. Bind mounts require file synchronization between the macOS host and the Linux VM, which introduces I/O overhead. This is especially noticeable with many small files (like node_modules). Solutions include: using named volumes for dependency directories, enabling VirtioFS in Docker Desktop settings, and minimizing the number of files in bind mounts.
Should I use volumes or bind mounts for development?
Use bind mounts for source code so changes on your host are immediately reflected in the container. Use named volumes for dependency directories (node_modules, vendor, venv) to avoid performance issues and platform-specific binary problems. This hybrid approach gives you live reloading for your code while keeping dependencies fast and isolated.