
Coolify Complete Guide 2026: Self-Hosted PaaS — Deploy Apps Without Vercel or Heroku

26 min read · by DevToolBox Team

TL;DR

Coolify is a free, open-source, self-hosted PaaS that lets you deploy applications, databases, and services on your own servers with a one-command install. It provides automatic SSL, Git-based deployments, one-click databases, and a Vercel-like experience without vendor lock-in or per-seat pricing. In 2026, Coolify v4 supports multi-server management, Docker Compose deployments, and a powerful API, making it a production-ready alternative to Vercel, Netlify, Railway, and Heroku.

Key Takeaways

  • Coolify installs on any VPS with a single curl command and provides a full PaaS experience
  • Supports Node.js, Python, Go, PHP, Ruby, Rust, static sites, Docker, and Docker Compose deployments
  • One-click database provisioning for PostgreSQL, MySQL, MongoDB, Redis, and more
  • Automatic SSL via Let's Encrypt with Traefik reverse proxy
  • Native GitHub and GitLab integration with automatic deployments on push
  • Multi-server management from a single dashboard
  • A typical 4GB VPS at $5-20/month replaces $50-500/month in cloud platform costs
  • Coolify v4 brings a new UI, improved API, S3 backups, and Sentinel monitoring agent

Table of Contents

  1. What Is Coolify and Why It Matters
  2. Coolify vs Vercel vs Netlify vs Railway vs Heroku
  3. Requirements and Installation
  4. Architecture Overview
  5. Deploying Applications
  6. Docker and Docker Compose Deployments
  7. Database Management
  8. Environment Variables and Secrets
  9. Custom Domains and SSL
  10. GitHub/GitLab Integration
  11. Coolify v4 Features
  12. Multi-Server Management
  13. Monitoring and Logs
  14. Backup and Restore
  15. Resource Management and Scaling
  16. Cost Comparison
  17. Security Hardening
  18. Production Best Practices
  19. Troubleshooting
  20. Coolify API Usage
  21. FAQ

The self-hosting movement is accelerating in 2026. Developers and teams are moving away from expensive cloud platforms to regain control over their infrastructure, reduce costs, and eliminate vendor lock-in. Coolify is at the forefront of this shift — an open-source, self-hosted Platform-as-a-Service (PaaS) that gives you a Vercel-like deployment experience on your own servers. This guide covers everything you need to know to install, configure, and run Coolify in production.

What Is Coolify and Why It Matters

Coolify is a free, open-source, self-hosted PaaS built to simplify application deployment and server management. Think of it as your own private Heroku or Vercel, running on servers you control. It abstracts away Docker, reverse proxies, SSL certificates, and deployment pipelines into a clean web UI.

Why Coolify Matters in 2026

  • Zero vendor lock-in: your code, your servers, your data
  • No per-seat pricing: unlimited team members at no extra cost
  • Full data sovereignty and GDPR compliance control
  • Dramatically lower costs: a $5 VPS replaces $50+ in platform fees
  • Growing ecosystem: 30,000+ GitHub stars, active community, frequent releases
  • Supports any language, framework, or Docker image

Coolify vs Vercel vs Netlify vs Railway vs Heroku

Here is how Coolify compares to popular cloud deployment platforms across key dimensions for a typical full-stack application in 2026.

| Feature | Coolify | Vercel | Netlify | Railway | Heroku |
|---|---|---|---|---|---|
| Pricing | Free (VPS cost only) | $20/user/mo Pro | $19/user/mo Pro | Usage-based ($5+) | $5-25/dyno/mo |
| Open Source | Yes (Apache 2.0) | No | No | No | No |
| Self-Hosted | Yes | No | No | No | No |
| Docker Support | Full (Compose too) | Limited | No | Dockerfile only | Container stack |
| Databases | One-click (8+ types) | Postgres only | None built-in | Postgres, MySQL, Redis | Postgres, Redis |
| Auto SSL | Yes (Let's Encrypt) | Yes | Yes | Yes | Yes (paid) |
| Git Auto-Deploy | Yes | Yes | Yes | Yes | Yes |
| Multi-Server | Yes | N/A (managed) | N/A (managed) | N/A (managed) | N/A (managed) |
| Edge Network | No (single region) | Yes (global) | Yes (global) | No | No |
| Vendor Lock-in | None | Moderate | Moderate | Low | Low |

Requirements and Installation

Coolify runs on any Linux server (Ubuntu 22.04+ recommended) with SSH access. The minimum requirements are modest, but production workloads need more resources.

| Spec | Minimum | Recommended (Production) |
|---|---|---|
| CPU | 2 cores | 4+ cores |
| RAM | 2 GB | 8 GB+ |
| Disk | 30 GB | 80 GB+ SSD |
| OS | Ubuntu 22.04 | Ubuntu 24.04 LTS |
| Access | SSH root or sudo | SSH key-based auth |
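Before running the installer, it can be worth confirming the box actually meets these minimums. A small pre-flight sketch (the `preflight` helper is my own, not part of the Coolify installer; it takes explicit values so it is easy to test, and the last line feeds in the live numbers):

```shell
# preflight: check a server against Coolify's minimum requirements.
# Args: cpu cores, RAM in MB, free disk in GB.
preflight() {
  cores="$1"; ram_mb="$2"; disk_gb="$3"; fail=0
  [ "$cores" -ge 2 ]     || { echo "FAIL: need >= 2 CPU cores, have $cores"; fail=1; }
  [ "$ram_mb" -ge 2048 ] || { echo "FAIL: need >= 2048 MB RAM, have $ram_mb"; fail=1; }
  [ "$disk_gb" -ge 30 ]  || { echo "FAIL: need >= 30 GB free disk, have $disk_gb"; fail=1; }
  if [ "$fail" -eq 0 ]; then echo "PASS: meets Coolify minimums"; fi
}

# Run against the current machine:
preflight "$(nproc)" \
          "$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)" \
          "$(df -BG --output=avail / | awk 'NR==2 {gsub("G",""); print $1}')"
```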

One-Command Installation

Coolify installs with a single curl command that sets up Docker, the Coolify application, and all dependencies automatically.

# Install Coolify with one command
curl -fsSL https://cdn.coollabs.io/coolify/install.sh | bash

# The installer will:
# 1. Install Docker Engine if not present
# 2. Pull Coolify Docker images
# 3. Set up PostgreSQL for internal state
# 4. Configure Traefik reverse proxy
# 5. Start the Coolify dashboard

# After installation, access the dashboard:
# http://your-server-ip:8000

# First-time setup:
# 1. Create an admin account
# 2. Set your instance domain (e.g., coolify.yourdomain.com)
# 3. Configure the wildcard domain for app subdomains

# VPS setup example (Hetzner/DigitalOcean/Vultr):
# 1. Create a VPS with Ubuntu 24.04
# 2. Point a domain to the VPS IP (A record)
# 3. SSH in and run the install command
ssh root@your-server-ip
curl -fsSL https://cdn.coollabs.io/coolify/install.sh | bash
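Once the installer finishes, you can sanity-check that the core containers came up. The container names below are typical for a default v4 install but may vary by version, so treat this as a sketch (`check_stack` is my own helper):

```shell
# check_stack: given a newline-separated list of running container names,
# report whether each core Coolify service is present.
check_stack() {
  running="$1"
  for svc in coolify coolify-db coolify-redis coolify-proxy; do
    if printf '%s\n' "$running" | grep -qx "$svc"; then
      echo "ok: $svc"
    else
      echo "missing: $svc"
    fi
  done
}

# On the server (prints all "missing" if Docker is unavailable):
check_stack "$(docker ps --format '{{.Names}}' 2>/dev/null)"
```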

Architecture Overview

Understanding Coolify's architecture helps you troubleshoot issues and optimize your setup. Coolify is built on proven, battle-tested components.

  • Coolify Application: A Laravel-based web UI and API for managing deployments, served via Nginx
  • Docker Engine: All applications, databases, and services run as Docker containers
  • Traefik: Automatic reverse proxy handling HTTPS termination, routing, and load balancing
  • PostgreSQL: Internal database storing Coolify configuration, deployment history, and metadata
  • Redis: Queue management and caching for background jobs and real-time updates
  • Sentinel (v4): Optional monitoring agent for resource metrics and health checks
# Coolify Architecture Diagram
#
#   Internet
#      |
#   [Traefik] :80/:443
#      |--- app1.yourdomain.com --> [App Container 1]
#      |--- app2.yourdomain.com --> [App Container 2]
#      |--- coolify.yourdomain.com --> [Coolify Dashboard]
#      |
#   [Docker Engine]
#      |--- [PostgreSQL] (Coolify internal DB)
#      |--- [Redis] (Queue & cache)
#      |--- [Sentinel] (Monitoring agent)
#      |--- [App DB: PostgreSQL/MySQL/MongoDB]
#      |--- [App Services: workers, cron, etc.]

Deploying Applications

Coolify supports deploying applications from Git repositories, Docker images, or raw Dockerfiles. Here are deployment examples for popular frameworks.

Coolify supports three deployment methods: (1) Dockerfile-based builds where you provide a Dockerfile in your repository, (2) Nixpacks auto-detection that automatically detects your language and framework, and (3) Docker image deployments where you specify a pre-built image from a registry. For most projects, the Dockerfile approach gives you the most control.

Nixpacks Auto-Detection

If your repository does not contain a Dockerfile, Coolify uses Nixpacks to automatically detect your language and framework. Nixpacks supports Node.js, Python, Go, Rust, Ruby, PHP, Java, .NET, and many more. It generates an optimized Dockerfile behind the scenes.

# Nixpacks auto-detects and builds:
# - package.json → Node.js (detects Next.js, Nuxt, Remix, etc.)
# - requirements.txt / pyproject.toml → Python
# - go.mod → Go
# - Cargo.toml → Rust
# - Gemfile → Ruby
# - composer.json → PHP

# Override Nixpacks behavior with nixpacks.toml:
[phases.setup]
nixPkgs = ["...", "ffmpeg"]  # Add system packages

[phases.build]
cmds = ["npm run build"]

[start]
cmd = "npm start"

| Build Method | Best For | Pros | Cons |
|---|---|---|---|
| Nixpacks | Quick starts, standard apps | Zero config, auto-detection | Less control, larger images |
| Dockerfile | Production, custom setups | Full control, optimized images | Requires Docker knowledge |
| Docker Image | Pre-built, third-party | No build step, fast deploy | External CI required for custom |

Node.js / Next.js

# Dockerfile for Next.js on Coolify
FROM node:20-alpine AS base

FROM base AS deps
WORKDIR /app
COPY package.json package-lock.json ./
# Install all dependencies; devDependencies are needed for `npm run build`
RUN npm ci

FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

FROM base AS runner
WORKDIR /app
ENV NODE_ENV=production
# The standalone output below requires `output: "standalone"` in next.config.js
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/public ./public

EXPOSE 3000
ENV PORT=3000
CMD ["node", "server.js"]

Python (FastAPI / Flask)

# Dockerfile for Python FastAPI
FROM python:3.12-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

Go

# Dockerfile for Go application
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o server .

FROM alpine:3.19
WORKDIR /app
COPY --from=builder /app/server .
EXPOSE 8080
CMD ["./server"]

Static Sites

# Dockerfile for static site (Vite, Astro, Hugo, etc.)
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
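The static-site Dockerfile copies an `nginx.conf` that your repository must provide but which the article does not show. A minimal sketch for a single-page app (adjust or drop the `try_files` fallback if you are serving a plain static site without client-side routing):

```nginx
server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;

    # Fall back to index.html so client-side routes resolve
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Hashed build assets can be cached aggressively
    location /assets/ {
        add_header Cache-Control "public, max-age=31536000, immutable";
    }
}
```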

Docker Image Deployment

You can deploy any pre-built Docker image directly. This is useful for third-party services, custom registries, or images built in external CI pipelines.

# Deploy from Docker Hub:
# Image: nginx:alpine
# Port: 80

# Deploy from GitHub Container Registry:
# Image: ghcr.io/your-org/your-app:latest

# Deploy from private registry:
# Image: registry.yourdomain.com/app:v2.1.0
# Add registry credentials in Coolify > Settings > Docker Registries

# Pre-built image with environment variables:
# Image: node:20-alpine
# Command override: node server.js
# Working directory: /app

Docker and Docker Compose Deployments

For complex applications with multiple services, Coolify supports Docker Compose deployments directly. Push your docker-compose.yml and Coolify handles the rest.

When using Docker Compose, Coolify reads your docker-compose.yml file, builds any services that use the build directive, pulls images for others, and manages the entire lifecycle. Coolify automatically injects Traefik labels on the service you designate as the public-facing entry point.

# docker-compose.yml for a full-stack app on Coolify
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - DATABASE_URL=postgresql://postgres:secret@db:5432/myapp
      - REDIS_URL=redis://redis:6379
      - NODE_ENV=production
    ports:
      - "3000:3000"
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M

  worker:
    build:
      context: .
      dockerfile: Dockerfile
    command: node worker.js
    environment:
      - DATABASE_URL=postgresql://postgres:secret@db:5432/myapp
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: myapp
      POSTGRES_PASSWORD: secret  # example only; use a Coolify secret variable in production
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    volumes:
      - redisdata:/data
    restart: unless-stopped

volumes:
  pgdata:
  redisdata:

Database Management

Coolify provides one-click provisioning for databases with automatic backups, persistent volumes, and easy connection string management.

Each database is deployed as a Docker container with a persistent volume for data. Coolify automatically generates secure passwords, configures networking so your applications can reach the database by container name, and provides connection strings in the dashboard. You can also expose database ports to the host for external access during development.

Supported Databases

  • PostgreSQL (16, 15, 14, 13)
  • MySQL (8.4, 8.0)
  • MariaDB (11, 10)
  • MongoDB (7, 6)
  • Redis (7, Stack)
  • Dragonfly (fast Redis alternative)
  • ClickHouse (analytics)
  • KeyDB (multi-threaded Redis)
# Coolify creates databases with connection details like:
# Internal URL (for services on same server):
# postgresql://postgres:generated_password@db-container:5432/mydb

# You can also connect externally if you expose the port:
# postgresql://postgres:generated_password@your-server-ip:5432/mydb

# Backup a Coolify-managed PostgreSQL database manually
# (replace "db-container" with your database's container name):
docker exec db-container pg_dump -U postgres mydb > backup.sql

# Restore from backup:
cat backup.sql | docker exec -i db-container psql -U postgres mydb

# One-click MongoDB setup produces:
# mongodb://root:generated_password@mongo-container:27017/mydb?authSource=admin

# Redis connection:
# redis://redis-container:6379
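When an app cannot reach its database, the first check is whether each piece of the connection string matches what the dashboard shows. A tiny parser sketch for that (`parse_pg_url` is my own helper, not a Coolify command; it assumes the `user:pass@host:port/db` form shown above):

```shell
# parse_pg_url: split a postgresql:// URL into its parts for quick inspection.
parse_pg_url() {
  rest="${1#postgresql://}"     # user:pass@host:port/db
  creds="${rest%%@*}"           # user:pass
  hostpart="${rest#*@}"         # host:port/db
  hostport="${hostpart%%/*}"    # host:port
  echo "user=${creds%%:*} host=${hostport%%:*} port=${hostport##*:} db=${hostpart#*/}"
}

parse_pg_url "postgresql://postgres:generated_password@db-container:5432/mydb"
```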

Environment Variables and Secrets

Coolify provides a secure way to manage environment variables with support for shared variables across services, build-time and runtime separation, and preview-specific overrides.

Coolify encrypts environment variables at rest in its PostgreSQL database. Variables marked as sensitive are never displayed in the UI after being set. Build-time variables are available during the Docker build process, while runtime variables are injected when the container starts. This separation is important for Next.js applications where NEXT_PUBLIC_ variables must be available at build time.

# Coolify environment variable types:

# 1. Application-level variables (set in app settings)
# These are injected at runtime into the container
DATABASE_URL=postgresql://postgres:pass@db:5432/app
REDIS_URL=redis://redis:6379
NODE_ENV=production
APP_SECRET=your-secret-key-here

# 2. Build-time variables (available during docker build)
# Set "Build Variable" toggle in the Coolify UI
NEXT_PUBLIC_API_URL=https://api.yourdomain.com
NEXT_PUBLIC_SITE_URL=https://yourdomain.com

# 3. Shared variables (reusable across multiple services)
# Created under "Shared Variables" in team settings
# Reference with: ${SHARED_VAR_NAME}

# 4. Preview-specific variables
# Override variables for pull request preview deployments
# Useful for staging database URLs, test API keys, etc.
DATABASE_URL=postgresql://postgres:pass@staging-db:5432/app_preview

Custom Domains and SSL

Coolify automatically provisions SSL certificates via Let's Encrypt through its integrated Traefik reverse proxy. Adding a custom domain requires just a DNS A record and a domain entry in the Coolify dashboard.

Coolify handles both single-domain and wildcard SSL certificates. For standard domains, it uses HTTP-01 challenge validation. For wildcard certificates, you need to configure a DNS provider (Cloudflare, AWS Route53, DigitalOcean DNS) for DNS-01 challenge validation.

# Step 1: Add DNS A record at your registrar
# Type: A
# Name: app (or @ for root domain)
# Value: your-server-ip
# TTL: 300

# Step 2: In Coolify dashboard, set the domain:
# Application Settings > Domains
# Add: app.yourdomain.com

# Step 3: Coolify automatically:
# - Configures Traefik routing rules
# - Requests Let's Encrypt SSL certificate
# - Sets up automatic certificate renewal
# - Enables HTTP -> HTTPS redirect

# For wildcard domains (*.yourdomain.com):
# Requires DNS challenge (Cloudflare, Route53, etc.)
# Configure in Coolify > Settings > DNS Provider

# Traefik configuration is auto-generated:
# - TLS termination at the proxy level
# - WebSocket support enabled by default
# - Custom headers can be added per application

# Multiple domains per application:
# Add comma-separated domains in the settings:
# app.yourdomain.com, www.yourdomain.com, custom.com

GitHub/GitLab Integration and Auto-Deploy

Coolify integrates natively with GitHub and GitLab. Connect your account through OAuth, select a repository, and Coolify creates webhook listeners for automatic deployments on every push.

The integration supports both GitHub Apps (recommended for organizations) and personal access tokens for simpler setups. For GitLab, both cloud and self-hosted instances are supported. Coolify creates webhook endpoints that listen for push events and automatically trigger the deployment pipeline.

Preview deployments are one of the most powerful features. When a pull request is opened, Coolify automatically deploys a preview instance with a unique URL, injects preview-specific environment variables, and tears it down when the PR is closed. This gives your team a Vercel-like preview experience on your own infrastructure.

# GitHub Integration Setup:
# 1. Go to Coolify dashboard > Sources > Add GitHub App
# 2. Authorize the Coolify GitHub App
# 3. Select repositories to grant access

# GitLab Integration Setup:
# 1. Go to Coolify dashboard > Sources > Add GitLab
# 2. Enter your GitLab instance URL (or gitlab.com)
# 3. Create an application in GitLab for OAuth
# 4. Paste Client ID and Secret in Coolify

# Auto-deploy configuration:
# - Push to main/master branch -> Production deployment
# - Push to other branches -> Preview deployment (optional)
# - Pull request opened -> Preview deployment with unique URL

# Coolify listens for webhooks and triggers:
# 1. git clone / git pull
# 2. Docker build (using Dockerfile or Nixpacks)
# 3. Container restart with zero-downtime deployment
# 4. Health check verification
# 5. Traefik routing update

# Manual deployment trigger:
# Click "Deploy" button in the dashboard
# Or use the Coolify API (see API section below)

Coolify v4 Features and Improvements

Coolify v4 is a major rewrite bringing significant improvements in usability, performance, and reliability.

Version 4 was developed over 18 months and represents a near-complete rewrite of the codebase. The most impactful changes include the Sentinel monitoring agent that provides per-container resource metrics, a fully documented REST API for automation, and significantly improved Docker Compose handling that supports complex multi-service stacks out of the box.

  • Completely redesigned UI with improved navigation and dark mode
  • New Sentinel monitoring agent for real-time resource tracking
  • Improved Docker Compose support with full YAML compatibility
  • S3-compatible backup destinations (AWS, Cloudflare R2, MinIO)
  • Enhanced API with full CRUD operations for all resources
  • Improved webhook handling and deployment pipeline
  • Tags and filtering for organizing large numbers of resources
  • Terminal access to running containers from the dashboard
  • Improved multi-server proxy management
  • Automatic server cleanup and resource reclamation

Multi-Server Management

Coolify can manage multiple servers from a single dashboard. Install Coolify on your primary server, then add remote servers by providing SSH credentials. Each server runs its own Docker engine and Traefik instance, while Coolify orchestrates deployments across all of them.

Multi-server management is one of Coolify's strongest advantages over other self-hosted PaaS tools. You can designate servers for specific purposes: one for production web applications, another for databases, and a third for staging environments. This separation improves security and resource isolation.

# Adding a remote server to Coolify:

# 1. On the remote server, ensure SSH is available:
sudo apt update && sudo apt install -y openssh-server

# 2. Create a dedicated user for Coolify (recommended):
sudo adduser coolify
sudo usermod -aG docker coolify

# 3. Set up SSH key access:
# Copy the public key from Coolify dashboard > Servers > Add Server
ssh-copy-id coolify@remote-server-ip

# 4. In Coolify dashboard:
# Servers > Add Server
# - Name: production-worker-1
# - IP: remote-server-ip
# - User: coolify
# - Private Key: select from Coolify key store
# - Click "Validate Server"

# 5. Coolify will:
# - Install Docker on the remote server
# - Set up Traefik proxy
# - Enable the server for deployments

# Deploy to specific servers:
# When creating a new resource, select the target server
# Each server operates independently with its own Docker and Traefik

Monitoring and Logs

Coolify v4 includes built-in monitoring through the Sentinel agent. It tracks CPU, memory, disk, and network usage per server and per container. You can view real-time logs for any application or service directly in the dashboard.

For teams that need more advanced monitoring, Coolify integrates well with external tools. You can deploy Prometheus and Grafana as services within Coolify itself, then configure them to scrape metrics from your applications and the Docker daemon. Coolify's Sentinel agent can also export metrics in a Prometheus-compatible format.

# Coolify Sentinel Agent (v4):
# Automatically installed on each managed server
# Collects metrics every 10 seconds:
# - CPU usage (per server and per container)
# - Memory usage and available memory
# - Disk usage and I/O
# - Network traffic in/out

# View logs in the dashboard:
# Applications > [Your App] > Logs
# Shows real-time stdout/stderr from the container

# CLI access to container logs:
docker logs -f container_name --tail 100

# View all Coolify-managed containers:
docker ps --filter "label=coolify.managed=true"

# External monitoring integration:
# Export metrics to Prometheus/Grafana:
# 1. Deploy Prometheus + Grafana via Coolify
# 2. Configure Sentinel to export metrics
# 3. Create dashboards for all your services

# Health check configuration (per application):
# Settings > Health Check
# Path: /health or /api/health
# Interval: 30s
# Timeout: 5s
# Retries: 3
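The health-check settings above can also be exercised from outside the dashboard. A small probe sketch mirroring the retry/interval behavior (`classify_status` and `probe` are my own helpers, not Coolify commands):

```shell
# classify_status: HTTP status code -> healthy (2xx) or unhealthy (anything else)
classify_status() {
  [ "$1" -ge 200 ] && [ "$1" -lt 300 ] && echo healthy || echo unhealthy
}

# probe: poll a health endpoint until it passes or retries run out.
# Args: url [retries] [interval_seconds]
probe() {
  url="$1"; retries="${2:-3}"; interval="${3:-5}"; i=0
  while [ "$i" -lt "$retries" ]; do
    code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$url" 2>/dev/null) || code=000
    if [ "$(classify_status "$code")" = "healthy" ]; then
      echo "healthy (HTTP $code)"
      return 0
    fi
    i=$((i + 1))
    sleep "$interval"
  done
  echo "unhealthy after ${retries} attempts"
  return 1
}

# Usage: probe "https://app.yourdomain.com/health" 3 10
```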

Backup and Restore Strategies

Coolify supports automated database backups to local storage or S3-compatible destinations. You can schedule backups at custom intervals and set retention policies.

A solid backup strategy for Coolify involves three layers: (1) database backups managed by Coolify's built-in scheduler sent to S3, (2) volume backups for persistent data using Docker volume snapshots, and (3) Coolify configuration backup by dumping its internal PostgreSQL database. Combining all three ensures you can fully recover from any failure scenario, including complete server loss.

# Coolify Backup Configuration:

# 1. Configure S3 backup destination:
# Settings > S3 Storages > Add
# - Provider: AWS S3 / Cloudflare R2 / MinIO
# - Bucket: coolify-backups
# - Region: us-east-1
# - Access Key and Secret Key

# 2. Enable database backup:
# Databases > [Your DB] > Backups
# - Schedule: 0 2 * * * (daily at 2 AM)
# - Destination: S3 storage configured above
# - Retention: 7 days

# 3. Backup Coolify itself:
# The internal PostgreSQL database stores all configuration

# Manual Coolify backup:
cd /data/coolify
docker compose exec postgres pg_dumpall -U postgres > coolify-backup.sql

# Restore Coolify from backup:
cat coolify-backup.sql | docker compose exec -T postgres psql -U postgres

# 4. Volume backup strategy:
# For persistent volumes, create periodic snapshots:
docker run --rm -v myapp_data:/data -v /backups:/backup \
  alpine tar czf /backup/myapp-data-$(date +%Y%m%d).tar.gz /data
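A backup you have never checked is a hope, not a backup. This sketch (`verify_backup` is my own helper) alerts when the newest archive matching a pattern is stale or empty; it can run from cron next to the snapshot command above:

```shell
# verify_backup: alert unless a non-empty backup newer than N days exists.
# Args: directory, filename glob, max age in days.
verify_backup() {
  dir="$1"; pattern="$2"; max_days="$3"
  latest=$(find "$dir" -name "$pattern" -mtime "-$max_days" -size +0c 2>/dev/null | sort | tail -n 1)
  if [ -n "$latest" ]; then
    echo "ok: $latest"
  else
    echo "ALERT: no non-empty backup newer than ${max_days} days in $dir"
  fi
}

# Example: verify_backup /backups 'myapp-data-*.tar.gz' 2
```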

Resource Management and Scaling

Coolify lets you set resource limits per container (CPU, memory) and configure horizontal scaling through Docker Compose replicas. For vertical scaling, resize your VPS and Coolify automatically uses the additional resources.

Resource limits are critical in a self-hosted environment because unlike cloud platforms, there is no automatic scaling beyond your server capacity. Without limits, a single runaway application can consume all available memory and crash other services. Set both limits (hard cap) and reservations (guaranteed minimum) for each service.

# Set resource limits in Coolify dashboard:
# Application > Settings > Resource Limits
# - CPU Limit: 1.0 (1 core)
# - Memory Limit: 512M
# - CPU Reservation: 0.25
# - Memory Reservation: 256M

# Or in docker-compose.yml:
services:
  app:
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
        reservations:
          cpus: "0.25"
          memory: 256M
      replicas: 3  # Horizontal scaling

# Monitor resource usage:
docker stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"

# Disk cleanup (run periodically):
docker system prune -af --volumes
# WARNING: removes unused images, containers, and volumes
# Coolify v4 has automatic cleanup in Settings > Server

Cost Comparison: VPS vs Cloud Platforms

One of the biggest advantages of Coolify is cost savings. Here is a realistic comparison for a typical full-stack application with a database, background workers, and moderate traffic.

| Setup | Coolify + VPS | Vercel Pro | Railway | Heroku |
|---|---|---|---|---|
| Web App | Included | $20/user/mo | ~$10/mo | $7/dyno/mo |
| Database | Included | $20+/mo | ~$10/mo | $9+/mo |
| Background Worker | Included | Serverless (usage) | ~$5/mo | $7/dyno/mo |
| Redis Cache | Included | $20+/mo (KV) | ~$5/mo | $15+/mo |
| SSL Certificate | Free (Let's Encrypt) | Free | Free | Free (paid plans) |
| 3 Team Members | Free | $60/mo | Free | Free |
| Total Monthly | $6-24 (VPS) | $120-200+ | $30-50 | $38-60 |

Recommended VPS Providers

Any Linux VPS with root SSH access works with Coolify. Here are popular options sorted by price-to-performance ratio in 2026.

| Provider | 4GB RAM Plan | Location Options | Notes |
|---|---|---|---|
| Hetzner | ~$7/mo (CX22) | EU, US East | Best value, AMD EPYC, fast NVMe |
| DigitalOcean | $24/mo | Global (15 regions) | Good docs, simple UI, marketplace |
| Vultr | $24/mo | Global (32 regions) | Wide region selection, hourly billing |
| Linode (Akamai) | $24/mo | Global (25 regions) | Reliable, good support |
| Contabo | ~$7/mo (S) | EU, US, Asia | Cheapest, slower disk I/O |
| Oracle Cloud | Free (ARM 24GB) | Limited regions | Free tier with ARM Ampere instances |

Security Hardening

Running your own PaaS means you are responsible for security. Follow these hardening steps for a production Coolify installation.

  • Enable UFW firewall and allow only ports 22, 80, 443
  • Use SSH key authentication and disable password login
  • Keep the host OS and Docker updated regularly
  • Enable automatic security updates on Ubuntu
  • Set strong passwords for Coolify admin and all databases
  • Use Coolify's built-in environment variable encryption
  • Restrict Coolify dashboard access with IP allowlisting or VPN
  • Enable 2FA on your GitHub/GitLab accounts used for deployment
  • Review Docker container permissions — avoid running as root
  • Monitor server access logs and set up fail2ban
# Security hardening commands:

# 1. Enable UFW firewall
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp    # SSH
sudo ufw allow 80/tcp    # HTTP
sudo ufw allow 443/tcp   # HTTPS
sudo ufw enable

# 2. Disable password authentication (pattern also matches the commented-out default)
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart ssh   # service is "ssh" on Ubuntu, "sshd" on some other distros

# 3. Enable automatic security updates
sudo apt install -y unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# 4. Install fail2ban
sudo apt install -y fail2ban
sudo systemctl enable fail2ban
sudo systemctl start fail2ban

# 5. Restrict Coolify dashboard to specific IPs (via Traefik):
# In Coolify Settings > Custom Labels, add:
# traefik.http.middlewares.coolify-ip.ipallowlist.sourcerange=YOUR_IP/32

Production Best Practices

Following these practices will help you run a stable and maintainable Coolify production environment. Most production issues stem from insufficient resource limits, missing backups, or unmonitored disk usage.

  • Use a dedicated VPS for Coolify — do not share with other manual Docker setups
  • Always set resource limits (CPU and memory) on containers to prevent one app from starving others
  • Enable automated database backups to an S3 destination — never rely on local-only backups
  • Use separate staging and production environments with different servers if budget allows
  • Pin Docker image versions in production — never use :latest tags
  • Set up health checks for all services to enable automatic restarts
  • Use preview deployments for pull requests before merging to main
  • Monitor disk usage — Docker images and build caches grow quickly
  • Run docker system prune periodically or enable Coolify's automatic cleanup
  • Keep Coolify updated — run the built-in update from the dashboard regularly
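The disk-usage and cleanup bullets above can be combined into a small cron-able script. `should_prune` is my own helper and the 80% threshold is an assumption to tune; the prune flags are standard Docker CLI, and `--volumes` is deliberately omitted so persistent data survives:

```shell
# should_prune: decide whether to clean up, given usage% and threshold%.
should_prune() {
  [ "$1" -ge "$2" ] && echo prune || echo skip
}

# Prune stopped containers, unused images, and build cache older than 72h
# once root disk usage crosses the threshold.
usage=$(df --output=pcent / | awk 'NR==2 {gsub("%",""); print $1}')
if [ "$(should_prune "$usage" 80)" = "prune" ]; then
  docker info >/dev/null 2>&1 && docker system prune -af --filter "until=72h"
fi
echo "root disk usage: ${usage}%"
```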

Common Issues and Troubleshooting

Here are the most common issues when running Coolify and how to resolve them.

Most Coolify issues fall into three categories: build failures (Dockerfile or Nixpacks errors), networking issues (DNS, SSL, port conflicts), and resource exhaustion (disk space, memory). The Coolify dashboard provides deployment logs for build issues, and Docker logs for runtime issues. When in doubt, check the Coolify Discord community where the maintainers and community are very active.

# Issue: Deployment fails with "port already in use"
# Solution: Check if another container is using the port
docker ps --format "{{.Names}} {{.Ports}}" | grep 3000
# Stop the conflicting container or change your app port

# Issue: SSL certificate not issuing
# Solution: Verify DNS propagation and Traefik logs
dig +short app.yourdomain.com   # Should show your server IP
docker logs coolify-proxy 2>&1 | grep -i "certificate"
# Ensure ports 80 and 443 are open in firewall

# Issue: Application shows 502 Bad Gateway
# Solution: Check if the container is running and healthy
docker ps -a | grep your-app
docker logs your-app-container --tail 50
# Common causes: app crashed, wrong port, health check failing

# Issue: Out of disk space
# Solution: Clean up Docker resources
docker system df          # Check Docker disk usage
docker system prune -af   # Remove unused images/containers
docker volume prune -f    # Remove unused volumes (careful!)
df -h                     # Check overall disk usage

# Issue: Coolify dashboard not loading
# Solution: Check Coolify containers
cd /data/coolify
docker compose ps
docker compose logs --tail 50
docker compose restart

# Issue: Webhook not triggering deployments
# Solution: Verify webhook URL in GitHub/GitLab
# Check Coolify > Sources > [Your Source] > Webhooks
# GitHub: Settings > Webhooks > Recent Deliveries
# Look for 200 response from Coolify endpoint

# Issue: High memory usage
# Solution: Set container memory limits and check for leaks
docker stats --no-stream
# Set memory limits in Coolify app settings
# Consider adding swap on the VPS:
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Persist the swapfile across reboots:
echo "/swapfile none swap sw 0 0" | sudo tee -a /etc/fstab

Zero-Downtime Deployments

Coolify supports zero-downtime deployments through its rolling update strategy. When you deploy a new version, Coolify starts the new container, waits for it to pass health checks, updates Traefik routing to point to the new container, and then stops the old container. This ensures your application remains available during deployments.

# Zero-downtime deployment requires:
# 1. A health check endpoint in your application
#    GET /health -> 200 OK

# 2. Health check configuration in Coolify:
#    Path: /health
#    Interval: 10s
#    Timeout: 5s
#    Retries: 3
#    Start Period: 30s (grace period for slow-starting apps)

# 3. Example health check endpoint (Node.js/Express):
# app.get("/health", (req, res) => {
#   // Check database connection
#   // Check Redis connection
#   res.status(200).json({ status: "healthy" });
# });

Coolify Update and Migration

Keeping Coolify updated is important for security patches and new features. The update process is designed to be non-disruptive — your running applications are not affected during a Coolify update. Only the Coolify management containers restart.

# Update Coolify from the dashboard:
# Settings > Update > Check for Updates > Update

# Update via CLI (alternative):
curl -fsSL https://cdn.coollabs.io/coolify/install.sh | bash

# Before major version updates, backup:
cd /data/coolify
docker compose exec postgres pg_dumpall -U postgres > /tmp/coolify-pre-update.sql
cp /data/coolify/source/.env /tmp/coolify-env-backup

# Check current version:
grep APP_VERSION /data/coolify/source/.env

# Rollback if needed (restore backup):
docker compose exec -T postgres psql -U postgres < /tmp/coolify-pre-update.sql
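A truncated or empty dump is worse than none, so it is worth sanity-checking the pre-update backup before upgrading. This sketch relies on the fact that pg_dumpall output begins with a recognizable header comment:

```shell
# Verify the dump created above is non-empty and starts with the
# standard pg_dumpall header before trusting it as a rollback point.
verify_dump() {
  [ -s "$1" ] && head -n 5 "$1" | grep -q 'PostgreSQL database cluster dump'
}

if verify_dump /tmp/coolify-pre-update.sql; then
  echo "dump looks valid"
else
  echo "dump missing, empty, or truncated" >&2
fi
```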

Coolify API Usage

Coolify v4 provides a comprehensive REST API for programmatic management. Generate an API token from the dashboard under Settings > API Tokens.

The API follows REST conventions and returns JSON responses. It supports authentication via Bearer tokens and covers all resources: applications, databases, servers, deployments, and environment variables. This makes it easy to integrate Coolify into your existing CI/CD pipelines or build custom automation scripts.

# Generate API token:
# Dashboard > Settings > API Tokens > Create New Token

# List all applications:
curl -s -H "Authorization: Bearer YOUR_API_TOKEN" \
  https://coolify.yourdomain.com/api/v1/applications | jq .

# Get application details:
curl -s -H "Authorization: Bearer YOUR_API_TOKEN" \
  https://coolify.yourdomain.com/api/v1/applications/APP_UUID | jq .

# Trigger a deployment:
curl -s -X POST \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  https://coolify.yourdomain.com/api/v1/applications/APP_UUID/deploy

# Update environment variable:
curl -s -X PATCH \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"key": "APP_VERSION", "value": "2.1.0"}' \
  https://coolify.yourdomain.com/api/v1/applications/APP_UUID/envs

# List all servers:
curl -s -H "Authorization: Bearer YOUR_API_TOKEN" \
  https://coolify.yourdomain.com/api/v1/servers | jq .

# List databases:
curl -s -H "Authorization: Bearer YOUR_API_TOKEN" \
  https://coolify.yourdomain.com/api/v1/databases | jq .
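The curl calls above repeat the same auth header; in scripts it helps to wrap them in a small helper. This is a sketch built on the /api/v1 paths shown in this section — it assumes COOLIFY_URL points at your instance and COOLIFY_TOKEN is exported.

```shell
# Default base URL is a placeholder; override via the environment.
COOLIFY_URL="${COOLIFY_URL:-https://coolify.yourdomain.com}"

api_url() {
  # Join the base URL and a resource path, tolerating stray slashes.
  printf '%s/api/v1/%s' "${COOLIFY_URL%/}" "${1#/}"
}

coolify_api() {
  # Usage: coolify_api GET applications
  #        coolify_api POST applications/APP_UUID/deploy
  local method=$1 path=$2
  shift 2
  curl -sf -X "$method" \
    -H "Authorization: Bearer $COOLIFY_TOKEN" \
    -H "Content-Type: application/json" \
    "$(api_url "$path")" "$@"
}
```

With this in place, `coolify_api GET servers | jq .` replaces the longer curl invocation above.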

# CI/CD integration example (GitHub Actions):
# .github/workflows/deploy.yml
# Store COOLIFY_TOKEN, COOLIFY_URL, and APP_UUID as repository secrets.
name: Deploy to Coolify
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger Coolify deployment
        run: |
          curl -sf -X POST \
            -H "Authorization: Bearer ${{ secrets.COOLIFY_TOKEN }}" \
            "${{ secrets.COOLIFY_URL }}/api/v1/applications/${{ secrets.APP_UUID }}/deploy"

Frequently Asked Questions

Is Coolify free to use?

Yes, Coolify is completely free and open source under the Apache 2.0 license. You can self-host it at no cost on your own server. The Coolify team also offers a paid cloud version if you prefer not to manage the server yourself, but the self-hosted version has the same features.

What are the minimum server requirements for Coolify?

The minimum requirements are 2 CPU cores, 2GB RAM, and 30GB disk space running Ubuntu 22.04 or later. For production use with multiple applications and databases, we recommend at least 4 CPU cores, 8GB RAM, and 80GB SSD. Coolify itself uses approximately 500MB-1GB of RAM.

Can Coolify replace Vercel for Next.js deployments?

Yes, Coolify fully supports Next.js applications including server-side rendering, API routes, and middleware. The main difference is that Vercel offers a global edge network, while Coolify deploys to your specific server location. For most applications where a single region is sufficient, Coolify provides equivalent functionality at a fraction of the cost.

How does Coolify handle SSL certificates?

Coolify uses Traefik as its reverse proxy, which automatically provisions and renews SSL certificates from Let's Encrypt. When you add a custom domain in the Coolify dashboard and point your DNS A record to your server IP, Traefik automatically obtains a certificate. Renewal happens automatically before expiration.
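To confirm that Traefik obtained a certificate and see how long it remains valid, you can inspect it from any machine with openssl. The helper below is a small sketch around GNU date; app.example.com is a placeholder domain.

```shell
days_until_expiry() {
  # $1: a certificate notAfter string such as "Mar  1 12:00:00 2027 GMT"
  local end_s now_s
  end_s=$(date -d "$1" +%s)
  now_s=$(date +%s)
  echo $(( (end_s - now_s) / 86400 ))
}

# Fetch the live certificate's expiry date (network call, shown for reference):
# not_after=$(echo | openssl s_client -connect app.example.com:443 2>/dev/null \
#   | openssl x509 -noout -enddate | cut -d= -f2)
# days_until_expiry "$not_after"
```

Let's Encrypt certificates are valid for 90 days; Traefik renews them well before expiry, so a result under roughly 30 days is normal right before a renewal.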

Can I migrate from Heroku or Railway to Coolify?

Yes. For most applications, migration involves setting up Coolify, connecting your Git repository, configuring environment variables, and updating DNS records. Coolify supports the same Dockerfile-based or Buildpack-based deployment workflows. Database migration requires exporting data from your current provider and importing it into a Coolify-managed database.

Does Coolify support horizontal scaling and load balancing?

Coolify supports horizontal scaling through Docker Compose replicas and multi-server management. For a single server, you can run multiple container replicas behind Traefik's built-in load balancer. For true horizontal scaling across servers, deploy the same application to multiple Coolify-managed servers and use an external load balancer or DNS-based distribution.
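For single-server replicas, the answer above maps to a deploy.replicas setting in your compose file. A minimal sketch, assuming a service named web and an image called myapp (both placeholders):

```yaml
services:
  web:
    image: myapp:latest
    deploy:
      replicas: 3   # Traefik load-balances requests across the 3 containers
```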

How do I update Coolify to the latest version?

Coolify includes a built-in update mechanism. Navigate to Settings in the dashboard and click Update. Alternatively, run the install command again on your server, which performs an in-place upgrade while preserving your data and configuration. Always backup your Coolify PostgreSQL database before major updates.

Is Coolify suitable for production workloads?

Yes. Many teams and companies run production applications on Coolify. It uses battle-tested components like Docker, Traefik, and PostgreSQL. The key to production readiness is proper server sizing, automated backups, monitoring, and following the security hardening steps outlined in this guide. For high-availability requirements, use multi-server setups with external health monitoring.

Conclusion

Coolify exemplifies how far the self-hosting ecosystem has matured by 2026. It eliminates the complexity of managing Docker, reverse proxies, and SSL certificates while giving you full control over your infrastructure. Whether you are a solo developer looking to cut hosting costs or a team seeking data sovereignty, Coolify provides a production-grade deployment platform at a fraction of cloud platform pricing.

Getting Started Checklist

  1. Choose a VPS provider and create an Ubuntu 24.04 instance (4GB+ RAM recommended)
  2. Point your domain DNS A record to the server IP address
  3. SSH into the server and run the Coolify install command
  4. Access the dashboard, create an admin account, and set your instance domain
  5. Connect your GitHub or GitLab account for automatic deployments
  6. Deploy your first application from a Git repository
  7. Configure custom domains and verify SSL certificates are issued
  8. Set up automated database backups to an S3 destination
  9. Apply security hardening: firewall, SSH keys, fail2ban
  10. Enable Sentinel monitoring and set up health checks for all services

For additional resources, visit the official Coolify documentation at coolify.io/docs, join the Coolify Discord community for support, and follow the GitHub repository at github.com/coollabsio/coolify for release notes and feature announcements. The self-hosting community is growing rapidly, and Coolify is one of the most mature and actively developed projects in this space. Starting with a small VPS and a single application is the best way to learn, and you can scale from there as your confidence and requirements grow.
