
Nginx Config Guide: Write nginx.conf Files — Complete Guide

12 min read · by DevToolBox

TL;DR

Nginx configuration is organized in nested blocks: main → events → http → server → location. Use proxy_pass for reverse proxy, upstream for load balancing, ssl_certificate + listen 443 ssl http2 for HTTPS, and limit_req_zone for rate limiting. Always test with nginx -t before reloading.

Key Takeaways

  • nginx.conf uses a hierarchical block structure: main, events, http, server, location
  • Directives in outer blocks are inherited by inner blocks (can be overridden)
  • proxy_pass enables reverse proxy to Node.js, Python, Go, and other backends
  • SSL/TLS with Let's Encrypt is free and automatable via certbot
  • HTTP/2 requires SSL and the http2 parameter on the listen directive
  • Load balancing strategies: round-robin, least_conn, ip_hash, weighted
  • Rate limiting uses shared memory zones defined in the http block
  • Security headers (HSTS, CSP, X-Frame-Options) protect against common attacks
  • Always validate config with nginx -t before reloading

1. nginx.conf Structure: The Block Hierarchy

Every nginx configuration is built from nested blocks called contexts. Directives inside a context apply to that scope. Understanding the hierarchy is the foundation of writing correct nginx config.

The four main contexts are: main (global settings), events (connection handling), http (HTTP-specific settings), and server (virtual host). Inside server blocks, location blocks handle specific URL patterns.

# /etc/nginx/nginx.conf — Top-level structure

# ── MAIN CONTEXT ──────────────────────────────────────────────────────────
user  nginx;                     # Worker process user
worker_processes  auto;          # Number of worker processes (auto = CPU cores)
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

# ── EVENTS CONTEXT ────────────────────────────────────────────────────────
events {
    worker_connections  1024;    # Max simultaneous connections per worker
    use epoll;                   # Event method (Linux: epoll, macOS: kqueue)
    multi_accept on;             # Accept multiple connections at once
}

# ── HTTP CONTEXT ──────────────────────────────────────────────────────────
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    keepalive_timeout  65;

    # ── SERVER CONTEXT (virtual host) ─────────────────────────────────────
    server {
        listen       80;
        server_name  example.com www.example.com;

        # ── LOCATION CONTEXT (URL pattern matching) ────────────────────────
        location / {
            root   /var/www/html;
            index  index.html;
        }

        location /api/ {
            proxy_pass http://localhost:3000;
        }
    }
}
Tip: Most distributions (Ubuntu/Debian) split config across files. The main nginx.conf includes /etc/nginx/conf.d/*.conf and/or /etc/nginx/sites-enabled/*. Put each virtual host in its own file and symlink it: ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
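On such a split layout, the stock nginx.conf pulls the pieces in with include directives inside the http block; roughly like this (exact paths vary by distribution):

```nginx
http {
    # ... global settings ...

    # Pull in extra config fragments and per-site virtual hosts
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
```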

Key Main-Context Directives

Directive | Default | Description
worker_processes | 1 | Set to auto to match CPU cores
worker_connections | 512 | Max connections per worker; total = workers × connections
sendfile | off | Enable kernel-level file transfer (faster for static files)
keepalive_timeout | 75s | How long to keep idle client connections open
server_tokens | on | Set to off to hide the nginx version in headers

2. Static File Serving

Nginx excels at serving static files. The key directives are root (the base directory) and alias (maps URL to a path). Understanding the difference between them prevents common path-building bugs.

server {
    listen 80;
    server_name static.example.com;

    # ── root: URL path is appended to root ────────────────────────────────
    # Request: /images/logo.png → /var/www/html/images/logo.png
    root /var/www/html;

    location / {
        try_files $uri $uri/ =404;
        # try_files: 1. try exact file, 2. try as directory, 3. return 404
    }

    # ── alias: URL prefix is replaced with the alias path ─────────────────
    # Request: /static/app.js → /opt/frontend/dist/app.js
    location /static/ {
        alias /opt/frontend/dist/;
        expires 1y;                         # Cache for 1 year
        add_header Cache-Control "public, immutable";
        access_log off;                     # Skip logging for static assets
    }

    # ── Custom 404 / 50x error pages ─────────────────────────────────────
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;

    location = /50x.html {
        root /usr/share/nginx/html;
    }

    # ── SPA (Single Page App): always serve index.html ───────────────────
    # NOTE: this is an alternative to the "location /" block above. nginx
    # rejects duplicate "location /" blocks in one server, so use one or
    # the other.
    # location / {
    #     try_files $uri $uri/ /index.html;   # Fallback to index.html for client routing
    # }

    # ── Prevent serving dotfiles (.env, .git, etc.) ───────────────────────
    location ~ /\. {
        deny all;      # on its own this would answer 403
        return 404;    # return runs first, so clients get 404 (hides that the file exists)
    }
}

root vs alias — The Key Difference

Directive | Request | File Served
root /var/www; in location /imgs/ | /imgs/a.png | /var/www/imgs/a.png
alias /var/www/; in location /imgs/ | /imgs/a.png | /var/www/a.png
Tip: Use our Nginx Config Generator to build static file server configs visually without memorizing every directive.

3. Reverse Proxy with proxy_pass

A reverse proxy sits in front of your application server (Node.js, Python, Go, Java) and forwards client requests to it. Nginx handles SSL termination, compression, and caching while your app server focuses on business logic.

server {
    listen 80;
    server_name api.example.com;

    # ── Basic reverse proxy to a Node.js/Python app ───────────────────────
    location / {
        proxy_pass http://127.0.0.1:3000;   # App server address

        # Pass real client info to the upstream server
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support (needed for socket.io, etc.)
        proxy_http_version 1.1;
        proxy_set_header Upgrade    $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Timeouts
        proxy_connect_timeout  60s;
        proxy_send_timeout     60s;
        proxy_read_timeout     60s;

        # Buffer settings
        proxy_buffering         on;
        proxy_buffer_size       16k;
        proxy_buffers           4 32k;
        proxy_busy_buffers_size 64k;
    }

    # ── Proxy a specific path to a different service ────────────────────
    location /auth/ {
        proxy_pass http://127.0.0.1:4000/;  # Note trailing slash strips /auth prefix
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # ── PHP-FPM via FastCGI ────────────────────────────────────────────────
    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
Trailing Slash Gotcha: With proxy_pass http://backend:3000 (no trailing slash), nginx appends the full URI. With proxy_pass http://backend:3000/ (trailing slash), nginx strips the location prefix first. This matters when proxying sub-paths.

Common proxy_set_header Values Explained

Header | Variable | Purpose
X-Real-IP | $remote_addr | Client's real IP address
X-Forwarded-For | $proxy_add_x_forwarded_for | IP chain through proxies
X-Forwarded-Proto | $scheme | Original protocol (http/https)
Host | $host | Original request Host header

4. SSL/TLS with Let's Encrypt

Let's Encrypt provides free, automated SSL/TLS certificates. Install certbot and the nginx plugin, then certbot can obtain a certificate and automatically update your nginx config.

# Step 1: Install certbot
# Ubuntu/Debian:
apt install certbot python3-certbot-nginx

# Step 2: Obtain certificate (certbot edits nginx config automatically)
certbot --nginx -d example.com -d www.example.com

# Step 3: Auto-renewal (certbot installs a systemd timer)
systemctl status certbot.timer
# Manual renewal test:
certbot renew --dry-run
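If you prefer certbot's webroot mode (certonly) over the nginx plugin, the HTTP server block has to serve the ACME challenge path itself. A minimal sketch; the /var/www/certbot webroot is an assumption and must match certbot's -w flag:

```nginx
server {
    listen 80;
    server_name example.com www.example.com;

    # Let certbot's webroot mode answer ACME HTTP-01 challenges over plain HTTP
    location ^~ /.well-known/acme-challenge/ {
        root /var/www/certbot;   # must match: certbot certonly --webroot -w /var/www/certbot
    }

    # Everything else goes to HTTPS
    location / {
        return 301 https://$host$request_uri;
    }
}
```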

After certbot runs, your server block will look like:

server {
    listen 443 ssl http2;
    server_name example.com www.example.com;

    # ── Certificate paths (set by certbot) ─────────────────────────────────
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # ── Modern SSL settings (Mozilla Intermediate compatibility) ───────────
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    # ── Session caching (improves performance) ─────────────────────────────
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;

    # ── OCSP Stapling ──────────────────────────────────────────────────────
    ssl_stapling        on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# ── HTTP → HTTPS redirect ─────────────────────────────────────────────────
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}
Security Tip: Disable TLS 1.0 and 1.1 by only listing TLSv1.2 TLSv1.3. Use Mozilla SSL Config Generator to generate modern, secure cipher suites for your nginx version.
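One detail to watch: the DHE-* suites in the cipher list above need explicit Diffie-Hellman parameters, otherwise OpenSSL silently skips them. A sketch following Mozilla's recommendation; the dhparam path is an assumption:

```nginx
# Download Mozilla's predefined ffdhe2048 group once:
#   curl -o /etc/nginx/dhparam.pem https://ssl-config.mozilla.org/ffdhe2048.txt
ssl_dhparam /etc/nginx/dhparam.pem;
```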

5. HTTP/2 and HTTP/3

HTTP/2 dramatically improves performance through multiplexing (multiple requests on one connection) and header compression (HPACK). All major browsers require HTTPS for HTTP/2. The spec also defined server push, but it has since been removed from major browsers and from nginx 1.25.1 onward, so don't rely on it.

# ── HTTP/2 ────────────────────────────────────────────────────────────────
# Nginx 1.9.5+ supports HTTP/2: add "http2" to the listen directive.
# Since nginx 1.25.1, the standalone "http2 on;" directive is preferred.

server {
    listen 443 ssl http2;          # ← Enable HTTP/2 here
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;

    # HTTP/2 tuning
    http2_max_concurrent_streams 128;
    # Note: http2_idle_timeout and http2_max_requests were removed in nginx
    # 1.19.7; keepalive_timeout and keepalive_requests now cover HTTP/2 too

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}

# ── HTTP/3 (QUIC) ──────────────────────────────────────────────────────────
# Requires nginx 1.25+ compiled with --with-http_v3_module
# or OpenResty / nginxinc/nginx-quic

server {
    listen 443 ssl;
    listen 443 quic reuseport;     # ← Enable HTTP/3 via QUIC (UDP 443)
    http2 on;

    server_name example.com;
    ssl_certificate     /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;
    ssl_protocols TLSv1.3;         # HTTP/3 requires TLS 1.3

    # Advertise HTTP/3 support via Alt-Svc header
    add_header Alt-Svc 'h3=":443"; ma=86400';

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
Feature | HTTP/1.1 | HTTP/2 | HTTP/3
Multiplexing | No (limited pipelining) | Yes | Yes (QUIC streams)
Header compression | No | HPACK | QPACK
Transport | TCP | TCP | UDP (QUIC)
TLS required | No | In browsers, yes | Yes (TLS 1.3)
HOL blocking | Yes | TCP level only | No

6. Load Balancing

Nginx's upstream block defines a pool of backend servers. Use proxy_pass pointing to the upstream name to distribute traffic. Nginx supports multiple balancing algorithms out of the box.

http {
    # ── Round-Robin (default): requests distributed sequentially ───────────
    upstream backend_rr {
        server 10.0.0.1:3000;
        server 10.0.0.2:3000;
        server 10.0.0.3:3000;
    }

    # ── Weighted Round-Robin: server 1 gets 3x more traffic ────────────────
    upstream backend_weighted {
        server 10.0.0.1:3000 weight=3;
        server 10.0.0.2:3000 weight=1;
    }

    # ── least_conn: route to server with fewest active connections ──────────
    upstream backend_lc {
        least_conn;
        server 10.0.0.1:3000;
        server 10.0.0.2:3000;
        server 10.0.0.3:3000;
    }

    # ── ip_hash: sticky sessions — same client IP → same server ────────────
    upstream backend_sticky {
        ip_hash;
        server 10.0.0.1:3000;
        server 10.0.0.2:3000;
        # Mark one server as backup (only used when others are down)
        server 10.0.0.3:3000 backup;
    }

    # ── Health check & server parameters ──────────────────────────────────
    upstream backend_health {
        server 10.0.0.1:3000 max_fails=3 fail_timeout=30s;
        server 10.0.0.2:3000 max_fails=3 fail_timeout=30s;
        # max_fails: mark server as unavailable after N failed attempts
        # fail_timeout: timeframe for max_fails and duration of unavailability
    }

    server {
        listen 80;
        server_name lb.example.com;

        location / {
            proxy_pass http://backend_lc;       # Reference the upstream name
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            # Connection keepalive to upstream servers
            proxy_http_version 1.1;
            proxy_set_header Connection "";     # Remove Connection header for keepalive
        }
    }
}
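The Connection "" trick above only pays off if nginx actually keeps idle connections to the upstreams open, and that requires a keepalive directive inside the upstream block. A sketch; the pool name and connection count are assumptions:

```nginx
upstream backend_keepalive {
    least_conn;
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;

    # Keep up to 32 idle connections per worker process open to the upstreams
    keepalive 32;
}

# In the proxying location, HTTP/1.1 and an empty Connection header are still required:
# proxy_http_version 1.1;
# proxy_set_header Connection "";
```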

Load Balancing Algorithms Compared

Algorithm | Directive | Best for
Round-Robin | (default) | Stateless apps with similar request times
Weighted | weight=N | Mixed-capacity server pools
Least Connections | least_conn | Variable-length requests (e.g., file uploads)
IP Hash | ip_hash | Stateful apps needing session persistence
Random | random | Large clusters; avoids hot spots

7. Gzip Compression

Gzip reduces response size by 60–80% for text-based content (HTML, CSS, JS, JSON). This directly improves page load times, especially on slower connections. Enable it in the http block so it applies globally.

http {
    # ── Enable gzip compression ────────────────────────────────────────────
    gzip on;
    gzip_vary on;              # Add Vary: Accept-Encoding header
    gzip_proxied any;          # Compress proxied responses too
    gzip_comp_level 6;         # Compression level 1-9 (6 is a good balance)
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_min_length 256;       # Don't compress tiny files (bytes)

    # Content types to compress
    gzip_types
        text/plain
        text/css
        text/javascript
        application/javascript
        application/x-javascript
        application/json
        application/xml
        application/rss+xml
        application/atom+xml
        image/svg+xml
        font/truetype
        font/opentype
        application/vnd.ms-fontobject;
        # Note: WOFF/WOFF2 fonts are already compressed; gzipping them wastes CPU

    server {
        # ...
        # Serve pre-compressed .gz files if they exist (even faster)
        gzip_static on;     # Requires --with-http_gzip_static_module
    }
}
Tip: For even better compression, use Brotli (typically 15–20% smaller than gzip). It requires the ngx_brotli module: brotli on; brotli_comp_level 6; brotli_types text/plain text/css application/javascript application/json;

8. Caching: proxy_cache and Expires Headers

Nginx has two caching layers: browser caching (via expires and Cache-Control headers) and proxy caching (storing upstream responses on disk via proxy_cache).

Browser Cache Headers (Expires)

server {
    # ── Cache static assets aggressively ──────────────────────────────────
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
        expires 1y;
        add_header Cache-Control "public, max-age=31536000, immutable";
        access_log off;         # Don't log static asset hits
    }

    # ── Cache HTML briefly (allow revalidation) ─────────────────────────
    location ~* \.html$ {
        expires 1h;
        add_header Cache-Control "public, max-age=3600, must-revalidate";
    }

    # ── Never cache API responses ────────────────────────────────────────
    location /api/ {
        expires -1;
        add_header Cache-Control "no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0";
        proxy_pass http://127.0.0.1:3000;
    }
}

Proxy Cache (Server-Side Caching)

http {
    # ── Define a cache zone (in the http block) ────────────────────────────
    # proxy_cache_path: path levels keys_zone name:size inactive max_size
    proxy_cache_path /var/cache/nginx
        levels=1:2
        keys_zone=api_cache:10m       # 10MB zone for keys (holds ~80k keys)
        max_size=1g                    # Max disk usage for cached responses
        inactive=60m                   # Evict if not accessed for 60 minutes
        use_temp_path=off;

    server {
        location /api/ {
            proxy_cache api_cache;                        # Use the cache zone
            proxy_cache_key "$scheme$request_method$host$request_uri";
            proxy_cache_valid 200 302  10m;              # Cache 200/302 for 10 min
            proxy_cache_valid 404      1m;               # Cache 404 for 1 min
            proxy_cache_use_stale error timeout updating http_500 http_502 http_503;
            proxy_cache_lock on;                         # Only one request populates cache
            proxy_cache_min_uses 2;                      # Cache after 2 hits

            # Add header to show cache status (HIT/MISS/BYPASS)
            add_header X-Cache-Status $upstream_cache_status;

            proxy_pass http://127.0.0.1:3000;
        }

        # Bypass cache for authenticated users or specific conditions
        location /api/user/ {
            proxy_cache_bypass $cookie_session $http_authorization;
            proxy_no_cache     $cookie_session $http_authorization;
            proxy_pass http://127.0.0.1:3000;
        }
    }
}

9. Rate Limiting with limit_req

Rate limiting protects your server from abuse, DDoS attacks, and runaway scrapers. Nginx uses a leaky bucket algorithm: requests fill a bucket at the rate they arrive; the bucket drains at a defined rate. Excess requests are delayed or rejected.

http {
    # ── Define rate limit zones ────────────────────────────────────────────
    # Format: limit_req_zone <key> zone=<name>:<size> rate=<rate>;
    # Key: $binary_remote_addr stores 4 bytes per IPv4 address (16 for IPv6),
    #      more compact than the text form in $remote_addr

    limit_req_zone $binary_remote_addr zone=general:10m  rate=30r/m;  # 30 req/min
    limit_req_zone $binary_remote_addr zone=api:10m      rate=10r/s;  # 10 req/sec
    limit_req_zone $binary_remote_addr zone=login:10m    rate=5r/m;   # 5 req/min (brute force)
    limit_req_zone $binary_remote_addr zone=search:10m   rate=1r/s;   # 1 req/sec

    # Rate limit by API key instead of IP (for authenticated APIs)
    limit_req_zone $http_x_api_key zone=api_key:10m rate=100r/s;

    server {
        listen 80;

        # ── Apply rate limit to all requests ──────────────────────────────
        location / {
            limit_req zone=general burst=20 nodelay;
            # burst: allow up to 20 extra requests above the rate
            # nodelay: serve burst requests immediately (no artificial delay)
            proxy_pass http://127.0.0.1:3000;
        }

        # ── Stricter limit for API endpoints ──────────────────────────────
        location /api/ {
            limit_req zone=api burst=50 nodelay;
            limit_req_status 429;            # Return 429 Too Many Requests
            limit_req_log_level warn;        # Log at warn level

            proxy_pass http://127.0.0.1:3000;
        }

        # ── Very strict limit for login (prevent brute force) ─────────────
        location /api/auth/login {
            limit_req zone=login burst=3;    # Only burst 3, no nodelay → delay excess
            proxy_pass http://127.0.0.1:3000;
        }
    }
}
Parameter | Description
zone=name | Shared memory zone to use for tracking
burst=N | Max requests queued above the rate limit
nodelay | Process burst requests immediately (don't queue them)
delay=N | Serve the first N excess requests immediately, delay the rest
limit_req_status | HTTP status for rejected requests (default 503; prefer 429)
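limit_req caps the request rate; the companion limit_conn module caps concurrent connections per key, which helps against slow-read attacks and download hogs. A minimal sketch; the zone name and limits are assumptions:

```nginx
http {
    # One zone tracks active connections per client IP
    limit_conn_zone $binary_remote_addr zone=perip:10m;

    server {
        location /downloads/ {
            limit_conn perip 5;        # at most 5 simultaneous connections per IP
            limit_conn_status 429;     # default is 503
            limit_rate 1m;             # and throttle each connection to 1 MB/s
        }
    }
}
```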

10. Security Headers

HTTP security headers instruct browsers to enable built-in security mechanisms. Adding them to nginx is straightforward with add_header. Put shared headers in the http block or a shared config file.

server {
    listen 443 ssl http2;
    server_name example.com;

    # ── HSTS: Force HTTPS for 1 year ───────────────────────────────────────
    # WARNING: Only add includeSubDomains if ALL subdomains support HTTPS
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;

    # ── Clickjacking protection ────────────────────────────────────────────
    add_header X-Frame-Options "SAMEORIGIN" always;
    # Use "DENY" to block all framing, "SAMEORIGIN" to allow same-origin frames

    # ── MIME sniffing protection ───────────────────────────────────────────
    add_header X-Content-Type-Options "nosniff" always;

    # ── XSS protection (legacy browsers) ──────────────────────────────────
    add_header X-XSS-Protection "1; mode=block" always;

    # ── Referrer Policy ────────────────────────────────────────────────────
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    # ── Permissions Policy (restrict browser APIs) ─────────────────────────
    add_header Permissions-Policy "camera=(), microphone=(), geolocation=(self)" always;

    # ── Content Security Policy (CSP) ────────────────────────────────────
    # Start with Report-Only mode to detect violations before enforcing
    add_header Content-Security-Policy-Report-Only "default-src 'self'; script-src 'self' 'unsafe-inline' https://cdn.jsdelivr.net; style-src 'self' 'unsafe-inline'; img-src 'self' data: https:; font-src 'self' https://fonts.gstatic.com; connect-src 'self' https://api.example.com; report-uri /csp-report" always;

    # ── Enforce CSP (when you're confident in your policy) ────────────────
    # add_header Content-Security-Policy "default-src 'self'; script-src 'self' https://cdn.jsdelivr.net; ..." always;

    # ── Hide nginx version ────────────────────────────────────────────────
    server_tokens off;

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
Important: The always flag on add_header ensures headers are sent even on error responses (4xx/5xx). Without it, headers only appear on 200, 201, 204, 206, 301, 302, 303, 304, 307, 308. Also: headers set in an inner block (location) reset all inherited headers from outer blocks — be explicit.
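Because any add_header in a location wipes all inherited headers, a common pattern is to keep the security set in one snippet and include it wherever a block sets its own headers. A sketch; the snippet path is hypothetical:

```nginx
# /etc/nginx/snippets/security-headers.conf  (hypothetical path)
add_header Strict-Transport-Security "max-age=31536000" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;

# Then in every server or location that adds its own headers:
# include /etc/nginx/snippets/security-headers.conf;
```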

11. Access Logging and Error Logging

Nginx writes two types of logs: access logs (every request) and error logs (problems and nginx messages). Custom log formats help with analytics and debugging.

http {
    # ── Custom log format ─────────────────────────────────────────────────
    # Named "main" — standard extended format
    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" '
                    '$request_time $upstream_response_time';

    # JSON format (easier to parse with Logstash, Splunk, Datadog)
    log_format json_combined escape=json
        '{'
            '"time_local":"$time_local",'
            '"remote_addr":"$remote_addr",'
            '"request":"$request",'
            '"status": "$status",'
            '"body_bytes_sent":"$body_bytes_sent",'
            '"request_time":"$request_time",'
            '"http_referrer":"$http_referer",'
            '"http_user_agent":"$http_user_agent",'
            '"upstream_addr":"$upstream_addr",'
            '"upstream_response_time":"$upstream_response_time"'
        '}';

    # ── Global access and error logs ──────────────────────────────────────
    access_log /var/log/nginx/access.log main;
    error_log  /var/log/nginx/error.log warn;
    # Error log levels: debug, info, notice, warn, error, crit, alert, emerg

    server {
        # ── Per-server logs ───────────────────────────────────────────────
        access_log /var/log/nginx/example.com.access.log json_combined;
        error_log  /var/log/nginx/example.com.error.log  error;

        location /api/ {
            access_log /var/log/nginx/api.access.log main buffer=32k flush=5s;
            # buffer=32k: buffer writes (reduce disk I/O)
            # flush=5s: force flush every 5 seconds
            proxy_pass http://127.0.0.1:3000;
        }

        # ── Skip logging for health checks ───────────────────────────────
        location /healthz {
            access_log off;
            return 200 "OK";
        }

        # ── Conditional logging: skip bots and monitoring ──────────────────
        # NOTE: map is only valid directly in the http context, not inside a
        # server block. Define it at http level:
        #
        # map $http_user_agent $log_ua {
        #     ~*Googlebot     0;
        #     ~*UptimeRobot   0;
        #     default         1;
        # }
        # Then apply it in a location: access_log /path/to/log main if=$log_ua;
    }
}

Useful Nginx Log Variables

Variable | Description
$remote_addr | Client IP address
$request_time | Total request processing time (seconds)
$upstream_response_time | Time waiting for the upstream response
$upstream_cache_status | HIT, MISS, BYPASS, EXPIRED, etc.
$body_bytes_sent | Response body size in bytes
$http_referer | Referrer URL from the request
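Nginx does not rotate its own logs; on most distributions logrotate handles it, signaling the nginx master with USR1 so it reopens the log files. A typical stanza as a sketch; the retention values are assumptions:

```
/var/log/nginx/*.log {
    daily
    rotate 14             # keep two weeks
    compress
    delaycompress
    missingok
    notifempty
    sharedscripts
    postrotate
        # USR1 tells the nginx master process to reopen its log files
        [ -f /var/run/nginx.pid ] && kill -USR1 "$(cat /var/run/nginx.pid)"
    endscript
}
```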

12. Common Mistakes and Debugging

Even experienced engineers encounter nginx configuration issues. Here are the most common mistakes and how to debug them.

Debugging Workflow

# ── Step 1: Test config syntax ─────────────────────────────────────────────
nginx -t
# Output: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
#         nginx: configuration file /etc/nginx/nginx.conf test is successful

# ── Step 2: Dump complete merged config ────────────────────────────────────
nginx -T 2>/dev/null | less

# ── Step 3: Reload (zero-downtime) ────────────────────────────────────────
nginx -s reload
# or:
systemctl reload nginx

# ── Step 4: Check error log in real-time ────────────────────────────────────
tail -f /var/log/nginx/error.log

# ── Step 5: Enable debug logging temporarily ────────────────────────────────
# In nginx.conf:
error_log /var/log/nginx/error.log debug;
# Reload, reproduce the issue, check logs, then set back to warn

# ── Step 6: Test with curl ─────────────────────────────────────────────────
curl -I https://example.com                        # Check response headers
curl -v https://example.com                        # Verbose (see request + response)
curl -H "Host: example.com" http://localhost       # Test specific server block
curl -L -o /dev/null -s -w "%{http_code}" https://example.com  # Just status code

# ── Step 7: Check which nginx is running ──────────────────────────────────
nginx -v                         # Version
nginx -V                         # Version + compile flags + modules
ps aux | grep nginx              # Running processes
ss -tlnp | grep :80              # What's listening on port 80

Common Mistakes

Mistake 1: Missing semicolons
# Wrong — missing semicolon causes cryptic "unexpected }" error
location / {
    root /var/www/html    ← no semicolon!
}

# Correct
location / {
    root /var/www/html;
}
Mistake 2: Forgetting add_header inheritance reset
# Wrong: security headers in http block are LOST in location block
http {
    add_header X-Frame-Options "SAMEORIGIN";  # Set here
    server {
        location / {
            add_header Content-Type "text/html";  # This RESETS all parent headers!
            # X-Frame-Options is now gone from this location
        }
    }
}

# Correct: repeat all headers or use include
location / {
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header Content-Type "text/html";
}
Mistake 3: Conflicting location blocks
# nginx location matching priority:
# 1. = (exact match) — highest priority
# 2. ^~ (prefix, no regex) — stops regex search if matched
# 3. ~ and ~* (regex) — case-sensitive / case-insensitive
# 4. / (prefix) — lowest priority catch-all

# Common confusion: the regex DOES beat the plain prefix below
location ~ \.php$ { ... }
location /api/    { ... }  # A request to /api/index.php matches the regex ABOVE, not this prefix

# Use = for exact paths, ^~ to prevent regex matching
location = /favicon.ico { access_log off; return 204; }
location ^~ /static/    { root /var/www; expires 1y; }
Mistake 4: proxy_pass trailing slash with prefixed locations
# Behavior difference:
location /app/ {
    proxy_pass http://backend;         # /app/foo → upstream receives /app/foo
}
location /app/ {
    proxy_pass http://backend/;        # /app/foo → upstream receives /foo (strips /app/)
}
location /app/ {
    proxy_pass http://backend/app/;    # /app/foo → upstream receives /app/foo (explicit)
}
Quick Reference: Most Useful nginx Commands
nginx -t              # Test configuration
nginx -T              # Test + dump full config
nginx -s reload       # Graceful reload
nginx -s quit         # Graceful shutdown
nginx -s stop         # Fast shutdown
systemctl status nginx
journalctl -u nginx --since "10 min ago"  # nginx systemd logs

13. Complete Production nginx.conf Example

Here is a full, production-ready configuration combining all the techniques covered: HTTPS, HTTP/2, reverse proxy, gzip, caching, rate limiting, and security headers.

# /etc/nginx/nginx.conf

user nginx;
worker_processes auto;
worker_rlimit_nofile 65535;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 65535;
    use epoll;
    multi_accept on;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # ── Logging ────────────────────────────────────────────────────────────
    log_format main '$remote_addr - [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_time';
    access_log /var/log/nginx/access.log main buffer=32k;
    error_log  /var/log/nginx/error.log warn;

    # ── Performance ────────────────────────────────────────────────────────
    sendfile           on;
    tcp_nopush         on;
    tcp_nodelay        on;
    keepalive_timeout  65;
    types_hash_max_size 2048;
    server_tokens      off;

    # ── Gzip ───────────────────────────────────────────────────────────────
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_min_length 256;
    gzip_types text/plain text/css application/javascript application/json image/svg+xml font/woff2;

    # ── Rate Limiting Zones ─────────────────────────────────────────────────
    limit_req_zone $binary_remote_addr zone=global:10m rate=30r/s;
    limit_req_zone $binary_remote_addr zone=api:10m    rate=10r/s;
    limit_req_zone $binary_remote_addr zone=login:10m  rate=5r/m;

    # ── Proxy Cache ────────────────────────────────────────────────────────
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=main_cache:10m max_size=1g inactive=60m;

    # ── HTTP → HTTPS redirect ─────────────────────────────────────────────
    server {
        listen 80;
        server_name example.com www.example.com;
        return 301 https://$host$request_uri;
    }

    # ── Main HTTPS server ─────────────────────────────────────────────────
    server {
        listen 443 ssl http2;
        server_name example.com www.example.com;

        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
        ssl_prefer_server_ciphers off;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 1d;
        ssl_session_tickets off;
        ssl_stapling on;
        ssl_stapling_verify on;
        resolver 1.1.1.1 8.8.8.8 valid=300s;   # OCSP stapling needs a resolver to reach the responder (IPs are examples)

        # Security headers
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header Referrer-Policy "strict-origin-when-cross-origin" always;
        add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline'" always;

        root /var/www/html;

        # Static assets with long-term caching.
        # Caveat: defining add_header inside a location discards ALL inherited
        # add_header directives — the server-level security headers above will
        # not be sent for these responses unless repeated here.
        location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2)$ {
            expires 1y;
            add_header Cache-Control "public, immutable";
            access_log off;
        }

        # API with rate limiting and proxy caching
        location /api/ {
            limit_req zone=api burst=20 nodelay;
            limit_req_status 429;

            proxy_pass http://127.0.0.1:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_http_version 1.1;
            proxy_set_header Connection "";

            proxy_cache main_cache;
            proxy_cache_valid 200 5m;
            proxy_cache_bypass $http_cache_control;
            add_header X-Cache-Status $upstream_cache_status;
        }

        # Login endpoint — strict rate limit
        location /api/auth/ {
            limit_req zone=login burst=3;
            proxy_pass http://127.0.0.1:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }

        # Health check — no rate limit, no cache, no logging
        location /healthz {
            access_log off;
            proxy_pass http://127.0.0.1:3000/healthz;
        }

        # SPA fallback
        location / {
            try_files $uri $uri/ /index.html;
        }

        # Block dotfiles (.git, .env, etc.) — answer 404 so their existence
        # isn't revealed. (return runs at the rewrite phase, so a deny all
        # alongside it would never be reached.)
        location ~ /\. {
            return 404;
        }
    }
}

Generate nginx.conf Visually

Use the DevToolBox Nginx Config Generator to build production-ready nginx configurations with a visual interface. Supports static sites, reverse proxy, load balancing, SSL, and more.

Open Nginx Config Generator →

14. Frequently Asked Questions

Where is the nginx.conf file located?

The main nginx.conf is typically at /etc/nginx/nginx.conf on Linux systems. Site-specific configs live in /etc/nginx/sites-available/ (symlinked to /etc/nginx/sites-enabled/) or as *.conf files in /etc/nginx/conf.d/. On macOS with Homebrew, it's at /usr/local/etc/nginx/nginx.conf or /opt/homebrew/etc/nginx/nginx.conf.
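The per-site files only take effect because the main file pulls them in with include directives. A minimal sketch of that wiring, assuming the Debian/Ubuntu layout described above:

```nginx
# Inside the http { } block of /etc/nginx/nginx.conf
http {
    include /etc/nginx/conf.d/*.conf;      # Drop-in *.conf files
    include /etc/nginx/sites-enabled/*;    # Symlinks into sites-available/
}
```

Disabling a site is then just removing its symlink from sites-enabled/ and reloading — the original file in sites-available/ stays put.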

How do I reload nginx without downtime?

Run nginx -t first to validate the config. If it passes, run nginx -s reload or systemctl reload nginx. This sends SIGHUP to the master process, which starts new workers with the updated config and gracefully drains old workers. Active connections continue on old workers until they finish — zero downtime.

What is the difference between proxy_pass and fastcgi_pass?

proxy_pass proxies HTTP/HTTPS requests to an upstream server speaking HTTP (Node.js, Python/Gunicorn, Go, Ruby/Puma, etc.). fastcgi_pass communicates over the FastCGI protocol specifically for FastCGI applications like PHP-FPM. Use fastcgi_pass for PHP; use proxy_pass for everything else.
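As an illustration, a hypothetical PHP site can combine both in one server block — fastcgi_pass for .php files, proxy_pass for an HTTP backend. The socket path and backend port below are assumptions (the PHP-FPM socket location varies by distro and PHP version):

```nginx
server {
    listen 80;
    server_name example.com;
    root /var/www/html;

    # PHP files go to PHP-FPM over the FastCGI protocol
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;   # distro-dependent path
    }

    # Everything under /api/ goes to a plain HTTP backend
    location /api/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
    }
}
```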

How does location block matching work?

Nginx evaluates location blocks in this priority order: (1) = /exact — exact match, highest priority; (2) ^~ /prefix — prefix match that stops regex evaluation; (3) ~ regex or ~* regex — regex, case-sensitive/insensitive; (4) /prefix — longest prefix match as fallback. When multiple prefix locations match, nginx picks the longest one, then checks regexes.
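A small sketch of how those rules interact (the URIs and responses are made up for illustration):

```nginx
server {
    # (1) Exact match — wins immediately for GET /
    location = / {
        return 200 "exact";
    }

    # (2) ^~ prefix — if this is the longest matching prefix, regexes are skipped
    location ^~ /static/ {
        root /var/www;
    }

    # (3) Regex — checked in file order, first match wins
    location ~* \.(png|jpg)$ {
        expires 30d;
    }

    # (4) Plain prefix — used only when no regex matched
    location /docs/ {
        return 200 "prefix";
    }
}
# /static/logo.png → block (2): ^~ suppresses regex evaluation
# /docs/logo.png   → block (3): a regex match beats a plain prefix
```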

How do I set up WebSocket proxying?

WebSocket upgrades require HTTP/1.1 plus two hop-by-hop headers that nginx does not forward by default. Add to your proxy location:

location /ws/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 3600s;   # Keep connection open for WebSocket
}
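A common refinement (sketched here) uses a map in the http block so Connection is only set to "upgrade" when the client actually requested one, and the connection is closed otherwise:

```nginx
# In the http { } block
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;      # plain HTTP request: no Upgrade header sent upstream
}

server {
    location /ws/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_read_timeout 3600s;   # keep idle WebSocket connections open
    }
}
```

This lets the same location serve both WebSocket and ordinary HTTP traffic without always forcing an Upgrade.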

How do I redirect www to non-www (or vice versa)?

# Redirect www.example.com → example.com
server {
    listen 443 ssl http2;
    server_name www.example.com;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    return 301 https://example.com$request_uri;
}

# Redirect example.com → www.example.com
server {
    listen 443 ssl http2;
    server_name example.com;
    ssl_certificate /etc/letsencrypt/live/www.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem;
    return 301 https://www.example.com$request_uri;
}

Related Tools and Resources

Continue mastering nginx and web server configuration with these DevToolBox resources:


Try These Related Tools

- Nginx Config Generator
- .htaccess Generator
- CSP Header Generator

Related Articles

Nginx Config Generator - Generate nginx.conf Online (Free Tool + Complete Guide)

Generate production-ready nginx.conf files online. Covers server blocks, reverse proxy, SSL/TLS, load balancing, gzip, security headers, rate limiting, caching, and common patterns for static sites, SPAs, API gateways, and WordPress.

Nginx location Block and Regex Guide

A deep dive into Nginx location blocks: exact matching, prefixes, regex, and priority rules.

Nginx Reverse Proxy Configuration: Load Balancing, SSL, and Caching

Configure Nginx as a reverse proxy: upstream servers, load balancing, SSL termination, and caching.

Nginx Configuration Guide: From Basic Setup to Production

A complete Nginx configuration guide. Learn server blocks, reverse proxy, SSL/TLS, and load balancing.

Nginx vs Apache 2026: Which Web Server Should You Choose?

An Nginx and Apache comparison for 2026: performance, configuration, and use cases.