
Nginx Reverse Proxy Configuration: Load Balancing, SSL, and Caching

13 min read · by DevToolBox

A reverse proxy sits between your clients and backend servers, forwarding requests and returning responses. Nginx remains the most popular choice for this role in 2026: it handles SSL termination, load balancing, caching, and WebSocket proxying with minimal configuration. This guide covers the production-ready patterns you are most likely to need.

Basic Reverse Proxy Setup

The simplest reverse proxy forwards all requests to a single backend server. The key headers ensure your backend sees the real client IP and protocol.

# Basic reverse proxy configuration
server {
    listen 80;
    server_name example.com www.example.com;

    location / {
        proxy_pass http://localhost:3000;

        # Pass original request headers to the backend
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Timeouts (seconds)
        proxy_connect_timeout 60s;
        proxy_send_timeout    60s;
        proxy_read_timeout    60s;

        # Buffer settings
        proxy_buffering    on;
        proxy_buffer_size  16k;
        proxy_buffers      4 16k;
    }
}
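
With this in place, a quick smoke test from the server itself (a sketch, assuming the backend from the example is listening on port 3000 and you are testing locally):

# Send a request through Nginx and inspect the response status and headers
curl -i -H "Host: example.com" http://127.0.0.1/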

SSL/TLS Termination

SSL termination at Nginx means your backend servers only handle plain HTTP internally, which simplifies their configuration; Nginx absorbs the encryption overhead. One note on syntax: as of nginx 1.25.1 the http2 parameter on the listen directive is deprecated in favor of a standalone http2 on; directive. The examples below use the older combined form, which still works.

# Full SSL/TLS reverse proxy with HTTP redirect
server {
    listen 80;
    server_name example.com www.example.com;
    # Permanently redirect all HTTP traffic to HTTPS, preserving the requested host
    # ($server_name would always redirect to the first name, collapsing www)
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com www.example.com;

    # SSL certificate (Let's Encrypt)
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Modern SSL settings (2026 recommended)
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
    ssl_prefer_server_ciphers off;

    # SSL session cache
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;

    # HSTS: tell browsers to always use HTTPS (6 months)
    add_header Strict-Transport-Security "max-age=15768000" always;

    # OCSP stapling
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=300s;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
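
The certificate paths above are what Let's Encrypt's certbot writes by default. A typical issuance command, assuming certbot and its nginx plugin are installed (a sketch, not the only workflow):

# Obtain a certificate and let certbot adjust the nginx config
sudo certbot --nginx -d example.com -d www.example.com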

Upstream Blocks and Load Balancing

The upstream block defines a pool of backend servers. Nginx supports round-robin, least-connections, and IP-hash load balancing algorithms.

# Load balancing across multiple backends
upstream app_servers {
    # Default algorithm: round-robin.
    # max_fails/fail_timeout enable passive health checks: after 3 failed
    # attempts a server is marked down for 30s. Active health checks
    # require Nginx Plus.
    server 10.0.0.1:3000 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:3000 max_fails=3 fail_timeout=30s;
    server 10.0.0.3:3000 max_fails=3 fail_timeout=30s;

    # Least connections algorithm
    # least_conn;

    # IP hash (sticky sessions)
    # ip_hash;

    # Weighted distribution
    # server 10.0.0.1:3000 weight=3;
    # server 10.0.0.2:3000 weight=1;

    # Keepalive connections to backends
    keepalive 32;
}

server {
    listen 443 ssl http2;
    server_name example.com;

    # ... SSL config omitted for brevity ...

    location / {
        proxy_pass http://app_servers;

        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Connection        "";
        proxy_http_version 1.1;

        proxy_connect_timeout 5s;
        proxy_read_timeout    60s;
    }
}

WebSocket Proxying

WebSocket connections require special handling because the client upgrades the HTTP connection to a persistent, bidirectional WebSocket session. The Upgrade and Connection headers must be forwarded for the handshake to reach the backend.

# WebSocket proxy configuration
server {
    listen 443 ssl http2;
    server_name ws.example.com;

    # ... SSL config ...

    location /ws/ {
        proxy_pass http://localhost:8080;

        # WebSocket upgrade headers
        proxy_http_version 1.1;
        proxy_set_header Upgrade    $http_upgrade;
        proxy_set_header Connection "upgrade";

        proxy_set_header Host            $host;
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Long timeout for persistent connections
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;
    }

    # Regular HTTP traffic on the same server
    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host              $host;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
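
One refinement: the configuration above hardcodes Connection "upgrade", which is fine for a dedicated WebSocket location. If a single location must serve both WebSocket and plain HTTP traffic, the standard pattern from the nginx docs is a map block at the http level:

# At http{} level: set Connection based on whether the client requested an upgrade
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Then, inside the location block:
#     proxy_set_header Upgrade    $http_upgrade;
#     proxy_set_header Connection $connection_upgrade;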

Proxy Caching

Nginx can cache backend responses to reduce load and improve latency for cacheable content.

# Caching with proxy_cache
proxy_cache_path /var/cache/nginx
    levels=1:2
    keys_zone=app_cache:10m
    max_size=1g
    inactive=60m
    use_temp_path=off;

server {
    listen 443 ssl http2;
    server_name example.com;

    # ... SSL config ...

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;

        # Enable cache
        proxy_cache         app_cache;
        proxy_cache_key     "$scheme$request_method$host$request_uri";
        proxy_cache_valid   200 302 10m;
        proxy_cache_valid   404     1m;

        # Return stale content while refreshing
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;

        # Add cache status header for debugging
        add_header X-Cache-Status $upstream_cache_status;

        # Don't cache if Cookie or Authorization header present
        proxy_cache_bypass $http_cookie $http_authorization;
        proxy_no_cache     $http_cookie $http_authorization;
    }

    # Bypass cache for API routes
    location /api/ {
        proxy_pass http://localhost:3000;
        proxy_cache_bypass 1;
        proxy_no_cache     1;
    }
}
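
A refinement worth knowing about: proxy_cache_lock collapses concurrent misses for the same key into a single upstream request, which protects backends from cache stampedes on hot URLs. A minimal sketch, added inside the cached location:

# Only one request at a time populates a given cache entry; others wait,
# and if the lock is not released within the timeout they are passed upstream
proxy_cache_lock on;
proxy_cache_lock_timeout 5s;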

Quick Reference: Key Directives

Directive              | Description
-----------------------|------------------------------------------------------
proxy_pass             | Upstream backend URL
proxy_set_header       | Modify or add request headers sent to the backend
proxy_connect_timeout  | Time allowed to establish a connection to the backend
proxy_read_timeout     | Time allowed to read the response from the backend
proxy_send_timeout     | Time allowed to send the request to the backend
proxy_buffering        | Buffer backend responses (on/off)
proxy_cache            | Enable caching using a named cache zone
proxy_cache_valid      | Cache duration per status code
proxy_http_version     | HTTP version for backend (set 1.1 for keepalive)
upstream               | Define a pool of backend servers
keepalive              | Max idle keepalive connections to backends

Production Best Practices

  1. Always test the configuration before reloading: nginx -t. Never reload an untested config in production (see the snippet after this list).
  2. Use proxy_set_header X-Forwarded-Proto $scheme so your backend knows whether the original request was HTTP or HTTPS.
  3. Set appropriate timeouts. The default proxy_read_timeout is 60s — increase for long-running requests (file uploads, reports).
  4. Enable keepalive connections to backends using proxy_http_version 1.1 and keepalive in the upstream block for better performance.
  5. Monitor /var/log/nginx/error.log and access.log. Set up log rotation with logrotate.
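
For item 1, a minimal test-then-reload sequence (assuming a systemd-managed nginx; substitute your init system's reload command otherwise):

# Validate the configuration; reload only if the test passes
nginx -t && systemctl reload nginx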

Frequently Asked Questions

How do I pass the real client IP to my backend?

Add these headers: proxy_set_header X-Real-IP $remote_addr and proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for. Then, in your backend, read X-Real-IP (the direct client) or the left-most entry in X-Forwarded-For. If there are multiple proxies in front of Nginx, configure trusted proxy IPs, as in the sketch below.
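
For the multi-proxy case, nginx's realip module can rewrite $remote_addr itself based on a list of trusted proxies. A minimal sketch (the 10.0.0.0/8 range is a placeholder for your actual load balancer addresses):

# Trust X-Forwarded-For only when the connection comes from these proxies
set_real_ip_from 10.0.0.0/8;
real_ip_header   X-Forwarded-For;
real_ip_recursive on;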

How do I handle multiple backends with different paths?

Use multiple location blocks: location /api/ { proxy_pass http://api-server:8080; } and location /app/ { proxy_pass http://app-server:3000; }. The trailing slash in proxy_pass matters: with a URI part (even just /), the matched location prefix is replaced, so proxy_pass http://backend/ strips the /api prefix before forwarding.
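
As a concrete sketch (api-server and app-server are placeholder hostnames):

location /api/ {
    # No URI part after the host: /api/users reaches the backend as /api/users
    proxy_pass http://api-server:8080;
}

location /app/ {
    # Trailing slash: the /app/ prefix is replaced, so /app/page becomes /page
    proxy_pass http://app-server:3000/;
}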

What is the difference between proxy_pass with and without a trailing slash?

Without trailing slash: proxy_pass http://backend — the full URI including the location path is forwarded. With trailing slash: proxy_pass http://backend/ — the location path prefix is removed. Example: location /app/ with proxy_pass http://backend/ will forward /app/page as /page to the backend.

How do I debug proxy connection issues?

Check /var/log/nginx/error.log first. Common issues: (1) upstream connection refused: the backend is not running or the port is wrong; (2) upstream timed out: increase proxy_read_timeout; (3) 502 Bad Gateway: the backend crashed or returned an invalid response. For verbose logging, temporarily set error_log /var/log/nginx/error.log debug; (the debug level requires an nginx build compiled with --with-debug).

How do I configure Nginx to use HTTP/2 for backends?

Nginx's proxy module speaks HTTP/1.x to upstreams: HTTP/1.0 by default, or HTTP/1.1 with proxy_http_version 1.1. HTTP/2 applies only to client-to-Nginx connections (listen 443 ssl http2). The one exception is gRPC: grpc_pass talks HTTP/2 to gRPC backends. Generic HTTP/2 to upstream connections is not supported by open-source Nginx, and as far as the official documentation goes, not by Nginx Plus either.
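
For the gRPC case, a minimal grpc_pass sketch (port 50051 is the conventional gRPC port, assumed here):

# Nginx speaks HTTP/2 to the gRPC backend
location / {
    grpc_pass grpc://localhost:50051;
}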
