
Nginx Configuration Examples: Reverse Proxy, SSL, and Static Sites

11 min read · by DevToolBox

Nginx is one of the world's most popular web servers, powering roughly a third of all websites. Whether you need to serve static files, set up a reverse proxy, configure SSL, or load-balance traffic across multiple backends, this guide provides production-ready examples you can copy and adapt. Every configuration block includes comments explaining each directive.

Nginx Basics

Nginx (pronounced "engine-x") is a high-performance HTTP server, reverse proxy, and load balancer. Its event-driven architecture handles thousands of concurrent connections with minimal memory. The main configuration file is typically located at /etc/nginx/nginx.conf, with site-specific configs in /etc/nginx/conf.d/ or /etc/nginx/sites-available/.

# Main configuration file structure
# /etc/nginx/nginx.conf

user nginx;
worker_processes auto;          # One worker per CPU core
error_log /var/log/nginx/error.log warn;
pid /run/nginx.pid;

events {
    worker_connections 1024;    # Max connections per worker
    multi_accept on;            # Accept multiple connections at once
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Logging format
    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';

    access_log /var/log/nginx/access.log main;

    sendfile on;                # Efficient file transfer
    tcp_nopush on;              # Optimize TCP packets
    tcp_nodelay on;             # Disable Nagle's algorithm
    keepalive_timeout 65;       # Keep connections alive
    types_hash_max_size 2048;

    # Include site configurations
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
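The main log format above is plain, space-delimited text, so standard shell tools can summarize it. A quick sketch; the log lines below are fabricated samples in that format, and field 9 holds the $status value:

```shell
# Create two sample lines in the `main` log format
printf '%s\n' \
  '203.0.113.7 - - [10/Oct/2024:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "curl/8.0"' \
  '203.0.113.8 - - [10/Oct/2024:13:55:37 +0000] "GET /missing HTTP/1.1" 404 153 "-" "curl/8.0"' \
  > /tmp/sample_access.log

# Count requests per status code (field 9 is the $status column)
awk '{ count[$9]++ } END { for (s in count) print s, count[s] }' /tmp/sample_access.log
```

Against a real deployment you would point awk at /var/log/nginx/access.log instead of the sample file.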

Static Site Configuration

The simplest and most common use case: serving HTML, CSS, JavaScript, and image files directly from disk. This configuration includes caching headers for optimal performance.

# Static website configuration
# /etc/nginx/conf.d/static-site.conf

server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    # Document root
    root /var/www/example.com/html;
    index index.html index.htm;

    # Main location block
    location / {
        try_files $uri $uri/ =404;
    }

    # Cache static assets aggressively
    location ~* \.(css|js|jpg|jpeg|png|gif|ico|svg|woff|woff2|ttf|eot)$ {
        expires 30d;
        add_header Cache-Control "public, immutable";
        access_log off;
    }

    # Cache HTML files with shorter duration
    location ~* \.html$ {
        expires 1h;
        add_header Cache-Control "public, must-revalidate";
    }

    # Deny access to hidden files (.htaccess, .git, etc.)
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    # Custom error pages
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    access_log /var/log/nginx/example.com.access.log;
    error_log /var/log/nginx/example.com.error.log;
}
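Beyond the client-side caching headers above, nginx can also cache file metadata and open descriptors on the server side, which helps busy static sites. A hedged sketch for the server or http block; the values are illustrative starting points, not defaults:

```nginx
# Cache open file descriptors, sizes, and modification times
open_file_cache max=10000 inactive=20s;   # up to 10k cached entries, evict after 20s idle
open_file_cache_valid 30s;                # revalidate cached entries every 30s
open_file_cache_min_uses 2;               # only cache files requested at least twice
open_file_cache_errors on;                # also cache "not found" lookups
```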

Reverse Proxy Configuration

A reverse proxy sits between clients and your backend application (Node.js, Python, Go, etc.), forwarding requests and returning responses. This is the most common production setup for web applications.

# Reverse proxy to Node.js/Python/Go application
# /etc/nginx/conf.d/app-proxy.conf

server {
    listen 80;
    server_name app.example.com;

    # Max upload size
    client_max_body_size 50M;

    # Proxy all requests to the backend application
    location / {
        proxy_pass http://127.0.0.1:3000;

        # Pass the real client IP to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support (for Socket.IO, etc.)
        # Note: hardcoding Connection "upgrade" is fine for WebSocket-heavy
        # backends; for mixed traffic, the more robust pattern maps
        # $http_upgrade to "upgrade" or "" and uses that variable here
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;

        # Buffering settings
        proxy_buffering on;
        proxy_buffer_size 4k;
        proxy_buffers 8 4k;
    }

    # Serve static files directly (bypass the backend)
    location /static/ {
        alias /var/www/app/static/;
        expires 30d;
        add_header Cache-Control "public, immutable";
    }

    # Health check endpoint
    location /health {
        proxy_pass http://127.0.0.1:3000/health;
        access_log off;
    }

    access_log /var/log/nginx/app.access.log;
    error_log /var/log/nginx/app.error.log;
}
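The buffering settings above suit ordinary request/response traffic, but they delay Server-Sent Events and other streaming responses. A sketch for a streaming endpoint, assuming a hypothetical /events route on the same backend:

```nginx
# Streaming/SSE endpoint: deliver bytes to the client as they arrive
location /events {
    proxy_pass http://127.0.0.1:3000/events;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_buffering off;            # forward the response without buffering
    proxy_read_timeout 1h;          # allow long-lived connections
}
```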

SSL/TLS Configuration

Securing your site with HTTPS is mandatory in modern web development. This configuration uses Let's Encrypt certificates and follows current best practices for TLS security, including strong cipher suites and HSTS.

# SSL/TLS configuration with Let's Encrypt
# /etc/nginx/conf.d/ssl-site.conf

# Redirect all HTTP traffic to HTTPS
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    # Let's Encrypt ACME challenge
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    # 301 permanent redirect to HTTPS
    # ($host preserves whichever domain the client requested;
    # $server_name would always use the first name listed above)
    location / {
        return 301 https://$host$request_uri;
    }
}

# HTTPS server block
server {
    listen 443 ssl http2;       # on nginx >= 1.25.1, prefer "listen 443 ssl;" plus "http2 on;"
    listen [::]:443 ssl http2;
    server_name example.com www.example.com;

    # Let's Encrypt certificate paths
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # SSL session settings
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    # Modern TLS configuration (TLS 1.2 + 1.3 only)
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    # OCSP Stapling (faster certificate verification)
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;

    # HSTS (force HTTPS for 2 years, including subdomains)
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;

    root /var/www/example.com/html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    access_log /var/log/nginx/example.com.ssl.access.log;
    error_log /var/log/nginx/example.com.ssl.error.log;
}
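One detail worth flagging: the DHE-* ciphers in the list above are only offered when a Diffie-Hellman parameters file is configured. A sketch (the path is an assumption; place the file wherever you keep TLS material):

```nginx
# Generate once with: openssl dhparam -out /etc/nginx/dhparam.pem 2048
# Without this directive, the DHE-RSA-* ciphers above are silently never negotiated
ssl_dhparam /etc/nginx/dhparam.pem;
```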

SPA (React/Vue/Angular) Configuration

Single Page Applications use client-side routing, which means all routes must fall back to index.html. This configuration handles that correctly while also serving static assets efficiently.

# SPA configuration (React, Vue, Angular, Next.js static export)
# /etc/nginx/conf.d/spa.conf

server {
    listen 80;
    server_name spa.example.com;

    root /var/www/spa/dist;
    index index.html;

    # The key directive for SPAs: fallback to index.html
    # This ensures client-side routing works correctly
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Cache JavaScript and CSS bundles (with hash in filename)
    location ~* \.(js|css)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        access_log off;
    }

    # Cache images, fonts, and media
    location ~* \.(jpg|jpeg|png|gif|ico|svg|webp|avif|woff|woff2|ttf|eot|mp4|webm)$ {
        expires 30d;
        add_header Cache-Control "public";
        access_log off;
    }

    # Do NOT cache index.html (always serve the latest version)
    location = /index.html {
        expires -1;
        add_header Cache-Control "no-store, no-cache, must-revalidate";
    }

    # API proxy (forward /api requests to the backend)
    location /api/ {
        proxy_pass http://127.0.0.1:4000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Deny access to source maps in production
    location ~* \.map$ {
        deny all;
    }
}
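A common variation is serving the SPA under a subpath rather than the site root. A hedged sketch for a hypothetical /app/ prefix, assuming the build output stays in /var/www/spa/dist; alias combined with try_files has had edge cases historically, so test on your nginx version:

```nginx
# Serve the SPA under /app/ (trailing slashes on both location and alias matter)
location /app/ {
    alias /var/www/spa/dist/;
    try_files $uri $uri/ /app/index.html;   # the fallback URI re-enters this location
}
```

If this misbehaves on your version, the simpler alternative is copying the build into a matching /app/ subdirectory under the document root and using root instead of alias.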

Load Balancing

Distribute incoming traffic across multiple backend servers for high availability and better performance. Nginx supports several load-balancing algorithms out of the box.

# Load balancing across multiple backend servers
# /etc/nginx/conf.d/load-balancer.conf

# Define backend server group
upstream app_servers {
    # Round-robin (default) - requests distributed evenly
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
    server 10.0.0.3:3000;

    # Mark a server as backup (used only when others are down)
    server 10.0.0.4:3000 backup;

    # Passive health checks: a server is marked down for fail_timeout
    # after max_fails failed attempts (defaults: max_fails=1, fail_timeout=10s)
    # e.g. server 10.0.0.1:3000 max_fails=3 fail_timeout=30s;
}

# Weighted load balancing (send more traffic to powerful servers)
upstream app_weighted {
    server 10.0.0.1:3000 weight=5;   # Gets 5x the traffic
    server 10.0.0.2:3000 weight=3;   # Gets 3x the traffic
    server 10.0.0.3:3000 weight=1;   # Gets 1x the traffic
}

# Least connections (send to the server with fewest active requests)
upstream app_least_conn {
    least_conn;
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
    server 10.0.0.3:3000;
}

# IP hash (same client always goes to same server - sticky sessions)
upstream app_ip_hash {
    ip_hash;
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
    server 10.0.0.3:3000;
}

server {
    listen 80;
    server_name lb.example.com;

    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Connection keep-alive to backends
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }

    # Simple health check endpoint
    location /nginx-health {
        access_log off;
        default_type text/plain;    # sets Content-Type for the return body
        return 200 "OK";
    }
}
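Clearing the Connection header above enables backend keep-alive, but only if the upstream group also keeps idle connections open. A sketch; the pool size is illustrative:

```nginx
upstream app_keepalive {
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
    keepalive 32;    # keep up to 32 idle connections per worker process
}
```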

Security Headers

HTTP security headers protect your site from common attacks like clickjacking, XSS, and content injection. These headers should be added to every production Nginx configuration.

# Security headers configuration
# Add these inside your server {} block or create a snippet

# /etc/nginx/snippets/security-headers.conf
# Include with: include /etc/nginx/snippets/security-headers.conf;

# Prevent clickjacking: deny embedding in iframes
add_header X-Frame-Options "SAMEORIGIN" always;

# Prevent MIME-type sniffing
add_header X-Content-Type-Options "nosniff" always;

# Enable XSS protection (legacy browsers)
add_header X-XSS-Protection "1; mode=block" always;

# Control referrer information sent with requests
add_header Referrer-Policy "strict-origin-when-cross-origin" always;

# Content Security Policy (customize based on your needs)
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' https://cdn.example.com; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com; img-src 'self' data: https:; font-src 'self' https://fonts.gstatic.com; connect-src 'self' https://api.example.com; frame-ancestors 'self';" always;

# Permissions Policy (formerly Feature-Policy)
add_header Permissions-Policy "camera=(), microphone=(), geolocation=(), interest-cohort=()" always;

# Prevent information leakage
add_header X-Permitted-Cross-Domain-Policies "none" always;

# HSTS (only add if you have SSL configured)
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;

# Hide Nginx version number
server_tokens off;
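One nginx gotcha to keep in mind with this snippet: add_header directives are inherited from an outer block only when the inner block defines none of its own. A sketch of the pitfall and the fix:

```nginx
server {
    include /etc/nginx/snippets/security-headers.conf;

    location /downloads/ {
        # Defining any add_header here cancels ALL inherited add_header
        # directives, so the snippet must be re-included in this block
        add_header Content-Disposition "attachment";
        include /etc/nginx/snippets/security-headers.conf;
    }
}
```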

Gzip Compression

Enabling Gzip compression can reduce the size of transferred responses by up to 90%, significantly improving page load times. This configuration compresses all text-based content types.

# Gzip compression configuration
# Add inside the http {} block in nginx.conf

# Enable gzip compression
gzip on;

# Minimum file size to compress (skip tiny files)
gzip_min_length 256;

# Compression level (1-9, higher = more CPU, smaller files)
# Level 6 is a good balance between compression ratio and CPU usage
gzip_comp_level 6;

# Number and size of compression buffers
gzip_buffers 16 8k;

# Compress responses for HTTP/1.0 clients too
gzip_http_version 1.0;

# Compress all text-based content types
gzip_types
    text/plain
    text/css
    text/xml
    text/javascript
    application/json
    application/javascript
    application/x-javascript
    application/xml
    application/xml+rss
    application/atom+xml
    application/vnd.ms-fontobject
    font/opentype
    font/ttf
    image/svg+xml
    image/x-icon;

# Add Vary: Accept-Encoding header (important for caching proxies)
gzip_vary on;

# Disable gzip for old IE browsers
gzip_disable "MSIE [1-6]\.";

# Enable gzip for proxied requests too
gzip_proxied any;

Rate Limiting

Rate limiting protects your server from abuse, brute-force attacks, and DDoS. Nginx's limit_req module uses a leaky bucket algorithm to control request rates.

# Rate limiting configuration
# Define zones in the http {} block, apply in server/location blocks

# ── Define rate limit zones (in http {} block) ──

# General rate limit: 10 requests/second per IP
limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;

# Strict rate limit for login/auth: 5 requests/minute per IP
limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;

# API rate limit: 30 requests/second per IP
limit_req_zone $binary_remote_addr zone=api:10m rate=30r/s;

# Rate limit by server name (protect against virtual host abuse)
limit_req_zone $server_name zone=per_server:10m rate=100r/s;

# ── Apply rate limits (in server {} block) ──

server {
    listen 80;
    server_name api.example.com;

    # Custom error page for rate-limited requests
    error_page 429 = @rate_limited;
    location @rate_limited {
        default_type application/json;
        return 429 '{"error": "Too many requests. Please try again later."}';
    }

    # General pages: allow burst of 20, no delay for first 10
    location / {
        limit_req zone=general burst=20 nodelay;
        limit_req_status 429;
        proxy_pass http://127.0.0.1:3000;
    }

    # Login endpoint: strict rate limiting
    location /api/auth/login {
        limit_req zone=login burst=3 nodelay;
        limit_req_status 429;
        proxy_pass http://127.0.0.1:3000;
    }

    # API endpoints: higher limit with burst
    location /api/ {
        limit_req zone=api burst=50 nodelay;
        limit_req_status 429;
        proxy_pass http://127.0.0.1:3000;
    }

    # Whitelist certain IPs from rate limiting. Note: the geo, map, and
    # limit_req_zone directives below belong in the http {} block, not
    # inside server {}; they are shown here commented out for reference:
    # geo $limit {
    #     default 1;
    #     10.0.0.0/8 0;       # Internal network
    #     192.168.0.0/16 0;   # Local network
    # }
    # map $limit $limit_key {
    #     0 "";
    #     1 $binary_remote_addr;
    # }
    # limit_req_zone $limit_key zone=custom:10m rate=10r/s;
}
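Every limit_req example above uses nodelay, which serves burst requests immediately and rejects the rest. Dropping nodelay changes the behavior: excess requests are queued and released at the zone rate, smoothing spikes at the cost of latency. A sketch reusing the general zone defined earlier (the /reports/ path is hypothetical):

```nginx
# Without nodelay: up to 20 queued requests are drained at 10r/s
# instead of being served instantly or rejected
location /reports/ {
    limit_req zone=general burst=20;
    limit_req_status 429;
    proxy_pass http://127.0.0.1:3000;
}
```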

Common Directives Reference

Here is a quick reference of the most commonly used Nginx directives and what they do.

Directive - Description
worker_processes - Number of worker processes (set to auto for CPU count)
worker_connections - Max simultaneous connections per worker
server_name - Domain name(s) this server block responds to
listen - Port and protocol to listen on
root - Root directory for serving files
index - Default file to serve for directory requests
location - Match request URI to apply specific configuration
proxy_pass - Forward requests to a backend server
try_files - Try serving files in order, fall back to the last option
ssl_certificate - Path to the SSL certificate file
ssl_certificate_key - Path to the SSL private key file
add_header - Add a custom HTTP response header
gzip - Enable or disable gzip compression
expires - Set Cache-Control max-age for static assets
upstream - Define a group of backend servers for load balancing
limit_req_zone - Define a shared memory zone for rate limiting
error_page - Define custom error pages for specific status codes
access_log - Path and format for access log files
error_log - Path and level for error log files
client_max_body_size - Maximum allowed size of the client request body
sendfile - Enable efficient file transfer using kernel sendfile

Frequently Asked Questions

What is the difference between Nginx and Apache?

Nginx uses an event-driven, asynchronous architecture that handles many concurrent connections efficiently with low memory usage. Apache traditionally uses a process-per-connection or thread-per-connection model which consumes more memory under high load. Nginx excels at serving static content and as a reverse proxy, while Apache offers more built-in modules and .htaccess support. Many production setups use Nginx as a reverse proxy in front of Apache.

How do I test my Nginx configuration before reloading?

Always run "nginx -t" before reloading. This command parses the configuration files and checks for syntax errors without actually applying the changes. If the test passes, you can safely reload with "nginx -s reload" or "systemctl reload nginx". Never restart Nginx in production without testing first.

How do I set up Let's Encrypt SSL certificates with Nginx?

Install Certbot (the Let's Encrypt client) and run "certbot --nginx -d yourdomain.com -d www.yourdomain.com". Certbot will automatically obtain certificates and modify your Nginx configuration. It also sets up auto-renewal via a cron job or systemd timer. Certificates are stored in /etc/letsencrypt/live/yourdomain.com/.

What does "proxy_set_header X-Real-IP $remote_addr" do?

When Nginx acts as a reverse proxy, the backend application sees Nginx's IP as the client IP. The X-Real-IP header passes the original client IP address to the backend so your application can log the real visitor IP, apply rate limiting per user, and perform geolocation correctly.

How do I redirect HTTP to HTTPS in Nginx?

Create a separate server block listening on port 80 that returns a 301 redirect to the HTTPS version: "return 301 https://$host$request_uri;". Using $host preserves the exact domain the client requested, which matters when a block serves several server_names. The 301 status code tells browsers and search engines that the move is permanent and to update their links.

What is the try_files directive and why is it important for SPAs?

The try_files directive tells Nginx to check for the existence of files in order and serve the first one found. For SPAs (React, Vue, Angular), "try_files $uri $uri/ /index.html" first tries to serve the exact file requested, then looks for a directory, and finally falls back to index.html. This is essential because SPA routes like /about or /dashboard do not correspond to actual files on disk.

These Nginx configurations cover the most common production scenarios. Always test your configuration with "nginx -t" before reloading, and monitor your access and error logs regularly to catch issues early.
