
Nginx Config Examples: Reverse Proxy, SSL, and Static Sites

11 min read · by DevToolBox

Nginx is the world's most popular web server, powering over 30% of all websites. Whether you need to serve static files, act as a reverse proxy, terminate SSL, or balance load, this comprehensive Nginx configuration guide provides production-ready examples you can copy and adapt right away.

Nginx Basics

Nginx (pronounced "Engine-X") is a high-performance HTTP server, reverse proxy, and load balancer. Its event-driven architecture handles thousands of concurrent connections with minimal memory. The main configuration lives at /etc/nginx/nginx.conf.

# Main configuration file structure
# /etc/nginx/nginx.conf

user nginx;
worker_processes auto;          # One worker per CPU core
error_log /var/log/nginx/error.log warn;
pid /run/nginx.pid;

events {
    worker_connections 1024;    # Max connections per worker
    multi_accept on;            # Accept multiple connections at once
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Logging format
    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';

    access_log /var/log/nginx/access.log main;

    sendfile on;                # Efficient file transfer
    tcp_nopush on;              # Optimize TCP packets
    tcp_nodelay on;             # Disable Nagle's algorithm
    keepalive_timeout 65;       # Keep connections alive
    types_hash_max_size 2048;

    # Include site configurations
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

Static Site Configuration

The simplest use case: serving HTML, CSS, JavaScript, and image files straight from disk.

# Static website configuration
# /etc/nginx/conf.d/static-site.conf

server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    # Document root
    root /var/www/example.com/html;
    index index.html index.htm;

    # Main location block
    location / {
        try_files $uri $uri/ =404;
    }

    # Cache static assets aggressively
    location ~* \.(css|js|jpg|jpeg|png|gif|ico|svg|woff|woff2|ttf|eot)$ {
        expires 30d;
        add_header Cache-Control "public, immutable";
        access_log off;
    }

    # Cache HTML files with shorter duration
    location ~* \.html$ {
        expires 1h;
        add_header Cache-Control "public, must-revalidate";
    }

    # Deny access to hidden files (.htaccess, .git, etc.)
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    # Custom error pages
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    access_log /var/log/nginx/example.com.access.log;
    error_log /var/log/nginx/example.com.error.log;
}

Reverse Proxy Configuration

A reverse proxy sits between clients and your backend application (Node.js, Python, Go, etc.) and forwards requests to it.

# Reverse proxy to Node.js/Python/Go application
# /etc/nginx/conf.d/app-proxy.conf

server {
    listen 80;
    server_name app.example.com;

    # Max upload size
    client_max_body_size 50M;

    # Proxy all requests to the backend application
    location / {
        proxy_pass http://127.0.0.1:3000;

        # Pass the real client IP to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support (for Socket.IO, etc.)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;

        # Buffering settings
        proxy_buffering on;
        proxy_buffer_size 4k;
        proxy_buffers 8 4k;
    }

    # Serve static files directly (bypass the backend)
    location /static/ {
        alias /var/www/app/static/;
        expires 30d;
        add_header Cache-Control "public, immutable";
    }

    # Health check endpoint
    location /health {
        proxy_pass http://127.0.0.1:3000/health;
        access_log off;
    }

    access_log /var/log/nginx/app.access.log;
    error_log /var/log/nginx/app.error.log;
}

SSL/TLS Configuration

HTTPS is mandatory in modern web development. This configuration uses Let's Encrypt certificates and follows current TLS best practices.

# SSL/TLS configuration with Let's Encrypt
# /etc/nginx/conf.d/ssl-site.conf

# Redirect all HTTP traffic to HTTPS
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    # Let's Encrypt ACME challenge
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    # 301 permanent redirect to HTTPS
    # ($host preserves whichever hostname the client requested;
    #  $server_name would always redirect to the first name, example.com)
    location / {
        return 301 https://$host$request_uri;
    }
}

# HTTPS server block
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;   # nginx >= 1.25.1; on older versions use "listen 443 ssl http2;" instead
    server_name example.com www.example.com;

    # Let's Encrypt certificate paths
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # SSL session settings
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    # Modern TLS configuration (TLS 1.2 + 1.3 only)
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    # OCSP Stapling (faster certificate verification)
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;

    # HSTS (force HTTPS for 2 years, including subdomains)
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;

    root /var/www/example.com/html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    access_log /var/log/nginx/example.com.ssl.access.log;
    error_log /var/log/nginx/example.com.ssl.error.log;
}

SPA Configuration (React/Vue/Angular)

Single Page Applications use client-side routing, so every route must fall back to index.html.

# SPA configuration (React, Vue, Angular, Next.js static export)
# /etc/nginx/conf.d/spa.conf

server {
    listen 80;
    server_name spa.example.com;

    root /var/www/spa/dist;
    index index.html;

    # The key directive for SPAs: fallback to index.html
    # This ensures client-side routing works correctly
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Cache JavaScript and CSS bundles (with hash in filename)
    location ~* \.(js|css)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        access_log off;
    }

    # Cache images, fonts, and media
    location ~* \.(jpg|jpeg|png|gif|ico|svg|webp|avif|woff|woff2|ttf|eot|mp4|webm)$ {
        expires 30d;
        add_header Cache-Control "public";
        access_log off;
    }

    # Do NOT cache index.html (always serve the latest version)
    location = /index.html {
        expires -1;
        add_header Cache-Control "no-store, no-cache, must-revalidate";
    }

    # API proxy (forward /api requests to the backend)
    location /api/ {
        proxy_pass http://127.0.0.1:4000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Deny access to source maps in production
    location ~* \.map$ {
        deny all;
    }
}

Load Balancing

Distribute incoming traffic across multiple backend servers for high availability and better performance.

# Load balancing across multiple backend servers
# /etc/nginx/conf.d/load-balancer.conf

# Define backend server group
upstream app_servers {
    # Round-robin (default) - requests distributed evenly
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
    server 10.0.0.3:3000;

    # Mark a server as backup (used only when others are down)
    server 10.0.0.4:3000 backup;

    # Passive health checks: e.g. server 10.0.0.1:3000 max_fails=3 fail_timeout=30s;
    # (the defaults are max_fails=1 and fail_timeout=10s)
}

# Weighted load balancing (send more traffic to powerful servers)
upstream app_weighted {
    server 10.0.0.1:3000 weight=5;   # Gets 5x the traffic
    server 10.0.0.2:3000 weight=3;   # Gets 3x the traffic
    server 10.0.0.3:3000 weight=1;   # Gets 1x the traffic
}

# Least connections (send to the server with fewest active requests)
upstream app_least_conn {
    least_conn;
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
    server 10.0.0.3:3000;
}

# IP hash (same client always goes to same server - sticky sessions)
upstream app_ip_hash {
    ip_hash;
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
    server 10.0.0.3:3000;
}

server {
    listen 80;
    server_name lb.example.com;

    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Connection keep-alive to backends
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }

    # Simple health check endpoint
    location /nginx-health {
        access_log off;
        default_type text/plain;   # sets Content-Type for the return response
        return 200 "OK";
    }
}

Security Headers

HTTP security headers protect your site against clickjacking, XSS, and content injection.

# Security headers configuration
# Add these inside your server {} block or create a snippet

# /etc/nginx/snippets/security-headers.conf
# Include with: include /etc/nginx/snippets/security-headers.conf;

# Prevent clickjacking: deny embedding in iframes
add_header X-Frame-Options "SAMEORIGIN" always;

# Prevent MIME-type sniffing
add_header X-Content-Type-Options "nosniff" always;

# Enable XSS protection (legacy browsers)
add_header X-XSS-Protection "1; mode=block" always;

# Control referrer information sent with requests
add_header Referrer-Policy "strict-origin-when-cross-origin" always;

# Content Security Policy (customize based on your needs)
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' https://cdn.example.com; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com; img-src 'self' data: https:; font-src 'self' https://fonts.gstatic.com; connect-src 'self' https://api.example.com; frame-ancestors 'self';" always;

# Permissions Policy (formerly Feature-Policy)
add_header Permissions-Policy "camera=(), microphone=(), geolocation=(), interest-cohort=()" always;

# Prevent information leakage
add_header X-Permitted-Cross-Domain-Policies "none" always;

# HSTS (only add if you have SSL configured)
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;

# Hide Nginx version number
server_tokens off;

Gzip Compression

Gzip compression can shrink transferred response sizes by up to 90%.

# Gzip compression configuration
# Add inside the http {} block in nginx.conf

# Enable gzip compression
gzip on;

# Minimum file size to compress (skip tiny files)
gzip_min_length 256;

# Compression level (1-9, higher = more CPU, smaller files)
# Level 6 is a good balance between compression ratio and CPU usage
gzip_comp_level 6;

# Number and size of compression buffers
gzip_buffers 16 8k;

# Compress responses for HTTP/1.0 clients too
gzip_http_version 1.0;

# Compress all text-based content types
gzip_types
    text/plain
    text/css
    text/xml
    text/javascript
    application/json
    application/javascript
    application/x-javascript
    application/xml
    application/xml+rss
    application/atom+xml
    application/vnd.ms-fontobject
    font/opentype
    font/ttf
    image/svg+xml
    image/x-icon;

# Add Vary: Accept-Encoding header (important for caching proxies)
gzip_vary on;

# Disable gzip for old IE browsers
gzip_disable "MSIE [1-6]\.";

# Enable gzip for proxied requests too
gzip_proxied any;

Rate Limiting

Rate limiting protects your server against abuse, brute-force attacks, and DDoS. Nginx's limit_req module implements the leaky bucket algorithm.

# Rate limiting configuration
# Define zones in the http {} block, apply in server/location blocks

# ── Define rate limit zones (in http {} block) ──

# General rate limit: 10 requests/second per IP
limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;

# Strict rate limit for login/auth: 5 requests/minute per IP
limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;

# API rate limit: 30 requests/second per IP
limit_req_zone $binary_remote_addr zone=api:10m rate=30r/s;

# Rate limit by server name (protect against virtual host abuse)
limit_req_zone $server_name zone=per_server:10m rate=100r/s;

# ── Apply rate limits (in server {} block) ──

server {
    listen 80;
    server_name api.example.com;

    # Custom error page for rate-limited requests
    error_page 429 = @rate_limited;
    location @rate_limited {
        default_type application/json;
        return 429 '{"error": "Too many requests. Please try again later."}';
    }

    # General pages: allow burst of 20, no delay for first 10
    location / {
        limit_req zone=general burst=20 nodelay;
        limit_req_status 429;
        proxy_pass http://127.0.0.1:3000;
    }

    # Login endpoint: strict rate limiting
    location /api/auth/login {
        limit_req zone=login burst=3 nodelay;
        limit_req_status 429;
        proxy_pass http://127.0.0.1:3000;
    }

    # API endpoints: higher limit with burst
    location /api/ {
        limit_req zone=api burst=50 nodelay;
        limit_req_status 429;
        proxy_pass http://127.0.0.1:3000;
    }

    # Whitelist certain IPs from rate limiting
    # (note: geo, map, and limit_req_zone must be placed in the
    #  http {} block, not inside server {})
    # geo $limit {
    #     default 1;
    #     10.0.0.0/8 0;       # Internal network
    #     192.168.0.0/16 0;   # Local network
    # }
    # map $limit $limit_key {
    #     0 "";
    #     1 $binary_remote_addr;
    # }
    # limit_req_zone $limit_key zone=custom:10m rate=10r/s;
}

Common Directives Reference

A quick reference of the most frequently used Nginx directives.

Directive | Description
worker_processes | Number of worker processes (auto = one per CPU core)
worker_connections | Max concurrent connections per worker
server_name | Domain name(s) for this server block
listen | Port and protocol to listen on
root | Document root for serving files
index | Default file for directory requests
location | Match request URIs for specific configuration
proxy_pass | Forward requests to a backend server
try_files | Try files in order, falling back to the last option
ssl_certificate | Path to the SSL certificate file
ssl_certificate_key | Path to the SSL private key file
add_header | Add a custom HTTP response header
gzip | Enable/disable gzip compression
expires | Set Cache-Control max-age for static assets
upstream | Define a backend server group for load balancing
limit_req_zone | Define a shared-memory zone for rate limiting
error_page | Define custom error pages
access_log | Path and format of the access log
error_log | Path and level of the error log
client_max_body_size | Maximum size of the client request body
sendfile | Enable efficient file transfer via kernel sendfile

Frequently Asked Questions

What is the difference between Nginx and Apache?

Nginx uses an event-driven, asynchronous architecture that handles many concurrent connections efficiently. Apache uses a process-or-thread-per-connection model. Many production setups place Nginx in front of Apache as a reverse proxy.

How do I test the Nginx configuration before reloading?

Always run "nginx -t" before reloading. The command checks the configuration for syntax errors. If it succeeds, reload with "nginx -s reload".

How do I set up Let's Encrypt SSL certificates?

Install Certbot and run "certbot --nginx -d yourdomain.com". Certbot obtains the certificates automatically and updates your Nginx configuration.

What does "proxy_set_header X-Real-IP $remote_addr" do?

When Nginx acts as a reverse proxy, the backend sees Nginx's IP as the client IP. This header forwards the real client IP to the backend.
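If Nginx itself sits behind another proxy (a CDN or load balancer), the ngx_http_realip_module can restore the original client address from these headers on the receiving side. A minimal sketch, assuming the trusted upstream proxy lives in 10.0.0.0/8 (adjust to your network):

```nginx
# Trust X-Forwarded-For only when it arrives from known proxy addresses
set_real_ip_from 10.0.0.0/8;      # trusted proxy range (assumption; use your own)
real_ip_header X-Forwarded-For;   # header to take the client IP from
real_ip_recursive on;             # skip trusted addresses in the forwarding chain
```

With this in place, $remote_addr in logs and rate-limit zones reflects the end user rather than the intermediate proxy.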

How do I redirect HTTP to HTTPS?

Create a server block on port 80 with "return 301 https://$host$request_uri;" for a permanent redirect.
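As a minimal sketch, the entire redirect-only server block looks like this:

```nginx
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    # $host keeps whichever hostname the client actually requested
    return 301 https://$host$request_uri;
}
```

A server-level return needs no location block, so every request on port 80 is redirected.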

What is try_files, and why does it matter for SPAs?

try_files checks for files in order. For SPAs, "try_files $uri $uri/ /index.html" ensures index.html is served whenever a file does not exist, since SPA routes don't correspond to real files on disk.
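For example, with root /var/www/spa/dist, a request for /dashboard walks the fallback chain like this:

```nginx
location / {
    # 1. /var/www/spa/dist/dashboard      (no such file)
    # 2. /var/www/spa/dist/dashboard/     (no such directory)
    # 3. /var/www/spa/dist/index.html     (served; the SPA router takes over)
    try_files $uri $uri/ /index.html;
}
```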

These Nginx configurations cover the most common production scenarios. Always test with "nginx -t" and monitor your logs regularly.
