Nginx is the most popular web server in the world, powering more than 30% of all websites. Whether you need to serve static files, set up a reverse proxy, terminate SSL, or run a load balancer, this comprehensive Nginx configuration guide provides production-ready examples you can copy and adapt immediately.
Nginx Basics
Nginx (pronounced "engine-x") is a high-performance HTTP server, reverse proxy, and load balancer. Its event-driven architecture handles thousands of simultaneous connections with minimal memory. The main configuration file usually lives at /etc/nginx/nginx.conf.
```nginx
# Main configuration file structure
# /etc/nginx/nginx.conf
user nginx;
worker_processes auto;  # One worker per CPU core
error_log /var/log/nginx/error.log warn;
pid /run/nginx.pid;

events {
    worker_connections 1024;  # Max connections per worker
    multi_accept on;          # Accept multiple connections at once
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Logging format
    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';
    access_log /var/log/nginx/access.log main;

    sendfile on;           # Efficient file transfer
    tcp_nopush on;         # Optimize TCP packets
    tcp_nodelay on;        # Disable Nagle's algorithm
    keepalive_timeout 65;  # Keep connections alive
    types_hash_max_size 2048;

    # Include site configurations
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
```
Static Site Configuration
The simplest use case: serving HTML, CSS, JavaScript, and image files directly from disk.
```nginx
# Static website configuration
# /etc/nginx/conf.d/static-site.conf
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    # Document root
    root /var/www/example.com/html;
    index index.html index.htm;

    # Main location block
    location / {
        try_files $uri $uri/ =404;
    }

    # Cache static assets aggressively
    location ~* \.(css|js|jpg|jpeg|png|gif|ico|svg|woff|woff2|ttf|eot)$ {
        expires 30d;
        add_header Cache-Control "public, immutable";
        access_log off;
    }

    # Cache HTML files with a shorter duration
    location ~* \.html$ {
        expires 1h;
        add_header Cache-Control "public, must-revalidate";
    }

    # Deny access to hidden files (.htaccess, .git, etc.)
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    # Custom error pages
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    access_log /var/log/nginx/example.com.access.log;
    error_log /var/log/nginx/example.com.error.log;
}
```
Reverse Proxy Configuration
A reverse proxy sits between clients and your backend application (Node.js, Python, Go, etc.), forwarding requests and returning responses.
```nginx
# Reverse proxy to a Node.js/Python/Go application
# /etc/nginx/conf.d/app-proxy.conf
server {
    listen 80;
    server_name app.example.com;

    # Max upload size
    client_max_body_size 50M;

    # Proxy all requests to the backend application
    location / {
        proxy_pass http://127.0.0.1:3000;

        # Pass the real client IP to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support (for Socket.IO, etc.)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;

        # Buffering settings
        proxy_buffering on;
        proxy_buffer_size 4k;
        proxy_buffers 8 4k;
    }

    # Serve static files directly (bypass the backend)
    location /static/ {
        alias /var/www/app/static/;
        expires 30d;
        add_header Cache-Control "public, immutable";
    }

    # Health check endpoint
    location /health {
        proxy_pass http://127.0.0.1:3000/health;
        access_log off;
    }

    access_log /var/log/nginx/app.access.log;
    error_log /var/log/nginx/app.error.log;
}
```
SSL/TLS Configuration
Securing your site with HTTPS is mandatory in modern web development. This configuration uses Let's Encrypt certificates and follows current TLS best practices.
```nginx
# SSL/TLS configuration with Let's Encrypt
# /etc/nginx/conf.d/ssl-site.conf

# Redirect all HTTP traffic to HTTPS
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    # Let's Encrypt ACME challenge
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    # 301 permanent redirect to HTTPS
    # ($host preserves the exact hostname the client requested;
    # $server_name would always resolve to the first listed name)
    location / {
        return 301 https://$host$request_uri;
    }
}

# HTTPS server block
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    # Note: on nginx 1.25.1+, prefer a separate "http2 on;" directive
    server_name example.com www.example.com;

    # Let's Encrypt certificate paths
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # SSL session settings
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    # Modern TLS configuration (TLS 1.2 + 1.3 only)
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    # OCSP stapling (faster certificate verification)
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;

    # HSTS (force HTTPS for 2 years, including subdomains)
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;

    root /var/www/example.com/html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    access_log /var/log/nginx/example.com.ssl.access.log;
    error_log /var/log/nginx/example.com.ssl.error.log;
}
```
SPA Configuration (React/Vue/Angular)
Single-page applications use client-side routing, which means every route must fall back to index.html.
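To make the fallback behaviour concrete before the config itself, here is a minimal Python sketch of the lookup order that `try_files $uri $uri/ /index.html` performs: each candidate is checked in turn, and the last argument is served unconditionally when nothing matches. The directory layout is purely illustrative.

```python
import tempfile
from pathlib import Path

def try_files(root: Path, uri: str) -> str:
    """Sketch of try_files $uri $uri/ /index.html resolution."""
    candidate = root / uri.lstrip("/")
    if candidate.is_file():
        return uri  # a real file on disk: serve it as-is
    if candidate.is_dir() and (candidate / "index.html").is_file():
        return uri.rstrip("/") + "/index.html"  # directory with an index
    return "/index.html"  # SPA fallback: let the client-side router decide

# Illustrative build output: an index page and one JS bundle
root = Path(tempfile.mkdtemp())
(root / "index.html").write_text("<app/>")
(root / "main.js").write_text("boot()")
```

A request for `/main.js` resolves to the real file, while a client-side route like `/dashboard/42` (no file on disk) falls through to `/index.html`.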
```nginx
# SPA configuration (React, Vue, Angular, Next.js static export)
# /etc/nginx/conf.d/spa.conf
server {
    listen 80;
    server_name spa.example.com;

    root /var/www/spa/dist;
    index index.html;

    # The key directive for SPAs: fall back to index.html
    # so that client-side routing works correctly
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Cache JavaScript and CSS bundles (with a hash in the filename)
    location ~* \.(js|css)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        access_log off;
    }

    # Cache images, fonts, and media
    location ~* \.(jpg|jpeg|png|gif|ico|svg|webp|avif|woff|woff2|ttf|eot|mp4|webm)$ {
        expires 30d;
        add_header Cache-Control "public";
        access_log off;
    }

    # Do NOT cache index.html (always serve the latest version)
    location = /index.html {
        expires -1;
        add_header Cache-Control "no-store, no-cache, must-revalidate";
    }

    # API proxy (forward /api requests to the backend)
    location /api/ {
        proxy_pass http://127.0.0.1:4000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Deny access to source maps in production
    location ~* \.map$ {
        deny all;
    }
}
```
Load Balancing
Distribute incoming traffic across several backend servers for high availability and better performance.
```nginx
# Load balancing across multiple backend servers
# /etc/nginx/conf.d/load-balancer.conf

# Define the backend server group
upstream app_servers {
    # Round-robin (default) - requests distributed evenly
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
    server 10.0.0.3:3000;

    # Mark a server as backup (used only when the others are down)
    server 10.0.0.4:3000 backup;

    # Health check: mark a server down after 3 failed attempts
    # max_fails=3 fail_timeout=30s (default)
}

# Weighted load balancing (send more traffic to powerful servers)
upstream app_weighted {
    server 10.0.0.1:3000 weight=5;  # Gets 5x the traffic
    server 10.0.0.2:3000 weight=3;  # Gets 3x the traffic
    server 10.0.0.3:3000 weight=1;  # Gets 1x the traffic
}

# Least connections (send to the server with the fewest active requests)
upstream app_least_conn {
    least_conn;
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
    server 10.0.0.3:3000;
}

# IP hash (the same client always goes to the same server - sticky sessions)
upstream app_ip_hash {
    ip_hash;
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
    server 10.0.0.3:3000;
}

server {
    listen 80;
    server_name lb.example.com;

    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Connection keep-alive to the backends
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }

    # Simple health check endpoint
    location /nginx-health {
        access_log off;
        default_type text/plain;
        return 200 "OK";
    }
}
```
Security Headers
HTTP security headers protect your site against common attacks such as clickjacking, XSS, and content injection.
```nginx
# Security headers configuration
# Add these inside your server {} block or create a snippet:
# /etc/nginx/snippets/security-headers.conf
# Include with: include /etc/nginx/snippets/security-headers.conf;

# Prevent clickjacking: only allow same-origin framing
add_header X-Frame-Options "SAMEORIGIN" always;

# Prevent MIME-type sniffing
add_header X-Content-Type-Options "nosniff" always;

# Enable XSS protection (legacy browsers)
add_header X-XSS-Protection "1; mode=block" always;

# Control referrer information sent with requests
add_header Referrer-Policy "strict-origin-when-cross-origin" always;

# Content Security Policy (customize based on your needs)
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' https://cdn.example.com; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com; img-src 'self' data: https:; font-src 'self' https://fonts.gstatic.com; connect-src 'self' https://api.example.com; frame-ancestors 'self';" always;

# Permissions Policy (formerly Feature-Policy)
add_header Permissions-Policy "camera=(), microphone=(), geolocation=(), interest-cohort=()" always;

# Prevent information leakage
add_header X-Permitted-Cross-Domain-Policies "none" always;

# HSTS (only add if you have SSL configured)
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;

# Hide the Nginx version number
server_tokens off;
```
Gzip Compression
Enabling Gzip compression can shrink response sizes by up to 90%, significantly improving load times.
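The ~90% figure is easy to sanity-check with Python's standard gzip module at the same compression level used below; the repetitive HTML payload here is an illustrative stand-in for typical markup.

```python
import gzip

# Repetitive markup, standing in for typical HTML/CSS/JS responses
payload = b"<div class='card'><p>Hello, nginx!</p></div>\n" * 200
compressed = gzip.compress(payload, compresslevel=6)  # level 6, as configured below
saving = 1 - len(compressed) / len(payload)
print(f"{len(payload)} -> {len(compressed)} bytes ({saving:.0%} smaller)")
```

Real-world savings vary with content: already-compressed formats (JPEG, WOFF2, MP4) gain nothing, which is why the gzip_types list below targets text-based types only.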
```nginx
# Gzip compression configuration
# Add inside the http {} block in nginx.conf

# Enable gzip compression
gzip on;

# Minimum response size to compress (skip tiny files)
gzip_min_length 256;

# Compression level (1-9, higher = more CPU, smaller files)
# Level 6 is a good balance between compression ratio and CPU usage
gzip_comp_level 6;

# Number and size of compression buffers
gzip_buffers 16 8k;

# Compress responses for HTTP/1.0 clients too
gzip_http_version 1.0;

# Compress all text-based content types
gzip_types
    text/plain
    text/css
    text/xml
    text/javascript
    application/json
    application/javascript
    application/x-javascript
    application/xml
    application/xml+rss
    application/atom+xml
    application/vnd.ms-fontobject
    font/opentype
    font/ttf
    image/svg+xml
    image/x-icon;

# Add a Vary: Accept-Encoding header (important for caching proxies)
gzip_vary on;

# Disable gzip for old IE browsers
gzip_disable "MSIE [1-6]\.";

# Enable gzip for proxied requests too
gzip_proxied any;
```
Rate Limiting
Rate limiting protects your server against abuse, brute-force attacks, and DDoS. Nginx's limit_req module uses a leaky-bucket algorithm.
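To build intuition for the leaky bucket before the config itself, here is an illustrative Python sketch (not nginx's actual implementation): requests fill a bucket that drains at a fixed rate, and a request is rejected once the fill level exceeds one-plus-burst.

```python
class LeakyBucket:
    """Sketch of a limit_req-style leaky bucket with a rate and a burst allowance."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate    # requests drained per second
        self.burst = burst  # extra requests allowed to queue above the rate
        self.level = 0.0    # current bucket fill
        self.last = 0.0     # timestamp of the previous request

    def allow(self, now: float) -> bool:
        # Drain the bucket for the time elapsed since the last request
        self.level = max(0.0, self.level - (now - self.last) * self.rate)
        self.last = now
        if self.level < self.burst + 1:
            self.level += 1.0
            return True
        return False  # over capacity: answer 429 Too Many Requests

bucket = LeakyBucket(rate=10, burst=20)
# 21 instantaneous requests pass (1 + burst of 20); the 22nd is rejected
results = [bucket.allow(now=0.0) for _ in range(22)]
```

With nodelay (used below), the burst is served immediately instead of being queued and smoothed out to the configured rate.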
```nginx
# Rate limiting configuration
# Define zones in the http {} block, apply them in server/location blocks

# ── Define rate limit zones (in the http {} block) ──

# General rate limit: 10 requests/second per IP
limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;

# Strict rate limit for login/auth: 5 requests/minute per IP
limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;

# API rate limit: 30 requests/second per IP
limit_req_zone $binary_remote_addr zone=api:10m rate=30r/s;

# Rate limit by server name (protect against virtual host abuse)
limit_req_zone $server_name zone=per_server:10m rate=100r/s;

# ── Apply rate limits (in the server {} block) ──
server {
    listen 80;
    server_name api.example.com;

    # Custom error page for rate-limited requests
    error_page 429 = @rate_limited;
    location @rate_limited {
        default_type application/json;
        return 429 '{"error": "Too many requests. Please try again later."}';
    }

    # General pages: allow a burst of 20, served without queuing delay
    location / {
        limit_req zone=general burst=20 nodelay;
        limit_req_status 429;
        proxy_pass http://127.0.0.1:3000;
    }

    # Login endpoint: strict rate limiting
    location /api/auth/login {
        limit_req zone=login burst=3 nodelay;
        limit_req_status 429;
        proxy_pass http://127.0.0.1:3000;
    }

    # API endpoints: higher limit with burst
    location /api/ {
        limit_req zone=api burst=50 nodelay;
        limit_req_status 429;
        proxy_pass http://127.0.0.1:3000;
    }
}

# Whitelist certain IPs from rate limiting
# (these geo/map/zone definitions also belong in the http {} block):
# geo $limit {
#     default 1;
#     10.0.0.0/8 0;      # Internal network
#     192.168.0.0/16 0;  # Local network
# }
# map $limit $limit_key {
#     0 "";
#     1 $binary_remote_addr;
# }
# limit_req_zone $limit_key zone=custom:10m rate=10r/s;
```
Common Directives Reference
Here is a quick reference for the most commonly used Nginx directives.
| Directive | Description |
|---|---|
| worker_processes | Number of worker processes (auto matches the CPU count) |
| worker_connections | Max simultaneous connections per worker |
| server_name | Domain name(s) this server block responds to |
| listen | Port and protocol to listen on |
| root | Root directory for serving files |
| index | Default file for directory requests |
| location | URI matching for applying specific configuration |
| proxy_pass | Forward requests to a backend server |
| try_files | Try files in order, falling back to the last option |
| ssl_certificate | Path to the SSL certificate file |
| ssl_certificate_key | Path to the SSL private key file |
| add_header | Add a custom HTTP response header |
| gzip | Enable or disable gzip compression |
| expires | Set Cache-Control max-age for static assets |
| upstream | Define a group of backend servers for load balancing |
| limit_req_zone | Define a shared memory zone for rate limiting |
| error_page | Define custom error pages |
| access_log | Path and format of access log files |
| error_log | Path and level of error log files |
| client_max_body_size | Maximum allowed size of the client request body |
| sendfile | Enable efficient file transfer via sendfile |
Frequently Asked Questions
What is the difference between Nginx and Apache?
Nginx uses an asynchronous, event-driven architecture that handles many simultaneous connections efficiently with little memory. Apache traditionally uses a process- or thread-per-connection model. Many production environments place Nginx in front of Apache as a reverse proxy.
How do I test the Nginx configuration before reloading?
Always run "nginx -t" before reloading. The command checks for syntax errors. If the test passes, reload with "nginx -s reload".
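As a sketch of the safe pattern, chain the two commands so the reload only happens when the syntax check passes (requires a running nginx; sudo may be needed depending on your setup):

```shell
# Validate the configuration, then reload only on success
sudo nginx -t && sudo nginx -s reload

# On systemd-based distributions, an equivalent zero-downtime reload:
sudo systemctl reload nginx
```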
How do I set up Let's Encrypt SSL certificates?
Install Certbot and run "certbot --nginx -d yourdomain.com". Certbot automatically obtains the certificates and modifies your Nginx configuration.
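A typical session looks like this (domain names are placeholders; issuance requires DNS pointing at this server and port 80 reachable):

```shell
# Obtain and install a certificate covering both hostnames
sudo certbot --nginx -d example.com -d www.example.com

# Certbot sets up automatic renewal; verify it without issuing anything
sudo certbot renew --dry-run
```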
What does "proxy_set_header X-Real-IP $remote_addr" do?
When Nginx acts as a reverse proxy, the backend application sees Nginx's IP address. This header passes the real client IP through to the backend.
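On the backend side, a small helper can recover the original client address from these headers. This is a hypothetical sketch (the header names match the proxy_set_header lines shown earlier; trust these headers only when they come from your own proxy):

```python
def client_ip(headers: dict) -> str:
    """Recover the original client IP from reverse-proxy headers."""
    forwarded = headers.get("X-Forwarded-For")
    if forwarded:
        # X-Forwarded-For grows one entry per hop: "client, proxy1, proxy2"
        return forwarded.split(",")[0].strip()
    return headers.get("X-Real-IP", "unknown")
```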
How do I redirect HTTP to HTTPS?
Create a server block listening on port 80 with "return 301 https://$host$request_uri;" to send all HTTP traffic to HTTPS. Using $host preserves the hostname the client actually requested, whereas $server_name always resolves to the first name in the server_name directive.
What is try_files and why does it matter for SPAs?
try_files checks for the existence of files in order. For SPAs, "try_files $uri $uri/ /index.html" serves index.html whenever no file matches, because SPA routes don't correspond to real files on disk.
These Nginx configurations cover the most common production scenarios. Always test with "nginx -t" before reloading, and monitor your logs regularly.