
Nginx Config Generator - Generate nginx.conf Online (Free Tool + Complete Guide)

18 min read · by DevToolBox

TL;DR

An Nginx config generator builds production-ready nginx.conf files instantly from your requirements — server blocks, reverse proxy rules, SSL/TLS, load balancing, caching, gzip compression, security headers, and rate limiting. Instead of memorizing directive syntax, use our free Nginx Config Generator to create, validate, and download your configuration in seconds. This guide explains every major Nginx concept so you understand what each directive does and why it matters.

Key Takeaways

  • Nginx handles static serving, reverse proxy, load balancing, and SSL termination in a single lightweight process
  • Server blocks define virtual hosts; location blocks route requests within each host
  • Always test configuration with nginx -t before reloading
  • Use proxy_pass to forward requests to Node.js, Python, Go, or any backend
  • Enable gzip compression, security headers, and rate limiting in every production deployment
  • An online Nginx config generator eliminates syntax errors and saves hours of manual configuration
  • HTTP/2, keepalive connections, and open_file_cache provide significant performance gains

Generate Your Nginx Config Now

Skip the manual editing. Build a production-ready nginx.conf in seconds.

Open Nginx Config Generator →

1. What Is Nginx and Why Does It Matter?

Nginx (pronounced “engine-x”) is an open-source, high-performance HTTP server, reverse proxy server, and load balancer originally created by Igor Sysoev in 2004. It powers more than 30% of all websites on the internet and is the default choice for modern cloud infrastructure, containerized deployments, and microservice architectures.

Unlike traditional web servers that spawn a new thread or process for every incoming connection, Nginx uses an event-driven, asynchronous, non-blocking architecture. A single Nginx worker process can handle thousands of simultaneous connections while consuming only a few megabytes of memory. This makes Nginx exceptionally efficient for serving static files, proxying application traffic, and terminating SSL/TLS connections.

The main configuration file lives at /etc/nginx/nginx.conf. Site-specific configurations typically go in /etc/nginx/conf.d/ or /etc/nginx/sites-available/ (with symlinks in sites-enabled/). An Nginx config generator helps you produce these files correctly without memorizing every directive.
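On Debian/Ubuntu packages, the main file pulls site configurations in with include directives. A minimal sketch of how those includes are typically wired (exact paths vary by distribution):

```nginx
# Inside /etc/nginx/nginx.conf, http context
http {
    # Drop-in config fragments (common on RHEL/CentOS)
    include /etc/nginx/conf.d/*.conf;

    # Debian/Ubuntu convention: one file per site,
    # enabled by symlinking from sites-available
    include /etc/nginx/sites-enabled/*;
}
```

To enable a site, symlink its file from sites-available into sites-enabled, then run nginx -t and reload.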

2. Nginx Configuration File Structure

Every nginx.conf follows a hierarchical block structure. Understanding this hierarchy is essential before using any configuration generator, because each directive is valid only within certain contexts.

# Main context (global settings)
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /run/nginx.pid;

events {
    worker_connections 2048;
    multi_accept on;
}

http {
    # HTTP context (shared across all virtual hosts)
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        # Server context (one virtual host)
        listen 80;
        server_name example.com;

        location / {
            # Location context (URL path matching)
            root /var/www/html;
            index index.html;
        }

        location /api/ {
            proxy_pass http://backend:3000;
        }
    }
}

Context Hierarchy

| Context | Purpose | Common Directives |
|---|---|---|
| main | Global process-level settings | worker_processes, error_log, pid |
| events | Connection handling configuration | worker_connections, multi_accept |
| http | HTTP server settings shared across all sites | include, log_format, gzip, upstream |
| server | Virtual host definition | listen, server_name, root, ssl_certificate |
| location | Request URI matching and processing | proxy_pass, try_files, root, alias |

3. Server Blocks (Virtual Hosts)

Server blocks are the Nginx equivalent of Apache virtual hosts. Each server block defines how Nginx should respond to requests for a specific domain name or IP:port combination. You can host multiple websites on a single Nginx instance by creating separate server blocks for each domain.

# Site 1: example.com
server {
    listen 80;
    server_name example.com www.example.com;
    root /var/www/example;
    index index.html;
}

# Site 2: api.example.com
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}

# Default server (catch-all for unmatched domains)
server {
    listen 80 default_server;
    server_name _;
    return 444;  # Close connection without response
}
Tip: The server_name directive supports exact names, wildcards (*.example.com), and regular expressions (~^www\.(.+)$). Nginx evaluates them in order: exact match first, then the longest wildcard name starting with an asterisk (*.example.com), then the longest wildcard name ending with one (www.example.*), then the first matching regex in file order. Always define a default_server to handle requests for unknown domains.
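As an illustration of those server_name forms (the domains are placeholders), each matching type in a minimal server block:

```nginx
# Exact name: checked first
server {
    listen 80;
    server_name example.com;
}

# Wildcard starting with an asterisk: checked second
server {
    listen 80;
    server_name *.example.com;
}

# Regex (leading ~): checked last, first match in file order wins
server {
    listen 80;
    server_name ~^www\.(?<domain>.+)$;
}
```

The named capture `domain` is illustrative; regex captures can be reused in later directives, e.g. in a redirect target.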

4. Location Directives: Routing Requests

Location blocks determine how Nginx processes requests based on the URI path. The matching behavior depends on the modifier used with the location directive. Understanding location matching priority is critical for correct routing.

| Modifier | Match Type | Priority | Example |
|---|---|---|---|
| = | Exact match | 1 (highest) | location = /favicon.ico |
| ^~ | Prefix match (skips regex) | 2 | location ^~ /images/ |
| ~ | Case-sensitive regex | 3 | location ~ \.php$ |
| ~* | Case-insensitive regex | 3 | location ~* \.(jpg\|png\|gif)$ |
| (none) | Prefix match | 4 (lowest) | location /api/ |

server {
    listen 80;
    server_name example.com;

    # Exact match: highest priority
    location = / {
        return 200 "Homepage";
    }

    # Prefix with ^~ : skips regex matching
    location ^~ /static/ {
        root /var/www;
        expires 30d;
    }

    # Regex: matches .css and .js files
    location ~* \.(css|js)$ {
        root /var/www/assets;
        expires 7d;
        add_header Cache-Control "public, immutable";
    }

    # Standard prefix: lowest priority
    location /api/ {
        proxy_pass http://backend:3000;
    }

    # Catch-all for SPA routing
    location / {
        root /var/www/app;
        try_files $uri $uri/ /index.html;
    }
}

5. Reverse Proxy Configuration

A reverse proxy sits between the client and your application server, forwarding HTTP requests and returning responses. This is the most common production deployment pattern for Node.js, Python (Django/Flask/FastAPI), Go, Ruby, and Java applications. Nginx handles SSL termination, static file serving, compression, and connection pooling, freeing your application to focus on business logic.

Basic Reverse Proxy for Node.js

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;

        # Pass original client information
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;

        # Buffering
        proxy_buffering on;
        proxy_buffer_size 4k;
        proxy_buffers 8 4k;
    }

    # Serve static files directly (bypass proxy)
    location /static/ {
        alias /var/www/app/public/static/;
        expires 30d;
        access_log off;
    }
}

Reverse Proxy for Python (Gunicorn/uWSGI)

upstream python_backend {
    server unix:/run/gunicorn.sock;
}

server {
    listen 80;
    server_name api.example.com;

    client_max_body_size 10M;

    location / {
        proxy_pass http://python_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /media/ {
        alias /var/www/app/media/;
    }
}
Tip: When using Unix domain sockets (like Gunicorn or PHP-FPM), the proxy_pass target uses the unix: prefix. This eliminates TCP overhead and is faster than connecting via localhost:port for services on the same machine.

6. SSL/TLS Configuration

HTTPS is mandatory for modern websites. Search engines rank HTTPS pages higher, browsers warn users about insecure HTTP sites, and HTTP/2 requires TLS. A proper SSL configuration involves obtaining certificates, configuring protocols and ciphers, enabling HSTS, and setting up OCSP stapling.

Full SSL Configuration with Let's Encrypt

# HTTP to HTTPS redirect
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}

# HTTPS server
server {
    listen 443 ssl http2;
    server_name example.com www.example.com;

    # Certificate files (Let's Encrypt)
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # TLS protocols and ciphers
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    # SSL session caching
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;

    # OCSP stapling
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;

    # HSTS (1 year, include subdomains)
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;

    root /var/www/html;
    index index.html;
}
Warning: Never disable TLSv1.2 unless you are certain all your users support TLSv1.3 exclusively. Dropping TLSv1.0 and TLSv1.1 is recommended as they have known vulnerabilities. Always test your SSL configuration at ssllabs.com/ssltest to verify your grade.

7. Load Balancing

Nginx can distribute incoming traffic across multiple backend servers for high availability and horizontal scaling. The upstream block defines a pool of servers, and proxy_pass references it by name. Nginx supports several load balancing algorithms out of the box.

| Algorithm | Directive | Behavior |
|---|---|---|
| Round Robin | (default) | Distributes requests evenly across all servers in order |
| Least Connections | least_conn | Sends requests to the server with the fewest active connections |
| IP Hash | ip_hash | Routes requests from the same client IP to the same server (sticky sessions) |
| Random | random two least_conn | Picks two random servers, sends to the one with fewer connections |
| Weighted | weight=N on server | Servers with higher weight receive proportionally more requests |

upstream app_cluster {
    least_conn;

    server 10.0.0.1:3000 weight=3;    # 3x more traffic
    server 10.0.0.2:3000 weight=2;    # 2x more traffic
    server 10.0.0.3:3000;             # default weight=1

    server 10.0.0.4:3000 backup;      # only used when others fail
    server 10.0.0.5:3000 down;        # temporarily removed

    # Passive health checks
    # max_fails=3: mark as down after 3 consecutive failures
    # fail_timeout=30s: try again after 30 seconds
    server 10.0.0.6:3000 max_fails=3 fail_timeout=30s;

    # Keepalive connections to backends
    keepalive 32;
}

server {
    listen 443 ssl http2;
    server_name app.example.com;
    # ssl_certificate / ssl_certificate_key directives omitted for brevity

    location / {
        proxy_pass http://app_cluster;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_next_upstream error timeout http_500 http_502 http_503;
    }
}

8. Caching Configuration

Nginx caching operates at two levels: client-side caching (browser cache via HTTP headers) and proxy caching (Nginx stores backend responses locally). Proper caching can reduce server load by 80-90% and dramatically improve page load times for repeat visitors.

Browser Cache Headers (Static Assets)

# Cache static assets aggressively
location ~* \.(jpg|jpeg|png|gif|ico|svg|webp)$ {
    expires 30d;
    add_header Cache-Control "public, immutable";
    access_log off;
}

location ~* \.(css|js)$ {
    expires 7d;
    add_header Cache-Control "public";
}

# Do not cache HTML (always fresh)
location ~* \.html$ {
    expires -1;
    add_header Cache-Control "no-cache, no-store, must-revalidate";
}

Proxy Cache (Cache Backend Responses)

# Define cache zone in http context
proxy_cache_path /var/cache/nginx
    levels=1:2
    keys_zone=app_cache:10m     # 10MB metadata in memory
    max_size=1g                 # 1GB on disk
    inactive=60m                # Remove unused items after 60 min
    use_temp_path=off;

server {
    listen 80;
    server_name app.example.com;

    location /api/ {
        proxy_pass http://backend:3000;
        proxy_cache app_cache;
        proxy_cache_valid 200 10m;     # Cache 200 responses for 10 min
        proxy_cache_valid 404 1m;      # Cache 404 responses for 1 min
        proxy_cache_use_stale error timeout updating http_500 http_502;
        add_header X-Cache-Status $upstream_cache_status;
    }
}

9. Gzip Compression

Enabling gzip compression reduces the size of text-based responses by 60-90%, significantly improving page load times and reducing bandwidth costs. Nginx compresses responses on-the-fly before sending them to clients. Modern browsers all support gzip decoding transparently.

http {
    # Enable gzip compression
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 5;          # 1-9 (higher = more compression, more CPU)
    gzip_min_length 256;        # Don't compress tiny responses
    gzip_buffers 16 8k;

    # MIME types to compress
    gzip_types
        text/plain
        text/css
        text/xml
        text/javascript
        application/json
        application/javascript
        application/xml
        application/rss+xml
        application/atom+xml
        application/vnd.ms-fontobject
        font/opentype
        image/svg+xml;
}
Tip: Set gzip_comp_level between 4 and 6 for the best balance of compression ratio and CPU usage. Level 9 provides only marginally better compression but uses significantly more CPU. Also consider pre-compressing static files with gzip_static on to serve pre-built .gz files without real-time compression overhead.
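The gzip_static directive mentioned above serves pre-compressed .gz files when they exist on disk. A sketch (requires nginx built with the ngx_http_gzip_static_module, which is standard in most distribution packages):

```nginx
location /assets/ {
    # If /var/www/assets/app.css.gz exists, serve it directly
    # instead of compressing app.css on every request
    gzip_static on;
    root /var/www;
}
```

Generate the .gz files at build time (for example with `gzip -k`) so Nginx never spends CPU compressing the same asset twice.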

10. Security Headers

HTTP security headers instruct browsers to enforce security policies that protect your users from common web attacks. Adding these headers to your Nginx configuration is one of the simplest yet most impactful security improvements you can make.

server {
    # Prevent clickjacking
    add_header X-Frame-Options "SAMEORIGIN" always;

    # Prevent MIME-type sniffing
    add_header X-Content-Type-Options "nosniff" always;

    # XSS protection (legacy browsers)
    add_header X-XSS-Protection "1; mode=block" always;

    # HTTPS enforcement (HSTS)
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;

    # Referrer policy
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    # Content Security Policy
    add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' https://cdn.example.com; style-src 'self' 'unsafe-inline'; img-src 'self' data: https:; font-src 'self' https://fonts.gstatic.com;" always;

    # Permissions policy (restrict browser features)
    add_header Permissions-Policy "camera=(), microphone=(), geolocation=(), payment=()" always;

    # Hide Nginx version
    server_tokens off;
}
| Header | Protects Against | Recommended Value |
|---|---|---|
| X-Frame-Options | Clickjacking | SAMEORIGIN or DENY |
| X-Content-Type-Options | MIME sniffing | nosniff |
| Strict-Transport-Security | Protocol downgrade | max-age=31536000; includeSubDomains |
| Content-Security-Policy | XSS, data injection | Restrict script/style/image sources |
| Referrer-Policy | Information leakage | strict-origin-when-cross-origin |
| Permissions-Policy | Feature abuse | Disable camera, microphone, geolocation |

11. Rate Limiting

Rate limiting protects your server from brute-force attacks, DDoS attempts, and API abuse. Nginx uses a leaky bucket algorithm through the limit_req module. You define a shared memory zone that tracks request rates by client IP, then apply limits to specific locations.

http {
    # Define rate limit zones
    # 10 requests per second per IP (zone: 10MB shared memory)
    limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;

    # Stricter limit for login/auth endpoints
    limit_req_zone $binary_remote_addr zone=auth:10m rate=3r/s;

    # API rate limit by API key (from header)
    limit_req_zone $http_x_api_key zone=api:10m rate=100r/s;

    server {
        listen 80;
        server_name example.com;

        # General pages: allow bursts of up to 20 requests, processed without delay
        location / {
            limit_req zone=general burst=20 nodelay;
            limit_req_status 429;
            root /var/www/html;
        }

        # Login page: strict rate limiting
        location /login {
            limit_req zone=auth burst=5;
            limit_req_status 429;
            proxy_pass http://backend:3000;
        }

        # API endpoints
        location /api/ {
            limit_req zone=api burst=50 nodelay;
            limit_req_status 429;
            proxy_pass http://backend:3000;
        }
    }
}
Warning: When running behind a CDN or another reverse proxy, $binary_remote_addr holds the proxy's IP, not the client's, so all traffic shares a single rate-limit bucket. Instead of keying zones on forwarded headers directly (which clients can spoof), configure the real_ip module: declare the trusted proxy with set_real_ip_from and set real_ip_header X-Forwarded-For (or X-Real-IP). Then $binary_remote_addr reflects the actual client address and your limit_req_zone definitions work unchanged.
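A minimal sketch of that real_ip setup, assuming your proxy or CDN sits at 203.0.113.0/24 (replace with your provider's published address ranges):

```nginx
http {
    # Trust X-Forwarded-For only when the request arrives from these proxies
    set_real_ip_from 203.0.113.0/24;
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;

    # $binary_remote_addr now holds the real client IP,
    # so per-client rate limiting works behind the proxy
    limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
}
```

With real_ip_recursive on, Nginx walks the X-Forwarded-For chain from the right and skips every trusted proxy address, taking the first untrusted hop as the client.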

12. Logging Configuration

Nginx supports flexible logging through custom log formats. Access logs record every request, while error logs capture server-side problems. Proper logging is essential for debugging, monitoring, security auditing, and performance analysis.

http {
    # Custom log format with timing information
    log_format detailed '$remote_addr - $remote_user [$time_local] '
                        '"$request" $status $body_bytes_sent '
                        '"$http_referer" "$http_user_agent" '
                        'rt=$request_time uct=$upstream_connect_time '
                        'uht=$upstream_header_time urt=$upstream_response_time';

    # JSON log format (for log aggregation tools)
    log_format json_combined escape=json
        '{"time":"$time_iso8601",'
        '"remote_addr":"$remote_addr",'
        '"method":"$request_method",'
        '"uri":"$request_uri",'
        '"status":$status,'
        '"body_bytes":$body_bytes_sent,'
        '"request_time":$request_time,'
        '"upstream_time":"$upstream_response_time",'
        '"user_agent":"$http_user_agent"}';

    # Apply to server blocks
    access_log /var/log/nginx/access.log detailed;
    error_log /var/log/nginx/error.log warn;

    server {
        # Per-site logging
        access_log /var/log/nginx/example.access.log json_combined;

        # Disable logging for health checks and static assets
        location = /health {
            access_log off;
            return 200 "OK";
        }

        location ~* \.(jpg|png|css|js|ico)$ {
            access_log off;
        }
    }
}

13. Common Configuration Patterns

Below are complete, production-ready Nginx configuration patterns for the most common deployment scenarios. Each can be generated automatically with our Nginx Config Generator.

Pattern 1: Static Website

server {
    listen 443 ssl http2;
    server_name blog.example.com;

    ssl_certificate /etc/letsencrypt/live/blog.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/blog.example.com/privkey.pem;

    root /var/www/blog;
    index index.html;

    # Enable gzip for text content
    gzip on;
    gzip_types text/plain text/css application/json application/javascript image/svg+xml;
    gzip_min_length 256;

    # Cache static assets
    location ~* \.(css|js|jpg|jpeg|png|gif|svg|woff2|ico)$ {
        expires 30d;
        add_header Cache-Control "public, immutable";
    }

    # Clean URLs (no .html extension)
    location / {
        try_files $uri $uri/ $uri.html =404;
    }

    error_page 404 /404.html;
}

Pattern 2: Single Page Application (React/Vue/Angular)

server {
    listen 443 ssl http2;
    server_name app.example.com;

    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    root /var/www/app/dist;
    index index.html;

    # SPA: all routes fall back to index.html
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Cache versioned assets forever
    location /assets/ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    # Do not cache index.html
    location = /index.html {
        expires -1;
        add_header Cache-Control "no-cache, no-store, must-revalidate";
    }

    # API proxy to backend
    location /api/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Pattern 3: API Gateway

upstream users_service { server 10.0.1.1:3001; server 10.0.1.2:3001; }
upstream orders_service { server 10.0.2.1:3002; }
upstream payments_service { server 10.0.3.1:3003; }

server {
    listen 443 ssl http2;
    server_name api.example.com;

    ssl_certificate /etc/letsencrypt/live/api.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;

    # Global rate limiting (the zone itself must be defined in the http context:
    #   limit_req_zone $binary_remote_addr zone=api:10m rate=50r/s;)
    limit_req zone=api burst=100 nodelay;

    # CORS headers
    add_header Access-Control-Allow-Origin "https://app.example.com" always;
    add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS" always;
    add_header Access-Control-Allow-Headers "Authorization, Content-Type" always;

    # Route to microservices
    location /api/v1/users/ {
        proxy_pass http://users_service/;
    }

    location /api/v1/orders/ {
        proxy_pass http://orders_service/;
    }

    location /api/v1/payments/ {
        proxy_pass http://payments_service/;
    }

    # Health check endpoint
    location = /health {
        access_log off;
        default_type application/json;   # sets Content-Type for the returned body
        return 200 '{"status":"ok"}';
    }
}

Pattern 4: WordPress

server {
    listen 443 ssl http2;
    server_name wordpress.example.com;

    ssl_certificate /etc/letsencrypt/live/wordpress.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/wordpress.example.com/privkey.pem;

    root /var/www/wordpress;
    index index.php;

    client_max_body_size 64M;

    # WordPress permalinks
    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # PHP-FPM processing
    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
        fastcgi_intercept_errors on;
        fastcgi_buffer_size 16k;
        fastcgi_buffers 4 16k;
    }

    # Block access to sensitive files
    location ~ /\. { deny all; }
    location ~* /(?:uploads|files)/.*\.php$ { deny all; }
    location ~* /(wp-config|xmlrpc)\.php$ { deny all; }

    # Cache static assets
    location ~* \.(css|js|jpg|jpeg|png|gif|svg|woff2|ico)$ {
        expires 30d;
        access_log off;
    }
}

14. Nginx vs Apache: Which Should You Choose?

Both Nginx and Apache are production-grade web servers, but they have fundamentally different architectures and strengths. Understanding these differences helps you choose the right tool or combine them effectively.

| Feature | Nginx | Apache |
|---|---|---|
| Architecture | Event-driven, async, non-blocking | Process/thread per connection (prefork/worker MPM) |
| Memory usage | Very low (fixed workers) | Higher (scales with connections) |
| Static file serving | Extremely fast | Fast |
| Concurrent connections | 10,000+ with minimal RAM | Hundreds to low thousands |
| .htaccess support | Not supported | Full support (per-directory config) |
| Dynamic modules | Limited (compiled in, or dynamic since 1.9.11) | Extensive ecosystem (mod_rewrite, mod_php, etc.) |
| Reverse proxy | Built-in, excellent | Via mod_proxy |
| Configuration reload | Graceful reload, zero downtime | Graceful restart supported |
| Best for | Static files, reverse proxy, load balancing, API gateways | PHP apps, shared hosting, .htaccess workflows |

Recommendation: Use Nginx as your primary web server and reverse proxy for most modern deployments. If you need .htaccess support (common in shared hosting or WordPress with plugins that rely on it), either use Apache behind Nginx or consider migrating .htaccess rules to Nginx location blocks. Our Nginx Config Generator can help translate common Apache patterns into equivalent Nginx configuration.
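As a hedged example of such a translation, here is a common Apache .htaccess front-controller rewrite and its rough Nginx equivalent:

```nginx
# Apache .htaccess original:
#   RewriteEngine On
#   RewriteCond %{REQUEST_FILENAME} !-f
#   RewriteCond %{REQUEST_FILENAME} !-d
#   RewriteRule ^ index.php [L]
#
# Nginx equivalent: try the file, then the directory,
# then route everything else to index.php
location / {
    try_files $uri $uri/ /index.php?$args;
}
```

Most .htaccess rules map onto try_files, rewrite, or return directives; the difference is that Nginx applies them once at config load instead of re-reading a file on every request, which is part of why it is faster.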

15. Performance Tuning

Nginx is fast out of the box, but proper tuning can extract even more performance. These optimizations cover worker processes, connection handling, file I/O, and protocol-level improvements.

Worker and Connection Settings

# Set workers to number of CPU cores
worker_processes auto;

# Increase worker connection limit
events {
    worker_connections 4096;
    multi_accept on;
    use epoll;            # Linux only (most efficient)
}

http {
    # Efficient file transfer
    sendfile on;
    tcp_nopush on;        # Send headers and file in one packet
    tcp_nodelay on;       # Disable Nagle algorithm for low latency

    # Keepalive settings
    keepalive_timeout 65;
    keepalive_requests 1000;

    # File descriptor caching
    open_file_cache max=10000 inactive=30s;
    open_file_cache_valid 60s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    # Buffer sizes
    client_body_buffer_size 16k;
    client_header_buffer_size 1k;
    large_client_header_buffers 4 8k;
    client_max_body_size 8m;

    # Timeouts
    client_body_timeout 12;
    client_header_timeout 12;
    send_timeout 10;

    # Enable HTTP/2 (on listen directive)
    # listen 443 ssl http2;
}

Performance Tuning Checklist

| Setting | Recommended Value | Why |
|---|---|---|
| worker_processes | auto | One worker per CPU core for optimal parallelism |
| worker_connections | 2048-4096 | Total capacity = workers x connections |
| sendfile | on | Kernel-level file transfer, bypasses userspace |
| tcp_nopush | on | Combines headers and body into fewer TCP packets |
| gzip_comp_level | 4-6 | Best compression/CPU trade-off |
| open_file_cache | max=10000 inactive=30s | Reduces filesystem syscalls for frequently accessed files |
| keepalive_timeout | 65 | Reuses TCP connections, reducing handshake overhead |
| HTTP/2 | Enable on 443 ssl | Multiplexing and header compression |
Tip: Check your system limits with ulimit -n. Each worker needs enough file descriptors for connections plus open files. Set worker_rlimit_nofile 65535; in the main context if needed. On Linux, also increase net.core.somaxconn and net.ipv4.tcp_max_syn_backlog in sysctl for high-traffic servers.
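A sketch of those limits in place (the values are illustrative starting points for a busy server, not universal recommendations):

```nginx
# Main context: raise the per-worker file descriptor limit
# so workers can hold connections plus cached open files
worker_rlimit_nofile 65535;

events {
    worker_connections 4096;
}

# Matching Linux kernel settings, applied via sysctl
# (add to /etc/sysctl.conf, then run `sysctl -p`):
#   net.core.somaxconn = 65535
#   net.ipv4.tcp_max_syn_backlog = 65535
```

Keep worker_rlimit_nofile comfortably above worker_connections; each proxied connection can consume two descriptors (client side and upstream side).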

16. Essential Directives Quick Reference

Here is a compact reference of the most frequently used Nginx directives, organized by function. These are the directives that an Nginx config generator handles automatically.

| Directive | Context | Description |
|---|---|---|
| listen | server | Port and protocol to listen on (e.g., 80, 443 ssl http2) |
| server_name | server | Domain name(s) for this virtual host |
| root | server, location | Document root directory for file serving |
| index | server, location | Default files to serve for directory requests |
| try_files | location | Try files in order, fall back to the last option |
| proxy_pass | location | Forward requests to a backend server |
| upstream | http | Define a group of backend servers |
| ssl_certificate | server | Path to the SSL/TLS certificate chain |
| add_header | server, location | Add a custom HTTP response header |
| expires | location | Set Cache-Control max-age |
| gzip | http, server | Enable or disable response compression |
| limit_req_zone | http | Define a rate-limiting zone and its parameters |
| client_max_body_size | server, location | Maximum upload/request body size |
| error_page | server, location | Custom error pages for HTTP status codes |
| access_log | http, server, location | Configure access log path and format |

17. Testing and Debugging Your Configuration

Before applying any Nginx configuration change in production, always validate it first. A single typo can bring down your entire server. Here are the essential commands and techniques for safe Nginx configuration management.

# Test configuration syntax (ALWAYS run before reload)
nginx -t

# Test and show the full parsed configuration
nginx -T

# Reload configuration without downtime
sudo nginx -s reload
# Or using systemctl:
sudo systemctl reload nginx

# View real-time error log
tail -f /var/log/nginx/error.log

# View access log with request timing
tail -f /var/log/nginx/access.log

# Check which Nginx processes are running
ps aux | grep nginx

# Verify Nginx is listening on expected ports
ss -tlnp | grep nginx

# Test specific URLs with curl
curl -I https://example.com          # Check response headers
curl -w "%{time_total}s\n" -o /dev/null -s https://example.com  # Response time
Important: Never use nginx -s stop or systemctl restart nginx in production unless absolutely necessary. These commands terminate existing connections. Always prefer nginx -s reload which gracefully starts new workers while old workers finish serving current requests.

Stop Writing Nginx Configs by Hand

Select your server type, SSL settings, backend, caching rules, and security headers. Get a complete, validated nginx.conf instantly.

Try the Nginx Config Generator →

18. Production Deployment Checklist

Before deploying your Nginx configuration to production, verify each of these items. This checklist represents the collective wisdom of thousands of production Nginx deployments.

  • SSL/TLS: HTTPS enabled, HTTP redirects to HTTPS, TLSv1.2+ only, HSTS header set
  • Security headers: X-Frame-Options, X-Content-Type-Options, CSP, Permissions-Policy configured
  • Compression: Gzip enabled for text/html, text/css, application/javascript, application/json
  • Caching: Static assets have long expires headers, HTML files set to no-cache
  • Rate limiting: Applied to login endpoints, API routes, and form submissions
  • Logging: Access and error logs configured with rotation (logrotate)
  • server_tokens: Set to off to hide Nginx version
  • client_max_body_size: Set appropriately for your application (not unlimited)
  • Backup: Configuration files backed up before changes
  • Test: Configuration validated with nginx -t before every reload
  • Monitoring: Nginx status module enabled (stub_status) for metrics collection
  • Firewall: Only ports 80 and 443 exposed to the internet
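The stub_status endpoint from the checklist can be enabled with a small, access-restricted location. A sketch (requires nginx built with ngx_http_stub_status_module, included in most distribution packages):

```nginx
server {
    # Bind to loopback only; never expose metrics publicly
    listen 127.0.0.1:8080;

    location = /nginx_status {
        stub_status;
        # Allow only local scrapers (e.g., a monitoring agent)
        allow 127.0.0.1;
        deny all;
        access_log off;
    }
}
```

The endpoint reports active connections, accepted/handled totals, and reading/writing/waiting counts, which monitoring agents such as Prometheus exporters consume directly.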

19. Frequently Asked Questions

Q: What is an Nginx config generator and why should I use one?

A: An Nginx config generator is an online tool that produces valid nginx.conf configuration files based on your requirements. Instead of writing directives from scratch and risking syntax errors, you select options like server type, SSL settings, reverse proxy backends, and caching rules, and the generator outputs a production-ready configuration. This saves time, reduces configuration mistakes, and helps developers who are not Nginx experts deploy secure, performant web servers quickly.

Q: How do I set up Nginx as a reverse proxy for Node.js?

A: Create a server block that listens on port 80 (or 443 for HTTPS) and uses the proxy_pass directive to forward requests to your Node.js application, typically running on localhost:3000. Include proxy_set_header directives to pass the original Host, X-Real-IP, and X-Forwarded-For headers to your backend. Add proxy_http_version 1.1 and set the Upgrade and Connection headers for WebSocket support. Use an upstream block if you have multiple Node.js instances for load balancing.

Q: What is the difference between Nginx and Apache for web serving?

A: Nginx uses an event-driven, asynchronous, non-blocking architecture that handles thousands of concurrent connections with minimal memory. Apache traditionally uses a process-per-request or thread-per-request model that consumes more RAM under load. Nginx excels at serving static files, reverse proxying, and load balancing. Apache offers .htaccess per-directory configuration and a larger selection of built-in modules. Many production environments use Nginx as a front-end reverse proxy with Apache behind it for PHP applications.

Q: How do I enable HTTPS with Let's Encrypt on Nginx?

A: Install Certbot and run certbot --nginx -d yourdomain.com. Certbot automatically obtains a free TLS certificate, modifies your Nginx configuration to use it, and sets up automatic renewal. The certificate files are stored in /etc/letsencrypt/live/yourdomain.com/. You should also add an HTTP-to-HTTPS redirect server block, enable HSTS, configure modern TLS protocols (TLSv1.2 and TLSv1.3), and select strong cipher suites. An Nginx config generator can produce all of these settings automatically.
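The redirect and TLS settings described above might look like the following sketch, assuming Certbot has already placed certificates under /etc/letsencrypt/live/yourdomain.com/:

```nginx
# Redirect all plain-HTTP traffic to HTTPS
server {
    listen 80;
    server_name yourdomain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;

    # HSTS: browsers remember to use HTTPS for one year
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
}
```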

Q: How do I configure Nginx for a Single Page Application like React or Vue?

A: For SPAs, the key directive is try_files $uri $uri/ /index.html inside your location / block. This tells Nginx to first look for the exact file requested, then a directory, and finally fall back to index.html for client-side routing. Set the root to your build output directory (e.g., /var/www/app/dist). Add long cache expiration headers for static assets in /assets/ and no-cache for index.html itself so users always get the latest version.
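Put together, the SPA setup described above looks roughly like this (paths and domain are placeholders for your own build output):

```nginx
server {
    listen 80;
    server_name app.example.com;
    root /var/www/app/dist;

    # Fall back to index.html so client-side routing works
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Hashed/versioned build assets can be cached aggressively
    location /assets/ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    # Always revalidate index.html so users get new deployments
    location = /index.html {
        add_header Cache-Control "no-cache";
    }
}
```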

Q: What are the most important Nginx security headers?

A: Essential security headers include: X-Frame-Options DENY or SAMEORIGIN to prevent clickjacking, X-Content-Type-Options nosniff to block MIME-type sniffing, X-XSS-Protection "1; mode=block" as a legacy XSS filter, Strict-Transport-Security with max-age=31536000 and includeSubDomains for HSTS, Referrer-Policy strict-origin-when-cross-origin for privacy, Content-Security-Policy to control resource loading, and Permissions-Policy to restrict browser features like camera and geolocation.
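As a sketch, the headers listed above can be added inside a server block like so. The Content-Security-Policy and Permissions-Policy values shown are deliberately restrictive starting points and almost always need tuning per site:

```nginx
# The "always" flag ensures headers are also added on error responses
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Content-Security-Policy "default-src 'self'" always;
add_header Permissions-Policy "camera=(), geolocation=(), microphone=()" always;
```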

Q: How do I configure Nginx load balancing with health checks?

A: Define an upstream block with your backend servers and select a load balancing method: round-robin (default), least_conn, ip_hash, or random with two choices. Add max_fails and fail_timeout parameters to each server for passive health checks. For example, server 10.0.0.1:3000 max_fails=3 fail_timeout=30s removes a server from the pool after 3 consecutive failures for 30 seconds. Mark backup servers with the backup keyword. Nginx Plus (commercial) adds active health checks that proactively test backends.
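A sketch of the upstream configuration described above, assuming three backend instances on a private network (addresses are placeholders):

```nginx
upstream backend {
    least_conn;  # route each request to the server with the fewest active connections

    # Passive health checks: after 3 failures, skip this server for 30s
    server 10.0.0.1:3000 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:3000 max_fails=3 fail_timeout=30s;
    server 10.0.0.3:3000 backup;  # used only when the primary servers are down
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```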

Q: How do I optimize Nginx for maximum performance?

A: Set worker_processes to auto (one per CPU core) and worker_connections to 2048 or higher. Enable sendfile, tcp_nopush, and tcp_nodelay for efficient file transfer. Turn on gzip compression for text content with gzip_comp_level 4-6. Set keepalive_timeout to around 65 seconds and enable keepalive connections to upstream servers. Use open_file_cache to reduce filesystem lookups. Add expires headers for static assets (30 days for images, 1 year for versioned assets). Enable HTTP/2 with the http2 parameter on your listen directive (or the standalone http2 on; directive in Nginx 1.25.1 and later).
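The tuning directives above fit together roughly like this minimal sketch of the main and http contexts (values are the starting points suggested above, not universal optima):

```nginx
# Main context
worker_processes auto;           # one worker per CPU core

events {
    worker_connections 2048;     # max simultaneous connections per worker
}

http {
    # Efficient file transfer
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    keepalive_timeout 65;

    # Compress text-based responses
    gzip on;
    gzip_comp_level 5;
    gzip_types text/plain text/css application/json application/javascript;

    # Cache open file descriptors to cut filesystem lookups
    open_file_cache max=10000 inactive=20s;
    open_file_cache_valid 30s;
}
```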

Conclusion

Nginx is an incredibly powerful and efficient web server that serves as the backbone of modern web infrastructure. From simple static file serving to complex microservice API gateways with load balancing and SSL termination, a well-configured Nginx deployment provides the performance, security, and reliability your applications need.

However, Nginx configuration syntax has a steep learning curve with hundreds of directives, each valid only in specific contexts. A single misplaced semicolon or incorrect directive can cause the entire configuration to fail. That is why an Nginx config generator is invaluable: it eliminates syntax errors, applies best practices automatically, and lets you focus on architecture rather than memorizing directive names.

Whether you are deploying a new project or optimizing an existing server, our free Nginx Config Generator can produce a complete, production-ready nginx.conf tailored to your exact requirements in seconds. Combine it with the knowledge from this guide, and you will have Nginx configurations that are secure, performant, and maintainable.

