Nginx Configuration
Nginx can be configured for:
- Load Balancing
- Proxy for Java application servers like Tomcat, Jetty, WebLogic Server and GlassFish
- Reverse Proxy
- SSL-Offloading
- Caching (plus Reverse Proxy)
- Much more...
Reverse Proxy
Illustrated by examples: JIRA, Confluence and GitLab.
JIRA
JIRA running on port 8080 (Tomcat) behind Nginx: a simple proxy between jira.au.oracle.com:80 <=> localhost:8080
Apache configuration (JIRA as root context)
```apache
<VirtualHost *:80>
    ServerName jira.au.oracle.com
    DocumentRoot /var/www
    ProxyPass / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>
```
NOTE: Nginx does NOT have ProxyPassReverse. The solution is to add a few missing HTTP headers. See also http://wiki.nginx.org/HttpProxyModule#proxy_redirect (the wiki is partly incorrect): if you need Location header rewriting, you will need to use proxy_redirect as well.
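When the back-end emits absolute redirects, `proxy_redirect` rewrites the Location (and Refresh) response headers much like Apache's ProxyPassReverse. A minimal sketch, reusing the hostnames from this JIRA example:

```nginx
location / {
    proxy_pass http://localhost:8080;
    # Rewrite redirect headers the back-end sends, e.g.
    # http://localhost:8080/foo -> http://jira.au.oracle.com/foo
    proxy_redirect http://localhost:8080/ http://jira.au.oracle.com/;
}
```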
Nginx
```nginx
server {
    listen *:80 default_server;
    listen [::]:80 default_server;
    server_name jira.au.oracle.com;

    location / {
        root /var/www;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:8080;
    }
    ...
}
```
Confluence
Confluence running on port 8080 (Tomcat) behind a reverse proxy: a simple proxy between support.au.oracle.com:80 <=> localhost:8080
Apache Configuration
```apache
# Put this after the other LoadModule directives
LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so

# Put this in the main section of your configuration
# (or desired virtual host, if using Apache virtual hosts)
ProxyRequests Off
ProxyPreserveHost On

<Proxy *>
    Order deny,allow
    Allow from all
</Proxy>

ProxyPass /confluence http://localhost:8080/confluence
ProxyPassReverse /confluence http://localhost:8080/confluence

<Location /confluence>
    Order allow,deny
    Allow from all
</Location>
```
Nginx
```nginx
server {
    listen *:80; ## listen for ipv4; this line is default and implied
    # listen [::]:80 default ipv6only=on; ## listen for ipv6

    root /var/www;
    # index index.html index.htm;

    # Make site accessible from http://localhost/
    server_name support.au.oracle.com;

    location /confluence {
        # First attempt to serve request as file, then
        # as directory, then fall back to index.html
        # try_files $uri $uri/ /index.html;

        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules

        proxy_pass http://localhost:8080;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    ...
}
```
GitLab
HTTP (gitlab)
```nginx
# GITLAB
# Maintainer: @randx
# App Version: 5.0

upstream gitlab {
    server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
}

server {
    listen *:80 default_server;   # e.g., listen 192.168.1.1:80; in most cases *:80 is a good idea
    server_name YOUR_SERVER_FQDN; # e.g., server_name source.example.com;
    server_tokens off;            # don't show the version number, a security best practice
    root /home/git/gitlab/public;

    # individual nginx logs for this gitlab vhost
    access_log /var/log/nginx/gitlab_access.log;
    error_log /var/log/nginx/gitlab_error.log;

    location / {
        # serve static files from defined root folder;
        # @gitlab is a named location for the upstream fallback, see below
        try_files $uri $uri/index.html $uri.html @gitlab;
    }

    # if a file which is not found in the root folder is requested,
    # then the proxy passes the request to the upstream (gitlab unicorn)
    location @gitlab {
        proxy_read_timeout 300;    # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_redirect off;

        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;

        proxy_pass http://gitlab;
    }
}
```
HTTPS (gitlab-https)
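The HTTPS variant was not reproduced in the original; a minimal sketch under the same layout, terminating SSL at Nginx and redirecting plain HTTP (the certificate and key paths are placeholders):

```nginx
# Redirect all plain HTTP traffic to HTTPS
server {
    listen *:80 default_server;
    server_name YOUR_SERVER_FQDN;
    return 301 https://$host$request_uri;
}

server {
    listen *:443 ssl;
    server_name YOUR_SERVER_FQDN;
    server_tokens off;
    root /home/git/gitlab/public;

    # Placeholder certificate and key paths
    ssl_certificate /etc/nginx/ssl/gitlab.crt;
    ssl_certificate_key /etc/nginx/ssl/gitlab.key;

    location / {
        try_files $uri $uri/index.html $uri.html @gitlab;
    }

    location @gitlab {
        proxy_redirect off;
        # Tell the application the original request was HTTPS
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://gitlab;
    }
}
```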
WordPress
nginx + php-fpm
```nginx
server {
    listen 80;

    root /var/www;
    index index.php index.html index.htm;

    # individual nginx logs for this wordpress vhost
    access_log /var/log/nginx/wordpress_access.log;
    error_log /var/log/nginx/wordpress_error.log;

    server_name your_domain.com www.your_domain.com;

    location / {
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_index index.php;
        # include fastcgi_params;
        # nginx 1.6.1 upstream delivers fastcgi.conf
        include fastcgi.conf;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location ~ /\. {
        deny all;
    }

    location ~* /(?:uploads|files)/.*\.php$ {
        deny all;
    }
}
```
NOTE:
The configuration includes rules for how Nginx serves favicon requests and robots.txt requests (which search engines use to index sites). We don't need to log these requests.
A location block that denies access to any hidden folders (dot files in a Linux system). This will prevent us from serving files that may have been used for Apache configurations, like .htaccess files. It is more secure to keep these files from our users.
Finally, prevent any PHP files from being run or accessed from within the uploads or files directory. This can prevent our server from executing malicious code.
NOTE: Nginx (before 1.6.1) shipped a modified fastcgi_params which declared the SCRIPT_FILENAME fastcgi_param. This line was removed in 1.6.1. Switch to fastcgi.conf (as per the upstream repository), which includes the same SCRIPT_FILENAME parameter value.
Another workaround is to add the diff into the old fastcgi_params config file without touching the per site configuration files.
More information: https://bugs.launchpad.net/nginx/+bug/1366651
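The workaround amounts to appending the SCRIPT_FILENAME line that fastcgi.conf ships by default to the old fastcgi_params file:

```nginx
# Append to the old /etc/nginx/fastcgi_params
# (the same line fastcgi.conf provides)
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
```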
Separate configuration files (Best Practice)
Simple reverse proxy, proxy.conf included in main
```nginx
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffers 32 4k;
```
Include it in the main configuration file:
```nginx
server {
    listen *:80 default_server;
    listen [::]:80 default_server;
    server_name jira.au.oracle.com;

    location / {
        root /var/www;
        include proxy.conf;
        proxy_pass http://localhost:8080;
    }
}
```
Simple Load Balancing
Simple WebLogic Server Cluster Load Balancing
A simple WebLogic Server cluster with 4 nodes (managed servers); group them in an upstream block that Nginx interacts with.
There are a few rewrites that ensure that Nginx serves the static files, while all dynamic requests are processed by the WebLogic Server back-end. It also shows how to set the proxy headers correctly so that the client IP is forwarded to the back-end application. Many applications need the client IP to show geo-located information, and logging this IP can help identify whether geography is a factor when the site is not working properly for specific clients.
```nginx
upstream weblogic_cluster {
    server 10.187.65.101:7001;
    server 10.187.65.102:7001;
    server 10.187.65.103:7001;
    server 10.187.65.104:7001;
}

server {
    listen 80;
    server_name weblogic.au.oracle.com;
    root /var/www;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        try_files $uri $uri/index.html $uri.html @weblogic;
    }

    # named locations must be defined at server level
    location @weblogic {
        include proxy.conf;
        proxy_pass http://weblogic_cluster;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}
```
Nginx example
```nginx
upstream big_server_com {
    server 127.0.0.3:8000 weight=5;
    server 127.0.0.3:8001 weight=5;
    server 192.168.0.1:8000;
    server 192.168.0.1:8001;
}

server {
    # simple load balancing
    listen 80;
    server_name big.server.com;
    access_log logs/big.server.access.log main;

    location / {
        proxy_pass http://big_server_com;
    }
}
```
GitLab nginx configuration example
In this upstream block, Nginx interacts directly with a Unix socket. The server directive accepts a domain name, an IP address with port, or a Unix socket; if a domain name resolves to several addresses, all of them are used.
```nginx
upstream gitlab {
    server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
}
```
Note: HTTPUpstreamModule http://wiki.nginx.org/HttpUpstreamModule
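To illustrate the address forms the upstream block accepts, a hypothetical mixed example (hostnames and addresses are illustrative):

```nginx
upstream app_backend {
    server app.example.com:8080;         # domain name; if it resolves to several addresses, all are used
    server 192.168.0.10:9000 weight=2;   # IP address and port, with a higher weight
    server unix:/var/run/app/app.socket; # unix domain socket
    server 192.168.0.11:9000 backup;     # only receives requests when the others are down
}
```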
Reverse Proxy For Multiple Back-ends
As traffic increases, the need to scale the site up becomes a necessity. With a transparent reverse proxy like Nginx in front, most users never even see the scaling affecting their interactions with the site. Usually, for smaller sites one backend process is sufficient to handle the oncoming traffic. As the site popularity increases, the first solution is to increase the number of backend processes and let Nginx multiplex the client requests. This recipe takes a look at how to add new backend processes to Nginx.
```nginx
upstream backend {
    server backend1.au.oracle.com weight=5;
    server backend2.au.oracle.com max_fails=3 fail_timeout=30s;
    server backend3.au.oracle.com;
    fair no_rr;  # requires the third-party upstream fair module
}

server {
    listen 80;
    server_name au.oracle.com;
    access_log /var/www/au.oracle.com/log/nginx.access.log;
    error_log /var/www/au.oracle.com/log/nginx_error.log debug;

    # set your default location
    location / {
        include proxy.conf;
        proxy_pass http://backend;
    }
}
```
Explanation
In this configuration we set up an upstream, which is nothing but a set of servers with some proxy parameters. For the server http://backend1.au.oracle.com, we have set a weight of five, which means that the majority of the requests will be directed to that server. This can be useful when some servers are more powerful than others. For the next server, http://backend2.au.oracle.com, we have set the parameters such that three failed requests over a period of 30 seconds will result in the server being considered inoperative. The last one is a plain vanilla setup, where one error in a ten-second window will make the server inoperative (the nginx defaults: max_fails=1, fail_timeout=10s).
This shows the thought put into the design of Nginx: it seamlessly handles problematic servers by moving them into the set of inoperative servers. Requests are distributed across the remaining servers in round-robin fashion.
NOTE: In this particular "fair" mode (no_rr), requests are sent to the first idle backend. The goal of this module is to avoid sending requests to already busy backends, since it keeps track of how many requests each backend is currently processing. This is a much better model than the round robin implemented by the default upstream directive.
More: You can run this load balancer module in a few other modes, listed below, based on your needs. This is a very simple way of ensuring that no backend experiences uneven load compared to the rest.
Modes:
- default (fair)
- no_rr
- weight_mode=idle no_rr
- weight_mode=peak
Peak weight mode setup example
```nginx
upstream backend {
    server backend1.example1.com weight=4;
    server backend2.example1.com weight=3;
    server backend3.example1.com weight=4;
    fair weight_mode=peak;
}
```
Fair load balancer module for Nginx: http://nginx.localdomain.pl/wiki/UpstreamFair
Reverse Proxy with Caching
Example
```nginx
http {
    proxy_cache_path /var/www/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m;
    proxy_temp_path /var/www/cache/tmp;

    server {
        listen 80;
        server_name example1.com;
        access_log /var/www/example1.com/log/nginx.access.log;
        error_log /var/www/example1.com/log/nginx_error.log debug;

        # set your default location
        location / {
            include proxy.conf;
            proxy_pass http://127.0.0.1:8080/;
            proxy_cache my-cache;
            proxy_cache_valid 200 302 60m;
            proxy_cache_valid 404 1m;
        }
    }
}
```
Explanation
This configuration implements a simple cache with a 1000 MB maximum size: HTTP 200 and 302 responses are kept in the cache for 60 minutes and HTTP 404 responses for 1 minute. The initial proxy_cache_path directive creates the cache area on startup; the subsequent directives configure the location that is going to be cached. It is possible to set up more than one cache path for multiple locations.
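When debugging cache behaviour, it can help to expose the built-in $upstream_cache_status variable as a response header. A sketch extending the location above:

```nginx
location / {
    include proxy.conf;
    proxy_pass http://127.0.0.1:8080/;
    proxy_cache my-cache;
    proxy_cache_valid 200 302 60m;
    proxy_cache_valid 404 1m;
    # Expose the cache result (HIT, MISS, EXPIRED, ...) for debugging
    add_header X-Cache-Status $upstream_cache_status;
}
```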
Nginx Configuration File Template on Gist
https://gist.github.com/terrywang/9612069
```nginx
# User and group used by worker processes
user www-data;

# Ideally # of worker processes = # of CPUs or cores
# Set to auto to autodetect
# max_clients = worker_processes * worker_connections
worker_processes auto;

pid /run/nginx.pid;

# Maximum number of open file descriptors per process
# should be > worker_connections
worker_rlimit_nofile 10240;

events {
    # Use epoll on Linux 2.6+
    use epoll;
    # Max number of simultaneous connections per worker process
    worker_connections 2048;
    # Accept all new connections at one time
    multi_accept on;
}

http {
    ##
    # Basic Settings
    ##

    # Hide nginx version information
    server_tokens off;

    # Speed up file transfers by using sendfile() to copy directly
    # between descriptors rather than using read()/write()
    sendfile on;

    # Tell Nginx not to send out partial frames; this increases throughput
    # since TCP frames are filled up before being sent out (adds TCP_CORK)
    # Send the response header and the beginning of a file in one packet
    # Send a file in full packets
    tcp_nopush on;

    # Tell Nginx to enable the Nagle buffering algorithm for TCP packets
    # which collates several smaller packets together into one larger packet
    # thus saving bandwidth at the cost of a nearly imperceptible increase to latency
    tcp_nodelay off;

    send_timeout 30;

    # How long to allow each connection to stay idle;
    # Longer values are better for each individual client, especially SSL
    # But means that worker connections are tied up longer.
    keepalive_timeout 60;
    keepalive_requests 200;

    # client_header_timeout 20;
    # client_body_timeout 20;

    reset_timedout_connection on;

    types_hash_max_size 2048;
    server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    # default_type application/octet-stream;
    default_type text/html;
    charset UTF-8;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    # Enable Gzip compression
    gzip on;

    # This should be turned on if pre-compressed copies (.gz) of static files exist
    # If NOT it should be left off as it will cause extra I/O
    # default: off
    # gzip_static on;

    # Do NOT compress anything smaller than 256 bytes
    gzip_min_length 256;

    # Fuck IE6
    gzip_disable "msie6";

    # Tell proxies to cache both the gzipped and regular version of a resource
    # whenever the client's Accept-Encoding capabilities header varies;
    # Avoids the issue where a non-gzip capable client (rare)
    # would display gibberish if their proxy gave them the gzipped version.
    # gzip_vary on;

    # Compress data even for clients that are connecting via proxies
    # Identified by the "Via" header
    gzip_proxied any;

    # Compression level (1-9)
    # 5 is the perfect compromise between size and CPU usage
    gzip_comp_level 5;

    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;

    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    # Cache open file descriptors, their sizes and mtime
    # information on existence of directories
    # file lookup errors such as "file not found", "no read permission" and so on
    #
    # Pros: nginx can immediately begin sending data when a popular file is requested
    # and will also immediately send a 404 if a file doesn't exist, and so on
    #
    # Cons: The server will NOT react immediately to changes on file system
    # which may be undesirable
    #
    # Config: inactive files are released from the cache after 20 seconds
    # whereas active (recently requested) files are re-validated every 30 seconds
    # File descriptors will NOT be cached unless they are used at least twice in 20s (inactive)
    #
    # A maximum of the 1000 most recently used file descriptors will be cached at any time
    #
    # Production servers with stable file collections will definitely want to enable the cache
    open_file_cache max=1000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    ##
    # nginx-naxsi config
    ##
    # Uncomment it if you installed nginx-naxsi
    ##
    #include /etc/nginx/naxsi_core.rules;

    ##
    # nginx-passenger config
    ##
    # Uncomment it if you installed nginx-passenger
    ##
    #passenger_root /usr;
    #passenger_ruby /usr/bin/ruby;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

#mail {
#    # See sample authentication script at:
#    # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#    # auth_http localhost/auth.php;
#    # pop3_capabilities "TOP" "USER";
#    # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#    server {
#        listen localhost:110;
#        protocol pop3;
#        proxy on;
#    }
#
#    server {
#        listen localhost:143;
#        protocol imap;
#        proxy on;
#    }
#}
```
NOTE: works out of the box (OOTB) on Debian and Ubuntu.