This guide covers optimizing Nginx by modifying the nginx.conf file on a Cloud Server; the optimization will differ on bare metal. A Cloud Server is a multi-tenant environment: even when an instance exposes a multi-core processor, the cores are virtual. There is a definite difference between a virtual instance and a computer with a physical multi-core processor or multiple physical CPUs. Single-tenant environments, including bare metal, colocation, and dedicated servers, have a physically definable CPU and motherboard dedicated to you.
Optimizing Nginx (nginx.conf) on Cloud Server : Where We Have Started From
You can start from our previous guide on how to install WordPress with Nginx on Rackspace Cloud Server; we also have a helper video for installing Nginx. Those who are completely new to Nginx can read articles like Basics of nginx HTTP Server, Shifting WordPress from Apache to nginx Web Server, and Reasons to Switch to Nginx From Apache on Cloud Server. We recommend running MySQL on a separate server from the application server; it makes things easier. For testing purposes, you can use one server (2 GB, Performance 1 on Rackspace); 2 GB offers 2 virtual cores. You should install the Rackspace Cloud Monitoring Agent to check the load on CPU and RAM.
Optimizing Nginx (nginx.conf) on Cloud Server
Nginx needs much less tweaking than PHP, Apache2, or MySQL. Please do not use Nginx as a proxy for Apache2 on the same server.
---
First, check the guide on how to install WordPress with Nginx on Rackspace Cloud Server again for a quick recap; we have not yet modified /etc/nginx/nginx.conf. Open it:
```shell
nano /etc/nginx/nginx.conf
```
The code block starts with :
```nginx
worker_processes 8;

events {
    worker_connections 1024;
    multi_accept on;
}
```
worker_processes indicates the number of CPU cores; on the 2 GB instance, it is 2. For a multi-CPU computer with multi-core processors, the number is a simple sum: 2 hexa-core processors = 12 worker_processes. We can get this figure by running this command:
```shell
grep processor /proc/cpuinfo | wc -l
```
Thankfully, we can set it to auto, so we no longer need to specify the raw number.
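As a quick cross-check before relying on auto, you can count the cores in two ways (a sketch, assuming a GNU/Linux server; in containers with CPU limits the two figures may differ):

```shell
# Cores as seen by the scheduler (what worker_processes auto will use)
nproc
# Cores listed by the kernel; usually matches nproc on a plain server
grep -c ^processor /proc/cpuinfo
```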
The second directive is worker_connections: the maximum number of simultaneous connections each worker process can open. It is capped by the open-file limit set by the Operating System. How do we know that limit? We can run this command:
```shell
ulimit -n
```
If you run this on a 15″ MacBook Pro, it will return a value like 256; on a GNU/Linux server, it is usually 1024. Theoretically, the maximum number of simultaneous clients Nginx can handle = worker_processes x worker_connections; that is 1024 x 2 = 2048 in our case. These two directives prevent adverse situations; we will increase capacity via other directives. multi_accept makes each worker immediately accept as many new connections as it can; it is related to the kernel socket setup, and we keep it on. One parameter that is not present is worker_rlimit_nofile, which raises the open-file limit for worker processes. Another absent directive is use epoll; this event model is generally recommended on Linux. So our final block becomes:
```nginx
worker_processes auto;

events {
    worker_connections 2048;
    multi_accept on;
    use epoll;
}

worker_rlimit_nofile 40000;
```
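The capacity estimate above is simple arithmetic; a throwaway shell sketch using the numbers from our 2 GB instance:

```shell
# Theoretical maximum simultaneous clients = worker_processes x worker_connections
workers=2          # cores on the 2 GB instance
connections=1024   # per-worker limit, matching ulimit -n
echo $(( workers * connections ))   # prints 2048
```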
We should do a configtest before reloading :
```shell
nginx -t
# output
# nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
# nginx: configuration file /etc/nginx/nginx.conf test is successful

# or this command; it will return [OK] if everything is fine
/etc/init.d/nginx configtest
```
Now, do a reload and optionally restart :
```shell
service nginx reload
# service nginx restart
```
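After the reload, you can confirm that worker_processes auto spawned one worker per core (a sketch; it assumes Nginx is running and pgrep is available):

```shell
# Count running Nginx worker processes; it should equal the core count
pgrep -c -f "nginx: worker process"
nproc
```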
Next block starts with :
```nginx
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 15;
}
```
We only need to lower keepalive_timeout to a smaller value such as 10. We will also add a block here:
```nginx
open_file_cache max=2000 inactive=20s;
open_file_cache_valid 60s;
open_file_cache_min_uses 5;
open_file_cache_errors off;

client_body_buffer_size 10K;
client_header_buffer_size 1k;
client_max_body_size 8m;
client_body_timeout 12;
client_header_timeout 12;

large_client_header_buffers 2 1k;
send_timeout 10;
```
client_body_buffer_size sets the client buffer size for any POST body sent to Nginx. client_header_buffer_size is similar, but handles the client header size only; for safety, 1k is usually a good value. client_max_body_size is the maximum allowed size of a client request body; if exceeded, Nginx will throw a 413 error. large_client_header_buffers sets the maximum number and size of buffers for large client headers. client_body_timeout and client_header_timeout define how long Nginx waits for the body or header after a request is sent. keepalive_timeout sets the timeout for keep-alive connections; lower is better for avoiding delays.
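To see client_max_body_size in action, you can POST a file larger than 8 MB; Nginx should answer with 413. A sketch, where http://example.com/upload stands in for a real endpoint on your server:

```shell
# Create a ~9 MB dummy file, just over the 8m limit
dd if=/dev/zero of=/tmp/big.bin bs=1M count=9 2>/dev/null
# Hypothetical upload URL -- replace with your own; prints the HTTP status,
# and 413 is expected when the body exceeds client_max_body_size
curl -s -o /dev/null -w "%{http_code}\n" --data-binary @/tmp/big.bin http://example.com/upload
```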
Up to this point, our file will look like this:
```nginx
worker_processes auto;

events {
    worker_connections 2048;
    multi_accept on;
    use epoll;
}

worker_rlimit_nofile 40000;

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 10;

    large_client_header_buffers 2 1k;
    send_timeout 10;

    open_file_cache max=2000 inactive=20s;
    open_file_cache_valid 60s;
    open_file_cache_min_uses 5;
    open_file_cache_errors off;

    client_body_buffer_size 10K;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    client_body_timeout 12;
    client_header_timeout 12;

    # more stuff here
}
```
Next, we will set the logging directives:
```nginx
access_log off;
error_log logs/error.log crit;
```
The access log is not required because you can track visits with other software. The error log grows huge over time, so we will log only critical errors. Last comes gzip:
```nginx
gzip on;
gzip_comp_level 6;
gzip_min_length 1024;
gzip_proxied expired no-cache no-store private auth;
gzip_vary on;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
gzip_disable "msie6";
gzip_http_version 1.1;
```
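Once gzip is enabled and the configuration is reloaded, you can verify compression from any machine with curl (a sketch; substitute your own domain for example.com):

```shell
# Ask for a compressed response and inspect the headers;
# a working setup returns a "Content-Encoding: gzip" header
curl -s -I -H "Accept-Encoding: gzip" http://example.com/ | grep -i "content-encoding"
```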
Optimizing Nginx (nginx.conf) on Cloud Server : Final Result
Finally, the full file becomes:
```nginx
worker_processes auto;

events {
    worker_connections 2048;
    multi_accept on;
    use epoll;
}

worker_rlimit_nofile 40000;

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 10;

    large_client_header_buffers 2 1k;
    send_timeout 10;

    open_file_cache max=2000 inactive=20s;
    open_file_cache_valid 60s;
    open_file_cache_min_uses 5;
    open_file_cache_errors off;

    client_body_buffer_size 10K;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    client_body_timeout 12;
    client_header_timeout 12;

    gzip on;
    gzip_comp_level 6;
    gzip_min_length 1024;
    gzip_vary on;
    # gzip_proxied any;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_disable "msie6";
    gzip_http_version 1.1;

    # more stuff here
}
```
Test with :
```shell
nginx -t
# or
/etc/init.d/nginx configtest
```
Then restart :
```shell
service nginx reload
# service nginx restart
```
If you spot any duplicated directives, please let us know. There is a directive named server_tokens; it is usually present but commented out in the default file. Enable it as server_tokens off; for security reasons, so Nginx does not advertise its version. As for syntax, note that only worker_processes and worker_rlimit_nofile sit outside any { curly braces }.