Nginx configuration for performance on a Cloud Server differs from that on a physical server with multiple processors or multiple cores. Previously, we talked about multi-core processors. This short guide walks you through Nginx configuration for performance on Cloud Server instances, where the cores are virtual, not physical.
Nginx Configuration For Performance on Cloud Server
A physical server will obviously give measurable results, but the risks, the cost of maintenance, and the difficulty of changing hardware in real time are higher. Practically, we cannot take much risk of going above the limit with a Cloud Server instance; it can die so badly that you cannot even SSH in. There is no real hard disk, and if the instance fails to reboot, getting it back is a challenge.
The global Nginx configuration file is located at /etc/nginx/nginx.conf. The directives below fall under the Core Module documentation on the official website: http://nginx.org/en/docs/ngx_core_module.html
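Before tuning, it can help to confirm which nginx version you are running and which modules it was compiled with; a quick check using nginx's standard command-line flags:

```bash
# Print only the nginx version
nginx -v

# Print the version plus compiler and configure arguments (compiled-in modules)
nginx -V
```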
---
First is:

```nginx
worker_processes 2;
```

worker_processes, for Cloud Server instances or virtual instances, should be set to the exact number of virtual cores. There is also an auto option now. We can get the number of cores by running this command:

```bash
cat /proc/cpuinfo | grep processor | wc -l
```
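On reasonably recent nginx versions you can also let nginx detect the core count itself instead of hard-coding it; a minimal sketch:

```nginx
# Let nginx detect the number of available CPU cores automatically
# (the auto value is only available on reasonably recent nginx releases)
worker_processes auto;
```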
File descriptors are part of the POSIX application programming interface. A file descriptor is a non-negative integer, represented in the C programming language as the type int. We need to set worker_rlimit_nofile for this purpose, otherwise the default value of 2000 will act as the maximum limit:

```nginx
worker_rlimit_nofile 100000;
error_log /var/log/nginx/error.log;
```
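Note that worker_rlimit_nofile can only take effect up to what the operating system allows, so it is worth checking the OS-level limits as well; a minimal sketch, assuming a typical Linux setup (the www-data user and the 100000 value simply mirror the example configuration in this guide):

```bash
# Current soft and hard open-file limits for your shell
ulimit -n
ulimit -Hn

# System-wide ceiling on open files
cat /proc/sys/fs/file-max

# Example entries for /etc/security/limits.conf to raise the per-user limit
# (assumption: nginx workers run as www-data, as in the example configuration)
#   www-data soft nofile 100000
#   www-data hard nofile 100000
```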
Next is the:

```nginx
events {

}
```

block. The max clients value determines how many clients will be served per worker process; it is the product of worker_connections and worker_processes. We can safely go with:

```nginx
# worker_rlimit_nofile applies in the main context, not inside events {}
worker_rlimit_nofile 20000;

events {
    worker_connections 4000;
    use epoll;
    multi_accept on;
}
```
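As a rough worked example using the values from the first example (worker_processes 2) together with worker_connections 4000, the theoretical ceiling would be:

```nginx
# max clients = worker_processes * worker_connections
#             = 2 * 4000
#             = 8000 simultaneous connections
# This is only a theoretical ceiling; real capacity depends on the application
# running behind nginx and on the available memory and file descriptors.
```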
This is an example full configuration:

```nginx
user www-data;
worker_processes 4;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 4000;
    multi_accept on;
    use epoll;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    server_tokens off;

    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    types_hash_max_size 2048;

    send_timeout 2;
    client_max_body_size 20m;
    client_body_buffer_size 128k;
    client_body_timeout 30;
    client_header_timeout 30;
    keepalive_timeout 30;

    open_file_cache max=5000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    gzip on;
    gzip_http_version 1.1;
    gzip_vary on;
    gzip_comp_level 6;
    gzip_proxied any;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js;
    gzip_buffers 16 8k;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
```
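After editing /etc/nginx/nginx.conf, it is worth validating the syntax before reloading, so a typo cannot take the server down; a minimal sketch (the reload command assumes a systemd-based distribution):

```bash
# Check the configuration for syntax errors without touching the running server
nginx -t

# If the test passes, reload the workers gracefully
# (on systemd-based systems; older setups may use `service nginx reload`)
systemctl reload nginx
```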
Optionally, you can add:

```nginx
limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;
limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;

server {
    limit_conn conn_limit_per_ip 10;
    limit_req zone=req_limit_per_ip burst=10 nodelay;
}
```
This will limit the number of connections per single IP and the rate of requests from it. The first two lines define the shared memory zones (they go in the http context); the directives inside the server block apply them. You should test the load average for performance tweaking rather than blindly chasing how many millions of pages a $40/month instance can serve. That is honestly theoretical. Without load balancing, a single Cloud Server instance can fail badly when even 100 users open the same page of a WordPress installation.
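To actually measure what your instance can take before it falls over, a simple load-testing run is more informative than any theoretical number; a minimal sketch using Apache Bench (the URL is a placeholder, and ab ships with the apache2-utils package on Debian/Ubuntu):

```bash
# Send 1000 requests, 100 concurrently, to a page of your site (placeholder URL)
ab -n 1000 -c 100 https://example.com/

# Meanwhile, watch the load average and memory on the server
uptime
free -m
```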