If you have a separate cache server in front of your main server, you can avoid many of the attacks aimed at the main server, and downtimes of the main server get masked because cached pages are still served (although the PHP functions of the main site will not work during that time).
We need a cloud server instance from a reliable host with IPv6 support. Many cloud IaaS providers are IPv6 capable and sell 1 GB RAM instances at a cheap rate. The main point to check is their average response time: a server with a slow response time will hurt the perceived page loading speed. When configured as a cache, Nginx can cache both static and dynamic content, and it can improve dynamic content performance further with micro-caching.
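As a rough sketch of what micro-caching looks like (the zone name microcache and the backend address 203.0.113.10 are placeholders, not part of the configuration used later in this guide), a very short proxy_cache_valid period is enough to absorb bursts of identical requests for dynamic pages:

```
# Inside the http {} block; names and addresses are examples only.
proxy_cache_path /var/lib/nginx/microcache levels=1:2 keys_zone=microcache:8m max_size=50m;

server {
    listen 80;

    location / {
        proxy_cache microcache;
        proxy_cache_valid 200 1s;   # keep successful responses for just one second
        proxy_cache_lock on;        # let only one request populate a new cache entry at a time
        proxy_pass http://203.0.113.10;
    }
}
```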
Install Nginx on a fresh server (you will find guides on our website like this one). You need copies of the SSL certificate and private key. You can transfer them via FTP, or create a compressed archive on the main server and fetch it with wget.
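A sketch of the archive-and-wget approach (the domain and certificate paths below are placeholders; use the locations your certificate actually lives in, and remove the archive from the web root once you are done):

```
# On the main (Apache) server: bundle the certificate and key (example paths).
tar -czf certs.tar.gz /etc/ssl/example.com.crt /etc/ssl/example.com.key
# Place certs.tar.gz somewhere the new server can reach it, e.g. the web root.

# On the new Nginx server: fetch and unpack, then restrict the key's permissions.
wget https://example.com/certs.tar.gz
sudo tar -xzf certs.tar.gz -C /
sudo chmod 600 /etc/ssl/example.com.key
# Delete certs.tar.gz from the web root of the main server afterwards.
```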
```
AAAA Record                  A Record
+-------------------+        +---------------------+
| Dedicated Server  |====>   | Virtual Server      | ===> Headers and HTML passed
|                   |        | With IPv4, IPv6     |
|                   |        | running             |
| With IPv4, IPv6   |        | Nginx reverse proxy |
| running Apache    |        +---------------------+
|                   |
+-------------------+
```
Here is a basic example of the main configuration file (nginx.conf):
```
## Everything is kept default except the Proxy Configs stanza added below
## mkdir -p /var/lib/nginx/cache && sudo chown www-data /var/lib/nginx/cache
## sudo chmod 700 /var/lib/nginx/cache

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
	worker_connections 768;
	# multi_accept on;
}

http {

	##
	# Basic Settings
	##

	sendfile on;
	tcp_nopush on;
	tcp_nodelay on;
	keepalive_timeout 65;
	types_hash_max_size 2048;
	# server_tokens off;

	# server_names_hash_bucket_size 64;
	# server_name_in_redirect off;

	include /etc/nginx/mime.types;
	default_type application/octet-stream;

	##
	# SSL Settings
	##

	ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
	ssl_prefer_server_ciphers on;

	##
	# Logging Settings
	##

	access_log /var/log/nginx/access.log;
	error_log /var/log/nginx/error.log;

	##
	# Gzip Settings
	##

	gzip on;
	gzip_disable "msie6";

	# gzip_vary on;
	# gzip_proxied any;
	# gzip_comp_level 6;
	# gzip_buffers 16 8k;
	# gzip_http_version 1.1;
	# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

	##
	# Proxy Configs
	##

	proxy_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=backcache:8m max_size=50m;
	proxy_cache_key "$scheme$request_method$host$request_uri$is_args$args";
	proxy_cache_valid 200 302 10m;
	proxy_cache_valid 404 1m;

	##
	# Virtual Host Configs
	##

	include /etc/nginx/conf.d/*.conf;
	include /etc/nginx/sites-enabled/*;
}
```
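The nginx.conf above only defines the cache zone; the actual reverse proxying is done in a virtual host file under /etc/nginx/sites-enabled/. A minimal sketch of such a file, assuming example.com as the domain, 203.0.113.10 as the dedicated server's address and the certificate paths shown earlier (all placeholders for your own values):

```
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.com;                 # placeholder domain

    # Certificate and key copied over from the main server
    ssl_certificate     /etc/ssl/example.com.crt;
    ssl_certificate_key /etc/ssl/example.com.key;

    location / {
        proxy_cache backcache;               # zone defined in nginx.conf above
        proxy_pass https://203.0.113.10;     # IP or hostname of the dedicated Apache server
        proxy_set_header Host $host;                  # pass the original host name to Apache
        proxy_set_header X-Real-IP $remote_addr;      # let Apache see the visitor's IP
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

The proxy_set_header lines make sure Apache still receives the original host name and the visitor's real IP address instead of the proxy's own.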
Using proxy_cache_use_stale error timeout http_500;
means that when Nginx receives an error from the upstream server, or the request times out, and a stale version of the requested file exists in the cache, it delivers that stale file instead of the error.
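The directive goes in the same context as proxy_pass, for example inside the location block of the sketch above (again with placeholder names):

```
location / {
    proxy_cache backcache;
    proxy_cache_use_stale error timeout http_500;   # serve a stale cached copy on upstream errors or timeouts
    proxy_pass https://203.0.113.10;
}
```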
You will find the official documentation here:
```
# http://nginx.org/en/docs/http/ngx_http_proxy_module.html
# https://www.nginx.com/blog/nginx-caching-guide/
# https://www.nginx.com/resources/wiki/start/topics/examples/reverseproxycachingexample/

### There are many tools these days for Nginx for different reasons:

# https://github.com/mahmud-ridwan/loadcat
# https://github.com/schenkd/nginx-ui
# https://nginxproxymanager.com/
```