How to Tune and Optimize Performance of Nginx Web Server

Introduction

Nginx is a fast, robust, lightweight, high-performance web server that runs at least 40% of the busiest websites globally. Owing to its versatility, Nginx is also widely used as a load balancer, reverse proxy, and HTTP cache server.

Nginx’s standout feature is its speed, which enables it to handle thousands of concurrent connections with ease.

In this article, we will illustrate the best ways of tuning and optimizing the Nginx web server.

Prerequisites

  • Linux VPS Setup (Any Linux Flavor)
  • Basic understanding of Nginx configuration
  • Installed Nginx

Tuning and optimizing Nginx will involve adjusting parameters in the default Nginx configuration file /etc/nginx/nginx.conf.

Here’s a sample default Nginx configuration file (/etc/nginx/nginx.conf):

user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
 
events {
    worker_connections 768;
    # multi_accept on;
}

http {

    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    server_tokens off;
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Frame-Options "SAMEORIGIN";

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
 
    ##
    # Gzip Settings
    ##
 
    gzip on;
 
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
 
    ##
    # Virtual Host Configs
    ##
 
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
 
 
#mail {
#   # See sample authentication script at:
#   # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#   # auth_http localhost/auth.php;
#   # pop3_capabilities "TOP" "USER";
#   # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#   server {
#               listen localhost:110;
#               protocol   pop3;
#               proxy  on;
#   }
#
#   server {
#               listen localhost:143;
#               protocol   imap;
#               proxy  on;
#   }
#}

Back up your current Nginx configuration file before you begin editing. It’s recommended to make one change at a time, save the configuration file, restart the Nginx server, and run a performance test to check for improvement. If a change yields no improvement, revert it to the default/initial value.
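A typical edit cycle looks like the console session below (paths assume a Debian/Ubuntu layout; `nginx -t` validates the configuration before you restart, so a typo never takes the server down):

$ sudo cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
$ sudo nano /etc/nginx/nginx.conf
$ sudo nginx -t
$ sudo systemctl restart nginx

If `nginx -t` reports an error, restore the backup and try again.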

1. Worker Processes

Worker processes are the number of worker processes spawned by Nginx. Best practice is to run one worker process per CPU core; setting a value higher than the number of CPU cores in your machine/VPS leaves idle processes in the system.

By default, the worker processes value is set to auto.

To know the number of CPU cores in your system, run command:

$ grep processor /proc/cpuinfo | wc -l
1

Our VPS has 1 core, so it’s recommended to set the worker processes value to 1 in the config file as shown below:

worker_processes 1;

If traffic to your Nginx web server builds up and you need to run more processes, it’s recommended to upgrade your machine to more cores and readjust the worker processes to the new number of CPU cores in your system.
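As a quick cross-check, `nproc` reports the same core count as the grep pipeline above, and since `worker_processes auto;` tracks that number for you, explicitly pinning the value is only needed when you want to override it:

```shell
# Number of CPU cores available (same result as
# `grep processor /proc/cpuinfo | wc -l` on most systems)
nproc
```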

2. Worker Connections

Worker connections are the number of clients that can be served simultaneously by each worker process. Combined with worker processes, this gives the maximum number of clients that can be served at once:

Max Number of Clients = Worker processes * Worker connections

By default, the value of worker connections is set to 768.

However, it should be noted that most browsers open at least 2 connections per server at the same time, so the effective number of clients can be cut in half.

To maximize Nginx’s potential, worker connections can be raised to the maximum number of open file descriptors allowed per process, which can be obtained with the command:

$ ulimit -n
1024

In our VPS, each process is limited to 1024 open file descriptors at a time, so it’s recommended to set the value of worker connections (within the events section) to 1024 as follows:

events {
    worker_connections 1024;
}
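Note that worker_connections cannot usefully exceed the open-file limit, so if you raise one, raise the other. Nginx’s worker_rlimit_nofile directive (set in the main context, outside the events block) raises the per-worker limit for you; the figure below is an illustrative value, not a tuned recommendation:

worker_rlimit_nofile 2048;

events {
    worker_connections 1024;
}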

From our calculation, the maximum number of clients that can be served simultaneously is:

1024 worker connections * 1 worker process = 1024 clients
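The arithmetic above can be sketched in the shell (the figures are the 1-core, 1024-descriptor VPS used throughout this guide; substitute your own):

```shell
workers=1          # worker_processes
connections=1024   # worker_connections

# Theoretical maximum of simultaneous clients
echo $((workers * connections))          # 1024

# Practical estimate if each browser holds 2 connections open
echo $(( (workers * connections) / 2 ))  # 512
```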

3. Multi Accept

Multi accept defines how the worker process accepts new connections.

By default, multi_accept is set to off, and a worker process accepts one new connection at a time.

If it’s set to on, a worker process accepts all new connections at once. The multi_accept value (within the events section) should be set to off, as shown below:

events {
    worker_connections 1024;
    multi_accept off;
}

4. Gzip Compression

Compressing responses reduces their size, which uses less network bandwidth and improves page-load time on slow connections. Note that the compression process itself consumes machine resources, so you should weigh its costs against its benefits; otherwise compression can work against you and reduce Nginx performance. It’s recommended to implement compression as follows:

  1. Only enable gzip for appropriate content, e.g. text, CSS, and JavaScript files.
  2. Examine the effects of compression by enabling and disabling it for the various content types and sizes.
  3. Do NOT increase the compression level, since this costs CPU effort without a commensurate throughput increase.

A sample recommended gzip configuration (within the http section) is as follows:

gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 1;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
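Once gzip is enabled, you can verify that it is being applied with curl (example.com and style.css are placeholders; substitute a URL on your own server serving one of the gzip_types above):

$ curl -s -D - -o /dev/null -H "Accept-Encoding: gzip" http://example.com/style.css | grep -i content-encoding
content-encoding: gzip

If the header is missing, check that the response’s Content-Type is listed in gzip_types.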

5. Buffers

Buffers play a big role in the optimization of nginx performance. The following are the variables that need to be adjusted for optimum performance:

client_body_buffer_size – The buffer size for the client request body, i.e. POST actions such as form submissions sent to the Nginx web server. It’s recommended to set this to 10k.

client_header_buffer_size – Similar to client_body_buffer_size, but for the client request header. It’s recommended to set this to 1k.

client_max_body_size – The maximum allowed size of a client request body. If the value is exceeded, Nginx returns a 413 (Request Entity Too Large) error.

large_client_header_buffers – Maximum number and size of buffers for large client headers.

The recommended settings are as follows (within the http section):

client_body_buffer_size 10K;
client_header_buffer_size 1k;
client_max_body_size 8m;
large_client_header_buffers 4 4k;

With the values above, Nginx will work well in most cases, but for further optimization you can tweak the values and test the performance.
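You can observe the client_max_body_size limit in action with curl (large-file.bin and example.com are placeholders; the file must be larger than the 8m limit set above):

$ curl -s -o /dev/null -w "%{http_code}\n" --data-binary @large-file.bin http://example.com/upload
413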

6. Timeouts

Properly tuned timeouts can improve Nginx performance substantially. Keepalive connections reduce the CPU and network overhead of opening and closing connections. The following are the variables to adjust for best performance:

client_header_timeout & client_body_timeout – How long Nginx will wait for a client header or body to be sent after a request.

keepalive_timeout – The duration that keepalive connection remains open, after which nginx closes client connection.

send_timeout – Timeout for sending a response to the client. If the client fails to receive the server’s response within this duration, nginx terminates the connection.

The following are the recommended values (within the http section):

client_body_timeout 12;
client_header_timeout 12;
keepalive_timeout 15;
send_timeout 10;

7. Access Log

Logging is critical for managing, troubleshooting, and auditing systems. However, logging and storing large volumes of data consumes system resources and extra CPU/IO cycles, reducing server performance. Access logging in particular records every single Nginx request, which can consume significant CPU resources and reduce Nginx performance.

There are two solutions to go about this.

  1. Disable access logging entirely:
    access_log off;
  2. If access logging is mandatory, enable access-log buffering. This lets Nginx buffer a series of log entries and write them to the log file in a single operation, instead of performing a separate write for each entry:
    access_log /var/log/nginx/access.log combined buffer=16k;

You could also use open source solutions for logging like ELK stack and others which will centralize all logs for your system.
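A buffered access_log can also flush on a timer, so entries never sit in memory for too long. The buffer and flush values below are illustrative starting points, not tuned recommendations (combined is Nginx’s predefined log format):

access_log /var/log/nginx/access.log combined buffer=32k flush=5m;

With this setting, entries are written when the 32k buffer fills or every 5 minutes, whichever comes first.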

Conclusion

Once you have an optimized Nginx web server, the next step is to monitor it and tweak the settings over time as traffic builds up or other factors come into play. The recommended values above are therefore not the best for all circumstances, but good starting points during development and periods of low-to-medium traffic. You can increase the values step-wise while performance testing to check for improvement.

If you don’t see any improvement, leave the value at its default. Collectively, the tuned parameters can improve Nginx performance immensely.

Other measures you can take to improve web server performance, especially as traffic builds up, include load balancing, auto-scaling, and high availability, to mention but a few. If all else fails, consider switching to a VPS provider with better, more up-to-date hardware and software (need help? Visit HostAdvice’s Best VPS hosting services list).

 
