BTMash

Blob of contradictions

Installing Nginx on Ubuntu 10.04 with PHP-FPM


To get myself more fully acquainted with Nginx, I decided to finally take the plunge and move away from Apache. To be clear, I don't have any real problems with Apache; it has treated me very well for years, and it is flexible and very easy to configure. I've also been using Varnish as an HTTP accelerator on this blog for a few months, and since I'm mostly dealing with anonymous visitors viewing content, that setup was more than adequate. But I have heard great things about Nginx for the past couple of years (I've mentioned it at performance talks in an 'I hear it uses less memory and is more performant than Apache' manner). Recently, my workplace started switching our sites over to an Nginx environment (I didn't install the software and only participated in part of the site configuration), and the sites do seem zippier. My server only has 512 MB of memory, so it would be nice to see just how well Nginx performs under such limitations. As a result, I wanted to know:

  • How easy is it to switch from Apache to Nginx?
  • What kind of difference is there in performance between Apache and Nginx for regular site needs (with and without an HTTP accelerator)?

I made the switch from MySQL to MariaDB a little while ago and the process was very painless (I haven't done any benchmarking tests, but MariaDB seems at least a little bit faster). So how difficult could this be?
First, I benchmarked the current site, testing how it performs strictly on Apache and later with Varnish as a frontend to Apache. This started out as two tests (it ended up taking a few more).

Test 1: Apache on 1000 requests with 100 concurrent users

  shell:~# ab -n 1000 -c 100 http://btmash.com/
  Benchmarking btmash.com (be patient)
  Completed 100 requests
  Completed 200 requests
  Completed 300 requests
  Completed 400 requests
  Completed 500 requests
  Completed 600 requests
  Completed 700 requests
  Completed 800 requests
  Completed 900 requests
  Completed 1000 requests
  Finished 1000 requests

  Server Software: Apache/2.2.14
  Server Hostname: btmash.com
  Server Port: 80

  Document Path: /
  Document Length: 49337 bytes

  Concurrency Level: 100
  Time taken for tests: 2.564 seconds
  Complete requests: 1000
  Failed requests: 0
  Write errors: 0
  Total transferred: 49771000 bytes
  HTML transferred: 49337000 bytes
  Requests per second: 390.07 [#/sec] (mean)
  Time per request: 256.363 [ms] (mean)
  Time per request: 2.564 [ms] (mean, across all concurrent requests)
  Transfer rate: 18959.27 [Kbytes/sec] received

Not too bad, right? Being able to process 390 requests per second means the site should be pretty darn zippy. So let's go crazy.
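As a sanity check, ab's headline figures can be recomputed from the raw totals in the output above (ab's own numbers differ in the last digits because it works from unrounded wall time rather than the 2.564 seconds it displays):

```shell
# Recompute ab's Test 1 summary figures from the reported totals.
requests=1000
concurrency=100
seconds=2.564

awk -v n="$requests" -v c="$concurrency" -v t="$seconds" 'BEGIN {
    # Requests per second = total requests / total wall time.
    # ab reported 390.07; the small gap is rounding in the displayed time.
    printf "req/sec: %.2f\n", n / t
    # Mean time per request (as seen by one user) =
    # concurrency * wall time / total requests. ab reported 256.363.
    printf "ms/request (mean): %.3f\n", c * t * 1000 / n
}'
```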

Test 2: Apache on 10000 requests with 1000 concurrent users

This was probably excessive but the last set of results looked very promising.

  shell:~# ab -n 10000 -c 1000 http://btmash.com/
  Benchmarking btmash.com (be patient)
  Completed 1000 requests
  Completed 2000 requests
  apr_socket_recv: Connection reset by peer (104)
  Total of 2344 requests completed

Hmm, that didn't work out too well. It just...died after 2300ish requests.

Test 3: Apache on 10000 requests with 100 concurrent users

This should work out well again...except it didn't. Once I started the run, my server slowed to a crawl and I was barely able to stop the test and restart Apache. Afterwards, I took a look at the load average.

  shell:~# uptime
  18:06:41 up 97 days, 20:24, 1 user, load average: 129.42, 58.16, 22.56
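For context on what a number like that means, the rule of thumb on Linux is to compare the 1-minute load against the number of CPU cores (a rough sketch using the standard /proc interface; the threshold is the usual convention, not something exact):

```shell
# Compare the 1-minute load average with the available CPU cores.
cores=$(nproc)
load1=$(cut -d ' ' -f 1 /proc/loadavg)
echo "1-minute load: $load1 across $cores core(s)"

# A load far above the core count (129 here, on a small VPS) means the run
# queue is saturated and processes are stuck waiting for CPU time.
awk -v l="$load1" -v c="$cores" 'BEGIN { print ((l > c) ? "overloaded" : "ok") }'
```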

Yikes! As Kevin said on Twitter yesterday:

Wow, a load average of 129...I didn't even know load averages went that high.

Suffice it to say, Drupal and Apache didn't play all that well under a high traffic load.

Test 4: Varnish in front of Apache on 10000 requests with 1000 concurrent users

Ok, after the colossal fail of the last two tests, there is no reason I should be doing this kind of test. However, I have been using Varnish for a few months already and I know it can handle this:

  shell:~# ab -n 10000 -c 1000 http://btmash.com/
  Benchmarking btmash.com (be patient)
  Completed 1000 requests
  Completed 2000 requests
  Completed 3000 requests
  Completed 4000 requests
  Completed 5000 requests
  Completed 6000 requests
  Completed 7000 requests
  Completed 8000 requests
  Completed 9000 requests
  Completed 10000 requests
  Finished 10000 requests

  Server Software: Apache/2.2.14
  Server Hostname: btmash.com
  Server Port: 80

  Document Path: /
  Document Length: 49337 bytes

  Concurrency Level: 1000
  Time taken for tests: 2.930 seconds
  Complete requests: 10000
  Failed requests: 0
  Write errors: 0
  Total transferred: 498540000 bytes
  HTML transferred: 493370000 bytes
  Requests per second: 3412.78 [#/sec] (mean)
  Time per request: 293.016 [ms] (mean)
  Time per request: 0.293 [ms] (mean, across all concurrent requests)
  Transfer rate: 166153.26 [Kbytes/sec] received

As you can see, it handled over 3,400 requests per second and finished the whole test in 2.9 seconds, so the move to Varnish was excellent. But keep in mind that Varnish only handles anonymous / static content requests. Imagine a site where users can log in and their page requests are handled by Apache directly; on a big site, you will run into the same issues again.

Time for Nginx

So to get Nginx up and running, we need to install two components:

  • Nginx
  • PHP-FPM (to run PHP over FastCGI)

In the past, this has been a somewhat painful process (mind you, my experience on this front is with Apache), and it was the primary reason I stayed with Apache and mod_php. This is where PHP-FPM enters. PHP-FPM is another implementation of PHP FastCGI, with supposedly better performance (I can't vouch for that yet) and a process manager that is definitely easy enough to spawn workers with. The best part is that PHP-FPM is now part of PHP core: it has shipped with PHP since 5.3.3 and is in 5.4 as well. The bad news is that PHP-FPM is not in the Ubuntu repositories (at least not for 10.04 and lower, which ship PHP 5.3.2). Luckily, the Nginx team has created a Debian/Ubuntu repository which does it all for you (see link). First, we need the ability to add additional repositories (via the add-apt-repository command, which comes in the python-software-properties package).

  shell:~# sudo apt-get install python-software-properties
  shell:~# sudo add-apt-repository ppa:nginx/php5
  shell:~# sudo apt-get update
  shell:~# sudo apt-get upgrade

This should take care of upgrading your PHP libraries to be PHP-FPM ready. Now shut down Apache (/etc/init.d/apache2 stop) and install Nginx and PHP-FPM:

  shell:~# sudo apt-get install nginx php5-fpm
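A quick way to confirm that php-fpm came up and bound its port is to look for a TCP listener on 9000 (a hedged sketch; ss ships with the iproute2 package, and you can substitute netstat -ltn if you prefer):

```shell
# Count TCP listeners on port 9000, php-fpm's default in this setup.
# '|| true' keeps the count at "0" instead of failing when nothing matches.
listeners=$(ss -ltn 2>/dev/null | grep -c ':9000 ' || true)
echo "listeners on :9000: $listeners"
```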

Now it is all installed for you! At this stage you should see several php-fpm instances running in the background, and port 9000 in use (that's the php-fpm package listening). It's now time to start setting up your Nginx config. Rather than write it out here and have it go stale, I'll note that the Nginx team has written up their own set of configuration suggestions (see link). There were a couple of things I added myself, however. First, in my nginx.conf file, inside http {}, I added the following:

  upstream php {
    # This is a comment; the server ignores it when reading the configuration.
    # server unix:/tmp/php-cgi.socket;
    server 127.0.0.1:9000;
  }

And in the server configuration for each site, I changed:

  fastcgi_pass unix:/tmp/php-cgi.socket;

into

  fastcgi_pass php;

The reason for this is that if you decide to use a PHP-CGI socket in the future, you only have to change it in one place (uncomment the unix line and comment out the IP/port combo), and all your sites will pick up the new setting once you restart Nginx.
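For reference, here is a minimal per-site PHP location block that uses the named upstream (a sketch of the standard Nginx FastCGI pattern, not my exact site config):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    # Tell php-fpm which script to execute.
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # Point at the named upstream, so switching between the unix socket
    # and the TCP port happens in one place (the upstream block).
    fastcgi_pass php;
}
```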

Now we are ready to start testing the new setup.

Test 5: Nginx on 1000 requests with 100 concurrent users

  1. shell:~# ab -n 1000 -c 100 http://btmash.com/
  2. Benchmarking btmash.com (be patient)
  3. Completed 100 requests
  4. Completed 200 requests
  5. Completed 300 requests
  6. Completed 400 requests
  7. Completed 500 requests
  8. Completed 600 requests
  9. Completed 700 requests
  10. Completed 800 requests
  11. Completed 900 requests
  12. Completed 1000 requests
  13. Finished 1000 requests
  14.  
  15.  
  16. Server Software: nginx/0.7.65
  17. Server Hostname: btmash.com
  18. Server Port: 80
  19.  
  20. Document Path: /
  21. Document Length: 48881 bytes
  22.  
  23. Concurrency Level: 100
  24. Time taken for tests: 2.728 seconds
  25. Complete requests: 1000
  26. Failed requests: 0
  27. Write errors: 0
  28. Total transferred: 49334000 bytes
  29. HTML transferred: 48881000 bytes
  30. Requests per second: 366.56 [#/sec] (mean)
  31. Time per request: 272.805 [ms] (mean)
  32. Time per request: 2.728 [ms] (mean, across all concurrent requests)
  33. Transfer rate: 17660.10 [Kbytes/sec] received

A little strange: this first set of numbers seems to imply that Nginx processes slightly fewer requests per second than Apache does. Let's see how it performs under more stressful conditions.

Test 6: Nginx on 10000 requests with 1000 concurrent users

  root@li166-63:/etc/nginx/sites-enabled# ab -n 10000 -c 1000 http://btmash.com/
  This is ApacheBench, Version 2.3 <$Revision: 655654 $>
  Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
  Licensed to The Apache Software Foundation, http://www.apache.org/

  Benchmarking btmash.com (be patient)
  Completed 1000 requests
  Completed 2000 requests
  Completed 3000 requests
  Completed 4000 requests
  Completed 5000 requests
  Completed 6000 requests
  Completed 7000 requests
  Completed 8000 requests
  Completed 9000 requests
  Completed 10000 requests
  Finished 10000 requests

  Server Software: nginx/0.7.65
  Server Hostname: btmash.com
  Server Port: 80

  Document Path: /
  Document Length: 193 bytes

  Concurrency Level: 1000
  Time taken for tests: 51.494 seconds
  Complete requests: 10000
  Failed requests: 9539
     (Connect: 0, Receive: 0, Length: 9539, Exceptions: 0)
  Write errors: 0
  Non-2xx responses: 515
  Total transferred: 468114195 bytes
  HTML transferred: 463734600 bytes
  Requests per second: 194.20 [#/sec] (mean)
  Time per request: 5149.447 [ms] (mean)
  Time per request: 5.149 [ms] (mean, across all concurrent requests)
  Transfer rate: 8877.51 [Kbytes/sec] received

I'm honestly quite impressed. It wasn't the fastest site in the world, and the results weren't spotless: ab compares every response against the length of the first one, so the mix of error pages and full pages shows up as 9,539 'Length' failures, and 515 responses were non-2xx. But the server answered all 10,000 requests without falling over. I checked the load average after the run and it peaked at 5.6! A massive difference from Apache.

I decided not to include the results of the benchmarks with Varnish in front of Nginx, since those results were very similar to the benchmarks with Varnish in front of Apache (which makes sense, since Varnish is serving the cached content rather than the web server behind it). But in an environment where users log in, using Nginx could make a huge difference in your server setup.