I wrote a little while ago about how, for running PHP, Nginx was not faster than Apache. At first I figured that it would be, and then it turned out not to be, though only by a bit.

But since Apache also has an event-based MPM, I wanted to see if the opposite result was true: that if Apache were using its event MPM, it would be about the same as Nginx. I had heard that Apache 2.2's event MPM wasn't great (it was experimental) but that 2.4's was better, possibly even faster than Nginx. So I had a few spare moments this Friday and figured I would try it out.

I basically ran ab at concurrency levels of 1, 10, 25, 50, and 100. Given the wimpy VM that I ran it on, those numbers are pretty good. What surprised me was that Apache was only half that. I will say for the record that I do not know how to tune the event MPM. But I don't really have to tune Nginx to get 14k requests per second, so I was expecting a little better from Apache.

So I pulled out all of the LoadModule statements I could while still keeping a functional Apache installation. While the numbers were 25% or so better, they were still well shy of what Nginx was capable of. Then I added the prefork MPM to provide a baseline. The event MPM was faster than the prefork MPM for static content, but not by much. So it seems that if you are serving static content, Nginx is still your best bet.

Posted in Performance. Tagged apache, nginx, performance, php-fpm.

From the comments:

Are you running Apache Event MPM + mod_php or Apache Event MPM + fastcgi? Apache Event MPM + mod_php really does not buy much. While the event MPM will help with concurrency, it will not help you speed up PHP and so is not really needed. If you are serving static content from a CDN, or have a load balancer in front of the Apache that is running PHP, then the prefork MPM is the way to go. With mod_php, each Apache thread is going to execute PHP directly, so it makes almost zero use of libevent, libuv, or whatever the flavor of the month is. With the event MPM, the idea is that when it comes time to execute a CGI script, the processing is passed off to a separate server, and the Apache worker thread goes to sleep until there is a response or a timeout. This means that an event-based server can process many more requests, since it can run more workers. Nginx was built to use libevent from the start, which helps it be so screamingly fast. So to test it, you have to use Apache Event MPM + fastcgi.

In this case, the performance for serving PHP will come down to how the web server handles its worker threads. Moreover, not serving longer-running, larger content-generating scripts is not really useful. Since the whole purpose of libevent is to allow more requests in flight while waiting for results, serving fast 100-byte results means you're not testing how the server handles suddenly having 100 different 17k responses coming back from FPM. Here the two will behave wildly differently, since Nginx counts on persistent FPM connections by default and Apache opens unique ones. 100 17k results all coming over a single FastCGI connection will be processed differently than 100 17k results over 100 sockets.

Nginx also performs some special caching tricks on small responses. Even though you still had to transfer the same 100 bytes over and over to Nginx, once Nginx receives the response it will, based on a hash of the data, send the data from its copy in cache, avoiding having to copy the response from one region of memory to another, run it through some filters, and then copy it again.

Lastly, I really dislike the common benchmarking method you used of running ab at 1, 10, and so on. Nginx's default configuration is to run many more processes/threads than Apache does, so what you're really testing is how Nginx scales as traffic increases from low to high.
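The concurrency sweep described above is easy to script. Here is a minimal sketch; the URL and request count are placeholders, not values from the post, and the function only prints the ab commands so you can review them before piping the output to `sh`.

```shell
#!/bin/sh
# Print one ab invocation per concurrency level from the sweep
# (1, 10, 25, 50, 100). Pipe the output to `sh` to actually run them.
# The URL and request count are placeholders, not values from the post.
ab_sweep() {
    url=$1
    for c in 1 10 25 50 100; do
        # -n total requests, -c concurrent clients, -k HTTP keep-alive
        printf 'ab -n 10000 -c %s -k %s\n' "$c" "$url"
    done
}

ab_sweep "http://127.0.0.1:8080/100bytes.html"
```

Running each level twice and discarding the first pass helps warm the server's caches so the numbers are comparable across levels.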
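The "Event MPM + fastcgi" setup the comment recommends looks roughly like the following on Apache 2.4 with mod_proxy_fcgi. This is a sketch, not a tuned configuration: the module paths, the PHP-FPM socket location, and the sizing numbers are assumptions that vary by distribution.

```apache
# Sketch: Apache 2.4 event MPM handing PHP off to PHP-FPM over FastCGI.
# Module paths and the FPM socket path are distro-dependent assumptions.
LoadModule mpm_event_module   modules/mod_mpm_event.so
LoadModule proxy_module       modules/mod_proxy.so
LoadModule proxy_fcgi_module  modules/mod_proxy_fcgi.so

# Proxy .php requests to the FPM pool (requires Apache 2.4.10+ for the
# proxy:unix: SetHandler form); the worker thread is freed while FPM runs.
<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php-fpm/www.sock|fcgi://localhost"
</FilesMatch>

# Event MPM sizing knobs -- illustrative defaults, not tuned values.
<IfModule mpm_event_module>
    ServerLimit          4
    ThreadsPerChild     25
    MaxRequestWorkers  100
</IfModule>
```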
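The persistent-versus-per-request FastCGI connection behavior the comment describes can be made explicit on the Nginx side. A sketch follows, with the socket path, pool size, and docroot as placeholder assumptions; note that reusing upstream FastCGI connections requires both `keepalive` in the upstream block and `fastcgi_keep_conn` in the location.

```nginx
# Sketch: Nginx reusing a pool of persistent FastCGI connections to PHP-FPM.
# Socket path, pool size, and root are assumptions, not values from the post.
upstream php_fpm {
    server unix:/run/php-fpm/www.sock;
    keepalive 8;                 # keep up to 8 idle FPM connections open
}

server {
    listen 80;
    root   /var/www/html;        # placeholder docroot

    location ~ \.php$ {
        include       fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass  php_fpm;
        fastcgi_keep_conn on;    # required for upstream keepalive to apply
    }
}
```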
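To benchmark the "larger content-generating script" case rather than a 100-byte static file, one option is a throwaway PHP script that builds roughly a 17k body per request. This is a hypothetical helper, not anything from the post; the output path is a placeholder for your docroot.

```shell
#!/bin/sh
# Write a test script to /tmp (placeholder path; move it into your docroot).
cat > /tmp/bigpage.php <<'PHP'
<?php
// Build the body in a loop so FPM does real work per request:
// 425 rows of ~41 bytes each comes to roughly 17 KB of output.
$out = '';
for ($i = 0; $i < 425; $i++) {
    $out .= str_pad("row $i: " . md5((string) $i), 40) . "\n";
}
header('Content-Type: text/plain');
echo $out;
PHP
echo "wrote /tmp/bigpage.php"
```

Pointing the ab sweep at this script instead of a static file exercises the FastCGI path the comment says the original benchmark never touched.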