I have a new post on using Jetty with PHP-FPM; if you find this topic interesting, you should check that one out.
(This post also has a sister post comparing Apache's event MPM with Nginx.)
I was originally going to write a blog post about why NginX with FastCGI is faster than Apache with mod_php. I had heard a while ago that NginX running PHP via FastCGI was faster than Apache with mod_php, and have heard people swear up and down that it is true. I did a quick test a while back and found some evidence supporting it.
Today I wanted to examine it in more depth and see if I could get some good numbers on why this was the case. The problem was that I couldn't. (That earlier quick test, IIRC, was on a Magento installation.)
To test, I used a simple "hello, world" script. Why something so simple? Because once you're in the PHP interpreter there should be no difference in performance. So why not just serve a blank page? Because I wanted some kind of bi-directional communication. The intent was to test the throughput of the web server, not PHP, so I wanted to spend as little time in PHP as possible while still testing the data transmission.
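The whole harness boils down to something like the following. The exact ab invocation isn't shown here, so treat it as a sketch: -n 10000 matches the 10,000 requests implied by the numbers below (120,000 bytes of HTML over a 12-byte response), while -c 10 is just an illustrative concurrency level.

```php
<?php
// hello.php - the entire "application" under test (12 bytes of output).
echo 'hello, world';
```

```
# Apache Bench run against either server (concurrency level is assumed).
ab -n 10000 -c 10 http://localhost/hello.php
```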
The baseline tests show the following.
Apache w/ mod_php
Total transferred:      3470000 bytes
HTML transferred:       120000 bytes
Requests per second:    2395.73 [#/sec] (mean)
Time per request:       4.174 [ms] (mean)
Time per request:       0.417 [ms] (mean, across all concurrent requests)
Transfer rate:          811.67 [Kbytes/sec] received
NginX with PHP-FPM
Total transferred:      1590000 bytes
HTML transferred:       120000 bytes
Requests per second:    5166.39 [#/sec] (mean)
Time per request:       1.936 [ms] (mean)
Time per request:       0.194 [ms] (mean, across all concurrent requests)
Transfer rate:          801.82 [Kbytes/sec] received
Apache was able to dish out 2400 requests per second compared with 5200 requests per second for NginX. That was a bigger difference than I had seen before, so I ran strace -c -f on Apache to see what came up (-c shows cumulative time per system call, -f follows forks). The top 10 results?
 % time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 33.65    0.042063           4     10003           getcwd
 16.10    0.020127           2     10001           writev
 16.00    0.019994           2     10001           shutdown
 10.54    0.013179           0     51836     40118 open
  9.01    0.011263           1     20008           semop
  5.22    0.006522           0     54507     10002 read
  2.53    0.003158           0     10024           write
  1.91    0.002386           0     88260     66483 close
  1.57    0.001959         245         8           clone
  1.16    0.001455           0     54832       384 stat64
getcwd? Why? Then I remembered that I had AllowOverride (.htaccess) turned on. So I re-ran the test with AllowOverride set to None.
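In configuration terms that's a one-line change in the relevant <Directory> block (the docroot path below is just an example):

```
<Directory "/var/www/html">
    # "All" makes Apache look for and parse .htaccess files on every request
    # (hence the getcwd calls, and likely many of the failed open() calls,
    # in the trace above); "None" skips that work entirely.
    AllowOverride None
</Directory>
```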
Total transferred:      3470000 bytes
HTML transferred:       120000 bytes
Requests per second:    5352.41 [#/sec] (mean)
Time per request:       1.868 [ms] (mean)
Time per request:       0.187 [ms] (mean, across all concurrent requests)
Transfer rate:          1813.40 [Kbytes/sec] received
At 5352 requests per second Apache was actually outperforming NginX. But what if more data was transferred? So I created about 100k of content and tested again.
Apache
Total transferred:      1051720000 bytes
HTML transferred:       1048570000 bytes
Requests per second:    2470.24 [#/sec] (mean)
Time per request:       4.048 [ms] (mean)
Time per request:       0.405 [ms] (mean, across all concurrent requests)
Transfer rate:          253710.79 [Kbytes/sec] received
NginX
Total transferred:      1050040000 bytes
HTML transferred:       1048570000 bytes
Requests per second:    2111.08 [#/sec] (mean)
Time per request:       4.737 [ms] (mean)
Time per request:       0.474 [ms] (mean, across all concurrent requests)
Transfer rate:          216476.53 [Kbytes/sec] received
This time the difference was even greater. This all makes sense: mod_php has PHP embedded in Apache, and so it should be faster. If you're running only PHP on a web server, then Apache still seems to be your best bet for performance. And if you are seeing a significant performance difference, check whether AllowOverride is turned on. If it is, try moving those rules into httpd.conf and testing again.
If you are running mixed content, such as CSS, JS and images, then NginX will provide better overall performance, but it will not run PHP any faster. It will also respond better to denial-of-service attacks, though a CDN is generally better at mitigating that risk.
But if you are running pure PHP content on a given server, Apache seems to still be the best bet for the job.
[UPDATED]
Here’s a chart of the throughput difference
Comments
wmute
Did you try keepalive connection cache and fastcgi_keep_conn ? http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
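(For reference, the directives being asked about look roughly like this; the upstream name, socket path and pool size are illustrative assumptions:)

```
upstream php_backend {
    server unix:/var/run/php-fpm.sock;
    keepalive 16;                 # keep up to 16 idle connections to PHP-FPM open
}

server {
    location ~ \.php$ {
        include       fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass  php_backend;
        fastcgi_keep_conn on;     # reuse the FastCGI connection instead of closing it per request
    }
}
```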
kschroeder
I did not. I might see what that does differently. But since both servers react to an accept() *system* call to process the request (and do not manage the handshake themselves) it is unlikely that it will make much of a difference.
grovercampos
Very good, thanks for your article
Why PHP-FastCGI with NGinx is faster than Apache with mod_php
[…] Read the post in full at http://www.eschrade.com/page/why-is-fastcgi-w-nginx-so-much-faster-than-apache-w-mod_php/ […]
petewarnock
The bigger issue is the memory footprint. If the server hits its swap, the performance will degrade. Nginx facilitates much larger scale on smaller, more economical hardware by using a smaller, more predictable amount of memory under load.
It’s a SWAG, but Nginx might be a little slower because each request has multiple asynchronous events; I think along the lines of send upstream and listen upstream.
What version of Apache were you running? 2.4 claims to be faster than Nginx, but it’s only been around for a year.
kschroeder
I was using Apache 2.2. But let's not go too far here. What I claimed was that PHP is faster on Apache; I had heard the opposite asserted several times, wanted to figure out why, and found that the assertion was actually wrong. I wasn't talking about memory and I wasn't talking about static files. On a raw performance test (static content and connection handling) NginX is faster, by at least an order of magnitude, due to its event-based architecture.
Personally, I would still recommend using NginX with FastCGI for PHP even though it is slower than Apache. For a mixed media site, the additional performance of static files more than makes up for the slow-down with FastCGI. And if it’s an API-based site (only PHP-based content) NginX will handle transient loads better (as well as denial of service attacks).
kschroeder
… in other words, yes, memory utilization is more efficient with NginX. But one of the claims I heard was that NginX was faster, which turned out not to be true. With memory being cheap these days, memory usage should not be a primary factor for determining the server to use. The arguments _for_ NginX can be made quite easily without having to go to secondary arguments.
Eugene OZ
Another option is nginx + apache: nginx as a front end for static files and as a proxy to apache + mod_php. In this setup, nginx also handles sending responses to "slow" clients (like mobile phones), which frees up Apache's memory sooner.
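(A minimal sketch of the setup Eugene describes, assuming Apache + mod_php is listening on port 8080 on the same box; paths and extensions are illustrative:)

```
server {
    listen 80;
    root /var/www/example;

    # Static assets are served directly by nginx.
    location ~* \.(css|js|png|jpg|gif|ico)$ {
        expires 30d;
    }

    # Everything else is handed to Apache + mod_php.
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```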
Alexander Ustimenko
Nginx + apache + mod_php will be slower than just nginx + php-fpm.
Eugene OZ
Alexander, maybe, but it needs to be tested without AllowOverride.
Julianus
Did you test php-fpm (http://php-fpm.org/)?
kschroeder
Yep. Used FPM
fkrauthan
Yeah, there should be a big difference depending on whether you use plain FastCGI or PHP-FPM to hook PHP into nginx.
joaquinhdzg
What about apache + php5-fpm + mod_pagespeed?
Moving WordPres .htaccess rules to httpd.conf - (not so) Tiny WordPress Notes
[…] move to the traditional Apache with mod_php instead of Nginx with php-fpm, after going through the pros and cons of each setup. If you are not aware already that AllowOverride All brings a major performance lag […]
tubalmartin
A few months ago I reviewed this topic too, since many people claimed NGINX + PHP-FPM to be faster than mod_php. The tests we ran at my company revealed results similar to yours: for PHP-only workloads, Apache + mod_php is faster than NGINX + PHP-FPM under any load scenario.
The setup we ended up using is NGINX as a reverse proxy for Apache 2.2 prefork with mod_php. NGINX handles HTTP connections, SSL, serving static files etc., while Apache is used just for PHP. AllowOverride is, of course, set to None.
What's more, many people claim NGINX + PHP-FPM consumes much less RAM than Apache + mod_php. Our tests revealed memory consumption to be more or less the same. Note that we enable just the Apache modules our app needs:
mod_cgi # Munin Zoom graphs
mod_status # Munin
mod_alias # Aliases
mod_authz_host # Order deny allow commands
mod_expires
mod_rewrite
mod_headers
mod_setenvif
With these modules only, we observed a nearly equal memory consumption.
I’m glad to see more people testing things properly (although Apache Bench is not the best tool for load testing). Good article!
Eugene OZ
tubalmartin, thank you, that's very interesting to me. I like nginx, but I think a proxy is better when we have clients with slow connections (like mobile apps/sites), because PHP can send the response to nginx and finish execution (freeing its resources) while nginx delivers the response to the slow client; during all that time memory and CPU are not tied up by a PHP process.
But there is one important thing: fastcgi_finish_request(). This function can solve a wide range of issues and is very useful in REST API implementations, and in cases where we need to update/insert something big in the database and the user doesn't need the result in the response. These cases can be solved with queues/tasks, but with fastcgi_finish_request() it's much easier.
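(A minimal sketch of the pattern Eugene is describing; queue_heavy_db_update() is a hypothetical stand-in for the slow work:)

```php
<?php
// Answer the client immediately...
header('Content-Type: application/json');
echo json_encode(['status' => 'accepted']);

// ...then flush the response and close the connection. Only available
// when PHP is running under PHP-FPM.
if (function_exists('fastcgi_finish_request')) {
    fastcgi_finish_request();
}

// The client already has its response; this no longer delays anything.
queue_heavy_db_update();   // hypothetical long-running task
```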
tubalmartin
Eugene OZ
What we do for long running processes is run them in the background using either Gearman or shell_exec($cmd.’ > /dev/null 2>/dev/null &’); so that the user doesn’t have to wait for a response to arrive.
Eugene OZ
tubalmartin I have a solution too, but a single function call is much more convenient than Gearman.
tubalmartin
Eugene OZ Sure, but we have Gearman/shell_exec wrapped in a simple function call: do_job_async($cmd).
If Gearman is not installed in one of our servers, it falls back to shell_exec.
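(A rough sketch of the kind of wrapper tubalmartin describes; do_job_async() is his name for it, but this particular implementation is an assumption:)

```php
<?php
// Fire-and-forget job dispatcher: prefer Gearman, fall back to a detached shell command.
function do_job_async($cmd)
{
    if (class_exists('GearmanClient')) {
        $client = new GearmanClient();
        $client->addServer();                        // default 127.0.0.1:4730
        $client->doBackground('run_command', $cmd);  // assumes a worker registered as 'run_command'
        return;
    }
    // Fallback: detach the process so the request doesn't wait for it.
    shell_exec($cmd . ' > /dev/null 2>/dev/null &');
}
```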
kschroeder
tubalmartin Eugene OZ Every time someone uses shell_exec() for async operations a kitten gets a tummy ache.
mkevac
The problem with Apache is not speed, but the ability to handle a lot of concurrent user requests. That's why high-load sites use nginx. Read about the 10k (100k, 1M) connections problem – http://www.kegel.com/c10k.html
kschroeder
mkevac Yes, but if you are running PHP-FPM with NginX you still have the same problem. Even though NginX can handle 10k connections does not mean that you want 10k PHP processes running in the background to handle the requests. So while your assertion is true it’s an apples/oranges comparison. For static content, absolutely NginX is the best server. However, for running PHP loads, Apache handles the request just a little more efficiently.
JaimieSirovich
kschroeder mkevac Finally some sense in a benchmark. I’ve never had the patience to run this test, but I’ve always assumed that mod_php would be faster if you are serving predominantly dynamic content. In theory, PHP in Apache’s threaded MPM should be fastest, but it’s known to be problematic historically.
Nginx & Lighttpd are going to be awesome at serving static content because they can use various event-based features of the OS to pump data without any thread at all. You can't do that for a script.
scott_conger
kschroeder I admit not knowing how PHP-FPM works, but there are normally two potential accept system calls. One is from the client to the webserver over HTTP and one from the webserver to the application using FastCGI. The persistent connection can eliminate the second one.
michaelhood
scott_conger kschroeder there can also be a very large difference between connecting to the fastcgi upstream (from nginx) with domain sockets vs TCP.
That would make for a few different, interesting benchmarks here.
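(For reference, on the nginx side the two variants differ by a single directive; the paths and port are illustrative:)

```
# Either talk to PHP-FPM over TCP...
fastcgi_pass 127.0.0.1:9000;

# ...or over a Unix domain socket (no TCP stack or ephemeral ports involved).
fastcgi_pass unix:/var/run/php-fpm.sock;
```

On the PHP-FPM side the pool's listen directive has to match, i.e. listen = 127.0.0.1:9000 versus listen = /var/run/php-fpm.sock.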
Speeding up a WordPress site part 2 | Chris's blog
[…] even show that on low-traffic sites serving dynamic (php) content – Apache might be faster like this blogpost shows. That’s why it’s important that the combination of everything in your environment is a […]
Gwyneth Llewelyn
Thank you for doing some benchmarking on this! Recently I've been toying with a VPS with a very small memory footprint and limited resources, and the goal was to get everything in memory and avoid swap. This seemed to be impossible to manage with Apache; doing some benchmarks, Nginx + PHP-FPM outperformed Apache (both with mod_php and FastCGI) by about 10:1 — accepting at least 10 times as many simultaneous connections — while leaving the VPS running smoothly; Apache would bring it down under serious load.
But now I know why that was the case. As you so well explained, not everything is PHP — at least half of that content is static (JS, CSS, images, and so forth), since in my case I was using WordPress as the sole application on the backend. Now I understand why Nginx performed so well: it was dealing mostly with static content, and would outperform Apache easily — and handle much higher loads — even though Apache might be winning the race every time there was a PHP request. But in the middle of so many requests for static content, Nginx clearly was ahead of the race.
The lesson learned is that Apache isn't "that bad". In a completely different environment, the solution was to put Varnish in front of Apache. Varnish is merely a proxy-cache server; it doesn't serve static content at all (unlike Nginx, which can do all of that), but it does its job admirably well, and it's programmed/configured in a very similar way to Nginx. The results are impressive — if you can afford a lot of memory for Varnish.
At the end of the day, the lesson I learned was simple. If you are operating in a very limited and constrained environment, and still need to extract good performance out of your underpowered setup, then the solution is to deploy Nginx + PHP-FPM, because it's easier to fit everything inside memory and avoid nasty surprises with swap. If, on the other hand, you have a couple of GBytes to spare, a better solution might be Varnish + Apache + mod_php — Varnish will handle static content even quicker than Nginx, and you'll benefit from Apache's slight edge in running PHP. In either case, make sure that your application handles dynamic content cleverly — the better it is at generating static content out of it, the better the performance, and that applies to both kinds of solution.
For the sake of the argument, on my tiny VPS I avoided TCP sockets completely. Nginx talks to PHP-FPM via Unix sockets, and PHP talks to MySQL via Unix sockets as well (MySQL doesn't listen on TCP ports in my configuration). According to what I've read, the difference in performance shouldn't be noticeable, although benchmarks (and some academic papers) report a performance increase from as little as 5-10% to 200% when using Unix sockets. Why there is such a huge difference in the reports baffles me, although many of those reports are old. Maybe recent TCP/IP stacks are far better these days and, as such, the differences in performance between the two communication techniques have been reduced…
Matthew Vaughan
Good results. Though I have actually never seen any evidence that Varnish is faster than Nginx and lots of evidence that they perform about the same, so I’d just use Nginx in that case anyway (either as the sole web server with PHP-FPM, or as a server proxied in front of Apache+mod_php to handle static files and send 404s to Apache, or as a caching proxy just like Varnish). It also supports some things that Varnish doesn’t. And of course Nginx does all this using much less memory. I have nothing really against Varnish, I just don’t see that it does anything better or faster than Nginx.
nneves
Hi, what about the difference between nginx + php-fpm versus nginx proxy apache/mod_php for php pages only?
kschroeder
I would venture to say that Nginx with FastCgi would be faster
petewarnock
… reason being FastCGI protocol has less overhead than HTTP.
Some interesting infos about PHP on nginx or… | bohuco.net
[…] http://www.eschrade.com/page/why-is-fastcgi-w-nginx-so-much-faster-than-apache-w-mod_php/ […]
NGINX+PHP-FPM+SOCKET VS NGINX+PHP-FPM+TCP VS NGINX+APACHE+MOD_PHP | 偶尔陶醉的世界
[…] http://www.eschrade.com/page/why-is-fastcgi-w-nginx-so-much-faster-than-apache-w-mod_php/ This blogger's tests showed that, with Apache's AllowOverride turned off […]
DIREKTSPEED
And now open your eyes!!!!! Test Apache 2.4.6 proxying directly to PHP-FPM over FastCGI, and use the Apache 2.4.6 event MPM, not prefork or worker?????
Now, any questions about who has won? Right, Apache wins, all clear. NGINX is fake; it's like comparing apples and PCs.
NGINX has an EVENT engine and you benchmark it against the PREFORK engine with mod_php. Wow, what a surprise that NGINX was faster. But really test Apache's event MPM, best on 2.4.6, with the CGI proxy, and you will be stunned at how blind an nginx user is!
Last time I tested my Ferrari against my Ford, I was really surprised that the Ferrari was faster 😉
kschroeder
DIREKTSPEED You don’t need to get your panties in a bunch. This was testing the typical Apache scenario against the typical Nginx scenario for PHP. The event MPM for Apache is irrelevant to this discussion. It will not be any faster with PHP because PHP then will need to use PHP-FPM for Apache in exactly the same manner as you would with Nginx. This post was about PHP, not static content.
kschroeder
DIREKTSPEED I should also mention that the point of this was not to see which is faster, but to address _common_claims_about_nginx_against_apache_. People say “Nginx is faster” and I say “ehhhhh, not so fast.” That is it.
DIREKTSPEED
Gwyneth Llewelyn
LOL, you should test NODE.JS; it can easily handle some 100,000 connections even on small hardware. If you go bigger, it can handle over a million requests at a time.
But above all, to stop you from thinking otherwise: native C can handle the most and is the fastest, and that's simply because the other software, like the web servers themselves, is written in that language and not in PHP 😀
DIREKTSPEED
mkevac
Apache's event MPM can handle more requests at the same time than NGINX!
Gwyneth Llewelyn
DIREKTSPEED I did exactly those tests on a low-end VPS with constrained memory (merely 512 MB) for a few websites that have moderate traffic — and Apache would drag the VPS to its knees, while Nginx runs smoothly and has a tiny footprint in both memory and CPU consumption.
So, to take your analogy further, if you drive your Ferrari on a narrow, short street, the Ford can still beat it… especially if you're driving on a cobbled street (try it, if you have a Ferrari!).
Ferraris work great on motorways, though, and will eventually beat Fords. A fellow sysadmin also evaluated your setup, this time using a physical server with 8 cores and 32 GB, which, although it has considerable load, is generally running at 20-25% of maximum load. He reports that Apache beats Nginx under that configuration, so he sticks to Apache with Varnish on top of it (and plenty of RAM for Varnish!)
My guess is that if you have a balanced mix of PHP and static content, Nginx will very likely be better, at least if you have a small footprint — that's what I've experienced so far, and that's why I use Nginx on all my small servers, where every CPU cycle counts. On the other hand, if you have a vast amount of free CPU cycles, lots of RAM, superfast connections, unlimited bandwidth, and so forth, then Varnish + Apache should have a slight advantage over Nginx. But that's just my experience.
Web Server Performance for Drupal and WordPress, Part I » GeoffStratton.com
[…] worth noting that Apache with mod_php is going to be faster for pure PHP — while consuming more memory, particularly under heavier loads — so, again, you should […]
garet1
DIREKTSPEED Gwyneth Llewelyn Native C can handle the most because it inherently runs faster with fewer calls. Practically all other languages are simply programs written in either native C, lisp or something similar to native C like verilog/vhdl.
These programs interpret strings and run commands, like a terminal, but they’re still native commands and issuing them directly is faster. The only downside is it’s usually too complicated for the likes of you to do correctly. There’s few things as frustrating to me as people spreading bad info.
Aside from that, NodeJS’s performance can be obtained from most languages, not only C, but the V8 engine google made for javascript is the real key there.
More on topic, I see many people posting about apache+varnish, which is good. Something to consider additionally is Apache+Nginx, where nginx is a proxy cache.
From the nginx conf you can add a few lines to have your site's output cached and proxied similarly to Varnish (see the sketch below); this lets you more or less combine nginx's static-content speed with Apache's runtime conveniences. Dynamic content turns into static content on the fly. Issue a delete on the cache folder on updates, or set a time-to-live.
This way you can also have many nginx servers in the same cluster replicate from one or two apache VMs. Any time a request isn’t cached, apache serves it anyway. You can define rules to bypass the proxy as well, for web services and such.
Bonus, no license fees =0
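(The "few lines" mentioned above look roughly like this; the zone name, sizes and TTL are illustrative assumptions:)

```
# In the http{} context: define an on-disk cache area.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=pagecache:10m
                 max_size=1g inactive=60m;

server {
    location / {
        proxy_cache       pagecache;
        proxy_cache_key   $scheme$host$request_uri;
        proxy_cache_valid 200 10m;               # time-to-live for cached 200 responses
        proxy_pass        http://127.0.0.1:8080; # the Apache backend
    }
}
```

Purging on update can then be as crude as deleting files under /var/cache/nginx, as the comment suggests.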
garet1
Alexander Ustimenko this is not the case.
Imagine situations where you might want (nginx)+(nginx with php-fpm). Since nginx shines so much with proxying and static content, it is reasonable to replace the backend development with the preferred stack.
Some algorithms do work better in apache, especially if the frontend serving is taken out of the equation.
Similarly, some cloud services very much need php-fpm’s process pool or similar features.
akaMrJohn
kschroeder & Gwyneth Llewelyn Hi, thanks for the comments and the article, they are very useful! By implication of what you're saying, if your web architecture is such that you're serving static HTML files (a web app) from a CDN, and this app only makes API calls to your backend (all in PHP), I'd be better off sticking with Apache + mod_php than considering Nginx + PHP-FPM?
kschroeder
akaMrJohn Gwyneth Llewelyn The data seems to support that assertion. One additional qualification would be that Apache is protected via load balancer or CDN reverse proxy from serving frontend requests. The concurrency restrictions of the prefork MPM are still pertinent.
DimaSoltys
I understand it's a year too late, but there's a bit of information lacking from the article. Were you connecting nginx and php-fpm through a Unix socket or a TCP one?
kschroeder
DimaSoltys If it’s on the local host I will always connect via Unix socket if it’s available.
DimaSoltys
kschroeder Thank you. It's just that I've been asked to move from nginx to apache with mod_php recently. Your post has given me some much needed insight.
Gopalakrishna Palem
Gwyneth Llewelyn DIREKTSPEED Nice explanation and good insight.
Gopalakrishna Palem
Found something similar here: http://seravo.fi/2012/the-fastest-web-server-in-the-world but with different results.
Maybe it's old data.
kschroeder
Gopalakrishna Palem It’s a completely different test. I was testing *PHP* throughput.
garet1
kschroeder Gopalakrishna Palem “Because once you’re in the PHP interpreter there should be no difference in performance.” and especially, “getcwd? Why? Then I remembered that I had AllowOverride (.htaccess) turned on. So I re-ran the test with AllowOverride set to None.”
This is where you really start comparing apples and oranges. Perhaps you’re not familiar with how to optimize php-fpm as much as you are apache, but A) fpm makes a huge difference on real scripts as it caches opcode, where mod_php does not. B) if you play with both configs nginx will still easily outperform Apache even with message pool extensions. This is a contrived benchmark, aside from now being rather dated.
kschroeder
garet1 kschroeder Gopalakrishna Palem That is not true. Neither mod_php nor php-fpm cache opcodes by default. It is internal to the Zend Engine itself and has no bearing on which web server you are using.
Apache MPM: Which One To Use? GeoffStratton.com
[…] concurrency is vital for scaling. Nevertheless, if you’re serving primarily PHP to relatively few concurrent connections, Prefork may be preferable to the threaded […]
Jason Yang
petewarnock because your requests to nginx magically bypass the HTTP protocol completely. /sarcasm
JaimieSirovich
Jason Yang petewarnock What if you're primarily serving dynamic content, so you're going to consume the process anyway? In that case mod_php might actually be more efficient. Well, threads might be better than processes, but proxying from one pool to another seems silly if it's almost 1:1.
kschroeder
JaimieSirovich Jason Yang petewarnock That is the conclusion that my testing supports. That said, threads would not be better than processes because then you would need to use ZTS and your performance would tank.
Jason Yang
TBH, use Nginx if you want an all-in-one server that serves dynamic and static content at the same time.
But ultimately, large sites do not serve static content from a single server; they use CDNs. mod_php + Apache is actually *faster* than nginx, but as pointed out above, the number of concurrent connections alters people's perception of how quick a site is.
An event-based request handling model (nginx, nodejs, etc.) solves that by handling each request asynchronously, effectively "time sharing" the server's resources across requests so that no long-running request can block the entire queue of requests, which is the case for the Apache prefork MPM.
There is no one solve-it-all solution to all problems. If your service is anticipated to be short running (as in near-instant) and purely PHP, Apache will probably outperform nginx.
But if you anticipate longer running responses (calculations or more complex logic), then nginx will be faster.
So if you just want a simple LAMP setup, then use nginx (making it LNMP instead), as it's simple to set up and fast without any extra tweaking.
Apache, however, has its significantly larger library of mods available for people to use. nginx has mods too, but Apache's are time-tested over many years.
If you know Linux, apache vs nginx is very similar to ubuntu vs centos as a web server: both work, and both have their fair share of fanboys and girls; just pick one and run with it.
There is no correct answer to such a generic question as "which is better?" It really depends on various things…
Jason Yang
JaimieSirovich Jason Yang petewarnock kschroeder forking is fast, threading consumes less memory. Pick your poison.
JaimieSirovich
kschroeder JaimieSirovich Jason Yang petewarnock
Is there some sort of issue with ZTS and a big fat lock for thread safety? The reason many don’t use it is that it is only _allegedly_ thread safe. Or extensions are not, etc. I’d want to see Apache+mod_php in worker/event mode. Bet it might not stink?
kschroeder
JaimieSirovich kschroeder Jason Yang petewarnock I don’t know exactly what goes on behind the scenes, but the thread safety adds a significant amount of overhead. Enough so that even Microsoft said to not use it on IIS and use FastCGI instead.
JaimieSirovich
kschroeder JaimieSirovich Jason Yang petewarnock Good datapoint. IIS is a properly threaded/async server via I/O completion ports and pools of threads. If they say it adds overhead, it's probably true.
Gwyneth Llewelyn
Recently I have been banging my head against a WordPress plugin that works flawlessly under Apache but requires some tricks to get it working under nginx. Eventually, the community figured out that the "trick" is that the plugin makes serious use of rewriting rules. Under Apache — and here comes my point! — because AllowOverride is on, and as .htaccess was already used to handle (part of) the rewriting rules, everything went smoothly. Under nginx the rewrite rules had to be manually added to the configuration.
Soooo… that made me remember this article. Effectively, Apache + mod_php, with all the fine-tuning, will beat nginx with PHP5-FPM, for all the reasons you've pointed out. However, one of the big performance hits is having AllowOverride On. I'm assuming (I've not tested it!) that if all rewrite rules for common applications (and, yes, WordPress comes to mind, since 20% of all websites run WordPress…) were in httpd.conf, then all these Apache setups would actually get a huge increase in performance, just by avoiding the extra file calls. The trade-off is that every time something gets changed, Apache has to reload its configuration. But that's what already happens with nginx (which has its rewrite rules in the configuration file as well).
This was particularly noticeable on a shared hosting server which I use for less important websites, such as client demos and temporary setups. Almost all sites there run WordPress. I wondered recently why the server, although its CPUs are practically doing nothing, and there is little traffic, has such a sluggish performance. I originally blamed it on Apache, but now I’m pretty sure that AllowOverride On is the culprit: because this server’s disks are actually remotely mounted (I think they’re NFS-mounted), every time a WordPress site gets a request, it has to read .htaccess from the disk. Every time. Even if the actual page is cached! I make serious use of complex caching plugins and security plugins which are totally dependent on .htaccess, and this would certainly explain why a very powerful server, with lots of cores and RAM, and next-to-zero traffic, actually performs so badly.
It would also explain why a relatively standard phpBB installation has blazingly fast performance on that server, compared to the sluggishness of WordPress — that particular phpBB installation doesn’t use .htaccess.
Fortunately, Automattic has been working on an internal URL rewriting API, which now comes standard with WordPress, and which, over time, should allow all kinds of rewriting rules in order to avoid .htaccess — or plugin compatibility issues across Apache/IIS/nginx/lighttpd etc. Obviously most plugins still don't use that API. But it certainly explains why people running WordPress, like myself, are so astonished at the incredible performance of nginx over Apache on the same server. Again, Apache is not really to blame. In this case, it truly seems to be the so-lovely rewriting rules in .htaccess that are slowing everything down.
Nice catch. I actually hadn't paid close attention to that detail in your benchmark. I believe that things are even far worse in the "real world" — especially because in most shared environments, end-users don't get access to Apache's configuration and have no choice but to rely on .htaccess. On the other hand, maybe there will be a way to safely and securely add rewriting rules to httpd.conf (via include files) and get rid of AllowOverride on most Apache installations…
.htaccess is too convenient 🙂
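(For the curious, "moving the rules into httpd.conf" for a stock WordPress install, as discussed in the comment above, amounts to copying the standard .htaccess block into the vhost and switching overrides off; the docroot path is an example, and mod_rewrite must be enabled:)

```
<Directory "/var/www/wordpress">
    AllowOverride None

    # The standard WordPress rewrite block, normally shipped in .htaccess.
    RewriteEngine On
    RewriteBase /
    RewriteRule ^index\.php$ - [L]
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule . /index.php [L]
</Directory>
```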
JaimieSirovich
kschroeder Interesting datapoint: Open Litespeed appears to use an architecture most similar to Nginx + php-fpm. But Open Litespeed has to be designed for hybrid content.
kschroeder Jason Yang petewarnock I suppose another alternative would be using a static.example.com server or port (run lighttpd/nginx on :8001) running something like Nginx or Lighttpd to host most of your boring content like *.js, *.css, *.sprite.png, etc. You might want to keep certain images that are worth indexing on the main domain for other reasons. That all sounds very CDN-ish, but Nginx + php-fpm does look a lot like a proxy-type CDN like Cloudflare.
I've never bought into the "Nginx is always faster" thing, because Nginx proxying via local sockets to another process (so 2 processes) just isn't lighter than mod_php inside 1 process. If the majority of your requests are for dynamic content generated by a scripting language, you can't benefit from asynchronous I/O. It doesn't _really_ matter so much if your language is thread safe and can be threaded, though I said it would be faster (@kschroeder pointed out why I was wrong). Some empirical proof of this is that the Apache worker MPM doesn't really do a ton better than prefork on static content. A lot of Nginx's "magic" is epoll and sendfile, which Apache is adding with the event MPM. It doesn't compete so far, but it probably could someday.
The second you host any static content ("hybrid content"), the tables turn, though, and you'd be best off using Nginx + pools of dynamic content threads/procs. There may be other reasons, too; php-fpm probably gives you more control over which user is running what.
The slowness of .htaccess is really a headturner every time. If Litespeed can fix it, why can’t Apache? Open Litespeed is also a contender if you don’t mind its limitations.
JaimieSirovich
kschroeder JaimieSirovich Jason Yang petewarnock There's one thing I overlooked. If you proxy to a PHP process pool, you'll gain (usable) keepalives. Whether this matters so much, of course, depends — on connection speed and number of connections per page. If you're going to use the prefork MPM, you're going to get into trouble with keepalives as well as mobile clients, potentially. Slowloris is just surfing a website with a Hayes 300-baud modem, no? Totally possible. Of course you can mitigate it. Turn off keepalives and use mod_reqtimeout, etc., but that might be a bandaid.
joeynovak
I really appreciated this post. It really highlights how important it is to tune your web server according to your server and usage case, and I appreciate that you used both of them out of the box 🙂
nitinanuj
What about 65536 concurrent requests from nginx/apache to php-fpm? It will breach the port limit, whether over a Unix socket or TCP/IP.
kschroeder
nitinanuj Then you’ll need about 30,000 cores to handle those requests, so it’s not something I’d be too worried about.
garet
nitinanuj if you need more concurrent connections, simply create a new virtual network interface and listen on the new set of ports.
nitinanuj
Managed to resolve the issue thanks for non valuable inputs 😛
nutsack
I have to say this is a brilliant post. Thanks for writing about this. Disabling .htaccess files has always been standard apache optimization advice. I just never knew how much of a difference it makes!
There is, however, another thing to consider. With mod_php, file permissions can be a problem. With php-fpm, PHP runs as the user owning the files, so file permissions are easier to manage and safer on shared servers.
vankenn
kschroeder nitinanuj This is false. We have about 43,000 concurrent requests. Everything on 72 cores.
vankenn
Not sure if this post is favoring Apache or what not, but the major problem we run into is high concurrency with Apache. Apache spikes CPU load and RAM usage while Nginx stays really low. This is totally resolved by moving to nginx. I guess most websites won't run into concurrency issues.
kschroeder
vankenn kschroeder nitinanuj 43,000 concurrent PHP requests on 72 cores? That is 597 requests at a time per core. I doubt it. Or you’re running a Hello World app.
kschroeder
vankenn This post is to say that while Nginx is good with one type of workload (static files, lots of connections) that doesn’t automatically make it better in all scenarios. In fact, in most low-medium traffic scenarios Apache is slightly better. Nginx RAM usage has nothing to do with Apache memory usage; it is PHP-FPM which is better, though I have talked to Sysadmins who have reduced it significantly through module removal and configuration options so that Apache and Nginx/PHP-FPM are on par.
The bottom line of the article is that Nginx isn’t automatically better than Apache. The answer, as with most questions, is “it depends”.
nitinanuj
vankenn That's what I said: you can't have more than 65536 concurrent connections over a single port communication channel, and 43,000 is less than that, which is why you are able to achieve it.
kschroeder Yes, the cores and CPU utilization depend on how CPU-intensive the PHP processing is. Even if it is somewhat intensive, it should hurt response time due to queuing, not the number of concurrent connections.
JaimieSirovich
nitinanuj vankenn kschroeder All of this is ignoring 1 critical and salient point. In real life connections stall, vanish, or go at 1-10kb/s (a la Slowloris). mod_php will do well, or even better than nginx + php-fpm when you’re slamming it from localhost or on a gb network, but add in some mobile phones on *cough* Sprint or something, then add a dash of keepalives, which are especially necessary on SSL, and YMMV. I would hypothesize that a well-tuned Apache 2.4 using the event MPM serving smaller documents from php-fpm would perform almost as well as Nginx + php-fpm, but so far I don’t see it. I guess that’s why we still need benchmarks, but the theory is important. Nobody cares how fast ab works on localhost. In real life, you get all sorts of speeds and latencies.
John
There is such a thing as KeepAliveTimeout. If there is no response from the client, the connection times out and the process ends, freeing up memory.
Apache needs to be optimized. People think the default config will work for everyone, but it doesn't. You have to set KeepAliveTimeout to a low value like 3-5 seconds. MaxClients has to be tweaked or you'll run out of memory.
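(A sketch of that kind of tuning for Apache 2.2 with the prefork MPM; the actual numbers depend entirely on available RAM and the per-process footprint:)

```
KeepAlive On
KeepAliveTimeout 3            # drop idle keepalive connections quickly

<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients          150   # cap so (MaxClients x process size) fits in RAM
    MaxRequestsPerChild 1000
</IfModule>
```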
Kevin Schroeder
I would argue that even 5 seconds would be high. Especially for a desktop site. Mobile, not so sure.
Jaimie Sirovich
@kevin If it costs only an open socket, why is 5s or even 30s problematic? The problem is the process, not the socket, no? Mobile devices and their networks might decide not to use keepalives much, but the same problem comes up because you get lots of slow/stalling connections, which makes you want the same thing — a process pool detached from I/O. Keepalives are like infinitely I/O bound in certain MPMs.
Kevin Schroeder
They aren't infinite, but some MPMs handle them better. Almost all Apache installations that I have ever seen are using the prefork MPM, which ties the connection to a process, so long-lived keepalives have the same impact as a DoS.
Jaimie Sirovich
@kevin It won't let me reply to "They aren't infinite," so I'll do it here. Let's say we _do_ have an efficient process model that detaches the connection from the threads/processes (Nginx, Litespeed, perhaps Apache 2.4 + mpm_event). Especially with SSL connections, why not leave it open for 60 seconds? On a very busy server, you might need to disconnect idle connections a little faster, or just disconnect the oldest active keepalives when new connections come in after a certain threshold. But it's just a file handle, no? SSL connections can lag quite a bit. OK, add stapling, HTTP/2 and some greasy hacks to speed it up, but the web will go SSL soon and the first handshake will still be slow-ish.
JaimieSirovich
nitinanuj vankenn kschroeder I meant the old school prefork MPM in the latter part of that comment. Assuming the processes are actually doing stuff, and stuff isn’t very I/O bound (I guess you could consider keep-alives as an extreme case of I/O bound), prefork will do extremely well benefitting from PHP’s embeddedness and various forms of locality
kschroeder
nitinanuj vankenn kschroeder Yes, achieve, but practically that will only happen in the event of a significant problem on the system. The original comment was that you could breach the 65k connection limit using FastCGI. My comment was that, from a practical standpoint, that limit should never be an issue, because in order to hit it you would need many tens of thousands of CPUs to handle the level of concurrency required to reach that number of connections.
kschroeder
JaimieSirovich nitinanuj vankenn kschroeder You are exactly right. Apache will do better with 100% dynamic requests. Only by a little, but it will do better. Nginx/PHP-FPM is best with a combination of dynamic and static content without a load balancer. Apache (with prefork) is a better choice when it is only dynamic content and it sits behind a high-concurrency proxy server that protects it against connection persistence.
In the end, the blog post was to explain why Nginx/PHP-FPM was better than Apache, which is the claim that I’ve heard said over and over. As should be expected, and to my surprise, the answer is, as always, “it depends”.
nitinanuj
@kschroeder I think we cannot generally calculate CPU utilization based on concurrency. To conclude: I have achieved a concurrency of 100K on a 32-core machine, and the app definitely does a lot more than Hello World. The solution was to create multiple php-fpm pools; in my case I created 3 pools, so there were 3 socket communication channels between Nginx and PHP-FPM. This allowed 65k * 3 concurrent connections in parallel between Nginx and PHP-FPM.
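(A sketch of the multi-pool layout being described; pool names and socket paths are illustrative, and whether you actually need it depends on whether you hit the per-address connection limits discussed above:)

```
; PHP-FPM: one pool file per socket (repeat as www2, www3 with their own sockets).
[www1]
listen = /var/run/php-fpm-www1.sock
pm = static
pm.max_children = 200
```

```
# nginx: spread PHP traffic across the three pool sockets (round-robin by default).
upstream php_pools {
    server unix:/var/run/php-fpm-www1.sock;
    server unix:/var/run/php-fpm-www2.sock;
    server unix:/var/run/php-fpm-www3.sock;
}

server {
    location ~ \.php$ {
        include       fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass  php_pools;
    }
}
```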
JaimieSirovich
kschroeder JaimieSirovich nitinanuj vankenn I wouldn’t even go that far. It will do better with dynamic requests that are all coming from _FAST_ connections. Once you mix in mobile and slower connections — or connections that stall — Apache with mod_php will begin to choke because the slow connections are holding on to a process.
If you repeat the test with a benchmark that simulates slow connections, I bet Nginx will do better against mod_php and prefork, but I’m less sure about event + php-fpm.
Scalable Architecture for Web Servers | تاد
[…] Further reading: why is nginx+phpfpm faster than apache+modphp? […]
Performance Apache, Nginx, Php-FPM, Mod-PHP | Consultance Sécurité & Systèmes Linux
[…] takes only a single visitor for the PHP code they request to be interpreted); there isn't the slightest difference between Apache and Nginx. In any case, neither one nor the other […]
Ultimate Guide to Page Speed: Advanced | world is vulnerable
[…] performance, especially when compared to NGINX with FastCGI (PHP5-FPM). However, we (and others) have found that it is the product of poor configuration, directly related to AllowOverride. Once .htaccess discovery is turned off, Apache with mod_php […]
WordPress performance comparison – NginX vs Apache2 – WordPress Lab
[…] WHY IS FASTCGI /W NGINX SO MUCH FASTER THAN APACHE /W MOD_PHP? […]