Nginx frontend with Apache2 backend for PHP

Hi guys,

For a hobby project I'd like to move away from a single machine serving all content, and instead split the work across machines for more performance. Performance is fine as it is, but it's fun to try out, so I'd like to do it anyway.

This is what I had in mind, with my awesome drawing skills:

Now, I have worked with Apache before, though I'm no expert, but I have very little experience with Nginx. I also have a few other questions:

1. What's the best way to sync files between Apache and Nginx? Just rsync? How should I handle permissions on both sides?

2. Will this keep all .htaccess rules intact?

3. Since APC is caching, should I run it on the Apache host or the Nginx host?

4. Is there any point in syncing the MySQL databases? I first thought of doing it, but since the database servers aren't exactly separate anyways there's probably no point in having more than one MySQL running with this kind of setup.

5. Is the setup/idea good, or does it sound really stupid-ish?

6. Any tips/tricks/things I should know for this to become a success? :)

Thanks all,

9 Replies

I doubt this setup will be faster; in fact, it'll probably be slower, since there's added network latency between the two servers.

FYI, APC is a PHP extension, so it has to run alongside PHP-FPM; you couldn't put it on the nginx box.

If you want performance try the following in this order:

1) Reduce network transfer by compressing images, minifying css/js, gzipping content.

2) Optimise mysql usage, check your slow log and add indexes where appropriate, remove excess queries

3) Optimise your PHP scripts: remove anything that isn't needed anymore (any old functions you forgot to remove, etc.), make sure APC is installed, and switch to an in-memory session store (memcached can be used out of the box for this).

4) Set up page caching if appropriate. I personally use nginx's fastcgi cache on some WordPress sites; it's brilliant.
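The in-memory session store from step 3 can be sketched as a php.ini fragment. This assumes the PECL "memcached" extension is installed and a memcached daemon is listening on the default port; the host/port are placeholders, not anything from this thread:

```ini
; Hypothetical php.ini fragment: store PHP sessions in memcached
; (requires the PECL "memcached" extension; host/port are placeholders).
session.save_handler = memcached
session.save_path = "127.0.0.1:11211"
```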

Hi,

Thank you for your response and ideas. I do use compression/minification etc.; mod_pagespeed has been a great help with this as well.

How big of a performance impact do you think it would be? My reasoning to "split it" was this:

  • NGINX is very efficient with static content. I did think about page caching with Nginx and I did want to try this as well, though I'm not sure how good it would be.

  • I imagine the load on the Apache machine would always be higher than on the nginx machine, since it has to process the PHP code. I could dedicate that VPS to PHP and leave the nginx machine running without a high load. That way my other services, ZNC but most importantly VoIP, won't suffer when the site is busy with PHP. I currently run TS3, which will get quite busy in a few months' time.

  • I also heard that NGINX is not affected by attacks like Slowloris, which should also improve my security and not make the site so easy to DoS. (Then again, I don't run anything interesting, but the idea that it is really simple to take down doesn't comfort me. I tried Slowloris against the Apache I run, and it is extremely easy to take down. There are a few modules that help mitigate the attack, but the site still becomes a lot slower.)

For clarity: I do intend to use the LAN IPs, so the increased network latency should be very minimal.

Thanks,

Personally I'd skip Apache and just have PHP-FPM handle PHP directly, as it is perfectly capable of handling everything by itself. You can also have multiple PHP-FPM backends if needed.

Stick with a single server: have nginx/MySQL/PHP-FPM all on the same box, add fcgi caching for PHP pages (example here: http://wiki.nginx.org/HttpFastcgiModule) and you'll be able to serve thousands of requests a second. Here's an example from ApacheBench using a similar setup on a Linode 512:

ab -c 1000 -n 10000
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests

Server Software:        nginx
Server Hostname:        
Server Port:            80

Document Path:          /
Document Length:        518331 bytes

Concurrency Level:      1000
Time taken for tests:   7.062 seconds
Complete requests:      10000
Failed requests:        37
   (Connect: 0, Receive: 0, Length: 37, Exceptions: 0)
Write errors:           0
Total transferred:      5187155680 bytes
HTML transferred:       5184864993 bytes
Requests per second:    1415.96 [#/sec] (mean)
Time per request:       706.237 [ms] (mean)
Time per request:       0.706 [ms] (mean, across all concurrent requests)
Transfer rate:          717263.71 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        5  202 718.0     17    3014
Processing:     8  494 178.6    545     987
Waiting:        0   21  35.6     16     411
Total:         24  696 775.1    586    3993

Percentage of the requests served within a certain time (ms)
  50%    586
  66%    632
  75%    663
  80%    682
  90%    829
  95%   3550
  98%   3640
  99%   3816
 100%   3993 (longest request)
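A minimal sketch of the fcgi caching mentioned above, based on the HttpFastcgiModule docs; the cache path, zone name, socket address and validity times here are illustrative assumptions, not the poster's actual config:

```nginx
# Hypothetical sketch of nginx fastcgi caching (http{} context);
# paths, zone name and timings are placeholders to tune per site.
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=phpcache:10m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

server {
    listen 80;
    server_name example.com;
    root /var/www/html;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;          # PHP-FPM on the same box

        fastcgi_cache phpcache;               # use the zone defined above
        fastcgi_cache_valid 200 301 302 10m;  # cache successful responses
        fastcgi_cache_use_stale error timeout updating;
    }
}
```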

That's indeed a lot better than the results I see. Running AB gives me this:
Concurrency Level:      1000
Time taken for tests:   20.682 seconds
Complete requests:      10000
Failed requests:        509
   (Connect: 0, Receive: 0, Length: 509, Exceptions: 0)
Write errors:           0
Non-2xx responses:      9503
Total transferred:      13909505 bytes
HTML transferred:       13313863 bytes
Requests per second:    483.52 [#/sec] (mean)
Time per request:       2068.156 [ms] (mean)
Time per request:       2.068 [ms] (mean, across all concurrent requests)
Transfer rate:          656.79 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       10  308 1668.1     11   15047
Processing:    10   89  318.5     12    2833
Waiting:       10   89  315.9     12    2821
Total:         21  397 1812.1     24   17437

Percentage of the requests served within a certain time (ms)
  50%     24
  66%     25
  75%     27
  80%     30
  90%    465
  95%   1553
  98%   7317
  99%   9450
 100%  17437 (longest request)

I'll try using just nginx/MySQL/PHP-FPM with fcgi caching, to see how that works. It does have a few downsides though, I think:

  • I currently have a huge .htaccess file; I hope someone has written an nginx equivalent for vBulletin as well. A quick search gave me some rules for the rewrites, but the .htaccess I have also includes some caching directives, I think. I'll have to try that out.

  • There's no native mod_pagespeed for nginx, so no automatic minifying, CSS sprites, etc.; mod_pagespeed was a great help with this.

I'll definitely try it out though and see how it works. Thanks ;)

What does your .htaccess contain? Normally it's not hard to translate.
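As a hypothetical illustration (the actual .htaccess in question isn't shown in the thread), a common mod_rewrite front-controller rule plus expires headers might translate to nginx roughly like this:

```nginx
# Hypothetical translation example.
# A typical .htaccess front-controller rewrite:
#   RewriteCond %{REQUEST_FILENAME} !-f
#   RewriteRule ^(.*)$ index.php?$1 [QSA,L]
# becomes, in nginx:
location / {
    try_files $uri $uri/ /index.php?$args;
}

# Browser caching (the "caching" part of many .htaccess files):
location ~* \.(css|js|png|jpg|gif)$ {
    expires 30d;
}
```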

As for mod_pagespeed, just do it yourself when you upload your files: yuicompressor is pretty decent and handles CSS and JS, and adding gzip to that makes things really fly. If you want to optimise images, use optipng and jpegoptim on your server; they work wonders.
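A rough sketch of that pre-upload step. The gzip part is runnable as-is and produces a .css.gz that nginx's gzip_static module can serve directly; the yuicompressor/optipng/jpegoptim lines are left as comments since they assume those tools (and a jar path) that aren't given in the thread:

```shell
# Sketch of a manual pre-compression pass (filenames are placeholders).
# Generate a sample stylesheet so the script is self-contained:
printf 'body { color: #ff0000; }\n' > style.css

# Pre-gzip at max compression; with "gzip_static on;" nginx serves
# style.css.gz instead of compressing on every request:
gzip -9 -c style.css > style.css.gz

# If installed, minify and optimise the rest the same way (not run here):
#   java -jar yuicompressor.jar style.css -o style.min.css
#   optipng -o5 logo.png
#   jpegoptim --strip-all photo.jpg

ls -l style.css style.css.gz
```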

I use the one mentioned here. I got the rewrite working on a VM; it works fine like that.

Still trying stuff out on a VM now with the setup you mentioned. I couldn't get the cache working, it simply returns an empty page…

EDIT: Seems I broke everything now; the rewrite rules no longer work and it only gives me a 404 error. I'll probably need to redo it. However, a few things I ran into with the VM:

  • ApacheBench getting recv errors (104). At the same time the VM logged a lot of "TCP: drop open request from" and "TCP: time wait bucket table overflow" kernel messages.

  • vBulletin seemed to work without the fastcgi cache; when I tried it on the forum.php page, the results were very similar to the ones here on Linode. I'm not sure if there really is a CPU limit there or if I'm failing to optimize something (kernel, Apache/nginx config, or PHP-FPM configuration).

I'll look into yuicompressor as well; I'm not sure if it is compatible with vBulletin.

> NGINX is very efficient with static content.

But you're handling dynamic content with Apache instead, which implies that you think nginx is very inefficient with dynamic content. It's not. Splitting static/dynamic content onto two servers is a lot of unnecessary complexity for no perceptible benefit. Just about the only justifiable reason to do this is if something requires Apache specifically, like a requirement for completely compatible Apache rewrite rules or something.

@Guspaz:

That makes sense - thank you.

It is true that I did indeed want the .htaccess rules to work as well; however, seeing as the rewrites seemed to work for vBulletin, I might as well just keep it on one machine.

It is often suggested to "use a CDN" for static files, so I thought this would be a good way to go.

However, as you explain it, one machine should indeed be fine, and if the load gets too high I can always add another machine and load-balance.

That's actually one of the two things I want to do; the other is that I keep messing with the settings. I'm trying to get as many req/s as possible, but the settings posted online differ a lot and I never really know what "is best". I tried looking for practical guides that explain it, but I haven't been able to find much. It's mainly (I think?) a balance between the following:

  • Nginx settings (or apache)

  • MySQL (my.cnf) settings

  • php-fpm.conf & pool/www.conf

  • Possibly kernel tuning - I think there are some limits that cause errors when running ApacheBench. I saw a lot of "tcp time wait bucket table overflow" and "tcp drop open request from " messages.
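Those two kernel messages usually point at the TIME_WAIT table and SYN-backlog limits. A hypothetical /etc/sysctl.conf fragment to raise them might look like this; the values are illustrative guesses to tune per box, not recommendations from the thread (apply with `sysctl -p`):

```conf
# Hypothetical /etc/sysctl.conf fragment; values are illustrative only.
net.ipv4.tcp_max_tw_buckets = 262144    # more TIME_WAIT slots before "table overflow"
net.ipv4.tcp_max_syn_backlog = 8192     # larger SYN queue before open requests are dropped
net.core.somaxconn = 8192               # larger accept() backlog for nginx/php-fpm
```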

Do you know of any books or online training that explain this? That way I could also get the server to actually serve content in an efficient and fast way, I guess. :mrgreen:

Thank you so far anyway. I'll at least play around with it some more on a VM and see how it runs; after that I can use it as the main webserver as well, I think.
