Tuning Apache

I have a 360 running a few websites; the highest-traffic one is a phpBB forum for an NFL team. It gets its highest traffic during games (i.e., for a three-hour period once a week). Outside of games, traffic isn't all that high, so it's not a problem. During games, the server is nearly brought to its knees.

I'm running Apache prefork with mod_php and a MySQL backend. I have APC installed for opcode caching. That helped a lot, but I'm still having problems.

I did not have mod_expires enabled; I just turned it on today to cut down on the hits for the little GIF files in the phpBB styles.
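Roughly what I mean, in case it's useful to anyone else; the one-week lifetime here is just an example, not necessarily what I ended up using:

# assumes mod_expires is loaded; cache the forum's static images for a while
ExpiresActive On
ExpiresByType image/gif "access plus 1 week"
ExpiresByType image/png "access plus 1 week"
ExpiresByType image/jpeg "access plus 1 week"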

Any thoughts on further tuning? Here's my prefork config; are there any other configs I should post?

prefork config:

StartServers 5
MinSpareServers 5
MaxSpareServers 10
MaxClients 50
MaxRequestsPerChild 100

8 Replies

Is mysqld a big part of the load? I'd try moving mysqld to a separate 360 and connecting to it over the private network.
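Roughly, that would mean binding mysqld to the private IP on the database box and pointing phpBB at it; the 192.168.x address below is just a placeholder for whatever your private IP actually is:

# my.cnf on the database 360 (placeholder private IP)
[mysqld]
bind-address = 192.168.130.5

// in phpBB's config.php on the web 360, point at that IP instead of localhost
$dbhost = '192.168.130.5';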

@BarkerJr:

Is mysqld a big part of the load? I'd try moving mysqld to a separate 360 and connecting to it over the private network.

It doesn't seem like it. Actually, what I was noticing was high queue lengths without CPU being very high.

@glg:

prefork config:

StartServers 5
MinSpareServers 5
MaxSpareServers 10
MaxClients 50
MaxRequestsPerChild 100

I suppose my first suggestion would be to drop the MaxClients setting a bunch (perhaps to 20 or so) and see how things perform. You shouldn't see any difference under idle/light load, and hopefully while responses may still be more sluggish under heavy load than when idle, it shouldn't bring the machine to its knees.

In terms of picking the best value, do you have any stats for either what your memory usage and swap is like under your loaded periods, or what one "client" requires while in use? And - how many apache processes are present at such times? Or, working bottom up, how large does one of your apache2 processes get while processing your php scripts and interacting with the database to serve a page?
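If you don't have those numbers handy, something along these lines during a busy stretch is usually enough (the process name may be httpd rather than apache2 depending on the distro):

# resident size (RSS, in KB) of each apache2 worker, plus a count of them
ps -C apache2 -o pid,rss,vsz,cmd
ps -C apache2 --no-headers | wc -l

# overall memory and swap picture
free -m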

With mpm_prefork, the single biggest thing to be concerned with is that MaxClients combined with your per-worker-process usage fits in the RAM you have. mpm_prefork means that each client gets its own apache2 process, so once requests start arriving fast enough you'll probably end up at the full 50 such processes. So multiply your typical apache2 worker process resident size by that, and take into account your other needs (such as mysql, for which you'll also do well to ensure you still have some buffer/cache space in the system to aid in query processing).
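As a rough back-of-the-envelope illustration (the 20MB per-worker resident size is purely an assumption for the sake of the arithmetic, and I'm taking the "360" to mean roughly 360MB of RAM):

50 workers x 20MB resident each = ~1000MB of worst-case demand, against ~360MB of RAM

Copy-on-write sharing between the children means the real footprint will come in somewhat under that, but it shows why the full 50 workers can push a box this size into swap once mysqld and some buffer/cache are accounted for as well.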

If things start to tank it could be much worse, since Apache will periodically kill off child processes and create new ones even when at the maximum count (in your config, after a particular process has handled 100 requests). But if your system is already thrashing, child processes won't free up their resources immediately, which means the replacement processes can make things even worse.

Because disk I/O is a real bottleneck on a VPS, you want to do your best to avoid swapping. It's much better (in terms of throughput and responsiveness of the machine) to force requests to queue up than to attempt to execute them in parallel with swapping.

As has already been pointed out, this all assumes that your per-request processing time is sufficiently small to handle the load you are getting. As an extreme example, say you needed 1s to actually service a request. If you're getting 100 requests/s, then requests will always back up until their average rate drops below 1/s. Changing MaxClients low enough will keep your machine from thrashing completely due to apache processes created for the pending requests, and may even help throughput through the database by reducing parallel requests, but it still won't be that responsive to clients since it simply can't support the req/s throughput needed - if you got 50 requests at once, at least one of those requests is only going to get an answer 50s later.

So if you haven't, working out some benchmarks of just what request rate your current setup can support when idle could be illuminating as well.

– David

@db3l:

If things start to tank it could be much worse, since Apache will periodically kill off child processes and create new ones even when at the maximum count (in your config, after a particular process has handled 100 requests). But if your system is already thrashing, child processes won't free up their resources immediately, which means the replacement processes can make things even worse.

That 100 was probably too low. This has been a work in progress, as I only have 3 hours or so a week of real load. I upped it to 5000.

@db3l:

Changing MaxClients low enough will keep your machine from thrashing completely due to apache processes created for the pending requests, and may even help throughput through the database by reducing parallel requests, but it still won't be that responsive to clients since it simply can't support the req/s throughput needed - if you got 50 requests at once, at least one of those requests is only going to get an answer 50s later.

Through some previous work, 50 seems to be about as high as I can go without thrashing.

However, I did find another setting that seems to have helped. The default KeepAliveTimeout is 15, but with prefork that means a process waits up to 15 seconds for another request from a client before it's released to answer someone else. For the worker MPM that default is fine, but for prefork it's too high. I turned it down to 2 and seemed to get markedly better performance.
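For reference, the relevant lines now look roughly like this (KeepAlive itself was already on, which I believe is the default):

KeepAlive On
KeepAliveTimeout 2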

@glg:

Through some previous work, 50 seems to be about as high as I can go without thrashing.

However, I did find another setting that seems to have helped. The default KeepAliveTimeout is 15, but with prefork that means a process waits up to 15 seconds for another request from a client before it's released to answer someone else. For the worker MPM that default is fine, but for prefork it's too high. I turned it down to 2 and seemed to get markedly better performance.

Hmm, these two paragraphs could be contradictory depending on why the KeepAliveTimeout change improved things - did your previous work actually generate the 50 worker processes at peak load?

If the problem was that you got to 50 worker processes, and the KeepAliveTimeout was leaving requests queued for the 15s before handling them (so purely queuing delay) that's one thing. But if lowering KeepAliveTimeout is just keeping the total number of worker processes lower (by reusing them faster), it might also indicate that a lower maximum clients is still worthwhile.

You might also consider how your database is handling the simultaneous load. Even if you can handle 50 apache processes, you might find that your database has a smaller sweet spot in terms of simultaneous requests. While sometimes counter-intuitive, keeping the concurrent requests lower can actually yield better overall throughput by improving the per-request response time more than any queuing delay added by the smaller number of simultaneous workers.

But it sounds like you're heading in the right direction…

– David

@db3l:

@glg:

Through some previous work, 50 seems to be about as high as I can go without thrashing.

However, I did find another setting that seems to have helped. The default KeepAliveTimeout is 15, but with prefork that means a process waits up to 15 seconds for another request from a client before it's released to answer someone else. For the worker MPM that default is fine, but for prefork it's too high. I turned it down to 2 and seemed to get markedly better performance.

Hmm, these two paragraphs could be contradictory depending on why the KeepAliveTimeout change improved things - did your previous work actually generate the 50 worker processes at peak load?

Yeah, it was my previous work that found 50 to be "good" (i.e., not swapping), but response was still slow because more than 50 users were hitting the site at a time.

@db3l:

If the problem was that you got to 50 worker processes, and the KeepAliveTimeout was leaving requests queued for the 15s before handling them (so purely queuing delay) that's one thing. But if lowering KeepAliveTimeout is just keeping the total number of worker processes lower (by reusing them faster), it might also indicate that a lower maximum clients is still worthwhile.

I think the KeepAliveTimeout change was reducing the queueing; I'm not sure whether it was actually keeping the number of workers in use lower.

@db3l:

You might also consider how your database is handling the simultaneous load. Even if you can handle 50 apache processes, you might find that your database has a smaller sweet spot in terms of simultaneous requests. While sometimes counter-intuitive, keeping the concurrent requests lower can actually yield better overall throughput by improving the per-request response time more than any queuing delay added by the smaller number of simultaneous workers.

What's a good metric to look at for mysql? I was watching top pretty constantly and rarely saw mysql pop up to the top.

@glg:

What's a good metric to look at for mysql? I was watching top pretty constantly and rarely saw mysql pop up to the top.

You can sort top however you like, but if you mean you were watching CPU: while there are exceptions, database engines are in general rarely going to be CPU-bound. They're much more likely to be stuck in I/O (either internal memory I/O between cache and process space, or more likely disk I/O, especially under a VPS), and having too many concurrent queries all attempting I/O simultaneously can essentially thrash the available I/O bandwidth.

I don't personally use mysql, so I can't help with any mysql-specific monitoring tools or tuning settings, but as a general matter I'd watch your iowait percentage as well as the status of the mysql processes, to see if they are waiting for I/O. Blocking is going to happen, so the key may be more how bad it gets under various settings.
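For example, something along these lines with standard Linux tools (iostat comes from the sysstat package, so it may need installing):

# 'wa' is iowait; 'si'/'so' show swap traffic (5-second samples)
vmstat 5

# per-device utilization and wait times
iostat -x 5

# processes currently in uninterruptible sleep ('D') are blocked on I/O
ps -eo state,pid,rss,cmd | grep '^D'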

It may also be easier to take a pragmatic approach (probably not dissimilar to what you've done so far). Come up with a small benchmark - say, an internal web page that executes a fairly typical query or two against the back-end. Then use something like ApacheBench (ab) to stress-test that page under various configurations. It would be interesting, for example, to compare MaxClients 5, 20, and 50 for pages that involve the whole chain from browser through application and database.
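For instance (the URL, request count, and concurrency level below are just placeholders):

# 500 requests, 20 at a time, against a page that exercises PHP and the database
ab -n 500 -c 20 "http://www.example.com/phpBB/viewtopic.php?t=1"

Re-running that with MaxClients at 5, 20, and 50 and comparing the requests-per-second and longest-request figures ab reports should make the sweet spot fairly obvious.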

– David

If you notice slow queries but not high CPU usage, have you considered moving MySQL's tmp folder to /dev/shm? That is, of course, only if you have enough free RAM to do so. It could help alleviate slow I/O.
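Something like this in my.cnf, assuming /dev/shm is mounted (it usually is on modern distros) and that you restart mysqld afterwards; keep in mind tmpfs lives in RAM, so large temporary tables placed there will eat memory:

[mysqld]
tmpdir = /dev/shm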
