Time to first byte more than doubled on a new Linode

Hi,

I created a new Linode in the UK and landed on the new hardware with an E5-2670.

I had an IPB (Invision Power Board) forum running on Debian Squeeze with another provider and set everything up exactly as it was before; the only difference is worker_processes in nginx, which I changed from 4 to 8.
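For reference, the change was roughly this in nginx.conf (a sketch, not my full config):

```
# sketch of the only change I made, in /etc/nginx/nginx.conf
# old value on the previous provider:
# worker_processes 4;

# new value on the Linode (matches the 8 visible cores):
worker_processes 8;

# note: newer nginx releases (1.2.5+) also accept "worker_processes auto;"
```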

Now my time to first byte is almost 2x to 3x what it was before.

Why is this happening? What should I do, or where should I start?

Thanks

21 Replies

What was it before? What is it now? What happens if you change worker_processes back to 4?

Hi

Before I was seeing 0.3 to 0.4 seconds, and now I get 0.7 to 0.9.

When I change it back to 4 I get the same values.
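For what it's worth, I'm measuring it roughly like this (a sketch; the URL is a placeholder for my forum's front page):

```
# print DNS, connect and time-to-first-byte timings (in seconds) using curl
curl -o /dev/null -s -w 'dns: %{time_namelookup}  connect: %{time_connect}  ttfb: %{time_starttransfer}\n' \
  http://example.com/
```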

Check that PHP is using APC and that it has enough memory. There is an apc.php that comes with php-apc that will let you see what it's up to.

Check mysql. Mysqltuner is a wonderful tool.

Check your DNS speed with one of the many on-line testers. A good website tester should also tell you about DNS delays.

Check your system isn't doing something really stupid with top or longview.
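Roughly, those checks boil down to something like this (a sketch; package paths and filenames are from memory and may differ on Squeeze):

```
# APC: the php-apc package ships an apc.php status page (often gzipped under
# /usr/share/doc/php-apc/); copy it somewhere web-accessible and load it in a browser
zcat /usr/share/doc/php-apc/apc.php.gz > /var/www/apc.php

# MySQL: mysqltuner prints tuning recommendations based on current usage
perl mysqltuner.pl

# DNS: check how long a lookup of your own hostname takes
dig example.com | grep 'Query time'

# general sanity: watch load, memory and the CPU steal ('st') column
top
```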

Hi,

Today TTFB is 0.2 and this is the expected value for the power we have on Linode :)

Could this be related to a bad neighborhood on my node?

@nfn:

Hi,

Today TTFB is 0.2 and this is the expected value for the power we have on Linode :)

Could this be related to a bad neighborhood on my node?

If you changed nothing and your time to first byte dropped to a quarter of what it was yesterday, then it has to be down to host load. It's outside your control; all you can do is talk to support.

@sednet:

@nfn:

Hi,

Today TTFB is 0.2 and this is the expected value for the power we have on Linode :)

Could this be related to a bad neighborhood on my node?

If you changed nothing and your time to first byte dropped to a quarter of what it was yesterday, then it has to be down to host load. It's outside your control; all you can do is talk to support.

That, and/or caches have "warmed up."

I was getting TTFB values of about 1500ms or greater until I migrated from Apache to Nginx utilizing W3TC on WordPress. With W3TC there's no need to fire up PHP for every page visit; Nginx just serves up the static HTML disk cache files. It brought my TTFB values down to around 150ms (that's a huge performance improvement!).

http://blog.michaelfmcnamara.com/2012/11/apache2-mod_php-vs-nginx-php-fpm/
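The gist, as a very rough sketch (W3TC generates its own rewrite rules, and the exact cache path depends on your settings, so treat this as illustrative only):

```
# simplified sketch of the "serve the W3TC page cache directly" idea;
# the real rules W3TC generates are more involved (cookies, SSL, mobile, etc.)
set $cache_file /wp-content/cache/page_enhanced/$host$request_uri/_index.html;

location / {
    # serve the cached HTML if it exists, otherwise hand off to WordPress/PHP
    try_files $cache_file $uri $uri/ /index.php?$args;
}
```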

The issue here was that TTFB had doubled on what should, ostensibly, be a faster system.

For what it's worth though, I get decent TTFB times out of WordPress without static file caching or PHP-FPM. APC is essential to that, since otherwise the startup overhead for each request is about 50% of the request processing time.

TTFB is kind of an arbitrary benchmark anyways.

Static file caching really helps with WordPress.

What I am seeing is much higher contention for CPU resources on many of the Linodes I've already migrated. I think those of us who migrated early have all ended up on the same host machines and aren't getting as good performance now because we're all fighting for resources, whereas before the load was, on average, probably spread across more physical host machines.

I wish I hadn't migrated a bunch of my nodes… they were performing better before because I was getting less CPU steal %! :)

Side-note: the ones I didn't migrate now have even less CPU contention on them.

It doesn't help that Linodes now have 8 virtual cores; I'm not sure what the reasoning behind that decision was. Even though the core count is doubled on the new hosts, keeping the virtual core count the same would probably have reduced contention.

Yeah. It sounds more impressive, but only one out of the five hosts I migrated to shows almost no contention for the CPU. Even though the number of cores on the physical machines has doubled, the RAM has, I imagine, more than doubled, which also means a higher density of Linodes per physical host. So if I could roll my migrations back to the hosts they were on before, I would, since I'm seeing basically no benefit from the new hosts and I wasn't RAM-bound anyway. Oh well.

And that's why when I saw the news I thought "I'll wait a few weeks, until they deploy more nextgen hosts."

Just like with almost any other "new, cool service", the initial rush of users ends up overloading things. :)

Wise words, rsk, wish I'd listened to my gut and waited as well! :)

+1 on wishing I could go back and not take the free upgrade. I didn't really need the RAM and now the CPU and disk IO are horrid.

I'm sitting here waiting on a manual tar|gzip backup that took maybe 5 minutes on the old hardware; after migrating it's at 30 minutes and counting. Watching htop, I'd say I'm getting at least 90% CPU steal :(
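For what it's worth, this is roughly how I'm watching the steal and trying to soften the backup's impact (a sketch; the paths are placeholders):

```
# one-shot look at CPU steal: 'st' is the last figure on the Cpu(s) line
top -bn1 | grep 'Cpu(s)'

# or sample it over time with vmstat; 'st' is the final column
vmstat 5

# run the backup at low CPU and I/O priority so it at least plays nicely
nice -n 19 ionice -c3 tar czf /backup/site-$(date +%F).tar.gz /var/www
```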

@MichaelMcNamara:

I was getting TTFB values of about 1500ms or greater until I migrated from Apache to Nginx utilizing W3TC on WordPress. With W3TC there's no need to fire up PHP for every page visit; Nginx just serves up the static HTML disk cache files. It brought my TTFB values down to around 150ms (that's a huge performance improvement!).

http://blog.michaelfmcnamara.com/2012/11/apache2-mod_php-vs-nginx-php-fpm/

A better option is just to ditch WordPress and use Pelican. It can import your WordPress blog automatically and is ridiculously fast (static HTML only). Plus you can now do your blogging using Vim and Git :). Much nicer.

http://docs.getpelican.com/en/3.1.1/
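The import is roughly this (a sketch from memory of the docs; filenames are placeholders):

```
# export your blog from WordPress (Tools -> Export), then convert the posts
# into source files Pelican can build from
pelican-import --wpfile -o content wordpress-export.xml

# build the static site; output/ ends up as plain HTML you can serve directly
pelican content -o output -s pelicanconf.py
```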

@Stever:

+1 on wishing I could go back and not take the free upgrade. I didn't really need the RAM and now the CPU and disk IO are horrid.

I'm sitting here waiting on a manual tar|gzip backup that took maybe 5 minutes on the old hardware; after migrating it's at 30 minutes and counting. Watching htop, I'd say I'm getting at least 90% CPU steal :(

Sounds to me like the new nodes are getting hammered by all the migrations. I'd expect it to calm down somewhat once this huge wave of migrations has finished. I'd imagine it's putting quite a load on their internal network, and on the host machines' I/O in particular.

@Stever:

+1 on wishing I could go back and not take the free upgrade. I didn't really need the RAM and now the CPU and disk IO are horrid.

I'm sitting here waiting on a manual tar|gzip backup that took maybe 5 minutes on the old hardware; after migrating it's at 30 minutes and counting. Watching htop, I'd say I'm getting at least 90% CPU steal :(

I had exactly the same issue post-upgrade; I opened a support ticket and had a new migration to another machine queued within 10 minutes, and haven't had a problem since. I think I'm back on the old hardware (but with the increased RAM). I wasn't CPU-bound anyway, and it's better than being completely unusable like it was post-migration.

@Guspaz:

It doesn't help that Linodes now have 8 virtual cores; I'm not sure what the reasoning behind that decision was. Even though the core count is doubled on the new hosts, keeping the virtual core count the same would probably have reduced contention.

This reminds me of my experience with VMware (I know it works a little differently than Xen, but here goes):

We constantly had "discussions" between the network group (in charge of the VMware infrastructure) and the database admins. The DBAs wanted more CPU cores for their systems (and more RAM), while the network group wanted fewer cores (the RAM request was understandable in this situation). To settle the issue, a test environment was set up with both configurations and the DBAs were asked to test the machines. It turned out that fewer CPU cores gave better performance than more (2 cores vs. 6 cores on a host with four 6-core Xeon processors). The issue in VMware was that all "requested" cores had to have an available cycle before the host would give the guest its CPU time, so a guest with 2 or 4 cores got the cycles it needed sooner than one with 6 (with multiple guest machines on the system; the only difference was the DB machines). So, at least with VMware, "more is not always better".

Xen isn't stupid like that, though.

Edit: Xen may or may not have other interesting performance issues, but it 100% does not have that one.

I was more suggesting that the number of threads involved is a bit nuts. Assuming each Linode represents 8 threads on the host machine, and that there are still 40 Linodes per lowest-plan host machine, you've got a 16-core server managing 320 threads (virtual cores), or 20 virtual cores per real core.

Now, that's no worse than when we had 40x4 threads on 4x2 real cores, but there was an opportunity to reduce the contention (by doubling the real core count and keeping the virtual core count the same, at 4 per Linode).

I'm saying I'm not sure what the point of doubling the virtual core count was.

@Guspaz:

I'm saying I'm not sure what the point of doubling the virtual core count was.

It was marketing and because this change could be made without buying extra hardware.

I doubt there were many people who were CPU bound before the upgrade.
