Linode Manager CPU graphs topping out at 200%, not 400%

In the Linode Manager I see CPU topping out at 200% instead of 400%. Does anyone know why? Of course if I were limited to 2 CPU cores instead of 4, that would explain it, but I have received no notice from Linode that this has occurred, and they still charge me for 4 cores, so something else must be causing the problem.

James

16 Replies

Not aware of anything. Have you looked at 'htop' to see your usage in real time?

-Tim

Mine are working fine; I've got one peaking at 240% (it never does enough work to get it to 400%).

@obs:

Mine are working fine; I've got one peaking at 240% (it never does enough work to get it to 400%).
Mine never seems to reach 30% :-)

@theckman:

Have you looked at 'htop' to see your usage in real time?

Load in top was more than four, yet the Linode Manager CPU graph maxed out at 200% when it should have been 400%. Something is wrong. I will keep an eye on it now that I know about the problem.

James

Edit: I looked at the past history in the Linode Manager, and this started roughly three weeks ago.

@zunzun:

Edit: I looked at the past history in the Linode Manager, and this started roughly three weeks ago.
Any local changes? Or any possibility that the applications themselves are interlocking in a way that prevents simultaneous execution, or that they have perhaps become I/O bound rather than CPU bound?

Or maybe there's a new guest on your host that is also bursting as much as possible, in which case, best case, you two end up sharing the cores. I don't know how the Xen host allocates cores, but depending on Linode size there have to be some complete overlaps, or at least cases where there is just never an opportunity to have all 4 cores.

– David

PS: I'm not sure I'd consider it fair to say Linode is charging you for four cores (unless this is something like a Linode 16GB). The charge is for a pro-rated portion of the host per number of guests, so on a 512, for example, you are only guaranteed about 20% of a single core, with the burst to 4 cores purely on an availability basis. True, bursting usually works fine and is part of what makes the service work as well as it does. But being unable to reach 4 cores isn't depriving you of something you paid for, again unless you're on one of the largest plans with very few guests.

@db3l:

Any local changes? Or any possibility that the applications themselves are interlocking in a way that prevents simultaneous execution, or that they have perhaps become I/O bound rather than CPU bound?

Then load would not have been over four in top, and it was.

I will open a support ticket to ask if my Linode is being CPU limited in any way and share the answer.

James

@zunzun:

I will open a support ticket to ask if my Linode is being CPU limited in any way and share the answer.

Support Ticket Question: Is my Linode being CPU-limited in any way?

Support Ticket Answer: Your CPU is not limited in any way at this time.

James

@zunzun:

Support Ticket Answer: Your CPU is not limited in any way at this time.

not limited in any way at this time.

at this time

ಠ_ಠ

(NOTE: I know zunzun's workload, and Linode, and am mostly picking on the very careful wording of that. This is not an accusation that CPU was being limited in any way at any other time. Just… amusing.)

@zunzun:

@db3l:

Any local changes? Or any possibility that the applications themselves are interlocking in a way that prevents simultaneous execution, or that they have perhaps become I/O bound rather than CPU bound?

Then load would not have been over four in top, and it was.

I'm pretty sure Linux counts tasks in uninterruptible sleep (typically waiting on I/O) toward the load average, so yes, load could be over 4 without the system being CPU bound. But if your iowait% was also very low, then I'd probably agree something else was going on.

– David
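The iowait point above can be checked directly. As a minimal, Linux-specific sketch (the helper name cpu_iowait_percent is my own, not from the thread): sample the counters in /proc/stat twice and take the delta to get iowait's share of total CPU time over the interval.

```python
import time

def cpu_iowait_percent(interval=1.0):
    """Sample /proc/stat twice and return iowait as a percentage
    of total CPU time over the interval (Linux-specific)."""
    def snapshot():
        with open('/proc/stat') as f:
            # aggregate line: cpu user nice system idle iowait irq softirq ...
            fields = list(map(int, f.readline().split()[1:]))
        return fields[4], sum(fields)  # iowait is the 5th counter
    io1, total1 = snapshot()
    time.sleep(interval)
    io2, total2 = snapshot()
    return 100.0 * (io2 - io1) / max(total2 - total1, 1)

print('%.1f%% iowait' % cpu_iowait_percent())
```

If this reports near-zero iowait while load sits above 4, the load is coming from runnable tasks rather than I/O, which would support James's reading.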

Shouldn't this be simple to benchmark? Take a program that will spike the CPU like the pi.py spigot generator. Establish a baseline for 100% CPU:

python pi.py > baseline.out & sleep 300; kill %1

Then run four instances at once:

for f in 1 2 3 4; do python pi.py > $f.out & done; sleep 300; kill %1 %2 %3 %4

ls -l *.out will show how many digits of pi each process calculated, which should then tell you whether each of the four instances had access to 100% CPU or if they were somehow limited. Of course they won't be exactly equal to the baseline, but if you're being limited to 200% the difference should be obvious. (This assumes that any limiting is applied as an aggregate across all CPUs, and is not implemented by throttling each individual CPU to 50%.)
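The pi.py referenced above isn't shown in the thread; as a stand-in, here is a sketch of a spigot generator using Gibbons' unbounded spigot algorithm. It is single-threaded and purely CPU bound, so the number of digits each process produces in a fixed time is a rough proxy for the CPU time it received.

```python
def pi_digits():
    """Yield decimal digits of pi using Gibbons' unbounded spigot algorithm."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            # next digit is settled; emit it and shift the state
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            # not enough precision yet; consume another term of the series
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

gen = pi_digits()
print([next(gen) for _ in range(10)])  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

Each additional digit costs more big-integer arithmetic, so a run never finishes on its own, which suits the kill-after-a-fixed-time benchmark described above.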

@Vance:

Shouldn't this be simple to benchmark?

You are correct, direct CPU benchmarking is simple. I will do some benchmarking today, and also try Arch Linux in addition to the current Ubuntu.

James

It's the host. I created a new Ubuntu instance, and with "host load is low" shown for both servers I ran the same CPU-intensive benchmark on each. The existing server ran the benchmark in half an hour; the new instance took 15 minutes.

I will move my site over to the new instance this afternoon.

James

As usual the Linode staff speedily took care of the problem, migrating the Linode running my somewhat CPU-intensive web site to a new host. The new host completed my CPU load test in just under 10 minutes, roughly 3 times faster than the old host.

James

After the migration, the Linode Manager CPU graphs are back to normal.

James

Further testing shows that performance on the new host is excellent.

James

Another interesting benefit of today's host move: over the past few weeks the munin load graphs never showed a minimum value of less than about 0.75, whereas before that period I could see a minimum graphed value of zero. On the new host, the minimum graphed value is back to zero when the server is not busy.

James
