CPU Activity

Hi all,

I've a question about CPU activity on Linode. On a normal Linux box, if I have a single-threaded application using 100% of the CPU, it is using the whole of one core. To make it go faster, I optimize my software or buy a faster processor.

When there are, say, 40 virtual machines running on one of Linode's servers and I see 100% CPU usage, what does this mean? Is my software actually using 100% of one of the physical server's cores (which I assume is quite fast), or is it using 100% of 1/40 of the server's CPU capacity (which probably isn't that fast)?

Thanks,

S.

15 Replies

You have access to 4 cores, so your graphs go up to 400%. So when your graphs are showing 100% CPU usage, it's 100% of a single core.
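For example, a quick way to confirm that from inside your linode (a rough sketch; os.cpu_count() simply reports the CPUs the guest kernel sees, which on a Linode should be 4):

```python
import os

# Number of virtual CPUs visible to the guest kernel.
vcpus = os.cpu_count()

print(f"Visible virtual CPUs: {vcpus}")
print(f"CPU graph ceiling: {vcpus * 100}%")  # 4 cores -> 400%
```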

Thanks Praefectus,

So I assume that if there are 40 linodes on one server, that's 10 per core; if each of these was trying to get full CPU access, then everyone would get 10%?

So in the case where not everyone is hitting the CPU hard, then others get unrestricted access to it?

S.

@blastStu:

So in the case where not everyone is hitting the CPU hard, then others get unrestricted access to it?

My understanding: Linode's servers currently have 8 physical CPU cores. Each Linode has 4 virtual cores. If your Linode is the only one on a physical server that is using CPU, you effectively have 4 physical cores at your disposal. If two Linodes are trying to use all available CPU, each would effectively have 4 CPU cores available to them (less virtualization overhead), so they would not slow down at all. If four Linodes were trying to use all available CPU, each would effectively get two of the physical server cores at their disposal and so would actually run at half of maximum speed. If eight Linodes were trying to use all available CPU at the same time, each would effectively get one physical server core and actually run at one-fourth of maximum speed.
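The arithmetic behind that can be sketched as follows (just an illustration of the reasoning above, assuming 8 physical cores, 4 virtual cores per Linode, and ignoring virtualization overhead):

```python
PHYSICAL_CORES = 8    # cores in the host server
VCPUS_PER_LINODE = 4  # virtual cores each Linode sees

def effective_cores(busy_linodes: int) -> float:
    """Physical cores each busy Linode effectively gets when all of
    them are pegging their virtual cores at the same time."""
    fair_share = PHYSICAL_CORES / busy_linodes
    # A Linode can never use more than its own 4 virtual cores.
    return min(VCPUS_PER_LINODE, fair_share)

for n in (1, 2, 4, 8):
    cores = effective_cores(n)
    print(f"{n} busy linode(s): ~{cores:g} physical cores each "
          f"({cores / VCPUS_PER_LINODE:.0%} of maximum speed)")
```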

James

@zunzun:

If two Linodes are trying to use all available CPU, each would effectively have 4 CPU cores available to them (less virtualization overhead), so they would not slow down at all.

IINM, it's not quite that perfect. I'm pretty sure I've read here that the virtual-core-to-physical-core mapping is preassigned, not dynamic, so if there are two linodes pegging CPU, the odds are that they'll conflict on at least one core (meaning at least one other core is sitting idle).

That said, if I'm doing the binomial coefficient right (8 choose 4), there are 70 possible combinations of cores to be assigned, so with 40 linodes per box (not sure if that's still the max or not), there needn't be any two linodes with the exact same set of 4 cores. But I'm not sure how Xen and/or Linode does it, i.e., does it just assign cores 1/2/3/4 to half of them and 5/6/7/8 to the other half, or does it distribute them so that no two linodes have the same set of cores?
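The coefficient checks out, for what it's worth (a quick check, nothing Linode-specific):

```python
from math import comb

# Ways to choose 4 physical cores out of 8 for a linode's 4 virtual cores.
print(comb(8, 4))  # 70 -- plenty of distinct sets for 40 linodes per box
```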

@glg:

IINM, it's not quite that perfect. I'm pretty sure I've read here that the virtual-core-to-physical-core mapping is preassigned, not dynamic, so if there are two linodes pegging CPU, the odds are that they'll conflict on at least one core (meaning at least one other core is sitting idle).

That said, if I'm doing the binomial coefficient right (8 choose 4), there are 70 possible combinations of cores to be assigned, so with 40 linodes per box (not sure if that's still the max or not), there needn't be any two linodes with the exact same set of 4 cores. But I'm not sure how Xen and/or Linode does it, i.e., does it just assign cores 1/2/3/4 to half of them and 5/6/7/8 to the other half, or does it distribute them so that no two linodes have the same set of cores?

It depends on the scheduler and the admin. You can pin VCPUs to real CPUs if you want to, and that might be useful in certain specific enterprise environments where it matters, but it seems silly in a hosting environment like Linode.

The credit-based scheduler, which I believe is slated to eventually become the only scheduler supported, does automatic load balancing of VCPUs onto real CPUs if you don't pin. As for whether the other schedulers support this, or which scheduler Linode uses, no clue.

It's very rare for CPU power to be an issue on a linode, though.

@Guspaz:

It's very rare for CPU power to be an issue on a linode, though.

cough http://zunzun.com cough

James

Thanks for all the responses. Our stuff is actually quite CPU heavy at times, requiring some funky geometry analysis.

Can I clarify, then: if top in my OS reports 100% usage, is this 100% of a real CPU (less virtualisation overheads)? I guess what I'm really asking is: can I get a faster processor by upgrading my Linode to a bigger size, or will I just get more memory?

S.

@blastStu:

Can I clarify, then: if top in my OS reports 100% usage, is this 100% of a real CPU (less virtualisation overheads)?

Not necessarily - your linode says 100% CPU when it is using all the cycles it can get. This may or may not be equal to the capacity of a physical core, depending on the loading of the host.
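One way to see whether the host is what's holding you back is to watch steal time, which the kernel reports in /proc/stat (a minimal sketch; a persistently high steal percentage means the hypervisor is giving your cycles to other guests):

```python
import time

def cpu_times():
    """Read the aggregate 'cpu' line from /proc/stat.

    Field order: user nice system idle iowait irq softirq steal ...
    """
    with open("/proc/stat") as f:
        return [int(x) for x in f.readline().split()[1:]]

before = cpu_times()
time.sleep(5)
after = cpu_times()

deltas = [b - a for a, b in zip(before, after)]
total = sum(deltas[:8])  # ignore guest/guest_nice, already counted in user
steal = deltas[7]        # 8th field: cycles taken away by the hypervisor

print(f"Steal time over the last 5 seconds: {100.0 * steal / total:.1f}%")
```

(top reports the same number as "st" in its CPU summary line.)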

@blastStu:

Can I get a faster processor by upgrading my Linode to a bigger size, or will I just get more memory?

A bigger Linode gets you the same four processor cores, but you share them with fewer other VPSs, effectively giving you faster processors.

@zunzun:

@Guspaz:

It's very rare for CPU power to be an issue on a linode, though.

cough http://zunzun.com cough

James
I think we would all agree that you are a very rare case, in more ways than one… ;)

@blastStu:

Can I clarify, then: if top in my OS reports 100% usage, is this 100% of a real CPU (less virtualisation overheads)? I guess what I'm really asking is: can I get a faster processor by upgrading my Linode to a bigger size, or will I just get more memory?

If it is on a scale of 0-400%, you're using 100% of one real CPU (and probably need to modify your application to be multi-threaded). If it's on a scale of 0-100%, you're using 100% of each of four real CPUs (and probably need to modify your application to be multi-server). Neither can be mitigated by upgrading to a larger Linode plan.
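For the first case, a rough illustration of spreading a CPU-bound job across the four virtual cores with Python's multiprocessing module (the work function is just a stand-in for whatever your application actually computes):

```python
from multiprocessing import Pool

def crunch(chunk):
    # Stand-in for the real CPU-bound work (e.g. geometry analysis).
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Split the input into 4 chunks, one per virtual core.
    chunks = [data[i::4] for i in range(4)]

    with Pool(processes=4) as pool:
        results = pool.map(crunch, chunks)

    print(sum(results))
```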

The problem with being CPU-bound is that individual CPUs aren't really getting much faster, and there are physical and software limits to how many CPU cores you can fit in a single system. So, if you're maxing out CPU and planning to grow, you're gonna have to scale out.

@hoopycat:

The problem with being CPU-bound is that individual CPUs aren't really getting much faster, and there are physical and software limits to how many CPU cores you can fit in a single system. So, if you're maxing out CPU and planning to grow, you're gonna have to scale out.

I think that if you compare a dual-core Conroe and a dual-core Sandy Bridge running at the same clock speed, there's a huge difference in performance. I'm not sure where you get the idea that individual CPUs aren't getting any faster, considering that we've been at quad core for a few generations now and performance has improved enormously during that time.

Multi-core is SO five minutes ago:

http://www.theregister.co.uk/2011/09/15/intelrattnermic_coprocessor/

James

@zunzun:

Multi-core is SO five minutes ago:

http://www.theregister.co.uk/2011/09/15/intelrattnermic_coprocessor/

James

I'm not sure Intel has conclusively demonstrated that their MIC initiative is any better than throwing a few GeForce or Tesla cards at a problem. I suspect a GeForce or Tesla solution is probably a lot cheaper.

@Guspaz:

I'm not sure Intel has conclusively demonstrated that their MIC initiative is any better than throwing a few GeForce or Tesla cards at a problem. I suspect a GeForce or Tesla solution is probably a lot cheaper.

Excellent point.

James
