Difference Between Bare Metal And Dedicated

Folks:

I hope this is an appropriate question for here.

I currently have a dedicated four-CPU server (Ubuntu 16.04 LTS, Dedicated 8GB plan: 4 CPUs, 160GB storage, 8GB RAM) in the Newark Linode data center; linode11513095.

I have learned from a source, not here at Linode, that if I want the lowest latency possible, I would need to get what is called a bare metal server. I looked up bare metal on Google and came up with a whole slew of answers that don't seem consistent.

As far as I can tell here, I don't see any offering from Linode except for 'dedicated', which is what I have with my machine in Newark.

My goal is low latency for an application called Jamulus, which is used for live music jamming in as near real time as possible. There is a group in the UK who insist that the only way to get good latency is something they call bare metal from a vendor called OVH, whoever they are. The person in charge tells me that the bare metal server he's using does not even have a hypervisor on it. He has an entire physical machine to himself.

Now, I wonder, for my instance in Newark, do I still have the hypervisor between me and the hardware? If so, is there much advantage in terms of latency if I have no hypervisor at all?

I hope that I am making sense with my question.

Thank you for any help you can give me.

Mark Allyn
Bellingham, Washington

17 Replies

Hi @maallyn,

With a Dedicated Linode, your Linode's virtual CPUs (vCPUs) are scheduled 1:1 with physical cores on the host. No other Linode is scheduled to use the cores allocated to your Linode.

On Shared hosts, by comparison, vCPUs can be scheduled to run on the same cores as other Linodes' vCPUs, leaving the opportunity for contention on the same physical core, especially on heavily loaded hosts.

My understanding of Bare Metal is that it is a dedicated server but managed with the flexibility of the cloud - i.e. there is no hypervisor, but you can still power on/off/cancel servers, pay by the hour (or minute), rebuild just like you can with a virtual server.

However, I have seen other providers sell "bare metal" servers as a single virtual server running on a physical host, with the entire host's resources allocated to that one virtual server - so there is still a hypervisor layer.

Linode is running a "Bare Metal" beta soon; if you're interested in trying out a bare metal server to evaluate the performance difference, feel free to sign up for the Green Light beta program. Until this beta is under way, I guess we won't know what the Bare Metal servers will look like or how they will perform.

Thank you for the clarification. I guess the question for me is what difference the presence of the hypervisor layer makes.

Do you (or anyone) know how much the hypervisor imposes on the performance of the machine?

The application that I am looking at is primarily CPU/memory/network bound, with very little use of the disk. There is virtually no storage use except for logging of connections (one access per client session).

Mark

Hello!

Our upcoming bare metal offering will be just that: bare metal. There's no hypervisor, it's not a single VM running on the box: you get your operating system installed on the machine directly, and you have control over the entire rig.

They'll be listed as Linodes, you can boot, shut down, redeploy, add and remove them, you get console access and so on -- all with the exact same interfaces (gui, cli, api) and endpoints that you use to manage Linodes today. But instead of it being a virtual machine, it's a physical machine.

Similar to the Linode VM plans, we hope to offer different tiers of machine types, and then within each tier there will be some different sizes to choose from as well.

Hope that helps,
-Chris

Thanks for the clarification, Chris @caker

I’m signed up for the beta and really looking forward to it :)

Now, I am curious; I don't know if this is the right forum, but what is the effect of the hypervisor on performance? Basically, what is the impact on response time (latency) on a machine (with my instance running alone) with a hypervisor present versus without one? I am trying to determine whether it's worthwhile to seek a machine without the hypervisor.

I hope I am making sense, but I am a newbie with regards to hypervisors and just how much resource they occupy for themselves.

Mark

@maallyn

It’s an interesting question to which I don’t have any definitive answers I’m afraid.

I think this very much depends on your workload and where its bottlenecks are likely to be (CPU, disk, network). Clearly there is a need and demand for bare metal, otherwise Linode wouldn’t be doing this now.

Some years ago Microsoft did not recommend running SQL Server and Exchange virtualised; however, this did change (I want to say around 2014) and it’s now much more accepted, so for me this suggests hypervisor overhead isn’t so much of an issue with modern OSs and processors.

I think when it comes to the cloud, the biggest threat to performance more so is noisy neighbours - which Linode have addressed in part with the dedicated Linodes you and I are using. However that only addresses the CPU; there will still be contention on other resources such as the host’s hard drives, NIC, etc. which all Linodes on that host will still share.

If all Linodes on a host are 4-CPU dedicated, then how many would be sharing the host and its peripherals?

This would depend on the host server and its hardware capacity, and I can only guess as I don’t know the internals of Linode.

For example:

My Dedicated Linode is running on an AMD EPYC 7601 host. This CPU has 32 cores/64 threads. Let’s assume the host is dual CPU; that makes 64 cores/128 threads total.

I’m not sure if one vCPU is a full core or a thread, but let’s go with a thread.

That would give me up to 32 4-core Linodes (32 x 4 = 128 threads) although I would hope they reserve some cores for the host OS, so maybe only 30 or 31 to leave 8 or 4 threads for the host.

I am guessing though that Linode mix and match plans up to the capacity of the hardware, so in reality this number could be more or less.

Also this is just max capacity; day to day usage could vary as Linodes are spun up and down and shuffled around the data centre.

I’m sure there’s a lot more calculations to it than my simple maths.
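To make that arithmetic concrete, here is a quick sketch of the same estimate in Python. All of the inputs (dual EPYC 7601 sockets, one vCPU per hardware thread, 8 threads reserved for the host) are my own guesses from the reasoning above, not anything Linode has published:

```python
# Rough host-capacity estimate using the assumptions above: a dual-socket
# AMD EPYC 7601 host (32 cores / 64 threads per socket), one vCPU pinned
# per hardware thread, and a handful of threads reserved for the host OS.
# These are guesses for illustration, not Linode's actual provisioning.

THREADS_PER_SOCKET = 64        # 32 cores x 2 SMT threads
SOCKETS = 2
RESERVED_FOR_HOST = 8          # assumed headroom for the host / hypervisor
VCPUS_PER_LINODE = 4           # a 4-vCPU dedicated plan

total_threads = THREADS_PER_SOCKET * SOCKETS           # 128
usable_threads = total_threads - RESERVED_FOR_HOST     # 120
max_linodes = usable_threads // VCPUS_PER_LINODE       # 30

print(f"total threads: {total_threads}, usable: {usable_threads}, "
      f"max 4-vCPU dedicated Linodes (estimate): {max_linodes}")
```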

Thank you, Andy. This does help.

Now, for Chris and others at Linode support, this is a bit of a disturbing thing I heard on a forum about dedicated and bare metal. I don't know if this is vendor specific or general in the industry. A bare metal user was complaining that if his bare metal server does not fully come up following a reboot, his only option would be to file a ticket for the data center staff to manually power cycle the machine. I wonder, with Linode's upcoming Bare Metal offering: would there be a means of powering up and down, hitting reset, and some sort of direct console access, so that I would not have to file a ticket every time my server fails to come up into full multi-user operation?

A bare metal user was complaining that if his bare metal server does not fully come up following a reboot, his only option would be to file a ticket for the data center staff to manually power cycle the machine.

I suspect this is because they only have SSH (if it’s a Linux machine, RDP if it’s Windows) so if the machine doesn’t come up to a state where it can start and operate these services, you have no access to it.

Chris has said you will get the ability to boot and shut down (power cycle) and console access, so looks like this won’t be an issue.

They'll be listed as Linodes, you can boot, shut down, redeploy, add and remove them, you get console access and so on -- all with the exact same interfaces (gui, cli, api) and endpoints that you use to manage Linodes today.

@maallyn --

While you might get some reduction in network latency with a bare-metal server, network latency is a complex thing and many factors can affect it…not the least of which is your location and the route that traffic takes between source and destination (especially in the US).

Network latency is going to be most dependent on the ISPs involved that route your traffic from Newark to your home (presumably that's the endpoint). That's a cross-country round-trip that could involve bouncing your traffic off a satellite. The distance alone is going to negate any decrease in latency that you may get from the lack of a hypervisor.
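To put rough numbers on the distance argument, here's a back-of-the-envelope sketch. The inputs are my assumptions (roughly 3,900 km between Bellingham and Newark, light in fibre at about 200,000 km/s); real routes zig-zag, so actual RTT will be higher than this floor:

```python
# Physical floor on round-trip time imposed by distance alone.
# Assumptions: ~3,900 km great-circle distance Bellingham <-> Newark,
# and light in fibre at roughly 200,000 km/s (about 2/3 of c).

DISTANCE_KM = 3_900
FIBRE_SPEED_KM_PER_S = 200_000

one_way_ms = DISTANCE_KM / FIBRE_SPEED_KM_PER_S * 1000   # ~19.5 ms
round_trip_ms = 2 * one_way_ms                           # ~39 ms

print(f"one-way floor   : {one_way_ms:.1f} ms")
print(f"round-trip floor: {round_trip_ms:.1f} ms")
```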

Before you take the plunge on shelling out for a bare-metal server in New Jersey, I would make careful measurements of your network latency using traceroute(1), qperf(1), tcpdump(1), wireshark(1), etc.
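If you'd rather script the measurement than eyeball tool output, here's a minimal Python sketch that times repeated TCP connects to your Linode as a rough RTT proxy. The host address is a placeholder and the port assumes SSH is reachable; since Jamulus runs over UDP, treat this only as a sanity check alongside traceroute/mtr/qperf:

```python
# Time several TCP connects to a port you know is open on your Linode
# (SSH on 22 here) and report the spread. The handshake takes roughly one
# round trip, so this is only a crude RTT proxy; Jamulus traffic is UDP.

import socket
import statistics
import time

HOST = "203.0.113.10"   # placeholder: replace with your Linode's IP
PORT = 22               # assumes SSH is reachable from your home network
SAMPLES = 10

rtts = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    try:
        with socket.create_connection((HOST, PORT), timeout=3):
            pass                                  # connected; close right away
    except OSError as exc:
        print(f"connect failed: {exc}")
        continue
    rtts.append((time.perf_counter() - start) * 1000)   # in milliseconds
    time.sleep(0.2)                               # be polite to the host

if rtts:
    print(f"samples={len(rtts)}  min={min(rtts):.1f} ms  "
          f"median={statistics.median(rtts):.1f} ms  max={max(rtts):.1f} ms")
```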


OVH is a big ISP all over Europe (it's also a favorite haunt for Russian 'botnet operators). You don't know anything about where your friend in England lives in relation to the OVH datacenter; what the network route is; which ISPs are along that route; what each of those ISPs' routing policy is for network traffic such as yours; how many bandwidth-sucking hack attacks per minute/hour he experiences; etc. Those are the questions you should be asking.

You should be asking the same questions of your local ISP. Remember, once a packet leaves Linode's datacenter, Linode has no control over where it goes or what happens to it before it gets to you. Thanks to our current so-called president and his lackeys at the FCC (all of whom understand only one thing…money), not all internet traffic is treated equally. Comcast & ATT are notorious for very aggressive congestion control policies, and they give higher priority to their own traffic and to the traffic of customers who are willing to pay big bucks for that preferential treatment.

Not to disrespect your English friend, but his words sound more like the words of his OVH salesman than any kind of empirical evidence (OVH charges more for a bare-metal server than for a VPS or dedicated VPS server…as will Linode, I'm sure).

You will get some decrease in network latency from a bare-metal server for sure…I'm just saying that it's probably not as much as you might think it will be given the distance between the datacenter (New Jersey) and your home (Washington). I'm also saying that whatever decrease you do get is probably not going to be worth the increased cost.

@andysh lives in the UK…perhaps he can comment more about OVH.

-- sw

Steve, Andy, and Chris:

Sorry, but this is going to be a long one . . . .

First of all, thank you. I am on Comcast in Bellingham, Washington. The service I am setting up is called Jamulus, which is for near real time live music jamming and rehearsing. Jamulus is a very light client-server application that depends solely on UDP packets for its data, eschewing the overhead of TCP.

I set up a dedicated server in New Jersey, as that data center has my regular web server, which I changed to dedicated so that it could double as my test server for Jamulus.

I intend this server to be used for stress testing Jamulus for large groups of musicians from all over Europe and the eastern US, which is a larger potential audience than the western US, particularly the Pacific Northwest. I anticipate connections from people on all of the various providers (Comcast, CenturyLink; all of them), so I am hoping that I am doing the best that is reasonable for all of them.

I fully understand what you are saying about network latency and the minimal effect that the hypervisor has on it compared to network layout and/or incompetence on the part of the backbone or last mile providers. I am just going to have to live with that, and based on what you are saying, I will throw away any hope that going bare metal will make a measurable difference in network latency.

Now, what about memory and disk performance? How much does having a hypervisor affect memory and disk performance versus going without a hypervisor?

I am going to take an educated wild guess that this is also going to be minimal compared to what the backbones are going to toss at me. But I am open to your comments.

Now, here is a question. I am not as versed in the backbone architecture as I should be. For me (in Washington state), would my packets go via Comcast to Linode's Fremont data center (closest to me), and then over Linode's backbone to the Newark data center? Or do they stay outside of Linode's infrastructure and ride Comcast's backbone all the way to New Jersey? What if I had another server in Fremont whose only job is to forward packets over Linode's infrastructure to Newark? Would that save latency compared to merely allowing the packets to find their own way from Washington State to Newark?

Now to Chris: yes, I understand what you are saying about having the same console and reboot abilities on your bare metal as on dedicated through the hypervisor, but why am I hearing from the same OVH customer, who was complaining to me about non bare metal, that he now has to file a ticket with OVH in the event that his bare metal server is stuck on a reboot and he cannot get at it? Something is not making sense. I am assuming (and you are welcome to tell me that I am crazy and naive) that Linode, OVH, and other VPS vendors build their internal infrastructure more or less the same way and that rebooting bare metal is done the same way, ticket or no ticket. This is what I am not understanding.

This arrangement for testing and helping the Jamulus software folks resolve some performance and loading problems is temporary; I intend to take it all down once it's done and hope to find a VPS vendor in or near Seattle that offers dedicated servers for my own personal solution. Given that, can I assume that your Fremont data center is the closest to Seattle as far as distance and latency are concerned? I currently have a server at the Vultr data center in Seattle, but I am finding some wild gyrations in latency, and they do not offer dedicated.

Thank you all very much for your help!

Luv you all!

Mark Allyn
Bellingham, Washington

@maallyn --

You write:

Now, here is a question. I am not as versed in the backbone architecture as I should be. For me (in Washington state), would my packets go via Comcast to Linode's Fremont data center (closest to me), and then over Linode's backbone to the Newark data center? Or do they stay outside of Linode's infrastructure and ride Comcast's backbone all the way to New Jersey? What if I had another server in Fremont whose only job is to forward packets over Linode's infrastructure to Newark? Would that save latency compared to merely allowing the packets to find their own way from Washington State to Newark?

I think this is a fundamental flaw in your thinking…

  1. I don't think Linode has a backbone…@caker can confirm this.

  2. Once a packet leaves the Linode data center, you cannot guarantee the path it takes from source to destination…unless you have a complex (and expensive) set of legal agreements with multiple operators to guarantee this (even with Linode…they're not going to give you guaranteed carriage on any backbone they may have or operate). At HP when I was there, the company had a team of about 500 lawyers, contract administrators, accountants and network engineers to monitor every technical/financial aspect of its worldwide network operations. I may be out on a limb here, but I don't think you have that kind of dough lying around…

  3. The U in UDP officially stands for User, but it might as well stand for unreliable.

-- sw

Steve:
Now I know that you're correct. This is a big flaw in my thinking. It came from back in the days when UUNET was both a hosting provider and a network. That, I guess, is now a wrong assumption.

Thanks!

I don't think Linode has a backbone…@caker can confirm this.

I believe they do. They spent some time building it out following their DDoS attacks circa 2015/16.

From https://www.linode.com/global-infrastructure/:

Linode data centers support 11 global markets, enabling secure and reliable networking through our network backbone. Machines can communicate with one another, reducing latency and lowering the friction of scale.

In theory yes, traffic destined for Linode should be routed from your local ISP onto the backbone at the closest point. This is how Cloudflare describes their network - your traffic gets onto their “highway” at the nearest “on-ramp”.

However I guess if your ISP has a backbone that terminates near the destination, they may choose to route the traffic over their own network. In practice you may find that your ISP peers in the same facility as Linode somewhere and the traffic crosses over there.

Like @stevewi says, Linode have no control over how ISPs route traffic outside of their network.

The only way to know for sure is to run an “mtr” or “traceroute” from a machine on your home LAN to a Linode. Most of the time the reverse DNS names of the routers along the way will give you a decent clue as to who’s responsible for them. Failing that, a WHOIS on the IP will give you a rough idea. From that you can figure out whose networks the packets have traversed to get there.
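If you want to script that lookup step, here is a rough Python sketch that runs traceroute and resolves each hop’s reverse DNS itself. It assumes a Unix-like machine with traceroute(1) installed, and the target IP is a placeholder:

```python
# Run traceroute and resolve each hop back to a hostname to get a rough
# idea of whose network it belongs to. Assumes a Unix-like machine with
# traceroute(1) installed; the target below is a placeholder IP.

import re
import socket
import subprocess

TARGET = "203.0.113.10"   # placeholder: replace with your Linode's IP

output = subprocess.run(
    ["traceroute", "-n", TARGET],      # -n: numeric output, we resolve ourselves
    capture_output=True, text=True, timeout=120,
).stdout

for line in output.splitlines()[1:]:   # skip the "traceroute to ..." header
    match = re.search(r"\b(\d{1,3}(?:\.\d{1,3}){3})\b", line)
    if not match:
        print(line.strip())            # e.g. "5  * * *": hop didn't answer
        continue
    ip = match.group(1)
    try:
        name = socket.gethostbyaddr(ip)[0]
    except OSError:
        name = "(no reverse DNS)"
    print(f"{line.split()[0]:>3}  {ip:<16} {name}")
```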

Just be warned that routes change regularly to route around failing ISPs or even downed submarine cables. What is optimal today may not be optimal tomorrow.

As for OVH, I used them prior to finding Linode, so it would have been around 2013/14. Their product offering was good and priced on a budget, but it really showed in their customer service. It felt like they were building new DCs and offering new products but not scaling the support to cope.

Frequently tickets went unanswered for several days, and when they finally were answered, it was only to ask a question, or they hadn’t understood what you’d first asked.

Their uptime in the time I was with them was pretty good (I had a “cloud VPS” which was HA.) I only remember one instance of downtime. After around an hour of my server being down, I called them and the (French-speaking) agent basically just took a message and said they’d look into it. Around 45 minutes later my server was back up, but no communication from them as to what had gone wrong or even that it was working again!

This was all before they had a UK office so things might be different now.

@andysh writes:

In theory yes, traffic destined for Linode should be routed from your local ISP onto the backbone at the closest point. This is how Cloudflare describes their network - your traffic gets onto their “highway” at the nearest “on-ramp”.
 
However I guess if your ISP has a backbone that terminates near the destination, they may choose to route the traffic over their own network. In practice you may find that your ISP peers in the same facility as Linode somewhere and the traffic crosses over there.

The key phrase here is in theory… Assuming Linode does have a backbone, they're not going to leave money on the table by not charging you for a guaranteed level of service…or even for access (your IP usage is already metered…look at the dashboard for your Linodes sometime).

Comcast certainly has a backbone, but the same principle will be in force…plus Comcast has a plethora of mechanisms in place to prioritize traffic in exchange for money.

My last point is that UDP does not guarantee that traffic will even make it to the destination. That's why people call it unreliable. A datagram can be routed into oblivion and you'd never know…
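To see that for yourself, here's a small hypothetical Python echo test: run the server half on a box you control, the client half at home, and any datagram that gets dropped simply never comes back; the sender is never told. The port number and packet count are arbitrary choices of mine:

```python
# Minimal UDP echo probe. Run "python3 udp_probe.py server" on a machine
# you control (opening UDP port 43210 in its firewall), then
# "python3 udp_probe.py <server-ip>" at home. Dropped datagrams simply
# time out; the sender gets no error. Port and counts are arbitrary.

import socket
import statistics
import sys
import time

PORT = 43210              # arbitrary unprivileged port

def server() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", PORT))
    while True:                            # echo every datagram straight back
        data, addr = sock.recvfrom(2048)
        sock.sendto(data, addr)

def client(host: str, count: int = 50) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)                   # treat anything over 1 s as lost
    lost, rtts = 0, []
    for seq in range(count):
        payload = str(seq).encode()
        start = time.perf_counter()
        sock.sendto(payload, (host, PORT))  # fire and forget: no delivery guarantee
        try:
            reply, _ = sock.recvfrom(2048)
            if reply == payload:
                rtts.append((time.perf_counter() - start) * 1000)
        except socket.timeout:
            lost += 1                       # dropped somewhere; UDP won't tell us where
        time.sleep(0.05)
    if rtts:
        print(f"sent={count} lost={lost} median_rtt={statistics.median(rtts):.1f} ms")
    else:
        print(f"sent={count} lost={lost} (no replies at all)")

if __name__ == "__main__":
    if sys.argv[1:] == ["server"]:
        server()
    elif len(sys.argv) == 2:
        client(sys.argv[1])
    else:
        print("usage: udp_probe.py server | udp_probe.py <server-ip>")
```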

-- sw
