Broadwell-EP hosts, any specs?

I just did a couple of system resizes last night and found that the newest hosts seem to be using some of the very latest Broadwell-EP processors, namely Xeon E5-2697 v4 (18C/36T @ 2.3 GHz). These processors have slower clocks than the previous-generation Haswell-EP hosts (Xeon E5-2680 v3, 12C/24T @ 2.5 GHz), which might hurt single-threaded application performance a bit, but I suppose this is made up for by lower CPU contention and lower cost per instance (this is probably why Linode is able to offer 2 GB for $10 a month).

Perhaps the most exciting thing about this is that we should be able to use new instructions like TSX-NI on these new servers. Is this functionality enabled on these hosts, and should I rely on the availability of these instructions for my applications?

More significantly, are there any specs on these new hosts? What about the typical number of Linodes run on each host at common sizes (2 GB, 4 GB, 8 GB)? I would be very interested to see what Linode's latest hardware looks like and how it's being used.

Draco

---

Edit: I can see that ADX and TSX-NI (hle and rtm flags) are enabled in /proc/cpuinfo. Any official word on new instruction availability, though?
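
For anyone who wants to check programmatically rather than eyeballing /proc/cpuinfo, here's a minimal C sketch (assuming GCC or Clang with `<cpuid.h>`) that reads CPUID leaf 7 directly; these are the same bits the kernel reports as hle, rtm, and adx:

```c
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 7, subleaf 0: structured extended feature flags */
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        return 1;

    printf("HLE: %s\n", (ebx & (1u << 4))  ? "yes" : "no");
    printf("RTM: %s\n", (ebx & (1u << 11)) ? "yes" : "no");
    printf("ADX: %s\n", (ebx & (1u << 19)) ? "yes" : "no");
    return 0;
}
```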

5 Replies

ooh interesting E5-2697v4!

@bwDraco:

More significantly, are there any specs on these new hosts? What about the typical number of Linodes run on each host at common sizes (2 GB, 4 GB, 8 GB)? I would be very interested to see what Linode's latest hardware looks like and how it's being used.

Linode does not release this kind of information, and I doubt they ever will.

@bwDraco:

I can see that ADX and TSX-NI are enabled in /proc/cpuinfo. Any official word on new instruction availability, though?

It's unlikely that Linode will make any commitment, because 1) not everybody is on a 2697v4 host, and 2) the instructions may need to be disabled at some point for security reasons. For example, AVX was disabled in Xen for a long time due to issues with the registers not being saved properly during context switches. My recommendation would be to verify that the feature is available at program start, use it if it is, and have a fallback if it isn't. Make sure you do the full feature test recommended by Intel. (I say this because AVX's feature test has two parts, and many programs only did the first part, which passed even when Xen had disabled AVX, because Xen only caused the second part to fail; this went on to cause loads of problems.) I realize that TSX is a bit different from AVX, but stuff does happen, like TSX being broken in the entire Haswell line and having to be disabled in a microcode update.
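
To make the "full feature test" concrete, here's a minimal C sketch of the two-part AVX check (assuming GCC or Clang with `<cpuid.h>` and an assembler that understands xgetbv): CPUID says the CPU supports AVX and OSXSAVE, and XGETBV then confirms that the OS or hypervisor actually saves the XMM and YMM state. Skipping the second part is exactly the mistake described above.

```c
#include <cpuid.h>
#include <stdint.h>

/* Part 1: CPUID reports AVX and OSXSAVE support.
 * Part 2: XGETBV confirms the OS/hypervisor enabled saving of
 *         XMM and YMM state on context switches.               */
static int avx_usable(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 0;

    int osxsave = (ecx >> 27) & 1;   /* OS exposes XGETBV */
    int avx     = (ecx >> 28) & 1;   /* CPU supports AVX  */
    if (!osxsave || !avx)
        return 0;

    /* XGETBV with ECX=0 reads XCR0; bits 1 and 2 (SSE and AVX
     * state) must both be enabled by the OS.                   */
    uint32_t xcr0_lo, xcr0_hi;
    __asm__ volatile ("xgetbv" : "=a"(xcr0_lo), "=d"(xcr0_hi) : "c"(0));
    return (xcr0_lo & 0x6) == 0x6;
}
```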

@dwfreed:

@bwDraco:

More significantly, are there any specs on these new hosts? What about the typical number of Linodes run on each host at common sizes (2 GB, 4 GB, 8 GB)? I would be very interested to see what Linode's latest hardware looks like and how it's being used.

Linode does not release this kind of information, and I doubt they ever will.

Well, many years back, on the old Xen servers with 8 shared cores per instance, Linode said that they were running about 40 Linode 1 GB instances on a single host. What I'm most curious about is the level of CPU contention typical of these new hosts and therefore how predictable CPU performance is. Some of the biggest players in the cloud space tend to guarantee that their cores can deliver full performance at all times except possibly for the low end (e.g. "shared-core" instances). I'm wondering whether that's improved with the new hosts, which have significantly more cores than before. The rest of the technical details are not a big deal.

When I first signed up for Linode, the servers were 2S/16C/32T; they're apparently now 2S/36C/72T. With these new servers, it certainly looks like they could pack 80 to 100 2 GB Linodes onto a single host without causing an excessive amount of contention.

I recognize the cloud services market is getting more competitive than ever, so Linode keeping the cards to themselves with respect to their host hardware is probably best.

Draco

@bwDraco:

@dwfreed:

@bwDraco:

More significantly, are there any specs on these new hosts? What about the typical number of Linodes run on each host at common sizes (2 GB, 4 GB, 8 GB)? I would be very interested to see what Linode's latest hardware looks like and how it's being used.

Linode does not release this kind of information, and I doubt they ever will.

Well, many years back, on the old Xen servers with 8 shared cores per instance, Linode said that they were running about 40 Linode 1 GB instances on a single host. What I'm most curious about is the level of CPU contention typical of these new hosts and therefore how predictable CPU performance is.

They stopped releasing that information a long long time ago (easily 5 years ago).

@bwDraco:

Some of the biggest players in the cloud space tend to guarantee that their cores can deliver full performance at all times except possibly for the low end (e.g. "shared-core" instances).

You mean like Azure, Google Compute Engine, Softlayer, or Amazon, none of which actually do that? Sure, Amazon has the "ECU" but that's based on a unit of measure that's so old as to be completely useless as an indicator of actual resource availability in modern systems and applications.

Microsoft Azure does not oversubscribe CPUs except for hosts running A0 instances, but may use timeslicing for performance consistency (see https://blogs.technet.microsoft.com/hybridcloudbp/2016/05/26/virtual-machine-cpus-in-azure/ and http://windowsitpro.com/azure/azure-core-oversubscription). Not sure about GCE, but the notion of a "shared-core" instance strongly suggests the same.

With respect to new instructions, it definitely looks best to verify that the instructions are indeed usable before actually using them.
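
For completeness, here's a minimal C sketch of the "detect once, then dispatch" pattern (the work_rtm/work_fallback names are made up for illustration): check CPUID at startup and install the fast path only when the host actually advertises RTM, keeping a plain fallback otherwise.

```c
#include <cpuid.h>
#include <stdio.h>

static void work_rtm(void)      { puts("using the TSX (RTM) path"); }
static void work_fallback(void) { puts("using the lock-based fallback"); }

/* Default to the fallback so the program is safe on any host. */
static void (*work)(void) = work_fallback;

static void select_impl(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 7, subleaf 0, EBX bit 11 = RTM */
    if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx) && (ebx & (1u << 11)))
        work = work_rtm;
}

int main(void)
{
    select_impl();
    work();   /* works regardless of which host generation we land on */
    return 0;
}
```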

Draco
