Anyone already moved to new E5-2670 hardware?

Linode said in the next few weeks, and this was a few weeks ago.

45 Replies

The new 1GB Linode that I provisioned yesterday in London is of the new hardware type.

How can you tell what hardware type a Linode is on?

Yes, I'm on one at Newark.

Migrated five Linodes and none are on the new hardware, oh well. I suppose I'll wait until it's my turn.

I opened a ticket yesterday and asked about the new hardware, and they told me they will have new machines within a week or two, so you should check it out then.

Btw, I'm in London DC.

````
grep 'model name' /proc/cpuinfo

model name : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
model name : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
model name : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
model name : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
model name : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
model name : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
model name : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
model name : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
````

Shweet!

EDIT:

Also:

````
grep bogo /proc/cpuinfo | tail -1

bogomips : 5200.17
````

sednet: What DC was that in?

I also asked in a ticket this week and they told me I could request migration to new hardware in a couple weeks.

It's odd that they didn't align the migrations. I had to migrate to get the extra RAM, but then I'll have to migrate again in a few weeks to get the CPU. Should have aligned the two upgrades.

I've got the new power!

@Guspaz:

It's odd that they didn't align the migrations. I had to migrate to get the extra RAM, but then I'll have to migrate again in a few weeks to get the CPU. Should have aligned the two upgrades.
Maybe it's part of optimizing rack usage or something. Rather than installing enough of the new hardware for the entire user base at once, they have just enough to free up enough of the older hardware slots to efficiently reuse the rack and colo space. In the meantime, other Linodes are consolidated on some of the older hardware as sort of a staging area until the remaining swaps can be done incrementally. Might help minimize wasted resources (in terms of spare hardware/slots) during the conversion.

Since presumably they didn't bother doubling the memory in the older hosts, it may also mean that for the time being if you landed on older hardware, you actually have fewer sibling guests than usual, so that might on balance improve performance.

– David

The node I migrated moved from L5520 to L5630. An upgrade, but not the one I was hoping for since all I did was move from hardware that is 4 years old to some that is 3 years old.

EDIT 1: Never mind, ask and ye shall receive. Migrating again right now.

EDIT 2: That's a fast server. Page load times are seriously down to 50% of what they were on the old hardware.

@jasonlitka:

EDIT 1: Never mind, ask and ye shall receive. Migrating again right now.
I asked the same thing two days ago and was told to wait an undisclosed amount of time until a server becomes available. Still waiting. This is in Newark. Where is your server?

@neo:

@jasonlitka:

EDIT 1: Never mind, ask and ye shall receive. Migrating again right now.
I asked the same thing two days ago and was told to wait an undisclosed amount of time until a server becomes available. Still waiting. This is in Newark. Where is your server?

Maybe they like me more than you. :)

Seriously though, Dallas. I opened a ticket on announcement day volunteering to go first. When I did the memory upgrade I updated the ticket asking for an ETA because I was surprised that they weren't migrating people to the new hardware. Support activated it right then.

@DrJ:

sednet: What DC was that in?

That one was in London. I upgraded another two in London and got the old hardware, not that the older hardware is slow. CPU speed was never my problem at Linode.

I've got a couple in Dallas.

Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz, 8 cores

New CPU and more ram, great start to the weekend!

I just want to know what's up with Fremont :(

@tubaguy50035:

I've got a couple in Dallas.
I'm in the Dallas DC and took the free upgrade this afternoon (Linode 512 to Linode 1024). Still on L5520 hardware for mine.

Upgraded in Dallas and got new hardware. But it wasn't quite the hardware I expected: "Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz".

@OverlordQ:

I just want to know what's up with Fremont :(

Yeah, what exactly is up with Fremont? No follow-up post on the status so far, either.

All you people, I'm on my awesome E5-2670 host in Tokyo :D

We're on E5-2670s in Newark. We migrated early Sunday morning.

Just upgraded… the partition table is hosed and I can't boot, aaargh. Not sure if restoring the backup will work since it's all been migrated now. :/ My node is/was in Newark…

@Stu:

Just upgraded… the partition table is hosed and I can't boot, aaargh. Not sure if restoring the backup will work since it's all been migrated now. :/ My node is/was in Newark…
Stu, below is part of a support email I got when asking questions about the RAM upgrade:
> The 'old' disk images will be kept on the previous host until they are securely removed by a cron job. If there is a problem with your migration, please contact us right away. We will respond quickly to ensure that any problems which do potentially come up get fixed.

Edit: I can't spell.

That's good, I hope. I logged a ticket immediately when I booted in with Finnix and saw the errors in dmesg.

More details on Fremont plz!

I think they're a bit busy right now: http://blog.linode.com/2013/04/16/security-incident-update/

I'm sure they'll get to it when they have a chance (my VM is located in Fremont, so I'm waiting to hear as well, but…)

But… Linodes don't have partition tables.

Edit: Presumably Stu had some sort of other disk-related problem. So my post is probably worthless pedantry. I should check if there's a delete button. :P

Mark from support said I hit a known Xen bug, and the workaround is to boot with the 64-bit kernel instead of the 32-bit kernel.

So I guess if your node does not come up, try the 64-bit kernel!
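
If you're not sure which kernel you're currently booted into, you can check from inside the Linode (this just reports the running kernel's architecture):

````
# x86_64 means a 64-bit kernel; i686 means 32-bit.
uname -m
````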

I hit that bug with a few servers. It's a pain since the servers used pv_grub, so they had to be migrated again to another host that wasn't affected by the bug. Hopefully that gets resolved soon.

@Stu:

Mark from support said I hit a known Xen bug, and the workaround is to boot with the 64-bit kernel instead of the 32-bit kernel.

So I guess if your node does not come up, try the 64-bit kernel!
I hope this means you're up and running - without further incident - with your new RAM on one of the new servers.

I've just migrated my Linode from 1024 to 2048 and I've landed on new hardware. Yay! (London DC)

I'm in Newark and still running on L5520 :(

I'm in the London DC, and different Linodes show different hardware:

````
cat /proc/cpuinfo | grep 'model name'

model name : Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz
````

Another one:

````
cat /proc/cpuinfo | grep 'model name'

model name : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
````

Mixed "new" hardware? (2630L vs. 2670)

I've gone through several hosts in London and they seem to be an even split between the 2630L and the 2670. The 2670s seem to have a measurable performance edge, but that could just be the load on the host.

Did any of you who have upgraded do a test of disk I/O?

My DB is really down on its knees during peak hours, and since I've made no changes since before the upgrade (and the traffic is somewhat "constant"), all I can think of is that the disks on these machines are considerably worse, performance-wise, than the hardware used before the upgrade.

I'm sort of out of ideas here, other than checking with you guys whether you've had a similar experience.
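
If anyone wants to compare numbers, a crude sequential check along these lines would do (the file path and sizes are arbitrary, and this is no substitute for a proper benchmark):

````
# Rough sequential write test; oflag=direct bypasses the page cache.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct

# Rough sequential read test of the same file.
dd if=/tmp/ddtest of=/dev/null bs=1M iflag=direct

rm /tmp/ddtest
````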

2 of my 4 migrations have been problematic, with poor disk I/O. I contacted support and they migrated the servers to another host; the hosts I was on had users with heavy disk usage.

Ugh… Just got an email about my new E5-2670 box needing downtime for maintenance on 5/8…

@jasonlitka:

Ugh… Just got an email about my new E5-2670 box needing downtime for maintenance on 5/8…
Me too. Are you on london588? Must be something pretty important that's broken to require 45 minutes of downtime.

You could ask support to migrate you to a different host; that would most likely mean less downtime, depending on the size of your disk.

I've got them for half of my nodes(*) across several accounts, in both the Newark and Dallas data centers, and on both E5-2670 and E5-2630L nodes (but not all such nodes). For a pair of more critical machines I did request a migration, so I could control the downtime.

It actually takes a host quite a while to fully recover from a reboot (guest restarts are spread over time). So the actual maintenance might be relatively quick, or even just a software update or configuration change that requires a restart, but most of the window is then reserved for recovery from the reboot.

– David

(*) Edit: I forgot a node, so it's half, not more than half. It's only 5 of 10 nodes (9 of which use new hosts), but it may indicate that it's a broad-based adjustment to many of the newer hosts.

More than half of your nodes? I feel fortunate! I've only had one…so far.

@obs:

@jasonlitka:

Ugh… Just got an email about my new E5-2670 box needing downtime for maintenance on 5/8…
Me too. Are you on london588? Must be something pretty important that's broken to require 45 minutes of downtime.

You could ask support to migrate you to a different host; that would most likely mean less downtime, depending on the size of your disk.

No, different DC.

I'm guessing that this has something to do with the poor disk performance a lot of people have been reporting. Probably a BIOS, RAID, or drive firmware update. Something seems to be causing the disks to really suck under high queue depth.
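
For what it's worth, a high-queue-depth load can be reproduced with fio; the job below is only an illustration (the file name, size, and runtime are arbitrary):

````
# Random 4K reads at queue depth 32, bypassing the page cache.
fio --name=qd32-randread --filename=/tmp/fiotest --size=1G \
    --rw=randread --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=30 --time_based
````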

It could also be something to do with the Xen bug where some hosts can't boot 32-bit kernels in certain areas of memory. I hit that one a lot. The node that's scheduled for migration for me keeps most of its data in memory, so I wouldn't notice a disk slowdown on that one.

@obs:

@jasonlitka:

Ugh… Just got an email about my new E5-2670 box needing downtime for maintenance on 5/8…
Me too. Are you on london588? Must be something pretty important that's broken to require 45 minutes of downtime.

You could ask support to migrate you to a different host; that would most likely mean less downtime, depending on the size of your disk.

I got a ticket for 45 minutes of downtime on london577. I also got an updated version of Xen at home, so maybe Linode is just updating their Xen.
