Happy 12th birthday to us!
Welp, time keeps on slippin’ into the future, and we find ourselves turning 12 years old today. To celebrate, we’re kicking off the next phase of Linode’s transition from Xen to KVM by making KVM Linodes generally available, starting today.
Better performance, versatility, and faster booting
Using identical hardware, KVM Linodes are much faster than Xen Linodes. For example, in our UnixBench testing a KVM Linode scored 3x better than a Xen Linode, and it completed a kernel compile 28% faster. KVM has much less overhead than Xen, so now you will get the most out of our investment in high-end processors.
KVM Linodes are paravirtualized by default, supporting the Virtio disk and network drivers. However, we now also support fully virtualized guests – which means you can run alternative operating systems like FreeBSD and the other BSDs, Plan 9, or even Windows – using emulated hardware (PIIX IDE and e1000). We’re also working on a graphical console, which should be out in the next few weeks.
In a recent study of VM creation and SSH accessibility times performed by Cloud 66, Linode did well. The average Linode ‘create, boot, and SSH availability’ time was 57 seconds. KVM Linodes boot much faster – we’re seeing them take just a few seconds.
How do I upgrade a Linode from Xen to KVM?
On a Xen Linode’s dashboard, you will see an “Upgrade to KVM” link in the right sidebar. From there, it’s a one-click migration to KVM. Essentially, our KVM upgrade means you get a much faster Linode just by clicking a button.
How do I set my account to default to KVM for new stuff?
In your Account Settings you can set ‘Hypervisor Preference’ to KVM. After that, any new Linodes you create will be KVM.
What will happen to Xen Linodes?
New customers and new Linodes will, by default, still get Xen. Xen will cease being the default in the next few weeks. Eventually we will transition all Xen Linodes over to KVM, however this is likely to take quite a while. Don’t sweat it.
On behalf of the entire Linode team, thank you for the past 12 years and here’s to another 12! Enjoy!
Linode continues to be an excellent service provider. Thanks =)
You’re welcome Alex!
My Linode continues to be a great value, and runs like a champ. Thanks for constantly improving and making the experience better and better.
a) What is the UnixBench result for a KVM Linode 1024?
b) Is osv.io support planned?
c) Is live migration planned? 🙂
@rata: 1) Don’t know, 2) No idea, probably? 3) Nope.
I chose Linode because of Xen.
“Eventually we will transition all Xen Linodes over to KVM” – really hope you are not serious.
Linode is AWESOME!
Why was Xen used in the 1st place?
Ed: we used UML in the first place (2003). Neither Xen nor KVM existed. Then we moved to Xen. Now we’re moving to KVM.
Hi! Great news! One question: How much downtime on upgrade?
A word of warning to customers: the KVM upgrade hosed my linode, and now it doesn’t boot. Be warned, this is not a seamless upgrade. I’m going to go open a trouble ticket.
A followup on my previous comment, it seems that Ubuntu on KVM requires that devtmpfs be enabled on your linode profile. Caker enabled it and now it’s booting fine.
Suggestion: this wasn’t required on Xen (or at least my linode booted fine before the upgrade), so perhaps the KVM upgrade should automatically enable it?
It’s required under Xen, too – however for reasons not yet understood Ubuntu was more tolerant to missing devtmpfs under Xen. We’re going to look at auto-enabling this during the upgrade. Thanks!
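A quick way to confirm the point above before rebooting into KVM is to look for devtmpfs in the mount table. This is a hedged sketch (the helper name is mine; /proc/mounts is the standard location, and Linode’s “Automount devtmpfs” profile setting mentioned elsewhere in this thread controls whether it gets mounted for you):

```shell
# Check whether a mount table lists devtmpfs on /dev.
# has_devtmpfs is a hypothetical helper, not from the post.
has_devtmpfs() {
  # $1 = contents of a mounts file, e.g. "$(cat /proc/mounts)"
  printf '%s\n' "$1" | awk '$2 == "/dev" && $3 == "devtmpfs" { found = 1 } END { exit !found }'
}

# Typical use on a live system:
#   has_devtmpfs "$(cat /proc/mounts)" && echo "devtmpfs is mounted on /dev"
```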
The downtime was 8-9 minutes for a 1GB instance (they have to copy the disk images to another host).
Good news, everyone!
ps: <3 Linode
Thank you !
6 linodes migrated
Just upgraded my 2G Linode.
Hmm… the CPU spec seems to have dropped. Before the upgrade:
model name : Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
cpu MHz : 2800.044
bogomips : 5600.08
After:
model name : Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
cpu MHz : 2499.994
bogomips : 5001.32
However, it’s running much more quickly.
The kernel build time dropped from 573 to 363 seconds.
That’s 1.6x faster.
Many thanks Linode 🙂
Will you also offer FreeBSD images in the future?
OK, how do I check/make sure devtmpfs is enabled on Ubuntu 14.04?
We upgraded one of our VMs that used pvgrub + an OpenVZ kernel (2.6.32) and it didn’t boot. We were left at the “grub>” prompt.
Changing from paravirt to full virt made it work but I’m wondering if there is something we are missing?
Hmm, can I run Windows Server now? I don’t see the option. 🙂
Flawless upgrade and immediate performance gains. Thanks guys.
Just migrated to KVM and CloudLinux OS has stopped working. How can I install my own kernel?
My Debian Jessie instance migrated seamlessly. It only took a few minutes.
Finally! Thank you very much! I’m so gonna upgrade to KVM! Now everything is perfect <3
@Rich Russon: You’re missing the fact that your CPU changed from the E5-2680 v2 to the E5-2680 v3. The old one was a 10-core Ivy Bridge, the new one is a 12-core Haswell.
Will there be (or is there) any option to download host images and upload my own images?
Would suggest proceeding with caution – I attempted a migration this morning, but it failed, and now my Linode won’t boot at all. Support tells me that, unfortunately, a hardware issue occurred at exactly the time I attempted the migration (despite no current hardware issues being shown on https://status.linode.com), and I’m still awaiting an update.
I would have hoped the KVM migration script would perform a full health check so as not to leave customers stuck in limbo.
The concept is great, but thus far I’m disappointed.
Yay! I just migrated my server. It went great. My site feels much snappier. You guys rock! Thanks, Linode. 🙂
Would it be possible to post a list of CPU flags supported on the new KVM VMs, if different from the current Xen VMs (from /proc/cpuinfo)? I’m currently using AES instructions to accelerate IPsec on my Xen instance, but on KVM VMs aes isn’t always enabled.
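For anyone wanting to verify this from inside a guest, here is a sketch (the helper name is mine; /proc/cpuinfo is the standard interface):

```shell
# Check whether the guest CPU advertises a given flag (e.g. "aes").
# has_cpu_flag is a hypothetical helper, not from the comment.
has_cpu_flag() {
  # $1 = flag name, $2 = contents of /proc/cpuinfo
  printf '%s\n' "$2" | grep -Eq "^flags[[:space:]]*:.*[[:space:]]$1([[:space:]]|\$)"
}

# Typical use: has_cpu_flag aes "$(cat /proc/cpuinfo)" && echo "AES-NI available"
```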
Do you support Nested KVM?
Does this mean we can have LVM2 root drives?
Will we be able to use one Linode to build another (i.e., do an AWS-style chroot-build)?
@Micki: Yes. You can download your ‘image’ using Rescue Mode. You can upload your own image the exact same way (in reverse).
@Tim: you were coming from a troubled host, sadly. Looks like you’re sorted. Sorry for the hassle.
@Ricardo: it’s currently disabled. We left nesting off for the time being – but we will revisit this soon.
@Tom: you already could do LVM root. You have all the tools: GRUB, initrd, disk devices you can manage, etc. No?
Will we be able to install an OS straight from an ISO or will we still have to go through the old process to migrate it?
Currently no stock in Japan?
Can you provide a checklist for a seamless migration? What should I double-check? For example, Linode created /etc/fstab with “/dev/xvda” devices by default. Should I manually replace the device names or not?
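On the /etc/fstab point, one approach (a sketch of mine, not official Linode guidance): the Xen /dev/xvd* names show up as /dev/sd* under KVM, as other comments in this thread illustrate, so you can rewrite them mechanically – or, safer still, switch to UUID= entries via blkid so device naming never matters.

```shell
# Rewrite Xen /dev/xvd* device names to the /dev/sd* names seen under KVM.
# xen_to_kvm_fstab is a hypothetical helper, not a Linode tool.
xen_to_kvm_fstab() {
  # Reads an fstab on stdin, writes the rewritten fstab on stdout.
  sed 's|/dev/xvd|/dev/sd|g'
}

# Typical use (review the output before replacing the real file):
#   xen_to_kvm_fstab < /etc/fstab > /tmp/fstab.kvm
```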
Please consider enabling Nested KVM. We could host oVirt or OpenStack on top of it. Imagine that!
I can’t find the button!
I found no such “Upgrade to KVM” link on the right sidebar.
Is it not yet available in Tokyo data-center?
Already got FreeBSD running, and the Debian benchmark is an improvement.
Any planned support for Tokyo on the near horizon, or would I be better off migrating elsewhere?
Interesting. I thought you guys had tested this previously for some reason, though I never felt strongly about it, since the hypervisor is largely invisible to the guest (at least for my workloads): EC2, Linode, and Vr.org use Xen, while DigitalOcean, Rackspace, etc. use KVM. Those benchmark differences are large, though; I am surprised to see such a jump.
And it is not available for servers in Tokyo yet!
Does this apply to all linodes in all locations? I have an existing Xen linode in Japan and I’m not seeing the upgrade option… (I can change the default type in my account but not in the dashboard for my specific linode).
The upgrade was quick and seamless. I used the KVM upgrade option in the control panel to upgrade three Ubuntu servers and a Debian server to KVM. The downtime was only 10 minutes on a Linode 4096, and disk I/O has increased 20% (disk I/O was never an issue to begin with). Not bad for clicking a button.
Will these new KVM VMs still be multicore? I thought KVM didn’t support multiple virtual CPUs running on multiple real CPUs…
Happy Birthday Linode! Twelve years is a nice milestone for any company to reach – and I’m glad you reached it for sure! Congratulations for the birthday – and it’s great to see that you’re moving to KVM! 🙂
Be very careful migrating a linode to KVM at this time. Linode’s process failed at migrating one of my nodes and their support hasn’t addressed the problem in _3_ hours, nor given any clarity on the situation.
I’m on a 32-bit Linode (London DC). Two questions:
1. I don’t see any button to upgrade to KVM. Is that normal?
2. Will it be possible to upgrade my Linode to an SSD node AND switch to KVM at the same time?
No upgrades in Tokyo it looks like?
How stable can we expect this to be? Would it be wise to migrate mission critical Linodes or better to wait for a few months?
Can somebody please create a guide/tutorial for installing Windows Server 2012 on the new KVM Linodes? I would really appreciate it!
Weird. Last time I’d asked about it (just a few months back), I was told that the Xen PV-GRUB you were using didn’t support doing a partitioned, single-disk root drive (i.e., to put “/boot” on /dev/xvda1 and an LVM2 root VG on /dev/xvda2).
At any rate, that issue aside, more critical to me is “can one use a live Linode instance to do a chroot-install of an OS to a vDisk, then register that vDisk as a bootable instance” (as is doable with AWS). I’d really rather just do everything “in the cloud” rather than having to upload an OS image file.
I’m still not getting the upgrade button for my Tokyo Linode. Any ETA?
@Rob: Yes, our systems are still multi-core, no changes there.
@Keith: Thanks! I’m glad that we’re able to increase the performance for everyone on such a great day!
@Cal: If you don’t receive a response from Support quickly and it’s urgent, I would suggest giving us a call so we can try to help you out asap.
@skp: I’m seeing that there is space for London at this time. If you still can’t migrate, I would suggest contacting our support.
@Mike/Superbarney: Unfortunately it looks like Tokyo is out of space at this time, but you can migrate elsewhere and get the upgrade.
@losif: We are out of beta, so it should be 100% stable. If you want to be cautious, I would recommend taking a backup first – or, instead of migrating in place, create a new KVM Linode, migrate your data there, and test it.
Are you planning to write a HowTo like this:
I need to know whether it’s possible to install a “Distribution-Supplied Kernel” before migrating or when creating new KVM Linodes (I’m using CentOS 6.x).
I just upgraded two nodes and immediately saw a notable difference in speed.
Thank you and Happy Birthday!
@Bakko, here’s what worked for me on a Debian 8.1 image with KVM.
First, install a kernel and grub2 (apt-get install linux-image-amd64 grub2)
Debian’s package manager already installed grub on the root filesystem, but do it manually if you need to (grub-install /dev/sda)
Next, configure GRUB not to use graphical mode, and add “console=ttyS0” to the kernel command line, by editing /etc/default/grub.
Run the ‘update-grub’ command to regenerate /boot/grub/grub.cfg (update-grub)
Go into the linode dashboard, edit the configuration profile for the image, and under “Boot Settings”, look at the “Kernel” field. Take note of what is currently there (in case you break something and need to go back to it), and then change the Kernel to “GRUB 2”. Click “Save Changes”, and then reboot your linode.
That got me booting with Debian’s default kernel. If it doesn’t work for you, just set the Kernel field back to what it previously was, save changes, reboot, and try and figure out what went wrong.
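For reference, the /etc/default/grub settings described in the steps above usually look like this (these are the standard Debian GRUB variable names; the exact values are my assumption, not the commenter’s verbatim file):

```shell
# /etc/default/grub (fragment)
GRUB_TERMINAL=console                # don't use the graphical terminal
GRUB_CMDLINE_LINUX="console=ttyS0"   # send the kernel console to the serial port Linode's console uses
```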
Thank you @AndresSalomon but I’m using CentOS. Regards
Note: If you use “GRUB 2” as the “kernel” in the Configuration Profile, then you don’t need “grub-install /dev/sda” at all. That’s only required for “Direct Disk” boot,
which I would not recommend on disks not having a partition table.
Hey Linode, the reference guide (https://www.linode.com/docs/platform/kvm) talks about ‘Direct Disk’ booting. What is this? And is it preferable to switch to it?
Maybe a comment on the reference guide should be added about this.
I’ve migrated a few Linodes to KVM already, with zero hiccups. One side effect I noticed, however, is that disks are now reported as “rotational” (i.e., as not being SSDs).
On a KVM Linode:
# cat /sys/block/sdb/queue/rotational
1
On a Xen Linode:
# cat /sys/block/xvda/queue/rotational
0
I had previously been opportunistically setting the I/O scheduler to noop where the disk reported rotational 0, so this “breaks” that, but it’s not a huge deal; the disk is still fast!
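The opportunistic scheduler choice described above can be sketched as a tiny helper (the function name is hypothetical; the sysfs paths are the standard Linux ones):

```shell
# Pick an I/O scheduler from a device's reported rotational flag.
# pick_scheduler is a hypothetical helper, not from the comment.
pick_scheduler() {
  # $1 = value of /sys/block/<dev>/queue/rotational (0 = SSD)
  if [ "$1" = "0" ]; then
    echo noop       # SSD: no seek penalty, skip request reordering
  else
    echo deadline   # reported rotational: keep a seek-aware scheduler
  fi
}

# Typical use (root required to write the scheduler):
#   pick_scheduler "$(cat /sys/block/sda/queue/rotational)" > /sys/block/sda/queue/scheduler
```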
AWESOME! Just upgraded and I’ve got more than 25% stability and performance.
Maybe this HowTo can be useful to someone:
“Migrate Linode CentOS 6.6 – 64 bit from XEN to KVM using GRUB and the new ttyS0 console”:
Your DevOps automation is impressive. How are you even able to pull this off, something like SaltStack?
I just upgraded to KVM and it is faster so far, with more memory to spare. Nice!
What about Frankfurt?
I am wondering if you have a more specific upgrade schedule? We are closing down our office for one month vacation starting next week and need to prepare so we minimize the risk of firefighting. You say we shouldn’t sweat it. Does that mean we can wait with this upgrade until the end of August or beginning of September?
I checked this, and I get the following, which is probably why Linux thinks it is rotational now.
root@icarus:~# smartctl -a /dev/sda
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.1.0-x86_64-linode59] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, http://www.smartmontools.org
=== START OF INFORMATION SECTION ===
Product: QEMU HARDDISK
User Capacity: 51,275,366,400 bytes [51.2 GB]
Logical block size: 512 bytes
LU is thin provisioned, LBPRZ=0
Rotation Rate: 5400 rpm
Device type: disk
Local Time is: Mon Jul 6 23:22:57 2015 UTC
SMART support is: Unavailable – device lacks SMART capability.
Last time I checked, KVM does not work with CloudLinux. Do you have a way of making it work, or do I need to find a new provider?
How do you get ” 25% stability “? Did your Linode crash four times in one hour before, and now it’s just once? XD
Basically, KVM is better than Xen; it’s a good upgrade.
[…] will be cycled this morning to take advantage of Linode’s new KVM setup. It’s been running a while in staging without […]
Was just looking to move from shared to dedicated resources; looks like I have found a good place.
Tokyo ETA please …..
For those of you who ended up with an unbootable system unless you changed to ‘full virtualization’, you may be missing the virtio block driver in your initramfs.
echo 'add_drivers+=" virtio_blk "' | sudo tee /etc/dracut.conf.d/kvm.conf
(Note: sudo doesn’t apply across a shell redirection, hence tee.) Then regenerate the initramfs with ‘sudo dracut -f’, shut down, change to para-virtualization, pray, and boot.
Not available in Tokyo, and there are several questions above about this. Please provide an ETA for when those with Xen Linodes in Tokyo can upgrade to KVM. Thank you!
Have we received an answer on CloudLinux and the KVM upgrade?
The results from our unixbench benchmark of KVM and XEN can be found at: http://wpress.io/unixbench-results-for-digitalocean-linode-kvm-linode-xen/
You can run CloudLinux under KVM at Linode: http://docs.cloudlinux.com/cloudlinux_on_linode_kvm.html (haven’t tested this myself yet, but I’m about to)
CloudLinux is working perfectly for me under KVM
Wonder why Xen did so poorly relative to KVM on those boxes (ex: this shows it only a few percentage points slower: https://major.io/2014/06/22/performance-benchmarks-kvm-vs-xen/ ) …
Interesting. Nice performance gain. Wonder why it was so large, other comparisons of KVM vs Xen only find like a 1% difference [maybe it’s I/O?] ex: http://wpress.io/unixbench-results-for-digitalocean-linode-kvm-linode-xen/
@John / @Troy: A little late, but it seems CloudLinux doesn’t have any issue with being installed on a KVM node: http://docs.cloudlinux.com/kvm_images.html (otherwise there wouldn’t be any KVM images).
It’s likely, though, that you’ll have to install a CentOS image, switch to a native kernel (GRUB / pv-grub) instead of the Linode kernels, and then convert it to CloudLinux.
Thanks Linode for making this such a smooth transition – enjoying the bump in performance.
Are you willing to share the Xen -> KVM migration script you guys use?
Lost network connectivity after migration. Turned out to be because eth0 had changed to eth1, so I needed to modify my network settings. Not a big deal for me but may cause a problem for some so I thought it worth mentioning.
It has been more than half a year, and the KVM upgrade still seems to be unavailable in the Tokyo datacenter.
After the auto-migration I had a boot error complaining that the mount points /dev/pts and /dev/shm did not exist.
I eventually fixed it by setting the ‘Automount devtmpfs’ value to No in my configuration profile, as per the instructions here: http://thomas.broxrost.com/2016/06/15/fixing-boot-problems-after-upgrading-to-ubuntu-9-10-on-linode.