Happy 12th Birthday to us!
Welp, time keeps on slippin' into the future, and today we find ourselves 12 years old. To celebrate, we're kicking off the next phase of Linode's transition from Xen to KVM by making KVM Linodes generally available, starting today.
Better performance, versatility, and faster booting
On the same hardware, KVM Linodes are much faster than Xen Linodes. For example, in UnixBench tests a KVM Linode scored 3x better than a Xen Linode, and during a kernel compile a KVM Linode finished 28% faster than a Xen Linode. KVM has far less overhead than Xen, so you'll now get the full benefit of our investment in high-end processors.
KVM Linodes are paravirtualized by default, using the Virtio disk and network drivers. However, we now also support fully virtualized guests, so you can run alternative operating systems like FreeBSD, other BSDs, Plan 9, or Windows using emulated hardware (PIIX IDE and e1000). We're also working on a graphical console (Glish?), which should be out in the next few weeks.
In a recent survey of VM creation and SSH accessibility times performed by Cloud 66, Linode did well: the average Linode "create, boot, and SSH availability" time was 57 seconds. KVM Linodes boot much faster still; we're seeing them take only a few seconds.
How do I upgrade my Linode from Xen to KVM?
On a Xen Linode's dashboard you'll see an "Upgrade to KVM" link in the right sidebar. From there, upgrading your Linode to KVM is a one-click migration. In essence, the KVM upgrade gets you a much faster Linode just by clicking a button.
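After the migration completes, one way to confirm the Linode actually came back under KVM is to ask the guest itself. A minimal sketch, assuming a systemd-based distro (with a fallback to the Xen-era sysfs file); this is an illustration, not part of Linode's upgrade flow:

```shell
#!/bin/sh
# Report which hypervisor this guest is running under.
# systemd-detect-virt prints e.g. "kvm", "xen", or "none".
if command -v systemd-detect-virt >/dev/null 2>&1; then
    virt="$(systemd-detect-virt 2>/dev/null || true)"
elif [ -r /sys/hypervisor/type ]; then
    # Older Xen guests expose the hypervisor type here.
    virt="$(cat /sys/hypervisor/type)"
else
    virt="unknown"
fi
echo "hypervisor: ${virt:-unknown}"
```

On a freshly upgraded Linode this should print `hypervisor: kvm`.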
How do I make KVM the default for new things on my account?
In your account settings you can set your "Hypervisor Preference" to KVM. After that, any new Linodes you create will be KVM.
What will happen to Xen Linodes?
For now, new customers and new Linodes will still get Xen by default; Xen will stop being the default in the coming weeks. Eventually we will transition all Xen Linodes over to KVM, but that is likely to take quite a while. Don't sweat it.
On behalf of the entire Linode team, thank you for the past 12 years. Here's to another 12! Enjoy!
Linode continues to be an excellent service provider. Thanks =)
You’re welcome Alex!
My Linode continues to be a great value, and runs like a champ. Thanks for constantly improving and making the experience better and better.
a) What is the UnixBench result for a KVM Linode 1024?
b) Is osv.io support planned?
c) Is live migration planned? 🙂
@rata: 1) Don’t know, 2) No idea, probably? 3) Nope.
I chose Linode because of Xen.
“Eventually we will transition all Xen Linodes over to KVM” – really hope you are not serious.
Linode is AWESOME!
Why was Xen used in the 1st place?
Ed: we used UML in the first place (2003). Neither Xen nor KVM existed. Then we moved to Xen. Now we’re moving to KVM.
Hi! Great news! One question: How much downtime on upgrade?
A word of warning to customers: the KVM upgrade hosed my linode, and now it doesn’t boot. Be warned, this is not a seamless upgrade. I’m going to go open a trouble ticket.
A followup on my previous comment, it seems that Ubuntu on KVM requires that devtmpfs be enabled on your linode profile. Caker enabled it and now it’s booting fine.
Suggestion: this wasn’t required on Xen (or at least my linode booted fine before the upgrade), so perhaps the KVM upgrade should automatically enable it?
It’s required under Xen, too – however for reasons not yet understood Ubuntu was more tolerant to missing devtmpfs under Xen. We’re going to look at auto-enabling this during the upgrade. Thanks!
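For anyone wanting to verify this on their own Linode: the running kernel advertises devtmpfs support in /proc/filesystems, so a quick check (a sketch, not an official Linode procedure) is:

```shell
#!/bin/sh
# Check whether the running kernel supports devtmpfs. The Linode
# profile's "Automount devtmpfs" option can only work if it does.
if grep -q devtmpfs /proc/filesystems; then
    devtmpfs_ok=yes
else
    devtmpfs_ok=no
fi
echo "devtmpfs supported: ${devtmpfs_ok}"
```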
The downtime was 8-9 minutes for a 1GB instance (they have to copy the disk images to another host).
Good news, everyone!
ps: <3 Linode
Thank you !
6 linodes migrated
Just upgraded my 2G Linode.
Hmm… the CPU spec seems to have dropped. Before (Xen):
model name : Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
cpu MHz : 2800.044
bogomips : 5600.08
After (KVM):
model name : Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
cpu MHz : 2499.994
bogomips : 5001.32
However, it’s running much more quickly.
The kernel build time dropped from 573 to 363 seconds.
That’s 1.6x faster.
Many thanks Linode 🙂
Will you also offer FreeBSD images in the future?
OK, how do I check/make sure devtmpfs is enabled on Ubuntu 14.04?
We upgraded one of our VMs that used pvgrub + an OpenVZ kernel (2.6.32) and it didn’t boot. We were left at the “grub>” prompt.
Changing from paravirt to full virt made it work but I’m wondering if there is something we are missing?
Hmm, can I run Windows Server now? I don’t see the option. 🙂
Flawless upgrade and immediate performance gains. Thanks guys.
Just migrated to KVM and CloudLinux OS has stopped working. How can I install my own kernel?
My Debian Jessie instance migrated seamlessly. It only took a few minutes.
Finally! Thank you very much! I’m so gonna upgrade to KVM! Now everything is perfect <3
@Rich Russon: You’re missing the fact that your CPU changed from the E5-2680 v2 to the E5-2680 v3. The old one was a 10-core Ivy Bridge, the new one is a 12-core Haswell.
Will there be / is there any option to download host images and upload my own images?
Would suggest proceeding with caution – I attempted a migration this morning, but it failed and now won’t boot at all. Support tells me that unfortunately a hardware issue occurred at exactly the time I attempted the migrate (despite no current hardware issues being shown on https://status.linode.com), and I’m still awaiting an update.
I would have hoped the KVM migration script would perform a full health check so as to not leave customers stuck in limbo.
The concept is great, but thus far I’m disappointed.
Yay! I just migrated my server. It went great. My site feels much snappier. You guys rock! Thanks, Linode. 🙂
Would it be possible to post a list of CPU flags supported in the new KVM VMs, if different from the current Xen VMs (from /proc/cpuinfo)? I’m currently using AES instructions to accelerate IPsec on the Xen instance, but with KVM VMs aes isn’t always enabled.
Do you support Nested KVM?
Does this mean we can have LVM2 root drives?
Will we be able to use one Linode to build another (i.e., do an AWS-style chroot-build)?
@Micki: Yes. You can download your ‘image’ using Rescue Mode. You can upload your own image the exact same way (in reverse).
@Tim: you were coming from a troubled host, sadly. Looks like you’re sorted. Sorry for the hassle.
@Ricardo: it’s currently disabled. We left nesting off for the time being – but we will revisit this soon.
@Tom: you already could do LVM root. You have all the tools: GRUB, initrd, disk devices you can manage, etc. No?
Will we be able to install an OS straight from an ISO or will we still have to go through the old process to migrate it?
Currently no stock in Japan?
Can you provide checklist for seamless migration? What I should doublecheck? For example, linode created /etc/fstab with “/dev/xvda” devices by default. Should I manually replace device names or not?
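No official checklist appears in the thread, but on the device-name question specifically: other comments here show KVM Linodes presenting disks as /dev/sda rather than Xen's /dev/xvda. A sketch of the rewrite on a scratch copy of an fstab (the file contents below are illustrative, and whether you need this step at all is an assumption; fstabs that use labels or UUIDs need no change):

```shell
#!/bin/sh
# Rewrite Xen-style device names (/dev/xvda...) to KVM-style
# (/dev/sda...) in a scratch file. Illustration only - test on a
# copy before touching the real /etc/fstab.
cp_file=/tmp/fstab.kvm-test
printf '/dev/xvda / ext4 noatime,errors=remount-ro 0 1\n' > "$cp_file"
printf '/dev/xvdb none swap sw 0 0\n' >> "$cp_file"
sed -i 's|/dev/xvd|/dev/sd|g' "$cp_file"
cat "$cp_file"
```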
Please consider enabling Nested KVM. We could host oVirt or OpenStack on top of it. Imagine that!
I can’t find the button!
I found no such “Upgrade to KVM” link on the right sidebar.
Is it not yet available in Tokyo data-center?
Already got FreeBSD running, and the Debian benchmark is an improvement.
Any planned support for Tokyo on the near horizon, or would I be better off migrating elsewhere?
Interesting. For some reason I thought you had already tested this, although I never felt strongly about it, since the hypervisor is largely invisible to the guest (at least for my workloads): EC2, Linode, and Vr.org use Xen, while DigitalOcean, Rackspace, etc. use KVM. Those benchmark differences are large, though; I am surprised to see such a jump.
And it is not available for servers in Tokyo yet!!
Does this apply to all linodes in all locations? I have an existing Xen linode in Japan and I’m not seeing the upgrade option… (I can change the default type in my account but not in the dashboard for my specific linode).
The upgrade works well and is seamless. I used the KVM upgrade option in the control panel to upgrade 3 Ubuntu servers and a Debian server to KVM. The downtime was only 10 minutes on a Linode 4096, and disk I/O has increased 20% (disk I/O was never an issue to begin with). Not bad for clicking a button.
Will these new KVM VMs still be multicore? I thought KVM didn’t support multiple virtual cpus running on multiple real cpus…
Happy Birthday Linode! Twelve years is a nice milestone for any company to reach – and I’m glad you reached it for sure! Congratulations for the birthday – and it’s great to see that you’re moving to KVM! 🙂
Be very careful migrating a linode to KVM at this time. Linode’s process failed at migrating one of my nodes and their support hasn’t addressed the problem in _3_ hours, nor given any clarity on the situation.
I’m on a 32-bit Linode (London DC). Two questions:
1. I don’t see any button to upgrade to KVM. Is that normal?
2. Will it be possible to upgrade my Linode to an SSD node AND switch to KVM at the same time?
No upgrades in Tokyo it looks like?
How stable can we expect this to be? Would it be wise to migrate mission critical Linodes or better to wait for a few months?
Can somebody please create a guide/tutorial for installing Windows Server 2012 on the new KVM Linodes? I would really appreciate it!
Weird. Last time I’d asked about it (just a few months back), I was told that the Xen PV-GRUB you were using didn’t support doing a partitioned, single-disk root drive (i.e., to put “/boot” on /dev/xvda1 and an LVM2 root VG on /dev/xvda2).
At any rate, that issue aside, more critical to me is “can one use a live Linode instance to do a chroot-install of an OS to a vDisk, then register that vDisk as a bootable instance” (as is doable with AWS). I’d really rather just do everything “in the cloud” rather than having to upload an OS image file.
I’m still not getting the upgrade button for my Tokyo Linode. Any ETA?
@Rob: Yes, our systems are still multi-core, no changes there.
@Keith: Thanks! I’m glad that we’re able to increase the performance for everyone on such a great day!
@Cal: If you don’t receive a response from Support quickly and it’s urgent, I would suggest giving us a call so we can try to help you out asap.
@skp: I’m seeing that there is space for London at this time. If you still can’t migrate, I would suggest contacting our support.
@Mike/Superbarney: Unfortunately it looks like space is out at Tokyo at this time, but you can migrate elsewhere and get the upgrade.
@losif: We are out of beta, so it should be 100% stable. If you want to be cautious, I would recommend taking a backup first, and/or, instead of migrating in place, making a new KVM Linode, migrating your current Linode over to it, and testing there.
Are you planning to write a HowTo like this:
I need to know whether it’s possible to install a “Distribution-Supplied Kernel” before migrating or creating new KVM Linodes (I’m using CentOS 6.x).
I just upgraded two nodes and immediately saw a notable difference in speed.
Thank you and Happy Birthday!
@Bakko, here’s what worked for me on a Debian 8.1 image with KVM.
First, install a kernel and grub2 (apt-get install linux-image-amd64 grub2)
Debian’s package manager already installed grub on the root filesystem, but do it manually if you need to (grub-install /dev/sda)
Next, configure grub not to use graphical mode, and add “console=ttyS0” to the kernel command line (edit /etc/default/grub to include the following lines):
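The actual config lines didn't survive in the comment; a typical shape for pointing GRUB at a serial console looks like the fragment below. The speed and unit values are assumptions on my part, not taken from the original comment, so check them against your console settings:

```shell
# /etc/default/grub (fragment) - illustrative values, not from the
# original comment: route kernel and GRUB output to the serial console.
GRUB_CMDLINE_LINUX="console=ttyS0,19200n8"
GRUB_TERMINAL=serial
GRUB_SERIAL_COMMAND="serial --speed=19200 --unit=0 --word=8 --parity=no --stop=1"
```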
Run the ‘update-grub’ command to regenerate /boot/grub/grub.cfg (update-grub)
Go into the linode dashboard, edit the configuration profile for the image, and under “Boot Settings”, look at the “Kernel” field. Take note of what is currently there (in case you break something and need to go back to it), and then change the Kernel to “GRUB 2”. Click “Save Changes”, and then reboot your linode.
That got me booting with Debian’s default kernel. If it doesn’t work for you, just set the Kernel field back to what it previously was, save changes, reboot, and try and figure out what went wrong.
Thank you @AndresSalomon but I’m using CentOS. Regards
Note: If you use “GRUB 2” as the “kernel” in the Configuration Profile, then you don’t need “grub-install /dev/sda” at all. That’s only required for “Direct Disk” boot, which I would not recommend on disks that don’t have a partition table.
Hey Linode, the reference guide (https://www.linode.com/docs/platform/kvm) talks about “Direct Disk” booting. What is this, and is it preferable to switch to it?
Maybe a comment on the reference guide should be added about this.
I’ve migrated a few Linodes to KVM already, with zero hiccups. One side-effect I noticed, however, is that disks are now reported as “rotational” (aka whether it is an SSD or not).
On a KVM Linode:
# cat /sys/block/sdb/queue/rotational
1
On a Xen Linode:
# cat /sys/block/xvda/queue/rotational
0
I had previously been opportunistically setting the IO scheduler to noop where the disk reported rotational 0, so this “breaks” that, but it’s not a huge deal; the disk is still fast!
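The opportunistic-scheduler logic the commenter describes can be sketched as a small helper. The function name and the cfq fallback are my choices, not the commenter's:

```shell
#!/bin/sh
# Pick an IO scheduler from the rotational flag, as read from
# /sys/block/<dev>/queue/rotational: 0 means non-rotational (SSD-like).
choose_scheduler() {
    if [ "$1" = "0" ]; then
        echo noop   # no seek penalty, so skip request reordering
    else
        echo cfq    # assumed default for disks reporting rotational
    fi
}
choose_scheduler 0   # Xen virtio disk reported 0 -> noop
choose_scheduler 1   # KVM's QEMU HARDDISK reports 1 -> cfq
```

To apply it for real you would write the chosen name into /sys/block/&lt;dev&gt;/queue/scheduler as root.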
AWESOME! just upgraded and i’ve got more than 25% stability and performance.
Maybe this HowTo can be useful to someone:
“Migrate Linode CentOS 6.6 – 64 bit from XEN to KVM using GRUB and the new ttyS0 console”:
Your DevOps automation is impressive. How are you even able to pull this off, something like SaltStack?
I just upgraded to KVM and it is faster, with more memory to spare so far. Nice!
What about Frankfurt?
I am wondering if you have a more specific upgrade schedule? We are closing down our office for one month vacation starting next week and need to prepare so we minimize the risk of firefighting. You say we shouldn’t sweat it. Does that mean we can wait with this upgrade until the end of August or beginning of September?
I checked this, and I get the following, which is probably why Linux thinks it is rotational now.
root@icarus:~# smartctl -a /dev/sda
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.1.0-x86_64-linode59] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, http://www.smartmontools.org
=== START OF INFORMATION SECTION ===
Product: QEMU HARDDISK
User Capacity: 51,275,366,400 bytes [51.2 GB]
Logical block size: 512 bytes
LU is thin provisioned, LBPRZ=0
Rotation Rate: 5400 rpm
Device type: disk
Local Time is: Mon Jul 6 23:22:57 2015 UTC
SMART support is: Unavailable – device lacks SMART capability.
Last time I checked, KVM does not work with CloudLinux. Do you have a way of making it work, or do I need to find a new provider?
How do you get ” 25% stability “? Did your Linode crash four times in one hour before, and now it’s just once? XD
Basically, KVM is better than Xen; it’s a good upgrade.
[…] will be cycled this morning to take advantage of Linode’s new KVM setup. It’s been running a while in staging without […]
Was just looking to move from shared to dedicated resources; looks like I have found a good place.
Tokyo ETA please …..
For those of you who ended up with an unbootable system unless you changed to “full virtualization”, you may be missing the virtio block driver in your initramfs.
echo 'add_drivers+="virtio_blk"' | sudo tee /etc/dracut.conf.d/kvm.conf
sudo dracut -f
Then shut down, change back to paravirtualization, pray, and boot.
Not available in Tokyo, Japan, and there are several questions above about this. Please provide an ETA for when those with Xen in Tokyo can upgrade to KVM. Thank you!
Have we received an answer on CloudLinux and the KVM upgrade?
The results from our unixbench benchmark of KVM and XEN can be found at: http://wpress.io/unixbench-results-for-digitalocean-linode-kvm-linode-xen/
You can run CloudLinux under KVM at Linode: http://docs.cloudlinux.com/cloudlinux_on_linode_kvm.html (haven’t tested this myself yet, but I’m about to)
CloudLinux is working perfectly for me under KVM
Wonder why Xen did so poorly relative to KVM on those boxes (ex: this shows it only a few percentage points slower: https://major.io/2014/06/22/performance-benchmarks-kvm-vs-xen/ ) …
Interesting. Nice performance gain. Wonder why it was so large, other comparisons of KVM vs Xen only find like a 1% difference [maybe it’s I/O?] ex: http://wpress.io/unixbench-results-for-digitalocean-linode-kvm-linode-xen/
@John / @Troy: A little late, but it seems CloudLinux doesn’t have any issues being installed on a KVM node: http://docs.cloudlinux.com/kvm_images.html (otherwise there wouldn’t be any KVM images).
It’s likely though that you’ll have to install a CentOS image, switch to native kernel (grub / pv-grub) instead of the Linode Kernels and then convert it to CloudLinux.
Thanks Linode for making this such a smooth transition – enjoying the bump in performance.
Are you willing to share the Xen -> KVM migration script you guys use?
Lost network connectivity after migration. Turned out to be because eth0 had changed to eth1, so I needed to modify my network settings. Not a big deal for me but may cause a problem for some so I thought it worth mentioning.
It has been more than half a year, and the KVM upgrade still seems to be unavailable at the Tokyo datacenter.
After the auto migration I had an error with booting complaining about the mount points /dev/pts and /dev/shm not existing.
I eventually fixed it by setting the Automount devtmpfs value to No in my configuration profile, as per the instructions here: http://thomas.broxrost.com/2016/06/15/fixing-boot-problems-after-upgrading-to-ubuntu-9-10-on-linode.