Intel’s L1TF CPU Vulnerabilities & Linode


Earlier this week Intel publicly disclosed a new class of processor vulnerabilities known as L1 Terminal Fault (L1TF). Variants of L1TF affect many single and multi-tenant environments, including some of Linode’s infrastructure and Linodes themselves.

We have begun mitigation efforts and anticipate full mitigation of our fleet within the next few weeks. We believe we can achieve this without any interruption to your running systems, and without requiring any coordination on your part. However, this is still evolving and we’ll know more as we go. Early results of our mitigations have been encouraging.

While that protects our side of things, you should also make sure you’re running a Linux kernel with the L1TF mitigations in place. Check out our guide on upgrading your kernel.
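As a quick sanity check, recent kernels that carry the L1TF patches report their mitigation status through sysfs. This is a minimal sketch; the sysfs file only exists on kernels that include the L1TF patches, so its absence is itself a hint that an upgrade is needed:

```shell
# Report the kernel's view of L1TF mitigation status. The sysfs file
# below is only present on kernels that carry the L1TF patches.
L1TF_FILE=/sys/devices/system/cpu/vulnerabilities/l1tf
if [ -r "$L1TF_FILE" ]; then
    STATUS=$(cat "$L1TF_FILE")
else
    STATUS="unknown (kernel predates L1TF sysfs reporting; consider upgrading)"
fi
echo "L1TF status: $STATUS"
```

On a patched kernel this typically prints something like `Mitigation: PTE Inversion`, possibly followed by VMX/SMT details on hosts running virtualization.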

As we move forward with our mitigation efforts over the next few weeks, we will continue providing more information here on our blog. Stay tuned!

Comments (8)

  1. Author Photo

Thanks for the hard work in dealing with this. Though I’m not sure it’s enough to just update the kernel on the OS side – aren’t microcode updates needed too, I guess at the host node OS level?

    Wouldn’t the microcode updates require host node level reboots?

  2. Christopher Aker

    You are correct! We’re able to transparently move VMs to patched infrastructure using live migrations.

  3. Author Photo

    Ah sweet – live migration feature is awesome. One of many reasons I have stuck with Linode for 4+ yrs now 🙂

  4. Author Photo

    What are your plans regarding HyperThreading?

    One of the things that has me shocked about L1TF is that there does not yet appear to be any publicly-available, complete mitigation to either of the major open-source hypervisors (KVM and Xen) that does not require HyperThreading to be disabled.

L1TF is not fully mitigated if unrelated guests can run as hyper-siblings (or if an untrusted guest – and for a cloud VM provider, all guests are untrusted – can run as a hyper-sibling of a hypervisor thread). Technically, this could be enforced by a scheduler, but the most unequivocal statement of a scheduler that will do so comes from, of all places, Microsoft, and therefore Azure.

Google also indicates that individual cores are never concurrently shared between VMs. Certainly, they have the wherewithal to pull this off with custom internal kernel changes, so there’s no particular reason to doubt them. (I didn’t find any clear statement from AWS on shared cores, but they already have their custom Nitro hypervisor, so plausibly they have a custom modification.)

Unfortunately, the current docs applicable to KVM don’t provide any good solution for a cloud VM provider other than disabling HyperThreading.

    Am I wrong about this?

  5. Author Photo

I too would like to know more about the hyperthreading story. We have multiple internal deployments of OpenStack and VMware that would suffer if we have to disable HT. Did Linode disable HT?

    I am very happy with Linode being able to live migrate things with no downtime to customers. That is a massive improvement over the past migration queues.

  6. Author Photo

    Thanks for implementing these security fixes.

The new “live” migrations are certainly interesting – is this a new feature that you’re now able to use? It’s certainly much less painful than existing migration queues and forced downtime.

    Furthermore, will live migration be introduced for other server moves, such as upgrades and downgrades?

  7. Author Photo

    Our current plan for L1TF mitigation is to disable HyperThreading.

    Yes, live migrations are a feature that we are now able to use. We are evaluating the different use cases for this one, but currently it cannot be used for upgrades/downgrades with plan resizing.
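For readers curious what disabling HyperThreading looks like on the host side, the same kernel series that added the L1TF patches also added a runtime SMT control knob in sysfs. This is a sketch under that assumption; on older kernels, SMT can only be disabled in firmware or with the `nosmt` boot parameter:

```shell
# Check whether SMT (HyperThreading) is currently active. The smt/control
# knob ships with the L1TF-era kernel patches; older kernels lack it.
SMT_FILE=/sys/devices/system/cpu/smt/control
if [ -r "$SMT_FILE" ]; then
    SMT_STATE=$(cat "$SMT_FILE")   # e.g. "on", "off", or "notsupported"
else
    SMT_STATE="unknown (no runtime SMT control on this kernel)"
fi
echo "SMT state: $SMT_STATE"

# To disable SMT at runtime, the host operator would write (as root):
#   echo off > /sys/devices/system/cpu/smt/control
```

Writing `off` takes the sibling threads offline immediately, so a host’s visible CPU count roughly halves – which is why cloud providers weigh this mitigation against core-scheduling approaches.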

  8. Author Photo

    Thanks for keeping us informed and patching the hosts. We appreciate the effort and due diligence. I’m sure these projects at large scale are never fun.
