
Intel's L1TF CPU Vulnerability and Linode


Earlier this week, Intel disclosed a new class of processor vulnerabilities known as L1 Terminal Fault (L1TF). Variants of L1TF affect many single-tenant and multi-tenant environments, including some of Linode's infrastructure and Linodes themselves.

We have begun our mitigation efforts and expect to fully mitigate our fleet within the next few weeks. We believe we can accomplish this without disruption to running systems and without any coordination required on your part. However, the situation is still evolving, and we may learn more as we go. Early results from our mitigations have been encouraging.

While that protects our side of things, you should make sure you are running a Linux kernel with the mitigations in place. Check out our guide on upgrading your kernel.
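As a quick sanity check, recent Linux kernels (and stable kernels that backported the L1TF fixes) report their mitigation status through sysfs. A minimal sketch, assuming a Linux guest; the sysfs path is the standard kernel interface, but whether the file exists depends on your kernel version:

```shell
# Report the kernel's view of its L1TF mitigation status.
# Kernels without the L1TF patches do not expose this file at all.
f=/sys/devices/system/cpu/vulnerabilities/l1tf
if [ -r "$f" ]; then
    # Typical patched output: "Mitigation: PTE Inversion" (plus VMX
    # details on hosts); an unmitigated kernel reports "Vulnerable".
    cat "$f"
else
    echo "Kernel does not expose L1TF status; it likely predates the fixes."
fi
```

If the file is missing or reports "Vulnerable", upgrading to a patched kernel is the fix on the guest side.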

We'll continue to post more information here on the blog as we work through our mitigation efforts over the coming weeks. Stay tuned!


Comments (8)

  1. Thanks for the hard work in dealing with this. Though I'm not sure it's enough to just update the kernel on the OS side – microcode updates are needed too, I guess at the host node OS level as well https://www.linode.com/community/questions/17120/how-is-linode-handling-l1tf-what-actions-can-we-take-to-mitigate#answer-66869 ?

    Wouldn't the microcode updates require host-node-level reboots?

  2. You are correct! We’re able to transparently move VMs to patched infrastructure using live migrations.

  3. Ah sweet – live migration feature is awesome. One of many reasons I have stuck with Linode for 4+ yrs now 🙂

  4. What are your plans regarding HyperThreading?

    One of the things that has me shocked about L1TF is that there does not yet appear to be any publicly-available, complete mitigation to either of the major open-source hypervisors (KVM and Xen) that does not require HyperThreading to be disabled.

    L1TF is not fully mitigated if unrelated guests can run as hyper-siblings (or if an untrusted guest–which is all guests for a cloud VM provider–can run as a hyper-sibling of a hypervisor thread). Technically, this could be enforced by a scheduler, but the most unequivocal statement of a scheduler that will do so comes from, of all places, Microsoft, and therefore Azure (https://blogs.technet.microsoft.com/virtualization/2018/08/14/hyper-v-hyperclear/).

    Google also indicates that individual cores are never concurrently shared between VMs (https://cloud.google.com/blog/products/gcp/protecting-against-the-new-l1tf-speculative-vulnerabilities). Certainly, they have the wherewithal to pull this off with custom internal kernel changes, so there’s no particular reason to doubt them. (I didn’t find any clear statement from AWS on shared cores, but they already have their custom Nitro hypervisor, so plausibly they have a custom modification.)

    Unfortunately, the current docs applicable to KVM don’t provide any good solution for a cloud VM provider other than disabling HyperThreading: https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html

    Am I wrong about this?

  5. I too would like to know more about the hyperthreading story. We have multiple internal deployments of openstack and vmware that would suffer if we have to disable HT. Did Linode disable HT?

    I am very happy with Linode being able to live migrate things with no downtime to customers. That is a massive improvement over the past migration queues.

  6. Thanks for implementing these security fixes.

    The new “live” migrations are certainly interesting – is this a new feature that you’re now able to use? It’s certainly much less painful than existing migration queues and forced downtime.

    Furthermore, will live migration be introduced for other server moves, such as upgrades and downgrades?

  7. Our current plan for L1TF mitigation is to disable HyperThreading.

    Yes, live migrations are a feature that we are now able to use. We are evaluating the different use cases for this one, but currently it cannot be used for upgrades/downgrades with plan resizing.
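For readers who operate their own KVM hosts, the disable-HyperThreading approach discussed above can be checked (and, on newer kernels, applied at runtime) through sysfs. A hedged sketch: the `smt/control` interface was added alongside the L1TF fixes in kernel 4.19 and backports, so older kernels must instead boot with `nosmt` on the kernel command line:

```shell
# Inspect, and optionally change, the SMT (HyperThreading) state on a
# Linux host. Values are "on", "off", "forceoff", or "notsupported".
ctl=/sys/devices/system/cpu/smt/control
if [ -r "$ctl" ]; then
    cat "$ctl"
    # To disable SMT at runtime (requires root), uncomment:
    # echo off > "$ctl"
else
    echo "No runtime SMT control; boot with nosmt to disable HyperThreading."
fi
```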

  8. Thanks for keeping us informed and patching the hosts. We appreciate the effort and due diligence. I’m sure these projects at large scale are never fun.
