This week, a series of serious security vulnerabilities affecting many CPU architectures (CVE-2017-5753, CVE-2017-5715, and CVE-2017-5754) were disclosed by Google's Project Zero team and others. Our team is working with vendors and our engineers to determine the implications for our platform, but we anticipate that a fleet-wide reboot will be necessary to protect against these issues.
As we work through our response plan, please understand that, given the nature and severity of these issues, a rapid response may be required. As always, we will provide as much advance notice as possible. If your Linodes need to be rebooted, we will communicate scheduling information to you directly.
We will continue to post updates here as more information becomes available.
Information regarding these vulnerabilities can be found at the following sites:
- Google's Project Zero blog post
- meltdownattack.com
- Documentation/x86/kaiser.txt
- LWN: KAISER: hiding the kernel from user space
Update: January 4, 2018
We are continuing to investigate this issue and wanted to provide a brief status update:
- We are postponing all unrelated maintenance to focus our efforts and resources on mitigating this issue.
- As previously discussed by the Scaleway team, due to the incomplete information provided by hardware manufacturers, we have joined forces with other potentially affected cloud hosting providers, including Scaleway, Packet, and OVH. We have set up a dedicated communication channel to share information and work together on addressing the Meltdown and Spectre vulnerabilities.
- We are continuing to evaluate and test mitigations internally.
- Deep-dive discussions with our hardware vendors are scheduled for tomorrow.
We will continue to provide relevant updates here.
Update: January 5, 2018
We are continuing to make progress and wanted to share the latest with you:
- The latest stable and long-term Linux kernels were released today with the KPTI / Meltdown patches. Accordingly, we have made kernel 4.14.12 available and set it as the latest. If you are using a Linode kernel, your Linode will be upgraded to this version on its next reboot (a quick way to verify this is shown just below this list). This does not fully protect you from the Meltdown and Spectre vulnerabilities, but it gives us a good foundation to work with while we plan for full remediation.
- We have held planning sessions with our hardware vendors and worked through rollout plans for the kernel, hypervisor, and firmware updates. All of these updates will be required to reach a remediated state, but not all of them are available yet.
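For reference, one rough way to confirm that a Linode picked up the new kernel after a reboot (assuming its Configuration Profile uses a Linode-supplied kernel) is something like:
uname -r
# expect a version string along the lines of 4.14.12-x86_64-linode92
dmesg | grep -i "page tables isolation"
# a KPTI-patched kernel typically logs: Kernel/User page tables isolation: enabled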
We do not expect major movement over the weekend while we wait on external dependencies, but we will certainly post updates here if there are any. Otherwise, further updates will be shared here on Monday of next week.
Update: January 8, 2018
We are continuing to make progress with our internal testing, but we are still waiting on microcode updates from our hardware vendors. Both the microcode update and the kernel update are required to ensure adequate mitigation of the three Meltdown and Spectre variants.
Update: January 9, 2018
We spent today preparing the plan for rolling out the Meltdown mitigations across the entire Linode fleet. Over the next day we will deploy the fixes to a subset of the fleet, monitor the impact, and then continue the rollout to the rest. The Meltdown mitigation requires rebooting our physical hardware, which will reboot the Linodes hosted on it. A subset of Linodes in the Tokyo 1, Frankfurt, and Singapore data centers will be rebooted as part of this initial group. If you are affected, you will receive a support ticket and an email with scheduling information.
This week's reboots address Meltdown only. Testing and planning to address Spectre are happening in parallel. Additional reboots will be required over the coming weeks to properly mitigate all of the Spectre variants.
Update: January 10, 2018
The Meltdown mitigation rollout to the subset of our fleet has gone well so far. We are proceeding with this plan and will perform reboots for the rest of the fleet over the coming days. Affected customers will receive support tickets and emails with the reboot window for their Linodes, with at least 24 hours' notice.
- Due to the ongoing nature of this issue, the following status page has been created.
- We will soon publish a guide covering Meltdown and Spectre in more detail, what they mean for you, and what you can do to prepare for this issue on your Linodes. We will share a link to it in an upcoming blog post.
Update: January 11, 2018
The Meltdown mitigation process is ongoing, and we are making progress across the fleet every day. A new guide with more information on these vulnerabilities and on how to protect your Linode is now available: What You Need to Do to Mitigate Meltdown and Spectre.
Update: January 12, 2018
We are continuing the Meltdown mitigation process and have reboots scheduled over the weekend. Our schedule runs through January 18th. We will pause the daily blog updates until this process is complete, unless other useful news becomes available.
Update: February 8, 2018
As a reminder, all of our KVM hosts are now properly mitigated for Meltdown. We are continuing to work toward proper mitigation of the Spectre vulnerability and will provide an updated plan on our blog as soon as it is available.
For more information on these vulnerabilities, the status of our fleet, and how to protect your Linode, see the Meltdown and Spectre guide.
Comments (77)
Wishlist: alternate-CPU-architecture Linode hosts 😉
-Eugene
Sadly, portions of this affect both AMD and Intel, and likely others, fwiw.
My wishlist would be that all VPS providers could get the same early notifications and patches that a certain trendy yet slower provider received two weeks ago.
Yup, ARM’s response is https://developer.arm.com/support/security-update
fwiw, Scaleway is patching & mass rebooting hypervisors 1/4/2018
Sadly, it appears that even ARM CPUs are affected (to some degree) by this design flaw – so even if other CPU architectures were available, a reboot would still likely be necessary.
I could be wrong, but I think Eugene is referring to an alternative architecture like RISC-V…?
https://www.codasip.com/2016/09/22/what-is-risc-vwhy-do-we-care-and-why-you-should-too/
Thanks for the update. One more question, the patch says 5-30% performance hit depending on workload. Do we need to add more VMs to deal with the load?
@Scott, @Eugene – it seems that “all CPUs” may very well be most modern CPUs; RedHat advisory claims that POWER and even SystemZ (as used in IBM mainframes) may be impacted by Spectre.
https://access.redhat.com/security/vulnerabilities/speculativeexecution
Basically if your core does speculative execution for performance gain then it may be vulnerable. The BOOM RISC-V core ( https://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-157.html ) can do out of order execution and so _may_ be vulnerable. It would require a deeper look at other implementations of the spec to see if they are vulnerable or not.
Will this require patching the OS in our Linodes?
We’re still planning our full mitigation strategy, but Linodes will need an updated kernel, and we’re working on providing one from the Linode Manager. If you currently use a distribution-supplied or custom-compiled kernel, you will need to take separate actions to update it.
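If you are on a distribution-supplied kernel, the update path is your package manager; a rough sketch, assuming a Debian/Ubuntu or CentOS/RHEL guest (exact package names and patch availability vary by distribution and release):
# Debian / Ubuntu
sudo apt-get update && sudo apt-get dist-upgrade
# CentOS / RHEL
sudo yum update kernel
# reboot afterwards so the new kernel is actually running
sudo reboot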
@Ruben Yes.
It appears that patching the guest is required to mitigate Meltdown on Xen VMs. If KVM, it should just be the hypervisor that needs the patches. Am I reading this correctly?
https://www.theregister.co.uk/2018/01/04/intel_amd_arm_cpu_vulnerability/
Correction: all VMs have to be patched because of Meltdown, not just those on Xen.
Thanks Linode for your quick response and adapting 🙂
Related tweet by WikiLeaks at https://twitter.com/wikileaks/status/948723793324838914
Official website: https://meltdownattack.com
Article: https://www.theregister.co.uk/2018/01/04/intel_amd_arm_cpu_vulnerability/
Severe design flaws that allow stealing of sensitive data from memory have been discovered in Intel chipsets, affecting Xen, KVM, and more.
Linode will fix this, I trust in them. So no worries here.
Linode, can you please provide the details of patching hypervisors?
I guess mitigation of memory reads between different VMs is often even more important than within a single VM.
I am relatively new to Linode, and have a low volume site. When critical issues like this are discovered and eventually fixed, it would be awesome to get an email with directions on what to do, if anything. Thanks !!
Could you clarify how “[upgrading your VM kernel] provides us a good foundation to work with while planning for full remediation”? I mean, less attack surface is great and all, but, how does it factor into your planning? I ass-u-me you’re going to be rebooting my host at some point anyway.
It’s great to see that you are on top of it. As long as we are informed ahead of time, we are fine with the reboot and security patches. As you know our customers don’t like downtimes. Also, please try to minimize fleet-wide reboot, which makes services completely unavailable.
Thanks Linode for the updates.
I guess being on a shared (virtualized) server, all linodes on that (physical) server have to apply the new kernel, for the protection to be really effective.
That’s a good start anyway.
Thanks for the updated kernel. It looks like it doesn't support the Red Hat/CentOS KPTI tunables for controlling KPTI and the related patch behavior https://community.centminmod.com/posts/57936/. Would be nice to have though – details at https://access.redhat.com/articles/3311301
cat /sys/kernel/debug/x86/pti_enabled
cat: /sys/kernel/debug/x86/pti_enabled: No such file or directory
cat /sys/kernel/debug/x86/ibpb_enabled
cat: /sys/kernel/debug/x86/ibpb_enabled: No such file or directory
cat /sys/kernel/debug/x86/ibrs_enabled
cat: /sys/kernel/debug/x86/ibrs_enabled: No such file or directory
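(For what it's worth, those debugfs tunables are Red Hat-specific backports; mainline kernels from roughly 4.14.13 / 4.15 onward expose mitigation status under sysfs instead. A rough check, assuming a kernel new enough to carry that interface:)
grep -r . /sys/devices/system/cpu/vulnerabilities/ 2>/dev/null
# example output (varies with kernel version and microcode):
# /sys/devices/system/cpu/vulnerabilities/meltdown:Mitigation: PTI
# /sys/devices/system/cpu/vulnerabilities/spectre_v1:Vulnerable
# /sys/devices/system/cpu/vulnerabilities/spectre_v2:Vulnerable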
Were any Spectre fixes added to the kernel? The PoC at https://github.com/crozone/SpectrePoC successfully runs = not fixed on an updated Linode with 4.14.12-x86_64-linode92 on CentOS 7.4 64-bit,
but on a dedicated server elsewhere with CentOS 7.4 64-bit and 3.10.0-693.11.6.el7.x86_64 the PoC fails to read = fixed.
@George: One of the Spectre vulns requires either recompiling EVERYTHING with mitigations or a microcode patch. The other Spectre vuln isn't fixable without newly architected hardware (which doesn't exist yet).
Meltdown is the one you apply the KPTI patch for (Intel only).
So Linode will probably have to issue a second round of reboots when their motherboard/OEM vendors get around to issuing a CPU microcode patch. (Or they recompile everything.)
Re: previous comment, ah, the upstream Linux kernel hasn't tackled Spectre yet according to http://kroah.com/log/blog/2018/01/06/meltdown-status/ but some distro backported kernels have, i.e. Red Hat/CentOS.
you say “If you are leveraging a Linode kernel, upon your next reboot your Linode will be upgraded to this version.”.
Is the easiest way to tell that by doing a `uname -a` and seeing if the string contains `-linode`, e.g. “4.9.50-x86_64-linode86”?
@Patrick,
If `uname -a` is showing “somethingsomething-linode”, your kernel is coming from Linode. The particular version is assigned each time the instance boots, so if it is out of date, just reboot your Linode.
If you aren’t sure about the kernel source or want to change it, you can log into the Linode Manager and click the “edit” link for your Linode’s Configuration Profile. The Kernel option under Boot is where this particular setting is stored.
Some of the OS patches require microcode changes. But I think the Linux ones work without them, and work differently once the microcode has been updated.
The microcode will come in the form of a firmware update from the vendor.
Is there a reason why the Intel microcode update cannot be used directly rather than waiting for vendors to repackage it?
I.e., https://downloadcenter.intel.com/download/27337/Linux-Processor-Microcode-Data-File
Apparently the earlier mentioned link is not the latest version; not sure of the direct Intel link, but it's apparently included in e.g. https://launchpad.net/ubuntu/+source/intel-microcode/3.20171215.1
The question remains the same regardless.
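A note, hedged as a general observation rather than anything specific to Linode's hosts: microcode has to be loaded on the physical machine (via a BIOS/firmware update or the host OS's early-microcode mechanism); a guest VM cannot load it itself, and the revision it reports may not be meaningful. On bare metal you can check the revision currently in effect with something like:
grep -m1 microcode /proc/cpuinfo
# microcode : 0xb000025   (example value only; yours will differ)
dmesg | grep -i microcode
# shows whether an updated revision was loaded at boot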
Excellent
Hi,
Any idea how long the reboots will take once they are scheduled?
Thanks!
Neil
We’ve allocated a two-hour window for maintenance; however, in many cases the actual downtime will be less. That being said, we would still recommend preparing for a full two hours of downtime.
The link given in “Jan 11 update” gives misleading information.
“Spectre targets the way modern CPUs work, regardless of speculative execution” is incorrect.
Both Spectre and Meltdown take advantage of “speculative execution”.
While Meltdown exploits a race condition in the code that runs after an exception is triggered, Spectre relies on code that is speculatively executed after an ‘if’ branch whose (uncached) condition “usually” holds but this time happens to be false, so the code that follows accesses an out-of-bounds array.
I used to use linode vps, very good network speed.
> Our schedule runs through January 18th.
Are you kidding me.. Why so long to patch Meltdown?
Rather let your customers stay vulnerable to the exploit than lose a few customers because of insufficient server capacity?
Hey Krian!
Due to the scope of this vulnerability, we are rolling out the patch in waves to balance downtime for customers as well as ensure the patches work effectively across the entire fleet. With the hasty release of the kernel patches, we are making sure the patches don’t cause more issues for our customers than they fix.
So, my service is spread across 5 Linodes. Bringing down my VMs, one at a time, at unknown intervals spread over the next several days, is going to cause me to have possibly *five* outages in the worst case (if my VMs are all on different physical hosts, which I have no way of knowing), rather than one.
@Neil Ticktin
It took one minute to fix.
According to Uptime Robot, the monitor (my linode VM, Linode 2048) is back UP (Keyword Exists) (It was down for 0 minutes and 49 seconds).
I am relatively new to Linode, and have a low volume site. Today I got the email with this subject “Linode Support Ticket 9678973 – Critical Maintenance for CPU Vulnerabilities (Meltdown)”. It would be great if you could provide an email explaining exactly what to do, if anything.
Thanks !!
Hey Maneesh, first of all, welcome to Linode! For these Critical Maintenance for CPU Vulnerabilities (Meltdown) tickets, there is no action required on your end. That being said, we do recommend making sure your Linode is set to the latest kernel, which you can read more about how to do here. We would also recommend taking a look at the Reboot Survival Guide to ensure these reboots and migrations have as little impact on your Linode as possible.
I’m still seeing “Maintenance is not yet scheduled” on my dashboard.
Warning would be good; it sounds like there is a schedule.
Hi Adrian! We don’t have a full schedule of exactly which host will undergo the mitigation at what time just yet; however, once we do set the schedule for the host your Linode is on, you will be alerted with a ticket and via the dashboard.
How long will the reboots take, once they are scheduled? Thanks in advance!
The maintenance window for hosts is 2 hours; however, we expect the reboots not to take the full two hours. Beyond that, I’m afraid I can’t really give a more accurate assessment of how long the reboot will take. Hope this helps!
Oh. It is serious issue that we need to mitigate.
The whole operation, from shutdown to server back up and running took 11 minutes.
My reboot was around 45 minutes. I moved my servers to DigitalOcean for now. I do not know what DO is doing, but as of the moment they do not promise downtime.
@Romel
That is hilarious. DO hasn’t done their reboots yet. So you leave a hosting provider for a serious, unavoidable reboot to another that has to do the same thing!
Needed that laugh this morning.
You’ve scheduled 2/3 of my cluster hosts for the same window, and given me no way to reschedule that. Support has not responded to my message about preventing downtime on my cluster by either rescheduling or migrating one of my nodes to another host. That means in 11 hours my cluster will be broken.
What happened linode? You used to be good about handling these outages, but last summer you started to suck. Please improve, I’d hate to end a 4 year relationship over this.
Hey Zach, I’m terribly sorry we haven’t gotten back to your ticket yet. Can you let us know the ticket number so I can take a look and see what we can do for you? Please also feel free to call our support line for more immediate assistance as well. Thank you for being patient with us during this process!
reaaaally wish you’d had a pool of servers that we could migrate onto, on our own schedule instead of this middle-of-the-night stuff. You’ve done it before.
I apologize, due to the scope and severity of these vulnerabilities, we are unable to be as flexible as we would like with the mitigation process. If you open a ticket or call the support help line, however, we will be happy to see what we can do for you.
@Zach: Ask support for server migration for one of the cluster nodes to another hypervisor before reboot…
Hmm.
The migration of my first Linode, once scheduled, could be started whenever it fit my schedule. Why not the next? (Both nodes are located in London.)
/Henning
I have a question about notifications. In the past, when downtime was scheduled for one of our systems, we’d receive an e-mail giving us plenty of advance notice. With the reboots for Meltdown/Spectre, however, we were left to polling our Linode Manager page to see what, if anything is scheduled.
From reading an article at Ars Technica, it appears that Linode learned of these vulnerabilities like the rest of us — with no advance notice. I cannot imagine the scrambling that must have caused!
As you would have appreciated advance notice, so too would we.
We’ve been hosted on Linode coming up on 4 years and currently have 9 servers here. One of the reboots took down 3 of our nodes at once. As one of these handled DNS, the lack of sufficient notice prevented us from redirecting to our redundant system (TLS propagation issues). Fortunately, that reboot happened quickly and our loss of service resulted in limited down time for our site (less than 15 minutes).
As further phases of remediation are planned, we expect that each of our systems will see at least one more reboot.
How can we get email notification of upcoming server reboots?
I completely understand where you’re coming from. For this round of reboots we aimed for at least 24 hours notice via ticket. For future reboots we’ll be able to provide notice further in advance, which will allow you more time to plan for any maintenance.
You’ll be notified of any ticket updates via the contact email address on file. If you’re not receiving notices via email let us know in a ticket and we’ll be happy to take a look.
Thank you for the prompt response!
Best wishes to you as you try and deal with rolling out fixes to thousands(?) of systems and deal with anxious customers.
We have continued to receive e-mail messages informing of system reboots… *after* they occurred. The last e-mail message *predicting* a reboot was sent on Jan 11, 2018. We had 7 systems reboot after that, and only learned of those ahead of time by constant scanning of our Linode Manager page.
This is both error-prone and time-consuming.
Our Linode Manager Notifications tab ( https://manager.linode.com/profile/notifications ) currently shows:
Linode Events Email
Events Email Notification
Notifications are currently: Enabled
So we *should* be receiving e-mail notifications in *advance* from now on? Or is there something else that must be done?
Yes, you should receive an email notification before any reboots from here on out. Email notifications should be much more timely moving forward. Even if there is a delay, the increased notice we’ll be providing for future maintenance will allow ample time for email notifications to reach you before any reboots take place.
That is a tremendous relief — Thank You!
That is one thing Linode has made a name for itself with: being open and forthright in its dealings with clients. Thanks for holding your standards high!
This is unbearable. Our linodes go down with no notice. Went down twice in the last 12 hours for about an hour each time. The scheduled downtimes have been VEEERRRYYY slowish, also up to 1 hour which is unheard of in professional hosting… Lots of our services affected, our users outraged… What are u going to do about it folks??
Hello George, we are terribly sorry for the inconvenience. Due to the scope and serious nature of the vulnerabilities, we had to perform the maintenance in an expeditious manner. You should have had tickets open informing you of the downtime with at least 24 hours’ notice. The maintenance window to address the vulnerability was 2 hours, however the maintenance itself was often shorter, around the one hour mark. Do you currently have a ticket open so we can take a look at your account and confirm exactly what happened? Thank you for being patient through this process.
Hello, my maintenance status shows phase 1 complete and future maintenance pending. I want to know when the migration plan will come to an end.
Now when I try to ping my Linode IP it always shows request timeout. Thank you!
This round of maintenance has completed, and your Linode should have returned to its previously booted state. I’m sorry to see the pings are timing out when you attempt to connect to the Linode. Do you currently have a ticket open with us so we can take a deeper look at this?
I have opened a ticket with you. Please help me solve the problem, thanks!
Can you please provide the ticket number? Thank you.
Hello, my maintenance status shows phase 1 complete and future maintenance pending. I want to know when the migration plan will come to an end.
Now when I try to ping my Linode IP it always shows request timeout.
My ticket number is 9869831.
Thank you!
We don’t currently have an ETA on the next maintenance window but tickets will be sent out once that is determined. We are currently waiting for patches to come from our hardware vendors.
We have a medium size fleet of Linodes and are seeing random loss of connectivity (requiring a reboot from the Linode control panel to fix) across this fleet. This is causing major problems; the last one to go was a primary database server, which took out most of the fleet, and the nature of the networking problems is not playing nice with our cluster failover (e.g. the failures are inconsistent, so the cluster might think the master it elected is fine, but outside the cluster it’s not visible – so the whole application is broken).
Linode support is aware of networking issues and advised us to reboot onto latest kernel versions which, unfortunately, do not seem to have resolved these issues.
Hi
I’ve set PasswordAuthentication no
But now I can’t log in with my public key any more.
The public key remains the same and nothing has changed.
It shows:
PuTTY Fatal Error
Disconnected: No supported authentication methods available (server sent: publickey,gssapi-keyex,gssapi-with-mic)
Should I wait until the maintenance is done so I can log in?
We’d like to look into this with you. Could you open a ticket for us and let us know the ticket number so we can locate it on our end?
You may be able to use Lish to log into your Linode if it’s up and running, even if SSH isn’t working:
https://www.linode.com/docs/guides/using-the-lish-console/
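If you haven’t used Lish over SSH before, the connection looks roughly like this (the username and gateway below are placeholders; use your own Linode Manager username and the gateway for your Linode’s data center, as listed in the guide above):
ssh -t exampleuser@lish-newark.linode.com
# then choose your Linode from the menu to reach its console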
ticket 9859656
@paul, what would be a medium sized fleet?
Linode, can you please provide the details of patching hypervisors?
I guess mitigation of memory reads between different VMs is often even more important than within a single VM.
We just completed fleet-wide reboots for Spectre v1/v2 mitigation and are now working through a few rounds of migrations in order to close this out. More information can be found on the status page here: https://status.linode.com/incidents/8dbtk37dwm67/.
None of your servers need BIOS/firmware updates for any of the recent CPU vulnerabilities? Would you consider providing at least some hardware with all AMD & Intel “management” features disabled? That seems like it would be a 100% unique offering for a cloud host.
Also, I would like to know if you use Intel ME for any management done in the datacenters – or you use other tools (which I think are more suited to managing datacenters).
Hi Wayne – In addition to switching to our latest patched kernel (5.1.5), we are addressing these vulnerabilities at the host level during scheduled maintenance windows. This guide has additional detailed information on these vulnerabilities as well as their mitigation.
As far as providing “hardware with all AMD & Intel ‘management’ features disabled,” I have added your suggestion to our internal tracker.
Regarding your last question about using Intel ME or other tools, we aren’t able to discuss specific information like this. Though if you have any other questions, let us know and we’ll be happy to provide as much information as we’re able to.