
The Twelve Days of Crisis - A Retrospective on Linode's Holiday DDoS Attacks

Over the twelve days between December 25th and January 5th, Linode saw more than a hundred denial-of-service attacks against every major part of our infrastructure, some of which seriously disrupted service for hundreds of thousands of Linode customers. I'd like to follow up on my earlier update by giving some more insight into how we were attacked and what we are doing to stop it from ever happening again.

Broadly speaking, the attacker moved down our stack in this order:

  • Layer 7 ("400 Bad Request") attacks against our public-facing websites
  • Volumetric attacks against our websites, authoritative nameservers, and other public services
  • Volumetric attacks against Linode network infrastructure
  • Volumetric attacks against our colocation provider's network infrastructure

Most of the attacks were simple volumetric attacks. A volumetric attack is the most common type of distributed denial-of-service (DDoS) attack, in which a cannon of garbage traffic is aimed at an IP address, wiping the intended victim off the Internet. It is the virtual equivalent of deliberately causing a traffic jam with a fleet of rental cars, and the proliferation of this type of attack has caused hundreds of billions of dollars in economic damage worldwide.

Linode normally sees several dozen volumetric attacks aimed at our customers each day. These attacks almost never affect the wider Linode network, however, because we protect ourselves with a tool called remote-triggered blackholing. When an IP address is "blackholed," the Internet collectively agrees to drop all traffic destined for that address, preventing both good and bad traffic from reaching it. For content networks like Linode, which have hundreds of thousands of IPs, blackholing is a blunt but crucial weapon in our arsenal, giving us the ability to "cut off a finger to save the hand" - that is, to sacrifice the customer who is being attacked in order to keep the others online.
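
To make the blackholing mechanism a little more concrete, here is a minimal sketch of what a remote-triggered blackhole announcement boils down to. The post does not describe Linode's actual tooling, so the helper function, the discard next-hop, and the use of the well-known BLACKHOLE community (65535:666, defined in RFC 7999) are illustrative assumptions, not a description of the real automation.

```python
# Illustrative sketch only -- not Linode's actual tooling.
# Remote-triggered blackholing (RTBH) in a nutshell: announce a host route (/32)
# for the victim IP, tagged with a "blackhole" BGP community, so that neighboring
# networks drop traffic to that one address while everything else keeps flowing.
from ipaddress import ip_address

BLACKHOLE_COMMUNITY = "65535:666"  # well-known BLACKHOLE community (RFC 7999)

def build_blackhole_announcement(victim_ip: str) -> dict:
    """Build the data for a host-route (/32) blackhole announcement."""
    victim = ip_address(victim_ip)          # validates the address
    return {
        "prefix": f"{victim}/32",           # only the attacked address is sacrificed
        "communities": [BLACKHOLE_COMMUNITY],
        "next_hop": "192.0.2.1",            # placeholder discard next-hop (documentation range)
    }

# "Cut off a finger to save the hand": drop traffic for one customer IP,
# keep the other hundreds of thousands of IPs reachable.
print(build_blackhole_announcement("198.51.100.23"))
```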

Blackholing fails as an effective mitigator under one obvious but important circumstance: when the IP being attacked - say, a critical piece of infrastructure - cannot go offline without taking others down with it. The examples that usually come to mind are "servers of servers," such as API endpoints or DNS servers, which form the foundation of other infrastructure. While many of the attacks targeted our "servers of servers," the attacks aimed directly at our network infrastructure and that of our colocation providers proved to be the most difficult to mitigate.

Secondary addresses

The attacks on our network infrastructure were relatively straightforward, but mitigating them was not. As an artifact of history, we segment customers into individual /24 subnets, which means our routers must have a "secondary" IP address inside each of those subnets for customers to use as their network gateways.
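
As an illustration of how those secondary addresses pile up, here is a small sketch using Python's ipaddress module. The aggregate block and the "first usable address is the gateway" convention are placeholder assumptions for illustration, not Linode's actual numbering plan.

```python
from ipaddress import ip_network

# Placeholder aggregate (real customer blocks are public address space).
aggregate = ip_network("10.0.0.0/20")        # splits into 16 customer /24s

gateways = []
for customer_subnet in aggregate.subnets(new_prefix=24):
    gateway = next(customer_subnet.hosts())  # assumed convention: first usable address
    gateways.append(gateway)                 # one more secondary IP living on the router

print(len(gateways), "router secondary addresses, each one a reachable attack target")
print(gateways[:3])                          # 10.0.0.1, 10.0.1.1, 10.0.2.1
```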

Over time, our routers have accumulated hundreds of these secondary addresses, each one a potential target for attack. Of course, this was not the first time our routers had been attacked directly. Ordinarily, special measures are taken to send blackhole announcements to our upstreams without blackholing inside our own core, stopping the attack while allowing customer traffic to pass as usual. However, we were not prepared for the scenario in which someone rapidly and unpredictably attacks many dozens of different secondary IPs on our routers. There were several reasons for this. First, mitigating attacks on network gear required manual intervention by network engineers, which was slow and error-prone. Second, our upstream providers could only accept a limited number of blackhole advertisements, in order to limit the potential for damage in case of mistakes.
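
The workflow described above - announce blackholes to the upstreams only, never inside the core, and within a limited quota per upstream - could be sketched roughly as follows. All names, roles, and the per-upstream limit are hypothetical; this only shows the shape of the policy, not Linode's real automation.

```python
# Hypothetical sketch of the policy described above.
MAX_BLACKHOLES_PER_UPSTREAM = 10             # assumed quota imposed by an upstream

def announce_blackhole(victim_ip, sessions, announced):
    for session in sessions:
        if session["role"] != "upstream-transit":
            continue                         # never blackhole inside our own core
        routes = announced.setdefault(session["peer"], set())
        if len(routes) >= MAX_BLACKHOLES_PER_UPSTREAM:
            raise RuntimeError(f'{session["peer"]} will not accept more blackhole routes')
        routes.add(f"{victim_ip}/32")        # the upstream drops the flood before it reaches us

sessions = [
    {"peer": "transit-a", "role": "upstream-transit"},
    {"peer": "core-rtr-2", "role": "core"},
]
announced = {}
announce_blackhole("192.0.2.45", sessions, announced)
print(announced)                             # {'transit-a': {'192.0.2.45/32'}}
```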

After several days of playing cat-and-mouse with the attacker, we were able to work with our colocation providers to either blackhole all of our secondary addresses or, where blackholing was not possible, to divert the traffic at the edges of their transit providers' networks instead.

Cross-connects

The attacks on our colocation providers were just as straightforward, but even more difficult to mitigate. Once our routers could no longer be attacked directly, our colocation partners and their transit providers became the next logical target - specifically, their cross-connects. A cross-connect can generally be thought of as the physical link between any two routers on the Internet. Each side of that physical link needs an IP address so the two routers can communicate with each other, and it was those IP addresses that were targeted.
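
The addressing of a cross-connect can be illustrated with a tiny sketch: a point-to-point link is typically numbered out of a /31 or /30, and its two endpoint addresses are ordinary, routable IPs. The addresses below come from the documentation range and are placeholders, not the ones that were attacked.

```python
from ipaddress import ip_network

link = ip_network("192.0.2.0/31")            # point-to-point link between two routers
our_side, their_side = list(link)            # a /31 has exactly two addresses

print(f"colo-facing end:    {our_side}")     # e.g. the colocation provider's router
print(f"transit-facing end: {their_side}")   # e.g. their transit provider's router
# Flooding either endpoint congests the physical link itself, taking down
# everything carried over it without touching a single Linode-owned IP.
```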

As with our own infrastructure, this method of attack was not new in and of itself. What made it so effective was the rapidity and unpredictability of the attacks. In many of our datacenters, dozens of different IPs within the upstream networks were attacked, requiring a level of focus and coordination between our colocation partners and their transit providers that was difficult to maintain. Our longest outage by far - over 30 hours in Atlanta - can be directly attributed to frequent breakdowns in communication between Linode staff and people who were sometimes four degrees removed from us. We were eventually able to close this attack vector entirely, after some stubborn transit providers finally acknowledged that their infrastructure was under attack and successfully took measures to stop the attacks.

Lessons learned

On a personal level, we are embarrassed that something like this could happen, and we have learned some hard lessons from the experience.

Lesson one: Don't rely on middlemen. In hindsight, we believe the longer outages could have been avoided if we had not relied on our colocation partners for IP transit. There are two specific reasons for this. First, in several cases we were led to believe that our colocation providers simply had more IP transit capacity than they actually did. A few times, the amount of attack traffic directed at Linode was so large that our colocation providers had no choice but to temporarily de-peer the Linode network until the attacks ended. Second, successfully mitigating some of the more nuanced attacks required the direct involvement of senior network engineers at various Tier 1 providers. At 4 a.m. on a holiday weekend, our colocation partners became an extra, unnecessary barrier between us and the people who could fix our problems.

Lesson two: Absorb larger attacks. Linode's capacity-management strategy for IP transit was simple: when our daily peak utilization begins to approach 50% of our total capacity, it is time to get more links. This strategy is standard for carrier networks, but we now know it is inadequate for content networks like ours. To put this into numbers: our smaller datacenter networks have a total IP transit capacity of 40 Gbit/s. That may sound like a lot of capacity to many of you, but in the context of an 80 Gbit/s DDoS that cannot be blackholed, a headroom of only 20 Gbit/s leaves us with crippling packet loss for the duration of the attack.
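
For readers who want the arithmetic spelled out, here is the back-of-the-envelope calculation behind those numbers, using only the figures stated in this post (the 50% capacity rule, 40 Gbit/s of transit at the smaller sites, and an 80 Gbit/s attack).

```python
# Figures from the post.
total_capacity_gbps = 40
peak_utilization    = 0.50                   # the old "add links when peak nears 50%" rule
attack_gbps         = 80

headroom_gbps  = total_capacity_gbps * (1 - peak_utilization)   # 20 Gbit/s spare
shortfall_gbps = attack_gbps - headroom_gbps                     # 60 Gbit/s with nowhere to go

print(f"headroom:  {headroom_gbps:.0f} Gbit/s")
print(f"shortfall: {shortfall_gbps:.0f} Gbit/s -> crippling packet loss")
# Against the 200 Gbit/s per site announced later in this post, the same
# 80 Gbit/s attack would fit within the available headroom and could be absorbed.
```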

Lesson three: Let customers know what's going on. It's important that we own up to our failures, and our lack of detailed communication during the first days of the attack was a big one. Providing detailed technical updates during a crisis can only be done by people who have detailed knowledge of the current state of affairs. Normally, those people are also the ones doing the firefighting. After things calmed down and we reviewed our public communications, we concluded that our fear of phrasing something poorly and causing undue panic led us to be more ambiguous in our status updates than we should have been. That was wrong, and going forward a designated technical point of contact will be responsible for communicating in detail during major events like this one. In addition, customers can subscribe to email and SMS notifications about service issues via the "Subscribe to Updates" link on our status page.

Our future is brighter than our past

With these lessons in mind, we'd like to let you know how we are putting them into practice. First, the easy part: we have mitigated the threat of attacks against our public-facing servers by implementing DDoS mitigation. Our nameservers are now protected by Cloudflare, and our websites are now protected by powerful commercial traffic-scrubbing appliances. In addition, we have made sure that the emergency measures we took during the holiday attacks are made permanent.

These measures alone put us in a position where we are confident that the kind of attacks that occurred over the holidays cannot happen again. Still, we need to do more. That's why I'm pleased to announce today that Linode is overhauling our entire datacenter connectivity strategy and bringing 200 gigabits of transit and peering capacity from key regional sites into each of our locations.

Here is an overview of the upcoming infrastructure improvements at our Newark datacenter, which will be the first to receive these capacity upgrades.

At the heart of this architecture are the optical transport networks that we have already begun building out. These networks will provide fully diverse paths to some of the most important PoPs in the region, giving Linode access to hundreds of different carrier options and thousands of direct peering partners. Compared to our existing architecture, the benefits of this upgrade are obvious. We will take control of our entire infrastructure, right up to the edge of the Internet. That means we will no longer depend on middlemen for IP transit and will instead be in direct partnership with the carriers we rely on to deliver our service.

In addition, Linode will quintuple the amount of bandwidth currently available to us, allowing us to absorb extremely large DDoS attacks until they are properly mitigated. As attack sizes grow in the future, this architecture will scale quickly to meet demand without large new investments.

Final words

Finally, a sincere apology is in order. As a company that hosts critical infrastructure for our customers, we are entrusted with the responsibility of keeping that infrastructure online. We hope the transparency and forward thinking in this post can earn back some of that trust. We also want to thank you for your kind words of understanding and support. Many of us had our holidays ruined by these relentless attacks, and that is difficult to explain to our loved ones. The support from the community has truly helped. We encourage you to post your questions or comments below.


Comments (67)

  1. Author Photo

    Thanks for your great work. My VPS was running well during these days.

  2. Author Photo

    Good postmortem analysis – thanks for being candid.

  3. Author Photo

    Thanks for being honest and forthcoming about this and the issues you addressed, both on the technical and PR sides, as well as the steps you are taking to better your company.

    Kimo.

  4. Author Photo

    You people are awesome and have great stamina. We are satisfied customer from Pakistan.

  5. Author Photo

    I’ll never stop buying linodes!!

  6. Author Photo

    You guys are rock stars in my book, and I appreciate the transparency. More tech companies need to live and breathe that these days, or else find themselves losing the game to cheaper competitors.

    While I haven’t been a fan of how some past incidents were handled, I still give Linode a 5-star rating. Good job!

  7. Author Photo

    Things happen. Those of us who network or sysadmin know that when you're fighting fires, figuring out what is going on, and fielding calls from angry clients, the last thing you have time for is updating everyone. Hell…you may not even know what all is going on for a couple days or more with huge attacks.

    This is a good postmortem and your ability to learn and adapt and invest in your own infrastructure is why I love and continue to be a Linode fanatic.

    Keep it up you guys. Sorry Christmas was such a bummer.

  8. Author Photo

    May the Network be with you!

  9. Author Photo

    Can’t thank the Linode team enough for your dedication. The livelihood of thousands rests in your hands; I feel like this whole event further proves how well qualified you guys are to be doing what you’re doing.

  10. Author Photo

    The only part of this that really bothers me is the idea that if I get a DDOS, Linode is just going to blackhole me, and me alone. Doesn’t that mean that I have to give in to ransom demands from attackers?

  11. Author Photo

    I really appreciate this. We were waiting for this to take the decision if we will stay in linode or move away, and we are staying.

    I strongly agree that being more transparent would have helped a LOT.

    I’d like to know, though, when the above change is scheduled for the rest of the datacenters. I’m not using Newark right now and would like to know when my datacenter will have it : )

    Thanks a lot,
    Rodrigo

  12. Author Photo

    @Mogden – for people who are attacked regularly, we suggest Cloudflare or others in the DDoS protection market. I’m not sure what the future holds on this subject, but rest assured that it really bothers us too.

  13. Author Photo

    Thanks for the update. Any time frame for other datacenters to be updated? My linodes are in Atlanta and we suffered almost three days of downtime.

    Cheers

  14. Author Photo

    We had 2 linodes, one of them in the Atlanta datacenter. We did not experience any issues during the holidays, but I was worried. Thanks for the explanation and amazing work. I honestly hope your families can understand the situation.

    Amazing company!

  15. Author Photo

    Like Rodrigo, this is a huge thing to us. I was honestly feeling that it was going the usual corporate way with silence and deniability, just waiting for the furore to die down. It really makes a difference to hear not only the details of the response/mitigation activities, which we appreciate, but also acknowledgement of the position we were put into when communication was sparse.

    It goes a long way.

    Thanks again.

    Mark.

  16. Author Photo
    Stefan Winer @ CloudFlare

    Great to hear we could help you get protected.

    swiner@cloudflare.com

  17. Author Photo

    @mogden – if you’re the one being DDoS’d then you deserve to be blackholed. I don’t pay for my linodes for you to be targeted with a DDoS and my linodes taken down!!

  18. Author Photo

    Thank you for the analysis and a break down of what took place, and most importantly, thank you for being honest with customers!

    Cheers!

  19. Author Photo

    I’m obviously a huge fan of Linode, but I wonder if this attack will force them to re-evaluate their “3 strikes” policy towards hosted sites which come under DDoS attack. As this attack should have taught them, it’s indiscriminate, and there’s not a whole lot a small website owner can do to mitigate it. We rely on Linode to be able to deal with this, and punishing the victim is hardly a fair solution.

  20. Author Photo

    And attacks started minutes after posting updates. http://status.linode.com/incidents/mkcgnmjmnnln

    I have a message for Linode, especially Chris: please invest more and more in infrastructure if you want to stay in the game; otherwise, you’ll be overtaken by heavily funded startups in this domain. We know you have an innovative mind and excellent technology, but that alone is not sufficient for you to win in this domain. I like the performance and flexibility of Linode but moved to DO just because I needed to set up my stuff in the Japan and Singapore datacenters, and the Japan DC is sold out. 3 out of 6 locations are sold out and you are not yet expanding? How will you compete?

    Come out of your box and look at your neighbors. It was painful for me to move to Digital Ocean, but I had to take this decision. I am still using Linode for some of my stuff and will continue using it until I need redundancy or you expand.

  21. Author Photo

    There’s nothing that I love more than the amount of technical detail that you provide to us on these cases, and even with some minor updates.

    I love being a Linode customer, no DDoS will get that away from me 😀

  22. Author Photo

    Thanks for this post, Alex. This was a rough period for everyone involved and affected but I am extremely impressed by Linode making the effort to hopefully prevent the same scenario from happening again.

    There were many lessons to be learned from this – both for Linode and for customers.

    Linode appears to have realized what they needed to do and that is fantastic. Instead of saying sh*t happens and going about business as usual you are actively working to make sure it doesn’t happen again. Well done.

    We (customers) need to cover our own bases too. For anything critical or even slightly important you need to have a plan in place in the event of a Linode outage (regardless of the reason).

    I have now split some of my services and am far better placed to recover quickly in the event something like this were to happen again. Linode had always been so reliable that I got complacent. Lesson definitely learned.

    In my case my costs have now increased as I am now paying other providers in addition to what I have and will continue to pay Linode, but the ability to keep some important services online is worth it.

    Thank you to everyone at Linode for your hard work and for looking out for your customers.

  23. Author Photo

    Some of our big clients suffered with the downtime on those days but, with several VPS and more coming online each day, we never accepted any offer from other players. This kind of behaviour makes us confident in the team and gives us peace of mind that we’re in good hands.

    Thank you for the update and respect with your customers.

    Hostcare Internet

  24. Author Photo

    Thank you for being open, good luck with your new defences and I hope that you catch up on your family time!

  25. Author Photo

    Linode user here. Thanks for the transparency. I wasn’t directly affected but I appreciate the openness on the issue. It’s a welcome change to most companies now. I plan to keep using Linode just because of how cool you all handled the situation. Keep up the good work!

  26. Author Photo

    Cloudflare will probably help with your DDoS, but they aren’t infallible, any more than any other vendor. What happens when they get hit really hard themselves? I’d recommend getting a second DNS provider.

    See Also: https://blog.thousandeyes.com/ultradns-ddos-affects-major-web-services/
    https://blog.thousandeyes.com/ultradns-outage-october-2015/

  27. Author Photo

    I was beginning to wonder if such a note would arrive. The explanation is useful and I’m feeling as though things are safer than before.

  28. Author Photo

    Thank you for being transparent about what happened. That was a truly hellish attack. Getting slammed with a sophisticated and highly targeted 80 Gbit DDoS is stressful for any network admin and I’m glad that Linode succeeded in weathering the storm.

  29. Author Photo

    I am really impressed with way you have handled this whole situation, your company’s honesty and explanation is more than anyone could have expected. I’m sure there were many hours invested, not only in locating and fixing the problem on top of adding the double protection; but even in your letter to your customers. I hope all your customers are as loyal to your company as you have been with them. Way to step up your game, keep up the good work. Wishes for much more success……

  30. Author Photo

    Thank you very much for the detailed breakdown of what went wrong and what you plan to do to prevent this in the future. I have to say though, technical reasons and justifications aside, Linode has a lot to learn in regards to communication. I know you acknowledge that in your blog post, but for many people (myself included) it’s too little way too late. It’s taken you 30 days to write a blog post that could’ve been written in hours. For 30 days people have been sitting on the fence wondering exactly what you guys are doing and whether or not they should jump ship. For many people (myself included), the absence of this response, and the growing feeling that after so long you were just going to push this to the side and hope it went away, directly contributed to Linode losing a significant amount of business from us.

    I don’t want my response to turn into some Linode bashing post, but I want you to be aware that your failure to provide sufficient information and responses is the biggest problem here – for me, at least. It hit your reputation hard and caused us to lose a significant amount of trust in your company and services. DDoS attacks happen, and we know you guys were working extremely hard to deal with those. You reminded us often enough in your status updates. What we really wanted to know was that the worst was over and that you identified your weaknesses and were addressing those. The longer we had to wait for this information, the less trust we had/have in you.

    I’d like to end this on a more positive note. All of the above said, your services are fantastic overall and I’d love to come back to Linode in the future, once you’ve performed all of the changes you have mentioned here. Just please, improve on your communications!

  31. Author Photo
    Saint Aardvark the Carpeted

    Long-time Linode customer…I wasn’t affected by the outage, but I’m really glad you’ve taken the time to write up what happened. Thanks for being transparent and generally awesome.

  32. Author Photo

    Alex, this caught my attention: “… requiring a level of focus and coordination between our colocation partners and their transit providers which was difficult to maintain.”

    How did you structure this communication? What tools / technologies did you use or tried to use?

  33. Author Photo

    This is a nicely put article. I only have amazing things to say about Linode and its staff. Awesome post!

  34. Author Photo

    As a long time customer and a fellow network administrator I just wanted to say that I do really appreciate all your hard work. Respect.

  35. Author Photo

    Sounds an exciting project Alex, good luck!

    Any news on continued security farces at Linode? and ‘The Best Practices not invented here’ approach.. For example to reset 2FA

    Should you need us to disable your Two-Factor Authentication, the following information is required:

    An image of the front and back of the payment card on file, which clearly shows both the last 6 digits and owner of the card.
    An image of the front and back of the matching government-issued photo ID.

    A) Photoshop CC in 2 mins, you have no idea what my CC should look like.
    B) You can’t verify government ID so say 5 minute photoshop.

    Woohoo for 2FA, known as 2 f… alls

  36. Author Photo

    Thanks for the update, and letting us know that things will be better handled in the future. Both technically and on the communication front.

    Any idea who attacked and why?

  37. Author Photo

    Linode – you are the best. Thanks for your service.

  38. Author Photo

    Thanks for the update. As a long time linode customer, it is appreciated.

    For you guys complaining about being kicked out in case of a DDoS, I recommend getting DDoS protection for your linodes. There are a lot of cheap options right there that can be integrated easily.

    Some one recommended CloudFlare and they are great. You can also look at Sucuri:

    http://sucuri.net/website-firewall/

    Or Incapsula:

    https://incapsula.com

    Both great products and solutions. Stay safe!

  39. Author Photo

    200G? This year’s DDoS was 800 Gbps…

  40. Author Photo

    Good postmortem. Now can you explain what happened with the “leaked” credentials and the fact that we had to reset the passwords?
    Thank you

  41. Author Photo

    These attacks could happen to anyone and any provider. Keep up the good work!

  42. Author Photo

    Great article and the right way to handle these kinds of problems. Transparency and constructive retros are the way to go.

  43. Author Photo

    I think you did great job considering the size of the attack. That’s why continue to use Linode for my virtual machines. Thank you for your support and keep up the good work.

  44. Author Photo

    Thank you for the clear and concise explanation. I look forward to you rolling out your upgrades and continue to be a happy customer with Linode.

  45. Author Photo

    Cisco routers, seriously?

    Juniper high end routers take a gigantic steaming dump all over Cisco.

  46. Author Photo

    @Jake that’s essentially what ASRs are 😉

  47. Author Photo

    If you want to do it on the cheap side and be safe, get some cheaper / better equipment from Huawei (give them a call). You might think the Chinese cannot be better than Cisco, but Cisco is now also made in China. Also, I’m sorry, but you need some DDoS protection (expensive). You cannot just nullroute your customers… you have to protect them. If the cheap OVH company can do it, why can’t you…

    Looks like you guys need to hire someone with real experience in network engineering (worked at ISP level), not just some cheap undergraduate out of university.

    You need to rely more on anycast, have reserved capacity, etc.

    After reading this, I would not host my sites on linode. You guys look amateur (sorry).

  48. Author Photo

    I appreciate this honest insight, but I’ve moved back to a local server since these attacks made access to my Linode difficult or impossible, and always-on, always-accessible was my main reason for moving to Linode in the first place. Sorry, and better luck in the future.

  49. Author Photo
    Patrick Burroughs (Celti)

    I like the transparency, even delayed. I like that you’re taking steps. I DON’T like that your “security appliances” block ALL ICMP packets including the “Packet Too Big” messages required for path MTU discovery and breaking my ability to access the Manager over my VPN.

  50. Author Photo

    Buying blended internet direct from your colo provider is a bad idea (as it seems you have learned the hardway)

    You should be getting your transit direct from diverse carriers… this is networking 101

  51. Author Photo

    Love the armchair quarterbacks giving their input. Now, for you QBs, where is your massive company you are running and making decisions and learning lessons from? Oh you don’t have one and you don’t work for one? Sit back and let Linode do their job, they are by far the best provider out there. The cost of this type of infrastructure is gigantic and you wanna-be QBs have no idea what it takes to run a business.

    Great job Linode. I know I’ve made the right choice by using you.

  52. Author Photo

    Excellent. I knew you guys were “on it”. I really appreciate the detail you provided.

  53. Author Photo

    Thank you for releasing this honest and detailed report

  54. Author Photo

    Regarding CloudFlare, did you shop around for any other DNS DDOS protection services? The reason I ask is because CloudFlare happily caches too many dodgy websites. Some sources that may be of interest:
    http://news.netcraft.com/archives/2015/10/12/certificate-authorities-issue-hundreds-of-deceptive-ssl-certificates-to-fraudsters.html (large number of phishing certificates issued by CloudFlare)
    http://www.crimeflare.com (non-profit that investigates CloudFlare and its customers)

  55. Author Photo

    I appreciate the update, but I find it a bit late too.
    Also, I don’t really get why Mr. Forster is signing this post.
    And don’t get me wrong, I have nothing against him; I don’t doubt his intentions or knowledge.
    But I expected a statement from someone at the top of the food chain. This was also one of my main problems when the events happened: it felt like nobody from top management cared, until one of the engineers realized they couldn’t stay silent anymore.
    I still have that feeling, and it is pretty alarming.

  56. Author Photo

    It’s time to move to IPv6-only internet. Attacking a single address will become impractical if a host can have millions of them changed automatically in an unpredictable way.

  57. Author Photo

    Appreciate the info.

    It is a minor point, I know, but status.linode.com should either be un-available over https, or have its own cert.

    try this in chrome…

    https://status.linode.com

  58. Author Photo

    Thanks Linode Team for acknowledging your challenges, and courageously taking adaptive actions 🙂

  59. Author Photo

    Great job! I didn’t know such a story was ongoing, since my site was up all the time. Really appreciate all the hard work of the LINODE support team!

  60. Author Photo

    Thank you for the very interesting update. Best of luck for the future.

    I’m also quite curious on who could benefit from such attacks in the first place.

  61. Author Photo

    I am using Cloud Flare to protect the blog from DDOS attack, is there any other best application available to replace cloudflare? Is there a way to stop the DDOS or brute force attack for wordpress sites?

  62. Author Photo

    Great write up & good to see such honesty and transparency. I think it is important for readers of this to understand that DDoS attacks can affect anyone at any time on any host. Obviously when you are on the receiving end of a nullroute it is not nice, but it’s important to note that providers do not want you to have downtime; if a DDoS directed at you is affecting other customers and you don’t have some form of mitigation, there is seldom any other option than to take this action. As they said, ‘cut off a finger to save the hand’. I’m quite sure that if someone else is being DDoS’d you would prefer to see them nullrouted than have your own service impacted, so that has to work both ways in my eyes.

    It’s important to look at the issue objectively – DDoS attacks are not going to go away and really if you have concerns around protection then this does mean paying for a mitigation service, especially if outages will be more costly than the monthly sub.

    @Srinivas – You’ll need a CloudFlare business plan for DDoS attack mitigation. Simply being behind CloudFlare on a free plan won’t give you this protection, and there isn’t another service that I am aware of that provides free DDoS protection without at least having some other paid service. Keep in mind that CloudFlare isn’t an application, but rather a service which is totally separate from your Wordpress sites. If you want to run something locally to stop a brute force attack then have a look at a plugin such as Wordfence, which is very effective. Another good plugin is iQ Block Country which uses GeoLocation – you can lock down your back end to whitelisted countries only. Plugins are not infallible, but they definitely add extra security. Another good way to stop brute force attacks is by not using obvious account names for the administration area of your site…lots of tools will try to brute force on usernames like ‘admin’ – as with any security approach, it’s all about the layers!

    As a final note, I do always find it interesting when posts like this attract the critics who dish out ‘advice’ about how X and Y should have already been done, or that they are amateur, etc. I would like to know which fairytale jobs they have at companies that have everything 100% perfect with 100% uptime and 0% chance of outages or attacks…

    Fair play Linode, tip of the cap.

  63. Author Photo

    Thank you for your honesty and transparency. Very very good post. Thank you for your hard work during the attacks even on holidays. Keep pushing Linode Team!

  64. Author Photo

    Yeah, thank you also for your transparency. I remember what happened; it brought tears, and I think, like many people, we planned to move to another company. Even a few days ago I compared with AWS, reading their docs for RDS, EC2, ELB, S3, etc., but even with far fewer available options and possibilities compared to Amazon, Linode remains for us the better company, with great, responsive support, providing faster and cheaper solutions.

  65. Author Photo

    I started with Linode 4 years ago, I loved the service and I am not going to go away from you guys. I know how painful firefighting could be, thanks to your team for working so hard. And please do everything that could prevent this from repeating.

  66. Author Photo

    Hello,
    in the article you said the following

    “our nameservers are now protected by Cloudflare, and our websites are now protected by powerful commercial traffic scrubbing appliances.”

    but it seems it is not anymore. Did you move away from Cloudflare protection? If yes, then why? Many hosting giants now rely on Cloudflare protection.

  67. Author Photo

    Thank you for this update and the recent additional high memory and $5 options.
