
GitHub Survived the Biggest DDoS Attack Ever Recorded (wired.com)

A 1.35 terabit-per-second DDoS attack hit GitHub all at once last Wednesday. "It was the most powerful distributed denial of service attack recorded to date -- and it used an increasingly popular DDoS method, no botnet required," reports Wired. From the report: GitHub briefly struggled with intermittent outages as a digital system assessed the situation. Within 10 minutes it had automatically called for help from its DDoS mitigation service, Akamai Prolexic. Prolexic took over as an intermediary, routing all the traffic coming into and out of GitHub, and sent the data through its scrubbing centers to weed out and block malicious packets. After eight minutes, attackers relented and the assault dropped off. "We modeled our capacity based on five times the biggest attack that the internet has ever seen," Josh Shaul, vice president of web security at Akamai, told WIRED hours after the GitHub attack ended. "So I would have been certain that we could handle 1.3 Tbps, but at the same time we never had a terabit and a half come in all at once. It's one thing to have the confidence. It's another thing to see it actually play out how you'd hope."

Akamai defended against the attack in a number of ways. In addition to Prolexic's general DDoS defense infrastructure, the firm had also recently implemented specific mitigations for a type of DDoS attack stemming from so-called memcached servers. These database caching systems work to speed networks and websites, but they aren't meant to be exposed on the public internet; anyone can query them, and they'll likewise respond to anyone. About 100,000 memcached servers, mostly owned by businesses and other institutions, currently sit exposed online with no authentication protection, meaning an attacker can access them, and send them a special command packet that the server will respond to with a much larger reply.
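
To make the amplification mechanism concrete, here is a minimal Python sketch of measuring the effect against a memcached test instance you control; the localhost address, port, and UDP frame bytes assume a default-style install with UDP enabled (newer releases disable UDP by default), and this illustrates only the size mismatch, not the source-address spoofing an attacker adds on top:

    import socket

    # Assumption: a memcached instance YOU control, running with UDP
    # enabled on 127.0.0.1:11211. Never probe servers you do not own.
    HOST, PORT = "127.0.0.1", 11211

    # memcached's UDP frame header: request ID, sequence number, total
    # datagrams, reserved -- four 16-bit big-endian fields.
    frame = b"\x00\x01" + b"\x00\x00" + b"\x00\x01" + b"\x00\x00"
    request = frame + b"stats\r\n"

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2)
    sock.sendto(request, (HOST, PORT))
    reply, _ = sock.recvfrom(65535)  # counts only the first reply datagram

    print(f"request: {len(request)} bytes, first reply datagram: {len(reply)} bytes")
    print(f"amplification (understated): ~{len(reply) / len(request):.0f}x")

The attack simply spoofs the victim's address as the UDP source, so that oversized reply, multiplied across tens of thousands of exposed servers, lands on the target instead of the sender.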

  • TFA doesn't give any detail around this. How does one generate that much traffic without the need of a botnet?

    • Re: (Score:3, Informative)

      by Anonymous Coward

      TFS does give this link: https://blog.cloudflare.com/memcrashed-major-amplification-attacks-from-port-11211/

      So the answer is, vulnerable memcached servers amplify the packets for anyone who can IP spoof. The attacker doesn't need a botnet, because one accidentally exists already.

    • How does one generate that much traffic without the need of a botnet?

      Maybe it's one of those "unstoppable" weapons that Putin has been bragging about . . . ?

      If so, you won't be able to find any information about it . . . unless you hire Russian Hackers to dig it up . . .

    • TFA doesn't give any detail around this. How does one generate that much traffic without the need of a botnet?

      It depends on what you mean by "botnet". The attacker sent spoofed memcached [wikipedia.org] requests to UDP servers, which then amplified and reflected the traffic toward the victim. In some sense, these UDP servers are acting as a "botnet" even though they are not running any malware controlled by the hacker. More info here [cloudflare.com].

      A bigger question is: Cui bono? Why is someone attacking Github?

      • In some sense, these UDP servers are acting as a "botnet" even though they are not running any malware controlled by the hacker.

        Well, if an external actor can force these machines to do their bidding at a time of their choosing - in what sense are they NOT part of a botnet?

    • by EvilSS ( 557649 )
      It is an amplification attack. The attacker sends a few bytes in the request, with a spoofed source IP. The server responds to the spoofed IP address with a flood of data the attacker requested. It's like calling Pizza Hut and having 100,000,000 pizzas delivered to your enemy's house.
    • Because too many network admins don't bother to read and implement BCP 38 [ietf.org], on top of too many network admins leaving memcached servers publicly accessible.
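
      For reference, the check BCP 38 asks edge networks to perform is conceptually tiny; here is a rough Python illustration (real networks do this with router ACLs or uRPF rather than code, and the prefix below is just a documentation range):

        import ipaddress

        # Only forward packets whose claimed source address belongs to the
        # prefixes actually assigned to the customer-facing interface.
        CUSTOMER_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]  # illustrative

        def permit_egress(src_ip: str) -> bool:
            addr = ipaddress.ip_address(src_ip)
            return any(addr in net for net in CUSTOMER_PREFIXES)

        print(permit_egress("203.0.113.7"))  # True: legitimate customer source
        print(permit_egress("192.0.2.99"))   # False: spoofed source, drop it

      Without that egress filtering, anyone can stamp the victim's address onto their memcached queries.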

      • by rtb61 ( 674572 )

        So clearly a penalty should be applied. Whilst they were tricked into the attack, they were still committing the attack. Time for the courts to step in: those who committed the actual attack should be hauled before the courts to prove they did not do it willingly and, if they cannot, pay the criminal penalty for the attack. Ignorance is no excuse; that is their chosen profession, that is their source of income, they have professional liability and should be held to account.

        Should not countries suppl

    • by Burdell ( 228580 )

      The memcached traffic amplification factor is around 15000x, so to get 1.3Tbps of attack traffic requires fewer than 90 hosts with gigabit Internet access.
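
      Rough arithmetic, treating the commonly cited ~15,000x as an upper bound (actual factors vary by server and query):

        attack_bps = 1.35e12       # 1.35 Tbps observed at GitHub
        amplification = 15_000     # reply bytes per spoofed request byte (upper bound)
        request_bps = attack_bps / amplification
        print(f"~{request_bps / 1e6:.0f} Mbps of spoofed requests in total")  # ~90 Mbps

      So the spoofed request traffic itself is modest; the limiting factor for the attacker is finding exposed reflectors, not raw upstream bandwidth.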

      • by pots ( 5047349 )
        It would take more than that, I assume. The whole point of using a DDoS attack, instead of a plain DoS, is that you're attacking from many vectors. If there's only a small number of misbehaving servers, then those can just be blocked.
  • Why do people do stupid shit like this? GitHub is neither a bad actor nor deserving of this. Why don't they go after the fucking Trump Organization or Oracle or something like that?
    • by Anonymous Coward

      These kinds of attacks are often used to mask another attack against the systems. I would want to be extra vigilant about the integrity of accounts and projects if I were involved with this. That said, the fact that nerd rage is the best and worst kind of rage continues to hold, so it might just be a single retaliatory personality at large.

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      It happened for the same reason it happened in 2015:

      https://www.theverge.com/2015/... [theverge.com]

      In short, activists inside and outside of China are using GitHub to write and share code for software to circumvent the government's "Great Firewall" in one way or another...they did not succeed in taking GitHub offline, so they decided to show their technical prowess and their sheer (if amplified) bandwidth abuse potential by conducting a second attack. They're still trying to take GitHub offline, badly, people need to be

      • It happened for the same reason it happened in 2015:

        https://www.theverge.com/2015/... [theverge.com]

        In short, activists inside and outside of China are using GitHub to write and share code for software to circumvent the government's "Great Firewall" in one way or another...they did not succeed in taking GitHub offline, so they decided to show their technical prowess and their sheer (if amplified) bandwidth abuse potential by conducting a second attack. They're still trying to take GitHub offline, badly, and people need to be made more aware this is happening...the last time was only three years ago and it was a shocking attempt by China to try to impose censorship of the Internet, as they see fit, inside the firewall AND out. This isn't a conspiracy theory or conjecture, China are very definitely waging an online "war" of sorts and this is more or less a demonstration of their capabilities.

        This doesn't shock me in the least because 90% of brute force attempts on my tiny VPS that hosts my blog come from Chinese IP addresses. It's gotten so bad that I just block the whole country. I download the zone file from http://www.ipdeny.com/ [ipdeny.com]

        • This doesn't shock me in the least because 90% of brute force attempts on my tiny VPS that hosts my blog come from Chinese IP addresses.

          That doesn't mean much. Back in the early 2000's ... someone I know used to have a botnet of tens of thousands of computers, 90% of which were in China. I'm not sure what the situation is these days, but back then Chinese boxes were by far the easiest to "hack", so they were a popular choice. Any scans or attacks being done by this individual would have appeared to be coming from his Chinese botnet, despite the fact that he himself resided in a western nation.

          tl;dr the fact that you're seeing attacks fro

  • Why would someone go through the trouble of attacking GitHub? For giggles? Do they like closed source or Mercurial that much?
    • A test.

      They went after the largest of the large. GitHub learned it can handle that much traffic. The botnet operators learned its capacity.

      What happens when the botnet turns itself towards an entire small country, a government site, or any small company that doesn't pay the ransom?

      • by ls671 ( 1122017 )

        Well they better hurry because those memcached servers are going to get patched one way or another.

    • Attacks don't have to be successful in order to be informative. Now, thanks to the VP at Akamai, it is better known what Prolexic can ostensibly handle. This doesn't mean GitHub is about to get hit by a 6 Tbps DDoS just to check; there's no need, and if it were successful Akamai would just up the capacity to some greater unknown number.

      What it does mean is that there is now a baseline for how much traffic it likely takes to DDoS anyone, even someone protected by Prolexic. As most won't have that high level of protecti

    • by Anonymous Coward

      I've pointed this out elsewhere, but to give you an answer that's probably closer to the truth than people would like to admit, it's almost certainly a repeat of an attack from 2015:

      https://www.theverge.com/2015/... [theverge.com]

      GitHub has apparently hosted at times (it may still, I don't know) projects and software, plus the source obviously, to circumvent the "Great Firewall" that's used to censor the Internet in China...and they aren't happy about it, as you can probably guess by the whole terabit of bandwidth directe

    • Maybe it's the FSF: Richard Stallman is one of the most vocal critics of GitHub...
  • The memcache servers ARE a ready made botnet.

    Imagine if they had made a Beowulf cluster of mem.... oh, wait.....

  • by blahbooboo ( 839709 ) on Sunday March 04, 2018 @07:32PM (#56207579)

    Such a shame there are nefarious people who carry out these DDoS attacks. What a huge waste of time and resources for the targeted entities to defeat them.

    • by Kjella ( 173770 )

      Such a shame there are nefarious people who carry out these DDoS attacks. What a huge waste of time and resources for the targeted entities to defeat them.

      On the bright side, what survives is strong. Around the turn of the century /. was infamous for having its own DDoS [wikipedia.org] effect, these days it takes huge malicious effort to bring down a site. There's a war on but it's rare that the bad guys win...

    • There are always people who have been left out, can't get in, or are disenfranchised in some way or another. Or more simply, these folks can make money, wreak havoc, feel powerful, and have lots of time on their hands. Most importantly, they're k-rad now in their circle. These tools are at their disposal; the internet, being open, allows it, until the free market does something about it, i.e. DDoS protection.

      This is why security isn't and has never been free.

      --
      'I aint coming down' - Eddie Vedder, cover

  • Was checking out another blog post on this; really love this resource. Keep up the awesome work.
  • Digital! (Score:4, Funny)

    by zmooc ( 33175 ) on Sunday March 04, 2018 @08:02PM (#56207699) Homepage

    (...) as a digital system assessed the situation (...)

    Who knew those analog steam-powered DDoS protection engines would go out of fashion this fast.

  • For new and updated software, I never noticed any outage. I guess the admins who keep GitHub percolating have got some good skillz. Kudos to the GitHub admins...
  • [Akamai] sent the data through its scrubbing centers to weed out and block malicious packets.

    Handling the load was a challenge, but identifying the packets to drop was quite easy this time: they all came from the same UDP source port used by memcached.

    • by ls671 ( 1122017 )

      Exactly, it shouldn't be too hard to patch this even if this isn't done at the server level.

      Given the size of the hole, I like to think that sysadmins and network admins will face enough pressure to patch this relatively quickly.

  • Back in the day UDP was considered unreliable because it could be dropped by the network at any time for any reason.

    It should be noted that UDP is apparently just as reliable as TCP at the network level, in that equipment in general does -not- drop UDP at all. Behaviorally speaking the network attempts to guarantee delivery of everything, which is interesting and possibly unnecessary.

    • The network doesn't normally care if the packet is TCP or UDP, it just tries its best to deliver it. Sometimes it cannot be delivered, usually because of congestion but sometimes because of corruption.

      The difference between TCP and UDP is that when your UDP packet does get dropped the network stack on the client/server doesn't care, the application data is simply lost. With TCP the network stack will re-send your data and reduce the transmission rate to try to prevent further packet loss (the assumption be

    • by tlhIngan ( 30335 )

      Back in the day UDP was considered unreliable because it could be dropped by the network at any time for any reason.

      It should be noted that UDP is apparently just as reliable as TCP at the network level, in that equipment in general does -not- drop UDP at all. Behaviorally speaking the network attempts to guarantee delivery of everything, which is interesting and possibly unnecessary.

      Wrong. UDP is considered unreliable because UDP does not guarantee delivery. If you get a UDP packet, the only thing you know

  • Some other site (cough fark cough) is claiming a DDOS attack. True dat?

    I feel one kind of pain for someone who buys old hardware/software and does their best. I have a whole nuther level of pain for anyone targeted by salivating short-cortexed idiots who for whatever twisted reason decide to target people doing their best (or sitting around in lounge chairs drinking Coronas, long as they aren't hurting anyone).
  • So what kind of costs does GitHub have from Akamai Prolexic? Do they charge on a per-problem basis or an annual subscription?

    Here is some info on the firm:
    https://en.wikipedia.org/wiki/... [wikipedia.org]

  • Forgive me for sounding naive, since I've also been told to deploy memcached in this fashion, knowing that this is insecure, while asking why memcached is deployed without requiring authentication BY DEFAULT?

    I feel naive because this is a so-simple-it's-obvious solution.

    What am I missing?

    • by Anonymous Coward

      Forgive me for sounding naive, since I've also been told to deploy memcached in this fashion, knowing that this is insecure, while asking why memcached is deployed without requiring authentication BY DEFAULT?

      It's the same reason that your home's bedroom door and frame aren't built by default to withstand a good strong kick.
      Unlike the exterior doors that are, an internal door does not typically require defenses against attacks that won't be made on it.

      Most of us also would not be interested in paying the higher cost of using exterior doors everywhere inside our homes. I know for myself this is true, despite the fact some idiot out there is likely to use an internal door in place of their front d

    • It depends on where it's exposed.

      If memcached is running somewhere on your backend, that's fine. E.g., a user hits a web page, so your web frontend talks to database and application servers over your intranet to generate a page for that user. Those servers are perfectly fine with unauthenticated memcached on a private LAN. It's not ideal from a security standpoint, but it's enough to prevent this type of attack.

      Something is terribly wrong if memcached is responding directly to requests from internet clients
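
      The hardening itself is configuration rather than code: bind memcached to loopback or a private interface (memcached -l 127.0.0.1) and, on versions where it is still on, disable the UDP listener (-U 0). As a rough self-check, something like the following Python probe, run from outside your network against your own server (192.0.2.10 is a placeholder), should time out:

        import socket

        # Only probe hosts you own. A timeout here is the desired outcome.
        TARGET = ("192.0.2.10", 11211)

        probe = b"\x00\x01\x00\x00\x00\x01\x00\x00" + b"version\r\n"  # UDP frame header + command
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(3)
        sock.sendto(probe, TARGET)
        try:
            reply, _ = sock.recvfrom(4096)
            print("EXPOSED over UDP:", reply[:40])
        except socket.timeout:
            print("No UDP reply: filtered, UDP disabled, or bound to a private interface")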

  • "Prolexic took over as an intermediary, routing all the traffic coming into and out of GitHub, and sent the data through its scrubbing centers to weed out and block malicious packets."

    So, they probably just filtered all UDP packets with a source port of 11211. Looks like it was not only the biggest DDoS but also the easiest to defeat...
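
    Put another way, the scrubbing decision for this particular flood can be expressed as a one-line match; a toy Python version of the predicate (roughly what "iptables -A INPUT -p udp --sport 11211 -j DROP" would do at the edge, except the real trick is doing it at terabit line rate):

      def drop(proto: str, src_port: int) -> bool:
          # Reflected memcached floods arrive as UDP with source port 11211.
          return proto == "udp" and src_port == 11211

      print(drop("udp", 11211))  # True  -> scrub
      print(drop("tcp", 443))    # False -> pass through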
