Could the Internet Be Taken Down In 30 Minutes? 289

GhostX9 writes "Tom's Hardware recently interviewed Dino A. Dai Zovi, a former member of Sandia National Labs' IDART (the team that tests the security of national agencies). Although most of the interview focuses on personal computer security, they asked him whether L0pht's 1998 claim that the Internet could be taken down in 30 minutes still holds, given the advances on both the security and threat sides. He said that the risk is still real."
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • nah. (Score:3, Informative)

    by neo ( 4625 ) on Monday April 06, 2009 @02:02PM (#27478735)

    Actually, this is exactly what it's supposed to survive.

  • by Anonymous Coward on Monday April 06, 2009 @02:08PM (#27478815)

    http://www.networkworld.com/news/2009/040209-obama-cybersecurity-bill.html

    A federally enabled Internet kill switch will place an Internet Off Button in the White House which can be used to instantly deactivate the Internet in case of an emergency, such as the plebes getting riled up. This bill, introduced to the Senate on April Fools, is expected to pass.

  • Re:nah. (Score:5, Informative)

    by canajin56 ( 660655 ) on Monday April 06, 2009 @02:09PM (#27478835)
    Not true! ARPANET was designed as it was because there were only a few supercomputing sites at the time, and they were separated by quite a distance. The redundancy comes into play only because they didn't want to lose important access if a router broke somewhere, as they are wont to do. All it was designed to do was survive a single point of failure. But even that is distorted. Just because ARPANET was designed that way decades ago doesn't mean that large corporations decided to keep that philosophy when they took over!
  • Re:Yes (Score:3, Informative)

    by Jurily ( 900488 ) <jurily&gmail,com> on Monday April 06, 2009 @02:13PM (#27478899)

    By a nuclear war for example.

    That doesn't count.

    Unless of course, you'd be worried about your WoW account while billions of people are dying.

  • by Lumpy ( 12016 ) on Monday April 06, 2009 @02:26PM (#27479075) Homepage

    Nope. If you take out ALL the root servers right now, I'll still be able to get around on the Internet. My servers will still serve up information; my clients will still work.

    Do I get to use the for-dummies name resolution? Nope.

    If I type 74.125.67.100 into my browser, Google still shows up.

    Granted, everything in Google is useless since the results don't give me IP addresses, but that's moot for me. PLUS I can always go to one of the alternate DNS servers and use them, or my local cache... that would work for weeks without the root servers.
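
    A small sketch of what that looks like in practice (the addresses and hostnames here are assumptions; any reachable web server and recursive resolver will do): fetch a page by raw IP with no name resolution at all, then send a hand-rolled DNS query straight to an alternate resolver, skipping the roots entirely.

        # Minimal sketch: reach a web server by IP with no DNS anywhere in the
        # path, then ask a chosen (non-root) recursive resolver directly.
        import http.client
        import socket
        import struct

        # 1. Straight to an IP address -- no name resolution involved.
        conn = http.client.HTTPConnection("74.125.67.100", 80, timeout=5)
        conn.request("GET", "/", headers={"Host": "www.google.com"})
        print("HTTP status:", conn.getresponse().status)

        # 2. Hand-rolled DNS A query sent to a specific recursive resolver,
        #    so the root servers never enter the picture on our side.
        def dns_query(name, resolver_ip):
            header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)  # RD=1, one question
            qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
            question = qname + struct.pack("!HH", 1, 1)                  # QTYPE=A, QCLASS=IN
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.settimeout(5)
            sock.sendto(header + question, (resolver_ip, 53))
            return sock.recvfrom(512)[0]

        print(len(dns_query("example.com", "4.2.2.2")), "bytes of DNS answer")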

  • by Paralizer ( 792155 ) on Monday April 06, 2009 @02:28PM (#27479109) Homepage
    When Pakistan decided to block youtube [slashdot.org], they inadvertently caused a global routing black hole. The Internet is built on the BGP routing protocol, which is based on trust: you trust that your peers will advertise correct routes. If they don't, you get misinformation like the Pakistan/YouTube situation, and it spreads. Pretty soon everyone thinks going through Pakistan is the best way to reach YouTube, so all traffic (or almost all of it) goes there, and then Pakistan simply drops those packets.

    Of course this was an accident, but a malicious attack could simply advertise lots of incorrect routes and hose up everything ... at least for a little while.
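
    That incident worked because the hijacked announcement was a more-specific prefix than the legitimate one, and routers forward on the longest matching prefix. A toy sketch of that selection (the prefixes mirror the widely reported YouTube case but are illustrative here):

        # Longest-prefix match: the most specific route wins, which is why a
        # hijacked /24 pulls traffic away from a legitimate covering /22.
        import ipaddress

        routes = {
            ipaddress.ip_network("208.65.152.0/22"): "legitimate origin",
            ipaddress.ip_network("208.65.153.0/24"): "hijacker",
        }

        def best_route(dst):
            dst = ipaddress.ip_address(dst)
            matches = [net for net in routes if dst in net]
            return routes[max(matches, key=lambda net: net.prefixlen)]

        print(best_route("208.65.153.238"))   # -> "hijacker": the /24 beats the /22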
  • by six ( 1673 ) on Monday April 06, 2009 @02:47PM (#27479373) Homepage

    root DNS != Backbone

    You can DDOS a server, a network, even big routers, but you can't DDOS the internet.

    Cutting random cables here and there won't work either, at most you're going to isolate parts of the net.

    The only way to take down the Internet in 30 minutes is to exploit vulnerabilities in the BGP core routing protocol and announce netblocks that somehow (that's where something has to be exploited: bypassing filters, more-specific blocks, and routing-cost considerations) take priority over other routes for every router that receives the announcement.

    Not saying that's impossible, but still tough ...
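
    For a rogue announcement to win, it has to beat the receiving router's own policy, not just show up. A simplified sketch of the first BGP best-path tie-breakers (real implementations have many more steps; the ASNs and prefix are made up):

        # Simplified best-path choice among announcements for the SAME prefix:
        # highest local preference wins, then shortest AS path. Real routers
        # apply further tie-breakers (origin, MED, eBGP vs iBGP, router ID...).
        from dataclasses import dataclass, field

        @dataclass
        class Announcement:
            prefix: str
            local_pref: int                      # set by local policy, not by the sender
            as_path: list = field(default_factory=list)

        def best(announcements):
            return max(announcements, key=lambda a: (a.local_pref, -len(a.as_path)))

        legit  = Announcement("198.51.100.0/24", local_pref=200, as_path=[64500])
        forged = Announcement("198.51.100.0/24", local_pref=100, as_path=[64666])
        print(best([legit, forged]).as_path)     # local policy keeps the legitimate route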

  • by Anonymous Coward on Monday April 06, 2009 @02:48PM (#27479405)

    People misunderstand the scope and power of this law. Sure, only American & NATO NAPs will be turned off, so some IP routing may continue. However, DNS will be vaporized, as it is currently controlled by America. So your internet will become your hosts file, and any IP addresses you've memorized. Have fun with that.

  • by vlm ( 69642 ) on Monday April 06, 2009 @02:51PM (#27479451)

    BGP by design trusts in routing settings being honest... just program a router with can't-get-there-from-here routes, and you'll down the surrounding area's Internet speed, or even connections.

    No, no one trusts their peers anymore, and their configs reflect that. Not since at least the 90s. Since before I started doing BGP support, everyone has filtered their customers' routes. WAY WAY too many people try to redistribute 10/8 from their IGP, or maybe try to send us a 0/0. And unscientifically, I'd say about 25% of newbie BGP admins think they own their previous ISP's IP space... so if their old ISP gave them 1.2.3/24, they'd ask us to modify our filters to allow the /24; we'd check (we have to check each and every customer every time), see it's part of their old ISP's /18, and educate them.
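
    The kind of per-customer filter being described boils down to something like this sketch (the allocations are hypothetical; real filters live in router configuration, not Python):

        # Per-customer prefix filter: drop defaults and private space outright,
        # and accept only prefixes inside blocks we know this customer holds.
        import ipaddress

        RFC1918 = [ipaddress.ip_network(p) for p in
                   ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

        def accept(announced, customer_allocations):
            net = ipaddress.ip_network(announced)
            if net.prefixlen == 0:
                return False                      # never accept a default route (0/0)
            if any(net.subnet_of(b) for b in RFC1918):
                return False                      # never accept RFC 1918 space
            return any(net.subnet_of(ipaddress.ip_network(a))
                       for a in customer_allocations)

        print(accept("10.0.0.0/8",      ["203.0.113.0/24"]))   # False: private space
        print(accept("198.51.100.0/24", ["203.0.113.0/24"]))   # False: not their block
        print(accept("203.0.113.0/25",  ["203.0.113.0/24"]))   # True: inside their /24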

  • Re:Mutant Porn! (Score:2, Informative)

    by neo ( 4625 ) on Monday April 06, 2009 @03:23PM (#27479883)

    Isn't this called Hentai?

  • by Vellmont ( 569020 ) on Monday April 06, 2009 @03:30PM (#27479993) Homepage


    Yet, it has never happened. It hasn't even come close to happening

    Not exactly. [wikipedia.org] It was shortly before my time, but the reports are that "the internet" had some significant problems.

    I think you're right that it has to be hard enough to be too difficult for your average a-hole. The claim was that this might take a group of exceptional a-holes. The thing about a-holes is, they generally don't like other a-holes.

  • Re:YAH!! (Score:3, Informative)

    by Fungii ( 153063 ) on Monday April 06, 2009 @03:40PM (#27480123)

    Oh, I don't know.. maybe it could have meant the ability to survive a single point of failure?

  • Re:YAH!! (Score:3, Informative)

    by 644bd346996 ( 1012333 ) on Monday April 06, 2009 @03:52PM (#27480255)

    I'm pretty sure that not having a single point of failure was considered part of "reliability" even back then.

  • by BitZtream ( 692029 ) on Monday April 06, 2009 @04:01PM (#27480333)

    Funny, during all that I had no interruption to YouTube.

    Because ... the Internet functioned as it was supposed to and the BGP filters at some backbone provider up the food chain from me prevented me from noticing.

    Did you read the article you linked to? Let me help you:

    The telecom company that carries most of Pakistan's traffic, PCCW, has found it necessary to shut Pakistan off from the Internet while they filter out the malicious routes that a Pakistani ISP

    Let's read that carefully. PCCW turned off Pakistan. They turned the country off to prevent the problem from continuing to cause more widespread problems, and to buy themselves some time. End of story. Most of the rest of the Internet had no clue.

    There are also methods to detect router black holes and prevent them, so even when this sort of thing occurs, it is automatically worked around at some backbone providers.

    This has all happened before and will all happen again, and no one that matters will notice next time either. Nor will it be nearly as scary as this thread would like everyone to think it is.

  • by afidel ( 530433 ) on Monday April 06, 2009 @05:50PM (#27481773)
    Are you behind a consumer-grade firewall appliance a la Netgear or Linksys? If you are, then you are almost 100% guaranteed to be more at risk running your own resolver than you are forwarding to a decent ISP-run setup. The reason is that none of the consumer-grade firewalls support source port randomization, meaning you are very vulnerable to DNS cache poisoning attacks.
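
    The rough arithmetic behind that concern (numbers are approximate and the ephemeral port range is an assumption): an off-path attacker has to guess every field the resolver will accept, and a fixed source port collapses that to just the 16-bit transaction ID.

        # Back-of-the-envelope search space for Kaminsky-style cache poisoning.
        TXID_SPACE = 2 ** 16           # DNS transaction ID
        PORT_SPACE = 2 ** 16 - 1024    # usable ephemeral source ports (approx.)

        fixed_port  = TXID_SPACE                 # firewall rewrites to one port
        random_port = TXID_SPACE * PORT_SPACE    # resolver randomizes the port too

        print(f"fixed source port : ~{fixed_port:,} combinations to guess")
        print(f"random source port: ~{random_port:,} combinations to guess")
        print(f"roughly {random_port // fixed_port:,}x harder")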
  • by hr raattgift ( 249975 ) on Monday April 06, 2009 @06:08PM (#27481977)

    That really depends on what the vulnerability is.

    There are several implementations of BGP from different vendors and at least two open source implementations. The protocol is also relatively simple. Consequently, it's hard to imagine a vulnerability that is structural within BGP such that enough partitioning happens to make large parts of the Internet unusable.

    In the early 1990s there was a moment when there was a very large partition, when AS path prepending was used for the first time. Cisco routers did not mind the back-to-back duplicate AS. Proteon, Wellfleet and some other implementations discarded the NLRI (prefix/mask + routing information) as part of routing-information-loop avoidance. Gated-derived routers had a different approach in their NLRI loop-avoidance code: rather than use the NLRI or discard just that one update, they dropped the TCP session, figuring that there was a data corruption bug. The result: BGP sessions between "core" IOS-talking routers and "core" gated-derived routers bounced up and down for a while. This affected most of the exterior routing gateways of ANS, which operated the NSFNET Backbone Service at the time.

    This sort of "reset" policy is now known to have been a serious mistake and now is very rare.

    Also in the early 1990s there was a hardware interaction problem involving Cisco 7000-series routers equipped with Silicon Switching Processor cards. A "covering" prefix arriving via any routing protocol -- typically BGP -- would cause all the "covered" routes (longer matches of the same prefix) to be deleted, with demand-population bringing those routes back into the radix-tree-like data structure. Demand population used the same CPU that TCP ACK processing and other activities used, so a router in the "core" with a relatively full routing table and a high packet-per-second arrival rate across a mix of prefixes (as in "core" routers generally) would simply melt down. This would starve timer-sensitive activities like TCP ACKing and processing the BGP protocol state machine. This in turn led to BGP sessions resetting due to time-outs, which in turn reduced the traffic load substantially on the melting-down router. This would "thaw" the router enough that it would bring the BGP sessions back up long enough to receive a covering prefix, and so forth in a loop. This crippled one very large "tier 1" ISP for an hour and change.

    There have been a number of minor "ouchies" related to information obtained from BGP neighbours in the years since, with the most embarrassing ones having to do with specific implementations' reactions to very long data sets (e.g. extremely large AS_SET attributes, extremely long AS paths).

    There was also concern some years ago (late 1990s) about the frequency of BGP updates, and that a series of actors publishing up/down/up/down transitions as fast as they could might lead to a router "meltdown" with consequences along the lines of the situation described a couple of paragraphs up. This was considered a long-term possibility, and as a result a couple of different approaches evolved to suppress oscillating prefixes, or blocks thereof, at a level much lower than the one where BGP's fundamentally built-in mechanisms (TCP window sizes and fundamental NLRI/RIB processing speeds) would kick in.

    The modern BGP "basket" is much less systematically rickety; the systemic ricketyness is the result of BGP fundamentally being a "push" distribution of vectors rather than a "pull" acquisition of nonlocal (but widely distributed) connectivity and policy maps (as happened when one fed desired map data from USENET's u.* hierarchy into pathalias, for example, using one or more "smarthosts" as the equivalent of IP's 0.0.0.0/0 default).

    Sadly, because the "push" NLRIs are not easily cryptographically signed by the source site (unlike PGP around a UUCP/USENET map file, or even around an individual entry), there is still a requirement to trust your largest neighbours, although in the early 1990s there remained ANS's Policy Routing DataBase.
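
    The flap-suppression approaches mentioned above work roughly like route flap damping (RFC 2439): each flap adds a penalty, the penalty decays exponentially, and the prefix is suppressed above one threshold and readvertised once it decays below another. A toy model with made-up parameters:

        # Toy route-flap-damping model: flaps add a penalty, the penalty halves
        # every HALF_LIFE_S seconds; the route is suppressed once the penalty
        # exceeds SUPPRESS_LIMIT and readvertised once it falls below REUSE_LIMIT.
        import math

        FLAP_PENALTY   = 1000.0
        SUPPRESS_LIMIT = 2000.0
        REUSE_LIMIT    = 750.0
        HALF_LIFE_S    = 900.0       # 15 minutes

        penalty, last_t, suppressed = 0.0, 0.0, False

        def step(now, flapped):
            """Decay the penalty, add to it on a flap, and update suppression state."""
            global penalty, last_t, suppressed
            penalty *= math.exp(-math.log(2) * (now - last_t) / HALF_LIFE_S)
            last_t = now
            if flapped:
                penalty += FLAP_PENALTY
            if penalty > SUPPRESS_LIMIT:
                suppressed = True
            elif penalty < REUSE_LIMIT:
                suppressed = False
            return suppressed

        for now, flapped in [(0, True), (30, True), (60, True), (4000, False)]:
            print(now, "suppressed" if step(now, flapped) else "advertised")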

  • No, I suspect not. (Score:1, Informative)

    by Anonymous Coward on Tuesday April 07, 2009 @01:45AM (#27485437)

    Because I'm anonymous, it's not likely that many people will see this... BUT...

    Yes, lots of people are kind of right when they mention BGP and route flapping, but that isn't what the L0pht problem was about.

    It was about being able to disrupt the connections between the BGP speakers themselves through forged ICMP and TCP packets.

    People haven't been twiddling their thumbs, and I suspect the interviewee isn't that clued in on what's happened since.

    There's this obscure feature called TCP-MD5:
    http://www.ietf.org/rfc/rfc2385.txt
    Protection of BGP Sessions via the TCP MD5 Signature Option

    This effectively disarms the attack that L0pht were thinking about when Mudge went to see the President back in the day.

    What would an attacker need to do today? I'm not sure... could a DDoS attack cause a similar problem by targeting a particular router's interface with lots of packets? That's hard to imagine. If it were possible, then why don't DDoS attacks cause something like that today when someone decides a web host needs 1 Gb/s or 10 Gb/s of junk traffic? Today the infrastructure remains functional, and it's the tails where the customers are that run into problems.

    But otherwise, to launch the same attack that was being talked about back then would require not only guessing IP#'s, port numbers and sequence numbers but also MD5 secret passwords. That plus the dampening of route flapping is likely to defeat the current attacks.
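
    For reference, the RFC 2385 digest is computed over a TCP pseudo-header, the TCP header with the checksum zeroed (options excluded), the segment data, and the shared key. A simplified sketch of that calculation (field packing and values here are illustrative, not a working BGP stack):

        # Simplified RFC 2385 TCP MD5 signature: without the shared key, an
        # off-path attacker who guesses addresses, ports and sequence numbers
        # still cannot produce a segment the router will accept.
        import hashlib
        import socket
        import struct

        def tcp_md5_digest(src_ip, dst_ip, tcp_header_no_options, payload, key):
            seg_len = len(tcp_header_no_options) + len(payload)
            pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
                      + struct.pack("!BBH", 0, socket.IPPROTO_TCP, seg_len))
            return hashlib.md5(pseudo + tcp_header_no_options + payload + key).digest()

        # Hypothetical header fields just to show the call shape (ports, seq/ack,
        # data offset + flags, window, zeroed checksum, urgent pointer).
        header = struct.pack("!HHIIBBHHH", 179, 54321, 1000, 2000, 5 << 4, 0x18,
                             65535, 0, 0)
        sig = tcp_md5_digest("192.0.2.1", "192.0.2.2", header, b"BGP UPDATE...", b"s3cret")
        print(sig.hex())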

"Gravitation cannot be held responsible for people falling in love." -- Albert Einstein

Working...