Bill Cheswick On Internet Security 37

Franki3 invites our attention to a SecurityFocus interview with Bill Cheswick. He started the Internet Mapping Project in the 90s; you have probably seen the maps that resulted. The interview ranges over firewalling, logging, NIDS and IPS, how to fight DDoS, and the future of BGP and DNS. From the interview: "I have been impressed with the response of the network community. These problems, and others like security weaknesses, security exploits, etc., usually get dealt with in a few days. For example, the SYN packet DOS attacks in 1996 quickly brought together ad hoc teams of experts, and within a week, patches with new mitigations were appearing from the vendors. You can take the Internet down, but probably not for very long."
  • A week? (Score:2, Insightful)

    by Nemetroid ( 883968 )
    I would call a week a very long time for something as vital as the Internet now is.
    • Re: (Score:3, Insightful)

      by PhxBlue ( 562201 )
      Now, yes; but it was nowhere near that important 11 years ago.
    • Re: (Score:2, Insightful)

      Well, if you don't want to risk the outage, get a private network set up. Shouldn't be that expensive. ;)

      Since most net servers are Windows or Linux and most routers are made by two or so vendors, there will be exposures that take out lots of infrastructure in the future, just like in the past. Even if they have a fix in ten minutes, it will take days to get the patches out and applied, due to the complexity of distributing patches without a well-functioning public network. "Crap, someone has pwned the Cis
    • Root Servers (Score:5, Interesting)

      by Kadin2048 ( 468275 ) <.ten.yxox. .ta. .nidak.todhsals.> on Tuesday January 23, 2007 @04:53PM (#17728366) Homepage Journal
      I thought his comments about the DNS root servers were interesting.
      The DNS root servers appear to be 13 hosts, but are actually many more. They have been under varying, continual, low-level attacks for many years, a process that tends to toughen the defenses and make them quite robust. A few years ago there was a strong attack on the root servers, taking 9 of the 13 down at some point. ... There are other root servers, of course. Anyone can run one, it is just a question of getting people to use it. I understand that China is proceeding with root servers of their own. DNSSEC is a way to get the right DNS answer, but its deployment has had problems for at least 10 years.
      It's interesting that the system works as well as it does: one would think that with just 13 IP addresses to target, the root servers would melt from DDoS attacks far more often than they do.

      Their technique of hiding many geographically-separated servers behind one IP address is interesting. For example, ISC's server at 192.5.5.241 (the "F" server) has over 40 sites, including Ottawa, Palo Alto, New York City, San Francisco, and Madrid. Given the obvious advantages of this configuration, it actually surprised me that there are root servers not doing this: VeriSign, University of Maryland, NASA, the U.S. DoD, the U.S. Army, and ICANN all seem to have single-site root servers. I wonder whether those organizations are taking the responsibility that they hold seriously enough, if cost or level of effort is what's stopping them.

      Also, the number of servers that have IPv6 addresses is a bit disappointing (B, F, H, K, M), but I suppose that's understandable given the slow uptake of that technology. In many ways, the root DNS system is one of the oldest and least-noticed parts of the Internet's infrastructure; if the network as a whole were a city, it would be the stonework aqueducts far beneath the streets that nobody thinks about as long as water comes out when you turn the tap.
      • How do they manage to put multiple systems behind one IP without a single point of failure at, for instance, a NAT system? Do you have any sources that explain how it is done?
        • Re:Root Servers (Score:4, Informative)

          by gkhan1 ( 886823 ) <oskarsigvardsson@@@gmail...com> on Tuesday January 23, 2007 @05:39PM (#17729018)
          They use something called Anycast [wikipedia.org]. See article for details.
        • Re:Root Servers (Score:5, Informative)

          by Kadin2048 ( 468275 ) <.ten.yxox. .ta. .nidak.todhsals.> on Tuesday January 23, 2007 @06:04PM (#17729328) Homepage Journal
          I think it's actually fairly simple: they let multiple (widely separated) servers announce themselves on the same IP address, and these announcements propagate into the routing system. When somebody sends a packet to one of these servers, the routers along the way naturally tend to send it to the closest one. Thus if you're in Beijing and send a packet to the IP address for the F nameserver, your packet makes its way to the box in Beijing, while someone in NYC gets their local one. (There could be subtleties that I'm missing, but there doesn't seem to be a whole lot to it other than that.)

          The problem with this (as the WP article points out) is that it's virtually useless for stateful connections like TCP, so it's not useful for load balancing web servers and other things of that nature. But since DNS uses UDP, it doesn't matter if one packet goes to one server and the routers decide to send the next one to a different server with the same IP. This means you don't need the usual NAT system that would be required in order to load-balance an HTTP farm: most of that is really only needed to keep the various connections between clients and servers sorted out. When you're using a stateless protocol, it's a lot simpler. (There's a rough sketch of a stateless root query right after this comment.)

          I was pretty impressed with it, too.
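
          Here is a minimal sketch of what such a stateless root query looks like on the wire: plain Python, standard library only, with the F-root address taken from the comment above; the query IDs and timeout are arbitrary. Each datagram is independent, so the routing system is free to deliver consecutive queries to different anycast instances.

          import socket
          import struct

          F_ROOT = "192.5.5.241"  # ISC's anycast "F" root server, mentioned above

          def root_ns_query(query_id):
              # Minimal DNS packet asking the root zone (".") for its NS records.
              header = struct.pack(
                  ">HHHHHH",
                  query_id,  # transaction ID
                  0x0100,    # flags: standard query
                  1,         # one question
                  0, 0, 0,   # no answer/authority/additional records
              )
              # Question: the root name "." encodes as a single zero byte; QTYPE=NS (2), QCLASS=IN (1)
              return header + b"\x00" + struct.pack(">HH", 2, 1)

          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          sock.settimeout(3)

          # Two independent datagrams; each may land on a different F-root instance.
          for qid in (1, 2):
              sock.sendto(root_ns_query(qid), (F_ROOT, 53))
              reply, _ = sock.recvfrom(4096)
              answers = struct.unpack(">H", reply[6:8])[0]  # ANCOUNT field of the DNS header
              print(f"query {qid}: {len(reply)} bytes back, {answers} answer records")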
  • by Anonymous Coward on Tuesday January 23, 2007 @03:53PM (#17727608)

    You can take the Internet down, but probably not for very long.


    For as long as the story is on the front page of Slashdot, at least.
  • Mirrored (Score:3, Informative)

    by rumith ( 983060 ) on Tuesday January 23, 2007 @04:02PM (#17727706)
  • The interview ranges over firewalling, logging, NIDS and IPS, how to fight DDoS, and the future of BGP and DNS.

    FWIW, FYI, TFA is SFW but IMO not OMFGF.
    • Re:alphabet soup (Score:4, Insightful)

      by 99BottlesOfBeerInMyF ( 813746 ) on Tuesday January 23, 2007 @04:23PM (#17727966)

      The interview ranges over firewalling, logging, NIDS (Network Intrusion Detection System) and IPS (Intrusion Prevention System), how to fight DDoS (Distributed Denial of Service), and the future of BGP (Border Gateway Protocol) and DNS (Domain Name System).

      If you don't know what all of these are, the chances are you won't care about or understand what he has to say anyway.

      • If you don't know what all of these are, the chances are you won't care about or understand what he has to say anyway.
        Maybe true, but slashdot is a site written in English for a general readership, so it is not unreasonable to expect the article summaries to be comprehensible to a non-specialist.
  • Error 102: Connection timed out.
  • IPS (Score:5, Interesting)

    by 99BottlesOfBeerInMyF ( 813746 ) on Tuesday January 23, 2007 @04:20PM (#17727922)

    From TFA:

    What do you think about reactive firewalls, also known as IPS (Intrusion Prevention Systems)?

    Bill Cheswick: Reactive security is an idea that keeps popping up. It seems logical. Why not send out a virus to cure a virus, for example? How about having an attacked host somehow stifle the attacker, or tell a firewall to block the noxious packets?

    These are very tricky things to do, and the danger is always that an attacker can make you DOS yourself or someone else. As an attacker, I can make you shut down connections by making them appear to misbehave. This is often easier than launching the original attack that the reactive system was designed to suppress. (By the way, this happens a lot in biological immune systems as well. There are a number of diseases that trigger dangerous or fatal immune system responses.)

    So I am skeptical about these systems. They may work out, but I want to keep an eye on the actual user experiences with these.

    I think that Mr. Cheswick is mostly correct in his opinions, but in the case of IPSes some of them certainly are effective, if not for mitigating minor attacks, at least for keeping the network up and running during these attacks. He talks about making a network operator DoS themselves by feigning an attack, but to make this work you have to assume there is no meat in the loop. Just because someone appears to attack me does not mean I filter all packets from that IP (or IPs). I'm not going to let my network automatically block traffic, although rate limiting can be automated to some degree. The real point is that if your tools allow you enough visibility into your network to map your normal and critical traffic, you can block large swaths of noncritical traffic without serious financial consequences. Compared to the cost of a complete outage, this is a huge leap forward.

    Still, many of the IPS tools on the market today do not provide that ability, and you need to get a good toolset together.

    For all these problems, and others in the past, I have been impressed with the response of the network community. These problems, and others like security weaknesses, security exploits, etc., usually get dealt with in a few days. For example, the SYN packet DOS attacks in 1996 quickly brought together ad hoc teams of experts, and within a week, patches with new mitigations were appearing from the vendors. You can take the Internet down, but probably not for very long.

    Since the '90s a lot more effort has gone into formalizing and speeding up collaboration. It used to be that if a major worm or something hit the internet, within a week it would be well known as people called each other and traded notes and techniques for mitigation. Today if I see a novel, widespread attack, I also have up-to-date data as to whether or not it is hitting other ISPs and large networks, and where and at what traffic rates, via information they automatically share with me. Further, I can semi-automatically create a signature that matches that attack, turn it into a filter for my routers and firewall-type devices, and share that information with them along with my notes (a rough sketch of that kind of semi-automatic flagging follows after this comment). Even if the network is down, I still often have the contact info for the security people at those networks, so if my Internet access is out I can look at who else has been hit and call them.

    This has really started to take off only in the last year or so, but the process Mr. Cheswick described for the '90s now runs at much higher speed. Personally, I think anyone would be hard pressed to take out "the Internet" today, and the closest one might come would be a very sneaky attack on the Windows monoculture.
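
    To make the "semi-automatic" part above concrete, here is a toy sketch of the human-in-the-loop approach described in this comment: summarize flow records, flag sources far above a baseline rate, and print proposed filter rules for an operator to review rather than applying anything automatically. The flow-record format, addresses, and thresholds are invented for the example.

    from collections import Counter

    # Hypothetical per-minute flow records: (source_ip, dest_port, packet_count).
    # In a real network these would come from NetFlow/sFlow collectors.
    flows = [
        ("203.0.113.7", 80, 420_000),
        ("198.51.100.9", 80, 390_000),
        ("192.0.2.44", 25, 1_200),
        ("203.0.113.7", 53, 380_000),
    ]

    BASELINE_PPM = 5_000   # assumed "normal" packets per minute from a single source
    SUSPICION_FACTOR = 20  # flag sources this many times above baseline

    packets_by_source = Counter()
    for src, _port, pkts in flows:
        packets_by_source[src] += pkts

    print("Proposed filters (for operator review, not applied automatically):")
    for src, pkts in packets_by_source.items():
        if pkts > BASELINE_PPM * SUSPICION_FACTOR:
            print(f"  deny ip host {src} any   ! {pkts} pkts/min, {pkts // BASELINE_PPM}x baseline")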

    • Re:IPS (Score:4, Interesting)

      by guruevi ( 827432 ) on Tuesday January 23, 2007 @04:50PM (#17728334)
      A lot of people seem to have a misunderstanding about the concepts of the internet and especially (D)DoS. The fact that you're under attack doesn't mean you can just rate limit and be done with it. You can't limit the number of requests being sent to you, so the only thing you can do is rate limit the responses to those requests so that you don't clog your upload. Most providers do have symmetric, separated bandwidth, so your downlink will be full anyway.

      Reactive (automated) things repel me too. I've seen them and evaluated them, but the program/computer/system is too 'dumb' to recognize that something bad is happening and where exactly to solve it. That's why we Network & System admins are still in business. You could implement a type of AI, but then it gets too expensive. The other thing is: who decides, and how? You can set rules, but then you have to operate within the rules. You can set self-adjusting rules, but then if the attacker's intelligence > the system's intelligence, they can still be altered, bent or even misused.

      The other thing that would be good if feasible (both cost- and programming-wise), though it just plain scares me, is using a closed-loop AI over a large set of parallel systems. Nobody has any direct influence on the system, and the system starts to recognize stuff just like a real system admin would. The problem is that since nobody can just influence the system, you'll eventually have a problem and the system is going to shut you and everyone else out. If you meddle with it, the system will go reactive and you'll have your favorite sci-fi horror movie realized.
      • Re:IPS (Score:5, Interesting)

        by 99BottlesOfBeerInMyF ( 813746 ) on Tuesday January 23, 2007 @05:03PM (#17728520)

        A lot of people seem to have a misunderstanding about the concepts of the internet and especially (D)DoS. The fact that you're under attack doesn't mean you can just rate limit and be done with it. You can't limit the number of requests being sent to you, so the only thing you can do is rate limit the responses to those requests so that you don't clog your upload. Most providers do have symmetric, separated bandwidth, so your downlink will be full anyway.

        Actually, this depends upon what technologies you have deployed. I was writing from the perspective of a tier-1 ISP operator. You certainly can blackhole traffic matching certain characteristics, or hand it off to a dedicated filtering appliance which filters out particular patterns and onramps the remaining traffic back into your network. Additionally, more and more large ISPs are starting to sell this service to their large customers, so as the recipient of a DDoS attack I log into a dedicated interface, insert the attack characteristics I'm seeing, and my ISP filters the attack at its peering edge, before the rest of it ever transits its network and reaches me.

        Reactive (automated) things repel me too. I've seen them and evaluated them, but the program/computer/system is too 'dumb' to recognize that something bad is happening and where exactly to solve it.

        In general this is true, but in particular there are exceptions. I've seen logs of major DDoS attacks automatically castrated while the admin was away over the weekend. Obviously you have to be very conservative about this to prevent false positives, and a lot of network admins are understandably hesitant.

        You could implement a type of AI, but then it gets too expensive. The other thing is: who decides, and how? You can set rules, but then you have to operate within the rules. You can set self-adjusting rules, but then if the attacker's intelligence > the system's intelligence, they can still be altered, bent or even misused.

        A well-crafted system is self-adjusting, but without pulling people out of the loop. You can certainly implement some hard-and-fast rules, though, by white-listing critical traffic. When the "AI" decides an attack is occurring and shuts down traffic, it should have an auto-generated picture (relational database) of what traffic is normal and what traffic is vital. Thus it can follow priorities and shut down Web traffic to some office while still allowing the payroll server to connect to the bank. (See the sketch after this comment for what such a priority rule might look like.)

        The problem is that since nobody can just influence the system, you'll eventually have a problem and the system is going to shut you and everyone else out. If you meddle with it, the system will go reactive and you'll have your favorite sci-fi horror movie realized.

        Umm, if we were there with AIs, a lot of our problems would already be gone and replaced with a different set of problems. If this ever happens, I'll be more worried about who the mail server is voting for than whether or not my e-mail is marked as spam. I think we just walked off the deep end of this conversation.
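
        As for what the white-listing above might look like in code, here's a small sketch (not any particular product): flows on a critical whitelist always pass, and everything else gets a crude per-source token-bucket rate limit during an attack. The flow tuple, addresses, and limits are all invented for the illustration.

        import time
        from collections import defaultdict

        # Flows the business can't live without (e.g. payroll server to the bank): always allowed.
        CRITICAL_FLOWS = {("10.1.2.15", "bank.example.net", 443)}

        RATE = 100   # packets per second allowed per non-critical source
        BURST = 200  # token-bucket depth

        buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

        def allow(src, dst, dport):
            """Decide whether to forward a packet while under attack."""
            if (src, dst, dport) in CRITICAL_FLOWS:
                return True  # vital traffic is never shed
            b = buckets[src]
            now = time.monotonic()
            b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
            b["last"] = now
            if b["tokens"] >= 1:
                b["tokens"] -= 1
                return True
            return False  # over the limit: shed it

        # The payroll connection is whitelisted; a flooding web client eventually gets shed.
        print(allow("10.1.2.15", "bank.example.net", 443))  # True
        print(allow("203.0.113.7", "www.example.com", 80))  # True until its bucket drains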

  • by jmorris42 ( 1458 ) * <{jmorris} {at} {beau.org}> on Tuesday January 23, 2007 @04:59PM (#17728472)
    > You can take the Internet down, but probably not for very long.

    Dunno, we have yet to experience a real widespread outage. If someone managed to take out enough of the net that it couldn't be used to collaborate on the fix or to distribute it, the time to repair would be a lot worse.

    It is something I wonder about. First the net was attacked by kids looking for thrills. Now it is attacked by spammers looking to make a profit. The scenario I worry about is a determined foe with resources attacking it with the goal of simply inflicting maximal damage.

    The raw materials are out there, just waiting to be weaponized. Imagine a combo punch: a Warhol worm from hell to nuke the Windows boxes, reflashing as many as possible into boat anchors within the first hour. Follow that up with an attack on the backbone routers, again with the goal of bricking as many as possible. If you get enough, it makes recovery damn near impossible, since you need the net to get the fixes. Sure, it would be possible to clean up the mess and bring up enough of the net to get the important things moving in a day or two, but a full cleanup would take months. Would enough people lose confidence in depending on the net for critical commerce to gut the stocks of some major players and set things back to a pre-net mindset?
  • by Anonymous Coward
    Cheswick is a very good speaker and I recommend hearing him talk if you get a chance. I got to hear him talk at an Infragard conference about how internet mapping was used to do damage assessment after the US bombed Serbia. I don't know that the military actually used the data, but he showed us pictures of how packets were routed before and after the attacks.
    He also discussed how you could detect unauthorized connections in a network by injecting packets with source addresses external to the network and seeing
    • Re: (Score:1, Insightful)

      by Anonymous Coward
      Of course smart people with covert network connections would notice the packets came in on the wrong interface and would reply back on the same interface so as not to reveal the covert connection.

      Extra connections into a network are more difficult to hide than this. They must pass all traffic that should be getting through and drop all traffic that should not. They must spoof all outbound ICMP TTL expiration messages. They must also spoof all inbound ICMP TTL expiration messages. Also, all other routers in the path
  • Wow, these maps are really beautiful. They look like a cross between Paul Klee's painting "Composition With Fruit" and Joan Miro's "Frustrated Cat".

    Alright, that's a stretch, but they could be confused for modern art if the viewer was not aware of their origin.
    • I think that tells a lot about modern art.
    • by ches ( 134162 )
      I have given samples of the maps to MOMA and the Hirshorn, at their request. They haven't appeared to do anything with them. Perhaps they are waiting for me to die.

      ches
