Topics: The Internet, Security

Secret Repairs Preceded TCP Flaw Release (204 comments)

efranco cuts and pastes: "Only the math had changed. But the emergence of a workable exploit for an old TCP security hole prompted a secret initiative to fix the Internet, giving network operators a week to secure vulnerable routers. The clandestine repair effort livened an already intense period for security pros already juggling a bevy of Windows security patches." We ran a story on a this a few days ago.
  • Cisco Fix (Score:5, Informative)

    by thebra ( 707939 ) * on Friday April 23, 2004 @12:27PM (#8951148) Homepage Journal
    is here [cisco.com], as noted in an article on The Register [theregister.co.uk].
    • Re:Cisco Fix (Score:5, Interesting)

      by robslimo ( 587196 ) on Friday April 23, 2004 @01:11PM (#8951693) Homepage Journal
      When the previous /. story was posted about the TCP flaw, I checked out the NANOG mailing list. [merit.edu]

      There was plenty of discussion about it, including various vendor issues (Cisco and Juniper) and fixes, as well as some ISPs dragging their feet on implementing MD5 over peer links (a minimal config sketch follows below). I could tell from some of the things mentioned there that they (the network ops) had advance knowledge of the vulnerability.

      Most interesting was this [merit.edu] about looking glasses being too free with info that would allow a TCP reset in one try.
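
      For reference, the TCP MD5 signature option (RFC 2385) on a BGP peering is a one-line change on most platforms. A minimal Cisco IOS-style sketch, with placeholder addresses, AS numbers and secret (both peers must be configured with the same password):

        router bgp 65001
         neighbor 192.0.2.1 remote-as 65002
         neighbor 192.0.2.1 password <shared-secret>
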
      • I could tell from some of the things mentioned there that they (the network ops) had advance knowledge of the vulnerability.

        Yup. [slashdot.org]

        -dk
  • by pointbeing ( 701902 ) on Friday April 23, 2004 @12:30PM (#8951187)
    These days it's risky to release information about a security vulnerability without having a patch in place first. Look at Blaster - I believe that the author *used a security bulletin* to write the worm and then just targeted unpatched machines.

    I think we're gonna see a lot more of this. If you release information before you fix it these days you're just inviting people to test your shiny new vulnerability ;-)

    • by burtman007 ( 771587 ) on Friday April 23, 2004 @12:35PM (#8951280)
      This poses an interesting dilemma, then: is it better to release information on a vulnerability you've discovered, or should you keep it quiet and hope you can patch it before anyone else discovers it?
      • by adrianbaugh ( 696007 ) on Friday April 23, 2004 @12:49PM (#8951430) Homepage Journal
        You could always release it to the company whose product is affected and give them $suitable_time to fix the vulnerability before you post on Bugtraq. That way it isn't just you that's working on a fix, and you look like you've tried to be a responsible netizen when, having failed to fix the problem in $reasonable_time, their shit gets cracked to pieces. That has always been the responsible way of announcing vulnerabilities; I don't see that this changes the situation.
        • by Slime-dogg ( 120473 ) on Friday April 23, 2004 @01:30PM (#8951924) Journal

          I am for that. If the information is not released after a reasonable amount of time, the company may never take responsibility for it being there. We've witnessed this several times from a certain big company. Also, the moment that the vulnerability goes public, there should be a side note that says "The company was repeatedly informed of this vulnerability over a span of X months, but chose not to improve the quality of their product."

          If massive numbers of users are infected by a virus created as a result of this announcement, then the company should be held completely responsible. They would have had months to address the issue, but chose not to.

        • by innocent_white_lamb ( 151825 ) on Friday April 23, 2004 @01:55PM (#8952210)
          You could always release it to the company whose product is affected and give them $suitable_time to fix the vulnerability before you post on Bugtraq.

          The obvious problem with that approach is that there is no guarantee that you, as the discoverer of the vulnerability, are the first or even the fiftieth person to discover it.

          Therefore, while you're being a nice guy and letting Company X have the time to repair their software, the other 49 (or 4,900) black hats are exploiting the hell out of other people's networks.

          Tell me there is a bug and no fix available yet, and I can take my systems offline, disable something, or at least consider some kind of protective action. Don't tell me, and I'm a sitting duck until someone bothers to make mention of it.

          The first option seems better to me.
        • by theLOUDroom ( 556455 ) on Friday April 23, 2004 @02:33PM (#8952585)
          You could always release it to the company whose product is affected and give them $suitable_time to fix the vulnerability before you post on Bugtraq. That way it isn't just you that's working on a fix, and you look like you've tried to be a responsible netizen when, having failed to fix the problem in $reasonable_time, their shit gets cracked to pieces. That has always been the responsible way of announcing vulnerabilities; I don't see that this changes the situation.

          Well, let me give you a hypothetical situation where this is NOT the reasonable solution:
          You discover an OS vulnerability, not by chance, but because someone exploited it to steal your online banking information. With a little research, you find out that the work is being done by some zombie net with thousands of nodes that will take forever to shut down.
          The OS vendor has a piss-poor security record and you KNOW that they will take forever to release a patch, but you've found a temporary fix that, while removing certain functionality, prevents the exploit.
          Should you:
          A) Post full disclosure immediately, allowing users to quick-fix their systems and preventing countless acts of information theft.
          B) Send an email to the vendor and wait while they tell you it's going to take six weeks to fix.

          The problem with your approach is that it assumes no one but the vendor can do anything about the problem. The user always has the choice to quit using the affected product.
          • Yea, the same users who don't install well known years old patches are going to search out and early adopt a patch from 'some guy'. Puhleeze.
            • Yea, the same users who don't install well known years old patches are going to search out and early adopt a patch from 'some guy'. Puhleeze.

              Those are the users who are going to get hosed no matter what. It doesn't matter if you choose A or B; they're still going to get owned.
              Since you can't do anything about them, you should be worried about the people who are actually going to do something once they hear the announcement.

          • by Anonymous Coward
            There's a fundamental difference between your scenario and the traditional vulnerability discovery: the existence of an attack in the wild. In your case, you are not so much writing up the discovery of a vulnerability as you are writing up a report on an attack that, in your scenario, happens to exploit a previously unknown vulnerability.

            Why is this important? Most people have this dualistic view of the world that tends to come down to the concept of inevitability. See, in your scenario, the attack is already out th
      • by dubdays ( 410710 ) on Friday April 23, 2004 @12:58PM (#8951524)

        That is an interesting question. I guess it would depend on where the vulnerability resides. For example, if the TCP problem could be fixed at routers of the internet backbone, then would it be beneficial for the public to have specific knowledge of the vulnerability? No, because it would lead to attacks before all equipment/software could be fixed.

        However, I can see how it would definitely be beneficial to release data to the public in other circumstances. Think about any/all OS-specific threats. If those aren't released to the public, no one would even have the opportunity to fix them.

        Ultimately, I would say that vulnerabilities that lie within the realm of the end user should be made public. Those threats that would undermine the entire internet infrastructure are probably best left in the hands of those who can be trusted (as much as possible) with the knowledge, because publicly documented threats do not only go into the hands of those who are benevolent.

      • Are you talking from a customer's standpoint, or a vendor's?

        At the place I work, if we find a severe bug, we personally call every company that has that version of the software and have them download an update. My company doesn't produce networking software, though, and we only have 50-75 companies running any given version of our product at one time. These are usually bugs that affect mathematical calculations or cause database corruption.

        From the point of view of a customer: if I found a serious fla
        • Publicly releasing the info would not benefit me in any way unless I was a security products vendor hoping to cash in on Cisco's failure.

          I mostly agree.

          If the vendor refuses, within a reasonable time frame, to fix the problem, taking it public may be the only way to force them to do so.

          You might also consider that while you may have a means of protection (vendor-supplied or home-grown), others are still vulnerable. And this may impact you. And thus it may be to your benefit to alert others.

          SteveM

    • by WwWonka ( 545303 ) on Friday April 23, 2004 @12:40PM (#8951336)
      I think the scary thing is the shrinking average time between a published vulnerability and a working, published exploit/worm.

      In the past it was well over thirty days, but recently it has decreased dramatically. With Microsoft's new policy of releasing patches only every thirty days (if there is a need for them), the window of opportunity for mass system compromise before a patch arrives only gets wider.
      • Of course a worm can be written much faster than a patch. A worm's test is its release: you don't have to write any documentation or be slowed by a development team, and you don't care about any side effects it may have (the more the better).
      • Man, I watched a guy write an exploit for this one. They're incredibly simple to write, and had he actually known C, or just written it in Perl, he would have had it done in less than four hours. He didn't release it, obviously, but anyone worthy of the title "coder" or "software developer" can probably do it in a matter of minutes with a respectable library of code that makes template packets for you to fill in the fields and takes command-line arguments. The hardest part seems to be working around the little
    • by Anonymous Coward on Friday April 23, 2004 @12:53PM (#8951477)
      Yes, "script kiddies" and amateur hackers will definitely continue exploiting already-widely-known vulnerabilities, and automated worm tools will make it easier for them to do it quickly.

      However, moderately talented hackers will still be able to find and exploit vulnerabilities before they are widely known (i.e. when they are known only to a handful of hackers and possibly the software vendor, but no public disclosure has been made). This latter group makes fewer headlines but is far more dangerous.

      Already, the industry is making noises that details about the nature of the exploit should not be made available--that the vendor should just release a patch and announce to their customers "Install this. We can't tell you why." As a customer, you don't know what component you're touching, you don't know what's changing, and you don't know how to test to see if the bug was actually fixed. Blindly installing unlabelled patches is the end result of this "disclosure creates exploits" discussion.

      Disclosure does not create exploits, however. Disclosure increases the ability of amateurs to add their exploits to the pile of existing exploits. Pros, generally speaking, don't write worms that hit the whole internet. Pros break into single systems and steal data. They don't make the news, but the damage they do is much worse.

      Don't buy the Microsoft-Symantec party line. Full disclosure helps more people than it hurts. The day you become vulnerable is the day you start using software with bugs, not the day the vendor is finally convinced to make a vulnerability announcement.
    • These days it's risky to release information about a security vulnerability without having a patch in place first. Look at Blaster - I believe that the author *used a security bulletin* to write the worm and then just targeted unpatched machines.

      I wonder if this is a variation of the argument regarding whether we should "Slow Down the Security Patch Cycle? [slashdot.org]"

      The story you tell about Blaster is similar to the Computer World story regarding the Witty worm [computerworld.com]:

      Until managed applications become the norm, h


    • It's not only risky, it's possibly actionable in a court of law (yet to be tested) if someone discovers a vulnerability and negligently releases it, causing panic and disruption. There are precedents in copyright and confidential information that reduce or penalise the defence of "in the public interest" where the disclosure was not responsible. For example, in some cases it is better to go to the police first. If the police don't resolve the problem in adequate time, then it's justified to take it a more public
      • I agree with the social contract aspect, but you're way off base with it being actionable to report flaws, at least in the US. The "precedents" you're talking about are where there's an existing contract, like an NDA, in place. You could _possibly_ be liable if you caused a panic and the exploit wasn't real. You ARE, of course, susceptible to scare tactics like DMCA and libel suits, where there's no real case but they want to pressure you to shut up.
  • by Anonymous Coward on Friday April 23, 2004 @12:30PM (#8951192)
    When will I be able to download a fixed version?
  • by Anonymous Coward on Friday April 23, 2004 @12:31PM (#8951206)
    The best kind!

    "What are you doing?"

    "Can't tell you."

    "When will you be done?"

    "Can't say."

    "Is there anything you can tell me?"

    "This will save your life."

    "Really?"

    "No."
    • ... "Okay, then you're fired."
  • Any user without the technical competence to inspect and repair TCP/IP packets on the fly should not be allowed to use the Internet. Such vulnerabilities only exist because people are too lazy and ignorant to download the patches for their Cisco routers!
    • I don't really know what to make of what you're saying. Okay, sentence one must be a joke, right? Sentence two: are you aware _many_ network technicians have to look after upwards of 100 routers?

      (It would be physically impossible to do what you're saying; you can't inspect and repair a "TCP/IP" packet. Block it, maybe, although blocking ACK packets would fundamentally break TCP/IP. Even if you could, don't'cha think you'd time out the socket before you could re-transmit?)
  • Paradox (Score:5, Funny)

    by Prince Vegeta SSJ4 ( 718736 ) on Friday April 23, 2004 @12:33PM (#8951247)
    The TCP issue publicized yesterday was publicly known as early as 1998

    Yesterday was 1998? Whew, I thought it was 2004 and 6 years of my life were wasted

  • Old News (Score:3, Informative)

    by ErichTheWebGuy ( 745925 ) on Friday April 23, 2004 @12:33PM (#8951248) Homepage
    This was reported [slashdot.org] three days ago by another reader.

    Yawn.
  • "We ran a story on a this a few days ago." What's a "this" ?
  • IPv6 (Score:2, Interesting)

    Does anyone know if this affects IPv6? I wonder if this situation could somehow be leveraged to promote it...
    • It would affect TCP/IP V6, but have no fear, UDP/IP is immune!!!
    • Re:IPv6 (Score:5, Informative)

      by leerpm ( 570963 ) on Friday April 23, 2004 @12:37PM (#8951289)
      No. TCP is a different layer from IPv6; the flaw is in TCP itself, not in IPv4 or IPv6.
    • Re:IPv6 (Score:5, Informative)

      by Anonymous Coward on Friday April 23, 2004 @12:41PM (#8951342)
      IPv6 is a layer below; that's why it's called TCP/IP. IPv6 is essentially an addressing scheme: how the bits are packed and how many bits are in an address. TCP sits above it; they are independent of each other. For more information, google "OSI Network Layer Model".
      • Re:IPv6 (Score:2, Informative)

        by Wicked187 ( 529065 )
        IPv6 is more than a drop-in replacement for IPv4 at Layer 3 of the OSI model. Everyone just assumes that since things map best to a model, the model is 100% accurate. TCP/IP is represented best by its own model, the DoD model. In this model IP falls into layer 2... TCP and UDP are layer 3. This really doesn't make much of a difference, though. In a layered communication model, each layer has to be aware of the layers directly above and below in order to properly pass the information on. Ethernet has to k
    • Re:IPv6 (Score:3, Informative)

      by tbaggy ( 151760 )
      IPv6 could be used to alleviate this by using IPv6 network-layer encryption. Still, it would be easier to just MD5 your BGP TCP sessions or fix the TCP stack with a patch than to move to IPv6.
    • Re:IPv6 (Score:2, Informative)

      IPv6 is immune, and in a grand display of irony, IPX/SPX is also safe.
  • according to this article [com.com] on C|net.

    From the article:
    "The actual threat to the Internet is really small right now," Watson said on Wednesday. "You could have isolated attacks against small networks, but they would most likely be able to recover quickly."

    • The assumption seems to be that someone would use this to take down sites or other disruption. Haven't there been cases of IP block hijacking using lax BGP security in the past? (Wasn't that how one company rerouted root DNS for a while several years back?)

      There have been cases of "fossil" IP blocks being hijacked in the last few years by spammers. (Sometimes as simple as registering an expired domain that an ancient contact email address points to.) They seem to be paying for malware to be written. Don't

  • by Anonymous Coward on Friday April 23, 2004 @12:37PM (#8951292)
    It's effective when used as the external skin of a layered approach.

    Some would say it should be disposed of entirely, in favor of the bugtraq, etc. totally-open approach.

    That approach is IMO foolish. Why throw away a useful layer of security? In 1992 it was debatable; the interim years have shown without a doubt that the totally-open approach produces more script kiddies than it does patched systems.
    • The whole TCP window thing seems entirely obvious to me; I just hadn't realised that windows were big enough to make guessing practical. As we start to see faster and faster transfers we'll need larger and larger windows, and this will just get easier.

      However, I can't see why BGP needs to implement a large window; in fact, in a device which needs to run as fast as possible it's surely disadvantageous.

      I've seen TCP RST attacks actually used on IRC in the mid-nineties; only the application of the exploit to BGP is n
      • ummm....

        You do realize that the window on OSes like BSD and Linux isn't anywhere near as big as it is on some of these routers? This has nothing to do with bandwidth.

        This exploit takes advantage of the fact that the router vendors have very predictable TCP implementations.
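
        A rough sketch of the arithmetic (illustrative numbers only, assuming the pre-fix behaviour where any RST whose sequence number lands inside the advertised receive window is accepted):

        # Blind RST packets needed to kill a TCP connection whose 4-tuple is
        # already known, as a function of the advertised receive window.
        SEQ_SPACE = 2**32  # size of the TCP sequence-number space
        for window in (16 * 1024, 64 * 1024, 256 * 1024):
            packets = SEQ_SPACE // window  # one in-window RST per window-sized bucket
            print(f"window {window // 1024:>3} KB -> ~{packets:,} packets for a guaranteed hit")

        With a 16 KB window that is roughly 262,000 packets; with a 256 KB window it drops to about 16,000, which is why large router windows matter here.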

    • by Anonymous Coward
      When was this "totally open" approach widely used? Sometime after 1992 I guess, right? Must have missed it.

      I think the whole reason the "totally open" approach exists, and will always exist, is to deal with unresponsive vendors. If a software vendor (who shall remain nameless) sits on a major security bug for six months, there is a problem with the "security through obscurity" model, and it's this--there is no profit motive to fix security bugs nobody knows about. If customers think they're safe, that'
    • by aug24 ( 38229 ) on Friday April 23, 2004 @01:17PM (#8951770) Homepage
      Rot. Non-full-disclosure has generally meant that we didn't have any progress at all cos the vendors typically wouldn't do jack till they had to.

      For instance, there was a mail on BugTraq not too long ago about a bug that the finder chased with whichever company it was for about six weeks. No reply. No acknowledgement, no fix. He gave up and went open - they fixed it in a week.

      Now, how many other people had found that bug and were trying to make an exploit out of it? What if he had kept schtum and the black hats had got in?

      That's what full-disclosure is for, to force vendors to fix stuff they could otherwise ignore.

      Justin.
    • This has nothing to do with security through obscurity, since the protocols and everything were open. Whether to publicly disclose flaws is a different topic, not security through obscurity. For example, you could decide to publicly disclose a Windows security hole while not disclosing a Linux one (until it's fixed).
    • 1. If you know who to keep in the loop, and who not to. If the blackhats get it early, you're even worse off than before because then people think they have more time to fix it. Obviously OSS is rarely a good candidate, since it's trivial to join mailing lists and such.

      2. If the company actually is responsive and bothers to fix it. Given time, black hats will find it and have a new and unknown exploit. If, in addition, they cover their tracks well, it could do considerable damage before it's found and fixed. So
    • The problem is that there is no reason to think that these things are not known before they are published. There are networks of black hats out there that know things and keep them quiet. By not publishing an exploit as soon as it's known, we increase the time these people have to use these exploits totally unchecked. At least if a vulnerability gets published, the responsible parties are forced to fix the problem due to constant harassment and the marginalization of their products. In the mean time wher
    • What you describe isn't the "security through obscurity" that many closed vendors try to get away with. After all, most username/password credential security schemes depend on this: you keep your password a secret (and, if you can, your username too!).

      However, in actual code and software systems, security through obscurity is weak and fallacious. Just because the general user base doesn't know a system is exploitable doesn't mean the system is secure. The exploit is there whether or not the users realize it. Op
  • by platypibri ( 762478 ) <platypibri@@@gmail...com> on Friday April 23, 2004 @12:40PM (#8951323) Homepage Journal
    Yes, I would prefer to know immediately if I was vulnerable. However, the vast majority of defense is against script kiddies who wait to have exploits handed to them so they can copy and paste some malicious code together to prove what "hackers" they are. Why should we tell them before there's a patch? I dunno. Hopefully someone smarter than me is working on it.
    • What we need is a not-for-profit organization with a team of researchers finding security holes in common software, with a pay-to-subscribe list where the security holes can be released to those that need the information before the script kiddies. Script kiddies probably wouldn't pay to get on this list and thus get the proof of concepts. There are other issues as well though... like what happens when some script kiddy does pay to get on and then leaks all the information to other script kiddies. Or when so
    • You're not vulnerable, because this only affects TCP connections with easily guessed source ports and known hosts that last for a significant amount of time and where things can't be restarted efficiently. This matters primarily for BGP which you don't use unless you're running a backbone (which is why they contacted the backbone providers first). The main other possibility is that if you have an SSH connection going for a long time, another user on either of the machines (who can get netstat info) who has
  • this is not uncommon (Score:5, Informative)

    by quelrods ( 521005 ) * <(quel) (at) (quelrod.net)> on Friday April 23, 2004 @12:40PM (#8951325) Homepage
    Usually people take it upon themselves to notify vendors of bugs and give them time to work on patches or workarounds before releasing the information. For anyone who reads full-disclosure lists such as Bugtraq, this is very common. Also, when the bug affects key internet infrastructure, the admins of big ISPs/colos/routers are informed and given time to patch. This is good for the internet and good for vulnerability researchers, who might otherwise look like malicious people who just want to destroy the internet.
    • One thing I find funny is that there have been many people who did due diligence and reported flaws to the vendor, and said vendor didn't bother to fix them.

      One must wonder why such companies don't assume that there may be dozens of other people that have independently discovered the exploit and are USING it rather than reporting it.
  • by Eagle5596 ( 575899 ) <slashUser.5596@org> on Friday April 23, 2004 @12:45PM (#8951387)
    Dilbert is in the Boss's office.

    Dilbert: I discovered a hole in our internet security.

    Boss: What?!!

    Boss: Good grief, man! How could you put a hole in our internet?

    Dilbert, angry: I didn't PUT it there, I FOUND it.. and it's not...

    Boss: It's your job to fix that hole. I want you to work 24-7!

    Dilbert: Actually, that's NOT my job. But I'll inform our network management group.

    Boss, yelling: PASSING THE BUCK!!! YOU'RE A BUCK PASSER!!!

    Dilbert: Forget it! There's no hole! It got better!

    Boss: That's more like it.

    Last panel, the boss is sitting alone smiling.

    Boss thinks: I fixed the internet.
  • Paranoia? (Score:3, Interesting)

    by dawg ball ( 773621 ) on Friday April 23, 2004 @12:51PM (#8951449) Homepage
    Is this just a case of paranoia reigning supreme? From what I understand of this problem (and it is very possible that I don't know all the details), it only poses a risk under a very specific set of circumstances, and that set of circumstances is not very common. Are we becoming ParaNET?

    • Hell man, I've been actively working on my paranoia and tin-foil-hat persona since I started writing code!

      In fact, these days I believe Bill Hicks was killed by the Beiderbeck Group using carcinogenic material supplied by the CIA under the orders of undead vampire George Bush Snr!

      Justin.
  • by osewa77 ( 603622 ) <naijasms@NOspaM.gmail.com> on Friday April 23, 2004 @12:51PM (#8951456) Homepage
    Many networks use home-grown routers based on Linux, FreeBSD, user-space TCP/firewall/VPN implementations, or even Windows. However, the vendor list only includes commercial router manufacturers. This seems to me like a serious problem waiting to happen; the would-be exploiter now knows which systems will remain unpatched for a long time.
    • by stratjakt ( 596332 ) on Friday April 23, 2004 @12:59PM (#8951545) Journal
      The problem affects mainly huge peering sessions between big routers, the kind that last for days. You can essentially trick the routers into dropping the peering sessions, leading to route flapping and other hassles.

      Big backbone providers don't generally use home-grown linux routers.

      It has no real bearing on some home/office router running linux made out of an old 486.
  • by Anonymous Coward on Friday April 23, 2004 @01:21PM (#8951825)
    And now you blew it! Thanks a lot.

    Now if you'll all step this way please...
  • by fremen ( 33537 ) on Friday April 23, 2004 @01:29PM (#8951917)
    This attack vector has been known for years, and the TCP windowing nonsense has too. Programs like tcpkill [brown.edu] have used the RST trick in conjunction with TCP INS windows for a while and have seen quite a bit of use. What's new with this attack that wasn't already in the wild?
  • trap? (Score:3, Interesting)

    by mabu ( 178417 ) on Friday April 23, 2004 @01:52PM (#8952169)
    I'm wondering why all the hooplah about this, especially after steps were taken to deal with it before publicizing it... unless at the same time, systems were put in place to ID attempts to exploit the vulnerability.

    That would make a lot more sense. Protect against the exploit, publicize it, then watch what happens to determine which groups are most adept at quickly exploiting published vulnerabilities and raid their location. Neat idea for a large-scale honeypot.

    Although, most of us know that the majority of exploits are now being deployed by spammers. They don't have any incentive to take major backbones down so this effort might just reveal a few more script kiddies that aren't really the problem.
    • That would make a lot more sense. Protect against the exploit, publicize it, then watch what happens to determine which groups are most adept at quickly exploiting published vulnerabilities and raid their location. Neat idea for a large-scale honeypot.


      The exploit depends on a spoofed source IP address. There's no way of tracing it back to the true source.
      • If you have access to all the routers along the way, it's no problem to trace it back with a few performance-degrading modifications. Once you have solved the problem, you could monitor all the interfaces along the way looking for the packets that exploit the flaw. This would be a massive undertaking; every major router on the net would have to be involved to trace it back interface by interface. It would likely be somewhere in the Far East on some rooted Irix box or something. And that still doesn't really ge
  • by chrysalis ( 50680 ) on Friday April 23, 2004 @02:01PM (#8952267) Homepage
    No, not everyone is vulnerable to the recently published vulnerability in the TCP protocol that allows attackers to shut down BGP sessions. Because Cisco hardware is vulnerable, stupid writers yell that the whole internet is vulnerable. Come on, Cisco is not the internet.

    As stated by Theo de Raadt and Henning Brauer, OpenBSD is not vulnerable because (quoting Henning):

    Even without TCP MD5, bgpd on OpenBSD is not affected, because:

    we use random ephemeral ports
    we do not use insanely huge window sizes as Cisco does
    we require the RST sequence number to be right on the edge of the window

    (quoting Theo):

    That is right. If you have a Cisco, you can tear down BGP sessions by spoofing:

    64K of SYNs or RSTs sent to #.#.#.#:179 -> #.#.#.#:{1024,+512,+512,...}

    The SYN and RST methods are different, but the end effect is that a tiny little burst of packets will cause a flap.

    OpenBSD (and I am sure other systems too) have for some time contained partial countermeasures against these things.

    OpenBSD has one other thing. The target port numbers have been random for quite some time. Instead of the Unix/Windows way of 1024, 1025, 1026, ... adding 1 to the port number each time a new local socket is established, we have been doing random selection for quite some time. That means a random selection between 1024 and 49151. This makes both these attacks 48,000 times harder; unless you already know the remote port number in question, you must now send 48,000 times more packets to effect a change.

    We've made a few post-3.5 changes of our own, since we are uncomfortable with the ACK-storm potential of the solutions being proposed by the UK and Cisco people; in-the-window SYNs or RSTs cause ACK replies, which are rate limited.

    It will have the most impact on vendors who do BGP over poor TCP stacks. In particular, Cisco.

    Cisco has not been teaching engineers to block SYNs coming in; they have only been teaching them to block SYN-ACKs from going out in return. And... well, you'll see.

    Ehm, actually OpenBSD is vulnerable. To quote Mike Frantzen: "The exploit has a one in 206,703,891,006,465 chance of succeeding. An exhaustive search would require 11,162,010,114,349,110 bytes of traffic, which would take 962 days at a saturated gigabit per second. Or two hundred years on a T1." :)
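
    The arithmetic behind those figures appears to be the full 32-bit sequence space multiplied by the randomized port range; a quick back-of-the-envelope check (my own sketch, assuming 54-byte minimal frames and a "gigabit" of 2**30 bits per second):

    # Back-of-the-envelope check of the quoted OpenBSD figures.
    SEQ_GUESSES = 2**32 - 1   # exact sequence match required, so ~2^32 guesses per port
    PORTS = 49151 - 1024      # OpenBSD's randomized ephemeral port range
    FRAME_BYTES = 54          # minimal Ethernet + IPv4 + TCP frame
    attempts = SEQ_GUESSES * PORTS        # 206,703,891,006,465
    traffic = attempts * FRAME_BYTES      # 11,162,010,114,349,110 bytes
    days = traffic * 8 / 2**30 / 86400    # roughly 962 days at a saturated "gigabit"
    print(f"{attempts:,} attempts, {traffic:,} bytes, {days:.1f} days")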

    • Come on, Cisco is not the internet.

      Well, no, and technically, the transformer substations aren't the power grid, and the switches aren't the phone network. But whether you're susceptible or not, it's not going to matter if your upstreams' routes dampen because they (like just about everyone) are using susceptible Cisco or Juniper routers.

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight

Working...