Security

XP/Vista IGMP Buffer Overflow — Explained

HalvarFlake writes "With all the hoopla about the remotely exploitable, kernel-level buffer overflow discussed in today's security bulletin MS08-001, what is the actual bug that triggers this? The bulletin doesn't give all that much information. This movie (Flash required) goes through the process of examining the 'pre-patch' version of tcpip.sys and comparing it against the 'post-patch' version of tcpip.sys. This comparison yields the actual code that causes the overflow: A mistake in the calculation of the required size in a dynamic allocation."
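
The actual tcpip.sys code is only shown in the video, but the bug class the submitter describes is well known. A minimal illustrative sketch with invented names (copy_records, RECORD_SIZE) and a deliberately narrow 16-bit size so the wrap is easy to see - not the actual tcpip.sys code:

    #include <stdlib.h>
    #include <string.h>

    #define RECORD_SIZE 16  /* hypothetical fixed per-record size */

    /* The allocation size is computed in 16-bit arithmetic, so a large
       record count wraps and the buffer comes out far too small. */
    void *copy_records(const unsigned char *src, size_t count)
    {
        unsigned short alloc_size = (unsigned short)(count * RECORD_SIZE); /* BUG: wraps */
        unsigned char *buf = malloc(alloc_size);
        if (buf == NULL)
            return NULL;

        /* The copy loop trusts `count`, not the wrapped allocation size,
           so once count * RECORD_SIZE exceeds 65535 it writes past the
           end of the undersized buffer. */
        for (size_t i = 0; i < count; i++)
            memcpy(buf + i * RECORD_SIZE, src + i * RECORD_SIZE, RECORD_SIZE);

        return buf;
    }

The general fix for this class of bug is to do the size arithmetic in a type wide enough for the worst case and to reject counts that would overflow it before allocating.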
  • well gee (Score:5, Funny)

    by sentientbrendan ( 316150 ) on Tuesday January 08, 2008 @11:23PM (#21964604)
    >This comparison yields the actual code that causes the overflow:
    >A mistake in the calculation of the required size in a dynamic allocation

    I hope no one else makes this mistake.
    • Event ID 4226 (Score:5, Informative)

      by Xenographic ( 557057 ) on Wednesday January 09, 2008 @12:39AM (#21965090) Journal
      Actually, there's one more thing they've screwed up. Anyone who has installed the Event ID 4226 [lvllord.de] patch to increase the allowed number of half-open connections so their BitTorrent speeds don't suck ass just had that patch undone by this new version of TCPIP.SYS.

      The only good thing is that, while the page hasn't been updated since 2006, the patch seems to work on the new TCPIP.SYS (I just tested it on my own machine).

      I realize I'm sort of hijacking the first post, but given how many of us are probably downloading Linux ISOs right now, I figured it's important enough that people wouldn't mind a reminder... :-] Oh, and I'll add one more detail not mentioned here. According to F-Secure, there haven't been any exploits for this found in the wild--yet.
      • Woah...

        Now, don't get me wrong. I think that's a really cool hack. I admire the effort.

        Seriously though, WTF? That's a rootkit technique. Changes of this nature should be made to source code, not binaries. It's way more maintainable and sustainable that way.
        • by Scoth ( 879800 ) on Wednesday January 09, 2008 @01:13AM (#21965252)
          While I don't necessarily disagree with you... feel free to release your patch to tcpip.c and give us a link to the updated source file as soon as you get a chance ;)

          Sometimes, if a closed-source vendor isn't going to release an update/fix/tweak, the community has to do what it can on its own. Given what many people use BitTorrent for, I suspect getting a rootkit from this patch is the least of their worries. The rest of us will either just have to trust it, use BT on a non-Windows platform, or deal with the slower speeds.

          This does bring up an interesting possibility - rather than completely reimplement Windows through something like ReactOS, or translate the API like WINE, how about replacing components of a real Windows install with F/OSS replacements? Drop in a workalike, but open source tcpip.sys and know where it's coming from.
          • This does bring up an interesting possibility - rather than completely reimplement Windows through something like ReactOS, or translate the API like WINE, how about replacing components of a real Windows install with F/OSS replacements? Drop in a workalike, but open source tcpip.sys and know where it's coming from.

            Actually, WINE and ReactOS both reimplement large sections of Windows in ways that can be used on native Windows too. ReactOS goes further because it reimplements the lower layers, where WINE uses emulation - but even in WINE the higher-level DLLs are implemented natively.
            I wouldn't be surprised if you could use the ReactOS version of tcpip.sys on a real Windows (although you may discover some bugs :-))

            • by Ed Avis ( 5917 )
              Yeah, the Wine guys were muttering about using their implementation of DirectX 10 on Windows XP, so gamers wouldn't have to upgrade to Vista to play the latest games. I don't know what became of that.

              A 'Wine for Windows' or 'ReactOS for Windows' distribution replacing certain Windows DLLs with their free equivalents would be a fun toy and a useful way to get more testing for these two projects. I'd install it at once... uh, on a spare machine...
              • by TheLink ( 130905 )
                Actually if the wine guys did release a decent DX10 for WinXP, it would make the other stuff easier for them.

                Because it means more people would stick to XP and thus the "goal posts" won't move as often or as much.

                If things go that way Microsoft might end up like a BIOS vendor :).

          • Why should I *slave* for micro$soft?

            Let them release the code for Win98 GPL and see how fast it surpasses Pista!
        • Rootkit? (Score:5, Informative)

          by Xenographic ( 557057 ) on Wednesday January 09, 2008 @01:17AM (#21965268) Journal
          > Seriously though, WTF? That's a rootkit technique.

          Rootkits use a lot of techniques that are also used by legitimate software. Yes, that patcher (and its patch) does get detected by a few anti-virus programs, because worms, like torrents, benefit from being able to connect to more peers. It's not a virus in and of itself, though; plenty of people have checked it out.

          > Changes of this nature should be made to source code, not binaries. It's way more maintainable and sustainable that way.

          I fully agree, but it's kinda hard to get the source for Microsoft programs. Last I heard, you had to be a big university, pay tons of money, sign NDAs, etc. Besides, this limitation wasn't an accident. It was a deliberate "feature" they put in because they thought it would slow down worms. They're not going to fix it just because people ask.
      • Re:Event ID 4226 (Score:5, Informative)

        by Jugalator ( 259273 ) on Wednesday January 09, 2008 @05:20AM (#21966286) Journal
        There is a lot of misinformation spread about the lvllord patch, though. The people using it often don't seem to have a good idea of what it actually does, or when it actually has any effect. This should be mandatory reading [64.233.183.104] before binary patching your system files...
    • Re:well gee (Score:5, Funny)

      by nizo ( 81281 ) * on Wednesday January 09, 2008 @02:04AM (#21965512) Homepage Journal
      It worked so well for Office 2003; perhaps Microsoft could create a patch that would keep the OS from opening insecure packets from other vendors and their older products?
  • by Ai Olor-Wile ( 997427 ) on Tuesday January 08, 2008 @11:26PM (#21964628) Homepage
    Hooray! Windows vulnerabilities are so commonplace now that there are public educational documentaries about their life-cycles and internals, so that the people can stay informed. Brilliant!
  • by EmbeddedJanitor ( 597831 ) on Tuesday January 08, 2008 @11:27PM (#21964630)
    OMG! I thought it might be a bug, but thankfully it's just a mistake!
  • by palegray.net ( 1195047 ) <philip DOT paradis AT palegray DOT net> on Tuesday January 08, 2008 @11:28PM (#21964644) Homepage Journal
    Darn pesky kids and their fancy buffer overflows. I oughta HEAP on the insults, but I'll try to stick to my PROGRAM of keeping my smoke STACK cool.

  • Slashvertisment (Score:3, Insightful)

    by Phlegethon_River ( 1136619 ) on Tuesday January 08, 2008 @11:34PM (#21964694)
    Yep, the submitter's email is from the company that stands to gain from more hits to this video (the ad at the end of the video).
    • Re:Slashvertisment (Score:5, Insightful)

      by QuantumG ( 50515 ) <qg@biodome.org> on Wednesday January 09, 2008 @12:15AM (#21964948) Homepage Journal
      So? He did something (some) people consider cool... why shouldn't he stand to gain from telling people about it?

      Slashvertisment used to mean that you were claiming Slashdot was taking money to advertise something as a story. You seem to be using it to refer to anyone who submits their own website to Slashdot. Attention whore? Yes. Slashvertisment? No.

  • Lol MS sux0rz! ph34r my 1337 h4x!1one

    Everyone should be forced to give up manual memory allocation regardless of the power it can afford.

    #include "fucktard_troll.h"

    Now that that's done with, I see things like this as an argument in favor of moving stuff off of the CPU and into dedicated hardware. Why should your CPU be tied up with things at this level? The absolutely overwhelming majority of all data on every network uses one of two network layer protocols (IPv4 or IPv6) and one of two transport layer protocols (TCP or UDP). Why shouldn't those four combinations be handled by hardware, so we can leave the computer to run the applications? We already do this with 3d rendering, why not networking?
    • by Anonymous Coward on Tuesday January 08, 2008 @11:49PM (#21964804)
      I see things like this as an argument in favor of moving stuff off of the CPU and into dedicated hardware. Why should your CPU be tied up with things at this level? The absolutely overwhelming majority of all data on every network uses one of two network layer protocols (IPv4 or IPv6) and one of two transport layer protocols (TCP or UDP). Why shouldn't those four combinations be handled by hardware, so we can leave the computer to run the applications? We already do this with 3d rendering, why not networking?

      Do you have any idea how many millions of ethernet cards have been sold? Are they all going to be made obsolete?

      These days CPUs are so fast that the minor overhead of a network driver is negligible, unless you're going to ultra-fast speeds (some high-performance network cards do offload this to hardware).

      However, you still could have buffer overflows in the network drivers/firmware.
      • Re: (Score:3, Insightful)

        by eht ( 8912 )
        The cards won't be made obsolete, any more than 2D cards were made obsolete; a number of my machines have 2D-only cards and they work fine for the large amount of non-gaming work I do.

        I don't think anyone advocates softmodems, so why do we tolerate mostly-soft network cards?
        • Re: (Score:3, Informative)

          by mr_mischief ( 456295 )
          Most Ethernet cards aren't "mostly soft". The network stack is, well, a stack. The physical layer and link layer are usually handled by the card. The stuff above that might be handled in firmware or a driver, but I'd rather not have IPv4 shoved onto my Ethernet card as the only option. Some cards have gone soft to cut costs, but mid- to high-end cards are all hard. High-end server cards often have IP acceleration built in, but leave other options open.
    • Software is more flexible than hardware. We have plenty of hardware to do the work, and the parts that benefit from offloading (e.g. checksumming) are already offloaded. No point to adding new hardware.
    • by Arainach ( 906420 ) on Wednesday January 09, 2008 @12:19AM (#21964964)
      Because TCP and UDP headers aren't of fixed sizes and as such are incredibly difficult to handle in hardware. Hardware switching has been tried - ATM for instance - but it's not that simple. TCP/IP was designed as a software protocol, and it's an unfortunate reality that some protocols are easily handled in hardware and others are not.

      IPv6 makes some steps towards having simpler hardware handling, but as long as IPv4 is still around, we won't see hardware switching become commonplace.
      • Re: (Score:3, Informative)

        by kelnos ( 564113 )

        Because TCP and UDP headers aren't of fixed sizes and as such are incredibly difficult to handle in hardware.

        UDP headers [wikipedia.org] are always 8 bytes long. TCP headers [wikipedia.org] are indeed not fixed-length, but will always be a multiple of 4 bytes, will always be at least 20 bytes, and there's a field in the first 20 bytes that tells how large the header is. All of this can certainly be interpreted by hardware, but, as usual, it's cheaper to do it in software.
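
        A minimal sketch of that check, assuming a hypothetical helper name (tcp_header_len); the Data Offset field sits in the upper 4 bits of byte 12 of the TCP header and counts 32-bit words:

            #include <stddef.h>
            #include <stdint.h>

            /* Returns the TCP header length in bytes, or -1 if the
               segment is too short or the length field is malformed. */
            static int tcp_header_len(const uint8_t *seg, size_t seg_len)
            {
                if (seg_len < 20)
                    return -1;                    /* shorter than a minimal header */

                int hdr_len = (seg[12] >> 4) * 4; /* data offset, converted to bytes */

                if (hdr_len < 20 || (size_t)hdr_len > seg_len)
                    return -1;                    /* malformed or truncated header */

                return hdr_len;
            }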

      Hear, hear. Once upon a time I designed a packetized format for data telemetry and storage. The design was straightforward, but it included a variable-sized record header (though always a multiple of 8) on top of variable-sized record payloads. I thought it was a good idea at the time, but it turned out to be problematic for S/W implementation, especially for inexperienced devs. We could have saved ourselves a lot of time and pain if I had made the record headers fixed-length. Mea culpa.

        Makes one appreciate just
        • Having to teach inexperienced devs how to deal with variable sized record payloads is better than having the hardware restrict your packet contents such that software features become impossible to implement...

          --jeffk++
          • by smcdow ( 114828 )
            It wasn't the variable-sized payloads that caused the bugs; they were fine with those. It was the variable-sized record headers that did the trick.

            Your point is taken, though. But I'm not ever expecting this data format to be subsumed into firmware.
    • Everyone should be forced to give up manual memory allocation regardless of the power it can afford.

      I beg your pardon? What is it you're suggesting in that respect, exactly?

    • by guruevi ( 827432 ) on Wednesday January 09, 2008 @12:27AM (#21965026)
      TCP/IP offloading is already done on-chip by several network cards. Spend $10-$50 more on a network card and you would get it. Off course a lot of TCP/IP is still handled in the kernel of the OS just because it is too flexible to be done on-chip. Off course, if you need more performance along the lines of firewalling or traffic shaping, you could get an external appliance that handles it.
      • by mugnyte ( 203225 )

          I think if you continue your "off course" comments, you'll never stop stating the obvious.
    • by Lisandro ( 799651 ) on Wednesday January 09, 2008 @12:30AM (#21965038)
      Because Ethernet is a physical component [wikipedia.org] of the networking chain; protocols other than TCP or UDP can be (and are!) implemented on top of it.

      Besides, networking is something that has barely taxed CPU power on every processor made from the Intel Pentium days to this date, unlike 3D acceleration. There's little justification to lose the flexibility provided by running it in software for a negligible CPU performance increase.

      And yes, hardware can be buggy too. There's a shitload of issues with specific hardware that are addressed in their device drivers - again, easier to solve in software than to fix in hardware. Even CPUs suffer from this.
    • Some Ethernet hardware can offload a number of expensive yet common operations. But it doesn't always work [google.com].
    • by gillbates ( 106458 ) on Wednesday January 09, 2008 @02:48AM (#21965710) Homepage Journal

      Because as we all know, manual memory allocation is hard to understand. Programmers shouldn't have to know basic math, right?

      Why don't we just make a language that does it automatically, and then we won't have any problems like this? Right?!

      Those of us who cut our teeth on assembly and C look at this and just wonder in wide-eyed amazement. A part of us wonders how anyone could be so negligent - but the other part knows how things work in proprietary software shops. (A hint - the management doesn't consider it a bug unless the customer notices it.) Yes, we've all done this before, but the solution isn't to create a language which dumbs down the programmer (Dude - you're writing directly to memory!!! You must be some kind of uber-hacker!!). Rather, there are steps you can take to virtually eliminate this kind of problem:

      1. A different language isn't the solution (cue the Java trolls). The problem is that the programmer did not know how to correctly allocate the buffer, didn't bother to calculate the size needed, or was just plain sloppy. A sloppy C programmer makes an even sloppier Java programmer; if one can't be bothered to understand the details, they won't be saved by switching to another language.
      2. People do make mistakes, and the field of software engineering knows this. That's why we advocate things like Formal Technical Reviews - where other engineers review the code you've written. Even if the author of this abomination was fresh out of college and didn't know any better, a thorough review would have caught the mistake.
      3. A good system test plan would have a) known that such vulnerabilities are common, and b) stress-tested the code for this very situation. One thing I like to do in testing is to put values into fields that are one larger than what the program expects (see the sketch after this list). Does it overflow? Does it crash? Does it correctly detect and properly handle the incorrect input? A good test program would have caught this bug even if the review had missed it.
      4. There are automated tools which can find buffer overflows, uninitialized variables, and the like. Why weren't they used? Or, perhaps they were...
      5. The most likely cause of this bug was not a sloppy programmer, or a bad choice of language (in fact, at this level, Java and C++ are pretty much out because of the performance issues), but rather a company that chose to forego the requisite design, review, and testing needed to produce a high-quality product. Microsoft's customers have become so accustomed to buggy software that releasing a bug like this - and patching it later - is par for the course. From a business perspective, a buffer overflow is probably considered nothing more than a contingency that has to be dealt with eventually, that need not stop a product from shipping.
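
      A minimal sketch of the boundary probe from point 3, assuming a simple length-prefixed record format; parse_record() is a hypothetical stand-in for the code under test, not anything from the bulletin:

          #include <assert.h>
          #include <stdint.h>
          #include <string.h>

          /* Hypothetical parser under test: reads a one-byte payload
             length, then that many payload bytes; returns -1 on
             malformed input. */
          extern int parse_record(const uint8_t *buf, size_t buf_len);

          static void boundary_probe(void)
          {
              uint8_t msg[5];
              msg[0] = 5;               /* length field claims 5 payload bytes... */
              memset(msg + 1, 0xAA, 4); /* ...but only 4 actually follow */

              /* A correct parser must notice that the claimed length
                 exceeds the data supplied, not read past the buffer. */
              assert(parse_record(msg, sizeof msg) == -1);
          }

      The same probe generalizes to any field that claims a size: lie by one and see what the code does.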

      You know, there was a time when formal methods were taught, when programmers were expected to know how to properly allocate and release memory. When things like calculating the size of the buffer, applying basic math(!) and testing your own code were considered just a part of the programmer's job. Now we're hearing people blame languages for the faults of the programmer.

      If I keep going, I suppose I'll start to sound like Bill Cosby. But consider this: the most reliable operating systems to date were built on C (UNIX) and assembly (MVS). If a bunch of old farts (well, perhaps they were young then...) can crank out correct, reliable, fast code without an IDE and a bunch of GUI tools, clearly the language is not to blame.

      The old adage still applies: a poor workman blames his tools. Software engineering works, regardless of the implementation language. This isn't a failure of the language or the environment, but rather a failure to do software engineering right:

      1. The programmer made the initial mistake, and
      2. Then no review of the code was performed, or all of the reviewers missed it, and
      3. No automated audit of the code was done, or
      • by WNight ( 23683 )
        You are right, but if you have to calculate buffer size manually

        buf_size = header_len + packetlen + sizelen + crclen + paddinglen
        my_buf = malloc(buf_size) // barf if my_buf is null
        memcpy(in_buf,my_buf,buf_size)

        there's simply a lot more to code than in Ruby. While in theory you can make it as safe, in practice you've simply got 8+ times as much code, checking it for correctness takes a lot longer.

        Similarly, in languages like Ruby you can iterate through a collection without loop variables, without writing ye
      • Re: (Score:3, Informative)

        by WNight ( 23683 )
        Pardon the other post - I forgot code with gt/lt symbols doesn't paste well...

        You are right, but if you have to calculate buffer size manually

        C:

        buf_size = header_len + packetlen + sizelen + crclen + paddinglen;
        my_buf = malloc(buf_size);
        if (NULL == my_buf) ... // barf if my_buf is null
        memcpy(my_buf, in_buf, buf_size); // destination first, then source


        there's simply a lot more to code than in Ruby. While in theory you can make it as safe, in practice you've simply got 8+ times as much code, and checking it for correctness takes a lot longer.

        Similarly,
        • by goose-incarnated ( 1145029 ) on Wednesday January 09, 2008 @08:24AM (#21966912) Journal

          char foo[20] = "test string"
          for (i=0;i < strlen(foo);i++) { ... foo[i] }
          You really should not be programming in C.
          Or, come to think of it, without supervision.

          • by WNight ( 23683 )
            I thought that you'd notice the clues that it was pseudo-code, such as "... // barf if my_buf is null"

            But why, specifically, is that code so bad?
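
            The usual objection to the quoted snippet is the strlen() call in the loop condition: it re-scans the string on every iteration, so a linear walk becomes quadratic work (and the first line also drops a semicolon). A corrected sketch of the same loop:

                #include <stddef.h>
                #include <string.h>

                static void walk_string(void)
                {
                    char foo[20] = "test string";
                    size_t len = strlen(foo);  /* scan the string once, not per pass */

                    for (size_t i = 0; i < len; i++) {
                        /* ... use foo[i] ... */
                        (void)foo[i];
                    }
                }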
      • Re: (Score:3, Informative)

        I was reading through your post and nodding, but then I realised that I just can't agree with your underlying argument. I think this is the part of your post that captures the essence of what I mean:

        You know, there was a time when formal methods were taught, when programmers were expected to know how to properly allocate and release memory. When things like calculating the size of the buffer, applying basic math(!) and testing your own code were considered just a part of the programmer's job. Now we're hearing people blame languages for the faults of the programmer.

        While this is all true, the problem with this argument is that it fails to account for no-one being perfect. If a certain type of error is known to have occurred a non-zero number of times, and other things being equal the models in a certain programming language make that type of error impossible, then that

      • by hey! ( 33014 ) on Wednesday January 09, 2008 @09:31AM (#21967326) Homepage Journal

        Because as we all know, manual memory allocation is hard to understand. Programmers shouldn't have to know basic math, right?


        This is a fallacy. By that argument, number theory is simple because arithmetic is easy, and numerical errors in computations should not occur because the people doing them have mastered the atomic operations.

        [motherhood and apple pie snipped]

        The old adage still applies: a poor workman blames his tools


        Because, in large part, poor workmen choose inappropriate tools.

        It makes no sense to argue assuming a false dichotomy (e.g., "should we use a dynamically typed language with garbage collection, or should we do software engineering?"). The question is how to build robust systems most economically.

        To that end, we have to ask two questions:
        (1) Does making the programmer responsible for memory allocation lead to errors?
        (2) Can taking the responsibility for routine memory allocation out of the programmer's hands create other issues?

        The answers are, yes and yes. It all comes down to cost, schedule and results. It is true that there is no system written in Java, Python or Ruby that could not, theoretically, be written with the same or greater quality in C or assembler. It is also true that there are some systems which are written in C or assembler that would be much more difficult, if not impossible to write in Java, although as the years roll in these are fewer.

        A few years back I was asked to look at an embedded system that was originally designed for the tracking of shipping containers. It used GPS and short bursts of sat phone comm to phone its position home. The client had an application which required that the positional data be secured from interception, ideally of course perfectly secured, but if the data could be protected for several hours that would be sufficient. It doesn't take much imagination to guess who the ultimate users of this would be and in what four letter country they wished to use it.

        The systems in question were programmable, but there was less than 50K of program storage and about 16K of heap/stack RAM we could play with. We did not have the option of altering the hardware in any way other than loading new programming onto it. The client was pretty sure they weren't going to be able to do it because there wasn't enough space. My conclusion was that while creating a robust protocol given the redundancy of the messages was a challenge, the programming part would be quite feasible in C or assembler. Of course, if I had the option of adding something like a cryptographic Java card to the system, the job of creating a robust protocol would have been greatly simplified.

        And ultimately, that's what software engineering amounts to: finding ways to greatly simplify what are otherwise dauntingly complicated problems. Yes, it takes more mojo to do it in assembler, but mojo is a resource like any other. Engineering is getting the most done for the least expenditure of resources.

        So the answer is that is good engineering to use Java or Python or Ruby where it simplifies your solution. It is good engineering to use C or assembler when they simplify your problem. It is bad engineering to choose a tool because using it proves you have large cojones.
      • Because as we all know, manual memory allocation is hard to understand.

        For me, memory allocation is dead simple. It's knowing when to free it that's the bear. In trivial cases where malloc() and free() are in the same function, that's a piece of cake. In more involved cases where buffers are working their way through multi-threaded code and it's not immediately clear which function will be the last one to touch a buffer (and therefore responsible for freeing it), it's a freakin' nightmare.

        I openly admit that I'm a flawed programmer. When everything's going well, I'm ve
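
        One common way out of the "which function frees it last?" problem described above is reference counting: every holder retains the buffer, and whoever drops the last reference frees it. A hypothetical C11 sketch (refbuf and its functions are invented names, not anything from the thread):

            #include <stdatomic.h>
            #include <stdlib.h>

            typedef struct {
                atomic_int refs;      /* how many owners still hold the buffer */
                size_t     len;
                unsigned char data[]; /* flexible array member for the payload */
            } refbuf;

            refbuf *refbuf_new(size_t len)
            {
                refbuf *b = malloc(sizeof *b + len);
                if (b != NULL) {
                    atomic_init(&b->refs, 1);
                    b->len = len;
                }
                return b;
            }

            void refbuf_retain(refbuf *b)
            {
                atomic_fetch_add(&b->refs, 1);    /* one more owner */
            }

            void refbuf_release(refbuf *b)
            {
                /* Whoever drops the count to zero was the last one to
                   touch the buffer, so the free happens exactly once. */
                if (atomic_fetch_sub(&b->refs, 1) == 1)
                    free(b);
            }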

      • But consider this: the most reliable operating systems to date were built on C (UNIX) and assembly (MVS).
        And BLISS (VMS).
    • by Anonymous Coward
      The absolutely overwhelming majority of all data on every network uses one of two network layer protocols (IPv4 or IPv6) and one of two transport layer protocols (TCP or UDP).

      You forgot ICMP. And even if you had remembered it, the bug was in IGMP, which is still not on your list, and would thus need to be implemented in software anyway. Sure, IGMP is not used that much, but it only takes one bad guy to send the packet that takes over your system.
    • Everyone should be forced to give up manual memory allocation regardless of the power it can afford.

      Considering that Firefox crashes whenever I happen to hit the "Insert" key when writing a reply on Slashdot, and randomly otherwise, I'm inclined to agree. Programmers, in general, are apparently incapable of dealing with memory management or bounds checking, so they should just use automation.

      Of course simply moving them to Java will just have them do things like starting threads from object constructo

      • Hrm... doesn't seem to crash for me. What version are you using? Or do you have your Insert key somehow tied to something else? Because I just hit Insert 20 or 30 times while typing this and nothing happened.
    • Forget these other retards. Your hardware idea is one of the best I've ever heard.

      Write it out in VHDL, get an FPGA, and take the proof of concept to someone with money. Any web server admin with half a brain can see why having your TCP/IP stack in hardware is preferable to software, even if it does replace the Ethernet card.

      Fantastic!!!

  • by PerfectSmurf ( 882935 ) on Tuesday January 08, 2008 @11:45PM (#21964766)
    Or you could read about it on the Security Vulnerability Research and Defense blog at http://blogs.technet.com/swi/ [technet.com]
  • by Junior J. Junior III ( 192702 ) on Tuesday January 08, 2008 @11:50PM (#21964812) Homepage
    This movie (Flash required) goes through the process of examining the 'pre-patch' version of tcpip.sys and comparing it against the 'post-patch' version of tcpip.sys. This comparison yields the actual code that

    See? And they said without FOSS, this couldn't be done!
    • by totally bogus dude ( 1040246 ) on Wednesday January 09, 2008 @12:02AM (#21964882)

      The difference is that if it was FOSS, they'd be able to see the comment saying "// this doesn't match the specs but it worked for me in the test I did, so the specs must be wrong."

      • Re: (Score:2, Interesting)

        by Hal_Porter ( 817932 )
        I dunno about that. That assumes the original programmer knew the code was incomplete. Most of the time code has sat around for ages and been looked at by hundreds of people without anyone thinking about a situation where it would fail. Admittedly it's a lot easier to fix code if you have the source code, but it doesn't make it any easier to spot bugs. Whoever said "many eyeballs make all bugs shallow" has never worked for a company with thousands of developers building real-time systems. Maybe it's true of
    • by mystik ( 38627 ) on Wednesday January 09, 2008 @12:44AM (#21965118) Homepage Journal
      The difference is that this is legally questionable. I'm pretty sure the license forbids reverse compilation and disassembly like this...

      With FOSS, you know exactly what your rights are.
    • Re: (Score:3, Interesting)

      by kevmatic ( 1133523 )
      Oh, sure, because traversing dozens of lines of "Mov EAX,$4B456E5" and whatever is comparable to looking at the original source code. Disassembly is a pretty poor tool for this sort of thing; you really need to start with it narrowed down, like this guy did by diffing it. Most of the time you'll be looking at whole executables if you want to do something like this.

      Also, though its educational purposes are undeniable and it certainly is interesting to say the least, what good is it? It can only be used to make one or
  • "It could be that the purpose of your life is only to serve as a warning to others." http://despair.com/mis24x30prin.html [despair.com]
