
 



Security

XP/Vista IGMP Buffer Overflow — Explained 208

HalvarFlake writes "With all the hoopla about the remotely exploitable, kernel-level buffer overflow discussed in today's security bulletin MS08-001, what is the actual bug that triggers this? The bulletin doesn't give all that much information. This movie (Flash required) goes through the process of examining the 'pre-patch' version of tcpip.sys and comparing it against the 'post-patch' version of tcpip.sys. This comparison yields the actual code that causes the overflow: a mistake in the calculation of the required size in a dynamic allocation."
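Neither the bulletin nor the video reprints the offending code, but the bug class it describes, an attacker-influenced size calculation feeding a dynamic allocation, is easy to sketch. The following is a generic, hypothetical illustration of that class, not the actual tcpip.sys code:

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical sketch of the bug *class* described above, NOT the real
 * tcpip.sys code: the byte count for a dynamic allocation is derived
 * from attacker-controlled fields, and the arithmetic can come up short. */
void *alloc_for_records(uint16_t count, size_t record_size) {
    /* BUG: the multiplication can wrap around, yielding a buffer
     * smaller than the data that is later copied into it. */
    size_t bytes = (size_t)count * record_size;
    return malloc(bytes);
}

/* Fixed variant: refuse the request if the multiplication would overflow. */
void *alloc_for_records_checked(uint16_t count, size_t record_size) {
    if (record_size != 0 && count > SIZE_MAX / record_size)
        return NULL;    /* would overflow: do not allocate a short buffer */
    return malloc((size_t)count * record_size);
}
```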
This discussion has been archived. No new comments can be posted.


  • by PerfectSmurf ( 882935 ) on Tuesday January 08, 2008 @11:45PM (#21964766)
    Or you could read about it on the Security Vulnerability Research and Defense blog at http://blogs.technet.com/swi/ [technet.com]
  • by Anonymous Coward on Tuesday January 08, 2008 @11:49PM (#21964804)
    I see things like this as an argument in favor of moving stuff off of the CPU and into dedicated hardware. Why should your CPU be tied up with things at this level? The absolutely overwhelming majority of all data on every network uses one of two network layer protocols (IPv4 or IPv6) and one of two transport layer protocols (TCP or UDP). Why shouldn't those four combinations be handled by hardware, so we can leave the computer to run the applications? We already do this with 3d rendering, why not networking?

    Do you have any idea how many millions of ethernet cards have been sold? Are they all going to be made obsolete?

    These days CPUs are so fast that the minor overhead of a network driver is negligible unless you're running at ultra-fast speeds (some high-performance network cards do offload this work to hardware).

    However, you still could have buffer overflows in the network drivers/firmware.
  • by Arainach ( 906420 ) on Wednesday January 09, 2008 @12:19AM (#21964964)
    Because TCP and UDP headers aren't of fixed sizes and as such are incredibly difficult to handle in hardware. Hardware switching has been tried - ATM for instance - but it's not that simple. TCP/IP was designed as a software protocol, and it's an unfortunate reality that some protocols are easily handled in hardware and others are not.

    IPv6 makes some steps towards having simpler hardware handling, but as long as IPv4 is still around, we won't see hardware switching become commonplace.
  • by guruevi ( 827432 ) on Wednesday January 09, 2008 @12:27AM (#21965026)
    TCP/IP offloading is already done on-chip by several network cards. Spend $10-$50 more on a network card and you get it. Of course, a lot of TCP/IP is still handled in the OS kernel, simply because it is too flexible to be done on-chip. And if you need more performance along the lines of firewalling or traffic shaping, you can get an external appliance that handles it.
  • Event ID 4226 (Score:5, Informative)

    by Xenographic ( 557057 ) on Wednesday January 09, 2008 @12:39AM (#21965090) Journal
    Actually, there's one more comparison they've screwed up. Anyone who has installed the Event ID 4226 [lvllord.de] patch to increase the allowed number of half-open connections so their BitTorrent speeds don't suck ass just had that patch undone by this new version of TCPIP.SYS.

    The only good thing is that, while the page hasn't been updated since 2006, the patch seems to work on the new TCPIP.SYS (I just tested it on my own machine).

    I realize I'm sort of hijacking the first post, but given how many of us are probably downloading Linux ISOs right now, I figured it's important enough that people wouldn't mind a reminder... :-] Oh, and I'll add one more detail not mentioned here. According to F-Secure, there haven't been any exploits for this found in the wild--yet.
  • Rootkit? (Score:5, Informative)

    by Xenographic ( 557057 ) on Wednesday January 09, 2008 @01:17AM (#21965268) Journal
    > Seriously though, WTF? That's a rootkit technique.

    Rootkits use a lot of techniques that are also used by legitimate software. Yes, that patcher (and its patch) does get detected by a few anti-virus programs, because worms, like torrents, benefit from being able to connect to more peers. It's not a virus in and of itself, though; plenty of people have checked it out.

    > Changes of this nature should be made to source code, not binaries. It's way more maintainable and sustainable that way.

    I fully agree, but it's kinda hard to get the source for Microsoft programs. Last I heard, you had to be a big university, pay tons of money, sign NDAs, etc. Besides, this limitation wasn't an accident. It was a deliberate "feature" they put in because they thought it would slow down worms. They're not going to fix it just because people ask.
  • Because TCP and UDP headers aren't of fixed sizes and as such are incredibly difficult to handle in hardware.
    UDP headers [wikipedia.org] are always 8 bytes long. TCP headers [wikipedia.org] are indeed not fixed-length, but will always be a multiple of 4 bytes, will always be at least 20 bytes, and there's a field in the first 20 bytes that tells how large the header is. All of this can certainly be interpreted by hardware, but, as usual, it's cheaper to do it in software.
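To illustrate how simple that interpretation is: the TCP header length lives in the upper 4 bits of byte 12 (the "data offset" field), counted in 32-bit words, so decoding it is a shift and a multiply. A minimal sketch, assuming a raw TCP segment in memory and doing no validation against the actual segment length:

```c
#include <stdint.h>
#include <stddef.h>

/* Extract the TCP header length from a raw segment. The data offset
 * field is the high nibble of byte 12, measured in 32-bit words, so
 * the result is always a multiple of 4 and at least 20 for a legal
 * header (this sketch does not enforce the minimum). */
size_t tcp_header_len(const uint8_t *segment) {
    return (size_t)(segment[12] >> 4) * 4;   /* words -> bytes */
}
```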
  • by mr_mischief ( 456295 ) on Wednesday January 09, 2008 @02:00AM (#21965494) Journal
    Most Ethernet cards aren't "mostly soft". The network stack is, well, a stack. The physical layer and link layer are usually handled by the card. The stuff above that might be handled in firmware or a driver, but I'd rather not have IPv4 shoved onto my Ethernet card as the only option. Some cards have gone soft to cut costs, but mid to high end cards are all hard. High-end server cards often have IP acceleration built in, but leave other options open.
  • by Junior J. Junior III ( 192702 ) on Wednesday January 09, 2008 @02:24AM (#21965608) Homepage
    Geez, I can't believe how many people took my grandparent post seriously, like I was actually advocating that you can audit the source code of closed-source software for security holes by decompiling it. Well, I mean, you could, but it'd be fairly ridiculous.
  • Re:Event ID 4226 (Score:5, Informative)

    by Jugalator ( 259273 ) on Wednesday January 09, 2008 @05:20AM (#21966286) Journal
    There is a lot of misinformation spread about the lvllord patch, though. The people using it often don't seem to have a good idea of what it actually does, or when it is actually in effect. This should be mandatory reading [64.233.183.104] before binary patching your system files...
  • by WNight ( 23683 ) on Wednesday January 09, 2008 @05:58AM (#21966424) Homepage
    Pardon the other post - I forgot code with gt/lt symbols doesn't paste well...

    You are right, but if you have to calculate buffer size manually

    C:

    buf_size = header_len + packetlen + sizelen + crclen + paddinglen;
    my_buf = malloc(buf_size);
    if (NULL == my_buf)
        return;                         /* barf if the allocation failed */
    memcpy(my_buf, in_buf, buf_size);   /* destination first, then source */


    there's simply a lot more code than in Ruby. In theory you can make it just as safe; in practice you've got 8+ times as much code, and checking it for correctness takes a lot longer.
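For what it's worth, the C sum itself can be written defensively. A sketch of an overflow-checked accumulation (the idea of passing the length fields as an array is hypothetical, just to keep the example short):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

/* Overflow-checked version of the size sum above: add the length
 * fields one at a time and fail if any addition would wrap around,
 * which is exactly the kind of mistake that produces an undersized
 * allocation and, later, a buffer overflow. */
bool checked_buf_size(const size_t *parts, size_t n, size_t *out) {
    size_t total = 0;
    for (size_t i = 0; i < n; i++) {
        if (parts[i] > SIZE_MAX - total)  /* total + parts[i] would wrap */
            return false;
        total += parts[i];
    }
    *out = total;
    return true;
}
```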

    Similarly, in languages like Ruby you can iterate through a collection without loop variables, without writing yet another for loop.

    C:

    char foo[20] = "test string";
    for (i = 0; i < strlen(foo); i++) { ... foo[i] ... }


    Ruby:

    foo = "test string"
    foo.each_char {|c| ... c }


    This savings is exaggerated if you write more complex code:

    a = []
    10.times { a << (rand * 100).to_i }
    puts a.collect {|n| n * 3 }.collect {|n| ('1' + n.to_s).to_i }.sort_by {|n| n % 5 }.inspect

    prints: [1105, 190, 1195, 1120, 1135, 166, 187, 163, 1168, 1183]

    No buffer checking needed - if it fails to allocate it'll die cleanly at least. Or you can catch the exception and do whatever you want.

    There's no need to write in C unless you need its features. There's just too much code, and with that code, more chance of errors - not to mention that it's harder code...

    When testing a buffer, throwing something a bit longer at it is good. I tend to just copy a whole slashdot discussion or something else huge and try to paste it into every control I can. That catches the programmers who just allocate large static buffers.

    Programmer: "You can't send back a 200k web request! That form only allowed 300 characters."
    Me: "Yes, until I used the Firefox DOM viewer to change it - just like a hacker would. Verify your input!"
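The server-side half of "verify your input" can be as simple as re-checking lengths before doing anything else with a field. A hypothetical sketch; the 300-character limit mirrors the form in the anecdote:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Never trust a client-side maxlength attribute: re-validate on the
 * server. MAX_FIELD_LEN is a hypothetical limit matching the form's
 * 300 characters. */
#define MAX_FIELD_LEN 300

bool field_ok(const char *value) {
    /* strnlen stops scanning after the limit, so a 200k payload is
     * rejected without walking its full length. */
    return value != NULL
        && strnlen(value, MAX_FIELD_LEN + 1) <= MAX_FIELD_LEN;
}
```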
  • by Anonymous Brave Guy ( 457657 ) on Wednesday January 09, 2008 @08:53AM (#21967064)

    I was reading through your post and nodding, but then I realised that I just can't agree with your underlying argument. I think this is the part of your post that captures the essence of what I mean:

    You know, there was a time when formal methods were taught, when programmers were expected to know how to properly allocate and release memory. When things like calculating the size of the buffer, applying basic math(!) and testing your own code were considered just a part of the programmer's job. Now we're hearing people blame languages for the faults of the programmer.

    While this is all true, the problem with this argument is that it fails to account for no-one being perfect. If a certain type of error is known to have occurred a non-zero number of times, and other things being equal the models in a certain programming language make that type of error impossible, then that programming language is unambiguously safer than one that doesn't prevent the error. It might only prevent it once in a blue moon when a really great programmer is using it, and probably a few more times when someone who thinks they're a really great programmer is using it, but it still prevents errors. Pride comes before a fall, and choosing a programming language that is unnecessarily vulnerable to certain classes of programmer error because you believe you're too good to ever make them is like tying your own shoelaces together before running a marathon.

    But consider this: the most reliable operating systems to date were built on C (UNIX) and assembly (MVS). If a bunch of old farts (well, perhaps they were young then...) can crank out correct, reliable, fast code without an IDE and a bunch of GUI tools, clearly the language is not to blame.

    And if a bunch of old farts had cranked out correct, reliable, fast code in that way, I'd be impressed. Since this has almost never been achieved in the entire history of software development, however, this doesn't tell us much. These are, after all, the same old farts who brought us joys like the gets library function in C. (If you're Donald Knuth, I'll acknowledge that you're an exception on the correct, reliable, fast count, but we really need to talk about usability.)

    Your answer to these ailments appears to work exclusively on a "cure" basis: use better processes so more people look at things, use better tools to pick up errors, etc. But prevention is better than cure. If you're using a programming language where the problem simply can't happen, you know the reviews and tools won't miss anything.

    The old adage still applies: a poor workman blames his tools.

    Perhaps. But a good workman knows his tools, and chooses the right one for the job.

    While there certainly is an ethical issue at work here, the problem with buffer overflows can usually be avoided through purely technical means. In the context of a TCP/IP stack, I question whether it's really necessary to resort to known error-prone implementation technologies. We're not talking about an OS kernel or some ludicrously high performance mathematical algorithm, and any performance penalties associated with using a slightly safer language would surely be negligible.

    (Incidentally, I work on high performance maths software, typically with fairly low level languages. We do use reviews and automated tools, and they tell me I don't personally make the kind of memory management error you're criticising, so I have no axe to grind here nor any particular bias towards high level languages when they're not appropriate.)

  • by Nursie ( 632944 ) on Wednesday January 09, 2008 @02:12PM (#21971360)
    OK, this is the post my original comment was aimed at. A linux LiveCD (like Ubuntu installation media) or a linux machine will do this stuff *very* well indeed. It'll give you full access to FAT or NTFS drives, allow you to copy what you like, up to and including full drive images*.

    There's no issue with windows systems that may be rooted or infected because the stuff just won't run. What do your low level DOS utils do?

    I must mention here, too, that a lot of the tools provided in Linux are intuitive and easy to use. "gparted" is a godsend.

    *(which is easy BTW, dd if=/dev/disktocopy of=./imagefile, restore by switching if and of)

    (if you're happy using Win95, that's fine with me, I just felt like getting in a bit of Linux advocacy seeing as I'm using it loads for disk and filesystem stuff at the moment)
