Security

XP/Vista IGMP Buffer Overflow — Explained 208

HalvarFlake writes "With all the hoopla about the remotely exploitable, kernel-level buffer overflow discussed in today's security bulletin MS08-001, what is the actual bug that triggers it? The bulletin doesn't give much information. This movie (Flash required) goes through the process of examining the 'pre-patch' version of tcpip.sys and comparing it against the 'post-patch' version. The comparison yields the actual code that causes the overflow: a mistake in the calculation of the required size in a dynamic allocation."
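The write-up describes the flaw only at a high level, but the pattern it names - deriving an allocation size from an attacker-influenced count and then copying more data than was actually reserved - can be sketched in C. All names and the record layout below are a hypothetical illustration of that pattern, not the actual tcpip.sys code.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical illustration of a size-calculation bug in a dynamic
     * allocation; this is NOT the real tcpip.sys code. */
    struct igmp_record {
        uint16_t n_sources;        /* count taken from the packet */
        const uint32_t *sources;   /* per-source addresses */
    };

    uint32_t *copy_sources_buggy(const struct igmp_record *r)
    {
        /* BUG: reserves n_sources bytes instead of n_sources elements,
         * so the copy below runs past the end of the buffer. */
        uint32_t *buf = malloc(r->n_sources);
        if (buf == NULL)
            return NULL;
        memcpy(buf, r->sources, (size_t)r->n_sources * sizeof(uint32_t));
        return buf;
    }

    uint32_t *copy_sources_fixed(const struct igmp_record *r)
    {
        /* Correct size calculation: element count times element size. */
        uint32_t *buf = malloc((size_t)r->n_sources * sizeof(uint32_t));
        if (buf == NULL)
            return NULL;
        memcpy(buf, r->sources, (size_t)r->n_sources * sizeof(uint32_t));
        return buf;
    }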
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Slashvertisment (Score:3, Insightful)

    by Phlegethon_River ( 1136619 ) on Tuesday January 08, 2008 @11:34PM (#21964694)
    Yep, the submitter's email is from the company that stands to gain from more hits to this video (the ad at the end of the video).
  • by eht ( 8912 ) on Tuesday January 08, 2008 @11:59PM (#21964860)
The cards won't be made obsolete, any more than 2D-only cards have been made obsolete; a number of my machines have 2D-only cards, and they work fine for the large amount of non-gaming work I do.

I don't think anyone advocates softmodems, so why do we tolerate mostly-software network cards?
  • Re:Slashvertisment (Score:5, Insightful)

    by QuantumG ( 50515 ) <qg@biodome.org> on Wednesday January 09, 2008 @12:15AM (#21964948) Homepage Journal
So? He did something (some) people consider cool... why shouldn't he stand to gain from telling people about it?

    Slashvertisment used to mean that you were claiming Slashdot was taking money to advertise something as a story. You seem to be using it to refer to anyone who submits their own website to Slashdot. Attention whore? Yes. Slashvertisment? No.

  • by Anonymous Coward on Wednesday January 09, 2008 @12:25AM (#21965006)
The problem is more fundamental than smarter network hardware; it's the CPU/memory architecture. Long ago, there were computers that had dedicated hardware for memory content management. Two schemes were used: segment descriptors and memory tag bits. The segment hardware checked that addresses for a data structure fell inside the segment's memory limits, and tag bits described memory contents (i.e. integer, float, pointer, etc.). This was in the days when logic and memory were much more expensive than they are today. These design choices made the machines much more reliable.

    Specifically I'm referring to Symbolics Lisp Machines and Burroughs stack machines, both of which had very low software failure rates. Even when a program crashed, the OS kept going. Note that both of these computers had all their main software written in high level languages that had automatic garbage collection that was integrated with the hardware memory support.

Unfortunately, the quest for performance eliminated these features. Realistically, without hardware support, software will never be very reliable. (Even with better hardware there will still be problems, but the current situation will never be very good.) Now that logic and memory are cheap and reliability is a critical issue, we should be considering putting resources into these kinds of reliability checks. What are we doing instead? Putting more cores on the die. Yeah, more multi-threading will make software even more reliable in the future.
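    A rough software analogue of what that hardware checked automatically is a "fat pointer" that carries its own bounds, so every access is range-checked. The sketch below is only an illustration of the idea; the Lisp Machine and Burroughs hardware performed equivalent checks transparently at the instruction level.

        #include <stdio.h>
        #include <stdlib.h>

        /* Rough software analogue of hardware bounds checking: a descriptor
         * that carries the limits of the region it points into. Illustrative
         * only; real segment descriptors and tag bits lived in hardware. */
        struct bounded_buf {
            int    *base;
            size_t  len;
        };

        static int checked_read(const struct bounded_buf *b, size_t index)
        {
            if (index >= b->len) {              /* the check the hardware made */
                fprintf(stderr, "out-of-bounds access at index %zu\n", index);
                abort();                        /* trap instead of corrupting memory */
            }
            return b->base[index];
        }

        int main(void)
        {
            int data[4] = { 1, 2, 3, 4 };
            struct bounded_buf b = { data, 4 };

            printf("%d\n", checked_read(&b, 2));   /* in bounds: prints 3 */
            printf("%d\n", checked_read(&b, 7));   /* out of bounds: aborts */
            return 0;
        }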
  • by Lisandro ( 799651 ) on Wednesday January 09, 2008 @12:30AM (#21965038)
Because Ethernet is a physical component [wikipedia.org] of the networking chain; protocols other than TCP or UDP can be (and are!) implemented on top of it.

Besides, networking barely taxes the CPU on any processor made since the Intel Pentium days, unlike 3D acceleration. There's little justification for losing the flexibility of running it in software to get a negligible CPU performance increase.

And yes, hardware can be buggy too. There's a shitload of issues with specific hardware that are addressed in their device drivers - again, easier to solve in software than to fix in hardware. Even CPUs suffer from this.
  • by mystik ( 38627 ) on Wednesday January 09, 2008 @12:44AM (#21965118) Homepage Journal
    The difference is that this is legally questionable. I'm pretty sure the license forbids reverse compilation and disassembly like this ....

    With FOSS, you know exactly what your rights are.
  • by Scoth ( 879800 ) on Wednesday January 09, 2008 @01:13AM (#21965252)
    While I don't necessarily disagree with you... feel free to release your patch to tcpip.c and give us a link to the updated source file as soon as you get a chance ;)

    Sometimes, if a closed-source vendor isn't going to release an update/fix/tweak, the community has to do what they can to do it. Given what many people use Bittorrent for, I suspect getting a rootkit from this patch is the least of their worries. The rest of us will either just have to trust it, use BT on a non-Windows platform, or deal with the slower speeds.

    This does bring up an interesting possibility - rather than completely reimplement Windows through something like ReactOS, or translate the API like WINE, how about replacing components of a real Windows install with F/OSS replacements? Drop in a workalike, but open source tcpip.sys and know where it's coming from.
  • by gillbates ( 106458 ) on Wednesday January 09, 2008 @02:48AM (#21965710) Homepage Journal

    Because as we all know, manual memory allocation is hard to understand. Programmers shouldn't have to know basic math, right?

    Why don't we just make a language that does it automatically, and then we won't have any problems like this? Right?!

Those of us who cut our teeth on assembly and C look at this and just wonder in wide-eyed amazement. Part of us wonders how anyone could be so negligent - but the other part knows how things work in proprietary software shops. (A hint - management doesn't consider it a bug unless the customer notices it.) Yes, we've all done this before, but the solution isn't to create a language which dumbs down the programmer (Dude - you're writing directly to memory!!! You must be some kind of uber-hacker!!). Rather, there are steps you can take to virtually eliminate this kind of problem:

    1. A different language isn't the solution (cue the Java trolls). The problem is that the programmer did not know how to correctly allocate the buffer, didn't bother to calculate the size needed, or was just plain sloppy. A sloppy C programmer makes an even sloppier Java programmer; if one can't be bothered to understand the details, they won't be saved by switching to another language.
2. People do make mistakes, and the field of software engineering knows this. That's why we advocate things like Formal Technical Reviews - where other engineers review the code you've written. Even if the author of this abomination was fresh out of college and didn't know any better, a thorough review would have caught the mistake.
3. A good system test plan would have a.) known that such vulnerabilities are common, and b.) stress tested the code for this very situation. One thing I like to do in testing is to put values into fields that are one larger than what the program expects (see the sketch after this list). Does it overflow? Does it crash? Does it correctly detect and properly handle the incorrect input? A good test program would have caught this bug even if the review had missed it.
    4. There are automated tools which can find buffer overflows, uninitialized variables, and the like. Why weren't they used? Or, perhaps they were...
5. The most likely cause of this bug was not a sloppy programmer or a bad choice of language (in fact, at this level, Java and C++ are pretty much out because of performance issues), but rather a company that chose to forgo the requisite design, review, and testing needed to produce a high-quality product. Microsoft's customers have become so accustomed to buggy software that releasing a bug like this - and patching it later - is par for the course. From a business perspective, a buffer overflow is probably considered nothing more than a contingency that has to be dealt with eventually, and that need not stop a product from shipping.
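    A minimal sketch of the kind of boundary test described in point 3, against a hypothetical length-prefixed parser (not Microsoft's code): feed the parser a length field one larger than its buffer can hold and make sure the input is rejected rather than copied.

        #include <assert.h>
        #include <stdint.h>
        #include <string.h>

        #define MAX_PAYLOAD 16

        /* Hypothetical length-prefixed message parser, used only to show the
         * boundary test; this is not the code from tcpip.sys. */
        static int parse_message(const uint8_t *pkt, size_t pkt_len,
                                 uint8_t out[MAX_PAYLOAD])
        {
            if (pkt_len < 1)
                return -1;
            size_t claimed = pkt[0];                  /* attacker-controlled length */
            if (claimed > pkt_len - 1 || claimed > MAX_PAYLOAD)
                return -1;                            /* reject oversized claims */
            memcpy(out, pkt + 1, claimed);
            return (int)claimed;
        }

        int main(void)
        {
            uint8_t out[MAX_PAYLOAD];

            /* One larger than the buffer can hold: must be rejected, not copied. */
            uint8_t bad[2 + MAX_PAYLOAD] = { MAX_PAYLOAD + 1 };
            assert(parse_message(bad, sizeof bad, out) == -1);

            /* Exactly at the limit: must still succeed. */
            uint8_t ok[1 + MAX_PAYLOAD] = { MAX_PAYLOAD };
            assert(parse_message(ok, sizeof ok, out) == MAX_PAYLOAD);
            return 0;
        }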

You know, there was a time when formal methods were taught, when programmers were expected to know how to properly allocate and release memory, when things like calculating the size of a buffer, applying basic math(!), and testing your own code were considered just part of the programmer's job. Now we're hearing people blame languages for the faults of the programmer.

    If I keep going, I suppose I'll start to sound like Bill Cosby. But consider this: the most reliable operating systems to date were built on C (UNIX) and assembly (MVS). If a bunch of old farts (well, perhaps they were young then...) can crank out correct, reliable, fast code without an IDE and a bunch of GUI tools, clearly the language is not to blame.

The old adage still applies: a poor workman blames his tools. Software engineering works, regardless of the implementation language. This isn't a failure of the language or the environment, but rather a failure to do software engineering right:

    1. The programmer made the initial mistake, and
    2. Then no review of the code was performed, or all of the reviewers missed it, and
    3. No automated audit of the code was done, or
  • by EvanED ( 569694 ) <{evaned} {at} {gmail.com}> on Wednesday January 09, 2008 @03:02AM (#21965748)
    "You mean it is something other than disassemble pre, disassemble post, diff?"

    There's a little bit of actually understanding the diff in there too. That's sort of the hard part.
  • by Anonymous Coward on Wednesday January 09, 2008 @04:06AM (#21966020)
    The absolutely overwhelming majority of all data on every network uses one of two network layer protocols (IPv4 or IPv6) and one of two transport layer protocols (TCP or UDP).

    You forgot ICMP. And even if you had remembered it, the bug was in IGMP, which is still not on your list, and would thus need to be implemented in software anyway. Sure, IGMP is not used that much, but it only takes one bad guy to send the packet that takes over your system.
  • by Gription ( 1006467 ) on Wednesday January 09, 2008 @04:16AM (#21966060)
    There is a real point to his argument. It also happens to be the real flaw in his argument...

The only real reason to "upgrade" something is if you need something more. For a business, "need" should be defined as something that performs a business function: making money, replacing labor, acquiring additional business-related information of value, etc. It has to do something you truly need. If all any business needs is a computer that runs a word processor, then he has a genuine point. But that assumes there is no other piece of software that serves a valid business need.

A number of pieces of software that fulfill very valuable ($$$) tasks have been written that require a later OS. Also, Win 95 is only stable if you have hardware with extremely good drivers under it, a limited number of processes/programs on top of it, and somewhat limited continuous-uptime requirements. That puts 95 a long way from being a one-size-fits-all solution. (I have one Win 95B station at my desk just to do drive data recovery and a few file tasks that XP doesn't want to let you do...)

Using that same logic, there isn't a valid reason for almost anyone to use Vista instead of XP. Plus there is the "business downside" of end users having to relearn how to use computers they already knew how to use.

Vista's big offerings are twofold:
- One is what I call the "raccoon" factor. Give people something bright and shiny and their eyes will roll back in their heads as they start to murmur, "Gimme, gimme, gimme...", with the words "It is new!" echoing softly in the background. It offers them nothing real, but it drives people amazingly hard. Look at the number of people who paid $100+ premiums to have an iPhone in the first week of release. A month later no one, including themselves, remembers that they got their phone early, and it certainly didn't pay any dividend for the expense, but they will do it again: they are raccoons!
- Two, Vista includes huge DRM underpinnings. After XP was released, Bill Gates publicly stated that the next version of Windows wouldn't be an OS but a Digital Rights Management platform. This does nothing for us but does plenty for Mickeysoft and the big media companies. I notice they aren't mentioning that fact any more either!

Basically, Microsoft wrote a new OS for themselves instead of for us, and they made it really visually flashy so the raccoon in all of us will want to roll our eyes back in our heads and buy it. The fact that they forgot to put anything we actually need in it has made its adoption really tank. The only real reason they have sold any volume of it is that you almost can't buy a computer without it. To help the process along, Microsoft has pushed for new hardware that doesn't have XP driver support, and you will start to see programming tools with limited or missing XP support.

We are coming up to a point where we could lose control of what is on our own computers! Vista is already trying to decide whether you should be able to access your own files that are already on your computer! Combine that with the limitations being rammed down our throats with HDTV, and we are looking at being consumers who buy things we have no control over. A computer could easily act as an HDTV 'VCR', because that is an amazingly simple function, but we have been forced to buy into a system where that isn't allowed. The only HDTV-VCR-like devices are subscription ($$) based!

You are being quietly guided into a world where you will tithe endlessly to corporations for simple things that in the past you could buy once and be done with. MS has tried to make the OS subscription-based (tithe). Limited-play media files are subscription-based (tithe). Buying a cell phone with an MP3 player in it that you will just replace in a year or two is ano
  • by Anonymous Coward on Wednesday January 09, 2008 @06:47AM (#21966588)

    But consider this: the most reliable operating systems to date were built on C (UNIX) and assembly (MVS).

    Most reliable? How do you measure that? I've never seen a Lisp machine with a kernel panic, but I've seen Unix machines of all sorts die this way. The B5000 series was also famously rock-solid, and their OS was written in an ALGOL dialect (possibly helped by an ALGOL-58 compiler written over a summer for an earlier Burroughs computer by a student named Donald Knuth).

    If you're using Unix as an example of what C can do, I'd say it's pretty good evidence that language *does* matter, and C fails.

    From a business perspective, a buffer overflow is probably considered nothing more than a contingency that has to be dealt with eventually, that need not stop a product from shipping.

    And it was probably the right decision. Look how many people paid good money for Windows XP (and then again with Vista). If people really didn't want poorly designed software, they wouldn't be buying it in such huge quantities.

    Then again, as long as people think that C is an acceptable language to write large programs in, the bar is set pretty low to begin with.
  • by goose-incarnated ( 1145029 ) on Wednesday January 09, 2008 @08:24AM (#21966912) Journal

    char foo[20] = "test string"
    for (i=0;i < strlen(foo);i++) { ... foo[i] }
    You really should not be programming in C.
    Or, come to think of it, without supervision.
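    Presumably the objection is that strlen(foo) is re-evaluated in the loop condition on every iteration (an O(n^2) scan overall), on top of the missing semicolon and the undeclared index. A more idiomatic version, caching the length once, might look like this sketch (assuming the loop body just consumes each character):

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            char foo[20] = "test string";

            /* Compute the length once instead of re-scanning the string in
             * the loop condition on every iteration. */
            size_t len = strlen(foo);
            for (size_t i = 0; i < len; i++)
                putchar(foo[i]);
            putchar('\n');
            return 0;
        }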

  • by Anonymous Brave Guy ( 457657 ) on Wednesday January 09, 2008 @09:00AM (#21967112)

    Please don't write posts like this if you're not going to back them up with reliable sources. Your personal views on the validity of EULAs in whatever jurisdiction you are in don't really count for much if the courts don't agree with you, and in any case are unlikely to be applicable universally.

  • by hey! ( 33014 ) on Wednesday January 09, 2008 @09:31AM (#21967326) Homepage Journal

    Because as we all know, manual memory allocation is hard to understand. Programmers shouldn't have to know basic math, right?


    This is a fallacy. By that argument, number theory is simple because arithmetic is easy, and numerical errors in computations should not occur because the people doing them have mastered the atomic operations.

    [motherhood and apple pie snipped]

    The old adage still applies: a poor workman blames his tools


    Because, in large part, poor workmen choose inappropriate tools.

    It makes no sense to argue assuming a false dichotomy (e.g., "should we use a dynamically typed language with garbage collection, or should we do software engineering?"). The question is how to build robust systems most economically.

    To that end, we have to ask two questions:
    (1) Does making the programmer responsible for memory allocation lead to errors?
    (2) Can taking the responsibility for routine memory allocation out of the programmer's hands create other issues?

The answers are yes and yes. It all comes down to cost, schedule and results. It is true that there is no system written in Java, Python or Ruby that could not, theoretically, be written with the same or greater quality in C or assembler. It is also true that there are some systems written in C or assembler that would be much more difficult, if not impossible, to write in Java, although as the years roll by these are fewer.

    A few years back I was asked to look at an embedded system that was originally designed for the tracking of shipping containers. It used GPS and short bursts of sat phone comm to phone its position home. The client had an application which required that the positional data be secured from interception, ideally of course perfectly secured, but if the data could be protected for several hours that would be sufficient. It doesn't take much imagination to guess who the ultimate users of this would be and in what four letter country they wished to use it.

The systems in question were programmable, but there was less than 50K of program storage and about 16K of heap/stack RAM we could play with. We did not have the option of altering the hardware in any way other than loading new programming onto it. The client was pretty sure it couldn't be done because there wasn't enough space. My conclusion was that while creating a robust protocol given the redundancy of the messages was a challenge, the programming part would be quite feasible in C or assembler. Of course, if I had the option of adding something like a cryptographic Java card to the system, the job of creating a robust protocol would have been greatly simplified.

    And ultimately, that's what software engineering amounts to: finding ways to greatly simplify what are otherwise dauntingly complicated problems. Yes, it takes more mojo to do it in assembler, but mojo is a resource like any other. Engineering is getting the most done for the least expenditure of resources.

So the answer is that it is good engineering to use Java or Python or Ruby where it simplifies your solution. It is good engineering to use C or assembler when they simplify your problem. It is bad engineering to choose a tool because using it proves you have large cojones.
