XP/Vista IGMP Buffer Overflow — Explained 208
HalvarFlake writes "With all the hoopla about the remotely exploitable, kernel-level buffer overflow discussed in today's security bulletin MS08-001, what is the actual bug that triggers this? The bulletin doesn't give all that much information. This movie (Flash required) goes through the process of examining the 'pre-patch' version of tcpip.sys and comparing it against the 'post-patch' version of tcpip.sys. This comparison yields the actual code that causes the overflow: a mistake in the calculation of the required size in a dynamic allocation."
Slashvertisment (Score:3, Insightful)
Re:Let's get the preliminary stuff out of the way. (Score:3, Insightful)
I don't think anyone advocates softmodems, so why do we tolerate mostly-soft network cards?
Re:Slashvertisment (Score:5, Insightful)
Slashvertisment used to mean that you were claiming Slashdot was taking money to advertise something as a story. You seem to be using it to refer to anyone who submits their own website to Slashdot. Attention whore? Yes. Slashvertisment? No.
Re:Let's get the preliminary stuff out of the way. (Score:1, Insightful)
Specifically I'm referring to Symbolics Lisp Machines and Burroughs stack machines, both of which had very low software failure rates. Even when a program crashed, the OS kept going. Note that both of these computers had all their main software written in high level languages that had automatic garbage collection that was integrated with the hardware memory support.
Unfortunately, the quest for performance eliminated these features. Realistically, without hardware support software will never be very reliable. (Even with better hardware there will still be problems, but the current situation will never be very good.) Now that logic and memory are cheap and reliability is a critical issue, we should be considering putting resources into these kinds of reliability checks. What are we doing instead? Putting more cores on the die. Yeah, more multi-threading will make software even more reliable in the future.
Re:Let's get the preliminary stuff out of the way (Score:4, Insightful)
Besides, unlike 3D acceleration, networking has barely taxed CPU power on any processor since the Intel Pentium days. There's little justification to lose the flexibility provided by running it in software for a negligible CPU performance increase.
And yes, hardware can be buggy too. There's a shitload of issues with specific hardware that are addressed in their device drivers - again, easier to solve in software than to fix in hardware. Even CPUs suffer from this.
Re:Windows is open-sores software (Score:4, Insightful)
With FOSS, you know exactly what your rights are.
Re:you BINARY PATCH core OS code??? (Score:5, Insightful)
Sometimes, if a closed-source vendor isn't going to release an update/fix/tweak, the community has to do what they can to do it. Given what many people use Bittorrent for, I suspect getting a rootkit from this patch is the least of their worries. The rest of us will either just have to trust it, use BT on a non-Windows platform, or deal with the slower speeds.
This does bring up an interesting possibility - rather than completely reimplementing Windows through something like ReactOS, or translating the API like WINE, how about replacing components of a real Windows install with F/OSS replacements? Drop in a workalike but open-source tcpip.sys and know where it's coming from.
Yes, let's do just that... (Score:4, Insightful)
Because as we all know, manual memory allocation is hard to understand. Programmers shouldn't have to know basic math, right?
Why don't we just make a language that does it automatically, and then we won't have any problems like this? Right?!
Those of us who cut our teeth on assembly and C look at this in wide-eyed amazement. Part of us wonders how anyone could be so negligent - but the other part knows how things work in proprietary software shops. (A hint - the management doesn't consider it a bug unless the customer notices it.) Yes, we've all done this before, but the solution isn't to create a language which dumbs down the programmer (Dude - you're writing directly to memory!!! You must be some kind of uber-hacker!!). Rather, there are steps you can take to virtually eliminate this kind of problem.
You know, there was a time when formal methods were taught, when programmers were expected to know how to properly allocate and release memory. When things like calculating the size of the buffer, applying basic math(!) and testing your own code were considered just a part of the programmer's job. Now we're hearing people blame languages for the faults of the programmer.
If I keep going, I suppose I'll start to sound like Bill Cosby. But consider this: the most reliable operating systems to date were built on C (UNIX) and assembly (MVS). If a bunch of old farts (well, perhaps they were young then...) can crank out correct, reliable, fast code without an IDE and a bunch of GUI tools, clearly the language is not to blame.
The old adage still applies: a poor workman blames his tools. Software engineering works, regardless of the implementation language. This isn't a failure of the language or the environment, but rather a failure to do software engineering right.
Re:Sounds like HowStuffWorks material! (Score:4, Insightful)
There's a little bit of actually understanding the diff in there too. That's sort of the hard part.
Re:Let's get the preliminary stuff out of the way. (Score:2, Insightful)
You forgot ICMP. And even if you had remembered it, the bug was in IGMP, which is still not on your list, and would thus need to be implemented in software anyway. Sure, IGMP is not used that much, but it only takes one bad guy to send the packet that takes over your system.
Re:Why Windows 95 and NT 4 are enough (Score:5, Insightful)
The only real reason to "upgrade" something is if you need something more. For business, need should be defined as something that performs a business function: making money, replacing labor, acquiring additional business-related information of value, etc. It has to do something you truly need. If all any business needed were a computer that runs a word processor, then he would have a genuine point - but that assumes there is no other piece of software serving a valid business need that anyone might have.
A number of pieces of software that fulfill very valuable ($$$) tasks have been written to require a later OS. Also, Win 95 is only stable if you have hardware with extremely good drivers under it, a limited number of processes/programs on top of it, and somewhat limited continuous-uptime requirements. This makes 95 a long way from being the one-size-fits-all solution. (I have one Win 95B station at my desk just to do drive data recovery and a few file tasks that XP doesn't want to let you do...)
Using that same logic there isn't a valid reason for almost anyone to use Vista instead of XP. Plus there is the "Business downside" of the end users having to relearn how to use computers that they already knew how to use.
Vista's big offerings are twofold:
- One is what I call the "raccoon" factor. Give people something bright and shiny and their eyes will roll back in their heads as they start to murmur, "Gimme, gimme, gimme..." while the words "It is new!" echo softly in the background. This offers them nothing real, but it drives people amazingly hard. Look at the number of people that paid $100+ premiums to have an iPhone in the first week of release. A month later no one, themselves included, remembers that they got their phone early, and it certainly didn't pay any dividend for the expense, but they will do it again: they are raccoons!
- Two, Vista includes huge DRM underpinnings. After XP was released, Bill Gates publicly stated that the next version of Windows wouldn't be an OS but instead a Digital Rights Management Platform. This does nothing for us but does plenty for Mickeysoft and the big media companies. I notice they aren't mentioning that fact any more either!
Basically Microsoft wrote a new OS for themselves instead of us and they made it really visually flashy so the raccoon in all of us will want to roll our eyes back in our head and buy it. The fact that they forgot to put anything we actually need in it has made its adoption really tank. The only real reason they have sold any volume of it is that you almost can't buy a computer without it. To help the process along Microsoft has pushed for new hardware that doesn't have XP driver support and you will start to see programming tools with limited or missing XP support.
We are coming up on a future where we could lose control of what is on our own computers! Vista is already trying to decide whether you should be able to access your own files that are already on your computer! Combine that with the limitations being rammed down our throats with HDTV, and we are looking at being consumers who buy things we have no control over. A computer could easily act as an HDTV 'VCR' because that is an amazingly simple function, but we have been forced to buy into a system where that isn't allowed. The only HDTV-VCR-like devices are subscription ($$) based!
You are being quietly guided into a world where you will tithe endlessly to corporations for simple things that in the past you could buy once and be done with. MS has tried to make the OS subscription based. (tithe) Limited-play media files are subscription based. (tithe) Buying a cell with an MP3 player in it that you will just replace in a year or two is another. (tithe)
Re:Yes, let's do just that... (Score:1, Insightful)
Most reliable? How do you measure that? I've never seen a Lisp machine with a kernel panic, but I've seen Unix machines of all sorts die this way. The B5000 series was also famously rock-solid, and their OS was written in an ALGOL dialect (possibly helped by an ALGOL-58 compiler written over a summer for an earlier Burroughs computer by a student named Donald Knuth).
If you're using Unix as an example of what C can do, I'd say it's pretty good evidence that language *does* matter, and C fails.
And it was probably the right decision. Look how many people paid good money for Windows XP (and then again with Vista). If people really didn't want poorly designed software, they wouldn't be buying it in such huge quantities.
Then again, as long as people think that C is an acceptable language to write large programs in, the bar is set pretty low to begin with.
Re:Yes, let's do just that... (Score:5, Insightful)
Or, come to think of it, without supervision.
Re:Windows is open-sores software (Score:4, Insightful)
Please don't write posts like this if you're not going to back them up with reliable sources. Your personal views on the validity of EULAs in whatever jurisdiction you are in don't really count for much if the courts don't agree with you, and in any case are unlikely to be applicable universally.
Re:Yes, let's do just that... (Score:4, Insightful)
This is a fallacy. By that argument, number theory is simple because arithmetic is easy, and numerical errors in computations should not occur because the people doing them have mastered the atomic operations.
[motherhood and apple pie snipped]
Because, in large part, poor workmen choose inappropriate tools.
It makes no sense to argue assuming a false dichotomy (e.g., "should we use a dynamically typed language with garbage collection, or should we do software engineering?"). The question is how to build robust systems most economically.
To that end, we have to ask two questions:
(1) Does making the programmer responsible for memory allocation lead to errors?
(2) Can taking the responsibility for routine memory allocation out of the programmer's hands create other issues?
The answers are yes and yes. It all comes down to cost, schedule and results. It is true that there is no system written in Java, Python or Ruby that could not, theoretically, be written with the same or greater quality in C or assembler. It is also true that there are some systems written in C or assembler that would be much more difficult, if not impossible, to write in Java - although as the years roll on, these are fewer.
A few years back I was asked to look at an embedded system that was originally designed for the tracking of shipping containers. It used GPS and short bursts of sat phone comm to phone its position home. The client had an application which required that the positional data be secured from interception, ideally of course perfectly secured, but if the data could be protected for several hours that would be sufficient. It doesn't take much imagination to guess who the ultimate users of this would be and in what four letter country they wished to use it.
The systems in question were programmable, but there was less than 50K of program storage and about 16K of heap/stack RAM we could play with. We did not have the option of altering the hardware in any way other than loading new programming onto it. The client was pretty sure they weren't going to be able to do it because there wasn't enough space. My conclusion was that while creating a robust protocol given the redundancy of the messages was a challenge, the programming part would be quite feasible in C or assembler. Of course, if I had the option of adding something like a cryptographic Java card to the system, the job of creating a robust protocol would have been greatly simplified.
And ultimately, that's what software engineering amounts to: finding ways to greatly simplify what are otherwise dauntingly complicated problems. Yes, it takes more mojo to do it in assembler, but mojo is a resource like any other. Engineering is getting the most done for the least expenditure of resources.
So the answer is that it is good engineering to use Java or Python or Ruby where it simplifies your solution. It is good engineering to use C or assembler where they simplify your solution. It is bad engineering to choose a tool because using it proves you have large cojones.