XP/Vista IGMP Buffer Overflow — Explained
HalvarFlake writes "With all the hoopla about the remotely exploitable, kernel-level buffer overflow discussed in today's security bulletin MS08-001, what is the actual bug that triggers this? The bulletin doesn't give all that much information. This movie (Flash required) goes through the process of examining the 'pre-patch' version of tcpip.sys and comparing it against the 'post-patch' version of tcpip.sys. This comparison yields the actual code that causes the overflow: a mistake in the calculation of the required size in a dynamic allocation."
How about http://blogs.technet.com/swi/ (Score:4, Informative)
Re:Let's get the preliminary stuff out of the way. (Score:4, Informative)
Do you have any idea how many millions of ethernet cards have been sold? Are they all going to be made obsolete?
These days CPUs are so fast that the minor overhead of a network driver is negligible, unless you're going to ultra-fast speeds (some high-performance network cards do offload this to hardware).
However, you still could have buffer overflows in the network drivers/firmware.
Re:Let's get the preliminary stuff out of the way. (Score:5, Informative)
IPv6 makes some steps towards having simpler hardware handling, but as long as IPv4 is still around, we won't see hardware switching become commonplace.
Re:Let's get the preliminary stuff out of the way. (Score:5, Informative)
Event ID 4226 (Score:5, Informative)
The only good thing is that, while the page hasn't been updated since 2006, the patch seems to work on the new TCPIP.SYS (I just tested it on my own machine).
I realize I'm sort of hijacking the first post, but given how many of us are probably downloading Linux ISOs right now, I figured it's important enough that people wouldn't mind a reminder...
Rootkit? (Score:5, Informative)
Rootkits use a lot of techniques that are also used by legitimate software. Yes, that patcher (and its patch) does get detected by a few anti-virus programs, because worms, like torrents, benefit from being able to open more simultaneous connections. It's not a virus in and of itself, though; plenty of people have checked it out.
> Changes of this nature should be made to source code, not binaries. It's way more maintainable and sustainable that way.
I fully agree, but it's kinda hard to get the source for Microsoft programs. Last I heard, you had to be a big university, pay tons of money, sign NDAs, etc. Besides, this limitation wasn't an accident. It was a deliberate "feature" they put in because they thought it would slow down worms. They're not going to fix it just because people ask.
Re:Let's get the preliminary stuff out of the way. (Score:3, Informative)
Re:Let's get the preliminary stuff out of the way. (Score:3, Informative)
Re:Windows is open-sores software (Score:4, Informative)
Re:Event ID 4226 (Score:5, Informative)
Re:Yes, let's do just that... (Score:3, Informative)
You are right, but if you have to calculate the buffer size manually,
C:
size_t buf_size = header_len + packet_len + size_len + crc_len + padding_len;
char *my_buf = malloc(buf_size);
if (my_buf == NULL)
    return -1;                    /* handle allocation failure */
memcpy(my_buf, in_buf, buf_size); /* destination first, then source */
there's simply a lot more to code than in Ruby. While in theory you can make it just as safe, in practice you've simply got 8+ times as much code, and checking it for correctness takes a lot longer.
Similarly, in languages like Ruby you can iterate through a collection without loop variables, without writing yet another for loop.
C:
char foo[20] = "test string";
for (size_t i = 0; i < strlen(foo); i++) {
    putchar(foo[i]);
}
Ruby:
foo = "test string"
foo.each_char {|c| print c }
These savings are magnified as the code gets more complex:
a = []
10.times { a << (rand * 100).to_i }
puts a.collect {|n| n * 3 }.collect {|n| ('1' + n.to_s).to_i }.sort_by {|n| n % 5 }.inspect
prints something like (the values depend on rand): [1105, 190, 1195, 1120, 1135, 166, 187, 163, 1168, 1183]
No buffer checking needed - if it fails to allocate it'll die cleanly at least. Or you can catch the exception and do whatever you want.
There's no need to write in C unless you actually need its features. There's just too much code, and with more code comes more chance of errors - not to mention that the code itself is harder to get right...
When testing a buffer, throwing something a bit longer at it is good. I tend to just copy a whole slashdot discussion or something else huge and try to paste it into every control I can. That catches the programmers who just allocate large static buffers.
Programmer: "You can't send back a 200k web request! That form only allowed 300 characters."
Me: "Yes, until I used the Firefox DOM viewer to change it - just like a hacker would. Verify your input!"
Re:Yes, let's do just that... (Score:3, Informative)
I was reading through your post and nodding, but then I realised that I just can't agree with your underlying argument. I think this is the part of your post that captures the essence of what I mean:
You know, there was a time when formal methods were taught, when programmers were expected to know how to properly allocate and release memory. When things like calculating the size of the buffer, applying basic math(!) and testing your own code were considered just a part of the programmer's job. Now we're hearing people blame languages for the faults of the programmer.
While this is all true, the problem with this argument is that it fails to account for the fact that no-one is perfect. If a certain type of error is known to have occurred a non-zero number of times, and, other things being equal, a certain programming language makes that type of error impossible, then that language is unambiguously safer than one that doesn't prevent the error. It might only prevent it once in a blue moon when a really great programmer is using it, and probably a few more times when someone who thinks they're a really great programmer is using it, but it still prevents errors. Pride comes before a fall, and choosing a programming language that is unnecessarily vulnerable to certain classes of programmer error because you believe you're too good to ever make them is like tying your own shoelaces together before running a marathon.
But consider this: the most reliable operating systems to date were built on C (UNIX) and assembly (MVS). If a bunch of old farts (well, perhaps they were young then...) can crank out correct, reliable, fast code without an IDE and a bunch of GUI tools, clearly the language is not to blame.
And if a bunch of old farts had cranked out correct, reliable, fast code in that way, I'd be impressed. Since this has almost never been achieved in the entire history of software development, however, this doesn't tell us much. These are, after all, the same old farts who brought us joys like the gets library function in C. (If you're Donald Knuth, I'll acknowledge that you're an exception on the correct, reliable, fast count, but we really need to talk about usability.)
Your answer to these ailments appears to work exclusively on a "cure" basis: use better processes so more people look at things, use better tools to pick up errors, etc. But prevention is better than cure. If you're using a programming language where the problem simply can't happen, you know the reviews and tools won't miss anything.
The old adage still applies: a poor workman blames his tools.
Perhaps. But a good workman knows his tools, and chooses the right one for the job.
While there certainly is an ethical issue at work here, the problem with buffer overflows can usually be avoided through purely technical means. In the context of a TCP/IP stack, I question whether it's really necessary to resort to known error-prone implementation technologies. We're not talking about an OS kernel or some ludicrously high performance mathematical algorithm, and any performance penalties associated with using a slightly safer language would surely be negligible.
(Incidentally, I work on high performance maths software, typically with fairly low level languages. We do use reviews and automated tools, and they tell me I don't personally make the kind of memory management error you're criticising, so I have no axe to grind here nor any particular bias towards high level languages when they're not appropriate.)
Re:Why Windows 95 and NT 4 are enough (Score:3, Informative)
There's no issue with Windows systems that may be rooted or infected, because that stuff just won't run. What do your low-level DOS utils do?
I must mention here, too, that a lot of the tools provided in Linux are intuitive and easy to use. "gparted" is a godsend.
*(which is easy, BTW: dd if=/dev/disktocopy of=./imagefile, and restore by swapping if and of)
(if you're happy using Win95, that's fine with me; I just felt like getting in a bit of Linux advocacy, seeing as I'm using it loads for disk and filesystem stuff at the moment)