Red Hat Introduces NX Software Support For Linux
abertoll writes "In this story at ZDNet, Red Hat has apparently added NX support to Linux. NX security technology is a hardware-level attempt to stop malicious code from executing." (We recently posted about Transmeta's announcement that its chips will incorporate the NX bit as well.)
Remember kids... (Score:2, Insightful)
Re:diff? (Score:4, Insightful)
There you go (Score:4, Insightful)
See, NX is a good thing; now even Linux has support for it.
Cheers.
A cross between... (Score:5, Insightful)
Fine No Execute (Score:5, Insightful)
No-execute means that somewhere, somehow, there will be an override, and the day that override is used, the viruses will follow: they'll trick the user into enabling it (complete with an explanation of why it's needed), and bingo, it's in.
And of course I could be completely wrong. There's also the fact that this no-execute bit does not exist on older processors, and that in itself is going to cause problems. Intel has the XD bit on newer processors, but what about AMD, VIA, whoever else? Is this part of the Intel half of the Wintel duopoly?
I think it's probably a good idea, but I'm suspicious.
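In fact, the override already exists in a perfectly legitimate form on Linux: any process can ask the kernel to flip a page back to executable with mprotect(), which is exactly how JIT compilers work. A minimal sketch of the idea (assumes Linux on x86/x86-64, where the single byte 0xC3 is a "ret" instruction):

    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        unsigned char code[] = { 0xC3 };  /* x86 "ret" */

        /* Get a page that is writable but NOT executable. */
        void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
            return 1;
        memcpy(buf, code, sizeof code);

        /* The "override": make the same page executable again. */
        if (mprotect(buf, 4096, PROT_READ | PROT_EXEC) != 0)
            return 1;

        ((void (*)(void))buf)();  /* executes fine after mprotect() */
        return 0;
    }

So NX raises the bar, but anything that can convince the program (or the user) to make that call is back in business.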
Mod Parent Up (Score:2, Insightful)
Looks like only the wise understand the distinction among "tool", "feature", "technique", and "technology", but the rest of the people, who gather their world knowledge from buzzword-driven press articles, will keep thinking that Visual Basic is a "technology", just like Java.
Actually, it would be interesting to discuss what the scope of each of these concepts should be in the area of computing.
NX, Impressive! The processor has learned well! (Score:5, Insightful)
translation:
Malicious code executing itself via a buffer overflow is actually one of the lesser evils in the virus world. Most users will gladly allow anything to run on their box, especially if it does something cool (time, weather, cutesy things, etc.), and with everyone being root on Windows boxes, this means the program can do whatever the hell it wants and Windows won't say anything, or much.
The NX bit is great, especially for servers, where generally the only kind of attack is a buffer overflow. Like I said, the processor has learned well, but the users must learn too.
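To make the server case concrete, here's the canonical bug NX is aimed at, in a hypothetical C handler:

    #include <string.h>

    /* Hypothetical request handler with the classic mistake. */
    void handle_request(const char *input)
    {
        char buf[64];
        strcpy(buf, input);  /* no bounds check: a long input overruns buf
                                and clobbers the saved return address */
    }

Pre-NX, the overflowing input could carry machine code, and the corrupted return address could point back into it on the stack; with the stack marked no-execute, that jump faults instead. (It doesn't stop return-to-libc style attacks, though, which reuse code that's already executable.)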
Re:AMD once again taking the lead. (Score:3, Insightful)
This is very important (Score:2, Insightful)
I saw mention in the linked article that Microsoft plans NX support in their SP2 release for Windows XP, but seriously... I see this either not happening in the first place, or having the potential to really screw things up. Won't this break a huge number of applications? Think about it: the Windows platform is notorious for allowing programs to execute whatever code they like... some programs even rely on that ability. Any thoughts on this? I just don't see it happening with the current architecture, that's all.
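For what it's worth, the programs that break are the ones that fabricate machine code in ordinary data buffers and jump to it: JITs, executable packers, some copy-protection schemes. A hypothetical sketch of the pattern (assumes x86, where 0xC3 is a lone "ret" instruction):

    /* This pattern "worked" on pre-NX systems, where stack and data
       pages happened to be executable; under NX/DEP it faults. */
    int main(void)
    {
        unsigned char thunk[] = { 0xC3 };          /* code in a stack buffer */
        void (*fn)(void) = (void (*)(void))thunk;  /* no execute permission here */
        fn();  /* access violation on an NX-enforcing system */
        return 0;
    }

Fixing such programs means explicitly requesting executable memory (mprotect() with PROT_EXEC on Linux; on Windows, VirtualProtect() with one of the PAGE_EXECUTE_* flags), which is presumably why any OS vendor would ship this off by default or with per-application exceptions.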
PaX (Score:3, Insightful)
It comes with a pretty high performance overhead, though: every TLB miss triggers a page fault, whereas normally the entry would just be loaded from the page table in main memory.
Re:Remember kids... (Score:3, Insightful)
However, bugs happen when writing code.
Worse bugs happen when someone modifies code they don't understand. Some code depends on implicit assumptions, such as an array size having already been checked somewhere else, or some buffer having already been initialized somewhere else. The maintenance programmer sees the code as if through one of those cardboard tubes from toilet-paper rolls, so he or she can easily miss such dependencies. When a change short-circuits such a non-obvious check, shit happens.
E.g., true story: someone once "optimized away" the initialization of one of my decompression buffers right before a release, but it never occurred to them to also change the compression algorithm to not depend on a pre-initialized buffer. Such dependencies are probably the easiest kind to miss, because nowhere did a compression method call a decompression one, nor vice versa. You could step through the code with a debugger all day long, or draw class diagrams, and it would never occur to you that the two files are related.
(Rant: I remember it because he then made a big fuss and threatened to fire me, sue for damages, and whatnot, because "my" code didn't work. It's the kind of behaviour that earns some people the PHB label.)
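To make that kind of hidden coupling concrete, here's a hypothetical reconstruction in C (all names made up):

    /* codec_common.c -- hypothetical. compress.c and decompress.c both
       use this sliding window, and BOTH silently assume it starts out
       zeroed. Neither file ever calls into, or even mentions, the other. */
    #include <string.h>

    #define WINDOW_SIZE 4096
    unsigned char window[WINDOW_SIZE];

    void codec_reset(void)
    {
        /* Looks redundant (a static array is zeroed at program start), so
           a maintainer may be tempted to "optimize" it away. But it is
           load-bearing on every run after the first, when the window
           still holds stale bytes from the previous stream. */
        memset(window, 0, sizeof window);
    }

Delete that memset and both sides still compile, link, and pass a quick smoke test on fresh data; it only falls over later, far from the change.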
And it doesn't help that, in pursuit of misguided cost-cutting, companies:
A. hire the cheapest incompetents, who'll make worse mistakes. Which might still be OK, except:
B. they never plan for a proper testing _and_ debugging phase (no, a week tacked on at the end for QA doesn't suffice)
C. nor for code reviews
D. nor for comprehensive unit tests. And even worse,
E. they accept (or, in the case of client PHBs, request) tons of changes without also requesting extra time for them, and then you're actually expected to do the quickest hack that could possibly _look_ like it works. Remember: those deadlines must be kept at all costs, even if the client gets a piece of untested, unstable crap.
F. they rely ever more on buying some magical "+3 cloak of bug avoidance", thinking that merely using some framework or library makes all problems go away. E.g., "it's Java, so we can't possibly have exploits or memory leaks", or worse, "we use HTTPS, so we can't possibly have a security problem", or my pet peeve, "it says 'enterprise' on the box, so it's inherently safe and scalable."
This kind of religious faith, the belief that buying some magical talisman makes all problems go away on their own without any human responsibility or intervention, serves only to lull people into a false sense of safety. It's probably the number one cause of problems A through E.