Security Software Linux

Security Holes Draw Linux Developers' Ire 477

jd writes "In what looks to be a split that could potentially undermine efforts to assure people that Linux is secure and stable, the developers of the GRSecurity kit and RSBAC are getting increasingly angry over security holes in Linux and the design of the Linux Security Modules. LWN has published a short article by Brad Spengler, the guy behind GRSecurity, and it has stoked up a fierce storm, with claims of critical patches being ignored, good security practices being dropped for political reasons, and so on. Regardless of the merits of the case on either side, this needs to be aired and examined before it becomes more of a problem, especially in light of the recent kernel vulnerability debated on Slashdot."
  • by moz25 ( 262020 ) on Monday January 10, 2005 @08:05AM (#11308973) Homepage
    Given that I'm getting lousy uptimes on my Linux servers because of the mandatory kernel upgrades, I certainly welcome a (constructive) critical look at Linux kernel security.
  • by mirko ( 198274 ) on Monday January 10, 2005 @08:07AM (#11308985) Journal
    uptime is not an issue, especially if it's spoiled by this kind of maintenance.
    Because Linux servers are cheap, just load balance your traffic across two or more of them and you'll still have the availability, which is what you actually need.
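    For instance, a minimal sketch of that kind of setup with LVS/ipvsadm on a small director box (the addresses here are purely illustrative: 10.0.0.1 as the virtual IP in front of two real servers):

    ipvsadm -A -t 10.0.0.1:80 -s rr                # define a virtual HTTP service, round-robin scheduling
    ipvsadm -a -t 10.0.0.1:80 -r 10.0.0.11:80 -m   # first real server, NAT/masquerade mode
    ipvsadm -a -t 10.0.0.1:80 -r 10.0.0.12:80 -m   # second real server

    Stateful services need more care than this, of course.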
  • by aendeuryu ( 844048 ) on Monday January 10, 2005 @08:08AM (#11308987)
    It's interesting to note that this comes out so soon after Linus was named one of IT's best managers. Lord knows he'd have to be to keep so many disgruntled people quelled. In the follow-up, somebody offered the excuse that Linus is one person and there are only 24 hours in the day, so maybe some patches get missed. I was wondering: with all of the people he delegates to, isn't there somebody who handles all the security issues? Scroll down the LWN article and somebody mentions that he needs a Kernel Security Officer, with no follow-up. Does Linus not have one of these guys yet?
  • by Wudbaer ( 48473 ) on Monday January 10, 2005 @08:10AM (#11308993) Homepage
    Hey, great argument! So Linux doesn't even need to be stable; you can just string together several boxes because it is sooo cheap. Yeah, right.
  • So it begins. (Score:5, Insightful)

    by Anonymous Coward on Monday January 10, 2005 @08:11AM (#11308994)
    The trade-off between security and usability/accessibility begins?

    Will Linux strike the perfect balance? Will Linux be taken over by a lunatic like Theo and go the OpenBSD route? Will Linux lose its virginity to Windows and become a security nightmare? Stay tuned! All this and more on the next episode of OS wars!
  • Re:FUD (Score:3, Insightful)

    by millahtime ( 710421 ) on Monday January 10, 2005 @08:12AM (#11308996) Homepage Journal
    You are referring to 'decent' Linux setups. How many people have what you would refer to as a 'decent' Linux setup? Windows could have a 'decent' security setup, but most people don't go there. Linux needs its security to rock out of the box if it is to continue its mainstream growth without running into the problems Windows has.
  • Re:linux vs ??? (Score:1, Insightful)

    by Anonymous Coward on Monday January 10, 2005 @08:19AM (#11309019)
    You don't have to compare Linux to X, Y, and Z to draw a conclusion. You can just look at Linux on its own and see that there are problems, and these need to be worked out. It doesn't matter how good the competition is; it's a battle with Linux itself, and things need to be improved. That's the moral of the story.
  • Don't be an idiot. (Score:5, Insightful)

    by Anonymous Coward on Monday January 10, 2005 @08:21AM (#11309022)
    There are tons of services where you can't just pop a couple of machines together and, ta-da, they're load balanced. Just because it's easy for simple things like HTTP and SMTP doesn't mean it's easy for everything.
  • Here it comes (Score:2, Insightful)

    by Tangwei ( 704210 ) on Monday January 10, 2005 @08:22AM (#11309027)
    For years now people have been carrying the Linux flag because it's "more secure than Windows"... guess what, people: the time of being an unknown, unpopular OS is at an end... welcome to being known.
  • by DjReagan ( 143826 ) on Monday January 10, 2005 @08:28AM (#11309046)
    If you can't work out whether your changes are volatile or not without rebooting the system, then I suggest that it might be YOUR sysadmin skills that are lacking.

    Personally, I make sure I know the answers to that sort of question before ANY changes are made to my production systems.
  • Maybe it's time... (Score:5, Insightful)

    by Jace of Fuse! ( 72042 ) on Monday January 10, 2005 @08:35AM (#11309060) Homepage
    Maybe it's time everybody got off their OS Religious High Horse and finally admitted that an OS is only as stable and secure as the user who is administering it.

    My Windows XP machine is solid and secure. My FreeBSD machine is solid and secure. My Windows ME machine -- well -- it runs, and it's quarantined, so I suppose in some ways it's secure.

    Right now I'm installing Gentoo on a box so I'm going to see where this goes, but I am going into it with full realization that no OS is perfect, nor is it perfectly secure. This means that I'm going to take security as seriously with this machine as I do the rest of them.

    Having the source to an OS doesn't make it more secure if you don't read (or understand) every line of it.

    Why people think OSS is automatically more secure is something I never have really understood. There is some added comfort in knowing that most holes will be discovered and fixed promptly, but even that is an assumption one shouldn't bank on.
  • by Anonymous Coward on Monday January 10, 2005 @08:36AM (#11309065)
    The grsecurity guys (Brad and the PaX guy mostly) are dead serious. They have been researching their areas of memory management, protection and secure code for years. They really do know it pretty much all. For instance, the "AMD NX protection!!!!" that Red Hat raved about was copied from PaX (without even crediting it properly).

    They are just the sort of real gurus that can spot new vulnerabilities in code and exploit them in a matter of minutes. When grsecurity was having serious funding problems last summer, Brad was forced to sell new vulnerabilities from Linux kernel code to unmentioned blackhat companies. (Those do exist, believe me. They are doing commercial intelligence, stealing trade secrets with that knowledge..)

    Those guys are technically brilliant, years ahead of what the stock Linux kernel has in security features. They are just a bit arrogant and bad with people. At the same time, the upstream kernel developers don't like being told that their stuff is complete crap in some area. They downplay it, ignore it, and use the "whoareyou,Iamthekerneldeveloper,youknownothing" tactic.

    The grsecurity guys could absolutely smash LSM by showing the vulnerabilities they are talking about as PoCs. They are just a bit too disgusted and pissed off. There are several other areas, like exec_shield (which *is* atm getting into the upstream kernel), that have big faults as well...

    They could prove their other points as well.. But it would be moot since they ARE correct in any case.
  • by EasyTarget ( 43516 ) on Monday January 10, 2005 @08:42AM (#11309084) Journal
    Hmm, and how do you react when you come in to the office after a long weekend and find the server locked in a panic cycle, because some change you made months ago means it won't boot properly? No doubt you blame everybody: developers, documenters, compilers, colleagues, God, etc. But the real reason it failed is that you did not test properly.

    Personally, I know my servers can survive a reboot, because I test them for that. If I make any serious change that may affect startup I assume it will fail, and then set out to prove myself wrong.

    PS: I wish I did not have to.
  • by router ( 28432 ) <a...r@@@gmail...com> on Monday January 10, 2005 @08:56AM (#11309121) Homepage Journal
    He probably has a pre-production environment. That's what you do when you want to know how your changes will affect production. That way you don't fsck with production. I think he stated that above. Some of us don't fsck around. If you wanted to be really paranoid, you would reboot first to make sure nobody else changed anything that would fail a reboot, then make your changes, test to be sure a reboot is really necessary, then reboot again anyway to satisfy your paranoia. In pre-prod. But that's if you're paranoid. And work with a team. And have pre-prod. Maybe I'm crazy.

    andy
  • Re:Here it comes (Score:5, Insightful)

    by I confirm I'm not a ( 720413 ) on Monday January 10, 2005 @08:57AM (#11309123) Journal

    holes in the kernel have been allowed to go on as long as they have?

    Allowed to go on as long as they have... by whom? By the volunteers devoting their time to kernel hacking? I'll give you the benefit of the doubt and assume you're an active kernel hacker...

    You compared Linux to Windows in your original post: how many security holes in Windows still remain, years after they were first reported? (For that matter, how many holes are we still unaware of, because the source-code is closed?) Why have these security holes been allowed to go on as long as they have? (Answer: because resources are finite; and Microsoft has other things to focus on. Likewise for Linux. If you feel that too few resources are devoted to security in the kernel: volunteer. Or criticize and offer no helpful solutions. I choose option A).

  • by m50d ( 797211 ) on Monday January 10, 2005 @08:58AM (#11309127) Homepage Journal
    With 2.6 there seems to be a bad trend towards far too much politics in the kernel. The cdrecord problems and the reiser4 business (did that ever get sorted out?), together with the IMO stupid policy of putting new features in the stable branch, all smack of too much politics. That policy makes deciding whether a feature can be added much harder, since it needs to be that much more stable and necessary before it can go in, but often you can't prove it's necessary without having some kernel branch running with it. Why can't people just concentrate on making the best kernel possible?
  • by ivi ( 126837 ) on Monday January 10, 2005 @09:01AM (#11309137)

    Long-time shell provider SDF used Linux... until they got hacked into.

    Now, it's a 64-bit version of NetBSD.

    OpenBSD claims:

    "Only one remote hole in the default install,
    in more than 8 years!"

    Why not start with a core built for security, rather than one built for popularity?

    My two cents...
  • by I confirm I'm not a ( 720413 ) on Monday January 10, 2005 @09:05AM (#11309152) Journal

    I pretty much agree with you, but... (!)

    Having the source to an OS doesn't make it more secure if you don't read (or understand) every line of it. (my emphasis)

    Having the source available for anyone to read can lead to the OS (app, library, whatever) being more secure. Assuming that a wide-enough group of people do actually read the code. I'm confident that this happens with Linux, the *BSDs, etc.

    Most people tend to equate OSS with secure, I'd guess, because security-through-obscurity is largely a false promise, and we recall that many-eyes-make-bugs-shallow. Both concepts that appeal to the type of geeks who are interested in security ;)

  • by Tsu Dho Nimh ( 663417 ) <abacaxi@@@hotmail...com> on Monday January 10, 2005 @09:12AM (#11309182)
    From the grsecurity page: "my personal gripe is that for 3 weeks not a single acknowledgement arrived in my mailbox, i don't think that's the way the chief developers are supposed to handle security issues (however small or irrelevant they may have been in this case - it takes a one liner to tell us so)."

    So ... rather than ask on the mailing list who is the best person for security submissions relating to whatever bug he found, he emails the top dude (during Christmas holidays no less) and then whines when no answer is forthcoming within his preferred timeline. Gimme a break!

    As a total noob, I went to kernel.org and found this on the first page:
    Please see http://www.kernel.org/pub/linux/docs/lkml/reporting-bugs.html if you want to report a Linux kernel bug.

    http://www.tux.org/lkml/#ss5 explains why XX doesn't answer emails - too fricking busy is the usual reason.

    If I were concerned about publishing the bug, I would have asked ON THE LKML LIST who would be the best person to submit a security-related bug and patch to for the XX module.

  • by jedidiah ( 1196 ) on Monday January 10, 2005 @09:20AM (#11309219) Homepage
    eBay was running Solaris and ended up going down in a ball of flames because they were too obstinate to apply the vendor-recommended updates. This isn't a problem limited to Linux.
  • Re:Here it comes (Score:1, Insightful)

    by Anonymous Coward on Monday January 10, 2005 @09:21AM (#11309222)
    So what you're saying is that the Linux community can dish it out, but they can't take it?

    New Windows security flaw -- "Sigh, another bug. Use Linux, Windows is insecure."

    New Linux security flaw -- "Well, um, you ignorant troll, all software has bugs, and most have more than Linux."
  • by R.Caley ( 126968 ) on Monday January 10, 2005 @09:25AM (#11309247)
    [that there are easier ways is] a pretty specious argument

    No, an important security rule of thumb. Don't waste effort fixing the holes which no one would need to exploit because you are wide open in other places.

    Eg, would you worry about people being able to drill into your safe if the safe had no door?

    There are special cases where you might (front of safe visible to trusted people 24/7 or something), but generally speaking, priorities are important.

  • Given that I'm getting lousy uptimes on my Linux servers because of the mandatory kernel upgrades, I certainly welcome a (constructive) critical look at Linux kernel security.

    That's not the point. I am getting ready to force my Unix admins to patch their boxes on a more frequent basis than "yearly", and they are already screaming bloody murder. I am sick and tired of our Unix boxes getting rooted because some admin wants 365+ days of uptime and can't be bothered to test and install a kernel patch that fixes some important hole. That there is NO scheduled maintenance to start with is an even larger problem that I'll rant about in a different post.

    And by the way, other Unix systems have capabilities, MAC labels, etc., and you know how most admins implement security on their systems? Every one of the Unix support team knows the root password, and the application support people use setuid scripts to administer their software, like they were doing in 1985. Capabilities, labels, and friends are extremely difficult to implement, and these features cannot save you from the one time a tired kernel programmer accidentally leaves an input unbounded or forgets to check a counter. You will always need to patch your kernel (and reboot) in order to maintain operational security. Period. End of discussion.

    And if your only availability measurements are along the lines of

    for i in `cat hosts`; do ssh root@"$i" uptime >> uptime.report; done
    please get a clue: Availability doesn't include all of those times when your users try to access a rooted system (even though the system is "up"), and it isn't a bad thing to schedule maintenance windows, notify the users, and take the system down for patches or upgrades. In fact, I would much rather do so in a controlled fashion, as I can have backups made and documentation updated, and I can take my time to do things correctly because I had time to test everything beforehand. Versus a 50+ hour nightmare/marathon starting with the pages from the intrusion detection system (or worse, from someone else's IT security department) in the wee hours of (if you are lucky) Saturday morning.
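    A more useful loop than that uptime one would check whether each box is even running its newest installed kernel. A rough sketch, assuming the same hosts file and kernels installed as /boot/vmlinuz-<version>:

    for i in `cat hosts`; do
        echo -n "$i running: "; ssh "$i" uname -r
        echo -n "$i newest:  "; ssh "$i" 'ls /boot/vmlinuz-* | tail -1'   # naive lexical order, but good enough to spot stragglers
    done

    Any box where the two don't match is sitting on a kernel update it was never rebooted into.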

    So don't complain to me about lousy uptimes. Because when your server gets hacked because of a kernel bug patched three months ago, and you didn't apply the update because "my uptime counter will get reset" (i.e. you are lazy), I have to clean up your mess: Investigating the attack, determining the extent of the intrusion, validating the backups, etc.

    BAH. Rant mode off. I will spare you a discussion of the proper engineering processes that help to lessen (but not eliminate) the risk of security-related software flaws.

  • by Anonymous Coward on Monday January 10, 2005 @09:31AM (#11309279)
    So many people are missing the point. It's Linux, it's open source, you're not living under a Sun or MS dictatorship. If you don't like what Linus is doing, either maintain a separate set of patches or fork the kernel. I mean Jesus, SuSE patches the vanilla kernel, RedHat patches the vanilla kernel, Mandrake patches the vanilla kernel, and you know what ... I maintain a set of patches that I apply to my SuSE-patched kernel sources *SHOCK*. Just because Linus refuses a patch doesn't mean the end of the f-ing world is nigh.
  • by jedidiah ( 1196 ) on Monday January 10, 2005 @09:31AM (#11309282) Homepage
    There's no religion to it.

    A neglected Unix is far safer than any "competently administered Windows".

    Two randomly selected individuals out of any population aren't going to be equal. One is likely to be inferior. This is reality, not a Vonnegut novel.
  • by conteXXt ( 249905 ) on Monday January 10, 2005 @09:31AM (#11309284)
    "Brad was forced to sell new vulnerabilities from Linux kernel code to unmentioned blackhat companies."

    I guess this is a good reason to trust them?

  • by Anonymous Coward on Monday January 10, 2005 @09:36AM (#11309314)

    Maybe it's time everybody got off their OS Religious High Horse and finally admitted that an OS is only as stable and secure as the user who is administering it.

    Everybody acknowledges that. But that doesn't mean that operating systems are all alike. Linux - out of the box - is far more secure than Windows, and far less secure than OpenBSD.

    My Windows XP machine is solid and secure.

    Really? The last time I tried to secure a Windows machine, Microsoft had a list of 200-odd things to change, including obscure registry entries. Furthermore, the box was practically useless, as half the user applications insisted on being able to write to privileged directories or just plain run as Administrator.

    Sure, in theory you can secure both machines quite well - ignoring the open vs closed tendencies of course. But in practice, it's a nightmare trying to get Windows to sane security settings and also work properly.

    Why people think OSS is automatically more secure is something I never have really understood.

    Nobody thinks that. You have misunderstood the argument that the OSS development model by its nature tends to result in more secure software.

  • by Malor ( 3658 ) on Monday January 10, 2005 @09:42AM (#11309357) Journal
    From a security perspective, the current Linux development model is a nightmare. Introducing new features into the 'stable' codeline is not how to reduce bugs and problems.

    If I'm running 2.6.8 and a new bug comes out, I'm forced to either A) upgrade to the most recent 'stable' kernel, introducing new features about which I know nothing, and which may themselves be security problems, or B) hope that someone will backport the security fixes to the kernel version I'm running. I don't know enough about kernel development to patch it myself, but I can no longer just drop in the most recent stable kernel and expect it to work unchanged.

    A sysadmin's most precious commodities are time and attention. With this new development model, suddenly I am forced to either pay a great deal of attention (and a great deal of time) to each and every version of the Linux kernel, or I need to pay a vendor to do it for me.

    The kernel developers are, in my opinion, shirking their single most fundamental duty... to ship a stable, secure product. Suddenly, because it's easier for them, they have abrogated the fundamental contract that they will write great software. (Buggy, insecure software is not great, no matter how many features it has.) They just wave their hands vaguely in the air and say that the distributions will take care of those problems.

    Guys, it's not gonna happen. The way you get stable software is by not adding features. In your case, by branching off to 2.7, and letting us beat the unchanging (except for bugfixes) 2.6 tree to death. If you keep adding features, you keep adding bugs. That's how it works.

    You had this NAILED for years and years... there is a huge community that has built up around the fundamental social contract that even-numbered kernels are as stable and secure as you know how to make them, and the odd-numbered branches are the home for new code and new features. Changing that contract simply because it makes your lives mildly easier is a hugely destructive idea. You may save yourselves a bit of work, but you create an enormous amount of it for everyone else.

    Ted Ts'o said:

    Not all 2.6.x kernels will be good; but if we do releases every 1 or 2 weeks, some of them *will* be good. The problem with the -rc releases is that we try to predict in advance which releases will be stable, and we don't seem to be able to do a good job of that. If we do a release every week, my guess is that at least 1 in 3 releases will turn out to be stable enough for most purposes. But we won't know until after 2 or 3 days which releases will be the good ones.
    In other words, he thinks it's perfectly fine if only 1 out of 3 'stable' kernels are actually stable.

    This is not acceptable.

    You can bet that Bill has a big grin on his face about this one. If I want new features with my security fixes, I might as well choose Microsoft and their service packs.

    Heck, they even have a QA team!

  • by Moraelin ( 679338 ) on Monday January 10, 2005 @09:48AM (#11309392) Journal
    And I'll agree with you about that mindset and hypocrisy too. That's what ticks me off too. The doublespeak and double standards, where the same thing is a hanging offense if it's in Windows, but normal and not even really in need of a fix if it's in Linux.

    But just to add a couple of minor details:

    A) I'd argue that Microsoft didn't start secure and slowly go down the drain. They started by ignoring security outright.

    E.g., if I remember right, the file server security in NT 3.5 and the pre-SP1 NT 4.0 was entirely in the client. Yes, the client was supposed to check for itself whether it was allowed to access a file, and if not, back down. However, if the client was not that nice, it could go ahead and request the file anyway... and get it.

    E.g., MS Bob, in the name of user-friendliness, asked you to change the password if you mistyped it 3 times. No, not if you successfully logged in after mistyping it 3 times. That's it. Three failed attempts in a row, and you can set a new password.

    Etc. I could go on forever, but these are ludicrous enough to illustrate the point: MS didn't start by making a compromise here and there. It outright ignored security until it bit them in the ass.

    B) But to be fair, so did everyone else, and some still do.

    E.g., it's not a case of Linux eventually getting as insecure as MS Windows. Linux already _was_ less secure than Windows, oh, say around the time Windows 2000 was released.

    Sorry, I'll probably annoy the penguinistas, but putting a Linux system online back then meant you had a script kiddie logged in as root within hours at most. _And_ most distros made the same MS mistake of installing and starting every possible service by default, with no firewall either. I know my SuSE systems got Apache, MySQL and God knows what else if I didn't uncheck those at install time.

    It took some code reviews paid for by RedHat and the like, before Linux was anywhere _near_ secure.

    C) Basically, sad to say, much as nerds balk at "clueless lusers" running without a firewall, or at MS for having exploitable bugs, most are just as clueless themselves when it comes to writing secure code.

    And I don't mean just bugs or lack of communication ("oh, I thought YOUR function checked the buffer length already.") I mean outright lacking even the most elementary clue about secure design, and not giving even the bare minimum thought to what could happen.

    Just as end lusers think they're safe without a firewall because they don't directly see the script kiddie breaking in, coders tend to ignore the unseen threats just as well. Mentalities like "oh, surely no one will edit the id in the URL and make themselves superuser" are the norm, not the exception. Or at most they'll repeat mantras they've heard before, without even understanding what those mantras mean.

    It's not even a MS vs Linux thing. Windows, Linux, Solaris, whatever. Unless you have some security minded people trying hard to find a bug or way in, you end up with a catastrophe. The average coder's work is a heap of security holes waiting to be exploited.
  • by Anonymous Coward on Monday January 10, 2005 @09:55AM (#11309423)
    > Grsecurity was having serious funding problems
    > last summer Brad was forced to sell new
    > vulnerabilities from Linux kernel code to
    > unmentioned blackhat companies.

    So, I see. If Grsecurity gets into the standard kernel, and Grsecurity has "funding problems" will Brad put "accidental" vulnerabilities into Grsecurity and sell those to blackhat companies?

    Sorry but this is bull. When it comes to security, a person's integrity is crucial. Yes, we all know the "many eyes" argument, but as Ken Thompson showed in his ACM Turing Award lecture, it's possible to defeat the "many eyes".

    You're making a serious claim against someone in Brad's position. Either provide reference links to back up your claim or offer a retraction.

  • by tomstdenis ( 446163 ) <tomstdenis@gma[ ]com ['il.' in gap]> on Monday January 10, 2005 @10:02AM (#11309455) Homepage
    Quotas, if working and properly set up, can "contain" such people.
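    A rough sketch of what "properly set up" might look like, assuming quota support is compiled in and the filesystem is mounted with usrquota (the user name and limits below are placeholders):

    quotacheck -vum /home                          # build/refresh the quota files
    quotaon -v /home                               # switch quotas on
    setquota -u someuser 500000 600000 0 0 /home   # ~500MB soft / 600MB hard block limit, no inode limit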

    Tom
  • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Monday January 10, 2005 @10:04AM (#11309461)
    If you're running a server, then you should rip out everything you don't need.

    If you aren't running it, you don't need to patch it.

    Which only leaves the security/bug fixes for what you do run. Do I worry about a "reboot test" after I upgrade perl? No. Why should I?

    On my Debian systems, the only patch that requires a reboot is a kernel upgrade.

    A "reboot test" might still be a good idea, in the 0.0001% of non-kernel situations where it would show a software problem ... but why bother if there wasn't a problem on the test box?

    I'd rather not reboot my boxes because that seems to be when the hardware fails. Much as most light bulbs that blow seem to blow when you turn them on.
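    A rough sketch of that "rip out everything you don't need" pass on a Debian box (the package names are just examples, not a recommendation for your setup):

    netstat -tlnp                              # what is actually listening, and which process owns it
    dpkg -l | less                             # what is installed in the first place
    apt-get remove --purge telnetd portmap     # drop services you don't actually run

    After that, the patch stream you have to care about shrinks to the handful of things left running.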
  • by thogard ( 43403 ) on Monday January 10, 2005 @10:11AM (#11309514) Homepage
    It's basic systems theory. I can't know all the interactions between all the systems, so I can't account for everything. The easy test is to schedule some downtime and tell the system to reboot. If all goes to plan, the system is back up in 90 seconds with most servers (or 5 minutes for a Cisco router).

    At work we have 2 nearly identical systems that can each cope with the entire load, and they don't need to talk to each other except for non-real-time things like end-of-month reports. The reboot test is a great way to keep from getting bitten by stupid things like a change in OpenSSH's handling of /dev/random, where a change in the startup sequence means there isn't quite enough random state in the system to prevent a deadlock.

    How do you know that the disk label is still working? Most OSes won't let you read it (since it's cached). There are many parts of modern hardware that are essential to booting but can't be accessed outside of the reset sequence. There are things like a flash BIOS that you can't test from a live system, and that perpetual problem with hard disks of "will it wake up next time?"

    I've got a few outstanding bugs with Cisco because their NAT/PAT stuff doesn't come up quite the same way as when it's entered, so the only way to know if a config is going to "stick" is to bring the thing up from a full reset state.

    I agree that a sysadmin should be able to come up with a full deadlock state diagram for the system, but when that graph has more than 1000 nodes on it, the only sure way is to 'halt' and hit the reset switch, and even that doesn't test a full start from a power-off state.
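    The reboot test itself doesn't have to be fancy. Something like this in an agreed window is a sketch of the idea (log paths and the list of services obviously depend on the box):

    shutdown -r +10 "scheduled reboot test - back shortly"   # warn the users, then reboot
    # ...after it comes back:
    uptime
    netstat -tln               # is everything that should be listening actually listening?
    tail -50 /var/log/syslog   # any complaints during startup?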
  • by rjh ( 40933 ) <rjh@sixdemonbag.org> on Monday January 10, 2005 @10:18AM (#11309574)
    The kernel developers are, in my opinion, shirking their single most fundamental duty
    Just a few questions:
    1. How much have you paid Linus, Alan Cox, Andrew Morton, et al., directly?
    2. Did they make any promises to you about the reliability or stability of the kernel?
    3. Take a look at the GPL, particularly that explicit disclaimer of all warranties. Did Linus send you a certified mail letter in which he waived that clause for you?
    4. Did you get certified mail waivers of that clause from all the other kernel developers?
    5. I'm sorry, I didn't hear you the first time--how much did you pay Linus, himself, directly?
    6. Does Linus give you binaries only of the kernel, and thus make you dependent on him?
    7. Does Linus give you source code, and thus give you the option of auditing code yourself?
    8. Have you done your own code audit?
    Just a few simple questions, really. Because before you go about saying that Linus, or any other kernel developer, or any other Free Software developer, has a duty to you, I'd like for you to know what duty means.

    Duty means a debt is owed.

    So--as a result of the community giving you, at no cost, generous license to literally hundreds of millions of dollars of intellectual property... they owe you something? Because they gave you a gift?

    I don't know where this false sense of entitlement within the community arose, but I really hope it goes away soon. You aren't entitled to anything. You aren't entitled to the sweat of my brow, the labor of my hands, the product of my mind--but when I release something under a free license, I give you those things. I say "here, have something; I made this. I want to give it to you."

    And what are you doing?

    Looking the gift horse in the mouth.
  • by arkanes ( 521690 ) <arkanes@NoSPam.gmail.com> on Monday January 10, 2005 @10:21AM (#11309595) Homepage
    Well, the most obvious reason is that you've got startup scripts that require perl, and the new version may have some sort of syntactic change or other issue that'll break your scripts. In fact, this was quite a problem with Python and some older version of Red Hat (7.3 or something? I forget).
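    A quick way to spot that kind of breakage ahead of time (a rough sketch; the paths are the usual Debian/Red Hat locations and the script name is a placeholder):

    grep -l perl /etc/init.d/* /etc/cron.*/*       # which boot and cron scripts invoke perl at all?
    perl -c /usr/local/bin/some-perl-script.pl     # syntax-check the perl scripts they call against the new interpreter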
  • by jayloden ( 806185 ) on Monday January 10, 2005 @10:21AM (#11309598)
    OK, everyone, take a deep breath, calm down, and say it with me: "Linux is not dead. This is not the death of Linux"

    It's going to take more than a couple of articles to bring about the demise of Linux. There are definite responsibilities and issues that need to be addressed in Linux, as there always will be in any project of any size. Let's all just support our OS, and make sure that we make it known that it's important to us that these issues are addressed. A few negative articles are not going to kill OSS, and Linux has a way of weathering problems. Relax, and support the developers so they can get on with fixing the problem(s).

    -Jay
  • by 10Ghz ( 453478 ) on Monday January 10, 2005 @10:27AM (#11309645)
    The worst politics of all is Linus's stance against driver ABIs. That's one reason device support is still slumbering.


    I bet that Linux has a lot better device support out of the box than Windows XP or Windows Server 2003 has. Windows relies on third-party drivers that may or may not work, whereas in Linux they are already in the kernel, where they receive more attention than third-party drivers would. And that's thanks to Linus's stance on driver ABIs. Without that, we would have a kernel with minimal device support that needed flaky third-party drivers just to get a functional system. And those drivers would only work on x86 systems. Why would the OEMs spend time developing drivers for PPC or SPARC, since those are niche systems? But since the drivers are in the kernel, they work on more exotic architectures as well.
  • Favorite Quote (Score:2, Insightful)

    by radoni ( 267396 ) on Monday January 10, 2005 @10:29AM (#11309665)
    But it is important to understand that one can't just pick up the "Bat Phone" and have Linus or Andrew on the other end. Those days are gone.
    - sbergman27, [http://lwn.net/Articles/118251/]

    These PaX "security experts" whine and complain like script kiddies. No wonder.

    I think the real point is made that distro volunteers and employees are more likely to implement a patchset for security reasons. Also, this is in the best interests of the community, by limiting the amount of direct communication forced upon our Overlord and Savior, and also because most of us are using a distro. If a distro has a security patchset and the vanilla kernel is left with holes, surely someone will take notice and go through the proper channels, doing all that hard contacting work for you.

  • by diegocgteleline.es ( 653730 ) on Monday January 10, 2005 @10:31AM (#11309677)
    "Exec shield" was not copied from PAX, it'd quite hard since the developer of "exec shield" (Ingo Molnar) admits that Pax covers more cases than PAX.

    It'd be nice if someone would ask the PaX developers why they modified their test suite to fail under exec shield. Run the PaX test suite on an exec shield kernel and all the vulnerability simulations will succeed. That's not because exec shield is bad; it's because the test suite disables exec shield on purpose (you can disable exec shield; that's a feature).
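    For reference, running the suite yourself is just this, assuming paxtest is installed and accepts its usual 'kiddie'/'blackhat' presets:

    paxtest blackhat 2>&1 | tee paxtest.log          # run the stricter preset and keep the results
    paxtest kiddie   2>&1 | tee paxtest-kiddie.log   # the more forgiving preset, for comparison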

    BTW, exec shield is not going in the kernel. Exec shield != the "AMD NX bit". The AMD NX bit support has already gone into the kernel, but I'm not surprised at all that no grsecurity patch is going into the kernel. The grsecurity developers have NEVER submitted their patches to mainline; they haven't even tried, and they haven't listened to constructive criticism. That's why grsecurity is not in the kernel and LSM is. They have just sat back saying "our stuff is better, use it" without even caring. There are lots of projects that have gone poof because of that attitude. Remember the guy who rewrote the whole build infrastructure, which never went into mainline? He updated his stuff regularly and criticized Linus for not taking his obviously better alternative. He didn't listen to Linus when he said "ok, just split it into small, individual parts" (like everybody else does) "and I'll merge it". When some other guy started to fix the existing build system, the "better stuff" went poof. The same will happen with LSM. LSM is bad? Well, what happens if the developers decide to fix it? Where will grsecurity go then?

    I very much prefer a good developer/maintainer to a bad one, so I'll choose LSM any time, even if it is technically inferior. A good maintainer means that in the future he can rewrite his stuff if it's not good enough. That's much better than some guys who sit back on their mailing lists saying "our stuff is better".
  • by bakreule ( 95098 ) <bkreulen@yahoo . c om> on Monday January 10, 2005 @10:34AM (#11309701) Homepage
    I've always thought the new kernel development model was a bad idea. Instead of creating a new 2.7 branch for new code under development and letting the 2.6 branch stabilize, the powers that be decided to put everything in 2.6. The downside of this is that there is no "stable" kernel, as each new revision contains new "unstable" code along with fixes for older versions. I'm not sure what the upside is.

    Some of these bugs, according to the article, have been around for ages, so the new dev model isn't to blame. But Linus and Andrew didn't even respond to these critical vulnerabilities....

    Just go ahead and create a 2.7 branch, and then assign a maintainer to the 2.6 branch and let it stabilize. I don't see any reason for not doing this.

  • by arkanes ( 521690 ) <arkanes@NoSPam.gmail.com> on Monday January 10, 2005 @10:35AM (#11309708) Homepage
    ...Brad was forced to sell new vulnerabilities from Linux kernel code to unmentioned blackhat companies.

    So basically what you're saying is that these are the sort of guys who're so morally broken that they wouldn't pass even the most superficial of background checks for a sensitive position, which is no doubt why they need to get money by selling to blackhats rather than getting a real job in computer security. Basically, exactly the opposite of the sort of person you'd want to trust as a contributor of security information and patches. Thanks, I'll remember to disregard anything I see from these morally challenged turdballs in the future.

  • Right on! (Score:5, Insightful)

    by Oestergaard ( 3005 ) on Monday January 10, 2005 @10:36AM (#11309722) Homepage
    You're absolutely correct in what you say.

    2.6 is currently a developer's dream and an administrator's nightmare.

    It is a smoking pile of bleeding-edge patchwork. It can do everything in double time and brew coffee concurrently, but it cannot serve a file reliably (as outrageous as it sounds, that last part is actually the truth).

    The single biggest problem is the huge flux of patches: 4000 changesets between 2.6.9 and 2.6.10... One kernel fixes maybe 100 bugs and introduces the same number, along with a heap of new features, while deprecating a few old interfaces.

    If 2.6.5 is the latest stable 2.6 kernel for one particular use (which I know for a fact it is for some uses), you're stuck with a local root vulnerability, because most likely 2.6.11, which may have a fix for this one bug, will crash under that workload (as 2.6.6-2.6.10 did).

    And the examples I'm pulling out here (file serving and many unstable kernels in a row) are not unreported problems. They are not new problems. They have been worked on, partially fixed, etc. etc. but with the development model as it is, you just cannot expect fixes to have a very long life-time.

    It is very very sad. But I think it will change as someone realizes how bad the situation is. Probably half a year or so from now, when people start getting really annoyed that you *still* cannot route, web-serve or file-serve in any significant volume with Linux 2.6.

    Until then, it's Linux 2.4 and Solaris - both slow compared to 2.6 maybe, but at least they stay up over night :)
  • by Anonymous Coward on Monday January 10, 2005 @10:51AM (#11309840)
    paxtest was modified to discourage the abuse of the test suite. people (in particular, exec-shield proponents) have used earlier paxtest results to show how good execshield was (which it wasn't when one dug a bit deeper). for reference: http://marc.theaimsgroup.com/?l=debian-devel&m=106804968406987&w=2 and the rest of that thread.

    you're wrong when you say that paxtest disables execshield. to do that one would have to write to /proc. what happens instead is that paxtest demonstrates that there are exploit techniques that can foil execshield. remember again, paxtest simulates exploit techniques, not applications. execshield by design has flaws, paxtest shows you that. it also has implementation flaws, some are shown by paxtest as well.

    the reason we have never submitted PaX (let alone grsecurity) to lkml is because... shock and horror... we never *wanted* to. now tell me what resulting constructive criticism we didn't listen to, i'm all ears.

    you're also confused about LSM and actual modules. the former is a framework, as it is, it's unfixable for grsecurity because the needs of grsecurity are *beyond* the stated role of LSM. as simple as that.
  • by silas_moeckel ( 234313 ) <silas.dsminc-corp@com> on Monday January 10, 2005 @11:06AM (#11309957) Homepage
    Ah, guess you have never worked someplace where there are no acceptable scheduled maintenance windows with outages.

    Pre-production is key here, generally working off of split mirrors and the like to ensure things are exact replicas. It's just an issue of procedure: if you don't have good procedure here, you won't have good tests. So I would differ on the nigh-on impractical part, as matching hardware and a good mirror amount to the same thing :)

    Course I may be biased; I work with exactly matching boxes where I can bring up a server on any of the hardware at any time in case of failure.
  • by duffbeer703 ( 177751 ) * on Monday January 10, 2005 @11:08AM (#11309975)
    The thing that is "retarded" is the fact that a huge security hole can slip through the cracks if Linus doesn't check his voluminous email.

    Linux isn't just a hobby toy anymore. If Linus is holding on to things too tightly, he's doing himself and the community a disservice.
  • by JohnFluxx ( 413620 ) on Monday January 10, 2005 @11:21AM (#11310085)
    Actually I think it's more like him mailing bill gates directly.
  • by LurkerXXX ( 667952 ) on Monday January 10, 2005 @11:23AM (#11310095)
    How about having that procedure nicely spelled out on an official website rather than just having to google for it and hoping the article you find has both real and current information? It's not hard. The BSD [openbsd.org] guys do it.

    The Linux guys failed more on the netiquette than the PaX guys. They failed to put forward a real, working contact list. Security guys don't like to trust random results from Google. How do you know you're sending it to the 'real' security person? I can't put the blame on them for this mess at all. IMHO, one should never, ever fail to provide an easy-to-find, current, working contact list for exploits.

  • by Sunspire ( 784352 ) on Monday January 10, 2005 @11:24AM (#11310111)
    Because you cannot make any assumptions about the attack vector. Say there's a local vulnerability found in the kernel that can give you privilege escalation. It's no problem, right? You don't allow remote logins, so you're not going to patch it. Wrong.

    Next time there's a small hole in Apache that, for instance, allows execution as the apache or nobody user, that local kernel security hole will come back to bite you in the ass and lead to your box being rooted.

    It doesn't even have to be a Apache hole. Say some little bit of user supplied input is being used in some chrooted or otherwise jailed context, perhaps you're generating a PS or PDF file in some temp directory on the fly. Again that little security mistake you've made combined with the local privilege escalation flaw you didn't patch will stretch the hole to goatse.cx proportions.

    Unless your machine is unplugged from the net, patch that kernel. Seriously, it's like insurance, a little pain every now and then so that when the shit hits the fan you'll hopefully live through it.
  • Re:So it begins. (Score:3, Insightful)

    by LurkerXXX ( 667952 ) on Monday January 10, 2005 @11:31AM (#11310161)
    Bring new development? You mean like OpenSSH, PF, CARP, etc?

    Yeah, those OPENBSD'ers are stuck in the mud and never think about new development.

  • by Kjella ( 173770 ) on Monday January 10, 2005 @11:40AM (#11310233) Homepage
    I bet that Linux has a lot better device support out of the box than Windows XP or Windows Server 2003 has.

    Absotively.

    Windows relies on third-party drivers that may or may not work, whereas in Linux they are already in the Kernel, where they receive more attention than third-party drivers would receive.

    More attention than from Microsoft? Certainly. More attention than from the people selling the hardware in question? I would hope not, for the sake of the company making it. The fact of the matter is that it is critical to their business, and that most Windows drivers work, and work well. It doesn't matter if you're OEM or retail: if you make doohickeys for Dell and customers complain to Dell, Dell will complain to you.

    When you think it over though, that makes it better not to have closed source drivers for Linux. Why? Because Linux just isn't critical to their business. Having half-assed buggy, obsolete drivers that the developers can't fix would make the situation worse, not better. Not having a stable ABI is the lesser of two evils, but it is a drawback, not a strength.

    You must understand that to users, putting in a CD to install the drivers is a non-issue (particularly if the task already involves installing something *gasp* inside the case). If I thought Linux could get the same level of support as a Windows driver, I would recommend they provide one. But if Linux were that important, they couldn't afford not to support it anyway. So by the time you have the momentum to make an ABI work, you don't need it. Ironic, isn't it?
  • Actual Concern (Score:2, Insightful)

    by kg4gyt ( 799019 ) on Monday January 10, 2005 @11:49AM (#11310317)
    One has to remember that the Linux exploits are much harder to actually take advantage of than the Windows exploits. Linux security holes, although serious, usually require you to first gain access to the physical terminal and then have the time needed to actually use the exploit, as opposed to writing a web page or PowerPoint file that someone merely has to open to become a zombie. Linux needs fixing at times; however, the security holes are, in comparison, not as serious as those Windows users encounter almost daily.
  • by SunFan ( 845761 ) on Monday January 10, 2005 @01:25PM (#11311156)
    If you don't like what Linus is doing, either maintain a separate set of patches or fork the kernel. I mean Jesus, SuSE patches the vanilla kernel, RedHat patches the vanilla kernel, Mandrake patches the vanilla kernel, and you know what ... I maintain a set of patches that I apply to my SuSE-patched kernel sources *SHOCK*.

    Yes, I'm shocked that you assume people have no life and can maintain a set of patches for their kernel just like you do. Sure, with Linux, I'm free to fork it, but I'm also free to write the next great epic novel. I leave you waiting in suspense, I guess, for my novel to appear in the book stores (I hope you are patient!).
  • Re:So it begins. (Score:2, Insightful)

    by SunFan ( 845761 ) on Monday January 10, 2005 @01:36PM (#11311262)
    You mean like OpenSSH, PF, CARP, etc?

    I wonder how many Linux geeks use OpenSSH to tunnel X Windows from their PC over the network and think "Wow, Linux is the bomb, Linux does this cool stuff." Probably too many.

  • by iabervon ( 1971 ) on Monday January 10, 2005 @01:42PM (#11311333) Homepage Journal
    With respect to needing to switch versions to a kernel that is different in unexpected ways, it's exactly the same if you're running a 2.4 kernel.

    There are actually improvements with 2.6: the distros have been invited to take over 2.6.x.y series, so that if they're going to be backporting patches, they can contribute this effort back to the community. In 2.4, the distros carry so many patches that you'd have an easier time backporting from the latest 2.4 vanilla kernel than from a distro kernel with the same nominal version. They have so many patches because they feel the need to add functionality in their stable series.

    Also, Alan Cox is maintaining a tree of "really stable" kernels, where he takes only bugfixes from the current work and adds them to the base version he's using. I haven't determined if he's planning to continue 2.6.9-ac indefinitely, or if he's going to only release 2.6.10-ac kernels once he judges 2.6.10-ac to be sufficiently tested.

    The real issue is that Linus is currently in charge of releasing the stable versions. He's really good at identifying what should go into the stable series, from the perspective of guiding development, but he doesn't have the discipline to identify a completely working version and call that 2.6.x. My prediction is that, in accordance with the ManagementStyle document, he will eventually decide that people complain about his release decisions, and therefore he should get somebody else (probably Alan Cox) to do that.

    As for development causing security problems, there has yet to be a 2.6 security hole in code that was added during 2.6. In general, new code is checked for all known patterns of bugs (almost all security holes fall into some pattern) and bad practices before being accepted. On occasion, a bug is found which is part of a new pattern, and future code with the same sort of bug will be caught, but existing code with that bug is not necessarily identified. This means that bugs are generally in code that hasn't been changed in a long time, not code which has been changed recently. In fact, there have often been bugs found in old versions which had already been eliminated unknowingly from new versions by people writing replacement code using improved techniques.

    For example, the recent hole was in code written ten years ago to facilitate the switch from a.out to ELF. The hole was a race condition due to changes made several years ago in the requirements of common code.
  • by Blakey Rat ( 99501 ) on Monday January 10, 2005 @01:54PM (#11311448)
    Whoosh, watch my entire point go flying right over your head.

    "MS people" don't need to defend the company based on Microsoft Bob because Microsoft Bob was sold for about a year a decade ago, then was dropped, and nobody has given a second thought about it since then.

    That would be like criticizing Ford because their cars made in 1935 lacked seatbelts. "Ford is the worst auto-maker because their 1935 sedan had no seatbelts, they were trying to kill their users!" Does that make any sense as an argument? No.

    Is Bob insecure? Sure. Does it matter? No, of course not! NOBODY USES IT! Just like nobody drives a 1935 Ford sedan on a daily basis. It doesn't matter. It's a pointless, and stupid, argument. Microsoft Bob can't be used as a hacking tool because NOBODY USES MICROSOFT BOB!

    You're just as deluded as the poster I replied to. Join the rest of us here in 2005, where nobody has even seen Microsoft Bob in 8 years and few people have even heard of it. Nobody gives a shit about Microsoft Bob. Few people did when it was on store shelves, and nobody does now... except a few deluded people on Slashdot.
  • by LurkerXXX ( 667952 ) on Monday January 10, 2005 @04:29PM (#11313499)
    The people available are documented, and if you don't know, a simple IRC session would point it out for you.

    Absolutely friggin' brilliant. Because we all know no one ever fakes their identity in IRC. You sir are a genius!

"May your future be limited only by your dreams." -- Christa McAuliffe

Working...