
Linux's Security Through Obscurity

An anonymous reader writes "The age-old full disclosure debate has been raging again, this time in no other place than at the foundations of the open-source flagship GNU/Linux operating system: within the Linux kernel itself. It beggars belief, but even Linux creator Linus Torvalds has advocated against the sort of openness on which Linux has thrived, arguing that security fixes to the kernel should be obscured in changelogs, saying 'If it's not a very public security issue already, I don't want a simple "git log + grep" to help find it.' Unfortunately, it's not kernel exploit writers who need to grep the changelog in order to find kernel vulnerabilities. On the contrary, it's downstream distributors who rely on changelog information to decide when to patch their distributions' kernels and keep their users safe."
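
For context, here is a minimal sketch of the kind of "git log + grep" search the quote refers to, assuming a checked-out kernel tree. The version range and keyword list are illustrative guesses only; kernel commits carry no standard security tag, which is exactly the point of contention.

    # Scan commit summaries between two releases for likely security fixes.
    # The keywords are common patterns, not an official marker, so misses are expected.
    git log --pretty=oneline v2.6.25..v2.6.26 | \
        grep -iE 'overflow|use[- ]after[- ]free|privilege|security'
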
This discussion has been archived. No new comments can be posted.

  • What the... (Score:5, Insightful)

    by gparent ( 1242548 ) on Thursday July 17, 2008 @09:37AM (#24227123)
    Linux users typically praise open source software on the basis that vulnerabilities can be found easily and patched by anybody who possesses the knowledge to do so, making open source software more secure. Why should this change now?
  • by rubbsdecvik ( 1326987 ) on Thursday July 17, 2008 @09:39AM (#24227147) Homepage
    It would seem that if the fix for a vulnerability appears in the changelog, then it's already fixed. I realize that some may need to run an older kernel, but if a kernel developer found the vulnerability and fixed it, there is little way of knowing whether anyone else (read: a black hat) already knew about it.
  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Thursday July 17, 2008 @09:40AM (#24227173)
    Comment removed based on user account deletion
  • by struppi ( 576767 ) <struppiNO@SPAMguglhupf.net> on Thursday July 17, 2008 @09:41AM (#24227191) Homepage
    The summary and the linked email from Brad Spengler look like flamebait to me. Linus Torvalds writes in the quoted mail:

    That said, I don't _plan_ messages or obfuscate them, so "overflow" might well be part of the message just because it simply describes the fix. So I'm not claiming that the messages can never help somebody pinpoint interesting commits to look at, I'm just also not at all interested in doing so reliably.

    And from the second email:

    > by 'cover up' i meant that even when you know better, you quite
    > consciously do *not* report the security impact of said bugs
    Yes. Because the only place I consider appropriate is the kernel changelogs, and since those get published with the sources, there is no way I can convince myself that it's a good idea to say "Hey script kiddies, try this" unless it's already very public indeed.

    Also, someone is not satisfied with an email from Linus Torvalds, so he drags the discussion over here to /. - this certainly will solve the problem... (Sorry for RTFA, I should know better)

  • Re:What the... (Score:5, Insightful)

    by HungryHobo ( 1314109 ) on Thursday July 17, 2008 @09:43AM (#24227213)
    As the userbase shifts towards more mainstream users and away from the technically able, the percentage of users to whom "who possesses the knowledge" actually applies drops, and the number who are likely to be slow updating their systems goes up. This changes the game a little. I'm a supporter of the open model, but I can see where they're coming from.
  • So (Score:5, Insightful)

    by C_Kode ( 102755 ) on Thursday July 17, 2008 @09:48AM (#24227285) Journal

    So, what they're saying is that when you find/fix a vulnerability, you should broadcast it on the BBC, otherwise you will be less safe?

    I don't think so. Love it or hate it, obscure security issues do protect some users. Obviously the issues need to be tracked, and I think changelogs are a good place to do it. There isn't a real reason to inform the world through all channels available. Just fix it, log it, and move on. Anyone who needs to know will know where to look.

  • by fictionpuss ( 1136565 ) on Thursday July 17, 2008 @09:49AM (#24227297)

    The thing is that while security through obscurity is a fool's game, it can also hurt your users to publish exact details of the security vulnerabilities you've found in your own product before many of your users have had a chance to patch the problem.

    Surely, though, the people who are looking to take advantage of security vulnerabilities are generally the ones who already have a financial motivation to do so? The people who already have their own dark networks to share, or buy and sell, vulnerabilities?

    Won't they still do this even if it becomes harder to decipher changelogs? The only thing changing, then, is that it'll take longer for regular users to see the danger.

  • by dotancohen ( 1015143 ) on Thursday July 17, 2008 @09:52AM (#24227355) Homepage

    Read the replies. Linus is not advocating security through obscurity. He just doesn't want a big flashing sign "SECURITY" on security-related bugfixes. He doesn't want them to stand out in any way at all.

  • by bunratty ( 545641 ) on Thursday July 17, 2008 @09:56AM (#24227409)

    But won't fewer people be able to take advantage of security vulnerabilities if it becomes harder to decipher changelogs? Security is not an all-or-nothing situation. The fewer people who know about a vulnerability, the fewer that can exploit it, and that means that users have a lower chance of being exploited.

    That's actually an important point about security. You cannot make a useful system without any vulnerabilities. You can only make it harder to exploit the vulnerabilities, meaning that fewer will be able to exploit them. For example, you cannot make an uncrackable and useful code, but you can make a code so hard to break that very few will even try.

  • by betterunixthanunix ( 980855 ) on Thursday July 17, 2008 @10:00AM (#24227447)
    You say you have no mercy for commercial distributors, but the truth is that this sort of obfuscation will only increase their business. Companies like Red Hat and Novell have the resources to pay people to spend all day reading through changelogs and deciding whether or not a patch is worth applying (in addition to people who are paid to submit patches). Universities may not have those resources, and their computer centers may only have enough time to quickly check a patch for common security fixes using grep. If it becomes impossible to do that, then all we'll see is an increase in the number of people who buy support from commercial distributors, because they won't be able to support themselves.
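
    One hypothetical form of that quick check, run against the per-release ChangeLog file published alongside the kernel sources rather than the patch itself (the keyword list is a guess, not an official convention, so vaguely worded fixes will slip through):

        # Triage a released kernel's changelog for security-relevant entries.
        # -B 2 shows two lines of leading context around each match.
        grep -iE -B 2 'overflow|exploit|privilege|out[- ]of[- ]bounds' ChangeLog-2.6.26
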
  • by Tupper ( 1211 ) on Thursday July 17, 2008 @10:12AM (#24227581) Homepage
    Yeah, he thinks security bugs are just like regular bugs. But he's wrong. Most bugs don't bite most users--- the ones that don't can be ignored. Very few people can ignore security bugs--- they bite everyone. The chance I need a random bugfix is very small; if I don't need it, I don't want it. The chance I want a security bugfix is almost 100%.
  • by Anonymous Coward on Thursday July 17, 2008 @10:13AM (#24227595)

    You're confusing exploiting vulnerabilities with knowing they exist.

    All this does is make it harder to know about vulnerabilities. Having fewer people know about vulnerabilities only means fewer people know how to fix, patch, or work around them.

    To fix your analogy: you cannot make an uncrackable and useful code, but you can make a code so hard to read that few people will even try to read it, whether to find exploits or to fix them. Those looking to exploit the code could always use the trusty binaries, like they have always done, but the rest of us depend upon the source code to know about possible vulnerabilities, to work around them, and to fix them.

  • by mlwmohawk ( 801821 ) on Thursday July 17, 2008 @10:16AM (#24227641)

    In the old argument that freedom requires responsibility, this is a prime example of the conflict.

    In a truly freedom-based model, you assume and rely on the fact that Linux users are responsible for their systems, and thus WARNING SECURITY BUG FIX NOW is a good title for an important patch.

    In the less free "sharecropper" future of Linux, where users rely on upstream vendors to "take care of them" and take no responsibility for their systems, hiding such warnings is great security theater to make them feel more secure. They are not more secure, as we all know, but they FEEL that they are, and the kernel guys pretend to act more responsibly in this "post 9-11" fear-based world.

    It's all bullshit and everyone who knows anything knows it. What surprised me was Vixie just saying "patch and trust us" without explaining, with specificity, why.

    When even the proponents of freedom start to fear freedom, we are in deep shit.

  • by kipin ( 981566 ) on Thursday July 17, 2008 @10:28AM (#24227781) Homepage
    And what exactly is the problem with this method?
    If you don't have the time to perform security maintenance, but someone else does, why shouldn't they be allowed to make a profit for their time?
  • Comment removed (Score:4, Insightful)

    by account_deleted ( 4530225 ) on Thursday July 17, 2008 @10:31AM (#24227821)
    Comment removed based on user account deletion
  • Congratulations, you're exactly the reason Linus doesn't want a big flashing "Security" sign.

    Linus' point was that most bugs can be potential security problems, and if you ignore anything but security fixes, you risk not patching when a bug is discovered to be exploitable after the fix has gone into the kernel.

  • by sukotto ( 122876 ) on Thursday July 17, 2008 @10:37AM (#24227877)

    In the same thread he also says "So as far as I'm concerned, 'disclosing' is the fixing of the bug. It's the 'look at the source' approach."

    I don't see any security by obscurity going on here. He fixes the bug, and tells you in the changelog what the bug was.

    What he's NOT doing is announcing in advance how to exploit the bug.

    So why are so many people getting agitated about this?

  • He's right - they're just bugs. Where he isn't right is about OpenBSD: security is a by-product of fixing bugs. They don't just fix the bugs; when a new class of bug is identified, the whole source tree is scanned for that type of bug - both kernel *and* user-land. But then Linux is just a kernel, isn't it?
  • by xtracto ( 837672 ) * on Thursday July 17, 2008 @10:45AM (#24227995) Journal

    Yeah, he thinks security bugs are just like regular bugs. But [I think] he's wrong.

    There, fixed it for you. The fact is that just because, from your personal point of view, a bug that can be used to gain unauthorized rights is a special kind of bug does not mean that everybody sees it that way.

    From what I have read about Linus, he is a very pragmatic guy. For him, a security bug is just another bug in the code (and in a simplistic way, it really is true).

    Some people will be more concerned with those bugs, others will be concerned with bugs that reduce the performance of the OS, others will be more interested in bugs that reduce the reliability (as in, crashing every so often, etc).

    The fact is that there are lots of people already classifying bugs; I think what Linus is saying is that he does not consider it the job of the kernel guys to do that kind of classification.

    For them, it is just another bug that must be fixed.

  • by Anonymous Coward on Thursday July 17, 2008 @11:02AM (#24228207)

    Agreed. The thing to note is that this cuts both ways.

    *Every* bug is a potential security bug. So should we look for ways to try to convert every bug into a security notice? Of course not! Why waste the time? What happens when it turns out that a bug doesn't have security implications? Do we shout "hurray!" and flag it as such?

    Linus is entirely correct - a bug is a bug and must be fixed.

  • Re:What the... (Score:5, Insightful)

    by hcmtnbiker ( 925661 ) on Thursday July 17, 2008 @11:07AM (#24228259)
    Linux users typically praise open source software on the basis that vulnerabilities can be found easily and patched by anybody who possesses the knowledge to do so, making open source software more secure. Why should this change now?

    This has nothing to do with the openness of the source code or the disclosure of vulnerabilities. Linus just doesn't want the changelog of this version to hand out big proofs of concept for exploits against the last version of the kernel (which there will of course be people still running). He doesn't want to aid script kiddies. Anyone can still find and patch parts of the code base; there's no move away from that.
  • by rs232 ( 849320 ) on Thursday July 17, 2008 @11:13AM (#24228351)
    This, in my view, is total nonsense, if you don't mind me saying so, CmdrTaco. The full source is out there for anyone to see, and bugs are reported on the kernel mailing list, for anyone to see. How is this in any way, shape or form 'security through obscurity'?
  • by rs232 ( 849320 ) on Thursday July 17, 2008 @11:15AM (#24228383)
    corrected headline .. :)
  • by NickFortune ( 613926 ) on Thursday July 17, 2008 @11:15AM (#24228385) Homepage Journal

    At that point, slashdot and schneier.com are just trolling

    Umm.... the schneier article is almost seven years old and apparently discusses a release of the 2.2 kernel. I think the article was referenced purely as a summary of security-through-obscurity issues, rather than as an attack on Linus.

  • Re:What the... (Score:3, Insightful)

    by atlep ( 36041 ) on Thursday July 17, 2008 @11:35AM (#24228683)

    It doesn't really matter that the percentage drops. As long as the absolute number of people actually fixing bugs doesn't drop, the rate of bug fixing will remain constant.

    "Anybody can find and fix the bugs" has never meant "I personally can fix the bugs". It means "somebody out there can fix bugs without having to be part of the developer team".

  • by rgviza ( 1303161 ) on Thursday July 17, 2008 @11:49AM (#24228925)

    Actually, he has a good point in that you don't want to just go blindly patching everything the day the patch comes out. A lot of patches are trivial or fix hardware that has nothing to do with you, and applying them can lead to downtime if a patch introduces a new bug.

    You can break an otherwise healthy system with a bunch of patches you don't need. By the same token, if you don't patch a security issue right away, it can lead to system compromise.

    Therefore, full disclosure of the security issues a patch fixes is necessary. Any system admin worth his salt knows it's a bad idea to just go around fixing stuff that isn't broken. It can cause you to lose your job.

    If it ain't broke, don't fix it. Without knowledge of what you are fixing and why, you are playing Russian roulette. If you tell your CIO you installed a kernel patch and broke critical systems "just because", he's not going to like that answer.

    If you tell him your server got compromised because you didn't install an important security update, because you didn't know it was a security update, you could also lose your job.

    If you say to your CIO, "I have to install every kernel patch, because it's Linux and they never tell us what they are doing and why, so if it breaks, don't blame me," your CIO may say "This isn't working out; move to Sun or AIX" after it breaks critical systems more than once.

    If you run through all the scenarios, disclosure starts to make sense if you are an administrator or user, the people whom a lack of information affects the most.

    Without administrators or users, you got nuthin'.

    Disclosure doesn't necessarily need to be full specific disclosure. Linus just needs to say "Install this patch because it fixes a security problem, but I'm not telling you what it is."

    We don't care what it is. If we are told we need to install a release because it fixes an important security issue, we will. We don't need to know what that security issue is.

    -Viz

  • by snspdaarf ( 1314399 ) on Thursday July 17, 2008 @12:10PM (#24229277)
    Problem is, it only takes one. If an exploit is developed, it can get passed around among the Bad Guys, even if they don't have the smarts to do it on their own. Look at all the script kiddies. I like to know about security issues, but I prefer that a patch be available before the world is told how to attack my systems.
  • by menace3society ( 768451 ) on Thursday July 17, 2008 @12:16PM (#24229365)

    What Linus IS doing is trolling. Plain and simple.

    There is a policy, or at least a strong convention, in place for Linux that bug fix commits should explain in a fairly detailed fashion what the bug was and/or how it was fixed. However, most of the security fixes are vague and general.

    Someone pointed this out, and first Linus said there was no "policy." Someone pointed out that, in fact, there was. Then Linus said that wasn't the point; the issue was that he didn't want script kiddies to be able to find potential exploits easily. So someone pointed out that this means individuals and distros can't tell whether a given bugfix is urgent or not, and Linus replied that the question of whether a bug is security-related is difficult to answer. Just to make sure everyone knew he was trolling hard, he flamed OpenBSD for having a better security record than Linux.

    It boggles my mind, the extent to which Linus is able to spew the most outrageous bullshit and Linux nutriders will buy it. He's an excellent programmer and deserving of his reputation, but the cult of hero worship that surrounds him drags down the whole community of Linux users (and by extension, Free Software in general).

  • by Anonymous Coward on Thursday July 17, 2008 @12:24PM (#24229475)

    No, there was only one openssh bug around that time; the rest were PAM/Linux-specific issues. And that one openssh bug had nothing to do with it being more widely adopted; it was just an ordinary "bug found in relatively new software" situation.

  • by Evets ( 629327 ) on Thursday July 17, 2008 @12:50PM (#24229879) Homepage Journal

    "fixed bug #23456 overflow at line #1234 causing dumaflopper() to return incorrect result - known security problem" or

    "fixed bug #23456 overflow at line #1234 causing dumaflopper() to return incorrect result - important update"

    would be more appropriate IMO. Letting people know where the security fixes are is important in getting the changes widely distributed.

    By hiding it, you're only protecting yourself from second rate hackers. The first rate hackers found the problem and began taking advantage of it well before the development team was aware the problem existed.

    Further, a better community understanding and acceptance of insecurity would be an even better idea. Too many people out there think "I've secured this box, I know what I'm doing, nobody can get in" when in fact there are very few such boxes out there, and the real security layer being utilized is the fact that there are so many other machines out there that are easier to control. If you know you are vulnerable, the mindset changes.

    Example: "E-mail has lots of viruses, so I don't open up strange email. Now that I have Norton, it protects me, so I open up strange email if it has a subject that draws me in." That's a mindset a lot of people have. Norton gives them the sense that they are secure, but the reality is far from that. If everyone knew how insecure they really were, fewer people would open up virus-laden spam.

  • by TheSHAD0W ( 258774 ) on Thursday July 17, 2008 @01:10PM (#24230157) Homepage

    So, some random - but short, say within 3 days - amount of time later, post a message saying "security fix implemented - please update".

    That will alert folks that there's a security issue without spotlighting the problem.

  • by _Sprocket_ ( 42527 ) on Thursday July 17, 2008 @01:40PM (#24230573)

    The fewer people who know about a vulnerability, the fewer that can exploit it, and that means that users have a lower chance of being exploited.

    Two things to consider:

    1) All it takes is one person to exploit your vulnerability. And that one person doesn't even have to know you exist and target you specifically. Most cases involve targets of opportunity.

    2) These things don't remain secret. How fast the knowledge spreads depends only on the particulars of the situation. But the knowledge will spread, sometimes very fast. You're unlikely to be dealing with just one potential attacker.

    That's actually an important point about security. You cannot make a useful system without any vulnerabilities. You can only make it harder to exploit the vulnerabilities, meaning that fewer will be able to exploit them. For example, you cannot make an uncrackable and useful code, but you can make a code so hard to break that very few will even try.

    It depends on what kind of vulnerability we're dealing with. There are known trade-offs in the design of a system, and then there are failures in the design or implementation.

    Security is never absolute by design. There are always trade-offs being made (inverse relationship between usability and security, investment of resources vs. value of what's being protected, etc.). Hopefully designers understand the issues and have made wise choices. But even the most well thought out system will ultimately have left some possibility of subverting it. Thus exists the concept that security is not an absolute.

    Bugs and design flaws are a different issue. These are not trade-offs but unintentional mistakes or miscalculations. It is entirely possible to design or implement a system without such flaws. But of course, designing something without flaws, or implementing it without bugs, is difficult.

  • by elronxenu ( 117773 ) on Thursday July 17, 2008 @07:36PM (#24235453) Homepage

    But that also means that there is never a time when you can "let people know", except when it's not an issue any more, at which point there is no _point_ in letting people know any more.

    Actually there is a point. Not everybody runs the latest kernel all the time. And so reporting a fixed security problem is not a matter of "we fixed another security problem for you" but rather "all versions of (linux) between 2.6.xx and 2.6.yy are vulnerable to (problem description) and so please upgrade to 2.6.yy+1."

    However, Linus' role is to manage the huge volume of changes going into the kernel, and making a big song and dance about security fixes would detract from performing that role. Somebody else should do that; it's often the vendors, and that seems quite adequate to me. I regularly see Debian Security Advisories about kernel 2.6.18 and upgraded packages. I don't run those kernels myself, but the updates probably come from backported fixes applied in later 2.6.x kernels. Therefore, if I run the latest 2.6.x kernel, I am safe from all vulnerabilities fixed before 2.6.x was released.

    I think the kernel developers' attitude is that they don't want you to run 2.6.18 or 2.6.22 or whatever; they want you to run the latest released kernel. If there's a bug in 2.6.18 (there are many, apparently) their advice to you will be to upgrade to 2.6.26, or the latest kernel in the series you are running (2.6.25.11 if running 2.6.25, 2.4.36.6 if still running a 2.4 kernel).
