Security

Security — Open Vs. Closed 101

AlexGr points out an article in ACM Queue, "Open vs. Closed," in which Richard Ford prods at all the unknowns and grey areas in the question: is the open source or the closed source model more secure? While Ford notes that "there is no better way to start an argument among a group of developers than proclaiming Operating System A to be 'more secure' than Operating System B," he goes on to provide a nuanced and intelligent discussion on the subject, which includes guidelines as to where the use of "security through obscurity" may be appropriate.
  • endless debate (Score:3, Insightful)

    by cpearson ( 809811 ) on Tuesday February 06, 2007 @04:09PM (#17909774) Homepage
    Applications and systems that are developed rapidly by a small set of programmers would benefit from closed-source security, especially when producing software for small niches. Systems that are developed on a large scale, and mission-critical applications, benefit from open-source models because they can utilize a large tester base.

    Vista Forum [vistahelpforum.com]
  • by fred fleenblat ( 463628 ) on Tuesday February 06, 2007 @04:09PM (#17909784) Homepage
    Businesses that choose to develop closed-source software seem to also choose to ship code prematurely, to over-provision with extra features, to decide on features for marketing rather than security or quality reasons, and generally to compromise the product in multiple ways. In that light, closed source isn't itself the security problem; it's just an indicator that there are probably other problems lurking.
  • Simple (Score:4, Insightful)

    by Anonymous Coward on Tuesday February 06, 2007 @04:10PM (#17909804)
    The most secure operating system is the least used operating system.
  • by RAMMS+EIN ( 578166 ) on Tuesday February 06, 2007 @04:11PM (#17909808) Homepage Journal
    With regard to the question of which product is more secure, the only right answer is that you will never know. The problem is that you can't eliminate bias from a test that is supposed to assess this. Since a single product can't be both open source and closed source, you will always be comparing multiple products. As stated earlier, you can't reliably establish the relative security of these products, let alone attribute the result to open vs. closed source.
  • by ThinkFr33ly ( 902481 ) on Tuesday February 06, 2007 @04:14PM (#17909854)
    One supposed advantage of open source software is that, well, it's open. Everybody can take a look and see if the code has holes. The idea is that the more eyes look at something, the greater the chance of somebody spotting bugs.

    But the quantity of eyes isn't always the issue. I could put the Linux kernel source code in front of one million six-year-olds, and there is very little chance any of them would find a single bug.

    Obviously, we're not talking about six-year-old eyes here, but continue the scenario. There are some types of bugs that even very experienced coders wouldn't necessarily spot. Not every kind of security hole is a simple buffer overflow. Some kinds of issues will only be spotted by a highly trained and specialized set of eyes.

    Now, those highly trained eyes may be looking at the open source code, or they may not. All I'm saying is that the quote "Given enough eyeballs, all bugs are shallow" is not particularly accurate.
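    For instance, here's a hypothetical sketch (in Python; the names are invented for the example) of the kind of hole a casual reviewer sails right past: no buffer overflow, just a timing side channel in how a secret token is compared.

        # Two token checks that return identical results; only one is safe.
        import hmac

        def check_token_naive(supplied: str, expected: str) -> bool:
            # String equality short-circuits at the first differing character,
            # so response time leaks how much of the token an attacker has
            # guessed correctly. Functionally correct, subtly exploitable.
            return supplied == expected

        def check_token_careful(supplied: str, expected: str) -> bool:
            # hmac.compare_digest is the standard library's constant-time
            # comparison; it takes the same time no matter where the
            # inputs differ.
            return hmac.compare_digest(supplied.encode(), expected.encode())

    Both functions pass every functional test, which is exactly why a million casual eyeballs can miss what one trained pair catches.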
  • The Wrong Question (Score:5, Insightful)

    by ThosLives ( 686517 ) on Tuesday February 06, 2007 @04:19PM (#17909936) Journal

    This debate is all about the incorrect question. The reason is that code can be secure or not secure, regardless of its "open" or "closed" status.

    Until the industry realizes that "secure is secure" and stops worrying about the open or proprietary nature of things, this debate will probably keep software less secure than it could be by diverting resources to analysis rather than solutions.

    Put another way: Is a homemade door more or less secure than a professionally installed door? My answer is "it depends on the skills of those involved and the quality of materials".

    The same applies to software.

  • by Rosco P. Coltrane ( 209368 ) on Tuesday February 06, 2007 @04:26PM (#17910062)
    Closed security: the Titanic is unsinkable - White Star line
    Open security: the Titanic's hull is made of brittle metal and thus isn't safe - Independent safety inspector
  • Re:endless debate (Score:4, Insightful)

    by HomelessInLaJolla ( 1026842 ) * <sab93badger@yahoo.com> on Tuesday February 06, 2007 @04:32PM (#17910152) Homepage Journal

    Systems that are developed on a large scale, and mission-critical applications, benefit from open-source models because they can utilize a large tester base
    I see it in terms of receiving what was paid for.

    A program which costs $200 (typical of the industry and of closed source) should not rely on the consumer to be its (security) beta testers.

    A program which costs nothing, or only a nominal amount (typical of FOSS), can ethically rely on the consumer base to be its (security) beta testers.

    If I paid for it then it should work (shouldn't break/shouldn't be so easily exploitable). If I didn't pay for it then I should expect to make a contribution.

    Right now the industry is addicted to charging production quality prices for beta (even alpha) quality software.
  • by Kadin2048 ( 468275 ) <slashdot.kadin@xox y . net> on Tuesday February 06, 2007 @04:42PM (#17910320) Homepage Journal
    Actually, his conclusion contains a far more useful test, although it does boil down to common sense:

    The difference between these cases is simple: determinism. In the case of the encryption software, the outcome is deterministic. Knowing everything about the mechanism doesn't compromise the security of the outcome. In contrast, for antivirus software the system is heuristic. As such, some things benefit from disclosure, and some things don't. In these two cases, it's obvious. Unfortunately, that's the exception, not the rule. The problem is that many systems contain aspects that are heuristic and aspects that are deterministic.
    In essence, the question to ask is whether closing the source really results in any increased security; in the case of DRM systems (his example), it does, because such systems are broken by default, and knowledge of the 'algorithm' allows them to be cracked.

    Personally, I would argue that such 'heuristically secured' systems are broken by default, and that there are good reasons why generations of computer scientists have insisted that security through obscurity is meaningless. The "security" provided by such heuristics is of value only to marketing and legal departments; it is not and should not be confused with the security offered by 'deterministically secured' systems (cryptography, in his example). Saying that an application is "secure" when it depends on an attacker not knowing how it works borders on unethical false advertising.
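    A minimal sketch of the deterministic case, using Python's standard hmac module (the message and key here are invented for illustration): the entire mechanism is public, yet the outcome stays secure because the only secret is the key.

        import hashlib
        import hmac
        import secrets

        key = secrets.token_bytes(32)          # the only secret in the system
        msg = b"transfer $100 to account 42"

        # Anyone can read this code; publishing the mechanism costs nothing
        # (Kerckhoffs' principle). A valid tag still requires knowing `key`.
        tag = hmac.new(key, msg, hashlib.sha256).digest()

        # An attacker who knows everything above except `key` cannot forge:
        forged = hmac.new(b"wrong key", msg, hashlib.sha256).digest()
        print(hmac.compare_digest(tag, forged))  # False

    A heuristic system like a virus scanner is the opposite: its detection rules effectively are the secret, so publishing them tells the attacker exactly what to evade.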
  • by straponego ( 521991 ) on Tuesday February 06, 2007 @04:50PM (#17910480)
    Okay, let's look at just one service, SSH. Without security through obscurity, you can do things like keep OpenSSH patched, use very good passwords, disallow root logins, restrict logins to certain users (which is kinda security through obscurity, but...)

    And on servers I run like that, I have yet to have a break-in, but I do get up to thousands of connection attempts from SSH worms, from the same servers, every day (well, they would get through if I stopped dropping them in iptables, but never mind that). So it's possible that they could hit a user with a bad password, or one they got from another compromised machine.

    On other boxes, like my home box, I put SSH on a high-numbered port. In a couple of years I've had zero attempts hit that port. It would be quite stupid to rely only on this trick, ignoring good discipline in other areas. But as a supplementary layer, it's quite useful. If nothing else, it saves bandwidth.

    It's not sufficient, but it's not inherently bad.
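    As a minimal sketch of that checklist, here's a small Python audit of sshd_config. The directive names (Port, PermitRootLogin, AllowUsers, AllowGroups) are real OpenSSH options, but the policy values and the assumed defaults are my assumptions and may not match your OpenSSH version.

        from pathlib import Path

        def audit_sshd(path="/etc/ssh/sshd_config"):
            """Warn about settings that weaken the discipline described above."""
            settings = {}
            for raw in Path(path).read_text().splitlines():
                line = raw.strip()
                if not line or line.startswith("#"):
                    continue
                parts = line.split(None, 1)
                settings[parts[0].lower()] = parts[1] if len(parts) > 1 else ""

            # NOTE: the fallback defaults ("yes", "22") are assumptions and
            # may not match the compiled-in defaults of your OpenSSH build.
            warnings = []
            if settings.get("permitrootlogin", "yes") != "no":
                warnings.append("PermitRootLogin should be 'no'")
            if "allowusers" not in settings and "allowgroups" not in settings:
                warnings.append("no AllowUsers/AllowGroups login restriction")
            if settings.get("port", "22") == "22":
                warnings.append("sshd on port 22; a high port sheds worm noise")
            return warnings

        if __name__ == "__main__":
            for w in audit_sshd():
                print("WARN:", w)

    None of this replaces patching and good passwords; it just checks that the cheap layers are in place.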

  • by danpsmith ( 922127 ) on Tuesday February 06, 2007 @05:04PM (#17910802)

    Now, those highly trained eyes may be looking at the open source code, or they may not. All I'm saying is that the quote "Given enough eyeballs, all bugs are shallow" is not particularly accurate.

    I think, however, the "open source is more secure" argument tends to follow the idea that, behind the scenes, the code of closed-source applications tends to be generally faulty, or at least Windows code in particular. There could very well be many exploits that, given the code for MS Vista, even amateur programmers could easily pick out, simply because the code base is so vast and the number of people who have full access to it is so small.

    It's just like if I write my own little closed source app, at first it may appear to be flawless to me because I am the only one seeing the code. But I might code in an inherently buggy way that would be easily picked up by another set of eyes. Then, as little problems flood in from end users, instead of fixing my coding methodology, I make little fixes to the code that are basically workarounds around perhaps solving a bigger problem that would require more time (something more fundamental to the way the program is structured). As an effect, the "patches" become more and more around fixing faults than providing the functionality intended in the first place. Whereas with open source, someone might've already just forked my project and coded the idea using different data structures or in a largely more efficient way.

    That's not to say that I couldn't be flawless, but the odds decrease when nobody else can see the code. Using closed source software is like running a car without access to the engine. You see things going wrong, but as far as why and how they are happening, and whether they are huge problems or only small ones, you can't tell without diving into the car's components directly. Closed source doesn't allow this. It's not just the fact that there are multiple eyes, then; it's the fact that those eyes are outside the original coder, sometimes even belonging to the people having the problems themselves. It takes the "how do we recreate the bug?" discussion out, and oftentimes a sufficiently skilled end user can not only support themselves but also improve the codebase.

    Honestly, it seems like a better approach. The hard thing is that you can't really know which is more secure. But in practice, let's be honest: a widely used project in the OSS community gets fixed more quickly than MS products, with their "Patch Tuesday" schedule of patch releases and strange recommended workarounds for existing security holes.

  • With closed source and "security through obscurity", you do not know - nor have any means of knowing - who is examining the code, their qualifications, their abilities or their resources. The same is equally true of open source. The difference is that, for closed source, you eliminate your ability to either compensate for, or exploit, this unofficial work. It will happen - code is stolen all the time, even from companies as closed-up as Cisco - but even to acknowledge it could cause irreparable harm. The number of well-publicized cases is very small, compared to the number of cases that are shown later to have happened.

    Closed-source, then, offers no meaningful protection to the companies involved. Precisely because they have no objection to stealing from competitors, corporations who rely on trade secrets and security through obscurity invalidate the very model they are based upon. If you work on the basis of all people being corruptible, you cannot also work on the basis of people not being corruptible. If you abuse the trust of others, you will inevitably be subject to the abuse of trust.

    Open source doesn't guarantee that the eyes looking at the code are of any particular quality, or that they'll give information back, or that they won't steal the code anyway. But at least you know the possibilities and accept them, you don't pretend they don't exist.

    In the end, the difference between the two models is that one deludes the managers into believing they have something nobody else has. Open Source has its own delusions - that the developers can do a damn thing if a corporation takes the code, patents it, and sues said developers into oblivion, for example. One could argue that both are virtually unsurvivable disasters and that you might as well go for the one that gets you the money and the groupies. On the other hand, the reality is that programmers don't make money (managers do) and the last geek known to have had groupies was Socrates.

  • If you can't prove it is secure by showing me how it works, then it's not secure. How do I know that there isn't some bolt in the back of the bank vault, or some skeleton key, unless you allow me to inspect it myself?

    Security by faith or by fact, which would you prefer?
