Security — Open Vs. Closed
AlexGr points out an article in ACM Queue, "Open vs. Closed," in which Richard Ford prods at all the unknowns and grey areas in the question: is the open source or the closed source model more secure? While Ford notes that "there is no better way to start an argument among a group of developers than proclaiming Operating System A to be 'more secure' than Operating System B," he goes on to provide a nuanced and intelligent discussion on the subject, which includes guidelines as to where the use of "security through obscurity" may be appropriate.
endless debate (Score:3, Insightful)
closed source is just one aspect (Score:5, Insightful)
Simple (Score:4, Insightful)
You Can't Know Which is More Secure (Score:5, Insightful)
The Quantity of the Eyes Isn't Always The Issue (Score:5, Insightful)
But the quantity of eyes isn't always the issue. I could put the Linux kernel source code in front of 1 million six-year-olds, and there is very little chance any of them would find a single bug.
Obviously, we're not talking about six-year-old eyes here, but continue the scenario. There are some types of bugs that even very experienced coders wouldn't necessarily spot. Not every kind of security hole is a simple buffer overflow. Some kinds of issues will only be spotted by a highly trained and specialized set of eyes.
Now, those highly trained eyes may be looking at the open source code, or they may not. All I'm saying is that the quote "Given enough eyeballs, all bugs are shallow" is not particularly accurate.
The Wrong Question (Score:5, Insightful)
This debate centers on the wrong question: code can be secure or insecure regardless of its "open" or "closed" status.
Until the industry realizes that "secure is secure" and stops worrying about the open or proprietary nature of things, this debate will probably keep things from being as secure as they could be, by diverting resources to analysis rather than to solutions.
Put another way: Is a homemade door more or less secure than a professionally installed door? My answer is "it depends on the skills of those involved and the quality of materials".
The same applies to software.
I have a pre-canned explanation of open vs closed (Score:5, Insightful)
Open security: the Titanic's hull is made of brittle metal and thus isn't safe - Independent safety inspector
Re:endless debate (Score:4, Insightful)
A program which costs $200 (typical of the industry and of closed source) should not be relying on the consumer to be the (security) beta tester.
A program which costs nothing, or only a nominal amount (typical of FOSS), can ethically rely on its consumer base to be (security) beta testers.
If I paid for it, then it should work (it shouldn't break, and it shouldn't be so easily exploitable). If I didn't pay for it, then I should expect to make a contribution.
Right now the industry is addicted to charging production-quality prices for beta- (even alpha-) quality software.
His rule of thumb is useful. (Score:5, Insightful)
Personally, I would argue that such 'heuristically secured' systems are broken by default, and that there are good reasons why generations of computer scientists have insisted that security through obscurity is meaningless. The "security" provided by such heuristics is of value only to marketing and legal departments; it is not, and should not be confused with, the security offered by 'deterministically secured' systems (cryptography is his example). Saying that an application is "secure," when its security depends on an attacker not knowing how it works, borders on unethical false advertising.
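The deterministic side of that distinction is essentially Kerckhoffs's principle: a system should stay secure even when everything about it except the key is public. A minimal Python sketch of the idea, using the standard library's HMAC (the key and message values here are made-up placeholders):

```python
import hashlib
import hmac

# Kerckhoffs's principle: the algorithm (HMAC-SHA256) is completely
# public, yet security holds because only the key is secret.
key = b"secret-key"          # the single secret in the system
message = b"transfer $100"

# Anyone can read how this tag is computed...
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# ...but an attacker who knows the algorithm and the message, and
# guesses the wrong key, produces an unrelated tag.
forged = hmac.new(b"wrong-key", message, hashlib.sha256).hexdigest()

print(tag != forged)  # True: knowing the algorithm is not enough
```

By contrast, an "obscurity" scheme whose only secret is the algorithm itself is broken for every user the moment the code leaks once.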
security through obscurity just another layer (Score:5, Insightful)
And on servers I run like that, I have yet to have a break-in, but I do get up to thousands of connection attempts from SSH worms, from the same servers, every day (well, I would, if I weren't dropping them in iptables, but never mind that). So it's possible that they could hit a user with a bad password, or one they got from another compromised machine.
On other boxes, like my home box, I put SSH on a high-numbered port. In a couple of years I've had zero attempts hit that port. It would be quite stupid to rely only on this trick, ignoring good discipline in other areas. But as a supplementary layer, it's quite useful. If nothing else, it saves bandwidth.
It's not sufficient, but it's not inherently bad.
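The layered setup the comment describes can be sketched roughly as follows, assuming a stock sshd and iptables (the port number 2222 is an arbitrary placeholder, not taken from the comment):

```shell
# /etc/ssh/sshd_config: move sshd to a high-numbered port
# (2222 is a placeholder; pick any unused high port)
#   Port 2222

# Drop the worms' connection attempts on the default port outright,
# as the comment describes: no response, no wasted bandwidth.
iptables -A INPUT -p tcp --dport 22 -j DROP

# Accept SSH only on the relocated port.
iptables -A INPUT -p tcp --dport 2222 -j ACCEPT
```

As the comment stresses, this is a supplementary layer only; key-based authentication and good password discipline still do the real work.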
Re:The Quantity of the Eyes Isn't Always The Issue (Score:4, Insightful)
I think, however, the "open source is more secure" argument tends to follow the idea that, behind the scenes, the code under closed-source applications tends to be generally faulty, or at least Windows code in particular. There could very well be many exploits that, given the code for MS Vista, amateur programmers could easily pick out, simply because the code base is so vast and the number of people who have full access to it so small.
It's just like if I write my own little closed-source app: at first it may appear flawless to me because I am the only one seeing the code. But I might code in an inherently buggy way that would be easily picked up by another set of eyes. Then, as little problems flood in from end users, instead of fixing my coding methodology, I make little fixes that are really workarounds, sidestepping a bigger problem whose proper solution would require more time (something more fundamental to the way the program is structured). As a result, the "patches" become more and more about fixing faults rather than providing the functionality intended in the first place. With open source, someone might already have forked my project and coded the idea with different data structures or in a far more efficient way.
That's not to say that I couldn't be flawless, but the odds decrease when nobody can see the results. Using closed-source software is like running a car without access to the engine: you see things going wrong, but you can't determine why and how they are happening, or whether they are huge problems or only small ones, without diving into the car's components directly. Closed source doesn't allow this. So it's not just that there are multiple eyes; it's that those eyes belong to people other than the original coder, sometimes even to the people having the problems themselves. It takes the "how do we recreate the bug?" discussion out, and oftentimes a sufficiently skilled end user can not only support themselves but improve the codebase.
Honestly, it seems like a better approach. The hard part is that you can't really know which is more secure. But in practice, let's be honest: widely used OSS projects get fixed more quickly than MS products, with their "Patch Tuesday" schedule of patch releases and recommended strange workarounds for existing security breaches.
The quality of the unknown eyes is what matters (Score:3, Insightful)
Closed source, then, offers no meaningful protection to the companies involved. Precisely because they have no objection to stealing from competitors, corporations that rely on trade secrets and security through obscurity invalidate the very model they are based upon. If you work on the basis of all people being corruptible, you cannot also work on the basis of people not being corruptible. If you abuse the trust of others, you will inevitably be subject to the abuse of trust.
Open source doesn't guarantee that the eyes looking at the code are of any particular quality, or that they'll give information back, or that they won't steal the code anyway. But at least you know the possibilities and accept them, you don't pretend they don't exist.
In the end, the difference between the two models is that one deludes the managers into believing they have something nobody else has. Open Source has its own delusions - that the developers can do a damn thing if a corporation takes the code, patents it, and sues said developers into oblivion, for example. One could argue that both are virtually unsurvivable disasters and that you might as well go for the one that gets you the money and the groupies. On the other hand, the reality is that programmers don't make money (managers do) and the last geek known to have had groupies was Socrates.
Open security has to be more secure (Score:5, Insightful)
Security by faith or by fact, which would you prefer?