Security — Open Vs. Closed 101
AlexGr points out an article in ACM Queue, "Open vs. Closed," in which Richard Ford prods at all the unknowns and grey areas in the question: is the open source or the closed source model more secure? While Ford notes that "there is no better way to start an argument among a group of developers than proclaiming Operating System A to be 'more secure' than Operating System B," he goes on to provide a nuanced and intelligent discussion on the subject, which includes guidelines as to where the use of "security through obscurity" may be appropriate.
What does slashdot think? (Score:5, Funny)
Re: (Score:2)
"Um. I have no opinion but, if I did, I support whichever puts more food on more people's tables and pays more people's mortgages."
How's that for the mods?
Re: (Score:2)
Re: (Score:1)
omg pwnt.
Re: (Score:2, Funny)
Re: (Score:1)
endless debate (Score:3, Insightful)
Vista Forum [vistahelpforum.com]
Printable view link (Score:1, Informative)
Cleverly hidden on page 2 of 4 advertisement-riddled pages. You would think ACM could focus on the content with fewer distractions than other sites... guess not.
Re:endless debate (Score:4, Insightful)
A program which costs $200 (typical of the industry and of closed source) should not rely on the consumer to be its (security) beta testers.
A program which costs nothing, or only a nominal amount (typical of FOSS), can ethically rely on its consumer base to be (security) beta testers.
If I paid for it, then it should work (shouldn't break, shouldn't be so easily exploitable). If I didn't pay for it, then I should expect to make a contribution.
Right now the industry is addicted to charging production-quality prices for beta (even alpha) quality software.
Re: (Score:1)
Not since '94, no. Circa '94 was the year in which the government began authorizing enormous amounts of taxpayer money to be funneled into the stock market under the auspices of technological and computing grants.
To sum it up: the problems experienced in security and coding today are a logical result of the artificial inflation of the computing industry for one single purpose: the profit of the politicians who authorized the spending and the banker
Re: (Score:1)
His rule of thumb is useful. (Score:5, Insightful)
Personally, I would argue that such 'heuristically secured' systems are broken by default, and that there are good reasons why generations of computer scientists have insisted that security through obscurity is meaningless. The "security" provided by such heuristics is of value only to marketing and legal departments; it is not, and should not be confused with, the security offered by 'deterministically secured' systems (cryptography is his example). Saying that an application is "secure" when it depends on an attacker not knowing how it works borders on unethical false advertising.
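The parent's distinction maps onto Kerckhoffs's principle: a 'deterministically secured' system stays secure even when everything but the key is public. A minimal sketch using the openssl CLI (the passphrase here is a toy example, and `-pbkdf2` assumes a reasonably recent OpenSSL):

```shell
# AES-256-CBC: the algorithm is public knowledge; only the passphrase is secret.
# (Toy passphrase for illustration only.)
echo 'secret message' | openssl enc -aes-256-cbc -pbkdf2 -pass pass:example -base64 > msg.enc
# An attacker may know exactly which cipher was used; without the
# passphrase the ciphertext is still opaque.
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:example -base64 < msg.enc
```

Contrast this with an obscurity-based scheme, which collapses the moment the algorithm itself leaks.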
Re: (Score:2)
Re: (Score:1)
closed source is just one aspect (Score:5, Insightful)
Re:closed source is just one aspect (Score:4, Interesting)
Microsoft really is a case in point. They did a lot of what you described, got nailed for it by the press, by consumers, and by corporations, and they really did change their ways. Their Secure Development Lifecycle [microsoft.com] has turned out some pretty high quality releases. For instance, IIS 6 has far fewer vulnerabilities than Apache. One certainly couldn't say that for IIS 5.
Re: (Score:2)
You really think that. It's cute. Now let me tell you how it works in the real world: software has such a perceived cost for development (factual or not) that once a company comes out with something that sorta works, no one else is will
Re: (Score:2)
By bad press.
This is an aspect of the Free Market that I don't think some people fully acknowledge. The invisible hand is not just the consumers buying the product, but those who don't buy the product and complain openly about it. Those open complaints do build up, and you
Re: (Score:3, Interesting)
I've never heard anyone quote such a stat. Where does said statistic come from?
Re: (Score:3, Informative)
See: http://rmh.blogs.com/weblog/2005/05/is_microsoft_ii.html [blogs.com]
Those posts are somewhat old, but the trend apparently continues if you go check Secunia, or your favorite vulnerability lists.
Re: (Score:2)
</sarcasm>
Simple (Score:4, Insightful)
Re: (Score:2)
So, OpenVMS, then?
OT: Things you can't ask about VMS. (Score:2, Interesting)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Interesting)
I don't agree.
The central server for a system of airport flight information display screens (FIDS) where I once worked ran an operating system called iRMX. It had pathetic security. The only thing that kept that system secure was the lock on the door to the room.
Re:Simple (Score:4, Informative)
Re:Simple (O/T) (Score:2)
ruby -e "[1383424633,543781664,1718971914].each{|x| print([x].pack('N'))}"
I agree with the output though
Re: (Score:2, Funny)
ruby -e "[1383424633,543781664,1718971914].each{|x| print([x].pack('N'))}"
You must be using some definition of 'simplified' I wasn't previously aware of.
Re: (Score:2)
It's simpler than the original, mr. smarty-pants.
Anyway, ruby -e "puts 'Ruby is fun'" wouldn't be very interesting now, would it?
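For anyone squinting at the one-liner: `pack('N')` writes each integer as four big-endian bytes, and those bytes happen to be ASCII. The first integer decodes like this (shell used for illustration):

```shell
# pack('N') = 32-bit big-endian. 1383424633 in hex is 52756279,
# which is the ASCII sequence 'R' 'u' 'b' 'y'.
printf '%08x\n' 1383424633
printf '\122\165\142\171\n'   # the same four bytes, written as octal escapes
```

The remaining two integers decode to " is " and "fun\n" the same way.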
Re: (Score:2, Funny)
I've written the most secure operating system in the world. No, you can't have it. I forgot where I put it.
Re: (Score:3, Interesting)
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
Oh, so THAT's why OpenBSD [openbsd.org] is relatively secure. If more people started using it, I guess it would suddenly get less secure. Thanks for clearing that up.
Your comment gets at the issue that there are more exploits for more commonly used systems. Still, it may be that more secure systems may be used less because they are more difficult (or expensive or whatever) to use - same is probably true of security's component parts such as passwords, physical security, etc.
You Can't Know Which is More Secure (Score:5, Insightful)
Re: (Score:2)
Well, the point of the article was that you can't even get to that point, since there is no widely accepted measurable definition of 'security', no inclusive metric of security. This means there is no way to define a 'more secure' approach, and therefore all we can do is discuss individual products in comparison with one another.
Re:You Can't Know Which is More Secure (Score:5, Interesting)
And I'm saying that even that is pretty meaningless. Five vulnerabilities were fixed in Mozilla last week, and two in Opera. Which is more secure? Twelve new vulnerabilities have been discovered in Firefox, and one in Opera. Which is more secure? The Apache servers in our sample have been broken into 50 times during the course of our study, compared to 3 break-ins for lighttpd. Which is more secure? A team of five experts found three vulnerabilities in the NT kernel and two in Linux. Which is more secure? Static analysis found 10000 possible vulnerabilities in Konqueror, and Microsoft reports static analysis found 1000 possible vulnerabilities in MSIE. Which is more secure? Which of the mentioned products should you select, based on the given facts, if your goal is to minimize future break-ins?
I honestly don't know the answer to any of the questions I asked. I really think none of the (fictional) data I gave says anything about the relative security of the products it ostensibly pertains to. I _feel_ more secure running OpenBSD than Windows 2000, and, given the absence of reports of OpenBSD machines being broken into on a large scale, that feeling seems justified. But this is entirely based on something that I _don't_ know. I _don't_ know that OpenBSD machines are massively broken into, and thus, I feel safe. However, I also don't know that they are _not_ massively broken into, so my feeling could be entirely misplaced. I certainly don't know that there are no holes in OpenBSD, so even if it hasn't been massively exploited up to now, it could start tomorrow. All I have is the assurance of the developers that they make great efforts to improve security. I believe them, hope they are indeed doing so, and hope they are actually _achieving_ better security that way. But I don't _know_ that.
Re: (Score:2)
You've anchored yourself to a position that can't be assailed but that's not the interesting part. Go read the other three pages.
Re: (Score:2)
Your point's well taken, but your conclusions (here and in your first post above) are hopelessly fatalistic.
You don't give nearly enough credit to the analytical process. Instead you focus on points that might philosophically be true (e.g. "no app is open and clo
The Quantity of the Eyes Isn't Always The Issue (Score:5, Insightful)
But the quantity of eyes isn't always the issue. I could put the Linux kernel source code in front of 1 million six year olds, and there is very little chance any of them would find a single bug.
Obviously, we're not talking about six year old eyes here, but continue the scenario. There are some types of bugs that even very experienced coders wouldn't necessarily spot. Not every kind of security hole is a simple buffer overflow. Some kinds of issues will really only be spotted by a highly trained and specialized set of eyes.
Now, those highly trained eyes may be looking at the open source code, or they may not. All I'm saying is that the quote "Given enough eyeballs, all bugs are shallow" is not particularly accurate.
Re: (Score:1)
Re: (Score:2)
Re:The Quantity of the Eyes Isn't Always The Issue (Score:4, Insightful)
I think, however, the "open source is more secure" argument tends to follow the idea that behind the scenes, the code under closed source applications tends to be generally faulty, or, at least, Windows code in particular. There could very well be many exploits that, given the code for MS Vista, amateur programmers could easily pick out, simply because the code base is so vast and the amount of people who have full access to it so few.
It's just like if I write my own little closed-source app: at first it may appear flawless to me, because I am the only one seeing the code. But I might code in an inherently buggy way that would be easily picked up by another set of eyes. Then, as little problems flood in from end users, instead of fixing my coding methodology, I make little fixes that are basically workarounds, rather than solving a bigger, more fundamental problem with the way the program is structured, which would require more time. In effect, the "patches" become more and more about fixing faults than providing the functionality intended in the first place. Whereas with open source, someone might have already forked my project and coded the idea using different data structures or in a largely more efficient way.
It's not to say that I couldn't be flawless, but the odds decrease when nobody can see the results. Using closed-source software is like running a car without access to the engine. You see things going wrong, but why and how they are happening, and whether they are huge problems or only small ones, you can't determine without diving into the car's components directly. Closed source doesn't allow this. It's not just the fact that there are multiple eyes, then; it's the fact that those eyes are outside the original coder, sometimes even belonging to the people having the problems themselves. It takes the "how do we recreate the bug?" discussion out, and oftentimes a sufficiently skilled end user can not only support him/herself, but improve the codebase.
Honestly, it seems like a better approach. The hard thing is that you can't really know which is more secure. But in practice, let's be honest: Linux and OSS get fixed more quickly, if they are widely used projects in the OSS community, than MS products with their "Patch Tuesday" scheduled patch releases and recommended strange workarounds for existing security breaches.
Re: (Score:2)
Having worked for many closed source companies I believe this to be generally true (scarily, with no exceptions I've seen.. although I believe they must exist). Deadlines are king and they really don't care whether the code is crap and will fall apart in a couple of years time... they want to get something out of the door *now*.
I
The quality of the unknown eyes is what matters (Score:3, Insightful)
Re: (Score:1)
Re: (Score:1)
Yes. There are experienced eyes on it, though, and that's security researchers. One of the most common types of papers in systems security research is automated bug finders, and one of the standard metrics of bug-finding is "how many bugs can you find in the Linux kernel?"
Of course, in many cases, proprietary de
Well... (Score:5, Funny)
Unless of course Operating System A is Open BSD
Re: (Score:1, Funny)
does a password = security through obscurity? (Score:2)
I would have thought that the password authentication method was the part that needed to be secured.
Just look at how many times an auth method has been exploited to bypass passwords entirely.
Re: (Score:2)
I would have thought that the password authentication method was the part that needed to be secured.
Let's see: for today, a given /24 has on average 57 ongoing SSH login/password dictionary attacks, making that the 4th most common type of network attack. The obscurity part of this defense is essential, but I'm certainly going to restrict my boxes to allowing SSH attempts from a couple of specific IPs as well. Security through obscurity is a time-tested and vital part of security, but at the same time it
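The "restrict SSH attempts to a couple of specific IPs" part can be sketched with iptables; the addresses below are placeholders from the documentation range, not anyone's real hosts:

```shell
# Accept SSH only from two trusted hosts; drop everything else on port 22.
# (192.0.2.x are example addresses -- substitute your own.)
iptables -A INPUT -p tcp --dport 22 -s 192.0.2.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -s 192.0.2.11 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```

Rule order matters here: the ACCEPTs must precede the catch-all DROP, since iptables matches rules first to last.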
The Wrong Question (Score:5, Insightful)
This debate is all about the incorrect question. The reason is that code can be secure or not secure, regardless of its "open" or "closed" status.
Until the industry realizes that "secure is secure" and stops worrying about the open or proprietary nature of things, this debate will probably prevent things from being as secure as they could be by diverting resources to an analysis rather than any solutions.
Put another way: Is a homemade door more or less secure than a professionally installed door? My answer is "it depends on the skills of those involved and the quality of materials".
The same applies to software.
Re: (Score:2)
Although in this analogy, the homemade door would be built and installed by the homeowner him/herself who also happens to be a door professional doing the work on his/her own time.
In this case, I would argue the homeowner has a higher stake in doing good, secure work as their "personal investment" in a quality job is higher.
Re: (Score:1)
The Wrong Analogy (Score:2)
The real issue is whether the house to which that door allows access is more secure if you publish its plans or not.
That is hard to answer, because you don't know if the homeowner is relying on the secrecy for security, or just wants to sell house plans. If the homeowner thinks his house is safer because no one can open his door without the plan
Re: (Score:2)
A better one would be, is your house more secure if you publish the blueprints and photos of it online, and allow any architect or security specialist in the world to view them, suggest changes, and if you like the suggestions, they will come to your house and carry out the work for you (often for free).
On the other hand, every thief in the world can also study those blueprints and photos...
Are oranges more wholehearted than Hondas? (Score:2)
Open source and closed source are methods, security is a result. Security is an attribute of a product, not of a development technique. A closed-source company can assign a hundred reviewers and get more trained eyeballs on their code than most open source projects ever see.
If you want to measure results, there's so much scatter from other causes that any effects of open vs. closed are swamped in the noise. Which would you pick as an example of
Re: (Score:2)
M$ had a terrible reputation for their code patches because they were as bad at their patches as their original code. In order to get past that problem they had to contract out the auditing of the patches to
Security by Obscurity (Score:4, Interesting)
I have a pre-canned explanation of open vs closed (Score:5, Insightful)
Open security: the Titanic's hull is made of brittle metal and thus isn't safe - Independent safety inspector
*applause* (Score:2)
open how? (Score:1)
I don't think that works.
Algorithm? Maybe.
It comes down to this, from bad guys
As a symptom of society in general becoming more and more suspicious of each other, what is getting adopted is the worst of both the closed and the open model: the one that persecutes security r
My Take (Score:5, Interesting)
Of course, open source also makes it easier for the black hats to find the vulnerabilities. So there's an arms race here. If the black hats find the vulnerability first, they can exploit it before it gets patched or worked around. If the white hats find it first, it can be fixed or worked around before it is exploited. The same arms race exists for closed source and open source, but, in the case of closed source software, the developers are (supposedly) the only ones with the source code, which gives them a slight edge in the arms race.
So it seems that both open source and closed source have advantages and disadvantages when it comes to security. Furthermore, I think that both arguments are theoretical, and the advantages that both models have are not always exploited. Having the source available does not help if no white hats are actually auditing it. And this is why open source wins, in my book. With open source, if you're concerned about vulnerabilities in the software and don't trust the rest of the world to have done proper audits and notified you about the results, you can do your own audit. If the developers of the software don't fix the vulnerabilities to your satisfaction, you can do so yourself. With closed source, you are at the mercy of the vendor. If they don't do proper audits, you're out of luck. If they don't fix vulnerabilities, you're out of luck.
Proprietary software vendors do not always have your best interests in mind. It's not unusual for vendors to keep silent about vulnerabilities found and/or fixed in their software, and some vendors have even threatened or sued people who have disclosed vulnerabilities in the vendor's software. The reputation is more important than the _actual_ security of the product, because the actual security is unknowable. With open source, such tactics don't work. The source is out there; anyone can find the vulnerabilities and assess the security for themselves. If things are fixed, anyone can make a diff between the two versions and see what was fixed. They can't keep the information from you. Your security benefits from that.
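The "anyone can make a diff between the two versions" point is trivially true in practice. A toy demonstration, with made-up file names and contents:

```shell
# Two hypothetical releases of the same source file
printf 'gets(buf);\n' > auth-1.0.c
printf 'fgets(buf, sizeof buf, stdin);\n' > auth-1.1.c
# The unified diff reveals exactly what the "silent" fix changed
diff -u auth-1.0.c auth-1.1.c || true   # diff exits 1 when files differ
```

Even if the release notes say nothing, the diff shows an unchecked `gets` being replaced with a bounded `fgets`, which tells you precisely what class of bug was there.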
You have to take out the weakest link (Score:2)
security through obscurity just another layer (Score:5, Insightful)
And on servers I run like that, I have yet to have a break-in, but I do get up to thousands of connection attempts from SSH worms, from the same servers, every day (well, I would, if I stopped dropping them in iptables, but never mind that). So it's possible that they could hit a user with a bad password, or one they got from another compromised machine.
On other boxes, like my home box, I put SSH on a high-numbered port. In a couple of years I've had zero attempts hit that port. It would be quite stupid to rely only on this trick, ignoring good discipline in other areas. But as a supplementary layer, it's quite useful. If nothing else, it saves bandwidth.
It's not sufficient, but it's not inherently bad.
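For reference, the high-port trick is a one-line change on the server (the port number below is an arbitrary example, not a recommendation):

```
# /etc/ssh/sshd_config -- move sshd off the default port 22
Port 52241
```

Clients then connect with `ssh -p 52241 host`, or set `Port` in their own `~/.ssh/config` for that host.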
Re: (Score:3, Informative)
Without security through obscurity, you can do things like keep OpenSSH patched, use very good passwords, disallow root logins, restrict logins to certain users
Not to mention disabling password logins altogether and only allowing logins using a key pair (known as public key authentication in SSH terminology). This makes a password-guessing attack impossible; an attacker must either guess (or obtain in some other way) your private key, or find a security vulnerability in the software itself. This approach is somewhat more cumbersome to administer, but very secure.
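The setup described above boils down to a few sshd_config directives plus a key pair on the client (directive names are standard OpenSSH, but double-check them against your version's man page):

```
# /etc/ssh/sshd_config -- keys only, no passwords
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
PermitRootLogin no
```

On the client side, `ssh-keygen` generates the pair (with a passphrase protecting the private key), and `ssh-copy-id` installs the public key into the server's `authorized_keys` before you turn password logins off; otherwise you lock yourself out.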
Re: (Score:2)
Security can really be a PITA sometimes.
Re: (Score:2)
The key is protected by a good passphrase
I already do, since I started using *nix in 1998.
Re: (Score:2)
The only difference anywhere is how abstract your obscurity is.
Re: (Score:2)
Re: (Score:2)
Port 443 only gets the occasional "Bad protocol version identification '\026\003'", which logically is an HTTPS request.
443 has the added bonus that traffic through it is expected to be encrypted, so perhaps SSH won't raise as much of a red flag as it would on port 80.
sploit!=patch (Score:2)
My light fixtures are safe, really, trust me. (Score:2)
Re: (Score:2, Informative)
Re: (Score:2)
Re: (Score:2)
Why? Those certifications are to make sure the product is safe - it won't burn your house down, or it will keep your child safe in an accident. They don't test anything else, such as whether the light fixture is attractive or shows dirt, or if the car seat is easy to use or comfortable. If my software needs to be safe (I could get hurt if it malfunctions)
3rd parties like the Chinese government (Score:2)
A couple years later, after the trial was over, Microsoft gives in to Chinese government demands for the source code.
You really think that this kind of 3rd-party review is good? Hint: it is highly unlikely that the Chinese government would report any interesting discoveries back to Microsoft.
Re: (Score:3, Informative)
1) Even if the source code is available for people to check, if nobody else bothers checking but the author there's no difference right?
2) It's the quality of the checking, not the quantity. A billion stupid monkeys won't know the difference between good code and bad code.
What you should do is see who made the stuff and what their track record is like.
I can confidently say Firefox will continue to have regular securit
What is UNSECURE (Score:1)
This assumes the code has security-related bugs that are exploitable if found by the bad guys. It also assumes that the development team, despite their best efforts, doesn't find all the bugs that the bad guys could find if they had access to the source code.
Wi
Software Engineering is a young discipline? (Score:1)
Open security has to be more secure (Score:5, Insightful)
Security by faith or by fact, which would you prefer?
Who writes it? For whom? (Score:1)
I don't care how many pictures of keys, keyholes, locks, policemen, security guards, castles, gates or agents in glasses the website hawking the product has, how high it ranks on cnet, how many recommendation
Security = obscurity (Score:2)
Just as all humans are ultimately cellular organisms, or all substances are ultimately subatomic particles. Security is the art of keeping something hidden by requiring something else that is hidden to reveal it, and repeated applications of this principle in various distinguishable implementations.
The lock on a door is only as secure as the secret of where its key is. Discover this secret, and act upon it, and the secret of the door is revealed.
Likewise, my encrypted email is o
Re: (Score:3, Funny)
closed source graphs (Score:1)
password are another kind of obscurity (Score:1)
The password is the data. The data can and should remain closed. When we talk about security through obscurity, we refer to the procedure: the executable code, the algorithm, what the hell the software does and how it does it.
I think that being dependent on the software vendor outweighs any advantage (if there is any) that closed source may have.