Security of Open vs. Closed Source Software
morhoj writes "Cambridge University researcher Ross Anderson just released a paper concluding that open source and closed source software are equally secure. Can't find a copy of the paper online yet, but I thought this would make for an interesting morning conversation. You may not agree with him, but anyone who's on the BugTraq list can tell you that open source software isn't as bug-free as we would all like to think." I found Anderson's paper, so read it for yourself. There are some other interesting papers being presented at the conference as well.
MTBF My Ass (Score:2, Interesting)
I hope that was just CNet editorialization, and isn't indicative of the rest of this paper.
Ad Hoc Quackery (Score:1, Interesting)
For the life of me, I can't imagine how closed and open source programs could be equally secure, simply because there's no quantitative measurement that could prove it. Even if there were, an exact tie would be wildly unlikely. Notwithstanding, I believe that generalizing this issue at all is just mental masturbation -- security depends on the development context. The mere fact that something was developed closed- or open-source doesn't make it any more or less secure by definition, and it doesn't make it equally secure either.
Re:In Other News (Score:2, Interesting)
I mostly agree with him; however, I like open source software for more reasons than just the fact that it's "more secure". Often OSS isn't in fact more secure or reliable. Look at Mozilla. It's a great project, but it's not as nice as IE by a long shot. Anyone using 1.1a on Linux will know that [e.g. me! while at the same time 0.9.9 works fine... ???]
tom
Another viewpoint (Score:3, Interesting)
This is certainly true; however, a large amount of security appears to come from the community / vendor around the code too. Yes, I'm generalising, but open source programmers treat security problems as security issues, rather than as a PR problem. Even though the Apache team (rightly, in my opinion) criticized ISS for the manner of their reporting, they also released a full-disclosure advisory and a suitable, working patch within 36 hours of the issue going public.
I don't see many vendors responding that quickly -- although, to be fair, the Apache team did already know about the vulnerability.
It's all about the "Window of Exposure", really. Go to Bruce Schneier's Crypto-Gram page [counterpane.com] for some excellent arguments about peer review and the whole window-of-exposure idea.
Maybe... (Score:4, Interesting)
Pros:
Closed-source: No one can see your code, which keeps obvious exploits (buffer overflows, race conditions, etc.) from being quickly jumped on (see the sketch after this list). Less chance that an external developer will accidentally or intentionally misuse some of your libraries or otherwise write in exploitable code.
Open-source: Everyone can see your code, giving you a multitude of additional glass-box testers who help patch things quickly and adapt around problems a project leader may or may not see. Quick turnaround on patching of code.
Cons:
Closed-source: Limited field of testers; slower turnaround on bug/exploit fixes even when they are reported (bugs can go unreported for months, and even when reported they may be ignored or shelved indefinitely).
Open-source: Since everyone can see your code, some black-hat punk is invariably going to find an exploit and blast your distributions for it. Also, QA is nigh impossible to enforce in a timely fashion when hundreds of developers submit patches, sometimes anonymously.
Opinion: Both may seem even; however, the timeliness of a fix can make all the difference in security, and waiting days vs. weeks or months for a patch can make or break an information-driven business. Also, even if an open-source project ships with an exploit ingrained, there will still be a quick turnaround on patching it, as there is for any bug. IANA genius, but at least from a business standpoint, it would seem that quick and usually-reliable beats slow but usually-guaranteed.
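To make the buffer-overflow point above concrete, here's a minimal, hypothetical C sketch (the function names and the buffer size are invented for illustration). The hole is obvious to anyone reading the source, but a black-box tester only finds it by stumbling onto an over-long input:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical request handler, for illustration only.
     * A source reviewer spots the overflow in seconds: a fixed-size
     * buffer filled by strcpy() from attacker-controlled input.
     * A black-box tester has to send more than 64 bytes and notice
     * the crash or corruption. */
    void handle_request(const char *user_input)
    {
        char buf[64];
        strcpy(buf, user_input);      /* BUG: no bounds check */
        printf("handling: %s\n", buf);
    }

    /* The fix is just as visible in source: bound the copy. */
    void handle_request_fixed(const char *user_input)
    {
        char buf[64];
        strncpy(buf, user_input, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';  /* strncpy may not terminate */
        printf("handling: %s\n", buf);
    }

    int main(void)
    {
        handle_request_fixed("hello");
        return 0;
    }

That asymmetry is the whole trade-off above: keeping the source closed hides the strcpy() from casual attackers, while publishing it means a reviewer -- friendly or not -- spots it immediately.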
That should read (Score:3, Interesting)
All software has security vulnerabilities. Software with vulnerabilities is secure as long as nobody knows about the vulnerabilities or nobody exploits them. Security is a process, not a state. To run a secure system, you have to know as much about the vulnerabilities as the hackers do. You have to patch your systems. You have to manage your risk.
All it takes is one hole in some piece of software that you are running. If somebody knows about it and hacks you, you are insecure. There are channels for discussing security vulnerabilities for both open and closed source software. Holes in both open and closed source software get patched. In that respect they are equally secure. There are more holes in both. It doesn't matter how many holes there are; it only takes one. In that respect they are equally insecure.
Re:Another viewpoint (Score:2, Interesting)
This contradicts the rest of your post. You mention the window of exposure. While you might argue that the window of exposure starts with public disclosure, it really does not.
A flaw that is found in a piece of software has often been there for years. The window of exposure actually starts when the flaw is introduced, since from that point forward there is the possibility of a person or group having knowledge of the flaw and not releasing it.
It's entirely possible that there is a blackhat group or groups, which we will probably never discover, that is harboring hundreds or thousands of unreleased vulnerabilities. Such a group would have immense power: the ability to disrupt the information systems of nearly every company on the planet, on a whim, or when hired to do so.
Open source, with its ease of finding flaws, reduces this "true window" of exposure.
It's easy to fall into the trap of believing that all security threats are script kiddies running tools against well-known vulnerabilities, since the majority of the attacks reported are of that nature, but that doesn't mean the threat of a true blackhat group doesn't exist, or couldn't be devastating.
Hmmm (Score:2, Interesting)
And yet I find that Solaris admins are generally very up to speed despite their closed source proprietary software.
I conclude that this is not the determining factor.
I suggest that Windows, being the dominant platform, ubiquitous and cheap (sort of), means that many people have taken the opportunity to admin a box without training, mentoring, or other good habit-forming stuff.
They now call themselves "systems administrators" and yet are, as you spot, often clueless as to what they should be doing.
Paradox: Linux's cheapness may prove just as much a burden, generating under-trained admins.
security orthogonal to development model (Score:3, Interesting)
I suspect that you can generalize this to security as well. OpenBSD focuses on security, and it shows. Microsoft doesn't, and it shows. This is not a matter of proprietary vs. free.
Re:bugtraq reference (Score:3, Interesting)
Doesn't that really mean that the bugs will just be discovered more slowly?
It is harder to find bugs by trial and error than by reviewing the source.
Can you explain how you arrived at that conclusion?
MS patches most bugs in their products before there is an exploit
How can you know that unless you have access to internal Microsoft information? Almost without exception, the updates that I have seen from Microsoft are reactions to problems found by others. The patches I assume you are talking about are the ones MS fixes quietly, the ones we never hear of. How do you know they exist?
I accept your point, though, that the "2 ways" I talked about of discovering bugs in OSS apply to all people, not just white-hats.
Re:Maybe... (Score:3, Interesting)
For example, more valuable data is stored on MS machines than on Linux boxes right now. Of course they're going to sharpen their skills hacking the MS box more.
In my opinion it also has something to do with the MS-hate common in the hacker communities.
(I know using MS as an example is not a good one since their software...well, let's not go that way. But my point is, it depends a lot on what the black hats want to do.)
Answer- Mandrake is more secure than Windows... (Score:3, Interesting)
Windows does none of this. Everything but the kitchen sink is installed and running. It's hard to tell what's running, especially if you're not familiar with Windows' cryptic names for all the services. There are no good explanations of what services really are or what they do. Everything is buried 3-4 levels deep, and poorly organized. Unless you're already familiar with it, it's much harder to figure out than Mandrake.
So yes, Mandrake is more secure. Part of this is the installation itself -- it goes through the appropriate steps to build in some security. The other part is general usability. Mandrake wins here, too.
Don't get me wrong: not all Linux distros are as good in this respect. IMO, Red Hat is about as bad as Windows, though it's getting better. Others might be a complete disaster.
OSS vs. CSS Security Misses the Point (Score:2, Interesting)
Neither openness nor secrecy is any kind of guarantee of security -- holes can go unnoticed in OSS for years, and holes can be found in CSS without having the source at all.
What really matters is *how easily and quickly the holes are fixed*. It seems that OSS has CSS beat on this issue hands down.
-- jfh@cise.ufl.edu
Re:Answer- Mandrake is more secure than Windows... (Score:4, Interesting)
Windows, OTOH, starts with the assumption that a complete idiot will be installing it. If networking is crippled by default, it will probably remain crippled until the user returns the computer because it "won't do X". And it makes the almost-reasonable assumption that with an idiot setting it up and using it, the box won't contain anything worth a good cracker's time. And these assumptions are almost OK; the problem is that (1) when the box is used for something serious, it's hard for even a professional administrator to keep up with all the changes needed to make a system secure, (2) they've made the home system default so wide open that serious crackers can take over hundreds of them at a time and use them in assaults on important targets, and (3) MS is so sloppy that everything is a lot more exposed than they intended...
Re:Windows operating systems re-configure themselves (Score:3, Interesting)
"Do tell us not just that he's wrong, but why."
The mathematics is absolutely stupid! The author assumes that bugs are random events. But they aren't. Bugs are heavily influenced by sociological factors that affect the outcome by more than a factor of ten. A lot of the bugs you see in Microsoft Internet Explorer, for example, come from the sloppy practices of programmers who are not particularly interested in what they are doing and who are pushed to a tight schedule, so when they see that something needs to be re-written, they can't re-write it, because they don't have time.
Remember when Microsoft released Windows 2000? Someone inside Microsoft said that there were still 63,000 bugs (or known shortcomings) in their database. There was no time to finish the job, and Windows 2000 and Windows XP are still quirky. I just reported a bug in Windows XP, again, which I first saw in Windows 98. All of those operating systems re-configure themselves without telling the user. The company just doesn't care enough to do a good job.
Bugs in software are caused by social factors that we cannot measure. Some programmers write far tighter code than others. Compare the security bugs in OpenBSD [openbsd.org] and in Windows. OpenBSD is far more secure because the people who control it say they want it that way. Microsoft just announced a greater interest in security, but will the company actually devote resources to fixing the code? That's a sociological issue for a company that has always put money first.
It is impossible to test reliability into software, or anything! Reliability is due to design decisions, or the lack of them.
The author says,
"Reliability growth models seek to make this more precise. Suppose that the probability that the i-th bug remains undetected after t random tests is e-Eit. The models cited above show that, after a long period of testing and bug removal, the net effect of the remaining bugs will under certain assumptions converge to a polynomial rather than exponential distribution. In particular, the probability E of a security failure at time t, at which time n bugs have been removed, is [Equation that Slashdot cannot display: E = 1X i=n+1 e-Eit ~~ K=t] over a wide range of values of t. I give in the appendix a brief proof of why this is the case, following the analysis by Brady, Anderson and Ball [7]. For present purposes, note that this explains the slow reliability growth observed in practice."
He is just pulling your leg, and probably his own. Note the word "random" in the second sentence. In mathematics that is a technical term, a precisely defined term, and it doesn't apply here.
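To be fair, the algebra itself holds once you grant the random-testing assumption; my quarrel is with the assumption, not the sums. Here is a sketch of the standard argument, supposing purely for illustration that the per-bug detection rates E_i are spread roughly uniformly over an interval (0, \Lambda] and that there are N latent bugs (the n already-removed bugs had large E_i and contribute negligibly):

    % illustrative rate distribution; not necessarily Anderson's exact assumption
    \[
      E \;=\; \sum_{i=n+1}^{\infty} e^{-E_i t}
        \;\approx\; N \int_0^{\Lambda} e^{-Et} \,\frac{dE}{\Lambda}
        \;=\; \frac{N}{\Lambda t}\bigl(1 - e^{-\Lambda t}\bigr)
        \;\approx\; \frac{K}{t},
      \qquad K = \frac{N}{\Lambda},
    \]

for t much larger than 1/\Lambda. A single shared detection rate would give exponential decay; it is the spread of rates that produces the polynomial tail. But notice what the model buys this with: bug discovery is treated as independent random draws, with rates unconnected to who wrote the code and under what schedule -- which assumes away exactly the sociological factors I'm talking about.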
The author is just grabbing attention, and it worked. Now he has something to put on his resume, an article in CNET (by someone who doesn't understand the mathematics, but assumes that it must be okay, because it looks so impressive).
Show me the equation that has a term that explains the difference between OpenBSD [openbsd.org] (Five years without a remote hole in the default install!) and Windows XP [jscript.dk] (zero seconds).
Give the source code of Internet Exploder to the OpenBSD coders, and we will see how random the bugs are. They could do what they already did with BSD: examine the code for poor practices, and re-design the parts that need it. Then all the "randomness" would stop happening, as if (in the view of some) by magic.