The Myth of Open Source Security Revisited v2.0
Dare Obasanjo contributed this followup to an article entitled The Myth of Open Source Security Revisited that appeared on the website kuro5hin. He writes: "The original article tackled the common misconception amongst users of Open Source Software (OSS) that OSS is a panacea when it comes to creating secure software. The article presented anecdotal evidence, taken from an article written by John Viega, the original author of GNU Mailman, to illustrate its point. This article follows up the anecdotal evidence presented in the original paper by providing an analysis of similar software applications, their development methodologies and the frequency of the discovery of security vulnerabilities." Read on below for his detailed analysis, especially relevant given the current security initiatives in both the open-source and closed-source software worlds.
The Myth of Open Source Security Revisited v2.0
The purpose of this article is to expose the fallacy of the belief in the "inherent security" of Open Source software, and instead to point to a truer means of ensuring that the security of a piece of software is high.

Apples, Oranges, Penguins and Daemons
When performing experiments to confirm a hypothesis on the effect of a particular variable on an event or observable occurrence, it is common practice to utilize control groups. In an attempt to establish cause and effect in such experiments, one tries to hold all variables that may affect the outcome constant except for the variable the experiment is interested in. Comparisons of the security of software created by Open Source processes and software produced in a proprietary manner have typically involved several variables besides development methodology.
A number of articles have been written that compare the security of Open Source development to proprietary development by comparing security vulnerabilities in Microsoft products to those in Open Source products. Noted Open Source pundit Eric Raymond wrote an article on NewsForge in which he compares Microsoft Windows and IIS to Linux, BSD and Apache. In the article, Eric Raymond states that Open Source development implies that "security holes will be infrequent, the compromises they cause will be relatively minor, and fixes will be rapidly developed and deployed." However, upon investigation it is disputable that Linux distributions have less frequent or more minor security vulnerabilities than recent versions of Windows. In fact, the belief in the inherent security of Open Source software over proprietary software seems to be the product of a single comparison: Apache versus Microsoft IIS.
There are a number of variables involved when one compares the security of software such as the Microsoft Windows operating systems to Open Source UNIX-like operating systems, including the disparity in their market share, the requirements and dispensations of their user base, and the differences in system design. To better compare the impact of source code licensing on the security of the software, it is wise to reduce the number of variables that would skew the conclusion. To this effect it is best to compare software with similar system designs and user bases rather than software applications that are significantly distinct. The following section analyzes the frequency of the discovery of security vulnerabilities in UNIX-like operating systems including HP-UX, FreeBSD, RedHat Linux, OpenBSD, Solaris, Mandrake Linux, AIX and Debian GNU/Linux.
Security Vulnerability Face-Off
Below is a listing of UNIX and UNIX-like operating systems with the number of security vulnerabilities that were discovered in them in 2001 according to the Security Focus Vulnerability Archive.
- AIX
- 10 vulnerabilities[6 remote, 3 local, 1 both]
- Debian GNU/Linux
- 13 vulnerabilities[1 remote, 12 local] + 1 Linux kernel vulnerability[1 local]
- FreeBSD
- 24 vulnerabilities[12 remote, 9 local, 3 both]
- HP-UX
- 25 vulnerabilities[12 remote, 12 local, 1 both]
- Mandrake Linux
- 17 vulnerabilities[5 remote, 12 local] + 12 Linux kernel vulnerabilities[5 remote, 7 local]
- OpenBSD
- 13 vulnerabilities[7 remote, 5 local, 1 both]
- Red Hat Linux
- 28 vulnerabilities[5 remote, 22 local, 1 unknown] + 12 Linux kernel vulnerabilities[6 remote, 6 local]
- Solaris
- 38 vulnerabilities[14 remote, 22 local, 2 both]
Factors that have been known to influence the security and quality of a software application are practices such as code auditing (peer review), security-minded architecture design, strict software development practices that restrict certain dangerous programming constructs (e.g. using the str* or scanf* family of functions in C) and validation & verification of the design and implementation of the software. Also important is reducing the focus on deadlines and shipping only when the system is in a satisfactory state.
Both the Debian and OpenBSD projects exhibit many of the aforementioned characteristics, which helps explain why they are the Open Source UNIX-like operating systems with the best security records. Debian's track record is particularly impressive when one realizes that the Debian Potato consists of over 55 million lines of code (compared to Red Hat's 30 million lines of code).
The Road To Secure Software
Exploitable security vulnerabilities in a software application are typically evidence of bugs in the design or implementation of the application. Thus the process of writing secure software is an extension of the process behind writing robust, high quality software. Over the years a number of methodologies have been developed to tackle the problem of producing high quality software in a repeatable manner within time and budgetary constraints. The most successful methodologies have typically involved using the following software quality assurance, validation and verification techniques: formal methods, code audits, design reviews, extensive testing and codified best practices.
- Formal Methods: One can use formal proofs based on mathematical methods and rigor to verify the correctness of software algorithms. Tools for specifying software using formal techniques exist, such as VDM and Z. Z (pronounced 'zed') is a formal specification notation based on set theory and first order predicate logic. VDM stands for "The Vienna Development Method", which consists of a specification language called VDM-SL; rules for data and operation refinement which allow one to establish links between abstract requirements specifications and detailed design specifications down to the level of code; and a proof theory in which rigorous arguments can be conducted about the properties of specified systems and the correctness of design decisions. The previous descriptions were taken from the Z FAQ and the VDM FAQ respectively. A comparison of both specification languages is available in the paper Understanding the differences between VDM and Z by I.J. Hayes et al.
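As a small illustration of the flavor of such specifications (a hypothetical example written for this article, not taken from either FAQ), a Z-style schema for a bounded message buffer can state the invariant and an Add operation's precondition explicitly, so that a proof obligation exists before any code is written:

```latex
% Illustrative Z-style schemas (zed/fuzz LaTeX notation assumed).
% State: a sequence of messages that may never exceed 'max'.
\begin{schema}{Buffer}
  msgs : \seq MESSAGE \\
  max  : \nat
\where
  \# msgs \leq max
\end{schema}

% Operation: Add may only fire when the buffer is not full,
% and appends exactly one message while leaving 'max' unchanged.
\begin{schema}{Add}
  \Delta Buffer \\
  m? : MESSAGE
\where
  \# msgs < max \\
  msgs' = msgs \cat \langle m? \rangle \\
  max' = max
\end{schema}
```

The point of writing the precondition down formally is that "what happens when the buffer is full" becomes a question the specifier must answer, rather than a case discovered by an attacker.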
- Code Audits: Reviews of source code by developers other than the author of the code are a good way to catch errors that may have been overlooked by the original developer. Source code audits can vary from informal reviews with little structure to formal code inspections or walkthroughs. Informal reviews typically involve the developer sending reviewers the source code or descriptions of the software for feedback on any bugs or design issues. A walkthrough involves the detailed examination of the source code of the software in question by one or more reviewers. An inspection is a formal process in which a detailed examination of the source code is directed by reviewers who act in certain roles: a code inspection is directed by a "moderator", the source code is read by a "reader", and issues are documented by a "scribe".
- Testing: The purpose of testing is to find failures. Unfortunately, no known software testing method can discover all possible failures that may occur in a faulty application, and metrics to establish such details have not been forthcoming. Thus a correlation between the quality of a software application and the amount of testing it has endured is practically non-existent.
There are various categories of tests, including unit, component, system, integration, regression, black-box, and white-box tests. There is some overlap in the aforementioned testing categories.
Unit testing involves testing small pieces of functionality of the application, such as methods, functions or subroutines. In unit testing it is usual for other components that the software unit interacts with to be replaced with stubs or dummy methods. Component tests are similar to unit tests, with the exception that dummy and stub methods are replaced with the actual working versions. Integration testing involves testing related components that communicate with each other, while system tests involve testing the entire system after it has been built. System testing is necessary even if extensive unit or component testing has occurred, because it is possible for separate subroutines to work individually but fail when invoked sequentially due to side effects or some error in programmer logic. Regression testing is the process of ensuring that modifications to a software module, component or system have not introduced errors into the software. A lack of sufficient regression testing is one of the reasons why certain software patches break components that worked prior to installation of the patch.
Black-box testing, also called functional or specification testing, tests the behavior of the component or system without requiring knowledge of the internal structure of the software. Black-box testing is typically used to verify that software meets its functional requirements. White-box testing, also called structural or clear-box testing, involves tests that utilize knowledge of the internal structure of the software. White-box testing is useful in ensuring that certain statements in the program are exercised and errors discovered. Code coverage tools aid in discovering what percentage of a system is being exercised by the tests.
More information on testing can be found in the comp.software.testing FAQ.
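The stub technique described above can be sketched in a few lines of C. The names here (greet, stub_lookup) are invented for illustration; the idea is that the unit under test receives its dependency as a parameter, so a test can substitute a stub for a lookup that would normally hit a database or network:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* The dependency the unit under test relies on: resolves a user id
 * to a display name. In production this might query a database. */
typedef const char *(*lookup_fn)(int user_id);

/* Unit under test: formats a greeting for a user, or fails if the
 * user cannot be resolved. Returns the formatted length, or -1. */
static int greet(char *out, size_t outsize, int user_id, lookup_fn lookup)
{
    const char *name = lookup(user_id);
    if (name == NULL)
        return -1;
    return snprintf(out, outsize, "Hello, %s!", name);
}

/* Stub standing in for the real lookup during unit testing:
 * deterministic, fast, and requires no external services. */
static const char *stub_lookup(int user_id)
{
    return user_id == 42 ? "Alice" : NULL;
}
```

Because the stub is deterministic, both the success path and the failure path of greet() can be exercised repeatably, which is exactly what unit testing aims for.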
- Design Reviews: The architecture of a software application can be reviewed in a formal process called a design review. In design reviews the developers, domain experts and users verify that the design of the system meets the requirements, and that it contains no significant flaws of omission or commission, before implementation occurs.
- Codified Best Practices: Some programming languages have libraries or language features that are prone to abuse and are thus prohibited in certain disciplined software projects. Functions like strcpy, gets, and scanf in C are examples of library functions that are poorly designed and allow malicious individuals to use buffer overflows or format string attacks to exploit the security vulnerabilities exposed by using these functions. A number of platforms explicitly disallow gets, especially since safer alternatives exist. Programming guidelines, such as those written by Peter Galvin in a Unix Insider article on designing secure software, are used by development teams to reduce the likelihood of security vulnerabilities in software applications.
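To illustrate why these functions are prohibited, the following C sketch shows a bounded copy in the spirit of Miller and de Raadt's strlcpy (referenced below); the helper here is a simplified stand-in written for this article, not the canonical implementation. Unlike strcpy, it can never write past the destination buffer, and its return value lets the caller detect truncation:

```c
#include <assert.h>
#include <string.h>

/* strcpy(dst, src) copies until it finds a NUL in src, ignoring the
 * size of dst entirely, so attacker-controlled input can overflow
 * the destination buffer. A bounded copy takes the destination size,
 * never writes beyond it, and always NUL-terminates. */
static size_t bounded_copy(char *dst, const char *src, size_t dstsize)
{
    size_t srclen = strlen(src);
    if (dstsize > 0) {
        /* Copy at most dstsize-1 bytes, leaving room for the NUL. */
        size_t n = srclen < dstsize - 1 ? srclen : dstsize - 1;
        memcpy(dst, src, n);
        dst[n] = '\0';
    }
    /* Like strlcpy: return the length of src, so a return value
     * >= dstsize tells the caller the copy was truncated. */
    return srclen;
}
```

The design choice worth noting is that the function makes truncation detectable rather than silent, so callers can treat oversized input as an error instead of corrupting adjacent memory.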
Issues Preventing Development of Secure Open Source Software
One of the assumptions that is typically made about Open Source software is that the availability of source code translates to "peer review" of the software application. However, the anecdotal experience of a number of Open Source developers including John Viega belies this assumption.
The term "peer review" implies an extensive review of the source code of an application by competent parties. Many Open Source projects do not get peer reviewed, for a number of reasons:
- the complexity of the code, combined with a lack of documentation, makes it difficult for casual users to understand the code well enough to give a proper review;
- developers making improvements to the application typically focus only on the parts of the application that affect the feature being added, instead of the whole system;
- ignorance of security concerns among developers;
- complacency born of the belief that since the source is available, it is being reviewed by others.
Benefits of Open Source to Security-Conscious Users
Despite the fact that source licensing and source code availability are not indicators of the security of a software application, there is still a significant benefit of Open Source to some users concerned about security. Open Source allows experts to audit their software options before making a choice and also in some cases to make improvements without waiting for fixes from the vendor or source code maintainer.
One should note that there are constraints on the feasibility of users auditing the software based on the complexity and size of the code base. For instance, it is unlikely that a user who wants to make a choice of using Linux as a web server for a personal homepage will scrutinize the TCP/IP stack code.
References
- Frankl, Phyllis et al. Choosing a Testing Method to Deliver Reliability. Proceedings of the 19th International Conference on Software Engineering, pp. 68-78, ACM Press, May 1997. <http://citeseer.nj.nec.com/frankl97choosing.html>
- Hamlet, Dick. Software Quality, Software Process, and Software Testing. 1994. <http://citeseer.nj.nec.com/hamlet94software.html>
- Hayes, I.J., C.B. Jones and J.E. Nicholls. Understanding the Differences Between VDM and Z. Technical Report UMCS-93-8-1, University of Manchester, Computer Science Dept., 1993. <http://citeseer.nj.nec.com/hayes93understanding.html>
- Miller, Todd C. and Theo de Raadt. strlcpy and strlcat - Consistent, Safe, String Copy and Concatenation. Proceedings of the 1999 USENIX Annual Technical Conference, FREENIX Track, June 1999. <http://www.usenix.org/events/usenix99/full_papers/millert/millert_html/>
- Viega, John. The Myth of Open Source Security. Earthweb.com. <http://www.earthweb.com/article/0,,10455_626641,00.html>
- Gonzalez-Barahona, Jesus M. et al. Counting Potatoes: The Size of Debian 2.2. <http://people.debian.org/~jgb/debian-counting/counting-potatoes/>
- Wheeler, David A. More Than A Gigabuck: Estimating GNU/Linux's Size. <http://www.counterpane.com/crypto-gram-0003.html>
Acknowledgements
The following people helped in proofreading this article and/or offering suggestions about content: Jon Beckham, Graham Keith Coleman, Chris Bradfield, and David Dagon.

© 2002 Dare Obasanjo
Well, the consistent M$ security lapses show (Score:2, Insightful)
M$ security method isn't new (Score:2, Insightful)
Probing the defenses: looking for places where the code doesn't anticipate a certain condition isn't very efficient, but it has been pretty much the way vulnerabilities are found.
Intelligence: lack of source availability deprives you of 1,000 eyes to find the vulnerability, so it remains. If their closed code is stolen, without the benefit of freelance auditors the problem compounds: exploits are found and can be executed when and where they can do the most damage. Open source invites those 1,000 eyes of freelance auditors to report a vulnerability. There still remains the chance some unethical person will spot it and not report it, choosing to exploit it later, but they play roulette in that someone else may still find the hole and close it.
Re:M$ security method isn't new (Score:4, Insightful)
I'm always bothered by the articles which conclude that one OS is less secure because more vulnerabilities are discovered in it than in other OS's. I think it would be better to also consider how the vulnerabilities are discovered.
If we know that RedHat Linux had 54 vulnerabilities last year & Win2K had 42, do we really know anything about the relative security of the two OS's? I would be curious to see the vulnerabilities broken down by how they were discovered. Were they discovered prior to being exploited or as a result of an exploit? It would also be important to know how soon patches were available.
Re:M$ security method isn't new (Score:2)
While it would be interesting to know, does it actually matter? Once a vulnerability has been discovered, and until it's fixed, it is a liability waiting to be exploited. The more independent liabilities there are, the less secure your software is.
Re:M$ security method isn't new (Score:2)
Wrong.
Once a vulnerability has been created, and until it's fixed, it is a liability waiting to be exploited.
Re:M$ security method isn't new (Score:2)
I'd love to see you exploit a vulnerability you haven't discovered yet...
Re:M$ security method isn't new (Score:2)
Actually I imagine the Ping of Death was discovered after it was exploited. Discovered because it was exploited.
To be fair... (Score:4, Insightful)
That should be kept in mind when trying to draw conclusions from raw numbers of vulnerabilities.
Re:To be fair... (Score:1)
Re:To be fair... (Score:2, Interesting)
I suppose you could make an argument, though, that Debian sort of cheats. These stats are no doubt from debian stable. But, what percentage of debian users are actually running stable, I wonder?
Re:To be fair... (Score:4, Insightful)
There's actually even more to it than just that.
This is the point Microsoft and others try to make when they say that their closed source model is more secure...but it's only marginally better at best, of course, because whether the vulnerabilities are found or not, they're STILL THERE.
So in comparing raw numbers, no, it's not a fair contest. There may be 20 exploits found in Debian, and only 12 found in AIX (or whatever the numbers are), but the question is: how many more are in AIX that have yet to be discovered vs. how many are in Debian that haven't been discovered yet? I'll bet Debian's number is closer to 0 than AIX's.
Another thing to bear in mind: statistics can be manipulated to say anything you want.
Finding sploits is a good thing (Score:2)
Finding these weaknesses, or "sploits", is a win in the long term for people who enjoy a reliable, bulletproof system. It has to be hacked and torn apart to the point of perfection before one can be proud of its reliability.
Yeah Right! (Score:2, Insightful)
Re:To be fair... (Score:2, Insightful)
Re:To be fair... (Score:2)
to actually do an apples-to-apples comparison, you'd have to determine how many of the Linux vulnerabilities were in things like licq or ircd.
Quite true. Just to muddy the waters even more, one would also have to track 'self reported' vulnerabilities based on review vs. exploited first then fixed.
It would also be interesting (but unlikely) to know how many vulnerabilities are found by the vendor and quietly fixed in an update, or worse, deemed 'not important enough to worry our users with'.
Regarding Microsoft's Security Initiative (Score:2, Funny)
Re:Regarding Microsoft's Security Initiative (Score:2, Funny)
Formal Methods: "Here, code this"
Code Audits: "Did it compile?"
Testing: "Put it in the final distribution."
Design Reviews: "Are these coffee stains on the spec sheets?"
Codified Best Practices: "Profits are up and we've extended our monopoly, here's your salary bonus."
I wonder what they've planned for the other 27 days...
But isn't the REAL point.... (Score:4, Insightful)
I mean, let's be honest, how many of you have programmed some code and had it work perfectly the first time? Maybe sometimes, but even in small programs we forget a " or a ; here and there....
This is putting the work of many, many people together to compile a "program" that is larger than anything I could even dream of accomplishing. I.e., there are bound to be flaws we didn't see in the multi-millions of lines of code.
Back on topic.... A security hole is found, we can patch it because we can see the code, we can make it BETTER.
Microsoft....
well, you just wait and hope they eventually make a patch, and half the time the patches suck and are re-exploited in a matter of days.
I'm not claiming that opensource is non-vulnerable or exploit-free.... So this article seems somewhat pointless. Anyone who writes code, knows that an exploit free program of this size is just dreaming. What should really be looked at is the amount of time taken to fix and patch a problem.
Just my
Re:But isn't the REAL point.... (Score:4, Funny)
In fact, I've learned that if code works perfectly the first time, something went terribly, horribly wrong....
Re:But isn't the REAL point.... (Score:2)
Use a bug to catch a bug.
Works perfectly the first time. All the bugs are hiding and you're flying blind.
That's what's so hideous about band-aid patches.
Re:But isn't the REAL point.... (Score:2)
That is, of course, if the vulnerable Open Source software is still maintained. Too many projects fall apart due to insufficient momentum, too small a user-base, or changes in the lives of project leaders.
Sometimes, as in the case of the GIMP, an abandoned project will get picked up, brushed off, extended and enhanced. But this is usually not the case.
Re:But isn't the REAL point.... (Score:3)
If I find a problem in (insert 10 year old closed source app whose company was bought out and then the next one bought out then went out of business here), how am I any better off?
If anything, what happens in the case of abandonment is another feather in the cap of OSS. It's not the endgame.
Re:But isn't the REAL point.... (Score:2)
Re:But isn't the REAL point.... (Score:2)
Who do you trust?
Big win for open source. With open source (or better, Free software) you get to choose an auditor you personally trust. Even in the worst case, the auditor is working for you, not them. With proprietary, you may choose the vendor or the vendor.
Re:But isn't the REAL point.... (Score:2)
Exactly. With proprietary software you choose which vendor you want to trust. You can also make agreements where you can audit the software itself under NDA, but that's usually not very helpful in reality.
With open source, you can choose an auditor that you trust, but how many people or companies have someone they trust who has the technical ability to audit SOMEONE ELSE'S software for security? It usually comes back to having to trust the people writing the software.
I'm not sure larger companies are willing to trust hobbyists to take the time, and use the rigorous procedures, required to create good secure software. I'm not saying there isn't excellently written, relatively secure Open Source software. But if you're selecting software on which the stability and good name of your billion-dollar company relies, why would you choose Open Source over closed source? The company using the software isn't expecting their internal people to be finding security bugs. They'll likely audit the software, but they want a solution, not problems. They don't expect to find anything in their audits, and they definitely don't expect to be fixing it themselves.

When it comes down to it, they want someone they feel they can trust. That means a reasonable, proven track record of few security problems and a quick response to fixing issues. They also look for someone who's going to be around for a long time to support the software. That usually means a large, stable company. Large, stable companies require stable income. They usually get that income from selling both software and services. Selling services alone is a much more risky business plan, because it's hard to support someone else's software, especially when response time is critical, and if the source is open, anyone can take it and compete for the support contract. That competition drives down support costs, but often reduces the quality of the support in the process. Open source does have advantages, but I don't think those advantages are going to be the major determining factors in companies choosing security software.
It all comes back to who do you trust. Most people don't trust people who don't have much to lose. Companies are also going to have trouble trusting people who have strong anti-business attitudes. There are some vocal people in open source with those views. That adds to the doubts. A lot of small doubts quickly add up to going with the option from the big company.
Re:But isn't the REAL point.... (Score:2)
The creators also should have a better idea of how the code works, so you have less of a chance of the fix causing other bugs.
chucklesnortSPEW
Damn. There goes another perfectly good keyboard.
Are you a programmer? How many different shops have you worked at where the guy who wrote the code is still guaranteed to be around a year later? Two years? Or, I suppose that detailed, thorough and 100% accurate software documentation has always been available everywhere you've worked?
No, this is not an area where closed source software has any advantage. In a system of any size there are zillions of dusty corners that no one still working at the company understands, are not documented adequately (or appear to be, but the documentation is *wrong*) and generally require someone to dig in and figure them out again each time they're touched.
IME, there is a much better chance with OS projects that the original author or else someone else who understands the relevant bit is still hanging around the mailing list (or someone who knows the relevant person and knows of their work), if for no other reason than developers who contribute significantly to a project tend to stay "in touch". This is even more true of the original designers.
There are no guarantees in either environment, and the current reduced mobility of software engineers may have improved the situation somewhat at closed source shops recently, but my experience is that OS is better in this regard.
Re:But isn't the REAL point.... (Score:2)
Dusty corners that the current "old-timers" know to stay far away from. The dusty corners may be where the problems are, but anyone trying to mess with them quickly learns they have opened a can of worms. Very hard to repackage worms.
Re:But isn't the REAL point.... (Score:2)
and half the time the patches suck and are re-exploited in a matter of days.
Forget half the time, please name one time.
Re:But isn't the REAL point.... (Score:2)
(seriously, check TheRegister archives for plenty of "oops it was patched, patch is easily bypassed" type of security warnings for Microsoft.)
In all fairness though, I do see a lot of release notes across all genres of software that read "patch 1.23b, fixed problem in patch 1.23 designed to fix problem introduced by patch 1.22"
Open Source does not GUARANTEE security. (Score:3, Interesting)
1: Access to source: Just because it has not been audited does not mean that it cannot be audited. Software can be considered more secure if the code is at least available to be audited. For this reason, I congratulate Microsoft on the shared source initiative.
2: Independent audit: When in doubt in a mission-critical scenario, hire someone to audit the code or part of it. This is possible with proprietary software under some licenses and with permission from the vendor, but it is always true of open source.
3: Compartmentalized design: the application runs under minimal permissions. This is a problem with proprietary (IIS) and OSS (Sendmail) software alike.
Open Source is no guarantee for security but it helps.
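The compartmentalization idea from point 3 can be sketched in C for a POSIX system, assuming a daemon that needs root only briefly (say, to bind a privileged port) and then serves untrusted input. The function below follows common practice; the specific uid/gid would come from configuration, and real daemons do more (chroot, supplementary groups) than this minimal sketch shows:

```c
#include <assert.h>
#include <sys/types.h>
#include <unistd.h>

/* Permanently drop from root to an unprivileged uid/gid. Called
 * after the privileged setup work is done and before any untrusted
 * input is handled, so that a later exploit runs with minimal
 * permissions. Returns 0 on success, -1 on failure. */
static int drop_privileges(uid_t uid, gid_t gid)
{
    /* Drop the group first: once setuid() succeeds, the process
     * no longer has the right to change its group. */
    if (setgid(gid) != 0)
        return -1;
    if (setuid(uid) != 0)
        return -1;
    /* Paranoia: verify the drop is irreversible. If we can regain
     * root, the drop failed and the daemon must abort. */
    if (setuid(0) == 0)
        return -1;
    return 0;
}
```

The order of calls matters: dropping the uid before the gid would leave the process unable to change its group, and skipping the final check can mask platforms where the drop silently failed.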
Re:But isn't the REAL point.... (Score:2)
If you can't afford the time to understand the patch, maybe you'd just better wait until it gets reviewed and accepted by the core development team. If you can't afford to wait that long, well, you're screwed, but at least you're no worse off than you would be with a closed source product. If you aren't capable of understanding the code then you shouldn't have a job maintaining mission-critical production systems.
Re:But isn't the REAL point.... (Score:2)
Is the converse of this true as well? If you aren't capable of understanding mission-critical systems you shouldn't have a job maintaining the code? Most of the Systems Engineers that I know understand the code, but very few Programmers understand the systems. Why is the burden to know everything placed upon Systems personnel but not upon development?
hummmm not quite (Score:4, Insightful)
The author wants to "expose the fallacy of the belief in the "inherent security" of Open Source software" (many eyes make safer code) and gives the REAL way to make software more secure of which these 3 caught my eye:
Code audits
Testing
Design reviews
Correct me if I'm wrong, but isn't that exactly the "many eyes make safer code" theory? That open source, having the code available, can have more people do code audits, testing and design reviews than a company with closed source can.
In the real world, he's right, those extra eyes aren't necessarily qualified...but still, on AVERAGE wouldn't there be MORE qualified eyes to do this stuff along with the unqualified?
Re:hummmm not quite (Score:5, Insightful)
This means that OSS has the *potential* to be more secure, but as shown by the article, that potential is not fully realized.
his reasoning is twisted and I call Bull Shit. (Score:2)
Yeah, yeah, yeah, that's what he says because his Mailman program from Red Hat 6. had holes. What a rotten extrapolation! Let's not FUD ourselves into a stupid panic.
The point of free software is to develop a community of users and gain mutual benefit by sharing code and development effort. Mailman? Pardon my ignorance of a bell on the "professional" Red Hat distro I never owned. How widely deployed was this package? If it was never that widely used, of course the bugs would remain. Thousands of downloads does not really translate into thousands of users, and we might assume that a large portion of those users have upgraded their machines. It is much more correct to extrapolate free software security from Apache, sendmail, exim, openssh, xfree86 (the list is very long indeed), where there is a real community of users. If a bell or whistle is broken, it can be replaced.
Red Hat, by coming closer to the bad old days of software distribution, has left their user base open to some of the bad old day problems. Difficulty in getting updates causes problems. Who would put 6.2 on a machine? No one expects a CD to heal itself, yet I'm tempted. I've heard good things about up2date, but it's not as easy or dependable as apt-get update and upgrade by a long shot. That cozy old 6.2 environment... nah. Shifting focus from service and equipment sales to software vending is a bad, bad idea. Should we let some small problems Red Hat has had run us back into the arms of MicroShit and the like? Nope.
The good news is that low usage also translates into low vulnerability for the rest of us. It's not like Mailman is "the standard" forced on everyone, and I doubt any of its bugs are as bad as, say, Outlook's. Think about it. Did we suffer Code-RedMan a few months ago? No we did not. Nor did we suffer network instability over BIND problems or any other Linux/BSD holes.
The free distribution methods are showing themselves to be best. While I know it's possible for my poor little Debian boxes to be cracked, I also know that the chances are far less than any windoze compooter. The most common applications ARE well reviewed, the rest are so variable as to make life hard for the would be Linux cracker. What potential is ever fully realized in nature?
you aren't quite on either (Score:2)
rather than picking apart minor quibbles in the article, and thus trying to disclaim it entirely, we should look at the big picture and learn from it.
just my 2 cents.
Re:hummmm not quite (Score:2)
Code audits are rather boring, and the usual incentives surrounding work on Free Software do not seem to apply. In addition, a lot of code is poorly commented and incomprehensible, works only by accident (but is correct nevertheless, in the mathematical sense of the word), and so on.
Formal Methods (Score:1, Funny)
Yeah, good luck getting Johnny Hack-job with his associate's degree in C programming to use formal methods. I can imagine the interview process now:
Interviewer: Are you familiar with formal methods?
H4X0R: Huh?
Interviewer: Are you comfortable with set theory and first-order predicate logic?
H4X0R: I know how to code. I learned how to program in C. I am an 3l33t h4x0r.
Interviewer: *sigh*
Open Source still wins (Score:3, Interesting)
Microsoft could even have a better track record than some Open Source systems and I think that I would still choose the open source way.
If you rely on obscurity to be your software security then you will lose every time. In the end it is the freedom to choose and to change in the open that makes a system secure.
Tony
No, Dare (Score:3, Interesting)
In fact the belief in the inherent security of Open Source software over proprietary software seems to be the product of a single comparison, Apache versus Microsoft IIS.
No, I'd venture to say that although you are correct in citing IIS's tendency to destroy the Internet every few months when another virus comes out targeted at the Microsoft web server, there are most definitely other pieces of Microsoft proprietary crap that look pretty lame when compared to their open source or free software counterparts.
Ever hear of Microsoft Outlook?
The fix is on the way (Score:5, Interesting)
The author's point is correct: while any Open Source package may have been audited, it isn't necessarily audited well, or at all.
But flash-back to the recent announcement of the Sardonix Security Portal [sardonix.org], which aims to be a central clearinghouse for tracking audits and auditors. The goal is to have a list of 1) what's been audited, 2) who audited it, and 3) what that particular auditor's track record is on other software - were holes found after they said it was clean?
Obviously this is a new project, and it's founded on the ashes of an earlier effort that didn't get much involvement, but it's a big step in the right direction and it's got DARPA funding. And it will probably do a much better job with Open Source software than with Closed Source.
Securability of software (Score:3, Insightful)
Of course not.
But, better that it's more securable in theory (due to the open nature of the source) than not securable at all.
Splint: Secure Lint (Score:2, Informative)
Splint is a GPL'd extended-lint-type code analysis program which not only checks syntax (and semantics!) but now includes checks for security vulnerabilities. Essentially, you run your code through Splint and it will spit out a detailed list of problems. As with LCLint, you can "decorate" your code with stylized comments to provide semantic information to the parser, which allows even more thorough checking. Click here for more details and downloads of Splint [splint.org].
Ya but.... (Score:4, Insightful)
The term "peer review" implies an extensive review of the source code of an application by competent parties. Many Open Source projects do not get peer reviewed for a number of reasons including
* complexity of code in addition to a lack of documentation makes it difficult for casual users to understand the code enough to give a proper review
"Casual Users" are not peers. The term "Peer Review" means that, in this case, the code would be reviewed by other hackers (software engineers), not by the general public.
I am not a hacker; I don't have the skills or knowledge to find security holes in software libre by reviewing the source code. All I can do is use the software, and if I come across a symptom of a problem, I can email the developer to ask what is going on, which often results in a patch within a short period of time.
Re:Ya but.... (Score:4, Insightful)
"Casual Users" are not peers. The term "Peer Review" means that, in this case, the code would be reviewed by other hackers (software engineers), not by the general public.
Hackers/Software Engineers are still "casual users" in most cases. The issue isn't the presence or absence of the general technical knowledge to examine the code, but the effort and focus.
As an example, I'm a programmer with close to 14 years of experience, and a good deal of it focused on engineering of large systems and on security work. I won't go through my CV, but by most anyone's standards I'm eminently qualified to audit code for security defects.
However, when it comes to, for example, Mozilla, I'm unquestionably a "casual user". Why? Because in spite of my extensive experience and knowledge with software systems in *general*, I have little knowledge of the inner workings of Mozilla in *particular*. And it matters. A lot. Even though Mozilla is implemented with my most familiar toolset (heavily abstracted C++), it would take me days if not weeks of focused effort to understand the software enough to be able to begin a security audit.
I know this because a few months ago I attempted to fix a bug that had been bothering me for some time. Although I found the Mozilla code to be well-written, nicely structured and generally easy to work with, it still took me almost two full days to understand it enough to correct that one small defect. I probably spent too much time in random curiosity-driven wandering, but even factoring that out it was a *lot* of work.
Doing a security audit is an undertaking on a completely different scale from adding a feature; it requires a fairly detailed understanding of large chunks of the code, and a thorough high-level understanding of how the modules fit together.
This is not to say that closed source is better, because all of this analysis is, for practical purposes, impossible with closed source. However, don't confuse possession of deep technical skills with deep understanding of a particular piece of software. A hacker who isn't a casual user of most of his software is a hacker who doesn't use much software :-)
README, Changelog, LICENSE, AUDIT... huh? (Score:5, Interesting)
I think this is an excellent idea, one that should be expanded upon by other developers.
Oh, and vsftpd 1.0.1 can be obtained from this ftp site at Oxford [ox.ac.uk]. It's written on Linux but I run it on Solaris with just a tweak to a #define.
Re:README, Changelog, LICENSE, AUDIT... huh? (Score:1)
Re:README, Changelog, LICENSE, AUDIT... huh? (Score:2)
Chris develops on Linux, and though he's pretty good at writing portable code, he doesn't have a Solaris system to test on. The #define in question is one of those "feature defines" one often finds. Linux distros come with libpcap, Solaris doesn't. That's all it is.
Using published vulnerabilities as yardstick (Score:4, Insightful)
Take, for example, Solaris. Solaris is the most-used Unix in the world; it is under more external scrutiny than any other Unix, and so you can expect more discovered vulnerabilities than for HPUX or AIX. This doesn't mean AIX or HPUX is intrinsically more secure; it just means more discovered vulnerabilities on Solaris.
(I don't claim AIX or HPUX is as insecure as Solaris; I'm just saying it's impossible to judge based on the number of discovered vulnerabilities.)
(And Solaris is pretty secure.)
Then, the BSD and Linux variants are more transparent; anyone can look at the source code, and so possible vulnerabilities are easier to identify.
Nice article, and excellent analysis. My quibbles don't undermine your conclusions; I just *hate* it when people simplify security to number of discovered vulnerabilities.
Security is much more complex than that.
Re:Using published vulnerabilities as yardstick (Score:2, Insightful)
Security is difficult to measure at best, because if you knew your system had holes, then you would know it was insecure. You might say that to believe your system is secure is to live in blissful ignorance.
Ben
Re:Using published vulnerabilities as yardstick (Score:2, Insightful)
The argument in favor of open source is that "anyone can fix a bug". Can, not will. There's a big difference, and a lot of
Re:Using published vulnerabilities as yardstick (Score:2, Insightful)
What a load of crap. (Score:2, Interesting)
It just doesn't jibe. Some closed source software is more secure, some OSS is more secure. It depends on the talent, hard work and organizational skills of those involved in the individual projects.
Even if one found a methodical way to compare the mean security level of OSS and closed source software, it would be of no use!
What use is it telling someone Closed source software is in general more secure than OSS when they're only interested in a web server? What they need to know is how secure their potential solutions are.
Also, knowing which method in general produces more secure code won't influence a development team. They have more important things to worry about, ie. how they intend to profit from their work.
Re:What a load of crap. (Score:3, Interesting)
The article didn't say anything even close to that; what it said is that the widely-held belief (particularly among the Slashdot-crowd) that Open Source Software is somehow inherently more secure than closed source software is largely a myth. Try understanding what you read before you call it a 'load of crap'...
It looks pretty silly when you insult the article and then make a post where all your points are pretty much the same as in the article (ie. OSS and closed source software can both be secure or insecure).
I'm sorry. (Score:1)
Maybe I didn't make myself clear. What I intended to say was that sweeping statements like the one the article debunks are a load of crap, not the article itself.
It looks pretty silly when you insult the article and then make a post where all your points are pretty much the same as in the article (ie. OSS and closed source software can both be secure or insecure).
As I said, it wasn't my intention to insult the article but the idea which it calls a myth ( and it's opposite). In fact, I was insulting the notion of one side being more secure, as well as the notion that finding such an answer is useful. That's what I called a load of crap, the whole debate in the first place :)
market share? (Score:2)
Re:market share? (Score:1)
I agree. For example, how often would anyone use a MacOS X Server? If you believe that the stats are an absolute benchmark of security, then it'd be one of the best... Plus I think they put zeros at times the system didn't exist or people weren't checking them for security. (Did BeOS really have no security holes 1997-1999?)
I would have liked to see the stats calculated by how many times those computers have actually been compromised. Not to mention, how many of those vulnerabilities were potential security flaws, and not ones that are actually exploitable? That makes the more paranoid/open systems appear less secure.
However, Win NT/2000 and Red Hat scored as the worst on the "Number of OS Vulnerabilities by Year" table. Just as I expected... (Win9x is an OS for users--is it really fair to compare it with server OSs (or systems used for both) in this table?)
Spafford on open source security (Score:2, Insightful)
It was a real eye opener for all of us who had read The Cathedral and the Bazaar [tuxedo.org]
For instance this from one of the slides from the talk:
Linux compromises dominate - nearly 4 to 1 over Windows
Commercial Unix compromises usually rare
Windows/Unix compromises are 2 to 1
MacOS compromises do not occur (before OS X)
The slides are still interesting even after two years
Re:Spafford on open source security (Score:2)
He cites only one source claiming that Linux has more reported flaws than Windows does. I couldn't find the data online. I can't believe his source is trustworthy unless I can see how the flaws were counted, because people tend to overcount because of the distributions. (His data is different from other data I have seen.)
The rest of the "proof" that Linux is less secure is that there are so many executables installed by default and the documentation is in different formats. The other "proof" that Linux is insecure is the number of lines of source in the kernel.
Forgive me if I'm not convinced.
(Note: I'm not claiming that open source is more secure than closed source although I think it is generally fairly secure.)
Severity of Security (Score:2, Insightful)
How many people would run WuFTPD on a production box while there are other options around like Pure-FTPD [sf.net] or ProFTPD [proftpd.com]?
But on Windows, for example, there are relatively few closed source HTTP servers, namely IIS, while on the open source side there is everything from Apache [apache.org] to Abyss [sf.net].
So this brings me to another point in favor of Open Source software: because there are many *options* for software in a production environment, the only cost of changing to a more secure product is the time to install it. With closed source, to get Microsoft's newest and most secure IIS 6+++ bundled with Windows ZP 2003, you will have to shell out a few grand. That's where security matters in the end: how much money it costs you in a production environment. We are a bunch of capitalists at heart, you know.
Look at the nature of the vulnerabilities (Score:3, Insightful)
Personally, I find remote vulnerabilities to be a MUCH greater concern than local ones. Looked at this way, we can see Linux clearly coming out ahead, with the champ Debian having only one vulnerability.
The author does make a good point about open source giving a false sense of security. Just because the source is available doesn't mean that it has been thoroughly audited. Still, the freedom to do so is there.
Re:Look at the nature of the vulnerabilities (Score:2, Interesting)
RagManX
Bad Arguments (Score:1)
Please, closed source is no different. Just because a company produces code for money does not mean it has tons of documents and all the code is easy to read. It's a bad argument: a code base being open or closed does not automatically brand it overly complex, or neat and clean.
Good analysis, wrong solution (Score:2)
This was a fairly reasonable (if unexceptional, being a rehash of a rehash) article, until the author got to his recommendations:
However, all of these issues can be and are solved in projects with a disciplined software development process, clearly defined roles for the contributors and a semi-structured leadership hierarchy.
This is almost certainly not the path to better free software. Mass movements in the free software community develop bottom-up, not top-down. If the community rises to the challenge of creating secure software, it will happen for the same reason as any other of our successes: because individual contributors see it as worthwhile.
So if you want it to happen, don't focus on rules and leadership. Focus on ways to increase the visibility of good security work and to credit its practitioners. Make people care.
Sardonix: Auditing Open Source Software (Score:3, Informative)
Wanna make security better? Come do something about it. [sardonix.org]
Crispin
----
Crispin Cowan, Ph.D.
Chief Scientist, WireX Communications, Inc. [wirex.com]
Immunix: [immunix.org] Security Hardened Linux Distribution
Available for purchase [wirex.com]
Re:Sardonix: Auditing Open Source Software (Score:2)
It's not the end, it's the beginning. (Score:2, Insightful)
"Publication does not ensure security, but it's an unavoidable step in the process." [counterpane.com]
Commercial vs Open Source bugs (Score:4, Interesting)
The Debian team did not write most of the software that comes with the Debian distribution. Sure they make patches and try to keep things up-to-date, but the software that is in their distribution is included for completeness more than anything else.
Sun, on the other hand, did write most of the software that comes with Solaris (or at least obfuscated where it came from). They are directly responsible for security problems with the software they distribute.
When a security problem occurs in Apache, surely it's an Apache security problem that just happens to affect everyone who has Apache installed. If they have Apache installed on Windows, one can't claim it as a Microsoft security bug and blame Microsoft for not auditing every piece of code that happens to compile for their OS.
No one forces the end-administrator to install 99% of the software included with an open source distribution. It is up to the administrator to only install software which they are comfortable with. If the authors of Emacs don't do frequent code audits, don't install it (that's not to say they don't.)
Now... one thing distributions can do to make the end-administrator's job a bit easier is to include statistics along with the application for things like past security vulnerabilities, time since last vulnerability, last code audit, etc. to help them make better decisions about what to install and not to install.
Of course, going the route of only including fully audited code in a distribution just doesn't work. If people need inn, they need inn, code review or not. Granted, they might take a look at the source while they are compiling it, but the chances of them finding a massive security hole with a cursory glance are pretty slim.
That's not to say that distribution vendors are free from blame, especially fully commercial vendors, who should at least do some form of audit and mark packages that haven't been audited as "unsafe." But come on now... the real blame belongs with the administrator and the developers.
Bug Severity? (Score:3, Insightful)
Bugs are given ratings on their priority, I assume security holes are as well.
I looked through some of those security listings and noticed that some are for applications that are bundled with the OS (so I'm not sure that they should be counted as an OS issue) and that don't result in actually compromising the system (perhaps crashing an application, or corrupting a file, yes). Not that I'm saying that is a 'good' thing but certainly crashing a little-used application which may not even be running on the default install isn't the same as gaining root access nor should they be treated as such; some form of 'validation' of the numbers is needed, e.g.:
Easily Exploited (278):
-- Root Access: 234
-- Crashes programs: 44
etc.
Are Formal Methods Any Use ? (Score:2)
I studied them at uni, and found them dreadful things to use at the time. The main benefit to using them seemed to be that they took about 5 mins per line of source code - anything you do that makes you spend that amount of time looking at your code is going to help you find problems. But I might have been biased, because I wasn't an experienced programmer back then.
Does anyone have any different opinions on this ?
Re:Are Formal Methods Any Use ? (Score:1)
Now don't get me wrong - I'm all for open source and it would be great if the community could focus more on methods and not just code&features
Open Source has more REPORTED and FIXED issues (Score:2, Interesting)
Huh? (Score:1)
Private companies claim security features in their software. They tell their customers that with this, security is assured.
Has any free Open Source software EVER claimed this?
As far as I know, every Sys Admin I've ever talked to tells me that nothing is secure on the Internet... it's simply not designed to be! Never was! Hence, singling out Open Source software as insecure is misleading, as nothing is secure. What I mean is, claiming security is a lie; you're only as secure as the Admin can make it.
Open Source Was Never a Magic Bullet (Score:3, Insightful)
However, there is an important feature of Open Source projects over proprietary stuff: "openness" breeds honesty and trust.
If a bunch of people say Apache is secure (pulling an OSP out of the air), it is not only because people use Apache and found the claims to be reasonable, but because people have looked at its open internals and believe that its design is secure. If a bunch of people say IIS is secure (pulling a related closed product out of the air), there seems to be less credibility. Although people do use IIS, no one really knows much about the internals of IIS except Microsoft.
Especially with MS's recent performance, are you going to trust the vendor's claims that their closed product is safe and secure? At least with the source you can hire people to do security audits on Open Source programs.
Keep in mind that Microsoft and the Apache Project both put the same "no warranty" on their programs: if you use them and something goes bad (i.e. you get hacked), it isn't their fault. It turns out that Microsoft's scheme isn't better, and it costs more (you have to buy the product, you have to buy support from Microsoft, you have to pay for them to look at your problems). So why do people continue to believe MS over Apache?
And lastly, Open Source doesn't fix user stupidity. Apache for instance can be very easy to break if you configure it very poorly and IIS can be very secure if you take the time to tighten it.
System doesn't matter, bugs will be found... (Score:2)
Something is very wrong with the data (Score:2)
In any case, if you are a Free Software zealot, you should seek better arguments than security. Otherwise your friends will come back to you and ask, "Why have you betrayed me?", when their machine gets hacked even though they use Free Software which has been reviewed by thousands of capable programmers.
The REAL point (Score:3, Interesting)
Sure, there are some additional problems, but most come from the design and implementation _approach_.
If you've read "The Software Conspiracy" from Mark Minasi, you'll be enlightened about software design, and our expectations.
Closed vs. Open isn't the point. Either can be just horrible, or quite wonderful. But the devil is in the details.
What I think many miss is that BUGS = INSECURITY! Not all bugs will cause an insecure system, but some will.
To make a more secure system, we need to make a bug-free system, or nearly so. Look at these software design and implementation methods:
Formal Methods
Code Audits
Testing
Design Reviews
Codified Best Practices
These are the very practices that will give good code, even bug-free code, if they are followed carefully.
Now, as part of the whole solution, you need more than a solution. You need a "push" too. Pull isn't enough by itself.
It's my opinion that there are a couple of factors that could make this happen.
User demand. We haven't seen much of this, but it may be growing. We also need to work to change the expectations of users. Most of us, even, feel that "Oh, it just crashes sometimes" is an acceptable answer. In fact, how many of us just add "just reboot, it'll fix it" to the mix? I'm as guilty as anyone. But this just perpetuates the expectation that software isn't very reliable and that we shouldn't expect it to be. Let's change that.
Finally, I think the legal route should be available too. [I'll get lots of flames here, but I'm ready...] Like any other DEFECTIVE product, the user should be able to redress damage from a product that wasn't reasonably designed. [Many of you will be howling to burn me at the stake now, but read on if you can.] The standard for liability is a reasonable effort. I think those that don't use a strict design and implementation method are not using due care. These methods have been around for some time now; we just don't use them. It's also fairly clear that they can work. How well they can be implemented in real commercial products we can't know, because I don't know of anyone that really uses this type of design method - do you? [And not just in name. In real methodical plodding fashion...]
Lastly, as in Minasi's book, many of you are now screaming - "It'll cost WAY TOO MUCH!"
Bah! How much of your time is spent chasing down bugs in commercial products? Sure, it only cost $100 at the store, but you put in 35 hours figuring out how to work around bugs a, b & c. It crashed and lost your document. It took 3 hours of tech time to find and restore the right version of the data file, or worse, it wasn't backed up, and poof! Companies spend way too much on support of bad products. These costs never get allocated to the real source; instead they're just lumped into general support costs. That just allows the vendor to shift the cost to your company, rather than having an "honest" cost of the product up front.
If software vendors had the real threat of liability, they would get serious about coding practices. If they didn't, the corporate boards and shareholders would make sure it happened. A few examples, and we'd have better software.
Finally, I think that legal liability is the only way this will happen. Until everyone is forced to a higher standard, everyone will seek the lowest common denominator. If you produce better software, but you're new, how will you charge more for it? I just don't think the "market" will fix this. [Not that the courts aren't part of the market, but many will argue they're not, incorrectly IMHO.]
In the end, frankly, OSS might be easier to fix, but who cares? I think the design and implementation before and while the code is written is much more important. From that perspective, I think OSS has a more difficult time imposing that regimented framework on its coders and design people. But it's lots easier to show up and embarrass the OSS people, precisely because the code is open - thus a better motivator, perhaps?
Well, I've said my piece - do your damage.
Security depends much more on the INSTALLER (Score:1)
Microsoft typically will give you the kitchen sink; everything runs even if you need very little. Red Hat Linux does a similar thing: if you install "Everything" it also starts all the daemons.
If you don't spend 30-45 minutes turning off unwanted services, portscanning your machine, and looking up patches/updates at CERT/RedHat/SANS etc, forget it.. your system will probably get compromised in a matter of days. This goes for *ANY* operating system, you simply have to test it and make sure you are running the minimum necessary to do the job.
The main reason you hear more news about Microsoft systems getting infected is simply that there are many more of them, and many more are running the simple default configurations. Linux machines are really just as vulnerable IF YOU DON'T PATCH AND TEST THEM.
Here's a little guide [beimborn.com] to turning off unwanted services on a Red Hat box and auditing your systems with a portscanner.
The Open Source Fallacy (Score:2)
What should be said about open source... (Score:2)
The other security benefit of open source is that you have the POTENTIAL to audit code before you install it. If security were absolutely critical to you, you could look at the innards of every app you download, skim it for buffer overflows, etc. In practice most people don't bother, but they could if they wanted to.
Yeah, this is authoritative (Score:1, Interesting)
2. I see Red Hat has an "unknown" vulnerability. WTF is that? Is it "I think there might be a vulnerability here but I don't know"?
One word: sendmail (Score:2)
Re:One word: sendmail (Score:1)
Re:One word: sendmail (Score:2)
Gimme a break. So does every piece of software ever written (at least the ones that anyone uses). Users = feature creep.
Re:One word: sendmail (Score:2)
The open source process isn't good at throwing obsolete features out.
Number for OpenBSD ... (Score:2, Informative)
code review (Score:1, Troll)
But the fact remains: for better or worse, at least I am 100% capable of finding the bugs or security holes if I need to be assured of such.
You can say all you like about how little guarantee there is with the code being open, but with the code closed, I can only find problems; I can't assure myself there aren't any more.
-Restil
What the article doesn't mention... (Score:3, Informative)
Just for fun, here [jscript.dk] is a handy summary of some Windows issues, including an XMLHTTP vulnerability, known since December 15th, that allows a malicious website to read any file on your hard drive.
management and people approaches doomed to failure (Score:4, Insightful)
The only known and proven way to get problems like buffer overflows under control is to use high-level languages and tools that make them impossible. Yes, your programs run slower, but a compromise is much more expensive than a couple more machines. Yes, there will still be plenty of other security holes possible, but we can address those through better tools as well.
Microsoft's management approaches to security are doomed to failure, as are efforts and arguments in the open source community that the open source process magically addresses security problems. Microsoft's real security initiative is their switch to C# and "managed APIs". The open source community should take notice. Unless systems like web servers, file servers, mail servers, and authentication under Linux get rewritten in safe, high-level languages like Java, C#, or others, Linux will be so unreliable relative to Microsoft's and other systems that it will become irrelevant.
(However, given the choice between buggy Microsoft C++ code and buggy open source C++ code, I'll still take the buggy open source C++ code any day--it's easier to fix and fixes come out more rapidly.)
Re:management and people approaches doomed to fail (Score:2)
Sure, given large amounts of time and testing resources, you can make C programs reliable. But deadlines are a fact of life. That's why we need systems that allow programmers to write reliable code under real-world conditions and real-world deadlines.
thus it's pretty ignorant to blame the language.
I rather think it's ignorant to claim that C/C++ can be used for writing reliable and secure software under real-world deadlines when 30 years of experience show otherwise. Just look at the bug lists. It isn't working.
You see, after 20 years of programming (much of it in C and later C++), I have learned not to trust myself to do things right.
This part is interesting to me (Score:2)
Why is that? I would finally love to be able to mount (read & write) an NTFS partition should the need arise. Now, they don't have to give up proprietary rights to their protocols or interfaces; that's fine. They can have (c) Microsoft etc.; however, people SHOULD be able to implement and use its standards for interoperability, so I disagree with that statement. The protocols/interfaces/records/structures should be public, and people should be able to interoperate with a Windows machine without having to reverse engineer protocols and structures.
See beyond mere trees (Score:3, Insightful)
Let's cut back to the big picture. Pick any desirable characteristic of software -- resource efficiency, robustness, quality, and, yes, even security -- and guess what? The process by which the software was created largely determines how much of that characteristic the software exhibits. Good work, good code. Crappy work, crappy code. Not exactly a news flash.
Now -- and here's the important part -- take any software, developed by any process, and then consider any desirable characteristic. Do you get more of that characteristic by letting everybody see the source or by keeping it hidden away?
That's the argument for open source.
[As I responded to the author's original posting on Kuro5hin.]
Bug reports tend to miss an important number: (Score:2)
The big issue is 'how many unresolved security holes are there for software X at any given time'. Even more than the number of bugs, that is a really significant number. Microsoft execs are whining about people discussing bugs out in public. The fact is that people started doing this in order to get companies to correct their code.
I won't say that OSS is more secure than proprietary software. I will say that OSS on average tends to have a much faster turnaround for getting bugs fixed, and does not leave a system with known problems for very long.
Need to examine these claims carefully (Score:3, Insightful)
I admit that this comment is going to sound very ad hominem: we need to examine Obasanjo's claims carefully. He worked for Microsoft [gatech.edu] very recently.
Ordinarily, I wouldn't call attention to this, but Microsoft as a company has a really bad track record of astroturfing [aaxnet.com] just about any kind of on- or off-line forum.
Sorry, Dare, but those are the facts: if you lie down with pigs, you wake up smelling a bit like pig excrement.
Wrong assumptions... (Score:3, Insightful)
Since, logically, there is no way to determine which one has more total bugs (found plus unfound), the only recourse is to assume that both systems have roughly equivalent numbers of bugs.
From that foundation, whichever system can demonstrate more FIXED bugs is going to be the one that is more stable. None of the bugs listed by the article are outstanding; they have all been fixed.
Re:Yours is a wrongheaded comment (Score:2)
As opposed to using a metric where leaving more bugs undiscovered counts as "secure"? No, I'm saying that if arbitrary metrics are going to be used, you MUST use the one that rewards fixing bugs.
Re:Yours is a wrongheaded comment (Score:2)
The count of discovered bugs measures a lot of things, but it is a lousy indicator of how many bugs remain. For example: find an exploitable bug, then stop when you've found one.
Better measure is how hard it is to find a bug, any bug. An approximation is how long it takes someone to find an exploitable bug once they start looking.
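The parent's point can be illustrated with a toy simulation (a sketch under made-up numbers: the per-bug hit probability, bug counts, and trial counts below are all assumptions, not data from any real system). If each audit probe independently trips any one of the latent bugs with a small probability, then the buggier system yields a shorter average search time, so time-to-first-exploit tracks remaining bug density in a way that a raw "bugs discovered so far" count does not:

```python
import random

random.seed(42)

def time_to_first_bug(latent_bugs, attempts=10_000, p_hit_per_bug=0.0005):
    """Hypothetical audit model: each probe independently finds some bug
    with probability 1 - (1 - p_hit_per_bug)**latent_bugs. Returns the
    number of probes until the first exploitable bug turns up."""
    p_find = 1 - (1 - p_hit_per_bug) ** latent_bugs
    for probe in range(1, attempts + 1):
        if random.random() < p_find:
            return probe
    return attempts  # auditor gave up

def average_search_time(latent_bugs, trials=2000):
    """Average probes-to-first-bug over many independent audits."""
    return sum(time_to_first_bug(latent_bugs) for _ in range(trials)) / trials

# More latent bugs => shorter search; the search time is what reveals
# how buggy the system still is, not the tally of bugs already found.
print(f"avg probes, 200 latent bugs: {average_search_time(200):.0f}")
print(f"avg probes,  20 latent bugs: {average_search_time(20):.0f}")
```

Under these assumed parameters the system with ten times as many latent bugs is found out roughly ten times faster, which is exactly why "how long until someone finds an exploit" is the more informative measure.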
Re:Yours is a wrongheaded comment (Score:2)
Yes, I mentioned that in the original post. However, if you do not assume that the number of bugs is the same, the only alternative is to make no assumptions whatsoever, which makes the entire discussion moot. If you are not willing to say system X and system Y have the same number of bugs, then you can say nothing at all, because system X may have 50 bugs or 50 million, and the same is true for system Y.
If some writer is going to compare, say, Linux to Windows and base that comparison purely on bug counts, then it is his duty to assume that the total numbers of bugs are equal. To do otherwise makes his argument speculation and hearsay at best. Otherwise, who is to say that the 100 bugs found in Linux are out of a total of 101, while the two bugs found in Windows are out of two hundred trillion? Or vice versa? The argument is completely invalid.
Featuritis vs. Trustworthiness (Score:2)
1. Honesty
When a vulnerability is discovered, Microsoft should freely admit it, admit it was their mistake, and not try to pass the blame or put a spin on it.
2. Accountability
Microsoft should be willing to accept responsibility for their products and any problems they cause. No more click-through absolutions. No more blaming it on hardware or third-party applications or user error. If something I bought needs a fix, they should make it freely available, to the point of sending me a disk in the mail. If I shelled out $200 for their cardboard box, they can spend an extra buck to send me a disk and a stamp. If they feel a need to charge $201.50 in order to achieve accountability, so be it.
3. Responsiveness
No more brushing things under the table, hoping no one will post an exploit to BugTraq. No more suppressing information for months until they feel like dealing with it. Microsoft is getting better about posting fixes online, but they have a long way to go.
5. Openness
Microsoft should tell us what each product and each fix is doing -- *exactly*. They should describe the problem instead of vilifying those who find it. They should allow people to fix their own problems. I'm not mandating the open sourcing of Windows, but if they were serious, they'd think about it. Even if you need to sign 100 NDAs to get it. A much more reasonable and realistic request is the opening of the
6. Cooperation
Microsoft should be more willing to work with other software companies. No more DOS or browser or multimedia player wars. No more games with SMB or Java or HTML. No more buying out or undercutting the competition. Microsoft should accept that they aren't the only software developers in the world and encourage a more heterogeneous environment. Not only is it good for security, it's good for business.
What a load of Bollocks/FUD (Score:2)
And Pine/Kmail Vs Outlook, and Netscape/Mozilla Vs IE (ANYTHING Vs IE). Basically everything that connects to the Internet that has an analogue between open and closed source has been less badly cracked on the open side.
There have been some belters on the open side, of course and I've had a worm that got in through Bind myself. But, there is no way that I would ever trust closed source software to connect to the 'Net again.
I suppose that my experience is just "anecdotal evidence," but my experience matters more to me than any number of useless metrics, and the metrics given are useless: how severe were the bugs, and how long did a fix take to appear? How many of the fixes appeared before an exploit was seen in the wild? The method used punishes the systems that fix bugs before an exploit appears, and rewards those that sit and hope the bug is never "hit," so as not to spoil their "security score" by issuing a vulnerability report.
As for the suggested methods of producing secure software: big deal! Apart from formal methods, these are all in widespread use by people interested in security. Formal methods, for that matter, do not (and cannot) guarantee correct software; I have met two designers involved in the Airbus 320 project, and one of them refuses to fly in the thing, and no one can forget the pictures of that Airbus doing loops over Italy with a full load of passengers.
The problem with the "solutions" is not that no one knows what to do, but that many (e.g. MS) don't bother trying. Given a package which has not been properly reviewed before release, it's pretty obvious that the open source version is better, insofar as it gives the user a chance of doing the review themselves. In an ideal world the source does not matter; in the real world it does.
I don't think we need ex-MS employees coming round here preaching about security, frankly. Closed source removes power from the user and leaves them helpless in the face of bugs that require even a one-character patch to the code. Open source gives the user a chance, which s/he may or may not be able or willing to take, to fix bugs quickly or to find them before the black hats do; but at least it gives them the chance. Only an idiot would claim that that does not lead to higher security.
The only valid point in this article is that programmers should write better code and check it more before release. Well, DUH!
TWW