Open Source Security: Still A Myth
jpkunst writes "John Viega (coauthor of, among other books, Building Secure Software) argues in Open Source Security: Still a Myth at O'Reilly's onlamp.com that 'open source software may currently be less secure than its commercial counterparts.' According to him, there may be 'more eyeballs' looking at open source software, but he does not believe those eyeballs are looking for security problems in a structured way."
Still... (Score:5, Insightful)
Re:Still... (Score:2, Interesting)
"It works on my box...bug must be fixed!"
This strategy doesn't hold water in the business world.
Re:Still... (Score:4, Interesting)
That's completely untrue. (Score:5, Insightful)
This isn't even close to being true. Why are you spreading this misinformation? Large open source projects are very competitive with smaller projects in their bug-fix turnaround on serious security problems. Nearly every major security problem is fixed the day it hits the media. Most often when
There may be exceptions, but they are rare.
Re:That's completely untrue. (Score:3, Interesting)
There are two ways to achieve that: control the media, or fix bugs quickly. 8-)
Someone who discovers a bug in free software usually delays disclosure until the fix is ready. This creates the illusion of quick fixes, even though it usually takes two weeks or more to create a fix. (It's quite instructive to look at the time stamps contained in patches released by GNU/Linux distributors.)
I'd like to take issue with F/OSS patch time.. (Score:5, Insightful)
That's true.
One problem with this is that people compare the time-to-released-source-patch-on-a-mailing-list of F/OSS projects against the time-to-tested-binary-patch of a commercial vendor.
I don't think that's fair for the following reasons:
1) Patch Quality.
It is clear that the volume of basic testing done on many instant-turn-around source patches is zero.
Comparatively: as often as an MS patch manages to break something somewhere, consider how much worse it would be without a few days of targeted regression testing. The official recommendation from MS is to test patches before putting them into production, but there have been relatively few patch recalls from MS.
Finally, I think it bears mentioning that with F/OSS, the initial patch is sometimes rewritten over the course of several days until something proper is agreed upon, and that's the code that actually ends up living with the product.
So I'd consider these source-level patches to very often be of "here is something that appears to close the hole and not break anything I tried, good luck!" quality.
2) Patch Applicability
When a hole is discovered in Apache, the time it took for an Apache developer to submit a source diff is NOT the same deliverable as what you're getting from a commercial vendor patch. A source-level patch only does me any good if I am running a source-built tarball in production, I am relatively current with whatever source base the patch is applied against, and I can handle the manual patch/compile/make install process (and if something goes wrong, I've got to back out the patch and compile/make install _again_).
Most people, especially those running production machines, are not running built-from-source software. You install Red Hat. You want Apache? You use the Red Hat Apache package. You now need to wait for the updated Red Hat Apache package to get the bugfix, or you grab the latest CVS snapshot and build from source. Now you've got a lovely problem, because the way Red Hat (or any distro) builds Apache differs from the defaults, so you have to go figure out how your distro likes to build its packages, OR you accept the build defaults and rebase your config files to the new settings.
So really, the vendor binary package is what many people need to wait for before they can truly patch their machines. The source diff is nice, but not something they can easily consume (a sketch of that manual dance follows).
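To make the pain concrete, here is a minimal sketch of that patch/compile/make install cycle as a script. This is illustrative only: the source path and diff filename are hypothetical, and it assumes an already-configured source tree.

    #!/usr/bin/env python3
    """Sketch of the manual source-patch dance described above."""
    import subprocess, sys

    SRC_DIR = "/usr/local/src/apache_1.3.33"  # hypothetical source tree
    PATCH = "/tmp/security-fix.diff"          # hypothetical source diff

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, cwd=SRC_DIR, check=True)

    try:
        run(["patch", "-p1", "-i", PATCH])  # apply the diff
        run(["make"])                       # rebuild
        run(["make", "install"])            # reinstall over the old binaries
    except subprocess.CalledProcessError:
        # Something broke: back the patch out and rebuild again,
        # exactly the failure mode complained about above.
        run(["patch", "-R", "-p1", "-i", PATCH])
        run(["make"])
        run(["make", "install"])
        sys.exit("patch backed out; still vulnerable")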
I think between these two points, it's pretty unfair to compare time-to-patch between MS and someone-posted-a-diff-somewhere.
I think if you look at the time from vuln report to updated binary tarball being released by some of the linux distros, you'll be surprised.
Re:Still... (Score:5, Insightful)
I think it may also have to do with the variety of testing. I admit you are probably right that a lot of OSS vendors don't do extensive testing, but a lot of them don't have to. If the vulnerability only affects one product on one hardware platform, you still have to test various configurations, but you have at least an order of magnitude less testing to do than, say, Microsoft might have for a fix that crosses multiple versions of Windows and may affect PCs, PDAs, etc.
Also, if bugs are found by those in the community, the fix may have time to be tested before the bug is widely publicized. It seems (just from observing announcements, nothing scientific) that a lot of Microsoft vulnerabilities are discovered by third parties who cannot go and fix them, while in OSS they tend to be discovered by people in the security sector, who often provide a fix at the time of announcement or don't announce until a fix is in CVS.
Re:Still... (Score:5, Insightful)
One of the big shockers out of college and into the big bad business world was the idea of "good enough" versus "doing it right".
E.
Mod Parent Up (Score:3, Interesting)
It's sad to see companies just pushing out products as fast as possible to make a quick buck; in the end it causes nothing but problems.
Anyone else encounter this with their current employer or previous ones? I'd be interested to hear the story.
Re:Mod Parent Up (Score:3, Insightful)
The golden rule of business is to make your customers' goals your own goals, because long-lasting relationships are essential to your own long-term success.
Nice, in theory. Not in practice. (Score:4, Interesting)
Maybe. But it is PRACTICED any time a company wants to beat a competitor to market OR to catch up to a competitor in that market.
"Sure, you're also kind of locked in if you just spent $20,000 on a software package you don't wanna throw away but that's full of bugs."
That's it. If you can sell it, it doesn't matter how buggy it is. That way you get MORE MONEY for "maintenance plans" and "support contracts" and "upgrade insurance".
"Still, this will destroy your reputation and do you no good in the end."
A bad rep and a product on the market will always beat a good rep and no product. There's this thing called "emotional investment" that happens a lot in this field. People get their own self-worth confused with the vendor or product and so they will stick with that vendor or product.
"The golden rule of business is to make your customers goals your own goals, because long-lasting relationships are essential to your own long-term success."
The other golden rules are that quarterly earnings matter and if your competition loses, you win.
Re:Mod Parent Up (Score:3, Interesting)
I'm part of a team that maintains a web service that, among other things, has a user interface that runs a SQL query to generate a report over various database tables. Actually, it doesn't generate the SQL queries; they were all pregenerated and stored in a file. The final webpage contains several of these queries as options that you can then send back to the server through a query string parameter to a page that displays
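The comment is cut off, but the pattern it describes (shipping whole SQL statements to the browser and accepting them back in a query string) is a textbook injection hole. Here is a minimal sketch of the problem and the usual fix; the database, table, and report names are made up for illustration.

    # Sketch only: invented names, sqlite used as a stand-in database.
    import sqlite3

    # UNSAFE: the server executes whatever SQL came back in the query string.
    def report_unsafe(query_from_querystring: str):
        conn = sqlite3.connect("reports.db")
        return conn.execute(query_from_querystring).fetchall()  # attacker-chosen SQL!

    # SAFER: the client picks an opaque report id; the SQL never leaves the server.
    REPORTS = {
        "sales_by_month": "SELECT month, SUM(total) FROM sales GROUP BY month",
    }

    def report_safe(report_id: str):
        conn = sqlite3.connect("reports.db")
        return conn.execute(REPORTS[report_id]).fetchall()  # unknown ids raise KeyError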
Re:Mod Parent Up (Score:3, Interesting)
To anyone else out there: Have you ever dealt with this while working for an employer, and did it anger/upset/what-have-you you enough to leave the company?
Re:Mod Parent Up (Score:4, Insightful)
Re:Mod Parent Up (Score:3, Interesting)
Because of this, our team leaders were more interested in getting their milestone completion bonuses than in getting the bugs out of the system (who cares, we're all going on to new projects, even if not at this company).
Every two weeks we had a milestone for a particular module. Regardless of the state
Re: your sig (Score:3, Funny)
"Warning: May contain traces of peanuts or peanut oil".
Where was this label found? On a glass jar... of peanuts. Keep in mind it was a see-through glass jar, making it obvious, even to people who can't read, that it is a jar of freakin' peanuts.
My take on it: The warning was telling us that it might be a jar of fake peanut substitutes, by saying that the jar merely MAY contain peanuts, instead of saying that it definitely did.
Re:Mod Parent Up (Score:3, Insightful)
Sure it rubs me the wrong way every time I have to
Re:Still... (Score:5, Insightful)
One of the big shockers out of college and into the big bad business world was the idea of "good enough" versus "doing it right".
If you think this mindset does not exist in OSS, then you are naive. Do you honestly think that OSS software is released without the developers knowing that it contains bugs? OSS developers don't write flawless code. Therefore any OSS code released to the public has been deemed to have reached a point of "good enough".
Re:Still... (Score:3, Insightful)
Re:Still... (Score:3, Interesting)
There is horrid source code out there, with no commenting or documentation. Most people point to Linux or Apache or some such for examples of where OSS succeeds, yet avoid looking at all the countless other OSS that has far fewer eyeballs looking over the source code.
It just does not work generally.
Re:Still... (Score:3, Insightful)
How about this: the more IMPORTANT a piece of software is, the better OSS-style development will work.
Does this apply more effectively than "OSS is better"?
I'd like to hear your opinion.
Re:Still... (Score:3, Insightful)
Re:Still... (Score:5, Interesting)
He said one year, he was heading up a project that involved writing software for IBM machines. They were nearing the release date and still had dozens (if not more) of bugs to work out. He went to his boss, a B-school guy, and said "Look, I know we're close to the deadline, but there are still many bugs that we really need to work out before this thing ships. We don't want to release a product that costs this much and still has some things wrong with it."
Now keep in mind that there were hundreds, if not thousands of companies ready to buy the machines as soon as it was released. They had orders from companies around the world. Because they were competing with other companies selling similar products, the need to meet the deadline was even more important.
Back to the story, his boss looked at him and said "so you mean to tell me that you think we should delay the release of a product that has the potential, and is almost guaranteed, to earn us hundreds of millions of dollars for a few bugs? I don't think so. We'll release the product and support it later on. Tech support will cost us less in the long run than delays at this point".
So they released the product, sent developer level techs around the world after companies began to complain about the bugs, and that was that.
Moral of the story? Sometimes, from a business standpoint, you should release the product and support its bugs later on. But that usually depends on the amount of competition in the market and the money riding on the product. Yeah, it sucks from a developer's standpoint, but developers don't make business decisions in the real world.
See examples: HL2, DNF, etc.
Shocked? (Score:3, Interesting)
The idea of "good enough" or "I am sick and tired of this project" is not just found in the business world, it is basic human nature.
Re:Still... (Score:5, Insightful)
*cough* Disable javascript which is essential to many business's core web applications*cough*
*cough*Break standard compliant web sites and standards because we can*cough*
*cough* I could go all day coughing under my breath about things MS breaks and on purpose*cough*
Real operating systems aren't so interdependent that changing one component may break many unrelated components. I don't know about other open source vendors, but Red Hat does extremely intensive testing, and I would assume Novell does too. The nice thing is, it usually goes significantly quicker, because if they update a web browser, they don't need to make sure it doesn't break the office suite, mail client, and file browser.
Regards,
Steve
Re:Still... (Score:5, Insightful)
Re:Still... (Score:3, Insightful)
Oh please. Take a look at the track record for the largest commercial software vendor (Microsoft) and the Linux distribution of your choice when it comes to security updates. When was the last time that you heard of a Linux security fix that had serious repercussions to other software installed on the box, and when was the last time that a Linux patch failed to fix the problem in question and had to be backed out?
Microsoft has released some amazingly bad patches in the past.
In the "real" world Free So
Missing the point (Score:5, Insightful)
Open source is not a magic bullet that will automatically solve all our security problems, as much as I advocate open source software, and open source software is not automatically more secure. The reason this fallacy persists is that people tend to think of security in terms of buffer overruns instead of a secure structure. No development methodology can ensure this secure structure, because it is an issue that is either solved in the design phase or not at all.
The question shouldn't be "Can this software be compromised?", because you should assume that all software can be, but rather "What happens if this software is compromised?" Some open source projects are very good at this, and some aren't.
It is also true that for some projects (like OpenSSL), this question is irrelevant because the primary usefulness is as a library, so the application will have no security itself. But these are the exceptions rather than the rule.
Compare the security of Sendmail (open source) to Postfix (also open source). Which is more secure by design? Compare Apache to IIS. Which is more secure by design? (IIS drops permissions after authentication, Apache does so before). Compare Sendmail's security design to IIS. Which is more secure by design?
Open source is important even from a security viewpoint as it allows us to better understand the architecture of the program we are considering and make educated choices about whether we can run it in a secure manner. However, it is no magic answer and just because something is open source does not guarantee its security.
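To make the "what happens if it is compromised" design question concrete, here is a minimal sketch of the privilege-dropping pattern alluded to above: do the privileged work (binding port 80) first, then shed root before touching untrusted input. The user name and the bare-bones server are simplifying assumptions, not anyone's actual implementation.

    # Sketch: bind a privileged port as root, then drop to an unprivileged
    # user so a later exploit of the request handler cannot act as root.
    import os, pwd, socket

    def serve_dropped(port: int = 80, user: str = "www-data"):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.bind(("", port))        # needs root for ports < 1024
        sock.listen(5)

        pw = pwd.getpwnam(user)
        os.setgid(pw.pw_gid)         # drop group first...
        os.setuid(pw.pw_uid)         # ...then user; setuid must come last

        while True:                  # from here on we are not root
            conn, addr = sock.accept()
            conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello\r\n")
            conn.close()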
Mod parent up (Score:3, Insightful)
It is more dependent on the programmer AND the person configuring it. Look at PHPNuke. Look at djbdns.
Thinking that many eyes can spot security problems is like thinking that a million monkeys can type out Shakespeare.
You need skills and experience, eyes are just a useful option.
Re:Still... (Score:5, Insightful)
It's a well-known fact that MS 'hides' bugs from public view until they're fixed. These submarine bugs surface really close to the fix date and skew the stats. OSS has the extreme opposite effect, since it relies on public announcement of bugs to get them fixed.
Re:Still... (Score:3, Informative)
OSS bugs were counted as live from the moment the developers were informed, while MS's were counted as live only once MS acknowledged the bug, often months after being informed. Check out some eEye data:
Upcoming advisories [eeye.com]
Published advisories (click to see time to fix) [eeye.com]
IBM is also bad, but Microsoft seems to be the worst, with most vulnerabilities taking well over 130 days to fix.
Securitey (Score:5, Funny)
Dammit (Score:4, Funny)
More Eyeballs (Score:4, Insightful)
Re:More Eyeballs (Score:5, Insightful)
But again, the problem is that the problems are not being found in the first place. Look, for example, at Sendmail. It's 25 years old [ccone.at], but it is *still* a buggy, buggy app. It STILL isn't secure and bug-free. The inevitable comparison with MS will come up, so let's look at that. First off, MS hasn't even been *around* for 25 years. As far as specific products go... with all of its patches, W2K is generally considered quite stable and relatively secure (again, with all of its patches in place). W2K is about 5 years old at this point.
So, I think that this article has some merit.
Re:More Eyeballs (Score:5, Insightful)
And look at the success they've achieved with that style. If we learn anything from Sendmail, it's that security must be designed in, rather than bolted on as an afterthought.
Re:More Eyeballs (Score:3, Interesting)
Are you sure that's a fair comparison? Sendmail is a kludge. It's had bugfixes tacked onto features, tacked onto bugfixes, all heaped into a 25-year-old codebase. It's never been rewritten from the ground up, and by today's standards, it was a mess 25 years ago (when it was written, security was barely a blip on the radar screen). The same can be said for a package like WuFTPd.
What about a package like Qmail, or an
Re:More Eyeballs (Score:3, Interesting)
As far as Sendmail goes, that's why I don't use it. I use procmail for all my SMTP needs. Win2k is a great product; I was really happy when I was using it, but the bottom line is that it still has its problems. There are still patches that get released to address security issues every now and then.
All software has its problems, because it's written by people and people are imperfect. However, there are a lot of ch
Re:More Eyeballs (Score:4, Informative)
1) MSFT is about to celebrate its 30th anniversary (founded 1975, incorporated 1981).
2) Windows has been around for 20 years (Windows 1.0 was beta tested in 1983-1984, released 1985).
3) The Windows NT/2000/XP code base is almost 12 years old (NT 3.1 was released in 1993).
4) Persistently buggy apps are found among both open- and closed-source software. There's no monopoly on spaghetti code.
Re:More Eyeballs (Score:3, Informative)
Wrong. [microsoft.com]
Microsoft was founded in 1975. That makes it 29 years old, by my math.
Look for example, at Sendmail. It's 25 years old
Wrong. [chipchat.com]
Even your own link states that Sendmail shipped first in BSD 4.1c, which was not released until late 1982. Sendmail's PREDECESSOR - "delivermail" dates back to 1979.
Not that this all matters - but I find it funny when in a discussion about quality control, people don't bother to get their facts at least kindo
Re:More Eyeballs (Score:4, Informative)
Comment removed (Score:5, Insightful)
Is this a myth too?? (Score:2)
I often hear this claim made by proponents of OSS, but I have yet to see any hard evidence backing it up. Can anyone offer something more solid than assumptions?
Oh, so true. (Score:3, Interesting)
And that's the world that I come from. I've worked with IBM to get AIX patches created that pull bugs fixed on "current" versions back to older versions. I can't tell you how many times I've done it with database vendors - Oracle, Informix, etc. While these aren't fully vetted
Closed source speel chacker (Score:4, Funny)
I have one word for you (Score:4, Informative)
Developers! Developers! Developers! Developers!
Re:I have one word for you (Score:5, Insightful)
Busy eyeballs are better than idling eyeballs.
OSS users/coders still close them up faster... (Score:5, Insightful)
He's right. They may not be looking for security holes, and the "eyeballs" may not be what finds them, but they will certainly fix a hole and release a patch to the community shortly after it is discovered.
Now, even if MSFT did release a patch right away it wouldn't make much of a difference as most people don't update their software. The OSS community, OTOH, is still mostly comprised of people that have a Clue and those people generally patch immediately.
So while what the article states is currently true, the OSS community still responds faster and with fewer problems than their counterparts on the other side of the fence.
Re:OSS users/coders still close them up faster... (Score:2)
Grandma AC: click here [openssl.org].
Huge Upsides? (Score:2, Interesting)
Go team go! (Score:5, Funny)
What crap! (Score:3, Insightful)
Corporations' view of security is even less structured.
Actually, corporations are not concerned with security at all! They are in the business of making money, not secure products.
Now the million-dollar question: who paid this guy?
Wrong (Score:5, Insightful)
They believe it, but offer no proof. You don't create an OS kernel by hacking in bits of code; you don't create any complex software by just "hacking" it together. Mozilla, OpenOffice, KDE, GNOME, all the major pieces of Linux software, in my opinion, are very structured and follow a solid design process.
Re: (Score:3, Interesting)
Responsibility? (Score:2, Interesting)
Re:Responsibility? (Score:2)
Really on the hook yeah...
Re:Responsibility? (Score:4, Insightful)
They are "on the hook" in the sense that, if the market decides their product is poor and there is a alternative product, the market will move there. In that sense, and in that sense only, closed vendors are "on the hook." Of course, this presumes the existence of a competitor. Does Microsoft have competition? In some markets, yes. In some, no.
So a product without a competitor is no different from an open source product. However, I would argue that market forces act on open source as well. The competition is for developers. Developers will work on a project that is useful and is used. They will tend not to work on projects that are not used. In this space, too, some open source products have competitors and some do not.
Money is not the only market force.
Re:Responsibility? (Score:3, Insightful)
Actually, it's far, far worse, as there's an immediate commercial disincentive to security development.
It costs money.
Someone may do code reviews for free or for fame on open source, but nobody is going to review commercial proprietary closed source without a fat paycheck.
As long as there is no serious competition any money spent on security is wasted money. And once security becomes a selling point it's most likely much more
No Responsibility (Score:5, Insightful)
1) Incompetence. HR departments don't know how to hire coders. They often think a degree means you know what you're doing. Portfolios are rarely asked for, likely because even if they were, the HR departments wouldn't know what the hell to do with them or how to evaluate them.
2) Time to market. Open Source does things when they're ready. Even projects with time-based releases do a "whatever is ready in that time" release, not a, "we're going to do a, b, c in this time." The rush to get to market doesn't leave a lot of time for security and bug fixing. After all, you can release a patch later, after the profit has started rolling in, right?
3) No corporate incentive. The product has a bug or security hole. Unless it becomes a big deal in the media, why bother paying programmer time to fix it? Your customers are already customers. You've already been paid. Without service contracts, fixing bugs just doesn't have any monetary incentive.
4) No programmer incentive. How many corporate programmers have any reason to put any pride into their work? None of the customers are going to know their name, think about hiring them on a side contract, etc. When software I write entirely for Free has a bug, I know my reputation is at stake, and there's a feeling of "how could I be so dumb, I have to fix this and make things right" feeling. I don't get that feeling for corporate work; if they want it fixed, they can pay me, otherwise, the bug can stay and I can get on with my life.
5) Security Through Obscurity. Why fix something nobody knows about? Not only are you not going to get money from your customers for your efforts/programmer-paychecks, you're not even going to get any PR bonuses.
There are many companies where the above don't apply. Good companies have good HR departments that bring other developers into the hiring process to select new employees who are actually skilled. Some companies have corporate pride and worry about quality as well as the bottom line. The above problems are not _rules_; they're just common patterns I've noticed in my work, and in the work of others.
As opposed to... (Score:3, Insightful)
Security (Score:2, Interesting)
structure is the problem (Score:2, Informative)
I believe it (Score:5, Interesting)
I'd love to be in charge of a popular project and embed something into the code that isn't a trojan or hack but a simple sentence or two. Something like "Congratulations - you've actually audited this code. Please email me@address for your $50 reward (To the first person only)".
Maybe if we occasionally put these little rewards into the code, people would be more apt to pore over it.
Then again, I'm not a programmer so I'm probably going to get a lot of "This idea sucks because of
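For what it's worth, the idea is trivially cheap to try. A sketch of what such a marker might look like, buried in a rarely-read code path; the address and reward amount are the commenter's invention:

    def _normalize_legacy_header(raw: bytes) -> bytes:
        # Congratulations - you've actually audited this code. Please email
        # me@address for your $50 reward (to the first person only).
        return raw.strip().lower()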
Re:I believe it (Score:5, Insightful)
You better read it... (Score:5, Interesting)
This is a common writing technique -- get a reaction based on title and initial statements, and then bring the real argument later on. Just don't walk away thinking this guy is saying open-source code has worse security overall based on the title; that's not what he said.
Re:You better read it... (Score:2)
grep understanding knowledge
I would have to say (Score:5, Interesting)
the article is a balanced and well-written one. From the title and summary, I concluded that this was possibly one of those "Rob Enderle"-type Microsoft FUD pieces, but surprisingly the author seems to know what he's talking about and makes a pretty balanced argument - the above excerpt is one example.
I agree with some of the conclusions/suggestions, like a more structured approach and software engineering techniques, but the fact remains that most software hobbyists (the principal contributors to open source software) *firmly* dislike process and red tape. And they're right: since they're pursuing a hobby, they should be able to do what they like as they see fit.
But then, he's obviously more qualified than the other Microsoft apologists who've written "knowledgeable" articles about open source insecurity.
John Viega is Chief Scientist of Secure Software, and the coauthor of "Building Secure Software" (Addison-Wesley) and "Network Security with OpenSSL" (O'Reilly).
Eyeballs (Score:2, Funny)
That's because their eyeballs are falling out looking at slashdot.org [slashdot.org].
Missed point.. (Score:4, Interesting)
For whatever reason, open source software hasn't had the same problems as Microsoft for instance. Whether that's because of an oversight on the part of hackers/crackers is beside the point. The point is that based on results open source is more secure.
Potential threats don't crash your servers.
Re:Missed point.. (Score:3, Insightful)
Should I fix the lock? Should I buy another lock from the same vendor? Is my house secure because nobody's tried to break in? Based on results, the house is secure.
I'm getting a new lock.
And closed-source? (Score:5, Interesting)
No I won't say where I work, but it's not MS.
IV&V Testing (Score:3, Insightful)
Only looks at half the issue (Score:2, Interesting)
Boggle (Score:5, Insightful)
Has this guy been working with better vendors than I have? I had to deal with vendors on a regular basis who let some pretty awful stuff slip through QA and some of them could be very defensive about accepting that a bug existed. I had to threaten to shut down multi-hundred thousand dollar contracts to get action sometimes, twice I actually did call bullshit on a vendor and abort the contract.
Money provides a stick to get vendors to fix their problems, but they still have human beings working on their products, and like all human beings they make mistakes, get defensive, have better things to do with their time, etc. Also, success (money) can breed indifference in a vendor: once you have a good portion of the market and have people locked into your offerings, you only have to be just good enough to keep the cost of the customers' irritation with you lower than the cost of switching to another product.
Structured vs. Free-Ranging (Score:5, Insightful)
The problem, as the author points out, is that many eyeballs do not equal "eyeballs in depth" or "coordinated eyeballs". The housefly has thousands of "eyes", yet that doesn't make it necessarily more visually acute (contrast it with, say, the eagle or the falcon).
I would suggest that, if you are going to code a secure product, the people and processes that make up the audit team should themselves be audited. The flowchart of security shouldn't start at the product itself; it should start at the people and processes that produce the product. Otherwise, what you end up with is a lot of people "reaching for the low-hanging fruit" (as the article suggests), making flashy features work, while the obscurer but necessary work gets ignored or done poorly. Security must be managed from the top down, not invented along the way by coders.
It's as much about control as it is security (Score:3, Interesting)
Perhaps my house would be more secure if Microsoft managed all the access in and out of it too. But the reality is, that's the kind of control I want to have - not them. The same is true of *MY* operating systems.
One thing he (and Microsoft) is missing (Score:5, Insightful)
Trust. More specifically doing away with Trust.
I had a minor epiphany yesterday, reading about Microsoft's DRM efforts and realizing what may be fundamentally wrong with their security. IMHO, Microsoft believes that bad security is due to bugs, and that if they can squash their bugs, they will be able to have secure code AND be able to TRUST the computer that their code is operating on. I'll even let them consider an insecure algorithm a bug, for the sake of this discussion. I think they really believe they can eventually ship sufficiently bug-free code to be considered trustworthy in execution.
Contrast that with the attitude toward security that has grown in the Open Source arena. No matter how good you get, bugs will *always* be found. No matter how secure you think your system is, *someone* can always get in. Finally, you have to consider *all* avenues of attack, not just the technical/cracking ones.
Some descendants of these attitudes:
Without physical control, the rest of the security is worthless.
Human engineering is probably the biggest security hole.
Consider security as a value proposition, in two ways:
1: Can I make it sufficiently expensive that they'll attack someone else, instead of me?
2: How much do I want to spend on security, and how do I balance that with a recovery plan?
Security isn't a "nail it down, once" thing, it's a process, and includes evolution.
Bugs will happen, so put security in layers, to try and eliminate single-point-of-failure issues.
It's not so much the code, or the eyeball count, or the specific eyeballs. It's the attitude.
article problems (Score:3, Insightful)
Judging from this article, I would doubt that the author has a true understanding of the open source concept. Just because something lacks structure doesn't mean that it's inferior. What really matters is how vulnerable a box is to being exploited. And in terms of real-world metrics, despite much-vaunted 'security initiatives', open source software has a better record of delivering network services more efficiently, reliably and securely than commercial alternatives.
Meaningless as phrased (Score:5, Insightful)
"What approach do I pick to make $PROJECT most secure?" is a meaningful question. Even more meaningful is "What approach do I pick to make $PROJECT most trustworthy?"
Open source is the answer to both. For a security-critical application like PGP it's imperative to get multiple independent reviews from fresh perspectives. Open source is a necessary but not sufficient criterion for being able to accomplish that.
By definition (Score:5, Insightful)
Otherwise, you could write a tool that probes for those.
The effect would be that that class of exploit would disappear.
Usually, exploits are much trickier (chaotic, even) than that to find, and are typically found "in the field" by actually using the software under a variety of conditions, after all the "eyeballs" have failed.
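A naive sketch of the kind of probe the parent imagines, scanning C sources for one well-understood exploit class (unbounded string copies); the script is illustrative, while real audit tools in the lint family (e.g. RATS, flawfinder) are far more thorough:

    # Flag risky C string calls in the files given on the command line.
    import re, sys

    RISKY = re.compile(r"\b(gets|strcpy|strcat|sprintf)\s*\(")

    for path in sys.argv[1:]:
        with open(path, errors="replace") as f:
            for lineno, line in enumerate(f, 1):
                if RISKY.search(line):
                    print(f"{path}:{lineno}: possible unbounded copy: {line.strip()}")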
But trying to be controversial to sell a book never hurt...
Move along, nothing to see here.
Re:By definition (Score:3, Insightful)
Otherwise, you could write a tool that probes for those.
Whatever happened to "lint"?
And, BTW:
How many security flaws would be solved if everyone followed those three si
And it's getting worse. (Score:4, Interesting)
Distros getting users into the habit of typing in root passwords every time the GUI pops up a window is asking for big trouble.
C'mon redhat or suse or debian or someone.
Please, please give me a distro where I don't _need_ to be root to install typical unprivileged packages, like upgrading a browser. How about installing them under /usr/local, with permissions such that anyone in the group 'local' can install them, or how about in my home directory? And yes, I know about "configure --prefix=$HOME". That doesn't solve the problem of losing the benefits of a package manager. (A sketch of the group check follows.)
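A minimal sketch of the permission model wished for above: installs under /usr/local allowed only for members of a 'local' group. The group name and prefix are the commenter's suggestion, not any distro's actual policy.

    # Unix-only sketch: check group membership before allowing an install.
    import grp, os, pwd, sys

    PREFIX = "/usr/local"
    GROUP = "local"

    def may_install(username: str) -> bool:
        g = grp.getgrnam(GROUP)                            # KeyError if group missing
        primary = pwd.getpwnam(username).pw_gid == g.gr_gid
        return primary or username in g.gr_mem

    user = pwd.getpwuid(os.getuid()).pw_name
    if not may_install(user):
        sys.exit(f"{user} is not in group '{GROUP}'; no installing under {PREFIX}")
    print(f"ok: {user} may install under {PREFIX}")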
Structured security testing vs. fast fixes (Score:3, Interesting)
Where's the evidence? (Score:3, Interesting)
This article is full of speculation about mechanisms, without any real proof. It doesn't even bother to cite the bullshit MS-funded studies.
If I want rabid fan baiting with no real evidence, well, I'm on Slashdot already, aren't I?
Both Source (Score:3, Insightful)
Novell said that in an internal study they found that open source tends to be more secure in popular applications, so Apache is more secure than IIS (as if we needed them to tell us that!), but they found that in obscure programs proprietary software tended to be more secure. This is probably the main idea behind Novell's recently announced "both source" [com.com] stance. Granted, they have financial reasons for not wanting to open source parts of their product line, but the rationale does seem logical. Though it would offend the Stallmanites.
Security means... (Score:5, Insightful)
$RantMode=on
Computer security means many things [google.com], but can be summed up simply as: The protection of the information and physical assets of a computer system.
As a reminder, this means Hardware AND Software security.
As a Real-world security geek, it appears to me that the three worst software issues are:
Please note that "Crackers hacking into your system in order to steal trade secrets" isn't even on the list.
So, no matter which of the top three you care to rant about having security issues in your software, they ALL can be solved with the same two pieces of software, on either your own PC or the corporate side: firewall software (set to deny all unless allowed) and any reasonably competent virus checker (scan local drives/emails/web pages before loading into the browser). (A toy sketch of deny-by-default follows the rant.)
So, the real question is not which has more bugs, closed source, or open source, but is instead "Why don't more users have those two pieces of software?"
Instead of beating each other up about security flaws in software, maybe we could all spend some small amount of our time educating users to get these two packages, and to keep them up to date.
Imagine if a million geeks all spent an extra 15 minutes while visiting their friends and relatives to educate them about this?
$RantMode=off
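As promised, a toy sketch of "deny all unless allowed": the default verdict is DENY, and only explicit allowlist entries get through. The protocol/port pairs are made-up examples, not anyone's real ruleset.

    ALLOW = {("tcp", 22), ("tcp", 80), ("tcp", 443)}

    def verdict(proto: str, port: int) -> str:
        return "ALLOW" if (proto, port) in ALLOW else "DENY"  # deny by default

    for probe in [("tcp", 80), ("tcp", 135), ("udp", 53)]:
        print(probe, "->", verdict(*probe))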
John Viega and Mailman (Score:5, Informative)
Most notable for the purpose of this discussion, Viega [viega.org] is the creator of Mailman [list.org], the fantastically popular GPLd mailing list management software. All was well and good with his view of the many-eyeballs theory until, one day a few years ago, he found a huge, glaring, holy-shit hole in Mailman. He was so alarmed that nobody had ever spotted this that, after fixing it, he reflected on what he'd learned and turned it into a thoughtful article, The Myth of Open Source Security. As he wrote: [developer.com] Again, Mailman was and is an extremely popular program -- this was not a problem of obscurity.
So, the OnLamp.com article under discussion here is a follow-up to his original article, as he points out in the opening of the new article (but people apparently aren't reading). As you can imagine, Viega is no rabid anti-OSS guy -- he's, in fact, the very model of what we want our developers to be. He writes good software, admits it when he writes bad software, and tells it like it is, even when we don't want to hear it.
(Disclaimers, such as they are: Viega is an adjunct professor at Virginia Tech, where I attend school, and I was the earliest alpha-tester of Mailman, in the late 90s.)
-Waldo Jaquith
OpenBSD is a good example (Score:5, Informative)
Another good example is Kerberos. It's been around a long time, looked at by researchers, students, open source developers, and closed source developers using it as a reference for implementing their versions. Yet, major flaws that weren't subtle have taken a long time to find.
It's not the eyeballs... (Score:3, Insightful)
If Windows had been open source seven years ago, we would have been able to keep in use a version that didn't integrate IE with the desktop, and we would have been able to come up with a clean mechanism to split the useful parts of the HTML control from the dangerous parts. The majority of the script- and zone-based email viruses and worms that have plagued the computer industry for most of the past decade would never have happened, and we wouldn't be waiting for the next attack to hit their daft "security zones" train wreck.
If Apple's LaunchServices and Webkit were open source, we'd be able to split LaunchServices in two and have a separate set of bindings for internet applications, and we wouldn't be waiting for the next protocol-based attack in Safari.
One word - Sendmail (Score:5, Interesting)
Any questions?
One real problem with open source is that it's really tough to fix a fundamental architectural problem by ongoing patching. If the problem is too big for one person to rewrite in a short period of time, it's unlikely to ever get fixed.
If the Linux world is to become secure, get behind NSA Secure Linux and push. Make apps work within the NSA Secure Linux mandatory security model. That has a chance of working.
TFA's arguments are seriously flawed... (Score:4, Insightful)
I repeat: you CANNOT be sure you are secure with closed source, no matter what you do. You CAN secure yourself with open source, if you make the effort.
The difference between open and closed source (Score:4, Insightful)
The difference lies not in the number of vulnerabilities. All software, open or closed source, will have holes in it. That'll be the case until we have a system in place to write completely bug-free code and a system to ensure vulnerability-free specifications (the worst security problems aren't bugs; they're design features that favor convenience over security). The difference lies in what happens when a vulnerability is discovered. In closed-source software, we've seen time and time again that the vendor's response is almost always to conceal the problem and deny it exists. In open-source software, by contrast, vulnerabilities are almost always published fairly quickly and fixes made available rapidly. That's because nobody is at the mercy of the original author for a fix. The people who discovered the problem can publish a code fix along with the details of the problem. People affected by the problem can patch the code themselves, if it's important enough.
In addition, security holes by design tend to get eliminated from open-source software. In proprietary software, if an insecure design feature benefits the vendor it's unlikely to be removed short of open revolt by the users. In open-source software if there's another way to do it that provides less security exposure and the original author won't change the design, someone else tends to get fed up, make the change and make the patch available. Eventually the original author either has to bow to user preference or find his own version of the software being ignored in favor of one that does.
Even Worse (Score:5, Insightful)
I was struck by something while reading this passage:
Not only is that sort of developer not looking for security bugs, but they're pretty likely to be just getting their feet wet working on that project and might well introduce a bug. Then, there's a significant possibility that nobody else cares about the feature that one developer added to scratch their own itch, so nobody's going to look at the code that implements it. Yes, there are more eyeballs, but those eyeballs are not evenly distributed. There are certain pieces of code that everybody is looking at, and there are vast tracts of code that practically nobody is looking at - none with an eye toward security. How many Linux drivers have you looked at? I'll bet the majority of the people reading this haven't really looked at any Linux kernel/driver code whatsoever. Have you looked at the code for Apache? Perl/Python/Ruby? MySQL? Gcc? Open-source users outnumber programmers a hundred to one, and each developer has a fairly narrow area that they're either interested in looking at or qualified to look at, so the number of eyeballs on some piece of code implementing an unpopular feature in a popular package is nowhere near what some people seem to think. It might be dozens, it might be one, and quite often it will be zero once the guy who wrote it moved on to something else. That's no better than the almost-always-one you'll get with commercial software, and sometimes it's worse.
Maybe not Open Source itself... (Score:3, Interesting)
My Linux distribution, Debian, is stable because it is not a company, and it doesn't have to release new product too often to make marketing happy. Because there is no profit motive, Debian can take the time to release stable packages. If Debian were not using open source, this would still be the case.
So, it isn't specific to open source, but many open source projects have other features that make them more secure.
His Argument Only Has Merit (Score:3, Interesting)
Pardon me while I laugh myself into a coma.
Structured code auditing (Score:5, Insightful)
As a developer of proprietary software (hey, no flames, it's my job), I can assure you that there is very little structured security analysis of closed source software. Some closed software may be rigorously audited because of its nature, but the same holds true in Open Source (OpenBSD). You're not going to see any security audits for non-security software. You might see a few half-hearted attempts at it (like Microsoft's month long fix-fest), and very localized panic attacks when vulnerabilities are made public, but for the most part it's an ignored area of development.
"Security through obscurity" is still king, because the people making software security decisions in commercial firms generally don't know any other way. They also do not see the financial value in secure software, because it's not something that the customer will pay extra for in non-security related software. Then there's the problem of ignorant coders.
We have all gone through the phase where we think we know about security and encryption. In a proprietary environment, such a security ignoramus can reach chief-software-architect level. In my own work I've seen three "clever" encryption schemes by senior developers that were complete jokes. One scheme was so bad it even produced *sequential* keys (a toy illustration follows). In the open source community, such security hubris is slapped down quickly.
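A toy illustration of why sequential keys are a joke: observe one key and every other key is a simple addition away. The scheme below is an invented stand-in for the kind of thing described above, not code from any real product.

    import itertools

    def bad_keygen(seed: int):
        # the "clever" in-house scheme: key N is just seed + N
        for n in itertools.count():
            yield seed + n

    gen = bad_keygen(seed=40_000)
    leaked = next(gen)                        # attacker sees a single key...
    print([leaked + i for i in range(1, 4)])  # ...and predicts all the others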
In short, the author is wrong. Open Source is not inherently more secure than proprietary software, but the open development model encourages a higher level of security analysis.
Professional (Score:3, Insightful)
15 minutes of soapbox fame ... ignore it (Score:3, Insightful)
Auditing is nice, but someone has to pay for it (Score:3, Interesting)
1) These audits must be conducted by third parties, in order to be trusted;
2) These audits are not done for free; their cost gets added to the cost of the software.
The cost of auditing open-source software will probably have to be passed to the customers, for smaller projects. It could be split among groups of interested customers and benefit the whole community, and still remain cheaper than most commercial alternatives.
Of course, big customers (the Navy?) could implement their own auditing scheme and pay for it, and commercial software companies would probably open their source code to these privileged customers. Unfortunately, most small companies cannot afford to call Microsoft, or Accpac, or SAP, and force them to provide their source code and submit to an audit by a specific auditor. (And, as we saw lately, relying only on the reputation of auditing companies like the Big Four can mean that they give good results to their big golf buddies...)
Finally, customers like the Navy would probably get cheaper software if they went for F/OSS alternatives and audited them at their own cost, rather than paying for audited commercial software.
Security (Score:3, Interesting)
The kind of methodology he wants for OSS just isn't going to happen across the board. Just as in commercial software, the "best practice" style you learned in college gets thrown out once you actually have to DO something.
Large projects require similar methodology just to keep consistent, but small programs will never do so. This is the real world, not the classroom!