Security

Open Source Security: Still A Myth 502

jpkunst writes "John Viega (coauthor of, among others, Building Secure Software) argues in Open Source Securitey: Still A Myth at O'Reilly's onlamp.com that "open source software may currently be less secure than its commercial counterparts." According to him, there may be "more eyeballs" looking at open source software, but he does not believe those eyeballs are looking for security problems in a structured way."
This discussion has been archived. No new comments can be posted.

Open Source Security: Still A Myth

  • Still... (Score:5, Insightful)

    by bustersnyvel ( 562862 ) on Friday September 17, 2004 @10:18AM (#10276677) Homepage
    ... once something is actually found, it's fixed a lot faster than in most commercial software.
    • Re:Still... (Score:2, Interesting)

      Well yes, that's partially because most of the time OSS "vendors" forgo the testing procedures that commercial vendors follow.

      "It works on my box...bug must be fixed!"

      This strategy doesn't hold water in the business world.
      • Re:Still... (Score:4, Interesting)

        by bustersnyvel ( 562862 ) on Friday September 17, 2004 @10:27AM (#10276767) Homepage
        That's true for small home-projects, but not for projects like Mozilla, Gnome, OpenOffice.org, Gimp, etc.
        • by jidar ( 83795 ) on Friday September 17, 2004 @12:54PM (#10278453)
          That's true for small home-projects, but not for projects like Mozilla, Gnome, OpenOffice.org, Gimp, etc.


          This isn't even close to being true. Why are you spreading this misinformation? Large open source projects are very competitive with smaller projects in their bug-fix turnaround. Nearly every major security problem is fixed the day it hits the media. Most often, when /. has a story on a new security bug in something big like Apache, Mozilla, or the Linux kernel, you can find a link to a patch in the story itself, and if not, it's always in a comment below.
          There may be exceptions, but they are rare.
          • Nearly every major security problem is fixed the day it hits the media.

            There are two ways to achieve that: control the media, or fix bugs quickly. 8-)

            Someone who discovers a bug in free software usually delays disclosure until the fix is ready. This creates the illusion of quick fixes, even though it usually takes two weeks or more to create a fix. (It's quite instructive to look at the time stamps contained in patches released by GNU/Linux distributors.)
          • by bmajik ( 96670 ) <matt@mattevans.org> on Friday September 17, 2004 @02:17PM (#10279377) Homepage Journal
            There is a commonly held position that many F/OSS projects get source patches out the door very quickly.

            That's true.

            One problem with this is that people compare the time-to-released-source-patch-on-a-mailing-list-or-in-cvs to the time it takes for something to show up on Windows Update.

            I don't think that's fair for the following reasons:

            1) Patch Quality.
            It is clear that the volume of basic testing done on many instant-turn-around source patches is zero.

            Comparatively, as often as an MS patch manages to break something somewhere, consider how much worse it would be if there weren't a few days of targeted regression testing being done. The official recommendation from MS is to test patches before putting them into production, but there have been a relatively low number of patch recalls from MS.

            Finally, I think it bears mentioning that with F/OSS, the initial patch is sometimes rewritten over the course of several days until something proper is agreed upon, and that's the code that actually ends up living with the product.

            So I'd consider these source-level patches to very often be of "here is something that appears to close the hole and not break anything I tried, good luck!" quality.

            2) Patch Applicability
            When a hole is discovered in apache, the time it took for an apache developer to submit a source diff is NOT the same deliverable as what you're getting from a commercial vendor patch. A source-level patch only does me any good if I am running a source-built tarball in production, and I am relatively current with whatever source base the patch is applied against, and I can handle the manual patch/compile/make install process (and if something goes wrong, I've got to back out the patch and compile/make install _again_)

            Most people, especially those running production machines, are not running built-from-source software. You install Redhat. You want apache? You use the redhat apache package. You now need to wait for the updated redhat apache package to get the bugfix, or you get the latest cvs snap and build from source. Now you've got a lovely problem, because the way redhat (or any distro) builds apache is different from the defaults, so you have to go and figure out how your distro likes to build its packages, OR you need to accept the build defaults and rebase your config files to the new settings.

            So really, the vendor binary package is what many people need to wait for before they can truly patch their machines. The source diff is nice, but not something they can easily consume.

            I think between these two points, it's pretty unfair to compare time-to-patch between MS and someone-posted-a-diff-somewhere.

            I think if you look at the time from vuln report to updated binary tarball being released by some of the linux distros, you'll be surprised.

      • Re:Still... (Score:5, Insightful)

        by echeslack ( 618016 ) on Friday September 17, 2004 @10:32AM (#10276825) Homepage Journal

        I think it may also have to do with the variety of testing. I admit that you are probably right that a lot of OSS vendors don't do extensive testing, but a lot of them don't have to. If the vulnerability only affects one product on one hardware platform, you have to test various configurations, but you have at least an order of magnitude less testing to do than, say, Microsoft might have for a fix that crosses multiple versions of Windows and may affect PCs, PDAs, etc.

        Also, if bugs are found by those in the community, the fix may have time to be tested before it is widely publicized. It seems (just from observing announcements, nothing scientific) that a lot of Microsoft vulnerabilities are discovered by third parties that cannot go and fix them, while in OSS they tend to be discovered by people in the security sector, who often provide a fix at the time of announcement or do not announce until a fix is in cvs.

      • Re:Still... (Score:5, Insightful)

        by erktrek ( 473476 ) on Friday September 17, 2004 @10:35AM (#10276863)
        What about the "just get it done we have a deadline to meet and screw everything else" mentality of commercial vendors?

        One of the big shockers out of college and into the big bad business world was the idea of "good enough" versus "doing it right".

        E.

        • Mod Parent Up (Score:3, Interesting)

          by ktulu1115 ( 567549 )
          Very true... I've discovered the same thing myself, and honestly, I can't stand it.

          It's sad to see companies just pushing out products as fast as possible to make the best buck, in the end it causes nothing but problems.

          Anyone else encounter this with their current employer or previous ones? I'd be interested to hear the story.
          • Re:Mod Parent Up (Score:3, Insightful)

            by Negatyfus ( 602326 )
            Nah, this only works if you have a monopoly lock-in. Sure, you're also kind of locked in if you just spent $20,000 on a software package you don't wanna throw away but that's full of bugs. Still, this will destroy your reputation and do you no good in the end.

            The golden rule of business is to make your customers goals your own goals, because long-lasting relationships are essential to your own long-term success.
            • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Friday September 17, 2004 @11:43AM (#10277669)
              "Nah, this only works if you have a monopoly lock-in."

              Maybe. But it is PRACTICED any time a company wants to beat a competitor to market OR to catch up to a competitor in that market.

              "Sure, you're also kind of locked in if you just spent $20,000 on a software package you don't wanna throw away but that's full of bugs."

              That's it. If you can sell it, it doesn't matter how buggy it is. That way you get MORE MONEY for "maintenance plans" and "support contracts" and "upgrade insurance".

              "Still, this will destroy your reputation and do you no good in the end."

              A bad rep and a product on the market will always beat a good rep and no product. There's this thing called "emotional investment" that happens a lot in this field. People get their own self-worth confused with the vendor or product and so they will stick with that vendor or product.

              "The golden rule of business is to make your customers goals your own goals, because long-lasting relationships are essential to your own long-term success."

              The other golden rules are that quarterly earnings matter and if your competition loses, you win.
          • Re:Mod Parent Up (Score:3, Interesting)

            by Anonymous Coward
            I'm going to post this anonymously for what should be obvious reasons, but...

            I'm part of a team that maintains a web service that, among other things, has a user interface that runs a SQL query to generate a report over various database tables. Actually, it doesn't generate the SQL queries; they were all pregenerated and stored in a file. The final webpage contains several of these queries as options that you can then send back to the server through a query string parameter to a page that displays
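            The comment is cut off, but the design it describes -- letting the browser hand SQL back to the server through a query string -- is the textbook SQL injection setup. As a hedged illustration (the table and function names are invented, and SQLite's C API is used purely for concreteness), the usual fix is to keep the SQL server-side and let the request supply only a bound parameter:

                /* Hypothetical sketch: the query text never leaves the server;
                 * the request contributes only data, bound as a parameter. */
                #include <stdio.h>
                #include <sqlite3.h>

                int print_report(sqlite3 *db, const char *user_supplied_id) {
                    const char *sql = "SELECT title, body FROM reports WHERE id = ?1;";
                    sqlite3_stmt *stmt;
                    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK)
                        return -1;
                    /* Untrusted input is bound, never spliced into the SQL. */
                    sqlite3_bind_text(stmt, 1, user_supplied_id, -1, SQLITE_TRANSIENT);
                    while (sqlite3_step(stmt) == SQLITE_ROW)
                        printf("%s: %s\n",
                               (const char *)sqlite3_column_text(stmt, 0),
                               (const char *)sqlite3_column_text(stmt, 1));
                    return sqlite3_finalize(stmt) == SQLITE_OK ? 0 : -1;
                }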

          • Re:Mod Parent Up (Score:3, Interesting)

            by mikael ( 484 )
            I worked for a network products company that had this happen with one of their legacy products that was being sold to a foreign customer just before the open source community came up with an equivalent version.

            Because of this, our team leaders were more interested in getting their milestone completion bonuses than getting the bugs out of the system (who cares, we're all going onto new projects, even if not at this company).

            Every two weeks we had a milestone for a particular module. Regardless of the state
            • Re: your sig: I see an even worse one at our local supermarket:

              "Warning: May contain traces of peanuts or peanut oil".

              Where was this label found? On a glass jar... of peanuts. Keep in mind it was a see-through glass jar, making it obvious even to people who can't read that it is a jar of freakin' peanuts.

              My take on it: the warning was telling us that it might be a jar of fake peanut substitutes, by saying that the jar merely MAY contain peanuts, instead of saying that it definitely did.
          • Re:Mod Parent Up (Score:3, Insightful)

            by Suidae ( 162977 )
            I see plenty of poorly designed and difficult-to-maintain software, but often this is NOT a business problem. The problem is that some engineer muffed up the design or analysis and didn't realize it until it was too late to start over. As much as I'd like to rewrite much of the software I have to work on, the fact is that if I'm going to get paid, the software has to make a profit, which means I can't go rewriting just because it isn't as elegant as it could be.

            Sure it rubs me the wrong way every time I have to
        • Re:Still... (Score:5, Insightful)

          by Anonymous Coward on Friday September 17, 2004 @10:50AM (#10277021)

          One of the big shockers out of college and into the big bad business world was the idea of "good enough" versus "doing it right".

          If you think this mindset does not exist in OSS, then you are naive. Do you honestly think that OSS software is released without the developers knowing that it contains bugs? OSS developers don't write flawless code. Therefore any OSS code released to the public has been deemed to have reached a point of "Good Enough".
          • Re:Still... (Score:3, Insightful)

            by hunterx11 ( 778171 )
            At least OSS developers usually document known bugs. There's no manpage for Windows :)
              Emphasis on *usually*. And if you weren't aware, almost all manpages are entirely out of date. *Years* out of date, even. Could you even imagine trying to maintain the bash manpage? That's an entire job in itself.

              There is horrid source code out there, with no commenting or documentation. Most people point to Linux or Apache or some such for examples of where OSS succeeds, yet avoid looking at all the countless other OSS that has far fewer eyeballs looking over the source code.

              It just does not work generalizing OSS as better than proprietary when it comes to quality or security matters.
              • Re:Still... (Score:3, Insightful)

                by benjcurry ( 754899 )
                It just does not work generalizing OSS as better than proprietary when it comes to quality or security matters.

                How about this: the more IMPORTANT a piece of software is, the better OSS-style development will work.

                Does this apply more effectively than "OSS is better"?

                I'd like to hear your opinion.

            • Re:Still... (Score:3, Insightful)

              by Frizzle Fry ( 149026 )
              Disagree. In my experience, the Microsoft KB articles that document known bugs and how to work around them are much more thorough and up-to-date than the known bugs listed in manpages for unix programs.
        • Re:Still... (Score:5, Interesting)

          by Jakhel ( 808204 ) on Friday September 17, 2004 @10:52AM (#10277052)
          You know, it's funny that you say that. I was in a software project management class my senior year in college. We were required to create a piece of software that performed specific functions and turn it in at the end of the semester. Because this was a group project, all groups ended up missing some deadlines here or there, which inevitably cost them man-hours in the long run (we were required to keep track of cost). After about the 3rd missed deadline by groups (due to bug workouts, people not doing their part, etc.), my professor, a former IBM employee, told us a story.

          He said one year, he was heading up a project that involved writing software for IBM machines. They were nearing the release date and still had dozens (if not more) of bugs to work out. He went to his boss, a B-school guy, and said "look, I know we're close to the deadline, but there are still many bugs that we really need to work out before this thing ships. We don't want to release a product that costs this much and still has some things wrong with it".

          Now keep in mind that there were hundreds, if not thousands of companies ready to buy the machines as soon as it was released. They had orders from companies around the world. Because they were competing with other companies selling similar products, the need to meet the deadline was even more important.

          Back to the story, his boss looked at him and said "so you mean to tell me that you think we should delay the release of a product that has the potential, and is almost guaranteed, to earn us hundreds of millions of dollars for a few bugs? I don't think so. We'll release the product and support it later on. Tech support will cost us less in the long run than delays at this point".

          So they released the product, sent developer level techs around the world after companies began to complain about the bugs, and that was that.

          Moral of the story? Sometimes, from a business standpoint, you should release the product and support its bugs later on. But that usually depends on the amount of competition in the market and the money that is riding on the product. Yeah, it sucks from a developer's standpoint, but developers don't make business decisions in the real world.

          See examples: HL2, DNF, etc.
        • Shocked? (Score:3, Interesting)

          by Number_5 ( 519448 )
          You didn't see this in school? All of your assignments were flawless and on time? All of your programs did error checking of all user input? You spent half of the time on every assignment doing error testing with data sets generated to test every boundary condition? What about that History or Literature course that you couldn't care less about?

          The idea of "good enough" or "I am sick and tired of this project" is not just found in the business world, it is basic human nature.
      • Re:Still... (Score:5, Insightful)

        by LnxAddct ( 679316 ) <sgk25@drexel.edu> on Friday September 17, 2004 @10:36AM (#10276865)
        *cough* Service Pack 2 *cough*
        *cough* Disable javascript which is essential to many businesses' core web applications *cough*
        *cough*Break standard compliant web sites and standards because we can*cough*
        *cough* I could go all day coughing under my breath about things MS breaks and on purpose*cough*

        Real operating systems aren't so interdependent that changing one component may break many unrelated components. I don't know about other Open Source vendors, but Red Hat does extremely intensive testing; I would assume Novell does too. The nice thing is, it usually goes significantly quicker, because if they update a web browser, they don't need to make sure it doesn't break the Office Suite, Mail Client, and File Browser.
        Regards,
        Steve
      • Re:Still... (Score:5, Insightful)

        by Silver Sloth ( 770927 ) on Friday September 17, 2004 @10:36AM (#10276876)
        In my experience most good coders are very proud of their work. Whilst commercial coders may have to let work that they know is shoddy and full of holes go out because they have to meet a deadline, OSS coders are looking for peer approval, and you don't get that with buggy code. Imagine the shortcuts that are being taken by M$ as the pressure to get Longhorn out in time rises.
      • Re:Still... (Score:3, Insightful)

        by Jason Earl ( 1894 )

        Oh please. Take a look at the track record for the largest commercial software vendor (Microsoft) and the Linux distribution of your choice when it comes to security updates. When was the last time that you heard of a Linux security fix that had serious repercussions to other software installed on the box, and when was the last time that a Linux patch failed to fix the problem in question and had to be backed out?

        Microsoft has released some amazingly bad patches in the past.

        In the "real" world Free So

    • Missing the point (Score:5, Insightful)

      by einhverfr ( 238914 ) <`chris.travers' `at' `gmail.com'> on Friday September 17, 2004 @10:49AM (#10277008) Homepage Journal
      I actually think the parent poster and the parent article both miss the point to some extent, though the article is closer to the mark.

      Open source is not a magic bullet that will automatically solve all our security problems, as much as I advocate open source software, and open source software is not automatically more secure. The reason this fallacy persists is that people tend to think of security in terms of buffer overruns instead of a secure structure. No development methodology can ensure a secure structure, because it is an issue which is either solved in the design phase or not at all.

      The question shouldn't be "Can this software be compromised" because you should assume that all software can be, but rather "what happens if this software is compromised." Some open source projects are very good at this, and some aren't.

      It is also true that for some projects (like OpenSSL), this question is irrelevant because the primary usefulness is as a library, so the application will have no security itself. But these are the exceptions rather than the rule.

      Compare the security of Sendmail (open source) to Postfix (also open source). Which is more secure by design? Compare Apache to IIS. Which is more secure by design? (IIS drops permissions after authentication, Apache does so before). Compare Sendmail's security design to IIS. Which is more secure by design?
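      To make the parenthetical above concrete, here is a minimal, hypothetical C sketch of the "drop privileges before touching untrusted input" pattern that Apache-style daemons use (this is not Apache's actual code, and the "www" account name is an assumption): root is used only to bind the privileged port, and is given up before the process ever reads a request.

          /* Sketch only: bind a privileged port as root, then drop
           * privileges before handling any client data. Assumes an
           * unprivileged "www" user exists; error handling is terse. */
          #include <stdio.h>
          #include <string.h>
          #include <unistd.h>
          #include <pwd.h>
          #include <grp.h>
          #include <sys/socket.h>
          #include <netinet/in.h>

          int main(void) {
              int s = socket(AF_INET, SOCK_STREAM, 0);
              if (s < 0) { perror("socket"); return 1; }

              struct sockaddr_in addr;
              memset(&addr, 0, sizeof addr);
              addr.sin_family = AF_INET;
              addr.sin_addr.s_addr = htonl(INADDR_ANY);
              addr.sin_port = htons(80);   /* privileged port: needs root */

              if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0 ||
                  listen(s, 16) < 0) { perror("bind/listen"); return 1; }

              /* Root was only needed for bind(). Drop it NOW, before
               * accept(), so any later compromise runs unprivileged. */
              struct passwd *pw = getpwnam("www");
              if (pw == NULL) { fprintf(stderr, "no www user\n"); return 1; }
              if (setgroups(0, NULL) != 0 ||
                  setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
                  perror("drop privileges"); return 1;
              }

              /* ...accept() and parse untrusted requests from here on... */
              return 0;
          }

      Note the order: setgroups() and setgid() must come before setuid(), because once root is gone the process can no longer change its groups.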

      Open source is important even from a security viewpoint as it allows us to better understand the architecture of the program we are considering and make educated choices about whether we can run it in a secure manner. However, it is no magic answer and just because something is open source does not guarantee its security.
      • Mod parent up (Score:3, Insightful)

        by TheLink ( 130905 )
        Despite what the "religious fanatics" believe, how secure a program is does not depend on whether it is open source or closed source.

        It is more dependent on the programmer AND the person configuring it. Look at PHPNuke. Look at djbdns.

        Thinking that many eyes can spot security problems is like thinking that a million monkeys can type out Shakespeare.

        You need skills and experience, eyes are just a useful option.
  • Securitey (Score:5, Funny)

    by whitelabrat ( 469237 ) on Friday September 17, 2004 @10:19AM (#10276683)
    Looks like geeks with spelling skills are still a Myth too?
  • More Eyeballs (Score:4, Insightful)

    by sobriquet ( 666716 ) on Friday September 17, 2004 @10:19AM (#10276689)
    What about more eyeballs meaning a faster fix?
    • Re:More Eyeballs (Score:5, Insightful)

      by DogDude ( 805747 ) on Friday September 17, 2004 @10:25AM (#10276744)
      What about more eyeballs meaning a faster fix?

      But again, the problem is that the problems are not being found in the first place. Look, for example, at Sendmail. It's 25 years old [ccone.at], but is *still* a buggy, buggy app. It STILL isn't secure and bug-free. The inevitable comparison with MS will come up, so let's look at that. First off, MS hasn't even been *around* for 25 years. As far as specific products go... with all of its patches, W2K is generally considered quite stable, and relatively secure (again, with all of its patches in place). W2K is about 5 years old at this point.

      So, I think that this article has some merit.
      • Re:More Eyeballs (Score:5, Insightful)

        by gowen ( 141411 ) <gwowen@gmail.com> on Friday September 17, 2004 @10:41AM (#10276927) Homepage Journal
        Actually, the comparison between Sendmail and Windows 95/98/ME is a good one. They're both from a more innocent time, when code could pretty much trust everything it was being fed. As such, there was little or no security designed into them, and it has had to be bolted on from the outside, in.

        And look at the success they've achieved with that style. If we learn anything from Sendmail, it's that security must be designed in, rather than added as an afterthought.
      • Re:More Eyeballs (Score:3, Interesting)

        by j-turkey ( 187775 )

        Look for example, at Sendmail. It's 25 years old, but is *still* a buggy, buggy app.

        Are you sure that's a fair comparison? Sendmail is a kludge. It's had bugfixes tacked onto features, tacked onto bugfixes, all heaped into a 25-year-old codebase. It's never been rewritten from the ground up, and by today's standards, it was a mess 25 years ago (when it was written, security was barely a blip on the radar screen). The same can be said for a package like WuFTPd.

        What about a package like Qmail, or an

      • Re:More Eyeballs (Score:3, Interesting)

        by Duckman5 ( 665208 )
        Actually, Microsoft was founded in 1975 [microsoft.com]. That would make them almost 30 years old.

        As far as sendmail goes, that's why I don't use it. I use procmail for all my SMTP needs. Win2k is a great product, I was really happy when I was using it, but the bottom line is that it still has its problems. There are still patches that get released to address security issues every now and then.

        All software has its problems because it's written by people and people are imperfect. However, there are a lot of ch
      • Re:More Eyeballs (Score:4, Informative)

        by Six Nines ( 771061 ) on Friday September 17, 2004 @11:00AM (#10277156)
        A couple of nits to pick...

        1) MSFT is about to celebrate its 30th anniversary (founded 1975, incorporated 1981).

        2) Windows has been around for 20 years (Windows 1.0 was beta tested in 1983-1984, released 1985).

        3) The Windows NT/2000/XP code base is almost 12 years old (NT 3.1 was released in 1993).

        4) Persistently buggy apps are found among both open- and closed-source software. There's no monopoly on spaghetti code.
      • Re:More Eyeballs (Score:3, Informative)

        by valkraider ( 611225 )
        First off, MS hasn't even been *around* for 25 years.

        Wrong. [microsoft.com]

        Microsoft was founded in 1975. That makes it 29 years old, by my math.

        Look for example, at Sendmail. It's 25 years old

        Wrong. [chipchat.com]

        Even your own link states that Sendmail shipped first in BSD 4.1c, which was not released until late 1982. Sendmail's PREDECESSOR - "delivermail" dates back to 1979.

        Not that this all matters - but I find it funny when in a discussion about quality control, people don't bother to get their facts at least kindo
  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Friday September 17, 2004 @10:20AM (#10276694)
    Comment removed based on user account deletion
    • "The difference is that when a security hole IS found (whether it be by the good guys or the bad guys), it gets patched VERY quickly compared to commercial software..."

      I often hear this claim made by proponents of OSS, but I have yet to see any hard evidence backing it up. Can anyone offer something more solid than assumptions?

  • by w.p.richardson ( 218394 ) on Friday September 17, 2004 @10:20AM (#10276698) Homepage
    Still as much of a myth as "Securitey"?
  • by spif ( 4749 ) on Friday September 17, 2004 @10:21AM (#10276703) Journal
    OpenBSD [openbsd.org].

    Developers! Developers! Developers! Developers!
  • by garcia ( 6573 ) * on Friday September 17, 2004 @10:21AM (#10276706)
    Others will say, "Open source developers are more clued in to security issues due to a better sense of community, and their software is more secure as a result."

    He's right. They may not be looking for security holes, and they may not find them because of all the "eyeballs", but they will certainly fix them and release a patch to the community shortly after one is discovered.

    Now, even if MSFT did release a patch right away it wouldn't make much of a difference as most people don't update their software. The OSS community, OTOH, is still mostly comprised of people that have a Clue and those people generally patch immediately.

    So while what the article states is true, currently the OSS community does respond faster and with fewer problems than their counterparts on the other side of the fence.
  • Huge Upsides? (Score:2, Interesting)

    by rwven ( 663186 )
    You're looking at huge upsides also, though. Think about the fact that when a security hole is found... there is usually a fix/patch for it within two days... or less. Not to mention that with the vast amounts of people working on the code... security holes and bugs usually get found and fixed on the code side of things before anyone from the outside finds them. When you compare the two markets, there is honestly very little difference in the security of general open source software and general closed source software.
  • Go team go! (Score:5, Funny)

    by Maagma ( 714192 ) on Friday September 17, 2004 @10:21AM (#10276708) Homepage
    Securitey, it's like 'Security' but with an extra 'e' for effort!
  • What a crap ! (Score:3, Insightful)

    by Anonymous Coward on Friday September 17, 2004 @10:22AM (#10276721)
    Open source's view on security may not be structured, but it is there, and it is very responsive and practical.

    Corporations' view on security is even less structured.

    Actually, corporations are not concerned with security at all! They are in the business of making money, not secure products.


    Now the million dollar question: Who paid this guy ?

  • Wrong (Score:5, Insightful)

    by MikeMacK ( 788889 ) on Friday September 17, 2004 @10:23AM (#10276728)
    They are worried that open source developers are too much "hacker" and too little "engineer," cobbling together solutions without going through a structured software engineering process (such as requirements, specification, and analysis).

    They believe it, but offer no proof. You don't create an OS kernel by hacking in bits of code, you don't create any complex software by just "hacking" it together. Mozilla, OpenOffice, KDE, GNOME, all the major pieces of Linux software, in my opinion, are very structured and follow a solid design process.

  • Responsibility? (Score:2, Interesting)

    by webword ( 82711 )
    The deal with proprietary software is that someone is on the hook. Developers at companies are looking for security problems because they know that if something goes wrong, their a55 is on the line. They're responsible. OTOH, with open source, who is responsible? If there is a flaw (yes, although it is quickly fixed) there isn't an entity or organization responsible. This is a huge reason why companies like to purchase software. They have clear legal rights, and the other guy is on the hook.
    • Read your EULAs recently? The vendor's entire liability is a product refund, and only if they see fit...

      Really on the hook, yeah...
    • Re:Responsibility? (Score:4, Insightful)

      by evilpenguin ( 18720 ) on Friday September 17, 2004 @10:31AM (#10276808)
      Well, yes and no. Read your closed, proprietary license agreements. ALL software is sold AS IS without warranty as to merchantability or fitness. The companies are not "on the hook" in any sense of legal liability.

      They are "on the hook" in the sense that, if the market decides their product is poor and there is a alternative product, the market will move there. In that sense, and in that sense only, closed vendors are "on the hook." Of course, this presumes the existence of a competitor. Does Microsoft have competition? In some markets, yes. In some, no.

      So a product without a competitor is no different from an open source product. However, I would argue that market forces act on open source as well. The competition is for developers. Developers will work on a project that is useful and is used. They will tend not to work on projects that are not used. In this space, too, some open source products have competitors and some do not.

      Money is not the only market force.
      • Re:Responsibility? (Score:3, Insightful)

        by Znork ( 31774 )
        "So a product without a competitor is no different from an open source product."

        Actually, it's far, far worse, as there's an immediate commercial disincentive to security development.

        It costs money.

        Someone may do code reviews for free or for fame on open source, but nobody is going to review commercial proprietary closed source without a fat paycheck.

        As long as there is no serious competition any money spent on security is wasted money. And once security becomes a selling point it's most likely much more
    • No Responsibility (Score:5, Insightful)

      by DreadSpoon ( 653424 ) on Friday September 17, 2004 @10:39AM (#10276903) Journal
      Actually, if my experiences are any indication, most corporate development teams don't have much care for security concerns. There are several reasons.

      1) Incompetence. HR departments don't know how to hire coders. They often think a degree means you know what you're doing. Portfolios are rarely asked for, likely because even if they were, the HR departments wouldn't know what the hell to do with them or how to evaluate them.

      2) Time to market. Open Source does things when they're ready. Even projects with time-based releases do a "whatever is ready in that time" release, not a, "we're going to do a, b, c in this time." The rush to get to market doesn't leave a lot of time for security and bug fixing. After all, you can release a patch later, after the profit has started rolling in, right?

      3) No corporate incentive. The product has a bug or security hole. Unless it becomes a big deal in the media, why bother paying programmer time to fix it? Your customers are already customers. You've already been paid. Without service contracts, fixing bugs just doesn't have any monetary incentive.

      4) No programmer incentive. How many corporate programmers have any reason to put any pride into their work? None of the customers are going to know their name, think about hiring them on a side contract, etc. When software I write entirely for Free has a bug, I know my reputation is at stake, and there's a "how could I be so dumb, I have to fix this and make things right" feeling. I don't get that feeling for corporate work; if they want it fixed, they can pay me; otherwise, the bug can stay and I can get on with my life.

      5) Security Through Obscurity. Why fix something nobody knows about? Not only are you not going to get money from your customers for your efforts/programmer-paychecks, you're not even going to get any PR bonuses.

      There are many companies where the above don't apply. Good companies have good HR departments that bring the other developers into the hiring process to select new employees who are actually skilled. Some companies have corporate pride and worry about quality as well as the bottom line. The above problems are not _rules_; they're just common patterns I've noticed in my work, and in the work of others.
  • As opposed to... (Score:3, Insightful)

    by BJZQ8 ( 644168 ) on Friday September 17, 2004 @10:24AM (#10276731) Homepage Journal
    As opposed to the general "closed source" software method of finding bugs/security holes by accident, sweeping them under the rug, and hoping nobody finds them?
  • Security (Score:2, Interesting)

    by Anonymous Coward
    I'd go as far as to say that open-source developers are more likely to be interested in security... some corporations certainly don't put it first. That's a factor regardless of how many eyes are on the code, or how "structured" their search for flaws is.
  • by Anonymous Coward
    If you're looking for something in the woods and you only have a few people, you have to map out a plan, a structure for searching the woods. You assign people to certain areas, and in this process you make inherent assumptions about your target. You always have specific areas that are searched last, specific areas that are searched by the least skilled people, and specific areas that are searched by people who are skilled but have a specific mindset that colors their search (for instance, they might assu
  • I believe it (Score:5, Interesting)

    by Judg3 ( 88435 ) <jeremy@@@pavleck...com> on Friday September 17, 2004 @10:26AM (#10276753) Homepage Journal
    I'm going to venture a guess that upwards of 90% of the linux community just assumes that the package they downloaded is secure, simply due to the fact that it is open source. They don't look at the source code, because they either wouldn't understand it or they just think "Hey, it's open source and popular, therefore someone must have pored through the code".

    I'd love to be in charge of a popular project and embed something into the code that isn't a trojan or hack but a simple sentence or two. Something like "Congratulations - you've actually audited this code. Please email me@address for your $50 reward (To the first person only)".

    Maybe if we occasionally put these little rewards into the code, people would be more apt to pore through them.

    Then again, I'm not a programmer so I'm probably going to get a lot of "This idea sucks because of ...." posts hehe.
    • Re:I believe it (Score:5, Insightful)

      by savagedome ( 742194 ) on Friday September 17, 2004 @10:32AM (#10276819)
      Jesus, no. If that happens, I will be grepping for "rewards" and similar strings without actually doing anything. The people in charge of big/popular/successful projects already take care of these things during the 'design' phase. That is also part of the reason that these projects are big/popular/successful to begin with. Look at Apache and you will get the idea. More eyeballs != More security. But the nice thing is the flexibility: once something is found, you can even write an internal patch yourself, if you are an org running open source software and don't feel like waiting even a day or two for the community to release one.
  • by danielrm26 ( 567852 ) * on Friday September 17, 2004 @10:27AM (#10276762) Homepage
    At the end of the article (I read it for some reason) the author seems to somewhat agree that open-source code is at least equal with - if not superior to - proprietary code. This seems to fly in the face of his initial statements.

    This is a common writing technique -- get a reaction based on title and initial statements, and then bring the real argument later on. Just don't walk away thinking this guy is saying open-source code has worse security overall based on the title; that's not what he said.
  • I would have to say (Score:5, Interesting)

    by GillBates0 ( 664202 ) on Friday September 17, 2004 @10:30AM (#10276789) Homepage Journal
    I believe that in the long run, open source software does have the potential to be more secure than closed systems, since open source projects can do everything commercial projects can. When high-quality analysis tools become more common, hopefully the "many eyeballs" phenomenon will work. Still, when it comes to security, money looks like it will be a big catalyst for positive change--and the open source community is largely insulated from it.

    The article is a balanced and well-written one. From the title and summary, I concluded that this was possibly one of those "Rob Enderle" type Microsoft FUD pieces, but surprisingly the author seems to know what he's talking about and comes up with a pretty balanced argument - the above excerpt is one of the examples.

    I agree with some of the conclusions/suggestions, like a more structured approach and software engineering techniques, but the fact remains that most software hobbyists (the principal contributors to open source software) *firmly* dislike process and red tape. And they're right; since they're pursuing a hobby, they should be able to do what they like as they see fit.

    But then, he's obviously more qualified than the other Microsoft apologists who've written "knowledgeable" articles about open source insecurity.

    John Viega is Chief Scientist of Secure Software, and the coauthor of "Building Secure Software" (Addison-Wesley) and "Network Security with OpenSSL" (O'Reilly).

  • Eyeballs (Score:2, Funny)

    by sulli ( 195030 ) *
    he does not believe those eyeballs are looking for security problems in a structured way.

    That's because their eyeballs are falling out looking at it.slashdot.org [slashdot.org].

  • Missed point.. (Score:4, Interesting)

    by underpar ( 792569 ) on Friday September 17, 2004 @10:31AM (#10276804) Homepage
    To me the important part of security is the bottom line: How often are you faced with a serious security problem right now?

    For whatever reason, open source software hasn't had the same problems as Microsoft for instance. Whether that's because of an oversight on the part of hackers/crackers is beside the point. The point is that based on results open source is more secure.

    Potential threats don't crash your servers.
    • Re:Missed point.. (Score:3, Insightful)

      by jamesl ( 106902 )
      I just learned that the lock on the front door of my house has been broken since the house was built. Even though I turn the key when I leave, anyone can turn the knob, walk in and steal my computer. I've been lucky -- nobody's stolen my computer.

      Should I fix the lock? Should I buy another lock from the same vendor? Is my house secure because nobody's tried to break in? Based on results, the house is secure.

      I'm getting a new lock.

  • And closed-source? (Score:5, Interesting)

    by Anonymous Coward on Friday September 17, 2004 @10:31AM (#10276807)
    Sure, most people aren't looking at security in open source in a structured way, but some people [openbsd.org] are. Plus, open source can still be better if nobody is looking at closed source security at all. I know where I work, security defects become fodder for amusement at meetings, rather than serious issues to fix.

    No I won't say where I work, but it's not MS.
  • IV&V Testing (Score:3, Insightful)

    by Artie_Effim ( 700781 ) on Friday September 17, 2004 @10:32AM (#10276813)
    As an IV&V tester (Independant Verification and Validation) I concur with the article. Sure, many eyes helps, but without a proven testing methodology, thought out and complete testing procedures and stylized reporting, bugs/security holes could go unnoticed for a long time.
  • The article fails to consider that, even if open source software has more vulnerabilities than closed source, those who find such a vulnerability are more likely to publish a fix than an exploit.
  • Boggle (Score:5, Insightful)

    by miu ( 626917 ) on Friday September 17, 2004 @10:34AM (#10276843) Homepage Journal
    However, commercial organizations are more likely to take security seriously, simply because they are more likely to have paying customers demanding some level of security assurance.

    Has this guy been working with better vendors than I have? I had to deal with vendors on a regular basis who let some pretty awful stuff slip through QA and some of them could be very defensive about accepting that a bug existed. I had to threaten to shut down multi-hundred thousand dollar contracts to get action sometimes, twice I actually did call bullshit on a vendor and abort the contract.

    Money provides a stick to get vendors to fix their problems, but they still have human beings working on their products, and like all human beings they make mistakes, get defensive, have better things to do with their time, etc. Also, success (money) can breed indifference in a vendor; once you have a good portion of the market and have people locked into your offerings, you have to be just good enough to keep the cost of the customers' irritation with you lower than the cost of switching to another product.

  • by Vexler ( 127353 ) on Friday September 17, 2004 @10:35AM (#10276859) Journal
    In a recent [slashdot.org] /. story, a small group of programmers had a monster time tackling an off-by-one problem in the OpenBSD kernel - an OS touted as one of the most secure in the world. Judging from the way this particular bug was tracked down and analyzed, it's safe to say that this was a set of eyeballs that had some degree of coordination and management to it.

    The problem, as the author points out, is that many eyeballs do not equal "eyeballs in depth" or "coordinated eyeballs". The housefly has thousands of "eyes", yet that doesn't make it necessarily more visually acute (contrast it with, say, the eagle or the falcon).

    I would suggest that, if you are going to code a secure product, the people and processes that make up the audit team should themselves be audited. The flowchart of security shouldn't start at the product itself; it should start at the people and processes that produce the product. Otherwise, what you would end up with is a lot of people "reaching for the low-hanging fruit" (as the article suggests), making flashy features work, while the obscurer but necessary work gets ignored or done poorly. Security must be managed from the top down, not invented along the way by coders.

  • by argoff ( 142580 ) on Friday September 17, 2004 @10:36AM (#10276877)
    I really don't know how true this is, other than the simple fact that I've had a lot better success with Linux security than Windows security. But I think this also misses another point - that this is as much about control as it is security.

    Perhaps my house would be more secure if Microsoft managed all the access in and out of it too. But the reality is, that's the kind of control I want to have - not them. The same is true with *MY* OS systems too.
  • by dpilot ( 134227 ) on Friday September 17, 2004 @10:38AM (#10276892) Homepage Journal
    There's something FAR more important about security than the code, the number of eyeballs looking at it, or even the skill of those eyeballs.

    Trust. More specifically doing away with Trust.

    I had a minor epiphany yesterday, reading about Microsoft's DRM efforts and realizing what may be fundamentally wrong with their security. IMHO, Microsoft believes that bad security is due to bugs, and that if they can squash their bugs, they will be able to have secure code, AND be able to TRUST the computer that their code is operating on. I'll even let them consider an insecure algorithm a bug, for the sake of this discussion. I think they really believe they can eventually ship sufficiently bug-free code to be considered Trustworthy in execution.

    Contrast that with the attitude toward security that has grown in the Open Source arena. No matter how good you get, bugs will *always* be found. No matter how secure you think your system is, *someone* can always get in. Finally, you have to consider *all* avenues of attack, not just the technical/cracking ones.

    Some descendants of these attitudes:
    Without physical control, the rest of the security is worthless.
    Human engineering is probably the biggest security hole.
    Consider security as a value proposition, in two ways:
    1: Can I make it sufficiently expensive that they'll attack someone else, instead of me?
    2: How much do I want to spend on security, and how do I balance that with a recovery plan?
    Security isn't a "nail it down, once" thing, it's a process, and includes evolution.
    Bugs will happen, so put security in layers, to try and eliminate single-point-of-failure issues.

    It's not so much the code, or the eyeball count, or the specific eyeballs. It's the attitude.
  • article problems (Score:3, Insightful)

    by JonKatzIsAnIdiot ( 303978 ) <.moc.oohay. .ta. .0002_1624a.> on Friday September 17, 2004 @10:39AM (#10276896)
    He has a point, but there are some flaws in his reasoning. First of all, just because the world can examine the source code of a program, it doesn't mean that people with the necessary skills and knowledge will. However, it does happen. BSD is noticeably absent from the article, and anything dealing with open source and code auditing needs to at least touch on it. (But it's dying, I know...) The author also wants us to believe that commercial software has better code auditing tools and procedures than open source. He doesn't give much evidence of this; we're just supposed to accept it because they have lots of money and the impoverished open source hackers don't.

    Judging from this article, I would doubt that the author has a true understanding of the open source concept. Just because something lacks structure doesn't mean that it's inferior. What really matters is how vulnerable a box is to being exploited. And in terms of real-world metrics, despite much-vaunted 'security initiatives', open source software has a better record of delivering network services more efficiently, reliably and securely than commercial alternatives.
  • by Beryllium Sphere(tm) ( 193358 ) on Friday September 17, 2004 @10:42AM (#10276937) Journal
    Discussing "the security of open source software" is like discussing "the structural strength of green objects". There are too many projects with different goals and different team cultures.

    "What approach do I pick to make $PROJECT most secure?" is a meaningful question. Even more meaningful is "What approach do I pick to make $PROJECT most trustworthy?"

    Open source is the answer to both. For a security-critical application like PGP it's imperative to get multiple independent reviews from fresh perspectives. Open source is a necessary but not sufficient criterion for being able to accomplish that.
  • By definition (Score:5, Insightful)

    by Progman3K ( 515744 ) on Friday September 17, 2004 @10:43AM (#10276950)
    MOST bugs or flaws that lead to exploits are things that CANNOT be found by using a "structured" method.

    Otherwise, you could write a tool that probes for those.

    The effect would be that that class of exploit would disappear.

    Usually, exploits are much trickier (chaotic, even) than that to find and are usually found "in the field" by actually using the software under a variety of conditions when all the "eyeballs" have failed.

    But trying to be controversial to sell a book never hurt...

    Move along, nothing to see here.
    • Otherwise, you could write a tool that probes for those.

      Whatever happened to "lint"?

      And, BTW:

      1. Always check return values from function calls, even printf.
      2. Always limit the number of bytes/chars read into ANY variable.
      3. Always check the validity of your input in terms of characters expected, and characters received. For example: if you are looking for a number, make sure you only get numbers, commas, and periods in your input.

      How many security flaws would be solved if everyone followed those three simple rules?
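      For what it's worth, here is a small C sketch of what those three rules look like in practice (a generic illustration, not taken from any particular project):

          /* Rule 2: bounded read; Rule 3: validate the expected
           * characters; Rule 1: check every return value, even printf's.
           * (Overflow checking via errno is omitted to keep this short.) */
          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>

          int main(void) {
              char buf[32];

              if (fgets(buf, sizeof buf, stdin) == NULL) { /* bounded + checked */
                  fprintf(stderr, "read failed\n");
                  return 1;
              }
              buf[strcspn(buf, "\n")] = '\0';

              char *end;
              long n = strtol(buf, &end, 10);   /* expecting a number... */
              if (end == buf || *end != '\0') { /* ...accept only a number */
                  fprintf(stderr, "not a number: %s\n", buf);
                  return 1;
              }

              if (printf("value: %ld\n", n) < 0)  /* yes, even printf */
                  return 1;
              return 0;
          }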

  • by Anonymous Coward on Friday September 17, 2004 @10:44AM (#10276957)
    Every darn distro seems to have funny GUI windows that pop up asking for root passwords these days.

    Distros getting users into the habit of typing in root passwords every time the GUI pops up a window is asking for big trouble.

    C'mon redhat or suse or debian or someone.

    Please please give me a distro where I don't _need_ to be root to install typical unprivileged packages like upgrading a browser. How about installing them under '/usr/local' with permissions where anyone in the group 'local' can install them, or how about in my home directory? And yes, I know about "configure --prefix=$HOME". That doesn't solve the problem of not having the benefits of a package manager.

  • OSS is virtually unencumbered (in theory) by the things that weigh an organization like Microsoft down. For example, if a single developer notices a buffer overflow potential, they can just fix it. It's not like some middle management jackass down the hall is going to interfere and push the change into oblivion.
  • by lumpenprole ( 114780 ) <lumpenprole@[ ]il.com ['gma' in gap]> on Friday September 17, 2004 @10:48AM (#10277000) Homepage Journal
    Look, I'm not trying to be a knee-jerk, but I'd like a little evidence. A quick search on Security Focus [securityfocus.com] shows IIS and Apache to be about dead even on vulnerabilities. That may not prove that oss is better, but it certainly suggests it's not any worse.

    This article is full of speculation about mechanisms, without any real proof. It doesn't even bother to cite the bullshit MS-funded studies.

    If I want rabid fan baiting with no real evidence, well, I'm on Slashdot already, aren't I?

  • Both Source (Score:3, Insightful)

    by Krondor ( 306666 ) on Friday September 17, 2004 @10:49AM (#10277012) Homepage
    I recently attended a web seminar (webinar) Novell hosted about SuSE Enterprise Server 9 security. They talked a lot about the security certifications SuSE has been awarded, and how even Microsoft has not been granted the highest level of security for its 2003 server line. They then presented a poll for the attendees: "Which is more secure: open source, proprietary, or a combination of open and proprietary software?" As predicted, the combination response won. I think the correct answer to which is more secure, Open Source or Closed Source, depends totally on what programs are being discussed and where they are applied. Remember, just because the source is open doesn't mean it's audited, or that the people who find security holes necessarily want to fix them. With great power comes great responsibility, as Stan Lee so wisely put it.

    Novell said in an internal study they found that open source tends to be more secure in popular applications, so Apache is more secure than IIS (as if we needed them to tell us that!), but they found that in obscure programs proprietary tended to be more secure. This is probably the main idea behind Novell's recently announced both source [com.com] stance. Granted, they have financial reasons for not wanting to open source parts of their product line, but this rationale does seem logical. Though it would offend the Stallmanites.
  • Security means... (Score:5, Insightful)

    by CSG_SurferDude ( 96615 ) <wedaa AT wedaa DOT com> on Friday September 17, 2004 @10:50AM (#10277024) Homepage Journal

    $RantMode=on

    Computer security means many things [google.com], but can be summed up simply as: The protection of the information and physical assets of a computer system.

    As a reminder, this means Hardware AND Software security.

    As a Real-world security geek, it appears to me that the three worst software issues are:

    1. Viruses/trojans via email
    2. Viruses propagating on their own
    3. Viruses/trojans via web pages

    Please note that "Crackers hacking into your system in order to steal trade secrets" isn't even on the list.

    So, no matter which of the top three you care to rant about having security issues in your software, they ALL can be solved with the same two pieces of software, on either your own PC or on the corporate side: firewall software (set to deny all unless allowed), and any reasonably competent virus checker (scan local drives/emails/web pages before loading to the browser).

    So, the real question is not which has more bugs, closed source, or open source, but is instead "Why don't more users have those two pieces of software?"

    Instead of beating each other up about security flaws in software, maybe we could all spend some small amount of our time educating users to get these two packages, and to keep them up to date.

    Imagine if a million geeks all spent an extra 15 minutes while visiting their friends and relatives to educate them about this?

    $RantMode=off

  • by waldoj ( 8229 ) <waldo@NosPAM.jaquith.org> on Friday September 17, 2004 @10:51AM (#10277036) Homepage Journal
    For those who would assail John Viega's credibility, I should remind you who he is.

    Most notable for the purpose of this discussion, Viega [viega.org] is the creator of Mailman [list.org], the fantastically popular GPLd mailing list management software. All was well and good with his view of the many-eyeballs theory until, a few years ago, he found a huge, glaring, holy-shit hole in Mailman. He was so alarmed that nobody had ever spotted it that, after fixing it, he reflected on what he'd learned and turned it into a thoughtful article, The Myth of Open Source Security. As he wrote: [developer.com]
    "For three years, until March 2000, Mailman had a handful of glaring security problems in code that I wrote before I knew much about security. An attacker could use these security holes to gain access to the operating system on Linux computers running the program.


    "These were not obscure bugs: anyone armed with the Unix command grep and an iota of security knowledge could have found them in seconds. Even though Mailman was downloaded and installed thousands of times during that time period, no one reported a thing. I finally realized there were problems as I started to learn more about security. Everyone using Mailman, apparently, assumed that someone else had done the proper security auditing, when, in fact, no one had."
    Again, Mailman was and is an extremely popular program -- this was not a problem of obscurity.

    So, the OnLamp.com article under discussion here is a follow-up to his original article, as he points out in the opening of the new piece (but people apparently aren't reading). As you can imagine, Viega is no rabid anti-OSS guy -- he's, in fact, the very model of what we want our developers to be. He writes good software, admits it when he writes bad software, and tells it like it is, even when we don't want to hear it.

    (Disclaimers, such as they are: Viega is an adjunct professor at Virginia Tech, where I attend school, and I was the earliest alpha-tester of Mailman, in the late 90s.)

    -Waldo Jaquith
  • by harlows_monkeys ( 106428 ) on Friday September 17, 2004 @10:58AM (#10277136) Homepage
    OpenBSD is probably the most secure free OS, yet it has fewer people looking at it than Linux or FreeBSD. Fewer eyeballs are looking at OpenBSD, but they are very good eyeballs.

    Another good example is Kerberos. It's been around a long time, looked at by researchers, students, open source developers, and closed source developers using it as a reference for implementing their versions. Yet, major flaws that weren't subtle have taken a long time to find.

  • by argent ( 18001 ) <peter@slashdot.2 ... m ['ong' in gap]> on Friday September 17, 2004 @11:01AM (#10277164) Homepage Journal
    The main advantage that OSS software has is not the eyeballs on the source code, it's the ability of the community to guide its development. There's no way for a vendor to take a disastrous wrong turn against the wishes of a broad part of their user base.

    If Windows had been open source seven years ago, we would have been able to keep in use a version that didn't integrate IE with the desktop, and to come up with a clean mechanism to split the useful parts of the HTML control from the dangerous parts. The majority of the script- and zone-based email viruses and worms that have plagued the computer industry for most of the past decade would never have happened, and we wouldn't be waiting for the next attack to hit their daft "security zones" train wreck.

    If Apple's LaunchServices and Webkit were open source, we'd be able to split LaunchServices in two and have a separate set of bindings for internet applications, and we wouldn't be waiting for the next protocol-based attack in Safari.
  • One word - Sendmail (Score:5, Interesting)

    by Animats ( 122034 ) on Friday September 17, 2004 @11:03AM (#10277194) Homepage
    Twenty years of buffer overflows. [google.com]

    Any questions?

    One real problem with open source is that it's really tough to fix a fundamental architectural problem by ongoing patching. If the problem is too big for one person to rewrite in a short period of time, it's unlikely to ever get fixed.

    If the Linux world is to become secure, get behind NSA Secure Linux and push. Make apps work within the NSA Secure Linux mandatory security model. That has a chance of working.

  • by Maljin Jolt ( 746064 ) on Friday September 17, 2004 @11:06AM (#10277225) Journal
    Even if you do not believe in the skills of the open source community, you can at least hire your own specialist to look for possible problems in critical code. You cannot do that with closed source; you are doomed to remain a believer in the code vendor.

    I repeat: you CANNOT be sure you are secure with closed source, no matter what you do. You CAN secure yourself with open source, if you make the effort.
  • by Todd Knarr ( 15451 ) on Friday September 17, 2004 @11:10AM (#10277280) Homepage

    The difference lies not in the number of vulnerabilities. All software, open or closed source, will have holes in it. That'll be the case until we have a system in place to write completely bug-free code and a system to ensure vulnerability-free specifications (the worst security problems aren't bugs, they're design features which favor convenience over security). The difference lies in what happens when a vulnerability is discovered. In closed-source software, we've seen time and time again that the response by the vendor is almost always to conceal the problem and deny it exists. In open-source software, by contrast, vulnerabilities are almost always published fairly quickly and fixes made available rapidly. That's because nobody is at the mercy of the original author for a fix. The people who discovered the problem can publish a code fix along with the details of the problem. People affected by the problem can patch the code themselves, if it's important enough.

    In addition, security holes by design tend to get eliminated from open-source software. In proprietary software, if an insecure design feature benefits the vendor it's unlikely to be removed short of open revolt by the users. In open-source software if there's another way to do it that provides less security exposure and the original author won't change the design, someone else tends to get fed up, make the change and make the patch available. Eventually the original author either has to bow to user preference or find his own version of the software being ignored in favor of one that does.

  • Even Worse (Score:5, Insightful)

    by Salamander ( 33735 ) <`jeff' `at' `pl.atyp.us'> on Friday September 17, 2004 @11:27AM (#10277468) Homepage Journal

    I was struck by something while reading this passage:

    Most people who look at the source code for open source software don't explicitly look for security bugs. Instead they likely have in mind a particular piece of functionality that they want to augment

    Not only is that sort of developer not looking for security bugs, but they're pretty likely to be just getting their feet wet working on that project and might well introduce a bug. Then, there's a significant possibility that nobody else cares about the feature that one developer added to scratch their own itch, so nobody's going to look at the code that implements it.

    Yes, there are more eyeballs, but those eyeballs are not evenly distributed. There are certain pieces of code that everybody is looking at, and there are vast tracts of code that practically nobody is looking at - none with an eye toward security. How many Linux drivers have you looked at? I'll bet the majority of the people reading this haven't really looked at any Linux kernel/driver code whatsoever. Have you looked at the code for Apache? Perl/Python/Ruby? MySQL? Gcc? Open-source users outnumber programmers a hundred to one, and each developer has a fairly narrow area that they're either interested in looking at or qualified to look at, so the number of eyeballs on some piece of code implementing an unpopular feature in a popular package is nowhere near what some people seem to think. It might be dozens, it might be one, and quite often it will be zero once the guy who wrote it has moved on to something else. That's no better than the almost-always-one you'll get with commercial software, and sometimes it's worse.

  • by Glowing Fish ( 155236 ) on Friday September 17, 2004 @12:05PM (#10277905) Homepage
    The thing is that "open source" can mean many things. Probably the main reason that Linux (the flagship of open source) is secure is simply that normal users don't have system control. This is something it inherited from Unix, but it isn't specific to its code development process.

    My distribution of Linux, Debian, is stable because it is not a company, and it doesn't have to release new product too often to keep marketing happy. Because there is no profit motive, Debian can take the time to release stable packages. If Debian were not using open source, this would still be the case.

    So, it isn't specific to open source, but many open source projects have other features that make them more secure.

  • by Master of Transhuman ( 597628 ) on Friday September 17, 2004 @01:11PM (#10278696) Homepage
    if he can prove Microsoft is looking for security flaws in a "structured" way.

    Pardon me while I laugh myself into a coma.

  • by Brandybuck ( 704397 ) on Friday September 17, 2004 @01:47PM (#10279102) Homepage Journal
    ...but he does not believe those eyeballs are looking for security problems in a structured way.

    As a developer of proprietary software (hey, no flames, it's my job), I can assure you that there is very little structured security analysis of closed source software. Some closed software may be rigorously audited because of its nature, but the same holds true in Open Source (OpenBSD). You're not going to see any security audits for non-security software. You might see a few half-hearted attempts at it (like Microsoft's month-long fix-fest), and very localized panic attacks when vulnerabilities are made public, but for the most part it's an ignored area of development.

    "Security through obscurity" is still king, because the people making software security decisions in commercial firms generally don't know any other way. They also do not see the financial value in secure software, because it's not something that the customer will pay extra for in non-security related software. Then there's the problem of ignorant coders.

    We have all gone through the phase where we think we know about security and encryption. In a proprietary environment such a security ignoramus can reach the level of chief software architect. In my own work I've seen three "clever" encryption schemes by senior developers that were complete jokes. One scheme was so bad it even produced *sequential* keys. In the Open Source community such security hubris gets slapped down quickly.
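
    (For a sense of why "sequential keys" is a punchline: a key an attacker can predict or enumerate provides no security, whatever cipher it feeds. A hypothetical before/after sketch in Python -- the function names are invented for illustration:)

        import secrets

        # The joke scheme: "keys" that are just a counter. Seeing one key
        # lets an attacker guess every past and future key.
        _counter = 0
        def next_key_broken() -> bytes:
            global _counter
            _counter += 1
            return _counter.to_bytes(16, "big")  # sequential, predictable

        # The boring, correct approach: 128 bits from a CSPRNG.
        def next_key() -> bytes:
            return secrets.token_bytes(16)       # unpredictable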

    In short, the author is wrong. Open Source is not inherently more secure than proprietary software, but the open development model encourages a higher level of security analysis.
  • Professional (Score:3, Insightful)

    by sad_ ( 7868 ) on Friday September 17, 2004 @02:26PM (#10279460) Homepage
    I think people have to forget the cliché that all open source software is developed by some kid in his attic. A lot of OSS is developed using the same control processes as commercial software, _but_ with the source code open as an extra. I can't see how, in those cases, it can be worse. Of course there are, and always will be, OSS projects made just for fun or out of an itch, and perhaps in those cases the article could have a point (although I don't agree, since small projects with few developers can still take advantage of the 'many eyeballs' idea, and people _do_ look).
  • by JamesR2 ( 596069 ) on Friday September 17, 2004 @02:39PM (#10279616)
    Seriously ... computer security is getting lots of windbags their 15 minutes. Resist it! All that matters is that holes are being fixed at an acceptable rate in Linux, Apache, Firefox, IE, Windows, etc.
  • by Goglu ( 774689 ) on Friday September 17, 2004 @04:44PM (#10280692)
    The author of this article puts quite a bit of weight on the fact that commercial software can be audited by the company that produces it, but we must not forget that:
    1) These audits must be conducted by third parties, in order to be trusted;
    2) These audits are not done for free, and are added to the cost of the software.

    For smaller projects, the cost of auditing open-source software will probably have to be passed on to the customers. It could be split among groups of interested customers and benefit the whole community, and still remain cheaper than most commercial alternatives.

    Of course, big customers (the Navy?) could implement their own auditing scheme and pay for it, and commercial software companies would probably open their source code to these privileged customers. Unfortunately most small companies cannot afford to call Microsoft, or Accpac, or SAP, and force them to provide their source code and get an audit from a specific auditor. (And, as we saw lately, relying only on the reputation of such auditing companies as the Big Four can mean that they will give good results to their big golf buddies...)

    Finally, customers like the Navy would probably get cheaper software if they would go for F/OSS alternatives and audit them at their own cost, rather than pay for audited commercial software.
  • Security (Score:3, Interesting)

    by ewe2 ( 47163 ) <ewetoo&gmail,com> on Saturday September 18, 2004 @12:59AM (#10283381) Homepage Journal
    sells these days. Oddly, he appears to blame everyone else for a bug he didn't spot himself for three years. Users didn't suddenly turn into code monkeys just because they used the software. And you can't turn them into beta-testers against their will. It's a potential, not a given.

    The kind of methodology he wants for OSS just isn't going to happen across the board. Just as in commercial software, the "best practice" style you learned in college gets thrown out once you actually have to DO something.

    Large projects require similar methodology just to keep consistent, but small programs will never do so. This is the real world, not the classroom!
