Security Software

US DHS Testing FOSS Security

Stony Stevenson alerts us to a US Department of Homeland Security program in which subcontractors have been examining FOSS source code for security vulnerabilities. InformationWeek.com takes a glass-half-empty approach to reporting the story, saying that for FOSS code on average 1 line in 1000 contains a security bug. From the article: 'A total of 7,826 open source project defects have been fixed through the Homeland Security review, or one every two hours since it was launched in 2006 ...' ZDNet Australia prefers to emphasize those FOSS projects that fixed every reported bug, thus achieving a clean bill of health according to DHS. These include PHP, Perl, Python, Postfix, and Samba.
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Tuesday January 08, 2008 @08:25PM (#21963638)
    Now if they would do the same to Microsoft. Oh yeah...
    • Re:What about MS? (Score:5, Interesting)

      by filbranden ( 1168407 ) on Tuesday January 08, 2008 @10:29PM (#21964652)

      Actually, it would be really nice if it were possible to do it with Microsoft. Microsoft (or most other companies that produce proprietary software) certainly can't do better than what the open source projects do, and certainly their code contains at least as many issues as the ones found in open source projects.

      The ability to do code audits has always been one great advantage of open source software, but until now it was mostly theoretical. Now we start to see big code audit projects such as this one, showing that the advantage is real and that the results of the audit are good, since some of the projects have already patched all of the issues, and certainly most of the others will finish patching them soon. This shows that open source is here to stay, is going mainstream, and will not be stopped by any company's interests.

      All issues that currently exist in Microsoft's code, on the other hand, will remain unpatched. Unless they hire some consultant company (why not the same one?) to audit their code (certainly under NDA). But you can be sure that, if they do, for one, they won't publish the results of how many issues were found. No transparency there. And probably many issues won't be fixed as promptly as all of them were fixed in many of the audited open source projects. This is not speculation: just look at how long it takes them to fix issues for which security vulnerability reports have been issued, and you realise that the ones only they know about will certainly take much longer.

      • Re: (Score:3, Informative)

        by Shados ( 741919 )
        Well, technically, they don't need to -hire- some consultant companies to do it... While it WILL be under extreme NDA, it is not uncommon for Microsoft's customers to be allowed to get access to the source, if they're big enough.

        Now, I realise it doesn't change your point at all, but it's not like MS is the only entity with access to their own code: they have dedicated programs to share even their most closed pieces of code with their customers (if they're important enough).
      • Re: (Score:2, Insightful)

        by DerekLyons ( 302214 )

        This shows that open source is here to stay, is going mainstream, and will not be stopped by any company's interests.

        It also shows that open source has failed to use a common tool to self audit - it took a third party to do so.
        • Re:What about MS? (Score:5, Insightful)

          by splutty ( 43475 ) on Wednesday January 09, 2008 @04:46AM (#21966392)
          I see nothing wrong with a third party specialized in this sort of auditing actually doing it, instead of a whole bunch of programmers & others who probably don't have that specialization, and who are most often busy with actually being 'productive' and thus have no time to audit themselves (impossible) or others (not always efficient).
        • Re:What about MS? (Score:5, Insightful)

          by cp.tar ( 871488 ) <cp.tar.bz2@gmail.com> on Wednesday January 09, 2008 @04:48AM (#21966398) Journal

          This shows that open source is here to stay, is going mainstream, and will not be stopped by any company's interests.

          It also shows that open source has failed to use a common tool to self audit - it took a third party to do so.

          Since an audit is usually an independent review, I see it as only logical for it to have been done by a third party.

          The point is, it is open. Anyone may perform an audit at any time they wish to do so.
          And everyone apart from the developers themselves and the users of the software is third party, by definition.

        • Re: (Score:3, Insightful)

          by budgenator ( 254554 )
          Feel free to pay to have any OSS project audited you would like to see made better.
        • Re:What about MS? (Score:4, Insightful)

          by rtb61 ( 674572 ) on Wednesday January 09, 2008 @07:05AM (#21966852) Homepage
          WTF? Your statement makes abso-fucking-lutely no sense at all. In open source there is no such thing as a third party or second party: anyone, and I mean absolutely anyone, be they part of the government, employed by a corporation, or a private individual, who contributes to open source software is a first party.

          That is what open source is all about: anybody can contribute their worthwhile efforts to it. Contribution to open source includes not only code but also auditing, actual innovation, and even activities like distribution, documentation, promotion, and support.

          So your illogical claim of failure is in reality open source success. I will never understand why closed-source proprietary zealots just don't get it; I suppose it just goes to prove that greed and stupidity really do go hand in hand ;).

  • Fixed? (Score:5, Funny)

    by sjbe ( 173966 ) on Tuesday January 08, 2008 @08:28PM (#21963664)

    A total of 7,826 open source project defects have been fixed through the Homeland Security review


    Do they mean fixed [wikipedia.org] or fixed [wikipedia.org]?
  • The important point here is that proprietary software manufacturers aren't telling you how many security flaws they had. I bet it's more than 1 per 1,000 lines; that is an incredibly good figure for the first time a scanner like Coverity is run. I doubt proprietary work comes close.

    You can't ever say that proprietary software is secure, because there's no way to prove it. With Open Source, you can come a lot closer to proving that it is secure, because you can employ every security test that exists.

    The fact that a Coverity scanner bug is reported doesn't mean it's an exploitable security flaw.

    Bruce

    • by QuantumG ( 50515 ) <qg@biodome.org> on Tuesday January 08, 2008 @08:42PM (#21963814) Homepage Journal
      Although I understand what you're trying to say, it does seem a little irrelevant.

      I'm a software security engineer. I can look at source code and tell you if it has some bugs in it that I would consider relevant to security. If I can't find any, I might tell you that it is more secure than if I could... but that doesn't mean it is secure. I'll never tell you it is secure, because testing simply can't give you that. I can do this on proprietary software or I can do this on Open Source software.. the only difference is that, with the Open Source software, I don't need permission from someone to do the testing and other people don't need permission to check my work.

      Does this mean that more people will check the Open Source software for security flaws? Not necessarily. It completely depends on whether or not someone has an interest in the security of that particular bit of software. Even assuming a similar level of interest in the security of comparable proprietary and Open Source software, there's no guarantee that those who have an interest in testing the Open Source software for security flaws will report back the findings. They may simply decide that the Open Source software is too insecure for their use and go with the proprietary solution - assuming they can have it similarly tested by a trusted third party.

      All in all, the assumption that Open Source software is more secure than proprietary software is most likely true, but there's no hard data.. because the stats on the insecurity of proprietary software are guarded secrets - and that's probably the best reason to assume that proprietary software is less secure.

      • Does this mean that more people will check the Open Source software for security flaws? Not necessarily. It completely depends on whether or not someone has an interest in the security of that particular bit of software.

        I submit that people who are only looking for security flaws don't have a motivation to develop a deep understanding of the software. People who are out to modify the software do. And thus there are not just more eyes, but better eyes with Free Software.

        There is a class of mathematically provable software languages, and you might be able to say with surety that programs in them are secure. For the languages we usually use, you can only say that you have tested them in the ways you know of. And only a person with access to the source can say that. If you want an independent assessment, Open Source software won't stop one from happening, and no NDA will hinder what can be said about the result. That's why I think it's more secure.

        Bruce

        • Re: (Score:3, Insightful)

          by QuantumG ( 50515 )

          I submit that people who are only looking for security flaws don't have a motivation to develop a deep understanding of the software. People who are out to modify the software do. And thus there are not just more eyes, but better eyes with Free Software.

          No offense, but that's completely the opposite of the facts. The vast majority of software engineers have no idea what they're doing when it comes to detecting, fixing and avoiding security issues. That's why tools like Coverity exist - and most of the time the programmers can't even use them correctly. There are "security consultants" you can hire who basically just explain the results from Coverity, and they're not short on work.
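
          To make that concrete, here is a toy sketch (my own hypothetical example, not code from any scanned project) of one classic checker defect class, the "forward null": a pointer is tested against NULL, which tells the analyzer it may be NULL, yet it is still dereferenced on the path where the test failed.

          #include <stdio.h>

          struct request {
              char *payload;
          };

          /* The NULL test below tells the analyzer that p may be NULL... */
          void log_request(struct request *p)
          {
              if (p == NULL)
                  fprintf(stderr, "null request\n");  /* ...but a 'return' is missing here, */

              printf("payload: %s\n", p->payload);    /* ...so this dereference gets flagged. */
          }

          int main(void)
          {
              struct request r = { "hello" };
              log_request(&r);    /* fine */
              log_request(NULL);  /* crashes: this report would be a true positive */
              return 0;
          }

          Reading the report correctly is the hard part the consultants get paid for: deciding whether the flagged path is actually reachable in the deployed program.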

          But hey, don't take my word for it.. go have a chat with your friend Theo de Raadt..

          • by splutty ( 43475 )

            But hey, don't take my word for it.. go have a chat with your friend Theo de Raadt..

            Ouch... Talk about throwing to the wolves. However, if you want to have a well-informed (albeit somewhat lacking in social graces) person to comment on the state of security in general, he'd probably be quite a good choice. As to the state of security sense/awareness in programmers, he'd probably be one of the best :)
      • Although I understand what you're trying to say, it does seem a little irrelevant.

        I don't really see how it's irrelevant - if a "security defect" exists but cannot be exploited (i.e. if there's a buffer overflow bug but it deals with internal data or data that's already been thoroughly sanitized), it does not present the same risk as a bug that may be easily exploited, for example in the input sanitizing code. It's not really clear how many of these bugs are of each type, and I think it's significant tha
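
        To illustrate that distinction with a toy sketch (hypothetical code, not from any audited project): the same unchecked-copy defect is a very different risk depending on who controls the data.

        #include <stdio.h>
        #include <string.h>

        /* The same strcpy-into-fixed-buffer pattern twice; only the data source differs. */

        void copy_internal(void)
        {
            char buf[16];
            strcpy(buf, "static label");   /* 13 bytes incl. NUL: fits. A scanner may
                                              still flag the pattern, but no attacker
                                              can reach it with oversized data. */
            printf("%s\n", buf);
        }

        void copy_user_input(const char *user_input)
        {
            char buf[16];
            strcpy(buf, user_input);       /* attacker-controlled: a classic exploitable
                                              stack overflow for input of 16+ bytes */
            printf("%s\n", buf);
        }

        int main(int argc, char **argv)
        {
            copy_internal();
            if (argc > 1)
                copy_user_input(argv[1]);  /* try an argument longer than 15 characters */
            return 0;
        }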

        • by samkass ( 174571 )
          Although I generally agree with the belief that FOSS probably yields better security, I think FOSS has a different characteristic of vulnerability than closed-source software. Specifically, the "ease of exploiting" a vulnerability is increased along with the ease of modification of the software. The more understanding of the system that's out there, the easier it is to take advantage of a vulnerability. I realize that "security through obscurity" is not something you want to depend on, but it is a real effect
          • Re: (Score:3, Interesting)

            "For example, MacOS and Windows had a similar number of critical security patches last year."

            Willing to stipulate for the purpose of this discussion.

            However, there were dozens of Windows viruses and hundreds of thousands of compromised machines, and zero MacOS viruses.

            Likewise willing to stipulate.

            Thus, while a certain measure of vulnerability is comparable, the likelihood of actually being attacked is infinitely higher with Windows.

            I would suggest this doesn't necessarily follow. It could
        • by QuantumG ( 50515 )
          No-one was debating Bruce's last point about Coverity returning many false positives.

          As for the use of terminology, excuse me for using an accurate term like "defect" instead of a more popular colloquialism like "hole".

    • RTFA (Score:5, Informative)

      by Pinckney ( 1098477 ) on Tuesday January 08, 2008 @08:47PM (#21963876)

      The important point here is that proprietary software manufacturers aren't telling you how many security flaws they had. I bet it's more than 1 per 1,000 lines; that is an incredibly good figure for the first time a scanner like Coverity is run.
      Actually, the first line of the article reads "Open source code, much like its commercial counterpart, tends to contain one security exposure for every 1,000 lines of code, according to a program launched by the Department of Homeland Security to review and tighten up open source code's security."
      • Most people only read the summary.
        • Pessimism in article (Score:5, Informative)

          by filbranden ( 1168407 ) on Tuesday January 08, 2008 @10:49PM (#21964802)

          Not only did the article say much like its commercial counterpart, but most of the numbers it shows are actually good for open source software.

          For instance, most of the projects discussed had less than 1 bug per 1,000 lines of code. The Linux kernel had 0.127 bugs per 1,000 lines, and that over more than 3 million lines of code.

          Also, the article talks about key projects, such as glibc (which is used by basically everything on a Linux system), that have already fixed all the issues.

          Even something as huge and complex as Firefox has already fixed half of the issues, and is showing progress on the rest of them (as shown by the fact that some have already been verified).

          Overall, I didn't get the glass-half-empty tone that the summary implies. What I found strange is that even the comments on the site itself, and many of them here on /., also take the pessimistic view.

          I think this news is great for open source software. It shows that open source has fewer security issues than average, that the issues are fixed quickly, and that some programs are even certified by a company for use in security-related departments such as the DHS. What could be better than that?

      • Actually, the first line of the article reads "Open source code, much like its commercial counterpart, tends to contain one security exposure for every 1,000 lines of code, according to a program launched by the Department of Homeland Security to review and tighten up open source code's security."

        The problem is: how do they know how many lines of code are in the closed commercial programs if they can't see the code?

        Falcon
        • Gee good point! How could they possibly view the code if it's not open to all? I mean, it's not as if there's any possibility they could've gotten a bunch of companies to agree to let them audit their code provided they only released the results in aggregate, without any identifying information.

          Just because it's not open source doesn't mean that nobody is ever able to gain access to it.

    • According to McAfee recently (http://yro.slashdot.org/article.pl?sid=08/01/05/0215201) and Microsoft et al., having your code exposed lets the bad guys exploit its vulnerabilities. Of course, if or when a weakness is taken advantage of, it would likely be fixed very quickly by the FOSS community, instead of once a month on Patch Tuesday, as in Microsoft's business model.
      • According to McAfee recently (http://yro.slashdot.org/article.pl?sid=08/01/05/0215201) and Microsoft et al., having your code exposed lets the bad guys exploit its vulnerabilities

        Yes they said that, but you don't really believe it, do you? If so, just look up "security by obscurity" and read about it. To give you a clue, the unavailability of source has not prevented 100,000 Windows viruses.

        • I think your logic is a bit confused. The fact that viruses can be created without reading the source code does not prove that there's no value in keeping the code secret. It's like arguing that there's no point in locking your door because 100,000 houses with locks were broken into.
          • by Waffle Iron ( 339739 ) on Tuesday January 08, 2008 @10:27PM (#21964640)

            It's like arguing that there's no point in locking your door because 100,000 houses with locks were broken into.

            A more apt analogy would be: There's no point in locking your door using a limp spaghetti noodle because a limp noodle makes a completely ineffective lock.

            • I see you're trying to imply that Windows is insecure, but I don't see what that has to do with the issue of security through obscurity.
              • I'm not talking about Windows in particular. I'm saying that *security through obscurity* is an ineffective strategy. You say it's like a lock; I say it's like a limp noodle.

                It's barely a speed bump for the evil hackers who feed garbage into programs to crash them, then poke around in a debugger to find where they broke, then write some machine code to take advantage of the bugs. Thinking that lack of access to source code is anything like a "lock" is just self-delusion.
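
                A minimal sketch of that garbage-feeding workflow (the vulnerable parser here is a made-up stand-in, since the whole point is that the attacker has no source): random bytes go in, crashes come out, and each crash is a lead to poke at in a debugger.

                #include <stdio.h>
                #include <stdlib.h>
                #include <string.h>
                #include <sys/wait.h>
                #include <unistd.h>

                /* Hypothetical stand-in for the closed-source code under attack. */
                static void parse_packet(const unsigned char *buf, size_t len)
                {
                    char header[8];
                    if (len > 0 && buf[0] == 0x7f)
                        memcpy(header, buf, len);            /* latent overflow */
                    (void)header;
                }

                int main(void)
                {
                    unsigned char buf[64];
                    srand(1234);
                    for (int i = 0; i < 100000; i++) {
                        size_t len = 1 + (size_t)(rand() % (int)sizeof buf);
                        for (size_t j = 0; j < len; j++)
                            buf[j] = (unsigned char)rand();
                        pid_t pid = fork();                  /* isolate each attempt */
                        if (pid == 0) {
                            parse_packet(buf, len);
                            _exit(0);
                        }
                        int status;
                        waitpid(pid, &status, 0);
                        if (WIFSIGNALED(status)) {           /* crash found: no source needed */
                            printf("crash on input %d (signal %d)\n", i, WTERMSIG(status));
                            return 0;
                        }
                    }
                    puts("no crash found");
                    return 0;
                }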

                • You're reading too much into my analogy (or perhaps it was flawed). I didn't mean to suggest that STO is like a "lock". I was just saying that the fact that Windows has been exploited doesn't prove that STO has no security value.
                • If you believe security through obscurity is ineffective, I'd like you to hand over all your encryption keys and passwords: after all, there's no need to keep them obscure! Eventually, at some point, all of security comes down to obscurity. Security IS the concept that you are hiding (preventing access to) some sort of 'secret'.

                  Something cannot be secure without obscurity.

                  It's like the physics concept: Observation is Interaction.

          • My door is locked, but the mechanism of the lock is easily available in the hardware store for others to scrutinize. And so it should be. This is a different sort of information than the pattern of the key.

            Bruce
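
              A toy sketch of that distinction in code (deliberately weak; XOR stands in for a real published cipher such as AES): the mechanism below is fully public, like the lock on the hardware-store shelf, and every bit of secrecy lives in the key.

              #include <stdio.h>
              #include <string.h>

              /* The public "mechanism": anyone may inspect it without weakening it. */
              static void xor_cipher(unsigned char *data, size_t len,
                                     const unsigned char *key, size_t keylen)
              {
                  for (size_t i = 0; i < len; i++)
                      data[i] ^= key[i % keylen];  /* same call encrypts and decrypts */
              }

              int main(void)
              {
                  unsigned char msg[] = "attack at dawn";
                  const unsigned char key[] = "s3cret";  /* the only secret: the "key pattern" */
                  size_t n = sizeof msg - 1, k = sizeof key - 1;

                  xor_cipher(msg, n, key, k);            /* encrypt */
                  printf("ciphertext: ");
                  for (size_t i = 0; i < n; i++)
                      printf("%02x", msg[i]);
                  printf("\n");

                  xor_cipher(msg, n, key, k);            /* decrypt: XOR is its own inverse */
                  printf("plaintext:  %s\n", msg);
                  return 0;
              }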

            • Re: (Score:3, Insightful)

              Analogies have their limits, so we shouldn't try to take it too far.

              Even those who historically have criticized "security through obscurity" never suggested that publishing their design or secrets would lead to better security, but rather that you can't assume that your design can't be cracked.

              Of course, the preferred approach is "security through design", which has nothing to do with correcting bugs. The latter could be called "security through maintenance". Thus while we might argue about whether closed
              • Even those who historically have critized "security through obscurity" never suggested that publishing their design or secrets would lead to better security

                You're wrong about that. For example, NIST, a US government standards agency, is calling for proposals for a new cryptographic algorithm for government use. Their specification [nist.gov] requires that it be publicly disclosed (and royalty-free, too). This is so that they don't pick a weak algorithm. They want any known or theoretical problems to be pointed out to them.

                • I said historically. I've worked on military crypto systems as recently as 10 years ago and the details were classified. I wouldn't assume that every government agency takes the same approach. Even the individual branches of the military often go their own way.

                  In this post-9/11 period I've seen a trend toward more secrecy rather than less. For example, the documents that described the military's UHF DAMA waveforms used to be freely available on the Internet, but they aren't now.
              • Re: (Score:3, Informative)

                by mr_mischief ( 456295 )
                There are numerous refutations to your "never suggested that publishing their design or secrets would lead to better security". Many experts have said precisely that.

                An IT Security article on full disclosure [itsecurity.com] states that as early as the middle of the 19th century, locksmith Alfred C. Hobbs thought full disclosure was important to clear up the rash of lock picking people were experiencing. It goes on to discuss exactly why full disclosure works so well.

                David Wagner says in an article on security: "Today, many [berkeley.edu]
                • Of course, language choice is a design decision.
                  • It's also a performance decision, and a pragmatic one, given that many languages don't bootstrap on bare hardware. Let me know when you have an OS kernel running on the Core 2 Duo written in Java, Python, Erlang, Eiffel, Haskell, or Ruby. ;-)

                    Yeah, lots of software gets written in assembly, C or C++ that probably should be in something else. No, nothing else is able to take their place for everything just yet.
          • I think your logic is a bit confused. The fact that viruses can be created without reading the source code does not prove that there's no value in keeping the code secret. It's like arguing that there's no point in locking your door because 100,000 houses with locks were broken into.

            Fact is, anybody can disassemble a lock. And of course people can disassemble code.
            Not too many people would be interested in picking the lock on a door (smashing a Window to get into the house is the method most generally used by non-government intruders).

            The greatest value in keeping code secret is making sure it cannot be easily reproduced, thus preventing other individuals or companies from using it without authorization. It's much like music and DRM: in the end it is the licenses which are enforced in

            • Here's a good story about examining how locks work, that shows the value of "disclosed source".

              Anyone can buy a re-key kit for Schlage locks at the Home Depot. Upon opening the cylinder of the lock with that kit, you will discover that (this is approximate, I don't have the lock in front of me) there are 5 pins, and 5 possible levels per pin, and that the minimum number of possible key patterns might thus be 5 ^ 5 or 3125. Which is enough that nobody's carrying all of the possible keys around and will have
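
              A quick check of that arithmetic (5 depths on each of 5 independent pins, ignoring whatever mechanical constraints a real cylinder adds):

              #include <stdio.h>

              int main(void)
              {
                  int combos = 1;
                  for (int pin = 0; pin < 5; pin++)
                      combos *= 5;                            /* 5 possible depths per pin */
                  printf("possible key patterns: %d\n", combos);  /* prints 3125 */
                  return 0;
              }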

            • "Fact is anybody can dis-assemble a lock. And of course people can dis-assemble code"

              There are probably few people who have even read every line of the Linux kernel - imagine trying to disassemble Vista looking for vulnerabilities.

              "The greatest value in keeping code secret is making sure it cannot be easily re-produced, and thus subverting other individuals or companies from using it without authorization."

              Perhaps, but your statement says nothing about security issues.

              "Yes there may be value in keeping cod
              • Nobody can reasonably argue that not having the source code makes it easier to create exploits.

                The point is that if there are exploits, it can be easier to fix them. Of course it doesn't guarantee they will be fixed (as in closed-source software), but the opportunity is there for global assistance and peer review (of the code and the fix) that is not available in closed-source software.

                As well, open-source software makes it easier to find built-in vulnerabilities (as the JAP proxy software was found to have a secret back door for the German police when its source was examined).

              • Nobody can reasonably argue that having the source makes it harder to find where exploits might target.
        • According to McAfee recently (http://yro.slashdot.org/article.pl?sid=08/01/05/0215201) and Microsoft et al., having your code exposed lets the bad guys exploit its vulnerabilities

          Yes they said that, but you don't really believe it, do you? If so, just look up "security by obscurity" and read about it. To give you a clue, the unavailability of source has not prevented 100,000 Windows viruses.

          No, I do not believe it. I was just pointing out some (IMHO) rather lame and biased arguments. Openness and transparency (whether in software, business models, or just dealing with one's spouse, for example) are generally better than keeping things hidden.

          Make the licenses as restrictive as you please, but at least give people the opportunity to know what they are using. Like listing ingredients on processed food, it's good to know that I'm not consuming something that could possibly do me harm (or be

    • by grcumb ( 781340 ) on Tuesday January 08, 2008 @09:15PM (#21964102) Homepage Journal

      The important point here is that proprietary software manufacturers aren't telling you how many security flaws they had.

      Indeed. FTFA:

      "Our commercial customers wouldn't like it too much if we aired the number of defects found in their code," said Maxwell, when asked about the results from scans on 400 product lines of the firm's private customers.

      One can only speculate about the, er, source of their discomfort.... 8^)

      I bet it's more than 1 per 1,000 lines; that is an incredibly good figure for the first time a scanner like Coverity is run.

      1 per 1000 lines is even more impressive as an average across all 180 FOSS applications tested. Most impressive of all are the highlights:

      • SAMBA: 236 defects in 450,000 lines of code. 228 already fixed.
      • Linux Kernel: 0.127 security faults per thousand lines of code. The kernel scan covered 3,639,322 lines of code.
      • Apache: 135,916 lines of code, which yielded a security defect rate of 0.14 bugs per thousand lines of code. Or 1.4 per 10,000 lines of code, if you prefer. 8^)
      • PostgreSQL: 909,148 lines of code, with a 0.041 per 1000 defect rate.
      • glibc: 83 bugs in 588,931 lines of code, all since fixed.

      Even some of those with more bugs have at least responded well:

      • KDE: 4,712,273 lines of code, fixed 1,554 defects, verified another 25 and has only 65 to go.
      • GNOME: 430,809 lines of code, fixed 357 defects, verified 5 and has 214 to go.

      And my favourite 'backslider' of all, OpenVPN, has yet to fix 100% of the bugs found during this exercise. Of course, that's only 1 bug in over 69,000 lines of code....

      These results should be viewed as excellent, by and large. This doesn't mean all this software is bug-free, just that there aren't a lot of easily preventable bugs in the code base. Most encouraging, though, is how fast they got addressed and fixed by the healthier FOSS projects.
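
      As a sanity check, the per-KLOC rates above can be re-derived from the raw figures in the article. Where only a rate was published (Apache, PostgreSQL, Linux), the defect count below is back-calculated and approximate.

      #include <stdio.h>

      int main(void)
      {
          /* name, defects found, lines of code scanned */
          struct { const char *name; double defects, loc; } p[] = {
              { "Samba",       236,    450000 },
              { "Apache",       19,    135916 },   /* ~0.14/KLOC => ~19 defects */
              { "PostgreSQL",   37,    909148 },   /* 0.041/KLOC => ~37 defects */
              { "Linux",       462,   3639322 },   /* 0.127/KLOC => ~462 defects */
              { "glibc",        83,    588931 },
          };
          for (int i = 0; i < 5; i++)
              printf("%-10s %5.0f defects / %9.0f lines = %.3f per KLOC\n",
                     p[i].name, p[i].defects, p[i].loc,
                     p[i].defects / (p[i].loc / 1000.0));
          return 0;
      }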

      • Most encouraging, though, is how fast they got addressed and fixed by the healthier FOSS projects.

        Less encouraging is that they existed in the first place - doubly so since all the software you list is more-or-less 'mature'.
    • Oh man... Bruce Perens. What a pleasure. I couldn't have said it better myself.

      (Actually, I was going to make fun of proprietary software for the general idea of having source unavailable).

      More to the point though, I received a lecture on this [typepad.com] in a Software Architecture course a couple of years ago and it struck a nerve. Even if you never need to review 99.9% of the code you run, it is nice to be able to look through the 0.1% that might be helpful for you to gain a better understanding of what is going on.

    • by epine ( 68316 )
      But it's not a first-time scan. Amanda was checked long ago, and FreeBSD has been running a Coverity server since Jan 2006.

      http://www.linuxtoday.com/developer/2006031800826OSCYDV [linuxtoday.com]

      http://www.freebsd.org/doc/en_US.ISO8859-1/articles/committers-guide/coverity.html [freebsd.org]

      Worst of all, these articles haven't disclosed the classes of software issues detected. I'm sure huge classes of deadlocks and other system-wide issues go undetected. Even if the point of Coverity is to conduct system-wide analysis, I'd still say lar
  • by ComputerSlicer23 ( 516509 ) on Tuesday January 08, 2008 @08:39PM (#21963784)
    Uh.. from the article, the software is called "Prevent Software Quality System"... Wow, I can't think of a bigger misnomer for something that should help improve software quality. I sure don't want to prevent software quality in my own products.
  • The Actual Scan Site (Score:2, Informative)

    by gQuigs ( 913879 )
  • by OzPeter ( 195038 ) on Tuesday January 08, 2008 @08:51PM (#21963908)
    I checked out the Coverity website [coverity.com] and saw on the list of projects the aalib ASCII art library [sourceforge.net] which according to the history hasn't been updated for something like 7 years.

    Damn, we better protect ourselves from Terrists hiding their WMD's in ASCII art
  • DHS is a dysfunctional mess for the most part when it comes to INFOSEC/IA. Their negative-for-the-sake-of-negativity approach does not surprise me in the least. If it's any comfort, DoD takes FOSS quite seriously and makes use of many great FOSS tools and platforms. It really is a cultural difference. Those in the DoD that get the job done are prone to use 'the best tool for the job'. FOSS is a gimme in many (and an ever-increasing number of) cases.
  • False positives (Score:3, Interesting)

    by clem.dickey ( 102292 ) on Tuesday January 08, 2008 @09:00PM (#21963984)
    The article did not seem to give any data on false positives. A story here [internetnews.com] has Coverity claiming a 10% false positive rate, but there is no independent confirmation. It would also be interesting to know how hard it is to prove a false positive vs. how hard it is to fix a true positive. In other words, is it worth Coverity's time to further reduce the false positive rate?
    • Re: (Score:3, Informative)

      by hyc ( 241590 )
      In the OpenLDAP source base the false positive rate was over 75%.

      Prevent 2.4.6 Lifetime Report (Coverity web UI navigation trimmed):

      Analysis Summary: run count 406; lines of code 125,757,965; file count 333,676
      Defect Summary: bug count 4,223; results count 31,134
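
      Assuming "Results count" is every report the tool emitted and "Bug count" is what survived triage (my reading of the lifetime report above, which may be wrong), the implied noise rate is even higher than 75%:

      #include <stdio.h>

      int main(void)
      {
          double results = 31134, bugs = 4223;   /* from the lifetime report above */
          printf("confirmed: %4.1f%%, rejected: %4.1f%%\n",
                 100.0 * bugs / results, 100.0 * (1.0 - bugs / results));
          /* prints: confirmed: 13.6%, rejected: 86.4% */
          return 0;
      }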

  • Well... (Score:5, Insightful)

    by Otter ( 3800 ) on Tuesday January 08, 2008 @09:02PM (#21964008) Journal
    This seems like a genuinely useful activity for DHS, certainly more valuable than x-raying my shoes and confiscating my saline solution.
  • by ThreeGigs ( 239452 ) on Tuesday January 08, 2008 @09:24PM (#21964168)
    From TFA:
    The popular MySQL open source database was not included in the scans for reasons that were not immediately evident.

    Any suggestions as to why MySQL has no results? I'm stumped and wondering why one whole corner of a LAMP foundation was left unchecked.
    • Scanners can have bugs too. Maybe feeding the MySQL source code into it caused it to error or crash for whatever reason.

      Or maybe licensing issues? Although I doubt it, IIRC MySQL is GPL or something.

    • Re: (Score:3, Informative)

      by mr_mischief ( 456295 )
      They did in 2006 [news.com] and found about 0.224 defects per thousand lines of code.

      MySQL uses Coverity and Klocwork [mysql.com] on their certified versions on several different platforms. The certified versions are based on the major releases of community versions, and are typically just more conservative in that they only make changes for critical and security bugs [livejournal.com].

      There's speculation that the community edition tested was actually an old report without a retest even back then, as the certified version based on that community version had zero defects.
    • by Aladrin ( 926209 )
      Maybe the DHS doesn't use it? I suspect their focus was on software they use, or are planning to use.
  • by ehovland ( 2915 ) * on Tuesday January 08, 2008 @11:00PM (#21964870) Homepage
    First off, Prevent is not strictly a security-flaw static-analysis checker. It is a static-analysis checker that checks for all sorts of defects, some of which are directly related to security. Second, I have used Prevent extensively over the past year and have found it to be an invaluable tool. It has a pretty low false-positive rate, and fixing the defects it finds means your code is better. On the code I work on, I find that we have a much lower defect count. But we also have pretty mature code, and we really do attempt to make it as bulletproof as possible. But we still have defects.

    My experience is with the C/C++ version of the tool. We have also been evaluating the Java version, and it is good, but some of the free alternatives like FindBugs are still better. I would use FindBugs with Prevent for Java if I wanted good coverage.
  • Thank you, DHS, for the contribution to FOSS! We get all the bug fixes, and it will become that much more robust. Too bad that Windows will never get this kind of review. It probably has a few fewer bugs per line, but not much hope of getting those fixed.

    On second thought, Mr Allen, I challenge you to compare! I am willing to bet, just because of its nature of peer review, and from my experience of reading Alan Cox's work on the kernel, that FOSS software has fewer bugs than Windows.
  • While the numbers for the Linux kernel look pretty good, there's a bit more to this story, I think.

    IIRC, Coverity is the commercialization of the "Stanford Checker" static analysis tool. By most accounts, it's a pretty nifty tool. Back when it was still a research project, some of the folks working on it would run it against different parts of the kernel and post the bugs it found to LKML. It was a mutually beneficial relationship--the kernel people fixed a lot of bugs, and the Stanford folks got analysis results from a huge real-world codebase.
