450 Million Lines of Code Can't Be Wrong: How Open Source Stacks Up

An anonymous reader writes "A new report details the analysis of more than 450 million lines of software through the Coverity Scan service, which began in 2006 as the largest public-private sector research project focused on open source software integrity, initiated by Coverity and the U.S. Department of Homeland Security. Code quality for open source software continues to mirror that of proprietary software — and both continue to surpass the industry standard for software quality. Defect density (defects per 1,000 lines of software code) is a commonly used measurement for software quality. The analysis found an average defect density of 0.69 for open source software projects, and an average defect density of 0.68 for proprietary code."
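(For a concrete sense of the metric, here is a minimal C sketch of the defect density calculation; the project size and defect count below are hypothetical, chosen to reproduce the reported open source average, and are not figures from the report.)

    #include <stdio.h>

    /* Defect density = defects per 1,000 lines of code (defects/KLOC). */
    static double defect_density(long defects, long lines_of_code)
    {
        return (double)defects / ((double)lines_of_code / 1000.0);
    }

    int main(void)
    {
        /* Hypothetical project: 690 defects found in 1,000,000 lines. */
        printf("density = %.2f defects/KLOC\n", defect_density(690, 1000000));
        return 0;
    }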
  • Correction (Score:3, Insightful)

    by Sla$hPot ( 1189603 ) on Tuesday May 07, 2013 @09:09AM (#43653161)

    "450 Million Lines of Code Can't Be Wrong"
    should have been
    "450 Million Lines of Code Can't ALL Be Wrong"

  • by Anonymous Coward on Tuesday May 07, 2013 @09:10AM (#43653177)

    Just ask apk!

  • Proprietary defects are ones that may cause financial harm. FOSS defects are ones that cause annoyance.

    I know that our code has more defects than we'd consider fixing purely because the CBA isn't there.

    • by Cenan ( 1892902 ) on Tuesday May 07, 2013 @09:20AM (#43653333)

      Proprietary defects are ones that may cause financial harm. FOSS defects are ones that cause annoyance.

      I know that our code has more defects than we'd consider fixing purely because the CBA isn't there.

      I'm guessing you mean defects in proprietary software only get fixed if they have an impact on the bottom line? Otherwise that whole reply makes no sense.

      Anyways, that is not much different from the OSS model. Whoever cares about the sub-system that has a bug, fixes it, and if nobody cares (or has the skills to fix it) it can go ignored for years. The selector for OSS is different, but the end result is the same: nobody gives a fuck about the end user unless it directly affects their day/paycheck/e-peen.

      • by Bigby ( 659157 )

        No, I think the GP is getting at the point that the proprietary code in the analysis likely includes critical software. Software that needs to work, so they invest the time in making sure it does.

        Meanwhile, the open source side probably included code that is not critical, based on reverse engineering, or experimental in nature. Not that both the proprietary and open source code bases didn't contain both, but I think the context of the code is quite different.

        The results would be much more meaningful…

    • So how's that Kool-Aid?

      Somehow the idea that how a product is licensed will affect its quality is absurd.

      For most open source projects you will only have a small handful of contributors, about the same as for a traditional software company. You've got good developers and bad ones. Some OSS software is just crap; others strive to be excellent. The same goes for commercial applications.

      Any differences in the community vs commercial interest really tend to balance themselves out.
      You have a problem in your…

  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Tuesday May 07, 2013 @09:13AM (#43653219)
    Comment removed based on user account deletion
    • by GrugVoth ( 822168 ) on Tuesday May 07, 2013 @09:18AM (#43653309)
      We use Coverity where I work on proprietary code, and part of their service is to report (anonymously, obviously) the defect count, type, lines of code, etc. back to Coverity if you want to. Via this they can get an idea of the defects found using their tool over a very large code base.
      • by Chris Mattern ( 191822 ) on Tuesday May 07, 2013 @09:58AM (#43653887)

        We use Coverity where I work on proprietary code, and part of their service is to report (anonymously, obviously) the defect count, type, lines of code, etc. back to Coverity IF YOU WANT TO.

        Am I detecting a selection bias here? Coverity can run their tests against all of open source. Coverity can run their tests only against the proprietary code that decides to use it and report the results--and it strikes me that only the better, and more open, proprietary shops would be doing this. Is Microsoft reporting their code? I doubt it. Is Oracle?

          We use Coverity where I work on proprietary code, and part of their service is to report (anonymously, obviously) the defect count, type, lines of code, etc. back to Coverity IF YOU WANT TO.

          Am I detecting a selection bias here? Coverity can run their tests against all of open source. Coverity can run their tests only against the proprietary code that decides to use it and report the results--and it strikes me that only the better, and more open, proprietary shops would be doing this. Is Microsoft reporting their code? I doubt it. Is Oracle?

          I doubt they ran it against all open source software; just some subset that ideally mirrored the proprietary code in complexity and application. If so, it would be a reasonable comparison. Since TFA says they used some 300 OSS programs of various sizes, I'd say it was a reasonable approximation of real world defect rates. Since TFA doesn't name any proprietary products included in the survey it is harder to decide if they are valid results, but I am willing to give them the benefit of the doubt.

            Even then, not a reasonable comparison. The ability of the scanned proprietary software's teams to decide on inclusion feels to me like it would really influence the stats.

            Would you expect there to exist any correlation between how shoddy software is and how likely the authors are to share information about how shoddy their software is? I would expect some correlation.

              Even then, not a reasonable comparison. The ability of the scanned proprietary software's teams to decide on inclusion feels to me like it would really influence the stats.

              Would you expect there to exist any correlation between how shoddy software is and how likely the authors are to share information about how shoddy their software is? I would expect some correlation.

              Let's accept the premise that proprietary vendors only submitted what they considered their best code. If the code bases tested matched similar-function OSS codebases, then it is a valid comparison of similar types of software. It would say that OSS and proprietary software of similar functionality have similar defect rates (for certain size code bases). As with any study, the results should be taken with a grain of salt until you see the underlying methodology and data. That, of course, will not stop people…

        • by chill ( 34294 )

          You might try just RTFA.

          ...and an average defect density of .68 for proprietary code developed by Coverity enterprise customers.

        • Is Mircrosoft reporting their code?

          That would be unfairly skewing the numbers upwards against proprietary software, what with both Windows RT and 8 being completely defective and all.

        • It at least is supposedly anonymous so MS and Oracle could very well be reporting their code. If Coverity allowed you to search by size of projects it might give them away: hmm OS project with 500M lines of code, who could that be?

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Wrong. There are quite a few organizations who have access to Windows source code, yet Windows is still proprietary software. Proprietary just means that you cannot freely share, not that you have no chance to get the source code.

      • Is AOL [iwastesomuchtime.com] one of them?
      • Wrong. There are quite a few organizations who have access to Windows source code, yet Windows is still proprietary software.

        For the purposes of evaluating the Coverity Scan results, it's irrelevant whether other organizations have access to Windows source code. The question is: Does Coverity have that access, and did they use it in compiling their results? I will admit I don't know, but I sincerely doubt it. According to the article, the proprietary results are only from those who are Coverity clients.

          Coverity doesn't need to see the source code, they just need the client to send them the numbers their tool spat out. Many will, some won't, but I don't think that has anything to do with wanting to hide poor results, communism, or greed.
  • by Anonymous Coward on Tuesday May 07, 2013 @09:15AM (#43653259)

    "Code quality for open source software continues to mirror that of proprietary software — and both continue to surpass the industry standard for software quality."

    What is this third kind of software that is neither open source nor proprietary which is bringing down the average industry standard for software quality? Because if there is only open source and proprietary then they can't both be better than average. Or perhaps the programmers are from Lake Wobegon?

    • by clodney ( 778910 )

      "Code quality for open source software continues to mirror that of proprietary software — and both continue to surpass the industry standard for software quality."

      What is this third kind of software that is neither open source nor proprietary which is bringing down the average industry standard for software quality? Because if there is only open source and proprietary then they can't both be better than average. Or perhaps the programmers are from Lake Wobegon?

      I had the same reaction, right down to the Lake Wobegon reference. Perhaps they are differentiating between software offered for sale versus tools internal to a business? To some extent that would also explain the difference in quality - cost to fix is much higher if you have shipped thousands of copies, versus telling the one consumer of a report in finance to ignore the one number that is wrong.

      • Wow, yeah, I posted an almost identical sentence myself. Eerie. (Although I didn't have a Wobegon reference... sorry). But yeah, it seems like an odd sentiment. Internal use software is still either "proprietary" or "open source"... isn't it? But good point. If someone calculated the bugs in my excel macros as if they could be used for general purpose computing I'd be in sad shape. (ObNote: I use excel macros as rarely as possible, and normally only at gunpoint).

      • "Code quality for open source software continues to mirror that of proprietary software — and both continue to surpass the industry standard for software quality."

        What is this third kind of software that is neither open source nor proprietary which is bringing down the average industry standard for software quality? Because if there is only open source and proprietary then they can't both be better than average. Or perhaps the programmers are from Lake Wobegon?

        I had the same reaction, right down to the Lake Wobegon reference. Perhaps they are differentiating between software offered for sale versus tools internal to a business? To some extent that would also explain the difference in quality - cost to fix is much higher if you have shipped thousands of copies, versus telling the one consumer of a report in finance to ignore the one number that is wrong.

        An industry standard has nothing to do with actual practice. It is not an average. All it says is that an acceptable error rate is x.

    • by ZorroXXX ( 610877 ) <(hlovdal) (at) (gmail.com)> on Tuesday May 07, 2013 @09:43AM (#43653629)
      The selection of sample projects is biased. For proprietary software, the data is taken from projects that care at least enough about code quality to run some tools (e.g. at least Coverity) to analyse it. I would suspect that the industry standard is below that, because there exist some companies that mostly only consider "get the product out the door". For open source the selection is probably also somewhat skewed, in that they have analysed relatively large, mature and highly successful projects. I would assume those have higher quality than the average sourceforge/github project.
      • by Gr33nJ3ll0 ( 1367543 ) on Tuesday May 07, 2013 @09:59AM (#43653909)
        This is a good point. To build on it, the proprietary code reported here has had Coverity at least run against it, and usually the problems it reports fixed. This does not appear to have been done in the case of the open source software, which was just scanned but never given a chance to fix the reported issues. In that circumstance I would have expected a much, much higher result for the open source software, because Coverity often reports on very pedantic issues, which are often not important to overall software quality. Further, these issues would not show up in anything other than Coverity, making the initial scan the first time these issues were brought to light.
      • by Cenan ( 1892902 )

        The selection is biased, yes. But not for the reason you imagine. It's biased because only developers who care about the quality of their code run tools to determine that quality. All the shitty OSS and proprietary code out there didn't participate in the study. The dataset was built with usage statistics from the service, and you have to register your project with Coverity in order to participate.

    • Industry Standard != Industry Average
    • "What is this third kind of software that is neither open source nor proprietary which is bringing down the average industry standard for software quality?"

      Internally-written software that is not being released for "external" consumption, perhaps? There's likely far more of that in use than what is being sold for profit or being given away.

    • I just read the Wikipedia article on Lake Wobegon, and it seems that you are referring to "The Lake Wobegon Effect". Unfortunately this term is ill-coined. It is indeed possible for all the children to be above average in fictional Lake Wobegon, since they are a very small subset of all the children in the world.

      "What is this third kind of software that is neither open source nor proprietary which is bringing down the average industry standard for software quality?"

      As far as exceeding industry standards, you…

    • by AmiMoJo ( 196126 ) *

      Perhaps it means commercial mass market software, as opposed to vertical software written for one company or that is only used to support one hardware product.

    • What is this third kind of software that is neither open source nor proprietary which is bringing down the average industry standard for software quality?

      Anything featuring the words SCADA or Nuclear Reactor.

  • by Hentes ( 2461350 ) on Tuesday May 07, 2013 @09:16AM (#43653271)

    Errors per lines of code may give you a hard number, but that number has nothing to do with the quality of code. It only takes one well-placed error to ruin a piece of software.

    • It's still a better measure than not trying to measure it at all.
      • by Hentes ( 2461350 )

        No it isn't. In this case, you have to ask the subjective but professional opinion of developers.

      • You are wrong, and here's why.

        With no measurements at all, you cannot make informed judgments about the quality of your software. You can only guess. This means you would be unable to convince anyone (sane and intelligent) that your product has n bugs. "Because I say so" is not a metric.

        With a poor measurement--such as one that ranks all defects equally--you have information, but now it's bad information. If you share the information but not the method(s) used to gather it, you can convince people you're right, because you have data about it. Never mind if you are stacking up Product A with 1 show-stopping bug against Product B with 50 cosmetic bugs or unhandled corner cases. By this bugcount-only metric, Product A looks better, and that's just stupid.

        You need good measurements, and sometimes that includes measurements which cannot be quantitatively calculated without human intervention. A human programmer (or QA or other support person) who is familiar with a product will know just how severe a given bug is in terms of its impact. It is why, after all, bug tracking systems generally allow you to prioritize work by severity, fixing the worst bugs first.

        Poor information is worse than no information because it can lead you to make the wrong decisions with confidence. With no information, at least you know you are shooting in the dark.
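
        To make that concrete, here is a toy C sketch of how a severity-weighted count differs from a raw count (the weights are invented for illustration, not taken from any real tracker):

        #include <stdio.h>

        /* Invented severity weights: a raw bug count treats a show-stopper
           and a cosmetic glitch identically; a weighted count does not. */
        enum severity { COSMETIC = 1, MINOR = 3, MAJOR = 10, SHOWSTOPPER = 50 };

        int main(void)
        {
            /* Product A: 1 show-stopping bug. Product B: 50 cosmetic bugs. */
            int raw_a = 1, raw_b = 50;
            int weighted_a = 1 * SHOWSTOPPER;   /* 50 */
            int weighted_b = 50 * COSMETIC;     /* 50 */

            printf("raw:      A=%d B=%d\n", raw_a, raw_b);
            printf("weighted: A=%d B=%d\n", weighted_a, weighted_b);
            /* By raw count alone, A looks 50x "better" than B; weighted,
               they come out even. The metric drives the verdict. */
            return 0;
        }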

        • Never mind if you are stacking up Product A with 1 show-stopping bug against Product B with 50 cosmetic bugs or unhandled corner cases.

          Ordinarily I would agree with this, but there is a caveat to consider - that one "show-stopping" bug might only be seen by 5 or 10 percent of your userbase, who would quickly learn not to use the feature that triggers that bug, but those 50 cosmetic bugs will become so visible and glaring and unavoidable that you'll have users going, "Good G*d, this thing looks like shit! How can I trust such a crappily-written program?", especially if those users are part of the general public, rather than a closed business…

    • by Rich0 ( 548339 )

      Errors per lines of code may give you a hard number, but that number has nothing to do with the quality of code. It only takes one well-placed error to ruin a piece of software.

      Better still, how do you even measure it? I can understand REPORTED errors per lines of code, but not errors per line of code. How do you know if a line of code contains an error?

      And the differences could be a matter of error reporting. When was the last time you were able to log something on Microsoft's bug-tracking DB?

  • Actually, this study does not say anything directly about code quality, because Density = Total Defects Found / Code Size. The problem is with the "Total Defects Found" part. How they are found and how they are reported may differ vastly from one project/company to another. The report says that the quality of code increases with larger codebases in proprietary projects. In fact, the best you can say is that the metric decreases with larger codebases in proprietary projects. Maybe many of the defects have no…
  • by Cyko_01 ( 1092499 ) on Tuesday May 07, 2013 @09:26AM (#43653389) Homepage
    Everyone knows OSS doesn't have defects, it just develops random features
    • Relevant: http://www.xkcd.com/1172/ [xkcd.com]

    • The problem with open source software isn't the code quality, it's poor UI and poor documentation. Way too many open source projects bring on great programmers, but few, if any, designers or technical writers. The result is software with great functionality, but buried beneath horrid UIs and poor (or non-existent) documentation. I wish I had a nickel for every OSS project website I've been to where the only documentation in sight was a long list of bug-fixes, or whose UI was so confusing as to make it unclear…

  • by ZahrGnosis ( 66741 ) on Tuesday May 07, 2013 @09:28AM (#43653431) Homepage

    and both [proprietary and open-source software] continue to surpass the industry standard for software quality

    ... What else is there? And why is this unknown third type of code dragging down the "industry"?

    • Exactly my question, what else is there besides proprietary and open source? How can they both surpass industry standards?

      • Exactly my question, what else is there besides proprietary and open source? How can they both surpass industry standards?

        I think that's based on the unstated and unsupported -- but not entirely unreasonable -- assumption that proprietary and open source projects that don't care enough about quality to run Coverity on their code have lower quality levels than those that do.

        However, I don't believe Google uses Coverity, and we have a pretty serious focus on code quality. At least, in my 20+-year career I haven't seen any other organization with quality standards as high as Google's, so I'd put Google forth as a counterexample.

    • There is a huge third group: the military and aerospace industries. Unfortunately, their standards are even higher, like one bug per 420,000 lines of code [fastcompany.com], so they're obviously not the group we need to make this math work.

      Maybe the "industry standard" is whatever buggy math it is that makes that statement make sense to the original author?

      • Military still seems "proprietary" to me. If they meant "commercial", I could see a difference. I also considered "embedded" or "firmware" style code that, while software, is more closely tied to a physical hardware implementation. All of those still seem either "proprietary" or "open source", though, and you're right (@stillnotelf) that these would raise rather than lower industry averages.

        It could include things like javascript that is just out-in-the-wild. If you were to strip programmatic pieces from…

        • Military still seems "proprietary" to me. If they meant "commercial", I could see a difference.

          You're right on the denotations...but I think by connotation and common use, there's a difference. For example, there is no way the US government is reporting bugs in its fighter jet code to Coverity, even anonymously. Maybe we can call military "ultraproprietary" or "hyperproprietary" or "guys-in-black-suits-etary." I think maybe the "standard" is the old average - in past years, with no way to accurately get data, error rates were estimated at 1 defect per 1000 lines. Now they're lower - either code is…

      • That is a very specialized segment of military you're referring to. I've been involved in enterprise software development in the military, and their standards weren't anything like that.
  • by tedgyz ( 515156 ) on Tuesday May 07, 2013 @09:29AM (#43653445) Homepage

    Quality metrics can have unexpected side effects [dilbert.com].

    • Re: (Score:2, Funny)

      /* This
      * comment
      * is
      * part
      * of
      * the
      * corporate
      * edict
      * to
      * reduce
      * the
      * defect
      * rate
      * reported
      * by
      * Coverity
      */
        printf("hello world\n");
      • by rnturn ( 11092 )

        Good grief... I certainly hope that Coverity's analyzer strips out comments before it starts evaluating code. Even the dimmest pointy-haired manager would see right through that scam.

        • by sjwt ( 161428 )

          Or maybe it counts comments as errors; if you need to comment on your code, it's not intuitive enough!

        • Good grief... I certainly hope that Coverity's analyzer strips out comments before it starts evaluating code. Even the dimmest pointy-haired manager would see right through that scam.

          I'm pretty sure that you're underestimating how dim managers can be.

      • Most code metrics (except for those that specifically evaluate comments) strip out comments before compiling. However, you can always do this:

        print
          (
              "hello world\n"
          )
        ;

        Could probably split up the string too, but I'm too lazy to look up the exact syntax for that.

        • Most code metric tools don't use newlines either, but only count ";", "," and "}".
          (Because, depending on definition ... see: Watts S. Humphrey, Personal Software Process ... every parameter you pass to a function is considered ONE LINE OF CODE.)
          So this: f(1, 3*4, "sup?") and this
          f(1,
          3*4
          ,
          "sup?")
          are the same lines of code.
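
          A rough C sketch of that counting rule (an illustration of the PSP-style definition above, not Coverity's actual tokenizer; a real one would also skip string literals and comments):

          #include <stdio.h>

          /* Count "logical" lines the crude way: every ';', ',' and '}'
             terminates one logical line, regardless of physical newlines. */
          static int logical_loc(const char *src)
          {
              int count = 0;
              for (; *src; src++)
                  if (*src == ';' || *src == ',' || *src == '}')
                      count++;
              return count;
          }

          int main(void)
          {
              /* Both formattings of the same call count identically. */
              printf("%d\n", logical_loc("f(1, 3*4, \"sup?\");"));        /* 3 */
              printf("%d\n", logical_loc("f(1,\n3*4\n,\n\"sup?\")\n;"));  /* 3 */
              return 0;
          }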

  • by swm ( 171547 ) * <swmcd@world.std.com> on Tuesday May 07, 2013 @09:34AM (#43653507) Homepage

    FTA:

    As projects surpass one million lines of code, there’s a direct correlation between size and quality for proprietary projects, and an inverse correlation for open source projects.

    The article gives numbers: above 1M LOC, defect density increases for open source projects, and decreases for proprietary projects.
    Increasing defect density with size is plausible: beyond a certain size, the code base becomes intractable.
    Decreasing defect density with size is harder to understand: why should the quality fairy only visit especially big proprietary projects?

    Perhaps the way those proprietary projects get into the MLOC range in the first place is with huge tracts of boilerplate, duplicated code, or machine-generated code.
    That would inflate the denominator in the defects/KLOC ratio.
    But then that calls the whole defects/KLOC metric into question.

    • Decreasing defect density with size is harder to understand: why should the quality fairy only visit especially big proprietary projects?

      That might have something to do with market penetration and resources dedicated to maintenance...? Those huge proprietary projects probably happen to be the stuff that almost everyone gets to use.

    • by gutnor ( 872759 )
      Maybe also 1 MLOC means popular in both the OSS and proprietary worlds. In the proprietary world, popular slowly becomes legacy, the stuff you cannot change. In the OSS world, on the other hand, popular means loads more contributions from people; the maintainers choose to keep quality up on the core features while the community goes wild with the rest of the codebase.
  • Why on earth did they choose two colours that are hard to tell apart in that graph? They were black and dark blue. It took me several seconds to work out which was which. Many other reports seem to do similar.

  • by 140Mandak262Jamuna ( 970587 ) on Tuesday May 07, 2013 @09:40AM (#43653581) Journal
    First of all, code quality is difficult to measure, and the number of (known) defects per 1,000 lines of code is a very poor metric. I could do more (good and bad) in one line of code than a novice who writes voluminous code. Leaving that aside, what drives/motivates creating good quality code?

    In open source, a defect gets fixed when someone feels the urge to fix it. Most of the time it is because it is their own dog food. Many open source projects are actually used by their own developers, and they fix the issues that irritate them most. The rest of the bugs get fixed based on their impact on other users and passion about the software project.

    In a closed source project, it is often the bugs that affect the loudest paying customer that get fixed. If it is not going to advance sales, it won't get fixed.

    Given this dynamic it is not at all surprising both methods have similar levels of that elusive "quality". I think software development should eventually follow the model of academic research. There is scientific research done by the universities that has no immediate application or exploitation potential. The tenured academic professors teach courses and do research on such topics. Then, as the commercialization potential gets understood, it starts going towards sponsored projects and eventually it goes into commercial R&D and product development.

    Similarly, we could envision people who teach programming languages in college maintaining open source projects. The students develop features and fix bugs for course credit. As the project matures, it might go commercial or might stay open source or it could become a fork. The professors who maintain such OSS projects should get similar bragging rights and prestige to professors who publish academic research on language families or bird migration or the nature of urban planning in ancient Rome.

    • Given the massive bias the US government has towards expensive private software contractors, I am surprised the results were so close.

      MBAs, Politicians and incompetent journalists LOVE poor metrics. Americans love simplistic binary metrics (sorry no citation just experience, it's the culture.)

      Remember klocs? That went on a while. Sounds like this metric dates back to those days-- they don't measure programmers by 1000s of lines coded anymore, but they didn't learn their lesson and kept the defect rate measurement…

      • Given the massive bias the US government has towards expensive private software contractors, I am surprised the results were so close.

        Well, it could be that there really isn't a correlation between quality and what you pay for programming, at least beyond some point, so a good, but lower paid, open source programmer writes just as good code as a good, but higher paid proprietary programmer.

        Or, it could be the higher paid programmers really do turn out better code, but the nature of open source, with multiple people reviewing it, mitigates the difference. I hate to use a sports analogy, but I will anyway. I am a lousy golfer, but I can putt…

          I meant to say the US Gov likes to support its highly paid contractors, who in turn "contribute" to its politicians.

          As far as software quality, it is largely a numbers game. The more eyeballs the better. Sure, some people are better than others, but overall the majority are of average skill, and if you just throw a ton of them at it you'll more than make up for the smart ones. Open or not, it comes down to the human power put into it. That being said, project leaders probably have more to do with success/failure…

    • number of (known) defects per 1000 lines of code is a very poor metric

      It's not a poor metric, but it is a metric of something which isn't very useful. If I already knew the unfixed defects in the product, I'd just fix them.

      More useful metrics relate to simplicity and testability. Is every module understandable on its own by a cleanroom reviewer who first saw it ten minutes ago? How free is the code from hand-tuning? How few parameters are passed? Are there state variables that go uninitialized? How small are the largest individual modules? How completely does the test coverage…

    • by Rich0 ( 548339 )

      First of all code quality is difficult to measure, and the number of (known) defects per 1000 lines of code is a very poor metric.

      That word "known" is a BIG one. It is critical to the metric, and I'd strongly question whether the known vs actual ratio is the same in proprietary and open-source software. The latter usually makes it MUCH easier to report problems, but on the other hand usually involves less structured or regression testing.

      If I'm using OpenOffice and it doesn't paginate a document correctly, I just log an entry on their bugzilla (or whatever they use). If MS Word does the same thing, I hit print preview a few times and…

  • Code quality for open source software continues to mirror that of proprietary software — and both continue to surpass the industry standard for software quality. Defect density (defects per 1,000 lines of software code) is a commonly used measurement for software quality.

    Since there are two types of software, open source and proprietary, and both of them surpass the industry standard for software quality, what exactly is the industry standard based on?

    The article states that the industry standard is 1 defect per 1,000 lines of code. But at the rates given, open source is 1 defect in 1,449 lines of code and proprietary software is 1 defect in 1,470 lines of code. Maybe it's time to change the industry standard?
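
    The conversion is just the reciprocal of the density; a one-liner check of that arithmetic:

    #include <stdio.h>

    int main(void)
    {
        /* lines per defect = 1000 / (defects per KLOC) */
        printf("open source:  1 defect per %.0f lines\n", 1000.0 / 0.69); /* ~1449 */
        printf("proprietary:  1 defect per %.0f lines\n", 1000.0 / 0.68); /* ~1471 */
        printf("industry std: 1 defect per %.0f lines\n", 1000.0 / 1.00); /* 1000 */
        return 0;
    }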

  • Counterintuitively, defect density is actually an INVERSE indication of quality - better quality code will have MORE defects per line.
    The reason I say that is because better code has fewer lines per problem. Consider strcpy(), a function to copy an array of characters (a C string). Suppose you can't use strcpy() in your code; you're supposed to recreate strcpy(), copying each element of the array.
    Take a moment to consider how you'd write that before looking below.

    Roughly how many lines of code did you use to copy an array…
    • by Rich0 ( 548339 )

      Here's all the code you need, what a better programmer would write:
      while (*dest++ = *src++);

      The problem with that is that when the next non-expert developer comes along they won't grok the code and might break something when things change. Suppose the string delimiter changes for some reason - would a non-expert even appreciate that you're checking for the delimiter there?

      Compact code is not necessarily better, unless you accompany it with a comment or something. You also omitted the extra code to set your pointers to the start of the string (though you also omitted initializing your loop counter…

    • by chgros ( 690878 )

      while (source[i] != '\0')
      {
          dest[i] = source[i];
          i++;
      }

      So one error in that code would be 1 defect per five lines or so.

      Here's all the code you need, what a better programmer would write:
      while (*dest++ = *src++);

      Your "better code" is actually not equivalent (the first loop doesn't copy the nul terminator). Even if it was equivalent, I don't think I would necessarily call it "better". This particular piece happens to be fairly idiomatic and

  • What are they counting as a "defect"?

    Their FAQ [coverity.com] lists examples, but ends with "and many more".

    Which leads us to the question of who set the "industry standard" at 1.0, and what did THEY define "defect" to mean? If it is a standard there should be a standard list of defect types.

  • "Coverity Scan service ... was initiated between Coverity and the U.S. Department of Homeland Security"

    If software is a "Homeland Security" issue, shouldn't they be focusing on the proprietary software that most consumers, businesses and government agencies are using?

    • It doesn't matter if it's proprietary or open source, the danger is in any system that is compromised.
      Homeland security needs to protect infrastructure and other interests that can impact that state of the nation. Something as benign as somebody hacking the AP twitter feed and posting that a bomb injured the president cost the market over $100B. A series of hacking attacks can result in economic or social destabilization.
      Software is also built in layers, so some parts are proprietary, others are open, but…
  • You can have poorly written code, but a good program.

    You can have perfectly written code, but a shoddy bit of software.

    Take for example an OS that hangs because the network layer is pegging the CPU somewhere somehow.

    Vs an OS that continues to be responsive even if the network layer is overloaded.

    So if they are looking at 0.69 defect density vs 0.68 defect density, the community-driven software, which is designed for an end user rather than for a marketing staff to force upon an end user, is going to be close to 100% better…

  • Just because you get paid to program doesn't mean you crap daisies and unicorns.

    I've seen the guts of a fair bit of commercial code, and it's usually not that great. Couple of stories: back in the OS/2 days I had a customer complain that the OS/2 time API specified you could set milliseconds, but this didn't appear to be the case. Well, I just so happened to have access to the assembly language function in OS/2 that did that (IIRC it was shipped on one of their dev CDs) and upon examination it appeared that…

  • What this tells me is that current business practices are flawed. There are commercial software companies that are able to produce quality code that exceed most, if not all, open source projects. But such companies are not the norm.

    Here are some questions we should ask:
    1. Does commercial software have a realistic incentive to reach for excellence in coding? Maybe...
    2. Does commercial software have enough resources to produce excellent software? Demonstrably so, but...
    3. Does commercial software use their resources…
