450 Million Lines of Code Can't Be Wrong: How Open Source Stacks Up
An anonymous reader writes "A new report details the analysis of more than 450 million lines of software through the Coverity Scan service, which began as the largest public-private sector research project focused on open source software integrity, and was initiated between Coverity and the U.S. Department of Homeland Security in 2006. Code quality for open source software continues to mirror that of proprietary software — and both continue to surpass the industry standard for software quality. Defect density (defects per 1,000 lines of software code) is a commonly used measurement for software quality. The analysis found an average defect density of .69 for open source software projects, and an average defect density of .68 for proprietary code."
Correction (Score:3, Insightful)
"450 Million Lines of Code Can't Be Wrong"
should have been
"450 Million Lines of Code Can't ALL Be Wrong"
Re: (Score:2)
You mean: "exit(-1);" ?
Re:Correction (Score:5, Funny)
It means over 300,000 lines of code are wrong, most of them in the app I keep trying to use.
Re: (Score:2)
No, just one critical line in the constructor of each project. All broken, but broken slightly :) Bugs don't matter until they affect features I use :)
Re: (Score:2)
Your English is fine. Your German isn't.
Nein! Nein! Nein!
P.S. Reductio ad absurdum much.
P.P.S. If you confess to knowing Luxemburgish, just how A do you think your C is going to be?
Re: (Score:2)
Well, it certainly isn't an operating system either.
65 million lines of HOST file can't be wrong (Score:3, Funny)
Just ask apk!
Defects fixed for proprietary may differ. (Score:2, Informative)
Proprietary defects are ones that may cause financial harm. FOSS defects are ones that cause annoyance.
I know that our code has more defects than we'd consider fixing purely because the CBA isn't there.
Re:Defects fixed for proprietary may differ. (Score:5, Insightful)
Proprietary defects are ones that may cause financial harm. FOSS defects are ones that cause annoyance.
I know that our code has more defects than we'd consider fixing purely because the CBA isn't there.
I'm guessing you mean defects in proprietary software only get fixed if they have an impact on the bottom line? Otherwise that whole reply makes no sense.
Anyways, that is not much different from the OSS model. Whoever cares about the sub-system that has a bug, fixes it, and if nobody cares (or has the skills to fix it) it can go ignored for years. The selector for OSS is different, but the end result is the same: nobody gives a fuck about the end user unless it directly affects their day/paycheck/e-peen.
Re: (Score:2)
No, I think the GP is getting at the point that code analyzed in the analysis likely includes critical proprietary software. Software that needs to work and so they invest the time in making sure it does.
Meanwhile, the open source side probably included code that is not critical, based on reverse engineering, or experimental in nature. Not that both the proprietary and open source code bases didn't contain both, but I think the context of the code is quite different.
The results would be much more meaningf
Re: (Score:3)
Correct; as a developer I'd like to fix bugs in my enterprise-level software, but the overhead required to do so is high enough to outweigh the benefits.
On the other hand, if you are a developer for a FOSS project you are able to correct broken code at your leisure, submit it for peer review and have it accepted. The overhead is still there but it's spread out to feel less visible (IMHO). I'm perfectly fine writing a fix and having it be rejected for being inadequate. I'm less okay with spending 4 hours having people sign things so that I can correct a defect that takes 2 minutes of development work.
Well sure, but that really has nothing to do with the cost of a bug. The motivation for fixing the bug may be different, but the data release tells us that the two have nearly identical impact when ground down to a (useless) number.
We could go into the merits of using static analysis of code to say anything about code quality other than: "there are 0.69 rookie mistakes for every 1000 lines of code", but that is a whole other story.
The article does a poor job of covering up that this is Coverity peddling the
Re: (Score:3)
So how's that Kool-Aid?
The idea that how a product is licensed will somehow affect its quality is absurd.
For most open source projects you will only have a small handful of contributors, about the same as for a traditional software company. You get good developers and bad ones. Some OSS software is just crap; other projects strive to be excellent. The same goes for commercial applications.
Any differences in the community vs commercial interest really tend to balance themselves out.
You have a problem in your
Re: (Score:2)
A fangirl?
No. The exact opposite would be a HateBitch.
Re:it contradicts the definition (Score:5, Informative)
Re:it contradicts the definition (Score:5, Interesting)
Am I detecting a selection bias here? Coverity can run their tests against all of open source. Coverity can run their tests only against that proprietary code that decides to use it and report the results--and it strikes me that only the better, and more open, proprietary shops would be doing this. Is Microsoft reporting their code? I doubt it. Is Oracle?
Re: (Score:3)
Am I detecting a selection bias here? Coverity can run their tests against all of open source. Coverity can run their tests only against that proprietary code that decides to use it and report the results--and it strikes me that only the better, and more open, proprietary shops would be doing this. Is Microsoft reporting their code? I doubt it. Is Oracle?
I doubt they ran it against all open source software; just some subset that ideally mirrored the proprietary code in complexity and application. If so it would be a reasonable comparison. Since TFA says they used some 300 OSS programs of various sizes I'd say it was a reasonable approximation of real world defect rates. Since the TFA doesn't name any proprietary products included in the survey it is harder to decide if they are valid results but I am willing to give them the benefit of doubt.
Re: (Score:3)
Even then not a reasonable comparison. The ability for the scanned proprietary softwares' teams to decide on inclusion feels to me like it would really influence the stats.
Would you expect there to exist any correlation between how shoddy software is and how likely the authors are to share information about how shoddy their software is? I would expect some correlation.
Re: (Score:3)
Even then not a reasonable comparison. The ability for the scanned proprietary softwares' teams to decide on inclusion feels to me like it would really influence the stats.
Would you expect there to exist any correlation between how shoddy software is and how likely the authors are to share information about how shoddy their software is? I would expect some correlation.
Let's accept the premise that proprietary vendors only submitted what they considered their best code. If the code bases tested matched similar-function OSS codebases, then it is a valid comparison of similar types of software. It would say that OSS and proprietary software of similar functionality has similar defect rates (for certain size code bases). As with any study, the results should be taken with a grain of salt until you see the underlying methodology and data. That, of course, will not stop peop
Re: (Score:2)
Even then not a reasonable comparison. The ability for the scanned proprietary softwares' teams to decide on inclusion feels to me like it would really influence the stats.
Would you expect there to exist any correlation between how shoddy software is and how likely the authors are to share information about how shoddy their software is? I would expect some correlation.
Let's accept the premise that proprietary vendors only submitted what they considered their best code. If the code bases tested matched similar function OSS codebases, then it is a valid comparison of similar types of software.
I don't see how you're answering the grandparent's concern here. Are you assuming that the code quality is only a function of the function of the code? Otherwise, why would an OSS project in category X be better than average just because a better-than-average proprietary X was submitted?
No, I am saying it would be a valid comparison between OSS and proprietary code if programs that perform the same function were tested in each category. OSS quality has no impact on proprietary quality, and vice versa; but if you compare OSS and proprietary web browsers then it would be a reasonable comparison of the quality of each vs the other.
Let's make this a bit more concrete: Imagine that the defect density of proprietary projects is normal-distributed with a mean of 0.6 and a standard deviation of 0.1. Then you would expect the defect density to vary between less than 0.5 and more than 0.7, with 68% lying between those numbers. If high quality projects are preferentially submitted for review, then the mean of that subset will obviously be lower than 0.6. Let's say that the projects that were sent in were a web browser (0.55), a pdf reader (0.49) and a video player (0.50). You're saying that it would be fair if we compared this with open source web browsers, pdf readers and video players. But just as these categories happened to fluctuate low in error density in the proprietary side, these categories may just as well happen to have atypically high error rates on the OSS side. In fact, unless the quality of the project is strongly correlated with the category it is in, then you would expect the mean on the OSS web browsers, pdf readers and video players to be the same as the total mean, since *they were not selected based on their own quality*, unlike the proprietary software.
The problem is you are assuming they all have a normal distribution. While that may be true for a large data set (in fact the central limit theorem would say so); w
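Whether or not defect densities are exactly normally distributed, the self-selection effect itself is easy to demonstrate. Here's a minimal C sketch; the 0.6 mean and 0.1 standard deviation are just the hypothetical numbers from the post above, not figures from the Coverity report, and the "only the good half submits" rule is deliberately crude:

/* Hypothetical simulation of the selection effect discussed above.
 * The 0.6 mean and 0.1 standard deviation are made-up numbers from
 * the comment, not anything taken from the Coverity report. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Box-Muller transform: one sample from a normal distribution. */
static double gauss(double mean, double sd)
{
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return mean + sd * sqrt(-2.0 * log(u1)) * cos(2.0 * M_PI * u2);
}

int main(void)
{
    enum { N = 10000 };
    double d, sum_all = 0.0, sum_submitted = 0.0;
    int i, submitted = 0;

    srand(42);
    for (i = 0; i < N; i++) {
        d = gauss(0.6, 0.1);
        sum_all += d;
        if (d < 0.6) {            /* only the "good" half submits */
            sum_submitted += d;
            submitted++;
        }
    }
    printf("population mean:    %.3f\n", sum_all / N);
    printf("self-selected mean: %.3f\n", sum_submitted / submitted);
    return 0;
}

Run that and the self-selected mean comes out around 0.52 against a population mean of 0.6, which is the whole point: the submitted subset looks better than the population it was drawn from.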
Re: (Score:2)
You might try just RTFA.
...and an average defect density of .68 for proprietary code developed by Coverity enterprise customers.
Re: (Score:3)
Is Mircrosoft reporting their code?
That would be unfairly skewing the numbers upwards against proprietary software, what with both Windows RT and 8 being completely defective and all.
Re: (Score:2)
It at least is supposedly anonymous so MS and Oracle could very well be reporting their code. If Coverity allowed you to search by size of projects it might give them away: hmm OS project with 500M lines of code, who could that be?
Re: (Score:2)
But most defects are not syntax errors.
I'd be surprised if Coverity even bothers with syntax errors. I mean, your compiler catches those, and if the project doesn't even build it's not something you can even theoretically release.
I think it focuses on:
using uninitialized variables, unreachable code, buffer overruns, memory leaks, division by zero, infinite loops, SQL injection vulnerabilities.
potentially it might be able to catch some threading / synchronization / race condition issues as well.
I don't know, I'm
Re: (Score:3)
Coverity performs "Static Analysis". Static analysis is a well respected technique in the industry and can find classes of software defects that are typically difficult to find through code inspection and testing.
Specifically it can analyze entire call chains and find access to null or un-initialized variables. Compilers typically only do this within a single compilation unit (i.e. a file) while an SA tool like Coverity will do it over an entire call chain over multiple files.
Other classes of defects that S
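To make the call-chain point concrete, here is a hypothetical two-file sketch (file and function names are invented, not taken from Coverity's documentation) of the kind of defect a whole-program static analyzer can flag but a compiler looking at one translation unit at a time typically will not:

/* util.c -- hypothetical lookup that can return NULL */
#include <stddef.h>
#include <string.h>

const char *find_option(const char *name)
{
    if (strcmp(name, "verbose") == 0)
        return "1";
    return NULL;   /* callers are expected to check for this */
}

/* main.c -- uses the result without checking */
#include <stdio.h>
#include <string.h>

const char *find_option(const char *name);

int main(void)
{
    const char *v = find_option("quiet");
    /* Possible NULL dereference: looking at main.c alone, a compiler
     * can't know find_option() may return NULL; a static analyzer
     * that follows the call chain across files can flag this. */
    printf("option length: %zu\n", strlen(v));
    return 0;
}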
Re: (Score:2)
Tools to do this are available for free. Most software projects use open-source solutions rather than Coverity.
All Coverity does is put it all in one nice package for continuous integration.
Re: (Score:2)
I agree, there are free tools that do this. Some are pretty good (cppcheck). I'm not going to pimp for Coverity (or other $$ products) but I will say that they offer features over and above what the free stuff does. They also have a pretty high cost. A few of the features are actually very nice additional analysis that I haven't found in free stuff. The majority of the stuff offered by commercial products are based around integration into a large multi-developer environment and defect tracking processes. As
Re: (Score:2)
That's fine if it's expected (and you can tell your SA tool that it's expected so it's not flagged). Unreachable code that's not expected is a maintenance issue at best and in many cases indicates a software defect. At the very least wouldn't you want to be made aware that you have unreachable code?
Re: (Score:2, Insightful)
Wrong. There are quite a few organizations who have access to Windows source code, yet Windows is still proprietary software. Proprietary just means that you cannot freely share, not that you have no chance to get the source code.
Re: (Score:2)
For the purposes of evaluating the Coverity Scan results, it's irrelevant whether other organizations have access to Windows source code. The question is: Does Coverity have that access, and did they use it in compiling their results? I will admit I don't know, but I sincerely doubt it. According to the article, the proprietary results are only from those who are Coverity clients.
and all the children are above average (Score:5, Funny)
"Code quality for open source software continues to mirror that of proprietary software — and both continue to surpass the industry standard for software quality."
What is this third kind of software that is neither open source nor proprietary which is bringing down the average industry standard for software quality? Because if there is only open source and proprietary then they can't both be better than average. Or perhaps the programmers are from Lake Wobegon?
Re: (Score:3)
"Code quality for open source software continues to mirror that of proprietary software — and both continue to surpass the industry standard for software quality."
What is this third kind of software that is neither open source nor proprietary which is bringing down the average industry standard for software quality? Because if there is only open source and proprietary then they can't both be better than average. Or perhaps the programmers are from Lake Wobegon?
I had the same reaction, right down to the Lake Wobegon reference. Perhaps they are differentiating between software offered for sale versus tools internal to a business? To some extent that would also explain the difference in quality - cost to fix is much higher if you have shipped thousands of copies, versus telling the one consumer of a report in finance to ignore the one number that is wrong.
Re: (Score:3)
Wow, yeah, I posted an almost identical sentence myself. Eerie. (Although I didn't have a Wobegon reference... sorry). But yeah, it seems like an odd sentiment. Internal use software is still either "proprietary" or "open source"... isn't it? But good point. If someone calculated the bugs in my excel macros as if they could be used for general purpose computing I'd be in sad shape. (ObNote: I use excel macros as rarely as possible, and normally only at gunpoint).
Re: (Score:2)
"Code quality for open source software continues to mirror that of proprietary software — and both continue to surpass the industry standard for software quality."
What is this third kind of software that is neither open source nor proprietary which is bringing down the average industry standard for software quality? Because if there is only open source and proprietary then they can't both be better than average. Or perhaps the programmers are from Lake Wobegon?
I had the same reaction, right down to the Lake Wobegon reference. Perhaps they are differentiating between software offered for sale versus tools internal to a business? To some extent that would also explain the difference in quality - cost to fix is much higher if you have shipped thousands of copies, versus telling the one consumer of a report in finance to ignore the one number that is wrong.
An industry standard has nothing to do with actual practice. It is not an average. All it says is that an acceptable error rate is x.
Re:and all the children are above average (Score:4, Insightful)
Re:and all the children are above average (Score:4, Interesting)
Re: (Score:2)
The selection is biased, yes. But not for the reason you imagine. It's biased because only developers who care about the quality of their code run tools to determine that quality. All the shitty OSS and proprietary code out there didn't participate in the study. The dataset was built with usage statistics from the service and you have to register your project with Coverity in order to participate.
Re: third kind of software (Score:2)
Internally-written software that is not being released for "external" consumption, perhaps? There's likely far more of that in use than what is being sold for profit or being given away.
Re: (Score:2)
As far as exceeding industry standards y
Re: (Score:2)
Perhaps it means commercial mass market software, as opposed to vertical software written for one company or that is only used to support one hardware product.
Re: (Score:2)
Anything featuring the words SCADA or Nuclear Reactor.
Some things can't be measured objectively (Score:3, Insightful)
Errors per lines of code may give you a hard number, but that number has nothing to do with the quality of code. It only takes one well-placed error to ruin a piece of software.
Re: (Score:2)
No it isn't. In this case, you have to ask the subjective but professional opinion of developers.
Re:Some things can't be measured objectively (Score:4, Insightful)
You are wrong, and here's why.
With no measurements at all, you cannot make informed judgments about the quality of your software. You can only guess. This means you would be unable to convince anyone (sane and intelligent) that your product has n bugs. "Because I say so" is not a metric.
With a poor measurement--such as one that ranks all defects equally--you have information, but now it's bad information. If you share the information but not the method(s) used to gather it, you can convince people you're right, because you have data about it. Never mind if you are stacking up Product A with 1 show-stopping bug against Product B with 50 cosmetic bugs or unhandled corner cases. By this bugcount-only metric, Product A looks better, and that's just stupid.
You need good measurements, and sometimes that includes measurements which cannot be quantitatively calculated without human intervention. A human programmer (or QA or other support person) who is familiar with a product will know just how severe a given bug is in terms of its impact. It is why, after all, bug tracking systems generally allow you to prioritize work by severity, fixing the worst bugs first.
Poor information is worse than no information because it can lead you to make the wrong decisions with confidence. With no information, at least you know you are shooting in the dark.
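To make the Product A / Product B point concrete, even a crude severity-weighted score beats a raw count. A toy C sketch, with entirely made-up weights and categories:

/* Toy illustration of why raw bug counts mislead: weight defects by
 * severity instead of counting them all equally. Weights invented. */
#include <stdio.h>

enum severity { COSMETIC = 1, MINOR = 3, MAJOR = 10, SHOWSTOPPER = 100 };

static int weighted_score(const enum severity *bugs, int n)
{
    int score = 0;
    for (int i = 0; i < n; i++)
        score += bugs[i];
    return score;
}

int main(void)
{
    /* Product A: one show-stopper.  Product B: fifty cosmetic bugs. */
    enum severity a[] = { SHOWSTOPPER };
    enum severity b[50];
    for (int i = 0; i < 50; i++)
        b[i] = COSMETIC;

    printf("raw counts:      A=%d  B=%d\n", 1, 50);
    printf("weighted scores: A=%d  B=%d\n",
           weighted_score(a, 1), weighted_score(b, 50));
    return 0;
}

By raw count A (1 bug) beats B (50 bugs); weighted, A scores 100 against B's 50, which matches the intuition that one show-stopper outweighs fifty cosmetic issues.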
Re: (Score:2)
Never mind if you are stacking up Product A with 1 show-stopping bug against Product B with 50 cosmetic bugs or unhandled corner cases.
Ordinarily I would agree with this, but there is a caveat to consider - that one "show-stopping" bug might only be seen by 5 or 10 percent of your userbase, who would quickly learn not to use the feature that triggers that bug, but those 50 cosmetic bugs will become so visible and glaring and unavoidable that you'll have users going, "Good G*d, this thing looks like shit! How can I trust such a crappily written program?", especially if those users are part of the general public, rather than a closed, busin
Re: (Score:2)
Errors per lines of code may give you a hard number, but that number has nothing to do with the quality of code. It only takes one well-placed error to ruin a piece of software.
Better still, how do you even measure it? I can understand REPORTED errors per lines of code, but not errors per line of code. How do you know if a line of code contains an error?
And the differences could be a matter of error reporting. When was the last time you were able to log something on Microsoft's bug-tracking DB?
The study is not really conclusive (Score:2)
OSS defects (Score:3)
Re: (Score:2)
Relevant: http://www.xkcd.com/1172/ [xkcd.com]
Re: (Score:2)
The problem with open source software isn't the code quality, it's poor UI and poor documentation. Way too many open source projects bring on great programmers, but few, if any, designers or technical writers. The result is software with great functionality, but buried beneath horrid UI's and poor (or non-existent) documentation. I wish I had a nickel for every OSS project website I've been to where the only documentation in sight was a long list of bug-fixes, or whose UI was so confusing as to make it uncl
What else is in the "industry"? (Score:5, Insightful)
Re: (Score:2)
Exactly my question, what else is there besides proprietary and open source? How can they both surpass industry standards?
Re: (Score:2)
Exactly my question, what else is there besides proprietary and open source? How can they both surpass industry standards?
I think that's based on the unstated and unsupported -- but not entirely unreasonable -- assumption that proprietary and open source projects that don't care enough about quality to run Coverity on their code have lower quality levels than those that do.
However, I don't believe Google uses Coverity, and we have a pretty serious focus on code quality. At least, in my 20+-year career I haven't seen any other organization with quality standards as high as Google's, so I'd put Google forth as a counterexample
Re: (Score:2)
There is a huge third group: the military and aerospace industries. Unfortunately, their standards are even higher, like one bug per 420000 lines of code, [fastcompany.com] so they're obviously not the group we need to make this math work.
Maybe the "industry standard" is whatever buggy math it is that makes that statement make sense to the original author?
Re: (Score:2)
Military still seems "proprietary" to me. If they meant "commercial", I could see a difference. I also considered "embedded" or "firmware" style code that, while software, is more closely tied to a physical hardware implementation. All of those still seem either "proprietary" or "open source", though, and you're right (@stillnotelf) that these would raise rather than lower industry averages.
It could include things like javascript that is just out-in-the-wild. If you were to strip programmatic pieces fro
Re: (Score:2)
Military still seems "proprietary" to me. If they meant "commercial", I could see a difference.
You're right on the denotations...but I think by connotation and common use, there's a difference. For example, there is no way the US government is reporting bugs in its fighter jet code to Coverity, even anonymously. Maybe we can call military "ultraproprietary" or "hyperproprietary" or "guys-in-black-suits-etary." I think maybe the "standard" is the old average - in past years, with no way to accurately get data, error rates were estimated at 1 defect per 1000 lines. Now they're lower - either code i
Unforeseen consequences (Score:3)
Quality metrics can have unexpected side effects [dilbert.com].
Re: (Score:2, Funny)
/*
* This
* comment
* is
* part
* of
* the
* corporate
* edict
* to
* reduce
* the
* defect
* rate
* reported
* by
* Coverity
*/
printf("hello world\n");
Re: (Score:2)
Good grief... I certainly hope that Coverity's analyzer strips out comments before it starts evaluating code. Even the dimmest pointy-haired manager would see right through that scam.
Re: (Score:2)
Or maybe it counts comments as errors; if you need to comment on your code, it's not intuitive enough!
Re: (Score:2)
Good grief... I certainly hope that Coverity's analyzer strips out comments before it starts evaluating code. Even the dimmest pointy-haired manager would see right through that scam.
I'm pretty sure that you're underestimating how dim managers can be.
Re: (Score:2)
Most code metrics (except for those that specifically evaluate comments) strip out comments before counting. However, you can always do this:
printf
(
"hello world\n"
)
;
Could probably split up the string too, but I'm too lazy to look up the exact syntax for that.
Re: (Score:2)
Most code metric tools don't use newlines either, but only count ";", "," and "}" (see Watts S. Humphrey, Personal Software Process). Because depending on the definition, every parameter you pass to a function is considered ONE LINE OF CODE.
So this: f(1, 3*4, "sup?") and this
f(1,
3*4
,
"sup?")
are the same lines of code.
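A rough sketch in C of what such a counter looks like, in the spirit of what's described above (this is a simplification for illustration, not Humphrey's actual tooling; real tools would also skip comments and string literals):

/* Naive logical-LOC counter: count ';', ',' and '}' rather than
 * newlines, so reformatting a call across several physical lines
 * doesn't change the count. Comments and strings not handled. */
#include <stdio.h>

int main(void)
{
    int c, count = 0;
    while ((c = getchar()) != EOF)
        if (c == ';' || c == ',' || c == '}')
            count++;
    printf("logical LOC: %d\n", count);
    return 0;
}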
The quality fairy (Score:3)
FTA:
The article gives numbers: above 1M LOC, defect density increases for open source projects, and decreases for proprietary projects.
Increasing defect density with size is plausible: beyond a certain size, the code base becomes intractable.
Decreasing defect density with size is harder to understand: why should the quality fairy only visit specially big proprietary projects?
Perhaps the way those proprietary projects get into the MLOC range in the first place is with huge tracts of boilerplate, duplicated code, or machine-generated code.
That would inflate the denominator in the defects/KLOC ratio.
But then that calls the whole defects/KLOC metric into question.
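To put made-up numbers on that: 1,000 defects in 1 million hand-written lines is 1.0 defects/KLOC; add another million lines of nearly defect-free generated boilerplate and the same 1,000 defects now read as roughly 0.5 defects/KLOC, even though nothing about the hand-written code improved.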
Re: (Score:2)
Decreasing defect density with size is harder to understand: why should the quality fairy only visit specially big proprietary projects?
That might have something to do with market penetration and resources dedicated to maintenance...? Those huge proprietary projects probably happen to be the stuff that almost everyone gets to use.
Colours in graphs (Score:2)
Why on earth did they choose 2 colours that are hard to tell apart in that graph? They were black & dark blue. It took me several seconds to work out which was which. Many other reports/... seem to do similar.
Re: (Score:2)
Why on earth did they choose 2 colours that are hard to tell apart in that graph?
here you go [ebay.com]
Code quality (Score:3)
In open source, a defect gets fixed when someone feels the urge to fix it. Most of the time it is because it is their own dog food. Many open source projects are actually used by their own developers, and they fix the issues that irritate them most. The rest of the bugs get fixed based on their impact on other users and passion about the software project.
In a closed source project, it is often the bugs that affect the loudest paying customer that get fixed. If it is not going to advance sales, it won't get fixed.
Given this dynamic it is not at all surprising both methods have similar levels of that elusive "quality". I think software development should eventually follow the model of academic research. There is scientific research done by the universities that have no immediate application or exploitation potentials. The tenured academic professors teach courses and do research on such topics. Then as the commercialization potential gets understood, it starts going towards sponsored projects and eventually it goes into commercial R&D and product development.
Similarly, we could envision people who teach programming languages in college maintaining open source projects. The students develop features and fix bugs for course credit. As the project matures, it might go commercial or might stay open source or it could become a fork. The professors who maintain such OSS projects should get bragging rights and prestige similar to professors who publish academic research on language families or bird migration or the nature of urban planning in ancient Rome.
Re: (Score:2)
Given the massive bias the US government has towards expensive private software contractors, I am surprised the results were so close.
MBAs, Politicians and incompetent journalists LOVE poor metrics. Americans love simplistic binary metrics (sorry no citation just experience, it's the culture.)
Remember klocs? That went on a while. Sounds like this metric dates back to those days-- they don't measure programmers by 1000s of lines coded anymore but they didn't learn their lesson and kept the defect rate measu
Re: (Score:2)
Given the massive bias the US government has towards expensive private software contractors, I am surprised the results were so close.
Well, it could be that there really isn't a correlation between quality and what you pay for programming, at least beyond some point, so a good, but lower paid, open source programmer writes just as good code as a good, but higher paid proprietary programmer.
Or, it could be the higher paid programmers really do turn out better code, but the nature of open source, with multiple people reviewing it mitigates the difference. I hate to use a sports analogy, but I will anyway. I am a lousy golfer, but I can pu
Re: (Score:2)
I meant to say the US Gov likes to support its highly paid contractors who in turn "contribute" to its politicians.
As far as software quality. It is largely a numbers game. The more eyeballs the better. Sure some people are better than others but overall the majority are of average skill and if you just throw a ton of them at it you'll more than make up for the smart ones. Open or not it comes down to the human power put into it. That being said, project leaders probably have more to do with success/f
Re: (Score:2)
number of (known) defects per 1000 lines of code is a very poor metric
It's not a poor metric, but it is a metric of something which isn't very useful. If I already knew the unfixed defects in the product, I'd just fix them.
More useful metrics relate to simplicity and testability. Is every module understandable on its own by a cleanroom reviewer who first saw it ten minutes ago? How free is the code from hand-tuning? How few parameters are passed? Are there state variables that go uninitialized? How small are the largest individual modules? How completely does the test co
Re: (Score:2)
First of all code quality is difficult to measure, and the number of (known) defects per 1000 lines of code is a very poor metric.
That word "known" is a BIG one. It is critical to the metric, and I'd strongly question whether the known vs actual ratio is the same in proprietary and open-source software. The latter usually makes it MUCH easier to report problems, but on the other hand usually involves less structured or regression testing.
If I'm using Openoffice and it doesn't paginate a document correctly I just log an entry on their bugzilla (or whatever they use). If MS Word does the same thing I hit print preview a few times and
Question? (Score:2)
Code quality for open source software continues to mirror that of proprietary software — and both continue to surpass the industry standard for software quality. Defect density (defects per 1,000 lines of software code) is a commonly used measurement for software quality.
Since there are two types of software open source and proprietary and both of them surpass the industry standard for software quality, what exactly is the industry standard based on?
The article states that the industry standard is 1 defect per 1,000 lines of code. But at the rates given, open source is 1 defect in 1,449 lines of code and proprietary software is 1 defect in 1,470 lines of code. Maybe it's time to change the industry standard?
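(The arithmetic behind those figures is just the reciprocal of the density: 1,000 / 0.69 is about 1,449 lines per defect and 1,000 / 0.68 is about 1,470.)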
Higher defect density indicates BETTER code (Score:2)
The reason I say that is because better code has fewer lines per problem. Consider strcpy(), a function to copy an array of characters (a C string). You can't use strcpy() in your code - you're supposed to create strcpy(), copying each element of the array.
Take a moment to consider how you'd write that before looking below.
Roughly how many lines of code did you use to copy an arr
Re: (Score:2)
Here's all the code you need, what a better programmer would write:
while (*dest++ = *src++);
The problem with that is that when the next non-expert developer comes along they won't grok the code and might break something when things change. Suppose the string delimiter changes for some reason - would a non-expert even appreciate that you're checking for the delimiter there?
Compact code is not necessarily better, unless you accompany it with a comment or something. You also omitted the extra code to set your pointers to the start of the string (though you also omitted initializing your loop counte
Re: (Score:2)
while (source[i] != '\0')
{
dest[i] = source[i];
i++;
}
So one error in that code would be 1 defect per five lines or so.
Here's all the code you need, what a better programmer would write:
while (*dest++ = *src++);
Your "better code" is actually not equivalent (the first loop doesn't copy the nul terminator). Even if it was equivalent, I don't think I would necessarily call it "better". This particular piece happens to be fairly idiomatic and
Define "defect" (Score:2)
What are they counting as a "defect"?
Their FAQ [coverity.com] lists examples, but ends with "and many more".
Which leads us to the question of who set the "industry standard" at 1.0, and what did THEY define "defect" to mean? If it is a standard there should be a standard list of defect types.
WTF does this have to do with "Homeland Security"? (Score:2)
"Coverity Scan service ... was initiated between Coverity and the U.S. Department of Homeland Security"
If software is a "Homeland Security" issue, shouldn't they be focusing on the proprietary software that most consumers, businesses and government agencies are using?
Re: (Score:2)
Homeland security needs to protect infrastructure and other interests that can impact the state of the nation. Something as benign as somebody hacking the AP twitter feed and posting that a bomb injured the president cost the market over $100B. A series of hacking attacks can result in economic or social destabilization.
Software is also built in layers, so some parts are proprietary, others are open, but
Defect density by itself is a poor metric. (Score:2)
You can have poorly written code, but a good program.
You can have perfectly written code, but a shoddy bit of software.
Take for example an OS that hangs because the network layer is pegging the CPU somewhere, somehow,
vs an OS that continues to be responsive even if the network layer is overloaded.
So if they are looking at .69 defect density vs .68 defect density, the community driven software, which is designed for an end user rather than for marketing staff to force upon an end user, is going to be close to 100% bett
Yuh Huh (Score:2)
I've seen the guts of a fair bit of commercial code, and it's usually not that great. Couple of stories; back in the OS/2 days I had a customer complain that the OS/2 time API specified you could set milliseconds, but this didn't appear to be the case. Well I just so happened to have access to the assembly language function in OS/2 that did that (IIRC it was shipped on one of their dev CDs) and upon examination it appeared tha
What does it mean to be a professional programmer? (Score:2)
There is a big difference between getting paid to show up to work, and getting paid to write solid robust software.
Bad business processes (Score:2)
What this tells me is that current business practices are flawed. There are commercial software companies that are able to produce quality code that exceeds that of most, if not all, open source projects. But such companies are not the norm.
Here are some questions we should ask:
1. Does commercial software have a realistic incentive to reach for excellence in coding? Maybe..
2. Does commercial software have enough resources to produce excellent software? Demonstrably so, but..
3. Does commercial software use their re
Re: (Score:2)
The 1970s called. They want their joke back.
Re: (Score:3)
Sure makes the emacs vs vi wars look petty. This is a religious dispute between the believers in greedy capitalism, who think such forces lead to the best balance of highest possible quality at reasonable expense, in all endeavors, and everyone else.
The greedy capitalists think that if you aren't sweating and stressing over your job and the money it provides to feed your hungry children, not to mention your house and car payments, fearing that the loss of your job will ruin your career so that you will n
Re: (Score:2)
did they select 300 open source projects that weren't abandoned because of excessive bugs?
Although I do find Banshee, Miro, Amarok, Rhythmbox, and xnoise (all music player apps) to be very buggy for open source software. I'm not sure if those were included in this study, but I suspect most of them were.