US DHS Testing FOSS Security
Stony Stevenson alerts us to a US Department of Homeland Security program in which subcontractors have been examining FOSS source code for security vulnerabilities. InformationWeek.com takes a glass-half-empty approach to reporting the story, saying that for FOSS code on average 1 line in 1000 contains a security bug. From the article: 'A total of 7,826 open source project defects have been fixed through the Homeland Security review, or one every two hours since it was launched in 2006 ...' ZDNet Australia prefers to emphasize those FOSS projects that fixed every reported bug, thus achieving a clean bill of health according to DHS. These include PHP, Perl, Python, Postfix, and Samba.
Looking good, too bad the press didn't understand (Score:5, Insightful)
You can't ever say that proprietary software is secure, because there's no way to prove it. With Open Source, you can come a lot closer to proving that it is secure, because you can employ every security test that exists.
The fact that a Coverity scanner defect is reported doesn't mean it's an exploitable security flaw.
Bruce
Re:government helping FOSS (Score:3, Insightful)
Re:Looking good, too bad the press didn't understand (Score:5, Insightful)
I'm a software security engineer. I can look at source code and tell you if it has some bugs in it that I would consider relevant to security. If I can't find any, I might tell you that it is more secure than if I could... but that doesn't mean it is secure. I'll never tell you it is secure, because testing simply can't give you that. I can do this on proprietary software or I can do this on Open Source software... the only difference is that, with the Open Source software, I don't need permission from someone to do the testing, and other people don't need permission to check my work.
Does this mean that more people will check the Open Source software for security flaws? Not necessarily. It completely depends on whether or not someone has an interest in the security of that particular bit of software. Even assuming a similar level of interest in the security of comparable proprietary and Open Source software, there's no guarantee that those who have an interest in testing the Open Source software for security flaws will report back the findings. They may simply decide that the Open Source software is too insecure for their use and go with the proprietary solution - assuming they can have it similarly tested by a trusted third party.
All in all, the assumption that Open Source software is more secure than proprietary software is most likely true, but there's no hard data, because the stats on the insecurity of proprietary software are guarded secrets - and that's probably the best reason to assume that proprietary software is less secure.
Re:AN opportunity to modify the GPL.. (Score:1, Insightful)
Well... (Score:5, Insightful)
Re:"The" PHP? (Score:1, Insightful)
"The Hypertext Preprocessor" sounds as reasonable to me as "The Windows Operating System".
Re:AN opportunity to modify the GPL.. (Score:3, Insightful)
a) it is too restrictive, and would disqualify the GPL as free software. Remember that the GPL is a distribution license, not a list of restrictions. You should be able to talk to other people (even publicly) about software without contacting the maintainer first. The behavior you describe is responsible, and generally recommended, but should not be forced.
b) as you have it worded, if the restrictions were followed, it would enable a maintainer to prevent anyone from disclosing any security bugs. You say that reporters have to wait for an acknowledgment. What if one is never received? What if there is no maintainer? The solution for this problem is obvious (don't require an acknowledgment), but I should point it out, nonetheless.
c) It is not enforceable in most jurisdictions. In the US, and I assume most of the "free world", you can't prevent someone from talking about your products publicly. You can have them sign an NDA, but that doesn't work for publicly available software. McAfee tried something like this some time ago, stipulating in the EULA that you can't benchmark their software. It got shot down in court.
Re:AN opportunity to modify the GPL.. (Score:3, Insightful)
Not all bugs are easily reproducible - and not all bugs are found by tripping over them. Consider, for example, bugs found by the various warnings enabled by GCC's -W options. I.e., you get reports saying "this code path has these problems", not reports saying "this code path blew up when I did XXX".
I just looked at an old report from Coverity on one of the free-software projects with which I'm involved. One of the problems it found was in a chunk of code where it quite appropriately pointed out that we were checking whether cfg was null after dereferencing it rather than before. We subsequently fixed that problem.
It might be possible to construct a scenario where the application would crash due to that bug - or it might not. That bug is in "framework" code, and if the code using that framework doesn't happen to pass an argument that would cause cfg to be null, there won't be a crash. But some code in the future might pass such an argument - possibly one that comes from user input, so passing it wouldn't itself be a bug; perhaps the caller is expecting the framework code to tell the user of the error.
Even if it's possible to construct such a scenario, the software that found the problem doesn't have a deep enough understanding of the code to say "hey, if you open up the app on a file with this in it, select this menu item, type this into the dialog box that pops up, and click 'OK', it'll crash". So it's not as if the software reporting this problem (non-publicly - to see the reports on an app, you have to be a "member" of the project whose code is being scanned, and sign up for an account [coverity.com]) can give "a detailed account of how to reproduce the bug".
Re:Looking good, too bad the press didn't understand (Score:5, Insightful)
I submit that people who are only looking for security flaws don't have a motivation to develop a deep understanding of the software. People who are out to modify the software do. And thus there are not just more eyes, but better eyes with Free Software.
There is a class of mathematically provable software languages, and you might be able to say with surety that programs in them are secure. For the languages we usually use, you can only say that you have tested them in the ways you know of. And only a person with access to the source can say that. If you want an independent assessment, Open Source software won't stop one from happening, and won't hinder what can be said the way NDAs do. That's why I think it's more secure.
Bruce
Re:Looking good, too bad the press didn't understand (Score:3, Insightful)
Even those who historically have criticized "security through obscurity" never suggested that publishing their design or secrets would lead to better security, but rather that you can't assume that your design can't be cracked.
Of course, the preferred approach is "security through design", which has nothing to do with correcting bugs. The latter could be called "security through maintenance". Thus, while we might argue about whether closed or open source produces better design, examining source code for bugs can't compensate for a design that is insecure.
Re:What about MS? (Score:2, Insightful)
It also shows that open source has failed to use a common tool to self audit - it took a third party to do so.
Re:Looking good, too bad the press didn't understand (Score:3, Insightful)
But hey, don't take my word for it.. go have a chat with your friend Theo de Raadt.. he'll give you the skinny on how terrible the majority of C programmers are when it comes to security issues. And don't get him started on the so called "safe" languages.
Re:What about MS? (Score:5, Insightful)
It also shows that open source has failed to use a common tool to self audit - it took a third party to do so.
Since an audit is usually an independent review, I see it as only logical for it to have been done by a third party.
The point is, it is open. Anyone may perform an audit at any time they wish to do so.
And everyone apart from the developers themselves and the users of the software is a third party, by definition.
Re:What about MS? (Score:3, Insightful)
Re:What about MS? (Score:4, Insightful)
That is what open source is all about: anybody can contribute their worthwhile efforts to it. Contribution to open source not only includes code, it also includes auditing, as well as actual innovation, and even other activities like distribution, documentation, promotion and support.
So your illogical claim of failure is in reality open source success. I will never understand why closed source proprietary zealots just don't get it; I suppose it just goes to prove that greed and stupidity really do go hand in hand ;).
Re:What about MS? (Score:2, Insightful)
I beg to differ. In the never-never land of theory you may be right. However, there is a huge practical consideration. Projects, especially the ones mentioned in the article, all have a core development group that determines the project's direction. Others may be free to offer changes, but if the core group or guiding individual doesn't want to use them, the project doesn't incorporate them. Then you have the choice of simply maintaining your own modifications that don't get generally accepted, or creating a fork.
Evaluating the security of a complex open source project requires an independent group with the resources to perform the audit, one that is influential enough not to be ignored, and one that is not part of the group culture of the project. For all practical purposes, that is a third party.
Re:Looking good, too bad the press didn't understand (Score:3, Insightful)
It's possible to prove almost anything about programs and operating systems, from type safety and runtime guarantees to any arbitrary set of predicates you want the system to satisfy. That assumes perfect hardware, so at best a security guarantee must be probabilistic - but it can be made arbitrarily close to 100% secure by using redundancy, and potentially cryptography in hardware, to ensure either correct results or the triggering of an error condition.
For some current examples of work in this area, you might look at George Necula's work on proof-carrying code, which looks pretty interesting. They also have a Java compiler, and I think even a subset-of-C compiler, that can output proofs of type safety. I haven't tried them, though.
Another big challenge is to figure out how much the security model should cover. Type safety, privilege separation and file permissions are pretty obvious things to include, but what about network security, probabilistic assumptions about cryptographic security, or information control policies like the original Common Criteria required? It would be useful for normal users to have some sort of classification they could assign to their personal information that the OS could use to keep most processes away from it. I've been interested in capability systems for quite a while too, since they often match a provable security model and programming language a little better than the typical ACL and privilege approaches.