Security Software

US DHS Testing FOSS Security

Stony Stevenson alerts us to a US Department of Homeland Security program in which subcontractors have been examining FOSS source code for security vulnerabilities. InformationWeek.com takes a glass-half-empty approach to reporting the story, saying that for FOSS code on average 1 line in 1000 contains a security bug. From the article: 'A total of 7,826 open source project defects have been fixed through the Homeland Security review, or one every two hours since it was launched in 2006 ...' ZDNet Australia prefers to emphasize those FOSS projects that fixed every reported bug, thus achieving a clean bill of health according to DHS. These include PHP, Perl, Python, Postfix, and Samba.
This discussion has been archived. No new comments can be posted.

  • The important point here is that proprietary software manufacturers aren't telling you how many security flaws they had. I bet it's more than 1 per 1000 lines; that is an exceptionally good figure for the first time a scanner like Coverity is run. I doubt proprietary work comes close.

    You can't ever say that proprietary software is secure, because there's no way to prove it. With Open Source, you can come a lot closer to proving that it is secure, because you can employ every security test that exists.

    The fact that a Coverity scanner bug is reported doesn't mean it's an exploitable security flaw.

    Bruce

  • by KillerCow ( 213458 ) on Tuesday January 08, 2008 @09:38PM (#21963768)
    Computer terrorism. They don't want a sendmail bug to provide a beachhead for compromising more sensitive systems.
  • by QuantumG ( 50515 ) <qg@biodome.org> on Tuesday January 08, 2008 @09:42PM (#21963814) Homepage Journal
    Although I understand what you're trying to say, it does seem a little irrelevant.

    I'm a software security engineer. I can look at source code and tell you if it has some bugs in it that I would consider relevant to security. If I can't find any, I might tell you that it is more secure than if I could... but that doesn't mean it is secure. I'll never tell you it is secure, because testing simply can't give you that. I can do this on proprietary software or I can do this on Open Source software... the only difference is that, with the Open Source software, I don't need permission from someone to do the testing and other people don't need permission to check my work.

    Does this mean that more people will check the Open Source software for security flaws? Not necessarily. It completely depends on whether or not someone has an interest in the security of that particular bit of software. Even assuming a similar level of interest in the security of comparable proprietary and Open Source software, there's no guarantee that those who have an interest in testing the Open Source software for security flaws will report back the findings. They may simply decide that the Open Source software is too insecure for their use and go with the proprietary solution - assuming they can have it similarly tested by a trusted third party.

    All in all, the assumption that Open Source software is more secure than proprietary software is most likely true, but there's no hard data... because the stats on the insecurity of proprietary software are guarded secrets - and that's probably the best reason to assume that proprietary software is less secure.

  • by Anonymous Coward on Tuesday January 08, 2008 @09:45PM (#21963848)
    We really need a +1 Stupid option
  • Well... (Score:5, Insightful)

    by Otter ( 3800 ) on Tuesday January 08, 2008 @10:02PM (#21964008) Journal
    This seems like a genuinely useful activity for DHS, certainly more valuable than x-raying my shoes and confiscating my saline solution.
  • Re:"The" PHP? (Score:1, Insightful)

    by Anonymous Coward on Tuesday January 08, 2008 @10:13PM (#21964078)
    PHP essentially stands for "Hypertext Preprocessor", if you ignore the "recursive initialism".

    "The Hypertext Preprocessor" sounds as reasonable to me as "The Windows Operating System".
  • by WK2 ( 1072560 ) on Tuesday January 08, 2008 @10:17PM (#21964112) Homepage
    There are three problems with your suggestion:

    a) it is too restrictive, and would disqualify the GPL as free software. Remember that the GPL is a distribution license, not a list of restrictions. You should be able to talk to other people (even publicly) about software without contacting the maintainer first. The behavior you describe is responsible, and generally recommended, but should not be forced.

    b) as you have it worded, if the restrictions were followed, it would enable a maintainer to prevent anyone from disclosing any security bugs. You say that reporters have to wait for an acknowledgment. What if one is never received? What if there is no maintainer? The solution to this problem is obvious (don't require an acknowledgment), but I should point it out nonetheless.

    c) it is not enforceable in most jurisdictions. In the US, and I assume most of the "free world", you can't prevent someone from talking about your products publicly. You can have them sign an NDA, but that doesn't work for publicly available software. McAfee tried something like this some time ago, stipulating in its EULA that you couldn't benchmark its software. It got shot down in court.
  • by Guy Harris ( 3803 ) <guy@alum.mit.edu> on Tuesday January 08, 2008 @10:45PM (#21964302)

    Just thought of this: Make it a stipulation of the GPL that if you publicly report bugs or bug counts in GPL software, you must also produce a detailed account of how to reproduce the bug, and you must provide that report to the maintainer of the current source (whoever you got it from, or the root source as listed in the code). Possibly with a two-week window between notification (and acknowledgement) and publication.

    Not all bugs are easily reproducible - and not all bugs are found by tripping over them. Consider, for example, bugs found by the various warnings enabled by GCC's -W options. I.e., you get reports saying "this code path has these problems", not reports saying "this code path blew up when I did XXX".

    I just looked at an old report from Coverity on one of the free-software projects with which I'm involved - one of the problems it found was in a chunk of code

    if (cfg->in_use) {
        report an error;
        return;
    }
    if (cfg != NULL) {
        process what it points to;
    } else {
        report an error and clean up;
        return;
    }

    where it quite appropriately pointed out that we were checking whether cfg was null after dereferencing it rather than before dereferencing it. We subsequently fixed that problem.

    It might be possible to construct a scenario where the application would crash due to that bug - or it might not. The bug is in "framework" code, and if the code using that framework doesn't happen to pass an argument that would cause cfg to be null, there won't be a crash. But some code in the future might pass such an argument - possibly one that comes from user input, so passing it wouldn't itself be a bug; the code using the framework may well be expecting the framework to tell the user about the error.

    Even if it's possible to construct such a scenario, the software that found the problem doesn't have a deep enough understanding of the code to say "hey, if you open up the app on a file with this in it, select this menu item, type this into the dialog box that pops up, and then click 'OK', it'll crash". So it's not as if the software that's reporting this problem (non-publicly - to see the reports on an app, you have to be a "member" of the project whose code is being scanned, and sign up for an account [coverity.com]) can give "a detailed account of how to reproduce the bug".

  • Does this mean that more people will check the Open Source software for security flaws? Not necessarily. It completely depends on whether or not someone has an interest in the security of that particular bit of software.

    I submit that people who are only looking for security flaws don't have a motivation to develop a deep understanding of the software. People who are out to modify the software do. And thus there are not just more eyes, but better eyes with Free Software.

    There is a class of mathematically provable software languages, and you might be able to say with surety that programs in them are secure. For the languages we usually use, you can only say that you have tested them in the ways you know of. And only a person with access to the source can say that. If you want an independent assessment, Open Source software won't stop one from happening, and won't limit what can be said about it with NDAs. That's why I think it's more secure.

    Bruce

  • by ClosedSource ( 238333 ) on Wednesday January 09, 2008 @12:51AM (#21965142)
    Analogies have their limits, so we shouldn't try to take them too far.

    Even those who historically have criticized "security through obscurity" never suggested that publishing their design or secrets would lead to better security, but rather that you can't assume that your design can't be cracked.

    Of course, the preferred approach is "security through design", which has nothing to do with correcting bugs. The latter could be called "security through maintenance". Thus while we might argue about whether closed or open source produces better design, examining source code for bugs can't compensate for a design that is insecure.
  • Re:What about MS? (Score:2, Insightful)

    by DerekLyons ( 302214 ) <fairwater@gmaLISPil.com minus language> on Wednesday January 09, 2008 @01:55AM (#21965468) Homepage

    This shows that open source is here to stay, is going mainstream, and will not be stopped by any company's interests.

    It also shows that open source has failed to use a common tool to self-audit - it took a third party to do so.
  • by QuantumG ( 50515 ) <qg@biodome.org> on Wednesday January 09, 2008 @05:37AM (#21966362) Homepage Journal

    I submit that people who are only looking for security flaws don't have a motivation to develop a deep understanding of the software. People who are out to modify the software do. And thus there are not just more eyes, but better eyes with Free Software.
    No offense, but that's completely the opposite of the facts. The vast majority of software engineers have no idea what they're doing when it comes to detecting, fixing and avoiding security issues. That's why tools like Coverity exist - and most of the time the programmers can't even use them correctly. There are "security consultants" you can hire who basically just explain the results from Coverity, and they're not short on work.

    But hey, don't take my word for it... go have a chat with your friend Theo de Raadt... he'll give you the skinny on how terrible the majority of C programmers are when it comes to security issues. And don't get him started on the so-called "safe" languages.

  • Re:What about MS? (Score:5, Insightful)

    by splutty ( 43475 ) on Wednesday January 09, 2008 @05:46AM (#21966392)
    I see nothing wrong with a third party specialized in this sort of auditing actually doing it, instead of a whole bunch of programmers and others who probably don't have that specialization, are most often busy with actually being 'productive', and thus have no time to audit themselves (impossible) or others (not always efficient).
  • Re:What about MS? (Score:5, Insightful)

    by cp.tar ( 871488 ) <cp.tar.bz2@gmail.com> on Wednesday January 09, 2008 @05:48AM (#21966398) Journal

    This shows that open source is here to stay, is going mainstream, and will not be stopped by any company's interests.

    It also shows that open source has failed to use a common tool to self-audit - it took a third party to do so.

    Since an audit is usually an independent review, I see it as only logical for it to have been done by a third party.

    The point is, it is open. Anyone may perform an audit at any time they wish to do so.
    And everyone apart from the developers themselves and the users of the software is a third party, by definition.

  • Re:What about MS? (Score:3, Insightful)

    by budgenator ( 254554 ) on Wednesday January 09, 2008 @07:52AM (#21966802) Journal
    Feel free to pay for an audit of any OSS project you would like to see made better.
  • Re:What about MS? (Score:4, Insightful)

    by rtb61 ( 674572 ) on Wednesday January 09, 2008 @08:05AM (#21966852) Homepage
    WTF? Your statement makes abso-fucking-lutely no sense at all. In open source there is no such thing as a third party or second party: anyone, and I mean absolutely anyone - be they part of the government, employed by a corporation, or a private individual - who contributes to open source software is a first party.

    That is what open source is all about: anybody can contribute their worthwhile efforts to it. Contribution to open source includes not only code but also auditing, as well as actual innovation and even other activities like distribution, documentation, promotion and support.

    So your illogical claim of failure is in reality open source success. I will never understand why closed-source proprietary zealots just don't get it; I suppose it just goes to prove that greed and stupidity really do go hand in hand ;).

  • Re:What about MS? (Score:2, Insightful)

    by macurmudgeon ( 900466 ) on Wednesday January 09, 2008 @12:28PM (#21969766) Homepage

    WTF? Your statement makes abso-fucking-lutely no sense at all. In open source there is no such thing as a third party or second party: anyone, and I mean absolutely anyone - be they part of the government, employed by a corporation, or a private individual - who contributes to open source software is a first party.

    I beg to differ. In the never-never land of theory you may be right. However, there is a huge practical consideration. Projects, especially the ones mentioned in the article, all have a core development group that determines the project's direction. Others may be free to offer changes, but if the core group or guiding individual doesn't want to use the changes, the project doesn't incorporate them. Then you have the choice of simply making your own modifications that don't get generally accepted, or creating a fork.

    Evaluating the security of a complex open source project requires an independent group with the resources to perform the audit, one that is influential enough not to be ignored and one that is not part of the group culture of the project. For all practical purposes, that is a third party.

  • by DamnStupidElf ( 649844 ) <Fingolfin@linuxmail.org> on Wednesday January 09, 2008 @03:39PM (#21972810)
    I haven't tried this, and indeed there isn't much real work going on in provable software languages these days. But I think that it would be possible to set theoretical constraints on a program such that it serves data and does not allow it to be modified. There might be a good Ph.D. paper in it for someone.

    It's possible to prove almost anything about programs and operating systems, from type safety and runtime guarantees to any arbitrary set of predicates you want the system to satisfy. Such proofs assume perfect hardware, so at best a security guarantee is probabilistic, but it can be made arbitrarily close to 100% secure by using redundancy, and potentially cryptography in hardware, to ensure either correct results or the triggering of an error condition.

    For some current examples of work in this area you might look at George Necula's work on proof-carrying code, which looks pretty interesting. They also have a Java compiler, and I think even a compiler for a subset of C, that can output proofs of type safety. I haven't tried them, though.

    Another big challenge is to figure out how much the security model should cover. Type safety, privilege separation and file permissions are pretty obvious things to include, but what about network security, probabilistic assumptions about cryptographic security, or information control policies like the original Common Criteria required? It would be useful for normal users to have some sort of classification they could assign to their personal information that the OS could use to keep most processes away from it. I've been interested in capability systems for quite a while too, since they often match a provable security model and programming language a little better than the typical ACL and privilege approaches.
