Security

Security of Open vs. Closed Source Software 366

morhoj writes "Cambridge University researcher Ross Anderson just released a paper concluding that open source and closed source software are equally secure. Can't find a copy of the paper online yet, but I thought this would make for an interesting morning conversation. You may not agree with him, but anyone who's on the BugTraq List can tell you that open source software isn't as bug free as we would all like to think." I found Anderson's paper, so read it for yourself. There are some other interesting papers being presented at the conference as well.
This discussion has been archived. No new comments can be posted.
  • MBTF My Ass (Score:2, Interesting)

    by Anonymous Coward
    The Manufacturer's Estimated Time Before Failure is for physical goods - things that naturally wear out. Not software, which is at the very least a loose mathematical description of a repeatable process. Software and security don't "wear out". If they seem to, they were broken in the first place.

    I hope that was just CNet editorialization, and isn't indicative of the rest of this paper.
    • I agree completely.

      A better model would hypothesize that the initial programmer creates bugs at a constant rate. In reliability terms the "hazard rate" would be flat.

      This would say a given piece of code has a finite number of bugs proportional to either the number of lines or to the time spent coding (without debugging). Debugging should be modelled as a separate process that removes bugs at a rate proportional to the bug density and time spent debugging.

      Under this kind of model, the critical number is the mean time to zero bugs, which depends on the ratio of debuggers to coders. Open source lets that ratio soar, while in proprietary systems it's something around 1, maybe 2 if you do code reviews.
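      A toy numerical sketch of that model (all parameters invented purely for illustration): bugs accumulate at a constant rate while coding and are removed in proportion to how many remain and how many debuggers are looking, so the time to (near) zero bugs falls as the debugger-to-coder ratio rises.

      #include <stdio.h>

      /* Hypothetical constants: bugs introduced per coder per day, and the
       * fraction of remaining bugs each debugger removes per day. */
      #define BUGS_PER_CODER_PER_DAY    0.5
      #define FIX_FRACTION_PER_DEBUGGER 0.02

      int main(void)
      {
          int coders = 5, debuggers = 10;     /* try debuggers = 2 for a "closed shop" */
          int coding_days = 100, total_days = 400;
          double bugs = 0.0;

          for (int day = 0; day < total_days; day++) {
              if (day < coding_days)
                  bugs += coders * BUGS_PER_CODER_PER_DAY;           /* bugs created */
              bugs -= bugs * debuggers * FIX_FRACTION_PER_DEBUGGER;  /* bugs removed */
          }
          printf("expected bugs left after %d days: %.2f\n", total_days, bugs);
          return 0;
      }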
    • Re:MBTF My Ass (Score:3, Insightful)

      by swillden ( 191260 )

      The Manufacturer's Estimated Time Before Failure is for physical goods - things that naturally wear out. Not software, which is at the very least a loose mathematical description of a repeatable process.

      Read the paper, there's a link to a PDF in the story.

      The paper does indeed use an MTBF-type model to analyze bugs, and there is a significant body of research which supports this approach. As the author says:

      Safety-critical software engineers have known for years that for a large, complex system to have a mean time before failure (MTBF) of (say) 100,000 hours, it must be subject to at least that many hours of testing [4]. This was first observed by Adams in 1985 in a study of the bug history of IBM mainframe operating systems [5], and has been confirmed by extensive empirical investigations since. The first theoretical model explaining it was published in 1996 by Bishop and Bloomfield [6].

      It's certainly not obvious that this model invented for physical goods applies to software, but there is substantial research to show that it does. If you can really demonstrate otherwise, I highly recommend that you get familiar with the literature and then publish your own research paper that explains why it is not an appropriate model. If you can propose a significantly better one, you'll have advanced the state of software engineering and you'll probably be well on your way to a cushy professorship somewhere.
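      One way to see the intuition behind the testing-hours relationship in that quote (my paraphrase of the standard reliability-growth argument, not a quotation from the paper): if each failure found during testing is fixed, the residual failure rate after t hours of testing decays roughly as

          \lambda(t) \approx K/t, \qquad \text{so} \qquad \mathrm{MTBF}(t) \approx t/K,

      and with K on the order of 1, reaching an MTBF of 100,000 hours takes on the order of 100,000 hours of testing, which is the figure cited above.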

  • by q-soe ( 466472 ) on Friday June 21, 2002 @08:57AM (#3743200) Homepage
    But I think security of software is often down to the admin... I mean, you can secure any operating system if you know what you are doing, and it's easy to build an insecure box - Linux or Windows.

    How secure is an out-of-the-box Mandrake install? Or a Windows 2000?

    A good admin who is a pro will work hard to secure his servers and patch and look after them - a bad admin is a bad admin regardless of the OS

    • by demaria ( 122790 ) on Friday June 21, 2002 @09:13AM (#3743317) Homepage
      Patches are a big deal, especially in production environments. You can't just willy-nilly upgrade the kernel on a high-load and important server. Bigger departments/companies have a change management system in place so that everyone knows when any piece of software is upgraded, when it will happen, who is to blame, and why it occurred. Patches can cause unexpected problems (like that Linux one that corrupted the file system a few months back). This process may take days or weeks to complete.
    • "A good admin who is a pro will work hard to secure his servers and patch and look after them - a bad admin is a bad admin regardless of the OS"

      I don't agree with that. If the underlying OS is a secure, good OS, then your assertion holds. However, if you're using an insecure OS, say WinNT 4 SP6, then it doesn't matter how competent the admin is. He's limited by the OS itself.

      Superb admin + superb OS = Superb integration/setup
      ok admin + superb OS = ok integration/setup
      bad admin + superb OS = Honeypot without "the stuff that catches you" (bad)

      Superb admin + craptacular OS = Mediocre integration/setup
      .....
    • pure fantasy

      Try the latest Apache chunked-encoding bug, with a released exploit for OpenBSD.

      How can some mad admin skillz sort that one out save switching off the box?
    • by pubjames ( 468013 ) on Friday June 21, 2002 @09:53AM (#3743616)
      A good admin who is a pro will work hard to secure his servers and patch and look after them - a bad admin is a bad admin regardless of the OS

      Many years ago, anyone who wanted to drive a car also had to be a mechanic. Things needed constant tweaking; they would break down often and were difficult to start and keep running. These days, if someone had a car that kept breaking down, you wouldn't say to them "well, that's your fault. You're obviously not a good mechanic", you'd say "go out and buy yourself a better car, mate".

      Don't blame the administrators for the primitive state of current computer technology.
      • There's a difference. You're comparing a simple action -- driving a car, to one that is not simple by any means -- administering a network. It's like saying that because everyone knows how to operate a television, they should be able to know how to operate television broadcast equipment. Most people these days can operate a computer, does that mean they'll ever be qualified to manage a network of computers with interdependent services? Probably not.
        • There's a difference. You're comparing a simple action -- driving a car, to one that is not simple by any means -- administering a network. It's like saying that because everyone knows how to operate a television, they should be able to know how to operate television broadcast equipment. Most people these days can operate a computer, does that mean they'll ever be qualified to manage a network of computers with interdependent services? Probably not.

          I think you are confusing things as well. Nobody ever mentioned administering a network of computers with interdependent services. The discussion was about a server and its OS.

          A telephone answering machine is a type of server. I can plug it into a socket and people can call it up and listen to it, even interact with it by pressing keys on their telephone. I don't need to be a telephone engineer to plug one of those in and get it working.

          Basic computer servers should be the same - I need an email server, I plug it into an available network socket, it sends and receives email. Ditto with a web server. It should be secure out of the box, and if it needs patching, it should patch itself. It will be like this one day, in the meantime we just have to put up with the primitive state of things.
      • This is a horrible analogy, as it trivializes the role that a server plays and treats it as if it were a simple appliance.
    • Admin or not: security can only be measured _now_. Not tomorrow. Not 5 minutes from now. In 3 seconds your box could be compromised from an unseen source.

      That is the only thing admins can do: look after their systems. The most important knowledge an admin has is the knowledge of how to detect a security breach and how to cut the system off from the rest of the world _immediately_. After that he must check the system all over, because any number of things could be different and it should not be thought of as the same system.
    • I would say the onus of security rests on the purchaser of the license, the company or individual, whether it's the sysadmin or the clu^H^H^H people in management who decide. All software is immune from ALL legal ramifications of usability, security, fitness for marketability, blah blah blah (no refunds either), so it comes down to the end users to do their homework, research, lab-test, and maintain a relationship with the owners of the software for all updates, patches, issues and news. That is a cost foisted on the end users, so they might as well face it up front. It's just that *I* trust OSS more than those tricky, lying-to-close-a-sale marketing types. Mgmt may feel more comfortable with the sales flacks with their *.ppt slides for making the choice, although that starts a bad relationship with the sysadmins who always bear the brunt of Mgmt's bad purchasing decisions.
    • If you're asking about an out of the box installation, then Mandrake is more secure than Windows. Mandrake's installer runs a few scripts to stop unnecessary services, close ports, build an appropriate firewall, and otherwise lock things down. The user is prompted for a low, medium, or high level of security. A brief description of each level is offered, so even a clueless user can make the right choice. The user hits a button, and the system does the rest. It's point-and-drool simple, and it works.

      Windows does none of this. Everything but the kitchen sink is installed and running. It's hard to tell what's running, especially if you're not familiar with Windows' cryptic names for all the services. There are no good explanations of what services really are or what they do. Everything is buried 3-4 levels deep, and poorly organized. Unless you're already familiar with it, it's much harder to figure out than Mandrake.

      So yes, Mandrake is more secure. Part of this is the installation itself- it goes through the appropriate steps to build in some security. The other part is general usability. Mandrake wins here, too.

      Don't get me wrong, not all Linux distros are as good in this respect. IMO, Redhat is about as bad as Windows, though it's getting better. Others might be a complete disaster.
      • by markmoss ( 301064 ) on Friday June 21, 2002 @01:12PM (#3744840)
        One fundamental difference is that it's perfectly reasonable for a Linux distro to lock down everything by default, so that you've got to make some changes before it's usable for much of anything. (You could play solitaire if the distro included a program...) If you bought Linux, you should know or learn a bit about administering.

        Windows, OTOH, starts with the assumption that a complete idiot will be installing it. If networking is crippled by default, it will probably remain crippled until the user returns the computer because it "won't do X". And it makes the almost-reasonable assumption that with an idiot setting it up and using it, the box won't contain anything worth a good cracker's time. And these assumptions are almost OK; the problem is that (1) when the box is used for something serious, it's hard for even a professional administrator to keep up with all the changes needed to make a system secure, (2) they've made the home system default so wide open that serious crackers can take over hundreds of them at a time and use them in assaults on important targets, and (3) MS is so sloppy that everything is a lot more exposed than they intended...
  • by Nerant ( 71826 ) on Friday June 21, 2002 @08:58AM (#3743203)
    Security bugs in software are inevitable: they are bound to happen, sooner or later. A properly set up system can mitigate some of these problems (i.e. chroot, modified security kernels). My concern is how long and how public security disclosures are, and how long the affected vendor takes to issue a bugfix.

    • by dnoyeb ( 547705 ) on Friday June 21, 2002 @09:17AM (#3743345) Homepage Journal
      I think we should be careful to draw a distinct line between a Security 'flaw' and a 'bug'.

      A flaw is an error in judgement. A bug is an error in coding. The original poster ended his statement that Open Source has lots of bugs. This is unrelated to security unless they are specifically security related bugs.

      In any event, the speed at which you can lock down the Fort HAS to be a consideration.

      I mean, we have planes flying in Iraq and Afghanistan right now. They are being shot at all the time, but they move fast enough to get out of the way. Open source moves faster than closed source, so I can't possibly see how the article writer concluded they were equal.

      Equally buggy, yes. Equally secure, puhleez.

    • Security bugs in software are inevitable: they are bound to happen, sooner or later.

      That attitude is a bit dangerous IMO... That is an excuse for programmers to have bad/lazy coding habits and not program with security in mind..

      Developing good coding habits and learning and using all known techniques for creating secure code is the only good way to minimize security bugs.

      Even in the year 2002, it's still common to find unchecked strcpy's in newly released code (see the sketch at the end of this comment)...

      When you write software you should design it as if it will be run as root on sensitive boxes without a firewall. But then you should run it chrooted as a restricted user with minimal permissions anyway...

      And of course, release security fixes as fast as possible if bugs ARE found...
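      For readers who haven't seen what an "unchecked strcpy" looks like, here is a minimal hypothetical C sketch (not taken from any real project) showing the unsafe pattern next to a bounded alternative:

      #include <stdio.h>
      #include <string.h>

      /* Classic unchecked copy: if name is 64 bytes or longer, this writes
       * past the end of buf and can clobber the stack. */
      void greet_unsafe(const char *name)
      {
          char buf[64];
          strcpy(buf, name);                      /* no length check at all */
          printf("hello %s\n", buf);
      }

      /* Bounded copy: truncates long input instead of overflowing, and the
       * result is always NUL-terminated. */
      void greet_safer(const char *name)
      {
          char buf[64];
          snprintf(buf, sizeof buf, "%s", name);  /* never writes past buf */
          printf("hello %s\n", buf);
      }

      int main(void)
      {
          greet_safer("world");
          greet_unsafe("world");  /* fine for short input; the danger is long, attacker-controlled input */
          return 0;
      }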
      • That is an excuse for programmers to have bad/lazy coding habits and not program with security in mind..
        I disagree entirely - I'm always looking for bugs in my software, because I know that there always will be bugs to find. If you mistakenly believe that perfection can be achieved, you might mistakenly believe that it has been.
  • Of course not... (Score:3, Insightful)

    by Dilbert_ ( 17488 ) on Friday June 21, 2002 @08:59AM (#3743214) Homepage
    Of course there are just as many bugs in open source software as in closed source. Most of it is even written by the same people: what they do at work is closed, what they hack upon during the night is open.
    The main difference lies in the speed and motivation to fix the bugs. Open source bugs can be fixed by anyone, but closed source bugs need to be fixed by vendors who are afraid to even admit they exist, for fear of losing customers.
    • by great throwdini ( 118430 ) on Friday June 21, 2002 @09:22AM (#3743379)

      Open source bugs can be fixed by anyone, but closed source bugs need to be fixed by vendors [...]

      Correction: open source bugs can be fixed by anyone with the requisite knowledge, talent, and time. This would include things such as familiarity with the particular software package, affected platforms, and programming language, plus the energy and ability to ferret out the bug(s) and apply an appropriate fix. Then one has to factor in that package maintainers may or may not readily allow outside submission of fixes (e.g., bigotry, internal/peer review, etc.), which may slow, hamper, or block the transmission of fixes. Add to this issues of trust, where a "fix" is offered by someone who lacks proper credentials (official or "street") to someone who has no clue how to evaluate the original issue or the proposed remedy.

      Granted, given the nature of open source software, the population of people who may repair a bug may be larger than that for closed applications, but that doesn't force into being an army of people with the inclination or skills to do so, or an effective and trustworthy means to distribute said fixes.

      I favor the potential for open source to improve response time to bugs, but I don't think one can claim "anyone" can address issues in an appropriate manner. There's no reason a skillful and organized firm couldn't address security concerns for a closed application it offers with just as much celerity as the maintainers of an open application.

  • Security (Score:3, Insightful)

    by Ashcrow ( 469400 ) on Friday June 21, 2002 @09:02AM (#3743227) Homepage
    There will always be software bugs as long as programmers are not perfect. The huge difference is that in a closed-source environment you'll have to wait for patches from the vendor, or never get them at all. With OSS you can patch it yourself, get unofficial patches for your vendor, get different up-to-date packages, or install the latest version from source.
  • Buglist (Score:2, Insightful)

    OK, Open Source has a lot of bugs, but who lists closed source bugs? I'm sure most of their bugs don't go public, because it is not a good marketing technique... It isn't fair to compare both lists.

    Just my two cents.
  • Duh... (Score:5, Insightful)

    by sootman ( 158191 ) on Friday June 21, 2002 @09:02AM (#3743234) Homepage Journal
    Security != number of bugs. There's 'severity of bugs' and 'speed of fixes', not to mention the OS's and software's design in the first place--think permissions, user space vs. kernel space, etc.
    • Re:Duh... (Score:3, Insightful)

      by Rogerborg ( 306625 )
      • Security != number of bugs.

      Well said. Likewise

      • Time when a white hat hacker reports a security flaw in open source code != time when a black hat hacker notices and exploits a flaw in closed source code
      • Time of public disclosure != either of the above
      • Time when a closed source flaw is reported fixed != time when an open source flaw is demonstrably fixed
      • You're forgetting the one that most people here seem to have forgotten:

        Closed source software ! necessarily = Windows.

        Simple concept, but going by the answers here many people don't quite get it.
  • Another viewpoint (Score:3, Interesting)

    by yogi ( 3827 ) on Friday June 21, 2002 @09:05AM (#3743250) Homepage
    Ross Anderson's argument appears to be based around the trade-off between massive peer review (Good Thing!) and the ease of finding flaws if you have the source code (Not so Good Thing).

    This is certainly true; however, a large amount of security appears to come from the community/vendor around the code too. Yes, I'm generalising, but open source programmers treat security problems as security issues, rather than as a PR problem. Even though the Apache team (rightly, in my opinion) criticized ISS for the manner of their reporting, they did also release a full disclosure advisory and a suitable, working patch within 36 hours of the issue going public.

    I don't see many vendors responding that quickly, although, to be fair, the Apache team did know about the vulnerability already.

    It's all about the "Window of Exposure" really. Go to Bruce Scheiners Cryptogram page [counterpane.com] to see some excellent arguments about peer review, and the whole window of exposure idea.

    • Re:Another viewpoint (Score:2, Interesting)

      by GigsVT ( 208848 )
      ease of finding flaws if you have the source code (Not so Good Thing).

      This is contradictory to the rest of your post. You mention window of exposure. While you might argue the window of exposure starts with public disclosure, it really does not.

      A flaw that is found in a piece of software often was there for years. The window of exposure actually starts when the flaw is introduced, since from that point forward, there is the possibility of a person or group having knowledge of the flaw and not releasing it.

      It's entirely possible that there is a blackhat group or groups, which we will probably never discover, that is harboring hundreds or thousands of unreleased vulnerabilites. Such a group would have immense power, the ability to disrupt the information systems of nearly every company on the planet, on a whim, or when hired to do so.

      Open source, with its ease of finding flaws, reduces this "true window" of exposure.

      It's easy to fall into the trap of believing that all security threats are script kiddies running tools against well known vulnerabilities, since the majority of the attacks reported are of that nature, but that doesn't mean that the threat of a true blackhat group doesn't exist, and couldn't be devastating.

      • Open source, with its ease of finding flaws, reduces this "true window" of exposure.


        No, this is wrong.

        Open Source INCREASES the window of exposure. With open source everybody, the good "examiner/reviewer" and the bad attacker, has the ability to find a flaw by analyzing the source as soon as the source is released.

        With closed source the attacker needs to analyze the assembly code or needs to drive black box attacks from the outside.

        The "window of exposure" is in both caes the same, the flawed system has "a flaw" since it is installed and running somewhere and such it is open for an attack even if no one ever will know how to attack it.

        If YOU like to distinguish between the (hypothetical) window of exposure and the true window of exposure, you have to conclude that the true window of exposure is bigger in OSS.

        angel'o'sphere
  • by Vengie ( 533896 ) on Friday June 21, 2002 @09:05AM (#3743251)
    From the article....
    "Even though open and closed systems are equally secure in an ideal world, the world is not ideal, and is often adversarial," Anderson said.
    To rehash an old example (learned in my OS class): the Multics system had a cruddy password check that interacted poorly with the VM. It compared one character at a time and stopped at the first wrong character. If you set up that character to be on a page boundary, you could tell whether the character was correct by how long it took to check the next character. If you quickly got an error, the character was wrong. Otherwise, you page-faulted, trapped into the OS, and read the next page (and next character) off disk. End result? OSS --> easily cracked passwords is pseudo-valid. Time to patch said bug? 5 minutes. Result: problem solved. Unfortunately, the point NOT highlighted in the article is that with closed source proprietary software, notably Windows, you have far less knowledgeable admins who _don't_ apply necessary patches often. (Vengie's Addition to Godwin's Law: in dealing with Internet security, the probability of a thread discussing Nimda/Code Red turning into blatant MS bashing approaches one as the number of posts increases; let's avoid that here.)
    In the real world, closed source apps DON'T get patched fast and have far more easily recognized buffer over/under run errors. (OSS people are notorious for noting buffer over/underruns in development/testing phases.) Then again, like my OS teacher said...."If you ever want to hack into a system, just find a bug in sendmail." ;)
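    To make the timing-leak idea in the Multics anecdote concrete, here is a minimal hypothetical C sketch (not the actual Multics code; the real leak was observed via page faults rather than a stopwatch) contrasting an early-exit password check with a fixed-time one:

    #include <stddef.h>
    #include <stdio.h>

    /* Early-exit check: stops at the first wrong character, so an attacker
     * who can measure how long the call takes learns how many leading
     * characters of the guess were correct. */
    int check_password_leaky(const char *guess, const char *secret)
    {
        for (size_t i = 0; ; i++) {
            if (guess[i] != secret[i])
                return 0;              /* early exit leaks the mismatch position */
            if (secret[i] == '\0')
                return 1;
        }
    }

    /* Fixed-time style check: always scans all len bytes (both buffers must
     * be at least len bytes long), so the timing no longer depends on where
     * the first mismatch occurs. */
    int check_password_fixed(const char *guess, const char *secret, size_t len)
    {
        unsigned char diff = 0;
        for (size_t i = 0; i < len; i++)
            diff |= (unsigned char)(guess[i] ^ secret[i]);
        return diff == 0;
    }

    int main(void)
    {
        printf("%d %d\n", check_password_leaky("abc", "abc"),
                          check_password_fixed("abcd1234", "abcd5678", 8));
        return 0;
    }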
    • Hmmm (Score:2, Interesting)

      by Clansman ( 6514 )
      "Infortunately, the point NOT highlighted in the article is that with closed source proprietary software, notably windows, you have far less knowledgeable admins who _don't_ apply necessary patches often."

      And yet I find that Solaris admins are generally very up to speed despite their closed source proprietary software.

      I conclude that this is not the determining factor.

      I suggest that Windows, being the dominant platform, ubiquitous and cheap (sort of), means that many people have taken the opportunity to admin a box without training, mentoring, or other good habit-forming stuff.

      They now call themselves 'systems administrators' and yet are, as you note, often clueless as to what they should be doing.

      Paradox: Linux's cheapness may prove just as much of a burden in generating undertrained admins.
  • Well, Duh (Score:2, Insightful)

    by jweb ( 520801 )
    Not trying to flame or troll here, but this isn't a very shocking conclusion. The number of bugs in any software is primarily dependent on the quality of your design and implementation. Well-designed closed source programs can be just as secure as open source programs. Conversely, badly designed and coded programs will have many bugs, whether they're open source or not.

    Granted, it may be easier to find and fix bugs in open-source software, but that doesn't mean that a well-designed, well-coded, thoroughly tested closed source program can't be relatively bug-free and secure.
  • Equally secure? (Score:2, Insightful)

    by Ngwenya ( 147097 )
    Not sure that he does say that they're equally secure - he said that they are equally secure in an ideal world (Section 3), then goes on to establish the various micro-effects which break that ideal symmetry (vendor trust, quality of testers, etc).

    He also says (S3.4)
    ...assurance growth is not the only, or even the main, issue with open systems and security.


    He then cites (maybe speciously) the DRM stuff surrounding TCPA as one of the micro-effects which might skew things. I tend not to agree with him, but you don't go publicly disagreeing with Ross Anderson without thinking lots and reading lots more.

    Disclaimer: I work for HP, and have an interest in the Trusted Computing Platform, so I'm probably biased.

    --Ng
  • Maybe... (Score:4, Interesting)

    by ins0m ( 584887 ) <ins0mni0n&hackermail,com> on Friday June 21, 2002 @09:05AM (#3743255)
    The trade-offs:

    Pros:
    Closed-source: No one can see your code, thus preventing obvious exploits (buffer overflows, race conditions, etc.) from being quickly jumped on. Less chance that an external developer will accidentally or intentionally misuse some of your libraries or otherwise write in exploitable code.

    Open-source: Everyone can see your code, thus allowing a multitude of additional glass-box testers to help patch things more quickly to adapt around problems a project leader may/may not see. Quick turnaround on patching of code.

    Cons:
    Closed-source: Limited field of testers; slower turnaround on bug/exploit fixes when even reported (can go on unreported for months, or even when reported, may be ignored or shelved indefinitely).

    Open-source: Since everyone can see your code, some black-hat punk is invariably going to find some exploit and blast your distributions for it. Also, QA is nigh impossible to enforce in a timely fashion when hundreds of developers submit patches, sometimes anonymously.


    Opinion: Both may seem to be even; however, the timeliness of a fix can make all the difference in security, and waiting days vs. weeks or months for a patch can make or break an information-driven business. Also, even if an open-source project ships with an exploitable flaw ingrained, there will still be a quick turnaround on patching it, as there is for any bug. IANA genius, but at least from a business standpoint, it would seem that quick and usually-reliable beats slow but usually-guaranteed.
    • Re:Maybe... (Score:3, Interesting)

      by Wolfier ( 94144 )
      It also depends on what systems black hats "like" to exploit.

      For example, more valuable data is stored on MS machines than Linux boxes right now. Of course they're going to sharpen their skills hacking the MS box more.

      In my opinion it also has something to do with the MS-hate common in the hacker communities.

      (I know using MS as an example is not a good one since their software...well, let's not go that way. But my point is, it depends a lot on what the black hats want to do.)
    • Also, it is worth mentioning that closed-source programmers tend to spend more time on bogus features that purport to satisfy legal contracts, clueless marketers, or plain stupidity.

      (e.g. dvd region control, dancing paperclips, copy protection)

      While open-source programmers tend not to be disturbed by these dark forces, thus can potentially spend more time on improving the quality of the code.
  • HA HA HA HA (Score:2, Insightful)

    by jackb_guppy ( 204733 )
    Idealizing the problem, the researcher defines open-source programs as software in which the bugs are easy to find and closed-source programs as software where the bugs are harder to find. By calculating the average time before a program will fail in each case, he asserts that in the abstract case, both types of programs have the same security.

    If he truly said this... then the report is laughable.

    1) Windows is open-source, because the bugs are easy to find. But you cannot fix them.

    2) He changes all common meanings, so the report can be used as FUD.

    Is he a CS major or an MS major? (Marketing Science)

    • He's a well known and highly competent researcher in the security area (especially smartcards).

      He also has a penchant for self-promotion, so the "Marketing Science" suggestion is perhaps not too much of an insult...
    • Re:HA HA HA HA (Score:3, Informative)

      by ishark ( 245915 )
      Idealizing the problem, [....]

      If he truly said this... then the report is laughable.

      It doesn't take long to verify, you know....

      Acroread->Search->"Idealizing"

      No occurrences of 'Idealizing' were found in the document.

      Conclusion: wherever that text comes from, it's not the paper being discussed. Better luck next time.

      (-1, Lazy) for not doing the search yourself :)
    • Is he a CS major or MS major? (Martketing Science)

      I'll leave it up to you to decide what the "BS" stands for.
  • by Pentalon ( 466561 ) on Friday June 21, 2002 @09:07AM (#3743269)
    I haven't read the paper yet, but I would say that if, in general, any two particular pieces of software have the same number of bugs or security issues, the open source software will benefit technical server groups more, thanks to the ability of those groups to analyze the code and make their own fixes if necessary, and the way in which the community generally responds very quickly to discovered flaws. Closed source software does not tend to respond as fast or offer the flexibility of allowing users to analyze the code. Of course, I haven't read the paper yet. Maybe they take that into account.

    • As posted earlier by another user, many corporations have change management restrictions that will not allow an install of a patch that's "hot off the compiler". Many patches that come out within 24-48 hours in OSS have not gone through the necessary regression testing that a closed-source patch may have received. In this case, the only benefit you have with OSS is the choice to use an unstable patch; however, I know few admins who would make such a choice. So, while I agree that there may be a benefit with OSS in this regard, I contend that the benefit is minimal and practically none in larger organizations.
  • by iiii ( 541004 ) on Friday June 21, 2002 @09:08AM (#3743279) Homepage
    Idealizing the problem, the researcher defines open-source programs as software in which the bugs are easy to find and closed-source programs as software where the bugs are harder to find. By calculating the average time before a program will fail in each case, he asserts that in the abstract case, both types of programs have the same security.

    I am not sure how much value this has. There are a lot of other considerations.

    With open source you have the source, so you can do something about bugs, you can fix them. And you can also look for potential issues in the code. You are in control of your own security. And a potential attacker has no idea what you've done with your particular implementation.

    With closed source you are completely dependent on the vendor to provide fixes. First you have to prove to them that something is wrong; then, if you are lucky, after some period of time, they will provide an update which may or may not fix your particular problem. They may not be as motivated as you would be to fix the problem.

    I'll take the Open Source choice any time. That way the people who care about security are the ones in control of security, an arrangement that is likely to work better than any other.

    But at least "he acknowledged that real-world considerations could easily skew his conclusions. "

  • The old saying... (Score:2, Insightful)

    by sootman ( 158191 )
    Proprietary programs should mathematically be as secure as those developed under the open-source model, a Cambridge University researcher argued in a paper presented Thursday at a technical conference in Toulouse, France.

    In theory, there's no difference between theory and practice. In practice, there is.

    Supporters in the Linux community have maintained that open-source programs are more secure, while Microsoft's senior vice president for Windows, Jim Allchin, argued in court that opening up Windows code would undermine security.

    The two things are nowhere near the same. 'Open source development' is not at all the same thing as 'closed source development, opened up later.'

    People complain about posting without reading, but that's the thing--if it's from news.com/ZD/etc., it's wrong. :-)

  • That should read (Score:3, Interesting)

    by DeadSea ( 69598 ) on Friday June 21, 2002 @09:10AM (#3743294) Homepage Journal
    Ross Anderson just released a paper concluding that open source and closed source software are equally insecure.

    All software has security vulnerabilities. Software with vulnerabilities is secure as long as nobody knows about the vulnerabilities or nobody exploits them. Security is a process, not a state. To run a secure system, you have to know as much about the vulnerabilities as the hackers. You have to patch your systems. You have to manage your risk.

    All it takes is one hole in some piece of software that you are running. If somebody knows about it and hacks you, you are insecure. There are channels for discussing security vulnerabilities for both open and closed source software. Holes in both open and closed source software get patched. In that respect they are equally secure. There are more holes in both. It doesn't matter how many holes; it only takes one. In that respect they are equally insecure.

  • There is an article published in 1999 by IBM's John Viega, Senior Research Associate, discussing Open source software: Will it make me secure? [ibm.com].

    While the article is over 2 years old, the logic behind the man's reasoning is still very relevant, and he raises some good points.

  • Seriously, all things being equal, wouldn't you want to have access to the source code if you could have it?

    Maybe it's more secure, maybe it isn't. I think security depends as much on the humans who set up and use the system as it does the software. But security is just one selling point.

    If you don't have the source, you can't modify the code. All you can do is configure. (Well, unless you like hacking binary.) But if you have the source, you can de-bug, add features, remove unwanted features, etc.

    And if you don't have the knowledge, skill, or desire to do this on your own, does it hurt you any to have the source available?

    There's another sense in which having the source code makes you more secure: you're not tied to the vendor. If they go out of business, you don't have to go shopping for a new vendor who has a similar product that you'll have to migrate to in order to enjoy upgrades, patches, and tech support. If they decide to add features to a new version that you don't like, you can branch the code off and keep your house version however you like it.

    There's a zillion reasons to prefer open source software. It's not just about security.
  • I'll accept his statement that both are equally secure, especially because it's 90% based on the administrator. The difference, however, is that an open-source bug has many more eyes upon the code, and hence can be fixed a lot more easily. Also (though this doesn't pertain to most closed-source products), for programs that were written to be platform-independent, the fix only needs to be a small .diff file for the administrators who want to be as secure as possible, and the official builds and packages can be released in due time.

    The other great thing about OS is you -yourself- can fix the bug. No, not everyone is a kernel hacker, but there are many bugs in small programs too.

    E.g.: Not too long after I had first installed Linux, I found out I couldn't play a certain DVD with any DVD player (Ogle, MPlayer, Xine, etc.) although they played all of the other ones perfectly. The problem was that libdvdread (I believe) was dying on a failed (and completely unnecessary) assert(). So I opened up the sources, commented out that line, recompiled, and voila, I could watch it.

    So, yeah, there will always be bugs; some OS products may even have more because they're made by people in their spare time (i.e. apps like Ogle); but regardless, because there are many more eyes on it, bugs can be (and generally are) fixed a lot quicker....
  • bugtraq reference (Score:5, Insightful)

    by MartinG ( 52587 ) on Friday June 21, 2002 @09:16AM (#3743338) Homepage Journal
    open source software isn't as bug free as we would all like to think.

    All this shows is that open source software has had more bugs discovered and fixed than we would have liked there to have been in the first place. It has no relation at all to the number of remaining undiscovered bugs, and therefore no relation to the security of the software in question.

    It's simple:

    Assumptions:
    1) When written, open source and closed source software have on average the same number of security bugs.

    Observations:
    1) The number of security bugs in a piece of software only decreases when they are fixed.
    2) A security bug is typically fixed after, and as a result of, its being discovered. (They can be fixed by accident, but I will neglect this as it's irrelevant anyway.)
    3) Closed source software and open source software can both have bugs discovered by trial and error style cracking.
    4) Open source software can have bugs discovered due to the sheer numbers of people with access to the source.

    Conclusion:
    1) I conclude that open source software will tend to have any bugs discovered more quickly because there are more ways to discover them, and all ways available to closed source are also available to open source.

    Can anyone fault my reasoning? It seems to me that both start equal on average, but open source will tend to have the bugs removed more quickly.
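    A back-of-the-envelope version of that reasoning, with invented numbers: suppose both start with the same number of latent bugs, closed source bugs are found only by black-box probing at some rate, and open source adds source review on top of the same probing. The expected number of undiscovered bugs then decays faster in the open case.

    #include <math.h>
    #include <stdio.h>

    /* All rates below are made up purely for illustration. */
    #define N0        100.0   /* initial latent bugs in both code bases              */
    #define R_PROBE     0.01  /* discoveries per bug per week via black-box probing  */
    #define R_REVIEW    0.02  /* extra discoveries per bug per week via source review */

    int main(void)
    {
        for (int week = 0; week <= 100; week += 20) {
            double closed_src = N0 * exp(-R_PROBE * week);
            double open_src   = N0 * exp(-(R_PROBE + R_REVIEW) * week);
            printf("week %3d: closed ~%5.1f undiscovered, open ~%5.1f\n",
                   week, closed_src, open_src);
        }
        return 0;
    }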
    • by Mr_Silver ( 213637 ) on Friday June 21, 2002 @09:37AM (#3743478)
      4) Open source software can have bugs discovered due to the sheer numbers of people with access to the source.

      True, but just because they can doesn't mean that they do. One of the great myths about open source is that *anyone* can just dip in and discover a bug and how to fix it. That simply isn't true.

      I can find bugs in closed and open source software in exactly the same way: by using the product until something wrong or unexpected happens. But just because I have access to the source doesn't mean that I could actually fix the bug.

      If you look at projects such as Apache and Mozilla, they tend to have a number of people who know the code very, very well, a few who, given a couple of hours, might be able to work something out, and a very large number of people who, in the grand scale of things, are of no use at all in providing a fix to a bug.

      This contrasts to a large number of individuals in an organisation who know the code very well and work with it day in day out.

      Finally let us not forget that whenever people talk about security they often use Apache and IIS as their examples. Be aware that these are not really good examples. Not all OSS projects are of Apache's quality and not all closed projects are of IIS' quality.

      You've ended up picking one of the best in the OSS world vs one of the worst in the closed world. It would be a little like comparing Ford's best car with Vauxhall's worst. Just because the Ford won all the time, does it mean that all Fords are always better than all Vauxhalls?

      (I think Vauxhall is Opel in the US)

      • This contrasts to a large number of individuals in an organisation who know the code very well and work with it day in day out.

        Whoops, this doesn't contrast at all. What I was trying to say is that in a closed organisation you have a number of people who know the code very well, some who could figure it out in a couple of hours, and thousands of people who are no help at all.

        In other words, pretty much the same as OSS. Just because in OSS everyone has access to the code doesn't mean that they know where to look and how to fix it.

      • The "large numbers of invididuals who know the code well" applies to close source and open source. Open source in addition has those who do not know it that well, but can fix a few bugs. I'm not suggesting that "just anyone" can fix a bug. In order for my conclusion to be correct, there would only have to be one person who has fixed one bug once that they wouldn't have been able to fix without the source. (and even _I_ fall info that category)

        As you say, you can find bugs in open and closed source in the same way, but some people have two ways of finding bugs.
      • True, but just because they can doesn't mean that they do. One of the great myths about open source is that *anyone* can just dip in and discover a bug and how to fix it. That simply isn't true.
        Yes, but.
        The first thing in eradicating a bug is being able to reproduce it.
        Then you need to somehow bring together knowledge of the bug and knowledge of the program.
        Open Source increases the options, and over time tends toward more bug-free programs.
    • 4) Open source software can have bugs discovered due to the sheer numbers of people with access to the source.

      Unfortunately this hasn't proven out in practice. Having access to the source doesn't mean that people look at the source. Even if they do, it doesn't mean that they know how to fix the bug correctly. (Look at that recent ISS fix for Apache that everybody claimed wasn't a proper fix.)

      Again I think we're back to the same fact that there is no fundamental difference. It's important that you remember to disconnect "in theory" from "in practice" in any analysis.

      Plus, just because you are not aware of the inner operations of a company doesn't mean that they aren't, in effect, performing the same basic functions as the open source projects. I'd refer you to that article from the Lotus developer who pointed out how little actual difference there is between commercial approaches and what was described by ESR in his Cathedral paper, but I can't find the damn link right now.

      • Having access to the source doesn't mean that people look at the source.

        They don't have to. Only a significant minority have to - and they do.

        The Apache example is perfect. Yes, one fix offered was not "proper" (even though, some say, it did the job temporarily), but the proper fix is available now, and with CSS there likely would not have been any fix so far.
    • The only fault is that you assume open source software will have more "bug testers" (i.e., anyone with access to the source) by default. In theory, that's not known before the fact. I have to imagine Eudora has more people working on and testing their mail client than Columba (an open source mail client on SourceForge) does. Just because it's closed doesn't mean there are fewer people working on the piece of software, and therefore fewer people who can fix bugs.

      But practically speaking, I think you're on the money. What it boils down to is what people do when they do find bugs in the source. I think M$ would like you to think people will use this open book to start hacking. It would appear that most people bothering to look at open source projects would prefer to submit a patch than to exploit the security bug.

      As long as people patch and don't exploit, and as long as we hold our discussion to popular open source products and their closed peers (like Apache vs. IIS rather than Eudora vs. Columba), I think your argument holds.

      Post script...
      Note that it also helps that open source projects tend not to toss legacy code out the door as often, afaict. Once a bug's gone, it's gone, so to speak.
  • Most of the time when comparing closed source security to open source security, people want to use Microsoft as the flagship of closed source and poor security. And they use Linux with all the latest patches, or OpenBSD, which is a model of security. I think it would be fairer if you compared Linux as the open OS and Solaris as the closed source one, and checked the security of those two; in that case you may find less of a gap. Using Microsoft as a judge of anything really gives closed source a bad name, since they only make junk.
  • I personally run an open source perl portal... Don't worry slashcode, I'm not competing with you :)

    Just recently, a new developer came on board, and really studied the code. He successfully found about 10 odd / abstract ways to exploit the code.

    I might be an ok programmer but I NEVER in my life would have found these. It's just not my specialty....

    Without the code being open source and open to viewing by others, I never would have thought to look for these types of exploits, and they would have remained in the code. And if someone with malicious intentions had tried them... I would have been... in short... screwed!

    NOW...
    Two things had to happen here for an open source program to end up secure. 1) I was willing to quickly make the changes... and not just quick patches, but to really study the code and look for the best way to fix it.
    2) I had a good samaritan.... He could have been malicious, and had his way. He didn't.

    I personally believe that both are required, and then yes, open source is more secure and will always be.

  • by DeepDarkSky ( 111382 ) on Friday June 21, 2002 @09:34AM (#3743462)
    Closed source can have fewer bugs (security bugs are merely a special kind of bug) if the company that does the development is disciplined and puts the focus on the quality (i.e. minimizing the bugs) of the software, because they are all in the same organization, they all follow development standards and methodology, and they provide good QA testing. That is, if the market and marketing department and the bottom line allow them time to do things correctly, which often is not the case.

    Open Source software often depends on a somewhat less uniform and disciplined group (though its members can often be, individually, more disciplined than their commercial counterparts). There is usually less formal organization. This is where it really depends on the quality of work of the people working on these projects.

    Because Open Source projects are less sensitive to the market and the bottom line (in general, except for the projects undertaken by commercial entities), they are not as likely to have quality problems because of lack of time.

    But to say that Open Source projects have fewer bugs because more eyes are looking at them is a pretty big assumption. Just because more eyes can look at something doesn't mean more eyes will. The bugs can stay in Open Source projects for years before someone finds a problem - in this case, I'd say it depends on how popular the project is and how attractive it is to people who will look at code, look for problems, and understand what to look for.
    If anything, for a short-cycled, less popular piece of software, a commercial product can have better quality than an open source one if the commercial developers are disciplined and dedicated. It is simply a matter of time.

  • There are essentially two approaches to security: Proactive and reactive.

    Good coding, auditing and QA are proactive. They are expensive, boring, take a lot of time and you are never sure that there's nothing left that slipped through anyway.

    Which is why most code, both free and proprietary, relies mostly on reactive security (though vendors will always pay lip service, and often more than that, to at least good coding).

    In proactive security, there should be little difference between free and proprietary software, as the bugs are found and fixed before the product gets shipped.

    Free software shines in reactive security, however. The blinding speed with which bugs get fixed is impressive.
    I found that Mozilla overflow that made the news recently, and it took the Mozilla team about a week to come up with a fix. My experience with commercial vendors is that it'll take them a week to come back to you and ask for more details. If they give a damn at all.
    For example, there are critical bugs in IE that have gone unfixed for months and are still there. That outlook==worm problem won't go away this century, either.

    It's not the number of bugs that counts, it's the severity and the speed of bugfixes. Give me 10 minor Apache bugs that are fixed within the week any day, but keep those 2 critical IIS bugs that take a quarter to fix.

  • by Trailer Trash ( 60756 ) on Friday June 21, 2002 @09:41AM (#3743512) Homepage
    I have been running an ISP now for two and a half years, using Linux and FreeBSD exclusively. In that time, the following items have cropped up that I had to fix:

    1. Bind hole (root exploit at the time, now it's chroot'd and running as named.named)
    2. ftpd (root exploit, I turned ftpd off)
    3. telnetd (root exploit, turned it off, too)
    4. openssh (root exploit, simply recompile of new version)
    5. current Apache bug, which even if it's an exploit is far from root or anything else useful

    That comes down to a problem to be fixed every 6 months or so. This is the real world. It doesn't matter a rat's ass to me what shows up on Bugtraq; what matters is whether someone is going to be able to hack my boxes. Most of the "bugs" aren't going to leave me open to remote exploit.

    Given that, it's ludicrous to say that my setup is no more secure than a Windows/IIS setup. IIS updates come out weekly, sometimes requiring reboots. I literally don't have the time that it would take to run Windows here.

    And IIS is probably the most-hacked piece of Windows. Want to compare it to Apache? Apache runs as nobody.nobody on most systems, or perhaps www.www. How about IIS? Hack Apache and you're an unprivileged user who'll have to rehack the box from the inside. Hack IIS and you're the Administrator. Even if Apache were as exploitable as IIS, it still wouldn't be as big a deal (see the privilege-dropping sketch below).

    Michael
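    A minimal hypothetical sketch of the run-as-root-then-drop-privileges pattern described above (illustrative only, not Apache's or BIND's actual code; error handling abbreviated, and "nobody" and "/var/empty" are just example choices):

    #include <grp.h>
    #include <pwd.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Start as root (e.g. to bind port 80), lock the process into an empty
     * chroot jail, then permanently switch to an unprivileged user, so that
     * exploiting the worker later does not hand out root. */
    static void drop_privileges(const char *user, const char *jail)
    {
        struct passwd *pw = getpwnam(user);     /* look up the user before chroot */
        if (pw == NULL)              { fprintf(stderr, "no such user\n"); exit(1); }
        if (chroot(jail) != 0)       { perror("chroot");    exit(1); }
        if (chdir("/") != 0)         { perror("chdir");     exit(1); }
        if (setgroups(0, NULL) != 0) { perror("setgroups"); exit(1); }
        if (setgid(pw->pw_gid) != 0) { perror("setgid");    exit(1); }
        if (setuid(pw->pw_uid) != 0) { perror("setuid");    exit(1); }
    }

    int main(void)
    {
        /* ... bind the listening socket here, while still root ... */
        drop_privileges("nobody", "/var/empty");
        /* From this point on, a compromise only yields the "nobody" account. */
        printf("now running as uid %d\n", (int)getuid());
        return 0;
    }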
  • by Uggy ( 99326 ) on Friday June 21, 2002 @09:43AM (#3743523) Homepage

    Look... why is it that highly paid movie editors who pored over Spider-Man for many months with millions of dollars couldn't find what the movie-viewing public did in the opening weekend? According to movie-mistakes.com:

    Fans have so far spotted 77 continuity errors, the most flaws identified in an opening weekend, according to Web site movie-mistakes.com.

    Jon Sandys, who runs the site, said the number of mistakes could be a symptom of the movie's popularity.

    "It's obviously possible that it's got a higher than average number of errors, but huge numbers of people are going to see it and that makes for lots of pairs of eyes checking every inch of the screen," he told the Independent newspaper today.

    Does that sound remarkably similar to Eric Raymond's Cathedral and the Bazaar? When Spider-Man was checked for bugs by the highly paid editors (programming team) and none were found, did they not exist? Is the movie inherently more flawed because the bugs were found and reported by the viewing public (open source programmers)?


    • The many eyes argument tends to fall apart when you see major bugs in MAJOR OSS software projects, like Apache and SSH, that have survived for years, through entire version numbers. The simple fact of the matter is that any complex system is going to have unforeseen interactions.
  • This underestimates the ease with which some people reverse engineer closed software for a living. As a result, the "applying a new abstract attack" example is completely bogus. If you know how a vendor thinks, writes, and generally does things, then you can apply the attack very soon after its release. It is not significantly more difficult than applying it to an open source application. There are other such things as well.

    Otherwise a good read. But the fact that it is written in a place where reverse engineering is not a flourishing business definitely shows.
  • "In theory, theory and practice are the same, but in practice they're different."
  • Mr. Anderson's paper is only thirteen pages long. A quick review of it shows extensive use of anecdotes and stories. As I learned in high school, the first step of the scientific method is to pose a hypothesis. It seems that he has barely made it past this first step. I say this because his paper appears to be pretty thin on real research. He may have one example with TCPA, but what about all the other open systems? In the end, he may or may not be correct. But let's wait for his peers to have their say on his hypothesis.
  • Wouldn't you be better off thinking of them as equally insecure?
  • Closed source programs are typically not free. If a bug shows up in one, will you have to wait until the next release for the fix, and will you have to pay for it if you don't have maintenance?

    Not to sound cheap, but sometimes it can be a PITA to grab some funds & do the usual hoop jumping to get a purchase order cut. And it can take a *long* time, depending on the approval channels.

    With Apache, I had our webserver updated in a few minutes of reading the announcement of the fix.

    -asb
  • If you look at the article's references, you'll see that "The Cathedral and the Bazaar" is quoted along with "Opening the Open Source Debate" from the Tocqueville Institution ...
  • by AnotherBlackHat ( 265897 ) on Friday June 21, 2002 @10:19AM (#3743805) Homepage
    Just because Microsoft doesn't publish their source code,
    doesn't mean the source code is not available.
    Crackers aren't afraid to decompile code, or use social engineering to obtain it.
    Non disclosures mean nothing to someone who is writing a virus.

    But it does stop the white hats.

    That asymmetry makes a big difference in the analysis.
    In open source the white hats and black hats are on equal footing.
    In closed source, the black hats have an advantage somewhere
    between alpha and 0, depending on how hard it is to obtain the source.
    Historically, it's been proven over and over that obtaining the source is much easier than the original designers thought,
    which is the reason security through obscurity is treated with such derision in the crypto community.

    Most bugs are found by people running the code.
    Most security holes are found by people who are looking for them.
    Since Black hats have no real difficulty obtaining the source,
    "Closed" source gives them a huge advantage over their white hat counter parts.

    -- this is not a .sig
  • Software, as written, is just going to have a number of bugs proportional to the amount of code. The number of security holes is proportional to the number of bugs, with some constant depending on language and programming style.

    Then there's testing and debugging. It seems to me that essentially nobody just goes through open source code for fun, trying to find bugs; people who look at the code generally join the projects. So, in either case, the team writing the software is also mostly the team that tests it. So you'd expect about the same results.

    The security advantage to open source is that, if you really care about security, you can examine the source yourself and determine how good it is, to the best of your abilities. With closed-source, you have to trust whoever wrote and tested the software, since you can't do it yourself.

    Of course, almost nobody cares that much. Of course, you can probably bet that if the NSA doesn't change something in the version intended for government agency use, it's right. (Even if the NSA were putting in holes only they know about, they wouldn't leave any pre-existing holes)
  • The TCPA "trusted platform" does not improve security. It's only a form of tamper resistance, so that only "approved software" will run. If the "approved software" has a security hole, the TCPA system won't provide any protection.

    The TCPA is basically a boot-loader system that enforces code-signing, using hardware assistance. It's a lot like the XBox boot system, the one that keeps you from running your own programs on an XBox.

  • by _|()|\| ( 159991 ) on Friday June 21, 2002 @10:23AM (#3743830)
    Perhaps you've heard of the programming competition sponsored by Tom DeMarco and Tim Lister in the 80s. They varied the requirements, telling some teams to minimize coding time, others to minimize bugs, etc. The conclusion was that, on the whole, programmers do what they're told. There were some anomalies: one of the rapid development teams had fewer bugs than most, for example.

    I suspect that you can generalize this to security, as well. OpenBSD focuses on security, and it shows. Microsoft doesn't, and it shows. This is not a matter of proprietary v. free.

  • Remember, there are no security 'bugs' if there are no attacks.
    A bug implies an imperfection in the code that occurs without any outside pressure.
    That being the case, because new attacks get 'created' all the time, the real measure of security is time to respond, since before the attack existed it could not be protected against.
    In this respect open source is much more secure, due to the sheer volume of people who are able to respond. Whether it's the original programming team or some hobbyist in Zimbabwe, your chances for a quick response are much greater, not to mention you might fix the thing yourself.
    If one adheres to the author's thinking (which for a very long time has been advocated by Microsoft and other commercial software companies) that it is impossible to create 100% correct programs, and one only concerns oneself with the time period during which a security exploit is known, then the author is mostly correct.

    However, if you want a truly secure system, you want a system that is proven to be secure. If people are to trust the Internet and its services, one needs a truly proven secure Internet - in that regard the author is way off, and the paper is really unimportant since it talks about a situation that should be purely academic.

    The only way to prove that a system is secure is to release all specifications (i.e. source code) and let everybody try to break the system. If no one has broken the system in a couple of years, the system is very secure.

    With closed source you can never prove that the system is secure by pounding at it... because there may exist a security hole that is only easy to find if you have the source code.

    It only takes a disgruntled employee to release the source code (or just the exploit) of the closed-source system and you have a nightmare.

    If you want to have a truly secure system - use proven secure software.

    This is why crypto algorithms are released: so that all crypto experts can try to break them. This is similar to when ESR says that given enough eyeballs, any bug is shallow.

    It is possible to write computer programs that are 100% correct. But the only way to ensure that is to mathematically prove that the program is correct. There exists an academic programming language called Pro that was created for just that purpose - to prove that a program is 100% correct according to its specifications.

    So in theory it is possible to make 100% correct computer programs. The only way to make sure the proof is correct is to also make sure it holds in practice, by letting others try to find errors in the proof. Thus the only way one can get a 100% correct program is to release the source code.

    In practice there also exist programs that have proven to be very secure - because the developers were concerned about security - one good example is qmail.

    A different example is Microsoft, who recently said that they can't release their source code because it would threaten US security. Deep down in Microsoft software there exists at least one unexploited security hole. It only requires one person to find it, or one former employee out of the hundreds or maybe thousands of Microsoft employees who know about the security hole, to tell others about it.

    If you are using Microsoft's closed software, you are now sitting on a ticking bomb. So anyone interested in a secure system should not use Microsoft software, since it is well known that there exists a security hole in it that will compromise your security when it becomes public knowledge. So anyone concerned with security who uses Microsoft software... well, let's just say that they maybe should change operating systems ASAP.

    With open source I know that if anyone has seen a problem it is fixed - for closed source I know that the company will probably not fix it until an exploit is widely known.

    If one wants to be taken seriously when talking about secure software, one needs to show that the software is secure and not just talk about security and treat it as a PR problem.

  • by Anonymous Coward
    Seems to me that neither OS nor CS is any kind of guarantee of security -- holes can go unnoticed in OSS for years, and holes can be found in CSS without having the source at all.

    What really matters is *how easily and quickly the holes are fixed*. Seems that OSS has CSS beat on this issue hands down.

    -- jfh@cise.ufl.edu
    His analysis is based on a mathematical modeling of the processes.

    I'd say that open source does a better job of actually delivering on the promise of security.

    This article does point out that it can be done, so MS has no reason not to do a better job.
  • I've found this to be rather true when comparing open source vs. closed source. When it comes to comparing your big-name software, open source takes the lead in security. I guess it has to do with more people poking their heads into the code. There is almost a total reversal when it comes down to small programs, even custom work; there I avoid OSS at all costs. These smaller programs don't get much exposure and hence are usually full of holes and bugs, and are rarely ever at v1.0 status. I have had much better luck choosing closed source products in this area. I guess you get what you pay for here. Just my observation.
