Security

Security of Open vs. Closed Source Software 366

morhoj writes "Cambridge University researcher Ross Anderson just released a paper concluding that open source and closed source software are equally secure. Can't find a copy of the paper online yet, but I thought this would make for an interesting morning conversation. You may not agree with him, but anyone who's on the BugTraq List can tell you that open source software isn't as bug free as we would all like to think." I found Anderson's paper, so read it for yourself. There are some other interesting papers being presented at the conference as well.
  • by q-soe ( 466472 ) on Friday June 21, 2002 @09:57AM (#3743200) Homepage
    But I think security of software often comes down to the admin... I mean, you can secure any operating system if you know what you are doing, and it's easy to build an insecure box on either Linux or Windows.

    How secure is an out-of-the-box Mandrake install? Or a Windows 2000 one?

    A good admin who is a pro will work hard to secure his servers and patch and look after them. A bad admin is a bad admin regardless of the OS.

  • by Nerant ( 71826 ) on Friday June 21, 2002 @09:58AM (#3743203)
    Security bugs in software are inevitable: they are bound to happen, sooner or later. A properly set up system can mitigate some of these problems (e.g. chroot, modified security kernels). My concern is how long and how public security disclosures are, and how long the affected vendor takes to issue a bugfix.
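    (A concrete aside: a minimal sketch of the chroot-style mitigation mentioned above, assuming a Unix-like system. The jail path "/var/empty" and the uid/gid 65534 for "nobody" are illustrative values, not anything from the post.)

        /* Sketch: confine a daemon before it handles untrusted input. */
        #include <stdio.h>
        #include <sys/types.h>
        #include <unistd.h>

        int main(void)
        {
            /* chroot() requires root and should happen before any network I/O. */
            if (chroot("/var/empty") != 0 || chdir("/") != 0) {
                perror("chroot");
                return 1;
            }
            /* Drop the group first, then the user, so root can't be regained. */
            if (setgid(65534) != 0 || setuid(65534) != 0) {
                perror("drop privileges");
                return 1;
            }
            /* A compromise past this point yields an unprivileged process
             * locked inside an empty directory. */
            printf("running unprivileged, uid=%d\n", (int)getuid());
            return 0;
        }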

  • Of course not... (Score:3, Insightful)

    by Dilbert_ ( 17488 ) on Friday June 21, 2002 @09:59AM (#3743214) Homepage
    Of course there are just as many bugs in open source software as in closed source. Most of it is even written by the same people: what they do at work is closed; what they hack on at night is open.
    The main difference lies in the speed of, and motivation for, fixing the bugs. Open source bugs can be fixed by anyone, but closed source bugs must be fixed by vendors who are afraid even to admit they exist, for fear of losing customers.
  • Security (Score:3, Insightful)

    by Ashcrow ( 469400 ) on Friday June 21, 2002 @10:02AM (#3743227) Homepage
    There will always be software bugs as long as programmers are not perfect. The huge difference is that in a closed source environment you have to wait for patches from the vendor, which may never come. With OSS you can patch it yourself, get unofficial patches for your vendor's packages, get different up-to-date packages, or install the latest version from source.
  • Buglist (Score:2, Insightful)

    by Bloody Bastard ( 562228 ) on Friday June 21, 2002 @10:02AM (#3743231)
    OK, open source has a lot of bugs, but who lists closed source bugs? I'm sure most of their bugs never go public, because that wouldn't be a good marketing technique... It isn't fair to compare the two lists.

    Just my two cents.
  • Duh... (Score:5, Insightful)

    by sootman ( 158191 ) on Friday June 21, 2002 @10:02AM (#3743234) Homepage Journal
    Security != number of bugs. There's 'severity of bugs' and 'speed of fixes', not to mention the design of the OS and the software in the first place--think permissions, user space vs. kernel space, etc.
  • by Vengie ( 533896 ) on Friday June 21, 2002 @10:05AM (#3743251)
    From the article....
    "Even though open and closed systems are equally secure in an ideal world, the world is not ideal, and is often adversarial," Anderson said.
    To rehash an old example (learned in my OS class): the Multics system had a cruddy password check that interacted poorly with the VM. It compared one character at a time and stopped at the first wrong character. If you laid out the guess so that the next character to be checked sat on a page boundary, you could tell whether the current character was correct by how long the check took. If you quickly got an error, the character was wrong; otherwise, you page-faulted, trapped into the OS, and read the next page (and next character) off disk. End result? "OSS --> easily cracked passwords" is pseudo-valid. Time to patch said bug? Five minutes. Result: problem solved. Unfortunately, the point NOT highlighted in the article is that with closed source proprietary software, notably Windows, you have far less knowledgeable admins who _don't_ apply necessary patches often. (Vengie's Addition to Godwin's Law: in dealing with Internet security, the probability of a thread discussing Nimda/Code Red turning into blatant MS bashing approaches one as the number of posts increases. Let's avoid that here.)
    In the real world, closed source apps DON'T get patched fast and have far more easily recognized buffer over/underrun errors. (OSS people are notorious for catching buffer over/underruns in the development/testing phases.) Then again, like my OS teacher said... "If you ever want to hack into a system, just find a bug in sendmail." ;)
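    (A sketch of the comparison pattern described above, in C rather than whatever the original system ran; the function names are made up. The first version's early exit is exactly what lets timing reveal how many leading characters are right; the second always does the same amount of work.)

        #include <string.h>

        /* Leaky check: returns at the FIRST mismatch, so anyone who can time
         * the call (e.g. via the page-boundary trick above) learns how many
         * leading characters of the guess were correct. */
        int check_leaky(const char *guess, const char *secret)
        {
            size_t i;
            for (i = 0; secret[i] != '\0'; i++)
                if (guess[i] != secret[i])
                    return 0;            /* early exit: time depends on i */
            return guess[i] == '\0';
        }

        /* Constant-time variant: scan the whole secret no matter where a
         * mismatch occurs, folding differences into an accumulator. */
        int check_constant_time(const char *guess, const char *secret)
        {
            size_t gn = strlen(guess), sn = strlen(secret);
            unsigned char diff = (unsigned char)(gn != sn);
            for (size_t i = 0; i < sn; i++)
                diff |= (unsigned char)((i < gn ? guess[i] : 0) ^ secret[i]);
            return diff == 0;
        }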
  • Well, Duh (Score:2, Insightful)

    by jweb ( 520801 ) <(jweb68) (at) (hotmail.com)> on Friday June 21, 2002 @10:05AM (#3743253)
    Not trying to flame or troll here, but this isn't a very shocking conclusion. The number of bugs in any software depends primarily on the quality of its design and implementation. Well-designed closed source programs can be just as secure as open source programs. Conversely, badly designed and coded programs will have many bugs, whether they're open source or not.

    Granted, it may be easier to find and fix bugs in open source software, but that doesn't mean that a well-designed, well-coded, thoroughly tested closed source program can't be relatively bug-free and secure.
  • Equally secure? (Score:2, Insightful)

    by Ngwenya ( 147097 ) on Friday June 21, 2002 @10:05AM (#3743254)
    Not sure that he does say they're equally secure - he says that they are equally secure in an ideal world (Section 3), then goes on to establish the various micro-effects which break that ideal symmetry (vendor trust, quality of testers, etc.).

    He also says (S3.4)
    ...assurance growth is not the only, or even the main, issue with open systems and security.


    He then brings in (maybe speciously) the DRM issues surrounding TCPA as one of the micro-effects that might skew things. I tend not to agree with him, but you don't go publicly disagreeing with Ross Anderson without thinking a lot and reading a lot more.

    Disclaimer: I work for HP, and have an interest in the Trusted Computing Platform, so I'm probably biased.

    --Ng
  • HA HA HA HA (Score:2, Insightful)

    by jackb_guppy ( 204733 ) on Friday June 21, 2002 @10:06AM (#3743264)
    Idealizing the problem, the researcher defines open-source programs as software in which the bugs are easy to find and closed-source programs as software where the bugs are harder to find. By calculating the average time before a program will fail in each case, he asserts that in the abstract case, both types of programs have the same security.

    If he truly said this... then the report is laughable.

    1) Windows is open source, because its bugs are easy to find. But you cannot fix them.

    2) He changes all the common meanings, so the report can be used as FUD.

    Is he a CS major or an MS major? (Marketing Science)

  • by Pentalon ( 466561 ) on Friday June 21, 2002 @10:07AM (#3743269)
    I haven't read the paper yet, but I would say that if any two pieces of software generally have the same number of bugs or security issues, the open source one benefits technical server groups more: those groups can analyze the code and make their own fixes if necessary, and the community generally responds very quickly to discovered flaws. Closed source vendors do not tend to respond as fast, nor do they offer the flexibility of letting users analyze the code. Of course, I haven't read the paper yet. Maybe it takes that into account.

  • by iiii ( 541004 ) on Friday June 21, 2002 @10:08AM (#3743279) Homepage
    Idealizing the problem, the researcher defines open-source programs as software in which the bugs are easy to find and closed-source programs as software where the bugs are harder to find. By calculating the average time before a program will fail in each case, he asserts that in the abstract case, both types of programs have the same security.

    I am not sure how much value this has. There are a lot of other considerations.

    With open source you have the source, so you can do something about bugs, you can fix them. And you can also look for potential issues in the code. You are in control of your own security. And a potential attacker has no idea what you've done with your particular implementation.

    With closed source you are completely dependent on the vendor to provide fixes. First you have to prove to them that something is wrong; then, if you are lucky, after some period of time, they will provide an update which may or may not fix your particular problem. They may not be as motivated as you would be to fix it.

    I'll take the Open Source choice any time. That way the people who care about security are the ones in control of security, an arrangement that is likely to work better than any other.

    But at least "he acknowledged that real-world considerations could easily skew his conclusions."

  • The old saying... (Score:2, Insightful)

    by sootman ( 158191 ) on Friday June 21, 2002 @10:08AM (#3743280) Homepage Journal
    Proprietary programs should mathematically be as secure as those developed under the open-source model, a Cambridge University researcher argued in a paper presented Thursday at a technical conference in Toulouse, France.

    In theory, there's no difference between theory and practice. In practice, there is.

    Supporters in the Linux community have maintained that open-source programs are more secure, while Microsoft's senior vice president for Windows, Jim Allchin, argued in court that opening up Windows code would undermine security.

    The two things are nowhere near the same. 'Open source development' is not at all the same thing as 'closed source development, opened up later.'

    People complain about posting without reading, but that's just it--if it's from news.com/ZD/etc., it's wrong. :-)

  • by demaria ( 122790 ) on Friday June 21, 2002 @10:13AM (#3743317) Homepage
    Patches are a big deal, especially in production environments. You can't just willy-nilly upgrade the kernel on a high-load, important server. Bigger departments/companies have a change management system in place so that everyone knows when any piece of software is upgraded, when it will happen, who is to blame, and why it occurred. Patches can cause unexpected problems (like that Linux one that corrupted the filesystem a few months back). This process may take days or weeks to complete.
  • by goldspider ( 445116 ) on Friday June 21, 2002 @10:15AM (#3743333) Homepage
    I hate to nitpick, but for a post modded as Insightful, it may be relevant to note that this story is about SECURITY in open vs. closed source software, not BUGS. A totally different kind of discussion.
  • bugtraq reference (Score:5, Insightful)

    by MartinG ( 52587 ) on Friday June 21, 2002 @10:16AM (#3743338) Homepage Journal
    open source software isn't as bug free as we would all like to think.

    All this shows is that open source software has had more bugs discovered and fixed than we would have liked there to have been in the first place. It has no relation at all to the number of remaining undiscovered bugs, and therefore no relation to the security of the software in question.

    It's simple:

    Assumptions:
    1) When written, open source and closed source software have on average the same number of security bugs.

    Observations:
    1) The number of security bugs in a piece of software only decreases when they are fixed.
    2) A security bug is typically fixed after, and as a result of, being discovered. (They can be fixed by accident, but I will neglect this as it's irrelevant anyway.)
    3) Closed source software and open source software can both have bugs discovered by trial-and-error-style cracking.
    4) Open source software can have bugs discovered due to the sheer number of people with access to the source.

    Conclusion:
    1) I conclude that open source software will tend to have its bugs discovered more quickly, because there are more ways to discover them, and all the ways available to closed source are also available to open source.

    Can anyone fault my reasoning? It seems to me that both start out equal on average, but open source will tend to have its bugs removed more quickly.
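    (The reasoning above is easy to restate numerically. A toy model with rates invented purely for illustration: give every bug a small daily chance of black-box discovery, and give open source an extra, independent source-review channel.)

        /* Toy model of the argument above: both code bases start with the
         * same number of bugs; open source has every discovery channel that
         * closed source has, plus source review. All rates are invented. */
        #include <stdio.h>
        #include <math.h>

        int main(void)
        {
            double n0 = 100.0;            /* bugs at release, both models    */
            double p_blackbox = 0.002;    /* per-bug daily black-box rate    */
            double p_review   = 0.003;    /* extra per-bug daily review rate */

            for (int day = 0; day <= 720; day += 180) {
                double closed = n0 * pow(1.0 - p_blackbox, day);
                double open   = n0 * pow((1.0 - p_blackbox) * (1.0 - p_review), day);
                printf("day %3d: ~%5.1f bugs left closed, ~%5.1f open\n",
                       day, closed, open);
            }
            return 0;                     /* compile with -lm */
        }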
  • by dnoyeb ( 547705 ) on Friday June 21, 2002 @10:17AM (#3743345) Homepage Journal
    I think we should be careful to draw a distinct line between a Security 'flaw' and a 'bug'.

    A flaw is an error in judgement. A bug is an error in coding. The original poster closed by saying that open source has lots of bugs. That is unrelated to security unless they are specifically security-related bugs.

    In any event, the speed at which you can lock down the Fort HAS to be a consideration.

    I mean, we have planes flying in Iraq and Afghanistan right now. They are being shot at all the time, but they move fast enough to get out of the way. Open source moves faster than closed source, so I can't possibly see how the article writer concluded they were equal.

    Equally buggy, yes. Equally secure, puhleez.

  • by great throwdini ( 118430 ) on Friday June 21, 2002 @10:22AM (#3743379)

    Open source bugs can be fixed by anyone, but closed source bugs need to be fixed by vendors [...]

    Correction: open source bugs can be fixed by anyone with the requisite knowledge, talent, and time. That includes familiarity with the particular software package, the affected platforms, and the programming language, plus the energy and ability to ferret out the bug(s) and apply an appropriate fix. Then one has to factor in that package maintainers may or may not readily accept outside submissions of fixes (internal/peer review, plain bigotry, etc.), which may slow, hamper, or block the transmission of fixes. Add to this issues of trust, where a "fix" is offered by someone who lacks proper credentials (official or "street") to someone who has no clue how to evaluate the original issue or the proposed remedy.

    Granted, given the nature of open source software, the population of people who may repair a bug may be larger than for closed applications, but that doesn't conjure into being an army of people with the inclination or skills to do so, or an effective and trustworthy means of distributing said fixes.

    I favor the potential of open source to improve response time to bugs, but I don't think one can claim "anyone" can address issues in an appropriate manner. There's no reason a skillful and organized firm couldn't address security concerns in a closed application it offers with just as much celerity as the maintainers of an open one.

  • by PhilHibbs ( 4537 ) <snarks@gmail.com> on Friday June 21, 2002 @10:28AM (#3743419) Journal
    That is an excuse for programmers to have bad/lazy coding habits and not program with security in mind..
    I disagree entirely - I'm always looking for bugs in my software, because I know that there always will be bugs to find. If you mistakenly believe that perfection can be achieved, you might mistakenly believe that it has been.
  • by DeepDarkSky ( 111382 ) on Friday June 21, 2002 @10:34AM (#3743462)
    Closed source can have fewer bugs (security bugs are merely a special kind of bug) if the company doing the development is disciplined and focuses on the quality of the software (i.e. minimizing bugs): everyone is in the same organization, they all follow the same development standards and methodology, and there is good QA testing. That is, if the market, the marketing department, and the bottom line allow them time to do things correctly, which often is not the case.

    Open source software often depends on a somewhat less uniform and disciplined group (though individual contributors can often be more disciplined than their commercial counterparts). There is usually less formal organization. Here it really depends on the quality of the people working on the project.

    Because open source projects are less sensitive to the market and the bottom line (in general, apart from projects undertaken by commercial entities), they are less likely to have quality problems caused by lack of time.

    But to say that open source projects have fewer bugs because more eyes are looking at them is a pretty big assumption. Just because more eyes can look at something doesn't mean more eyes will. Bugs can stay in open source projects for years before someone finds a problem - in this case, I'd say it depends on how popular the project is and how attractive it is to people who will look at the code, look for problems, and understand what to look for.
    If anything, for a short-cycled, less popular piece of software, a commercial product can have better quality than an open source one, if the commercial developers are disciplined and dedicated. It is simply a matter of time.

  • by jonatha ( 204526 ) on Friday June 21, 2002 @10:35AM (#3743465)
    He's a well known and highly competent researcher in the security area (especially smartcards).

    He also has a penchant for self-promotion, so the "Marketing Science" suggestion is perhaps not too much of an insult...
  • by Mr_Silver ( 213637 ) on Friday June 21, 2002 @10:37AM (#3743478)
    4) Open source software can have bugs discovered due to the sheer number of people with access to the source.

    True, but just because they can doesn't mean that they do. One of the great myths about open source is that *anyone* can just dip in, discover a bug, and figure out how to fix it. That simply isn't true.

    I can find bugs in closed and open source software in exactly the same way: by using the product until something wrong or unexpected happens. But just because I have access to the source doesn't mean that I could actually fix the bug.

    If you look at projects such as Apache and Mozilla, they tend to have a number of people who know the code very, very well, a few who, given a couple of hours, might be able to work something out, and a very large number of people who, in the whole grand scale of things, are of no use at all in providing a fix to a bug.

    Contrast this with a large number of individuals in an organisation who know the code very well and work with it day in, day out.

    Finally, let us not forget that whenever people talk about security they often use Apache and IIS as their examples. Be aware that these are not really good examples: not all OSS projects are of Apache's quality, and not all closed projects are of IIS's quality.

    You've ended up picking one of the best of the OSS world versus one of the worst of the closed world. It would be a little like comparing Ford's best car with Vauxhall's worst. Just because the Ford won every time, does that mean all Fords are always better than all Vauxhalls?

    (I think Vauxhall is Opel in the US.)

  • by josh crawley ( 537561 ) on Friday June 21, 2002 @10:38AM (#3743495)
    "A good admin who is a pro will work hard to secure his servers and patch and look after them - a bad admin is a bad admin regardless of the OS"

    I don't agree with that. If the underlying OS is a secure, good OS, then your assertion holds. However, if you're using an insecure OS, say WinNT 4 SP6, then it doesn't matter how competent the admin is. He's limited by how good the OS itself is.

    Superb admin + superb OS = Superb integration/setup
    ok admin + superb OS = ok integration/setup
    bad admin + superb OS = Honeypot without "the stuff that catches you" (bad)

    Superb admin + craptacular OS = Mediocre integration/setup
    .....
  • by Trailer Trash ( 60756 ) on Friday June 21, 2002 @10:41AM (#3743512) Homepage
    I have been running an ISP now for two and a half years, using Linux and FreeBSD exclusively. In that time, the following items have cropped up that I had to fix:

    1. BIND hole (root exploit at the time; now it's chroot'd and running as named.named)
    2. ftpd (root exploit; I turned ftpd off)
    3. telnetd (root exploit; turned it off, too)
    4. OpenSSH (root exploit; a simple recompile of the new version)
    5. the current Apache bug, which even if it's exploitable is far from root or anything else useful

    That comes down to a problem to fix every six months or so. This is the real world. It doesn't matter a rat's ass to me what shows up on Bugtraq; what matters is whether someone is going to be able to hack my boxes. Most of the "bugs" aren't going to leave me open to remote exploit.

    Given that, it's ludicrous to say that my setup is no more secure than a Windows/IIS setup. IIS updates come out weekly, sometimes requiring reboots. I literally don't have the time that it would take to run Windows here.

    And IIS is probably the most-hacked piece of Windows. Want to compare it to Apache? Apache runs as nobody.nobody on most systems, or perhaps www.www. How about IIS? Hack Apache and you're an unprivileged user who'll have to re-hack the box from the inside. Hack IIS and you're the Administrator. Even if Apache were as exploitable as IIS, it still wouldn't be as big a deal.

    Michael
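    (The "runs as nobody" point above is worth spelling out. A minimal sketch, assuming a Unix-like system: do the one privileged operation, binding port 80, as root, then drop to an unprivileged user before touching any request data. UID/GID 65534 stands in for nobody.)

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <netinet/in.h>
        #include <sys/socket.h>

        int main(void)
        {
            int s = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in addr;

            memset(&addr, 0, sizeof addr);
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(80);        /* privileged port: needs root */

            if (s < 0 || bind(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
                perror("bind");
                return 1;
            }
            /* Root was needed only for the bind; shed it before parsing input. */
            if (setgid(65534) != 0 || setuid(65534) != 0) {
                perror("drop privileges");
                return 1;
            }
            listen(s, 16);
            /* Request handling goes here: a successful exploit now lands in an
             * unprivileged process, not in Administrator/root. */
            return 0;
        }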
  • by Uggy ( 99326 ) on Friday June 21, 2002 @10:43AM (#3743523) Homepage

    Look... why is it that highly paid movie editors, who pored over Spider-Man for months with millions of dollars behind them, couldn't find what the movie-going public did in the opening weekend? According to movie-mistakes.com:

    Fans have so far spotted 77 continuity errors, the most flaws identified in an opening weekend, according to Web site movie-mistakes.com.

    Jon Sandys, who runs the site, said the number of mistakes could be a symptom of the movie's popularity.

    "It's obviously possible that it's got a higher than average number of errors, but huge numbers of people are going to see it and that makes for lots of pairs of eyes checking every inch of the screen," he told the Independent newspaper today.

    Sound remarkably familiar to Eric Raymond's The Cathedral and the Bazaar? When Spider-Man was checked for bugs by the highly paid editors (the programming team) and none were found, did they not exist? Is the movie inherently more flawed because the bugs were found and reported by the viewing public (open source programmers)?


  • Re:Duh... (Score:3, Insightful)

    by Rogerborg ( 306625 ) on Friday June 21, 2002 @10:51AM (#3743602) Homepage
    • Security != number of bugs.

    Well said. Likewise

    • Time when a white hat hacker reports a security flaw in open source code != time when a black hat hacker notices and exploits a flaw in closed source code
    • Time of public disclosure != either of the above
    • Time when a closed source flaw is reported fixed != time when an open source flaw is demonstrably fixed
  • by pubjames ( 468013 ) on Friday June 21, 2002 @10:53AM (#3743616)
    A good admin who is a pro will work hard to secure his servers and patch and look after them - a bad admin is a bad admin regardless of the OS

    Many years ago, anyone who wanted to drive a car also had to be a mechanic. Things needed constant tweaking; cars would break down often and were difficult to start and keep running. These days, if someone had a car that kept breaking down, you wouldn't say to them "well, that's your fault; you're obviously not a good mechanic", you'd say "go out and buy yourself a better car, mate".

    Don't blame the administrators for the primitive state of current computer technology.
  • Pose a Hypothesis (Score:2, Insightful)

    by lucabrasi999 ( 585141 ) on Friday June 21, 2002 @10:57AM (#3743652) Journal
    Mr. Anderson's paper is only thirteen pages long, and a quick review shows extensive use of anecdotes and stories. As I learned in high school, the first step of the scientific method is to pose a hypothesis. It seems he has barely made it past this first step: his paper appears to be pretty thin on real research. He may have one example in TCPA, but what about all the other open systems? In the end, he may or may not be correct. But let's wait for his peers to have their say on his hypothesis.
  • Re:In Other News (Score:2, Insightful)

    by nil_null ( 412200 ) on Friday June 21, 2002 @11:01AM (#3743671) Homepage
    Look at Mozilla. It's a great project, but it's not as nice as IE by a long shot. Anyone using 1.1a on Linux will know that [e.g. me! while at the same time 0.9.9 works fine... ???]

    I know this is subjective, but I disagree and think IE isn't as nice as Mozilla. Mozilla 1.0 is smooth, much nicer than IE. I didn't think I'd care for tabbed browsing, pop-up disabling, image blocking, and themes, but I really have grown used to these things. The only complaint I have is that I haven't figured out a way to sort my bookmarks. I've used 1.1alpha and it is buggy, but it is an alpha release and shouldn't be the basis for comparison.

    Now the security of Mozilla is something that we don't know too much about (or do we?). We all know about IE's security...

    I have a theory: open source software is found to be more insecure early in its development, whereas closed source software is found to be more insecure later in its development. For example, Linux was considered a very insecure OS about six years ago; you didn't run it if you cared even a little about security (FreeBSD seemed to be the choice for a secure x86 *NIX). At that time, we didn't hear much about Windows NT's security (or maybe I wasn't paying attention).

    Things have changed: now people call Linux a secure OS because it has already exposed many of its vulnerabilities, while Windows is known as the insecure OS because its flaws are popping up all over the place. I'm not saying either is more secure (you can only make an educated guess), but the open source model allows vulnerabilities to be discovered a little more quickly and easily than closed source.
  • by Trepalium ( 109107 ) on Friday June 21, 2002 @11:11AM (#3743745)
    There's a difference. You're comparing a simple action -- driving a car -- to one that is not simple by any means -- administering a network. It's like saying that because everyone knows how to operate a television, they should be able to operate television broadcast equipment. Most people these days can operate a computer; does that mean they'll ever be qualified to manage a network of computers with interdependent services? Probably not.
  • Show me the Money! (Score:2, Insightful)

    by Airline_Sickness_Bag ( 111686 ) on Friday June 21, 2002 @11:15AM (#3743778)
    Closed source programs are typically not free. If a bug shows up, will you have to wait until the next release for the fix, and will you have to pay for it if you don't have maintenance?

    Not to sound cheap, but sometimes it can be a PITA to grab some funds and do the usual hoop-jumping to get a purchase order cut. And it can take a *long* time, depending on the approval channels.

    With Apache, I had our webserver updated within a few minutes of reading the announcement of the fix.

    -asb
  • by AnotherBlackHat ( 265897 ) on Friday June 21, 2002 @11:19AM (#3743805) Homepage
    Just because Microsoft doesn't publish their source code doesn't mean the source code is not available. Crackers aren't afraid to decompile code, or to use social engineering to obtain it. Non-disclosure agreements mean nothing to someone who is writing a virus. But they do stop the white hats.

    That asymmetry makes a big difference in the analysis. In open source, the white hats and black hats are on equal footing. In closed source, the black hats have an advantage somewhere between alpha and 0, depending on how hard it is to obtain the source. Historically, it's been proven over and over that obtaining the source is much easier than the original designers thought, which is why security through obscurity is treated with such derision in the crypto community.

    Most bugs are found by people running the code. Most security holes are found by people who are looking for them. Since black hats have no real difficulty obtaining the source, "closed" source gives them a huge advantage over their white hat counterparts.

    -- this is not a .sig
  • by Pentalon ( 466561 ) on Friday June 21, 2002 @12:07PM (#3744116)
    While the Mac Classic may not have been rooted, it was also not designed to provide 24/365 network services, multi-user protection, etc. Linux is generally designed as a Unix clone, which was generally designed to provide services to multiple users, either via shells or served some other way over the network (web server, database, thin client server, etc.). In order for Linux to offer this, it has to provide the ability for some people to have access and not others. Any time services like that are offered with selective access, security problems exist -- it's a natural part of trying to identify entities -- everything can be spoofed at some level. Hence the mantra, "Nothing is ever totally secure."

    The Mac Classic (as far as I know) does not offer a web server, network databases, remote shells, etc. Even if it did, the Mac OS it runs (9 or earlier) is not stable enough to provide these services reliably: there's no memory protection, and there's no way to log in remotely to fix problems. If those services were provided on the Mac Classic, you would have seen remote root exploits happening.

    Another way of putting it -- what can you do on a rooted Mac Classic? That's like somebody rooting my watch. There's nothing to do with my watch once it's been rooted, and in any case, my watch doesn't really offer the ability for remote control, much less a root environment versus a non-admin environment. Whoever's sitting at my watch (or whoever my watch is sitting on) has control, and there is no other option.

    Also, root exploits are not the only exploits. Crashing a computer remotely is an exploit too (one thing root exploits are used to achieve). Even if the Mac Classic does not offer a remote shell (as far as I know), how hard is it to crash remotely? I worked in a Macintosh computer lab where the Apples went down constantly because of bad network data. We sometimes couldn't put particular protocols on the Ethernet because OS 6/7 couldn't handle it. I suspect that if people had tried, it would not have been that hard. (I'm not anti-Apple -- I think almost every kind of computer has appropriate uses.)

    Since Mac OS X offers the aforementioned services, I suspect that if its use increases, we'll start to see remote exploits happening. This has nothing to do with it being Unix-based -- it's a result of what I said before: any system which offers services or grants selective access based on identification can and will be exploited.
  • Re:Duh... (Score:2, Insightful)

    by Alan ( 347 ) <arcterex@NOspAm.ufies.org> on Friday June 21, 2002 @12:48PM (#3744378) Homepage
    You mean silly things like embedding the HTTP parser DLL in the kernel to speed up page rendering? Yeah, that'd be silly :)
  • Re:MBTF My Ass (Score:3, Insightful)

    by swillden ( 191260 ) <shawn-ds@willden.org> on Friday June 21, 2002 @12:55PM (#3744421) Journal

    The Manufacturer's Estimated Time Before Failure is for physical goods - things that naturally wear out. Not software, which is at the very least a loose mathematical description of a repeatable process.

    Read the paper, there's a link to a PDF in the story.

    The paper does indeed use an MTBF-type model to analyze bugs, and there is a significant body of research which supports this approach. As the author says:

    Safety-critical software engineers have known for years that for a large, complex system to have a mean time before failure (MTBF) of (say) 100,000 hours, it must be subject to at least that many hours of testing [4]. This was first observed by Adams in 1985 in a study of the bug history of IBM mainframe operating systems [5], and has been confirmed by extensive empirical investigations since. The first theoretical model explaining it was published in 1996 by Bishop and Bloomfield [6].

    It's certainly not obvious that this model invented for physical goods applies to software, but there is substantial research to show that it does. If you can really demonstrate otherwise, I highly recommend that you get familiar with the literature and then publish your own research paper that explains why it is not an appropriate model. If you can propose a significantly better one, you'll have advanced the state of software engineering and you'll probably be well on your way to a cushy professorship somewhere.
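    (For a feel for the model: the quoted result says MTBF grows roughly in proportion to cumulative testing time, i.e. the residual failure rate decays like k/t. A toy calculation; the constant k is invented, and this is only the first-order shape of the result, not the paper's actual derivation.)

        /* Toy illustration: if the residual failure rate after t hours of
         * testing decays like k/t, the MTBF is about t/k, so an MTBF of
         * 100,000 hours needs on the order of 100,000 hours of testing.
         * k is a made-up constant. */
        #include <stdio.h>

        int main(void)
        {
            double k = 1.0;                  /* invented constant */
            double hours[] = {1e3, 1e4, 1e5};

            for (int i = 0; i < 3; i++) {
                double t = hours[i];
                printf("%8.0f test hours -> rate ~ %.1e failures/hour, MTBF ~ %8.0f hours\n",
                       t, k / t, t / k);
            }
            return 0;
        }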

