
With So Many Eyeballs, Is Open Source Security Better? (esecurityplanet.com) 209

Sean Michael Kerner, writing for eSecurity Planet: Back in 1999, Eric Raymond coined the term "Linus' Law," which stipulates that given enough eyeballs, all bugs are shallow. Linus' Law, named in honor of Linux creator Linus Torvalds, has for nearly two decades been used by some as a doctrine to explain why open source software should have better security. In recent years, open source projects and code have experienced multiple security issues, but does that mean Linus' Law isn't valid?

According to Dirk Hohndel, VP and Chief Open Source Officer at VMware, Linus' Law still works, but there are larger software development issues that impact both open source as well as closed source code that are of equal or greater importance. "I think that in every development model, security is always a challenge," Hohndel said. Hohndel said developers are typically motivated by innovation and figuring out how to make something work, and security isn't always the priority that it should be. "I think security is not something we should think of as an open source versus closed source concept, but as an industry," Hohndel said.

  • by Anonymous Coward on Tuesday July 10, 2018 @11:50AM (#56923588)

    A: Other people

    • Exactly! Security audits are not the same as known bugs, so they'll need some new law, some new motivating principle.

      The answer isn't yes or no, the answer is just, "You didn't understand Linus' Law."

      • by raymorris ( 2726007 ) on Tuesday July 10, 2018 @12:55PM (#56924094) Journal

        Exactly. ESR summed up Linus's thoughts as ".. all bugs are shallow", not "all bugs don't exist".

        Linus's exact words were:
        "Somebody finds the problem, and somebody else *understands* it."

        I'll share two examples from my own experience. Somebody found the Shellshock bug and suggested a fix. Over the next few hours, hundreds of people looked at it. Some saw that the suggested fix wouldn't quite cover this variation or that variation, so they tweaked it. Florian Weimer, from Red Hat, said those tweaks would never cover all the variations, and suggested an entirely different fix, one that went to the crux of the problem. Over the next few days, there was a lot of discussion. Eventually it became clear that Florian had been right. When he looked at the problem, he immediately understood it deeply. Well, it looked deep to us. To him, it was shallow.

        ""Somebody finds the problem, and somebody else *understands* it", Linus said. Stéphane Chazelas found shellshock, Florian understood it, fully, immediately.

        There was no need to release a patch to fix the patch for the patch as we often see from Microsoft, or as we've seen from Intel lately. With hundreds of people looking at it, somebody saw the right solution, easily.

        Here's another example from my personal experience with the Linux storage stack:
        https://slashdot.org/comments.... [slashdot.org]
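(For anyone who wants to poke at this themselves: below is a minimal probe sketch for the original CVE-2014-6271 behavior. The function name and printed strings are made up, and it assumes a local /bin/bash to test against; it's an illustration of why the bug was so easy to demonstrate, not a vulnerability scanner.)

```python
import subprocess

# Hypothetical probe, not a real tool: export a crafted function definition in
# the environment and see whether bash executes the trailing command while
# importing the function (the original CVE-2014-6271 behavior).
def bash_has_shellshock(bash_path="/bin/bash"):
    crafted_env = {"x": "() { :;}; echo VULNERABLE"}
    result = subprocess.run(
        [bash_path, "-c", "echo probe"],
        env=crafted_env,
        capture_output=True,
        text=True,
    )
    # A patched bash prints only "probe"; a vulnerable one also runs the
    # injected "echo VULNERABLE".
    return "VULNERABLE" in result.stdout

if __name__ == "__main__":
    print("vulnerable:", bash_has_shellshock())
```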

        • by Aighearach ( 97333 ) on Tuesday July 10, 2018 @01:22PM (#56924244)

          My experience was: I saw a bug report in some open source software and tried to fix it, and by the time I had a patch written, a better one had already been released upstream, and I was the last person to upgrade because I had been off trying to write a patch.

          There are so many freakin' eyeballs available that volunteers are mostly just jerks like me getting in the way trying to help! You have to have an inside line to the developers or security researchers even to learn about a bug early enough for anybody to notice, even if you understood it as soon as you heard about it.

          Even with writing new types of network servers: somebody announced they were abandoning a popular web middleware tool, so I started plugging away at an Apache module, but within a week somebody else released something similar enough to mine that I just stopped coding and used theirs. Sure, my architecture choices were better, but theirs weren't bad enough to amount to bugs, so nobody would ever notice or care.

          Programming is easy, the hard part is finding an unserved use case! And fixing known bugs is a pretty obvious use case.

          • by raymorris ( 2726007 ) on Tuesday July 10, 2018 @02:16PM (#56924508) Journal

            Your experiences remind me of something I learned about open source development. I now start by posting about what I intend to do. I've received these responses:

            John is working on that and expects to release it next week.

            No need to do all that, just use setting Xyx and skip the last part.

            That seemed like a good idea, but when we looked into it we noticed this trap.

            We decided we want Betaflight to focus on LOS. Your idea fits better with the iNav fork, which already does most of that.

            Hey that's a good idea. Can you also allow multiples? That would be useful for me. I can help test.

          • Also, "Linus' law" was written before things like automatic bug reporting that every OS does these days. In other words, software companies noticed how useful it was, and started copying it.
          • by rtb61 ( 674572 )

            So let's fix the original line: 'given enough competing eyeballs, all bugs are shallow'. You did not fail in your attempt to correct a bug; you played a role. Your solution was not as good as the other, but it still served as a comparison, and next time yours might be the better solution. Fixing bugs is about applying the best solution and having a range to choose from; whilst it does delay things, it still works to ensure the best solution at the time is used. So your effort was most definitely not wasted, just part of the

        • Somebody found the Shellshock bug and suggested a fix. Over the next few hours, hundreds of people looked at it. Some saw that the suggested fix wouldn't quite cover this variation or that variation, so they tweaked it. Florian Weimer, from Red Hat, said those tweaks would never cover all the variations, and suggested an entirely different fix, one that went to the crux of the problem.

          Shellshock. A bug that was shown to have existed since 1989, and was patched in 2014.

          I'm not sure this is the best example to use when discussing whether open source security is better than closed source security.

          • I think it's a perfect example of the difference between "bugs don't exist" and "the bug is shallow - to someone". Lots of people looked into it deeply and couldn't figure out a good way to fix it. Weimer immediately saw what needed to be done - it was shallow to him. With enough eyeballs, "the fix will be obvious to someone".

            Compare Intel's Meltdown patches. They released a patch and said everyone should use it. Then two or three weeks later: "oh shit, don't install our patch! We'll make a new patch soon."

          • Somebody found the Shellshock bug and suggested a fix. Over the next few hours, hundreds of people looked at it. Some saw that the suggested fix wouldn't quite cover this variation or that variation, so they tweaked it. Florian Weimer, from Red Hat, said those tweaks would never cover all the variations, and suggested an entirely different fix, one that went to the crux of the problem.

            Shellshock. A bug that was shown to have existed since 1989, and was patched in 2014.

            I'm not sure this is the best example to use when discussing whether open source security is better than closed source security.

            On the closed source side of things, lots of bugs that impact Windows 10 will also impact Windows XP (with patches available to customers with agreements, or on XP Embedded, or POSReady). Back when older operating systems were supported, lots of bugs impacted XP, 2000, NT4, and 9x. So it's likely that many of these current bugs also impact ancient versions of Windows.

  • More eyes (Score:4, Interesting)

    by Bengie ( 1121981 ) on Tuesday July 10, 2018 @11:51AM (#56923598)
    More is better when it comes to bugs that are mostly obvious to the typical person, but it doesn't benefit complex code that whooshes over the head of 99.9% of people. My co-workers tell me I have a keen attention to detail. Some code will get 3-5 people looking at it and testing it over a period of a month or two, then they'll ask me to take a look. Many times I will find several bugs in 10-15 minutes just by reading the code; then I'll have questions about the code and ask them to run some tests that turn up some more bugs.

    I do not trust normal humans to anything technical right. I would prefer languages that work better with static analysis, more free tools to provide quality static analysis, and more fuzz testing.
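As a rough illustration of the fuzz-testing point, here is a toy harness; parse_config() is a made-up stand-in for whatever code you actually want to exercise, and the harness only checks that the target never raises.

```python
import random
import string

# Made-up unit under test: a "key=value" per-line config parser.
def parse_config(text: str) -> dict:
    out = {}
    for line in text.splitlines():
        key, _, value = line.partition("=")
        out[key.strip()] = value.strip()
    return out

def fuzz(iterations: int = 10_000, seed: int = 0) -> None:
    rng = random.Random(seed)
    for _ in range(iterations):
        # Throw random printable garbage of random length at the parser.
        blob = "".join(rng.choice(string.printable)
                       for _ in range(rng.randint(0, 200)))
        try:
            parse_config(blob)  # only checking that it never crashes
        except Exception as exc:
            print("crash on input", repr(blob), "->", repr(exc))

if __name__ == "__main__":
    fuzz()
```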
    • I do not trust normal humans to anything technical right.

      You left out a word in the above sentence. Which just confirms the intent of the above sentence....

    • Re:More eyes (Score:5, Informative)

      by bluefoxlucid ( 723572 ) on Tuesday July 10, 2018 @12:09PM (#56923714) Homepage Journal

      This is why some of us are insistent that the decades of experience which gave rise to design patterns actually means something. Folks often counter-argue that good programmers "know what their code does" and so a mess of unstructured spaghetti code is fine "as long as it works"; they don't believe in engineering for containment of bugs and their impact.

      When you build your code to be a set of tools with defined behaviors and interfaces, you encapsulate bugs. An error in one piece of code creates a defect in the interface, which you correct in one place. This seems wholly imaginary until you realize that un-breaking a bug in many flat programs causes unexpected behavior due to other code relying on the defective logic's incorrect impact on some program state.

      In an ideal world, none of this would matter. We do all this stuff because it matters in the real world.
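A minimal sketch of the "correct it in one place" idea; the sensor and its callers are invented names, not from any real project.

```python
# Callers go through one defined interface, so a bug fix in the implementation
# reaches every caller at once and nobody codes around the broken behavior.
class TemperatureSensor:
    """Defined interface: read() always returns degrees Celsius."""

    def __init__(self, raw_source):
        self._raw_source = raw_source  # callable returning raw ADC counts

    def read(self) -> float:
        raw = self._raw_source()
        # If this scaling were wrong, correcting it here fixes every caller;
        # nobody re-implements the conversion elsewhere.
        return raw * 0.125 - 40.0


def fan_should_run(sensor: TemperatureSensor) -> bool:
    return sensor.read() > 30.0


def log_line(sensor: TemperatureSensor) -> str:
    return f"temp={sensor.read():.1f}C"


if __name__ == "__main__":
    sensor = TemperatureSensor(lambda: 560)  # fake ADC reading
    print(fan_should_run(sensor), log_line(sensor))
```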

      • by Tablizer ( 95088 )

        decades of experience which gave rise to design patterns actually means something.

        If you mean Gang-of-Four-style (GOF) patterns, it mostly fizzled because, first, it was not clear when to use which pattern, and second, incorrect assumptions were often made about how requirements would change in the future. GOF had a shitty crystal ball, time finally revealed. Nobody road-tested that fad long enough, and fanboys dived in face first.

        • People made larger architectural patterns like MVC and declared them a replacement, in the same way someone might declare "Victorian" a replacement for joist-and-subfloor floors. I've found the GoF patterns have served well where people used them, and people have found them unclear when they didn't understand architecture.

          Can you provide examples of where, how, and why GoF patterns failed?

          • Re:More eyes (Score:4, Interesting)

            by Let's All Be Chinese ( 2654985 ) on Tuesday July 10, 2018 @06:15PM (#56925600)

            It's not that the patterns themselves have failed, just that their use has fizzled due to failure to live up to the claimed benefits. I've never actually read the book, but I took a gander at the "antipatterns" book (the only thing in that category available at the library at the time) and it immediately struck me as "middle management trying to program", or something in a similar vein.

            Now, there's indubitably a lot of "code grinders" Out There for whom this sort of thing is actually a boon. The best and brightest among us tend to scoff at such people, or more specifically at their stumbling and crutches, with all sorts of plausible-sounding but not actually helpful counters like "good people know what their code does", conveniently forgetting that most programmers aren't very good at all. So perhaps "patterns" are a useful crutch to keeping a lid on the damage from the inevitable Dunning-Kruger effects in business programming. I don't know, maybe.

            It was only much later that I found this writeup [fysh.org], and my take-away is that this sort of thing, including, I think, touting lists of "patterns" as fix-alls for programming trouble, is an attempt at turning an inherently mapping-mode activity into something suitable for packers to use. The better approach is to knock such people into mapping mode, but that's much harder to sustain. And could well count as cruel and unusual.

            • The best and brightest among us tend to scoff at such people, or more specifically at their stumbling and crutches, with all sorts of plausible-sounding but not actually helpful counters like "good people know what their code does", conveniently forgetting that most programmers aren't very good at all. So perhaps "patterns" are a useful crutch to keeping a lid on the damage from the inevitable Dunning-Kruger effects in business programming. I don't know, maybe.

              Self-referential. You're taking the same approach, but softer: "some of us are smart enough to not need this."

              Some of us are engineers and can build a house without adhering to national and international building and electrical standards and codes. That house will be structurally-sound. We'll understand how each part works and how to maintain it or build additions so it doesn't crumble.

              That doesn't help every other professional who touches our stuff.

              It was only much later that I found this writeup

              Essentially, yes. The thing he should also noti

        • I will add to your post:
          GOF patterns fail because they are not a good fit for most situations. Reality is more nuanced than a few patterns, so to be effective you need many more patterns than are in the book. Instead of memorizing patterns, it becomes easier to analyze the situation and come up with something that fits it than to try to jam it into a pre-existing pattern.
      • un-breaking a bug in many flat programs causes unexpected behavior due to other code relying on the defective logic's incorrect impact on some program state.

        To be fair you have the same problem with encapsulation and output/results too. (A semi-infamous example was a fix to a Microsoft Office API to return data as it was spec'd in the docs, not as it originally did and as many people had coded to. Ironically, rather than change the docs to match the code, MS did the opposite.) The gist is the same though: if you encapsulate your code into modules (i.e. Lego-block style), any fixes you make are automatically picked up by the users with no action needed on their part.
        C

        • To be fair you have the same problem with encapsulation and output/results too

          Only at the interface. People will write code that does things to things, and then reuse calls to that code to incrementally do things to things instead of extending the interface on an object to say you can now do a thing to a thing (or creating a filter, or whatever else they could do to achieve the same). Then, when you modify those innards, things break.

          Sometimes, two pieces of those innards are glued together in such a

  • When software doesn't have visible source code, the legitimate users have no assurances regarding what it's doing, other than those imposed by the operating system (which they might not have complete source for either).

    However, the bad guys still take the trouble to disassemble the code and find its vulnerabilities.

    With many eyes, you still might not find all bugs, but you can look, and can do so without the unreasonable investment of disassembling the code and reading the disassembly - which is not like reading the real source code.

    The larger issue is that we need publicly-disclosed source code for some things, to assure the public good, whether it is proprietary or Open Source. For example, the emission control code in automobiles, which, it turns out, multiple manufacturers have used to commit fraud.

    • by hcs_$reboot ( 1536101 ) on Tuesday July 10, 2018 @11:59AM (#56923648)
      I tend to agree. On the other hand, Open Source has "so many eyeballs" that most people trust it blindly, which can be dangerous.
      • Very true, I only read open source source code if there is a bug I need to maneuver around or fix.
        And most code is so bad, you don't really want to read it because of the nightmares it induces, e.g. looking at https://lucene.apache.org/ [apache.org]

      • If you are thinking of bugs like Heartbleed, there are also economic issues. OpenSSL was issued under a gift-style license. Big companies that were making billions on desktop software used it, and almost never returned either work or money to the project. This one guy, Ben, carried most of the load on his own personal time.

        Now, this is not something the OpenSSL guys might ever have considered, and I am not representing them. But what if OpenSSL had been dual-licensed? All the Free Software folks would have had it for free, and all of the commercial folks would have had to pay a reasonable fee. In fact everybody would be paying something, either by making more great Free Software or by paying money. There might have been fewer commercial users, but there might also have been an income stream for Ben or other developers, and they might have been able to devote more time to finding bugs. So, there might never have been a Heartbleed.

        • This sounds good to me at first, but then I also think back to '99, when I was using a Free SSH version for everything, very happily, and then I got a client that required access only using a certain version of commercial ssh.

          I think this comes down to the whole Free Software vs Open Source split; when there is a split between proprietary and free, then some people will be forced to use a particular one, but if there is an open one such that everybody can use it, then everybody might have compatibility and t

        • by Solandri ( 704621 ) on Tuesday July 10, 2018 @01:51PM (#56924374)
          Whether the eyeballs are paid is irrelevant. One of the interesting findings from the investigation of the Space Shuttle Challenger disaster was that NASA triple-checked components. But it turned out the three (paid) inspectors often assumed the other two were doing their job, and regularly skipped inspections on more-difficult-to-access parts. Since all three were biased to skip the same parts, those parts frequently went uninspected before launch. So in that particular case, having more eyeballs actually led to less security than having a single inspector who knew the entire burden of security was resting on his/her shoulders.

          People are lazy.
    • The larger issue is that we need publicly-disclosed source code for some things, to assure the public good, whether it is proprietary or Open Source.

      There is a famous saying with elections that it's not the people who vote that count, it's the people who count the votes. Similarly having some source code to audit is great but it's meaningless if the company doesn't actually utilize that exact code or finds some sneaky way to circumvent it.

      That said I do agree with your point.

      • Similarly having some source code to audit is great but it's meaningless if the company doesn't actually utilize that exact code or finds some sneaky way to circumvent it.

        Good point. You need assurance that the code in the device is the code that you see.

        This is a way big problem for government. There are many integrated circuits in our fighter jets, etc. How do we assure that what is inside them is what we think? Thus, we have defense assurance programs that follow the production of a chip from design all

    • I disagree that there are no assurances. At the enterprise level, I'm part of the IT Risk team and we demand static and dynamic testing of the code. While we often don't get the full report, we do get summaries. As part of the risk process, we also look at changes in the number of fixed issues, the number of new issues, and severity. Granted, there is nothing to be done about Windows, and big vendors tend to be the worst at providing reasonable assurances.

      With Open Source it is more often blindly trusted becau

      • Sure, what you do is what you can do, for due diligence, if you are using proprietary software. But you are of course putting trust in those auditors. And they are a for-profit business, and it's in their interest to do a good-enough job while not spending too much time.

        One of the things they do (and most consulting companies bigger than one person do, including law firms) is sell highly-qualified people, and then have lower-qualified people actually do the work, under the "supervision" of the more qualifie

    • by jeremyp ( 130771 )

      Even if you do have visible source code, the legitimate users have no assurances regarding what it is doing. Surely the train wreck that is OpenSSL should tell you that.

      Most legitimate users wouldn't understand the code even if it were exceptionally simple and clear because, well, they don't understand code. Even relatively competent programmers can have problems with some code bases. For example, I'm sure the guy who introduced the Debian-SSL bug was considered to be a pretty good coder and yet he still screwed

    • by dfghjk ( 711126 )

      "For example the emission control code in automobiles, which it turns out multiple manufacturers have defrauded."

      Nonsense. Without public disclosure of the entire hardware platform, publicly-disclosed source code would be meaningless. Furthermore, there would be no reward to programmers for reviewing this code, even if they could, so there would be no benefit to public disclosure of such code. Auto manufacturers are not going to fully expose their engineering, nor would it be reasonable to expect them to.

      As f

    • However, the bad guys still take the trouble to disassemble the code and find its vulnerabilities.

      I would flip it. The bad guys have a huge incentive to invest the time and effort to audit code for security bugs. However, open source and closed source projects rarely have a comparably large incentive.

      From a blank slate both Open and Closed source applications are at a disadvantage. Some closed source applications have an incentive to have lots of eyes audit the code. Some open source applications have an incentive to have lots of eyes audit the code.

      So I would call the "Eyes" argument a wash between open and

    • We need software freedom for all published programs. Merely being able to see some source code doesn't grant anyone the right to compile that code into an executable they can share (including distribute commercially), install, and run. So, source code for one's own vehicle under a free software license is needed. It's quite easy to maintain a code base where the malware isn't listed but is present in the executables people receive and run while publishing source code that has no malware in it and is license

  • It depends. FOSS software often lacks QA, unit testing, code static/dynamic analysis and regression testing. Compared to commercial software with a similarly loose QA process, I would say yes, more eyeballs make it better. Compared to commercial software with a strict QA process, no.
    • I've been involved in commercial software development for almost 20 years. I have yet to see any small vendor actually implement everything you list.

      Usually, they'll have a few components with some simple impossible-to-fail tests, and say they do "full unit testing". They'll have a "QA tester" rubber-stamp a release because it isn't as buggy as the last one. They'll run code through Valgrind once, ignore the results, and take credit for using analysis tools. Then the development execs go out to a seminar on
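To make the "impossible-to-fail test" jab concrete, here is a small sketch with a hypothetical apply_discount() as the unit under test; the first test passes as long as the file imports, while the second pair actually pins down behavior.

```python
import unittest

# Hypothetical unit under test.
def apply_discount(price: float, percent: float) -> float:
    return price - price * percent / 100

class RubberStampTests(unittest.TestCase):
    """Passes as long as the module imports; exercises no behavior at all."""
    def test_function_exists(self):
        self.assertTrue(callable(apply_discount))

class RealTests(unittest.TestCase):
    """Actually checks the arithmetic."""
    def test_discount_math(self):
        self.assertAlmostEqual(apply_discount(200, 25), 150)

    def test_zero_percent_is_identity(self):
        self.assertEqual(apply_discount(99.5, 0), 99.5)

if __name__ == "__main__":
    unittest.main()
```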

    • FOSS software often lacks QA, unit testing, code static/dynamic analysis and regression testing.

      So does commercial software.

  • by sjbe ( 173966 ) on Tuesday July 10, 2018 @11:54AM (#56923612)

    Back in 1999, Eric Raymond coined the term "Linus' Law," which stipulates that given enough eyeballs, all bugs are shallow.

    That's only true if those eyeballs are actually looking for bugs and organized enough to do something about them. Even then it's more like a principle than an actual truth. Some bugs are much harder to find than others no matter how many people are looking.

    • Some bugs are harder to find than others, sure, but that's just a mealy-mouthed platitude.

      The point is, on a small team, some bugs are so hard that they don't even get fixed on the same day they're found. It could take days or even weeks, historically. Even when they were really working on it.

      Open Source hasn't had a hard bug since the 90s. Every bug, no matter how hard, is fixed within hours of there being public knowledge that it exists and hasn't been fixed yet. Getting package manager to apply the patc

    • It says "... bugs are shallow", not "bugs don't exist".

      See:
      https://it.slashdot.org/commen... [slashdot.org]

  • by swillden ( 191260 ) <shawn-ds@willden.org> on Tuesday July 10, 2018 @12:00PM (#56923654) Journal

    Linus is right... but note that he talked about eyeballs, not open vs closed source. If an open source project is obscure, or if the code is too hard to read, it may not get any scrutiny. On the other hand, closed source code from companies that care about security enough to pay security firms to scrutinize their code, or to hire security-knowledgeable developers and have them look at it carefully, can get a lot of eyeballs.

    In the normal course of events, though, open source code almost always gets more attention than closed source, just because anyone who wants to look, can.

    • Re:Linus is right (Score:5, Insightful)

      by Gregory Eschbacher ( 2878609 ) on Tuesday July 10, 2018 @12:04PM (#56923690)

      This is exactly right. It's not about open vs closed source, but eyeballs. For instance, take the HeartBleed / OpenSSL bugs from a few years ago. OpenSSL is used extremely often and all over the place, including by Google, Facebook, etc. But it had vulnerabilities in it that had existed for years and years, and it was because OpenSSL was really only being maintained by a handful of people.

      But I think even more so, some organizations just aren't dedicating people to finding problems. You can still exploit Android, even though it's powered by Google and Linux. Intel has issues with its processor designs. Apple had a bug a year or so ago where anyone could log in as root. And these are the companies that supposedly have the best developers and essentially unlimited resources.

      • Re:Linus is right (Score:4, Insightful)

        by swillden ( 191260 ) <shawn-ds@willden.org> on Tuesday July 10, 2018 @01:12PM (#56924202) Journal

        You can still exploit Android

        Actually, it's pretty darned hard to do that on an up-to-date device (e.g. Pixel). There will always be vulnerabilities, but SELinux and other efforts have made Android a pretty hard target lately. Except, of course, for the fact that many device makers don't update.

        And these are the companies that supposedly have the best developers and essentially unlimited resources.

        Regarding Google, I think the developers are generally quite good, but resources are far from unlimited. I work on the Android platform security team, and we're always overstretched. That's partly because we set ambitious goals, but mostly because it's just really hard to find people. Part of that is location -- we currently only hire in Mountain View, Kirkland and London, so we can only hire people willing to live in one of those locations -- but most of it is because good software engineers who also know security are just hard to find.

        • > but most of it is because good software engineers who also know security are just hard to find.

          So, why doesn't Google just hire some software engineers and put them through a training camp/apprenticeship? This excuse of every company is getting tired. Are you willing to invest in your work-force or not?

          • > but most of it is because good software engineers who also know security are just hard to find.

            So, why doesn't Google just hire some software engineers and put them through a training camp/apprenticeship? This excuse of every company is getting tired. Are you willing to invest in your work-force or not?

            We do quite a bit of that (apprenticeship, I mean; training camps aren't effective). But training new people takes time and energy from the existing staff, which reduces the work they can get done. Even for experienced hires, it takes a year or more before they're really productive; add another two or three years for those who aren't.

  • It's not a matter of eyes, it's a matter of eyes that are actually looking. Just because a million people use OpenSSH every day doesn't mean that it's more secure; unless someone sits down and audits it, it might as well be closed source.

    The difference is that if you WANT to audit it, you CAN. Without first reading more NDAs than you'll eventually get to read code.

    • Just because a million people use OpenSSH every day doesn't mean that it's more secure; unless someone sits down and audits it, it might as well be closed source.

      This is a great distinction that is often overlooked. A million people using a piece of software doesn't mean a million people hunting for security flaws in the code. A large percentage of the users will be the "download and use it" type. Even if you put the source code on their screen and highlighted the section of code with the security flaw fo

  • by tero ( 39203 ) on Tuesday July 10, 2018 @12:11PM (#56923728)

    While the "many eyes" can be theoretically a better model, practice has shown very few actually look at Open Source software with security in mind.

    Even critically important projects like OpenSSL.

    Security review takes time. Time is money (even in the OSS world). Security audits require money. They don't get done unless a commercial entity (using OSS) commissions them.

    The "many eyes" is a really bad security model in practice.

    • At home, I have used Linux - first Redhat, then Fedora - since about 1999. I have never used any sort of virus/malware scanning software. I don't know how common this is.
    • So the better model is security by obscurity (closed source)?! How many people can review code that is not open? Approximately zero.
  • by bravecanadian ( 638315 ) on Tuesday July 10, 2018 @12:22PM (#56923800)

    The fact is that there aren't many eyes on most parts of the code, and of the ones that are, very few of them are qualified to find the problems.

    • With Open Source, there's no guarantee that anyone will find the bugs in the code, but with closed source, I guarantee nobody but the company's programmers will be finding and fixing the bugs.

  • I use some software. It's in Debian (and Ubuntu) but hasn't been updated in years. It's not the sort of thing that needs to be updated. But I've managed to find and fix 6 or 7 bugs. Serious bugs. Coredump-now type bugs. I've stopped reporting them back to Debian because nobody is looking at bug reports. There is no upstream. I've reported them back to a fork I found on SourceForge, but they asked me to rewrite my commit message without Oxford commas. Seriously.

    I'm glad it's open source, I wouldn't be a

  • See WordPress. Abysmal architecture, programmed by monkeys on crack, pretty good security. The last critical gap was closed after only 8000 websites had been infected, something like 0.0002% of the install base or something.
    Pretty neat.

    I bet that gap in that obscenely expensive Oracle Java web application server thingie isn't found half as fast, let alone fixed at such speed.

    • Nobody is saying that PHP is a nice language. The point of it was that it allowed less-skilled web designers to write software for web presentation. People who never learned the fundamental concepts of computer security. The security problems were a natural result.

      We have much better languages today, and I sure hope people use them. But I can't make them do so.

      • First of all: Hey, Gang, check it out! Bruce Perens replied to me on slashdot! Yeah man, I started a thread that was joined by Bruce Perens! Awesome! ... Ok, sorry, had to get that out of my system ...

        I get PHP pretty much the way you pointed out. "PHP's badness is its advantage", I've argued before. There's a fresh Lerdorf talk on YouTube where he himself says it pretty clearly: "PHP runs shitty code very, very well." ... Big upside, that is. The downside is, of course, that PHP is *so* easy to do stuff wit

        • :-)

          Some of the work I do is painful, like arguing over Open Source licensing so that it will continue to be fair for everyone. So, it's nice to hear from people who say I've helped them. I think about them when it gets difficult. Thanks!

          Someone else wrote that book. I was the series editor, which was mostly setting policy, doing PR, and looking over book proposals. That was the first Open Publication book series, and preceded Creative Commons.

          If you learned a language from a book, you can learn another 30.

    • See WordPress...... pretty good security.

      No haha.

      The last critical gap was closed after only 8000 websites had been infected, something like 0.0002% of the install base or something.

      If they had been using parameterized queries like the rest of the world, this would have been zero.
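For readers who haven't seen the difference spelled out, here is a minimal sketch using Python's built-in sqlite3 module; the table, the data, and the injection string are all made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "nobody' OR '1'='1"  # classic injection attempt

# Vulnerable: string concatenation lets the input rewrite the query.
leaked = conn.execute(
    "SELECT email FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Parameterized: the driver treats the input strictly as data.
safe = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchall()

print(leaked)  # [('alice@example.com',)] -- the injection worked
print(safe)    # [] -- no user is literally named "nobody' OR '1'='1"
```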

  • There's a huge collective blind spot in the programming community as a whole... they somehow believe that Ambient Authority [wikipedia.org] is an acceptable basis for writing applications in a world of persistent networking and mobile code.
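A tiny sketch of the distinction that comment is pointing at, with made-up function names and a throwaway file: under ambient authority the callee can open anything the process can, while in a capability style the caller hands over exactly one already-opened resource.

```python
from pathlib import Path

def render_report_ambient(path):
    # Ambient authority: nothing stops this code from being handed (or
    # deciding on) some sensitive path instead of the report it was meant
    # to read; it can open anything the process can.
    with open(path) as f:
        return f.read().upper()

def render_report_capability(file_obj):
    # Capability style: the caller grants exactly one already-opened
    # resource; this code has no authority to open anything else.
    return file_obj.read().upper()

if __name__ == "__main__":
    Path("report.txt").write_text("quarterly numbers look fine\n")
    with open("report.txt") as f:      # authority granted explicitly, once
        print(render_report_capability(f))
```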

  • by EndlessNameless ( 673105 ) on Tuesday July 10, 2018 @12:42PM (#56923990)

    Any bug can be fixed after discovery because all eyes will be on it. (Or, at least, the eyes of most of the experts for that particular system.)

    A huge part of security is being proactive throughout development---in design, code submission, review, and auditing.

    Private companies can hire people to fill these roles as needed, but most open source projects rely on the security consciousness of individual contributors. Since security is often boring or counterproductive to the development of new features, I can easily see security being less of a priority for some developers.

    Good security requires many eyes throughout the development process. It needs ongoing oversight to ensure that every module and code submission is consistent with the security model for the project.

    BSD has one seriously security-conscious leader, but that is not typical. Maybe Red Hat will pay someone to oversee the security of the Linux kernel or audit its code, but most projects won't have that kind of backing. They'll rely on luck of the draw---maybe you attract someone with security expertise, or maybe you don't.

    Without a dedicated security focus, projects should go through hardening phases where they deliberately welcome security experts and design/redesign as necessary for security. Even if it means a slowdown or moratorium on new features. Security takes time and effort, and the only solution is to put more eyes on it.

  • Yes, the bugs can be shallow, but the eyes have to actually look at the code. Unfortunately everyone assumes another has done it or will. Guess what, they haven't and won't.
    Do you see anyone debloating the kernel? I looked into it once (video drivers primarily), but there was so much code duplication scattered in so many places that it would have required more time than I had and more importantly lots of interaction with the lkml...
    Maybe with a GSOC project or some other sponsorship, but until then t
  • by Tailhook ( 98486 )

    Open Source is better, but not because there is some vast horde of people auditing the code for free. There isn't. Linus wasn't talking about auditing code. Linus wasn't even necessarily discussing security. That quote doesn't claim or even imply that some benevolent group of open source programmers are scouring code for security flaws.

    Linus was simply observing that even difficult bugs are quickly understood when enough people look at the problem. The known problem; not latent flaws in some large bo

  • by holophrastic ( 221104 ) on Tuesday July 10, 2018 @01:07PM (#56924176)

    I have trouble with this industry-concept that software security should be put first -- it's an impossible business objective.

    Think about how many industries focus on security. Banks, sure. Money transport, of course. Prisons and jails.

    My air conditioner broke last week. It needed a new capacitor. It was a 5-minute $0 fix. Walk between the houses, open the compartment, pull out the breaker.

    Now imagine your air conditioner, with the software industry's concept of security. Can you? How many check-points for a repairman to get to my air conditioner? How much added hardware? How much added expense in dollars and time? What stops someone from throwing a paint-filled balloon from fifty-feet away?

    Security, when lives aren't at risk, is just so rarely worth it.

    And when lives are at risk? Maybe you have a lock on your front door. Maybe it's a deadbolt. Maybe it's a really fancy locking mechanism, super-secure. Your front door is likely right next to a glass window. Congrats on the lock. Enjoy the bars on your windows.

    And what stops your car, at highway speeds, from hitting another car at highway speeds? Right, a thin strip of white paint. Excellent. Sometimes the paint is yellow, even better.

    We've never focused on security. We simply cannot afford to.

    Instead, we talk about insurance, and criminal law enforcement.

    So that's what I'm suggesting for software. Law enforcement. Deterrents.

    Anything else, well, is just uncivilized.

    • My father used to tell me that most crimes are crimes of opportunity, which I've largely found to be true in my experience. Someone leaves their car unlocked with valuables visible. Someone leaves their phone at the bar as they go to use the restroom. That sort of thing. In the real world, the criminals you're most likely to encounter are common ones with unsophisticated methods, so simple preventative steps coupled with the effective deterrents you mentioned are generally more than adequate to prevent any

    • We already have laws against this stuff, but we can't hire enough people to staff the agencies that are supposed to handle enforcement. Clearly, your solution has already been tried, and it has failed.

    • I have trouble with this industry-concept that software security should be put first -- it's an impossible business objective.

      It would be kind of cool if people could follow basic security principles, like, "Don't use telnetd" or "don't release software with default passwords." You don't need perfect security, but think about it a little, at least.
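As a toy illustration of the "no default passwords" principle, here is a sketch of a hypothetical service that refuses to start on the shipped placeholder; the environment variable name and messages are invented.

```python
import os
import secrets
import sys

# Hypothetical service startup check: never run with the shipped placeholder.
PLACEHOLDER = "changeme"

def load_admin_password() -> str:
    password = os.environ.get("ADMIN_PASSWORD", PLACEHOLDER)
    if password == PLACEHOLDER:
        # Refuse to start and suggest a freshly generated secret instead of
        # silently accepting the default.
        suggestion = secrets.token_urlsafe(16)
        sys.exit(f"refusing to start: set ADMIN_PASSWORD (e.g. {suggestion})")
    return password

if __name__ == "__main__":
    print("admin password loaded:", bool(load_admin_password()))
```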

  • by ytene ( 4376651 ) on Tuesday July 10, 2018 @01:56PM (#56924406)
    I think there are a couple of aspects to this that might be a bit off the beaten track of threads posted so far...

    The first is that we need to think about like-for-like comparisons. When these observations were initially made, 20 years ago, how many projects [either closed source or open source] were using automated source code scanning solutions? i.e. technology specifically written to parse code for flaws?

    In other words, 20 years ago the "landscape" was likely to be close to "even". Today, however, many commercial software development shops use vulnerability scanning solutions and/or routinely conduct binary scans of resultant code. Today, many commercial development shops use automated test harnesses for load testing and regression testing. It is fantastic that they do. They do this because they can afford to and because the rapid advancement of this sort of technology has made it possible. Twenty years ago? Not so much.

    This would suggest that we might start to see a difference in post-production bugs between Open Source and Commercial/Closed Source software where the development environments differ between these two operating models.

    The second observation would be far more tenuous. In the same 20-year period, we have seen many different programming languages "come and go". Obviously the more established platforms (COBOL, C, C++, Java) continue to be popular, but this, too, brings differences in bug reports. The longer a language has been in existence, the more mature development becomes, the more libraries become available, and the more skilled developers become at preventing even the more obscure bugs.

    I don't have access to the data [and wouldn't know where to look for it, tbh] but I think it would be easy to graph out "average number of vulnerabilities per thousand lines of code" - i.e. defect density - over a 5, 10 or even 20-year period of language use. It would be reassuring to see if that trended down - but even more interesting [and worrying] if it didn't.

    A while back I went looking to see if there were any "big rules" about different programming languages being more or less prone to vulnerabilities than others. I had read [maybe 25 years ago] that Ada was once thought of as a language with very few bugs. The theory was that its compiler was so strict that if you could get your code to compile, it would probably run just fine. I was really surprised to learn that although there had been a few studies, there didn't seem to be any emergent evidence to suggest that there were differences between languages. I was surprised because my ignorance had suggested to me that helpful and/or heavily typed languages would be less bug-prone than more relaxed ones - i.e. that Java would have a lower defect density than C. Apparently [and I'd be happy for anyone to correct me] the evidence does not support this.

    Sorry that this is trending away from the original question, but I think that context is absolutely crucial to get to a good answer to the original post - and that we would find that, like forecasting the weather, it would be pretty hard to do...
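Picking up the automated source-code scanning point a few paragraphs up: the sketch below is a toy static checker built on Python's ast module that flags direct calls to eval() and exec(). Real scanners track data flow and apply hundreds of rules; this only shows the basic shape of parsing code and looking for patterns.

```python
import ast
import sys

# Toy static checker: parse each Python file given on the command line and
# report direct calls to a couple of notoriously risky builtins.
RISKY_CALLS = {"eval", "exec"}

def scan(path):
    findings = []
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=path)
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            findings.append(f"{path}:{node.lineno}: call to {node.func.id}()")
    return findings

if __name__ == "__main__":
    for filename in sys.argv[1:]:
        for finding in scan(filename):
            print(finding)
```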
    • Today, however, many commercial software development shops use vulnerability scanning solutions and/or routinely conduct binary scans of resultant code.

      These vulnerability scans are about as effective as using Lint (which is effective). AI vulnerability scanners are in research, but they have a long way to go...

      • by ytene ( 4376651 )
        Agreed... and sorry if my earlier comment was too abstract, but I was thinking of this in the context of, specifically, "many eyes make all bugs shallow" and the fact that, of course, any form of vulnerability scanner [static or dynamic] is the equivalent of "many eyes".

        The question for me, though, would be to determine whether commercial software shops, by virtue of having a budget to spend on this sort of thing, can now "buy" better bug identification capabilities.

        I'm starting to think there are no
        • I was about to give an example of a company that spends a lot on their team to find security bugs, Android, but then I realized Android is open source. So I don't know.
  • by SwashbucklingCowboy ( 727629 ) on Tuesday July 10, 2018 @02:39PM (#56924614)

    It's pure BS. Yeah, you *can* look at the code, but how many do? And how many have the requisite knowledge to recognize it when something is wrong?

    As noted on Slashdot over 10 years ago (https://it.slashdot.org/story/08/05/11/1339228/the-25-year-old-bsd-bug) it took 25 years to fix a bug in some commonly used open source. My understanding is that the Samba team even coded around the bug instead of looking at the code and getting it fixed.

    Is open source security better than closed source? Sometimes yes; sometimes no. Depends on the developers, the projects and the companies involved. Security is about process and there's a lot more to the process than having access to the source code.

  • Open source isn't necessarily better than closed source and closed source isn't always better than open source.

    Both have their issues and advantages. Both have their place.

  • You'll certainly find more bugs in open source code, simply because they're easier to find. It doesn't follow that there are more bugs in open source code. It's probably more true that there are many more undiscovered bugs in closed source code.
  • First of all, there's a significant difference between "Open Source" and "Free Software". The former only says that the source code is publicly available; the latter says that you are actively invited to engage in the development process.

    Unfortunately people forget that "The freedom to improve the program" means that the program needs to be simple enough that individuals can have a decent chance of understanding it and making meaningful changes. We see a dangerous trend towards more complex and integrated

  • There are a lot of projects (SystemD, Linux kernel) where the maintainers are hostile to people submitting security patches or filing bugs relating to security.

    https://www.theregister.co.uk/2017/07/28/black_hat_pwnie_awards/ [theregister.co.uk]

    https://www.theregister.co.uk/2017/11/20/security_people_are_morons_says_linus_torvalds/ [theregister.co.uk]

    Eyeballs don't help when they find things and are told to fuck off.
