With So Many Eyeballs, Is Open Source Security Better? (esecurityplanet.com) 209
Sean Michael Kerner, writing for eSecurity Planet: Back in 1999, Eric Raymond coined the term "Linus' Law," which stipulates that given enough eyeballs, all bugs are shallow. Linus' Law, named in honor of Linux creator Linus Torvalds, has for nearly two decades been used by some as a doctrine to explain why open source software should have better security. In recent years, open source projects and code have experienced multiple security issues, but does that mean Linus' Law isn't valid?
According to Dirk Hohndel, VP and Chief Open Source Officer at VMware, Linus' Law still works, but there are larger software development issues that impact both open source as well as closed source code that are of equal or greater importance. "I think that in every development model, security is always a challenge," Hohndel said. Hohndel said developers are typically motivated by innovation and figuring out how to make something work, and security isn't always the priority that it should be. "I think security is not something we should think of as an open source versus closed source concept, but as an industry," Hohndel said.
Q: Who's auditing the code? (Score:5, Insightful)
A: Other people
Re: (Score:3)
Exactly! Security audits are not the same as known bugs, so they'll need some new law, some new motivating principle.
The answer isn't yes or no, the answer is just, "You didn't understand Linus' Law."
Exactly. Shallow, not non-existent. Personal examp (Score:5, Insightful)
Exactly. ESR summed up Linus's thoughts as ".. all bugs are shallow", not "all bugs don't exist".
Linus's exact words were:
"Somebody finds the problem, and somebody else *understands* it."
I'll share two examples from my own experience. Somebody found the Shellshock bug and suggested a fix. Over the next few hours, hundreds of people looked at it. Some saw that the suggested fix wouldn't quite cover this variation or that variation, so they tweaked it. Florian Weimer, from Red Hat, said those tweaks would never cover all the variations, and suggested an entirely different fix, one that went to the crux of the problem. Over the next few days, there was a lot of discussion. Eventually it became clear that Florian had been right. When he looked at the problem, he immediately understood it deeply. Well, it looked deep to us. To him, it was shallow.
""Somebody finds the problem, and somebody else *understands* it", Linus said. Stéphane Chazelas found shellshock, Florian understood it, fully, immediately.
There was no need to release a patch to fix the patch for the patch as we often see from Microsoft, or as we've seen from Intel lately. With hundreds of people looking at it, somebody saw the right solution, easily.
Here's another example from my personal experience with the Linux storage stack:
https://slashdot.org/comments.... [slashdot.org]
Re:Exactly. Shallow, not non-existent. Personal ex (Score:5, Interesting)
My experience was that I saw a bug report in some open source software and tried to fix it; by the time I had a patch written, a better one had already been released upstream, and I was the last person to upgrade because I was off trying to write my own patch.
There are so many freakin' eyeballs available that volunteers are mostly just jerks like me getting in the way trying to help! You have to have an inside line to the developers or security researchers to even learn about a bug early enough for anybody to notice your fix, even if you understood the bug as soon as you heard about it.
Even writing new types of network servers: somebody announced they were abandoning a popular web middleware tool, so I started plugging away at an Apache module, but within a week somebody else released something similar enough to mine that I just stopped coding and used theirs. Sure, my architecture choices were better, but theirs weren't bad enough to amount to bugs, so nobody would ever notice or care.
Programming is easy, the hard part is finding an unserved use case! And fixing known bugs is a pretty obvious use case.
Start by posting your idea (Score:5, Interesting)
Your experiences remind me of something I learned about open source development. I now start by posting about what I intend to do. I've received these responses:
John is working on that and expects to release it next week.
No need to do all that, just use setting Xyx and skip the last part.
That seemed like a good idea, but when we looked into it we noticed this trap.
We decided we want Betaflight to focus on LOS. Your idea fits better with the iNav fork, which already does most of that.
Hey that's a good idea. Can you also allow multiples? That would be useful for me. I can help test.
Re: (Score:3)
So let's fix the original line: "given enough competing eyeballs, all bugs are shallow." You did not fail in your attempt to correct a bug; you played a role. Your solution was not as good as the other, but it still provided a comparison, and next time yours might be the better solution. Fixing bugs is about applying the best solution and having a range to choose from; while that does delay things, it still works to ensure the best solution at the time is used. So your effort was most definitely not wasted, just part of the
Re: (Score:3)
Somebody found the Shellshock bug and suggested a fix. Over the next few hours, hundreds of people looked at it. Some saw that the suggested fix wouldn't quite cover this variation or that variation, so they tweaked it. Florian Weimer, from Red Hat, said those tweaks would never cover all the variations, and suggested an entirely different fix, one that went to the crux of the problem.
Shellshock. A bug that was shown to have existed since 1989, and was patched in 2014.
I'm not sure this is the best example to use when discussing whether open source security is better than closed source security.
It's a perfect example of the difference. Meltdown (Score:2)
I think it's a perfect example of the difference between "bugs don't exist" and "the bug is shallow - to someone". Lots of people looked into it deeply and couldn't figure out a good way to fix it. Weimer immediately saw what needed to be done - it was shallow to him. With enough eyeballs, "the fix will be obvious to someone".
Compare Intel's Meltdown patches. They release a patch and say everyone should use it. Then two or three weeks later "oh shit, don't install our patch! We'll make a new patch soon.".
Re: (Score:2)
Somebody found the Shellshock bug and suggested a fix. Over the next few hours, hundreds of people looked at it. Some saw that the suggested fix wouldn't quite cover this variation or that variation, so they tweaked it. Florian Weimer, from Red Hat, said those tweaks would never cover all the variations, and suggested an entirely different fix, one that went to the crux of the problem.
Shellshock. A bug that was shown to have existed since 1989, and was patched in 2014.
I'm not sure this is the best example to use when discussing whether open source security is better than closed source security.
On the closed source side of things, lots of bugs that impact Windows 10 will also impact Windows XP (with patches available to customers with agreements, or on XP Embedded, or POSReady). Back when older operating systems were supported, lots of bugs impacted XP, 2000, NT4, and 9x. So it's likely that many of these current bugs also impact ancient versions of Windows.
Re: (Score:2, Insightful)
Once in a while when I'm bored at work, I'll just pick a random source file in the kernel and give it a good read. Reading other people's code can be a great learning experience; sometimes you learn techniques you were unfamiliar with.
A programmer who doesn't read other people's code is much like a writer who doesn't read other people's books. You don't NEED to do it, but it can be a good source of inspiration too.
PRECISELY THIS. (Score:2)
When you claim it's better "because many eyes can look at it" the following questions are germane.
#1 - How many eyes ACTUALLY look at it, rather than just "have the opportunity"?
#2 - Of what quality are those eyes? Do they know the programming language in question? Are they up to date?
#3 - Is it just the code in question that matters? Or is it the code, plus a bunch of dependencies to other libraries, pre-compiled or otherwise? How long would it take to become not just a passing pair of eyes, but expert in
More eyes (Score:4, Interesting)
I do not trust normal humans to anything technical right. I would prefer languages that work better with static analysis, more free tools to provide quality static analysis, and more fuzz testing.
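To illustrate the kind of fuzz testing I mean, here's a minimal sketch in Python; the parser under test is a made-up toy, not any real project's code:

import random
import string

def parse_key_value(line):
    # Toy function under test: split "key=value" into a pair.
    key, value = line.split("=", 1)
    return key.strip(), value.strip()

def fuzz(iterations=10000):
    # Throw random printable strings at the parser and collect anything that crashes.
    crashes = []
    for _ in range(iterations):
        line = "".join(random.choice(string.printable)
                       for _ in range(random.randint(0, 40)))
        try:
            parse_key_value(line)
        except Exception as exc:
            crashes.append((line, repr(exc)))
    return crashes

if __name__ == "__main__":
    found = fuzz()
    print(len(found), "crashing inputs found")
    for line, exc in found[:5]:
        print(repr(line), "->", exc)

Even this crude loop immediately turns up the inputs with no "=" in them, which is exactly the kind of unglamorous checking humans tend to skip.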
Re: (Score:2)
You left out a word in the above sentence. Which just confirms the intent of the above sentence....
Re: (Score:2)
I think the intent of the above sentence was to suggest they're not a mere 'normal human'.
Re:More eyes (Score:5, Informative)
This is why some of us are insistent that the decades of experience which gave rise to design patterns actually mean something. Folks often counter-argue that good programmers "know what their code does" and so a mess of unstructured spaghetti code is fine "as long as it works"; they don't believe in engineering containment of bugs and their impact.
When you build your code to be a set of tools with defined behaviors and interfaces, you encapsulate bugs. An error in one piece of code creates a defect at the interface, which you correct in one place. This seems wholly imaginary until you realize un-breaking a bug in many flat programs causes unexpected behavior due to other code relying on the defective logic's incorrect impact on some program state.
In an ideal world, none of this would matter. We do all this stuff because it matters in the real world.
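A minimal sketch of what I mean by encapsulating behavior behind a defined interface (hypothetical names, Python purely for illustration):

class TaxCalculator:
    # The tax rule lives in exactly one place; callers only use calculate().
    def __init__(self, rate):
        self.rate = rate

    def calculate(self, amount):
        # If a bug is found here (say, a rounding error), the fix lands here,
        # and every caller picks it up without having relied on the broken internals.
        return round(amount * self.rate, 2)

def invoice_total(items, calc):
    # Callers depend on the interface, never on the internals of the rule.
    subtotal = sum(items)
    return subtotal + calc.calculate(subtotal)

if __name__ == "__main__":
    print(invoice_total([19.99, 5.00], TaxCalculator(0.07)))

In a flat program, the same rule tends to get re-implemented inline in a dozen places, and a later "fix" breaks whichever copies had quietly adapted to the wrong behavior.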
Re: (Score:2)
If you mean Gang-of-Four-style (GOF) patterns, they mostly fizzled because, first, it was not clear when to use which pattern. Second, incorrect assumptions were often made about how requirements would change in the future. GOF had a shitty crystal ball, time finally revealed. Nobody road-tested that fad long enough, and fanboys dived in face first.
Re: (Score:3)
People made larger architectural patterns like MVC and declared them a replacement, in the same way someone might declare "Victorian" a replacement for joist-and-subfloor floors. I've found the GoF patterns have served well where people used them, and people have found them unclear when they didn't understand architecture.
Can you provide examples of where, how, and why GoF patterns failed?
Re:More eyes (Score:4, Interesting)
It's not that the patterns themselves have failed, just that their use has fizzled due to failure to live up to the claimed benefits of using them. I've never actually even read the book, but I took a gander at the "antipatterns" book (the only thing in that category available at the library at the time) and it immediately struck me as "middle management trying to program", or something in a similar vein.
Now, there's indubitably a lot of "code grinders" Out There for whom this sort of thing is actually a boon. The best and brightest among us tend to scoff at such people, or more specifically at their stumbling and crutches, with all sorts of plausible-sounding but not actually helpful counters like "good people know what their code does", conveniently forgetting that most programmers aren't very good at all. So perhaps "patterns" are a useful crutch to keeping a lid on the damage from the inevitable Dunning-Kruger effects in business programming. I don't know, maybe.
But it was only much later that I found this writeup [fysh.org], and my take-away is that this sort of thing, I think including touting lists of "patterns" as fix-alls for programming trouble, is an attempt at turning an inherently mapping-mode activity into something suitable for packers to use. The better approach is to knock such people into mapping mode, but that's much harder to sustain. And could well count as cruel and unusual.
Re: (Score:2)
The best and brightest among us tend to scoff at such people, or more specifically at their stumbling and crutches, with all sorts of plausible-sounding but not actually helpful counters like "good people know what their code does", conveniently forgetting that most programmers aren't very good at all. So perhaps "patterns" are a useful crutch to keeping a lid on the damage from the inevitable Dunning-Kruger effects in business programming. I don't know, maybe.
Self-referential. You're taking the same approach, but softer: "some of us are smart enough to not need this."
Some of us are engineers and can build a house without adhering to national and international building and electrical standards and codes. That house will be structurally-sound. We'll understand how each part works and how to maintain it or build additions so it doesn't crumble.
That doesn't help every other professional who touches our stuff.
But it was only much later that I found this writeup
Essentially, yes. The thing he should also noti
Re: (Score:2)
I guess I "don't understand architecture" (despite building hundreds of successful applications.)
You can build successful applications organically. That doesn't mean they have good architecture; it just means they happen to function. Architecture affects things like how quickly large programs can be built, how readily they can be modified and extended with large and complex new features, and how likely defects are to arise. People used to talk about Microsoft rewriting nearly all of Windows every release (NT3 to NT4 to XP to Vista) because much of the core architecture was so broken as to be unusab
Re: (Score:2)
GOF patterns fail because they are not a good fit for most situations. Reality is more nuanced than a few patterns, so to be effective you need many more patterns than are in the book. Instead of memorizing patterns, it is often easier to analyze the situation and come up with something that fits it, rather than try to jam it into a pre-existing pattern.
Re: (Score:2)
un-breaking a bug in many flat programs causes unexpected behavior due to other code relying on the defective logic's incorrect impact on some program state.
To be fair you have the same problem with encapsulation and output/results too. (A semi-infamous one being a fix to a Microsoft Office API to return data as it was spec'd in the docs rather than as it originally did, which many people had coded to. Ironically, rather than change the docs to match the code, MS did the opposite.) The gist is the same though - if you encapsulate your code into modules (i.e., Lego-block style), any fixes you make are automatically picked up by the users with no action needed on their part.
Re: (Score:2)
To be fair you have the same problem with encapsulation and output/results too
Only at the interface. People will write code that does things to things, and then reuse calls to that code to incrementally do things to things instead of extending the interface on an object to say you can now do a thing to a thing (or creating a filter, or whatever else they could do to achieve the same). Then, when you modify those innards, things break.
Sometimes, two pieces of those innards are glued together in such a
Re: (Score:3)
Yep. I can't count the times I had to help fellow programmers with a problem, and it turned out to be a simple boundary condition, or something like having typed == when they meant != and never noticed when they were staring right at it.
I imagine it's like proofreading your own emails, letters, etc. Often, your mind reads what you intended to write rather than what you actually wrote.
I know there are several times I left the contraction off and every time I proofread it, I saw it there. And the difference can be critical. Is/isn't changes the entire meaning. The your/you're error can make you look dumb but not as fatal a mistake. "Your product is under warranty".
Re: (Score:2)
Yep. I can't count the times I had to help fellow programmers with a problem, and it turned out to be a simple boundary condition, or something like having typed == when they meant != and never noticed when they were staring right at it.
I imagine it's like proofreading your own emails, letters, etc. Often, your mind reads what you intended to write rather than what you actually wrote.
I know there are several times I left the contraction off and every time I proofread it, I saw it there. And the difference can be critical. Is/isn't changes the entire meaning. The your/you're error can make you look dumb but not as fatal a mistake. "Your product is under warranty".
That's what I like to call the "Meddling Interloper" method of debugging. It's when a co-worker comes up behind you as you are furiously poring over some code and points their finger at your screen and says "Isn't there supposed to be a $CHARACTER there?"
"D'oh!" You say...
Visibility is always better than invisibility (Score:5, Insightful)
When software doesn't have visible source code, the legitimate users have no assurances regarding what it's doing, other than those imposed by the operating system (which they might not have complete source for either).
However, the bad guys still take the trouble to disassemble the code and find its vulnerabilities.
With many eyes, you still might not find all bugs, but you can find them, and you can do so without the unreasonable investment of disassembling the code and reading the disassembly - which is not like reading the real source code.
The larger issue is that we need publicly-disclosed source code for some things, to assure the public good, whether it is proprietary or Open Source. For example, the emission control code in automobiles, which it turns out multiple manufacturers have used to commit fraud.
Re:Visibility is always better than invisibility (Score:4, Insightful)
Very true. I only read open source source code if there is a bug I need to maneuver around or fix.
And most code is so bad that you don't really want to read it because of the nightmares it induces, e.g. looking at https://lucene.apache.org/ [apache.org]
Re:Visibility is always better than invisibility (Score:5, Insightful)
If you are thinking of bugs like Heartbleed, there are also economic issues. OpenSSL was issued under a gift-style license. Big companies that were making billions on desktop software used it, and almost never returned either work or money to the project. This one guy, Ben, carried most of the load out of his personal time.
Now, this is not something the OpenSSL guys might ever have considered, and I am not representing them. But what if OpenSSL had been dual-licensed? All the Free Software folks would have had it for free, and all of the commercial folks would have had to pay a reasonable fee. In fact everybody would be paying something, either by making more great Free Software or by paying money. There might have been fewer commercial users, but there might also have been an income stream for Ben or other developers, and they might have been able to devote more time to finding bugs. So, there might never have been a Heartbleed.
Re: (Score:2)
This sounds good to me at first, but then I also think back to '99 when I was using a Free SSH version for everything, very happily, and then I got a client that required access only using a certain version of commercial ssh.
I think this comes down to the whole Free Software vs Open Source split; when there is a split between proprietary and free, then some people will be forced to use a particular one, but if there is an open one such that everybody can use it, then everybody might have compatibility and t
Re:Visibility is always better than invisibility (Score:5, Insightful)
People are lazy.
Re:Visibility is always better than invisibility (Score:4, Interesting)
We've mostly won this battle in the industry. You can't really not use Open Source in your IT department any longer. And IT managers who insist on avoiding it, rather than learn about it, don't get ahead.
Re: (Score:3)
most IT managers think open source software is less secure because people can see the code
What? Are you implying that most IT managers are technically incompetent? If not, you should!
Re: (Score:2)
I have been scandalized, in general, to read commercial embedded systems code. Getting them to actually understand security has been an uphill battle, and surprisingly remains one today.
Who counts the votes? (Score:2)
The larger issue is that we need publicly-disclosed source code for some things, to assure the public good, whether it is proprietary or Open Source.
There is a famous saying with elections that it's not the people who vote that count, it's the people who count the votes. Similarly having some source code to audit is great but it's meaningless if the company doesn't actually utilize that exact code or finds some sneaky way to circumvent it.
That said I do agree with your point.
Re: (Score:3)
Good point. You need assurance that the code in the device is the code that you see.
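At the software level, one small piece of that assurance is checking that the artifact you're about to run matches a published digest. Here's a minimal sketch in Python; the file path and expected digest passed on the command line are hypothetical:

import hashlib
import sys

def sha256_of(path):
    # Stream the file so large firmware images don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    path, expected = sys.argv[1], sys.argv[2]
    actual = sha256_of(path)
    if actual == expected:
        print("OK: artifact matches the published digest")
    else:
        print("MISMATCH:", actual)
        sys.exit(1)

Of course that only ties the binary to whatever digest was published; tying the binary back to the source you actually reviewed takes something like reproducible builds, which is a much bigger effort.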
This is a way big problem for government. There are many integrated circuits in our fighter jets, etc. How do we assure that what is inside them is what we think? Thus, we have defense assurance programs that follow the production of a chip from design all
Re: (Score:2)
With Open Source it is more often blindly trusted becau
Re: (Score:2)
Sure, you do what you can for due diligence if you are using proprietary software. But you are of course putting trust in those auditors. And they are a for-profit business, and it's in their interest to do a good-enough job while not spending too much time.
One of the things they do (and most consulting companies bigger than one person do, including law firms) is sell highly-qualified people, and then have lower-qualified people actually do the work, under the "supervision" of the more qualifie
Re: (Score:3)
Even if you do have visible source code, the legitimate users have no assurances regarding what it is doing. Surely the train wreck that is OpenSSL should tell you that.
Most legitimate users wouldn't understand the code if it was exceptionally simple and clear because, well, they don't understand code. Even relatively competent programmers can have problems with some code bases. For example, I'm sure the guy who introduced the Debian-SSL bug was considered to be a pretty good coder and yet he still screwed
Re: (Score:3)
"For example the emission control code in automobiles, which it turns out multiple manufacturers have defrauded."
Nonsense. Without public disclosure of the entire hardware platform, publicly-disclosed source code would be meaningless. Furthermore, there would be no reward to programmers for reviewing this code, even if they could, so there would be no benefit to public disclosure of such code. Auto manufacturers are not going to fully expose their engineering, nor would it be reasonable to expect them to.
As f
Re: (Score:2)
However, the bad guys still take the trouble to disassemble the code and find its vulnerabilities.
I would flip it. The bad guys have a huge incentive to invest the time and effort to audit code for security bugs. However, rarely do open source or closed source projects have a large incentive.
From a blank slate both Open and Closed source applications are at a disadvantage. Some closed source applications have an incentive to have lots of eyes audit the code. Some open source applications have an incentive to have lots of eyes audit the code.
So I would call the "Eyes" argument a wash between open and
Visibility is required but insufficient (Score:2)
We need software freedom for all published programs. Merely being able to see some source code doesn't grant anyone the right to compile that code into an executable they can share (including distribute commercially), install, and run. So, source code for one's own vehicle under a free software license is needed. It's quite easy to maintain a code base where the malware isn't listed but is present in the executables people receive and run while publishing source code that has no malware in it and is license
Re: Visibility is always better than invisibility (Score:4, Insightful)
While there is no guarantee that some eyes will not be malicious, I think your statement doesn't really fit the probabilities. Most people who look at code will be looking because they want to modify it, because they have questions that can be answered better by the source than by documentation, because they are looking for examples of how to do something, or because they are curious. Most of these people have good intentions and will even experience some emotional fulfillment from helping to eliminate a bug and thus help all of the other users. People like the public-benefit aspect of Open Source and want to help.
Re: (Score:2)
And speaking from my own experience, if I have to use a closed source library, I pad my estimates because it's going to take a lot of extra time.
Re:Visibility is always better than invisibility (Score:5, Informative)
You're missing the fact that the code was made to game the test, and changed emission parameters when the vehicle was on a dynamometer, which is the way emissions tests are done. It was found by a little university lab doing an unrelated experiment that happened to instrument the vehicle while it was in motion and simply couldn't get its results to agree with the published emission figures.
Re:Visibility is always better than invisibility (Score:4, Funny)
So true. Most people write better when the world is looking over their shoulder. I'd like to put a certain politician on C-span for all of his waking hours. It would work a lot better than simply reading the brain-farts he emits on twitter and in press conferences.
Re: (Score:2)
It's not just that people write better when the world is watching - if there's no business case to fix closed source code, it probably doesn't get fixed. When the choice is between more features that a customer is paying for and refactoring code, more features almost always win out.
If there's an ugly buggy hack in open source code, there's a chance that at some point it will irritate someone enough that they'll fix it.
I know that I've personally left comments in my code to the extent of "I know this is a terrible
Re: (Score:3)
Me? Well, there's this long discussion [github.com] with the Lucky web platform developer the other day. And they just closed a bug I reported in Mailman [launchpad.net]. That sort of thing goes on all of the time.
Depends (Score:2)
Re: (Score:3)
I've been involved in commercial software development for almost 20 years. I have yet to see any small vendor actually implement everything you list.
Usually, they'll have a few components with some simple impossible-to-fail tests, and say they do "full unit testing". They'll have a "QA tester" rubber-stamp a release because it isn't as buggy as the last one. They'll run code through Valgrind once, ignore the results, and take credit for using analysis tools. Then the development execs go out to a seminar on
Re: (Score:2)
FOSS software often lacks QA, unit testing, code static/dynamic analysis and regression testing.
So does commercial software.
You have to be looking first (Score:4, Insightful)
Back in 1999, Eric Raymond coined the term "Linus' Law," which stipulates that given enough eyeballs, all bugs are shallow.
That's only true if those eyeballs are actually looking for bugs and organized enough to do something about them. Even then it's more like a principle than an actual truth. Some bugs are much harder to find than others no matter how many people are looking.
Re: (Score:2)
Some bugs are harder to find than others, sure, but that's just a mealy-mouthed platitude.
The point is, on a small team, some bugs are so hard that they don't even get fixed on the same day they're found. It could take days or even weeks, historically. Even when they were really working on it.
Open Source hasn't had a hard bug since the 90s. Every bug, no matter how hard, is fixed within hours of there being public knowledge that it exists and hasn't been fixed yet. Getting package manager to apply the patc
Shallow, not "don't exist" (Score:2)
It says "... bugs are shallow", not "bugs don't exist".
See:
https://it.slashdot.org/commen... [slashdot.org]
Linus is right (Score:3)
Linus is right... but note that he talked about eyeballs, not open vs closed source. If an open source project is obscure, or if the code is too hard to read, it may not get any scrutiny. On the other hand, closed source code from companies that care about security enough to pay security firms to scrutinize their code, or to hire security-knowledgeable developers and have them look at it carefully, can get a lot of eyeballs.
In the normal course of events, though, open source code almost always gets more attention than closed source, just because anyone who wants to look, can.
Re:Linus is right (Score:5, Insightful)
This is exactly right. It's not about open vs closed source, but eyeballs. For instance, take the HeartBleed / OpenSSL bugs from a few years ago. OpenSSL is used extremely often and all over the place, including by Google, Facebook, etc. But it had vulnerabilities in it that had existed for years and years, and it was because OpenSSL was really only being maintained by a handful of people.
But I think even more so, some organizations just aren't dedicating people to finding problems. You can still exploit Android, even though it's powered by Google and Linux. Intel has issues with its processor designs. Apple had a bug a year or so ago where anyone could log in as root. And these are the companies that supposedly have the best developers and essentially unlimited resources.
Re:Linus is right (Score:4, Insightful)
You can still exploit Android
Actually, it's pretty darned hard to do that on an up-to-date device (e.g. Pixel). There will always be vulnerabilities, but SELinux and other efforts have made Android a pretty hard target lately. Except, of course, for the fact that many device makers don't update.
And these are the companies that supposedly have the best developers and essentially unlimited resources.
Regarding Google, I think the developers are generally quite good, but resources are far from unlimited. I work on the Android platform security team, and we're always overstretched. That's partly because we set ambitious goals, but mostly because it's just really hard to find people. Part of that is location -- we currently only hire in Mountain View, Kirkland and London, so we can only hire people willing to live in one of those locations -- but most of it is because good software engineers who also know security are just hard to find.
Re: (Score:2)
> but most of it is because good software engineers who also know security are just hard to find.
So, why doesn't Google just hire some software engineers and put them through a training camp/apprenticeship? This excuse of every company is getting tired. Are you willing to invest in your work-force or not?
Re: (Score:2)
> but most of it is because good software engineers who also know security are just hard to find.
So, why doesn't Google just hire some software engineers and put them through a training camp/apprenticeship? This excuse of every company is getting tired. Are you willing to invest in your work-force or not?
We do quite a bit of that (apprenticeship, I mean; training camps aren't effective). But training new people takes time and energy from the existing staff, which reduces the work they can get done. Even for experienced hires, it takes a year or more before they're really productive; add another two or three years for those who aren't.
No. Just easier to do. (Score:2)
It's not a matter of eyes, it's a matter of eyes that are actually looking. Just because a million people use OpenSSH every day doesn't mean that it's more secure; unless someone sits down and audits it, it could just as well be closed source.
The difference is that if you WANT to audit it, you CAN. Without first reading more NDAs than you'll eventually get to read code.
Re: (Score:2)
This is a great distinction that is often overlooked. A million people using a piece of software doesn't mean a million people hunting for security flaws in the code. A large percentage of the users will be the "download and use it" type. Even if you put the source code on their screen and highlighted the section of code with the security flaw fo
nope (Score:3)
While the "many eyes" can be theoretically a better model, practice has shown very few actually look at Open Source software with security in mind.
Even critically important projects like OpenSSL.
Security review takes time. Time is money (even in the OSS world). Security audits require money. They don't get done unless a commercial entity (using OSS) commissions them.
The "many eyes" is a really bad security model in practice.
Oh? (Score:2)
No (Score:3)
The fact is that there aren't many eyes on most parts of the code, and of the ones that are, very few of them are qualified to find the problems.
Re: (Score:2)
With Open Source, there's no guarantee that anyone will find the bugs in the code, but with closed source, I guarantee nobody but the company's programmers will be finding and fixing the bugs.
There's another problem (Score:2)
I use some software. It's in Debian (and Ubuntu) but hasn't been updated in years. It's not the sort of thing that needs to be updated. But I've managed to find and fix 6 or 7 bugs. Serious bugs. Coredump-now type bugs. I've stopped reporting them back to Debian because nobody is looking at bug reports. There is no upstream. I've reported them back to a fork I found on SourceForge, but they asked me to rewrite my commit message without Oxford commas. Seriously.
I'm glad it's open source, I wouldn't be a
With more installations it is. (Score:3)
See WordPress. Abysmal architecture, programmed by monkeys on crack, pretty good security. The last critical gap was closed after only 8000 websites had been infected, something like .0002% of the installbase or something.
Pretty neat.
I bet that a gap in that obscenely expensive Oracle Java web application server thingie isn't found half as fast, let alone fixed at such speed.
Re: (Score:2)
Nobody is saying that PHP is a nice language. The point of it was that it allowed less-skilled web designers to write software for web presentation. People who never learned the fundamental concepts of computer security. The security problems were a natural result.
We have much better languages today, and I sure help people use them. But I can't make them do so.
Totally agree. (Score:2)
First of all: Hey, Gang, check it out! Bruce Perens replied to me on slashdot! Yeah man, I started a thread that was joined by Bruce Perens! Awesome! ... Ok, sorry, had to get that out of my system ...
I get PHP pretty much the way you pointed out. "PHP's badness is its advantage", I've argued before. There's a fresh Lerdorf talk on YouTube where he himself says it pretty clearly: "PHP runs shitty code very, very well." ... Big upside, that is. The downside is, of course, that PHP is *so* easy to do stuff wit
Re: (Score:2)
:-)
Some of the work I do is painful, like arguing over Open Source licensing so that it will continue to be fair for everyone. So, it's nice to hear from people who say I've helped them. I think about them when it gets difficult. Thanks!
Someone else wrote that book. I was the series editor, which was mostly setting policy, doing PR, and looking over book proposals. That was the first Open Publication book series, and preceded Creative Commons.
If you learned a language from a book, you can learn another 30.
Re: (Score:2)
See WordPress...... pretty good security.
No haha.
The last critical gap was closed after only 8000 websites had been infected, something like .0002% of the installbase or something.
If they started using parameterized queries like the rest of the world, this would have been zero.
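For anyone who hasn't seen the difference, here's a minimal sketch; it uses Python and SQLite purely for illustration (WordPress itself is PHP/MySQL), but the principle is the same:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "nobody' OR '1'='1"  # a classic injection attempt

# Vulnerable: the attacker's quotes become part of the SQL statement itself.
query = "SELECT role FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())  # returns a row it never should have

# Parameterized: the value is bound by the driver and never parsed as SQL.
print(conn.execute("SELECT role FROM users WHERE name = ?", (user_input,)).fetchall())  # returns []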
No, collective blind spots (Score:2)
There's a huge collective blind spot in the programming community as a whole... they somehow believe that Ambient Authority [wikipedia.org] is an acceptable basis for writing applications in a world of persistent networking and mobile code.
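For readers who haven't run into the term: ambient authority means code can reach any resource just by naming it, instead of being handed an explicit capability. A toy contrast in Python, with file access standing in for the general idea (the paths are made up):

# Ambient authority: any code in the process can open any path the process can reach.
def log_ambient(message):
    with open("/var/log/app.log", "a") as f:  # nothing constrains which file this touches
        f.write(message + "\n")

# Capability style: the caller hands over exactly the resource to be used.
def log_with_capability(log_file, message):
    log_file.write(message + "\n")  # can only touch what it was explicitly given

if __name__ == "__main__":
    with open("app.log", "a") as log_file:  # the capability is granted here, once
        log_with_capability(log_file, "hello")

With mobile code and networked plugins, the first style means every library you import inherits your full authority; the second at least makes the grant explicit.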
Problem Lies in "Enough Eyes" (Score:3)
Any bug can be fixed after discovery because all eyes will be on it. (Or, at least, the eyes of most of the experts for that particular system.)
A huge part of security is being proactive throughout development---in design, code submission, review, and auditing.
Private companies can hire people to fill these roles as needed, but most open source projects rely on the security consciousness of individual contributors. Since security is often boring or counterproductive to the development of new features, I can easily see security being less of a priority for some developers.
Good security requires many eyes throughout the development process. It needs ongoing oversight to ensure that every module and code submission is consistent with the security model for the project.
BSD has one seriously security-conscious leader, but that is not typical. Maybe Red Hat will pay someone to oversee the security of the Linux kernel or audit its code, but most projects won't have that kind of backing. They'll rely on luck of the draw---maybe you attract someone with security expertise, or maybe you don't.
Without a dedicated security focus, projects should go through hardening phases where they deliberately welcome security experts and design/redesign as necessary for security. Even if it means a slowdown or moratorium on new features. Security takes time and effort, and the only solution is to put more eyes on it.
Tragedy of the commons. (Score:2)
Do you see anyone debloating the kernel? I looked into it once (video drivers primarily), but there was so much code duplication scattered in so many places that it would have required more time than I had and more importantly lots of interaction with the lkml...
Maybe with a GSOC project or some other sponsorship, but until then t
No (Score:2)
Open Source is better, but not because there is some vast horde of people auditing the code for free. There isn't. Linus wasn't talking about auditing code. Linus wasn't even necessarily discussing security. That quote doesn't claim or even imply that some benevolent group of open source programmers are scouring code for security flaws.
Linus was simply observing that even difficult bugs are quickly understood when enough people look at the problem. The known problem; not latent flaws in some large bo
Security vs Law Enforcement: 2 impossible missions (Score:5, Interesting)
I have trouble with this industry-concept that software security should be put first -- it's an impossible business objective.
Think about how many industries focus on security. Banks, sure. Money transport, of course. Prisons and jails.
My air conditioner broke last week. It needed a new capacitor. It was a 5-minute $0 fix. Walk between the houses, open the compartment, pull out the breaker.
Now imagine your air conditioner, with the software industry's concept of security. Can you? How many check-points for a repairman to get to my air conditioner? How much added hardware? How much added expense in dollars and time? What stops someone from throwing a paint-filled balloon from fifty-feet away?
Security, when lives aren't at risk, is just so rarely worth it.
And when lives are at risk? Maybe you have a lock on your front door. Maybe it's a deadbolt. Maybe it's a really fancy locking mechanism, super-secure. Your front door is likely right next to a glass window. Congrats on the lock. Enjoy the bars on your windows.
And what stops your car, at highway speeds, from hitting another car at highway speeds? Right, a thin strip of white paint. Excellent. Sometimes the paint is yellow, even better.
We've never focused on security. We simply cannot afford to.
Instead, we talk about insurance, and criminal law enforcement.
So that's what I'm suggesting for software. Law enforcement. Deterrents.
Anything else, well, is just uncivilized.
Re: (Score:2)
My father used to tell me that most crimes are crimes of opportunity, which I've largely found to be true in my experience. Someone leaves their car unlocked with valuables visible. Someone leaves their phone at the bar as they go to use the restroom. That sort of thing. In the real world, the criminals you're most likely to encounter are common ones with unsophisticated methods, so simple preventative steps coupled with the effective deterrents you mentioned are generally more than adequate to prevent any
Re: (Score:2)
We already have laws against this stuff, but we can't hire enough people to staff the agencies that are supposed to handle enforcement. Clearly, your solution has already been tried, and it has failed.
Re: (Score:2)
I have trouble with this industry-concept that software security should be put first -- it's an impossible business objective.
It would be kind of cool if people could follow basic security principles, like, "Don't use telnetd" or "don't release software with default passwords." You don't need perfect security, but think about it a little, at least.
Complex Question (Score:3)
The first observation is that we need to think about like-for-like comparisons. When these observations were initially made, 20 years ago, how many projects [either closed source or open source] were using automated source code scanning solutions, i.e. technology specifically written to parse code for flaws?
In other words, 20 years ago the "landscape" was likely to be close to "even". Today, however, many commercial software development shops use vulnerability scanning solutions and/or routinely conduct binary scans of resultant code. Today, many commercial development shops use automated test harnesses for load testing and regression testing. It is fantastic that they do. They do this because they can afford to and because the rapid advancement of this sort of technology has made it possible. Twenty years ago? Not so much.
This would suggest that we might start to see a difference in post-production bugs between Open Source and Commercial/Closed Source software where the development environments differ between these two operating models.
The second observation would be far more tenuous. In the same 20 year period, we have seen many different programming languages "come and go". Obviously the more established platforms (COBOL, C, C++, JAVA) continue to be popular, but this, too, brings differences in bug reports. The longer a language has been in existence, the more mature development becomes, the more libraries become available, the more skilled developers become in preventing even the more obscure bugs.
I don't have access to the data [and wouldn't know where to look for it, tbh] but I think it would be easy to graph out "average number of vulnerabilities per thousand lines of code" - i.e. defect density - over a 5, 10 or even 20-year period of language use. It would be reassuring to see if that trended down - but even more interesting [and worrying] if it didn't.
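The metric itself is trivial to compute once you have the counts; a tiny sketch in Python with entirely made-up numbers, just to show the shape of the input such a graph would need:

# Hypothetical data: (year, reported vulnerabilities, thousands of lines of code)
history = [
    (2000, 42, 180),
    (2005, 35, 260),
    (2010, 28, 340),
    (2015, 19, 410),
]

for year, vulns, kloc in history:
    density = vulns / kloc  # vulnerabilities per thousand lines of code
    print(year, round(density, 3), "vulns/KLOC")

The hard part, as noted, is getting trustworthy counts, not the arithmetic.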
A while back I went looking to see if there were any "big rules" about different programming languages being more or less prone to vulnerabilities than others. I had read [maybe 25 years ago] that Ada was once thought of as a language with very few bugs. The theory was that its compiler was so strict that if you could get your code to compile, it would probably run just fine. I was really surprised to learn that although there had been a few studies, there didn't seem to be any emergent evidence to suggest that there were differences between languages. I was surprised because my ignorance had suggested to me that helpful and/or heavily typed languages would be less bug-prone than more relaxed ones - i.e. that JAVA would have a lower defect density than C. Apparently [and I'd be happy for anyone to correct me] the evidence does not support this.
Sorry that this is trending away from the original question, but I think that context is absolutely crucial to get to a good answer to the original post - and that we would find that, like forecasting the weather, it would be pretty hard to do...
Re: (Score:2)
Today, however, many commercial software development shops use vulnerability scanning solutions and/or routinely conduct binary scans of resultant code.
These vulnerability scans are about as effective as using Lint (which is effective). AI vulnerability scanners are in research, but they have a long way to go.......
Re: (Score:2)
The question for me, though, would be to determine whether commercial software shops, by virtue of having a budget to spend on this sort of thing, can now "buy" better bug identification capabilities.
I'm starting to think there are no
The Many Eyeballs is BS (Score:3)
It's pure BS. Yeah, you *can* look at the code, but how many do? And how many have the requisite knowledge to recognize it when something is wrong?
As noted on Slashdot over 10 years ago (https://it.slashdot.org/story/08/05/11/1339228/the-25-year-old-bsd-bug) it took 25 years to fix a bug in some commonly used open source. My understanding is that the Samba team even coded around the bug instead of looking at the code and getting it fixed.
Is open source security better than closed source? Sometimes yes; sometimes no. Depends on the developers, the projects and the companies involved. Security is about process and there's a lot more to the process than having access to the source code.
Not better, just different... (Score:2)
Open source isn't necessarily better than closed source and closed source isn't always better than open source.
Both have their issues and advantages. Both have their place.
More bugs (Score:2)
It's more complicated (Score:2)
First of all, there's a significant difference between "Open Source" and "Free Software". The first only says that the source code is publicly available; the latter says that you are actively invited to engage in the development process.
Unfortunately people forget that "The freedom to improve the program" means that the program needs to be simple enough that individuals can have a decent chance of understanding it and making meaningful changes. We see a dangerous trend towards more complex and integrated
And what happens when people find bugs? (Score:2)
https://www.theregister.co.uk/2017/07/28/black_hat_pwnie_awards/ [theregister.co.uk]
https://www.theregister.co.uk/2017/11/20/security_people_are_morons_says_linus_torvalds/ [theregister.co.uk]
Eyeballs don't help when they find things and are told to fuck off.
Re: (Score:2)
Totally agree. Heartbleed went on for years and every open source zealot yelled "many eyes!" while compiling this mess of spaghetti legacy code.
Re: (Score:2)
You have a problem comprehending tenses.
Errors in code are not bugs until they're found. You write the code, then you find out it doesn't do what you expected. That part of "doesn't do what you expected" is the bug.
So the principle is not, "Many eyes makes all bugs reported," or "Many eyes makes code mistakes visible." It was only that when you have a bug, even if it looks like a "deep" (difficult) bug to one person, or a team of people, if enough people look at it somebody will have the experience or persp
You misunderstood the concept. -2 days to fix (Score:2)
You misunderstand what the quote is about.
It says "... bugs are shallow", not "bugs don't exist".
See:
https://it.slashdot.org/commen... [slashdot.org]
In the case of Heartbleed, it became public on April 7th.
The fix was available on April 5th. Meaning it was patched, and some people protected, before users even knew there was a problem.
Compare some IE bugs which were publicly acknowledged for seven YEARS before being fixed.