Bug Bounties Don't Help If Bugs Never Run Out
In 2007 I wrote:
It's virtually certain that if a company like Microsoft offered $1,000 for a new IE exploit, someone would find at least one and report it to them. So the question facing Microsoft when they choose whether to make that offer, is: Would they rather have the $1,000, or the exploit? What responsible company could possibly choose "the $1,000"? Especially considering that if they don't offer the prize, and as a result that particular exploit doesn't get found by a white-hat researcher, someone else will probably find it and sell it on the black market instead?
Well, I still believe that part's true. You can visualize it even more starkly this way: A stranger approaches a company like Microsoft holding two envelopes, one containing $1,000 cash, and the other containing an IE security vulnerability which hasn't yet been discovered in the wild, and asks Microsoft to pick one envelope. It would sound short-sighted and irresponsible for Microsoft to pick the envelope containing the cash — but when Microsoft declines to offer a $1,000 cash prize for vulnerabilities, it's exactly like choosing the envelope with the $1,000. You might argue that it's "not exactly the same" because Microsoft's hypothetical $1,000 prize program would be on offer for bugs which haven't been found yet, but I'd argue that's a distinction without a difference. If Microsoft did offer a $1,000 prize program, it's virtually certain that someone would come forward with a qualifying exploit (and if nobody did, then the program would be moot anyway) — so both scenarios simply describe a choice between $1,000 and finding a new security vulnerability.
But I would argue that there are certain assumptions under which it would make sense not to offer a cash prize program — and, in keeping with my claim that this is equivalent to the envelope-choice problem, under those assumptions it actually would make sense for Microsoft to turn down the envelope containing the vulnerability, and take the cash instead. (When I say it would "make sense", I mean both from a profit-motive standpoint, and for the purposes of protecting the security of their users' computers.)
On Monday night I saw a presentation put on by Seattle's Pacific Science Center "Science Cafe" program, in which Professor Tadayoshi Kohno described how he and his team were able to defeat the security protocols of a car's embedded computer system by finding and exploiting a buffer overflow. That's scary enough, but what was more interesting was how his description of the task made it sound like a foregone conclusion that they would find one: you simply sink enough person-hours into looking for a buffer overflow, and eventually you'll find one that enables a complete takeover of the car. (He confirmed to me afterwards that, in his estimation, once the manufacturer had fixed that vulnerability, his team could have found another one with the same amount of effort.)
More generally, I think it's reasonable to assume that for a given product, there is a certain threshold amount of money/effort/person-hours such that if you throw that much effort at the product, you will always find a new security vulnerability. Call this the "infinite bug threshold." Obviously the amount of vulnerabilities is not really infinite — you can only do finitely many things to a product in a finite amount of time, after all — but suppose it's so close to infinite as to make no difference, because the manufacturer would never be able to fix all the vulnerabilities that could be found for that amount of effort. I'm sure that $10 million worth of effort, paid to the right people, will always find you a new security vulnerability in the Apache web server; the same is probably true for some dollar figure much lower than that, and the lowest such figure is the "infinite bug threshold". By the definition of that threshold, the number of vulnerabilities that can be found for any amount of money below it will be finite and manageable.
(I'm hand-waving over some details here, such as the disputes over whether two different bugs are really considered "distinct," or the fact that once you've found one vulnerability, the cost of finding other closely related vulnerabilities in the same area of the product often goes way down. But I don't think these complications negate the argument.)
Meanwhile, you have the black-market value of a given type of vulnerability in a given product. This may be the value that you could actually sell it for on the black market, or it may be the maximum amount of effort that a cyber-criminal would invest in finding a new vulnerability. If a cyber-criminal will only start looking for a particular type of vulnerability if they estimate they can find one for less than $50,000 worth of effort, then $50,000 is how much that type of vulnerability is worth to them.
Now consider the case where
infinite bug threshold > black-market value
This is the good case. It means that if the manufacturer offered a prize equal to the black-market value of an exploit, any rational security researcher who found a vulnerability, could sell it to the manufacturer rather than offering it on the black market (assuming they would find the manufacturer more reliable and pleasant to deal with than the Russian cyber-mafia). And we're below the infinite bug threshold, so by definition the manufacturer only has to pay out a finite and manageable number of those prizes, before all such vulnerabilities have been found and fixed. I've made a couple of optimistic assumptions here, such as that the manufacturer would be willing to pay prizes in the first place, and that they could correctly estimate what the black-market value of a bug would be. But at least there's hope.
On the other hand, if
infinite bug threshold < black-market value
everything gets much worse. This means that no matter how many vulnerabilities you find and fix, by the definition of the infinite bug threshold there will always be another vulnerability that a black-hat will find it worthwhile to discover and exploit.
And that's the pessimistic scenario where it doesn't really matter whether Microsoft chooses the envelope with the vulnerability or the envelope with the $1,000, if the infinite bug threshold happens to be below $1,000. (Let's hope it's not that low in practice! But the same analysis applies to any higher number.) If the black-market value of a bug is at least $1,000, so that's what an attacker is willing to spend to find one, and that figure is above the infinite bug threshold, then you might as well not bother fixing any particular bug at that level, because the attacker can always just find another one. It doesn't even matter whether you have a prize program or not; the product is in a permanent state of unfixable vulnerability.
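To make the two cases concrete, here's a minimal sketch in C of the comparison being described. The dollar figures are invented for illustration, and the "infinite bug threshold" is something you can only estimate, never measure directly:

#include <stdio.h>

/* Illustrative sketch only: the numbers are made up. The point is just
 * to encode the inequality described above. */
struct product_estimate {
    const char *name;
    double infinite_bug_threshold; /* effort ($) above which a new vuln can "always" be found */
    double black_market_value;     /* what an attacker would spend/pay for one exploit */
};

/* A bounty set at roughly the black-market value can eventually drain the
 * pool of findable vulnerabilities only when the threshold exceeds that value. */
static int bounty_program_can_win(const struct product_estimate *p)
{
    return p->infinite_bug_threshold > p->black_market_value;
}

int main(void)
{
    struct product_estimate examples[] = {
        { "hypothetical product A", 200000.0, 50000.0 }, /* good case */
        { "hypothetical product B",  30000.0, 50000.0 }, /* bad case  */
    };

    for (size_t i = 0; i < sizeof examples / sizeof examples[0]; i++) {
        const struct product_estimate *p = &examples[i];
        if (bounty_program_can_win(p))
            printf("%s: a bounty of about $%.0f can eventually buy up every bug an attacker could afford to find\n",
                   p->name, p->black_market_value);
        else
            printf("%s: an attacker can always find another exploit within budget, bounty or no bounty\n",
                   p->name);
    }
    return 0;
}

With the second set of numbers the sketch prints the pessimistic conclusion, and notice that the size of the bounty never enters into it.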
At that point, the only ways to flip the direction of the inequality, to reach the state where "infinite bug threshold > black-market value", would be to decrease the black market value of the vulnerability, or increase the infinite bug threshold for your product. To decrease the black market value, you could implement more severe punishments for cyber-criminals, which makes them less willing to commit risky crimes using a security exploit. Or you could implement greater checks and balances to prevent financial fraud, which decreases the incentives for exploits. But these are society-wide changes that would not be under the control of the software manufacturer. (I'm not sure if there's anything a software company could do by themselves to lower the black-market value of a vulnerability in their product, other than voluntarily decreasing their own market share so that there are fewer computers that can be compromised using their software! Can you think of any other way?)
Raising the infinite bug threshold for the product, on the other hand, may require re-writing the software from scratch, or at least the most vulnerable components, paying stricter attention to security-conscious programming standards. Professor Kohno said after his talk that he believed that if the programmers of the car's embedded systems had followed better security coding practices, such as the principle of least privilege, then his team would not have found vulnerabilities so easily.
I still believe that cash prizes have the potential to achieve security utopia, at least for the particular programs the prizes are offered for, but only where the "infinite bug threshold > black-market value" inequality holds, and only if the company is willing to offer the prizes. If the software is written in a security-conscious manner such that the infinite bug threshold is likely to be higher than the black-market value, and the manufacturer offers a vulnerability prize at least equal to the black-market value, then virtually all vulnerabilities which can be found for less than that much effort will be reported to the manufacturer and fixed. Once that nirvana has been achieved, for an attacker to find a new exploit, the attacker would have to be (1) irrational (spending an estimated $70,000 to find a vulnerability that is only worth $50,000), and (2) evil beyond mere profit motive (using the bug for $50,000 of ill-gotten gain, instead of simply turning it in to the manufacturer for the same amount of money!). That's not logically impossible, but we would expect it to be rare.
On the other hand, for programs and classes of vulnerabilities where "infinite bug threshold < black-market value", there is literally nothing that can be done to make them secure against an attacker who has time to find the next exploit. You can have multiple lines of defense, like installing anti-virus software on your PC in case a website uses a vulnerability in Internet Explorer to try and infect your computer with a virus. But Kaspersky doesn't make anything for cars.
Bennett Haselton (Score:5, Insightful)
Re: (Score:2)
Oh my god, I've cracked the code
https://www.youtube.com/watch?... [youtube.com]
Bennett Haselton post (Score:5, Funny)
Why do words, suddenly appear
Every time, Bennett's here?
Just like me
you long to be
free from this
When did slashdot become a blog for Bennett? (Score:2)
It used to be that CmdrTaco or one of the others on the slashdot staff would occasionally post an article, but in general, the standard procedure would be that someone would write something on some other website, and then Slashdot would link to them.
And sometimes they'd link to one blog over and over again, posts that were just rehashes of press releases (e.g., coondoggie & Roland Piquepaille) rather than containing any original information or commentary, and they'd crowd out actual good articles on the
Re:When did slashdot become a blog for Bennett? (Score:5, Funny)
But you don't understand. Bennett discovered DIMINISHING RETURNS.
People need to know.
NEED. To. Know.
software doesn't have bugs (Score:2)
Thesis is essentially that fixing a bug doesn't increase security because the software still has an infinite number of bugs.
Obviously, this is false. A software package may have 6 security bugs. Fixing 3 of them reduces by 50% the chance that the bad guy will find one.
Re: (Score:2)
But IF there are effectively infinitely many vulns that can be found for less than the black market value, then fixing one does not decrease the probability that the attacker will find another one.
I do it for the cred, for six figure salary. Jail (Score:3)
Aside from the obvious ethical reason, I see two reasons more important than the $1,000 to go "white hat" rather than "black hat".
When a potential employer Googles my name, I want them to find my name on CVEs, Github commits, etc. - demonstrable proof that I do in fact find and fix real-world issues. I'm working on that. Right now, I'd have to point out my contributions; they aren't easily found via Google. For that, having a company or other organization publicly acknowledge my work is much more valuable
Re: (Score:2)
Why doesn't the developer and his/her company do a good job of reducing such bugs in the first place? How much time and money is wasted taking computers offline for updates, how many millions spent on admins patching systems? If the bugs were hard to avoid, I would understand. But c'mon, this is (criminal) laziness and greed, hiring incompetent developers (or paying competent developers the same salary as incompetent ones), and doing a rush job to release the software early to keep some artificial schedule
Re: (Score:2)
Theoretically, yes, the bugs are finite but if it takes decades to find and eliminate them all, they may as well be infinite, practically speaking. Case in point, win xp, which has been out for almost 13 years and bugs are still being found every day.
Re:When did slashdot become a blog for Bennett? (Score:5, Informative)
Bennett, I think it's mostly that you simply DON'T REALLY HAVE ANYTHING THAT INTERESTING TO SAY.
What you've presented us with is a smugly self-congratulatory "inner dialogue" in which you discovered that "at some point, it becomes more expensive to find bugs than the bug bounty will compensate you!"
Anybody with more than a second grade education can reason this out for themselves, but you saw fit to drone on for pages about it. You do this a lot. It's not informative, interesting, or insightful. You should probably either: a) up your game and offer us some real thoughtful insight; or b) shut the fuck up and take it back to livejournal.
Congratulations, you've discovered the law of diminishing returns - you're a fucking legend.
Re: (Score:3)
The problem is that the premises seem really dubious, and since there is no actual agreement about bug bounties that I've noticed, the conclusion is not particularly surprising.
Metaphor (Score:2)
Re: (Score:2)
i understand that you think you are using a metaphor (even if it is in fact a simile)
but either way it's false. in a race, you either finish or you don't. in software it's an ongoing process with (hopefully, but rarely) increasing quality and functionality.
small battles can and are won.
say you have 10,000,000,000 bugs in the system as a evidence.
researchers find and patch 1,000,000,000
is your system more or less likely to have an exploit found?
Re: (Score:2)
wtf, slashdot? "evidence"? i said "an estimate"!
Re: (Score:2)
You're arguing that I can't refer to a "race with no finish line" because all races have finish lines?
For that level of pedantry, I'd expect you to know that it really was a metaphor [dailywritingtips.com].
Re: (Score:2)
i argued that it's a false equivalence because software quality is not based (entirely) on a single finish line.
Re: (Score:2)
A race with no finish line doesn't have a single finish line, either.
Re:Metaphor (Score:5, Insightful)
The notion that you can't have code without these flaws (buffer overruns, dangling pointers, etc) is just asinine. I've worked on significant codebases without any such flaws. You just have to adopt a programming style that doesn't rely on being mistake-free to avoid the issues.
Want to end the danger of buffer overruns? Stop using types where an overrun is even possible.
Want to end the danger of dangling pointers? Managed code doesn't do anything to solve this problem, and is often the worst offender since coders often stop thinking about how memory is recycled, and well-formed objects can hang around in memory for quite some time waiting on the garbage man. So you have to write code where every time you use an object you check that it hasn't been freed, and importantly hasn't been freed and then re-used for the same object! (That happens on purpose in appliance code, where slab allocation is common.)
Heck, for embedded code I simply wouldn't use dynamic allocation at all. All objects created at boot, nothing malloced, nothing freed. Everything fixed sized and only written to with macros that ensure no overruns. I wrote code that way for 5 years - we didn't even use a stack, which is just one more thing that can overflow. That style is too costly for most work, but it's possible, and for life-safety applications it's irresponsible to cheap out.
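For what it's worth, here's a minimal sketch of that style in C (buffer size and names invented for the example): everything statically allocated, nothing malloc'd, and writes only through a macro that passes the real buffer size so it can't overrun.

#include <stdio.h>
#include <string.h>

#define MSG_BUF_SIZE 64

/* "Created at boot": statically allocated, never malloc'd or freed. */
static char g_msg_buf[MSG_BUF_SIZE];

/* Always truncates and NUL-terminates; dst_size is the real buffer size. */
static void safe_copy(char *dst, size_t dst_size, const char *src)
{
    if (dst_size == 0)
        return;
    strncpy(dst, src, dst_size - 1);
    dst[dst_size - 1] = '\0';
}

/* The macro supplies sizeof(dst), so writes that go through it cannot
 * overrun (dst must be a real array, not a pointer). */
#define SAFE_COPY(dst, src) safe_copy((dst), sizeof(dst), (src))

int main(void)
{
    /* Deliberately oversized input: it gets truncated instead of overflowing. */
    SAFE_COPY(g_msg_buf, "an input string that is intentionally much longer than the "
                         "sixty-four byte buffer it is being copied into");
    printf("stored: \"%s\"\n", g_msg_buf);
    return 0;
}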
Re: (Score:2)
While you are technically correct, the reality is that the most serious security vulnerabilities are almost all directly related to buffer overruns (on read or write), allowing an attacker to read or write arbitrary memory. Everything else is a second-class citizen by comparison; denying service by causing Apache to repeatedly crash is far lower priority than compromising all traffic and stealing credentials.
So when we look at that class of serious problems, we find that managed memory languages completely
Bennett's Ego (Score:5, Insightful)
" I was an early advocate of companies offering cash prizes to researchers who found security holes in their products, so that the vulnerabilities can be fixed before the bad guys exploited them. I still believe that prize programs can make a product safer under certain conditions. But I had naively overlooked that under an alternate set of assumptions, you might find that not only do cash prizes not make the product any safer, but that nothing makes the product any safer — you might as well not bother fixing certain security holes at all, whether they were found through a prize program or not."
Is the whole premise of this article Bennett having a conversation with himself, talking about his previous points that no one also cared about? I understand slashdot is trying to start doing op-eds by having this guy write. But everything he writes is this long-winded, blowhard, arrogant, ego-massaging nonsense that no one but him cares about. Here he's writing about his previous writings and how his thoughts have changed..in a poorly-written article with no sense of a conclusion..just rambling.
Bennett is also not an information security expert ..he's just a blowhard..can we have someone really involved in information security, like Bruce Schneier, write articles for Slashdot instead of this nonsense?
Re:Bennett's Ego (Score:5, Insightful)
While I don't share the AC's animosity towards you, the premise of your argument is entirely wrong.
The number of bugs is not limitless; bugs are very much a finite thing.
The benefit to the company is not limited to closing that single bug. When someone reports one bug, you likely are learning a new method and/or way of thinking in regards to the procedure/module/whatever is involved. One "reported" bug could likely make many dozens or more other bugs readily apparent in your code.
It also teaches your organization how to avoid that bug in the future. How many bugs were in the wild, being used by blackhats for YEARS through multiple iterations of a software package before being caught?
Also, you get to find the mistake in the code and, if you're managing your code correctly, you will know who made the mistake. So you can coach if it was something that should have been caught.
And lastly, it solidifies your place in the market as a leader. People study your code intently, use it more, get more involved. The more people involved, the bigger your talent pool, the more industry respect you have, and as a result the more people will look to you as a company that cares about the stability and long term viability of your product.
Re:Bennett's Ego (Score:5, Funny)
Ugh, I feed the trolls all the time, so I guess today I can mix it up and feed the idiot.
OK, here's a basic concept you seem to be completely misunderstanding: bug != vulnerability.
And even beyond that, not all vulnerabilities are equal. Yes, it is unlikely that Apache will ever be "bug free". It's even more unlikely that Apache is currently "bug free". That does not mean there is currently a heartbleed level vulnerability in Apache. For $10 million, you could absolutely find ways to crash Apache. You could find functions and modules that don't work properly. Hell, you could find those for $10. But there's no guarantee that for even an infinite amount of money, you could find a vulnerability that allows you to steal secret keys or execute arbitrary code. You pay for bounties in the hopes that if those vulnerabilities exist, someone will report them to you.
Here's the basic problem with every fucking article you write: You take a small set of data and jump to ludicrous and arbitrary conclusions. Here's some recent examples:
Your "data": A researcher found a buffer overflow in some car software, and said that even if that was fixed, they could find another one if they tried hard enough.
Your conclusion: All software ever written has an infinite amount of "bugs", which means all software will always be so insecure that there's no point in trying to fix it.
Your "data": You found a garage finding app that you like, but it was lower rated than a different garage finding app that you didn't like.
Your conclusion: The market for all intellectual property is so inefficient that the world is full of great things no one knows about, and we should form some sort of weird, pseudocommunist oligarchy of experts that tell us what to buy and what software to use. And we should eliminate advertising.
Your "data": Someone gave you some weird, bad advice about what you should do about a tent for burning man.
Your conclusion: No one ever follows good advice if it is hard, so everyone should start giving shitty advice because it's easier. Then you made up an acronym, called it a "metric", and despite the fact that it had no mathematical basis whatsoever, and you just made it up, you treated it like a number and used it to "mathematically" prove your point.
Your "data": I don't even know, some stupid shit about parking at burning man.
Your conclusion: no idea, there's no such thing as a slow enough day at work to make me read that shit.
Re: (Score:2)
Do you think that statement is incorrect? That for $10 million worth of effort, you could always find a new vulnerability in Apache, no matter how many iterations of bug-fixing you've already gone through?
I certainly do. First of all, there are only so many lines of code. Once you hypothetically 'fix' every one of them, you're done. Vulnerabilities exist because people are fallible and make mistakes, but ultimately there will be a limit, and the assumption that this limit is effectively unreachable is absurd enough to require evidence on your part.
Programmers and crackers are equally human. They're using the same hardware and software systems to do the analysis. Assuming that the latter will ALWAYS win
Re: (Score:2)
Do you think that statement is incorrect? That for $10 million worth of effort, you could always find a new vulnerability in Apache, no matter how many iterations of bug-fixing you've already gone through?
I certainly do. First of all, there are only so many lines of code. Once you hypothetically 'fix' every one of them, you're done.
Well, theoretically yes. But do you think that Apache could ever reach a state in practice, in the world we actually live in, where you couldn't find a new vulnerability in it for $10 million worth of effort?
Re: (Score:2)
Is there a statement in my post that you think is incorrect or unclear?
Re: (Score:2)
"Theoretically". Got it.
Emphasis added.
So now you're conflating a real-world situation with a hypothetical situation ... no. You do not get to mix real-world and hypotheticals in the same sentence. No one is offering $10 million and no one is likely to offer $10 million.
IF someone would offer $10 million
Re: (Score:2)
http://opensslrampage.org/ [opensslrampage.org]
if OpenSSL had 5 pages of bugs so far... and was widely used in an ecosystem where the source was there, just imagine the nightmare of closed source projects...
patching 100 bugs on average introduces 3 new bugs. now i know bugs != security vulnerabilities. but bugs are why people complain about software stability.
also a 'vulnerability' bug has a black market value that is always going to be higher than bug bounties. however an old exploit has the added value of 'reporting' it afte
By this logic... (Score:2, Insightful)
You should be ashamed of your apathy.
Re: (Score:2)
Even ignoring the ridiculous jumps and assumptions in your reasoning, your logic is absurdly inconsistent and contradictory. At any given moment, the number of criminals is finite just as the number of bugs is finite. But it seems that you are not talking about a given point in time. Over an infinite period of time, criminals are also infinite because for every X new births, some percentage of those people are pretty much guaranteed to become criminals. And every arrest has the potential of creating a new cr
Re: (Score:2)
If a large number of vulnerabilities can be found for $5K, then if you spend $5K of your own effort finding one such vulnerability so that you can fix it, the probability approaches zero that the attacker is spending their effort finding the same vulnerability.
Re: (Score:2)
Again, do you think Apache could ever, in practice, reach a state where you couldn't find one more vulnerability in it for $10 million worth of effort? I would say, probably not. That's probably true for some much lower dollar value as well.
Re: (Score:2)
Okay, I guess I misunderstood parts of your post, but I still see some issues.
First, you're assuming that the only consideration for people that find security vulnerabilities is money, so that if the potential illicit earnings from exploiting the bug are greater than the bounty, they will exploit the bug. This is definitely not true in practice. Some people just want to do good things. And even for people with no conscience whatsoever, they have to deal with the fact that doing something puts you into a hig
Re: (Score:2)
Okay, I guess I misunderstood parts of your post, but I still see some issues.
First, you're assuming that the only consideration for people that find security vulnerabilities is money, so that if the potential illicit earnings from exploiting the bug are greater than the bounty, they will exploit the bug. This is definitely not true in practice. Some people just want to do good things. And even for people with no conscience whatsoever, they have to deal with the fact that doing something puts you into a high stress defensive stance where you constantly have to cover your tracks. Most people wouldn't want that kind of lifestyle.
Yes, that's true -- so that introduces a fudge factor into the amount of the bounty, since it doesn't have to be quite as high as the black market value. It can be less, since most people would prefer dealing with the software manufacturer.
Second, you're assuming that the number of bugs found increases linearly with the dollar amount of bug bounties, but my gut instinct is that it is an asymptotic function. Increased bug bounties offer diminishing returns because after a certain point the limiting factor becomes the fact that bugs are really darn hard to find. (Case in point, OpenSSL. Every major tech company uses OpenSSL and several have conducted regular audits of it. Even with all that effort, no one was able to uncover the Heartbleed bug until earlier this year.) So even if Microsoft were to offer $10 million per bug, I don't think they would start finding more bugs than they could fix.
I don't think my conclusion depends on the assumption
Re: (Score:2)
Everybody arguing as if I had said the number of bugs is literally infinite is missing the point.
Re: (Score:2)
No, we weigh the cost of prosecuting a specific crime against the cost of not prosecuting it, and let some crimes slide.
So we spend a lot more time and effort prosecuting a murder than a jaywalker. Because it's worth more to stop the murderer.
(And when this gets out of whack, we have problems. Red light cameras, GPS devices on cars, and such are reducing the cost of prosecuting some crimes, and that is causing social problems as we start to prosecute crimes that we didn't before. A lot of the complaints
There aren't infinite bugs (Score:2)
If you start with the assumption that you can't make secure software, then you shouldn't make any software at all.
Re:There aren't infinite bugs (Score:5, Interesting)
People talk about bug free code. It is a matter of won't, not a matter of can't.
Sometimes, there are products out there which can be considered "finished". Done as in no extra features needed, and there are no bugs to be found. Simple utilities like /usr/bin/yes come to mind. More complex utilities can be honed to a reasonable degree of functionality (busybox comes to mind.)
The problem isn't the fact that secure or bug free software can't be made. It is that the procedures and processes to do this require resources, and most of the computer industry runs on the "it builds, ship it!" motto [1]. Unfortunately, with how the industry works, if a firm does do the policy of "we will ship it when we are ready", a competitor releasing an early beta of a similar utility will win the race/contracts. So, it is a race to the bottom.
[1]: The exception to this rule being malware, which is probably the most bug-free code written anywhere these days. It is lean, robust, does what it is purposed to do, and is constantly updated without a fuss.
Re:There aren't infinite bugs (Score:5, Interesting)
People talk about bug free code. It is a matter of won't, not a matter of can't.
Sometimes, there are products out there which can be considered "finished". Done as in no extra features needed, and there are no bugs to be found. Simple utilities like /usr/bin/yes come to mind. More complex utilities can be honed to a reasonable degree of functionality (busybox comes to mind.)
The problem isn't the fact that secure or bug free software can't be made. It is that the procedures and processes to do this require resources, and most of the computer industry runs on the "it builds, ship it!" motto [1]. Unfortunately, with how the industry works, if a firm does do the policy of "we will ship it when we are ready", a competitor releasing an early beta of a similar utility will win the race/contracts. So, it is a race to the bottom.
[1]: The exception to this rule being malware, which is probably the most bug-free code written anywhere these days. It is lean, robust, does what it is purposed to do, and is constantly updated without a fuss.
Once upon a time, I read somewhere (Yourdon, possibly) that the number of bugs in a software product tends to remain constant once the product has reached stability. The number for IBM's OS/MVS mainframe operating system was somewhere in the vicinity of 10,000!
It's been likened to pressing on a balloon where when you squeeze one bump in, another pops out, because the process of fixing bugs itself introduces new bugs.
And OS/MVS is about the most critical software you could put on a mainframe. You can't just Ctrl-Alt-Delete a System/370. Or power it off and back on again. Mainframes are expensive, and expected to work virtually continually. Mainframe developers were expensive as well, since after a million dollars or so of hardware and software, paying programmers handsome salaries wasn't as big an issue back then. Plus there was no offshore race to the bottom where price trumped quality at the time. In fact, there wasn't even "perma-temping" yet.
Still, with all those resources on such an important product, they could only hold the bug count constant, not drive it down to zero.
Actually speaking of OS/MVS, there's a program (IEFBR14) whose sole purpose in life is to do nothing. There have been about 6 versions of this program so far, and several of them were bug fixes. More recently, it had to be upgraded to work properly on 64-bit architecture, but some of the bugs were hardware-independent.
Re:There aren't infinite bugs (Score:5, Insightful)
Re: (Score:2)
I think all that matters is that the dollar number matches the black-market value. Then it doesn't matter whether most people would find the effective hourly rate "insulting"; all that matters is that anybody who does find an exploit will turn it in to the company rather than selling it on the black market or exploiting it themselves.
Re: (Score:2)
But what is the "black market rate" for 1 million credit card numbers? $20 apiece? What is the cost to the company if they lose 1 million credit cards? This is a job for the bean counters, but in some cases it might be worth it not to pay for the bug if you think it'll cost you less than $20 million in mitigation of reputation, etc. In other cases, it might be worth a lot more than $20 million if, for instance, a loss of 1 million credit cards causes Bank of America to lose $100 million of business. I think
Re: (Score:2)
What about scarcity? Wouldn't scarcity of exploits on the black market just drive the prices up, eventually putting the price over the infinite bug threshold?
Although I agree that if there is enough money involved someone will find a way even if it is an indirect exploit to an otherwise solid application.
Re: (Score:2)
So, you place a certain amount of value on being able to break into Apache webservers. If existing vulnerabilities get found and fixed, that will, indeed
Re: (Score:2)
Then it doesn't matter whether most people would find the effective hourly rate "insulting"; all that matters is that anybody who does find an exploit will turn it in to the company rather than selling it on the black market or exploiting it themselves.
You're assuming they can only choose one. What is there to prevent someone from exploiting the bug themselves for a while, selling it on the black market (to a discreet buyer), and still eventually turning it in to collect the bounty?
Re: (Score:2)
Right, I forgot to mention something: To prevent double-use like this, a company should say that you don't get paid until they've fixed the bug and issued a patch for it in their software, all without the exploit ever being spotted in the wild. (If someone else finds your vulnerability and exploits it in the wild, that's just bad luck. So to incentivize researchers, Microsoft might have to increase the prize money propo
Cost of formal verification (Score:2)
However.... (Score:3)
Paying people to find bugs and report them responsibly does give those people an incentive to not do something worse with them.
In a way, this economy takes possible would-be black hats and turns them into white hats. I suspect there are far fewer people capable of finding every last exploit than there are exploits, so if we keep those people busy and paid doing what they do best, at least they won't be doing something more nefarious.
Re: (Score:2)
Paying people to find bugs and report them responsibly does give those people an incentive to not do something worse with them.
Right, I forgot to mention something: To prevent double-use like this, a company should say that you don't get paid until they've fixed the bug and issued a patch for it in their software, all without the exploit ever being spotted in the wild. (If someone else finds your vulnerability and exploits it in the wild, that's just bad luck. So to incentivize researchers, Microsoft might have to increase the prize money proportionally, to make up for the fact that sometimes people won't get paid because their e
Re: (Score:2)
To prevent double-use like this, a company should say that you don't get paid until they've fixed the bug and issued a patch for it in their software, all without the exploit ever being spotted in the wild.
One problem with this is that there's already a documented history of companies rejecting bug reports and not paying the bounty, and then some time later including a fix for it in their periodic updates. It's basically the same process that causes a company's "app store" to reject a submitted tool to do a particular job, and then a few months later release their own app that does the same thing.
I know a good number of people who've been bitten by the latter, from both MS and Apple. In the case of a bug
Re: (Score:2)
But fortunately I don't think that's fatal to the analysis, because that just leaves the people who are willing to do the work to find vulns. All that matters is that they think the software manufacturer is more trustworthy and more likely to pay than the black-marketeers. T
Wrong (Score:5, Insightful)
There is no such thing as a "black market value" of a security vulnerability. Both the demand and supply have curves. I.e., there are security researchers who would demand, say, 1 million bucks before selling the bug to the CIA (because they view that action as unethical, illegal and risky career-wise) while they would gladly accept $10,000 in a responsible disclosure offer. Other color hats would go to the highest bidder. Similarly, there are large transaction costs and information asymmetries; it's not necessarily true that the demand and supply meet or that they can trust each other. A spy agency might rather develop in house (at a much larger cost) than shop around and raise suspicion.
In short, offering a non-trivial sum of money will always increase the costs of the average attacker and might completely shut off the low impact attacks like spam zombification, email harvesting etc., the developers of which can't invest millions in an exploit but would gladly use the free zero day+exploit just made public.
Security is all or nothing? (Score:2)
tldr (Score:5, Insightful)
Re: (Score:2)
To paraphrase:
IF you can easily find a serious security hole AND IF there are a very large number of other serious security holes AND IF there are also a very large number of less serious security holes, THEN there's no point in offering a bounty because the number of less serious security holes plus the number of more serious security holes is so large you'll never fix them all.
Yes that's true. But it doesn't take a page long monolog to say it.
However, IF your bounty turns up a security hole like Heartblee
Re: (Score:2)
Yes, I'm sure the article could have been made shorter.
Re: (Score:2)
Actually there are a few things:
All of those things baffle me
Re: (Score:2)
The fact that there are dozens of people responding as if I had said "literally infinitely many bug
Re: (Score:2, Funny)
Yes, but to be honest, if your car has windscreen wipers that leave a smear, why worry about the random chance of explosion?
Re:tldr (Score:5, Funny)
I did read far enough to realize that this person is an idiot.
So you only got to "Bennett Haselton writes:" then?
Re: (Score:2)
Yup. If you're just going to throw up your hands and say that bug-free software is impossible, why not just intentionally write software that doesn't work at all?
My Linux kernel HAS to be broken. So, why not just edit the source and put an infinite loop at the entry point? The resulting black screen when I boot up must be just as useless as the OS I'm typing on right now, right?
Re: (Score:2)
This is essentially like claiming that fixing the exploding gas tanks in a Pinto is of no use, because the car will still have other issues.
No, it's like claiming that the Pinto is always going to have an exploding gas tank issue, even if you fix the current cause.
Some cars (software) have so much going on that there will always be problems, unless you use a design process (coding language) that doesn't allow it to happen.
Or do you really think that Microsoft products with millions of lines of code are someday going to be bug free?
It's not the infinite bugs... (Score:2)
From basic programming to advanced (Score:2)
Like so many others, my first code was:
10 PRINT "HELLO WORLD"
We started out with some basic operations and grew from there. Unfortunately most people kept what they liked and discarded the rest. Things like data and input validation are seen as a waste of time by so many. Strings and other data which get passed to other processes in other languages (like SQL, or Windows image libraries) also warrant some inspection.
The types of vulnerabilities we find most often happen because programmers are neglecting
I hate to TL;DR, but... (Score:3)
...the notion that if you can't make software bug free, you may as well not bother is just stupid on a scale that's hard to comprehend. I skimmed as much of that article as I could stomach, but I'm done.
If we can't make cars crash proof, we may as well not make them safer.
If we can't make people immortal, we may as well stop advancing medicine.
You know what? If you can't find perfect stories, you may as well stop posting junk like this.
"you might as well not bother" (Score:2)
That's where you are wrong. (Score:2)
No need to read any further because that is an incorrect assumption.
There cannot be an infinite number of bugs (effectively or otherwise) because there is not an infinite amount of code NOR an infinite number of ways to run the finite amount of code.
From TFA:
Re: (Score:2)
From your inclusion of "really believe" I'd say that your question was rhetorical.
And wrong.
At $10 million per buffer overflow? Yes. There would be a finite number of buffer overflows that would be found and fixed.
At $10 million per X category of bug? Yes. There would be a finite number X's that w
WTF? (Score:2)
I don't think he understands how security works.
True for crap software (Score:2)
Security compiler? (Score:3)
Why not a security compiler? Seems some clever, creative hackers could work up something which would take raw code, subject it to some scrutiny and give output/feedback. Perhaps even a security switch to the standard compilers or even a security test suite. Shouldn't be that hard to do.
Re: (Score:2)
Why not a security compiler? Seems some clever, creative hackers could work up something which would take raw code, subject it to some scrutiny and give output/feedback. Perhaps even a security switch to the standard compilers or even a security test suite. Shouldn't be that hard to do.
Shouldn't be too hard... in the sense that solving the Halting Problem shouldn't be too difficult. I conjecture that with an appropriate set of assumptions it's possible to use Rice's Theorem to prove that security analysis is equivalent to the Halting Problem.
Of course, static analysis can catch some vulnerabilities, and can highlight potential vulnerabilities. That's what Coverity does. But I don't think any mechanical process can defeat a creative attacker.
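As a rough illustration (hypothetical code, not anything a particular tool actually reports): the first function below is the kind of thing static analysis can flag mechanically, because everything is known at compile time; the second is where it gets hard, because whether the copy is safe depends on every caller and on runtime input.

#include <stdio.h>
#include <string.h>

/* Provable at compile time: the destination is 8 bytes and the source
 * literal is longer, so an analyzer can flag this without running anything. */
static void obviously_bad(void)
{
    char buf[8];
    strcpy(buf, "this string is clearly longer than eight bytes");
    printf("%s\n", buf);
}

/* Much harder mechanically: safety depends on every caller guaranteeing
 * len <= sizeof(buf), which is a whole-program, runtime-dependent property. */
static void maybe_bad(const char *src, size_t len)
{
    char buf[64];
    memcpy(buf, src, len);
    buf[sizeof(buf) - 1] = '\0';
    printf("%s\n", buf);
}

int main(void)
{
    const char *input = "hello";
    maybe_bad(input, strlen(input) + 1); /* this particular caller happens to be safe */
    (void)obviously_bad;                 /* defined only to show the provable case; never called */
    return 0;
}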
I think you're working from a few false assumption (Score:5, Insightful)
First, bugs in a given program are not infinite in number. By definition. Because the code itself is finite. Finite code cannot have infinite bugs. Also, due to the nature of code and how it is created, patching one bug usually also takes care of many others. If you have a buffer overflow problem in your input routine, you need only patch it once, in the routine. Not everywhere that routine is being called.
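A rough sketch of that "patch it once, in the routine" point, with names and sizes invented for the example: every caller reads untrusted input through one helper, so the bounds check lives (and gets fixed) in exactly one place.

#include <stdio.h>
#include <string.h>

/* The single choke point for copying untrusted input into fixed buffers.
 * If a length-handling bug is ever found here, one patch covers every caller. */
static void read_field(char *dst, size_t dst_size, const char *untrusted)
{
    size_t n;
    if (dst_size == 0)
        return;
    n = strlen(untrusted);
    if (n >= dst_size)
        n = dst_size - 1;       /* the bounds check lives here and only here */
    memcpy(dst, untrusted, n);
    dst[n] = '\0';
}

int main(void)
{
    char user[16], host[16];
    /* Two of many hypothetical call sites; neither repeats the check. */
    read_field(user, sizeof user, "a-deliberately-overlong-user-name");
    read_field(host, sizeof host, "example.org");
    printf("user=%s host=%s\n", user, host);
    return 0;
}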
I have spent a few years (closer to decades now) in IT security with a strong focus on code security. In my experience, the effort necessary to find bugs is not linear. Unless the code changes, bug hunting becomes increasingly time consuming. It would be interesting to actually do an analysis of it in depth, but from a gut feeling I would say it's closer to a logarithmic curve. You find a lot of security issues early in development (you have a lot of quick wins easily), issues that can easily even be found in a static analysis (like the mentioned overflow bugs, like unsanitized SQL input and the like), whereas it takes increasingly more time to hunt down elusive security bugs that rely on timing issues or race conditions, especially when interacting with specific other software.
Following this I cannot agree that you cannot "buy away" your bug problems. A sensible approach (ok, I call it sensible 'cause it's mine) is to get the static/easy bugs done in house (good devs can and will actually avoid them altogether), then hire a security analyst or two and THEN offer bug hunting rewards. You will usually only get a few to deal with before it gets quiet.
Exploiting bugs follow the same rules that the rest of the market follows: Finding the bug and developing an exploit for it has to be cheaper than what you hope to reap from exploiting it. If you now offer a reward that's level with the expected gain (adjusted by considerations like the legality of reporting vs. using it and the fact that you needn't actually develop the exploit), you will find someone to squeal. Because there's one more thing working in your favor: Only the first one to squeal gets the money, and unless you know about a bug that I don't know about, chances are that I have a patch done and rolled out before you got your exploit deployed. Your interest to tell me is proportional to how quickly I react to knowing about it. Because the smaller I can make the window in which you can use the bug, the smaller your window gets to make money with the exploit, and the more interesting my offer to pay you to report the bug gets.
Re: (Score:2)
First, bugs in a given program are not infinite in number. By definition. Because the code itself is finite. Finite code cannot have infinite bugs.
I agree... I did write, "Obviously the amount of vulnerabilities is not really infinite — you can only do finitely many things to a product in a finite amount of time, after all — but suppose it's so close to infinite as to make no difference, because the manufacturer would never be able to fix all the vulnerabilities that could be found for that amount of effort."
Also, due to the nature of code and how it is created, patching one bug usually also takes care of many others. If you have a buffer overflow problem in your input routine, you need only patch it once, in the routine. Not everywhere that routine is being called.
Right, I also said, "I'm hand-waving over some details here, such as the disputes over whether two different bugs are really conside
duh? (Score:2)
More to bounties than bugs (Score:2)
Stupid Argument (Score:2)
We should stop looking for bugs because we can never find them all. Maybe we should stop prosecuting criminals because we can't seem to stop finding more. There will always be murderers, so let's make killing legal.
Inductive Fallacy (Score:2)
This analysis is based on an erroneous assumption which is derived from an inductive fallacy. Specifically, the author assumes that because one researcher who found one bug believes he could have found a second for roughly the same level of effort, the process could be repeated indefinitely. I'm certain that if Kohno were asked he would deny the validity of this assumption. I'm sure he would say that his team could find a handful of similar bugs for a similar level of effort
Core assumptions are wrong (Score:2)
Then he assumed that given y effort you could then find bug #2. Again a reasonable assumption.
Third assumption, that x=y. This is FALSE. For that assumption to be true, bugs would have to be found randomly, not by effort. The truth
WRONG! (Score:2)
i'm missing something (Score:2)
Now, in theory, if there are truly infinitely many such flaws to be found and subsequent ones aren't any har
Eliminating buffer overflows (Score:2)
The problem is C. Programs in all the languages that understand array size (Pascal, Modula, Ada, Go, Erlang, Eiffel, Haskell, and all the scripting languages) don't have buffer overflow problems.
It's not an overhead problem. That was solved decades ago; compilers can optimize out most subscript checks within inner loops.
I've proposed a way to retrofit array size info to C, [animats.com] but it's a big change to sell. There are many C programmers who think they're so good they don't need subscript checks. Experienc
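To give a rough idea of the general approach (this is a generic sketch of carrying the size with the array in C, not the specific proposal at that link): a checked accessor turns an out-of-range subscript into a loud failure instead of a silent overflow.

#include <assert.h>
#include <stdio.h>

/* A pointer that knows its length, plus an accessor that checks every subscript. */
struct int_slice {
    int    *data;
    size_t  len;
};

static int slice_get(struct int_slice s, size_t i)
{
    assert(i < s.len);  /* production code might return an error code instead */
    return s.data[i];
}

int main(void)
{
    int backing[4] = { 10, 20, 30, 40 };
    struct int_slice s = { backing, 4 };

    printf("%d\n", slice_get(s, 2));    /* in range: prints 30 */
    /* slice_get(s, 7) would abort via the assert rather than read past the array. */
    return 0;
}

And per the overhead point above, when the index is a loop variable a compiler can usually hoist or eliminate checks like this.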
You can't "inspect in" quality (Score:2)
If we take a less-than-good-enough-quality product and iteratively inspect and rework/repair each defect, we will produce a good-enough product. Intuition seems to suggest that this *should* work! Can't you polish a rough piece of metal until it shines? Heck, Mythbusters proved you actually can polish a turd.
The manufacturing industry figured out a long time ago that you can't inspect in quality. Your process has to produce a product-that-meets-customer-requirements, and if it doesn't you have to fix yo
Infinite? (Score:2)
Re: (Score:3, Funny)
Or it's like pretty much anything bad. Just because you cannot eliminate it completely doesn't mean you give up fighting. Murder is bad, and no matter how many we arrest, there will always be more murderers. That doesn't mean we should eliminate the police.
This "article" might actually be the dumbest thing Bennett Haselton has ever written, which puts it in legitimate contention for dumbest thing anyone has ever written.
Re: (Score:2, Funny)
They're like the small kitchen cockroaches in suburbia. You never can get rid of them, so all you can do is mitigate periodically
I don't like Bennett Haselton's posts either, but isn't that a bit harsh?
Re: (Score:3)
Look where that got the electrical engineers.
http://news.slashdot.org/story... [slashdot.org]