Bug Bounties Don't Help If Bugs Never Run Out

Bennett Haselton writes: "I was an early advocate of companies offering cash prizes to researchers who found security holes in their products, so that the vulnerabilities could be fixed before the bad guys exploited them. I still believe that prize programs can make a product safer under certain conditions. But I had naively overlooked that under an alternate set of assumptions, you might find that not only do cash prizes not make the product any safer, but that nothing does — you might as well not bother fixing certain security holes at all, whether they were found through a prize program or not." Read on for the rest of Bennett's thoughts.

In 2007 I wrote:

It's virtually certain that if a company like Microsoft offered $1,000 for a new IE exploit, someone would find at least one and report it to them. So the question facing Microsoft when they choose whether to make that offer, is: Would they rather have the $1,000, or the exploit? What responsible company could possibly choose "the $1,000"? Especially considering that if they don't offer the prize, and as a result that particular exploit doesn't get found by a white-hat researcher, someone else will probably find it and sell it on the black market instead?

Well, I still believe that part's true. You can visualize it even more starkly this way: A stranger approaches a company like Microsoft holding two envelopes, one containing $1,000 cash, and the other containing an IE security vulnerability which hasn't yet been discovered in the wild, and asks Microsoft to pick one envelope. It would sound short-sighted and irresponsible for Microsoft to pick the envelope containing the cash — but when Microsoft declines to offer a $1,000 cash prize for vulnerabilities, it's exactly like choosing the envelope with the $1,000. You might argue that it's "not exactly the same" because Microsoft's hypothetical $1,000 prize program would be on offer for bugs which haven't been found yet, but I'd argue that's a distinction without a difference. If Microsoft did offer a $1,000 prize program, it's virtually certain that someone would come forward with a qualifying exploit (and if nobody did, then the program would be moot anyway) — so both scenarios simply describe a choice between $1,000 and finding a new security vulnerability.

But I would argue that there are certain assumptions under which it would make sense not to offer a cash prize program — and, in keeping with my claim that this is equivalent to the envelope-choice problem, under those assumptions it actually would make sense for Microsoft to turn down the envelope containing the vulnerability and take the cash instead. (When I say it would "make sense", I mean both from a profit-motive standpoint and for the purposes of protecting the security of their users' computers.)

On Monday night I saw a presentation put on by Seattle's Pacific Science Center "Science Cafe" program, in which Professor Tadayoshi Kohno described how he and his team were able to defeat the security protocols of a car's embedded computer system by finding and exploiting a buffer overflow. That's scary enough, but what was more interesting was how his description of the task made it sound like a foregone conclusion that they would find one — you simply sink this many person-hours into the task of looking for a buffer overflow, and eventually you'll find one that enables a complete takeover of the car. (He confirmed to me afterwards that, in his estimation, once the manufacturer had fixed that vulnerability, his team could have found another one with the same amount of effort.)

More generally, I think it's reasonable to assume that for a given product, there is a certain threshold amount of money/effort/person-hours such that if you throw that much effort at finding a new security vulnerability, you will always find a new one. Call this the "infinite bug threshold." Obviously the number of vulnerabilities is not really infinite — you can only do finitely many things to a product in a finite amount of time, after all — but suppose it's so close to infinite as to make no difference, because the manufacturer would never be able to fix all the vulnerabilities that could be found for that amount of effort. I'm sure that $10 million worth of effort, paid to the right people, will always find a new security vulnerability in the Apache web server; the same is probably true for some dollar amount much lower than that, and you could call that the "infinite bug threshold." On the other hand, by definition of that threshold, the number of vulnerabilities that can be found for any amount of money below it will be finite and manageable.

(I'm hand-waving over some details here, such as disputes over whether two different bugs are really "distinct," or the fact that once you've found one vulnerability, the cost of finding other closely related vulnerabilities in the same area of the product often goes way down. But I don't think these complications negate the argument.)
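
To make the definition concrete, here's a toy sketch in Python (my own illustration, not from the article's sources; every number and name in it is invented): below the threshold the pool of findable bugs is finite, and at or above it a determined search always turns up one more.

    # Toy model of the "infinite bug threshold" (all numbers invented).
    # Below the threshold the pool of findable bugs is finite; at or
    # above it, a determined search always succeeds, by definition.

    INFINITE_BUG_THRESHOLD = 500_000  # hypothetical dollars of effort

    # Hypothetical discovery costs of the finitely many bugs findable
    # below the threshold:
    DISCOVERY_COSTS = [20_000, 45_000, 45_000, 80_000, 150_000, 300_000]

    def bugs_findable_for(effort: float) -> float:
        """How many distinct bugs can be found for `effort` dollars apiece?"""
        if effort >= INFINITE_BUG_THRESHOLD:
            return float("inf")  # the well never runs dry up here
        return sum(1 for cost in DISCOVERY_COSTS if cost <= effort)

    print(bugs_findable_for(100_000))  # 4   -- finite and manageable
    print(bugs_findable_for(600_000))  # inf -- can never fix them all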

Meanwhile, you have the black-market value of a given type of vulnerability in a given product. This may be the value that you could actually sell it for on the black market, or it may be the maximum amount of effort that a cyber-criminal would invest in finding a new vulnerability. If a cyber-criminal will only start looking for a particular type of vulnerability if they estimate they can find one for less than $50,000 worth of effort, then $50,000 is how much that type of vulnerability is worth to them.

Now consider the case where

infinite bug threshold > black-market value

This is the good case. It means that if the manufacturer offered a prize equal to the black-market value of an exploit, any rational security researcher who found a vulnerability could sell it to the manufacturer rather than offering it on the black market (assuming they would find the manufacturer more reliable and pleasant to deal with than the Russian cyber-mafia). And since we're below the infinite bug threshold, by definition the manufacturer only has to pay out a finite and manageable number of those prizes before all such vulnerabilities have been found and fixed. I've made a couple of optimistic assumptions here, such as that the manufacturer would be willing to pay prizes in the first place, and that they could correctly estimate the black-market value of a bug. But at least there's hope.

On the other hand, if

infinite bug threshold < black-market value

everything gets much worse. This means that no matter how many vulnerabilities you find and fix, by the definition of the infinite bug threshold there will always be another vulnerability that a black-hat will find it worthwhile to discover and exploit.

And that's the pessimistic scenario, where it doesn't really matter whether Microsoft chooses the envelope with the vulnerability or the envelope with the $1,000, if the infinite bug threshold happens to be below $1,000. (Let's hope it's not that low in practice! But the same analysis applies to any higher number.) If the black-market value of a bug is at least $1,000, so that's what an attacker is willing to spend to find one, and that figure is above the infinite bug threshold, then you might as well not bother fixing any particular bug at that level, because the attacker can always just find another one. It doesn't even matter whether you have a prize program or not; the product is in a permanent state of unfixable vulnerability.
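
The two cases boil down to a single comparison, which can be written as a short Python sketch (the function name and dollar figures here are mine, for illustration only):

    # Sketch of the inequality that separates the two cases
    # (hypothetical helper; the numbers below are invented).

    def product_is_securable(infinite_bug_threshold: float,
                             black_market_value: float) -> bool:
        """True if a bounty matching the black-market value can eventually
        drain the finite pool of bugs findable for that much effort."""
        return infinite_bug_threshold > black_market_value

    # Good case: finite pool, a bounty program can win.
    print(product_is_securable(infinite_bug_threshold=500_000,
                               black_market_value=50_000))  # True

    # Pessimistic case: attackers can always find one more bug for
    # less than it's worth to them, bounty or no bounty.
    print(product_is_securable(infinite_bug_threshold=40_000,
                               black_market_value=50_000))  # False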

At that point, the only ways to flip the direction of the inequality, to reach the state where "infinite bug threshold > black-market value", would be to decrease the black-market value of the vulnerability or to increase the infinite bug threshold for your product. To decrease the black-market value, you could impose more severe punishments on cyber-criminals, making them less willing to commit risky crimes using a security exploit. Or you could implement greater checks and balances to prevent financial fraud, which decreases the incentives for exploits. But these are society-wide changes that are not under the control of the software manufacturer. (I'm not sure there's anything a software company could do by itself to lower the black-market value of a vulnerability in its product, other than voluntarily decreasing its own market share so that there are fewer computers that can be compromised using its software! Can you think of any other way?)

Raising the infinite bug threshold for the product, on the other hand, may require rewriting the software from scratch, or at least its most vulnerable components, paying stricter attention to security-conscious programming standards. Professor Kohno said after his talk that he believed that if the programmers of the car's embedded systems had followed better security coding practices, such as the principle of least privilege, his team would not have found vulnerabilities so easily.
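
For concreteness, here is a minimal sketch of the principle of least privilege as it appears in a generic Unix daemon (my example, with hypothetical helper names; it has nothing to do with the specific car systems Kohno studied): perform the one operation that needs root, then irreversibly drop to an unprivileged user before touching any untrusted input.

    import os

    def drop_privileges(uid: int = 65534, gid: int = 65534) -> None:
        """Switch to an unprivileged uid/gid (65534 is "nobody" on many
        Unix systems). Must be called while still running as root."""
        os.setgroups([])  # shed supplementary groups first
        os.setgid(gid)    # group before user -- after setuid we'd lack
        os.setuid(uid)    #   the permission to change either one again

    # Typical shape of a privilege-separated server (helpers hypothetical):
    # sock = bind_privileged_port(80)  # the only step that needs root
    # drop_privileges()
    # serve_untrusted_clients(sock)    # a bug here now yields only "nobody"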

I still believe that cash prizes have the potential to achieve security utopia, at least with regard to the particular programs the prizes are offered for — but only where the "infinite bug threshold > black-market value" inequality holds, and only if the company is willing to offer the prizes. If the software is written in a security-conscious manner, such that the infinite bug threshold is likely to be higher than the black-market value, and the manufacturer offers a vulnerability prize at least equal to the black-market value, then virtually all vulnerabilities which can be found for less than that much effort will be reported to the manufacturer and fixed. Once that nirvana has been achieved, for an attacker to find a new exploit, the attacker would have to be (1) irrational (spending an estimated $70,000 to find a vulnerability that is only worth $50,000), and (2) evil beyond mere profit motive (using the bug for $50,000 of ill-gotten gain, instead of simply turning it in to the manufacturer for the same amount of money!). That's not logically impossible, but we would expect it to be rare.
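
Those two departures from rational profit-seeking can be written out as a predicate (again just a sketch; the function and its parameter names are invented):

    # What an attacker must be for a new in-the-wild exploit to appear
    # once the nirvana condition holds (invented helper, invented numbers).

    def attacker_must_be(discovery_cost: float,
                         black_market_value: float,
                         bounty: float) -> dict:
        """The departures from rational profit-seeking a new exploit implies."""
        return {
            "irrational": discovery_cost > black_market_value,
            "beyond profit motive": bounty >= black_market_value,
        }

    # The article's numbers: a $70,000 search for a $50,000 bug, with a
    # matching $50,000 bounty on offer.
    print(attacker_must_be(70_000, 50_000, 50_000))
    # {'irrational': True, 'beyond profit motive': True}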

On the other hand, for programs and classes of vulnerabilities where "infinite bug threshold < black-market value", there is literally nothing that can be done to make them secure against an attacker who has time to find the next exploit. You can have multiple lines of defense, like installing anti-virus software on your PC in case a website uses a vulnerability in Internet Explorer to try to infect your computer with a virus. But Kaspersky doesn't make anything for cars.


Comments Filter:
  • by Anonymous Coward on Friday April 18, 2014 @01:49PM (#46789015)

    Bennett, I think it's mostly that you simply DON'T REALLY HAVE ANYTHING THAT INTERESTING TO SAY.

    What you've presented us with is a smugly self-congratulatory "inner dialogue" in which you discovered that "at some point, it becomes more expensive to find bugs than the bug bounty will compensate you!"

    Anybody with more than a second-grade education can reason this out for themselves, but you saw fit to drone on for pages about it. You do this a lot. It's not informative, interesting, or insightful. You should probably either: a) up your game and offer us some real thoughtful insight; or b) shut the fuck up and take it back to livejournal.

    Congratulations, you've discovered the law of diminishing returns - you're a fucking legend.
