Security The Almighty Buck

Bill Gates Should Buy Your Buffer Overruns

Slashdot regular Bennett Haselton has written in with his latest essay. He starts "WabiSabiLabi generated some controversy recently by announcing their eBay-like site for security researchers to sell security exploits to the highest bidder. But WabiSabiLabi didn't create the black-and-grey market for security exploits, they merely helped draw attention to it. There's nothing that companies like Microsoft can do about the black market where security exploits sell for tens of thousands of dollars, but there's one obvious thing they can do to help protect users: offer to buy up the security vulnerabilities themselves. If they did that, then the exploits would probably never make it onto a black-market auction in the first place, because the "white hat" researchers would have found them and reported them first. Thus I think WabiSabiLabi is doing the world a favor, by shining a spotlight on the black market that thrives when companies won't pay for security bug reports." Click that magical little read more link below to continue the thought.

Really, what is a good argument against companies paying for security exploits? It's virtually certain that if a company like Microsoft offered $1,000 for a new IE exploit, someone would find at least one and report it to them. So the question facing Microsoft when they choose whether to make that offer is: would they rather have the $1,000, or the exploit? What responsible company could possibly choose "the $1,000"? Especially considering that if they don't offer the prize, and as a result that particular exploit doesn't get found by a white-hat researcher, someone else will probably find it and sell it on the black market instead. (Throughout this discussion, I'm using Microsoft as a stand-in for all companies which have products in widespread use, and which do not currently pay for security exploits even though they could obviously afford to.)

Perhaps you say that you would be willing to report bugs to Microsoft for free, and I respect people who do that out of selflessness, but that's not the point. Even if you and some other people would do "white-hat testing" for free, more people would do it if there were prizes. The number of people willing to do security testing for free has not been enough to keep exploits from being found and sold on the black market -- but if Microsoft offered enough money, it would be. Obviously if Microsoft offered more than the black-market prices, everyone would just sell their exploits to them. But Microsoft could probably offer much less than the black-market prices and still put the black market out of business, because there are lots of researchers who wouldn't sell exploits on the black market even for tens of thousands of dollars, but who would be willing to participate in a legal Microsoft "white hat" program for much less money.

Microsoft would undoubtedly say that they do their own in-house testing, and indeed the offer of a prize should not be used as a substitute for good security testing within a company. But at the same time, the fact that a company does their own testing isn't a good reason for not offering a prize. If a company says that they already do their own in-house security audits to catch as many bugs as they can, that still doesn't answer the question: given that a cash offer would probably result in an outsider finding a new exploit that they missed, why wouldn't they want to take it? Even if there are already outsiders who willingly find new exploits and turn them over to Microsoft for free, there's almost certainly at least one more exploit out there that would be found if they offered a cash prize. (And if the cash prize doesn't turn up any new exploits, then the company doesn't pay out and has lost nothing.)

I've done security consulting for companies like Google and Macromedia who paid me "by the bug", so you might think I'm biased in favor of more such "bounty" programs because I think I could make money off of them. Actually, I think that if Microsoft and most other large software companies offered security hole bounties to everyone in the world, almost all exploits would be picked clean by other people, and my chances of getting anything out of it would go way down, and there would be one less buffer protecting me from having to get a real job. But most people's computers would be safer.

Microsoft does in fact "pay" for security exploits in their own way, by crediting people in their security bulletins. To some people, who report exploits in hopes of being recognized, this is apparently enough. And there are third-party companies like iDefense who will buy your security exploits and then use them to gain reputation-credits for themselves, by handing them over for free to the software developer and warning their own clients about the potential risks. But there are a lot of people, including me, who have found exploits in the past but don't consider the benefits of being mentioned in a Microsoft security bulletin to be worth the effort of finding a new one. And even the benefits that iDefense gets from reporting security holes are evidently not sufficient for them to offer enough money for exploits to compete with the black-market prices (if iDefense got that much benefit out of it, they'd be able to offer so much money that nobody would sell exploits on the black market). So using recognition as payment is evidently not enough; as Lord Beckett says, "Loyalty is no longer the currency of the realm; I'm afraid currency is the currency of the realm."

A cash prize program might mean that some people get mad when they are turned away for offering "exploits" that don't really qualify, but so what? What are they going to do for revenge, release their "exploit" into the wild? If it's not a real exploit, then it won't do any harm, and if it is a real exploit, then Microsoft should have paid them after all! Some people might threaten to sue if they aren't awarded prizes, even if the rules of the program state clearly that Microsoft is the final arbiter of what counts as an exploit. Maybe in some rare cases they would even win. But all of this could be considered a cost of running the program, just like the cost of giving out the prizes themselves -- and all insignificant compared to the cost of an exploit that gets released into the wild and allows a malicious site to do "drive-by installs" of spyware onto people's machines.

Probably the real reason Microsoft doesn't pay for security exploits is that they don't pay the full price for those drive-by installs and other problems when a new exploit is discovered. I've heard hard-core open-source advocates say that either (a) Microsoft should be held liable for the cost of exploits committed using flaws in their software, or that (b) users of Microsoft software should be held liable for exploits committed through their machines (which would drive up the cost of using Windows and IE to the point where nobody would use it). If that happened, Microsoft probably would pay for security exploits to forestall disaster. But let's make the reasonable assumption that neither of those liability rules is going to come to pass. The real price that Microsoft currently pays for security exploits is in terms of reputation, and the price they're paying right now is too low, because people don't realize that Microsoft could find and fix a lot more bugs by spending only a tiny amount of money -- but chooses not to. Despite all the snickering when "Microsoft" and "security" are used in the same sentence, most people seem to believe that Microsoft is doing everything they can to prevent users from being exploited. But as long as Microsoft doesn't pay for security holes, they're emphatically not doing "everything they can".

It's not that I think security bosses at Microsoft are trying to screw anyone over. They probably just have an aversion to the idea of paying for security holes, and what I'm arguing is that such an aversion is irrational. The people they would be paying money to are not criminals or bad people, they're legitimate researchers who just can't afford to do work for Microsoft for free when they could be doing something else for money. Offering cash will bring in new exploits, and every exploit that is reported and fixed is one that can't be sold on the black market later.

There are some interesting details that would have to be worked out about how such a program would be implemented. For example, what happens if Bob reports an exploit, and then Alice later reports the same exploit, before Microsoft has gotten a chance to push the patch out? Microsoft wouldn't want to pay $1,000 to both of them, because then whenever Bob found an exploit, he could collude with Alice so that they both "independently" reported the same bug and got paid twice. Microsoft could pay only Bob, but Alice could get so disillusioned at getting paid nothing that she might stop helping entirely. My own suggestion would be to split the money between all researchers who report the same bug in the time window before the fix is pushed out. If 10 researchers happened to report the same bug and each only got a paltry $100, some of them would quit in disgust, but if researchers start to leave because the average payout-per-person has fallen too low, then that will drive the average payout back up, so the number of active researchers stays in equilibrium.

Another issue: What happens if a researcher reports an exploit confidentially, and then the next day, the exploit appears in the wild? If Microsoft's policy was to pay for the exploit anyway, a researcher would have no incentive not to sell the exploit twice, once to Microsoft and again on the black market (whereupon it might start being used in the wild). On the other hand, if Microsoft refused to pay for exploits that were released in the wild before they issued a patch, that might leave many researchers feeling cheated if they turned in a genuine exploit and got nothing just because someone else sold it on the black market before the patch came out. My suggestion would be to simply pay for exploits even if they did subsequently get released on the black market -- on the theory that of the white-hat researchers who turn in bugs to Microsoft, most would be ethically opposed to selling exploits to black marketeers, so they shouldn't be punished if the exploit ends up on the black market, since they probably weren't the ones who put it there. Another option would be to make the payout so large that even if researchers got no payment when an exploit leaked into the wild before a patch was issued, the payouts from the times they did get paid would more than make up for it.

But whatever rules are decided upon, there should be some sort of monetary reward for people who confidentially report security flaws to big software companies. Whatever you can say about the merits of rewarding people through "recognition", or through social pressures to practice "responsible disclosure", the one obvious fact is that it hasn't been enough -- exploits still get sold on the black market, and every exploit that gets sold on the black market would have been reported to Microsoft if they'd offered enough money. The talent is out there that could find these bugs and get them fixed; most of those people just can't afford to donate the work for free. But the amount of money Microsoft would have to pay them is far less than the benefits that would accrue to people all over the world in terms of fewer drive-by spyware installs, fewer viruses, and fewer security breaches. And if these benefits were reflected back at Microsoft in terms of greater user confidence and fewer snide jokes about "Microsoft security", then everybody would win all around. There are no barriers to making this happen, except for a mindset that it's "bad" to pay for security research. But if you prevent millions of Internet Explorer users from being infected with spyware, you deserve to at least get paid what Bill Gates earns in the time it took you to read this sentence.


Comments:
  • Economics (Score:5, Insightful)

    by gad_zuki! ( 70830 ) on Wednesday July 18, 2007 @12:06PM (#19902361)
    If MS offers $10,000 per exploit then that's going to be the minimum bid in the market. Someone will then offer $10,500 and the enterprising hacker will go for the extra cash. I don't see how MS's involvement can help this.

    What might be more interesting is to dock $10,000 from the salaries of the security team every time someone finds a serious exploit. Sometimes punishments are far more effective than rewards.
  • outsourced testing (Score:4, Insightful)

    by ecklesweb ( 713901 ) on Wednesday July 18, 2007 @12:07PM (#19902381)
    Almost sounds like an argument to outsource testing to the general public and pay them for it. Not sure why MS would do this when they've been outsourcing testing to the general public for years and charging licensing fees for it!

    Cynicism aside, do you think that it really makes business sense for MS to pay for vulnerabilities? Has their revenue really been hurt that badly from their current security practices?
  • by illegalcortex ( 1007791 ) on Wednesday July 18, 2007 @12:08PM (#19902391)
    You could, and that would probably still be a GOOD thing. Because if MS fixed it quickly, it means those who purchased the exploit would get a lot less for their money. Therefore, they'd be less willing to buy exploits in the future, or at least pay less.

    Such a market wouldn't be about *exclusive* knowledge of exploits.
  • Yeah this (Score:4, Insightful)

    by Dunbal ( 464142 ) on Wednesday July 18, 2007 @12:09PM (#19902399)
    Makes much more sense than actually writing secure software in the first place, doesn't it?

    This is a silly idea. It assumes that if Microsoft pays someone to keep quiet about a security vulnerability, no one, ever, will independently discover this SAME vulnerability. Human nature dictates that when you hand out money, you will quickly have people waiting in line.

    Reminds me of the Romans paying the barbarians NOT to invade them. Sure, give your enemy an income and make him rich. Makes a LOT of sense...
  • by vortex2.71 ( 802986 ) on Wednesday July 18, 2007 @12:14PM (#19902477)
    Who cares how many times they sell it? The point is that Microsoft can buy it and then fix it, thus eliminating the market value of the exploit. If someone can sell it to other people then good for them. It's still in Microsoft's best interest to buy it as early as possible and fix it as early as possible.
  • Re:Yeah this (Score:3, Insightful)

    by khallow ( 566160 ) on Wednesday July 18, 2007 @12:19PM (#19902571)
    Sounds more analogous to bribing some barbarians to tell you what the tribe is thinking of doing. Then you can patch up your defenses and anticipate the sometimes enemy.
  • Re:Economics (Score:5, Insightful)

    by cowscows ( 103644 ) on Wednesday July 18, 2007 @12:21PM (#19902615) Journal
    Yeah, except that you'd very quickly find yourself without a security team.
  • Using an exploit maliciously is a bad thing, but finding the exploit is not. In fact, it's a good thing. Hence, it should be rewarded.
  • Re:Economics (Score:3, Insightful)

    by MartinG ( 52587 ) on Wednesday July 18, 2007 @12:24PM (#19902661) Homepage Journal
    What might be more interesting is to dock $10,000 from the salaries of the security team every time someone finds a serious exploit

    Who the hell is going to work there with such an utterly idiotic policy?

    Surely one aspect of this is that they should be looking to attract good people to the team. Threats of "fines" are hardly the way to do it.
  • Good idea but (Score:4, Insightful)

    by sheriff_p ( 138609 ) on Wednesday July 18, 2007 @12:25PM (#19902681)
    I think this is a good idea, but it's unlikely to happen - by buying such a thing, Microsoft sets themselves up in a position of liability - something that software vendors have so far largely managed to avoid.

    Say they buy one exploit, but not another, and some company gets caught by the other. Microsoft have put themselves in a pretty nasty legal liability position there.

    Additionally, it'll look a lot like endorsement of black-hat practices, something MS will want to avoid...
  • by benhocking ( 724439 ) <benjaminhocking@nOsPAm.yahoo.com> on Wednesday July 18, 2007 @12:26PM (#19902699) Homepage Journal
    There are a lot of intelligent people who would be willing to do it legally for far cheaper prices than the black market will pay to do it illegally. Not everyone is immoral. Personally, I'd like to believe that most people are basically good people.
  • by Flying pig ( 925874 ) on Wednesday July 18, 2007 @12:29PM (#19902753)
    It makes far more sense to be a legal, well rewarded security researcher with a useful CV than a criminal. Nothing gives a person ethics like being well paid for it.
  • by altoz ( 653655 ) on Wednesday July 18, 2007 @12:31PM (#19902779)
    That'll work once but won't work the next time. Any market has its reputation system and if you're known to sell to both (an obvious thing since Microsoft will have patched it shortly), I'm sure people will bid less and less for your exploit.

    Plus, do you really want to screw over black market customers? They're not your typical customers. I'm sure they'll do a lot worse than not shopping from you again if you screw them over (think identity theft or worse).
  • by mr_mischief ( 456295 ) on Wednesday July 18, 2007 @12:31PM (#19902783) Journal
    I'm not sure it's cynicism when it's so obviously true. In Microsoft's defense, it's very difficult to properly test everything for stability and performance against all the third-party hardware and software out there.

    It's not that difficult, though, to check for buffer overruns, array bounds violations, and stack overflows these days. It's also not that difficult to use proper security protocols as opposed to crap like PPTP, for that matter.

    I think Microsoft's public image has been hurt pretty badly by the likes of Nimda, Blaster, Melissa, and similar widespread attacks. Macs, Solaris, and Linux machines have strong arguments for them, but part of what market share they get would default to Microsoft if people hadn't had such poor experiences with Windows and Office. Heck, I'm a Linux guy, but I'm writing this from an XP box because for some things I still need Windows.

    If Microsoft and their Windows team did more than pay lip service to POSIX, security in depth, minimal daemon/services profiles, a powerful command line, standard networking instead of their proprietary stuff, and proactive security audits then lots of people who run Linux, BSD, Solaris, and OS X would never use anything but Windows. Some of us still would, but if Windows had enough POSIX support to run everything written for Unix, had the security of a decent Linux distro, only enabled what services you actually need running, and had a record of fewer actual vulnerabilities (and not just a comparison that their core OS has fewer "critical" bugs than all of the software that ships with RHEL, when RedHat is more likely to call something "critical" anyway), then why would people bother? OS X would be just for video, audio, and graphics people. Solaris, AIX, and other commercial Unixes would be real niche players. Linux and the BSDs would be mainly curiosities for tinkerers, just as MS tries to portray them, and would have only small installations in the business world. There'd still be a place for all of these, but they'd have a much harder time of it if Windows was real quality work in these areas.

    In short, the embrace and extend tactics, the FUD MS spreads, and the NIH syndrome are finally catching up to Microsoft. So yes, I'd say that although they're not hurting much, what little pain they're having is in large part caused by their security practices.
  • Wouldn't work. (Score:3, Insightful)

    by Spy der Mann ( 805235 ) <`moc.liamg' `ta' `todhsals.nnamredyps'> on Wednesday July 18, 2007 @12:41PM (#19902917) Homepage Journal
    OK, give us your info, and we'll pay you if we consider it's genuine.
    (2 days later) Guess what, it's not a true exploit. Sorry, no pay.
    (1 week later, at Windows Update) We've released a patch for a recently discovered vulnerability!
  • by BeProf ( 597697 ) on Wednesday July 18, 2007 @12:42PM (#19902933)
    Rewarding unethical behavior?

    What could *possibly* go wrong?
  • Re:Economics (Score:3, Insightful)

    by fermion ( 181285 ) on Wednesday July 18, 2007 @12:43PM (#19902951) Homepage Journal
    In summary, the exploit will generally be more valuable to the attacker than the defender, for many different reasons. Mainly, a baddie might buy ten exploits for $150K, use one or two, perhaps make $200K, and while the profit margin may not be great, a profit might at least be generated. On the other hand, MS might get those same exploits for $100K, but where is the upside? Did the exploits cost them anything? No, they externalize all those expenses to the government and the customer. Sure they can afford to lose that $100K; they probably lose that much every week on Xbox. But unlike Xbox, buying exploits does not buy them market share, at least not yet.

    Then we have more insidious versions of this story. Sell two low-level exploits to MS, get $20K. Use the $20K to capitalize a third major exploit. Such a plan, in recursion, will finance quite a nice bot shop with no money down.

    Ultimately, this is not something that will be solved by hiring people to chase the horses after the barn door has been left open. It is similar to missile defense. In principle it is not all that hard (although in practice it is really hard), but even after solving the really hard physics issues, one realizes that, for instance, once the launch vehicle has released the payload, say 100 projectiles, only one of which is live, it becomes a numbers game: the defender has to pay for 100 live interceptors, while the attacker only has to pay for one live munition.

    So, we get back to the recommendation of writing good code. And good code is not code without errors. Good code is code in which errors can be fixed quickly, and the extent of the codebase affected by said changes is limited. What we see with MS is not that the code has bugs. All code has bugs. It is that bugs appear to be, at least in some cases, difficult to fix, and sometimes those fixes break things that should not necessarily be related.

  • by TubeSteak ( 669689 ) on Wednesday July 18, 2007 @12:58PM (#19903185) Journal

    Because if MS fixed it quickly, it means those who purchased the exploit would get a lot less for their money.
    That is a huge assumption to make.

    MS regularly sits on vulnerabilities for months instead of patching them.

    By creating such a marketplace, MS effectively gives away information on which non-public vulnerabilities they are aware of, but have yet to patch. That can't be a good thing.
  • by disasm ( 973689 ) on Wednesday July 18, 2007 @01:45PM (#19903933)
    Great idea...

    Yes, is this the US government? Yes, I got word that Bat_Masterson is planning to blow up the whitehouse. I think you should go arrest him and give me $5000 for reporting this incident before he could wreak havoc. What he's an upstanding citizen? No that's just a front he's pulling he really is a terrorist and needs to be dealt with. Okay, I'll get the check in the mail next week, sounds good to me. Glad to do my part in averting terrorism.

    Sorry Bat, I know you weren't planning that, but I really need the $5000...

    Sam
  • by Peacenik45 ( 988593 ) on Wednesday July 18, 2007 @04:37PM (#19906453)

    Immorality has nothing to do with it. I don't think there's anyone out there who willingly would do something wrong or 'evil'.

    It's just that when you're faced with the opportunity to sell something you worked hard on (or chanced upon) for a lot of money, you probably will want to get as much of a return on your work as possible. You don't want to be the shmuck who turned down $1000 because he was worried about the exploit ending up in the wrong hands. You'd try to justify it. You'd think 'Oh, Microsoft would find out about this eventually' or (as somebody else commented) 'Microsoft probably wouldn't patch this immediately anyways'.

    A guilty conscience wouldn't keep you awake. You'd just realize that there's a lot of shit going on in the world and your one little exploit won't even be noticed.

    Then the next day you'd go out and buy a nice home theatre system with all the money you made.

    That's just human nature.

  • by gatesvp ( 957062 ) on Wednesday July 18, 2007 @05:09PM (#19906847)

    Look, there are lots of good explanations here, and personally, I'm a fan of the "bounty system". When I first saw "bounties" for Ubuntu I was overjoyed! Feeding IT people is really important for IT growth.

    However, in this case, the logical flaw is actually in the market; do a cost/benefit analysis. Microsoft, as a monopoly, does not make or lose any significant amount of money on OS security flaws. Companies with a budget capable of supporting security flaw bounties don't actually need them short-term.

    These big companies are publicly held and security flaw bounties do not help quarterly profits, or even annual profits (why these are important is a different issue). If I have SAP running my 10,000 employee business I can't just leave b/c SAP has too many security holes, moving is very expensive. It's probably cheaper to eat a small customer lawsuit than to switch systems. Now, if I'm really smart/motivated/scared I may move off on the next upgrade cycle, but these cycles only happen every 5-10 years. So SAP won't set up public security bounties b/c it is not beneficial to their shareholders in any way they can fathom. MS has the same deal, sure they can make the OS/DB/IIS more secure, but it must already be secure enough as nobody's leaving, right?

    You have the right idea, but the impetus for broad security testing is simply not there. The only people who would "benefit" from such bounties are actually the unestablished newcomers or the competitors to monopolies (like Linux providers). With an open bounty system, these companies can use the security feature as leverage for marketing their product. But these are still very long-term deals, and such a company would need to convince investors that the long-term benefits of such an action outweigh the short-term costs.

    In the case of, say, Linux and LAMP and PostgreSQL, we're probably there. These guys are great candidates for such open bounties. And these long-term activities are likely to pay off. Mac OS X may benefit from the same interest as they try to poach desktop/home users. But MS and SAP and other dominant players can't deliver better profits to their investors with such a system, so they won't do anything until investors get scared and start demanding one. We're not there yet.

  • by BritGeek ( 736361 ) <`biz' `at' `madzoga.com'> on Wednesday July 18, 2007 @06:04PM (#19907439)
    It seems to me that this whole area is fraught with problems, and that the proponents of a "free market" are missing some of the history here.

    #1 The history of paying for exploits.
    This is a relatively new phenomenon, but historically, where it has happened, vulnerabilities have been purchased off the black market by security research companies such as iDefense (now a subsidiary of Verisign). The reason these companies did this is that the vulnerabilities were (and are) exploitable, and were being happily used by the criminal community. Thus, in that situation, iDefense and other similar companies were able to acquire information about known and exploited vulnerabilities, and inform software vendors so that remediation could proceed.

    While paying money to criminals is not necessarily something that fills anyone with glee, except the criminals of course, it was reasonably clear that the action helped "the greater good". The same is far from true in the case of building a free market in vulnerabilities. The obvious point is that if a vulnerability applies to some particular product, why should we assume that the legitimate owner of the site or software product will be the highest bidder? It could as easily be a criminal.

    #2 Legality - testing.
    At least in the US, for downloaded software, the legality of testing software for vulnerabilities is moderately settled, and researchers are reasonably safe. For websites, on the other hand, researchers are on rather thinner ice. Some websites do publish policies which describe the situations under which they would never push for prosecution, although many still do not. (Although the recent discussions on this subject are clearly spurring more sites to do this.) The net for websites is that whether the testing activity is viewed as criminal is in large measure up to the tolerance, or otherwise, of the website operator.

    #3 Legality - sale.
    For sale of vulnerabilities: if a researcher approaches a company and says "I have information about a vulnerability in your product/service, and I'd like $x for it", any competent prosecutor could get a blackmail conviction. If you are a legitimate security researcher, I'd argue that the last thing you want is to be branded as a blackmailer. And, per point #2, I think you will find that as more and more websites release security testing policies, those policies will explicitly refuse to indemnify researchers when the results of the research have been resold or in any way used for profit.

    #4 Business ethics.
    Granted that most security researchers are not in fact employed by the companies whose products and services they are researching, why on earth would anyone expect to be compensated by that company? For example, if you show up at the office building of some company with a ladder and bucket and then clean all the windows, the office manager might be grateful, but whether or not you get paid for it is another matter altogether. Why should vulnerabilities be any different? Don't all workers have the right to expect the windows of their offices to be clean and bug free? ;-)
