Security

Is Paying Hackers Good for Business? 94

Jenny writes "In the light of the recent QuickTime vulnerability, revealed for $10,000 spot cash, the UK IT Security Journalist of the Year asks why business treats security research like a big money TV game show. 'There can be no doubt that any kind of public vulnerability research effort will have the opportunity to turn sour, both for the company promoting it and the users of whatever software or service finds itself exposed to attack without any chance to defend itself. Throw a financial reward into the mix and the lure of the hunt, the scent of blood, is going to be too much for all but the most responsible of hackers. There really is no incentive to report their findings to the vulnerable company, and plenty not to. Which is why, especially in the IT security business, there needs to be a code of conduct with regard to responsible disclosure.' Do you think there's any truth to this? Or is it a better idea to find the vulnerabilities as fast as possible, damn the consequences?"
  • Disclosure is key (Score:3, Interesting)

    by Uruk ( 4907 ) on Thursday May 10, 2007 @04:32PM (#19074017)
    The value of finding security holes is in disclosing them to everyone, particularly the affected vendor.

    The most damaging holes are the ones that only the bad guys know about. This doesn't tend to advance security in software, it just allows people to take over your machine without your permission.

    Security research or incentivization schemes that don't include a built-in mechanism to promote disclosure of the discovered problems won't help much.
  • Why not pay? (Score:3, Interesting)

    by superbus1929 ( 1069292 ) on Thursday May 10, 2007 @04:44PM (#19074235) Homepage
    Here's my view: the one and only point of vulnerability research is to find the vulnerability. You don't care how it's done; you want that vulnerability found while you still have SOME control over it, not after it's out in the wild and you have to patch around it. What's the best way to find your vulnerabilities? Have outsiders working towards a prize. Not only is it good publicity and a great line on the winner's resume, you find just about everything wrong with your product. It's truly win-win.

    Anything that is the most thorough way of eventually getting the program secure is the best way to go about it. Period.
  • Re:Bounty Hunters (Score:5, Interesting)

    by Applekid ( 993327 ) on Thursday May 10, 2007 @04:45PM (#19074241)
    In the US, bounty hunters have legal protection to do what they do. If a company puts up a juicy reward for finding a security hole, the person coming forward could easily get the shaft and then be prosecuted under DMCA.

    At least on the black market, you know, honor among thieves.
  • by 99BottlesOfBeerInMyF ( 813746 ) on Thursday May 10, 2007 @05:03PM (#19074595)

    Responsible disclosure is nothing more than security through obscurity. And security through obscurity is as good as no security at all.

    Actually, security through obscurity is very functional and useful as part of a security scheme. Your password is security through obscurity. Why don't you just give it to everyone if it makes no difference?

    In the intervening months, you have a live, exploitable hole sitting there ripe for attack!

    And if you disclose the wrong vulnerability to the general public you have a live, exploitable hole that everyone knows about sitting there ripe for attack. Which is better?

    Responsible disclosure is simply evaluating what is best for the security of users and disclosing in that manner. In some cases, the best thing for overall security is immediate, public disclosure to pressure the vendor into fixing the hole more quickly and to give users a chance to work around the vulnerability. In other cases, where the vendor is responsive and there is no easy way to mitigate the vulnerability for the end user, immediate disclosure increases the risk to users with no real benefit.

    I say, damn the consequences. Report as soon as possible no matter who it embarrasses.

    Who is embarrassed is immaterial. Ignoring the likely consequences of your disclosure method, however, is irresponsible, which is why the alternative is called "responsible disclosure."

    It'll either put more pressure on them to fix the bugs faster...

    In many cases the vendor is very motivated and goes to work with all their resources immediately. Take a look at the OmniWeb vulnerability published by the MOAB project. Omnigroup implemented a fix within a day and had it available for download, but they do the same thing for bugs disclosed to them privately. All the immediate disclosure did was give hackers more time to exploit the problem before the fix reached users. Disclosing a vulnerability to the public before sending it to a responsible and security-minded development team is good for no one but blackhats. Also, rushing vendors to write code faster can result in more bugs in said code, including other vulnerabilities.

    ...or push users to more secure platforms where security fixes don't take months and are usually found before they're ever exploited in the wild.

    Please. Most users will not switch platforms because of security issues, and many are locked into MS's desktop monopoly by software they absolutely need and by price constraints. The vast majority of users never even hear about security vulnerability disclosures in the first place.

    Here's a tip for you from someone who does work in the security industry. If you're looking for a job in the field, don't expose your irresponsible ideas about disclosure if you want a chance at being hired somewhere respectable.

  • by 99BottlesOfBeerInMyF ( 813746 ) on Thursday May 10, 2007 @05:09PM (#19074677)

    Considering how quickly companies tend to SUE you for disclosing a vulnerability, I don't think there can be any true code of conduct between hackers and companies.

    So Apple sued the guy who disclosed this Quicktime vulnerability? If that happened, I never heard about it. In fact, I work in the security industry and very, very rarely hear about any lawsuits, which is why they are news when they do happen.

    Not unless the companies start making it (public) policy that they WILL NOT sue you as long as you disclose a vulnerability to them first, and give them a reasonable time to fix it before going public.

    Why? Would such a statement actually stop them from suing later? In general companies don't sue over vulnerability disclosures, whether they are immediate or the vendor is given time to fix the problem first. The reason security researchers tend to give companies time to fix things is that they think it is best for security overall.

    I think that'll never happen though, and the only way to safeguard a hacker is to make legislation against those type of lawsuits.

    That doesn't really work. Basically you can sue anyone for anything in the US (with very few exceptions). I don't see the need for one here since I very rarely, if ever, hear about anyone being sued for disclosing bugs.

  • bug testing? (Score:4, Interesting)

    by Lord Ender ( 156273 ) on Thursday May 10, 2007 @05:19PM (#19074825) Homepage
    Buying vulnerability info from a third party is just outsourcing your QA. It's just buying testing + bug reporting.

    If a third party demands money to keep QUIET about a vulnerability, that's extortion.

    Much of the animosity here is that many security researchers specialize in breaking things--they haven't ever worked on engineering a large, complex system. They just don't understand how much time is required to test code before it is released. Also, the legal teams at many companies just don't understand that alienating security researchers by filing lawsuits only makes their situation worse.
  • Re:Disclosure is key (Score:2, Interesting)

    by EvanED ( 569694 ) <{evaned} {at} {gmail.com}> on Thursday May 10, 2007 @06:58PM (#19076231)
    The most damaging holes are the ones that only the bad guys know about.

    And:
    The second most damaging holes are the ones that both the bad guys and the developers know about, but no one else does.
    The third most damaging holes are the ones that everyone knows about.
    The fourth most damaging holes are the ones that only the developers know about.

    If you reveal an exploit, you know that you are in the third state. If you do not reveal an exploit to the public, but only tell the developers, you might have made things worse by going into the second state, but you also might have made things better, by moving into the fourth state. So it isn't necessarily a good thing to publicly reveal exploits.

    Here are my general thoughts.
    I.) If you have good evidence that an exploit is in the wild, publicly release details. [We are in case 2; by releasing the exploit, we move to case 3, an improvement.]

    II.) If you would expect to see evidence of your exploit being used in the wild, but you see no such evidence, do not publicly release the exploit *yet*. [We are in case 4; releasing the exploit would move us to case 3, which would make things worse.] Notify the developers. If the developers are reasonably quick with a patch (I would say within a couple weeks in general, maybe until the next "Patch Tuesday" in MS's or similar cases), still do not publicly release the exploit yet. Give maybe a week after the patch is released for security-conscious users and admins to apply it. Then you may release an exploit. If the developers are not being responsive after a reasonable amount of time has passed, release the exploit at that point. Continually monitor for evidence of the exploit being used in the wild; if such evidence surfaces, release the exploit immediately.

    III.) Otherwise (if you can't tell whether the exploit is in the wild), do not publicly release the exploit *yet*. Follow the procedure in II, but an expedited version of it. (In other words, be more impatient.) Give a few days instead of a couple weeks, don't wait for the next Patch Tuesday unless it's right around the corner, and only give a day or two after the patch release instead of a week for people to catch up. This is the situation where we can't tell if we are in case 2 or case 4; not releasing immediately minimizes the chance of needlessly moving from case 4 to case 3, while moving quickly minimizes the time spent in case 2 if that is where we actually are.

    The timing in II and III above can vary according to the severity of the bug, how hard it would be to patch, and whether there is a workaround. If there is a workaround, both situations (*especially* III) should be biased towards releasing the exploit; if there is no workaround and it requires a patch from the vendor, the situation should be biased towards not releasing the exploit. Also, if you can release information about the vulnerability (such as a suggested workaround) without releasing enough detail to be of much help to black hats, then that obviously biases you towards releasing that information; in fact, you might release such information immediately even in case II. Perhaps the vendor's history of patching behavior should come into play too.

    It's very much a case-by-case scenario. I think saying "always release" or "never release" is always wrong.
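
    Cases I-III above amount to a small decision procedure, so here is a minimal sketch of that logic in Python. It is an illustration layered on the comment, not code from any real disclosure policy: the Vulnerability fields, the disclosure_plan function, and the specific waiting windows are all assumptions made for the example.

    # Sketch of the disclosure decision procedure described in cases I-III above.
    # All names and timelines here are illustrative assumptions.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Vulnerability:
        exploited_in_wild: Optional[bool]  # True, False, or None if we can't tell
        vendor_responsive: bool            # is the vendor acting on private reports?
        workaround_exists: bool            # can users mitigate without a patch?

    def disclosure_plan(v: Vulnerability) -> List[str]:
        """Return a rough disclosure plan following cases I-III above."""
        # Case I: evidence of active exploitation -> disclose fully and immediately,
        # since that moves us from case 2 to case 3 (an improvement).
        if v.exploited_in_wild:
            return ["Release full details immediately."]

        # Case II (no evidence where we would expect some) vs. case III (can't tell):
        # in case III be more impatient, since we may already be in case 2.
        cannot_tell = v.exploited_in_wild is None
        vendor_window = "a few days" if cannot_tell else "a couple of weeks"
        grace_period = "a day or two" if cannot_tell else "about a week"

        plan = []
        if v.workaround_exists:
            # A workaround can be published without handing black hats an exploit.
            plan.append("Publish the workaround now, without exploit details.")
        plan.append("Notify the vendor privately and allow %s for a patch." % vendor_window)
        if v.vendor_responsive:
            plan.append("Once a patch ships, wait %s for users to apply it, then publish." % grace_period)
        else:
            plan.append("If the vendor stays unresponsive past that window, publish anyway.")
        plan.append("If in-the-wild exploitation is observed at any point, publish immediately.")
        return plan

    if __name__ == "__main__":
        v = Vulnerability(exploited_in_wild=None, vendor_responsive=True, workaround_exists=True)
        for step in disclosure_plan(v):
            print("-", step)

    Running it with exploited_in_wild=None walks the expedited case III path: publish the workaround, give the vendor a few days, then a short grace period after the patch before full disclosure.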
