Security Bug

Hackers Disagree On How, When To Disclose Bugs

darkreadingman writes to mention a post to the Dark Reading site on the debate over bug disclosure. The Month of Apple Bugs (and recent similar efforts) is drawing a lot of frustration from security researchers. Though the idea is to get these issues out into the open, commentators seem to feel that in the long run these projects are doing more harm than good. From the article: "'I've never found it to be a good thing to release bugs or exploits without giving a vendor a chance to patch it and do the right thing,' says Marc Maiffret, CTO of eEye Security Research, a former script kiddie who co-founded the security firm. 'There are rare exceptions where if a vendor is completely lacking any care for doing the right thing that you might need to release a bug without a patch -- to make the vendor pay attention and do something.'"
  • Nothing... (Score:4, Funny)

    by roger6106 ( 847020 ) on Thursday January 04, 2007 @03:55PM (#17464474)
    Nothing for you to see here...
    In other news, "Slashdot Editors Disagree On How, When To Post Stories."
    • by Anonymous Coward
      Don't forget "how many times."
    • Re: (Score:2, Insightful)

      by Mixel ( 723232 )
      And in financial news, "Economists Disagree On How, When To Invest Money"
  • by Anonymous Coward
    What we need is a government office that handles this sort of thing, because National Security can depend on bug fixes.

    There needs to be a law against releasing exploits without giving the company time to react to the find.

    Perhaps there should be a software developers association that a company can join that handles oversight on this issue. Any "hackers" that find a critical bug with a piece of software could bring it to the association's attention, and there could be sanctions if the developer refuses to
    • Comment removed (Score:5, Informative)

      by account_deleted ( 4530225 ) on Thursday January 04, 2007 @04:00PM (#17464556)
      Comment removed based on user account deletion
      • Re: (Score:3, Funny)

        by 0racle ( 667029 )
        Christ, we'd all still be using telnet.
        You mean, you're not using Telnet?
      • by kfg ( 145172 )
        Christ, we'd all still be using telnet.

        Ummmmmmmmm, did I miss a memo or something?

        KFG
      • Re: (Score:2, Insightful)

        by Anonymous Coward
        The idea is not to make a government commission that tests every piece of code Microsoft or Apple writes. Rather, the idea is to have a government commission that handles the release of bug information to the general public.

        If the bug can be quietly fixed without harm to the public then the developer is given time to fix the problem. If there are several exploits in the field then the general public is warned and a fix is made as soon as possible. The commission would have the power to enforce a law preve
        • by causality ( 777677 ) on Thursday January 04, 2007 @06:04PM (#17466448)
          The summary and article talk about withholding the exploit information until the vendor is able to release a patch, as though that were the only possible scenario, just because it's the only option (other than immediate full disclosure) in use today.

          But the most egregious examples of "Find security flaw -> Issue patch -> Wash, rinse, repeat" are found in programs (Sendmail? Bind, anyone?) or operating systems (Windows .. 'nuff said) where security was an afterthought that was bolted on later. What I would like to see is complete and instant full disclosure that is sufficient and inevitable enough to encourage vendors to make this entire model obsolete, namely by making it no longer practical to handle these issues by issuing patches. This would provide an incentive to redesign from the ground up with security in mind so that many of these issues don't happen in the first place, and the ones that do occur are reduced in severity.

          Consider the OpenBSD approach, where security was a priority from day one, and the excellent track record they have in this area, and contrast it with Microsoft's track record, where only marketing was a priority from day one. The only way this will change is when it is no longer profitable to place such a low priority on security, and the two ways you arrange that are by demonstrating that the current situation is an arms race that is not sustainable, or, by waiting for a day when Grandma and Joe Sixpack care about computer security enough to refuse to buy anything that doesn't deliver it. Personally, I find the first option to be far more realistic, and it also helps to avoid the "only two choices" dualism that I keep seeing everywhere (especially in politics... "Democrat vs. Republican", "Left vs. Right", "With us or Against us") that is suffocating real change.
          • Consider the OpenBSD approach, where security was a priority from day one, and the excellent track record they have in this area, and contrast it with Microsoft's track record, where only marketing was a priority from day one.

            Yes, let's.

            One has made a very successful product and made lots of money; one has produced a probably vastly superior OS that nobody uses. Windows might be a bag of shit, but in terms of the aims Bill set out to achieve (getting filthy rich) it is a runaway success.
            • One has made a very successful product and made lots of money; one has produced a probably vastly superior OS that nobody uses. Windows might be a bag of shit, but in terms of the aims Bill set out to achieve (getting filthy rich) it is a runaway success.

              You're making my point for me, actually. That Windows accomplished Bill's goals does benefit Bill, but it does nothing for me and nothing for your average user who has a Windows installation. For the 99% of the population who are not Microsoft employees an

          • "by demonstrating that the current situation is an arms race that is not sustainable"

            I'm not sure that that is actually true. I'd say the Internet has been somewhat popular since around 1996. Thinking back over the last 10 years, I think there have been considerably fewer "crippling" viruses and such as of late. Maybe the current arms race is actually petering out.

            "by waiting for a day when Grandma and Joe Sixpack care about computer security enough to refuse to buy anything that doesn't deliver it."

            I
      • Re: (Score:3, Insightful)

        Nah,
        We would still be using Paper Tape loaded through an ASR33 Teletype :-)

        Seriously though,
        Exposing bugs like this is (IMHO) a pure FUD stunt. Ok, tell the vendor about the bug and if they don't fix it in a reasonable time (variable depending upon severity etc) then by all means publicise the problem in order to get some pressure on them to fix it.
        But getting Officialdoom involved? You are a prime candidate to be sectioned. Civil Servants the world over can't organise their way out of a pape
      • by cp.tar ( 871488 )
        we'd all still be using nothing but telnet

        Here, fixed that for you.

  • Comment removed (Score:4, Interesting)

    by account_deleted ( 4530225 ) on Thursday January 04, 2007 @03:57PM (#17464496)
    Comment removed based on user account deletion
    • Re:2 months (Score:5, Interesting)

      by Cylix ( 55374 ) on Thursday January 04, 2007 @04:19PM (#17464864) Homepage Journal
      The problem with setting any reasonably lengthy period of time is that it results in that much more infection and use. Basically, this grants any purchaser of a 0 day exploit roughly a 2 month window of opportunity to use their newfound investment.

      There may not be a patch to solve the problem, but perhaps there is a significant workaround that could avoid some trouble.

      This is exactly why it is difficult to assign a window of disclosure to such issues. Not too terribly long ago, some of the larger firms managed to get together and settle on a 30 day notice.

      Also, you might remember that a little company called Cisco was sitting on a vulnerability for quite a while until someone went psychotic over the deal.

      In the grand scheme of things it comes down to protecting your image. It almost seems like the policy on vehicle recalls. Unless X number of issues arise... just don't deal with it. However, if it becomes substantially used or finds the public eye... it suddenly becomes a much larger problem.

      Honestly, an arbitrary date is rather inflexible and a system that takes into account the impact of the bug needs to be used. Pump out tons of crap software? That isn't exactly the problem of the common man, but rather the problem of the organization's software development model.

      Organizations and individual people lose time and money to support these industry bug shields. Again, a case by case determination depending upon the level of potential harm.
      • Re: (Score:3, Insightful)

        by Lord Ender ( 156273 )

        The problem with setting any reasonably lengthy period of time is that it results in that much more infection and use.

        Wow. Do you have any evidence whatsoever to back that claim up? Or did you just see it on IRC somewhere?

        Back in reality, it is almost universally assumed that a published exploit for which no patch exists will lead to much more damage than a published exploit for which a patch is widely available. In fact, it is so obvious (to almost everyone but you) that such a study has never even been perf

        • by Cylix ( 55374 )
          Your argument fails for the exact same reason you cited for mine.

          Where do you get your data, and how do you know an uncovered exploit is not being actively used?

          YOU DON'T...

          Exactly at what point did I say research materials must be released immediately? I didn't, did I? That wasn't a mistake as I wasn't advocating the release of exploitable vulnerabilities immediately.

          You failed to read my post, you failed to interpret what little you did read and ultimately gave off a gunshot reaction to some thought you forme
          • Yes, I don't give evidence to back up my claim because it is... obvious. You made an outrageous claim that flies in the face of common sense and reason. No one would take such an unlikely claim seriously unless there was evidence to back it up.

            Here's an example:
            Which is going to cause more damage:
            a) releasing a contagious pathogen into a population before a vaccine has been developed and distributed
            b) releasing a contagious pathogen into a population after a vaccine has been developed and distributed

            I don't
            • The problem is your analogy is not exactly correct (which is the usual problem with analogies).

              The difference between full disclosure and just informing the companies would be a lot closer to just telling the government and the WHO about bird flu cases but not telling any of the general public.

              Still that oversimplification falls wide of the mark.

              It's a generalisation, but probably true, that if you need to make an analogy of something in order to understand it (or explain it to a 3rd party) then you (or that 3rd
              • I initially intended to use the analogy to show why extraordinary claims require extraordinary evidence. It had enough parallels with the topic at hand, though, that I used it to further clarify my argument. I did this not because I don't understand the realities of computer security (in fact, my livelihood depends on my expertise on the subject), but because the person I was addressing seemed to be having trouble grasping the subject.

                Avoiding all analogies, the best mathematical model I can come up with is
                • by xappax ( 876447 )
                  the damage inflicted due to a vulnerability is proportional to the number of people who have the knowledge required to exploit it multiplied by the amount of time each person has knowledge required to exploit it.

                  Your equation makes intuitive sense, but it doesn't model an important factor - the number of people who know about the vulnerability is not nearly as important as who they are. Here are some classes of people, and how disclosure affects them:

                  Skilled black hat hackers: These people are often
                  • Sure. You can always refine a model until you arrive at, well, the raw data you are modeling.

                    SKs and BHs are the people who do damage with vulnerability knowledge (VK). There are basically two cases when a researcher is considering publication.
                    1) the researcher is the first to discover the VK.
                    2) a small number of BHs already have the VK.

                    For #1, practicing full disclosure does a lot of damage, no question.
                    For #2, practicing full disclosure may decrease the time to patch (giving the small number of BHs slight
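
                    As a rough illustration of the model being argued over here, the two cases above work out something like this (a minimal sketch; every number in it is a made-up assumption, not data):

                    # Toy version of the quoted model: damage ~ (people with the
                    # vulnerability knowledge) x (time each of them holds it).
                    def damage(groups):
                        # groups: list of (people_with_knowledge, days_each_holds_it)
                        return sum(people * days for people, days in groups)

                    # Case 1: the researcher is the first to find it (hypothetical numbers).
                    full_1  = damage([(1000, 30)])  # full disclosure: 1000 attackers, 30-day patch window
                    quiet_1 = damage([(0, 30)])     # private disclosure: nobody hostile knows

                    # Case 2: a handful of black hats already have it.
                    full_2  = damage([(5, 30), (1000, 30)])  # assume disclosure halves the patch window
                    quiet_2 = damage([(5, 60)])              # 5 attackers keep it for 60 days

                    print(full_1, quiet_1)  # 30000 vs 0
                    print(full_2, quiet_2)  # 30150 vs 300 -- whether case 2 flips depends entirely
                                            # on the assumed attacker counts and patch windows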
                    • by xappax ( 876447 )
                      Those are some good points, and I'm reconsidering - you may be correct that where a single vulnerability is concerned, full disclosure has a greater (on average) damaging effect. I still disagree that it's the most responsible response, however, because I don't think that computer users should be focused only on the latest vulnerability - there's a bigger picture that's often overlooked.

                      Computer software today is, on the whole, extremely insecure. We may never perfect it, but it could be a whole lot bet
                    • by xappax ( 876447 )
                      I still disagree that it's the most responsible response

                      s/it's/"responsible disclosure" is/
        • by Cylix ( 55374 )
          Oh and to start off on your whole mis-rant there...

          It's common sense...

          The longer a problem persists the worse it will become.
          • What a charming oversimplification!

            A security bug is only a problem when someone knows how to exploit it. While no person knows how to exploit it, it is not a problem, and no problem is persisting.
            • by Cylix ( 55374 )
              Now you get it!

              It's not severe if no one knows.

              The window could be that much larger than two months.

              The sliding window works both ways. Longer if it's not a problem right now, but if it's actively being traded and used... it becomes a much larger problem.
              • Responsible disclosure, as defined in the industry, gives the researcher the moral responsibility to afford a vendor a reasonable amount of time to fix a flaw before releasing the research publicly.

                Yes the window works both ways. That is why the word "reasonable" is used. Two months is considered reasonable for most types of software. And don't even try to condescend. This conversation has provided me with no insight. I make a career of this stuff.
      • by haraldm ( 643017 )

        Unless X number of issues arise... just don't deal with it. However, if it becomes substantially used or finds the public eye... it suddenly becomes a much larger problem.

        Which leads to the suspicion that it's not a technical problem at all but a public perception problem, hence a marketing issue. Which leads me to think we should disclose as early as possible to give the manufacturer a good spanking because after all, it's them who are responsible for the issue, not hackers, and not security folks.

        How's that

        • by Cylix ( 55374 )

          I'm not sure I really advocate holding a proverbial gun to someone's head. I'm just not much of an activist in that regard.

          Maybe not a threat so much as a response rating? Surely tracking data on responsiveness would yield long term value in addressing these problems. Coupling that data with the line-item fixes, vulnerability timelines, and threat values should show a negative or positive history with regard to quality assurance.

          Honestly, I'm sure something like this has to exist already, doesn't it?
          • It may not be the best avenue from the consumer standpoint, but it would be a gentle start.

            Frankly, I don't intend to be gentle. Manufacturers ignoring security problems (or delaying fixes) for purely economic reasons aren't gentle either, and produce a lot of work for those who have to live with the crap, i.e. systems administrators. I'm not an activist either but I work as an IT consultant having to listen to all these stories, following escalations, and listening to manufacturers' managers who try to t

    • 1 month for them to fix it, 1 month for the customers to QA and patch their systems.

      As soon as the patch is released, the crackers will be checking the files replaced and the differences in those files.

      They can usually get an exploit fielded within 24 hours or less.

      The best you can do is to take steps to minimize the threat and log any activity so you can see if you've been cracked. Running snort can tell you if any traffic is suspicious.

      But you should be doing that anyway.
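
      That before/after check is easy to script; a minimal sketch, assuming a hypothetical install directory and a baseline you record before patching:

      # Sketch: hash every file before a patch, then diff afterwards to see
      # which binaries the patch actually replaced. Paths are placeholders.
      import hashlib, json
      from pathlib import Path

      ROOT = "/opt/vendor/app"          # hypothetical install directory
      BASELINE = Path("baseline.json")  # hashes recorded before the patch

      def snapshot(root):
          return {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
                  for p in Path(root).rglob("*") if p.is_file()}

      current = snapshot(ROOT)
      if not BASELINE.exists():
          BASELINE.write_text(json.dumps(current))   # run once, pre-patch
      else:
          before = json.loads(BASELINE.read_text())
          changed = sorted(f for f in current if before.get(f) != current[f])
          print("files the patch replaced or added:", changed)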

  • Opinion Swing? (Score:5, Insightful)

    by bmajik ( 96670 ) <matt@mattevans.org> on Thursday January 04, 2007 @03:57PM (#17464508) Homepage Journal
    It's good to see that opinion seems to be shifting on the matter.

    A few years ago when Microsoft started pressing for "responsible disclosure", they were pretty much mocked and ridiculed by everybody.

    I'd like to think that there is now some real discourse on the effectiveness and responsibility of full disclosure vs responsible disclosure, and that security researchers are choosing responsible disclosure more often.

    I'd prefer to think of things that way than to cynically surmise that this is simply a case of "when it's an MS bug, let's roast them with a 0-day disclosure, but if it's anyone else, let's give them a fair shake at fixing it"

    • Re: (Score:2, Interesting)

      by Loconut1389 ( 455297 )
      MS should have a program whereby if you tell them first and let them patch it, they'll give some program or hardware (Zune?) to the first reporter of the bug, but if the exploit is released (by anyone) to the wild before the patch, then the offer is null and void. Assuming MS would play fair (and not have an insider leak the bug 2 hours before the patch), seems fair and easy good business for MS. Surely the cost of a Zune or a laptop would be less than the bad press costs.
      • Re: (Score:3, Informative)

        by bmajik ( 96670 )
        They have something sort of like that. If you are the first to responsibly disclose a bug, you or your organization will be thanked in the security bulletin for disclosing it. I think there is some kind of rudimentary financial compensation ($500 comes to mind?) also, but I can't find any record of it currently.

        http://www.microsoft.com/technet/security/bulletin/policy.mspx [microsoft.com]

        If you search "microsoft.com" for "responsible disclosure", many of the recent security bulletins list who repor
      • MS should have a program whereby if you tell them first and let them patch it, they'll give some program or hardware (Zune?) to the first reporter of the bug, but if the exploit is released (by anyone) to the wild before the patch, then the offer is null and void. Assuming MS would play fair (and not have an insider leak the bug 2 hours before the patch), seems fair and easy good business for MS. Surely the cost of a Zune or a laptop would be less than the bad press costs.

        Meh. There'd be those people who would just want to piss in the pool and publicly release the exploits pre-patch just to fuck over the guy who told Microsoft.

    • I'd prefer to think of things that way than to cynically surmise that this is simply a case of "when it's an MS bug, let's roast them with a 0-day disclosure, but if it's anyone else, let's give them a fair shake at fixing it"

      To be fair, one of the main points to consider is how the vendor has behaved historically. If you submit bugs to a given vendor and they completely ignore them, sometimes for years, until that bug is made public, then no good is likely to come of your discovery until the bug is made

      • by cdrguru ( 88047 )
        So disclosure is supposed to be the hammer over the vendor's head to "make" them fix it?

        Well, what if they have difficulties or other reasons that make it unlikely they are going to fix it? In other words, what if they don't care about your hammer? Then disclosure just ensures that it is out there to be used as a weapon against humanity at large.

        Of course, this assumes that you (a) care about "humanity at large" and (b) might be caught in the destruction as well. You know, if you like actually used Windo
        • Well if they don't fix the bug because they don't care then it needs to be released. It's not like one person can hold a monopoly on the knowledge of a bug, if one person/group can find it then so can others. If the company isn't going to fix it then the users should still be able to take action to protect themselves.
        • Re:Opinion Swing? (Score:5, Informative)

          by Nasarius ( 593729 ) on Thursday January 04, 2007 @04:28PM (#17464968)
          You seem to assume that the exploit won't be discovered independently by someone else who isn't quite so altruistic. If they won't fix it, people who are using the software have a right to know that it is vulnerable.
          • Re:Opinion Swing? (Score:4, Insightful)

            by susano_otter ( 123650 ) on Thursday January 04, 2007 @04:48PM (#17465292) Homepage
            people who are using the software have a right to know that it is vulnerable.


            I think such a "right" (I would call it an "entitlement", actually) really only makes sense if there's a reasonable expectation that general purpose computing in a networked context is safe and secure to begin with.

            Given the true nature of computer networking today, far from having "rights", I'd say that software consumers have responsibilities: To avoid networking except with known good components; to develop their own software in-house so that they can better control the vulnerability testing and patching process, to conduct their own testing to their own standards on third-party software; and to not pretend that all their security problems are the responsibility of the third-party software vendor, easily solved by the vendor simply writing perfect software.
            • by geekoid ( 135745 )
              "To avoid networking except with known good components"

              And how do they know that they are using good components if no one can tell them otherwise?

              " to develop their own software in-house so that they can better control the vulnerability testing and patching process"

              Ahh, so anyone who uses a computer needs to write their own OS and applications? Has to have complete understanding of software and hardware engineering?

              " to conduct their own testing to their own standards on third-party software; "

              And be a maste
              • "To avoid networking except with known good components"

                And how do they know that they are using good components if no one can tell them otherwise?

                By accepting the responsibility to test the components themselves, or else admit that they can't reasonably expect the components to be secure.

                " to develop their own software in-house so that they can better control the vulnerability testing and patching process"

                Ahh, so anyone who uses a computer needs to write their own OS and applications? Has to have complete und

            • Re: (Score:3, Insightful)

              by mstone ( 8523 )
              ---- To avoid networking except with known good components;

              Components are only part of the story.

              According to the Orange Book (the DOD manual for evaluation of trusted computing systems), the security of a machine can be rated no higher than the rating of its least trusted port. That includes everything, including the power cord and the air. A truly high security system demands a generator inside the same Faraday cage as the device itself, and probably armed guards at the door.

              The internet is an untrusted
              • What you say makes sense to me.

                The point I was trying to get at is that software users do not have a right to the information discovered by other people, regarding the security of the software they're using. Rather, software users have a responsibility to gather their own information, either by investing in information-gathering activities in-house (my idea), or by formally contracting with a third party, and investing resources that way (your idea).

                Either way, I think my basic point still stands: if you w
                • by mstone ( 8523 )
                  There are still problems.

                  Face it, a company that doesn't know how to review its own security also doesn't know how to rate the reliability of a security contractor. That gives rise to a whole class of snake-oil vendors for whom FUD is another word for 'marketing'.

                  Case in point: I think it was McAfee that came out with a white paper last year saying that Mac users Really Should Use AV Software, despite the fact that the software in question only catches bugs for which it has known profiles, and there are
                  • Exactly. I agree with all of this. But none of this is actually relevant to my point. Much earlier in the thread, there is a claim that software users have a right to information collected by someone else about the software. I disagree with this claim, and see no such right. At best, users have a right to truth in advertising from the vendor. But if I study a piece of software that you're using, on my own, and discover security flaws in that software, you have no right to get that information from me,
            • by epine ( 68316 )

              easily solved by the vendor simply writing perfect software.

              Ah yes, the old strawman, haughty and nattily attired. The entire industry knew that Microsoft's integration of IE, Outlook, and ActiveX was a terrible misstep, and Microsoft knew this too, from the moment of first inception. It was a competitive decision to damn the torpedoes and endure the consequences in the aftermath (i.e. by mounting a massive PR campaign to promote responsible disclosure after the barn door was open). "Might makes right" w
              • Let me sum up my argument:

                1. Software buyers are entitled to truth in advertising from software vendors.

                2. Software buyers are responsible for securing their own systems, regardless of whatever lies the software vendors may have told them.

                3. In fact, in the current state of networked computing, it is unreasonable to assume that a given piece of software is secure, regardless of what the vendor claims. Therefore, it is inappropriate for software buyers to blame software vendors for insecurity in the user's
          • someone else who isn't quite so altruistic

            I don't think that "altruistic" means what you think it means. People who use a piece of software, or operate in an environment where people do (or share globally internetworked systems with people who do) have a personal, vested interest in rational, thoughtful disclosure. That's not altruism, that's enlightened self-interest. It's selfish, in the correct, useful sense of that word, to do it right. The people you're worried about aren't the opposite of altruistic
        • So disclosure is supposed to be the hammer over the vendor's head to "make" them fix it?

          If that is the only way to get them to fix it, yes. Several times that I know of, bugs and even demo exploits were publicly released after researchers gave up on waiting for the vendor to ever fix it, or even respond saying they would fix it.

          Well, what if they have difficulties or other reasons that make it unlikely they are going to fix it?

          The point of security research is to promote security. If a vendor is unwi

          • not an exploit, but I've seen it take press releases to get ATI to fix bugs in its drivers. They get them rolled out quite quickly when they're pressured by the media, but couldn't give two shits if it's just a person saying "Your OpenGL is broken".

            Check the OpenSceneGraph development list archives if you don't believe me.
      • If the responsible disclosure rules are well designed there is no reason to treat any vendor, good or bad, differently. You give them all the same amount of time to fix the problem, then you disclose the bug. This is self-correcting. Good vendors would never be caught with their pants down; bad vendors will get embarrassed time after time until they improve.

        Sensible people should debate "how long is long enough", but I think it insane to fully reveal dangerous exploits directly to the public without providin
        • If the responsible disclosure rules are well designed there is no reason to treat any vendor, good or bad, differently.

          I disagree. If I find a big vulnerability and submit it to the vendor, my next action depends upon the vendor. Some bugs take longer to fix and I won't necessarily know what a reasonable amount of time to wait is. If one vendor has a good track record and e-mails me back right away to say that they are working on it and it will take them 20 days, I'm inclined to wait at least 20 days before

    • A few years ago when Microsoft started pressing for "responsible disclosure", they were pretty much mocked and ridiculed by everybody.

      They were mocked because they made a mockery of responsible disclosure by trying to keep the bugs they were informed of quiet rather than trying to fix them. I don't think there's much of an opinion shift; there never were that many people who advocated releasing exploits into the wild before informing the vendor as a courtesy. A few, sure, but always a minority. But it is

    • Re: (Score:1, Informative)

      by Anonymous Coward
      Ehm... no. Over the last few years Microsoft has perfectly shown that Responsible Disclosure doesn't work. You tell them about a couple of bugs, they won't fix them. You post it on Securityfocus, the moderator doesn't approve it. The public doesn't get informed, the bugs remain and get exploited.

      BTST too often.

      What was it about IE being unsafe for 281? Utter bullshit. I've compiled a list of critical bugs that remained unpatched since 2004, hence IE was never safe since then. They are publicly known, Microsoft knows them,
      • by bmajik ( 96670 )
        How can you back up this accusation? A number of current security bulletins contain the words "responsible disclosure", and they list who reported the bugs to Microsoft.

        That indicates that at least some of the responsibly disclosed bugs get fixed, doesn't it?

        I understand the consternation about unpatched IE vulns. Unfortunately I don't know off the top of my head what the real story is.
    • Re:Opinion Swing? (Score:5, Interesting)

      by linuxmop ( 37039 ) on Thursday January 04, 2007 @05:38PM (#17466080)
      You are operating under false assumptions.

      There exists a community of underground hackers (crackers?) who search for exploits. They find them, trade them, sell them, and use them to steal data and resources. Gone are the days where script kiddies just hack for fun; there is a serious black market involved, since resource and identity theft can be very lucrative.

      When an exploit is discovered by a researcher, it is likely that the black hats have already discovered it. The software's users are already being harmed, although they may not realize it: smart hackers are good at covering their tracks.

      In this scenario, "responsible disclosure" is anything but responsible. By waiting until the vendor has patched the software, users are being harmed. On the other hand, immediate full disclosure has three important effects:

      One, it eliminates the black market for the exploit. If everyone knows about it, nobody will pay for it. This reduces the overall market for exploits and, compounded over many exploits, will drive hackers out of the market. If it is not profitable to find exploits, fewer people will do it.

      Two, it gives the users an opportunity to take action. If, through full disclosure, I find out that Internet Explorer has a serious security risk, I can switch to Firefox. If my Cisco router has a problem, I may be able to work around it with an alternate configuration. On the other hand, if a researcher reports the exploits to Microsoft and Cisco directly, black hats are free to exploit my computer and my router until patches are released (if they ever are).

      Three, it provides an incentive for vendors to write better software. If every software bug meant a black eye and angry users, you can be sure that there would be better software. On the other hand, the occasional well-timed patch looks like software "maintenance", a concept that shouldn't exist but sounds reasonable to the layman (after all, he has to have his car tuned up every so often, so why not his software?) The result of full disclosure, on the other hand, is more akin to an emergency recall; the producer has clearly made a mistake.

      The concern, of course, is that the black hats don't already have the exploit, and that full disclosure gives it to them. Yes, this is the risk of full disclosure. However, given that black hats have an economic incentive to find exploits, while researchers rarely do, we can expect the probability of this to be low. And even if they don't have the exploit, releasing it still shrinks the exploit market (why pay for exploit B when you can get exploit A for free), it still notifies users of a potential problem, and it still incents vendors to write better software.

      Full disclosure is responsible disclosure.
      • Four, every script kiddie and their dog will have a full set of instructions on a hack to which previously only 'black hat' hackers with serious malicious intent, apparently willing to pay for it, were privy.

        Instead of 500 companies silently being hacked and having some of their data stolen, 5 million people, including companies, are now under attack through the latest combination of script kiddie worm + dangerous hack.

        So yes, it is irresponsible to just throw the data out there - because you vastly
      • by epine ( 68316 )

        I'm totally in this camp myself. The only thing responsible disclosure accomplishes is perpetuating the market for software that was written badly in the first place. Consider companies Rock and Scissors. Rock decides to push their product to market first at all cost. Scissors elects to create a development culture that promotes rigorous coding practices. Well, we all know how this story plays out: formation of a rebel Paper alliance. Then the Paper people are accused of being irresponsible for suffoc
      • If you are correct and well paid criminal blackhats are finding vulnerabilities well before researchers, then surely the main result of full disclosure will be to increase the value of yet-to-be-disclosed vulnerabilities and to prevent disclosure of those vulnerabilities to a wider community such as script kiddies, for fear of it getting to a whitehat and being fully disclosed.

        Basically the value of non-disclosed vulnerabilities will shoot through the roof compared to non-patched but fully-disclosed vulnera
    • by bky1701 ( 979071 )
      The difference is, that was Microsoft, this is Apple. I would bet if Microsoft was to press for that again right now, it would be mocked and ridiculed by everybody again...

      Meanwhile on OSS, we don't have to worry much about this at all most of the time, and still have no government organization or stupid laws making it that way.

      I figure I'll be modded down into oblivion, but what the hell.
  • March 1996 to April 1998: I was a script kiddie in my mom's basement.

    /not that there is anything wrong with that...
  • "'I've never found it to be a good thing to release bugs or exploits without giving a vendor a chance to patch it and do the right thing,' says Marc Maiffret, CTO of eEye Security Research, a former script kiddie who co-founded the security firm.
    And who are you, Ms. Kelly Jackson Higgins, to call Marc Maiffret a former script kiddie? Where did you get this fact?
    • Actually, he wasn't a script kiddie - he was a hacker known as 'chameleon' who tried to steal U.S. defense secrets and sell them to an accused terrorist named Khalid Ibrahim.

      I guess now he's just another 'trusted source' in the security biz, hmmm?
    • by kfg ( 145172 )
      Newsweek?

      Marc Maiffret could be corporate america's worst nightmare. He's 23, he's frighteningly proficient with computers and he seems to have a special aptitude for being able to remotely hack into any network in the world running on Microsoft Windows. . .Maiffret's journey from slacker-hacker to cofounder of a 120-employee firm was an unlikely one. Six years ago he was a dropout from high school in Orange County, Calif., spending nights teaching himself about computer security when a friend introduced hi

  • If these bugs aren't publicly disclosed, many software vendors see no reason to fix them. The reason is that each time you fix a bug the product has to go back through a full QA cycle - which can cost lots of money.
  • One problem (Score:5, Insightful)

    by daveschroeder ( 516195 ) * on Thursday January 04, 2007 @04:15PM (#17464792)
    One problem in this debate is that often, either side will make it seem like an all-or-nothing proposition; that it's either "full disclosure on day one" (or in this case, "day 0" ;-), or it's feebly reporting to the vendor and waiting helplessly while the faceless vendor takes months to respond, if it even responds at all.

    There actually is a middle ground.

    Some say, "Hey, these vulnerabilities exist whether they're reported or disclosed or not," just as MOAB says in its FAQ. But the problem is that they overlook the practical side. Sure, the vulnerabilities, and maybe even working exploits, exist, but as long as they're hoarded (and not used) by very small and tight-knit groups of people, they're not getting actively exploited in the wild across massive userbases. Could high value 0day exploits perhaps be used for isolated penetration? Sure. But could they be used (for any period of time) for a mass-spread worm or other malware? Nope. It'd be hours before security firms and/or vendors identified the issue.

    So when you choose to disclose previously undocumented issues before giving the vendor any chance to respond, which some claim they're doing to improve security, there is a greater chance of exploit across a much wider base of users, which can have a much wider and more catastrophic impact. Some say that as sysadmins, they'd want to know about such vulnerabilities so that they can protect themselves and mitigate the threat. But other than for high value targets and corporate or government espionage - which can perhaps have their own channels for "earlier" disclosure when identified by entities like US-CERT or Information Assurance agencies - I don't see how people can reasonably expect to be targeted by extremely valuable and as-yet-undocumented vulnerabilities. It's a point of pride - and sometimes money - to sit on such vulnerabilities.

    The bottom line is that the vendor should always be informed in advance, if there is any real concern about security on the platform, and not just ego stroking or slapping down "fanbois". How long in advance and how long a vendor should be waited on is somewhat subjective, of course. Also, no one's saying that an "independent" "security researcher" is beholden to a corporate interest. But then they shouldn't operate under the guise of responsibility or the feigned notion of wanting to "improve security", when some persons' mechanisms for disclosure are nothing more than PR attempts, or another notch in the bedpost (hmm, or probably NOT a notch in the bedpost...)
  • Now, in breaking news, polarizing issues cause people to disagree! Film at 11:00!
  • If the fix is as easy as closing ports 135, 137, 138, 139 and 445 in the firewall, then you can disclose it immediately...
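
    If you want to sanity-check that kind of workaround, a quick connect test is enough (the host below is a placeholder, and 137/138 are normally UDP, so a TCP check really only covers 135, 139 and 445):

    import socket

    HOST = "192.0.2.10"          # placeholder address
    TCP_PORTS = [135, 139, 445]  # the TCP NetBIOS/SMB ports from the list above

    for port in TCP_PORTS:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(2)
        reachable = s.connect_ex((HOST, port)) == 0
        s.close()
        print(f"port {port}: {'open' if reachable else 'blocked or closed'}")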
  • I think that "responsible disclosure" is fine for companies that:

    1) Make real attempts to release secure software, rather than just ship shoddy software as fast as they can onto the unsuspecting public.
    2) Have a serious method for responding to issues quickly and effectively when they are found outside the company. This really just means good customer support combined with a good method of patching shipped code safely and effectively.
    3) Treat security researchers as friends who help improve their products.

    F
    • There've been a few companies, as reported by Bruce Schneier, that responded to private disclosure of security flaws with restraining orders and injunctions.

      If you're going for private disclosure, do so as anonymously as possible.
    • by ebyrob ( 165903 )
      These companies are doing the equivalent of shipping cars without airbags in the modern world

      Ya, because small women and young children deserve to be crushed to death for riding in the front seat...

      Of course, if someone were to suggest something likely to have a statistically significant impact on yearly motor vehicle deaths, like say mass transit, that'd just be inconvenient...

      I guess mass transit would be akin to simply not using products from software vendors with poor security track records. In that vein
      • by dfay ( 75405 )
        Wow, I guess I touched a nerve with my airbag analogy. I'm not an airbag manufacturer, however, so please feel free to imagine whatever acceptable analogy you like in its place. (Guns without a safety button, unsterilized medical utensils, take your pick.) However, I'm inclined to think that you did understand my meaning.

        In that vein I suppose airbags are about as useful as trying to turn bad vendors into good ones with nothing more than bug disclosure practices.

        Sounds like you don't think that full disclos
        • by ebyrob ( 165903 )
          Well... Airbags do purportedly save some lives, it's just that they've also taken a few. At first I objected to the airbag analogy, but the more I think about it the more it fits. I'm sure full disclosure causes some reform, and in the case of Microsoft it has clearly had some (seemingly positive) effect. However, the effects of full disclosure are not always positive.

          As to other analogies... I've yet to hear of a medical utensil causing injury because it was too well sterilized.

          Shaping up bad vendors?
  • Giving the vendor a few weeks to create a patch before releasing an exploit is a polite thing to do for the vendor's sake, but what about the users of the vulnerable software? Hiding potential threats from them keeps them from protecting themselves. Even without a fix, you can apply a workaround or, if you need serious security, even replace the buggy product. I think that researchers who find security bugs should report the fact that a vulnerability exists in a given software product immediately. They c
  • All's fair... (Score:5, Interesting)

    by mandelbr0t ( 1015855 ) on Thursday January 04, 2007 @04:31PM (#17465008) Journal

    Hackers are not under any obligation to disclose anything. I'm not aware of any law that either forces them to disclose a vulnerability that they have discovered, or any due process that must be followed to do so. I'm also not aware that writing or distributing proof-of-concept code is illegal. Judging by the number of large software vendors either in court (IBM, SCO) or deliberately misinterpreting existing legal documentation (Microsoft and Novell attack the GPL), the law is clearly the only deciding factor in how business will be done in the IT industry.

    Therefore, throw your morals and principles out the window. This is laissez-faire economics at its best. Mud-slinging, sabotage, legal wrangling, death threats and more await as we determine just who has the best software. If these vendors are truly interested in some good-faith reporting from the people who are discovering the vulnerabilities, maybe a show of good faith on their part might be nice. There's absolutely no incentive to do anything in a reasonable or "nice" way, when dragging a hated vendor's name through the mud is both legal and cool.

    There are a few things I can think of that would improve matters and reach a common ground where truly malicious software is written only by a few bad apples:

    • Laws governing EULAs would reduce the weasel words that we click through blindly as we install software. Many EULAs that I've read actually have a clause that's different for the country of Ireland, as their so-called "lemon law" also applies to software. The EULA as it is written for the United States waives too many consumer rights to be valid in Ireland. Having clear guidelines for what rights you can waive by agreeing to a software EULA is vital.
    • Vendor incentives for disclosing information in accordance with their company policy. When RSA was released to the 'net community at large, there was a sizable reward for proving the ability to crack it. If vendors offered some kind of financial incentive to disclose bugs through their normal process, many people would opt for the immediate cash rather than going for the jugular.
    • Establish criminal and civil liability for writing bad software. Everything goes to a civil court these days, so it's often a battle of who has the better lawyer (mostly because there's no good laws governing EULAs...). What is the software provider's responsibility? Establish industry guidelines for QA testing for off-the-shelf software. Throw some people in jail for writing malicious software. Any company that misrepresents its software for the purpose of taking control of someone's machine should be subject to criminal liability. I don't want to hire a lawyer and roll the dice on a lawsuit. I want the police to press charges and the DA to prosecute, all without my involvement (unless I get to testify).

    Just to be perfectly clear: I am condoning the MOAB and any other MOxB. I've used too much bad software and seen too many vendors be held utterly unaccountable for their pre-meditated actions against the consumer. Lobby groups funded by these large vendors continue to erode consumer rights. If this is not how business is to be done, perhaps the industry leaders should set a better example.

    mandelbr0t
    • Hackers have no legal obligation to do much of anything, but neither is basic human decency (e.g. cleaning up after yourself if you make a mess in the company breakroom) a legal obligation. Just because what they're doing isn't illegal doesn't mean it's a good thing to do. Nor am I trying to argue that it *should* be illegal - it shouldn't. I'm just saying that we shouldn't give them a pass just because they're not breaking the law.

      Also, why give them a pass because they're MOxBing select vendors? Wouldn't it be be
    • by bky1701 ( 979071 )
      I agreed with you up until 3.

      Establish criminal and civil liability for writing bad software. Everything goes to a civil court these days, so it's often a battle of who has the better lawyer (mostly because there's no good laws governing EULAs...). What is the software provider's responsibility? Establish industry guidelines for QA testing for off-the-shelf software. Throw some people in jail for writing malicious software. Any company that misrepresents its software for the purpose of taking control of

      • It's easy to say "bad software" from your point of view, but what exactly IS "bad software"?

        Sorry, I wasn't very clear about this. I did mention that the key point was the misrepresentation of software for the purpose of gaining some control over the victim's computer. As a result, Microsoft would mostly be on the right side of the law, but a company like Gator would not. The difference is in what the software is purported to do. I don't even know what Gator is supposed to do. It doesn't really matter, since its primary purpose is to send personal information back to the software creator. In th

  • ... which Maria Sharapova poster is the coolest.
    ... which STTNG uniform best hides Romulan blood stains.
    ... baked or fried Cheetos.
    ... best way to get your parents out of the house for the weekend.

  • by PeelBoy ( 34769 )
    However the hell I want. Whenever the hell I want.

    That's how and when.
  • by CherniyVolk ( 513591 ) on Thursday January 04, 2007 @05:01PM (#17465490)

    In one of my previous posts, I have already talked about this.

    Companies have no other interest or goal other than to make money. Fundamentals people, fundamentals! If you think, for one second that an idea from any company not resulting in immediate profit is correct, you are a fool. They cut corners, discriminate based on accredited and formal education rather than will and raw expertise and experience, they implement management schemes that do more harm than good for the sake of bookkeeping for VCs and shareholder confidence. They have to make every judgment off of a cost analysis report. And what few people understand is, if it's cheaper to continue on the same path, they will even if people are dying (car manufacturers) or getting screwed (Microsoft software unreliability).

    I can't believe this debate is taken seriously! The Companies want this precedent, because it's cheaper to ignore most exploits than to actually have to hire someone that can do something to better the software. Companies want this because it adds another variable (in their favor) to the cost analysis of fixing a problem... it gives them choice. And as we all know, from Companies' own assertions, that choice is bad and force is the only thing applicable. Companies don't give you much of a choice, why should you give them any? Open Source doesn't get a choice, why should their competitors (proprietary software)? If Capitalism is the so-called "best", then it should be able to compete in the exact same fashion and prevail over other systems. So don't do this double standard crap of "Oh, if it's a company's software, do 'X'; if it's not, then do 'Y'; only because of a benevolent precedent suggesting you should give a Company a break while it's OK to lay hard and firm on some other ideology."

    If a Company releases software that is buggy, the very instant you find an exploit it should be released to the public with all that you have researched, including example exploits. If the Open Source community can fix it quickly, then surely Microsoft or Adobe can too with their all-mighty Capitalist ideals and absolutely-necessary 'management'....

    There is no precedent here. It is not a debate. You paid for the software, and if you don't get what you paid for (and some), then you should have absolutely NO qualms about sticking it back to the person who pawned it off to you. If they are so great, then let them prove it. But they aren't, and that's why they are coming up with all these little social tricks trying to get people to make an exception to further propagate the illusion that proprietary software is "good", the "best money can buy", or whatever.

    You paid for the software. It's yours. You got screwed. Let people know! If you got screwed at the used-car lot, you'd let your friends know the details... you'd even feel socially obligated to do so. Software is NO different. You are socially obligated to blow the whistle for every little thing you find, and blow it till you're blue in the face; you paid for it, and you didn't get what you expected. It is NOT illegal to blow the whistle on crappy products you end up paying for. In fact, for some products it's a federal offense to pawn off crap to the consumer (think Lemon Laws in the United States). If you really want to get technical, then there already is legal precedent set in this regard because it's illegal to sell a car that is reasonably too problematic in the United States. Maybe we should make it illegal for software Companies to release crappy and overly buggy software too!

    If you find an exploit, write up a concise report, sample code et al., and hit the "Send" button as soon as you can. DO IT!
    • Companies have no other interest or goal other than to make money. Fundamentals people, fundamentals! If you think, for one second that an idea from any company not resulting in immediate profit is correct, you are a fool.

      This stems from the simplistic notion that a lot of engineers have that companies are just like giant computers, programmed only to maximize money creation.

      Over time, you will come to realize as I have that companies actually are composed of many PEOPLE. Companies do nothing by themselve
      • Generally the companies that do well are not trying to maximize profit, but are trying to help customers the most. That is, in the end, what leads to money - simply trying to lower expenses is not a long term path to wealth creation, and only works for so long - it cannot be sustained.

        The more foolish fool is one who believes as you do - that all companies have simplistic goals, that are not at all influenced by human behaviour.


        The Janitor might have these human emotions you speak of. The Secreta
    • If a Company releases software that is buggy, the very instant you find an exploit it should be released to the public with all that you have researched, including example exploits.

      Why? If there is no known exploit in the wild, and without the information I have there is unlikely to be one, why should I make things easier on malware authors? If the vendor has a history of quickly fixing reported vulnerabilities, how does it benefit me to undermine the security of my own system in that way? Full disclosure migh

  • ..."hackers disagree".
  • If you don't, they may find a way to get a court order to make it so you can't disclose instead of fixing it.

    Or any of a number of dirty tactics.

    Sorry, but even the briefest look at the history of corporate attitude indicates that they can not be trusted. This goes back to corporations before America even existed, not just American corporations. One reason why I agree with Ben Franklin when he said that the constitution should ban corporations.
  • I've always believed in this simple procedure:

    1) The problem is discovered.
    2) The problem is reported to the vendor, the report including a fixed reasonable date for either a fix or the date for the final fix (to allow for tough fixes). The time allotted reflects the severity of the problem - more severe results in less time.
    3) When this fixed date or the vendor date (if given) is reached, the problem is disclosed regardless, complete with POC exploit if available.

    This method forces vendors to take security r
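
    A toy sketch of how step 2's severity-based window could be parameterized (the mapping below is an arbitrary assumption, not an industry standard):

    from datetime import date, timedelta

    # made-up severity-to-days mapping; more severe means less time
    DAYS_BY_SEVERITY = {"critical": 14, "high": 30, "medium": 60, "low": 90}

    def disclosure_date(reported_on, severity):
        # the date after which the finding gets published regardless
        return reported_on + timedelta(days=DAYS_BY_SEVERITY[severity])

    print(disclosure_date(date(2007, 1, 4), "high"))   # 2007-02-03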
