Security

Group Releases Anti-Disclosure Plan

dki writes "SecurityFocus reports that the Organization for Internet Safety (OIS), a group of 11 of the largest software and security companies, has released a public draft of a proposed bug disclosure standard. The document outlines a process for reporting and disclosing bugs that aims to eliminate the release of exploits to the general public. Not surprisingly, the OIS was founded out of a Microsoft-hosted security conference. Comments on the draft will be accepted until July 4th; the final copy will be released at the Black Hat Conference in Las Vegas."
  • by robdeadtech ( 232013 ) * on Saturday June 07, 2003 @03:44AM (#6137781)
    some of the more disturbing portions of the defined process...

    "7.1 Advance Notification

    This document does not address processes for notifying selected groups of users about vulnerabilities in advance of the general population. While such "pre-release notifications" are sometimes done, and in very well-controlled cases can be carried out effectively, they are not a recommended practice in the general case. Because this document addresses only activities that are appropriate for typical cases, advance notification is beyond its scope."

    "8.2 Use of Third Parties

    In some cases, investigations may be made more effective by the use of people or organizations other than the Finder and Vendor. There is no requirement to use a third party, but in cases where one is used, it should be a person or organization that the Finder and Vendor have agreed to in advance of its involvement. Characteristics of a good third party include sound judgment, freedom from bias or conflicts of interest, demonstrated technical expertise in security technologies, and discretion regarding the handling of the information it is entrusted with to resolve the dispute. Third parties normally serve in a voluntary capacity, as a service performed in the public interest."

    Members of the OIS... (From the OIS site http://www.oisafety.org/about.html#1) What companies are members of OIS?

    The current members are: @stake, BindView, Caldera International (The SCO Group), Foundstone, Guardent, ISS, Microsoft, NAI, Oracle, SGI, and Symantec.

    We're actively soliciting software vendors and security research companies to join. Send OIS your feedback on the draft until July 7th! (From the OIS site http://www.oisafety.org/resources.html) "Comments on the Security Vulnerability Reporting and Response Process should be sent via email to draft-feedback@oisafety.org [mailto]. Comments should include your name, address, and email contact information. Organizations submitting public comments should include the name and title of the person submitting the comments. While OIS will respond to as many comments as possible, because of the anticipated volume of comments, we cannot guarantee an individual response to every comment."

    • by CAIMLAS ( 41445 ) on Saturday June 07, 2003 @03:49AM (#6137795)
      This makes me wonder - who can be a wonder of OIS? Just anyone? Only people with pertinent projects? Only companies? What about groups like the Debian maintainers or the core kernel devel team? My impression from the article was that it was company or corporate institution-exclusive.
      • by southpolesammy ( 150094 ) on Saturday June 07, 2003 @04:41AM (#6137900) Journal
        To me, it sounds like this is only relevant to organizations that maintain closed-source code trees, since the impetus on fixing the bug lies with their in-house development staff rather than with a decentralized team of programmers. It's to "protect" the company's bottom line from being affected by the propagation of bug exploits for which they have not done their part in fixing.

        On the flip side, this mentality rarely exists in open-source code projects due to the lack of a need to safeguard against decreased shareholder value. Also, IMHO, open-source developers continue to maintain a sense of pride in their work that is for the most part missing in closed-source projects due to the direct recognition (and therefore, culpability) that is possible by putting your name to a piece of software that others can peruse and make comments and criticisms about. With closed-source projects, the company's reputation is primarily at stake while their programmers are largely hidden behind the web of corporate logos and NDA's, and therefore may not have reasons as strong as those of OSS developers to fix and release bugs in a timely manner.

        So in other words, it's to protect lazy software companies from being held accountable for their own poor coding practices and lackadaisical maintenance policies.
        • To me, it sounds like this is only relevant to organizations that maintain closed-source code trees, since the impetus on fixing the bug lies with their in-house development staff rather than with a decentralized team of programmers.

          The open source community is no better at getting patches deployed than the commercial vendors and is often worse. There are plenty of machines out there running unmodified Redhat 6.2 with lots of known vulnerabilities.

          The recent DDoS attacks on the DNS roots were mounted f

          • I disagree with you. What you are talking about here is the responsibility of the sysadmin, not the vendor. The open-source community is responsive to security bugs and usually has a patch to correct the problem within a short time. However, it is the responsibility of the sysadmin to apply the patch, and it is his/her responsibility to monitor security issues and respond accordingly.
          • by nolife ( 233813 ) on Saturday June 07, 2003 @10:19AM (#6138502) Homepage Journal
            The open source community is no better at getting patches deployed than the commercial vendors and is often worse.

            HUH? You are comparing apples to oranges. So how many commercial vendors do you know that deploy and install patches on your systems?
            Deployment responsibility is the end user's, whether using open source or closed source. The scope of this article and the parent poster is about bug-fix acknowledgement and patch creation by software creators. How these patches get installed on your specific systems is another topic.
          • "The open source community is no better at getting patches deployed than the commercial vendors and is often worse."

            I disagree. For example, I run both Debian and FreeBSD machines. In both cases I subscribe to their security mailing list. Whenever a vulnerability becomes known in any of the thousands of packages supported by them, I get an email telling me about it, with instructions on how to go about installing the patch.

            I don't know of any commercial OS vendor that does the same thing. Windows only deploys p
      • This makes me wonder - who can be a wonder of OIS?

        At least in this case, I don't think any of the OIS members are really wonders. They're just trying to pull something over the public's eyes.

    • I wouldn't describe this as discouraging. I am not in the least bit discouraged when the main competitors to Linux implement a security plan that will be less than effective. Good for them, may they get 1000 security holes.
    • by ignatus ( 669972 ) on Saturday June 07, 2003 @04:01AM (#6137828)
      Well, I don't think obscurity can really improve security. Most hackers find the information they need anyway. This is just an attempt by the software companies to cover up their lousy code. If they would implement secure code in the first place, none of this would be necessary. Way to go, closed source!
      • I don't think the parent is insightful at all. I think it's kneejerk. Imagine a complex piece of software, a complex exploit, and a couple thousand users of the vulnerable software. The company responsible is made aware of the exploit and sets about to fix it. The fix involves more than switching to strncmp() or something trivial like that, so fixing the bug takes a couple days. Meanwhile, the thousands of customers have to be notified and convinced to upgrade when a patch is released. If the exploit is p
      • "If they would implement secure code in the first place..."

        That's the whole point, isn't it?

        SB
    • by GauteL ( 29207 ) on Saturday June 07, 2003 @04:23AM (#6137873)
      The SCO group is part of this?

      The obligatory:
      1. Create crappy software
      2. Make other people correct its flaws
      3. Sue the fixers for copyright infringement
      4. Profit!
  • microsoft: there's a bug in our bug disclosure process. apply this patch using windows update.

    user: windows update recommends that i install 14 critical updates.

    microsoft: you cannot install this patch without all the updates

    user *installs 10 updates, 11th fails*

    user: uh...

    microsoft: it's your problem. btw, the bug disclosure process bug is your problem too.
  • by robdeadtech ( 232013 ) * on Saturday June 07, 2003 @03:49AM (#6137796)
    Section 9

    All OIS participants must either look like Peter Norton or Steve Ballmer. Minimally this can be performed by wearing khaki pants, a blue denim shirt, and sensible shoes.

    No person or organization wearing black, having purple hair, or listening to obscure music may participate as either a Finder, Vendor, Coordinator, or Arbitrator.

  • Excellent! (Score:5, Funny)

    by appleLaserWriter ( 91994 ) on Saturday June 07, 2003 @03:49AM (#6137797)
    I welcome the day when we no longer have security bugs.
    • I welcome the day when we no longer have security bugs.
      That would be the day everyone stops using computers?
      • Probably 90% of security holes are buffer overflows...

        • 90%? Maybe that's a little bit too much, but all in all, most other 'higher' languages are written in hm, well, C/C++.

          Using something that was written in C/C++ doesn't really exclude buffer overruns; it just minimizes them.

          • It's a lot easier to audit one C/C++ implementation, even if it is something gigantic like a compiler, than it is to audit an entire system's worth of code. In any case, it's the translation stage that provides safety -- when the language can guarantee safety, it translates directly to C/C++; when it cannot, it inserts runtime checks in the translation. The only way this would fail is if the translation stage was too aggressive in optimizing and failed to insert a runtime check where it should have. This
    • What a boring world that would be. Nothing to talk about on Slashdot!
    • Re:Excellent! (Score:5, Insightful)

      by Florian Weimer ( 88405 ) <fw@deneb.enyo.de> on Saturday June 07, 2003 @05:40AM (#6137988) Homepage
      I welcome the day when we no longer have security bugs.

      Look how this group defines a security vulnerability. According to this one, there are no security vulnerabilities.

      For the purposes of this process, a security vulnerability is a flaw within a software system that can cause it to work contrary to its documented design and could be exploited to cause the system to violate its documented security policy.

      There are a few publicly available systems with documented design and security policies, but it's explicitly stated that the security assumptions only hold when the system has cooperative users and is connected to a trusted network (better yet, no network at all).
      • ...a security vulnerability is a flaw within a software system that can cause it to work contrary to its documented design...

        This is an interesting definition now isn't it? Does this mean that all undocumented behavior is considered a security flaw? If so, we all better get busy. We've got about 15 million vulnerabilities to disclose just for M$ Word alone!!!!
        • Re:Excellent! (Score:3, Interesting)

          Does this mean that all undocumented behavior is considered a security flaw?

          Quite the contrary. According to the definition, undocumented behavior is perfectly acceptable, even if it has obviously unwanted consequences.
      • Good. What are you complaining about? Didn't you see the part that says you are free to disclose whatever you found if the company does not think it is a security flaw?
  • One line summary (Score:5, Insightful)

    by evil_roy ( 241455 ) on Saturday June 07, 2003 @03:54AM (#6137809)
    "As discussed in paragraph 7.3.7 above, the Finder and Vendor should act in concert to release their respective advisories nearly simultaneously, and only after a remedy is available."

    It's all in that last phrase "only after a remedy is available"

    • Re: One line summary (Score:4, Interesting)

      by Black Parrot ( 19622 ) on Saturday June 07, 2003 @06:23AM (#6138056)


      IOW, it's back to the bad old days when Microsoft didn't bother trying to fix exploitable software at all.

    • It's all in that last phrase "only after a remedy is available"
      See, that's a very funny way to do it. What are they trying to accomplish by this? Remember SQLSlammer? How long had the "fix" for the vulnerability used by SQLSlammer been available? How hard did the worm hit? I don't care what the group thinks it's going to achieve with their recommendations on vulnerability reporting; with the companies involved, it won't do any good. Even when Microsoft has the fix available, they are stil
    • Yes. Except if the vendor does not think it is a vulnerability, if the vendor fails to obey the deadlines, if communication between you and the vendor breaks down, if the flaw becomes actively exploited, or if the flaw gets otherwise published, in which case you can release full details immediately.

      Then again, you only read what you wanted to read, didn't you?
  • by DarklordSatin ( 592675 ) on Saturday June 07, 2003 @03:55AM (#6137811) Homepage
    Personally, I've always thought that a good disclosure policy would be one that informs the software's vendor of the problem and then waits some period of time before disclosing it to the public.

    Of course, I'd recommend a very short wait time, probably between 48 hours and one week. Just enough time to solve the problem if enough resources are diverted to it, but not long enough to allow anyone to ignore the problem until later.
  • Response from 1337 (Score:5, Insightful)

    by SeanTobin ( 138474 ) * <byrdhuntr AT hotmail DOT com> on Saturday June 07, 2003 @03:55AM (#6137815)
    1337 Reveals limited disclosure plan

    Eight hacking groups join together to set an official standard for limiting disclosure of software security holes.

    #1337, Efnet - Eight computer hacking groups wrapped up a three-day Exploits in Computing Forum on Thursday by formally announcing a coalition against full disclosure of vulnerability information, ending a week of intense speculation, and immediately sparking controversy.

    WanSan, TCPuke, NetLoft, HeavySak, BitEvil, SYNergy, HPLat, and DownScope joined together to declare they would immediately begin following a policy of limited public disclosure of security vulnerability information. Members of the coalition who discover new vulnerabilities will omit from their initial public advisories any details about how a hole might be exploited in an attack, and will not include code that demonstrates the bug. Thirty days after the first advisory, a more detailed notice can be released under the rules. Full disclosure of the vulnerabilities will be shared only among the members for "testing" purposes.

    "We felt that as responsible industry leaders, we, as a voluntary organization, are going to follow a set of reasonable standards," said DXNo, manager of 1337's intrusion exploitation, in an interview.

    1337 will also draft a proposed international standard for notifying vendors and the public about newly-discovered software security bugs, following the group's limited disclosure ethic. The organization will admit new members, under an as-yet unwritten set of bylaws. The initial draft of the limited disclosure ethic will limit the disclosure to the home pages of the vulnerable sites.

    A chief objective of the group is to discourage 'full disclosure,' the common practice of revealing complete details about security holes, even if publication might aid attackers in exploiting them. The group believes that any type of full disclosure would assist software vendors in patching various vulnerabilities before they can be widely exploited.

    Publishing complete information, and sometimes "exploit" code that demonstrates a vulnerability, is de rigueur among many computer security professionals, who argue that malicious hackers can acquire the same information themselves, and that network administrators and security gurus often need technical details to properly defend themselves from attack.

    But Culp criticized the practice in an essay published on a Microsoft Web site last month, and blamed "information anarchy" for the epidemic of malicious worms that have struck the Internet in the last year. "It's high time the security community stopped providing blueprints for building these weapons," Culp wrote.

    Under the plan, member groups would share detailed information during the 30-day grace period with "other communities in which enforceable frameworks exist to deter onward uncontrolled distribution." The last category would allow member groups to share details with one another. "They're not going to ban it among themselves," says Levy. "They might be willing to limit the public access to this information, but I highly doubt that they'll limit it among each other."

    "People have to do it Microsoft's way or they'll have this group telling them that they're acting irresponsibly," says Maiffret. "It's going to drive people into the underground, and could lead to more people breaking into computers." The majority of members in 1337 agree with Maiffret's assessment.

    "We are not trying to form a secret society of exploiters," says CKLawz. "We are just creating a standard... This represents one of the first process standards between security companies and vendors."

    wyZopa1 estimates that it will take one or two months to produce drafts of the proposed RFCs. He emphasizes that the standards would not just limit vulnerability disclosure, but would also spur vendors to be more responsive to security vulnerability reports. "My goal in the RFC is to have equally stringent standards for vendors and exploiters," says wyZopa1. "We worked hard to discover these vulnerabilities; the developers should work just as hard to fix them. Providing them with all our tools without compensation is not what software development is about."
  • by Anonymous Coward on Saturday June 07, 2003 @03:57AM (#6137819)
    Face it. Anyone can do security. All you need is the will, the drive, the talent, and the know-how. Which means me, the average geek, ex-blackhat hacker, technologist, futurist dude, can play with stuff, learn stuff, share with others, and build my CAREER off it. Not after this. This would keep all the knowledge I gain from other professionals, from Bugtraq, etc., out of my hands, forcing me to subscribe to these people's services, philosophies, etc. No. I will not. This will not stop hackers; this will only stop the admins who are trying to keep everything together. This will be a tax on the admin who manages 1500 machines and has no time to read simple stuff. Hackers don't need Bugtraq. It's useful, it's a resource, but if they want in badly enough, they'll get in. Fuck em all.
    • by DrSkwid ( 118965 ) on Saturday June 07, 2003 @04:09AM (#6137840) Journal
      All you need is the will, the drive, the talent, and the know-how.

      Well, that's a short list just anyone could sort out in a weekend
      I have to disagree with much of your post. As you yourself said, anyone can do security. And there is nothing in this proposal per se that would prevent any of your stated goals. Reading reports of others' exploits is hardly the way to build a career. If it comes to that, better you should seek out the exploits themselves, and do your own analysis, just like these people. (And the proposal *does* require the vendor to maintain a publicly accessible repository of vulnerabilities.)

      I'm also of mixed mi
  • by ATAMAH ( 578546 ) on Saturday June 07, 2003 @03:57AM (#6137821)
    Why don't they ask themselves why is their product so weak and vulnerable, instead? How much of those vulnerabilities were exposed and fixed *because* they were exposed? It's a GOOD thing. There was a bug, people found it, tried reporting it, got told to fuck off, published it - and the bug got fixed.
    • by Anonymous Coward
      Two problems:

      1) Exposing a vulnerability will make any company that does not try to "repair" it liable -> costs money

      2) Having to "repair" the vulnerability because of the liability problem will cost money too

      Now look at the proposed "solution":

      a) Don't tell anyone you know about the problem, and you can use "plausible deniability" (just let your opponent *prove* you knew about it). Result: nobody can effectively sue you, and the threat of losing money is gone.

      b) The liability threat (aka: los
      • Or it could be a win-lose situation, with closed-source vendors losing and open-source vendors winning, in the long term. After customers see how their open-source products have very few security problems, and those that pop up are fixed quickly, and after they deal with never-ending security problems with their closed-source systems (despite this stupid new "plan" of theirs, which will just eliminate a lot of their incentive to close the security holes, but do nothing to keep hackers from learning of them
  • Funny that a New York Times Ad was rotated into this story...

    They, in particular, excel at non-disclosure... Perhaps they'll be joining this "Organization for Internet(Information) Safety"

  • Nice 'process' (Score:5, Informative)

    by HornyBastard77 ( 667965 ) on Saturday June 07, 2003 @04:03AM (#6137831)
    From the draft:

    2.2 Phases

    The basic steps of the Security Vulnerability Reporting and Response Process are:

    # Discovery. The Finder discovers what they consider to be a security vulnerability (the Potential Flaw).

    # Notification. The Finder notifies the Vendor and advises it of the Potential Flaw, and the Vendor confirms that it has received the notification.

    # Investigation. The Vendor investigates the Finder's report in an attempt to verify and validate the Finder's claims, and works collaboratively with the Finder as it does so.

    # Resolution. If the Potential Flaw is confirmed, the Vendor identifies where the Flaw resides, then develops a remedy (in the form of a software change or a procedure) that eliminates or reduces the risk of the vulnerability.

    # Release. In a coordinated fashion, the Vendor and the Finder publicly release information about the vulnerability, along with its resolution.

    Now look at this in the context of the recent [securityfocus.com] MS Passport Vulnerability to see how effective this process is.

    As an aside, this draft is backed [oisafety.org] by MS and SCO, amongst other companies. It'll be interesting to read the amount of bashing this gets over the weekend.

  • The 11 Companies (Score:5, Informative)

    by x mani x ( 21412 ) <.ac.lligcm.sc. .ta. .esahgm.> on Saturday June 07, 2003 @04:09AM (#6137841) Homepage
    FYI, the 11 companies involved are:

    Microsoft
    @stake
    BindView
    SCO
    Foundstone
    Guardent
    Internet Security Systems
    Network Associates
    Oracle
    SGI
    Symantec

    -Mani
    • I can't believe @stake is involved ... doesn't it consist largely of former l0pht hackers?
      • by Trepidity ( 597 )
        I think it was around 7-8 years ago they went from "releasing cool tools" to "whoring for big consulting contracts and never, ever releasing anything cool." Have you seen their website [atstake.com]? Microsoft's website looks more "authentic" than that crap.
    • I can think of two primary reasons to be in such a group -- to stay ahead of the game as a security specialist, or to hide your own security weaknesses.

      Microsoft, I guess, is trying to do both given its recent anti-virus firm acquisition.

      It's interesting -- and possibly encouraging -- to note that neither Apple nor IBM is on that list.

      --
      One of these days I'll have to sue Darl McBride for defamation of character by association. ;-)
  • A plan! (Score:3, Funny)

    by geekoid ( 135745 ) <dadinportlandNO@SPAMyahoo.com> on Saturday June 07, 2003 @04:17AM (#6137856) Homepage Journal
    Well that will stop people from releasing the information.

  • by coyote-san ( 38515 ) on Saturday June 07, 2003 @04:17AM (#6137858)
    Why would anyone follow these guidelines? It might piss off these companies, but anyone who really cares about security would realize that giving the vendors the exclusive right to disclose flaws (regardless how much time has passed or how many systems have been compromised) prevents people from making an informed decision to yank these programs until a solution is identified.

    Mapped to the real world, it's like some idiotic Police Chief knowing damn good and well that several pizza delivery drivers are mugged every night when they go into a four-block area... but refusing to say anything - not even warning these drivers to avoid the area for a while - until after the muggers have been convicted, sentenced, and in prison for a month.
    • by bezuwork's friend ( 589226 ) on Saturday June 07, 2003 @05:00AM (#6137924)
      ... it's like some idiotic Police Chief knowing damn good and well that several pizza delivery drivers are mugged every night when they go into a four-block area... but refusing to say anything ...

      Living in the D.C. Metro area, I was very upset when hearing that the D.C. Police Chief had been against revealing the make of the snipers' car when they finally found it out. Once this information was released, the snipers were caught in 2 hours or so, IIRC.

      I agree with the parent poster - this seems like an apt analogy. At least if a non-negligible number of bugs, patches, fixes, or workarounds, even if just temporary, come from unexpected sources outside of the vendors or finders.

    • > Why would anyone follow these guidelines?

      Bank on it: once they've established the 'standard' they'll start lobbying to have it made law. Disclosing a vulnerability that has been around for a couple of years will get you labeled as a terrorist.

      > It might piss off these companies, but anyone who really cares about security would realize that giving the vendors the exclusive right to disclose flaws (regardless how much time has passed or how many systems have been compromised) prevents people from

  • Doh.... (Score:5, Funny)

    by Tachys ( 445363 ) on Saturday June 07, 2003 @04:18AM (#6137861)
    You have to sign a non-disclosure agreement in order to see the anti-disclosure plan
  • While this is no doubt disturbing, there's more to the security world than those 11 companies. Yes, some of the security firms involved are high profile and there's a lot lost in their soul-selling, but there will always be BugTraq - and there will always be other researchers who don't believe in this shit.
    • Re:Not to worry (Score:4, Informative)

      by lpontiac ( 173839 ) on Saturday June 07, 2003 @04:39AM (#6137897)
      there's more to the security world than those 11 companies. Yes, some of the security firms involved are high profile and there's a lot lost in their soul-selling, but there will always be BugTraq

      Bugtraq is one of those 11 companies. (Bugtraq is part of Symantec)

      • (Bugtraq is part of Symantec)

        Well don't I look like a dick? :) Isn't it a full disclosure list? Wonder what's to become of that, then.

      • Re:Not to worry (Score:3, Interesting)

        by mbogosian ( 537034 )
        Bugtraq is one of those 11 companies. (Bugtraq is part of Symantec)

        A challenge: create an Open alternative to BugTraq

        I have registered the domain names opentraq.org [opentraq.org] and opentraq.net [opentraq.net]. I am willing to have them resolve to DNS servers belonging to a group of volunteers who wish to start and maintain an Open alternative to the BugTraq website. (GNU [gnu.org]? Mozilla [mozilla.org]? Anyone else interested?)

        I will continue to renew the registration as long as someone wants to continue the project. If necessary, I may be willing to t
  • by BEA6D ( 124745 ) <requiredNO@SPAMdiplayedpublicly.com> on Saturday June 07, 2003 @04:27AM (#6137879) Homepage Journal
    can't hurt me.
  • OIS Members (Score:2, Redundant)

    by evenprime ( 324363 )
    According to their page [oisafety.org], the members are:
    • @stake
    • BindView
    • Caldera International (The SCO Group)
    • Foundstone
    • Guardent
    • ISS
    • Microsoft
    • NAI
    • Oracle
    • SGI
    • Symantec

    Considering their backgrounds, it is sad that @stake [computerworld.com] and ISS [linuxworld.com] are involved in an anti-disclosure group.

  • by jkrise ( 535370 ) on Saturday June 07, 2003 @04:44AM (#6137904) Journal
    The document mentions only Finder, Vendor, etc. What about the user? Suppose the Finder tells the vendor, "Hey, there's this bug in Passport Password Reset." Now the vendor (Microsoft) works in conjunction (collusion?) with the finder, and says, "Look - this is a trade secret. Wait for a few months, and we'll watch if anyone's using this bug except us."

    Poor Joe ServicePack is the one affected, and he figures nowhere in this scheme of things.
    • Err, no, not "wait for a few months". They have to make available a progress report every seven days, detailing what progress was made since the last report, what they'll be doing next, and other information.

      If communication breaks down (you say "You are not taking it seriously", they say "Yes, we are", you say "No, you're not", etc), the process stops, you disclose the vulnerability, and that's it.

      Only if they are seen to be making real progress in addressing the vulnerability will you stay with the process.
  • Title (Score:4, Funny)

    by cperciva ( 102828 ) on Saturday June 07, 2003 @05:05AM (#6137932) Homepage
    Shouldn't the title to this story have been "Group Discloses Anti-Disclosure Plan"?
  • by BillsPetMonkey ( 654200 ) on Saturday June 07, 2003 @05:08AM (#6137936)
    This proposal basically calls for the public to act in the same way as an employee would on finding a bug in the software. Perhaps I missed something here, but if a bug is sourced in the public domain it should be disclosed there as well.

    If they want to put me on the payroll, I'll QA and report their software using this convenient bug ticket they've provided. ;)

    • but if a bug is sourced in the public domain it should be disclosed there as well

      I see nothing in the draft that forbids this.

      Indeed, this is going to propel OSS onto the desktops of the masses.

      Once the vendors get to control what information gets disseminated about their bugs the cost-effective way of dealing with these bugs will not be to fix them, but rather to just sweep them under the rug.

      Yeah, some minor security holes might be patched, but something major, like the MIME/filetype exploit in Windows

      • All OSS has to do is be cool. Do honest work, if you find a bug, report it to everybody and get the fix out quick. I'd rather anyone hacking my system to have a tiny window of opportunity than be able to exploit at will.

        The sad thing is, there will be an army of Windows users who observe that OSS software reports lots of bugs every day, while Microsoft reports none. Obviously, this is evidence of the superiority of the closed source programming model. *sigh*
  • Ha, I say let 'em! (Score:3, Interesting)

    by zonix ( 592337 ) on Saturday June 07, 2003 @05:11AM (#6137942) Journal

    While this is totally bogus, perhaps we should just let them have their little secrecy club. You know, see how well they do compared to other vendors who are more open about their bugs.

    But really, if there's some exploit in some app/service, everybody should know immediately: a patch from the vendor is not always the only solution to an exploit. You could shut down the service, if applicable, tweak some parameters to dodge the exploit, filter some packets, etc. Or you could even fix the exploit yourself in some instances (in the code, that is).

    But hey, who can blame them for wanting to not disclose this info, they're the only ones who can/are allowed to fix the bugs. :-)

    z
  • by BrynM ( 217883 ) * on Saturday June 07, 2003 @05:19AM (#6137961) Homepage Journal
    They forgot to publish the third column:
    Users/Consumers


    3.1.1
    Do nothing. Hope nothing happens to you... not that we would tell you if it could. What you don't know can't hurt you.

    3.1.2
    Do nothing. Hope nothing happens to you... not that we would tell you if it could. What you don't know can't hurt you.

    Repeat until section 7 ("Release Phase")...
    7.2.1

    Thank us for not telling you that your data was vulnerable. Wait for us to issue a patch.
    Unless..."Premature Release"
    7.4.1

    Yell "WTF" and bitch a little. We wouldn't have told you if we didn't have to.
  • Is this a joke? (Score:3, Insightful)

    by BoldAndBusted ( 679561 ) on Saturday June 07, 2003 @05:22AM (#6137965) Homepage
    Objective: A security researcher, customer, or other interested person or organization discovers what they consider to be a security vulnerability, validates the finding, and prepares a report describing the Potential Flaw. Vulnerabilities are found in software products through both directed and undirected research by a variety of individuals: security consultants, IT professionals, independent researchers, academics, etc.
    Anyone see some types of people missing from this list? Like, 12 year old gas huffers, shut-ins, malcontents.... uh, hackers, and... crackers?!? Leave it to a committee of corporations to make a gargantuan "27 B-stroke-6" form (thanks, Terry Gilliam) to fill in just to request that a vendor FIX THEIR OWN BROKEN INSECURE SOFTWARE! What a joke. Sheesh!
    • Re:Is this a joke? (Score:2, Interesting)

      by Anonymous Coward
      Like, 12 year old gas huffers, shut-ins, malcontents.... uh, hackers, and... crackers?!?

      You fail to understand the nature of most security incidents I see.

      The people who actually discover the vulnerabilities are the security consultants, IT professionals, independent researchers, academics, etc.; then the malcontented kiddies and gas huffers take those proof-of-concept scripts and use them to "hax0r"... hence "Script Kiddie"...

      If you don't release the exploit information and example code until after it's p
      • I understand your point there... I mean, there are a LOT of "wannabe" script kiddies around, who would just learn a tiny bit of cut-n-paste and gcc -o attack_them bug.c to go and "hack" around!

        The problem here then is _WHO_ handles the security bug info. That is, if I knew that my bug would be taken care of by the vendors/developers ASAP, and we put a "well-defined" time window of X days for them to fix it before they release, then I could agree.

        BUT, we know that the morons which want (LOOOVE for

      • Re:Is this a joke? (Score:3, Insightful)

        by schlach ( 228441 )
        Those teenager hax0r d00ds 99.5% of the time DO NOT find any new vulnerabilities.

        Don't believe the myths, yourself. Sit on Incidents for awhile. When people at the frontlines are saying, "I'm getting a lot of activity on port x, seems to be trying this against Apache/IIS/Wu, anyone else seeing it?" which eventually leads to "I was compromised sometime last night/week, found this in my logs, anyone recognize it?" which eventually leads to a security researcher rediscovering the vulnerability in the affecte
  • uggh... (Score:4, Funny)

    by zonker ( 1158 ) on Saturday June 07, 2003 @05:40AM (#6137989) Homepage Journal
    I'm just waiting for Bruce Schneier [counterpane.com] (author of Applied Cryptography [counterpane.com] and founder of Counterpane Internet Security [counterpane.com]. Oh yeah, and author of the Twofish [counterpane.com] and Blowfish [counterpane.com] algorithms to boot.) to comment on this in the next Cryptogram [counterpane.com]...

    I'm sure he'll have some interesting things to say. ;)

  • Thank you! Please don't let everybody know how to hack... Job security ya know...
  • To me, it looks as if, slowly, we can't trust any software anymore because of this policy. I mostly stopped buying MS products because of this, and a few more companies will find out sooner or later that I'm not their customer anymore.
    And even if I could analyze the source code, I don't have the time to do so, especially with big projects like sendmail, or PostgreSQL that I use a lot.
  • full disclosure (Score:5, Insightful)

    by oohp ( 657224 ) on Saturday June 07, 2003 @07:11AM (#6138130) Homepage
    Full disclosure only stimulates vendors to come up with a patch quickly. It's their fault in the first place that there was an exploitable bug. We don't need a law to regulate bug disclosure. This is security through obscurity, and it doesn't work. Information will leak and a limited number of people will take advantage of it anyway, while I, the user, won't know to shut that buggy service down.
    • "We don't need a law to regulate bug disclosure"

      I agree - but this isn't a law, it's a group of companies deciding that they know best, and trying to force a 'standard' onto everyone else for no particular reason as far as I'm concerned.

      Whilst it remains a 'recommended practice' there will be those who follow it and those who won't, and all the while it will be completely unenforceable - so what is the point?

  • by 73939133 ( 676561 ) on Saturday June 07, 2003 @07:20AM (#6138147)
    Let's call a spade a spade: companies like Microsoft should be liable for negligence when they sell buggy software for lots of money, at least for the amount of money the user paid for the software. Now, instead, not only do they want to be able to release buggy software with impunity, they also want to get free testing and bug fixes from users, and all that without the embarrassment and risk of having their bugs revealed.
    • Good call! In fact, I would make it industry standard to have just such a system. But at the end of the day, I think that it just wouldn't work. Because it isn't JUST Microsoft that has buggy software; you would also end up damaging all the other OS companies.

      I simply cannot think of an OS where I haven't had major problems at one time or another. A quick list of a few below (take Windows as being in there obviously).

      Let me think.... Solaris 8 x86, wouldn't even boot. RedHat 7.2 screwed my Windows insta

    • As long as we're calling spades, how about the end user who buys the buggy software for top dollar in the first place? Isn't he liable for (not) doing research, doing a cost/benefit analysis, making an informed decision? If a software company doesn't want a public bug disclosure process, as He Who Holds The Credit Card, I will make every effort to avoid their software. I consider that my personal responsibility.
  • by gmuslera ( 3436 ) on Saturday June 07, 2003 @07:26AM (#6138155) Homepage Journal
    Now normal users and expert users alike will know for sure that Windows is full of bugs, and that they should be afraid of them, but neither will be able to explain why; they'll just have to "have faith".

    There are precedents where doing this could be plain wrong, like the last WebDAV [nwfusion.com] vulnerability in IIS, which was discovered only after being actively exploited by black hats. Hiding the facts will not stop crackers from exploiting something they learned of first; it will only leave victims unaware of what could have happened, or already has happened, to their sites.

  • by Anonymous Coward on Saturday June 07, 2003 @07:32AM (#6138167)
    The only time most of those guys release patches in a reasonable timeframe is when they're looking at a PR or a lawsuit debacle. If I find a vulnerability, I'll shoot the vendor an email and see if they hop to it or not. If they hop to it, great. If they blow it off, then it's posted publically before they patch.

    That's how it'll always be handled by some people who find vulnerabilities. That's the way it's been handled in the past, and the world hasn't ended. It's just occasionally uncomfortable for a big player that sat on their ass for too long. Tough toenails. They need to put the same energy into actually fixing reported problems that they put into trying to set up this complicated mechanism to cover up problems that they fail to address.
  • by HighOrbit ( 631451 ) on Saturday June 07, 2003 @07:39AM (#6138173)
    Obscurity will eventually always fail as a security protocol.
    Being the cost-minimizing/profit-maximizing organizations that they are, vendors will always put off incurring new costs (like fixing bugs) until somebody "lights a fire under them." That's why quick public disclosure is important: the bug *will* be discovered by the bad guys sooner or later, so it is better to fix it now than later.
  • Direct from the OIS site...

    "Does the OIS plan to make these processes mandatory in some way?


    Absolutely not. These are best practices, not laws, and people will always be able to choose whether to use them or not."


    Ok, so next time, choose not to participate.

    Next!

    H
  • Fuck 'em (Score:3, Interesting)

    by BigBadBri ( 595126 ) on Saturday June 07, 2003 @08:57AM (#6138310)
    They write broken software.

    Someone finds a hole.

    They'd like us to sit on our arses while they take their time fixing it.

    Meanwhile, some bad ass mofo finds the same hole, and exploits it.

    It's the threat of imminent disclosure that makes the vendors fix their fucked-up software - to delay is merely to invite problems.

    I repeat - fuck 'em.

  • This is surely part of Microsoft's Secure Computing Initiative ... If we can't make software more secure, at least we can decrease the public's perception that it's insecure. See no evil, hear no evil, speak no evil.
  • I find it hilarious how every now and then some vaguely important-sounding organization creeps out of the shadows and proclaims something completely unfounded regarding a topic that is undergoing heated discussion, even though nobody has ever heard of them before.

    This happens all the time with those "think tanks", and then with all those bogus agencies selling certification in the security industry...

    Is this a new art form or something? Hacking the stupid mainstream media who will just present it as fact if it comes from an
  • Searching for bugs and researching the exploitation of same pays off in the following ways:

    It can be interesting, and it improves one's ability to read, write, and understand code.

    Doing so in a public forum can create reputation capital for one's consulting services or products. In some cases it may lead to employment.

    Some folks are truly motivated by the desire to see vendors patch their software. This is sometimes a result as well.

    The companies involved in the OIS have already established their repu

  • ...you can rest assured that I'm going to disclose it widely, anonymously and with great fanfare.


  • How many "Day Zero" exploits is it going to take before this concept is abandoned?
  • that the exploits will be released to the net anyway, in the form of hackers who don't give a shit anyway.

    This will be yet another attempt by MS and co. to stymie OSS (making it illegal to report bugs) and control the world.

    The world will respond with code red III and Slammer II.
  • by Beryllium Sphere(tm) ( 193358 ) on Saturday June 07, 2003 @11:51AM (#6138837) Journal
    http://www.computer.org/proceedings/sp/1046/10460214abs.htm

    This was presented at the "2001" Oakland conference, held in 2000.

    There were some fascinating points in the presentation that weren't spelled out in the paper.

    Disclosure didn't matter for the exploits they studied. That's right, didn't matter, for good or for ill. The rate of exploitation in the wild didn't spike after public announcements. The exploitation rate didn't go down after patch release.

    Events that did matter included the public release of scripted exploits, and the cycle of the academic year.

    Exploitation rates finally faded away over time but the fade-out curve didn't relate to patch adoption or availability. In fact, sometimes an exploit would flare up again after fading away. The authors attribute the decay of the exploit rate to boredom on the part of attackers, and maybe to the gradual replacement of vulnerable software.

    If they're right, the policy lessons that follow (you're getting my opinion here) are:
    o Disclosure is harmless and should be allowed
    o Patches should be encouraged. They won't stop the next Code Red but they have benefit for anyone willing and able to install them.
    o Distribution of attack scripts has a high social cost.
    o When a vendor ignores a problem until there's an attack script in the wild, you've got a dilemma.

    Here's a brainstorm: full liability for the damages caused by exploitation of a security hole. Liability lands on the vendor if they ignore a bug report that includes a demonstration attack, lands on the script author if the attack goes into the wild before there's been a chance to fix the problem.
  • This will make my life so much easier; I won't have to worry about reading about the latest bug or attack.

    I'm sure my boss feels so much safer.

    All kidding aside, it's all about covering their butts, not admitting problems. But does this increase their liability? They are withholding vital information from their customers about faults in their products.

    If the automotive industry did that, they would be raked over the coals.

    What's next, making it illegal to discuss bugs in a public forum without their permission? Errrr...
  • I think this says it all:
    • The basic steps of the Security Vulnerability Reporting and Response Process are:
      • Discovery. The Finder discovers what they consider to be a security vulnerability (the Potential Flaw).
      • Notification. The Finder notifies the Vendor and advises it of the Potential Flaw, and the Vendor confirms that it has received the notification.
      • Investigation. The Vendor investigates the Finder's report in an attempt to verify and validate the Finder's claims, and works collaboratively
  • As if this even matters. Sure, it's polite to tell a company about a bug before posting the exploit, but how many companies pay attention to reports of security vulnerabilities without the added incentive of public proof that it's broken?

    If you bought a stroller for your child, and somebody else discovered that it had a serious design flaw that could cause it to collapse suddenly when going over bumps, seriously injuring the child inside, would you (a) prefer that the manufacturer was told silently so that t
  • Frankly I don't much care about what they've got in a disclosure framework, as long as it meets at least a few requirements:

    1. The problem must be disclosed to the vendor in a reasonable fashion (through the vendor's preferred method if they make such known) before any other action is taken. It's only fair to give the vendor a chance to fix the problem before making it public.
    2. Disclosure, both to the vendor and the public, must include sufficient technical details to verify whether the exploit is actually valid.
  • Go and read the F*ING document, you idiots!

    Where the HELL did you find any anti-disclosure in there? First of all, that document details a _process_, in which vendor and finder (of flaw) act together to overcome a security flaw.

    It details how a finder can contact the vendor, gives firm deadlines for response time from the vendor, describes how the interaction between finder and vendor goes, assures the finder that something Is Being Done, and protects the user from disclosure of a flaw for which there is no fix.

    It
