Security

Internet Draft on Vulnerability Disclosures 114

Cowboy71 writes: "An interesting posting on Bugtraq by Steve Christey announcing the release for comment of an Internet-Draft, "Responsible Disclosure Process," prepared by himself and Chris Wysopal of @stake. You can view the full paper at the IETF site."
This discussion has been archived. No new comments can be posted.

  • Good plan (Score:4, Insightful)

    by cigarky ( 89075 ) on Thursday February 21, 2002 @09:11AM (#3043920)
    It's fair to say that there has been some disagreement between parties, but the current state of affairs allows corporations to brand anyone who discloses information responsibly (after appropriately informing the company of a vulnerability) as a blackhat hacker, and to cast themselves as the wounded party. They don't care to mention that their vulnerability has resulted in countless man-hours of sys admin time to correct the fault, deal with the Nimda/Code Red of the week and apply patches - and the corporations often maintain that they should be allowed to keep the information secret from responsible sys admins well after script kiddies are trading the exploit.

    Having a standard document will allow mature parties to avoid being branded crackers if they can follow a published disclosure protocol.

    • Having a standard document will allow mature parties to avoid being branded crackers if they can follow a published disclosure protocol.


      I'd have to disagree with that statement. The terms "blackhat" and "cracker" are inappropriate if applied to people who break into equipment they own or are otherwise authorized to compromise. I've broken into systems I had the authority to break into, and never otherwise - for example, a box where those who knew the root password were long gone and no bootable media was available. That didn't make me a "blackhat", it made me a system administrator with a job to do, that being restoring legitimate access to our equipment. Similarly, if I have an IIS box and perform a security audit on it with the goal of ensuring my IIS box is reasonably safe from compromise, I'm not a blackhat. Of course, a number of other terms come to mind if I were running IIS with any intention of being secure.


      It'll be interesting to see what's said of vendors who don't follow this proposed standard. That in itself might be more useful. "Not only does Foo Corp. produce buggy, insecure software, they don't even follow the disclosure protocol!" :) It might serve to reinforce the full disclosure argument.

    • Except that Nimda and Code Red both had patches available BEFORE the outbreaks happened. I really get tired of these two examples being brought up in the context of exploit publication/patch publication, because the exploits did not work if the patches had been applied. Microsoft has had legitimate issues to be brought up in this context, but not these two.
  • by fist_187 ( 556448 ) on Thursday February 21, 2002 @09:14AM (#3043935) Homepage
    A day or two ago, I started thinking about the DMCA and how those who support it defend it...

    Usually, they say something like "it's to prevent hackers from learning about and exploiting the weaknesses before we have time to fix them" or some similar reason. Fair enough, this is valid; I can see how this would be a good thing for preventing software piracy and that sort of thing.

    But, when it comes to security and vulnerability to attack, don't I have a right to know? Did I waive that right in an EULA? I'm pretty sure that if this happened with any other kind of product, the government would swoop down and set things right.

    Think about it. What if Ford had kept the Firestone recall under wraps (this "vulnerability" can "crash" the "application" and we don't want hackers/competitors to exploit it)? Yeah - good plan... But I'm the one riding in it! This situation sounds pretty ridiculous when it's a "real world" product and not software.

    Has anyone else come to this conclusion or know how consumer protection got written out of the DMCA? I'm scratching my head here.
    • It's not quite the same situation.
      The vulnerability with Firestone was an event not really open to exploitation.
      The problem with releasing a vulnerability is that you can pretty much guarantee that soon after, a black hat is going to be working on exploit code.
      I can see the argument either way: give the software company time to fix the problem, and then disclose to force their hand.
      The problem is, if a hacker discovers the weakness, then they _are_ going to use it. They're already breaking the law (or intend to), so additional legal constraints are just going to get ignored.
      • I agree that it's not the same situation, but for a much more important reason.

        In the Ford/Firestone situation, people's lives were potentially in immediate danger.

        I've been a sysadmin. I know that dealing with the Nimda of the week can be a royal pain in the butt, but in no way is it fair to compare a security weakness in a piece of software with the Ford/Firestone affair.

        I'll also agree that it's possible that there is some pretty dang important information being stored/manipulated/otherwise dealt with by the software. If it's potentially life threatening, hire some good architects, developers, and support staff, and do it yourself.
    • Consumer protection was never a goal of the DMCA. Corporate profits are the goal of the DMCA (Macrovision, RIAA, MPAA, etc). Screw the consumer. No consumers were involved in this legislation. Sometimes I think we need more consumer lobbyists. Oh yea, we do have the damn congressman we hired to represent us. What is up with that?

      The problem I see with making the process for reporting vulnerabilities too organized is that anyone can locate a vulnerability. If the process becomes too complex they will just go somewhere else to report them and I, as a sysadmin, don't have the time to follow every black hat bboard.
    • I don't think you've gotten the full motivation behind full disclosure.

      It's not about 'arming the hackers', or even informing the public. It's about making sure vulnerabilities get fixed.

      Simply put, when the public doesn't know about a vulnerability, the vendor won't fix it. History has repeated itself ad nauseam. Crackers themselves don't provide sufficient motivation to companies, because vendors aren't liable when their customers' systems are broken into.

      The only effective means to force vendors to make their code more secure has been full disclosure. When people know your product is crap, they will eventually stop buying it.

      The full disclosure advocates took great satisfaction in Bill Gates' proclamation of refocusing Microsoft on making secure software. There is no way he would have done that if Microsoft hadn't been embarrassed time and time again by people releasing vulnerability details to encourage accountability.

    • "Consumer Protection" got written out long before the DMCA, seeing as software is a 'service' and not a 'product'. The DMCA is just admitting that no protection scheme will be strong enough on it's own merits (DeCSS being a prime example), so we'll make circumventing it a crime. Just like if I buy a lock for my house, a talented lockpick can probably always open it, but then s/he's faced with the fact that it's a crime to break in anyway. The prob with DCMA is it takes away the consumers 'fair use' of materials (being able to make backup copies and, um, sharing with friends), but to the producers, it's better than "well, I guess you picked the defective lock fair & square, so take whatever you want!"

      Remember, software remains the property of the folks who wrote it - so maybe you DON'T have the right to know of any defects. The consumer's legal rights when it comes to IP royally suck; just don't expect Thomas Edison's pal Strom Thurmond & Company to come to the rescue for a loooooong time.

      • seeing as software is a 'service' and not a 'product'

        Software is a product, not a service, and it is sold under terms more suitable for a work of art. Software vendors routinely refuse to extend the contract to anything the software might do (I, as you can easily see, ANAL).

        Just like if I buy a lock for my house, a talented lockpick can probably always open it, but then s/he's faced with the fact that it's a crime to break in anyway.

        Right - so it's not necessary to criminalize picking locks. Picking locks or climbing onto balconies isn't a crime - but it's an indication that significant obstacles were bypassed, which makes it relevant to the crime of breaking into a house, and it's definitely grounds for suspicion. The DMCA, on the other hand, aims at criminalizing, metaphorically, the practice of lockpicking and the ownership of ladders.

        Kiwaiti

    • Interesting angle. A better analogy would be (and I believe I remember this actually happening - anyone know the car?) if the ignition switch was known to glitch, making it easy to start the car with no key. The company had to acknowledge it and recall the part.
    • The DMCA was written under the guise of "Copyright protection". Copyrights are inherently consumer-unfriendly. The purpose of copyright is to LIMIT the things that can be done by the consumer. While certain portions of copyright make sense to ensure commercial viability, others are strictly profit-maximizing items. (Remember, copyright is the club used to keep CD prices high.)

      The original concept of the DMCA was to provide certain additional protections in order to ENCOURAGE the distribution of DIGITAL FORMATS for MEDIA. Using the DMCA to protect software is simply WRONG. Software has always been digital and doesn't suddenly need new and wonderful protections.

      The rather spurious argument used to extend DMCA protection into software is that the software is being used to "enforce" provisions in the DMCA, so any attempt to circumvent those enforcement mechanisms MUST be a violation of the DMCA. (Realizing of course that using the DMCA to prosecute DeCSS should then be nonsensical: the program allows the exercising of GRANTED RIGHTS, namely viewing, by a LEGAL PURCHASER of the DMCA-protected content.)

      I guess we're lucky enough to have a country full of lawyers who are more than happy to explain to us why logic isn't the proper thing to use when attempting to interpret the law.
  • @Stake = Sellout (Score:3, Insightful)

    by Jsprat23 ( 148634 ) on Thursday February 21, 2002 @09:15AM (#3043938)
    This document is no more than a formalization of @Stake and Microsoft's desire to see the public disclosure that takes place on Security Focus and CERT come to a grinding halt. In their process, the community isn't informed of the hole/vulnerability until after there's a fix.

    I feel that this gives the companies no motivation to fix the hole. I would instead suggest that when the "reporter" informs the company, the company receives a grace period of 30 days to work on a fix, after which point the "reporter" could come forward and release the hole publicly if he/she/it felt that the company wasn't making a good-faith effort to fix the problem. Of course, this whole process is null and void if the program is open source/free software and the "reporter" releases a patch for the flaw at the same time the "reporter" releases the flaw.

    Ponder that my friends.
    • Re:@Stake = Sellout (Score:3, Informative)

      by asmithmd1 ( 239950 )
      Go ahead and read the document before posting
      3.7.1 Vendor Responsibilities

      1) The Vendor SHOULD work with the Reporter and involved Coordinators
      to arrange a date after which the vulnerability information may be
      released.

      2) The Vendor MAY ask the Reporter and Coordinator to allow a "Grace
      Period" up to 30 days, during which the Reporter and Coordinator do
      not release details of the vulnerability that could make it easier
      for hackers to create exploit programs.
    • Re:@Stake = Sellout (Score:4, Informative)

      by Raleel ( 30913 ) on Thursday February 21, 2002 @09:32AM (#3044027)
      Apparently you did not read the draft (which I just did)

      (from the draft)
      3.6.2 Reporter Responsibilities

      1) The Reporter SHOULD recognize that it may be difficult for a
      Vendor to resolve a vulnerability within 30 days if (1) the problem
      is related to insecure design, (2) the Vendor has a diverse set of
      hardware, operating systems, and/or product versions to support, or
      (3) the Vendor is not skilled in security.

      2) The Reporter SHOULD grant time extensions to the Vendor if the
      Vendor is acting in good faith to resolve the vulnerability.

      3) If the Vendor is unresponsive or uncooperative, or a dispute
      arises, then the Reporter SHOULD work with a Coordinator to identify
      the best available resolution for the vulnerability.

      and

      3.7.1 Vendor Responsibilities

      1) The Vendor SHOULD work with the Reporter and involved Coordinators
      to arrange a date after which the vulnerability information may be
      released.

      2) The Vendor MAY ask the Reporter and Coordinator to allow a "Grace
      Period" up to 30 days, during which the Reporter and Coordinator do
      not release details of the vulnerability that could make it easier
      for hackers to create exploit programs.


      • Apparently you did not read the draft (which I just did)

        I read the draft and your response, and nowhere in your response did the word "public" appear.

        Therefore, the original poster was correct in asserting this draft is nothing more than an attempt to stifle public dissemination of security holes.

        Maybe you need to re-read the original poster's comments, because it does appear he/she did the prerequisite reading.
        • Towards the top it defines the release phase:

          The vendor or other parties may then release
          the information - possibly with additional details - to the security
          community.

          The idea of this is not to stop public disclosure, but rather to stop irresponsible public disclosure. There is nothing wrong with letting a vendor know about the hole, giving him some time to fix it, and then he fixes it. If there is no easy fix, a public disclosure will only allow others who do not do security research (i.e. script kiddies, not true exploit finders) to exploit the vulnerability for malicious reasons.

          Since the other parts of the document designate the security community, and mention specifically two public mailing lists, that counts as public as far as I am concerned.

      • Wow. This whole thing completely excuses Microsoft.

        exception 1) If the problem is related to insecure design (Windows)
        2) the Vendor has a diverse set of OS's to support (Windows 2000/XP Professional, Home, and Server Editions)
        3) the Vendor is not skilled in security (Microsoft)

        Sounds like they should love this document. And no, I haven't read the whole thing yet.
        • I'll agree on 1) and 2), but on 3) they at least have the potential. The problem is that the environment and business model promote features over security.

          I should also point out that it only gives them a grace period; it neither excuses the bug nor prevents it from being released. The reporter SHOULD give them a grace period, but he does not have to. Certainly, once he has given a grace period and feels that the company is seriously trying to fix it (this is hard to determine, but involving the reporter more closely, by providing patches for testing against, would certainly help), he can extend it if he wants, or release it.

          Actually, there are no MUSTs or MUST NOTs for the reporter, but there are for the vendor.
    • Re:@Stake = Sellout (Score:3, Interesting)

      by flacco ( 324089 )
      I feel that this gives the companies no motivation to fix the hole.

      It also denies admins the opportunity to at least shut down or wall off the affected service until a real fix is available.

    • Not Really (Score:2, Interesting)

      by evenprime ( 324363 )
      This document is no more than a formalization of @Stake and Microsoft's desire to see the public disclosure that takes place on Security Focus and Cert come to a grinding halt.

      That's possible, but I don't really think so. @stake obviously has roots in the non-corporate security community, so we know that they're aware of the benefits of disclosure. It is possible that they are angling for an RFC as a means of protecting amateur security researchers who want to disclose what they find without suffering the fate of Dmitry Sklyarov [freesklyarov.org]. After that debacle, many people stopped disclosing anything because it was obvious that the DMCA would be used to make their lives miserable if they annoyed the companies too badly. A formal RFC that is approved by the IETF would go a long way towards discouraging prosecutors from bringing charges against researchers. ("Members of the jury, how can my client's actions be construed as a crime? He was only following the established procedures laid out by the IETF....")

      The problem with drafting an RFC is that not all bugs are created equal, and that usually doesn't get reflected in a standards document. If a popular server application has a local exploit in the installation/registration process, you might want to treat that differently than if the same server application has a remote exploit caused by an inability to handle malformed requests. Why? The first one will affect sites for a brief period of time while the sysadmin installs some software. The second example affects any site running the software, thus having the ability to impact many more people. I'd be much more likely to give the software manufacturer more time to fix a remote exploit...

  • It is interesting that Chris is in favor of controlling full disclosure. I don't see how he can be objective, since @stake is one of a handful of security product vendors that is now in bed with Microsoft. They want to limit the accessibility of information to a select few and increase the time limit before the disclosures are made public. This works well for them, as they can then sell themselves as one of the select few in the know, besides the person who really discovered the vulnerability and released it into the wild. What a bunch of hypocrites.
  • by jsmyth ( 517568 ) <jersmyth&gmail,com> on Thursday February 21, 2002 @09:18AM (#3043956) Homepage

    OK, I've made an attempt to read the document critically. It reminded me of some of its more obvious failings though:

    • Not all "vendors" can be bound by the obligations of either fixing a "flaw" or explaining why the flaw can't be fixed
    • In open source projects, the documentation on "flaws" is often included in the TODO file, which counteracts a large amount of what this document is trying to achieve
    • We still have the open sore of the DMCA to worry about, regarding the release of information that could be used to exploit or reverse engineer communications or data, e.g. Alan Cox's public refusal to document some of the kernel security stuff

    I have to admit that it's a good general solution for presentation to and ratification by the Microsofts of this world - companies for whom marketing departments have more control over release dates than systems engineering or test departments...

    ...but these are the very companies that are LEAST likely to pay attention to the words of the technological minority, in favour of placating the fickle majority. Anyone else see a problem here??

  • Anything new? (Score:5, Insightful)

    by Proaxiom ( 544639 ) on Thursday February 21, 2002 @09:24AM (#3043987)
    I've been going through it, and I can't seem to find any points on which this differs from the existing full disclosure model that most of the security community already follows.

    There are, of course, people who discover vulnerabilities and immediately publish all the details without notifying the vendor, but an RFC is hardly going to stop them.

    All the same, guidelines are nice. I'm a little skeptical of vendors sticking to the suggestions. Too many SHOULDs and MAYs.

    To recap, the proposed RFC suggests 7 stages in fixing a vulnerability:
    1. Latent flaw. The flaw exists undiscovered.
    2. Discovery. Somebody finds the flaw (the 'Reporter').
    3. Notification. The Reporter notifies the Vendor.
    4. Validation. The vendor verifies the flaw.
    5. Resolution. The vendor fixes the flaw.
    6. Release. The vendor publishes the flaw.
    7. Follow-up. Analysis of the resolution.

    What a nice world this would be.

    It usually works like that right up until step 5. Here's what really happens:
    5. Denial. The vendor denies the flaw really exists, setting his best PR guys on the job.
    6. Demonstration. The Reporter creates exploit code to prove to the vendor that not only does it exist, but it is serious and should be fixed.
    7. Diversion. The Vendor changes the subject by publicly attacking the Reporter for creating the demonstration, labeling it a "Hacker Tool".
    8. Publication. Third party bug tracking systems and security entities make knowledge of the vulnerability widespread to try to scare the Vendor's customers.
    9. Fix. The Vendor repairs the vulnerability, while still denying that it has any real significance.
    10. Release. The Vendor shuffles the release into a service pack or update, and puts it on his web site.

    • What a nice world this would be.

      It usually works like that right up until step 5. Here's what really happens:
      5. Denial. The vendor denies the flaw really exists, setting his best PR guys on the job.
      6. Demonstration. The Reporter creates exploit code to prove to the vendor that not only does it exist, but it is serious and should be fixed.
      That's an excellent description of what actually happens. May I urge you to submit this as a comment to the authors of the Draft? It is something that needs to be addressed in any potential RFC.

      sPh

    • by jsmyth ( 517568 ) <jersmyth&gmail,com> on Thursday February 21, 2002 @09:57AM (#3044161) Homepage
      5. Denial. The vendor denies the flaw really exists, setting his best PR guys on the job.
      6. Demonstration. The Reporter creates exploit code to prove to the vendor that not only does it exist, but it is serious and should be fixed.

      7. Vendor hires a DMCA lawyer to sue the pants off the reporter for exploiting vendor's product
      8. Government incarcerates random employee of reporter's organisation who just happens to be in the country at the time.
      9. Vendor retracts suit.
      10. Government continues to incarcerate random employee, sticking tongue out at the rest of the world in the process.

      I give up.

      • Why does the DMCA come into it? The DMCA covers circumvention of mechanisms which effectively protect access to copyright works. In the case of software vulnerabilities, the software is the copyright work but what is the "effective access protection mechanism" which is being circumvented?
        • What's more, exploits typically don't violate the "personal integrity" of the software itself (e.g. Apache); they violate the integrity of a service provider's server.
    • Sounds like RealDownload, as described here [grc.com].

      Maybe in addition to publicly attacking, they could send the lawyers as well :)

      7. Lawsuit. The Vendor threatens the Reporter with lawsuits for creating the demonstration, claiming that the "Hacker Tool" is damaging their reputation.

    • There are, of course, people who discover vulnerabilities and immediately publish all the details without notifying the vendor, but an RFC is hardly going to stop.

      And please remind us again why this is a bad thing?
      • It's not necessarily a bad thing. But a better method is to notify the vendor, give them a predetermined amount of time to release a patch, and then publish the details. Say: "Here is a bug, here's how I exploit it, I release the details in 90 days. You might want to have a patch available by then."

        If the vendor is responsive and cooperative, then releasing the details early will do more harm than good.

        Something that I think is missed in the proposal is accommodation for unique circumstances. There are times when full disclosure is a really bad thing, like when a vulnerability poses significant risk to human life or national security.

      • Because it's best to give the vendor (yes, even Microsoft) a fair amount of time to fix the problem. Every coder is human and is going to make some mistakes. Bugs are a fact of life in coding. Most vendors want their product to be the best out there and will do all they can to patch up any security holes. (Especially the little guys, who can't rely on pure market share to drive their sales.) Of course, if the vendor doesn't fix the problem (or denies that a problem even exists), then the details should be released to the public.
        • Well, historically vendors tend to ignore faults that are not publicly known (yes, not just MS, others too)
        • Take for example this remote exploit in Slash [sourceforge.net]. Jamie states in a comment on this bug the proper manner for handling bug discoveries:
          Please send security-related issues to us privately

          (malda@slashdot.org is a good place) and give us a chance to
          fix them before posting exploits.

          You can write a whole RFC if you want, but Jamie's explanation is more concise. In short, never publicly announce a bug before you've given the vendor the time necessary to correct the problem. ILOVEYOU is a perfect example of this; instead of writing a virus, someone should have just sent mail to bgates@microsoft.com giving details of the exploit, and allowed the vendor time to fix it. Writing an exploit just puts you at risk of violating the DMCA and does not generate goodwill from the vendor.
    • Re:Anything new? (Score:3, Insightful)

      by KjetilK ( 186133 )


      To recap, the proposed RFC suggests 7 stages in fixing a vulnerability:
      1. Latent flaw. The flaw exists undiscovered.
      2. Discovery. Somebody finds the flaw (the 'Reporter').
      3. Notification. The Reporter notifies the Vendor.


      It usually works like that right up until step 5.



      Not really. :-) What may happen there in points 2 and 3 is that a black-hat who discovers the flaw doesn't become "The Reporter". He keeps it to himself or may share it with other blackhats with the intent of using it for malicious purposes.


      That's why Full Disclosure is a Good Thing[tm]: It ensures that the amount of time between discovery by blackhats (and knowledge only by blackhats) and knowledge to sysadmins is minimized. When sysadmins know, they may decide to shut down their systems. Giving Vendors another 30 days only gives blackhats 30 more days to exploit vulnerable systems. That's not a Good Thing[tm].


      However, vendors should be given prior notice. How long this period should be, I have no idea (I posted a question to /. about this half a year ago, it was pending for months before it was rejected), but a fixed 30 days is much too long, I feel.


      I think that the period should be shorter depending on how long blackhats may have had knowledge about it and how serious the flaw is.

  • This draft (or rather the idea of this draft) was discussed in some detail at the SAAG meeting at the last IETF. A number of people objected to the idea of standardizing stuff like this. The danger is that the MUSTs and SHOULDs could end up in US law. Also there was a previous effort by the GRIP working group which caused a lot of tension with vendors.

    While there were a few dissenting opinions, most people agreed that the most intelligent thing to do is to notify the vendor, give them some time to fix the problem, and then publish the vulnerability. However, no one spoke out in favour of the formalized system as described in the draft.

    -a
    • These "musts" are not going to end up in a US law any more than the "musts" in the DHCP spec ended up in a law. When I look to buy a home router that does DHCP I check it against the RFC, if it doesn't do the musts, I return it. So now we could have an RFC for how responsible software companies act, if they don't follow the RFC, don't do business with them.
      • There are many, particularly among software vendors, who would like something like this enshrined in law.

        For the most part they are opponents of the full disclosure model, and they would love to have rules imposed on people who discover vulnerabilities.

        So if you discovered a bug and published its details without notifying the vendor or going through the correct process, you could go to jail.

        And if such legislation were introduced, guess who it would favour: the Customer, the Reporter, or the Vendor?

      • These "musts" are not going to end up in a US law any more than the "musts" in the DHCP spec ended up in a law.
        I would have to disagree with that a little bit. First, if formalized, this RFC would very quickly find itself written into RFPs for large software purchases. From there it would work its way into contract law.

        Second, if vendors see an opportunity to combine an "industry standard" with the DMCA to stifle dissent, then they will grab it. It's only a short step from there to having Hollings introduce the combination as law, with 20-year prison terms to follow.

        sPh

    • Just because RFC2505 says you should close your SMTP relay doesn't mean the government is going to pass a law requiring its enforcement.
  • by edA-qa ( 536723 ) <edA-qa@disemia.com> on Thursday February 21, 2002 @09:31AM (#3044022) Homepage
    This process seems to be heavily biased towards the vendor and does not seem to offer very much to the community at large. A vendor not interested in exposing vulnerabilities could easily exploit this process.

    Simply setting up the recommended email address and generating an auto-reply (or the equivalent with form letters and an assistant) to all reports would acknowledge all claims. This auto-reply could immediately include a request for extension. Delay the auto-reply by 4-7 days, include snapshots of similar keyword searches from the internal knowledge base, and you have a plausible claim for an extension.
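
    To make that concrete, here is a minimal sketch of the kind of automation being described (purely illustrative: the addresses, wording, and the idea of a local mail relay are made up for the example, not anything taken from the draft):

      import smtplib
      from email.message import EmailMessage

      # Purely illustrative: a vendor could "acknowledge" every report and ask
      # for an extension with no human ever reading it. Addresses are invented.
      def auto_acknowledge(reporter_addr: str, report_subject: str) -> None:
          msg = EmailMessage()
          msg["From"] = "security@vendor.example"
          msg["To"] = reporter_addr
          msg["Subject"] = "Re: " + report_subject
          msg.set_content(
              "Thank you for your report. We are reviewing similar entries in "
              "our internal knowledge base and request a 30-day extension "
              "while we investigate."
          )
          with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
              smtp.send_message(msg)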

    Any disclosure of the flaw before the extension expires, and the vendor can quite happily say that they are following the process and the reporter is not. Meanwhile the vendor themselves has no reason whatsoever to follow the remaining sections of the process; they can simply allow all periods to lapse and proceed of their own accord. This allows the reporter to be labelled as acting in bad faith, whereas the vendor can artificially appear to act in good faith.

    Not following this best practice would furthermore not generate any additional bad publicity for the offending company. Vendors operating in bad faith will already have a negative image. Vendors operating in good faith will have an extra overhead that they may not be able to support if they follow this process.

    It additionally appears vendor-biased because it does not offer any benefit to the security community, or the user community at large. Prevention of exploits does not appear as a goal of this process. Nor does protection from exploitation of flaws. Unless of course we make the unreasonable assumption that exploits do not appear until more than 30 days past the point of /reported/ discovery.
    • How many RFCs are written to be enforceable?

      When you are implementing a system that uses an RFC, it is solely your responsibility to comply with the requirements. The same goes here.

      The assumption is that both the Reporter and the Vendor act in good faith.

      Think of this as a proposed guideline that researchers and vendors should follow. Vendors are still free not to fix bugs, and researchers are still free to recklessly publish exploits to the world at large.

      But if they both followed the instructions, things wouldn't be too bad. Of course if a Vendor persists in denying a bug's existence then the Reporter can, and should, publish the details. This is how things work most of the time already.

      You should note that an earlier poster mentioned concern that if this turned into an RFC, it might eventually lead to legislation, which would be a bad thing.

      • How many RFCs are written to be enforceable?

        Answer: very few, if any.

        However, what RFCs (and BCPs) like this can potentially provide is an escape clause for responsible employees who discover a problem with their company's software to make it public without having their own employer sue their ass under the DMCA or whatever local equivalent might exist. Let's face it: if a Techie gives his PHB a procedure for dealing with incidents that includes a phrase like "in compliance with RFCs X, Y & Z", how many are going to read them before they sign off on them?

        Or you could even be honest and openly use the RFC as a basis for your own company policies on the matter at hand.

    • The goal of this process is not to regulate errors in code or deliberate exploits. It's to provide a mechanism by which the security community can interact with those members of the community who are selling software (you pretty much become a member when you publish code or a program). What it offers is a formalized protocol for releasing exploit details that gives a vendor a fair chance to release a fix, yet minimizes the time that the customers as a whole are vulnerable.

      I have heard of the frustration on both sides with regards to this. Some vendors don't have the resources to fix a bug within 7 or 14 days. Some security consultants have reported flaws and have not received any word for months.

      The idea is for the security community to realize that vendors are people too, and for vendors to realize that security people aren't all h4X0rZ.
  • Full disclosure of vulnerabilities is as important for software security as free journalism is for fighting corruption. Defining some rules for responsibly doing that is a step in the right direction. However - since M$ has a history of ignoring industry standards, I do not have high hopes that it will actually improve anything...
    • If you actually want software to be secure:
      1. Publish the exploit. Get it loose in the wild.
      2. Publish the fix or workaround, if there is one.
      3. Inform the vendor.

      Brutal, but anything less becomes a mess of how long the vendor can delay doing anything about it.
  • by phr2 ( 545169 ) on Thursday February 21, 2002 @09:42AM (#3044086)
    It describes recommended practices for dealing with the discovery of security vulnerabilities. It uses standard IETF terminology ("x SHOULD y, z MUST w") to describe actions that should be taken by five entities:
    • Vendor: the producer of software in which vulnerabilities are discovered (e.g. Microsoft)
    • Reporter: someone who finds a security hole (typically someone in the "security community")
    • Coordinator: someone who mediates between Reporters and Vendors
    • Customer: end-user of vendor products
    • Security Community: the usual researchers, sysadmins, etc. who are trying to improve overall information technology security (it doesn't say whose security).
    The first thing I notice is that while there are several places where it says the vendor MUST do this or that (e.g. the vendor MUST respond to vulnerability reports within 7 days), and several things the reporter SHOULD do (the reporter SHOULD provide Vendor with all known details of the issue), there's nothing the reporter MUST do. So it's not an attempt to impose mandatory policies on reporters through a standards process.

    The recommendations make a reasonable attempt to protect vendors and customers (i.e. it doesn't go for the zealous instant-full-disclosure approach) without allowing reports to go ignored for too long. It basically says the vendor should get 7 days to initially respond to the report, then 30 days to fix the problem, and can then request a 30-day grace period to get the fix out to customers before the reporter releases details to the public. If the reporter goes along with this, the vendor must credit the reporter in its own public announcement of the fix.
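
    As a rough sketch, that timeline works out like this (assuming the grace period stacks on top of the 30-day fix window, which is my reading of the summary rather than the draft's exact wording):

      from datetime import date, timedelta

      # Rough sketch of the disclosure timeline summarized above. The 7/30/30-day
      # figures come from the summary, not from text copied out of the draft.
      def disclosure_dates(report_date: date) -> dict:
          respond_by = report_date + timedelta(days=7)    # vendor acknowledges the report
          fix_by = respond_by + timedelta(days=30)        # vendor resolves the flaw
          publish_after = fix_by + timedelta(days=30)     # optional grace period to ship the fix
          return {"respond_by": respond_by,
                  "fix_by": fix_by,
                  "publish_after": publish_after}

      print(disclosure_dates(date(2002, 2, 21)))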

    It's kind of weird, seeing an internet draft using protocol-like terminology to describe how people and companies (rather than computers) are supposed to interact with one another. I hope this isn't supposed to be an RFC, or else that this kind of thing is normal for the IETF (I haven't read that many IETF docs). I've never seen a thing like this and it seems to turn the IETF into an even more political body than it already is. I thought the IETF was supposed to make recommendations about bits and packets, not people and companies.

    Anyway, I can take issue with some of the points in the document, but at first glance there's nothing in it that I'd call outrageous. It seems like a genuine effort to find a good intermediate policy between instant full disclosure (and instant widespread exploits) and leaving stuff secret forever (letting exploits spread without the public knowing). Whether that policy is the optimal one is of course a matter of opinion and reasonable people can differ on it.

    The document's authors came from several different camps (two names I recognize are Bruce Schneier's co-author Adam Shostack (presumably on the full disclosure side), and Microsoft's Scott Culp representing the Evil Empire) and it looks like they managed to arrive at a consensus. I guess that's a good sign and I hope Bruce and/or Adam will publish their own opinions of the draft soon.

    • "It's kind of weird, seeing an internet draft using protocol-like terminology to describe how people and companies (rather than computers) are supposed to interact with one another. I hope this isn't supposed to be an RFC, or else that this kind of thing is normal for the IETF (I haven't read that many IETF docs). I've never seen a thing like this and it seems to turn the IETF into an even more political body than it already is. I thought the IETF was supposed to make recommendations about bits and packets, not people and companies."

      Uh, this is perfectly normal for a Best Current Practice (BCP) document. I assume that's what's intended here, or perhaps it's just fodder for the plenary or something.
  • by pongo000 ( 97357 ) on Thursday February 21, 2002 @09:43AM (#3044088)
    This document seems to downplay the role of public disclosure, and instead inserts the coordinators as the middlemen between reporters and vendors. This is fine for Bugtraq, CERT, and all the other so-called "coordinators," but where does that leave the public? Several times, the document addresses public disclosure, but only to the effect that vendors can choose to withhold recognition or other feedback if a reporter chooses to go public with the information.

    I think this document is a big win for the coordinators, but a big loss for the public.
  • Unresponsive Vendors (Score:2, Interesting)

    by meggito ( 516763 )
    One subject that is avoided, and when touched upon is looked down on, is that sometimes vendors refuse to respond and reporters release information publicly. If a vendor is unresponsive and refuses to do anything about a vulnerability, there is usually very little a reporter can do to get that vulnerability fixed. If there is no vendor receipt, a coordinator should be contacted. If there is still no response, then the vendor should be notified that if there is no reasonable reply indicating an effort to fix the vulnerability, information about the vulnerability will be released publicly after a grace period of about 15 days (long enough for a reply but short enough to reduce the number of those left vulnerable). If they are still unresponsive, then the responsible thing to do is to get the information out there and make an effort to force them to fix the vulnerability. This method should be avoided if at all possible, because it leads to a quick and less thought-out fix to the problem. It is, however, better than leaving the vulnerability open to others who have discovered it, or will do so, and will exploit the unknowing users.
    • Isn't this covered by 3.4.4:
      4) Within 10 days of initial notification, the Vendor's Security Response Capability SHOULD provide a clear response to the Reporter and any involved Coordinators.
      That seems about right, but in other parts of the document it seems to say that you can't skip steps. What to do if the Vendor doesn't conform?
  • I'm just an at-home sysadmin but I have a fat pipe and 5 computers. I have been exploited for DOS attacks (back when I had no firewall and all Win machines).

    Now I run a linux firewall and a couple of other linux distro machines for education/work and two Win32 machines (ME and 2000).

    I'm in the position of knowing almost nothing about security, and although I know something about network programming I know very little about the common networking components available and/or how to fix them myself if there is an exploit.

    This means (obviously) that I am completely dependent on posted security holes to keep my network secure. If this standard keeps me from even KNOWING about a hole for 30 days or until a patch is out (whichever comes sooner), doesn't that mean that I, the consumer, am intentionally and knowingly left open to attack by the vendor? Shouldn't they be required to let me know? I can't imagine the pressure high-security network admins would be under if the standard required those 30 days.

    Just my two cents.
    • Ahh, the classic example of DWI (Driving While Ignorant). The good thing is that you acknowledge your ignorance and are willing to work to decrease it.

      First off, don't wait for vendor information: go to CERT, go to BugTraq, etc. Second, try to be proactive instead of reactive. There used to be a tool called SATAN (Security Administrator Tool for Analyzing Networks) that can scan your network for common vulnerabilities, and there are newer tools like it. (Look at grc.com and other sites.) Finally, invite someone to do a "hack test" of your network. They should be able to give you a decent idea of potential vulnerabilities. One important note: successfully passing a "hack test" is not a one-time event; you need to retest every so often (think quarterly, semi-annually, or at the very least annually).
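
      As a toy example of being proactive (the hosts and port list are placeholders; dedicated scanners like the ones above do far more than this):

        import socket

        # Toy self-check: see which of a few commonly attacked ports answer on
        # machines you administer. Hosts and ports here are just examples.
        HOSTS = ["192.168.1.1", "192.168.1.10"]
        PORTS = [21, 23, 25, 80, 135, 139, 443, 1433]

        for host in HOSTS:
            for port in PORTS:
                with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                    s.settimeout(1.0)
                    if s.connect_ex((host, port)) == 0:
                        print(host, port, "is open - make sure it should be")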

      Good luck, read stuff, and experiment. It's the best way to improve.
  • by Zocalo ( 252965 ) on Thursday February 21, 2002 @10:02AM (#3044185) Homepage
    I'm all for vendors of software (any vendor, be it Microsoft with the latest IE exploit or ISC with a hole in BIND) keeping a show-stopper under their hat while they try and fix it, provided that there is no evidence that the Blackhat crowd knows about the problem. But there need to be constraints - 30 days seems about right. This *has* to become null and void as soon as the problem is exploited, though; at least that way the people who care about security can take steps to prevent abuse.

    I've seen a site well and truly compromised because frickin' Microsoft sat on a bug long after the Blackhats had an exploit. It only took two days before their entire DMZ was rooted and credit card details stolen, and the stupid thing was, if the site had known that there was a problem, they could have worked around it and avoided the legal mess they got into and are still in.

    The only saving grace is that this probably won't happen to them again; they are now an ex-customer of Microsoft's and running Apache instead. True, Apache has its own problems, but at least they give you a chance to prevent any issues arising if you care to do so.

    PS. Can I interest anyone in 40 used copies of NT Server? Thought not.

    • Can you give an estimate of how long "long after" was? If it was more than 2 months, that's really irresponsible (but yes, they have a track record of that). If it was under a month, it might have been too fast to fix.

      I agree though. If there is evidence that this has appeared in the wild, it should be immediate, even if it means disabling the service.

      Interesting that I get to go to work today to image a machine that has been netbus'd
  • by QuantumG ( 50515 ) <qg@biodome.org> on Thursday February 21, 2002 @10:14AM (#3044250) Homepage Journal
    I know it's a strange term, but generally a whitehat finding a bug doesn't do it all alone. They do it with the help of others. So if I'm looking at a bit of code and see something that looks poorly coded, am I still permitted to go post on a public mailing list "hey, this kinda looks broken, someone wanna take a look at it?" or is that against the draft (and as such will I be labeled as "malicious")? If I happen to be looking at a config file or a protocol specification and notice that the design seems to have an inherent security flaw (trusting user data, for example), am I permitted to talk about it publicly, or as a "reporter" am I required to go look at every product that implements that standard and report my findings only to the vendor of those products? I suppose all these questions will get sorted out in the wash eventually.
    • One might want to push for a separation between responsible disclosure and responsible discovery.

      For Microsoft, if you release any information during discovery you're probably labelled as malicious even if the thoughts are hypothetical or hunches.

      However, it's necessary to allow people to work together on the Internet in some form, or else we can't benefit from each other's eyes ...
  • One more step: the 'Full Disclosure is bad' camp seems to be trying to get their idea of responsibility standardized so they have some RFCs and STDs to point to in their argument. Yes, one should make a reasonable attempt at contacting a vendor, and yes, it would be nice if every vendor was so helpful as to 'work with the Reporter' in solving the vulnerability. In practice this hardly ever works. This whole document is merely an echo of Microsoft's and @Stake's recent push towards limited disclosure and the occultation of vulnerabilities (see no evil). This concept is irresponsible and lazy. Just because I happen to be the first person to report a bug doesn't mean I'm the first to find it, nor does it mean that there aren't other people out there actively exploiting it. It is my duty to disclose this information to the WHOLE community in as timely a fashion as possible, so that the people who will be affected can find a workaround or an alternative solution until a patch is issued. This RFC as it stands is much too pro-vendor and would do nothing but harm the security industry were it ever to be widely adopted as a working standard.
  • Whois the bad guy? (Score:2, Interesting)

    by nolife ( 233813 )
    Why is the finder of a flaw automatically labeled as the bad guy and given this list of things to do after the fact? We have completely lost touch with reality here. If a finder releases an exploit or details, he/she is considered the faulty party. What about the vendor who made the software? I, in no way shape or form, work for that vendor. In the case of closed source, they created the software, they market the software, and they control every aspect of the software. They made a conscious effort to get the software out the door ASAP to get a jump on the competition. In the process they overlooked security, did not test the product fully, and could not care less about you, the end user who found this flaw. Now they want you to work with them and not inform others who are acting as guinea pigs too? The only reason MS and other large vendors are even considering security now is because the general public is getting wind of it from mainstream news media. For me, it depends on the specific vendor what action to take. MS is not my buddy or friend, and not one I'd care to help for free. They charge per call for general phone support; if they want my support, they can call my 800 number with their major credit card and pay me, and if it turns out not to be a flaw, I'll refund the payment. They have more than enough money to hire thousands of programmers. To them, security is a PR issue and a non-revenue generator.
  • Is it only me, or is this blocking free speech (if free speech still exists in the USA, anyway)? In the majority of countries deemed "democratic", free speech is guaranteed by the constitution. I really couldn't care less about the vendor's image; I care about my own: if my site got robbed because of someone's software that I paid for, I would be VERY upset, because my customers would be at risk because of me. If I got it for free, it was my decision, and I assume the risk since I paid nothing.

    So I could happily accept the proposed standard if the vendor had to pay for damages that occurred because of its software, in proportion to what you paid (this releases free software from any problem).
    Say you paid $1,000.00 for a web server, someone informed the Vendor of a vulnerability, and after 30 days they did nothing and your site got hacked; they would pay you 100 times the value ($100,000.00). Believe me when I say they would be much more "responsive".

    Back to free speech. Imagine a serial killer in a city who only kills white men wearing blue shirts at midnight. If you lived there, would you want the press to wait until the police capture the killer before publishing the story? I, as a man wearing a blue shirt living in that city, would be VERY grateful to know about it and, at least, avoid wearing the shirt or being in the streets at midnight.

    Just my 2 cents
  • 10) The Vendor SHOULD provide a mechanism for notifying Customers and the Security Community when new advisories are published.


    How about taking this a bit further and REQUIRING the Vendor to contact registered Customers? And I don't mean posting a bulletin to www.mycompany.com/security or offering an option to subscribe to some mailing list.

    Of course I doubt vendors would agree to this requirement, since it would imply the vendor taking some kind of responsibility for the vulnerability associated with their product.

  • Hypocrisy (Score:2, Interesting)

    by adipocere ( 201135 )
    I've mentioned this before, but let me point out Apache's security policy [apache.org], which says:

    "We strongly encourage folks to report such problems to our private security mailing list first, before disclosing them in a public forum."

    Let's not bash Microsoft too much, if Apache is doing the same thing.

    • Is that exactly the same situation? It looks like Apache is asking that you post to their private list before publicly disclosing it. They aren't insisting that you not disclose it publicly at all, or that you wait until they have a fix. It seems appropriate that any developer be given a heads-up directly first, instead of them having to find out about it because a friend of a friend saw it posted on a public forum somewhere.
      • It seems like a pretty similar situation. They don't want to just be dropped a message; they want it to be "discussed," which is a very gentle way of saying, "Let's talk about this." That implies that they'd like you to wait, etc. It's pretty close.

        In any case, the "Security By Embarrassment" crew should do what they do everywhere else - post it everywhere all at once.
          • It does appear close enough to me. But maybe by that sentence they don't want anyone to report a hole until there is a fix; I only have what I see on the page, and I don't know anyone in Apache, so I can't say what was intended. From my point of view, though, they are just asking for it to be reported. Even so, after the report, I can imagine a small waiting period (5 days or so) being perfectly acceptable, even if Microsoft gets one, in order to investigate the bug and allow those most knowledgeable to get a feel for the conditions of the exploit and possibly add causes and workarounds to the security report. BTW, because they call the list private, I suppose it isn't open to the public. Because it's an open source project, I personally would feel better if such a list were accessible to anyone who wanted to help. But I digress. Anyway, my personal complaint is when the metric "until a patch is issued" is used, because this is an indefinite period of time. If Apache is attempting to implement this, then foobar on them too. After such a *short* period of time is up, the details need to be disclosed so companies can't be deadbeats, and people can be aware of the problem and protect themselves.
          • Maybe a rational approach, like ... the more users, the less time they have to respond. When J. Random piece of software with 100 users has a bug, no big deal.

            However, Apache is, according to Netcraft, still beating IIS, so their period of disclosure should be relatively short.

            Once you start making exceptions, extensions, etc., it starts turning into tax law. "Well, we have to give them less time, since this software runs in hospitals," "It is freeware, maybe we should let them wait longer."

            All I know is, I'd really like to see someone do a study, with control groups, etc., which shows if public disclosure hastens the progress of a patch, or allows for more hacks, or causes early but lousy patches.

            It would be entertaining to take various hacks and figure out how they got released, just to prove once and for all what effect this stuff REALLY has, rather than throwing out verbiage on what the blackhats THINK will happen.
            • Well whatever the voluntary grace period it should be short, and constant. I'd rather not have a complex security protocol because it would be more annoying than useful.


              All I know is, I'd really like to see someone do a study


              There's an idea. The only problem with a study is getting close enough to observe without them noticing. After all, we need to observe them in their natural habitat. ;-) Otherwise it would be interesting to see the results.

        • Not really similar at all.

          Any time Apache asks anything it's different than when Microsoft "asks" because Apache doesn't walk around toting baseball bats and asking for favors while in the next breath mentioning what a shame it would be if a fine establishment like yours were to burn down.

  • One reason cited for why Reporters (those who find flaws) go public:
    ...malicious intent to damage the reputations of specific vendors

    If you find a glitch with a competitor's product, why is it automatically evil to publicly disclose it? One valid tactic of advertising (in the US) is to denigrate competing products. When Microsoft announced [microsoft.com] that Windows NT beat Linux in performance tests, did they give Linus private notice and 30 days to respond before issuing the press release? Does Compaq let IBM know beforehand that it has better TPC numbers [tpc.org]? After Dateline NBC staged exploding gas tanks [whatreallyhappened.com], did the Wall Street Journal give them a month to come clean before revealing the scam?

    If you worked at Be in 1998 and knew of a fundamental, nearly unfixable flaw in Windows, how much time would you grant Microsoft to address it?
  • by Frater 219 ( 1455 ) on Thursday February 21, 2002 @12:55PM (#3045250) Journal
    [The following is my response to the authors of this draft.]

    I am sure that you are receiving dozens of comments on this draft, so I will try to keep mine brief. I am a security technician and sysadmin for a large nonprofit research organization. In your draft's terms I believe I represent a "User" more than a "Reporter" -- though a user with security-specific experience.

    It seems to me that your draft undervalues the powers of users to protect themselves independent of the actions of vendors. Users are not entirely reliant upon the vendors of the software they presently use to protect themselves, and they can make use of published security information even if a vendor does not choose to acknowledge or proceed responsibly with the knowledge of a vulnerability. Moreover, they have a need for this information outside of its use in getting patches for existing software.

    Most software users are not obligate users of particular pieces of software. They choose among competing software products (or even system designs), and make use of published information about these products in making their choices. They may choose to migrate from an installed software product to a competing one on the basis of published security concerns.

    Because users need security disclosure to make informed decisions about the costs and benefits of pieces of software, they have an interest in fuller and more analytical disclosure than vendors may desire. Large vendors may prefer that users who have already purchased their products not question that purchase later. They may resist the idea that a /pattern/ of vulnerabilities or poor practices exists in their software. And for a vendor to quietly roll security patches into an "upgrade" may help the user avoid being cracked, but it does not help him or her make responsible decisions about future purchases.

    Security researchers, it seems to me, have an ethical obligation not to aid criminals in attacking users. However, they (you) do not have an obligation to keep vendors from losing business, or to allow vendors to keep users in the dark regarding the comparative security strengths of software products. In many cases, users would be better served by being advised when the software they are using is poorly designed, has a history of vulnerabilities, and is likely to remain vulnerable to new sorts of attack -- rather than merely being told to wait for a patch, or not told anything independently at all.

  • ... while reporters literally don't have to do anything to be compliant with the spec. There are no MUSTs for reporters at all, and only one or two for coordinators. There are about a dozen for vendors.

    I'm not against vendors taking some responsibility for their products. I work for a vendor, and we take our defects (and trying to prevent them in the first place) very seriously. But if someone is going to poke, prod, and pry at our product to find vulnerabilities, then they should bear some of the responsibility for responsible disclosure.

    • By following the steps of this recommendation, reporters are being responsible. And in return it is completely reasonable for the vendors (who are ultimately responsible for the flaws' existence) to have a list of behaviors and practices that a reporter can expect to encounter. By following this process, vendors are aiding and encouraging reporters in working in a middle ground that gives the best of both worlds. A reporter always has the option of immediate, full disclosure.

      In order to discourage that, it makes sense for vendors to do their absolute best to bend over backwards for a reporter who has made the effort to work within a framework that benefits the vendor more than the reporter. Additionally, there are numerous recommendations in there for a reporter to do the best possible job they can to verify the validity of the issue. Which, again, if everybody is willing to come to the table, is of benefit to the vendor as much as to the reporter.
      • The definition of "MUST" is "you are not in compliance with this protocol unless you do this". What we have here is a situation where the reporter can do (or not do) anything he wants, and technically be in compliance with the protocol.

        If you file an excellent bug report with the vendor and then wait 30 days before telling anyone about it, then you are in compliance. On the other hand, if you create an exploit and release it to the world the first day you discover the bug, then you are technically in compliance with this protocol.

        Not really much of a protocol, if you ask me.
  • The proposed protocol favors only the vendors and those that work closely with them. This does nothing whatsoever for independent security reporters or the actual customers who remain unknowingly exposed to exploits because they aren't being reported.

    Reporting should be public and immediate. Someone who discovers an exploit in a particular piece of software is under no obligation whatsoever to protect that vendor's image; in fact, using that as a reason *not* to release an alert is just plain brain-dead. If the vendor releases buggy code and customers migrate to another product - hey, that's capitalism, straight from Adam Smith! The consumer makes an informed choice based on actual facts - which is how the market is supposed to work if people aren't colluding to keep secrets or misinform the consumer.

    Vendors have no right to non-disclosure. Vendors have no right to have their 'image' or 'reputation' protected. If the company is that concerned it's up to *them* to invest the time and energy tracking down the bugs and repairing them. If we find them then we have every right to circulate them publicly for review, pointing out the flaws for all to see. This gives the consumer - you and me - the option to abandon the software in favor of a competitor's product, or to disable whatever part of that software is causing the problem until the vendor can be bothered to put out a patch.

    Incidentally, it also allows us to determine who makes the shittiest software and avoid future purchases of their products. Now why does my cynicism kick in when I consider the fact that the people who put together this proposal are sucking away at the Microsoft tit?

    Max
    • Vendors have no right to non-disclosure. Vendors have no right to have their 'image' or 'reputation' protected. If the company is that concerned it's up to *them* to invest the time and energy tracking down the bugs and repairing them.

      Darn tootin'. To my mind this is nothing more than a re-hash of the old "disclose or not" debate, except the authors have gone to the trouble of writing up the "not" standpoint as an RFC. My comment is, it sucks.

      The customer is left unprotected, under this scheme, until the vendor gets around to fixing the problem. Did you see the part about how the vendor will provide the fix free of charge, or for a nominal charge? No? Maybe that's because it wasn't there.

      And I don't think anybody has yet commented on their new, improved, standardized e-mail address for reporting security problems. Standard for whom? AFAIK, nobody uses that now; why not pick something obvious, like security@domain.com?

      I'm afraid @stake has sold out. Maybe it's something in the "@" symbol -- the same way @home sold its mailing lists to every spammer under the sun.

  • fewer SHOULDs, and MUST NOT have as many MAYs as it currently has.

    I'm just sayin'.

  • The Vendor SHOULD ensure that programmers, designers, and testers are knowledgeable about common flaws in the design and implementation of products.

    Rationale: Some classes of vulnerabilities are well-known and can be easily exploited using repeatable techniques. Educated programmers, designers, and testers can identify and eliminate vulnerabilities before the product is provided to customers, or prevent their introduction into the product in the first place.

    I've been looking for books or other resources that explain how developers can avoid security flaws in their code. Can anyone recommend any resources? Good books?
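
    For anyone wondering what "common flaws" looks like in practice, the canonical example is the unchecked string copy in C. A minimal sketch of the bug and the obvious fix (illustrative only, not taken from the draft):

    #include <stdio.h>
    #include <string.h>

    /* Classic flaw: copying untrusted input into a fixed-size buffer with
       no length check lets long input overrun the stack. */
    void vulnerable(const char *input)
    {
        char buf[64];
        strcpy(buf, input);            /* no bounds check */
        printf("%s\n", buf);
    }

    /* Safer: bound the copy and terminate the string explicitly. */
    void safer(const char *input)
    {
        char buf[64];
        strncpy(buf, input, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';   /* strncpy may not null-terminate */
        printf("%s\n", buf);
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            safer(argv[1]);
        return 0;
    }

    Books and papers that catalog this sort of mistake are exactly what the quoted SHOULD is asking vendors to put in front of their developers.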
  • That realizes that the security community (meaning bug hunters and researchers) provides its services, for the most part, free of charge.

    In my book, this is considered a favor.

    So now there's a draft which is going to tell me how to properly do this favor for them or else I am a 4$$hole?

    So suppose you do me the favor of watching my dog, and I turn around and tell you that you need to watch my dog in my house, and that the dog needs to remain in my garage, where you need to stay to make sure he does not eat anything that might make him ill. And on top of that I tell you that you cannot wear anything blue/black/red/white because it makes my dog nervous, and that you MUST play with him for at least 45 minutes. And only feed him the special mixture of dog food/yogurt, served in the yellow Tweety Bird bowl, which you will have to wash at every feeding.

    Or of course, I could just be grateful that you informed me of a vulnerability in my software, and grateful that you are watching my dog.

    (Score: -1 Ranting)
  • The nature of security problems has been changing with some regularity. If this only deals with disclosures regarding a certain vendor, ok...this plan will cover things nicely. But what about questions of disclosure with respect to protocols that aren't bound to (or invented by) any one entity? What about, for example, problems in new wireless technologies? IPv6? And so on? The IETF does not move as quickly as things in the security space do...not that they should, but still, I wonder if this will only go partway, and leave the rest of the problem harder to address.
  • What this document doesn't address is the severity of the vulnerability. If a vendor can get up to 30 days to find a fix for a problem regardless of how severe any potential exploits are, I certainly would not feel comfortable as a sysadmin or customer.

    I have to admit that, considering the number of SHOULD conditions in the document, it is nowhere close to being final. Addressing the severity of the problem is a very important part of the equation. It simply cannot be ignored.
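
    Even a crude severity tier table would beat a flat 30 days. A sketch with purely hypothetical numbers (nothing like this appears in the draft):

    #include <stdio.h>

    /* Hypothetical severity tiers: cap the grace period by how bad the
       worst plausible exploit is. The day counts are invented. */
    enum severity { SEV_LOW, SEV_MEDIUM, SEV_HIGH, SEV_CRITICAL };

    static int max_grace_days(enum severity s)
    {
        switch (s) {
        case SEV_CRITICAL: return 2;   /* remote root, wormable */
        case SEV_HIGH:     return 7;   /* remote compromise, harder to exploit */
        case SEV_MEDIUM:   return 14;  /* local privilege escalation, DoS */
        default:           return 30;  /* information leaks, corner cases */
        }
    }

    int main(void)
    {
        printf("critical: %d days\n", max_grace_days(SEV_CRITICAL));
        printf("low:      %d days\n", max_grace_days(SEV_LOW));
        return 0;
    }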
