Internet Draft on Vulnerability Disclosures 114
Cowboy71 writes: "An interesting posting on Bugtraq by Steve Christey announcing the release for comment of an Internet-Draft, "Responsible Disclosure Process," prepared by himself and Chris Wysopal of @stake. You can view the full paper at the IETF site."
Good plan (Score:4, Insightful)
Having a standard document will allow mature parties to avoid being branded crackers if they can follow a published disclosure protocol.
Re:Good plan (Score:2)
I'd have to disagree with that statement. The terms "blackhat" and "cracker" are inappropriate if applied to people who break into equipment they own or are otherwise authorized to compromise. I've broken into systems on which I had authority to do so, and never otherwise. For example, a box where those who knew the root password were long gone and no bootable media was available. That didn't make me a "blackhat", it made me a system administrator with a job to do, that being restoring legitimate access to our equipment. Similarly, if I have an IIS box and perform a security audit on it with the goal of ensuring my IIS box is reasonably safe from compromise, I'm not a blackhat. Of course, a number of other terms come to mind if I were to be running IIS with any intention of being secure.
It'll be interesting to see what's said of vendors who don't follow this proposed standard. That in itself might be more useful. "Not only does Foo Corp. produce buggy, insecure software, they don't even follow the disclosure protocol!"
Re:Good plan (Score:2)
Has anyone tried this angle? (Score:4, Insightful)
Usually they say something like "it's to prevent hackers from learning about and exploiting the weaknesses before we have time to fix them," or some similar reason. Fair enough; this is valid. I can see how this would be a good thing for preventing software piracy and that sort of thing.
But, when it comes to security and vulnerability to attack, don't I have a right to know? Did I waive that right in an EULA? I'm pretty sure that if this happened with any other kind of product, the government would swoop down and set things right.
Think about it. What if Ford had kept the Firestone recall under wraps? (This "vulnerability" can "crash" the "application" and we don't want hackers/competitors to exploit it.) Yeah, good plan... But I'm the one riding in it! This situation sounds pretty ridiculous when it's a "real world" product and not software.
Has anyone else come to this conclusion or know how consumer protection got written out of the DMCA? I'm scratching my head here.
Re:Has anyone tried this angle? (Score:1)
The vulnerability with Firestone was an event not really open to exploitation.
The problem with releasing a vulnerability is that you can pretty much guarantee that soon after, a black hat is going to be working on exploit code.
I can see the argument either way: give the software company time to fix the problem, and then disclose to force their hand.
The problem is, if a hacker discovers the weakness, then they _are_ going to use it. They're already breaking the law (or intend to) so additional legal constraints are just going to get ignored.
Re:Has anyone tried this angle? (Score:1)
In the Ford/Firestone situation, people's lives were potentially in immediate danger.
I've been a sysadmin. I know that dealing with the Nimda of the week can be a royal pain in the butt, but in no way is it fair to compare a security weakness in a piece of software with the Ford/Firestone affair.
I'll also agree that it's possible that there is some pretty dang important information being stored/manipulated/otherwise dealt with by the software. If it's potentially life threatening, hire some good architects, developers, and support staff, and do it yourself.
Re:Has anyone tried this angle? (Score:1)
The problem I see with making the process for reporting vulnerabilities too organized is that anyone can locate a vulnerability. If the process becomes too complex they will just go somewhere else to report them and I, as a sysadmin, don't have the time to follow every black hat bboard.
Re:Has anyone tried this angle? (Score:3, Insightful)
It's not about 'arming the hackers', or even informing the public. It's about making sure vulnerabilities get fixed.
Simply put, when the public doesn't know about a vulnerability, the vendor won't fix it. History has repeated itself ad nauseam. Crackers themselves don't provide sufficient motivation to companies, because vendors aren't liable when their customers' systems are broken into.
The only effective means to force vendors to make their code more secure has been full disclosure. When people know your product is crap, they will eventually stop buying it.
The full disclosure advocates took great satisfaction in Bill Gates' proclamation of refocusing Microsoft on making secure software. There is no way he would have done that if Microsoft hadn't been embarrassed time and time again by people releasing vulnerability details to encourage accountability.
Re:Has anyone tried this angle? (Score:2)
Remember, software remains the property of the folks who wrote it, so maybe you DON'T have the right to know of any defects. The consumer's legal rights when it comes to IP royally suck; just don't expect Thomas Edison's pal Strom Thurmond & Company to come to the rescue for a loooooong time.
Re:Has anyone tried this angle? (Score:1)
Software is a product, not a service, and it is sold under terms more suitable for a work of art. Software vendors routinely refuse to extend the contract to anything the software might do (I, as you can easily see, ANAL).
Just like if I buy a lock for my house, a talented lockpick can probably always open it, but then s/he's faced with the fact that it's a crime to break in anyway.
Right - so it's not necessary to criminalize picking locks. Picking locks or climbing on balconies isn't a crime - but it's an indication of significant obstacles, which makes it relevant to the crime of breaking into a house, and it's definitely grounds for suspicion. The DMCA, on the other hand, aims at criminalizing, metaphorically, the practice of lockpicking and the ownership of ladders.
Kiwaiti
Re:Has anyone tried this angle? (Score:1)
Re:Has anyone tried this angle? (Score:2, Interesting)
The original concept of the DMCA was to provide certain additional protections in order to ENCOURAGE the distribution of DIGITAL FORMATS for MEDIA. Using the DMCA to protect software is simply WRONG. Software has always been digital and doesn't suddenly need new and wonderful protections.
The rather spurious argument used to extend DMCA protection into software is that the software is being used to "enforce" provisions in the DMCA, so any attempt to circumvent those enforcement mechanisms MUST be a violation of the DMCA. (Realizing of course that using the DMCA to prosecute DeCSS should then be nonsensical. The program allows the exercising of GRANTED RIGHTS, namely viewing, to a LEGAL PURCHASER of the DMCA-protected content.)
I guess we're lucky enough to have a country full of lawyers who are more than happy to explain to us why logic isn't the proper thing to use when attempting to interpret the law.
@Stake = Sellout (Score:3, Insightful)
I feel that this gives the companies no motivation to fix the hole. I would instead suggest that when the "reporter" informs the company, the company receives a grace period of 30 days to work on a fix, after which the "reporter" could come forward and release the hole publicly if he/she/it felt that the company wasn't making a good-faith effort to fix the problem. Of course, this whole process is null and void if the program is open source/free software and the "reporter" releases a patch for the flaw at the same time the "reporter" releases the flaw.
Ponder that my friends.
Re:@Stake = Sellout (Score:3, Informative)
3.7.1 Vendor Responsibilities
1) The Vendor SHOULD work with the Reporter and involved Coordinators
to arrange a date after which the vulnerability information may be
released.
2) The Vendor MAY ask the Reporter and Coordinator to allow a "Grace
Period" up to 30 days, during which the Reporter and Coordinator do
not release details of the vulnerability that could make it easier
for hackers to create exploit programs.
Re:@Stake = Sellout (Score:1)
Re:@Stake = Sellout (Score:4, Informative)
(from the draft)
3.6.2 Reporter Responsibilities
1) The Reporter SHOULD recognize that it may be difficult for a
Vendor to resolve a vulnerability within 30 days if (1) the problem
is related to insecure design, (2) the Vendor has a diverse set of
hardware, operating systems, and/or product versions to support, or
(3) the Vendor is not skilled in security.
2) The Reporter SHOULD grant time extensions to the Vendor if the
Vendor is acting in good faith to resolve the vulnerability.
3) If the Vendor is unresponsive or uncooperative, or a dispute
arises, then the Reporter SHOULD work with a Coordinator to identify
the best available resolution for the vulnerability.
and
3.7.1 Vendor Responsibilities
1) The Vendor SHOULD work with the Reporter and involved Coordinators
to arrange a date after which the vulnerability information may be
released.
2) The Vendor MAY ask the Reporter and Coordinator to allow a "Grace
Period" up to 30 days, during which the Reporter and Coordinator do
not release details of the vulnerability that could make it easier
for hackers to create exploit programs.
Re:@Stake = Sellout (Score:2)
I read the draft and your response, and nowhere in your response did the word "public" appear.
Therefore, the original poster was correct in asserting this draft is nothing more than an attempt to stifle public dissemination of security holes.
Maybe you need to re-read the original poster's comments, because it does appear he/she did the prerequisite reading.
Re:@Stake = Sellout (Score:2)
The vendor or other parties may then release
the information - possibly with additional details - to the security
community.
The idea of this is not to stop public disclosure, but rather to stop irresponsible public disclosure. There is nothing wrong with letting a vendor know about the hole, giving him some time to fix it, and disclosing once he has. If there is no easy fix, a public disclosure will only allow others who do not do security research (i.e. script kiddies, not true exploit finders) to exploit the vulnerability for malicious reasons.
Since the other parts of the document designate the security community, and mention specifically two public mailing lists, that counts as public as far as I am concerned.
Re:@Stake = Sellout (Score:1)
exception 1) If the problem is related to insecure design (Windows)
2) the Vendor has a diverse set of OS's to support (Windows 2000/XP Professional, Home, and Server Editions)
3) the Vendor is not skilled in security (Microsoft)
Sounds like they should love this document. And no, I haven't read the whole thing yet.
Re:@Stake = Sellout (Score:2)
I should also point out that it only gives them a grace period; it does not excuse the bug, nor prevent it from being released. The reporter SHOULD give them a grace period, but he does not have to. Certainly, once he has given a grace period, if he feels that the company is seriously trying to fix it (this is hard to determine, but involving the reporter more seriously, by providing patches for testing against, would certainly help), he can extend it if he wants, or release it.
Actually, there are no MUSTs or MUST NOTs for the reporter, but there are for the vendor.
Re:@Stake = Sellout (Score:3, Interesting)
It also denies admins the opportunity to at least shut down or wall off the affected service until a real fix is available.
Not Really (Score:2, Interesting)
That's possible, but I don't really think so. @stake obviously has roots in the non-corporate security community, so we know that they're aware of the benefits of disclosure. It is possible that they are angling for an RFC as a means of protecting amateur security researchers who want to disclose what they find without suffering the fate of Dmitry Sklyarov [freesklyarov.org]. After that debacle, many people stopped disclosing anything because it was obvious that the DMCA would be used to make their lives miserable if they annoyed the companies too badly. A formal RFC that is approved by the IETF would go a long way towards discouraging prosecutors from bringing charges against researchers. ("Members of the jury, how can my client's actions be construed as a crime? He was only following the established procedures laid out by the IETF....")
The problem with drafting an RFC is that not all bugs are created equal, and that usually doesn't get reflected in a standards document. If a popular server application has a local exploit in the installation/registration process, you might want to treat that differently than if the same server application has a remote exploit caused by an inability to handle malformed requests. Why? The first one will affect sites for a brief period of time while the sysadmin installs some software. The second example affects any site running the software, thus having the ability to impact many more people. I'd be much more likely to give the software manufacturer more time to fix a remote exploit...
Protecting turf. (Score:1, Troll)
Alan Cox / DMCA / Open Source "vendors" (Score:4, Interesting)
OK, I've made an attempt to read the document critically. Reading it reminded me of some of its more obvious failings, though:
I have to admit that it's a good general solution for presentation to and ratification by the Microsofts of this world - companies for whom marketing departments have more control over release dates than systems engineering or test departments...
...but these are the very companies that are LEAST likely to pay attention to the words of the technological minority, in favour of placating the fickle majority. Anyone else see a problem here??
Re:Alan Cox / DMCA / Open Source "vendors" (Score:2)
Re:Alan Cox / DMCA / Open Source "vendors" (Score:1)
http://slashdot.org/article.pl?sid=01/10/22/172
and here's a followup
http://slashdot.org/article.pl?sid=01/11/11/142
Re:Alan Cox / DMCA / Open Source "vendors" (Score:1, Offtopic)
Re:Alan Cox / DMCA / Open Source "vendors" (Score:1)
Anything new? (Score:5, Insightful)
There are, of course, people who discover vulnerabilities and immediately publish all the details without notifying the vendor, but an RFC is hardly going to stop them.
All the same, guidelines are nice. I'm a little skeptical of vendors sticking to the suggestions. Too many SHOULDs and MAYs.
To recap, the proposed RFC suggests 7 stages in fixing a vulnerability:
1. Latent flaw. The flaw exists undiscovered.
2. Discovery. Somebody finds the flaw (the 'Reporter').
3. Notification. The Reporter notifies the Vendor.
4. Validation. The vendor verifies the flaw.
5. Resolution. The vendor fixes the flaw.
6. Release. The vendor publishes the flaw.
7. Follow-up. Analysis of the resolution.
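The seven stages above are strictly ordered, so they can be sketched as a simple state machine. This is my own hypothetical illustration, not anything from the draft's text:

```python
# Hypothetical sketch: the draft's seven stages as an ordered state
# machine where each stage can only advance to the next one.
from enum import IntEnum

class Stage(IntEnum):
    LATENT = 1        # flaw exists, undiscovered
    DISCOVERY = 2     # somebody (the Reporter) finds the flaw
    NOTIFICATION = 3  # the Reporter notifies the Vendor
    VALIDATION = 4    # the Vendor verifies the flaw
    RESOLUTION = 5    # the Vendor fixes the flaw
    RELEASE = 6       # the Vendor publishes the flaw
    FOLLOW_UP = 7     # analysis of the resolution

def advance(current: Stage) -> Stage:
    """Move to the next stage; FOLLOW_UP is terminal."""
    if current is Stage.FOLLOW_UP:
        raise ValueError("process already complete")
    return Stage(current + 1)

# Walk the ideal process from start to finish.
stage = Stage.LATENT
while stage is not Stage.FOLLOW_UP:
    stage = advance(stage)
print(stage.name)  # FOLLOW_UP
```

Of course, as the rest of this comment points out, real vendors rarely transition through these states so cleanly.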
What a nice world this would be.
It usually works like that right up until step 5. Here's what really happens:
5. Denial. The vendor denies the flaw really exists, setting his best PR guys on the job.
6. Demonstration. The Reporter creates exploit code to prove to the vendor that not only does it exist, but it is serious and should be fixed.
7. Diversion. The Vendor changes the subject by publicly attacking the Reporter for creating the demonstration, labeling it a "Hacker Tool".
8. Publication. Third party bug tracking systems and security entities make knowledge of the vulnerability widespread to try to scare the Vendor's customers.
9. Fix. The Vendor repairs the vulnerability, while still denying that it has any real significance.
10. Release. The Vendor shuffles the release into a service pack or update, and puts it on his web site.
Re:Anything new? (Score:1)
sPh
Re:Anything new? (Score:4, Funny)
6. Demonstration. The Reporter creates exploit code to prove to the vendor that not only does it exist, but it is serious and should be fixed.
7. Vendor hires a DMCA lawyer to sue the pants off the reporter for exploiting vendor's product
8. Government incarcerates random employee of reporter's organisation who just happens to be in the country at the time.
9. Vendor retracts suit.
10. Government continues to incarcerate random employee, sticking tongue out at the rest of the world in the process.
I give up.
Re:Anything new? (Score:1)
Re:Anything new? (Score:1)
Re:Anything new? (Score:1)
Maybe in addition to publicly attacking, they could send the lawyers as well:
7. Lawsuit. The Vendor threatens the Reporter for creating the demonstration with lawsuits, claiming that the "Hacker Tool" is damaging their reputation.
Re:Anything new? (Score:2)
And please remind us again why this is a bad thing?
Re:Anything new? (Score:1)
If the vendor is responsive and cooperative, then releasing the details early will do more harm than good.
Something that I think is missed in the proposal is accommodation for unique circumstances. There are times when full disclosure is a really bad thing, like when a vulnerability poses significant risk to human life or national security.
Re:Anything new? (Score:2)
Re:Anything new? (Score:2)
Re:Anything new? (Score:2)
You can write a whole RFC if you want, but Jamie's explanation is more concise. In short, never publicly announce a bug before you've given the vendor the time necessary to correct the problem. ILOVEYOU is a perfect example of this; instead of writing a virus, someone should have just sent mail to bgates@microsoft.com giving details of the exploit, and allowed the vendor time to fix it. Writing an exploit just puts you at risk of violating the DMCA and does not generate goodwill from the vendor.
Re:Anything new? (Score:3, Insightful)
Not really.
That's why Full Disclosure is a Good Thing[tm]: It ensures that the amount of time between discovery by blackhats (and knowledge only by blackhats) and knowledge to sysadmins is minimized. When sysadmins know, they may decide to shut down their systems. Giving Vendors another 30 days only gives blackhats 30 more days to exploit vulnerable systems. That's not a Good Thing[tm].
However, vendors should be given prior notice. How long this period should be, I have no idea (I posted a question to
I think that the period should be shorter depending on how long blackhats may have had knowledge about it and how serious the flaw is.
SAAG meeting (Score:1)
While there were a few dissenting opinions, most people agreed that the most intelligent thing to do is to notify the vendor, give them some time to fix the problem, and then publish the vulnerability. However, no one spoke out in favour of the formalized system as described in the draft.
-a
No need for a law (Score:1)
The fear is valid (Score:1)
For the most part they are opponents of the full disclosure model, and they would love to have rules imposed on people who discover vulnerabilities.
So if you discovered a bug and published its details without notifying the vendor or going through the correct process, you could go to jail.
And if such legislation was introduced, guess who it would favour, the Customer, the Reporter, or the Vendor?
We still have first amendment protection (Score:1)
Re:No need for a law (Score:2)
Second, if vendors see an opportunity to combine an "industry standard" with the DMCA to stifle dissent, then they will grab it. It's only a short step from there to having Hollings introduce the combination as law, with 20 year prison terms to follow.
sPh
Re:SAAG meeting (Score:2)
Process is Vendor biased (Score:4, Insightful)
Simply setting up the recommended email address and generating an auto-reply (or the equivalent with form letters and an assistant) to all reports would acknowledge all claims. This auto-reply could immediately include a request for extension. Delay the auto-reply by 4-7 days, put snapshots of similar keyword searches from the internal knowledgebase in it, and you have a suitable claim for an extension.
Any disclosure of the flaw before the extension expires, and the vendor can quite happily say that they are following the process and the reporter is not. Meanwhile the vendor themselves has no reason whatsoever to follow the remaining sections of the process; they can simply allow all periods to lapse and proceed of their own accord. This allows the reporter to be labelled as acting in bad faith, whereas the vendor can artificially appear to act in good faith.
Not following this best practice would furthermore not generate any additional bad publicity for the offending company. Vendors operating in bad faith will already have a negative image. Vendors operating in good faith will have an extra overhead that they may not be able to support if they follow this process.
It additionally appears vendor biased because it does not offer any benefit to the security community, or the user community at large. Prevention of exploits does not appear as a goal of this process. Nor does protection from exploitation of flaws. Unless, of course, we make the unreasonable assumption that exploits do not appear until over 30 days past the point of discovery.
It's an Internet-Draft, not a proposed regulation (Score:2)
When you are implementing a system that uses an RFC, it is solely your responsibility to comply with the requirements. The same goes here.
The assumption is that both the Reporter and the Vendor act in good faith.
Think of this as a proposed guideline that researchers and vendors should follow. Vendors are still free not to fix bugs, and researchers are still free to recklessly publish exploits to the world at large.
But if they both followed the instructions, things wouldn't be too bad. Of course if a Vendor persists in denying a bug's existence then the Reporter can, and should, publish the details. This is how things work most of the time already.
You should note an earlier poster who mentioned concern that if this turned into an RFC, it might eventually lead to legislation. Which would be a bad thing.
Re:It's a Internet-Draft, not a proposed regulatio (Score:2)
Answer: very few, if any.
However, what RFCs (and BCPs) like this can potentially provide is an escape clause for responsible employees who discover a problem with their company's software to make it public without having their own employer sue their ass under the DMCA or whatever local equivalent might exist. Let's face it, if a Techie gives his PHB a procedure for dealing with incidents that includes a phrase like "in compliance with RFCs X, Y & Z," how many are going to read them before they sign off on them?
Or you could even be honest and openly use the RFC as a basis for your own company policies on the matter at hand.
Re:Process is Vendor biased (Score:2)
I have heard of the frustration on both sides with regards to this. Some vendors don't have the resources to fix a bug within 7 or 14 days. Some security consultants have reported flaws and have not received any word for months.
The idea is for the security community to realize that vendors are people too, and for vendors to realize that security people aren't all h4X0rZ.
Parallels to journalism and corruption (Score:1)
Re:Parallels to journalism and corruption (Score:1, Troll)
1. Publish the exploit. Get it loose in the wild.
2. Publish the fix or workaround, if there is one.
3. Inform the vendor.
Brutal, but anything less becomes a mess of how long the vendor can delay doing anything about it.
The document isn't as bad as you might expect (Score:3, Interesting)
The recommendations make a reasonable attempt to protect vendors and customers (i.e. it doesn't go for the zealous instant-full-disclosure approach) without allowing reports to go ignored for too long. It basically says the vendor should get 7 days to initially respond to the report, then 30 days to fix the problem, and then can request a 30-day grace period to get the fix out to customers before the reporter releases details to the public. If the reporter goes along with this, the vendor must credit the reporter in its own public announcement of the fix.
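That timeline is simple enough to work out mechanically. Here's a hypothetical sketch (my own function name and structure, not anything defined by the draft) of the 7/30/30-day schedule:

```python
# Hypothetical sketch of the draft's timeline: 7 days for an initial
# vendor response, 30 days to produce a fix, and an optional 30-day
# grace period before the reporter releases details publicly.
from datetime import date, timedelta

def disclosure_dates(reported: date, grace_requested: bool = False) -> dict:
    response_due = reported + timedelta(days=7)    # initial vendor response
    fix_due = reported + timedelta(days=30)        # fix expected by here
    # Grace period (if requested) delays public release past the fix date.
    release = fix_due + (timedelta(days=30) if grace_requested else timedelta(0))
    return {"response_due": response_due, "fix_due": fix_due, "release": release}

d = disclosure_dates(date(2002, 2, 22), grace_requested=True)
print(d["release"])  # 2002-04-23
```

So a report filed on February 22 with a grace period granted wouldn't see public details until late April, which is exactly the window some posters here consider too generous.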
It's kind of weird, seeing an internet draft using protocol-like terminology to describe how people and companies (rather than computers) are supposed to interact with one another. I hope this isn't supposed to be an RFC, or else that this kind of thing is normal for the IETF (I haven't read that many IETF docs). I've never seen a thing like this and it seems to turn the IETF into an even more political body than it already is. I thought the IETF was supposed to make recommendations about bits and packets, not people and companies.
Anyway, I can take issue with some of the points in the document, but at first glance there's nothing in it that I'd call outrageous. It seems like a genuine effort to find a good intermediate policy between instant full disclosure (and instant widespread exploits) and leaving stuff secret forever (letting exploits spread without the public knowing). Whether that policy is the optimal one is of course a matter of opinion and reasonable people can differ on it.
The document's authors came from several different camps (two names I recognize are Bruce Schneier's co-author Adam Shostack (presumably on the full disclosure side), and Microsoft's Scott Culp representing the Evil Empire) and it looks like they managed to arrive at a consensus. I guess that's a good sign and I hope Bruce and/or Adam will publish their own opinions of the draft soon.
Re:The document isn't as bad as you might expect (Score:1)
Uh, this is perfectly normal for a Best Current Practice (BCP) document. I assume that's what's intended here, or perhaps just fodder for the plenary or something.
What about the role of the public? (Score:3, Interesting)
I think this document is a big win for the coordinators, but a big loser for the public.
Unresponsive Vendors (Score:2, Interesting)
Re:Unresponsive Vendors (Score:2)
What about the rest of us? (Score:1)
Now I run a linux firewall and a couple of other linux distro machines for education/work and two Win32 machines (ME and 2000).
I'm in the position of knowing almost nothing about security, and although I know something about network programming I know very little about the common networking components available and/or how to fix them myself if there is an exploit.
This means (obviously) that I am completely dependent on posted security holes to keep my network secure. If this standard keeps me from even KNOWING about a hole until the sooner of 30 days or a patch, doesn't that make me, the consumer, intentionally and knowingly left open to attack by the vendor? Shouldn't they be required to let me know? I can't imagine the pressure high-security network admins would be under if the standard required those 30 days.
Just my two cents.
Re:What about the rest of us? (Score:1)
First off, don't wait for vendor information; go to CERT, go to BugTraq, etc. Second, try to be proactive instead of reactive. There used to be a tool called SATAN (Security Administrator Tool for Analyzing Networks) that can scan your network for common vulnerabilities. There should be some more tools like that. (Look at grc.com and other sites.) Finally, invite someone to do a "hack test" of your network. They should be able to give you a decent idea of potential vulnerabilities. One important note: successfully passing a "hack test" is not a one-time event; you need to retest every so often (think quarterly, semi-annually, or at the very least annually).
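The proactive approach can start very small. Here's a toy sketch of the basic idea behind those scanning tools: checking your own hosts for unexpectedly open TCP ports. It's purely illustrative (the function and port list are my own invention, nothing like a real scanner), and naturally you should only point it at machines you're authorized to test:

```python
# Minimal, hypothetical port check: report which of the given TCP
# ports accept a connection on a host. Illustrative only; real
# scanners do far more (service detection, known-vulnerability checks).
import socket

def open_ports(host, ports, timeout=0.5):
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on a successful connection
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

print(open_ports("127.0.0.1", [22, 80, 443]))
```

Run that against your own boxes periodically and compare the results to what you *expect* to be listening; anything unexpected is worth investigating.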
Good luck, read stuff, and experiment. It's the best way to improve.
Required exceptions to non-disclosure pending fix (Score:5, Interesting)
I've seen a site well and truly compromised because frickin' Microsoft sat on a bug long after the blackhats had an exploit. It only took two days before their entire DMZ was rooted and credit card details stolen, and the stupid thing was, if the site had known that there was a problem they could have worked around it and avoided the legal mess they got into and are still in.
The only saving grace is that this probably won't happen to them again; they are now an ex-customer of Microsoft's and running Apache instead. True, Apache has its own problems, but at least they give you a chance to prevent any issues arising if you care to do so.
PS. Can I interest anyone in 40 used copies of NT Server? Thought not.
Re:Required exceptions to non-disclosure pending f (Score:2)
I agree though. If there is evidence that this has appeared in the wild, it should be immediate, even if it means disabling the service.
Interesting that I get to go to work today to image a machine that has been NetBus'd.
The draft doesn't cover sniffing in public (Score:3, Insightful)
Re:The draft doesn't cover sniffing in public (Score:3, Interesting)
For Microsoft, if you release any information during discovery you're probably labelled as malicious, even if the thoughts are hypothetical or hunches.
However, it's necessary to allow people to work together on the Internet in some form, or else we can't benefit from each other's eyes.
this RFC should NEVER come close to a standard (Score:2, Interesting)
Whois the bad guy? (Score:2, Interesting)
It never ceases to amaze me (Score:1)
So I could happily accept the proposed standard if the vendor had to pay for damages incurred because of its software in proportion to what you paid (this releases free software from any problem).
Say you paid $1,000.00 for a Web Server, someone informed the Vendor of a vulnerability and after 30 days they did nothing and your site got hacked, they would pay you 100 times the value ($100,000.00). Believe me when I say they would be much more "responsive".
Back on the free speech. Imagine a serial killer that only kills white men wearing blue shirts at midnight in a city. If you live there, would you wait until police capture the killer to publish the story? I, as a man wearing a blue shirt living in that city, would be VERY grateful to know it and, at least, avoid wearing the shirt or being out on the streets at midnight.
Just my 2 cents
Re:Remember the L0pht? (Score:1)
It's amusing to look at current pictures of Mudge as well; it really points out the changes that have occurred with the former l0pht: Current "Mudge" [stake.com] versus the Mudge that was. [cio.com]
Notification (Score:1)
How about taking this a bit further and REQUIRING the Vendor to contact registered Customers? And I don't mean posting a bulletin to www.mycompany.com/security or offering an option to subscribe to some mailing list.
Of course, I doubt vendors would agree to this requirement, since it would imply the vendor taking some kind of responsibility for the vulnerability associated with their product.
Hypocrisy (Score:2, Interesting)
"We strongly encourage folks to report such problems to our private security mailing list first, before disclosing them in a public forum."
Let's not bash Microsoft too much, if Apache is doing the same thing.
Re:Hypocrisy (Score:1)
Re:Hypocrisy (Score:1)
In any case, the "Security By Embarrassment" crew should do what they do everywhere else: post it everywhere all at once.
Re:Hypocrisy (Score:1)
Re:Hypocrisy (Score:1)
However, Apache is, according to Netcraft, still beating IIS, so their period of disclosure should be relatively short.
Once you start making exceptions, extensions, etc., it starts turning into tax law. "Well, we have to give them less time, since this software runs in hospitals." "It is freeware; maybe we should let them wait longer."
All I know is, I'd really like to see someone do a study, with control groups, etc., which shows if public disclosure hastens the progress of a patch, or allows for more hacks, or causes early but lousy patches.
It would be entertaining to take various hacks and decide how they got released, just to prove once and for all what effect this stuff REALLY has, rather than throwing out verbiage on what the blackhats THINK will happen.
Re:Hypocrisy (Score:1)
All I know is, I'd really like to see someone do a study
There's an idea. The only problem with a study is getting close enough to observe without them noticing. After all, we need to observe them in their natural habitat.
Re:Hypocrisy (Score:1)
Any time Apache asks anything it's different than when Microsoft "asks" because Apache doesn't walk around toting baseball bats and asking for favors while in the next breath mentioning what a shame it would be if a fine establishment like yours were to burn down.
Motivation for Disclosure (Score:2)
If you find a glitch in a competitor's product, why is it automatically evil to disclose it publicly? One valid tactic of advertising (in the US) is to denigrate competing products. When Microsoft announced [microsoft.com] that Windows NT beat Linux in performance tests, did they give Linus private notice and 30 days to respond before issuing the press release? Does Compaq let IBM know beforehand that it has better TPC numbers [tpc.org]? After Dateline NBC staged exploding gas tanks [whatreallyhappened.com], did the Wall Street Journal give them a month to come clean before revealing the scam?
If you worked at Be in 1998 and knew of a fundamental, nearly unfixable flaw in Windows, how much time would you grant Microsoft to address it?
Response: Disclosure is Directly Useful to User (Score:3, Insightful)
I am sure that you are receiving dozens of comments on this draft, so I will try to keep mine brief. I am a security technician and sysadmin for a large nonprofit research organization. In your draft's terms I believe I represent a "User" more than a "Reporter" -- though a user with security-specific experience.
It seems to me that your draft undervalues the powers of users to protect themselves independent of the actions of vendors. Users are not entirely reliant upon the vendors of the software they presently use to protect themselves, and they can make use of published security information even if a vendor does not choose to acknowledge or proceed responsibly with the knowledge of a vulnerability. Moreover, they have a need for this information outside of its use in getting patches for existing software.
Most software users are not obligate users of particular pieces of software. They choose among competing software products (or even system designs), and make use of published information about these products in making their choices. They may choose to migrate from an installed software product to a competing one on the basis of published security concerns.
Because users need security disclosure to make informed decisions about the costs and benefits of pieces of software, they have an interest in fuller and more analytical disclosure than vendors may desire. Large vendors may prefer that users who have already purchased their products not later question that purchase. They may resist the idea that a /pattern/ of vulnerabilities or poor practices exists in their software.
And for a vendor to quietly roll security patches into an "upgrade" may help the user to avoid being cracked, but does not help him or her make responsible decisions about future purchases.
Security researchers, it seems to me, have an ethical obligation not to aid criminals in attacking users. However, they (you) do not have an obligation to keep vendors from losing business, or to allow vendors to keep users in the dark regarding the comparative security strengths of software products. In many cases, users would be better served by being advised when the software they are using is poorly designed, has a history of vulnerabilities, and is likely to remain vulnerable to new sorts of attack -- rather than merely being told to wait for a patch, or not told anything independently at all.
Vendor MUST do a lot... (Score:1)
... while reporters literally don't have to do anything to be compliant with the spec. There are no MUSTs for reporters at all, and only one or two for coordinators. There are about a dozen for vendors.
I'm not against vendors taking some responsibility for their products. I work for a vendor, and we take our defects (and trying to prevent them in the first place) very seriously. But if someone is going to poke and prod and pry at our product to find vulnerabilities, then they should bear some of the responsibility for responsible disclosure.
Re:Vendor MUST do a lot... (Score:1)
Re:Vendor MUST do a lot... (Score:1)
The definition of "MUST" is "you are not in compliance with this protocol unless you do this". What we have here is a situation where the reporter can do (or not do) anything he wants, and technically be in compliance with the protocol.
If you file an excellent bug report with the vendor and then wait 30 days before telling anyone about it, then you are in compliance. On the other hand, if you create an exploit and release it to the world the first day you discover the bug, then you are technically in compliance with this protocol.
Not really much of a protocol, if you ask me.
favors only the vendor...and their friends (Score:2)
Reporting should be public and immediate. Someone who discovers an exploit in a particular piece of software is under no obligation whatsoever to protect that vendor's image; in fact, using that as a reason *not* to release an alert is just plain brain-dead. If the vendor releases buggy code and customers migrate to another product - hey, that's capitalism, straight from Adam Smith! The consumer makes an informed choice based on actual facts - which is how the market is supposed to work if people aren't colluding to keep secrets or misinform the consumer.
Vendors have no right to non-disclosure. Vendors have no right to have their 'image' or 'reputation' protected. If the company is that concerned it's up to *them* to invest the time and energy tracking down the bugs and repairing them. If we find them then we have every right to circulate them publicly for review, pointing out the flaws for all to see. This gives the consumer - you and me - the option to abandon the software in favor of a competitor's product, or to disable whatever part of that software is causing the problem until the vendor can be bothered to put out a patch.
Incidentally, it also allows us to determine who makes the shittiest software and avoid future purchases of their products. Now why does my cynicism kick in when I consider the fact that the people who put together this proposal are sucking away at the Microsoft tit?
Max
Re:favors only the vendor...and their friends (Score:2)
Darn tootin'. To my mind this is nothing more than a re-hash of the old "disclose or not" debate, except the authors have gone to the trouble of writing up the "not" standpoint as an RFC. My comment is, it sucks.
The customer is left unprotected, under this scheme, until the vendor gets around to fixing the problem. Did you see the part about how the vendor will provide the fix free of charge, or for a nominal charge? No? Maybe that's because it wasn't there.
And I don't think anybody has yet commented on their new, improved, standardized e-mail address for reporting security problems. Standard for whom? AFAIK, nobody uses that now; why not pick something obvious, like security@domain.com?
I'm afraid @stake has sold out. Maybe it's something in the "@" symbol -- the same way @home sold its mailing lists to every spammer under the sun.
This draft SHOULD have (Score:2)
I'm just sayin'.
Resource recommendations? (Score:2)
Am I the only one... (Score:2)
In my book, this is considered a favor.
So now there's a draft which is going to tell me how to properly do this favor for them or else I am a 4$$hole?
So it's as if you do me the favor of watching my dog, and I turn around and tell you that you need to watch my dog in my house, that the dog needs to remain in my garage, and that you need to remain in there with him to make sure he does not eat anything that may make him ill. And on top of that I tell you that you cannot wear anything blue/black/red/white because it makes my dog nervous, and that you MUST play with him for at least 45 minutes. And only feed him the special mixture of dogfood/yogurt served in the yellow tweety bird bowl, which you will have to wash at every feeding.
Or of course, I could just be grateful that you informed me of a vulnerability in my software, and grateful that you are watching my dog.
(Score: -1 Ranting)
One slight problem (Score:2)
Severity? (Score:1)
What this document doesn't address is the severity of the vulnerability. If a vendor can get up to 30 days to find a fix for a problem regardless of how severe any potential exploits are, I certainly would not feel comfortable as a sysadmin or a customer.
I have to admit, considering the number of SHOULD conditions in the document, it is nowhere even close to being a final document. Addressing the severity of the problem is a very important part of the equation. It simply cannot be ignored.