Group Releases Anti-Disclosure Plan
dki writes "SecurityFocus reports that the Organization for Internet Safety (OIS), a group of 11 of the largest software and security companies, has released a public draft of a proposed bug disclosure standard. The document outlines a process for reporting and disclosing bugs that aims to eliminate releasing exploits to the general public. Not surprisingly, the OIS was founded out of a Microsoft-hosted security conference. Comments on the draft will be accepted until July 4th; the final copy will be released at the Black Hat Conference in Las Vegas."
7.1 and 8.2 esp. disturbing. Send Feedback! (Score:5, Informative)
"7.1 Advance Notification
This document does not address processes for notifying selected groups of users about vulnerabilities in advance of the general population. While such "pre-release notifications" are sometimes done, and in very well-controlled cases can be carried out effectively, they are not a recommended practice in the general case. Because this document addresses only activities that are appropriate for typical cases, advance notification is beyond its scope."
"8.2 Use of Third Parties
In some cases, investigations may be made more effective by the use of people or organizations other than the Finder and Vendor. There is no requirement to use a third party, but in cases where one is used, it should be a person or organization that the Finder and Vendor have agreed to in advance of its involvement. Characteristics of a good third party include sound judgment, freedom from bias or conflicts of interest, demonstrated technical expertise in security technologies, and discretion regarding the handling of the information it is entrusted with to resolve the dispute. Third parties normally serve in a voluntary capacity, as a service performed in the public interest."
Members of the OIS... (From the OIS site http://www.oisafety.org/about.html#1) What companies are members of OIS?
The current members are: @stake, BindView, Caldera International (The SCO Group), Foundstone, Guardent, ISS, Microsoft, NAI, Oracle, SGI, and Symantec.
We're actively soliciting software vendors and security research companies to join. Send OIS your feedback on the draft until July 7th! (From the OIS site http://www.oisafety.org/resources.html) "Comments on the Security Vulnerability Reporting and Response Process should be sent via email to draft-feedback@oisafety.org [mailto]. Comments should include your name, address, and email contact information. Organizations submitting public comments should include the name and title of the person submitting the comments. While OIS will respond to as many comments as possible, because of the anticipated volume of comments, we cannot guarantee an individual response to every comment."
Re:7.1 and 8.2 esp. disturbing. Send Feedback! (Score:5, Interesting)
Re:7.1 and 8.2 esp. disturbing. Send Feedback! (Score:5, Insightful)
On the flip side, this mentality rarely exists in open-source code projects due to the lack of a need to safeguard against decreased shareholder value. Also, IMHO, open-source developers continue to maintain a sense of pride in their work that is for the most part missing in closed-source projects due to the direct recognition (and therefore, culpability) that is possible by putting your name to a piece of software that others can peruse and make comments and criticisms about. With closed-source projects, the company's reputation is primarily at stake while their programmers are largely hidden behind the web of corporate logos and NDA's, and therefore may not have reasons as strong as those of OSS developers to fix and release bugs in a timely manner.
So in other words, it's to protect lazy software companies from being held accountable for their own poor coding practices and lackadaisical maintenance policies.
Re:7.1 and 8.2 esp. disturbing. Send Feedback! (Score:2)
The open source community is no better at getting patches deployed than the commercial vendors and is often worse. There are plenty of machines out there running unmodified Redhat 6.2 with lots of known vulnerabilities.
The recent DDoS attacks on the DNS roots were mounted f
Re:7.1 and 8.2 esp. disturbing. Send Feedback! (Score:2, Insightful)
Re:7.1 and 8.2 esp. disturbing. Send Feedback! (Score:4, Insightful)
HUH? You are comparing apples to oranges. How many commercial vendors do you know that deploy and install patches on your systems?
Deployment responsibility lies with the end user, whether using open source or closed source. The scope of this article and the parent poster is bug-fix acknowledgement and patch creation by software creators. How these patches get installed on your specific systems is another topic.
Re:7.1 and 8.2 esp. disturbing. Send Feedback! (Score:2)
I disagree. For example, I run both Debian and FreeBSD machines. In both cases I subscribe to their security mailing list. Whenever a vulnerability becomes known in any of the thousands of packages supported by them, I get an email telling me about it, with instructions on how to go about installing the patch.
I don't know of any commercial OS vendor that does the same thing. Windows only deploys p
Re:7.1 and 8.2 esp. disturbing. Send Feedback! (Score:1)
At least in this case, I don't think any of the OIS members are really wonders. They're just trying to pull something over the public's eyes.
Re:7.1 and 8.2 esp. disturbing. Send Feedback! (Score:2)
Re:7.1 and 8.2 esp. disturbing. Send Feedback! (Score:3, Funny)
Re:7.1 and 8.2 esp. disturbing. Send Feedback! (Score:5, Insightful)
Re:7.1 and 8.2 esp. disturbing. Send Feedback! (Score:2, Interesting)
Re:7.1 and 8.2 esp. disturbing. Send Feedback! (Score:2)
That's the whole point, isn't it?
SB
Re:7.1 and 8.2 esp. disturbing. Send Feedback! (Score:5, Funny)
The obligatory:
1. Create crappy software
2. Make other people correct its flaws
3. Sue the fixers for copyright infringement
4. Profit!
Re:7.1 and 8.2 esp. disturbing. Send Feedback! (Score:3, Funny)
Hey, you missed something:
X. ???
more bugs (Score:2, Funny)
user: windows update recommends that i install 14 critical updates.
microsoft: you cannot install this patch without all the updates
user *installs 10 updates, 11th fails*
user: uh...
microsoft: it's your problem. btw, the bug disclosure process bug is your problem too.
Section 9 Missing (Score:5, Funny)
All OIS participants must either look like Peter Norton or Steve Ballmer. Minimally this can be performed by wearing khaki pants, a blue denim shirt, and sensible shoes.
No person or organization wearing black, having purple hair, or listening to obscure music may participate as either a Finder, Vendor, Coordinator, or Arbitrator.
Re:Section 9 Missing (Score:3, Funny)
(that's what I thought you said! time for bed, damn it!)
Re:Section 9 Missing (Score:1)
Re:Section 9 Missing (Score:2)
Excellent! (Score:5, Funny)
Re:Excellent! (Score:1)
or stops using C/C++ (Score:2)
Re:or stops using C/C++ (Score:2, Informative)
90%? Maybe that's a bit much, but all in all, most other 'higher' languages are written in, hm, well, C/C++.
Using something that was written in C/C++ doesn't really exclude buffer overruns; it just minimizes them.
but they're written once, carefully (Score:3, Insightful)
Re:Excellent! (Score:1)
Re:Excellent! (Score:5, Insightful)
Look how this group defines a security vulnerability. According to this one, there are no security vulnerabilities.
For the purposes of this process, a security vulnerability is a flaw within a software system that can cause it to work contrary to its documented design and could be exploited to cause the system to violate its documented security policy.
There are a few publicly available systems with documented design and security policies, but it's explicitly stated that the security assumptions only hold when the system has cooperative users and is connected to a trusted network (better yet, no network at all).
Re:Excellent! (Score:1)
This is an interesting definition now isn't it? Does this mean that all undocumented behavior is considered a security flaw? If so, we all better get busy. We've got about 15 million vulnerabilities to disclose just for M$ Word alone!!!!
Re:Excellent! (Score:3, Interesting)
Quite the contrary. According to the definition, undocumented behavior is perfectly acceptable, even if it has obviously unwanted consequences.
Re:Excellent! (Score:2)
One line summary (Score:5, Insightful)
It's all in that last phrase "only after a remedy is available"
Re: One line summary (Score:4, Interesting)
IOW, it's back to the bad old days when Microsoft didn't bother trying to fix exploitable software at all.
Re:"only after a remedy is available" (Score:2, Interesting)
See, that's a very funny way to do it. What are they trying to accomplish by this? Remember the SQLSlammer?? How long had the "fix" for the vulnerability used by the SQLSlammer been available?? How hard did the worm hit?? I don't care what the group thinks it's going to achieve with their recommendations on vulnerability reporting; with the companies involved, it won't do any good. Even when Microsoft has the fix available, they are stil
Re:One line summary (Score:2)
Then again, you only read what you wanted to read, didn't you?
I say give some time, but not too much (Score:5, Interesting)
Of course, I'd recommend a very short wait time, probably between 48 hours and one week. Just enough time to solve the problem if enough resources are diverted to it, but not long enough to allow anyone to ignore the problem until later.
Response from 1337 (Score:5, Insightful)
Eight hacking groups join together to set an official standard for limiting disclosure of software security holes.
#1337, EFnet - Eight computer hacking groups wrapped up a three-day Exploits in Computing Forum on Thursday by formally announcing a coalition against full disclosure of vulnerability information, ending a week of intense speculation and immediately sparking controversy.
WanSan, TCPuke, NetLoft, HeavySak, BitEvil, SYNergy, HPLat, and DownScope joined together to declare they would immediately begin following a policy of limited public disclosure of security vulnerability information. Members of the coalition who discover new vulnerabilities will omit from their initial public advisories any details about how a hole might be exploited in an attack, and will not include code that demonstrates the bug. Thirty days after the first advisory, a more detailed notice can be released under the rules. Full disclosure of the vulnerabilities will be shared only among the members for "testing" purposes.
"We felt that as responsible industry leaders, we, as a voluntary organization, are going to follow a set of reasonable standards," said DXNo, manager of 1337â(TM)s intrusion exploitation, in an interview.
1337 will also draft a proposed international standard for notifying vendors and the public about newly-discovered software security bugs, following the group's limited disclosure ethic. The organization will admit new members, under an as-yet unwritten set of bylaws. The initial draft of the limited disclosure ethic will limit the disclosure to the home pages of the vulnerable sites.
A chief objective of the group is to discourage 'full disclosure,' the common practice of revealing complete details about security holes, even if publication might aid attackers in exploiting them. The group believes that any type of full disclosure would assist software vendors in patching various vulnerabilities before they can be widely exploited.
Publishing complete information, and sometimes "exploit" code that demonstrates a vulnerability, is de rigueur among many computer security professionals, who argue that malicious hackers can acquire the same information themselves, and that network administrators and security gurus often need technical details to properly defend themselves from attack.
But Culp criticized the practice in an essay published on a Microsoft Web site last month, and blamed "information anarchy" for the epidemic of malicious worms that have struck the Internet in the last year. "It's high time the security community stopped providing blueprints for building these weapons," Culp wrote.
Under the plan, member groups would share detailed information during the 30-day grace period with "other communities in which enforceable frameworks exist to deter onward uncontrolled distribution." The last category would allow member groups to share details with one another. "They're not going to ban it among themselves," says Levy. "They might be willing to limit the public access to this information, but I highly doubt that they'll limit it among each other."
"People have to do it Microsoft's way or they'll have this group telling them that they're acting irresponsibly," says Maiffret. "It's going to drive people into the underground, and could lead to more people breaking into computers." The majority of members in 1337 agree with Maiffretâ(TM)s assessment.
"We are not trying to form a secret society of exploiters," says CKLawz "We are just creating a standard... This represents one of the first process standards between security companies and vendors."
wyZopa1 estimates that it will take one or two months to produce drafts of the proposed RFCs. He emphasizes that the standards would not just limit vulnerability disclosure, but would also spur vendors to be more responsive to security vulnerability reports. "My goal in the RFC is to have equally stringent standards for vendors and exploiters," says wyZopa1. "We worked hard to discover these vulnerabilities; the developers should work just as hard to fix them. Providing them with all our tools without compensation is not what software development is about."
anti-competition - anti-bad-press (Score:5, Insightful)
All you need isn't love (Score:4, Funny)
Well, that's a short list just anyone could sort out in a weekend
Re:anti-competition - anti-bad-press (Score:1)
I'm also of mixed mi
And yet again - hackers, not bad coders are blamed (Score:5, Insightful)
Re:And yet again - hackers, not bad coders are bla (Score:2, Insightful)
1) Exposing a vulnerability will make any company that does not try to "repair" it liable -> costs money
2) Having to "repair" the vulnerability because of the liability problem will cost money too
Now look at the proposed "solution"
a) Don't tell anyone you know about the problem, and you can use "plausible deniability" (just let your opponent *prove* you knew about it). Result: nobody can effectively sue you, and the threat of losing money is gone.
b) The liability-threat (aka : los
Re:And yet again - hackers, not bad coders are bla (Score:2)
New York Times Banner Ad and Non-Disclosure (Score:2, Funny)
They, in particular, excel at non-disclosure... Perhaps they'll be joining this "Organization for Internet(Information) Safety"
Nice 'process' (Score:5, Informative)
2.2 Phases
The basic steps of the Security Vulnerability Reporting and Response Process are:
# Discovery. The Finder discovers what they consider to be a security vulnerability (the Potential Flaw).
# Notification. The Finder notifies the Vendor and advises it of the Potential Flaw, and the Vendor confirms that it has received the notification.
# Investigation. The Vendor investigates the Finder's report in an attempt to verify and validate the Finder's claims, and works collaboratively with the Finder as it does so.
# Resolution. If the Potential Flaw is confirmed, the Vendor identifies where the Flaw resides, then develops a remedy (in the form of a software change or a procedure) that eliminates or reduces the risk of the vulnerability.
# Release. In a coordinated fashion, the Vendor and the Finder publicly release information about the vulnerability, along with its resolution.
Now look at this in the context of the recent [securityfocus.com] MS Passport vulnerability to see how effective this process is.
As an aside, this draft is backed [oisafety.org] by MS and SCO, amongst other companies. It'll be interesting to read the amount of bashing this gets over the weekend.
The 11 Companies (Score:5, Informative)
Microsoft
@stake
BindView
SCO
Foundstone
Guardent
Internet Security Systems
Network Associates
Oracle
SGI
Symantec
-Mani
Re:The 11 Companies (Score:2)
yes (Score:2)
Re:The 11 Companies (Score:2)
Re:The 11 Companies (Score:2)
Microsoft, I guess, is trying to do both given its recent anti-virus firm acquisition.
It's interesting -- and possibly encouraging -- to note that neither Apple nor IBM is on that list.
--
One of these days I'll have to sue Darl McBride for defamation of character by association.
A plan! (Score:3, Funny)
Why would anyone follow these guidelines? (Score:5, Insightful)
Mapped to the real world, it's like some idiotic Police Chief knowing damn good and well that several pizza delivery drivers are mugged every night when they go into a four-block area... but refusing to say anything - not even warning these drivers to avoid the area for a while - until after the muggers have been convicted, sentenced, and in prison for a month.
Re:Realworld example (Score:5, Insightful)
Living in the D.C. Metro area, I was very upset when hearing that the D.C. Police Chief had been against revealing the make of the snipers' car when they finally found it out. Once this information was released, the snipers were caught in 2 hours or so, IIRC.
I agree with the parent poster - this seems like an apt analogy, at least given that a non-negligible number of bugs, patches, fixes, and workarounds, even temporary ones, come from unexpected sources outside of the vendors or finders.
Re: Why would anyone follow these guidelines? (Score:3, Insightful)
> Why would anyone follow these guidelines?
Bank on it: once they've established the 'standard' they'll start lobbying to have it made law. Disclosing a vulnerability that has been around for a couple of years will get you labeled as a terrorist.
> It might piss off these companies, but anyone who really cares about security would realize that giving the vendors the exclusive right to disclose flaws (regardless how much time has passed or how many systems have been compromised) prevents people from
Doh.... (Score:5, Funny)
Not to worry (Score:2)
Re:Not to worry (Score:4, Informative)
Bugtraq is one of those 11 companies. (Bugtraq is part of Symantec)
Re:Not to worry (Score:2)
(Bugtraq is part of Symantec)
Well don't I look like a dick? :) Isn't it a full disclosure list? Wonder what's to become of that, then.
Re:Not to worry (Score:3, Interesting)
A challenge: create an Open alternative to BugTraq
I have registered the domain names opentraq.org [opentraq.org] and opentraq.net [opentraq.net]. I am willing to have them resolve to DNS servers belonging to a group of volunteers who wish to start and maintain an Open alternative to the BugTraq website. (GNU [gnu.org]? Mozilla [mozilla.org]? Anyone else interested?)
I will continue to renew the registration as long as someone wants to continue the project. If necessary, I may be willing to t
what you dont know... (Score:3, Insightful)
Re:what you dont know... (Score:3, Funny)
OIS Members (Score:2, Redundant)
Considering their backgrounds, it is sad that @stake [computerworld.com] and ISS [linuxworld.com] are involved in an anti-disclosure group.
Where is the user, customer or consumer??? (Score:5, Insightful)
Poor Joe ServicePack is the one affected, and he figures nowhere in this scheme of things.
Re:Where is the user, customer or consumer??? (Score:2)
If communication breaks down (you say "You are not taking it seriously", they say "Yes, we are", you say "No, you're not", etc), the process stops, you disclose the vulnerability, and that's it.
Only if they are seen to be making real progress in addressing the vulnerability will you stay with the process
Title (Score:4, Funny)
Are they still not getting it? (Score:5, Interesting)
If they want to put me on the payroll, I'll QA and report their software using this convenient bug ticket they've provided;)
Yes, and it is good for OSS (Score:2)
I see nothing in the draft that forbids this.
Indeed, this is going to propel OSS onto the desktops of the masses.
Once the vendors get to control what information gets disseminated about their bugs the cost-effective way of dealing with these bugs will not be to fix them, but rather to just sweep them under the rug.
Yeah, some minor security holes might be patched, but something major, like the MIME/filetype exploit in Windows
Re:Yes, and it is good for OSS (Score:2)
The sad thing is, there will be an army of Windows users who observe that OSS software reports lots of bugs every day, while Microsoft reports none. Obviously, this is evidence of the superiority of the closed source programming model. *sigh*
Ha, I say let 'em! (Score:3, Interesting)
While this is totally bogus, perhaps we just should let them have their little secrecy club. You know, see how well they do compared to other vendors who are more open about their bugs.
But really, if there's some exploit in some app/service, everybody should know immediately - the only solution to an exploit is not always a patch from the vendor. You could shut down the service, if applicable, tweak some parameters to dodge the exploit, filter some packets, etc. Or you could even fix the exploit yourself in some instances (in the code, that is).
But hey, who can blame them for wanting to not disclose this info, they're the only ones who can/are allowed to fix the bugs. :-)
The Forgotten Column (Score:5, Funny)
Is this a joke? (Score:3, Insightful)
Re:Is this a joke? (Score:2, Interesting)
You fail to understand the nature of most security incidents I see.
The people who actually discover the vulnerabilities are the security consultants, IT professionals, independent researchers, academics, etc. Then the malcontented kiddies and gas huffers take those proof-of-concept scripts and use them to "hax0r"... hence Script Kiddie...
If you don't release the exploit information and example code until after it's p
Re:Is this a joke? (Score:1)
I understand your point there... I mean, there are a LOT of "wannabe" script-kiddies around, that would just learn a tiny little bit of cut-n-paste and gcc -o attack_them bug.c to go and "hack" around!
The problem here then is _WHO_ handles the security bugs info. That is, if I know that my bug will be taken care by the vendors/developers ASAP, and we put a "well-defined" time window of X days for them to fix it before they release, then I could agree.
BUT, we know that the morons which want (LOOOVE for
Re:Is this a joke? (Score:3, Insightful)
Don't believe the myths, yourself. Sit on Incidents for awhile. When people at the frontlines are saying, "I'm getting a lot of activity on port x, seems to be trying this against Apache/IIS/Wu, anyone else seeing it?" which eventually leads to "I was compromised sometime last night/week, found this in my logs, anyone recognize it?" which eventually leads to a security researcher rediscovering the vulnerability in the affecte
uggh... (Score:4, Funny)
I'm sure he'll have some interesting things to say.
God was this needed... (Score:2, Funny)
Which software can we trust now? (Score:2, Interesting)
And even if I could analyze the source code, I don't have the time to do so, especially with big projects like sendmail, or PostgreSQL that I use a lot.
full disclosure (Score:5, Insightful)
Re:full disclosure (Score:1)
I agree - but this isn't a law, it's a group of companies deciding that they know best, and trying to force a 'standard' onto everyone else for no particular reason as far as I'm concerned.
Whilst it remains a 'recommended practice' there will be those who will follow it and those who won't, and all the time it will be completely unenforceable - so what is the point?
off-loading even more testing onto the public (Score:5, Insightful)
Re:off-loading even more testing onto the public (Score:1)
I simply cannot think of an OS where I haven't had major problems at one time or another. A quick list of a few below (take Windows as being in there obviously).
Let me think.... Solaris 8 x86, wouldn't even boot. RedHat 7.2 screwed my Windows insta
Re:off-loading even more testing onto the public (Score:2)
Democratization of knowledge (Score:4, Insightful)
There are precedents where doing this could be plain wrong, like the last WebDAV [nwfusion.com] vulnerability in IIS, which was discovered only after being actively exploited by black hats. Hiding the facts will not stop crackers from exploiting something they know first, and will only leave victims unaware of what could have happened, or already has, to their sites.
Covering Up Unpatched Holes (Score:4, Insightful)
That's how it'll always be handled by some people that find vulnerabilities. That's the way it's been handled in the past, and the world hasn't ended. It's just occasionally uncomfortable for a big player that sat on their ass for too long. Tough toenails. They need to put the same energy into just fixing reported problems that they do into trying to put this complicated mechanism in place to cover up problems that they fail to addres
sigh... when will they ever learn..... (Score:4, Insightful)
Being the cost-minimizing/profit-maximizing organizations that they are, they will always wait forever to incur new costs (like fixing bugs) until somebody "lights a fire under them." That's why quick public disclosure is important: the bug *will* be discovered by the bad guys sooner or later, so it is better to fix it soon than to wait.
Re:sigh... when will they ever learn..... (Score:1)
Hmm, No Enforcement... (Score:2, Interesting)
Ok, so next time, choose not to participate.
Next!
H
Fuck 'em (Score:3, Interesting)
Someone finds a hole.
They'd like us to sit on our arses while they take their time fixing it.
Meanwhile, some bad ass mofo finds the same hole, and exploits it.
It's the threat of imminent disclosure that makes the vendors fix their fucked-up software - to delay is merely to invite problems.
I repeat - fuck 'em.
Secure Computing Initiative (Score:1)
What is it with these pseudo important names? (Score:2)
This happens all the time with those "think tanks", then all those bogus agencies selling certification in the security industry...
Is this a new art form or something? Hacking the stupid mainstream media who will just present it as fact if it comes from an
This sort of thing was inevitable (Score:2)
It can be interesting, and it improves one's ability to read, write, and understand code.
Doing so in a public forum can create reputation capital for one's consulting services or products. In some cases it may lead to employment.
Some folks are truly motivated by the desire to see vendors patch their software. This is sometimes a result as well.
The companies involved in the OIS have already established their repu
If I find a security hole in closed software... (Score:2)
...you can rest assured that I'm going to disclose it widely, anonymously and with great fanfare.
I have a question (Score:2)
Somebody tell them (Score:2)
This will be yet another attempt by MS and co. to stymie OSS (illegal to report bugs) and control the world.
The world will respond with code red III and Slammer II.
There's hard data about disclosure & vulnerabi (Score:5, Informative)
This was presented at the "2001" Oakland conference, held in 2000.
There were some fascinating points in the presentation that weren't spelled out in the paper.
Disclosure didn't matter for the exploits they studied. That's right, didn't matter, for good or for ill. The rate of exploitation in the wild didn't spike after public announcements. The exploitation rate didn't go down after patch release.
Events that did matter included the public release of scripted exploits, and the cycle of the academic year.
Exploitation rates finally faded away over time but the fade-out curve didn't relate to patch adoption or availability. In fact, sometimes an exploit would flare up again after fading away. The authors attribute the decay of the exploit rate to boredom on the part of attackers, and maybe to the gradual replacement of vulnerable software.
If they're right, the policy lessons that follow (you're getting my opinion here) are:
o Disclosure is harmless and should be allowed
o Patches should be encouraged. They won't stop the next Code Red but they have benefit for anyone willing and able to install them.
o Distribution of attack scripts has a high social cost.
o When a vendor ignores a problem until there's an attack script in the wild, you've got a dilemma.
Here's a brainstorm: full liability for the damages caused by exploitation of a security hole. Liability lands on the vendor if they ignore a bug report that includes a demonstration attack, lands on the script author if the attack goes into the wild before there's been a chance to fix the problem.
Cool! no more advisories! (Score:2)
I'm sure my boss feels so much safer.
All kidding aside, it's all about covering their butts, not admitting problems. But does this increase their liability? They are withholding vital information from their customers about faults in their products.
If the automotive industry did that, they would be raked over the coals.
What's next, making it illegal to discuss bugs in a public forum without their permission... errrr
It's all about vendor control of disclosures (Score:2)
followers will pay dearly + analogy (Score:2)
If you bought a stroller for your child and somebody else discovered that it had a serious design flaw that could cause it to collapse suddenly when going over bumps seriously injuring the child inside would you (a) prefer that the manufacturer was told silently so that t
Requirements of a disclosure framework (Score:2)
Frankly I don't much care about what they've got in a disclosure framework, as long as it meets at least a few requirements:
Stupid idiot bias -- NOT Anti-disclosure (Score:2)
Where the HELL did you find any anti-disclosure in there? First of all, that document details a _process_ in which the vendor and the finder (of the flaw) act together to overcome a security flaw.
It details how a finder can contact the vendor, gives firm deadlines on response time from the vendor, describes how the interaction between finder and vendor goes, assures the finder that something Is Being Done, and protects the user from disclosure of a flaw for which there is no fix.
It
Re:FRIST PSOT! (Score:1, Offtopic)