Microsoft Makes Major Shift In Disclosure Policy
Trailrunner7 writes "Microsoft is changing the way it handles vulnerability disclosures, moving to a model it calls coordinated vulnerability disclosure (CVD), in which the researcher and the vendor work together to verify a vulnerability and allow ample time for a patch. However, the new philosophy also recognizes that if attacks are already happening, it may be necessary to release details of the flaw even before a patch is ready. The new CVD strategy relies on researchers to report vulnerabilities either directly to a vendor or to a trusted third party, such as CERT/CC, which will then report it to the vendor. The finder and the vendor would then try to agree on a disclosure timeline and work from there." Here's Microsoft's announcement of the new strategy.
Paging Tavis Ormandy, Paging Tavis Ormandy! (Score:5, Insightful)
Mr. Ormandy, I think you know what to do. I found it amusing that they called the blog post "Bringing Balance to the Force" when the policy looks to be completely defined by Microsoft with little or no input from the research community.
Following Google (Score:4, Insightful)
Looks like Google's policy announcement from July 20 [slashdot.org] rattled some MS cages.
motivation (Score:5, Insightful)
What is the researcher's motivation to spend the extra time working with Microsoft? They certainly have no obligation to do anything Microsoft asks...
Personally, I prefer the Google and Mozilla method, whereby researchers are paid a bounty of a few thousand dollars for reporting vulnerabilities in the manner the vendor prefers. Microsoft would be wise to follow the leaders rather than invent its own convoluted process.
Sudden outbreak of common sense (Score:4, Insightful)
So they are formalizing common sense into a policy.
It is a lot better than the previous formal policy of bat-shit crazy.
Anything's fine, as long as they communicate (Score:2, Insightful)
Re:I don't get it? (Score:1, Insightful)
Right... so that is motivation NOT to help M$...
what is the motivation to report to them?
Microsoft has an obligation to protect their customers from security vulnerabilities by responding to them, one they abdicate constantly.
Security researchers have the obligation that ANY academics have. Tell the truth, show your work.
Re:Good luck getting Apple to agree (Score:3, Insightful)
I will clarify this for you.
Apple is an insular and paranoid company. They are built upon the myth that the Mac/iPhone/iPad/iPod platform is "safe". They are selling an image: of computing platforms that are safe and secure for the end-user. Reality does not agree with Apple.
Most responsible researchers will play Apple's game, and part of that game is Apple sending out vague, inaccurate responses about when it may (or may not) fix the vulnerabilities that have been found. I think it's helpful for people to know how Apple really works.
Re:I don't get it? (Score:3, Insightful)
Re:I don't get it? (Score:1, Insightful)
Re:Following Google (Score:1, Insightful)
How does giving a company five days to fix an exploit make anything right? If anything, this looks like an effort by MS to get researchers to agree to work with them so that the details aren't released before a patch is ready. What possible reason is there for releasing this stuff anyway? Does it make anyone safer? Unlikely. Most people don't care enough about security in the first place. All the early release of exploit details does is give lazy hackers more ammunition. Because let's face it, even if MS fixed these within 24 hours, we would still see computers get bitten, because people don't always update their machines.
Re:Apples to Oranges (Score:3, Insightful)
More like they can't. A bug may have a simple fix inside one module, but that fix still has to go through rounds of testing to make sure it doesn't break anything else. After all, even something like implementing LUA (least-privileged user accounts) showed how badly things could break (see Vista).
The problem when you're the giant is that you attract all the developers, and most developers write crap for code and do things they really shouldn't. Back in the DOS days, people hacked inside DOS's internal data structures so much that Microsoft couldn't move them in memory, or even assume their values hadn't changed. The same thing has happened with Windows. The desktop "window" actually has the title "Program Manager". The icons and other resources inside explorer.exe and the other shell DLLs can never be touched, removed, replaced, or altered, because apps actually "steal" the icons from within them. (Things broke horribly during the XP betas when the window classes were renamed; window classes here are a Win32 concept, not C++ classes.) It's also why "Documents and Settings" is a junction on Vista and Windows 7.
I think they're also only a short way from making a typed "C:\Program Files" actually redirect to %PROGRAMFILES%, because people assume the directory will always be called "Program Files" (not "Program Files (x86)", not a localized name, etc.).
It's a miracle Windows works at all.
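The hardcoded-path problem the parent describes can be sketched in a few lines. This is an illustrative example rather than anything from Windows itself; the helper name and the non-Windows fallback are my own assumptions:

```python
import os

def program_files_dir() -> str:
    """Resolve the Program Files directory from the environment.

    Hardcoding "C:\\Program Files" breaks on 64-bit Windows (where 32-bit
    apps live under "Program Files (x86)") and on localized installs.
    Asking the OS via the %PROGRAMFILES% environment variable sidesteps
    both problems.  The fallback below is only a guess for systems where
    the variable is unset, so the sketch stays runnable anywhere.
    """
    return os.environ.get("PROGRAMFILES", r"C:\Program Files")
```

The point is simply that querying the environment keeps working when the directory name changes, whereas a hardcoded string is exactly the kind of assumption that forces Microsoft into the compatibility shims described above.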
Re:Apples to Oranges (Score:3, Insightful)
Except that scale is not the fundamental problem, organizational culture is.
OSS vs CSS vulnerability reporting (Score:4, Insightful)
CSS: find a bug, see a lawyer, contact a CERT, wait several weeks for a response, sign an NDA, share the vulnerability information, wait two months, ask for status, wait four more months for an answer, realize that the vendor will do squat about the vulnerability as long as its customers don't know how threatened they are, release the info to the public to put pressure on the vendor, get threatened by the vendor's lawyers, get called a criminal by the vendor's customers, the press, and politicians, have your house searched, wait two more months, get a patch, realize that it doesn't fix the problem, rinse and repeat.
Re:I don't get it? (Score:3, Insightful)
I fear that you are a troll. Nonetheless...
first off the majority of people wouldn't be able to immediately diagnose and patch because they have no idea how to do that.
Yes, but this does not negate the fact that there are many more eyes looking for flaws. A minority of a ton of people can still be a ton of people. The fact that anybody could diagnose and patch immediately is the important part.
second because linux is open source you would be less secure because it is easier to find flaws and backdoors in a system that you can view its source code.
Yes, and not all of those who find these flaws would exploit them. Many would fix them. Also, as pointed out many times on Slashdot, security through obscurity is not security at all.
and since linux uses a general public License if they request to see your source you have to give it to them because it requires that derivative works also fall under GNU's general public license.
This is a misinformed statement. The GPL requires that publicly distributed derivative works be licensed under the GPL, but it places no such requirement on privately used derivatives. Moreover, the GPL only requires that you provide source code to those to whom you distribute the work. It's just a happy coincidence that most free (as in GPL) software also happens to be free (as in money).
the only way to truly secure yourself is to disconnect.
Truer words have never been spoken. Why is it, again, that we need a cybersecurity policy when we can just disconnect the freaking high-risk computers from the freaking internet?