Google Advocates 7-Day Deadline For Vulnerability Disclosure
Trailrunner7 writes "Two security engineers for Google say the company will now support researchers publicizing details of critical vulnerabilities under active exploitation just seven days after they've alerted a company. That new grace period leaves vendors dramatically less time to create and test a patch than the previously recommended 60-day disclosure deadline for the most serious security flaws. The goal, write Chris Evans and Drew Hintz, is to prompt vendors to more quickly seal, or at least publicly react to, critical vulnerabilities and reduce the number of attacks that proliferate because of unprotected software."
Re:And when they get bitten in the ass? (Score:5, Informative)
Why is there only one guy?
How incompetent is the management of an organization that does not have enough coverage to deal with those issues?
Re: (Score:2)
Hewlett-Packard started with only two...
Re:And when they get bitten in the ass? (Score:4, Funny)
What we call incompetent, newly minted MBA drones call efficiency optimization.
Re: (Score:2)
New and old MBA drones call this bonuses. Look, I did something! I reduced headcount of people who understand our critical systems to only one!
Re: (Score:3)
I disagree.
What would they do if the one dev died?
Then likely even 60 days would not be enough to get his replacement up to speed.
Any company that has employees it cannot lose deserves this.
Re:And when they get bitten in the ass? (Score:5, Informative)
Seems like they're recommending it only for "critical vulnerabilities under active exploitation". For vulnerabilities where attacks multiply with each day of non-disclosure, I would want quick notification.
FTA and not quite in the summary:
“Our standing recommendation is that companies should fix critical vulnerabilities within 60 days — or, if a fix is not possible, they should notify the public about the risk and offer workarounds,” the two said in a blog post today. “We encourage researchers to publish their findings if reported issues will take longer to patch. Based on our experience, however, we believe that more urgent action — within seven days — is appropriate for critical vulnerabilities under active exploitation. The reason for this special designation is that each day an actively exploited vulnerability remains undisclosed to the public and unpatched, more computers will be compromised.”
Re:And when they get bitten in the ass? (Score:5, Interesting)
Seems like they're recommending it only for "critical vulnerabilities under active exploitation".
Honestly, I'm a bit surprised that they offer even seven days of cover for vulnerabilities with detected exploits. I can certainly see the wisdom of the "Please, don't release 'proof of concept exploit toolkit, not for use for evil' ten minutes after emailing the vendor about the problem..." appeal; but I'd be inclined to report the discovery of an active exploit immediately, as being a noteworthy event in itself.
Re: (Score:2)
Don't get me wrong, I agree that they are screwed, it's just that the 7-day window is when black-hats are already known to be using the bug. Under those circumstances, you would be screwed no matter what: the 'disclosure' has already happened among the people who are interested in using it for evil. The only value in a delay by the 'responsible' parties is that it reduces the apparent lateness of your fix.
Re: (Score:2)
Delaying disclosure in that situation does no one any favors, except evil exploiters (including governments).
Re:And when they get bitten in the ass? (Score:5, Insightful)
The big kicker is "under active exploitation". If no exploits are known in the wild, it's still necessary to light a fire under the vendor's ass (you can't assume that the flaw isn't just sitting in somebody's high-value-zero-day arsenal, or that it won't be discovered and exploited in the future); but there is a real argument in favor of trying to work with the vendor to get a proper fix in place before releasing the details and thereby more or less assuring that every dumb script kiddie can implement the attack if they want.
If something is already 'under active exploitation', though, the cat is already out of the bag, and the choice isn't really in your hands anymore. The clock has already started ticking. Whether you like it or not, every hour it goes unfixed is more room for more attacks. Keeping quiet about it harms the ability of end users to take protective action, and really only helps the vendor save face, which isn't a terribly valuable feature.
Now, I don't doubt that Google's 'webapps and silent autoupdaters' style gives them a certain self-interested enthusiasm (compared to vendors who cater to much more sedate patch cycles) for fast disclosure; but, again, 'under active exploitation' is the phrase that makes their position (however self-interested) merely realistic. If you know that team black hat already knows about it, you don't really get to choose when it is disclosed, since that has already happened. You only get to choose how slow you make the vendor look.
Re: (Score:3)
The big kicker is "under active exploitation". If no exploits are known in the wild, it's still necessary to light a fire under the vendor's ass (you can't assume that the flaw isn't just sitting in somebody's high-value-zero-day arsenal, or that it won't be discovered and exploited in the future); but there is a real argument in favor of trying to work with the vendor to get a proper fix in place before releasing the details and thereby more or less assuring that every dumb script kiddie can implement the attack if they want.
And yet Microsoft's policy is that unless it is "under active exploitation" they won't necessarily fix it. They get lots of notices about potential exploits, but don't fix them, even likely high-value targets, until someone exploits them - which, by then, is really too late.
Re: (Score:2)
Not sure how coding works at something the scale of Google, but programmers are people: they go on vacation, attend funerals, get fired, get hired and are freshly acquainted with their jobs too.
Will Google be as supportive of this policy after the first time some major bug hits one of their more minor products and the guy who knows all about it is gone wherever that week?
Huh?
Sounds like a huge risk (Score:5, Insightful)
What if a bug can't be fixed and systems patched in 7 days' time? Are they going to cut corners on something like testing?
Going from bug report, to designing and coding a fix, to testing, to rolling it out to the infrastructure in 5 working days seems like an impossible benchmark to sustain, even with the super brainiacs working at Google.
Re: (Score:2)
Testing? Isn't that what the customers are for? :-)
Re: (Score:1)
Exactly this. While the model may work for Google, who seem to perpetually beta everything and whose users aren't generally the ones paying the bill, for those who ship software, enterprise or otherwise, 7 days just isn't enough. While certain vendors (hello, Microsoft!) are abusive in terms of reasonable fix time, 7 days is far too short.
Re:Sounds like a huge risk (Score:5, Informative)
We're talking about actively exploited critical vulnerabilities.
Fix the hole now! You can make it pretty later.
Re: (Score:2)
if(exploit) {return false;} else {return true;}
Re: (Score:3)
That's how you wind up with 5 more holes, no thanks.
Re: (Score:2)
In the long run it'll cost you A LOT more as they surface one by one.
Re: (Score:2)
So thorough QC is a broken development process? ...oh gawd, you don't actually work in IT, do you?
Re:Sounds like a huge risk (Score:5, Funny)
We're talking about actively exploited critical vulnerabilities. Fix the hole now! You can make it pretty later.
Yeah, but I only do bugs once a month. On Tuesdays. I can't be bothered before then. Your problems may seem big, but I choose to do things my way, at my pace. Besides, my inaction helps support a large secondary market for security appliances, IT support personnel and the like. We'd jeopardize an entire sector of the economy by undermining these people.
Re: (Score:3)
I think you have a bug that inserts random "@" symbols into your text. You have 7 days to fix this before I tell the world!
Re:Sounds like a huge risk (Score:4, Insightful)
If the programmers can't read their own damn code that they wrote and figure out why the vulnerability happened, they should be fired. They obviously don't know their own code and didn't use comments, or worse yet, they don't know what the commands they're using ACTUALLY do, and that was the cause of the problem.
Then, if it takes more than 7 days to "publish" or "push" a new version of their software live, the whole project was designed like it's 15 years ago. These days, you need urgent patches faster than that. Let the programmers who wrote the code do the testing so there's zero delay, and don't require some know-nothing 60-year-old head of the department to review all code before it goes live.
Re: (Score:3)
Are you taking into account testing time for software that may be used on thousands of different configurations? In my mind, that would account for the bulk of the time between notification of an exploit and release of a patch. Of course, this is only for critical exploits that are actively being used, so it's probably better to get out a fix that works for 60% of installs right away and then work on the patch that will work for 100% of installs.
Re:Sounds like a huge risk (Score:4, Insightful)
so it's probably better to get out a fix that works for 60% of installs right away and then work on the patch that will work for 100% of installs.
So you're willing to risk breaking 40% of your customers' installs? Are you willing to skip the regression testing that makes sure your fix breaks nothing else?
Re: (Score:2)
If you have good unit tests it won't be anywhere near 40%. Automated testing is a lot faster than manual regression testing. Assuming, of course, you are talking about fixing something like Google.com or Java, where the cost of failure is relatively low. If you are fixing pacemakers, you really need to ask yourself why you put a webserver or browser in it to be compromised in the first place.
Even automated regression and unit testing takes time. Even on mid-size projects, it could easily be a few days just to run the automated testing suite in all the supported environments to guarantee you didn't break something. For a large project, it could be weeks or more.
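For a sense of scale, here's a minimal sketch of the kind of automated regression test being described, using Python's unittest; the sanitize function and the myapp.filters module are hypothetical stand-ins for whatever code a security fix actually touched:

import unittest
from myapp.filters import sanitize  # hypothetical module under test

class SanitizeRegressionTests(unittest.TestCase):
    def test_exploit_payload_is_neutralized(self):
        # The payload from the vulnerability report must no longer get through.
        self.assertNotIn("<script>", sanitize("<script>alert(1)</script>"))

    def test_benign_input_still_works(self):
        # Regression check: the fix must not change normal behavior.
        self.assertEqual(sanitize("hello world"), "hello world")

if __name__ == "__main__":
    unittest.main()

A suite like this runs in seconds on one box; the multi-day figure above comes from repeating it across every supported OS and configuration combination.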
Re: (Score:2)
Ask Microsoft that question and you'll get a Hell Yes, since that's happened in just the last year. Remember the recent Patch Tuesday that borked lots of systems worldwide? I got caught by that one, and it was rated critical by MS (the highest rating they share). Went to reboot and got a BSOD, and yes, I was surprised because I normally didn't get the updates that early.
Re: (Score:3)
I doubt there is any company in the world you'd consider very good. Care to name a couple? Bonus points if you do the lookups of "longest open critical issue" instead of making me prove they were over 7 days.
Re: (Score:2)
However, it also goes to underscore th
Re: (Score:1)
I'm sorry but you should be able to do this in 24-48 hours tops, even with a large system, or you're just a shitty developer. (If you think "I'm a great developer! And that's impossible" then sorry ... you're a shitty developer who doesn't realize it [wikipedia.org].) Someplace like Google has the resources to fix the
Re: (Score:2)
I'm sorry but you should be able to do this in 24-48 hours tops, even with a large system, or you're just a shitty developer.
That's assuming the vulnerability is trivial to diagnose, and easy to fix. Plus, that doesn't take into account the testing time required, not just for the fix, but for the regression testing too. Remember: writing code is only about 10-20% of the time it takes to build software.
Re: (Score:2)
I've had the occasional bug that took a week to track down, purely because it was so difficult to reproduce. And let's not get started on the poor quality of your average bug report.
Re: (Score:2)
I don't. He (or she) doesn't have a clue what he's talking about, not when it comes to security.
That's a dead giveaway, even if it wasn't already obvious. Many, many security bugs repro under specific conditions that may be common (or not; it really doesn't matter) on real-world deployments, but don't closely match developer/tester machines (for example, the PO
Re: (Score:2)
Re: poor quality bug reports
A good deal of the problem there could be solved with a more structured form. You know, one that isn't just a "short description of problem" box with a submit button, but instead one that has sections for "version of software impacted", "activity performed when error occurred", and "process to reproduce bug activity", as well as some other data that the reporting form automatically pulls, like the current OS, what versions of standard runtime DLLs are installed, etc.
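As a rough sketch of that form's data (Python; the field names here are hypothetical), with the environment details collected by the form itself rather than typed in by the reporter:

from dataclasses import dataclass, field
import platform
import sys

@dataclass
class BugReport:
    software_version: str      # version of software impacted
    activity: str              # activity performed when the error occurred
    repro_steps: list[str]     # process to reproduce the bug
    # Auto-collected by the reporting form, not the user:
    os_info: str = field(default_factory=platform.platform)
    runtime_version: str = sys.version.split()[0]

report = BugReport(
    software_version="2.3.1",
    activity="saving a document to a network share",
    repro_steps=["open document", "edit", "save to network share"],
)

Even this much structure forces reporters to supply the facts a developer actually needs to reproduce the problem.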
People who lack th
Re: (Score:3)
The response isn't necessarily to fix the bug. The response is to mitigate the risk due to the vulnerability. One way is to fix the bug that's behind it. Another is to change configurations or add additional layers to remove exposure due to the bug. For instance, there was once a vulnerability in SSH caused by one particular authentication method. Since that method was rarely used and there were alternative ways of doing the same kind of authentication, the most popular immediate solution was to just disable it.
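The shape of that workaround, as a tiny illustrative sketch (Python; the method names are made up, not the actual SSH ones):

# Mitigation by configuration: when a fix isn't ready, stop offering the
# vulnerable feature so the exposed code path can't be reached at all.
SUPPORTED_AUTH_METHODS = {"publickey", "password", "challenge-response"}
DISABLED_METHODS = {"challenge-response"}  # the method with the reported flaw

def negotiable_methods() -> set[str]:
    """Methods the server will actually offer to clients."""
    return SUPPORTED_AUTH_METHODS - DISABLED_METHODS

assert "challenge-response" not in negotiable_methods()

The vulnerability is still in the code, but with the feature switched off it can't be triggered, which buys time for a proper patch.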
Re: (Score:2)
You have to assume that someone else already discovered the problem and is selling it on the exploit market.
Re: (Score:1)
This is not a deadline for issuing a fix. What TFA is talking about is the delay before you inform the public about a bug that is being actively exploited, i.e., one that the bad guys already know about. This gives end-users the option of not using the buggy software at all until a patch is available.
Re: (Score:2)
What if a bug can't be fixed and systems patched in 7 days' time? Are they going to cut corners on something like testing?
Going from bug report, to designing and coding a fix, to testing, to rolling it out to the infrastructure in 5 working days seems like an impossible benchmark to sustain, even with the super brainiacs working at Google.
There isn't a good alternative: if a bug is already being actively exploited, the clock started ticking before Google even knew about it; you just didn't know it yet. The secret is already out, at least one attack system is in the wild, etc. If nobody tells the customers, they risk getting owned and don't know to take precautionary measures above and beyond the usual. If somebody tells the customers, at least some of them might be able to mitigate the risk.
There's room for risk-acceptance bargaining in situ
Re: (Score:2)
If nobody tells the customers, they risk getting owned and don't know to take precautionary measures above and beyond the usual.
Exactly. Here's a proposal I made here last year on something called Informed Disclosure [slashdot.org]. Leaving customers in the dark when a workaround that will protect them exists - that's not 'Responsible'. And if it's critical enough, there's always the workaround of disconnecting affected systems. Whether it's 60 days or longer or shorter, customers deserve to know and many vendors wi
Re: (Score:2)
I totally agree. Seven days is long enough for a vendor to formulate a sober verbal response and run it through channels when their customers are already being rooted due to egregious failings in their software products.
At the very least the customers can increase vigilance around the disclosed vulnerability.
Sure wouldn't hurt if this policy leads to fewer egregious and embarrassing software flaws in the first place.
Re: (Score:2)
That's what I was thinking... 60 days is a bit long; it's more than enough to scope out a network, gain access, and execute the vulnerability. 7 days is a bit short; not enough time to test, validate, or run through QC. Not sure why Google's leaning toward the other extreme, but why not compromise at something like 21 days, with empathy toward more advanced development cycles?
Change controlled environments? (Score:1)
Re: (Score:2)
Every company I've worked with that has any sort of change control procedures generally has a specific policy for critical/emergency updates. Some of those policies are "apply now, ask questions later" whereas some have a specific policy of "it doesn't matter, ALL changes go the normal route and we'll take the risk." The key is having a policy that at least acknowledges the risk of delaying.
Re: (Score:2)
They're already at significant risk due to the vulnerability. The only difference is that now they have to acknowledge and mitigate that risk instead of pretending it isn't there.
Re: (Score:2)
There should be protocols in place for urgent or emergency out-of-cycle changes. It usually involves the two or three key technical people agreeing with a manager and a key business decision maker on a course of action and executing it; any paperwork is done by the manager(s) while the technical people fix the issue right then and there.
Re: (Score:2)
Hackers don't give a shit about your change control, they're not going to give you a head start because you're slow to respond to threats.
How does not telling anyone that people are actively exploiting this change that?
Active exploits (Score:2)
Actively exploited (in the wild) security flaws should have ZERO-day disclosure. And companies should be required to offer up mitigation tips for people who have software that isn't patched.
7 days? (Score:2)
Google can push out 20 versions of Chrome in 7 days.
They don't expect 7 days...they want less than 60 (Score:1)
They're not expecting to get 7 days, but they'll reach a compromise close to what they actually want, which is probably a couple of weeks, maybe 30 days.
Personally I think that 2 weeks is reasonable.
You could get into trouble if the guy who knows the intricacies of that area is on holiday/leave for those two weeks, but that's an education/complexity problem you should never place yourself in.
It all relies on having good testability so that you're confident that the changes have no side effects.
Direct (Score:2)
http://googleonlinesecurity.blogspot.com/2013/05/disclosure-timeline-for-vulnerabilities.html [blogspot.com]
Insecure throughout the year (Score:2)
The 7-day limit is probably a compromise between trying to get the vendor to fix the vulnerability that is actively being exploited and disclosing the information and thus increasing the pool of people who'd use the exploit.
For vulnerabilities wher
App approval (Score:4, Insightful)
I would say that after a week they should notify that there is a flaw, but not what the flaw is. Then maybe after 30 days release the kraken (the exploitable flaw, that is).
Let's say they discover a pacemaker flaw where a simple Android app could be cobbled together to give nearby pacemaker users fatal heart attacks. If they release that in a week then they are vile human beings.
Most companies do seem pretty slothful in fixing these things, but pushing for a company to process the flaw, analyze the flaw, find a solution, assign the workers, fix it, test it, and deploy it in under a week seems pretty extreme.
Re: (Score:2)
Let's say they discover a pacemaker flaw where a simple Android app could be cobbled together to give nearby pacemaker users fatal heart attacks. If they release that in a week then they are vile human beings.
Remember, they're talking about vulns that are actively being exploited, which means people are already dropping dead because of pacemaker problems.
The correct thing to do is to let people know so they can stay at home and reduce exposure to attackers until the flaw is fixed.
Re: (Score:2)
But yes, giving information so that people can run for the hills can be useful.
It all boils down to information being power. So who will best use that power should be the key question before releasing it.