
Google Advocates 7-Day Deadline For Vulnerability Disclosure

Trailrunner7 writes "Two security engineers for Google say the company will now support researchers publicizing details of critical vulnerabilities under active exploitation just seven days after they've alerted a company. That new grace period leaves vendors dramatically less time to create and test a patch than the previously recommended 60-day disclosure deadline for the most serious security flaws. The goal, write Chris Evans and Drew Hintz, is to prompt vendors to more quickly seal, or at least publicly react to, critical vulnerabilities and reduce the number of attacks that proliferate because of unprotected software."
  • by anthony_greer ( 2623521 ) on Thursday May 30, 2013 @09:56AM (#43860821)

    What if a bug can't be fixed and systems patched in 7 days' time? Are they going to cut corners on something like testing?

    Going from bug report, to design and code a fix, to test, to roll it out to the infrastructure in 5 working days seems like an impossible benchmark to sustain, even with the super brainiacs working at Google.

    • Testing? Isn't that what the customers are for? :-)

      • by Anonymous Coward

        Exactly this. The model may work for Google, which seems to keep everything in continuous beta and whose users generally aren't the ones paying the bill, but for those who ship software, enterprise or otherwise, 7 days just isn't enough. Certainly some vendors (hello, Microsoft!) abuse any reasonable fix time, but 7 days is far too short.

        • by Anonymous Coward on Thursday May 30, 2013 @10:12AM (#43861065)

          We're talking about actively exploited critical vulnerabilities.
          Fix the hole now! You can make it pretty later.

          • I don't know why, but that makes me think of Darth Stewie in Blue Harvest :)
          • if(exploit) {return false;} else {return true;}

          • That's how you wind up with 5 more holes, no thanks.

            • by Anonymous Coward
              You wind up with 5 more holes, 0 of which are being actively exploited. Win.
          • by LordThyGod ( 1465887 ) on Thursday May 30, 2013 @10:59AM (#43861681)

            We're talking about actively exploited critical vulnerabilities. Fix the hole now! You can make it pretty later.

            Yeah, but I only do bugs once a month. On Tuesdays. I can't be bothered before then. Your problems may seem big, but I choose to do things my way, at my pace. Besides, my inaction helps support a large secondary market for security appliances, IT support personnel, and the like. We'd jeopardize an entire sector of the economy by undermining these people.

    • by atom1c ( 2868995 )
      Like @Maxwell demon suggested, why stop at launching full-blown products in beta? Simply release their security patches in beta form as well!
      • I think you have a bug that inserts random "@" symbols into your text. You have 7 days to fix this before I tell the world!

        • by atom1c ( 2868995 )
          That remark should have been made as a private message; a public reply qualifies as public disclosure.
    • by slashmydots ( 2189826 ) on Thursday May 30, 2013 @10:09AM (#43861009)
      I'm a software programmer, so I can honestly say that if a company takes more than 7 days to issue a fix, they aren't good. Let's say there's a team of 20 programmers working on a huge piece of software like an ASP system on a website. If the 1-2 people responsible for the module hear about the problem 4 days after it was reported, the boss seriously screwed up. That's a lack of communication in their company. A 30-minute delay for "there's cake in the breakroom" and a 7+ day delay on "someone's hacking our website" means someone epically screwed up the importance of that e-mail getting relayed to the correct people.

      If the programmers can't read their own damn code and figure out why the vulnerability happened, they should be fired. They obviously don't know their own code and didn't use comments, or worse yet, they don't know what the commands they're using ACTUALLY do, and that was the cause of the problem.

      Then if it takes more than 7 days to "publish" or "push" a new version of their software live, the whole project was designed like it's 15 years ago. These days, you need urgent patches faster than that. Let the programmers who wrote the code do the testing so there's zero delay, and don't require some know-nothing 60-year-old head of department to review all code before it goes live.
      • Are you taking into account testing time for software that may be used on thousands of different configurations? In my mind, that would account for the bulk of the time between notification of an exploit and release of a patch. Of course, this is only for critical exploits that are actively being used, so it's probably better to get out a fix that works for 60% of installs right away and then work on the patch that will work for 100% of installs.

        • by HockeyPuck ( 141947 ) on Thursday May 30, 2013 @10:24AM (#43861235)

          so it's probably better to get out a fix that works for 60% of installs right away and then work on the patch that will work for 100% of installs.

          So you're willing to risk breaking 40% of your customers' installs? Are you willing to skip the regression testing that makes sure your fix breaks nothing else?

          • Ask Microsoft that question and you'll get a Hell Yes, since that's happened in just the last year. Remember the recent Patch Tuesday that borked lots of systems worldwide? I got caught by that one, and it was rated critical by MS (the highest rating they share). Went to reboot and got a BSOD, and yes, I was surprised because I normally didn't get the updates that early.

        • That barely applies in most real-world examples. Oops, special characters are allowed in an input field for social security numbers and only filtered out after the match checking, so someone can falsely submit a duplicate SSN by adding a pound sign to the end and get verified for multiple accounts that all validated as real SSNs. Simple! Change the order of your code to check the literal text value in the field before filtering, or just run the filter sooner. That could not possibly break anyone's system just bec
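          A minimal sketch of the reordering described above, assuming a hypothetical SSN field handler and duplicate-check set (the names here are illustrative, not from any poster's actual code): validate the literal submitted text before any character filtering, so a value like "123-45-6789#" is rejected outright instead of slipping past the duplicate check and being cleaned up later.

          import re

          SSN_PATTERN = re.compile(r"\d{3}-\d{2}-\d{4}")

          def submit_ssn(raw_value, existing_ssns):
              # Check the raw field value first: anything that isn't exactly
              # ddd-dd-dddd (e.g. a trailing "#") is rejected before any filtering.
              if not SSN_PATTERN.fullmatch(raw_value):
                  return False
              # The duplicate check runs on the already-validated literal text,
              # so "123-45-6789" and "123-45-6789#" can't both get through.
              if raw_value in existing_ssns:
                  return False
              existing_ssns.add(raw_value)
              return True

          The buggy ordering the comment describes would strip the "#" only after the duplicate check, so both variants pass and later collapse to the same stored SSN.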
      • I'm a software programmer so I can honestly say if a company takes more than 7 days to issue a fix, they aren't good.

        I doubt there is any company in the world you consider very good. Care to give me a couple? Bonus points if you look up their "longest open critical issue" yourself instead of making me prove they were over 7 days.

        A 30 minute delay for "there's cake in the breakroom" and 7+ day delay on "someone's hacking our website" means someone epically screwed up the importance of that e-mail getting relay

      • While I generally agree, some projects (like Qt for instance) take a week or more to test something before making a release to production to verify that what was intended to be changed actually was changed, and that it didn't break anything else. This is especially true of platform API projects like Qt, Gtk, etc. where many people rely on the stability of the APIs in those projects, and people using the project then need to have their own testing time on top of that.

        However, it also goes to underscore th
    • by Anonymous Coward

      Going from bug report, to design and code a fix, to test, to roll it out to the infrastructure in 5 working days seems like an impossible benchmark to sustain, even with the super brainiacs working at Google

      I'm sorry but you should be able to do this in 24-48 hours tops, even with a large system, or you're just a shitty developer. (If you think "I'm a great developer! And that's impossible" then sorry ... you're a shitty developer who doesn't realize it [wikipedia.org].) Someplace like Google has the resources to fix the

      • I'm sorry but you should be able to do this in 24-48 hours tops, even with a large system, or you're just a shitty developer.

        That's assuming the vulnerability is trivial to diagnose and easy to fix. Plus, that doesn't take into account the testing time required, not just for the fix itself but for regression testing too. Remember: writing code is only about 10-20% of the time it takes to build software.

    • The response isn't necessarily to fix the bug. The response is to mitigate the risk due to the vulnerability. One way is to fix the bug that's behind it. Another is to change configurations or add additional layers to remove exposure due to the bug. For instance, there was once a vulnerability in SSH caused by one particular authentication method. Since that method was rarely used and there were alternative ways of doing the same kind of authentication, the most popular immediate solution was to just disable

    • by AmiMoJo ( 196126 ) *

      You have to assume that someone else already discovered the problem and is selling it on the exploit market.

    • This is not a deadline for issuing a fix. What TFA is talking about is the delay before you inform the public about a bug that is being actively exploited i.e. one that the bad guys already know about. This gives end-users the option of not using the buggy software at all until a patch is available.

    • What if a bug can't be fixed and systems patched in 7 days' time? Are they going to cut corners on something like testing?

      Going from bug report, to design and code a fix, to test, to roll it out to the infrastructure in 5 working days seems like an impossible benchmark to sustain, even with the super brainiacs working at Google

      There isn't a good alternative: if a bug is already being actively exploited, the clock started ticking before Google even knew about it; you just didn't know it yet. The secret is already out, at least one attack system is in the wild, etc. If nobody tells the customers, they risk getting owned and don't know to take precautionary measures above and beyond the usual. If somebody tells the customers, at least some of them might be able to mitigate the risk.

      There's room for risk-acceptance bargaining in situ

      • If nobody tells the customers, they risk getting owned and don't know to take precautionary measures above and beyond the usual.

        Exactly. Here's a proposal I made here last year on something called Informed Disclosure [slashdot.org]. Leaving customers in the dark when a workaround that will protect them exists - that's not 'Responsible'. And if it's critical enough, there's always the workaround of disconnecting affected systems. Whether it's 60 days or longer or shorter, customers deserve to know and many vendors wi

        • by epine ( 68316 )

          I totally agree. Seven days is long enough for a vendor to formulate a sober verbal response and run it through channels when their customers are already being rooted due to egregious failings in their software products.

          At the very least the customers can increase vigilance around the disclosed vulnerability.

          Sure wouldn't hurt if this policy leads to fewer egregious and embarrassing software flaws in the first place.

    • That's what I was thinking... 60 days is a bit long; it's more than enough to scope out a network, gain access, and exploit the vulnerability. 7 days is a bit short, not enough time to test, validate, or run through QC. Not sure why Google's leaning toward the other extreme, but why not compromise at something like 21 days, with allowance for more involved development cycles.

    • by silviuc ( 676999 )
      If they can't fix it, they should have mitigating measures in place and at least inform their customers of the problem... This usually does not happen and people get hacked.
  • What about corporate environments that are strictly change-controlled? The extra visibility may pose significant risk to systems that cannot be patched in such short order...
    • Every company I've worked with that has any sort of change control procedures generally has a specific policy for critical/emergency updates. Some of those policies are "apply now, ask questions later" whereas some have a specific policy of "it doesn't matter, ALL changes go the normal route and we'll take the risk." The key is having a policy that at least acknowledges the risk of delaying.

    • They're already at significant risk due to the vulnerability. The only difference is that now they have to acknowledge and mitigate that risk instead of pretending it isn't there.

    • There should be protocols in place for urgent or emergency out-of-cycle changes. It usually involves the two or three key technical people agreeing with a manager and a key business decision-maker on a course of action and executing it; any paperwork is done by the manager(s) while the technical people fix the issue right then and there.

    • by taviso ( 566920 ) *

      Hackers don't give a shit about your change control; they're not going to give you a head start because you're slow to respond to threats.

      How does not telling anyone that people are actively exploiting this change that?

  • Actively exploited (in-the-wild) security flaws should have ZERO-day disclosure. And companies should be required to offer up mitigation tips for people whose software isn't patched.

    • by gmuslera ( 3436 )
      The problem must be solved as soon as possible, but since it can take a bit of time to find the exact cause and test the solution, better to give a few days. Anyway, once the cause is clear, warning users (without disclosing so much detail that more people start exploiting it) so they can take measures to mitigate it should be the priority. And putting a standard time limit of a week before full disclosure keeps companies from sitting on vulnerabilities without doing anything about them for mo
  • Google can push out 20 versions of Chrome in 7 days.

  • They're not expecting to get 7 days, but they'll reach a compromise close to what they actually want, which is probably a couple of weeks, maybe 30 days.

    Personally I think that 2 weeks is reasonable.

    You could get into trouble if the guy who knows the intricacies of that area is on holiday/leave for those two weeks, but that's an education/complexity problem you should never put yourself in.

    It all relies on having good testability so that you're confident that the changes have no side effects.

  • If we ask the question "for how many days in a year is a specific browser/application vulnerable to an unpatched exploit?", then we get awful numbers. There are plenty of applications used by millions of people where that number is more than half of the year. (A rough sketch of that calculation appears after this comment.)

    The 7 day limit is probably a compromise between trying to get the vendor to fix the vulnerability that is actively being exploited and disclosing the information and thus increasing the pool of people who'd use the exploit.

    For vulnerabilities wher
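    As a rough back-of-the-envelope version of that "days exposed per year" figure, here is a small calculation; the dates below are made up purely for illustration, not real CVE data:

    from datetime import date

    # Hypothetical (disclosed, patched) date pairs for one application in one year.
    windows = [
        (date(2013, 1, 10), date(2013, 2, 20)),
        (date(2013, 4, 2), date(2013, 6, 15)),
        (date(2013, 9, 1), date(2013, 11, 30)),
    ]

    # Assumes the windows don't overlap; overlapping ones would need merging first.
    days_exposed = sum((patched - disclosed).days for disclosed, patched in windows)
    print(days_exposed)  # 205 days, i.e. more than half the year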
  • App approval (Score:4, Insightful)

    by EmperorOfCanada ( 1332175 ) on Thursday May 30, 2013 @12:07PM (#43862563)
    If one hour ago I was notified of a flaw in my app, and 59 minutes ago I fixed it, and 58 minutes ago I submitted it for approval, it could easily be a week before it gets approved.

    I would say that after a week they should announce that there is a flaw, but not what the flaw is. Then maybe after 30 days release the kraken (the exploitable flaw, that is).

    Let's say they discover a pacemaker flaw where a simple Android app could be cobbled together to give nearby people with pacemakers fatal heart attacks. If they release that in a week, then they are vile human beings.

    Most companies do seem pretty slothful in fixing these things, but pushing a company to process the flaw, analyze the flaw, find a solution, assign the workers, fix it, test it, and deploy it in under a week seems pretty extreme.
    • Let's say they discover a pacemaker flaw where a simple Android app could be cobbled together to give nearby people with pacemakers fatal heart attacks. If they release that in a week, then they are vile human beings.

      Remember, they're talking about vulns that are actively being exploited, which means people are already dropping dead because of pacemaker problems.

      The correct thing to do is to let people know so they can stay at home and reduce exposure to attackers until the flaw is fixed.

      • Very good point, but I am thinking about the human waste products that actively look for known exploits to exploit. For example, there are a whole lot of people who wait for OS updates to see what has changed so they can run out and make exploits, knowing that a huge number of people don't upgrade very quickly.

        But yes, giving people information so they can run for the hills can be useful.

        It all boils down to information being power. So who will best use that power should be the key question before releasing
