Hackers' 'Zero-Day' Exploits Stay Secret For Ten Months On Average

Sparrowvsrevolution writes "Maybe instead of zero-day vulnerabilities, we should call them -312-day vulnerabilities. That's how long it takes, on average, for software vendors to become aware of new vulnerabilities in their software after hackers begin to exploit them, according to a study presented by Symantec at an Association for Computing Machinery conference in Raleigh, NC this week. The researchers used data collected from 11 million PCs to correlate a catalogue of zero-day attacks with malware signatures taken from those machines. Using that retrospective analysis, they found 18 attacks that represented zero-day exploits between February 2008 and March of 2010, seven of which weren't previously known to have been zero-days. And most disturbingly, they found that those attacks continued more than 10 months on average – up to 2.5 years in some cases – before the security community became aware of them. 'In fact, 60% of the zero-day vulnerabilities we identify in our study were not known before, which suggests that there are many more zero-day attacks than previously thought — perhaps more than twice as many,' the researchers write."
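
The heart of the study is a retrospective comparison: for each exploit, take the earliest date its signature shows up in the field telemetry and compare it with the date the underlying vulnerability was publicly disclosed. A minimal sketch of that bookkeeping, using made-up identifiers and dates rather than Symantec's actual data:

    from datetime import date

    # Hypothetical sample records -- the study derived the real ones from ~11 million PCs' telemetry.
    first_seen = {   # earliest date the exploit was observed in the wild
        "exploit-A": date(2008, 6, 1),
        "exploit-B": date(2009, 1, 15),
    }
    disclosed = {    # date the vulnerability became publicly known
        "exploit-A": date(2009, 5, 20),
        "exploit-B": date(2009, 3, 1),
    }

    windows = [(disclosed[name] - first_seen[name]).days for name in first_seen]
    print("average zero-day window: %.0f days" % (sum(windows) / float(len(windows))))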
  • 5 in use right now (Score:5, Interesting)

    by Anonymous Coward on Wednesday October 17, 2012 @04:33AM (#41679233)

    Given a conservative estimate that a new 0-day exploit is found every 2 months, there are at least 5 unpatched exploits in the wild at any given moment.
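
    A rough way to read that estimate (the rate here is the parent's assumption, the window is the study's average, nothing more): the number of concurrently live zero-days is roughly the discovery rate multiplied by how long each one stays undetected.

        new_per_month = 1 / 2.0      # one new zero-day every two months (the parent's assumption)
        months_undetected = 10       # average window reported by the study
        print(new_per_month * months_undetected)   # -> 5.0 unpatched exploits in the wild at once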

    • by Zocalo ( 252965 ) on Wednesday October 17, 2012 @06:40AM (#41679617) Homepage
      That seems awfully conservative to me. Since there is next to no incentive for a Black Hat to reveal any 0-day they are currently exploiting - bug bounty programmes being perhaps the one exception - there is the possibility that any given exploit that is discovered might have already been found and be in the process of being exploited as an unknown 0-day by someone else. Taken to the extreme, that could mean that every published and exploitable bug has been utilised as a 0-day at some point, even when the person officially credited with discovery has used a responsible disclosure approach and a vendor patch has been available before the details are made public.

      I'd be very surprised if the number of 0-day exploits in active use, whether by criminals, scammers or government agencies, around the entire world at any given time were in single figures, and even a figure peaking into the three-figure range doesn't seem too unrealistic, either.
      • Except that people like me find these things running around on our networks and submit reports to McAfee, Symantec, and such so that their automated systems will detect them. They may be zero-day and there may not be any signatures yet, but any security person worth the title absolutely will notice anomalous behavior on their networks and computers.

    • by mcgrew ( 92797 ) *

      Which seemingly begs the question, why are we running AV? AV is clearly useless. It seems the UAC is far better at keeping your equipment free of viruses.

      This article confirms something I've suspected for a long time.

      • by tragedy ( 27079 )

        The UAC can't even keep demons from overrunning their Mars base, how are they going to keep your equipment free of viruses?

        • by mcgrew ( 92797 ) *

          Now you did it... I'm going to have to dig out those old DOOM floppies, find a drive somewhere in that pile of junk parts in the basement, and play DOOM.

  • by Anonymous Coward

    Somebody should do a comparison.

    • by LinuxIsGarbage ( 1658307 ) on Wednesday October 17, 2012 @04:47AM (#41679275)

      In that case there's no excuse because you can fix it yourself.

      • Yeah, because only programmers use free software right?
        • Re: (Score:1, Insightful)

          Well that's the response I get with bug reports.

          • by Errtu76 ( 776778 ) on Wednesday October 17, 2012 @05:42AM (#41679461) Journal

            Perhaps it's your nick that triggers those responses.

            • Re: (Score:2, Insightful)

              by Anonymous Coward

              And why does his nickname matter when it comes to a bug report? A bug is a bug, no matter if Hitler himself reports it. This is just another example of software authors finding ways to avoid providing support; you do realise it's that exact attitude that resulted in "BOFH syndrome" and "UNIX beardo" stereotypes, yes?

              • Re: (Score:2, Insightful)

                by Anonymous Coward
                Unfortunately his nickname identifies him as a troll. Not a lot of people then care if he's a troll with a valid bug report.
              • by mcgrew ( 92797 ) *

                And why does his nickname matter when it comes to a bug report?

                How seriously would Microsoft take a bug report from WindowsIsGarbage? LinuxIsGarbage is obviously a troll account. If his name was Hitler, well, maybe his name really is William R. Hitler. But LinuxIsGarbage is an obvious setup, and nobody in their right mind would even glance at a bug report from him.

            • Because clearly I must use the same nick everywhere. I'm exaggerating as I don't even submit bug reports, but I have seen the sentiment "Fix it yourself" expressed before.

              In reality I'm fairly pragmatic. For some things Windows is better (total available applications, total supported hardware, backwards/forwards compatibility), for other things Linux is better (initial support of hardware off the install disc, capability of live disc, capability to work on bare metal, cost). On the mobile side I have

          • by Tsingi ( 870990 )

            I have a free Symantec product installed on my Windows box. All it does is pop up and tell me I'm unprotected and need to send them money to get real protection.

            The cheaper way is to not browse the internet from a Windows box.

            I browse from a Linux box using free software and don't have to pay companies like Symantec to protect me.

            I could uninstall that app, since it does nothing but advertise for Symantec, but I kind of like being reminded that all you people that call Linux Garbage have to pay money

            • by mcgrew ( 92797 ) *

              I browse from a Linux box using free software and don't have to pay companies like Symantec to protect me.

              I have a Kubuntu box as well, but I don't have to pay Symantec or anyone else for AV on the Windows box, since there are quite a few free AVs that are superior to Norton and McAfee. One even comes from MS.

              However, I agree -- if they had done a better job of writing Windows, it would need no AV. Windows is the only OS there is that needs AV. Microsoft should be ashamed of itself.

              • by Tsingi ( 870990 )

                there are quite a few free AVs that are superior to Norton and McAfee. One even comes from MS.

                Recommendations?

                • -Microsoft's MSE
                  -Avira
                  -Avast
                  -AVG

                  are realtime scanners that are decent. ClamWin didn't have a realtime scanner last time I checked, and its effectiveness wasn't that great to begin with.

                  Though third-party validated effectiveness of MSE seems to vary month to month (one month it's top tier, the next it's at the bottom: http://www.av-comparatives.org/ [av-comparatives.org]), I prefer installing MSE on people's computers because it's hands-off to keep updated, whereas after a year or so Avast or AVG will bug and nag for an upgrade, and there's a higher c

    • by Lennie ( 16154 ) on Wednesday October 17, 2012 @05:14AM (#41679369)

      I'm just glad that when a software vendor releases a fix, including security fixes, it only takes a couple of days until my system gets its updates.

      Everything else just means: you need to have faith in the original programmer and the team that handles the vulnerability reports.

      Open source or not.

      I believe open source works better though; I've never seen a reported security bug sit unaddressed for months on end.

      Other than something like this: "Last year (2011) there was a period of several months when the CentOS project did not issue any security advisories or updates for CentOS 6. Many CentOS users got frustrated and worried about their system security"

      Which just means people have the choice to replace CentOS with another distribution and mostly live happily ever after.

      With a closed system you can't; there is only one vendor of Windows, right?

      • by Lennie ( 16154 )

        I forgot to mention code review. When it's open source, other people can look at the code; whether they do is a totally different question. But it is possible. With bigger projects, I believe they do.

        • by Anonymous Coward on Wednesday October 17, 2012 @06:11AM (#41679547)

          When you release something as open source, your reputation is on the line as everybody can inspect your coding. That in turn forces developers to be much more diligent.

          Commercial software, on the other hand, is often a stinking heap of nasty and un-reviewed code. Managers regard it as a waste of resources to do proper code reviews (and consequential cleanups), because "that does not contribute to the development of new features which can be sold for $$$". And because most managers are proud to be ignorant dumbasses.

      • Everything else just means: you need to have faith in the original programmer and the team that handles the vulnerability reports.

        Who, like Adobe? Who takes years to fix issues they already know about? Fat chance.
        • by Lennie ( 16154 )

          Hey, I'm just saying that is the only choice you have. Other than that, use different software/other developers. Or build it yourself, of course.

      • I believe open source works better though; I've never seen a reported security bug sit unaddressed for months on end.

        On big products with big teams (Firefox, Libre/OpenOffice, GNOME, etc.), probably. But there's a LOT of F/OSS that's a one-man show. Those are probably slower to update.

  • by FirephoxRising ( 2033058 ) on Wednesday October 17, 2012 @04:36AM (#41679245)
    Wow, those are scary numbers. I don't suppose we should be surprised; they want to make use of their exploit and/or they've seen how people are treated if they do point out vulnerabilities.
  • There should be a lecture about this in elementary school, together with an overview of the risks of social networks and places to seek help when being 'cyberbullied'. Just give those kids a basic understanding of the risks of the things they use (or will use) in everyday life, without demonizing them.
    • by DarkOx ( 621550 )

      I agree mostly, but elementary is pretty young. I am not sure you can get your point across without scaring them. Also, most social network terms of service don't really allow elementary-age kids anyway. They certainly are not in a position to be managing the security of their own machines.

      Middle school and junior high, though, would be a great time to address these topics. They often have some class like "life skills" where basic cooking, checkbook balancing, and similar personal business matters are taught.

      • by Anonymous Coward

        I am not sure you can get your point across without scaring them.

        *Child sobbing* "Mommy! They said in school that you'll die if you use Facebook!"

  • If we plot the data, we see a distribution in which some exploits are detected immediately. That's one tail of the distribution. On the other tail, there will be exploits detected so far in the future that they, effectively, will never be detected.

    The perfect crime is never detected.

  • Not news (Score:5, Insightful)

    by HarryatRock ( 1494393 ) <harry.rutherford@btinternet.com> on Wednesday October 17, 2012 @05:07AM (#41679353) Journal

    From the Wikipedia article on zero-day exploits:

    For example in 2008 Microsoft confirmed a vulnerability in Internet Explorer, which affected some versions that were released in 2001.[4] The date the vulnerability was first found by an attacker is not known; however, the vulnerability window in this case could have been up to 7 years.

    Looks like we've known about this for quite some time

    • by Anonymous Coward

      ... there have been even older, much more critical bugs in Windows. Think of the "icon image resource" exploit, which had probably existed since Windows 3.1. That would be something like 17 or more years.

      • Re:Actually, (Score:5, Insightful)

        by CastrTroy ( 595695 ) on Wednesday October 17, 2012 @08:07AM (#41680109)
        I'm still waiting for them to fix the "hide file extensions for known file types" exploit. It's the first thing I change any time I install Windows. And as far as I know, it can't be changed system-wide, only per user account. When executable files can specify their own icon and can, for instance, look like an image or a Word document, this is very dangerous behaviour. What purpose does hiding the file extension have? Other than hiding "scary technical things" from dumb users (if they don't have the information, they'll remain stupid), I don't see any reason why this should exist. And it definitely shouldn't be turned on by default if they insist on the feature even existing.
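
        For what it's worth, the setting is per-user in the registry (the HideFileExt value under Explorer's Advanced key), so it can at least be scripted for each account. A rough sketch, assuming Python on Windows and that Explorer gets restarted afterwards to pick up the change:

          # Show extensions for known file types for the current user (per-user setting).
          import winreg   # named _winreg on Python 2

          key = winreg.OpenKey(
              winreg.HKEY_CURRENT_USER,
              r"Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced",
              0,
              winreg.KEY_SET_VALUE,
          )
          winreg.SetValueEx(key, "HideFileExt", 0, winreg.REG_DWORD, 0)   # 0 = show extensions
          winreg.CloseKey(key)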
        • Re:Actually, (Score:5, Informative)

          by Anonymous Coward on Wednesday October 17, 2012 @09:50AM (#41681165)

          Even with the extension shown you are vulnerable.
          Using the Unicode character U+202E one can write from right to left and hide the real extension: for example, the executable "SexyL[U+202e]gpj.exe" will be shown as "SexyLexe.jpg" by the file manager!

          On Linux you can create such a file with:
          echo > $'SexyL\342\200\256gpj.exe'

          • Re:Actually, (Score:4, Informative)

            by MattskEE ( 925706 ) on Wednesday October 17, 2012 @11:06AM (#41682365)

            Even with the extension shown you are vulnerable.
            Using the Unicode character U+202E one can write from right to left and hide the real extension: for example, the executable "SexyL[U+202e]gpj.exe" will be shown as "SexyLexe.jpg" by the file manager!

            On Linux you can create such a file with:
            echo > $'SexyL\342\200\256gpj.exe'

            Rather than simply modding you up I decided to try this out, and it works! Which is kind of creepy.

            • by Feztaa ( 633745 )

              I also tried this; Nautilus displayed the filename as expected, however the status bar text read:

              "SexyL(etyb 1) detceles "exe.jpg

              Which was meant to look like this:

              "SexyLgpj.exe" selected (1 byte)

              Except that the whole thing got confused by the RTL marker. Also, when displayed by ls, it would only work if it happened to appear in the rightmost column, because any other filenames printed to the right of it get similarly corrupted. In my case it happened to say 'SexyLenO utnubU exe.jpg" (and the reversed 'Ubunt

          • To create such a file in Windows, I used Python. The following code worked:

            # 8238 == 0x202E (RIGHT-TO-LEFT OVERRIDE); unichr is Python 2, use chr() on Python 3
            fname = 'SexyL' + unichr(8238) + 'gpj.exe'
            f = open(fname, 'w')
            f.close()

            It created a file in my Python directory. It shows up as you describe. I was unaware that you could change the text direction in the middle of a line. This kind of thing could probably be used all over the place. If placed on a web server Internet Explorer will actually download the file "properly" with the correct unicode file
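
            If you want to hunt for files already using this trick, one crude check is to flag any filename containing a bidirectional override/embedding character. A small sketch (Python 3; the path and output format are arbitrary choices, not part of any tool mentioned above):

              # Flag filenames containing bidi control characters (U+202A..U+202E),
              # which can visually reorder the extension (e.g. "gpj.exe" rendered as "exe.jpg").
              import os
              import sys

              BIDI_CONTROLS = {'\u202a', '\u202b', '\u202c', '\u202d', '\u202e'}

              def suspicious_names(root):
                  for dirpath, _dirnames, filenames in os.walk(root):
                      for name in filenames:
                          if any(ch in BIDI_CONTROLS for ch in name):
                              yield os.path.join(dirpath, name)

              for path in suspicious_names(sys.argv[1] if len(sys.argv) > 1 else "."):
                  print(repr(path))   # repr() keeps the control character visible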

        • by mcgrew ( 92797 ) *

          I'm still waiting for them to fix the "hide file extensions for known file types" exploit. It's the first thing I change anytime I install Windows.

          That's something about Windows I've been bitching about for years, and a bright five year old could exploit a user this way.

          What purpose does hiding the file extension have?

          Windows is meant to be usable by the mentally handicapped, like some of the folks I used to work with, who would come to me with "when I click document.mine, why won't it open?" I got those co

  • If software companies were punished for security holes (or for leaking their databases), then it would become cheaper for them to hire people to fix flaws in-house. After all, it's easier to find flaws when you have access to the code in the first place. It's not normal that more exploits are found than fixed. It means that more hackers are employed than there should be.
  • by Anonymous Coward

    + Principle of Least Privilege: sandboxing, firewalls and so on. PowerPoint has no business reading C++ and CAD files, for example. See http://de.wikipedia.org/wiki/AppArmor http://de.wikipedia.org/wiki/SELinux

    + Memory-Safe Programming Languages: more than 50% of real-world exploits are due to C and C++ and, of course, the pressure to deliver "something working". Bounds checking, guaranteed pointer validity and proper casting rules would eliminate that 50% of exploits. See http://sourceforge.net/projects

    • by Gr8Apes ( 679165 )

      + Managed Security Monitoring: Monitoring a firewall for suspicious traffic requires a lot of specialist knowledge and bespoke analysis scripts to filter out innocuous traffic and leave the suspicious stuff to human experts for investigation. This specialty function is probably best done by specialized companies who do that as their core business. Of course, the firewall must be a completely separate, independent device sitting between the potential targets of an attack and the general internet. A Raspberry Pi-class computer could probably do the job for home users.

      Actually, your firewall and IDS should ideally be separate, with the IDS on a mirror port on a switch configured to receive all traffic on your LAN. That way it can monitor all traffic for unusual activity. HTTP traffic to a web server - no problem. HTTP traffic to an FTP server from an internal workstation? Red flag.
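
      As a toy illustration of that kind of rule (not a real IDS; the hosts, ports and flows below are made up): keep a note of which internal machines are expected to serve which ports, and flag anything contacted on a port outside that set.

        # Toy "unexpected service" check in the spirit of the comment above.
        EXPECTED = {
            "10.0.0.10": {80, 443},   # web server
            "10.0.0.20": {21},        # FTP server
        }

        def check_flow(dst_ip, dst_port):
            allowed = EXPECTED.get(dst_ip)
            if allowed is not None and dst_port not in allowed:
                return "red flag: %s contacted on unexpected port %d" % (dst_ip, dst_port)
            return None

        # Flows seen on the monitoring port: (destination IP, destination port)
        for dst_ip, dst_port in [("10.0.0.10", 443), ("10.0.0.20", 80)]:
            alert = check_flow(dst_ip, dst_port)
            if alert:
                print(alert)   # -> red flag: 10.0.0.20 contacted on unexpected port 80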

  • by Herve5 ( 879674 ) on Wednesday October 17, 2012 @05:54AM (#41679501)

    Brought to you by Symantec, the company that makes a living (exclusively) from selling remedies to security holes.
    So, certainly a neutral approach.

  • by concealment ( 2447304 ) on Wednesday October 17, 2012 @06:59AM (#41679717) Homepage Journal

    Most designations like "zero-day" assume that hacking is like academia and usually only one person discovers a vulnerability at a time. More likely, many people stumble across it in the course of doing other things, and trade it as a favor to other IT professionals or hackers. Those in turn trade it down the line until it gets to someone who uses it for evil.

    I bet that if you surveyed IT professionals, you would find that 90% of us have circumvented security in order to make necessary repairs or alterations at some time or another. It's a nobody's-fault type of situation; often you're waiting for a system to be upgraded or integrated, or working your way around older hardware or software. The shortest distance between two points is through the security wall.

  • by dgharmon ( 2564621 ) on Wednesday October 17, 2012 @07:13AM (#41679769) Homepage
    "One aspect of zero-day exploits use that's made them tough to track and count has been how closely targeted they are. Unlike the mass malware infections that typically infect many thousands of machines using known vulnerabilties, the majority of the exploits in Symantec's study only affected a handful of machines--All but four of the exploits infected less than 100 targets, and four were found on only one computer.

    What OS do these machines run on?
  • ...seven of which weren't previously known to have been zero-days

    Aren't all attacks and exploits zero-days, at least on the first day?

    • This use of the term "zero-day" has got to be the dumbest fucking evolution of a term ever. It originated in the warez world and meant that the protection of a piece of software was cracked on the first day of its availability, i.e. day zero. The way it's being used here is epically stupid -- "anything previously unknown to the developers" -- you mean like *every* *single* *bug* reported by a third party? I know trendy jargon makes the IT security industry sound dynamic, shadowy and thrilling, but is it rea

  • If they just wait 2 more days (per the summary) it can be a Pi Day vulnerability at 314 days.

    Then everyone would take security seriously protecting their pi. I mean even the Amish have Pie safes.

  • by Anonymous Coward on Wednesday October 17, 2012 @07:54AM (#41679999)

    And yet time and time again, we have people arguing that the responsible thing is to let the vendor sit on the bug report for months, while their customers get infected.

    This is exactly my reason for arguing full disclosure. You need to inform the customers which software to block from the net by any means possible (which is then up to the customers' IT departments) immediately, without caring about the reputation of the vendor. Hiding the bug report is only going to help if you know for sure that nobody else has found the same hole, and that would require labeling yourself the smartest person on the planet. The safe thing to do is to assume that somebody else is smarter than you, and probably already knows about the hole.

    • by Anonymous Coward

      I'm not saying that I know the answer, but it's more complicated than this. The summary says that it is typical for a zero-day exploit to affect only a small number of machines until it's publicly announced, and then there is a huge surge in the number of affected machines. So which is better: to keep quiet and have only a small number of infected machines in the wild, or to announce it and have a large number of infected machines until the AV people can distribute updated scanners (at which point the nu

  • In the US, shooting the messenger is the standard in vulnerability disclosure. As such, in the past 5 years most researchers have just given up on responsible disclosure. I mean, why bother?

    The good deed you are doing will be met with an adverse reaction from the non-technical public, the press and law enforcement. That's a risk researchers just cannot take; better to use your research for your own purposes, commercial, nefarious or otherwise, than risk spending 1-10 years in federal pound-me-in-the-ass lockup.
  • ...and computer security was published in a recent report [mcafee.com] from the European Network and Information Security Agency indicating that banks should always assume their client computers are infected.
    I started moving the PCs I "maintain" (parents etc.) away from Windows and to a separate Ubuntu partition used *only* for banking, for this very reason. The likelihood that that partition is compromised (different OS, no other internet tooling running on it) is significantly lower.
    At the same time, banks start drawing l
