
Windows Update Can Hurt Security

An anonymous reader writes "Researchers at Carnegie Mellon University have shown that given a buggy program with an unknown vulnerability, and a patch, it is possible automatically to create an exploit for unpatched systems. They demonstrate this by showing automatic patch-based exploit generation for several Windows vulnerabilities and patches can be achieved within a few minutes of when a patch is first released. From the article: 'One important security implication is that current patch distribution schemes which stagger patch distribution over long time periods, such as Windows Update... can detract from overall security, and should be redesigned.' The full paper is available as PDF, and will appear at the IEEE Security and Privacy Symposium in May."
  • Quiz (Score:5, Funny)

    by Lord Grey ( 463613 ) * on Friday April 18, 2008 @11:30AM (#23118964)
    Fill in the blank:

    Windows _____________ Can Hurt Security
    1. 1) "Applications"
    2. 2) "Network Connectivity"
    3. 3) "Update"
    4. 4) "Users"
    5. 5) ""
    • Re:Quiz (Score:5, Funny)

      by 4D6963 ( 933028 ) on Friday April 18, 2008 @11:36AM (#23119052)

      Fill in the blank:

      Windows _____________ Can Hurt Security

      1. 1) "Applications"
      2. 2) "Network Connectivity"
      3. 3) "Update"
      4. 4) "Users"
      5. 5) ""

      1. 6) "Profit"?
    • Re:Quiz (Score:4, Insightful)

      by Sneftel ( 15416 ) on Friday April 18, 2008 @03:52PM (#23122552)
      I think you mean "__________ ___________ Can Hurt Security". There's nothing Windows-specific about this approach. It would work just as well with apt-get.
  • Doesn't matter (Score:5, Insightful)

    by Z00L00K ( 682162 ) on Friday April 18, 2008 @11:36AM (#23119050) Homepage Journal
    You can never distribute patches synchronously to all the PCs in the world. And you can't hide what the patch fixes.

    You are damned either way. The only way to avoid complete damnation from security vulnerabilities is to run a large number of different operating systems, but then you are damned to live a life in complete confusion about system maintenance instead.

    The onion principle is a general security term that was defined long ago, but the fact that we are all online in some way or another, all the time, means that the onion is rotten.

    • Re:Doesn't matter (Score:5, Insightful)

      by Loether ( 769074 ) on Friday April 18, 2008 @11:41AM (#23119120) Homepage
      I admit I didn't RTFA. However, if you used BitTorrent or a similar system, everyone downloading at the same moment would work better and faster. Everyone would have the patches at very close to the same time. At the very least, that would decrease the amount of time a potential attacker has to attempt this.
      • Re: (Score:2, Insightful)

        by Nos. ( 179609 )
        It's not that simple. My parents turn their computer on maybe twice a week. Others don't have constant net connections.
        • by Sancho ( 17056 ) *
          But as pointed out, this is not a flaw specific to Microsoft. The only way that it's reasonable to target a specific vendor in this case is if they don't dump the patches on everyone simultaneously. Users applying patches in a timely fashion isn't the issue.

          Well, either that, or you're advocating not issuing patches at all. That sounds like a pretty bad idea.
        • Perhaps the OS could prevent 'full' access to the network whilst critical patches are outstanding.
      • Re:Doesn't matter (Score:4, Insightful)

        by legirons ( 809082 ) on Friday April 18, 2008 @12:28PM (#23119902)
        Or distribute encrypted patches over the course of a day, then when you publish the key everyone can update
        • Re: (Score:2, Informative)

          by blacklint ( 985235 )
          Theoretically that would work, but my oh my that would be complicated.

          One real-world example of essentially the same thing: FIRST Robotics [usfirst.org] wants to make sure that everyone has access to the game manual at the same time at the start of the build season, without creating a massive load on their servers, and to make it available for those who don't have internet access where they watch the kickoff. They begin distributing an encrypted version of the manual a week in advance, then release the decryption key.
        • by Ungrounded Lightning ( 62228 ) on Friday April 18, 2008 @01:28PM (#23120738) Journal
          Or distribute encrypted patches over the course of a day, then when you publish the key everyone can update

          Which shifts the problem from distributing the update to distributing the key.

          Of course this does have another advantage: Distributing the encrypted update also distributes notification that there WILL be a key, and can tell the users when. Then it becomes a race to get the key and apply the patch before the bad guys can get the key, generate, and deploy an exploit.

          And the downside: The bad guys also know the patch is coming, and when. So they can use their existing botnet(s) to grab a key as soon as possible, then DDOS the key distribution mechanism while they generate and deploy the exploit. This makes things WORSE: A much larger fraction of the machines are vulnerable when the exploit deploys.

          Still worse: If the bad guys crack the encryption, or manage to break in and grab the key early, they get to automatically generate and deploy an exploit while NOBODY has the fix. Oops!

          Ditto even if they don't crack the patch - but the patch exposes that a vulnerability exists, and perhaps which module has it, and they find and exploit the vulnerability before the key deploys.

          = = = =

          In a battle between weapons and armor, weapons eventually win.
      • Huh? Distributing a fix through a network of computers not under your control to increase security?

        Sounds insane at first glance, but it could make more sense with a bit of tweaking. There is a very large number of machines wanting the fix. Now, one may safely assume that the majority of machines get a "good" fix, and only a few machines try to seed a backdoor. If you find a way to connect to your peers and ask them for some footprint of their patch (MD5, CRC, whatever), you can validate whether the fix you ge
        • by SEMW ( 967629 ) on Friday April 18, 2008 @04:17PM (#23122848)

          If you find a way to connect to your peers and ask them for some footprint of their patch (MD5, CRC, whatever), you can validate whether the fix you get is good or bad.
          MD5 has been cracked. That is to say, there are known, practical methods of creating two different files that share the same MD5 hash.
          And CRC was never designed to be in the least secure against that sort of thing in the first place. It's a good error checker, but it's not secure.

          Yes, there are newer hashes with no currently known vulnerability. But you can't be confident that any of them will still be vulnerability-free in half a decade's time. And if Microsoft were to build what you're suggesting into Windows, a vulnerability being discovered in whatever hash they used would be a death knell. How could Microsoft possibly fix it? Distribute a patch to change the hash -- over the compromised patch distribution network?
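
          To make the hashing point concrete, here is a minimal Python sketch of checking a downloaded patch against a vendor-published SHA-256 digest rather than a peer-reported MD5. The file name and digest are hypothetical placeholders, this is not a description of anything Windows Update actually does, and the hard part raised above -- authenticating the digest itself -- is deliberately left out.

```python
import hashlib
import hmac

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical values: in a real scheme the digest would have to come from
# the vendor over an authenticated channel, not from untrusted peers.
VENDOR_PUBLISHED_DIGEST = "0" * 64   # placeholder hex digest
PATCH_PATH = "downloaded_patch.bin"  # placeholder file name

actual = sha256_of(PATCH_PATH)
# Constant-time comparison; on mismatch, refuse to install the patch.
if hmac.compare_digest(actual, VENDOR_PUBLISHED_DIGEST):
    print("digest matches the vendor's value; patch accepted")
else:
    print("digest mismatch; patch rejected")
```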
      • by vux984 ( 928602 )
        I admit I didn't rtfa. however if you use bittorrent or a similar system everyone downloading at the same moment would work better and faster. Everyone would have the patches very close to the same time. At the very least that would decrease the amount of time a potential attacker has to attempt this.

        Meanwhile asynchronously installing patches throughout your enterprise willy nilly the moment they show up will eventually bring your mission critical systems down.

        Increasing security at the expense of stabilit
    • Re:Doesn't matter (Score:5, Insightful)

      by Anonymous Coward on Friday April 18, 2008 @11:42AM (#23119146)

      You can never distribute patches synchronously to all the PCs in the world.
      True enough.

      And you can't hide what the patch fixes.
      Wrong. You can encrypt the patch.

      Steam has no problem distributing games to players so that they can all unlock them on release day. All you have to do is preload the patch with staggered downloads but not send out the key until the same time. Then all machines can decrypt, patch, and install at roughly the same moment, greatly cutting down the window during which an exploit can be derived from the patch while machines are still vulnerable.

      Not fool-proof, of course, but it seems like something Microsoft should seriously consider doing.
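
      As a rough illustration of that preload-then-unlock idea, here is a minimal sketch using AES-GCM from the Python `cryptography` package. The file names are hypothetical, and this is not how Steam or Windows Update actually package updates; it only shows that the bulk ciphertext can be staged early while the unlock key stays tiny.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# --- Vendor side, at patch build time ---
key = AESGCM.generate_key(bit_length=256)       # withheld until "unlock" time
nonce = os.urandom(12)                          # shipped alongside the ciphertext
patch = open("patch.bin", "rb").read()          # hypothetical plaintext patch
blob = nonce + AESGCM(key).encrypt(nonce, patch, None)
open("patch.enc", "wb").write(blob)             # preloaded to clients over days

# --- Client side, once the key is finally published ---
blob = open("patch.enc", "rb").read()
nonce, ciphertext = blob[:12], blob[12:]
plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)  # everyone unlocks at once
open("patch.dec", "wb").write(plaintext)
```

      The key itself is only 32 bytes, which is what makes the later suggestion of pushing it out in a single packet at release time at least plausible.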
      • Re: (Score:3, Insightful)

        Which does nothing to help out those who either can't (insert system admin worries here) or don't patch their machines.

        The current system works fine for those people who autopatch. It takes only a very short time to get the latest patch, shorter than it takes to get the bug, find a good page to work it onto, build up enough trust to get people there, and then deploy it. All this really affects is those users who don't patch their machines.
    • by Sancho ( 17056 ) *
      At one time, patches were delivered to corporate customers on a different time frame than standard Windows Update users, weren't they? Also, beta patches would suffer from the same flaw, as those are also staggered.

      Nonetheless, outside of these two cases, it seems like Microsoft is being targeted because, well, they're Microsoft. They have a huge market share and a really good update management system, so they're a big target for something like this. That coupled with their history of vulnerabilities, and it's
      • Nonetheless, outside of these two cases, it seems like Microsoft is being targeted because, well, they're Microsoft. They have a huge market share and a really good update management system, so they're a big target for something like this. That coupled with their history of vulnerabilities, and it's easy to understand why they were picked over, say, Apple.

        Nope. If that were correct, then Apple would see 5% (or so) of the "virus" development out there.

        There are millions of Macs out there. If cracking them w

        • by Sancho ( 17056 ) *
          Did you misunderstand me?

          It's easy to understand why the researchers picked Microsoft's Windows Update over Apple's Software Update. We're not talking about exploits, we're talking about the paper.
          • And if what you say now is correct, then there's no reason why the research team could not have included Mac updates and Ubuntu updates.

            I do not see them picking Windows because it is Windows.
        • The "market" for malware consists of customers who want to buy computer cycles for shady purposes. The Mac users certainly aren't clamoring for malware, and the paying customers in this "industry" probably do not really care whether their spam is being delivered by Macs or PCs, even if the Mac ads are very cute. So, I imagine there's really very little money in Mac malware.
        • by RiotingPacifist ( 1228016 ) on Friday April 18, 2008 @03:11PM (#23121980)

          Nope. If that were correct, then Apple would see 5% (or so) of the "virus" development out there.
          You have to put a lot of work into making an exploit, so do you choose to put that work into something that gives you 90% returns or 5% returns? It's not as if there were 100 hackers who all decide to pick on 100 machines at random; no, they all try to infect the most machines possible (you would only need to infect 6% of Windows machines to have the same effect as writing an exploit so good it infects every Mac), and that means all 100 hackers will go for Windows!

          While Apple may be more secure, until you get 50% market share you're not going to get 50% of the effort put into attacking you.

    • Re: (Score:3, Informative)

      by piojo ( 995934 )

      And you can't hide what the patch fixes.

      Actually, I disagree. What if Microsoft obfuscated or encrypted executables the same way that (I've heard) Apple does? Then, any vulnerable executable could be fixed and re-encrypted, and the diff between the two versions would just look like garbage. To get the real data, one would need to break the (obfuscation-based) encryption, but that would take a few weeks (plenty of time for everyone that cares to patch themselves). This depends on Microsoft changing the encryption scheme frequently, like Apple doe

      • Re: (Score:3, Insightful)

        by OzPeter ( 195038 )
        How about imaging a fresh system, applying the patches, and comparing the result with the fresh system? No need to break encryption at all.

        The only way you can stop this is if all system data was encrypted and the user was not trusted with the keys to decrypt.

        Now where have I heard that before??? Hmmm .. TPM anyone?
        • Re: (Score:3, Insightful)

          by Aram Fingal ( 576822 )
          If you encrypt with a new salt value each time an update is performed, that makes the process much more difficult to work around.
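
          A quick sketch of the point being made about re-encryption with a fresh salt: encrypting the same executable twice with different nonces produces ciphertexts that share essentially no structure, so a byte-level diff of the shipped files tells an attacker nothing. Purely illustrative (AES-GCM via the Python `cryptography` package); no claim about how Apple or Microsoft actually package binaries.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
binary = b"identical executable bytes" * 1000      # same plaintext both releases

# Same key, same plaintext, but a fresh random nonce ("salt") per release:
ct1 = AESGCM(key).encrypt(os.urandom(12), binary, None)
ct2 = AESGCM(key).encrypt(os.urandom(12), binary, None)

# Almost no bytes line up, so diffing the two shipped (encrypted) files
# reveals nothing about whether -- or where -- the underlying code changed.
matching = sum(a == b for a, b in zip(ct1, ct2))
print(f"matching bytes: {matching} of {len(ct1)}")
```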
      • But if Microsoft did that, how would we find all their undocumented API calls?

        Even if the executables and libraries are shipped encrypted, they have to be decrypted to run; couldn't you tell what the patch changed by comparing an unpatched versus a patched process in memory? It would obviously be more difficult, because the code the patch affected would actually have to be loaded and exercised.
    • Re: (Score:2, Interesting)

      by Ed Avis ( 5917 )
      You can certainly distribute the patch synchronously to 99% of the PCs (the other 1% being those not connected to the Internet or with broken auto-update). Distribute an encrypted patch, and then once all clients have downloaded it reveal the key, which is short and can be sent in a single network packet. Clients could even distribute the key among themselves peer-to-peer. The bigger problem is the timelag between the patch being revealed to the world and the machine starting to run the upgraded version
      • That's a really smart idea. Someone mod this guy up.
      • Re:Doesn't matter (Score:5, Insightful)

        by Ahnteis ( 746045 ) on Friday April 18, 2008 @12:55PM (#23120306)
        99% of PCs are NOT:
        1) Turned on
        AND
        2) Connected to the internet
        at ANY one time. It doesn't matter if it's 1 packet or 150 packets if the computer is off or not currently connected.
        • This is an interesting problem, although perhaps ultimately not worth solving. Suppose we release an encrypted patch, and then the key a week later. If you can download the patch when your machine is turned on at any point during the week, then if your machine is not on when the key is released, your vulnerability window is from the time you next turn on your computer to the time you get the key, which should be quite short. The likelihood of a botnet finding your machine in that window is small (and, fo
          • Re: (Score:3, Insightful)

            by Aliencow ( 653119 )
            Well, this is called Network Access Control, or NAP (Network Access Protection) in Windows Server 2008.

            The day my ISP starts controlling whether my machine is "up to date" enough to use it is the day I get a new ISP.

            Plus, it would be over-estimating end-users to think they'd get some fancy router because it lets them wait a bit longer before using their computers....
      • However, would such a scheme be compatible with free software? Under the GPL, would a Linux distributor be permitted to send out encrypted binary patches and only reveal the decryption key later?

        Why not? The distribution isn't complete until the key is published. I don't think a ciphertext would count as a creative work on its own. It certainly doesn't violate the spirit of the GPL.

        Of course, there might be issues with distributing only binaries (encrypted or not) without complete corresponding source code (or the requisite written offer), but that's a different question.

        If you distributed binaries in cleartext, but encrypted the corresponding source code, then you might have a problem.

      • Liability problem? (Score:2, Insightful)

        by bkaul01 ( 619795 )
        And what happens when someone who has downloaded the encrypted patch has his system compromised because you're waiting for some idiot who hasn't downloaded it before you'll release the key that unlocks it? In a worst-case scenario, you could end up facing a class action suit for not enabling the patch. I don't know if such a suit could be successful, but I'd bet someone would try it. At present, if someone has failed to update his system with the latest patch, it's not Microsoft's fault. Under this system, i
      • Patching all the systems in the world in one shot would be terrible if the patch was defective and rendered everything useless, wouldn't it? Some enterprises do not update their systems until enough time has passed that they are pretty certain that the update is not going to kill their systems. If you patch simultaneously, there wouldn't be any guinea pigs to warn of bad updates.
      • by Ungrounded Lightning ( 62228 ) on Friday April 18, 2008 @01:37PM (#23120848) Journal
        Distribute an encrypted patch, and then once all clients have downloaded it reveal the key, which is short and can be sent in a single network packet.

        Which shifts the problem from distributing the update to distributing the key.

        Of course this does have another advantage: Distributing the encrypted update also distributes notification that there WILL be a key, and can tell the users when. Then it becomes a race to get the key and apply the patch before the bad guys can get the key, generate, and deploy an exploit.

        And the downside: The bad guys also know the patch is coming, and when. So they can use their existing botnet(s) to grab a key as soon as possible, then (or simultaneously) DDOS the key distribution mechanism while they generate and deploy the exploit. This makes things WORSE: A much larger fraction of the machines are vulnerable when the exploit deploys.

        Still worse: If the bad guys crack the encryption, or manage to break in and grab the key early, they get to automatically generate and deploy an exploit while NOBODY has the fix. Oops!

        Ditto even if they don't crack the patch - but the patch exposes that a vulnerability exists, and perhaps which module has it, and they find and exploit the vulnerability before the key deploys.

        = = = =

        In a battle between weapons and armor, weapons eventually win.
        • by Ed Avis ( 5917 )
          The idea is that because the key is so short (if you used AES encryption, for example, it would only need to be 32 bytes long) it could be distributed very quickly and would be hard for a DDOS to stop.

          The bad guys can't crack a symmetric cipher like AES (or if they can, we are all in much worse trouble than we thought). Similarly they can't break in to servers belonging to Microsoft to grab the key early (or if they can, Microsoft and its users are all in much worse trouble than we thought).
      • Distribute an encrypted patch, and then once all clients have downloaded it reveal the key, which is short and can be sent in a single network packet.

        How would one determine that "all" clients had finished downloading the encrypted patch?

        Couldn't I prevent the patch from ever being applied anywhere in the world by spoofing a client which keeps reporting "haven't finished downloading it yet..." back to the central authority forever?
    • You can never distribute patches synchronously to all the PC:s in the world.

      Maybe you could.

      Consider that a patch is first distributed as an encrypted file. The decryption key is kept secret until everyone has a chance to download the patch. At a pre-determined time, the decryption key is transmitted to everyone (the key is quite small so everyone getting it at nearly the same time shouldn't overwhelm the distribution server; a simplistic multicast or tree-like distribution system (like NTP) could further alleviate this problem). So then every computer patches itself at more-or

    • How about distributing them encrypted over the course of days or weeks, and then releasing a decryption key (pretty minuscule by comparison) once distribution volumes are sufficiently high?
    • Not really. You can distribute fixes asynchronously while still retaining security. But it would require the user to first of all determine when, how and what is to be fixed, and of course the ability to check the fix.

      Easy with OSS, I have no idea how MS is supposed to do it, though.
    • A large number of different operating systems? You do understand that what this means is that no matter what new vulnerabilities are discovered, something of yours can get hit. This is bad.

      The proper solution is to minimise attack surface. Run minimal services that expose minimal resources.

      Furthermore, onion analogies only work in cartoons. The problem with "layered security" is that it implies that more layers is always better. Honeypots, complex email scanners, and IDS can be helpful in the right sit
  • Just roll the entire i386 directory into every patch.
    • by Sancho ( 17056 ) *
      Because it's not as if it's trivial to do a diff to find out which files changed.

      It'd take a lot longer, and be only slightly harder, if all of the binaries were encrypted. They've still got to run, though, so the images have to be available in memory.
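
      For what it's worth, the "image a fresh system, patch it, compare" step really is only a few lines of scripting. A minimal sketch, assuming two hypothetical directory trees for an unpatched and a patched install:

```python
import hashlib
from pathlib import Path

def tree_digests(root: str) -> dict:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    base = Path(root)
    return {
        str(p.relative_to(base)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in base.rglob("*") if p.is_file()
    }

# Hypothetical locations of an unpatched and a patched install.
before = tree_digests("image_unpatched/system32")
after = tree_digests("image_patched/system32")

changed = sorted(f for f in before if f in after and before[f] != after[f])
added = sorted(f for f in after if f not in before)
print("modified by the patch:", changed)
print("added by the patch:", added)
```

      Everything after that -- working out what changed inside those binaries -- is where the paper's real work starts.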
  • by pembo13 ( 770295 ) on Friday April 18, 2008 @11:37AM (#23119076) Homepage
    There is no good solution to this problem -- that fixing something makes it easier to find old problems. At some point, users need to be responsible enough to apply updates.
    • The vendor CANNOT depend upon the users/admins patching their systems all the time.

      The vendor MUST ship with the minimum number of services running BY DEFAULT and with the minimum rights for those services.

      My problem with Microsoft on that is that they did NOT minimize the number of services. They put a software firewall in front of them.

      Meanwhile, I can put a vanilla Ubuntu workstation on the Web without a firewall.
    • They could encrypt their OS (and patches) with Windows Media DRM. Then it would be illegal to decrypt and backwards-engineer the patches. The RIAA would enforce.
      • But remember, in Soviet Russia, law enforces YOU!

        In other words, try to enforce something in a country that doesn't care about your problems, since it has its own.
    • by evanbd ( 210358 )
      If I have it set to auto-update as soon as the update is available, it's hard to argue I'm being irresponsible. If it takes longer to get the update to everyone than it takes the attacker to create a new exploit, that's a problem -- and not one that users can solve.
      • The problem with instant auto-update, is that patches that wreck your system get applied instantly.

        And then what about updates that need a reboot? Should that be automatic and instant?

        This is a very hard problem, with no easy answer aside from "build it with security from the ground up"
        • by evanbd ( 210358 )

          Most users turn on auto-update; instant doesn't add any downsides. If you're worried about that, then you need to do your job as a sysadmin, and this wouldn't change anything there either (since you'd have auto-update turned off in either case). Things requiring a reboot would be handled however they are currently -- just with less latency between the patch being available and everyone having it.

          You're right, of course, that there is no easy answer.

    • by RonnyJ ( 651856 )
      The silly thing is, it's even easier to research how vulnerabilities would work on unpatched systems for open-source software - you don't need to examine the patch, you have access to the changes in the source code!
    • by realthing02 ( 1084767 ) on Friday April 18, 2008 @12:35PM (#23120000)
      I think you actually missed the worst part about this summary (not the article...)

      From the summary: "Such as Windows Update... can detract from overall security, and should be redesigned."

      The ellipsis represents 14 pages of information in this sentence. And the actual PDF doesn't say it detracts from security, but rather that the scheme is insecure -- which is quite a difference. Normally I don't do this, but the quote is really stupid the way the contributor or editor put it in there. The article was interesting enough on its own (automatic patch-exploit generation) without throwing your own personal cracks in there.

      Let's grow up, people.
      • by Ahnteis ( 746045 )
        Thank you for your post. I'd mod it if I had the points, but failing that, I'll just be glad that we got above level of Digg commentary, if only for a moment.
  • by Anonymous Coward on Friday April 18, 2008 @11:38AM (#23119086)
    Profitability is key, not security. Think of sysadmins as janitors. We pay you to wipe up the mess. It's not worth our while to invest in systems that don't create a mess as long as janitors are cheap enough to come with their electronic mops and buckets.

    And you are.

    Sorry.
    • by somersault ( 912633 ) on Friday April 18, 2008 @12:05PM (#23119516) Homepage Journal
      Meh, don't come crying to me when some guy in [insert-evil-country-here] steals your identity, uses it to buy a few Porsches and set up illegal goat kid porn themed websites in your name. You keep making your messes; I'm happy to make $50,000 a year cleaning them up, as long as it doesn't happen more than two or three times a year.
    • (shrug)

      I don't mind being a 70k a year janitor. Dunno if it would be cheaper to just not make a mess, but hey, I certainly won't complain about it. Every time your computer barfs, my cash register jingles.
    • by Ungrounded Lightning ( 62228 ) on Friday April 18, 2008 @01:52PM (#23121018) Journal
      Think of sysadmins as janitors. We pay you to wipe up the mess. It's not worth our while to invest in systems that don't create a mess as long as janitors are cheap enough to come with their electronic mops and buckets.

      That works for small messes.

      It doesn't work for somebody getting hold of the company's trade secrets, client list, bidding information, road map, and headhuntable employee names and pay scale.

      It doesn't work for somebody cracking the information on the company accounts and transferring the cash reserves to themselves via untraceable paths.

      It doesn't work for somebody destroying or corrupting the IT infrastructure - especially the databases - and taking the company out of business for days or forever, causing key employees to quit or be fired, etc.

      It doesn't work for somebody corrupting industrial process control infrastructure and literally destroying plants and killing employees, or causing the company to build and ship defective products.

      I could go on.

      Cleaning up IT graffiti is one thing. Cleaning up IT nuclear strikes is quite another.

      IMHO any corporate IT exec who treats malware like graffiti, rather than an early warning of something more serious, is negligent in his fiduciary duty to the shareholders and perhaps criminally negligent in his duty to protect the lives and health of the employees. (Pity that most of 'em do treat the threat in this way. B-( )
  • by utnapistim ( 931738 ) <dan.barbus@g[ ]l.com ['mai' in gap]> on Friday April 18, 2008 @11:39AM (#23119104) Homepage

    ... patch-based security is also the model Linux uses (as far as I understand).

    Furthermore, for Linux access to the unpatched code is also easy to obtain.

    Somebody please correct me if I'm mistaken.

    • Re: (Score:3, Insightful)

      by Kjella ( 173770 )
      Right; in fact this probably indicates that patch tuesday [wikipedia.org] may not be a bad thing, because then at least every admin worth his salt knows that's the time to update the systems. With patches coming in almost daily on Linux, you either have constant patch duty or it's already a lot more staggered. That's assuming you actually do something with the patches and don't just auto-apply everything, in which case I guess it doesn't matter. But let's give everything an anti-Microsoft spin, shall we?
    • That was my understanding as well. There will always be a lag time between discovering and patching a flaw. Unless Microsoft starts using something like satellite technology and distributes a satellite receiver to every user of Windows, you'll always have to deal with lag for getting patches out.
      • Unless Microsoft starts using something like satellite technology and distributes a satellite receiver to every user of Windows, you'll always have to deal with lag for getting patches out.

        It's called "IP Multicast".

        Does YOUR ISP support it?
    • Re: (Score:3, Insightful)

      by phantomfive ( 622387 )
      Kernels are usually distributed as binaries, but in general other software is distributed by source. Of course, different distros do it differently.

      However, the fact that you can obtain the code makes no difference, and may even be a hindrance, since an exploit can be created here in as little as a minute from the binaries alone.

      The major difference here between Windows and Linux is that Windows is a lot more of a mono-culture. In the linux world, there is no guarantee that an exploit will be availabl
      • by weicco ( 645927 )

        How can it be a hindrance to have the source both where the problem exists and where it is fixed? Just run a damn diff on them and there you have it.

  • Of course there are going to be flaws in this kind of release/patch system. The problem is trying to become savvy enough that you don't need "automatic" updates and can actually patch what needs patching on your system as it needs updating.
    I've heard reports of bugs created by patches, and of patches to fix bugs from patches that only took things back to the base version.
    • Re: (Score:3, Informative)

      by Bacon Bits ( 926911 )
      Yeah, exactly. If this is a problem with Windows, then it's a worse problem with Linux!

      Not only can you reverse engineer the binary, but you have access to the source code and its modifications. If you read bug trackers or dev mailing lists, you can even pick up security vulnerabilities before the patch is released, just by looking at bugs and diff files.

      You can't put the toothpaste back in the tube, people. Arguing that that means you shouldn't brush your teeth is ridiculous.
  • M$ needs to stop having a fixed update day and go back to releasing patches when they are ready, instead of waiting for the next patch day.
  • by Vellmont ( 569020 ) on Friday April 18, 2008 @11:48AM (#23119220) Homepage
    If it's possible to generate an exploit that quickly, we need to completely abandon the current "patch it and hope no one broke in" approach to security. It's never been a good approach, but if any idiot can generate exploits via a point-and-click program, that's obviously a big problem. This problem isn't limited to Windows, and most operating systems aren't patched within even a few hours of a patch release. There's good reasons for this, and bad ones. But no one really wants to trust their critical systems to be patched (and possibly go down and become unworkable) to an instant patch system.

    The fundamental problem here is that a lot of security depends on single points of failure. A real security system relies on the "defense in depth" approach.
  • by analog_line ( 465182 ) on Friday April 18, 2008 @11:53AM (#23119324)
    Couldn't this process (modified of course) do the same thing to any update for any software at all?

    How exactly is this news? I mean, I should update my software when there's a new patch anyway, but now that THIS has been developed I...need to update my software when there's a new patch... Automating it is a pretty neat trick, and it pretty much destroys any argument for security through obscurity, since it means you couldn't patch any hole to maintain the obscurity, but it's not like security through obscurity in the computer software realm has that amazing a track record in any case.
  • From the PDF: (Score:5, Informative)

    by TripMaster Monkey ( 862126 ) on Friday April 18, 2008 @11:55AM (#23119344)
    The PDF outlines three methods of alleviating the problem of staggered patch distribution:

    1) Patch Obfuscation: basically an attempt to hide exactly what the patch fixes by padding out the patch with a lot of lines of nonsense. While this might prove effective, it would only be effective until an improved algorithm for discerning the true reason for the patch is found, and in the meantime, it would create its own set of problems, particularly if the level of obfuscation required balloons the size of the patch to an unmanageable degree.

    2) Patch Encryption: basically distributing the patch in an encrypted format, waiting until it is reasonable to assume that everyone has the patch, and then transmitting a decryption key to decrypt and apply the patch more or less "simultaneously". Problems: this only pushes the problem back one level; meaning the same method of exploitation is just as possible, while this also creates an unacceptable time lag for patches to be applied, which hackers who write exploits the old-fashioned way can exploit to their considerable benefit.

    3) Fast Patch Distribution: basically leveraging technologies like P2P to ensure that patches are rolled out...well...fast. Problems again include off-line hosts, as well as hosts who have the misfortune of being on ISPs that take a dim view of P2P.

    In summary, none of the methods outlined have much of a chance to combat this new threat.
    • Re: (Score:2, Funny)

      Fast Patch Distribution: basically leveraging technologies like P2P to ensure that patches are rolled out...well...fast. Problems again include off-line hosts, as well as hosts who have the misfortune of being on ISPs that take a dim view of P2P.

      Why not use the botnets for this kind of stuff? I mean, a previous article today already showed that they have a quite effective way of patching arbitrary systems and distributing mass content.
    • There is a fourth choice -

      security through obscurity - don't provide patches at all, and never release the source code. This will make it much harder for script kiddies.

      It should be noted that Microsoft Research is aligned with the CMU Computer Science department, via the Microsoft Carnegie Mellon Center for Computational Thinking. That is either ironic or obvious, depending on your viewpoint (either it can fuel Microsoft to do a more thorough job of releasing early, often and being more transparen
    • 2) Patch Encryption: basically distributing the patch in an encrypted format, waiting until it is reasonable to assume that everyone has the patch, and then transmitting a decryption key to decrypt and apply the patch more or less "simultaneously".

      This can make things worse:

      The distribution of the patch alerts the bad guys to the existence of the bug.

      - They can use their botnet infrastructure to get the key early and DDoS the key servers (possibly simultaneously, using the botnet to grab the key thr
  • No fix feasible (Score:5, Interesting)

    by Todd Knarr ( 15451 ) on Friday April 18, 2008 @11:56AM (#23119386) Homepage

    Unfortunately, no fix is feasible. The basic problem is twofold:

    • If you tell someone how to fix a problem, you tell them what the problem was.
    • It's not possible to push updates to all affected systems simultaneously.
    That the first is true should be obvious if you think about it for a minute. As for the second, that comes from the fact that the affected systems are owned by different entities with different requirements and different environments. A fix for a problem affects more than just the fixed software, especially when the fix is in the operating system on which other business-critical software runs. Any fix has to be checked for compatibility with that entity's specific environment, this checking can't start until after the entity has gotten the fix, and everybody's going to take a different amount of time to check and get clearance to deploy.

    The only "fix" would be a mandatory push to all systems at one time, and that won't be accepted by the people who own the systems unless Microsoft or someone else accepts complete 100% liability for all costs associated with any failure. And I just don't see that happening.

  • What would the new system look like? The best I have so far is something that distributes the patch, encrypted, and then releases the key some time later when many systems already have the patch downloaded and can install it immediately. Of course, I'm not sure how much that improves over just using bittorrent to distribute the whole thing with much higher aggregate bandwidth.
  • by Mongoose Disciple ( 722373 ) on Friday April 18, 2008 @11:58AM (#23119414)
    Not to be inflammatory, but it really is.

    Essentially, these people wrote a paper which says that hackers can analyze Windows Updates and thereby figure out how to attack systems that aren't patched yet. It goes into theory and proofs of that. Thanks, but everyone else has known this about Windows Update for years, probably for about as long as there's been a Windows Update.

    It then proposes some solutions which are all, on the whole, worse than the status quo for various reasons. For example, forcing all Windows machines, whether they're turned on or connected to the internet or not, to patch at the very same instant is not realistic.

    They should've called this thing: "Windows Update has problems. Magic can fix them."
    • Re: (Score:3, Informative)

      Didn't actually RTFA, did you?

      If you had, you'd know that this paper did not say that "hackers can analyze Windows Updates and figure out how to attack systems that aren't patched yet thereby". What it did say was that it is possible to write software that can analyze the update for you and churn out an exploit for the security issue identified thereby...in a matter of seconds.
  • by Animats ( 122034 ) on Friday April 18, 2008 @12:27PM (#23119884) Homepage

    This is fascinating. As someone who's worked with automatic theorem proving and proof of correctness techniques, I'd never thought of using them in this way.

    What they're doing works like a proof-of-correctness system in reverse. They diff the executables before and after the patch (which in itself is impressive), then, having isolated the change, analyze it automatically. Security patches usually consist of adding a test which constrains the valid inputs at some point. So they use a symbolic decision procedure, which is part of a theorem prover, to work back through the code and automatically derive a set of inputs that would be caught by the new test.

    This is more than just an attack on Windows Update. It's true automated exploit generation.

    This is potentially applicable to any security-critical code that changes over time. One could, for example, have something that watched check-ins to the Linux kernel tree and developed new exploits to current stable releases from them.
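
    To give a flavor of that "work backwards from the added check" step, here is a toy sketch using the Z3 solver's Python bindings. The scenario is hypothetical -- a patch that adds a bounds check `if (length > 64) return ERROR;` in front of a copy into a 64-byte buffer -- and this is not the paper's actual tool, which operates on x86 binaries; it only shows how "inputs the new test rejects" turns into a constraint-solving problem.

```python
from z3 import BitVec, Solver, sat

# Symbolic, attacker-controlled length field from the (hypothetical) input format.
length = BitVec("length", 32)

s = Solver()
s.add(length > 64)      # the condition the *patched* code now rejects
s.add(length <= 1024)   # stay within what the old, unchecked parser would read

if s.check() == sat:
    model = s.model()
    # Any such value is a candidate input that the old code copies unchecked.
    print("candidate exploit input: length =", model[length])
else:
    print("no input reaches the old, unchecked path")
```

    The paper works at the level of machine code, with a formal model of x86 semantics, but the shape of the computation -- constrain on the new check, solve for a satisfying input -- is the same.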

    • It's impressive (Score:4, Interesting)

      by bmajik ( 96670 ) <matt@mattevans.org> on Friday April 18, 2008 @02:42PM (#23121626) Homepage Journal
      But based on my reading of the paper, it isn't 100% there. You don't get "sploit.exe" dumped onto your disk when the thing is all done. Their stuff only works backwards to a certain point.

      For instance, when they come up with the exploit for the WMF vulnerability, they're not making you a new WMF file (as I understand it, anyway).

      One thing that interested me is the model they invented. The binary differencing was off-the-shelf stuff from eEye. But their model of the x86 machine (CPU, instruction side effects, registers, and memory) is new, and that seems like something that could have been written previously... I'm surprised they needed to do it themselves. They also define a space of functions that examine the model to determine if badness has happened, for each specific kind of badness they're interested in, e.g. the return address changing during execution of a call.

      They also appear to require execution traces of P (or P') run under a machine monitor; I don't think they work backwards purely from the P/P' difference and the instructions in P to construct the initial conditions of the machine state. Even if that _is_ what they were doing, they only model the salient portions of the binary, not the outside system.

      Even so, what they're doing here is fantastic. The things they're not doing (automatically creating files that trigger the exploit) are all possible offshoots from this paper, if one were to have sufficient computing power and time to create models of the salient portions of the system. For each different data flow into the instruction/memory space, the model would need to describe the line of demarcation. In the case of the WMF/PNG vulnerabilities, that line is on the other side of ReadFile or mmap or whatever (i.e. the bytes that trigger the exploit come from the disk). Building a file on disk in a certain way to cause a sequence of x86 instructions to produce the desired memory is a hard problem in and of itself, although it is perhaps possible with the tools and techniques they've already got.

      The same would be true of the ASP.NET vulnerability. I believe they can work backwards from the exploit to the in-memory representation of a URL request. At that point, knowing that URLs come from the outside hostile internet, through IIS, etc. etc., is vuln-specific domain expertise. However, a library of injection points (file on disk, URL request from network, packet from network, etc.) could be built around the analysis model. The analysis engine works backwards until it says "here is the memory precondition that leads to an exploit; now I rely on an injection plugin to achieve that memory state."
  • by Opportunist ( 166417 ) on Friday April 18, 2008 @12:44PM (#23120138)
    If you have a patch, you can diff the original and the patched file and find out what got fixed. No secret here.

    So how can you close the gap between fixing and exploiting? That's nothing MS could fix. You have to. Patch early, patch often.

    If any message is contained here, it's that if there is a patch out and you didn't use it, you're extremely vulnerable. That's pretty much it, nothing really new here.
  • 2 points (Score:3, Insightful)

    by v(*_*)vvvv ( 233078 ) on Friday April 18, 2008 @01:06PM (#23120458)
    1) Isn't this an old problem? Not only is this old, but it applies to any computer system, so to single out Windows Update seems naive (as others have said).

    2) I think we are forgetting that the exploits still need to be distributed, and the article refers to worms, but how is this different from any other worm/virus?

    Smarter viruses will attack weaknesses that are not yet widely known or patched, so those that use exploits based on public patches are 1) stupider and 2) more predictable.

    So this is less of an "update how" problem and more of an antivirus problem. The former might be impossible to solve, but the latter we have solutions for.

  • Microsoft thought about this. In the latest update [theregister.co.uk], they've decided that to improve security, they'll disable all input devices on your computer.

    By making a user's computer useless, the goal is to get fewer people using computers, thereby reducing security problems.
  • by Ungrounded Lightning ( 62228 ) on Friday April 18, 2008 @02:18PM (#23121356) Journal
    What's important about this is that it can quickly and automatically generate exploits given only OBJECT code - faster even than a good programmer could do it from source.

    This negates the claim that hiding the source code increases security.
  • Not eating can make you hungry.
