
MS Giving Exploit Writers Clues To Flaws

Posted by kdawson
from the first-word-sounds-like dept.
In the IT trench writes "How's this for a new twist on the old responsible disclosure debate? Hackers are using clues from Microsoft's pre-patch security advisories to create and publish proof-of-concept exploits. The latest zero-day flaw in the Windows DNS Server RPC interface implementation is a perfect example of the tug-o-war within the Microsoft Security Response Center about how much information should be included in the pre-patch advisory."
  • by Skreech (131543) on Monday April 16, 2007 @09:46PM (#18761361)
    I know the ongoing debate about whether open source or closed source has the security advantage when it comes to exploits in code.

    But this is a case where a half-and-half approach is probably the worst of all.
    • Re: (Score:2, Interesting)

      by Anonymous Coward
      >>But this is a case where a half-and-half approach is probably the worst of all.

      Of course, going halfway on either ideology, closed or open source, sets you up for trouble. Which is why it is a bad idea for Microsoft to release any details about their patches at all. Yes, I said it. If they are going to try to make a black-box OS, they shouldn't do it halfway.
      • by kestasjk (933987) on Tuesday April 17, 2007 @02:30AM (#18763805) Homepage
        Damn Microsoft! We need to know what patches are being applied so we know what may fail. We need full disclosure!

        Damn Microsoft! Their full disclosure is allowing hackers to write exploits; don't tell the hackers how to hack my system!

        Damn Microsoft! They're kinda going halfway in a vain attempt to stop people flaming, as if I'm going to stop doing that! Stick with one or the other; we'll flame you whatever you do anyway.
    • by Anonymous Coward
      It tells me, as an administrator, what to be suspicious of, rather than some secret bug they are patching.

    Now, it's true that it is still in the virus writers' favor, in that hardly 100% of sysadmins keep up to date on this stuff (where's the time?), but it is scary that they can exploit a specific bug based on a vague explanation in the first place..... (scary in that Windows is really that bad...)
    • Chaffing (Score:4, Interesting)

      by goombah99 (560566) on Monday April 16, 2007 @10:11PM (#18761661)
      Microsoft should pre-publish a whole bunch of tasty looking security advisories that are 100% fake every time they publish one that is real. Make them the most enticing looking (remote code exploit with unvalidated input overflow in ssh). Any given cracker will probably pick the fake and quickly waste gobs of time.

      If they wanted to get more diabolical, they could even put some honeypots into the code itself. For example, something that emulates a buffer overflow crash when a certain malformed word is injected. Or maybe something more tantalizing but useless, like a 1-second pause in Internet Explorer when a certain tag combination appears, followed by a page reload, to make them think IE just belched but managed to somehow recover. Hint at this in the pre-pub or leak it on the web (post it in a Slashdot comment). They can validate its existence, so they believe the bug really exists too.

      Each time they patch the real security hole they can preload ten new honeypots for the next round of spoofing the hackers and eradicate the old ones so it looks like they are patching real bugs and the hackers never catch on.

      Why am I posting this under this parent? Well because you could only get away with this in closed source. Open source would make this a give-away.

      • Re:Chaffing (Score:5, Insightful)

        by fm6 (162816) on Monday April 16, 2007 @10:23PM (#18761813) Homepage Journal

        Microsoft should pre-publish a whole bunch of tasty looking security advisories that are 100% fake every time they publish one that is real.
        If they had the expertise to do that, they wouldn't have so many security holes in the first place!
      • Re: (Score:3, Interesting)

        by MrShaggy (683273)
        So what you are saying is that some security by obscurity might be a good thing?

        I personally think that there are uses for both. It's natural to have both in place.
        One example is if I am the Coca-Cola company, and I want to keep the formula a secret. I might go to great lengths in securing the room in which you can read the formula. You would need to know the security in order to access it in the first place.

        If I had $20,000 in cash at my house, in the basement, and didn't tell anyone that it was there. If I
        • Re:Chaffing (Score:4, Insightful)

          by norton_I (64015) <hobbes@utrek.dhs.org> on Monday April 16, 2007 @10:49PM (#18762083)

          Aren't all these reasonable 'security by obscurity' examples that work ok?


          Only one of them, the $20,000 in your basement. The reason you only do that for one night is that it isn't a good long-term security solution. Eventually, someone will find out that you have that much cash lying around and your chances of being robbed go way up.
        • Re:Chaffing (Score:4, Interesting)

          by Daengbo (523424) <daengbo@nOspam.gmail.com> on Monday April 16, 2007 @11:31PM (#18762453) Homepage Journal
          The point about security through obscurity is that it shouldn't be used alone as the only source of protection. Many people change their SSH config off the default port to reduce automated attacks, but they don't leave it there in the most open configuration -- they disable root logins and sometimes require key-based logins instead of password-based ones.

          Having an element or elements in your security setup which are irregular is a good part of a complete security picture, but don't for a moment assume that these will even slow down someone who knows what to do and is determined to get into your network. Only real security measures will do that. If you leave an unsecured FTP server on port 12056 facing the internet, someone will eventually find it and exploit it. If you leave phpmyadmin with no root password hidden in your website somewhere with no outside links, they still might find it, and then you are toast. Obscurity just stops most script kiddies. That's not bad, though, is it?
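
          The SSH hardening described above can be sketched in a few sshd_config lines (illustrative values only; the non-default port is the obscurity layer, while the remaining directives are the real security):

          ```
          # /etc/ssh/sshd_config (sketch -- values are illustrative)
          Port 2222                   # obscurity: cuts automated scans, stops no one determined
          PermitRootLogin no          # real security: no direct root logins
          PasswordAuthentication no   # real security: key-based logins only
          PubkeyAuthentication yes
          ```

          The point of the layering is that even if the port is discovered (a full scan finds it in minutes), the attacker still faces key-only authentication.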
        • It falls apart pretty much at the obscurity versus security part:

          I've broken into houses. I'm neither proud nor ashamed (I was young, it's the least of the stupid things I did). Leaving your door open *would* increase the chances of you being broken into. Being broken into *would* increase the chances someone sees the money you've cleverly laid out in the basement.

          Meanwhile, you would have been much safer having just posted directly on your front door that you had $20K and installing your (*cough* firew
          I personally think that there are uses for both. It's natural to have both in place.
          One example is if I am the Coca-Cola company, and I want to keep the formula a secret. I might go to great lengths in securing the room in which you can read the formula. You would need to know the security in order to access it in the first place.


          No, no, NO!

          The Coca-Cola company should open source the formula and abandon any trademarks so that The Community can check the formula for health risks and contribute improvements, and m
          • by Ajehals (947354)
            Since when does open sourcing anything give people the right to do anything other than look at the code (recipe in this case)? Duplicating / distributing / creating derivatives of something that is open source is still a copyright violation unless you have a license that says otherwise...
            • Since when does open sourcing anything give people the right to do anything other than look at the code (recipe in this case)? Duplicating / distributing / creating derivatives of something that is open source is still a copyright violation unless you have a license that says otherwise...

              That's not true - the point of open source is that users have the right to change the code and distribute changed versions. The license determines whether you need to release your source code changes, but both GPL and BSD a
              • by Ajehals (947354)
                There is a difference between making source code available and releasing the same code under a permissive license. Just because the GPL, BSD and the various common licenses are the most visible does not mean that that is the only way to release code.

                A situation where a piece of software is open source, but does not come with any rights to distribute and/or modify code would be one where a company or government want to carry out a code review usually (but not exclusively) on the grounds of security, or comp
        • by Sajarak (556353)

          So what you are saying is that some security by obscurity might be a good thing?
          Most security systems use obscurity to some degree. If restricting access by passwords isn't security by obscurity, then what is it?
        • "Aren't all these reasonable 'security by obscurity' examples that work ok?"

          Only for people who don't understand what 'security by obscurity' is. Most security systems depend on something being secret. Security by obscurity is when you tell that secret to the enemy, but in a hard-to-read way.

          And it only works if what they can gain from you is worth less than the cost of deciphering the secret. That is, almost never.

      • Re:Chaffing (Score:4, Insightful)

        by AndrewM1 (648443) on Monday April 16, 2007 @10:36PM (#18761977)
        The problem with this is the bad press MS would get from announcing 11 exploits for every one they discovered. Those "outside the know" would think MS insecurity had gone up by 11x. MS already has major press issues about their many security exploits, they don't need 11 times that.

        Also, introducing fake honey pots in the code would cause problems. If they announced it and fixed each one, the honey pots would be useless. If they announced it but didn't fix it, they'd look like they didn't care/or it would make it obvious it was a honey pot. If they didn't announce it or fix it, then invariably some security researcher would find it (it has to be discoverable to become a honey pot) and blast MS for the security vulnerability.
        • by goombah99 (560566)
          I don't think people would care any more if it was 11x higher. Besides which, they could even announce they are doing it. That would shut down those pesky security researchers who always claim that some bug might lead to a remote code exploit without actually working out how it might. They'd hold their fire, because if it turned out to be a honeypot they, not MS, would look stupid.

          As I mentioned in my first post, each time they send out a patch they fix all the honeypots so no one can tell and then pr
        • by angulion (132742)
          This in addition to honeypots meaning more code, which usually translates to more potential bugs and security vulnerabilities.

          Wouldn't that be ironic - a honeypot leading to a real exploit?

      • by mcpkaaos (449561)
        Well because you could only get away with this in closed source. Open source would make this a give-away.

        Unfortunately, so would a decent debugger. It's a pretty cool idea, though. :)
      • by cp.tar (871488)

        Microsoft should pre-publish a whole bunch of tasty looking security advisories that are 100% fake every time they publish one that is real. Make them the most enticing looking (remote code exploit with unvalidated input overflow in ssh). Any given cracker will probably pick the fake and quickly waste gobs of time.

        OTOH, maybe the cracker will find his time had not gone to waste.

        Just because MS think a piece of their code is good, it doesn't mean it is so. After all, they do need bugfixes and service packs

      • Open source would make this a give-away.

        Open source also reduces the risk of needing this kind of thing in the first place. And that is true regardless of whatever the current state of Linux vulnerabilities vs Windows vulnerabilities.

        In closed source, for whatever reason, MS can't seem to release zero-day patches. That is, they discover the vulnerability, or someone reports it to them, and the patch still has to wait till Patch Tuesday. The only exception to this is if it becomes public in a big way, such as

  • by Anonymous Coward
    Umm, do these people really think that hackers can't reverse engineer a patch and see what it's doing??

    IT admins will be the most affected .. because they will have no idea what the patch is doing and if it will affect other stuff they have. Also, this will be an excuse to further delay the release of patches, since now it will have to go through even more QA.

    Hackers that RTFM .. now that's funny.

    • by Anonymous Coward on Monday April 16, 2007 @10:08PM (#18761629)

      Hackers that RTFM .. now that's funny.
      Actually, hackers DO RTFM [catb.org].

      They also know How To Ask Questions The Smart Way [catb.org].

      Crackers have the upper hand on system administrators, because the focus is very narrow. System administrators have to RTFM and stay up-to-date on everything from why Alice can't print (because her network cable is unplugged) through to debugging the cause of a fatal exception/crash in a plugin they've written for an HTTP daemon. System administrators are very overloaded with work whereas crackers can take it much easier.
      • Of course, the really good crackers are also sysadmins, as this gives them the best insight into how a system works and the ways the security is thought out... they probably also write really nice code for a software company or freelance.


        Note: This is a generalisation of a select group. It will not cover every possibility.
  • by Anonymous Coward
    It would have taken an open source project longer to write the security advisory than to change 1 line of code (or 2 tops) to fix the stack overflow issue.

    How hard is it for Microsoft to push out a new update for a change this minor (and important)?

    This is a critical problem for any intranet (Universities come to mind as the largest target) running Microsoft servers. And it can also affect a whole load of dedicated servers running the basic versions of Microsoft server software.

    What are they waiting for?!
    • Re: (Score:3, Insightful)

      by 644bd346996 (1012333)
      Open source projects have to write security advisories, too. They just have the option of including the patch with the advisory.
    • by EmbeddedJanitor (597831) on Monday April 16, 2007 @10:25PM (#18761843)
      In any reasonably complex hunk of software, the chance of being able to confidently fix a one-liner and release it immediately is pretty low. Most software needs verification/testing of some sort before a change can be mainstreamed.

      I actually think that MS pushes out some patches too fast. My Windows laptop gets autopatched and the problematic parts of the system (wireless networking in particular) sometimes get screwed up for a while until the next patch set arrives. I don't think that MS is responsible for all the breakage. Often, MS makes a change which can break an existing driver or app. From a user's perspective, all you see is that an MS patch breaks the system.

      • by Anonymous Coward on Monday April 16, 2007 @10:52PM (#18762113)
        Let us take a look at the recent topic of a Madwifi vulnerability affecting certain wifi users in Linux.

        Julien Tinnes reported [immunitysec.com] it at 13:48:00 EST on December 7, 2006.

        At 14:17:50 on the same day the patch [madwifi.org] was available in the main source code repository.

        A little while later at 17:08:26 the vulnerability is officially confirmed [madwifi.org] by Madwifi and advisories had been prepared.

        Looking downstream, the response times for official fixes/advisories by distribution-specific security teams were:
        Gentoo: December 10 [gentoo.org]
        SUSE: Confirmed December 8 [novell.com], Fixed December 11 [novell.com]
        Ubuntu: January 9 [ubuntu.com]

        There is certainly some room for improvement here with distribution-specific fixes, but that also includes time spent testing the changes to the driver. To be fair to Microsoft (actually, I'm just being overly optimistic), they probably had a patch ready within 30 minutes of the initial vulnerability report, as was the case with Madwifi. But instead of giving the customer the option of trying the "beta" patch so they can test it themselves, it is kept private. Days tick by at Microsoft HQ and nothing appears to happen. Eventually, a patch is released on the Patch Tuesday of the next month (or the month after that). System administrators get no choice and no chance to test it themselves.
    • Re: (Score:2, Insightful)

      by Quantam (870027)
      Congratulations. You have just unwittingly illustrated the mindset that makes businesses wary of open-source software, and gives bite to Microsoft's FUD. Of course not all open source coders have such a knee-jerk mindset, but you are a member of an influential (in intimidation power, not number) minority.
  • by UnknowingFool (672806) on Monday April 16, 2007 @09:50PM (#18761403)
    I seem to remember that one of the reasons XP was cracked was that the Windows XP team responsible for it gave an interview about it. They were so proud of it that they gushed about the nuances of it. Now, they didn't exactly give a lot of technical detail, but they gave enough for crackers to figure out how it worked. Pretty soon a key generator program was released.
  • Clear choice (Score:4, Insightful)

    by The Bungi (221687) <thebungi@gmail.com> on Monday April 16, 2007 @09:52PM (#18761425) Homepage
    Microsoft should stop providing so much information in their advisories. Or better yet, stop issuing them altogether. Oh, wait. They used to do that, and that proved unpopular.

    Maybe they should do what Mozilla does, which is to "hide" vulnerabilities until they either patch them or feel that a sufficient number of people have applied the patch (which is of course the other problem). Of course, like with Blaster for example, you can release a patch and 30 days later the exploit nails all the people who didn't bother to fucking patch.

    I can see some people's heads exploding with this one.

    • You hit the nail on the head with the Blaster comment; this discussion is moot. Wasn't it 3 or 4 days ago that we all read (the summary of) a story saying most botnets are built with two well-known worms? It might make sense for a few system administrators with enemies to worry about this 'sploit, but 99% of people have bigger things to worry about.
  • Fabulous (Score:5, Insightful)

    by SeaFox (739806) on Monday April 16, 2007 @10:04PM (#18761561)

    How's this for a new twist on the old responsible disclosure debate? Hackers are using clues from Microsoft's pre-patch security advisories to create and publish proof-of-concept exploits.

    That's great. Now they have an excuse to be incredibly vague about the problem in the advisories. It will be like the Government and National Security Letters.

    "We need you to submit to this, to protect you from hackers. We can't discuss the issue as it's a trade secret and a threat to computing security. This is a critical venerability. But we can't tell you why. Just install this patch when it comes out and you'll be better. Trust us, we know what we're doing."
    • This is a critical venerability.
      EOL? [wikipedia.org]
      "We need you to submit to this, to protect you from hackers. We can't discuss the issue as it's a trade secret and a threat to computing security. This is a critical venerability. But we can't tell you why. Just install this patch when it comes out and you'll be better. Trust us, we know what we're doing."
      This is like the infamous openssh bug that urged everybody to upgrade to version 2 without giving a reason, even though many weren't vulnerable.
  • I wonder, if this continues, whether the price for exploits will go down; since they can get replicated more quickly, there may be more of an actual market.
  • by Anonymous Coward on Monday April 16, 2007 @10:12PM (#18761675)
    One could find exploit code to the DNS issue before the advisory was published. MSRC didn't reveal any more information than was already publicly known.
    • by Anonymous Coward
      Parent is right - the DNS RPC flaw is a 0day issue discovered in the wild. It was reported to Microsoft, apparently by TippingPoint and the Zero-Day Initiative. That is why they released an Advisory (something they usually only do for issues discovered in the wild).

      Put another way - it was actually initially discovered by the black hats, and an exploitation tool released and used, not confidentially reported to Microsoft under the "Responsible Disclosure" programme, or even publicly posted to somewhere li
  • It really doesn't matter how much information you disclose about the technical details or workarounds except in how long it will take to develop the exploit. Once an exploit writer knows there is a critical vuln in a particular area of the system, it's not that hard to narrow down the inputs required to exploit it. In particular, Metasploit makes this much easier [wikibooks.org] to do by being able to see what memory offsets are in EIP when the process segfaults.

    The only real impact is how many people will be able to w
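
    The Metasploit trick mentioned above boils down to something like the following (a toy C re-implementation of the pattern_create / pattern_offset idea, for illustration only -- not Metasploit's actual code): generate a long non-repeating pattern of upper/lower/digit triples, crash the target with it, then look up whichever 4 bytes landed in EIP to learn the exact offset of the overwrite.

    ```c
    #include <stdio.h>
    #include <string.h>

    /* Build a non-repeating pattern: Aa0Aa1Aa2 ... Ab0Ab1 ...
     * Every 4-byte window is unique, so the value that ends up in EIP
     * pinpoints where in the input the saved return address sits. */
    static void pattern_create(char *out, size_t len) {
        size_t i = 0;
        for (char u = 'A'; u <= 'Z' && i < len; u++)
            for (char l = 'a'; l <= 'z' && i < len; l++)
                for (char d = '0'; d <= '9' && i < len; d++) {
                    if (i < len) out[i++] = u;
                    if (i < len) out[i++] = l;
                    if (i < len) out[i++] = d;
                }
        out[i] = '\0';
    }

    /* Given the bytes seen in EIP at crash time, return their offset. */
    static long pattern_offset(const char *pattern, const char *eip_bytes) {
        const char *p = strstr(pattern, eip_bytes);
        return p ? (long)(p - pattern) : -1;
    }

    int main(void) {
        char pat[2048];
        pattern_create(pat, sizeof pat - 1);
        /* Suppose the debugger showed EIP = 0x41316241 ("Ab1A" little-endian): */
        printf("%ld\n", pattern_offset(pat, "Ab1A"));
        return 0;
    }
    ```

    This is exactly why a vague "critical flaw in the DNS RPC interface" hint is enough: once the crash is reproducible, finding the exploitable offset is mechanical.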

  • by twifosp (532320) on Tuesday April 17, 2007 @12:16AM (#18762817)
    That headline is utter rubbish and sensationalist. Microsoft is not giving anyone clues to create exploits. The wording makes Microsoft sound intentionally malicious. While Microsoft is pretty god damn malicious, they aren't out there trying to help exploit writers.

    The headline should instead read something like "Hackers Create Exploits Using Microsoft-Published Information". This IS what hackers do, after all. They read documentation and manuals. They find out how things work with all the available information. They social-engineer. Trying to pin this on Microsoft is childish.

    • by atamyrat (980611)
      +1 troll to headline? oh I thought it was -1.

      Now does it mean that I can improve my karma by trolling? :-)
  • by master_p (608214) on Tuesday April 17, 2007 @05:42AM (#18764921)
    C's time has passed! The IT industry cannot afford it any more, economically as well as politically. Even the slightest mistake can cost millions of dollars.

    And before someone says it's all about the programmers and not the language, I would say I agree: it takes a God programmer to produce a flawless C program. The God programmer category has few members around the world, and most of them are not in Microsoft (hint: they are Linux / open source guys).

    So it's time to stop using this horrific programming language called 'C'. It worked so far, but its flaws are very serious...time to move on!
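
    The complaint above is about a concrete failure mode: the unchecked copy into a fixed-size buffer. A minimal sketch of the pitfall and its bounded alternative (buffer size is illustrative):

    ```c
    #include <stdio.h>
    #include <string.h>

    #define BUF_LEN 8  /* illustrative size */

    /* The classic C pitfall: strcpy() copies until it finds a NUL, with
     * no regard for the destination's size, so hostile input longer than
     * the buffer overwrites adjacent memory. Shown only as a comment,
     * since actually running it is undefined behavior:
     *
     *     char buf[BUF_LEN];
     *     strcpy(buf, attacker_controlled);   // no length check at all
     *
     * The bounded alternative: snprintf() never writes more than the
     * size it is given and always NUL-terminates. */
    int main(void) {
        char buf[BUF_LEN];
        const char *hostile = "AAAAAAAAAAAAAAAA";  /* 16 bytes aimed at an 8-byte buffer */
        snprintf(buf, sizeof buf, "%s", hostile);  /* truncates instead of overflowing */
        printf("%zu\n", strlen(buf));              /* 7 chars + NUL fit in 8 bytes */
        return 0;
    }
    ```

    Languages with bounds-checked strings make this entire bug class impossible, which is the substance of the "C's time has passed" argument.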
  • It is melancholy that once a typical person learns a desktop manager, that person will stay with that desktop manager; even when it means giving up big bucks [neowin.net], as opposed to just downloading a copy of Ubuntu 7.04 [ubuntu.com] - For Free! Microsoft knows this, cold. When it comes to those who would exploit users' sloth in purchasing a known product riddled with flaws, it only takes an enterprising few to ruin every Microsoft user's day; globally. One should notice that Microsoft's "Software Agreement" says you ca
