
Microsoft Blames the Messengers

Roger writes: "In an essay published on, Scott Culp, Manager of the Microsoft Security Response Center, calls on security experts to 'end information anarchy' and stop releasing sample code that exploits security holes in Windows and other operating systems. 'It's high time the security community stopped providing the blueprints for building these weapons,' Culp writes in the essay. 'And it's high time that computer users insisted that the security community live up to its obligation to protect them.' See the story on CNET."
This discussion has been archived. No new comments can be posted.

  • MS (Score:4, Offtopic)

    by MissMyNewton ( 521420 ) on Wednesday October 17, 2001 @05:35PM (#2443437)
    "It's high time the security community stopped providing the blueprints for building these weapons,"

    It's probably high time that Microsoft stop building houses made of straw to defend against big bad 'net wolves... It'd sure make a lot of our lives easier...

    • by DahGhostfacedFiddlah ( 470393 ) on Wednesday October 17, 2001 @06:04PM (#2443700)
      Supporters of information anarchy claim that publishing full details on exploiting vulnerabilities actually helps security...and bringing pressure on software vendors to address the vulnerabilities. These may be their intentions, but in practice information anarchy is antithetical to all three goals.

      All three goals? There's more on this later - but assuming that he's right with the rest of the entire essay, you'd expect there to be some pressure to address the vulnerabilities, would there not? He even goes further, saying that published exploits are antithetical to getting patches out. Brilliant logic.

      Providing a recipe for exploiting a vulnerability doesn't aid administrators in protecting their networks. In the vast majority of cases, the only way to protect against a security vulnerability is to apply a fix that changes the system behavior and eliminates the vulnerability; in other cases, systems can be protected through administrative procedures. But regardless of whether the remediation takes the form of a patch or a workaround, an administrator doesn't need to know how a vulnerability works in order to understand how to protect against it, any more than a person needs to know how to cause a headache in order to take an aspirin.

      I love this analogy. It actually works. For example - if I knew that the cause of my headaches was an allergy to certain foods, I could avoid those foods, and not have to take aspirin. If I know how an exploit works, I can prevent it with my own tools - firewall, etc. and not have to worry too much about the dubious patches.

      Likewise, if information anarchy is intended to spur users into defending their systems, the worms themselves conclusively show that it fails to do this. Long before the worms were built, vendors had delivered security patches that eliminated the vulnerabilities.

      Here he's not talking about e-mail "viruses", but worms. Specifically, worms targeting software people did not know they had on their systems. There was plenty of buzz about Code Red before most people had it, and the patch was applied to thousands of computers as people got worried. I'm not an advocate of having people upgrade through fear, but this still disproves his point.

      Now - here's his argument that published exploits take pressure off of vendors to publish fixes:

      Finally, information anarchy threatens to undo much of the progress made in recent years with regard to encouraging vendors to openly address security vulnerabilities. At the end of the day, a vendor's paramount responsibility is to its customers, not to a self-described security community. If openly addressing vulnerabilities inevitably leads to those vulnerabilities being exploited, vendors will have no choice but to find other ways to protect their customers.

      Crap...I'm trying to find a problem with the logic, but I can't actually understand the argument - anyone? What other ways are there for vendors to protect their customers than putting out fixes?

      Anyway, that said, I'd just like to express my condolences to the author. Did you see his title? "Manager of Microsoft Security Response Center" Poor guy is probably blamed for half the bugs in code he's never heard of. Can't blame him for venting a little. I just wouldn't have done it as publicly.
      • by schon ( 31600 ) on Wednesday October 17, 2001 @06:26PM (#2443801)
        an administrator doesn't need to know how a vulnerability works in order to understand how to protect against it, any more than a person needs to know how to cause a headache in order to take an aspirin.

        I love this analogy. It actually works.

        No, actually it doesn't.

        An aspirin only relieves the symptom, not the cause. If you get a headache from hitting your head against the wall, an aspirin won't stop you from continuing to hit your head against the wall; all it will do is let you do it longer.

        Perhaps he can answer this though: without exploit code, how do we know the problem is really fixed? Twice to my knowledge MS has released patches that didn't fix the hole they claimed. Publicly available exploits are a failsafe; they provide an independent means of verifying that the hole is actually closed.
        • Perhaps he can answer this though: without exploit code, how do we know the problem is really fixed? Twice to my knowledge MS has released patches that didn't fix the hole they claimed. Publicly available exploits are a failsafe; they provide an independent means of verifying that the hole is actually closed.

          I think that is the single most important reason for exploit code.

          I read one of the new (yes, I know, the old were much better) Tom Swift books where Tom invents some sort of magical force field and, as the acid test, has his robot assistant fire a few rounds at him. Of course, it's dangerous to fire a gun at a person, but short of examining the mechanism behind the force field (akin to studying the source code in detail - which, since it isn't open to the public, isn't open to scrutiny), there is no final way of determining that something works other than trying it.

          If Microsoft is going to be a closed-source software company, they're going to have to accept the consequences of their decisions. They have to take full responsibility for their own code. Blaming their problems on something else does not eradicate them.
        • A possible response (Score:3, Interesting)

          by IPFreely ( 47576 )
          Perhaps he can answer this though: without exploit code, how do we know the problem is really fixed? Twice to my knowledge MS has released patches that didn't fix the hole they claimed. Publicly available exploits are a failsafe; they provide an independent means of verifying that the hole is actually closed.

          If I were an MS spokesman, I might answer this by saying:
          "Exploits are a proper test of the validity of a patch, but it is not necessary to publish them. They can be developed and tested in closed labs and only the results published."

          To which I would have to ask: "Whose lab and how can we trust them?"

      • If openly addressing vulnerabilities inevitably leads to those vulnerabilities being exploited, vendors will have no choice but to find other ways to protect their customers.
        Crap...I'm trying to find a problem with the logic, but I can't actually understand the argument - anyone? What other ways are there for vendors to protect their customers than put out fixes?

        Considering that this essay is from Microsoft, I think it reads clearly as a thinly veiled threat to sue anyone who points out vulnerabilities in Microsoft products (UCITA, anyone?). In Microsoft logic, if people stop publishing vulnerabilities for fear of being sued, then the problem of people exploiting known vulnerabilities goes away. This logic is akin to leaving a bank vault wide open, but turning off the lights so thieves won't see it.

        In the land of real people, litigation will not solve the problem, and Microsoft needs to know this. The first security expert to get sued will be screwed, but by that time the vulnerability will have been made public, and thus be exploitable. This lawsuit will leave a bad taste in the mouths of the "self-described security community," so that the next exploit that is found will be exploited rather than published. When people start abandoning their products en masse because of constant security problems, Microsoft may realize that they shouldn't've angered the people who point out the chinks in their armor.
    • Re:MS FUD (Score:3, Interesting)

      by xmedar ( 55856 )
      Did anyone else notice this -

      Code Red. Lion. Sadmind. Ramen. Nimda. In the past year, computer worms with these names have attacked computer networks around the world, causing billions of dollars of damage. They paralyzed computer networks, destroyed data, and in some cases left infected computers vulnerable to future attacks

      then further down -

      All of these worms made use of security flaws in the systems they attacked, and if there hadn't been security vulnerabilities in Windows®, Linux, and Solaris®, none of them could have been written. This is a true statement, but it doesn't bring us any closer to a solution.

      Basically they are attempting to put Solaris and Linux in the same boat as M$ware. It looks like the author Scott Culp hasn't met his quarterly quota for marketing FUD and so has thrown that *cough* article together to make up for it.
  • boy, we're sure learning that lesson fast!
  • by 11thangel ( 103409 ) on Wednesday October 17, 2001 @05:35PM (#2443441) Homepage
    They're trying to say "stop finding holes faster than we can make...err...fix them". My my what a cheap political backstab.
  • there are 3 of them pointing at you....

    I think the author/Microsoft should not forget this.

  • Right (Score:5, Informative)

    by IsleOfView ( 23825 ) <[moc.ufgum] [ta] [nosaj]> on Wednesday October 17, 2001 @05:36PM (#2443447) Homepage
    Much better that the "black-hats" "secretly" circulate the information.

    If the security experts didn't find and publish the holes, good luck on Microsoft making the fixes a "priority".
  • history (Score:5, Informative)

    by Telastyn ( 206146 ) on Wednesday October 17, 2001 @05:36PM (#2443450)
    Yes, just like keeping Cryptography code secret improves the algorithm. I agree that the company should be notified before the flaw is announced, but seriously, the entire point of a security response center is to inform users as to vulnerabilities...
    • Re:history (Score:2, Insightful)

      Actually most security firms who announce these flaws inform the company first to allow them to fix the bug/flaw before it can be used as a tool for harm.

      just my $.02
  • Yes, I realize that this isn't a fix, but if obscurity makes it just a little harder for people to do bad things then I don't see why it's such a bad thing. Especially in the case of Microsoft, where only they can fix the source, why should the security companies publish the source on the web instead of sending it directly to Microsoft? What gains are there to be had by having the source displayed all over the web?
    • by jonnyq ( 103252 )
      Standard courtesy and many mailing lists recommend just this approach, but many companies have a really bad track record about fixing bugs that no one knows about. Therefore, after a period of time, the exploit is published to "force" the company to deal with it.
    • by btellier ( 126120 ) <(moc.liamg) (ta) (reilletb)> on Wednesday October 17, 2001 @05:47PM (#2443557)
      Sigh. OK, let's try this again: BECAUSE OTHERWISE PEOPLE WON'T TAKE YOU SERIOUSLY. Now let's review: how many people patched the hole from eEye's .IDA advisory when it came out without an exploit? Not bloody many. How many patched it after Code Red made it abundantly clear that this was a very exploitable vulnerability? Hundreds of thousands more. The obvious truth here is that full disclosure and the inclusion of exploit scripts opens people's eyes to the fact that people are going to use this hole to break into YOUR system.

      By not giving exploit scripts you allow sysadmins to become lazy. They figure "Nah, I'll just wait until an exploit comes out before I patch it", while the underground hax0r scene is already searching out your box.
    • by irix ( 22687 ) on Wednesday October 17, 2001 @05:47PM (#2443559) Journal
      What gains are there to be had by having the source displayed all over the web?

      What makes you think that not having it displayed all over the web will make it any less available to the people who want to do harm?

      Black hats are going to get ahold of the exploit, even if the source code to it is not published on or bugtraq. All that not publishing it there does is provide a false sense of security.

      Publishing the details in a high-visibility location does several things:

      • gets the company who wrote the software much more motivated to write a fix
      • allows other people to verify that the vulnerability exists
      • lets you and me (white hats) not make the same mistakes that led to the vulnerability in our code

      The script kiddiez are going to get these exploits when they download them from their favourite r00t kit location. Let's not pretend that keeping the same exploits from the general public really makes things much safer.

    • by Phydoux ( 137697 )
      I just can't agree with this.

      The problem with not publishing details of the exploit is that Microsoft and other companies will look at it and say "This doesn't look like that bad of a problem, and besides, nobody will find it easily. No sense in making a patch for it. The potential abuse of this hole is negligible."

      So then we end up being at the mercy of the Black Hats to quietly spread the information among themselves.
      No, keeping things secret simply won't help.
    • Re:They Have a Point (Score:5, Informative)

      by blakestah ( 91866 ) on Wednesday October 17, 2001 @06:19PM (#2443769) Homepage
      What gains are there to be had by having the source displayed all over the web?

      1) The source display should allow any administrator to verify if he is vulnerable, and, after patching, that he is no longer vulnerable.

      2) The source code should demonstrate the exact nature of the problem for the coders who wish to fix it. They would otherwise need to write their own exploit to test their fixes.

      3) The source code should apply pressure to the software maker. It is akin to being flogged in public. The whole world knows you are vulnerable, and you ought to fix it.

      4) The source code of the exploit should make the exploit obvious but not damage the system.

      Source code exploits will ALWAYS be published in places where some crackers can get them. The challenge is designing an updating system that allows all users to apply patches in a timely fashion. I think Debian is actually closest on this one.

      Microsoft is really going to get nowhere on this one. I've read accounts of people who send exploits to Microsoft in secrecy, and then HAVE to publish the code so that Microsoft is forced to fix the problem. If it doesn't impact Microsoft's marketing, Microsoft doesn't care.

      The other issue that relates to this one is being as secure as possible by default. This principle applies to all Internet usage of computers. Yet Microsoft blatantly violated it in the following: Office Macros, email attachments, NT/Windows 2000 Server config (running IIS by default), Hotmail...
  • I've heard this one! (Score:5, Interesting)

    by AntiFreeze ( 31247 ) <antifreeze42@gmail. c o m> on Wednesday October 17, 2001 @05:37PM (#2443458) Homepage Journal
    If you don't tell anyone that the construction company used shoddy materials, then no one will figure out how to make the building collapse!
  • by Ripp ( 17047 ) on Wednesday October 17, 2001 @05:37PM (#2443462) Journal
    ...Windows®, Linux, and Solaris®...

    What's wrong with that picture? Linux *is also* a registered trademark, Microsoft. I suggest you recognize it as such.

    Linus, kick some ass here.
  • by cnkeller ( 181482 ) on Wednesday October 17, 2001 @05:37PM (#2443463) Homepage
    Gun manufacturer Smith & Wesson has asked that ammunition maker Black Talon stop making bullets since "guns don't kill people, bullets do."

    Because, if the security hole didn't exist in the first place, then Microsoft wouldn't have to worry about all this bad press starting to cost them business; and more importantly mindshare.

  • New Slogan (Score:3, Insightful)

    by InfinityWpi ( 175421 ) on Wednesday October 17, 2001 @05:38PM (#2443465)
    "Hackers don't hack Windows machines... bad code hacks Windows machines."

    Y'know, if they didn't have so many bugs, there wouldn't be anything to release, and therefore, no 'weapons' to build... it's kinda like an army making a tank with wooden components inside, then getting pissy when the other army brings flamethrowers and napalm...

  • by crumbz ( 41803 )
    Information Anarchy? What? Do doctors complain about information anarchy when patients research treatments for diseases on the web?
    Doesn't this guy realize that our systems are becoming more secure every day, now that people have to take worms, trojans, and DoS attacks seriously? Maybe he should get back to securing Microsoft products and spend less time complaining about system admins trying to share info.
  • by BrookHarty ( 9119 ) on Wednesday October 17, 2001 @05:38PM (#2443473) Homepage Journal
    If we can't eliminate all security vulnerabilities, then it becomes all the more critical that we handle them carefully and responsibly when they're found.

    And hiding all these security flaws would have made Windows more secure? Your product is not secure; stop passing the buck.
  • Still leaking? (Score:4, Insightful)

    by Col. Klink (retired) ( 11632 ) on Wednesday October 17, 2001 @05:39PM (#2443474)
    And just how am I supposed to know I've patched a hole if I don't know how it gets exploited?
  • by Mike Schiraldi ( 18296 ) on Wednesday October 17, 2001 @05:39PM (#2443475) Homepage Journal
    It's high time we stopped teaching Chemistry and Biology! People are spreading information that essentially maps out exactly how the human body works, which allows for all sorts of chemical and biological weapons! And explosives, too!

    In other news, Master Lock wants to release a new model made out of twine and butter. They ask the community to avoid discussing the security of the lock, since they anticipate it getting deployed widely, and once the ButterLock is being used to secure mission-critical systems, it will be extremely important to keep its flaws a secret.
    • This comment makes an important point: The only way we can learn about security is by studying security problems.

      In an adversarial environment like computer security, you can't be any good if you only understand one side of the game. Even if you are a "good guy" you must understand how to be a "bad guy" to be worth anything. It's impossible to write antivirus software or truly understand viruses without looking at the code for them. It's impossible to develop a good cryptosystem if you don't have a detailed understanding of why previous systems are bad.

      Many people don't quite get how a buffer overflow works (or why they should check buffer limits in their code) until someone describes how the attack works in painstaking detail. This person will now check their buffer limits, but they also know how to write a buffer overflow attack if they are maliciously inclined - a net gain in my book.

      In more general terms, the Army trains people who will never do anything except defend their position in how to attack. Law schools don't break criminal law into classes on prosecution and defense, and police study methods used by criminals. But hey, Microsoft says software is too complex for this traditional process of learning how to defend.
  • by Xzzy ( 111297 ) <sether AT tru7h DOT org> on Wednesday October 17, 2001 @05:39PM (#2443479) Homepage
    By putting out solid information, people who find these exploits are doing two things: Giving the programmers specific information with which to fix the problems, and giving script kiddies some really damn good instructions for hacking into a box.

    The system relies on the reaction time of the programmers.. can they supply a patch before the crackers supply an exploit?

    Those of us in the *nix world seem to do pretty good.. for all sorts of reasons you don't need to go into here. Windows? Heh.. it can take months for something to get patched up. No wonder he's mad that these 'blueprints' are being provided. It's simply an extension of the security through obscurity mode of thought.
    • by btellier ( 126120 ) <(moc.liamg) (ta) (reilletb)> on Wednesday October 17, 2001 @05:59PM (#2443666)
      Back when I did audits in my spare time I followed a specific set of guidelines.

      1. always notify the vendor first.
      2. always wait 2 weeks for a patch.
      3. don't release on weekends or very late at night (sorry, other side of the globe.. i'm in the US)
      4. always supply an exploit, if one is possible.

      And even with all this in place sysadmins still wouldn't patch the problem until they got hacked. If someone doesn't patch their system after all of these steps nothing can make them.

      Scott Culp seems to think that the number of hacks will go down solely by eliminating #4, while in actuality it's skipping the other 3 steps that gets more boxes hacked. With your average buffer overflow, thousands of hackers could write an exploit within maybe two or three hours of seeing a bugtraq post. Not notifying the vendor can cause havoc for weeks before a patch is issued.
  • by The Panther! ( 448321 ) <panther.austin@rr@com> on Wednesday October 17, 2001 @05:39PM (#2443487) Homepage
    In other news, Microsoft has purchased a secret weapon of vast destruction, code named Blamethrower. It strikes out at random targets, displacing reality at near the speed of light.

  • by Corgha ( 60478 ) on Wednesday October 17, 2001 @05:40PM (#2443489)
    it's high time that computer users insisted that the security community live up to its obligation to protect them

    I'm not sure whether anyone, other than law-enforcement agents, is obligated to protect computer users, but if anyone is, surely the people who produce the software are more obligated to prevent or solve these problems than are those who merely report on them.

    Is this, along with the U.S. government's warning to news agencies to be careful what they broadcast, a sign of a new trend?
  • by Derkec ( 463377 ) on Wednesday October 17, 2001 @05:40PM (#2443494)

    Several times we've seen security experts say to a large company, "Hey! there's a nasty exploit here!" The large company indicates they'll fix it and ignores the problem. Only when the exploit is publicized do companies like Microsoft actually take the effort to fix the code. Releasing the information is the only way. Perhaps out of courtesy the security community could give the company with the bug a week's notice.
    • by uhmmmm ( 512629 ) on Wednesday October 17, 2001 @06:02PM (#2443683) Homepage
      Perhaps out of courtesy the security community could give the company with the bug a week's notice.

      From the bugtraq FAQ:

      0.1.8 What is the proper protocol to report a security vulnerability?

      A sensible protocol to follow while reporting a security vulnerability is as follows:
      1. Contact the product's vendor or maintainer and give them a one week period to respond. If they don't respond post to the list.
      2. If you do hear from the vendor, give them what you consider appropriate time to fix the vulnerability. This will depend on the vulnerability and the product; it's up to you to make an estimate. If they don't respond in time, post to the list.
      3. If they contact you asking for more time consider extending the deadline in good faith. If they continually fail to meet the deadline post to the list.

      When is it advisable to post to the list without contacting the vendor?
      1. When the product is no longer actively supported.
      2. When you believe the vulnerability is being actively exploited and not informing the community as soon as possible would cause more harm than good.
  • by Suicyco ( 88284 ) on Wednesday October 17, 2001 @05:40PM (#2443495) Homepage

    I thought most security exploits that get released by the major groups are usually passed through MS first, allowing them time to provide a patch before the details of the exploit are issued. So why are they so upset? It's not MS nor the security experts who are at fault for not patching machines. At least by publishing them they are provided an incentive to stay on top of security holes, instead of simply allowing them to remain secret. I mean, none of the major exploits lately (Code Red, Nimda, etc.) have used unpublished exploits. So this shows a failing in MS's procedures for keeping admins informed and a failing in the admins for keeping on top of their networks. It's such a non-issue, I think MS just wants to preempt lawsuits or some other such silliness.
    • I thought most security exploits that get released by the major groups are usually passed through MS first and allow them time to provide a patch before issuing the details of the exploit.

      It raises the question though: if the supposed reason that the source is released is because the vendor didn't respond to the threat, then why does the source to the exploit STILL get released even if the vendor DOES issue a patch?
  • by FatRatBastard ( 7583 ) on Wednesday October 17, 2001 @05:41PM (#2443501) Homepage
    I'd wager this is the first volley in another push by MS to cover their asses by legal means. I see another push to make the release of any information that shows weaknesses a criminal activity. Expect lots of flag waving, anti-terrorism rhetoric sprinkled throughout, and some suspect demands that seem more motivated by gaining market share than by protecting machines.

    God damn... when did I get so cynical? Oh yeah, after reboot #3 of NT 4.0 today. {grumble grumble grumble}
  • It is a good point (Score:3, Interesting)

    by ujube ( 98058 ) on Wednesday October 17, 2001 @05:41PM (#2443502) Homepage
    Although the source of the message certainly lessens its credibility, they have a point. Things like the Honeynet Project have shown a huge _lack_ of intelligent attackers in the wild. The endless waves of attacks filling the internet are pulled off by script kiddies, many of whom can't mount a drive, compile a file, or even write a script. And we are feeding them. If we really want things to get better, we have to find a societal solution for the problem. It certainly seems to me that the full disclosure paradigm at least needs to be scrutinized, if not dumped altogether.
    • by gnovos ( 447128 )
      Ask yourself this, which is more dangerous to your business?

      A) Skr1pt Kiddi3z who will enter your system and possibly scrawl "I love you rhonda!" on your front page.

      B) Highly professional "black hat" who will enter your system, steal your new revolutionary prototype plans and provide them for a small charge to your competitor who will get it to market six months before you.

      The current system allows lots of the first kind, but helps prevent many of the second. Microsoft's proposal will reverse this. High profile attacks generally do very little "real" damage, normally just some downtime or some ugly defacements. The attacks that you don't see, or in this case, WON'T EVER SEE, are the ones that will turn your business from market leader to bankruptcy auction...

  • by Maul ( 83993 ) on Wednesday October 17, 2001 @05:41PM (#2443503) Journal
    Code snippets are beneficial, so long as companies like Microsoft promptly provide security updates. I think that examples of attacks provide sysadmins and coders insight into how these holes in security come about, and give software authors an opportunity to think about what holes they might inadvertently be putting in their software.

    Of course, MS just wants to skirt responsibility for negligence on their part.

  • Bug control (Score:3, Funny)

    by nougatmachine ( 445974 ) <johndagen AT netscape DOT net> on Wednesday October 17, 2001 @05:41PM (#2443505) Homepage
    Eh? The security community should stop documenting weaknesses?

    What a great idea! Then all the malicious hackers will know how to exploit security holes, while those in charge of security won't. Wait a second...isn't that kind of like asking security guards not to carry guns, because those guns might hurt someone?

  • Full disclosure? (Score:5, Insightful)

    by Pete (big-pete) ( 253496 ) on Wednesday October 17, 2001 @05:41PM (#2443507)

    Hmm, this has always seemed to be a hot discussion...I'm all for full disclosure, but is it really necessary for people to include exploit code?

    One argument is that it can help people to test their systems for vulnerabilities, but I think that exploit code is not strictly necessary for this. People who really need it to test systems are in a position where they should have the capability or the resources to generate a "test script" for themselves, once given an accurate description of the vulnerability.

    Making code exploits freely available possibly creates more opportunity for the low-life script kiddies who often don't appreciate exactly what they are doing, or the mechanics of the exploits that they are using. Why should we make it easy for those guys?

    My opinion on this element of full disclosure is still not complete though, and I am fully prepared to be convinced... :)

    -- Pete.

    • by greygent ( 523713 ) on Wednesday October 17, 2001 @06:03PM (#2443693) Homepage
      Releasing exploit code prevents Microsoft from dragging their asses and claiming the vulnerability is "theoretical"...

      It's what L0pht prided themselves on for years, after having MS dismiss their whitepapers as improbable, theoretical, impossible, etc.
    • I'm all for full disclosure, but is it really necessary for people to include exploit code?

      some things are easiest to communicate with sample code. in the absence of the original source code, in which case you could say "look, this function is overrunning this buffer," it would probably be easiest to demonstrate the exact nature of a security flaw using exploit code. although even in the circumstances where you have the original source, having exploit code to look at couldn't hurt in fixing the problem.

      my personal feeling on this is that exploit code should first be sent to the maintainer of the original program, with a deadline for the release of a patch. there should also be a public release describing the problem in a very generic nature. after the deadline, release the exploit, even if the patch isn't out yet. this gives developers time to fix the problem without putting the exploit in the hands of script kiddies. plus, the developers are under a deadline to get it fixed. granted, it's entirely possible for the kiddies to already have code to exploit it, but why give them the tools before it's necessary?

  • by Insideo ( 171350 ) on Wednesday October 17, 2001 @05:42PM (#2443516)
    According to the article, each of the latest worm attacks was preceded by security bulletins which happened to contain exploit code.

    Hate to break it to MS, but all this indicates is that the security sites work. That's right. The people who have access to the code to fix the bugs were given notice. If these bulletins didn't exist, you can bet the worms would have still been created. Remember Code Red II? MS had a fix out months before CR2 hit the web, yet it still managed to infect thousands of machines.

    Security bulletins (even with exploits) are not the problem. The holes in buggy software are the problem.
  • Okay, (Score:4, Informative)

    by trilucid ( 515316 ) on Wednesday October 17, 2001 @05:42PM (#2443517) Homepage Journal

    here we go:

    "It's high time the security community stopped providing the blueprints for building these weapons..."

    How about providing the blueprints to your code, so we can secure the systems you release broken to begin with?

    I'm not anti-Microsoft (although I'm getting there, definitely getting there...); I also do Windows development in Visual Studio. I'm near the point of stopping that altogether, though. My company is already using Linux for damn near everything (including desktops, not just hosting) anyhow.

    This is more than just your average case of idiocy from MS. If I ran a pharmaceutical company, and a drug we produced killed 500 people, do you think the public would accept some excuse like this? "No, really, it's all the fault of the doctors who showed their patients how to take the pills..."

    Maybe not a perfect analogy, but equally stupid. When will they learn? Probably when Joe Customer starts realizing how indecent their blame machine really is. Apache isn't perfect, Linux isn't perfect... but we admit this and work toward solutions. Average Joe won't stay completely blind forever; most people aren't stupid (my faith in humanity talking here), and you can't fool anyone indefinitely.

    Damn, and I was cutting down on my smoking...

  • by LazyDawg ( 519783 ) <lazydawg AT hotmail DOT com> on Wednesday October 17, 2001 @05:42PM (#2443520) Homepage
    ... and just write pseudocode or a very detailed step-by-step description of what their code does. In the end script kiddies will have to learn to write their own leet tools, and may later on branch these skills into other areas.

    If security experts took the time to make exploit code an exercise for the reader, we might someday end up with skript kiddies who can even write their own hardware drivers for Linux. They might even learn to write and discover new exploits for Windows without the help of security experts.

    Microsoft got it on the nose this time :)
  • by SirSlud ( 67381 ) on Wednesday October 17, 2001 @05:43PM (#2443530) Homepage
    HAHAHAHAHAHA ... oh yeah, I can just see it .. this would allow their marketing/pr department to 'fix' each and every bug.

    Actually, sample code is a very good way to illustrate the severity of a bug.

    A bug might be the result of absolutely brutal programming, but require a programmer to jump through hoops to exploit it. In this sense, the bug isn't so bad, and users can assess the path to patching said holes. On the other hand, a bug could be the result of complex, innocent oversight which can be exploited with 3 lines of code.

    I, for one, think knowing the code to exploit the bug can give admins a good sense of addressing patch priorities.

    Yeah, the security pundits will tell me 'you should be patching 10 secs after the patch comes out regardless of severity', but if you really take that route, you're living in a vacuum. The rest of the world has to worry about priorities .. ie, that old limitation of 24 hrs in a day. Hell, with MS and a large enterprise network, you'd have to assign a full-time worker just to monitor and install patches.

    And I'm of the opinion that trusting MS's stance on the 'severity' of a given bug is about as big a security hole as you can have.

    (Please remember to flame me on both sides, for even cooking .... )
    • by WNight ( 23683 ) on Wednesday October 17, 2001 @06:30PM (#2443828) Homepage
      Real admins will tell you that you shouldn't go throwing patches on production machines until they've been tested, either by you on a redundant machine or by the community at large.

      Exploit code and exact details let you rig together protection with a firewall, or turning off an optional service, until you feel that a suitable patch is available.
  • by victim ( 30647 ) on Wednesday October 17, 2001 @05:44PM (#2443536)
    The security watchdogs of the net have no obligation to me. I am glad they do their tasks, but they owe me nothing.

    My software providers have an obligation to provide me with secure software or none at all. I commend both Debian and Apple for responding to their occasional security problems in a timely manner.

    In the olden days when watchdogs did not release sample code some software providers downplayed their flaws as theoretical problems. If the software providers had been responsive to security flaws, there would be no need for sample code.
  • by batobin ( 10158 ) on Wednesday October 17, 2001 @05:44PM (#2443541) Homepage
    How the hell is it the fault of the security experts? To be honest, someone will find the bug, whether it's a person with malicious intent or not. If such holes are posted, it gives the company the chance to fix them, so that fewer people are struck.

    If holes were not posted, the public would not even know their software is insecure, and it would surely take longer for any company to patch said holes.

    Finally, doesn't blame ultimately fall on the company who made the buggy software in the first place? If I come up with a mathematical formula that proves 2 + 2 = 5, and a math teacher proves that I'm incorrect, who's to blame here? Microsoft believes the math teacher is wrong, something which is obviously misguided.

    One final thing: I don't see Linux/BSD/Apple execs complaining.
  • linux exploits? (Score:5, Insightful)

    by Lxy ( 80823 ) on Wednesday October 17, 2001 @05:44PM (#2443542) Journal
    doing a quick search on bugtraq, I see a lot of linux exploit code too. Hmm... let's blame the linux exploit code for the net-stopping worms like... ummm... and also the.. ahhh... well, you know. No, Microsoft, making exploit code widely available does not make your product less secure. You do.
  • by Enonu ( 129798 ) on Wednesday October 17, 2001 @05:44PM (#2443543)
    I can imagine that this Scott Culp is very stressed out right now. Can you imagine being in this guy's position with worms like Code Red floating around?

    So what does he do? He posts an essay which is basically a reflection of his anxiety. However, he misses two very key points on why this information anarchy is a good thing.

    * Patches for popular software that are exploitable tend to come out real quick because the company has to save face and perhaps protect against liability suits.

    * A necessary fear is instilled into companies to put software through a security audit before it goes into production.

    I hope this guy takes a vacation somewhere on the beach to reflect on his thoughts.
  • by The Infamous TommyD ( 21616 ) on Wednesday October 17, 2001 @05:47PM (#2443555)
    I've heard this idea before including from my advisor. The idea is that releasing exploits to the public is creating an environment where it's too easy to hack machines.
    Unfortunately, it's simply untrue that there aren't positive reasons for releasing exploits.
    I can think of several: testing of machines (risky, but useful), understanding of vulnerability (CERT advisories are pretty much useless for this.), research.

    The most important of these (IMHO) is the understanding of the vulnerabilities. In the past, we didn't even talk about vulnerabilities in the open, and that's how we got the abhorrent state of affairs we have today. Security isn't even taught in computer science and engineering curricula, and when it is, it's treated as a separate set of classes. When I started working in infosec, I had no idea how the exploits worked and what the real coding vulnerabilities were. Without release of exploits, I probably still wouldn't.

  • by adturner ( 6453 ) on Wednesday October 17, 2001 @05:47PM (#2443565) Homepage
    This argument that Microsoft is making is the same stupid argument that was made by Richard M. Smith on Friday Aug 10, 2001 shortly after Code Red.

    The short story is that eEye's announcement had absolutely nothing to do with Code Red. The person(s) who developed Code Red figured out the exploit on their own. For more details check out Marc Maiffret's (of eEye) email to the Bugtraq list.

    People who argue that full disclosure is harmful just fail to realize the facts of the matter: the people who write these attacks aren't all script kiddies, and they're quite capable of developing attacks on their own. And the reality is that most vendors only respond to full disclosure to actually fix bugs (and even then it takes too long).

    Nuff said.
  • by TheEviscerator ( 240966 ) on Wednesday October 17, 2001 @05:48PM (#2443571) Homepage
    Ah yes, just found my "MSspin2english" translator. Let's see how those comments look now:

    "It's high time that the security industry stopped pointing out all of the blatant security flaws in our programs", Culp writes. "Since we insist on developing OSes and highly-integrated applications tuned for usability, rather than security, we can't make as much money as we're accustomed to making, what with all of these viruses/worms targeted at our products."

    Culp adds, "it's time that the security industry be held responsible for these worms and viruses, rather than the companies who make products such as ours. By pointing the finger at the amorphous 'security industry', we're better able to deflect blame for the recent rash of high-profile MS OS and web server exploits."

  • Just baffling (Score:3, Interesting)

    by ( 114827 ) on Wednesday October 17, 2001 @05:51PM (#2443603) Homepage
    information anarchy... This is the practice of deliberately publishing explicit, step-by-step instructions for exploiting security vulnerabilities, without regard for how the information may be used.

    I would suggest to Bill & Co. that it is published with the highest regard for how the information will be used. Just because it could be used in a negative way doesn't mean that nobody's thought about it. There's not a security guy out there who hasn't at some time weighed the pros and cons of releasing information like that.

    And am I the only one who is insulted by the gratuitous use of the word "weapons", so as to implicitly equate hacking with physical terrorism and fan the flames of paranoia?

  • Microsoft FUD (Score:3, Insightful)

    by Loewe_29 ( 459497 ) on Wednesday October 17, 2001 @05:53PM (#2443623)
    Microsoft is frantically trying to shift the blame from themselves following the Gartner Group's recommendation that people stop using IIS. It's not that MS developers focus solely on market share instead of quality and security (not that I blame the developers, since this is exactly what MS management wants and pays them for); it's that web-defacing juveniles are 'terrorists' and security researchers are 'anarchists'.

    MS had it too easy for too long regarding security issues, especially with the news media reporting Outlook vulnerabilities not as they really are, as a design flaw in Outlook, but as "e-mail viruses."

    "Behind every great fortune there is a crime."
    - Honoré de Balzac

    "You hear a lot about Bill Gates, don't you, whose net worth in January of the year 2000 was equivalent to the combined net worth of the hundred and twenty million poorest Americans, which says something, not only about the software imitator from Redmond, Washington, it says something about millions of workers who work year after year, decade after decade, and are essentially broke."
    - Ralph Nader

    • Re:Microsoft FUD (Score:3, Interesting)

      by Peaker ( 72084 )
      MS had it too easy for too long regarding security issues, especially with the news media reporting Outlook vulnerabilities not as they really are, as a design flaw in Outlook, but as "e-mail viruses."

      They are a flaw in Windows itself, mainly.
      This flaw is a flaw of *nix systems as well, and the flaw is using ACL's, rather than Capability systems.

      Read the Confused Deputy paper for more information.
  • by PRickard ( 16563 ) <{moc.cb-sm} {ta} {rp}> on Wednesday October 17, 2001 @05:56PM (#2443645) Homepage
    "Yes," said kingdom spokesman Jim Dilldunnam, "the Emperor is aware of his nudity. But His Majesty's nakedness would not be a problem for the uneducated masses if you irresponsible media types would just cease telling them about it."
  • by Happy Monkey ( 183927 ) on Wednesday October 17, 2001 @05:59PM (#2443670) Homepage
    Information Anarchy

    Expect to see this term bandied about frequently.
  • Isn't it ironic... (Score:3, Insightful)

    by Cybercifrado ( 467729 ) on Wednesday October 17, 2001 @06:04PM (#2443703)
    You post linux bugs to bugzilla and they thank you. You post M$ bugs publicly and they flame you. I think more than anything, M$ is pissed because more and more people are starting to realize what a true truckload of CRAP their OS really is. So, we post the bugs in an effort to encourage them to fix it, and for us to give them another chance. What do they do? They blame the very people who would help them fix their own stupid code. I mean, come on... it's high time they started taking responsibility for their inadequacies.
  • IMO, a response (Score:5, Interesting)

    by A_Non_Moose ( 413034 ) on Wednesday October 17, 2001 @06:05PM (#2443706) Homepage Journal
    The people who wrote them have been rightly condemned as criminals.

    Ok, I'm going to be snide, the author points to the exploitation tools, but one could also argue that windows (don't laff) "security model", closed source apps, IIS are the *initial* tools of exploitation. Lest I forget, Integration, legislation, co-opting, barriers to entry keep other (maybe better, maybe worse) products from hitting the market and (say it with me) promoting competition.

    It's high time the security community stopped providing blueprints for building these weapons. And it's high time computer users insisted that the security community live up to its obligation to protect them.

    Why? No one believed that certain (ford/chevy?) trucks would blow up like a bomb when hit from the side...what did they do? Yep, they *Proved IT*, by staging a scenario.
    And, not to pick nits or be too smarmy, but "we" are trying to protect users. The fact that PHBs and average users don't *listen* after the third, fourth, fifth time of being hacked, wormed, virused, or trojaned via Outlook, IIS, or IE seems to be nicely sidestepped.

    ...and if there hadn't been security vulnerabilities in Windows®, Linux, and Solaris®, none of them could have been written. This is a true statement, but it doesn't bring us any closer to a solution.

    Uh, yes it does: by choosing the most secure of the bunch! No platform is perfect, but if you choose the one with the best track record, gee, you get... surprise, surprise... less of a chance of being exploited. Once bitten, twice shy... but, then again, see my above paragraph about users/PHBs.

    ...information anarchy. This is the practice of deliberately publishing explicit, step-by-step instructions for exploiting security vulnerabilities, without regard for how the information may be used.

    Ok, I'll ignore the buzzword bingo opportunity, and point out that the author does "get it" a little, that the vulnerabilities mentioned had been patched weeks/months ahead of time.
    Ok, cool. Correct me if I am wrong, but I recall seeing a recent article where Microsoft said it needs to "prioritize" its patches, because, heh, it is confusing!!!

    The thing to be remembered in reading this article, as against its dangerous assumption, is this:
    If an exploit is found and is dangerous, "the security community" *needs* these exploits to tear into and discover how to fight whatever threatens the systems in question.
    I'd rather have a fully working exploit in the hands of a "white hat" than a "black hat".

    Don't forget, please, that most of the worms propagated as the result of *malicious* intent and were discovered, stopped, slowed by people with *clear/clean* intent.

    That fact seems to be missing.


    If I am right, I am right... but if I am wrong, show me I am wrong.

  • by Jetifi ( 188285 ) on Wednesday October 17, 2001 @06:18PM (#2443760) Homepage

    The people who found the .IDA exploit (eEye security) told MS, and waited until a patch was available before making the press release.

    Not only that, but Microsoft thanked eEye in their own press release.

    Not only that, but it has been proven beyond all doubt that Code Red and CRII were based on old exploit code, NOT eEye sample code.

    Not only that but the old exploit code that Code Red etc. re-hashed, exploited a hole that was fixed by MS in the traditional manner, i.e. with no exploit sample code published, etc. If the original exploit code that Code Red built on was made public in the same way as the .IDA vulnerability was, the f**kin' thing would never have happened, because every competent IDS system out there would have caught Code Red before it even got off the ground.

    The whole thing makes me sick. I can't believe that after Microsoft blitzing^W attempting to blitz the media with its "renewed security efforts" they let this slip past marketing. If this is what happened, then before they can even think about 'locking down' IIS, they need to examine their own attitude, and consider abandoning the tried-and-tested-and-FAILED 'security through obscurity' route.

  • by ikekrull ( 59661 ) on Wednesday October 17, 2001 @06:34PM (#2443856) Homepage
    'An administrator doesn't need to understand the problem in order to fix it'

    This is pure bullshit. It is *extremely* important to understand how these worms and viruses work in order to respond effectively to such threats.

    If I, as a programmer, were writing a web application in C that could potentially be remotely exploited via a buffer overflow, such information would be *absolutely fucking critical* to me, so that I can write safe code.

    M$ seem to suffer from the delusion that they are the only people in the world actually writing computer programs.

    This unbelievable arrogance is getting pretty tired, and I imagine that we'll be seeing some pretty big anti-M$ stances being taken by previously devout believers in the near future.

    If you can't put up, M$, then for Christ's sake shut up.

  • by Toodles ( 60042 ) on Wednesday October 17, 2001 @06:38PM (#2443887) Homepage
    Checking through BugTraq and NTBugTraq shows an alarming trend: companies don't care if someone finds an issue with their software. Let me give you an example:

    The Cisco 675 DSL router/modem. This device is in very widespread use in consumer home and SOHO environments. Other Ciscos in that line were included in a particular issue that caused the router to hang completely until power cycled. Cisco was first notified about this January 10, 2000 (no typo there, 01-10-00). A very easy-to-reproduce situation was shown to cause this. After 11 months of waiting and two notifications to Cisco, the notifier had given up on Cisco doing The Right Thing (c), and notified BugTraq about the problem in a post on Nov 28th, 2000. Users from around the world tested and verified the issue. Want to know what happened? Nothing. Not a peep from Cisco about this, until recently. The DOS vulnerability in the Cisco was never acknowledged by Cisco, and still isn't admitted. However, a notification of a DOS vulnerability was finally admitted by Cisco on 8-24-2001. Nineteen months after being notified. However, the entire reason for this wasn't the vulnerability mentioned, a skewed HTTP request, but simply the router's inability to handle multiple HTTP connections. Why? Code Red. The Code Red virus was banging on port 80 so hard that the routers would lock up hard and die until reset. Many thousands of DSL customers were affected by this, and IMHO a rework of the HTTP code, which should have been done over a year and a half before, would have prevented the entire nightmare of Code Red issues for owners of the Cisco 675 (their systems are another story, however).

    Checking for other 'exploit code' on the BugTraq list should show that the people who create it are responsible, usually doing no more than running a 'whoami' in the case of elevated privileges. They don't arm 'script kiddiez'; they do it themselves. However, the proof that a hole is exploitable is all someone needs to write their own. This is not a bad thing; this is a good thing.

    It is general policy on BugTraq that companies be notified and given sufficient time to resolve issues, usually 3 months or so. If that lapses, it is the infosec engineer's responsibility to post the exploit for the world. The company won't listen to the voice of one competent person, but they will listen when their entire customer base gets proof that the company shirked its responsibility to protect its customers.


  • by Andrew Dvorak ( 95538 ) on Wednesday October 17, 2001 @07:31PM (#2444108)

    It appears that releasing sample code exploiting flaws in computer systems has the advantage of placing increased pressure on the manufacturer to fix the bug. This is good, but it comes at a compromise which places serious risk on the consumers of the product. Once exploit code is released, the potential for damage to consumer systems is exponentially increased, because the tools to do damage are then available to anybody. Both sides have valid points, but perhaps a set of guidelines for reporting such bugs which takes into account the interests of all involved parties is crucial.

    As far as I am concerned, there are five levels of releasing this information which could be used to balance these interests:
    1. Say nothing, and somebody else will exploit the bug
    2. Release this information to the manufacturer of the software product and hope they do something about it
    3. Release a summary of the bug, enough so it is realized by the general public
    4. Release technical information on what theories are used to exploit the flaw
    5. Release the tools necessary to exploit the flaw

    The above could be thought of as an agenda for the order in which to release word of any flaws, where one step succeeds the other, starting at #2. Step 5 should be used with extreme caution; in other words, know what you're doing before taking this step, because then anybody can make a toy of the tool to execute the exploit on anybody's system.

  • by Tachys ( 445363 ) on Wednesday October 17, 2001 @07:38PM (#2444138)
    I found this story talking about a serious security problem in Novell GroupWise. But they say it is better if they do not tell you what the problem is. But apply the patch NOW.
  • by bug ( 8519 ) on Wednesday October 17, 2001 @07:40PM (#2444147)
    As a security researcher, I can say that this is a difficult issue. I certainly benefit from having access to exploit information in my research and testing, but just as certainly the public release of exploit code is a sword that cuts both ways. At issue in many current IT-related court cases is free speech with regard to software and source code. Examples here are cryptography export regulations court cases and DMCA-related court cases. The free speech argument here (and in my mind the most correct argument) is that, just as sheet music is the only practical and unambiguous method of communication for musicians, so source code is the only practical and unambiguous method of conveying ideas about computer-related subjects. In computer security, a related argument can be made that the only practical and unambiguous method of communicating ideas about security vulnerabilities is through exploit code and programs.

    The security community is so large and diverse that effective controls on exploit code and detailed vulnerability information are impossible. Who would determine who gets access? Microsoft? The US government? The only practical method is the public one.

    The enemy is not Microsoft's unwillingness to produce patches for their security vulnerabilities. They have actually proven to be one of the more cooperative vendors for recognizing flaws and producing and releasing patches, at least in recent times.

    The enemy is not the public release of explicit vulnerability information, which is necessary for security research.

    The enemy is also not the 13-year-old that breaks into computers. Fighting a war against 13-year-olds is a dumb war.

    The enemy is the fact that software vendors like Microsoft have consistently chosen to place their customers at a ridiculous amount of risk through default configurations of their software, and the fact that a 13-year-old can break into thousands of computers with little effort or skill.

    Why is it that default configurations of all major OSes (note that I'm not singling out Windows here, I'm saying all OSes) come with an absurd number of default services open? If the vast majority of customers do not need a service running, then it should not be running. How many nimda infections were from people who had no idea they were running a web server in the first place?

    Why is it that default configurations of most prominent workstation and network client software has poor default configurations, security-wise? Do most users out there really need ActiveX or Javascript in their email client? Not only no, but hell no.

    Yes, vulnerabilities do occur in all software. I don't think that anyone out there has any expectation for Microsoft or any other vendor to achieve perfection here. However, the issue here is that the default posture leaves users prone not just to known vulnerabilities, but to ones that have yet to be discovered.

    All software vendors (including but not limited to Microsoft) need to better examine the features of their products to discover potential points of attack. If the majority of users have no need for a particular feature that might be dangerous at some later point in time (e.g., mobile code capabilities, network services, modules to network services like IIS index server, etc.), then they should be disabled by default. Go ahead and make an easy-to-use checkbox for turning that kind of stuff on individually, but don't have it on by default.

    Microsoft has recently stated that it is beginning a new initiative to ship their products in secure configurations. I believe that they probably will succeed somewhat here, but we've been hearing similar lines of bull for so long that they have no credibility here until they actually prove it.

    Microsoft and other vendors should stop whining about the messengers, and should start shipping products with default configurations and initial postures that are likely to withstand existing and future attacks. Default configurations are enemy number one, not public vulnerability research. Let's see some proactive work being done instead of only reactive work. Microsoft has plenty of problems to fix in their own development processes before they worry about fixing the "problems" they feel the security community has.

  • by gotan ( 60103 ) on Wednesday October 17, 2001 @07:42PM (#2444159) Homepage
    The real problem is that all those security holes make their software look bad, especially compared to other software. When he mentions that software makers are more aware of security and faster at putting out patches, he conveniently forgets to mention that Microsoft specifically was extremely reluctant to react to security flaws until they were publicized widely. He also neglects to mention that it's not only important that there is a patch, but also to make people aware of it. It is very true that beyond the complexity of "Hello World" there is rarely a piece of perfect software, but he addresses that statement to the wrong people. The security experts already know this, but the customers of Microsoft very obviously don't.

    Also it must be said, that most of the damage the worms did was to the image of microsoft. These worms showed the extent of vulnerable machines all over the world, but had there been no worms there would be even more vulnerable machines now, with backdoors open to anyone intelligent and motivated enough to write their own exploit. All those worms that draw so much publicity to the security flaws are just the tip of the iceberg. Someone really malicious will have the abilities to sneak in through a hole without a ready script, and he won't do it with a worm that creates a lot of traffic, but silently install a backdoor and do whatever he set out to do.

    When calculating the damage a worm did, that always includes a complete system check for data integrity, backdoors, etc. But if the hole was there and had to be patched, who is to say there wasn't someone or something other than a well-known worm that came in, installed backdoors, and corrupted data? And that person will probably do far more damage, since he probably chose that computer for a reason. Much damage is already done when the system had a hole and was attackable for some time, since that means that system security and integrity can no longer be guaranteed. Many worms only make people aware of that fact.

    Microsoft could do far more for the security of their products by making people aware of the importance of patches, but probably that doesn't sit well with marketing.
  • Why not? (Score:3, Insightful)

    by CAIMLAS ( 41445 ) on Wednesday October 17, 2001 @07:42PM (#2444160) Homepage
    Hey, they want the security sites to leave exploits alone, so why not? If they want to blame their best source of solutions for the problems, let them. Watch the security sites disappear, or rather, stop supporting MS stuff. Then watch MS software go to hell as exploit after exploit rips it apart.
  • This is all bull (Score:5, Insightful)

    by Erore ( 8382 ) on Wednesday October 17, 2001 @07:52PM (#2444206)
    I have about 50 Microsoft NT servers from 3.50 thru Windows 2000 REGISTERED with Microsoft. They have my name, my address, my e-mail address, my telephone number.

    Never once did they contact me or send me a CD with security patches on it. Never did they send me an email to go to a website to download a fix.

    I was told, when I registered my product, that they would keep me informed. They have failed to do so.

    The recent exploits of IIS were from known problems that had previous patches. Many users did not patch their system. They did not know that they had to patch their system. Despite Microsoft knowing who the users of NT IIS were, they did not attempt to contact those users and let them know that patches were available.

    Not only that, until recently Microsoft made it very difficult to find security patches. Their website is large and complex, and items change location all the time. In the past five years finding patches for security fixes of NT systems has gone from extremely easy, to nearly impossible, to finally getting organized and easier again.

    Why is it, that after the outbreak of Code Red, it took days before information was available from a link on Microsoft's main page? Because it is bad marketing. Instead I have to go deeper to find that information. There isn't even a generic link for security from the main page.

    When you do get to their security page, you are told that Microsoft is doing the radical step of giving Security Tool Kits away for FREE!!! Amazing, you bloody well better give it to me for free. It's your buggy code that had the problem in the first place. I'm a registered user, I haven't received a kit yet.

    Microsoft is finally starting to take some initiative with this security thing. But they shouldn't run around pointing fingers at anyone other than themselves.
    • Re:This is all bull (Score:4, Interesting)

      by sheldon ( 2322 ) on Thursday October 18, 2001 @02:17AM (#2445497)
      is too hard to find?
      • Microsoft sits on registration data about what users have what product, and those registration data contain contact information.

        When you register a Microsoft product, they thank you by sending you advertisement material. No critical upgrades or anything to that effect. AOL sends off CD-ROMs to everybody in America - for free, hoping a few will try out their service. Microsoft customers have PAID for their product, but Microsoft does not provide them with even notifications of upgrades/updates.

        It's a sad, sad world.
  • by slashkitty ( 21637 ) on Wednesday October 17, 2001 @08:02PM (#2444280) Homepage
    I've tracked down a number of security bugs. After verifying their existence, I immediately contact the company(ies) involved. Guess what? They don't all respond. Some of the problems I have found are with browser software; it was only after I made them public, with sample code, that I was even contacted by the companies.

    In my most recent finds, not made public yet, there are a number of gross privacy bugs in some pretty major websites (similar to the Hotmail problems, but with banking, news, and e-commerce sites). Well, besides the difficulty in even finding someone in their organization to tell about the problem, once told they usually do nothing. So, the question I have is: what do I do now? Leave your banking site wide open, or make the exploit public to get something done?

  • by Wavicle ( 181176 ) on Wednesday October 17, 2001 @08:48PM (#2444510)
    "Security vulnerabilities are here to stay."

    That isn't the attitude I'd want someone providing my software to take.

  • by gizmo_mathboy ( 43426 ) on Wednesday October 17, 2001 @09:20PM (#2444643)
    It appears to me that Mr. Culp has misunderstood the purpose of the scientific method, the goal of which is to allow other researchers to reproduce one's test/bug/experiment.

    Programmers use code to share their experiments because it is the simplest, best, most consistent way to do so. Asking security and programming experts not to share "blueprints" is like asking toxicologists not to share the chemical formulas for the compounds they're researching.

    Mr. Culp needs to take a vacation away from the stress of his job and bone up on how to systematically approach problem solving and the sharing of information used to produce repeatable experiments/tests/exploits.
  • by wedogs ( 96591 ) on Wednesday October 17, 2001 @10:51PM (#2444955) Homepage
    Culp says...
    "First, let's state the obvious. All of these worms made use of security flaws in the systems they attacked, and if there hadn't been security vulnerabilities in Windows®, Linux, and Solaris®, none of them could have been written. This is a true statement, but it doesn't bring us any closer to a solution. While the industry can and should deliver more secure products, it's unrealistic to expect that we will ever achieve perfection. All non-trivial software contains bugs, and modern software systems are anything but trivial. Indeed, they are among the most complex things humanity has ever developed. Security vulnerabilities are here to stay."

    In the above argument, Culp uses truth to validate fallacy. It's true that no code is perfect. It's false that security will improve by mandating gag orders.

    More to the point, Microsoft is especially frustrated with flaws being exposed in their code. Frankly, I believe the hacks associated with Microsoft products differ fundamentally from the flaws discovered in Solaris and Linux. When a Linux exploit is discovered, hackers and maintainers consider it a design flaw. Therefore, exploits are generally fixed pretty fast on Linux -- usually within a few days. The same is true for Solaris.

    Apparently however, Microsoft does not consider certain exploits to be design flaws. Sometimes, hackers simply leverage "features" (e.g. undocumented APIs) that Microsoft deliberately designed into their applications and/or systems.

    Microsoft applications tend to execute arbitrary code. In other words, Microsoft deliberately empowers IIS, Exchange, Internet Explorer, Outlook and certain Office applications to execute unchecked commands fed over the Internet. Once hackers discover these (badly!) hidden APIs, it is only a matter of time before someone sends you an email which does something nasty to your computer.

    Interestingly, despite these obvious security issues, Microsoft wants their programs to execute arbitrary code. Remember the Microsoft Word viruses? Remember the Excel viruses? Heck, email viruses were fiction until Exchange and Outlook...

    Microsoft has had years of experience and feedback since the first MS-Word virus. Obviously, they understand the risks of allowing applications to execute arbitrary code. Nevertheless, they continue to build this ability into all their major products.

    In fact, arbitrary code execution appears to be one of the core technologies behind Microsoft's .NET initiative. I suspect this is why Microsoft was so reluctant to repair the security flaws within IIS. Code Red and Nimda exploit APIs that Microsoft intends for their .NET initiative. Disabling these APIs would cripple .NET. Therefore, Microsoft did not fix IIS until they could re-think the design of .NET.

    Culp states that vulnerabilities are here to stay. Most likely, .NET will reinforce his point. Given their track record, I expect .NET to be Microsoft's magnum opus of security deficiency.

    At this late stage, re-designing .NET is out of the question. I guess Culp feels controlling what the world is allowed to communicate about .NET is easier.
  • by cornice ( 9801 ) on Wednesday October 17, 2001 @11:27PM (#2445051)
    It's silent for years...

    Many diseases are deadly if untreated. Often the scariest ones are those that kill silently over time. This is what MS is asking for. Security holes can be an obvious pain or a silent killer. If exploits are not made public and fixed, then the exploit will be available to those who know the most and can potentially do the most harm. Once again, this is a plea for a solution that will benefit MS and nobody else.
  • by NZheretic ( 23872 ) on Thursday October 18, 2001 @02:27AM (#2445522) Homepage Journal
    If you have a few hours on your hands and *really* want to better understand what is going on, I would suggest that you sit back and listen to these speeches on Dr. Dobb's TechNetCast...

    If you're looking for an authority on the subject, they come no higher than Dr. Blaine Burnham, Director, Georgia Tech Information Security Center (GTISC) and previously with the National Security Agency (NSA),

    "Meeting Future Security Challenges" (stream_id=411)

    If you listen to Dr. Burnham's speech you will understand why it is so important to keep "pushing" Microsoft on its inherent lack of security.

    If you want to sleep at night, don't listen to the following speech by Avi Rubin

    "Computer System Security: Is There Really a Threat" (stream_id=354)

    If you listen to the above speech then you will begin to understand Steve Gibson's apocalyptic visions.

    And if you want more, on the effect of broadband access:

    "Broadband Changes Everything" (stream_id=478)

    Directly relating to DDoS (Distributed Denial of Service):

    "Analyzing Distributed Denial of Service Tools: The Shaft Case" (stream_id=482)

    and "Denial of Service" (stream_id=417)

    And if you want to get *really* technical, listen to how difficult it is to trace spoofed packets [Warning - this is heavy tech]:

    "Tracing Anonymous Packets to Their Approximate Source" (stream_id=48)

    "I would rather have Loki uncover and exploit our inherent weaknesses now than have the Ice Giants do so at Ragnarok. - David Mohring"

  • EULA (Score:4, Funny)

    by skabb ( 115949 ) on Thursday October 18, 2001 @03:35AM (#2445604)
    Probably the next thing in the MS EULA is:
    Any SECURITY HOLE bundled with the SOFTWARE PRODUCT is the property of Microsoft and protected by copyright laws and international copyright treaties.

  • by skabb ( 115949 ) on Thursday October 18, 2001 @04:45AM (#2445683)
    When a vulnerability shows up on a site like SecurityFocus, specifying a vulnerability in a Microsoft product, e.g. "A specially crafted URL will overwrite your files", and there is no information on what the specially crafted URL looks like, and there is no fix available from Microsoft or others, do you feel more secure?

    Perhaps you could block the request in your packet-filtering system, or at least log it, but without knowing what to look for... what do you do?

    And, knowing that experienced black-hat crackers also read SecurityFocus and sites like this, they don't need anything more than this information (there is a buffer overflow in IIS...) and then they have a target for what to do for the next couple of hours. It's a competition, you know. The best crack wins. Copying a published exploit doesn't give a cracker much credit, but the first one to discover a "new" one gets a lot of attention...

    We need to understand the psychology of what makes a crack worthwhile. A published exploit is one every script kiddie can duplicate, but sysadmins can also counter it quickly (provided that they read the right forums, as all sysadmins should!).
    But a hint of an unpublished exploit gives the black-hats something to compete for: who is the first to make the best crack? And the poor end-user doesn't even know what to look for...

    Second: published exploits are easy to scan for; known but unpublished exploits will fluctuate in their signature.
    E.g. given a specific HTTP GET request to look for in the logs, you just scan your logs for exactly the string published in the exploit (or put it in your packet filter). An unpublished exploit will result in several different cracks using the same vulnerability, but probably varying a bit in exploit methodology, making it harder to scan for.
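    The log-scanning idea above can be sketched in a few lines. This is a minimal illustration, not a real intrusion-detection tool: the signature below is the well-known IIS Unicode directory-traversal request string used by Nimda-era attacks, and the log excerpt is made up for the example.

    ```python
    # Sketch: scan web-server access-log lines for a published exploit
    # signature. A published exploit gives you an exact string to match;
    # an unpublished one would vary, which is the point made above.

    # The widely published IIS Unicode directory-traversal request;
    # any fixed string taken from a published exploit works the same way.
    PUBLISHED_SIGNATURE = "/scripts/..%c0%af../winnt/system32/cmd.exe"

    def scan_log_lines(lines):
        """Return the log lines containing the known exploit signature."""
        return [line for line in lines if PUBLISHED_SIGNATURE in line]

    # Hypothetical log excerpt, for illustration only.
    sample_log = [
        'GET /index.html HTTP/1.0',
        'GET /scripts/..%c0%af../winnt/system32/cmd.exe?/c+dir HTTP/1.0',
        'GET /images/logo.gif HTTP/1.0',
    ]

    suspicious = scan_log_lines(sample_log)
    print(len(suspicious))  # 1
    ```

    The same exact-match idea is what a packet filter or IDS rule would use; an attack that varies its encoding per attempt defeats it, which is why unpublished variants are harder to catch.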

    Would you dare to use your car if the factory sent you a note that "it has a fault" without providing any details of the fault? It could be anything...

  • by jlemmerer ( 242376 ) on Thursday October 18, 2001 @05:12AM (#2445713) Homepage
    ... saying if you don't publish blueprints, nobody will know where the door is. Microsoft should be glad that all these reports are out, for this is a way they can react to them. It is no good putting one's head in the sand. The programmers at Redmond - the ones who left the doors open in the first place - should just read the reports and fix the holes. Maybe this would contribute to the "Win2000 is secure" image Microsoft wants to build up in public opinion. If you don't publish the exploits, end users will think "Hey, M$ software is more secure than all others, because there are no exploits found on the net", trust the security M$ offers, and wonder why their computer is hacked every second week by somebody who has the knowledge but doesn't publish it.
  • by maxpublic ( 450413 ) on Thursday October 18, 2001 @06:51AM (#2445830) Homepage
    "Information anarchy"? And yet no post I've seen so far challenges the terminology as being inherently useless PR. Microsoft is damned good at dreaming up push-button catch-phrases that become subconsciously accepted even by its detractors as viable descriptors. It's the same sort of tactic that convinces people that EULAs are *actual laws*, when they're nothing of the sort - insofar as I know, no court of law has ever upheld them as valid contractual agreements.

    The phrase "information anarchy" has no coherent meaning other than that defined through MS's statement, and even there it seems to mean "any public publication of security weaknesses in MS products". Yet MS pushes the phrase over and over again in the attempt to link security reports with the word "anarchy" in the hopes that the average idiot will associate publication of flaws in MS software with irresponsible, undemocratic behavior.

    Most of us geeks catch this sort of thing right off (e.g., "viral software") but notice - this one slipped under the wire with nary a comment that I could see.

    One of MS's greatest weapons is the introduction of language which precludes one mindset and reinforces another - social programming at its finest. Accepting the phrase "information anarchy" as valid substantiates the idea that such a thing actually exists, even if you argue that the security reports don't constitute an example of this nebulous "information anarchy".

    There's no such animal. It's a buzzword with zero meaning other than a poor attempt to lay the blame for MS security holes on people other than those employed at MS.

    Perhaps we should retaliate with terminology of our own that's intimately associated with a Microsoft argument or product. Any ideas (other than the "Microsoft worms" phrase of some days back)?

  • by iceT ( 68610 ) on Thursday October 18, 2001 @11:19AM (#2446806)
    Not publishing the details of a virus does NOT stop the virus from existing. The "I Love You" virus didn't have a post-mortem until AFTER it took down entire corporations' networks. Not publishing the details of the virii will NOT stop other hackers from getting their hands on the virus code and making modifications to it.

    Culp is assuming that the only people smart enough to decipher the viruses are the security people themselves, and THAT is the false assumption that invalidates the theory behind the 'essay'...

I go on working for the same reason a hen goes on laying eggs. -- H.L. Mencken