Security

Microsoft Blames the Messengers 731

Roger writes: "In an essay published on microsoft.com, Scott Culp, manager of the Microsoft Security Response Center, calls on security experts to 'end information anarchy' and stop releasing sample code that exploits security holes in Windows and other operating systems. 'It's high time the security community stopped providing the blueprints for building these weapons,' Culp writes in the essay. 'And it's high time that computer users insisted that the security community live up to its obligation to protect them.' See the story on CNET News.com."
  • Right (Score:5, Informative)

    by IsleOfView ( 23825 ) <jason@nOsPaM.mugfu.com> on Wednesday October 17, 2001 @05:36PM (#2443447) Homepage
    <sarcasm>
    Much better that the "black-hats" "secretly" circulate the information.
    </sarcasm>

    If the security experts didn't find and publish the holes, good luck getting Microsoft to make the fixes a "priority".
  • history (Score:5, Informative)

    by Telastyn ( 206146 ) on Wednesday October 17, 2001 @05:36PM (#2443450)
    Yes, just like keeping cryptography code secret improves the algorithm. I agree that the company should be notified before the flaw is announced, but seriously, the entire point of a security response center is to inform users of vulnerabilities...
  • by Insideo ( 171350 ) on Wednesday October 17, 2001 @05:42PM (#2443516)
    According to the article, each of the latest worm attacks was preceded by security bulletins which happened to contain exploit code.

    Hate to break it to MS, but all this indicates is that the security sites work. That's right. The people who have access to the code to fix the bugs were given notice. If these bulletins didn't exist, you can bet the worms would have still been created. Remember Code Red II? MS had a fix out months before CR2 hit the web, yet it still managed to infect thousands of machines.

    Security bulletins (even with exploits) are not the problem. The holes in buggy software are the problem.
  • Okay, (Score:4, Informative)

    by trilucid ( 515316 ) <pparadis@havensystems.net> on Wednesday October 17, 2001 @05:42PM (#2443517) Homepage Journal

    here we go:

    "It's high time the security community stopped providing the blueprints for building these weapons..."

    How about providing the blueprints to your code, so we can secure the systems you release broken to begin with?

    I'm not anti-Microsoft (although I'm getting there, definitely getting there...); I also do Windows development in Visual Studio. I'm near the point of stopping that altogether, though. My company is already using Linux for damn near everything (including desktops, not just hosting) anyhow.

    This is more than just your average case of idiocy from MS. If I ran a pharmaceutical company, and a drug we produced killed 500 people, do you think the public would accept some excuse like this? "No, really, it's all the fault of the doctors who showed their patients how to take the pills..."

    Maybe not a perfect analogy, but equally stupid. When will they learn? Probably when Joe Customer starts realizing how indecent their blame machine really is. Apache isn't perfect, Linux isn't perfect... but we admit this and work toward solutions. Average Joe won't stay completely blind forever; most people aren't stupid (my faith in humanity talking here), and you can't fool anyone indefinitely.

    Damn, and I was cutting down on my smoking...

  • by batobin ( 10158 ) on Wednesday October 17, 2001 @05:44PM (#2443541) Homepage
    How the hell is it the fault of the security experts? To be honest, someone will find the bug, whether they have malicious intent or not. If such holes are posted, it gives the company the chance to fix them, so that fewer people are hit.

    If holes were not posted, the public would not even know their software is insecure, and it would surely take longer for any company to patch said holes.

    Finally, doesn't blame ultimately fall on the company who made the buggy software in the first place? If I come up with a mathematical formula that proves 2 + 2 = 5, and a math teacher proves that I'm incorrect, who's to blame here? Microsoft believes the math teacher is wrong, something which is obviously misguided.

    One final thing: I don't see Linux/BSD/Apple execs complaining.
  • by adturner ( 6453 ) on Wednesday October 17, 2001 @05:47PM (#2443565) Homepage
    Microsoft is making the same stupid argument that Richard M. Smith <rms@privacyfoundation.org> made on Friday, Aug 10, 2001, shortly after Code Red.

    The short story is that eEye's announcement had absolutely nothing to do with Code Red. The person(s) who developed Code Red figured out the exploit on their own. For more details check out Marc Maiffret's (of eEye) email to the Bugtraq list: http://www.securityfocus.com/cgi-bin/archive.pl?id=1&mid=203550

    People who argue that full disclosure is harmful just fail to realize the facts of the matter: the people who write these attacks aren't all script kiddies, and they're quite capable of developing attacks on their own. And the reality is that most vendors only actually fix bugs in response to full disclosure (and even then it takes too long).

    Nuff said.
  • by btellier ( 126120 ) <btellierNO@SPAMgmail.com> on Wednesday October 17, 2001 @05:59PM (#2443666)
    Back when I did audits in my spare time I followed a specific set of guidelines.

    1. Always notify the vendor first.
    2. Always wait two weeks for a patch.
    3. Don't release on weekends or very late at night (sorry, other side of the globe... I'm in the US).
    4. Always supply an exploit, if one is possible.

    And even with all this in place sysadmins still wouldn't patch the problem until they got hacked. If someone doesn't patch their system after all of these steps nothing can make them.

    Scott Culp seems to think that the number of hacks will go down solely by eliminating #4, while in actuality the other three steps are the ones that get more boxes hacked. With your average buffer overflow, thousands of hackers could write an exploit within maybe two or three hours of seeing a Bugtraq post (a sketch of why follows). Not notifying the vendor can cause havoc for weeks before a patch is issued.
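
    To make that concrete, here is a minimal sketch of the pattern behind most of these advisories. Everything in it is hypothetical (the function, the buffer size, the input), but once a Bugtraq post names an unchecked copy like this, a crash proof-of-concept is only a few lines:

        #include <stdio.h>
        #include <string.h>

        /* Hypothetical vulnerable routine: copies attacker-controlled
           input into a fixed-size stack buffer with no length check. */
        static void handle_request(const char *input)
        {
            char buf[64];
            strcpy(buf, input);  /* overflows buf when input exceeds 63 bytes */
            printf("handled: %s\n", buf);
        }

        int main(void)
        {
            /* The "exploit": any input longer than the buffer. */
            char oversized[256];
            memset(oversized, 'A', sizeof(oversized) - 1);
            oversized[sizeof(oversized) - 1] = '\0';
            handle_request(oversized);  /* smashes the stack: instant crash PoC */
            return 0;
        }

    Turning a crash like that into something worse takes more skill, but the point stands: the hard part is knowing the bug exists, not writing the trigger.
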
  • by uhmmmm ( 512629 ) <.uhmmmm. .at. .gmail.com.> on Wednesday October 17, 2001 @06:02PM (#2443683) Homepage
    Perhaps out of courtesy the security community could give the company with the bug a week's notice.

    From the Bugtraq FAQ [securityfocus.com]:

    0.1.8 What is the proper protocol to report a security vulnerability?

    A sensible protocol to follow while reporting a security vulnerability is as follows:
    1. Contact the product's vendor or maintainer and give them a one-week period to respond. If they don't respond, post to the list.
    2. If you do hear from the vendor, give them what you consider appropriate time to fix the vulnerability. This will depend on the vulnerability and the product; it's up to you to make an estimate. If they don't respond in time, post to the list.
    3. If they contact you asking for more time, consider extending the deadline in good faith. If they continually fail to meet the deadline, post to the list.

    When is it advisable to post to the list without contacting the vendor?
    1. When the product is no longer actively supported.
    2. When you believe the vulnerability is being actively exploited and not informing the community as soon as possible would cause more harm than good.
  • Re:They Have a Point (Score:5, Informative)

    by blakestah ( 91866 ) <blakestah@gmail.com> on Wednesday October 17, 2001 @06:19PM (#2443769) Homepage
    What gains are there to be had by having the source displayed all over the web?

    1) The source display should allow any administrator to verify if he is vulnerable, and, after patching, that he is no longer vulnerable.

    2) The source code should demonstrate the exact nature of the problem for the coders who wish to fix it. They would otherwise need to write their own exploit to test their fixes.

    3) The source code should apply pressure to the software maker. It is akin to being flogged in public. The whole world knows you are vulnerable, and you ought to fix it.

    4) The source code of the exploit should make the exploit obvious but not damage the system.

    Exploit source code will ALWAYS be published in places where some crackers can get it. The challenge is designing an updating system that allows all users to apply patches in a timely fashion. I think Debian is actually closest on this one.

    Microsoft is really going to get nowhere on this one. I've read accounts of people who send exploits to Microsoft in secrecy, and then HAVE to publish the code so that Microsoft is forced to fix the problem. If it doesn't impact Microsoft's marketing, Microsoft doesn't care.

    The other issue related to this one is being as secure as possible by default. This principle applies to all Internet usage of computers, yet Microsoft blatantly violated it in the following: Office macros, email attachments, NT/Windows 2000 Server configuration (running IIS by default), Hotmail...
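
    As a sketch of point 1 above, an administrator's check can be as small as the program below. The probe request, the port, and the response marker are placeholders for whatever a real advisory would actually specify; nothing here is a real exploit.

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>

        /* Placeholders: a real advisory supplies the actual request and
           the response that distinguishes patched from unpatched hosts. */
        #define PROBE  "GET /vulnerable-endpoint HTTP/1.0\r\n\r\n"
        #define MARKER "Server: ExampleServer/1.0"

        int main(int argc, char **argv)
        {
            if (argc != 2) {
                fprintf(stderr, "usage: %s <ip-address>\n", argv[0]);
                return 2;
            }

            struct sockaddr_in addr;
            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_port = htons(80);
            if (inet_pton(AF_INET, argv[1], &addr.sin_addr) != 1) {
                fprintf(stderr, "bad address\n");
                return 2;
            }

            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                perror("connect");
                return 2;
            }

            char resp[4096];
            memset(resp, 0, sizeof(resp));
            write(fd, PROBE, strlen(PROBE));
            read(fd, resp, sizeof(resp) - 1);
            close(fd);

            /* Run before and after patching: the verdict should change. */
            puts(strstr(resp, MARKER) ? "vulnerable" : "not vulnerable (or filtered)");
            return 0;
        }
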
  • Re:RTFA (Score:5, Informative)

    by 0xA ( 71424 ) on Wednesday October 17, 2001 @06:24PM (#2443790)
    For the closed-source world, I believe that if you discover an exploit, it is better to send full details to the vendor ASAP, and to release a general statement about a potential vulnerability in the software to the general public, with just enough information for the end user to determine the severity and criticality of the bug.

    Speaking as an IIS admin, I get really pissed when I can't find sample code for an exploit. I need to be able to test my systems against a newly published exploit; if I don't have a way to do this, all I can do is apply the hotfix and hope it works. What if I want to set up some stateful inspection on my firewall just in case? How do I test that? Without sample code I have no way to really know whether I am vulnerable or not. IMHO, not testing these things would be a pretty irresponsible approach to managing a datacenter.

  • Re:RTFA (Score:5, Informative)

    by Todd Knarr ( 15451 ) on Wednesday October 17, 2001 @06:25PM (#2443794) Homepage

    Except that that was tried. What happened was that the vendors responded with "We can't reproduce that; you must be mistaken; there's no hole in our product." After a while, the security community came to the conclusion that the only way to get vendors to wake up and actually fix their products was to release enough details that, if there was any question whether the hole existed, the skeptic could recreate the exploit, try it, and see for himself. That leaves the vendor with no way to spin the story, which is what Microsoft is really pissed off about.

  • Re:MS (Score:2, Informative)

    by rgmoore ( 133276 ) <glandauer@charter.net> on Wednesday October 17, 2001 @06:29PM (#2443821) Homepage
    Administrators need to pick the best tool for the job whatever the vendor.

    Of course that assumes that the people who are in charge of keeping things secure actually have the authority to pick the tools they'll be using to do so. Sadly, that's often not the case. Decisions about things like which operating system to use are made by people higher up in the company, and the poor Admins are stuck trying to do the best they can with the tools they're given.

  • by ikekrull ( 59661 ) on Wednesday October 17, 2001 @06:34PM (#2443856) Homepage
    'An administrator doesn't need to understand the problem in order to fix it'

    This is pure bullshit. It is *extremely* important to understand how these worms and viruses work in order to respond effectively to such threats.

    If I, as a programmer, were writing a web application in C that could potentially be remotely exploited via a buffer overflow, such information would be *absolutely fucking critical* to me, so that I can write safe code.

    M$ seem to suffer from the delusion that they are the only people in the world actually writing computer programs.

    This unbelievable arrogance is getting pretty tired, and I imagine that we'll be seeing some pretty big anti-M$ stances taken by previously devout believers in the near future.

    If you can't put up, M$, then for Christ's sake shut up.

  • Re:They Have a Point (Score:2, Informative)

    by 0xA ( 71424 ) on Wednesday October 17, 2001 @06:39PM (#2443891)
    I have no problem with security experts blackmailing MS by saying "release a patch within a few days or I release the code!" But the current assumption that the problem is fixed as soon as a patch is released does far more harm than good. Yes, they are fully within their rights to release the code, but does it do any good besides making them feel righteous?

    If you have a half-assed decent network admin, most of the time you don't even need the patch. If I see an exploit that tries to run cmd.exe, for example, I'll just filter it at the router; it will never even reach the web server. I'm not saying I wouldn't apply the patch ASAP, but vendor patches are NOT the only way to protect yourself from many of these exploits. Now, if I didn't have a sample exploit, how am I supposed to protect myself? (A toy sketch of the filtering idea follows.)
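
    Conceptually the filter is that simple. The sketch below is only a toy (the request strings are made up for illustration, and a real router or proxy rule would also decode URL escapes before matching), but it shows the idea:

        #include <ctype.h>
        #include <stdio.h>
        #include <string.h>

        /* Toy content filter: returns 1 if the request should be dropped.
           Only normalizes case; a real rule must decode URL escapes too. */
        static int should_drop(const char *request)
        {
            char lowered[2048];
            size_t i;
            for (i = 0; request[i] != '\0' && i < sizeof(lowered) - 1; i++)
                lowered[i] = (char)tolower((unsigned char)request[i]);
            lowered[i] = '\0';
            return strstr(lowered, "cmd.exe") != NULL;
        }

        int main(void)
        {
            const char *attack = "GET /scripts/..%5c../winnt/system32/CMD.EXE?/c+dir HTTP/1.0";
            const char *normal = "GET /index.html HTTP/1.0";
            printf("attack -> %s\n", should_drop(attack) ? "drop" : "pass");
            printf("normal -> %s\n", should_drop(normal) ? "drop" : "pass");
            return 0;
        }
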

  • Re:They Have a Point (Score:2, Informative)

    by Dexx ( 34621 ) on Wednesday October 17, 2001 @06:41PM (#2443898) Homepage
    As well, it lets security guys go to their managers with something to point to.

    "See, we're vulnerable. We need to patch this right away. And update our firewall rules while we're at it."

    Plus, grabbing an exploit off the 'net and going through a system in about 10 minutes makes a decent demonstration for the boardroom. "See, anybody can do this, almost this quickly. Now, about that budget..."
  • by Anonymous Coward on Wednesday October 17, 2001 @07:01PM (#2443986)
    Silly Geek, why, to test the fix, of course.

    ac
  • by Andrew Dvorak ( 95538 ) on Wednesday October 17, 2001 @07:31PM (#2444108)

    Releasing sample code that exploits flaws in computer systems places increased pressure on the manufacturer to fix the bug. This is good, but it comes at a cost that puts the consumers of the product at serious risk: once exploit code is released, the potential for damage to consumer systems rises sharply, because the tools to do damage are then available to anybody. Both sides have valid points, so a set of bug-reporting guidelines that takes the interests of all involved parties into account is crucial.

    As far as I am concerned, there are five levels of releasing this information which could be used to balance these interests:
    1. Say nothing, and somebody else will exploit the bug.
    2. Release the information to the manufacturer of the software product and hope they do something about it.
    3. Release a summary of the bug, enough so that the general public is aware of it.
    4. Release technical information on the theory behind exploiting the flaw.
    5. Release the tools necessary to exploit the flaw.

    The above could be treated as a schedule for releasing word of any flaw, each step following the previous one, starting at #2. Step 5 should be used with extreme caution; in other words, know what you're doing before taking it, because at that point anybody can make a toy of the tool and execute the exploit on anybody's system.

  • by spectecjr ( 31235 ) on Wednesday October 17, 2001 @08:54PM (#2444543) Homepage
    I mean, the whole buffer overflow thing Code Red exploited, that is something that you can't just have happen accidentally... that had to be coded into it.

    No, actually, it's a direct side effect of the C standard libraries. Things like strcpy, strcat, sprintf... all of these are buffer overflows waiting to happen (a sketch follows at the end of this comment).

    For example, there's a buffer overflow (probably unintentional... unless you're a conspiracy theorist like yourself) just waiting for someone to exploit it in the Mozilla image-handling code. Just imagine: a Linux virus that spreads by someone sending a carefully crafted image file to your system. Everything would look fine on the surface, but that image file contains compressed code that expands in such a way that it causes a buffer overflow.

    ... or are you saying that the Mozilla coders intended it to be a security hole?

    Simon
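
    As promised above, a minimal sketch of why those calls are dangerous and what the bounded replacements look like (the buffer size and input are made up):

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            const char *name = "a-string-much-longer-than-the-buffer";
            char buf[16];

            /* The overflow-prone originals, left disabled:
               strcpy(buf, name);
               sprintf(buf, "user=%s", name); */

            /* Bounded versions: never write past sizeof(buf). */
            strncpy(buf, name, sizeof(buf) - 1);
            buf[sizeof(buf) - 1] = '\0';   /* strncpy may leave buf unterminated */
            printf("copy:   %s\n", buf);

            snprintf(buf, sizeof(buf), "user=%s", name);  /* always terminates */
            printf("format: %s\n", buf);
            return 0;
        }
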
  • by glens ( 6413 ) on Wednesday October 17, 2001 @09:40PM (#2444726)
    Giving up the fairly rare opportunity to moderate on this one...

    A couple of points need making in light of your expressed views:

    Fairly often, if the poor soul who is on the receiving end of a Black Talon is wearing good denim or better, the center cavity will plug with material, resulting in much less expansion, or none at all. That also means the cutting edges don't get formed too well. This article [firearmstactical.com] seems to provide a sane description of the round.

    If one is in a military environment, they'll be looking at the smaller-diameter end of a full-metal-jacketed small-arms projectile, or there'll be "war crimes" to answer for. Last I heard, anyway.

  • Re:MS FUD (Score:2, Informative)

    by Thurn und Taxis ( 411165 ) on Wednesday October 17, 2001 @11:36PM (#2445069) Homepage
    All of these worms made use of security flaws in the systems they attacked, and if there hadn't been security vulnerabilities in Windows®, Linux, and Solaris®...

    For that matter, Linux® is also a registered trademark [linuxjournal.com].

    My favorite part, though, is "This is a true statement...." It's true in the same sense that "Hitler, Mahatma Gandhi, and Mother Teresa were collectively responsible for the deaths of 6 million Jews" is a true statement.

  • by NZheretic ( 23872 ) on Thursday October 18, 2001 @02:27AM (#2445522) Homepage Journal
    If you have a few hours on your hands and *really* want to better understand what is going on, I would suggest that you sit back and listen to these speeches on Dr. Dobb's TechNetCast...

    If you're looking for an authority on the subject, they come no higher than Dr. Blaine Burnham, Director of the Georgia Tech Information Security Center (GTISC) and previously with the National Security Agency (NSA):

    "Meeting Future Security Challenges"

    http://www.technetcast.com/tnc_play_stream.html?stream_id=411

    If you listen to Dr. Burnham's speech you will understand why it is so important to keep "pushing" Microsoft on its inherent lack of security.

    If you want to sleep at night, don't listen to the following speech by Avi Rubin:

    "Computer System Security: Is There Really a Threat"

    http://technetcast.ddj.com/tnc_play_stream.html?stream_id=354

    If you listen to the above speech then you will begin to understand Steve Gibson's apocalyptic visions.

    And if you want more, on the effect of broadband access:

    "Broadband Changes Everything"

    http://www.technetcast.com/tnc_play_stream.html?stream_id=478

    Directly relating to DDoS (Distributed Denial of Service):

    "Analyzing Distributed Denial of Service Tools: The Shaft Case"

    http://www.technetcast.com/tnc_play_stream.html?stream_id=482

    and "Denial of Service"

    http://www.technetcast.com/tnc_play_stream.html?stream_id=417

    And if you want to get *really* technical, listen to how difficult it is to trace spoofed packets [warning: this is heavy tech]:

    "Tracing Anonymous Packets to Their Approximate Source"

    http://www.technetcast.com/tnc_play_stream.html?stream_id=48

    "I would rather have Loki uncover and exploit our inherent weaknesses now than have the Ice Giants do so at Ragnarok. - David Mohring"
