Security

Microsoft Blames the Messengers (731 comments)

Roger writes: "In an essay published on microsoft.com, Scott Culp, Manager of the Microsoft Security Response Center, calls on security experts to 'end information anarchy' and stop releasing sample code that exploits security holes in Windows and other operating systems. 'It's high time the security community stopped providing the blueprints for building these weapons,' Culp writes in the essay. 'And it's high time that computer users insisted that the security community live up to its obligation to protect them.' See the story on CNET News.com."
  • by A_Non_Moose ( 413034 ) on Wednesday October 17, 2001 @05:35PM (#2443444) Homepage Journal
When you point a finger, there are 3 of them pointing back at you....

    I think the author/Microsoft should not forget this.

    Moose
  • New Slogan (Score:3, Insightful)

    by InfinityWpi ( 175421 ) on Wednesday October 17, 2001 @05:38PM (#2443465)
    "Hackers don't hack Windows machines... bad code hacks Windows machines."

    "Y'know, if they didn't have so many bugs, there wouldn't be anything to release, and therefore, no 'weapons' to build... it's kinda like an army making a tank with wooden components inside, then getting pissy when the other army brings flamethrowers and napalm...

  • Still leaking? (Score:4, Insightful)

    by Col. Klink (retired) ( 11632 ) on Wednesday October 17, 2001 @05:39PM (#2443474)
    And just how am I supposed to know I've patched a hole if I don't know how it gets exploited?
  • by Xzzy ( 111297 ) <sether@@@tru7h...org> on Wednesday October 17, 2001 @05:39PM (#2443479) Homepage
    By putting out solid information, people who find these exploits are doing two things: Giving the programmers specific information with which to fix the problems, and giving script kiddies some really damn good instructions for hacking into a box.

    The system relies on the reaction time of the programmers.. can they supply a patch before the crackers supply an exploit?

    Those of us in the *nix world seem to do pretty well... for all sorts of reasons we don't need to go into here. Windows? Heh... it can take months for something to get patched up. No wonder he's mad that these 'blueprints' are being provided. It's simply an extension of the security-through-obscurity mode of thought.
  • by Corgha ( 60478 ) on Wednesday October 17, 2001 @05:40PM (#2443489)
    it's high time that computer users insisted that the security community live up to its obligation to protect them

    I'm not sure whether anyone, other than law-enforcement agents, is obligated to protect computer users, but if anyone is, surely the people who produce the software are more obligated to prevent or solve these problems than are those who merely report on them.

    Is this, along with the U.S. government's warning to news agencies to be careful what they broadcast, a sign of a new trend?
  • by Derkec ( 463377 ) on Wednesday October 17, 2001 @05:40PM (#2443494)

    Several times we've seen security experts say to a large company, "Hey! there's a nasty exploit here!" The large company indicates they'll fix it and ignores the problem. Only when the exploit is publicized do companies like Microsoft actually take the effort to fix the code. Releasing the information is the only way. Perhaps out of courtesy the security community could give the company with the bug a week's notice.
  • by Suicyco ( 88284 ) on Wednesday October 17, 2001 @05:40PM (#2443495) Homepage

    I thought most security exploits that get released by the major groups are usually passed through MS first, allowing them time to provide a patch before the details of the exploit are issued. So why are they so upset? It's not MS nor the security experts who are at fault for not patching machines. At least by publishing them they are provided an incentive to stay on top of security holes, instead of simply allowing them to remain secret. I mean, none of the major exploits lately (Code Red, Nimda, etc.) have used unpublished exploits. So this shows a failing in MS's procedures for keeping admins informed and a failing in the admins for keeping on top of their networks. It's such a non-issue, I think MS just wants to preempt lawsuits or some other such silliness.
  • by FatRatBastard ( 7583 ) on Wednesday October 17, 2001 @05:41PM (#2443501) Homepage
    I'd wager this is the first volley in another push by MS to cover their asses by legal means. I see another push to make the release of any information that shows weaknesses a criminal activity. Expect lots of flag-waving, anti-terrorism rhetoric sprinkled throughout, and some suspect demands that seem more motivated by gaining market share than by protecting machines.

    God damn... when did I get so cynical? Oh yeah, after reboot #3 of NT 4.0 today. {grumble grumble grumble}
  • by jonnyq ( 103252 ) on Wednesday October 17, 2001 @05:41PM (#2443504)
    Standard courtesy and many mailing lists recommend just this approach, but many companies have a really bad track record about fixing bugs that no one knows about. Therefore, after a period of time, the exploit is published to "force" the company to deal with it.
  • Full disclosure? (Score:5, Insightful)

    by Pete (big-pete) ( 253496 ) <peter_endean@hotmail.com> on Wednesday October 17, 2001 @05:41PM (#2443507)

    Hmm, this has always seemed to be a hot discussion...I'm all for full disclosure, but is it really necessary for people to include exploit code?

    One argument is that it can help people to test their systems for vulnerabilities, but I think that exploit code is not strictly necessary for this. People who really need it to test systems are in a position where they should have the capability or the resources to generate a "test script" for themselves, once given an accurate description of the vulnerability.

    Making code exploits freely available possibly creates more opportunity for the low-life script kiddies who often don't appreciate exactly what they are doing, or the mechanics of the exploits that they are using. Why should we make it easy for those guys?

    My opinion on this element of full disclosure is still not complete though, and I am fully prepared to be convinced... :)

    -- Pete.

  • Re:history (Score:2, Insightful)

    by Ghost-in-the-shell ( 103736 ) on Wednesday October 17, 2001 @05:41PM (#2443508) Homepage
    Actually most security firms who announce these flaws inform the company first to allow them to fix the bug/flaw before it can be used as a tool for harm.

    just my $.02
  • by LazyDawg ( 519783 ) <lazydawg@nOspAm.hotmail.com> on Wednesday October 17, 2001 @05:42PM (#2443520) Homepage
    ... and just write pseudocode or a very detailed step-by-step description of what their code does. In the end script kiddies will have to learn to write their own leet tools, and may later on branch these skills into other areas.

    If security experts took the time to make exploit code an exercise for the reader, we might someday end up with skript kiddies who can even write their own hardware drivers for Linux. They might even learn to write and discover new exploits for Windows without the help of security experts.

    Microsoft got it on the nose this time :)
  • by Software ( 179033 ) on Wednesday October 17, 2001 @05:43PM (#2443535) Journal
    Publishing sample code really does make it much easier to exploit security holes. The main problem is clueless admins, not lack of information. The good admins need to know a lot of info about the problem to see if it affects them, but they don't need sample code. Not giving source would make it a bit harder for the black hats, although a sufficiently good explanation of the problem would be an excellent starting point for a script kiddie.

    At least the guy doesn't ignore that there are problems:

    First, let's state the obvious. All of these worms made use of security flaws in the systems they attacked, and if there hadn't been security vulnerabilities in Windows®, Linux, and Solaris®, none of them could have been written.
    I know I'm preaching to the anti-choir here, but he has a point.
  • linux exploits? (Score:5, Insightful)

    by Lxy ( 80823 ) on Wednesday October 17, 2001 @05:44PM (#2443542) Journal
    Doing a quick search on bugtraq, I see a lot of Linux exploit code too. Hmm... let's blame the Linux exploit code for the net-stopping worms like... ummm... and also the... ahhh... well, you know. No, Microsoft, making exploit code widely available doesn't make your product less secure. You do.
  • by Enonu ( 129798 ) on Wednesday October 17, 2001 @05:44PM (#2443543)
    I can imagine that this Scott Culp is very stressed out right now. Can you imagine being in this guy's position with worms like Code Red floating around?

    So what does he do? He posts an essay which is basically a reflection of his anxiety. However, he misses two very key points on why this information anarchy is a good thing.

    * Patches for popular software that are exploitable tend to come out real quick because the company has to save face and perhaps protect against liability suits.

    * A necessary fear is instilled into companies to put software through a security audit before it goes into production.

    I hope this guy takes a vacation somewhere on the beach to reflect on his thoughts.
  • by The Infamous TommyD ( 21616 ) on Wednesday October 17, 2001 @05:47PM (#2443555)
    I've heard this idea before including from my advisor. The idea is that releasing exploits to the public is creating an environment where it's too easy to hack machines.
    Unfortunately, it's simply untrue that there are no positive reasons for releasing exploits.
    I can think of several: testing of machines (risky, but useful), understanding of vulnerabilities (CERT advisories are pretty much useless for this), and research.

    The most important of these (IMHO) is the understanding of the vulnerabilities. In the past, we didn't even talk about vulnerabilities in the open and we have the abhorrent state of affairs we have today. Security isn't even taught in computer science and engineering curricula and when it is, it's treated as a separate set of classes. When I started working in infosec, I had no idea how the exploits worked and what the real coding vulnerabilities were. Without release of exploits, I probably still wouldn't.

  • by btellier ( 126120 ) <btellierNO@SPAMgmail.com> on Wednesday October 17, 2001 @05:47PM (#2443557)
    sigh. OK, let's try this again: BECAUSE OTHERWISE PEOPLE WON'T TAKE YOU SERIOUSLY. Now let's review: how many people patched the .IDA hole when eEye's advisory first came out without exploit code? Not bloody many. How many patched it after Code Red made it abundantly clear that this was a very exploitable vulnerability? Hundreds of thousands more. The obvious truth here is that full disclosure and the inclusion of exploit scripts open people's eyes to the fact that people are going to use this hole to break into YOUR system.

    By not giving exploit scripts you allow sysadmins to become lazy. They figure "Nah, I'll just wait until an exploit comes out before I patch it," while the underground hax0r scene is already searching out your box.
  • by irix ( 22687 ) on Wednesday October 17, 2001 @05:47PM (#2443559) Journal
    What gains are there to be had by having the source displayed all over the web?

    What makes you think that not having it displayed all over the web will make it any less available to the people who want to do harm?

    Black hats are going to get ahold of the exploit, even if the source code to it is not published on incidents.org or bugtraq. All that not publishing it there does is provide a false sense of security.

    Publishing the details in a high-visibility location does several things:

    • gets the company who wrote the software much more motivated to write a fix
    • allows other people to verify that the vulnerability exists
    • lets you and me (white hats) avoid making the same mistakes that led to the vulnerability in our code

    The script kiddiez are going to get these exploits when they download them from their favourite r00t kit location. Let's not pretend that not publishing the same exploits to the general public really makes things much safer.

  • by SIGFPE ( 97527 ) on Wednesday October 17, 2001 @05:47PM (#2443566) Homepage
    It's designed to help lobby politicians. Politicians, who only take up that job because they don't actually have any useful skills, are easily scared by dabblers in black arts like computer programming. It's very easy to whip up a fervor among this largely ignorant set of people making out that by writing code geeks are committing a great sin. Hell, if M$ and the media companies keep this up there may actually come a time when it's illegal for unlicensed individuals to write software on the grounds that you could use that to copy software, 'hack' computers and encrypt communications.
  • by Phydoux ( 137697 ) on Wednesday October 17, 2001 @05:49PM (#2443580)
    I just can't agree with this.

    The problem with not publishing details of the exploit is that Microsoft and other companies will look at it and say "This doesn't look like that bad of a problem, and besides, nobody will find that easily. No sense in making a patch for it. The potential abuse of this hole is negligible."

    So then we end up being at the mercy of the Black Hats to quietly spread the information among themselves.
    No, keeping things secret simply won't help.
  • by 4thAce ( 456825 ) on Wednesday October 17, 2001 @05:50PM (#2443585) Homepage
    How about establishing a group of white-hat hackers to whom one could submit the details of an exploit? They could attempt to confirm or refute the description of the problem and try to assist in developing security patches, without releasing the details of the exploit to the world at large. Then, after a suitable time for the patches to be applied, the full story could be told.
  • Re:MS (Score:2, Insightful)

    by darnellmc ( 524699 ) on Wednesday October 17, 2001 @05:51PM (#2443596)
    Exactly. MS needs to build products that are less vulnerable. Security companies not publishing code will allow MS to slack on fixes. The threat of hackers helps companies stay on their toes and release the best products and not half-step.

    What if security companies do not bow to MS's wishes? Will MS use the DMCA to bust them?
  • an analogy... (Score:2, Insightful)

    by killthiskid ( 197397 ) on Wednesday October 17, 2001 @05:51PM (#2443600) Homepage Journal
    How about lock-picking? There are all sorts of manuals on lock picking... most locks can be easily picked, but people don't do this for the most part. On top of that, people who are really concerned with security know that you need a decent lock (6+ tumblers) or it can be picked.

    Not a bad analogy: if you want to keep something safe and secure, you use a decent lock. Having the info about lock picking gives you the knowledge to do so, and allows you to know just how secure you are.

    The same could be said about software... and if you want a good lock, you educate yourself. MS makes bad locks... those locks can be fixed, but it requires the knowledge of the lock picking manual to do so.

    Don't get me wrong, Linux, BSD, etc. can be a weak lock too... but with OSS, not only do you have the manual, but you can disassemble and rebuild the lock on your own!

  • Microsoft FUD (Score:3, Insightful)

    by Loewe_29 ( 459497 ) on Wednesday October 17, 2001 @05:53PM (#2443623)
    Microsoft is frantically trying to shift the blame from themselves following the Gartner Group's recommendation that people stop using IIS. It's not that MS developers focus solely on market share instead of quality and security (not that I blame the developers, since this is exactly what MS management wants and pays them for), it's that web-defacing juveniles are 'terrorists' and security researchers are 'anarchists'.

    MS had it too easy for too long regarding security issues, especially with the news media reporting Outlook vulnerabilities not as they really are, a design flaw in Outlook, but as "e-mail viruses."

    "Behind every great fortune there is a crime."
    - Honoré de Balzac

    "You hear a lot about Bill Gates, don't you, whose net worth in January of the year 2000 was equivalent to the combined net worth of the hundred and twenty million poorest Americans, which says something, not only about the software imitator from Redmond, Washington, it says something about millions of workers who work year after year, decade after decade, and are essentially broke."
    - Ralph Nader

  • by ZaneMcAuley ( 266747 ) on Wednesday October 17, 2001 @05:57PM (#2443650) Homepage Journal
    Patches are only of use when they're applied. A patch not applied is as good as no patch at all.

    I'm just glad I'm a developer and not a sysadmin having to apply all those patches :D
  • by Happy Monkey ( 183927 ) on Wednesday October 17, 2001 @05:59PM (#2443670) Homepage
    Information Anarchy

    Expect to see this term bandied about frequently.
  • Re:MS (Score:3, Insightful)

    by SilentChris ( 452960 ) on Wednesday October 17, 2001 @06:00PM (#2443673) Homepage
    "If someone breaks into your house because you left the door unlocked, it's not YOUR fault, but the fault of whoever it was that showed the thief how to use a door knob."

    One would argue that a decent MS admin would remember to keep the door locked.

  • by dkemist ( 199970 ) on Wednesday October 17, 2001 @06:01PM (#2443678)
    Beyond the obvious irony that a Microsoft-ite is blasting the security community over flaws exploited in its own operating environments, I think the most interesting part of the article is Culp's statement "And it's high time ... the security community live up to its obligation to protect [software users]."

    What obligation is he talking about? For a company that epitomizes a big-money capitalist position, that's the most blatantly socialist comment I can imagine. Users collectively pay billions of dollars to software manufacturers each year for endless upgrades, yet he thinks a loosely knit group of professionals working in their free time somehow owes that same user base protection? That's bizarre.

    Further, the "Information Anarchy" thing sounds way too much like the "intellectual property virus" tagline they keep using for the GPL. It's a catchy management-speak phrase that sounds nasty and has little real meaning. It's easy to see how they can set the stage to condemn the whole open source community with all its open and anarchic ways that don't protect innocent users.

  • by pmz ( 462998 ) on Wednesday October 17, 2001 @06:01PM (#2443680) Homepage
    Full disclosure in security is based on the journalistic ideal that information should be shared openly. This is good and helps keep the big guys in check. It keeps them responsible.

    Think about how bad things would be if nothing got fixed, because the big guys never took security bugs seriously. Consider UNIX. What would UNIX be like today if all of the security holes were never reported and fixed? It would be like the swiss-cheese it was twenty years ago. Fortunately, UNIX has had its major holes plugged, and the documentation of these holes has made all of us better administrators and programmers.

  • by greg_barton ( 5551 ) <greg_barton@yaho ... m minus math_god> on Wednesday October 17, 2001 @06:03PM (#2443690) Homepage Journal
    "Arming the enemy"
    ...
    "It's high time the security community stopped providing the blueprints for building these weapons,"

    It's high time Microsoft stopped using inflammatory, militaristic-sounding rhetoric at a time of national crisis. There are too many actual terrorists about for Microsoft to be irresponsibly crying "terrorist."
  • by greygent ( 523713 ) on Wednesday October 17, 2001 @06:03PM (#2443693) Homepage
    Releasing exploit code prevents Microsoft from dragging their asses and claiming the vulnerability is "theoretical"...

    It's what L0pht prided themselves on for years, after having MS dismiss their whitepapers as improbable, theoretical, impossible, etc.
  • by DahGhostfacedFiddlah ( 470393 ) on Wednesday October 17, 2001 @06:04PM (#2443700)
    Supporters of information anarchy claim that publishing full details on exploiting vulnerabilities actually helps security...and bringing pressure on software vendors to address the vulnerabilities. These may be their intentions, but in practice information anarchy is antithetical to all three goals.

    All three goals? There's more on this later - but assuming that he's right with the rest of the entire essay, you'd expect there to be some pressure to address the vulnerabilities, would there not? He even goes further, saying that published exploits are antithetical to getting patches out. Brilliant logic.

    Providing a recipe for exploiting a vulnerability doesn't aid administrators in protecting their networks. In the vast majority of cases, the only way to protect against a security vulnerability is to apply a fix that changes the system behavior and eliminates the vulnerability; in other cases, systems can be protected through administrative procedures. But regardless of whether the remediation takes the form of a patch or a workaround, an administrator doesn't need to know how a vulnerability works in order to understand how to protect against it, any more than a person needs to know how to cause a headache in order to take an aspirin.

    I love this analogy. It actually works. For example - if I knew that the cause of my headaches was an allergy to certain foods, I could avoid those foods, and not have to take aspirin. If I know how an exploit works, I can prevent it with my own tools - firewall, etc. and not have to worry too much about the dubious patches.

    Likewise, if information anarchy is intended to spur users into defending their systems, the worms themselves conclusively show that it fails to do this. Long before the worms were built, vendors had delivered security patches that eliminated the vulnerabilities.

    Here he's not talking about e-mail "viruses", but worms. Specifically, worms targeting services people did not know they had on their systems. There was plenty of buzz about Code Red before most people had it, and the patch was applied to thousands of computers as people got worried. I'm not an advocate of having people upgrade through fear, but this still disproves his point.

    Now - here's his argument that published exploits take the pressure off vendors to publish fixes:

    Finally, information anarchy threatens to undo much of the progress made in recent years with regard to encouraging vendors to openly address security vulnerabilities. At the end of the day, a vendor's paramount responsibility is to its customers, not to a self-described security community. If openly addressing vulnerabilities inevitably leads to those vulnerabilities being exploited, vendors will have no choice but to find other ways to protect their customers.

    Crap... I'm trying to find a problem with the logic, but I can't actually understand the argument - anyone? What other ways are there for vendors to protect their customers than putting out fixes?

    Anyway, that said, I'd just like to express my condolences to the author. Did you see his title? "Manager of Microsoft Security Response Center." Poor guy is probably blamed for half the bugs in code he's never heard of. Can't blame him for venting a little. I just wouldn't have done it as publicly.
  • Isn't it ironic... (Score:3, Insightful)

    by Cybercifrado ( 467729 ) on Wednesday October 17, 2001 @06:04PM (#2443703)
    You post linux bugs to bugzilla and they thank you. You post M$ bugs publicly and they flame you. I think more than anything, M$ is pissed because more and more people are starting to realize what a true truckload of CRAP their OS really is. So, we post the bugs in an effort to encourage them to fix it, and for us to give them another chance. What do they do? They blame those who would help them fix it for their own stupid code. I mean come on...it's high time they started taking responsibility for their inadequacies.
  • by Saint Nobody ( 21391 ) on Wednesday October 17, 2001 @06:09PM (#2443732) Homepage Journal

    I'm all for full disclosure, but is it really necessary for people to include exploit code?

    Some things are easiest to communicate with sample code. Without the original source code (with which you could just say "look, this function is overrunning this buffer"), it is probably easiest to demonstrate the exact nature of a security flaw using exploit code. And even when you do have the original source, having exploit code to look at couldn't hurt in fixing the problem.

    My personal feeling on this is that exploit code should first be sent to the maintainer of the original program, with a deadline for the release of a patch. There should also be a public release describing the problem in very generic terms. After the deadline, release the exploit, even if the patch isn't out yet. This gives developers time to fix the problem without putting the exploit in the hands of script kiddies. Plus, the developers are under a deadline to get it fixed. Granted, it's entirely possible for the kiddies to already have code to exploit it, but why give them the tools before it's necessary?

  • Two words (Score:3, Insightful)

    by snake_dad ( 311844 ) on Wednesday October 17, 2001 @06:10PM (#2443738) Homepage Journal
    Reverse Engineering.

    Now burn, you troll :-)
  • Re:MS (Score:2, Insightful)

    by NoInfo ( 247461 ) on Wednesday October 17, 2001 @06:17PM (#2443754) Homepage Journal
    "If someone breaks into your house because you left the door unlocked, it's not YOUR fault, but the fault of whoever it was that showed the thief how to use a door knob."

    Poor analogy. More like:
    "If someone breaks into your house because you had a poorly made lock, it's not the lockmaker's fault, but the fault of whoever it was that told the thief about the faulty lock."

    It's not as cut and dried as some of you slashdotters paint it to be. Some might even say it's closer to this:
    "If someone breaks into your house because you had a lock that could be bypassed with a special lockpick, it's not the lockmaker's fault, but the fault of whoever it was that gave you the special lockpick"
  • by karlm ( 158591 ) on Wednesday October 17, 2001 @06:17PM (#2443755) Homepage
    But by the same token, there are still literally hundreds of semi-competent crackers sitting there just waiting for a good bug to come out. They can write the exploit themselves just as easily as the white hats can write their own test code. Many of these people have no problem circulating their home-brewed exploit code through the boards. In the case of closed-source software, excluding the exploit code means that several hundred black hats are working on exploit code at the same time that a few tens of developers (at most) are just trying to create a test case.


    Even for proprietary software, you want the bug fix to use the faster open-source development model for as long as possible, because most black hats have no qualms about open-sourcing their exploits. Hiding the exploit code actually hurts the developers more, especially if their manager only puts one or two programmers on the bug fix because s/he thinks there's no exploit in the wild.

  • by Anonymous Coward on Wednesday October 17, 2001 @06:21PM (#2443777)
    "At the end of the day, a vendor's paramount responsibility is to its customers, not to a self-described security community."

    Problem is, Microsoft's real customers are its stockholders, not the folks who buy their software (either OEMs or end-users).

    MS knows full well where its real responsibilities lie, and acts accordingly.
  • by fanatic ( 86657 ) on Wednesday October 17, 2001 @06:26PM (#2443805)
    but is an exploit REALLY necessary?

    It's very useful. For example, you can scan your network for machines running given servers, then launch exploits against all those that are running, as a double check to find unpatched servers. Since MS installs servers by default on damn near everything*, without advising the installer, this is the ONLY way to be sure you're not running unpatched servers. My organization found numerous vulnerable machines this way, even though we thought we had this nailed down. (A sketch of the first scanning step follows below.)

    *(example: Visio 2000 installs MSDE, a form of SQL Server, vulnerable. CiscoWorks 4.2 (getting old now) installs IIS, vulnerable.)
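
    A minimal sketch of that first scanning step, assuming a plain TCP connect test on port 80 (the addresses are placeholders, not the poster's actual tooling):

        /* Try a TCP connect to port 80 on each listed address. Hosts that
         * accept are running *some* web server and deserve a closer look.
         * POSIX sockets; substitute your own subnets for the placeholders. */
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <arpa/inet.h>
        #include <sys/socket.h>

        static int port_open(const char *ip, unsigned short port)
        {
            struct sockaddr_in addr;
            int ok, fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0)
                return 0;
            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_port = htons(port);
            inet_pton(AF_INET, ip, &addr.sin_addr);
            ok = (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0);
            close(fd);
            return ok;
        }

        int main(void)
        {
            const char *hosts[] = { "192.168.1.10", "192.168.1.11" }; /* placeholders */
            size_t i;
            for (i = 0; i < sizeof(hosts) / sizeof(hosts[0]); i++)
                if (port_open(hosts[i], 80))
                    printf("%s: port 80 open - check that it's patched\n", hosts[i]);
            return 0;
        }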
  • by 8bit ( 127134 ) on Wednesday October 17, 2001 @06:27PM (#2443811) Homepage
    Erg... don't compare windos to other products. Not only is he trying to discredit Linux and Solaris, but he's mooching their security record too. For the dumb people, they'll think Linux & Solaris were affected by Code Red/Nimda... for the slightly more informed but still stupid, they'll think windos is as secure as the other stuff. Tsk tsk tsk...
  • by WNight ( 23683 ) on Wednesday October 17, 2001 @06:30PM (#2443828) Homepage
    Real admins will tell you that you shouldn't go throwing patches on production machines until they've been tested, either by you on a redundant machine or by the community at large.

    Exploit code and exact details let you rig together protection with a firewall, or turning off an optional service, until you feel that a suitable patch is available.
  • by Toodles ( 60042 ) on Wednesday October 17, 2001 @06:38PM (#2443887) Homepage
    Checking through BugTraq [securityfocus.com] and NTBugTraq shows an alarming trend: companies don't care if someone finds an issue with their software. Let me give you an example:

    The Cisco 675 DSL router/modem. This device is in very widespread use in consumer home and SOHO environments. Other Ciscos in that line shared a particular issue that caused the router to hang completely until power cycled. Cisco was first notified about this January 10, 2000 (no typo there, 01-10-00). A very easy-to-reproduce situation was shown to cause this. After 11 months of waiting and two notifications to Cisco, the notifier had given up on Cisco doing The Right Thing (c), and notified BugTraq about the problem, in this [securityfocus.com] post, Nov 28th, 2000. Users from around the world tested, and verified the issue. Want to know what happened? Nothing. Not a peep from Cisco about this, until recently. The DoS vulnerability in the Cisco was never acknowledged by Cisco, and still isn't admitted. However, a DoS vulnerability notification was finally published by Cisco here [securityfocus.com], 8-24-2001. Nineteen months after being notified. However, the entire reason for this wasn't the vulnerability mentioned of a skewed HTTP request, but simply its inability to handle multiple HTTP connections. Why? Code Red. The Code Red worm was banging on port 80 so hard that the routers would lock up hard and die until reset. Many thousands of DSL customers were affected by this, and IMHO, a rewrite of the HTTP code that should have been done over a year and a half before would have prevented the entire nightmare of Code Red issues for owners of the Cisco 675 (their systems are another story, however).

    Checking for other 'exploit code' on the BugTraq list should show that the people who create it are responsible, usually doing no more than running a 'whoami' in the case of elevated privileges. They don't arm 'script kiddiez'; the kiddiez have to do it themselves. However, the proof that a hole is exploitable is all someone needs to write their own. This is not a bad thing; this is a good thing.

    It is general policy on BugTraq that companies be notified and given sufficient time to resolve issues, usually 3 months or so. If that lapses, it is the infosec engineer's responsibility to post the exploit for the world. The company won't listen to the voice of one competent person, but they will listen when their entire customer base gets proof that the company shirked its responsibility to protect its customers.

    Toodles

  • by schon ( 31600 ) on Wednesday October 17, 2001 @06:51PM (#2443945)
    It certainly seems to me that the full disclosure paradigm at least needs to be scrutinized, if not dumped altogether.

    In a word, no.

    Here's my response to people who feel the way you do:

    Without publicly available exploits, how does a system administrator really know that the vendor-supplied patch actually fixed the hole?

    This discussion comes up every so often on bugtraq, and it's quickly shown that the people who think this way either have something to hide, or haven't really thought things through.

    The best one was shortly after Code Red, when some self-described "security consultant" posted a letter criticizing eEye for publishing the advisory and sample code that described the hole it used.

    However, there was no response from him when it was pointed out that the Code Red virus was not based, in any way, on the eEye advisory! (Disassembling the code shows that it came from someone else who had discovered the hole independently of eEye)

    Never before had I seen the anti-disclosure argument used so well to contradict itself. (Every argument as to why you shouldn't disclose suddenly became an argument as to why you should disclose.)
  • Bad community move (Score:2, Insightful)

    by redzebra ( 238754 ) on Wednesday October 17, 2001 @06:55PM (#2443964)
    Basically what is being asked for is:

    1) Don't tell anybody of the problem.
    2) If you must tell them, don't prove it.

    It worries me that some people in the security community already seem to accept that the proof should be hidden. I wonder how long it will take until they think the facts should be hidden too.

    --red.
  • by hack0rama ( 253610 ) on Wednesday October 17, 2001 @07:03PM (#2443992) Homepage Journal
    Many points have been made: the need to know, pressuring the vendor for better security, prevention before the patch comes out, etc. Along with all these points I think there is also a strong fame factor as well. If I spend all my effort to track down a new exploit, then I don't want to secretly pass it on to the vendor. I want to publish it in all its gory details on bugtraq and let the whole world - especially the fellow geeks - know how clever I am. Don't deny me my >=15 minutes of bugtraq fame!
  • by Anonymous Coward on Wednesday October 17, 2001 @07:08PM (#2444016)

    Yeah, it's high time that Microsoft itself stopped providing all the tools that hackers require to break into customer systems... tools like Internet Explorer and Windows and Word and Outlook...

    EVERY Microsoft product provides all the ActiveX tools and security flaws that a hacker needs to break into company computers and compromise data, and it's about time THAT MICROSOFT STOPPED DISTRIBUTING DEFECTIVE CODE HARMFUL TO THE PUBLIC.

    When are the government and military going to realize that Microsoft itself is the threat to national security? These products themselves are the problem and the tools. Needless to say, Microsoft refuses to improve its software engineering acumen and produce quality products... they just continue to vend out the same junk, rake in obscene amounts of money and issue the occasional manifesto which absolves them of all blame and responsibility.

    -- Speaker
  • by gnovos ( 447128 ) <gnovos@NoSpAM.chipped.net> on Wednesday October 17, 2001 @07:20PM (#2444068) Homepage Journal
    Ask yourself this, which is more dangerous to your business?

    A) Skr1pt Kiddi3z who will enter your system and possibly scrawl "I love you rhonda!" on your front page.

    B) Highly professional "black hat" who will enter your system, steal your new revolutionary prototype plans and provide them for a small charge to your competitor who will get it to market six months before you.

    The current system allows lots of the first kind, but helps prevent many of the second. Microsoft's proposal will reverse this. High profile attacks generally do very little "real" damage, normally just some downtime or some ugly defacements. The attacks that you don't see, or in this case, WON'T EVER SEE, are the ones that will turn your business from market leader to bankruptcy auction...

  • by Anonymous Coward on Wednesday October 17, 2001 @07:31PM (#2444109)
    The rhetoric in the article is quite misleading:

    This is not a call to stop discussing vulnerabilities. Instead, it is a call for security professionals to draw a line beyond which we recognize that we are simply putting other people at risk. By analogy, this isn't a call for people to give up freedom of speech; only that they stop yelling "fire" in a crowded movie house.

    He purposely uses the canonical example of what type of speech is not considered good. He neglects to mention that in the example, there is supposed to be no fire. If however, there was really a fire, we all would want the person to yell out "Fire! over here. On the drapes next to the fourth balcony." Yelling "fire!" is more important, not less, when there is a crowd in the theater. More people are at risk. They deserve to know that.

    When researchers post detailed descriptions of security holes and exploits, they are yelling "fire" where there is actually a fire. When PR doublespeakers from Microsoft claim, as they have done elsewhere, that "Open Source results in security vulnerabilities," they are the ones yelling "fire" where there is in fact none.
  • by Wesley Everest ( 446824 ) on Wednesday October 17, 2001 @07:36PM (#2444132)
    At least Microsoft is using the term Anarchy [geocities.com] correctly. Anarchism means people helping each other through mutual aid [pitzer.edu] without trusting their security to a self-appointed entity acting in its own interest.


    When it comes to running computers safely and productively, protecting the interests of the users (us), who should we trust, Microsoft or ourselves?

  • by bug ( 8519 ) on Wednesday October 17, 2001 @07:40PM (#2444147)
    As a security researcher, I can say that this is a difficult issue. I certainly benefit from having access to exploit information in my research and testing, but just as certainly the public release of exploit code is a sword that cuts both ways. At issue in many current IT-related court cases is free speech with regard to software and source code. Examples here are cryptography export regulation court cases and DMCA-related court cases. The free speech argument (and in my mind the most correct argument) is that, just as sheet music is the only practical and unambiguous method of communication for musicians, source code is the only practical and unambiguous method of conveying ideas about computer-related subjects. In computer security, a related argument can be made that the only practical and unambiguous method of communicating ideas about security vulnerabilities is through exploit code and programs.

    The security community is so large and diverse that effective controls on exploit code and detailed vulnerability information are impossible. Who would determine who gets access? Microsoft? The US Government? The only practical method is the public one.

    The enemy is not Microsoft's unwillingness to produce patches for their security vulnerabilities. They have actually proven to be one of the more cooperative vendors for recognizing flaws and producing and releasing patches, at least in recent times.

    The enemy is not the public release of explicit vulnerability information, which is necessary for security research.

    The enemy is also not the 13-year-old that breaks into computers. Fighting a war against 13-year-olds is a dumb war.

    The enemy is the fact that software vendors like Microsoft have consistently chosen to place their customers at a ridiculous amount of risk through default configurations of their software, and the fact that a 13-year-old can break into thousands of computers with little effort or skill.

    Why is it that default configurations of all major OSes (note that I'm not singling out Windows here, I'm saying all OSes) come with an absurd number of default services open? If the vast majority of customers do not need a service running, then it should not be running. How many Nimda infections were from people who had no idea they were running a web server in the first place?

    Why is it that most prominent workstation and network client software has poor default configurations, security-wise? Do most users out there really need ActiveX or Javascript in their email client? Not only no, but hell no.

    Yes, vulnerabilities do occur in all software. I don't think that anyone out there has any expectation for Microsoft or any other vendor to achieve perfection here. However, the issue is that the default posture leaves users prone not just to known vulnerabilities, but to ones that have yet to be discovered.

    All software vendors (including but not limited to Microsoft) need to better examine the features of their products to discover potential points of attack. If the majority of users have no need for a particular feature that might be dangerous at some later point in time (e.g., mobile code capabilities, network services, modules to network services like IIS index server, etc.), then they should be disabled by default. Go ahead and make an easy-to-use checkbox for turning that kind of stuff on individually, but don't have it on by default.

    Microsoft has recently stated that it is beginning a new initiative to ship their products in secure configurations. I believe that they probably will succeed somewhat here, but we've been hearing similar lines of bull for so long that they have no credibility here until they actually prove it.

    Microsoft and other vendors should stop whining about the messengers, and should start shipping products with default configurations and initial postures that are likely to withstand existing and future attacks. Default configurations are enemy number one, not public vulnerability research. Let's see some proactive work being done instead of only reactive work. Microsoft has plenty of problems to fix in their own development processes before they worry about fixing the "problems" they feel the security community has.

  • by gotan ( 60103 ) on Wednesday October 17, 2001 @07:42PM (#2444159) Homepage
    The real problem is that all those security holes make their software look bad. Especially compared to other software. When he mentions that software makers are more aware of security and faster at putting out patches, he conveniently forgets to mention that Microsoft specifically was extremely reluctant to react to security flaws until they were publicized widely. He also neglects to mention that it's not only important that there is a patch, but also to make people aware of it. It is very true that beyond the complexity of "Hello World" there is rarely a piece of perfect software, but he addresses that statement to the wrong people. The security experts already know this, but the customers of Microsoft very obviously don't.

    Also it must be said that most of the damage the worms did was to the image of Microsoft. These worms showed the extent of vulnerable machines all over the world, but had there been no worms there would be even more vulnerable machines now, with backdoors open to anyone intelligent and motivated enough to write their own exploit. All those worms that draw so much publicity to the security flaws are just the tip of the iceberg. Someone really malicious will have the ability to sneak in through a hole without a ready script, and he won't do it with a worm that creates a lot of traffic, but silently install a backdoor and do whatever he set out to do.

    When calculating the damage a worm did, one always includes a complete system check for data integrity, backdoors, etc. But if the hole was there and had to be patched, who is to say there wasn't someone or something other than a well-known worm that came in, installed backdoors, and corrupted data? That person will probably do far more damage, since he probably chose that computer for a reason. Much damage is already done when the system had a hole and was attackable for some time, since that means that system security and integrity can no longer be guaranteed. Many worms merely make people aware of that fact.

    Microsoft could do far more for the security of their products by making people aware of the importance of patches, but probably that doesn't sit well with marketing.
  • Why not? (Score:3, Insightful)

    by CAIMLAS ( 41445 ) on Wednesday October 17, 2001 @07:42PM (#2444160)
    Hey, they want the security sites to leave exploits alone - so why not? If they want to blame the best source of solutions for their problems, let them. Watch the security sites disappear - or rather, stop supporting MS stuff. Then watch MS software go to hell as exploit after exploit rips it apart.
  • This is all bull (Score:5, Insightful)

    by Erore ( 8382 ) on Wednesday October 17, 2001 @07:52PM (#2444206)
    I have about 50 Microsoft NT servers, from NT 3.50 through Windows 2000, REGISTERED with Microsoft. They have my name, my address, my e-mail address, my telephone number.

    Never once did they contact me or send me a CD with security patches on it. Never did they send me an email to go to a website to download a fix.

    I was told, when I registered my product, that they would keep me informed. They have failed to do so.

    The recent exploits of IIS were from known problems that had previous patches. Many users did not patch their system. They did not know that they had to patch their system. Despite Microsoft knowing who the users of NT IIS were, they did not attempt to contact those users and let them know that patches were available.

    Not only that, until recently Microsoft made it very difficult to find security patches. Their website is large and complex, and items change location all the time. In the past five years finding patches for security fixes of NT systems has gone from extremely easy, to nearly impossible, to finally getting organized and easier again.

    Why is it that, after the outbreak of Code Red, it took days before information was available from a link on Microsoft's main page? Because it is bad marketing. Instead I have to go deeper to find that information. There isn't even a generic link for security from the main page.

    When you do get to their security page, you are told that Microsoft is taking the radical step of giving Security Tool Kits away for FREE!!! Amazing; you bloody well better give it to me for free. It's your buggy code that had the problem in the first place. I'm a registered user, and I haven't received a kit yet.

    Microsoft is finally starting to take some initiative with this security thing. But they shouldn't run around pointing fingers at anyone other than themselves.
  • by slashkitty ( 21637 ) on Wednesday October 17, 2001 @08:02PM (#2444280) Homepage
    I've tracked down a number of security bugs. After verifying their existence, I immediately contact the company(ies) involved. Guess what? They don't all respond. Some of the problems I have found are with browser software; it was only when I made them public, with sample code, that I was even contacted by the companies.

    In my most recent finds, not made public yet, there are a number of gross privacy bugs in some pretty major websites (similar to the Hotmail problems, but with banking, news, and e-commerce sites). Well, besides the difficulty in even finding someone in their organization to tell about the problem, once told they usually do nothing. So, the question I have is: what do I do now? Leave the banking site wide open, or make the exploit public to get something done?

  • by Jettra ( 529165 ) on Wednesday October 17, 2001 @08:41PM (#2444469) Homepage
    There are a few good points in this article. It's true that most people who use security holes to exploit insecure services are simply following a set of steps. This can be shown by simply moving exploited services over to non-standard ports (but don't). In these cases security by obscurity is somewhat effective. However, this will not prevent experienced users from deviant behaviour.

    Providing information about a security hole or bug to the company is a nice thought, but does not apply to open source. The code is maintained and updated by the Internet community as a whole. So bugs must be presented openly in order to get noted and fixed.

    Besides, 'hacker' groups with malicious intent will share information privately without the company's knowledge. Instead, making this information public as soon as possible is good for everyone. It's good for the company because they will know about the vulnerability. It's good for the customer because they can see the unresolved security issues specific to the application and decide whether to shut it down or switch to a more stable solution (or better yet, not buy into it in the first place). Also, having an outstanding security issue puts pressure on the company providing proprietary solutions to fix their sloppy mess.

    Perhaps Microsoft should consider reducing the feature set within IIS in order to provide a product that they can properly maintain. Otherwise, they might want to try moving IIS to open source. Seems to work well for Apache.

    Ciao

  • by Wavicle ( 181176 ) on Wednesday October 17, 2001 @08:48PM (#2444510)
    "Security vulnerabilities are here to stay."

    That isn't the attitude I'd want someone providing my software to take.

  • by gizmo_mathboy ( 43426 ) on Wednesday October 17, 2001 @09:20PM (#2444643)
    It appears to me that Mr. Culp has misunderstood the purpose of the scientific method, the goal of which is to allow other researchers the ability to reproduce one's test/bug/experiment.

    Programmers use code to share their experiments because it is the simplest, best, most consistent way to do so. Asking security and programming experts not to share "blueprints" is like asking toxicologists not to share the chemical formulas for the compounds they're researching.

    Mr. Culp needs to take a vacation away from the stress of his job and bone up on how to systematically approach problem solving and the sharing of information used to produce repeatable experiments/tests/exploits.
  • by apropos ( 12176 ) on Wednesday October 17, 2001 @09:35PM (#2444707) Homepage
    Here's my theory, for what it's worth:

    1. If the *type* of exploit is known, and the *point of communication* (i.e., socket) is known, then an "expert" system can eventually be built that will make exploit creation point and click simple.

    2. Any random piece of information can be disseminated to an unlimited number of points on the internet in much much less than 24 hours if there is any semi-organized method of sharing the information. A web site, mailing list, private FTP server, whatever - the internet was created to share information quickly. Code Red shows that even unwilling participants can be used to spread information (or any other payload) to saturation point in less than a day.

    3. Even if only one programmer on the internet is creating exploits, there is a system for sharing this information. This is what has occurred with the "zero day" cracks of games that are shared on IRC, and it is very much a formalized and highly popular system. The only difference is that instead of being freely available to Black Hats and White Hats (like a public mailing list), it's only available as information in trade, and is usually traded for something illegal. This creates a nifty little power hierarchy where fifteen-year-olds become something like Mafia dons.

    4. Exploit code proves that there is a hole. This proof cannot be denied by J Random Marketing Department.

    5. A published exploit allows system admins to test whether a published "fix" actually works or not. Even if every admin doesn't do it, a couple will, and if there's a problem it will be announced on security lists (again, spreading at the speed of light).

    Conclusion:

    Because there will always be groups on the internet willing to share this information, security through obscurity will never work.

    As an example, one could interview various games companies in the US and find the mean time between release of a copy-protected piece of software and the crack to bypass the protection. I call this Mean Time Before Crack (MTBC), and it's similar to the open source concept of Mean Time Between Itches (MTBI - the amount of time between the public discussion of a software idea and its open-source implementation) ;-).
  • by Anonymous Coward on Wednesday October 17, 2001 @10:22PM (#2444860)
    Buffer overflows are a programmer mistake. Since a programmer generally knows at least what language he is coding in (duh), it is his responsibility to learn that language. Mistakes do happen, but don't try to blame strcpy for not telling the programmer how to program.
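
    For what it's worth, a minimal sketch of the mistake being described (the function, buffer size, and input source are invented for illustration):

        #include <string.h>

        /* 'name' may be attacker-controlled, e.g. read off the network. */
        void greet(const char *name)
        {
            char buf[16];
            strcpy(buf, name);  /* BUG: strcpy() has no idea how big buf is;
                                   anything longer than 15 chars plus the NUL
                                   terminator overruns the stack buffer */
        }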
  • by cornice ( 9801 ) on Wednesday October 17, 2001 @11:27PM (#2445051)
    It's silent for years...

    Many diseases are deadly if untreated. Often the scariest ones are those that kill silently over time. This is what MS is asking for. Security holes can be an obvious pain or a silent killer. If exploits are not publicized and fixed, then the hole remains available to those who know the most and can potentially do the most harm. Once again, this is a plea for a solution that will benefit MS and nobody else.
  • by digit ( 3825 ) on Wednesday October 17, 2001 @11:57PM (#2445114) Journal
    If you did your job and took those exploits and fixed the problems, there would not be a problem. Do your job instead of sticking your head in the sand!!
  • Re:Right (Score:2, Insightful)

    by Your Login Here ( 238436 ) on Thursday October 18, 2001 @01:50AM (#2445451)

    The I Love You virus spreads due to the Autorun code, so rather than give the customer the OPTION to say 'yes, run this', their bug fix is to outright disable it. Some fix, considering that autorun feature was touted by Microsoft as being an ideal way for something or other. Never did quite understand it.

    Well, let me put this decision in perspective. As 'I Love You.vbs' proved, there are a -lot- of Outlook users out there. Now, take that number and multiply it by the probability of a typical office worker clicking on 'Yes, run the unknown code' when the file is named 'I love you!', 'Important!', or something similarly fluffy.

    All my experience with typical computer users tells me that you would still have a major network problem on your hands.

    Personally I think a better solution would be to prevent macros from sending emails without confirmation. However, MS probably implemented mailing lists, or some other feature, by using that function. I heard in the newer version they plan to ask before letting a macro access your Address Book, which seems like a good idea.
  • by Znork ( 31774 ) on Thursday October 18, 2001 @02:57AM (#2445552)
    Of course, for a long time it's been suggested that people use safer alternatives like strncpy and snprintf. Which is why it's a good idea to regularly grep through your code to make sure you didn't slip up by mistake somewhere.
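
    A sketch of the safer idioms mentioned above, with the usual caveat that strncpy() does not NUL-terminate on its own when the source fills the buffer (names and sizes again invented for illustration):

        #include <stdio.h>
        #include <string.h>

        void greet_safe(const char *name)
        {
            char buf[16];

            strncpy(buf, name, sizeof(buf) - 1);  /* bounded copy...           */
            buf[sizeof(buf) - 1] = '\0';          /* ...but terminate yourself */

            /* snprintf() is bounded AND always NUL-terminates (for size > 0) */
            snprintf(buf, sizeof(buf), "%s", name);
        }

    And for the regular sweep, something as crude as grep -n 'strcpy\|sprintf\|gets' *.c is one quick way to spot the slip-ups.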
  • by Delor ( 302500 ) on Thursday October 18, 2001 @02:58AM (#2445555)


    Just a thought. Without verifiable exploit code, what's to stop bogus reports?

    Bet everyone would get real sick of responding to fictitious security holes every time someone got pissed at Microsoft and started a rumor about an exploit in Microsoft's newest toy. (Of course there are so few people that engage in malicious Microsoft bashing that this would be a tiny problem anyway.)

    D
  • by Jeppe Salvesen ( 101622 ) on Thursday October 18, 2001 @04:39AM (#2445673)
    Microsoft sits on registration data about which users have which product, and that registration data contains contact information.

    When you register a Microsoft product, they thank you by sending you advertisement material. No critical upgrades or anything to that effect. AOL sends off CD-ROMs to everybody in America - for free, hoping a few will try out their service. Microsoft customers have PAID for their product, but Microsoft does not provide them with even notifications of upgrades/updates.

    It's a sad, sad world.
  • by jlemmerer ( 242376 ) <xcom123@SLACKWAREyahoo.com minus distro> on Thursday October 18, 2001 @05:12AM (#2445713) Homepage
    ... saying if you don't publish blueprints, nobody will know where the door is. Microsoft should be glad that all these reports are out, for this is a way they can react to them. It is no good putting one's head in the sand. The programmers at Redmond - the ones who left the doors open in the first place - should just read the reports and fix the holes. Maybe this would contribute to the "Win2000 is secure" image Microsoft wants to build up in public opinion. If you don't publish the exploits, end-user-type people will think "Hey, M$ software is more secure than all others, because there are no exploits found on the net," trust in the security M$ offers, and wonder why their computer is periodically hacked every second week by somebody who has the knowledge but doesn't publish it.
  • by maxpublic ( 450413 ) on Thursday October 18, 2001 @06:51AM (#2445830) Homepage
    "Information anarchy"? And yet no post I've seen so far challenges the terminology as being inherently useless PR. Microsoft is damned good at dreaming up push-button catch-phrases that become subconciously accepted even by it's detractors as viable descriptors. It's the same sort of tactic that convinces people that EULA's are *actual laws*, when they're nothing of the sort - insofar as I know no court of law has even supported them as valid contractual agreements.

    The phrase "information anarchy" has no coherent meaning other than that defined through MS's statement, and even there it seems to mean "any public publication of security weaknesses in MS products". Yet MS pushes the phrase over and over again in the attempt to link security reports with the word "anarchy" in the hopes that the average idiot will associate publication of flaws in MS software with irresponsible, undemocratic behavior.

    Most of us geeks catch this sort of thing right off (e.g., "viral software") but notice - this one slipped under the wire with nary a comment that I could see.

    One of MS's greatest weapons is the introduction of language which precludes one mindset and reinforces another - social programming at its finest. Accepting the phrase "information anarchy" as valid substantiates the idea that such a thing actually exists, even if you argue that the security reports don't constitute an example of this nebulous "information anarchy".

    There's no such animal. It's a buzzword with zero meaning other than a poor attempt to lay the blame for MS security holes on people other than those employed at MS.

    Perhaps we should retaliate with terminology of our own that's intimately associated with a Microsoft argument or product. Any ideas (other than the "Microsoft worms" phrase of some days back)?

    Max
  • Re:MS (Score:3, Insightful)

    by dup_account ( 469516 ) on Thursday October 18, 2001 @11:09AM (#2446748)
    Here's an interesting story.... We recently had a new house built. During the construction, locks were installed that have a pin that enables the builder to use a common key for all the locks in all the houses he was working on. We didn't know this, though..... After the builder was done, we had a locksmith come in and rekey all the locks (the builder was a _______ whom we didn't trust). The locksmith pointed out to us that the builder had left this pin in, making our house very vounerable (when is /. going to add a spell check?) to break-ins.

    I would relate M$ to the builder, and the locksmith to the security boards.
  • by iceT ( 68610 ) on Thursday October 18, 2001 @11:19AM (#2446806)
    Not publishing the details of a virus does NOT stop the virus from existing. The "I Love You" virus didn't have a post mortem until AFTER it took down entire corporations' networks. Not publishing the details of viruses will NOT stop other hackers from getting their hands on the virus code and making modifications to it.

    Culp is assuming that the only people smart enough to decipher the viruses are the security people themselves, and THAT is the false assumption that invalidates the theory behind the 'essay'...
