Microsoft Blames the Messengers
Roger writes: "In an essay published on microsoft.com, Scott Culp, Manager of the Microsoft Security Response Center, calls on security experts to 'end information anarchy' and stop releasing sample code that exploits security holes in Windows and other operating systems. 'It's high time the security community stopped providing the blueprints for building these weapons,' Culp writes in the essay. 'And it's high time that computer users insisted that the security community live up to its obligation to protect them.' See the story on CNET News.com."
When you point the finger of blame... (Score:2, Insightful)
...three fingers point back at you. I think Culp and Microsoft should not forget this.
Moose
New Slogan (Score:3, Insightful)
Y'know, if they didn't have so many bugs, there wouldn't be anything to release, and therefore no 'weapons' to build... it's kinda like an army making a tank with wooden components inside, then getting pissy when the other army brings flamethrowers and napalm...
Still leaking? (Score:4, Insightful)
Well, it IS a two way street. (Score:5, Insightful)
The system relies on the reaction time of the programmers.. can they supply a patch before the crackers supply an exploit?
Those of us in the *nix world seem to do pretty well.. for all sorts of reasons we don't need to go into here. Windows? Heh.. it can take months for something to get patched up. No wonder he's mad that these 'blueprints' are being provided. It's simply an extension of the security-through-obscurity mode of thought.
whose obligation to protect? (Score:5, Insightful)
I'm not sure whether anyone, other than law-enforcement agents, is obligated to protect computer users, but if anyone is, surely the people who produce the software are more obligated to prevent or solve these problems than are those who merely report on them.
Is this, along with the U.S. government's warning to news agencies to be careful what they broadcast, a sign of a new trend?
We've seen what they propose (Score:4, Insightful)
Several times we've seen security experts say to a large company, "Hey! There's a nasty exploit here!" The large company says it'll fix it and then ignores the problem. Only when the exploit is publicized do companies like Microsoft actually make the effort to fix the code. Releasing the information is the only way. Perhaps out of courtesy the security community could give the company with the bug a week's notice.
Don't they already provide a grace period? (Score:5, Insightful)
I thought most security exploits released by the major groups are passed through MS first, giving them time to provide a patch before the details are published. So why are they so upset? It's neither MS nor the security experts who are at fault when machines go unpatched. If anything, publishing exploits gives admins an incentive to stay on top of security holes, instead of letting them quietly remain secret. None of the major exploits lately (Code Red, Nimda, etc.) have used unpublished vulnerabilities. So this shows a failing in MS's procedures for keeping admins informed, and a failing by admins in keeping on top of their networks. It's such a non-issue, I think MS just wants to preempt lawsuits or some other such silliness.
I can see what's going to happen... (Score:4, Insightful)
God damn... when did I get so cynical? Oh yeah, after reboot #3 of NT 4.0 today. {grumble grumble grumble}
Re:They Have a Point (Score:2, Insightful)
Full disclosure? (Score:5, Insightful)
Hmm, this has always seemed to be a hot discussion...I'm all for full disclosure, but is it really necessary for people to include exploit code?
One argument is that it can help people to test their systems for vulnerabilities, but I think that exploit code is not strictly necessary for this. People who really need it to test systems are in a position where they should have the capability or the resources to generate a "test script" for themselves, once given an accurate description of the vulnerability.
Making exploit code freely available arguably creates more opportunity for the low-life script kiddies who often don't appreciate exactly what they are doing, or the mechanics of the exploits they are using. Why should we make it easy for those guys?
My opinion on this element of full disclosure still isn't settled though, and I am fully prepared to be convinced... :)
-- Pete.
Re:history (Score:2, Insightful)
just my $.02
they really should stop giving actual code (Score:5, Insightful)
If security experts took the time to make exploit code an exercise for the reader, we might someday end up with skript kiddies who can even write their own hardware drivers for Linux. They might even learn to write and discover new exploits for Windows without the help of security experts.
Microsoft got it on the nose this time
OK, this is Slashdot, but the guy has a point (Score:2, Insightful)
At least the guy doesn't ignore that there are problems:
I know I'm preaching to the anti-choir here, but he has a point.
linux exploits? (Score:5, Insightful)
Typical response from an overworked manager. (Score:4, Insightful)
So what does he do? He posts an essay which is basically a reflection of his anxiety. However, he misses two very key points on why this information anarchy is a good thing.
* Patches for exploitable holes in popular software tend to come out very quickly, because the company has to save face and perhaps protect against liability suits.
* A necessary fear is instilled in companies to put software through a security audit before it goes into production.
I hope this guy takes a vacation somewhere on the beach to reflect on his thoughts.
Valid Uses of Exploits (Score:3, Insightful)
Unfortunately, it's simply untrue that there aren't positive reasons for releasing exploits.
I can think of several: testing of machines (risky, but useful), understanding of the vulnerability (CERT advisories are pretty much useless for this), and research.
The most important of these (IMHO) is the understanding of the vulnerabilities. In the past we didn't even talk about vulnerabilities in the open, and we have the abhorrent state of affairs we have today. Security is rarely taught in computer science and engineering curricula, and when it is, it's treated as a separate set of classes. When I started working in infosec, I had no idea how the exploits worked or what the real coding vulnerabilities were. Without the release of exploits, I probably still wouldn't.
Re:They Have a Point (Score:5, Insightful)
By not giving exploit scripts you allow sysadmins to become lazy. They figure, "Nah, I'll just wait until an exploit comes out before I patch it," while the underground hax0r scene is already searching out your box.
Re:They Have a Point (Score:5, Insightful)
What makes you think that not having it displayed all over the web will make it any less available to the people who want to do harm?
Black hats are going to get ahold of the exploit, even if the source code to it is not published on incidents.org or bugtraq. All that not publishing it there does is provide a false sense of security.
Publishing the details in a high-visibility location does more good than harm: the script kiddiez are going to get these exploits anyway when they download them from their favourite r00t kit location. Let's not pretend that withholding the same exploits from the general public really makes things much safer.
This isn't a statement for readers of /. (Score:2, Insightful)
Re:They Have a Point (Score:3, Insightful)
The problem with not publishing details of the exploit is that Microsoft and other companies will look at it and say, "This doesn't look like that bad of a problem, and besides, nobody will find it easily. No sense in making a patch for it. The potential abuse of this hole is negligible."
So then we end up being at the mercy of the Black Hats to quietly spread the information among themselves.
No, keeping things secret simply won't help.
Exploit escrow system (Score:2, Insightful)
Re:MS (Score:2, Insightful)
What if security companies do not bow to MS's wishes? Will MS use the DMCA to bust them?
an analogy... (Score:2, Insightful)
Not a bad analogy: if you want to keep something safe and secure, you use a decent lock. Having the info about lock picking gives you the knowledge to do so, and allows you to know just how secure you are.
The same could be said about software... and if you want a good lock, you educate yourself. MS makes bad locks... those locks can be fixed, but it requires the knowledge of the lock picking manual to do so.
Don't get me wrong, Linux, BSD, etc. can be a weak lock too... but with OSS, not only do you have the manual, but you can disassemble and rebuild the lock on your own!
Microsoft FUD (Score:3, Insightful)
MS had it too easy for too long regarding security issues, especially with the news media reporting Outlook vulnerabilities not as what they really are, design flaws in Outlook, but as "e-mail viruses."
"Behind every great fortune there is a crime."
- Honoré de Balzac
"You hear a lot about Bill Gates, don't you, whose net worth in January of the year 2000 was equivalent to the combined net worth of the hundred and twenty million poorest Americans, which says something, not only about the software imitator from Redmond, Washington, it says something about millions of workers who work year after year, decade after decade, and are essentially broke."
- Ralph Nader
Re:They Have a Point (Score:1, Insightful)
I'm just glad I'm a developer and not a sysadmin having to apply all those patches
NEW MICROSOFT JARGON ALERT: (Score:4, Insightful)
Expect to see "information anarchy" bandied about frequently.
Re:MS (Score:3, Insightful)
One would argue that a decent MS admin would remember to keep the door locked.
obligation of the security community? (Score:2, Insightful)
What obligation is he talking about? For a company that epitomizes the big-money capitalist position, that's the most blatantly socialist comment I can imagine. Users collectively pay billions of dollars to software manufacturers each year for endless upgrades, yet he thinks a loosely knit group of professionals working in their free time somehow owes that same user base the right to be protected? That's bizarre.
Further, the "information anarchy" thing sounds way too much like the "intellectual property virus" tagline they keep using for the GPL. It's a catchy management-speak phrase that sounds nasty and has little real meaning. It's easy to see how they can set the stage to condemn the whole open source community, with all its open and anarchic ways that don't protect innocent users.
Re:It is a good point (Score:2, Insightful)
Think about how bad things would be if nothing got fixed, because the big guys never took security bugs seriously. Consider UNIX. What would UNIX be like today if all of those security holes had never been reported and fixed? It would be like the Swiss cheese it was twenty years ago. Fortunately, UNIX has had its major holes plugged, and the documentation of those holes has made all of us better administrators and programmers.
MS crying terrorist (Score:2, Insightful)
"It's high time the security community stopped providing the blueprints for building these weapons,"
It's high time Microsoft stopped using inflammatory, militaristic-sounding rhetoric at a time of national crisis. There are too many actual terrorists about for Microsoft to be irresponsibly crying "terrorist."
Re:Full disclosure? (Score:5, Insightful)
It's what L0pht prided themselves on for years, after having MS dismiss their whitepapers as improbable, theoretical, impossible, etc.
Some other choice quotes : (Score:5, Insightful)
All three goals? There's more on this later - but assuming he's right about the rest of the essay, you'd expect there to be some pressure to address the vulnerabilities, would there not? He even goes further, saying that published exploits are antithetical to getting patches out. Brilliant logic.
Providing a recipe for exploiting a vulnerability doesn't aid administrators in protecting their networks. In the vast majority of cases, the only way to protect against a security vulnerability is to apply a fix that changes the system behavior and eliminates the vulnerability; in other cases, systems can be protected through administrative procedures. But regardless of whether the remediation takes the form of a patch or a workaround, an administrator doesn't need to know how a vulnerability works in order to understand how to protect against it, any more than a person needs to know how to cause a headache in order to take an aspirin.
I love this analogy. It actually works. For example - if I knew that the cause of my headaches was an allergy to certain foods, I could avoid those foods, and not have to take aspirin. If I know how an exploit works, I can prevent it with my own tools - firewall, etc. and not have to worry too much about the dubious patches.
Likewise, if information anarchy is intended to spur users into defending their systems, the worms themselves conclusively show that it fails to do this. Long before the worms were built, vendors had delivered security patches that eliminated the vulnerabilities.
Here he's not talking about e-mail "viruses", but worms. Specifically, worms targeting services people did not know they had on their systems. There was plenty of buzz about Code Red before most people got it, and the patch was applied to thousands of computers as people got worried. I'm not an advocate of having people upgrade through fear, but this still disproves his point.
Now - here's his claim that published exploits take the pressure off of vendors to publish fixes:
Finally, information anarchy threatens to undo much of the progress made in recent years with regard to encouraging vendors to openly address security vulnerabilities. At the end of the day, a vendor's paramount responsibility is to its customers, not to a self-described security community. If openly addressing vulnerabilities inevitably leads to those vulnerabilities being exploited, vendors will have no choice but to find other ways to protect their customers.
Crap... I'm trying to find a problem with the logic, but I can't actually understand the argument - anyone? What other ways are there for vendors to protect their customers than putting out fixes?
Anyway, that said, I'd just like to express my condolences to the author. Did you see his title? "Manager of the Microsoft Security Response Center." Poor guy is probably blamed for half the bugs in code he's never heard of. Can't blame him for venting a little. I just wouldn't have done it so publicly.
Isn't it ironic... (Score:3, Insightful)
Re:Full disclosure? (Score:3, Insightful)
I'm all for full disclosure, but is it really necessary for people to include exploit code?
some things are easiest to communicate with sample code. if you have the original source code, you could say "look, this function is overrunning this buffer," but in its absence it's probably easiest to demonstrate the exact nature of a security flaw using exploit code. and even when you do have the original source, having exploit code to look at couldn't hurt in fixing the problem.
my personal feeling on this is that exploit code should first be sent to the maintainer of the original program, with a deadline for the release of a patch. there should also be a public release describing the problem in very generic terms. after the deadline, release the exploit, even if the patch isn't out yet. this gives developers time to fix the problem without putting the exploit in the hands of script kiddies, and it puts the developers under a deadline to get it fixed. granted, it's entirely possible for the kiddies to already have code to exploit it, but why give them the tools before it's necessary?
Two words (Score:3, Insightful)
Now burn, you troll
Re:MS (Score:2, Insightful)
Poor analogy. More like:
"If someone breaks into your house because you had a poorly made lock, it's not the lockmaker's fault, but the fault of whoever it was that told the thief about the faulty lock."
It's not as cut and dried as some of you slashdotters paint it to be. Some might even say it's closer to this:
"If someone breaks into your house because you had a lock that could be bypassed with a special lockpick, it's not the lockmaker's fault, but the fault of whoever it was that gave you the special lockpick"
Re:Full disclosure? (Score:2, Insightful)
Even for proprietary software, you want the bug fix to use the faster open-source development model for as long as possible, because most black hats have no qualms about open-sourcing their exploits. Hiding the exploit code actually hurts the developers more, especially if their manager only puts one or two programmers on the bug fix because s/he thinks there's no exploit in the wild.
Re:Some other choice quotes : (Score:2, Insightful)
Problem is, Microsoft's real customers are its stockholders, not the folks who buy their software (either OEMs or end-users).
MS knows full well where its real responsibilities lie, and acts accordingly.
Re:Ya, see.. we do.. (Score:5, Insightful)
It's very useful. For example, you can scan your network for machines running given servers, then launch exploits against all those that are running, as a double check to find unpatched servers. Since MS installs servers by default on damn near everything*, without advising the installer, this is the ONLY way to be sure you're not running unpatched servers. My organization found numerous vulnerable machines this way, even though we thought we had this nailed down.
*(Examples: Visio 2000 installs MSDE, a form of SQL Server, vulnerable. CiscoWorks 4.2 (getting old now) installs IIS, vulnerable.)
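A scan like the one described above can be sketched in a few lines. The port list and address range here are placeholders (1433 for MSDE/SQL Server, 80 for IIS; RFC 5737 TEST-NET addresses), and a plain TCP connect only proves a service is listening, so a real audit would still follow up with the actual exploit against a test machine:

```python
import socket

# Hypothetical port list: 1433 = MSDE/SQL Server, 80 = IIS. A plain TCP
# connect only proves a service is *listening*; a real audit would follow
# up with the actual exploit (against a test machine) to confirm the
# server is unpatched.
PORTS = {1433: "MSDE/SQL Server", 80: "IIS"}

def find_listening(hosts, ports=PORTS, timeout=0.5):
    """Return (host, port, service) for every port that accepts a TCP connect."""
    found = []
    for host in hosts:
        for port, service in ports.items():
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    found.append((host, port, service))
            except OSError:
                pass  # closed, filtered, or host down
    return found

# Placeholder addresses (RFC 5737 TEST-NET); substitute your own range.
for host, port, service in find_listening(["192.0.2.10", "192.0.2.11"]):
    print(f"{host}:{port} is running {service}")
```

In practice you'd feed this your whole subnet and cross-check the hits against your list of machines believed patched.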
Re:Linus better do some complainin'... (Score:2, Insightful)
Re:Are you serious?! (Score:5, Insightful)
Exploit code and exact details let you rig together protection with a firewall, or turn off an optional service, until a suitable patch is available.
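A sketch of verifying that kind of stopgap (host and port here are placeholders): once the affected service is firewalled off or disabled, a simple connect test from another machine confirms the hole is no longer reachable, even before the vendor patch lands.

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, filtered, or host down
        return False

# After blocking the vulnerable service at the firewall (or turning it
# off), this should report False for the affected port. The address and
# port are placeholders for your own service.
# print(port_reachable("192.0.2.10", 80))
```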
Inaccurate view of exploits (Score:3, Insightful)
The Cisco 675 DSL router/modem. This device is in very widespread use in consumer home and SOHO environments. Other Ciscos in that line were affected by a particular issue that causes the router to hang completely until power cycled. Cisco was first notified about this January 10, 2000 (no typo there, 01-10-00). A very easy-to-reproduce situation was shown to cause this. After 11 months of waiting and two notifications to Cisco, the notifier had given up on Cisco doing The Right Thing (c), and notified BugTraq about the problem, in this [securityfocus.com] post, Nov 28th, 2000. Users from around the world tested and verified the issue. Want to know what happened? Nothing. Not a peep from Cisco about this, until recently. The DoS vulnerability in the Cisco was never acknowledged by Cisco, and still isn't admitted. However, a notification of a DoS vulnerability was finally published by Cisco here [securityfocus.com], 8-24-2001. Nineteen months after being notified. And the entire reason for it wasn't the skewed-HTTP-request vulnerability mentioned, but simply the router's inability to handle multiple HTTP connections. Why? Code Red. The Code Red worm was banging on port 80 so hard that the routers would lock up hard and die until reset. Many thousands of DSL customers were affected by this, and IMHO, a redux of the HTTP code that should have been done over a year and a half before would have prevented the entire nightmare of Code Red issues for owners of the Cisco 675 (their systems are another story, however).
Checking other 'exploit code' on the BugTraq list should show that the people who create it are responsible, usually doing no more than running a 'whoami' in the case of elevated privileges. They don't arm 'script kiddiez'; they write the code themselves, and the proof that a hole is exploitable is all someone needs to write their own. This is not a bad thing, this is a good thing.
It is general policy on BugTraq that companies be notified and given sufficient time to resolve issues, usually 3 months or so. If that lapses, it is the infosec engineer's responsibility to post the exploit for the world. The company won't listen to the voice of one competent person, but they will listen when their entire customer base gets proof that the company shirked its responsibility to protect its customers.
Toodles
Re:It is a good point (Score:2, Insightful)
In a word, no.
Here's my response to people who feel the way you do:
Without publicly available exploits, how does a system administrator really know that the vendor-supplied patch actually fixed the hole?
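One way to make that check concrete is to rerun the same probe before and after applying the patch, and trust the fix only if the observed behaviour actually changed. A minimal sketch, where the probe, the banner format, and the version cutoff are all invented for illustration; in practice the probe would be a benign variant of the published exploit, run against a test machine, never a production box:

```python
# Toy stand-in for a vulnerability probe. The banner format and the
# "fixed in 5.1" rule are both made up for illustration; a real probe
# would exercise the flawed behaviour itself rather than trust a banner.
def probe_vulnerable(banner: str) -> bool:
    """Pretend that versions before 5.1 exhibit the flawed behaviour."""
    version = tuple(int(x) for x in banner.split("/")[1].split("."))
    return version < (5, 1)

def patch_verified(before_banner: str, after_banner: str) -> bool:
    """Trust a patch only if the probe fired before it and not after."""
    return probe_vulnerable(before_banner) and not probe_vulnerable(after_banner)

print(patch_verified("srv/5.0.2", "srv/5.1.0"))  # → True: behaviour changed
```

The point of the before/after pair is that a probe that never fired proves nothing: maybe the hole is closed, or maybe your probe is just broken.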
This discussion comes up every so often on bugtraq, and it's quickly shown that the people who think this way either have something to hide, or haven't really thought things through.
The best one was shortly after Code Red, when some self-described "security consultant" posted a letter criticizing eEye for publishing the advisory and sample code that described the hole it used.
However, there was no response from him when it was pointed out that the Code Red virus was not based, in any way, on the eEye advisory! (Disassembling the code shows that it came from someone else who had discovered the hole independently of eEye)
Never before had I seen the anti-disclosure argument used so well to contradict itself. (Every argument as to why you shouldn't disclose suddenly became an argument as to why you should disclose.)
Bad community move (Score:2, Insightful)
1) don't tell anybody of the problem
2) If you must tell them, don't prove it
It worries me that some people in the security community already seem to accept that the proof should be hidden. I wonder how long it will take until they think the facts should be hidden too.
--red.
the geek fame factor (Score:2, Insightful)
"It's high time the security community stopped.... (Score:1, Insightful)
Yeah, it's high time that Microsoft itself stopped providing all the tools that hackers require to break into customer systems... tools like Internet Explorer and Windows and Word and Outlook...
EVERY Microsoft product provides all the ActiveX tools and security flaws that a hacker needs to break into company computers and compromise data, and it's about time THAT MICROSOFT STOPPED DISTRIBUTING DEFECTIVE CODE HARMFUL TO THE PUBLIC.
When are the government and military going to realize that Microsoft itself is the threat to national security? These products themselves are the problem and the tools. Needless to say, Microsoft refuses to improve its software engineering acumen and produce quality products... they just continue to vend out the same junk, rake in obscene amounts of money and issue the occasional manifesto which absolves them of all blame and responsibility.
-- Speaker
Re:It is a good point (Score:3, Insightful)
A) Skr1pt Kiddi3z who will enter your system and possibly scrawl "I love you rhonda!" on your front page.
B) Highly professional "black hat" who will enter your system, steal your new revolutionary prototype plans and provide them for a small charge to your competitor who will get it to market six months before you.
The current system allows lots of the first kind, but helps prevent many of the second. Microsoft's proposal will reverse this. High profile attacks generally do very little "real" damage, normally just some downtime or some ugly defacements. The attacks that you don't see, or in this case, WON'T EVER SEE, are the ones that will turn your business from market leader to bankruptcy auction...
Misleading Rhetoric (Score:1, Insightful)
This is not a call to stop discussing vulnerabilities. Instead, it is a call for security professionals to draw a line beyond which we recognize that we are simply putting other people at risk. By analogy, this isn't a call for people to give up freedom of speech; only that they stop yelling "fire" in a crowded movie house.
He purposely uses the canonical example of what type of speech is not considered good. He neglects to mention that in the example, there is supposed to be no fire. If however, there was really a fire, we all would want the person to yell out "Fire! over here. On the drapes next to the fourth balcony." Yelling "fire!" is more important, not less, when there is a crowd in the theater. More people are at risk. They deserve to know that.
When researchers post detailed descriptions of security holes and exploits, they are yelling "fire" where there is actually a fire. When PR doublespeak from Microsoft claims, as it has elsewhere, that "Open Source results in security vulnerabilities," they are the ones yelling "fire" where there is in fact none.
Information anarchy sounds good to me (Score:2, Insightful)
When it comes to running computers safely and productively, protecting the interests of the users (us), who should we trust, Microsoft or ourselves?
difficult problem, but this is not the solution (Score:4, Insightful)
The security community is so large and diverse that effective controls on exploit code and detailed vulnerability information are impossible. Who would determine who gets access? Microsoft? The US Government? The only practical method is the public one.
The enemy is not Microsoft's unwillingness to produce patches for their security vulnerabilities. They have actually proven to be one of the more cooperative vendors for recognizing flaws and producing and releasing patches, at least in recent times.
The enemy is not the public release of explicit vulnerability information, which is necessary for security research.
The enemy is also not the 13-year-old that breaks into computers. Fighting a war against 13-year-olds is a dumb war.
The enemy is the fact that software vendors like Microsoft have consistently chosen to place their customers at a ridiculous amount of risk through default configurations of their software, and the fact that a 13-year-old can break into thousands of computers with little effort or skill.
Why is it that default configurations of all major OSes (note that I'm not singling out Windows here, I'm saying all OSes) come with an absurd number of default services open? If the vast majority of customers do not need a service running, then it should not be running. How many Nimda infections were from people who had no idea they were running a web server in the first place?
Why is it that most prominent workstation and network client software ships with poor default configurations, security-wise? Do most users out there really need ActiveX or JavaScript in their email client? Not only no, but hell no.
Yes, vulnerabilities do occur in all software. I don't think anyone out there has any expectation that Microsoft or any other vendor will achieve perfection here. However, the issue is that the default posture leaves users prone not just to known vulnerabilities, but to ones that have yet to be discovered.
All software vendors (including but not limited to Microsoft) need to better examine the features of their products to discover potential points of attack. If the majority of users have no need for a particular feature that might be dangerous at some later point in time (e.g., mobile code capabilities, network services, modules to network services like IIS index server, etc.), then they should be disabled by default. Go ahead and make an easy-to-use checkbox for turning that kind of stuff on individually, but don't have it on by default.
Microsoft has recently stated that it is beginning a new initiative to ship its products in secure configurations. I believe they probably will succeed somewhat here, but we've been hearing similar lines of bull for so long that they have no credibility until they actually prove it.
Microsoft and other vendors should stop whining about the messengers, and should start shipping products with default configurations and initial postures that are likely to withstand existing and future attacks. Default configurations are enemy number one, not public vulnerability research. Let's see some proactive work being done instead of only reactive work. Microsoft has plenty of problems to fix in their own development processes before they worry about fixing the "problems" they feel the security community has.
The real problem: customers unaware of security (Score:4, Insightful)
It must also be said that most of the damage the worms did was to the image of Microsoft. These worms showed the extent of vulnerable machines all over the world; had there been no worms, there would be even more vulnerable machines now, with backdoors open to anyone intelligent and motivated enough to write their own exploit. All the worms that draw so much publicity to the security flaws are just the tip of the iceberg. Someone really malicious will have the ability to sneak in through a hole without a ready script, and he won't do it with a worm that creates a lot of traffic, but will silently install a backdoor and do whatever he set out to do.
When calculating the damage a worm did, the figure always includes a complete system check for data integrity, backdoors, etc. But if the hole was there and had to be patched, who is to say it wasn't someone or something other than a well-known worm that came in, installed backdoors and corrupted data? That person will probably do far more damage, since he probably chose that computer for a reason. Much of the damage is already done once a system has had a hole and been attackable for some time, since that means system security and integrity can no longer be guaranteed. Many worms merely make people aware of that fact.
Microsoft could do far more for the security of their products by making people aware of the importance of patches, but probably that doesn't sit well with marketing.
Why not? (Score:3, Insightful)
This is all bull (Score:5, Insightful)
Never once did they contact me or send me a CD with security patches on it. Never did they send me an email to go to a website to download a fix.
I was told, when I registered my product, that they would keep me informed. They have failed to do so.
The recent exploits of IIS were from known problems that had previous patches. Many users did not patch their system. They did not know that they had to patch their system. Despite Microsoft knowing who the users of NT IIS were, they did not attempt to contact those users and let them know that patches were available.
Not only that, until recently Microsoft made it very difficult to find security patches. Their website is large and complex, and items change location all the time. In the past five years finding patches for security fixes of NT systems has gone from extremely easy, to nearly impossible, to finally getting organized and easier again.
Why is it that, after the outbreak of Code Red, it took days before information was available from a link on Microsoft's main page? Because it is bad marketing. Instead I have to dig deeper to find that information. There isn't even a generic link for security from the main page.
When you do get to their security page, you are told that Microsoft is taking the radical step of giving Security Tool Kits away for FREE!!! Amazing. You bloody well better give it to me for free; it's your buggy code that had the problem in the first place. I'm a registered user, and I haven't received a kit yet.
Microsoft is finally starting to take some initiative with this security thing. But they shouldn't run around pointing fingers at anyone other than themselves.
If you don't make it public: My experience... (Score:5, Insightful)
In my most recent finds, not yet made public, there are a number of gross privacy bugs in some pretty major websites (similar to the Hotmail problems, but with banking, news and ecommerce sites). Well, besides the difficulty in even finding someone in their organization to tell about the problem, once told they usually do nothing. So the question I have is: what do I do now? Leave your banking site wide open, or make the exploit public to get something done?
Open Source not considered (Score:2, Insightful)
Providing information about a security hole or bug to the company is a nice thought, but it does not apply to open source. The code is maintained and updated by the Internet community as a whole, so bugs must be presented openly in order to get noticed and fixed.
Besides, 'hacker' groups with malicious intent will share information privately without the company's knowledge. Making this information public as soon as possible is good for everyone. It's good for the company, because they will know about the vulnerability. It's good for the customer, because they can see the unresolved security issues specific to the application and decide whether to shut it down or switch to a more stable solution (or better yet, not buy into it in the first place). Also, having an outstanding security issue puts pressure on the company providing proprietary solutions to fix their sloppy mess.
Perhaps Microsoft should consider reducing the feature set within IIS in order to provide a product that they can properly maintain. Otherwise, they might want to try moving IIS to open source. Seems to work well for Apache.
Ciao
My favorite quote from the essay (Score:5, Insightful)
That isn't the attitude I'd want someone providing my software to take.
Scientific Method Misunderstood (Score:4, Insightful)
Programmers use code to share their experiments because it is the simplest, best, most consistent way to do so. Asking security and programming experts not to share "blueprints" is like asking toxicologists not to share the chemical formulas for the compounds they're researching.
Mr. Culp needs to take a vacation away from the stress of his job and bone up on how to systematically approach problem solving and the sharing of information used to produce repeatable experiments/tests/exploits.
Secret exploits still travel at the speed of light (Score:2, Insightful)
1. If the *type* of exploit is known, and the *point of communication* (i.e., socket) is known, then an "expert" system can eventually be built that will make exploit creation point and click simple.
2. Any random piece of information can be disseminated to an unlimited number of points on the internet in much, much less than 24 hours if there is any semi-organized method of sharing the information. A web site, mailing list, private FTP server, whatever: the internet was created to share information quickly. Code Red shows that even unwilling participants can be used to spread information (or any other payload) to saturation point in less than a day.
3. Even if only one programmer on the internet is creating exploits, there is a system for sharing this information. This is what has occurred with the "zero day" cracks of games that are shared on IRC, and it is very much a formalized and highly popular system. The only difference is that instead of being freely available to Black Hats and White Hats alike (like a public mailing list), it's only available as information in trade, and is usually traded for something illegal. This creates a nifty little power hierarchy where fifteen-year-olds become something like the Mafia Dons.
4. Exploit code proves that there is a hole. This proof cannot be denied by J Random Marketing Department.
5. A published exploit allows system admins to test whether a published "fix" actually works or not. Even if every admin doesn't do it, a couple will, and if there's a problem it will be announced on security lists (again, spreading at the speed of light).
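Point 5 above can be made concrete. Below is a minimal, purely illustrative sketch: the host, port, and probe bytes are invented placeholders (not a real exploit), but it shows how an admin with a published probe can check whether a vendor fix actually rejects the malformed request.

```python
# Hypothetical sketch: re-running a published exploit probe against a patched
# server to verify the fix. The probe below is a made-up oversized request,
# not any real exploit.
import socket

PROBE = b"GET /" + b"A" * 4096 + b" HTTP/1.0\r\n\r\n"  # stand-in malformed request

def still_vulnerable(host: str, port: int = 80, timeout: float = 5.0) -> bool:
    """Return True if the service accepts the malformed probe instead of rejecting it."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(PROBE)
            reply = s.recv(1024)
    except OSError:
        # Connection refused, reset, or timed out: the probe was rejected.
        return False
    if not reply:
        # Connection closed with no response: also treated as rejected.
        return False
    # A patched server should answer the bad request with a 4xx error status.
    return not (reply.startswith(b"HTTP/1.1 4") or reply.startswith(b"HTTP/1.0 4"))
```

Run against the service before and after applying the vendor patch; if `still_vulnerable` flips to `False`, the fix does what it claims. This is exactly the verification step that secret-only disclosure takes away from admins.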
Conclusion:
Because there will always be groups on the internet willing to share this information, security through obscurity will never work.
As an example, one could interview various games companies in the US and find the mean time between the release of a copy-protected piece of software and the crack that bypasses the protection. I call this Mean Time Before Crack (MTBC), and it's similar to the open-source concept of Mean Time Between Itches (MTBI: the amount of time between the public discussion of a software idea and its open-source implementation).
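The MTBC metric is just an average of release-to-crack intervals. A tiny sketch, using made-up dates purely for illustration:

```python
# Hypothetical MTBC calculation: mean days from a title's release to the
# appearance of its crack. The date pairs are invented example data.
from datetime import date

releases_and_cracks = [
    (date(2001, 3, 1), date(2001, 3, 3)),    # cracked in 2 days
    (date(2001, 5, 10), date(2001, 5, 11)),  # cracked in 1 day
    (date(2001, 8, 20), date(2001, 8, 27)),  # cracked in 7 days
]

def mean_time_before_crack(pairs):
    """Average number of days between each (release, crack) pair."""
    deltas = [(crack - release).days for release, crack in pairs]
    return sum(deltas) / len(deltas)

print(mean_time_before_crack(releases_and_cracks))  # (2 + 1 + 7) / 3 days
```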
Re:Some other choice quotes : (Score:1, Insightful)
What makes AIDS so deadly? (Score:4, Insightful)
Many diseases are deadly if untreated. Often the scariest ones are those that kill silently over time. This is what MS is asking for. Security holes can be an obvious pain or a silent killer. If exploits are not made public and fixed, then the exploit will remain available to those who know the most and can potentially do the most harm. Once again, this is a plea for a solution that will benefit MS and nobody else.
microsoft cant do its job (Score:2, Insightful)
If Microsoft did its job and fixed the problems, there would not be a problem. Do your job instead of sticking your head in the sand!!
Re:Right (Score:2, Insightful)
Well, let me put this decision in perspective. As 'I Love You.vbs' proved, there are a -lot- of Outlook users out there. Now take that number and multiply it by the probability of a typical office worker clicking 'Yes, run the unknown code' when the file is named 'I love you!', 'Important!', or something similarly fluffy.
All my experience with typical computer users tells me that you would still have a major network problem on your hands.
Personally I think a better solution would be to prevent macros from sending emails without confirmation. However, MS probably implemented mailing lists, or some other feature, by using that function. I heard in the newer version they plan to ask before letting a macro access your Address Book, which seems like a good idea.
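The confirm-before-send idea above amounts to a gate between scripted code and the mail transport. A minimal sketch, with entirely invented names (this is not Outlook's actual API):

```python
# Hypothetical sketch of a confirm-before-send gate: mail initiated by a
# script/macro is queued only if a user-approval callback says yes.
# All function and parameter names here are invented for illustration.
def send_mail(to_addr, body, outbox, confirm, initiated_by_script=True):
    """Queue a message; script-initiated sends require explicit user approval."""
    if initiated_by_script and not confirm(f"A script wants to mail {to_addr}. Allow?"):
        return False  # denied: the macro cannot mass-mail the address book
    outbox.append((to_addr, body))
    return True
```

The design point is that a worm-style macro fires the prompt once per recipient, so even an inattentive user is likely to notice something is wrong before the whole address book has been mailed.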
Re:Some other choice quotes : (Score:3, Insightful)
Missed the point.... (Score:2, Insightful)
Just a thought: without verifiable exploit code, what's to stop bogus reports?
Bet everyone would get real sick of responding to fictitious security holes every time someone got pissed at Microsoft and started a rumor about an exploit in Microsoft's newest toy. (Of course, there are so few people who engage in malicious Microsoft bashing that this would be a tiny problem anyway.)
D
Hard is not the issue (Score:3, Insightful)
When you register a Microsoft product, they thank you by sending you advertising material. No critical upgrades or anything to that effect. AOL sends CD-ROMs to everybody in America for free, hoping a few will try out their service. Microsoft customers have PAID for their product, but Microsoft does not provide them with even notifications of upgrades/updates.
It's a sad, sad world.
sounds pretty much like... (Score:3, Insightful)
anyone notice the terminology (Score:4, Insightful)
The phrase "information anarchy" has no coherent meaning other than the one defined through MS's statement, and even there it seems to mean "any public disclosure of security weaknesses in MS products". Yet MS pushes the phrase over and over in an attempt to link security reports with the word "anarchy", in the hope that the average idiot will associate publication of flaws in MS software with irresponsible, undemocratic behavior.
Most of us geeks catch this sort of thing right off (e.g., "viral software") but notice - this one slipped under the wire with nary a comment that I could see.
One of MS's greatest weapons is the introduction of language that precludes one mindset and reinforces another: social programming at its finest. Accepting the phrase "information anarchy" as valid substantiates the idea that such a thing actually exists, even if you argue that the security reports don't constitute an example of this nebulous "information anarchy".
There's no such animal. It's a buzzword with zero meaning other than a poor attempt to lay the blame for MS security holes on people other than those employed at MS.
Perhaps we should retaliate with terminology of our own that's intimately associated with a Microsoft argument or product. Any ideas (other than the "Microsoft worms" phrase of some days back)?
Max
Re:MS (Score:3, Insightful)
I would relate M$ to the builder, and the locksmith to the security boards.
Nimda didn't need HELP in taking down networks (Score:3, Insightful)
Culp is assuming that the only people smart enough to decipher the viruses are the security people themselves, and THAT is the false assumption that invalidates the theory behind the 'essay'...